The present disclosure claims the priority and benefit of Chinese Patent Application No. 202110308608.0, filed on Mar. 23, 2021, entitled “METHOD AND APPARATUS FOR TRAINING SPEECH RECOGNITION MODEL, DEVICE AND STORAGE MEDIUM.” The disclosure of the above application is incorporated herein by reference in its entirety.
The present disclosure relates to the field of computer technologies, and particularly relates to the fields of speech recognition technologies, deep learning technologies, or the like, and more particularly to a method for training a speech recognition model, a device and a storage medium.
Automatic speech recognition (ASR) is a technology of converting a speech into a text. Different from a conventional ASR solution, in which a speech recognition task is divided into a plurality of subtasks, the input of an end-to-end speech recognition model is acoustic features and the output is directly a natural language text, which simplifies the model training process.
The end-to-end speech recognition model may be configured as a sequence-to-sequence (Seq2Seq) model that includes a decoder, and when the end-to-end speech recognition model is trained, the decoder may obtain a plurality of decoding results by means of beam search.
In the related art, when the decoder performs the beam search, its input includes only the output text at a previous moment and acoustic related information.
The present disclosure provides a method for training a speech recognition model, a device and a storage medium.
According to an embodiment of the present disclosure, there is provided a method for training a speech recognition model, including: obtaining a fusion probability of each of at least one candidate text corresponding to a speech based on an acoustic decoding model and a language model; selecting a preset number of the candidate texts based on the fusion probabilities of the candidate texts, and determining a predicted text based on the preset number of the candidate texts; and obtaining a loss function based on a standard text corresponding to the speech and the predicted text, and training the speech recognition model based on the loss function.
According to another embodiment of the present disclosure, there is provided an electronic device, including: at least one processor; and a memory connected with the at least one processor communicatively, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to carry out the method according to any one of the above-mentioned aspects.
According to another embodiment of the present disclosure, there is provided a non-transitory computer readable storage medium including computer instructions, which, when executed by a computer, cause the computer to carry out the method according to any one of the above-mentioned aspects.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
The drawings are used for better understanding the present solution and do not constitute a limitation of the present disclosure. In the drawings:
The following part will illustrate exemplary embodiments of the present disclosure with reference to the drawings, including various details of the embodiments of the present disclosure for a better understanding. The embodiments should be regarded only as exemplary ones. Therefore, those skilled in the art should appreciate that various changes or modifications can be made with respect to the embodiments described herein without departing from the scope and spirit of the present disclosure. Similarly, for clarity and conciseness, the descriptions of the known functions and structures are omitted in the descriptions below.
101: obtaining a fusion probability of each of at least one candidate text corresponding to a speech based on an acoustic decoding model and a language model.
102: selecting a preset number of one or more candidate texts based on the fusion probability of each of the at least one candidate text and determining a predicted text based on the preset number of the one or more candidate texts.
103: obtaining a loss function based on the predicted text and a standard text corresponding to the speech, and training the speech recognition model based on the loss function.
In the embodiment of the present disclosure, the speech recognition model, for example, is an end-to-end speech recognition model. The end-to-end speech recognition model, for example, is an attention-based sequence-to-sequence model.
As shown in
Output of the attention model may be understood to resemble output of an acoustic model in a conventional ASR solution, and thus the output c_u of the attention model may be understood as the acoustic related information. In the related art, the input of the decoder includes only the output text y_{u-1} at the previous moment and the acoustic related information c_u; correspondingly, the decoder in the related art may be understood to include only the acoustic decoding model.
In the embodiment of the present disclosure, referring to
After the fusion probability of each candidate text is obtained, assuming that the beam search width is N, N candidate texts may be selected in descending order of fusion probability. For example, for "(jin tian tian qi)", when the output character at the first moment is predicted, the candidate texts may include three different characters each pronounced "(jin)"; assuming that their fusion probabilities are 0.7, 0.2 and 0.1 respectively, if N=2, the two "(jin)" candidates with fusion probabilities 0.7 and 0.2 may be selected.
After the N candidate texts are selected, the predicted text may be determined based on the N candidate texts; for example, the N candidate texts may be directly used as the predicted texts, so that for the first moment the two selected "(jin)" candidates are used as the predicted texts.
After the predicted text is obtained, the loss function may be calculated based on the predicted text and the standard text. The standard text refers to the correct text of the speech and may be obtained by means of manual annotation; in the above example, the standard text is "(jin tian tian qi)." The loss function may be a loss function adopted in a discriminative training algorithm, and a specific form may be selected according to actual requirements, for example, a cross entropy function, or the like. In the embodiment of the present disclosure, the loss function is a word error rate function. The word error rate function is formulated as:
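A plausible reconstruction of this formula, based on the symbol definitions in the next paragraph and on the commonly used minimum word error rate formulation, is:

L_werr^{N-best}(x, y*) = Σ_{i=1}^{N} P̂(y_i|x) · (W(y_i, y*) − Ŵ), with P̂(y_i|x) = P(y_i|x) / Σ_{j=1}^{N} P(y_j|x) and Ŵ = (1/N) Σ_{i=1}^{N} W(y_i, y*),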
wherein L_werr^{N-best}(x, y*) is the loss function; y_i is the i-th predicted text, and N predicted texts are provided in total; y* is the standard text; W(y_i, y*) is the number of errors of the i-th predicted text, i.e., the number of errors of y_i with respect to y*; Ŵ is the average number of errors of the N predicted texts; P̂(y_i|x) is a normalized value of P(y_i|x); and P(y_i|x) is the distribution probability of the output character y_i, such as the fusion probability of y_i.
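As a minimal sketch only (not the claimed implementation), the loss could be computed in Python from the N predicted texts, their fusion probabilities and the standard text as follows; edit_distance is a hypothetical helper that returns the number of errors of a predicted text with respect to the standard text:

def word_error_rate_loss(predicted_texts, fusion_probs, standard_text, edit_distance):
    # P(y_i | x): unnormalized distribution probabilities of the N predicted texts
    total = sum(fusion_probs)
    normalized = [p / total for p in fusion_probs]                        # P̂(y_i | x)
    errors = [edit_distance(y, standard_text) for y in predicted_texts]   # W(y_i, y*)
    average_errors = sum(errors) / len(errors)                            # Ŵ
    # expected number of errors relative to the average over the N-best list
    return sum(p * (w - average_errors) for p, w in zip(normalized, errors))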
After the loss function is obtained, the speech recognition model may be trained based on the loss function; that is, the speech recognition model is initialized randomly or by loading a pre-trained model, and after the initialization, parameters of the speech recognition model are adjusted until the loss function converges. The speech recognition model obtained when the loss function converges is used as the finally trained speech recognition model. The speech recognition model includes the encoder, the attention model and the decoder, and the decoder includes the acoustic decoding model and the language model. The encoder, the acoustic decoding model and the language model may all be configured as deep neural network models, and specific model structures may be selected according to actual requirements; for example, the encoder, the acoustic decoding model and the language model may all be configured as recurrent neural network (RNN) models, and a multi-headed attention model may be used as the attention model.
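By way of a highly simplified sketch of one such training update (the model and loss here are placeholder stand-ins, not the architecture or loss of the disclosure), a PyTorch-style step could look like this:

import torch
import torch.nn as nn

# placeholder stand-in for the encoder/attention/decoder stack described above
speech_model = nn.LSTM(input_size=80, hidden_size=256, batch_first=True)
optimizer = torch.optim.Adam(speech_model.parameters(), lr=1e-4)

features = torch.randn(8, 200, 80)          # dummy batch of acoustic features
outputs, _ = speech_model(features)
loss = outputs.pow(2).mean()                # placeholder for the word error rate loss
optimizer.zero_grad()
loss.backward()
optimizer.step()                            # repeated over the data until the loss converges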
In this embodiment, the fusion probability of the candidate text is calculated based on the acoustic decoding model and the language model, and the candidate text is selected based on the fusion probability, such that reference may be made to both the acoustic related information and related information of the language model when the candidate text is selected, thereby improving recognition accuracy of the speech recognition model.
301: extracting acoustic features of a speech.
The speech is, for example, a speech corresponding to “(jin tian tian qi).”
The acoustic features, such as FilterBank features, may be extracted using various related arts.
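For illustration only, assuming the torchaudio library and a 16 kHz mono recording (the file name is hypothetical), 80-dimensional FilterBank features could be extracted roughly as follows:

import torchaudio

waveform, sample_rate = torchaudio.load("jin_tian_tian_qi.wav")   # hypothetical file
# 80-dimensional FilterBank features, 25 ms windows with a 10 ms shift
fbank = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=80,
    frame_length=25.0,
    frame_shift=10.0,
    sample_frequency=sample_rate,
)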
302: encoding the acoustic features using an encoder to obtain encoded features.
The encoder may be configured as an RNN model, such as a long short-term memory (LSTM) model.
303: performing an attention processing operation on the encoded features using the attention model to obtain features after the attention processing operation.
The attention model may adopt a model in various related arts, such as a multi-headed attention model.
In this embodiment, the acoustic features are extracted and encoded, and the attention processing operation is performed on the encoded features, such that semantic features may be obtained and decoded to obtain a predicted text, thereby training the speech recognition model based on the predicted text.
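As a rough sketch of steps 301 to 303 (the layer sizes are arbitrary and not those of the disclosure), an LSTM encoder followed by a multi-head attention operation could be wired up as follows:

import torch
import torch.nn as nn

feature_dim, hidden_dim = 80, 256
encoder = nn.LSTM(input_size=feature_dim, hidden_size=hidden_dim,
                  num_layers=3, batch_first=True)
attention = nn.MultiheadAttention(embed_dim=hidden_dim, num_heads=4, batch_first=True)

fbank = torch.randn(1, 200, feature_dim)      # (batch, frames, features): dummy acoustic features
encoded, _ = encoder(fbank)                   # 302: encoded features
query = torch.randn(1, 1, hidden_dim)         # illustrative decoder state at the current moment
c_u, _ = attention(query, encoded, encoded)   # 303: feature after the attention processing operation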
304: processing an output character at a previous moment and the feature after the attention processing operation using an acoustic decoding model to obtain a first probability corresponding to each of at least one candidate text corresponding to the speech.
The acoustic decoding model may be configured as an RNN model, such as a long short-term memory (LSTM) model.
For example, if the speech is a speech corresponding to "(jin tian tian qi)" and "(jin)" is to be predicted at the current moment, a processing operation may be performed by the acoustic decoding model based on a beginning character [SOS] and the feature c_1 after the attention processing operation at the current moment, so as to obtain the first probability corresponding to each candidate text; for example, if the candidate texts include "(jin)," "(jin)," or the like, the first probabilities of "(jin)," "(jin)," or the like, may be predicted.
305: processing the output character at the previous moment using a language model to obtain a second probability corresponding to each candidate text.
The language model may be configured as a neural network model, such as an RNN model, a Transformer model, or the like.
For example, if “(jin)” is to be predicted at the current moment, a processing operation may be performed by the language model based on the beginning character [SOS], so as to obtain the second probability corresponding to each candidate text; for example, if the candidate texts include “(jin),” “ (jin),” or the like, the second probabilities of “(jin),” “ (jin),” or the like, may be predicted.
306: obtaining a fusion probability corresponding to each candidate text based on the first probability and the second probability.
Specifically, for each candidate text, the first probability and the second probability may be subjected to weighted summation to obtain a weighted summation value, and the weighted summation value may be determined as the fusion probability of the corresponding candidate text.
For example, the first probability and the second probability of “(jin)” are subjected to weighted summation to obtain the fusion probability of “(jin).”
In this embodiment, the fusion probability is obtained by performing weighted summation on the first probability and the second probability, thus simply and conveniently calculating the fusion probability.
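A minimal sketch of the fusion in steps 304 to 306, assuming each model returns a probability per candidate text and the fusion weights are chosen empirically (the candidate labels below are illustrative stand-ins for the homophone characters):

def fuse_probabilities(first_probs, second_probs, acoustic_weight=0.7, lm_weight=0.3):
    # weighted summation of the first probability (acoustic decoding model)
    # and the second probability (language model) for each candidate text
    return {c: acoustic_weight * first_probs[c] + lm_weight * second_probs[c]
            for c in first_probs}

first_probs = {"jin_a": 0.6, "jin_b": 0.3, "jin_c": 0.1}    # illustrative values
second_probs = {"jin_a": 0.8, "jin_b": 0.1, "jin_c": 0.1}
fusion_probs = fuse_probabilities(first_probs, second_probs)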
307: selecting a preset number of candidate texts based on the fusion probability.
Specifically, a number of candidate texts equal to the beam search width may be selected in descending order of fusion probability; for example, if the beam search width N=2, then for the first moment, assuming that two of the "(jin)" candidates have the highest fusion probabilities, those two "(jin)" candidates are selected as the candidate texts at the first moment.
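Continuing the sketch above, selecting the beam-width number of candidate texts in descending order of fusion probability could be done as follows:

def select_candidates(fusion_probs, beam_width=2):
    # sort candidate texts by fusion probability and keep the top beam_width of them
    ranked = sorted(fusion_probs.items(), key=lambda item: item[1], reverse=True)
    return [candidate for candidate, _ in ranked[:beam_width]]

selected = select_candidates(fusion_probs, beam_width=2)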
308: judging whether the standard text corresponding to the speech exists in the preset number of candidate texts; if yes, executing 309, and otherwise, executing 310.
The standard text corresponding to the speech may be obtained by means of manual annotation; for example, the standard text is “(jin)” for the first moment.
309: determining the preset number of candidate texts as the predicted texts.
310: replacing one of the preset number of candidate texts with the standard text to obtain texts after the replacing, and determining the texts after the replacing as the predicted texts.
For example, for the first moment, if the standard text is "(jin)" but the N selected candidate texts do not include it, the standard text "(jin)" may be forcibly included in the predicted texts. Specifically, the candidate text in a specified output path or in a randomly selected output path may be replaced with the standard text; for example, one selected candidate is replaced with "(jin)," and the predicted texts then include the standard text "(jin)."
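Continuing the same sketch, one simple way to realize steps 308 to 310 (replacing a randomly selected output path when the standard text is absent) is:

import random

def force_standard_text(selected, standard_text):
    if standard_text in selected:          # 309: candidates already contain the standard text
        return list(selected)
    replaced = list(selected)              # 310: replace a randomly selected candidate
    replaced[random.randrange(len(replaced))] = standard_text
    return replaced

predicted_texts = force_standard_text(selected, standard_text="jin_a")   # illustrative label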
In the related art, in a discriminative training process, a candidate text with an error rate higher than the average error rate is generally suppressed, and a candidate text with an error rate lower than the average error rate is encouraged. However, if the N candidate texts do not include a completely correct result, there exists a problem of encouraging an erroneous result.
In this embodiment, by replacing the candidate text with the standard text, the standard text may be forcibly included in the predicted text, thus improving a recognition effect of the speech recognition model.
311: obtaining an accumulated number of errors of the predicted text based on the standard text corresponding to the speech and the predicted text, the accumulated error number being obtained based on a historical error number and a current error number.
The current error number is a number of errors of the predicted text at the current moment with respect to the standard text, and the historical error number is a number of errors of the predicted text at a historical moment before the current moment with respect to the standard text.
For example, referring to
In this embodiment, as shown on the lower side of
In this embodiment, the local error optimizing effect may be achieved by calculating the accumulated error number.
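As a sketch of the accumulation in step 311, using simple per-moment character comparison as a stand-in for the actual error counting, the accumulated error number at each moment is the historical error number plus the current error number:

def accumulated_errors(predicted_chars, standard_chars):
    # predicted_chars / standard_chars: output characters per moment for one hypothesis
    accumulated, history = 0, []
    for predicted, standard in zip(predicted_chars, standard_chars):
        current = 0 if predicted == standard else 1   # current error number at this moment
        accumulated += current                        # historical + current error number
        history.append(accumulated)
    return history

# e.g. if only the second character is wrong, the accumulated numbers are [0, 1, 1, 1]
print(accumulated_errors(["a", "x", "c", "d"], ["a", "b", "c", "d"]))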
312: obtaining a loss function based on the accumulated error number of the predicted text.
313: training the speech recognition model based on the loss function.
In this embodiment, the first probability is calculated using the acoustic decoding model, the second probability is calculated using the language model, the fusion probability is obtained based on the first probability and the second probability, and the candidate text is selected based on the fusion probability, such that the more accurate candidate text may be obtained, thereby improving the recognition effect of the speech recognition model.
In some embodiments, the processing module 501 is specifically configured for: processing an output text at a previous moment and acoustic related information at a current moment using the acoustic decoding model to obtain a first probability corresponding to each of the at least one candidate text corresponding to the speech; processing the output text at the previous moment using the language model to obtain a second probability corresponding to each candidate text; and obtaining the fusion probability of each candidate text based on the first probability and the second probability.
In some embodiments, the processing module 501 is specifically configured for: for the candidate text, performing weighted summation of the first probability and the second probability to obtain a weighted summation value, and determining the weighted summation value as the fusion probability of the corresponding candidate text.
In some embodiments, the determining module 502 is specifically configured for: if the preset number of the one or more candidate texts include the standard text, determining the preset number of the one or more candidate texts as the predicted texts; or, if the preset number of the one or more candidate texts do not include the standard text, replacing one candidate text of the preset number of the one or more candidate texts with the standard text to obtain one or more texts after the replacing, and determining the one or more texts after the replacing as the predicted texts.
In some embodiments, the training module 503 is specifically configured for: obtaining an accumulated number of errors of the predicted text based on the predicted text and the standard text corresponding to the speech, the accumulated error number being obtained based on a historical error number and a current error number; and obtaining the loss function based on the accumulated error number of the predicted text.
As shown in
The extracting module 604 is configured for extracting acoustic features of a speech; the encoding module 605 is configured for encoding the acoustic features to obtain encoded features; the attention processing module 606 is configured for processing the encoded features to obtain features after the attention processing operation.
In the embodiment of the present disclosure, the acoustic features are extracted and encoded, and the attention processing operation is performed on the encoded features, such that semantic features may be obtained and decoded to obtain the predicted text, thereby training the speech recognition model based on the predicted text. The first probability is calculated using the acoustic decoding model, the second probability is calculated using the language model, the fusion probability is obtained based on the first probability and the second probability, and the candidate text is selected based on the fusion probability, such that the more accurate candidate text may be obtained, thereby improving the recognition effect of the speech recognition model. The fusion probability is obtained by performing weighted summation of the first probability and the second probability, thus simply and conveniently calculating the fusion probability. The local error optimizing effect may be achieved by calculating the accumulated error number.
It may be understood that in the embodiments of the present disclosure, mutual reference may be made to the same or similar contents in different embodiments.
It may be understood that “first”, “second”, or the like, in the embodiments of the present disclosure are only for distinguishing and do not represent an importance degree, a sequential order, or the like.
According to the embodiment of the present disclosure, there are also provided an electronic device, a readable storage medium and a computer program product.
As shown in
The plural components in the electronic device 700 are connected to the I/O interface 705, and include: an input unit 706, such as a keyboard, a mouse, or the like; an output unit 707, such as various types of displays, speakers, or the like; the storage unit 708, such as a magnetic disk, an optical disk, or the like; and a communication unit 709, such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through a computer network, such as the Internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphic processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, or the like. The computing unit 701 performs the methods and processing operations described above, such as the method for training a speech recognition model. For example, in some embodiments, the method for training a speech recognition model may be implemented as a computer software program tangibly contained in a machine readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed into the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method for training a speech recognition model described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the method for training a speech recognition model by any other suitable means (for example, by means of firmware).
Various implementations of the systems and technologies described herein above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chips (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. The systems and technologies may be implemented in one or more computer programs which are executable and/or interpretable on a programmable system including at least one programmable processor, and the programmable processor may be special or general, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program codes for implementing the method according to the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or a controller of a general purpose computer, a special purpose computer, or other programmable data processing devices, such that the program code, when executed by the processor or the controller, causes functions/operations specified in the flowchart and/or the block diagram to be implemented. The program code may be executed entirely on a machine, partly on a machine, partly on a machine as a stand-alone software package and partly on a remote machine, or entirely on a remote machine or a server.
In the context of the present disclosure, the machine readable medium may be a tangible medium which may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. The machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fiber, a portable compact disc read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and technologies described here may be implemented on a computer having: a display device (for example, a cathode ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to a user; and a keyboard and a pointing device (for example, a mouse or a trackball) by which a user may provide input for the computer. Other kinds of devices may also be used to provide interaction with a user; for example, feedback provided for a user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback); and input from a user may be received in any form (including acoustic, speech or tactile input).
The systems and technologies described here may be implemented in a computing system (for example, as a data server) which includes a back-end component, or a computing system (for example, an application server) which includes a middleware component, or a computing system (for example, a user computer having a graphical user interface or a web browser through which a user may interact with an implementation of the systems and technologies described here) which includes a front-end component, or a computing system which includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected through any form or medium of digital data communication (for example, a communication network). Examples of the communication network include: a local area network (LAN), a wide area network (WAN) and the Internet.
A computer system may include a client and a server. Generally, the client and the server are remote from each other and interact through the communication network. The relationship between the client and the server is generated by virtue of computer programs which run on respective computers and have a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability of conventional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used and reordered, and steps may be added or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solution disclosed in the present disclosure may be achieved.
The above-mentioned implementations are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent substitution and improvement made within the spirit and principle of the present disclosure all should be included in the extent of protection of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---
202110308608.0 | Mar 2021 | CN | national |