The present application claims the priority of Chinese Patent Application No. 202110861985.7, filed on Jul. 29, 2021, with the title of “TEXT PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM.” The disclosure of the above application is incorporated herein by reference in its entirety.
The present disclosure relates to the field of artificial intelligence technologies, and, in particular, to a text processing method and apparatus, an electronic device and a storage medium in the fields such as deep learning and natural language processing.
In practical applications, processing, such as machine translation or emotion recognition, for a to-be-processed text may be realized by means of a Transformer model.
The Transformer model generally adopts a multi-head-attention mechanism, which includes multiple attention modules and has high time complexity. Moreover, the time complexity may increase with an increase in the text length. The text length generally refers to the number of tokens.
In order to reduce the time complexity and improve the efficiency of text processing, a computational sparsity method, such as a sparse self-attention (Longformer) method, may be adopted. However, in this method, each head adopts the same attention pattern, which affects model performance and reduces the text processing effect.
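For concreteness, the following is a minimal NumPy sketch (not taken from the disclosure) of such a sparse attention pattern: each token attends to a small local window plus a few global tokens, which roughly reduces the cost from quadratic in the text length to linear in the text length times the window size. The function name, window size and global positions are illustrative assumptions; the point of the example is that, in this baseline, all N heads reuse one and the same mask.

```python
import numpy as np

def build_shared_sparse_mask(seq_len, window, global_positions):
    """Boolean [seq_len, seq_len] mask; True means attention is allowed."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True          # local sliding window around token i
    for g in global_positions:
        mask[g, :] = True              # a global token attends to all tokens
        mask[:, g] = True              # all tokens attend to the global token
    return mask

# In the sparse self-attention baseline described above, all heads
# reuse the identical pattern:
shared_mask = build_shared_sparse_mask(seq_len=8, window=1, global_positions=[0])
masks_per_head = [shared_mask] * 4     # the same attention pattern for every head
```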
The present disclosure provides a text processing method and apparatus, an electronic device and a storage medium.
A text processing method includes configuring, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1; and processing the text by using the Transformer model.
An electronic device includes at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform a text processing method, wherein the text processing method includes: configuring, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1; and processing the text by using the Transformer model.
A non-transitory computer readable storage medium with computer instructions stored thereon, wherein the computer instructions are used for causing a computer to perform a text processing method, wherein the text processing method includes configuring, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1; and processing the text by using the Transformer model.
One of the embodiments disclosed above has the following advantages or beneficial effects. The heads no longer adopt the same attention pattern, but different heads may correspond to different attention patterns, so as to improve connectivity between tokens, thereby improving the model performance and correspondingly improving the text processing effect.
It should be understood that the content described in this part is neither intended to identify key or significant features of the embodiments of the present disclosure, nor intended to limit the scope of the present disclosure. Other features of the present disclosure will be made easier to understand through the following description.
The drawings are intended to provide a better understanding of the solutions and do not constitute limitations on the present disclosure. In the drawings,
Exemplary embodiments of the present disclosure are illustrated below with reference to the accompanying drawings, which include various details of the present disclosure to facilitate understanding and should be considered only as exemplary. Therefore, those of ordinary skill in the art should be aware that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Similarly, for clarity and simplicity, descriptions of well-known functions and structures are omitted in the following description.
In addition, it is to be understood that the term “and/or” herein merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, “A and/or B” indicates three cases: A alone, both A and B, and B alone. Besides, the character “/” herein generally indicates that the associated objects before and after it are in an “or” relationship.
In step 101, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism are configured respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1.
In step 102, the text is processed by using the Transformer model.
As can be seen, in the solution of the above method embodiment, the heads no longer adopt the same attention pattern, but different heads may correspond to different attention patterns, so as to improve connectivity between tokens, thereby improving the model performance and correspondingly improving the text processing effect.
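As an illustration of how steps 101 and 102 could fit together, the sketch below applies one boolean mask per head inside a plain multi-head-attention forward pass: positions disallowed by a head's attention pattern receive zero weight after the softmax. The NumPy formulation, the per-head projection matrices and the helper names are assumptions for illustration only, not the disclosure's implementation.

```python
import numpy as np

def masked_softmax(scores, mask):
    # Positions where the mask is False get -inf, so they receive zero weight;
    # with patterns in which each token at least attends to itself, no row is fully masked.
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, head_masks):
    """x: [seq_len, d_model]; w_q, w_k, w_v: per-head projection matrices;
    head_masks: one boolean [seq_len, seq_len] pattern per head (the output of step 101)."""
    outputs = []
    for q_proj, k_proj, v_proj, mask in zip(w_q, w_k, w_v, head_masks):
        q, k, v = x @ q_proj, x @ k_proj, x @ v_proj
        scores = q @ k.T / np.sqrt(q.shape[-1])           # scaled dot-product scores
        outputs.append(masked_softmax(scores, mask) @ v)  # sparse attention for this head
    return np.concatenate(outputs, axis=-1)               # concatenate the N heads (step 102)
```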
The specific value of N may be determined according to an actual requirement. Corresponding attention patterns may be configured for N heads respectively. At least one head corresponds to a different attention pattern from the other N−1 heads. That is, N attention patterns corresponding to the N heads include at least two different attention patterns.
In one embodiment of the present disclosure, the attention pattern may include: a local pattern and a global pattern. That is, the attention pattern may be composed of a local pattern and a global pattern. The local pattern may also be called a local attention, and the global pattern may also be called a global attention.
In one embodiment of the present disclosure, the heads may correspond to a same local pattern. That is, a uniform local pattern may be configured for the heads. In this way, an effect of configuring different attention patterns may be achieved merely by configuring different global patterns for any two heads, thereby simplifying the configuration process and improving the processing efficiency.
In one embodiment of the present disclosure, the heads may correspond to different global patterns respectively, wherein a change rule between the global patterns corresponding to any two adjacent heads may be the same.
For example, if the value of N is 4, different global patterns may be configured for the 1st head, the 2nd head, the 3rd head and the 4th head respectively. That is, global patterns corresponding to any two heads may be different.
With the above processing, the connectivity between the tokens may be further improved, thereby further improving the model performance and the text processing effect.
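One possible way to realize “same local pattern, different global patterns” is sketched below: a single local sliding-window mask is built once and shared by all heads, while each head ORs in its own set of global token positions. The window size and the per-head global positions are hypothetical values chosen only for illustration.

```python
import numpy as np

def local_mask(seq_len, window):
    """The uniform local pattern shared by all heads: a sliding window around each token."""
    m = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        m[i, max(0, i - window):min(seq_len, i + window + 1)] = True
    return m

def global_mask(seq_len, global_positions):
    """A head-specific global pattern: the chosen tokens attend to, and are attended by, all tokens."""
    m = np.zeros((seq_len, seq_len), dtype=bool)
    for g in global_positions:
        m[g, :] = True
        m[:, g] = True
    return m

# Hypothetical configuration for N = 4 heads: one shared local pattern,
# and a different global pattern for every head.
seq_len, window = 12, 1
shared_local = local_mask(seq_len, window)
globals_per_head = [[0, 1], [2, 3], [4, 5], [6, 7]]    # no two heads share the same set
head_masks = [shared_local | global_mask(seq_len, g) for g in globals_per_head]
```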
In one embodiment of the present disclosure, a specific implementation of configuring different global patterns corresponding to the heads respectively may be shown in
In step 201, a global pattern corresponding to the 1st head is configured.
The specific form of the global pattern is not limited.
In step 202, for the ith head, the global pattern corresponding to the (i−1)th head is adjusted according to a predetermined adjustment rule, and the adjusted global pattern is taken as the global pattern corresponding to the ith head.
An initial value of i is 2.
In addition, the predetermined adjustment rule is not specifically limited.
In step 203, it is determined whether i is equal to N, where N denotes the number of heads; if yes, the process is ended; and otherwise, step 204 is performed.
If i is equal to N, which indicates that all the heads have been configured, the process may correspondingly be ended; otherwise, the processing is continued for the next head.
In step 204, i=i+1 is configured, and then step 202 is repeated.
That is, 1 may be added to the value of i to obtain an updated i, and step 202 is repeated for the ith head.
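Steps 201 to 204 can be summarized as the loop sketched below. Since the disclosure does not limit the specific form of the predetermined adjustment rule, the rule used here (shifting every global position of the previous head by a fixed offset) is only one illustrative choice, and the first head's pattern is likewise a hypothetical input.

```python
def configure_global_patterns(n_heads, seq_len, first_pattern, shift=2):
    """Steps 201-204 as a loop; `first_pattern` lists the global token positions of the
    1st head, and `shift` encodes one possible predetermined adjustment rule."""
    patterns = [sorted(first_pattern)]                               # step 201: the 1st head
    for _ in range(2, n_heads + 1):                                  # steps 202-204: heads 2..N
        previous = patterns[-1]
        adjusted = sorted((g + shift) % seq_len for g in previous)   # adjust the previous pattern
        patterns.append(adjusted)
    return patterns

# With N = 4 heads, each head's global pattern is obtained by regularly
# adjusting the pattern of the preceding head:
print(configure_global_patterns(n_heads=4, seq_len=12, first_pattern=[0, 1]))
# -> [[0, 1], [2, 3], [4, 5], [6, 7]]
```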
Assuming that the value of N is 4, global patterns corresponding to the heads may be sequentially obtained according to the method in the embodiment shown in
As can be seen, after the global patterns are configured according to the above method, a change rule between the global patterns corresponding to any two adjacent heads is the same, enabling more tokens to have a chance to become global tokens. Moreover, the global patterns corresponding to the heads may be quickly and efficiently configured through such regular adjustment.
As an example,
As shown in
As shown in
As shown in
As can be seen, for the ith head, 1≤i≤N, as i increases, the corresponding global pattern changes regularly. As shown in
Correspondingly, taking the 1st head as an example, as shown in
As described above, each small square may correspond to one token respectively. Assuming that the tokens are numbered token1, token2, token3, . . . , and tokenM from top to bottom, where M denotes the number of tokens, for token1, its receptive field is global, that is, it includes all the tokens; for token2, its receptive field is also global, that is, the same as that of token1, including all the tokens; for token3, its receptive field includes 5 tokens, namely token1, token2, token3, token4 and token5; for token4, its receptive field includes 5 tokens, namely token1, token2, token4, token5 and token6; for token5, its receptive field includes 5 tokens, namely token1, token2, token5, token6 and token7.
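The receptive fields enumerated in this example can be reproduced by reading off the rows of a combined mask, assuming that token1 and token2 are global tokens and that each token's local window covers itself and the next two tokens; these assumptions are inferred from the enumeration above purely for illustration.

```python
import numpy as np

def receptive_fields(seq_len, lookahead, global_positions):
    """Row i of the combined mask lists the tokens that token i can attend to."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, i:min(seq_len, i + lookahead + 1)] = True  # token i and the next `lookahead` tokens
    for g in global_positions:
        mask[g, :] = True                                  # a global token sees every token
        mask[:, g] = True                                  # every token sees the global tokens
    # Report 1-based token numbers to match the naming token1, token2, ... above.
    return [sorted(int(j) + 1 for j in np.flatnonzero(mask[i])) for i in range(seq_len)]

fields = receptive_fields(seq_len=8, lookahead=2, global_positions=[0, 1])
print(fields[0])   # token1 -> [1, 2, 3, 4, 5, 6, 7, 8] (global)
print(fields[2])   # token3 -> [1, 2, 3, 4, 5]
print(fields[4])   # token5 -> [1, 2, 5, 6, 7]
```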
Refer to
Processing, such as machine translation or emotion recognition, for the to-be-processed text may be realized by means of the Transformer model according to the present disclosure. For example, semantic expression coding or the like may be performed by using the Transformer model. A specific implementation thereof belongs to the prior art.
After the attention patterns corresponding to the heads are configured based on the method according to the present disclosure, the performance of the Transformer model is improved. Then, correspondingly, text processing by using the Transformer model may improve a text processing effect. For example, the accuracy of machine translation or the accuracy of emotion recognition results may be improved.
It is to be noted that, to make the description brief, the foregoing method embodiments are expressed as a series of actions. However, those skilled in the art should appreciate that the present disclosure is not limited to the described action sequence, because, according to the present disclosure, some steps may be performed in other sequences or performed simultaneously. In addition, those skilled in the art should also appreciate that all the embodiments described in the specification are preferred embodiments, and the related actions and modules are not necessarily mandatory to the present disclosure. Besides, for a part that is not described in detail in one embodiment, reference may be made to related descriptions in other embodiments.
The above is an introduction to the method embodiments. The solution according to the present disclosure is further illustrated below through apparatus embodiments.
The configuration module 401 is configured to configure, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1.
The processing module 402 is configured to process the text by using the Transformer model.
As can be seen, in the solution of the above apparatus embodiment, the heads no longer adopt the same attention pattern, but different heads may correspond to different attention patterns, so as to improve connectivity between tokens, thereby improving the model performance and correspondingly improving the text processing effect.
The specific value of N may be determined according to an actual requirement. The configuration module 401 may configure corresponding attention patterns for N heads respectively. At least one head corresponds to a different attention pattern from the other N−1 heads. That is, N attention patterns corresponding to the N heads include at least two different attention patterns.
In one embodiment of the present disclosure, the attention pattern may include: a local pattern and a global pattern. That is, the attention pattern may be composed of a local pattern and a global pattern.
In one embodiment of the present disclosure, the configuration module 401 may configure a same local pattern for the heads. That is, a uniform local pattern may be configured for the heads. In this way, an effect of configuring different attention patterns may be achieved merely by configuring different global patterns for any two heads.
In one embodiment of the present disclosure, the configuration module 401 may configure different global patterns corresponding to the heads respectively, wherein a change rule between the global patterns corresponding to any two adjacent heads may be the same.
In one embodiment of the present disclosure, the configuration module 401 may configure a global pattern corresponding to the 1st head; perform the following processing for the ith head, an initial value of i being 2: adjusting the global pattern corresponding to the (i−1)th head according to a predetermined adjustment rule, and taking the adjusted global pattern as the global pattern corresponding to the ith head; and end the processing if i is determined to be equal to N, and otherwise, configure i=i+1, and repeat the foregoing processing for the ith head.
Assuming that the value of N is 4, a global pattern corresponding to the 1st head may be configured; then, for the 2nd head, the global pattern corresponding to the 1st head may be adjusted according to the predetermined adjustment rule, and the adjusted global pattern is taken as the global pattern corresponding to the 2nd head. Then, for the 3rd head, the global pattern corresponding to the 2nd head may be adjusted according to the predetermined adjustment rule, and the adjusted global pattern is taken as the global pattern corresponding to the 3rd head. Then, for the 4th head, the global pattern corresponding to the 3rd head may be adjusted according to the predetermined adjustment rule, and the adjusted global pattern is taken as the global pattern corresponding to the 4th head.
Upon completion of the above processing, the processing module 402 may realize processing, such as machine translation or emotion recognition, for the to-be-processed text by means of the Transformer model. For example, semantic expression coding or the like may be performed by using the Transformer model.
A specific work flow of the apparatus embodiment shown in
After the attention patterns corresponding to the heads are configured based on the method according to the present disclosure, the performance of the Transformer model is improved. Then, correspondingly, text processing by using the Transformer model may improve a text processing effect. For example, the accuracy of machine translation or the accuracy of emotion recognition results may be improved.
Acquisition, storage and application of users' personal information involved in the technical solutions of the present disclosure comply with relevant laws and regulations, and do not violate public order and good morals.
The solutions according to the present disclosure may be applied to the field of artificial intelligence, and in particular relate to fields such as deep learning and natural language processing. Artificial intelligence is a discipline that studies how to make computers simulate certain thinking processes and intelligent behaviors of human beings (such as learning, reasoning, thinking and planning), and involves both hardware technologies and software technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage and big data processing. Artificial intelligence software technologies mainly include major directions such as computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology and knowledge graph technology.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium and a computer program product.
As shown in
A plurality of components in the device 500 are connected to the I/O interface 505, including an input unit 506, such as a keyboard and a mouse; an output unit 507, such as various displays and speakers; a storage unit 508, such as disks and discs; and a communication unit 509, such as a network card, a modem and a wireless communication transceiver. The communication unit 509 allows the device 500 to exchange information/data with other devices over computer networks such as the Internet and/or various telecommunications networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller or microcontroller, etc. The computing unit 501 performs the methods and processing described above, such as the method according to the present disclosure. For example, in some embodiments, the method according to the present disclosure may be implemented as a computer software program that is tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of a computer program may be loaded and/or installed on the device 500 via the ROM 502 and/or the communication unit 509. One or more steps of the method according to the present disclosure may be performed when the computer program is loaded into the RAM 503 and executed by the computing unit 501. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the method according to the present disclosure by any other appropriate means (for example, by means of firmware).
Various implementations of the systems and technologies disclosed herein can be realized in a digital electronic circuit system, an integrated circuit system, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system on chip (SOC), a complex programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof. Such implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, configured to receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and to transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
Program codes configured to implement the methods in the present disclosure may be written in any combination of one or more programming languages. Such program codes may be supplied to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to enable the function/operation specified in the flowchart and/or block diagram to be implemented when the program codes are executed by the processor or controller. The program codes may be executed entirely on a machine, partially on a machine, partially on a machine and partially on a remote machine as a stand-alone package, or entirely on a remote machine or a server.
In the context of the present disclosure, machine-readable media may be tangible media which may include or store programs for use by or in conjunction with an instruction execution system, apparatus or device. The machine-readable media may be machine-readable signal media or machine-readable storage media. The machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or any suitable combinations thereof. More specific examples of machine-readable storage media may include electrical connections based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
To provide interaction with a user, the systems and technologies described here can be implemented on a computer. The computer has: a display apparatus (e.g., a cathode-ray tube (CRT) or a liquid crystal display (LCD) monitor) for displaying information to the user; and a keyboard and a pointing apparatus (e.g., a mouse or trackball) through which the user may provide input for the computer. Other kinds of apparatuses may also be configured to provide interaction with the user. For example, a feedback provided for the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback); and input from the user may be received in any form (including sound input, speech input, or tactile input).
The systems and technologies described herein can be implemented in a computing system including back-end components (e.g., a data server), or a computing system including middleware components (e.g., an application server), or a computing system including front-end components (e.g., a user computer with a graphical user interface or a web browser through which the user can interact with the implementations of the systems and technologies described here), or a computing system including any combination of such back-end components, middleware components or front-end components. The components of the system can be connected to each other through any form or medium of digital data communication (e.g., a communication network). Examples of the communication network include: a local area network (LAN), a wide area network (WAN) and the Internet.
The computer system may include a client and a server. The client and the server are generally far away from each other and generally interact via the communication network. A relationship between the client and the server is generated through computer programs that run on a corresponding computer and have a client-server relationship with each other. The server may be a cloud server, a distributed system server, or a server combined with blockchain.
It should be understood that the steps can be reordered, added, or deleted using the various forms of processes shown above. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different sequences, provided that desired results of the technical solutions disclosed in the present disclosure are achieved, which is not limited herein.
The above specific implementations do not limit the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and replacements can be made according to design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principle of the present disclosure all should be included in the protection scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
202110861985.7 | Jul. 29, 2021 | CN | national