This application claims priority to Chinese Patent Application No. 201811361939.5 with a filing date of Nov. 15, 2018. The content of the aforementioned application, including any intervening amendments thereto, is incorporated herein by reference.
The invention relates to the field of artificial intelligence technologies, more particularly to a device, a storage medium and a text representation method applied to sentence embedding.
Sentence embedding maps text to real-valued vectors or matrices and plays an important role in machine understanding of text. Its applications include sentiment classification, question-answering systems and text summarization. Sentence embedding models can be classified into three categories: statistical embedding models, serialized embedding models and structured embedding models. Statistical embedding models are estimated from statistical indicators, such as the frequency of co-occurring words, co-occurrence word frequencies and the weight of words in different texts (as in the TF-IDF model). Serialized embedding models rely mainly on neural network structures to learn text representations, typically a single hidden layer, a convolutional neural network or a recurrent neural network (RNN). Structured embedding models take the syntactic structure into account to reflect the semantics of the text, for example recursive neural networks and tree-structured long short-term memory networks (Tree-LSTM).
Current sentence embedding models have achieved good results in text classification tasks. However, in existing embedding models, the process of generating sentence embeddings usually follows a one-way action: the representation generated for the preceding text determines the representation of the following text. Limited by this one-way effect, part of the semantics is lost.
The purpose of the invention is to provide a device, a storage medium and a text representation method applied to sentence embedding to improve the accuracy and efficiency of sentence embeddings.
According to one aspect of the invention, the invention provides a text representation method applied to sentence embedding, comprising: obtaining a file to be processed and extracting a sentence from the file, wherein the file includes a text file and a webpage file; obtaining n parent words corresponding to n words in the sentence; determining the parent word and the child word set C(p) corresponding to the parent word, and setting hidden states hk and memory cells ck for each child word in the C(p), wherein k∈{1, 2, . . . ,|C(p)|} and |C(p)| is the number of child words in the C(p); obtaining a hidden interaction state {tilde over (h)}p of the parent word based on a hidden interaction state of all child words in the C(p); inputting the {tilde over (h)}p and the parent word into the LSTM cell to obtain the memory cells and hidden states of the parent word; obtaining a sequence of parent words {x1, x2, . . . , xn} corresponding to the n parent words and obtaining a hidden state sequence {h1, h2, . . . , hn} corresponding to the {x1, x2, . . . , xn} based on the hidden states of the parent words; obtaining the interaction representation sequence {r1, r2, . . . , rn} of each parent word with the other parent words in the {x1, x2, . . . , xn} based on the {h1, h2, . . . , hn}; and generating sentence embeddings based on the {r1, r2, . . . , rn}.
Optionally, obtaining the memory cells and hidden states of the parent word comprises: converting the parent word xp into a hidden representation; calculating the word weight of the kth child word with respect to the parent word xp; obtaining the hidden interaction state {tilde over (h)}p that relates to all child states of the parent word xp; and
inputting the {tilde over (h)}p and the parent word xp into the LSTM cell, obtaining the memory cells and hidden states of the parent word xp.
Optionally, inputting the {tilde over (h)}p and the parent word xp into the LSTM cell, obtaining the memory cells and hidden states of the parent word xp comprises: using the hidden interaction state {tilde over (h)}p and the parent word xp as the input to the LSTM cell to get:
ip=σ(U(i)xp+W(i){tilde over (h)}p+b(i));
op=σ(U(o)xp+W(o){tilde over (h)}p+b(o));
up=tanh(U(u)xp+W(u){tilde over (h)}p+b(u));
fkp=σ(U(f)xp+W(f)hk+b(f));
wherein ip, op and fkp are the input gate, the output gate and the forget gate, respectively; up is the candidate hidden state of xp; the corresponding weight matrices of xp are U(i), U(o), U(u) and U(f); the corresponding weight matrices of {tilde over (h)}p or hk are W(i), W(o), W(u) and W(f); and the bias terms are b(i), b(o), b(u) and b(f);
obtaining the memory cell of the parent word xp, the memory cell is represented as:
cp=ip⊙up+Σk fkp⊙ck;
obtaining the hidden state of the parent word xp, the hidden state is represented as:
hp=op⊙ tanh(cp).
Optionally, generating sentence embeddings based on the {r1, r2, . . . , rn} comprises: obtaining the connective representation sequence {αg1, αg2, . . . , αgn} between the word xg in the {x1, x2, . . . , xn} and the other words;
calculating the interaction weight of word xk and word xg in the {x1, x2, . . . , xn}:
λgk=exp(αgk)/Σj exp(αgj);
the interaction representation of xg in the {x1, x2, . . . , xn} is represented as:
rg=Σk λgkhk;
enumerating all the words in the {x1, x2, . . . , xn}, obtaining the interaction representation sequence {r1, r2, . . . , rn} of {x1, x2, . . . , xn}, generating the sentence embeddings s=max{r1, r2, . . . , rn}.
Optionally, obtaining the predicted label corresponding to the sentence embeddings s:
ŝ=arg maxy∈Y p(y|s);
wherein ŝ∈Y, Y is the class label set; p(y|s)=softmax(W(s)s+b(s)); W(s) and b(s) are the reshape matrix and the bias term, respectively;
setting the loss function:
wherein hi is the hidden state, {tilde over (w)}i is the true class label of word xi, {tilde over (s)} is the true class label of sentence embeddings s; evaluating the quality of the sentence embeddings s based on the loss function.
According to a second aspect of the invention, the invention provides a text representation device applied to sentence embedding, comprising: a word extraction module, configured to obtain a file to be processed and extract a sentence from the file, wherein the file includes a text file and a webpage file, and to obtain n parent words corresponding to n words in the sentence; a child word processing module, configured to determine the parent word and a child word set C(p) corresponding to the parent word, and to set hidden states hk and memory cells ck for each child word in the C(p), wherein k∈{1, 2, . . . ,|C(p)|}; a parent word processing module, configured to obtain a hidden interaction state {tilde over (h)}p of the parent word based on the hidden interaction states of all child words in the C(p), and to input the {tilde over (h)}p and the parent word into the LSTM cell to obtain the memory cells and hidden states of the parent word; a hidden state processing module, configured to obtain a sequence {x1, x2, . . . , xn} of parent words corresponding to the n parent words and to obtain a hidden state sequence {h1, h2, . . . , hn} corresponding to the {x1, x2, . . . , xn} based on the hidden states of the parent words; and a sentence embedding processing module, configured to obtain an interaction representation sequence {r1, r2, . . . , rn} of each parent word with the other parent words in the {x1, x2, . . . , xn} based on the {h1, h2, . . . , hn}, and to generate sentence embeddings based on the {r1, r2, . . . , rn}.
Optionally, the parent word processing module comprises: a hidden representation unit, configured to convert the parent word xp into a hidden representation; calculating the word weight of the kth child word of the parent word xp; obtaining a hidden interaction state {tilde over (h)}p of the parent word xp; and inputting the {tilde over (h)}p and the parent word xp into the LSTM cell to obtain the memory cells and hidden states of the parent word xp.
Optionally, the hidden state processing module, configured to use the hidden interaction state {tilde over (h)}p and the parent word xp as the input to the LSTM cell to get:
ip=σ(U(i)xp+W(i){tilde over (h)}p+b(i));
op=σ(U(o)xp+W(o){tilde over (h)}p+b(o));
up=tanh(U(u)xp+W(u){tilde over (h)}p+b(u));
fkp=σ(U(f)xp+W(f)hk+b(f));
wherein ip, op and fkp are the input gate, output gate and forget gate, respectively; up is the candidate hidden state of xp; the corresponding weight matrices of xp are U(i), U(o), U(u) and U(f); the corresponding weight matrices of {tilde over (h)}p or hk are W(i), W(o), W(u) and W(f); and the bias terms are b(i), b(o), b(u) and b(f);
the hidden state processing module, configured to obtain the memory cell of the parent word xp, the memory cell is represented as:
cp=ip⊙up+Σk fkp⊙ck;
the hidden state processing module, configured to obtain the hidden state of the parent word xp, the hidden state is represented as:
hp=op⊙ tanh(cp).
Optionally, the sentence embedding processing module, configured to obtain the connective representation sequence {αg1, αg2, . . . , αgn} between the word xg in the {x1, x2, . . . , xn} and the other words, and to calculate the interaction weight of word xk and word xg in the {x1, x2, . . . , xn}:
λgk=exp(αgk)/Σj exp(αgj);
the sentence embedding processing module, configured to obtain the interaction representation of xg in the {x1, x2, . . . , xn}, which can be represented as:
rg=Σk λgkhk;
the sentence embedding processing module, configured to enumerate all the words in the {x1, x2, . . . , xn}, obtaining the interaction representation sequence {r1, r2, . . . , rn} of the {x1, x2, . . . , xn}, generating the sentence embeddings s=max{r1, r2, . . . , rn}.
Optionally, a quality evaluation module, configured to obtain the predicted label corresponding to the sentence embeddings s:
ŝ=arg maxy∈Y p(y|s);
wherein ŝ∈Y, Y is the class label set; p(y|s)=softmax(W(s)s+b(s)); W(s) and b(s) are the reshape matrix and the bias term, respectively; the quality evaluation module, configured to set the loss function:
wherein hi is the hidden state, {tilde over (w)}i is the true class label of word xi, {tilde over (s)} is the true class label of sentence embeddings s; evaluating the quality of the sentence embeddings s based on the loss function.
According to a third aspect of the invention, the invention provides a text representation device applied to sentence embedding, comprising: a memory; and a processor coupled to the memory; wherein the processor is configured to perform the aforementioned method based on instructions stored in the memory.
According to a fourth aspect of the invention, the invention provides a computer readable storage medium storing computer program instructions which, when executed by a processor, implement the steps of the aforementioned method.
The invention proposes to realize sentence embeddings through a two-level interaction representation, comprising a local interaction representation (LIR) and a global interaction representation (GIR). The invention combines the two levels of representation to generate a hybrid interaction representation (HIR), which can improve the accuracy and efficiency of sentence embeddings and is significantly better than the Tree-LSTM model in terms of accuracy.
The embodiments of the invention can be applied to computer systems/servers, which are able to operate with numerous general-purpose or special-purpose computing system environments or configurations. Examples of computing systems, environments, and/or configurations suitable for use with a computer system/server include, but are not limited to: smartphones, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.
The computer system/server can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system. Generally, program modules include routines, programs, target programs, components, logic and data structures, etc. They perform particular tasks or implement particular abstract data types. The computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network. In a distributed cloud computing environment, program modules can be located on a local or remote computing system storage medium including storage devices.
Step 101, obtaining n parent words corresponding to n words in the sentence.
Obtaining a file to be processed, extracting a sentence from the file, and analyzing and processing the sentence; wherein the file includes a text file, a webpage file and so on. For instance, the back-end system of an e-commerce website obtains an evaluation document about electronic products, in which different customers comment on the electronic products. The sentences related to the comments are extracted from the evaluation document based on extraction rules and processed accordingly.
Step 102, determining the parent word and the child word set C(p) corresponding to the parent word, and setting hidden states hk and memory cells ck for each child word in the C(p), wherein k∈{1, 2, . . . ,|C(p)|}.
For example, for the sentence ‘a dog crossed a ditch’, based on grammatical dependency the parent word is ‘crossed’ and the child word set is {‘a dog’, ‘a ditch’}. Hidden states hk and memory cells ck are inherent components of recurrent neural networks: the hidden state records the state representation of the network at the current time step, while the memory cell records the state information of the network from the beginning up to the present.
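By way of a hedged example, assuming the spaCy library is used for dependency parsing (the embodiment does not mandate any particular parser), the parent word and its child word set can be extracted as follows:

```python
# Illustrative only: extracting a parent word and its child word set with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("a dog crossed a ditch")

root = next(tok for tok in doc if tok.dep_ == "ROOT")  # the parent word, e.g. 'crossed'
children = list(root.children)                          # direct child words, e.g. 'dog', 'ditch'
print(root.text, [c.text for c in children])
```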
Step 103, obtaining a hidden interaction state {tilde over (h)}p of the parent word based on a hidden interaction state of all child words in the C(p). The hidden state and the hidden interaction state are different concepts: the hidden state is inherent to the RNN, while the hidden interaction state is a hidden state representation obtained by the interaction between the parent word and the child words.
Step 104, inputting the {tilde over (h)}p and the parent word into the LSTM cell to obtain the memory cells and hidden states of the parent word.
Step 105, obtaining a sequence of parent words {x1, x2, . . . , xn} corresponding to the n parent words and obtaining a hidden state sequence {h1, h2, . . . , hn} corresponding to the {x1, x2, . . . , xn} based on the hidden states of the parent words.
Step 106, obtaining the interaction representation sequence {r1, r2, . . . , rn} of each parent word with the other parent words in the {x1, x2, . . . , xn} based on the {h1, h2, . . . , hn}, and generating sentence embeddings based on the {r1, r2, . . . , rn}.
Sentence embedding helps the machine understand text. The semantics of a text is the product of the mutual influence of the words in the text; subsequent words also contribute to the semantics of the preceding words. The embodiment introduces the concept of interaction and proposes a two-level interaction representation applied to sentence embedding, namely a local interaction representation (LIR) and a global interaction representation (GIR). Combining the two interaction representations yields a hybrid interaction representation (HIR).
Apply a softmax function to the sequence of connective representations {α1, α2, . . . , α|C(p)|} to get the weight λk, i.e., the word weight of the kth child word with respect to the parent word xp. Obtain the hidden interaction state {tilde over (h)}p that relates to all child states of the parent word xp. In the child words→parent word interaction, input the {tilde over (h)}p and the parent word into the LSTM cell to obtain the memory cells and hidden states of the parent word xp. LSTM (Long Short-Term Memory) is a recurrent neural network suitable for processing and predicting important events with relatively long intervals and delays in time series.
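A non-authoritative numerical sketch of this child-to-parent interaction is given below; the bilinear form used for the connective representations αk and the weighted sum forming {tilde over (h)}p are assumptions consistent with the description above, not formulas reproduced from the original:

```python
import numpy as np

def child_interaction_state(x_p, child_h, W_alpha):
    """Hidden interaction state of a parent word from its child hidden states.

    x_p:     (d,) parent word representation (assumed controller)
    child_h: (|C(p)|, d) hidden states h_k of the child words
    W_alpha: (d, d) connective weight matrix (an assumed bilinear form)
    """
    alpha = child_h @ W_alpha @ x_p            # connective representations alpha_k
    alpha = alpha - alpha.max()                # numerical stability
    lam = np.exp(alpha) / np.exp(alpha).sum()  # word weights lambda_k (softmax)
    return lam @ child_h                       # weighted sum of the child hidden states

rng = np.random.default_rng(0)
d = 4
h_tilde = child_interaction_state(rng.normal(size=d),
                                  rng.normal(size=(3, d)),
                                  rng.normal(size=(d, d)))
print(h_tilde.shape)  # (4,)
```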
Use the hidden interaction state {tilde over (h)}p and the parent word xp as the input to the LSTM cell to get:
ip=σ(U(i)xp+W(i){tilde over (h)}p+b(i));
op=σ(U(o)xp+W(o){tilde over (h)}p+b(o));
up=tanh(U(u)xp+W(u){tilde over (h)}p+b(u));
fkp=σ(U(f)xp+W(f)hk+b(f));
wherein ip, op and fkp are the input gate, the output gate and the forget gate, respectively; up is the candidate hidden state of xp; the corresponding weight matrices of xp are U(i), U(o), U(u) and U(f); the corresponding weight matrices of {tilde over (h)}p or hk are W(i), W(o), W(u) and W(f); and the bias terms are b(i), b(o), b(u) and b(f).
Obtain the memory cell of the parent word xp, the memory cell is represented as:
cp=ip⊙up+Σk fkp⊙ck;
obtain the hidden state of the parent word xp, the hidden state is represented as:
hp=op⊙ tanh(cp);
wherein ⊙ denotes element-wise multiplication and ck is the memory cell of the kth child word.
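The gate and state updates above can be sketched numerically as follows (an illustrative sketch only; the per-child forget gate and the child-sum memory cell update follow the standard Tree-LSTM form consistent with the formulas above, and the parameter names are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_cell(x_p, h_tilde, child_h, child_c, P):
    """One parent-word update; P holds the weight matrices and bias terms.

    x_p:     (d,)   parent word vector
    h_tilde: (m,)   hidden interaction state of the parent word
    child_h: (K, m) hidden states h_k of the child words
    child_c: (K, m) memory cells c_k of the child words
    """
    i = sigmoid(P["Ui"] @ x_p + P["Wi"] @ h_tilde + P["bi"])    # input gate
    o = sigmoid(P["Uo"] @ x_p + P["Wo"] @ h_tilde + P["bo"])    # output gate
    u = np.tanh(P["Uu"] @ x_p + P["Wu"] @ h_tilde + P["bu"])    # candidate hidden state
    f = sigmoid(child_h @ P["Wf"].T + P["Uf"] @ x_p + P["bf"])  # one forget gate per child
    c = i * u + (f * child_c).sum(axis=0)                       # memory cell of the parent word
    h = o * np.tanh(c)                                          # hidden state of the parent word
    return h, c

rng = np.random.default_rng(0)
d, m, K = 4, 3, 2
P = {"Ui": rng.normal(size=(m, d)), "Wi": rng.normal(size=(m, m)), "bi": np.zeros(m),
     "Uo": rng.normal(size=(m, d)), "Wo": rng.normal(size=(m, m)), "bo": np.zeros(m),
     "Uu": rng.normal(size=(m, d)), "Wu": rng.normal(size=(m, m)), "bu": np.zeros(m),
     "Uf": rng.normal(size=(m, d)), "Wf": rng.normal(size=(m, m)), "bf": np.zeros(m)}
h, c = tree_lstm_cell(rng.normal(size=d), rng.normal(size=m),
                      rng.normal(size=(K, m)), rng.normal(size=(K, m)), P)
print(h.shape, c.shape)  # (3,) (3,)
```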
For example, suppose there is a syntactic parse tree, wherein xl represents the left child word and xr represents the right child word of the parent word xp. If the parent word xp is a non-terminal node (i.e., xp is a zero vector), xl and xr are used as controllers instead of xp. Therefore, based on the above formulas, the hidden interaction states {tilde over (h)}l and {tilde over (h)}r can be obtained separately according to the xl and the xr. Concatenating the {tilde over (h)}l and the {tilde over (h)}r yields the hidden interaction state representation of the parent word, i.e., {tilde over (h)}p=[{tilde over (h)}l:{tilde over (h)}r]. According to the above formulas, the memory cell cp and the hidden state hp of the parent word xp can then be obtained. In the local interaction representation of the child words→parent word, the parent word contains all the information of its child words; therefore, the hidden state of this parent word can be used as the sentence embedding.
In one embodiment, the GIR employs an enumeration-based strategy to apply the attention mechanism to all words in a sentence. After applying the Tree-LSTM module to the n words in a sentence, the hidden representations {h1, h2, . . . , hn} corresponding to the word sequence {x1, x2, . . . , xn} are obtained. Tree-LSTM is similar to an RNN: after the word sequence {x1, x2, . . . , xn} is input into the network, it correspondingly obtains the hidden state representation at each step.
In order to represent the interaction of word xg with the other words in a sentence, the word xg can be regarded as a semantic weight controller over the other words in the {x1, x2, . . . , xn}. A connection is applied between word xg and the other words, i.e., αgk=hgWαhk, wherein αgk is the connective representation of hg and hk (g, k∈{1, 2, . . . , n}). In this way, all the connective representations {αg1, αg2, . . . , αgn} between word xg and the other words can be obtained.
The softmax function maps the original output to the probability space of (0, 1), and the sum of the resulting values equals 1. Supposing there is an array V and Vi represents the i-th element of V, the softmax value of this element is Si=exp(Vi)/Σj exp(Vj).
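For illustration, a numerically stable implementation of this mapping can be sketched as follows:

```python
import numpy as np

def softmax(v):
    """Map an array to the (0, 1) probability space; the outputs sum to 1."""
    e = np.exp(v - np.max(v))   # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # [0.09003057 0.24472847 0.66524096]
```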
Apply the softmax function to the connective representation sequence to calculate the interaction weight of word xk and word xg in the {x1, x2, . . . , xn}:
λgk=exp(αgk)/Σj exp(αgj);
wherein λgk is the interaction weight of word xk and word xg in the {x1, x2, . . . , xn}. Finally, the interaction representation of word xg in the {x1, x2, . . . , xn} is represented as:
rg=Σk λgkhk;
Enumerating all the words in a sentence in this way yields the interaction representation sequence {r1, r2, . . . , rn}. The max-pooling method refers to taking the maximum over a specified dimension. Sentence embedding refers to sentence representation, i.e., representing the sentence as a low-dimensional, dense vector that is convenient for the computer to understand and compute with.
Apply the max-pooling method to the sequence to generate the final sentence embedding s, s=max{r1, r2, . . . , rn}, which completes the definition of the global interaction representation. That is: enumerating all the words in the {x1, x2, . . . , xn}, obtaining the interaction representation sequence {r1, r2, . . . , rn} of the {x1, x2, . . . , xn}, and generating the sentence embedding s=max{r1, r2, . . . , rn}.
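A hedged numerical sketch of the global interaction representation, assuming the bilinear connective form αgk=hgWαhk described above and max-pooling over the interaction representations:

```python
import numpy as np

def global_interaction_representation(H, W_alpha):
    """GIR sketch: H is (n, m) holding the hidden states {h1..hn}; returns the sentence embedding s."""
    n, m = H.shape
    R = np.zeros_like(H)
    for g in range(n):
        alpha = H[g] @ W_alpha @ H.T               # connective representations {alpha_g1..alpha_gn}
        alpha = alpha - alpha.max()                # numerical stability
        lam = np.exp(alpha) / np.exp(alpha).sum()  # interaction weights lambda_gk
        R[g] = lam @ H                             # interaction representation r_g
    return R.max(axis=0)                           # max-pooling: s = max{r1, ..., rn}

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))                        # 5 words, hidden dimension 3
print(global_interaction_representation(H, rng.normal(size=(3, 3))).shape)  # (3,)
```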
In order to capture both the local and the global interactions between words, the LIR and the GIR are integrated to form the hybrid interaction representation (HIR). The HIR first generates a hidden state sequence {h1, h2, . . . , hn} corresponding to the word sequence {x1, x2, . . . , xn} following the steps of the LIR. Then, the HIR applies the GIR process to the hidden state sequence to obtain the final sentence embedding s.
In one embodiment, obtain the predicted label corresponding to the sentence embeddings s:
ŝ=arg maxy∈Y p(y|s);
wherein ŝ∈Y, Y is the class label set; p(y|s)=softmax(W(s)s+b(s)); W(s) and b(s) are the reshape matrix and the bias term, respectively.
Set the loss function:
wherein hi is the hidden state, {tilde over (w)}i is the true class label of word xi, {tilde over (s)} is the true class label of sentence embeddings s. Evaluate the quality of the sentence embeddings s based on the loss function.
In the category prediction process, apply a softmax classifier to the sentence embedding to obtain a predicted label ŝ, wherein ŝ∈Y and Y is the class label set, i.e.:
ŝ=arg maxy∈Y p(y|s);
and
p(y|s)=softmax(W(s)s+b(s));
wherein W(s) and b(s) are the reshape matrix and the bias term, respectively. For the loss function in the formulated HIR, the corresponding losses in LIR and GIR can be combined as follows:
wherein the former loss derives from the LIR and the latter from the GIR; hi is the hidden state, {tilde over (w)}i is the true class label of the word xi in the LIR, and {tilde over (s)} is the true class label of the sentence embedding s in the GIR.
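Since the exact loss formula is not reproduced above, the following is only an assumed sketch of such a combined loss, taking a word-level cross-entropy term for the LIR part and a sentence-level cross-entropy term for the GIR part (PyTorch; the function name and tensor shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

def hir_loss(word_logits, word_labels, sentence_logits, sentence_label):
    """Assumed combined loss: a word-level (LIR) term plus a sentence-level (GIR) term.

    word_logits:     (n, C) class scores computed from the hidden states h_i
    word_labels:     (n,)   true class labels of the words x_i (LIR part)
    sentence_logits: (1, C) class scores computed from the sentence embedding s
    sentence_label:  (1,)   true class label of the sentence (GIR part)
    """
    lir_term = F.cross_entropy(word_logits, word_labels)
    gir_term = F.cross_entropy(sentence_logits, sentence_label)
    return lir_term + gir_term

# Toy usage with random scores and labels (5 classes, 7 words).
loss = hir_loss(torch.randn(7, 5), torch.randint(0, 5, (7,)),
                torch.randn(1, 5), torch.randint(0, 5, (1,)))
print(float(loss))
```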
In order to evaluate the quality of the proposed sentence embedding, consider a sentiment classification task and try to answer the following questions: (RQ1) Can a sentence embedding model combined with the interaction representation improve the performance of sentiment classification? (RQ2) What is the effect of the sentence length on performance?
Compare the performance of the methods provided by the embodiment with other recent recursive-neural-network-based embedding models. The following benchmark models can be used for comparison: (1) LSTM: an embedding model based on long short-term memory networks [6]; (2) Tree-LSTM: an LSTM-based embedding model that incorporates a parse tree. They are compared with the models corresponding to the sentence embedding method proposed by the embodiment: (1) LIR, (2) GIR, and (3) HIR.
Use the Stanford Sentiment Treebank dataset sampled from film reviews. The dataset has five types of labels for each sentence: very negative, negative, neutral, positive, and very positive. In addition, the dataset can discard the neutral sentences to divide the labels into two categories, negative and positive. This dataset can therefore be used for a 2-class classification task or a 5-class classification task. Table 1 below details the statistical characteristics of this dataset. Accuracy (at the sentence level) is used as the evaluation criterion for the discussed models.
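As an illustration of the 2-class setting described above (an assumed label mapping, not taken from the original text):

```python
# Illustrative: collapse the 5-class labels into the binary setting described above.
FIVE_CLASS = {0: "very negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very positive"}

def to_binary(label):
    """Return 0 (negative) or 1 (positive); neutral sentences are discarded (None)."""
    if label in (0, 1):
        return 0
    if label in (3, 4):
        return 1
    return None  # neutral examples are dropped in the 2-class task

print([to_binary(l) for l in (0, 2, 4)])  # [0, None, 1]
```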
For word embeddings, a randomly initialized word embedding matrix We is used, which is learned during the training phase; the word embedding dimension is set to 300. The fixed parameters are set as follows: the batch size is set to 5, i.e., 5 sentences per batch; the hidden vector dimension is set to 150; and the dropout rate is set to 0.5. To initialize the neural network, each matrix is initialized from a normal Gaussian distribution and each bias term is initialized with a zero vector. In addition, the model is trained using the AdaGrad algorithm with a learning rate of 0.05, and the entire training process is set to 15 epochs.
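The stated hyperparameters could be wired up as follows (a sketch only, assuming a PyTorch implementation; the vocabulary size and parameter names are assumptions):

```python
import torch

EMBED_DIM, HIDDEN_DIM = 300, 150     # word embedding and hidden vector dimensions
BATCH_SIZE, DROPOUT = 5, 0.5         # 5 sentences per batch; dropout rate 0.5
EPOCHS, LEARNING_RATE = 15, 0.05     # 15 training epochs; AdaGrad learning rate
VOCAB_SIZE = 20000                   # assumed vocabulary size (not given in the text)

# Randomly initialized word embedding matrix We, learned during training.
We = torch.nn.Embedding(VOCAB_SIZE, EMBED_DIM)

# Gaussian-initialized weight matrix and zero-initialized bias, as described above.
W = torch.nn.Parameter(torch.randn(HIDDEN_DIM, EMBED_DIM))
b = torch.nn.Parameter(torch.zeros(HIDDEN_DIM))

optimizer = torch.optim.Adagrad([W, b, *We.parameters()], lr=LEARNING_RATE)
dropout = torch.nn.Dropout(p=DROPOUT)
```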
In order to answer RQ1, Table 2 presents the experimental results of all the discussed models on the 5-class and 2-class classification tasks, i.e., the accuracy on the sentiment classification task. The best benchmark model and the best performing model in each column are underlined and bolded, respectively.
Among the benchmark models, Tree-LSTM outperforms LSTM, achieving 7.67% and 4.92% accuracy improvements on the 5-class and 2-class classification tasks, which indicates that combining the grammatical structure with the serialized embedding model, i.e., using structured sentence embedding models, better represents text for sentence classification. Models with interaction representations, such as LIR, GIR and HIR, are generally superior to the benchmark models. HIR is the best performing of the proposed models. In the 5-class classification task, HIR achieves a 3.15% accuracy improvement over the best benchmark model Tree-LSTM, and 1.96% and 1.97% improvements over GIR and LIR, respectively. In the 2-class classification task, compared with Tree-LSTM, GIR and LIR, HIR achieves accuracy improvements of 2.03%, 1.48% and 1.78%, respectively. By characterizing the local and global interactions between words, HIR can achieve better sentence embeddings, which is conducive to sentiment classification.
GIR, like HIR, is also superior to Tree-LSTM, achieving a 1.35% improvement on the 5-class classification task and a 0.54% improvement on the 2-class classification task. LIR performs slightly worse than GIR, but still achieves a 1.15% improvement over Tree-LSTM on the 5-class task and a 0.27% improvement on the 2-class task. The difference between LIR and GIR can be explained by the fact that LIR focuses on the interaction between local words but not on the global interaction of all words in a sentence.
In order to answer RQ2, the sentences are manually divided into three groups according to the length of the sentence, namely, short sentences (l∈(0,10)), medium sentences (l∈(10,20)) and long sentences (l∈(20,+∞)). The test results of 5-class classification task and 2-class classification task are drawn on
For both classification tasks, it can be observed that as the length of the sentence increases, the performance of all of the discussed models decreases monotonically. The longer the sentence, the more complex the relationships in the sentence, making it harder to obtain a good sentence embedding. Among the benchmark models, Tree-LSTM is superior to LSTM for each sentence length in the 5-class task. The models proposed by the embodiment generally have an advantage at each sentence length in sentiment classification. For short, medium and long sentences, HIR is 5.94%, 5.86% and 3.10% higher than Tree-LSTM, respectively. A similar phenomenon can also be found in the comparison of LIR, GIR and the baseline models. The advantage of characterizing interactions decreases as the length of the sentence increases.
In the 2-class task, results similar to those of the 5-class task were obtained. Compared to the results in
As illustrated in
A word extraction module 31 obtains a file to be processed and extracts a sentence from the file, wherein the file includes a text file and a webpage file, and obtains n parent words corresponding to n words in the sentence. A child word processing module 32 determines the parent word and the child word set C(p) corresponding to the parent word, and sets hidden states hk and memory cells ck for each child word in the C(p), wherein k∈{1, 2, . . . ,|C(p)|}. A parent word processing module 33 obtains a hidden interaction state {tilde over (h)}p of the parent word based on a hidden interaction state of all child words in the C(p), and inputs the {tilde over (h)}p and the parent word into the LSTM cell to obtain the memory cells and hidden states of the parent word.
A hidden state processing module 34 obtains a sequence {x1, x2, . . . , xn} of parent words corresponding to the n parent words and obtains a hidden state sequence {h1, h2, . . . , hn} corresponding to the {x1, x2, . . . , xn} based on the hidden states of the parent words. A sentence embedding processing module 35 obtains an interaction representation sequence {r1, r2, . . . , rn} of each parent word with the other parent words in the {x1, x2, . . . , xn} based on the {h1, h2, . . . , hn}, and generates sentence embeddings based on the {r1, r2, . . . , rn}.
As illustrated in
the parent word processing module 33 comprises a hidden representation unit, configured to convert the parent word xp into a hidden representation; calculating the word weight of the kth child word of the parent word xp; obtaining a hidden interaction state {tilde over (h)}p of the parent word xp; and inputting the {tilde over (h)}p and the parent word xp into the LSTM cell to obtain the memory cells and hidden states of the parent word xp.
The hidden state extracting unit 333 uses the hidden interaction state {tilde over (h)}p and the parent word xp as the input to the LSTM cell to get:
ip=σ(U(i)xp+W(i){tilde over (h)}p+b(i));
op=σ(U(o)xp+W(o){tilde over (h)}p+b(o));
up=tanh(U(u)xp+W(u){tilde over (h)}p+b(u));
fkp=σ(U(f)xp+W(f)hk+b(f));
wherein ip, op and fkp are the input gate, output gate and forget gate, respectively; up is the candidate hidden state of xp; the corresponding weight matrices of xp are U(i), U(o), U(u) and U(f); the corresponding weight matrices of {tilde over (h)}p or hk are W(i), W(o), W(u) and W(f); and the bias terms are b(i), b(o), b(u) and b(f);
the hidden state extracting unit 333 obtains the memory cell of the parent word xp, the memory cell is represented as:
cp=ip⊙up+Σk fkp⊙ck;
the hidden state extracting unit 333 obtains the hidden state of the parent word xp, the hidden state is represented as:
hp=op⊙ tanh(cp).
The sentence embedding processing module 35 obtains the connective representation sequence {αg1, αg2, . . . , αgn} between the word xg in the {x1, x2, . . . , xn} and the other words, and calculates the interaction weight of word xk and word xg in the {x1, x2, . . . , xn}:
λgk=exp(αgk)/Σj exp(αgj);
the sentence embedding processing module 35 obtains the interaction representation of xg in the {x1, x2, . . . , xn}, which can be represented as:
rg=Σk λgkhk;
the sentence embedding processing module 35 enumerates all the words in the {x1, x2, . . . , xn}, obtaining the interaction representation sequence {r1, r2, . . . , rn} of the {x1, x2, . . . , xn}, generating the sentence embeddings s=max{r1, r2, . . . , rn}.
The quality evaluation module 36 obtains the predicted label corresponding to the sentence embeddings s:
ŝ=arg maxy∈Y p(y|s);
wherein ŝ∈Y, Y is the class label set; p(y|s)=softmax(W(s)s+b(s)); W(s) and b(s) are the reshape matrix and the bias term, respectively; the quality evaluation module 36 sets the loss function:
wherein hi is the hidden state, {tilde over (w)}i is the true class label of word xi, {tilde over (s)} is the true class label of sentence embeddings s; evaluating the quality of the sentence embeddings s based on the loss function.
In one embodiment, as shown in
The memory 51 can be a high-speed RAM, a non-volatile memory, or a memory array. The memory 51 can also be partitioned, and the blocks can be combined into a virtual volume according to certain rules. The processor 52 can be a central processing unit (CPU), an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the text representation method applied to sentence embedding of the embodiment.
In one embodiment, a computer readable storage medium is provided, storing computer program instructions which, when executed by a processor, implement the steps of the aforementioned text representation method applied to sentence embedding.
The device and the text representation method applied to sentence embedding in the above embodiments realize sentence embeddings through a two-level interaction representation, namely a local interaction representation (LIR) and a global interaction representation (GIR). The two levels of representation are combined to generate a hybrid interaction representation (HIR), which can improve the accuracy and efficiency of sentence embeddings and is significantly better than the Tree-LSTM model in terms of accuracy.
The methods and systems of the embodiments can be implemented in a number of ways, for example in software, hardware, firmware, or a combination thereof. The aforementioned order of the steps of the method is for illustrative purposes only; the steps of the method of the embodiments are not limited to the order specifically described above unless specifically stated otherwise. Moreover, in some embodiments, the invention can also be embodied as a program recorded in a recording medium, the program comprising machine readable instructions for implementing the method according to the embodiment. Thus, the embodiment also covers a recording medium storing a program for performing the method according to the embodiment.