The present invention generally relates to a natural language processing system configured to receive an input sequence of input words representing a first sequence of words in a natural language of a first text and to generate an output sequence of output words representing a second sequence of words in a natural language of a second text.
Probabilistic topic models are often used to extract topics from text collections and to predict the probability of each word in a given document belonging to each topic. In doing so, such models learn latent document representations that can be used for natural language processing (NLP) tasks such as information retrieval (IR), document classification or summarization. However, such probabilistic topic models ignore word order and represent a given context as a bag of its words, thereby disregarding semantic information. Examples of such probabilistic topic models are Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan, 2003), Replicated Softmax (RSM) (Salakhutdinov & Hinton, 2009) and the Document Neural Autoregressive Distribution Estimator (DocNADE) (Larochelle & Lauly, 2012; Zheng et al., 2016; Lauly et al., 2017). An example of a word whose meaning depends entirely on the context and the word order is the word “bear” in the following two sentences: “Bear falls into market territory” and “Market falls into bear territory”.
When estimating the probability of a word in a given context, such as “bear” in this example, traditional topic models do not account for language structure since they ignore the word order within the context and are based on bag-of-words (BoW) representations only. In this particular setting, the two sentences have the same unigram statistics but are about different topics. When deciding which topic generated the word “bear” in the second sentence, the preceding words “market falls” make it more likely that it was generated by a topic that assigns a high probability to words related to stock market trading, where “bear territory” is a colloquial expression in the domain. In addition, the language structure (e.g., syntax and semantics) is also ignored by traditional topic models. For instance, the word “bear” in the first sentence is a proper noun and subject, while it is an object in the second. In practice, topic models also ignore functional words such as “into”, which may not be appropriate in some scenarios.
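This identical-unigram claim can be checked directly; the short Python snippet below (using the two example sentences quoted above) builds both bag-of-words representations and confirms they are equal:

```python
from collections import Counter

sentence_1 = "bear falls into market territory"
sentence_2 = "market falls into bear territory"

# Bag-of-words representations: word order is discarded, only counts remain.
bow_1 = Counter(sentence_1.lower().split())
bow_2 = Counter(sentence_2.lower().split())

print(bow_1)            # Counter({'bear': 1, 'falls': 1, 'into': 1, 'market': 1, 'territory': 1})
print(bow_1 == bow_2)   # True -> identical unigram statistics, yet different topics
```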
Recently, Peters et al. (2018) have shown that a language model based on deep contextualized Long Short-Term Memory (LSTM-LM) is able to capture different language concepts in a layer-wise fashion, e.g., the lowest layer captures language syntax and the topmost layer captures semantics. However, in LSTM-LMs the probability of a word is a function of its sentence only, and word occurrences are modelled at a fine granularity. Consequently, LSTM-LMs do not capture semantics at a document level.
Similarly, while bi-gram Latent Dirichlet Allocation (LDA) based topic models (Wallach, 2006; Wang et al., 2007) and n-gram based topic learning (Lauly et al., 2017) can capture word order in short contexts, they are unable to capture long term dependencies and language concepts. In contrast, Document Neural Autoregressive Distribution Estimator (DocNADE) (Larochelle & Lauly, 2012) learns word occurrences across documents and provides a coarse granularity in the sense that the topic assigned to a given word occurrence equally depends on all the other words appearing in the same document. However, since it is based on the Bag of Words (BoW) assumption all language structure is ignored. In language modeling, Mikolov et al. (2010) have shown that recurrent neural networks result in a significant reduction of perplexity over standard n-gram models.
Furthermore, there is a challenge in settings with short texts and few documents. Related work such as Sahami & Heilman (2006) employed web search results to improve the information in short texts and Petterson et al. (2010) introduced word similarity via thesauri and dictionaries into LDA. Das et al. (2015) and Nguyen et al. (2015) integrated word embeddings into LDA and Dirichlet Multinomial Mixture (DMM) (Nigam et al., 2000) models. However, these works are based on LDA-based models without considering language structure, e.g. word order.
Generative models are based on estimating the probability distribution of multidimensional data, implicitly requiring the modeling of complex dependencies. The Restricted Boltzmann Machine (RBM) (Hinton et al., 2006) and its variants (Larochelle and Bengio, 2008) are probabilistic undirected models of binary data. The Replicated Softmax Model (RSM) (Salakhutdinov and Hinton, 2009) and its variants (Gupta et al., 2018b) are generalizations of the RBM that are used to model word counts. However, estimating the complex probability distribution of the underlying high-dimensional observations is intractable. To address this challenge, NADE (Larochelle & Murray, 2011) decomposes the joint distribution of binary observations into autoregressive conditional distributions, each modeled using a feed-forward network. Unlike RBM/RSM, this leads to tractable gradients of the data negative log-likelihood.
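As an illustration of this autoregressive idea, the following minimal Python/NumPy sketch factorizes the joint distribution of a toy binary vector into per-dimension conditionals, each produced by a small feed-forward computation; the dimensions and random parameters are hypothetical and the parameterization is only schematic, not the exact NADE formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 6, 4                              # toy observation size and hidden size (assumed values)
W = rng.normal(scale=0.1, size=(H, D))   # shared input-to-hidden weights
U = rng.normal(scale=0.1, size=(D, H))   # hidden-to-output weights
c = np.zeros(H)                          # hidden bias
b = np.zeros(D)                          # output bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood(v):
    """log p(v) = sum_i log p(v_i | v_<i), each conditional from a feed-forward step."""
    logp = 0.0
    for i in range(D):
        h_i = sigmoid(c + W[:, :i] @ v[:i])   # hidden state computed from the preceding dimensions
        p_i = sigmoid(b[i] + U[i] @ h_i)      # Bernoulli parameter for dimension i
        logp += np.log(p_i if v[i] == 1 else 1.0 - p_i)
    return logp

v = np.array([1, 0, 1, 1, 0, 0])
print(log_likelihood(v))
```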
As an extension of the Neural Autoregressive Distribution Estimator (NADE) and the RSM, the Document Neural Autoregressive Distribution Estimator (DocNADE) (Larochelle & Lauly, 2012; Zheng, Zhang and Larochelle, 2016) models collections of documents as orderless sets of words (BoW approach), thereby disregarding any language structure. It is a probabilistic graphical model that learns topics over sequences of words, corresponding to a language model (Manning and Schütze, 1999; Bengio et al., 2003), and can be interpreted as a neural network with several parallel hidden layers. In other words, it is trained to learn word representations reflecting the underlying topics of the documents only, ignoring syntactic and semantic features such as those encoded in word embeddings (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018).
While this is a powerful approach for incorporating contextual information in particular for long texts and corpora with many documents, learning contextual information remains challenging in topic models with short texts and few documents, due to a) limited word co-occurrences or little context and b) significant word non-overlap in such short texts.
It is therefore an object of the present invention to improve topic modelling for short-text and long-text documents, especially for providing a better estimation of the probability of a word in a given context of a text corpus.
According to a first aspect, the invention provides a language processing system configured for receiving an input sequence ci of input words (v1, v2, . . . , vN) representing a first sequence of words in a natural language of a first text and for generating an output sequence of output words representing a second sequence of words in a natural language of a second text, modeled by a multinomial topic model, wherein the multinomial topic model is extended by an incorporation of the full contextual information around a word vi, wherein both the previous words v<i and the following words v>i around each word vi are captured by using bi-directional language modelling in a feed-forward fashion, wherein position dependent forward hidden layers $\vec{h}_i$ and backward hidden layers $\overleftarrow{h}_i$ are computed for each word vi.
In a preferred embodiment, the multinomial topic model is a neural autoregressive topic model (DocNADE) and the extended multinomial topic model is an iDocNADE model.
Preferably, the forward hidden layers $\vec{h}_i$ and backward hidden layers $\overleftarrow{h}_i$ for each word vi are computed as:

$$\vec{h}_i(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k}\right)$$

$$\overleftarrow{h}_i(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k}\right)$$

where $\vec{c} \in \mathbb{R}^H$ and $\overleftarrow{c} \in \mathbb{R}^H$ are bias parameters in the forward and backward passes and H is the number of hidden topics.
Advantageously, the log-likelihood $\mathcal{L}^{DN}(v)$ for a document v is computed by using forward and backward language models as:

$$\log p(v) = \frac{1}{2} \sum_{i=1}^{D} \Big( \log p(v_i \mid v_{<i}) + \log p(v_i \mid v_{>i}) \Big)$$
In an advantageous embodiment, the iDocNADE model is extended by the incorporation of word embeddings for generating an iDocNADE2 model.
In a further preferred embodiment, the word embeddings are incorporated as a word embedding aggregation at each autoregressive step, i.e., $\sum_{k<i} E_{:,v_k}$, where E is a pre-trained word embedding matrix.
Preferably, the position dependent forward hidden layers $\vec{h}_{i,e}(v_{<i})$ and backward hidden layers $\overleftarrow{h}_{i,e}(v_{>i})$ for each word vi depend now on E as:

$$\vec{h}_{i,e}(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k} + \sum_{k<i} E_{:,v_k}\right)$$

$$\overleftarrow{h}_{i,e}(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k} + \sum_{k>i} E_{:,v_k}\right)$$

and the forward and backward autoregressive conditionals are computed via the hidden vectors $\vec{h}_{i,e}(v_{<i})$ and $\overleftarrow{h}_{i,e}(v_{>i})$.
According to a second aspect, the invention provides a method for processing natural language in a neural system, comprising receiving an input sequence ci of input words (v1, v2, . . . , vN) representing a first sequence of words in a natural language of a first text and generating an output sequence of output words representing a second sequence of words in a natural language of a second text, modeled by a multinomial topic model, comprising the steps of:
In a preferred embodiment, the multinomial topic model is a document neural autoregressive topic model, DocNADE, and the extended multinomial topic model is a document informed autoregressive topic model, iDocNADE.
Preferably, the forward hidden layers $\vec{h}_i$ and backward hidden layers $\overleftarrow{h}_i$ for each word vi are computed as:

$$\vec{h}_i(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k}\right)$$

$$\overleftarrow{h}_i(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k}\right)$$

where $\vec{c} \in \mathbb{R}^H$ and $\overleftarrow{c} \in \mathbb{R}^H$ are bias parameters in the forward and backward passes and H is the number of hidden topics.
Advantageously, the log-likelihood $\mathcal{L}^{DN}(v)$ for a document v is computed by using forward and backward language models as:

$$\log p(v) = \frac{1}{2} \sum_{i=1}^{D} \Big( \log p(v_i \mid v_{<i}) + \log p(v_i \mid v_{>i}) \Big)$$
Preferably, the iDocNADE model is extended by the incorporation of word embeddings for generating an iDocNADE2 model.
In a further preferred embodiment, the word embeddings are incorporated as a word embedding aggregation at each autoregressive step, i.e., $\sum_{k<i} E_{:,v_k}$, where E is a pre-trained word embedding matrix.
Advantageously, the position dependent forward hidden layers $\vec{h}_{i,e}(v_{<i})$ and backward hidden layers $\overleftarrow{h}_{i,e}(v_{>i})$ for each word vi depend now on E as:

$$\vec{h}_{i,e}(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k} + \sum_{k<i} E_{:,v_k}\right)$$

$$\overleftarrow{h}_{i,e}(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k} + \sum_{k>i} E_{:,v_k}\right)$$

and the forward and backward autoregressive conditionals are computed via the hidden vectors $\vec{h}_{i,e}(v_{<i})$ and $\overleftarrow{h}_{i,e}(v_{>i})$.
According to a third aspect, the invention provides a computer program product comprising executable program code configured to, when executed, perform the method according to the second aspect.
Additional features, aspects and advantages of the invention or of its embodiments will become apparent on reading the detailed description in conjunction with the accompanying figures.
In the following description, for purposes of explanation and not limitation, specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced in other implementations that depart from these specific details.
In FIG. 1, the DocNADE model 100 is illustrated. For a document v = (v1, . . . , vD) of size D, where each word index vi takes a value in {1, . . . , K} of a vocabulary of size K, DocNADE 100 computes, for each word vi, a hidden layer hi(v<i) and an autoregressive conditional based on the preceding words:

$$h_i(v_{<i}) = g\left(e + \sum_{k<i} W_{:,v_k}\right) \quad (1)$$

$$p(v_i = w \mid v_{<i}) = \frac{\exp\left(b_w + U_{w,:}\, h_i(v_{<i})\right)}{\sum_{w'} \exp\left(b_{w'} + U_{w',:}\, h_i(v_{<i})\right)} \quad (2)$$

where g(x) is an activation function, $U \in \mathbb{R}^{K \times H}$ is a weight matrix connecting hidden units to output, $e \in \mathbb{R}^{H}$ and $b \in \mathbb{R}^{K}$ are bias vectors, and $W \in \mathbb{R}^{H \times K}$ is a word representation matrix in which a column $W_{:,v_i}$ is a vector representation of the word vi in the vocabulary. The log-likelihood of the document v is then:

$$\mathcal{L}^{DocNADE}(v) = \sum_{i=1}^{D} \log p(v_i \mid v_{<i}) \quad (3)$$
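A minimal NumPy sketch of the DocNADE computations in equations (1) to (3) is given below; the vocabulary size, hidden size, tanh activation and random parameters are assumed placeholder values rather than trained model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H = 1000, 50                           # vocabulary size and number of hidden topics (assumed values)
W = rng.normal(scale=0.01, size=(H, K))   # word representation matrix, one column per vocabulary word
U = rng.normal(scale=0.01, size=(K, H))   # hidden-to-output weights
e = np.zeros(H)                           # hidden bias
b = np.zeros(K)                           # output bias
g = np.tanh                               # activation function

def log_likelihood_docnade(v):
    """Equation (3): sum of autoregressive conditionals log p(v_i | v_<i) for a document v (word indices)."""
    logp = 0.0
    for i, w in enumerate(v):
        h_i = g(e + W[:, v[:i]].sum(axis=1))   # equation (1): hidden layer from the preceding words
        logits = b + U @ h_i                   # equation (2): softmax over the vocabulary
        log_softmax = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
        logp += log_softmax[w]
    return logp

doc = np.array([12, 7, 345, 12, 980])   # hypothetical document given as vocabulary indices
print(log_likelihood_docnade(doc))
```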
The past word observations v<i are orderless due to the BoW assumption and may not correspond to the words actually preceding the ith word in the document itself.
To predict the word vi, each hidden layer hi takes as input the sequences of preceding words v<i. However, it does not take into account the following words v>i in the sequence.
In FIG. 2, the iDocNADE model 200 according to an embodiment of the invention is illustrated.
iDocNADE 200 accounts for the full context information by capturing both the previous words v<i and the following words v>i around each word vi. Therefore, the log-likelihood $\mathcal{L}^{DN}(v)$ for a document v is computed by using forward and backward language models as:

$$\log p(v) = \frac{1}{2} \sum_{i=1}^{D} \Big( \log p(v_i \mid v_{<i}) + \log p(v_i \mid v_{>i}) \Big)$$

i.e., the mean of the forward and backward log-likelihoods. This is achieved in a bi-directional language modeling and feed-forward fashion by computing position dependent forward hidden layers $\vec{h}_i$ and backward hidden layers $\overleftarrow{h}_i$ for each word vi, as:
$$\vec{h}_i(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k}\right)$$

$$\overleftarrow{h}_i(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k}\right)$$

where $\vec{c} \in \mathbb{R}^H$ and $\overleftarrow{c} \in \mathbb{R}^H$ are bias parameters in the forward and backward passes, respectively, and H is the number of hidden units (topics).
Two autoregressive conditionals are computed for each ith word using the forward and backward hidden vectors:

$$p(v_i = w \mid v_{<i}) = \frac{\exp\left(\vec{b}_w + U_{w,:}\, \vec{h}_i(v_{<i})\right)}{\sum_{w'} \exp\left(\vec{b}_{w'} + U_{w',:}\, \vec{h}_i(v_{<i})\right)}$$

$$p(v_i = w \mid v_{>i}) = \frac{\exp\left(\overleftarrow{b}_w + U_{w,:}\, \overleftarrow{h}_i(v_{>i})\right)}{\sum_{w'} \exp\left(\overleftarrow{b}_{w'} + U_{w',:}\, \overleftarrow{h}_i(v_{>i})\right)}$$

for i ∈ [1, . . . , D], where $\vec{b} \in \mathbb{R}^K$ and $\overleftarrow{b} \in \mathbb{R}^K$ are biases in the forward and backward passes, respectively. The parameters W and U are shared between the two networks.
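The bi-directional computation described above can be sketched as follows (same placeholder setup as before, full softmax conditionals for clarity rather than the probabilistic tree discussed further below, and the bias scaling $D\vec{c}$, $D\overleftarrow{c}$ taken over from the equations above):

```python
import numpy as np

rng = np.random.default_rng(1)
K, H = 1000, 50                           # hypothetical vocabulary and hidden sizes
W = rng.normal(scale=0.01, size=(H, K))   # shared word representation matrix
U = rng.normal(scale=0.01, size=(K, H))   # shared hidden-to-output weights
c_fwd, c_bwd = np.zeros(H), np.zeros(H)   # forward / backward hidden biases
b_fwd, b_bwd = np.zeros(K), np.zeros(K)   # forward / backward output biases
g = np.tanh

def log_softmax(logits):
    m = logits.max()
    return logits - m - np.log(np.exp(logits - m).sum())

def log_likelihood_idocnade(v):
    """Mean of forward and backward autoregressive log-likelihoods for a document v."""
    D = len(v)
    logp_fwd = logp_bwd = 0.0
    for i, w in enumerate(v):
        h_fwd = g(D * c_fwd + W[:, v[:i]].sum(axis=1))       # forward hidden layer from words v_<i
        h_bwd = g(D * c_bwd + W[:, v[i + 1:]].sum(axis=1))   # backward hidden layer from words v_>i
        logp_fwd += log_softmax(b_fwd + U @ h_fwd)[w]
        logp_bwd += log_softmax(b_bwd + U @ h_bwd)[w]
    return 0.5 * (logp_fwd + logp_bwd)

doc = np.array([12, 7, 345, 12, 980])
print(log_likelihood_idocnade(doc))
```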
In FIG. 3, the DocNADE2 model 300, which extends DocNADE 100 with a word embedding prior, is illustrated.
By introducing additional semantic information for each word into DocNADE-like models via its pre-trained embedding vector, better textual representations and semantically more coherent topic distributions can be enabled, in particular for short texts. DocNADE 100 is therefore extended with a word embedding aggregation at each autoregressive step, i.e., $\sum_{k<i} E_{:,v_k}$, where E is the pre-trained embedding matrix. The position dependent forward and backward hidden layers then become:
$$\vec{h}_{i,e}(v_{<i}) = g\left(D\vec{c} + \sum_{k<i} W_{:,v_k} + \sum_{k<i} E_{:,v_k}\right)$$

$$\overleftarrow{h}_{i,e}(v_{>i}) = g\left(D\overleftarrow{c} + \sum_{k>i} W_{:,v_k} + \sum_{k>i} E_{:,v_k}\right)$$

The forward and backward autoregressive conditionals are computed via the hidden vectors $\vec{h}_{i,e}(v_{<i})$ and $\overleftarrow{h}_{i,e}(v_{>i})$, respectively.
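Relative to the previous sketch, only the hidden layer computation changes when the embedding aggregation is added; the matrix E below is a random stand-in for a fixed, pre-trained embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
K, H = 1000, 50                           # hypothetical sizes; E must map into the hidden space
W = rng.normal(scale=0.01, size=(H, K))
E = rng.normal(scale=0.01, size=(H, K))   # stand-in for a fixed, pre-trained embedding matrix (one column per word)
c_fwd, c_bwd = np.zeros(H), np.zeros(H)
g = np.tanh

def hidden_layers_with_prior(v, i, D):
    """Forward/backward hidden layers with word embedding aggregation added to the DocNADE terms."""
    h_fwd = g(D * c_fwd + W[:, v[:i]].sum(axis=1) + E[:, v[:i]].sum(axis=1))
    h_bwd = g(D * c_bwd + W[:, v[i + 1:]].sum(axis=1) + E[:, v[i + 1:]].sum(axis=1))
    return h_fwd, h_bwd

doc = np.array([12, 7, 345, 12, 980])
h_fwd, h_bwd = hidden_layers_with_prior(doc, i=2, D=len(doc))
print(h_fwd.shape, h_bwd.shape)   # (50,) (50,)
```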
Furthermore, DocNADE 100 can be extended to a deep, multiple hidden layer architecture by adding new hidden layers as in a regular deep feed-forward neural network, allowing for improved performance. In this deep version of DocNADE variants, the first hidden layers are computed in an analogous fashion to iDocNADE.
Similar to DocNADE 100, the conditionals p(vi=w|v<i) and p(vi=w|v>i) in DocNADE2 300, iDocNADE 200 or iDocNADE2 are computed by a neural network for each word vi, allowing efficient learning of informed representations $\vec{h}_i$ and $\overleftarrow{h}_i$ (or $\vec{h}_{i,e}(v_{<i})$ and $\overleftarrow{h}_{i,e}(v_{>i})$), as it consists simply of a linear transformation followed by a non-linearity. The weight matrix W or prior embedding matrix E is the same across all conditionals.
To compute the likelihood of a document, the autoregressive conditionals p(vi=w|v<i) and p(vi=w|v>i) are computed for each word vi. A probabilistic tree may be used for the computation of the conditionals. All words in the documents are randomly assigned to a different leaf in a binary tree and the probability of a word is computed as the probability of reaching its associated leaf from the root. Each left/right transition probability is modeled using a binary logistic regressor with the hidden layer $\vec{h}_i$ or $\overleftarrow{h}_i$ (or $\vec{h}_{i,e}(v_{<i})$ or $\overleftarrow{h}_{i,e}(v_{>i})$) as its input. In the binary tree, the probability of a given word is computed by multiplying each of the left/right transition probabilities along the tree path.
Algorithm 1 shows the computation of log p(v) using the iDocNADE (or iDocNADE2) structure, where the autoregressive conditionals (lines 14 and 15) for each word vi are obtained from the forward and backward networks and modeled via a binary word tree, where π(vi) denotes the sequence of binary left/right choices at the internal nodes along the tree path and l(vi) the sequence of tree nodes on that tree path. For instance, l(vi)1 will always be the root of the binary tree and π(vi)1 will be 0 if the leaf of word vi is in the left subtree or 1 otherwise. Therefore, each of the forward and backward conditionals is computed as:
$$p(v_i = w \mid v_{<i}) = \prod_{m=1}^{|\pi(v_i)|} p(\pi(v_i)_m \mid v_{<i})$$

$$p(v_i = w \mid v_{>i}) = \prod_{m=1}^{|\pi(v_i)|} p(\pi(v_i)_m \mid v_{>i})$$

$$p(\pi(v_i)_m \mid v_{<i}) = g\left(\vec{b}_{l(v_i)_m} + U_{l(v_i)_m,:}\, \vec{h}_i(v_{<i})\right)$$

$$p(\pi(v_i)_m \mid v_{>i}) = g\left(\overleftarrow{b}_{l(v_i)_m} + U_{l(v_i)_m,:}\, \overleftarrow{h}_i(v_{>i})\right)$$

where $U \in \mathbb{R}^{T \times H}$ is the matrix of logistic regression weights, T is the number of internal nodes in the binary tree, and $\vec{b}$ and $\overleftarrow{b}$ are bias vectors.
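A small sketch of one such tree-structured conditional is shown below; the tree path, its node indices and all dimensions are hypothetical toy values, and the logistic sigmoid plays the role of g for the binary left/right regressors:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
H, T = 50, 7                                   # hidden size and number of internal tree nodes (toy values)
U_tree = rng.normal(scale=0.01, size=(T, H))   # logistic regression weights, one row per internal node
b_tree = np.zeros(T)                           # biases of the internal nodes

# Hypothetical tree path for one word: internal node indices l(v_i) and left/right choices pi(v_i)
# (0 = go left, 1 = go right) from the root down to the word's leaf.
path_nodes = [0, 2, 5]      # l(v_i)
path_choices = [1, 0, 1]    # pi(v_i)

def tree_conditional(h_i):
    """p(v_i = w | context) as the product of left/right transition probabilities along the tree path."""
    prob = 1.0
    for node, choice in zip(path_nodes, path_choices):
        p_right = sigmoid(b_tree[node] + U_tree[node] @ h_i)   # probability of branching right at this node
        prob *= p_right if choice == 1 else (1.0 - p_right)
    return prob

h_i = rng.normal(size=H)    # a forward or backward hidden vector computed as above
print(tree_conditional(h_i))
```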
Each of the forward and backward conditionals p(vi=w|v<i) and p(vi=w|v>i) requires the computation of its own hidden layer $\vec{h}_i$ or $\overleftarrow{h}_i$. With H being the size of each hidden layer and D the number of words in v, computing a single hidden layer requires O(HD), and since there are D hidden layers to compute, a naïve approach for computing all hidden layers would be in O(D²H). However, since the weights in the matrix W are tied, the linear activations $\vec{a}$ and $\overleftarrow{a}$ (Algorithm 1) can be re-used in every hidden layer and the computational complexity reduces to O(HD).
In Algorithm 1, the backward linear activation $\overleftarrow{a}$ is initialized with the contributions $\sum_{i>1} W_{:,v_i}$ of all words after the first one; at each position i the hidden layers are obtained as $\vec{h}_i \leftarrow g(\vec{a})$ and $\overleftarrow{h}_i \leftarrow g(\overleftarrow{a})$, and the activations are then updated incrementally (adding to $\vec{a}$ the column $W_{:,v_i}$ of the word just processed and subtracting from $\overleftarrow{a}$ the column of the next word) instead of being recomputed from scratch.
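The incremental activation re-use can be sketched as follows (a simplified reading of Algorithm 1 with the same placeholder setup as the earlier sketches and a full softmax in place of the tree-structured conditionals):

```python
import numpy as np

rng = np.random.default_rng(4)
K, H = 1000, 50
W = rng.normal(scale=0.01, size=(H, K))
U = rng.normal(scale=0.01, size=(K, H))
c_fwd, c_bwd = np.zeros(H), np.zeros(H)
b_fwd, b_bwd = np.zeros(K), np.zeros(K)
g = np.tanh

def log_softmax(logits):
    m = logits.max()
    return logits - m - np.log(np.exp(logits - m).sum())

def log_p_incremental(v):
    """log p(v) with O(HD) activation updates: a_fwd/a_bwd are re-used across positions."""
    D = len(v)
    a_fwd = D * c_fwd                            # contributions of words v_<i (empty at the first position)
    a_bwd = D * c_bwd + W[:, v[1:]].sum(axis=1)  # contributions of words v_>i (all but the first word)
    logp = 0.0
    for i, w in enumerate(v):
        h_fwd, h_bwd = g(a_fwd), g(a_bwd)
        logp += 0.5 * (log_softmax(b_fwd + U @ h_fwd)[w] + log_softmax(b_bwd + U @ h_bwd)[w])
        a_fwd += W[:, w]                         # word v_i now belongs to the forward context
        if i + 1 < D:
            a_bwd -= W[:, v[i + 1]]              # word v_{i+1} leaves the backward context
    return logp

doc = np.array([12, 7, 345, 12, 980])
print(log_p_incremental(doc))
```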
The modeling approaches according to the present invention have been applied to 8 short-text and 6 long-text datasets from different domains. With the learned representations, a gain of 8.4% in perplexity, 8.8% in precision at retrieval fraction and 5.2% in text categorization could be achieved compared to the DocNADE model.
Distributional word representations (i.e., word embeddings) have been shown to capture both semantic and syntactic relatedness of words and have demonstrated impressive performance in natural language processing (NLP) tasks. For example, the two short text fragments “Goldman shares drop sharply downgrade” and “Falling market homes weaken economy” lie close to each other in embedding space as they both relate to the economy. However, traditional topic models are not able to infer the relatedness of word pairs across the sentences, such as (economy, shares), due to the lack of word overlap between the sentences.
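The kind of cross-sentence relatedness meant here can be illustrated with cosine similarity over word vectors; the vectors below are toy stand-ins for real pre-trained embeddings such as GloVe or word2vec:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-in vectors; real pre-trained embeddings would be loaded from disk.
emb = {
    "economy": np.array([0.9, 0.8, 0.1]),
    "shares":  np.array([0.8, 0.7, 0.2]),
    "falling": np.array([0.2, 0.1, 0.9]),
}

print(cosine(emb["economy"], emb["shares"]))    # close in embedding space despite no word overlap
print(cosine(emb["economy"], emb["falling"]))   # comparatively lower similarity
```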
Therefore, according to the present invention, word embeddings are incorporated as a fixed prior in neural topic models in order to introduce complementary information. The proposed neural architectures learn task-specific word vectors in association with static embedding priors, leading to better text representations for topic extraction, information retrieval and classification, especially in document management. Common natural language processing tasks such as document modeling, generalization, document information retrieval, document representation learning and topic modeling can therefore be improved by the present invention. Furthermore, industrial documents such as contract documents and service reports can be analyzed, and recommendations regarding replacement, amendments, inspection, repair, etc. can be provided. The semantics encoded via distributed document representations help in analyzing contract documents so that, for example, similarities between contract documents can be found, or topic extraction and text retrieval can be performed more easily.
Furthermore, the presented invention can be used within artificial intelligence and deep learning frameworks and allows an expert or technician to interact with and qualitatively analyze the machine learning system, so that the workflow and the system output can be optimized and improved.