The present disclosure relates to Natural Language Processing. More specifically, the present disclosure relates to a method and an apparatus for full natural language parsing.
Parsing has been pursued with tremendous effort in the Natural Language Processing (NLP) community. Since the introduction of lexicalized probabilistic context-free grammar (PCFG) parsers, improvements have been achieved over the years, but generative PCFG parsers of the last decade still remain standard benchmarks. Given the success of discriminative learning algorithms for classical NLP tasks (Part-Of-Speech (POS) tagging, Named Entity Recognition, Chunking . . . ), the generative nature of such parsers has been questioned. At first, discriminative parsing algorithms did not reach the performance of standard PCFG-based generative parsers. The parser reported in DISCRIMINATIVE TRAINING OF A NEURAL NETWORK STATISTICAL PARSER, by J. Henderson, outperforms the parser reported in HEAD-DRIVEN STATISTICAL MODELS FOR NATURAL LANGUAGE PARSING, by M. Collins, only by using a generative model and performing re-ranking. The purely discriminative parsers reported in MAX-MARGIN PARSING, by B. Taskar et al., and ADVANCES IN DISCRIMINATIVE PARSING, by J. Turian et al., finally reached Collins' parser performance, with various simple template features. However, these parsers are slow to train and are limited to sentences with fewer than 15 words. Most recent discriminative parsers are based on Conditional Random Fields (CRFs) with PCFG-like features.
Accordingly, there is a need for a fast discriminative parser which does not rely on information extracted from PCFGs or on most classical parsing features.
A method is disclosed herein for generating a linguistic parse tree for a sentence. The method comprises the steps of: predicting in a computer process a first level of chunk tags for the sentence; and predicting in a computer process at least a second level of chunk tags for the sentence using the first level or a previous level of chunk tags.
Also disclosed is an apparatus for generating a linguistic parse tree for a sentence. The apparatus comprises a processor executing instructions for predicting a first level of chunk tags for the sentence, and predicting at least a second level of chunk tags for the sentence using the first level or a previous level of chunk tags.
Further disclosed is an apparatus for generating a linguistic parse tree for a sentence. The apparatus comprises a Graph Transformer Network (GTN) for predicting a first level of chunk tags for the sentence, and predicting at least a second level of chunk tags for the sentence using the first level or a previous level of chunk tags.
Many Natural Language Processing (NLP) tasks involve finding chunks of words in a sentence, which can be viewed as a tagging task. For instance, "chunking" is a task related to parsing in which a label is obtained for the lowest parse tree node where a word ends up. For the parse tree shown in the figure, chunk tags can be obtained using an IOBES (inside, outside, beginning, end, and single) tagging scheme to label chunk boundaries, i.e., to identify the location of the current word in the chunk. The "S-NP" chunk tag is used to label the noun phrase containing the single word "stocks," the "B-VP" chunk tag is used to label the first word "kept" of the verb phrase "kept falling," and the "E-VP" chunk tag is used to label the last word "falling" of the verb phrase "kept falling." The IOBES tagging scheme also includes other chunk tags comprising, without limitation, the "B-NP" chunk tag for labeling the first word of a noun phrase, the "I-NP" chunk tag for labeling an intermediate word of a noun phrase, and the "E-NP" chunk tag for labeling the last word of a noun phrase. The "O" chunk tag is for labeling words that are not members of any chunk.
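As a concrete illustration (a minimal sketch, not part of the disclosed apparatus), the following Python snippet recovers chunks from an IOBES-tagged word sequence using the example tags above; the helper name `iobes_to_chunks` is ours.

```python
def iobes_to_chunks(words, tags):
    """Recover (label, word_span) chunks from IOBES chunk tags.

    'S-X' marks a single-word chunk, 'B-X'..'E-X' delimit a multi-word
    chunk, 'I-X' marks words inside it, and 'O' marks words outside any chunk.
    """
    chunks, start = [], None
    for i, tag in enumerate(tags):
        if tag == "O":
            continue
        marker, label = tag.split("-", 1)
        if marker == "S":
            chunks.append((label, words[i:i + 1]))
        elif marker == "B":
            start = i
        elif marker == "E" and start is not None:
            chunks.append((label, words[start:i + 1]))
            start = None
    return chunks

# The example from the text: "stocks kept falling"
print(iobes_to_chunks(["stocks", "kept", "falling"],
                      ["S-NP", "B-VP", "E-VP"]))
# -> [('NP', ['stocks']), ('VP', ['kept', 'falling'])]
```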
Instead of building the linguistic parse tree in a conventional top-down fashion (building from the root), the discriminative parsing method and apparatus of the present disclosure views a parse tree as levels of chunk tags and thus generates each level of the parse tree from the bottom up, i.e., the number of chunk tags decreases at each level moving from the leaves to the root, with the chunk tags spanning longer segments of the sentence. Each level of the parse tree has tags that span several consecutive tags in the previous level. Eventually a level is reached that has only one tag covering the entire sentence. A full parse tree is realized by connecting each chunk tag to the chunk tags in the previous level.
In block 110, a first level of chunk tags is predicted for the sentence. The first level of chunk tags is predicted using features including, without limitation, a lookup table of the words and the part-of-speech of the words.
In block 115, a determination is made as to whether the level of chunk tags predicted in block 110 has only one chunk tag that spans the entire sentence. If the level predicted in block 110 has only one chunk tag that spans the entire sentence, then a completed parse tree is outputted in block 140 and the method ends. If not, the method moves to block 120.
In block 120, one or more levels of chunk tags are predicted for the sentence from the first/previous level of chunk tags. The first/previous level of chunk tags is used to provide features and constraints for predicting the chunk tags of the current level.
In block 130, a determination is made as to whether the level of chunk tags predicted in block 120 has only one chunk tag that spans the entire sentence. If so, then a completed parse tree is outputted in block 140 and the method ends. If, however, the level predicted in block 120 has more than one chunk tag, then the method returns to block 120, where another level of chunk tags is predicted from the previous level of chunk tags, and this level is evaluated in block 130 to determine whether it has only one chunk tag that spans the entire sentence. Blocks 120 and 130 are performed until a level is reached that has only one tag covering the entire sentence.
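A minimal sketch of the control flow of blocks 110-140 is given below, assuming a hypothetical `predict_level(sentence, pos_tags, levels)` tagger that returns one level of chunks; the function name, signature, and the (label, (start, end)) chunk representation are ours, not the disclosed implementation. The sketch also reflects the stopping criteria discussed further below (at most N levels for N words; stop if no new chunks are found).

```python
def spans_whole_sentence(level_chunks, sentence_len):
    """True when the level contains a single chunk covering every word."""
    return len(level_chunks) == 1 and level_chunks[0][1] == (0, sentence_len)

def parse_bottom_up(sentence, pos_tags, predict_level):
    """Build a parse tree as successive levels of chunks (blocks 110-140).

    `predict_level` stands in for the disclosed tagger: it maps the words,
    their POS tags, and the previously predicted levels to a list of
    (label, (start, end)) chunks for the current level.
    """
    levels = []
    for _ in range(len(sentence)):          # at most N levels for N words
        level = predict_level(sentence, pos_tags, levels)
        if not level:                        # no new chunks (all tags "O")
            break
        levels.append(level)
        if spans_whole_sentence(level, len(sentence)):
            break                            # block 140: parse tree complete
    return levels
```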
The method of
The tagging process fits naturally into the recursive definition of the parse tree levels. However, the predicted tags must correspond to a parse tree as described above with respect to the parse tree of
Accordingly, tree nodes spanning the same words at several consecutive levels are first replaced by one node in the whole training set. The label of this new node is the concatenation of the replaced node labels, as illustrated in
Constraint 1: Any chunk at level i overlapping a chunk at level j<i must span at least this overlapped chunk, and be larger.
As a result, the iterative tagging process described above will generate a chunk of size N in at most N levels, given a sentence of N words. At this time, the iterative loop is stopped, and the full tree can be deduced. The process might also be stopped if no new chunks are found (all tags are O). Assuming the tree pre-processing has been performed, this method can be used with any tagger that could handle a history of labels and tagging constraints. Even though the tagging process is greedy because there is no global inference of the tree, it performs surprisingly well.
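A minimal sketch of the tree pre-processing step described above is shown below, assuming a simple nested-tuple tree representation (label, children) of our own choosing; a chain of nodes spanning exactly the same words collapses into one node whose label is the concatenation of the original labels (the "#" separator is an assumption).

```python
def collapse_unary_chains(tree, sep="#"):
    """Merge nodes that span exactly the same words at consecutive levels.

    A tree is either a word (str) or a (label, [children]) tuple.  A node
    with a single non-terminal child spans the same words as that child,
    so the two labels are concatenated into one node.
    """
    if isinstance(tree, str):
        return tree
    label, children = tree
    # Collapse chains of single non-terminal children.
    while len(children) == 1 and not isinstance(children[0], str):
        child_label, children = children[0]
        label = label + sep + child_label
    return (label, [collapse_unary_chains(c) for c in children])

tree = ("S", [("NP", ["stocks"]), ("VP", [("VP", ["kept", "falling"])])])
print(collapse_unary_chains(tree))
# -> ('S', [('NP', ['stocks']), ('VP#VP', ['kept', 'falling'])])
```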
A look-up table module 402 comprises a plurality of look-up tables which assign a “latent-feature” vector for each feature. The values in these vectors have latent semantic meanings to aid the parsing process. The vectors are concatenated into one feature vector per word, and are inputted to the CNN module 403.
The CNN module 403 is applied on every window of words (given a fixed window size) and outputs probabilities (scores) for each tag for the word in the middle of the window. The CNN module 403 applies a filter matrix M to a sliding window of words. If, for example but not limitation, the window size is set to 3, then for each word, the lookup table entries (vectors) of the word before it, the word itself, and the word after it are concatenated into a single vector. The filter matrix M is then applied to the concatenated vector.
Note that "padding" features are added to the beginning and the end of the sentence so that the first several words and the last several words can be in the center of their windows. Padding refers to the process of placing a "fake" word before and after the sentence to ensure that every word to be tagged in the sentence is in the middle of its window when the CNN module 403 is applied. For example but not limitation, if the lookup table module 402 has a window size of 3, then three consecutive words are concatenated and the word to be tagged is in the middle of this window. For the first word to be tagged, a fake word must be inserted before it so that the first word is in the middle when the 3-word window is applied. The same process is performed for the last word of the sentence. The fake words have their own lookup table entries. In another example, if the window size is 5, then two padding words are added at the beginning and at the end. In general, the number of padding words is (n−1)/2, where n is the size of the sliding window and n is an odd number.
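A minimal sketch of the windowing and padding described above follows, with a toy lookup table standing in for module 402; the vocabulary and the 2-dimensional vectors are illustrative assumptions, not trained values.

```python
import numpy as np

def word_windows(words, window_size, pad="PADDING"):
    """Return, for each word, the window of `window_size` words centered on it,
    padding the sentence with (window_size - 1) // 2 fake words on each side."""
    n_pad = (window_size - 1) // 2
    padded = [pad] * n_pad + list(words) + [pad] * n_pad
    return [padded[i:i + window_size] for i in range(len(words))]

# Toy lookup table (module 402): one small vector per word, plus the padding word.
lookup = {
    "stocks":  np.array([0.1, -0.3]),
    "kept":    np.array([0.4,  0.2]),
    "falling": np.array([-0.2, 0.5]),
    "PADDING": np.array([0.0,  0.0]),
}

for window in word_windows(["stocks", "kept", "falling"], window_size=3):
    # Concatenate the lookup vectors of the window into one input vector
    # for the CNN module 403 (which then applies its filter matrix M).
    x = np.concatenate([lookup[w] for w in window])
    print(window, x)
```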
The graph module 404 enforces the dependency of the parsing tags of neighboring words using the Viterbi algorithm. The graph module 404 calculates the likelihood score of a possible label sequence by combining the scores of nodes provided by the CNN module 403 with additional transition scores for the edges of a graph. The score for an edge is defined such that, for example, an edge between a node labeled "NP" and a following node labeled "VP" has a higher score than an edge between two nodes both labeled "VP," because two verb phrases in a row are less likely. All network and graph parameters are trained in an end-to-end way with stochastic gradient, maximizing a graph likelihood. The GTN-based tagger 400 runs recursively to generate the parse tree level by level. For example, given the text in block 301, the three levels of parse trees 302, 303, and 304 are built by running the tagger 400 three times.
The following discussion describes in detail the method performed by the tagger 400.
Consider a fixed-sized word dictionary $W$, where unknown words are mapped to a special "UNKNOWN" word, and where numbers are mapped to a "NUMBER" word. Given a sentence of $N$ words $\{w_1, w_2, \ldots, w_N\}$, each word $w_n \in W$ is first embedded into a $D$-dimensional vector space by applying a lookup-table operation:

$LT_W(w_n) = W_{w_n} \qquad (1)$
where the matrix $W \in \mathbb{R}^{D \times |W|}$ represents the parameters to be trained in this lookup layer. Each column $W_n \in \mathbb{R}^{D}$ corresponds to the embedding of the nth word in the dictionary $W$. In view of the matrix-vector notation in equation (1), the lookup-table applied over the sentence can be seen as an efficient implementation of a convolution with a kernel width of size 1.
In practice, a word should be represented with more than one feature. In one embodiment, at least the lower-case word and a "caps" feature are taken: $w_n = (w_n^{lowcaps}, w_n^{caps})$. In this embodiment, a different lookup-table is applied for each discrete feature, and the latent feature vector of a word is the concatenation of the outputs of all its lookup-tables:

$LT_{W^{lowcaps},\,W^{caps}}(w_n) = \big(LT_{W^{lowcaps}}(w_n^{lowcaps})^T,\ LT_{W^{caps}}(w_n^{caps})^T\big)^T$
For simplicity, the remainder of the description considers only one lookup-table.
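A minimal sketch of the lookup-table operation as column selection, with a separate table for the "caps" feature, is shown below; the dictionary, the caps categories, the embedding sizes, and the random initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Word dictionary with the special entries described in the text.
words = ["UNKNOWN", "NUMBER", "PADDING", "stocks", "kept", "falling"]
word_index = {w: i for i, w in enumerate(words)}
caps_index = {"lower": 0, "capitalized": 1, "allcaps": 2, "other": 3}

D_word, D_caps = 5, 2                                      # embedding sizes
W_words = rng.standard_normal((D_word, len(word_index)))   # trainable
W_caps = rng.standard_normal((D_caps, len(caps_index)))    # trainable

def lookup(word):
    """Per-word latent feature vector: concatenation of the outputs of the
    low-caps word lookup-table and the caps-feature lookup-table."""
    lowcaps = word.lower()
    idx = word_index.get(lowcaps, word_index["UNKNOWN"])
    caps = "allcaps" if word.isupper() else (
        "capitalized" if word[:1].isupper() else "lower")
    return np.concatenate([W_words[:, idx], W_caps[:, caps_index[caps]]])

print(lookup("Stocks").shape)   # (7,) = D_word + D_caps
```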
Scores for all tags in $T$ and all words in the sentence are produced by applying the convolutional neural network over the lookup-table embeddings of equation (1). More precisely, all successive windows of text (of size $K$) are considered, sliding over the sentence from position 1 to $N$. At position $n$, the network is fed with the vector $x_n$ resulting from the concatenation of the embeddings:
$x_n = \big(W_{w_{n-(K-1)/2}},\ \ldots,\ W_{w_n},\ \ldots,\ W_{w_{n+(K-1)/2}}\big) \qquad (2)$
The words with indices exceeding the sentence boundaries ($n-(K-1)/2 < 1$ or $n+(K-1)/2 > N$) are mapped to a special "PADDING" word. As with any neural network, the tagger of the present disclosure performs several matrix-vector operations on its inputs, interleaved with some non-linear transfer function $h(\cdot)$. The tagger outputs a vector of size $|T|$ for each word at position $n$, interpreted as a score for each tag in $T$ for the word $w_n$ in the sentence:
$s(x_n) = M^2\, h(M^1 x_n) \qquad (3)$
where the matrices $M^1 \in \mathbb{R}^{H \times (KD)}$ and $M^2 \in \mathbb{R}^{|T| \times H}$ are the trained parameters of the network. The number of hidden units $H$ is a hyper-parameter to be tuned. In one embodiment, the transfer function can comprise a hyperbolic tangent, $h(z) = \tanh(z)$.
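A minimal sketch of the scoring layers of equation (3) with the hyperbolic tangent transfer function follows; all dimensions and the random matrices are illustrative stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, H, n_tags = 3, 5, 8, 9          # window size, embedding size, hidden units, |T|
M1 = rng.standard_normal((H, K * D))  # trained in practice
M2 = rng.standard_normal((n_tags, H)) # trained in practice

def tag_scores(x_n):
    """Equation (3): s(x_n) = M2 h(M1 x_n), with h = tanh."""
    return M2 @ np.tanh(M1 @ x_n)

x_n = rng.standard_normal(K * D)   # concatenated window embeddings, equation (2)
print(tag_scores(x_n).shape)       # (9,) -- one score per tag in T
```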
The "window" approach described above assumes that the tag of a word is solely determined by the surrounding words in the window. This works well on short sentences but falls short on long ones. Therefore, in an alternate embodiment of the tagger (the sentence approach), all words $\{w_1, w_2, \ldots, w_N\}$ are considered for tagging a given word $w_n$. To specify to the network which word is to be tagged, an additional lookup-table is introduced in equation (2), which embeds the relative distance $(m - n)$ of each word $w_m$ in the sentence with respect to $w_n$. At each position $1 \le m \le N$, the outputs of all the lookup-tables in equation (2) (low-caps word, caps, relative distance, etc.) are concatenated and combined by a layer with parameter matrix $M^0$ into a single feature vector for the word $w_n$. This feature vector is then fed to the scoring layers of equation (3). The matrix $M^0$ is trained by back-propagation, as with any other network parameter.
It is known that there are strong dependencies between parsing tags in a sentence: not only are tags organized in chunks, but some tags cannot follow other tags. It is, therefore, natural to infer tags from the scores in equation (3) using a structured output approach. Therefore, a transition score $A_{tu}$ is used for jumping from tag $t \in T$ to tag $u \in T$ in successive words, and an initial score $A_{t0}$ is used for starting from the $t$th tag. The last module of the GTN tagger outputs a graph with $|T| \times N$ nodes $G_{tn}$ (

The score of a path along this graph combines the tag scores of equation (3) with the transition and initial scores along the path:

$s([w]_1^N, [t]_1^N, \theta) = A_{t_1 0} + s(x_1)_{t_1} + \sum_{n=2}^{N} \big(A_{t_{n-1} t_n} + s(x_n)_{t_n}\big) \qquad (5)$
where $\theta$ represents all the trainable parameters of the GTN tagger ($W$, $M^1$, $M^2$, and $A$). The sentence tags $[t^*]_1^N$ are then inferred by finding the path which leads to the maximal score:

$[t^*]_1^N = \operatorname*{argmax}_{[u]_1^N \in T^N} s([w]_1^N, [u]_1^N, \theta) \qquad (6)$
The Viterbi algorithm can be used for this inference.
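A minimal sketch of the Viterbi inference over the tag graph follows, using per-word tag scores, a transition matrix, and initial scores; the random arrays are stand-ins for the trained quantities of equation (3) and the matrix $A$.

```python
import numpy as np

def viterbi(node_scores, A, A_init):
    """Find the tag path maximizing the path score of equation (5):
    the sum of node scores s(x_n)_t plus transition scores A[t_prev, t].

    node_scores: (N, T) array of per-word tag scores.
    A:           (T, T) transition scores, A[t, u] = score of tag u after tag t.
    A_init:      (T,)   scores for the first tag of the sentence.
    """
    N, T = node_scores.shape
    delta = A_init + node_scores[0]          # best score ending in each tag
    back = np.zeros((N, T), dtype=int)
    for n in range(1, N):
        cand = delta[:, None] + A + node_scores[n]   # (T_prev, T_curr)
        back[n] = cand.argmax(axis=0)
        delta = cand.max(axis=0)
    # Backtrack the best path.
    path = [int(delta.argmax())]
    for n in range(N - 1, 0, -1):
        path.append(int(back[n, path[-1]]))
    return path[::-1], float(delta.max())

rng = np.random.default_rng(0)
tags, score = viterbi(rng.standard_normal((4, 3)),
                      rng.standard_normal((3, 3)),
                      rng.standard_normal(3))
print(tags, score)
```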
All the parameters $\theta$ of the network are trained in an end-to-end manner as follows. Following the GTN tagger's training method, a probabilistic framework is considered, where a likelihood is maximized over all the sentences $[w]_1^N$ in a training set, with respect to the parameters $\theta$. The score of equation (5) can be interpreted as a conditional probability over a path by taking its exponential (making it positive) and normalizing with respect to all possible paths (summing to 1 over all paths). Taking the $\log(\cdot)$ leads to the following conditional log-probability:

$\log p([t]_1^N \mid [w]_1^N, \theta) = s([w]_1^N, [t]_1^N, \theta) - \operatorname{logadd}_{\forall [u]_1^N \in T^N} s([w]_1^N, [u]_1^N, \theta) \qquad (7)$
where the notation $\operatorname{logadd}_i z_i = \log\big(\sum_i e^{z_i}\big)$ is adopted.
Computing the log-likelihood of equation (7) efficiently is not straightforward, as the number of terms in the logadd grows exponentially with the length of the sentence. Fortunately, in the same spirit as the Viterbi algorithm, it can be computed in linear time with the following classical recursion over $n$:

$\delta_n(v) \;\stackrel{\Delta}{=}\; \operatorname{logadd}_{\{[u]_1^n \,:\, u_n = v\}} s([w]_1^n, [u]_1^n, \theta) \;=\; s(x_n)_v + \operatorname{logadd}_{t} \big(\delta_{n-1}(t) + A_{t v}\big) \qquad (8)$
followed by the termination $\operatorname{logadd}_{\forall [u]_1^N} s([w]_1^N, [u]_1^N, \theta) = \operatorname{logadd}_{v}\, \delta_N(v)$.
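A minimal sketch of this linear-time recursion and its termination follows, using a numerically stable logadd; array shapes are as in the Viterbi sketch above, and the arrays are random stand-ins for trained parameters. The conditional log-probability of equation (7) is then the labeled path's score minus this quantity.

```python
import numpy as np

def logadd(z, axis=0):
    """Numerically stable logadd_i z_i = log(sum_i exp(z_i)) along `axis`."""
    m = z.max(axis=axis)
    return m + np.log(np.exp(z - np.expand_dims(m, axis)).sum(axis=axis))

def log_partition(node_scores, A, A_init):
    """logadd over all tag paths of the path score (the second term of
    equation (7)), computed in linear time by the recursion of equation (8)."""
    N, T = node_scores.shape
    delta = A_init + node_scores[0]              # initialization over length-1 paths
    for n in range(1, N):
        # delta[t_prev] + A[t_prev, t], logadded over t_prev, plus s(x_n)_t
        delta = logadd(delta[:, None] + A, axis=0) + node_scores[n]
    return logadd(delta, axis=0)                  # termination over the last tag

rng = np.random.default_rng(0)
node_scores = rng.standard_normal((4, 3))
A, A_init = rng.standard_normal((3, 3)), rng.standard_normal(3)
print(log_partition(node_scores, A, A_init))
```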
The log-likelihood of equation (7) can be maximized using stochastic gradient ascent, which has the main advantage of being extremely scalable. Random training sentences $[w]_1^N$ and their associated tag labelings $[t]_1^N$ are iteratively selected. The following gradient step can then be performed:
$\theta \leftarrow \theta + \lambda\, \frac{\partial \log p([t]_1^N \mid [w]_1^N, \theta)}{\partial \theta} \qquad (9)$
where λ is a chosen learning rate. The gradient in equation (9) is efficiently computed via back-propagation: the differentiation chain rule is applied to the recursion of equation (8), and then to all network layers of equation (3), including the word embedding layers of equation (1).
The GTN tagger of the present disclosure is made "recursive" by adding an additional feature (and its corresponding lookup-table of equation (1)) describing a history of previous tree levels. For that purpose, all chunks which were discovered in previous tree levels are gathered. If several chunks overlap at different levels, only the largest one is considered. Assuming that Constraint 1 holds, a word can be in at most one of the remaining chunks. This is the history. Another kind of history could be selected (e.g., a feature for each of an arbitrarily chosen number LEN of previous levels); however, the proposed history must be computed for implementing Constraint 1. The corresponding IOBES tags of each word are then fed as features to the GTN tagger. For instance, assuming the labeling in
Implementing Constraint 1 is now made easy using this history and an IOBES tagging scheme. For each chunk c ∈ C, the graph outputted by the GTN is adapted such that any new candidate chunk c̃ overlapping chunk c includes chunk c and is larger than chunk c. For each candidate label (e.g., VP), multiple possible paths (
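A minimal sketch of computing the history feature described above follows: gather the chunks found at previous levels, keep only the largest of any overlapping chunks, and emit one IOBES history tag per word. The (label, start, end) span representation and the helper name are ours; the adaptation of the GTN output graph that actually enforces Constraint 1 is not shown.

```python
def history_tags(n_words, previous_chunks):
    """IOBES history feature fed to the tagger at the next level.

    previous_chunks: (label, start, end) spans (end exclusive) gathered from
    all previous tree levels.  When chunks overlap, only the largest is kept,
    so each word belongs to at most one remaining chunk (Constraint 1).
    """
    # Keep the largest chunk among overlapping ones.
    kept = []
    for chunk in sorted(previous_chunks, key=lambda c: c[2] - c[1], reverse=True):
        if all(chunk[2] <= k[1] or chunk[1] >= k[2] for k in kept):
            kept.append(chunk)

    tags = ["O"] * n_words
    for label, start, end in kept:
        if end - start == 1:
            tags[start] = "S-" + label
        else:
            tags[start] = "B-" + label
            tags[end - 1] = "E-" + label
            for i in range(start + 1, end - 1):
                tags[i] = "I-" + label
    return tags

# Chunks found at previous levels over a 5-word sentence: an NP over words 0-1
# and a VP over words 2-4 (a smaller overlapping VP is discarded).
print(history_tags(5, [("NP", 0, 2), ("VP", 2, 5), ("VP", 3, 5)]))
# -> ['B-NP', 'E-NP', 'B-VP', 'I-VP', 'E-VP']
```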
While exemplary drawings and specific embodiments of the present disclosure have been described and illustrated, it is to be understood that the scope of the invention as set forth in the claims is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by persons skilled in the art without departing from the scope of the invention as set forth in the claims that follow and their structural and functional equivalents.
This application claims the benefit of U.S. Provisional Application No. 61/350,580, filed Jun. 2, 2010, the entire disclosure of which is incorporated herein by reference.