SYSTEM AND METHODS FOR PERFORMING NLP RELATED TASKS USING CONTEXTUALIZED WORD REPRESENTATIONS

Information

  • Patent Application
  • Publication Number
    20190197109
  • Date Filed
    December 18, 2018
  • Date Published
    June 27, 2019
Abstract
Systems, apparatuses, and methods for representing words or phrases, and using the representation to perform NLP and NLU tasks, where these tasks include sentiment analysis, question answering, and coreference resolution. Embodiments introduce a type of deep contextualized word representation that models both complex characteristics of word use, and how these uses vary across linguistic contexts. The word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. These representations can be added to existing task models and significantly improve the state of the art across challenging NLP problems, including question answering, textual entailment and sentiment analysis.
Description
BACKGROUND

Natural Language Processing (NLP) and Natural Language Understanding (NLU) involve many tasks, including sentiment analysis, question answering, and coreference resolution. In order to perform these tasks, words and phrases are represented as word vectors or as combinations of word vectors. These vectors are used as inputs to train a recurrent neural network (RNN). The trained network is then used as part of performing an NLP or NLU task using different inputs.


However, it can be challenging to develop high quality representations of words and phrases to train a neural network and to produce useful results. This is because to be most useful, the representations should ideally model both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Unfortunately, conventional approaches to generating word representations are unable to effectively produce representations having these capabilities.


Embodiments of the invention are directed toward solving these and other problems or disadvantages with conventional approaches to representing words and phrases for training neural networks to perform NLP and NLU tasks, both individually and collectively.


SUMMARY

The terms “embodiments of the invention”, “invention,” “the invention,” “the inventive” and “the present invention” as used herein are intended to refer broadly to all the subject matter described in this document and to the claims. Statements containing these terms should be understood not to limit the subject matter described herein or to limit the meaning or scope of the claims. The embodiments described herein are defined by the claims and not by this summary. This summary is a high-level overview of various aspects of the embodiments and introduces some of the concepts that are further described in the Detailed Description section below. This summary is not intended to identify key, required or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, to any or all drawings, and to each claim.


Embodiments described herein are directed to systems, apparatuses, and methods for representing words or phrases, and using the representation to perform NLP and NLU tasks, where these tasks include sentiment analysis, question answering, and coreference resolution. The embodiments introduce a type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). The word vectors are learned functions of the internal states of a deep bidirectional language model (biLM or BiLM), which is pre-trained on a large text corpus. These representations can be added to existing models and significantly improve the state of the art across challenging NLP problems, including question answering, textual entailment and sentiment analysis. As realized by the inventors and as confirmed by their analysis, exposing the deep internal layers of the pre-trained network is an important aspect of implementing the embodiments, as it allows downstream models to mix different types of semi-supervision signals.


In one embodiment, the invention is directed to a method for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, where the method includes:


representing a natural language sequence or sequences with a bidirectional language model;


implementing the bidirectional language model in a neural network;


training the neural network in which the language model is implemented using a corpus of unlabeled text;


extracting contextual word representations from the trained neural network, the contextual word representations being extracted from one or more layers of the trained network; and


transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.


In another embodiment, the invention is directed to a system for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, where the system includes:

    • a set of computer-executable instructions for representing a natural language sequence or sequences with a bidirectional language model;
    • a set of computer-executable instructions for implementing the bidirectional language model in a neural network;
    • a set of computer-executable instructions for training the neural network in which the language model is implemented using a corpus of unlabeled text;
    • a set of computer-executable instructions for extracting contextual word representations from the trained neural network, the contextual word representations being extracted from one or more layers of the trained network; and
    • a set of computer-executable instructions for transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.


In yet another embodiment, the invention is directed to a method for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, where the method includes:


representing a sequence of tokens as a bidirectional language model;


implementing the bidirectional language model in the form of a neural network;


training the neural network in which the language model is implemented using a corpus of text;


identifying representations for each token in the sequence from one or more layers of the resultant trained neural network;


forming a contextual vector or vectors for each token from the identified representations of the trained neural network;


obtaining a neural network intended for use in a specific task;


introducing the contextual vectors into the obtained neural network;


inputting text to be analyzed into the obtained neural network; and


generating an output of the obtained neural network.


Other objects and advantages will be apparent to one of ordinary skill in the art upon review of the detailed description of the embodiments described herein and the included figures.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments in accordance with the present disclosure will be described with reference to the drawings, in which:



FIG. 1 is a flowchart or flow diagram illustrating a method, process, operation or function for training a bidirectional language model (biLM) and using the trained model to modify a neural network that is intended to be used for a specific NLP or NLU task;



FIG. 2 is a diagram illustrating an example of a bidirectional recurrent neural network (RNN) that may be used in performing a specific NLP or NLU task; and



FIG. 3 is a diagram illustrating elements or components that may be present in a computer device or system configured to implement a method, process, function, or operation in accordance with an embodiment of the invention.





Note that the same numbers are used throughout the disclosure and figures to reference like components and features.


DETAILED DESCRIPTION

The subject matter of embodiments is described here with specificity to meet statutory requirements, but this description is not necessarily intended to limit the scope of the claims. The claimed subject matter may be embodied in other ways, may include different elements or steps, and may be used in conjunction with other existing or future technologies. This description should not be interpreted as implying any particular order or arrangement among or between various steps or elements except when the order of individual steps or arrangement of elements is explicitly described.


Embodiments will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, exemplary embodiments by which the invention may be practiced. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy the statutory requirements and convey the scope of the invention to those skilled in the art. Accordingly, embodiments are not limited to the embodiments described herein or depicted in the drawings, and various embodiments and modifications can be made without departing from the scope of the claims presented.


Among other things, the present invention may be embodied in whole or in part as a system, as one or more methods, or as one or more devices. Embodiments of the invention may take the form of a hardware-implemented embodiment, a software implemented embodiment, or an embodiment combining software and hardware aspects. For example, in some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by one or more suitable processing elements (such as a processor, microprocessor, CPU, GPU, controller, etc.) that is part of a client device, server, network element, or other form of computing or data processing device/platform. The processing element or elements are programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored in a suitable data storage element. In some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by a specialized form of hardware, such as a programmable gate array (PGA or FPGA), application specific integrated circuit (ASIC), or the like. Note that an embodiment of the inventive methods may be implemented in the form of an application, a sub-routine that is part of a larger application, a “plug-in”, an extension to the functionality of a data processing system or platform, or other suitable form. The following detailed description is, therefore, not to be taken in a limiting sense.


In some embodiments, one or more of the operations, functions, processes, or methods described herein (such as for the language model) may be implemented in whole or in part by the development or training of a neural network, the application of a machine learning technique or techniques, or the development or implementation of an appropriate decision process. Typically, such a network is implemented by the execution of a set of computer-executable instructions, where the instructions may be stored in (or on) a non-transitory computer-readable medium and executed by a programmed processor or processing element. Note that a neural network or deep learning model may be represented as a set of layers, with each layer composed of nodes of “neurons” and with connections between nodes in the same or different layers. The set of layers operate on an input to provide a decision (such as a classification) as an output.


A neural network is a system of interconnected artificial “neurons” that exchange messages between each other. The connections between neurons (which form the nodes in a network) have numeric weights that are tuned during a training process, so that a properly trained network will respond correctly when presented with an image or pattern to recognize (for example). The network consists of multiple layers of feature-detecting “neurons”, including an input layer, an output layer, and typically one or more hidden layers. Each neuron may perform a specific set of operations on its inputs, such as forming a linear or non-linear combination of inputs and weights, and then subjecting the result to a non-linear activation function to produce an output.


Each layer has many neurons that respond to different combinations of inputs from the previous layers. Training of a network is performed using a “labeled” or annotated dataset of inputs comprising an assortment of representative input patterns that are associated with their intended output responses. Training uses optimization methods to iteratively determine the weights for intermediate and final feature neurons. In terms of a computational model, in some embodiments, each neuron calculates the dot product of its inputs and weights, adds a bias, and applies a non-linear trigger function (for example, a sigmoid response function).
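As an illustration only (and not a limitation of the embodiments), the per-neuron computation described above can be sketched in a few lines of Python; the function and variable names are illustrative.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def neuron_forward(inputs, weights, bias):
        # Dot product of the inputs and weights, plus a bias, passed through
        # a non-linear activation (here a sigmoid), as described above.
        return sigmoid(np.dot(inputs, weights) + bias)

    # Example: a single neuron with three inputs.
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.8, 0.1, -0.4])
    b = 0.2
    print(neuron_forward(x, w, b))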



FIG. 1 is a flowchart or flow diagram illustrating a method, process, operation or function for training a bidirectional language model (biLM) and using the trained model to modify a neural network that is intended to be used for a specific NLP or NLU task. As shown in the figure, an implementation of an embodiment of the system and methods described herein includes the training of a bi-directional language model (biLM), which is a form of neural network (step or stage 102). For purposes of some embodiments, a bi-directional Language Model (biLM) was selected to have the following form (as suggested by stage or step 100):









\[
\sum_{k=1}^{N} \Bigl( \log p\bigl(t_k \mid t_1, \ldots, t_{k-1};\ \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s\bigr) + \log p\bigl(t_k \mid t_{k+1}, \ldots, t_N;\ \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s\bigr) \Bigr)
\]





This form combines both a forward and a backward language model, with the formulation jointly maximizing the log likelihood of the forward and backward directions. Given a sequence of N tokens (t1, t2, . . . , tN), a forward language model computes the probability of the sequence by modeling the probability of token tk given the history (t1, . . . , tk−1):







\[
p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \ldots, t_{k-1})
\]
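As a toy illustration of the chain-rule factorization above, the log probability of a sequence can be accumulated one token at a time. The conditional-probability function used here is a placeholder; a real language model would supply these probabilities.

    import math

    def sequence_log_prob(tokens, cond_prob):
        # log p(t_1, ..., t_N) = sum over k of log p(t_k | t_1, ..., t_{k-1})
        return sum(math.log(cond_prob(tokens[k], tokens[:k]))
                   for k in range(len(tokens)))

    # Placeholder model: uniform over a 10-word vocabulary (illustration only).
    uniform = lambda token, history: 0.1
    print(sequence_log_prob(["the", "cat", "sat"], uniform))   # 3 * log(0.1)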












A backward LM is similar to a forward LM, except that it runs over the sequence in reverse, predicting the previous token given the future context:

\[
p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N)
\]







Further, this formulation of a language model ties (i.e., depends upon or interconnects) the parameters for both the token representation (Θx) and the Softmax layer (Θs) in the forward and backward directions, while maintaining separate parameters for the LSTMs (Long Short-Term Memory networks, a type of recurrent neural network) in each direction. The chosen language model also shares some weights between directions instead of using completely independent parameters. The reasons for choosing some of these characteristics are discussed in greater detail in the following.


The selected language model is implemented as a neural network (as suggested by stage or step 102) and is trained using a sufficiently large corpus of unlabeled text (words, phrases, etc.). The resulting trained neural network (in typical cases, a Long Short-Term Memory network) includes a set of layers, with each layer including nodes that are connected to nodes in another layer by weighted connections. The weights are “learned” or set as a result of the training process.
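The following is a minimal sketch (in Python, using PyTorch) of one way such a biLM could be structured, with a shared token embedding and Softmax projection and separate forward and backward LSTMs. The class and parameter names are illustrative; the actual embodiments may use character-convolution token representations and multiple biLSTM layers rather than the single word-embedding layer shown here.

    import torch
    import torch.nn as nn

    class BiLM(nn.Module):
        # Sketch only: shared token embedding (theta_x) and Softmax projection
        # (theta_s), with separate forward and backward LSTM parameters.
        def __init__(self, vocab_size, embed_dim, hidden_dim):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)                   # shared theta_x
            self.fwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # forward-direction parameters
            self.bwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # backward-direction parameters
            self.softmax_proj = nn.Linear(hidden_dim, vocab_size)              # shared theta_s

        def forward(self, token_ids):
            x = self.embed(token_ids)                        # (batch, seq_len, embed_dim)
            h_fwd, _ = self.fwd_lstm(x)                      # left-to-right states
            h_bwd, _ = self.bwd_lstm(torch.flip(x, dims=[1]))
            h_bwd = torch.flip(h_bwd, dims=[1])              # re-align with the original token order
            return self.softmax_proj(h_fwd), self.softmax_proj(h_bwd)

In this sketch, the embedding and output projection correspond to the shared parameters Θx and Θs, while the two LSTM modules hold the direction-specific parameters.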


Note that other forms of biLM are possible depending upon computational requirements or constraints, the specific task for which the language model will be used, or the neural network architecture being used (such as a convolutional neural network (CNN) being used instead of a LSTM neural network architecture). For example, the following forms for a biLM might also (or instead) be used:

    • 1. Separate forward and backward directions—in the language model described herein, the inventors chose to combine the two directions to minimize model complexity and improve computational efficiency;
    • 2. A biLM with greater or fewer layers—the inventors' experiments indicated that two layers (as in the model described herein) performs extremely well as a language model and in the downstream tasks;
    • 3. A biLM with a larger or smaller dimension—the inventors chose the sizes to balance overall model computational requirements and expressiveness (as discussed in Section 3 of the article entitled “Deep Contextualized Word Representations” and the Supplemental Material, authored by the inventors of the present application and contained in the Appendix to the provisional patent application from which the current application claims priority); or
    • 4. A different neural architecture, e.g. a convolutional network—as biLSTMs were (and still are) state of the art for language modeling, they were used for implementing embodiments of the system and methods described herein.


As noted, separate parameters were used in the biLM for the LSTMs; this was done because the LSTM encodes sequential context into a single vector, similar to how a person would read a sentence. Due to the left-to-right structure of English, the inventors reasoned that the knowledge necessary to make sense of left-to-right words might be different from the knowledge necessary to make sense of right-to-left words. In addition, weights were shared between directions because the character representation weights and Softmax weights are not direction specific; by sharing them, the model shares representational power between directions. This approach is also more computationally efficient.


In general, when deciding upon the language model to use in a specific situation or for performing a specific task, the following factors are typically considered:

  • 1. desired or required accuracy vs. execution speed; and
  • 2. the nature of the specific task—NLU tasks that require making only a single prediction for a given sentence (e.g., sentiment classification makes one prediction (positive/negative) for each review) may only require one direction in the LM. However, NLU tasks that require making predictions for each word (e.g., co-reference resolution) would be expected to benefit from including both directions in the LM.


Once the bidirectional language model (biLM) is trained, this results in establishing or “setting” the internal weights between nodes in the network. The resulting layers of the LSTM consist of nodes and paths between those nodes and nodes in different layers, where the paths have weights or values corresponding to the neural network's response to the training inputs. Thus, after training, the nodes and connection weights represent the learned influence of the context of a word on its meaning or its use in a NLP or NLU task.


Next, the process identifies representations of the biLM across all layers of the trained neural network (e.g., the LSTM), as suggested by step or stage 104. Once these representations are identified or otherwise determined, a task-specific expression of these layers is formed (identified as ELMoTask, as suggested by stage or step 106). The task-specific expression may be a weighted combination of the layers, for example. In general, the approach described herein enables or facilitates a task model determining its own combination of layers for each task by choosing ELMoTask to be a neural network.


In one embodiment, the task-specific combination of the intermediate layer representations in the biLM is based on the following process. For each token tk, an L-layer biLM computes a set of 2L+1 representations






\[
R_k = \bigl\{\, x_k^{LM},\ \overrightarrow{h}_{k,j}^{LM},\ \overleftarrow{h}_{k,j}^{LM} \mid j = 1, \ldots, L \,\bigr\} = \bigl\{\, h_{k,j}^{LM} \mid j = 0, \ldots, L \,\bigr\},
\]

where $h_{k,0}^{LM}$ is the token layer and $h_{k,j}^{LM} = [\overrightarrow{h}_{k,j}^{LM}; \overleftarrow{h}_{k,j}^{LM}]$ for each biLSTM layer.


In one embodiment, for inclusion in a downstream model (such as a task specific model), the process collapses all layers in R into a single vector, termed ELMo (for Embeddings from Language Models), where $\mathrm{ELMo}_k = E(R_k; \Theta_e)$. In the simplest case, ELMo selects just the top layer, $E(R_k) = h_{k,L}^{LM}$.


Across the tasks considered (such as textual entailment, question answering, etc.), the inventors found that the best performance was achieved by weighting all biLM layers with softmax-normalized learned scalar weights, $s^{Task} = \mathrm{Softmax}(w)$:








\[
\mathrm{ELMo}_k^{Task} = E(R_k; w, \gamma) = \gamma^{Task} \sum_{j=0}^{L} s_j^{Task}\, h_{k,j}^{LM},
\]

where $s_j^{Task}$ are the softmax-normalized layer weights.














The scalar parameter γ=γTask allows the task model to scale the entire ELMo vector and is of practical importance to the optimization process. Considering that the activations of each biLM layer have a different distribution, in some cases it may also help to apply layer normalization to each biLM layer before weighting.


Note that ELMok includes some parameters that are learned as part of the downstream model (e.g., γ (gamma) and the weights w). As a result, the downstream model can choose to concentrate on the biLM layers most suitable for its end goal. Empirically, the inventors found that different models do choose different weights. A possible reason for this behavior is that the formulation is very general: it allows a task model to focus its attention on whatever parts of the biLM are most useful, without requiring the user of the system to make a (very likely) sub-optimal choice.
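A minimal sketch (in Python, using PyTorch) of this layer-weighting step is shown below, assuming the per-layer activations have already been extracted from the trained biLM. The class name ScalarMix, the tensor shapes, and the layer count are illustrative.

    import torch
    import torch.nn as nn

    class ScalarMix(nn.Module):
        # Collapses the L+1 biLM layers (token layer plus L biLSTM layers) into
        # a single ELMo vector using softmax-normalized scalar weights s_j and a
        # task-specific scale gamma, as in the formula above.
        def __init__(self, num_layers):
            super().__init__()
            self.w = nn.Parameter(torch.zeros(num_layers))   # produces s_j via softmax
            self.gamma = nn.Parameter(torch.ones(1))         # gamma^Task

        def forward(self, layer_activations):
            # layer_activations: list of (batch, seq_len, dim) tensors, one per layer.
            s = torch.softmax(self.w, dim=0)
            mixed = sum(s_j * h for s_j, h in zip(s, layer_activations))
            return self.gamma * mixed

    # Illustrative usage with a 2-layer biLM (token layer + 2 biLSTM layers = 3 layers):
    mix = ScalarMix(num_layers=3)
    layers = [torch.randn(1, 10, 1024) for _ in range(3)]
    elmo_vectors = mix(layers)   # shape (1, 10, 1024)

In the simplest case described earlier, the same module can be bypassed by taking only the final entry of layer_activations (the top biLM layer).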


Next, a task model for a specific NLP or NLU task is obtained (as suggested by step or stage 108; as an example, an RNN). The task-specific model may be a form of neural network that is used for a task such as Textual Entailment, Question Answering, Semantic Role Labeling, Coreference Resolution, Named Entity Extraction, or Sentiment Analysis, for example. Next, the weighted task-specific expression is introduced into the neural network that has been designed to perform the specific NLP/NLU task.


In one embodiment, this is accomplished by concatenating ELMokTask with the token representation XkLM in the task model (as suggested by step or stage 110). (Note that in some situations it may be desirable to concatenate ELMo with a different layer, or with something other than the token representation. This possibility is explored in some of the results presented in the technical article that was included in the Appendix filed with the provisional patent application upon which the present application is based.) Next, the concatenated expression is introduced into the task model, for example in the lower level (typically the token level) of the model (as suggested by step or stage 112).
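As a simple sketch of this concatenation step, assuming illustrative tensor shapes (a 300-dimensional token representation and a 1024-dimensional ELMo vector); in practice the ELMo vectors would come from the scalar-mix step sketched earlier:

    import torch

    # Introduce the ELMo vectors into a task model by concatenating them with the
    # task model's token representations at the lowest (token-level) layer.
    token_reprs = torch.randn(1, 10, 300)      # e.g., pre-trained word embeddings
    elmo_vectors = torch.randn(1, 10, 1024)    # random values used here for illustration
    task_inputs = torch.cat([token_reprs, elmo_vectors], dim=-1)   # shape (1, 10, 1324)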


The input text to be analyzed is then provided to the modified task model (as suggested by step or stage 114). The task model then uses the information from the ELMo representation(s) to perform a more accurate evaluation of the input, one that takes into account the contextual information captured by the trained LSTM. Note that the task for which the task model neural network is designed is the same as the task used to define the task-specific expression for ELMo referred to previously.



FIG. 2 is a diagram illustrating an example of a bidirectional recurrent neural network (biRNN) that may be used as a task model in performing a specific NLP or NLU task. As shown in the figure, such a bi-directional neural network includes two types of layers, a forward layer or layers 202 and a backward layer or layers 204. The forward layers 202 may be interconnected with each other, as are the backward layers 204 with each other.


The input to the network (bottom of the figure) is passed into an RNN cell. The forward layer accepts the first input and uses its learned parameters to compute an internal representation of the input (denoted h→t−1). It then reads the second input and uses it, along with the representation from the first input, to update its representation of the entire sequence so far (h→t). This is then combined with the third input to form a representation of the sentence up to the third word (h→t+1). This process is repeated until the end of the sentence. Simultaneously, the backward RNN performs the same calculations over the sequence in reverse.
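The forward and backward passes described above can be sketched as follows. This is a plain (non-LSTM) recurrent update shown for illustration only, and the parameter names and dimensions are illustrative.

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b):
        # One recurrent update: combine the current input with the running
        # representation of the sequence so far, then apply a non-linearity.
        return np.tanh(np.dot(W_xh, x_t) + np.dot(W_hh, h_prev) + b)

    def bidirectional_pass(inputs, params_fwd, params_bwd, hidden_dim):
        h_fwd = np.zeros(hidden_dim)
        h_bwd = np.zeros(hidden_dim)
        fwd_states, bwd_states = [], []
        for x_t in inputs:                     # left-to-right pass
            h_fwd = rnn_step(x_t, h_fwd, *params_fwd)
            fwd_states.append(h_fwd)
        for x_t in reversed(inputs):           # right-to-left pass
            h_bwd = rnn_step(x_t, h_bwd, *params_bwd)
            bwd_states.append(h_bwd)
        bwd_states.reverse()                   # re-align with the input order
        return fwd_states, bwd_states

    # Illustrative usage with 4-dimensional inputs and an 8-dimensional hidden state:
    dim_in, dim_h = 4, 8
    make_params = lambda: (np.random.randn(dim_h, dim_in), np.random.randn(dim_h, dim_h), np.zeros(dim_h))
    inputs = [np.random.randn(dim_in) for _ in range(5)]
    fwd, bwd = bidirectional_pass(inputs, make_params(), make_params(), dim_h)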


In general, a neural network is formulated mathematically, which includes defining the loss function to be optimized (the joint maximization of the log likelihood of the forward and backward directions in the case of the biLM, or the task-specific learning objective in the case of other NLU tasks such as question answering, sentiment classification, etc.). The set of mathematical equations is expressed as computer code (which is executed by a suitably programmed CPU, GPU, or other processing element), and an optimization method is selected to minimize the loss as a function of the network's parameters. Note that in the case of the biLM, the inventors used Adagrad as the optimization method. The “training” of the network refers to minimizing the loss function by adjusting the network's parameters. Note that further details regarding the implementation of the neural network are described in the article entitled “Deep contextualized word representations” and the Supplemental Material, which were part of the Appendix to the previously filed provisional patent application from which the present application derives priority (e.g., sections 3 and 4 of the article, and the Supplemental Material).
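The following is a minimal sketch of such a training step for the BiLM class sketched earlier, using a joint forward/backward cross-entropy (negative log likelihood) loss and the Adagrad optimizer mentioned above. The hyperparameter values and batch construction are illustrative, not those of the actual embodiments.

    import torch
    import torch.nn as nn

    # Assumes the BiLM class from the earlier sketch; token_ids is (batch, seq_len).
    model = BiLM(vocab_size=50000, embed_dim=128, hidden_dim=256)
    optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)
    nll = nn.CrossEntropyLoss()

    def training_step(token_ids):
        fwd_logits, bwd_logits = model(token_ids)
        # The forward direction predicts the next token; the backward direction
        # predicts the previous token. Minimizing the summed cross-entropy is
        # equivalent to jointly maximizing both directions' log likelihood.
        fwd_loss = nll(fwd_logits[:, :-1].reshape(-1, fwd_logits.size(-1)),
                       token_ids[:, 1:].reshape(-1))
        bwd_loss = nll(bwd_logits[:, 1:].reshape(-1, bwd_logits.size(-1)),
                       token_ids[:, :-1].reshape(-1))
        loss = fwd_loss + bwd_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Illustrative usage with a random batch of token ids:
    batch = torch.randint(0, 50000, (32, 20))
    print(training_step(batch))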



FIG. 3 is a diagram illustrating elements or components that may be present in a computer device or system configured to implement a method, process, function, or operation in accordance with an embodiment of the invention. As noted, in some embodiments, the system and methods described herein may be implemented in the form of an apparatus that includes a processing element and set of executable instructions. The executable instructions may be part of a software application and arranged into a software architecture. In general, an embodiment of the invention may be implemented using a set of software instructions that are designed to be executed by a suitably programmed processing element (such as a CPU, GPU (graphics processing unit), microprocessor, processor, controller, computing device, etc.). In a complex application or system such instructions are typically arranged into “modules” with each such module typically performing a specific task, process, function, or operation. The entire set of modules may be controlled or coordinated in their operation by an operating system (OS) or other form of organizational platform.


Each application module or sub-module may correspond to a particular function, method, process, or operation that is implemented by the module or sub-module. Such function, method, process, or operation may include those used to implement one or more aspects of the system and methods described herein, such as for:

    • Defining (implementing) a desired language model (LM);
    • Defining a cost function;
    • Defining an optimization method;
    • Defining a method to compute the network output given suitably formatted input;
    • Defining a method to read textual data and convert it to a form usable by (i.e., capable of being input to) the network (typically, this is a function that converts strings to non-negative integers through the use of a vocabulary that assigns each word or character a unique integer identifier; a minimal sketch of such a conversion appears after this list);
    • Defining a method to save the trained network parameters to disk for re-use;
    • Defining a method or process to identify the resulting layers in the trained neural network representing the language model (LM);
    • Defining a method or process to form a desired combination of the identified layers; or
    • Defining a method or process to form a concatenation of the desired combination with a layer in a task specific model or neural network (or another method or process to introduce the representation of the language model into a task model).
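As a minimal sketch of the string-to-integer conversion referenced in the list above (the function names, the word-level tokenization, and the "<unk>" token are illustrative; a character-level vocabulary could be handled analogously):

    def build_vocab(corpus_sentences):
        # Assign each word a unique non-negative integer identifier.
        vocab = {"<unk>": 0}
        for sentence in corpus_sentences:
            for word in sentence.split():
                if word not in vocab:
                    vocab[word] = len(vocab)
        return vocab

    def to_ids(sentence, vocab):
        # Convert a string into the integer form usable as network input.
        return [vocab.get(word, vocab["<unk>"]) for word in sentence.split()]

    vocab = build_vocab(["the cat sat", "the dog ran"])
    print(to_ids("the cat ran fast", vocab))   # unknown words map to <unk>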


In some embodiments, an implementation of the system and methods described herein may include the following steps, stages, operations, processes, functional capabilities, etc.:

    • Form token representation, Xk;
    • Input tokens (from training corpus) into L-layer LSTM based neural network that implements the language model (biLM);
    • Train the LSTM;
    • Identify layers of the resulting trained LSTM, and find representation(s) of the biLM;
    • Form a task-specific linear combination of the identified layers;
    • Obtain a desired task model (for a specific purpose, such as textual entailment, question answering, semantic role labeling, co-reference resolution, named entity extraction, or text classification);
    • Concatenate a task-specific linear combination of layer weights with the token representation used in the task model (or otherwise introduce the representation(s) of the biLM into the task model);
    • Insert the concatenated result into task model;
    • Input text to be analyzed as part of the task; and
    • Use the output of the task model for the prescribed purpose.


The application modules and/or sub-modules may include any suitable computer-executable code or set of instructions (e.g., as would be executed by a suitably programmed processor, microprocessor, or CPU), such as computer-executable code corresponding to a programming language. For example, programming language source code may be compiled into computer-executable code. Alternatively, or in addition, the programming language may be an interpreted programming language such as a scripting language. The computer-executable code or set of instructions may be stored in (or on) any suitable non-transitory computer-readable medium. In general, with regards to the embodiments described herein, a non-transitory computer-readable medium may include almost any structure, technology or method apart from a transitory waveform or similar medium.


As described, the system, apparatus, methods, processes, functions, and/or operations for implementing an embodiment of the invention may be wholly or partially implemented in the form of a set of instructions executed by one or more programmed computer processors such as a central processing unit (CPU) or microprocessor. Such processors may be incorporated in an apparatus, server, client or other computing or data processing device operated by, or in communication with, other components of the system. As an example, FIG. 3 is a diagram illustrating elements or components that may be present in a computer device or system 300 configured to implement a method, process, function, or operation in accordance with an embodiment of the invention. The subsystems shown in FIG. 3 are interconnected via a system bus 302. Additional subsystems include a printer 304, a keyboard 306, a fixed disk 308, and a monitor 310, which is coupled to a display adapter 312. Peripherals and input/output (I/O) devices, which couple to an I/O controller 314, can be connected to the computer system by any number of means known in the art, such as a serial port 316. For example, the serial port 316 or an external interface 318 can be utilized to connect the computer device 300 to further devices and/or systems not shown in FIG. 3 including a wide area network such as the Internet, a mouse input device, and/or a scanner. The interconnection via the system bus 302 allows one or more processors 320 (e.g., CPU, GPU, controller, or some combination) to communicate with each subsystem and to control the execution of instructions that may be stored in a system memory 322 and/or the fixed disk 308, as well as the exchange of information between subsystems. The system memory 322 and/or the fixed disk 308 may embody a tangible computer-readable medium.


Any of the software components, processes or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, JavaScript, C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions, or commands in (or on) a non-transitory computer-readable medium, such as a random-access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a CD-ROM. In this context, a non-transitory computer-readable medium is almost any medium suitable for the storage of data or an instruction set aside from a transitory waveform. Any such computer readable medium may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network.


According to one example implementation, the term processing element or processor, as used herein, may be a central processing unit (CPU), or conceptualized as a CPU (such as a virtual machine). In this example implementation, the CPU or a device in which the CPU is incorporated may be coupled, connected, and/or in communication with one or more peripheral devices, such as display. In another example implementation, the processing element or processor may be incorporated into a mobile computing device, such as a smartphone or tablet computer.


The non-transitory computer-readable storage medium referred to herein may include a number of physical drive units, such as a redundant array of independent disks (RAID), a floppy disk drive, a flash memory, a USB flash drive, an external hard disk drive, thumb drive, pen drive, key drive, a High-Density Digital Versatile Disc (HD-DVD) optical disc drive, an internal hard disk drive, a Blu-Ray optical disc drive, or a Holographic Digital Data Storage (HDDS) optical disc drive, synchronous dynamic random access memory (SDRAM), or similar devices or other forms of memories based on similar technologies. Such computer-readable storage media allow the processing element or processor to access computer-executable process steps, application programs and the like, stored on removable and non-removable memory media, to off-load data from a device or to upload data to a device. As mentioned, with regards to the embodiments described herein, a non-transitory computer-readable medium may include almost any structure, technology or method apart from a transitory waveform or similar medium.


Certain implementations of the disclosed technology are described herein with reference to block diagrams of systems, and/or to flowcharts or flow diagrams of functions, operations, processes, or methods. It will be understood that one or more blocks of the block diagrams, or one or more stages or steps of the flowcharts or flow diagrams, and combinations of blocks in the block diagrams and stages or steps of the flowcharts or flow diagrams, respectively, can be implemented by computer-executable program instructions. Note that in some embodiments, one or more of the blocks, or stages or steps may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all.


These computer-executable program instructions may be loaded onto a general-purpose computer, a special purpose computer, a processor, or other programmable data processing apparatus to produce a specific example of a machine, such that the instructions that are executed by the computer, processor, or other programmable data processing apparatus create means for implementing one or more of the functions, operations, processes, or methods described herein. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more of the functions, operations, processes, or methods described herein.


While certain implementations of the disclosed technology have been described in connection with what is presently considered to be the most practical and various implementations, it is to be understood that the disclosed technology is not to be limited to the disclosed implementations. Instead, the disclosed implementations are intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.


This written description uses examples to disclose certain implementations of the disclosed technology, and also to enable any person skilled in the art to practice certain implementations of the disclosed technology, including making and using any devices or systems and performing any incorporated methods. The patentable scope of certain implementations of the disclosed technology is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural and/or functional elements that do not differ from the literal language of the claims, or if they include structural and/or functional elements with insubstantial differences from the literal language of the claims.


All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and/or were set forth in its entirety herein.


The use of the terms “a” and “an” and “the” and similar referents in the specification and in the following claims are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “having,” “including,” “containing” and similar referents in the specification and in the following claims are to be construed as open-ended terms (e.g., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value inclusively falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation to the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to each embodiment of the invention.

Claims
  • 1. A method for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, comprising: representing a natural language sequence or sequences with a bidirectional language model; implementing the bidirectional language model in a neural network; training the neural network in which the language model is implemented using a corpus of unlabeled text; extracting contextual word representations from the trained neural network, the contextual word representations being extracted from one or more layers of the trained network; and transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.
  • 2. The method of claim 1, wherein the natural language understanding (NLU) or natural language processing (NLP) task is one of textual entailment, question answering, semantic role labeling, co-reference resolution, named entity extraction, or text classification.
  • 3. The method of claim 1, wherein the natural language sequence or sequences are one or more of words or characters.
  • 4. The method of claim 1, wherein the bidirectional language model includes a forward and a backward language model, with the formulation of the language model jointly maximizing the log likelihood of the forward and backward directions.
  • 5. The method of claim 1, wherein the bidirectional language model shares one or more weights between directions instead of using independent parameters.
  • 6. The method of claim 1, wherein the neural network in which the bidirectional language model is implemented is a long short term memory (LSTM) neural network.
  • 7. The method of claim 1, wherein extracting contextual word representations from the trained neural network further comprises identifying a representation for each natural language sequence from one or more layers of the trained neural network in which the language model is implemented.
  • 8. The method of claim 1, wherein transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task further comprises concatenating the extracted contextual word representations with the token representation of the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.
  • 9. A system for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, comprising: a set of computer-executable instructions for representing a natural language sequence or sequences with a bidirectional language model; a set of computer-executable instructions for implementing the bidirectional language model in a neural network; a set of computer-executable instructions for training the neural network in which the language model is implemented using a corpus of unlabeled text; a set of computer-executable instructions for extracting contextual word representations from the trained neural network, the contextual word representations being extracted from one or more layers of the trained network; and a set of computer-executable instructions for transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.
  • 10. The system of claim 9, wherein the natural language understanding (NLU) or natural language processing (NLP) task is one of textual entailment, question answering, semantic role labeling, co-reference resolution, named entity extraction, or text classification.
  • 11. The system of claim 9, wherein the bidirectional language model includes a forward and a backward language model, with the formulation of the language model jointly maximizing the log likelihood of the forward and backward directions.
  • 12. The system of claim 9, wherein the bidirectional language model shares one or more weights between directions instead of using independent parameters.
  • 13. The system of claim 9, wherein the neural network in which the bidirectional language model is implemented is a long short term memory (LSTM) neural network.
  • 14. The system of claim 9, wherein the set of computer-executable instructions for extracting contextual word representations from the trained neural network includes instructions for identifying a representation for each natural language sequence from one or more layers of the trained neural network in which the language model is implemented.
  • 15. The system of claim 9, wherein the set of computer-executable instructions for transferring the extracted contextual word representations to the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task includes instructions for concatenating the extracted contextual word representations with the token representation of the neural network used for the natural language understanding (NLU) or natural language processing (NLP) task.
  • 16. A method for improving the performance of a neural network used for a natural language understanding (NLU) or a natural language processing (NLP) task, comprising: representing a sequence of tokens as a bidirectional language model; implementing the bidirectional language model in the form of a neural network; training the neural network in which the language model is implemented using a corpus of text; identifying representations for each token in the sequence from one or more layers of the resultant trained neural network; forming a contextual vector or vectors for each token from the identified representations of the trained neural network; obtaining a neural network intended for use in a specific task; introducing the contextual vectors into the obtained neural network; inputting text to be analyzed into the obtained neural network; and generating an output of the obtained neural network.
  • 17. The method of claim 16, wherein the bidirectional language model includes a forward and a backward language model, with the formulation of the language model jointly maximizing the log likelihood of the forward and backward directions.
  • 18. The method of claim 16, wherein the bidirectional language model is implemented as a long short term memory (LSTM) neural network.
  • 19. The method of claim 16, wherein the specific task is one of textual entailment, question answering, semantic role labeling, co-reference resolution, named entity extraction, or sentiment analysis.
  • 20. The method of claim 16, wherein introducing the contextual vectors into the obtained neural network further comprises concatenating the contextual vector or vectors with the token representation of the obtained network.
CROSS REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 62/610,447, entitled “System and Methods for Performing NLP Related Tasks Using Contextualized Word Representations,” filed Dec. 26, 2017, which is incorporated herein by reference in its entirety (including the Appendix containing the article entitled “Deep contextualized word representations” and Supplemental Material) for all purposes.

Provisional Applications (1)
Number Date Country
62610447 Dec 2017 US