The present invention relates to a language model score calculation apparatus, a learning apparatus, a language model score calculation method, a learning method, and a program.
In speech recognition, machine translation, or the like, a language model is needed for performing linguistic predictions. The language model can calculate language model scores (for example, a prediction probability of a word, etc.) that represent linguistic likelihood, and the performance thereof decides the performance of speech recognition, machine translation, or the like. While various kinds of language models have been proposed so far, in recent years, a language model based on a recurrent neural network (RNN) has attracted attention (for example, see NPL 1 and 2). This recurrent neural network based language model has very high language prediction performance and is actively used in speech recognition, machine translation, or the like.
The recurrent neural network based language model can learn from text data. When learning from text data that corresponds to a target task, the recurrent neural network based language model can achieve high language prediction performance. The learning of the recurrent neural network based language model refers to updating of a model parameter (namely, a parameter of the recurrent neural network) through such learning.
When predicting a current word wi under the condition that a word sequence w1, . . . , wi-1 has been observed, the recurrent neural network based language model receives, as inputs, the immediately preceding word wi-1 and the immediately preceding output si-1 of an intermediate layer, and outputs probability distribution of a prediction probability P(wi|wi-1, si-1, θ) of the current word wi. In this prediction probability P, θ is a model parameter of the recurrent neural network based language model. The prediction probability P is a language model score.
Since a word sequence w1, . . . , wi-2, which includes all the words up to the (i−2)th word, is embedded in the output si-1 of the intermediate layer, the recurrent neural network based language model can calculate the prediction probability P(wi|wi-1, si-1, θ) of the current word wi, namely, the language model score, by explicitly using long-term word history information. Hereinafter, an output si of the intermediate layer is also referred to as a "word history vector". Various kinds of recurrent neural networks, such as an LSTM (Long Short-Term Memory) and a GRU (Gated Recurrent Unit), can be used in the recurrent neural network based language model.
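As a concrete, non-limiting illustration of this conventional prediction step, the following minimal sketch, assuming PyTorch and illustrative vocabulary and layer sizes, computes the prediction probability distribution of wi from wi-1 and si-1:

```python
# Minimal sketch (assuming PyTorch) of one step of a conventional recurrent neural network
# based language model: given the previous word w_{i-1} and the previous intermediate-layer
# output s_{i-1}, produce P(w_i | w_{i-1}, s_{i-1}, theta) and the new output s_i.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HIDDEN_DIM = 10000, 200, 512    # illustrative sizes (assumptions)

embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)            # word vector of w_{i-1}
rnn_cell = nn.LSTMCell(EMB_DIM, HIDDEN_DIM)          # e.g. an LSTM; a GRU would also work
output_layer = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)     # projects the hidden state onto the vocabulary

def predict_step(w_prev, state_prev):
    """One prediction step; state_prev is the LSTM (h, c) pair playing the role of s_{i-1}."""
    phi = embed(w_prev)                               # word vector of w_{i-1}
    h, c = rnn_cell(phi, state_prev)                  # new intermediate-layer output s_i
    probs = torch.softmax(output_layer(h), dim=-1)    # prediction probability distribution of w_i
    return probs, (h, c)

# Example: predict the distribution of the next word from an arbitrary word index,
# starting from an all-zero state.
s_prev = (torch.zeros(1, HIDDEN_DIM), torch.zeros(1, HIDDEN_DIM))
probs, s_cur = predict_step(torch.tensor([3]), s_prev)
```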
However, since the conventional recurrent neural network based language model does not take speakers into account, its use has been assumed for, for example, speech recognition of a single speaker. Therefore, with the conventional recurrent neural network based language model, a prediction probability of a current word (namely, a language model score of the recurrent neural network based language model) cannot be calculated by explicitly using information such as who has spoken what and who is going to speak next in a conversation etc. among a plurality of people.
With the foregoing in view, it is an object of an embodiment of the present invention to calculate a language model score taking speakers into account.
To achieve the above object, according to the embodiment of the present invention, there is provided a language model score calculation apparatus that calculates a prediction probability of a word wi as a language model score of a language model based on a recurrent neural network, the language model score calculation apparatus including: word vector representation means for converting a word wi-1 that is observed immediately before the word wi into a word vector ϕ(wi-1); speaker vector representation means for converting a speaker label ri-1 corresponding to the word wi-1 and a speaker label ri corresponding to the word wi into a speaker vector ψ(ri-1) and a speaker vector ψ(ri), respectively; word history vector representation means for calculating a word history vector si by using the word vector ϕ(wi-1), the speaker vector ψ(ri-1), and a word history vector si-1 that is obtained when a prediction probability of the word wi-1 is calculated; and prediction probability calculation means for calculating the prediction probability of the word wi by using the word history vector si and the speaker vector ψ(ri).
A language model score taking speakers into account can be calculated.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings. In the embodiment of the present invention, a language model score calculation apparatus 10 that calculates a language model score taking speakers into account in a recurrent neural network based language model will be described. Also, in the embodiment of the present invention, a model parameter learning apparatus 20 that learns a model parameter θ of the recurrent neural network based language model, which can calculate a language model score taking speakers into account, will be described.
Generally, contents of speech vary depending on the gender, role, etc. of a speaker. For example, when a prediction probability of a word to be spoken next in a dialog between an operator in a call center and a customer is calculated, if the word sequence that has been spoken by each of the operator and the customer can explicitly be observed and if it is known whether the next speaker is the operator or the customer, more sophisticated prediction of the next word to be spoken can be expected.
Namely, when predicting a word that the operator is going to speak next, for example, it can be expected that the operator is going to speak a word based on a speech style that has been used by the operator and that the operator is going to speak a word that corresponds to the immediately preceding word spoken by the customer. Therefore, by calculating a language model score taking speakers into account, more sophisticated word prediction can be performed.
The language model score calculation apparatus 10 according to the embodiment of the present invention explicitly introduces speaker information to the recurrent neural network based language model so that a language model score taking speakers into account is calculated. Speaker information refers to a speaker label that represents a speaker who has spoken a word or a speaker who is going to speak a word. Hereinafter, a speaker who has spoken a word wi or a speaker who is going to speak a word wi is represented by a speaker label ri. For example, when a prediction probability of the i-th word wi is calculated, speakers who have spoken a word sequence w1, . . . , wi-1 that has been observed are represented by a speaker label sequence r1, . . . , ri-1, and a speaker who is going to speak a word wi is represented by a speaker label ri.
First, a functional configuration of the language model score calculation apparatus 10 according to the embodiment of the present invention will be described with reference to
As illustrated in
The language model 100 receives, as inputs, a word wi-1, a speaker label ri-1 that corresponds to the word wi-1, a speaker label ri that corresponds to a word wi, a word history vector si-1, and a model parameter θ and outputs probability distribution of a prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) of the word wi (namely, a prediction probability distribution of the word wi). In this operation, the language model 100 of the language model score calculation apparatus 10 uses a model parameter θ that has been learned by a model parameter learning apparatus 20. This prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) is a language model score of the language model 100. However, the language model score is not limited to this example. A value based on this prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) (for example, a value obtained by taking a natural logarithm of the prediction probability P(wi|ri, wi-1, ri-1, si-1, θ)) may serve as a language model score.
A value of the speaker label ri can be determined based on channels or the like of voice inputs. For example, in a case in which there are two channels, which are channel A and channel B, a value of the speaker label ri corresponding to the word wi included in the voice input from the channel A can be determined to be “1”, and a value of the speaker label ri corresponding to the word wi included in the voice input from the channel B can be determined to be “2”. Alternatively, for example, as preprocessing to be performed before the word wi is input to the language model 100, the speaker label ri may be acquired by any speaker label determiner.
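As a purely illustrative sketch of this channel-based determination, with hypothetical channel names that are not part of the invention, a speaker label could be derived as follows:

```python
# Purely illustrative sketch: determining the speaker label from the voice input channel.
# The channel names and the two-speaker setting are assumptions.
CHANNEL_TO_SPEAKER_LABEL = {"channel_A": 1, "channel_B": 2}

def speaker_label_for(channel: str) -> int:
    """Return the speaker label r_i for a word contained in the voice input of the given channel."""
    return CHANNEL_TO_SPEAKER_LABEL[channel]

assert speaker_label_for("channel_A") == 1
assert speaker_label_for("channel_B") == 2
```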
The language model 100 includes a word vector representation unit 101, a speaker vector representation unit 102, a word history vector representation unit 103, and a prediction probability calculation unit 104 as the functional units.
When a prediction probability distribution of the word wi is calculated, the word vector representation unit 101 receives, as inputs, a word wi-1 and a model parameter θ and outputs a word vector ϕ(wi-1). Namely, the word vector representation unit 101 converts the word wi-1 into the word vector ϕ(wi-1) in accordance with the model parameter θ.
For example, as the word vector ϕ(wi-1), it is possible to adopt a one-hot vector in which only the element of a dimension corresponding to the word wi-1 is set to 1 and the elements other than that are set to 0. For example, the one-hot vector is discussed in the above NPL 1. Alternatively, for example, a method in which linear conversion is performed on the one-hot vector could be adopted. For example, an example of the linear conversion performed on the one-hot vector is discussed in the above NPL 2.
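The following sketch, assuming PyTorch and illustrative sizes, shows both representations mentioned above: an explicit one-hot vector followed by a linear conversion, and the equivalent embedding lookup. The same construction can also be applied to the speaker vectors described next.

```python
# Sketch (assuming PyTorch and illustrative sizes) of two ways to obtain phi(w_{i-1}):
# an explicit one-hot vector with a linear conversion, and the equivalent embedding lookup.
import torch
import torch.nn as nn

VOCAB_SIZE, WORD_VEC_DIM = 10000, 200                        # assumptions

def one_hot(word_index: int) -> torch.Tensor:
    """One-hot vector: only the dimension corresponding to the word is 1, all others are 0."""
    v = torch.zeros(VOCAB_SIZE)
    v[word_index] = 1.0
    return v

linear = nn.Linear(VOCAB_SIZE, WORD_VEC_DIM, bias=False)     # linear conversion of the one-hot vector
embedding = nn.Embedding(VOCAB_SIZE, WORD_VEC_DIM)           # lookup realizing the same linear conversion

w_prev = 42                                                  # arbitrary word index
phi_from_one_hot = linear(one_hot(w_prev))                   # phi(w_{i-1}) via one-hot + linear conversion
phi_from_lookup = embedding(torch.tensor([w_prev]))          # phi(w_{i-1}) via direct lookup
```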
When a prediction probability distribution of the word wi is calculated, the speaker vector representation unit 102 receives, as inputs, a speaker label ri-1 and the model parameter θ and outputs a speaker vector ψ(ri-1). In addition, the speaker vector representation unit 102 receives, as inputs, a speaker label ri and the model parameter θ and outputs a speaker vector ψ(ri). Namely, the speaker vector representation unit 102 converts speaker labels ri-1 and ri into speaker vectors ψ(ri-1) and ψ(ri), respectively, in accordance with the model parameter θ.
For example, as the speaker vector ψ(ri-1), it is possible to adopt a one-hot vector in which only the element of a dimension corresponding to the speaker label ri-1 is set to 1 and the elements other than that are set to 0. The same applies to the speaker vector ψ(ri). For example, the one-hot vector is discussed in the above NPL 1. Alternatively, for example, a method in which linear conversion is performed on the one-hot vector could be adopted. For example, an example of the linear conversion performed on the one-hot vector is discussed in the above NPL 2.
When a prediction probability distribution of the word wi is calculated, the word history vector representation unit 103 receives, as inputs, the word vector ϕ(wi-1), the speaker vector ψ(ri-1), a past word history vector si-1 and the model parameter θ, and outputs a word history vector si. Namely, the word history vector representation unit 103 converts the word vector ϕ(wi-1), the speaker vector ψ(ri-1), and the past word history vector si-1 into the word history vector si in accordance with the model parameter θ. In this operation, the word history vector representation unit 103 generates a vector (hereinafter, referred to as “concatenated vector”) in which the word vector ϕ(wi-1) and the speaker vector ψ(ri-1) are concatenated. Next, the word history vector representation unit 103 performs conversion processing on this concatenated vector based on the recurrent neural network so that the word history vector representation unit 103 can output the word history vector si. For example, the conversion processing based on the recurrent neural network is discussed in the above NPLs 1 and 2.
For example, if the dimensionality of the word vector ϕ(wi-1) is 200 and the dimensionality of the speaker vector ψ(ri-1) is 64, the concatenated vector is represented by a 264-dimensional vector. In addition, the past word history vector si-1 is calculated through recursive processing performed by the word history vector representation unit 103. A past word history vector s0 used when a prediction probability distribution of the first word w1 is calculated may be a vector whose elements are all set to zero.
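A minimal sketch of this conversion processing is shown below, assuming PyTorch and a GRU cell; only the 200- and 64-dimensional inputs follow the example above, and the hidden dimensionality and choice of cell are assumptions.

```python
# Sketch (assuming PyTorch and a GRU cell) of the word history vector representation processing:
# concatenate phi(w_{i-1}) (200 dims) and psi(r_{i-1}) (64 dims) into a 264-dimensional vector
# and feed it, together with s_{i-1}, to the recurrent cell to obtain s_i.
import torch
import torch.nn as nn

WORD_VEC_DIM, SPK_VEC_DIM, HIDDEN_DIM = 200, 64, 512
rnn_cell = nn.GRUCell(WORD_VEC_DIM + SPK_VEC_DIM, HIDDEN_DIM)

def word_history_step(phi_w_prev, psi_r_prev, s_prev):
    concatenated = torch.cat([phi_w_prev, psi_r_prev], dim=-1)   # 264-dimensional concatenated vector
    return rnn_cell(concatenated, s_prev)                        # word history vector s_i

s_prev = torch.zeros(1, HIDDEN_DIM)                              # e.g. s_0 with all elements set to zero
s_cur = word_history_step(torch.randn(1, WORD_VEC_DIM), torch.randn(1, SPK_VEC_DIM), s_prev)
```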
The prediction probability calculation unit 104 receives, as inputs, the word history vector si, the speaker vector ψ(ri), and the model parameter θ, and outputs a prediction probability distribution of the word wi. Namely, the prediction probability calculation unit 104 outputs probability distribution of a prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) of the word wi based on the word history vector si and the speaker vector ψ(ri) in accordance with the model parameter θ. The prediction probability calculation unit 104 can obtain the prediction probability distribution of the word wi by performing conversion using a softmax function. For example, the conversion using a softmax function is discussed in the above NPLs 1 and 2.
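The following sketch, assuming PyTorch, illustrates this calculation; concatenating the word history vector and the speaker vector ψ(ri) before the output layer is an assumption made for illustration, and the exact way the two are combined may differ.

```python
# Sketch (assuming PyTorch) of the prediction probability calculation: the word history vector
# s_i and the speaker vector psi(r_i) are combined and converted with a softmax function into a
# probability distribution over the vocabulary. Concatenation before the output layer is an
# illustrative assumption.
import torch
import torch.nn as nn

HIDDEN_DIM, SPK_VEC_DIM, VOCAB_SIZE = 512, 64, 10000             # assumptions
output_layer = nn.Linear(HIDDEN_DIM + SPK_VEC_DIM, VOCAB_SIZE)

def prediction_step(s_cur, psi_r_cur):
    logits = output_layer(torch.cat([s_cur, psi_r_cur], dim=-1))
    return torch.softmax(logits, dim=-1)                          # prediction probability distribution of w_i

probs = prediction_step(torch.randn(1, HIDDEN_DIM), torch.randn(1, SPK_VEC_DIM))
```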
The prediction probability distribution of the word wi output by the above operation can be used in speech recognition, for example. Specifically, for example, based on the prediction probability of the word wi, scoring is performed on the top M (≥1) speech recognition hypotheses output from a speech recognition system so that the speech recognition hypotheses are rescored. For example, the rescoring is performed by using a score obtained by adding a score that is output from the speech recognition system and a score that is the natural logarithm of this prediction probability.
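A minimal rescoring sketch of this kind is shown below; the hypothesis texts and scores are dummy values, and the natural-log language model scores are assumed to have already been summed over each hypothesis.

```python
# Illustrative rescoring sketch: each of the top M hypotheses gets a new score equal to its
# speech recognition score plus the natural logarithm of the language model prediction
# probability (here already summed per hypothesis). Texts and numbers are dummy values.
import math

def rescore(hypotheses, lm_log_probs):
    """hypotheses: list of (text, asr_score); lm_log_probs: matching list of summed log P values."""
    rescored = [(text, asr_score + lm_lp)
                for (text, asr_score), lm_lp in zip(hypotheses, lm_log_probs)]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

best_hypothesis, best_score = rescore(
    [("hello there", -12.3), ("hollow there", -12.1)],
    [math.log(0.04), math.log(0.001)])[0]
```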
<Functional Configuration of Model Parameter Learning Apparatus 20>
Next, a functional configuration of the model parameter learning apparatus 20 according to the embodiment of the present invention will be described with reference to
As illustrated in
The language model 100 receives, as inputs, a word sequence w1, . . . , wN and a speaker label sequence r1, . . . , rN, and outputs prediction probability distributions of the respective words wi by using a model parameter θ that has not yet been learned. Namely, the language model 100 receives, as inputs, words wi-1 in sequence from i=1 to i=N, together with the corresponding speaker labels ri and ri-1, and outputs prediction probability distributions of the respective words wi. In this way, prediction probability distributions of the word w1 to the word wN can be obtained. The word sequence w1, . . . , wN and the speaker label sequence r1, . . . , rN are, for example, a word sequence and a speaker label sequence generated from conversation data of a conversation among a plurality of persons.
The model parameter learning unit 200 receives, as inputs, the word sequence w1, . . . , wN and the prediction probability distributions of the respective words wi, which have been output from the language model 100, updates the model parameter θ based on the inputs, and outputs the updated model parameter θ. In this way, the model parameter θ is learned.
In this operation, the model parameter learning unit 200 updates the model parameter θ to a value such that a likelihood function L(θ) expressed by formula (1) below is maximized.
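Formula (1) itself is not reproduced in this text. A form consistent with the surrounding description, namely the likelihood of the input word sequence under the model (equivalently maximized in its logarithmic form), would be:

```latex
% Assumed form of formula (1): the likelihood of the input word sequence under the model.
% The logarithmic form shown alongside has the same maximizer and is the form typically
% optimized in practice.
L(\theta) = \prod_{i=1}^{N} P(w_i \mid r_i, w_{i-1}, r_{i-1}, s_{i-1}, \theta),
\qquad
\log L(\theta) = \sum_{i=1}^{N} \log P(w_i \mid r_i, w_{i-1}, r_{i-1}, s_{i-1}, \theta)
```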
In this formula, P(wi|ri, wi-1, ri-1, si-1, θ) is the prediction probability of the word wi in the input word sequence w1, . . . , wN. For example, in a case where the word wi can be “word 1”, “word 2”, or “word 3”, if the i-th word wi in the input word sequence is “word 2”, P(wi|ri, wi-1, ri-1, si-1, θ) is the prediction probability of the word “word 2”, namely, P(word 2|ri, wi-1, ri-1, si-1, θ). Therefore, the model parameter θ that maximizes the likelihood function L(θ) expressed by formula (1) above means a model parameter with which a prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) of a correct word wi (namely, the i-th word wi in the input word sequence) is maximized.
Thus, the model parameter learning unit 200 can estimate the model parameter θ that maximizes L(θ), namely, argmax L(θ), and use this estimated value as the updated model parameter θ. Various methods can be used to estimate the model parameter θ that maximizes the likelihood function L(θ); examples of such methods include the back propagation method.
In the embodiment of the present invention, while the language model score calculation apparatus 10 and the model parameter learning apparatus 20 have been described as different apparatuses, this configuration is merely an example. The language model score calculation apparatus 10 and the model parameter learning apparatus 20 may be the same apparatus, for example.
Next, processing in which the language model score calculation apparatus 10 according to the embodiment of the present invention calculates prediction probability distributions will be described with reference to
Step S101: The word vector representation unit 101 receives, as inputs, an immediately preceding word wi-1 and the model parameter θ, and obtains a word vector ϕ(wi-1). Namely, the word vector representation unit 101 converts a word wi-1 into a word vector ϕ(wi-1) in accordance with the model parameter θ.
Step S102: The speaker vector representation unit 102 receives, as inputs, a speaker label ri-1 and the model parameter θ and obtains a speaker vector ψ(ri-1). Namely, the speaker vector representation unit 102 converts a speaker label ri-1 into a speaker vector ψ(ri-1) in accordance with the model parameter θ.
Step S103: The speaker vector representation unit 102 receives, as inputs, a speaker label ri and the model parameter θ, and obtains a speaker vector ψ(ri). Namely, the speaker vector representation unit 102 converts a speaker label ri into a speaker vector ψ(ri) in accordance with the model parameter θ.
The above processing of steps S101 to S103 may be performed in any order. Alternatively, the processing of step S101 may be performed in parallel with the processing of step S102 or step S103. Still alternatively, the processing of step S103 may be performed after the processing of step S104 described below has been performed.
Step S104: The word history vector representation unit 103 receives, as inputs, the word vector ϕ(wi-1), the speaker vector ψ(ri-1), a past word history vector si-1, and the model parameter θ, and obtains a word history vector si. Namely, after generating a concatenated vector in which the word vector ϕ(wi-1) and the speaker vector ψ(ri-1) are concatenated, the word history vector representation unit 103 converts the concatenated vector and the past word history vector si-1 into the word history vector si in accordance with the model parameter θ.
Step S105: The prediction probability calculation unit 104 receives, as inputs, the word history vector si, the speaker vector ψ(ri), and the model parameter θ, and obtains a prediction probability distribution of the word wi. Namely, the prediction probability calculation unit 104 obtains probability distribution of a prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) of the word wi based on the word history vector si and the speaker vector ψ(ri) in accordance with the model parameter θ.
In this way, for example, a prediction probability P(wi|ri, wi-1, ri-1, si-1, θ) of each of the words wi is obtained as a language model score of the language model 100. Since the individual prediction probability P is a language model score taking the speakers into account, more sophisticated word predictions can be performed based on such a language model score.
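As a consolidated, non-limiting illustration of steps S101 to S105, the following sketch, assuming PyTorch, a GRU cell, illustrative sizes, concatenation in the output layer, and zero-based speaker indices, performs one prediction step:

```python
# Consolidated sketch (assuming PyTorch) of steps S101 to S105 for one prediction step.
# Vocabulary size, dimensionalities, the GRU cell, the concatenation in the output layer,
# and zero-based speaker indices are all illustrative assumptions.
import torch
import torch.nn as nn

class SpeakerAwareRNNLM(nn.Module):
    def __init__(self, vocab_size=10000, n_speakers=2, word_dim=200, spk_dim=64, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, word_dim)        # S101: phi(w_{i-1})
        self.spk_embed = nn.Embedding(n_speakers, spk_dim)          # S102/S103: psi(r_{i-1}), psi(r_i)
        self.rnn_cell = nn.GRUCell(word_dim + spk_dim, hidden_dim)  # S104: word history vector s_i
        self.output = nn.Linear(hidden_dim + spk_dim, vocab_size)   # S105: softmax over the vocabulary

    def step(self, w_prev, r_prev, r_cur, s_prev):
        phi = self.word_embed(w_prev)                                              # S101
        psi_prev, psi_cur = self.spk_embed(r_prev), self.spk_embed(r_cur)          # S102, S103
        s_cur = self.rnn_cell(torch.cat([phi, psi_prev], dim=-1), s_prev)          # S104
        probs = torch.softmax(self.output(torch.cat([s_cur, psi_cur], dim=-1)), dim=-1)  # S105
        return probs, s_cur

# Example: one prediction step starting from an all-zero word history vector s_0.
model = SpeakerAwareRNNLM()
probs, s_cur = model.step(torch.tensor([3]), torch.tensor([0]), torch.tensor([1]),
                          torch.zeros(1, 512))
```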
<Processing for Learning Model Parameters>
Next, processing in which the model parameter learning apparatus 20 according to the embodiment of the present invention learns a model parameter will be described with reference to
Step S201: The language model 100 receives, as inputs, a word sequence w1, . . . , wN and a speaker label sequence r1, . . . , rN, and outputs prediction probability distributions of words wi by using the model parameter θ that has not yet been learned. Namely, the language model 100 receives, as inputs, words wi-1 in sequence from i=1 to i=N, speaker labels ri, and speaker labels ri-1, and outputs prediction probability distributions of the respective words wi by performing the above processing of steps S101 to S105. In this way, the respective prediction probability distributions of the word w1 to the word wN can be obtained.
Step S202: Next, the model parameter learning unit 200 receives, as inputs, the word sequence w1, . . . , wN and the prediction probability distributions of the respective words wi output from the language model 100, updates the model parameter θ based on the inputs, and outputs the updated model parameter θ. In this operation, the model parameter learning unit 200 updates the model parameter θ such that the likelihood function L(θ) expressed by formula (1) above is maximized. In this way, the model parameter θ is learned.
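A minimal learning sketch under the same assumptions as the earlier sketches is shown below; it reuses the SpeakerAwareRNNLM class from the sketch following step S105, maximizes the likelihood of formula (1) by minimizing the negative log-likelihood with back propagation, and uses dummy word and speaker label sequences and assumed optimizer settings.

```python
# Sketch (assuming PyTorch) of learning the model parameter theta: the likelihood of formula (1)
# is maximized by minimizing the negative log-likelihood with back propagation. This reuses the
# SpeakerAwareRNNLM class from the earlier sketch; the data below are dummy values.
import torch
import torch.nn as nn

model = SpeakerAwareRNNLM()                          # class defined in the earlier sketch
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
nll = nn.NLLLoss()

words = torch.tensor([5, 17, 3, 8])                  # dummy word sequence w_1, ..., w_N (indices)
speakers = torch.tensor([0, 0, 1, 1])                # dummy speaker label sequence r_1, ..., r_N

for epoch in range(10):
    optimizer.zero_grad()
    s_prev = torch.zeros(1, 512)                     # s_0: all elements set to zero
    loss = torch.zeros(())
    for i in range(1, len(words)):                   # predict w_i from w_{i-1}, r_{i-1}, r_i, s_{i-1}
        probs, s_prev = model.step(words[i - 1:i], speakers[i - 1:i], speakers[i:i + 1], s_prev)
        loss = loss + nll(torch.log(probs), words[i:i + 1])   # -log P(w_i | r_i, w_{i-1}, r_{i-1}, s_{i-1}, theta)
    loss.backward()                                  # back propagation
    optimizer.step()                                 # update theta toward higher likelihood L(theta)
```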
In a case where, for example, multiple sets of a word sequence w1, . . . , wN and a speaker label sequence r1, . . . , rN are provided, the above processing of steps S201 and S202 may be repeated for each set.
Next, a hardware configuration of the language model score calculation apparatus 10 and the model parameter learning apparatus 20 according to the embodiment of the present invention will be described with reference to
As illustrated in
The input device 301 is, for example, a keyboard, a mouse, a touch panel, or the like and is used for inputting various user operations. The display device 302 is, for example, a display or the like and displays results of processing performed by the language model score calculation apparatus 10. The language model score calculation apparatus 10 and the model parameter learning apparatus 20 may be provided with at least one of the input device 301 and the display device 302.
The external I/F 303 is an interface between the language model score calculation apparatus 10 and an external device. The external device includes a recording medium 303a or the like. The language model score calculation apparatus 10 can read from and write to the recording medium 303a or the like via the external I/F 303. In the recording medium 303a, at least one program that implements the language model 100 and the model parameter learning unit 200, a model parameter θ, etc. may be recorded.
Examples of the recording medium 303a include a flexible disk, a CD (Compact Disc), a DVD (Digital Versatile Disc), an SD memory card (Secure Digital memory card), and a USB (Universal Serial Bus) memory card.
The RAM 304 is a volatile semiconductor memory that temporarily holds programs and data. The ROM 305 is a non-volatile semiconductor memory that can hold programs and data even after the power is turned off. The ROM 305 stores, for example, setting information on the OS (Operating System), setting information on a communication network, or the like.
The processor 306 is, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like. The processor 306 is an arithmetic device that reads programs and data from the ROM 305, the auxiliary storage device 308, or the like into the RAM 304 and performs processing. The language model 100 and the model parameter learning unit 200 are implemented when at least one program stored in the auxiliary storage device 308 is executed by the processor 306, for example. The language model score calculation apparatus 10 and the model parameter learning apparatus 20 may include both the CPU and the GPU, or may include either the CPU or the GPU, as the processor 306.
The communication I/F 307 is an interface for connecting the language model score calculation apparatus 10 to the communication network. At least one program for implementing the language model 100 and the model parameter learning unit 200 may be acquired (downloaded) from a predetermined server or the like via the communication I/F 307.
The auxiliary storage device 308 is a non-volatile storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive) to store programs and data. Examples of the programs and data stored in the auxiliary storage device 308 include an OS, an application program, at least one program for implementing the language model 100 and the model parameter learning unit 200, and the model parameter θ.
The language model score calculation apparatus 10 and the model parameter learning apparatus 20 according to the embodiment of the present invention can implement the various processing described above by having the hardware configuration illustrated in
As described above, the language model score calculation apparatus 10 according to the embodiment of the present invention can calculate a prediction probability of a word wi, while taking speakers into account, as a language model score of the language model 100 based on a recurrent neural network, by using the speaker label ri-1 that corresponds to the immediately preceding word wi-1 and the speaker label ri that corresponds to the current word wi. As a result, by using the language model score calculated by the language model score calculation apparatus 10 according to the embodiment of the present invention, more sophisticated word prediction can be performed.
The present invention is not limited to the above embodiment specifically disclosed, and various modifications and changes can be made without departing from the scope of the invention.
Number | Date | Country | Kind
---|---|---|---
2018-153495 | Aug 2018 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/024799 | 6/21/2019 | WO | 00