The present invention relates to a training method, a storage medium, and a training device.
In the field of data analysis, an autoencoder is typically used to extract feature data, called a latent variable, in a latent space having a relatively small number of dimensions from real data in a real space having a relatively large number of dimensions. For example, there is a case where data analysis accuracy is improved by using the feature data extracted from the real data by the autoencoder instead of the real data itself.
The related art, for example, learns a latent variable by performing unsupervised learning using a neural network. Furthermore, for example, there is a technique for learning the latent variable as a probability distribution. Furthermore, for example, there is a technique for learning a Gaussian mixture distribution that expresses the probability distribution of the latent space at the same time as learning an autoencoder.
According to an aspect of the embodiments, a training method of an autoencoder that performs encoding and decoding, for a computer to execute a process, includes encoding input data; obtaining a probability distribution of feature data obtained by encoding the input data; adding a noise to the feature data; generating decoded data by decoding the feature data to which the noise is added; and training the autoencoder and the probability distribution of the feature data so that an information entropy of the probability distribution and an error between the decoded data and the input data are decreased.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
In the related art, in a case where a probability distribution of feature data is used instead of a probability distribution of real data or the like, it may be difficult to improve data analysis accuracy. For example, as the degree of match between the probability distribution of the real data and the probability distribution of the feature data decreases, it becomes more difficult to improve the data analysis accuracy.
In one aspect, an object of the present invention is to improve data analysis accuracy.
According to one aspect, it is possible to improve data analysis accuracy.
Hereinafter, an embodiment of a learning method, a learning program, and a learning device according to the present invention will be described in detail with reference to the drawings.
(Example of Learning Method According to Embodiment)
The autoencoder is used to improve efficiency of data analysis, for example, by reducing a data analysis processing amount, improving data analysis accuracy, or the like. At the time of data analysis, it is considered to reduce the data analysis processing amount, improve the data analysis accuracy, or the like by using the feature data in the latent space having the relatively small number of dimensions instead of the real data in the real space having the relatively large number of dimensions.
Specifically, an example of the data analysis is anomaly detection for determining whether or not target data is outlier data. The outlier data is data indicating an outlier that is statistically unlikely to appear and has a relatively high possibility of being an abnormal value. At the time of anomaly detection, it is considered to use the probability distribution of the feature data in the latent space instead of the probability distribution of the real data in the real space. Then, it is considered to determine whether or not the target data is the outlier data in the real space on the basis of whether or not the feature data extracted from the target data by the autoencoder is the outlier data in the latent space.
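The anomaly detection described above can be sketched as follows. This is a minimal numpy illustration, assuming a multivariate Gaussian as a stand-in for the learned latent distribution and a hypothetical 99th-percentile threshold; it only illustrates scoring latent points by negative log-density.

```python
import numpy as np

def outlier_scores(z, mean, cov):
    """Negative log-density of latent points z under a multivariate Gaussian
    standing in for the learned latent distribution (a hypothetical stand-in)."""
    d = z - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(2 * np.pi * cov)
    maha = np.einsum('ij,jk,ik->i', d, inv, d)   # squared Mahalanobis distance per row
    return 0.5 * (maha + logdet)

rng = np.random.default_rng(0)
z_normal = rng.normal(0.0, 1.0, size=(100, 2))   # typical latent points
z_target = np.array([[8.0, 8.0]])                # statistically unlikely latent point
mean, cov = z_normal.mean(axis=0), np.cov(z_normal.T)
scores = outlier_scores(np.vstack([z_normal, z_target]), mean, cov)
threshold = np.quantile(scores[:-1], 0.99)       # hypothetical cutoff from typical data
is_outlier = scores > threshold
```

A point far from the bulk of the latent distribution receives a large score and is flagged, matching the idea of judging outlier data in the latent space.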
However, in the related art, even if the probability distribution of the feature data in the latent space is used instead of the probability distribution of the real data in the real space, there is a case where it is difficult to improve the data analysis accuracy. Specifically, with the autoencoder according to the related art, it is difficult to match the probability distribution of the real data in the real space and the probability distribution of the feature data in the latent space, and to make the probability density of the real data and the probability density of the feature data proportional to each other.
Specifically, even if the autoencoder is learned with reference to Non-Patent Document 1 described above, it is not guaranteed to match the probability distribution of the real data in the real space and the probability distribution of the feature data in the latent space. Furthermore, even if the autoencoder is learned with reference to Non-Patent Document 2 described above, an independent normal distribution for each variable is assumed, and it is not guaranteed to match the probability distribution of the real data in the real space and the probability distribution of the feature data in the latent space. Furthermore, even if the autoencoder is learned with reference to Non-Patent Document 3 described above, because the probability distribution of the feature data in the latent space is limited, it is not guaranteed to match the probability distribution of the real data in the real space and the probability distribution of the feature data in the latent space.
Therefore, even if the feature data extracted from the target data by the autoencoder is the outlier data in the latent space, there is a case where the target data is not the outlier data in the real space, and there is a case where it is not possible to improve anomaly detection accuracy.
Therefore, in the present embodiment, a learning method will be described that can learn an autoencoder that easily matches the probability distribution of the real data in the real space and the probability distribution of the feature data in the latent space and can improve the data analysis accuracy.
In
(1-1) The learning device 100 generates feature data z obtained by encoding data x from a domain D to be a sample for learning the autoencoder 110. The feature data z is a vector of which the number of dimensions is less than that of the data x. The data x is a vector. The learning device 100 generates the feature data z corresponding to a function value fθ (x) obtained by substituting the data x, for example, by an encoder 111 that achieves a function fθ (⋅) for encoding.
(1-2) The learning device 100 calculates a probability distribution Pzψ(z) of the feature data z. For example, the learning device 100 calculates the probability distribution Pzψ(z) of the feature data z on the basis of the model to be learned, before being updated, that defines the probability distribution. The learning target is, for example, a parameter ψ that defines the probability distribution. Here, before being updated means a state in which the parameter ψ that defines the probability distribution to be learned has not yet been updated. Specifically, the learning device 100 calculates the probability distribution Pzψ(z) of the feature data z according to a probability density function (PDF) including the parameter ψ. The probability density function is, for example, parametric.
(1-3) The learning device 100 generates added data z+ε by adding a noise ε to the feature data z. For example, the learning device 100 generates the noise ε by a noise generator 112 and generates the added data z+ε. The noise ε is a uniform random number, drawn from a distribution whose average is zero, that has as many dimensions as the feature data z and is uncorrelated between dimensions.
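The noise generation of (1-3) can be sketched as follows; the noise width is a hypothetical hyperparameter, and numpy's uniform generator stands in for the noise generator 112.

```python
import numpy as np

def add_uniform_noise(z, width=1.0, rng=None):
    """Add zero-mean uniform noise, independently per dimension (a sketch;
    the noise width is a hypothetical hyperparameter)."""
    rng = rng or np.random.default_rng()
    eps = rng.uniform(-width / 2, width / 2, size=z.shape)  # same shape as z
    return z + eps, eps

rng = np.random.default_rng(42)
z = np.zeros((10000, 3))
z_added, eps = add_uniform_noise(z, width=1.0, rng=rng)
```

Because each dimension is sampled independently, the noise is uncorrelated between dimensions, and the symmetric range around zero gives an average of zero.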
(1-4) The learning device 100 generates decoded data x∨ by decoding the added data z+ε. The decoded data x∨ is a vector. Here, x∨ in the text indicates a symbol adding v to the upper portion of x in the figures and formulas. The learning device 100 generates the decoded data x∨ corresponding to a function value gξ (z+ε) obtained by substituting the added data z+ε, for example, by a decoder 113 that achieves a function gξ (⋅) for decoding.
(1-5) The learning device 100 calculates a first error D1 between the generated decoded data x∨ and the data x. The learning device 100 calculates the first error D1 according to the following formula (1).
[Expression 1]
D1 = (x − x̌)²  (1)
(1-6) The learning device 100 calculates an information entropy R of the calculated probability distribution Pzψ(z). The information entropy R is a self-information amount and indicates the difficulty of generating the feature data z. The learning device 100 calculates the information entropy R, for example, according to the following formula (2).
[Expression 2]
R=−log(Pzψ(z)) (2)
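Formula (2) can be illustrated as follows; a standard normal density is used here as a hypothetical stand-in for the probability distribution Pzψ(z).

```python
import numpy as np

def information_entropy(z, pdf):
    """R = -log Pz(z): rarer feature data yields a larger value."""
    return -np.log(pdf(z))

# Standard normal PDF as a hypothetical stand-in for Pz_psi(z)
pdf = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
r_common = information_entropy(0.0, pdf)   # high-density (easy to generate) point
r_rare = information_entropy(4.0, pdf)     # low-density (hard to generate) point
```

A point that the distribution rarely generates receives a larger R, which is the sense in which R indicates the difficulty of generating the feature data z.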
(1-7) The learning device 100 learns the autoencoder 110 and the probability distribution of the feature data z so as to minimize the calculated first error D1 and the information entropy R of the probability distribution. For example, the learning device 100 learns an encoding parameter θ of the autoencoder 110, a decoding parameter ξ of the autoencoder 110, and the parameter ψ of the model so as to minimize a weighted sum E according to the following formula (3). The weighted sum E is the sum of the first error D1 to which a weight λ1 is applied and the information entropy R of the probability distribution.
[Expression 3]
E = λ1D1 + R  (3)
As a result, the learning device 100 can learn the autoencoder 110 that can extract the feature data z from the input data x so that a proportional tendency appears between a probability density of the input data x and a probability density of the feature data z. Therefore, the learning device 100 may improve the data analysis accuracy by the learned autoencoder 110.
Here, for convenience, the description has focused on a case where the number of pieces of data x to be a sample for learning the autoencoder 110 is one. However, the number is not limited to this. For example, there may also be a case where the learning device 100 learns the autoencoder 110 on the basis of a set of the data x to be samples for learning the autoencoder 110. In this case, the learning device 100 uses an average value of the first error D1 to which the weight λ1 is added, an average value of the information entropy R of the probability distribution, or the like in the formula (3) described above.
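The weighted sum E with a batch-averaged first error, as described above, can be sketched as follows; the weight value is hypothetical.

```python
import numpy as np

def weighted_sum(x_batch, x_dec_batch, r, lam1=0.5):
    """E = lam1 * mean(D1) + R, with D1 the per-sample squared error of
    formula (1); lam1 is a hypothetical weight value."""
    d1 = np.sum((x_batch - x_dec_batch) ** 2, axis=1)  # squared error per sample
    return lam1 * d1.mean() + r

x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_dec = np.array([[1.0, 2.0], [3.0, 4.2]])
e = weighted_sum(x, x_dec, r=2.0, lam1=0.5)
```

Minimizing E jointly pushes the decoded data toward the input data (small D1) and the feature data toward high-probability regions of the learned distribution (small R).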
(Example of Data Analysis System 200)
Next, an example of the data analysis system 200 to which the learning device 100 illustrated in
In the data analysis system 200, the learning device 100 and the terminal device 201 are connected via a wired or wireless network 210. The network 210 is, for example, a local area network (LAN), a wide area network (WAN), the Internet, or the like.
The learning device 100 receives a set of data to be a sample from the terminal device 201. The learning device 100 learns the autoencoder 110 on the basis of the received set of data to be a sample. The learning device 100 receives data to be a data analysis processing target from the terminal device 201 and provides a data analysis service to the terminal device 201 using the learned autoencoder 110. The data analysis is, for example, anomaly detection.
The learning device 100 receives, for example, data to be a processing target of anomaly detection from the terminal device 201. Next, the learning device 100 determines whether or not the received data to be processed is outlier data using the learned autoencoder 110. Then, the learning device 100 transmits a result of determining whether or not the received data to be processed is the outlier data to the terminal device 201. The learning device 100 is, for example, a server, a personal computer (PC), or the like.
The terminal device 201 is a computer that can communicate with the learning device 100. The terminal device 201 transmits data to be a sample to the learning device 100. The terminal device 201 transmits the data to be the data analysis processing target to the learning device 100 and uses the data analysis service. The terminal device 201 transmits, for example, the data to be the processing target of anomaly detection to the learning device 100. Then, the terminal device 201 receives the result of determining whether or not the transmitted data to be processed is the outlier data from the learning device 100. The terminal device 201 is, for example, a PC, a tablet terminal, a smartphone, a wearable terminal, or the like.
Here, a case has been described where the learning device 100 and the terminal device 201 are different devices. However, the present invention is not limited to this. For example, there may also be a case where the learning device 100 also operates as the terminal device 201. In this case, the data analysis system 200 does not need to include the terminal device 201.
Here, a case has been described where the learning device 100 receives the set of data to be a sample from the terminal device 201. However, the present invention is not limited to this. For example, there may also be a case where the learning device 100 accepts an input of the set of data to be a sample on the basis of a user's operation input. Furthermore, for example, there may also be a case where the learning device 100 reads the set of data to be a sample from an attached recording medium.
Here, a case has been described where the learning device 100 receives the data to be the data analysis processing target from the terminal device 201. However, the present invention is not limited to this. For example, there may also be a case where the learning device 100 accepts the input of the data to be the data analysis processing target on the basis of a user's operation input. Furthermore, for example, there may also be a case where the learning device 100 reads the data to be the data analysis processing target from an attached recording medium.
(Hardware Configuration Example of Learning Device 100)
Next, a hardware configuration example of the learning device 100 will be described with reference to
Here, the CPU 301 controls the entire learning device 100. For example, the memory 302 includes a read only memory (ROM), a random access memory (RAM), a flash ROM, and the like. Specifically, for example, the flash ROM or the ROM stores various programs, and the RAM is used as a work area for the CPU 301. The program stored in the memory 302 is loaded to the CPU 301 to cause the CPU 301 to execute coded processing.
The network I/F 303 is connected to the network 210 through a communication line and is connected to another computer via the network 210. Then, the network I/F 303 is in charge of an interface between the network 210 and the inside and controls input and output of data to and from another computer. For example, the network I/F 303 is a modem, a LAN adapter, or the like.
The recording medium I/F 304 controls reading and writing of data from and to the recording medium 305 under the control of the CPU 301. For example, the recording medium I/F 304 is a disk drive, a solid state drive (SSD), a universal serial bus (USB) port, or the like. The recording medium 305 is a nonvolatile memory that stores data written under the control of the recording medium I/F 304. The recording medium 305 includes, for example, a disk, a semiconductor memory, a USB memory, and the like. The recording medium 305 may also be attachable to and detachable from the learning device 100.
The learning device 100 may further include, for example, a keyboard, a mouse, a display, a printer, a scanner, a microphone, a speaker, or the like in addition to the above-described components. Furthermore, the learning device 100 may also include a plurality of the recording medium I/Fs 304 and the recording medium 305. Furthermore, the learning device 100 does not need to include the recording medium I/F 304 and the recording medium 305.
(Hardware Configuration Example of Terminal Device 201)
Because a hardware configuration example of the terminal device 201 is similar to the hardware configuration example of the learning device 100 illustrated in
(Functional Configuration Example of Learning Device 100)
Next, a functional configuration example of the learning device 100 will be described with reference to
The storage unit 400 is implemented by a storage region such as the memory 302, the recording medium 305, or the like illustrated in
The acquisition unit 401 through the output unit 408 function as an example of a control unit. Specifically, for example, the acquisition unit 401 through the output unit 408 implement functions thereof by causing the CPU 301 to execute a program stored in the storage region such as the memory 302, the recording medium 305, or the like illustrated in
The storage unit 400 stores various types of information to be referred to or updated in the processing of each functional unit. The storage unit 400 stores the encoding parameter and the decoding parameter. The storage unit 400 stores, for example, the parameter θ that defines a neural network for encoding, used by the encoding unit 402. The storage unit 400 stores, for example, the parameter ξ that defines a neural network for decoding, used by the decoding unit 404.
The storage unit 400 stores a pre-update model to be learned that defines the probability distribution. The model is, for example, a probability density function. The model is, for example, a Gaussian mixture model (GMM). A specific example in which the model is a Gaussian mixture model will be described later in a first example with reference to
The acquisition unit 401 acquires various types of information to be used for the processing of each functional unit. The acquisition unit 401 stores the acquired various types of information in the storage unit 400 or outputs the acquired various types of information to each functional unit. Furthermore, the acquisition unit 401 may also output various types of information stored in the storage unit 400 to each functional unit. The acquisition unit 401 may also acquire various types of information on the basis of a user's operation input. The acquisition unit 401 may also receive various types of information from a device different from the learning device 100.
The acquisition unit 401, for example, accepts inputs of various types of data. The acquisition unit 401, for example, accepts inputs of one or more pieces of data to be a sample for learning the autoencoder 110. In the following description, there may be a case where the data to be the sample for learning the autoencoder 110 is expressed as “sample data”. Specifically, the acquisition unit 401 accepts an input of the sample data by receiving the sample data from the terminal device 201. Specifically, the acquisition unit 401 may also accept the input of the sample data on the basis of a user's operation input. As a result, the acquisition unit 401 can enable the encoding unit 402, the optimization unit 406, or the like to refer to a set of the sample data and to learn the autoencoder 110.
The acquisition unit 401 accepts, for example, inputs of one or more pieces of data to be the data analysis processing target. In the following description, there is a case where the data to be the data analysis processing target is expressed as “target data”. Specifically, the acquisition unit 401 accepts an input of the target data by receiving the target data from the terminal device 201. Specifically, the acquisition unit 401 may also accept the input of the target data on the basis of a user's operation input. As a result, the acquisition unit 401 can enable the encoding unit 402 or the like to refer to the target data and to perform data analysis.
The acquisition unit 401 may also accept a start trigger to start the processing of any one of the functional units. The start trigger may also be a signal that is periodically generated in the learning device 100. The start trigger may also be, for example, a predetermined operation input by a user. The start trigger may also be, for example, receipt of predetermined information from another computer. The start trigger may also be, for example, output of predetermined information by any one of the functional units.
The acquisition unit 401 accepts, for example, the receipt of the input of the sample data to be a sample as the start trigger to start processing of the encoding unit 402 through the optimization unit 406. As a result, the acquisition unit 401 can start processing for learning the autoencoder 110. The acquisition unit 401 accepts, for example, receipt of the input of the target data as a start trigger to start processing of the encoding unit 402 through the analysis unit 407. As a result, the acquisition unit 401 can start processing for performing data analysis.
The encoding unit 402 encodes various types of data. The encoding unit 402 encodes, for example, the sample data. Specifically, the encoding unit 402 encodes the sample data by the neural network for encoding so as to generate feature data. In the neural network for encoding, the number of nodes of an output layer is less than the number of nodes of an input layer, and the feature data has the number of dimensions less than that of the sample data. The neural network for encoding is defined, for example, by the parameter θ. As a result, the encoding unit 402 can enable the estimation unit 405, the generation unit 403, and the decoding unit 404 to refer to the feature data obtained by encoding the sample data.
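The encoding unit 402 can be sketched as a one-hidden-layer neural network whose output layer has fewer nodes than its input layer; the layer sizes and the dict-based parameterization of θ here are hypothetical.

```python
import numpy as np

def encode(x, theta):
    """One-hidden-layer encoder sketch: output dimension < input dimension.
    theta is a dict of weight matrices (a hypothetical parameterization)."""
    h = np.tanh(x @ theta["W1"] + theta["b1"])
    return h @ theta["W2"] + theta["b2"]  # latent feature data z

rng = np.random.default_rng(0)
dim_x, dim_h, dim_z = 8, 16, 2  # latent space smaller than real space
theta = {
    "W1": rng.normal(0, 0.1, (dim_x, dim_h)), "b1": np.zeros(dim_h),
    "W2": rng.normal(0, 0.1, (dim_h, dim_z)), "b2": np.zeros(dim_z),
}
z = encode(rng.normal(size=(5, dim_x)), theta)
```

The output dimensionality (2) being smaller than the input dimensionality (8) reflects the feature data having fewer dimensions than the sample data.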
Furthermore, the encoding unit 402 encodes, for example, the target data. Specifically, the encoding unit 402 encodes the target data by the neural network for encoding so as to generate the feature data. As a result, the encoding unit 402 can enable the analysis unit 407 or the like to refer to the feature data obtained by encoding the target data.
The generation unit 403 generates a noise and adds the noise to the feature data obtained by encoding the sample data so as to generate the feature data. The noise is a uniform random number, based on a distribution of which an average is zero, that has dimensions as many as the feature data and is uncorrelated between dimensions. As a result, the generation unit 403 can generate the added feature data to be processed by the decoding unit 404.
Furthermore, the decoding unit 404 generates decoded data by decoding the added feature data. For example, the decoding unit 404 generates the decoded data by decoding the added feature data by a neural network for decoding. It is preferable that the neural network for decoding have fewer nodes in the input layer than in the output layer and generate decoded data having the same number of dimensions as the sample data. The neural network for decoding is defined, for example, by the parameter ξ. As a result, the decoding unit 404 can enable the optimization unit 406 or the like to refer to the decoded data to be an index for learning the autoencoder 110.
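A matching sketch of the decoding unit 404 follows, with fewer input-layer nodes than output-layer nodes; the layer sizes and the dict-based parameterization of ξ are hypothetical.

```python
import numpy as np

def decode(z_noisy, xi):
    """Decoder sketch mirroring the encoder: input dimension < output dimension.
    xi is a dict of weight matrices (a hypothetical parameterization)."""
    h = np.tanh(z_noisy @ xi["W1"] + xi["b1"])
    return h @ xi["W2"] + xi["b2"]  # decoded data with the dimensionality of x

rng = np.random.default_rng(1)
dim_z, dim_h, dim_x = 2, 16, 8
xi = {
    "W1": rng.normal(0, 0.1, (dim_z, dim_h)), "b1": np.zeros(dim_h),
    "W2": rng.normal(0, 0.1, (dim_h, dim_x)), "b2": np.zeros(dim_x),
}
x_dec = decode(rng.normal(size=(5, dim_z)), xi)
```

The decoded data recovers the dimensionality of the sample data (8), so the first error between the two can be computed element-wise.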
The estimation unit 405 calculates the probability distribution of the feature data. The estimation unit 405 calculates the probability distribution of the feature data obtained by encoding the sample data, for example, on the basis of a model that defines the probability distribution. Specifically, the estimation unit 405 parametrically calculates the probability distribution of the feature data obtained by encoding the sample data. A specific example in which the probability distribution is parametrically calculated will be described later, for example, in a third example. As a result, the estimation unit 405 can enable the optimization unit 406 or the like to refer to the probability distribution of the feature data obtained by encoding the sample data, to be the index for learning the autoencoder 110.
The estimation unit 405 may also calculate the probability distribution of the feature data obtained by encoding the sample data, for example, on the basis of a similarity between the decoded data and the sample data. The similarity is, for example, a cosine similarity or a relative Euclidean distance, or the like. The estimation unit 405 combines the similarity between the decoded data and the sample data with the feature data obtained by encoding the sample data, and then, calculates the probability distribution of the combined feature data. A specific example using the similarity between the decoded data and the sample data will be described later in a second example, for example, with reference to
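The combination of the similarity with the feature data can be sketched as follows; the cosine similarity and relative Euclidean distance definitions follow the description above, and the concrete vectors are illustrative.

```python
import numpy as np

def similarity_features(x, x_dec, z):
    """Append the cosine similarity and the relative Euclidean distance between
    the sample data x and the decoded data x_dec to the feature data z (a sketch)."""
    cos = np.dot(x, x_dec) / (np.linalg.norm(x) * np.linalg.norm(x_dec))
    rel = np.linalg.norm(x - x_dec) / np.linalg.norm(x)
    return np.concatenate([z, [cos, rel]])

x = np.array([1.0, 0.0, 0.0])
x_dec = np.array([1.0, 0.0, 0.0])   # perfect reconstruction for illustration
z = np.array([0.3, -0.2])
z_combined = similarity_features(x, x_dec, z)
```

For a perfect reconstruction, the appended cosine similarity is 1 and the relative distance is 0; poor reconstructions shift these values, giving the probability distribution extra evidence about reconstruction quality.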
The estimation unit 405 calculates the probability distribution of the feature data obtained by encoding the target data, for example, on the basis of the model that defines the probability distribution. Specifically, the estimation unit 405 parametrically calculates the probability distribution of the feature data obtained by encoding the target data. As a result, the estimation unit 405 can enable the analysis unit 407 or the like to refer to the probability distribution of the feature data obtained by encoding the target data to be the index for performing data analysis.
The optimization unit 406 learns the autoencoder 110 and the probability distribution of the feature data so as to minimize the first error between the decoded data and the sample data and the information entropy of the probability distribution. The first error is calculated on the basis of an error function that is defined so that a differentiated result satisfies a predetermined condition. The first error is, for example, a squared error between the decoded data and the sample data. The first error may also be, for example, a logarithm of the squared error between the decoded data and the sample data.
When δX is an arbitrary microvariation of X, A(X) is an N×N Hermitian matrix dependent on X, and L(X) is a Cholesky decomposition matrix of A(X), the first error may also be any error such that the error between the decoded data and the sample data can be approximated by the following formula (4). Such an error includes, for example, (1−SSIM) in addition to the squared error. Furthermore, the first error may also be a logarithm of (1−SSIM).
[Expression 4]
D(X, X+δX) ≅ ᵗδX·A(X)·δX = ‖L(X)·δX‖²  (4)
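For the squared error, A(X) is the identity matrix and formula (4) holds exactly. The following numeric check illustrates the relation between the quadratic form and the Cholesky factor; since A = L·Lᵀ, the quadratic form δXᵀAδX equals ‖LᵀδX‖².

```python
import numpy as np

rng = np.random.default_rng(0)
dx = rng.normal(size=4) * 1e-3            # small perturbation delta X
A = np.eye(4)                             # Hermitian matrix for the squared error
L = np.linalg.cholesky(A)                 # lower-triangular factor, A = L @ L.T
quad_form = dx @ A @ dx                   # dX^T A dX
chol_norm = np.linalg.norm(L.T @ dx) ** 2 # ||L^T dX||^2
squared_error = np.sum(dx ** 2)           # D(X, X + dX) for the squared error
```

For a non-identity Hermitian positive-definite A (as with SSIM-based errors), the same identity between the quadratic form and the Cholesky norm holds, which is what makes formula (4) a useful characterization of admissible first errors.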
The optimization unit 406 learns the autoencoder 110 and the probability distribution of the feature data, for example, so as to minimize a weighted sum of the first error and the information entropy. Specifically, the optimization unit 406 learns the encoding parameter and the decoding parameter of the autoencoder 110 and the parameter of the model.
The encoding parameter is the parameter θ of the neural network for encoding described above. The decoding parameter is the parameter ξ of the neural network for decoding described above. The parameter of the model is the parameter ψ of the Gaussian mixture model. A specific example in which the parameter ψ of the Gaussian mixture model is learned will be described later in the first example, for example, with reference to
As a result, the optimization unit 406 can learn the autoencoder 110 that can extract feature data from input data so that a proportional tendency appears between a probability density of the input data and a probability density of the feature data. The optimization unit 406 can learn the autoencoder 110, for example, by updating the parameters θ and ξ respectively used by the encoding unit 402 and the decoding unit 404 forming the autoencoder 110.
The analysis unit 407 performs data analysis on the basis of the learned autoencoder 110 and the learned probability distribution of the feature data. The analysis unit 407 performs data analysis, for example, on the basis of the learned autoencoder 110 and the learned model. The data analysis is, for example, anomaly detection. The analysis unit 407 performs anomaly detection regarding the target data, for example, on the basis of the encoding unit 402 and the decoding unit 404 corresponding to the learned autoencoder 110 and the learned model.
Specifically, the analysis unit 407 acquires the probability distribution calculated by the estimation unit 405 on the basis of the learned model, regarding the feature data obtained by encoding the target data by the encoding unit 402 corresponding to the learned autoencoder 110. The analysis unit 407 performs anomaly detection on the target data on the basis of the acquired probability distribution. As a result, the analysis unit 407 can accurately perform data analysis.
The output unit 408 outputs a processing result of any one of the functional units. An output format is, for example, display on a display, print output to a printer, transmission to an external device by the network I/F 303, or storage in the storage region such as the memory 302 or the recording medium 305. As a result, the output unit 408 makes it possible to notify the user of the processing result of any one of the functional units, and may improve convenience of the learning device 100. The output unit 408 outputs, for example, the learned autoencoder 110.
Specifically, the output unit 408 outputs the parameter θ for encoding and the parameter ξ for decoding used to achieve the learned autoencoder 110. As a result, the output unit 408 can enable another computer to use the learned autoencoder 110. The output unit 408 outputs, for example, a result of performing anomaly detection. As a result, the output unit 408 can enable another computer to refer to the result of performing anomaly detection.
Here, a case has been described where the learning device 100 includes the acquisition unit 401 through the output unit 408. However, the present invention is not limited to this. For example, there may also be a case where another computer different from the learning device 100 includes any one of the functional units including the acquisition unit 401 through the output unit 408 and the learning device 100 and another computer cooperate with each other. Specifically, there may also be a case where the learning device 100 transmits the learned autoencoder 110 and the learned model to another computer including the analysis unit 407 and the another computer can perform data analysis.
(First Example of Learning Device 100)
Next, the first example of the learning device 100 will be described with reference to
(5-1) The learning device 100 generates the feature data z by encoding the data x by an encoder 501 each time when the data x is acquired. The encoder 501 is a neural network defined by the parameter θ.
(5-2) The learning device 100 calculates a parameter p of the Gaussian mixture distribution corresponding to the feature data z each time when the feature data z is generated. The parameter p is a vector. For example, the learning device 100 calculates p corresponding to the feature data z by an Estimation Network p=MLN (z; ψ) that uses the feature data z as an input, is defined by the parameter ψ, and estimates the parameter p of the Gaussian mixture distribution. The MLN is a multi-layer neural network. Regarding the Estimation Network, for example, Non-Patent Document 3 described above can be referred to.
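The Estimation Network can be sketched as a small multi-layer network followed by a softmax over the K mixture components; the layer sizes and the dict-based parameterization of ψ here are hypothetical.

```python
import numpy as np

def estimation_network(z, psi):
    """MLN sketch: estimate the parameter p of the Gaussian mixture from z, then
    gamma_hat = softmax(p). psi is a hypothetical dict of weight matrices."""
    h = np.tanh(z @ psi["W1"] + psi["b1"])
    p = h @ psi["W2"] + psi["b2"]                  # one value per component k
    e = np.exp(p - p.max(axis=-1, keepdims=True))  # numerically stable softmax
    gamma = e / e.sum(axis=-1, keepdims=True)      # burden rate per component
    return p, gamma

rng = np.random.default_rng(0)
dim_z, dim_h, K = 2, 8, 3
psi = {
    "W1": rng.normal(0, 0.1, (dim_z, dim_h)), "b1": np.zeros(dim_h),
    "W2": rng.normal(0, 0.1, (dim_h, K)), "b2": np.zeros(K),
}
p, gamma = estimation_network(rng.normal(size=(5, dim_z)), psi)
```

Each row of gamma is a probability vector over the K components, which is the form consumed by the mixture-weight, average, and variance-covariance computations of (5-6).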
(5-3) The learning device 100 generates the added data z+ε by adding the noise ε to the feature data z each time when the feature data z is generated. The noise ε is a uniform random number, drawn from a distribution whose average is zero, that has as many dimensions as the feature data z and is uncorrelated between dimensions.
(5-4) The learning device 100 generates the decoded data x∨ by decoding the added data z+ε by a decoder 502 each time when the added data z+ε is generated. The decoder 502 is a neural network defined by the parameter ξ.
(5-5) The learning device 100 calculates the first error D1 between the decoded data x∨ and the data x for each combination of the decoded data x∨ and the data x according to the formula (1) described above.
(5-6) The learning device 100 calculates the information entropy R on the basis of the N parameters p calculated from the N pieces of feature data z. The information entropy R is, for example, an average information amount. The learning device 100 calculates the information entropy R, for example, according to the following formulas (5) to (9). Here, the index of the data x is denoted by i, where i=1, 2, . . . , N. A component of the multidimensional Gaussian mixture model is denoted by k, where k=1, 2, . . . , K.
Specifically, the learning device 100 calculates a responsibility (mixture membership weight) γ∧ of each sample according to the following formula (5). Here, γ∧ in the text indicates a symbol adding ∧ to the upper portion of γ in the figures and formulas.
[Expression 5]
γ∧=softmax(p)  (5)
Next, the learning device 100 calculates a mixture weight φk∧ of the Gaussian mixture distribution according to the following formula (6). Here, φk∧ in the text indicates a symbol adding ∧ to the upper portion of φk in the figures and formulas.
Next, the learning device 100 calculates an average μk∧ of the Gaussian mixture distribution according to the following formula (7). Here, μk∧ in the text indicates a symbol adding ∧ to the upper portion of μk in the figures and formulas. The reference zi is the i-th feature data z obtained by encoding the i-th data x.
Next, the learning device 100 calculates a variance-covariance matrix Σk∧ of the Gaussian mixture distribution according to the following formula (8). Here, Σk∧ in the text indicates a symbol adding ∧ to the upper portion of Σk in the figures and formulas.
Then, the learning device 100 calculates the information entropy R according to the following formula (9).
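Assuming that formulas (5) to (9) follow the standard Gaussian mixture estimates used with an Estimation Network (cf. Non-Patent Document 3), the computation of the information entropy R can be sketched as follows. Since the expressions themselves appear in the figures, the concrete forms below, including the numerical stabilizer, are an assumption for illustration.

```python
import numpy as np

def softmax(p):
    # (5) responsibilities, computed row-wise with a stability shift
    e = np.exp(p - p.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_entropy(z, p):
    # z: (N, d) feature data, p: (N, K) Estimation Network outputs.
    N, d = z.shape
    K = p.shape[1]
    gamma = softmax(p)                                # (5)
    phi = gamma.mean(axis=0)                          # (6) mixture weights
    mu = gamma.T @ z / gamma.sum(axis=0)[:, None]     # (7) component means
    # (8)-(9): per-component covariances, then the average information
    # amount R = -(1/N) * sum_i log sum_k phi_k N(z_i; mu_k, Sigma_k)
    dens = np.zeros(N)
    for k in range(K):
        diff = z - mu[k]
        Sigma = (gamma[:, k, None] * diff).T @ diff / gamma[:, k].sum()
        Sigma += 1e-6 * np.eye(d)                     # numerical stabilizer
        inv = np.linalg.inv(Sigma)
        det = np.linalg.det(Sigma)
        quad = np.einsum('nd,de,ne->n', diff, inv, diff)
        dens += phi[k] * np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * det)
    return float(-np.mean(np.log(dens + 1e-12)))
```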
(5-7) The learning device 100 learns the parameter θ of the encoder 501, the parameter ξ of the decoder 502, and the parameter ψ of the Gaussian mixture distribution so as to minimize the weighted sum E according to the formula (3) described above. The weighted sum E is the sum of the information entropy R and the first error D1 weighted by λ1. As the first error D1 in the formula, for example, an average value of the calculated first errors D1 can be adopted.
As a result, the learning device 100 can learn the autoencoder 110 that can extract the feature data z from the input data x so that a proportional tendency appears between a probability density of the input data x and a probability density of the feature data z. Therefore, the learning device 100 may improve the data analysis accuracy by the learned autoencoder 110. For example, the learning device 100 may improve accuracy of anomaly detection.
(Second Example of Learning Device 100)
Next, the second example of the learning device 100 will be described with reference to
(6-1) The learning device 100 generates the feature data zc by encoding the data x by an encoder 601 each time the data x is acquired. The encoder 601 is a neural network defined by the parameter θ.
(6-2) The learning device 100 generates added data zc+ε by adding the noise ε to the feature data zc each time the feature data zc is generated. The noise ε is a uniform random number drawn from a distribution whose average is zero, that has as many dimensions as the feature data zc, and is uncorrelated between dimensions.
(6-3) The learning device 100 generates the decoded data x∨ by decoding the added data zc+ε by a decoder 602 each time the added data zc+ε is generated. The decoder 602 is a neural network defined by the parameter ξ.
(6-4) The learning device 100 calculates the first error D1 between the decoded data x∨ and the data x for each combination of the decoded data x∨ and the data x according to the formula (1) described above.
(6-5) The learning device 100 generates combined data z by combining an explanatory variable zr with the feature data zc each time the feature data zc is generated. The explanatory variable zr is, for example, a cosine similarity, a relative Euclidean distance, or the like. Specifically, the explanatory variable zr is a cosine similarity (x·x∨)/(|x|·|x∨|), a relative Euclidean distance |x−x∨|/|x|, or the like.
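The explanatory variable zr and the combined data z of step (6-5) can be sketched as follows. The function names are hypothetical, and the sketch assumes the relative Euclidean distance is the scalar |x−x∨|/|x|, as in the cosine similarity notation.

```python
import numpy as np

def explanatory_variables(x, x_v):
    # Per-sample cosine similarity (x . x_v) / (|x| |x_v|) and
    # relative Euclidean distance |x - x_v| / |x|.
    cos = np.sum(x * x_v, axis=1) / (
        np.linalg.norm(x, axis=1) * np.linalg.norm(x_v, axis=1))
    rel = np.linalg.norm(x - x_v, axis=1) / np.linalg.norm(x, axis=1)
    return np.stack([cos, rel], axis=1)

def combine(z_c, x, x_v):
    # (6-5) combined data z = [z_c, z_r], fed to the Estimation Network
    return np.concatenate([z_c, explanatory_variables(x, x_v)], axis=1)
```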
(6-6) The learning device 100 calculates the parameter p corresponding to the combined data z by the Estimation Network p=MLN(z; ψ) each time the combined data z is generated.
(6-7) The learning device 100 calculates the information entropy R on the basis of N parameters p calculated from N pieces of combined data z according to the formulas (5) to (9) described above. The information entropy R is, for example, an average information amount.
(6-8) The learning device 100 learns the parameter θ of the encoder 601, the parameter ξ of the decoder 602, and the parameter ψ of the Gaussian mixture distribution so as to minimize the weighted sum E according to the formula (3) described above. The weighted sum E is the sum of the information entropy R and the first error D1 weighted by λ1. As the first error D1 in the formula, for example, an average value of the calculated first errors D1 can be adopted.
As a result, the learning device 100 can learn the autoencoder 110 that can extract the feature data z from the input data x so that a proportional tendency appears between a probability density of the input data x and a probability density of the feature data z. Furthermore, the learning device 100 can learn the autoencoder 110 so that the number of dimensions of the feature data z becomes relatively small. Therefore, the learning device 100 may improve the data analysis accuracy to a relatively large degree by the learned autoencoder 110. For example, the learning device 100 may improve the accuracy of anomaly detection to a relatively large degree.
(Third Example of Learning Device 100)
Next, the third example of the learning device 100 will be described. In the third example, the learning device 100 assumes a probability distribution Pzψ (z) of z as an independent distribution and estimates the probability distribution Pzψ (z) of z as a parametric probability density function. For estimating the probability distribution Pzψ (z) of z as a parametric probability density function, for example, Non-Patent Document 4 described below can be referred to.
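As one possible reading of the third example, assume that each dimension of z is modeled by an independent univariate Gaussian with learnable parameters; the actual parametric model of Non-Patent Document 4 may differ. Under that assumption, the average information amount of the factorized distribution can be computed as follows.

```python
import math

def independent_gaussian_entropy(z, mu, sigma):
    # Assumes P_z(z) factorizes across dimensions, each dimension j
    # being a univariate Gaussian with parameters (mu_j, sigma_j).
    # Returns the average information amount -1/N * sum_i log P_z(z_i).
    total = 0.0
    for row in z:
        log_p = 0.0
        for v, m, s in zip(row, mu, sigma):
            log_p += -0.5 * math.log(2 * math.pi * s * s) \
                     - (v - m) ** 2 / (2 * s * s)
        total += log_p
    return -total / len(z)
```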
As a result, the learning device 100 can learn the autoencoder 110 that can extract the feature data z from the input data x so that a proportional tendency appears between a probability density of the input data x and a probability density of the feature data z. Therefore, the learning device 100 may improve the data analysis accuracy by the learned autoencoder 110. For example, the learning device 100 may improve accuracy of anomaly detection.
(Example of Effect Obtained by Learning Device 100)
Next, an example of an effect obtained by the learning device 100 will be described with reference to
Here, a relationship between a distribution of the feature data z, a probability density p (x) of the artificial data x, and a probability density p (z) of the feature data z in a case where the feature data z is extracted from the artificial data x by an autoencoder a with the typical method is described.
Specifically, a graph 710 in
As illustrated in the graphs 710 and 711, with the autoencoder a according to the typical method, the probability density p (x) of the artificial data x is not proportional to the probability density p (z) of the feature data z, and no linear relationship appears. Therefore, even if the feature data z extracted by the autoencoder a according to the typical method is used instead of the artificial data x, it is difficult to improve the data analysis accuracy.
On the other hand, a case will be described where the learning device 100 extracts the feature data z from the artificial data x by the autoencoder 110 learned by using the formula (1) described above. Specifically, a relationship between the distribution of the feature data z, the probability density p (x) of the artificial data x, and the probability density p (z) of the feature data z in this case will be described.
Specifically, a graph 720 in
As illustrated in the graphs 720 and 721, according to the autoencoder 110, the probability density p (x) of the artificial data x and the probability density p (z) of the feature data z tend to be proportional to each other, and the linear relationship appears. Therefore, the learning device 100 may improve the data analysis accuracy by using the feature data z according to the autoencoder 110, instead of the artificial data x.
(Learning Processing Procedure)
Next, an example of a learning processing procedure executed by the learning device 100 will be described with reference to
Next, the learning device 100 generates x∨ by decoding z+ε, obtained by adding the noise ε to the latent variable z, with the decoder (step S804). Then, the learning device 100 calculates a cost (step S805). The cost is the weighted sum E described above.
Next, the learning device 100 updates the parameters θ, ψ, and ξ so as to reduce the cost (step S806). Then, the learning device 100 determines whether or not learning is converged (step S807). Here, in a case where learning is not converged (step S807: No), the learning device 100 returns to the processing in step S801.
On the other hand, in a case where learning is converged (step S807: Yes), the learning device 100 ends the learning processing. The convergence of learning indicates, for example, that change amounts of the parameters θ, ψ, and ξ caused by update are equal to or less than a certain value. As a result, the learning device 100 can learn the autoencoder 110 that can extract the latent variable z from the input x so that a proportional tendency appears between a probability density of the input x and a probability density of the latent variable z.
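The flow of steps S801 through S807 can be sketched as a generic minimization loop. A numeric gradient stands in for backpropagation, and the toy cost function, learning rate, and tolerance below are assumptions for illustration; in the embodiment, the cost is the weighted sum E and the parameters are θ, ψ, and ξ.

```python
import numpy as np

def train(cost_fn, params, lr=0.1, tol=1e-4, max_iter=1000):
    # Update the parameters so as to reduce the cost (step S806), and
    # end when the change caused by the update is equal to or less
    # than a certain value (convergence, step S807).
    params = np.asarray(params, dtype=float)
    h = 1e-6
    for _ in range(max_iter):
        # central-difference gradient stands in for backpropagation
        grad = np.zeros_like(params)
        for j in range(len(params)):
            d = np.zeros_like(params)
            d[j] = h
            grad[j] = (cost_fn(params + d) - cost_fn(params - d)) / (2 * h)
        step = lr * grad
        params -= step
        if np.max(np.abs(step)) <= tol:   # step S807: Yes -> end
            break
    return params

# toy quadratic cost standing in for the weighted sum E
best = train(lambda p: float((p[0] - 3.0) ** 2), [0.0])
```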
(Analysis Processing Procedure)
Next, an example of an analysis processing procedure executed by the learning device 100 will be described with reference to
Next, if the outlier is equal to or more than a threshold, the learning device 100 outputs the input x as an anomaly (step S903). Then, the learning device 100 ends the analysis processing. As a result, the learning device 100 can accurately perform anomaly detection.
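The thresholding in step S903 can be sketched as follows. The outlier degree used here, the negative log-likelihood of a scalar feature under a standard normal distribution, is a hypothetical example; the embodiment does not fix a particular score.

```python
import math

def is_anomaly(outlier_degree, threshold):
    # Step S903: report the input x as an anomaly when the outlier
    # degree is equal to or more than the threshold.
    return outlier_degree >= threshold

def outlier_degree(z_scalar):
    # Hypothetical score: -log N(z; 0, 1) for a 1-dim feature z.
    return 0.5 * math.log(2 * math.pi) + 0.5 * z_scalar ** 2
```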
Here, the learning device 100 may also switch an order of the processing in some steps in
As described above, according to the learning device 100, it is possible to encode the input data x. According to the learning device 100, the probability distribution of the feature data z obtained by encoding the data x can be calculated. According to the learning device 100, it is possible to add the noise ε to the feature data z. According to the learning device 100, it is possible to decode the feature data z+ε to which the noise ε is added. According to the learning device 100, it is possible to calculate the first error between the data x and the decoded data x∨ obtained by the decoding, and the information entropy of the calculated probability distribution. According to the learning device 100, it is possible to learn the autoencoder 110 and the probability distribution of the feature data so as to minimize the first error, the second error, and the information entropy of the probability distribution. As a result, the learning device 100 can learn the autoencoder 110 that can extract the feature data z from the data x so that the proportional tendency appears between the probability density of the data x and the probability density of the feature data z. Therefore, the learning device 100 may improve the data analysis accuracy by the learned autoencoder 110.
According to the learning device 100, it is possible to calculate the probability distribution of the feature data z on the basis of the model that defines the probability distribution. According to the learning device 100, it is possible to learn the autoencoder 110 and the model that defines the probability distribution. As a result, the learning device 100 can optimize the autoencoder 110 and the model that defines the probability distribution.
According to the learning device 100, the Gaussian mixture model can be adopted as the model. According to the learning device 100, it is possible to learn the encoding parameter and the decoding parameter of the autoencoder 110 and the parameter of the Gaussian mixture model. As a result, the learning device 100 can optimize the encoding parameter and the decoding parameter of the autoencoder 110 and the parameter of the Gaussian mixture model.
According to the learning device 100, it is possible to calculate the probability distribution of the feature data z on the basis of the similarity between the decoded data x∨ and the data x. As a result, the learning device 100 can easily learn the autoencoder 110.
According to the learning device 100, it is possible to parametrically calculate the probability distribution of the feature data z. As a result, the learning device 100 can easily learn the autoencoder 110.
According to the learning device 100, as the noise ε, it is possible to adopt a uniform random number drawn from a distribution whose average is zero, that has as many dimensions as the feature data z, and is uncorrelated between dimensions. As a result, the learning device 100 can ensure that the proportional tendency appears between the probability density of the data x and the probability density of the feature data z.
According to the learning device 100, as the first error, the squared error between the decoded data x∨ and the data x can be adopted. As a result, the learning device 100 can suppress an increase in the processing amount required when the first error is calculated.
According to the learning device 100, it is possible to perform anomaly detection on the input new data x on the basis of the learned autoencoder 110 and the learned probability distribution of the feature data z. As a result, the learning device 100 may improve the anomaly detection accuracy.
Note that the learning method described in the present embodiment may be implemented by executing a prepared program on a computer such as a personal computer (PC) or a workstation. The learning program described in the present embodiment is executed by being recorded on a computer-readable recording medium and being read from the recording medium by the computer. The recording medium is a hard disk, a flexible disk, a compact disc read only memory (CD-ROM), a magneto-optical disc (MO), a digital versatile disc (DVD), or the like. Furthermore, the learning program described in the present embodiment may also be distributed via a network such as the Internet.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This application is a continuation application of International Application PCT/JP2019/037371 filed on Sep. 24, 2019 and designated the U.S., the entire contents of which are incorporated herein by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2019/037371 | Sep 2019 | US |
| Child | 17697716 | | US |