Data analysis apparatus, data analysis method, and data analysis program

Information

  • Patent Grant
  • Patent Number
    11,526,722
  • Date Filed
    Thursday, August 30, 2018
  • Date Issued
    Tuesday, December 13, 2022
Abstract
Facilitation of an explanation about an object to be analyzed is realized with high accuracy and with efficiency.
Description
CLAIM OF PRIORITY

The present application claims priority from Japanese patent application JP 2017-206069 filed on Oct. 25, 2017, the content of which is hereby incorporated by reference into this application.


TECHNICAL FIELD

The present invention relates to a data analysis apparatus, a data analysis method, and a data analysis program.


BACKGROUND ART

As an approach for predicting output data from input data, an approach called the perceptron is known. The perceptron outputs a predicted value from the result of a linear combination of feature vectors, which are the inputs, and weight vectors. A neural network, also called a multilayer perceptron, is a technique that gains the capability to solve linearly inseparable problems by stacking a plurality of perceptrons; it emerged in the 1980s. Since around 2012, neural networks that introduce new technologies such as dropout have been called deep learning.


In the field of machine learning, calculating learning parameters (weight vectors and the like in the perceptron) in such a manner that the error between a predicted value obtained from feature vectors and the actual value (true value) is minimized is called learning. Upon completion of the learning process, a new predicted value can be calculated from data not used in learning (hereinafter referred to as "test data"). In the perceptron, the magnitude of each element of the weight vector is used as the importance of the factor contributing to a prediction.
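By way of illustration only (the following code is not part of the patent disclosure; all names and parameter values are assumptions), a perceptron of this kind can be trained by gradient descent, after which the magnitude of each weight element serves as the importance of the corresponding feature:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_perceptron(X, t, lr=0.5, epochs=500):
    """Learn weights w by minimizing the error between predictions and true values t."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        y = sigmoid(X @ w)                     # predicted values
        w -= lr * X.T @ (y - t) / len(t)       # cross-entropy gradient step
    return w

# Two informative features and one pure-noise feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
t = (X[:, 0] + X[:, 1] > 0).astype(float)      # the label ignores feature 2
w = train_perceptron(X, t)

# |w| acts as a per-feature importance: the noise feature scores lowest.
importance = np.abs(w)
print(importance.argmin())  # expected: 2
```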


On the other hand, in neural networks including deep learning, each element of the feature vector is subjected to a weighted product-sum operation with other elements every time it passes through a perceptron; thus, in principle, it is difficult to grasp the importance of a single element.


The approach of Non-Patent Document 1 learns a linear regression anew so that the discrimination result of a machine learning approach, such as deep learning, that lacks a function to calculate the importances of features becomes explainable. Furthermore, logistic regression is a machine learning model equivalent to the perceptron and the most widely used in every field. For example, the logistic regression described in Non-Patent Document 2, page 119, has a function to calculate the importances of features for the entire set of data samples.


PRIOR ART DOCUMENT
Non-Patent Document



  • Non-Patent Document 1: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “Why should I trust you?: Explaining the predictions of any classifier.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.

  • Non-Patent Document 2: Friedman J, Hastie T, Tibshirani R. The Elements of Statistical Learning. Second edition. New York: Springer Series in Statistics, 2001.



SUMMARY OF THE INVENTION
Problem to be Solved by the Invention

The approach of Non-Patent Document 1 merely tries to give an explanation afterwards by linear regression and does not mathematically guarantee that the importances of the features used at the time of prediction by the deep learning can be completely calculated. Furthermore, if the linear regression could completely achieve prediction accuracy identical to that of the deep learning, the initial deep learning would no longer be necessary, so the configuration and the concept of the approach are contradictory. Moreover, logistic regression does not have a function to calculate the importances of the features for individual data samples.


The present invention has been achieved in light of the foregoing, and an object of the present invention is to facilitate an explanation about an object to be analyzed with high accuracy and with efficiency.


Means for Solving the Problems

A data analysis apparatus according to one aspect of the invention disclosed in the present application is a data analysis apparatus using a first neural network configured with an input layer, an output layer, and two or more intermediate layers which are provided between the input layer and the output layer and each of which performs calculation by giving data from a layer of a previous stage and a first learning parameter to a first activation function and outputs a calculation result to a layer of a subsequent stage, the data analysis apparatus including: a conversion section that converts a number of dimensions of output data from each of the intermediate layers into a number of dimensions of the same size on the basis of the output data and a second learning parameter and outputs respective output data after conversion; a reallocation section that reallocates first input data in a first feature space given to the input layer to a second feature space on the basis of the output data after conversion from the conversion section and the first input data in the first feature space; and an importance calculation section that calculates a first importance of the first input data in each of the intermediate layers on the basis of the respective output data after conversion and a third learning parameter.


Effect of the Invention

According to a representative embodiment of the present invention, facilitation of an explanation about an object to be analyzed can be realized with high accuracy and with efficiency. Objects, configurations, and effects other than those mentioned above will be readily apparent from the description of embodiments given below.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an explanatory diagram illustrating an example of reallocation of feature vectors.



FIG. 2 is a block diagram illustrating an example of a configuration of a data analysis system.



FIG. 3 is an explanatory diagram illustrating an example of a structure of a first neural network according to a first embodiment.



FIG. 4 is a block diagram illustrating an example of a functional configuration of a data analysis apparatus.



FIG. 5 is a flowchart illustrating an example of a data analysis processing procedure by the data analysis apparatus according to the first embodiment.



FIG. 6 is an explanatory diagram illustrating an example of a structure of a second neural network according to a second embodiment.



FIG. 7 is a flowchart illustrating an example of a data analysis processing procedure by a data analysis apparatus according to the second embodiment.



FIG. 8 is an explanatory diagram illustrating feature vectors Features and correct data Target.



FIG. 9 is an explanatory diagram illustrating an experimental result.





MODES FOR CARRYING OUT THE INVENTION
First Embodiment

<Example of Reallocation of Feature Vectors>


AI (Artificial Intelligence) has a capability to solve a linearly inseparable problem; however, it is unclear why the AI made such a decision. A machine learning approach such as deep learning, in particular, is high in prediction accuracy but low in explainability. For example, in a case in which the AI outputs a diagnosis result that “prone to catch a cold” to a certain patient, a doctor is unable to answer a question of why the AI obtained such a result. If the AI can determine a cause of the result, the doctor can give proper treatment to the patient.



FIG. 1 is an explanatory diagram illustrating an example of reallocation of feature vectors. In (A), in a feature space SP1, a plurality of feature vectors xn (n=1, 2, . . . , N, where N is the number of images) are present. The plurality of feature vectors xn are discriminated as correct labels La and Lb by, for example, a nonlinear prediction model PM1. In (B), in a feature space SP2, a plurality of feature vectors xn are present. The plurality of feature vectors xn are discriminated as correct labels La and Lb by, for example, a linear prediction model PM2.


In (A), the machine learning approach such as the deep learning learns a linear regression anew for explaining the prediction model PM1 that is a discrimination result. Specifically, for example, this machine learning approach executes a retrofitted process of determining the prediction model PM1 and then locally performing straight-line approximation on the prediction model PM1. However, it is unclear in such a retrofitted process whether the straight-line-approximated local part of the prediction model PM1 can correctly explain the feature vectors xn. More importantly, executing the linear regression used for straight-line approximation makes it necessary to execute machine learning twice after all.


Since the prediction model PM2 in (B) is linear, referring to an inclination of the prediction model PM2 makes it possible to grasp with which parameter in the feature space SP2 each of the feature vectors xn is weighted and to correctly explain the feature vector xn. In a first embodiment, the plurality of feature vectors xn in the feature space SP1 are reallocated to the other feature space SP2 without determining the nonlinear prediction model PM1 like (A) for the plurality of feature vectors xn. The linear prediction model PM2 is thereby obtained; thus, it is possible to grasp with which parameter in the feature space SP2 each of the feature vectors xn is weighted and to correctly explain each feature vector xn in response to an importance of the parameter.


In other words, a user can grasp which factor (feature) included in the features xn contributes to a prediction result for every sample (for example, for every patient) having the feature vectors xn; thus, it is easy to explain why such a prediction result is obtained. Therefore, it is possible to improve explainability of the machine learning. According to the above example, the user can grasp why the AI outputted the diagnosis result of “prone to catch a cold” to the certain patient. Furthermore, it is possible to improve efficiency of the machine learning since it is unnecessary to execute the machine learning twice unlike (A). Therefore, it is possible to promptly provide an explanation described above.


<Example of System Configuration>



FIG. 2 is a block diagram illustrating an example of a configuration of a data analysis system. While a server-client type data analysis system 2 will be taken by way of example in FIG. 2, the data analysis system may be a stand-alone type. (A) is a block diagram illustrating an example of a hardware configuration of the data analysis system 2, and (B) is a block diagram illustrating an example of a functional configuration of the data analysis system 2. In (A) and (B), the same configuration is denoted by the same reference character.


The data analysis system 2 is configured such that a client terminal 200 and a data analysis apparatus 220 that is a server are communicably connected to each other by a network 210.


In (A), the client terminal 200 has an HDD (hard disk drive) 201 that is an auxiliary storage device, a memory 202 that is a main storage device, a processor 203, an input device 204 that is a keyboard and a mouse, and a monitor 205. The data analysis apparatus 220 has an HDD 221 that is an auxiliary storage device, a memory 222 that is a main storage device, a processor 223, an input device 224 that is a keyboard and a mouse, and a monitor 225. It is noted that the main storage device, the auxiliary storage device, and a transportable storage medium, which is not shown, will be generically referred to as “memory devices.” The memory devices each store a first neural network 300 and learning parameters of the first neural network 300.


In (B), the client terminal 200 has a client database (DB) 251. The client DB 251 is stored in the memory device such as the HDD 201 or the memory 202. The client DB 251 stores a test data set 252 and a prediction result 253. The test data set 252 is a set of test data. The prediction result 253 is data obtained from a prediction section 262 via the network 210. It is noted that one or more client terminals 200 are present in the case of the server-client type.


The data analysis apparatus 220 has a learning section 261, the prediction section 262, and a server database (DB) 263. The learning section 261 is a functional section that executes a process illustrated in FIGS. 1 and 2 and that outputs learning parameters 265.


The prediction section 262 is a functional section that constructs a first neural network 300 using the learning parameters 265, that executes a prediction process by applying the test data to the first neural network 300, and that outputs the prediction result 253 to the client terminal 200. The learning section 261 and the prediction section 262 realize functions thereof by causing the processor 223 to execute a program stored in the memory device such as the HDD 221 and the memory 222.


The server DB 263 stores a training data set 264 and the learning parameters 265. The training data set 264 includes images xn that are an example of the feature vectors and correct labels tn. The learning parameters 265 are output data from the learning section 261 and include the matrices WlD, WlR, WlH, and WA, and the weight vector wO. It is noted that a neural network to which the learning parameters are set will be referred to as a "prediction model."


It is noted that the data analysis apparatus 220 may be configured with a plurality of apparatuses. For example, a plurality of data analysis apparatuses 220 may be present for load distribution. Furthermore, the data analysis apparatus 220 may be divided into a plurality of parts according to functions. For example, the data analysis apparatus 220 may be configured with a first server that includes the learning section 261 and the server DB 263 and a second server that includes the prediction section 262 and the server DB 263. Alternatively, the data analysis apparatus 220 may be configured with a first data analysis apparatus that includes the learning section 261 and the prediction section 262 and a second data analysis apparatus that includes the server DB 263. In another alternative, the data analysis apparatus 220 may be configured with a first data analysis apparatus that includes the learning section 261, a second data analysis apparatus that includes the prediction section 262, and a third data analysis apparatus that includes the server DB 263.


<Example of Structure of Neural Network>



FIG. 3 is an explanatory diagram illustrating an example of a structure of a first neural network according to the first embodiment. The first neural network 300 has a data unit group DU, a reporting unit group RU, a harmonizing unit group HU, a reallocation unit RAU, a unifying unit UU, a decision unit DCU, and an importance unit IU.


The data unit group DU is configured such that a plurality of data units DUl (l is a hierarchical number and 1≤l≤L, where L is the hierarchical number of the lowest layer; L=4 in FIG. 3) are connected in series. The data unit DU1 that is the highest stage of l=1 corresponds to an input layer 301 of the first neural network 300, and the data units DUl of 2≤l≤L correspond to intermediate layers (also referred to as "hidden layers") of the first neural network 300. Each data unit DUl is a perceptron to which output data from the data unit DU(l−1) of the previous stage is input, which performs calculation using a learning parameter of the own data unit DUl, and which outputs output data.


It is noted, however, that the data unit DUl retains training data at the time of learning by the learning section 261. The training data herein is, for example, sample data configured with a combination {xn, tn} of an image xn, which is an example of the feature vector xn, and a correct label tn (n=1, 2, . . . , N, where N is the number of images). The image xn is data having a two-dimensional matrix structure and is dealt with as a d-dimensional vector (where d is an integer satisfying d≥1) obtained by raster scanning. For ease of description, when "x" is designated, it is assumed to be the one-dimensional vector obtained by raster-scanning the image xn in matrix form.


The correct label tn is a K-dimensional vector that indicates a type (for example, animal such as dog or cat) in a one-hot representation with respect to the number of types K of the images xn. In the one-hot representation, a certain element of a vector corresponds to the type of the image xn, 1.0 is stored in only one element, and 0.0 is stored in all the other elements. The type (for example, dog) corresponding to the element storing 1.0 is a correct type. It is noted that in a case in which a medical image xn such as a CT image, an MRI image, or an ultrasound image is an input, the label tn is a true value that represents a type of disease or a prognosis (good or bad) of a patient.
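As an illustrative sketch (not part of the patent disclosure; the type mapping is an assumption), the one-hot representation of the correct label tn can be written as:

```python
import numpy as np

def one_hot(label_index: int, num_types: int) -> np.ndarray:
    """Return a K-dimensional one-hot correct label t_n: 1.0 at the correct type, 0.0 elsewhere."""
    t = np.zeros(num_types)
    t[label_index] = 1.0
    return t

# K = 3 types, e.g. {0: "dog", 1: "cat", 2: "bird"}; "cat" is the correct type.
t_n = one_hot(1, 3)
print(t_n)  # [0. 1. 0.]
```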


It is assumed that an image xn∈Rd (Rd is a d-dimensional real number) is a feature vector configured with the d-dimensional real number Rd. A function hl+1D that indicates the data unit DU(l+1) is expressed by the following Equation (1).

[Formula 1]
h_D^{l+1} = f_D^l(W_D^l h_D^l)  (1)

    • where h_D^l ∈ ℝ^{d_l} is the input/output vector of the data unit
      • W_D^l ∈ ℝ^{d_{l+1}×d_l} is a learning parameter
      • when l=1, h_D^1 = x_n


In Equation (1), an index l (integer satisfying 1≤l≤L) denotes the hierarchical number (the same applies to the following equations). L is an integer equal to or greater than 1 and denotes a deepest hierarchical number. In addition, fDl on a right side is an activation function. As the activation function, any of various activation functions such as a sigmoid function, a hyperbolic tangent function (tanh function), and an ReLU (Rectified Linear Unit) function may be used. A matrix WlD is a learning parameter of the data unit DUl. A vector hlD on the right side is an input vector input to the data unit DUl, that is, an output vector from the data unit DUl of the previous stage. It is noted that an output vector hlD from the data unit DUl in a case in which the number of layers l=1 is hlD=xn.
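By way of a hedged illustration (the layer sizes, the ReLU choice for f_D^l, and the random parameters are assumptions, not part of the patent disclosure), the chain of data units in Equation (1) can be sketched as:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def data_unit_forward(x_n, weights):
    """Equation (1): h_D^{l+1} = f_D^l(W_D^l h_D^l), with h_D^1 = x_n."""
    h = x_n                       # l = 1: the input layer holds the feature vector
    outputs = [h]
    for W in weights:             # one matrix W_D^l per layer transition
        h = relu(W @ h)           # f_D^l chosen as ReLU for this sketch
        outputs.append(h)
    return outputs                # h_D^1, ..., h_D^L

rng = np.random.default_rng(0)
dims = [8, 6, 5, 4]               # d_1 .. d_4 (L = 4, as in the patent's example)
weights = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
outs = data_unit_forward(rng.normal(size=dims[0]), weights)
print([o.shape[0] for o in outs])  # [8, 6, 5, 4]
```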


It is noted that the data unit DUl retains the image xn that is the feature vector as the test data at a time of prediction by the prediction section 262.


An output vector hlD from the data unit DUl on the same hierarchy is input to the reporting unit RUl (2≤l≤L), and the reporting unit RUl contracts the number of dimensions of the output vector hlD. A function hlR that indicates the reporting unit RUl is expressed by the following Equation (2).

[Formula 2]
h_R^l = σ(W_R^l h_D^l)  (2)


In Equation (2), a matrix WlR is a learning parameter of the reporting unit RUl. The d-dimensional output vector hlD from the data unit DUl is contracted to an m-dimensional output vector hlR by Equation (2). Further, σ is a sigmoid function.


Each harmonizing unit HUl (2≤l≤L) is provided between the data unit DUl and the reallocation unit RAU on each intermediate layer, one for each data unit DUl on the intermediate layer. Each harmonizing unit HUl converts the number of dimensions of the output data from the data unit DUl on the intermediate layer into the same size. Therefore, output data made to have the same number of dimensions by the harmonizing units HUl is input to the reallocation unit RAU.


In other words, the output vector hlD is input to the harmonizing unit HUl from the data unit DUl on the same hierarchy, and the harmonizing unit HUl converts the number of dimensions of the output vector hlD into the same number of dimensions. A function hlH that indicates the harmonizing unit HUl is expressed by the following Equation (3).

[Formula 3]
h_H^l = f_H(W_H^l h_D^l)  (3)

where W_H^l ∈ ℝ^{d_1×d_l} is a learning parameter


In Equation (3), the matrix WlH is a learning parameter of the harmonizing unit HUl. The d-dimensional output vector hlD from the data unit DUl is thereby converted into an m-dimensional output vector hlH. It is noted that m is a hyperparameter that determines the number of dimensions; the values of d and m may differ from those in the reporting unit RUl. Furthermore, fH is an activation function.


The attention unit AU calculates a weight α of each data unit DUl using the output vector hlR from each reporting unit RUl. A function α that indicates the attention unit AU is expressed by the following Equation (4).

[Formula 4]
α = softmax(W_A h_R)  (4)

    • where W_A ∈ ℝ^{(L−1)×m} (m = r(L−1))


In Equation (4), the matrix WA is a learning parameter of the attention unit AU. The softmax function, which is one type of activation function, yields a vector whose dimension equals the number of intermediate layers (L=4 in the example of Equation (5) below). As indicated by the following Equation (5), the vector hR on the right side of Equation (4) is obtained by stacking the vectors hlR in the perpendicular direction.




[Formula 5]
h_R = [h_R^2; h_R^3; … ; h_R^L]  (5)


Therefore, the matrix WA is a matrix of L−1 rows by M columns (where M is the number of elements of the vector hR). By adopting the softmax function in the attention unit AU, each element of the vector α (the sum of all the elements is 1) represents the weight of the corresponding data unit DUl.
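Putting Equations (2), (4), and (5) together, the attention weights can be sketched as follows (the dimensions and the randomly drawn parameters are illustrative assumptions, not part of the patent disclosure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
L, m = 4, 3                      # L = 4 layers; each h_R^l is m-dimensional
h_D = [rng.normal(size=5) for _ in range(L - 1)]        # h_D^2 .. h_D^L
W_R = [rng.normal(size=(m, 5)) for _ in range(L - 1)]   # reporting parameters W_R^l

h_R_parts = [sigmoid(W @ h) for W, h in zip(W_R, h_D)]  # Equation (2)
h_R = np.concatenate(h_R_parts)                         # Equation (5): stack vertically
W_A = rng.normal(size=(L - 1, m * (L - 1)))             # attention parameter
alpha = softmax(W_A @ h_R)                              # Equation (4)

# alpha holds one weight per intermediate layer and its elements sum to 1.
print(alpha.shape, round(float(alpha.sum()), 6))
```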


The reallocation unit RAU reallocates the feature vectors (images xn) in the certain feature space to the other feature space. Specifically, as illustrated in FIG. 1, for example, the prediction model obtained by a feature vector group on the feature space SP1 can be nonlinear; thus, the reallocation unit RAU transfers the feature vector group to the feature space SP2 so that a linear prediction model can be obtained in the feature space SP2. A function hlT that indicates the reallocation unit RAU is expressed by the following Equation (6).

[Formula 6]
h_T^l = f_T(h_H^l, x_n)  (6)


As the function fT, a Hadamard product between vectors, element-wise addition, or the like can be used. In the present embodiment, the Hadamard product is used (refer to the following Equation (7)). In Equation (7), the Hadamard product between the output vector hlH from the harmonizing unit HUl and the feature vector xn is obtained.

[Formula 7]
h_T^l = h_H^l ⊙ x_n  (7)
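The harmonizing and reallocation steps of Equations (3) and (7) can be sketched as follows (the sizes are assumptions, tanh is assumed for fH, and the harmonized dimension is taken equal to that of xn so that the Hadamard product is defined):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6                                   # dimension of the feature vector x_n
x_n = rng.normal(size=d)
h_D_l = rng.normal(size=4)              # output of a data unit DU_l
W_H_l = rng.normal(size=(d, 4))         # harmonizing parameter: maps to d dimensions

h_H_l = np.tanh(W_H_l @ h_D_l)          # Equation (3), f_H = tanh (an assumption)
h_T_l = h_H_l * x_n                     # Equation (7): Hadamard product

# The reallocated vector lives in the same dimension as x_n,
# so every layer's contribution is aligned feature-by-feature.
print(h_T_l.shape == x_n.shape)  # True
```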


The unifying unit UU unifies the output vector hlT from the reallocation unit RAU with the output vector α from the attention unit AU. In other words, the unifying unit UU weights the output vector hlT from the reallocation unit RAU with the output vector α from the attention unit AU. A function hU that indicates the unifying unit UU is expressed by the following Equation (8).









[Formula 8]
h_U = Σ_{k=1}^{L−1} α[k] h_T^{k+1}  (8)








In Equation (8), α[k] on the right side indicates the element (weight) in the k-th dimension of the output vector α of Equation (4).


The decision unit DCU decides on a predicted value yn and outputs the predicted value yn to an output layer 303. Specifically, for example, the decision unit DCU weights the output vector hU from the unifying unit UU with a weight vector wO, which is one of the learning parameters, and gives the resultant vector to the sigmoid function σ, thereby obtaining the predicted value yn. A function yn that indicates the decision unit DCU is expressed by the following Equation (9), in which the superscript t in wOt denotes transposition.

[Formula 9]
y_n = σ(w_O^t h_U)  (9)


The importance unit IU calculates an importance vector sln that indicates an importance of a feature on each layer of the neural network and outputs the importance vector sln to the output layer 303. A function sln that indicates the importance unit IU is expressed by the following Equation (10).

[Formula 10]
s_n^l = α[l] f_T(w_O, h_H^l)  (10)


In Equation (10), α[l] on the right side indicates the element (weight) for the l-th hierarchy of the output vector α of Equation (4). As the function fT, a Hadamard product between vectors, element-wise addition, or the like can be used, similarly to Equation (6). In the first embodiment, the Hadamard product is used, so the importance vector sln is the Hadamard product between the weight vector wO and the output vector hlH from the harmonizing unit HUl, weighted by α[l]. The importance vector sln is the importance of the n-th feature vector (image) xn on the hierarchy l.
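Equations (8) to (10) can be sketched together as follows (the small dimensions, the fixed attention weights, and the random vectors are illustrative assumptions, not part of the patent disclosure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
L, d = 4, 5
alpha = np.array([0.2, 0.5, 0.3])                   # attention weights for layers 2..L
h_T = [rng.normal(size=d) for _ in range(L - 1)]    # reallocated vectors h_T^2 .. h_T^L
h_H = [rng.normal(size=d) for _ in range(L - 1)]    # harmonized vectors h_H^2 .. h_H^L
w_O = rng.normal(size=d)                            # decision-unit weight vector

h_U = sum(a * h for a, h in zip(alpha, h_T))        # Equation (8)
y_n = sigmoid(w_O @ h_U)                            # Equation (9)
s_n = [a * (w_O * h) for a, h in zip(alpha, h_H)]   # Equation (10), f_T = Hadamard

# One importance vector per intermediate layer; the predicted value lies in (0, 1).
print(0.0 < y_n < 1.0)  # True
```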


<Example of Functional Configuration of Data Analysis Apparatus 220>



FIG. 4 is a block diagram illustrating an example of a functional configuration of the data analysis apparatus 220. The data analysis apparatus 220 has the input layer 301, intermediate layers 302, the output layer 303, a conversion section 401, a reallocation section 402, a predicted data calculation section 403, an importance calculation section 404, a setting section 405, a unifying section 406, a contraction section 407, and a selection section 408. These are an example of internal configurations of the learning section 261 and the prediction section 262. Since the selection section 408 is a constituent element in a second embodiment to be described later, the selection section 408 will be described later in the second embodiment.


As indicated by Equation (3), the conversion section 401 converts the number of dimensions d of the output vector hlD on the basis of the output vector hlD from the data unit DUl (where l≥2) on each intermediate layer and the matrix WlH, and outputs the output vector hlH after conversion. The conversion section 401 is the harmonizing unit group HU described above.


As indicated by Equations (6) and (7), the reallocation section 402 reallocates the feature vectors xn in the first feature space SP1 to the second feature space SP2 on the basis of the output vector hlH after conversion from the conversion section 401 and the feature vector xn in the first feature space SP1 given to the input layer 301. The reallocation section 402 is the reallocation unit RAU described above.


As indicated by Equation (9), the predicted data calculation section 403 calculates a predicted vector yn with respect to each feature vector xn in the feature space SP1 on the basis of a reallocation result hTl of the reallocation section 402 and the weight vector wO. The predicted data calculation section 403 is the decision unit DCU described above.


As indicated by Equation (10), the importance calculation section 404 calculates the importance vector sln of the feature vector xn on each hierarchy l of the intermediate layers 302 on the basis of the output vector hlH after conversion and the weight vector wO. The importance calculation section 404 is the importance unit IU described above.


For example, as for the image xn that expresses an animal, it is assumed that an output vector hlaD on a certain hierarchy la is a feature that indicates whether a contour of a face is suitable for a cat and that an output vector hlbD on a certain hierarchy lb (≠la) is a feature that indicates whether a contour of an ear is suitable for a cat. In this case, referring to corresponding importance vectors slan and slbn enables the user to explain in the light of which feature of the face in the image xn the data analysis apparatus 220 discriminates the animal as a cat. For example, in a case in which the importance vector slan is low but the importance vector slbn is high, the user can explain that the data analysis apparatus 220 discriminates the animal as a cat in the light of a shape of the ear in the image xn.


As indicated by Equations (4) and (5), the setting section 405 sets the weight α of each intermediate layer 302 on the basis of the output vector hlD from the intermediate layer 302 and the matrix WA. The setting section 405 is the attention unit AU described above.


As indicated by Equation (8), the unifying section 406 unifies the reallocation result hTl with the weight α set by the setting section 405. The unifying section 406 is the unifying unit UU described above. In this case, the predicted data calculation section 403 calculates the predicted vector yn on the basis of a unifying result hU of the unifying section 406 and the weight vector wO. Furthermore, the importance calculation section 404 calculates the importance vector snl on the basis of the weight α set by the setting section 405, the output vector hlH after conversion, and the weight vector wO.


As indicated by Equation (2), the contraction section 407 contracts the number of dimensions d of the output vector hlD from each intermediate layer 302 on the basis of the output vector hlD from the intermediate layer 302 and the matrix WlR, and outputs the output vector hlR after contraction. The contraction section 407 is the reporting unit group RU described above. In this case, the setting section 405 sets the weight α of each intermediate layer 302 on the basis of the output vector hlR after contraction from the contraction section 407 and the matrix WA.


In a case in which the training data that includes the feature vector xn in the feature space SP1 and the correct label tn with respect to the predicted vector yn is given, the learning section 261 optimizes the matrix WlD that is a first learning parameter, the matrix WlH that is a second learning parameter, the weight vector wO that is a third learning parameter, the matrix WA that is a fourth learning parameter, and the matrix WlR that is a fifth learning parameter using the predicted vector yn and the correct label tn in such a manner, for example, that a cross entropy between the correct label tn and the predicted value yn becomes a minimum.


The prediction section 262 sets the optimized learning parameters 265 to the first neural network 300 and gives a feature vector x′n as the test data to the input layer 301, thereby causing the predicted data calculation section 403 to calculate a predicted vector y′n.


<Example of Data Analysis Processing Procedure>



FIG. 5 is a flowchart illustrating an example of a data analysis processing procedure by the data analysis apparatus 220 according to the first embodiment. In FIG. 5, Steps S501 and S502 correspond to the learning process by the learning section 261 and Steps S503 to S507 correspond to the prediction process by the prediction section 262. First, the data analysis apparatus 220 reads the training data set 264 (Step S501).


The data analysis apparatus 220 performs learning by giving each training data {xn, tn} to the first neural network 300, and generates the learning parameters 265 (the matrices WlD, WlR, WlH, and WA, and the weight vector wO) (Step S502). In the learning (Step S502), the learning section 261 optimizes the learning parameters 265 by, for example, a stochastic gradient method in such a manner that the cross entropy between the correct label tn and the predicted value yn is minimized. The data analysis apparatus 220 stores the generated learning parameters 265 in the server DB 263.
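As a hedged illustration of the learning objective (not part of the patent disclosure; the sample labels and predictions are made up for the example), the cross entropy between correct labels and predicted values can be computed as:

```python
import numpy as np

def cross_entropy(t, y, eps=1e-12):
    """Binary cross-entropy between correct labels t and predicted values y."""
    y = np.clip(y, eps, 1.0 - eps)   # guard against log(0)
    return -np.mean(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

t = np.array([1.0, 0.0, 1.0])
good = cross_entropy(t, np.array([0.9, 0.1, 0.8]))  # predictions close to the labels
bad = cross_entropy(t, np.array([0.2, 0.9, 0.3]))   # predictions far from the labels

# Minimizing this quantity over the learning parameters is the learning step.
print(good < bad)  # True: better predictions give a lower loss
```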


Next, the data analysis apparatus 220 reads the test data set 252 (Step S503), gives each test data image x′n to the neural network in which the learning parameters 265 are already reflected, calculates the predicted value yn′ by Equation (9) (Step S504), and calculates the importance vector sln of each image x′n by Equation (10) (Step S505).


Furthermore, the data analysis apparatus 220 stores the prediction result 253 that is a set of the predicted values yn′ and the importance vectors sln (Step S506), and outputs the prediction result 253 to the client terminal 200 (Step S507). The client terminal 200 displays the prediction result 253 on the monitor 205.


In this way, according to the first embodiment, reallocating the feature vectors xn that are the sample data in advance makes it possible to calculate the importance of each feature even if the neural network is multi-layered, and to realize facilitation of an explanation per sample (feature vector xn) with high accuracy and with efficiency. Moreover, since the linear prediction model is obtained by reallocating the samples (feature vectors xn) in advance, it is possible to calculate the predicted value with high accuracy and with a low load at times of learning and prediction.


Second Embodiment

The second embodiment will be described. The second embodiment is an example of enhancing interpretability of the importance compared with the first embodiment, and uses the importance vector sln obtained in the first embodiment. It is noted that the same configurations as those in the first embodiment are denoted by the same reference characters and description thereof will be omitted.


<Example of Structure of Neural Network>



FIG. 6 is an explanatory diagram illustrating an example of a structure of a second neural network according to the second embodiment. A second neural network 600 has the data unit group DU, the reporting unit group RU, a selection unit SU, a harmonizing unit group HUa, a reallocation unit RAUa, a unifying unit UUa, a decision unit DCUa, and an importance unit IUa.


The selection unit SU calculates an average importance sav∈Rd from the importance vectors sln. A function sav that indicates the selection unit SU is expressed by the following Equation (11).




[Formula 11]
sav=(1/LN)Σl=1LΣn=1Nsnl  (11)


Each element of the average importance sav indicates the average importance of the corresponding feature over the number of hierarchies and the number of samples. The data analysis apparatus selects, from the feature vectors xn, the v features (v is an arbitrary integer equal to or greater than 1) whose elements have the larger absolute values of the average importance, and generates a new v-dimensional feature vector zn∈Rv.
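The selection by the selection unit SU can be sketched as follows: average the importance vectors sln over the hierarchies and samples (Equation (11)), then keep the v features whose average importance has the largest absolute value. The shapes (L hierarchies, N samples, d features) and the data are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the selection unit SU (Equation (11)).
L_layers, N, d = 3, 4, 6
rng = np.random.default_rng(1)
s = rng.normal(size=(L_layers, N, d))       # importance vectors s_n^l
X = rng.normal(size=(N, d))                 # feature vectors x_n

s_av = s.mean(axis=(0, 1))                  # average importance s_av in R^d
v = 2
selected = np.argsort(-np.abs(s_av))[:v]    # v features with largest |s_av|
Z = X[:, selected]                          # new feature vectors z_n in R^v
```

Each row of Z is a carefully selected v-dimensional feature vector zn.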


Each harmonizing unit HUal (2≤l≤L) is provided between the data unit DUl on each intermediate layer and the reallocation unit RAUa. The harmonizing unit HUal converts the output data from the data unit DUl on the intermediate layer so that the outputs all have a uniform number of dimensions. Therefore, output data of the same dimensionality is input from the harmonizing units HUal to the reallocation unit RAUa.


The output vector hlD from the data unit DUl on the same hierarchy is input to the harmonizing unit HUal, which converts the number of dimensions of the output vector hlD into a common size. A function hlH that indicates the harmonizing unit HUal is expressed by the following Equation (12).

[Formula 12]
hHl=fH(WHlhDl)  (12)


where WHl∈Rv×dl is a learning parameter


In Equation (12), the matrix WlH is a learning parameter 265 of the harmonizing unit HUal. The dl-dimensional output vector hlD from the data unit DUl is thereby converted into a v-dimensional output vector hlH. Furthermore, fH is an activation function.
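The harmonizing conversion of Equation (12) can be sketched as follows: project the dl-dimensional output of each intermediate-layer data unit to a common v-dimensional size with a learned matrix and an activation. The dimensions and the choice of tanh as fH are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of a harmonizing unit HU_a^l (Equation (12)).
def harmonize(h_D, W_H, f_H=np.tanh):
    return f_H(W_H @ h_D)   # h_H^l = f_H(W_H^l h_D^l)

rng = np.random.default_rng(2)
v = 4
h_list = [rng.normal(size=d_l) for d_l in (8, 6, 5)]   # differing d_l
W_list = [rng.normal(size=(v, h.shape[0])) for h in h_list]
h_H = [harmonize(h, W) for h, W in zip(h_list, W_list)]
# every harmonized vector now has the same number of dimensions v
```

Regardless of the original dimension dl, each harmonized output is a v-dimensional vector, which is what allows the reallocation unit to treat the layers uniformly.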


The reallocation unit RAUa reallocates the feature vectors (images xn) in the certain feature space to the other feature space. Specifically, if the prediction model obtained by the feature vector group on the feature space SP1 is nonlinear as illustrated in, for example, FIG. 1, the reallocation unit RAUa transfers the feature vector group to the feature space SP2 so that a linear prediction model can be obtained in the feature space SP2. A function h′lT that indicates the reallocation unit RAUa is expressed by the following Equation (13).

[Formula 13]
h′Tl=fT(hHl,zn)  (13)


As the function fT, the Hadamard product between vectors, element-wise addition, or the like can be used. In the present embodiment, the Hadamard product is used (refer to the following Equation (14)). In Equation (14), the Hadamard product between the output vector hlH from the harmonizing unit HUal and the new feature vector zn from the selection unit SU is obtained.

[Formula 14]
h′Tl=hHl⊙zn  (14)
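The reallocation of Equations (13) and (14) reduces to an element-wise product, sketched below with made-up values for illustration.

```python
import numpy as np

# Illustrative sketch of the reallocation step (Equations (13)-(14)):
# combine the harmonized output h_H^l with the selected feature vector
# z_n by the Hadamard (element-wise) product.
h_H = np.array([0.5, -1.0, 2.0])   # output of harmonizing unit HU_a^l
z_n = np.array([2.0, 3.0, 0.5])    # selected feature vector z_n
h_T = h_H * z_n                    # h'_T^l = h_H^l ⊙ z_n
```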


The unifying unit UUa unifies the output vector h′lT from the reallocation unit RAUa with the output vector α from the attention unit AU. In other words, the unifying unit UUa weights the output vector h′lT from the reallocation unit RAUa with the output vector α from the attention unit AU. A function h′U that indicates the unifying unit UUa is expressed by the following Equation (15).









[Formula 15]
h′U=Σk=1L-1α[k]h′Tk+1  (15)
In Equation (15), α on the right side indicates an element (weight) in the k-th dimension of the output vector α of Equation (4).
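The weighted sum of Equation (15) can be sketched as follows: weight the reallocated vectors h′Tk+1 of the intermediate layers with the attention weights α[k] and sum them. The number of layers L and dimension v are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the unifying unit UUa (Equation (15)).
L_layers, v = 4, 3
rng = np.random.default_rng(3)
h_T = {l: rng.normal(size=v) for l in range(2, L_layers + 1)}  # h'_T^l
alpha = rng.random(L_layers - 1)
alpha /= alpha.sum()               # attention weights α[k], summing to 1

# h'_U = Σ_{k=1}^{L-1} α[k] h'_T^{k+1}
h_U = sum(alpha[k - 1] * h_T[k + 1] for k in range(1, L_layers))
```

The result h_U is a single v-dimensional vector in which each intermediate layer contributes in proportion to its attention weight.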


The decision unit DCUa decides on the predicted value yan. Specifically, for example, the decision unit DCUa weights the output vector h′U from the unifying unit UUa with a weight vector w′Ot that is one of the learning parameters 265 and gives the resultant vector to the sigmoid function σ, thereby obtaining a predicted value yan. A function yan that indicates the decision unit DCUa is expressed by the following Equation (16). In Equation (16), t in w′Ot means a transpose.

[Formula 16]
yan=σ(w′Oth′U)  (16)


The importance unit IUa calculates an importance vector s′ln that indicates an importance of a feature on each layer of the second neural network 600. A function s′ln that indicates the importance unit IUa is expressed by the following Equation (17).

[Formula 17]
s′nl=α[l]fT(w′O,h′Hl)  (17)


In Equation (17), α on the right side indicates an element (weight) on the l-th hierarchy of the output vector α of Equation (4). As the function fT, the Hadamard product between the vectors, the element addition, or the like can be used, similarly to Equation (13). In the second embodiment, the Hadamard product is used. In Equation (17), the Hadamard product between the weight vector w′O and the output vector h′lH from the harmonizing unit HUal is obtained.
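The importance computation of Equation (17) can be sketched as follows, with made-up values for illustration.

```python
import numpy as np

# Illustrative sketch of the importance unit IUa (Equation (17)): the
# importance vector on hierarchy l is the attention weight α[l] times the
# Hadamard product of the output weight vector w'_O and the harmonized
# output h'_H^l.
w_O = np.array([0.2, -0.5, 1.0])   # weight vector w'_O
h_H = np.array([1.0, 2.0, -1.0])   # harmonized output h'_H^l
alpha_l = 0.4                      # attention weight α[l]
s_l = alpha_l * (w_O * h_H)        # s'_n^l = α[l] (w'_O ⊙ h'_H^l)
```

Each element of s_l indicates how strongly the corresponding feature on hierarchy l contributed to the prediction.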


<Example of Functional Configuration of Data Analysis Apparatus>


An example of a functional configuration of the data analysis apparatus 220 according to the second embodiment will next be described with reference to FIG. 4. Description of the same configurations as those in the first embodiment will be omitted. In the second embodiment, the selection section 408 is newly added. The selection section 408 calculates the importance vector sav of each element that configures the feature vector xn on the basis of the feature vector xn and the importance vector snl, and selects part of elements from the feature vector xn on the basis of the importance vector sav to generate the feature vector zn. The selection section 408 is the selection unit SU described above.


In this case, the conversion section 401 applies Equation (12), and the reallocation section 402 reallocates, as shown in Equation (13) and Equation (14), the feature vectors zn from the selection section 408 to the second feature space SP2 on the basis of the output vectors hlH after conversion obtained by application of Equation (12) and the feature vectors zn. Furthermore, the unifying section 406 applies Equation (15), the predicted data calculation section 403 applies Equation (16), and the importance calculation section 404 applies Equation (17).


<Example of Data Analysis Processing Procedure>



FIG. 7 is a flowchart illustrating an example of a data analysis processing procedure by the data analysis apparatus 220 according to the second embodiment. In FIG. 7, Steps S701 and S702 correspond to the learning process by the learning section 261 and Steps S703 to S707 correspond to the prediction process by the prediction section 262. First, the data analysis apparatus 220 reads the training data set 264 and importances of features (Step S701).


The data analysis apparatus 220 performs learning by giving each training data {xn, tn} to the second neural network 600, and generates the learning parameters 265 (matrices WlD, WlR, W′lH, and WA, and the weight vector w′O) (Step S702). In the learning (Step S702), the learning section 261 optimizes the learning parameters 265 by, for example, the statistical gradient method in such a manner that the cross entropy between the correct label tn and the predicted value yn becomes a minimum. The data analysis apparatus 220 stores the generated learning parameters 265 in the server DB 263.


Next, the data analysis apparatus 220 reads the test data set 252 (Step S703), gives each image x′n of the test data to the neural network in which the learning parameters 265 are already reflected, calculates the predicted value ya′n by Equation (16) (Step S704), and calculates the importance vector s′ln of each image x′n by Equation (17) (Step S705).


Furthermore, the data analysis apparatus 220 stores the prediction result 253 that is a set of predicted values ya′n and the importance vectors s′ln (Step S706), and outputs the prediction result 253 to the client terminal 200 (Step S707). The client terminal 200 displays the prediction result 253 on the monitor 205.


In this way, according to the second embodiment, carefully selecting, from among the feature vectors xn that are sample data, the feature vectors zn composed of the elements having the higher importances makes it possible to obtain the importances sln and s′ln for the hierarchies l of the carefully selected feature vectors zn, and to enhance the interpretability of the importances sln and s′ln compared with the first embodiment. Moreover, similarly to the first embodiment, the linear prediction model is obtained by reallocating the samples (feature vectors zn) in advance; thus, it is possible to calculate the predicted value with high accuracy and with a low load at times of learning and prediction.


Third Embodiment

A third embodiment will be described. The third embodiment presents an example of predicting Boston house prices to show that the data analysis apparatus 220 can handle information other than image data, as well as an approach classified as regression. A performance validation was conducted using the data cited in Non-Patent Document 3 mentioned below.


<Non-Patent Document 3>




  • Used in Belsley, Kuh & Welsch, ‘Regression diagnostics . . . ’, Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261.




FIG. 8 is an explanatory diagram illustrating feature vectors Features and correct data Target. In an experiment, using 10-fold cross validation, evaluation was conducted as follows. A. The first embodiment was applied using only the four factors (1) to (4) (CRIM, ZN, INDUS, and CHAS). B. The first embodiment was applied using all the factors (1) to (13). C. The second embodiment was applied with the feature vectors zn of the second embodiment set to the four factors (1) to (4) (CRIM, ZN, INDUS, and CHAS) and the feature vectors xn of the first embodiment set to the 13 factors (1) to (13). For A to C, the evaluation was conducted on a scale of coefficients of determination r2 (r2=0.0 to 1.0). Because this is a regression problem, the calculation methods of the decision units DCU and DCUa were changed to the following Equations (18) and (19), respectively.

[Formula 18]
yn=wOthU  (18)
y′n=w′Oth′U  (19)


It should be noted that Equations (18) and (19) merely eliminate the sigmoid function from the corresponding decision-unit equations. The learning section 261 optimizes the learning parameters 265 described above by the statistical gradient method in such a manner that a square error between the correct label tn and the predicted value yn or y′n becomes a minimum.
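The regression variant can be sketched as follows: the sigmoid is removed from the decision unit, and the parameters are optimized by a gradient method so that the square error between labels tn and predictions yn is minimized. A single linear layer stands in for the network, and the data here are synthetic, not the Boston data.

```python
import numpy as np

# Illustrative sketch of the regression variant (Equations (18)-(19)).
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))           # stands in for unified vectors h_U
w_true = np.array([2.0, -1.0, 0.5])
t = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr = 0.05
for _ in range(1000):
    y = X @ w                           # y_n = w_O^t h_U (no sigmoid)
    w -= lr * X.T @ (y - t) / len(t)    # gradient of the square error

# coefficient of determination r2 on the training data
r2 = 1 - np.sum((t - X @ w) ** 2) / np.sum((t - t.mean()) ** 2)
```

On this nearly noise-free synthetic data, the learned weights approach the true weights and r2 approaches 1.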



FIG. 9 is an explanatory diagram illustrating an experimental result. Since the results of B and C indicate that the coefficients of determination r2 exceeded 0.8, it was possible to make predictions with strong correlations according to the first and second embodiments, respectively. The second embodiment, in particular, gave the best result, with a coefficient of determination r2=0.873.


The present invention is not limited to the embodiments described above but encompasses various modifications and equivalent configurations within the scope of the spirit of the accompanying claims. For example, the abovementioned embodiments have been described in detail for easy understanding of the present invention, and the present invention is not necessarily limited to embodiments having all the described configurations. Furthermore, a part of the configurations of a certain embodiment may be replaced by configurations of another embodiment. Moreover, the configurations of another embodiment may be added to the configurations of the certain embodiment. Further, for part of the configurations of each embodiment, addition, deletion, or replacement of other configurations may be made.


As described so far, according to the embodiments described above, the data analysis apparatus 220 has the conversion section 401, the reallocation section 402, and the importance calculation section 404. Therefore, the linear prediction model is obtained by reallocating the feature vectors (xn, x′n) in advance; thus, it is possible to calculate the predicted value with high accuracy and with a low load at times of learning and prediction. Furthermore, it is possible to grasp features possessed by the feature vectors (xn, x′n) by the importance for every hierarchy l from the importance calculation section 404. It is thereby possible to realize facilitation of an explanation about the feature vectors (xn, x′n) given to the neural network as an object to be analyzed with high accuracy and with efficiency.


Moreover, the data analysis apparatus 220 has the predicted data calculation section 403; thus, it is possible to realize facilitation of an explanation about the reason for obtaining the prediction results (yn, y′n) from the neural network as an object to be analyzed with respect to the feature vectors (xn, x′n) with high accuracy and with efficiency.


Furthermore, the data analysis apparatus 220 has the setting section 405 and the unifying section 406; thus, the predicted data calculation section 403 can calculate the prediction result based on the reallocation result with high accuracy.


Moreover, the data analysis apparatus 220 has the contraction section 407; thus, it is possible to improve efficiency of the data analysis.


Furthermore, the data analysis apparatus 220 can construct a high accuracy prediction model by learning by the learning parameters 265.


Moreover, the data analysis apparatus 220 has the selection section 408; thus, carefully selecting the feature vectors zn that are the elements having the higher importances from among the feature vectors xn makes it possible to enhance the interpretability of the importances sln, s′ln.


Moreover, a part of or all of each of the configurations, functions, processing sections, processing means, and the like described above may be realized by hardware by being designed, for example, as an integrated circuit, or may be realized by software by causing a processor to interpret and execute programs that realize the functions.


Information in a program, a table, a file, and the like for realizing the functions can be stored in a storage device such as a memory, a hard disc, or an SSD (Solid State Drive), or in a recording medium such as an IC (Integrated Circuit) card, an SD card, or a DVD (Digital Versatile Disc).


Furthermore, only the control lines and information lines considered to be necessary for the description are illustrated; not all the control lines and information lines necessary for implementation are necessarily illustrated. In actuality, almost all the configurations may be considered to be mutually connected.


DESCRIPTION OF REFERENCE CHARACTERS




  • 2: data analysis system


  • 200: client terminal


  • 220: data analysis apparatus


  • 252: test data set


  • 253: prediction result


  • 261: learning section


  • 262: prediction section


  • 264: training data set


  • 265: learning parameter


  • 300: first neural network


  • 301: input layer


  • 302: intermediate layer


  • 303: output layer


  • 401: conversion section


  • 402: reallocation section


  • 403: predicted data calculation section


  • 404: importance calculation section


  • 405: setting section


  • 406: unifying section


  • 407: contraction section


  • 408: selection section


  • 600: second neural network

  • AU: attention unit

  • DCU: decision unit

  • DU: data unit group

  • HU: harmonizing unit group

  • RAU: reallocation unit

  • RU: reporting unit group


Claims
  • 1. A data analysis apparatus comprising: a processor; an input device; and a memory comprising a main storage device, an auxiliary storage device, and a transportable storage medium, wherein each of the main storage device, auxiliary storage device, and transportable storage medium stores a first neural network and learning parameters of the first neural network, wherein the processor is configured to: construct the first neural network configured with an input layer, an output layer, and two or more intermediate layers which are provided between the input layer and the output layer, wherein each of the input layer, output layer, and intermediate layers is further configured to: perform a calculation by sending data from a layer of a previous stage and a first learning parameter to a first activation function and output a calculation result vector to a layer of a subsequent stage, wherein the calculation result vector is output data, and convert a number of vectorial dimensions of output data from each of the intermediate layers into a number of vectorial dimensions of the same size on the basis of the output data and a second learning parameter such that the converted output data from each of the intermediate layers has the same number of vectorial dimensions; reallocate first input data in a first feature space given to the input layer to a second feature space on the basis of the converted output data and the first input data in the first feature space; calculate a first importance vector of the first input data in each of the intermediate layers on the basis of the respective output data after conversion and a third learning parameter; calculate a second importance vector of each element that configures the first input data on the basis of the first input data and the first importance vector; generate second input data by selecting part of elements from the first input data on the basis of the second importance vector; and reallocate the second input data to the second feature space on the basis of the converted output data and the second input data.
  • 2. The data analysis apparatus according to claim 1, wherein the processor is further configured to: calculate predicted data with respect to the first input data in the first feature space on the basis of a reallocation result of the reallocation section and the third learning parameter.
  • 3. The data analysis apparatus according to claim 2, wherein the processor is further configured to: adjust the first learning parameter, the second learning parameter, and the third learning parameter using the predicted data and correct data with respect to the predicted data in a case of being given training data including the first input data in the first feature space and the correct data.
  • 4. The data analysis apparatus according to claim 2, wherein the processor is further configured to: set a weight of each of the intermediate layers on the basis of the output data from each of the intermediate layers and a fourth learning parameter; and unify the reallocation result with the weight set by the setting section, wherein the processor: calculates the predicted data on the basis of a unifying result of the unifying section and the third learning parameter, and calculates the first importance vector on the basis of the weight set by the setting section, the respective output data after conversion, and the third learning parameter.
  • 5. The data analysis apparatus according to claim 4, wherein the processor is further configured to: contract the number of vectorial dimensions of the respective output data on the basis of output data from each of the intermediate layers and a fifth learning parameter, wherein the processor sets the weight of each of the intermediate layers on the basis of the respective contracted output data and the fourth learning parameter.
  • 6. A data analysis method causing a data analysis apparatus to execute: a construction process that constructs a first neural network configured with an input layer, an output layer, and two or more intermediate layers which are provided between the input layer and the output layer, wherein each of the input layer, output layer, and intermediate layers is further configured to perform a calculation by sending data from a layer of a previous stage and a first learning parameter to a first activation function and output a calculation result vector to a layer of a subsequent stage, wherein the calculation result vector is output data; a conversion process that converts a number of vectorial dimensions of output data from each of the intermediate layers into a number of vectorial dimensions of the same size on the basis of the output data and a second learning parameter and outputs converted output data; a reallocation process that reallocates first input data in a first feature space given to the input layer to a second feature space on the basis of the converted output data from the conversion process and the first input data in the first feature space; an importance calculation process that calculates a first importance vector of the first input data in each of the intermediate layers on the basis of the converted output data and a third learning parameter; and a selection process that calculates a second importance vector of each element that configures the first input data on the basis of the first input data and the first importance vector, and that generates second input data by selecting part of elements from the first input data on the basis of the second importance vector, wherein the second input data is reallocated to the second feature space on the basis of the converted output data and the second input data.
Priority Claims (1)
Number Date Country Kind
JP2017-206069 Oct 2017 JP national
US Referenced Citations (5)
Number Name Date Kind
10140544 Zhao Nov 2018 B1
20100063774 Cook Mar 2010 A1
20160125289 Amir May 2016 A1
20170011280 Soldevila Jan 2017 A1
20180039856 Hara Feb 2018 A1
Non-Patent Literature Citations (4)
Entry
Pareek, V. K., et al. “Artificial neural network modeling of a multiphase photodegradation system.” Journal of Photochemistry and Photobiology A: Chemistry 149.1-3 (2002): 139-146. (Year: 2002).
Meyer, Maria Ines, et al. “A deep neural network for vessel segmentation of scanning laser ophthalmoscopy images.” International Conference Image Analysis and Recognition. Springer, Cham, 2017. (Year: 2017).
Marco Tulio Ribeiro et al.,“Why Should I Trust You? Explaining the Predictions of Any Classifier” 2016 ACM, ISBN 978-1-4503-4232-2/16/08.
Trevor Hastie, Robert Tibshirani and Jerome Friedman, “The Elements of Statistical Learning Data Mining, Inference, and Prediction” Second Edition, Springer Series in Statistics, 2001.
Related Publications (1)
Number Date Country
20190122097 A1 Apr 2019 US