This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2021-110289, filed Jul. 1, 2021, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a machine learning apparatus, an abnormality detection apparatus, and an abnormality detection method.
An abnormality detection apparatus determines whether given diagnostic data is normal or abnormal. The abnormality detection apparatus reconstructs diagnostic data as a weighted sum of normal data prepared in advance and determines that the diagnostic data is abnormal if the reconstruction error is larger than a threshold. Since the diagnostic data is reconstructed by the weighted sum of normal data, highly accurate abnormality detection can be implemented using the fact that the reconstruction error of abnormal data is larger than the reconstruction error of normal data. However, to correctly reconstruct normal data, it is necessary to store a large number of normal data in a memory and perform reconstruction using them. For this reason, an enormous memory capacity depending on the number of normal data is required for reconstruction.
A machine learning apparatus according to the embodiment includes a processing circuit. The processing circuit trains a first learning parameter of an extraction layer configured to extract, from input data, feature data of the input data, based on a plurality of training data. The processing circuit trains a second learning parameter of a reconstruction layer configured to generate reconstructed data of the input data, based on a plurality of training feature data obtained by applying the trained extraction layer to the plurality of training data. The second learning parameter represents representative vectors as many as a dimension count of the feature data, and the representative vectors as many as the dimension count are defined by a weighted sum of the plurality of training data.
A machine learning apparatus, an abnormality detection apparatus, and an abnormality detection method according to the embodiment will now be described with reference to the accompanying drawings.
The machine learning apparatus according to this embodiment is a computer that trains a machine learning model configured to determine the presence/absence of abnormality of input data. The abnormality detection apparatus according to this embodiment is a computer that determines the presence/absence of abnormality of input data concerning an abnormality detection target using the machine learning model trained by the machine learning apparatus.
Input data in this embodiment is data input to the machine learning model 1, and is data concerning an abnormality determination target. As the type of the input data according to this embodiment, image data, network security data, voice data, sensor data, video data, or the like can be applied. The input data according to this embodiment varies depending on the abnormality determination target. For example, if the abnormality determination target is an industrial product, the image data of the industrial product, output data from a manufacturing machine for the industrial product, or output data from the inspection device of the manufacturing machine is used as the input data. As another example, if the abnormality determination target is a human body, medical image data obtained by a medical image diagnostic apparatus, clinical examination data obtained by a clinical examination device, or the like is used as the input data.
The feature extraction layer 11 is a network layer that receives the input data and outputs the feature data of the input data. The reconstruction layer 12 is a network layer that receives the feature data and outputs reconstructed data that reproduces the input data. The error calculation layer 13 is a network layer that calculates the error between the input data and the reconstructed data. The determination layer 14 is a network layer that outputs the determination result of the presence/absence of abnormality of the input data based on comparison between a threshold and the error output from the error calculation layer 13. As an example, an abnormal or normal class is output as the determination result.
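To make the data flow through these four layers concrete, the following is a minimal sketch of one pass through the machine learning model 1, assuming NumPy, a 784-dimensional input vector, a generic feature extractor phi, and an already trained weight matrix W and threshold. All names here are illustrative placeholders, not part of the embodiment itself.

```python
import numpy as np

def detect_anomaly(x, phi, W, threshold):
    """Illustrative data flow through the four layers of the machine learning model 1.

    x         : input data vector (e.g., a 784-dimensional image vector)
    phi       : feature extraction layer 11 (callable returning an H-dimensional vector)
    W         : learning parameter of the reconstruction layer 12 (784 x H matrix)
    threshold : abnormality detection threshold used by the determination layer 14
    """
    z = phi(x)                      # feature extraction layer 11: feature data
    y = W @ z                       # reconstruction layer 12: reconstructed data
    error = np.sum((x - y) ** 2)    # error calculation layer 13: squared error
    return "abnormal" if error > threshold else "normal"   # determination layer 14
```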
The learning parameters of the feature extraction layer 11 and the reconstruction layer 12 are trained such that normal data is reproduced, and abnormal data is not reproduced, by the combination of the feature extraction layer 11 and the reconstruction layer 12. Here, normal data means input data when the abnormality determination target is normal, and abnormal data means input data when the abnormality determination target is abnormal. Typically, abnormal data cannot be obtained at the time of training of the machine learning model 1, and the machine learning model 1 is trained using normal data. For this reason, the feature extraction layer 11 and the reconstruction layer 12 can reproduce normal data and inhibit reproduction of abnormal data. If the input data is normal data, the error between the input data and the reconstructed data has a relatively small value. If the input data is abnormal data, the error between the input data and the reconstructed data has a relatively large value. Hence, when an appropriate threshold is set, input data that is normal data is correctly determined as "normal", and input data that is abnormal data is correctly determined as "abnormal".

(First Embodiment)
The processing circuit 21 includes a processor such as a CPU (Central Processing Unit), and a memory such as a RAM (Random Access Memory). The processing circuit 21 includes an acquisition unit 211, a first learning unit 212, a second learning unit 213, a false detection rate calculation unit 214, a threshold setting unit 215, and a display control unit 216. The processing circuit 21 executes a machine learning program concerning machine learning according to this embodiment, thereby implementing the functions of the units 211 to 216. The machine learning program is stored in a non-transitory computer-readable storage medium such as the storage device 22. The machine learning program may be implemented as a single program that describes all the functions of the units 211 to 216, or may be implemented as a plurality of modules divided into several functional units. In addition, the units 211 to 216 may be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit). In this case, the units may be implemented on a single integrated circuit, or may be individually implemented on a plurality of integrated circuits.
The acquisition unit 211 acquires a plurality of training data. The training data means input data for training. The training data may be normal data, or may be abnormal data.
The first learning unit 212 trains the first learning parameter of the feature extraction layer 11 based on the plurality of training data. Here, the first learning parameter means the learning parameter of the feature extraction layer 11. The learning parameter is a parameter as the training target of machine learning, and is, for example, a weight parameter or a bias.
The second learning unit 213 trains the second learning parameter of the reconstruction layer 12 based on a plurality of training feature data obtained by applying the trained feature extraction layer 11 to the plurality of training data. Here, the second learning parameter means the learning parameter of the reconstruction layer 12. As an example, the second learning parameter represents representative vectors as many as the dimension count of the feature data. The representative vectors as many as the dimension count are defined by the weighted sum of the plurality of training data. The second learning unit 213 trains the second learning parameter by minimizing the error between the training data and the training reconstructed data obtained by applying the training feature data to the reconstruction layer 12.
The false detection rate calculation unit 214 calculates a false detection rate concerning abnormality detection based on the training data and on the training reconstructed data obtained by applying the trained reconstruction layer 12 to the training feature data. More specifically, the false detection rate calculation unit 214 calculates the probability distribution of the error between the training data and the training reconstructed data, and calculates, as the false detection rate, the probability that the error becomes equal to or larger than a threshold in the probability distribution.
The threshold setting unit 215 sets a threshold (to be referred to as an abnormality detection threshold hereinafter) for abnormality detection, which is used by the determination layer 14. The threshold setting unit 215 sets the abnormality detection threshold to a value designated on a graph representing the false detection rate for each threshold.
The display control unit 216 displays various kinds of information on the display device 25. As an example, the display control unit 216 displays the false detection rate in a predetermined display form. More specifically, the display control unit 216 displays a graph representing the false detection rate for each threshold.
The storage device 22 is formed by a ROM (Read Only Memory), an HDD (Hard Disk Drive), an SSD (Solid State Drive), an integrated circuit storage device, or the like.
The storage device 22 stores training data, a machine learning program, and the like.
The input device 23 inputs various kinds of instructions from a user. As the input device 23, a keyboard, a mouse, various kinds of switches, a touch pad, a touch panel display, and the like can be used. An output signal from the input device 23 is supplied to the processing circuit 21. Note that the input device 23 may be an input device of a computer connected to the processing circuit 21 by a cable or wirelessly.
The communication device 24 is an interface configured to perform data communication with an external device connected to the machine learning apparatus 2 via a network. For example, the communication device 24 receives training data from a training data generation device, a storage device, or the like.
The display device 25 displays various kinds of information. As an example, the display device 25 displays a false detection rate under the control of the display control unit 216. As the display device 25, a CRT (Cathode-Ray Tube) display, a liquid crystal display, an organic EL (Electro Luminescence) display, an LED (Light-Emitting Diode) display, a plasma display, or another arbitrary display known in the technical field can appropriately be used. Also, the display device 25 may be a projector.
Training processing of the machine learning model 1 by the machine learning apparatus 2 according to the first embodiment will be described below. In this embodiment, as an example, input data is image data in which one number of “0” to “9” is drawn. Image data in which “0” is drawn is abnormal data, and image data in which one of remaining “1” to “9” is drawn is normal data. In this embodiment, training data is normal data.
As shown in the flowchart of the training processing, the acquisition unit 211 first acquires N normal data xi (i=1, 2, . . . , N) (step S301).
When step S301 is performed, the first learning unit 212 trains a learning parameter Θ of the feature extraction layer 11 based on the normal data xi acquired in step S301 (step S302). In step S302, the first learning unit 212 trains the learning parameter Θ of the feature extraction layer 11 by contrastive learning based on the N normal data xi. Step S302 will be described below in detail.
The feature extraction layer 11 is a function that receives data x as an input and outputs a feature Φ(x). The learning parameter Θ is assigned to the feature extraction layer 11. The data x is a 784-dimensional real number vector, and the feature Φ(x) is an H-dimensional real number vector. H is preferably set to an arbitrary natural number as long as it is smaller than the dimension count of the data x.
In step S302, the first learning unit 212 generates extended normal data x'i from the normal data xi. As an example, the normal data xi, which represents a 28×28 image, is rotated or enlarged/reduced at random as data extension processing, and the normal data after the data extension processing is arranged into a 784-dimensional vector again. The extended normal data x'i is thereby generated. The extended normal data x'i is also an example of the normal data xi.
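A minimal sketch of such data extension processing is shown below, assuming SciPy's ndimage transforms; the rotation and scaling ranges are example values that are not specified in the embodiment.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def extend_normal_data(x_i, rng):
    """Generate extended normal data x'_i from a 784-dimensional normal data vector x_i."""
    img = x_i.reshape(28, 28)
    # random rotation (the angle range is an assumed example value)
    img = rotate(img, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    # random enlargement/reduction (the scale range is also an assumed example value)
    scaled = zoom(img, rng.uniform(0.9, 1.1))
    out = np.zeros((28, 28))
    h = min(28, scaled.shape[0])
    w = min(28, scaled.shape[1])
    out[:h, :w] = scaled[:h, :w]          # crop or pad back to 28 x 28
    return out.reshape(784)               # arrange into a 784-dimensional vector again
```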
Next, the first learning unit 212 initializes the learning parameter Θ of the untrained feature extraction layer 11. The initial value of the learning parameter Θ is preferably set at random. Note that the initial value of the learning parameter Θ may be set to a predetermined value.
Next, the first learning unit 212 inputs the normal data xi to the feature extraction layer 11 to output feature data z2i−1 = Φ(xi), and inputs the extended normal data x'i to the feature extraction layer 11 to output feature data z2i = Φ(x'i).
The first learning unit 212 trains the learning parameter Θ such that the contrastive loss function L shown in equation (1) is minimized. As the optimization method, stochastic gradient descent or the like is preferably used.

L = Σ_{i∈B} [ℓ(2i−1, 2i) + ℓ(2i, 2i−1)]   (1)

where

ℓ(i, j) = −log( exp(s_{i,j}/τ) / Σ_{k=1}^{2|B|} 1[k≠i] exp(s_{i,k}/τ) )

The contrastive loss function L is defined by the total sum of the normalized temperature-scaled cross entropy ℓ(2i−1, 2i) of the feature data z2i−1 for the feature data z2i and the normalized temperature-scaled cross entropy ℓ(2i, 2i−1) of the feature data z2i for the feature data z2i−1. B is the suffix set of the data used in a mini batch of stochastic gradient descent, |B| is the number of elements of the set B, s_{i,j} is the cosine similarity between a vector zi and a vector zj, and τ is a temperature parameter set by the user. In equation (1), 1[k≠i] is a characteristic function that takes 1 when k ≠ i and 0 otherwise.
Contrastive learning for the feature extraction layer 11 is performed by minimizing the contrastive loss function L shown in equation (1). In the contrastive learning shown in equation (1), training is performed such that the cosine similarity between the feature data z2i−1 based on given normal data xi and the feature data z2i based on the extended normal data x'i becomes large, and such that the cosine similarity between the feature data z2i−1 based on the normal data xi and the feature data zj (where j ≠ 2i−1, 2i) of data in the mini batch that is not associated with it becomes small. That is, the combination of the feature data z2i−1 based on given normal data xi and the feature data z2i based on the extended normal data x'i is used as a positive example, and the combination of the feature data z2i−1 based on the normal data xi and the feature data zj of data in the mini batch that is not associated with it is used as a negative example. Note that the feature data zj includes the feature data z2j−1 based on another normal data xj that is not associated with the normal data xi and the feature data z2j based on the extended normal data x'j of that other normal data.
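The following is a sketch of how the contrastive loss of equation (1) could be evaluated over one mini batch, assuming the feature vectors are L2-normalized so that dot products give the cosine similarities s_{i,j}. In practice the learning parameter Θ would be updated by stochastic gradient descent through automatic differentiation, which this forward-only sketch omits.

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """Contrastive loss of equation (1) over one mini batch.

    z   : array of shape (2*B, H); rows 2i and 2i+1 (0-based) hold the feature
          data of the normal data x_i and of its extended normal data x'_i.
    tau : temperature parameter set by the user.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # so dot products are cosine similarities
    s = z @ z.T                                       # s[i, j]: cosine similarity of z_i and z_j
    n = z.shape[0]
    total = 0.0
    for i in range(0, n, 2):
        for a, b in ((i, i + 1), (i + 1, i)):         # positive pairs (2i-1, 2i) and (2i, 2i-1)
            mask = np.ones(n, dtype=bool)
            mask[a] = False                           # characteristic function 1[k != a]
            denom = np.sum(np.exp(s[a, mask] / tau))
            total += -np.log(np.exp(s[a, b] / tau) / denom)
    return total                                      # total sum over the mini batch
```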
When step S302 is performed, the second learning unit 213 applies the trained feature extraction layer 11 generated in step S302 to the normal data xi acquired in step S301, thereby generating normal feature data Φ(xi) (step S303).
When step S303 is performed, the second learning unit 213 trains a learning parameter W of the reconstruction layer 12 based on the normal data xi acquired in step S301 and the normal feature data Φ(xi) generated in step S303 (step S304). The reconstruction layer 12 is a linear regression model.
In step S304, first, the second learning unit 213 applies the normal feature data Φ(xi) to the untrained reconstruction layer 12, thereby generating normal reconstructed data yi=WΦ(xi). Next, the second learning unit 213 optimizes the learning parameter W to minimize the error between the normal data xi and the normal reconstructed data yi.
More specifically, the learning parameter W is optimized to minimize the loss function L shown in equation (2).

L = Σ_{i=1}^{N} ∥xi − WΦ(xi)∥^2 + λ∥W∥^2   (2)

The loss function L is defined by the sum of the total sum of square errors between the normal data xi and the normal reconstructed data yi and the regularization term of the learning parameter W. λ is a regularization intensity parameter set by the user. Since the learning parameter W is decided by minimizing the loss function L to which the regularization term of the learning parameter W is added, the reconstruction by the reconstruction layer 12 can be called kernel ridge reconstruction.
The learning parameter W that minimizes equation (2) can be analytically expressed, as shown by equation (3). X is a 784×N real-valued matrix obtained by arranging the normal data xi (i=1, 2, . . . , N) in its columns, and Φ(X) is an H×N real-valued matrix obtained by arranging the features Φ(xi) of the normal data in its columns.
W = XΦ(X)^T [Φ(X)Φ(X)^T + λI]^−1   (3)
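Assuming NumPy and that the features Φ(xi) of the N normal data have already been arranged column-wise, equation (3) could be computed as in the following sketch; a linear solve is used instead of forming the explicit matrix inverse.

```python
import numpy as np

def fit_reconstruction_layer(X, Phi_X, lam):
    """Compute W = X Phi(X)^T [Phi(X) Phi(X)^T + lam*I]^-1  (equation (3)).

    X     : 784 x N matrix whose columns are the normal data x_i
    Phi_X : H x N matrix whose columns are the features Phi(x_i)
    lam   : regularization intensity parameter lambda
    """
    H = Phi_X.shape[0]
    gram = Phi_X @ Phi_X.T + lam * np.eye(H)           # H x H regularized Gram matrix
    # Solve W @ gram = X @ Phi_X.T instead of inverting gram explicitly
    W = np.linalg.solve(gram.T, (X @ Phi_X.T).T).T     # 784 x H
    return W
```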
Details of training of the feature extraction layer 11 and the reconstruction layer 12 will be described here. The square error between input data x and reconstructed data y can be expressed by
∥x − y∥^2 = ∥x∥^2 + ∥y∥^2 − 2x^T XΦ(X)^T {Φ(X)Φ(X)^T + λI}^−1 Φ(x)   (4)
According to equation (4), to achieve high abnormality detection accuracy, it is preferable to have the following two characteristics. 1. If the input data x is normal data, the error between the input data x and the reconstructed data y is small. 2. If the input data x is abnormal data, the error between the input data x and the reconstructed data y is large.
Placing focus on the third term on the right-hand side of equation (4), the above-described two characteristics can be reworded as follows. 1. When the input data x is normal data, if the inner product of the input data is large (or small), the inner product of the feature data is also large (or small). That is, the inner product of the input data and the inner product of the feature data have a positive correlation. Note that the inner product of the input data is x^T X in equation (4), and the inner product of the feature data is Φ(X)^T {Φ(X)Φ(X)^T + λI}^−1 Φ(x) in equation (4); the metric of this inner product is defined by the inverse of the regularized covariance matrix Φ(X)Φ(X)^T + λI. 2. When the input data x is abnormal data, if the inner product of the input data is large (or small), the inner product of the feature data is small (or large). That is, the inner product of the input data and the inner product of the feature data have a negative correlation.
In this embodiment, the learning parameter Θ is trained such that the feature extraction layer 11 has characteristic 1. That is, if the training data includes only normal data (strictly, normal data and extended normal data), the first learning unit 212 trains the learning parameter of the feature extraction layer 11 such that the positive correlation between the inner product of two normal data and the inner product of the two feature data corresponding to those normal data becomes high. One reason is that abnormal data cannot normally be prepared at the time of training. Another reason is that the inner product of normal data and its extended normal data is large, and in contrastive learning, training is performed such that the inner product of the pair of feature data based on normal data and feature data based on the extended normal data of that normal data becomes large, while the inner product of the pair of feature data based on normal data and feature data of data in the mini batch that is not associated with it becomes small.
When step S304 is performed, the false detection rate calculation unit 214 applies the trained reconstruction layer 12 generated in step S304 to the normal feature data Φ(xi) generated in step S303, thereby generating the normal reconstructed data yi (step S305).
When step S305 is performed, the false detection rate calculation unit 214 calculates a false detection rate for each threshold based on the normal data xi acquired in step S301 and the normal reconstructed data yi generated in step S305 (step S306). The false detection rate means a rate of determining normal data as abnormal data.
In step S306, first, the false detection rate calculation unit 214 calculates a probability distribution p of the error between the normal data xi and the normal reconstructed data yi. The error may be any index capable of evaluating the difference between the normal data xi and the normal reconstructed data yi, such as a square error, an L1 loss, or an L2 loss. In the following description, the error is a square error. Next, for each of a plurality of thresholds r, the false detection rate calculation unit 214 calculates the probability p(∥xi − yi∥^2 ≥ r) that the square error becomes equal to or larger than the threshold r in the probability distribution p. The threshold r is preferably set to an arbitrary value within the possible range of the square error. The calculated probability is used as the false detection rate.
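A sketch of this false detection rate calculation is shown below, using the empirical distribution of the squared errors on the training normal data as the probability distribution p; the grid of candidate thresholds in the usage comment is an assumed example.

```python
import numpy as np

def false_detection_rates(X, Y, thresholds):
    """Rate at which normal data would be determined as abnormal, for each threshold r.

    X : 784 x N matrix of normal data x_i
    Y : 784 x N matrix of normal reconstructed data y_i = W Phi(x_i)
    """
    errors = np.sum((X - Y) ** 2, axis=0)              # squared error of each sample
    return {float(r): float(np.mean(errors >= r)) for r in thresholds}

# Example usage with an assumed threshold grid of 100 candidate values:
# rates = false_detection_rates(X, Y, np.linspace(0.0, 50.0, 100))
```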
When step S306 is performed, the display control unit 216 displays a graph representing the false detection rate for each threshold (step S307). The graph representing the false detection rate for each threshold is displayed on the display device 25 or the like.
When step S307 is performed, the threshold setting unit 215 sets an abnormality detection threshold to be used by the determination layer 14 (step S308). For example, the operator observes the displayed graph representing the false detection rate for each threshold and designates, on the graph, a threshold at which the false detection rate is acceptable. The threshold setting unit 215 sets the designated value as the abnormality detection threshold.
When steps S301 to S308 are performed, the learning parameter of the feature extraction layer 11, the learning parameter of the reconstruction layer 12, and the abnormality detection threshold of the determination layer 14 are decided. The learning parameter of the feature extraction layer 11, the learning parameter of the reconstruction layer 12, and the abnormality detection threshold of the determination layer 14 are set in the machine learning model 1. The trained machine learning model 1 is thus completed. The trained machine learning model 1 is stored in the storage device 22. In addition, the trained machine learning model 1 is transmitted to an abnormality detection apparatus according to the second embodiment via the communication device 24.
Training processing of the machine learning model 1 is thus ended.
Note that the above-described embodiment is merely an example. The embodiment is not limited to this, and various changes and modifications can be made. For example, in step S306, the false detection rate calculation unit 214 calculates the false detection rate using correct answer data used to train the feature extraction layer 11 and the reconstruction layer 12. However, the false detection rate calculation unit 214 may calculate the false detection rate using another correct answer data that is not used to train the feature extraction layer 11 and the reconstruction layer 12.
The advantage of the weight parameter W according to this embodiment will be described here using the neural network nearest neighbor method shown in non-patent literature (Y. Kato et al., "An Anomaly Detection Method with Neural Network Near Neighbor", The Annual Conference of the Japanese Society for Artificial Intelligence, 2020) as a comparative example. In the neural network nearest neighbor method, reconstructed data is generated using a DTM (Data Transformation Matrix). The data size of the DTM depends on the number of training data and the dimension count of the input data. Since the number of training data is generally enormous, a large memory capacity is required to generate reconstructed data in the neural network nearest neighbor method.
The data size of the weight parameter W according to this embodiment depends on the dimension count H of the feature data and the dimension count of the input data. The dimension count H of the feature data is smaller than the number N of normal data used for training. Hence, the data size of the weight parameter W according to this embodiment is smaller than the data size of the DTM shown in the comparative example, and the memory capacity necessary for generation of reconstructed data can be reduced as compared to the comparative example.
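As a purely illustrative comparison, the two memory footprints could be estimated as follows; N, H, and the 4-byte float size below are assumed example values, not taken from the embodiment.

```python
# Hypothetical sizes: N, H, and the 4-byte float size are assumed example values.
N, D, H, BYTES = 10_000, 784, 64, 4
dtm_bytes = D * N * BYTES   # comparative example: the DTM grows with the number of training data
w_bytes = D * H * BYTES     # this embodiment: W grows only with the feature dimension count H
print(dtm_bytes / 1e6, "MB vs", w_bytes / 1e6, "MB")   # ~31.4 MB vs ~0.2 MB
```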
The processing circuit 71 includes a processor such as a CPU and a memory such as a RAM. The processing circuit 71 includes an acquisition unit 711, a feature extraction unit 712, a reconstruction unit 713, an error calculation unit 714, a determination unit 715, and a display control unit 716. The processing circuit 71 executes an abnormality detection program concerning abnormality detection using a machine learning model according to this embodiment, thereby implementing the functions of the units 711 to 716.
The abnormality detection program is stored in a non-transitory computer-readable recording medium such as the storage device 72. The abnormality detection program may be implemented as a single program that describes all the functions of the units 711 to 716, or may be implemented as a plurality of modules divided into several functional units. In addition, the units 711 to 716 may be implemented by an integrated circuit such as an ASIC. In this case, the units may be implemented on a single integrated circuit, or may be individually implemented on a plurality of integrated circuits.
The acquisition unit 711 acquires diagnostic data. The diagnostic data is data of an abnormality detection target and means input data to a trained machine learning model.
The feature extraction unit 712 applies the diagnostic data to a feature extraction layer 11 of a machine learning model 1, thereby generating feature data (to be referred to as diagnostic feature data hereinafter) corresponding to the diagnostic data.
The reconstruction unit 713 applies the diagnostic feature data to a reconstruction layer 12 of the machine learning model 1, thereby generating reconstructed data (to be referred to as diagnostic reconstructed data hereinafter) that reproduces the diagnostic data.
The error calculation unit 714 calculates the error between the diagnostic data and the diagnostic reconstructed data. More specifically, the error calculation unit 714 applies the diagnostic data and the diagnostic reconstructed data to an error calculation layer 13 of the machine learning model 1, thereby calculating the error.
The determination unit 715 compares the error between the diagnostic data and the diagnostic reconstructed data with an abnormality detection threshold, thereby determining the presence/absence of abnormality of the diagnostic data, in other words, whether the diagnostic data is abnormal or normal. More specifically, the determination unit 715 applies the error to a determination layer 14 of the machine learning model 1, and outputs a determination result of the presence/absence of abnormality.
The display control unit 716 displays various kinds of information on the display device 75. As an example, the display control unit 716 displays the determination result of the presence/absence of abnormality in a predetermined display form.
The storage device 72 is formed by a ROM, an HDD, an SSD, an integrated circuit storage device, or the like. The storage device 72 stores a trained machine learning model generated by the machine learning apparatus 2 according to the first embodiment, an abnormality detection program, and the like.
The input device 73 inputs various kinds of instructions from a user. As the input device 73, a keyboard, a mouse, various kinds of switches, a touch pad, a touch panel display, and the like can be used. An output signal from the input device 73 is supplied to the processing circuit 71. Note that the input device 73 may be an input device of a computer connected to the processing circuit 71 by a cable or wirelessly.
The communication device 74 is an interface configured to perform data communication with an external device connected to the abnormality detection apparatus 7 via a network. For example, the communication device 74 receives training data from a training data generation device, a storage device, or the like. In addition, the communication device 74 receives a trained machine learning model from the machine learning apparatus 2.
The display device 75 displays various kinds of information. As an example, the display device 75 displays a determination result of the presence/absence of abnormality under the control of the display control unit 716. As the display device 75, a CRT display, a liquid crystal display, an organic EL display, an LED display, a plasma display, or another arbitrary display known in the technical field can appropriately be used. Also, the display device 75 may be a projector.
Abnormality detection processing for diagnostic data by the abnormality detection apparatus 7 according to the second embodiment will be described below. The abnormality detection processing is performed using the trained machine learning model 1 generated by the machine learning apparatus 2 according to the first embodiment. The trained machine learning model 1 is stored in the storage device 72 or the like.
As shown in the flowchart of the abnormality detection processing, the acquisition unit 711 first acquires diagnostic data of the abnormality detection target (step S801).
When step S801 is performed, the feature extraction unit 712 applies the diagnostic data acquired in step S801 to the feature extraction layer 11, thereby generating diagnostic feature data (step S802). A learning parameter optimized in step S302 according to the first embodiment is assigned to the feature extraction layer 11.
When step S802 is performed, the reconstruction unit 713 applies the diagnostic feature data generated in step S802 to the reconstruction layer 12, thereby generating diagnostic reconstructed data (step S803). A learning parameter W optimized in step S304 is assigned to the reconstruction layer 12. The reconstruction layer 12 multiplies diagnostic feature data Φ(x) by the learning parameter W, thereby outputting reconstructed data y=WΦ(x). As described above, the learning parameter W has representative vectors as many as a dimension count H of the diagnostic feature data Φ(x). An operation in the reconstruction layer 12 results in a weighted sum using the component of the diagnostic feature data Φ(x) corresponding to the representative vector as a weight.
As described above, the learning parameter W has representative vectors Vh as many as the dimension count H of the diagnostic feature data Φ(x). The diagnostic reconstructed data y is calculated by the weighted sum (linear combination) of the representative vectors Vh using the component ϕh of the diagnostic feature data Φ(x) corresponding to the representative vector Vh as a weight (coefficient). The component ϕh functions as a weight for the representative vector Vh. The representative vector Vh corresponds to the weighted sum of the N normal data xi used for machine learning of the reconstruction layer 12. The weight here corresponds to the component corresponding to each normal data xi of Φ(X)^T [Φ(X)Φ(X)^T + λI]^−1 shown by equation (3).
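Written out explicitly, the operation of the reconstruction layer 12 is this weighted sum of representative vectors, as in the following NumPy sketch; it is numerically identical to the matrix product y = WΦ(x).

```python
import numpy as np

def reconstruct(W, phi_x):
    """Diagnostic reconstructed data y as a weighted sum of representative vectors V_h.

    W     : 784 x H learning parameter; its h-th column is the representative vector V_h
    phi_x : H-dimensional diagnostic feature data Phi(x); component phi_h is the weight of V_h
    """
    y = np.zeros(W.shape[0])
    for h in range(W.shape[1]):
        y += phi_x[h] * W[:, h]   # weighted sum (linear combination) of the representative vectors
    return y                      # numerically identical to W @ phi_x
```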
When step S803 is performed, the error calculation unit 714 calculates the error between the diagnostic data acquired in step S801 and the diagnostic reconstructed data generated in step S803 (step S804). More specifically, the error calculation unit 714 applies the diagnostic data and the diagnostic reconstructed data to the error calculation layer 13, thereby calculating the error. As the error, the same index as the error used in step S306 is used; in this embodiment, a square error is preferably used.
When step S804 is performed, the determination unit 715 applies the error calculated in step S804 to the determination layer 14, and outputs the determination result of the presence/absence of abnormality of the diagnostic data (step S805). The abnormality detection threshold set in step S308 is assigned to the determination layer 14. If the error is larger than the abnormality detection threshold, it is determined that the diagnostic data is abnormal. If the error is smaller than the abnormality detection threshold, it is determined that the diagnostic data is normal.
When step S805 is performed, the display control unit 716 displays the determination result output in step S805 (step S806). For example, whether the diagnostic data is abnormal or normal is preferably displayed as the determination result on the display device 75.
The abnormality detection performance of the machine learning model 1 according to this embodiment will be described here. The abnormality detection performance is a capability of correctly reproducing input data that is normal data and inhibiting correct reproduction of input data that is abnormal data.
As described above, input data that is normal data is reproduced by the combination of the feature extraction layer 11 and the reconstruction layer 12 with a relatively small error, whereas input data that is abnormal data is reproduced with a relatively large error, so the determination layer 14 can correctly discriminate normal data from abnormal data.
The abnormality detection processing is thus ended.
Note that the above-described embodiment is merely an example. The embodiment is not limited to this, and various changes and modifications can be made. For example, in step S806, the display control unit 716 displays the determination result. However, the determination result may be transferred to another computer and displayed.
(Modification 1)
In the above description, training data includes only normal data. However, the embodiment is not limited to this. Training data according to Modification 1 includes normal data and abnormal data.
The first learning unit 212 according to Modification 1 trains the learning parameter Θ by contrastive learning such that the feature extraction layer 11 has characteristic 2 (when the input data x is abnormal data, if the inner product of the input data is large (or small), the inner product of the feature data is small (or large)). That is, if the training data includes normal data and abnormal data, the first learning unit 212 trains the learning parameter Θ of the feature extraction layer 11 such that the negative correlation between the inner product of the normal data and the abnormal data and the inner product of the feature data corresponding to the normal data and the feature data corresponding to the abnormal data becomes high.
When abnormal data is used as training data, it is expected that the performance of identifying normal data and abnormal data by the feature extraction layer 11 improves, and the abnormality detection performance by the machine learning model 1 improves.
(Modification 2)
The first learning unit 212 according to Modification 2 may train the learning parameter Θ by contrastive learning and decorrelation based on the feature data of normal data. By the decorrelation, the correlation between certain normal data and another normal data can be set to almost zero. In this case, a regularization term for decorrelating feature data is preferably added to the contrastive loss function L. As an example, a regularization term R for decorrelation is defined by equation (5). The regularization term R is added to the contrastive loss function L of equation (1). Here, H in equation (5) is the dimension count of the feature vector z, r_{i,j} is the correlation coefficient between the i-th and j-th elements of the feature vector, and T is a temperature parameter.
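Since equation (5) itself is not reproduced above, the following is only an assumed sketch of a decorrelation regularization term built from the correlation coefficients r_{i,j} of the feature vector elements; the exact functional form in equation (5) may differ.

```python
import numpy as np

def decorrelation_regularizer(Z, T=1.0):
    """Assumed form of a decorrelation term R over a batch of feature vectors.

    Z : array of shape (batch, H); each row is a feature vector z
    T : temperature parameter
    """
    H = Z.shape[1]
    r = np.corrcoef(Z, rowvar=False)        # r[i, j]: correlation of the i-th and j-th elements
    off_diag = r[~np.eye(H, dtype=bool)]
    return np.sum((off_diag / T) ** 2)      # drives pairwise correlations toward zero
```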
When decorrelation is performed, it is expected that the performance of identifying normal data and abnormal data by the feature extraction layer 11 improves, and the abnormality detection performance by the machine learning model 1 improves.
(Modification 3)
In the above-described embodiment, the dimension count H is decided in advance. The dimension count H according to Modification 3 may be decided in accordance with a storage capacity needed for the machine learning model 1 and assigned to the storage device 72 of the abnormality detection apparatus 7 that implements the machine learning model 1. As an example, if there is not a sufficient margin for the storage capacity for the machine learning model 1, the dimension count H is preferably set to a relatively small value. As another example, if there is a sufficient margin for the storage capacity for the machine learning model 1, the dimension count H is preferably set to a relatively large value while placing focus on the performance of the machine learning model 1. The storage capacity needed for the machine learning model 1 is preferably designated by the operator. The processing circuit 21 can calculate the dimension count H based on the designated storage capacity and the storage capacity required per dimension.
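A hypothetical sketch of such a calculation follows; the input dimension and bytes-per-value below are assumptions for illustration only.

```python
def dimension_count_from_budget(budget_bytes, input_dim=784, bytes_per_value=4):
    """Largest dimension count H whose weight W (input_dim x H) fits in the designated budget."""
    bytes_per_dimension = input_dim * bytes_per_value   # storage for one representative vector V_h
    return max(1, budget_bytes // bytes_per_dimension)

# e.g. dimension_count_from_budget(1_000_000) -> 318, i.e. H = 318 fits in about 1 MB
```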
(Modification 4)
In the above-described embodiment, the machine learning model 1 includes the feature extraction layer 11, the reconstruction layer 12, the error calculation layer 13, and the determination layer 14. However, the configuration of the machine learning model 1 is not limited to this.
(Additional Remarks)
As described above, the machine learning apparatus 2 according to the first embodiment trains the feature extraction layer 11 that extracts, from input data, feature data of the input data, and the reconstruction layer 12 that generates the reconstructed data of the input data from the feature data. The machine learning apparatus 2 includes the first learning unit 212 and the second learning unit 213. The first learning unit 212 trains the first learning parameter Θ of the feature extraction layer 11 based on N training data. The second learning unit 213 trains the second learning parameter W of the reconstruction layer 12 based on N training feature data obtained by applying the trained feature extraction layer 11 to the N training data. The second learning parameter W represents representative vectors as many as the dimension count of the feature data. The representative vectors as many as the dimension count are defined by the weighted sum of the plurality of training data.
As described above, the abnormality detection apparatus 7 according to the second embodiment includes the feature extraction unit 712, the reconstruction unit 713, and the determination unit 715. The feature extraction unit 712 extracts feature data from diagnostic data. The reconstruction unit 713 generates reconstructed data from the feature data. Here, the reconstruction unit 713 generates the reconstructed data based on the weighted sum of the feature data and representative vectors as many as the dimension count of the feature data. The determination unit 715 determines the presence/absence of abnormality of the diagnostic data based on the diagnostic data and the reconstructed data.
According to the above-described configuration, it is possible to save the memory capacity and achieve high abnormality detection performance.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions.
Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.