This U.S. patent application claims priority under 35 U.S.C. § 119 to: Indian Patent Application No. 202321044922, filed on Jul. 4, 2023. The entire contents of the aforementioned application are incorporated herein by reference.
The disclosure herein generally relates to unsupervised domain adaptation, and, more particularly, to methods and systems for graph assisted unsupervised domain adaptation for machine fault diagnosis.
Unsupervised domain adaptation (UDA) has emerged as an enabling technology for many useful applications such as machine fault diagnosis. UDA leverages knowledge learned from labeled data in a source domain to build an effective classifier for unlabeled data in a target domain, given that the source and target data have different underlying distributions. Classical data-driven machine learning algorithms for machine diagnosis assume that the training (source) and test (target) data follow the same data distribution. However, in practical industrial scenarios, this assumption does not always hold, as machine data from the two domains differ significantly due to different working conditions, sampling frequencies, locations of sensor placement, and so on. Additionally, for machine fault diagnosis, access to labeled data for every machine is not always available, as manual labeling is time consuming and inducing faults in machines is economically not viable. Moreover, only limited data is available for training. Knowledge transfer between different but related machines can therefore be beneficial.
Most of the existing techniques aim to address the marginal distribution discrepancy alone, ignoring the conditional distribution discrepancy that may exist between the two domains. To achieve good adaptation performance, both the marginal and conditional distributions of the source and target data need to be aligned. The problem becomes challenging when the data is limited and no labels are available for the target domain data.
Further, existing graph-based domain adaptation work focuses on jointly optimizing domain-invariant feature learning, through a divergence loss, and a label propagation loss over a fixed graph obtained by augmenting the source and target domain data, to learn the labels of the target domain data. The labels are treated as graph signals that are projected onto the graph. Using the known source labels, the target domain labels are predicted by label propagation over this fixed graph. When the domain discrepancy is small, the fixed graph has edge connectivity between source and target nodes, which eventually helps in label propagation. However, when the domain discrepancy between the source and target domains is large, the fixed graph results in two disjoint sub-graphs for the source and target data, respectively, with no edge connectivity between the source and target nodes. Hence, label propagation cannot estimate the labels of the target domain data.
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, a processor-implemented method for graph assisted unsupervised domain adaptation for machine fault diagnosis is provided. The method includes the steps of: receiving a labeled source domain S data {Xs, Ys} and an unlabeled target domain T data {Xt}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples, each labeled source domain sample comprising a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples, each unlabeled target domain sample comprising a target domain feature; performing an optimization of a set of parameters including (i) a source projection matrix Ps, (ii) a target projection matrix Pt, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix Ps and the target projection matrix Pt, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data Xsp and a target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; (c) augmenting the source projected data Xsp and the target projected data Xtp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through a label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning, using the parameters initialized at step (a) and the set of parameters from step (b) through step (d) in a first iteration and the learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix Ps and the target projection matrix Pt, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data Xsp and the target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; augmenting the source projected data Xsp and the target projected data Xtp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss, wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix Ps, (ii) the learnt target projection matrix Pt, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample; and determining a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.
In another aspect, a system for graph assisted unsupervised domain adaptation for machine fault diagnosis is provided. The system includes: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a labeled source domain S data {Xs, Ys} and an unlabeled target domain T data {Xt}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples, each labeled source domain sample comprising a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples, each unlabeled target domain sample comprising a target domain feature; perform an optimization of a set of parameters including (i) a source projection matrix Ps, (ii) a target projection matrix Pt, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix Ps and the target projection matrix Pt, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data Xsp and a target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; (c) augmenting the source projected data Xsp and the target projected data Xtp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through a label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning, using the parameters initialized at step (a) and the set of parameters from step (b) through step (d) in a first iteration and the learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix Ps and the target projection matrix Pt, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data Xsp and the target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; augmenting the source projected data Xsp and the target projected data Xtp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss, wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix Ps, (ii) the learnt target projection matrix Pt, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample; and determine a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause: receiving a labeled source domain S data {Xs, Ys} and an unlabeled target domain T data {Xt}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples, each labeled source domain sample comprising a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples, each unlabeled target domain sample comprising a target domain feature; performing an optimization of a set of parameters including (i) a source projection matrix Ps, (ii) a target projection matrix Pt, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix Ps and the target projection matrix Pt, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data Xsp and a target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; (c) augmenting the source projected data Xsp and the target projected data Xtp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through a label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning, using the parameters initialized at step (a) and the set of parameters from step (b) through step (d) in a first iteration and the learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix Ps and the target projection matrix Pt, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data Xsp and the target projected data Xtp, from (i) the labeled source domain S data and the source projection matrix Ps, and (ii) the unlabeled target domain T data and the target projection matrix Pt, respectively; augmenting the source projected data Xsp and the target projected data Xtp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss, wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix Ps, (ii) the learnt target projection matrix Pt, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample; and determining a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.
In an embodiment, the source domain feature present in each labeled source domain sample and the target domain feature present in each unlabeled target domain sample are obtained from one or more sensors present in the machine whose faults are to be diagnosed.
In an embodiment, (i) the source domain label associated with each source domain feature, and (ii) the target domain label associated with each target domain feature, are part of a plurality of predefined labels.
In an embodiment, minimizing the graph total variation (GTV) loss propagates the source domain labels over the joint graph G to estimate the probabilistic target domain labels associated with the target domain T data.
In an embodiment, the weighted class-wise maximum mean discrepancy (CMMD) loss is defined as a sum of class-wise distances between the mean of the projected source domain data Xsp and the mean of the projected target domain data Xtp associated with the same label among the plurality of predefined labels.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Classical machine learning algorithms assume that the training and test data follow the same data distribution. However, in practice, this assumption does not always hold, which leads to deterioration in their performance. Domain Adaptation (DA) has emerged as a promising technique to tackle this issue, where the training (source) and test (target) data can be from different distributions. DA relies on leveraging the information learned from a well-studied source domain to improve the classification performance on the target domain. According to the availability of label information in the target domain, DA can be categorized as unsupervised DA (UDA), where the target domain is completely unlabeled, and semi-supervised DA (SDA), where the target domain has limited labels.
Among the existing DA techniques, divergence-based and adversarial learning-based techniques have been successfully applied in different applications. Divergence-based DA techniques map instances from both the source and target domains to a common feature space to learn domain-invariant features. However, they fail to perform when a large distribution discrepancy exists between the two domains.
Adversarial learning-based DA methods are able to handle such a scenario, as they learn data translation between the source and target domains by training a generator and a discriminator network. However, these techniques do not guarantee that class discriminability is preserved during the data translation. Also, they require massive data for training, which may not always be available in many practical application scenarios.
Apart from the existing techniques mentioned above, graph-based techniques have recently been used for DA, as graphs can capture the actual data manifolds effectively. The existing techniques are based on Graph Convolutional Networks (GCN), Graph Signal Processing (GSP), and hybrid techniques that utilize a divergence method with a graph to learn domain-invariant features. An unsupervised Domain Adaptive Network Embedding (DANE) framework has been proposed using a GCN and an adversarial network that learns transferable embeddings between the source and target domains. Another UDA technique utilized a dual GCN for local and global consistency in feature aggregation. Although popular, these techniques ignore the properties of graph-structured data while carrying out classification.
To effectively exploit the underlying structure of the data, the concepts of GSP have been utilized for SDA. The technique is based on aligning the Fourier bases of the graphs constructed using the source and target domain data. The spectrum of the labels learned from the source graph is transferred to the target graph for DA. This work was extended by incorporating graph learning into the optimization formulation that aligns the spectra of the graphs associated with the source and target data, which resulted in improved performance.
Further, a Graph Adaptive Knowledge Transfer (GAKT) technique has been proposed that jointly optimizes domain-invariant feature learning through a weighted class-wise adaptation loss and label propagation over a graph. A joint graph is employed by augmenting the source and target domain data to propagate the labels from the known source data to the unknown target data. When the domain discrepancy is small, the joint (fixed) graph has edge connectivity between source and target nodes, which eventually helps in label propagation. However, when the domain discrepancy between the source and target domains is large, the joint (fixed) graph results in two disjoint sub-graphs for the source and target data, respectively, with no edge connectivity between the source and target nodes. Hence, label propagation cannot estimate the labels of the target domain data.
Further, all the aforementioned techniques mainly focus on computer vision-related DA applications, not on time-series data for the challenging adaptation scenario of machine fault diagnosis or machine inspection.
In most practical machine inspection applications, access to labeled data is difficult, as manual labeling is time consuming and inducing faults in machines is not economically viable. Moreover, labeled data is not available for every machine. Thus, transferring the knowledge learned from the labeled data of one machine (source) to a different but related machine (target) is important and required in practice. This is a challenging adaptation scenario, since the data distributions of the two domains differ significantly due to different working conditions, sampling frequencies, locations of sensor placement, and so on.
The present disclosure solves the technical problems in the art using a Graph Assisted Unsupervised Domain Adaptation (GA-UDA) technique for machine fault diagnosis. The GA-UDA technique of the present disclosure carries out the domain adaptation in two stages. In the first stage, a class-wise maximum mean discrepancy (CMMD) loss is minimized to transform the data from both the source and target domains to a shared feature space. In the second stage, the augmented transformed (projected) data from both the source and target domains is utilized to construct a joint graph. Subsequently, the labels of the target domain data are estimated through label propagation over the joint graph. The GA-UDA technique of the present disclosure is similar in nature to the conventional GAKT technique. However, unlike the fixed joint graph considered in the GAKT technique, the present disclosure iteratively updates the joint graph using the transformed features of both the source and target domains obtained through the optimization formulation. This helps in addressing a significant distribution shift between the two domains.
Referring now to the drawings, and more particularly to
The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer, and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.
The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems or devices with one another or to another server.
The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in
The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in
Referring to
At step 302 of the method 300, the one or more hardware processors 104 of the system 100 are configured to receive a labeled source domain data and an unlabeled target domain data. The labeled source domain data includes a plurality of labeled source domain samples. Each labeled source domain sample includes a source domain feature and a source domain label. In other words, each labeled source domain sample is a labeled or annotated sample. The unlabeled target domain data includes one or more unlabeled target domain samples. Each unlabeled target domain sample comprises a target domain feature. In other words, each unlabeled target domain sample includes only the features for which the labels or classes (fault labels or fault classes) are to be predicted.
Hence, the unlabeled target domain data is associated with a machine whose faults are to be diagnosed, and the labeled source domain data is associated with a similar machine. Specifically, in an embodiment, the labeled source domain data and the unlabeled target domain data are from different but similar or related machines, such as machines with different working conditions, different sampling frequencies, and different sensor placements, but with the same or similar fault types, sensors, and so on. In an embodiment, the source domain feature present in each labeled source domain sample and the target domain feature present in each unlabeled target domain sample are obtained from raw sample data collected from one or more sensors present in the machine whose faults are to be diagnosed. For example, the source domain features and the target domain features for the machine include a root mean square (RMS) value, a variance, a data peak value, a kurtosis value, a peak-to-peak time-domain value, and so on.
In an embodiment, each source domain label associated with each source domain feature, and each target domain label associated with each target domain feature, are part of a plurality of predefined labels. Here, the plurality of predefined labels means the annotated labels or classes through which the machine fault types are defined. Thus, the labeled source domain data and the unlabeled target domain data are associated with a same feature space (the source domain feature and the target domain feature) and a same label space or class space (the plurality of predefined labels).
Let the labeled source domain S data be expressed as S = {Xs, Ys}, where ns denotes the number of the plurality of labeled source domain samples, Xs∈R^(m×ns) denotes the source domain features, each source domain feature being of m dimensions, and Ys∈R^(ns×C) denotes the source domain labels with C number of classes (the plurality of predefined labels). Similarly, the unlabeled target domain T data is expressed as T = {Xt}, where nt denotes the number of the one or more unlabeled target domain samples and Xt∈R^(m×nt) denotes the target domain features. The goal is to estimate the labels Yt of the target domain data Xt, assuming the feature and label spaces to be the same across both the source and target domains.
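For concreteness, the following is a minimal sketch of this data layout; the array shapes follow the notation above, while the sizes and random values are purely illustrative:

```python
# A minimal sketch of the assumed data layout (synthetic values for
# illustration only; shapes follow the notation above, not any dataset).
import numpy as np

m, ns, nt, C = 5, 351, 351, 3          # feature dim, sample counts, classes
rng = np.random.default_rng(0)

Xs = rng.standard_normal((m, ns))      # source features, one sample per column
Xt = rng.standard_normal((m, nt))      # target features, unlabeled
ys = rng.integers(0, C, size=ns)       # integer source labels
Ys = np.eye(C)[ys]                     # one-hot source label matrix, ns x C
```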
Class-wise Maximum Mean Discrepancy: The Maximum Mean Discrepancy (MMD) is one of the popular techniques used to address the domain discrepancy between the source S and target T domains. The MMD computes the deviation of the sample means of the two domains in the projected space. More formally, the MMD loss C1 is mathematically expressed as in equation 1:

C1 = ‖(1/ns) Σ_{i=1}^{ns} Ps^T xs_i − (1/nt) Σ_{j=1}^{nt} Pt^T xt_j‖²   (1)

where Ps∈R^(m×k) and Pt∈R^(m×k) are two projection matrices with k<m, and xs_i and xt_j denote the ith source domain sample and the jth target domain sample, respectively. Minimizing the MMD loss aligns only the marginal distributions of the two domains, ignoring the class-wise (conditional) distribution discrepancy.
To address this, the system 100 and method 300 of the present disclosure use a Class-wise Maximum Mean Discrepancy (CMMD), which computes the difference between the sample means of same-class data from the two domains. The CMMD requires the knowledge of labels for both domains. Since the target domain T data is unlabeled, in most works, pseudo labels are generated by applying a classifier trained on the labeled source domain S data to the target domain T data. In other words, the weighted class-wise maximum mean discrepancy (CMMD) loss is defined as a sum of class-wise distances between the mean of the projected source domain data Xsp and the mean of the projected target domain data Xtp associated with the same label among the plurality of predefined labels. The weighted CMMD loss C2 is mathematically expressed as in equation 2:

C2 = Σ_{c=1}^{C} ‖(1/ns^c) Σ_{i=1}^{ns} Ps^T xs_i ys_i(c) − (1/nt^c) Σ_{j=1}^{nt} Pt^T xt_j ŷt_j(c)‖²   (2)

where ys_i(c) indicates whether the ith source domain sample belongs to class c, and ns^c and nt^c denote the effective numbers of source and target domain samples belonging to class c, respectively. Here, nt^c is calculated as Σ_{j=1}^{nt} ŷt_j(c), where ŷt_j denotes the probabilistic (pseudo) label of the jth target domain sample and ŷt_j(c) denotes its probability of belonging to class c.
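A minimal sketch of the two losses, under the reconstructions of equations 1 and 2 given above, may look as follows; the function names are illustrative, and Yt_prob stands for the probabilistic target labels ŷt:

```python
# Sketch of the MMD loss (equation 1) and the pseudo-label-weighted CMMD
# loss (equation 2); variable names are illustrative.
import numpy as np

def mmd_loss(Ps, Pt, Xs, Xt):
    """Squared distance between projected sample means of the two domains."""
    mu_s = (Ps.T @ Xs).mean(axis=1)
    mu_t = (Pt.T @ Xt).mean(axis=1)
    return float(np.sum((mu_s - mu_t) ** 2))

def cmmd_loss(Ps, Pt, Xs, Ys, Xt, Yt_prob, eps=1e-8):
    """Sum over classes of distances between class-conditional projected
    means; target means are weighted by the probabilistic labels Yt_prob."""
    Xsp, Xtp = Ps.T @ Xs, Pt.T @ Xt     # k x ns and k x nt projected data
    loss = 0.0
    for c in range(Ys.shape[1]):
        ns_c = Ys[:, c].sum()
        nt_c = Yt_prob[:, c].sum()      # soft class count, as described above
        mu_s = (Xsp * Ys[:, c]).sum(axis=1) / (ns_c + eps)
        mu_t = (Xtp * Yt_prob[:, c]).sum(axis=1) / (nt_c + eps)
        loss += np.sum((mu_s - mu_t) ** 2)
    return float(loss)
```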
At step 304 of the method 300, the one or more hardware processors 104 of the system 100 are configured to perform an optimization of a set of parameters including (i) a source projection matrix Ps, (ii) a target projection matrix Pt, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample of the one or more unlabeled target domain samples in the unlabeled target domain T data.
In the present disclosure, as shown in
The optimization of the set of parameters is described in detail through steps 304a through 304e. At step 304a, the source projection matrix Ps and the target projection matrix Pt, are initialized in a first iteration of a plurality of iterations, based on the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique. More specifically, the source projection matrix Ps and the target projection matrix Pt are initialized using the PCA technique over the source and target domain data Xs and Xt, respectively.
At step 304b, the source projected data Xsp and the target projected data Xtp are determined from (i) the labeled source domain S data and the source projection matrix Ps obtained at step 304a, and (ii) the unlabeled target domain T data and the target projection matrix Pt obtained at step 304a, respectively. More specifically, the source projected data is determined as Xsp = Ps^T Xs, and similarly, the target projected data is determined as Xtp = Pt^T Xt.
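A sketch of steps 304a and 304b, assuming PCA is computed via an SVD of the centered data and an illustrative latent dimension k; it reuses the synthetic Xs and Xt from the earlier sketch:

```python
# Sketch of steps 304a-304b: PCA initialization of Ps and Pt, then
# projection of each domain; k = 3 is an assumed latent dimension.
import numpy as np

def pca_projection(X, k):
    """Return the top-k principal directions of X (features x samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k]                     # m x k projection matrix

k = 3
Ps, Pt = pca_projection(Xs, k), pca_projection(Xt, k)
Xsp, Xtp = Ps.T @ Xs, Pt.T @ Xt        # projected data, k x ns and k x nt
```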
At step 304c, the source projected data Xsp and the target projected data Xtp obtained at step 304b are augmented to construct the joint graph G, using the Gaussian kernel. Graph signal processing (GSP) is a method for modeling signals that reside on a graph structure. The graph structure is represented as {V, E, W}, where V is the set of vertices and E is the set of edges connecting those vertices, with weights specified in the weight matrix W. Given the data X∈R^(m×n) with m features of n samples, a graph of n vertices can be constructed using W∈R^(n×n) obtained from the Gaussian kernel, which is mathematically expressed as in equation 3:

W_ij = exp(−‖x_i − x_j‖² / 2σ²)   (3)

where σ is a scaling factor, and x_i and x_j are the feature vectors at the ith and jth vertices of the graph G, respectively. One of the important matrices associated with graphs is the graph Laplacian. The un-normalized graph Laplacian is expressed as L = D − W ∈ R^(n×n), where D is the degree matrix, a diagonal matrix whose diagonal entries are expressed as D_ii = Σ_j W_ij. The normalized graph Laplacian is expressed as Ln = D^(−1/2)(D − W)D^(−1/2).
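A sketch of step 304c under equation 3; the dense pairwise computation and the choice σ = 1.0 are illustrative:

```python
# Sketch of step 304c: the joint graph over the augmented projected data
# via the Gaussian kernel (equation 3), plus the normalized Laplacian Ln.
import numpy as np

def joint_graph_laplacian(Xsp, Xtp, sigma=1.0):
    Xp = np.concatenate([Xsp, Xtp], axis=1)          # k x (ns + nt)
    sq = np.sum((Xp[:, :, None] - Xp[:, None, :]) ** 2, axis=0)
    W = np.exp(-sq / (2.0 * sigma ** 2))             # equation 3
    d = W.sum(axis=1)                                # vertex degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt  # Ln

Ln = joint_graph_laplacian(Xsp, Xtp, sigma=1.0)
```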
At step 304d, the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample is estimated through label propagation over the joint graph G constructed at step 304c, using the source domain label associated with the source domain feature present in each labeled source domain sample. The probabilistic target domain labels associated with the target domain features present in the unlabeled target domain samples are referred to as pseudo labels (Ŷt).
The graph signal (Y: V→R) is a function that takes a real value at each vertex of the joint graph G. The variation of the signal over the underlying graph structure is defined by the Graph Total Variation (GTV), which is mathematically expressed as in equation 4:

GTV(Y) = Σ_{i,j} W_ij (Y(i) − Y(j))²   (4)

where Y(i) and Y(j) denote the labels at the ith and jth vertices of the joint graph G, respectively.
Most applications in GSP involve minimizing the GTV loss to ascertain that the graph signal is in agreement with the underlying graph structure, thereby ensuring a smooth variation of the signal over the graph. This term has been popularly used for label propagation, where the data form the graph structure and the labels are considered as the graph signal. If the data residing at two vertices are similar, minimizing the GTV term enables the labels at those vertices to be similar as well.
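A sketch of this label propagation, assuming the source nodes occupy the first ns vertices of the joint graph (the concatenation order used in the previous sketch); the least-squares solve stands in for a matrix inverse for numerical robustness:

```python
# Sketch of label propagation by minimizing the GTV loss: with the source
# labels fixed, the target block of the partitioned Laplacian yields a
# closed-form estimate of the probabilistic target labels.
import numpy as np

def propagate_labels(Ln, Ys):
    ns = Ys.shape[0]
    Ltt = Ln[ns:, ns:]                  # target-target Laplacian block
    Lts = Ln[ns:, :ns]                  # target-source Laplacian block
    # Minimizing tr(Y^T Ln Y) over the target block gives
    # Yt = -Ltt^{-1} Lts Ys; lstsq guards against a near-singular Ltt.
    Yt = -np.linalg.lstsq(Ltt, Lts @ Ys, rcond=None)[0]
    return Yt                           # nt x C probabilistic labels

Yt_prob = propagate_labels(Ln, Ys)
```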
At step 304e, a joint learning is iteratively performed using the initialized parameters at step 304a (i.e., the source projection matrix Ps and the target projection matrix Pt) and the set of parameters from step 304b through step 304d in a first iteration and learnt parameters obtained thereafter until a convergence criterion is met. The joint learning is explained in detail through steps 304e1 through 304e4.
At step 304e1, each of the source projection matrix Ps and the target projection matrix Pt is learned using (i) the labeled source domain S data and the unlabeled target domain T data, respectively, and (ii) the probabilistic target domain label (obtained at step 304d) associated with each target domain feature present in each unlabeled target domain sample. The source projection matrix Ps and the target projection matrix Pt are learned by minimizing the weighted class-wise maximum mean discrepancy (CMMD) loss using equation 2. Note that the weighted CMMD loss is employed for learning the source projection matrix Ps and the target projection matrix Pt in the second and subsequent iterations, instead of the PCA technique used in the initial or first iteration.
Taking P = [Ps Pt], the projections Ps and Pt are learned by minimizing a loss function mathematically expressed as in equation 5, in which Hs = I_ns − (1/ns)1_ns1_ns^T and Ht = I_nt − (1/nt)1_nt1_nt^T denote the centering matrices for the source and target domains, respectively, where I_n denotes the n×n identity matrix and 1_n denotes the all-ones column vector of length n.
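Since equation 5 itself is not reproduced here, the following sketch simply minimizes the weighted CMMD loss of equation 2 numerically over both projection matrices; the actual formulation may include additional regularization or constraints involving the centering matrices. It reuses cmmd_loss from the earlier sketch:

```python
# Sketch of the projection-learning step: a generic numerical minimization
# of the weighted CMMD loss over Ps and Pt, as a stand-in for equation 5.
import numpy as np
from scipy.optimize import minimize

def learn_projections(Xs, Ys, Xt, Yt_prob, Ps0, Pt0):
    m, k = Ps0.shape
    def objective(theta):
        Ps = theta[: m * k].reshape(m, k)
        Pt = theta[m * k :].reshape(m, k)
        return cmmd_loss(Ps, Pt, Xs, Ys, Xt, Yt_prob)  # earlier sketch
    theta0 = np.concatenate([Ps0.ravel(), Pt0.ravel()])
    res = minimize(objective, theta0, method="L-BFGS-B")
    return res.x[: m * k].reshape(m, k), res.x[m * k :].reshape(m, k)
```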
At step 304e2, the source projected data Xsp and the target projected data Xtp, are determined from (i) the labeled source domain S data and the source projection matrix Ps obtained at step 304e1, and (ii) the unlabeled target domain T data and the target projection matrix Pt obtained at step 304e1, respectively, as described at step 304b. Then at step 304e3, the source projected data Xsp and the target projected data Xtp, determined at step 304e2, are augmented, to construct the joint graph G, using the Gaussian kernel as described at step 304c using equation 3.
At step 304e4, the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, are learned through label propagation over the joint graph G obtained at step 304e3, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss as described at step 304d, using equation 4.
The pseudo labels Ŷt are learned by minimizing the GTV loss over the joint graph G, mathematically expressed as in equation 6:

min over Ŷt of tr(Y^T Ln Y), with Y = [Ys; Ŷt]   (6)

where Ln is the normalized graph Laplacian for the joint graph G, obtained using the augmented projected data Xp from the two domains via equation 3. Partitioning Ln into the blocks Lss, Lst, Lts, and Ltt corresponding to the source and target vertices, and since Ln is symmetric, Lst = Lts^T; solving equation 6 results in the following closed-form update: Ŷt = −Ltt^(−1) Lts Ys.
The joint learning is iteratively performed through steps 304e1 to 304e4, until the convergence criterion is met. In an embodiment, the convergence criterion is met when the GTV loss is less than an empirically determined threshold value. Once the convergence criterion is met, the joint learning results in obtaining (i) the learnt source projection matrix, (ii) the learnt target projection matrix, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.
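Tying the pieces together, a sketch of the overall optimization loop (steps 304a through 304e, with step 306 at the end), chaining the helper functions from the preceding sketches; the threshold tau and the iteration cap are assumed values:

```python
# Sketch of the overall GA-UDA optimization, reusing pca_projection,
# joint_graph_laplacian, propagate_labels, and learn_projections from
# the preceding sketches; tau is an assumed convergence threshold.
import numpy as np

def gtv_loss(Ln, Ys, Yt):
    """GTV of the stacked label signal via the Laplacian quadratic form."""
    Y = np.concatenate([Ys, Yt], axis=0)
    return float(np.trace(Y.T @ Ln @ Y))

def ga_uda(Xs, Ys, Xt, k=3, sigma=1.0, tau=1e-4, max_iter=50):
    Ps, Pt = pca_projection(Xs, k), pca_projection(Xt, k)       # step 304a
    Xsp, Xtp = Ps.T @ Xs, Pt.T @ Xt                             # step 304b
    Ln = joint_graph_laplacian(Xsp, Xtp, sigma)                 # step 304c
    Yt = propagate_labels(Ln, Ys)                               # step 304d
    for _ in range(max_iter):                                   # step 304e
        Ps, Pt = learn_projections(Xs, Ys, Xt, Yt, Ps, Pt)      # CMMD step
        Xsp, Xtp = Ps.T @ Xs, Pt.T @ Xt
        Ln = joint_graph_laplacian(Xsp, Xtp, sigma)
        Yt = propagate_labels(Ln, Ys)
        if gtv_loss(Ln, Ys, Yt) < tau:                          # convergence
            break
    return Ps, Pt, Yt

Ps, Pt, Yt_prob = ga_uda(Xs, Ys, Xt)
labels = np.argmax(Yt_prob, axis=1)    # step 306: hard target labels
```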
At step 306 of the method 300, the one or more hardware processors 104 of the system 100 are configured to determine a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample. More specifically, for each unlabeled target domain sample, the class label associated with the highest probability in the target predictions (Ŷt) is assigned as the target domain label.
Hence, the methods and systems of the present disclosure, through the GA-UDA technique, iteratively update the joint graph using the transformed features of both the source and target domains obtained through the optimization formulation. The joint graph so obtained is always connected (rather than splitting into disjoint sub-graphs), and therefore the estimation of labels for the target domain data remains effective and accurate. The GA-UDA technique of the present disclosure thus helps in addressing a significant distribution shift between the two domains efficiently.
The bearing fault datasets and the conventional techniques in the art (referred to as benchmark methods) are used to analyze the performance of the methods and systems of the present disclosure.
CWRU Dataset: This bearing dataset is collected by Case Western Reserve University (CWRU). It contains vibration data captured from the drive end and fan end of the machine at a sampling frequency of 12 kHz. It has data for four different loading conditions (0, 1, 2, and 3 horsepower (Hp)) with rotating speeds of 1797, 1772, 1750, and 1730 rpm, respectively. The data has four classes or fault types (the labels or predefined labels): Normal, Inner-race Fault (IF), Outer-race Fault (OF), and Ball Fault (BF). Here, faults of different sizes (0.007, 0.014, and 0.021 inches) are induced using electro-discharge machining (EDM).
Paderborn Dataset: This bearing dataset is collected from Paderborn University. It contains vibration and stator current signals collected from a test rig consisting of a drive motor, a torque measurement shaft, the test modules, and a load motor. Data for both real and artificially damaged bearings are available at a sampling frequency of 64 kHz for two rotating speeds (900 and 1500 rpm) and two loading torques (0.7 and 0.1 Nm). The data has three classes or fault types (the labels or predefined labels): Normal, Inner-race Fault (IF), and Outer-race Fault (OF). Only the vibration data, with faults introduced using EDM, is considered in this analysis.
Benchmark methods: The methods and systems of the present disclosure are compared against five state-of-the-art UDA methods for bearing fault diagnosis and the graph-based DA (GAKT) method for the performance evaluation. The UDA methods for bearing fault diagnosis include mapping-based methods such as Joint Maximum Mean Discrepancy (JMMD), Multi-Kernel Maximum Mean Discrepancy (MK-MMD), and CORrelation ALignment (CORAL), and adversarial learning-based methods such as Domain Adversarial Neural Network (DANN) and Conditional Domain Adversarial Network (CDAN). These methods have been successfully used for adaptation between different working conditions of the same machine. In this analysis, they are evaluated for the difficult scenario of adaptation between physically different but related machines for bearing fault diagnosis. These methods are implemented considering the same deep CNN backbone and bottleneck architecture.
The evaluation considers a challenging adaptation scenario where the source and target data belong to physically different but related machines. Here, adaptation is considered between the CWRU and Paderborn datasets for bearing fault detection and classification. Note that the bearing specifications, sampling frequency, and working conditions are different for the two datasets, making it a challenging adaptation scenario. CWRU data with 0 Hp motor torque and 0.007 inch fault size, collected from the drive end, and Paderborn data with 900 rpm and 0.7 Nm loading torque have been utilized for the present experimentation. The Paderborn dataset is downsampled to 12 kHz to match the sampling frequency of the CWRU dataset. The raw data is pre-processed by taking a sliding window of length 1024, which results in 351 samples for each dataset. Five relevant time-domain features, namely a root mean square (RMS), a variance, a data peak, a kurtosis, and a peak-to-peak value, are extracted from the raw data. They are well-studied features for bearing fault diagnosis that carry class-discriminative information. For a fair comparison, these features are fed as input to all the methods, and a three-class classification problem is considered: Normal, IF, and OF.
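A sketch of this pre-processing for a single raw vibration signal; the non-overlapping window stride is an assumption, as the text does not state it:

```python
# Sketch of the described pre-processing: length-1024 sliding windows over
# a raw vibration signal and the five time-domain features named above.
import numpy as np
from scipy.stats import kurtosis

def extract_features(signal, win=1024, stride=1024):
    feats = []
    for start in range(0, len(signal) - win + 1, stride):
        w = signal[start : start + win]
        feats.append([
            np.sqrt(np.mean(w ** 2)),   # root mean square (RMS)
            np.var(w),                  # variance
            np.max(np.abs(w)),          # data peak
            kurtosis(w),                # kurtosis
            np.ptp(w),                  # peak-to-peak
        ])
    return np.asarray(feats)            # one 5-dim feature vector per window
```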
The performance of all the methods is assessed using Accuracy (Acc), Precision (P), Recall (R), and F1 score (F1). To simulate a data-limited scenario, experimentation was carried out considering a 50% train-test split. The average results obtained using five randomly generated train-test sets are summarized in the tables, with the best-performing method highlighted in bold. Tables 1 and 2 provide the classification results for CWRU→Paderborn and Paderborn→CWRU, respectively, where S→T denotes adaptation from the source to the target domain. Note that the optimal values of the hyperparameters σ and α for the present disclosure are obtained using grid search and are mentioned in the tables.
For the case of CWRU→Paderborn, Table 1 shows that the mapping-based and adversarial learning-based DA methods do not perform well in limited data scenarios. Even with domain-specific features as input, they fail to learn discriminative representations from the data. On the other hand, for the Paderborn→CWRU case, the performance of these methods is comparatively better. However, for both cases, the methods and systems of the disclosure perform better than all the other benchmark methods. From both Tables 1 and 2, it is observed that the present disclosure significantly improves over the GAKT method, with ≈19% and ≈27% increases in accuracy, respectively. Note that the distribution shift between the two domains is significant for the adaptation scenario considered here, where the source and target data are from different machines. In the GAKT method, this results in two disjoint sub-graphs associated with the source and target data, respectively; hence, label propagation over the joint graph is not effective. Unlike the static graph in GAKT, the graph in the present disclosure is updated iteratively, using the transformed features learned through the optimization formulation, until convergence is met. This allows the present disclosure to effectively handle the distribution shift between the two domains, thereby providing more reliable adaptation results.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of the present disclosure herein address the unresolved problem of Unsupervised Domain Adaptation (UDA) through the Graph Assisted Unsupervised Domain Adaptation (GA-UDA) technique for machine fault diagnosis. Experimental results also show that the methods and systems of the present disclosure provide superior performance compared to the benchmark methods for the challenging data-limited scenario of adaptation between different but related machines. The experimental results demonstrate that the method of the present disclosure performs the domain adaptation effectively and accurately.
The application of machine fault diagnosis is considered to explain the methods and systems of the present disclosure in detail through the Graph Assisted Unsupervised Domain Adaptation (GA-UDA). However, the scope of the present disclosure (GA-UDA) is generic and can be applied to other application domains, such as computer vision, including object detection and face recognition.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
202321044922 | Jul 2023 | IN | national |