1. Field of the Invention
The present invention relates to a method for predictive maintenance of a machine.
2. Background Art
Although there are a variety of systems and methods for monitoring and maintaining machinery and equipment, each has one or more inherent limitations which limit its usefulness. For example, many condition-monitoring algorithms operate by continuously comparing newly extracted features—i.e., machine conditions—to their corresponding baseline values. These baseline characteristics are essentially the statistical means of the features collected during the setup phase. The diagnostic capabilities of conventional predictive maintenance systems are based on applying different types of thresholds, templates, and rules to quantify the relationship between the current feature values and their baseline counterparts.
One limitation of this type of system is that during the process of monitoring the machine features, the thresholds remain unchanged unless an expert intervenes to force their recalculation. This type of human intervention usually results from the observation of frequent false alarms caused by a process mean shift. Therefore, it would be desirable to have a method for predictive maintenance of a machine which utilizes unsupervised learning techniques, and which can identify a significant change in the pattern of monitored machine features.
Another limitation of conventional machine monitoring methods and condition-based maintenance (CBM) technologies is that their application is limited to a particular machine. Over time, there may be extensive characterization of the physical and mechanistic principles that guide the equipment behavior and evolution. While this may lead to accurate information about a particular machine, such technologies are extremely limiting when it comes to widespread deployment for a wide variety of equipment. Therefore, it would be desirable to have a method for predictive maintenance of a machine which provides a “generic” framework that is relatively independent of the type of physical equipment under consideration.
At least one embodiment of the present invention provides a Predictive Maintenance (PdM) Agent which utilizes a generic prognostic novelty detection based learning algorithm that is not dependent on the specific measured parameters determining the health of the particular machine. The PdM Agent utilizes a combination of measured machine characteristics in the time and frequency domain, process parameters, energy levels, and other parameters that can describe the state of a piece of equipment. This set of measured variables, called the feature set, is standardized and mapped in real time to the space of principal components.
The first two principal components of the feature vector are monitored and visualized in the two-dimensional (2-D) space. A real time, unsupervised clustering algorithm is applied to identify stable patterns that constitute different operating modes of the equipment in order to minimize potential false positive alarms due to significant changes of the feature pattern. Two alternative strategies are used to recognize both abrupt and gradually developing (incipient) changes. The algorithm also predicts whether incipient changes may lead to significantly different patterns, and subsequently to fault conditions.
Before generating a final decision for a potential fault warning, a comparison is made between the prediction in the multiple feature space and the statistical performance of those features that are expected to have a strong influence on the machine performance. The final determination of fault prediction may utilize visual tools to simplify the validation of the predictions made by the diagnostic/prognostic algorithm. The algorithm may be implemented as a set of recursive real time procedures that do not require storing a large amount of data. The algorithm can be realized as a stand-alone event-driven software agent that is interfaced to a server storing raw machine data.
The input to the PdM Agent is a feature vector that characterizes the status of the equipment being monitored. The features may include time, frequency or energy characteristics, process parameters or other measured attributes. Some features which may be used by the PdM Agent for machine health monitoring of rotating equipment include time domain features such as time domain data statistics and auto regressive (AR) model parameters. Time domain features can be calculated directly from raw vibration signals picked up by one or more sensors attached to the machine being monitored. Time domain data statistics include such things as root mean square (RMS), crest factor, variance, skewness, and kurtosis. Auto regressive model parameters may be estimated using the Burg method, which fits a predefined order (p) of an AR model to the input signals by minimizing the forward and backward prediction errors, while constraining the AR parameters to satisfy the Levinson-Durbin recursion.
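As an illustration of the time domain statistics listed above, the following sketch computes them from a raw vibration signal. The function name and the numpy-based implementation are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def time_domain_features(x):
    """Compute the time domain statistics named in the text (RMS, crest
    factor, variance, skewness, kurtosis) from a raw signal x. Names and
    conventions here are illustrative, not the PdM Agent's own."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    crest = np.max(np.abs(x)) / rms          # peak level relative to RMS
    var = np.var(x)
    mu, sigma = np.mean(x), np.std(x)
    skew = np.mean((x - mu) ** 3) / sigma ** 3
    kurt = np.mean((x - mu) ** 4) / sigma ** 4
    return {"rms": rms, "crest": crest, "variance": var,
            "skewness": skew, "kurtosis": kurt}

# Example: a pure sine wave sampled over whole periods has crest factor
# sqrt(2), RMS 1/sqrt(2), and zero skewness.
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
feats = time_domain_features(np.sin(2 * np.pi * 50 * t))
```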
Other types of features which may be used with the PdM Agent include frequency domain features, which may use a transform such as a Fast Fourier Transform (FFT) to transform time-based vibration signals into a frequency domain. The PdM Agent may also use energy features, where energy bands are calculated from derived frequency blocks. A velocity amplitude spectrum is another feature which may be utilized by the PdM Agent. Utilizing the data obtained after applying an FFT, a velocity amplitude spectrum can be estimated. Of course, energy, frequency and velocity spectrum features can be obtained directly from frequency signatures without performing an FFT from time waveform signals. Moreover, the PdM Agent is not limited to receiving vibration data, but rather, it could receive data from temperature sensors, velocity sensors, or other instruments, which monitor the machine characteristics.
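The energy-band idea above can be sketched as follows: transform the time signal to the frequency domain and sum the spectral energy falling inside each band. The function name, band layout, and test signal are illustrative assumptions:

```python
import numpy as np

def energy_band_features(x, fs, bands):
    """Sketch of FFT-based energy features: sum the power spectrum inside
    each (f_lo, f_hi) frequency band. Names are illustrative, not the
    patent's own."""
    x = np.asarray(x, dtype=float)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(spectrum) ** 2
    return [power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# A 60 Hz sine concentrates essentially all of its energy in the 0-100 Hz band.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 60 * t)
e = energy_band_features(x, fs, [(0, 100), (100, 200)])
```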
The prediction model of the PdM Agent includes two hierarchical levels of evolving clusters that are dynamically populated and updated as new features are gathered. Operating Mode (OM) clusters represent different prototypical modes of operation of monitored machines. Operating Conditions (OC) clusters cover alternative operating conditions within individual operating modes. The OM clusters are associated with significantly different, but repetitive, machine signatures—e.g., start-up, normal or idle operating modes. Although the machine may switch from one mode to another mode, it is anticipated that the machine will remain at least for a short time within one of these modes. During this time the expectation is that similar patterns will be seen, which might be slightly different, but remain within the same envelope of operating conditions.
When the PdM Agent is being set up, it is not expected that all possible OM clusters will be seen. Rather, it is expected that the OM clusters will evolve over time in order to identify new operating modes that have not been initially observed. The evolution of the OM clusters provides a process of creation of new clusters, and a process of continuous updating of the existing OM clusters. The former accounts for potential new operating modes, outliers, drastic faults, or some combination thereof. The latter represents the gradual changes in machine characteristics. Drastic faults are viewed as potentially new operating modes because they exemplify dramatically new patterns that have not been previously observed. Two differences between a drastic fault and an actual operating mode are: 1) that a drastic fault is unstable, and 2) that a drastic fault includes fewer feature vectors than the normal operating modes. Therefore, the number of feature vectors in an OM cluster, and the extensive creation of new OM clusters, can be used to diagnose a drastic fault as opposed to a new operating mode.
The OC clusters are single clusters or groups of clusters that are included within the OM clusters. They exemplify changing operating conditions within an operating mode. The root cause for the evolution of the OC clusters can be changes of machine parameters, or gradual wear-off conditions. New operating conditions can be created over time because they may not necessarily be completely identified during the setup. Their evolution is driven by gradual modification of the cluster parameters or by creation of new clusters. The trend of changing the OC clusters is used to predict a potential incipient fault.
Another aspect of using the OM clusters is that their relative stability and repetitive feature patterns allow them to be used to define local mappings between the K-dimensional (K-D) feature space and the two-dimensional space of the first two principal components (PC's). Use of the K-D to 2-D transformation reduces dimensionality of the feature space, decreases the amount of insignificant information, and allows for visualization of the decision making process. The covariance matrices associated with each of the OM clusters are used to update the mappings transforming the features in the OM clusters to their 2-D images in the co-domain space of the first two PC's. Therefore, each of the OM clusters in the feature space has a 2-D counterpart that includes multiple evolving 2-D OC clusters.
Another embodiment of the present invention provides a diagnostics and prognostics framework (DPF) that is relatively independent of the type of physical equipment under consideration. Much of the modeling and estimation procedures employed by the DPF are based on unsupervised learning methods, which reduce the need for human intervention. The procedures are also designed to temporally evolve parameters with monitoring experience for enhanced diagnostic/prognostic accuracy. The framework also makes a provision for incorporating end-user feedback for enhancing the diagnostic/prognostic accuracy.
The DPF employs a procedure to combine principal component analysis (PCA) based dimensionality reduction with an unsupervised clustering technique. Initially, a single principal component transformation matrix (called “raw basis”) is constructed from signal/feature data. As discussed above with regard to the PdM Agent, such signal/feature data may be gathered from one or more sensors monitoring the operating conditions of a particular machine. The DPF then uses a kernel density based unsupervised clustering technique to cluster the data in the space of the two most dominant PC's, to identify different equipment “modes of operation”. Data points from individual clusters or modes are then identified using sets of indices. A PC transformation matrix is then recomputed for each individual cluster or mode using the corresponding index set. This leads to a different mode basis for a distinct operating mode/cluster. The diagnostics engine employs these bases for raising pertinent alarms during future monitoring.
Given that equipment behavior evolves because of such processes as wear-in, maintenance, and wear-out, the DPF is configured to effectively track this non-stationary behavior. The DPF employs a cluster tracking procedure using an optimal exponential waiting scheme. In particular, it employs the following two strategies to enhance the performance of the diagnostics engine. First, the on-line determination of an optimal exponential discounting factor ensures that the cluster tracking is effective in matching the rate of evolution of the equipment operating mode behavior. Second, the DPF includes a provision to allow differing exponential discounting factors for different clusters to further enhance the performance of the diagnostics engine. The discounting factor is optimized based on an objective function that employs a generalized statistical distance (also called Mahalanobis distance) cost function in the dominant PC space.
The DPF may be viewed as being composed of four different processes. The first is automated dimensionality reduction, discussed above. The second is multivariate and univariate characterization of equipment evolutionary behavior. Multivariate adaptive clustering methods attempt to distinctly characterize naturally inherent different operating modes and behaviors. Conversely, the univariate signal/feature enveloping technique attempts to represent equipment evolutionary behavior by separately modeling each promising signal/feature. The third process is detection of anomalous behavior through the use of a diagnostics engine, and the fourth process includes a prognostics engine that estimates remaining useful life.
The present invention also provides a method for predictive maintenance of a machine, which includes collecting data related to operation of the machine. At least some of the data is transformed into feature vectors in a first feature space. At least some of the feature vectors are standardized, thereby creating standardized feature vectors in a standardized feature space. At least some of the standardized feature vectors are transformed into two-dimensional feature vectors in a two-dimensional feature space. At least some of the two-dimensional feature vectors are clustered, based on similarity between the two-dimensional feature vectors. This forms at least one two-dimensional vector cluster. Additional data related to operation of the machine is then collected. At least some of the additional data is transformed into additional feature vectors in the first feature space. At least some of the additional feature vectors are standardized, thereby creating additional standardized feature vectors in the standardized feature space. At least some of the additional standardized feature vectors are transformed into additional two-dimensional feature vectors in the two-dimensional feature space. At least some of the additional two-dimensional feature vectors are analyzed relative to the at least one two-dimensional vector cluster to provide predictive maintenance information for the machine.
The invention also provides a method for predictive maintenance of a machine, which includes collecting feature data for the machine while the machine is operating. The feature data includes a plurality of feature vectors. At least some of the feature vectors are standardized to facilitate compatibility between the standardized feature vectors. At least some of the standardized feature vectors are transformed into corresponding two-dimensional feature vectors. At least some of the two-dimensional feature vectors are clustered, based on operating modes of the machine, thereby forming a plurality of two-dimensional operating mode clusters. Additional feature data is collected while the machine is operating. The additional feature data includes a plurality of additional feature vectors. At least some of the additional feature vectors are standardized, and at least some of the additional standardized feature vectors are transformed into corresponding additional two-dimensional feature vectors. An algorithm is applied to at least some of the additional two-dimensional feature vectors to facilitate a comparison between the operation of the machine when the feature data was collected and operation of the machine when the additional feature data was collected. This provides predictive maintenance information for the machine.
The invention further provides a method for predictive maintenance of a machine, which includes collecting feature data for the machine, defining feature vectors from the feature data, standardizing the feature vectors, and clustering the standardized feature vectors based on operating modes of the machine. The standardized feature vectors are transformed into corresponding two-dimensional feature vectors, which are then clustered at least based on the operating modes of the machine. The method also includes recursively analyzing new feature data relative to at least some of the clusters, thereby providing predictive maintenance information for the machine.
Both the initialization and monitoring phases are preceded by a feature extraction phase wherein a set of features is extracted from the time domain sensor signal. For example, a machine such as a fan, compressor, pump, etc. may have attached to it one or more sensors configured to monitor vibrations as the machine operates. To monitor the vibrations, one or more accelerometers or other vibration sensing devices could be used. It is worth noting that although the exemplary illustrations contained herein use vibrations to determine machine features, other types of machine data could be used. For example, a current sensor may be used to measure changes in the amount of current the machine draws during various operations. Similarly, a thermocouple, or other type of temperature sensor, could be used to detect changes in temperature of some portion of the machine. The machine speed or torque could also be sensed to provide data relating to the operation of the machine.
Depending on the type of sensor or sensors employed, the raw signal itself may be able to be used as a feature, and therefore, would need no feature extraction process. Alternatively, the raw signal may be used in a feature extraction scheme to put the data in the appropriate form. For example, as described above, the features extracted from vibration data for a rotating machine may include time domain features, frequency domain features, energy features, or a velocity amplitude spectrum. Transformation of raw data into a feature vector could include the application of a statistical equation, such as determining the root mean square (RMS) of the raw data, or applying a Fast Fourier Transform (FFT) to the data. The configuration of the feature set is done when the PdM Agent is configured. The result of the feature extraction phase is a K-dimensional feature vector.
During the initialization phase, the PdM Agent collects new data until a predefined number of feature vectors, N, for agent initialization is reached. The lower bound for (N) is estimated from the minimal number of independent parameters of the feature covariance matrix—i.e., the minimal number of steps is:
Nmin=K(K−1)/2.
where K is the dimension of the feature vector. When at least Nmin data readings are accumulated, the following tasks are conducted in sequence as shown in the flow chart 10 in
The PdM Agent may reside in one or more controllers which are part of a larger information system used to gather and process information about equipment and processes in a manufacturing, or other, facility. In the embodiment shown in
At step 18, feature normalization and dimension reduction occur, as shown in
The raw feature vectors Y(R)(k) are standardized as follows:
Y*=(1/N)ΣY(R)(k), k=[1, N] (1)
σi2=(1/(N−1))Σ(Y(R)i(k)−Yi*)2 (2)
Y(k)=(Y(R)(k)−Y*)/σ (3)
where Y* and σ are the vectors of the means and the standard deviations of the individual features, Y(k) is the standardized feature vector, and the division in expression (3) is performed element-wise.
In order to reduce the dimension of the feature vectors, a Principal Component Transformation is applied to extract the first two principal components of the standardized feature vectors. This dimensional reduction also facilitates classification of the feature vectors in clusters corresponding to the main operating modes observed during the initialization phase. Performing a Singular Value Decomposition (SVD) on the covariance matrix (S) yields:
S=T*Λ*Tᵀ (4)
The first two columns T(12) of the transformation matrix T define a mapping from the standardized feature vector space to the 2-D PC space:
y(k)=Y(k)*T(12) (5)
where y(k) is the standardized feature vector in the 2-D PC space.
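The standardization and the K-D to 2-D principal component mapping of expressions (4) and (5) can be sketched as follows; the data are synthetic and the numpy-based implementation is an illustrative assumption:

```python
import numpy as np

# Sketch of feature standardization followed by the principal component
# mapping of expressions (4) and (5). All names are illustrative.
rng = np.random.default_rng(0)
K, N = 5, 200
Y_raw = rng.normal(size=(N, K)) @ rng.normal(size=(K, K))  # correlated features

Y_star = Y_raw.mean(axis=0)            # vector of feature means
sigma = Y_raw.std(axis=0, ddof=1)      # vector of standard deviations
Y = (Y_raw - Y_star) / sigma           # standardized feature vectors Y(k)

S = np.cov(Y, rowvar=False)            # K x K covariance matrix
T, lam, _ = np.linalg.svd(S)           # S = T * diag(lam) * T', expression (4)
T12 = T[:, :2]                         # first two principal directions
y = Y @ T12                            # 2-D images y(k), expression (5)
```

Because the covariance matrix is symmetric and positive definite, the SVD coincides with its eigendecomposition, and the first column of T captures the direction of greatest variance.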
At step 20, clustering occurs based on some similarity among the feature vectors, for example, based on operating modes of the machine. In such a case, Operating Mode (OM) clusters are identified, and OM cluster indices are generated. The standardized feature vectors may be clustered according to any criteria effective to facilitate diagnostics and/or prognostics on the machine being observed. Clustering according to an operating mode—such as startup and normal or idle operating modes—as illustrated in
The process illustrated in step 20 identifies the unknown number of existing OM clusters and initializes each of the identified OM cluster parameters. To accomplish this, any of a number of algorithms could be applied. For example, a Greedy Expectation Maximization clustering algorithm can be applied to identify the number of clusters corresponding to different operating modes. The Greedy Expectation Maximization algorithm may be based on incremental mixture density estimation, and can use a combination of a global and a local search each time a randomized new component is inserted into the mixture to obtain optimal mixtures.
Another algorithm that can be used is the Mountain Clustering algorithm. The Mountain Clustering algorithm is applied on the (N) transformed 2-D feature data points y(k), k=[1, N], that are obtained in step 18. The result of the clustering algorithm is the number of OM clusters, m, the mean and covariance matrix y(j)* and s(j), and the vector of membership n(j), j=[1, m], of the 2-D feature vectors y(k) with respect to each of the OM clusters. The membership vectors n(j) are used to initialize the prototype OM clusters in the standardized feature space. Thus, for the OM clusters in the standardized feature space, the feature vectors Y(j) belonging to each operating mode, and the mean and covariance matrix Y(j)*, S(j), are obtained.
Application of this algorithm facilitates a refining of the standardization by applying expressions (1)-(3) on each of the operating modes. One reason for performing the clustering in the PC space, {y(k), k=[1, N]}, rather than in the domain space, {Y(k), k=[1, N]}, is to visualize the result and to check the meaningfulness of the identified clusters. The Mountain Clustering algorithm is applied only during the initialization phase. In the following monitoring phase, allowance is made for the OM clusters to evolve and grow in number, reflecting potentially new operating modes. That is, with every new feature vector, the number of the OM clusters (m) and their means and covariance matrices Y(j)*, S(j), j=[1, m], are updated.
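A minimal sketch of this style of clustering (build a density, or "mountain", function over the 2-D data, take its highest peak as a cluster center, subtract that peak's influence, and repeat) might look as follows. All parameter names and values are illustrative assumptions, not the algorithm as used by the PdM Agent:

```python
import numpy as np

def mountain_clustering(points, grid_res=20, sigma=0.5, beta=0.5, n_clusters=2):
    """Sketch of mountain-style clustering on 2-D points: accumulate a
    kernel-density 'mountain' on a grid, repeatedly take the highest peak
    as a cluster center, and destruct (subtract) its influence."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], grid_res),
                         np.linspace(lo[1], hi[1], grid_res))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    # mountain height at each grid node: sum of Gaussian kernels over the data
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    m = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)
    centers = []
    for _ in range(n_clusters):
        peak = np.argmax(m)
        centers.append(grid[peak])
        # destruct: subtract the chosen peak's influence from the mountain
        d2c = ((grid - grid[peak]) ** 2).sum(-1)
        m = m - m[peak] * np.exp(-d2c / (2 * beta ** 2))
    return np.array(centers)

# Two well-separated synthetic operating modes are recovered as two centers.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([-3, 0], 0.3, size=(100, 2)),
                 rng.normal([3, 0], 0.3, size=(100, 2))])
centers = mountain_clustering(pts)
```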
The transformation matrix T(j)(12) of a particular operating mode with its unique basis defined by the first two PC components are then derived from the OM covariance matrices S(j) according to expressions (4) and (5). The 2-D images y(j) of the feature vectors of Y(j) are obtained through a PC transformation with T(j)(12) and expression (5). This is the principal component analysis (PCA) transformation shown in step 20. The mean and inverse covariance matrix, y(j)* and s(j)−1 belonging to each of the 2-D OM clusters, can be obtained directly from y(j).
At step 22, Operating Condition (OC) clusters are formed, and the mean and covariance matrix are determined for each OC cluster. The Operating Conditions used to form the additional clusters may be based on alternative operating conditions within an individual Operating Mode. Thus, if during the startup mode, there are different operating conditions which are detected through the data collection, then data related to a particular operating condition can be clustered into an OC cluster.
It is assumed that each operating mode starts with one operating condition which is characterized by its mean and inverse covariance matrix, y(j).OC(1)* and s(j).OC(1)−1, that are initially identical to the respective OM cluster parameters, i.e.:
y(j).OC(1)*=y(j)* (6)
s(j).OC(1)−1=s(j)−1
After all identified OM clusters are initialized, the following parameters are updated upon the arrival of new readings:
The steps described in
As noted above, the initialization phase is optional, but may be beneficial to provide initial clusters to which new data—as collected in step 14′—can be compared. At step 52 the feature set extracted in step 14′ is normalized and its dimension reduced. This is performed with respect to (w.r.t.) all OM clusters formed in the initialization phase. If the initialization phase is omitted, step 52 in
At step 54, a number of calculations and/or determinations are made. In this step the similarity between the new feature vector and each of the existing operating modes is evaluated. This is done by checking the Mahalanobis distances of the new feature vector image to the (m) cluster centers of the 2-D OM clusters:
zj=y(j)(k+1)−y(j)*;
Dj(k+1)=zjᵀ*s(j)−1*zj, j=[1,m] (7)
Assume, for example, that iM is the closest OM cluster, i.e.:
iM=arg minj(Dj(k+1)) (8)
The significance of the similarity between the image vector y(iM)(k+1) and the closest OM cluster is evaluated by checking whether the distance D(k+1)=DiM(k+1) satisfies the condition:
D(k+1)<11.829 (9)
Condition (9) is derived from Hotelling's T2 statistic:
TiM2(k+1)=ziMᵀ*s(iM)−1*ziM (10)
where
ziM=y(iM)(k+1)−y(iM)*
and the statistic TiM2(k+1) is compared to the threshold χ2p,α.
χ2p,α is the (1−α)th quantile of the Chi-squared distribution with p degrees of freedom, and α is the probability of a false alarm, e.g. χ22,0.0027=11.8290, χ23,0.0027=14.1563, while χ21,0.0027=9 corresponds to the well-known +/−3σ limit rule for the case of a single output.
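The membership test of expressions (7)-(9) can be sketched as follows: the squared Mahalanobis distance of a new 2-D image to a cluster center is compared against the chi-squared threshold 11.829 (two degrees of freedom, α=0.0027). The cluster parameters and test points below are invented illustrative values:

```python
import numpy as np

# Sketch of the Mahalanobis-distance membership test of expressions (7)-(9).
CHI2_2_00027 = 11.829  # chi-squared threshold, 2 dof, alpha = 0.0027

def mahalanobis_sq(y_new, center, cov_inv):
    z = y_new - center
    return float(z @ cov_inv @ z)

center = np.array([0.0, 0.0])
cov_inv = np.linalg.inv(np.array([[1.0, 0.2], [0.2, 1.0]]))

# A nearby reading satisfies condition (9); a distant one does not.
inside = mahalanobis_sq(np.array([0.5, 0.5]), center, cov_inv) < CHI2_2_00027
outlier = mahalanobis_sq(np.array([8.0, -8.0]), center, cov_inv) < CHI2_2_00027
```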
There are two potential outcomes of the condition described by equation (9), which will then lead to either step 56 or step 58. If equation (9) is satisfied, then the new reading Y(R)(k+1) is assumed to follow the distribution that is associated with the iM'th OM cluster with very high probability. In this case, the process advances to step 56. If equation (9) is not satisfied, then it is assumed that the closest existing operating mode does not share significant similarities with the current reading. Therefore, the algorithm advances to step 58, where the current reading is temporarily marked as an outlier, and the possibility to create a new operating mode cluster is considered.
If the algorithm advances to step 56, the appropriate OM cluster is updated. At step 56, the iM'th OM cluster is identified as the current operating mode based on the similarity between the iM'th OM cluster and the feature vector y(R)(k+1). Therefore, the parameters of the iM'th OM cluster are updated recursively using the standardized feature vector Y(R)(k+1).
The updated parameters are the mean Y(iM)* and covariance matrix S(iM) of the cluster in the standardized feature space, together with the 2-D cluster center y(iM)*.
The last parameter to be updated is the inverse covariance matrix in the 2-D space, s(iM)−1.
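The recursive updating described above can be sketched with the standard online (Welford-style) mean and covariance update; the patent's own recursive expressions are not reproduced in the text, so the following is a hedged approximation of the idea:

```python
import numpy as np

def update_cluster(mean, cov, n, y_new):
    """Fold a new feature vector into a cluster's running mean and sample
    covariance without storing past data. This is the standard online
    update, shown as a sketch of the recursion described in the text."""
    n_new = n + 1
    delta = y_new - mean
    mean_new = mean + delta / n_new
    # rank-one update of the sample covariance (ddof = 1)
    cov_new = ((n - 1) * cov + np.outer(delta, y_new - mean_new)) / (n_new - 1)
    return mean_new, cov_new, n_new

# Feeding points one at a time reproduces the batch statistics exactly.
rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 2))
mean, cov, n = pts[:2].mean(axis=0), np.cov(pts[:2], rowvar=False), 2
for y in pts[2:]:
    mean, cov, n = update_cluster(mean, cov, n, y)
```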
At step 60, the similarity between the 2-D image of the feature vector from the iM'th OM cluster, y(iM)(k+1), and the OC clusters within that operating mode is evaluated.
If equation (9) is satisfied at step 60, the algorithm advances to step 62, where an autoregressive model of the trajectory of the y(iM).OC cluster center is updated. The model has the general form:
y(iM).OC(t+1)=φ(t)ᵀ*θ(t)+ζ(t)
where θ(t)=[y(iM).OC(t), y(iM).OC(t−1), . . . ]ᵀ is the regressor of recent cluster center positions, φ(t) is the vector of model parameters, which is assumed to vary slowly, i.e.:
φ(t+1)=φ(t)
and ζ(t) represents Gaussian white noise.
The vector of model parameters φ for each OC cluster is saved inside the PdM Agent for future updates. Multiple-steps-ahead prediction for the recently updated OC cluster centers is performed to assess the probability of the particular OC cluster moving toward the boundary of its enclosing OM cluster—something which corresponds to an incipient failure. In general, the two-dimensional feature vectors are analyzed relative to the two-dimensional feature clusters to provide predictive maintenance information for the machine.
If the predicted trajectory of the y(iM).OC cluster center approaches the boundary of its enclosing OM cluster, a warning of a potential incipient fault is generated.
Returning to step 60 in
The steps described in
New feature vectors that belong to an existing OM cluster are mapped to the 2-D space as described above and shown in
Similarly,
To avoid confusion with the PdM Agent described above, the DPF will be described using slightly different notation for the vectors, means, and covariance matrices. The DPF employs a clustering method in the two-dimensional principal component space to detect and characterize potentially distinct equipment modes of operation. It can, for example, support Kernel Density Estimation based clustering, as well as Gaussian Mixture Model based clustering. Once clustering is performed, each cluster (i) is characterized using a mean vector (μi) and a covariance matrix (Σi), forming a two-tuple (μi, Σi).
Just as with the PdM Agent, the DPF takes the raw data or features and performs a dimensional reduction from a feature space to a 2-D space—see step 80. In addition, step 82 may also be performed as in the PdM Agent. In general, the diagnostics path uses three methods of analysis: (a) diagnostics based on classification (called C), which is a multivariate, multi-basis classification system, (b) diagnostics based on feature/signal enveloping (called SPC), which is a univariate enveloping system, and (c) diagnostics based on velocity threshold (called V). These three domains contribute to the overall diagnostics result. The diagnosis result is a number called ‘severity rating’, SR, computed through a voting algorithm.
Turning to the diagnostics based on classification, shown in block 84, the DPF assigns a new feature vector to existing clusters based on the smallest Generalized Statistical Distance (also called Mahalanobis Distance): D=(Xnew−μ̂j)ᵀΣ̂j−1(Xnew−μ̂j). Classification is done after dimension reduction, in the two-dimensional PC space. The DPF employs an exponentially weighted moving average method for recursive estimation of the mean and covariance matrices. For feature (j), using the new observation (xj), the recursive estimation expressions are (used for constructing a feature envelope):
μ̂j(new)=αμ̂j(old)+(1−α)xj(new) (20)
σ̂j2(new)=ασ̂j2(old)+(1−α)(xj(new)−μ̂j(old))2 (21)
For the complete feature vector (X), the recursive estimation expressions are (used for updating PC Basis):
μ̂(new)=αμ̂(old)+(1−α)X(new) (22)
Σ̂(new)=αΣ̂(old)+(1−α)(X(new)−μ̂(old))(X(new)−μ̂(old))ᵀ (23)
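A single-feature version of the exponentially weighted recursion in expressions (20) and (21) can be sketched as follows; the discounting factor value is an illustrative assumption, not the DPF's optimized value:

```python
def ewma_update(mu, sigma2, x, alpha=0.9):
    """Sketch of the exponentially weighted recursive estimates for one
    feature: old statistics are discounted by alpha, the new observation
    is weighted by (1 - alpha). alpha = 0.9 is illustrative."""
    mu_new = alpha * mu + (1 - alpha) * x
    sigma2_new = alpha * sigma2 + (1 - alpha) * (x - mu) ** 2
    return mu_new, sigma2_new

# Fed a constant signal, the mean converges to it and the variance to zero,
# so the envelope tightens around steady behavior.
mu, s2 = 0.0, 1.0
for _ in range(200):
    mu, s2 = ewma_update(mu, s2, 5.0)
```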
Like the PdM Agent, the diagnostics based on classification determines whether a given feature vector or data point lies within an existing cluster, Ci, or whether it is an outlier. One criterion for labeling a point an ‘outlier’ is: (Xnew−μ̂j)ᵀΣ̂j−1(Xnew−μ̂j)≥χd2(β) ∀j. This implies that if (Xnew) is not within the β% (for example, ≥99%) probability contour of N(μj, Σj), but is still closer to cluster (j) than to any other cluster in terms of the generalized statistical distance, the data point gets labeled an outlier to cluster (j).
Three different cases are considered here for diagnostics:
The signal enveloping, shown at block 86, is a limit-based system, and uses diagnosis based on univariate signal envelopes. For each of the features/signals, signal envelopes are constructed recursively using a ±kσ limit, where (k) is a predetermined value based on the size of the envelope desired. The actual expressions are based on equations (20)-(23), shown above. A new feature point (xj) is considered an outlier if |xj−μ̂j|≥kσ̂j.
In addition to the classification and signal enveloping, the diagnostics path also includes a determination of velocity thresholds. Standardized velocity within individual clusters is estimated based on consecutive feature vector entries as follows. If (X1) and (X2) denote the most recent consecutive feature vectors in (Rn), collected at time instants (t1) and (t2), then the standardized velocity is calculated as:
V=∥Z2−Z1∥/(t2−t1)
where Z is the standardized feature vector obtained by standardizing each element of X with the recursively estimated mean and standard deviation, i.e. zj=(xj−μ̂j)/σ̂j.
For purposes of using the velocity thresholds in the diagnostics portion of the DPF, rv is assigned a value of 1 if V>Vth; otherwise rv is set to 0. Typically, Vth is set to 5.
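One natural reading of the standardized-velocity check (the exact expression is not fully reproduced in the text) can be sketched as follows: standardize two consecutive feature vectors and divide the distance travelled by the elapsed time. All numeric values below are illustrative:

```python
import numpy as np

V_TH = 5.0  # velocity threshold from the text

def standardized_velocity(X1, X2, t1, t2, mu, sigma):
    """Sketch of the standardized velocity between consecutive readings."""
    Z1 = (X1 - mu) / sigma            # element-wise standardization
    Z2 = (X2 - mu) / sigma
    return np.linalg.norm(Z2 - Z1) / (t2 - t1)

mu, sigma = np.zeros(3), np.ones(3)
v_slow = standardized_velocity(np.zeros(3), np.array([0.1, 0.0, 0.0]),
                               0.0, 1.0, mu, sigma)
v_fast = standardized_velocity(np.zeros(3), np.array([9.0, 0.0, 0.0]),
                               0.0, 1.0, mu, sigma)
r_v_slow = 1 if v_slow > V_TH else 0   # rv contribution to the voting result
r_v_fast = 1 if v_fast > V_TH else 0
```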
The output 90 from the diagnostics engine 78 is in communication with a decision support system (DSS) 92. The DSS 92 uses diagnostics/prognostics results and recommends necessary actions for maintenance. A DSS, such as the DSS 92, may include computers with preprogrammed algorithms configured to return certain outputs based on the information received from the DPF. As with the PdM Agent, the outputs based on the DPF information may be in the form of graphical displays or other methods useful to shop floor and other decision making personnel.
As illustrated in
With regard to the forecasting signals, each feature/signal is considered as a time series (xt). A univariate time series forecasting method is employed to predict the values of xt+1, xt+2, xt+3 and xt+4. In one implementation, an autoregressive time series model of order 7, AR(7), is fitted to (xt) and used for forecasting. As with the output 90 from the diagnostics engine 78, output 98 from the prognostics engine 96 is also in communication with the DSS 92.
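The AR(7) forecasting step can be sketched with an ordinary least-squares fit of lagged values followed by recursive prediction; the estimator and the test series below are illustrative assumptions, not the DPF's actual implementation:

```python
import numpy as np

def fit_ar(x, p=7):
    """Least-squares AR(p) fit: x[t] is modeled as a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    # Design matrix of lagged values: the row for time t is [x[t-1], ..., x[t-p]].
    X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def forecast(x, a, steps=4):
    """Multiple-steps-ahead prediction by feeding forecasts back as inputs."""
    hist = list(x)
    preds = []
    for _ in range(steps):
        nxt = sum(a[k] * hist[-1 - k] for k in range(len(a)))
        hist.append(nxt)
        preds.append(nxt)
    return preds

x = np.sin(0.3 * np.arange(100))   # a smooth, exactly predictable test series
a = fit_ar(x, p=7)
preds = forecast(x, a, steps=4)    # estimates of x[t+1] .. x[t+4]
```

A sinusoid satisfies an exact low-order autoregression, so the four forecasts closely track the true continuation of the series.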
In addition, both outputs are in communication with an end-user feedback system 100. Given the technical difficulty of developing diagnostic and prognostic algorithms, it is pragmatic to believe that most DPF's will not achieve 100% accuracy in diagnostics or prognostics. Therefore, the current DPF includes the feedback system 100, which is designed to incorporate user feedback for the classification based diagnostics system. The feedback system 100 can allow shop floor personnel, for example, to manually input information into the system to remedy known incorrect decisions. Such a system may be employed with the PdM as well as the DPF, and provides one more method for ensuring the integrity of the information provided.
While embodiments of the invention have been illustrated and described, it is not intended that these embodiments illustrate and describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention.