This disclosure relates to processing of sensor data, and more particularly to sensor data fusion for prognostics and health monitoring applications.
Complex engineered systems, such as helicopters, jet engines, heating, ventilating, and air conditioning (HVAC) systems, and elevators, are typically monitored systematically to ensure that faults are detected and flagged early. Several types of sensors are used to monitor physical observables such as temperature, pressure, fluid flow rate, and vibration. Information related to changes in system performance is commonly distributed among these sensors. Typically, experts use their domain knowledge and experience to hand-craft features that capture relevant information across different sensor modalities. However, such features are not always complete, and the necessary domain knowledge may not be available in many situations.
According to an embodiment, a method includes converting time-series data from a plurality of prognostic and health monitoring (PHM) sensors into frequency domain data. One or more portions of the frequency domain data are labeled as indicative of one or more target modes to form labeled target data. A model including a deep neural network is applied to the labeled target data. A result of applying the model is classified as one or more discretized PHM training indicators associated with the one or more target modes. The one or more discretized PHM training indicators are output.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the PHM sensors are heterogeneous sensors that monitor at least two uncorrelated parameters of a monitored system.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the frequency domain data include spectrogram data generated for each of the PHM sensors covering a same period of time.
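For illustration, the conversion of time-series sensor data into frequency domain spectrogram data can be sketched as a short-time Fourier transform computed per sensor channel over the same period of time. This is a minimal sketch; the window size, hop length, and sample signals below are assumptions for the example, not part of the disclosure.

```python
import numpy as np

def spectrogram(signal, window_size=64, hop=32):
    """Convert a 1-D time-series signal into a magnitude spectrogram.

    Each column is the magnitude spectrum of one Hann-windowed frame,
    so rows index frequency bins and columns index time frames.
    """
    window = np.hanning(window_size)
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T  # shape: (freq_bins, time_frames)

# Hypothetical heterogeneous sensors sampled over the same period
t = np.linspace(0, 1, 1024, endpoint=False)
vibration = np.sin(2 * np.pi * 50 * t)   # e.g., accelerometer channel
pressure = np.sin(2 * np.pi * 5 * t)     # e.g., pressure sensor channel
spectra = [spectrogram(s) for s in (vibration, pressure)]
```

Because every channel is converted with the same windowing over the same time span, the per-sensor spectrograms align frame-by-frame and can be stacked as joint input to a model.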
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the one or more target modes include one or more fault conditions.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the deep neural network is a deep belief network with a soft max layer performing classification using a nonlinear mapping.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the model is trained using a supervised learning process to develop a plurality of weights in a pre-training process and tune the weights based on the labeled target data.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the pre-training process includes applying a pre-training network of Restricted Boltzmann Machines to develop the weights such that noise is removed from one or more noise-containing inputs.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where applying the model to the labeled target data is performed during a training process to train the model, and further including: applying the model in a testing process to unlabeled frequency domain data from one or more of the PHM sensors, classifying a result of applying the model as one or more discretized PHM result indicators, and outputting the one or more discretized PHM result indicators.
In addition to one or more of the features described above, or as an alternative, further embodiments could include creating different instances of the model for different target modes.
In addition to one or more of the features described above, or as an alternative, further embodiments could include where the one or more target modes include one or more health conditions and one or more prognostic conditions of a monitored system.
A further embodiment is a system that includes a sensor system and a PHM processor. The sensor system includes a plurality of PHM sensors. The PHM processor is operable to convert time-series data from the PHM sensors into frequency domain data, label one or more portions of the frequency domain data indicative of one or more target modes to form labeled target data, apply a model including a deep neural network to the labeled target data, classify a result of applying the model as one or more discretized PHM training indicators associated with the one or more target modes, and output the one or more discretized PHM training indicators.
Technical function of the embodiments described above includes creation and use of monitoring models from multiple sensor inputs for health and prognostic monitoring.
Other aspects, features, and techniques of the embodiments will become more apparent from the following description taken in conjunction with the drawings.
The subject matter which is regarded as the present disclosure is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features, and advantages of the present disclosure are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
Embodiments automate the creation of system monitoring models that integrate information across homogeneous or heterogeneous sensor inputs, enabling more accurate health monitoring and prognostics of health-related conditions such as remaining useful life.
Referring now to the drawings,
The PHM processor 102 is a processing system which can include memory to store instructions that are executed by one or more processors. The executable instructions may be stored or organized in any manner and at any level of abstraction, such as in connection with a controlling and/or monitoring operation of the sensor system 104 of
Embodiments can use a deep neural network (DNN) model in the form of a deep belief network (DBN). A DNN model can include many hidden layers for PHM. Inputs to the DNN model may be from multiple PHM sensors 106 of the same kind (e.g., multiple accelerometers 106E) or of different kinds and can include other non-sensor information. A DNN is a feedforward artificial neural network that has more than one layer of hidden units between its inputs and outputs. Each hidden unit, j, uses a nonlinear mapping function, often the logistic function, to map its total input from the layer below, xj, to the scalar state, yj, that it sends to the layer above, where bj is the bias of unit j, i is an index over units in the layer below, and wij is the weight to unit j from unit i in the layer below. The values of yj and xj can be computed according to equation 1:

yj = 1/(1 + e^(-xj)), where xj = bj + Σi yi wij.  (1)
For classification, output unit j converts its total input, xj, into a class probability, pj, using a nonlinear mapping such as the soft max function of equation 2, where k is an index over all classes:

pj = e^(xj) / Σk e^(xk).  (2)
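Equations 1 and 2 can be illustrated directly in code. The layer sizes, random weights, and three output classes below are arbitrary assumptions for the sketch, not values from the disclosure.

```python
import numpy as np

def logistic_layer(y_below, W, b):
    """Equation 1: x_j = b_j + sum_i y_i * w_ij, then y_j = 1/(1 + e^-x_j)."""
    x = b + y_below @ W
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Equation 2: p_j = e^x_j / sum_k e^x_k (shifted for numerical stability)."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
y = rng.random(8)                       # activations from the layer below
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
hidden = logistic_layer(y, W1, b1)      # one hidden layer of 4 logistic units
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)
probs = softmax(b2 + hidden @ W2)       # class probabilities over 3 modes
```

Stacking several `logistic_layer` calls gives the "more than one layer of hidden units" structure of a DNN, with the soft max applied only at the output.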
To train a DNN model, a pre-training process 200 can be performed as depicted in the example of
A pre-training network 210 can be used to determine weights 212 as further depicted in the example of
To train a DNN, a pre-training step is performed, such as pre-training process 200 of
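One common way to realize pre-training with a network of Restricted Boltzmann Machines is one-step contrastive divergence, stacking each trained RBM's hidden activations as input to the next. The sketch below, with assumed layer sizes, learning rate, and a random stand-in for spectrogram input, is illustrative rather than the exact procedure of the disclosure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1, seed=0):
    """Train one Restricted Boltzmann Machine with CD-1; return weights and hidden biases."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: sample hidden units given the (possibly noisy) input
        h_prob = sigmoid(data @ W + b_h)
        h_state = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step back to the visible layer
        v_prob = sigmoid(h_state @ W.T + b_v)
        h_prob2 = sigmoid(v_prob @ W + b_h)
        # Contrastive-divergence updates
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
        b_v += lr * (data - v_prob).mean(axis=0)
        b_h += lr * (h_prob - h_prob2).mean(axis=0)
    return W, b_h

# Stack two RBMs: each layer's hidden probabilities feed the next RBM
rng = np.random.default_rng(1)
spectro_batch = (rng.random((32, 20)) > 0.5).astype(float)  # stand-in input
W1, bh1 = train_rbm(spectro_batch, n_hidden=10)
layer1 = sigmoid(spectro_batch @ W1 + bh1)
W2, bh2 = train_rbm(layer1, n_hidden=5, seed=2)
```

The weights learned layer by layer in this unsupervised pass then initialize the DBN before supervised fine-tuning.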
The DNN model 402 including DBN 404 is applied to the labeled target data 412 in a supervised learning process 414. The supervised learning process 414 can include developing a plurality of weights 212 in pre-training process 200 and tuning the weights 212 based on the labeled target data 412. Fine tuning of the weights 212 may be performed using gradient descent and backpropagation. A result of applying the DNN model 402 can be classified as one or more discretized PHM training indicators associated with the one or more target modes. Classification can be performed in soft max layer 416 using nonlinear mapping according to the soft max function of equation 2. The one or more discretized PHM training indicators are output at block 418. The one or more discretized PHM training indicators can identify whether one or more health conditions and/or one or more prognostic conditions of the monitored system 100 are detected.
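The supervised fine-tuning step, gradient descent driven by the cross-entropy gradient through the soft max layer, can be sketched for a single output layer as follows. The feature dimensions, toy labeled data, learning rate, and epoch count are assumptions for illustration; a full implementation would backpropagate through all pre-trained layers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fine_tune(features, labels, n_classes, epochs=200, lr=0.5, seed=0):
    """Tune soft max layer weights by gradient descent on cross-entropy loss."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(features.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        probs = softmax(features @ W + b)
        grad = probs - onehot              # d(loss)/d(logits) for cross-entropy
        W -= lr * features.T @ grad / len(features)
        b -= lr * grad.mean(axis=0)
    return W, b

# Toy labeled target data: two discretized indicators (healthy vs. fault)
rng = np.random.default_rng(3)
healthy = rng.normal(loc=-1.0, size=(20, 6))
faulty = rng.normal(loc=+1.0, size=(20, 6))
X = np.vstack([healthy, faulty])
y = np.array([0] * 20 + [1] * 20)
W, b = fine_tune(X, y, n_classes=2)
pred = np.argmax(softmax(X @ W + b), axis=1)   # discretized indicators
```

The `argmax` over the soft max probabilities is one way to discretize the classifier output into the PHM indicators described above.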
Different instances of the DNN model 402 can be created for different target modes. Once the DNN model 402 is initially trained, it can be presented with any number of target modes to model; therefore, multiple models for different tasks can be created from the same underlying DBN 404. DNNs have been shown to make more effective use of the information present in the data for discriminative tasks and can be applied to detecting one or more fault conditions. Prognostics can be performed by learning over several time steps of data or by presenting target label points from subsequent time steps.
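The idea of reusing one underlying feature representation with a separate soft max head per target mode can be sketched as follows. The two tasks ("fault" and "wear"), their labels, and the stand-in features are hypothetical examples, not taken from the disclosure.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_head(features, labels, n_classes, epochs=300, lr=0.5, seed=0):
    """Fit one soft max head on top of fixed, shared features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(features.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        grad = softmax(features @ W + b) - onehot
        W -= lr * features.T @ grad / len(features)
        b -= lr * grad.mean(axis=0)
    return W, b

# Shared features (stand-in for DBN output) serve two different target modes
rng = np.random.default_rng(4)
feats = np.vstack([rng.normal(-1, 1, (30, 8)), rng.normal(1, 1, (30, 8))])
fault_labels = np.array([0] * 30 + [1] * 30)            # fault detection task
wear_labels = np.array([0] * 20 + [1] * 20 + [2] * 20)  # hypothetical wear stages
heads = {
    "fault": train_head(feats, fault_labels, n_classes=2),
    "wear": train_head(feats, wear_labels, n_classes=3),
}
```

Only the small head is trained per task, which is why multiple task-specific models can be derived cheaply from the same trained DBN.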
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments. While the present disclosure has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the present disclosure is not limited to such disclosed embodiments. Rather, the present disclosure can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the present disclosure. Additionally, while various embodiments of the present disclosure have been described, it is to be understood that aspects of the present disclosure may include only some of the described embodiments. Accordingly, the present disclosure is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.
This application is a National Stage application of International Patent Application Serial No. PCT/US2015/066673, filed Dec. 18, 2015, which claims benefit to U.S. Provisional Application No. 62/094,681, filed Dec. 19, 2014, which are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/066673 | 12/18/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/100816 | 6/23/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5373460 | Marks, II | Dec 1994 | A |
6578040 | Syeda-Mahmood | Jun 2003 | B1 |
8390675 | Riederer | Mar 2013 | B1 |
20060221072 | Se et al. | Oct 2006 | A1 |
20110231169 | Furem | Sep 2011 | A1 |
20110288714 | Flohr et al. | Nov 2011 | A1 |
20130177235 | Meier | Jul 2013 | A1 |
20140019388 | Kingsbury | Jan 2014 | A1 |
20140195192 | Kimishima | Jul 2014 | A1 |
20140222425 | Park et al. | Aug 2014 | A1 |
20140226855 | Savvides et al. | Aug 2014 | A1 |
20140253760 | Watanabe et al. | Sep 2014 | A1 |
20140294088 | Sung et al. | Oct 2014 | A1 |
20140333787 | Venkataraman et al. | Nov 2014 | A1 |
20160098037 | Zornio | Apr 2016 | A1 |
Entry |
---|
Khunarsal et al, “Very short time environmental sound classification based on spectrogram pattern matching”, Sep. 2013, Information Sciences, vol. 243, pp. 57-74 (Year: 2013). |
Suhaimi, Emil Zaidan bin, “Intelligent Sensor Data Pre-Processing Using Continuous Restricted Boltzmann Machine”, Oct. 2013, UTPedia, all pages (Year: 2013). |
Dahl et al, “Large Scale Malware Classification Using Random Projections and Neural Networks”, May 2013, IEEE, all pages (Year: 2013). |
B. Wu, et al., “Fast pedestrian detection with laser and image data fusion,” Proceedings of the 6th International Conference on Image and Graphics, Aug. 12, 2011, pp. 605-608. |
C. Premebida et al., “LIDAR and vision-based pedestrian detection system,” Journal of Field Robotics, vol. 26, No. 9, Sep. 1, 2009, pp. 696-711. |
International Application No. PCT/US2015/066664 International Search Report and Written Opinion, dated Apr. 26, 2016, 12 pages. |
International Application No. PCT/US2015/066673 International Search Report and Written Opinion, dated Apr. 6, 2016, 12 pages. |
J. Ngiam, et al., “Multimodal Deep Learning,” Proceedings of the 28th International Conference in Machine Learning (ICML '11), Jun. 28, 2011, pp. 689-696. |
J. Sun, et al., “Application of Deep Belief Networks for Precision Mechanism Quality Inspection,” IFIP Advances In Information And Communication Technology, vol. 435, Feb. 16, 2014, pp. 87-93. |
J. Xie, et al., “Learning Features from High Speed Train Vibration Signals with Deep Belief Networks,” 2014 International Joint Conference on Neural Networks, Jul. 6-11, 2014, pp. 2205-2210. |
M. Szarvas et al., “Real-time pedestrian detection using LIDAR and convolutional neural networks,” Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Jun. 13-15, 2006, pp. 213-218. |
N.K. Verma, et al. “Intelligent Condition Based Monitoring of Rotating Machines using Sparse Auto-encoders,” Proceedings of the 2013 IEEE Conference on Prognostics and Health Management, Jun. 24, 2013, 7 pages. |
P. Tamilselvan, et al., “Deep Belief Network Based State Classification for Structural Health Diagnosis,” Proceedings of the 2012 IEEE Aerospace Conference, Mar. 3, 2012, 11 pages. |
V.T. Tran et al., “An approach to fault diagnosis of reciprocating compressor valves using Teager-Kaiser energy operator and deep belief networks,” Expert Systems With Applications, vol. 41, No. 9, Dec. 29, 2013, pp. 4113-4122. |
Z. Kira, et al., “Long-Range Pedestrian Detection using Stereo with a Cascade of Convolutional Network Classifiers”, Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 7, 2012, 8 pages. |
Mehrotra et al., “Elements of Artificial Neural Networks”, 1997, 351 pages. |
Number | Date | Country | |
---|---|---|---|
20180217585 A1 | Aug 2018 | US |
Number | Date | Country | |
---|---|---|---|
62094681 | Dec 2014 | US |