Data are routinely collected by engineers on both the maintenance and operation of equipment such as the aircraft of an airline's fleet, and can offer valuable insights into future performance and potential repair needs. However, the complexity and sheer quantity of the collected data render much useful analysis beyond the skills of a typical safety or maintenance engineer. Even for trained engineering analysts with the requisite specialist skills, the process can be time-consuming and laborious using typical software tools such as Excel or Tableau.
Machine learning is an increasingly popular tool for utilizing and interpreting such large datasets, but may be out of reach for a typical safety or maintenance engineer. Effective application of machine learning techniques to a maintenance problem typically requires identification of a relevant data pattern or pre-cursor signature, as well as expertise in data science to select and tune an appropriate algorithm, and programming skills to implement training and evaluation of the algorithm to generate a predictive model.
Augmented analytics and software tools to assist in the data analysis and model design process are desirable, to bring the power and insights of trend analysis and predictive models to a broader range of users, and simplify and accelerate the process for experienced analysts and data scientists.
The present disclosure provides systems, apparatus, and methods relating to predictive maintenance model design. In some examples, a data processing system for generating predictive maintenance models may include one or more processors, a memory including one or more digital storage devices, and a plurality of instructions stored in the memory. The instructions may be executable by the one or more processors to receive a historical dataset relating to each system of a plurality of systems, the historical dataset including maintenance data and operational data. The instructions may be further executable to receive a first selection of a first operational data feature and a first system, and display operational data associated with the first operational data feature and the first system, and maintenance data associated with the first system, on a timeline in a graphical user interface. The instructions may be further executable to receive a second selection of a second operational data feature and generate a predictive maintenance model using the second operational data feature according to a machine learning method.
In some examples, a computer implemented method of generating a predictive maintenance model may include receiving a historical dataset relating to each system of a plurality of systems, the historical dataset including maintenance data and operational data. The method may further include receiving a first selection of a first operational data feature and a first system, and displaying operational data associated with the first operational data feature and the first system, and maintenance data associated with the first system, on a timeline in a graphical user interface. The method may further include receiving a second selection of a second operational data feature and generating a predictive maintenance model using the second operational data feature according to a machine learning method.
In some examples, a computer program product for generating predictive maintenance models may include a non-transitory computer-readable storage medium having computer-readable program code embodied in the storage medium, the computer-readable program code configured to cause a data processing system to generate a predictive maintenance model. The code may include at least one instruction to receive a historical dataset relating to each system of a plurality of systems, the historical dataset including maintenance data and operational data. The code may further include at least one instruction to receive a first selection of a first operational data feature and a first system, and display operational data associated with the first operational data feature and the first system and maintenance data associated with the first system, on a timeline in a graphical user interface. The code may include at least one instruction to receive a second selection of a second operational data feature and generate a predictive maintenance model using the second operational data feature according to a machine learning method.
Features, functions, and advantages may be achieved independently in various examples of the present disclosure, or may be combined in yet other examples, further details of which can be seen with reference to the following description and drawings.
Various aspects and examples of a predictive maintenance model design system including a data visualization module, as well as related systems and methods, are described below and illustrated in the associated drawings. Unless otherwise specified, a design system in accordance with the present teachings, and/or its various components may, but are not required to, contain at least one of the structures, components, functionalities, and/or variations described, illustrated, and/or incorporated herein. Furthermore, unless specifically excluded, the process steps, structures, components, functionalities, and/or variations described, illustrated, and/or incorporated herein in connection with the present teachings may be included in other similar devices and methods, including being interchangeable between disclosed examples. The following description of various examples is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. Additionally, the advantages provided by the examples described below are illustrative in nature and not all examples provide the same advantages or the same degree of advantages.
This Detailed Description includes the following sections, which follow immediately below: (1) Overview; (2) Examples, Components, and Alternatives; (3) Illustrative Combinations and Additional Examples; (4) Advantages, Features, and Benefits; and (5) Conclusion. The Examples, Components, and Alternatives section is further divided into subsections A through C, each of which is labeled accordingly.
In general, a predictive maintenance model design system may be configured to assist a user in discovering and interpreting data trends and patterns, designing and training a prediction algorithm, and/or implementing a generated predictive maintenance model. For example, the design system may be a data processing system and/or a software program configured to execute a process 110, as shown in
At step 112 process 110 includes integrating operational and maintenance data, and generating data features. The integration may include receiving at least two distinct datasets for a plurality of systems such as a fleet of aircraft, and combining the data to form a single historical dataset. The datasets may be stored in the memory of the processing system on which process 110 is executed, may be available on a server for access over a network of some kind, or may be received by any effective means. In some examples, data may be drawn from multiple databases and/or from disparate sources.
Integrating the data may also include pre-processing or modification to prepare the data for use. The dataset may include numerical values organized as attributes of a plurality of maintenance and/or repair records and a plurality of telemetry and/or sensor records for each of a plurality of systems (e.g., aircraft). Raw attribute data from the telemetry records may be processed to generate one or more operational data features. The data features may be an unaltered attribute, may be a statistical function of an attribute, and/or may be an aggregate of multiple attributes or records.
At step 114, process 110 includes visualizing and analyzing the historical dataset, and receiving a selection of operational data features. The visualization may include displaying a variety of graphs, charts, plots, and tables in a graphical user interface (GUI) along with a plurality of interactive elements. Raw maintenance and operational data from the historical dataset, generated data features, and/or results of analysis of the dataset may be visualized. A user such as an engineering analyst may use the visualizations to identify trends in the sensor data that are indicative of equipment degradation and failure. Facilitating rapid identification of signs of deviation from normal operation by such an analysis is desirable in order to allow efficient generation of useful predictive models.
The interactive elements of the GUI may be configured to allow input of constraints on what data is visualized, initiation of desired analysis, and selection of operational data features. For example, the GUI may include selection boxes or buttons, display of contextual information on cursor hover, drill-down from graphs to tables or from complete dataset to data subsets, a refresh trigger, a mathematical function input, and/or any GUI elements known to those skilled in the art of software design.
At step 116, process 110 includes generating a predictive maintenance model based on the selected operational data features. Model generation may include selection of an appropriate anomaly detection algorithm, and input of algorithm parameters. In some examples, one or more algorithm templates may be presented in the GUI. In some examples, a plurality of logic block elements may be placeable in an interactive workflow building environment to define an appropriate algorithm and parameters.
Generating the predictive maintenance model may further include training and testing the selected algorithm. The algorithm may be trained and evaluated one or more times, and detailed test results displayed in the GUI. A selection of an algorithm configuration exhibiting desired properties may be received. The selected algorithm configuration may then be used to train a final model on the full historical dataset.
At step 118, the process includes implementing the generated predictive maintenance model. In some examples, the generated model may be prepared for deployment by software and/or a data processing system separate from the predictive maintenance model design system used to execute process 110. In such examples, the generated model may be prepared for deployment as part of a software program, or may be converted to an accessible format, e.g., including an application programming interface (API).
In some examples, implementing the generated predictive maintenance model may include receiving additional operational data. For instance, additional flight data may be recorded by an aircraft fleet and input to the predictive maintenance model design system running process 110. The predictive maintenance model may be applied to the additional operational data, and generate alerts for detected anomalies. Based on the generated alerts, proactive and preventative maintenance action such as inspection, testing, repair, or replacement of equipment, may be taken by maintenance workers to avoid potential costly and disruptive unplanned component replacements or other undesirable maintenance events.
Process 110 may be repeated to generate additional predictive maintenance models. For example, a suite of predictive maintenance models may be generated for an aircraft fleet. Over time, additional models may be generated or re-generated based on new data and/or to address new maintenance challenges.
Aspects of a predictive maintenance model design system or design process such as process 110 may be embodied as a computer implemented method, computer system, or computer program product. Accordingly, aspects of a predictive maintenance model design system may take the form of an entirely hardware example, an entirely software example (including firmware, resident software, micro-code, and the like), or an example combining software and hardware aspects, all of which may generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, aspects of the predictive maintenance model design system may take the form of a computer program product embodied in a computer-readable medium (or media) having computer-readable program code/instructions embodied thereon. The computer-readable program code may be configured to cause a data processing system to generate a predictive maintenance model.
Any combination of computer-readable media may be utilized. Computer-readable media can be a computer-readable signal medium and/or a computer-readable storage medium. A computer-readable storage medium may include an electronic, magnetic, optical, electromagnetic, infrared, and/or semiconductor system, apparatus, or device, or any suitable combination of these. More specific examples of a computer-readable storage medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, a cloud-based storage service, and/or any suitable combination of these and/or the like. In the context of this disclosure, a computer-readable storage medium may include any suitable non-transitory, tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, and/or any suitable combination thereof. A computer-readable signal medium may include any computer-readable medium that is not a computer-readable storage medium and that is capable of communicating, propagating, or transporting a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, and/or the like, and/or any suitable combination of these.
Computer program code for carrying out operations for aspects of a predictive maintenance model design process may be written in one or any combination of programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, Python, and/or the like, and conventional procedural programming languages, such as C. Mobile apps may be developed using any suitable language, including those previously mentioned, as well as Objective-C, Swift, C#, HTML5, and the like.
The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer or server may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), and/or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). The remote computer or server may be part of a cloud-based network architecture, such as a cloud computing service or platform. In some examples, the program code may be executed in a software-as-a-service (SaaS) framework accessed by a file transfer protocol such as secure shell file transfer protocol (SFTP) and/or an internet browser on the user's computer.
Aspects of the predictive maintenance design system are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems, and/or computer program products. Each block and/or combination of blocks in a flowchart and/or block diagram may be implemented by computer program instructions. The computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block(s). In some examples, machine-readable instructions may be programmed onto a programmable logic device, such as a field programmable gate array (FPGA).
These computer program instructions can also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, and/or other device to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block(s).
The computer program instructions can also be loaded onto a computer, other programmable data processing apparatus, and/or other device to cause a series of operational steps to be performed on the device to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block(s).
Any flowchart and/or block diagram in the drawings is intended to illustrate the architecture, functionality, and/or operation of possible implementations of systems, methods, and computer program products according to aspects of the predictive maintenance design system. In this regard, each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some implementations, the functions noted in the block may occur out of the order noted in the drawings. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Each block and/or combination of blocks may be implemented by special purpose hardware-based systems (or combinations of special purpose hardware and computer instructions) that perform the specified functions or acts.
The following sections describe selected aspects of exemplary predictive maintenance model design systems and data visualization modules as well as related systems and/or methods. The examples in these sections are intended for illustration and should not be interpreted as limiting the entire scope of the present disclosure. Each section may include one or more distinct examples, and/or contextual or related information, function, and/or structure.
As shown in
System 200 is configured to assist a user in completing process 300 to generate and implement machine learning models to detect anomalies in operational data which are indicative of future maintenance events. In the present example, system 200 is designed to generate models for a fleet of aircraft, based on recorded flight data. In some examples, system 200 may additionally or alternatively be used for predictive maintenance with respect to other mechanical systems, such as ground vehicles, ships, manufacturing equipment, industrial appliances, etc.
System 200 may be used to prepare models relating to different aspects (e.g., components or systems) of aircraft, and one set of historical data may be used to prepare multiple different models. For example, system 200 may be used to prepare a model for each of multiple subsystems present in an aircraft, to prepare a model for each class of sensor recording flight data, or to prepare a model for each failure mode of a particular component.
Modules 216, 218, 220 may be described as performing steps 310-324 of process 300, in cooperation with a user 228 who performs steps 330-338 of the process, as depicted in
User 228 interacts with the modules of system 200 on remote server 210 through a graphical user interface (GUI) 226 on local computer 212. The user is guided through a data exploration and model design process by the GUI, then a generated model is implemented by remote server 210 on new operational data to return anomaly alerts. New operational data may be input and alerts returned on an ongoing basis.
In the present example, GUI 226 is executed by local computer 212. For instance, the user may access the GUI through an internet browser installed on the local computer, or system 200 may include a client-side program running on the local computer and displaying the GUI. In general, a user may interface with system 200 in any manner allowing effective display and input of information.
System 200 may be configured to facilitate exploration of multiple maintenance events of interest, investigation of multiple trends concurrently, design of multiple algorithms, and/or use by multiple users of local computer 212 through, for example, creation of multiple distinct sessions. Process 300 may be performed or repeated independently in each such session, and/or information such as data features, analysis results, or algorithm configurations may be accessible and/or selectively shared between sessions.
Exploration module 216 receives historical data from a maintenance data source 222 and a sensor or operational data source 224. In some examples, the module may receive historical data from a plurality of maintenance and/or operational data sources. Typically, flight data are recorded by airplane sensors and downloaded in large data sets post flight. In some examples, sensor data may additionally or alternatively be downloaded during a flight using a system such as Aircraft Communications Addressing and Reporting System (ACARS). Separately, airline maintenance teams maintain information logs, recording maintenance events, defects, and actions taken, using maintenance information systems in an airline back-office. These two information sources, flight data and maintenance data, are not integrated. However, in order to perform predictive maintenance analyses, flight data patterns over time may need to be compared with the occurrence of maintenance events.
To address this, exploration module 216 may integrate the datasets for analysis and visualization. Step 310 of process 300 includes integrating the historical maintenance and sensor data. The data may be integrated into a single historical dataset and prepared for display and analysis. For example, the data may be received from other software and may be converted from an output format of such software to a format appropriate to modules 216, 218 and 220. Preprocessing algorithms may be applied to the dataset to discretize continuous variables, reduce dimensionality, separate measurements into components, eliminate missing or inaccurate data, and/or any appropriate modifications. When the dataset includes data from multiple sources, pre-processing may include merging data, harmonizing formatting, and matching organizational structure.
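By way of a non-limiting illustration, the following Python sketch shows one way the integration and pre-processing of step 310 could be carried out, assuming flight data and maintenance logs are available as two CSV files with hypothetical column names (tail_number, timestamp, sensor_value, event_date, event_type); it is an illustrative sketch only and not the disclosed implementation.

```python
# Illustrative sketch (hypothetical schema): merge maintenance and flight-data
# records into a single, time-ordered historical dataset.
import pandas as pd

def integrate_historical_data(flight_csv: str, maintenance_csv: str) -> pd.DataFrame:
    flights = pd.read_csv(flight_csv, parse_dates=["timestamp"])
    maintenance = pd.read_csv(maintenance_csv, parse_dates=["event_date"])

    # Harmonize identifiers so the two sources share one organizational structure.
    flights["tail_number"] = flights["tail_number"].str.upper().str.strip()
    maintenance["tail_number"] = maintenance["tail_number"].str.upper().str.strip()

    # Eliminate records with missing key fields rather than imputing them.
    flights = flights.dropna(subset=["tail_number", "timestamp", "sensor_value"])
    maintenance = maintenance.dropna(subset=["tail_number", "event_date", "event_type"])

    # Tag each source and concatenate on a common timeline column.
    flights = flights.rename(columns={"timestamp": "time"}).assign(record_kind="operational")
    maintenance = maintenance.rename(columns={"event_date": "time"}).assign(record_kind="maintenance")
    historical = pd.concat([flights, maintenance], ignore_index=True, sort=False)
    return historical.sort_values(["tail_number", "time"]).reset_index(drop=True)
```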
Exploration module 216 is configured to integrate data saved locally to computer 212 (e.g., repair logs saved as spreadsheets and/or in a repair management software database). In some examples, the exploration module may also be configured to interface or communicate with external databases or database software to retrieve relevant data. For example, the exploration module may generate SQL queries to request data from an online database. Such connectivity may facilitate access to complete and up-to-date information. The module may also be configured to accept input of data in any anticipated format.
At step 312, exploration module 216 generates data features from the operational data of the historical dataset, including phase-dependent aggregate data features. Trending of recorded sensor data can be difficult due to large variations in sensor values that occur as a result of equipment operational cycles. For example, if an analyst simply aggregates the time series recordings for many aircraft flights into one large time series, and plots this data over long periods to look for a trend over time, the result is typically a very noisy distribution with no significant trends evident.
Digitally recorded sensor data, e.g. temperatures, pressures, electrical currents, and actuator positions, from an electro-mechanical system such as an aircraft or other vehicle, may be discrete samples in a time series covering a period of observation. For aircraft, these data are recorded for each flight. Typical sampling rates are 0.25 Hz to 8 Hz for modern commercial aircraft. The recorded values of these data vary substantially over the course of an operational cycle, e.g. during the flight of an aircraft. For example, recorded temperatures could vary by hundreds of degrees depending on variables such as altitude (e.g. ground level versus cruising altitude), operational mode of the equipment, and any dynamic operational control changes applied to the equipment, either automatically or via explicit operator action, e.g. by a pilot.
Exploration module 216 avoids this obfuscation by dividing the data according to a plurality of phases of the operational cycle before aggregation. For example, sensor data from an aircraft flight may be divided into phases of taxi-out, take-off, climb, cruise, descent, landing, and taxi-in. A range of aggregating functions may then be separately applied to the data associated with each phase, to create phase-dependent aggregate features that can be trended over long periods, e.g. thousands of flights. For instance, the data features may combine phase-dependence and a value constraint or a differential comparison with aggregating statistical functions.
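The following Python sketch illustrates one way such phase-dependent aggregate features could be generated, assuming a pandas DataFrame of raw sensor samples with hypothetical column names (tail_number, flight_id, phase, sensor, value); the column naming and choice of statistics are assumptions for illustration only.

```python
# Illustrative sketch: aggregate raw sensor samples into phase-dependent features,
# one row per flight, so that values can be trended over thousands of flights.
import pandas as pd

def phase_aggregate_features(samples: pd.DataFrame) -> pd.DataFrame:
    grouped = samples.groupby(["tail_number", "flight_id", "phase", "sensor"])["value"]
    stats = grouped.agg(["mean", "min", "max", "std", "median"]).reset_index()
    # Pivot so each (statistic, sensor, phase) combination becomes a trendable
    # column, e.g. mean__brake_temp__landing, with one row per flight.
    wide = stats.pivot_table(
        index=["tail_number", "flight_id"],
        columns=["sensor", "phase"],
        values=["mean", "min", "max", "std", "median"],
    )
    wide.columns = ["__".join(str(level) for level in col) for col in wide.columns]
    return wide.reset_index()
```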
In some examples the module may generate a pre-defined set of data features corresponding to a maintenance event of interest selected by the user. Such a set may include raw attribute data features and/or aggregate data features. For example, if the user selects an overheating event the generated data features may include a temperature sensor reading, an average temperature sensor reading during take-off, a difference in temperature reading between port and starboard sensors, and/or an average difference in temperature reading between port and starboard sensors during take-off.
At step 330, user 228 may define custom data features. In other words, exploration module 216 may receive a mathematical function or other logical ruleset defining a custom data feature from user 228, and generate a data feature accordingly. A GUI 226 of design system 200 may include an interface facilitating input of one or more such rulesets. Custom data features may include representations of recorded time-series sensor data that capture events, transitions, performance metrics, and states of the system under observation. Any custom defined data features may be ranked, visualized, and communicated to machine learning module 218 in the same manner as other data features generated at step 312, such as the pre-defined set of data features.
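As one non-limiting example of such a custom ruleset, the sketch below computes the take-off average difference between port and starboard temperature sensors mentioned above; the column names (flight_id, phase, position, temperature) and position labels are hypothetical.

```python
# Illustrative sketch of a user-defined custom data feature: average port-minus-
# starboard temperature difference during the take-off phase of each flight.
import pandas as pd

def port_starboard_takeoff_delta(samples: pd.DataFrame) -> pd.Series:
    takeoff = samples[samples["phase"] == "take-off"]
    per_flight = takeoff.pivot_table(
        index="flight_id", columns="position", values="temperature", aggfunc="mean"
    )
    # Positive values indicate the port sensor running hotter than starboard.
    return (per_flight["port"] - per_flight["starboard"]).rename("takeoff_port_starboard_delta")
```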
Exploration module 216 is further configured to categorize maintenance events, at step 314 of process 300. More specifically, the module may divide maintenance events of a selected type into two or more sub-categories based on machine learning analysis of the maintenance data. For example, topic modelling may be applied to text of maintenance logs to divide unplanned replacements of a selected component into a plurality of failure types. The maintenance events, sub-categories, and/or other results of the machine learning analysis may be displayed to user 228 through GUI 226.
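The disclosure does not mandate a particular topic modelling technique; as one concrete possibility, the sketch below applies latent Dirichlet allocation from scikit-learn to free-text maintenance log entries and assigns each event to its dominant topic. The vectorizer settings and number of failure types are illustrative assumptions.

```python
# Illustrative sketch: sub-categorize maintenance events by topic modelling the
# free text of their log entries (one possible technique, not the only one).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def categorize_maintenance_text(log_texts, n_failure_types=4, random_state=0):
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    counts = vectorizer.fit_transform(log_texts)
    lda = LatentDirichletAllocation(n_components=n_failure_types, random_state=random_state)
    topic_weights = lda.fit_transform(counts)
    # Assign each maintenance event to its dominant topic (failure sub-category).
    return topic_weights.argmax(axis=1)
```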
At step 332, user 228 may designate health timeframes relative to recorded maintenance events. That is, exploration module 216 may receive one or more time thresholds from the user. For example, user 228 may indicate that components having undergone replacement are healthy for a first time period following replacement, and degraded for a second time period preceding replacement. The exploration module may subdivide operational data into labeled classes or categories based on this received designation. In some examples, exploration module 216 may automatically use maintenance event sequences to label the sensor data for the purposes of supervised machine learning, such as classification models for failure prediction.
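One reading of these thresholds (consistent with the selection boxes described later at step 436) is sketched below: flights more than a healthy threshold before an event are labeled healthy, flights within a degraded threshold before the event are labeled degraded, and the remainder are uncertain. The column name flight_date and default threshold values are assumptions for illustration.

```python
# Illustrative sketch: label flights relative to a single maintenance event date
# using user-supplied healthy/degraded thresholds (in days).
import pandas as pd

def label_health(flights: pd.DataFrame, event_date: pd.Timestamp,
                 healthy_days: int = 60, degraded_days: int = 14) -> pd.Series:
    days_before = (event_date - flights["flight_date"]).dt.days
    labels = pd.Series("uncertain", index=flights.index)
    labels[days_before > healthy_days] = "healthy"
    labels[(days_before >= 0) & (days_before <= degraded_days)] = "degraded"
    return labels
```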
At step 316, exploration module 216 ranks the importance of some or all of the data features generated in step 312. Importance may be ranked according to one or more quantitative measures of influence of a given data feature on predictive model performance. The ranking may indicate relative importance of the data features in predicting occurrence of a set of historical maintenance events.
The exploration module may use the designations from step 332 in a machine learning method to perform the ranking. In some examples, the user may select one or more data features and one or more maintenance events to be used in the ranking process. The machine learning method may evaluate correlation between data features and maintenance events in an absolute and/or relative sense. For example, the module may iterate supervised classification, eliminating a data feature at each iteration to generate a relative ranking of feature importance.
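In the spirit of the iterative elimination approach described above, the following sketch ranks features with scikit-learn's recursive feature elimination wrapped around a random forest classifier; the choice of estimator and parameters is an assumption, and any supervised classifier could be substituted.

```python
# Illustrative sketch: rank data feature importance by iterated supervised
# classification with recursive feature elimination.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def rank_features(X, y, feature_names):
    estimator = RandomForestClassifier(n_estimators=200, random_state=0)
    selector = RFE(estimator, n_features_to_select=1, step=1).fit(X, y)
    # ranking_ == 1 is the most important feature; larger ranks were eliminated earlier.
    return sorted(zip(selector.ranking_, feature_names))
```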
At step 334, user 228 may use GUI 226 to explore the sensor and maintenance data integrated at step 310. The user may also explore the data features generated at step 312, maintenance event categories generated at step 314, and/or data feature rankings generated at step 316. The user may explore the data in order to identify potential data patterns, pre-cursor signatures, and/or candidate data features useful for creating a predictive maintenance model.
At step 318, exploration module 216 may visualize and analyze data to aid the user's data exploration. The exploration module is configured to visualize and display both the operational and maintenance data of the historical dataset in a manner that enables a user to discover behavior patterns in large sets of recorded flight sensor data. Flight sensor data and maintenance event data may be displayed overlaid together in one or more graphs and/or charts to allow the user to identify relevant correlations and trends over time. In other words, the exploration module automatically combines the flight data features and maintenance events into a single time series visualization, enabling visual identification of important flight data patterns that are associated with maintenance problems.
As described above with reference to step 312, trending raw sensor data may provide limited insight. Instead, exploration module 216 may display the generated phase-dependent aggregate data features. The health timeframes designated at step 332 may also be displayed relative to each maintenance event. Visualizations may be displayed to the user, and constraints, selections, and other inputs received from the user through GUI 226.
Exploration module 216 may perform automatic analysis of the historical dataset as well as additional analysis as selected by the user. Automatic analysis may include standard statistical measures, detection of potential seasonal bias, and/or any analysis typically relevant to maintenance prediction. In some examples, the analysis automatically performed may depend on the maintenance event of interest selected by the user.
GUI 226 may also provide user 228 control over iteration of steps 312-316, 330, and 332. For example, user 228 may identify a data trend of interest when exploring sensor and maintenance data in step 334 and perform step 330 again to define a related custom data feature, or may begin with step 318 to have the exploration module visualize and analyze data relevant to a data trend of interest before performing step 330 to define a related custom feature. The user may then trigger generation of the defined feature in step 312, and re-ranking of data feature importance in step 316.
At step 336, user 228 may select one or more key data features. That is, exploration module 216 may receive a selection of one or more data features. In some examples, the user may opt for the exploration module to perform an automatic selection of one or more data features. At step 320, the exploration module prepares the selected data features and any relevant related information, as appropriate. Exploration module 216 communicates a set of the data features and labels 233 to machine learning module 218.
Machine learning module 218 is configured to assist user 228 in defining, training, and evaluating candidate algorithms to arrive at a trained anomaly detection model having desired performance traits. The machine learning module may be configured to accommodate users of different levels of expertise and/or allow a user to select a desired level of guidance for a particular project. For example, GUI 226 may include first and second interfaces to the machine learning module.
The first and second interfaces may be designed for beginner and advanced users, or for simplified and complex design, respectively. A user may select the first interface due to limited experience, and/or in order to save time and avoid potential complications such as over-fitting. A user may select the second interface to build and test an algorithm from scratch and/or to create custom algorithm characteristics. In some examples, a user may start with a simple algorithm generated in the first interface, and introduce targeted complexity using tools in the second interface.
At step 338, user 228 may select a template and tune algorithm parameters. For example, the user may select an algorithm template with pre-determined parameters through the first interface, or may select an algorithm type and input all relevant parameters through the second interface. In either case, the machine learning module 218 may receive all necessary details of an anomaly detection algorithm configuration.
Appropriate algorithms may include supervised, unsupervised, or semi-supervised anomaly detection algorithms and techniques such as k-nearest neighbor, support vector machines, Bayesian networks, hidden Markov models, or deep learning. Input parameters and/or model settings may include tuning parameters such as feature data thresholds and relative weighting factors, data pre-processing methods such as smoothing, filtering, and normalization, and alert output criteria such as deviation persistence.
At step 322, the module defines and tests an anomaly detection algorithm based on the algorithm configuration selected at step 338 and the key data features selected in step 336. That is, the module trains and validates a predictive maintenance model. For example, the prepared data features may be divided into complementary training and validation data subsets, and the algorithm trained on the training data subset, then tested on the corresponding validation data subset. In some examples, the machine learning module may receive a selection of training and validation data sets from the user.
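As one concrete, non-limiting configuration, the sketch below trains a one-class support vector machine on flights labeled healthy and evaluates it on a held-out validation split; any of the algorithm families named above could be substituted, and the parameters, scaling step, and label encoding (0 = healthy, 1 = degraded) are illustrative assumptions.

```python
# Illustrative sketch: train a candidate anomaly detection model on a training
# subset and evaluate it on a complementary validation subset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def train_and_validate(features: np.ndarray, labels: np.ndarray):
    X_train, X_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.3, random_state=0, stratify=labels)
    scaler = StandardScaler().fit(X_train[y_train == 0])       # fit on healthy flights
    model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale")
    model.fit(scaler.transform(X_train[y_train == 0]))         # train on healthy flights only
    # OneClassSVM.predict returns -1 for anomalies; compare against degraded labels.
    predicted_anomaly = model.predict(scaler.transform(X_val)) == -1
    detection_rate = (predicted_anomaly & (y_val == 1)).sum() / max((y_val == 1).sum(), 1)
    false_positive_rate = (predicted_anomaly & (y_val == 0)).sum() / max((y_val == 0).sum(), 1)
    return model, scaler, detection_rate, false_positive_rate
```

The reported detection rate and false positive rate correspond to the kinds of evaluation results that may be displayed to the user for comparison of candidate configurations.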
The algorithm may be trained and evaluated one or more times, and detailed test results reported to user 228 in GUI 226. Based on the evaluation results, the user may repeat step 338 and trigger a repeat of step 322 by the machine learning module. The GUI may also provide further tools for refining the algorithm, such as alternative training and testing methods and/or error investigation of individual cases.
In some examples, machine learning module 218 may be configured to train and evaluate multiple algorithm configurations, either concurrently or sequentially, and report a comparative analysis in addition to or in place of individual evaluation results. User 228 may repeat steps 338, 322 as necessary until arriving at a satisfactory algorithm configuration. For example, the user may select for desired properties such as high accuracy or low rate of false positives. The user may then trigger step 324 of process 300.
Implementation module 220 is configured to apply the final predictive maintenance model trained by machine learning module 218 to new operational data, detect anomalies in the data, and generate alerts accordingly. Machine learning module 218 may communicate a trained model 234 to implementation module 220, and the implementation module may receive new sensor data 230 from local computer 212. At step 324, the module runs anomaly detection on the new sensor data. The implementation module may then return alerts 232 to the local computer.
Similarly to exploration module 216, implementation module 220 is configured to integrate data saved locally to computer 212. In some examples, the implementation module may also be configured to interface or communicate with external databases or database software. Such connectivity may facilitate ongoing communication with operational databases for automatic generation of maintenance alerts. For instance, remote server 210 may communicate daily with a database of operational data to automatically receive recorded flight data added to the database during maintenance of an aircraft, and issue maintenance alerts before the aircraft returns to service.
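A minimal sketch of this scoring-and-alerting step is shown below, reusing the model and scaler from the training sketch above; the identifier columns and alert text are hypothetical.

```python
# Illustrative sketch: apply a trained model to newly received flight features and
# return alert records for detected anomalies.
import pandas as pd

def generate_alerts(model, scaler, new_features: pd.DataFrame,
                    id_columns=("tail_number", "flight_id")) -> pd.DataFrame:
    scores = model.predict(scaler.transform(new_features.drop(columns=list(id_columns))))
    anomalies = new_features.loc[scores == -1, list(id_columns)].copy()
    anomalies["alert"] = "anomaly detected - inspect before return to service"
    return anomalies
```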
As shown in
The data visualization module is configured to enable a user such as an engineering analyst to identify patterns and pre-cursor signatures in large sets of sensor data. Without need for programming expertise, the data visualization module allows the user to rapidly create long-term trend plots, perform comparative analysis, select relevant data-subsets, and drill down to individual sensor readings. The data visualization module also facilitates overlay of maintenance data onto an operational data timeline, for clear visualization of relationships between operational performance and maintenance outcomes.
A user of the data visualization module may identify patterns for the purpose of building algorithms to distinguish component degradation and predict failures in advance. Use of the data visualization module may substantially reduce the time required to investigate flight data features and identify features of importance for failure prediction. In some examples, the data visualization module may be used to identify patterns relevant to other applications, such as performance evaluation of aircraft models or maintenance facilities. In such examples, the data visualization module may be configured for standalone use, separate from a design system.
Aspects of systems and modules described above may be utilized in the method steps described below. Where appropriate, reference may be made to components and systems that may be used in carrying out each step. These references are for illustration, and are not intended to limit the possible ways of carrying out any particular step of the method. The flowchart of
Step 410 includes receiving a historical dataset of operational and maintenance data. The dataset may consist of time-labeled historical maintenance and sensor data integrated from separate databases, cleaned, and pre-processed for display and analysis, as described in reference to step 310, above. In the present example the data visualization module is configured for use with data from a fleet of aircraft. The operational data include flight sensor data, digitally recorded by the fleet of aircraft and including readings such as temperatures, pressures, electrical currents, and actuator positions. The operational data include discrete samples in a plurality of time series, each time series recorded over the course of a flight. The maintenance data include maintenance logs such as records of scheduled or unplanned component replacements, routine upkeep and repair work performed, and inspection results.
Step 412 includes dividing the operational data into phase-dependent subsets. As discussed above, trending of recorded sensor data can be difficult due to large variations in sensor values that occur as a result of equipment operational cycles. In the present example, recorded aircraft flight data varies according to flight phases such as climb, cruise, and descent. To address this, sensor data from each flight is divided according to flight phases and data is trended for one phase at a time. In other words, the operational data is divided into a plurality of phase-dependent data subsets, each subset corresponding to one of the flight phases. In examples where the data visualization module is configured for use with equipment other than aircraft, the operational data may be divided according to other operational phases such as engine warm-up, full power, and idle.
In the present example, the flight phases are automatically defined as taxi-out, takeoff, climb, cruise, descent, landing, and taxi-in. The user may also re-define the flight phases as appropriate to a specific data analysis goal and/or a type of aircraft. For example, flight phases for a fleet including helicopters may further include a hover phase. Step 412 may be repeated as needed when the flight phase definitions are changed.
Step 414 includes displaying selection tools in a GUI. More specifically, selection tools are displayed in GUI 510, as shown in
Step 416 includes receiving selections. The selections are received from the user, via GUI 510. Some selections may be required before proceeding with data visualization in step 432, while other selections may be optional additional constraints. Step 416 may be repeated many times in method 400. The data processing and visualization method may be described as iterative, with repeated selection of constraints and display of data. The user may use insights gained from a visualization to inform a subsequent selection of constraints, and so on.
The data visualization module or a design system of which the module is a part may allow the user to save a set of selections received at step 416, for later use or sharing with other users. For example, the data visualization module may create distinct user sessions, allowing the user to work on investigating multiple questions separately, and/or multiple users to work independently. In some examples, the data visualization module may support export of a received set of selections and/or associated visualizations to another format, or to other data processing software.
In the present example, selection 418 is required before proceeding with step 432. Selection 418 is of one or more data features. In the present description, the term ‘data feature’ is used to describe a subset of records of the operational data of the historical dataset, extracted, processed, and/or otherwise prepared for use in a machine learning method. The data features of selection 418 are defined by selected constraints 420-426. More specifically, the data features are defined by constraint 420 on sensors, constraint 422 on aggregating functions, constraint 424 on flight phase, and constraint 426 on aircraft.
Phase constraint 424 is received via a phase selection menu 520. The available options in menu 520 are the flight phases used to divide the operational data into phase-dependent data subsets in step 412. In the depicted example, only a single phase can be selected at a time, and all data features of selection 418 are restricted to the phase-dependent data subset corresponding to the selected flight phase. In some examples, multiple phases may be selected at once using menu 520 or a selection box, and data features for each phase-dependent data subset corresponding to a selected flight phase may be included in selection 418.
Sensor constraint 420 is received via a sensor parameter selection box 512 and a position selection box 516. The term ‘sensor parameter’ is used here to refer to an attribute of the operational data indicating a type of sensor by which values of a record were recorded. An aircraft may include multiple sensors of the same type, in different positions on the aircraft. Selection of a sensor parameter and a position may constrain the defined data feature to records associated with a single sensor. The user may select one or more sensor parameters and one or more positions, to include all sensors defined by possible combinations of the selected sensor parameters and positions.
In some examples, position selection box 516 may also be used to select one or more filtering functions to be applied to the data associated with the sensor parameters of selection box 512. Use of filtering and/or other statistical functions may reduce observed flight-to-flight noise in displayed features. In the example of
Aggregating function constraint 422 is received via a function selection box 514. In the present example, the available aggregating statistical functions include average, minimum, maximum, standard deviation, median, upper quartile value, and lower quartile value. Similar to the above-described difficulty in long-term trending due to operational phase, effective visualization of trends can be hampered by the division of the operational data into a plurality of distinct time series, or flights. To avoid this difficulty, the data features of selection 418 are aggregate. For each flight, records in the selected phase-dependent data subset associated with the selected sensor are aggregated according to the selected statistical function.
Aircraft constraint 426 is received via a tail number selection box 518. Each aircraft of the fleet of the historical dataset may be designated by a unique identifier. In the present example, each aircraft is assigned a tail number. The user may use box 518 to select one or more tail numbers for aircraft of interest. The defined data feature may be thereby constrained to records from flights made by the selected aircraft. The aircraft selected may also be a constraint on displayed maintenance data, as described further below.
Each possible combination of selected phase, sensor parameters, positions, aggregating functions, and aircraft defines a data feature of selection 418. Selection 418 may also include a custom data feature defined by the user. Custom data features may be defined in another module of a design system, and received via an interface opened with feature button 517. Additionally or alternatively, the data visualization module may include an interface allowing a user to define a custom data feature. For example, the user may specify an arithmetic combination of two or more existing data features and/or apply a filtering function or other statistical function.
In the example depicted in
Based on the visualizations of steps 432-454, the user may opt to save the data feature or features defined by selection 418. Saved data features may be passed to other modules such as a feature ranking module, or machine learning module 218, as described above.
Step 416 further includes receiving a selection 428 of maintenance event types, and selection 430 of associated time periods. Along with aircraft constraint 426, selection 428 may define a subset of maintenance events to be displayed, from the maintenance data of the historical dataset. Selection 428 is received via an event selection menu 522. In the example of
The event types available for selection may depend on what events are recorded in the maintenance logs of the historical dataset, and how the events are logged. For example, the available types may include unplanned removal, scheduled removal, diagnostic test, system alert, and routine maintenance. In some examples, a failure mode categorization module or other module of a design system may further divide the recorded maintenance events according to analysis of the maintenance data. Such division may include assignment to additional event types. For example, unplanned removal events may be divided into electrical failure, thermal failure, mechanical failure, and unknown failure.
Selection 430 of the event associated time periods is received via a healthy threshold selection box 524 and a degraded threshold selection box 526. Each threshold time defines a time range prior to each event. In other words, the data is divided into multiple time periods according to the selected time relationship. More specifically, in the depicted example the number from box 524 defines a period of days before each event, prior to which operational data is assumed to reflect healthy operation. That is, a healthy period from the most recently preceding event to the selected number of days prior to the respective event is defined. The number from box 526 defines a period of days before an event during which operational data is assumed to reflect degraded components or unhealthy operation. That is, a degraded period of the selected number of days up to the event is defined. Any time not within either period for any selected maintenance event is designated as having an uncertain status.
The depicted time periods are labeled for use with unplanned removal events, to assign healthy and degraded labels based on an assumption of component degradation leading to the unplanned removal. However, the user may use boxes 524 and 526 to define any informative time periods prior to a selected event type. For example, a user investigating effects of routine maintenance on aircraft performance might designate time periods corresponding to theorized optimal maintenance intervals to visualize operational data from flights outside such intervals.
Step 432 of method 400 includes displaying values for each phase-dependent aggregate data feature in a trend plot. That is, values are calculated from operational data and displayed as points of a plot or graph. A calculated value is obtained by applying the selected aggregating statistical function to data from each selected sensor parameter, for the selected systems and the selected operational phase.
In the example depicted in
Each point 540 of trend plot 530 corresponds to a flight made by aircraft Tail_1. The point is plotted at the date of the flight, and the value of the point is the average calculated from all filtered values of brake temperature readings by a temperature sensor at a p7 wheel position, during the landing phase of the flight. The filtered value is calculated from the brake temperature readings by averaging readings over a five-flight moving window.
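The sketch below illustrates how the plotted quantity described above could be computed: per-flight landing-phase averages of brake temperature readings, smoothed with a five-flight moving window. The column names (tail_number, phase, flight_date, brake_temperature) and the tail designation are assumptions for illustration.

```python
# Illustrative sketch: per-flight landing-phase brake temperature averages,
# filtered with a five-flight moving window before trending.
import pandas as pd

def landing_brake_temp_trend(samples: pd.DataFrame, tail: str = "Tail_1") -> pd.Series:
    landing = samples[(samples["tail_number"] == tail) & (samples["phase"] == "landing")]
    per_flight = landing.groupby("flight_date")["brake_temperature"].mean().sort_index()
    # The moving window reduces flight-to-flight noise in the displayed trend.
    return per_flight.rolling(window=5, min_periods=1).mean()
```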
In general, for every displayed trend plot, each scatterplot point indicates a value calculated from a series of sensor readings recorded on a flight. More specifically, the point indicates a value calculated from sensor readings recorded during one phase of a flight. Such phase-dependent aggregate display of sensor data allows much clearer display of high-level trends over the course of months and years, reducing the noise associated with variation within individual flights.
By default, each data feature is displayed on a separate trend plot. In examples where a user selects multiple data features, additional trend plots may be displayed below trend plot 530 in GUI 510. Each trend plot may be titled with the name of the plotted data feature, as shown for trend plot 530 in
In the present example, overlap is allowed for data features sharing all constraints apart from aircraft and/or position. An overlap checkbox 542 is therefore associated with each of tail number selection box 518 and position selection box 516. In some examples, overlap may be permitted for any desired data features, and an overlap checkbox may also be associated with each of sensor parameter selection box 512, function selection box 514 and phase selection menu 520.
An example of a trend plot 544 with overlapped aircraft is depicted in
Maintenance events are also displayed in each trend plot. At step 434, the method includes displaying maintenance events of the selected type associated with the respective aircraft. The module may display maintenance data by overlaying the time-labeled maintenance data on the same timeline as the displayed operational data. For each trend plot displayed in step 432, maintenance events of the types received in selection 428 associated with the aircraft selected in the definition of the respective data feature are displayed. More specifically, a labeled vertical line or event line is plotted at the date of the event.
In the example depicted in
GUI 510 provides additional, detailed information for individual maintenance events, as requested by a user. For example, hovering a cursor over maintenance event lines on the trend plot may trigger a popup tooltip, contextual menu, or highlight label with information such as maintenance log text or message text. In
Step 436 includes visually indicating a division into the selected time periods, for each displayed maintenance event. A division of each trend plot into the time periods of selection 430, as defined by the threshold values of selection boxes 524 and 526, is indicated. In some examples, the time periods may be represented as color-coded regions of the trend plot. In the examples depicted in
Such display of maintenance and operational data on the same timeline is of substantial benefit in discovering trends in operational data that are correlated with maintenance events and therefore useful in prediction of such events. For instance, in the example of
The data visualization module further supports display of other maintenance data, including maintenance messages from the aircraft onboard computer, and fault messages from the Engine Indicating and Crew Alerting System (EICAS). Such messages are indicated by a box 554 and an oval 556 on the trend plot, as depicted on trend plot 548 in
After step 436, the next step in method 400 may be triggered by the user, in GUI 510. The user may return to the selection tools to alter the selections, and the method may return to step 416. The user may opt to evaluate seasonality of the selected data features, using seasonal bias analysis checkbox 558 or seasonal bias removal checkbox 559, in which case the method may proceed with step 438. The user may opt to drill down to flight level data by selecting button 560, in which case the method may proceed with step 446. The user may opt to generate additional plots by selecting button 562 or 564, in which case method 400 may proceed with step 450 or 452, respectively. The user may iterate any or all of these optional branches of method 400 as desired, by interaction with the relevant elements of GUI 510.
Step 438 includes calculating a best-fit periodic curve. As noted above, step 438 may be performed as an analysis for seasonal bias or variation present in the data, and may be triggered by checkbox 558 or 559 in GUI 510. The best-fit calculation may be performed for each selected data feature, within the selected date domain. More specifically, the data visualization module performs a sinusoidal regression for each data feature time series, and temporarily stores the fit results.
At decision 440, the module evaluates the parameters of the stored fit results, comparing the parameters to pre-determined thresholds for seasonal bias. For example, a period of a calculated best-fit curve may be compared to an annual period of a year or 365 days. If the period is within a pre-determined threshold, the data qualifies as having a seasonal bias. For example, if the calculated period is within 40 days of 365 days, the data is considered to have seasonal bias. In some examples, the pre-determined thresholds may be customizable by the user.
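The sketch below illustrates one way the sinusoidal regression of step 438 and the period check of decision 440 could be implemented, using scipy's curve_fit; the initial guess, the 40-day tolerance from the example above, and the parameterization of the sinusoid are illustrative assumptions.

```python
# Illustrative sketch: fit a sinusoid to a data feature time series and flag
# seasonal bias when the fitted period is within a tolerance of one year.
import numpy as np
from scipy.optimize import curve_fit

def seasonal_bias(days: np.ndarray, values: np.ndarray, tolerance_days: float = 40.0):
    def sinusoid(t, amplitude, period, phase, offset):
        return amplitude * np.sin(2.0 * np.pi * t / period + phase) + offset

    p0 = [values.std(), 365.0, 0.0, values.mean()]   # start the fit near an annual cycle
    params, _ = curve_fit(sinusoid, days, values, p0=p0, maxfev=10000)
    fitted = sinusoid(days, *params)
    has_bias = abs(abs(params[1]) - 365.0) <= tolerance_days
    # For bias removal (step 442), `fitted` may be subtracted from `values`.
    return has_bias, fitted, params
```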
If the result of the evaluation at 440 is that no seasonal bias is present, method 400 may return to step 432. Trend plots of the selected data features may be displayed without alteration. In some examples, a notification may be displayed to the user indicating the result of the seasonal bias analysis. If one or more of the data features does exhibit seasonal bias, the method proceeds with step 442 or step 444 according to the GUI checkbox selected by the user. Each step is performed for those data features exhibiting seasonal bias. Any data features with no seasonal bias may be displayed without alteration.
For a selection of seasonal bias analysis checkbox 558, step 444 includes displaying each calculated curve on the respective trend plot. An illustrative seasonal curve 568 is displayed in trend plot 548 of
Seasonal bias is common in recorded aircraft sensor parameters such as temperatures, pressures, etc. measured during lower altitude phases of flight. Periodicity of a data feature due to seasonal bias may result in generation of false positive alerts by a predictive algorithm. However, a seasonally biased data feature may still have predictive value, particularly if the bias can be removed.
For a selection of seasonal bias removal checkbox 559, step 442 includes modifying each data feature to compensate for the seasonal variation. The data visualization module may use the stored fit results to alter the data feature values. More specifically, the sinusoidal regression curve values may be subtracted from the data feature values, and an altered data feature stored. Method 400 returns to step 432, and the altered data features are used in place of the original data features in the displayed trend plots. In some examples, the trend plots of altered data features may be labeled as modified to compensate for calculated seasonal bias, or a notification may be displayed to the user indicating which data features were altered in the seasonal bias analysis.
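Continuing the sketch above, one possible realization of step 442 subtracts the stored sinusoidal fit from the feature values; whether the constant offset is retained or removed is a design choice for this example and is not specified by the disclosure.

```python
# Illustrative sketch of step 442; assumes params from the fit_seasonal_curve sketch above.
import numpy as np

def remove_seasonal_bias(days, values, params, keep_offset=True):
    """Return an altered data feature with the fitted seasonal curve subtracted.

    params      -- (amplitude, period, phase, offset) from the stored fit results
    keep_offset -- if True, re-add the fitted offset so the feature keeps its level
    """
    amplitude, period, phase, offset = params
    fitted_curve = amplitude * np.sin(2.0 * np.pi * days / period + phase) + offset
    altered = values - fitted_curve
    if keep_offset:
        altered = altered + offset
    return altered
```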
The data visualization module is configured to visualize aircraft flight sensor data at multiple levels. GUI 510 allows rapid drill-down from the long-term trending analysis of aggregate data parameters over thousands of historical flights to exploration of raw sensor data in individual flights. Step 446 includes receiving a selection of a flight. In the present example, the user selects button 560 to activate flight visualization and then clicks a plot point 540 in trend plot 530 to select an individual flight.
Step 448 includes displaying readings from the flight for each selected sensor, in a separate plot. That is, flight data is displayed on a separate timeline from the trend plots. In the present example, once the user has selected a flight, GUI 510 opens a new window as depicted in
Initially, flight plot 570 may graph data recorded by the sensors of constraint 420 in selection 418 of step 416. The user may subsequently add and/or subtract sensors as desired using sensor selection box 572. In some examples, flight phases may be visually indicated on flight plot 570. For example, the graph lines may be color-coded according to flight phase, or vertical lines dividing the phase time ranges may be displayed.
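As a hedged illustration of the second option (vertical dividers between phase time ranges), the following Matplotlib sketch assumes the flight phases have already been segmented into start times and names elsewhere; it is not the module's actual plotting code.

```python
# Illustrative sketch only; phase segmentation is assumed to have been done elsewhere.
import matplotlib.pyplot as plt

def plot_flight_with_phases(times, readings, phase_starts, phase_names, sensor_label):
    """Plot one sensor's readings over a flight with vertical phase dividers."""
    fig, ax = plt.subplots()
    ax.plot(times, readings, label=sensor_label)
    for start, name in zip(phase_starts, phase_names):
        ax.axvline(start, linestyle="--", linewidth=0.8)
        ax.annotate(name, xy=(start, ax.get_ylim()[1]),
                    rotation=90, va="top", ha="right", fontsize=8)
    ax.set_xlabel("time in flight")
    ax.legend()
    return fig
```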
More detailed data is also available to the user in record table 574 and a contextual highlight label 582. As the user hovers a cursor over flight plot 570, points on each graph line 580 at the corresponding time are highlighted and label 582 is displayed with values for each highlighted point. Only a portion of record table 574 is depicted in
In addition to long-term trending and individual flight level data, the user may opt to view comparative plots for one or more aircraft, or one or more groups of flights. Four illustrative plots are depicted in
Step 450 includes receiving a selection of one or more aircraft. In the present example, the user may make the selection using tail number selection box 518. The user may then trigger step 454 with visualization button 562, as shown in
Selection of flight groups may be particularly helpful for comparing data from different time regions as defined in selection 430 and displayed in step 436. For example, a first flight group may be selected from a time region labeled as healthy, and a second flight group may be selected from a time region labeled as degraded. Comparative plots may then allow the user to identify statistical differences between healthy and degraded sensor data. If there is a substantial difference between the selected groups, then the data feature may be a good candidate for a predictive alerting algorithm.
Step 454 includes displaying one or more of a scatter plot with a linear regression, a density plot, a box plot, and a heat map for the selected data features. The data features displayed in the plots are those selected at 418 of step 416. The plots to be displayed may be pre-determined or may be selectable by the user. In the present example, all four plot types are displayed automatically. The same plots are displayed either for the aircraft selected in step 450 or for the groups of flights selected in step 452. The illustrative plots of
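By way of illustration, the four plot types could be produced with standard plotting libraries as sketched below; the DataFrame layout (numeric feature columns plus a "group" label identifying the selected aircraft or flight groups) is an assumption for this example, not a requirement of the disclosure.

```python
# Illustrative sketch only; assumes a pandas DataFrame with numeric feature columns
# and a "group" column identifying the selected aircraft or flight groups.
import matplotlib.pyplot as plt
import seaborn as sns

def comparison_plots(df, feature_x, feature_y, group_col="group"):
    """Draw a regression scatter plot, density plot, box plot, and heat map."""
    fig, axes = plt.subplots(2, 2, figsize=(10, 8))

    # Scatter plot with a linear regression line per group.
    for name, sub in df.groupby(group_col):
        sns.regplot(x=feature_x, y=feature_y, data=sub, ax=axes[0, 0], label=str(name))
    axes[0, 0].legend()

    # Density plot of one feature, split by group.
    sns.kdeplot(data=df, x=feature_x, hue=group_col, ax=axes[0, 1])

    # Box plot of one feature, split by group.
    sns.boxplot(data=df, x=group_col, y=feature_x, ax=axes[1, 0])

    # Heat map of pairwise correlations among the numeric feature columns.
    sns.heatmap(df.drop(columns=[group_col]).corr(), annot=True, ax=axes[1, 1])

    fig.tight_layout()
    return fig
```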
A plot key 592 includes a calculated p-value for each best-fit line and a correlation score for each group. The correlation score may help the user to evaluate the independence of data features and identify anomalous data. For instance, if two variables are highly correlated, one may be omitted without significant loss of information, thereby avoiding issues with predictive models that do not perform well with redundant data features. As another example, anomalous data may appear as regions of non-correlation, which may be indicative of a predictive signature.
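For simple linear regression, the p-value of the fitted slope coincides with the p-value of the Pearson correlation, so the plot-key statistics could be computed per group as sketched below (the data layout is assumed as in the previous sketch).

```python
# Illustrative sketch only; continues the DataFrame layout assumed above.
from scipy.stats import pearsonr

def plot_key_statistics(df, feature_x, feature_y, group_col="group"):
    """Return {group: (correlation score, p-value)} for the two selected features."""
    statistics = {}
    for name, sub in df.groupby(group_col):
        correlation, p_value = pearsonr(sub[feature_x], sub[feature_y])
        statistics[name] = (correlation, p_value)
    return statistics
```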
Such comparative analyses as facilitated by plots 584, 585, 591, and 593 may allow the user to determine which data features provide independent information useful in developing predictive alerting algorithms. The data visualization module may further display any charts, plots, heat maps, and/or graphs useful for identification of trends or patterns in data on either a long-term or flight level timescale.
From step 454 the user may return to the selection tools of GUI 510, as depicted in
As shown in
In this illustrative example, data processing system 600 includes a system bus 602 (also referred to as communications framework). System bus 602 may provide communications between a processor unit 604 (also referred to as a processor or processors), a memory 606, a persistent storage 608, a communications unit 610, an input/output (I/O) unit 612, and/or a display 614.
Processor unit 604 serves to run instructions that may be loaded into memory 606. Processor unit 604 may comprise a number of processors, a multi-processor core, and/or a particular type of processor or processors (e.g., a central processing unit (CPU), graphics processing unit (GPU), etc.), depending on the particular implementation. Further, processor unit 604 may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip.
Memory 606 and persistent storage 608 are examples of storage devices 616. A storage device may include any suitable hardware capable of storing information (e.g., digital information), such as data, program code in functional form, and/or other suitable information, either on a temporary basis or a permanent basis. Storage devices 616 also may be referred to as computer-readable storage devices or computer-readable media.
Persistent storage 608 may contain one or more components or devices. For example, persistent storage 608 may include one or more devices such as a magnetic disk drive (also referred to as a hard disk drive or HDD), solid state disk (SSD), an optical disk drive such as a compact disk ROM device (CD-ROM), flash memory card, memory stick, and/or the like, or any combination of these. One or more of these devices may be removable and/or portable, e.g., a removable hard drive.
Input/output (I/O) unit 612 allows for input and output of data with other devices that may be connected to data processing system 600 (i.e., input devices and output devices). For example, an input device may include one or more pointing and/or information-input devices such as a keyboard, a mouse, touch screen, microphone, digital camera, and/or the like. These and other input devices may connect to processor unit 604 through system bus 602 via interface port(s) such as a serial port and/or a universal serial bus (USB).
Output devices may use some of the same types of ports, and in some cases the same actual ports, as the input device(s). For example, a USB port may be used to provide input to data processing system 600 and to output information from data processing system 600 to an output device. Some output devices (e.g., monitors, speakers, and printers, among others) may require special adapters. Display 614 may include any suitable human-machine interface or other mechanism configured to display information to a user, e.g., a CRT, LED, or LCD monitor or screen, etc.
Communications unit 610 refers to any suitable hardware and/or software employed to provide for communications with other data processing systems or devices. While communications unit 610 is shown inside data processing system 600, it may in some examples be at least partially external to data processing system 600. Communications unit 610 may include internal and external technologies, e.g., modems, ISDN adapters, and/or wired and wireless Ethernet cards, hubs, routers, etc. Data processing system 600 may operate in a networked environment, using logical connections to one or more remote computers.
Instructions for the operating system, applications, and/or programs may be located in storage devices 616, which are in communication with processor unit 604 through system bus 602. In these illustrative examples, the instructions are in a functional form in persistent storage 608. These instructions may be loaded into memory 606 for execution by processor unit 604. Processes of one or more examples of the present disclosure may be performed by processor unit 604 using computer-implemented instructions, which may be located in a memory, such as memory 606.
These instructions are referred to as program instructions, program code, computer usable program code, or computer-readable program code executed by a processor in processor unit 604. The program code in the different examples may be embodied on different physical or computer-readable storage media, such as memory 606 or persistent storage 608. Program code 618 may be located in a functional form on computer-readable media 620 that is selectively removable and may be loaded onto or transferred to data processing system 600 for execution by processor unit 604. Program code 618 and computer-readable media 620 form computer program product 622 in these examples. In one example, computer-readable media 620 may comprise computer-readable storage media 624 or computer-readable signal media 626.
Computer-readable storage media 624 may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of persistent storage 608 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 608. Computer-readable storage media 624 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to data processing system 600. In some instances, computer-readable storage media 624 may not be removable from data processing system 600.
In these examples, computer-readable storage media 624 is a non-transitory, physical or tangible storage device used to store program code 618 rather than a medium that propagates or transmits program code 618. Computer-readable storage media 624 is also referred to as a computer-readable tangible storage device or a computer-readable physical storage device. In other words, computer-readable storage media 624 is media that can be touched by a person.
Alternatively, program code 618 may be transferred to data processing system 600, e.g., remotely over a network, using computer-readable signal media 626. Computer-readable signal media 626 may be, for example, a propagated data signal containing program code 618. For example, computer-readable signal media 626 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples.
In some illustrative examples, program code 618 may be downloaded over a network to persistent storage 608 from another device or data processing system through computer-readable signal media 626 for use within data processing system 600. For instance, program code stored in a non-transitory computer-readable storage medium in a server data processing system may be downloaded over a network from the server to data processing system 600. The computer providing program code 618 may be a server computer, a client computer, or some other device capable of storing and transmitting program code 618.
The different components illustrated for data processing system 600 are not meant to provide architectural limitations to the manner in which different examples may be implemented. One or more examples of the present disclosure may be implemented in a data processing system that includes fewer components or includes components in addition to and/or in place of those illustrated for data processing system 600. Other components shown in
This section describes additional aspects and features of predictive maintenance model design systems, computer implemented methods of predictive maintenance model design, and computer program products for generating predictive maintenance models, presented without limitation as a series of paragraphs, some or all of which may be alphanumerically designated for clarity and efficiency. Each of these paragraphs can be combined with one or more other paragraphs, and/or with disclosure from elsewhere in this application, including the materials incorporated by reference in the Cross-References, in any suitable manner. Some of the paragraphs below expressly refer to and further limit other paragraphs, providing without limitation examples of some of the suitable combinations.
A0. A data processing system for generating predictive maintenance models, comprising:
one or more processors,
a memory including one or more digital storage devices, and
a plurality of instructions stored in the memory and executable by the one or more processors to:
receive a historical dataset relating to each system of a plurality of systems, the historical dataset including time-labeled maintenance and operational data,
receive a first selection of a first operational data feature and a first system,
display operational data associated with the first operational data feature and the first system on a timeline in a graphical user interface,
display maintenance data associated with the first system on the timeline,
receive a second selection of a second operational data feature, and
generate a predictive maintenance model, using the second operational data feature according to a machine learning method.
A1. The data processing system of A0, wherein the timeline is a time axis of a scatterplot, and the operational data associated with the first operational data feature and the first system are displayed as points of the scatterplot.
A2. The data processing system of A1, wherein the plurality of systems are a fleet of aircraft, and each point of the scatterplot indicates a value calculated from a series of sensor readings recorded on a flight.
A3. The data processing system of any of A0-A2, wherein the plurality of systems have a shared plurality of operational phases, and the plurality of instructions are further executable by the one or more processors to:
divide the operational data into a plurality of phase-dependent data subsets corresponding to the phases of the shared plurality of operational phases.
A4. The data processing system of A3, wherein the first selection includes an operational phase, and the displayed operational data is in a phase-dependent data subset corresponding to the operational phase.
A5. The data processing system of A3 or A4, wherein the plurality of systems are a fleet of aircraft, and the shared plurality of operational phases are phases of flight.
A6. The data processing system of A5, wherein the shared plurality of operational phases include one or more of taxi-out, takeoff, climb, cruise, descent, landing, and taxi-in.
A7. The data processing system of A5 or A6, wherein the operational data of the historical dataset includes a plurality of series of sensor readings, each series of sensor readings having been recorded during a flight of an aircraft of the fleet of aircraft, and the plurality of instructions are further executable by the one or more processors to:
receive a selection of a flight, and
display the series of sensor readings recorded during the selected flight, on a separate timeline.
A8. The data processing system of any of A3-A7, wherein the first operational data feature is an aggregate data feature, calculated according to an aggregating statistical function on one of the phase-dependent data subsets of the operational data.
A9. The data processing system of A8, wherein displaying the operational data associated with the first operational data feature includes displaying a calculated value of the aggregating statistical function for each of a plurality of series of sensor readings.
A10. The data processing system of A8 or A9, wherein receiving the first selection of a data feature includes receiving a selection of a sensor data parameter, a position, and the aggregating statistical function, and the aggregate data feature is calculated from operational data associated with the sensor data parameter and the position according to the selected aggregating statistical function.
A11. The data processing system of any of A3-A10, wherein the first selection includes a plurality of aggregate operational data features calculated according to one or more aggregating statistical functions, and the plurality of instructions are further executable by the one or more processors to:
receive a selection of an operational phase, and
calculate each aggregate data feature for the selected operational phase,
wherein displaying the operational data associated with the first operational data features includes displaying calculated values of the aggregate data features.
A12. The data processing system of any of A0-A11, wherein the first operational data feature is an aggregate data feature, calculated according to an aggregating statistical function.
A13. The data processing system of any of A0-A12, wherein the plurality of instructions are further executable by the one or more processors to:
analyze seasonal variation in the operational data associated with the first operational data feature and the first system,
calculate a best-fit seasonal bias curve, and
display the calculated curve on the timeline of the graphical user interface.
A14. The data processing system of A13, wherein the plurality of instructions are further executable by the one or more processors to:
modify the operational data associated with the first operational data feature and the first system to compensate for the analyzed seasonal variation, prior to displaying the operational data on the timeline of the graphical user interface.
A15. The data processing system of any of A0-A14, wherein the plurality of instructions are further executable by the one or more processors to:
calculate a best-fit periodic curve for the operational data associated with the first operational data feature and the first system,
compare a period of the calculated curve to a year, and
when the period is within a pre-determined threshold from a year, display the calculated curve on the timeline of the graphical user interface.
A16. The data processing system of A15, wherein the plurality of instructions are further executable by the one or more processors to:
modify the operational data associated with the first operational data feature and the first system to compensate for the calculated curve, prior to displaying the operational data on the timeline of the graphical user interface.
A17. The data processing system of any of A0-A16, wherein the first selection includes a plurality of operational data features, and the plurality of instructions are further executable by the one or more processors to:
display operational data associated with each of the plurality of operational data features on the timeline, the operational data associated with each operational data feature of the plurality of operational data features being visually distinct from operational data associated with the other operational data features of the plurality of operational data features.
A18. The data processing system of A17, wherein the plurality of instructions are further executable by the one or more processors to:
evaluate correlation between each pair of the plurality of operational data features, and
display a quantitative result of each evaluation in the graphical user interface.
A19. The data processing system of any of A0-A18, wherein the first selection includes a plurality of systems, and the plurality of instructions are further executable by the one or more processors to:
display operational data associated with each system of the plurality of systems on the timeline, the operational data associated with each system of the plurality of systems being visually distinct from operational data associated with the other systems of the plurality of systems.
A20. The data processing system of any of A0-A19, wherein displaying maintenance data associated with the first system includes displaying a maintenance event for the first system, and the plurality of instructions are further executable by the one or more processors to:
visually indicate a division of the displayed operational data into multiple time periods according to a selected time relationship to the displayed maintenance event.
A21. The data processing system of A20, wherein the multiple time periods include a first period of time more than a first threshold time prior to the displayed maintenance event, and a second period of time less than a second threshold time prior to the displayed maintenance event, and the plurality of instructions are further executable by the one or more processors to:
receive a selection of the first and second threshold times.
A22. The data processing system of any of A0-A21, wherein the plurality of instructions are further executable by the one or more processors to:
receive a selection of a pair of operational data features, systems, or pluralities of series of sensor readings, and
display a comparison of the selected pair in one or more of a density plot, a heat map, a box plot, and a scatter plot including a linear regression.
A23. The data processing system of any of A0-A22, wherein the plurality of instructions are further executable by the one or more processors to:
receive a selection of two groups of series of sensor readings, and
display values of the first data feature in one or more of a density plot, a box plot, and a scatter plot including a linear regression.
A24. The data processing system of A23, wherein the first selection includes a plurality of operational data features, and the plurality of instructions are further executable by the one or more processors to:
evaluate correlation between each pair of the plurality of operational data features for each group, and
display a quantitative result of each evaluation.
A25. The data processing system of A23 or A24, wherein the first selection includes a plurality of operational data features, and the plurality of instructions are further executable by the one or more processors to:
visually indicate a division of the displayed operational data into multiple time periods according to a selected time relationship to a displayed maintenance event, the multiple time periods including a first period of time more than a first threshold time prior to the displayed maintenance event, and a second period of time less than a second threshold time prior to the displayed maintenance event, and
wherein a first group of the two groups of series of sensor readings is selected from the first time period, and a second group of the two groups of series of sensor readings is selected from the second time period.
B0. A data processing system for generating predictive maintenance models, comprising:
one or more processors;
a memory including one or more digital storage devices; and
a plurality of instructions stored in the memory and executable by the one or more processors to:
receive a historical dataset relating to each system of a plurality of systems, the historical dataset including time-labeled maintenance and operational data, and the plurality of systems having a shared plurality of operational phases,
divide the operational data into a plurality of subsets corresponding to the shared plurality of operational phases,
receive a first selection of operational data features, systems of the plurality of systems, and an operational phase of the shared plurality of operational phases,
display the operational data subset corresponding to the selected operational phase in a graphical user interface,
receive a second selection of operational data features from the user, and
generate a predictive maintenance model, using the second selection of operational data features according to a machine learning method.
C0. A computer implemented method of generating a predictive maintenance model, comprising:
receiving a historical dataset relating to each system of a plurality of systems, the historical dataset including time-labeled maintenance and operational data,
receiving a first selection of a first operational data feature and a first system,
displaying operational data associated with the first operational data feature and the first system on a timeline in a graphical user interface,
displaying maintenance data associated with the first system on the timeline,
receiving a second selection of a second operational data feature, and
generating a predictive maintenance model, using the second operational data feature according to a machine learning method.
D0. A computer program product for generating predictive maintenance models, the computer program product comprising:
a non-transitory computer-readable storage medium having computer-readable program code embodied in the storage medium, the computer-readable program code configured to cause a data processing system to generate a predictive maintenance model, the computer-readable program code comprising:
at least one instruction to receive a historical dataset relating to each system of a plurality of systems, the historical dataset including time-labeled maintenance and operational data,
at least one instruction to receive a first selection of a first operational data feature and a first system,
at least one instruction to display operational data associated with the first operational data feature and the first system on a timeline in a graphical user interface,
at least one instruction to display maintenance data associated with the first system on the timeline,
at least one instruction to receive a second selection of a second operational data feature, and
at least one instruction to generate a predictive maintenance model, using the second operational data feature according to a machine learning method.
E0. A data processing system for analyzing data relating to aircraft operation and maintenance, comprising:
a processing system;
a memory including a digital storage device; and
a plurality of instructions stored in the memory and executable by the processing system to:
receive a historical dataset relating to a plurality of aircraft, the historical dataset including time-labeled maintenance and operational data,
receive a first selection of operational data features and a selection of aircraft from a user,
display the operational data for the first selection of operational data features and the selected aircraft on a timeline in a graphical user interface, and
display the maintenance data for the selected aircraft on the timeline.
E1. The data processing system of E0, wherein the plurality of instructions are further executable by the processing system to:
receive a second selection of operational data features from the user, and
generate a predictive maintenance model, using the second selection of operational data features according to a machine learning method.
E2. The data processing system of E0 or E1, wherein the operational data include a series of sensor readings from a first piece of equipment for each aircraft, and the plurality of instructions are further executable by the processing system to:
divide each series of sensor readings into a plurality of time periods corresponding to a plurality of operational phases of the piece of equipment.
E3. The data processing system of any of E0-E2, wherein each of the plurality of aircraft has a shared plurality of operational phases, and the plurality of instructions are further executable by the processing system to:
divide the operational data into a plurality of subsets corresponding to the shared plurality of operational phases.
E4. The data processing system of E3, wherein the shared plurality of operational phases are phases of flight.
E5. The data processing system of E4, wherein the shared plurality of operational phases include taxi-out, takeoff, climb, cruise, descent, landing, and taxi-in.
E6. The data processing system of E4 or E5, wherein the operational data include a series of sensor readings from an aircraft of the plurality of aircraft, the series of sensor readings corresponding to a flight, and the plurality of instructions are further executable by the processing system to:
display the series of sensor readings separate from the timeline, in response to an interaction with the graphical user interface.
E7. The data processing system of any of E3-E6, wherein the plurality of instructions are further executable by the processing system to:
receive a selection of operational phases, and
display the operational data subsets corresponding to the selected operational phases on the timeline in the graphical user interface.
E8. The data processing system of E7, wherein the displayed operational data subsets are color-coded according to the corresponding operational phase.
E9. The data processing system of any of E0-E8, wherein displaying the operational data includes color-coding the data according to time before or after a maintenance event.
E10. The data processing system of any of E0-E9, wherein displaying the maintenance data includes displaying additional information related to a maintenance event, in response to an interaction of the user with the displayed maintenance event.
E11. The data processing system of any of E0-E10, wherein the selected aircraft include a plurality of aircraft.
The different examples of the predictive maintenance model design system described herein provide several advantages over known solutions for using machine learning models to forecast maintenance. For example, illustrative examples described herein allow creation of models for effective maintenance forecasting without programming or data science expertise.
Additionally, and among other benefits, illustrative examples described herein facilitate identification of relevant data patterns and precursor signatures in a large and complex dataset.
Additionally, and among other benefits, illustrative examples described herein allow long term trending of phase-dependent operational data.
Additionally, and among other benefits, illustrative examples described herein allow identification and elimination of seasonal bias, which in turn improves effectiveness of generated predictive maintenance models by reducing false positive alerts.
Additionally, and among other benefits, illustrative examples described herein facilitate rapid visualization of a wide variety of data features for a variety of data subsets and in a variety of plot types.
No known system or device can perform these functions, particularly in a complex dataset based on recorded telemetry from equipment cycling through multiple operational phases. Thus, the illustrative examples described herein are particularly useful for aircraft maintenance forecasting. However, not all examples described herein provide the same advantages or the same degree of advantage.
The disclosure set forth above may encompass multiple distinct examples with independent utility. Although each of these has been disclosed in its preferred form(s), the specific examples thereof as disclosed and illustrated herein are not to be considered in a limiting sense, because numerous variations are possible. To the extent that section headings are used within this disclosure, such headings are for organizational purposes only. The subject matter of the disclosure includes all novel and nonobvious combinations and subcombinations of the various elements, features, functions, and/or properties disclosed herein. The following claims particularly point out certain combinations and subcombinations regarded as novel and nonobvious. Other combinations and subcombinations of features, functions, elements, and/or properties may be claimed in applications claiming priority from this or a related application. Such claims, whether broader, narrower, equal, or different in scope to the original claims, also are regarded as included within the subject matter of the present disclosure.
This application claims the benefit under 35 U.S.C. § 119(e) of the priority of U.S. Provisional Patent Application Ser. No. 63/055,289, filed Jul. 22, 2020, the entirety of which is hereby incorporated by reference for all purposes.