One technical field of the present disclosure relates to processing and visualization of structured sensor data and derived data. Another technical field relates to issue diagnosis and prediction for industrial systems. Yet another technical field relates to asset organization for industrial systems.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
Modern industrial systems, such as a factory, a production site, or a naval ship, are inherently complex systems. These industrial systems are typically made up of hundreds of interconnected subsystems. These systems are heavily instrumented to improve diagnostics as well as to detect emergent behaviors, which results in thousands of sensor values getting produced at any given time.
However, software applications used to manage these systems generally have limited interest in understanding system structure and do not utilize most of these sensor values in an integrated manner. For example, Enterprise Asset Management or Asset Performance Management applications are configured to represent structured components of a system for the purpose of managing their maintenance or for visualizing their performance but are not configured to interpret the sensor values at a system level. As a result, they do not provide a good understanding of the system's operational state at any given time. Some engineering design tools capture schematics, such as piping and instrumentation diagrams, which are meant for visualization rather than for analysis. This representation, while useful for observation and monitoring, cannot readily be used for analysis, especially as industrial complexity tends to overload diagrams that serve non-analytical purposes.
In addition, traditional analysis methods for diagnostics and prediction treat each analysis of a subsystem as a flat mathematical process, whereby system structure and, therefore, engineering design are often lost. As a result, complex systems cannot be correctly analyzed without requiring a large amount of manual effort to map analysis results to an understanding of the overall system's operation. This limitation hinders root cause analysis of complex systems as well as their optimal operational management.
Traditional methods, therefore, do not adequately support the analysis of real-time data produced by complex systems to understand causes of their recent or past behavior. Thus, it would be helpful to have an improved solution to processing and visualizing large volumes of data of complex systems for understanding causes of system behavior.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Embodiments are described herein in sections according to the following outline:
1.0. GENERAL OVERVIEW
2.0. DEFINITIONS
3.0. SYSTEM OVERVIEW
4.0. ASSET ORGANIZATION OVERVIEW
5.0. GRAPHICAL USER INTERFACE EXAMPLES
6.0. PROCEDURAL OVERVIEW
7.0. HARDWARE OVERVIEW
8.0. SOFTWARE OVERVIEW
9.0. OTHER ASPECTS OF DISCLOSURE
Techniques described herein model behavior of both discrete and composite systems. In discrete systems, behavior can be captured by independent models based on machine learning (ML). A compressor operating in isolation is an example of a “discrete” system because the behavior of the compressor can be understood by modeling only the compressor itself (i.e., without reference to any other systems in the plant). In composite systems, important behavior comes from the interaction of multiple discrete or composite subsystems such that understanding the overall composite system behavior requires multiple models describing interacting subsystems. It is from these interactions that the complete behavior is understood.
For example, in commercial space, a steel production plant is an example of a “composite” system because behavior of the overall plant can be understood only by modeling the interactions between the various subsystems (e.g., blast furnace, rolling mill, caster, pinch rollers, cooling table, motors, etc.). For another example, in governmental space, the U.S. Navy's Zumwalt class destroyer is an example of a “composite” system because behavior of the ship can be understood only by modeling the interactions between the various subsystems (e.g., turbine generators, switchgear, water pumping systems, power conversion and distribution modules, etc.).
An approach to modeling is to put all of a system's signals into a model and use that data to learn behaviors of the system. For small systems, this approach works well as the number of signals is limited (e.g., tens to a few hundreds). However, for complex systems, this approach does not work well as the number of signals from all of the subsystems can easily reach into thousands or more. Patterns found directly from such a large number of disparate signals may be too high-level or superficial without truly capturing problematic behavior that might be traced to components at different levels of the system. Therefore, in modeling complex systems, a different approach is needed—one which reduces the signal count used to find patterns but that still accounts for interactions between the subsystems which generate all of those signals.
Techniques described herein relate to model chaining. Model chaining provides users with enormous flexibility to define their systems in a way that best suits their needs to get the most benefit from models. In model chaining, a model chain may be generated. A model chain includes a plurality of models “chained” together. Output of a model may be used as the signal input to another model. In this way, lower-level models can be more sensitive to local behavior as they find patterns using just a few signals, and higher-level models (e.g., a model of models) then look for patterns in the output of the lower-level models.
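By way of example only, model chaining can be sketched as follows. All model names, signal names, and thresholds here are hypothetical stand-ins for illustration; real models would be trained ML models rather than fixed-threshold functions.

```python
# Minimal sketch of model chaining: each "model" maps a dict of input
# signals to one output signal, and a chain feeds lower-level outputs
# into higher-level models.

def make_threshold_model(signal_names, threshold):
    """Return a toy model: emits 1 (abnormal) if the mean of its
    input signals exceeds the threshold, else 0 (normal)."""
    def model(signals):
        values = [signals[name] for name in signal_names]
        return 1 if sum(values) / len(values) > threshold else 0
    return model

def run_chain(chain, raw_signals):
    """Evaluate models in dependency order; each output becomes a new
    signal usable by later (higher-level) models."""
    signals = dict(raw_signals)
    for output_name, model in chain:  # lower-level models listed first
        signals[output_name] = model(signals)
    return signals

# Two lower-level models each watch a small signal group; the top model
# watches only the lower-level outputs, reducing its input signal count.
chain = [
    ("comp1_out", make_threshold_model(["S1", "S2"], 50.0)),
    ("comp2_out", make_threshold_model(["S3", "S4"], 50.0)),
    ("plant_out", make_threshold_model(["comp1_out", "comp2_out"], 0.4)),
]
result = run_chain(chain, {"S1": 80.0, "S2": 90.0, "S3": 10.0, "S4": 20.0})
```

In this sketch, the top-level model never sees the raw sensor signals; it finds patterns only in the outputs of the lower-level models, mirroring the "model of models" arrangement described above.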
When a model chain finds or predicts abnormal behavior in the system, users are able to drill down to the specific signals which are responsible for the abnormal behavior by aligning and traversing multiple models across multiple Datastreams. Traversals enable the effective use of model chains for understanding complex systems.
Techniques described herein further relate to improving the learning and tracing of the reliability, emissions, quality, and performance of industrial systems. The techniques also enable building an output product hierarchy that captures a potential issue with the quality of the output product based on a quality issue detected at a certain step or steps in the assembly or processing procedure.
In one aspect, a computer-implemented method comprises receiving an indication of an error state of a specific asset of a plurality of assets that is arranged in a hierarchy of a plurality of levels. Each asset of the plurality of assets is associated with at least one component of an industrial system. The plurality of levels includes a top level, a bottom level, and one or more intermediary levels between the top level and the bottom level. Each of the plurality of assets is associated with a machine learning (ML) model, thus forming a corresponding hierarchy of ML models. A first ML model associated with a first asset of the plurality of assets that is at the bottom level is configured to receive one or more signals corresponding to one or more values of sensors attached to one or more components of the industrial system in real time relative to generation of the one or more values. A second ML model associated with a second asset of the plurality of assets that is at the bottom level or at the one or more intermediary levels is configured to receive one or more signals to predict a condition of the second asset as output of the second ML model, wherein the output of the second ML model is used as an input signal by at least a third ML model associated with a third asset of the plurality of assets that is higher in the hierarchy than the second asset. The method also includes performing a diagnosis of the error state by traversing the hierarchy of the plurality of levels from the top level. 
The traversing the hierarchy comprises determining a particular input signal of one or more input signals for a ML model associated with an asset at a current level of the hierarchy satisfies an event, following the particular input signal to a ML model associated with an asset at a level lower than the current level, thereby visiting the asset at the lower level, and repeating the determining and the following until an asset of the plurality of assets is identified as a potential source of the error state indicated for the system.
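The traversal described above can be sketched, by way of example only, as follows. The hierarchy, signal names, and the set of signals satisfying the event are hypothetical; in practice the event test would be evaluated against each model's actual inputs.

```python
# Hypothetical sketch of the top-down diagnosis traversal. `hierarchy`
# maps each asset's prediction signal to the input signals of its model;
# `abnormal` marks which signals currently satisfy the event.

def diagnose(hierarchy, abnormal, top_signal):
    """Follow abnormal input signals downward, level by level, until an
    asset with no abnormal model inputs is reached: the potential source."""
    current = top_signal
    path = [current]
    while current in hierarchy:              # current asset has a model
        inputs = hierarchy[current]
        culprits = [s for s in inputs if s in abnormal]
        if not culprits:
            break                            # source is the current asset
        current = culprits[0]                # follow one level down
        path.append(current)
    return path

hierarchy = {
    "Plant_M": ["EquipU1_M", "EquipU2_M"],
    "EquipU1_M": ["Comp1_M", "Comp2_M"],
    "Comp1_M": ["S1", "S2"],                 # bottom-level sensor signals
}
abnormal = {"Plant_M", "EquipU1_M", "Comp1_M", "S2"}
path = diagnose(hierarchy, abnormal, "Plant_M")
```

Starting from the top-level error state, the traversal visits one asset per level and terminates at the specific signal responsible, yielding the drill-down path from plant to sensor.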
Other embodiments, aspects, and features will become apparent from the remainder of the disclosure as a whole.
Throughout the discussion herein, several acronyms, shorthand notations, and terms are used to aid the understanding of certain concepts pertaining to the associated system. These acronyms, shorthand notations, and terms are solely intended for the purpose of providing an easy methodology of communicating the ideas expressed herein and are in no way meant to limit the scope of the present invention.
Sensors associated with industrial equipment or machines produce multiple signals forming time series data. Features can be identified from the time series data. Each feature can involve one or more signals (at the same time point) or one or more time points (for the same signal)—a time period can comprise any number of time points. Each feature corresponds to a relationship of the signal values across signals, time points, or both. Such relationships among signals, time series, features, and so on are further discussed in U.S. Pat. No. 10,552,762, titled “Machine Learning of Physical Conditions Based on Abstract Relations and Sparse Labels” and issued Feb. 4, 2020, for example.
For example, referring to
A feature is a description of time series data across multiple signals and across time. A condition can be characterized by patterns detected in multiple features. A feature vector is a vector of features (or feature values). An example of a condition of a printer is that it is about to stop printing. A pattern characteristic of the condition could be that features related to ink levels show decreasing values over time. Another pattern characteristic of the condition could be features related to a first wireless signal being weak (below a certain threshold) and features related to a second wired signal being undetectable (zero) at the same time. Knowing which of the signals contribute most to the condition of the printer given the features is helpful. In certain embodiments, a feature represents a pattern in values produced by one or more signals over a period of time that occurs in multiple pieces of time series data. A feature vector could then represent the occurrence of one or more patterns in values of a signal, the set of values of a signal that correspond to when one or more patterns occur, or the set of values corresponding to a pattern.
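The printer example above can be illustrated, by way of example only, with toy features; the signal values and thresholds below are invented for illustration.

```python
# Illustrative only: two toy features over two signals across time,
# matching the printer example (names and values are hypothetical).

ink_level = [90, 70, 45, 20]      # decreasing over time
wifi_rssi = [-40, -42, -80, -85]  # "weak" below -70

def is_decreasing(series):
    """Feature across time: every value is lower than the previous one."""
    return all(b < a for a, b in zip(series, series[1:]))

def weak_at_some_time(series, threshold):
    """Feature across a signal's values: any sample below the threshold."""
    return any(v < threshold for v in series)

# A feature vector characterizing the "about to stop printing" condition.
feature_vector = [is_decreasing(ink_level), weak_at_some_time(wifi_rssi, -70)]
```

Each feature summarizes a pattern across time or across signals; a condition is then characterized by the combination of such feature values.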
Table A below provides additional, extended definitions. A full definition of any term can only be gleaned by giving consideration to the full breadth of this patent.
All drawing figures, all of the description and claims in this disclosure, are intended to present, disclose and claim a technical system and technical methods comprising specially programmed computers, using a special-purpose distributed computer system design and instructions that are programmed to execute the functions that are described. These elements execute functions that have not been available before to provide a practical application of computing technology to address the difficulty in efficiently and intelligently analyzing and visualizing large volumes of time series data in complex systems for understanding causes of behavior. In this manner, the disclosure has many technical benefits.
In some embodiments, the networked computer system 100 comprises one or more client computers 104, one or more sensors 106, and a server computer 108, which are communicatively coupled directly or indirectly via network 102.
In the example of
The server computer 108 may comprise fewer or more functional or storage components. Each of the functional components can be implemented as software components, general or specific-purpose hardware components, firmware components, or any combination thereof. A storage component can be implemented using any of relational databases, object databases, flat file systems, or JSON stores. A storage component can be connected to the functional components locally or through the networks using programmatic calls, remote procedure call (RPC) facilities or a messaging bus. A component may or may not be self-contained. Depending upon implementation-specific or other considerations, the components may be centralized or distributed functionally or physically.
In an embodiment, the server computer 108 executes receiving instructions 110, chaining instructions 112, training instructions 114, inferencing instructions 116, generating instructions 118, analyzing instructions 120, and visualizing instructions 122, the functions of which are described herein. Other sets of instructions may be included to form a complete system such as an operating system, utility libraries, a presentation layer, database interface layer and so forth. In addition, the server computer 108 may be associated with one or more data repositories 130.
The receiving instructions 110 may cause the server computer 108 to receive, over the network 102, operational data (e.g., actual/raw data) for processing and/or storage in the data repository 130. In an embodiment, the operational data may be time series data generated by field sensors 106. Time series data may be numerical or categorical. Example numerical time series data may relate to temperature, pressure, or flow rate generated by a machine, device, or equipment. Example categorical time series data has a fixed set of values, such as different states of a machine, device, or equipment.
The chaining instructions 112 may cause the server computer 108 to select and connect machine learning (ML) models. The model chain may have a configuration that is hierarchical, sequential, or a hybrid of both. Each model in the model chain corresponds to a logical grouping of one or more assets, which are further discussed below. Each model receives and processes one or more input signals, and generates an estimated condition or signal patterns characterizing the condition as output. Output of a model may be routed as a signal feed for (e.g., input to) another model. In this manner, lower-level models may be more sensitive to local behavior of the system as they find patterns using just a few signals, while higher-level models find patterns in the patterns of the lower-level models. The model chain represents or reflects structures and process flows of a complex system (e.g., an industrial system).
Each model may be associated with machine learning approaches, including any one or more of: supervised learning (e.g., using gradient boosting trees, using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, a deep learning algorithm (e.g., neural networks, a restricted Boltzmann machine, a deep belief network method, a convolutional neural network method, a recurrent neural network method, stacked auto-encoder method, etc.), reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, etc.), an instance-based method (e.g., k-nearest neighbor, learning vector quantization, self-organizing map, etc.), a regularization method (e.g., ridge regression, least absolute shrinkage and selection operator, elastic net, etc.), a decision tree learning method (e.g., classification and regression tree, iterative dichotomiser 3, C4.5, chi-squared automatic interaction detection, decision stump, random forest, multivariate adaptive regression splines, gradient boosting machines, etc.), a Bayesian method (e.g., naïve Bayes, averaged one-dependence estimators, Bayesian belief network, etc.), a kernel method (e.g., a support vector machine, a radial basis function, a linear discriminant analysis, etc.), a clustering method (e.g., k-means clustering, expectation maximization, etc.), an associated rule learning algorithm (e.g., an Apriori algorithm, an Eclat algorithm, etc.), an artificial neural network model (e.g., a Perceptron method, a back-propagation method, a Hopfield network method, a self-organizing map method, a learning vector quantization method, etc.), a dimensionality reduction method (e.g., 
principal component analysis, partial least squares regression, Sammon mapping, multidimensional scaling, projection pursuit, etc.), an ensemble method (e.g., boosting, bootstrapped aggregation, AdaBoost, stacked generalization, gradient boosting machine method, random forest method, etc.), and/or any suitable machine learning approach.
The training instructions 114 may cause the server computer 108 to train each model using historical data, including past operational signals generated by field sensors, past prediction signals generated by models, and past actual conditions of assets. Each model may be retrained using newly available data. Each model may be individually trained. Alternatively or in addition, all models may be trained together.
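Bottom-up training can be sketched, by way of example only, as follows. The trivial mean-based "model" here is a hypothetical stand-in for any of the ML approaches listed below; names, values, and the deviation threshold are invented.

```python
# Sketch of bottom-up training: lower-level models are fitted first so
# that their outputs over the historical window can serve as training
# inputs for the next level.

def fit_anomaly_model(history):
    """'Train' by remembering the historical mean; the returned model
    flags values that deviate from that mean by more than 10."""
    mean = sum(history) / len(history)
    def predict(value):
        return 1 if abs(value - mean) > 10 else 0
    return predict

# Level 1: a component model trained on raw sensor history.
s1_history = [50, 52, 48, 50]
comp_model = fit_anomaly_model(s1_history)

# Level 2: the equipment model trains on the component model's outputs
# over the same historical window, i.e., training proceeds bottom up.
comp_outputs = [comp_model(v) for v in s1_history]
equip_model = fit_anomaly_model(comp_outputs)
```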
The inferencing instructions 116 may cause the server computer 108 to apply each trained model to use current (e.g., real-time) operational signals generated by the field sensors and/or current prediction signals generated by other trained models to predict current conditions (e.g., behavior, warnings, states, etc.) of associated assets.
The generating instructions 118 may cause the server computer 108 to generate signals encoding current conditions predicted by trained models. These prediction signals are categorical signals that convey current conditions with timestamps. The generating instructions 118 may also cause the server computer 108 to generate signals encoding signal patterns characterizing the current conditions. These prediction signals are continuous signals. Example models are described in U.S. Pat. No. 10,409,926, titled “Learning Expected Operational Behavior of Machines from Generic Definitions and Past Behavior” and issued Sep. 10, 2019.
The analyzing instructions 120 may cause the server computer 108 to generate performance/status reports. In an embodiment, at least the analyzing instructions 120 may form the basis of a computational performance model. A performance/status report may include an explanation score and contribution rank of each signal of input signals used by a trained model. The explanation score describes a contribution of each signal of input signals to a predicted condition of an associated asset. The contribution rank, based on the explanation score, ranks the signal among the other input signals in terms of contribution to the predicted condition. Signals higher in the rank are likely contributors to the condition of the associated asset. Example methods of determining explanation scores and contribution ranks are described in co-pending U.S. patent application Ser. No. 15/906,702, titled “System and Method for Explanation for Condition Predictions in Complex Systems” and filed Feb. 27, 2018.
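The relationship between explanation scores and contribution ranks can be sketched as follows. The scores below are invented, and the cited co-pending application describes the actual scoring method; this fragment only illustrates the ranking step.

```python
# Hypothetical sketch: given per-signal explanation scores for one
# predicted condition, rank signals from most to least contributing.

def contribution_ranks(explanation_scores):
    """Return signal names ordered by descending explanation score."""
    return sorted(explanation_scores, key=explanation_scores.get, reverse=True)

# Invented scores for three input signals of one trained model.
scores = {"S1": 0.05, "S2": 0.72, "S8": 0.23}
ranked = contribution_ranks(scores)
```

Signals at the front of the ranked list are the likely contributors to the predicted condition, which is what the drill-down traversal follows.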
The visualizing instructions 122 may cause the server computer 108 to receive a user request (API request), from a requesting client computer, to view processed data and/or signal data and, in response, cause the requesting client computer to display the processed data and/or signal data. Processed data may include performance/status reports and other information related to a model chain. Signal data may include past and current operational signals, and past and current prediction signals. For example, via an interactive graphical user interface (GUI), a user is able to investigate system errors and/or to visualize signals.
Example methods of visualizing signals are described in co-pending U.S. patent application Ser. No. 16/939,568, titled “Fluid and Resolution-Friendly View of Large Volumes of Time Series Data” and filed Jul. 27, 2020.
In an embodiment, the computer system 100 comprises components that are implemented at least partially by hardware at one or more computing devices, such as one or more hardware processors executing program instructions stored in one or more memories for performing the functions that are described herein. All functions described herein are intended to indicate operations that are performed using programming in a special-purpose computer or general-purpose computer, in various embodiments. A “computer” may be one or more physical computers, virtual computers, and/or computing devices. As an example, a computer may be one or more server computers, cloud-based computers, cloud-based cluster of computers, docker containers, virtual machine instances or virtual machine computing elements such as virtual processors, storage and memory, data centers, storage devices, desktop computers, laptop computers, mobile devices, and/or any other special-purpose computing devices. Any reference to “a computer” herein may mean one or more computers, unless expressly stated otherwise.
Computer executable instructions described herein may be in machine executable code in the instruction set of a central processing unit (CPU) and may have been compiled based upon source code written in JAVA, C, C++, OBJECTIVE-C, or any other human-readable programming language or environment, alone or in combination with scripts in JAVASCRIPT, other scripting languages and other programming source text. In another embodiment, the programmed instructions also may represent one or more files or projects of source code that are digitally stored in a mass storage device such as non-volatile RAM or disk storage, in the systems of
The data repository 130, coupled directly or indirectly with the server computer 108, may include a database (e.g., a relational database, object database, post-relational database), a file system, and/or any other suitable type of storage system. The data repository 130 may store operational data generated by field sensors, predicted data generated by one or more trained models, processed data, and configuration data.
One or more field sensors 106 may detect or measure one or more properties of a machine, device, or equipment as operational data during operation of the machine, device, or equipment. An example machine, device, or equipment is a windmill, a compressor, an articulated robot, an IoT device, or other machinery. Operational data can also comprise condition or state indicators of each physical asset, from which condition or state indicators of each logical asset can be determined. (“State,” “condition,” “state indicator,” and “condition indicator” can be used interchangeably to refer to a value that represents or describes the state or condition of an asset.) Operational data may be transmitted, via a computing device with a network communication interface, to the server computer 108 over the network 102, or directly provided to the server computer 108 via physical cables, for storage in the data repository 130 and for processing by trained models. Predicted data generated by the trained models may be stored in the data repository 130. In an embodiment, operational data (e.g., operational signals) and predicted data (e.g., prediction signals) may be stored in the data repository according to a particular data structure that allows the processed data to be served and/or read as quickly as possible. Example methods of storing signals are described in co-pending U.S. patent application Ser. No. 16/939,568, titled “Fluid and Resolution-Friendly View of Large Volumes of Time Series Data” and filed Jul. 27, 2020.
Processed data, such as performance/status reports, are also stored in the data repository 130. A performance/status report generally indicates how an asset performs over a period of time. A performance/status report can include a contribution score, for a signal, that indicates its contribution to an asset's condition at a certain point during the period of time, as determined by a trained model that takes that signal as input.
Configuration data associated with the trained models are also stored in the data repository 130. Configuration data include parameters, constraints, objectives, and settings of each trained or tuned model.
The data repository 130 may store other data, such as map data, that may be used by the server computer 108. Map data include geo-spatial maps where a condition indicator of an asset is mapped to the physical location of the asset that may be visualized with processed data.
The network 102 broadly represents a combination of one or more wireless or wired networks, such as local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), global interconnected internetworks, such as the public internet, or a combination thereof. Each such network may use or execute stored programs that implement internetworking protocols according to standards such as the Open Systems Interconnect (OSI) multi-layer networking model, including but not limited to Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), Internet Protocol (IP), Hypertext Transfer Protocol (HTTP), and so forth. All computers described herein may be configured to connect to the network 102 and the disclosure presumes that all elements of
The server computer 108 is accessible over network 102 by multiple requesting computing devices, such as the client computer 104. Any other number of client computers 104 may be registered with the server computer 108 at any given time. Thus, the elements in
A requesting computing device, such as the client computer 104, may comprise a desktop computer, laptop computer, tablet computer, smartphone, or any other type of computing device that allows access to the server computer 108. The client computer 104 may be used to request and to view or visualize processed data.
For example, the client computer 104 may send a user request to create a model of models and/or to view processed data to the server computer 108. A browser or a client application on the client computer 104 may receive response data for display in an interactive GUI that allows easy viewing operations, such as zoom, pan, and select gestures, as further described herein.
Industrial systems and processes may be represented as an organization of interconnected assets and, by extension, their respective models. Patterns are detected by a model using the features available to the model.
Model chaining allows users to use logical grouping of component models to build an organization of interconnected assets with their respective models. Such an organization of assets could be defined, modeled, monitored and managed at multiple levels of granularity. In an embodiment, the organization of assets may be viewed as an asset graph, in which each asset may be viewed as a node in the graph. An organization may be hierarchical, sequential, or a hybrid of both.
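The asset-graph view described above can be sketched, by way of example only, as follows; asset names are hypothetical and chosen to echo the examples discussed later.

```python
# Illustrative sketch: the organization of assets viewed as a graph,
# with each asset a node and each signal feed an edge.

asset_graph = {
    # node: the lower-level assets it receives signal feeds from
    "Plant X":  ["EquipU 1", "EquipU 2"],
    "EquipU 1": ["EquipS 1"],
    "EquipU 2": [],
    "EquipS 1": [],
}

def descendants(graph, node):
    """All assets below a node, at any level of granularity."""
    out = []
    for child in graph[node]:
        out.append(child)
        out.extend(descendants(graph, child))
    return out
```

Such a graph supports defining, modeling, monitoring, and managing the organization at multiple levels of granularity, since any node and its descendants form a self-contained sub-organization.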
4.1. Hierarchical Organization
For purposes of discussion, a simplified version of the 9-tier hierarchy is referred to herein. This simplified hierarchy starts at Level 4 (Plant), extends to Level 8 (Component), and includes Level 10 (an extension to the ISO/DIS 14224 taxonomy) that identifies operational signals originating from field sensors. In this simplified hierarchy, the components (at Level 8) have one or more operational signals.
Techniques described herein are not limited to the ISO/DIS 14224 taxonomy but rather are flexible enough to allow for a different taxonomy or hierarchy, or even a graph-structured organization of assets.
4.1.1. Building Models
Each asset is a logical asset that is represented by one or more signals (e.g., one or more sensor signals and/or one or more prediction signals). The logical relationship does not need to correspond to a physical relationship. A logical asset could correspond to a grouping of any physical assets (or other logical assets) or the conditions thereof without requiring any relationships among the physical assets in the group. For example, the “Comp 1” asset 302 is represented by five sensor signals (i.e., {S1, S2, S3, S6, S9}). For another example, the “EquipS 1” asset 306 is represented by two component prediction signals and one sensor signal (i.e., {Comp-1_M[1] 302s, Comp-2_M[2] 304s, S8}). For yet another example, the “EquipU 1” asset 310 is represented by three equipment subunit prediction signals (i.e., {EquipS-1_M[6] 306s, EquipS-2_M[7] 308s, EquipS-3_M[8] 310s}). In an embodiment, these logical assets may be defined using Signal Groups, as shown in Table B.
Each logical asset is associated with a respective model that is programmed to make an inference of conditions associated with the asset, as further described below. For example, the “Comp 1” asset 302 is associated with model M[1]. For another example, the “Plant X” asset 320 is associated with model M[13]. Each model receives and processes one or more signals, from a lower level, as input data and generates output data that includes conditions, predicted for the associated asset, that may be used by a model in an upper level. In building a model, a signal input to the model may be an operational signal generated by a sensor or an actual condition prediction signal (or indicator) of a lower-level asset. For example, all the input signals and corresponding output signals used for training purposes can be obtained by monitoring and recording actual conditions of each component or unit of the system over a period of time. For a logical asset that does not correspond to an actual physical component but is merely a logical grouping of physical components that are not fully physically connected, the condition could be derived according to specific rules. Patterns or other data characterizing a condition are needed to build a model, whether to classify the combination of input signals or to form part of the input data for the model; such patterns or other data could be derived from actual historical data for training purposes. The input data to a model includes those signals that represent the logical asset corresponding to the model.
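In the spirit of Table B, the signal-group definitions can be sketched, by way of example only, as follows. The group contents mirror the examples above; the prediction-signal names are shortened, hypothetical forms.

```python
# Minimal sketch of signal-group definitions: a logical asset is a named
# group of sensor signals and/or lower-level prediction signals.

signal_groups = {
    "Comp 1":   ["S1", "S2", "S3", "S6", "S9"],             # sensors only
    "EquipS 1": ["Comp-1_M", "Comp-2_M", "S8"],             # mixed inputs
    "EquipU 1": ["EquipS-1_M", "EquipS-2_M", "EquipS-3_M"], # predictions only
}

def model_inputs(asset_name):
    """The input signals of the model associated with a logical asset."""
    return signal_groups[asset_name]
```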
In an embodiment, models are built separately. In an embodiment, models are built using a bottom up approach in the sense that the output signals associated with lower-level components are input signals of higher-level components. Referring to the example hierarchical organization 300 of
Table B shows example mappings of signals to signal groups to logical assets. These mappings correspond to the example hierarchical organization 300 of assets illustrated in
In an embodiment, the “Model Used” field shown in Table B is for system-use only. This field ensures that the model for signal mapping is never lost. As a signal gets used in multiple models, additional model identifiers are appended to this field in a comma-separated list. Techniques described herein are further extensible to include models across different Datastreams.
In
It is noted that entries for sensor signals do not have values for the “Logical Asset Name” field in Table B.
Once an organization of models is created, the models are trained (and re-trained) using actual historical data. Training a model involves providing a mathematical algorithm with sufficient historical data to learn from. A model may be retrained with new data when, for example, there is a model drift, a decline in model performance or a new condition of interest appears.
4.1.2. Applying Models in Real Time
Once models are available and new data is received by the server computer 108, the models are applied on the new data to generate new predictions/outputs. It would be tedious and time-consuming to require the user to apply individual models on the new data manually. Using an understanding of the model hierarchy, as demonstrated in Table B, the server computer 108 easily automates the apply process using deep apply. In deep apply, the server computer 108 uses the Signal Groups and/or Models Used information, as necessary, to determine the structure of the asset organization and to apply the lower-level models on the new data to generate the new outputs that are required by the higher-level models.
When the models are applied, real-time signals are routed to each model where they are used; new output is generated (at the assessment rate of the model) and routed as a signal feed to the higher-level models in real time for pattern detection at each level, one level at a time. This roll-up continues all the way to the topmost level (e.g., the plant level) for real-time analysis. In this manner, signal patterns “bubble,” or propagate, up from the bottom. When an abnormal event is detected at the Component level, for instance, it may contribute to the overall health and performance of the higher-level asset(s).
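The deep-apply routing can be sketched in Python as follows. This is an assumption-laden illustration, not the server implementation: the bottom-up ordering of the model list, the any_error rule, and all function names are hypothetical.

```python
# Hypothetical deep-apply sketch: apply lower-level models first so their
# outputs are available as prediction signals for higher-level models.

def deep_apply(models, sensor_values):
    """models: list of (prediction_signal_name, input_signals, infer_fn)
    tuples, ordered bottom-up (lowest level first). Returns all signal
    values, including the newly generated prediction signals."""
    signals = dict(sensor_values)
    for name, inputs, infer in models:
        # Each input is either a sensor signal or a prediction signal
        # already produced by a lower-level model.
        signals[name] = infer({s: signals[s] for s in inputs})
    return signals

def any_error(x):
    # Toy rule standing in for a trained pattern detector.
    return "Error" if any(
        v == "Error" or (isinstance(v, float) and v > 0.9) for v in x.values()
    ) else "Normal"

models = [
    ("Comp-1_M[1]", ["S1", "S2"], any_error),              # component level
    ("EquipS-1_M[6]", ["Comp-1_M[1]", "S8"], any_error),   # subunit level
    ("EquipU-1_M[9]", ["EquipS-1_M[6]"], any_error),       # unit level
]

out = deep_apply(models, {"S1": 0.95, "S2": 0.1, "S8": 0.0})
print(out["EquipU-1_M[9]"])  # the component fault bubbles up: Error
```

In this toy chain every level simply forwards an error; in practice each level's model applies its own pattern detection, so a lower-level fault propagates only when the level's own model determines an error condition.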
For example, in
In
Managing system operation in a hierarchical manner, including propagating errors up only when the models associated with assets at a certain level of a hierarchy have outputted an error condition, provides an advantage and improvement over prior monitoring and alerting systems: it allows users to stay focused on a particular problem at hand without being distracted or overwhelmed by unwanted false-positive alerts.
As a real-world illustration, an entire crude-oil processing plant would not be in an error state if one of the motors (a lowest-level component) becomes faulty and starts to misbehave. For an entire plant to be in an “Error” state, a large number of critical systems and/or subsystems may need to become faulty.
While an error condition may be detected at the level of individual signal(s), as illustrated in
4.1.3. Analyzing Model Performance
While building models follows a bottom-up approach, analyzing model performance follows a top-down approach. To explain the top-down approach,
In an embodiment, a performance/status report may be generated at each level that can provide a detailed view of an asset under monitoring to a user. During model performance analysis, using these reports, the user may traverse down the asset hierarchy, starting from the top (e.g., highest-level) signals, to find a potential root cause of the “Error” state of “Plant X” or another higher-level asset. The user may traverse down the assets by looking at the signals that most explain an error condition. The user may also traverse down the signals by looking at those signals that have high explanation scores provided by a corresponding Analyzer or a Live Model that, given a condition of a first component caused by the conditions of a group of sub-components, generates an explanation score for each of the sub-components estimating how much the sub-component's condition contributes to the first component's condition.
For example, the user may look at explanation scores for the input signals for the current condition of “Plant X” 320, which would lead to predicted signals {Zone-1_M[11] 316s, Zone-n_M[12] 318s} used in model M[13]. The user may find a comparatively high explanation score for signal Zone-1_M[11] 316s. In other words, the condition observed at “Plant X” 320 is best explained by the condition of “Zone 1” 316. At this point, the user has a lead and may navigate to model M[11] for “Zone 1” 316.
“Zone 1” 316, which is a logical asset, is in an “Error” state and uses signals from models of the two equipment units (i.e., {“EquipU 1” 312, “EquipU n” 314}). The condition of “Zone 1” 316 is explained by one or more of its constituent signals, namely {EquipU-1_M[9] 312s, EquipU-n_M[10] 314s}, which are outputs of models M[9] and M[10]. When looking at the explanation scores and signal contribution ranks for these signals, the user may find that the current state of “Zone 1” 316 is best explained by the signal EquipU-1_M[9] 312s. This will lead the user to investigate “EquipU 1” 312 further for more details.
“EquipU 1” 312, which is a logical asset, is in an “Error” state and uses outputs from three equipment subunits (i.e., {“EquipS 1” 306, “EquipS 2” 308, “EquipS 3” 310}). The condition of “EquipU 1” 312 is explained by one or more of its constituent signals, namely {EquipS-1_M[6] 306s, EquipS-2_M[7] 308s, EquipS-3_M[8] 310s}. The user may find a high explanation score for signal EquipS-1_M[6] 306s; a medium explanation score for signal EquipS-2_M[7] 308s; and, finally, a low explanation score for signal EquipS-3_M[8] 310s. This will guide the user toward understanding the behavior of “EquipS 1” 306, where the explanation score is high.
The same analysis continues, showing that the “Error” state of “EquipS 1” 306 may be best explained by the high explanation score for signal Comp-1_M[1] 302s, which in turn points to signals {S2, S6}, which may have higher explanation scores.
In an embodiment, the user may be able to backtrack to traverse a different signal path to investigate another potential root cause for the “Error” state predicted by model M[13]. For example, from “Comp 1,” the user may backtrack to “EquipU 1” to investigate “EquipU 2” or “EquipU 3” for more details and then, from there, traverse down the signals. The backtracking could follow a ranking of the components in terms of their explanation scores. For example, as discussed above, when “EquipU 1” generates the signal with the highest explanation score, it can be inspected first. When it is desirable to inspect another component that contributes to the condition of “Zone 1,” the component that generates the next highest explanation score can be inspected.
In some embodiments, the component associated with the highest explanation score may not be predicted to be in an error state, following the sub-hierarchy rooted at this component might not lead to components predicted to be in error states, or manually inspecting the component might not reveal an error. Though there is no requirement that an “error” condition at higher levels comes from an “error” condition at a lower level, as illustrated in
In some embodiments, multiple paths in the hierarchy can be traversed at the same time. All paths corresponding to the top N (a positive integer) explanation scores or all explanation scores above a certain threshold could be traversed. The decision on whether to traverse a path can also depend on both the explanation score associated with a component and the current state of the component. For example, the list of possible conditions could be converted into condition scores, such as a largest number for an error state and a smallest number for a normal state. The decision could then be based on the product of the explanation score and the condition score. In other embodiments, the decision could be based on a manual inspection of the asset when the asset corresponds to a physical asset. For example, a path leading to a component may not be traversed when in reality the component is in a normal condition. In this manner, the analysis is guiding the manual inspection of physical components at select levels of the hierarchy in diagnosing a problem.
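The combined-score traversal decision described above can be sketched as follows. The numeric condition-to-score mapping, the function name, and the sample data are illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch: convert each sub-component's condition to a numeric
# score, weight it by the explanation score, and traverse the top-N paths.

CONDITION_SCORE = {"Error": 3, "Warning": 2, "Normal": 1}

def paths_to_traverse(children, n=2):
    """children: list of (asset, explanation_score, condition) tuples.
    Returns the n assets whose combined score is highest."""
    ranked = sorted(
        children,
        key=lambda c: c[1] * CONDITION_SCORE[c[2]],
        reverse=True,
    )
    return [asset for asset, _, _ in ranked[:n]]

children = [
    ("EquipU 1", 0.70, "Error"),    # combined score 2.10
    ("EquipU n", 0.25, "Normal"),   # combined score 0.25
    ("EquipU 2", 0.40, "Warning"),  # combined score 0.80
]
print(paths_to_traverse(children))  # ['EquipU 1', 'EquipU 2']
```

Note that the product favors a "Warning" child with a moderate explanation score over a "Normal" child with a comparable score, matching the intent of weighting both the explanation and the current state.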
4.2. Sequential Organization
4.2.1. Oil Processing Plant Example
In many industrial setups, it may be beneficial to view a complex system, such as an oil processing plant, sequentially instead of hierarchically (as above). At a very high level, the oil processing plant puts crude oil through a chemical process that is composed of three steps: {Separation, Conversion, Treatment}. While the hierarchical nature applies to the structure of the system operation, the sequential nature generally applies to the timing of the system operation. The crude oil is taken as input to produce, after the three steps, multitudes of petroleum products as the output. Techniques described herein are flexible enough to support sequential systems or processes.
In
Table C shows example mappings of signals to signal groups to logical assets. These mappings correspond to the example sequential organization 400 of logical assets illustrated in
Each system in the organization 400 of
The patterns detected in the discrete model M[100] may be representative of the quality of the output produced from “Tank A” 402. The output signal may also be considered as a signal for modeling. The approach for modeling “Tank A” 402 applies similarly to modeling “Tank B” 404 and “Tank C” 406, which generate condition outputs from discrete models M[200] and M[300], respectively.
Further downstream, the inputs to “Mixer” 408 are:
1. output flow data (e.g. rate and velocity) of each tank {Fa, Fb, Fc} into “Mixer,”
2. two of its own sensor readings {Tm1, Pm1}, and
3. the time shifted condition output of each of the tanks {M[100], M[200], M[300]},
where
Similarly, the learning signals for a composite model M[500] of “Processor A” 410 are {Fmo, Tc1, Tc2, Fc1, Fc2, Tp1, Fp1, Fp2, time shifted Mixer conditions [M400 output]}, where:
At time t2, as illustrated in
The condition of the “Mixer,” at time t3, could be explained by the sensor signals {Tm1, Pm1, Fa} and the prediction signals {Tank-A_M[100]}, while the prediction signals {Tank-B_M[200], Tank-C_M[300]} do not contribute to the condition because nothing changed in the operation of those tanks. Thus, the Tank-A_M[100] condition propagates to downstream models, helping predict and explain the behaviors of the downstream components.
Moving along, the flow of chemicals in “Tank A” is restored and is back to normal operating condition, as illustrated in
It is possible that the “Mixer” may exhibit a different type of warning condition on its own even when “Tank A,” “Tank B,” and “Tank C” operations are normal. This could be because of its own independent set of sensor signals, or it may be due to clogging at valve {Fmo} or some chemical sludge buildup inside the “Mixer.” This will result in an independent change in asset behavior, which will affect downstream assets, causing a high-level mark to be reached in one or all of the tanks.
As in a hierarchical organization, building models and analyzing model performance in a sequential organization are performed in reverse or opposite fashion. For example, while “Tank A,” “Tank B,” “Tank C,” “Mixer,” and “Processor A” are all Level 8 assets, building models in a sequential organization follows a downstream approach (e.g., starting with “Tank A,” “Tank B,” and “Tank C”), and analyzing model performance in a sequential organization follows an upstream approach (e.g., starting with “Processor A”).
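The time-shifted chaining in this sequential organization can be sketched in Python. The one-step delay, the run_step function, and the fault rules below are hypothetical stand-ins for the transport delay and the models M[100] through M[400]; none of them are from the disclosure.

```python
from collections import deque

# Illustrative sketch: the "Mixer" model consumes its own sensor readings
# plus the tanks' condition outputs delayed by the transport time (the
# "time shifted" inputs above). A one-step delay stands in for that shift.

def run_step(tank_conditions, mixer_sensors, delayed):
    """delayed: deque holding earlier tank conditions (the time shift)."""
    delayed.append(tank_conditions)
    shifted = delayed.popleft()              # conditions from one step earlier
    upstream_fault = "Error" in shifted.values()
    sensor_fault = any(v > 0.9 for v in mixer_sensors.values())
    return "Error" if upstream_fault or sensor_fault else "Normal"

delayed = deque([{"M[100]": "Normal", "M[200]": "Normal", "M[300]": "Normal"}])
# At t2, "Tank A" goes into error; the "Mixer" still sees the t1 conditions.
r2 = run_step({"M[100]": "Error", "M[200]": "Normal", "M[300]": "Normal"},
              {"Tm1": 0.2, "Pm1": 0.3}, delayed)
# At t3, the shifted "Tank A" error reaches the "Mixer."
r3 = run_step({"M[100]": "Normal", "M[200]": "Normal", "M[300]": "Normal"},
              {"Tm1": 0.2, "Pm1": 0.3}, delayed)
print(r2, r3)  # Normal Error
```

The delay queue is what lets the downstream model correlate its own sensor readings with upstream conditions from the right point in time rather than from the same instant.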
4.2.2. Automobile Manufacturing Plant Example
An automobile manufacturing plant is another example of a complex system. An end-to-end automobile manufacturing process, which includes numerous parts and assembly steps, may be laid out as a sequential process. Each assembly step may be built on top of the previous assembly step, which thereby forms a product hierarchy. Monitoring assets in such a sequential organization allows a user to assess the product hierarchy of the automobile (e.g., a manufactured product). Bad quality of any of the lower-level parts in the product hierarchy will reflect on the overall quality of the automobile.
Assume that the assembly of a chassis requires numerous welds. A bad-quality weld could become a potential hazard. Using the sequential organization of the weld assets in the automobile manufacturing plant, the quality at each weld station may be determined by building a model for that weld station. Every weld station will have the state of the product at the end of the previous station and may have an independent set of inputs. This chaining continues throughout the manufacturing process. An ML model assessing the quality of work done (e.g., a weld) at each step reflects the quality of the final manufactured product (e.g., an automobile).
4.3. Hybrid Organization
Techniques described herein are also flexible to support hybrid systems or processes.
Everything at Level 8 and below remains the same as seen in the sequential organization 400. At Level 7, logical assets “Chemical Tanks” 502, “Pre-Processors” 504, and “Post-Processors” 506 are created. The “Chemical Tanks” asset 502 is represented by three prediction signals {Tank-A_M[100], Tank-B_M[200], Tank-C_M[300]} and generates prediction output under model M[101] associated with the “Chemical Tanks” asset 502. The “Pre-Processors” asset 504 is represented by one or more prediction signals {Mixer_M[400], . . . } and generates a prediction output under model M[401] associated with the “Pre-Processors” asset 504. The “Post-Processors” asset 506 is represented by one or more prediction signals {Processor-A_M[500], . . . } and generates a prediction output under model M[501] associated with the “Post-Processors” asset 506.
The logical assets “Chemical Tanks” 502, “Pre-Processors” 504, and “Post-Processors” 506 roll up into the next-higher-level logical asset “Ethanol Production Line” 508, which is represented by prediction signals {Chemical-Tanks_M[101], Pre-Processors_M[401], Post-Processors_M[501]}. Model M[151] for the logical asset “Ethanol Production Line” 508 will look at the health of the overall line of ethanol production—a sequential process. As illustrated, the output of each model rolls up to the next logical entity and develops a hierarchical structure.
Table D shows example mappings of signals to signal groups to logical assets. These mappings correspond to the example hybrid organization 500 of assets illustrated in
A user may study the impact of the system state on the quality of the output produced by comparing two or more model outputs. For example, the user may compare the Level 6 model Ethanol-Production-Line-A_M[151], which reflects the overall state of the production line, and the Level 7 model Post-Processors_M[501], which reflects the overall quality of the output generated.
5.1. Signal Visualizations
In an embodiment, a user may select models and independently select signals of their choice in a GUI to visualize relevant signals in a timeline view.
As described below, the GUI includes features to present signals in a new and useful manner that allows the user to determine model-signal relationships in a hierarchical context or another context that reflects the structural relationship among components of a system.
5.1.1. Signals Highlighted by Model
In an embodiment, the server computer 108 causes a GUI to initially present graphical representations of signals representing conditions of higher-level components, such as the entire system or the assets hierarchically right under the entire system. The GUI allows the user to drill down to signals representing conditions of lower-level components. For example, these signals could be displayed in a separate window or at the bottom of the screen to add to the existing display.
In some embodiments, when the user is reviewing a particular model, such as M[1] 602, the GUI highlights the graphical representation of the associated signal. The server computer 108 uses the information in a data structure, such as Table B, to recognize that signals related to the particular model, such as {S1, S2, S3, S6 & S9}, are to be added to the view, highlighted, or grouped in a collection shown at a certain position within the view. (The “Models Used” field in Table B helps filter down the signals for display of selected models.) Other signals could fade away, be dropped lower in the view, or be removed completely from the view.
In some embodiments, the GUI initially shows graphical representations of all signals associated with specific levels of a hierarchy and highlights all the displayed signals related to a component in response to user input. As illustrated in
In the example of
5.1.2. Signals Grouped by Model and Sorted by Contribution Rank
In an embodiment, a signals list may be sorted in order of the signal contribution ranks (e.g., descending, ascending, etc.) to help the user focus only on those signals that matter the most for the condition/prediction of interest. The signal contribution ranks can be obtained from applying one of the explanation methods, as discussed above.
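Rank-ordering a signal list by contribution can be sketched as follows; the function name and the sample scores are illustrative, and the scores would in practice come from one of the explanation methods discussed above.

```python
# Illustrative sketch: sort signals so those that most explain the
# condition of interest appear first in the timeline view.

def sort_by_contribution(signals, scores, descending=True):
    """signals: names shown in the view; scores: explanation scores keyed
    by signal name (signals without a score sort last when descending)."""
    return sorted(signals, key=lambda s: scores.get(s, 0.0), reverse=descending)

scores = {"S2": 0.61, "S6": 0.25, "S1": 0.09, "S3": 0.05}
print(sort_by_contribution(["S1", "S2", "S3", "S6", "S9"], scores))
# ['S2', 'S6', 'S1', 'S3', 'S9']
```

Signals with no explanation score (S9 here) fall to the bottom, which keeps the user's attention on the signals that matter most for the condition of interest.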
For example, in timeline view 750 of
5.1.3. Other Example Visualization Features
Other GUI features may include a grouping feature, a linking feature, and a pinning feature. Using the grouping feature, one or more signals may be grouped. Grouped signals may be shown/hidden using an expand/collapse feature. Using the linking feature, a link may be provided to “show 5 more” signals, for example. Using the pinning feature, one or more signals may be pinned to a timeline view and always shown on top of the timeline view. In this manner, every time a new signal is pinned, it may be automatically added to the “pinned” group so that the signal does not hide away and is moved to the top portion of the timeline view.
In an embodiment, displayed signals may be reorganized on the GUI based on a selected event (e.g., behavior) in the GUI. In an embodiment, a signal may be zoomed in/out on the timeline view.
5.2. Model to Signal Conversion
As described herein, a model of models may represent either a higher-level physical or logical asset.
After the user designates the model output as a signal, the user may assign it to one or more Signal Groups (just like any other signal). Referring back to
In an embodiment, all model outputs may be automatically generated in a way which allows that output to be used as signal data in another model in the same account Datastream.
Once converted into a signal, the signal can be used anywhere a signal is used. For example, during visualization, the expand/collapse feature may show/hide the signal in the timeline view. A set/reset feature may set/reset signal-level properties of the signal, such as gapThreshold.
As discussed above, newly converted signals may be used for building higher-level models. For example, the user may create the model M[6] using Signal Group “_equips1,” which includes three signals {Comp-1_M[1], Comp-2_M[2], S8}, of which two are prediction signals and the third is a sensor-based signal. Similarly, the user may create the model M[7] using Signal Group “_equips2,” which includes two prediction signals {Comp-3_M[3], Comp-4_M[4]}, and the model M[8] using Signal Group “_equips3,” which includes two prediction signals {Comp-4_M[4], Comp-5_M[5]}.
A higher-level (equipment unit) model M[9] is then created using Signal Group “_equipu-1,” which will include the signals converted from model output of models M[6], M[7], & M[8]. In Table B above, prediction signals named EquipS-1_M[6], EquipS-2_M[7], and EquipS-3_M[8] are created from the model output of M[6], M[7] and M[8], respectively.
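Model-to-signal conversion and Signal Group assignment can be sketched as follows. The Datastream class and its method names are illustrative assumptions; only the naming pattern (asset name plus model identifier) follows the examples above.

```python
# Illustrative sketch: a model's output is registered under a
# prediction-signal name and assigned to Signal Groups, after which it
# behaves like any other signal.

class Datastream:
    def __init__(self):
        self.signals = {}   # signal name -> latest value
        self.groups = {}    # Signal Group name -> list of signal names

    def register_prediction_signal(self, asset, model_id, groups):
        name = f"{asset}_M[{model_id}]"        # e.g., EquipS-1_M[6]
        self.signals[name] = None              # populated as the model runs
        for g in groups:
            self.groups.setdefault(g, []).append(name)
        return name

ds = Datastream()
sig = ds.register_prediction_signal("EquipS-1", 6, ["_equipu-1"])
print(sig)                     # EquipS-1_M[6]
print(ds.groups["_equipu-1"])  # ['EquipS-1_M[6]']
```

Once registered this way, the prediction signal can be selected into any higher-level model's Signal Group exactly like a sensor signal.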
This chaining continues further up until it reaches “Plant X” (at Level 4). When necessary, this can be extended to higher-level categories for Installation (Level 3), Business Category (Level 2), and finally Industry (Level 1).
As discussed above, users are not limited to these nine tier levels but are allowed to create new levels of their selection as may be relevant to their business needs. Beyond a hierarchical organization, techniques described herein support organizing the system of assets in a graph structure to support a process flow.
These interconnected operational ML models then represent a digital twin of the interconnected systems of a complex system, such as an industrial plant. Users are able to monitor the performance of the asset at any level.
5.3. Model Comparison
A user may compare two assets at different levels by using models corresponding to selected assets. For example, if the user wants to compare component “Comp 1” and component “Comp 2,” then the user would pick the models M[1] and M[2], respectively. However, if the user wants to compare a component “Comp 1” and an equipment subunit “EquipS 1,” then the user would pick the models M[1] and M[6], respectively.
Via a GUI, both models are highlighted and placed one above the other. The corresponding Signal Groups are shown in the same order as their models. Within the Signal Groups, the signals may be rank-ordered. In such a view, if there is a common signal used by both models, it will be repeated in both groups.
5.4. Digital Twin
Analyzers, such as those shown in Tables B, C, and D, are containerized models that can be deployed in any computing environment that can run a Docker container, such as a Raspberry Pi, an Android-based smartphone, a laptop/PC, etc., for real-time monitoring of physical assets.
Condition output from Analyzers may be placed on a 2D static picture (e.g., geo-spatial map view) that can then be viewed based on a corresponding organization of assets. The user may either traverse through the structure of the organization and navigate from the geo-spatial map view to a specific asset, or use a search box to locate an asset of interest and directly navigate to the asset.
In an embodiment, analyzers may be directly placed on an existing SCADA/DCS/HMI instead of a 2D static image. In an embodiment, analyzers may be directly placed on an existing 3D rendering instead of a 2D static image.
In an embodiment, method 1200 is performed at each level in a plurality of levels associated with an asset organization, starting from the bottommost level (e.g., Level 8), excluding the signal level (e.g., Level 10), of the asset organization. In an embodiment, the GUI 900 facilitates building models associated with the asset organization.
At step 1202, an asset in a current level is selected for which a model is to be defined. An example asset may be a part, a component, an equipment subunit, an equipment unit, a zone, or a plant of an industrial system.
At step 1204, input signals for the model are selected. The model determines conditions of the asset associated with the model based on the input signals. Input signals for a model may include operational signals from field sensors, prediction signals of models associated with assets that are located at a level lower than the current level, or a combination thereof.
At step 1206, an output signal for the model is named. The output signal would encode conditions predicted by the model. The output signal is a prediction signal that may be used by at least one model associated with an asset of the plurality of assets that is located at a level higher than the current level. For example, referring to
In an embodiment, steps 1202-1206 are repeated for each asset in the current level.
After all models are built, the associated assets are thereby connected or chained to form a model chain. Put differently, prediction signals output from lower-level models may be used by any higher-level models. In the model chain, lower-level models may be more sensitive as they find patterns using just a few signals, and higher-level models then look for patterns in the patterns of the lower-level models. In this manner, method 1200 accounts for all interactions between assets that generate signals while reducing the number of signals used by each model to find patterns.
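Steps 1202 through 1206 can be sketched as a per-level loop. The build_level function and the sample data are illustrative assumptions patterned on the examples above, not an implementation from the disclosure.

```python
# Illustrative sketch of one pass of method 1200 over a level.

def build_level(assets_with_inputs, model_ids):
    """assets_with_inputs: {asset: [input signal names]} (step 1204);
    model_ids: {asset: model number}. Returns {asset: (inputs, output)}."""
    chain = {}
    for asset, inputs in assets_with_inputs.items():   # step 1202: pick asset
        # Step 1206: name the output (prediction) signal so that models at
        # the next higher level can consume it.
        output = f"{asset}_M[{model_ids[asset]}]"
        chain[asset] = (inputs, output)
    return chain

level8 = build_level({"Comp-1": ["S1", "S2", "S3"]}, {"Comp-1": 1})
print(level8["Comp-1"][1])  # Comp-1_M[1]
```

Repeating this loop level by level, with each level's outputs appearing among the next level's inputs, yields the model chain described above.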
After the model chain is formed, the models in the model chain may be applied using deep apply, in which lower-level models are applied on new signal data to generate new outputs that are required by higher-level models. Users are able to perform root cause analysis of complex systems efficiently and effectively because they are not blinded by subtle system behavior: failures at individual signal(s) bubble up to the topmost model only when a model at each level below indeed determines an error based on the pattern detected from its input signals.
In an embodiment, method 1250 is performed by traversing a plurality of levels associated with an asset organization, starting from the topmost level (e.g., Level 4) of the asset organization. For discussion, assume that the asset organization is the one described with respect to method 1200 of
At step 1252, a particular input signal of one or more input signals for the model associated with the asset at a current level is determined to satisfy a user-defined criterion. An example of such a criterion is the particular input signal having the highest explanation score for a particular “Error” condition among the one or more input signals used by the model associated with the asset at the current level of the hierarchy. Explanation scores for the one or more input signals may be determined by a performance model associated with the asset at the current level. Another example criterion is the particular input signal having a specific value (e.g., “Error”).
At step 1254, the particular input signal is followed to a model associated with an asset at a level lower than the current level.
In an embodiment, steps 1252 and 1254 are repeated until an asset of the plurality of assets is identified as a potential source of the error state. For example, steps 1252 and 1254 are repeated until an asset at the bottommost level (e.g., Level 8) is reached.
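A minimal sketch of this traversal, using the highest-explanation-score criterion, is shown below. The find_root_cause function and all score values are hypothetical; the model and signal names are patterned on the hierarchical example above.

```python
# Illustrative sketch of steps 1252-1254: starting at the topmost model,
# repeatedly pick the input signal with the highest explanation score and
# descend to the model that produces it, until a raw sensor signal is hit.

def find_root_cause(top_model, inputs_of, producer_of, scores):
    """inputs_of: model -> its input signals; producer_of: prediction
    signal -> the lower-level model producing it; scores: explanation
    scores keyed by signal name."""
    model = top_model
    while True:
        best = max(inputs_of[model], key=lambda s: scores.get(s, 0.0))  # 1252
        if best not in producer_of:       # a sensor signal: stop descending
            return model, best
        model = producer_of[best]         # step 1254: navigate down a level

inputs_of = {
    "M[13]": ["Zone-1_M[11]", "Zone-n_M[12]"],
    "M[11]": ["EquipU-1_M[9]", "EquipU-n_M[10]"],
    "M[9]":  ["EquipS-1_M[6]", "EquipS-2_M[7]"],
    "M[6]":  ["Comp-1_M[1]", "S8"],
    "M[1]":  ["S1", "S2", "S6"],
}
producer_of = {"Zone-1_M[11]": "M[11]", "EquipU-1_M[9]": "M[9]",
               "EquipS-1_M[6]": "M[6]", "Comp-1_M[1]": "M[1]"}
scores = {"Zone-1_M[11]": 0.8, "EquipU-1_M[9]": 0.7, "EquipS-1_M[6]": 0.9,
          "Comp-1_M[1]": 0.6, "S2": 0.75, "S6": 0.5}
print(find_root_cause("M[13]", inputs_of, producer_of, scores))
# ('M[1]', 'S2')
```

The greedy descent ends at model M[1] with sensor signal S2, identifying “Comp 1” as the potential source of the error state; backtracking would revisit a level and take the next-highest-scoring signal instead.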
In an embodiment, a signal path associated with the traversal, starting from the identified asset, may be backtracked to another asset along the signal path, and the plurality of levels may be traversed therefrom. For example, referring to
Techniques described herein enable predictive analytics systems to treat discrete and composite models (models of models) as complete analytical representations (or digital twins) of physical or logical asset formations on the ground. A composite model's input includes outputs of other models and zero or more sensor signals. The knowledge of which models provide input to which other models makes it possible and easy to navigate from one part of a complex system to another part of the complex system. The “Models Used In” information helps a user navigate from a signal to one of the models. This bi-directional navigational ability enhances the end-user experience. Additionally, the user may start at any asset of interest and may use an entity/asset search bar in a GUI to locate an asset of interest. When one or more matches are found, the user may navigate to the corresponding digital twin model.
Consequently, the disclosed techniques provide numerous technical benefits. One example is reduced use of memory, CPU cycles, network traffic, and other computer resources, resulting in improved machine efficiency, for all the reasons set forth herein.
According to one embodiment, the techniques described herein are implemented by at least one computing device. The techniques may be implemented in whole or in part using a combination of at least one server computer and/or other computing devices that are coupled using a network, such as a packet data network. The computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as at least one application-specific integrated circuit (ASIC) or field programmable gate array (FPGA) that is persistently programmed to perform the techniques, or may include at least one general purpose hardware processor programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the described techniques. The computing devices may be server computers, workstations, personal computers, portable computer systems, handheld devices, mobile computing devices, wearable devices, body mounted or implantable devices, smartphones, smart appliances, internetworking devices, autonomous or semi-autonomous devices such as robots or unmanned ground or aerial vehicles, any other electronic device that incorporates hard-wired and/or program logic to implement the described techniques, one or more virtual computing machines or instances in a data center, and/or a network of server computers and/or personal computers.
Computer system 1400 includes an input/output (I/O) subsystem 1402 which may include a bus and/or other communication mechanism(s) for communicating information and/or instructions between the components of the computer system 1400 over electronic signal paths. The I/O subsystem 1402 may include an I/O controller, a memory controller and at least one I/O port. The electronic signal paths are represented schematically in the drawings, for example as lines, unidirectional arrows, or bidirectional arrows.
At least one hardware processor 1404 is coupled to I/O subsystem 1402 for processing information and instructions. Hardware processor 1404 may include, for example, a general-purpose microprocessor or microcontroller and/or a special-purpose microprocessor such as an embedded system or a graphics processing unit (GPU) or a digital signal processor or ARM processor. Processor 1404 may comprise an integrated arithmetic logic unit (ALU) or may be coupled to a separate ALU.
Computer system 1400 includes one or more units of memory 1406, such as a main memory, which is coupled to I/O subsystem 1402 for electronically digitally storing data and instructions to be executed by processor 1404. Memory 1406 may include volatile memory such as various forms of random-access memory (RAM) or other dynamic storage device. Memory 1406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1404. Such instructions, when stored in non-transitory computer-readable storage media accessible to processor 1404, can render computer system 1400 into a special-purpose machine that is customized to perform the operations specified in the instructions.
Computer system 1400 further includes non-volatile memory such as read only memory (ROM) 1408 or other static storage device coupled to I/O subsystem 1402 for storing information and instructions for processor 1404. The ROM 1408 may include various forms of programmable ROM (PROM) such as erasable PROM (EPROM) or electrically erasable PROM (EEPROM). A unit of persistent storage 1410 may include various forms of non-volatile RAM (NVRAM), such as FLASH memory, or solid-state storage, magnetic disk, or optical disk such as CD-ROM or DVD-ROM, and may be coupled to I/O subsystem 1402 for storing information and instructions. Storage 1410 is an example of a non-transitory computer-readable medium that may be used to store instructions and data which, when executed by the processor 1404, cause performance of computer-implemented methods to execute the techniques herein.
The instructions in memory 1406, ROM 1408 or storage 1410 may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps. The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format processing instructions to parse or render files coded using HTML, XML, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. The instructions may implement a web server, web application server or web client. The instructions may be organized as a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or NoSQL, an object store, a graph database, a flat file system or other data storage.
Computer system 1400 may be coupled via I/O subsystem 1402 to at least one output device 1412. In one embodiment, output device 1412 is a digital computer display. Examples of a display that may be used in various embodiments include a touch screen display or a light-emitting diode (LED) display or a liquid crystal display (LCD) or an e-paper display. Computer system 1400 may include other type(s) of output devices 1412, alternatively or in addition to a display device. Examples of other output devices 1412 include printers, ticket printers, plotters, projectors, sound cards or video cards, speakers, buzzers or piezoelectric devices or other audible devices, lamps or LED or LCD indicators, haptic devices, actuators, or servos.
At least one input device 1414 is coupled to I/O subsystem 1402 for communicating signals, data, command selections or gestures to processor 1404. Examples of input devices 1414 include touch screens, microphones, still and video digital cameras, alphanumeric and other keys, keypads, keyboards, graphics tablets, image scanners, joysticks, clocks, switches, buttons, dials, slides, and/or various types of sensors such as force sensors, motion sensors, heat sensors, accelerometers, gyroscopes, and inertial measurement unit (IMU) sensors, and/or various types of transceivers such as wireless transceivers (for example, cellular or Wi-Fi), radio frequency (RF) or infrared (IR) transceivers, and Global Positioning System (GPS) transceivers.
Another type of input device is a control device 1416, which may perform cursor control or other automated control functions such as navigation in a graphical interface on a display screen, alternatively or in addition to input functions. Control device 1416 may be a touchpad, a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1404 and for controlling cursor movement on display 1412. The input device may have at least two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Another type of input device is a wired, wireless, or optical control device such as a joystick, wand, console, steering wheel, pedal, gearshift mechanism or other type of control device. An input device 1414 may include a combination of multiple different input devices, such as a video camera and a depth sensor.
In another embodiment, computer system 1400 may comprise an internet of things (IoT) device in which one or more of the output device 1412, input device 1414, and control device 1416 are omitted. Or, in such an embodiment, the input device 1414 may comprise one or more cameras, motion detectors, thermometers, microphones, seismic detectors, other sensors or detectors, measurement devices or encoders and the output device 1412 may comprise a special-purpose display such as a single-line LED or LCD display, one or more indicators, a display panel, a meter, a valve, a solenoid, an actuator or a servo.
When computer system 1400 is a mobile computing device, input device 1414 may comprise a global positioning system (GPS) receiver coupled to a GPS module that is capable of triangulating to a plurality of GPS satellites, determining and generating geo-location or position data such as latitude-longitude values for a geophysical location of the computer system 1400. Output device 1412 may include hardware, software, firmware and interfaces for generating position reporting packets, notifications, pulse or heartbeat signals, or other recurring data transmissions that specify a position of the computer system 1400, alone or in combination with other application-specific data, directed toward host 1424 or server 1430.
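As a minimal, hypothetical sketch of the position reporting packets described above, a recurring position report might be serialized as JSON; the field names (device_id, lat, lon, ts) are illustrative only, and any actual implementation would follow its own application-specific schema:

```python
import json
import time

def build_position_report(device_id, latitude, longitude):
    """Build a hypothetical position-reporting packet as a JSON string.

    The schema here is an assumption for illustration, not a standard:
    latitude and longitude are decimal degrees, and ts is a UNIX timestamp.
    """
    packet = {
        "device_id": device_id,
        "lat": latitude,        # latitude in decimal degrees
        "lon": longitude,       # longitude in decimal degrees
        "ts": int(time.time())  # time at which the position was reported
    }
    return json.dumps(packet)

# A mobile computer system 1400 might emit such a packet on a recurring
# schedule, directed toward a host 1424 or server 1430.
report = build_position_report("system-1400", 37.7749, -122.4194)
print(report)
```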
Computer system 1400 may implement the techniques described herein using customized hard-wired logic, at least one ASIC or FPGA, firmware and/or program instructions or logic which when loaded and used or executed in combination with the computer system causes or programs the computer system to operate as a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1400 in response to processor 1404 executing at least one sequence of at least one instruction contained in main memory 1406. Such instructions may be read into main memory 1406 from another storage medium, such as storage 1410. Execution of the sequences of instructions contained in main memory 1406 causes processor 1404 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage 1410. Volatile media includes dynamic memory, such as memory 1406. Common forms of storage media include, for example, a hard disk, solid state drive, flash drive, magnetic data storage medium, any optical or physical data storage medium, memory chip, or the like.
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus of I/O subsystem 1402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Various forms of media may be involved in carrying at least one sequence of at least one instruction to processor 1404 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a communication link such as a fiber optic or coaxial cable or telephone line using a modem. A modem or router local to computer system 1400 can receive the data on the communication link and convert the data to a format that can be read by computer system 1400. For instance, a receiver such as a radio frequency antenna or an infrared detector can receive the data carried in a wireless or optical signal and appropriate circuitry can provide the data to I/O subsystem 1402 such as place the data on a bus. I/O subsystem 1402 carries the data to memory 1406, from which processor 1404 retrieves and executes the instructions. The instructions received by memory 1406 may optionally be stored on storage 1410 either before or after execution by processor 1404.
Computer system 1400 also includes a communication interface 1418 coupled to I/O subsystem 1402. Communication interface 1418 provides a two-way data communication coupling to network link(s) 1420 that are directly or indirectly connected to at least one communication network, such as a network 1422 or a public or private cloud on the Internet. For example, communication interface 1418 may be an Ethernet networking interface, integrated-services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of communications line, for example an Ethernet cable or a metal cable of any kind or a fiber-optic line or a telephone line. Network 1422 broadly represents a local area network (LAN), wide-area network (WAN), campus network, internetwork, or any combination thereof. Communication interface 1418 may comprise a LAN card to provide a data communication connection to a compatible LAN, or a cellular radiotelephone interface that is wired to send or receive cellular data according to cellular radiotelephone wireless networking standards, or a satellite radio interface that is wired to send or receive digital data according to satellite wireless networking standards. In any such implementation, communication interface 1418 sends and receives electrical, electromagnetic, or optical signals over signal paths that carry digital data streams representing various types of information.
Network link 1420 typically provides electrical, electromagnetic, or optical data communication directly or through at least one network to other data devices, using, for example, satellite, cellular, Wi-Fi, or BLUETOOTH technology. For example, network link 1420 may provide a connection through a network 1422 to a host computer 1424.
Furthermore, network link 1420 may provide a connection through network 1422 to other computing devices via internetworking devices and/or computers that are operated by an Internet Service Provider (ISP) 1426. ISP 1426 provides data communication services through a world-wide packet data communication network represented as internet 1428. A server computer 1430 may be coupled to internet 1428. Server 1430 broadly represents any computer, data center, virtual machine, or virtual computing instance with or without a hypervisor, or computer executing a containerized program system such as DOCKER or KUBERNETES. Server 1430 may represent an electronic digital service that is implemented using more than one computer or instance and that is accessed and used by transmitting web services requests, uniform resource locator (URL) strings with parameters in HTTP payloads, API calls, app services calls, or other service calls. Computer system 1400 and server 1430 may form elements of a distributed computing system that includes other computers, a processing cluster, server farm or other organization of computers that cooperate to perform tasks or execute applications or services. Server 1430 may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps.
The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format retrieving instructions to parse or render files coded using HTML, XML, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. Server 1430 may comprise a web application server that hosts a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or NoSQL, an object store, a graph database, a flat file system or other data storage.
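As a minimal, hypothetical sketch of the relational data storage layer described above, a server such as server 1430 might access a SQL database as follows; the table and column names are illustrative assumptions, not part of any claimed subject matter:

```python
import sqlite3

# An in-memory relational database accessed through SQL statements,
# standing in for the data storage layer of a server such as server 1430.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO assets (name) VALUES (?)", ("pump-01",))
conn.execute("INSERT INTO assets (name) VALUES (?)", ("valve-07",))
conn.commit()

# The application layer queries the data storage layer with SQL.
rows = conn.execute("SELECT name FROM assets ORDER BY id").fetchall()
conn.close()
print(rows)  # each row is a one-element tuple holding the asset name
```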
Computer system 1400 can send messages and receive data and instructions, including program code, through the network(s), network link 1420 and communication interface 1418. In the Internet example, a server 1430 might transmit a requested code for an application program through Internet 1428, ISP 1426, local network 1422 and communication interface 1418. The received code may be executed by processor 1404 as it is received, and/or stored in storage 1410, or other non-volatile storage for later execution.
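The exchange above, in which a server transmits data or code to a requesting system over a network link, can be sketched with a loopback socket; this is a simplified illustration under the assumption of a plain TCP stream, not a depiction of any particular protocol stack:

```python
import socket
import threading

def serve_once(server_sock, payload):
    """Accept a single connection and send the payload, as a stand-in
    for a server 1430 transmitting requested data to computer system 1400."""
    conn, _addr = server_sock.accept()
    conn.sendall(payload)
    conn.close()

# Server side: bind to the loopback interface on an OS-assigned free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server, b"print('hello')"))
t.start()

# Client side: connect over the "network link" and receive the data,
# which could then be stored or executed as the text describes.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
received = client.recv(1024)
client.close()
t.join()
server.close()
print(received)
```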
The execution of instructions as described in this section may implement a process in the form of an instance of a computer program that is being executed and that consists of program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently. In this context, a computer program is a passive collection of instructions, while a process may be the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed. Multitasking may be implemented to allow multiple processes to share processor 1404. While each processor 1404 or core of the processor executes a single task at a time, computer system 1400 may be programmed to implement multitasking to allow each processor to switch between tasks that are being executed without having to wait for each task to finish. In an embodiment, switches may be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts. Time-sharing may be implemented to allow fast response for interactive user applications by rapidly performing context switches to provide the appearance of concurrent execution of multiple processes. In an embodiment, for security and reliability, an operating system may prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
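The multitasking behavior described above, in which the scheduler switches between tasks when one performs an input/output operation, can be illustrated with two threads of a single process; the sleep calls stand in for I/O waits, and the task names are hypothetical:

```python
import threading
import time

results = []
lock = threading.Lock()

def task(name, delay):
    # Sleeping models an input/output wait; while one thread waits,
    # the scheduler may switch the processor to another thread.
    time.sleep(delay)
    with lock:
        results.append(name)

threads = [
    threading.Thread(target=task, args=("slow", 0.2)),
    threading.Thread(target=task, args=("fast", 0.05)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The faster task finishes first even though it was started second,
# because neither thread had to wait for the other to finish.
print(results)
```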
Software system 1500 is provided for directing the operation of computing device 1400. Software system 1500, which may be stored in system memory (RAM) 1406 and on fixed storage (e.g., hard disk or flash memory) 1410, includes a kernel or operating system (OS) 1510.
The OS 1510 manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file input and output (I/O), and device I/O. One or more application programs, represented as 1502A, 1502B, 1502C . . . 1502N, may be “loaded” (e.g., transferred from fixed storage 1410 into memory 1406) for execution by the system 1500. The applications or other software intended for use on device 1400 may also be stored as a set of downloadable computer-executable instructions, for example, for downloading and installation from an Internet location (e.g., a Web server, an app store, or other online service).
Software system 1500 includes a graphical user interface (GUI) 1515, for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the system 1500 in accordance with instructions from operating system 1510 and/or application(s) 1502. The GUI 1515 also serves to display the results of operation from the OS 1510 and application(s) 1502, whereupon the user may supply additional inputs or terminate the session (e.g., log off).
OS 1510 can execute directly on the bare hardware 1520 (e.g., processor(s) 1404) of device 1400. Alternatively, a hypervisor or virtual machine monitor (VMM) 1530 may be interposed between the bare hardware 1520 and the OS 1510. In this configuration, VMM 1530 acts as a software “cushion” or virtualization layer between the OS 1510 and the bare hardware 1520 of the device 1400.
VMM 1530 instantiates and runs one or more virtual machine instances (“guest machines”). Each guest machine comprises a “guest” operating system, such as OS 1510, and one or more applications, such as application(s) 1502, designed to execute on the guest operating system. The VMM 1530 presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
In some instances, the VMM 1530 may allow a guest operating system to run as if it is running on the bare hardware 1520 of device 1400 directly. In these instances, the same version of the guest operating system configured to execute on the bare hardware 1520 directly may also execute on VMM 1530 without modification or reconfiguration. In other words, VMM 1530 may provide full hardware and CPU virtualization to a guest operating system in some instances.
In other instances, a guest operating system may be specially designed or configured to execute on VMM 1530 for efficiency. In these instances, the guest operating system is “aware” that it executes on a virtual machine monitor. In other words, VMM 1530 may provide para-virtualization to a guest operating system in some instances.
The above-described basic computer hardware and software is presented for purposes of illustrating the basic underlying computer components that may be employed for implementing the example embodiment(s). The example embodiment(s), however, are not necessarily limited to any particular computing environment or computing device configuration. Instead, the example embodiment(s) may be implemented in any type of system architecture or processing environment that one skilled in the art, in light of this disclosure, would understand as capable of supporting the features and functions of the example embodiment(s) presented herein.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
As used herein the terms “include” and “comprise” (and variations of those terms, such as “including”, “includes”, “comprising”, “comprises”, “comprised” and the like) are intended to be inclusive and are not intended to exclude further features, components, integers or steps.
Various operations have been described using flowcharts. In certain cases, the functionality/processing of a given flowchart step may be performed in different ways to that described and/or by different systems or system modules. Furthermore, in some cases a given operation depicted by a flowchart may be divided into multiple operations and/or multiple flowchart operations may be combined into a single operation. Furthermore, in certain cases the order of operations as depicted in a flowchart and described may be able to be changed without departing from the scope of the present disclosure.
It will be understood that the embodiments disclosed and defined in this specification extend to all alternative combinations of two or more of the individual features mentioned or evident from the text or drawings. All of these different combinations constitute various alternative aspects of the embodiments.
This application is related to U.S. Pat. No. 10,409,926, titled “Learning Expected Operational Behavior of Machines from Generic Definitions and Past Behavior” and issued Sep. 10, 2019, U.S. Pat. No. 10,552,762, titled “Machine Learning of Physical Conditions Based on Abstract Relations and Sparse Labels” and issued Feb. 4, 2020, U.S. patent application Ser. No. 15/906,702, titled “System and Method for Explanation for Condition Predictions in Complex Systems” and filed Feb. 27, 2018, and U.S. patent application Ser. No. 16/939,568, titled “Fluid and Resolution-Friendly View of Large Volumes of Time Series Data” and filed Jul. 27, 2020, the entire contents of which are hereby incorporated by reference for all purposes as if fully set forth herein.