The present application generally relates to equipment that can be used in manufacturing, product development, and/or other processes (e.g., equipment used to develop or commercially manufacture a pharmaceutical product), and more specifically relates to the identification of actions that can mitigate or prevent performance deficiencies relating to such equipment.
In various development and production contexts, different types of equipment are relied upon to provide output (e.g., physical products) with a sufficiently high level of quality. To manufacture biopharmaceutical drug products, for example, the requisite equipment may include media holding tanks, filtration equipment, bioreactors, separation equipment, purification equipment, and so on. In some cases, the equipment can include or be associated with auxiliary devices, such as sensors (e.g., temperature and/or pressure probes) that enable real-time or near real-time monitoring of the process. When such monitoring is available, subject matter experts or teams can leverage their training and experience to identify problems with the equipment, or to predict the onset of problems with the equipment, preferably at a time before the equipment is used for its primary purpose (e.g., used for product development or commercial manufacture of the product). For example, a subject matter expert may observe particular patterns or behaviors in a monitored temperature within a tank that is used for a “steam-in-place” sterilization procedure, and apply his or her personal knowledge to theorize that the patterns or behaviors are the result of a faulty steam trap, improper temperature probe calibration, or some other specific root cause. The subject matter expert may then apply his or her personal knowledge to determine an appropriate action or actions to take in response to the diagnosis (e.g., checking and/or replacing the steam trap, or recalibrating the temperature probes, etc.), and either complete the action(s) or request completion of the action(s).
However, this expertise is typically specific to each individual or team, and therefore can be inconsistently applied across locations (e.g., plants or laboratories) and over time (e.g., as key employees leave). Moreover, subject matter experts may fail to note particular warning signs, such as when signals indicative of an equipment problem (e.g., brief dips in sensor readings, etc.) are intermittent. Even if subject matter experts could accurately and consistently identify problems or potential problems, the process would generally be time-consuming, and the costs high (e.g., due to the number of man-hours required from highly skilled individuals). In some contexts, the costs associated with continuous manual monitoring are prohibitive, and so “second best” practices are instead employed. For example, some equipment may be maintained (e.g., inspected, calibrated, etc.) on a regular calendar basis (e.g., once every three months or once per year) or on a usage basis (e.g., after every 100 hours of use, or after every “run”) in order to lower the likelihood of problems. However, this can result in an unnecessarily high expenditure of resources (if maintenance is performed more often than needed) or an unacceptably high number or frequency of performance issues (if maintenance is performed less often than needed).
To address some of the aforementioned drawbacks of current/conventional practices, embodiments described herein include systems and methods that automate and improve the identification of equipment performance issues/deficiencies, as well as the determination of which actions to take based on those issues/deficiencies. The equipment may be any type of device or system used in a particular process, such as a sterilization or holding tank, a bioreactor, and so on, and in some embodiments may include some or all of the sensor device(s) used to monitor the equipment. While the examples provided herein relate primarily to pharmaceutical manufacture or development, it is understood that the systems and methods disclosed herein provide an equipment-agnostic platform that can be applied to equipment designed for use in other contexts (e.g., equipment used in non-pharmaceutical development or manufacture processes such as for food, textiles, automobiles, etc.).
To identify equipment performance issues, a classification model is trained using historical data. The classification model may be trained using collections of historical sensor readings for time periods in which a particular piece of equipment was used (or in which multiple, similar pieces of equipment were used), along with labels indicating how subject matter experts or teams classified any performance issues, or the lack thereof, for each such time period. For example, for a given set of input data, a subject matter expert may assign a label selected from the group consisting of [“Good,” “Failure Type 1,” . . . “Failure Type N”], where N is an integer greater than or equal to one. It is understood that, as used herein, the term “expert” does not necessarily indicate any minimum level of qualifications (e.g., training, knowledge, experience, etc.), although it may in some embodiments. To determine which features (e.g., which sensor readings) are used to train the classification model, principal component analysis or other suitable techniques may be used to determine which features are most predictive of particular performance issues.
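For readers who want a concrete picture of this feature-screening step, the following sketch ranks candidate features by their contribution to the leading principal components. It is a minimal sketch assuming scikit-learn, with synthetic stand-in data; the feature names (e.g., `temp_probe_1`) are hypothetical and are not taken from the disclosure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature names; real features would be per-sensor readings or
# summary statistics drawn from the historical database.
sensor_columns = ["temp_probe_1", "temp_probe_2", "pressure", "flow_rate"]
X_hist = rng.normal(size=(200, len(sensor_columns)))  # stand-in historical windows

X_scaled = StandardScaler().fit_transform(X_hist)
pca = PCA(n_components=2).fit(X_scaled)

# Weight each feature's absolute loading on the retained components by the
# variance each component explains, then rank features by that score.
scores = np.abs(pca.components_.T) @ pca.explained_variance_ratio_
for name, score in sorted(zip(sensor_columns, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```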
Once trained, the classification model may be configured to operate on new data (e.g., real-time sensor readings over a predetermined time window) to diagnose/infer when equipment of the same (or at least similar) type is experiencing a specific type of deficiency, or to predict when the equipment is going to experience a specific type of deficiency. For example, for a given set of input data (corresponding to the features used during training) in a given time window, the classification model may output a classification that corresponds to one of the labels used during training (e.g., “Good,” “Failure Type 1,” etc.).
Further, in some embodiments, a computing system (possibly, but not necessarily, including the same computing device that trains and/or runs the classification model) may map the output of the classification model to a particular action or set of actions to be taken, in order to rectify the diagnosed performance problem, or to prevent a predicted performance problem from occurring. The computing system may also notify one or more users of the recommended action(s), and possibly also notify the user(s) of the diagnosed or predicted performance issue that was mapped to the action(s), in order to instigate completion of the action(s). The computing system may perform the mapping by accessing a database that includes a repository of subject matter expert knowledge, for example. Further, in some embodiments, individuals (e.g., subject matter experts) may enter information to confirm whether particular classifications output by the classification model were correct, and the computing system may use this information as training labels to further improve the accuracy of the classification model.
The systems and methods disclosed herein can identify problems and/or potential problems relating to equipment with improved reliability/consistency, and with far greater speed, as compared to the conventional practices described in the Background section above. This, in turn, can reduce the risks and costs associated with equipment performance failures or other deficiencies that might otherwise occur during production (or during development, etc.). Moreover, due to a reduced need for human monitoring, labor costs may be greatly reduced. Further, in some embodiments, costs associated with excessive maintenance can be reduced—without a corresponding increase in the risk of equipment failures/deficiencies—by triggering maintenance activities when those activities are truly needed, and not merely based on the passage of time or the level of equipment usage. The systems and methods described herein can also exhibit increased accuracy over time (e.g., by further training based on user confirmation of model classifications), and can facilitate the identification of previously unrecognized equipment deficiency types/modes.
The skilled artisan will understand that the figures, described herein, are included for purposes of illustration and are not limiting on the present disclosure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the present disclosure. It is to be understood that, in some instances, various aspects of the described implementations may be shown exaggerated or enlarged to facilitate an understanding of the described implementations. In the drawings, like reference characters throughout the various drawings generally refer to functionally similar and/or structurally similar components.
The various concepts introduced above and discussed in greater detail below may be implemented in any of numerous ways, and the described concepts are not limited to any particular manner of implementation. Examples of implementations are provided for illustrative purposes.
The system 100 also includes one or more sensor devices 104, which are configured to sense physical parameters associated with the equipment 102 and/or its contents or proximate external environment. For example, the sensor device(s) 104 may include one or more temperature sensors (e.g., to take readings of internal, surface, and/or external temperatures of the equipment 102 during operation), one or more pressure sensors (e.g., to take readings of internal and/or external pressures of the equipment 102 during operation), and/or one or more other sensor types. As a more specific example, the equipment 102 may be a sterilization tank, and the sensor device(s) 104 may include multiple temperature sensors at different positions within the tank. The sensor device(s) 104 may include sensors that only take direct measurements (e.g., temperature, pressure, flow rate, etc.), and/or “soft” sensing devices or systems that determine parameter values indirectly (e.g., a Raman analyzer and probe to determine chemical composition and molecular structure in a non-destructive manner), as is appropriate for the type of the equipment 102 and the operation for which the equipment 102 is configured to be used.
The sensor device(s) 104 may include one or more devices integrated on or within the equipment 102, and/or one or more devices affixed to or otherwise placed in proximity with the equipment 102. Depending on the embodiment, none, some, or all of the sensor device(s) 104 may be viewed as a part of the equipment 102. In particular, in embodiments where the performance of any or all of the sensor device(s) 104 is included in the equipment performance analysis (as described further below), references herein to “the equipment 102” include those sensor device(s) 104. For example, an analysis of the performance of a sterilization tank may encompass not only analyzing the ability of the tank to do its intended task (e.g., hold the desired contents without leaks, and subject the contents to a desired temperature profile), but also analyzing the performance of a number of temperature sensors affixed to or integrated with the tank.
The system 100 also includes a computing system 110 coupled to the sensor device(s) 104. As discussed in further detail below, the computing system 110 may include a single computing device, or multiple computing devices (e.g., one or more servers and one or more client devices) that are either co-located or remote from each other. The computing system 110 is generally configured to: (1) analyze the readings generated by the sensor device(s) 104 in order to infer/diagnose or predict/anticipate deficiencies (e.g., faults or otherwise unacceptable performance) of the equipment 102; (2) identify actions that should be taken based on the inferred or predicted deficiencies; and (3) notify users of the identified actions. In the example embodiment shown in FIG. 1, the computing system 110 includes a processing unit 120, a network interface 122, a display 124, a user input device 126, and a memory 128.
The processing unit 120 includes one or more processors, each of which may be a programmable microprocessor that executes software instructions stored in the memory 128 to perform some or all of the functions of the computing system 110 as described herein. Alternatively, one or more of the processors in the processing unit 120 may be other types of processors (e.g., application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.).
The network interface 122 may include any suitable hardware (e.g., front-end transmitter and receiver hardware), firmware, and/or software configured to use one or more communication protocols to communicate with external devices and/or systems (e.g., the sensor device(s) 104, or a server, not shown in FIG. 1, that provides an interface between the computing system 110 and the sensor device(s) 104, etc.). For example, the network interface 122 may be or include an Ethernet interface.
The display 124 may use any suitable display technology (e.g., LED, OLED, LCD, etc.) to present information to a user, and the user input device 126 may be a keyboard or other suitable input device. In some embodiments, the display 124 and the user input device 126 are integrated within a single device (e.g., a touchscreen display). Generally, the display 124 and the user input device 126 may combine to enable a user to view and/or interact with visual presentations (e.g., graphical user interfaces or displayed information) output by the computing system 110, e.g., for purposes such as notifying users of equipment faults or other deficiencies, and recommending any mitigating or preventative actions for the users to take.
The memory 128 may include one or more physical memory devices or units containing volatile and/or non-volatile memory, and may include memories located in different computing devices of the computing system 110. Any suitable memory type or types may be used, such as read-only memory (ROM), solid-state drives (SSDs), hard disk drives (HDDs), and so on. The memory 128 stores the instructions of one or more software applications, including an equipment analysis application 130. The equipment analysis application 130, when executed by the processing unit 120, is generally configured to train a classification model 132, to use the trained classification model 132 to infer or predict deficient equipment performance (i.e., for equipment 102 and possibly also other equipment), to identify remedial actions, and to notify users of the deficiencies and corresponding actions. To this end, the equipment analysis application 130 includes a dimension reduction unit 140, a training unit 142, a classification unit 144, and a mapping unit 146. The units 140 through 146 may be distinct software components or modules of the equipment analysis application 130, or may simply represent functionality of the equipment analysis application 130 that is not necessarily divided among different components/modules. For example, in some embodiments, the classification unit 144 and the training unit 142 are included in a single software module. Moreover, in some embodiments, the different units 140 through 146 may be distributed among multiple copies of the equipment analysis application 130 (e.g., executing at different devices in the computing system 110), or among different types of applications stored and executed at one or more devices of the computing system 110. The operation of each of the units 140 through 146 is described in further detail below, with reference to the operation of the system 100.
The classification model 132 may be any suitable type of classifier, such as a support vector machine (SVM) model, a decision tree model, a deep neural network, a k-nearest neighbor (KNN) model, a naive Bayes classifier (NBC) model, a long short-term memory (LSTM) model, an HDBSCAN clustering model, or any other model that can classify sets of input data into one of two or more possible classifications. In some embodiments, the classification model 132 also operates upon the values of one or more other types of parameters, in addition to those generated by the sensor device(s) 104. For example, in addition to the readings from the sensor device(s) 104, the classification model 132 may accept a time parameter value as an input (e.g., the number of minutes or hours since a process started). In some embodiments, the classification model 132 accepts one or more categorical parameters as inputs (e.g., 0 or 1, or category A, B, or C, etc.). A categorical (e.g., binary) parameter may represent whether a particular operation occurred, whether a particular substance was added, and so on. Moreover, the classification model 132 may accept one or more inputs that reflect a “memory” component. For example, one parameter may be a temperature reading from a probe at x minutes, while another may be a temperature reading from the same probe at x−1 minutes, and so on. In other embodiments, the classification model 132 itself has a memory component (i.e., the classification model 132 is “stateful”).
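The following sketch illustrates how such a “memory” component can be represented in the input data: each row combines a time parameter, a categorical parameter, and lagged copies of a sensor reading. The column names, values, and window length are illustrative assumptions, not values from the disclosure.

```python
import pandas as pd

# One row per minute of a hypothetical sterilization run.
readings = pd.DataFrame({
    "minutes_elapsed": [0, 1, 2, 3, 4],          # time parameter
    "steam_valve_open": [0, 1, 1, 1, 1],         # categorical (binary) parameter
    "temp_c": [25.0, 60.2, 95.4, 120.9, 121.1],  # sensor reading
})

# Lagged copies of the temperature give the model a "memory" of the readings
# at x-1 and x-2 minutes within a single input row.
for lag in (1, 2):
    readings[f"temp_c_lag{lag}"] = readings["temp_c"].shift(lag)

model_inputs = readings.dropna()  # keep only rows with a complete memory window
print(model_inputs)
```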
Depending on the embodiment, the classification model 132 may classify sets of inputs (parameter values) as one of two possible classifications (e.g., “good performance” or “poor performance”), or as one of more than two possible classifications (e.g., “Good,” “Failure Type A,” or “Failure Type B”). Some examples of sensor readings that may correspond to good performance, or to specific types of equipment deficiencies, are discussed below.
As will also be described in further detail below, the computing system 110 is configured to access a historical database 150 for training purposes, and is configured to access an expert knowledge database 152 to identify recommended actions. The historical database 150 may store parameter values associated with past runs of the equipment 102 and/or past runs of other, similar equipment. For example, the historical database 150 may store sensor readings that were generated by the sensor device(s) 104 (and/or by other, similar sensor devices), and possibly also values of other relevant parameters (e.g., time). The historical database 150 may also store “label” information indicating a particular equipment deficiency, or the lack of any such deficiency, for each set of historical parameter values. For example, some sets of sensor readings may be associated with “Good” labels in the historical database 150, other sets of sensor readings may be associated with “Failure Type 1” labels in the historical database 150, and so on.
The expert knowledge database 152 may be a repository of information representing actions that subject matter experts took in the past in order to mitigate or prevent equipment issues (for the equipment 102 and/or similar equipment) when certain types of equipment deficiencies were identified. For example, the expert knowledge database 152 may include one or more tables that associate each of the deficiency types represented by the labels of the historical database 150 (e.g., “Failure Type 1,” etc.) with one or more appropriate actions that could mitigate or prevent the corresponding problem. The databases 150, 152 may be stored in a persistent memory of the memory 128, or in a different persistent memory of the computing system 110 or another device or system. In some embodiments, the computing system 110 accesses one or both of the databases 150, 152 via the Internet using the network interface 122.
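As a minimal sketch of how these two databases might be shaped, the following uses SQLite with hypothetical table and column names chosen to mirror the description above; an actual deployment could use any suitable storage technology. The key property is simply that the historical table pairs each set of parameter values with an expert label, while the expert knowledge table is keyed by deficiency type.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE historical_runs (
    run_id        INTEGER PRIMARY KEY,
    sensor_values TEXT,    -- serialized readings for one time window
    label         TEXT     -- e.g. 'Good', 'Failure Type 1', ...
);
CREATE TABLE expert_knowledge (
    deficiency_type    TEXT PRIMARY KEY,  -- matches the training labels
    recommended_action TEXT               -- mitigating or preventative action
);
""")
con.execute("INSERT INTO expert_knowledge VALUES (?, ?)",
            ("Failure Type 1", "Inspect and/or replace the steam trap"))

# Look up the action for a given classification.
action, = con.execute(
    "SELECT recommended_action FROM expert_knowledge WHERE deficiency_type = ?",
    ("Failure Type 1",)).fetchone()
print(action)
```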
As noted above, the computing system 110 may include one device or multiple devices and, if multiple devices, may be co-located or remotely distributed (e.g., with Ethernet and/or Internet communication between the different devices). In one embodiment, for example, a first server of the computing system 110 (including units 140, 142) trains the classification model 132, a second server of the computing system 110 collects real-time measurements from the sensor device(s) 104, and a third server of the computing system 110 (including units 144, 146) receives the measurements from the second server and uses a copy of the trained classification model 132 to generate classifications (i.e., diagnoses or predictions) based on the received measurements. As another example, the third server of the above example does not store a copy of the trained classification model 132, and instead utilizes the classification model 132 by providing the measurements to the second server (e.g., if the classification model 132 is made available via a web services arrangement). As used herein, unless the context of the usage of the term clearly indicates otherwise, terms such as “running,” “using,” “implementing,” etc., a model such as classification model 132 are broadly used to encompass the alternatives of directly executing a locally stored model, or requesting that another device (e.g., a remote server) execute the model. It is understood that still other configurations and distributions of functionality, beyond those described above, are possible.
Operation of the system 100 will now be described in further detail, with reference to both the components of FIG. 1 and the process 200 of FIG. 2. In the process 200, historical data 202 (e.g., labeled sensor readings from past runs, as stored in the historical database 150) is first collected, and at stage 204 the dimension reduction unit 140 reduces the dimensionality of the parameter values reflected in the historical data 202.
After stage 204, at stage 206 of the process 200, the training unit 142 trains the classification model 132 using the parameter values generated at stage 204. For example, if the dimension reduction unit 140 implements a PCA technique to reduce the original parameter values (e.g., historical readings from sensor devices) to values in two dimensions (PC1, PC2) at stage 204, then the training unit 142 may train the classification model 132 at stage 206 using those (PC1, PC2) values and their corresponding, manually-generated labels. In other embodiments, however, stage 204 is omitted from the process 200 and the dimension reduction unit 140 is omitted from the system 100. In this latter case, the training unit 142 may instead train the classification model 132 using the original parameter values from the historical data 202 as direct inputs. In either case, for good performance of the classification model 132, the historical data 202 should include numerous and diverse examples of each type of classification desired (e.g., “good” performance and one or more specific types of equipment deficiencies). The training unit 142 may also validate and/or further qualify the trained classification model 132 at stage 206 (e.g., using portions of the historical data 202 that were not used for training).
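A minimal sketch of stages 204 and 206 under this PCA-plus-SVM example, assuming scikit-learn and synthetic stand-in data in place of the historical data 202, might look as follows; the validation split mirrors the holdout described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_hist = rng.normal(size=(300, 12))                        # 12 original features per window
y_hist = rng.choice(["Good", "Failure Type 1"], size=300)  # expert-assigned labels

pca = PCA(n_components=2).fit(X_hist)   # stage 204: dimension reduction
X_2d = pca.transform(X_hist)            # (PC1, PC2) values

# Hold out part of the data to validate the trained model.
X_train, X_val, y_train, y_val = train_test_split(
    X_2d, y_hist, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)  # stage 206: training
print(f"validation accuracy: {clf.score(X_val, y_val):.2f}")
```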
As the equipment 102 operates, the sensor device(s) 104 generate at least a portion of the new data 208. For example, the sensor device(s) 104 may each generate one real-time reading (e.g., temperature, pressure, pH level, etc.) per fixed time period (e.g., every five seconds, every minute, etc.). The type and frequency of the readings may match the data that was used during the training phase.
At stage 210, the equipment analysis application 130 (or other software) filters/pre-processes the new data 208. Stage 210 may apply a filter to ensure that only data from some pre-defined, current time window is retrieved, for example. As another example, the equipment analysis application 130 (or other software) pre-processes the sensor readings at stage 210 to put those readings in the same format as the historical data 202 that was used for training. If the sensor readings from the sensor device(s) 104 are captured less frequently than the sensor readings used during training, for example, then the equipment analysis application 130 may generate additional “readings” at stage 210 using an interpolation technique.
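As an illustration of this pre-processing, the following sketch (assuming pandas, with hypothetical timestamps and a one-minute training sample rate) filters readings to a time window and interpolates sparse readings to match the frequency used during training.

```python
import pandas as pd

# Stand-in raw readings captured once every two minutes.
raw = pd.DataFrame(
    {"temp_c": [25.0, 80.0, 121.0]},
    index=pd.to_datetime(["2024-01-01 08:00", "2024-01-01 08:02",
                          "2024-01-01 08:04"]),
)

# Keep only the pre-defined current time window.
window = raw.loc["2024-01-01 08:00":"2024-01-01 08:04"]

# Training data was sampled once per minute, so generate the intermediate
# "readings" by time-based interpolation.
resampled = window.resample("1min").interpolate(method="time")
print(resampled)
```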
At stage 212, the dimension reduction unit 140, or a similar unit, reduces the dimensionality of the parameter values reflected by the new data 208 (possibly after processing at the filtering stage 210).
At stage 214, the classification unit 144 runs the trained classification model 132 using the parameter values generated at stage 212. For example, if the dimension reduction unit 140 implements a PCA technique to reduce the original parameter values (e.g., readings from the sensor device(s) 104) to values in two dimensions (PC1, PC2) at stage 212, the classification unit 144 may run the classification model 132 at stage 214 on those (PC1, PC2) values. An example of classification in one such embodiment, where the dimension reduction unit 140 reduces the input parameter values to two dimensions and the classification model 132 is an SVM model, is discussed below.
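Continuing the training sketch above (and reusing its `pca` and `clf` objects), stages 212 and 214 might reduce a new window of readings and classify the resulting (PC1, PC2) point as follows.

```python
import numpy as np

# Stand-in for one new window of readings with the same 12 features as training.
new_window = np.random.default_rng(2).normal(size=(1, 12))

pc_values = pca.transform(new_window)       # stage 212: reduce to (PC1, PC2)
classification = clf.predict(pc_values)[0]  # stage 214: run the trained model
print(classification)                       # e.g. 'Good' or 'Failure Type 1'
```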
In alternative embodiments, stage 212 is omitted from the process 200, in which case the classification unit 144 may instead run the classification model 132 on the original parameter values from the new data 208 (possibly after processing at stage 210) as direct inputs. For example, the system 100 may omit the dimension reduction unit 140, and the process 200 may omit both stage 204 and stage 212.
The classification model 132 outputs a particular classification for each set of input data, e.g., for each of a number of uniform time periods while the equipment 102 is in use (e.g., every 10 minutes, or every hour, every six hours, every day, etc.). The classification may be an inference, i.e., a diagnosis of a current problem (e.g., failure/fault) exhibited by the equipment 102 or the lack thereof. Alternatively, the classification may be a prediction that the equipment 102 will exhibit a particular problem in the future, or a prediction that the equipment 102 will not exhibit problems in the future. In some embodiments, the classification model 132 is configured/trained to output any one of a set of classifications that includes both inferences and predictions. For example, classification “A” may indicate no present or expected problems for the equipment 102, classification “B” may indicate that the equipment 102 is currently experiencing a particular type of fault, classification “C” may indicate that the equipment 102 will likely experience a particular type of fault (or otherwise result in deficient performance) in the relatively near future if remedial actions are not taken, and so on.
At stage 216, the classifications output by the classification model 132 are provided back to the historical data 202, for use in further training (refinement) of the classification model 132. For this additional training, the equipment analysis application 130 or other software may provide a user interface for individuals (e.g., subject matter experts) to confirm whether a classification is correct, or to enter a correct classification if the output of the classification model 132 is incorrect. These manually-entered or confirmed classifications may then be used as labels for the additional training. The additional training can be particularly beneficial when the amount of historical data 202 available for the initial training was relatively small. In some embodiments, stage 216 is omitted from the process 200.
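Continuing the same running sketch, stage 216's feedback loop might be approximated by appending an expert-confirmed label to the training set and refitting; a production system would typically accumulate many such confirmations before retraining.

```python
import numpy as np

# The window just classified, with the expert's confirmed/corrected label.
confirmed_pc_values = pca.transform(new_window)
confirmed_label = np.array(["Failure Type 1"])  # hypothetical correction

# Augment the training set and refit the classifier.
X_aug = np.vstack([X_2d, confirmed_pc_values])
y_aug = np.concatenate([y_hist, confirmed_label])
clf.fit(X_aug, y_aug)
```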
At stage 218, the mapping unit 146 maps the classification made by the classification model 132 to one or more recommended actions. To this end, the mapping unit 146 may use the classification as a key to a table stored in the expert knowledge database 152, for example. The corresponding action(s) may include one or more preventative/maintenance actions, and/or one or more actions to repair a current problem. For example, the mapping unit 146 may map a classification “Fault Type C” to an action to inspect and/or change a filter. In some embodiments, the mapping unit 146 maps at least some of the available classifications to sets of alternative actions that might be useful (e.g., if subject matter experts had, in the past, found that there were several different ways in which to best address a particular problem with the equipment 102 or similar equipment).
As one example, for an embodiment in which the equipment 102 is a sterilization tank, the expert knowledge database 152 may associate each of three deficiency classifications with one or more corresponding mitigating or preventative actions. In that example, the classification model 132 may also support a fourth classification that corresponds to “good” performance, and therefore requires no mapping. In some embodiments, however, even a “good” classification requires a mapping (e.g., to one or more maintenance actions that represent a minimal or default level of maintenance).
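A minimal in-memory stand-in for this stage 218 lookup is sketched below; the mapping entries are hypothetical placeholders, with only the “Fault Type C” example drawn from the text above.

```python
# Hypothetical classification-to-action mappings; a deployment would read
# these from the expert knowledge database 152.
ACTION_MAP = {
    "Fault Type C": ["Inspect the filter", "Change the filter if clogged"],
    "Good": [],  # some embodiments map 'good' to default maintenance actions
}

def map_to_actions(classification: str) -> list:
    """Return the recommended action(s) for a model classification."""
    return ACTION_MAP.get(classification, ["Escalate to a subject matter expert"])

print(map_to_actions("Fault Type C"))
```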
At stage 220, the equipment analysis application 130 presents or otherwise provides the recommended action(s) to one or more system users. For example, the equipment analysis application 130 may generate or populate a graphical user interface or other presentation (or a portion thereof) at stage 220, for presentation to a user via the display 124 and/or one or more other displays/devices. The action(s) (and possibly the corresponding classification produced by the classification model 132) may be individually shown, and/or may be used to provide a view of higher-level statistics, etc. Additionally or alternatively, the equipment analysis application 130 may automatically generate an email or text notification for one or more users, including a message that indicates the recommended action(s) and the corresponding classification. The notifications may be provided in real-time, or nearly in real-time, as sensor data is made available (e.g., as soon as the last sensor readings within a given time window are generated by the sensor device(s) 104).
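One possible shape for the email notification path at stage 220, sketched with the Python standard library; the addresses, host name, and message text are placeholders, not details from the disclosure.

```python
import smtplib
from email.message import EmailMessage

def notify(classification: str, actions: list) -> None:
    """Email the recommended action(s) and corresponding classification."""
    msg = EmailMessage()
    msg["Subject"] = f"Equipment alert: {classification}"
    msg["From"] = "equipment-analysis@example.com"   # placeholder address
    msg["To"] = "operator@example.com"               # placeholder address
    msg.set_content("Recommended action(s):\n" + "\n".join(actions))
    with smtplib.SMTP("mail.example.com") as server:  # placeholder host
        server.send_message(msg)
```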
In some embodiments, the process 200 includes additional stages not shown in FIG. 2.
In some embodiments and/or scenarios, stages 204 through 220 all occur prior to the primary intended use of the equipment 102. If the equipment 102 is intended for use in the commercial manufacture of a biopharmaceutical drug product, for example, stages 204 through 220 may occur before the equipment 102 is used during the commercial manufacture process for that drug product. In this manner, the risk of unacceptable equipment performance occurring during production may be greatly reduced, thereby lowering the risk of costs and delays due to “down time,” and/or preventing quality issues. As another example, if the equipment 102 is intended for use in the product development stage, stages 204 through 220 may occur before the equipment 102 is used during that development process, potentially lowering costs and drug development times. In some embodiments, however, stages 210 through 220 (or just stages 210 through 216) also occur, or instead occur, during the primary use of the equipment 102 (e.g., during commercial manufacture or product development).
In some scenarios, new types of equipment deficiencies may be discovered during the process 200. For example, a recommended action output at stage 220 may fail to mitigate or prevent a particular equipment problem. In that case, subject matter experts may study the problem to identify a “fix.” Once the fix is identified, the problem can be manually re-created, to create additional training data in the historical database 150. The classification model 132 can then be modified and retrained, now with an additional classification corresponding to the newly identified problem. Moreover, the expert knowledge database 152 can be expanded to include the appropriate mitigating or preventative action(s) for that problem.
In some instances, it may be impractical to develop new training data on a scale that allows the classification model 132 to accurately identify certain equipment issues. In these cases, the classification model 132 may be supplemented with “hard coded” classifiers (e.g., fixed algorithms/rules to identify a particular type of equipment deficiency).
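A hard-coded classifier of this kind might be as simple as the following rule, sketched with an assumed temperature threshold and a hypothetical deficiency label; it draws on the intermittent-dip and steam-trap examples mentioned earlier.

```python
from typing import Optional
import numpy as np

def rule_based_check(temps_c: np.ndarray) -> Optional[str]:
    """Flag brief, intermittent dips below a hold-phase setpoint."""
    dips = np.flatnonzero(temps_c < 118.0)   # assumed threshold for illustration
    if 0 < dips.size < 5:                    # short, intermittent dips only
        return "Suspected faulty steam trap" # hypothetical deficiency label
    return None

# Example: a hold phase with two brief dips triggers the rule.
print(rule_based_check(np.array([121.0, 121.2, 117.5, 121.1, 117.8, 121.0])))
```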
Performance of a system and process similar to the system 100 and process 200 was tested with about 20 different combinations of feature engineering techniques (e.g., PCA, PPCA, etc.) and classification models (e.g., SVM, decision tree, etc.), for the example case of a “steam-in-place” sterilization tank. The best performance for that particular use case was provided by using a PCA technique to reduce the n-dimensional data (for n features/inputs) to two dimensions, and an SVM classification model, which resulted in about 94% to 97% classification accuracy, depending on which data was randomly selected to serve as the testing and training datasets, and depending on the equipment under consideration. Overall accuracy for an SVM classification model with PCA, across different datasets and equipment, was about 95%.
Across different datasets and equipment, random forest classification with PCA also performed well, providing about 96% overall accuracy. However, SVM classification was more consistently accurate across all use cases examined. NBC classification, decision tree classification, and KNN classification (each with PCA) provided overall accuracy of about 89%, 89%, and 85%, respectively.
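The comparison methodology described above can be sketched as follows: several classifiers, each fed the same PCA-reduced features, scored on a common held-out split. Synthetic data stands in for the real steam-in-place datasets, so the printed accuracies will not match the reported figures.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))  # stand-in feature matrix
y = rng.choice(["Good", "Failure Type 1", "Failure Type 2"], size=500)

# Common PCA reduction and train/test split for all candidate models.
X_2d = PCA(n_components=2).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_2d, y, random_state=0)

models = {
    "SVM": SVC(),
    "Random forest": RandomForestClassifier(random_state=0),
    "NBC": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    print(f"{name}: {model.fit(X_tr, y_tr).score(X_te, y_te):.2f}")
```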
In some embodiments, the equipment analysis application 130 also (or instead) generates and/or populates other types of presentations. In some embodiments, for example, the equipment analysis application 130 generates or populates a text-based message or visualization for each run/classification (e.g., at stage 220 of the process 200).
At block 602, values of one or more parameters associated with equipment (e.g., the equipment 102) are determined by monitoring the parameter(s) over a time period during which the equipment is in use (e.g., during a sterilization operation, or during a harvesting operation, etc., depending on the nature of the equipment). The parameter(s) may include temperature, pressure, pH level, humidity, or any other suitable type of physical characteristic associated with the equipment. Block 602 may include receiving the parameter values, directly or indirectly, from one or more sensor devices (e.g., the sensor device(s) 104) that generated the values. In other embodiments (e.g., if the method 600 is performed by the system 100 as a whole), block 602 may include the act of generating the values (e.g., by the sensor device(s) 104). The time period may be any suitable length of time (e.g., 10 minutes, six hours, one day, etc.), and within that time period the parameter values may correspond to measurements taken at any suitable frequency (e.g., once per second, once per minute, etc.) or frequencies (e.g., in some embodiments where multiple sensor devices are used).
At block 604, a performance classification of the equipment is determined by processing the values determined at block 602 using a classification model. The classification model (e.g., the classification model 132) may include an SVM model, a decision tree model, a deep neural network, a KNN model, an NBC model, an LSTM model, an HDBSCAN clustering model, or any other suitable type of model that can classify sets of input data as one of multiple available classifications. The classification model may be a single trained model, or may include multiple trained models.
At block 606, the performance classification is mapped to a mitigating or preventative action. Block 606 may include using the performance classification as a key to a database (e.g., expert knowledge database 152), for example. That is, block 606 may include determining which action corresponds to the performance classification in such a database. In some embodiments, the performance classification is also mapped to one or more additional mitigating or preventative actions, which may include actions that should be taken cumulatively (e.g., clean component A and inspect component B), and/or actions that should be considered as alternatives (e.g., clean component A or replace component A).
At block 608, an output indicative of the mitigating or preventative action is generated. In some embodiments, the output is also indicative of the performance classification that was mapped to the action (e.g., a code corresponding to the classification, and/or a text description of the classification). Moreover, in some embodiments, the output may include information indicative of classifications and/or corresponding actions for each of multiple time periods in which the equipment was used. The output may be a visual presentation (e.g., on the display 124), a portion of a visual presentation (e.g., specific fields or charts, etc.), or data used to generate or trigger any such presentation, for example. In some embodiments, block 608 includes generating data to populate a web-based report that can be accessed by multiple users via their web browsers.
In some embodiments, the method 600 also includes one or more additional blocks not shown in FIG. 6.
Embodiments of the disclosure relate to a non-transitory computer-readable storage medium having computer code thereon for performing various computer-implemented operations. The term “computer-readable storage medium” is used herein to include any medium that is capable of storing or encoding a sequence of instructions or computer codes for performing the operations, methodologies, and techniques described herein. The media and computer code may be those specially designed and constructed for the purposes of the embodiments of the disclosure, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable storage media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and execute program code, such as ASICs, programmable logic devices (“PLDs”), and ROM and RAM devices.
Examples of computer code include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter or a compiler. For example, an embodiment of the disclosure may be implemented using Java, C++, or other object-oriented programming language and development tools. Additional examples of computer code include encrypted code and compressed code. Moreover, an embodiment of the disclosure may be downloaded as a computer program product, which may be transferred from a remote computer (e.g., a server computer) to a requesting computer (e.g., a client computer or a different server computer) via a transmission channel. Another embodiment of the disclosure may be implemented in hardwired circuitry in place of, or in combination with, machine-executable software instructions.
As used herein, the singular terms “a,” “an,” and “the” may include plural referents, unless the context clearly dictates otherwise.
As used herein, the terms “connect,” “connected,” and “connection” refer to (and connections depicted in the drawings represent) an operational coupling or linking. Connected components can be directly or indirectly coupled to one another, for example, through another set of components.
As used herein, the terms “approximately,” “substantially,” “substantial” and “about” are used to describe and account for small variations. When used in conjunction with an event or circumstance, the terms can refer to instances in which the event or circumstance occurs precisely as well as instances in which the event or circumstance occurs to a close approximation. For example, when used in conjunction with a numerical value, the terms can refer to a range of variation less than or equal to ±10% of that numerical value, such as less than or equal to ±5%, less than or equal to ±4%, less than or equal to ±3%, less than or equal to ±2%, less than or equal to ±1%, less than or equal to ±0.5%, less than or equal to ±0.1%, or less than or equal to ±0.05%. For example, two numerical values can be deemed to be “substantially” the same if a difference between the values is less than or equal to ±10% of an average of the values, such as less than or equal to ±5%, less than or equal to ±4%, less than or equal to ±3%, less than or equal to ±2%, less than or equal to ±1%, less than or equal to ±0.5%, less than or equal to ±0.1%, or less than or equal to ±0.05%.
Additionally, amounts, ratios, and other numerical values are sometimes presented herein in a range format. It is to be understood that such range format is used for convenience and brevity and should be understood flexibly to include numerical values explicitly specified as limits of a range, but also to include all individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly specified.
While the present disclosure has been described and illustrated with reference to specific embodiments thereof, these descriptions and illustrations do not limit the present disclosure. It should be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the true spirit and scope of the present disclosure as defined by the appended claims. The illustrations may not be necessarily drawn to scale. There may be distinctions between the artistic renditions in the present disclosure and the actual apparatus due to manufacturing processes, tolerances and/or other reasons. There may be other embodiments of the present disclosure which are not specifically illustrated. The specification (other than the claims) and drawings are to be regarded as illustrative rather than restrictive. Modifications may be made to adapt a particular situation, material, composition of matter, technique, or process to the objective, spirit and scope of the present disclosure. All such modifications are intended to be within the scope of the claims appended hereto. While the techniques disclosed herein have been described with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form an equivalent technique without departing from the teachings of the present disclosure. Accordingly, unless specifically indicated herein, the order and grouping of the operations are not limitations of the present disclosure.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US22/11007 | 1/3/2022 | WO |

Number | Date | Country
---|---|---
63133554 | Jan 2021 | US