The present disclosure generally relates to systems, apparatus and methods for predictive process control (PPC) of a manufacturing process. More particularly, the subject technology provides improvements to the manufacturing process and in particular, encompasses systems and methods for adaptively controlling various stations in the manufacturing process, as well as optimizing the final manufacturing product and process, based on predictions made using machine-learning models. As discussed in further detail below, some aspects of the technology encompass systems and methods for training the machine-learning models.
To manufacture products that consistently meet desired design specifications, safely, on time and with minimum waste, requires constant monitoring of, and adjustments to, the manufacturing process.
In some aspects, the disclosed technology relates to the use of a deep-learning controller for monitoring and improving a manufacturing process. In some embodiments, the disclosed technology encompasses a computer-implemented method that includes steps for receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process, predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values, and determining if the deep-learning controller can control the manufacturing process based on the expected value.
In another embodiment, the disclosed technology encompasses a system having one or more processors, and a non-transitory memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including: receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process, predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values, and determining if the deep-learning controller can control the manufacturing process based on the expected value.
In yet another embodiment, the disclosed technology encompasses a non-transitory computer-readable storage medium comprising instructions stored therein, which when executed by one or more processors, cause the one or more processors to perform operations including: receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process, predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values, and determining if the deep-learning controller can control the manufacturing process based on the expected value.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting in their scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject technology. However, it will be clear and apparent that the subject technology is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Additional details relating to the use of image classification in manufacturing control are provided by U.S. Provisional Application No. 62/836,202, entitled “DEPLOYMENT OF AN IMAGE CLASSIFICATION MODEL ON A SINGLE-BOARD COMPUTER FOR MANUFACTURING CONTROL,” which is hereby incorporated by reference herein in its entirety.
Additional details relating to the use of computation models for optimizing assembly/manufacturing operations are provided by U.S. Provisional Patent Application No. 62/836,192, entitled “A COMPUTATION MODEL FOR DECISION-MAKING AND ASSEMBLY OPTIMIZATION IN MANUFACTURING,” and U.S. patent application Ser. No. 16/289,422, entitled “DYNAMIC TRAINING FOR ASSEMBLY LINES,” both of which are hereby incorporated by reference in their entirety.
The manufacturing process is complex and comprises raw materials being processed by different process stations (or “stations”) until a final product (referred to herein as “final output”) is produced. With the exception of the final process station, each process station receives an input for processing and outputs an intermediate output that is passed along to a subsequent (downstream) process station for additional processing. The final process station receives an input for processing and outputs the final output.
Each process station can include one or more tools/equipment that performs a set of process steps on: received raw materials (this can apply to a first station or any of the subsequent stations in the manufacturing process) and/or the received output from a prior station (this applies to any of the subsequent stations in the manufacturing process). Examples of process stations can include, but are not limited to: conveyor belts, injection molding presses, cutting machines, die stamping machines, extruders, CNC mills, grinders, assembly stations, 3D printers, and quality control and validation stations. Example process steps can include: transporting outputs from one location to another (as performed by a conveyor belt); feeding material into an extruder, melting the material and injecting the material through a mold cavity where it cools and hardens to the configuration of the cavity (as performed by an injection molding press); cutting material into a specific shape or length (as performed by a cutting machine); and pressing material into a particular shape (as performed by a die stamping machine).
In some manufacturing processes, several process stations can run in parallel. In other words, a single process station can send its intermediate output to one or more stations (e.g., 1 to N stations), and a single process station can receive and combine intermediate outputs from one to N stations. Moreover, a single process station can perform the same process step or different process steps, either sequentially or non-sequentially, on the received raw material or intermediate output during a single iteration of a manufacturing process.
Operation of each process station can be governed by one or more process controllers. In some implementations, each process station has one or more process controllers (referred to herein as “a station controller”) that are programmed to control the operation of the process station (the programming algorithms are referred to herein as “control algorithms”). However, in some aspects, a single process controller may be configured to control the operations of two or more process stations.
An operator, or control algorithms, can provide the station controller with station controller setpoints (or “setpoints” or “controller setpoints” or CSPs) that represent the desired value and/or range of values for each control value. The attributes/parameters associated with a station's tools/equipment/process steps that can be measured during the operation of the station are either control values or station values. If the measured attributes/parameters are also used to control the station, then they are “control values.” Otherwise, the measured attributes/parameters are “station values.” Examples of control or station values include, but are not limited to: speed, temperature, pressure, vacuum, rotation, current, voltage, power, viscosity, materials/resources used at the station, throughput rate, outage time, noxious fumes, and the type and order of the steps performed at the station. Although the examples are the same, whether a measured attribute/parameter is considered a control value or a station value will depend on the particular station and whether the measured attribute/parameter is used to control the station or is simply a byproduct of the operation of the station.
The control algorithms can also include instructions for monitoring control values, comparing control values to corresponding setpoints and determining what actions to take when the control value is not equal to (or not within a defined range of) a corresponding station controller setpoint. For example, if the measured present value of the temperature for the station is below the setpoint, then a signal may be sent by the station controller to increase the temperature of the heat source for the station until the present value temperature for the station equals the setpoint. Conventional process controllers used in the manufacturing process to control a station are limited, because they follow static algorithms (e.g., on/off control, PI control, PID control, Lead/Lag control) for adjusting setpoints and prescribing what actions to take when a control value deviates from a setpoint. Conventional process controllers also have limited capability, if any, to analyze non-control values such as ambient conditions (e.g., external temperature, humidity, light exposure, wear and tear of the station), station values, intermediate or final output values, feedback from other process stations, and to make dynamic adjustments to a station controller's setpoints or control algorithms, that control the operation of an associated station.
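For illustration only, the following is a minimal sketch of the kind of static control algorithm a conventional station controller might follow; the class name, gains, and temperature values are hypothetical and not taken from any particular controller:

```python
# Hypothetical sketch of a conventional, static PID control loop for a
# single station; names and gain values are illustrative only.

class PIDController:
    def __init__(self, setpoint, kp=1.0, ki=0.1, kd=0.05):
        self.setpoint = setpoint      # desired control value (e.g., temperature)
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_value, dt=1.0):
        """Return an actuator command (e.g., heater power) from the error."""
        error = self.setpoint - measured_value
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: the measured station temperature is below the 95-degree setpoint,
# so the controller outputs a positive command to increase the heat source.
controller = PIDController(setpoint=95.0)
command = controller.update(measured_value=92.0)
print(command)  # positive value -> increase heating
```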
A process value, as used herein, refers to a station value or control value that is aggregated or averaged across an entire series of stations (or a subset of the stations) that are part of the manufacturing process. Process values can include, for example: total throughput time, total resources used, average temperature and average speed.
In addition to station and process values, various characteristics of a process station's product output (i.e., intermediate output or final output) can be measured, for example: temperature, weight, product dimensions, mechanical, chemical, optical and/or electrical properties, number of design defects, and the presence or absence of a defect type. The various characteristics that can be measured will be referred to generally as an “intermediate output value” or “final output value.” The intermediate/final output value can reflect a single measured characteristic of an intermediate/final output or an overall score based on a specified set of characteristics associated with the intermediate/final output that are measured and weighted according to a predefined formula.
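As a hedged sketch of the overall-score variant described above, the characteristic names, the normalization assumption, and the weights below are invented purely for illustration:

```python
# Hypothetical sketch: combining measured output characteristics into a
# single intermediate/final output value via predefined weights.

def output_value(measurements: dict, weights: dict) -> float:
    """Weighted score over a specified set of measured characteristics.

    Assumes each measurement has been normalized to a common scale
    (e.g., 0.0-1.0) before weighting.
    """
    return sum(weights[name] * measurements[name] for name in weights)

score = output_value(
    measurements={"weight": 0.92, "hardness": 0.88, "defect_free": 1.0},
    weights={"weight": 0.2, "hardness": 0.3, "defect_free": 0.5},
)
print(round(score, 3))  # 0.948
```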
Mechanical properties can include hardness, compression, tack, density and weight. Optical properties can include absorption, reflection, transmission and refraction. Electrical properties can include electrical resistivity and conductivity. Chemical properties can include enthalpy of formation, toxicity, chemical stability in a given environment, flammability (the ability to burn), preferred oxidation states, pH (acidity/alkalinity), chemical composition, boiling point and vapor point. The disclosed mechanical, optical, chemical and electrical properties are just examples and are not intended to be limiting.
As shown in
In some aspects, statistical values for intermediate outputs and/or final outputs can be used to make determinations as to when an article of manufacture is “in-specification,” i.e., when an output has achieved certain pre-specified design requirements. As such, in-specification can refer to an article of manufacture, or a characteristic of an article of manufacture, that meets or exceeds a specified design requirement or set of requirements. By way of example, an article of manufacture that is deemed to be in-specification may be one that achieves specified statistical requirements, such as having acceptable deviations from an ideal or average (mean) value.
As long as the values for the intermediate/final output are within the upper and lower control limits, the process station/overall process is considered to be in control and, typically, no interventions or corrective actions will be taken. Interventions or corrective action will typically be taken when the value for the intermediate/final output exceeds the upper or lower control limit defined for that measurement. However, statistical process control (SPC) has limited impact on improving or optimizing the manufacturing process, because intervention/correction only occurs when an upper/lower control limit is exceeded. Adjustments are not usually made when the process is in control. Further, SPC evaluates a single station in isolation and does not consider trends across many stations or the impact of several stations together on the final product.
Accordingly, new mechanisms are needed that can consider the inputs to and outputs of each station individually, and together with the other inputs to and outputs of the other stations in the manufacturing process, to intelligently and dynamically adjust inputs to a station controller to better control an associated station's operation. In particular, new mechanisms are needed to predict inputs that will optimize the manufacturing process to produce in specification final outputs. New mechanisms are also needed to predict inputs that will improve upon the design and manufacturing process of the final outputs. In addition, new mechanisms are needed to decrease variability in the manufacturing process and the final output.
Aspects of the disclosed technology address the foregoing limitations of conventional manufacturing processes by providing mechanisms (which can include systems, methods, devices, apparatuses, etc.) for progressively improving the manufacturing process and the resulting product of manufacture without disrupting the ongoing manufacturing process (referred to herein as predictive process control). As such, the disclosed technology can be used to retrofit and integrate with existing manufacturing systems and infrastructure, without causing disruptions to ongoing manufacturing processes. Improvements are realized by providing dynamic control to one or more process stations in the manufacturing process, via conventional station controllers, to: (1) consistently produce final outputs that are within specification; (2) optimize the design and manufacturing process of the final output; and (3) decrease variability in the manufacturing process and the final output.
A deep-learning controller based on machine-learning/artificial intelligence (AI) models may be used to evaluate control/station/process values and intermediate and final output values, and to determine adjustments to a station controller's inputs. As understood by those of skill in the art, machine learning based techniques can vary depending on the desired implementation, without departing from the disclosed technology. For example, machine learning techniques can utilize one or more of the following, alone or in combination: hidden Markov models; recurrent neural networks; convolutional neural networks (CNNs); deep learning; Bayesian symbolic methods; reinforcement learning; generative adversarial networks (GANs); support vector machines; image registration methods; and applicable rule-based systems.
Machine learning models can also be based on clustering algorithms (e.g., a mini-batch K-means clustering algorithm), recommendation algorithms (e.g., a Minwise Hashing algorithm or a Euclidean Locality-Sensitive Hashing (LSH) algorithm), and/or anomaly detection algorithms (e.g., a local outlier factor algorithm). The machine learning models can be based on supervised and/or unsupervised methods.
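A minimal sketch, assuming scikit-learn is available, of how the clustering and anomaly-detection algorithms named above might be applied to logged station values; the data is synthetic and the feature layout is an assumption:

```python
# Sketch using scikit-learn: mini-batch K-means clustering and local
# outlier factor anomaly detection over logged station values.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Synthetic rows of [temperature, pressure, speed] station values.
station_values = rng.normal(loc=[95.0, 30.0, 1200.0],
                            scale=[1.5, 0.5, 25.0], size=(500, 3))

clusters = MiniBatchKMeans(n_clusters=3, random_state=0).fit_predict(station_values)
outliers = LocalOutlierFactor(n_neighbors=20).fit_predict(station_values)
print(clusters[:10])   # cluster label per observation
print((outliers == -1).sum(), "observations flagged as anomalous")
```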
Machine learning models, as discussed herein, can also be used to determine the process stations, control/station/process values and intermediate output values that are most influential on the final output value (“key influencers”), and to optimize the manufacturing process by targeting the key influencers.
Functional priors, as used herein, refer to information relating to the functionality and known limitations of each process station, individually and collectively, in a manufacturing process. The specifications for the tools/equipment used at the process station are all considered functional priors. Example functional priors can include, but are not limited to: a screw-driven extruder that has a minimum and maximum speed at which the screw can rotate; a temperature control system that has a maximum and minimum temperature achievable based on its heating and cooling capabilities; a pressure vessel that has a maximum pressure that it will contain before it explodes; a combustible liquid that has a maximum temperature that can be reached before combustion. Functional priors can also include the order in which the individual stations that are part of a manufacturing process perform their functions. In some embodiments, several stations can run in parallel and combine their intermediate outputs at a subsequent station.
Experiential priors, as used herein, refer to information gained by prior experience with, for example: performing the same or similar manufacturing process; operating the same or similar stations; or producing the same or similar intermediate/final outputs. In some embodiments, experiential priors can include acceptable final output values or unacceptable final output values. Acceptable final output values refer to an upper limit, lower limit or range of final output values where the final output is considered “in specification.” In other words, acceptable final output values describe the parameters for final output values that meet design specifications, i.e., that are in-specification. Conversely, unacceptable final output values refer to upper/lower limits or a range of final output values where the final output is “not in specification” (i.e., they describe the parameters for final output values that do not meet design specifications). For example, based on prior experience it might be known that an O-ring used to seal pipes will only seal if it has certain compression characteristics. This information can be used to establish acceptable/unacceptable compression values for an O-ring final output. In other words, all O-ring final outputs that have acceptable compression values are able to perform their sealing functionality, while all O-ring final outputs that have unacceptable compression values cannot perform their sealing functionality. Acceptable intermediate output values, which can be defined per station, refer to upper/lower limits or a range of intermediate output values that define the parameters for an intermediate output that can ultimately result in a final output that is in specification, without requiring corrective action by other stations. Unacceptable intermediate output values, which can also be defined per station, refer to upper/lower limits or a range of intermediate output values that define the parameters for an intermediate output that will ultimately result in a final output that is not in specification, unless corrective action is taken at another station. Similarly, acceptable/unacceptable parameters can be defined for other variables relating to the manufacturing process.
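The O-ring example can be made concrete with a short sketch; the compression limits below are invented solely for illustration:

```python
# Hypothetical spec-limit check mirroring the O-ring example; the
# compression limits are illustrative, not real design values.

ACCEPTABLE_COMPRESSION = (0.15, 0.30)  # min/max compression ratio that seals

def is_in_specification(compression: float,
                        limits=ACCEPTABLE_COMPRESSION) -> bool:
    lower, upper = limits
    return lower <= compression <= upper

print(is_in_specification(0.22))  # True  -> O-ring can perform its sealing function
print(is_in_specification(0.40))  # False -> unacceptable final output value
```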
Experiential priors can also include acceptable and unacceptable manufacturing performance metrics. Manufacturing performance metrics quantify one or more aspects of multiple iterations of the manufacturing process (e.g., production volume for a specified time period, production downtime for a specified time period, resources used for a specified time period or a specified number of final outputs, percentage of products not in specification for a specified time period, production volume for a particular operator, material costs associated with a specified number of final outputs).
Universal inputs, as used herein, refer to values that are not specific to a particular process station, but rather to an aspect of the entire manufacturing process, for example: the date, time of day, ambient temperature, humidity or other environmental conditions that might impact the manufacturing process; the operator and the operator's level of skill; the raw materials used in the process and raw material specifications such as color, viscosity, particle size and other characteristics that are specific to the raw material; specific lot numbers and cost of raw materials; the tenure of the equipment/tools for each station; and identifying information such as production work order numbers, batch numbers, lot numbers, finished product numbers and finished product serial numbers.
Note that the examples provided for each of functional priors, experiential priors and universal inputs represent only one way to classify these inputs; other suitable classifications can be used. For example, another way to classify the input that is provided to deep-learning controller 218 is: pre-process inputs (e.g., experiential priors, functional priors, material properties, scheduling requirements); in-process inputs (e.g., universal inputs, control values, station values, intermediate values, final output values, process values); and post-process inputs (e.g., manufacturing performance metrics and other analytics).
Each process station can be controlled by one or more associated station controllers (e.g., station controller 220 controls process station 222 and station controller 240 controls process station 242). In other embodiments, a single station controller can control multiple process stations. Deep learning controller 218 can provide control inputs (represented by 226 and 246) based on predictive process control to each process station controller. In response to the received control input (e.g., 226 and 246), each station controller can provide one or more control signals (e.g., 221 and 241) that provide commands for regulating a station's control values (e.g., control values 225 and 245). Each station outputs an intermediate output (e.g., 224 and 244) that has an intermediate output value (224a and 244a, respectively). All intermediate output values and the final output value from the processing stations are provided to deep-learning controller 218. Each station also outputs station values (e.g., 228 and 248) to deep-learning controller 218.
It is understood that the communication among deep-learning controller 218, the station controllers and process stations can use any suitable communication technologies that provide the ability to communicate with one or more other devices, and/or to transact data with a computer network. By way of example, implemented communication technologies can include, but are not limited to: analog technologies (e.g., relay logic), digital technologies (e.g., RS232, Ethernet, or wireless), network technologies (e.g., a local area network (LAN), a wide area network (WAN), the Internet), Bluetooth technologies, near-field communication technologies, secure RF technologies, and/or any other suitable communication technologies.
In some embodiments, operator inputs can be communicated to deep-learning controller 218, and/or any of the station controllers or process stations using any suitable input device (e.g., keyboard, mouse, joystick, touch, touch-screen, etc.).
In some embodiments, one or more process stations can be operated manually, for example, by a human operator performing specific instructions. Instead of an electronic station controller, an operator follows a set of instructions, which can be provided manually or via electronic means (e.g., via a video or computer display). For example, at a manual station, an operator can perform the functions of cutting wire to a specific length and measuring the length of the cut wire. Manual feedback, such as the length of the cut wire, can be provided to deep-learning controller 218. Using predictive process control, as described herein, deep-learning controller 218 can determine whether the wire was cut to the desired length specification and provide improvements to the cutting process, for example, that are provided in the form of a set of instructions to the operator of the manual station.
In step 310, the setpoints, algorithms and other control inputs for each station controller in a manufacturing process can be initialized using conventional methods. Further, the control algorithms/operator can provide initial control/station values. The control algorithms, initial setpoint values, and initial control/station values can be provided to deep-learning controller 218 (step 315).
Note that control values, control algorithms, setpoints and any other information (e.g., process timing, equipment instructions, alarm alerts, emergency stops) provided to a station controller will be referred to collectively as “station controller inputs” or “control inputs.”
In addition, other inputs, like functional priors 238, experiential priors 239 and universal inputs 236 can be provided to deep-learning controller 218.
In step 325, the manufacturing process iterates through all the process stations using conventional control methods. As discussed above, the process stations discussed herein can operate in series or in parallel. Further, a single station can perform: a single process step multiple times (sequentially or non-sequentially), or different process steps (sequentially or non-sequentially) for a single iteration of a manufacturing process. The process stations generate intermediate outputs, or a final output if it is a final station. The intermediate output is transmitted to subsequent (downstream) station(s) in the manufacturing process until a final output is generated.
As the process iterates through each station, all the values associated with: an individual station (e.g., control values); an output of an individual station (e.g., station values, intermediate/final output values); or multiple stations (e.g., process values) are measured or calculated and provided to condition the machine learning algorithms of deep-learning controller 218 (steps 327 and 328).
In some embodiments, manufacturing performance metrics (e.g., production volume for a specified time period, production downtime for a specified time period, resources used for a specified time period or a specified number of final outputs, percentage of products not in specification for a specified time period, production volume for a particular operator, material costs associated with a specified number of final outputs) for the manufacturing process under conventional control can be calculated and provided to deep-learning controller 218 (step 329).
Although not shown, any actions taken (or control signals generated) by the station controller in response to a received control value or other control input from a process station can be provided to deep-learning controller 218. Such actions can include adjusting temperature, speed, etc. In addition, deviations from: acceptable setpoints, acceptable intermediate/final output values, acceptable control/station/process values can also be calculated and provided to deep-learning controller 218.
Note that all inputs to deep-learning controller 218 can be entered electronically or manually by an operator.
The conditioning of the machine learning models of deep-learning controller 218 (step 335) can be achieved through unsupervised learning methods. Other than functional priors 238, experiential priors 239 and universal inputs 236 that are input into deep-learning controller 218, deep-learning controller 218 draws inferences simply by analyzing the data that it collects during the iterations of the manufacturing process (e.g., steps 328 and 329). In other embodiments, deep-learning controller 218 can be conditioned via supervised learning methods, a combination of supervised and unsupervised methods, or similar machine learning methods. Further, the training of deep-learning controller 218 can be augmented by providing deep-learning controller 218 with simulated data or data from a similar manufacturing process. In one embodiment, deep-learning controller 218 can be conditioned by implementing deep-learning controller 218 in a similar manufacturing process and fine-tuning the deep-learning controller during implementation in the target manufacturing process. That is, deep-learning controller 218 can be trained before it is deployed into a target manufacturing environment.
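One possible sketch of such conditioning, here framed as supervised regression from in-process values to the measured final output value; the model choice, feature layout, and synthetic data are assumptions for illustration, not the disclosure's prescribed design:

```python
# Sketch: conditioning a predictive model on values collected during
# iterations of the manufacturing process.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Each row: control/station/process values gathered across one iteration.
X = rng.normal(size=(2000, 12))
# Target: the measured final output value for that iteration (synthetic).
y = X[:, 0] * 0.8 - X[:, 3] * 0.5 + rng.normal(scale=0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```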
Based on the conditioning of the machine learning models, deep-learning controller 218 can predict the values for the characteristics of the final output (“expected value” or “EV”) that determine whether the final output value will be acceptable (i.e., whether the final output is “in specification”) (step 342). Deep learning controller 218 can provide a confidence level for its prediction at an instant or over a specific time period, for example, to provide a measure of statistical confidence in the prediction. In some aspects, the confidence level may be expressed as a numerical probability of accuracy for the prediction; in other aspects, the confidence level may be expressed as an interval or probability range. At step 343, deep-learning controller 218 can compare the expected value to the actual measurements of the specified characteristics of the final output (“actual value” or “AV”).
In some aspects, deep-learning controller 218 can be configured to perform EV predictions regarding output characteristics on a station-by-station basis. That is, deep-learning controller 218 can make EV predictions regarding outputs at a specific station, and subsequently compare those predictions to actual outputs observed at that station. Alternatively, EV predictions can be made for outputs resulting from combined processing performed by two or more stations, depending on the desired implementation.
As the manufacturing process proceeds through each station, and deep-learning controller 218 receives additional information, deep-learning controller 218 can revise its expected value, along with the confidence level. If deep-learning controller 218's predictions are correct, over a specified time period and with a predefined threshold confidence level, then deep-learning controller 218 can provide a signal that deep-learning controller 218 is ready to control the operation of the process stations.
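A hedged sketch of one way the readiness check described above could be expressed; the tolerance, window length, and thresholds are illustrative choices, not values prescribed by this disclosure:

```python
# Hypothetical readiness gate: signal that the controller is ready to take
# over only if its EV predictions matched the AVs, at or above a threshold
# confidence, over a trailing window. All thresholds are illustrative.
from collections import deque

class ReadinessMonitor:
    def __init__(self, window=100, min_accuracy=0.95, min_confidence=0.90,
                 tolerance=0.05):
        self.window = deque(maxlen=window)
        self.min_accuracy = min_accuracy
        self.min_confidence = min_confidence
        self.tolerance = tolerance

    def record(self, expected_value, actual_value, confidence):
        correct = (abs(expected_value - actual_value) <= self.tolerance
                   and confidence >= self.min_confidence)
        self.window.append(correct)

    def ready_for_control(self) -> bool:
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        return sum(self.window) / len(self.window) >= self.min_accuracy

monitor = ReadinessMonitor()
for ev, av, conf in [(1.02, 1.00, 0.97)] * 100:
    monitor.record(ev, av, conf)
print(monitor.ready_for_control())  # True
```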
In some embodiments, deep-learning controller 218 can also, after initialization of the station controllers, at the outset of an iteration of the manufacturing process (i.e., proceeding through all stations in the manufacturing process), as well as over the course of the manufacturing process, predict whether any control inputs will cause unsatisfactory station performance or impact process performance (i.e., cause unacceptable process performance). Deep learning controller 218 can provide a confidence level for its predictions. Deep learning controller 218 can determine whether or not deep-learning controller 218's predictions were correct. In further embodiments, if deep-learning controller 218's predictions, both with respect to the expected final output value and predicted station/process performance, are correct over a specified time period and with a threshold confidence level, as defined by an operator, then deep-learning controller 218 can provide a signal that deep-learning controller 218 is ready to control the operation of the process stations.
Deep learning controller 218 uses its conditioned machine learning algorithms (as discussed in connection with
At step 430, the manufacturing process proceeds through all the process stations serially or in parallel. As the process iterates through each station, all the values associated with: an individual station (e.g., control values); an output of an individual station (e.g., station values, intermediate/final output values); or multiple stations (e.g., process values) can be measured or calculated and provided to condition the machine learning algorithms of deep-learning controller 218 (step 432). Further, manufacturing performance metrics for the manufacturing process under predictive process control can be calculated and provided to deep-learning controller 218 (step 432). The process values and manufacturing performance metrics calculated under PPC can be compared to the process values and manufacturing performance metrics calculated under conventional control to determine the improvements provided by predictive process control.
Throughout the process shown in
In some aspects, deep-learning controller 218 can be configured to perform EV predictions regarding intermediate outputs on a station-by-station basis. That is, deep-learning controller 218 can make EV predictions regarding outputs at a specific station, determine whether the EV for an intermediate output is in-specification, determine the confidence level for its prediction, and subsequently compare those predictions to actual outputs observed at that station. Alternatively, EV predictions can be made for outputs resulting from combined processing performed by two or more stations, depending on the desired implementation. Further, if deep-learning controller 218 determines that the intermediate output is not in-specification, it can calculate adjustments to the control inputs, so that the predicted expected value for the intermediate output is in-specification.
Note that, if the confidence level determined by deep-learning controller 218 is below a predetermined threshold, then control of the manufacturing process can revert to conventional control, as described in connection with
In some embodiments, deep-learning controller 218 can also monitor whether any of the station/control/process or intermediate output values are unacceptable and make further adjustments to the station controller inputs or generate an alert if the problem cannot be fixed by adjusting the station controller inputs.
Based on data it receives as the stations run through the manufacturing process, deep-learning controller 218 can adjust the control inputs for one or more station controllers. In further embodiments, deep-learning controller 218 can not only initialize the station controller inputs before the start of an iteration through the process stations of the manufacturing process, but can also adjust station controller inputs during the process itself (“feedforward control”). In particular, based on information received from prior stations in a manufacturing process, deep-learning controller 218 can make changes to the control inputs associated with later stations in the process. For example, if deep-learning controller 218 determines that there are defects in the intermediate output of a particular station, then deep-learning controller 218 can determine whether there are any corrective actions that can be taken at subsequent stations, so that the final output will be in specification. Deep learning controller 218 can also make changes to the current and prior processing stations in real time, based on feedback about control/station/process values and/or intermediate/final output values (“feedback control”). This ability to dynamically control each station in real time, and to make adjustments to downstream station controllers to compensate for errors, miscalculations, undesirable conditions or unforeseen outcomes that occurred upstream, increases the likelihood of manufacturing in-specification final outputs. Further, even though a broad range of final output values can be considered in specification, it may be desirable to manufacture final outputs that are of the same or similar quality and have similar final output values within a narrower range of acceptable final output values. Feedback and feedforward control, along with the deep-learning controller's predictive capabilities, enable deep-learning controller 218 to adjust the station controllers to produce final outputs of consistent quality and similar final output values.
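As an illustrative sketch of the feedforward idea described above, where an upstream deviation is compensated at a downstream station; the stations, units, and the linear correction rule are hypothetical:

```python
# Hypothetical feedforward correction: if an upstream intermediate output
# deviates from its acceptable value, adjust a downstream station's
# control input to compensate. The linear rule and gain are illustrative.

def feedforward_adjustment(upstream_deviation, downstream_setpoint, gain=0.6):
    """Return a compensated control input for a later station."""
    return downstream_setpoint + gain * upstream_deviation

# The cutting station left the part 0.4 mm too long (deviation = +0.4), so
# the downstream grinder's material-removal setpoint rises from 1.0 mm.
print(feedforward_adjustment(0.4, 1.0))  # 1.24
```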
It is useful to identify what parameters of the manufacturing process most impact the final output value or the process performance (the “key influencers”). Deep learning controller 218 can consider all parameters of the manufacturing process (e.g., one or more control values, one or more station values, one or more process values, one or more stations, one or more intermediate outputs or any combination thereof), and using one or more of its machine learning algorithms can identify the key influencers. In some aspects, deep-learning controller 218 can employ unsupervised machine learning techniques to discover one or more key influencers, for example, wherein each key influencer is associated with one or more parameters (or parameter combinations) that affect characteristics of various station outputs, the final output, and/or process performance. It is understood that discovery of key influencers and their associated parameters may be performed through operation and training of deep-learning controller 218, without the need to explicitly label, identify or otherwise output key influencers or parameters to a human operator.
In some approaches, deep-learning controller 218 can rank, in order of significance, the impact of each parameter of the manufacturing process on the final output value or the process performance. A key influencer can be identified based on: a cutoff ranking (e.g., the top 5 aspects of the manufacturing process that impact the final output value); a minimum level of influence (e.g., all aspects of the manufacturing process that contribute at least 25% to the final output value); or any other suitable criteria. In some aspects, key influence characteristics may be associated with a quantitative score, for example, one that is relative to the weight of influence for the corresponding characteristic.
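A sketch of one way such a ranking could be produced, here using permutation importance over a trained surrogate model with a top-5 cutoff ranking; the technique choice, parameter labels, and synthetic data are all assumptions for illustration:

```python
# Sketch: ranking manufacturing parameters by impact on the final output
# value with permutation importance, then applying a cutoff ranking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + rng.normal(scale=0.1, size=1000)
names = [f"param_{i}" for i in range(8)]  # hypothetical parameter labels

model = RandomForestRegressor(n_estimators=100, random_state=2).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=2)
ranked = sorted(zip(names, result.importances_mean),
                key=lambda item: item[1], reverse=True)
key_influencers = ranked[:5]  # cutoff ranking: top 5
print(key_influencers)
```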
Deep learning controller 218 can continuously, throughout the manufacturing process, calculate the key influencers (step 446).
In some embodiments, as described in connection with
In conventional manufacturing, the data generated from the manufacturing process is limited, because the goal is to produce intermediate output values that are within a specified deviation from the mean. As a consequence, the range of control inputs and control/station/process values is also limited, because they are all designed to produce intermediate output values that are within a specified range from the mean. In contrast, under the subject disclosure, as long as the final output is in specification, the intermediate output values do not have to be within a specific range from the mean. As the deep-learning controller's predictions become more accurate, in some embodiments, deep-learning controller 218 can intentionally make changes to control inputs to create the conditions for generating intermediate output values that may exceed the normal fluctuations of an in-control process under traditional manufacturing process control (e.g., SPC), but that still generate final outputs that are in specification. This creates a more robust training data set for deep-learning controller 218 to detect patterns and determine how particular stations, station/control/process values, and intermediate output values impact the final output value (e.g., whether the final output is in specification or not). The creation of a robust data set can occur both during and prior to implementation of deep-learning controller 218 in a production environment.
In one example embodiment, control inputs related to material tolerances can be purposely varied, and deep-learning controller 218 can be used to determine what adjustments to make to other control inputs to produce in specification final products. By training deep-learning controller 218 in this manner, when new materials are introduced into the manufacturing process unexpectedly, deep-learning controller 218 can adapt the control inputs on its own, without requiring operator input. Similarly, adjustments can purposely be made to control algorithms (e.g., adjustments that mimic a virus or other attack), and deep-learning controller 218 can be used to determine what adjustments to make to other control inputs to produce in specification final products. By purposely introducing these changes, deep-learning controller 218 can adapt the control inputs on its own, without requiring operator input, when adjustments are made to the control algorithms unexpectedly during the manufacturing process.
In some embodiments, deep-learning controller 218 can first determine the key influencers (step 446 as described in connection with
The following example further illustrates creating a robust data set by varying a control input (e.g., a temperature setpoint for a particular station). In this example, the setpoint temperature for a specific station is 95°, and the actual temperature of the station fluctuates between 92°-98° (i.e., 3° above and below the setpoint temperature). The setpoint temperature of 95° and the corresponding ±3° fluctuation of the actual station temperature all result in final output values that are in specification. Deep learning controller 218 can then predict whether adjusting the temperature setpoint by a negligible amount (e.g., ±0.5°) will still result in final output values that are in specification. If deep-learning controller 218 predicts in-specification final output values, at a threshold confidence level, for a manufacturing process using adjusted temperature setpoints, then deep-learning controller 218 will adjust the temperature setpoint by ±0.5°. Assuming the same station temperature fluctuation of ±3° from the setpoint, the actual station temperature will range from 92.5°-98.5° when the setpoint is 95.5°, and from 91.5°-97.5° when the setpoint is 94.5°. Deep learning controller 218 can compare the final output value with the expected output value over the temperature range 91.5°-98.5°, and determine whether it correctly predicted that the final output value would be in specification. Because the temperature setpoint was adjusted by ±0.5°, the resulting data set covers a broader temperature range (91.5°-98.5°) than the original temperature range of 92°-98°.
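The arithmetic of this example can be reproduced in a few lines; the sketch assumes the ±3° fluctuation band described above:

```python
# Reproducing the worked example: a +/-0.5 degree setpoint adjustment,
# combined with the station's natural +/-3 degree fluctuation, widens the
# observed temperature range from 92-98 to 91.5-98.5 degrees.
FLUCTUATION = 3.0

def observed_range(setpoint):
    return (setpoint - FLUCTUATION, setpoint + FLUCTUATION)

baseline = observed_range(95.0)            # (92.0, 98.0)
perturbed = [observed_range(95.0 + d) for d in (-0.5, +0.5)]
low = min(r[0] for r in [baseline, *perturbed])
high = max(r[1] for r in [baseline, *perturbed])
print((low, high))  # (91.5, 98.5) -> a broader training data set
```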
Note that, instead of changing station controller setpoints, control values of a station can be changed by modifying other control inputs to the station controller (e.g., control values) that would achieve the same goal of changing the setpoints. For example, assume the following: the setpoint for a station controller is 100 degrees, the actual station temperature value (i.e., a control value) is 100 degrees, and the goal is to increase the actual temperature value of the station by two degrees. Instead of increasing the temperature setpoint to 102 degrees to achieve that change, deep-learning controller 218 can change the control value that it provides to the station controller to two degrees below the actual temperature value (e.g., changing the control value from 100 degrees to 98 degrees), causing the station controller to increase the station temperature by two degrees and achieve the desired station temperature of 102 degrees. It may be necessary to change control values, instead of setpoints, when existing station controls do not allow setpoints to be changed.
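A sketch of the indirection just described; it assumes, for illustration, a station controller that simply drives the station until the reported control value equals the unchanged setpoint:

```python
# Hypothetical sketch: achieving a +2 degree station change without
# touching the setpoint, by offsetting the control value reported to the
# station controller.

def reported_control_value(actual_value, desired_change):
    """Under-report by the desired change so the controller compensates."""
    return actual_value - desired_change

setpoint = 100.0
actual_temperature = 100.0
reported = reported_control_value(actual_temperature, desired_change=2.0)
print(reported)  # 98.0 -> the controller heats the station until it reads
                 # 100, leaving the true station temperature at 102 degrees.
```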
In some embodiments, as described in connection with
Once the most important stations, station/control/process values, and intermediate output values that influence the final output value or the process performance are identified, resource allocation and process optimization can target the key influencers. For example, instead of collecting volumes of data that marginally impact the process, data resources (e.g., collection, processing and storage) can be allocated largely to the data associated with the key influencers (“curated data”). Further, the curated data (a subset of all the data available from the manufacturing process) can be provided to machine learning algorithms to make optimizations to the key influencers, reducing the volume of training examples and increasing the resources available to process the curated data. In addition, the machine learning algorithms can be directed to optimizing the key influencers, instead of the entire process, reducing the possible states and actions that a machine learning algorithm must consider and allocating resources more efficiently and intelligently. For example, in reinforcement learning, state and action spaces define the range of possible states an agent might perceive and the actions that are available to the agent. By using reinforcement learning only on the key influencers, the state and action spaces are reduced, making the algorithm more manageable.
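As a hedged sketch of the reduced state/action space idea, the following tabular Q-learning example acts on a single discretized key influencer rather than the full process state; the environment dynamics, discretization, and reward are synthetic stand-ins, not the disclosure's method:

```python
# Sketch: tabular Q-learning restricted to one key influencer
# (a discretized setpoint level) instead of the full process state.
import random

STATES = range(5)           # discretized key-influencer levels
ACTIONS = (-1, 0, 1)        # lower / hold / raise the setpoint
TARGET = 3                  # level assumed to yield in-specification outputs

q_table = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    next_state = min(max(state + action, 0), 4)
    reward = 1.0 if next_state == TARGET else -abs(next_state - TARGET)
    return next_state, reward

random.seed(3)
state, alpha, gamma, epsilon = 0, 0.5, 0.9, 0.1
for _ in range(5000):
    if random.random() < epsilon:                       # explore
        action = random.choice(ACTIONS)
    else:                                               # exploit
        action = max(ACTIONS, key=lambda a: q_table[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_next - q_table[(state, action)])
    state = next_state

policy = {s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in STATES}
print(policy)  # should approach: raise toward level 3, then hold
```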
In some embodiments, an operator can specify one or more characteristics of the final output or the process values that it would like to optimize for (e.g., using the least amount of power or resources, fastest throughput, minimum number of defects, greatest tensile strength). Deep learning controller 218 can run machine learning models like reinforcement learning, targeting the key influencers, to optimize for the specified characteristics of the final output (“optimal design values”) and/or specified process values (“optimal process values”).
At 600, the desired optimal design/process values are provided to deep-learning controller 218. For example: generate an in-specification final output using the least amount of power or resources, the fastest throughput, the minimum number of defects, or the greatest tensile strength.
As shown at 605, deep-learning controller 218 can determine the key influencers for driving in-specification products (see also
In other embodiments, the parameters for final outputs that are in specification will be updated to match the desired optimal design/process values. For example, if the in-specification tensile strength for a final output is 40-90 megapascals (MPa), and the optimal tensile strength for the final output is determined to be 70-90 MPa, then the in-specification parameters can be updated to 70-90 MPa. Deep learning controller 218 can predict, and determine the confidence level, that the calculated control inputs for the station controllers associated with key influencers will achieve the updated specification parameters (i.e., 70-90 MPa). Deep learning controller 218 can update the relevant station controllers when it predicts that the calculated control inputs will achieve the updated specification parameters, at a confidence level above a predefined threshold.
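A sketch of the confidence-gated update in the tensile-strength example; the confidence threshold is illustrative:

```python
# Hypothetical sketch: adopt the tighter optimal design range as the new
# in-specification parameters only when the controller's prediction clears
# a confidence threshold. Values mirror the tensile-strength example.

spec_mpa = (40.0, 90.0)       # current in-specification tensile strength
optimal_mpa = (70.0, 90.0)    # desired optimal design values
CONFIDENCE_THRESHOLD = 0.95   # illustrative threshold

def maybe_update_spec(predicted_confidence, current, optimal):
    """Adopt the optimal range only at sufficient prediction confidence."""
    if predicted_confidence >= CONFIDENCE_THRESHOLD:
        return optimal
    return current

spec_mpa = maybe_update_spec(0.97, spec_mpa, optimal_mpa)
print(spec_mpa)  # (70.0, 90.0): station controllers updated to target this range
```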
Note that deep-learning controller 218 can continue to provide control inputs to the other station controllers as discussed in connection with
Deep learning controller 218 can also compare the expected values with the actual values to provide feedback on its prediction and to further adjust the control inputs (step 545). As the process begins and goes through all the stations, in some embodiments, only measurements related to the key influencers, as well as measurements for the final output, will be collected and provided to deep-learning controller 218. In other embodiments, deep-learning controller 218 can continue to collect data from all the process stations. The data will be used to continuously improve the control inputs for the key influencers, so that the optimal design/process values are achieved consistently and with a high confidence level.
Note that the desired optimal design/process values can be changed at any time. Further, new key influencers can be calculated by changing the criteria for what qualifies as a key influencer (e.g., a first criterion might classify as key influencers the top 5 stations that drive in-specification final outputs, while an updated criterion might classify as key influencers only the top 3 stations that drive in-specification final outputs). Further, once deep-learning controller 218 predicts control inputs to achieve optimal design/process values with a certain confidence level over a period of time, deep-learning controller 218 can identify which of the key influencers are key influencers for driving the desired optimizations and only target that subset of key influencers, further reducing the possible actions/states that a machine learning algorithm must consider and more efficiently allocating the resources to that subset.
In further embodiments, deep-learning controller 218 can not only initialize the station controller inputs for the key influencers before the start of an iteration through the process stations of the manufacturing process, but can also adjust the control inputs during the process itself. In particular, based on information received from prior stations in a process, deep-learning controller 218 can make changes to the control inputs associated with later stations in the process to ensure the optimal design/process values are achieved. Deep learning controller 218 can also adjust prior stations in the process as it proceeds through the process and receives data from subsequent stations.
In some embodiments, manufacturing performance metrics for the manufacturing process trying to achieve optimal design or process values can be calculated and provided to deep-learning controller 218 (step 632). These manufacturing performance metrics can be compared to the manufacturing performance metrics for PPC and/or the manufacturing performance metrics for conventional control to determine the improvements provided by optimizing the design and/or process.
An example of manufacturing system optimization that reduces the possible actions/states, and that can be adapted and implemented by deep-learning controller 218, is described in U.S. Provisional Patent Application No. 62/836,199, entitled “ADAPTIVE METHODS AND REAL-TIME DECISION MAKING FOR MANUFACTURING CONTROL,” which is hereby incorporated by reference herein in its entirety. The disclosed method is just an example and is not intended to be limiting.
Another example of manufacturing system optimization that reduces the possible actions/states a machine learning algorithm must consider, and that can be adapted and implemented by deep-learning controller 218 of the subject disclosure, is described in U.S. Provisional Patent Application No. 62/836,213, entitled “TRANSFER LEARNING APPROACH TO MULTI-COMPONENT MANUFACTURING CONTROL,” which is hereby incorporated by reference herein in its entirety. The disclosed method is just an example and is not intended to be limiting.
Note that, in some embodiments, in connection with the predictive process control method discussed in
Further, in some embodiments of predictive process control, a data logging module 810, as shown in
Deep learning controller 218 can include a processing unit (e.g., CPU/s and/or processor/s) 910 and bus 905 that couples various system components including system memory 915, such as read only memory (ROM) 920 and random access memory (RAM) 925, to processing unit 910. Processing unit 910 can include one or more processors such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processing unit 910 can be specially designed hardware for controlling the operations of deep-learning controller 218 and performing predictive process control. When acting under the control of appropriate software or firmware, processing unit 910 can perform various machine learning algorithms and computations of PPC described herein.
Memory 915 can include various memory types with different performance characteristics, such as memory cache 912. Processor 910 can be coupled to storage device 930, which can be configured to store software and instructions necessary for implementing one or more functional modules and/or database systems. Each of these modules and/or database systems can be configured to control processor 910, as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
To enable operator interaction with deep-learning controller 218, input device 945 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input and so forth. An output device 935 can also be one or more of a number of output mechanisms (e.g., printer, monitor) known to those of skill in the art. In some instances, multimodal systems can enable an operator to provide multiple types of input to communicate with deep-learning controller 218. Communication interface 940 can generally govern and manage the operator input and system output, as well as all electronic input received from and sent to other components that are part of a manufacturing process such as the station controllers, process stations, data logging module, and all associated sensors and image capturing devices. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Data output from deep-learning controller 218 can be displayed visually, printed, or generated in file form and stored in storage device 930 or transmitted to other components for further processing.
Communication interface 940 can be provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router. Among the interfaces that can be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control and management. By providing separate processors for the communications intensive tasks, these interfaces allow processing unit 910 to efficiently perform machine learning and other computations necessary to implement predictive process control. Communication interface 940 can be configured to communicate with the other components that are part of a manufacturing process such as the station controllers, process stations, data logging module, and all associated sensors and image capturing devices.
Sensors associated with a manufacturing process can include sensors that existed prior to implementation of PPC, as well as any new sensors that were added to perform any additional measurements used by PPC. One or more sensors can be included within or coupled to each station. Sensors can be used to measure values generated by a manufacturing process, such as: station values, control values, and intermediate and final output values. Further, information provided by sensors can be used by deep-learning controller 218, or by a module external to deep-learning controller 218, to compute process values and manufacturing performance metrics. Example sensors can include, but are not limited to: rotary encoders for detecting position and speed; sensors for detecting proximity, pressure, temperature, level, flow, current and voltage; and limit switches for detecting states such as presence or end-of-travel limits. Sensor, as used herein, includes both a sensing device and signal conditioning. For example, the sensing device reacts to the station or control values, and the signal conditioner translates that reaction to a signal that can be used and interpreted by deep-learning controller 218. Examples of sensors that react to temperature are RTDs, thermocouples and platinum resistance probes. Strain gauge sensors react to pressure, vacuum, weight and changes in distance, among others. Proximity sensors react to objects when they are within a certain distance of each other or of a specified target. With all of these examples, the reaction must be converted to a signal that can be used by deep-learning controller 218. In many cases, the signal conditioning function of the sensors produces a digital signal that is interpreted by deep-learning controller 218. The signal conditioner can also produce an analog signal or TTL signal, among others.
In some embodiments, deep-learning controller 218 can include an image processing device 970 that processes images received from various image capturing devices, such as video cameras, that are coupled to one or more process stations and are capable of monitoring and capturing images of intermediate and final outputs. These images can be transmitted to deep-learning controller 218 via communication interface 940 and processed by image processing device 970. The images can be processed to provide data, such as the number and type of defects, output dimensions, and throughput, that can be used by deep-learning controller 218 to compute intermediate and final output values. In some embodiments, the image processing device can be external to deep-learning controller 218 and provide information to deep-learning controller 218 via communication interface 940.
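As a minimal sketch of the kind of computation image processing device 970 might perform, the following fragment counts candidate defects in an image of an intermediate output. It assumes the OpenCV library is available; the threshold value and minimum-area criterion are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: the threshold value and minimum-area defect
# criterion are hypothetical; real defect detection is process-specific.
import cv2

def count_defects(image_path, min_area=25.0):
    """Count dark blobs on a bright intermediate output as candidate defects."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Invert-threshold so that dark anomalies become foreground pixels.
    _, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    # Each external contour approximates one candidate defect region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area)
```

A count produced this way is one example of the per-image data (number and type of defects, output dimensions, throughput) that deep-learning controller 218 could fold into intermediate and final output values.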
Storage device 930 is a non-transitory memory and can be a hard disk or another type of computer readable media that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memories (ROMs), and hybrids thereof.
In practice, storage device 930 can be configured to receive, store and update input data to, and output data from, deep-learning controller 218, for example: functional priors, experiential priors, universal inputs, pre-process inputs, in-process inputs and post-process inputs.
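One non-limiting way such inputs could be organized as a record in storage device 930 is sketched below; the field names simply mirror the input categories listed above and are not a required schema.

```python
# Illustrative sketch only: the fields mirror the input categories named
# above; grouping them into one record is hypothetical, not a required schema.
from dataclasses import dataclass, field

@dataclass
class ControllerRecord:
    functional_priors: dict = field(default_factory=dict)    # e.g., station setpoint ranges
    experiential_priors: dict = field(default_factory=dict)  # e.g., values from prior runs
    universal_inputs: dict = field(default_factory=dict)     # e.g., process-wide constants
    pre_process_inputs: dict = field(default_factory=dict)
    in_process_inputs: dict = field(default_factory=dict)
    post_process_inputs: dict = field(default_factory=dict)
```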
In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as non-transitory magnetic media (such as hard disks, floppy disks, etc.), non-transitory optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), non-transitory semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, and any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
The various systems, methods, and computer readable media described herein can be implemented as part of a cloud network environment. As used herein, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to client devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the client devices can access over a communication interface, such as a network. The cloud can provide various cloud computing services via cloud elements, such as software as a service (SaaS) (e.g., collaboration services, email services, enterprise resource planning services, content services, communication services, etc.), infrastructure as a service (IaaS) (e.g., security services, networking services, systems management services, etc.), platform as a service (PaaS) (e.g., web services, streaming services, application development services, etc.), and other types of services such as desktop as a service (DaaS), information technology management as a service (ITaaS), managed software as a service (MSaaS), mobile backend as a service (MBaaS), etc.
The provision of the examples described herein (as well as clauses phrased as “such as,” “e.g.,” “including,” and the like) should not be interpreted as limiting the claimed subject matter to the specific examples; rather, the examples are intended to illustrate only some of many possible aspects. A person of ordinary skill in the art would understand that the term mechanism can encompass hardware, software, firmware, or any suitable combination thereof.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining,” “providing,” “identifying,” “comparing” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of non-transitory computer-readable storage medium suitable for storing electronic instructions. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps and system-related actions. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present disclosure is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present disclosure.
The logical operations of the various embodiments are implemented as: (1) a sequence of computer-implemented steps, operations, or procedures running on a programmable circuit within a general-use computer; (2) a sequence of computer-implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The system 300 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media. Such logical operations can be implemented as modules configured to control the processor 363 to perform particular functions according to the programming of the module.
It is understood that any specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged, or that only a portion of the illustrated steps be performed. Some of the steps may be performed simultaneously. For example, in certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The predictive process control apparatus, method and system have been described in detail with specific reference to these illustrated embodiments. It will be apparent, however, that various modifications and changes can be made within the spirit and scope of the disclosure as described in the foregoing specification, and such modifications and changes are to be considered equivalents and part of this disclosure.
Statements of the Disclosure
Statement 1: A computer-implemented method for controlling a manufacturing process, comprising: receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process; predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values; and determining if the deep-learning controller can control the manufacturing process based on the expected value.
Statement 2: The computer-implemented method of statement 1, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is within a threshold confidence level of the actual value, determining that the deep-learning controller can control the manufacturing process.
Statement 3: The computer-implemented method of any of statements 1-2, further comprising: indicating that the deep-learning controller is ready to take control of the manufacturing process.
Statement 4: The computer-implemented method of any of statements 1-3, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is not within a threshold confidence level of the actual value, determining that the deep-learning controller cannot control the manufacturing process.
Statement 5: The computer-implemented method of any of statements 1-4, wherein the one or more control values are received from one or more process controllers associated with the process station.
Statement 6: The computer-implemented method of any of statements 1-5, further comprising: assuming control of the manufacturing process by the deep-learning controller based on the expected value.
Statement 7: The computer-implemented method of any of statements 1-6, wherein the expected value is based on two or more physical attributes of the article of manufacture.
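Before these operations are restated for a system (statements 8-14) and a storage medium (statements 15-20), the determination recited in statements 1-4 can be sketched in a few lines of Python. The predictor below is a hypothetical stand-in for the trained deep-learning controller, and the relative-error tolerance is only one possible reading of "within a threshold confidence level."

```python
# Illustrative sketch only: `predict_expected_value` stands in for the
# trained deep-learning model, and the relative-error test is one possible
# interpretation of "within a threshold confidence level."

def can_assume_control(control_values, actual_value,
                       predict_expected_value, threshold=0.05):
    """Return (ready, expected): ready is True when the predicted value for
    the article of manufacture tracks the measured value closely enough."""
    expected = predict_expected_value(control_values)
    relative_error = abs(expected - actual_value) / max(abs(actual_value), 1e-9)
    return relative_error <= threshold, expected

# Example with a hypothetical stand-in predictor for a single station.
ready, expected = can_assume_control(
    control_values={"temperature": 200.0, "pressure": 3.1},
    actual_value=41.8,
    predict_expected_value=lambda cv: 0.2 * cv["temperature"] + 0.5 * cv["pressure"],
)
# ready is True here because the prediction (~41.55) is within 5% of 41.8.
```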
Statement 8: A system comprising: one or more processors; and a non-transitory memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process; predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values; and determining if the deep-learning controller can control the manufacturing process based on the expected value.
Statement 9: The system of statement 8, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is within a threshold confidence level of the actual value, determining that the deep-learning controller can control the manufacturing process.
Statement 10: The system of any of statements 8-9, wherein the one or more processors are further configured to perform operations comprising: indicating that the deep-learning controller is ready to take control of the manufacturing process.
Statement 11: The system of any of statements 8-10, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is not within a threshold confidence level of the actual value, determining that the deep-learning controller cannot control the manufacturing process.
Statement 12: The system of any of statements 8-11, wherein the one or more control values are received from one or more process controllers associated with the process station.
Statement 13: The system of any of statements 8-12, wherein the one or more processors are further configured to perform operations comprising: assuming control of the manufacturing process by the deep-learning controller based on the expected value.
Statement 14: The system of any of statements 8-13, wherein the expected value is based on two or more physical attributes of the article of manufacture.
Statement 15: A non-transitory computer-readable storage medium comprising instructions stored therein, which when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving, by a deep-learning controller, one or more control values associated with a process station in the manufacturing process; predicting, by the deep-learning controller, an expected value for an article of manufacture output from the process station, based on the one or more control values; and determining if the deep-learning controller can control the manufacturing process based on the expected value.
Statement 16: The non-transitory computer-readable storage medium of statement 15, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is within a threshold confidence level of the actual value, determining that the deep-learning controller can control the manufacturing process.
Statement 17: The non-transitory computer-readable storage medium of any of statements 15-16, wherein the one or more processors are further configured to perform operations comprising: indicating that the deep-learning controller is ready to take control of the manufacturing process.
Statement 18: The non-transitory computer-readable storage medium of any of statements 15-17, wherein determining if the deep-learning controller can control the manufacturing process further comprises: receiving, by the deep-learning controller, an actual value associated with the article of manufacture; comparing the expected value with the actual value; and if the expected value is not within a threshold confidence level of the actual value, determining that the deep-learning controller cannot control the manufacturing process.
Statement 19: The non-transitory computer-readable storage medium of any of statements 15-18, wherein the one or more control values are received from one or more process controllers associated with the process station.
Statement 20: The non-transitory computer-readable storage medium of any of statements 15-19, wherein the instructions are further configured to cause the processors to perform operations comprising: assuming control of the manufacturing process by the deep-learning controller based on the expected value.
This application is a continuation of U.S. application Ser. No. 17/304,611, filed Jun. 23, 2021, which is a continuation of U.S. application Ser. No. 16/663,245, filed Oct. 24, 2019, now U.S. Pat. No. 11,156,992, issued Oct. 26, 2021, which is a continuation of U.S. application Ser. No. 16/519,102, filed Jul. 23, 2019, now U.S. Pat. No. 11,156,991, issued Oct. 26, 2021, which claims priority to U.S. Provisional Application Ser. No. 62/865,859, filed Jun. 24, 2019, all of which are incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
4056716 | Baxter et al. | Nov 1977 | A |
4433385 | De Gasperi et al. | Feb 1984 | A |
5027295 | Yotsuya | Jun 1991 | A |
5808432 | Inoue et al. | Sep 1998 | A |
5815198 | Vachtsevanos et al. | Sep 1998 | A |
6240633 | Kent et al. | Jun 2001 | B1 |
6266436 | Bett et al. | Jul 2001 | B1 |
6650779 | Vachtesvanos et al. | Nov 2003 | B2 |
6757571 | Toyama | Jun 2004 | B1 |
7149337 | Michaelis et al. | Dec 2006 | B2 |
7551274 | Wornson et al. | Jun 2009 | B1 |
8185217 | Thiele | May 2012 | B2 |
8612043 | Moyne et al. | Dec 2013 | B2 |
8909926 | Brandt et al. | Dec 2014 | B2 |
9945264 | Wichmann et al. | Apr 2018 | B2 |
9977425 | McCann et al. | May 2018 | B1 |
10061300 | Coffman et al. | Aug 2018 | B1 |
10102495 | Zhang et al. | Oct 2018 | B1 |
10481579 | Putman et al. | Nov 2019 | B1 |
11117328 | Hough et al. | Sep 2021 | B2 |
11156982 | Putman et al. | Oct 2021 | B2 |
11156991 | Putman | Oct 2021 | B2 |
11156992 | Putman et al. | Oct 2021 | B2 |
11209795 | Putman et al. | Dec 2021 | B2 |
11675330 | Putman et al. | Jun 2023 | B2 |
20020002414 | Hsiung et al. | Jan 2002 | A1 |
20020143417 | Ito et al. | Oct 2002 | A1 |
20030061004 | Discenzo | Mar 2003 | A1 |
20040030431 | Popp et al. | Feb 2004 | A1 |
20040070509 | Grace et al. | Apr 2004 | A1 |
20050267607 | Paik | Dec 2005 | A1 |
20060013505 | Yau et al. | Jan 2006 | A1 |
20060058898 | Emigholz et al. | Mar 2006 | A1 |
20060149407 | Markham et al. | Jul 2006 | A1 |
20070005525 | Collette, III et al. | Jan 2007 | A1 |
20070036421 | Toba et al. | Feb 2007 | A1 |
20070047797 | Vilella | Mar 2007 | A1 |
20070177787 | Maeda et al. | Aug 2007 | A1 |
20080100570 | Friedrich et al. | May 2008 | A1 |
20080276128 | Lin et al. | Nov 2008 | A1 |
20080300709 | Collette, III et al. | Dec 2008 | A1 |
20090158577 | Schweikle | Jun 2009 | A1 |
20090198464 | Clarke et al. | Aug 2009 | A1 |
20090242513 | Funk et al. | Oct 2009 | A1 |
20090281753 | Noy | Nov 2009 | A1 |
20100106458 | Leu et al. | Apr 2010 | A1 |
20100131202 | Dannevik et al. | May 2010 | A1 |
20110141265 | Holtkamp et al. | Jun 2011 | A1 |
20120151585 | Lamastra et al. | Jun 2012 | A1 |
20120304007 | Hanks et al. | Nov 2012 | A1 |
20130031037 | Brandt et al. | Jan 2013 | A1 |
20130339919 | Baseman et al. | Dec 2013 | A1 |
20140082730 | Vashist et al. | Mar 2014 | A1 |
20140247347 | McNeill et al. | Sep 2014 | A1 |
20140336785 | Asenjo et al. | Nov 2014 | A1 |
20150045928 | Perez et al. | Feb 2015 | A1 |
20150067844 | Brandt et al. | Mar 2015 | A1 |
20150096022 | Vincent et al. | Apr 2015 | A1 |
20150184549 | Pamujula et al. | Jul 2015 | A1 |
20150185716 | Wichmann et al. | Jul 2015 | A1 |
20150213369 | Brandt et al. | Jul 2015 | A1 |
20150286202 | Amano et al. | Oct 2015 | A1 |
20150324329 | Blevins et al. | Nov 2015 | A1 |
20160170996 | Frank et al. | Jun 2016 | A1 |
20160253618 | Imazawa et al. | Sep 2016 | A1 |
20160259318 | Vogt et al. | Sep 2016 | A1 |
20160261465 | Gupta et al. | Sep 2016 | A1 |
20160300338 | Zafar et al. | Oct 2016 | A1 |
20160330222 | Brandt et al. | Nov 2016 | A1 |
20160352762 | Friedlander et al. | Dec 2016 | A1 |
20170034205 | Canedo et al. | Feb 2017 | A1 |
20170093897 | Cochin et al. | Mar 2017 | A1 |
20170102694 | Enver et al. | Apr 2017 | A1 |
20170102696 | Bell et al. | Apr 2017 | A1 |
20170109646 | David | Apr 2017 | A1 |
20170149820 | Ruvio et al. | May 2017 | A1 |
20170156674 | Hochman | Jun 2017 | A1 |
20170169219 | Ogawa et al. | Jun 2017 | A1 |
20170255723 | Asenjo et al. | Sep 2017 | A1 |
20170264629 | Wei et al. | Sep 2017 | A1 |
20180005083 | Georgescu et al. | Jan 2018 | A1 |
20180033130 | Kimura et al. | Feb 2018 | A1 |
20180079125 | Perez et al. | Mar 2018 | A1 |
20180114121 | Rana et al. | Apr 2018 | A1 |
20180144248 | Lu et al. | May 2018 | A1 |
20180150070 | Johnson et al. | May 2018 | A1 |
20180157831 | Abbaszadeh et al. | Jun 2018 | A1 |
20180165602 | Van Seijen et al. | Jun 2018 | A1 |
20180180085 | Watanabe et al. | Jun 2018 | A1 |
20180188704 | Cella et al. | Jul 2018 | A1 |
20180188714 | Cella et al. | Jul 2018 | A1 |
20180188715 | Cella et al. | Jul 2018 | A1 |
20180210425 | Cella et al. | Jul 2018 | A1 |
20180210426 | Cella et al. | Jul 2018 | A1 |
20180210427 | Cella et al. | Jul 2018 | A1 |
20180248905 | Côté et al. | Aug 2018 | A1 |
20180253073 | Cella et al. | Sep 2018 | A1 |
20180253074 | Cella et al. | Sep 2018 | A1 |
20180253075 | Cella et al. | Sep 2018 | A1 |
20180253082 | Asenjo et al. | Sep 2018 | A1 |
20180255374 | Cella et al. | Sep 2018 | A1 |
20180255375 | Cella et al. | Sep 2018 | A1 |
20180255376 | Cella et al. | Sep 2018 | A1 |
20180255377 | Cella et al. | Sep 2018 | A1 |
20180255378 | Cella et al. | Sep 2018 | A1 |
20180255379 | Cella et al. | Sep 2018 | A1 |
20180255380 | Cella et al. | Sep 2018 | A1 |
20180255381 | Cella et al. | Sep 2018 | A1 |
20180255382 | Cella et al. | Sep 2018 | A1 |
20180255383 | Cella et al. | Sep 2018 | A1 |
20180262528 | Jain | Sep 2018 | A1 |
20180276375 | Arov et al. | Sep 2018 | A1 |
20180284735 | Cella et al. | Oct 2018 | A1 |
20180284736 | Cella et al. | Oct 2018 | A1 |
20180284737 | Cella et al. | Oct 2018 | A1 |
20180284741 | Cella et al. | Oct 2018 | A1 |
20180284742 | Cella et al. | Oct 2018 | A1 |
20180284743 | Cella et al. | Oct 2018 | A1 |
20180284744 | Cella et al. | Oct 2018 | A1 |
20180284745 | Cella et al. | Oct 2018 | A1 |
20180284746 | Cella et al. | Oct 2018 | A1 |
20180284747 | Cella et al. | Oct 2018 | A1 |
20180284749 | Cella et al. | Oct 2018 | A1 |
20180284752 | Cella et al. | Oct 2018 | A1 |
20180284753 | Cella et al. | Oct 2018 | A1 |
20180284754 | Cella et al. | Oct 2018 | A1 |
20180284755 | Cella et al. | Oct 2018 | A1 |
20180284756 | Cella et al. | Oct 2018 | A1 |
20180284757 | Cella et al. | Oct 2018 | A1 |
20180284758 | Cella et al. | Oct 2018 | A1 |
20180292811 | Baseman et al. | Oct 2018 | A1 |
20180292812 | Baseman et al. | Oct 2018 | A1 |
20180299878 | Cella et al. | Oct 2018 | A1 |
20180316719 | Schneider et al. | Nov 2018 | A1 |
20180321666 | Cella et al. | Nov 2018 | A1 |
20180321667 | Cella et al. | Nov 2018 | A1 |
20180321672 | Cella et al. | Nov 2018 | A1 |
20180358271 | David | Dec 2018 | A1 |
20180367550 | Musuvathi et al. | Dec 2018 | A1 |
20180376067 | Martineau | Dec 2018 | A1 |
20190020669 | Glatfelter et al. | Jan 2019 | A1 |
20190025805 | Cella et al. | Jan 2019 | A1 |
20190025806 | Cella et al. | Jan 2019 | A1 |
20190025812 | Cella et al. | Jan 2019 | A1 |
20190033845 | Cella et al. | Jan 2019 | A1 |
20190033846 | Cella et al. | Jan 2019 | A1 |
20190033847 | Cella et al. | Jan 2019 | A1 |
20190033848 | Cella et al. | Jan 2019 | A1 |
20190033849 | Cella et al. | Jan 2019 | A1 |
20190041836 | Cella et al. | Feb 2019 | A1 |
20190041840 | Cella et al. | Feb 2019 | A1 |
20190041841 | Cella et al. | Feb 2019 | A1 |
20190041843 | Cella et al. | Feb 2019 | A1 |
20190041844 | Cella et al. | Feb 2019 | A1 |
20190041845 | Cella et al. | Feb 2019 | A1 |
20190041846 | Cella et al. | Feb 2019 | A1 |
20190064766 | Friebolin et al. | Feb 2019 | A1 |
20190064792 | Cella et al. | Feb 2019 | A1 |
20190068618 | Mestha et al. | Feb 2019 | A1 |
20190068620 | Avrahami et al. | Feb 2019 | A1 |
20190072922 | Cella et al. | Mar 2019 | A1 |
20190072923 | Cella et al. | Mar 2019 | A1 |
20190072924 | Cella et al. | Mar 2019 | A1 |
20190072925 | Cella et al. | Mar 2019 | A1 |
20190072926 | Cella et al. | Mar 2019 | A1 |
20190072928 | Cella et al. | Mar 2019 | A1 |
20190073585 | Pu et al. | Mar 2019 | A1 |
20190079483 | Cella et al. | Mar 2019 | A1 |
20190089722 | Ciocarlie et al. | Mar 2019 | A1 |
20190094829 | Cella et al. | Mar 2019 | A1 |
20190094842 | Lee et al. | Mar 2019 | A1 |
20190094843 | Lee et al. | Mar 2019 | A1 |
20190104138 | Storms et al. | Apr 2019 | A1 |
20190107816 | Cella et al. | Apr 2019 | A1 |
20190114756 | Weiss et al. | Apr 2019 | A1 |
20190118300 | Penny et al. | Apr 2019 | A1 |
20190121339 | Cella et al. | Apr 2019 | A1 |
20190121340 | Cella et al. | Apr 2019 | A1 |
20190121342 | Cella et al. | Apr 2019 | A1 |
20190121343 | Cella et al. | Apr 2019 | A1 |
20190121344 | Cella et al. | Apr 2019 | A1 |
20190121345 | Cella et al. | Apr 2019 | A1 |
20190121346 | Cella et al. | Apr 2019 | A1 |
20190121347 | Cella et al. | Apr 2019 | A1 |
20190121349 | Cella et al. | Apr 2019 | A1 |
20190129404 | Cella et al. | May 2019 | A1 |
20190129405 | Cella et al. | May 2019 | A1 |
20190129406 | Cella et al. | May 2019 | A1 |
20190129408 | Cella et al. | May 2019 | A1 |
20190129409 | Cella et al. | May 2019 | A1 |
20190137985 | Cella et al. | May 2019 | A1 |
20190137987 | Cella et al. | May 2019 | A1 |
20190137988 | Cella et al. | May 2019 | A1 |
20190137989 | Cella et al. | May 2019 | A1 |
20190138897 | Xu et al. | May 2019 | A1 |
20190138932 | Akella et al. | May 2019 | A1 |
20190146474 | Cella et al. | May 2019 | A1 |
20190146476 | Cella et al. | May 2019 | A1 |
20190146477 | Cella et al. | May 2019 | A1 |
20190146481 | Cella et al. | May 2019 | A1 |
20190146482 | Cella et al. | May 2019 | A1 |
20190155272 | Cella et al. | May 2019 | A1 |
20190179277 | Cella et al. | Jun 2019 | A1 |
20190179278 | Cella et al. | Jun 2019 | A1 |
20190179279 | Cella et al. | Jun 2019 | A1 |
20190179300 | Cella et al. | Jun 2019 | A1 |
20190179301 | Cella et al. | Jun 2019 | A1 |
20190180153 | Buckler et al. | Jun 2019 | A1 |
20190187646 | Cella et al. | Jun 2019 | A1 |
20190187647 | Cella et al. | Jun 2019 | A1 |
20190187648 | Cella et al. | Jun 2019 | A1 |
20190187649 | Cella et al. | Jun 2019 | A1 |
20190187650 | Cella et al. | Jun 2019 | A1 |
20190187651 | Cella et al. | Jun 2019 | A1 |
20190187652 | Cella et al. | Jun 2019 | A1 |
20190187653 | Cella et al. | Jun 2019 | A1 |
20190187654 | Cella et al. | Jun 2019 | A1 |
20190187655 | Cella et al. | Jun 2019 | A1 |
20190187656 | Cella et al. | Jun 2019 | A1 |
20190187657 | Cella et al. | Jun 2019 | A1 |
20190187680 | Cella et al. | Jun 2019 | A1 |
20190187681 | Cella et al. | Jun 2019 | A1 |
20190187682 | Cella et al. | Jun 2019 | A1 |
20190187683 | Cella et al. | Jun 2019 | A1 |
20190187684 | Cella et al. | Jun 2019 | A1 |
20190187685 | Cella et al. | Jun 2019 | A1 |
20190187686 | Cella et al. | Jun 2019 | A1 |
20190187687 | Cella et al. | Jun 2019 | A1 |
20190187688 | Cella et al. | Jun 2019 | A1 |
20190187689 | Cella et al. | Jun 2019 | A1 |
20190187690 | Cella et al. | Jun 2019 | A1 |
20190197236 | Niculescu-Mizil et al. | Jun 2019 | A1 |
20190213099 | Schmidt et al. | Jul 2019 | A1 |
20190219995 | Cella et al. | Jul 2019 | A1 |
20190219996 | Cella et al. | Jul 2019 | A1 |
20190227536 | Cella et al. | Jul 2019 | A1 |
20190227537 | Cella et al. | Jul 2019 | A1 |
20190230099 | Mestha et al. | Jul 2019 | A1 |
20190230106 | Abbaszadeh et al. | Jul 2019 | A1 |
20190235461 | Cella et al. | Aug 2019 | A1 |
20190235462 | Cella et al. | Aug 2019 | A1 |
20190238568 | Goswami et al. | Aug 2019 | A1 |
20190243323 | Cella et al. | Aug 2019 | A1 |
20190243346 | Baseman et al. | Aug 2019 | A1 |
20190286111 | Yennie et al. | Sep 2019 | A1 |
20190286892 | Li et al. | Sep 2019 | A1 |
20190294869 | Naphade et al. | Sep 2019 | A1 |
20190295887 | Trickett et al. | Sep 2019 | A1 |
20190295890 | Clark et al. | Sep 2019 | A1 |
20190295891 | Clark et al. | Sep 2019 | A1 |
20190295906 | Clark et al. | Sep 2019 | A1 |
20190299536 | Putman et al. | Oct 2019 | A1 |
20190302707 | Guo et al. | Oct 2019 | A1 |
20190339684 | Cella et al. | Nov 2019 | A1 |
20190339685 | Cella et al. | Nov 2019 | A1 |
20190339686 | Cella et al. | Nov 2019 | A1 |
20190339687 | Cella et al. | Nov 2019 | A1 |
20190362480 | Diao et al. | Nov 2019 | A1 |
20190379677 | Zenz et al. | Dec 2019 | A1 |
20190384250 | Cella et al. | Dec 2019 | A1 |
20190386595 | Fujita et al. | Dec 2019 | A1 |
20190391550 | Cella et al. | Dec 2019 | A1 |
20190391551 | Cella et al. | Dec 2019 | A1 |
20190391552 | Cella et al. | Dec 2019 | A1 |
20200012248 | Cella et al. | Jan 2020 | A1 |
20200013156 | Weiss et al. | Jan 2020 | A1 |
20200019154 | Cella et al. | Jan 2020 | A1 |
20200019155 | Cella et al. | Jan 2020 | A1 |
20200026270 | Cella et al. | Jan 2020 | A1 |
20200076838 | Mestha et al. | Mar 2020 | A1 |
20200081423 | Clark et al. | Mar 2020 | A1 |
20200083070 | Clark et al. | Mar 2020 | A1 |
20200083074 | Clark et al. | Mar 2020 | A1 |
20200083080 | Clark et al. | Mar 2020 | A1 |
20200096986 | Cella et al. | Mar 2020 | A1 |
20200096987 | Cella et al. | Mar 2020 | A1 |
20200096988 | Cella et al. | Mar 2020 | A1 |
20200096989 | Cella et al. | Mar 2020 | A1 |
20200096990 | Cella et al. | Mar 2020 | A1 |
20200096992 | Cella et al. | Mar 2020 | A1 |
20200096993 | Cella et al. | Mar 2020 | A1 |
20200096994 | Cella et al. | Mar 2020 | A1 |
20200096995 | Cella et al. | Mar 2020 | A1 |
20200096996 | Cella et al. | Mar 2020 | A1 |
20200096997 | Cella et al. | Mar 2020 | A1 |
20200096998 | Cella et al. | Mar 2020 | A1 |
20200099707 | Abbaszadeh et al. | Mar 2020 | A1 |
20200103890 | Cella et al. | Apr 2020 | A1 |
20200103891 | Cella et al. | Apr 2020 | A1 |
20200103892 | Cella et al. | Apr 2020 | A1 |
20200103893 | Cella et al. | Apr 2020 | A1 |
20200110398 | Cella et al. | Apr 2020 | A1 |
20200110399 | Cella et al. | Apr 2020 | A1 |
20200110400 | Cella et al. | Apr 2020 | A1 |
20200110401 | Cella et al. | Apr 2020 | A1 |
20200111689 | Banna et al. | Apr 2020 | A1 |
20200117180 | Cella et al. | Apr 2020 | A1 |
20200125978 | Abbaszadeh et al. | Apr 2020 | A1 |
20200166909 | Noone et al. | May 2020 | A1 |
20200175171 | Rieger et al. | Jun 2020 | A1 |
20200310380 | Sun et al. | Oct 2020 | A1 |
20200314128 | Hild | Oct 2020 | A1 |
20200333777 | Maruyama | Oct 2020 | A1 |
20200401120 | Putman et al. | Dec 2020 | A1 |
20210069990 | Hough et al. | Mar 2021 | A1 |
20210118730 | Clark et al. | Apr 2021 | A1 |
20210125863 | Clark et al. | Apr 2021 | A1 |
20210132593 | Sundstrom et al. | May 2021 | A1 |
20210138735 | Limoge et al. | May 2021 | A1 |
20210168976 | Kawai et al. | Jun 2021 | A1 |
20210192779 | Putman et al. | Jun 2021 | A1 |
20210263495 | Putman et al. | Aug 2021 | A1 |
20210311440 | Sundstrom et al. | Oct 2021 | A1 |
20210378190 | Limoge et al. | Dec 2021 | A1 |
20210394456 | Hough et al. | Dec 2021 | A1 |
20220011727 | Hlavac et al. | Jan 2022 | A1 |
20220236709 | Cella et al. | Jul 2022 | A1 |
20220308653 | Pu et al. | Sep 2022 | A1 |
20230182235 | Penny et al. | Jun 2023 | A1 |
Number | Date | Country |
---|---|---|
2002359881 | Jul 2003 | AU |
1371489 | Sep 2002 | CN |
1705938 | Dec 2005 | CN |
101771702 | Jul 2010 | CN |
102466566 | May 2012 | CN |
102778858 | Nov 2012 | CN |
103324175 | Sep 2013 | CN |
104656602 | May 2015 | CN |
10594030 | Nov 2015 | CN |
105264640 | Jan 2016 | CN |
109167796 | Jan 2016 | CN |
105488806 | Apr 2016 | CN |
105960777 | Sep 2016 | CN |
106687981 | May 2017 | CN |
106857797 | Jun 2017 | CN |
106921676 | Jul 2017 | CN |
107389701 | Nov 2017 | CN |
107835982 | Mar 2018 | CN |
107851047 | Mar 2018 | CN |
107886500 | Apr 2018 | CN |
107976969 | May 2018 | CN |
108353078 | Jul 2018 | CN |
108604393 | Sep 2018 | CN |
108780314 | Nov 2018 | CN |
108885446 | Nov 2018 | CN |
109561112 | Apr 2019 | CN |
109766992 | May 2019 | CN |
110381045 | Oct 2019 | CN |
110431503 | Nov 2019 | CN |
110647414 | Jan 2020 | CN |
110851834 | Feb 2020 | CN |
0671677 | Mar 1999 | EP |
2585248 | Oct 2017 | EP |
4028228 | Jul 2022 | EP |
H05322789 | Dec 1993 | JP |
2001100838 | Apr 2001 | JP |
2002230337 | Aug 2002 | JP |
2003167613 | Jun 2003 | JP |
2004104576 | Apr 2004 | JP |
2004178388 | Jun 2004 | JP |
2005211105 | Aug 2005 | JP |
2005250990 | Sep 2005 | JP |
2007280366 | Oct 2007 | JP |
2008009868 | Jan 2008 | JP |
2008512792 | Apr 2008 | JP |
2008146621 | Jun 2008 | JP |
2009134623 | Jun 2009 | JP |
2009282740 | Dec 2009 | JP |
4601492 | Dec 2010 | JP |
4621773 | Jan 2011 | JP |
2015099022 | May 2015 | JP |
2015181024 | Oct 2015 | JP |
2016-157357 | Sep 2016 | JP |
5984096 | Sep 2016 | JP |
2017091091 | May 2017 | JP |
6224873 | Nov 2017 | JP |
2017211713 | Nov 2017 | JP |
2018022210 | Feb 2018 | JP |
6356909 | Jul 2018 | JP |
2018139101 | Sep 2018 | JP |
2019061565 | Apr 2019 | JP |
6527295 | Jun 2019 | JP |
2019095859 | Jun 2019 | JP |
2019145042 | Aug 2019 | JP |
2020-035420 | Mar 2020 | JP |
2020114597 | Jul 2020 | JP |
2022522159 | Apr 2022 | JP |
10-2011-0069934 | Jun 2011 | KR |
10-2015-0075742 | Jul 2015 | KR |
101568879 | Nov 2015 | KR |
10-2017-0127430 | Nov 2017 | KR |
10-2019-0000182 | Jan 2019 | KR |
454137 | Sep 2001 | TW |
489443 | Jun 2002 | TW |
200307972 | Dec 2003 | TW |
200415453 | Aug 2004 | TW |
200629117 | Aug 2006 | TW |
200715080 | Apr 2007 | TW |
200724237 | Jul 2007 | TW |
201212140 | Mar 2012 | TW |
I409658 | Sep 2013 | TW |
201339069 | Oct 2013 | TW |
201717144 | May 2017 | TW |
201723914 | Jul 2017 | TW |
201809640 | Mar 2018 | TW |
201816717 | May 2018 | TW |
201839626 | Nov 2018 | TW |
201842403 | Dec 2018 | TW |
201908896 | Mar 2019 | TW |
201939634 | Oct 2019 | TW |
201941194 | Oct 2019 | TW |
201941328 | Oct 2019 | TW |
202001678 | Jan 2020 | TW |
2005093535 | Nov 2005 | WO |
2018044410 | Mar 2018 | WO |
2018055754 | Mar 2018 | WO |
2018061842 | Apr 2018 | WO |
2018062398 | Apr 2018 | WO |
2018105296 | Jun 2018 | WO |
2018217903 | Nov 2018 | WO |
2019012653 | Jan 2019 | WO |
2019058532 | Mar 2019 | WO |
2019182913 | Sep 2019 | WO |
2019195039 | Oct 2019 | WO |
03060812 | Jul 2003 | WO |
Entry |
---|
SPC for Excel: “Control Chart Rules and Interpretation,” BPI Consulting, LLC, Mar. 2016, 20 Pages, [Retrieved on Jul. 23, 2019], Retrieved From URL: https://www.spcforexcel.com/knowledge/control-chart-basics/control-chart-rules-interpretation. |
SPC for Excel: “Interpreting Control Charts,” BPI Consulting, LLC, Apr. 2004, 9 Pages, [Retrieved on Jul. 23, 2019], Retrieved From URL: https://www.spcforexcel.com/knowledge/control-charts-basics/interpreting-control-charts. |
Szkilnyk G., “Vision Based Fault Detection in Assembly Automation,” Queen's University, Jun. 2012, 219 Pages. |
Thrun, “Probabilistic robotics,” Communications of the ACM 45.3, 2002, pp. 52-57. |
Thrun S., et al., “Probabilistic Robotics,” pp. 1999-2000. |
Vecerik M., et al., “Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards,” arXiv preprint, arXiv:1707.08817, Submitted on Jul. 27, 2017, 10 Pages, Last revised on Oct. 8, 2018. |
Zhong R.Y., et al., “Intelligent Manufacturing in the Context of Industry 4.0: A Review,” Engineering, Mar. 31, 2017, vol. 3, No. 5, pp. 616-630. |
Zhou C., et al., “Anomaly Detection with Robust Deep Autoencoders,” Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 13-17, 2017, pp. 665-674. |
American Society for Quality: “What is Statistical Process Control?,” 2021, 07 Pages, [Retrieved on Jul. 23, 2019], Retrieved from URL: https://asq.org/quality-resources/statistical-process-control. |
An J., et al., “Variational Autoencoder Based Anomaly Detection Using Reconstruction Probability,” Special Lecture on IE 2.1, Dec. 27, 2015, pp. 1-18. |
Bose A., et al., “Behavioral Detection of Malware on Mobile Handsets,” MobiSys, Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, Jun. 17-20, 2008, pp. 225-238. |
Evangelidis G.D., et al., “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct. 2008, vol. 30, No. 10, pp. 1-8. |
Extended European Search Report for European Application No. 19916905.3, dated Sep. 9, 2022, 10 Pages. |
Extended European Search Report for European Application No. 20156940.7, dated Aug. 10, 2020, 12 Pages. |
Extended European Search Report for European Application No. 20763956.8, dated Sep. 9, 2022, 11 Pages. |
Extended European Search Report for European Application No. 20832713.0, dated Jan. 3, 2023, 10 Pages. |
Fujimoto S., et al., “Addressing Function Approximation Error in Actor-critic Methods,” Proceedings of the 35th International Conference on Machine Learning Research, Oct. 22, 2018, 15 Pages. |
Goodfellow I.J., et al., “Generative Adversarial Nets,” Proceedings of Advances in Neural Information Processing Systems, 2014, 9 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2019/053746, dated Sep. 10, 2021, 6 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2020/029022, dated Sep. 10, 2021, 6 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2020/039064, dated Jan. 6, 2022, 7 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2020/052254, dated Apr. 21, 2022, 7 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2020/059339, dated May 19, 2022, 13 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2020/061434, dated Jun. 2, 2022, 09 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2021/019857, dated Sep. 9, 2022, 14 Pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2021/021440, dated Sep. 22, 2022, 09 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/053746, dated Nov. 5, 2019, 7 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/029022, dated Jul. 9, 2020, 08 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/039064, dated Jul. 30, 2020, 8 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/052254, dated Jan. 12, 2021, 8 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/059339, dated Feb. 5, 2021, 14 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/061434, dated Feb. 22, 2021, 10 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/019857, dated May 7, 2021, 15 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/021440, dated May 20, 2021, 10 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/038085, dated Sep. 29, 2021, 14 Pages. |
Karnouskos S., “Stuxnet Worm Impact on Industrial Cyber-Physical System Security,” IECON, 37th Annual Conference of the IEEE Industrial Electronics Society, IEEE, 2011, 5 Pages. |
Kingma D.P., et al., “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv:1412.6980, ICLR 2015, Jan. 30, 2017, 15 pages. |
Lardinois F., “Nvidia's Researchers Teach a Robot to Perform Simple Tasks by Observing a Human,” 6 Pages, [Retrieved on Mar. 11, 2019], Retrieved from URL: https://techcrunch.com/2018/05/20/nvidias-researchers-teach-a-robot-to-learn-simple-tasks-by-observing-a-human/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook. |
Lillicrap T.P., et al., Continuous Control With Deep Reinforcement Learning, Published as a Conference Paper at ICLR 2016, arXiv: 1509.02971v6 [cs.LG], Last Revised on Jul. 5, 2019, 14 Pages. |
Liu H., et al., “Intelligent Tuning Method of Pid Parameters Based on Iterative Learning Control for Atomic Force Microscopy,” Science Direct Micron, 2018, vol. 104, pp. 26-36. |
Malhotra P., et al., “LSTM-Based Encoder-Decoder for Multi-Sensor Anomaly Detection,” arXiv preprint arXiv: 1607.00148, Last Revised on Jul. 11, 2016, 5 pages. |
Mnih V., et al., “Playing Atari With Deep Reinforcement Learning,” arXiv preprint arXiv: 1312.5602v1, Dec. 19, 2013, 9 pages. |
Mueller F., et al., “Real-time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor,” Max-Planck-Institute for Informatics, Germany, Universidad Rey Juan Carlos, Spain, Oct. 5, 2017, 16 Pages. |
Ng A., “Sparse Autoencoder,” CS294A Lecture Notes 72.2011, 2011, pp. 1-19. |
Office Action and Search Report from Taiwan Patent Application No. 108137373, dated Mar. 31, 2023, 24 pages. |
Office Action and Search Report from Taiwan Patent Application No. 111130991, dated May 17, 2023, 19 pages. |
Office Action for Chinese Patent Application No. 2020800738527, dated Apr. 25, 2023, 37 Pages. |
Office Action for European Patent Application No. 20156940.7, dated Feb. 10, 2023, 6 Pages. |
Office Action for Japanese Patent Application No. 2021-549835, dated Mar. 3, 2023, 7 Pages. |
Office Action for Japanese Patent Application No. 2021575060, dated Jun. 2, 2023, 7 pages. |
Office Action for Japanese Patent Application No. 2022520885, dated Jun. 30, 2023, 10 Pages. |
Office Action for Japanese Patent Application No. 2022529027, dated Jun. 30, 2023, 5 pages. |
Papanastasiou S., et al., “Bridging the Gap between Physical Layer Emulation and Network Simulation,” IEEE Wireless Communication and Networking Conference, Date of Conference: Apr. 18-21, 2010, 06 pages. |
Purdue University: “Intrusion Alert: System Uses Machine Learning, Curiosity-Driven ‘Honeypots’ to Stop Cyber Attackers,” Research Foundation News, Feb. 6, 2020, 06 Pages, Retrieved From URL: https://engineering.purdue.edu/ECE/News/2020/intrusion-alert-system-uses-machine-learning-curiosity-driven-honeypots-to-stop-cyber-attackers. |
Real R., et al., “The Probabilistic Basis of Jaccard's Index of Similarity,” Systematic Biology, 1996, vol. 45, No. 3, pp. 380-385. |
Sakurada M., et al., “Anomaly Detection Using Autoencoders With Nonlinear Dimensionality Reduction,” Proceedings of the Machine Learning for Sensory Data Analysis (MLSDA) 2nd Workshop on Machine Learning for Sensory Data Analysis, 2014, 8 Pages. |
Saunders J.A., et al., “Visual Feedback Control of Hand Movements,” The Journal of Neuroscience, Mar. 31, 2004, vol. 24, No. 13, pp. 3223-3234. |
Simon T., et al., “Hand Keypoint Detection in Single Images Using Multiview Bootstrapping,” Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2017, pp. 1145-1153. |
Office Action for Japanese Patent Application No. 2021549835, dated Sep. 22, 2023, 7 pages. |
Office Action for Japanese Patent Application No. 2021575060, dated Oct. 13, 2023, 3 pages. |
Office Action for Japanese Patent Application No. 2022529027, dated Oct. 13, 2023, 3 pages. |
Notice of Allowance for Taiwanese Patent Application No. 108137373, dated Oct. 12, 2023, 4 pages. |
Office Action for Chinese Patent Application No. 202080073852.7, dated Nov. 1, 2023, 4 pages. |
Office Action for Japanese Patent Application No. 2022-520885, dated Nov. 2, 2023, 5 pages. |
Office Action for Japanese Patent Application No. 2022-551360, dated Nov. 2, 2023, 4 pages. |
Office Action for Japanese Patent Application No. 2022-207136, dated Nov. 24, 2023, 6 pages. |
Office Action for TW Patent Application No. 11221179860, dated Nov. 27, 2023, 10 pages. |
Office Action for Japanese Patent Application No. 2022-553668, mailed Sep. 1, 2023, 9 Pages. |
Supplementary European Search Report for European Patent Application No. 21760563.3, mailed Jul. 18, 2023, 12 Pages. |
Vollmer, et al., “Cyber-physical system security with deceptive virtual hosts for industrial control networks,” IEEE Transactions on Industrial Informatics 10.2, May 2014, pp. 1337-1347. |
Office Action and Search Report from Taiwan Patent Application No. 112103333, dated Aug. 21, 2023, 8 Pages. |
Office Action from Taiwan Patent Application No. 11221224400, dated Dec. 6, 2023, 18 pages. |
Office Action from Indian Patent Application No. 202217044168, dated Nov. 30, 2023, 10 pages. |
Extended Search Report from European Patent Application No. 20874557.0, dated Oct. 19, 2023, 12 Pages. |
Potluri, et al., “Deep learning based efficient anomaly detection for securing process control systems against injection attacks,” 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), 2019, pp. 854-860. |
Erba, et al., “Real-time evasion attacks with physical constraints on deep learning-based anomaly detectors in industrial control systems,” arXiv preprint arXiv:1907.07487, 2019, 15 pages. |
Notification of Reason for Refusal from Korean Patent Application No. 10-2021-7030695, dated Dec. 18, 2023, 13 Pages. |
Notice of Allowance for Japanese Patent Application No. 2021-549835, mailed Jan. 5, 2024, 3 Pages. |
Office Action for Japanese Patent Application No. 2022577143, mailed Jan. 12, 2024, 7 pages. |
Office Action for Chinese Patent Application No. 202080044987.0, mailed Jan. 29, 2024, 7 pages. |
Office Action for Chinese Patent Application No. 202080016336.0, mailed Feb. 1, 2024, 8 pages. |
Office Action for Japanese Patent Application No. 2022-553668, mailed Feb. 9, 2024, 9 pages. |
Extended European Search Report for European Application No. 20885424.0 dated Jan. 5, 2024, 12 pages. |
Extended European Search Report for European Application No. 20889594.6 dated Nov. 27, 2023, 87 pages. |
Notice of Allowance from Japanese Patent Application No. 2022-551360, dated Feb. 16, 2024, 3 pages. |
Notice of Allowance for Korean Patent Application No. 10-2021-7039615, mailed Feb. 27, 2024, 8 pages. |
Office Action from Chinese Patent Application No. 201980092196.2, dated Feb. 29, 2024, 12 pages. |
Office Action from KR Patent Application No. 10-2022-7014934, dated Mar. 21, 2024, 15 pages. |
Office Action from Chinese Patent Application No. 202080073852.7, dated Apr. 19, 2024, 15 pages. |
Office Action from Chinese Patent Application No. 202180014828.0, dated Apr. 28, 2024, 8 pages. |
Office Action from TW Patent Application No. 112103333, dated May 2, 2024, 3 pages. |
Notice of Allowance from Taiwan Patent Application No. 112113077, dated May 14, 2024, 6 pages. |
Office Action for European Patent Application No. 21767468.8, mailed Mar. 22, 2024, 12 Pages. |
Office Action from Taiwan Patent Application No. 111108232, dated May 30, 2024, 8 pages. |
Notice of Allowance from Japan Patent Application No. 2022-577143, dated May 31, 2024, 2 pages. |
Office Action from Chinese Patent Application No. 2020800791637, dated Jun. 4, 2024, 10 pages. |
Office Action from India Patent Application No. 202318013498, dated May 31, 2024, 7 pages. |
Office Action for Japanese Patent Application No. 2023-116653, mailed Jul. 12, 2024, 6 pages. |
Notice of Allowance for TW Patent Application No. 109121636, mailed Jul. 9, 2024, 4 pages. |
Office Action for Japanese Patent Application No. 2022-207136, mailed Jul. 26, 2024, 6 pages. |
Office Action for Korean Patent Application No. 10-2022-7032975, mailed Jul. 20, 2024, 16 pages. |
Office Action for Korean Patent Application No. 10-2021-7030695, mailed Jul. 12, 2024, 6 pages. |
Office Action for Japanese Patent Application No. 2023-192266, mailed Aug. 9, 2024, 9 pages. |
Notice of Allowance for Chinese Patent Application No. 2020800449870, mailed Aug. 20, 2024, 4 pages. |
Office Action for EP Patent Application No. 199169058, mailed May 28, 2024, 7 pages. |
Office Action for EP Patent Application No. 207639588, mailed May 28, 2024, 7 pages. |
Number | Kind | Date | Country |
---|---|---|---|
20230367301 | A1 | Nov 2023 | US |
Number | Date | Country | |
---|---|---|
62865859 | Jun 2019 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 17304611 | Jun 2021 | US |
Child | 18357560 | | US |
Parent | 16663245 | Oct 2019 | US |
Child | 17304611 | | US |
Parent | 16519102 | Jul 2019 | US |
Child | 16663245 | | US |