This application claims priority to Taiwan Application Serial No. 108133671, filed on Sep. 18, 2019. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The present disclosure relates to data processing methods, and, more particularly, to a data processing method and a data processing system that predict a life limit of a vacuum pump.
During semiconductor wafer fabrication, a vacuum pump is used to remove dust from a working chamber. The vacuum pump gradually degrades and eventually can no longer remove the dust effectively, so the degraded vacuum pump has to be replaced.
In the prior art, operators replace a vacuum pump based on their experience. However, operators sometimes replace a vacuum pump that is still functioning well, or only replace it after it has already malfunctioned for a period of time.
Patent application publication No. WO/2006/064990 provides a method for automatically sensing a vacuum pump and predicting its life limit. However, the method relies on a complicated calculation process that takes much time to perform, and the accuracy of the prediction is limited, so it does not meet the need of a modern semiconductor wafer fabrication process for the timely replacement of vacuum pumps.
Therefore, how to replace a degraded vacuum pump at an appropriate time has become an urgent issue in the art.
The present disclosure provides a data processing method of a vacuum pump, comprising: sensing, via at least one sensing portion, target information of a target device; receiving and processing, via an electronic device, the target information of the sensing portion to form feature information; processing, via the electronic device, the feature information into a label matrix, and establishing, via an artificial intelligence training method, a target model based on the label matrix; and after the electronic device captures real-time information of the target device, predicting, via the target model, a life limit of the target device, wherein a content of the target information corresponds to a content of the real-time information.
The present disclosure also provides a data processing system of a vacuum pump, comprising: a sensing portion configured for sensing target information of a target device; a reception portion communicatively connected to the sensing portion and configured for receiving and processing the target information to form feature information; a label portion communicatively connected to the reception portion and configured for processing the feature information into a label matrix, where a target model is established by an artificial intelligence training method based on the label matrix; and a prediction portion communicatively connected to the reception portion and the label portion and configured for predicting a life limit of the target device via the target model after real-time information of the target device is captured, wherein a content of the target information corresponds to a content of the real-time information.
In an embodiment, the electronic device labels the feature information according to a cumulative method and a principal component analysis method. In another embodiment, the cumulative method converts the feature information before accumulation into a cumulative feature after accumulation. In yet another embodiment, the label matrix is obtained by calculating the cumulative feature after accumulation according to the principal component analysis method and a min-max normalization method.
In an embodiment, the feature information includes data of the target device at a working stage, but does not include data of the target device when the machine is at an idling stage, a maintenance stage and/or a shutdown stage and has no load.
In an embodiment, the sensing portion is an acceleration sensor connected to the target device.
In an embodiment, the target device is a vacuum pump in communication with a working chamber of a semiconductor wafer fabrication process.
In an embodiment, the target model is a deep learning model constituted by a neural network calculation mechanism.
In an embodiment, the real-time information is processed by the electronic device and input to the target model, and the electronic device obtains predicted information of a life limit of the target device.
The present disclosure further provides a non-transitory computer readable medium stored with a program, which, when loaded into and executed by a computer, achieves the previously described data processing method.
In the data processing method and the data processing system according to the present disclosure, the label portion processes the feature information into the label matrix, so that a well-conditioned target model can be established, which is advantageous for artificial intelligence training. Compared with the prior art, the present disclosure employs a simple calculation process for the target model when predicting the life limit of a vacuum pump, so the life limit can be predicted quickly and accurately. Therefore, the need of a modern semiconductor wafer fabrication process for the timely replacement of vacuum pumps is satisfied.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
The terminology used herein is for the purpose of describing particular devices and methods and is not intended to be limiting of this disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In an embodiment, the electronic device 1a is a computer. The target device 9 includes a booster and/or a vacuum pump (e.g., a dry pump). The vacuum pump is in communication with a working chamber S of a semiconductor wafer fabrication process.
In an embodiment, the sensing portion 10 is an acceleration sensor (e.g., an acceleration meter) and is disposed on the target device 9.
In an embodiment, two acceleration meters (the sensing portion 10) are installed on the target device 9, such as the booster, in the axial direction and the radial direction, respectively. Two further acceleration meters (the sensing portion 10) are installed on the target device 9 in the form of a dry pump combination, one on the high-pressure (HP) machine and one on the low-pressure (LP) machine.
The reception portion 11 is communicatively connected to the sensing portion 10 and receives and processes the target information to form feature information.
In an embodiment, the feature information includes data of the target device 9 at a working stage, but does not include data of the target device 9 when the machine is at an idling stage, a maintenance stage and/or a shutdown stage and has no load.
The label portion 12 is communicatively connected to the reception portion 11 and processes the feature information into a label matrix, and a target model is established from the label matrix via an artificial intelligence training method.
In an embodiment, the label portion 12 processes the feature information according to a cumulative method and a principal component analysis (PCA) method. In an embodiment, the cumulative method converts the feature information before accumulation into a cumulative feature after accumulation. In another embodiment, the label portion 12 obtains the label matrix by calculating the cumulative feature according to the principal component analysis method and a min-max normalization method.
In an embodiment, the target model is a deep learning model constituted by a neural network calculation mechanism.
The prediction portion 13 is communicatively connected to the reception portion 11 and the label portion 12, and, after real-time information of the target device 9 is captured, predicts a life limit of the target device 9 based on the target model. A content of the target information corresponds to a content of the real-time information.
In an embodiment, the real-time information is processed by the reception portion 11 and the label portion 12 and input to the target model, and the prediction portion 13 obtains predicted information of a life limit of the target device 9.
In an embodiment, the content of the target information and the content of the real-time information are in the same unit. In another embodiment, the content of the target information and the content of the real-time information have different values.
In step S20, the sensing portion 10 senses target information of the target device 9.
In an embodiment, the target information comprises data of the target device 9 at a working stage, as well as data at an idling stage when the machine has no load, since the electronic device 1a has not yet processed the data (e.g., the data before filtering, as shown in the drawings).
In step S21, a reception portion 11 of the electronic device 1a receives the target information of the sensing portion 10.
In an embodiment, the electronic device 1a selects needed data (e.g., sub-step S210) and filters the needed data (e.g., sub-step S211) to obtain the filtered data shown in the drawings. The filtering is performed according to the following two conditions.
The first filtering condition: a recipe has multiple sets of data, e.g., 520 sets of data, which are divided into a front segment having 60 sets of data, a middle segment having 10 sets of data, and a rear segment having 450 sets of data. The data of the middle segment is filtered out, and the data of the front and rear segments are captured.
The second filtering condition: the standard deviation of the data of the front segment has to be less than a threshold for the front segment, and the standard deviation of the data of the rear segment has to be less than a threshold for the rear segment. The thresholds for the front and rear segments are weighting values trained by a neural network from data collected over a long period of time. In an embodiment, the standard deviation of the data of the front segment has to be less than 0.25, and the standard deviation of the data of the rear segment has to be less than 0.14. In another embodiment, the number of data points of the rear segment exceeding a base line has to be greater than 30 sets, where the base line = Lmean + Lstd, Lmean is the average value of the rear segment, and Lstd is the standard deviation of the rear segment.
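By way of a non-limiting illustration, the two filtering conditions may be expressed as in the following Python sketch, in which the function name, the default segment sizes, and the additive form of the base line (Lmean + Lstd) are assumptions made for illustration only:

```python
import numpy as np

def passes_filter(recipe_data, front_n=60, mid_n=10,
                  front_std_max=0.25, rear_std_max=0.14, min_points_over=30):
    """Illustrative check of the two filtering conditions for one recipe
    (e.g., an array of 520 measurement sets); thresholds use the example values."""
    recipe_data = np.asarray(recipe_data, dtype=float)
    front = recipe_data[:front_n]
    rear = recipe_data[front_n + mid_n:]        # the middle segment is filtered out

    # Second condition: segment standard deviations must stay below their thresholds.
    if np.std(front) >= front_std_max or np.std(rear) >= rear_std_max:
        return False

    # The rear segment must also have more than `min_points_over` sets above the base line.
    base_line = np.mean(rear) + np.std(rear)    # assumed form of the base line Lmean + Lstd
    return np.sum(rear > base_line) > min_points_over
```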
In step S22, the reception portion 11 of the electronic device 1a processes the target information to form feature information.
In an embodiment, the feature information comprises data of the target device 9 at a working stage, but does not include data of the target device 9 when the machine is at an idling stage, a maintenance stage and/or a shutdown stage and has no load. In an embodiment, the electronic device 1a converts the filtered data (as shown in the drawings) into the feature information.
The feature information comprises the following features of data at measurement points:
First, a root mean square (RMS) of the time-domain signals;
Second, a vibration speed: the vibration amount calculated according to the ISO 10816 vibration detection specification; and
Third, a sum of the 1st to 10th multiples of a fundamental frequency: frequencies identified from the mechanical structure and corresponding to its construction.
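As a non-limiting illustration, the three features may be computed for one measurement point as in the following Python sketch; the sampling rate, the fundamental frequency, and the simplified integration used for the vibration speed are assumptions, and the ISO 10816 evaluation is only approximated:

```python
import numpy as np

def extract_features(accel, fs, base_freq):
    """Illustrative computation of the three features from one acceleration signal;
    `fs` is the sampling rate in Hz and `base_freq` an assumed fundamental frequency."""
    accel = np.asarray(accel, dtype=float)

    # 1) Root mean square (RMS) of the time-domain signal.
    rms = np.sqrt(np.mean(accel ** 2))

    # 2) Vibration speed: a simplified stand-in for the ISO 10816 evaluation,
    #    obtained here by numerically integrating the acceleration to a velocity.
    velocity = np.cumsum(accel) / fs
    vib_speed = np.sqrt(np.mean(velocity ** 2))

    # 3) Sum of the 1x to 10x multiples of the fundamental frequency.
    spectrum = np.abs(np.fft.rfft(accel)) / len(accel)
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    harmonic_sum = sum(spectrum[np.argmin(np.abs(freqs - k * base_freq))]
                       for k in range(1, 11))

    return rms, vib_speed, harmonic_sum
```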
In step S23, the label portion 12 of the electronic device 1a processes the feature information into a label matrix.
In an embodiment, the label portion 12 labels the feature information with a labeling mechanism. In an embodiment, the labeling mechanism includes a cumulative method and a principal component analysis (PCA) method.
In order to make the trend of the feature information more significant, an accumulative concept is used to process the features. The cumulative method converts the feature information before accumulation into cumulative features after accumulation (as shown in the drawings) according to the following formula:
F_C = [Σ_{i=1}^{N} F(i)]^{1/2},
wherein F represents the feature table of the feature information, and F_C represents the cumulative features.
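Purely for illustration, the cumulative method may be realized as a running cumulative sum of the feature table followed by a square root, as in the following Python sketch; the running (row-by-row) interpretation of the formula is an assumption:

```python
import numpy as np

def cumulative_feature(feature_table):
    """For every row N of the feature table F, returns [sum of rows 1..N]^(1/2),
    i.e., a running form of F_C = [sum_{i=1}^{N} F(i)]^(1/2), which makes slow
    degradation trends in the features more pronounced."""
    return np.sqrt(np.cumsum(np.asarray(feature_table, dtype=float), axis=0))
```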
The label matrix is obtained by calculating the cumulative features according to the principal component analysis method and a min-max normalization (limit normalization) method, i.e., by converting [F_C] into [L], wherein L represents the label matrix. The label matrix is obtained by a calculation mechanism M, shown in the drawings, which comprises the following steps.
In step M41, the plurality of features of F_C are smoothed by MATLAB software, e.g., using the rloess method, to obtain new feature information (feature table) F_CS.
In step M42, PCA calculation is performed. PCA is applied to the plurality of features of F_CS via the MATLAB software, reducing the 12-dimensional features to capture the first principal component PCA_1 (1 level).
In step M43, PCA_1 is limit-normalized, as shown in the following formula:
X_norm = (x − x_min) / (x_max − x_min),
wherein x is a feature vector, the subscripts min and max represent the minimum and maximum of the feature vector, respectively, and X_norm is the result after normalization. Thus, a label matrix L is obtained, as shown in the drawings.
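As a non-limiting illustration, the calculation mechanism M (steps M41 to M43) may be sketched in Python as follows; LOWESS is used here as a stand-in for the MATLAB rloess smoother, and the smoothing fraction and function name are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.nonparametric.smoothers_lowess import lowess

def build_label_matrix(fc):
    """fc: cumulative feature table of shape (n_samples, 12).
    Returns the label matrix L as a normalized first principal component."""
    t = np.arange(len(fc))

    # Step M41: smooth each cumulative feature (LOWESS as a stand-in for rloess).
    fcs = np.column_stack([lowess(fc[:, j], t, frac=0.3, return_sorted=False)
                           for j in range(fc.shape[1])])

    # Step M42: reduce the 12 feature dimensions to the first principal component PCA_1.
    pca_1 = PCA(n_components=1).fit_transform(fcs).ravel()

    # Step M43: min-max (limit) normalization, X_norm = (x - x_min) / (x_max - x_min).
    return (pca_1 - pca_1.min()) / (pca_1.max() - pca_1.min())
```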
In steps S24 and S25, a target model is established based on the label matrix via an artificial intelligence training method.
In an embodiment, the target model is a deep learning model constituted by a neural network (NN) calculation mechanism (as shown in the drawings), wherein the target model is represented by an activation function according to the following formula:
output value = f(input value a · input weighting W + bias weighting b),
wherein f is the activation function, a is the input value, W is the input weighting, and b is the bias weighting.
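By way of a non-limiting illustration, the activation-function formula may be sketched in Python as a small fully connected forward pass; the layer sizes, the ReLU activation, and the randomly initialized weights in the example call are assumptions:

```python
import numpy as np

def target_model_forward(a, weights, biases):
    """Stacks the formula output = f(a * W + b) over the layers of a small
    fully connected network; the final layer is left linear."""
    def relu(x):
        return np.maximum(x, 0.0)

    h = np.asarray(a, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                 # hidden layers: f(a * W + b)
    return h @ weights[-1] + biases[-1]     # output value, e.g., a health/life-limit index

# Example call with assumed layer sizes and untrained random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((12, 8)), rng.standard_normal((8, 1))]
biases = [np.zeros(8), np.zeros(1)]
prediction = target_model_forward(rng.standard_normal(12), weights, biases)
```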
In step S26, the reception portion 11 of the electronic device 1a captures the real-time information of the target device 9.
In an embodiment, the real-time information comprises real-time signals, which are processed by the electronic device 1a (e.g., in steps S20-S23) and input to the target model.
In step S27, the prediction portion 13 obtains predicted information of a life limit of the target device 9 via the operation of the target model (as shown in the drawings).
In an embodiment, the content of the target information and the content of the real-time information are in the same unit (e.g., both are vibration data), but the content of the target information has a different value from that of the content of the real-time information.
An autoregressive moving average (ARMA) model is used to represent the equipment health index (EHI) time series (as shown in the drawings).
The autoregressive moving average model is a method for studying time series, and is constituted by combining an autoregressive (AR) model and a moving average (MA) model as follows:
ARMA(p, q) model: X_t = c + ε_t + Σ_{i=1}^{p} φ_i X_{t−i} + Σ_{j=1}^{q} θ_j ε_{t−j},
wherein φ_1, . . . , φ_p are the AR model parameters, θ_1, . . . , θ_q are the MA model parameters, c is a constant, and ε_t is a white noise signal.
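For illustration only, an ARMA(p, q) model may be fitted to the EHI series using the statsmodels library (an ARIMA model with d = 0), as in the following Python sketch; the orders p and q and the forecast horizon are placeholder values:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_ehi(ehi_series, p=2, q=2, steps=50):
    """Fits an ARMA(p, q) model to the equipment health index (EHI) series and
    extrapolates it `steps` points ahead; the forecast can then be compared
    with a replacement threshold."""
    model = ARIMA(np.asarray(ehi_series, dtype=float), order=(p, 0, q)).fit()
    return model.forecast(steps=steps)
```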
Several actual predicting processes performed according to the above method are shown in the drawings.
In the data processing method and the data processing system according to the present disclosure, the label portion of the electronic device processes the feature information formed from the target information into the label matrix. Thus, a well-conditioned target model can be established, which is advantageous for artificial intelligence training. Compared with the prior art, the present disclosure employs a simple calculation process for the target model when predicting the life limit of a vacuum pump, so the life limit can be predicted quickly and accurately. Therefore, the need of a modern semiconductor wafer fabrication process for the timely replacement of vacuum pumps is satisfied.
The present disclosure provides a non-transitory computer readable medium stored with a program, which, when loaded into and executed by a computer, achieves the previously described data processing method. In an embodiment, the non-transitory computer readable medium is a compact disk.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary, with a true scope of the disclosure being indicated by the following claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
108133671 | Sep 2019 | TW | national |
Number | Name | Date | Kind |
---|---|---|---|
6289735 | Dister et al. | Sep 2001 | B1 |
6301572 | Harrison | Oct 2001 | B1 |
6865513 | Ushiku et al. | Mar 2005 | B2 |
8493216 | Angell | Jul 2013 | B2 |
11169502 | Xia | Nov 2021 | B2 |
20090110236 | Huang | Apr 2009 | A1 |
20100302247 | Perez | Dec 2010 | A1 |
20130051634 | Mirski-Fitton | Feb 2013 | A1 |
20150134271 | Ikejiri et al. | May 2015 | A1 |
Number | Date | Country |
---|---|---|
1230869 | Dec 2005 | CN |
101080699 | Nov 2007 | CN |
101080700 | Nov 2007 | CN |
100476663 | Apr 2009 | CN |
102402727 | Apr 2012 | CN |
102693450 | Sep 2012 | CN |
106089753 | Jan 2018 | CN |
591688 | Jun 2004 | TW |
I234610 | Jun 2005 | TW |
I250271 | Mar 2006 | TW |
I447302 | Aug 2014 | TW |
201640244 | Nov 2016 | TW |
Entry |
---|
Taiwanese Office Action for Taiwanese Patent Application No. 108133671 dated Sep. 10, 2020. |
Thanagasundram, et al. “A case study of autoregressive modelling and order selection for a dry vacuum pump”, ResearchGate; 2005; 10. |
Konishi, et al. “Diagnostic system to determine the in-service life of dry vacuum pumps”, IET; 1999; 7. |
Kacprzynski, et al. “A Prognostic Modeling Approach for Predicting Recurring Maintenance for Shipboard Propulsion Systems”, ASME; 2001; 7. |
Twiddle, et al. “Fuzzy model-based condition monitoring of a dry vacuum pump via time and frequency analysis of the exhaust pressure signal”, SAGE journals; 2008; 7. |
Choi, “Modeling and Model Based Fault Diagnosis of Dry Vacuum Pumps in the Semiconductor Industry”, 2013; 212. |
Butler, et al. “Prediction of Vacuum Pump Degradation in Semiconductor Processing”, Elsevier; 2009; 6. |
Number | Date | Country
---|---|---|
20210079925 A1 | Mar 2021 | US |