The present embodiments relate to monitoring and/or testing arrangements for controlling production.
Collecting production data is an integral part of production management, production planning, and inventory management. It is important that the data collection is done correctly in order to obtain a true picture of the process. In addition, the data collection should be done in an efficient and faithful manner so that the collected data may be analyzed by production engineers and management to understand what is happening in the production plant with respect to process quality, efficiency, and overall equipment effectiveness (OEE) of machinery. Data collection from a production process may be done either through manual processes and paperwork or through a production/process management software system. The collection method chosen will depend on the size and complexity of the production process being managed, as some software packages may be costly to purchase and implement, with requirements for different sensors, PLCs, relays, computers, and control loop systems.
From U.S. Pat. US628146B1, a device and method for predictively diagnosing the prevailing quality of the technical work result of a technical installation (e.g., the prevailing quality of the welding spots of a spot-welding robot) have become known.
In practice, it may be extremely difficult to determine the prevailing quality of the product of a technical installation (e.g., a production system). In contrast to determining physical quantities using measuring techniques, in many cases, there are no common direct measuring methods available for determining the quality parameters of production results. In some cases, there is success in assembling a highly specialized, complex measuring arrangement that, for example, is based on radiological, electromagnetic, or optical principles or a combination thereof. In many cases, however, the prevailing quality parameters are still to be subjectively determined by experienced operating personnel (e.g., within the framework of a “quality control”). This produces a multitude of disadvantages. First, the determination of quality parameters by experienced operating personnel is neither representative nor reproducible. Rather, assessments of this kind vary even in the short term, depending on the operating personnel employed and the respective daily conditions. Further, operating personnel may generally only carry out evaluations of quality parameters on selected production results of the respective installation by taking random samples. A temporary absence or a change of “experienced operating personnel”, for example, makes it impossible to prevent unreproducible assessment variations in the long term, as well.
Second, exceptional outlay is to be provided to be able to use the quality parameters, gained from the assessments by the operating personnel, along the lines of open-loop or closed-loop control engineering in the form of control variables or adjusted setpoint values for influencing the operational performance of the respective technical installation. In the case of high-speed, possibly fully automatic, production plants, for example, it is almost impossible in practice for the characteristic quality values, gained from random samples, to be made usable sufficiently quickly to influence the operational performance of the technical installation.
From U.S. Pat. Application Publication US 2018/0307203 A1, a machining defect factor estimation device that includes a machine learning device that learns an occurrence factor of a machined-surface defect based on an inspection result on a machined surface of a workpiece has become known. The machine learning device observes the inspection result on the machined surface of the workpiece from an inspection device as a state variable, acquires label data indicating the occurrence factor of the machined-surface defect, and learns the state variable and the label data in a manner such that the state variable and the label data are correlated to each other.
However, an artificial neural network is a black box, and its decision mechanisms are not comprehensible to a human. In addition, training of the artificial neural network requires that an ample amount of data be available.
The scope of the present invention is defined solely by the appended claims and is not affected to any degree by the statements within this summary.
Different aspects are disclosed herein that reduce the technical effort of inspecting products, allow easy scaling of the inspection effort, and are easily transferable between production systems producing different products.
According to a first aspect, a method of operating a production for producing a plurality of products is provided. The method includes inspecting the products according to a first inspection rate. The first inspection rate determines the number of products that are inspected during a period of time and/or from a given set of products. The inspection of one of the products includes testing (e.g., in a first number of testing steps) at at least one inspection station. The method further includes obtaining test data based on the inspection of at least one of the products and setting a threshold for a number of products not fulfilling the testing (e.g., during a specified period of time). The method further includes determining a second inspection rate based on the threshold set and the test data obtained, and inspecting the products according to the second inspection rate. The second inspection rate is, for example, lower than the first inspection rate.
Products are usually inspected on site at the place of production. Further, products are usually inspected at one or more stages of production (e.g., between two successive production steps or at the end of the line (EoL)). Being a key element of quality control, product inspections make it possible to verify product quality at different stages of the production process and prior to dispatch. Inspecting a product before it leaves the production is an effective way of preventing quality problems and supply chain disruptions further down the line.
An inspection may include test criteria such as product function, performance, overall appearance, and dimensions. Whether a product meets the test criteria may be determined in one or more testing steps. As a result of the testing steps or of the inspection of a product, in general, test data may be obtained. Accordingly, a threshold for a number of products may be set. The threshold may correspond to the number of products fulfilling or not fulfilling (e.g., passing or failing) an inspection. In any case, the threshold set allows identifying a (maximum) number of products not fulfilling testing. Now, the period of time for which the inspection according to the first inspection rate is carried out may correspond to one or more production cycles or to the period of time it takes to produce a predetermined number of products.
Inspection according to the second inspection rate may be carried out for a period of time subsequent to the period of time during which the inspection has been carried out according to the first inspection rate. This subsequent period of time may be as long as the first period of time. Alternatively, the subsequent period of time may (e.g., depending on the specific circumstances of production) be shorter or longer than the preceding period of time during which inspection according to the first inspection rate has been carried out.
The second inspection rate may be determined based on the threshold set and/or the test data obtained. For example, the test data may serve for determining a probability distribution of the test variables. The one or more test variables may thus represent the one or more test criteria. Based (e.g., solely) on the probability distribution, the second inspection rate may be determined. For example, the first inspection rate may be lowered (e.g., by 10%) in order to yield a second inspection rate in case the probability distribution predicts fewer products not fulfilling the one or more test criteria. In general, the second inspection rate may be adapted based on (e.g., a property of) the probability distribution or a change of (e.g., a property of) the probability distribution. Additionally or alternatively, the inspection rate may be adapted (e.g., solely) based on the threshold set. For example, if the threshold is lowered (e.g., fewer products not fulfilling the inspection), the number of inspections per time or set of products (e.g., the inspection rate) may be increased, resulting in a second inspection rate.
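Purely as an illustration of the first aspect, the following minimal sketch arranges these acts in Python. All names (inspect, TestResult, second_inspection_rate), the simulated pass/fail outcomes, and the 10% rate-reduction rule are assumptions made for this example rather than features prescribed by the method:

```python
import random
from dataclasses import dataclass

@dataclass
class TestResult:
    passed: bool

def inspect(product) -> TestResult:
    # Stand-in for the one or more testing steps at the inspection station.
    return TestResult(passed=random.random() > 0.01)  # assumed 1% failure rate

def second_inspection_rate(first_rate: float, threshold: int, test_data) -> float:
    """Derive a second inspection rate from the threshold set and the test data."""
    failures = sum(1 for result in test_data if not result.passed)
    # One possible rule (an assumption, not mandated by the text): lower the
    # rate by 10% when the number of failing products stays below the threshold.
    return 0.9 * first_rate if failures < threshold else first_rate

products = range(1000)
first_rate = 0.5  # inspect every second product during the first period
test_data = [inspect(p) for i, p in enumerate(products) if i % round(1 / first_rate) == 0]
print(second_inspection_rate(first_rate, threshold=3, test_data=test_data))
```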
According to a second aspect, an apparatus is provided. The apparatus is operative to perform the acts of the first aspect. In one embodiment, the apparatus includes a processor and a memory that perform the acts of the method according to the first aspect.
According to a third aspect, a production system including one or more apparatuses according to the second aspect is provided.
According to a fourth aspect, a computer program product and/or a non-transitory medium including program code that when executed performs the method acts according to the first aspect is provided.
In the manufacturing industry (e.g., electronics or motor production), production 1 includes a variety of individual production acts. During the production 1, quality-assurance measures provide that the product 2 meets the requirements and may be used error-free. The quality with which products 2 as well as individual components and component groups are tested has already been exhausted to a “maximum”. Production may today already achieve up to 99% FPY (first pass yield) rates. This provides that 99% of the products 2 are error-free. Hence, by product, a component or part of the final product may also be understood. These quality-assurance measures are costly and time-consuming, as the quality-assurance measures require, among other things, usage of personnel as well as tests and testing procedures, which may also need to be developed or further developed. A saving of these non-value-enhancing work steps opens up enormous financial potential. For this reason, solutions have already been developed that reduce such quality-assurance measures in terms of complexity and time required and, in the end, also lead to a significant increase in efficiency in the production 1. An extremely promising approach is the “Closed Loop Analytics” (CLA) offered by SIEMENS®. Therein, an analysis based on machine learning algorithms is provided in order to make reliable statements about the quality of the component without requiring physical testing.
The main starting point is a data set (e.g., process parameters during production, such as temperature, pressure, etc., as well as target values assigned to the input data, such as the distortion of a part in one or more production stations) that allows a statement about the quality of the component. The data set may be obtained from the production 1 in act S1. On the basis of this data set, it is possible to create predictive models M that allow predictions about the product quality to be made purely based on the input data, whereby, for example, algorithms and methods from the field of machine learning are employed. The result of the predictive model may be transmitted to the production 1 in act S2. However, it is precisely this data set that often presents the following challenges:
Insufficient amount of data: Often, no or insufficient process data that allows a meaningful predictive model M to be generated is available. This may be, for example, because not enough process data is recorded. Without sufficient data, however, it is almost impossible to achieve a reliable predictive model M.
Quality of data: In order to create a meaningful predictive model M, not only is a sufficient amount of data to be provided, but the data is also to be of high quality. “Noise” in the data set (e.g., the correlation between the input data (e.g., temperature, pressure) and the output data (e.g., distortion of a metal sheet at a particular manufacturing station) is not always unambiguous) may also give rise to a qualitatively poor model. In addition, for example, for manufacturing processes that already offer a very high quality standard, problems with the predictive model may arise. A predictive model is to also detect “outliers” or production deviations that lead to quality problems. These, however, are usually an exception in the data set and may be insufficiently captured by the predictive model. Especially when reusable prediction models M are developed for customers with the same or similar production processes, this may become a problem, since different users may use different parameter settings.
Non-measurable quantities: Not all process parameters that have a significant impact on component quality are measurable. This may, for example, be the mechanical stress in a deep-drawing component during the pressing process in the tool or the temperature in the center of a cast component. In both cases, a measurement may be almost impossible or at least very difficult, but the quality of a component could be predicted very well if the parameters were known.
Duration of the training phase: Especially if hardly any data is available for a production 1 or the production 1 is being rebuilt, it may take a long time (e.g., up to several months) to gather a data set that allows the creation of a reliable predictive model M. In addition, an already productive system may be adapted, which requires gathering a new data set.
In order to overcome these drawbacks, one or more of the following approaches may be used: data-based quality control, direct measurement (e.g., end-of-line testing as the last step in production), and process simulation.
Data-based quality control: As described in the above, it is possible to carry out a data-based quality check based on process and manufacturing data and a predictive model M. The predictive model M may include different methods that are usually used for machine learning (e.g., decision trees, time series models, neural networks).
Process simulation: Today, there are many different ways to simulate manufacturing processes and material influences digitally. Such process simulations are available in many production areas, such as forming, and are intensively used in process development. There are continuous process simulations (e.g., physical/chemical processes) and discrete process simulations (e.g., material flow simulation), whereby, for the purpose of the quality assessment of components and assemblies, only continuous simulations may be used. These simulation technologies are already so advanced that certain processes are simulated with very high accuracy (e.g., partly below 5% deviation). This makes it possible to develop and optimize processes on the computer and examine all relevant aspects that may have an impact. An example thereof is described in international patent application PCT/EP2019/079624 and, respectively, European patent publication EP3667445, entitled “METHOD AND DEVICE AND METHOD FOR PRODUCING A PRODUCT AND COMPUTER PROGRAM PRODUCT”.
Direct quality control: Special machines and equipment that allow a quality check may be used at at least one inspection station. For example, soldering points may be inspected by X-ray, or, after a deep-drawing process, a distortion measurement takes place via image recordings or laser measurement. Such inspection stations exist in many different forms, with the frequent disadvantage that such inspection stations are very expensive and often slow down production or form a bottleneck in the production.
A production system (e.g., a production line) and a method for operating the same that allow increasing the efficiency of the quality control of products by using a predictive model M based on test data are provided. The method includes one or more of the following acts: 1) a probability P (e.g., probability distribution) for a defective product (e.g., for an upcoming time interval) is determined based on the probability P (e.g., probability distribution) of a test variable 5 (e.g., in an elapsed time interval).
2) Based on the probability distribution P, the savings potential of tests (e.g., a possible reduction of tests, taking the cost of the tests into account) for the upcoming time interval may be determined using the binomial distribution. This calculation may be made taking into account two (e.g., customer-individual) risk parameters: i) a maximum accepted slip k (e.g., the number of defective products that are delivered untested); ii) a confidence level α (e.g., 99%) that describes the probability that no more than k slips occur. Different combinations create different savings potentials. High values for α and low values for k result in low savings potential but reduce the probability of slips. This trade-off (e.g., slips k vs. test reduction) may be set individually per test variable.
3) The calculated savings potential for the upcoming time interval may be returned to production and/or test systems. In other words, the inspection of the products may be continued with an adjusted inspection rate.
4) In the upcoming time interval, various metrics, such as the probability of failing the testing, the number of outliers, and error metrics, may be monitored.
Inspection of the products 2 may take place at the end of the line. The relevant properties and features of the device under test may be selected for the end-of-line test to determine overall quality, perform fast measurements and produce meaningful data, and arrive at a correct pass/fail decision. Testing in production 1 (e.g., between two production steps in a multi-step production) may be performed alternatively or additionally. For example, in the case of motor production, the test variables may be any one or more of the following: a current ripple factor, periodically fluctuating strain (e.g., rotational ripple), adjustment angle (e.g., of rotation with respect to the shaft), line-to-line output voltages and/or corresponding electrical resistances, electromotive force (EMF) and optionally higher harmonics of the electromotive force, braking voltage (e.g., as applied to the motor windings), and temperature measurements (e.g., for validating the measurements of the thermometers integrated into the motor). Additional test variables may be used in the case of testing motors of a motor production. Further, suitable test variables may be used when testing different products, such as printed circuit boards.
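To make the notion of test variables and their tolerances concrete, the following sketch pairs each test variable with a tolerance band; every variable name and numeric limit here is invented for illustration and does not stem from the source:

```python
# Hypothetical tolerance limits per test variable for an end-of-line motor
# test; all names and numeric values are invented for illustration only.
TOLERANCES = {
    "current_ripple_factor":   (0.00, 0.05),
    "rotational_ripple":       (0.00, 0.02),
    "line_to_line_voltage_V":  (398.0, 402.0),
    "emf_3rd_harmonic_V":      (0.0, 1.5),
    "winding_resistance_ohm":  (0.95, 1.05),
}

def passes(measurements: dict) -> bool:
    """A product passes when every measured test variable lies within its limits."""
    return all(lo <= measurements[name] <= hi for name, (lo, hi) in TOLERANCES.items())
```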
In order for a predictive model M to be created and also to have a sufficient forecast quality, a sufficient data set, both qualitatively and quantitatively, is to be provided. Previously, production 1 had to run for a long time (e.g., in some cases, several months) unless a sufficient data set had already been generated in the past, which is more the exception. With the approach disclosed herein, other data sources that complement or replace the real process data are provided.
In this section, the one or more acts, as described in the above, will be described in more detail. First, calculation of the probability distribution of error per test variable is described. Then, predicting possible test effort reduction and monitoring measures will be discussed.
The probability (e.g., distribution) of an error (e.g., a part or product not passing or fulfilling the testing (requirements)) describes how likely it is that the measured test variables of a testing step are outside a threshold set (e.g., one or more tolerances). Hence, the probability distribution P may be part of the predictive model M. For this purpose, the historical data of the one or more testing steps in the previous time interval may be used. The size of the time intervals may be individually adapted (e.g., to certain production cycles). A time interval may correspond to a fraction of a second up to one or more months. The measured test variables from the previous time interval represent a univariate distribution, which may be approximated (e.g., using Kernel Density Estimation (KDE)). In order to provide the best possible estimator for the probability of error, a number of exponentially and time-wise weighted bootstrap samples are created and aggregated into an error probability (e.g., distribution), which may also include a confidence interval. For the purpose of bootstrapping, the exponential weighting allows putting more emphasis on more recent measurements (e.g., of the one or more test variables) at the end of the previous time interval, and less emphasis on the measurements at the beginning of the previous time interval. The aggregated KDEs may then be used together with the upper and lower tolerance limits to calculate the probability of error.
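A minimal sketch of this estimation, assuming NumPy/SciPy are available; the sample sizes, the decay constant of the exponential weighting, and the 90th-percentile aggregation are illustrative choices, not values from the source:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def failure_probability(measurements, lower, upper, n_boot=200, decay=5.0, q=0.90):
    """Estimate P(test variable outside [lower, upper]) for one time interval.

    Exponential time weighting emphasizes the most recent measurements; each
    bootstrap sample is fitted with a KDE, and the probability mass outside
    the tolerance limits is recorded. The q-percentile of the bootstrap
    estimates serves as a conservative error probability."""
    x = np.asarray(measurements, dtype=float)
    t = np.linspace(0.0, 1.0, x.size)        # 0 = oldest, 1 = newest measurement
    w = np.exp(decay * t)
    w /= w.sum()                             # exponential, time-wise weights
    estimates = []
    for _ in range(n_boot):
        sample = rng.choice(x, size=x.size, replace=True, p=w)
        kde = gaussian_kde(sample)
        estimates.append(1.0 - kde.integrate_box_1d(lower, upper))
    return float(np.quantile(estimates, q))

# Hypothetical usage: 500 voltage measurements with tolerances 11.5 V .. 12.5 V.
volts = rng.normal(12.0, 0.15, size=500)
print(failure_probability(volts, 11.5, 12.5))
```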
With the probability of error calculated, as described in the above, for the previous time interval, a number of reduced tests may be determined. A binomial distribution may be used for this determination. The parameters k (e.g., number of maximum slips in the upcoming time interval) and α (e.g., probability of no more than k slips in the next time interval) may be set, for example, individually by a user. After these parameters have been fixed, the binomial distribution may be solved for n (e.g., number of reduced tests). Subsequently, in order to indicate a test reduction in percent, the number n is set in relation to the number of tests in the previous time interval or, if known, set in relation to the subsequent time interval. In order to be able to repeat act 1 reliably again in the time interval after the subsequent time interval, a sufficient number of tests is to be performed during the subsequent time interval (e.g., the one for which the number of tests has already been reduced). In order to achieve this minimum number of tests, the maximum test (e.g., effort) reduction may be set to R_max (e.g., ⅔). In addition to this, at least n_min tests are to be performed in a particular time interval. Hence, as another boundary condition, the minimum number of tests n_min may be set.
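The two boundary conditions may be applied to a candidate reduction as in the following sketch; n_solved is assumed to come from the binomial model (see the solver sketch further below), and the value of n_min is invented for the example:

```python
R_MAX = 2 / 3   # maximum fraction of tests that may be skipped (value from the text)
N_MIN = 30      # minimum number of tests per time interval (hypothetical value)

def bounded_reduction(n_solved: int, n_prev: int) -> int:
    """Clamp the number of skipped tests so that enough tests remain."""
    n_skip = min(n_solved, round(R_MAX * n_prev))  # respect the cap R_max
    n_skip = min(n_skip, max(n_prev - N_MIN, 0))   # keep at least n_min tests
    return n_skip

print(bounded_reduction(n_solved=74, n_prev=90))   # -> 60, capped by R_max and n_min
```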
Acts 1 and 2 as just described may be repeated after each time interval. It is possible to repeat acts 1 and 2 periodically not after each time interval but after a number of time intervals have passed in order to reduce the test efforts. Hence, after having determined the possible test reduction and/or the corresponding second inspection rate, this second inspection rate may be applied to the testing station. In other words, the second inspection rate is used for testing the products.
To monitor test reduction, multiple metrics may be calculated and/or displayed or otherwise be brought to the attention of a user (e.g., an alarm may be raised) during or after a time interval. The most important parameter for monitoring is the derived probability of error. This probability of error may be recalculated (e.g., several times) within the time interval (e.g., in sub-intervals) in order to check whether the predicted test reduction is still valid. If the error probabilities significantly deviate from the one predicted for the time interval, a new test reduction is calculated (e.g., another inspection rate is determined). In addition, one or more of the following metrics may be determined: number of real errors; number of outliers defined by the interquartile range of the distribution from the previous time interval; similarity of the distributions of the current and previous time intervals using the Kolmogorov-Smirnov and/or Anderson-Darling tests; further moments of the distributions in the current time interval (e.g., mean, standard deviation, and/or kurtosis); and process capability index values. If there are too many deviations between the previous and the present time intervals, an alarm may be raised, and the test reduction (e.g., the inspection rate) may be adjusted accordingly.
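A sketch of such interval monitoring, assuming SciPy; the alarm thresholds (p-value bound, outlier count) are illustrative parameters, not values from the source:

```python
import numpy as np
from scipy.stats import ks_2samp, kurtosis

def monitor_interval(prev, curr, alarm_pvalue=0.01, max_outliers=5):
    """Compare a current (sub-)interval against the previous time interval."""
    q1, q3 = np.percentile(prev, [25, 75])
    iqr = q3 - q1                          # interquartile range of the old data
    outliers = int(np.sum((curr < q1 - 1.5 * iqr) | (curr > q3 + 1.5 * iqr)))
    ks = ks_2samp(prev, curr)              # similarity of the two distributions
    metrics = {
        "outliers": outliers,
        "ks_pvalue": float(ks.pvalue),
        "mean": float(np.mean(curr)),
        "std": float(np.std(curr)),
        "kurtosis": float(kurtosis(curr)),
    }
    # Raise an alarm when the distributions deviate too strongly, so the test
    # reduction (e.g., the inspection rate) can be adjusted accordingly.
    metrics["alarm"] = ks.pvalue < alarm_pvalue or outliers > max_outliers
    return metrics
```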
Hence, appropriate data to estimate the failure rate/probability p of a product (e.g., a panel) is to be provided. Further, a time interval or a number of products based on which the test reduction is performed is to be provided. A certainty for having not more than the number of slips k allowed in that time interval is to be provided (e.g., a 99% certainty not to have more than 1 slip). Then, the number of tests that may be skipped in the next time interval (e.g., under the assumption that the failure probability for the production of the product is stable) may be determined. The failure probability P may be monitored during the time intervals in order to intervene and take preventive measures if necessary. The probability for having not more than a predetermined number of slips k may differ between different product groups or types and/or different time intervals.
For the estimation of the failure probability, the following acts may be executed (e.g., iteratively):
1. Collect test results (e.g., measurements) of a previous time interval (e.g., including x days, where x is a parameter that may be set (e.g., to 90)). These test results may then form a data set.
2. Sample n times K data points (e.g., corresponding to the measurements) out of the data set (e.g., using exponential or uniform weighting) to generate bootstrap samples. This is done in order to improve the estimate of the probability (e.g., distribution) P to be determined.
3. If measurements of the test variables 5 are continuous (e.g., voltage, electrical current, etc.), a kernel density estimation (KDE) may be used in order to fit the distribution and compute the area outside of specified limits. Then, the estimates may be aggregated in order to determine a stable and good estimator of the failure probability P. Alternatively, a Bayesian estimator may be computed for the failure probability by regarding the data as the outcome of a Bernoulli experiment. Hence, it is possible to compute the posterior distribution by using Bayes' theorem. Both methods lead to a distribution over P (e.g., either via bootstrap or via Bayes' theorem). A certain percentile (e.g., 90%) will be used as an estimator for the failure probability p. If both a KDE and a Bayes estimator are present, the larger one will be used in the next act.
4. Use the binomial distribution together with predefined certainties α and a maximum number of slips k to solve for n (e.g., the number of skipped tests for the next time interval (resulting in a reduced, second inspection rate)).
5. These acts may be repeated after a lapsed time interval (e.g., on a daily, weekly, monthly basis).
The computed test reduction per iteration may be valid for a defined time frame (e.g., for one month). However, in order to provide that the computed test reduction is valid, the failure probability may be monitored on a shorter time interval basis (e.g., on a daily basis). If the probability estimator increases drastically, the production process may be halted or the inspection rate adapted again. For example, in such a case, testing may be increased again (e.g., all of the products may be subject to inspection and testing).
Now, the estimation of the probability distribution of (not) fulfilling the inspection including one or more testing steps is described in more detail. Two different approaches to estimate the probability of error may be used: KDE and Bayes. Throughout the present disclosure, probability of error and failure probability are used interchangeably.
If the test variables are continuous, the drawn bootstrap samples may be fitted using KDE. Together with the given predetermined limits per test variable (e.g., one or more thresholds that define whether a product passes or fails inspection), a bootstrap distribution may be derived. Then, the median or other percentiles of the bootstrap distribution may be chosen as the probability for failing (or passing) one or more tests of an inspection.
For the Bayesian estimator, the data is treated as the outcome of a Bernoulli experiment (e.g., like a simple coin-toss). The data may be weighted (e.g., either uniformly or exponentially). Afterwards, the probability distribution may be calculated based on the following formula:
p(π|D′) = p(D′|π) p(π)/p(D′),
where π is the failure probability, p(π|D′) is the posterior probability distribution, p(D′|π) is a likelihood function for the Bernoulli experiment, p(D′) is a normalization constant, and p(π) is a prior belief of the probability distribution (e.g., a prior probability distribution to the posterior probability distribution). A conjugate prior (e.g., the Beta distribution) is used. The parameters a and b of the Beta distribution may (e.g., initially) be manually set (e.g., a=1 and b=1 to yield a uniform distribution) or may be derived from the posterior distribution (e.g., by continuously updating the parameters of the Beta distribution). A combination of the KDE estimate and the Bayes estimate may be used. If the data is discrete, the Bayesian estimate may be used. If the data is continuous, KDE estimates for the prior belief of the probability distribution may be used. Alternatively, the maximum value of the probability of error (e.g., based on the KDE or Bayes approach) may be used. In that case, both the KDE and the Bayes approach may be performed.
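With the Beta prior, the posterior is again a Beta distribution, so the update reduces to adding pass/fail counts to the parameters a and b. A minimal sketch; the counts below are hypothetical, and the 90th-percentile read-out mirrors the percentile estimator mentioned above:

```python
from scipy.stats import beta

a0, b0 = 1.0, 1.0             # a = b = 1 yields the uniform prior from the text
failures, passes_ = 3, 4997   # hypothetical (possibly weighted) fail/pass counts

# Conjugate update: the posterior over the failure probability π is
# Beta(a0 + failures, b0 + passes).
posterior = beta(a0 + failures, b0 + passes_)

# Conservative point estimate, e.g., the 90th percentile of the posterior.
pi_hat = posterior.ppf(0.90)
print(f"estimated failure probability: {pi_hat:.5f}")
```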
Alternatively, a Bayesian estimator for the probability p of error allows for continuously adapting and hence improving the probability of error estimate. Further, an innovation factor µ, running between 0 and 1, may be used to weight the previous posterior as part of the new prior: P_prior(n+1) = (1 − µ) P_posterior(n) + µ P_prior, where the base prior P_prior is time independent, and n designates consecutive time intervals of production.
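Because the mixed prior is no longer a Beta density, one simple (assumed) way to evaluate it is numerically on a grid, as sketched below; the innovation factor, the Beta parameters, and the grid range are all illustrative:

```python
import numpy as np
from scipy.stats import beta

mu = 0.1                                    # innovation factor, between 0 and 1
grid = np.linspace(0.0, 0.05, 2001)         # grid restricted to small π values

base_prior = beta(1.0, 1.0).pdf(grid)       # time-independent base prior
posterior_n = beta(4.0, 4998.0).pdf(grid)   # posterior of interval n (hypothetical)

# Mixed prior for interval n + 1 according to the formula above.
prior_next = (1.0 - mu) * posterior_n + mu * base_prior

# Percentile estimator via the numerically integrated (and renormalized) CDF.
cdf = np.cumsum(prior_next)
cdf /= cdf[-1]
pi_hat = grid[np.searchsorted(cdf, 0.90)]
print(f"90th percentile of the mixed prior: {pi_hat:.5f}")
```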
Now, after having obtained (e.g., computed), as described in the above, the estimator π of the failure probability p, the test reduction n may be determined based on the following formula:
P(X ≤ k′) = Σ_{i=0}^{k′} C(n, i) π^i (1 − π)^(n−i) ≥ α,
where X is the number of slips among the n skipped (e.g., untested) products. Here, k′ and α may be pre-defined (e.g., by a user). Alternatively, k′ and α may be set when creating the computer code that is executed when performing the method acts as described in the above. For example, α = 0.99 and k′ = 1 provides that n tests may be skipped while being 99% confident of not having more than k′ slips (e.g., malfunctioning products that have skipped the testing). The formula given in the above may be solved numerically for n (e.g., for the largest n satisfying the inequality). Based on the number of tests n that may be skipped, the inspection rate of products may be changed (e.g., from a first inspection rate (for a first time interval) to a second inspection rate (for a subsequent time interval)). Here, the inspection rate defines the number of products 2 that are tested relative to the total number of products 2 produced.
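Since the left-hand side decreases monotonically in n, the inequality can be solved for the largest admissible n by binary search, for example with SciPy's binomial CDF; the concrete parameter values in the usage line are illustrative:

```python
from scipy.stats import binom

def max_skippable_tests(pi: float, k: int, alpha: float, n_max: int = 10**6) -> int:
    """Largest n with P(X <= k) >= alpha for X ~ Binomial(n, pi).

    binom.cdf(k, n, pi) decreases as n grows, so binary search applies."""
    if binom.cdf(k, 1, pi) < alpha:
        return 0
    lo, hi = 1, n_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if binom.cdf(k, mid, pi) >= alpha:
            lo = mid
        else:
            hi = mid - 1
    return lo

# E.g., with an estimated failure probability of 0.2%, this prints the number
# of tests that may be skipped at 99% confidence of at most one slip.
print(max_skippable_tests(pi=0.002, k=1, alpha=0.99))
```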
The embodiments provide low entry barriers since most customers already have a database for test results. Compared to other closed-loop analytics methods, no data connection other than the one to a database with such test results is necessary. Further advantages include that no dedicated sensors are necessary and that the model is comprehensible to a human since no black box, as in the case of artificial neural networks, is involved. This creates greater confidence in the methods used. Further, continuous monitoring and adaptation of the test reduction are enabled. The duration of the learning phase may be significantly reduced (e.g., by using a bootstrapping algorithm as proposed). Overall, higher-quality prediction models for quality forecasting with possibly very short learning time are provided herewith.
The application possibilities of predictive models, which Siemens already offers today, are expanded. The predictive models used become more reliable, and it may be possible to standardize predictive models for certain processes. The scaling and/or transferability of the models used is significantly simplified, since machine learning algorithms such as artificial neural networks no longer have to be trained.
The elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent. Such new combinations are to be understood as forming a part of the present specification.
While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.
This application is the National Stage of International Application No. PCT/EP2021/057220, filed Mar. 22, 2021, which claims the benefit of European Patent Application No. EP 20179003.7, filed Jun. 9, 2020. The entire contents of these documents are hereby incorporated herein by reference.