The following relates to a method for automatically determining a changed manufacturing process and to an apparatus with a computer system for automatically determining a changed manufacturing process. The method is an automatic conceptual planning process. The apparatus with the computer system is an automatic conceptual planning tool (automatic conceptual planning assistant). Moreover, the following relates to a use of the method and a use of the computer system, respectively.
Changes in products cause changes in production processes (changed manufacturing processes). These changes can also influence the resources for the production process. In manufacturing domains, it is a frequently occurring planning step to adapt existing plants with established processes (historical manufacturing processes with historical manufacturing process data) and resources to new or changed products. Depending on the product and on its manufacturing process, the adaptation to a changed process can require considerable effort.
An aspect relates to limiting the effort required for the adaptation of a manufacturing process of a product.
Further aspects of embodiments of the invention are the provision of a computer system for carrying out the method and a use of the method.
With embodiments of the invention, a method for automatically determining a changed manufacturing process of a product with changed manufacturing process data is provided. The method uses an apparatus with a computer system. With the method, the following steps are carried out:
The method is an automatic conceptual planning process. With the method, a changed conceptual manufacturing process can be defined automatically. By adapting the changed manufacturing process to an existing (historical) manufacturing process, the determination of the changed manufacturing process can easily be carried out.
In addition to the method, an apparatus with a computer system for carrying out the method is provided. The apparatus with the computer system is a conceptual planning tool (conceptual planning assistant). The computer system comprises at least one insighter engine for executing the method, and the insighter engine comprises at least one data providing tool for providing the basic data. With the computer system, an automatic determination of the changed manufacturing process with changed manufacturing process data can be conducted.
Moreover, a use of the method (and hence a use of the computer system) for determining the changed process is provided. The method is applicable to diverse businesses. In an embodiment, the determined process is an industrial process. In an embodiment, a changed process of the automotive industry is determined; in this case, the method is used in the automotive industry.
The basic data form a data basis for the method. The historical data are previous (former) data. For example, the historical manufacturing data are former (e.g., established) manufacturing data. The target data are the data of a planned manufacturing process and/or the data of a planned product.
The basic data comprise the manufacturing process data and/or the product data and/or product specification data and/or product design data. These data reflect historical conceptual plans and/or target conceptual plans. Thus, the basic data can comprise any kind of data. In an embodiment, basic data with three-dimensional basic data are used. The three-dimensional data can refer to the historical product, to the target product, to the historical manufacturing process and/or to the target manufacturing process. With the aid of the three-dimensional data, reality can be reflected, resulting in a realistic changed manufacturing process that is very close to reality.
The classification of the basic data is a kind of typecasting of the basic data. By the classification, the basic data are structured.
With the aid of the graph technology, a Knowledge Graph (KG) is generated. The graph technology comprises a semantic lifting of the basic data with the help of an ontology.
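A minimal sketch of what such a semantic lifting could look like in Python with rdflib is given below, assuming a hypothetical ontology namespace EX with classes Part and Operation and a property isProcessedBy; these names and the example data are illustrative assumptions, not part of the disclosure.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical ontology namespace; class and property names are assumptions.
EX = Namespace("http://example.org/manufacturing#")

kg = Graph()
kg.bind("ex", EX)

# One BoM/BoP fact ("door panel is processed by spot welding") lifted to triples.
part, operation = EX["Part_DoorPanel"], EX["Op_SpotWelding"]
kg.add((part, RDF.type, EX.Part))
kg.add((part, RDFS.label, Literal("door panel")))
kg.add((operation, RDF.type, EX.Operation))
kg.add((part, EX.isProcessedBy, operation))

print(kg.serialize(format="turtle"))  # inspect the lifted fragment of the KG
```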
In an embodiment, at least one of the following bills is used:
Starting from a historical (existing) bill, at least one new bill is generated and/or at least one historical (existing) bill is transformed into a new (changed) bill. For instance, a new BoM with new manufacturing process data is generated which addresses new requirements of a new (planned) manufacturing process.
In an embodiment, for determining the changed manufacturing process, an identification of similarities between the historical manufacturing data and the target manufacturing data and/or an identification of similarities between the historical product data and the target product data is conducted. A similarity check is carried out. Additionally, in order to make the similarity check more efficient, similarities between different historical data can also be identified.
In an embodiment, for the identification of similarities, at least one of the following similarity scoring methods is carried out: Eigenvector Distance (ED), Graph Edit Distance (GED) and Mean Levenshtein Distance of Graph Labels (MLD). The Eigenvector Distance is based on a topological similarity metric. The Graph Edit Distance focuses on structural similarities. With the Mean Levenshtein Distance of Graph Labels, semantic similarities are evaluated by the distance between the labels in the given graphs. In order to improve the similarity check, two or all three similarity scoring methods are used.
In an embodiment, for the identification of the similarities, a change object list of changes between the historical manufacturing process data and the target manufacturing process data and/or between the historical product data and the target product data is generated. By identifying similarities, the method can be performed efficiently and fast.
In an embodiment, an effort list, a risk list and/or a cost list of the changed manufacturing process are generated. For instance, this can be conducted based on the change object list.
In an embodiment, for determining the changed manufacturing process, the following additional steps are conducted:
For instance, based on the effort list, the risk list and/or the cost list, a list of preselected manufacturing processes is generated. In this way, a ranking of alternative manufacturing processes is possible, resulting in a ranked list of alternative manufacturing processes with alternative BoMs and BoPs.
In an embodiment, a machine learning tool (ML) is used. The computer system comprises a machine learning tool. With the aid of the machine learning tool, an automatic (e.g., iterative) approach for determining the changed manufacturing process is possible. This is very efficient in combination with the graph technology.
The advantage of embodiments of the invention can be summarized as follows:
Some of the embodiments will be described in detail, with reference to the following figures, wherein like designations denote like members, wherein:
The method is an automatic conceptual planning process. The apparatus with a computer system used is an automatic conceptual planning tool (automatic conceptual planning assistant). The conceptual planning tool 1000 is equipped with a machine learning tool 1002. The method is applied in the automotive industry.
For the method, the following steps are carried out (
Besides the method, an apparatus with a computer system 1000 for executing the method is described (
For providing the basic data, historical manufacturing process data and product data are used to create a Knowledge Graph. From these data, a large amount of domain knowledge, such as the connections between different manufacturing features and their corresponding processes and resources, is derived.
In addition, these data are lifted semantically with the help of an ontology of the graph technology. After lifting the data from different sources into the Knowledge Graph, the structure of the data and its interconnections could look like in
The Knowledge Graph 140 is used as data storage unit 1001 of historical (old) project data. With the help of this "historical" knowledge about products and their corresponding processes, a prediction of a changed (new) BoP becomes available. This is possible if a new BoM comes into the workflow, as described in
The new BoM contains, for example, a new version (e.g., a facelift) of an already produced car. In such a case, the number of changes in the respective BoM is relatively small in comparison to the total number of parts and manufacturing features. In order to find these changes (e.g., the differences between a "new" BoM1 and a nearly similar "old" BoM0), already existing BoMs and their corresponding BoPs are analyzed. For this analysis, an identification of similarities between the historical manufacturing data and the target manufacturing data and an identification of similarities between the historical product data and the target product data are conducted. A similarity (Delta) check is carried out.
For the identification of the similarities, a change object list 131 of changes between the historical manufacturing process data and the target manufacturing process data and a change object list 132 of changes between the historical product data and the target product data are generated. Hence, a list 133 of differences between BoM1 and BoM0 is generated. All changes and/or differences can be exported as a list together with the new BoM1. The new BoM1 can also include some information about the previous version of the BoM. This can be done when setting up the individual new BoM1.
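As an illustration, a hedged sketch of how such a change object list might be assembled as a diff of two BoMs, assuming each BoM is given as a mapping from part identifiers to attribute dictionaries (this representation and the example data are assumptions, not part of the disclosure):

```python
def change_objects(bom_old: dict, bom_new: dict) -> list:
    """Sketch of a change object list (cf. list 133) as a diff of two BoMs."""
    changes = []
    for part in bom_new.keys() - bom_old.keys():
        changes.append({"type": "new_part", "part": part})
    for part in bom_old.keys() - bom_new.keys():
        changes.append({"type": "removed_part", "part": part})
    for part in bom_new.keys() & bom_old.keys():
        if bom_new[part] != bom_old[part]:
            changes.append({"type": "changed_part", "part": part,
                            "old": bom_old[part], "new": bom_new[part]})
    return changes

# Example: BoM0 of the previous car version vs. BoM1 of the facelift.
bom0 = {"P1": {"name": "door panel", "material": "steel"}}
bom1 = {"P1": {"name": "door panel", "material": "aluminium"},
        "P2": {"name": "camera bracket", "material": "plastic"}}
print(change_objects(bom0, bom1))
```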
In an alternative embodiment, the changes between two BoMs can be identified after a number of new BoMx have been set up, by comparison as follows:
With the change information (every change is handled as a Change Object (CO) with specific attributes), an adaptation of the "old" BoP0 can be performed so that it fits the new BoM1. For every single change, a rough estimate of effort, risk and cost can be created. For this, one needs to know which effect every single change produces in the BoP. In this way, an effort list, a risk list and/or a cost list of the changed manufacturing process are generated.
To reduce the solution space of effects on the BoP, one can create types of typical changes in the respective domain. Again, the information is structured. The raw data based on the BoPs are classified. In the Body-in-White domain (in the automobile manufacturing area, the BiW domain is the stage in which a car body's frame has been joined together), only a small number of meaningful changes to a BoM can occur. These lists can be created together with domain experts.
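Building on the change object sketch above, a hedged illustration of how such change types could be mapped to rough effort, risk and cost estimates; both the type names and the estimate values are purely illustrative placeholders that would, as stated, be defined together with domain experts.

```python
# Illustrative mapping of change types to rough effort/risk/cost estimates.
ESTIMATES = {
    "new_part":     {"effort": "high",   "risk": "high",   "cost": "high"},
    "removed_part": {"effort": "low",    "risk": "medium", "cost": "low"},
    "changed_part": {"effort": "medium", "risk": "medium", "cost": "medium"},
}

def estimate(change_object_list: list) -> list:
    """Annotate each change object with its rough effort, risk and cost."""
    return [{**co, **ESTIMATES.get(co["type"], {})} for co in change_object_list]
```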
A list of possible change types is shown in
As an example, one could pick the introduction of a new additional part in the new BoM1. This is the most difficult case of the Change Objects, because the new part is not yet connected to an existing process. In this case, unlike in the other cases, it is difficult to say which process is affected by the change. This problem is again solved with the KG. The historical information in the KG is useful for finding a similar part in the old projects. This part is connected to some processes. This can be used as a possible solution: if these processes are also in BoP0, the new part can be connected to these processes. If not, one needs to check whether there are similar processes, or create a new process to handle the new part.
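A hedged sketch of how such a lookup of the processes connected to a similar part could be expressed as a SPARQL query against the KG with rdflib; the file name, the EX namespace and the property names (rdfs:label, ex:isProcessedBy) are assumptions carried over from the lifting sketch above.

```python
from rdflib import Graph

kg = Graph().parse("knowledge_graph.ttl", format="turtle")  # hypothetical KG export

# Retrieve the operations connected to a part similar to the new one
# (here matched naively by a label substring).
query = """
PREFIX ex: <http://example.org/manufacturing#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?operation WHERE {
    ?part a ex:Part ;
          rdfs:label ?label ;
          ex:isProcessedBy ?operation .
    FILTER (CONTAINS(LCASE(STR(?label)), "door panel"))
}
"""
for row in kg.query(query):
    print(row.operation)  # candidate processes for connecting the new part
```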
Another solution works via new or changed features. These features are already connected to some parts and processes. In these cases, it is easier to find the affected processes and to adapt them according to the Change Objects.
Given a BoM1, all the existing BoMi in the KG are ranked with respect to an aggregated similarity metric devised so that BoMx, BoMx+1, BoMx+2, . . . , BoMy are ranked from the most similar BoMx to the least similar BoMy with respect to BoM1. The goal is to obtain the list of BoMs similar to BoM1. Thus, the generation of a preselected list is carried out. To achieve this preselected list of ranked BoMs, results from three different similarity scoring methods are aggregated for each pair of BoM1 and BoMi: a topological similarity metric called Eigenvector Distance (ED), a structural similarity metric called Graph Edit Distance (GED), and a semantic similarity score that evaluates the distance between the labels in the given graphs, named Mean Levenshtein Distance of Graph Labels (MLD). The similarity check is depicted with reference 51 in
For computing the ED value between two BoMs, the Laplacian eigenvalues of the adjacency matrices of the graph representations of the two BoMs in the KG are calculated. For each graph, the smallest k is found such that the sum of the k largest eigenvalues constitutes at least 90% of the sum of all the eigenvalues. If the values of k differ between the two graphs, the smaller one is used. The similarity metric is then the sum of the squared differences between the k largest eigenvalues of the two graphs. The ED values of two BoMs are in the range [0, ∞), where values closer to zero indicate greater similarity.
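A minimal sketch of this ED computation with networkx and numpy, assuming the two BoM graph representations are available as networkx graphs; the function name and the 90% threshold parameter follow the description above, everything else is illustrative.

```python
import networkx as nx
import numpy as np

def eigenvector_distance(g1: nx.Graph, g2: nx.Graph, energy: float = 0.90) -> float:
    """ED sketch: compare the k largest Laplacian eigenvalues that cover
    at least 90% of the eigenvalue sum of each BoM graph."""
    def top_eigenvalues(graph):
        eigvals = np.sort(nx.laplacian_spectrum(graph))[::-1]  # descending order
        cumulative = np.cumsum(eigvals)
        # smallest k such that the k largest eigenvalues reach 90% of the total
        k = int(np.searchsorted(cumulative, energy * cumulative[-1])) + 1
        return eigvals, k

    e1, k1 = top_eigenvalues(g1)
    e2, k2 = top_eigenvalues(g2)
    k = min(k1, k2)  # if the k values differ, the smaller one is used
    return float(np.sum((e1[:k] - e2[:k]) ** 2))
```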
GED is a scalar measure that identifies the minimum number of operations required to transform the graph representation of BoM1 into a graph representation of BoMi. The set of elementary graph edit operators typically includes vertex and edge insertions, deletions, and substitutions.
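A hedged one-call sketch using the graph edit distance implementation shipped with networkx; exact GED is exponential in the worst case, so an approximation such as nx.optimize_graph_edit_distance would typically be preferred for large BoM graphs.

```python
import networkx as nx

def ged(g1: nx.Graph, g2: nx.Graph) -> float:
    """Minimum number of vertex/edge insertions, deletions and substitutions
    needed to transform g1 into g2 (exact, hence slow for large graphs)."""
    return nx.graph_edit_distance(g1, g2)
```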
The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. MLD is the total minimum number of single-character edits between all the labels in the graph representation of BoM1 and those in the graph representation of BoMi.
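A short sketch of the Levenshtein distance and one plausible way of aggregating it over the graph labels; how the labels of the two graphs are paired (here: sorted order) is an assumption, since the description only states that the edits are counted over all labels.

```python
from itertools import zip_longest

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution (or match)
        prev = curr
    return prev[-1]

def mean_label_distance(labels1, labels2) -> float:
    """MLD sketch: pair the sorted graph labels and average their distances."""
    pairs = list(zip_longest(sorted(labels1), sorted(labels2), fillvalue=""))
    return sum(levenshtein(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
```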
Finally, these three similarity values between BoM1 and each of the existing BoMi in the KG are aggregated and normalized to create the list of similar BoMs.
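A hedged sketch of this aggregation and normalization step, assuming the three distances are min-max normalized per metric across all candidate BoMs and summed with equal weights; the normalization scheme and the weighting are assumptions.

```python
import numpy as np

def rank_boms(distances: dict) -> list:
    """Aggregation sketch: normalize each metric (ED, GED, MLD) across all
    candidate BoMs, sum the scores and rank the most similar BoM first."""
    names = list(distances)                          # e.g. {"BoM2": (ed, ged, mld), ...}
    matrix = np.array([distances[n] for n in names], dtype=float)
    mins, maxs = matrix.min(axis=0), matrix.max(axis=0)
    spans = np.where(maxs > mins, maxs - mins, 1.0)  # avoid division by zero
    aggregated = ((matrix - mins) / spans).sum(axis=1)
    return sorted(zip(names, aggregated), key=lambda item: item[1])
```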
In the next step, an adaptation of a BoP given the Change Objects and the Domain Constraints is carried out.
After obtaining the list of similar BoMs, the BoP of the most similar BoM is selected from the list and duplicated as *BoP1. The task "adaptation of *BoP1 to BoP1" is represented by the references 52 and 53 (
The features from the change objects are used to derive requirements regarding the first step of the adaptation necessary to derive BoP1 from *BoP1. Such requirements make it possible to query the KG for the BoP fragments (a series of operations) needed to process the new parts in the change objects. These BoP fragments are intelligently integrated into *BoP1 where necessary. The following notes apply: it is possible that some requirements are already satisfied in *BoP1. In addition, when a complete adaptation cannot be applied to *BoP1, a new *BoP1 is selected from the preselected list of BoMs and hence from the preselected list of BoPs. Thus, to approach the changed manufacturing process, the adaptation can be done in an iterative way.
In order to integrate the process fragments into *BoP1, the existing precedence constraints of operations in all BoPs in the KG are reviewed. This automated process is successful when:
Finally, *BoP1 is tested against the domain constraints for validation and repair. These domain constraints represent the typical violations in BoPs, and they are collected with the help of a domain expert. An example constraint c1 would be “load operation must not be followed by an unload operation”.
Such constraints are represented by a state-of-the-art language called SHACL (Shapes Constraint Language) for describing and validating RDF (Resource Description Framework) graphs. It can be used to define classes together with constraints on their properties. The language provides several built-in types of constraints such as cardinality (minCount/maxCount), value type and allowed values, but it is also possible to define more complex kinds of constraints for almost arbitrary validation conditions (SHACL was accepted as a W3C (World Wide Web Consortium, international standards organization) recommendation in July 2017). To perform a validation test, the validation engine must be given the graph representation of *BoP1 together with the graph representing the constraints.
The validation engine returns the fragments of *BoP1 which do not satisfy the constraints. As a last step, an attempt is made to automatically repair these fragments using the knowledge in the constraint. For example, given c1, if there is an operation sequence where a load operation is followed by an unload operation in BoP1, the unload operation is removed, and the resultant *BoP1 is validated once again to check for side effects of this modification. If this process ends successfully with no violated constraints, the resultant BoP1 is created (cf.
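A minimal validation sketch with the pyshacl library, encoding the example constraint c1 as a SHACL shape; the class and property names (ex:LoadOperation, ex:UnloadOperation, ex:nextOperation) and the file name for *BoP1 are illustrative assumptions, not part of the disclosure.

```python
from pyshacl import validate
from rdflib import Graph

# Constraint c1 as a SHACL node shape: a load operation must not have an
# unload operation as its next operation.
shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/manufacturing#> .

ex:LoadNotFollowedByUnloadShape a sh:NodeShape ;
    sh:targetClass ex:LoadOperation ;
    sh:property [
        sh:path ex:nextOperation ;
        sh:not [ sh:class ex:UnloadOperation ] ;
    ] .
"""

bop_graph = Graph().parse("star_bop1.ttl", format="turtle")   # *BoP1 (hypothetical file)
shapes_graph = Graph().parse(data=shapes_ttl, format="turtle")

conforms, _report_graph, report_text = validate(bop_graph, shacl_graph=shapes_graph)
print(conforms)     # False if some load operation is directly followed by an unload
print(report_text)  # points to the fragments of *BoP1 violating the constraints
```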
Although the present invention has been disclosed in the form of embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.
This application claims priority to PCT Application No. PCT/EP2020/072924, having a filing date of Aug. 14, 2020, the entire contents of which are hereby incorporated by reference.