The invention relates to a computer-implemented method for orchestrating configurations of simulations based on digital twin models of modular plants according to the preamble of claim 1 and to an orchestration system for orchestrating configurations of simulations based on digital twin models of modular plants according to the preamble of claim 3.
Plants of various technical domains—such as the industrial domain (e.g., the metal or chemical processing industry), the medical domain, the energy domain, the telecommunication domain, etc.—are becoming more and more modular, and plant owners delegate their tasks to machine suppliers, who deliver modular machines with all automation and analytics functionalities embedded.
For a plant owner there are vendor-agnostic solutions for the integration of the automation—such as MTP, PackML, and many others—into an orchestration layer, but for distributed analytics and simulation on edge devices of the machines there are no existing solutions. The present patent application addresses this issue.
Regarding the distributed analytics and simulation on edge devices there are some specific problems:
- Distributed simulation is currently not supported on an edge infrastructure with the capability to access field data.
- Integration of such a distributed edge application into automation is time consuming:
Often, according to a specified “Open Platform Communications Unified Architecture <OPC UA>”, tags have to be enabled, connected, and validated tag by tag with cumbersome orchestration, and synchronization routines have to be developed.
- There are too many dependencies on hardware, software, and operating systems as well as on the simulation and the simulation orchestration.
- Complex deployment of distributed simulations among many edge devices due to synchronization.
But even if such applications were available today, many of the needed functionalities would have been developed specifically for a single use case and would thus be neither scalable nor widely replicable. Furthermore, such applications rely on point-to-point communication and synchronization with a tight vendor-software dependency.
It is an objective of the invention to propose a computer-implemented method and an orchestration system to orchestrate configurations of simulations based on digital twin models of modular plants, by which in the course of the orchestration the configurations of the model-based simulations are generated and deployed automatically.
This objective is solved with regard to a computer-implemented method defined in the preamble of claim 1 by the features in the characterizing part of claim 1.
Moreover, the objective is solved by a computer-implemented tool carrying out the computer-implemented method according to claim 2.
The objective is further solved with regard to an orchestration simulator defined in the preamble of claim 3 by the features in the characterizing part of claim 3.
The main idea of the invention according to the claims 1, 2 and 3—in order to orchestrate configurations of simulations based on digital twin models of modular plants and a logical “System Structure and Parameterization <SSP>”-functionality including a “Functional Mock-up Interface <FMI>”-functionality combined with a “Functional Mock-up Unit <FMU>”-functionality—is (i) generating, for distributed operational-technology-applications of the modular plants, wherein the operational-technology-applications are edge devices of field devices, simulated model components including assignment rules for automation data captured due to automation of the plants and assigned to the distributed operational-technology-applications, (ii) deploying, in the course of orchestrating the configurations of the simulations, the simulated model components on the distributed operational-technology-applications by using the SSP-functionality with “Functional Mock-up Unit <FMU>”-functionalities as part of the FMI-functionality for the distributed operational-technology-applications and implementing for the distributed operational-technology-applications and as part of the FMI-functionality in a server-client-manner “Remote Procedure Call <RPC>”-technology based Proxy-FMU-functionalities with
- “Proxy-FMU”-entities embedded in the SSP-functionality and
- corresponding “Remote-Controlled-FMU”-entities implemented on the distributed operational-technology-applications and (iii) deploying the automation data on the distributed operational-technology-applications according to the assignment rules for the automation data and as a result of the deployment of the simulated model components.
SUMMARY OF USED FEATURES
- Runtime for execution of distributed digital twins on an edge infrastructure
- Abstraction layer for distributed simulations and analytics
- Incorporation of field devices and field data into the abstraction layer
- Combination of a co-simulation environment with “operational technology <OT>”-aspects on an OT-infrastructure and automation system.
- Mocking of standard “Functional Mock-up Interface <FMI>” combined with a “Functional Mock-up Unit <FMU>” (cf. https://fmi-standard.org) access even for remote simulation components
- Automatic generation and deployment of simulation components in the OT-infrastructure
The combination of features, tools, artifacts, and elements of the Digital Enterprise is leveraged by using a distributed abstraction layer for a digital twin runtime environment.
This application enables the OT-Edge to interconnect standardized simulation objects deployable on an edge device and to coordinate them through an intelligent, vendor-agnostic orchestrating master algorithm from any vendor (cf. FIGS. 1, 3 and 4).
These figures show the idea of how to implement and handle FMU calls to an FMU that resides on a remote machine.
For the following explanation of the idea, let us assume we have a simulation in the form of an FMU called “myFmu”.
Parameterized Proxy FMU on the Server
The idea is to have a generic FMU that acts as a proxy to a remote one. As this proxy is also an FMU, it needs to implement all functions specified in the FMI standard. It is totally generic though in that it does not implement any actual simulation in these functions but merely RPC-calls to a remote FMU. You can think of this proxy therefore as a remote controller for any FMU.
What makes an instance of such a proxy FMU specific are essentially two things:
- an actual simulation FMU that is to be remote-controlled, and
- a URI specifying how to reach the remote-controlled FMU.
So, given “myFmu” and the empty, generic proxy FMU “proxyFmu”, a specific proxy FMU instance for “myFmu” can be generated as follows:
- 1. Create appropriate modelDescription.xml: The idea is that the generated proxy FMU has exactly the same input and output information as “myFmu”. This is very easy to achieve by simply copying the exact same “modelDescription.xml”-file from the original FMU. No further modifications to this file are necessary.
- 2. Create “proxyMyFmu”: An FMU is just a zip archive, so we can create a new one based on “proxyFmu” which has the previously generated “modelDescription.xml”-file added to it. Additionally, any file needed can be added to the ZIP-archive. In particular, this is made use of to add a configuration file to the proxy FMU's resource folder that contains information such as where the remote-controlled FMU resides (i.e., its remote host name) as well as the port where the corresponding FMU remote controller listens.
At this point, there is a valid FMU called “proxyMyFmu” that can be used like any other to orchestrate simulations in any FMU-based simulation tool/library.
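The two generation steps described above can be sketched in a few lines. The following is a minimal Python sketch, assuming hypothetical file names and a simple key=value format for the configuration file in the resource folder; it is not a definitive implementation:

```python
import shutil
import zipfile


def generate_proxy_fmu(original_fmu, generic_proxy_fmu, output_fmu, host, port):
    """Create a proxy FMU for 'original_fmu' from the empty, generic proxy FMU."""
    # Start from a copy of the generic proxy FMU (which contains no
    # modelDescription.xml of its own).
    shutil.copyfile(generic_proxy_fmu, output_fmu)

    # Step 1: take the unmodified modelDescription.xml from the original FMU,
    # so the proxy exposes exactly the same inputs and outputs.
    with zipfile.ZipFile(original_fmu) as original:
        model_description = original.read("modelDescription.xml")

    # Step 2: add the modelDescription.xml and a configuration file with the
    # remote host name and port to the proxy's ZIP-archive.
    with zipfile.ZipFile(output_fmu, "a") as proxy:
        proxy.writestr("modelDescription.xml", model_description)
        proxy.writestr("resources/remote.cfg", f"host={host}\nport={port}\n")
```

Since an FMU is just a ZIP-archive, standard archive handling suffices; no FMU-specific tooling is required for this generation step.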
The implementation of “proxyFmu” can be totally generic because all it does is essentially this:
- 1. It acts as an RPC-client, and upon instantiation connects to a remote RPC-server of a well-known “Remote Procedure Call <RPC>”-technology (cf. https://en.wikipedia.org/wiki/Remote_procedure_call in the version of Jul. 23, 2021) given by a “Uniform Resource Identifier <URI>”. As described before, this URI comes from a dedicated file containing it.
- 2. It implements all functions specified in the FMI standard, but instead of actually doing anything in these function implementations, it simply serializes the function arguments and does corresponding RPC-calls to its RPC-server.
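A minimal sketch of such a generic proxy implementation follows, using Python's standard “xmlrpc”-library as a stand-in for the concrete RPC-technology; the class name, the configuration-file format, and the selection of FMI functions shown are illustrative assumptions:

```python
import xmlrpc.client


def read_remote_config(path):
    """Parse the simple key=value configuration file from the resource folder."""
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)


class ProxyFmu:
    """Generic proxy FMU: forwards every FMI call to a remote RPC server."""

    def __init__(self, resource_path):
        # The URI of the remote controller comes from the dedicated
        # configuration file inside the FMU's resource folder.
        cfg = read_remote_config(f"{resource_path}/remote.cfg")
        self._remote = xmlrpc.client.ServerProxy(f"http://{cfg['host']}:{cfg['port']}")

    # Every FMI function is implemented the same way: serialize the
    # arguments and issue the corresponding remote call.
    def do_step(self, current_time, step_size):
        return self._remote.do_step(current_time, step_size)

    def set_real(self, value_refs, values):
        return self._remote.set_real(list(value_refs), list(values))

    def get_real(self, value_refs):
        return self._remote.get_real(list(value_refs))
```

The same forwarding pattern would be repeated for the remaining functions of the FMI standard.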
Remote-Controlled FMU on a Device (Client)
The client side consists of a generic RPC-server part whose role is to accept incoming RPC-calls from a proxy FMU like “proxyMyFmu”. What it does then is this:
- 1. Deserialize the function arguments.
- 2. Call the corresponding function of the actual FMU (which is just a library implementing the functions specified in the FMI standard). Note that it needs to know what the actual FMU is such that it can extract this library and dynamically load it at runtime. This information must therefore ideally come from the deployment step and can, e.g., be given as an argument when starting a corresponding “Docker”-container (cf. https://en.wikipedia.org/wiki/Docker_(software) in the version of Jul. 7, 2021).
- 3. The result of the function call is then finally sent back as the response to the RPC-request.
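The steps 1 to 3 above can be sketched as follows; a minimal Python sketch, again using the standard “xmlrpc”-library as a stand-in for the RPC-technology, in which the actual FMU library is represented by a plain object with FMI-like functions (in practice it would be the shared library extracted from the FMU and loaded dynamically at runtime):

```python
from xmlrpc.server import SimpleXMLRPCServer


class FmuRemoteController:
    """Generic RPC server that forwards incoming calls to the actual FMU."""

    def __init__(self, fmu, host="0.0.0.0", port=9000):
        self._fmu = fmu  # object implementing the FMI functions
        self._server = SimpleXMLRPCServer((host, port), allow_none=True,
                                          logRequests=False)
        # Register each FMI function; the xmlrpc layer deserializes the
        # incoming arguments (step 1) and serializes the return value back
        # as the RPC response (step 3) automatically. The registered
        # functions themselves perform step 2.
        for name in ("do_step", "set_real", "get_real"):
            self._server.register_function(getattr(fmu, name), name)

    def serve_forever(self):
        self._server.serve_forever()
```

An actual deployment would wrap this service, e.g., in a container whose start argument names the FMU library to load.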
- 4. Each FMU has a defined set of inputs and outputs, each given in its “modelDescription.xml”-file by a name and a value type (e.g., integer, floating point number, Boolean, string). Orchestrating a distributed simulation that consists of individual such FMUs (no matter where or on which edge device each individual one is being executed), involves connecting such inputs and outputs. Typically, the cited modular plants involve data coming from field devices that is made available by a field bus, and such data should be part of relevant simulations in a natural way, in other words, it will enter simulation models as inputs. As a consequence, there are typically not only FMU inputs that stem from other FMUs' outputs, but they rather come in directly as (signal-) data from a field device. This is something that needs to be configured as well when the whole distributed simulation is being orchestrated. Precisely, not only do FMU inputs and outputs need to be mapped, but also automation data names to their corresponding FMU inputs. While the first kind of mapping, i.e., the input/output mapping, can be elegantly described by a standard called “System Structure & Parameterization <SSP>” (cf. https://ssp-standard.org), there is no such standard for the mapping of automation data. What we propose here is the following:
- During the deployment process, the data mapping is generated and put inside of the remote-controlled FMU. Information within the SSP-topology file can define which field signals are expected by the FMUs. This can automate the process in many cases. The mapping can again be a simple configuration file, e.g., in “YAML”-format (cf. https://en.wikipedia.org/wiki/YAML in the version of Jun. 23, 2021), put inside of the ZIP-archive.
- The remote-control service reads this signal mapping, subscribes to the configured data names, and passes values on to the actual FMU function calls when needed. If new values are memorized as soon as they come in, it is guaranteed that always the latest values are passed on to the FMU functions, and the simple subscription model suffices.
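The signal mapping and the described subscription model can be sketched as follows; the mapping-file contents and the subscription callback interface are illustrative assumptions, not part of any standard:

```python
# Hypothetical contents of the signal-mapping configuration file inside the
# remote-controlled FMU's ZIP-archive (YAML), mapping automation data names
# to FMU input names:
#
#   signals:
#     "Plant1/Temperature": fmuInputTemperature
#     "Plant1/Pressure":    fmuInputPressure


class SignalCache:
    """Memorizes the latest value of each subscribed automation data name."""

    def __init__(self, mapping):
        self._mapping = mapping  # automation data name -> FMU input name
        self._latest = {}        # FMU input name -> latest received value

    def on_new_value(self, data_name, value):
        # Subscription callback: memorize the value as soon as it comes in.
        if data_name in self._mapping:
            self._latest[self._mapping[data_name]] = value

    def latest_inputs(self):
        # Always pass on the latest values to the FMU function calls.
        return dict(self._latest)
```

With such a cache, the remote-control service can feed the newest field data into the FMU inputs at each simulation step without any request/response round trips to the field bus.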
Just like the server part, the service described here (let us give it the name “fmuRemoteController”) is again totally generic. An actual instance of this service only needs to know where the FMU library is such that it can call its functions (cf. FIG. 3, which shows a sequence of RPC-calls).
The explained orchestrating system allows a distributed operational technology digital twin runtime with access to field data of modular plants to be deployed on edge devices.
Known applications have deep dependencies, require many manual extensions, and are difficult to adapt for different applications.
The proposed idea abstracts all these aspects and enables simulation and analytic model suppliers to deploy their OT-systems as part of an Edge Ecosystem reducing their dependency footprint.
- 1. The actual RPC-technology can easily be exchanged. The only parts that need to be adapted are “proxyFmu” and “fmuRemoteController”.
- 2. Instances like “proxyMyFmu” are just normal FMUs. They behave in no way differently from a simulation tool's point of view. They can be orchestrated and run like any other, and the remote calling parts are totally hidden in its internals.
- 3. Any simulation tool can be used, or also a library like “Amesim”, “Simulink”, “fmpy”, or “libcosim”.
- 4. Remote FMI/FMUs are automatically generated.
- 5. FMI/FMUs are automatically deployed on the corresponding OT-system (depending on the field variables).
- 6. The SSP-standard is used to connect OT-signals.
Further advantageous developments of the invention are given by the dependent claims.
Moreover, additional advantageous developments of the invention arise out of the following description of a preferred embodiment of the invention according to FIGS. 1 to 4. They show:
FIG. 1 a scenario for orchestrating configurations of model-based simulations of a modular plant,
FIG. 2 a principle presentation of a computer-implemented-tool, in particular a Computer-Program-Product, e.g., designed as an APP,
FIG. 3 an exemplary message-flow-diagram for deploying the simulated model components by implementing in a server-client-manner “Remote Procedure Call <RPC>”-technology based Proxy-FMU-functionalities,
FIG. 4 an exemplary orchestration flow between involved units of the “SERVER-CLIENT”-based scenario depicted in the FIG. 1.
FIG. 1 shows a server-client-based scenario for orchestrating configurations of model-based simulations of a plant with a modular structure Pm. The plant with the modular structure Pm includes, for example as depicted, three plant modules, a first plant module PM1, a second plant module PM2 and a third plant module PM3, which can be regarded as modular plants. Each modular plant PM1, PM2, PM3 always has the same plant structure.
So, to the first modular plant PM1 belongs a first control system of plant module CS1, which is connected or assigned on one side to a first digital twin model DTM1 and on the other side to automated machines or processes, depicted in FIG. 1 by robots. To the first control system of plant module CS1 belongs—in general and well-known—a first “Programmable Logic Controller <PLC>”-unit PLCU1, which is connected with a first field device FD1 including a first edge device ED1. The first edge device ED1 is, in the context of the preferred embodiment of the invention, a distributed operational-technology-application OTAd, for which, according to the objective of the invention, the configurations of the model-based simulations are generated and deployed automatically in the course of the orchestration.
The second modular plant PM2 and the third modular plant PM3 have the same structure.
So, to the second modular plant PM2 belongs a second control system of plant module CS2, which is connected or assigned on one side to a second digital twin model DTM2 and on the other side again to automated machines or processes, depicted in FIG. 1 by the robots. To the second control system of plant module CS2 belongs again a second “Programmable Logic Controller <PLC>”-unit PLCU2, which is connected with a second field device FD2 including a second edge device ED2. Also, the second edge device ED2 is, in the context of the preferred embodiment of the invention, a distributed operational-technology-application OTAd, for which, according to the objective of the invention, the configurations of the model-based simulations are generated and deployed automatically in the course of the orchestration.
To the third modular plant PM3 consequently belongs a third control system of plant module CS3, which is connected or assigned on one side to a third digital twin model DTM3 and on the other side again to automated machines or processes, depicted in FIG. 1 by the robots. To the third control system of plant module CS3 belongs a third “Programmable Logic Controller <PLC>”-unit PLCU3, which is connected with a third field device FD3 including a third edge device ED3. Also, the third edge device ED3 is, in the context of the preferred embodiment of the invention, again a distributed operational-technology-application OTAd, for which, according to the objective of the invention, the configurations of the model-based simulations are generated and deployed automatically in the course of the orchestration.
According to FIG. 1, the orchestration is carried out by an orchestration system OS, which is, for instance in a preferred and simplest design, a simulator expanded to include SSP- and FMI/FMU-functionality to orchestrate configurations of simulations based on the digital twin models DTM1, DTM2, DTM3 of the modular plants PM1, PM2, PM3, and which includes a server-controller SCRT to configure the model-based simulations.
According to the depicted “SERVER-CLIENT”-based scenario, the server-controller SCRT of the orchestration system OS is the “SERVER” and the edge devices ED1, ED2, ED3 of the modular plants PM1, PM2, PM3 are the “CLIENTS”.
In the course of the orchestration, the server-controller SCRT includes as part of a deployment tool DT a logical “System Structure and Parameterization <SSP>”-tool SSP-T, which is well-known (cf. https://ssp-standard.org) and describes in a logical way how model components for the simulations are connected and composed for their deployment into composite components and how model parameterization data are stored and exchanged between each model component and the composite component. The logical “System Structure and Parameterization <SSP>”-tool SSP-T consists of or includes a “Functional Mock-up Interface <FMI>”-tool FMI-T (cf. https://fmi-standard.org) for sharing the simulations via a “ZIP-archive”-based “Functional Mock-up Unit <FMU>”-tool FMU-T packing XML-files and compiled C-code.
Moreover, in the course of the orchestration, the server-controller SCRT includes a generation unit GU, which generates grt, based on the digital twin models DTM1, DTM2, DTM3 of the modular plants PM1, PM2, PM3 and for the distributed operational-technology-applications OTAd respectively the edge devices ED1, ED2, ED3 of the modular plants PM1, PM2, PM3, simulated model components, (i) at least one first simulated model component MCs,1 for the first edge device ED1, (ii) at least one second simulated model component MCs,2 for the second edge device ED2 and (iii) at least one third simulated model component MCs,3 for the third edge device ED3, each captured due to the automation of the modular plants PM1, PM2, PM3.
For this purpose, the first simulated model component MCs,1 includes first assignment rules AR1 for first automation data AD1 assigned to the first edge device ED1, the second simulated model component MCs,2 includes second assignment rules AR2 for second automation data AD2 assigned to the second edge device ED2, and the third simulated model component MCs,3 includes third assignment rules AR3 for third automation data AD3 assigned to the third edge device ED3.
This generated information MCs,1, MCs,2, MCs,3, AR1, AR2, AR3, AD1, AD2, AD3 is transferred within the server-controller SCRT to the deployment tool DT for deploying the information on the distributed operational-technology-applications OTAd respectively the edge devices ED1, ED2, ED3 of the modular plants PM1, PM2, PM3.
For this purpose, the deployment tool DT of the server-controller SCRT deploys dpl the simulated model components MCs,1, MCs,2, MCs,3 on the distributed operational-technology-applications OTAd respectively the edge devices ED1, ED2, ED3 by
- using the SSP-functionality SSP-T with “Functional Mock-up Unit <FMU>”-functionalities as part of the FMI-functionality FMI-T, (i) a first “Functional Mock-up Unit <FMU>”-functionality FMU-T1 for the first edge device ED1, (ii) a second “Functional Mock-up Unit <FMU>”-functionality FMU-T2 for the second edge device ED2 and (iii) a third “Functional Mock-up Unit <FMU>”-functionality FMU-T3 for the third edge device ED3, and
- implementing for the distributed operational-technology-applications OTAd respectively the edge devices ED1, ED2, ED3 and as part of the FMI-functionality FMI-T in a server-client-manner and based on a well-known “Remote Procedure Call <RPC>”-technology (cf. https://en.wikipedia.org/wiki/Remote_procedure_call in the version of Jul. 23, 2021) Proxy-FMU-functionalities, (i) a first Proxy-FMU-functionality PFMU-T1 with a first “Proxy-FMU”-entity PFMU-E1 embedded in the SSP-functionality SSP-T of the server-controller SCRT and a corresponding first “Remote-Controlled-FMU”-entity RCFMU-E1 implemented in a first client-controller CCRT1 on the first edge device ED1, which includes the first automation data AD1 assigned to the first edge device ED1 and to be deployed thereon, (ii) a second Proxy-FMU-functionality PFMU-T2 with a second “Proxy-FMU”-entity PFMU-E2 embedded in the SSP-functionality SSP-T of the server-controller SCRT and a corresponding second “Remote-Controlled-FMU”-entity RCFMU-E2 implemented in a second client-controller CCRT2 on the second edge device ED2, which includes the second automation data AD2 assigned to the second edge device ED2 and to be deployed thereon, and (iii) a third Proxy-FMU-functionality PFMU-T3 with a third “Proxy-FMU”-entity PFMU-E3 embedded in the SSP-functionality SSP-T of the server-controller SCRT and a corresponding third “Remote-Controlled-FMU”-entity RCFMU-E3 implemented in a third client-controller CCRT3 on the third edge device ED3, which includes the third automation data AD3 assigned to the third edge device ED3 and to be deployed thereon.
Moreover, the deployment tool DT is designed such that the automation data AD1, AD2, AD3 are deployed dpl on the distributed operational-technology-applications OTAd respectively the edge devices ED1, ED2, ED3 according to the assignment rules AR1, AR2, AR3 for the automation data AD1, AD2, AD3 and as a result of the deployment dpl of the simulated model components MCs,1, MCs,2, MCs,3.
Furthermore, the orchestration system OS can be designed as a hardware solution or realized as a software solution such that the orchestration system OS is a computer-implemented-tool CIT, which is nothing else than a Computer-Program-Product being designed preferably as an APP and which is up-loadable into the server-controller SCRT.
FIG. 2 shows in a principle diagram of the computer-implemented-tool CIT how the tool could be designed. According to the depiction, the computer-implemented-tool CIT includes a non-transitory, processor-readable storage medium STM having processor-readable program-instructions of a program module PGM for orchestrating configurations of simulations based on digital twin models of modular plants stored in the non-transitory, processor-readable storage medium STM and a processor PRC connected with the storage medium STM executing the processor-readable program-instructions of the program module PGM to orchestrate the configurations of the model-based simulations.
FIG. 3 shows, starting from FIG. 1 and based on the corresponding FIG. 1-description, an exemplary message-flow-diagram for deploying the simulated model components by implementing in a server-client-manner “Remote Procedure Call <RPC>”-technology based Proxy-FMU-functionalities.
FIG. 4 shows, starting from FIG. 1 and based on the corresponding FIG. 1-description, an exemplary orchestration flow between involved units of the “SERVER-CLIENT”-based scenario depicted in FIG. 1. Between the server-controller SCRT, which includes an orchestration entity responsible for the orchestration and a manage-clients-entity responsible for managing clients, and the clients the orchestration flow is TCP/IP-based (cf. https://en.wikipedia.org/wiki/Internet_Protocol in the version of Jul. 25, 2021).