This application claims priority to European Application No. 18184765.8, having a filing date of Jul. 20, 2018, the entire contents of which are hereby incorporated by reference.
The following relates to a method and apparatus for providing transparency in an autonomous production system.
Production systems can comprise a plurality of production facilities, subsystems and components, in particular controllers, actuators and sensors. The machines of a production system are becoming increasingly smart and are equipped with increasing autonomy to flexibly steer high-variant production. However, this implies that the decisions and actions taken by the autonomous production system are not easily transparent to human users, such as the operators of a manufacturing plant. The intransparency of actions taken by an autonomous production system may lead to a lack of trust in the respective autonomous system. Further, the intransparency of actions performed by an autonomous production system makes it difficult for a user to decide whether to overrule or change an action performed by the autonomous production system. This in turn can lead to wrong decisions by the user, because in a conventional production system the inherent intransparency leaves the user insufficiently assisted in taking the right decisions.
An aspect relates to an autonomous production system providing transparency to actions performed by the autonomous production system during the production process.
The first aspect of embodiments of the present invention provides an autonomous production system comprising a transformation unit adapted to transform an ontology representation of actions of the autonomous production system into a corresponding planning representation, an artificial intelligence, AI, planning unit adapted to compute an action plan comprising a sequence of instantiated actions based on the planning representation, and a retransformation unit adapted to transform the computed action plan into an ontology language representation providing transparency information about the computed action plan.
The autonomous production system according to the first aspect of embodiments of the present invention makes decisions and/or actions taken by the autonomous production system transparent to a user by explaining the decisions and/or actions to the user, e.g. via a graphical user interface.
The embodiments provide according to the second aspect a method for providing transparency in an autonomous production system comprising the steps of: transforming an ontology representation of actions of the autonomous production system into a corresponding planning representation, computing an action plan comprising a sequence of instantiated actions based on the planning representation, and retransforming the computed action plan into an ontology language representation providing transparency information about the computed action plan.
In a possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the ontology representation of actions of the autonomous production system comprises a web ontology language, OWL, representation.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the planning representation of actions of the autonomous production system comprises a planning domain definition language, PDDL, representation.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the transformation unit and the retransformation unit have access to a mapping memory which stores a mapping table of mappings between the ontology representation of actions and a corresponding planning representation of actions.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the table stored in the mapping memory comprises, for each action, in the ontology representation and in the planning representation, at least one action precondition, action parameters and at least one action effect.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention an ontology of a domain of the autonomous production system is stored in a memory or database and comprises a representation of states of a domain of the autonomous production system including an initial state and a goal state.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the autonomous production system further comprises a state evolution unit, SEU, which is configured to evolve the initial state of the domain into an intermediate state of interest of the domain queried by a user by applying the instantiated actions of the computed action plan transformed into its ontology representation.
In a still further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the system further comprises a logical reasoning unit, LRU, configured to compute explanatory justifications for the preconditions of the instantiated actions applied by the state evolution unit.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the system further comprises an explanation generation unit, EGU, configured to generate automatically the transparency information comprising human-readable explanation text and/or machine-readable explanation files on the basis of the explanatory justifications computed by the logical reasoning unit.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the human-readable explanation text generated by the explanation generation unit, EGU, is output via a user interface, UI, to a user of the system.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the explanation generation unit, EGU, of the system is adapted to perform natural language processing of the justifications computed by the logical reasoning unit, LRU, to generate automatically an explanation text output via the user interface, UI, to a user of the autonomous production system.
In a still further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the action plan computed by the AI planning unit comprising a sequence of instantiated actions is executed by an execution engine, EE, of the autonomous production system.
In a still further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the transparency information generated by the explanation generation unit EGU is evaluated to trigger the execution of the action plan by the execution engine EE.
In a further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the execution engine EE comprises a controller adapted to generate control signals applied to actuators of the autonomous production system during execution of the instantiated actions of the action plan.
In a still further possible embodiment of the autonomous production system according to the first aspect of embodiments of the present invention the transparency information comprises explanations indicating at least one reason for a specific action being included or not being included in the action plan computed by the AI planning unit.
Some of the embodiments will be described in detail, with references to the following Figure, wherein like designations denote like members, wherein:
An autonomous production system APS according to embodiments of the present invention can comprise a complex system providing a plurality of subsystems and components including control entities, actuators and/or sensors. The actuators of the autonomous production system APS are controlled by controllers of the autonomous production system according to instantiated actions of an action plan AP. The autonomous production system APS can comprise an apparatus adapted to generate an action plan AP which can be supplied, in a possible implementation, to at least one execution engine of the autonomous production system APS having a controller adapted to generate control signals applied to actuators of the autonomous production system during execution of the instantiated actions of the action plan AP.
The autonomous production system can comprise an apparatus adapted to provide transparency information about the computed action plan AP before or during execution of the action plan AP.
As can be seen in the schematic diagram of
The transformation unit 1 used in the autonomous production system APS according to the first aspect of embodiments of the present invention is adapted to transform an ontology representation of actions of the autonomous production system APS into a corresponding planning representation. The artificial intelligence, AI, planning unit 2 is adapted to compute an action plan AP such as illustrated schematically in
The web ontology language OWL comprises a family of knowledge representation languages for ontologies. Ontologies describe taxonomies and classification networks, essentially defining a structure of knowledge in various technical domains. OWL ontologies are characterized by formal semantics. They can be built upon the World Wide Web Consortium (W3C) XML standard for objects called the Resource Description Framework, RDF. The W3C standard web ontology language OWL and associated reasoning mechanisms can be used for knowledge representation and reasoning in information systems.
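By way of illustration, the following minimal sketch shows a small OWL/RDF fragment for a hypothetical assembly vocabulary, parsed with the rdflib library; the class and property names (Part, Lid, Gripper, holds) are assumptions, not taken from the actual system, and the fragment merely distinguishes general class knowledge from statements about concrete individuals:

```python
# Minimal sketch: a tiny OWL/RDF fragment in Turtle syntax, loaded with
# rdflib. All vocabulary terms are illustrative assumptions.
from rdflib import Graph

TURTLE = """
@prefix :     <http://example.org/assembly#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Part     rdf:type owl:Class .
:Lid      rdfs:subClassOf :Part .      # T-box: general class knowledge
:Gripper  rdf:type owl:Class .
:holds    rdf:type owl:ObjectProperty .

:gripper1 rdf:type :Gripper .          # A-box: concrete individuals
:lid1     rdf:type :Lid .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")
print(len(g), "triples loaded")  # the graph is now queryable, e.g. via SPARQL
```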
The planning representation used in the system according to embodiments of the present invention comprises in a possible embodiment a planning domain definition language, PDDL, representation. The planning problem is separated into two major parts, namely a domain description and a related problem description.
The domain description can consist of a domain name definition, a definition of requirements, a definition of the object-type hierarchy, a definition of constant objects, a definition of predicates and a definition of possible actions. The actions can comprise parameters, i.e. variables that can be instantiated with objects, as well as preconditions and effects. Effects of actions can also be conditional.
The problem description can comprise a problem name definition, the definition of the related domain name, a definition of all possible objects, an initial state of the planning environment and a definition of at least one goal state.
The PDDL representation uses the PDDL language which forms a formal language for describing action planning domains and problems in a declarative way as an input to an AI planner that can compute action plans AP as solutions.
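A hypothetical minimal domain and problem specification, with action and predicate names that are assumptions and not taken from the actual system, could read as follows; the two strings can be written to .pddl files and handed to any PDDL-conformant planner:

```python
# Hedged sketch of the two PDDL parts described above (illustrative names).
DOMAIN = """(define (domain assembly)
  (:requirements :strips :typing)
  (:types part robot)
  (:predicates (free ?r - robot)
               (holding ?r - robot ?p - part)
               (inserted ?p - part))
  (:action PickLid
    :parameters (?r - robot ?p - part)
    :precondition (free ?r)
    :effect (and (holding ?r ?p) (not (free ?r))))
  (:action InsertLid
    :parameters (?r - robot ?p - part)
    :precondition (holding ?r ?p)
    :effect (and (inserted ?p) (free ?r) (not (holding ?r ?p)))))"""

PROBLEM = """(define (problem assemble-box)
  (:domain assembly)
  (:objects robot1 - robot lid1 - part)
  (:init (free robot1))
  (:goal (inserted lid1)))"""
```

For this input, a planner would return the two-step action plan PickLid(robot1, lid1) followed by InsertLid(robot1, lid1).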
As illustrated in the schematic diagram, the transformation unit 1 and the retransformation unit 3 have access to a mapping memory 4 which stores a mapping table of mappings between the ontology representation of actions and the corresponding planning representation of actions.
A possible example for the content of a table stored in the mapping memory is sketched below:
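All identifiers in the following sketch are illustrative assumptions; it merely indicates the shape of such a mapping, pairing for each action the ontology terms with the corresponding planning terms for parameters, preconditions and effects:

```python
# Hypothetical sketch of one mapping-table entry (all names illustrative).
MAPPING_TABLE = {
    "InsertLid": {
        "owl": {
            "parameters":    ["Robot", "Lid"],
            "preconditions": [":holds(?robot, ?lid)"],
            "effects":       [":insertedIn(?lid, ?box)"],
        },
        "pddl": {
            "parameters":    ["?r - robot", "?p - part"],
            "preconditions": ["(holding ?r ?p)"],
            "effects":       ["(inserted ?p)", "(not (holding ?r ?p))"],
        },
    },
}
# The transformation unit reads the table left-to-right (OWL -> PDDL);
# the retransformation unit reads it right-to-left (PDDL -> OWL).
```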
The ontology ONT can comprise a domain and problem ontology which can be specified in a web ontology language, OWL, representation. The ontology can be stored in a memory or database to which the transformation unit 1 has access as illustrated in the schematic diagram of
A state evolution unit 5 is configured to evolve the initial state of the domain into an intermediate state of interest A of the domain queried by a user by applying the instantiated actions of the computed action plan AP transformed into its ontology representation oAP.
The autonomous production system APS further comprises in a possible embodiment a logical reasoning unit 6 which is configured to compute explanatory justifications for the preconditions of the instantiated actions applied by the state evolution unit 5. An explanation generation unit 7 is configured to generate automatically transparency information on the basis of the explanatory justifications computed by the logical reasoning unit 6. The generated transparency information can comprise human-readable explanation text output via a user interface UI and/or machine-readable explanation files output via a data interface DI of the autonomous production system APS. In a further embodiment, the explanation generation unit 7 is adapted to perform natural language, NL, processing of the justifications J computed by the logical reasoning unit 6 to generate automatically an explanation text output via the user interface to a user of the autonomous production system APS. In a possible embodiment, the transparency information can comprise explanations E indicating at least one reason for a specific action being included or not being included in the action plan AP computed by the AI planning unit 2 of the system.
The autonomous production system APS of embodiments of the present invention as illustrated schematically in
The autonomous production system APS can for instance comprise a robot assembly system which is adapted to assemble a final product from parts, e.g. a vehicle or car from basic parts such as wheels, axles, chassis, etc., performing the right actions at a symbolic level in the correct sequence, like picking up a wheel and inserting it onto an axle. The autonomous production system APS gets a description of the parts and its assembly capabilities (skills) as an input and computes intermediate actions such as the correct placing of the car's chassis into a fixture in the right position for its bottom side to be accessible for following actions that insert the axles. The autonomous production system APS according to the first aspect of embodiments of the present invention has the additional ability to explain its computed actions to a user, making its decision process transparent to the user and assisting the user in controlling the autonomous production system APS. An exemplary action performed by the autonomous production system could be "put chassis into fixture upside down". This action can be explained along the lines of "the action achieves that the chassis is fixed within the fixture and its bottom side is accessible to the assembly robot for properly inserting an axle as a required following action".
The transparency information provided by the autonomous production system APS according to embodiments of the present invention can comprise explanations E indicating to a user at least one reason for a specific action being included or not being included in the action plan AP computed by the AI planning unit 2 of the system. Accordingly, the explanations E for actions can comprise why-explanations indicating why the autonomous production system has taken or performed a certain action and why-not-explanations indicating why the autonomous production system has not performed a specific alternative action at a certain point.
The autonomous production system APS according to embodiments of the present invention combines ontologies and artificial intelligence planners for providing transparency information about a computed action plan AP. The autonomous production system APS uses a semantically rich ontological vocabulary to provide explanations that a user has a good understanding of. In a possible embodiment, the system connects the declarative descriptions in the PDDL problem and domain specification to the vocabulary in the respective ontology. In the autonomous production system according to embodiments of the present invention the used artificial intelligence, AI, planning unit 2 is exchangeable depending on the requirements of the system and/or user. The explanation mechanism provided by the autonomous production system according to embodiments of the present invention does not rely on a specific implementation of the artificial intelligence, AI, planning unit 2 and thus has a broad range of use.
In the autonomous production system according to the first aspect of embodiments of the present invention ontologies are used to capture the vocabulary of the domain, such as assembly in manufacturing, in a semantically rich way with explicit domain concepts (like wheel, axle, . . . ) and the relations (like is part of, is connected to, . . . ). In a possible embodiment, a standardized OWL ontology language can be used for representation of the actions of the autonomous production system APS.
PDDL domain and problem specification can be generated from the ontological representation that captures planning knowledge (skills/capabilities of the autonomous production system) next to domain knowledge of the objects to be handled.
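A minimal sketch of how one PDDL action block could be emitted from such an ontology-derived action description is given below; the data model and identifiers are assumptions, not the actual generation logic of the system:

```python
# Hedged sketch: render one ontology-derived action as a PDDL :action block.
def emit_pddl_action(name: str, pddl: dict) -> str:
    """Emit a PDDL action from parameter/precondition/effect lists."""
    params = " ".join(pddl["parameters"])
    pre = " ".join(pddl["preconditions"])
    eff = " ".join(pddl["effects"])
    return (f"(:action {name}\n"
            f"  :parameters ({params})\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

print(emit_pddl_action("InsertLid", {
    "parameters":    ["?r - robot", "?p - part"],
    "preconditions": ["(holding ?r ?p)"],
    "effects":       ["(inserted ?p)", "(not (holding ?r ?p))"],
}))
```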
Any conventional artificial intelligence, AI, planner that understands planning domain definition language PDDL representation can be used to compute the action plan AP as a solution to a task or goal environment state posed to the autonomous production system APS based on the generated PDDL problem and domain specifications.
Since the PDDL code can be generated from statements using an ontological vocabulary, the resulting action plan AP can be interpreted in terms of this vocabulary by considering the inverse transformation of the generation process. Thus, the instantiated actions from the action plan AP can be expressed in terms of the domain ontology used in the first place.
For explaining a specific action (why) or the absence of an alternative action (why not) OWL justifications J can be computed by the logical reasoning unit 6 for respective ontology inferences which are related to the actions of the action plan. Since the calculated justifications J explain how these inferences can be deduced from the knowledge in the ontology, they yield an explanation for the actions of the action plan AP, building on the domain vocabulary that the user understands well.
The autonomous production system APS according to embodiments of the present invention can provide in a possible implementation different kinds of explanations concerning actions performed by the autonomous production system according to the action plan AP. These explanations E can comprise why-explanations (i.e. why an action was included in the action plan AP at a certain point) and why-not-explanations (i.e. why was an alternative action not included in the action plan at a certain point). To achieve a why-explanation an OWL justification J for the respective action's precondition being true can be computed and its content can be presented to the user via the user interface 12. For a why-not-explanation, the OWL justification J can be computed for the respective action's precondition being false and its content can be presented to the user. With respect to the why-not-explanations, these explanations concern an action that is not part of the action plan AP (at least not at the relevant point in the action plan). This requires the user to construct an alternative instantiation of a hypothetical action for a why-not-explanation as additional input. For generation of an explanation, an action plan AP is considered as a result of a classical AI planner as illustrated schematically in
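The following sketch illustrates the why/why-not distinction; `entails` and `justify` are hypothetical stand-ins for an OWL reasoner and a justification generator (such facilities exist, e.g., in the Java OWL API) and are not part of the actual system:

```python
# Hedged sketch of why- vs. why-not-explanations over a queried state.
from dataclasses import dataclass, field

@dataclass
class Action:
    """A ground action; atoms are tuples such as ('holding', 'r1', 'lid1')."""
    name: str
    preconditions: list = field(default_factory=list)

def explain_action(abox, action, entails, justify):
    """Return ('why', ...) if all preconditions hold in the queried state,
    else ('why-not', ...) with justifications for the violated ones."""
    if all(entails(abox, p) for p in action.preconditions):
        return "why", [justify(abox, p) for p in action.preconditions]
    failed = [p for p in action.preconditions if not entails(abox, p)]
    return "why-not", [justify(abox, ("not", p)) for p in failed]

# Toy usage: set membership plays the role of entailment.
entails = lambda state, atom: atom in state
justify = lambda state, stmt: {stmt}
state = {("holding", "r1", "lid1")}
print(explain_action(state, Action("InsertLid", [("holding", "r1", "lid1")]),
                     entails, justify))
```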
To generate an accurate explanation for why or why not an action occurs or has occurred, it is necessary to have knowledge of the state of the environment at that specific moment in time. This is necessary in such a black-box approach with respect to the artificial intelligence, AI, planner, since only the initial state is available in the ontology, and not the intermediate states that successively lead to the goal state by applying the instantiated actions of the action plan AP computed by the artificial intelligence planning unit 2. For generating the explanations, the action plan AP can be considered independently from its execution. In a possible embodiment, the action plan AP can have already been executed and the user gets the explanations for the performed actions. However, the explanations E can also be used by the user for verification prior to execution of the action plan AP by the execution engine 9, in particular for safety reasons.
An approach to solve this is to infer the next state of the environment as the algorithm progresses further into the plan and to generate explanations E for the applied actions.
A pseudo-code for the mechanism according to embodiments of the present invention is given below:
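One possible rendering, in Python-style pseudo-code, is given below; the data structures are assumptions (ground atoms as tuples, actions as dictionaries), and the `justify` callable stands in for the logical reasoning unit 6:

```python
# Hedged sketch of the mechanism described in the text: walk the action plan,
# justify each action's preconditions in the current state, then apply the
# action's effects to the A-box to obtain the successor state.
def evolve_and_explain(abox, plan, queried_index, justify):
    """abox: set of ground atoms; plan: list of actions, each a dict with
    'preconditions', 'add_effects' and 'del_effects' entries."""
    justification_set = set()
    for action in plan[: queried_index + 1]:
        for pre in action["preconditions"]:          # justify each precondition
            justification_set |= justify(abox, pre)  # in the current state
        # apply the action's effects, i.e. evolve the A-box to the next state
        abox = (abox | set(action["add_effects"])) - set(action["del_effects"])
    return abox, justification_set

# Toy usage: a trivial "justification" is the atom itself if it holds.
justify = lambda state, atom: {atom} if atom in state else {("not", atom)}
plan = [
    {"preconditions": [("free", "robot1")],
     "add_effects": [("holding", "robot1", "lid1")],
     "del_effects": [("free", "robot1")]},
    {"preconditions": [("holding", "robot1", "lid1")],
     "add_effects": [("inserted", "lid1")],
     "del_effects": [("holding", "robot1", "lid1")]},
]
state, J = evolve_and_explain({("free", "robot1")}, plan, 1, justify)
print(state)  # the evolved state of interest
print(J)      # the justifications collected along the way
```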
An instantiated action can be parsed. The action can be executed and the state in which the environment is can be transposed to the state caused by the executed action. The A-box of the ontology represents the state of the environment as well as the relations between the different objects in it. An A-box of an OWL ontology is the part of the ontology which makes statements about specific individual objects and situations. This is contrary to a T-box which makes more general statements about classes of objects. In the method according to embodiments of the present invention, the use of the A-box is relevant for representing a specific state of the environment which needs to be computed for explaining a particular action, while the T-box knowledge remains the same for all such states. The individuals represent the objects and the object properties represent their relations. Inferring the next state of the system means applying the effects of a specific action, which can be executed as long as certain preconditions of that action are fulfilled. The method can infer the next state of the system by applying the effects of the specific action of the action plan AP, thereby updating the A-box of the ontology, i.e. the state of the environment, according to the above illustrated pseudo-code. The method can be applied for more than one action depending on which state of interest is to be reached and which action in the action plan AP has to be explained.
After having reached a certain state in the environment, a justification set (i.e. a collection of each justification of the entailments) can be generated to construct an explanation of why an action has been performed. A justification set is the minimal set of statements that are required to substantiate the explanation for an action. In a possible embodiment, a user may ask a question within the scope of the action plan to get explanations. Depending on which part of the action plan AP is queried by the user, the method evolves the system to the queried state of interest while generating at the same time a justification set as a collection of individual justifications for the specific entailments.
The following shows an example based on the above action plan AP and an ontology O that contains statements as follows:
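A hypothetical sketch of such a plan and ontology content is given below; all individual, class and predicate names are assumptions chosen only to make the subsequent PickLid/InsertLid discussion concrete:

```python
# Hedged sketch of the running example (all names illustrative).
ACTION_PLAN = ["PickLid(robot1, lid1)", "InsertLid(robot1, lid1, box1)"]
ONTOLOGY_O = [
    ("Robot", "robot1"),   # robot1 is a Robot
    ("Lid", "lid1"),       # lid1 is a Lid
    ("Box", "box1"),       # box1 is a Box
    ("free", "robot1"),    # initial state A0: the robot's gripper is free
    ("open", "box1"),      # initial state A0: the box is open
]
```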
For example, one can consider a why-explanation for the second action in the action plan AP, which corresponds to asking: "why was the action InsertLid included in the action plan at this point?". The A-box that represents the initial state is denoted by A0. To generate an appropriate explanation, the method and algorithm according to embodiments of the present invention must first infer the state of the environment relevant to the InsertLid action. In a possible embodiment, it needs to simulate the application of the previous PickLid action, asserting its effects to the initial state A0. This leads to a state represented by an A-box A1 as follows:
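A hypothetical illustration of this simulation step, reusing the assumed names from above (the actual A-box axioms may differ):

```python
# Hedged sketch: evolve A0 into A1 under the asserted effects of PickLid.
A0 = {("Robot", "robot1"), ("Lid", "lid1"), ("Box", "box1"),
      ("free", "robot1"), ("open", "box1")}
PICKLID_ADD = {("holding", "robot1", "lid1")}
PICKLID_DEL = {("free", "robot1")}
A1 = (A0 | PICKLID_ADD) - PICKLID_DEL
# A1 now contains ('holding', 'robot1', 'lid1') instead of ('free', 'robot1')
print(sorted(A1))
```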
At this point, the explanation mechanism computes justifications for all the preconditions of the InsertLid action being fulfilled with respect to state A1. This justifies to the user why the AI planning unit 2 chose to apply the InsertLid action in A1. The tables below show the respective justifications as OWL axioms together with explanatory texts. After the inference, justification sets are constructed for the entailments corresponding to the fulfilled preconditions:
The following table illustrates a justification J1 of the first precondition of the respective action:
The following table illustrates the justification J2 of the second precondition of the action:
The following table illustrates a justification J3 of the third precondition of the action:
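A hypothetical sketch of the shape of these justifications, pairing axioms with explanatory texts, is given below; all axioms and texts are illustrative assumptions and not the actual table contents:

```python
# Hedged sketch of the justifications J1..J3 (axiom -> explanatory text).
J1 = {("holding", "robot1", "lid1"):
      "The robot is holding the lid after the preceding PickLid action."}
J2 = {("Lid", "lid1"):
      "The object lid1 is a lid and is therefore insertable into a box."}
J3 = {("open", "box1"):
      "The box is open, so its lid slot is accessible."}
```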
The actual explanation presented as transparency information to a user or to a controller can comprise a collection of the explanatory texts of the individual justifications from the above indicated tables. These texts can either be part of the ontology as hand-crafted annotations or they can be generated from the axioms on the fly by means of applying natural language, NL, processing techniques.
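A minimal sketch of the on-the-fly variant using simple templates as a stand-in for full natural language processing is given below; the template wording and predicate names are assumptions:

```python
# Hedged sketch: template-based text generation from justification axioms.
TEMPLATES = {
    "holding": "{0} is currently holding {1}.",
    "open":    "{0} is open and therefore accessible.",
}

def axiom_to_text(axiom):
    """Render one ground atom, e.g. ('holding', 'robot1', 'lid1'), as text."""
    predicate, *args = axiom
    if predicate in TEMPLATES:
        return TEMPLATES[predicate].format(*args)
    return " ".join(map(str, axiom)) + " holds."  # generic fallback

print(axiom_to_text(("holding", "robot1", "lid1")))
```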
As can be deduced from the algorithm, the collective justification J for the action can be determined as: J = J1 ∪ J2 ∪ J3.
The autonomous production system APS according to embodiments of the present invention performs the computation of user- and/or machine-interpretable explanations of an action plan AP computed by an artificial intelligence, AI, planning unit. The autonomous production system APS makes use of ontological vocabularies together with OWL justification reasoning for a black-box explanation of PDDL action planning. The autonomous production system APS according to embodiments of the present invention applies explanation mechanisms from the automated reasoning and ontology community (OWL justifications) to conventional action planning with PDDL.
The autonomous production system APS comprises a transformation unit which provides transformation between the ontology language (OWL) and the planning language (PDDL) which governs the representation of action knowledge within an ontology suitable for explanatory reasoning.
The autonomous production system APS provides a mechanism for inferring any particular state of the environment (state of interest) from an instantiated action plan AP together with an initial system state. This also allows for an explanation of the actions at any point within the action plan.
The autonomous production system APS and method according to embodiments of the present invention further provide a mechanism for generating explanations E in the form of explanatory text from OWL justifications for the preconditions of any action. The mechanism exploits the ontological domain vocabulary to make a planner's decisions transparent to a user without facing the difficulties connected to analyzing the planner's internal computations.
The autonomous production system APS and method according to embodiments of the present invention provide advantages that arise from explaining automated actions of autonomous production systems. In case of planning ahead of actual production sequences, explanations can be used to explore and optimize production workflows prior to the execution, supporting the task of production engineering.
In case of flexible autonomous plant set-ups where the autonomous production system APS takes decisions as to the next production steps, the explanations E can be utilized to increase transparency over such automated decisions. The plant operator or user can at any time be informed about the reasons for certain actions being taken (why-explanations) or about rejected alternative actions (why-not-explanations). This increases the acceptance of and trust in highly flexible automation solutions provided by the autonomous production system APS.
In case of non-producibility, i.e. a situation where no sequence of actions can be found for achieving the desired goal or goal state given by the target product, the explanations can also help a user or engineers to investigate reasons for the non-producibility, thus supporting product engineering as well as production engineering tasks. The autonomous production system APS according to embodiments of the present invention makes its internal decisions and/or actions transparent to a user by automatically generating explanations. For example, the autonomous production system APS can give an explanation E of why an assembly robot changed its tool prior to fastening a screw. A possible explanation could be that the current tool was not compatible with the required type of screw. In a possible embodiment, the explanations E output to the user can assist the user in controlling the autonomous production system. If the user is satisfied with the received explanation E, he may not interfere with the autonomous production system APS while it executes the action plan AP. If the action plan AP is executed in a specific operation mode step by step, the user may decide depending on the given explanation E for the next instantiated action whether the action is to be performed or not. In a still further possible embodiment, the user may have the possibility to overrule an instantiated action if the given explanation E is not satisfactory. In a possible embodiment, the user may interrupt the sequence of instantiated actions if the calculated explanations E for one or more of the instantiated actions are not satisfactory. In a still further embodiment, the user may input whether a given explanation is satisfactory or not. Depending on the user input, the calculated action plan AP may be executed or not. In a still further embodiment, if the explanation E for an instantiated action is not satisfactory to a user, the artificial intelligence, AI, planning unit 2 can be triggered to calculate an alternative action plan and to provide transparency information about the adapted action plan. According to a possible embodiment, the user receives explanations E about the computed action plan AP as an observer without interfering with the autonomous production system APS. In an alternative embodiment, the user can give feedback on the received explanations E to interfere with the execution of the computed action plan AP.
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.