The present application claims the benefit under 35 U.S.C. § 119 of European Patent Application No. EP 19181874.9 filed on Jun. 21, 2019, which is expressly incorporated herein by reference in its entirety.
The present invention relates to automatic processes for planning tasks by determining a sequence of manipulation skills. Particularly, the present invention relates to a motion planning framework for robot manipulation.
General use of robots for performing various tasks is challenging, as it is almost impossible to preprogram all robot capabilities that may potentially be required in later applications. Training each skill whenever it is needed renders the use of robots inconvenient and will not be accepted by users. Further, simply recording and replaying a demonstrated manipulation is often insufficient, because changes in the environment, such as varying robot and/or object poses, would render any such attempt unsuccessful.
Therefore, the robot needs to recognize and encode the intentions behind these demonstrations and should be capable of generalizing the trained manipulation to unforeseen situations. Furthermore, several skills need to be performed in sequence to accomplish complex tasks. The task planning problem aims to define the right sequence of actions and needs a prespecified definition of the planning model and of the preconditions and effects of all available skills. Due to the large variation of skills, the definition of such a planning model quickly becomes impractical.
According to the present invention, a method for planning an object manipulation, and a system for planning object manipulation are provided.
Further embodiments are described herein.
According to a first aspect of the present invention, an example method for planning a manipulation task of an agent, particularly a robot, is provided, comprising the steps of:
Further, the learning of the number of manipulation skills may be performed in that a plurality of manipulation trajectories is recorded for each respective manipulation skill, particularly by demonstration, a task-parametrized Hidden Semi-Markov Model (TP-HSMM) is determined depending on the plurality of manipulation trajectories for each respective manipulation skill, and the symbolic abstraction of the respective manipulation skill is generated.
The above task planning framework allows a high-level planning of a task by sequencing general manipulation skills. Manipulation skills are action skills in general, which may also include translations or movements. The general manipulation skills are object-oriented and respectively relate to a single action performed on the object, such as a grasping skill, a dropping skill, a moving skill or the like. These manipulation skills may have different instances, which means that the skills can be carried out in different ways (instances) according to what is needed to be done next. Furthermore, the general skills are provided with object-centric symbolic action descriptions for the logic-based planning.
The above method is based on learning from demonstration by means of fitting a prescribed skill model, such as a Gaussian mixture model, to a handful of demonstrations. Generally, a task-parameterized Gaussian mixture model (TP-GMM) may be learned, which can then be used during execution to reproduce a trajectory for the learned manipulation skill. The TP-GMM is defined with respect to one or more specific frames (coordinate systems), each of which indicates a translation and rotation with respect to a world frame. After observation of the actual frames, the learned TP-GMM can be converted into a single GMM. One advantage of the TP-GMM is that the resulting GMM can be updated in real time according to the observed task parameters. Hence, the TP-HSMM allows adapting to changes in the objects during the execution of the manipulation task.
Furthermore, the generating of the symbolic abstraction of the manipulation skills may comprise constructing a PDDL model, wherein objects, initial state and goal specification define a problem instance, while predicates and actions define a domain of a given manipulation, wherein particularly the symbolic abstraction of the manipulation skills uses the classical PDDL planning language.
It may be provided that the concatenated sequence of manipulation skills is determined such that the probability of achieving the given goal specification is maximized, wherein particularly a PDDL planning step is used to find a sequence of actions fulfilling the given goal specification, starting from a given initial state.
According to an example embodiment of the present invention, the transition probability between states of the TP-HSMM may be determined using Expectation-Maximization.
Moreover, a single task-parametrized Hidden Semi-Markov Model (TP-HSMM) may be determined by cascading manipulation skills, wherein a Viterbi algorithm is used to retrieve the sequence of states from the single TP-HSMM based on the determined concatenated sequence of manipulation skills.
Parameters of the TP-HSMM may be learned through a classical Expectation-Maximization algorithm.
Furthermore, the symbolic abstractions of the demonstrated manipulation skills may be determined by mapping low-variance geometric relations of segments of manipulation trajectories into the set of predicates.
According to an example embodiment of the present invention, the step of determining the concatenated sequence of manipulation skills may comprise an optimization process, particularly with the goal of minimizing the total length of the trajectory.
Particularly, determining the concatenated sequence of manipulation skills may comprise selectively reproducing one or more of the manipulation skills of a given sequence of manipulation skills so as to maximize the probability of satisfying the given goal specification.
Furthermore, determining the concatenated sequence of manipulation skills may include the steps of:
Particularly, the modified Viterbi algorithm may include missing observations and duration probabilities.
According to a further embodiment of the present invention, a device for planning a manipulation task of an agent, particularly a robot, is provided, wherein the device is configured to:
Example embodiments of the present invention are described in more detail in conjunction with the figures.
Within this setup, a human user can perform several kinesthetic demonstrations on the arm to manipulate one or several objects for certain manipulation skills. Denote by A = {a_1, a_2, . . . , a_H} the set of demonstrated skills. Moreover, for manipulation skill a_h ∈ A, the set of objects involved is given by O_{a_h}.
The robot arm 2 is controlled by means of a control unit 3 which may actuate actuators to move the robot arm 2 and activate the end effector 22. Sensors may be provided at the robot arm 2 or at the robot workspace to record the state of objects in the robot workspace. Furthermore, the control unit 3 is configured to record movements made with the robot arm 2 and to obtain information about objects in the workspace from the sensors and further to perform a task planning process as described below. The control unit 3 has a processing unit where the algorithm as described below is implemented in hardware and/or software.
All demonstrations are described by the structure of TP-GMMs (task-parametrized Gaussian mixture models). The basic idea is to fit a prescribed skill model, such as a GMM, to multiple demonstrations. GMMs are described, e.g., in S. Niekum et al., “Learning grounded finite-state representations from unstructured demonstrations”, The International Journal of Robotics Research, 34(2), pages 131-157, 2015. For a number M of given demonstrations (trajectory measurement results), each of which contains T_m data points, the dataset comprises N = Σ_{m=1}^{M} T_m total observations ξ = {ξ_t}_{t=1}^{N}, where ξ_t ∈ ℝ^d. Also, it is assumed that the same demonstrations are recorded from the perspective of P different coordinate systems (given by the task parameters, such as objects of interest). One common way to obtain such data is to transform the demonstrations from the global frame (global coordinate system) to frame p by ξ_t^{(p)} = (A_t^{(p)})^{−1} (ξ_t − b_t^{(p)}), where {b_t^{(p)}, A_t^{(p)}} denote the translation and rotation of frame p with respect to the global frame at time t.
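By way of illustration, this transformation of a recorded data point into the P local frames may be sketched in Python as follows (the function and argument names are hypothetical and chosen only for this sketch):

```python
import numpy as np

def to_local_frames(xi, As, bs):
    """Transform one observed data point xi (global frame) into each of
    the P task frames, i.e. xi_p = inv(A_p) @ (xi - b_p).

    As: list of P rotation matrices A_p; bs: list of P translations b_p.
    """
    return [np.linalg.solve(A, xi - b) for A, b in zip(As, bs)]
```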
Differently from standard GMM learning, the mixture model above cannot be learned independently for each frame p. Indeed, the mixing coefficients πk are shared by all frames p and the k-th component in frame p must map to the corresponding k-th component in the global frame. For example, Expectation-Maximization (EM) is a well-established method to learn such models. In general, an expectation-maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in a statistical model, which depends on unobserved latent variables.
Once learned, the TP-GMM can be used during execution to reproduce a trajectory for the learned skill. Namely, given the observed frames {b_t^{(p)}, A_t^{(p)}}_{p=1}^{P}, the learned TP-GMM is converted into one single GMM with parameters {π_k, (μ̂_{t,k}, Σ̂_{t,k})}_{k=1}^{K} by multiplying the affine-transformed Gaussian components across the different frames, as follows:

N(μ̂_{t,k}, Σ̂_{t,k}) ∝ ∏_{p=1}^{P} N(μ̂_{t,k}^{(p)}, Σ̂_{t,k}^{(p)}),

where the parameters of the updated Gaussian at each frame p are computed as

μ̂_{t,k}^{(p)} = A_t^{(p)} μ_k^{(p)} + b_t^{(p)} and Σ̂_{t,k}^{(p)} = A_t^{(p)} Σ_k^{(p)} (A_t^{(p)})^T.
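The multiplication of the affine-transformed Gaussian components amounts to a precision-weighted fusion, which may be sketched as follows (a minimal sketch assuming numpy; the helper name is hypothetical, and the fusion equations match the expressions for μ̂_{t,k} and Σ̂_{t,k} given further below):

```python
import numpy as np

def fuse_components(mu_hats, sigma_hats):
    """Product of P Gaussians N(mu_hat_p, sigma_hat_p): the fused
    covariance is the inverse of the summed precisions, and the fused
    mean is the precision-weighted average of the transformed means."""
    precisions = [np.linalg.inv(s) for s in sigma_hats]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(p @ m for p, m in zip(precisions, mu_hats))
    return mu, sigma
```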
Hidden semi-Markov Models (HSMMs) have been successfully applied, in combination with TP-GMMs, for robot skill encoding to learn spatio-temporal features of the demonstrations, such as manipulation trajectories of a robot or trajectories of a movable agent.
Hidden semi-Markov Models (HSMMs) extend standard hidden Markov Models (HMMs) by embedding temporal information of the underlying stochastic process. That is, while in HMM the underlying hidden process is assumed to be Markov, i.e., the probability of transitioning to the next state depends only on the current state, in HSMM the state process is assumed semi-Markov. This means that a transition to the next state depends on the current state as well as on the elapsed time since the state was entered.
More specifically, a task-parametrized HSMM model consists of the following parameters:

M_θ = { {a_{kh}}_{h=1}^{K}, (μ_k^D, σ_k^D), π_k, {μ_k^{(p)}, Σ_k^{(p)}}_{p=1}^{P} }_{k=1}^{K},
where a_{kh} is the transition probability from state k to h; (μ_k^D, σ_k^D) describes the Gaussian distribution for the duration of state k, i.e., the probability of staying in state k for a certain number of consecutive steps; {π_k, {μ_k^{(p)}, Σ_k^{(p)}}_{p=1}^{P}}_{k=1}^{K} equals the TP-GMM introduced earlier and, for each k, describes the emission probability, i.e., the probability of an observation, corresponding to state k. In an HSMM, the number of states corresponds to the number of Gaussian components in the “attached” TP-GMM. In general, HSMM states are Gaussian distributions, which means that the observation probability distribution is represented as a classical GMM. To render the HSMM object-centric, the observation probabilities can be parametrized as in the TP-GMM to obtain a TP-HSMM.
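For illustration only, such a parameter set may be organized as a simple container, e.g., as in the following Python sketch (the field names are hypothetical and merely mirror the symbols above):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TPHSMM:
    trans: np.ndarray      # (K, K) transition probabilities a_kh
    dur_mu: np.ndarray     # (K,)   duration means mu_k^D
    dur_sigma: np.ndarray  # (K,)   duration std deviations sigma_k^D
    priors: np.ndarray     # (K,)   mixing coefficients pi_k
    mus: np.ndarray        # (K, P, d)    frame-local means mu_k^(p)
    sigmas: np.ndarray     # (K, P, d, d) frame-local covariances Sigma_k^(p)
```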
Given a certain sequence of observed data points {ξ_ℓ}_{ℓ=1}^{t}, the associated sequence of states in M_θ is given by s_t = s_1 s_2 . . . s_t. The probability of data point ξ_t belonging to state k (i.e., s_t = k) is given by the forward variable α_t(k) = p(s_t = k, {ξ_ℓ}_{ℓ=1}^{t}):

α_t(k) = Σ_{τ=1}^{t−1} Σ_{h=1}^{K} α_{t−τ}(h) a_{hk} N(τ | μ_k^D, σ_k^D) o_τ^t,

where o_τ^t = ∏_{ℓ=t−τ+1}^{t} N(ξ_ℓ | μ̂_{ℓ,k}, Σ̂_{ℓ,k}) is the emission probability, and where (μ̂_{ℓ,k}, Σ̂_{ℓ,k}) are derived from (Σ̂_{t,k})^{−1} = Σ_{p=1}^{P} (Σ̂_{t,k}^{(p)})^{−1} and μ̂_{t,k} = Σ̂_{t,k} Σ_{p=1}^{P} (Σ̂_{t,k}^{(p)})^{−1} μ̂_{t,k}^{(p)}, as shown above. Furthermore, the same forward variable can also be used during reproduction to predict future steps until T_m. In this case, however, since future observations are not available, only transition and duration information are used, i.e., by setting N(ξ_ℓ | μ̂_{ℓ,k}, Σ̂_{ℓ,k}) = 1 for all k and ℓ > t in the above expression for α_t(k). At last, the sequence of most-likely states s*_{T_m} = {s*_1, . . . , s*_{T_m}} can be determined by choosing s*_t = argmax_k α_t(k) at each time step t.
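A direct, unoptimized computation of this forward variable might look as follows; this is a sketch assuming Gaussian duration distributions and the hypothetical TPHSMM container introduced above, with precomputed emission probabilities passed in as an array:

```python
import numpy as np
from scipy.stats import norm

def forward_variable(model, emission, T):
    """Forward variable alpha[t, k] = p(s_t = k, xi_1..xi_t) of an HSMM.

    model:    TPHSMM container from the sketch above.
    emission: (T, K) array with emission[t, k] = N(xi_t | mu_hat, Sigma_hat);
              rows for future (unobserved) steps are set to 1.0 so that
              only transition and duration information is used there.
    """
    K = len(model.priors)
    alpha = np.zeros((T, K))
    dur = lambda k, d: norm.pdf(d, model.dur_mu[k], model.dur_sigma[k])
    for k in range(K):                        # initialization (duration 1)
        alpha[0, k] = model.priors[k] * dur(k, 1) * emission[0, k]
    for t in range(1, T):
        for k in range(K):
            acc = 0.0
            for tau in range(1, t + 1):       # time already spent in state k
                obs = np.prod(emission[t - tau + 1:t + 1, k])
                acc += (alpha[t - tau] @ model.trans[:, k]) * dur(k, tau) * obs
            alpha[t, k] = acc
    return alpha
```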
All demonstrations are recorded from multiple frames. Normally, these frames are closely attached to the objects in O_{a_h}.
In addition, consider a set of pre-defined predicates, denoted by B = {b_1, b_2, . . . , b_L}, representing possible geometric relations among the objects of interest. Here, predicates b ∈ B are abstracted as Boolean functions taking as inputs the status of several objects while outputting whether the associated geometric relation holds or not. For instance, grasp: O → 𝔹 indicates whether an object is grasped by the robot arm; within: O × O → 𝔹 indicates whether an object is inside another object; and onTop: O × O → 𝔹 indicates whether an object is on top of another object. Note that these predicates are not bound to specific manipulation skills but rather shared among them. Usually, such predicate functions can be easily validated for the robot arm states and the object states (e.g., positions and orientations).
Finally, a goal specification G is given as a propositional logic expression over the predicates B, i.e., via nested conjunction, disjunction and negation operators. In general, the goal specification G represents the desired configuration of the arm and the objects, assumed to be feasible. As an example, one specification could be “within(peg, cylinder) ∧ onTop(cylinder, box)”, i.e., “the peg should be inside the cylinder and the cylinder should be on top of the box”.
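For instance, such predicate functions could be validated on object poses roughly as in the following sketch (the object attributes pos, half_extents and closed, the tolerances, and the box-shaped geometry are hypothetical assumptions made only for this example):

```python
import numpy as np

def grasp(obj, gripper, tol=0.02):
    """True if the object sits within `tol` of the closed gripper."""
    return gripper.closed and np.linalg.norm(obj.pos - gripper.pos) < tol

def within(inner, outer):
    """True if the inner object's position lies inside the outer's bounds."""
    return bool(np.all(np.abs(inner.pos - outer.pos) < outer.half_extents))

def on_top(upper, lower, tol=0.01):
    """True if `upper` rests on `lower` (aligned in x/y, stacked in z)."""
    aligned = np.all(np.abs(upper.pos[:2] - lower.pos[:2])
                     < lower.half_extents[:2])
    stacked = abs(upper.pos[2] - (lower.pos[2] + lower.half_extents[2]
                                  + upper.half_extents[2])) < tol
    return bool(aligned and stacked)

# The example goal "within(peg, cylinder) AND onTop(cylinder, box)" then
# evaluates as a plain Boolean expression over the current object states:
def goal_satisfied(peg, cylinder, box):
    return within(peg, cylinder) and on_top(cylinder, box)
```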
In the following, the problem can be defined as follows: Given a set of demonstrations D for the skills A and the goal G, the objective is to learn a TP-HSMM model

M_θ = { {a_{kh}}_{h=1}^{K}, (μ_k^D, σ_k^D), π_k, {μ_k^{(p)}, Σ_k^{(p)}}_{p=1}^{P} }_{k=1}^{K}

for each demonstrated skill a_h and to reproduce the skills for a given final configuration G. For the reproduction of a skill, the sequence of states obtained through the Viterbi algorithm can be used.
The Planning Domain Definition Language (PDDL) is the standard classical planning language; its key ingredients are objects, predicates, an initial state, a goal specification, and actions. Formally, a PDDL model is given by

P = (P_D, P_p),

where the objects, the initial state and the goal specification define a problem instance P_p, while the predicates and the actions define the domain P_D. The PDDL model P includes a domain for the demonstrated skills and a problem instance given the goal specification G.
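By way of illustration, an abstracted skill may be rendered as a PDDL action with a small helper such as the following Python sketch (the helper name and the example action, parameters and predicate strings are hypothetical):

```python
def pddl_action(name, params, preconditions, effects):
    """Render one abstracted skill as a PDDL action string.

    preconditions / effects: lists of predicate strings such as
    "(graspable ?o)" or "(onTop ?o1 ?o2)".
    """
    return (
        f"(:action {name}\n"
        f"  :parameters ({' '.join(params)})\n"
        f"  :precondition (and {' '.join(preconditions)})\n"
        f"  :effect (and {' '.join(effects)}))"
    )

# Example usage with a hypothetical "pick" skill:
print(pddl_action("pick", ["?o"],
                  ["(graspable ?o)", "(handEmpty)"],
                  ["(grasp ?o)", "(not (handEmpty))"]))
```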
In the example embodiment described herein, motion planning is performed at the end-effector/gripper trajectory level. This means it is assumed that a low-level motion controller is used to track the desired trajectory.
The example method for planning a manipulation task is described in detail with respect to the flowchart of
Firstly, a TP-HSMM model M_θ is to be learned for each demonstrated skill a_h, and the skill is to be reproduced for a given final configuration G.
One demonstrated skill a_h ∈ A is considered. As described above, the set of available demonstrations, recorded in P frames, is given by D_{a_h}.
Given a properly chosen number of components K, which correspond to the TP-HSMM states, i.e., the Gaussian components representing the observation probability distributions, the TP-HSMM model M_{a_h} can be learned in step S1, e.g., by means of the Expectation-Maximization algorithm mentioned above.
A final goal configuration G is provided in step S2, which can be translated into the final state x_G ∈ ℝ³ × S³ × ℝ¹ of the end effector 22. This configuration can be imposed as the desired final observation of the reproduction, i.e., ξ_{T_m} = x_G.
The forward variable of the formula above allows computing the sequence of marginally most probable states, whereas here the jointly most probable sequence of states given the last observation ξ_{T_m} is sought.
To overcome this issue, in step S3 a modification of the Viterbi algorithm is used. Whereas the classical Viterbi algorithm has been extensively used to find the most likely sequence of states (also called the Viterbi path) in classical HMMs that result in a given sequence of observed events, the modified implementation differs in that: (a) it works on an HSMM instead of an HMM; and (b) most observations, except the first and the last ones, are missing.
Specifically, in the absence of observations the Viterbi algorithm becomes

δ_t(j) = max_{d ∈ D, i ≠ j} δ_{t−d}(i) a_{ij} p_j(d) ∏_{τ=t−d+1}^{t} b̃_j(ξ_τ),  δ_1(j) = b_j(ξ_1) π_j p_j(1),

where p_j(d) is the duration probability of state j, and where the modified emission probability b̃_j(ξ_τ) equals N(ξ_τ | μ̂_j, Σ̂_j) if τ = 1 or τ = T, and equals 1 otherwise. The Viterbi algorithm is thus modified to include missing observations, which is captured by the variable b̃_j. Moreover, the inclusion of the duration probabilities p_j(d) in the computation of δ_t(j) makes the algorithm work for HSMMs.
At each time t and for each state j, the two arguments that maximize δ_t(j) are recorded, and a simple backtracking procedure can then be used to find the most probable state sequence s*_T.
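A sketch of this modified Viterbi recursion is given below, assuming Gaussian duration models, the hypothetical TPHSMM container from above, and emission probabilities supplied only for the first and last observations (all names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def modified_viterbi(model, emis_first, emis_last, T, max_dur):
    """Modified Viterbi for an HSMM with missing observations.

    Only the emission probabilities of the first and the last data point
    are known (emis_first[j], emis_last[j]); every other emission is set
    to 1, and duration probabilities p_j(d) make the recursion
    semi-Markov. Returns the most likely state sequence s*_1 ... s*_T.
    """
    K = len(model.priors)
    log_trans = np.log(np.maximum(model.trans, 1e-300))
    log_delta = np.full((T, K), -np.inf)
    back = np.full((T, K, 2), -1, dtype=int)       # (previous time, state)

    def log_b(t, j):                               # missing observations
        if t == 0:
            return np.log(emis_first[j])
        return np.log(emis_last[j]) if t == T - 1 else 0.0

    log_dur = lambda j, d: norm.logpdf(d, model.dur_mu[j], model.dur_sigma[j])
    for j in range(K):                             # initialization
        log_delta[0, j] = np.log(model.priors[j]) + log_dur(j, 1) + log_b(0, j)
    for t in range(1, T):
        for j in range(K):
            for d in range(1, min(max_dur, t) + 1):
                log_obs = sum(log_b(tau, j) for tau in range(t - d + 1, t + 1))
                for i in range(K):
                    if i == j:
                        continue
                    cand = (log_delta[t - d, i] + log_trans[i, j]
                            + log_dur(j, d) + log_obs)
                    if cand > log_delta[t, j]:
                        log_delta[t, j] = cand
                        back[t, j] = (t - d, i)
    # backtracking: expand each (state, duration) block into time steps
    path, t, j = [], T - 1, int(np.argmax(log_delta[T - 1]))
    while t >= 0:
        t_prev, i = back[t, j]
        path[:0] = [j] * (t - t_prev)
        t, j = t_prev, i
    return path
```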
The above modified Viterbi algorithm provides the most likely state sequence for a single TP-HSMM model that produces the final observation ξ_T. As multiple skills are used, these models need to be sequenced, and δ_t(j) has to be computed for each individual model M_{a_h}.
As a next step, symbolic abstractions of the demonstrated skills allow the robot to understand the meaning of each skill on a symbolic level, instead of the data level of the HSMM. This may generalize a demonstrated and learned skill. Hence, the high-level reasoning of the herein described PDDL planner needs to understand how a skill can be incorporated into an action sequence in order to achieve a desired goal specification starting from an initial state. A PDDL model contains the problem instance P_p and the domain P_D.
While the problem instance P_p can be easily specified given the objects O, the initial state and the goal specification G, the key ingredient for symbolic abstraction is to construct the action descriptions in the domain P_D for each demonstrated skill, wherein P_D should be invariant to different task parameters.
a_h ∈ A is a symbolic representation of one demonstrated skill in PDDL form. The learned TP-HSMM M_{a_h} contains the TP-GMM

{π_k, {μ_k^{(p)}, Σ_k^{(p)}}_{p=1}^{P}}_{k=1}^{K}.

For each model, it is then possible to identify two sets containing the initial and the final states, denoted by I, F ⊆ {1, . . . , K}, respectively.
To construct the preconditions of a skill, the segments of the demonstrations that belong to any of the initial states are to be identified, in order to further derive the low-variance geometric relations which can be mapped into the set of predicates B. For each initial state i ∈ I, its corresponding component in frame p is given by (μ_i^{(p)}, Σ_i^{(p)}) for p = 1, . . . , P. These frames correspond to objects, i.e., skill a_h interacts with these objects. For each demonstration segment {ξ_t^{(p)}}_{t=1}^{T_i} belonging to initial state i, each predicate b_l ∈ B is instantiated as

b_l(o_1, . . . , o_P) = True, if p(b_l(o_1^t, . . . , o_P^t) = True) > η,

where 0 < η < 1 is a design parameter (probability threshold), and o_1^t, . . . , o_P^t are object states computed based on the recorded frame coordinates {b_t^{(p)}, A_t^{(p)}}_{p=1}^{P} and the object geometric dimensions. Denote by B_i the set of instantiated predicates that are True within state i, ∀ i ∈ I. As a result, the overall precondition of skill a_h is given by the disjunction of the conjunction of these predicates over the initial states, i.e.,

pre_{a_h} = ∨_{i∈I} ( ∧_{b∈B_i} b ),
where ∨ and ∧ are the disjunction and conjunction operations. Similarly, to construct the effect of a skill, the procedure described above can be applied to the set of final states F. In particular, for each final state f ∈ F, the set of instantiated predicates that are True within f is denoted by B_f. However, in contrast to the precondition, the effect cannot contain a disjunction of predicates. Consequently, the effect of skill a_h is given by

eff_{a_h} = ∧_{b ∈ ∩_{f∈F} B_f} b,

i.e., as the conjunction of the invariant predicates common to all of the final states. Based on the above elements, the PDDL model P can be generated in an automated way. More importantly, the domain P_D can be constructed incrementally whenever a new skill is demonstrated and its descriptions are abstracted as above. On the other hand, the problem instance P_p needs to be re-constructed whenever a new initial state or goal specification is given.
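This thresholded abstraction of preconditions and effects may be sketched as follows (a minimal sketch; the segment and predicate representations are hypothetical assumptions for illustration):

```python
from collections import Counter

def abstract_predicates(segments, predicates, eta=0.9):
    """Instantiate the predicates that hold with probability > eta over
    the demonstration segments belonging to one initial/final state.

    segments:   list of object-state snapshots (one per demonstration).
    predicates: dict name -> Boolean function over an object-state snapshot.
    """
    counts = Counter(
        name for snapshot in segments
        for name, pred in predicates.items() if pred(snapshot)
    )
    return {name for name, c in counts.items() if c / len(segments) > eta}

def precondition(initial_state_segments, predicates):
    """Disjunction over initial states of the conjunction of their
    instantiated predicates (returned as a list of predicate sets)."""
    return [abstract_predicates(segs, predicates)
            for segs in initial_state_segments]

def effect(final_state_segments, predicates):
    """Conjunction of the invariant predicates common to all final states
    (set intersection, since the effect may not contain disjunctions)."""
    sets = [abstract_predicates(segs, predicates)
            for segs in final_state_segments]
    return set.intersection(*sets) if sets else set()
```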
In the following, reference is made to the planning and sequencing of the trained and abstracted skills. The PDDL definition P has been constructed and can be directly fed into any compatible PDDL planner. Different optimization techniques can be enforced during the planning, e.g., minimizing the total length of the plan or the total cost. Denote by a*_D = a*_1 a*_2 . . . a*_D the generated optimal sequence of skills, where a*_d ∈ A holds for each skill. Moreover, denote by M_{a*_d} the TP-HSMM model associated with skill a*_d.
Given this sequence a*_D, each skill within a*_D is reproduced at the end-effector trajectory level, so as to maximize the probability of satisfying the given goal G.
The learned TP-HSMM encapsulates a general skill that might have several plausible paths and the choice relies heavily on the desired initial and final configurations. To avoid incompatible transitions from one skill to the next, a compatibility measure shall be embedded while concatenating the skills within a*D. Particularly, the proposed solution contains three main steps:
Since the transition from one skill to another is never demonstrated, such transition probabilities are computed from the divergence of the emission probabilities between the sets of final and initial states. Particularly, consider two consecutive skills a*_d and a*_{d+1} in a*_D. The transition probability from one final state f of M_{a*_d} to one initial state i of M_{a*_{d+1}} is computed as

a_{f,i} ∝ exp(−α Σ_{p∈P_c} KL(N(μ_f^{(p)}, Σ_f^{(p)}) ∥ N(μ_i^{(p)}, Σ_i^{(p)}))),
where KL(⋅∥⋅) is the KL divergence (Kullback-Leibler divergence; see Kullback, S., and Leibler, R. A., “On Information and Sufficiency,” Ann. Math. Statist. 22 (1951), no. 1, pages 79-86, doi:10.1214/aoms/1177729694), P_c is the set of common frames between these two skills, and α ≥ 0 is a design parameter. The outgoing probabilities of any final state are normalized. This process is repeated for all pairs of final and initial states between consecutive skills in a*_D. In this way, one complete model M̂_{a*_D} is created for the skill sequence a*_D.
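This computation may be sketched as follows; the closed-form Gaussian KL divergence is standard, while the component objects with per-frame mu/sigma attributes are a hypothetical representation chosen for this example:

```python
import numpy as np

def kl_gauss(mu0, sig0, mu1, sig1):
    """Closed-form KL divergence KL(N(mu0, sig0) || N(mu1, sig1))."""
    d = len(mu0)
    inv1 = np.linalg.inv(sig1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ sig0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(sig1) / np.linalg.det(sig0)))

def transition_probs(final_comp, initial_comps, common_frames, alpha=1.0):
    """Transition weights from one final state of skill a*_d to the
    initial states of skill a*_{d+1}, from the divergence of the emission
    probabilities over the common frames, normalized per final state."""
    weights = []
    for init_comp in initial_comps:
        div = sum(kl_gauss(final_comp.mu[p], final_comp.sigma[p],
                           init_comp.mu[p], init_comp.sigma[p])
                  for p in common_frames)
        weights.append(np.exp(-alpha * div))
    w = np.asarray(weights)
    return w / w.sum()            # normalized outgoing probabilities
```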
The most likely sequence s*_T of states is then retrieved from this complete model using the modified Viterbi algorithm described above, and it is tracked by a controller that follows the minimum-length path between the consecutive Gaussian means μ̂_{s_t} in the tangent space of the underlying manifold. The covariance matrices Σ̂_{s_t} specify how precisely the reference trajectory has to be tracked in each direction at each time step.
Under the above assumptions, the control objective in the tangent space can be formulated as the minimization of a quadratic tracking cost of the form

c(u) = Σ_{t=1}^{T} (ξ_t − μ̂_{s_t})^T (Σ̂_{s_t})^{−1} (ξ_t − μ̂_{s_t}) + u_t^T R u_t,

where R is a positive-definite control weight, i.e., deviations from the reference states are penalized by the inverses of the learned covariances.
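A minimal sketch of such a tracking controller, assuming linear dynamics (A, B), stepwise references μ̂_{s_t} weighted by Q_t = (Σ̂_{s_t})^{−1}, and a control weight R, could compute time-varying feedback gains by a backward Riccati recursion (feedforward terms for the moving reference are omitted for brevity; all names are hypothetical):

```python
import numpy as np

def lqt_gains(A, B, sigma_refs, R):
    """Discrete-time LQ tracking: a backward Riccati recursion yields
    time-varying feedback gains K_t for the error e_t = x_t - mu_ref[t],
    with state weights Q_t = inv(sigma_refs[t])."""
    T = len(sigma_refs)
    P = np.linalg.inv(sigma_refs[-1])          # terminal cost weight
    gains = [None] * T
    for t in reversed(range(T)):
        Q = np.linalg.inv(sigma_refs[t])
        K = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
        gains[t] = K
        P = Q + A.T @ P @ (A - B @ K)
    return gains                               # u_t = -K_t (x_t - mu_ref[t])
```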
Finally, in step S4 the concatenated sequence of manipulation skills is executed.