DEVICE AND METHOD FOR CONTROLLING A ROBOT DEVICE

Information

  • Patent Application
  • Publication Number: 20230141855
  • Date Filed: October 18, 2022
  • Date Published: May 11, 2023
Abstract
A method for controlling a robot device. The method includes providing a selection model and executing multiple instances of a task, including, in each execution, when a function of the robot device needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function and, if yes, controlling the robot device to perform the function selected by the selection model and if no, receiving user input indicating a selection of a function, selecting a function according to the selection indicated by the user input, controlling the robot device to perform the function selected according to the selection indicated by the user input and training the selection model according to the selection indicated by the user input.
Description
CROSS REFERENCE

The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2021 212 494.1 filed on Nov. 5, 2021, which is expressly incorporated herein by reference in its entirety.


FIELD

The present disclosure relates to devices and methods for controlling a robot device.


BACKGROUND INFORMATION

Robotic skills may be programmed through learning-from-demonstration (LfD) approaches, where a nominal plan of a skill is learned by a robot from demonstrations. The main idea of LfD is to parameterize Gaussians by the pose of a camera monitoring the robot's workspace and of a target object to be handled by the robot. Learning from demonstration provides a fast, intuitive and efficient framework to program robot skills, e.g. for industrial applications. However, instead of a single motion, complex manipulation tasks often contain multiple branches of skill sequences that share some common skills. A planning process is therefore needed which generates the right sequence of skills and their parameters under different scenarios. For instance, a bin-picking task involves picking an object from the bin (depending on where it is located in the bin), clearing it from the corners if needed, re-orienting it to reveal its barcode, and showing the barcode to a scanner. Choosing the correct skill sequence is essential for flexible robotic systems across various applications. Such transitions among the skills and the associated conditions are often difficult and tedious to specify manually.


Therefore, reliable approaches for selecting the correct sequence of skill primitives and the correct parameters for each skill primitive under various scenarios are desirable.


SUMMARY

According to various embodiments of the present invention, a method for controlling a robot device is provided comprising providing, for each function of a plurality of functions, a control model for controlling the robot device to perform the function, providing a selection model for selecting among the plurality of functions and executing multiple instances of a task by the robot device, comprising, in each execution, when a function of the plurality of functions needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function and, if the selection model provides a selection of a function, controlling the robot device to perform the function selected by the selection model using the control model for the selected function and if the selection model does not provide a selection of a function, receiving user input indicating a selection of a function, selecting a function according to the selection indicated by the user input, controlling the robot device to perform the function selected according to the selection indicated by the user input using the control model for the selected function and training the selection model according to the selection indicated by the user input.


Thus, the method described above allows training a selection model on the fly, during control of a robot device, from user (i.e. human) input. The provided selection model may be untrained or only pre-trained, such that, at least for some configurations of the robot device (or of the controlled system, e.g. including configurations of the environment such as objects), the selection model does not provide a selection of a function. During the execution of task instances, the selection model becomes more and more reliable, such that in the end the robot device can perform complex manipulation tasks comprising sequences of multiple skills and/or skill branches.


In the following, various examples are given.


Example 1 is a method for controlling a robot as described above.


Example 2 is the method of Example 1, wherein the selection model outputs indications of confidences for selections of functions of the plurality of functions and wherein training the selection model according to the selection indicated by the user input comprises adjusting the selection model to increase a confidence output by the selection model to select the function selected according to the selection indicated by the user input.


Thus, the robot device becomes more and more confident in the selections for which it has received user input, until it can behave autonomously.


Example 3 is the method of Example 1 or 2, wherein the selection model outputs indications of confidences for selections of functions of the plurality of functions and checking whether the selection model provides a selection of a function comprises checking whether the selection model outputs an indication of a confidence for a selection of the function which is above a predetermined lower confidence bound.


The selection model is thus trained such that it gets more and more certain about the selection of functions until it has achieved a sufficient confidence for certain selections (e.g. for skill branch selections in certain states). Then, user input is no longer necessary (and e.g. no longer requested). Thus, the effort for the user diminishes over time and the robot device can eventually perform the task autonomously.


On the other hand, in situations that have not yet been encountered (and thus, confidence is low), user input is used as a basis for the selection. Thus, wrong decisions which might, for example, lead to damages to the robot device or handled objects, can be avoided.


Example 4 is the method of any one of Examples 1 to 3, wherein the functions include skills and branches of skills and the selection model is trained to provide, for sets of alternative skills, a selection of a skill and, for sets of alternative branches of skills, a selection of a branch.


Thus, a hierarchical approach is used wherein selections of skills are performed and, for a selected skill, a selection of a branch. The selection model may thus include an edge selector (to select among skills, e.g. in a task network) and a branch selector (to select among branches of skills). This makes the selection more understandable and thus intuitive for the human user, reducing user effort and errors in the selection.


Example 5 is the method of any one of Examples 1 to 4, wherein providing the control model for each function comprises performing demonstrations of the function and training the control model using the demonstrations.


In other words, the functions (e.g. primitive skills) are trained by learning from demonstrations. This provides an efficient way to learn primitive skills.


Example 6 is the method of any one of Examples 1 to 5, wherein the selection model is a logistic regression model.


This allows reliable, fast training (and re-training) from low amounts of data.


Example 7 is the method of any one of Examples 1 to 6, comprising, if the selection model does not provide a selection of a function, pausing operation of the robot device until user input indicating a selection of a function has been received.


The robot device thus operates until its controller can no longer decide with which function to proceed and then pauses until the user guides it. Thus, wrong operation which may lead to damages can be avoided. Moreover, the robot device pausing indicates to the user that user input is needed.


Example 8 is a robot controller, configured to perform a method of any one of Examples 1 to 7.


Example 9 is a computer program comprising instructions which, when executed by a computer, makes the computer perform a method according to any one of Examples 1 to 7.


Example 10 is a computer-readable medium comprising instructions which, when executed by a computer, makes the computer perform a method according to any one of Examples 1 to 7.


In the figures, similar reference characters generally refer to the same parts throughout the different views. The figures are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the present invention. In the following description, various aspects are described with reference to the figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a robot according to an example embodiment of the present invention.



FIG. 2 illustrates robot control for a task goal according to an example embodiment of the present invention.



FIG. 3 illustrates the determination of feature vectors for an edge selector and a branch selector, according to an example embodiment of the present invention.



FIG. 4 shows a flow diagram illustrating a method for controlling a robot device, according to an example embodiment of the present invention.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

The following detailed description refers to the figures that show, by way of illustration, specific details and aspects of this disclosure in which the present invention may be practiced. Other aspects may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The various aspects of this disclosure are not necessarily mutually exclusive, as some aspects of this disclosure can be combined with one or more other aspects of this disclosure to form new aspects.


In the following, various examples will be described in more detail.



FIG. 1 shows a robot 100.


The robot 100 includes a robot arm 101, for example an industrial robot arm for handling or assembling a work piece (or one or more other objects). The robot arm 101 includes manipulators 102, 103, 104 and a base (or support) 105 by which the manipulators 102, 103, 104 are supported. The term “manipulator” refers to the movable members of the robot arm 101, the actuation of which enables physical interaction with the environment, e.g. to carry out a task. For control, the robot 100 includes a (robot) controller 106 configured to implement the interaction with the environment according to a control program. The last member 104 (furthest from the support 105) of the manipulators 102, 103, 104 is also referred to as the end-effector 104 and may include one or more tools such as a welding torch, gripping instrument, painting equipment, or the like.


The other manipulators 102, 103 (closer to the support 105) may form a positioning device such that, together with the end-effector 104, the robot arm 101 is provided with the end-effector 104 at its end. The robot arm 101 is a mechanical arm that can provide functions similar to those of a human arm (possibly with a tool at its end).


The robot arm 101 may include joint elements 107, 108, 109 interconnecting the manipulators 102, 103, 104 with each other and with the support 105. A joint element 107, 108, 109 may have one or more joints, each of which may provide rotatable motion (i.e. rotational motion) and/or translatory motion (i.e. displacement) to associated manipulators relative to each other. The movement of the manipulators 102, 103, 104 may be initiated by means of actuators controlled by the controller 106.


The term “actuator” may be understood as a component adapted to affect a mechanism or process in response to being driven. The actuator can translate instructions issued by the controller 106 (the so-called activation) into mechanical movements. The actuator, e.g. an electromechanical converter, may be configured to convert electrical energy into mechanical energy in response to driving.


The term “controller” may be understood as any type of logic implementing entity, which may include, for example, a circuit and/or a processor capable of executing software stored in a storage medium, firmware, or a combination thereof, and which can issue instructions, e.g. to an actuator in the present example. The controller may be configured, for example, by program code (e.g., software) to control the operation of a system, a robot in the present example.


In the present example, the controller 106 includes one or more processors 110 and a memory 111 storing code and data based on which the processor 110 controls the robot arm 101. According to various embodiments, the controller 106 controls the robot arm 101 on the basis of a machine learning model 112 stored in the memory 111. The machine learning model 112 includes control models for skills and skill branches as well as a selection model to select among skills and skill branches.


A robot 100 can take advantage of learning-from-demonstration (LfD) approaches to learn to execute a skill or collaborate with a human partner. Human demonstrations can be encoded by a probabilistic model (also referred to as control model). The controller 106 can subsequently use the control model, which is also referred to as robot trajectory model, to generate the desired robot movements, possibly as a function of the state (configuration) of the human partner, the robot and the robot's environment.


The basic idea of LfD is to fit a prescribed skill model such as GMMs to a handful of demonstrations. Let there be M demonstrations, each of which contains T_m data points, for a dataset of N = Σ_m T_m total observations ξ = {ξ_t}_{t=1}^N, where ξ_t ∈ ℝ^d. Also, it is assumed that the same demonstrations are recorded from the perspective of P different coordinate systems (given by the task parameters such as local coordinate systems or frames of objects of interest). One common way to obtain such data is to transform the demonstrations from a static global frame to frame p by ξ_t^(p) = (A^(p))^{-1}(ξ_t − b^(p)). Here, {(b^(p), A^(p))}_{p=1}^P is the translation and rotation of (local) frame p w.r.t. the world (i.e. global) frame. Then, a TP-GMM is described by the model parameters {π_k, {μ_k^(p), Σ_k^(p)}_{p=1}^P}_{k=1}^K, where K represents the number of Gaussian components in the mixture model, π_k is the prior probability of each component, and {μ_k^(p), Σ_k^(p)}_{p=1}^P are the parameters (mean and covariance) of the k-th Gaussian component within frame p.


Differently from standard GMM, the mixture model above cannot be learned independently for each frame. Indeed, the mixing coefficients πk are shared by all frames and the k-th component in frame p must map to the corresponding k-th component in the global frame. Expectation-Maximization (EM) is a well-established method to learn such models.
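As an illustration of the frame transformation ξ_t^(p) = (A^(p))^{-1}(ξ_t − b^(p)) described above, the following Python sketch (a hypothetical helper, not part of the application) projects globally recorded demonstration points into each local task frame before per-frame Gaussians are fitted:

import numpy as np

def to_local_frames(xi, frames):
    """Transform demonstration points xi (N x d) from the global frame
    into each local task frame p via xi_p = A_p^{-1} (xi - b_p)."""
    locals_ = []
    for b_p, A_p in frames:  # b_p: translation (d,), A_p: rotation (d, d)
        locals_.append((np.linalg.inv(A_p) @ (xi - b_p).T).T)
    return locals_  # list of P arrays, each N x d

# Example: two 2-D frames, the second translated and rotated by 90 degrees.
xi = np.array([[1.0, 0.0], [2.0, 1.0]])
frames = [(np.zeros(2), np.eye(2)),
          (np.array([1.0, 0.0]), np.array([[0.0, -1.0], [1.0, 0.0]]))]
print(to_local_frames(xi, frames))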


Once learned, the TP-GMM can be used during execution to reproduce a trajectory for the learned skill.


Hidden semi-Markov Models (HSMMs) extend standard hidden Markov Models (HMMs) by embedding temporal information of the underlying stochastic process. That is, while in an HMM the underlying hidden process is assumed to be Markov, i.e., the probability of transitioning to the next state depends only on the current state, in an HSMM the state process is assumed to be semi-Markov. This means that a transition to the next state depends on the current state as well as on the elapsed time since the state was entered. HSMMs can be applied, in combination with TP-GMMs, for robot skill encoding to learn spatio-temporal features of the demonstrations, resulting in a task-parameterized HSMM (TP-HSMM) model, which is defined as:





Θ = {{a_hk}_{h=1}^K, (μ_k^D, σ_k^D), π_k, {(μ_k^(p), Σ_k^(p))}_{p=1}^P}_{k=1}^K,  (1)


where a_hk is the transition probability from state h to k; (μ_k^D, σ_k^D) describe the Gaussian distribution for the duration of state k, i.e., the probability of staying in state k for a certain number of consecutive steps; {π_k, {μ_k^(p), Σ_k^(p)}_{p=1}^P}_{k=1}^K equal the TP-GMM introduced earlier, representing the observation probability corresponding to state k. Note that herein the number of states corresponds to the number of Gaussian components in the “attached” TP-GMM.
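For concreteness, the parameter set Θ of equation (1) could be collected in a container like the following sketch (a hypothetical Python structure; the application does not prescribe an implementation):

import numpy as np
from dataclasses import dataclass

@dataclass
class TPHSMM:
    """Parameters of a TP-HSMM with K states and P task frames, per eq. (1)."""
    trans: np.ndarray      # (K, K) transition probabilities a_hk
    dur_mu: np.ndarray     # (K,) duration means mu_k^D
    dur_sigma: np.ndarray  # (K,) duration std devs sigma_k^D
    priors: np.ndarray     # (K,) mixing coefficients pi_k
    mu: np.ndarray         # (P, K, d) per-frame component means mu_k^(p)
    sigma: np.ndarray      # (P, K, d, d) per-frame covariances Sigma_k^(p)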


Consider now a multi-DoF (degree of freedom) robotic arm 101 within a static and known workspace, of which the end-effector 104 has a state r, such as its 6-D pose and gripper state. Also, there are multiple objects of interest 113, denoted by O = {o_1, . . . , o_J}, each with a state such as its 6-D pose.


It is assumed that there is a set of primitive skills that enable the robot to manipulate these objects, denoted by A = {a_1, a_2, . . . , a_H}. For each skill, a human user performs several kinaesthetic demonstrations on the robot. Particularly, for skill a ∈ A, the set of objects involved is given by O_a ⊆ O and the set of demonstrations is given by D_a = {D_1, . . . , D_{M_a}}, where each demonstration D_m is a timed sequence of states consisting of the end-effector state r and the object states {p_o, o ∈ O_a}, i.e.

D_m = [s_t]_{t=1}^{T_m} = [(r_t, {p_{t,o}, o ∈ O_a})]_{t=1}^{T_m}.


Thus, for each (primitive) skill a ∈ A, a TP-HSMM Θ_a (i.e. a control model) is learned as in equation (1).


Via a combination of these skills, the objects 113 can be manipulated by the robot arm 101 to reach different states. It is desirable that the robot 100 (specifically the controller 106) is trained for a generic manipulation task, i.e. it should be able to perform many different instances of a generic task. Each task instance is specified by an initial state s_0 and a set of (at least one) desired goal states s_G. A task (instance) is solved when the system state (also referred to as configuration) is changed from s_0 to s_G.


So, given a new task (s_0, s_G), the controller 106 should determine (i) the discrete sequence of (primitive) skills and (ii) the continuous robot trajectory to execute each skill. Here, a task may be an instance of a complex manipulation task where the sequence of desired skills and the associated trajectories depend significantly on the scenario of the task instance.



FIG. 2 illustrates robot control for a task goal 201 according to an embodiment.


According to various embodiments, the controller implements (extended) primitive skill models 202 and a GTN (geometric task network) which are trained interactively during online execution by human inputs.


Primitive Skill Learning

As illustrated for the skills in the skill models 202, there are often multiple ways of executing the same skill under different scenarios (called branches). For instance, there are five different ways of picking objects from a bin, i.e., approaching with different angles depending on the distances to each boundary. To handle the branching, the controller implements, for each (primitive) skill, a branch selector 207 as an extension to the TP-HSMM model Θa for the skill.


The controller 106 trains the branch selector 207 from demonstrations 205 and online instructions 204 to choose a branch 206 for each skill 202 to be performed for achieving the task goal 201. A branch 206 is, in other words, a variant of the skill. For example, the skill may be to pick an object from a bin, and branches may be to pick the object from the left or from the right, depending on where the object is located in the bin. For example, if it is positioned with its right side near a bin wall, the branch selector 207 selects the branch to pick the object from the left.


Consider a skill primitive a with M demonstrations (from the demonstrations 205 provided for all skills) and B different branches. Each execution trajectory or demonstration of the skill is denoted by J_m = [s_t]_{t=1}^{T_m} and is associated with exactly one branch b_m ∈ B_a = {1, . . . , B}. Let J_a denote the set of such trajectories, initialized to be the set of demonstrations D_a (and supplemented during operation by online instructions 204). The frames associated with J_m are computed from the initial state s_0 by abstracting the coordinates of the robot arm 101 and of the objects 113, denoted by (F_0, F_1, . . . , F_P), where F_p = (b_p, A_p) is the coordinate of frame p; their order can be freely chosen but is fixed afterwards. Then, the controller 106 derives a feature vector

v_m = (F_01, F_12, . . . , F_(P-1)P),  (2)

where F_ij = (b_ij, α_ij) ∈ ℝ^6 is the relative transformation from frame F_i to frame F_j, b_ij ∈ ℝ^3 is the relative translation and α_ij ∈ ℝ^3 is the relative orientation. Thus, given J_a, the controller 106 can construct the training data for the branch selector 207:
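A minimal sketch of the feature construction in equation (2), assuming each frame is given as a (translation, rotation matrix) pair and taking the 3-D orientation part α_ij as a rotation vector (the text does not fix this parameterization):

import numpy as np
from scipy.spatial.transform import Rotation

def branch_feature(frames):
    """Build v_m = (F_01, F_12, ..., F_(P-1)P): for each consecutive frame
    pair, stack relative translation b_ij (R^3) and orientation alpha_ij (R^3)."""
    feats = []
    for (b_i, A_i), (b_j, A_j) in zip(frames[:-1], frames[1:]):
        b_ij = A_i.T @ (b_j - b_i)                                # relative translation
        alpha_ij = Rotation.from_matrix(A_i.T @ A_j).as_rotvec()  # relative orientation
        feats.extend([b_ij, alpha_ij])
    return np.concatenate(feats)  # dimension 6 * P for P + 1 frames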





τ_a^B = {(v_m, b_m), ∀ J_m ∈ J_a},


where b_m is the branch label of trajectory J_m and v_m is the associated feature vector. The controller 106 can then train the branch selector 207, denoted by 𝒮_a^B, via any multinomial classification algorithm. For example, logistic regression under the “one-vs-rest” strategy yields an effective selector from few training samples. Given a new scenario with state s_t, the controller 106 chooses branch b with the probability:





ρ_b = 𝒮_a^B(s_t, b), ∀ b ∈ B_a,


where ρ_b ∈ [0, 1]. Since most skills contain two or three frames, the feature vector v_m normally has dimension 6 or 12.
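The one-vs-rest logistic regression mentioned above could be realized roughly as in this sketch (scikit-learn based; the toy data is purely illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Toy training set tau_a^B = {(v_m, b_m)}: 6-D feature vectors and branch labels.
V = np.array([[0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.9, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.2, 0.1, 0.0, 0.0, 0.0, 0.0],
              [0.8, 0.1, 0.0, 0.0, 0.0, 0.0]])
b = np.array([0, 1, 0, 1])  # branch labels b_m

# One-vs-rest logistic regression as the multinomial branch selector.
selector = OneVsRestClassifier(LogisticRegression()).fit(V, b)

# rho_b for a new scenario: per-branch probabilities from the selector.
rho = selector.predict_proba([[0.7, 0.05, 0.0, 0.0, 0.0, 0.0]])[0]
print(rho)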


Task Network Construction

As mentioned above, complex manipulation tasks often contain various sequences of skills 202 to account for different scenarios. For example, if a bar-code to be scanned is at the top of an object, the robot 100 needs to turn the object (i.e. execute a turn skill) before picking up the object and showing it to a camera (i.e. executing a show skill). This may not be required if the bar-code is already at the bottom. A high-level abstraction of such relations between skills is referred to as a task network. A valid plan evolves by transition from one skill 202 to another until the task is solved. The conditions on these transitions are particularly difficult and tedious to specify manually. Therefore, according to various embodiments, the controller 106 uses a coordination structure referred to as a geometric task network (GTN) 203, where the conditions are learned from task executions.


A GTN 203 has a structure defined by a triple 𝒢 = (V, E, f). The set of nodes V is a subset of the primitive skills A; the set of edges E ⊆ V × V contains the allowed transitions from one skill to another; the function f: V → 𝒮 maps each node to an edge selector w.r.t. all of its outgoing edges. Intuitively, (V, E) specifies how skills can be executed consecutively for the given task, while the function f(v) models the different geometric constraints among the objects and the robot for the outgoing edges of node v. It should be noted that f(⋅) is explicitly conditioned on both the current system state and the goal state.


A complete plan of a task is given by the following sequence:





ξ = a̲ s_0 a_0 s_1 a_1 s_2 . . . s_G ā,


where a̲ and ā are virtual “start” and “stop” skills, respectively. For different initial and goal states of the task instances, the resulting plans can be different. Let Ξ = {ξ} denote the set of complete plans for a set of given task instances. Then, for each “action-state-action” triple (a_n, s_{n+1}, a_{n+1}) within ξ, first, the pair (a_n, a_{n+1}) is added to an edge set Ê if not already present; second, for each unique skill transition (a_n, a_{n+1}), a set of augmented states is collected, denoted by ŝ_{a_n a_{n+1}} = {ŝ}, where ŝ = (s_{n+1}, s_G). The controller 106 trains an edge selector 208 to select edges and thus skills 202 to be performed. For this, for each augmented state ŝ = (s_t, s_G) ∈ ŝ_{a_n a_{n+1}}, the controller 106 derives the following feature vector:

h_l = (h_t^G, v_G),  (3)

where h_t^G = (H_r, H_{o_1}, . . . , H_{o_H}), with H_o = (b_o, α_o) ∈ ℝ^6 being the relative translation and rotation of the robot r and of the objects o_1, o_2, . . . , o_H ∈ O_{a_n} from the current system state s_t to the goal state s_G; v_G is the feature vector defined in equation (2) associated with the goal state s_G. It should be noted that h_l encapsulates features from both the relative transformation to the goal and the goal state itself. Its dimension is linear in the number of objects relevant to skill a_n, as shown in FIG. 3.
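Under the same representation assumptions, the edge-selector feature of equation (3) could be assembled as follows, reusing branch_feature from the sketch after equation (2); the rotation handling is simplified to rotation-vector differences, which is an assumption, not the application's prescription:

import numpy as np

def edge_feature(state_t, state_G, goal_frames):
    """Build h_l = (h_t^G, v_G): relative translation/rotation of robot and
    objects from the current state to the goal, concatenated with the
    goal-state feature v_G."""
    h_tG = []
    for (b_t, alpha_t), (b_G, alpha_G) in zip(state_t, state_G):
        # H_r, H_o in R^6; orientation difference simplified as rotvec subtraction
        h_tG.append(np.concatenate([b_G - b_t, alpha_G - alpha_t]))
    return np.concatenate(h_tG + [branch_feature(goal_frames)])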



FIG. 3 illustrates the determination of the feature vectors v_m for the branch selector 207 and h_l for the edge selector 208, given skill frames F_p.


Once the controller 106 has processed all plans within Ξ, it can construct the GTN 203, i.e. 𝒢, as follows. First, its nodes and edges are directly derived from Ê. Then, for each node a, the set of its outgoing edges in Ê is given by Ê_a = {(a, a′) ∈ Ê}. Thus, the function f(a) returns the edge selector 208, i.e. 𝒮_a^E, over Ê_a. To compute this selector, the following training data is first constructed:





τ_a^E = {(h_l, e), ∀ h_l ∈ ŝ_e, ∀ e ∈ Ê_a},


where e is the label for an edge e = (a, a′) ∈ Ê_a and h_l is the feature vector derived from equation (3). Then the edge selector 𝒮_a^E can be learned via a multinomial classification algorithm. Similar to the branch selector 207, given a new scenario with state s_t and the specified goal state s_G, the controller 106 then chooses an edge e with a probability of





ρ_e = 𝒮_a^E((s_t, s_G), e), ∀ e ∈ Ê_a,


where ρ_e ∈ [0, 1]. It should be noted that ρ_e is trivial for skills with only one outgoing edge (i.e. with only one possibility for the subsequent skill).
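As detailed in the following section, the controller commits to an edge (or branch) only when its predicted probability exceeds a lower confidence bound, and otherwise requests human input. A minimal sketch of this gate, for a selector with a scikit-learn-style predict_proba (names hypothetical):

import numpy as np

RHO_BAR = 0.8  # lower confidence bound (a design parameter)

def gated_choice(selector, feature, labels, rho_bar=RHO_BAR):
    """Return the most probable label if its probability exceeds rho_bar,
    else None to signal that human input is required."""
    rho = selector.predict_proba([feature])[0]
    best = int(np.argmax(rho))
    return labels[best] if rho[best] > rho_bar else None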


In the previous two sections, the approaches to learn an extended skill model (involving a branch selector 207) and the task network 203 (involving edge selectors 208) were described. The required training data are execution trajectories of the skills and complete plans of the task. According to various embodiments, the controller 106 generates training data for the branch selectors 207 and the edge selectors 208 from human instructions 204 provided during run time. This allows improving both the skill model and the task network on-the-fly.


The GTN 𝒢 is initialized as empty (i.e. it is initially untrained). Consider a problem instance of the task, namely (s_0, s_G). The system starts from state s_n, whereas the GTN 203 starts from the virtual start node a_n = a̲, for n = 0. Then the associated edge selector 𝒮_{a_n}^E is used to compute the probability ρ_e of each outgoing edge e ∈ Ê_{a_n}. Then, the next skill to execute is chosen as:

a_{n+1} = argmax_{e ∈ Ê_{a_n}} {ρ_e(s_n, s_G), where ρ_e > ρ̄^E},

where ρ̄^E > 0 is a design parameter serving as the lower confidence bound. It should be noted that if the current set of outgoing edges is empty, i.e., Ê_{a_n} = ∅, or the maximal probability of all edges is less than ρ̄^E, the controller 106 asks the human operator to input the preferred next skill a*_{n+1} (as online instruction 204), e.g. by pausing execution, i.e. the robot 100 waits until the human operator inputs the online instruction, e.g. guides the robot arm 101 or inputs a skill number. Consequently, the controller 106 adds an additional data point to the training data τ_{a_n}^E, i.e.,





τ_{a_n}^E ← (h(s_n, s_G), (a_n, a*_{n+1})),  (4)


where the feature vector h is computed according to equation (3). Thus, a new edge (a_n, a*_{n+1}) is added to the graph topology (V, E) if not present, and the embedded function f(⋅) is updated by re-learning the edge selector 𝒮_{a_n}^E given this new data point.
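A rough sketch of this on-the-fly update, assuming the selector is simply refit on the grown training set (logistic regression is cheap to retrain on small data sets); the function names are hypothetical:

import numpy as np

def update_selector(selector, tau, new_feature, new_label):
    """Append the human-provided data point to tau and re-learn the
    selector, as in equation (4) (or equation (5) for branches)."""
    tau.append((new_feature, new_label))
    X = np.array([f for f, _ in tau])
    y = np.array([l for _, l in tau])
    return selector.fit(X, y), tau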


Regarding the execution and update of the branch selectors 207, let a_{n+1} be chosen as the next skill (according to the edge selector 208). Then the controller 106 uses the branch selector 𝒮_{a_{n+1}}^B to predict the probability ρ_b of each branch, ∀ b ∈ B_{a_{n+1}}. It then chooses the most likely branch for a_{n+1} by

b_{n+1} = argmax_{b ∈ B_{a_{n+1}}} {ρ_b(s_n), where ρ_b > ρ̄^B},
where ρ̄^B > 0 is another design parameter serving as the lower confidence bound for the branch selection. Again, as for the edge selection, if the controller 106 cannot find a branch in this manner, it asks the human operator to input the preferred branch b*_{n+1} for skill a_{n+1}, e.g. by guiding the robot arm 101 or inputting a branch number. In that case, the controller 106 adds an additional data point to the training data τ_{a_{n+1}}^B, i.e.,





τ_{a_{n+1}}^B ← (v(s_n), b*_{n+1}),  (5)


where the feature vector v is computed according to equation (2).


Once the controller 106 has selected a branch b* for the desired next skill a*_{n+1}, the controller 106 can retrieve the associated trajectory using the skill model Θ_{a_{n+1}}. The retrieval process consists of two steps: First, the most-likely sequence of GMM components within the desired branch (denoted by K_T*) is computed via a modified Viterbi algorithm. Then, a reference trajectory 209 is generated by an optimal controller 210 (e.g. an LQG controller) to track this sequence of Gaussians in the task space. This reference trajectory 209 is then sent to the low-level impedance controller to compute the control signal u*.
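The application uses a modified Viterbi algorithm for this step; purely as a point of reference, a standard Viterbi decoding over the K Gaussian components (ignoring the duration model and the branch restriction of the modified variant) can be sketched as:

import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely component sequence. log_pi: (K,) initial log-probs,
    log_A: (K, K) transition log-probs, log_B: (T, K) per-step emission
    log-likelihoods of the observations under each Gaussian component."""
    T, K = log_B.shape
    delta = log_pi + log_B[0]          # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)  # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (K, K): from-state x to-state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_B[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):       # trace the best path backwards
        path.append(int(back[t][path[-1]]))
    return path[::-1]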


Afterwards, the system state changes to s_{n+1}, with different poses of the robot arm 101 and of the objects 113, obtained from the state estimation and perception modules (such as a camera) providing observations 211. Given this new state, the controller 106 repeats the process to choose the next skill and its optimal branch, until the goal state 201 is reached.


In the following, an exemplary overall algorithm is given in pseudo code (using the usual English keywords like “while”, “do”, etc.).

Input: {D_a, ∀ a ∈ A}, human inputs {a_n*, b_n*}.
Output: 𝒢, {𝒮_a^B}, u*.

/* offline learning */
1: Learn Θ_a and 𝒮_a^B, ∀ a ∈ A.
2: Initialize or load existing 𝒢.

/* online execution and learning */
3: while new task (s_0, s_G) given do
4:     Set a_n ← a̲ and s_n ← s_0.
5:     while s_n ≠ s_G do
6:         𝒢, a_{n+1} = ExUpGtn(𝒢, a_n, (s_n, s_G), a*_{n+1}).
7:         𝒮_{a_{n+1}}^B, b_{n+1} = ExUpBrs(𝒮_{a_{n+1}}^B, s_n, b*_{n+1}).
8:         Compute u* for branch b_{n+1} of skill a_{n+1}.
9:         Obtain new state s_{n+1}. Set n ← n + 1.

During the online execution for solving new tasks, the algorithm executes and updates the GTN 203 as described above. This is done by the function ExUpGtn(⋅) in Line 6, with possible human input a*_{n+1} if required. Once the next skill a_{n+1} is chosen, the algorithm executes and updates the branch selector 207. This is done by the function ExUpBrs(⋅) in Line 7, with possible human input b*_{n+1} if required. Consequently, the GTN 203 and the branch selectors 207 are updated and improved according to equation (4) and equation (5) on-the-fly. Compared with the manual specification of the transition and branching conditions, the human inputs are more intuitive and easier to specify.


In summary, according to various embodiments, a method is provided as illustrated in FIG. 4.



FIG. 4 shows a flow diagram 400 illustrating a method for controlling a robot device.


In 401, for each function of a plurality of functions, a control model (skill model or skill branch model) for controlling the robot device to perform the function is provided.


In 402, a selection model for selecting among the plurality of functions is provided.


In 403, multiple instances of a task are executed by the robot device (i.e. the robot device is controlled to perform multiple instances of a task), comprising, in each execution, when a function of the plurality of functions needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function and, if the selection model provides a selection of a function, controlling the robot device to perform the function selected by the selection model using the control model for the selected function in 404 and, if the selection model does not provide a selection of a function, receiving user input indicating a selection of a function, selecting a function according to the selection indicated by the user input, controlling the robot device to perform the function selected according to the selection indicated by the user input using the control model for the selected function and training the selection model according to the selection indicated by the user input in 405.
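Putting the steps of FIG. 4 together, the select-or-ask loop might be organized as in this sketch, reusing gated_choice and update_selector from the earlier sketches; reached, featurize, execute and ask_user stand for application-specific callables and are assumptions, not part of the disclosure:

def run_task_instance(selector, tau, labels, control_models,
                      state, goal, featurize, reached, execute, ask_user):
    """One task instance per FIG. 4: gated selection (403), autonomous
    execution (404), or human fallback plus retraining (405)."""
    while not reached(state, goal):
        feature = featurize(state, goal)
        choice = gated_choice(selector, feature, labels)      # 403: check selection
        if choice is None:                                    # 405: no confident selection
            choice = ask_user(state, labels)                  # online instruction
            selector, tau = update_selector(selector, tau, feature, choice)
        state = execute(control_models[choice], state)        # 404: run selected function
    return selector, tau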


The approach of FIG. 4 can be used to compute a control signal for controlling a physical system generally referred to as “robot device”, like e.g. a computer-controlled machine, like a robot, a vehicle, a domestic appliance, a power tool, a manufacturing machine, a personal assistant or an access control system. According to various embodiments, a policy for controlling the physical system may be learnt and then the physical system may be operated accordingly.


Various embodiments may receive and use image data (i.e. digital images) from various visual sensors (cameras) such as video, radar, LiDAR, ultrasonic, thermal imaging, motion, sonar etc., for example as a basis for the descriptor images.


According to one embodiment, the method is computer-implemented.


Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein.

Claims
  • 1. A method for controlling a robot device, comprising the following steps: providing, for each function of a plurality of functions, a control model for controlling the robot device to perform the function; providing a selection model for selecting among the plurality of functions; executing multiple instances of a task by the robot device, including, in each execution, when a function of the plurality of functions needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function and, based on the selection model providing a selection of a function, controlling the robot device to perform the function selected by the selection model, using the control model for the selected function; and based on the selection model not providing a selection of a function, receiving user input indicating a selection of a function, selecting a function according to the selection indicated by the user input, controlling the robot device to perform the function selected according to the selection indicated by the user input using the control model for the selected function, and training the selection model according to the selection indicated by the user input.
  • 2. The method of claim 1, wherein the selection model outputs indications of confidences for selections of functions of the plurality of functions and wherein training the selection model according to the selection indicated by the user input includes adjusting the selection model to increase a confidence output by the selection model to select the function selected according to the selection indicated by the user input.
  • 3. The method of claim 1, wherein the selection model outputs indications of confidences for selections of functions of the plurality of functions, and the checking of whether the selection model provides a selection of a function includes checking whether the selection model outputs an indication of a confidence for a selection of the function which is above a predetermined lower confidence bound.
  • 4. The method of claim 1, wherein the functions include skills and branches of skills and the selection model is trained to provide, for sets of alternative skills, a selection of a skill, and, for sets of alternative branches of skills, a selection of a branch.
  • 5. The method of claim 1, wherein the providing of the control model for each function includes performing demonstrations of the function and training the control model using the demonstrations.
  • 6. The method of claim 1, wherein the selection model is a logistic regression model.
  • 7. The method of claim 1, further comprising: based on the selection model not providing a selection of a function, pausing operation of the robot device until user input indicating a selection of a function has been received.
  • 8. A robot controller configured to control a robot device, the robot controller configured to: provide, for each function of a plurality of functions, a control model for controlling the robot device to perform the function; provide a selection model for selecting among the plurality of functions; execute multiple instances of a task by the robot device, including, in each execution, when a function of the plurality of functions needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function and, if the selection model provides a selection of a function, control the robot device to perform the function selected by the selection model, using the control model for the selected function; and if the selection model does not provide a selection of a function, receive user input indicating a selection of a function, select a function according to the selection indicated by the user input, control the robot device to perform the function selected according to the selection indicated by the user input using the control model for the selected function, and train the selection model according to the selection indicated by the user input.
  • 9. A non-transitory computer-readable medium on which are stored instructions for controlling a robot device, the instructions, when executed by a computer, causing the computer to perform the following steps: providing, for each function of a plurality of functions, a control model for controlling the robot device to perform the function; providing a selection model for selecting among the plurality of functions; executing multiple instances of a task by the robot device, including, in each execution, when a function of the plurality of functions needs to be selected to perform the task instance, checking whether the selection model provides a selection of a function; if the selection model provides a selection of a function, controlling the robot device to perform the function selected by the selection model, using the control model for the selected function; and if the selection model does not provide a selection of a function, receiving user input indicating a selection of a function, selecting a function according to the selection indicated by the user input, controlling the robot device to perform the function selected according to the selection indicated by the user input using the control model for the selected function, and training the selection model according to the selection indicated by the user input.
Priority Claims (1)
Number Date Country Kind
10 2021 212 494.1 Nov 2021 DE national