This invention relates to haptic robot interaction devices and methods.
Robotics is a rapidly evolving field that already has a significant impact on our societies, with robots enhancing the production capabilities of many industries. The next step will be the development of new robotic systems able to evolve and work with humans in diverse environments, outside of their usual, highly structured factories. As robots become more and more present in our lives, we will need efficient and intuitive ways of interacting with them. Haptic teleoperation in particular is a modality that allows a human operator to remotely control a robot's behavior while feeling the environment it is interacting with. This is very beneficial in confined, unsafe, or sensitive environments such as hazardous industrial sites, underwater, space, or surgery rooms. It naturally combines human high-level intelligence with robot physical capabilities while maintaining the safety and comfort of the human operator.
One of the main goals of teleoperation systems is to accurately interpret and transfer the human commands to the robot while providing high-fidelity, relevant feedback to the human. This is usually called haptic transparency, and it imposes many constraints on haptic systems. Haptic bilateral controllers in particular are subject to many challenges, such as inaccurate dynamic modeling, sensor noise, task uncertainties, human-in-the-loop disturbances, communication time delays, and non-collocation. These challenges have been at the heart of haptic research, as they impose a tradeoff between haptic transparency and stability on most teleoperation systems. This invention addresses these challenges.
The present invention tackles the challenges of real-world haptic teleoperation through the conceptualization and design of a new paradigm for haptic-robot control. The inventors' approach relies on autonomous behaviors on both the robot and haptic sides, parametrized via a dual-proxy model. The concept of local autonomy and proxy-based haptic-robot interaction is shown in
With this approach, there are no issues due to non-collocation, as robot and haptic sensory information is only used in the local control loops. As a result, the stability of the teleoperation system is greatly increased, even in the presence of significant communication time delays, without deteriorating the system's transparency in force and position. This is because the system stability now mostly depends on the stability of each of the local feedback controllers. This approach also addresses the wide differences in remote tasks, and in the physical interactions they involve, within a simple and generic framework. As such, this invention brings the following contributions:
Embodiments of the invention provide advantages over existing methods and systems. For example, existing haptic control methods become unstable or dangerous when the communication delay between the robot and the haptic device reaches around 100 milliseconds. Typically, existing methods do not allow teleoperation of a robot with haptic feedback over the internet across the globe. Embodiments of this invention make this possible, as they are robust to delays above 1 second.
In one embodiment, the invention is defined as a method for controlling a robot via a haptic interface. In this method one would have a robot controlled by a local, independent robot controller. The robot interacts with an environment. The robot controller distinguishes a unified force and motion controller, parametrized autonomously. The method further has a haptic device controlled by a local, independent haptic controller. The haptic controller distinguishes a unified force and motion controller, parametrized autonomously. The method further has an environment estimator near or within the robot controller estimating single or multi-dimensional geometric constraints from the robot interactions with the environment. The geometric constraints are defined in orientation and position. A dual proxy model is further part of the method separately connecting the robot controller and the haptic controller. The dual proxy model calculates outputs defined as (i) a parametrization for the robot and haptic unified controllers, (ii) a force command and a motion command for the robot unified force and motion controller, (iii) a force feedback command and a motion command for the unified force and motion haptic controller. The outputs are calculated by the dual proxy model receiving inputs defined as (j) position information from the haptic device, (jj) the geometric constraints from the environment estimator, and (jjj) position information from the robot, and wherein the calculated outputs are consistent with the robot and haptic device capabilities, and with a safe haptic-robot interaction.
In this invention, the inventors rethought haptic teleoperation through a local autonomy-based control strategy. The idea is to implement local controllers at the robot and haptic device sides, to respectively monitor the task actions and provide proper perception to the user, while only exchanging high-level passive variables between the two robots. This sliced approach is based on the introduction of the dual-proxy model: a smart bridge between the two local loops which encodes haptic-robot interactions.
The state of the art indicates that a compromise must be made between stability and fidelity once power variables are transmitted through a global feedback loop between the haptic device and the remote robot. In a position-position controller, intrinsic stability and robustness to time delay can be achieved by removing force/velocity non-collocation in the global bilateral architecture. To this end, the approach presented herein minimizes non-collocation by only closing the feedback loops locally at the device and robot sides, and exchanging high-level information between the two local controllers. A dual-proxy model computes setpoints for the two local loops with respect to the task-controlled modalities and exchanged data.
The concept of the haptic control framework is shown in
Using autonomous local controllers makes this approach very modular and general, because they can be easily changed and tuned to adapt to different tasks and different hardware without having to redesign the whole system.
The idea of proxy in haptics is to create a point, the proxy, in the virtual environment that kinematically follows the user input, but is subject to the constraints of the virtual environment. This proxy is then used to generate a haptic spring force that tries to bring the haptic device back to the position of the proxy. It is a very effective way to render a virtual environment haptically.
The idea of the dual-proxy model is to define two proxys, one for the robot and one for the haptic device, that are used to generate commands for the local controllers. The proxys are, therefore, virtual objects which represent the behavior of the two robots in the workspace of their counterparts. With this proxy-based approach, only position information has to be transmitted over the network, which cannot jeopardize the stability of the teleoperation framework. The dual-proxy model uses knowledge of the robot environment, which can be known a priori or computed with the online perception algorithm described infra, together with knowledge of the robot and haptic device states, to generate inputs to the local controllers. The dual-proxy model handles the communication between the robot and haptic device and prevents non-collocation issues. It is described in
The first role of the dual-proxy model is to handle the communication between the two robots and to generate a proxy for each. The proxy captures the behavior of the robot and of the haptic device in the frame of the other. To this end, the dual-proxy model copies each robot position data and transforms it through a proper workspace mapping. The most commonly used mapping relates the two workspaces via a translation, rotation and constant scaling factor.
The haptic device current position xh and orientation Rh, are measured in the device frame. They encode the human desired pose for the robot. These position and orientation are sent to the dual-proxy model, where they are mapped onto the robot frame, thanks to the rotation matrix from the device frame to the robot frame Rh-r, the translation between the robot frame and the desired center of the robot workspace in position pr-cw and orientation Rr-cw, and the scaling factor s, to obtain the robot proxy position xp,r and orientation Rp,r:
xp,r=pr-cw+sRh-rTxh
Rp,r=Rh-rTRhRh-rRr-cw (1)
The reverse transformation is done to compute the haptic proxy xp,h, Rp,h from the robot position xr and orientation Rr:
xp,h=Rh-r(xr−pr-cw)/s
Rp,h=Rh-rRrRr-cwTRh-rT (2)
It is worth noting that the same workspace mapping transformation is applied to any data passed through the network from one robot to the other, to maintain the physical consistency between all remote information. In particular, the model of the robot environment geometry needs to be mapped to the haptic device to generate correct force feedback.
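The workspace mapping of Eqs. (1) and (2) can be sketched as follows. This is a minimal illustration using NumPy; the function names and argument conventions are assumptions for the sketch, not the patent's implementation:

```python
import numpy as np

def robot_proxy(x_h, R_h, R_hr, p_rcw, R_rcw, s):
    """Map the haptic device pose to the robot proxy (Eq. 1).
    R_hr: rotation from device frame to robot frame; p_rcw, R_rcw:
    desired center of the robot workspace; s: scaling factor."""
    x_pr = p_rcw + s * (R_hr.T @ x_h)
    R_pr = R_hr.T @ R_h @ R_hr @ R_rcw
    return x_pr, R_pr

def haptic_proxy(x_r, R_r, R_hr, p_rcw, R_rcw, s):
    """Reverse mapping: robot pose to the haptic proxy (Eq. 2)."""
    x_ph = R_hr @ (x_r - p_rcw) / s
    R_ph = R_hr @ R_r @ R_rcw.T @ R_hr.T
    return x_ph, R_ph
```

A useful sanity check of the two mappings is that they invert each other: mapping a device pose to a robot proxy and back recovers the original pose.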
In general purpose teleoperation applications, the communication network is often slower than the desired control frequency and rarely ensures the hard-real-time constraints of haptic-robot interaction. In addition, communication delays and packet losses generate many issues in conventional teleoperation systems that use one global feedback loop. This is particularly true when there is a significant distance between the haptic device and the robot, for example when teleoperating a robot in space.
The dual-proxy model removes these communication issues. Indeed, since the dual-proxy model and the two local controllers are independently implemented, they can run at different frequencies. Typically, the local control loops will run at 1 kHz or more to get the best possible performance. However, the communication channel can run at a lower frequency, which will mostly depend on the chosen communication technology, and will typically be between 10-100 Hz. In addition, this frequency difference enables the computation of smooth proxys by averaging, in the fast controllers, position data between two consecutive communication messages. Finally, communication delay and latency spikes can be monitored by the dual proxy in many ways without compromising the performance of the local control loops, as it runs independently.
The second role of the dual proxy is to use the proxys, robot configurations and robot environment information to generate motion and force control inputs. A broad spectrum of physical interaction is involved when performing tasks in haptic teleoperation. Remote minimally-invasive surgical operations require high accuracy in motion, while assembly tasks or polishing work need precise perception and regulation of the contact forces with the environment. This range of interaction implies different objectives for the local control loops. To achieve high fidelity in both motion and contact tasks, the dual-proxy model generates the control inputs such that they properly reflect the real-world physical interaction. The strategy consists of either computing a motion command, if the robot is moving in some direction, or computing a force command when the robot is in contact with the environment.
The computation of the control inputs is based on a projection of the robot proxy and haptic proxy onto motion and force spaces, defined by the task physical interaction, whether the robot is in contact with the environment or not. The environment geometry is described locally by the directions of constraints and directions of free motion. These are encoded by selection matrices for the robot force space Ωf and the robot motion space Ωm. These matrices are 3D projection matrices, and their rank corresponds to the number of degrees of freedom in the force and motion space respectively. Besides, the force space and motion space need to be orthogonal so we have Ωf+Ωm=I3×3. A similar selection process projects the end-effector rotational motion and applied torques onto rotation and moment spaces, respectively through Ωrot and Ωτ, with Ωrot+Ωτ=I3×3. Note that because of these relationships, only the force and moment selection matrices need to be transferred from the robot to the haptic device. However, they need to be mapped to the haptic device workspace. The force and moment selection matrices for the haptic device are therefore:
Ωf,h=Rh-rΩfRh-rT
Ωτ,h=Rh-rΩτRh-rT (3)
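A sketch of how the selection matrices can be built and mapped per Eq. (3) follows. The construction of Ωf as an orthogonal projector onto the span of the contact normals is an illustrative assumption consistent with the orthogonality requirement Ωf+Ωm=I3×3; the function names are hypothetical:

```python
import numpy as np

def selection_matrices(normals):
    """Build force/motion selection matrices from a list of unit contact
    normals. Omega_f projects onto the span of the normals (the force
    space); Omega_m = I - Omega_f projects onto the motion space."""
    I3 = np.eye(3)
    if len(normals) == 0:
        return np.zeros((3, 3)), I3      # free space: no force direction
    N = np.column_stack(normals)         # 3 x k matrix of normals
    Omega_f = N @ np.linalg.pinv(N)      # orthogonal projector onto span(N)
    return Omega_f, I3 - Omega_f

def map_to_haptic(Omega_f, R_hr):
    """Express a selection matrix in the haptic device frame (Eq. 3)."""
    return R_hr @ Omega_f @ R_hr.T
```

The rank of Omega_f equals the number of constrained degrees of freedom, matching the definition in the text.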
On the robot side, the dual-proxy model computes the motion and force control inputs as:
Δxm=Ωm(xp,r−xr)
Fr=kvir,rΩf(xp,r−xr) (4)
where kvir,r is the stiffness of the virtual spring on the robot side. Monitoring these task-based control modalities at the robot level gives an intuitive and explicit control of the task interaction, whether it involves precise motions or contact forces. The robot local controller regulates the end-effector dynamic behavior to reach the desired control objectives and properly perform the remote task. On the haptic side, the controller must reflect the task interaction to the human operator. The haptic control objective is, therefore, to produce a high-fidelity remote sensing of the contact forces without any constraint in free space. The haptic control inputs reflect the contact force onto the force space while ensuring free motion of the operator onto the motion space. The projection-based approach ensures a good free space transparency as the desired control force is set to zero in motion space. In the force space, the feedback force is generated as:
Fh=kvir,hΩf,h(xp,h−xh) (5)
In addition to avoiding the exchange of force signals, computing the desired haptic and robot force through a virtual-spring model is a convenient tool to adjust the haptic feedback, with respect to the device maximum stiffness, and to adapt the robot behavior in contact. For example, one could reduce the spring stiffness to bring compliance in the force control input and to increase safety of the interaction. If the same virtual-spring model is implemented on both sides, the same (opposite) desired forces, up to the scaling factor, are input to the robot and haptic local loops.
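The virtual-spring computations of Eqs. (4) and (5) can be sketched as below. This is a minimal NumPy illustration with hypothetical function names; scalar spring stiffnesses are assumed:

```python
import numpy as np

def robot_inputs(x_pr, x_r, Omega_f, Omega_m, k_vir_r):
    """Dual-proxy control inputs on the robot side (Eq. 4):
    a motion command in motion space and a virtual-spring force
    command in force space."""
    dx_m = Omega_m @ (x_pr - x_r)
    F_r = k_vir_r * (Omega_f @ (x_pr - x_r))
    return dx_m, F_r

def haptic_feedback(x_ph, x_h, Omega_f_h, k_vir_h):
    """Virtual-spring haptic feedback in force space (Eq. 5); zero
    desired force in motion space preserves free-space transparency."""
    return k_vir_h * (Omega_f_h @ (x_ph - x_h))
```

Because both sides use the same spring model, reducing k_vir_r adds compliance in contact, as noted above, without any change to the communication scheme.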
The last component of the dual-proxy model aims at maintaining the safety and adequacy of the control inputs for the local loops. The desired force and motion, computed from the proxy projection, are adjusted according to high-level exchanged information, such as the control and communication frequencies, the robot and haptic device limits, or task-related constraints. Therefore, each proxy-based input not only reflects the interaction, but is also adapted to the controlled robot and can be modified to increase task performance.
The first step is to perform safety checks on the control inputs with respect to hardware limitations. It ensures physical consistency of the desired force and motion for the two robots. The desired haptic feedback is maintained under the haptic device maximum force, and damping can be added in the force space to prevent undesired oscillations. On robot side, if the motion input goes outside the range of motion, it is projected back onto the robot workspace. The robot control force can also be saturated to a maximum value to preserve safety of the robot/environment contact. Finally, the monitoring of communication delays and possible latency spikes enables the dual proxy model to reject potentially dangerous and outdated commands.
An additional safety stage is implemented on the robot side to prevent the control motion from going outside the robot's kinematic and dynamic operational range. The motion command is interpolated according to the robot's maximum velocity and acceleration in the operational space. The desired robot motion is, therefore, safely reachable, even if the human motion input is too fast or involves high acceleration changes. In addition, the frequency difference between the communication loop and the control loops might result in undesired step commands in the force space. Therefore, the desired forces on both sides are also interpolated to avoid step commands in the forces, which would be uncomfortable for the human on the haptic device and potentially dangerous on the robot side.
These safety measures enable the generation of safe commands for the robot and haptic device, regardless of what is happening on the other side of the communication network, or the time it took for the data to be transmitted.
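The safety adjustments described above can be sketched as simple saturation, clamping, and rate-limiting operations. This is an illustrative NumPy sketch under simplifying assumptions (a box-shaped workspace, norm-based force saturation); the real checks would depend on the hardware:

```python
import numpy as np

def saturate_force(F, F_max):
    """Scale a force command down if its norm exceeds the limit."""
    n = np.linalg.norm(F)
    return F if n <= F_max else F * (F_max / n)

def clamp_to_workspace(x, lo, hi):
    """Project a motion command back into a box-shaped workspace
    (a simplification; real workspaces are more complex)."""
    return np.minimum(np.maximum(x, lo), hi)

def interpolate_force(F_prev, F_des, dF_max):
    """Rate-limit the force command toward its target to avoid the
    step commands caused by the slower communication loop."""
    step = F_des - F_prev
    n = np.linalg.norm(step)
    if n <= dF_max:
        return F_des
    return F_prev + step * (dF_max / n)
```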
Within this framework, proper knowledge and communication of the robot environment geometry, that is to say the force space and motion space, is crucial to the haptic force fidelity. In certain tasks (in manufacturing, for example), it is possible to know the location and direction of the constraints in advance. However, in general use cases, one would want the robot to autonomously detect the directions of constraints, in real time, to infer the force and motion spaces and properly parameterize the proxy projection. Here, the inventors design a particle filter called the Force-Space Particle Filter (FSPF). The FSPF states represent the likely contact normals at the robot end effector and allow one to find the force and motion spaces. In an exemplary embodiment, the FSPF was developed and implemented for linear force and motion spaces, but its extension to angular force/motion spaces is immediate.
The idea behind the FSPF is the following: given a robot in contact that is moving, one should be able to find the contact normal by combining contact force and velocity data. In particular, the contact normal is going to be more or less in the direction of the sensed force, and orthogonal to the robot velocity.
The FSPF state space is the 3d unit sphere plus its center. The sphere represents all the possible contact normals at the robot end effector, while its center encodes the belief that there is no contact on the robot end effector.
The update step involves creating an augmented list of particles P̄k from the current list of particles and the control and sensor inputs. P̄k contains more particles than np. The resampling step first assigns a weight w(ρ, Fs, vs)∈[0, 1] to each particle ρ∈P̄k that measures the likelihood of that particle given the sensor measurements, and then resamples np particles from P̄k, prioritizing the particles with high weights, to obtain the next list of particles Pk+1.
At any point during robot operation, three things can happen: the current contact situation can persist, a new contact can be created, or an existing contact can be broken. The update step needs to accommodate these three possibilities. The augmented particle list is created, starting with an empty list P̄k=∅ and using the procedure described in Algorithm 1. In one example, a parameter 0<ε<<1 is used to numerically determine if the vectors are at the origin. The algorithm is explained next:
P̄k = [ ];
P̄k.append(ρ + d);
P̄k.append(s * uf + (1 − s) * ρtent);
P̄k.append(0);
Pk+1 = [ ];
Pk+1.append(P̄k[l]);
One defines a force weight function wf and a velocity weight function wv for a given particle ρ and the value of the measured force Fs and velocity vs. These functions represent a credence value for the particle based on the force sensor measurement and the velocity measurement. For the force weight function and particles on the sphere, one will consider that if the sensed force is higher than a threshold fh in the direction of the particle, then it is highly likely, and if it is lower than a threshold fl, it is not likely at all. For particles at the center of the sphere, one would consider them likely if there is no sensed force, and not likely if the sensed force is high. Equation 6 is defined as:
where ⟨., .⟩ represents the Euclidean scalar product. For the velocity weight function and particles on the sphere, one considers that a particle is not very likely if there is a significant velocity in its direction or its opposite direction, and that it is likely otherwise. For particles at the center, the velocity gives no information (as the robot could be moving in free space or in contact), so one assigns a velocity weight of 0.5. Equation 7 is defined as:
where vh and vl are high and low velocity thresholds.
Using the previously defined functions, one can perform the resampling step. The weight of each particle w(ρ,Fs,vs) is defined as the product of the force and velocity weights for that particle:
w(ρ,Fs,vs)=wf(ρ,Fs)wv(ρ,vs) (8)
One can see that w(ρ,Fs,vs) is always between 0 and 1. Moreover, the product means that as soon as one of the sensor modalities deems the particle unlikely, it is excluded. For the resampling itself, one can use low-variance resampling to prevent the loss of particles that are isolated but have a high probability. The resampling step is implemented using Algorithm 2.
The output of the FSPF, described supra, is a list of likely particles at the current time. One would need to extract a likely force space and motion space from this particle list. Intuitively, the distribution of the particles will give enough information to estimate the force and motion directions. In particular:
To find the dimension and direction of the force space, one performs an SVD of the particles. Define Xp, a matrix of dimension 3×1.5np, such that the first np columns of Xp are the coordinates of the np particles of the current state of the particle filter Pk, and the last 0.5np columns are zero (representing particles at the center). The left SVD of Xp gives a 3d ellipsoid that best explains the points, as depicted in
Let's note dfs the dimension of the force space that can be between 0 and 3. Let's also note U the 3×3 matrix of the left singular vectors of Xp, and U=[U1 U2 U3] its columns. Let us finally note S=[σ0, σ1, σ2] the vector whose elements are the singular values of Xp ordered in decreasing order (σ0≥σ1≥σ2). Here one applies Algorithm 3 to find the selection matrices Ωf and Ωm. The algorithm is also described here:
if ∥S∥<√(np/2), dfs=0 (9)
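The extraction of the force space from the particle distribution can be sketched as below. The padding with zero columns and the dfs=0 test follow the text and Eq. (9); the per-dimension singular-value ratio test is an illustrative assumption, since the full thresholds of Algorithm 3 are not reproduced here:

```python
import numpy as np

def force_space_from_particles(P, sigma_ratio=0.4):
    """Estimate the force-space selection matrix from a list of FSPF
    particles (3-vectors). Returns (Omega_f, Omega_m, d_fs)."""
    n_p = len(P)
    # 3 x 1.5*np matrix: particle coordinates, then zero columns
    Xp = np.hstack([np.array(P).T, np.zeros((3, n_p // 2))])
    U, S, _ = np.linalg.svd(Xp)
    if np.linalg.norm(S) < np.sqrt(n_p / 2):   # Eq. 9: no contact
        d_fs = 0
    else:
        # assumed heuristic: count singular values comparable to the largest
        d_fs = 1 + int(S[1] > sigma_ratio * S[0]) + int(S[2] > sigma_ratio * S[0])
    if d_fs == 0:
        Omega_f = np.zeros((3, 3))
    else:
        Omega_f = U[:, :d_fs] @ U[:, :d_fs].T  # projector onto force space
    return Omega_f, np.eye(3) - Omega_f, d_fs
```

When all particles agree on one contact normal, the first left singular vector aligns with that normal and d_fs=1, as the intuition in the text suggests.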
The three algorithms presented in this section show the full implementation of the FSPF. The particle filter is used to provide an added layer of local autonomy. The controllers on the robot and haptic side can self-parameterize autonomously in real time using sensory observations. The FSPF and the dual-proxy model are the key innovations in the local autonomy-based approach. For the robot and haptic local controllers, many choices can be made.
The local autonomy-based dual-proxy method allows one to choose controllers for the robot and haptic device independently of everything else. The dual-proxy model and FSPF together ensure the safety and physical consistency of the local control inputs, as well as the correct parameterization of force and motion spaces. The robot local controller needs to be able to regulate both free space motions and physical interactions in a stable way. Two good choices are to implement an impedance controller or a unified force/motion control. The latter requires a re-parameterization of the controller when the force space changes, but it enables a completely decoupled dynamic response in force space and in motion space. In haptic applications, the choice of stiffness in force space will mostly depend on the maximum stiffness that the haptic device can render, and the compliance needed for the task. One would want to be able to select it independently of the free-space motion tracking performance. For this reason, the inventors advocate for the usage of operational space control on the robot side. On the haptic side, the controller will mostly depend on which haptic device is used. When using a low-inertia haptic device with isotropic force capabilities, an open-loop force controller can be implemented to monitor the force feedback. Note that even if one used impedance control on the robot side, the force/motion-space projection would still be required to compute consistent haptic control inputs.
Here the inventors used the unified force/motion controller in operational space to locally control both the motion and force modalities on the robot. With this decoupling technique, and assuming perfect estimates of the dynamic components, the motion/force loops regulate the desired behavior of the decoupled end effector, equivalent to a unit-mass system in motion space and a force source in force space. This unified control strategy provides good performance in both motion and contact tasks. For completeness, the robot controller is described here. Consider a robot arm in contact with the environment at its end effector. The dynamic equations of the robot, expressed in joint space, are:
A(q)q̈+b(q,q̇)+g(q)+JTFc=Γ (10)
where A is the robot mass matrix, b is the vector of centrifugal and coriolis forces, g is the joint gravity vector, J is the task jacobian at the end effector, Fc is the vector of contact forces resolved at the task point and Γ is the joint torque vector. The operational space dynamics of the robot at the end effector is obtained by multiplying the previous equation by the transpose of the dynamically consistent inverse of the Jacobian:
Λẍ+μ+p+Fc=F (11)
Λ=(JA−1JT)−1
μ=J̄Tb
p=J̄Tg
are respectively the task space inertia matrix, the task space centrifugal and Coriolis forces, and the task space gravity, where J̄=A−1JTΛ is the dynamically consistent inverse of the Jacobian. F is the task force and x is the task space coordinates.
With the robot in contact, its workspace is separated into the force space (directions of constraints) and the motion space (directions of free motion). The force-space and motion-space specification matrices, respectively Ωf and Ωm, are obtained via the FSPF. By projecting the robot-proxy interaction, the dual-proxy model computes two setpoints for the robot local controller: the displacement Δxm on motion space and the desired contact force Fr on force space. With the chosen unified force/motion control approach, two distinct controllers are, then, designed to monitor the robot behavior in the force and motion directions.
The control force in motion space Fm is generated by an operational space proportional derivative (PD) controller:
Fm=Λ̂(KpmΔxm−KvmΩmẋr) (12)
where the estimate of the task space inertia Λ̂ is used to dynamically decouple the system. Kpm is the proportional gain and Kvm is the derivative gain.
The control force in force space Ff is computed to follow the desired robot force Fr computed by the dual-proxy model thanks to a proportional integral (PI) controller with feedforward force compensation and a velocity-based damping term. The proportional and integral gains are respectively Kpf and Kif and the damping gain is Kvf. The sensed task force Fs is used as feedback to the PI loop.
Ff=Fr−Kpf(Fs−Fr)−Kif∫(Fs−Fr)−KvfΩfẋr (13)
Finally, the two controllers are composed together, with two additional gravity and Coriolis/centrifugal compensation terms, to produce the dynamically decoupled and unified force/motion control force F.
F=Ff+Fm+μ̂+p̂ (14)
μ̂ and p̂ respectively represent estimates of the task space Coriolis and centrifugal forces, and of the gravity. The control torques in joint space are then simply computed as:
Γ=JTF (15)
If the robot is redundant with respect to the task, the redundancy can be controlled through a control torque vector Γ0 projected into the dynamically consistent null-space of the task N such that:
Γ=JTF+NTΓ0 (16)
where
N=In×n−J̄J (17)
Such a unified force/motion closed loop at the robot level yields the expected end-effector dynamic behavior to perform the remote task. It monitors the task-consistent control inputs, generated by the dual-proxy model to produce the haptic-robot interactions.
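One step of the unified force/motion law of Eqs. (12)-(14) can be sketched as below. This is a minimal NumPy illustration with scalar gains and a hypothetical function signature; the integral of the force error is assumed to be accumulated by the caller:

```python
import numpy as np

def unified_control(Lambda_hat, dx_m, xdot, Omega_m, Omega_f,
                    F_r, F_s, int_err, mu_hat, p_hat,
                    Kpm, Kvm, Kpf, Kif, Kvf):
    """Unified force/motion control force (Eq. 14).
    int_err is the running integral of (F_s - F_r)."""
    # Eq. 12: operational-space PD controller in motion space
    F_m = Lambda_hat @ (Kpm * dx_m - Kvm * (Omega_m @ xdot))
    # Eq. 13: PI force controller with feedforward and damping
    F_f = (F_r - Kpf * (F_s - F_r) - Kif * int_err
           - Kvf * (Omega_f @ xdot))
    # Eq. 14: compose with Coriolis/centrifugal and gravity compensation
    return F_f + F_m + mu_hat + p_hat
```

The joint torques then follow from Eq. (15) as Γ = Jᵀ F (plus the null-space term of Eq. (16) for redundant robots).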
On the haptic side, the haptic local closed loop must reflect the task interaction to the human operator. The desired haptic feedback Fh that was computed by the dual-proxy model is sent as an input to the local controller. This force command is exclusively within the force space, such that the motion feels fully unconstrained in the motion space. The haptic controller also needs to compensate for parasitic dynamic components of the haptic device, which can be reasonably approximated by the operational space estimated gravity p̂h when using a low-inertia, low-friction, parallel haptic device. For such devices, which rarely have a force sensor, open-loop force control is usually satisfactory. Coriolis and centrifugal effects are also neglected, as they are small and depend on the human arm dynamics. The haptic control force is computed, with a velocity-based damping term of gain Kvf,h, by:
Ff,h=Fh+p̂h−Kvf,hΩfẋh (18)
Overall, the full implementation of the local controllers is depicted in
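The haptic-side control law of Eq. (18) can be sketched as follows, as a minimal NumPy illustration with a hypothetical function name and a scalar damping gain:

```python
import numpy as np

def haptic_control_force(F_h, p_hat_h, Omega_f, xdot_h, Kvf_h):
    """Open-loop haptic control force (Eq. 18): desired feedback plus
    device gravity compensation and force-space damping."""
    return F_h + p_hat_h - Kvf_h * (Omega_f @ xdot_h)
```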
This application claims the benefit of U.S. Provisional Application 63/116,418, filed Nov. 20, 2020, which is hereby incorporated by reference for everything it teaches.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2021/060224 | 11/20/2021 | WO |
Number | Date | Country | |
---|---|---|---|
63116418 | Nov 2020 | US |