Today, there is increasing demand for collaborative robotic applications and systems that require precisely controlled force-based interactions. For example, force-sensitive industrial tasks such as sanding and polishing increasingly rely on machines and automated systems. However, most existing robotic systems provide inadequate support for force-based interactions, are highly expensive, and require operator-free work environments.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
Described herein are implementations and embodiments of a system comprising a control system and a robot equipped or configured with torque-controllable actuators. In some cases, the system discussed herein may be a robotic arm and/or system configured to allow for precisely controlled force-based responses and contact with environmental or physical objects. For example, the robotic arm may be configured to operate in close proximity to humans or operators as well as other objects to perform various industrial tasks without risk of injury or damage. In other examples, the robotic arm may be usable to provide for safe and effective virtual reality simulations. For instance, the robotic arm may be configured to convey and replicate real-life force-based interaction with virtual and/or remote objects. Thus, unlike conventional robotic systems that are designed to follow position commands (regardless of the forces exerted against the robot in the physical environment), the system discussed herein is configured to respond and interact with external forces encountered during operations.
The compliant and adaptive nature of the precision force control of the system discussed herein allows the robot to perform a variety of force-oriented industrial tasks, such as surface treatment or assembly by force, without expensive force sensors or complicated programming processes. For example, the robot arm may perform tasks such as sanding, polishing, and buffing of curved surfaces with precise force that directly affects the quality of the outcome. Force control also enables more intuitive robot programming methods such as teach-and-follow programming, in which a user guides the robot by hand to record and save position and orientation trajectories that the robot can play back with a user-defined impedance. Thus, the robot arm and system discussed herein may automate assembly and manipulation of objects in unstructured environments where human-like compliant and adaptive behaviors work more effectively than conventional rigid preprogrammed robot behaviors.
In some implementations, the robotic system may include a robotic arm that includes one or more torque-control actuators. The torque-control actuators may act as joints coupling the various segments of the robotic arm, allowing the arm to move with any number of degrees of freedom (including systems having six degrees of freedom). In some cases, the robotic arm may be configured such that the actuators of each joint generate rotary motion and torque which may be propagated throughout the structure of the arm to yield translational and rotational motion at the robot end-effector. It should be understood that with higher numbers of joints, torque-control actuators, and rotational sources, more degrees of torque or force may be generated at the end-effector, up to three degrees of torque and three degrees of force.
In some cases, a control system may be electrically and/or communicatively coupled to the robotic arm such that the control system may generate torque commands for each of the joints and/or receive feedback from each joint. In some instances, the control system may be configured to allow a user or operator of the system to configure a behavior (e.g., an impedance and motion) of the robotic arm and to provide a reactive feedback control loop or network to compensate for force interactions within the physical and/or virtual environment. For example, the robotic control system may include a task planner, a robotic force controller, and one or more proportional-derivative (PD) controllers (e.g., a PD controller for each joint).
In one illustrative example, the control system may cause the operations of the arm to mimic or replicate the motion of a virtual spring having an impedance neutral point being moved or pulled along a desired path. Thus, in this example, an operator may input a desired motion, such as a position-based task (e.g., a pick and place operation), and an impedance (or stiffness, damping coefficient, etc.) associated with the virtual spring. The task planner may then convert the desired motion and the impedance into a current force command or task based at least in part on the current impedance neutral point, the desired impedance, and the position and/or orientation of the end-effector (or, in some implementations, the current position of each joint). In some cases, the task planner may determine the current force command or task for a defined behavior of the end-effector position and orientation at a given period of time. The robotic force controller may then generate a current torque command or task for the torque-controlled actuators of the joints based at least in part on the current force command or task and a feedforward torque representative of forces caused by the robotic system and operations (e.g., the weight of the robotic arm).
In this example, if an object obstructs the motion path of the robotic arm, the distance between the impedance neutral point and the actual position of the end-effector increases (as the end-effector is obstructed). As the distance between the impedance neutral point and the actual position of the end-effector increases, the impedance (e.g., force of the spring) is increased, resulting in increasing current force commands, which results in either the obstruction being gently pushed out of the way or the impedance exceeding a safety limit (which may also be set by an operator) and the task planner halting the movement of the impedance neutral point. In the example, when the safety limit is exceeded, once the obstruction is removed, the robotic arm will again attempt to converge with the impedance neutral point (with a force that decreases as the end-effector nears the impedance neutral point). Similarly, if the end-effector is pushed or moved off of the motion path, the distance between the impedance neutral point and the actual position of the end-effector increases and the orientation between the impedance neutral point and the actual position of the end-effector may change. In this example, the impedance controller 322 will adjust the current force command based on the relative positions of the impedance neutral point and the actual position of the end-effector, causing the end-effector to close in on or chase the impedance neutral point. In this manner, the amount of force exerted on an obstruction (e.g., object or individual) may be both minor (e.g., less than 10 Newtons) upon contact and maintained below a desired safety level (such as 50 Newtons).
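As a non-limiting illustration, the virtual-spring behavior described above may be sketched as follows. This is a minimal one-dimensional sketch, not the disclosed implementation; the constants K_SPR, K_DMP, and FORCE_LIMIT and the function name step_neutral_point are hypothetical placeholders, and the actual system operates on full six-degree-of-freedom poses with matrix-valued gains.

```python
# Minimal 1-D sketch of the virtual-spring behavior described above.
# All names and constants are illustrative; the disclosed system uses
# full 6-DOF poses and matrix-valued stiffness/damping.
K_SPR = 200.0        # virtual spring stiffness (N/m), hypothetical value
K_DMP = 10.0         # damping coefficient (N*s/m), hypothetical value
FORCE_LIMIT = 50.0   # operator-defined safety limit (N)

def step_neutral_point(x_ref, x_act, v_act, path_step):
    """Advance the impedance neutral point and compute the spring force."""
    f_imp = K_SPR * (x_ref - x_act) - K_DMP * v_act
    if abs(f_imp) >= FORCE_LIMIT:
        # Safety limit exceeded: halt the neutral point; the commanded
        # force saturates until the obstruction clears.
        f_cmd = max(-FORCE_LIMIT, min(FORCE_LIMIT, f_imp))
        return x_ref, f_cmd
    return x_ref + path_step, f_imp  # otherwise keep pulling along the path
```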
In addition to the actuator control system 106, the torque control actuators 102 and/or the actuator control system 106 may be electrically and/or communicatively coupled to a robotic controller or system 116. In the current example, each individual actuator control system 106 may be serially connected to the robotic control system 116 using, for instance, network communication wires, generally indicated by 118, and to a power supply 120, via power wires, generally indicated by 122. In some cases, the wires 118 and 122 may be mounted to the body of the robotic arm 104 and enclosed by a cover or exterior for protection. Thus, the wires 118 and 122 may be routed through internal channels of the robotic arm 104 for protection as well as aesthetic purposes. The power supply 120 may be a direct current supply that provides a power signal to the actuator control systems 106. In some cases, for additional safety, an emergency switch 124 may be coupled between the actuator control systems 106 and the power supply 120 to provide system 100 operators an accessible shutoff point.
As will be discussed in more detail below with respect to
The task planning component 202 may be configured to receive the user inputs 210 together with the end-effector position 212 from either or both of the force control component 204 and/or the actuator control systems 206. For example, in some implementations, the actuator control systems 206 may provide the end-effector position 212 to the task planning component 202 directly, while in other cases, the actuator control systems 206 may output actuator data 214, such as angular position, velocity, acceleration, etc., which is usable by the task planning component 202 to determine the end-effector position 212. In another implementation, illustrated here, the actuator control systems 206 may provide the actuator data 214 to the force control component 204 and the force control component 204 may determine and provide the end-effector position 212 to the task planning component 202.
The task planning component 202 may generate a next force command signal 216 based on the user input 210 (e.g., the impedance, motion path, and tasks) and the end-effector position 212. For example, the task planning component 202 may determine a next force command 216 for each of a plurality of segments or periods of time as the robotic arm completes the assigned tasks. For instance, the task planning component 202 may determine, for the segment of time, a force command based on an impedance neutral point along the motion path and the end-effector position 212. In some cases, if the commanded force exceeds a predetermined threshold force (e.g., the virtual spring is stretched too far), the task planning component 202 may stop the progression of the impedance neutral point along the motion path and, in effect, cause the force commanded by the command signal 216 to be set to a maximum value (e.g., a command to limit the force of the arm until the obstruction is removed or the limited force as applied to the obstruction causes the obstruction to move).
The force control component 204 may receive the force command signal 216 as well as the actuator data 214 (e.g., the angular position, velocity, and acceleration of the end-effector) to determine a torque command signal 218 for execution by the actuator control systems 206. For example, the force control component 204 may determine a feedforward torque based on the position and orientation (or angular position) of the end-effector and either the actual velocity and acceleration or a desired velocity and acceleration when a desired trajectory is given from the task planning component 202. The force control component 204 may then generate a torque command signal 218 based at least in part on the feedforward torque and the force command signal 216. In some cases, the force control component 204 may generate a torque vector based on the position and orientation of the end-effector and the force command signal 216, and the torque command signal 218 may be determined based at least in part on the torque vector and the feedforward torque. In some specific examples, the force control component 204 may also base the torque command signal 218 on one or more torque safety vectors, such as to constrain the arm's motion to a safe joint range, thereby preventing damage to one or more of the torque-control actuators.
In the current example, the robotic control system 300 may utilize a robot dynamics model, represented as follows:
M(θ)α+C(θ, ω)+G(θ)=τcmd+τext
where M, C, and G respectively represent the inertia matrix, the centrifugal and Coriolis forces (together with other velocity-related forces), and the gravity force, and θ, ω, and α respectively represent the angular position, velocity, and acceleration of the robotic joints. In this example, it should also be understood that τcmd is a vector of commanded torque values associated with the robot joints and may be used as a control input to the target robotic system, and τext is a vector of torque values that are caused by external forces applied to the robotic system. Since the robotic system 308, discussed herein, is equipped with torque-controllable actuators, the actuators may be regarded as pure torque sources, and the actuator dynamics may be ignored in the model equation above.
In the illustrated example, the control input may be received by the robotic system 308 as a torque vector, τcmd, which when applied by the actuators produces an intended behavior. In the current example, the control input, τcmd, is utilized to generate a desired workspace impedance behavior of the robot's end-effector, using the following equation:
τcmd=τff+τtsk+τcst
Thus, the control torque input, τcmd, may be determined based on a feedforward torque, τff, to increase the overall fidelity of the robotic movement by compensating for at least a portion of the forces from the robot dynamics, including the robotic system's own weight. In this example, a torque vector, τcst, may also be used to determine the control torque input, τcmd, to improve overall safety by constraining joint angles to movement within a safe range.
τff=M′(θact)αdes+C′(θact, ωdes)+G′(θact)
As shown above, the feedforward torque, τff, may be determined using an inverse dynamics model 310 with an estimated robot inertia matrix, M′, estimated centrifugal and Coriolis forces with velocity-related forces, C′, and an estimated gravity force, G′. The actual angular position, θact, desired velocity, ωdes, and desired acceleration, αdes, of the robotic joints may be used as the input parameters to the inverse dynamics model 310. For example, the actual angular position, θact, may be received from one or more sensors associated with the robotic system 308, and the desired velocity, ωdes, and the desired acceleration, αdes, may, in some cases, be determined using an inverse kinematics model 312 with a given end-effector trajectory position generated by the end-effector trajectory generator 314 and/or from an acceleration estimator 330.
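As a non-limiting sketch of the feedforward equation above, the following Python fragment assumes the estimated model terms M′, C′, and G′ are available as callables (the names M_hat, C_hat, and G_hat are hypothetical, not from the disclosure):

```python
import numpy as np

def feedforward_torque(theta_act, omega_des, alpha_des, M_hat, C_hat, G_hat):
    """tau_ff = M'(theta)*alpha_des + C'(theta, omega_des) + G'(theta).

    M_hat, C_hat, and G_hat are placeholder callables for the estimated
    inertia matrix, velocity-related force vector, and gravity vector.
    """
    return (M_hat(theta_act) @ alpha_des
            + C_hat(theta_act, omega_des)
            + G_hat(theta_act))
```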
Using the feedforward torque, τff, and the task-related workspace force, Ftsk, the force control component 306 associated with the robotic system 308 with torque-controllable actuators may generate, at a Jacobian matrix component 318, a torque vector, τtsk, using the following equation:
τtsk=J(θ)TFtsk
In the current example, the torque vector, τtsk, is converted, at the Jacobian matrix component 318, from the task-related workspace force, Ftsk, by the transpose of the Jacobian matrix, J(θ), as shown in the equation above, and may be added at 316 to the control torque input, τcmd, provided to the actuators of the robotic system 308. In the current example, the task-related workspace force, Ftsk, may be determined based on a force, Fimp, discussed below, a constraint force, Fcst, from a safety trigger component 328, and any additional force, Fadd, such as any force to compensate for gravity acting on an object being held and/or moved by the end-effector.
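A minimal sketch of this mapping follows, assuming a 6×N end-effector Jacobian and 6-vector forces; the function name and argument names are illustrative placeholders:

```python
import numpy as np

def task_torque(jacobian, f_imp, f_add, f_cst):
    """tau_tsk = J(theta)^T * F_tsk, with F_tsk = F_imp + F_add + F_cst.

    `jacobian` is the 6xN end-effector Jacobian at the current joint
    angles; the force arguments are 6-vectors (force and moment).
    """
    f_tsk = f_imp + f_add + f_cst
    return jacobian.T @ f_tsk
```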
In the task planner component 304, a reference position, Xref, at the robot's end-effector is calculated from an impedance-based trajectory generator 314, and then a spring-damping force, Fimp, required for the end-effector to generate a spring-damper-like impedance behavior may be determined by the impedance controller 322 as follows:
Fimp = kspr(Xref − Xact) − kdmpVact
where kspr and kdmp are stiffness and damping matrices that may be input by the user via the user system 302 and/or determined by a desired stiffness/damping component 320 of the task planner component 304 based on the user input, and Vact is the actual linear/angular velocity of the end-effector. Vact may be converted from the estimated joint velocity by a second Jacobian matrix component 332 based on the actual angular position, θact, provided by the sensors of the robotic system 308. A reference position/orientation component 324 may also generate the reference position, Xref, and an actual position, Xact, of the end-effector may be determined by a forward kinematics component 326 based on the actual angular position, θact, provided by the sensors of the robotic system 308.
Using the above equation, the end-effector of the robotic system 308 acts as a spring-damper system with the spring or impedance neutral position at Xref. Trajectory control is performed by updating the value of the spring or impedance neutral position Xref. The trajectory generator 314 and/or the reference position/orientation component 324 of the task planner component 304 may generate a desired end-effector position at each control cycle (e.g., each segment or period of time) to update the spring or impedance neutral position. In some cases, the trajectory may be in the form of a workspace position and orientation of the end-effector without an inverse kinematics determination. In some cases, the inverse kinematics model 312 may be used to convert the reference frame for expressing the orientation of the end-effector and to compensate for other adverse effects that may occur during execution of the trajectory by the robotic system 308.
Using the above-referenced impedance-based trajectory control, the robotic system 308 is compliant with respect to interference from external disturbances (e.g., physical obstructions). However, the robotic system 308 with the impedance-based trajectory control may stop or otherwise halt movement in response to contact with an object in the external or physical environment. In some cases, the force output by the end-effector may increase as the trajectory (e.g., the impedance neutral position) continues to progress while the end-effector is held in place. In some cases, to prevent excessive force, an additional constraint representing the spring stretch, (Xref − Xact), may be used as a first threshold value by various safety trigger components 328 to halt the progression of the target point (Xref) when exceeded. Additionally, the impedance force, Fimp, following the above equation, may be explicitly saturated at a maximum impedance.
To provide more compliant behaviors in response to large external disturbances, a process of trajectory recalculation may be added to the trajectory generator 314 of the task planner component 304. When the spring stretch, (Xref − Xact), is beyond a second threshold value, the spring neutral position, Xref, is dragged to a new position close to the actual end-effector position. As a result of the combination of the force controller component 306 and the task planner component 304, the robotic system 308 with torque-controllable actuators may generate soft and safe behaviors while following trajectories to perform given tasks.
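The two spring-stretch rules above (halt at the first threshold, drag at the second) can be sketched as follows. The threshold values and function name are hypothetical placeholders, not values from the disclosure:

```python
import numpy as np

# Illustrative thresholds; actual values would be tuned per task.
HALT_STRETCH = 0.02   # first threshold (m): stop advancing X_ref
DRAG_STRETCH = 0.05   # second threshold (m): drag X_ref toward X_act

def update_neutral_point(x_ref, x_act, path_step):
    """Apply the two spring-stretch safety rules described above."""
    stretch = np.linalg.norm(x_ref - x_act)
    if stretch > DRAG_STRETCH:
        # Large disturbance: recalculate, pulling the neutral point
        # back to a position near the actual end-effector position.
        direction = (x_ref - x_act) / stretch
        return x_act + direction * DRAG_STRETCH
    if stretch > HALT_STRETCH:
        return x_ref          # halt progression of the target point
    return x_ref + path_step  # normal progression along the trajectory
```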
In the illustrated example, the output of the trajectory generator is Xref[i], kspr, kdmp, where [i] is the element in the array of intermediate points and Xref is an intermediate spring's reference coordinate, as represented by the plurality of points associated with the trajectory 404. The impedance may be modeled as a virtual spring around the impedance neutral position 402 in space, so the trajectory or motion path 404 is modeled as a moving impedance neutral position 402 with the virtual spring attached to the end-effector, such that at various positions about the impedance neutral position 402 the end-effector experiences the force associated with the force field 406 about the impedance neutral position 402, as shown. Further, it should be understood that as the impedance neutral position 402 transitions along the trajectory or motion path 404, the force field also adjusts, resulting in a physical output by the robotic system replicating an experience of the end-effector being pulled along the trajectory 404 by the impedance neutral position 402 via a coupled spring.
In one particular example, the array of intermediate points along the trajectory 404 may be generated by determining a straight line between the starting and ending positions, as well as a straight rotation between the starting and ending orientations or end-effector poses. Next, the task planning component generates, for each segment of time or cycle, an intermediate point using the starting point and direction based on a linear trajectory with a polynomial-based time scaling to minimize the overall jerk along the trajectory. For instance, the following 5th-order minimum-jerk trajectory may be used:
C5th = 10(t/Ts)^3 − 15(t/Ts)^4 + 6(t/Ts)^5
where t is the intermediate time requested at each iteration and Ts is the time associated with the entire trajectory motion. In some cases, Ts may be determined based on the distance between the start and end points and the desired movement speed, and C5th is a coefficient in the range [0, 1] which represents the 5th-order minimum-jerk trajectory in the time domain. Thus, to generate the intermediate points, each point may be represented by the starting point plus the span between the starting and ending points multiplied by C5th, as follows:
Xref = Xstart + C5th(Xtg − Xstart)
in which Xstart is the starting robotic system position and orientation and Xtg is the target position and orientation. Since t increments every loop iteration, the output of the trajectory generator is a set of intermediate points that act as the impedance neutral positions for the impedance controller of the task planner component.
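The two equations above can be combined into a simple generator. The following Python sketch implements them directly; the function name and the example values are illustrative only:

```python
import numpy as np

def min_jerk_points(x_start, x_tg, Ts, dt):
    """Generate intermediate impedance neutral positions along a
    5th-order minimum-jerk trajectory from x_start to x_tg.

    Ts is the total motion time and dt the control cycle period.
    """
    points = []
    t = 0.0
    while t <= Ts:
        s = t / Ts
        c_5th = 10 * s**3 - 15 * s**4 + 6 * s**5  # coefficient in [0, 1]
        points.append(x_start + c_5th * (x_tg - x_start))
        t += dt
    return points

# Example: 1-D move from 0.0 m to 0.3 m over 2 s at a 100 Hz control cycle.
trajectory = min_jerk_points(np.array([0.0]), np.array([0.3]), 2.0, 0.01)
```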
As discussed above and described in more detail below with respect to
At 602, the system may receive a final target point. In some cases, the target point may be updated by the trajectory generator for each cycle or segment of time based on the planned trajectory or motion path of the end-effector as well as the actual position of the end-effector, such as when the robotic system encounters situations shown in section 506 of
At 604, the system may apply an inverse kinematics model to the target point. For example, inverse kinematics may be used to predict the robotic actuator angles that produce the desired positions and orientations of the actuators and thereby the desired end pose of the end-effector. In the current example, the inputs to the inverse kinematics function include robot positions and orientations associated with the torque-controllable actuators, and the output of the inverse kinematics function may be an array of joint angles that the robotic system would assume at the final target point.
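The disclosure does not specify the inverse kinematics solver; as a toy stand-in for the 6DOF case, the following sketch shows the idea on an analytic planar two-link arm (all names are hypothetical):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar 2-link arm: given a
    target (x, y) for the end point, return the two joint angles.
    This is only a toy stand-in for the 6-DOF solver in the text.
    """
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_q2) > 1.0:
        raise ValueError("target point is outside the reachable workspace")
    q2 = math.acos(cos_q2)  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```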
At 606, the system may determine if the robotic system includes a pose that is associated with a singularity. For example, in some specific designs, the robotic system may encounter a pose or poses that have singularities (e.g., a pose at which two or more joint axes become parallel to each other or at which movement of one or more joints does not change the position of the end-effector). In these specific designs, when a trajectory or motion path passes through or targets a pose at a singularity (e.g., an unsafe position and orientation of the robotic arm), the system may implement intervening action to ensure safe and smooth robot motion. For example, the robotic system may have a singularity when the 4th joint axis and the 6th joint axis from the base of the 6DOF arm are parallel to each other. Thus, if the trajectory encounters a singularity, the process 600 may advance to 608. Otherwise, the process 600 proceeds to 610 and outputs a series of intermediate target points along the trajectory to the trajectory generator.
At 608, the system may generate an intermediate target point. For example, the system may divide the trajectory into two independent trajectories. In this example, the first trajectory may include a joint rotation through the singularity pose to provide for a stabilizing joint-wise impedance. The second trajectory may include a remaining portion of the original trajectory. The remaining portion of the original trajectory (e.g., the second trajectory) may then be checked for any remaining singularities as the process 600 returns to 602.
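The text defines the singularity geometrically (parallel joint axes); the sketch below substitutes a common numerical proxy, the Yoshikawa manipulability measure computed from the Jacobian, which also collapses when joint axes align. The threshold and function names are illustrative assumptions:

```python
import numpy as np

SINGULARITY_EPS = 1e-3  # illustrative manipulability threshold

def near_singularity(jacobian):
    """Flag poses whose manipulability (Yoshikawa measure) collapses,
    which occurs when joint axes align as described above."""
    w = np.sqrt(max(np.linalg.det(jacobian @ jacobian.T), 0.0))
    return w < SINGULARITY_EPS

def split_trajectory(points, jacobian_at):
    """Split a list of target points at the first singular pose so the
    segment through the singularity can use joint-wise impedance."""
    for i, p in enumerate(points):
        if near_singularity(jacobian_at(p)):
            return points[:i + 1], points[i + 1:]
    return points, []
```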
For example, the impedance controller of the task planner component may receive a desired position and orientation of the torque-controllable actuators 804 with respect to a robot workspace domain. The impedance controller may convert the difference between the desired and actual positions and orientations into a force and torque associated with the robot workspace that is usable to drive the robotic system to the desired position and orientation. In this example, the impedance control is modeled as a virtual spring 810 that pulls the end-effector 806 to a desired impedance neutral position (or pose) 808. As discussed above, damping is also added to the model to prevent overshooting and to smooth out the robot motion. Thus, the resulting impedance force may be represented as follows:
Fimp = kspr(Xref − Xact) − kdmpVact
where Fimp is the force and torque required for the end-effector 706 to generate a desired impedance behavior 810. This impedance force may be added to the robot dynamics compensation model that eliminates the weight of the robotic system due to gravity, as well as at least partially eliminates inertial and Coriolis effects of the robot linkages, with the effect of the impedance force acting on a weightless robotic arm and end-effector 806 with reduced inertia. Thus, the accuracy of the impedance-based position control may depend on the fidelity of the robot's force control, which is determined by the precision of the actuators' torque control and the feedforward torque calculation that compensates for dynamic and static forces of the robotic system 802.
The feedforward control input may be determined from an inverse dynamics model of the target robotic system 802, which determines the torque values required to follow a desired trajectory or motion path while overcoming the dynamic and static forces generated by the inherent characteristics of the robotic system 802. The inverse dynamics model may take, as input parameters, kinematic data received from an inverse kinematics model that converts the task-space position to respective robotic joint angles.
The control torque input, τcmd, includes a feedforward torque, τff, to improve the fidelity of the robotic system 802 by compensating for at least a portion of the forces caused by the inherent dynamics of the robotic system 802, including the robot's own weight. In the current example, the feedforward torque, τff, may be represented as follows:
τff=M′(θact)αdes+C′(θact, ωdes)+G′(θact)
In the current example, the feedforward torque in the equation above may be determined from an inverse dynamics model with an estimated robot inertia matrix, M′, estimated centrifugal and Coriolis forces with velocity-related forces such as damping, C′, and an estimated gravity force, G′. In some cases, a current angular position (θact), desired velocity (ωdes), and desired acceleration (αdes) of the robot joints are used as input parameters to the inverse dynamics model. The desired velocity and acceleration may be determined from an inverse kinematics model with a given trajectory of the end-effector 706. If the robotic system 802 is commanded to generate force or impedance without a specific trajectory, the robotic system 802 may exhibit arbitrary movements depending on interaction with the environment. In some cases, the actual angular position with zero velocity and acceleration may be provided to the inverse dynamics model to assist in compensating for the gravity force associated with the robotic system 802. In some instances, an acceleration and velocity may be estimated from the actual angular position. In this case, a part of the inertial, centrifugal, and Coriolis forces may be compensated for using the following equation:
τff=Kc(M′(θact)αest+C′(θact, ωest))+G′(θact)
where Kc is a coefficient between 0 and 1, in one implementation, or, in another implementation, between 0 and 0.3.
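A sketch of this scaled variant follows; it implements the Kc equation directly, again assuming hypothetical callables M_hat, C_hat, and G_hat for the estimated model terms:

```python
import numpy as np

def feedforward_torque_estimated(theta_act, omega_est, alpha_est,
                                 M_hat, C_hat, G_hat, k_c=0.3):
    """tau_ff = Kc*(M'(theta)*alpha_est + C'(theta, omega_est)) + G'(theta).

    With k_c = 0 this reduces to pure gravity compensation, matching
    the zero-velocity/zero-acceleration case described above.
    """
    dyn = M_hat(theta_act) @ alpha_est + C_hat(theta_act, omega_est)
    return k_c * dyn + G_hat(theta_act)
```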
In embodiments using the feedforward torque, the robotic system 702 with torque-controllable actuators 704 may generate workspace force and moment at the end-effector 706 with high fidelity. For instance, the force F may refer to a set of force and moment described as follows:
F = [fT mT]T
where f and m are the force and moment vectors and the superscript 'T' refers to the vector transpose.
In some instances, to generate the workspace force, Ftsk, a torque vector, τtsk, is generated from the force by using the transpose of the Jacobian matrix, J(θ), as shown above. The resulting torque vector may be added to the control torque input, τcmd, as follows:
τtsk=J(θ)TFtsk
Then, a set of Cartesian forces may be summed and provided to the force controller component. For example, the task force, Ftsk, may be the sum of a force to generate a desired impedance behavior, Fimp, an additional force, Fadd, needed for completing tasks, and a constraining force, Fcst, for bounding a safe workspace. The additional force, Fadd, may be an upward force to compensate for the weight of an object that the end-effector 806 may carry or grasp.
Ftsk = Fimp + Fadd + Fcst
In some cases, to prevent the end-effector 806 from trespassing a workspace bound that may define an allowable workspace area for safety, a constraining workspace force, Fcst, may be added to the task force. For example, a workspace boundary may be defined as a sphere or a combination of planes. If the end-effector 806 trespasses the bounded surface, then the constraint force is constituted based on a workspace impedance rule as follows:
Fcst = KWcst(Xclosestpoint − Xact) − DWcstVact, if Xact trespasses the workspace boundary
where KWcst and DWcst are stiffness and damping matrices, respectively, Xact is the actual workspace position of the end-effector 806, and Xclosestpoint is the point on the bounded surface that is closest to the actual position of the end-effector 706. Vact is the workspace velocity of the end-effector 806.
In some cases, the robotic control system may add a joint-level constraint for joint-level safety. For instance, an additional torque, τcst, may be added to the final torque command, and the constraint torque, τcst, can be constituted based on a joint-wise impedance as follows:
τcst=KJcst(θmax−θact)−DJcstωact if θact>θmax
τcst=KJcst(θmin−θact)−DJcstωact if θact<θmin
where KJcst and DJcst are diagonal matrices filled with joint-wise stiffness and damping coefficients, respectively, θmax and θmin are vectors of maximum and minimum allowable joint angles, respectively, and θact and ωact are vectors of the actual joint angles and velocities, respectively. Thus, the feedforward torque, task torque, and constraint torque may be added to command the torque-controllable actuators 704 to produce the intended workspace force and moment at the end-effector 706. The final torque command may be represented as follows:
τcmd=τff+τtsk+τcst
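The joint-wise constraint rule and the final summation above can be sketched as follows; the function and parameter names are illustrative, not from the disclosure:

```python
import numpy as np

def joint_constraint_torque(theta_act, omega_act, theta_min, theta_max,
                            K_jcst, D_jcst):
    """Joint-wise impedance (tau_cst above) that pushes any joint
    exceeding its allowable range back inside that range."""
    tau_cst = np.zeros_like(theta_act)
    over = theta_act > theta_max
    under = theta_act < theta_min
    tau_cst[over] = (K_jcst @ (theta_max - theta_act))[over] \
        - (D_jcst @ omega_act)[over]
    tau_cst[under] = (K_jcst @ (theta_min - theta_act))[under] \
        - (D_jcst @ omega_act)[under]
    return tau_cst

def final_torque_command(tau_ff, tau_tsk, tau_cst):
    """tau_cmd = tau_ff + tau_tsk + tau_cst."""
    return tau_ff + tau_tsk + tau_cst
```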
A disturbance-observer component 902 may be used to increase the performance of the torque controller by removing the effects of unmodeled actuator phenomena such as static friction. In some cases, the disturbance-observer inverse dynamics component, D(s), may be simplified to 1 to reduce software complexity at little cost to performance. In this case, the disturbance observer reduces the steady-state error. The controller 900 may also include a damping friction compensation component 904 that counteracts a resultant damping-like behavior of the closed-loop system at the free-end condition by adding a compensation torque to the desired torque, Td.
The control process of the controller 900 may determine the actuator output torque by comparing the requested actuator torque from the force control component to an actual actuator torque measured by a torque sensor. The output of the controller 900 may be a requested current provided to the motor (e.g., via a low-level current controller that executes sequentially). The actual actuator torque sensor feedback is filtered via a three-point median filter before being scaled into an actual torque value as follows:
Ta = Tfiltered3ptmed(k) = median(T(k), T(k−1), T(k−2))
where Ta is the actual feedback torque, Tfiltered3ptmed(k) is the three-point-median-filtered torque at the current iteration, T(k) is the current raw value of the torque, T(k−1) is the raw torque from the previous iteration, and T(k−2) is the raw torque from two iterations previous. A three-point median filter may remove any single data points that are anomalous. The disturbance-observer component 902 receives the difference between the reference torque, Tref, and the actual torque, Ta, and generates a disturbance-observer torque, Tdob, as follows:
Tdob = kdobQ(s)(Ta − Tref)
where Q(s) represents a low-pass filter and kdob is a scaling factor in the range of [0, 1]. The filter may be of the form: Q(s)=Nf/(Nf+s), where Nf is the cutoff frequency. The discrete form of this filter may be: Tf=αTraw(k)+(1−α)Tf(k−1), where α=NfTs/(NfTs+1) and Ts is the sampling period.
In the current example, Tf is the filtered output of the filter, and Traw(k) and Tf(k−1) represent the current iteration's raw value and the previous iteration's filtered value, respectively. Before the desired torque, Td, is provided to the controller 900, the desired torque is adjusted by a closed-loop damping compensation, Tdampcomp, and the disturbance-observer torque, Tdob, as follows:
Tref = Td − Tdampcomp − Tdob
The error term that is input to the controller 900 may be the difference between the adjusted torque reference and the actual measured torque as follows:
E = Tref − Ta
The derivative portion of the controller 900 also uses the same first-order filter. Thus, the controller is of the form: CPD(s)=Kp+(Nf/(Nf+s))Kds, where Kp is the proportional gain, Kd is the derivative gain, and Nf is the low-pass filter cutoff frequency of the derivative calculation. A feedforward term is then added as follows: TmotorFF=Tref.
The final output of the controller 900 to the motor of the actuator is a current command as follows:
Amotor = (1/(KτNgear))(TmotorFF + TPD) = (1/(KτNgear))(Td − Tdampcomp − kdobQ(s)(Ta − Tref) + E(s)CPD(s))
where Kτ is the motor torque constant, Ngear is the actuator gear reduction ratio, and TPD is the output torque of the controller 900.
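One control cycle of the torque controller described above may be sketched as follows. The gains and the discrete filter constant alpha are illustrative placeholders; the observer update uses the previous cycle's output when forming Tref, a simplification of the algebraic loop implied by the equations:

```python
class ActuatorTorqueController:
    """One-cycle sketch of the torque controller described above:
    median filter, disturbance observer, damping compensation, PD with
    a filtered derivative, and the final motor current command.
    All gains and filter constants are hypothetical placeholders.
    """
    def __init__(self, kp, kd, alpha, k_dob, k_dc, k_tau, n_gear):
        self.kp, self.kd, self.alpha = kp, kd, alpha
        self.k_dob, self.k_dc = k_dob, k_dc
        self.k_tau, self.n_gear = k_tau, n_gear
        self.t_raw = [0.0, 0.0, 0.0]   # raw torque history T(k), T(k-1), T(k-2)
        self.t_dob_f = 0.0             # filtered disturbance-observer state
        self.e_prev = 0.0              # previous error for the derivative term
        self.d_f = 0.0                 # filtered derivative state

    def step(self, t_desired, t_raw_sensor, omega_flt, dt):
        # Three-point median filter on the torque sensor feedback.
        self.t_raw = [t_raw_sensor, self.t_raw[0], self.t_raw[1]]
        t_a = sorted(self.t_raw)[1]
        # Damping compensation and disturbance-observer adjustment:
        # Tref = Td - Tdampcomp - Tdob (observer state from last cycle).
        t_dampcomp = self.k_dc * omega_flt
        t_ref = t_desired - t_dampcomp - self.t_dob_f
        # Discrete first-order low-pass of kdob*(Ta - Tref), per Q(s) above.
        self.t_dob_f += self.alpha * (self.k_dob * (t_a - t_ref) - self.t_dob_f)
        # PD on the adjusted error, with a first-order filtered derivative.
        e = t_ref - t_a
        d_raw = (e - self.e_prev) / dt
        self.d_f += self.alpha * (d_raw - self.d_f)
        self.e_prev = e
        t_pd = self.kp * e + self.kd * self.d_f
        # Feedforward term plus PD output, scaled into a motor current.
        return (t_ref + t_pd) / (self.k_tau * self.n_gear)
```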
For the damping compensation terms, as well as robot-level dynamics, the actuator angle, velocity, and accelerations are determined on the actuator controller 900 as follows:
θact=θM/Ngear+θTMD
The actuator angle, θact, is the sum of the motor angle, θM, divided by the gear ratio, Ngear, and the deflection, θTMD, of the torque measuring device. The actuator velocity is determined by differencing successive angle measurements and dividing by the sampling period, with a first-order filter, as follows:
ωraw(k)=(θact(k)−θact(k−1))/Ts
ωflt(k)=αωraw(k)+(1−α)ωflt(k−1) where α=NfTs/(NfTs+1)
where ωraw(k) is the raw angular velocity, θact(k) and θact(k−1) are the current and previous iteration's actuator angles respectively, Ts is the sampling period, ωflt(k) and ωflt(k−1) are the filtered angular velocities for the current and previous iterations respectively, and Nf is the low-pass filter cutoff frequency. This angular velocity is used to determine the damping compensation term Tdampcomp in the controller as follows:
Tdampcomp = kdcωflt
where kdc is a scaling factor to convert angular velocity to torque.
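The angle, velocity, and damping-compensation equations above combine into a small per-cycle routine; the sketch below implements them directly with hypothetical parameter names:

```python
def actuator_kinematics(theta_m, theta_tmd, theta_prev, omega_flt_prev,
                        n_gear, Ts, Nf, k_dc):
    """Actuator angle, filtered velocity, and damping-compensation
    torque, per the equations above (parameter values are placeholders)."""
    theta_act = theta_m / n_gear + theta_tmd       # motor angle + TMD deflection
    omega_raw = (theta_act - theta_prev) / Ts      # finite-difference velocity
    alpha = Nf * Ts / (Nf * Ts + 1)                # first-order filter constant
    omega_flt = alpha * omega_raw + (1 - alpha) * omega_flt_prev
    t_dampcomp = k_dc * omega_flt                  # damping compensation torque
    return theta_act, omega_flt, t_dampcomp
```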
For example, if x is the actual actuator position, xe is the estimated actuator position, and xdote is the estimated actuator velocity, then K1=ωb^2 and K2=ζωb, where ωb is the cutoff frequency and ζ=0.707. The difference form of the estimator may be determined as follows:
αe(k) = K1(x − xe(k−1)) − K2xdote(k−1)
xdote(k) = Tsαe(k) + xdote(k−1)
xe(k) = Tsxdote(k) + xe(k−1)
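The difference equations above translate directly into a per-iteration update. The following sketch implements them as given (the note on K2 flags a common alternative formulation, not a claim about the disclosure):

```python
def estimator_step(x, x_e, xdot_e, Ts, wb, zeta=0.707):
    """One iteration of the second-order estimator above, producing
    estimated acceleration, velocity, and position from measured x."""
    K1 = wb ** 2
    K2 = zeta * wb     # as given above; some formulations use 2*zeta*wb
    a_e = K1 * (x - x_e) - K2 * xdot_e
    xdot_e_new = Ts * a_e + xdot_e
    x_e_new = Ts * xdot_e_new + x_e
    return a_e, xdot_e_new, x_e_new
```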
In this example, the user 1102 may manipulate the end-effector 1114 as the user 1102 moves their hand through the virtual environment. The control system 1112 may receive data associated with a virtual object that is encountered by the user 1102 within the virtual environment and generate a desired velocity, ωdes, and desired acceleration, αdes, for the robotic system 1104 to replicate a physical force acting on the hand of the user 1102 at the end-effector 1114 of the robotic system 1104. In other words, the control system 1112 causes the robotic system 1104 to generate a force replicating the user encountering the obstruction in the physical environment. Thus, a critical piece of realistic simulation may be provided by the robotic system 1104. For example, when the user 1102 lifts a virtual object, the end-effector 1114 presses down on the hand of the user 1102 so that the user 1102 feels the object's weight. In one example, the end-effector 1114 is equipped with a position tracker that communicates with the electronic system 1110 and the system controller 1112 to generate a position and orientation in the virtual scene. In some cases, the electronic system 1110 and the system controller 1112 are integrated into the display 1106.
The robotic control system 1202 may be configured to receive the robot commands 1206 together with joint communication 1208 from one or more motor controllers 1210 of the torque-controllable actuators. In some cases, the robotic control system 1202 may also provide robot status 1220 back to the user device 1204. For example, as illustrated, the data flow may commence with a desired robot trajectory or workspace force of the end-effector (expressed in the robot's global Cartesian coordinate system) received from the user device 1204 as the robot command. The robotic control system 1202 may then convert the workspace forces into actuator torques via the robot control loop 1218 based on the robot commands 1206 and feedback 1222 from the motor controllers 1210. The actuator torques may then be communicated as joint communication 1208 to the cascaded motor control loops 1218 over a network via the interfaces 1214 and 1216. The motor control loop 1218 on each actuator converts the desired torque into motor commands for execution.
In this example, the robotic control system 1302 may include the robot network interface 1314 and the robot control loop 1312 which communicate the robot commands 1306 and the robot feedback 1344 similar to
At 1402, the system may generate a virtual reality (or mixed reality) environment. For example, the system may cause a three-dimensional virtual reality to be displayed to a user via a headset system. In some cases, the system may output audio, including directional audio associated with the source of the audio within the virtual environment.
At 1404, the system may co-locate the user handheld device (e.g., the end-effector) with a position in the virtual reality environment. For example, the end-effector may be equipped with a position sensor that provides feedback from which the system may determine a pose and/or position of the end-effector. In some cases, the sensor may provide a six-degree-of-freedom pose associated with the position of the user's hand within the virtual environment.
At 1406, the system may receive a user input associated with the virtual environment via the handheld device. For example, the user may operate or move the pose of the end-effector to simulate a movement of the user's hand through the virtual environment.
At 1408, the system may generate user interaction force using a haptics component of the virtual reality engine. For example, the system may utilize one or more collision engines to determine an intersection between the user's hand and a virtual object and a physics processor to determine desired robotic forces based at least in part on the collision data. A haptic manager may then determine a transmitted force to control the torque or force associated with the end-effector based at least in part on the desired robotic forces.
At 1410, the system may transmit the commanded force to the robotic control system. For example, the virtual reality engine may communicate to the robotic control system via one or more network loops.
At 1412, the system may provide visual feedback through the display. For example, the display may show the user holding or pushing or otherwise interacting with an object in the virtual environment.
At 1414, the system may generate interpolated joint commands from the transmitted force. For example, a robotic control system may be configured to receive the transmitted force and to translate the force into torque commands for each of the torque-controllable actuators of the robotic system. In some cases, the interpolated joint commands may be based at least in part on feedback received from the torque-controllable actuators and/or a safety threshold.
At 1416, the system may send the joint commands to the robotic system and, at 1418, the robotic system may apply the joint commands to cause force feedback to the user. For example, the user may experience force feedback that replicates the weight of the object being held as the end-effector pushes or pulls downward on the hand of the user.
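One cycle of the force-feedback loop described in steps 1402-1418 may be sketched as follows. The callables collision_check, object_force, and robot_send_force are stand-ins for the collision engine, physics processor, and robotic control system interface, none of which are specified in the text:

```python
import numpy as np

def haptic_cycle(hand_pose, virtual_objects, collision_check, object_force,
                 robot_send_force):
    """One cycle of the VR force-feedback loop sketched above.

    The three callables are hypothetical placeholders for the collision
    engine, physics processor, and robot interface, respectively.
    """
    f_total = np.zeros(6)  # force and moment to render at the end-effector
    for obj in virtual_objects:
        contact = collision_check(hand_pose, obj)
        if contact is not None:
            f_total += object_force(obj, contact)  # e.g., weight, contact force
    robot_send_force(f_total)  # converted downstream into joint torques
    return f_total
```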
Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as illustrative forms of implementing the claims.
This application claims priority to U.S. Provisional Application No. 62/814,972 filed on Mar. 7, 2019 and entitled “SYSTEM AND METHOD FOR GENERATING FORCE FEEDBACK FOR VIRTUAL REALITY,” which is incorporated herein by reference in its entirety.