The present invention relates generally to robotics, and specifically to a method and system for providing improved accuracy in multi-jointed robots through determination of kinematic robot model parameters.
Multi-jointed robots used in industry are not generally considered very accurate. In fact, such industrial multi-jointed robots may not even have a quoted accuracy specification. Rather, these robots are purchased because they are repeatable, which can be on the order of ±0.0016 inches. Repeatability is the ability of the robot to return to a given position in space multiple times, and is quantified by the spread, or degree of scatter, in actual tool tip locations in space about the mean location of the tool tip for a collection of repeat measurements. Accuracy is the ability of the robot to position the tool tip at a commanded location in space, and is quantified by the error between the actual tip location and the commanded location.
Typically, multi-jointed robots are manually taught what to do by “jogging” the robot through its desired motions while capturing positions along the way. These captured points are used to create motion patterns that can be repeated indefinitely, and the robot will perform that function flawlessly without variation. However, there is a need for accurate robots that can be programmed to perform functions without resort to “teach and learn” robot programming as the industry is moving to programming from blueprints or CAD files. This step is limited by the accuracy of the robot and therefore is currently restricted to very special applications and expensive robots.
In addition, robot calibration requires measuring robot errors in a number of poses, that is, the difference between “where it is” and “where it should be,” followed by an optimization routine to find the best set of robot model parameters to minimize these measured errors. As used herein, the term “pose” includes the position and orientation of the robot tool in space. One prior art method of robot calibration uses an external measuring device to find the actual pose of the robot. Errors are calculated by subtracting the actual pose from the commanded pose of the robot. Unfortunately, such external measuring devices are typically very expensive and in turn require their own careful calibration. Generally, they also require highly qualified operators in order to produce acceptable results.
It is against the above background that the present invention provides a method and system to provide improved accuracy in multi-jointed robots through determination of kinematic robot model parameters. Specifically, model parameters are determined and used in robot controllers to control the motion of a robot end-effector tool, simplifying calibration of the robot's accuracy. In this manner, the present invention allows inexpensive multi-jointed robots to achieve accuracies comparable to expensive multi-jointed robots, and allows more industries to move to this mode of operation. For example, the present invention allows robot motion patterns to be generated from blueprints or CAD files, rather than being generated through the “teach and learn” method in which the robot is manually “jogged” through its desired motions to determine points for its motion pattern. The present invention also allows tools to be changed without having to recalibrate the robot parameters.
The present invention makes use of geometric constraints to perform parameter identification in order to increase position and orientation accuracy. When the tool or effector object encounters the geometric constraint, the actual robot pose (or partial pose, e.g. position) is defined by the geometry of the calibration object and that of the effector object. At each encounter, the joint values for the robot are stored, defining the joint configuration of the robot for that encounter. Errors are calculated as the difference between the pose (or partial pose, or other metric) derived from the measured joint angles and the current robot model, and the geometry defined by the geometric constraint. Parameter identification is accomplished by an optimization routine designed to find the robot model parameters that minimize these errors.
In one embodiment, the present invention uses an improved robot controller that allows for a mechanical “feedhold” and pose capture on reception of an input signal. This signal is supplied by a touch or trigger probe integrated with the tool. When the tool tip encounters the calibration object, a signal is sent to the controller to read the pose (i.e. joint configuration) and to stop robot motion in a controlled fashion. The present invention finds the kinematic model parameters for robots with which to improve the pose (position and orientation) accuracy of an end-effector on the robot. Specifically, the present invention finds the robot model parameters by capturing joint values when an “effector” object encounters a reference object. The present invention uses joint values from multiple encounters, together with the kinematic model of the robot and the geometry of the reference and effector objects, to create mathematical equations that are used in an optimization routine to find the model parameters for the robot. In this embodiment, the effector object is an offset touch probe with a spherical tip and the reference object is a sphere. The touch probe provides the trigger when deflection of the tip occurs during an encounter with the sphere. Two spheres, the distance between which is known, are used to provide a length scale for determining the length model parameters.
In another embodiment, the present invention makes use of external measuring devices to measure the position and orientation of the robot end-effector. It is to be appreciated that both this embodiment and the above-mentioned embodiment make use of differences between the actual pose and the inaccurate pose that the model calculates when using the incorrect model parameters. This approach uses an inexpensive “displacement-measuring device” to interpolate to a geometric constraint with no need for recording joint values. Consequently, it does not require special hardware or software internal to, or interfaced with, the robot controller. It does require an external measuring device (the displacement-measuring device), but this device is an inexpensive piece of equipment used only to interpolate to the geometric constraint.
The following detailed description of the embodiments of the present invention can be best understood when read in conjunction with the following drawings, where like structure is indicated with like reference numerals.
Standard automated, motorized, multi-jointed robots make use of computer models of the robot to control motion as the robot moves the end-effector to desired points, expressed in Cartesian coordinate frames, within the robot envelope.
As illustrated, the present invention uses a reference object, such as a sphere 12, and an accurate surface detection device, such as an offset touch probe 14, attached to a robot end-effector or a tool attachment mechanism 16 provided at the end of link F of robot 10. Use of the offset touch probe 14 permits information to be gained on the parameters of the end-joint 6 of the robot arm 8. As will be explained in a later section, additional steps are performed to separate the parameters of the offset touch probe 14 (or the tool attachment mechanism 16) from the parameters of the robot arm 8 to allow interchange of tools.
As shown in
In the explicatory embodiment of
It is to be appreciated that the tolerances for the sphericity of spheres 12a and 12b and the measured distance of length standard 28 must be better than the required degree of accuracy for the calibration, typically by an order of magnitude. For example, the distance between the spheres provides a length scale for the robot model parameters. Percentage errors in this distance will directly result in an equal percentage error in the model lengths, producing a corresponding percentage error in tool tip positioning. Requirements on resultant robot accuracy will therefore drive the required tolerance of the independent measurement of the distance between the spheres and the required sphericity of the spheres. Note that if the robot accuracy requirements are extreme, then the calibration assembly may need to be stable with temperature (length constant over a given temperature range).
It is also to be appreciated that while the geometry of the reference object in the present invention is spherical, finding the center of the sphere (i.e. a point) is a true geometric point constraint and not an extended constraint geometry as is found in some prior art.
As mentioned, the present invention uses the two spheres 12a and 12b to find a length scale that is independent of the robot 10. This is required since the geometric constraint approach does not determine absolute length dimensions, but only ratios of length dimensions. In the illustrated embodiment of
The touch probe 14, in addition to being mechanically attached to the robot end-effector 16, is electronically interfaced with the controller 20 to indicate an encounter to the controller. An encounter is detected when a stylus or tip 18 of the touch probe 14 is deflected, such as, for example, by one of the spheres 12a and 12b as the tip is moved into contact with the sphere 12. In one embodiment, the tip 18 is a ball tip, and in other embodiments may be any other type of stylus tip.
One suitable touch probe is a commercially available Renishaw® offset probe, which has a repeatability specification on the order of ±0.00004 inches. Dimensions of the touch probe 14 are determined independently with another device (e.g. a coordinate measuring machine) that can accurately measure the position of the touch probe stylus tip 18 relative to the mounting surface of the probe. Tolerances on the touch probe dimensions affect the resultant tool tip position accuracy. Therefore, the accuracy of these independent measurements must be better than the required degree of accuracy of the robot, typically by an order of magnitude.
It is to be appreciated that with the present invention the joint axis positions are captured by the controller 20 when the touch probe 14 touches the sphere 12. Mathematical calibration equations, which are explained in later sections in greater detail, are then employed by the processor 21 to optimize the robot model parameters using as inputs the joint axis positions for various points on the surface of each sphere 12a and 12b determined by encounters with the touch probe. Error in this recording will influence the resulting robot parameters' optimization, but various error reduction methods can be employed. For instance, increasing the number of measurement poses will reduce the effect of random errors and touching the sphere 12 in a symmetric fashion will reduce the effect of systematic errors.
In the present invention, the robot model 37, which digitally represents the robot 10, is employed by the processor 21. Ideal Cartesian coordinates of the probe tip 18 are calculated from this model using nominal values of the model parameters. To determine actual robot model parameters and with reference made again to
As mentioned above, the joint axis positions of the robot arm 8 are recorded at each pose where the touch probe 14 makes contact with a surface of one of the spheres 12a and 12b. With these recorded values, a number of calculations are then performed by the processor 21 which are illustrated by
Next, the processor 21 calculates the variation in the calculated Cartesian coordinates of a sphere center as a function of variations in the Cartesian coordinates of four points on the surface of the sphere in step 130. In step 140, the processor 21 calculates the variation in the coordinates of the touch-probe tip as a function of variations in the parameters of the model. In step 150, the processor 21 calculates the variation in the calculated coordinates of a sphere center as a function of variations in the model parameters from the results of steps 130 and 140. In step 160, the processor 21 optimizes the robot's model parameters by minimizing the errors in sphere centers and by constraining the calculated distance between the spheres to the known length standard 28. The processor 21 uses as inputs the results of steps 120 and 150, and the joint axis positions for points on the surface of each of the two spheres for a large number of measurements on the spheres in different positional poses of the robotic arm. If the actual robot model parameters are different from the nominal values for these parameters, then the calculated “ideal” coordinates of the spheres will differ for each robot pose. It is to be appreciated that the invention determines the best-fit values for the actual model parameters that minimize the scatter in the cluster of sphere centers calculated in this manner, in effect making corrections to the nominal model parameter values.
In another embodiment, the present invention separates the touch probe dimensions from the robot model parameters. Measurements from the dial indicator 35 on the mounting faceplate 36 of the tool attachment mechanism 16 allow tilt and displacement of the mounting faceplate to be determined. Using these measurements and the dimensions of the touch probe 14, the processor 21 calculates the robot model parameters for the end joint and removes the touch probe displacements and orientations (i.e. yaw, pitch and roll) with respect to the mounting faceplate 36 of the attachment mechanism 16.
Once the corrected model parameters are entered into the robot controller 20, the present invention will allow accurate positioning of any new tool attached to the attachment mounting faceplate 36. Tool dimensions are measured independently with, for example, a coordinate measuring machine. Mounting faceplate displacements and orientations are implemented either as a frame on the end of the robot or by transforming the tool dimensions and orientations by the mounting faceplate displacements and orientations through a matrix product and extraction of the net dimensions and orientations. If the mounting faceplate displacements and orientations are entered as a frame on the robot, then the tool frame must be implemented as a frame relative to the mounting faceplate.
In still another embodiment, the present invention determines the model parameters for the joint farthest from the robot base, and the tool frame for the tool attachment mechanism 16. In this embodiment, the processor 21 performs a number of additional calculations on the results of the robot model parameter optimization described above in reference to
As mentioned above, the joint values of the robot, which are recorded when the controller receives the touch probe signal, are used by the processor 21 to calculate the ideal Cartesian coordinates of points on the surface of each sphere. Then the controller calculates the ideal Cartesian coordinates of each sphere center from the Cartesian coordinates of sets of four points on the surface of the sphere. These steps are described next in greater detail.
In the Denavit-Hartenberg model, a homogeneous transformation matrix is constructed to account for the translations and rotations of a given link between two joints due to the link parameters $a_i$, $d_i$, $\alpha_i$, and $\theta_i$. Homogeneous transformation matrices are 4×4 matrix transformations that account for both rotation and translation in three dimensions through matrix multiplication.
The usual rotation matrix is the upper left 3×3 sub-matrix of the homogeneous transformation matrix, and the translation vector is contained in the upper three elements of the right-most column. The lower right element of the homogeneous matrix has a value of one for the purposes of the analysis described herein. The transformation matrix, $M_{i-1,i}$, from frame i−1 defined by the $x_{i-1}$, $y_{i-1}$, and $z_{i-1}$ axes to frame i defined by the $x_i$, $y_i$, and $z_i$ axes, is given by the product of four matrices representing, from right to left, a rotation of $\alpha_i$ about the x axis, a translation of $a_i$ along the x axis, a rotation of $\theta_i$ about the z axis, and a translation by $d_i$ along the z axis:
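The matrix itself is not reproduced in the text above. Under the Denavit-Hartenberg convention just described, evaluating this product gives the standard form (stated here for reference; it is a well-known result rather than text from the original):

$$
M_{i-1,i} =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
$$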
The complete robot model (i.e., model 37 in
The generalized rotation, R, can be written in matrix form:
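The matrix form of R is likewise not reproduced here. A standard yaw-pitch-roll (rotation about z, then y′, then x″) form, consistent with the expanded first row that appears later in the expression for Δx, is:

$$
R =
\begin{bmatrix}
\cos\alpha\cos\beta & -\sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma & \sin\alpha\sin\gamma + \cos\alpha\sin\beta\cos\gamma \\
\sin\alpha\cos\beta & \cos\alpha\cos\gamma + \sin\alpha\sin\beta\sin\gamma & -\cos\alpha\sin\gamma + \sin\alpha\sin\beta\cos\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{bmatrix}
$$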
where α is a yaw rotation about z, β is a pitch rotation about y′, and γ is a roll rotation about x″.
As illustrated by
The present invention uses this matrix to transform coordinates from the base frame (unprimed) to the rotated frame (primed) by multiplying a coordinate vector by R:
$\vec{\tau}\,' = R\,\vec{\tau}$

where $\vec{\tau}$ is any vector in the unprimed coordinates and $\vec{\tau}\,'$ is the same vector in the rotated frame.
However, it is to be appreciated that other Euler angle conventions are possible and may be used with this invention.
Symmetry forces the center of the sphere to be in any plane that perpendicularly bisects a segment joining any two points on the surface of the sphere. If the two points on the surface of the sphere are identified by P1 and P2, as shown in
$(\vec{P}_2 - \vec{P}_1)\cdot\bigl(\vec{X} - (\vec{P}_1 + \vec{P}_2)/2\bigr) = 0$

$(\vec{P}_2 - \vec{P}_1)\cdot\vec{X} - (\vec{P}_2 - \vec{P}_1)\cdot(\vec{P}_1 + \vec{P}_2)/2 = 0$

$(\vec{P}_2 - \vec{P}_1)\cdot\vec{X} = (\vec{P}_2 - \vec{P}_1)\cdot(\vec{P}_1 + \vec{P}_2)/2$
where $(\vec{P}_2 - \vec{P}_1)$ is the vector between P1 and P2, $(\vec{P}_1 + \vec{P}_2)/2$ is the vector to the midpoint of the segment, and $\vec{X}$ is the vector defining the locus of points on the perpendicular bisecting plane (i.e. $\vec{X}$ defines the collection of points on the plane). Four non-coplanar points on the surface of the sphere can be used to construct three line segments. The center of the sphere is the point of intersection of the three planes that perpendicularly bisect these segments. The equations for the three planes can be combined into one matrix equation:
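The combined matrix equation is not reproduced in the text. One way to write it, assuming the three segments are taken between consecutive points $P_1P_2$, $P_2P_3$, and $P_3P_4$ (the particular pairing is an assumption made here for illustration), is:

$$
\begin{bmatrix}
(\vec{P}_2 - \vec{P}_1)^{T} \\
(\vec{P}_3 - \vec{P}_2)^{T} \\
(\vec{P}_4 - \vec{P}_3)^{T}
\end{bmatrix}
\vec{X}
=
\begin{bmatrix}
(\vec{P}_2 - \vec{P}_1)\cdot(\vec{P}_1 + \vec{P}_2)/2 \\
(\vec{P}_3 - \vec{P}_2)\cdot(\vec{P}_2 + \vec{P}_3)/2 \\
(\vec{P}_4 - \vec{P}_3)\cdot(\vec{P}_3 + \vec{P}_4)/2
\end{bmatrix}
$$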
This equation can be solved for $\vec{X}$ in a number of ways (e.g. matrix inversion, Cramer's rule, etc.). Appropriate selection of the points on the surface of the sphere will minimize calculation errors by ensuring that the three planes are nearly normal to each other. In the present invention, surface touch points are grouped into four-point sets that are used to calculate the center of the sphere. While three points on the surface are sufficient to define the sphere center if the radius of the sphere and the radius of the touch probe stylus tip are known, there is uncertainty in the amount of deflection required for the touch probe to generate a pulse. This introduces both a random and a systematic error in the effective distance between the center of the sphere and the center of the touch probe stylus tip. Using four points to find the center of the sphere eliminates any systematic error in this dimension.
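As a concrete illustration of this step, the sketch below solves the three-plane system for a sphere center from four surface points. The function name, the consecutive-segment pairing, and the example data are illustrative assumptions, not taken from the original.

```python
import numpy as np

def sphere_center_from_four_points(P):
    """Estimate a sphere center from four non-coplanar surface points.

    P is a 4x3 array of Cartesian points on the sphere surface. Each pair of
    consecutive points defines a segment; the center lies on the plane that
    perpendicularly bisects each segment, so three such planes intersect at
    the center.
    """
    P = np.asarray(P, dtype=float)
    A = np.empty((3, 3))
    b = np.empty(3)
    for k in range(3):
        d = P[k + 1] - P[k]           # segment direction (plane normal)
        m = 0.5 * (P[k] + P[k + 1])   # segment midpoint (a point on the plane)
        A[k] = d
        b[k] = d @ m
    return np.linalg.solve(A, b)      # intersection of the three bisecting planes

# Example: noiseless points on a sphere of radius 2 centered at (1, -2, 3)
center = np.array([1.0, -2.0, 3.0])
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.6, 0.8, 0.0]])
pts = center + 2.0 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
print(sphere_center_from_four_points(pts))   # ~ [ 1. -2.  3.]
```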
If the actual robot model parameters are different than the nominal model parameters, then the calculated “ideal” coordinates of the sphere centers will differ for each set of four touch points. The present invention determines the best-fit values for the model parameters that minimize the scatter in the cluster of sphere centers calculated in this manner, in effect, making corrections to the nominal model parameter values.
The present invention uses a Jacobian equation, which is a linearization of the robot model 37 that quantifies the effect that variations in the model parameters have on the calculated sphere center coordinates. The Jacobian equation is constructed for a point-in-space constraint (i.e. the center of the sphere), based on joint values representing touch points on the surface of the sphere. Solving the Jacobian equation yields the least-squares best-fit set of model parameters. The present invention finds the minimum in parameter space from differences between the actual sphere center and the centers calculated from the inaccurate model parameters and the recorded joint values. The method is iterative: the parameters are better estimated in each iteration, and each subsequent Jacobian is a better estimate of the Jacobian for the best-fit set of parameters. When corrections to the coordinates of the constraint point approach zero, the routine of the present invention is considered to have converged to its best fit.
A minimization algorithm of the present invention uses a Jacobian matrix reflecting the change in Cartesian coordinates of the touch probe stylus tip 18 as a function of changes in the model parameters. In general, the Jacobian, J, is defined as follows:
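The defining equation is not reproduced in the text. In the standard element-wise form, writing $\vec{p} = (x, y, z)^{T}$ for the touch probe tip position and $\vec{\phi} = (\phi_1, \ldots, \phi_K)^{T}$ for the model parameter vector (symbol names chosen here for illustration), the definition reads:

$$
J_{jk} = \frac{\partial p_j}{\partial \phi_k}, \qquad j = 1, 2, 3, \quad k = 1, \ldots, K
$$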
where the vector whose components appear in the numerator of the partial derivatives represents the touch probe tip position, the vector in the denominator represents the model parameters, and K is the number of model parameters. The Jacobian is a function of the joint positions (i.e. poses) of the robot 10 and can be calculated directly from the matrix equation for the model. Four “point” Jacobians, $J_{P1}$, $J_{P2}$, $J_{P3}$, and $J_{P4}$, are determined in this way, one for each of the four touch points on the surface of a sphere. Another Jacobian is determined for the calculation of a sphere center from the sets of four points on the sphere surface. This “center” Jacobian, $J_c$, reflects the changes in the calculated sphere center Cartesian coordinates as a function of changes in the Cartesian coordinates of the four points on the sphere surface.
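The explicit form of $J_c$ is not reproduced above. Written out as the 3×12 matrix of partial derivatives implied by the description, it is:

$$
J_c =
\begin{bmatrix}
\partial x_c/\partial x_1 & \partial x_c/\partial y_1 & \partial x_c/\partial z_1 & \cdots & \partial x_c/\partial z_4 \\
\partial y_c/\partial x_1 & \partial y_c/\partial y_1 & \partial y_c/\partial z_1 & \cdots & \partial y_c/\partial z_4 \\
\partial z_c/\partial x_1 & \partial z_c/\partial y_1 & \partial z_c/\partial z_1 & \cdots & \partial z_c/\partial z_4
\end{bmatrix}
$$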
where xc, yc, and zc are the coordinates of the sphere center, x1, y1, and z1 are the coordinates of the first point on the sphere surface, x2, y2, and z2 are the coordinates of the second point on the sphere surface, and so forth. A new Jacobian is defined, which is the product of Jc and a matrix that is constructed from the four point Jacobians:
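The product is not shown explicitly in the text. Assuming the four point Jacobians are stacked into a single 12×K matrix, the new Jacobian takes the form:

$$
\hat{J} = J_c
\begin{bmatrix}
J_{P1} \\ J_{P2} \\ J_{P3} \\ J_{P4}
\end{bmatrix}
$$

so that $\hat{J}$ is a 3×K matrix.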
The new Jacobian, Ĵ, reflects the chain rule for differentiation in matrix form and quantifies the change in the Cartesian coordinates of the sphere center as a function of changes in the robot model parameters.
A large number of sphere centers and the corresponding matrices, $\hat{J}$, are determined in this way for a variety of robot poses for each sphere. The poses are selected to maximize the range of motion of each joint, which maximizes the envelope of applicability of the model parameter results. The Jacobians, $\hat{J}$, for these poses are then combined and processed in a manner generalized from a ball-in-socket mechanism procedure. See Marco Antonio Meggiolaro, “Achieving Fine Absolute Positioning Accuracy in Large Powerful Manipulators”, Doctoral Thesis, MIT, September 2000, the disclosure of which is herein incorporated by reference.
It is known to use a ball-in-socket mechanism on the robot end-effecter. The center of the ball is stationary and independent of robot pose. This constraint allows the robot to be used in its own calibration; however, a ball-in-socket mechanism is difficult to implement on a motorized robot. Robot positioning errors due to incorrect model parameters will produce strain in the robot as it is moved into different poses. For large, powerful robots, of the type commonly used in manufacturing companies, the inherent inaccuracy in the robots likely would tear apart the ball-in-socket mechanism, or at the least, would cause unnatural strains in the robot joints. The present invention overcomes this problem by using measured points on the surface of each of two spheres to calculate a stationary point at the center of each sphere.
The Jacobians, $\hat{J}$, for these poses are then combined, as described below. A column vector, $\Delta\vec{X}$, is created by stacking the three-vector differences between the Cartesian coordinates of the sphere centers determined from the touches on the surface and the actual centers of the spheres. As used herein, the “c” subscript indicates a center coordinate determined from touches on the sphere surface. In addition, the “c” subscript is followed by the index of the center, where L is the total number of centers being calculated (i.e. sets of four touch points), half of which are for the first sphere (above the horizontal line in the vector), and half of which are for the second sphere (below the line). Also as used herein, the subscripts “a1” and “a2” indicate the actual sphere center coordinates for the two spheres, respectively, which are unknown. The vector is as follows:
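The vector is not reproduced in the text. In the notation just defined it has the form:

$$
\Delta\vec{X} =
\left[\begin{array}{c}
\vec{X}_{c1} - \vec{X}_{a1} \\
\vdots \\
\vec{X}_{c\,L/2} - \vec{X}_{a1} \\
\hline
\vec{X}_{c\,L/2+1} - \vec{X}_{a2} \\
\vdots \\
\vec{X}_{c\,L} - \vec{X}_{a2}
\end{array}\right]
$$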
In order to find the best-fit solution the following equation must be solved:
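The equation itself is not shown in the text. Stacking the $\hat{J}$ matrices to match $\Delta\vec{X}$, it can be written as:

$$
\begin{bmatrix}
\hat{J}_1 \\ \hat{J}_2 \\ \vdots \\ \hat{J}_L
\end{bmatrix}
\vec{\varepsilon} = \Delta\vec{X}
$$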
where $\vec{\varepsilon}$ is the vector of errors, or corrections, to the model parameters, and the subscript on each $\hat{J}$ is an index for the sphere center; each Jacobian is evaluated for its set of four touch points.
This equation is modified to allow the processor 21 to calculate the actual center of each sphere at the same time as the model parameters are calculated. The present invention accomplishes this for the two spheres by appending to the right-hand side of each Jacobian, $\hat{J}$, a 3×3 identity matrix, I, and then a 3×3 null matrix, O, when calculating the first sphere center, or appending a 3×3 null matrix and then a 3×3 identity matrix when calculating the second sphere center. Next, these modified matrices are stacked, as described above, to create a new Jacobian, $\tilde{J}$. These last two steps are illustrated in the following equation:
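The equation is not reproduced in the text. Following the description of the appended identity and null blocks, the augmented Jacobian has the block structure shown below (the layout is an illustration consistent with that description):

$$
\tilde{J} =
\left[\begin{array}{ccc}
\hat{J}_1 & I & O \\
\vdots & \vdots & \vdots \\
\hat{J}_{L/2} & I & O \\
\hat{J}_{L/2+1} & O & I \\
\vdots & \vdots & \vdots \\
\hat{J}_{L} & O & I
\end{array}\right]
$$

How the corresponding center correction terms enter the error vector is described in the following paragraphs.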
The present invention, at this point, splits the actual sphere centers into trial sphere centers plus a correction term:
where the prime indicates trial values that are adjusted in each iteration of the solution, and the δ values represent correction terms. The leftmost equation is repeated from above for clarification. The next equation shows the split of the actual sphere centers into a trial term and a correction term. The next equation shows how the correction vector is separated from the other vector, and the final equation writes the result in a convenient form. The best-fit equation is then modified as follows:
where the first line shows the original best-fit equation again, the second line shows how the correction vector, $\delta\vec{X}_a$, is incorporated into a generalized error vector, $\vec{\varepsilon}\,'$, and the last equation shows the resulting matrix equation in simpler notation. The final matrix equation is solvable by multiplying both sides of the equation by the pseudoinverse of $\tilde{J}$:

$\tilde{J}^{\#}\,\Delta\vec{X}_c = \vec{\varepsilon}\,'$

where $\tilde{J}^{\#}$ is the pseudoinverse of $\tilde{J}$:

$\tilde{J}^{\#} = (\tilde{J}^{t}\tilde{J})^{-1}\tilde{J}^{t}$

and $\tilde{J}^{t}$ is the transpose of $\tilde{J}$. The solution is a set of corrections to the model parameters, $\vec{\varepsilon}$, and to the trial Cartesian coordinates of the sphere centers: $(\delta x_{a1}, \delta y_{a1}, \delta z_{a1})$ and $(\delta x_{a2}, \delta y_{a2}, \delta z_{a2})$.
Folding these corrections back into both the model and the trial sphere centers allows this process to be iterated, resulting ultimately in the least-squares best-fit model parameters for this set of measurements. Iteration is necessary since the Jacobian formulation is a linearization of the actual robot. The closer the model parameters and trial sphere centers are to their actual values, the better the resulting calculations will be. Iteration allows the model parameters and sphere centers to approach their actual values, which results in a Jacobian formulation that is progressively better with each iteration. The resulting optimized model parameters and sphere centers minimize the scatter in the centers of each sphere for the points found on the surface of the spheres. In other words, this approach minimizes the difference between the actual sphere centers and the centers determined from the optimized model parameters and the joint values captured on the sphere surface.
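A compact numerical sketch of this iterate-and-correct loop is given below, assuming a helper routine, here called `centers_and_jacobian`, that performs the model-dependent calculations described above; that routine, the variable names, and the sign conventions are illustrative assumptions rather than material from the original. The dependent-column removal and length-scale constraint discussed in the following paragraphs are omitted for brevity.

```python
import numpy as np

def calibrate(params, trial_centers, joint_sets, centers_and_jacobian,
              tol=1e-9, max_iter=50):
    """Iteratively solve the stacked Jacobian equation for parameter corrections.

    params               -- current robot model parameter vector, shape (K,)
    trial_centers        -- trial coordinates of the two sphere centers, shape (2, 3)
    joint_sets           -- list of captured joint-value sets, one per calculated
                            center; the first half belong to sphere 1, the rest
                            to sphere 2
    centers_and_jacobian -- callable (params, joints) -> (center_xyz, J_hat),
                            where J_hat is the 3xK Jacobian of that calculated
                            center with respect to the model parameters
    """
    half = len(joint_sets) // 2
    for _ in range(max_iter):
        rows, rhs = [], []
        for i, joints in enumerate(joint_sets):
            center, J_hat = centers_and_jacobian(params, joints)
            sphere = 0 if i < half else 1
            # Append identity/null blocks so the sphere-center corrections are
            # solved for together with the parameter corrections.
            blocks = ([np.eye(3), np.zeros((3, 3))] if sphere == 0
                      else [np.zeros((3, 3)), np.eye(3)])
            rows.append(np.hstack([J_hat] + blocks))
            rhs.append(center - trial_centers[sphere])  # calculated minus trial
        J_tilde = np.vstack(rows)
        dX = np.concatenate(rhs)
        eps = np.linalg.pinv(J_tilde) @ dX              # least-squares solution
        # With dX defined as calculated-minus-trial and J_hat = d(center)/d(params),
        # the parameter correction enters with a minus sign under this convention.
        params = params - eps[:-6]
        trial_centers = trial_centers + eps[-6:].reshape(2, 3)
        if np.linalg.norm(eps[-6:]) < tol:              # center corrections ~ 0
            break
    return params, trial_centers
```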
There are generally some model parameters that are not independent, one solution of which is to remove appropriately selected columns from the Jacobian prior to performing the pseudo-inverse. Corresponding elements of the error vector $\vec{\varepsilon}$ must also be removed. In addition, if steps are not taken to constrain a selected link length in the algorithm, then a trivial solution results: the “optimized” model parameters are found to be equal to the nominal values, and the “actual” sphere center is determined to reside at the robot base frame origin.
As described above, the known distance between the two spheres 12a and 12b provides the length scale 28 (
The approach described above, and other approaches to robot calibration, have difficulty in determining model parameters for the joint farthest from the robot base, the “end joint”. The method of the present invention described thus far provides model parameters that include the touch probe as if it were an integral part of the robot (i.e. part of the end joint). Additional steps are then used by the processor 21 to separate the touch probe dimensions from the robot model parameters. This is useful for applications that require the interchange of tools, since it eliminates the need to re-calibrate the robot for every tool change.
In the plate embodiment described above, the end-joint angular offset determined from the optimization scheme also described above is used as the end-joint offset for the robot provided that a frame is defined for the attachment mechanism that removes (or accounts for) the angular offset for the touch probe tip as well as accounting for the other attachment mechanism parameters. Additional measurements on the tool attachment mechanism and a reference plate, recorded by the processor 21 as the end-joint is rotated and manipulated, allow the tilt and displacement of this mounting face to be determined. These measurements, together with touch probe dimensions and outputs from the Jacobian methodology described above, are used by the processor 21 to calculate the remaining robot model parameters for the end joint and additional parameters for the attachment mechanism with respect to the end joint. These final steps are described below.
With reference made again to
$\mu = \tan^{-1}(d/r)$.
The phase of the sine function is used to define the axis about which the normal to the reference plate 40 is tilted. The tilt axis is perpendicular to the axis of the end joint 6 and is specified by an angle of rotation, v, of the tilt axis about the end joint axis, where v is equal to the phase, as illustrated by
The tilt angle and the angle of the tilt axis are sufficient to solve for the unit normal vector to the reference plate 40 in end-joint frame coordinates $(t_x, t_y, t_z)$:

$t_x = \cos(\mu)$

$t_y = \sin(\mu)\cos(v)$

$t_z = \sin(\mu)\sin(v)$
Yaw (α) and pitch (β) can be extracted from this normal vector with simple trigonometric calculations:

$\beta = \sin^{-1}(-t_z)$

$\alpha = \tan^{-1}(t_y/t_x)$
Roll must be determined with additional steps, described below.
Determination of displacement of the attachment mechanism faceplate 36 in a plane perpendicular to the axis of rotation of the end joint 6 is accomplished with similar run-out measurements, this time on the outside of a cylindrical surface typically found on an attachment mechanism 16. It is to be appreciated that this cylindrical surface is centered on the attachment mechanism 16 to provide tool alignment. Displacement is plotted as a function of end joint angle, and a sine function is fit to the data. The magnitude of this sine function establishes the magnitude of displacement, and the phase gives the angle of that displacement about the end joint axis of rotation. Roll is determined in the next step.
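Both the tilt determination above and this displacement determination reduce to fitting a sine function to run-out readings taken at known end-joint angles. A minimal sketch of such a fit is shown below; the function name and the synthetic example data are illustrative assumptions.

```python
import numpy as np

def fit_runout(angles_rad, readings):
    """Fit readings = A*sin(angle + phase) + offset by linear least squares.

    Returns (magnitude, phase): the magnitude gives the tilt (or displacement)
    amplitude and the phase gives the orientation of the tilt/displacement axis
    about the end-joint axis of rotation.
    """
    # A*sin(t + p) = (A*cos p)*sin t + (A*sin p)*cos t, so the fit is linear
    # in the unknowns c1 = A*cos p, c2 = A*sin p, c3 = offset.
    M = np.column_stack([np.sin(angles_rad), np.cos(angles_rad),
                         np.ones_like(angles_rad)])
    c1, c2, c3 = np.linalg.lstsq(M, readings, rcond=None)[0]
    return np.hypot(c1, c2), np.arctan2(c2, c1)

# Example: 12 dial-indicator readings around one revolution of the end joint
angles = np.radians(np.arange(0, 360, 30))
readings = 0.0055 * np.sin(angles + 0.4) + 0.001   # synthetic data, inches
print(fit_runout(angles, readings))                # ~ (0.0055, 0.4)
```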
At this point, robot parameters have been determined which include the touch probe 14 as if the probe were a part of the robot arm 8. When these model parameters have been entered into the controller 20 and the robot arm 8 is positioned in some given orientation, then the touch probe tip 18 (if it were mounted on the robot) is in a known location and orientation relative to the robot coordinate frame. For instance, for robot yaw, pitch and roll each at zero degrees, the probe 14 will be oriented so that the probe tip offset 38 is in, say, the z-direction relative to the robot base frame. Since the offset 38 of the touch probe tip 18 will not be aligned with the attachment mechanism 16 in any particular orientation, these robot model parameters will not naturally align the attachment mechanism to the robot, which is important for applications requiring interchange of tools. The present invention in this step finds the roll angle that aligns the attachment mechanism frame (its local z-axis) with the robot z-axis. Dial indicator measurements on the edge of the reference plate 40, taken as the robot arm 8 is moved up and down vertically, indicate the twist angle that this edge makes with the robot vertical axis about the axis of rotation of the end joint 6.
It is to be appreciated that off-line measurements on the edge of the plate 40 will indicate the twist of the edge relative to the attachment mechanism frame. In one embodiment, the present invention uses a tool attachment mechanism coupled to the conventional tool-mounting fixture 44 provided on reference plate 40. The tool attachment mechanism used is identical to that found on the robot end joint 6. This mounting fixture assembly defines the coordinate system for measurements relative to the mounting surfaces of the attachment mechanism, such as measured by a coordinate measuring machine (CMM). The y-z plane is aligned with the horizontal mating surface of the attachment mechanism, with the x-axis centered on the axis of the alignment cylinder found on this mechanism. The z-axis is aligned with some feature on the attachment mechanism such as an alignment pin. The roll of the robot attachment mechanism is the difference between the twist angle of the reference plate measured on the robot and the twist angle measured on the reference plate relative to the mounting fixture.
Incorporating these findings into a frame for the attachment mechanism requires calculating the offset and orientation of the attachment mechanism faceplate relative to the end joint frame. Orientation (yaw, pitch and roll) comes directly from the steps outlined above. Offsets in the y and z-axes are equal to the magnitude of the off-axis displacement times the sine or cosine of the displacement angle, respectively. The offset in the x-axis, Δx, is derived from the robot model parameters found earlier, which have the touch probe dimensions embedded in them. Δx is the length along the end-joint axis of rotation from the origin of the end-joint frame to the mounting face of the attachment mechanism. A mathematical transformation is performed by the processor 21 to calculate Δx from the length to the probe tip, touch probe dimensions and the attachment mechanism orientation and offsets. This transformation results in the following equation:
$\Delta x = d - \{x_p\cos(\alpha)\cos(\beta) + y_p[-\sin(\alpha)\cos(\gamma) + \cos(\alpha)\sin(\beta)\sin(\gamma)] + z_p[\sin(\alpha)\sin(\gamma) + \cos(\alpha)\sin(\beta)\cos(\gamma)]\} - \Delta x_0$
where d is the distance, found from the model parameter optimization routine, to the touch probe tip along the axis of rotation of the end joint; $\Delta x_0$ is the nominal offset stored in the robot controller for the distance to the robot end-joint faceplate (without the attachment mechanism) along the axis of rotation of the end-joint frame; and $x_p$, $y_p$, and $z_p$ are the touch probe tip offsets determined by measurements relative to the mounting fixture. All other symbols are defined above. Again, this expression is derived from the matrix product of a generalized rotation with yaw, pitch and roll times a vector with coordinates $(x_p, y_p, z_p)$.
Once the corrected model parameters are entered into the controller 20, the present invention provides accurate positioning of any new tool mounted to the attachment mechanism. Tool dimensions may be measured independently with a coordinate measuring machine relative to the mounting fixture. Attachment mechanism displacements and orientations are implemented either as a frame on the end of the robot or by transforming the tool dimensions and orientations by the attachment mechanism displacements and orientations. This latter step is accomplished through a product of the homogeneous transformation matrices that represent the frame of the attachment mechanism and the frame of the tool, respectively, with the net dimensions and orientations of the tool/attachment mechanism combination frame being extracted from the resulting transformation matrix. Alternatively, if the attachment mechanism displacements and orientations are entered as a separate frame on the robot, then the tool frame must be implemented as a frame relative to the attachment mechanism frame. The two methods are mathematically equivalent, the decision of which method to use being constrained by the capabilities of the controller in use (i.e. the controller software may not allow relative tool frames).
Each scatter plot in
In one experimental test, the positional accuracy of the system was required to be on the order of 0.010 inches. The demonstration was conducted on a six-axis, rotary jointed robot, such as robot 10 illustrated by
where $\Delta x_c$ is the uncertainty in the x coordinate of the center of the sphere; $x_c$ is the x-axis center coordinate; $x_1, y_1, \ldots, z_4$ are the coordinates of the four touch points on the surface of the sphere, with associated uncertainties $\Delta x_1, \Delta y_1, \ldots, \Delta z_4$; and $J_{c\,1,i}$ is the variation in $x_c$ resulting from variation in the ith coordinate on the sphere surface ($i = 1, 2, 3, 4, 5, \ldots, 12$ for $x_1, y_1, z_1, x_2, y_2, \ldots, z_4$, respectively). Assuming the probe tip uncertainties ($\Delta x_1, \Delta y_1, \Delta z_1, \Delta x_2, \Delta y_2, \ldots, \Delta z_4$) are all equal and letting this uncertainty be represented by $\Delta$, then the following must hold:
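The intermediate equation is not reproduced in the text; it is simply the relation whose inversion appears immediately below:

$$(\Delta x_c)^2 = \Delta^2\left(J_{c\,1,1}^2 + J_{c\,1,2}^2 + \cdots + J_{c\,1,12}^2\right)$$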
This equation can be inverted to solve for Δ:
$\Delta^2 = (\Delta x_c)^2 / \left(J_{c\,1,1}^2 + J_{c\,1,2}^2 + \cdots + J_{c\,1,12}^2\right)$
Similar expressions can be derived for the y and z components. Each center residual (difference between actual and calculated sphere center) provides an estimate of $\Delta x_c$, from which an estimate is made for $\Delta^2$. All the $\Delta^2$ estimates can be averaged to arrive at a total variance for probe positioning errors. Pooling data from all poses and all directions (x, y, and z) gives a standard deviation for probe tip positioning error of 0.0017 inches for the measurements in
In another embodiment, the present invention is performed on a subset of the model parameters, which is helpful since motorized robots must be able to perform reverse-kinematics calculations. Reverse kinematics is much simpler if only the nominally nonzero link lengths and the joint offsets are optimized, leaving all other parameters fixed. However, in this case, actual errors in parameters that are not optimized may be folded into parameters that are being optimized. Optimization will reduce the positioning errors for the measurements being taken (i.e. for the touch probe), but there may be orientation errors that could produce additional positional errors when using tools of dimensions different from those of the touch probe.
In one experimental test, with the processor 21 using the above measurements from the previous experimental test, the processor 21 optimized a reduced set of model parameters, wherein the standard deviation of the center residuals became, for the respective coordinates: x: 0.0026 inches, y: 0.0035 inches, and z: 0.0040 inches. Likewise, probe tip positioning error was found to be 0.0031 inches.
In the experimental tests, the present invention also provided information on the maximum deviations of the centers. For the nominal parameters, the maximum absolute deviations of the measured centers from the average centers were found to be: x: 0.099 inches, y: 0.139 inches, and z: 0.162 inches. For the full set of optimized parameters, the maximum absolute deviations were found to be: x: 0.0073 inches, y: 0.0043 inches, and z: 0.0060 inches. For the partial set of optimization parameters, the maximum deviations were found to be: x: 0.0083 inches, y: 0.010 inches, and z: 0.012 inches. Again, the full set of optimized parameters performs best with respect to this metric, but there is still a remarkable improvement over the uncorrected data when using a partial set of parameters. This test clearly demonstrates the advantage of accuracy calibration.
In another experimental test of the present invention, a partial set of model parameters obtained for the robot was entered into the controller software for measurement of the attachment mechanism frame. The partial set of parameters was actually a subset of the full set of parameter values found from a full parameter optimization. A partial optimization was not used. The touch probe was dimensioned on a CMM making use of the CMM mounting fixture. The twist of the attachment mechanism reference plate was also determined using a CMM and the mounting fixture. Out-of-plane run-out measurements were made with a dial indicator at 12 points on the tool attachment mechanism reference plate mounted on the robot as the end joint 6 (
With all of this data, the processor 21 calculated the frame for the attachment mechanism. This attachment mechanism frame was loaded into the robot controller as a tool frame. Out-of-plane run-out measurements were then taken at 12 points on the attachment mechanism reference plate as the robot was manipulated through a roll of 360 degrees (see
For the experimental test, initial measurements on the attachment mechanism showed the tilt of the reference plate to be 0.0055 inches at a measurement radius of 3.75 inches. This is an angle of 0.084 degrees. After adjusting for the calculated frame, the tilt was measured at 0.0009 inches at the same 3.75-inch radius, for an angle of 0.015 degrees. For a 20-inch long tool, the tip displacement for this frame would be only 0.005 inches from ideal, one-sixth of the uncorrected displacement. Initial off-axis displacement measurements showed the attachment mechanism to be shifted 0.0113 inches off center. After entering the correction, the displacement was measured at 0.0026 inches. Finally, the initial twist angle measured on the side of the reference plate was near zero degrees, but the twist of the reference plate measured with the CMM was 0.315 degrees. After entering the correction to roll, the twist of the reference plate was found to be 0.262 degrees measured with the CMM, with a difference of 0.053 degrees from ideal. These three measurements show a significant improvement for attachment mechanism errors. Accordingly, the above mentioned experimental tests have shown the value of the present invention in finding model parameters and the attachment mechanism frame that result in improved accuracy as reflected in reduced scatter in the measured sphere centers and reduced tilt, twist and displacement of a tool.
In another embodiment illustrated by
In this embodiment, the two commanded poses A and B would be selected to bracket a preset “constraint displacement”. An interpolation factor is calculated as the ratio of the displacement from pose A to the constraint displacement, to the displacement from pose A to that of pose B. If the output is linear with displacement, the interpolation factor is equivalent to the ratio of the output difference between pose A and the preset constraint output, to the output difference between pose A and pose B. Other relations can be modeled if the output is not linearly proportional to the displacement. Note that if the output varies linearly with displacement, then the displacement-measuring device may not need to be calibrated.
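A short sketch of this interpolation, and of the constraint pose described in the next paragraph, is given below for a displacement-measuring device whose output is linear in displacement; the function name, the example readings, and the linearity assumption are illustrative, not taken from the original.

```python
import numpy as np

def constraint_pose(pose_a, pose_b, out_a, out_b, out_constraint):
    """Interpolate between two commanded Cartesian poses A and B.

    pose_a, pose_b -- commanded Cartesian poses bracketing the constraint,
                      e.g. (x, y, z, yaw, pitch, roll)
    out_a, out_b   -- displacement-device outputs recorded at poses A and B
    out_constraint -- device output corresponding to the preset constraint
                      displacement
    For a linear device the interpolation factor equals the ratio of output
    differences, which equals the ratio of displacements. Linear interpolation
    of the pose components is adequate when A and B are closely spaced.
    """
    f = (out_constraint - out_a) / (out_b - out_a)   # interpolation factor
    return np.asarray(pose_a) + f * (np.asarray(pose_b) - np.asarray(pose_a))

# Example: device reads 0.10 in at pose A, 0.40 in at pose B, and the preset
# constraint displacement corresponds to a reading of 0.25 in.
pose_a = [10.0, 5.0, 20.0, 0.0, 0.0, 0.0]
pose_b = [10.0, 5.0, 19.7, 0.0, 0.0, 0.0]
print(constraint_pose(pose_a, pose_b, 0.10, 0.40, 0.25))
```

The resulting constraint pose would then be converted to joint values by inverse kinematics using the current robot model, as described in the following paragraph.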
The interpolation factor will be used to interpolate between the commanded poses A and B in Cartesian coordinates to determine a “constraint pose”. Joint values for this constraint pose are determined by inverse kinematics using the nominal, or current, robot model parameters. At this point, parameter identification can continue as stipulated by any appropriate geometric constraint calibration method using the calculated joint configurations of the constraint pose and a modified geometric constraint. Modification of the geometric constraint can be explained as follows.
If the displacement-measuring device is oriented so as to measure displacements perpendicular to the surface of the calibration object, then the constraint pose will represent a preset and constant displacement of the tool tip beyond the initial contact point with the calibration object. This constraint pose will be effectively equivalent to a touch probe encounter (and pose capture) of a tool tip with the surface of a calibration object, but with an effective tool tip length that is smaller than the actual tool tip length, smaller by the preset constraint displacement distance. For instance, if the constraint displacement is set to 0.25 inches, then the constraint poses will correspond to touch probe encounters of a shorter tool with a calibration sphere, the tool being 0.25 inches shorter than the actual tool. Making the appropriate adjustment to the geometric constraint to allow for the constraint displacement, and using the calculated joint configurations determined from the constraint pose, allows geometric constraint routines to be implemented without reading joint configurations and without the associated hardware and software modifications and/or interfaces to the controller. Robot calibration based on geometric constraint approaches can thus be implemented in a stand-alone system brought to the robot to be calibrated.
In order to implement this embodiment, the tool tip needs to swivel about a swivel point so that linear displacements can be measured in different directions. This creates an effective tool tip radius equal to the radius from the swivel point to the tool tip minus the constraint displacement.
It is to be appreciated that using the linear displacement-measuring device, with interpolation and inverse kinematics to find the joint angles of an encounter between the tool and the reference object, is an advantage over the use of trigger probes (including the touch probe of the present invention) in that it does not require the capture of joint angles of the robot by the controller when an encounter occurs. This, in turn, allows the processor to be separated from the robot controller (i.e. stand-alone) and the measurement control program to reside in the processor. Further, in such an embodiment, the present invention allows geometric constraint methods for parameter identification to be used (adjusting robot model parameters to force the robot poses of the encounters to be consistent with the requirements of the geometric constraint) rather than using a linear measurement device to measure errors from commanded poses without any real geometric constraints. The robot model parameters are adjusted in parameter identification to change the Cartesian coordinates of each commanded pose (calculated from the joint values) so that these poses produce the measured readings of the linear displacement-measuring device.
In order to implement the embodiment of
It should be appreciated that the distance between commanded poses within each pair should be as small as practical to minimize interpolation errors. There will be errors in each pose in the pair, but for closely spaced poses, these errors should be almost identical. If the errors in the commanded poses are similar, then the errors in the interpolated pose will also be essentially the same. This means that the joint values calculated from the interpolated constraint pose will accurately represent the joint values that would be found if the robot were actually posed in the constraint pose.
The constructed sequence of commanded pose pairs allows the calibration object to be measured from different robot configurations. In one embodiment, these poses are constructed off-line using approximate locations of any calibration objects, possibly entered by hand, and the constructed sequence is then conveniently loaded into the robot controller as a pattern of moves and dwells. It is to be appreciated that the dwells allow for robot settle time and for the subsequent displacement measurements to be made. In one embodiment, displacement measurements are triggered by a settling in the output signal. Files of displacement measurements are then matched with the appropriate commanded pose pairs for further processing. Once data acquisition is complete, another routine calculates the joint values of the constraint poses. The constraint pose joint values are then fed into a parameter identification algorithm that calculates the best set of model parameters to minimize errors according to the geometric constraint method being implemented.
It is to be appreciated that the present invention simplifies computation of pose accuracy by not constructing a characteristic equation. In particular, in the present invention there is no calculation of a moment of inertia, or methods to solve for eigenvalues as solutions to a characteristic equation. There is no search through parameter space in the sense of a method of steepest descent. Further, the use of a point as a geometric constraint avoids the use of extended constraint geometries, such as lines and planes that extend throughout the work envelope of the robot, with associated requirements on the extent of poses throughout the volume of the robot envelope used for joint angle determinations.
Additionally, no ball-in-socket mechanism is used. In one embodiment, the use of a tool to signal an encounter and controller to read joint angles of the encounters between the tool and the reference object is an advantage over the use of a ball-in-socket method since a ball-in-socket mechanism is not practical on motorized robots due to stresses induced by errors in the robot model during movement of the ball-in-socket tool in a plurality of poses. Further, unlike the present invention, a ball-in-socket mechanism does not allow parameters of the end-effector to be removed from the robot model. Moreover, in another embodiment, the use of a linear displacement-measuring device, with interpolation and inverse kinematics to find joint angles of an encounter between the tool and the reference object, does not cause mechanical stresses induced by errors in the robot model during movement of the tool in a plurality of poses, unlike a ball-in-socket method.
It will, however, be appreciated that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.