Not applicable.
The present disclosure relates to dynamic compensation for errors in the position and orientation of a robot end effector, and more particularly to dynamically compensating for errors in the position and orientation of a robot end effector utilizing a kinematic error observer algorithm. Even more specifically, this disclosure relates to using an external high-precision metrology tracking system, such as a laser tracker system, to directly measure robot kinematic errors such that corrections are implemented during processing and the end effector of the robot is accurately positioned, enabling a tool or other object carried by the end effector to carry out a designated function, such as machining a workpiece or another operation requiring that the end effector be accurately positioned with respect to a workpiece.
There is a growing interest in replacing high-precision manufacturing equipment such as CNC drills or mills or the like with industrial robots for some applications. Industrial robots were initially designed to be low cost and highly repeatable for pick-and-place and assembly operations. In their current state, however, industrial robots do not exhibit sufficient accuracy to achieve high-precision tolerances. Thus, there is a growing interest in developing both the implementation and theory required to improve the accuracy of industrial robots. Of the many methods researched by those skilled in the art, it has been found that the high accuracy and limited obtrusiveness of external metrology tracking systems makes them a viable solution for improving a robot's accuracy when incorporated in an external feedback controller around the robot's proprietary control system.
There are several known instances where metrology tracking systems (e.g., laser trackers) have been utilized to make robots more accurate for a variety of manufacturing applications, such as milling and drilling. In most instances where this has been successful, the approach involves building a custom robot controller as the foundation, in which the tracker system can be integrated at a low level. Such an approach can be prohibitively expensive, outweighing the added value of using a robotic platform for the intended application. In accord with the present disclosure, by correcting the robot's kinematic errors, the existing low bandwidth interfaces on the industrial robot controller can be utilized, thus securing a viable business case. However, to perform external high-precision feedback control over such an interface, appropriate control methodologies that address the interface's non-deterministic behavior are required. Only then can such a controller sufficiently regulate the kinematic error. The present disclosure describes both the implementation and theory of a control system that addresses these issues and that, through experimentation, is shown to reduce kinematic error, improving the robot's accuracy.
As described in this disclosure, kinematic error is the difference between the location of the robot's end effector measured by the robot controller, referred to as the kinematic location, and the actual location measured by the metrology tracking system. The term “location”, as used in this disclosure, means both position and orientation. The kinematic location is computed from the robot's encoder measurements mapped through the robot's forward kinematic model, the latter being an idealized nonlinear set of equations relating the position of the robot's joints to the location of its tool flange in Euclidian space. The tool flange provides a physical interface for attaching the robot's end effector, and its spatial relationship to the end effector can be easily identified and applied to the forward kinematic model. Sources of kinematic error can be attributed to discrepancies in the robot's forward kinematic model due to inaccurate link lengths, joint offsets, backlash, etc., and to errors from external disturbances that are unobservable by the robot's proprietary controller (e.g., deflection of the robot's links due to process forces). When the kinematic location is compared to the actual location, provided by the metrology tracking system, these errors can be identified and corrected. As used in this disclosure, the term “end effector” means any type of tool or device that attaches to the end of the robot's arm. It is understood that the methodology presented in this disclosure is applicable to any type of end effector that can rigidly carry a metrology tracking system's 6 Degree of Freedom (6 DoF) sensor, the device used to determine the position and orientation of the end effector, and not only the end effector that is further described or presented in the disclosed figures.
Apparatus for controlling an industrial robot is disclosed. The industrial robot has an immovable base, a plurality of links supported by the base, and a movable joint between the base and a most proximate link and between each of the adjacent links. One of the links constitutes a most distal link with respect to the base. An end effector is carried by the most distal link. Each of the joints generates a robot measurement signal corresponding to the kinematic position and orientation of the end effector as the end effector is moved by the robot to a desired position and orientation. The industrial robot has a robot control system for controlling movement of the end effector to the desired position and orientation. More specifically, the apparatus of this disclosure comprises a metrology tracking system (referred to as a tracker) for determining the actual position and orientation of the end effector as it moves toward its desired position and orientation. The tracker has a sensor carried by the end effector for communicating with the tracker. The tracker generates a tracker measurement signal corresponding to the actual position and orientation of the end effector as the end effector moves toward its desired position and orientation and supplies the tracker measurement signal to a computer. The computer is configured to receive the robot measurement signal corresponding to the kinematic position and orientation of the end effector from the robot control system. The computer is further configured to generate a correction command and to communicate the correction command to the robot control system for correcting the position and orientation of the end effector to better match the actual position and orientation of the end effector as determined by the tracker measurement signal as the end effector moves toward the desired position and orientation, thereby resulting in a more accurate positioning and orienting of the end effector when in its desired position and orientation.
Also disclosed is a method of controlling an industrial robot, the latter having an immovable base, a plurality of links supported by the base, and a movable joint between the base and a most proximate link and between each of the adjacent links. One of the links constitutes a most distal link with respect to the base. An end effector is carried by the most distal link. Each of the joints generates a robot measurement signal corresponding to the kinematic position and orientation of the end effector as the end effector is moved by the robot to a desired position and orientation. The industrial robot further has a robot control system for controlling movement of the end effector to the desired position and orientation. The method comprises the steps of utilizing a metrology tracking system (also referred to as a tracker) to determine the actual position and orientation of the end effector as the latter is moved toward its desired position and orientation, and utilizing the tracker to generate a measurement signal that corresponds to the actual position and orientation of the end effector as the latter is moved toward the desired position and orientation. The measurement signal is supplied to a computer. The computer receives a kinematic end effector position and orientation signal from the robot control system, and the computer compares the measurement signal and the kinematic end effector location signal and generates an incremental correction command that is transmitted to the robot control system so that the robot control system corrects the end effector location so as to better agree with the measurement signal.
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present teachings in any way. Corresponding reference numerals indicate corresponding parts throughout the several views of drawings.
The following description is merely exemplary in nature and is in no way intended to limit the present teachings, application, or uses. Throughout this specification, like reference numerals will be used to refer to like elements. Additionally, the embodiments disclosed below are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art can utilize their teachings. As well, it should be understood that the drawings are intended to illustrate and plainly disclose presently envisioned embodiments to one of skill in the art, but are not intended to be manufacturing level drawings or renditions of final products and may include simplified conceptual views to facilitate understanding or explanation. As well, the relative size and arrangement of the components may differ from that shown and still operate within the spirit of the invention.
As used herein, the word “exemplary” or “illustrative” means “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” or “illustrative” is not necessarily to be construed as preferred or advantageous over other implementations. All the implementations described below are exemplary implementations provided to enable persons skilled in the art to practice the disclosure and are not intended to limit the scope of the appended claims.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises”, “comprising”, “including”, and “having” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps can be employed.
When an element, object, device, apparatus, component, region or section, etc., is referred to as being “on”, “engaged to or with”, “connected to or with”, or “coupled to or with” another element, object, device, apparatus, component, region or section, etc., it can be directly on, engaged, connected or coupled to or with the other element, object, device, apparatus, component, region or section, etc., or intervening elements, objects, devices, apparatuses, components, regions or sections, etc., can be present. In contrast, when an element, object, device, apparatus, component, region or section, etc., is referred to as being “directly on”, “directly engaged to”, “directly connected to”, or “directly coupled to” another element, object, device, apparatus, component, region or section, etc., there may be no intervening elements, objects, devices, apparatuses, components, regions or sections, etc., present. Other words used to describe the relationship between elements, objects, devices, apparatuses, components, regions or sections, etc., should be interpreted in a like fashion (e.g., “between” versus “directly between”, “adjacent” versus “directly adjacent”, etc.).
As used herein, the phrase “operably connected to” will be understood to mean two or more elements, objects, devices, apparatuses, components, etc., that are directly or indirectly connected to each other in an operational and/or cooperative manner such that operation or function of at least one of the elements, objects, devices, apparatuses, components, etc., imparts or causes operation or function of at least one other of the elements, objects, devices, apparatuses, components, etc. Such imparting or causing of operation or function can be unilateral or bilateral.
As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. For example, A and/or B includes A alone, or B alone, or both A and B.
Although the terms first, second, third, etc. can be used herein to describe various elements, objects, devices, apparatuses, components, regions or sections, etc., these elements, objects, devices, apparatuses, components, regions or sections, etc., should not be limited by these terms. These terms may be used only to distinguish one element, object, device, apparatus, component, region or section, etc., from another element, object, device, apparatus, component, region or section, etc., and do not necessarily imply a sequence or order unless clearly indicated by the context.
Moreover, it will be understood that various directions such as “upper”, “lower”, “bottom”, “top”, “left”, “right”, “first”, “second” and so forth are made only with respect to explanation in conjunction with the drawings, and that components may be oriented differently, for instance, during transportation and manufacturing as well as operation. Because many varying and different embodiments may be made within the scope of the concept(s) taught herein, and because many modifications may be made in the embodiments described herein, it is to be understood that the details herein are to be interpreted as illustrative and non-limiting.
The apparatuses/systems and methods described herein can be implemented at least in part by one or more computer program products comprising one or more non-transitory, tangible, computer-readable mediums storing computer programs with instructions that may be performed by one or more processors. The computer programs may include processor executable instructions and/or instructions that may be translated or otherwise interpreted by a processor such that the processor may perform the instructions. The computer programs can also include stored data. Non-limiting examples of the non-transitory, tangible, computer readable medium are nonvolatile memory, magnetic storage, and optical storage.
As used herein, the term module can refer to, be part of, or include an application specific integrated circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that performs instructions included in code, including for example, execution of executable code instructions and/or interpretation/translation of uncompiled code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module can include memory (shared, dedicated, or group) that stores code executed by the processor.
The term code, as used herein, can include software, firmware, and/or microcode, and can refer to one or more programs, routines, functions, classes, and/or objects. The term shared, as used herein, means that some or all code from multiple modules can be executed using a single (shared) processor. In addition, some or all code from multiple modules can be stored by a single (shared) memory. The term group, as used above, means that some or all code from a single module can be executed using a group of processors. In addition, some or all code from a single module can be stored using a group of memories.
The nomenclature used in this disclosure is as follows.
In the present disclosure, the topology, theory, and operation of a control system used to correct a robot's kinematic error are described. The control system, referred to as the Kinematic Error Control System, comprises several subsystems, each containing several components, which facilitate its operation. These subsystems are a robot control system, a metrology tracking system, and an external control system on which the Kinematic Error Control System is implemented. A table showing the various components in relation to their respective subsystems and a signal diagram of the signals transmitted between the components are shown in Table 1 and
The robot control system has two components: the robot and the robot controller. The robot is the mechanical system that performs the physical operation. The robot contains encoders and servo motors used to both measure and move each of its joints. The robot controller contains the servo drives and the robot manufacturer's proprietary trajectory controller, which are used to both regulate and control the robot through a desired motion. The proprietary trajectory controller utilizes the forward kinematic model of the robot to convert the encoder (joint) measurements into a kinematic position and orientation of its tool flange for use in its control algorithm. In subsequent discussion, the joint or kinematic position and orientation measurements will be referred to as robot measurements. In addition to the servo drives and trajectory controller, the robot controller contains the network interfaces used to communicate with the external control system as well as the software used to adjust its trajectory based on corrections transmitted from the external control system.
In this specific case, the metrology tracking system has two components: the 6 DoF sensor and the laser tracker. The 6 DoF sensor is fixed to an end effector which is attached to the robot's tool flange. The 6 DoF sensor houses several orientation sensors and a retroreflector, which are used to measure its orientation and position, respectively. More specifically, the position of the 6 DoF sensor is measured by the laser tracker, and the orientation of the 6 DoF sensor is measured by the sensor itself and transmitted to the tracker. The laser tracker houses a gimbaled laser displacement sensor that emits a laser beam which is reflected by the 6 DoF sensor's retroreflector back to the tracker. The azimuth and elevation of the beam, determined by the laser tracker's encoders, and the distance of the beam are used to determine the 6 DoF sensor's position. Position and orientation measurements collected by the laser tracker and 6 DoF sensor, respectively, are combined through a proprietary method to create a single measurement of the position and orientation of the 6 DoF sensor, and hence the actual position and orientation of the end effector. In subsequent discussion, this measurement will be referred to as the tracker measurement. Additionally, the laser tracker contains the interface used to transmit the tracker measurements to the external controller system.
The external controller system is comprised of a computer (PC) containing the network interfaces used to receive the transmitted robot and tracker measurements from the robot controller and laser tracker, respectively. The robot controller and laser tracker may be unsynchronized; that is, measurements are sampled and transmitted independently without using a shared clock signal between the robot controller and laser tracker. At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements is used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effector's position and orientation is computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller where it is used to correct the position and orientation of the robot's end effector.
If the robot measurements are described using joint measurements, the robot and tracker measurements will be defined in different spatial domains. In this case, the robot measurements describe the position of the robot's joints as coordinates in joint space while the tracker measurements describe the position and orientation coordinates of its tool flange in Euclidian space. These measurements must be converted into the same spatial domain to compute the kinematic error measurement. In the present disclosure, Euclidian space is used. Additionally, there are many ways to represent both the position and orientation of a 3D object in Euclidian space. In the field of robotics, it is common to represent a 3D object as a homogeneous transformation matrix that defines the position and orientation of a frame with respect to another frame. The position is represented in Cartesian coordinates and the orientation is represented as a rotation matrix, describing the projection of the axes of one frame onto the axes of another. This representation is intuitive and provides a set of mathematical operators that can be used to determine the relative relationship of various frames. Further discussion describes how the robot and tracker measurements are converted into Euclidian space (if applicable) and represented as homogeneous transformation matrices with respect to the same frame. A graphic depiction of the transformative relationships between the frames used to define the kinematic (robot) and actual (tracker) position and orientation of the 6 DoF sensor, equivalently the position and orientation of the end effector, with respect to the robot's base frame is shown in
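To make the homogeneous transformation representation concrete, the following is a minimal sketch, assuming Python with numpy (the function names are illustrative, not part of the disclosure), of assembling a 4×4 homogeneous transformation and inverting it analytically:

```python
import numpy as np

def make_transform(R, p):
    """Assemble a 4x4 homogeneous transformation from a 3x3 rotation
    matrix R and a 3-element position vector p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def invert_transform(T):
    """Analytic inverse of a homogeneous transformation:
    inv([R, p; 0, 1]) = [R^T, -R^T p; 0, 1]."""
    R, p = T[:3, :3], T[:3, 3]
    return make_transform(R.T, -R.T @ p)

# The pose of frame B relative to frame A, when both are expressed in a
# common base frame, follows by composition:
# T_B_A = invert_transform(T_A_base) @ T_B_base
```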
The robot measurements are represented by a single vector, $r$, and are described by either a set of joint positions, $r = [q_1\ q_2\ \cdots\ q_n]^T$, for each of the robot's joints in joint space (where $n$ denotes the last joint), or a kinematic position $(x_r, y_r, z_r)$, in Cartesian coordinates, and orientation $(\alpha_r, \beta_r, \gamma_r)$, in an orientation representation defined by the robot manufacturer, of the robot's tool flange in Euclidian space, $r = [x_r\ y_r\ z_r\ \alpha_r\ \beta_r\ \gamma_r]^T$. In the case that the robot measurement is described by joint positions, the robot's forward kinematic equations, from its forward kinematic model, are used to convert the robot measurement into a homogeneous transformation of the frame defining its tool flange with respect to the robot's base frame. In the case that the robot measurement is described by the kinematic position and orientation of the robot's tool flange, the orientation of the robot measurement is converted into a rotation matrix to construct an equivalent homogeneous transformation to the one produced by the kinematic equations. In both cases, an additional transformation that defines the translation and rotation of the 6 DoF sensor with respect to the robot's tool flange is applied in order to construct the kinematic position and orientation of the 6 DoF sensor,

$$T_r^b = \begin{bmatrix} R_r^b & p_r^b \\ 0 & 1 \end{bmatrix} = T_n^b(r)\, T_r^n \quad (1)$$

where $p_r^b$ is the kinematic position and $R_r^b$ is the kinematic orientation (represented as a rotation matrix) of the 6 DoF sensor relative to the robot base frame, $T_n^b(\cdot)$ is the equation that converts the robot measurements, $r$, into a homogeneous transformation, and $T_r^n$ is the transformation of the 6 DoF sensor with respect to the robot's tool flange. The transformation $T_r^n$ is identified using standard techniques commonly understood by those skilled in the art.
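As an illustration of Equation (1), a short sketch (assuming numpy; the function name is hypothetical) composing the forward-kinematic flange pose with the fixed flange-to-sensor transform:

```python
import numpy as np

def kinematic_sensor_pose(T_flange_base, T_sensor_flange):
    """Compose the kinematic pose of the 6 DoF sensor in the robot base
    frame per Equation (1): the flange pose T_n^b(r) produced by the
    forward kinematic model, followed by the fixed flange-to-sensor
    transform T_r^n identified offline."""
    T_rb = T_flange_base @ T_sensor_flange
    p_rb = T_rb[:3, 3]    # kinematic position of the sensor, p_r^b
    R_rb = T_rb[:3, :3]   # kinematic orientation of the sensor, R_r^b
    return p_rb, R_rb
```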
The tracker measurements are taken with respect to the laser tracker's measurement frame and represented by a single vector, $s = [x_s\ y_s\ z_s\ \alpha_s\ \beta_s\ \gamma_s]^T$, of its position $(x_s, y_s, z_s)$ and orientation $(\alpha_s, \beta_s, \gamma_s)$ in an orientation representation defined by the laser tracker manufacturer. The measurements are converted into a homogeneous transformation matrix and transformed into the robot's coordinate system by,

$$T_s^b = \begin{bmatrix} R_s^b & p_s^b \\ 0 & 1 \end{bmatrix} = T_m^b\, T_s^m(s) \quad (2)$$

where $p_s^b$ is the measured (actual) position and $R_s^b$ is the measured (actual) orientation (represented as a rotation matrix) of the 6 DoF sensor, $T_s^m(\cdot)$ is the equation that converts the tracker measurements, $s$, into a homogeneous transformation matrix, and $T_m^b$ is the transformation of the laser tracker's measurement frame with respect to the robot's base frame.
The transformation $T_m^b$ is identified using standard techniques commonly understood by those skilled in the art.
As mentioned in [0034], the robot and tracker measurements may be unsynchronized. Lack of synchronization of the measurements will result in both a relative time delay between the two clock signals and jitter in each clock signal's timing. Each of these issues is addressed independently in the algorithmic procedure discussed below.
Before runtime, the relative time delay between the clock signals is determined using an identification procedure, run once prior to the operation of the Kinematic Error Control System. The relative time delay identification procedure is conducted as follows:
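The disclosure's specific identification steps are not reproduced here. Purely as an assumed, plausible sketch (not the disclosed procedure), a relative delay between two unsynchronized streams can be estimated by cross-correlating one scalar component of a test motion as seen by both systems, after resampling both onto a common uniform time grid (Python/numpy; function name hypothetical):

```python
import numpy as np

def identify_relative_delay(robot_sig, tracker_sig, dt):
    """Estimate the relative time delay between two signals resampled
    onto a common uniform grid with spacing dt (seconds), by locating
    the peak of their cross-correlation. robot_sig and tracker_sig are
    one scalar component (e.g., the X position) of the same test motion
    as recorded by each system."""
    r = robot_sig - np.mean(robot_sig)
    s = tracker_sig - np.mean(tracker_sig)
    xcorr = np.correlate(s, r, mode="full")
    lag = int(np.argmax(xcorr)) - (len(r) - 1)  # lag in samples
    return lag * dt                             # delay in seconds
```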
At system startup (Step 1), the following variables, defined further in the disclosure, are initialized at the given values,
$$\eta_p[0] = 0 \quad (4)$$

$$\eta_\theta[0] = 0 \quad (5)$$

$$p_u[0] = 0 \quad (6)$$

$$R_u[0] = I \quad (7)$$
At runtime, the robot and tracker measurements, $r$ and $s$, are transmitted to the external control system independently. Once received, each measurement is given a timestamp, $t_r$ or $t_s$, using the clock signal of the PC, and the measurements are converted (Steps 2.1.A and 2.1.B) into the same spatial domain (if applicable) and representation using Equations (1) and (2), respectively. After conversion, the leading measurements, identified by (3) from the steps in [0040], are stored in a lookup table of sufficient size (constructed using a Last In, First Out (LIFO) buffer). Now, the effects of the relative time delay, discussed in [0039], are compensated by matching (Step 2.2) the robot measurements to the tracker measurements, producing the set of (matched) measurements, $(T_r^b[k], T_s^b[k], t_k[k])$, for the $k$th iteration of the Kinematic Error Control System, referred to as the control iteration, by:
$$\tilde{T} = f_{int}\!\left((T_1, t_1), (T_2, t_2), \tilde{t}\right) \quad (9)$$
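A sketch of the matching step, assuming Python with numpy and scipy (the buffer layout and function names are illustrative assumptions): the leading stream is buffered, and the pair of samples bracketing the lagging measurement's delay-adjusted timestamp is interpolated in the spirit of Equation (9):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(T1, t1, T2, t2, t):
    """Interpolate a 4x4 homogeneous transform at time t between the
    stamped transforms (T1, t1) and (T2, t2): linear interpolation of
    position and spherical linear interpolation of orientation."""
    a = (t - t1) / (t2 - t1)
    p = (1.0 - a) * T1[:3, 3] + a * T2[:3, 3]
    key_rots = Rotation.from_matrix(np.stack([T1[:3, :3], T2[:3, :3]]))
    R = Slerp([t1, t2], key_rots)(t).as_matrix()
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def match_measurement(buffer, t_query):
    """Given a time-ordered buffer of (T, t) pairs from the leading
    stream, interpolate the pose at the (delay-adjusted) timestamp of
    the lagging stream's measurement."""
    for (T1, t1), (T2, t2) in zip(buffer, buffer[1:]):
        if t1 <= t_query <= t2:
            return interpolate_pose(T1, t1, T2, t2, t_query)
    raise ValueError("t_query is outside the buffered time range")
```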
The kinematic error measurement, that is, the relative transformation between the matched robot and tracker measurements, is taken with respect to the robot's base frame and is computed (Step 3) by,

$$e_p[k] = p_s^b[k] - p_r^b[k] \quad (10)$$

$$e_r[k] = f_r\!\left(R_s^b[k]\, R_r^b[k]^{T}\right) \quad (11)$$

where $e_p$ and $e_r$ are the translational and rotational kinematic errors and the function $f_r(\cdot)$, defined in the appendix, converts the resultant relative rotation matrix between the actual and kinematic orientations into its equivalent axis-angle representation.
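A minimal sketch of the kinematic error computation (Python with scipy; the order of the relative rotation product follows the reconstruction above and is an assumption):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def kinematic_error(p_rb, R_rb, p_sb, R_sb):
    """Kinematic error between the matched kinematic (robot) and actual
    (tracker) sensor poses, both expressed in the robot base frame.
    e_p is the translational error; e_r is the rotational error as an
    axis-angle vector (the role of f_r, defined in the appendix)."""
    e_p = p_sb - p_rb                 # translational error
    R_err = R_sb @ R_rb.T             # relative rotation (assumed order)
    e_r = Rotation.from_matrix(R_err).as_rotvec()
    return e_p, e_r
```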
The clock signal jitter, discussed in [0039], will corrupt the signal produced from Equation (11) with effects analogous to measurement noise (referred to as timing noise in the disclosure of our U.S. Provisional Patent Application No. 62/982,166). Compensation for jitter is accomplished by using the Kinematic Error Observer algorithm (Step 4). The algorithm is as follows:
$$\Delta t[k] = t_k[k] - t_k[k-1] \quad (12)$$
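Equation (13), the observer update itself, is not reproduced here. The sketch below therefore assumes a simple Luenberger-style first-order update driven by the timestep of Equation (12); it illustrates the observer's role (smoothing timing noise out of the kinematic error measurement) rather than the disclosure's exact update law:

```python
import numpy as np

class KinematicErrorObserver:
    """Assumed first-order observer that low-pass filters the raw
    kinematic error measurement into an estimate, attenuating the
    timing noise introduced by clock jitter. It would be applied
    separately to the translational and rotational error channels."""

    def __init__(self, L):
        self.L = np.asarray(L)      # observer gain, e.g. np.diag([5, 5, 5])
        self.e_hat = np.zeros(3)    # current kinematic error estimate

    def update(self, e_meas, dt):
        # Drive the estimate toward the new measurement at a rate set by
        # L, scaled by the non-uniform timestep dt from Equation (12).
        self.e_hat = self.e_hat + dt * (self.L @ (e_meas - self.e_hat))
        return self.e_hat
```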
The KEC algorithm computes a rounded incremental correction (Step 5) from the kinematic error estimate to be applied to the robot during the timestep of the control iteration. Computation of the rounded incremental correction is performed in three parts. In the first part, translational and rotational incremental corrections are computed, and the rotational incremental correction is converted into the orientation representation of the robot controller as follows:
$$\Delta e_p[k] = \hat{e}_p[k] - p_u[k-1] \quad (14)$$

$$\Delta e_r[k] = R_u[k-1]^{T}\, f_r^{-1}(\hat{e}_r[k]) \quad (15)$$

$$\Delta p[k] = K_p\, \Delta e_p[k] \quad (16)$$

$$\Delta r[k] = K_r\, f_r(\Delta e_r[k]) \quad (17)$$

$$\Delta \theta[k] = f_\theta(\Delta r[k]) \quad (18)$$

In the second part, the incremental corrections are rounded to the resolutions, $\delta_p$ and $\delta_\theta$, accepted by the robot controller interface, and the rounding residuals, $\eta_p$ and $\eta_\theta$, are retained for the next control iteration:

$$\Delta\tilde{p}[k] = \mathrm{round}(\Delta p[k] + \eta_p[k-1],\ \delta_p) \quad (19)$$

$$\Delta\tilde{\theta}[k] = \mathrm{round}(\Delta\theta[k] + \eta_\theta[k-1],\ \delta_\theta) \quad (20)$$

$$\eta_p[k] = \Delta p[k] - \Delta\tilde{p}[k] \quad (21)$$

$$\eta_\theta[k] = \Delta\theta[k] - \Delta\tilde{\theta}[k] \quad (22)$$

In the third part, the accumulated corrections applied to the robot are updated:

$$p_u[k] = p_u[k-1] + \Delta\tilde{p}[k] \quad (23)$$

$$R_u[k] = f_\theta^{-1}(\Delta\tilde{\theta}[k])\, R_u[k-1] \quad (24)$$
where the function $f_\theta^{-1}(\cdot)$ converts the manufacturer's orientation representation back into its equivalent rotation matrix.
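The following sketch implements Equations (14) through (24) in Python with numpy and scipy. The manufacturer's orientation representation, $f_\theta$ and its inverse, is assumed here to be ZYX Euler angles purely for illustration; the real mapping depends on the robot vendor:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def round_to(x, step):
    """Round each element of x to the nearest multiple of step, the
    resolution accepted by the robot controller interface."""
    return np.round(np.asarray(x) / step) * step

class KinematicErrorCorrector:
    """Sketch of the KEC update, Equations (14)-(24)."""

    def __init__(self, Kp, Kr, delta_p, delta_theta):
        self.Kp, self.Kr = np.asarray(Kp), np.asarray(Kr)
        self.delta_p, self.delta_theta = delta_p, delta_theta
        self.eta_p = np.zeros(3)    # rounding residual, initialized per Eq. (4)
        self.eta_th = np.zeros(3)   # rounding residual, initialized per Eq. (5)
        self.p_u = np.zeros(3)      # accumulated correction, per Eq. (6)
        self.R_u = np.eye(3)        # accumulated correction, per Eq. (7)

    def step(self, e_hat_p, e_hat_r):
        # Equations (14)-(15): error remaining after prior corrections.
        d_ep = e_hat_p - self.p_u
        d_er = self.R_u.T @ Rotation.from_rotvec(e_hat_r).as_matrix()
        # Equations (16)-(18): apply gains; convert to the controller's
        # orientation representation (ZYX Euler angles assumed here).
        d_p = self.Kp @ d_ep
        d_r = self.Kr @ Rotation.from_matrix(d_er).as_rotvec()
        d_th = Rotation.from_rotvec(d_r).as_euler("zyx")
        # Equations (19)-(22): round to the interface resolution and
        # retain the residuals for the next control iteration.
        d_p_rnd = round_to(d_p + self.eta_p, self.delta_p)
        d_th_rnd = round_to(d_th + self.eta_th, self.delta_theta)
        self.eta_p = d_p - d_p_rnd
        self.eta_th = d_th - d_th_rnd
        # Equations (23)-(24): accumulate the commanded corrections.
        self.p_u = self.p_u + d_p_rnd
        self.R_u = Rotation.from_euler("zyx", d_th_rnd).as_matrix() @ self.R_u
        return d_p_rnd, d_th_rnd
```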
Once the KEC algorithm is complete, the rounded incremental corrections, $\Delta\tilde{p}[k]$ and $\Delta\tilde{\theta}[k]$, are transmitted (Step 6) to the robot controller for execution, the control iteration is incremented, and the next set of matched robot and tracker measurements is used to compute a new kinematic error measurement (Step 3). Control iterations are conducted indefinitely, continually correcting the robot's kinematic error, until the program on the PC is terminated or the desired motion has completed.
An outline of the above procedure is summarized below:
The description herein is merely exemplary in nature and, thus, variations that do not depart from the gist of that which is described are intended to be within the scope of the teachings. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions can be provided by alternative embodiments without departing from the scope of the disclosure. Such variations and alternative combinations of elements and/or functions are not to be regarded as a departure from the spirit and scope of the teachings.
Experimental results presented further in this disclosure were obtained using the hardware listed in Table 2.
Before further evaluation of the performance of the Kinematic Error Control System could be conducted, suitable values for the KEO observer gain matrix, $L$, and the KEC feedback gain matrices, $K_p$ and $K_r$, were selected. The gain matrices were selected by commanding the robot to a single position, initializing the Kinematic Error Control System, and correcting the static kinematic errors at the commanded position. After several iterations, the final tuning of the system resulted in observer and feedback gains of $L = \mathrm{diag}(5, 5, 5)$ and $K_p = K_r = \mathrm{diag}(5\times10^{-3},\ 5\times10^{-3},\ 5\times10^{-3})$, respectively, and a stable overdamped response with a settling time of 8.758 s.
In an additional experiment conducted for the present disclosure, the KEO algorithm's sensitivity was evaluated in both an open-loop and a closed-loop configuration. This was done to ensure that sufficient measurement noise and jitter were filtered from the kinematic error measurement such that the residual measurement noise and jitter in the kinematic error estimate were not amplified significantly by the feedback gains in the KEC algorithm. To conduct this experiment, the robot was commanded to a single position and samples of the kinematic error estimate were measured both with (closed-loop) and without (open-loop) applying a correction with the KEC algorithm. Once the experiments were conducted, the steady state kinematic error was removed from both sets of measurements and the standard deviation was computed. The results of this experiment, provided in Table 3, show that there was an increase in the standard deviation (equivalently, the noise) of the kinematic error estimate in the closed-loop configuration. However, when compared to the accuracy of the laser tracker in Table 2 and the process variation shown in subsequent experiments, the residual noise and jitter in the kinematic error estimate will not inhibit the Kinematic Error Control System's ability to both measure and correct the robot's kinematic error.
In an additional experiment conducted for this disclosure, the dynamic performance of the Kinematic Error Control System was evaluated for a series of linear, constant velocity motions of the end effector. The static kinematic error in the robot's nominal forward kinematic model is dependent on the position of its joints; therefore, increasing the commanded velocity of the industrial robot's end effector will increase the rate of change of the kinematic error that the Kinematic Error Control System must correct. In this series of experiments, the robot's end effector traversed 1 m in the Y-axis of the robot's base frame at constant velocities ranging from 10 mm/s to 100 mm/s. Since the constant velocity motions were performed only in the Y-axis of the robot's base frame, only the corrected positional kinematic errors were evaluated in these experiments. The results of these experiments are shown in
To provide a single metric for the increase in the robot's corrected kinematic error at each commanded velocity, the spatial components of the corrected positional kinematic error were filtered independently using a zero-phase 6th order Butterworth filter with cutoff frequencies ranging between 0.1 Hz and 0.5 Hz. These aggressive cutoff frequencies were selected to capture the general trends of the corrected positional kinematic errors, especially those in the Y-axis, which were heavily corrupted by noise and not as easily observed. Once each component of the corrected positional kinematic error was filtered, the resultant magnitude was computed, and its average was taken. This procedure was repeated for each constant velocity experiment. The average magnitudes of the filtered corrected positional kinematic errors as functions of end effector velocity are shown in
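A sketch of this metric computation, assuming Python with scipy.signal (the data layout and function name are assumptions; one reading of the zero-phase 6th order filter is a 3rd order Butterworth run forward and backward by filtfilt, which doubles the effective order):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def avg_filtered_error_magnitude(err_xyz, fs, fc):
    """err_xyz: 3xN array of corrected positional kinematic error
    components; fs: sample rate (Hz); fc: cutoff frequency (Hz).
    Returns the average magnitude of the zero-phase filtered error."""
    b, a = butter(3, fc / (fs / 2.0))               # normalized cutoff
    filtered = np.vstack([filtfilt(b, a, row) for row in err_xyz])
    magnitude = np.linalg.norm(filtered, axis=0)    # resultant magnitude
    return float(np.mean(magnitude))                # single metric
```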
Process forces acting on the robot's end effector will cause highly nonlinear deflections of the arm, referred to as external disturbances, due to the varying stiffness of the robot's structure. More importantly, these external disturbances are due to the deformation of the robot's links and are unobservable by the robot's control system (which can only measure deviations at its joints). Thus, these external disturbances can only be corrected by the Kinematic Error Control System.
An additional experiment was conducted for the present disclosure to evaluate the performance of the Kinematic Error Control System when subjected to an external disturbance. In this experiment, the robot was commanded to a single position, the Kinematic Error Control System was initialized, and the static kinematic errors at the commanded position were corrected. Once the static kinematic errors were corrected, a 45 lb. weight was applied to the end effector to emulate a single un-modeled process force acting on the end effector. The corrected positional and rotational kinematic error responses, respectively, of the described experiment are shown in
The function $f_{int}(\cdot)$, mapping into $\mathbb{R}^{4\times4}$, that produces an interpolation of a homogeneous transformation between two sets of homogeneous transformations and corresponding timestamps, $(T_1, t_1)$ and $(T_2, t_2)$, at a specified timestamp, $\tilde{t}$, is defined as,
where the interpolation of the rotation matrix, $\tilde{R}$, and position vector, $\tilde{p}$, are respectively defined as,
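Equations (25) through (27) are not reproduced here. The sketch below (Python with scipy; function signature mirrors the text) shows one standard realization consistent with the surrounding definitions: linear interpolation of the position and scaling of the axis-angle vector of the relative rotation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def f_int(T1, t1, T2, t2, t):
    """Interpolate a homogeneous transform between (T1, t1) and
    (T2, t2) at time t: position linearly, orientation by scaling the
    axis-angle vector of the relative rotation R1^T R2 (one possible
    realization of Equations (25)-(27))."""
    a = (t - t1) / (t2 - t1)
    p = (1.0 - a) * T1[:3, 3] + a * T2[:3, 3]
    R1, R2 = T1[:3, :3], T2[:3, :3]
    r_rel = Rotation.from_matrix(R1.T @ R2).as_rotvec()   # f_r(R1^T R2)
    R = R1 @ Rotation.from_rotvec(a * r_rel).as_matrix()  # scale, reapply
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T
```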
The axis-angle representation of a rotation matrix provides a more intuitive way to visualize and scale an orientation in Euclidian space. Essentially, this representation describes any orientation by a single vector which defines a single rotation about an arbitrary axis in $\mathbb{R}^3$. The elements of the resultant vector define the coordinates of the arbitrary axis while the vector's magnitude defines the rotation about this axis. Consider a generalized rotation matrix,

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \quad (28)$$
The single rotation about the arbitrary axis is calculated from Equation (28) by,

$$\theta = \cos^{-1}\!\left(\frac{r_{11} + r_{22} + r_{33} - 1}{2}\right) \quad (29)$$
and the arbitrary axis is calculated from Equations (28) and (29) by,

$$n = \frac{1}{2\sin\theta}\begin{bmatrix} r_{32} - r_{23} \\ r_{13} - r_{31} \\ r_{21} - r_{12} \end{bmatrix} \quad (30)$$
Together, Equations (29) and (30) can be combined into a single vector,

$$r = \theta\, n \quad (31)$$
which is the axis-angle representation, $r$, of a generalized rotation matrix, $R$.
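A direct transcription of Equations (29) through (31) in Python with numpy (function name hypothetical):

```python
import numpy as np

def rotation_to_axis_angle(R):
    """Axis-angle vector r = theta * n of a rotation matrix, per
    Equations (29)-(31). Valid for the generic case 0 < theta < pi;
    theta near 0 or pi requires special handling."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)               # Eq. (29)
    n = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))  # Eq. (30)
    return theta * n                                           # Eq. (31)
```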
The present application is the US national stage under 35 U.S.C. § 371 of International Application No. PCT/US2021/019939, which was filed on Feb. 26, 2021, and which claims priority to U.S. Provisional Application No. 62/982,166, filed on Feb. 27, 2020, the disclosures of which are herein incorporated by reference in their entirety.