When a Laboratory Automation System (LAS) is installed at a customer site, a service technician aligns the elements of the system, e.g., the frame, the XY-gantry for the robotic arm, and the drawers on the work surface, so that the robotic arm can precisely grip sample tubes and transfer them from one position to another. Alignment of the robotic arm to the working space has typically been performed manually. Manual alignment is a slow and costly process, particularly on a complex LAS, which may include several robotic arms that must each be aligned separately. Additionally, manual alignment can introduce human error into each alignment. Auto-alignment processes allow fewer service technicians to install and align more LAS in less time, with less risk of misalignment due to human error.
In a typical LAS, each robotic arm is fixed to a gantry over a work surface, which can include, e.g., test tubes in racks that can be moved to different positions, or tools on the work surface. For example, a test tube may be moved from a distribution rack to a centrifuge adapter. Gripping movements need to be precise to avoid various problems: a misaligned robotic arm may fail to grip a tube at all, or may successfully grip a selected tube but destroy it. Conventional manual alignment can include various steps, such as manually positioning the gripper arm at several different positions on the work surface, either by hand or using an external drive motor. Additionally, each robotic arm needs to be separately aligned to the racks or drawers on the work surface. Manual alignment by a service technician can take from several hours to a day per robotic arm.
Embodiments of the present invention address these and other problems.
Disclosed herein are an auto-alignment process and associated technical arrangements to calibrate and/or align a robotic arm with gripper unit within a Laboratory Automation System (LAS), in accordance with an embodiment.
In a camera-based alignment system, a camera can be attached to an XYZ-robot at the position of the gripper unit to allow the robotic arm to acquire images of the work surface below the gripper position. Alignment of the camera and the robotic arm can be performed when the camera is installed, by aligning the optical axis of the camera with the axis of the robotic arm during installation. However, accurately installing the camera, and ensuring that the camera does not change positions, can be cost prohibitive in complex systems involving multiple robotic arms. Accordingly, an auto-alignment procedure utilizing the camera can reduce production costs associated with precisely attaching the camera to the robotic arm, as well as provide a ready method of realigning the camera-robotic arm system should the position of the camera shift or otherwise become misaligned.
In accordance with an embodiment, a camera-based auto-alignment process can include gripping a first calibration tool by a gripper unit of a robotic arm. Images of the first calibration tool can be captured by a camera coupled to the gripper unit. The gripper unit and camera unit can be aligned on two roughly parallel axes. The images can be analyzed to calibrate the axis of view of the camera with the gripper axis, providing an XY calibration of the robotic arm. The gripper unit can be calibrated on a Z-axis using optical calibration with landmarks provided on a second calibration tool, and/or by moving the gripper unit towards the work surface until it makes contact with the work surface and stops. Once calibrated, the camera can be used to identify one or more landmarks at known locations on the work surface to align the robotic arm with the work surface.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
In accordance with an embodiment, the gripper unit 100 can grip elements 102 in the LAS. These elements can include test tubes, calibration tools, and other objects. The elements 102 are gripped using the gripper unit on a first axis. Due to an offset between a camera unit 104 and the gripper unit 100, images can be acquired by the camera unit 104 on a second axis. Typical cameras and assemblies are too large to be integrated into the gripper assembly or into a grippable tool. As such, a camera unit can be coupled adjacent to the gripper unit, resulting in a mechanical offset between the first axis and the second axis. Since the camera does not interfere with the gripper unit during normal operation, the camera can stay fixed to the gripper unit, enabling auto-alignment to be performed again as needed. The images can be analyzed to determine an offset between the second axis and the first axis and to calibrate the camera coordinate system to the robot coordinate system. The offset can account for any angular misalignment between the first axis and the second axis. A conversion ratio between motor steps and pixels can be determined by positioning the camera over a landmark and moving the robotic arm a predetermined number of steps in the X and Y directions. The conversion ratio can be determined based on the change in apparent position of the landmark. The offset and the conversion ratio can be used to calibrate the gripper in an X-Y plane. The gripper can then be calibrated on a Z-axis using a second calibration tool. Optionally, one or more landmarks on one or more elements on the work surface, e.g., an input area, can be identified to verify the precision of the calibration of the gripper. Once calibration is complete, the camera unit can be used to identify one or more landmarks at known locations on the work surface, to align the robotic arm to the LAS.
In accordance with an embodiment, calibrating the gripper on a Z-axis can include optically calibrating the gripper using landmarks provided on the second calibration tool, for example by triangulating the distance to a fixed landmark from the camera. Additionally, or alternatively, the gripper can be calibrated on the Z-axis physically, by moving the gripper toward the second calibration tool along the Z-axis until the gripper reaches a contact on the second calibration tool.
As described above, visual landmarks can be used in the calibration and alignment process of the robotic arm with the LAS. For example, the calibration tools can include landmarks that the camera unit can recognize, and landmarks on the work surface can be used to align the robotic arm to the LAS. Landmarks can include contrasting geometrical shapes positioned at known locations on the work surface. Landmarks can also be positioned on calibration tools, used to calibrate the camera and gripper unit.
Ideally, a landmark can be chosen that is easy to create, has an easily identifiable midpoint, and has a low mis-identification risk. For example, a linear landmark, such as a cross or rectangle, can be more difficult to identify reliably than a circular landmark. Additionally, scratches on the work surface may be more easily mistaken for a linear landmark than a circular landmark. Landmarks comprising a plurality of concentric circles are easy to identify, are less likely than linear landmarks to be mis-identified, and the mid-point can be determined by the algebraic average of all identified circle mid-points. Although circular landmarks are typically used herein, any contrasting shape, including but not limited to those shown in
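By way of illustration, the mid-point determination for a concentric-circle landmark can be sketched in Python as follows. The function name and the input format (a list of detected circle mid-points) are illustrative assumptions, not part of any specific embodiment:

```python
import statistics

def landmark_midpoint(circle_centers):
    """Estimate a landmark's mid-point as the algebraic average of the
    mid-points of all concentric circles detected for that landmark."""
    xs = [c[0] for c in circle_centers]
    ys = [c[1] for c in circle_centers]
    return (statistics.fmean(xs), statistics.fmean(ys))
```

Averaging over several concentric circles reduces the influence of noise in any single circle detection.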
As shown in
In accordance with an embodiment, the robotic arm can include a pressure sensor and can be configured to stop once resistance is met. This is typically used as a safety feature, to prevent the robotic arm from causing damage to itself, the work surface or objects on the work surface. Using the pressure sensor as an automatic stop, the robotic arm can be positioned over a first landmark on the Z-calibration tool, and lowered until the gripper unit makes contact with the first landmark. When contact is made, the pressure sensor stops the robotic arm. When the arm is stopped, the position of the motor on the Z-axis can be recorded. In accordance with an embodiment, the motors used to drive the robotic arm along each axis can be brushed DC motors or stepper motors. The position of the motor on the Z-axis can be recorded in encoder counts or steps. This process can be repeated for each landmark on the Z-calibration tool. Once each position has been recorded, the distance between each level can be determined in encoder counts or steps. As described further below, triangulation can be used to determine a height of each level of the Z-calibration tool (e.g., in steps per pixel). The flow of the Z-axis calibration process, in accordance with an embodiment, is further described below.
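The per-level bookkeeping described above can be illustrated with a short Python sketch. The helper below is hypothetical and assumes the Z-motor contact positions for each level have already been recorded (in encoder counts or steps); it simply differences consecutive levels:

```python
def level_distances(contact_positions_steps):
    """Given the Z-motor position (in steps or encoder counts) recorded
    when the gripper touches each level of the Z-calibration tool,
    return the distance between consecutive levels."""
    return [abs(b - a)
            for a, b in zip(contact_positions_steps,
                            contact_positions_steps[1:])]
```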
In some embodiments, a single calibration tool that combines features of the X-Y calibration tool and the Z-calibration tool can be used. For example, a combined calibration tool could resemble the X-Y calibration tool as described above, that has been modified such that each landmark is at a different level.
In accordance with an embodiment, an auto-alignment process provides an efficient, repeatable way of correctly installing robotic arms in an LAS, and provides a fast maintenance procedure should any of the robotic arms come out of alignment during use.
Complex LAS can include many robotic arms each having its own camera. Accordingly, less expensive cameras can be utilized to reduce the fixed costs of a given LAS. However, less expensive cameras typically suffer from greater lens distortion effects than more expensive cameras. These distortions can be accounted for and corrected during the alignment process.
Radial distortion tends to be the more significant factor when using relatively high-quality lenses or cameras. The radial distortion can be represented as a series of polynomials:
x̂ = x·(1 + α₁r² + α₂r⁴ + …)

ŷ = y·(1 + α₁r² + α₂r⁴ + …)   (11)

where (x̂, ŷ) is a distortion-corrected point corresponding to (x, y), α₁, α₂, … are the coefficients that describe the radial distortion, and r is the Euclidean distance of point (x, y) from the mid-point of the image, which in this case corresponds to point (0, 0).
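A minimal Python sketch of the one-coefficient form of equation (11) follows; the image mid-point is taken as the origin, per the convention above, and the function name is illustrative:

```python
def correct_radial(x, y, alpha1):
    """One-coefficient radial correction per equation (11): the corrected
    point is the measured point scaled by (1 + alpha1 * r**2), where r is
    the distance of (x, y) from the image mid-point at (0, 0)."""
    r2 = x * x + y * y
    s = 1.0 + alpha1 * r2
    return (x * s, y * s)
```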
A calibration process can be performed to determine the coefficients. For the purposes of calibration it is assumed that α₁ sufficiently describes the radial distortion, and higher-order effects can be ignored. Another model used to describe distortion is the Fitzgibbon division model:

x̂ = x / (1 + α₁r²)   (12)

The equation (12) used in Fitzgibbon's division model can be reworked to form the following equation:

x = s·x̂, with s = 1 + α₁r²

In this case, s describes a scaling factor and x̂ corresponds to distortion-corrected point (x, y). When dealing with small α₁, the model is almost identical to the results of the series of polynomials for one factor.
The assumption that point x lies on line l = (l₁ l₂ l₃)ᵀ can be used to show that straight lines are mapped onto circle segments as a result of the radial distortion:

lᵀp = 0,

or

l₁x + l₂y + l₃(1 + α₁r²) = 0   (14)
If this equation is then applied to the form of the circle equation

(x − xm)² + (y − ym)² = R²   (15)

it follows that, with r² = x² + y²,

x² + y² + (l₁/(α₁l₃))x + (l₂/(α₁l₃))y + 1/α₁ = 0

Thus, the following holds true for xm, ym, R:

xm = −l₁/(2α₁l₃), ym = −l₂/(2α₁l₃), R² = xm² + ym² − 1/α₁
This property can be used to determine coefficient α₁. In accordance with an embodiment, a detected landmark is shifted to the edge of the image and then moved along that edge by moving one of the axes of the robot. The mid-point position of the landmark is recorded during this process. Since only one of the robot's axes is moved, all of the measured mid-points should lie along a single straight line; due to the distortion described above, however, they do not. Next, a circle function is fitted to the measured mid-points. The circle relation above can then be applied to this function to determine distortion parameter α₁ as follows:

α₁ = 1/(xm² + ym² − R²)

where (xm, ym) is the mid-point of the circle and R the radius.
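The circle fit and the extraction of α₁ can be sketched as follows. The sketch uses an algebraic (Kasa) least-squares circle fit, which is one possible choice and is not mandated by the description above, together with the relation α₁ = 1/(xm² + ym² − R²) for the division model; function names are illustrative:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solves
    x^2 + y^2 = a*x + b*y + c for (a, b, c), from which the
    mid-point (xm, ym) and radius R follow."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = pts[:, 0] ** 2 + pts[:, 1] ** 2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    xm, ym = a / 2.0, b / 2.0
    R = np.sqrt(c + xm ** 2 + ym ** 2)
    return xm, ym, R

def alpha1_from_circle(xm, ym, R):
    """Distortion coefficient from the fitted circle, assuming the
    relation alpha1 = 1 / (xm^2 + ym^2 - R^2)."""
    return 1.0 / (xm ** 2 + ym ** 2 - R ** 2)
```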
This process can then be repeated at all four corners of the image, thereby determining the coefficients at each corner.
Next, a transformation mask can be determined to ensure computationally-effective image transformation. In doing so, a matrix is generated using the dimensions of the image. Each element (i,j) of the matrix corresponds to a pixel from the original image and contains corrected position (î,ĵ) of that pixel. This mask is then used to correct each pixel in the image as soon as the image is recorded. Image processing libraries, such as the open source library OpenCV, include methods implemented for this purpose, and can be used to correct images as they are captured by the camera.
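A simplified Python sketch of such a precomputed transformation mask follows. The inverse of the radial model is approximated here by reusing the corrected-pixel radius, which is adequate only for small α₁; a production implementation would invert the model numerically or use a library routine such as OpenCV's remap. All names are illustrative:

```python
import numpy as np

def build_undistort_map(h, w, alpha1):
    """Precompute, once per camera, a lookup mask mapping every corrected
    pixel (i, j) to the source pixel it should be read from, using the
    one-coefficient radial model centered on the image mid-point."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    jj, ii = np.meshgrid(np.arange(w), np.arange(h))
    x, y = jj - cx, ii - cy
    r2 = x * x + y * y
    # Approximate inverse mapping: divide out the radial scale factor.
    s = 1.0 + alpha1 * r2
    src_j = np.clip(np.round(cx + x / s).astype(int), 0, w - 1)
    src_i = np.clip(np.round(cy + y / s).astype(int), 0, h - 1)
    return src_i, src_j

def undistort(image, src_i, src_j):
    """Apply the precomputed mask with a single fancy-indexing lookup."""
    return image[src_i, src_j]
```

Because the mask is computed once and then reused for every captured frame, per-image correction reduces to a single array lookup.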
In some embodiments, a periodically repeating pattern landmark, such as a chess or checker board pattern, can be used to account for and correct lens distortion. In accordance with an embodiment, the periodically repeating pattern landmark can be printed on or mounted to a tool which can be gripped by the robot. In accordance with an embodiment, this tool can be similar to the X-Y calibration tool shown in
In accordance with an embodiment, periodically repeating pattern landmarks can also be printed on or mounted to a step-like tool, such as the Z-calibration tool shown in
In accordance with an embodiment, features of the periodically repeating pattern landmark (e.g. edges of the chessboard pattern, single fields and the number of fields), as viewed through the camera, can be determined. Since the geometrical properties of the landmark are known to the system, the coordinates of these features can be compared to the known/expected position of the features using a fitting algorithm. Such fitting algorithms are available from OpenCV, the Camera Calibration Toolbox for Matlab®, DLR CalLab and CalDe—The DLR Camera Calibration Toolbox, and other similar software libraries. The fitting algorithm can then be used to estimate the intrinsic and extrinsic parameters of the computer vision system. The intrinsic and extrinsic parameters can be used to determine the distortion coefficients for the camera-lens combination in use. Using multiple pictures with the periodically repeating pattern landmark or landmarks in different positions, improves the accuracy of the distortion coefficients determined by the algorithm.
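The comparison of detected feature coordinates against their known positions can be illustrated with a least-squares fit. The hypothetical sketch below fits a simple affine map with NumPy and reports the RMS residual; a full implementation would instead use routines such as OpenCV's findChessboardCorners and calibrateCamera, which also recover the distortion coefficients:

```python
import numpy as np

def fit_board_to_image(board_pts, image_pts):
    """Fit an affine map from known chessboard-corner coordinates (e.g.,
    in mm) to their detected image coordinates (in px) by least squares,
    returning the map and the RMS residual of the fit."""
    B = np.asarray(board_pts, dtype=float)
    I = np.asarray(image_pts, dtype=float)
    A = np.column_stack([B, np.ones(len(B))])   # rows [X, Y, 1]
    M, *_ = np.linalg.lstsq(A, I, rcond=None)   # 3x2 affine parameters
    res = A @ M - I
    rms = float(np.sqrt((res ** 2).mean()))
    return M, rms
```

A large residual indicates either a poor detection or uncorrected lens distortion.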
This projection can be shown in mathematical form as follows. The following intrinsic imaging matrix K can be assumed for the imaging properties of the camera:

K =
[ f 0 u₀ ]
[ 0 f v₀ ]
[ 0 0 1 ]

where f is the focal length and point P = (u₀, v₀) describes the mid-point of the image. Point X = [X, Y, Z] is then mapped to point x̃ = [u, v, w] as follows:

λx̃ = K·[R T]·[X, Y, Z, 1]ᵀ

where K is the matrix of intrinsic camera properties described above, [R T] is the extrinsic camera matrix, where R describes the rotation and T the translation of the camera coordinate system relative to the world coordinate system, and λ is a scaling factor not equal to zero.
If the mid-point of the circle is assumed to be [XC, YC,0], each point X in the circle would have to satisfy the following equation:
XᵀCX = 0   (21)

where C is the matrix that defines the circle:

C =
[ 1 0 −XC ]
[ 0 1 −YC ]
[ −XC −YC XC² + YC² − R² ]   (22)
This circle is then depicted on ellipse E as follows:

λE = H⁻ᵀCH⁻¹, with H = K[R₁ R₂ T]   (23)
where R₁ and R₂ are the first two columns in rotation matrix R. The ellipse can now be described as follows:

x̃ᵀEx̃ = 0

or as a function of x and y:

0 = Ax² + Bxy + Cy² + Dx + Ey + F   (25)

This is a general depiction of a conic, where A = 1 can be assumed without loss of generality:

0 = x² + Bxy + Cy² + Dx + Ey + F   (26)

The following condition guarantees an ellipse (and thus excludes parabolas and hyperbolas): B² − 4C < 0.
In accordance with an embodiment, the previously measured points can be used to fit ellipses along the trajectories of the circle. To do so, the method of least squares can be used. This task can be solved using a numerical analysis and data processing library, such as ALGLIB; alternative numerical analysis methods could also be used. To do so, the function from equation (15) can be transferred to a fitting algorithm. Moreover, the algorithm can receive initial values that are used to start the iteration. These values are determined by inserting five-point combinations into equation (15) and then solving the resulting equation system. This process is repeated with several possible point combinations in order to average the calculated coefficients. These averaged values are then used as the initial values for the fitting algorithm, which uses an iterative process to determine the best solution with respect to the least-squares error.
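The five-point initialization step can be sketched as follows: with A = 1, the conic of equation (26) is linear in the remaining coefficients, so each five-point combination yields a 5x5 linear system. The function name is illustrative:

```python
import numpy as np

def conic_from_five_points(pts):
    """Solve 0 = x^2 + B*x*y + C*y^2 + D*x + E*y + F, per equation (26),
    exactly through five points: a 5x5 linear system in (B, C, D, E, F)."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * y, y * y, x, y, np.ones(5)])
    rhs = -(x * x)
    return np.linalg.solve(A, rhs)  # (B, C, D, E, F)
```

Averaging the coefficients over several such five-point combinations provides the initial values handed to the iterative least-squares fit.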
The ratio of the radii of two circles before the projection is:

cr = R₁/R₂

Ratio cr now corresponds to the ratio of the segments produced by points of intersection p₁ 1106, p₂ 1108, and p₄ 1112, where the line conjoining the mid-points of the ellipses intersects the ellipses. This equation can then be solved for the distance of the circle mid-point pc from these points of intersection.
The radii used to calculate cr correspond to the distance of the landmarks from the rotational center, or in other words, the mid-point of the gripped portion of the X-Y calibration tool that is grasped by the gripper unit. Two concentric circles can be used in the application of the method. Since five circles are detected when using the calibration tool described above, corresponding to the five landmarks on the calibration tool, a total of ten different combinations of circle pairs are possible. Finally, the arithmetic mean and the standard deviation of the calculated mid-points are determined based on the ten combinations of circle pairs and compared to programmed limit values. When the mid-point is determined successfully, the offset between the camera axis and the gripper unit axis can be determined in pixels. As described above, the radii used correspond to the distance between the landmarks and the gripper unit axis, as measured in pixels. The gripper unit can center the X-Y calibration tool in the field of vision of the camera; the camera can then identify the center point of the image and determine the number of pixels on the X and Y axes from the center point to the marker closest to it. Based on the distance from the center point to the closest marker, and the distance from the closest marker to the gripper unit axis, the offset can be calculated in pixels.
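The acceptance check on the calculated mid-points can be sketched as follows; the limit value is passed in as a parameter, since the programmed limits are system-specific, and the function name is illustrative:

```python
import statistics

def accept_midpoint(midpoints_px, max_std_px):
    """Combine the mid-points calculated from each circle-pair
    combination: return their mean, their spread (population standard
    deviation per axis, worst case), and whether the spread stays
    within the programmed limit."""
    xs = [p[0] for p in midpoints_px]
    ys = [p[1] for p in midpoints_px]
    mean = (statistics.fmean(xs), statistics.fmean(ys))
    spread = max(statistics.pstdev(xs), statistics.pstdev(ys))
    return mean, spread, spread <= max_std_px
```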
In some embodiments, a pixel-to-motor step ratio can be determined to convert the coordinates of the mid-point of the circle from the camera coordinate system in pixels to the motor coordinate system in steps. To do so, the robotic arm can move to the tool recording position saved at the beginning and place the tool back in that position. First, a landmark is centered in the camera image. In some embodiments, this can be a particular landmark on the X-Y calibration tool, such as the middle landmark of the exemplary tool described. However, any landmark can be used. The robot then moves a specified distance (in steps) in the X and Y directions, while at the same time, the camera system records the position of the landmark. These values are then used to calculate the ratio of pixels to steps for both axes. Using the previously determined mid-point of the circle, this ratio can then be used to determine the distance of the gripper axis to the mid-point of the camera image in motor steps.
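The pixel-to-step conversion described above can be sketched as follows. The two-point form (landmark position before and after a commanded move of a known number of steps) is an illustrative simplification that treats each axis independently; function names are hypothetical:

```python
def pixels_per_step(p0_px, p1_px, steps_moved):
    """Ratio of apparent landmark motion (px) to commanded motor motion
    (steps), computed separately for the X and Y axes."""
    rx = (p1_px[0] - p0_px[0]) / steps_moved
    ry = (p1_px[1] - p0_px[1]) / steps_moved
    return rx, ry

def offset_in_steps(offset_px, ratio_px_per_step):
    """Convert the camera-to-gripper offset from pixels to motor steps."""
    return (offset_px[0] / ratio_px_per_step[0],
            offset_px[1] / ratio_px_per_step[1])
```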
In accordance with an embodiment, the calibration process described above can be repeated at at least one other gripping height to determine linear offset functions dx(z) = mx·z + bx and dy(z) = my·z + by, which describe the correlation between the distance of the optical axis of the camera to the mechanical axis of the gripper robot in directions X and Y with height Z. If the operation is performed for more than two different heights, a linear function can then be fitted using the measured points (dx,y, z). This function corresponds to the tilt of the optical camera axis relative to the mechanical gripper axis over the whole working space.
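The linear offset functions can be fitted with an ordinary least-squares line per axis, for example with NumPy's polyfit; the function name and argument layout are illustrative:

```python
import numpy as np

def fit_offset_functions(z_heights, dx_values, dy_values):
    """Fit dx(z) = mx*z + bx and dy(z) = my*z + by from offsets measured
    at two or more gripping heights. The slopes describe the tilt of the
    optical camera axis relative to the mechanical gripper axis."""
    mx, bx = np.polyfit(z_heights, dx_values, 1)
    my, by = np.polyfit(z_heights, dy_values, 1)
    return (mx, bx), (my, by)
```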
In another embodiment the distortion correction step can be combined with the X-Y calibration step. As described above, a series of images can be taken by the camera, as the X-Y calibration tool is rotated through the camera view. The tool can include one or more landmarks, such as the circular landmarks shown in
In this case, f corresponds to the focal length. However, since it cannot be clearly determined due to the depth of field range in which the measurement is taken, it can be assumed to be one. Once the height of the landmark has been determined using triangulation, the determined height can be translated into a number of steps in the z axis. To do so, the robotic arm can be positioned directly above the landmark using the X-Y offset determined above. Next, the movement parameters of the z axis are adjusted so that movement is stopped if a pressure sensor in the robotic arm detects a predefined level of resistance. The robot can then be slowly lowered along the z axis until the gripper touches the landmark, at which point the pressure sensor detects resistance and causes the robot to stop. The current position of the gripper in steps is then stored.
In accordance with an embodiment, this process can be repeated at all three steps of the tool. Finally, the three measured points (z[steps/px],z[steps]) are used to fit a linear function. Using the height in pixels that was determined using triangulation, this function can now be used to determine the height in steps for the z axis.
In accordance with an embodiment, the distortion correction step can also be combined with the Z calibration step. As described above, to correct for lens distortion, a series of images of a landmark or landmarks can be taken by the camera. A Z calibration tool, such as the one shown in
Using the distortion corrected images, Z calibration can then be performed. Since the geometry of the periodically repeating pattern landmark is known, the system can determine a pixel to distance relationship with the known pattern. For example, a distance between edges in the checkerboard pattern landmark can be stored in memory. Once the images are corrected for distortion, the images can be analyzed to determine a number of pixels between the edges in the landmark, and a pixel to distance relationship can be determined. The robot arm can then be lowered to touch the landmark, as described above. The distance traveled by the robot arm to contact the landmark can be recorded and used to convert the pixel to distance relationship to a pixel to step relationship on the robot reference system. In accordance with an embodiment, the process can be repeated for additional steps on the z calibration tool.
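The chained conversion can be sketched as follows. The sketch assumes all four inputs are available as measured quantities: the known edge spacing of the pattern in mm, its apparent size in pixels, and the recorded touch-down travel expressed in both steps and mm; the function name is illustrative:

```python
def pixel_to_step(edge_mm, edge_px, travel_steps, travel_mm):
    """Chain the two measured ratios: mm-per-pixel from the known
    checkerboard geometry, and steps-per-mm from the recorded touch-down
    travel, yielding a steps-per-pixel scale for the Z axis."""
    mm_per_px = edge_mm / edge_px
    steps_per_mm = travel_steps / travel_mm
    return mm_per_px * steps_per_mm
```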
However, the two points and the radius do not provide a final solution. As can be seen in formulas (26) and (27), there are two possible mid-points for the circle, (Xm1, Ym1) and (Xm2, Ym2); however, there is typically a large difference between one of the possible mid-points and the measured mid-point of the landmark, so the correct mid-point can be selected. Once the correct mid-point has been identified, the mid-point measurements can be compared to determine the precision of the auto-alignment system.
The processor 1510 may comprise any suitable data processor for processing data. For example, the processor may comprise one or more microprocessors that function separately or together to cause various components of the system to operate.
The memory 1512 may comprise any suitable type of memory device, in any suitable combination. The memory 1512 may comprise one or more volatile or non-volatile memory devices, which operate using any suitable electrical, magnetic, and/or optical data storage technology.
The various participants and elements described herein with reference to the figures may operate one or more computer apparatuses to facilitate the functions described herein. Any of the elements in the above description, including any servers, processors, or databases, may use any suitable number of subsystems to facilitate the functions described herein, such as, e.g., functions for operating and/or controlling the functional units and modules of the laboratory automation system, axis controllers, sensor controllers, etc.
Examples of such subsystems or components are shown in
Embodiments of the technology are not limited to the above-described embodiments. Specific details regarding some of the above-described aspects are provided above. The specific details of the specific aspects may be combined in any suitable manner without departing from the spirit and scope of embodiments of the technology. For example, back end processing, data analysis, data collection, and other processes may all be combined in some embodiments of the technology. However, other embodiments of the technology may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.
It should be understood that the present technology as described above can be implemented in the form of control logic using computer software (stored in a tangible physical medium) in a modular or integrated manner. Furthermore, the present technology may be implemented in the form of, and/or in combination with, any image processing. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement the present technology using hardware and a combination of hardware and software.
Any of the software components or functions described in this application, may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions, or commands on a computer readable medium, such as a random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a CD-ROM. Any such computer readable medium may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network.
The above description is illustrative and is not restrictive. Many variations of the technology will become apparent to those skilled in the art upon review of the disclosure. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the pending claims along with their full scope or equivalents.
One or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the technology.
A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary.
All patents, patent applications, publications, and descriptions mentioned above are herein incorporated by reference in their entirety for all purposes. None is admitted to be prior art.
This application claims priority to U.S. Provisional Patent Application No. 61/710,612, filed on Oct. 5, 2012, titled “SYSTEM AND METHOD FOR AUTO-ALIGNMENT,” by Stefan Rueckl; to U.S. Provisional Patent Application No. 61/745,252, filed on Dec. 21, 2012, titled “SYSTEM AND METHOD FOR AUTO-ALIGNMENT,” by Stefan Rueckl, et al.; and to U.S. Provisional Patent Application No. 61/772,971, filed on Mar. 5, 2013, titled “SYSTEM AND METHOD FOR AUTO-ALIGNMENT,” by Stefan Rueckl, et al., each of which is herein incorporated by reference in its entirety for all purposes. This application is related to U.S. patent application Ser. No. ______ (application No. Not Yet Assigned), filed on Oct. 4, 2013, titled “SYSTEM AND METHOD FOR LASER-BASED AUTO ALIGNMENT,” by Stephen Otts, which is herein incorporated by reference in its entirety for all purposes.