The present invention relates to a calibration method for a robot system, and more particularly, to an automatic calibration method for a robot system using a vision sensor.
Known calibration methods for robot systems generally involve manual teaching. For example, an operator manually controls a robot of the robot system to move a tool mounted on the robot to reach the same target point with a plurality of different poses (for a 6-axis robot, generally with four or more different poses).
The operator must visually determine whether the tool has moved to the same target point, and consequently, calibration errors arise, leading to inaccurate operation of the tool. Furthermore, it is extremely time-consuming to repeatedly control the robot by hand to reach the same target point and to verify each movement by eye, greatly decreasing work efficiency. Moreover, the robot system must be re-calibrated every time a tool is replaced, adding to the time burden.
An object of the invention, among others, is to provide an automatic calibration method for a robot system in which calibration is achieved with high precision and high efficiency. The disclosed automatic calibration method includes the steps of: calibrating a sensor and a sensor coordinate system of the sensor with respect to a world coordinate system; controlling a robot under the guidance of the sensor to move a point of a tool mounted on the robot, the point being defined in a tool coordinate system, to reach a same target point with a plurality of different poses; and calculating a transformation matrix tcpTt of the tool coordinate system with respect to a tool center point coordinate system based on pose data of the robot at the same target point.
The invention will now be described by way of example with reference to the accompanying figures.
The invention is explained in greater detail below with reference to embodiments of an automatic calibration method for a robot system. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
A robot system according to the invention is shown in the accompanying figures. In the illustrated embodiment, the robot system includes a sensor 10, a robot 20, a tool 30 mounted on the robot 20, an object 40 to be machined having a target region 50, a controller (not shown), and a processor (not shown).
The sensor 10 may be a vision sensor, for example, a camera, or any other type of vision sensor known to those with ordinary skill in the art. The sensor 10 has intrinsic parameters including a focal length, a lens distortion, a pixel ratio, and a geometric relationship between a chip pose and a lens pose of the sensor 10. The sensor 10 is configured to capture an image of the tool 30, an object 40, and a target region 50 on the object 40, and identify pose (position and posture) data of the tool 30, the object 40, and the target region 50 based on the captured image.
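For concreteness, such intrinsic parameters are commonly represented in software as a 3x3 intrinsic matrix together with lens distortion coefficients. The following is a minimal Python sketch; the numeric values are hypothetical placeholders, not values from the disclosure:

```python
import numpy as np

# Hypothetical intrinsic parameters of sensor 10 (placeholder values).
fx, fy = 1200.0, 1200.0   # focal length expressed in pixels (focal length / pixel size)
cx, cy = 640.0, 360.0     # principal point (pixel coordinates of the optical axis)

# 3x3 intrinsic matrix K: maps camera-frame 3D points to image coordinates.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Radial/tangential lens distortion coefficients (k1, k2, p1, p2, k3).
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

def project(point_cam):
    """Project a 3D point in the camera frame to pixel coordinates (distortion ignored)."""
    x, y, z = point_cam
    return fx * x / z + cx, fy * y / z + cy

print(project((0.1, 0.05, 1.0)))
```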
The sensor 10 calibrates the robot system and guides the robot 20. Although only one sensor 10 is shown in the illustrated embodiment, one with ordinary skill in the art would understand that the robot system may include a plurality of sensors 10.
The robot 20 in the shown embodiment is a 6-axis robot. One with ordinary skill in the art would understand that the robot may be any multi-degree-of-freedom robot, for example, a four-axis robot or a five-axis robot. Although only one robot 20 is shown in the illustrated embodiment, the robot system may include a plurality of robots 20.
The tool 30 may be any type of tool that can be mounted on the robot 20. The tool 30 is controlled by the robot 20 and is used to machine the object 40.
The controller (not shown) controls the robot system based on a pre-stored program.
The processor (not shown) processes the image data obtained by the sensor 10.
A plurality of coordinate systems are shown in the accompanying figures, including a world coordinate system O, a sensor coordinate system of the sensor 10, a robot coordinate system of the robot 20, a tool center point coordinate system Otcp, a tool coordinate system Ot, an object coordinate system Oo, and a target region coordinate system Op.
Transformation matrices among these coordinate systems are also shown in the accompanying figures. These include a transformation matrix TR of the robot coordinate system with respect to the world coordinate system O, a transformation matrix RTtcp of the tool center point coordinate system Otcp with respect to the robot coordinate system, a transformation matrix tcpTt of the tool coordinate system Ot with respect to the tool center point coordinate system Otcp, a transformation matrix RTt of the tool coordinate system Ot with respect to the robot coordinate system, a transformation matrix TO of the object coordinate system Oo with respect to the world coordinate system O, and a transformation matrix OTp of the target region coordinate system Op with respect to the object coordinate system Oo.
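For background, such transformation matrices are conveniently handled in software as 4x4 homogeneous matrices, which compose by matrix multiplication. The following minimal Python sketch uses hypothetical placeholder poses to illustrate the chaining that underlies expressions (1) and (2) below:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transformation from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: robot base in world, TCP in robot base, tool in TCP.
T_R     = transform(np.eye(3), [1.0, 0.0, 0.0])   # TR: robot w.r.t. world
R_T_tcp = transform(np.eye(3), [0.2, 0.1, 0.8])   # RTtcp: TCP w.r.t. robot
tcp_T_t = transform(np.eye(3), [0.0, 0.0, 0.15])  # tcpTt: tool w.r.t. TCP

# Chaining: pose of the tool coordinate system Ot expressed in the world frame.
T_t_world = T_R @ R_T_tcp @ tcp_T_t
print(T_t_world[:3, 3])   # tool position in world coordinates
```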
The calibration process of the robot system will now be described generally. The process includes calibrating the sensor 10 and the sensor coordinate system with respect to the world coordinate system O, and then identifying, in turn, the transformation matrices tcpTt, TR, TO, and OTp, each of which is described in greater detail below.
Identifying the transformation matrix tcpTt will now be described in greater detail.
First, a target point is defined in the world coordinate system O and recognized using the sensor 10.
Second, the robot 20 is automatically controlled by the controller under the guidance of the calibrated sensor 10 to move a point of the tool 30 in the tool coordinate system Ot to reach the same target point with a plurality of different poses. In order to accurately move the point of the tool 30 to the same target point, closed-loop feedback control of the pose of the robot 20, based on the feedback signal from the sensor 10, is performed, as described in greater detail below.
Third, the transformation matrix tcpTt of the robot 20 is calculated based on the pose data of the robot 20 at the same target point. Due to the closed-loop feedback control, the point of the tool 30 is accurately moved to the same target point with each of the different poses, improving the accuracy of the calculated transformation matrix tcpTt.
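The disclosure does not recite a particular solver for this step. Purely as an illustration, the translational part of tcpTt can be recovered from the recorded poses by a least-squares fit: since the tool point coincides with one common target point in every pose, each pair of recorded poses yields a linear constraint on the tool offset. A minimal Python sketch under that assumption; the function names and synthetic data are hypothetical:

```python
import numpy as np

def tool_offset_from_poses(rotations, translations):
    """Least-squares estimate of the tool-point offset x in the TCP frame.

    Each recorded pose i satisfies R_i @ x + p_i = c for one common target
    point c, so each pose pair (i, j) gives (R_i - R_j) @ x = p_j - p_i.
    """
    A_rows, b_rows = [], []
    n = len(rotations)
    for i in range(n):
        for j in range(i + 1, n):
            A_rows.append(rotations[i] - rotations[j])
            b_rows.append(translations[j] - translations[i])
    x, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
    return x

# Synthetic check: a known offset observed from four different orientations.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

true_x = np.array([0.01, -0.02, 0.15])                 # hypothetical tool offset
target = np.array([0.5, 0.2, 0.3])                     # common target point c
Rs = [np.eye(3), rot_z(0.9), rot_x(1.1), rot_z(1.8) @ rot_x(-0.7)]
ps = [target - R @ true_x for R in Rs]
print(tool_offset_from_poses(Rs, ps))                  # approximately true_x
```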
The robot system may need to frequently replace the tool 30. In this case, after the tool 30 is replaced with a new tool 30, the robot system automatically and immediately re-identifies the transformation matrix tcpTt of the robot 20.
In an embodiment, the above process of identifying the transformation matrix tcpTt may be repeated with a plurality of additional target points to improve the accuracy of the calculated tcpTt. For instance, a second target point is defined in the world coordinate system O and recognized using the sensor 10. The robot 20 is controlled by the controller under the guidance of the calibrated sensor 10, by closed-loop feedback control, to move the point of the tool 30 in the tool coordinate system Ot to reach the same second target point with a plurality of different poses. A second transformation matrix tcpTt of the robot 20 is then calculated based on the pose data of the robot 20 at the same second target point.
A difference between the first transformation matrix tcpTt and the second transformation matrix tcpTt is then compared to a predetermined allowable range. If the difference is outside the allowable range, the calculation of the transformation matrix tcpTt is restarted. If the difference is within the allowable range, an average of the first transformation matrix tcpTt and the second transformation matrix tcpTt is used as the overall transformation matrix tcpTt.
This process can be repeated for first through Nth target points, obtaining N transformation matrices tcpTt, wherein N is an integer greater than or equal to 2. The overall transformation matrix tcpTt may then be calculated by applying the least squares method to the N transformation matrices tcpTt. As the integer N increases, the precision of the overall transformation matrix tcpTt improves.
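By way of illustration, the tolerance comparison and the averaging of multiple tcpTt estimates might be implemented as follows. This is a minimal sketch: the tolerance values are hypothetical, and the simple chordal averaging of rotations stands in for whatever estimator (e.g., the least squares method above) an implementation actually uses:

```python
import numpy as np

def within_tolerance(T1, T2, trans_tol=1e-3, rot_tol=1e-2):
    """Check that two 4x4 estimates of tcpTt agree within hypothetical tolerances."""
    d_trans = np.linalg.norm(T1[:3, 3] - T2[:3, 3])            # translation difference
    R_delta = T1[:3, :3] @ T2[:3, :3].T                        # relative rotation
    angle = np.arccos(np.clip((np.trace(R_delta) - 1) / 2, -1.0, 1.0))
    return d_trans <= trans_tol and angle <= rot_tol

def average_transforms(Ts):
    """Naive average of N estimates: mean translation, re-orthonormalised mean rotation."""
    t_mean = np.mean([T[:3, 3] for T in Ts], axis=0)
    R_mean = np.mean([T[:3, :3] for T in Ts], axis=0)
    U, _, Vt = np.linalg.svd(R_mean)       # project the mean back onto SO(3)
    R = U @ Vt
    if np.linalg.det(R) < 0:               # guard against a reflection
        U[:, -1] *= -1
        R = U @ Vt
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t_mean
    return T
```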
Identifying the transformation matrix TR will now be described in greater detail.
First, the transformation matrix RTt is calculated based on the obtained transformation matrix tcpTt according to the following expression (1):
RTt = RTtcp * tcpTt (1),

wherein RTtcp is the transformation matrix of the tool center point coordinate system Otcp with respect to the robot coordinate system, which is available from the robot 20.
Second, a target pose of the tool 30 is defined in the world coordinate system O and recognized by the sensor 10 to obtain a transformation matrix T of the recognized target pose with respect to the world coordinate system O.
Third, the robot 20 is controlled by the controller under the guidance of the sensor 10 to move the tool 30 to reach the recognized target pose. The transformation matrix TR is calculated based on the transformation matrix T of the recognized target pose with respect to the world coordinate system O and the calculated transformation matrix RTt, according to the following expression (2):
TR = T * (RTt)^-1 (2),

wherein (RTt)^-1 is the inverse matrix of RTt.
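In the 4x4 homogeneous form, expressions (1) and (2) reduce to a matrix product and a matrix inverse. A minimal Python sketch; the variable names are illustrative:

```python
import numpy as np

def solve_TR(T, R_T_tcp, tcp_T_t):
    """Compute TR from expressions (1) and (2) using 4x4 homogeneous matrices.

    T       : recognised target pose of the tool w.r.t. the world frame
    R_T_tcp : TCP frame w.r.t. the robot frame (available from the robot)
    tcp_T_t : tool frame w.r.t. the TCP frame (from the calibration above)
    """
    R_T_t = R_T_tcp @ tcp_T_t              # expression (1): RTt = RTtcp * tcpTt
    return T @ np.linalg.inv(R_T_t)        # expression (2): TR = T * (RTt)^-1
```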
In order to accurately move the tool 30 to the recognized target pose in the third step, closed-loop feedback control of the pose of the robot 20, based on the feedback signal from the sensor 10, may again be performed.
The actual pose S may be pose information calculated based on the data from the sensor 10, or pose data read directly from the sensor space. There is a mapping relationship between the sensor space and the pose space, for example, s = L * p, wherein s refers to the pose data read from the sensor space, p refers to the actual pose data in the pose space, and L refers to a mapping matrix between the sensor space and the pose space. Thereby, the robot 20 may be controlled in the pose space, that is, the feedback signal S is a pose signal calculated based on the pose data read from the sensor 10. Alternatively, the robot 20 may be controlled in the sensor space, that is, the feedback signal S is a pose signal directly read from the sensor 10. The particular control algorithm differs between the sensor space and the pose space, but performs the same function in both spaces.
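By way of illustration only, one iteration scheme for such closed-loop control in the sensor space might look as follows. This is a minimal sketch assuming the linear mapping s = L * p described above; the callbacks, gain, and convergence threshold are hypothetical:

```python
import numpy as np

def servo_to_target(read_sensor, command_pose, L, s_target,
                    gain=0.5, tol=1e-4, max_iters=100):
    """Closed-loop correction driving the sensor reading s toward s_target.

    read_sensor()    : hypothetical callback returning the current reading s
                       from the sensor space
    command_pose(dp) : hypothetical callback applying an incremental pose
                       correction dp to the robot in the pose space
    L                : mapping matrix between pose space and sensor space (s = L @ p)
    """
    L_pinv = np.linalg.pinv(L)                 # sensor-space error -> pose-space correction
    for _ in range(max_iters):
        error = s_target - read_sensor()       # feedback error in the sensor space
        if np.linalg.norm(error) < tol:
            return True                        # converged: the tool reached the target
        command_pose(gain * (L_pinv @ error))  # proportional correction step
    return False
```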
Identifying the transformation matrix TO will now be described in greater detail.
First, an image of the object 40 is captured by the calibrated sensor 10 and processed by the processor to obtain the object coordinate system Oo. Then, the transformation matrix TO of the object coordinate system Oo with respect to the world coordinate system O can be obtained. Generally, the object 40 to be machined is fixed, and therefore, the pose of the object 40 with respect to the world coordinate system O is constant; the object coordinate system Oo and the transformation matrix TO only need to be identified once. However, in some conditions, the object 40 to be machined is continuously or intermittently moving and the pose of the object 40 is continuously or intermittently variable. In these conditions, the transformation matrix TO and object coordinate system Oo must be recalculated.
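Pose recognition of the object 40 from a calibrated image is commonly performed with a Perspective-n-Point solver; the disclosure does not mandate any particular method. A minimal sketch using OpenCV's solvePnP, in which the model points, detected pixel coordinates, and intrinsic matrix are hypothetical placeholders:

```python
import numpy as np
import cv2

# Hypothetical: four known 3D feature points on object 40, in the object frame Oo.
model_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                      [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float64)
# Hypothetical pixel coordinates of those features detected in the captured image.
image_pts = np.array([[320., 240.], [420., 238.],
                      [424., 338.], [318., 336.]], dtype=np.float64)
K = np.array([[1200., 0., 640.], [0., 1200., 360.], [0., 0., 1.]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                 # object frame Oo w.r.t. the camera frame

# Pose of Oo in the camera frame; with the sensor calibrated to the world
# coordinate system O, TO follows by composing with the camera-to-world transform.
T_cam_obj = np.eye(4)
T_cam_obj[:3, :3], T_cam_obj[:3, 3] = R, tvec.ravel()
```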
Identifying the transformation matrix OTp will now be described in greater detail.

First, an image of the target region 50 on the object 40 is captured by the calibrated sensor 10 and processed by the processor to obtain the target region coordinate system Op. Then, the transformation matrix OTp of the target region coordinate system Op with respect to the object coordinate system Oo can be obtained. Since the pose of the target region 50 in the object coordinate system Oo is constant, the transformation matrix OTp is constant.
Advantageously, in the automatic calibration method for a robot system according to the invention, the robot 20 can accurately move to the same target point with the plurality of different poses, improving the calibration accuracy of the robot system. Furthermore, the calibration of the robot system is performed automatically by the control algorithm, increasing the calibration efficiency and simplifying the calibration operation.
This application is a continuation of PCT International Patent Application No. PCT/IB2015/050707, filed Jan. 30, 2015, which claims priority under 35 U.S.C. § 119 to Chinese Patent Application No. 2014100471156, filed Feb. 11, 2014.