The present application is based on, and claims priority from JP Application Serial Number 2021-023173, filed Feb. 17, 2021, the disclosure of which is hereby incorporated by reference herein in its entirety.
The present disclosure relates to a calibration method.
For example, as described in JP-A-8-85083 (Patent Literature 1), there has been known a robot that includes a robot arm to the distal end of which a tool functioning as an end effector is attached, the robot driving the robot arm to perform predetermined work on a workpiece. Such a robot grasps, in a robot coordinate system, the position of a tool center point set in the tool, controls driving of the robot arm such that the tool center point moves to a predetermined position, and performs the predetermined work. Therefore, the robot needs to calculate an offset between a control point set at the distal end of the robot arm and the tool center point, that is, perform calibration.
In Patent Literature 1, the robot positions the tool center point, in at least three different postures, at a predetermined point in a space specified by the robot coordinate system, that is, moves the tool center point to the predetermined point. The robot calculates a position and a posture of the tool center point based on the postures of the robot arm at those times.
However, in the method disclosed in Patent Literature 1, the movement of the tool center point to the predetermined point is performed by visual check. Therefore, the tool center point and the predetermined point do not always actually coincide, and variation occurs. As a result, accurate calibration cannot be performed.
A calibration method according to an aspect of the present disclosure is a calibration method for, in a robot including a robot arm, calculating a positional relation between a first control point set in an end effector attached to a distal end of the robot arm and a second control point set further on the robot arm side than the end effector, the calibration method including: a first step of imaging the robot using an imaging section and moving the robot arm to be in a first state in which a first feature point of the robot associated with the first control point is located in a predetermined position in a captured image of the imaging section and the robot arm takes a first posture; a second step of imaging the robot and moving the robot arm to be in a second state in which the first feature point is located in the predetermined position in the captured image of the imaging section and the robot arm takes a second posture; a third step of calculating a first vector that passes a first reference position, obtained from a position of the second control point in the first state and a position of the second control point in the second state, and a position of the first feature point in the second state; a fourth step of rotating the robot arm centering on a reference axis that crosses an axis extending along a component of the first vector; a fifth step of moving the robot arm to be in a third state in which the first feature point is located in the predetermined position in the captured image of the imaging section and the robot arm takes a third posture; a sixth step of calculating a second vector that, in the third state, passes a second reference position obtained from a position of the second control point in the third state and a position of the first feature point in the third state; and a seventh step of calculating a coordinate of the first feature point in a robot coordinate system based on the first vector and the second vector.
The calibration method according to the present disclosure is explained in detail below based on a preferred embodiment shown in the accompanying drawings. In the following explanation, for convenience of explanation, a +Z-axis direction, that is, the upper side in
As shown in
First, the robot 1 is explained.
The robot 1 shown in
The robot 1 is not limited to the configuration shown in
The base 11 is a supporting body that supports the robot arm 10 from the lower side to be capable of driving the robot arm 10. The base 11 is fixed to, for example, a floor in a factory. In the robot 1, the base 11 is electrically coupled to the control device 3 via a relay cable 18. The coupling of the robot 1 and the control device 3 is not limited to wired coupling as in the configuration shown in
In this embodiment, the robot arm 10 includes a first arm 12, a second arm 13, a third arm 14, a fourth arm 15, a fifth arm 16, and a sixth arm 17. These arms are coupled in this order from the base 11 side. The number of arms included in the robot arm 10 is not limited to six and may be, for example, one, two, three, four, five, or seven or more. The sizes, such as the total lengths, of the arms are not particularly limited and can be set as appropriate.
The base 11 and the first arm 12 are coupled via a joint 171. The first arm 12 is capable of turning, with a first turning axis parallel to the vertical direction as a turning center, around the first turning axis with respect to the base 11. The first turning axis coincides with the normal of the floor to which the base 11 is fixed.
The first arm 12 and the second arm 13 are coupled via a joint 172. The second arm 13 is capable of turning with respect to the first arm 12 with a second turning axis parallel to the horizontal direction as a turning center. The second turning axis is parallel to an axis orthogonal to the first turning axis.
The second arm 13 and the third arm 14 are coupled via a joint 173. The third arm 14 is capable of turning with respect to the second arm 13 with a third turning axis parallel to the horizontal direction as a turning center. The third turning axis is parallel to the second turning axis.
The third arm 14 and the fourth arm 15 are coupled via a joint 174. The fourth arm 15 is capable of turning with respect to the third arm 14 with a fourth turning axis parallel to the center axis direction of the third arm 14 as a turning center. The fourth turning axis is orthogonal to the third turning axis.
The fourth arm 15 and the fifth arm 16 are coupled via a joint 175. The fifth arm 16 is capable of turning with respect to the fourth arm 15 with a fifth turning axis as a turning center. The fifth turning axis is orthogonal to the fourth turning axis.
The fifth arm 16 and the sixth arm 17 are coupled via a joint 176. The sixth arm 17 is capable of turning with respect to the fifth arm 16 with a sixth turning axis as a turning center. The sixth turning axis is orthogonal to the fifth turning axis.
The sixth arm 17 is a robot distal end portion located on the most distal end side in the robot arm 10. The sixth arm 17 can turn together with the end effector 20 according to driving of the robot arm 10.
The robot 1 includes a motor M1, a motor M2, a motor M3, a motor M4, a motor M5, and a motor M6 functioning as driving sections and an encoder E1, an encoder E2, an encoder E3, an encoder E4, an encoder E5, and an encoder E6. The motor M1 is incorporated in the joint 171 and relatively rotates the base 11 and the first arm 12. The motor M2 is incorporated in the joint 172 and relatively rotates the first arm 12 and the second arm 13. The motor M3 is incorporated in the joint 173 and relatively rotates the second arm 13 and the third arm 14. The motor M4 is incorporated in the joint 174 and relatively rotates the third arm 14 and the fourth arm 15. The motor M5 is incorporated in the joint 175 and relatively rotates the fourth arm 15 and the fifth arm 16. The motor M6 is incorporated in the joint 176 and relatively rotates the fifth arm 16 and the sixth arm 17.
The encoder E1 is incorporated in the joint 171 and detects the position of the motor M1. The encoder E2 is incorporated in the joint 172 and detects the position of the motor M2. The encoder E3 is incorporated in the joint 173 and detects the position of the motor M3. The encoder E4 is incorporated in the joint 174 and detects the position of the motor M4. The encoder E5 is incorporated in the joint 175 and detects the position of the motor M5. The encoder E6 is incorporated in the joint 176 and detects the position of the motor M6.
The encoders E1 to E6 are electrically coupled to the control device 3 and transmit position information, that is, rotation amounts of the motors M1 to M6, to the control device 3 as electric signals. The control device 3 drives the motors M1 to M6 via motor drivers D1 to D6 based on this information. That is, controlling the robot arm 10 means controlling the motors M1 to M6.
A control point CP is set at the distal end of a force detecting section 19 provided in the robot arm 10. The control point CP means a point serving as a reference in performing control of the robot arm 10. The robot system 100 grasps the position of the control point CP in a robot coordinate system and drives the robot arm 10 such that the control point CP moves to a desired position. That is, the control point CP is set further on the robot arm 10 side than the end effector 20. In this embodiment, the control point CP is set at the distal end of the force detecting section 19. However, if the position and the posture of the control point CP with respect to the origin of the robot coordinate system are known, the control point CP may be set in any position further on the robot arm 10 side than the end effector 20. For example, the control point CP may be set at the distal end of the robot arm 10.
In the robot 1, the force detecting section 19 that detects force is detachably set in the robot arm 10. The robot arm 10 can be driven in a state in which the force detecting section 19 is set in the robot arm 10. In this embodiment, the force detecting section 19 is a six-axis force sensor. The force detecting section 19 detects the magnitudes of forces on three detection axes orthogonal to one another and the magnitudes of torques around the three detection axes. That is, the force detecting section 19 detects force components in axial directions of an X axis, a Y axis, and a Z axis orthogonal to one another, a force component in a W direction around the X axis, a force component in a V direction around the Y axis, and a force component in a U direction around the Z axis. In this embodiment, the Z-axis direction is the vertical direction. The force components in the axial directions can also be referred to as “translational force components,” and the force components around the axes can also be referred to as “torque components.” The force detecting section 19 is not limited to the six-axis force sensor and may be a sensor having another configuration.
In this embodiment, the force detecting section 19 is set in the sixth arm 17. The setting location of the force detecting section 19 is not limited to the sixth arm 17, that is, the arm located on the most distal end side, and may be, for example, another arm or a part between arms adjacent to each other.
The end effector 20 can be detachably attached to the force detecting section 19. In this embodiment, the end effector 20 is configured by a screwdriver that fastens screws to a work target object. The end effector 20 is fixed to the force detecting section 19 via a coupling bar 21. In the configuration shown in
The end effector 20 is not limited to the configuration shown in
In the robot coordinate system, a tool center point TCP, which is a first control point, is set at the distal end of the end effector 20. In the robot system 100, the tool center point TCP can be set as a reference of control by grasping the position of the tool center point TCP in the robot coordinate system. The robot system 100 grasps, in the robot coordinate system, the position of the control point CP, which is a second control point, set in the robot arm 10. Accordingly, by grasping a positional relation between the tool center point TCP and the control point CP, it is possible to drive the robot arm 10 and perform work with the tool center point TCP set as the reference of the control. Grasping the positional relation between the tool center point TCP and the control point CP in this way is referred to as calibration. The calibration method according to the present disclosure explained below is a method for grasping the positional relation between the tool center point TCP and the control point CP.
Subsequently, the imaging section 5 is explained.
The imaging section 5 can include an imaging element configured by a CCD (Charge Coupled Device) image sensor having a plurality of pixels and an optical system including a lens. As shown in
The imaging section 5 is set near a setting surface of the robot 1, faces upward, and performs imaging in the upward direction. In this embodiment, to facilitate explanation of the calibration method explained below, the imaging section 5 is set in a state in which an optical axis O5 is slightly inclined with respect to the vertical direction, that is, the Z axis. A direction that the imaging section 5 faces is not particularly limited. The imaging section 5 may be disposed to face the horizontal direction, the vertical direction, or a direction crossing these. A disposition position of the imaging section 5 is not limited to the configuration shown in
Subsequently, the control device 3 and the teaching device 4 are explained. In the following explanation in this embodiment, the control device 3 executes the calibration method according to the present disclosure. However, in the present disclosure, the calibration method is not limited to this. The teaching device 4 may execute the calibration method according to the present disclosure or the control device 3 and the teaching device 4 may share the execution of the calibration method according to the present disclosure.
As shown in
The processor 31 is configured by, for example, a CPU (Central Processing Unit) and reads out and executes various programs and the like stored in the storing section 32. A command signal generated by the processor 31 is transmitted to the robot 1 via the communication section 33. Consequently, the robot arm 10 can execute predetermined work. In this embodiment, the processor 31 executes steps S101 to S116 explained below based on an imaging result of the imaging section 5. However, the execution of the steps is not limited to this. The processor 41 of the teaching device 4 may be configured to execute steps S101 to S116, or the processor 31 and the processor 41 may be configured to share the execution of steps S101 to S116.
The storing section 32 stores various programs and the like executable by the processor 31. Examples of the storing section 32 include a volatile memory such as a RAM (Random Access Memory), a nonvolatile memory such as a ROM (Read Only Memory), and a detachable external storage device.
The communication section 33 transmits and receives signals to and from the sections of the robot 1 and the teaching device 4 using an external interface such as a wired LAN (Local Area Network) or a wireless LAN.
Subsequently, the teaching device 4 is explained.
As shown in
The processor 41 is configured by, for example, a CPU (Central Processing Unit) and reads out and executes various programs such as a teaching program stored in the storing section 42. The teaching program may be generated by the teaching device 4, may be stored from an external recording medium such as a CD-ROM, or may be stored via a network or the like.
A signal generated by the processor 41 is transmitted to the control device 3 of the robot 1 via the communication section 43. Consequently, the robot arm 10 can execute predetermined work under predetermined conditions.
The storing section 42 stores various programs and the like executable by the processor 41. Examples of the storing section 42 include a volatile memory such as a RAM (Random Access Memory), a nonvolatile memory such as a ROM (Read Only Memory), and a detachable external storage device.
The communication section 43 transmits and receives signals to and from the control device 3 using an external interface such as a wired LAN (Local Area Network) or a wireless LAN.
The robot system 100 is explained above.
In such a robot system 100, before the robot 1 performs the predetermined work, an operator attaches an end effector corresponding to the work to the distal end of the robot arm 10. The control device 3 or the teaching device 4 needs to grasp what kind of end effector is attached. Even if the control device 3 or the teaching device 4 grasps the shape and the type of the attached end effector, the end effector is not always attached in a desired posture when the operator attaches it. Therefore, the operator performs calibration for associating the tool center point TCP of the attached end effector 20 with the control point CP.
The calibration method according to the present disclosure is explained below with reference to
In the following explanation, in a captured image of the imaging section 5, the tool center point TCP is explained as a first feature point. That is, the tool center point TCP, which is the first control point, is recognized as a first feature point. Steps S100 to S103 are a first step, steps S105 to S111 are a second step, steps S112 and S113 are a third step, step S115 is a fourth step, step S103 in a second loop is a fifth step, step S113 in the second loop is a sixth step, and step S116 is a seventh step.
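For reference, the two-loop flow of steps S100 to S116 can be outlined in code form. The following Python sketch is only an outline under assumed placeholder helpers (move_feature_to_image_center, control_point_position, and the other names are hypothetical, not functions of the control device 3); the geometric details of each step follow the figures and are not reproduced here.

```python
def calibrate_outline(robot, geometry):
    """Outline of steps S100-S116. 'robot' and 'geometry' are hypothetical
    stand-ins for functionality the disclosure attributes to the processor 31."""
    # --- first loop ---
    robot.move_feature_to_image_center()   # steps S100-S103: first state (first posture)
    p1 = robot.control_point_position()    # position P1 of the control point CP
    robot.change_to_second_posture()       # steps S105-S111: second state (second posture)
    p2 = robot.control_point_position()    # position P2 of the control point CP
    b1 = geometry.first_vector(p1, p2)     # steps S112-S113: first vector B1 through P0A
    robot.rotate_about_reference_axis()    # step S115: rotation about the reference axis J
    # --- second loop ---
    robot.move_feature_to_image_center()   # steps S102-S103 again: third state (third posture)
    p3 = robot.control_point_position()    # position of the control point CP in the third state
    b2 = geometry.second_vector(p3)        # step S113 again: second vector B2 through P0B
    # step S116: coordinate of the first feature point (tool center point TCP)
    return geometry.intersect_or_midpoint(b1, b2)
```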
First, in step S100, as shown in
The imaging surface F1 is a plane having an optical axis of the imaging section 5 as a normal. An imageable position has a predetermined width along an optical axis direction of the imaging section 5. This width is a region between two broken lines in
In step S100, the imaging section 5 images the tool center point TCP in motion as a video and transmits the video to the control device 3. The processor 31 grasps the tool center point TCP as the first feature point in the video transmitted from the imaging section 5 and drives the robot arm 10 such that the tool center point TCP is located in any position on the imaging surface F1.
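As a rough illustration of this image-based driving, the following Python sketch shows a simple proportional loop that moves the arm until the detected feature point reaches a target pixel position. The helpers detect_feature_px and translate_in_image_plane, as well as the gain and tolerance values, are assumptions for illustration and do not correspond to an actual API of the robot system 100.

```python
import numpy as np

def center_feature(detect_feature_px, translate_in_image_plane,
                   target_px, gain=0.0005, tol_px=1.0, max_iter=100):
    """Drive the arm until the feature point reaches target_px in the image.

    detect_feature_px() -> (u, v) pixel coordinates of the detected feature point.
    translate_in_image_plane(dx, dy) -> small Cartesian move of the arm mapped to
    the image axes (hypothetical helper; the mapping depends on the camera setup).
    """
    for _ in range(max_iter):
        u, v = detect_feature_px()
        err = np.array(target_px, dtype=float) - np.array([u, v], dtype=float)
        if np.linalg.norm(err) <= tol_px:
            return True                          # feature point is at the target position
        translate_in_image_plane(*(gain * err))  # proportional correction toward the target
    return False                                 # did not converge within max_iter
```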
Subsequently, in step S101, the processor 31 sets a reference plane F2 as shown in
In this embodiment, the reference plane F2 is a plane parallel to the X-Y plane. However, in the present disclosure, the reference plane F2 is not limited to this and may not be the plane parallel to the X-Y plane. For example, the reference plane F2 may be a plane parallel to an X-Z plane, may be a plane parallel to a Y-Z plane, or may be a plane inclined with respect to the X-Z plane and the Y-Z plane.
In this way, the reference plane F2 is a plane parallel to a work surface on which the robot arm 10 performs work and is a plane serving as a reference when the robot arm 10 performs work. The reference plane F2 is a plane serving as a reference in changing the posture of the robot arm 10 in step S103, step S105, step S106, and step S109 explained below.
In this way, in the first step, the processor 31 sets the reference plane F2 serving as the reference in moving the robot arm 10. Consequently, it is possible to accurately and easily execute step S103, step S105, step S106, and step S109 explained below.
Subsequently, the processor 31 executes step S102. In step S102, as shown in
Subsequently, in step S103, as shown in
The posture of the robot arm 10 shown in
Subsequently, in step S104, the processor 31 determines whether processing of the calibration method is in a first loop. The determination in this step is performed based on, for example, whether a first vector explained below is already calculated and stored. When determining in step S104 that the processing is in the first loop, the processor 31 shifts to step S105.
Subsequently, in step S105, as shown in
Subsequently, in step S106, as shown in
Subsequently, in step S107, as shown in
Subsequently, in step S108, as shown in
Subsequently, in step S109, as shown in
Subsequently, in step S110, as shown in
Subsequently, in step S111, as shown in
In this way, in the second step, when changing the robot arm 10 from the first state to the second state, the processor 31 changes the robot arm 10 to the second state by: driving the robot arm 10 such that the control point CP, which is the second control point, maintains its position in the first state and the robot arm 10 rotates centering on the first axis O1 extending along the vertical direction; driving the robot arm 10 such that the tool center point TCP, which is the first feature point, is located in the imaging center serving as the predetermined position in the captured image of the imaging section 5; driving the robot arm 10 such that the tool center point TCP rotates centering on the second axis O2 parallel to the normal of the reference plane F2; and driving the robot arm 10 such that the tool center point TCP is again located in the imaging center serving as the predetermined position in the captured image of the imaging section 5. The first reference position P0A explained below can be accurately calculated through such a step.
Subsequently, in step S112, as shown in
Subsequently, in step S113, as shown in
Such steps S112 and S113 are the third step of calculating the first vector B1, which passes the first reference position P0A, obtained from the position P1 of the control point CP in the first state and the position P2 of the control point CP in the second state, and the position of the tool center point TCP in the second state.
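For illustration, a vector such as the first vector B1 can be handled as a parametric line in the robot coordinate system once two points on it are known. The sketch below shows only this representation; how the first reference position P0A is obtained from P1 and P2, and how the feature-point position on the line is determined, follow the figures and are not reproduced here, so the numerical inputs are placeholders.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Line3D:
    """Parametric line p(t) = point + t * direction in robot coordinates."""
    point: np.ndarray       # a point the line passes, e.g. the reference position P0A
    direction: np.ndarray   # unit vector along the line

    @classmethod
    def through(cls, a, b):
        """Build the line passing two points a and b."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        d = b - a
        return cls(point=a, direction=d / np.linalg.norm(d))

# Placeholder coordinates (not values from the disclosure): once P0A and a second
# point on B1 are known, B1 is fully described by Line3D.through(P0A, second_point).
b1 = Line3D.through([0.40, 0.10, 0.30], [0.42, 0.12, 0.10])
```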
Subsequently, in step S114, the processor 31 determines whether the processing is in the first loop. When determining in step S114 that the processing is in the first loop, the processor 31 shifts to step S115. When determining in step S114 that the processing is not in the first loop, that is, is in the second loop, the processor 31 shifts to step S116.
In step S115, as shown in
The reference axis J is an axis crossing the normal of the reference plane F2 in the configuration shown in
In the configuration shown in
Returning to step S102, the processor 31 executes the second loop in a state in which the positions of the tool center point TCP and the control point CP are different from initial positions in step S100.
The fifth step is step S102 and step S103 in the second loop, that is, a step of imaging the robot 1 using the imaging section 5 and moving the robot arm 10 such that the tool center point TCP is located in the imaging center in the captured image of the imaging section 5 and the robot arm 10 changes to a third state in which the robot arm 10 takes a third posture different from the first posture. As shown in
Subsequently, in step S104, the processor 31 determines again whether the processing is in the first loop. When determining in step S104 that the processing is not in the first loop, that is, is in the second loop, the processor 31 shifts to step S113.
The sixth step is step S113 in the second loop. That is, as shown in
A positional relation between the position of the control point CP in the third state and the second reference position P0B is the same as a positional relation between the position P2 of the control point CP in the second state and the first reference position P0A.
Subsequently, in step S116, as shown in
When the first vector B1 and the second vector B2 cross, the processor 31 calculates an intersection P5 of the first vector B1 and the second vector B2, calculates a coordinate (X, Y, Z) of the intersection P5, and regards the coordinate (X, Y, Z) as the position of the tool center point TCP at the time when the control point CP is located in the position P2.
On the other hand, although not shown in
The position of the control point CP and the position of the tool center point TCP can be linked, that is, associated based on the calculated positional relation between the tool center point TCP and the control point CP. Accordingly, the robot arm 10 can be driven with the position of the tool center point TCP as a reference. The predetermined work can be accurately performed.
In this way, in the seventh step, when the first vector B1 and the second vector B2 cross, a point where the first vector B1 and the second vector B2 cross is regarded as the position of the tool center point TCP, which is the first feature point. Consequently, the position of the tool center point TCP can be accurately specified. Accordingly, the predetermined work can be accurately performed.
In the seventh step, when the first vector B1 and the second vector B2 are skew, that is, present in twisted positions with respect to each other, the midpoint of the shortest line segment connecting the first vector B1 and the second vector B2 is regarded as the position of the tool center point TCP, which is the first feature point. Consequently, even when the first vector B1 and the second vector B2 do not cross, the position of the tool center point TCP can be accurately specified. Accordingly, the predetermined work can be accurately performed.
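The determination in step S116 amounts to standard three-dimensional line geometry: find the intersection of the two lines or, when they are skew, the midpoint of their common perpendicular segment. The following Python sketch shows that computation; the point-and-direction inputs and the tolerance value are assumptions for illustration.

```python
import numpy as np

def meeting_point(p1, d1, p2, d2, tol=1e-9):
    """Return the point regarded as the first feature point for two lines.

    Each line is given by a point it passes (p1, p2) and a direction (d1, d2),
    e.g. the first vector B1 and the second vector B2. If the lines cross, the
    intersection is returned; if they are skew, the midpoint of the shortest
    segment between them is returned, as described above.
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # zero only if the lines are parallel
    if abs(denom) < tol:
        raise ValueError("lines are parallel; the feature point is undetermined")
    t = (b * e - c * d) / denom           # parameter of the closest point on line 1
    s = (a * e - b * d) / denom           # parameter of the closest point on line 2
    q1 = p1 + t * d1                      # closest point on B1
    q2 = p2 + s * d2                      # closest point on B2
    # If the lines cross, q1 equals q2 and this is the intersection P5; otherwise
    # it is the midpoint of the shortest segment between the two lines.
    return (q1 + q2) / 2.0
```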
As explained above, the present disclosure is the calibration method for calculating, in the robot 1 including the robot arm 10, a positional relation between the tool center point TCP, which is the first control point, set in the end effector 20 attached to the distal end of the robot arm 10 and the control point CP, which is the second control point, set further on the robot arm 10 side than the end effector 20. The calibration method according to the present disclosure includes a first step of imaging the robot 1 using the imaging section 5 and moving the robot arm 10 to be in a first state in which the tool center point TCP, which can be regarded as the first feature point of the robot 1 associated with the first control point, is located in a predetermined position, that is, the imaging center, in a captured image of the imaging section 5 and the robot arm 10 takes a first posture, a second step of imaging the robot 1 and moving the robot arm 10 to be in a second state in which the tool center point TCP, which can be regarded as the first feature point, is located in the imaging center in a captured image of the imaging section 5 and the robot arm 10 takes a second posture, a third step of calculating the first vector B1 that passes the first reference position P0A, obtained from the position P1 of the control point CP in the first state and the position P2 of the control point CP in the second state, and the position of the tool center point TCP in the second state, a fourth step of rotating the robot arm 10 centering on the reference axis J that crosses an axis extending along a component of the first vector B1, a fifth step of moving the robot arm 10 to be in a third state in which the tool center point TCP, which can be regarded as the first feature point, is located in the predetermined position, that is, the imaging center, in the captured image of the imaging section 5 and the robot arm 10 takes a third posture, a sixth step of calculating the second vector B2 that, in the third state, passes the second reference position P0B, obtained from the position of the control point CP, which is the second control point, in the third state, and the position of the tool center point TCP, which can be regarded as the first feature point, in the third state, and a seventh step of calculating a coordinate of the first feature point in the robot coordinate system based on the first vector B1 and the second vector B2.
According to such a configuration of the present disclosure, since a process in which the operator visually checks the positions of the tool center point TCP and the like is absent, more accurate calibration can be performed. A touchup process performed in the past can be omitted, and a reduction in time can be achieved. It is unnecessary to prepare an imaging section having an autofocus function, an imaging section having a relatively large depth of field, or the like. It is possible to perform calibration using a relatively inexpensive imaging section.
In the above explanation, the predetermined position in the captured image of the imaging section 5 is explained as the imaging center. However, in the present disclosure, the predetermined position is not limited to this and may be any position in the captured image.
In the above explanation, as an example in which the first feature point and the tool center point TCP are associated, the case in which the first feature point and the tool center point TCP coincide is explained. However, in the present disclosure, the first feature point is not limited to this and may be set at any position of the end effector 20 other than the tool center point TCP. A positional relation between the control point CP and the tool center point TCP may be calculated using a second feature point and a third feature point explained below.
For example, an end effector 20 shown in
For example, when the position of the tool center point TCP of such an end effector 20 is grasped, the position can be grasped using the marker S2 and the marker S3. For example, steps S101 to S116 explained above are performed with the marker S2 set as a feature point and, thereafter, steps S101 to S116 explained above are performed with the marker S3 set as a feature point. Consequently, a coordinate of the marker S1 can be calculated based on a positional relation between the marker S2 and the control point CP and a positional relation between the marker S3 and the control point CP.
A posture of the end effector 20 can be calculated by calculating a positional relation between the marker S2 and the marker S3, that is, calculating a vector directed from any point on the marker S3 toward any point on the marker S2, and applying the vector to the marker S1, that is, the tool center point TCP.
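As a small numerical illustration of this posture calculation, the sketch below forms the unit vector directed from a point on the marker S3 toward a point on the marker S2 and attaches it at the marker S1, that is, the tool center point TCP. The marker coordinates are placeholder values, not values from the disclosure.

```python
import numpy as np

# Placeholder marker positions in robot coordinates (illustrative values only).
s1 = np.array([0.50, 0.00, 0.20])   # marker S1, i.e., the tool center point TCP
s2 = np.array([0.50, 0.00, 0.26])   # a point on the marker S2
s3 = np.array([0.50, 0.00, 0.32])   # a point on the marker S3

# Vector directed from the marker S3 toward the marker S2, normalized to unit length.
axis = (s2 - s3) / np.linalg.norm(s2 - s3)

# Applying the vector at the marker S1 gives a direction that characterizes the
# posture (orientation) of the end effector at the tool center point.
tool_axis_at_tcp = {"origin": s1, "direction": axis}
print(tool_axis_at_tcp)
```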
For example, when the end effector 20 shown in
The end effector 20 shown in
The end effector 20 shown in
The end effector 20 shown in
The calibration method according to the present disclosure is explained above based on the illustrated embodiment. However, the present disclosure is not limited to the embodiment. The steps of the calibration method can be replaced with any steps that can exert the same functions. Any steps may be added.
When the optical axis of the imaging section and the reference plane do not perpendicularly cross and the focal lengths differ between the time when the first vector is calculated and the time when the second vector is calculated, it is conceivable that the intersection of the first vector and the second vector deviates relatively greatly from the actual position of the tool center point. In this case, the detection accuracy of the position of the first feature point tends to decrease. In order to prevent or suppress this tendency, it is preferable to drive the robot arm to be in the third posture in which the focusing degrees of the imaging section coincide at the time when the first vector is calculated and at the time when the second vector is calculated. That is, it is preferable that the focal positions of the first feature point in the second state in the third step and of the first feature point in the third state in the sixth step coincide.
The decrease in the detection accuracy of the position of the first feature point may also be suppressed by driving the robot arm such that the pixel size of the first feature point is the same in step S103 in the first loop and in step S103 in the second loop.