The present invention relates to a control device, a robot, and a robot system.
In the related art, there is known a robot system including: a robot having a robot arm provided with an end effector that performs work with respect to a target and a camera attached to a tip end portion of the robot arm; and a control device that controls driving of the robot.
As an example of the robot system, in JP-A-2005-300230, a measuring device including: a robot including an arm; a tool attached to an arm tip end portion of the robot; and a camera installed on the periphery of the robot, is disclosed. In the measuring device, a position of the tool with respect to a tool attachment surface of the robot is measured by using the camera. In addition, in general, the measured position of the tool is used in calibration between a coordinate system of the camera and a coordinate system of the robot.
Here, in the measuring device described in JP-A-2005-300230, the camera is fixed at a location on the periphery of the robot. Therefore, when measuring the position of the tool by the measuring device or executing the calibration, there is a concern that the robot interferes with a peripheral device depending on the dispositional relationship between the robot and the peripheral device. As a result, there is a problem that it is not possible to accurately measure the position of the tool or to execute the calibration.
An advantage of some aspects of the invention is to solve at least a part of the problems described above, and the invention can be implemented as the following configurations.
A control device according to an aspect of the invention is a control device which controls a robot including a movable portion provided with a tool including a marker, including: an obtaining portion which obtains a first captured image obtained by capturing an image of the marker by a movable first image capturing portion that captures an image of the marker; and a control portion which performs first corresponding between a coordinate system of the first image capturing portion and a coordinate system of the robot based on the first captured image obtained by the obtaining portion after the first image capturing portion has moved.
In the control device according to the aspect of the invention, it is possible to perform the first corresponding (calibration) at a location to which the first image capturing portion has been moved and at which it does not interfere with the peripheral device or the like. Therefore, even in a relatively narrow region, it is possible to perform the first corresponding. In addition, since it is possible to perform the first corresponding in a state where the first image capturing portion is stopped after being moved, it is not necessary to consider a moving direction of the first image capturing portion. Therefore, the first corresponding between the coordinate system of the first image capturing portion and the coordinate system of the robot is easily performed.
In the control device according to the aspect of the invention, it is preferable that the control portion performs the first corresponding at a plurality of positions.
With this configuration, by performing the first corresponding every time the first image capturing portion is moved, it is possible to particularly improve the accuracy of the work of the robot at each location.
In the control device according to the aspect of the invention, it is preferable that the control portion performs the first corresponding at a first position, and controls driving of the robot by using the first corresponding at the first position, at a second position different from the first position.
With this configuration, since it is possible to acquire the first corresponding at the second position different from the first position based on data of the first corresponding at the first position, it is possible to save time and effort for performing the first corresponding at the second position, and it is also possible to improve the accuracy of the work of the robot at the second position similar to the work at the first position.
In the control device according to the aspect of the invention, it is preferable that 0.8 ≤ R1/R2 ≤ 1.2 is satisfied, where R1 is the repeating accuracy in movement of the first image capturing portion and R2 is the repeating accuracy in work of the robot.
By satisfying this relationship, it is possible to particularly improve the accuracy of the first corresponding at the plurality of positions based on the data of the first corresponding at, for example, one arbitrary position (first position). Therefore, it is possible to improve the accuracy of the work of the robot at the plurality of positions similar to the work at the arbitrary position (first position).
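As a purely illustrative sketch (not part of the embodiment; the function name is hypothetical), the preferred relationship above can be checked as follows:

```python
def accuracies_compatible(r1: float, r2: float) -> bool:
    """True when the repeating accuracy R1 of the first image capturing
    portion's movement and the repeating accuracy R2 of the robot's work
    satisfy the preferred condition 0.8 <= R1/R2 <= 1.2."""
    if r2 <= 0.0:
        raise ValueError("R2 must be positive")
    return 0.8 <= r1 / r2 <= 1.2
```

For example, if both accuracies are 0.01 mm the condition holds, while an image capturing portion three times less repeatable than the robot falls outside the preferred range.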
In the control device according to the aspect of the invention, it is preferable that, after the control portion performs second corresponding between a coordinate system of a second image capturing portion which captures an image of the marker and the coordinate system of the robot, the obtaining portion obtains a second captured image obtained by capturing an image of the marker by the second image capturing portion, and the control portion calculates a position of the marker in the coordinate system of the robot based on the second captured image obtained by the obtaining portion.
With this configuration, it is possible to easily and appropriately acquire the position of the marker with respect to a predetermined part (for example, a tool center point) of the robot, that is, the offset of the marker. Therefore, by using the offset of the marker, it is possible to appropriately perform the first corresponding.
In the control device according to the aspect of the invention, it is preferable that, after calculating the position of the marker in the coordinate system of the robot, the control portion calculates an offset between a predetermined part of the robot and the marker based on the position of the marker in the coordinate system of the robot, and performs the first corresponding based on the offset and the first captured image.
With this configuration, even when it is not possible to capture the predetermined part by the first image capturing portion, it is possible to appropriately perform the first corresponding based on the position of the marker and the offset.
In the control device according to the aspect of the invention, it is preferable that the marker is a transmitting portion having optical transmission properties.
With this configuration, for example, it is possible to clearly recognize an outline of the marker, to improve the image capturing accuracy of the first captured image, and to improve the measuring accuracy of the marker. Therefore, it is possible to perform the first corresponding with higher accuracy.
In the control device according to the aspect of the invention, it is preferable that the first image capturing portion is provided at a location different from the movable portion.
With this configuration, for example, it is possible to perform the first corresponding in the first image capturing portion provided on the periphery of the robot.
A robot according to an aspect of the invention includes: a movable portion which is controlled by the control device according to the aspect of the invention, and which is provided with a tool including a marker.
According to the robot, under the control of the control device, it is possible to accurately perform an operation related to the first corresponding.
A robot system according to an aspect of the invention includes: the control device according to the aspect of the invention; a robot which is controlled by the control device, and includes a movable portion provided with a tool including a marker; and a first image capturing portion having an image capturing function.
According to the robot system, it is possible to perform the first corresponding at a location to which the first image capturing portion has been moved and at which it does not interfere with the peripheral device or the like, and under the control of the control device, the robot can accurately perform the operation related to the first corresponding.
The invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.
Hereinafter, a control device, a robot, and a robot system according to the invention will be described in detail based on preferred embodiments illustrated in the attached drawings.
A robot system 100 illustrated in
As illustrated in
As illustrated in
As illustrated in
The base 110 illustrated in
As illustrated in
In addition, on the design, the suction portion 150 is attached such that the center axis of the suction portion 150 coincides with the third axis J3 of the spline shaft 103. Therefore, the suction portion 150 rotates in accordance with the rotation of the spline shaft 103. Here, as illustrated in
In addition, in the tip end portion of the robot arm 10, a jig 6 having a marker 61 used in calibration which will be described later is attached to be attachable and detachable. In addition, the jig 6 will be described when describing the calibration which will be described later.
In addition, as illustrated in
In addition, each of the driving portions 130 is electrically connected to a motor driver 120 embedded in the base 110 illustrated in
In the robot 1 having such a configuration, as illustrated in
In addition, in the robot 1, a tip end coordinate system which regards the tip end portion of the suction portion 150 as a reference is set. The tip end coordinate system is a three-dimensional rectangular coordinate system determined by an xa axis, a ya axis, and a za axis which are orthogonal to each other. In the embodiment, the tip end coordinate system regards the tool center point TCP as an origin. In addition, a state where the calibration between the base coordinate system and the tip end coordinate system has been finished, and the coordinates of the tip end coordinate system which regards the base coordinate system as a reference can be calculated, is achieved. In addition, a translational component with respect to the xa axis is referred to as “component xa”, a translational component with respect to the ya axis is referred to as “component ya”, a translational component with respect to the za axis is referred to as “component za”, a rotational component around the za axis is referred to as “component ua”, a rotational component around the ya axis is referred to as “component va”, and a rotational component around the xa axis is referred to as “component wa”. The unit of lengths (sizes) of the component xa, the component ya, and the component za is “mm”, and the unit of angles (sizes) of the component ua, the component va, and the component wa is “°”.
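The six components described above can be represented, as a purely illustrative sketch outside the embodiment itself, by a simple data structure; the type name is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class TipPose:
    """Position and posture in the tip end coordinate system:
    translational components in mm, rotational components in degrees."""
    xa: float  # translation along the xa axis (mm)
    ya: float  # translation along the ya axis (mm)
    za: float  # translation along the za axis (mm)
    ua: float  # rotation around the za axis (deg)
    va: float  # rotation around the ya axis (deg)
    wa: float  # rotation around the xa axis (deg)
```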
Above, the configuration of the robot 1 is briefly described. In the robot 1, as described above, the base 110 is attached to the ceiling portion 803, and the robot arm 10 is positioned further at a vertically lower part than the base 110 (refer to
In addition, although not being illustrated, the robot 1 may include, for example, a force detection portion configured of a force sensor (for example, 6-axis force sensor) that detects a force (including a moment) applied to the suction portion 150.
As illustrated in
The second image capturing portion 4 includes, for example, an image capturing device 41 configured of a charge coupled device (CCD) image sensor including a plurality of pixels, a lens 42 (optical system), and a coaxial episcopic illumination 43. The second image capturing portion 4 forms an image of the light reflected by an image capturing target on a light receiving surface (sensor surface) of the image capturing device 41 via the lens 42, converts the light into an electric signal, and outputs the electric signal to the control device 5. Here, the light receiving surface is a surface of the image capturing device 41 on which the light forms the image. In addition, in the embodiment, the illumination is not limited to the coaxial episcopic illumination 43, and for example, a transmitted illumination or the like may be employed. In addition, on the design, the second image capturing portion 4 is provided such that an optical axis A4 (optical axis of the lens 42) thereof is along the vertical direction.
The second image capturing portion 4 sets a two-dimensional rectangular coordinate system determined by an xc axis and a yc axis which are respectively parallel to an in-plane direction of a second captured image 40, as an image coordinate system (coordinate system of the second captured image 40 output from the second image capturing portion 4) of the second image capturing portion 4 (refer to
As illustrated in
The first image capturing portion 3 includes, for example, an image capturing device 31 configured of a CCD image sensor including a plurality of pixels, a lens 32 (optical system), and a coaxial episcopic illumination 33. The first image capturing portion 3 forms an image of the light reflected by an image capturing target on a light receiving surface (sensor surface) of the image capturing device 31 via the lens 32, converts the light into an electric signal, and outputs the electric signal to the control device 5. Here, the light receiving surface is a surface of the image capturing device 31 on which the light forms the image. In addition, in the embodiment, the illumination is not limited to the coaxial episcopic illumination 33, and for example, a transmitted illumination or the like may be employed. In addition, on the design, the first image capturing portion 3 is provided such that an optical axis A3 (optical axis of the lens 32) thereof is along the vertical direction.
The first image capturing portion 3 sets a two-dimensional rectangular coordinate system determined by an xb axis and a yb axis which are respectively parallel to an in-plane direction of a first captured image 30, as an image coordinate system (coordinate system of the first captured image 30 output from the first image capturing portion 3) of the first image capturing portion 3 (refer to
As illustrated in
Although not being illustrated, the moving mechanism 7 includes a driving source which generates power for moving the first image capturing portion 3, a power transmission mechanism which transmits the power of the driving source to the first image capturing portion 3, a support member which is connected to the power transmission mechanism and supports the first image capturing portion 3, and a rail which guides the movement of the support member along a predetermined moving direction based on the power transmitted to the power transmission mechanism. Examples of the driving source include a motor, such as a servo motor or a linear motor, a hydraulic cylinder, and a pneumatic cylinder. As the power transmission mechanism, for example, a mechanism including a combination of a belt, a gear, a rack, and a pinion, or a mechanism including a combination of a ball screw and a ball nut, may be used.
As illustrated in
In addition, as a specific configuration of the conveyor 81, any configuration may be employed as long as the target 800 can be transported, and for example, a belt conveyor, a roller conveyor, and a chain conveyor, may be used.
As illustrated in
The control device 5 illustrated in
Hereinafter, each function (functional portion) included in the control device 5 will be described.
As illustrated in
The display control portion 51 is configured of, for example, a graphic controller, and is connected to the display device 83. The display control portion 51 has a function of displaying various screens (for example, a screen for operation) in the monitor of the display device 83. In addition, the input control portion 52 is connected to the input device 84, and has a function of receiving an input from the input device 84.
The control portion 53 has a function of controlling the driving of the robot 1, the operation of the first image capturing portion 3, and the operation of the second image capturing portion 4, and has a function of performing processing, such as various types of computing and determination. The control portion 53 is configured of, for example, a processor like a CPU, and each function of the control portion 53 can be realized by executing various programs stored in the storage portion 55 by the CPU.
Specifically, the control portion 53 controls the driving of each of the driving portions 130, and drives or stops the robot arm 10. For example, the control portion 53 derives a target value of the motor (not illustrated) included in each of the driving portions 130 for moving the suction portion 150 to a target position based on information output from the position sensor 131 provided in each of the driving portions 130. In addition, the control portion 53 performs processing, such as various types of computing or various types of determination, based on the information from the position sensor 131, the first image capturing portion 3, and the second image capturing portion 4, which is obtained by the input/output portion 54. For example, the control portion 53 computes the coordinates (components xb, yb, and ub: position and posture) of the image capturing target in a first image coordinate system based on the first captured image 30 (refer to
In addition, in the embodiment, the control portion 53 does not have a function of controlling the driving of the moving mechanism 7, and the driving of the moving mechanism 7 is controlled by a moving mechanism control device configured of a PC or the like which is not illustrated; however, instead of the moving mechanism control device, the control portion 53 may have a function of controlling the driving of the moving mechanism 7. In addition, in the embodiment, the control portion 53 can control the operations of the first image capturing portion 3 and the second image capturing portion 4, but the operations may be controlled by an image capturing portion control device configured of a PC or the like which is not illustrated. The control device 5 may be capable of obtaining the information at least from the first image capturing portion 3 and the second image capturing portion 4.
The input/output portion 54 (obtaining portion) is configured of an interface circuit or the like, and has a function of exchanging information with the robot 1, the first image capturing portion 3, and the second image capturing portion 4. For example, the input/output portion 54 has a function of obtaining a rotation angle of a rotation axis of the motor or the decelerator included in each of the driving portions 130 of the robot 1, and information of the first captured image 30 and the second captured image 40. In addition, the input/output portion 54 has a function of obtaining information on the moving amount (moving amount of the first image capturing portion 3) of the moving mechanism 7. In addition, for example, the input/output portion 54 outputs the target value of the motor derived by the control portion 53 to the robot 1.
The storage portion 55 is configured of, for example, the RAM and the ROM, and stores a program for performing various types of processing by the control device 5 or various types of data. For example, in the storage portion 55, a program for executing the calibration or a moving amount or the like of each portion of the robot arm 10 for positioning the tool center point TCP of the robot arm 10 at a target location, is stored. In addition, the storage portion 55 is not limited to a portion (the RAM or the ROM) embedded in the control device 5, and may be configured to include a so-called external storage device (not illustrated).
In addition, as described above, the display device 83 includes the monitor (not illustrated), such as a display, and has a function of displaying, for example, the first captured image 30 and the second captured image 40. Therefore, an operator can confirm the first captured image 30 and the second captured image 40, or the work or the like of the robot 1, via the display device 83. In addition, as described above, the input device 84 is configured of, for example, a mouse or a keyboard. Therefore, the operator can give instructions for various types of processing to the control device 5 by operating the input device 84. In addition, instead of the display device 83 and the input device 84, a display input device (not illustrated) having the functions of both the display device 83 and the input device 84 may be used. As the display input device, for example, a touch panel or the like can be used.
Above, a basic configuration of the robot system 100 is briefly described. In the robot system, the work is performed in the robot 1 based on the first captured image 30 or the second captured image 40. Therefore, it is necessary to acquire a transformation matrix expression (correction parameter) which converts the first image coordinates (xb, yb, and ub) into the robot coordinates (xa, ya, and ua), and to acquire a transformation matrix expression (correction parameter) which converts the second image coordinates (xc, yc, and uc) into the robot coordinates (xa, ya, and ua). In other words, calibration (first corresponding) between the first image capturing portion 3 and the robot 1 and calibration (second corresponding) between the second image capturing portion 4 and the robot 1, are necessary. The calibration is automatically performed by the control device 5 based on the program for executing the calibration in accordance with the instruction by the operator.
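The transformation described above can be illustrated, purely as a sketch and not as the embodiment's actual implementation, by a homogeneous matrix that maps image coordinates into robot coordinates in the plane; the function name and the numeric values are assumptions for illustration:

```python
import numpy as np


def image_to_robot(T: np.ndarray, xb: float, yb: float):
    """Map image coordinates (xb, yb) to robot coordinates (xa, ya)
    using a homogeneous 3x3 correction matrix T (illustrative)."""
    xa, ya, _ = T @ np.array([xb, yb, 1.0])
    return float(xa), float(ya)


# Example: a correction matrix consisting of a pure translation of
# (10 mm, 20 mm); the values are arbitrary and for illustration only.
T = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
```

In practice the matrix would also encode rotation and scale between the image plane and the robot's work plane; here only the translational case is shown for brevity.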
Hereinafter, the calibration (various types of setting and execution for the calibration) will be described.
Before performing the calibration, the operator drives the robot arm 10, for example, by a so-called jog feeding (by a manual instruction via the display device 83 using the input device 84), and moves the robot arm 10 to a position at which the tool center point TCP can be captured by the second image capturing portion 4 (refer to
In addition, before performing the calibration, the control device 5 stores the information or the like of the number of pixels of the first image capturing portion 3 and the second image capturing portion 4, sets a speed and an acceleration (more specifically, for example, a moving speed and a moving acceleration of the suction portion 150) of the robot 1, and sets a local plane (work plane).
First, the control portion 53 performs the corresponding (second corresponding) between the second image coordinate system and the robot coordinate system. Accordingly, as described above, since a state where the corresponding between the robot coordinate system and the base coordinate system has been finished is achieved, it is possible to perform the corresponding between the second image coordinate system and the base coordinate system.
As illustrated in
In addition, when performing the second corresponding, for example, a circular marker 65 or the like which can be captured by the second image capturing portion 4 is provided at the tool center point TCP (refer to
First, the control portion 53 drives the robot arm 10, and moves the marker 65 positioned at the tool center point TCP to each of a plurality of arbitrary reference points 405 (virtual target points) arranged, for example, in a shape of a lattice, in a virtual reference plane 401 illustrated in
In addition, the number of reference points 405 is arbitrary as long as it is at least three, but the accuracy of the calibration is improved as the number of reference points 405 increases. In the embodiment, as illustrated in
In addition, in the embodiment, in order to further improve the accuracy of the calibration, steps S112, S113, and S114 are performed (refer to
Next, the control portion 53 drives the robot arm 10 (moves the spline shaft 103 vertically) to move the tool center point TCP in the za direction, and searches for a location at which an outline of the marker 65 projected to the second captured image 40 becomes the clearest (refer to
Here, the reference plane 401 acquired in step S112 should be perpendicular to the optical axis A4, but there is a case where the reference plane 401 is inclined from a state of being perpendicular to the optical axis A4 due to an error of a position of the marker 65 installed at the tool center point TCP or an installation position of the second image capturing portion 4. Therefore, in step S113, the control portion 53 sets (updates) the new reference plane 401 which is in a more completely perpendicular state.
Specifically, first, the control portion 53 acquires an inclination index H1 (components va and wa) of the current reference plane 401. Next, the control portion 53 rotates the reference plane 401 around the axis along the xb direction such that a difference between a distance d1 between the reference point 405 positioned at the center in
Next, the control portion 53 acquires an inclination index H2 (components va and wa) of the reference plane 401 within the predetermined threshold value range R1. In addition, the control portion 53 acquires an inclination index H3 (components va and wa) of the reference plane 401 within a predetermined threshold value range R2. Next, the control portion 53 acquires an inclination correction amount (Δva and Δwa) with respect to the reference plane 401 acquired in step S112 based on the inclination indexes H1, H2, and H3. In addition, the control portion 53 updates the reference plane 401 acquired in step S112 to the new reference plane 401 based on the inclination correction amount (Δva and Δwa). In addition, the control portion 53 sets the target value (the robot coordinates or the base coordinates) at the new reference point 405 based on the new reference plane 401.
By performing the high-accuracy inclination correction (step S113), it is possible to further improve the accuracy of the calibration.
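As a rough sketch of the idea behind the inclination correction, simplified here to a single axis (the formula and names are assumptions for illustration, not the embodiment's actual procedure), a tilt component of the reference plane can be estimated from the difference between the in-focus distances found at two reference points:

```python
import math


def inclination_deg(d1: float, d2: float, span: float) -> float:
    """Illustrative tilt estimate (degrees) of the reference plane from
    in-focus distances d1 and d2 (mm) measured at two reference points
    separated by `span` (mm) along one image axis."""
    return math.degrees(math.atan2(d2 - d1, span))
```

A zero difference between the two in-focus distances indicates no tilt about that axis; correcting by the estimated angle brings the plane closer to perpendicularity with the optical axis.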
The control portion 53 outputs the target value acquired in step S113 to the robot 1, drives the robot arm 10, and moves the marker 65 to each of the new reference points 405. At this time, every time the marker 65 is positioned with respect to one reference point 405, the control portion 53 captures the image of the marker 65 by the second image capturing portion 4, and the storage portion 55 stores the second image coordinates and the robot coordinates at each of the reference points 405. In addition, the control portion 53 acquires (updates) the correction parameter for converting the second image coordinates into the robot coordinates based on the second image coordinates (components xc and yc) and the robot coordinates (components xa and ya) of the tool center point TCP at each of the reference points 405 obtained from the plurality of second captured images 40. In addition, the correction parameter for converting the second image coordinates into the base coordinates is acquired (updated) based on the acquired correction parameter.
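The acquisition of a correction parameter from the stored coordinate pairs could be sketched, for example, as a least-squares affine fit over a 3 x 3 lattice of reference points. This is a hedged illustration only: the affine model, the function name, and the numeric values are assumptions, not the embodiment's actual computation.

```python
import numpy as np


def fit_correction_parameter(image_pts, robot_pts):
    """Least-squares fit of an affine map from second image coordinates
    (xc, yc) to robot coordinates (xa, ya); returns a 2x3 matrix [A | t]."""
    P = np.asarray(image_pts, dtype=float)
    Q = np.asarray(robot_pts, dtype=float)
    A = np.hstack([P, np.ones((len(P), 1))])   # rows [xc, yc, 1]
    M, *_ = np.linalg.lstsq(A, Q, rcond=None)  # shape (3, 2)
    return M.T                                 # shape (2, 3)


# Illustrative data: a 3 x 3 lattice of reference points in image
# coordinates, mapped by a known affine transform standing in for the
# robot coordinates of the marker measured at each reference point.
image_pts = [(i, j) for j in range(3) for i in range(3)]
robot_pts = [(2.0 * x + 5.0, -1.0 * y + 3.0) for x, y in image_pts]
M = fit_correction_parameter(image_pts, robot_pts)
```

With at least three non-collinear reference points the affine parameters are determined; additional points, as the text notes, improve accuracy by averaging out measurement noise.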
As described above, the calibration (second corresponding) between the second image capturing portion 4 and the robot 1 is finished. Accordingly, it is possible to acquire the position at the robot coordinates of the captured image projected to the second captured image 40. In addition, as described above, in the embodiment, by performing the focal point adjustment (step S112) or the high-accuracy inclination correction (step S113) for acquiring the inclination correction amount of the reference plane 401, it is possible to particularly improve the positional accuracy at the robot coordinates of the image capturing target projected to the second captured image 40.
Next, the control portion 53 attaches the jig 6 to the tip end of the robot arm 10, and calculates (measures) the robot coordinates of the marker 61 of the jig 6 by using the second image capturing portion 4 which has finished the calibration, and accordingly, the shift, that is, the offset, of the position of the marker 61 with respect to the tool center point TCP is acquired.
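The offset acquisition can be sketched as a simple vector difference in robot coordinates; this is a minimal illustration under the assumption that both positions are expressed in the same plane (components xa and ya), with a hypothetical function name:

```python
def marker_offset(marker_xy, tcp_xy):
    """Offset of the marker 61 with respect to the tool center point TCP,
    both given in robot coordinates (components xa, ya). Illustrative."""
    return (marker_xy[0] - tcp_xy[0], marker_xy[1] - tcp_xy[1])
```

For example, a marker measured at (105, 52) with the TCP at (100, 50) yields an offset of (5, 2) in the same units.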
Here, similar to the above-described second corresponding (step S11), in the corresponding (first corresponding) between the first image coordinate system and the robot coordinate system in step S13 which will be described later, originally, it is necessary to capture the tool center point TCP by the first image capturing portion 3. However, as illustrated in
Hereinafter, the jig 6 will be described in detail before describing step S12. The jig 6 illustrated in
As illustrated in
In the embodiment, the attaching portion 62 is provided in a right end portion (one end portion) in
The marker 61 is provided in the left end portion (end portion on the side opposite to the attaching portion 62) in
The beam 63 is provided along the longitudinal direction of the main body portion 60, and is provided along the edge on the lower side in
In the jig 6 having the above configuration, it is preferable that a light absorbing film, such as a black matte coating film, is provided at the part excluding the marker 61. Accordingly, reflection of light is suppressed, and it becomes easier to recognize the outline of the marker 61 under the coaxial episcopic illumination. The light absorbing film can be formed, for example, by using Raydent processing. In addition, the light absorbing film may not be provided at the entire part excluding the marker 61 of the jig 6; the above-described effects can be obtained as long as the film is provided at least at an outer circumferential portion of the marker 61.
By using the jig 6, step S12 is executed. Hereinafter, step S12 will be described.
First, the control portion 53 drives the robot arm 10 such that the marker 61 is positioned within the viewing field of the second image capturing portion 4 which has finished the calibration (refer to
In addition, the control portion 53 captures the image of the marker 61 by the second image capturing portion 4 when the marker 61 is positioned at the center O40, and the input/output portion 54 obtains the second captured image 40 obtained by capturing the image of the marker 61 (refer to
In this manner, by using the second image capturing portion 4 which has finished the calibration, it is possible to acquire the offset of the marker 61 used in the first corresponding. Therefore, it is possible to acquire the offset without excessively driving the robot arm 10. For example, as illustrated in
In addition, as described above, in the second image capturing portion 4, the coaxial episcopic illumination 43 is used (refer to
Next, the control portion 53 performs the corresponding (first corresponding) between the first image coordinate system and the robot coordinate system. Accordingly, as described above, since a state where the corresponding between the robot coordinate system and the base coordinate system has been finished is achieved, it is possible to perform the corresponding between the first image coordinate system and the base coordinate system. In addition, in the embodiment, as described above, since the first image capturing portion 3 is movable, the control portion 53 performs the first corresponding at a plurality of locations.
Hereinafter, the description will refer to the flowchart illustrated in
First, when performing the first corresponding, the first image capturing portion 3 is moved to a position (first position) illustrated in
Here, in the embodiment, for example, from the state illustrated in
In addition, the moving direction of the first image capturing portion 3 is not limited to the Z axis direction, and may be any direction in which interference with the peripheral device is unlikely to occur. Accordingly, even when the marker 61 is positioned within the viewing field of the first image capturing portion 3, it is possible to avoid interference (collision) of the jig 6 or the robot 1 with the peripheral device or the like.
Next, when the movement of the moving mechanism 7 is finished, the control portion 53 drives the robot arm 10 to position the marker 61 of the jig 6 within the viewing field of the first image capturing portion 3 at the first position. More specifically, the control portion 53 drives the robot arm 10 such that the marker 61 is projected (positioned) at a center O30 of the first captured image 30 (refer to
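Driving the robot arm so that the marker is projected at the center of the captured image can be sketched as a simple closed-loop jog, as follows; the image resolution, the pixel-to-millimetre gain, and the detection and motion callbacks are all hypothetical assumptions, not details of the embodiment.

```python
import numpy as np

IMAGE_CENTER = np.array([320.0, 240.0])  # assumed image center (pixels)
PIX_PER_MM = np.array([10.0, 10.0])      # assumed image scale (pixels per mm of arm motion)

def center_marker(detect_marker_px, move_arm_mm, tol_px=0.5, max_iter=20):
    """Iteratively jog the arm until the marker lands on the image center.

    detect_marker_px(): returns the current (u, v) of the marker in the image.
    move_arm_mm(dx, dy): jogs the arm by (dx, dy) millimetres.
    Returns True when the residual pixel error falls below tol_px.
    """
    for _ in range(max_iter):
        err = IMAGE_CENTER - np.asarray(detect_marker_px())
        if np.all(np.abs(err) < tol_px):
            return True
        # Convert the residual pixel error into an arm correction and apply it
        move_arm_mm(*(err / PIX_PER_MM))
    return False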
Next, the control portion 53 performs the first corresponding at the first position. Accordingly, at the first position, it is possible to acquire the position at the robot coordinates of the image capturing target projected to the first captured image 30.
In addition, in the first corresponding, similar to the above-described second corresponding (step S11), for example, the low-accuracy inclination correction, the focal point adjustment, the high-accuracy inclination correction, and the calibration execution are performed (refer to
Here, as described above, in step S131, since the first image capturing portion 3 is moved in a direction in which the robot 1 is unlikely to interfere with the peripheral device, the interference with the jig 6 can be avoided.
In addition, as described above, in the first corresponding at the first position, since the first image capturing portion 3 cannot capture the tool center point TCP unlike in the second corresponding, the jig 6 is used, and the marker 61 is captured instead of the tool center point TCP. Accordingly, it is possible to perform the first corresponding between the first image coordinate system and the tip end coordinate system by using the marker 61. In addition, since the offset of the marker 61 is acquired in step S12, the position of the tool center point TCP can be specified by subtracting the offset of the marker 61. Therefore, even when performing the calibration by using the marker 61, it is possible to perform the first corresponding.
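The subtraction of the offset described above can be sketched, under the simplifying assumption that orientation is ignored and positions are expressed in a common coordinate frame, as follows; the function name and argument layout are illustrative, not taken from the embodiment.

```python
import numpy as np

def tcp_from_marker(marker_pos_robot, marker_offset):
    """Recover the tool center point position from an observed marker position.

    marker_pos_robot: marker position measured in robot coordinates.
    marker_offset: marker position relative to the TCP (as acquired in step S12).
    Subtracting the offset yields the TCP position in robot coordinates.
    """
    return np.asarray(marker_pos_robot, dtype=float) - np.asarray(marker_offset, dtype=float)
```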
In addition, in the above-described step S12, the marker 61 is captured from the lower surface (one surface) side of the jig 6, whereas in step S13, the marker 61 is captured from the upper surface (other surface) side of the jig 6. This is because the first image capturing portion 3 can capture the vertically lower part while the second image capturing portion 4 can capture the vertically upper part. In this manner, even in a case where the image capturing directions of the second image capturing portion 4 and the first image capturing portion 3 are opposite to each other, it is possible to grasp the marker 61, or a wall portion (edge portion) that forms the hole of the marker 61, from both main surfaces of the jig 6. Thus, it is possible to appropriately perform the first corresponding at the first position by using the offset acquired in the above-described step S12. In particular, since the marker 61 is configured of a hole, the positional shift of the marker between the two main surfaces can be reduced. This configuration is also preferable since the hole is easy to form.
In addition, as described above, even in the first image capturing portion 3, similar to the second image capturing portion 4, the coaxial episcopic illumination 33 is used (refer to
Next, the control portion 53 performs (acquires), for example, the first corresponding at a position (second position) of the first image capturing portion 3 illustrated in
Furthermore, similarly, the control portion 53 performs the first corresponding at an arbitrary location different from the first position and the second position. In this manner, it is possible to perform the first corresponding at a plurality of locations without actually executing the calibration at each location.
Accordingly, the first corresponding (step S13) is finished.
As described above, the control device 5 which is one example of the control device according to the invention controls the robot 1 including the robot arm 10 that functions as “movable portion” provided with the jig 6 that functions as “tool” including the marker 61. In addition, the control device 5 includes: the input/output portion 54 that functions as “obtaining portion” which obtains the first captured image 30 (image data) obtained by capturing the image of the marker 61 by the movable first image capturing portion 3 that captures an image of the marker 61; and the control portion 53 which performs the first corresponding (step S13) between the coordinate system (first image coordinate system) of the first image capturing portion 3 and the coordinate system (tip end coordinate system) of the robot 1 based on the first captured image 30 obtained by the input/output portion 54 after the first image capturing portion 3 has moved. According to the control device 5, it is possible to perform the first corresponding (calibration) at a location at which the first image capturing portion 3 is moved and does not interfere with the peripheral device or the like. Therefore, since it is possible to perform the first corresponding even in a relatively narrow region, it is possible to reduce the work space S of the robot 1. In addition, since it is possible to perform the first corresponding in a state where the first image capturing portion 3 is stopped after being moved, it is not necessary to consider the moving direction of the first image capturing portion 3. Therefore, the first corresponding between the first image coordinate system and the tip end coordinate system is easy. In addition, by providing the jig 6 including the marker 61, it is possible to perform the first corresponding between the first image coordinate system and the tip end coordinate system by using the marker 61 instead of the tool center point TCP. 
In addition, the jig 6 has a part that protrudes to the outer side from the splicing shaft 103 when viewed from the direction along the third axis J3. Therefore, even in a case where it is not possible to capture the predetermined part (tool center point TCP in the embodiment) of the robot 1 by the first image capturing portion 3, it is possible to perform the first corresponding by obtaining the first captured image 30 obtained by capturing the image of the marker 61.
In addition, “robot coordinate system” is regarded as the tip end coordinate system in the embodiment, but may be regarded as the base coordinate system of the robot 1, or may be regarded as a coordinate system of a predetermined portion of the robot 1 other than the tip end coordinate system. In addition, “tool” is not limited to the jig 6, and may be other configurations as long as “marker” can be captured by the first image capturing portion 3.
In addition, as described above, the first image capturing portion 3 is provided at a location different from the robot arm 10 that functions as “movable portion”. Accordingly, it is possible to perform the first corresponding for the first image capturing portion 3 provided on the periphery of the robot 1. Therefore, the work based on the first captured image 30 captured by the first image capturing portion 3 that has finished the first corresponding, for example, the work on the conveyor 81, can be appropriately performed. In addition, as the location different from the robot arm 10, for example, the base 110 or the like may be employed.
In addition, as described above, after the control portion 53 performs the second corresponding (step S11) between the coordinate system (second image coordinate system) of the second image capturing portion 4 obtained by capturing the image of the marker 61 and the coordinate system (tip end coordinate system) of the robot 1, the input/output portion 54 that functions as “obtaining portion” obtains the second captured image 40 (image data) obtained by capturing the image of the marker 61 by the second image capturing portion 4, and the control portion 53 calculates the position of the marker 61 in the coordinate system (tip end coordinate system) of the robot 1 based on the second captured image 40 obtained by the input/output portion 54 (step S12). Accordingly, it is possible to easily and appropriately acquire the position of the marker 61 with respect to the predetermined part (tool center point TCP in the embodiment) of the robot 1, that is, the offset of the marker 61. Therefore, by using the offset of the marker 61, it is possible to appropriately perform the first corresponding.
In addition, as described above, the control portion 53 calculates the offset between the predetermined part (tool center point TCP in the embodiment) of the robot 1 and the marker 61 based on the position of the marker 61 in the tip end coordinate system after calculating the position of the marker 61 in the coordinate system (tip end coordinate system) of the robot 1 (step S12), and performs the first corresponding based on the offset and the first captured image 30 (step S13). Accordingly, as illustrated in
In addition, as described above, in step S13, the control portion 53 performs the first corresponding at the first position, and controls the driving of the robot 1 at the second position different from the first position by using the first corresponding at the first position. In step S13, since it is possible to acquire the first corresponding at the second position based on the data of the first corresponding at the first position, it is possible to save the time and effort of performing the first corresponding at the second position, and to improve the accuracy of the work of the robot 1 at the second position in the same manner as the work at the first position. Furthermore, as described above, it is possible to perform the first corresponding at another position different from the first position and the second position based on the data of the first corresponding at the first position. Therefore, it is possible to perform the same action with one first image capturing portion 3 as in a case where a plurality of first image capturing portions 3 are provided. As a result, at the plurality of locations, it is possible to appropriately perform the work of the robot 1 based on the first captured image 30. In addition, in this manner, by performing the first corresponding at the plurality of locations based on the calibration result at the first position, it is possible to finish the first corresponding of the first image capturing portion 3 even for a location at which interference with the peripheral device easily occurs, and thus, it is possible to avoid the concern that the peripheral device is interfered with when the calibration is actually executed there.
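One hypothetical way to derive the corresponding at the second position from the data at the first position, assuming the moving mechanism displaces the image capturing portion by a known pure translation (no rotation or scale change), is to compose that translation with the calibrated map; this is an illustrative sketch, not the method of the embodiment.

```python
import numpy as np

def transform_at_new_position(T_first, camera_shift_xy):
    """Derive an image-to-robot map at a second camera position.

    T_first: 3x3 homogeneous image-to-robot map calibrated at the first
    position. camera_shift_xy: (dx, dy) translation of the camera by the
    moving mechanism, expressed in robot coordinates. Assumes the move is
    a pure translation, so the new map is the old one shifted by (dx, dy).
    """
    dx, dy = camera_shift_xy
    shift = np.array([[1.0, 0.0, dx],
                      [0.0, 1.0, dy],
                      [0.0, 0.0, 1.0]])
    return shift @ T_first
```

If the moving mechanism also rotated the camera, a full rigid transform of the mechanism would have to be composed instead of a translation alone.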
In addition, in the control device 5, it is preferable that 0.8 ≤ R1/R2 ≤ 1.2, when the repeating accuracy in the movement of the first image capturing portion 3 is R1 and the repeating accuracy in the work of the robot 1 is R2.
By satisfying the relationship, it is possible to particularly improve the accuracy of the first corresponding at the plurality of positions based on the data of the first corresponding at one arbitrary position (first position). Therefore, it is possible to improve the accuracy of the work of the robot 1 at the plurality of positions similar to the work at the arbitrary position (first position).
Here, the repeating accuracy in the movement of the first image capturing portion 3 is the movement accuracy of the moving mechanism 7, and indicates how much positional shift is generated when the first image capturing portion 3 is repeatedly positioned at the same location. In addition, the repeating accuracy in the work of the robot 1 is the repeating accuracy when the same work contents are performed at the same location. For example, in the embodiment, it indicates how much positional shift of the other targets is generated with respect to the target 800 when the other targets (not illustrated) are mounted on (adhered to) the target 800 on the conveyor 81.
The repeating accuracy in the movement of the first image capturing portion 3 is, for example, preferably 5 to 50 μm, and more preferably 10 to 20 μm. The repeating accuracy in the work of the robot 1 is also, for example, preferably 5 to 50 μm, and more preferably 10 to 20 μm. When the repeating accuracies are set in this manner, it is possible to set the comprehensive accuracy of the robot system 100, which reflects the movement of the first image capturing portion 3, the work of the robot 1, and other factors (for example, the calibration accuracy and the image recognizing accuracy of the first image capturing portion 3 or the second image capturing portion 4), to a relatively high accuracy. Specifically, the comprehensive accuracy can be 10 to 40 μm.
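The preferable condition above can be expressed as a simple check; the function name and micrometre units are illustrative choices for this sketch.

```python
def accuracy_ratio_ok(r1_um, r2_um, lo=0.8, hi=1.2):
    """Check the preferable condition 0.8 <= R1/R2 <= 1.2.

    r1_um: repeating accuracy R1 of the image capturing portion's movement (um).
    r2_um: repeating accuracy R2 of the robot's work (um).
    """
    return lo <= r1_um / r2_um <= hi
```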
Above, the configuration of the robot 1 is briefly described. The robot 1 which is one example of the robot according to the invention is controlled by the control device 5, and includes the robot arm 10 that functions as “movable portion” provided with the jig 6 that functions as “tool” including the marker 61. According to the robot 1, under the control of the control device 5, it is possible to accurately perform the operation related to the first corresponding.
The robot system 100 which is one example of the robot system according to the invention described above includes the control device 5; the robot 1 which is controlled by the control device 5, and includes the robot arm 10 that functions as “movable portion” provided with the jig 6 that functions as “tool” including the marker 61; and the first image capturing portion 3 having an image capturing function. According to the robot system 100, it is possible to perform the first corresponding at a location at which the first image capturing portion 3 is moved and does not interfere with the peripheral device or the like, and under the control of the control device 5, the robot 1 can accurately perform the operation related to the first corresponding.
Next, a second embodiment of the invention will be described.
The robot system according to the embodiment is similar to the above-described first embodiment except that step S13 of the first corresponding is different. In addition, in the following description, regarding the second embodiment, differences from the above-described first embodiment will be focused in the description, and the description of similar contents will be omitted.
When the execution of the calibration at the first position is finished (step S133), the first image capturing portion 3 is moved to the second position. In addition, in the embodiment, due to the disposition of the peripheral device and the like, the robot 1 does not interfere with the peripheral device even at the second position.
Next, when the movement of the moving mechanism 7 is finished, the control portion 53 drives the robot arm 10 to position the marker 61 of the jig 6 within the viewing field of the first image capturing portion 3 at the second position.
Next, the control portion 53 performs the first corresponding at the second position. Accordingly, even at the second position, it is possible to acquire the position at the robot coordinates of the image capturing target projected to the first captured image 30.
In this manner, in the embodiment, the first corresponding is performed at the first position and at the second position. In other words, the control portion 53 performs the first corresponding at the plurality of positions. In this manner, by actually executing the first corresponding every time the first image capturing portion 3 is moved, it is possible to particularly improve the accuracy of the first corresponding at each of the locations. Therefore, it is possible to particularly improve the accuracy of the work of the robot 1. In addition, even by this method, it is possible to perform the same action with one first image capturing portion 3 as in a case where a plurality of first image capturing portions 3 are provided. As a result, at the plurality of locations, it is possible to appropriately perform the work of the robot 1 based on the first captured image 30.
Above, the control device, the robot, and the robot system according to the invention are described based on the embodiments illustrated in the drawings, but the invention is not limited thereto, and the configurations of each portion can be replaced with arbitrary configurations having similar functions. In addition, in the invention, other arbitrary configurations may be added. In addition, each of the embodiments may be appropriately combined with each other.
In addition, the robot according to the invention may have any configuration that includes the movable portion (for example, the robot arm) which is rotatable with respect to an arbitrary member (for example, the base) and to which the tool with the marker can be attached, and is not limited to the aspect of the robot illustrated in the drawings. For example, the robot according to the invention may be a selective compliance assembly robot arm (SCARA) robot.
In addition, the number of robot arms is not particularly limited, and may be two or more. In addition, the number of rotation axes of the robot arm is not particularly limited, and is arbitrary.
In addition, the installation location of the robot is not limited to a ceiling portion of the cell. For example, according to the image capturing direction of the first image capturing portion, the robot may be attached to the upper surface of the bottom portion or a pillar.
In addition, the robot system according to the invention may not include the cell. In this case, the installation location of the robot may be an arbitrary location (on the floor, the wall, the ceiling, a movable cart, or the like).
In addition, the robot system according to the invention may not include a conveyor. In addition, the robot system according to the invention may not include the work portion.
The entire disclosure of Japanese Patent Application No. 2016-239107, filed Dec. 9, 2016 is expressly incorporated by reference herein.
Number | Date | Country | Kind |
---|---|---|---|
2016-239107 | Dec 2016 | JP | national |