1. Field of the Invention
The present invention relates to a method of operating an industrial robot to move a distal end portion of a robot arm to a specified position, and also to a robot capable of performing such motion.
2. Description of Related Art
When moving a robot in accordance with a manual operation by an operator, the operator generally uses a teach pendant either to move the individual axes (articulations) of the robot or to move the robot along the coordinate axes of a rectangular coordinate system. In the former operation, where each specified articulation axis of the robot is moved in a positive or negative direction, the resultant robot motion varies depending on which axes are specified, since each axis performs a rotary or translational motion determined by the robot mechanism or structure. In the latter type of manual operation, the robot is operated so that the robot tool center point (TCP) is moved in the positive or negative direction of each specified coordinate axis of a rectangular XYZ coordinate system defined in the robot working space, or the TCP is rotated in the positive or negative direction around an axis passing through the TCP.
When manually moving a robot in a real space, an operator usually wishes to move the robot in an arbitrary direction. To move the robot in the intended direction with the aforesaid conventional manual operation method, the operator must work out a combination of several pendant-operated motions that together realize the required robot motion, while keeping in mind the relationship between the intended motion direction and the motion directions achievable by teach pendant operations. For simplicity, it is assumed here that the robot is to be moved in a real space exactly midway between the positive X and Y directions (i.e., in a direction inclined at an angle of 45 degrees to both the X and Y axes) on a plane of constant Z-axis coordinate value. In this case, the operator performs a short operation to move the robot slightly in the positive X axis direction, and then performs an operation to move the robot in the positive Y axis direction by an amount equivalent to the preceding X axis motion. The operator then repeats these operations alternately to realize the intended robot motion, so that a so-called zigzag motion results. Even this simple case requires such operations; achieving a robot motion in an arbitrary direction therefore demands still more difficult operations requiring skill. Furthermore, the operator can easily mistake the direction (positive or negative) in which the robot is to be moved, and may therefore erroneously move the robot in an unintended direction, resulting in danger. In most cases the robot is moved toward a workpiece, so a collision between the robot and the workpiece is liable to occur. This makes manual robot operation all the more difficult.
The present invention provides a robot capable of automatically moving a distal end portion of a robot arm to an arbitrary target position in accordance with a demand of an operator, and a method of operating the robot to perform such motion. The robot of the present invention has a camera mounted at a distal end portion of a robot arm.
According to a first aspect of the present invention, the robot comprises: means for positioning the distal end portion of the robot arm with the camera at a first position on a plane spaced from an object by a predetermined first distance; means for displaying an image captured by the camera at the first position on a display device; means for allowing a manual operation to specify an arbitrary point on the object in the captured image displayed on the display device; means for obtaining position information of the specified point in the captured image; means for determining a direction/amount of motion of the camera to a second position where the camera confronts the specified point on the object with a predetermined second distance in between, based on the obtained position information and the predetermined first distance; and means for moving the distal end portion of the robot arm with the camera to the second position in accordance with the determined direction/amount of motion.
According to a second aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a first manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for determining a first direction/amount of motion based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction/amount of motion; means for allowing a second manual operation to specify the same point on the object as specified by the first manual operation, in a second image captured by the camera at the second position and displayed on the display device; means for obtaining second position information of the specified point in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a third aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a first manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for determining a first direction of motion based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction of motion and a preset amount of motion; means for allowing a second manual operation to specify the same point on the object as specified by the first manual operation, in a second image captured by the camera at the second position and displayed on the display device; means for obtaining second position information of the specified point on the object in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a fourth aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a first manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for moving the distal end portion of the robot arm with the camera to a second position according to a preset first direction/amount of motion; means for allowing a second manual operation to specify the same point on the object as specified by the first manual operation, in a second image captured by the camera at the second position and displayed on the display device; means for obtaining second position information of the specified point on the object in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a fifth aspect of the present invention, the robot comprises: means for detecting an object in a first image captured by the camera at a first position; means for obtaining first position information of the detected object in the first image; means for determining a first direction/amount of motion of the camera based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction/amount of motion; means for detecting the same object as the detected object, in a second image captured by the camera at the second position; means for obtaining second position information of the detected object in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position based on the determined second direction/amount of motion.
According to a sixth aspect of the present invention, the robot comprises: means for detecting an object in a first image captured by the camera at a first position; means for obtaining first position information of the detected object in the first image; means for determining a first direction of motion based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction of motion and a preset amount of motion; means for detecting the same object as the detected object, in a second image captured by the camera at the second position; means for obtaining second position information of the detected object in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a seventh aspect of the present invention, the robot comprises: means for detecting an object in a first image captured by the camera at a first position; means for obtaining first position information of the detected object in the first image; means for moving the distal end portion of the robot arm with the camera to a second position according to a preset first direction/amount of motion; means for detecting the same object as the detected object, in a second image captured by the camera at the second position; means for obtaining second position information of the detected object in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to an eighth aspect of the present invention, the robot comprises: means for detecting an object in a first image captured by the camera at a first position; means for obtaining first size information of the detected object in the first image; means for determining a first amount of motion based on the first size information; means for moving the distal end portion of the robot arm to a second position according to a preset direction of motion and the determined first amount of motion; means for detecting the same object as the detected object, in a second image captured by the camera at the second position; means for obtaining second size information and position information of the detected object in the second image; means for determining a second direction/amount of motion based on the first size information, the second size information and the position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a ninth aspect of the present invention, the robot comprises: means for detecting an object in a first image captured by the camera at a first position; means for obtaining first size information of the detected object in the first image; means for moving the distal end portion of the robot arm with the camera to a second position according to a preset first direction/amount of motion; means for detecting the same object as the detected object, in a second image captured by the camera at the second position; means for obtaining second size information and position information of the detected object in the second image; means for determining a second direction/amount of motion of the camera based on the first size information, the second size information and the position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a tenth aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for creating an image model based on image information in the vicinity of the specified point in the first image; means for determining a first direction/amount of motion based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction/amount of motion; means for detecting the same point as the specified point, in a second image captured by the camera at the second position using the image model; means for obtaining second position information of the detected point in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to an eleventh aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for creating an image model based on image information in the vicinity of the specified point in the first image; means for determining a first direction of motion based on the first position information; means for moving the distal end portion of the robot arm with the camera to a second position according to the determined first direction of motion and a preset amount of motion; means for detecting the same point as the specified point, in a second image captured by the camera at the second position using the image model; means for obtaining second position information of the detected point in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
According to a twelfth aspect of the present invention, the robot comprises: means for displaying an image captured by the camera on a display device; means for allowing a manual operation to specify an arbitrary point on an object in a first image captured by the camera at a first position and displayed on the display device; means for obtaining first position information of the specified point in the first image; means for creating an image model based on image information in the vicinity of the specified point in the first image; means for moving the distal end portion of the robot arm with the camera to a second position according to a preset first direction/amount of motion; means for detecting the same point as the specified point, in a second image captured by the camera at the second position using the image model; means for obtaining second position information of the detected point in the second image; means for determining a second direction/amount of motion based on the first position information and the second position information; and means for moving the distal end portion of the robot arm with the camera to a third position according to the determined second direction/amount of motion.
The means for determining the second direction/amount of motion may determine the second direction/amount of motion for the third position where the specified point on the object is on an optical axis of the camera and spaced apart from the camera by a predetermined distance. Further, the means for determining the second direction/amount of motion may determine the second direction/amount of motion such that an end of a tool attached to the distal end portion of the robot arm is positioned at the specified point on the object.
The present invention also provides a method of operating a robot carried out by the foregoing robot.
With the present invention, a robot can automatically operate to establish a predetermined relation between an object and a distal end portion of a robot arm by simply specifying a target on the object in an image captured by the camera, whereby an operation for moving the distal end portion of the robot arm relative to the object can be carried out very easily and safely.
A system program for performing basic functions of the robot and robot controller is stored in the ROM of the memory 12. A program for robot operation that varies depending on application is taught beforehand and stored in the non-volatile memory of the memory 12, together with relevant preset data.
The servo control unit 15 comprises servo controllers #1 to #n (where n indicates the total number of robot axes, or the sum of this number plus the number of movable axes of a tool attached to the wrist of the robot where required). Each of the servo controllers #1-#n is constituted by a processor, ROM, RAM, etc., and arranged to carry out a position/speed loop control and a current loop control for a corresponding axis-servomotor. In other words, each controller is comprised of a so-called digital servo controller for implementing software-based loop controls of position, speed, and current. Outputs of the servo controllers #1-#n are delivered through servo amplifiers A1-An to axis-servomotors M1-Mn, whereby these servomotors are drivingly controlled. Although not shown, the servomotors M1-Mn are provided with position/speed detectors for individually detecting the positions/speeds of the servomotors, so that the positions/speeds of the servomotors are fed back to the servo controllers #1-#n. Further, sensors provided in the robot as well as actuators and sensors of peripheral equipment are connected to the interface 16 for external devices.
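The per-axis digital servo control described above can be outlined in code. The following is a minimal, illustrative Python sketch of a cascaded position/speed loop; the class name, gains, and sampling period are assumptions for illustration, not values from this description (the current loop and motor model are omitted):

```python
# Minimal sketch of the software position/speed loop carried out by each
# digital servo controller #1-#n for its axis-servomotor.  Gains and the
# sampling period dt are illustrative assumptions.

class AxisServo:
    def __init__(self, kp_pos=10.0, kp_vel=5.0, ki_vel=1.0, dt=0.001):
        self.kp_pos, self.kp_vel, self.ki_vel, self.dt = kp_pos, kp_vel, ki_vel, dt
        self.vel_integral = 0.0

    def step(self, pos_cmd, pos_fb, vel_fb):
        """One control cycle: the position loop (P) commands a velocity;
        the velocity loop (PI) commands a current for the servomotor,
        using the position/speed feedback from the axis detector."""
        vel_cmd = self.kp_pos * (pos_cmd - pos_fb)
        vel_err = vel_cmd - vel_fb
        self.vel_integral += vel_err * self.dt
        return self.kp_vel * vel_err + self.ki_vel * self.vel_integral
```

Each cycle the returned current command would be passed through the servo amplifier to the axis-servomotor.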
Since there is a relation of f:L0=Y0:W0 (where f denotes a lens focal length and Y0 denotes a length of N0 pixels) in
Hereinafter, the distance L0 used in the calibration will be used as a known value.
When a point R, corresponding to the arbitrary target Q on the object 5, is specified in the image, the following formulae (3) and (4) can be derived:
where N denotes the number of pixels between the specified point R and the image screen center.
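Formulae (3)-(5) are not reproduced in this text; under the stated proportion f:L0 = Y0:W0 they amount to a constant per-pixel scale at the calibration distance. The following Python sketch expresses that pinhole-model relation; the function names and numeric values are illustrative assumptions:

```python
# Per-pixel scale factor from camera calibration at distance L0.
# Assumption (pinhole model): an image length of N0 pixels corresponds to
# a length W0 on an object plane at the calibration distance L0.

def pixel_scale(W0: float, N0: int) -> float:
    """C0: object-plane length per pixel at the calibration distance L0."""
    return W0 / N0

def lateral_offset(C0: float, N: int) -> float:
    """W: lateral distance (at distance L0) of a point observed N pixels
    from the image screen center."""
    return C0 * N

C0 = pixel_scale(W0=100.0, N0=200)   # 0.5 mm per pixel (illustrative)
W = lateral_offset(C0, N=60)         # 30.0 mm from the optical axis
```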
FIGS. 6a and 6b are views for explaining the operational principle of a first embodiment of this invention, which is embodied by using the structure shown in
In
The following formulae are satisfied:
Thus, the motion vector q is determined from the following formula (7):
As described above, once the number N1 of pixels between the image center and the commanded target Q in the image has been determined, the motion vector q can be determined from the predetermined distance L1 between the object 5 and the camera 2a and the calibration data L0. Then, by moving the camera 2a by the motion vector q, the camera 2a can be positioned at the position spaced from the target Q by the distance L0, with the center of the lens 3 opposed to the specified target Q.
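Since formula (7) itself is not reproduced in this text, the following Python sketch is a reconstruction of the first-embodiment calculation under the pinhole model above: W1 = C0·N1 is the target's offset referred to the calibration distance L0, and the known distance L1 scales it to the true lateral offset (names and numeric values are illustrative assumptions):

```python
def motion_vector_known_depth(W1, L1, L0):
    """Sketch of formula (7): with the object-to-camera distance L1 known,
    move the camera laterally over the target Q, then advance so that the
    camera faces Q at the calibration distance L0."""
    lateral = W1 * L1 / L0   # similar triangles: offset grows with depth
    along_axis = L1 - L0     # close the gap down to the calibration distance
    return (lateral, along_axis)

q = motion_vector_known_depth(W1=30.0, L1=400.0, L0=200.0)  # (60.0, 200.0)
```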
In the above described first embodiment where the distance L1 between the camera 2a and the object 5 is known, the camera 2a is positioned at the position spaced from the object 5 by the predetermined distance L1, and then the camera 2a is automatically moved to a position where the camera is opposed to the specified target Q. Next, a second embodiment will be explained with reference to
In
W1=C0·N1 (8)
Next, the camera 2a is moved by the distance W1 along a line parallel to the straight line connecting the target Q and the point at which the optical axis crosses the object 5. That is, in this example, the camera 2a is moved by the distance W1 in the positive X axis direction of the reference coordinate system Σc for the camera 2a. (In the case where the target Q of the object 5 lies on an XY axis plane, the center of the camera 2a is moved by the distance W1 along the straight line connecting the target Q and the point at which the optical axis crosses the object.) Actually, the camera is moved by the robot.
In accordance with the following formula (10) derived from formula (9), the distance L1 between the camera 2a and the object 5 is determined.
A view line vector p for the state shown in
As understood from above, a motion vector q is calculated in accordance with the following formula (12):
where T denotes transposition.
By moving the camera 2a according to the thus determined motion vector q, the camera 2a is so positioned that the target is viewed at the center of the camera.
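Formulae (9)-(12) are not reproduced in this text; the following Python sketch is one reconstruction of the second-embodiment depth recovery, consistent with W1 = C0·N1 under the pinhole model (names and numeric values are illustrative assumptions):

```python
def depth_from_baseline(N1, N2, L0):
    """Sketch of formula (10): after translating the camera laterally by
    W1 = C0*N1, the target's pixel offset falls from N1 to N2; similar
    triangles then give the unknown object distance L1."""
    return L0 * N1 / (N1 - N2)

def second_motion_vector(C0, N2, L1, L0):
    """Sketch of formula (12): remaining lateral offset at the second
    position, plus the approach along the optical axis down to L0."""
    return (C0 * N2 * L1 / L0, L1 - L0)

L1 = depth_from_baseline(N1=60, N2=30, L0=200.0)          # 400.0
q = second_motion_vector(C0=0.5, N2=30, L1=L1, L0=200.0)  # (30.0, 200.0)
```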
In the above-described second embodiment, the amount of motion by which the camera 2a is initially to be moved is determined by the calculation of formula (8); however, this amount of motion may instead be a predetermined amount.
FIGS. 8a and 8b are views for explaining a third embodiment in which the camera is moved by such a predetermined amount L2. In
W1=C0·N1 (13)
Next, the camera 2a is moved by the prespecified distance L2 along a line parallel to the straight line connecting the target Q and the point at which the optical axis crosses the object 5. Actually, the camera 2a is moved by the robot 1.
From
From formulae (13), (14), and (15), the following formula (16) for determining the distance L1 is derived.
A view line vector p in the state shown in
From above, a motion vector q is calculated as shown below:
Therefore, by moving the camera 2a according to the motion vector q, the center of the lens of the camera 2a can be opposed to the target Q.
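Formula (16) is not reproduced in this text; the following Python sketch is a reconstruction of the third-embodiment depth recovery under the same pinhole model, with the preset lateral baseline L2 in place of W1 (names and numeric values are illustrative assumptions):

```python
def depth_from_preset_baseline(C0, N1, N2, L2, L0):
    """Sketch of formula (16): the camera is translated laterally by a
    preset distance L2 instead of W1; the target's pixel offset falls
    from N1 to N2, which fixes the object distance L1."""
    return L2 * L0 / (C0 * (N1 - N2))

L1 = depth_from_preset_baseline(C0=0.5, N1=60, N2=30, L2=15.0, L0=200.0)
# 15*200/(0.5*30) = 200.0
```

The subsequent motion vector is then formed exactly as in the second embodiment, from the residual pixel offset N2 and the recovered L1.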
In the above-described first to third embodiments, the camera 2a is initially moved in parallel to a surface of the object 5 (i.e., in parallel to the photodetector). However, such motion may instead be made in the optical axis direction.
FIGS. 9a and 9b are views for explaining a fourth embodiment of this invention, in which the camera is moved in the optical axis direction. In
W1=C0·N1 (19)
Next, the camera 2a is moved by a prespecified distance L2 toward the target Q in the direction perpendicular to the photodetector of the camera. Actually, the camera 2a is moved by the robot 1.
Then, a distance L1 is determined in accordance with the following formula (21) derived from formula (20).
A view line vector p in the state of
From above, the motion vector q is calculated as follows:
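Formulae (21) and (23) are not reproduced in this text; the following Python sketch is a reconstruction of the fourth-embodiment calculation under the same pinhole model, where the camera advances along the optical axis by the preset L2 (names and numeric values are illustrative assumptions):

```python
def depth_from_axial_move(N1, N2, L2):
    """Sketch of formula (21): advancing by a preset L2 along the optical
    axis makes the target's fixed lateral offset subtend more pixels
    (N2 > N1), which fixes the original object distance L1."""
    return N2 * L2 / (N2 - N1)

def fourth_motion_vector(C0, N2, L1, L2, L0):
    """Sketch of formula (23): from the advanced position (distance L1-L2),
    move laterally over the target and close the gap to L0."""
    d = L1 - L2
    return (C0 * N2 * d / L0, d - L0)

L1 = depth_from_axial_move(N1=30, N2=60, L2=200.0)   # 400.0
q = fourth_motion_vector(C0=0.5, N2=60, L1=L1, L2=200.0, L0=200.0)
```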
In the foregoing first through fourth embodiments, methods have been explained in which the target Q on the object 5 is specified in an image. On the other hand, in a case where a shape of the target Q is previously known, an image model of the target Q may be taught beforehand, and image processing such as pattern matching may be performed to automatically detect the target Q.
Furthermore, the camera 2a may be moved to the vicinity of the target Q by using the image model in combination with size information of the target Q. Referring to
A distance L1 is determined in accordance with the following formula (25) derived from formula (24).
Assuming that the number of pixels between the detected position R2 and the screen center is equal to N2, a view line vector p in the state shown in
From above, a motion vector q is calculated as shown below.
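Formula (25) is not reproduced in this text; the following Python sketch is a reconstruction of the size-based depth recovery used here, relying on the pinhole-model fact that apparent size is inversely proportional to distance (names and numeric values are illustrative assumptions):

```python
def depth_from_size_change(S1, S2, L2):
    """Sketch of formula (25): after the camera advances by L2 along the
    optical axis, the detected target's apparent size grows from S1 to S2
    (S2 > S1), which fixes the original object distance L1."""
    return S2 * L2 / (S2 - S1)

L1 = depth_from_size_change(S1=50.0, S2=100.0, L2=200.0)  # 400.0
```

The final motion vector (27) is then formed from the recovered L1 and the detected pixel offset N2, as in the preceding embodiments.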
In each of the above-described embodiments, the robot 1 is automatically moved so as to realize the relative relationship in which the camera is positioned directly in front of the target Q on the object and the distance between the camera and the target equals the distance L0 used at the time of camera calibration. However, a different goal may have to be achieved; for example, the distal end portion of an arc welding torch (tool) mounted on the robot may have to be placed onto the target Q. In this case, if the relative relationship between the target positions to be reached by the camera and by the welding torch, respectively, is set beforehand, the target position of the welding torch can easily be calculated by determining the target position of the camera and taking the relative relationship into consideration.
More specifically, it is assumed that Σf represents a position of the robot's mechanical interface coordinate system observed when the target position of the camera 2a is reached; Σf′, a position of the robot's mechanical interface coordinate system observed when the target position of the welding torch 1c is reached; Σt, a tool coordinate system defined at the welding torch end; Tf, a homogeneous transformation matrix that represents Σf′ on the basis of Σf; Tc, a homogeneous transformation matrix that represents Σc on the basis of Σf; and Tt, a homogeneous transformation matrix that represents Σt on the basis of Σf′. A target position U′ to be reached by the welding torch 1c shown in
U′ = U·Tf⁻¹·Tc·Tt (28),
where U denotes a target position to be reached by the camera shown in FIG. 11a.
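The matrix product of formula (28) can be computed directly with 4×4 homogeneous transforms. The following Python sketch does so with plain nested lists; the helper names and numeric values are illustrative assumptions:

```python
# Sketch of formula (28): U' = U · Tf^-1 · Tc · Tt, with 4x4 homogeneous
# transformation matrices represented as lists of lists.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inverse_rigid(T):
    """Inverse of a rigid homogeneous transform: [R^T | -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [T[i][3] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]], [0, 0, 0, 1]]

def torch_target(U, Tf, Tc, Tt):
    """U' = U · Tf^-1 · Tc · Tt (formula (28))."""
    return matmul(matmul(matmul(U, inverse_rigid(Tf)), Tc), Tt)

def translate(x, y, z):
    """Identity-rotation homogeneous transform (illustrative helper)."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]
```

With identity rotations, the composition simply sums translations, which makes the formula easy to spot-check.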
First, the main processor 11 of the robot controller 1a drives the robot 1 so as to position the camera 2a at an image capturing position spaced from the object 5 by the predetermined distance L1 (Step 100), and outputs an image capturing command to the image processing unit 2. The processor 21 of the image processing unit 2 captures an image of the object 5 picked up by the camera 2a (Step 101). The captured image is stored in the frame memory 26 and displayed on the monitor 2b (Step 102). The processor 21 of the image processing unit 2 determines whether a target Q is selectively specified by a mouse or the like (Step 103). If a target is specified, the processor determines the number N1 of pixels corresponding to the position of the specified target Q (Step 104). Then, the calculation of formula (5) is performed to determine the distance W1 corresponding to the target Q at the object-to-camera distance L0 used for calibration (Step 105). On the basis of the distance W1, the predetermined distance L1, and the distance L0 used for calibration, the calculation of formula (7) is performed to determine a motion vector q, and data thereof is transmitted to the robot controller 1a (Step 106). Based on the transmitted data of motion vector q, the robot controller 1a determines a position for robot motion, and moves the robot to the determined position, whereby the camera 2a is positioned at a position where the camera is opposed to the target Q and spaced therefrom by the distance L0 (i.e., at a position where the target Q is on the camera optical axis) (Step 107). If machining is to be made with use of a tool, the calculation of formula (28) is performed, and the robot is moved to position the tool end at the target Q (Step 108).
In the second embodiment, the camera 2a is first positioned at an arbitrary position with respect to the object where an image of the object can be picked up. Thereafter, the same processing as Steps 101-105 shown in
In Steps 300-303 of the third embodiment, the same processing as Steps 200-203 shown in
On the basis of the determined pixel numbers N1 and N2, the transformation coefficient C0 determined at the time of calibration, the distance L0 used for calibration, and the predetermined distance L2, the calculation of formula (18) is performed to determine a motion vector q, and data thereof is transmitted to the robot controller (Step 309). Based on the transmitted data of motion vector q, the robot controller 1a determines a position for robot motion, and moves the robot to the determined position, whereby the camera 2a is positioned at a position where it is opposed to the target Q and spaced therefrom by the distance L0 (Step 310). If machining is to be made with use of a tool, the calculation of formula (28) is performed, and the robot is moved to position the tool end at the target Q (Step 311).
In Steps 400-408 of the fourth embodiment, the same processing as Steps 300-308 of the third embodiment is performed, except that at Step 404, which is performed instead of Step 304 (which moves the camera 2a in the direction perpendicular to the optical axis), the camera 2a is moved by a predetermined distance L2 in the Z axis direction (optical axis direction). In the fourth embodiment, on the basis of the determined pixel numbers N1 and N2, the transformation coefficient C0 determined at the time of calibration, the distance L0 used for calibration, and the predetermined distance L2, the calculation of formula (23) is performed to determine a motion vector q, and data thereof is transmitted to the robot controller (Step 409). Based on the transmitted data of motion vector q, the robot controller 1a determines a position for robot motion, and moves the robot to the determined position, whereby the camera 2a is positioned at a position where it is opposed to the target Q and spaced therefrom by the distance L0 (Step 410). If machining is to be made with use of a tool, the calculation of formula (28) is performed, and the robot is moved to position the tool end at the target Q (Step 411).
In the fifth embodiment, the same processing as Steps 400-408 of the flowchart shown in
In the fifth embodiment, on the basis of the detected sizes S1, S2 of the target Q, the determined pixel number N2, the transformation coefficient C0 determined at the time of calibration, the distance L0 used for calibration, and the predetermined distance L2, the calculation of formula (27) is performed to determine a motion vector q, and data thereof is transmitted to the robot controller (Step 509). Based on the transmitted data of motion vector q, the robot controller 1a determines a position for robot motion and moves the robot to the determined position, whereby the camera 2a is positioned at a position where it is opposed to the target Q and spaced therefrom by the distance L0 (Step 510). If machining is to be made with use of a tool, the calculation of formula (28) is performed, and the robot is moved to position the tool end at the target Q (Step 511).
In each of the first through fourth embodiments, the target Q is specified on the screen by using a cursor or the like. However, if a shape of the target Q is previously known, the target Q may automatically be detected by means of image processing such as pattern matching using a model of the target Q taught beforehand. For doing this, processing to detect a shape of the model is performed at Step 102 in
Even if no model shape is taught beforehand, an image model may be created based on an image area near the initially specified target Q, and on the basis of the thus created image model, the target Q may automatically be detected in a second target detection. For doing this, processing to create an image model is added after each of Step 202 of
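The model creation and re-detection just described can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: an image model is cropped around the initially specified point, then located in the second image by exhaustive sum-of-squared-differences matching; plain nested lists stand in for grayscale images, and a real system would use an optimized matcher such as normalized cross-correlation:

```python
def crop(img, top, left, h, w):
    """Create the image model: an h x w patch around the specified point."""
    return [row[left:left + w] for row in img[top:top + h]]

def match(img, model):
    """Return (row, col) of the best (lowest-SSD) placement of model in img."""
    mh, mw = len(model), len(model[0])
    best, best_pos = None, None
    for r in range(len(img) - mh + 1):
        for c in range(len(img[0]) - mw + 1):
            ssd = sum((img[r + i][c + j] - model[i][j]) ** 2
                      for i in range(mh) for j in range(mw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The position returned by `match` in the second image plays the role of the second position information of the target Q.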
Number | Date | Country | Kind |
---|---|---|---|
310409/2003 | Sep 2003 | JP | national |