These and other objects, features, and advantages of the present invention will be described in more detail below based on preferred embodiments of the present invention with reference to the accompanying drawings, wherein:
Embodiments of a simulation device 30 of a robot system 10 according to the present invention will be described below with reference to the drawings.
First, the robot system 10 to which the simulation device 30 according to the present invention is applied will be explained. The robot system 10 is provided with a conveyance device 12 for successively conveying workpieces W, a robot 14 for gripping and handling the workpieces W, and a visual recognition device 16 for detecting the workpiece W and measuring its position.
The visual recognition device 16 is constituted by an image pickup camera 18 for obtaining an image and an image processing device 20 for processing the obtained image, detecting the workpiece W or its specific part, and measuring the position. As the image pickup camera 18, a CCD camera etc. is generally used, but another type of camera may also be used. Various lenses may be selected and mounted on the image pickup camera 18 depending on the image pickup range or distance to the object to be picked up.
In this configuration of the robot system 10, the conveyance device 12 successively conveys workpieces W. When the workpiece W is arranged at a predetermined position, the visual recognition device 16 obtains an image of the workpiece W, detects the workpiece W or its specific part from the obtained image, and measures its accurate position. The robot 14 then performs a gripping operation etc. based on the measured position.
In the above robot system 10, it is necessary to determine the suitable installation position of the image pickup camera 18 of the visual recognition device 16, the detection model, and the parameters for detection of the workpiece in the image processing device 20 (hereinafter simply referred to as the “detection parameters”) etc. and confirm the operation of the measurement program for measuring the position of the workpiece W or the operating program of the robot 14. The simulation device 30 according to the present invention can perform these operations off line without using the actual robot system 10 and thereby reduce the load on the worker.
The simulation device 30 is provided with a display device 32 having a screen, a processing device 34 for performing the processing for simulation, an input device 36 operated by the operator, and a storage device 38. The processing device 34 is provided with a camera position determination unit 40, a virtual image generator unit 42, and a simulator unit 44.
The screen of the display device 32 displays three-dimensional models of the components of the robot system 10 arranged in the three-dimensional virtual space, such as the workpiece W, the conveyance device 12, the robot 14, and the image pickup camera 18. For the three-dimensional models of the components, use is made of models prepared in advance as CAD data etc. The three-dimensional models of the components in the three-dimensional virtual space may be arranged at the initial set positions stored in advance in the storage device 38 or may be arranged at positions suitably designated by the operator using the input device 36.
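Purely for illustration, arranging the three-dimensional models at the stored initial set positions or at operator-designated positions might be sketched as follows in Python; the `Pose` type and all names are hypothetical, not part of the actual simulation device 30.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Position (mm) and orientation (deg) of a component in the
    three-dimensional virtual space. A hypothetical illustration type."""
    x: float
    y: float
    z: float
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

def arrange_models(models, initial_poses, designated_poses=None):
    """Place each three-dimensional model (prepared in advance as CAD data)
    at the pose designated by the operator if one was given, otherwise at
    the initial set position stored in advance."""
    designated_poses = designated_poses or {}
    return {name: (model, designated_poses.get(name, initial_poses[name]))
            for name, model in models.items()}
```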
The camera position determination unit 40 of the processing device 34 determines the installation position of the image pickup camera 18 based on the image pickup range designated by the operator, the optical characteristic information of the used image pickup camera 18, and the required measurement precision. The optical characteristic information includes a focal distance of a lens 18a of the image pickup camera 18, a size of an image pickup device 18b of the image pickup camera 18, etc. The storage device 38 stores a database linking the types of the plurality of usable image pickup cameras 18 and their lenses 18a with their optical characteristic information. When the operator designates the type of image pickup camera 18 and lens 18a to be used, the database is used to automatically determine the optical characteristic information.
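Purely as an illustrative sketch, such a database might take the following form; the camera and lens type names and the entries themselves are hypothetical examples (the 16 mm focal distance and 8.8 mm × 6.6 mm image pickup device used in the worked example later in this section appear as one entry).

```python
from dataclasses import dataclass

@dataclass
class OpticalCharacteristics:
    focal_distance_mm: float   # focal distance f of the lens 18a
    sensor_width_mm: float     # horizontal width w of the image pickup device 18b
    sensor_height_mm: float    # vertical width h of the image pickup device 18b

# Database linking camera and lens types with their optical characteristic
# information; the type names and values are hypothetical examples.
OPTICS_DB = {
    ("CCD-Camera-A", "Lens-16mm"): OpticalCharacteristics(16.0, 8.8, 6.6),
    ("CCD-Camera-A", "Lens-25mm"): OpticalCharacteristics(25.0, 8.8, 6.6),
}

def lookup_optics(camera_type: str, lens_type: str) -> OpticalCharacteristics:
    """Return the optical characteristic information for the designated
    camera and lens types."""
    return OPTICS_DB[(camera_type, lens_type)]
```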
The virtual image generator unit 42 of the processing device 34 predicts and generates the virtual image to be obtained by the image pickup camera 18 by simulation based on the positions of the three-dimensional models of the image pickup camera 18 and workpiece W in the three-dimensional virtual space and the optical characteristic information of the image pickup camera 18. The virtual image generated in this way is preferably displayed on the screen of the display device 32. When the screen of the display device 32 displays the image, the worker can visually confirm the suitability of the set position determined by the camera position determination unit 40 and can use that image to set the detection parameters etc. of the visual recognition device 16.
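A minimal sketch of the kind of perspective projection on which such virtual image generation could rest is given below, assuming a simple pinhole-camera model; the function and parameter names (including the pixel resolution, which the text does not specify) are assumptions, and an actual implementation would also have to account for lens characteristics, hidden surfaces, lighting, etc.

```python
def project_point(point_cam, f_mm, sensor_w_mm, sensor_h_mm, res_x, res_y):
    """Project a point of a three-dimensional model, given in camera
    coordinates (mm, optical axis along +z), onto pixel coordinates of the
    virtual image using a simple pinhole-camera model."""
    x, y, z = point_cam
    if z <= 0:
        return None  # the point lies behind the lens
    # Perspective projection onto the image plane at the focal distance f.
    u_mm = f_mm * x / z
    v_mm = f_mm * y / z
    # Convert sensor-plane millimeters to pixels, optical axis at image center.
    u_px = u_mm * res_x / sensor_w_mm + res_x / 2
    v_px = v_mm * res_y / sensor_h_mm + res_y / 2
    if 0 <= u_px < res_x and 0 <= v_px < res_y:
        return (u_px, v_px)
    return None  # outside the image pickup range

# Example: a point 1164 mm in front of a 16 mm lens with an 8.8 mm x 6.6 mm
# image pickup device, rendered at an assumed 640 x 480 pixel resolution.
print(project_point((100.0, 50.0, 1164.0), 16.0, 8.8, 6.6, 640, 480))
```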
The simulator unit 44 of the processing device 34 uses the image generated by the virtual image generator unit 42 to simulate the operation of the robot system 10 in accordance with the operating program prepared in advance by the worker. For example, in accordance with the operating program prepared in advance, in the three-dimensional virtual space, the conveyance device 12 conveys the workpiece W to a predetermined position, the image pickup camera 18 picks up the image of the workpiece W, the workpiece W or its specific part is detected from the obtained image, its accurate position is measured, and the robot 14 is made to perform a gripping operation etc. based on the measured position of the workpiece W or its specific part, whereby the operation of the robot system 10 can be simulated. Due to this, it becomes possible to confirm whether the operating program and measurement program make the robot system 10 perform the desired operation without using the actual robot system 10.
These camera position determination unit 40, virtual image generator unit 42, and simulator unit 44 may, for example, be realized by a camera position determining program, virtual image generating program, and simulation program run on a CPU (central processing unit) of a personal computer or may be realized as independent units able to run these programs.
Next, the routine by which the simulation device 30 determines the installation position of the image pickup camera 18 and simulates the operation of the robot system 10 will be explained.
First, the display device 32 displays three-dimensional models of the conveyance device 12, robot 14, image pickup camera 18, and workpiece W based on CAD data prepared in advance (step S1). These three-dimensional models may be arranged in accordance with the initial set positions stored in the storage device 38 or may be arranged at positions suitably designated by the operator using the input device 36. Next, the operator designates the range to be picked up by the image pickup camera 18 on the screen of the display device 32 using the input device 36 (step S2). The range to be picked up is usually determined based on the size of the workpiece W or its specific part to be detected, in consideration of the image pickup camera 18 and lens 18a scheduled to be used and the desired measurement precision. Next, the operator designates the required measurement precision and the type of lens 18a to be used through the input device 36 in accordance with the display on the display device 32 (step S3). Note that the term “measurement precision” means the actual length or size corresponding to one pixel of the image (length or size per pixel).
The camera position determination unit 40 of the processing device 34 determines the installation position of the image pickup camera 18 based on the image pickup range designated in this way, the required measurement precision, and the type of lens 18a used (step S4).
Here, the method by which the camera position determination unit 40 determines the installation position of the image pickup camera 18 will be explained. Let w and h be the horizontal width and vertical width of the image pickup device 18b, W and H be the horizontal width and vertical width of the designated image pickup range, f be the focal distance of the lens 18a, and L be the distance between the lens 18a and the object to be picked up. The following equation (1) then stands:
w/W=h/H=f/L (1)
If the measurement precision R is further taken into consideration, the following equation (2) stands:
L=(f×H×R)/h=(f×W×R)/w (2)
On the other hand, if the lens 18a (that is, the image pickup camera 18) is designated, the focal distance f and the horizontal width w or vertical width h of the image pickup device 18b are determined. Further, if the operator designates the image pickup range and measurement precision, the horizontal width W or vertical width H of the image pickup range and the measurement precision R are determined. Therefore, from equation (2), the distance L between the lens 18a (that is, the image pickup camera 18) and the object to be picked up can be calculated.
For example, assume that the focal distance of the lens 18a of the used image pickup camera 18 is 16 mm and that the horizontal width w of the image pickup device 18b is 8.8 mm and the vertical width h is 6.6 mm. These values are set in the processing device 34, based on the database stored in the storage device 38, by designation of the type of image pickup camera 18 and lens 18a used. Further, assume that the image pickup range and measurement precision are input by the operator as W=640 mm, H=480 mm, and R=0.1. This being the case, the distance L between the object to be picked up and the image pickup camera 18 is calculated as follows from equation (2):
L=(16 mm×640 mm×0.1)/8.8 mm=116.4 mm
or
L=(16 mm×480 mm×0.1)/6.6 mm=116.4 mm
If the distance L between the object to be picked up (that is, the workpiece W) and the image pickup camera 18 is determined in this way, the posture of the image pickup camera 18 is determined so that the line-of-sight vector, that is, the optical axis, of the image pickup camera 18 vertically intersects the plane of the workpiece W to be picked up. Further, the position (X, Y, Z) of the image pickup camera 18 can be determined so that the image pickup camera 18 is arranged at a position separated from the point on the plane of the workpiece W at the center of the image pickup range by exactly the distance L determined as described above along the line-of-sight vector, that is, the optical axis, of the image pickup camera 18. In this way, the camera position determination unit 40 of the processing device 34 can automatically determine the position and posture of the image pickup camera 18 from the designation of the image pickup camera 18 to be used, the required measurement precision, and the range to be picked up by the image pickup camera 18.
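As a minimal sketch, the calculation of the distance L by equation (2) and the placement of the camera along the optical axis can be expressed as follows; the coordinate conventions and all function and parameter names are assumptions made for illustration, not part of the actual simulation device 30.

```python
import numpy as np

def camera_placement(f_mm, w_mm, W_mm, R, center, plane_normal):
    """Compute the installation position and line-of-sight vector of the
    image pickup camera from equation (2): L = (f * W * R) / w.
    `center` is the point on the plane of the workpiece W at the center of
    the image pickup range; `plane_normal` points from that plane toward
    the camera side."""
    L = (f_mm * W_mm * R) / w_mm                   # distance to the object to be picked up
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                      # make it a unit vector
    position = np.asarray(center, dtype=float) + L * n
    line_of_sight = -n                             # optical axis vertically intersects the plane
    return position, line_of_sight

# Reproducing the worked example above (f = 16 mm, w = 8.8 mm, W = 640 mm, R = 0.1):
pos, axis = camera_placement(16.0, 8.8, 640.0, 0.1,
                             center=(0.0, 0.0, 0.0),
                             plane_normal=(0.0, 0.0, 1.0))
# pos is approximately (0, 0, 116.4), i.e. L = 116.4 mm as calculated above
```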
When the position and posture of the image pickup camera 18 are determined by the camera position determination unit 40, the operator inputs the conveyance speed of the conveyance device 12 through the input device 36 in accordance with a request from the simulation device 30 (step S5). Next, the operator prepares a measurement program for the visual recognition device 16 for detecting the workpiece W from the image picked up by the image pickup camera 18 and measuring the position of the workpiece W, and an operating program for the robot system 10 for making the image pickup camera 18 pick up an image and making the robot 14 operate to grip the workpiece based on the operation of the conveyance device 12 (step S6). At this time, in order to assist this operation by the operator, the virtual image generator unit 42 of the processing device 34 predicts and generates, based on the three-dimensional models, the virtual image of the workpiece W to be obtained in the three-dimensional virtual space when the image pickup camera 18 is arranged at the position and posture determined by the camera position determination unit 40, and displays this image on the screen of the display device 32. Therefore, the operator can use this virtual image to confirm that an image of the desired range is obtained and to set the detection parameters, detection model, etc. for detection of the workpiece W. Further, this virtual image may be used for calibration of the image pickup camera 18. Next, the operator can simulate the operation of the robot system 10 including the conveyance device 12, robot 14, and visual recognition device 16 based on the measurement program and operating program prepared in the above way, the determined position and posture of the image pickup camera 18, the selected image pickup camera 18 and lens 18a, etc. (step S7).
An example of simulation of the operation of the robot system 10 by the simulation device 30 according to the present invention will be described below.
First, the operating program is started up in the processing device 34 of the simulation device 30 and the conveyance device 12 is operated in the three-dimensional virtual space (step S11). The conveyance device 12 continues operating until the workpiece W is conveyed to a predetermined position (step S12). When the workpiece W is conveyed to the predetermined position, the conveyance device 12 is stopped and the image pickup camera 18 of the visual recognition device 16 picks up the image of the workpiece W in accordance with the pickup command (step S13). The image processing device 20 of the visual recognition device 16 detects the workpiece from the image obtained by the image pickup camera 18 and performs the workpiece position measurement processing for measuring the accurate position of the workpiece W (step S14). When the workpiece position measurement processing is finished (step S15), the robot 14 is moved to the position of the detected workpiece W and operates to grip the workpiece W (step S16). Further, steps S11 to S16 are repeated until the necessary number of workpieces W have finished being processed (step S17).
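Schematically, the loop of steps S11 to S17 could be sketched as follows; the `sim` object and all of its method names are hypothetical stand-ins for the models of the conveyance device 12, visual recognition device 16, and robot 14 operated by the simulator unit 44, not an actual API of the simulation device 30.

```python
def simulate_robot_system(sim, required_workpieces):
    """Sketch of the simulation loop of steps S11 to S17."""
    processed = 0
    while processed < required_workpieces:        # step S17: repeat until done
        sim.run_conveyance_device()               # step S11: operate conveyance device 12
        while not sim.workpiece_at_position():    # step S12: wait for predetermined position
            sim.advance_time()
        sim.stop_conveyance_device()
        image = sim.pick_up_image()               # step S13: pickup command to camera 18
        position = sim.measure_workpiece(image)   # step S14: detect workpiece, measure position
        sim.wait_measurement_finished()           # step S15: measurement processing finished
        sim.grip_workpiece(position)              # step S16: move robot 14 and grip workpiece W
        processed += 1
```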
The series of operations is, for example, preferably ended by displaying, on the screen of the display device 32, a user interface notifying the operator that the simulation has been completed.
While the invention has been described with reference to specific embodiments chosen for the purpose of illustration, it should be apparent that numerous modifications could be made thereto by those skilled in the art without departing from the basic concept and scope of the invention.
This application claims foreign priority based on Japanese Patent Application No. 2006-191701, filed in July 2006.