The present application claims the benefit of priority based on Japanese Patent Application No. 2007-145251 filed on May 31, 2007, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to a robot simulation apparatus for offline simulation of the operation of a robot which tracks an object being conveyed on a conveying apparatus and grasps the object.
2. Description of Related Art
As an example of a robotic production method using a robot which tracks and grasps an object being conveyed, a visual tracking method is known, as disclosed in Japanese Patent No. 3,002,097. The visual tracking method is a method in which a visual sensor is used to measure the position and attitude of a moving object being conveyed on a belt conveyor serving as a conveying apparatus, and the object is tracked based on the measured position and attitude so as to correct the teaching position taught to a robot for grasping the object. Japanese Patent No. 3,002,097 discloses a technology for causing a robot to operate in association with the tracking of an object in order to accomplish a robot operation on an object being moved by a conveying apparatus such as a belt conveyor, and more particularly, for causing a robot to operate on a moving object whose position deviates.
Although not related to a visual tracking method, Japanese Patent Publication No. 2004-249391 discloses a method for detecting the position of a moving object, in which a visual sensor is used to detect a characteristic position and attitude of an object held in a robot hand, and the holding state of the object is observed based on the detection result. In this patent reference, a holding error is obtained by comparing the holding state detected by the visual sensor with a predetermined reference holding state, and if the holding error exceeds an allowable limit, the robot operation is stopped or an alarm signal informing of an anomaly of the holding state is output.
In the conventional visual tracking method, in order to check whether the robot operation, the operation of the conveyor, and the detection by the visual sensor can be performed properly, it is necessary to actually operate the robot, the conveyor, and the visual sensor on site and to confirm their operation. Thus, when, for example, the interval at which objects are supplied, the supply speed of the objects, or the shape of the objects is to be adjusted, special expertise and complicated trial-and-error work are required, so that much time is needed for such adjustment and a production system using a robot cannot be constructed easily.
It is an object of the present invention to provide a robot simulation apparatus which, when the conveyance method for conveying objects is to be changed, for example, when the supply interval of objects, the supply speed of objects, or the shape of objects is to be altered, reduces the time required to change the settings of the system, including the actual robots and cameras, in accordance with the change of the conveyance method, and which is thus capable of improving the production efficiency of the robotic production system.
In order to attain the above object, in accordance with an aspect of the present invention, there is provided a robot simulation apparatus which performs, by image processing of image data captured with a camera, off-line simulation of the operation of a robot which tracks an object being conveyed by a conveyance apparatus and grasps the object at a predetermined position, the apparatus comprising: a display section which displays models of at least the conveyance apparatus, the object, and the robot laid out at predetermined positions; a movement condition designating section which designates a direction and a speed of movement of the object; an imaging condition designating section which designates a position of a camera relative to the object and an imaging condition of the camera in order to obtain a still image of the object located in an imaging area; a teaching model storage section which stores a teaching model of the object to be compared with the still image of the object obtained by the camera; a grasping position calculating section which calculates a grasping position at which the object is to be grasped by the robot, based on a position and an attitude of the object obtained from a comparison of the still image with the teaching model and on the direction and the speed of movement of the object; and a teaching position setting section which sets a teaching position for the robot based on the grasping position.
In accordance with the present invention, since the grasping position at which the object is to be grasped by the robot is obtained by the grasping position calculating section and the teaching position for the robot is set by the teaching position setting section, the operation of a robotic production system comprising the object, the conveyance apparatus, the camera, and the robot can be checked easily, so that the time required to examine the applicability of the robot can be reduced. Therefore, teaching and start-up of the system are simplified, the number of process steps can be reduced, and the production efficiency of the robotic production system can be improved.
The robot simulation apparatus may further comprise an alarm generating section which generates an alarm informing of an anomaly of the robot when the robot cannot grasp the object at the grasping position calculated by the grasping position calculating section. Since such an alarm is generated by the alarm generating section, it is possible to recognize when the robot cannot grasp the object. When an alarm is generated, the simulation is repeated after altering the method of supplying objects or the imaging condition of the camera, so as to find a supplying method or an imaging condition under which no alarm informing of an anomaly is generated.
The robot simulation apparatus may further comprise a shape model designating section which designates a shape model for the object. With the shape model designating section, it is possible to designate shape models of objects having different shapes. Thus, a simulation matched to an actual product shape can be carried out, and the applicable range of the simulation is thereby broadened.
The robot simulation apparatus can designate a plurality of shape models for a plurality of objects having different shapes, and can supply the plurality of objects having different shapes to the conveyance apparatus in a predetermined order. By supplying the plurality of objects having different shapes in a predetermined order, the conveyance of different kinds of products at an actual production site can be reproduced in the simulation.
The robot simulation apparatus can use a belt conveyor as the conveyance apparatus, and may further comprise a supply interval designating section which designates a supply interval for supplying a multiplicity of objects onto the belt conveyor. With the supply interval designating section, it is possible to designate the supply interval for the multiplicity of objects supplied onto the belt conveyor and to reproduce the actual method of supplying objects on site.
The robot simulation apparatus can use the supply interval designating section to designate a regular interval or an irregular interval for supplying the multiplicity of objects. By designating a regular or an irregular supply interval, the actual mode of supplying objects on site can be reproduced with higher fidelity, and the precision of the simulation can be improved.
The robot simulation apparatus may further comprise a destination designating section which designates a destination to which the grasped object is moved, and can thereby simulate the operation of the robot moving the object to the destination. Since the operation of moving the object grasped by the robot to the destination designated by the destination designating section can be simulated, a series of process steps including supplying an object onto the belt conveyor, grasping the object with the robot, and moving the object to the destination can be reproduced in the simulation. Therefore, the robot simulation apparatus can be used to verify the optimal operation and the stability of an actual robotic production system.
The above and other objects, features and advantages of the present invention will become more apparent from the following description of preferred embodiments of the invention with reference to the appended drawings, in which:
A robot simulation apparatus (hereinafter referred to simply as a “simulation apparatus”) according to the present invention will be described with reference to the drawings. Throughout the drawings, common constituents are denoted by the same reference numerals and symbols, and duplicate explanation thereof is omitted.
A simulation apparatus 1 according to this embodiment can simulate, by image-processing of image data captured with a camera, the tracking operation of an actual robot which tracks movement of an object being conveyed on a belt conveyor (conveyance apparatus), and the picking operation of the actual robot which grasps the object at a predetermined position, and as shown in
The apparatus main body 2 has a controller 4 functioning as an essential hardware component and an interface (not shown). The controller 4 has a CPU (not shown), a ROM, a RAM, and various memories (not shown) such as a flash memory. The ROM stores a system program for operating the entire simulation apparatus 1. The RAM is a memory used for temporary storage of data processed by the CPU. The flash memory stores various programs and data necessary for carrying out the simulation described later, in addition to an operation program, data, and settings for the robot 10.
The controller 4 is electrically connected via the interface to the display 3, the keyboard, the mouse, the robot controller (not shown), a CAD device, and the like, in order to transmit and receive electric signals. When the shape models have been prepared by the CAD device in advance, 3-dimensional model data of the robot 10 having a robotic hand, the belt conveyor 11, the object 13 conveyed by the belt conveyor 11, the camera 12, and the pallet 15 for receiving the object are transmitted from the CAD device via a communication line. The transmitted model data are temporarily stored in the flash memory and laid out in a predetermined positional relation on the screen of the display 3 shown in
The positional relation of the individual models should reproduce the actual positional relation at the production site. Any suitable display method, such as a solid model, a frame model, or a wire model, can be employed for the individual models. The model data can be read in directly from the CAD device, or can be captured indirectly via a recording medium.
The controller 4 comprises at least the following constituents. That is, the controller 4 comprises: a movement condition designating section 5 which designates a direction and a speed of movement of the object 13; an imaging condition designating section 6 which designates a position of the camera 12 relative to the object 13 and an imaging condition of the camera 12 in order to obtain a still image of the object 13 located in an imaging area 14 of the camera 12; a teaching model storage section 7 which stores a teaching model of the object 13 to be compared with the still image 18 of the object 13 obtained by the camera 12; a grasping position calculating section 8 which calculates a grasping position at which the object 13 is to be grasped by the robot 10, based on the position and attitude of the object 13 obtained from the comparison of the still image 18 with the teaching model and on the direction and speed of movement of the object 13; and a teaching position setting section which sets a teaching position for the robot 10 based on the grasping position.
The controller 4 may further comprise an alarm generating section which generates an alarm informing of an anomaly of the robot 10 when the robot 10 cannot grasp the object 13 at the grasping position calculated by the grasping position calculating section 8, a shape model designating section which designates a shape model for the object 13, a supply interval designating section which designates a supply interval for supplying a multiplicity of objects 13 onto the belt conveyor 11, and a destination designating section which designates a destination to which the grasped object 13 is moved.
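By way of illustration only, the following sketch shows one possible way these constituents could be organized as data structures in software; the class, field, and method names (MovementCondition, ImagingCondition, Controller, set_teaching_position, and so on) are assumptions made for this sketch and are not part of the apparatus described above.
    # Illustrative sketch only: hypothetical data structures mirroring the
    # constituents of the controller 4 (sections 5-8 and the teaching position
    # setting section); names are assumptions, not the apparatus itself.
    from dataclasses import dataclass, field
    from typing import Any, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class MovementCondition:        # movement condition designating section 5
        direction: Vec3             # unit vector of the conveyance direction
        speed_mm_per_s: float

    @dataclass
    class ImagingCondition:         # imaging condition designating section 6
        camera_position: Vec3       # position of the camera relative to the imaging area
        focal_length_mm: float
        sensor_width_mm: float
        sensor_height_mm: float

    @dataclass
    class Controller:
        movement: MovementCondition
        imaging: ImagingCondition
        teaching_models: List[Any] = field(default_factory=list)   # teaching model storage section 7
        teaching_positions: List[Vec3] = field(default_factory=list)

        def set_teaching_position(self, grasping_position: Vec3) -> None:
            # teaching position setting section: the position obtained by the
            # grasping position calculating section 8 is taught to the robot model.
            self.teaching_positions.append(grasping_position)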
Next, the simulation conducted by using the simulation apparatus 1 of this embodiment will be described with reference to a flow chart shown in
At step S1, 3-dimensional model data of the robot 10, the belt conveyor 11, the object 13, the camera 12 and the pallet 15 are displayed in a predetermined positional relation on the screen of the display 3.
At step S2, an object 13 to be supplied to the belt conveyor 11 is designated (designation of shape model). As shown in
At step S3, a direction and a speed of movement of the object 13 conveyed by the belt conveyor are designated (designation of movement condition). In
At step S4, an order of conveying a plurality of the objects 13 having different shapes as shown in
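By way of illustration only, the following sketch shows one way the designations of steps S2 to S4 could be represented as a supply schedule with a regular or an irregular interval; the function supply_schedule, its parameters, the fixed random seed, and the example shape names are assumptions made for this sketch, not features of the described apparatus.
    # Illustrative sketch: generate a supply schedule for objects of different
    # shapes placed on the belt conveyor at regular or irregular intervals.
    import random

    def supply_schedule(shapes, count, interval_s, jitter_s=0.0, seed=0):
        """Return a list of (supply_time_s, shape) pairs.

        shapes     : shape model names supplied cyclically in a predetermined order
        interval_s : nominal supply interval
        jitter_s   : 0.0 gives a regular interval; > 0.0 gives an irregular interval
        """
        rng = random.Random(seed)
        schedule, t = [], 0.0
        for i in range(count):
            schedule.append((round(t, 3), shapes[i % len(shapes)]))
            t += interval_s + rng.uniform(-jitter_s, jitter_s)
        return schedule

    # Example: two alternating shapes, nominally every 2 s, +/-0.5 s irregularity.
    print(supply_schedule(["workpiece_A", "workpiece_B"], 5, 2.0, jitter_s=0.5))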
At step S5, in order to obtain still images 18 of the objects 13 located in the imaging area 14 of the camera 12, the position of the camera 12 relative to the objects 13 and the imaging condition of the camera 12 are designated (designation of imaging conditions). The following quantities are used:
W is width of the object;
H is height of the object;
w is width of an image sensor (CCD or CMOS);
h is height of the image sensor;
f is focal length; and
L is distance to object.
Between these quantities, the following relation holds:
w/W = h/H = f/L.
Thus, the width w and the height h of the image sensor are determined by the type of the lens 17.
For example, for a lens of type 1, w = 12.7 mm and h = 9.525 mm; for a lens of type 1/2, w = 6.4 mm and h = 4.8 mm; for a lens of type 2/3, w = 8.8 mm and h = 6.6 mm; and for a lens of type 1/3, w = 4.8 mm and h = 3.6 mm. The focal length differs from lens to lens; for a lens of type 2/3, for example, f = 16 mm.
The resolution of the image of the object 13 captured by the camera 12 and displayed on the screen is taken as width × height = 640 × 480 pixels. For example, if the field of view is 640 mm × 480 mm, the precision per pixel is 1 mm.
The distance L to the object (that is, the position of the camera) is L = (f × H)/h = 16 mm × 480 mm / 6.6 mm ≈ 1163.6 mm.
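By way of illustration only, the following sketch evaluates the relation w/W = h/H = f/L for the lens types listed above; the function names camera_distance and mm_per_pixel are assumptions made for this sketch, and the example reproduces the figures used above (a type 2/3 lens, f = 16 mm, and a 640 mm × 480 mm field of view at 640 × 480 pixels).
    # Illustrative check of the relation w/W = h/H = f/L for a pinhole camera model.
    SENSOR_MM = {            # (width w, height h) of the image sensor per lens type
        "1":   (12.7, 9.525),
        "1/2": (6.4, 4.8),
        "2/3": (8.8, 6.6),
        "1/3": (4.8, 3.6),
    }

    def camera_distance(fov_w_mm, fov_h_mm, focal_mm, lens_type):
        """Distance L at which the field of view (W x H) fills the sensor (w x h)."""
        w, h = SENSOR_MM[lens_type]
        return max(focal_mm * fov_w_mm / w, focal_mm * fov_h_mm / h)

    def mm_per_pixel(fov_w_mm, fov_h_mm, px_w=640, px_h=480):
        """Precision per pixel for a given field of view and image resolution."""
        return max(fov_w_mm / px_w, fov_h_mm / px_h)

    # Example: type 2/3 lens, f = 16 mm, field of view 640 mm x 480 mm.
    print(camera_distance(640.0, 480.0, 16.0, "2/3"))   # ~1163.6 mm
    print(mm_per_pixel(640.0, 480.0))                   # 1.0 mm per pixel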
A position and an attitude of the camera can be determined as follows. As shown in
Next, at steps S6 to S8, using a known method (for example, the method disclosed in Japanese Patent Publication No. 2004-144557), the image data obtained with the camera 12 are compared with the teaching model stored in the teaching model storage section 7 and are subjected to image processing by an image processor (not shown) to detect the position and attitude of the object 13. Depending on the complexity of the shape of the object 13, when the object 13 has a 3-dimensional solid shape, the teaching model may require model data of the object 13 as viewed from plural directions. In
At step S9, the grasping position at which the object 13 is to be grasped by the robot 10 is calculated based on the position and attitude of the object 13 obtained at steps S6 to S8 and on the direction and speed of movement of the object 13.
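By way of illustration only, the following sketch shows one way the calculation at step S9, together with the alarm check described earlier, could be expressed; the planar position representation, the fixed delay, the robot base coordinates, and the simple circular reach test are assumptions made for this sketch, not the method of the apparatus.
    # Illustrative sketch of step S9: predict where the tracked object will be when
    # the robot reaches it, and raise an alarm if that point lies outside a simple
    # circular reach of the robot (the reach model is an assumption for illustration).
    import math

    def grasping_position(detected_xy, attitude_deg, direction_xy, speed_mm_s, delay_s):
        """Advance the detected position along the conveyance direction by speed * delay."""
        x, y = detected_xy
        dx, dy = direction_xy
        return (x + dx * speed_mm_s * delay_s, y + dy * speed_mm_s * delay_s, attitude_deg)

    def check_reach(grasp, robot_base_xy, reach_mm):
        """Return True if the grasping point lies within the robot's assumed reach."""
        gx, gy, _ = grasp
        bx, by = robot_base_xy
        return math.hypot(gx - bx, gy - by) <= reach_mm

    # Example: object detected at (200, 0) mm moving along +x at 100 mm/s,
    # with 1.5 s from image capture to grasp.
    grasp = grasping_position((200.0, 0.0), 30.0, (1.0, 0.0), 100.0, 1.5)
    if not check_reach(grasp, robot_base_xy=(500.0, -300.0), reach_mm=700.0):
        print("alarm: robot cannot grasp the object at", grasp)   # anomaly of the robot
    else:
        print("grasp at", grasp)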
Finally, after the teaching position setting section has set the teaching position for the robot 10 based on the grasping position, at step S10 the object 13 being conveyed is grasped by the robot 10 at the grasping position obtained at step S9, as shown in
As described above, with the robot simulation apparatus according to the present embodiment, in a robotic production system comprising objects, a belt conveyor, a camera, and a robot, the tracking operation and the picking operation of the robot upon an alteration of the method of supplying objects or a change of the shape of the objects can be checked easily, so that the time required to examine applicability can be reduced. Teaching and start-up of the system are thereby simplified, and it is possible to reduce the number of process steps and to improve the production efficiency of the robotic production system.
The present invention is by no means limited to the above-described embodiment, but can be carried out in various modifications without departing from the spirit and scope of the invention.