The present invention relates to a robot simulation device.
In a robot system including a robot, a visual sensor, and a workpiece in a workspace, there is known a technique for executing a simulation in which a robot model of the robot, a visual sensor model of the visual sensor, and a workpiece model of the workpiece are arranged in a virtual space that three-dimensionally expresses the workspace, the workpiece model is measured by the visual sensor model, and the robot model performs work on the workpiece model (see, for example, PTL 1).
PTL 2 describes an “information processing device including: a first selection unit that selects, based on a first instruction input, one coordinate system from a plurality of coordinate systems included in a virtual space in which a first model based on CAD data including position information in the virtual space is arranged; a first acquisition unit that acquires first information indicating a second model not including the position information in the virtual space; a second acquisition unit that acquires second information indicating a position in the coordinate system selected by the first selection unit; and a setting unit that sets, to the position, a position of the second model in the virtual space, based on the first and second information” (Abstract).
A simulation device as described in PTL 1 generates a bulk-loaded state of workpiece models in a virtual space by using, for example, random numbers. A simulation technique is desired that makes it possible to efficiently create a robot operation program capable of achieving a more accurate workpiece picking-up operation.
One aspect of the present disclosure is a robot simulation device for simulating work performed on a workpiece by a robot in a robot system including the robot, a visual sensor, and the workpiece arranged in a workspace. The robot simulation device includes: a model arrangement unit configured to arrange a robot model of the robot, a visual sensor model of the visual sensor, and a workpiece model of the workpiece in a virtual space that three-dimensionally expresses the workspace; a workpiece model position calculation unit configured to calculate a position and a posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece, with reference to the robot or the visual sensor, acquired by the visual sensor in the workspace; and a simulation execution unit configured to execute a simulation operation in which the workpiece model is measured by the visual sensor model and work is performed on the workpiece model by the robot model, wherein the model arrangement unit arranges the workpiece model in the virtual space at the position and the posture, with reference to the robot model or the visual sensor model, calculated by the workpiece model position calculation unit.
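Purely as an illustrative aid, the cooperation of these three units might be sketched as follows in Python; every class, attribute, and method name here is a hypothetical placeholder and not part of the disclosure:

```python
class RobotSimulationDevice:
    """Illustrative sketch only; unit and method names are hypothetical."""

    def __init__(self, arranger, workpiece_pose_calculator, executor):
        self.arranger = arranger                      # model arrangement unit
        self.calculator = workpiece_pose_calculator   # workpiece model position calculation unit
        self.executor = executor                      # simulation execution unit

    def run(self, measured_points):
        # Arrange the robot model and the visual sensor model in the virtual space.
        self.arranger.arrange_robot_and_sensor()
        # Superimpose the workpiece model's shape feature on the 3D position
        # information measured by the real visual sensor to obtain its pose.
        pose = self.calculator.calculate_pose(measured_points)
        # Arrange the workpiece model at that pose, then simulate measurement
        # by the sensor model and work (pick-up) by the robot model.
        self.arranger.arrange_workpiece(pose)
        return self.executor.execute_simulation()
```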
Because the simulation operation of the robot model's work is executed while the state of the workpieces loaded in bulk in the workspace is reproduced in the virtual space, an operation program that can execute a more accurate picking-up operation can be created efficiently.
These and other objects, features, and advantages of the present invention will become more apparent from the detailed description of typical embodiments of the present invention illustrated in the accompanying drawings.
Next, embodiments of the present disclosure will be described with reference to the drawings. Throughout the referenced drawings, similar structural or functional elements are denoted by the same reference signs. The scale of the drawings is changed as appropriate to facilitate understanding. Each aspect illustrated in the drawings is one example for implementing the present invention, and the present invention is not limited to the illustrated aspects.
The robot simulation device 30 according to the present embodiment arranges, in a virtual space, a model of each object including the robot 10, the visual sensor 70, and the workpieces W loaded in bulk in the container 81, and, by operating the models in a simulated manner, simulates an operation of detecting a workpiece W with the visual sensor 70 and picking it up with the robot 10 (hand 11). Because the robot simulation device 30 executes the simulation by acquiring actual three-dimensional position information about the workpieces W loaded in bulk in the container 81 and reproducing their actual bulk-loaded state in the virtual space, it can efficiently create an operation program that can execute a more accurate workpiece picking-up operation.
The visual sensor 70 may be a two-dimensional camera that acquires a two-dimensional image, or may be a three-dimensional position detector that acquires a three-dimensional position of a target object. In the present embodiment, the visual sensor 70 is assumed to be a range sensor that can acquire the three-dimensional position of a target object. The visual sensor 70 includes a projector 73 and two cameras 71 and 72 arranged in positions facing each other across the projector 73. The projector 73 is configured to be able to project desired pattern light, such as spot light or slit light, onto a surface of a target object, and includes a light source such as a laser diode or a light-emitting diode. The cameras 71 and 72 are digital cameras, each including an image pickup device such as a CCD or a CMOS sensor.
The virtual space creation unit 131 creates a virtual space that three-dimensionally expresses a workspace.
The model arrangement unit 132 arranges a model of each object constituting the robot system 100 in the virtual space. A state where each object model is arranged in the virtual space by the model arrangement unit 132 may be displayed on the display unit 33.
The visual sensor model position setting unit 133 acquires information indicating the position of the visual sensor 70 in the workspace from the robot controller 20. For example, the visual sensor model position setting unit 133 acquires, as a file from the robot controller 20, information (calibration data) stored in the robot controller 20 that indicates the relative position between the robot coordinate system C1 and the sensor coordinate system C2. Specifically, the information indicating this relative position is the position and posture of the visual sensor 70 (sensor coordinate system C2) with reference to the robot 10 (robot coordinate system C1) in the workspace. This information is obtained by performing calibration of the visual sensor 70 in advance in the robot system 100 and is stored in the robot controller 20.
Here, the calibration is performed, for example, by measuring, with the visual sensor 70, a visual marker attached at a predetermined reference position of the robot, thereby acquiring the position and posture of the visual sensor 70 with respect to the marker. Because the visual marker is arranged at a known position, the position and posture of the visual sensor 70 with respect to the robot 10 can then be derived.
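The transform arithmetic implied by such a calibration can be sketched as follows, assuming poses are represented as 4x4 homogeneous matrices; the function name and representation are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np

def calibrate_sensor_pose(T_robot_marker: np.ndarray,
                          T_sensor_marker: np.ndarray) -> np.ndarray:
    """Pose of the visual sensor 70 in the robot coordinate system C1.

    T_robot_marker: known 4x4 pose of the visual marker in C1 (the marker
                    is attached at a predetermined reference position).
    T_sensor_marker: 4x4 pose of the marker as measured by the visual
                     sensor 70, expressed in the sensor coordinate system C2.
    """
    # Chain the transforms: C1 -> marker -> C2.
    return T_robot_marker @ np.linalg.inv(T_sensor_marker)
```

The returned matrix corresponds to the calibration data described above, i.e., the relative position between the robot coordinate system C1 and the sensor coordinate system C2.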
The model arrangement unit 132 arranges the visual sensor model in the virtual space in such a way that a relative position between a robot model coordinate system set in the robot model in the virtual space and a sensor model coordinate system set in the visual sensor model is the same as the relative position between the robot coordinate system and the sensor coordinate system in the workspace.
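A minimal sketch of this placement, reusing the same 4x4 homogeneous-matrix convention (names hypothetical): given the pose of the robot model in the virtual space and the calibration data, the pose of the visual sensor model follows by composition.

```python
import numpy as np

def place_sensor_model(T_virtual_robotM: np.ndarray,
                       T_robot_sensor: np.ndarray) -> np.ndarray:
    """Pose of the visual sensor model in the virtual space.

    Composing the robot model's pose with the calibration data guarantees
    that the relative position between the robot model coordinate system
    and the sensor model coordinate system equals the relative position
    between the robot coordinate system and the sensor coordinate system.
    """
    return T_virtual_robotM @ T_robot_sensor
```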
The workpiece model position calculation unit 134 calculates the position and posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece, with reference to the robot 10 or the visual sensor 70, acquired by the visual sensor 70 in the workspace. The model arrangement unit 132 arranges the workpiece model in the virtual space at the calculated position and posture.
The simulation execution unit 135 executes a simulation of an operation of measuring, with the visual sensor model, the workpiece models arranged in a bulk-loaded state at the calculated positions and postures, and of picking up the workpiece models with the robot model. Note that, when a simulation or a simulation operation is referred to in this specification, it includes not only a case where a numerical simulation of the operation of the robot and the like is executed, but also a case where each object model, such as the robot model, is operated in a simulated manner on a display screen.
First, the virtual space creation unit 131 creates a virtual space that three-dimensionally expresses a workspace (step S1). Then, the model arrangement unit 132 arranges a robot model 10M in the virtual space (step S2).
Next, the visual sensor model position setting unit 133 sets the position and posture of a visual sensor model 70M with reference to the robot model 10M in the virtual space, based on the position and posture of the visual sensor 70 with reference to the robot 10 in the workspace (step S3). For example, the position and posture of the visual sensor 70 with reference to the robot 10 in the workspace are stored in the robot controller 20 as the relative position between the robot coordinate system C1 and the sensor coordinate system C2, obtained by performing calibration of the visual sensor 70 in the robot system 100. In step S3, the visual sensor model position setting unit 133 acquires this information, namely the relative position between the robot coordinate system C1 and the sensor coordinate system C2, from the robot controller 20.
Next, in step S4, the model arrangement unit 132 arranges the visual sensor model 70M in the virtual space in such a way that a relative position between the robot model coordinate system M1 and a sensor model coordinate system M2 is equal to the relative position between the robot coordinate system C1 and the sensor coordinate system C2 in the workspace.
Next, in step S5, the workpiece model position calculation unit 134 calculates the position and posture of a workpiece model WM with reference to the robot model 10M or the visual sensor model 70M in the virtual space by superimposing a shape feature of the workpiece model WM on three-dimensional position information about the workpiece W, with reference to the robot 10 or the visual sensor 70, acquired by the visual sensor 70 in the workspace.
The three-dimensional position information about the workpiece W, obtained by measuring the workpiece W with the visual sensor 70, is stored in the robot controller 20 as, for example, a set of three-dimensional coordinates with reference to the robot coordinate system C1 or the sensor coordinate system C2. The workpiece model position calculation unit 134 acquires this three-dimensional position information from the robot controller 20 and calculates the position and posture of the workpiece model WM by superimposing the shape feature of the workpiece model WM on it.
Here, a method by which the visual sensor 70 acquires the three-dimensional position information about the workpieces W in a bulk-loaded state will be described.
The two cameras 71 and 72 of the visual sensor 70 face in directions different from each other in such a way that their visual fields at least partially overlap each other. The projector 73 is arranged in such a way that its projection range at least partially overlaps the visual field of each of the cameras 71 and 72.
A first plane group is defined so as to divide, at regular intervals, the visual field in which the two cameras 71 and 72 capture the range to be measured in the region provided with the workpieces W; each plane of this group passes through the focal points of the two cameras 71 and 72. A second plane group corresponds to the light-dark boundary surfaces of the striped pattern light 160 when the projector 73 projects the pattern light 160 onto the range to be measured. A plurality of intersection lines of the first plane group and the second plane group are calculated, and the three-dimensional position information about the workpiece W is calculated as the three-dimensional coordinates of the intersection points of these intersection lines and the workpiece surface.
In this way, the first plane group and the second plane group are calculated, and the intersection lines of the two plane groups are also calculated. Then, three-dimensional position information is calculated for the plurality of intersection points P between the calculated intersection lines and the surfaces of the workpieces W loaded in bulk.
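The geometric core of this computation is the intersection of a plane from the first group with a plane from the second group. The following is a minimal sketch, assuming each plane is given in Hessian normal form n . x = d (the function name is illustrative):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of two planes n1 . x = d1 and n2 . x = d2.

    Returns (point_on_line, unit_direction). n1 and n2 are 3-vectors
    (plane normals) and must not be parallel.
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    u = np.cross(n1, n2)          # direction of the intersection line
    denom = float(np.dot(u, u))
    if denom < 1e-12:
        raise ValueError("planes are (nearly) parallel")
    # A particular point satisfying both plane equations.
    p0 = np.cross(d1 * n2 - d2 * n1, u) / denom
    return p0, u / np.sqrt(denom)
```

Each line obtained this way is then intersected with the observed workpiece surface to yield an intersection point P.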
The robot controller 20 acquires three-dimensional coordinates for all of the workpieces W by performing the workpiece picking-up process a plurality of times. The three-dimensional coordinates for all of the workpieces W acquired in the robot system 100 according to the above procedure are stored in the robot controller 20.
The workpiece model position calculation unit 134 acquires, as the three-dimensional position information about the workpiece W from the robot controller 20, the three-dimensional coordinates (coordinates with reference to the robot coordinate system C1 or the sensor coordinate system C2) of the plurality of intersection points P on the workpiece surface acquired as described above. Then, the workpiece model position calculation unit 134 searches for positions and postures that the workpiece model may take by comparing the three-dimensional position information about the workpiece W with the shape feature of the workpiece model (such as surface data, ridge line data, and vertex data of the workpiece model), and calculates the position and posture of the workpiece model at which the degree of coincidence between the set of three-dimensional coordinates and the shape information about the workpiece model is maximized. In this way, the workpiece model position calculation unit 134 acquires the position and posture of the workpiece model WM in the virtual space corresponding to the position and posture of the workpiece W in the workspace.
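The degree-of-coincidence search might be sketched as follows; here the shape feature of the workpiece model is simplified to a set of sampled surface points, and the candidate poses are assumed to be generated by a separate search strategy (both are assumptions of this sketch, not the disclosure itself):

```python
import numpy as np

def degree_of_coincidence(points, model_points, T, tol=1.0):
    """Fraction of measured 3D points (the intersection points P) lying
    within `tol` of the workpiece model's sampled surface points after the
    model is transformed by the candidate 4x4 pose T."""
    transformed = (T[:3, :3] @ model_points.T).T + T[:3, 3]
    # Distance from each measured point to its nearest model point (brute force).
    dists = np.linalg.norm(points[:, None, :] - transformed[None, :, :],
                           axis=2).min(axis=1)
    return float(np.mean(dists < tol))

def best_pose(points, model_points, candidate_poses, tol=1.0):
    """Candidate pose that maximizes the degree of coincidence."""
    return max(candidate_poses,
               key=lambda T: degree_of_coincidence(points, model_points, T, tol))
```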
Next, in step S6, the model arrangement unit 132 arranges the workpiece model WM in the virtual space at the calculated position and posture with reference to the robot model 10M or the visual sensor model 70M.
Next, in step S7, in a state where the workpiece model WM is arranged in the virtual space, the simulation execution unit 135 executes the simulation operation in which the workpiece model WM is measured by the visual sensor model 70M and is picked up by the robot model 10M.
Similarly to the measurement operation using the visual sensor 70, the simulation execution unit 135 measures the position and posture of the workpiece model WM in the virtual space in a simulated manner.
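Purely as an illustrative sketch, the simulated measurement-and-pick cycle of step S7 might look like the following; every name here is a hypothetical placeholder:

```python
def simulate_pick_cycle(sensor_model, robot_model, workpiece_models):
    """Hypothetical driver: measure the bulk-loaded workpiece models with
    the visual sensor model, then pick them up with the robot model."""
    while workpiece_models:
        # Simulated measurement: the sensor model observes the workpiece
        # models just as the real sensor would observe the real workpieces.
        detections = sensor_model.measure(workpiece_models)
        if not detections:
            break                      # nothing detectable; end the cycle
        target = detections[0]         # e.g., the topmost detected workpiece
        robot_model.move_to(target.pose)
        robot_model.pick(target)
        workpiece_models.remove(target.model)
```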
In this way, according to the present embodiment, the simulation operation of the robot model's work is executed while the state of the workpieces loaded in bulk in the workspace is reproduced in the virtual space, and thus an operation program that can execute a more accurate picking-up operation can be created efficiently.
The present invention has been described above by using typical embodiments, but it will be understood by those of ordinary skill in the art that various changes, omissions, and additions may be made to each of the embodiments described above without departing from the scope of the present invention.
The functional block of the robot simulation device 30 illustrated in
The program executing the simulation operation in
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/019843 | 5/25/2021 | WO |