This is the U.S. National Phase application of PCT/JP2021/015481, filed Apr. 14, 2021, which claims priority to Japanese Patent Application No. 2020-074943, filed Apr. 20, 2020, the disclosures of these applications being incorporated herein by reference in their entirety for all purposes.
The present invention relates to a program editing device.
Since robot programming is generally performed using text-based command statements, the operator must be knowledgeable in the programming language of the robot. To support intuitive input of robot control programs by the operator, program creation devices that enable programming using icons representing robot control commands have been proposed (for example, Patent Literature 1 and Patent Literature 2).
Motion programs for performing operations such as handling of workpieces in a robot system comprising a visual sensor include a program for so-called vision detection, in which the position of an object is detected from an image captured by the visual sensor and provided to the robot controller. A vision detection program includes commands for image capture with a camera and commands for detecting an object, and its creation method differs from that of robot control programs. Thus, even an operator who is accustomed to programming robot control programs cannot apply that knowledge to create a vision detection program. A program editing device that enables even an operator who is accustomed to programming robots but not to programming vision detection to create a vision detection program without difficulty is desired.
An aspect of the present disclosure provides a program editing device for editing a motion program of a robot, comprising a program editing unit for receiving a shared editing operation on a first type of icons corresponding to commands related to control of the robot and a second type of icons corresponding to commands related to imaging with a visual sensor and processing of captured images, and a program generation unit for generating the motion program in accordance with the edited first type of icons and second type of icons.
According to the configuration described above, even an operator who is accustomed to robot programming but not accustomed to programming vision detection can create a vision detection program without difficulty.
The object, characteristics, and advantages of the present invention as well as other objects, characteristics, and advantages will be further clarified from the detailed description of typical embodiments of the present invention shown in the attached drawings.
Next, the embodiments of the present disclosure will be described with reference to the drawings. In the referenced drawings, identical constituent portions or functional portions are assigned the same reference signs. In order to facilitate understanding, the scales of the drawings have been appropriately modified. Furthermore, the forms shown in the drawings are merely examples for carrying out the present invention. The present invention is not limited to the illustrated forms.
The visual sensor controller 40 has a function for controlling the visual sensor 41 and a function for performing image processing on the images captured by the visual sensor 41. The visual sensor controller 40 detects the position of the object 1 from the image captured by the visual sensor 41 and supplies the detected position of the object 1 to the robot controller 50. As a result, the robot controller 50 can execute correction of the teaching positions, extraction of the object 1, etc. Below, the function of detecting the position of an object from an image captured by the visual sensor may be referred to as vision detection, and the function of correcting the teaching position based on the position detected by the visual sensor may be referred to as vision correction.
The visual sensor 41 may be a camera which captures grayscale or color images, or a stereo camera or three-dimensional sensor which can capture distance images or three-dimensional point clouds. A plurality of visual sensors may be arranged in the robot system 100. The visual sensor controller 40 retains model patterns of objects, and detects an object by pattern matching between the captured image and a model pattern.
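For illustration, such pattern matching can be sketched in a few lines. The following is a minimal sketch assuming OpenCV (cv2) is available and that the model pattern is stored as a grayscale template image; the function name and threshold value are illustrative assumptions, not the actual implementation of the visual sensor controller 40.

```python
import cv2

def detect_object(captured_gray, model_pattern, score_threshold=0.8):
    """Detect an object by pattern matching against a stored model pattern.

    Returns the (x, y) pixel position of the best match, or None when the
    match score falls below the threshold.
    """
    # Slide the model pattern over the captured image and score every position.
    scores = cv2.matchTemplate(captured_gray, model_pattern, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_pos = cv2.minMaxLoc(scores)
    if best_score < score_threshold:
        return None  # no region of the image matches the model pattern
    return best_pos  # top-left corner of the matched region
```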
The program editing device 10 is used to create a motion program for the robot 30 for executing handling of the object 1. The program editing device 10 is, for example, a teaching device (teach pendant, tablet terminal, etc.) connected to the robot controller 50. The program editing device 10 may have a configuration as a general computer having a CPU, ROM, RAM, a storage device, an input/output interface, a network interface, etc. The program editing device 10 may be a so-called “programming device” (PC or the like) for performing programming offline.
As will be described in detail below, the program editing device 10 can be used for programming by means of icons related to both commands used in control of the robot 30 and commands related to imaging with the visual sensor and processing of captured images (vision detection). Below, when icons of commands used in control of the robot 30 and icons of commands used in vision detection are distinguished, the former are referred to as the first type of icons and the latter as the second type of icons.
The functional blocks of the program editing device 10 may be realized by the CPU of the program editing device 10 executing software stored in a storage device, or may be realized mainly by hardware.
The editing operation will be described using, as an example, the case where a mouse is used as the input device for editing the motion program.
The editing screen 400 includes an icon display area 200 and a program creation area 300.
The upper program creation area 300 in the editing screen 400 is an area for creating a motion program by arranging icons in the order of operation. Icons are dragged and dropped from the icon display area 200 to the program creation area 300 by operating the mouse, and the operation input reception unit 13 arranges a copy of the selected icon in the program creation area 300 in response to such a drag-and-drop operation. In this way, the operator creates a motion program by selecting icons from the icon display area 200 and arranging them, from left to right in the order of operation, at the desired positions in the program creation area 300.
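The behavior of the operation input reception unit 13 described above can be modeled as operations on an ordered list. The following is a schematic sketch under that assumption; the class and method names are illustrative, not the device's actual API.

```python
from copy import deepcopy

class ProgramCreationArea:
    """Holds icons from left to right in the order of operation."""

    def __init__(self):
        self.icons = []  # ordered list of icon copies

    def drop_icon(self, palette_icon, position=None):
        # A copy is arranged so that the icon in the icon display area
        # itself remains available for further use.
        icon_copy = deepcopy(palette_icon)
        if position is None:
            self.icons.append(icon_copy)            # drop at the right end
        else:
            self.icons.insert(position, icon_copy)  # drop at a chosen slot
        return icon_copy
```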
When an icon arranged in the program creation area 300 is selected and a detail tab 262 is selected, the lower area of the editing screen 400 becomes a parameter setting screen (not illustrated) for setting the detailed operation of the command of the icon. The operator can set detailed parameters related to the operation command of the selected icon via the parameter setting screen. As an example, when the icon 203 (straight line movement) arranged in the program creation area 300 is selected, the icon 203 is highlighted. If the operator selects the detail tab 262 in this state, the setting screen for the command (straight line movement) of the icon 203 is displayed in the lower area of the editing screen 400. In this case, the contents of the detailed settings include the following setting items (target position/posture and moving speed). The operator inputs, for example, the following numerical data in each setting item.
(Parameter Setting Items)
Target position/posture
Moving speed (mm/sec)
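These setting items can be thought of as a simple parameter record attached to the icon. A minimal sketch follows, assuming Python dataclasses; the field names and units are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LinearMoveParams:
    """Detailed parameters of a straight-line movement command."""
    target_position: tuple  # (x, y, z) in mm, e.g., (350.0, 0.0, 120.0)
    target_posture: tuple   # (w, p, r) orientation angles in degrees
    speed: float            # moving speed in mm/sec, e.g., 2000.0
```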
In the example of the editing screen 400, the icon display area 200 also contains a “view and pick up” icon 251 representing a vision detection function.
The operator can select the “view and pick up” icon 251 and incorporate it into the motion program by the same drag-and-drop operation used to arrange the robot control icons 201 to 209 in the program creation area 300.
When the detail tab 262 is selected in a state in which the icon 251 arranged in the program creation area 300 is selected, a screen for creating and editing the vision detection program of the icon 251 is displayed.
If a vision detection program has already been registered in the program editing device 10, when the detail tab 262 is selected in a state in which the icon 251 is selected on the editing screen 400, the registered vision detection program can be called up and edited.
As an example, in the program creation area 300A of the editing screen 400A, a vision detection program is created by arranging an image capture icon 252, two detection icons 253, and a correction calculation icon 254 in the order of operation.
In this case, the operator selects each icon arranged in the program creation area 300A, displays the parameter setting screen at the bottom of the editing screen 400A, and sets detailed parameters. The parameter setting of the image capture icon 252 (command SNAP) includes setting items for specifying the imaging conditions, such as the camera to be used.
The parameter setting of the detection icon 253 (command FIND_SHAPE or FIND_BLOB) includes setting items such as the “matching threshold” and the “contrast threshold”, which are parameters related to thresholds in the image processing for object detection.
In the correction calculation icon 254 (command CALC_OFFSET), for example, the position of the object in the image is obtained based on the detection results of the two detection icons 253 (commands FIND_SHAPE and FIND_BLOB), and the position in the image is converted into three-dimensional coordinates in the robot coordinate system to obtain an offset amount for correcting the teaching position of the robot.
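The conversion performed by the correction calculation can be sketched as follows for the planar case, assuming a homography H obtained from camera calibration that maps image pixels onto the workpiece plane expressed in robot coordinates; the names and the restriction to two dimensions are illustrative assumptions, not the actual CALC_OFFSET implementation.

```python
import numpy as np

def calc_offset(pixel_pos, H, taught_pos):
    """Convert a detected image position to robot coordinates and return
    the offset amount relative to the taught position.

    pixel_pos  : (u, v) detected position of the object in the image
    H          : 3x3 homography mapping image pixels onto the workpiece
                 plane in robot coordinates (from camera calibration)
    taught_pos : (x, y) object position at the time of teaching
    """
    u, v = pixel_pos
    p = H @ np.array([u, v, 1.0])
    x, y = p[:2] / p[2]  # perspective division into robot coordinates
    return x - taught_pos[0], y - taught_pos[1]  # offset for the teaching position
```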
In the example of the motion program of the program creation area 300, the offset amount obtained by the vision detection program is applied to correct the teaching positions of the movement commands.
Examples of vision programs created in the program creation area 300 or 300A will be described below.
In the vision program 602, vision correction is applied in the two linear movement commands. The vision program 602 realizes, for example, an operation in which a plurality of objects are present in the field of view (captured image) of the visual sensor and the objects found in the captured image are picked up in order while vision correction is applied.
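This pick-in-order behavior can be paraphrased as a loop over the detection results. The following is a schematic sketch only: camera and robot are assumed wrapper objects for the visual sensor controller and the robot controller, and all names are illustrative.

```python
def pick_all_found_objects(camera, robot, taught_pick_pos):
    """Pick up, in order, every object found in one captured image,
    applying the vision correction amount to each pick position."""
    image = camera.snap()                   # image capture
    for det in camera.find_objects(image):  # object detection results
        # det.offset is the correction amount in robot coordinates
        x = taught_pick_pos[0] + det.offset[0]
        y = taught_pick_pos[1] + det.offset[1]
        z = taught_pick_pos[2]
        robot.move_linear((x, y, z + 100.0))  # approach above the object
        robot.move_linear((x, y, z))          # corrected pick position
        robot.hand_close()                    # grasp the object
        robot.move_linear((x, y, z + 100.0))  # retreat with the object
```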
In the vision program 603, the position of an object is detected using two cameras in the following procedure.
(A1) The first camera detects one end of the object (auxiliary icon 256 on the left side).
(A2) The second camera detects the other end of the object (auxiliary icon 256 on the right side).
(A3) In the correction calculation icon 254, the position of the midpoint of the object is obtained from each of the detected positions, this position is set as the position of the object, and the position of the object in the robot coordinate system is obtained. As a result, the position where the teaching point should be corrected is obtained.
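Step (A3) amounts to averaging the two detected end positions after bringing them into a common coordinate system. A minimal sketch with numpy follows; the 4x4 calibration transforms are assumed to be known from camera calibration, and all names are illustrative.

```python
import numpy as np

def object_position_from_two_cameras(end_a, end_b,
                                     robot_from_cam_a, robot_from_cam_b):
    """Return the object position (midpoint of its two detected ends)
    in the robot coordinate system.

    end_a, end_b     : (x, y, z) end positions in each camera's frame
    robot_from_cam_* : 4x4 homogeneous transforms from each camera's
                       frame into the robot coordinate system
    """
    pa = (robot_from_cam_a @ np.append(end_a, 1.0))[:3]  # end detected by camera A
    pb = (robot_from_cam_b @ np.append(end_b, 1.0))[:3]  # end detected by camera B
    return (pa + pb) / 2.0  # midpoint used to correct the teaching point
```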
In the present embodiment as described above, the first type of icons for robot programming and the second type of icons for the vision program are similarly listed in the icon display area in the editing screen. Furthermore, in the present embodiment, the operation of arranging the icons of the vision program in the program creation area can be performed by the same operations as the operation of arranging the robot programming icons in the program creation area.
Specifically, according to the present embodiment, a motion program can be created by editing the first type of icons and the second type of icons with a common editing operation. Thus, according to the present embodiment, even an operator who is accustomed to robot programs but who is not accustomed to vision programs can create the vision program without difficulty.
Though typical embodiments have been used above to describe the present invention, a person skilled in the art would understand that changes and other various modifications, omissions, and additions can be made to the embodiments described above without deviating from the scope of the present invention.
In the embodiments described above, configuration examples were described in which the operation commands of the robot and the commands for processing images captured by the visual sensor are expressed as icons with a standardized display method and operation method. In the creation of motion programs, this standardization of the command display method and input operation method between robot control programming and vision programming can also be realized in text-based programming.
As an example, it is assumed that the following text-based motion program (hereinafter referred to as motion program F) is input using the program editing device. In the program list below, the leftmost number on each line is the line number. In motion program F, the object is detected from the image captured by the first camera (camera A), the object is handled while correcting the position of the robot (line numbers 1 to 10), and next, the object is detected from the image captured by the second camera (camera B), and the object is handled while correcting the position of the robot (line numbers 12 to 19).
(Motion Program F)
1: linear position[1] 2000 mm/sec positioning;
2: ;
3: vision detection ‘A’;
4: vision correction data acquisition ‘A’ vision register[1] jump label[100];
5: ;
6: !Handling;
7: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[1] tool correction, position register[1];
8: linear position[2] 500 mm/sec positioning vision correction, vision register[1];
9: call HAND_CLOSE;
10: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[1] tool correction, position register[1];
11: ;
12: vision detection ‘B’;
13: vision correction data acquisition ‘B’ vision register[2] jump label[100];
14: ;
15: !Handling;
16: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[2] tool correction, position register[1];
17: linear position[2] 500 mm/sec positioning vision correction, vision register[2];
18: call HAND_CLOSE;
19: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[2] tool correction, position register[1];
20:
In motion program F above, the commands “linear position[ ]” and “call HAND_CLOSE” are commands belonging to robot control. Specifically, the command “linear position[ ]” is a command to move the tip of the arm of the robot, and the command “call HAND_CLOSE” is a command to call the process for closing the hand. In motion program F, the commands “vision detection” and “vision correction data acquisition” are commands belonging to the vision program. Specifically, the command “vision detection” is a command for capturing an image with a camera, and the command “vision correction data acquisition” is a command for detecting an object and determining the position for correction.
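The control flow of lines 3 to 10 of motion program F (and, analogously, lines 12 to 19) can be paraphrased as follows. This is an explanatory sketch, not a language supported by the device: ctrl is an assumed wrapper around the robot and visual sensor controllers, and all names are illustrative.

```python
def detect_and_handle(ctrl, camera_name):
    """Paraphrase of lines 3-10 of motion program F: capture with one
    camera, acquire the correction data, and handle the object with the
    vision correction applied."""
    ctrl.vision_detect(camera_name)                   # line 3: capture an image
    offset = ctrl.get_vision_correction(camera_name)  # line 4: detect and compute correction
    if offset is None:
        return False  # corresponds to "jump label[100]" when nothing is detected
    # lines 7-10: handling with vision correction (and tool correction)
    ctrl.move_linear(position=2, speed=2000, smooth=100,
                     vision_offset=offset, tool_correction=1)
    ctrl.move_linear(position=2, speed=500, vision_offset=offset)
    ctrl.hand_close()                                 # line 9: call HAND_CLOSE
    ctrl.move_linear(position=2, speed=2000, smooth=100,
                     vision_offset=offset, tool_correction=1)
    return True
```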
The program editing device displays a list of instructions, for example, in a pop-up menu on the editing screen where the instructions of motion program F are input. The pop-up menu includes, for example, the instructions used in motion program F, such as “linear”, “call”, “vision detection”, and “vision correction data acquisition”.
The program editing device displays the instruction pop-up menu when the operation to display it is performed. The program editing device then inserts the instruction selected from the pop-up menu by a selection operation with the input device (a mouse click or a touch operation on the touch panel) into the line where the cursor is located in the editing screen. The operator creates motion program F by repeating this operation of selecting and inserting instructions.
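The insert-at-cursor operation can be modeled in a few lines, assuming the program under edit is held as a list of text lines; this representation is an illustrative assumption, not the device's internal data structure.

```python
def insert_instruction(program_lines, cursor_line, instruction):
    """Insert the instruction selected from the pop-up menu at the
    line where the cursor is located (0-based index)."""
    program_lines.insert(cursor_line, instruction)

# Example: building the first lines of motion program F by repeated insertion.
program = []
insert_instruction(program, 0, "linear position[1] 2000 mm/sec positioning ;")
insert_instruction(program, 1, "vision detection 'A' ;")
```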
Even in the creation and editing of such text-based motion programs, instructions for robot control and instructions of the vision program are displayed by the same display method, and the operator can insert vision program instructions into the motion program by the same operation method used to select and insert robot control instructions. Thus, even with such a configuration, the same effect as in the case of programming using icons in the embodiments described above can be achieved.
In the embodiments described above, as a specific example of realizing a predetermined function by capturing an image with a visual sensor and processing the captured image, an example of detecting the position of an object and using the detected position for correcting the operation of the robot has been described. In addition to the above examples, the functions realized by capturing an image with a visual sensor and processing the captured image include various functions, such as inspection and barcode reading, that can be realized by using a visual sensor.
For example, consider the case in which a barcode reading function is added to the robot system 100.
When the function of reading a barcode using the visual sensor 41 is added to the robot system 100, an icon corresponding to the barcode reading command is listed in the icon display area together with the other icons and can be arranged in the program creation area by the same editing operation.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2020-074943 | Apr 2020 | JP | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/JP2021/015481 | Apr 14, 2021 | WO | |
| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2021/215333 | 10/28/2021 | WO | A |
| Number | Name | Date | Kind |
|---|---|---|---|
| 9612727 | Saito et al. | Apr 2017 | B2 |
| 20140298231 | Saito | Oct 2014 | A1 |
| 20170236446 | Gupta | Aug 2017 | A1 |
| 20190143524 | Takahashi | May 2019 | A1 |
| 20210110735 | Nagasaka | Apr 2021 | A1 |
| Number | Date | Country |
|---|---|---|
| 103862472 | Jun 2014 | CN |
| 105408823 | Mar 2016 | CN |
| 109760042 | May 2019 | CN |
| 110315533 | Oct 2019 | CN |
| 102012004983 | Sep 2013 | DE |
| 08249026 | Sep 1996 | JP |
| H09258971 | Oct 1997 | JP |
| 2001088068 | Apr 2001 | JP |
| 2010134879 | Jun 2010 | JP |
| 2014210332 | Nov 2014 | JP |
| 2017054298 | Mar 2017 | JP |
| 2018077692 | May 2018 | JP |
| 6498366 | Apr 2019 | JP |
| WO-2019112110 | Jun 2019 | WO |
| Entry |
|---|
| Rainer Bischoff, Arif Kazi, and Markus Seyfarth, “The MORPHA Style Guide for Icon-Based Programming,” Proc. IEEE Int. Workshop on Robot and Human Interactive Communication, Sep. 25, 2002, pp. 482-487. |
| International Search Report and Written Opinion for International Application No. PCT/JP2021/015481, dated May 25, 2021, 6 pages. |
| Number | Date | Country |
|---|---|---|
| 20230182292 A1 | Jun 2023 | US |