This invention relates to a control apparatus, a robot and a robot system.
In related art, a robot system including a camera that captures a work, a robot that performs a job on the work based on the captured image from the camera, and a control apparatus that controls driving of the camera and the robot is known. Further, recently, to control driving of the robot with high accuracy, a method of setting a calibration between a coordinate system of the captured image and a coordinate system of the robot has been developed.
As an example of the robot system that performs the calibration, for example, JP-A-2016-120564 discloses an image processing system having a robot, a robot controller that controls driving of the robot, a communication apparatus that can perform data communication with the controller, and an imaging apparatus that captures an image of a work.
In the image processing system, a user determines setting details in a calibration by clicking a setting window displayed on a monitor with a mouse.
However, in the setting of the calibration in the image processing system described in JP-A-2016-120564, there are many setting details determined by the user and determination (selection) and execution of the setting details of the calibration require skills.
Here, for example, the setting details differ depending on the placement position of the camera or the like. Accordingly, it is difficult for a beginner to select the optimal setting details for the placement position of the camera from all of the items displayed on the setting window. As a result, only skilled users can make optimal settings for calibrations.
Further, in the setting window of related art, it is difficult to understand how to make the settings, and the time and effort required for calibration settings are considerable.
The invention has been achieved for solving at least part of the above described problems and can be realized as follows.
A control apparatus of the invention is a control apparatus that can control driving of a robot, an imaging unit and a display unit based on input of an input unit, including a display control unit that allows the display unit to display an input window for inputting a robot as an object to be controlled, and allows the display unit to display an imaging unit input part that guides input of an attachment position of the imaging unit corresponding to the input robot, and a calibration control unit that performs a calibration of correlating a coordinate system of the robot and a coordinate system of the imaging unit based on the input attachment position of the imaging unit.
According to the control apparatus of the invention, the attachment position of the imaging unit corresponding to the input robot may be displayed on the display unit. In other words, attachment positions of the imaging unit not corresponding to the input robot are not displayed. Accordingly, a user may easily select the attachment position of the imaging unit corresponding to the input robot. As a result, settings for the calibration may be easily and appropriately made.
In the control apparatus of the invention, it is preferable that the display control unit can display a vertical articulated robot and a horizontal articulated robot in the input window, and a display form of the imaging unit input part differs between the cases where the vertical articulated robot is input and the horizontal articulated robot is input.
Thereby, the attachment position of the imaging unit corresponding to the horizontal articulated robot and the attachment position of the imaging unit corresponding to the vertical articulated robot may each be easily selected.
In the control apparatus of the invention, it is preferable that the display control unit allows the display unit to display a guide window for calibration that guides input of information for the calibration.
Thereby, the user may easily and readily complete the settings of calibration without complex operations by selecting information (setting details) according to the guide window for calibration.
In the control apparatus of the invention, it is preferable that a receiving unit that receives input is provided, and the display control unit allows the display unit to sequentially display a plurality of the guide windows for calibration based on the input received by the receiving unit.
Thereby, the user may easily and readily complete the settings of calibration without complex operations by dialogically selecting information (setting details) according to the sequentially displayed guide windows (wizard windows) for calibration.
In the control apparatus of the invention, it is preferable that the display control unit displays a local setting call-up part for calling up a guide window for local settings that guides input of information for setting a local coordinate system different from the coordinate system of the robot in the guide window for calibration.
Thereby, the user may call up the guide window for local settings via the local setting call-up part, and the time and effort of once cancelling the calibration creation for the local settings, making the local settings, and then performing the calibration creation again from the start may be saved.
In the control apparatus of the invention, it is preferable that the display control unit displays a tool setting call-up part for calling up a guide window for tool settings that guides input of information for obtaining offset of a tool attached to the robot in the guide window for calibration.
Thereby, the user may call up the guide window for tool settings via the tool setting call-up part, and the time and effort of once cancelling the calibration creation for the tool settings, making the tool settings, and then performing the calibration creation again from the start may be saved.
In the control apparatus of the invention, it is preferable that the display control unit displays a calibration point selection part for selecting whether or not to perform automated generation of a calibration point used for the calibration in the guide window for calibration.
Thereby, the user may select whether or not to perform automated generation of the calibration point easily via the calibration point selection part according to the purpose of the user.
In the control apparatus of the invention, it is preferable that the display control unit displays an approach point selection part for selecting whether or not to perform automated generation of an approach point as a base point of movement of a predetermined part of the robot to the calibration point in the guide window for calibration.
Thereby, the user may select whether or not to perform automated generation of the approach point easily via the approach point selection part according to the purpose of the user.
In the control apparatus of the invention, it is preferable that a control program execution part that can execute a control program for driving the robot is provided, and the control program execution part executes setting of the local coordinate system using a command that enables setting of a local coordinate system different from the coordinate system of the robot.
Thereby, the setting of the local coordinate system may be made more quickly. In the case where execution of the calibration and correction of various settings of the calibration including the setting of the local coordinate system based on the execution result are repeated a plurality of times, use of the commands is particularly effective.
In the control apparatus of the invention, it is preferable that a control program execution part that can execute a control program for driving the robot is provided, and the control program execution part executes tool settings using a command that enables tool settings of obtaining offset of a tool attached to the robot.
Thereby, the tool settings may be made more quickly. In the case where execution of the calibration and correction of various settings of the calibration including the tool settings based on the execution result are repeated a plurality of times, use of the commands is particularly effective.
A robot of the invention is controlled by the control apparatus of the invention.
According to the robot, the action with respect to the calibration may be properly performed under the control of the control apparatus.
A robot system of the invention includes the control apparatus of the invention and a robot and an imaging unit controlled by the control apparatus.
According to the robot system, the robot may properly perform the action with respect to the calibration based on the captured image (image data) from the imaging unit. Accordingly, accuracy of the calibration may be improved. As a result, accuracy of the job of the robot may be improved.
As below, a control apparatus, a robot and a robot system of the invention will be explained in detail based on the preferred embodiments shown in the accompanying drawings.
Note that, hereinafter, for convenience of explanation, the upside in
A robot vision system 100 (robot system) shown in
The computer 11, the robot control apparatus 12, and the image processing apparatus 13 are connected to one another via wired or wireless communication (hereinafter, also simply referred to as “connected”). Further, the respective display device 41 and input device 42 are connected to the computer 11 via wired or wireless communication. Furthermore, the robot 2 is connected to the robot control apparatus 12 via wired or wireless communication. Moreover, the respective plurality of imaging units 3 are connected to the image processing apparatus 13 via wired or wireless communication. Note that the imaging units 3, the display device 41 and the input device 42 may be respectively connected to the image processing apparatus 13.
In the robot vision system 100, for example, the imaging units 3 capture a work or the like and the robot 2 performs a job on the work or the like based on the captured images (image data) captured by the imaging units 3 under the control of the control system 10. Further, for example, the robot vision system 100 performs creation of an image processing sequence for recognition of the work using the imaging units 3 or the like and performs a calibration for correlating an imaging coordinate system and a robot coordinate system (distal end coordinate system or base coordinate system) under the control of the control system 10 so that the robot 2 can perform a job appropriately.
As below, the respective parts forming the robot vision system 100 will be explained.
As shown in
As below, the robots 2a, 2b will be briefly explained.
As shown in
The robot arm 20 of the robot 2a has a first arm 21 (arm), a second arm 22 (arm), a third arm 23 (arm), a fourth arm 24 (arm), a fifth arm 25 (arm), and a sixth arm 26 (arm). These arms 21 to 26 are coupled in this order from the proximal end side toward the distal end side. Further, for example, the force detection unit 290 includes a force sensor (e.g. a six-axis force sensor) that detects the force (including moment) applied to the hand 270, etc. Furthermore, the hand 270 has two fingers that can grasp the work and rotates with rotation of the arm 26. The hand 270 is attached so that the center axis of the hand 270 may be aligned with a rotation axis O6 of the arm 26 (distal end arm) in design. Here, the distal end center of the hand 270 is called a tool center point P. In the embodiment, the tool center point P is the center of the region between the two fingers of the hand 270. Further, the distal end center of the robot arm 20 is referred to as "distal end axis coordinates".
Further, as shown in
In the robot 2a having the above described configuration, as shown in
Further, in the robot 2a, a distal end coordinate system with reference to the distal end portion of the hand 270 is set. The distal end coordinate system is a three-dimensional orthogonal coordinate system determined by an xa-axis, ya-axis and za-axis orthogonal to one another. In the embodiment, the distal end coordinate system has its origin at the distal end axis coordinates of the robot 2a. The base coordinate system and the distal end coordinate system have been calibrated so that the coordinates of the distal end coordinate system with reference to the base coordinate system may be calculated. Further, a translation component with respect to the xa-axis is referred to as "component xa", a translation component with respect to the ya-axis is referred to as "component ya", a translation component with respect to the za-axis is referred to as "component za", a rotation component about the za-axis is referred to as "component ua", a rotation component about the ya-axis is referred to as "component va", and a rotation component about the xa-axis is referred to as "component wa". The unit of the lengths (magnitudes) of the component xa, component ya and component za is "mm", and the unit of the angles (magnitudes) of the component ua, component va and component wa is "°".
Here, in the specification, the respective base coordinate system and distal end coordinate system are also referred to as robot coordinate systems. That is, in the specification, coordinates set with reference to any location of the robot 2 are referred to as “coordinate system of robot (robot coordinate system)”.
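As a purely illustrative sketch (not part of the embodiment), a pose expressed by the components described above may be handled in software as a homogeneous transform; the Z-Y-X rotation order and all numerical values below are assumptions for illustration only.

```python
import numpy as np

def pose_to_matrix(x, y, z, u, v, w):
    """Build a 4x4 homogeneous transform from pose components.

    x, y, z are translations in mm; u, v, w are rotations in degrees about
    the z-, y- and x-axes. The Z-Y-X composition order used here is an
    assumption for illustration, not something specified by the text.
    """
    u, v, w = np.radians([u, v, w])
    cz, sz = np.cos(u), np.sin(u)
    cy, sy = np.cos(v), np.sin(v)
    cx, sx = np.cos(w), np.sin(w)
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx
    t[:3, 3] = [x, y, z]
    return t

# Example: express a point given in the distal end coordinate system in the
# base coordinate system, given a (hypothetical) distal end pose.
tip_pose_in_base = pose_to_matrix(300.0, 50.0, 120.0, 90.0, 0.0, 180.0)
point_in_tip = np.array([0.0, 0.0, 10.0, 1.0])   # homogeneous coordinates
point_in_base = tip_pose_in_base @ point_in_tip
print(point_in_base[:3])
```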
As shown in
The robot arm 20 of the robot 2b has a first arm 201 (arm), a second arm 202 (arm) provided in the distal end portion of the arm 201, and the spline shaft 203 (arm) provided in the distal end portion of the second arm 202. Further, the hand 270 is attached so that the center axis of the hand 270 may be aligned with an axis J3 of the spline shaft 203 in design. The hand 270 rotates with the rotation of the spline shaft 203.
The robot 2b has three drive units 280 and position sensors 281 in the same number as the three arms (see
Further, in the robot 2b, like the robot 2a, a base coordinate system (a three-dimensional orthogonal coordinate system determined by an xr-axis, yr-axis and zr-axis) and a distal end coordinate system (a three-dimensional orthogonal coordinate system determined by an xa-axis, ya-axis and za-axis) are set.
As above, the configurations of the robots 2 (robots 2a, 2b) are briefly explained. Note that the robots 2 controlled by the control system 10 are not limited to the configurations shown in
As shown in
As below, the fixed camera 32 and the mobile camera 31 will be briefly explained.
The fixed camera 32 shown in
The fixed camera 32 has e.g. an imaging device including a CCD (Charge Coupled Device) image sensor with a plurality of pixels and a lens (optical system) (not shown). The fixed camera 32 forms an image of light reflected by an object to be imaged on a light receiving surface (sensor surface) of the imaging device using the lens, converts the light into an electric signal, and outputs the electric signal to the control system 10 (the image processing apparatus 13 in the embodiment). Here, the light receiving surface is a surface of the imaging device on which the image of the light is formed. Further, the fixed camera 32 is provided so that an optical axis A32 thereof (the optical axis of the lens) may be along the vertical direction of the flat surface (top surface) of the worktable 90 in design.
In the fixed camera 32, as an image coordinate system (a coordinate system of the captured image output from the fixed camera 32), a two-dimensional orthogonal coordinate system (not shown) determined by an xc-axis and a yc-axis respectively parallel to the in-plane direction of the captured image is set. Further, a translation component with respect to the xc-axis is referred to as “component xc”, a translation component with respect to the yc-axis is referred to as “component yc”, and a rotation component about the normal of the xc-yc plane is referred to as “component uc”. The unit of the lengths (magnitudes) of the component xc and component yc is “pixel” and the unit of the angle (magnitude) of the component uc is “°”. Note that the image coordinate system of the fixed camera 32 is a two-dimensional orthogonal coordinate system obtained by non-linear transform of three-dimensional orthogonal coordinates reflected in the field of view of the fixed camera 32 in consideration of the optical characteristics (focal length, distortion, etc.) of the lens and the number of pixels and the size of the imaging device.
The mobile camera 31 shown in
The mobile camera 31 has e.g. an imaging device including a CCD image sensor with a plurality of pixels and a lens (optical system) (not shown). The mobile camera 31 forms an image of light reflected by an object to be imaged on a light receiving surface (sensor surface) of the imaging device using the lens, converts the light into an electric signal, and outputs the electric signal to the control system 10 (the image processing apparatus 13 in the embodiment). Here, the light receiving surface is a surface of the imaging device on which the image of the light is formed. Further, the mobile camera 31 is provided so that an optical axis A31 thereof (the optical axis of the lens) may be along the same direction as that of the distal end axis of the robot arm 20 (the rotation axis O6 of the sixth arm 26 in the case of the robot 2a and the axis J3 of the spline shaft 203 in the case of the robot 2b) in design.
In the mobile camera 31, as an image coordinate system of the mobile camera 31 (a coordinate system of the captured image output from the mobile camera 31), a two-dimensional orthogonal coordinate system (not shown) determined by an xb-axis and a yb-axis respectively parallel to the in-plane direction of the captured image is set. Further, a translation component with respect to the xb-axis is referred to as “component xb”, a translation component with respect to the yb-axis is referred to as “component yb”, and a rotation component about the normal of the xb-yb plane is referred to as “component ub”. The unit of the lengths (magnitudes) of the component xb and component yb is “pixel” and the unit of the angle (magnitude) of the component ub is “°”. Note that the image coordinate system of the mobile camera 31 is a two-dimensional orthogonal coordinate system obtained by non-linear transform of three-dimensional orthogonal coordinates reflected in the field of view of the mobile camera 31 in consideration of the optical characteristics (focal length, distortion, etc.) of the lens and the number of pixels and the size of the imaging device.
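For illustration only, the non-linear relation between three-dimensional coordinates in the camera field of view and the image coordinate system may be sketched with a simple pinhole-plus-radial-distortion model as below; the intrinsic parameters fx, fy, cx, cy and k1 are hypothetical placeholders, and a real lens would require a fuller distortion model obtained by camera calibration.

```python
import numpy as np

def project_to_image(point_cam, fx, fy, cx, cy, k1=0.0):
    """Project a 3D point given in the camera frame (mm) to pixel coordinates.

    fx, fy (focal lengths in pixels), cx, cy (principal point) and k1 (one
    radial distortion coefficient) are assumed intrinsic parameters.
    """
    x, y, z = point_cam
    xn, yn = x / z, y / z                  # normalized image coordinates
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2                      # simple radial distortion factor
    return np.array([fx * d * xn + cx, fy * d * yn + cy])

# Example usage with assumed intrinsics.
print(project_to_image((10.0, -5.0, 400.0), fx=2500.0, fy=2500.0,
                       cx=640.0, cy=480.0, k1=-0.08))
```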
As above, the configurations of the imaging units 3 are briefly explained. Note that the imaging units 3 controlled by the control system 10 are not limited to those illustrated. Further, the attachment locations (placement locations) of the imaging units 3 controlled by the control system 10 are not limited to the illustrated locations. For example, the attachment location of the mobile camera 31 may be in the fifth arm 25 of the robot 2a shown in
The display device 41 (display unit) shown in
The input device 42 (input unit) includes e.g. a mouse, keyboard, etc. Therefore, the user may give instructions of various kinds of processing etc. to the control system 10 by operating the input device 42.
Note that, in the embodiment, in place of the display device 41 and the input device 42, a display and input apparatus (not shown) including both the display device 41 and the input device 42 may be provided. As the display and input apparatus, e.g. a touch panel (electrostatic touch panel or pressure-sensitive touch panel) or the like may be used. Further, the input device 42 may be adapted to recognize sound (including voice).
As described above, the control system 10 has the computer 11, the robot control apparatus 12, and the image processing apparatus 13 (see
As below, the control system 10, the computer 11 and the robot control apparatus 12 will be sequentially explained.
The computer 11 includes e.g. a computer (e.g. a PC (Personal Computer), PLC (Programmable Logic Controller), or the like) in which a program (OS: Operating System) is installed. The computer 11 has e.g. a CPU (Central Processing Unit) and GPU (Graphics Processing Unit) as a processor, a RAM (Random Access Memory), and a ROM (Read Only Memory) in which the program is stored.
As below, the respective functions (functional parts) of the computer 11 will be explained.
As shown in
For example, the function of the control unit 111 may be realized by execution of various programs stored in the main memory 112 and the storage unit 113 by the CPU and the GPU. The control unit 111 has e.g. a control program edit part 1111, a control program build part 1112, a calibration edit part 1113, a calibration execution part 1114, and an image processing sequence edit part 1115. Note that the functional parts (elements) of the control unit 111 are not limited to these. One of the functional parts may be omitted or another functional part may be added.
The control program edit part 1111 creates and edits a control program for driving the robot 2 (including a job program for the robot 2 to perform various jobs). Further, for example, the control program edit part 1111 (computer 11) may designate various commands having predetermined arguments in the control program. The control program build part 1112 builds and converts the control program into a language (data strings) that can be interpreted by the robot control apparatus 12.
The calibration edit part 1113 creates and edits a calibration program on a calibration. That is, the calibration edit part 1113 has a function of editing setting details on the calibration. The calibration execution part 1114 executes the calibration program. Specifically, the calibration execution part 1114 transfers instructions based on the calibration program to the robot control apparatus 12 and the image processing apparatus 13 and allows the robot 2 and the imaging units 3 to perform jobs relating to the calibration. Here, in the specification, the calibration edit part 1113 and the calibration execution part 1114 form a calibration control part 1110.
The image processing sequence edit part 1115 creates and edits an image processing program on the image processing sequence by the imaging units 3. That is, the image processing sequence edit part 1115 has a function of editing setting details on the image processing sequence.
In addition, the control unit 111 performs various calculations and determinations, and gives instructions to the respective functional parts of the computer 11, instructions to the robot control apparatus 12, instructions to the image processing apparatus 13, etc. in response to the instruction received by the input control unit 115.
The main memory 112 is a work area of the control unit 111. The function of the main memory 112 may be realized using e.g. a RAM.
The storage unit 113 has a function of recording various kinds of data (including programs). The function of the storage unit 113 may be realized by a ROM or the like or the so-called external storage device (not shown). In the storage unit 113, software (e.g. application software) including e.g. the control program for driving the robot 2, the calibration program on the calibration, the image processing program on the image processing sequence by the imaging units 3, etc. is stored. In other words, the above described software is installed in the computer 11. Further, the software includes a program on tool settings, a program on local settings (settings of local coordinate systems), programs for driving various robots 2 using various commands to execute various kinds of processing (e.g. tool settings, local settings, calibration creation and execution (calibration), image processing sequence creation and execution (creation of the image processing sequence), etc.), and programs for setting various parameters in force control based on the output from the force detection unit 290. Further, the above described software may be stored in e.g. a recording medium such as a CD-ROM (not shown) and provided from the recording medium, or provided via a network.
The display control unit 114 is connected to the display device 41 and has a function of allowing the monitor of the display device 41 to display captured images and various windows (e.g. operation windows and windows relating to processing results). That is, the display control unit 114 controls driving of the display device 41. The function of the display control unit 114 may be realized by e.g. a GPU. For example, the display control unit 114 allows the display device 41 to sequentially display a plurality of guide windows relating to the image processing sequence dialogically (interactively) with the user. Further, the display control unit 114 allows the display device 41 to sequentially display a plurality of calibration creation windows relating to the calibration, a plurality of tool setting windows relating to tool settings, and a plurality of local setting windows relating to settings of local coordinate systems respectively and dialogically with the user.
The input control unit 115 is connected to the input device 42 and has a function of receiving the input from the input device 42. The function of the input control unit 115 may be realized by e.g. an interface circuit. Note that, in the case of using a touch panel, the input control unit 115 has a function as an input sensing unit that senses contact of a finger of the user with the touch panel or the like.
The communication unit 116 has a function of transmitting and receiving data to and from outside, the robot control apparatus 12, the image processing apparatus 13, etc. The function of the communication unit 116 may be realized using e.g. an interface circuit or the like.
The robot control apparatus 12 controls the driving of the robot 2 according to the instructions from the computer 11, for example. The robot control apparatus 12 is a computer in which programs (OS etc.) are installed. The robot control apparatus 12 has e.g. a CPU as a processor, RAM, and ROM in which the programs are stored.
As below, the respective functions (functional units) of the robot control apparatus 12 will be explained.
As shown in
For example, the function of the control unit 121 may be realized by execution of various programs stored in the main memory 122 and the storage unit 123 by the CPU. The control unit 121 has e.g. a control program execution part 1211 and a robot control part 1212. Note that the functional parts (elements) of the control unit 121 are not limited to these. One of the functional parts may be omitted or another functional part may be added.
The control program execution part 1211 executes the control program for driving the robot 2 according to the instruction from the computer 11. For example, the control program execution part 1211 executes various kinds of processing (e.g. execution instructions of tool settings, local settings, calibration processing (calibration) and image processing sequence etc.) on the robot 2 by various commands. The robot control part 1212 controls driving of the respective drive units 280 to drive and stop the robot arm 20. For example, the control unit 121 derives target values of the motors (not shown) of the respective drive units 280 for moving the hand 270 to a target position based on the information output from the position sensors 281 and the force detection unit 290. In addition, the control unit 121 has a function of performing processing of various calculations and determinations, a function of giving instructions to the respective functional parts of the robot control apparatus 12, etc.
The main memory 122 is a work area of the control unit 121. The function of the main memory 122 may be realized using e.g. a RAM. The storage unit 123 has a function of recording various kinds of data (including programs). The storage unit 123 records e.g. the control program etc. The function of the storage unit 123 may be realized by a ROM or the like or the so-called external storage device (not shown). The communication unit 126 has a function of transmitting and receiving data to and from outside, the robot 2, the computer 11, the image processing apparatus 13, etc. The function of the communication unit 126 may be realized using e.g. an interface circuit or the like.
The image processing apparatus 13 controls driving of the imaging units 3 and performs processing of the captured images captured by the imaging units 3 (image processing) according to the instructions from the computer 11, for example. The image processing apparatus 13 is a computer in which programs (OS etc.) are installed. The image processing apparatus 13 has e.g. a CPU and GPU as processors, RAM, and ROM in which the programs are stored.
As below, the respective functions (functional units) of the image processing apparatus 13 will be explained.
As shown in
For example, the function of the control unit 131 may be realized by execution of various programs stored in the main memory 132 and the storage unit 133 by the CPU and GPU. The control unit 131 has e.g. an image processing sequence execution part 1311, an image processing part 1312, and an imaging unit control part 1313. Note that the functional parts (elements) of the control unit 131 are not limited to these. One of the functional parts may be omitted or another functional part may be added.
The image processing sequence execution part 1311 has a function of executing the image processing sequence according to the instructions (commands) from the computer 11. The image processing part 1312 has a function of performing image processing of extracting various kinds of information from the captured images, for example. Specifically, the image processing part 1312 performs e.g. processing of various calculations, various determinations, etc. based on the captured images (image data) from the imaging units 3 etc. For example, the image processing part 1312 calculates the coordinates (components xb, yb, ub or components xc, yc, uc) of the object to be imaged in the image coordinate system based on the captured images. Further, for example, the image processing part 1312 converts the coordinates in the image coordinate system (image coordinates) into coordinates in the distal end coordinate system of the robot 2 (distal end coordinates) or coordinates in the base coordinate system of the robot 2 (base coordinates). The correction parameters used for the conversions are obtained by the computer 11 or robot control apparatus 12, for example. Note that the image processing apparatus 13 may obtain the correction parameters used for the conversions. Further, for example, the imaging unit control part 1313 has a function of controlling driving of the imaging units 3 and acquiring the captured images (image data) from the imaging units 3.
In addition, the control unit 131 has a function of performing various calculations and determinations, a function of giving instructions to the respective functional units of the image processing apparatus 13, etc. in response to the instructions from the computer 11.
The main memory 132 is a work area of the control unit 131. The function of the main memory 132 may be realized using e.g. a RAM.
The storage unit 133 has a function of recording various kinds of data (including programs). The storage unit 133 records e.g. the programs on the image processing sequence etc. The function of the storage unit 133 may be realized by a ROM or the like or the so-called external storage device (not shown).
The communication unit 136 has a function of transmitting and receiving data to and from outside, the imaging units 3, the robot control apparatus 12, the computer 11, etc. The function of the communication unit 136 may be realized using e.g. an interface circuit or the like.
As above, the configurations and the functions of the control system 10 are explained. Note that any of the computer 11, the robot control apparatus 12, and the image processing apparatus 13 may have the respective functions that the above described computer 11, the robot control apparatus 12, and the image processing apparatus 13 respectively have. Or, the computer 11, the robot control apparatus 12, and the image processing apparatus 13 may be integrated. For example, the control unit 111 of the computer 11 may have the image processing sequence execution part 1311, the image processing part 1312, and the imaging unit control part 1313 of the image processing apparatus 13. The image processing apparatus 13 may have the display control unit 114 and the input control unit 115 of the computer 11. Or, the control system 10 does not necessarily include the image processing apparatus 13. In this case, the computer 11 may have the respective functions of the image processing apparatus 13. Or, the control unit 121 of the robot control apparatus 12 may have the calibration execution part 1114 of the computer 11.
As above, the basic configuration of the robot vision system 100 is briefly explained.
Next, examples of job program creation and teaching will be explained. Note that, as below, the case where the robot 2a shown in
As below, for example, a job program of a job by the robot 2a shown in
Here, before execution of the creation of the job program, as shown in
Afterward, the user gives instructions to the control system 10 by operations of clicking various windows displayed on the display device 41 using the mouse of the input device 42 and operations of inputting characters, numerals, etc. to the instruction windows displayed on the display device 41 using the keyboard of the input device 42. That is, the control (processing) by the control system 10 in the following creation of the job program is performed according to the instructions by the user using the input device 42. Hereinafter, the instructions by the user using the input device 42 (i.e., input by the input device 42) are referred to as “operation instructions”. The operation instructions include operation instructions including selection operations of selecting desired contents from the contents displayed on the instruction windows, input instructions of inputting characters, numerals, etc. on the instruction windows by the input device 42, etc.
As below, the job program creation will be explained based on the flowchart shown in
First, the computer 11 issues a movement instruction of positioning the mobile camera 31 on the calibration plate 92 to the robot control apparatus 12 (
Then, the computer 11 performs settings of the local coordinate system, i.e., local settings (
The specific setting method of the local settings is not particularly limited, but includes, for example, a method of obtaining the local coordinate system based on captured images obtained by imaging at least three markers (not shown) attached to the calibration plate 92 one by one and on the distal end coordinates of the tool center point P at the time of each imaging.
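A minimal sketch of this idea is shown below: given the robot coordinates of three markers recovered from the captured images and the distal end coordinates at each imaging, a local coordinate frame whose x-y plane contains the markers can be constructed. The choice of the first marker as the origin and of the first-to-second marker direction as the local x-axis is an illustrative convention, not something specified by the embodiment.

```python
import numpy as np

def local_frame_from_markers(p0, p1, p2):
    """Construct a local coordinate frame from three marker positions.

    p0, p1, p2 are 3D marker positions expressed in the robot (base)
    coordinate system. Returns a 4x4 transform whose x-y plane contains
    the three markers; p0 is used as the origin and p0->p1 as the local
    x direction (illustrative convention).
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x_axis = p1 - p0
    x_axis /= np.linalg.norm(x_axis)
    normal = np.cross(p1 - p0, p2 - p0)
    z_axis = normal / np.linalg.norm(normal)
    y_axis = np.cross(z_axis, x_axis)
    frame = np.eye(4)
    frame[:3, 0], frame[:3, 1], frame[:3, 2], frame[:3, 3] = x_axis, y_axis, z_axis, p0
    return frame

# Example with hypothetical marker positions on a slightly tilted plane.
print(local_frame_from_markers([0, 0, 50], [100, 0, 52], [0, 80, 51]))
```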
Further, the local settings are performed by display processing using the instruction windows, which will be described later. Note that the display processing in the local settings will be explained later.
When the settings of the local coordinate system end, the user takes off the calibration plate 92 from the feed board 91 and mounts a work 93 on the feed board 91 (see
Then, the computer 11 issues, to the robot control apparatus 12, a movement instruction to position the mobile camera 31 at a position where the camera may image the work 93 (
Then, the computer 11 sets the distal end axis coordinates (components xr, yr, zr, ur, vr, wr) of the robot 2a at step S113 as a first point and stores the point in the storage unit 113 (
Then, the computer 11 issues instructions (commands) to the robot control apparatus 12 and the image processing apparatus 13 and creates a first image processing sequence on a marker (not shown) attached to the center of the upper surface of the work 93 (
Here, the image processing sequence contains a method and a procedure of capturing images from the imaging units 3, processing the captured images, and performing detections, inspections, etc. of predetermined parts on the captured images. The creation of the image processing sequence includes various settings of the image processing sequence, teaching of the parts, and execution and reflection of the image processing sequence. Further, the first image processing sequence refers to the image processing sequence with respect to the marker attached to the center of the upper surface of the work 93 as a part.
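Purely as an illustration of the notion of an image processing sequence (the embodiment does not prescribe any particular data structure), such a sequence may be thought of as an ordered list of named steps executed against a captured image; the step names and stub functions below are hypothetical.

```python
from typing import Callable, List, Tuple

Step = Tuple[str, Callable]   # (step name, function taking and returning a context dict)

def run_sequence(steps: List[Step], image) -> dict:
    """Execute the steps in order, threading a result context through them."""
    context = {"image": image}
    for name, func in steps:
        context = func(context)
        print(f"executed step: {name}")
    return context

# Example: a sequence that detects a marker and converts its position to
# robot coordinates (the step functions here are stand-in stubs).
first_sequence = [
    ("detect marker", lambda ctx: {**ctx, "marker_px": (412.0, 280.5)}),
    ("convert to robot coords", lambda ctx: {**ctx, "marker_rc": (153.2, 47.8)}),
]
result = run_sequence(first_sequence, image=None)
print(result["marker_rc"])
```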
Further, the creation of the first image processing sequence etc. are performed by the display processing using the instruction window, which will be described later. Note that the display processing in the first image processing sequence will be explained later.
Then, the computer 11 issues a grasp instruction to grasp the work 93 to the robot control apparatus 12 (
Then, the computer 11 issues a movement instruction to position the work 93 on the fixed camera 32 to the robot control apparatus 12 (
Then, the computer 11 issues instructions (commands) to the robot control apparatus 12 and the image processing apparatus 13 and creates a second image processing sequence (second vision sequence) of a marker (not shown) attached to the center of the lower surface of the work 93 (
Note that it is difficult to provide the markers at exactly the same position at the upper surface center and the lower surface center of the work 93; accordingly, the same target, such as a through hole provided in the work 93, is recognized from above and below the work 93, for example.
Further, the creation of the second image processing sequence etc. are performed by the display processing using the instruction windows, which will be described later. Note that the display processing in the second image processing sequence will be explained later.
Then, the computer 11 issues instructions to the robot control apparatus 12 and the image processing apparatus 13 and makes the tool settings (
The method of obtaining the offset is not particularly limited, but includes e.g. a method of fixing one of the distal end axis coordinates of the robot 2a and the center of the work 93 and moving (e.g. rotating) the other, and obtaining the offset based on the distal end coordinates and the amounts of movement (e.g. rotation angles) of the tool center point P and the center of the work 93 before and after the movement.
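As a rough, purely illustrative sketch of this kind of offset computation (assuming, beyond what the text states, that several distal end axis poses are recorded while the grasped work center stays at one fixed point), a constant offset can be solved for in a least-squares sense; which components of the offset are recoverable depends on how varied the recorded orientations are.

```python
import numpy as np

def estimate_tool_offset(rotations, positions):
    """Estimate a constant tool offset from several distal end axis poses.

    rotations: list of 3x3 rotation matrices of the distal end axis.
    positions: list of 3D positions of the distal end axis (mm), recorded
    while the tool tip (here the work center) stays at one fixed point.
    Solves p_i + R_i @ t = const for the offset t by least squares.
    """
    a_rows, b_rows = [], []
    r0, p0 = rotations[0], np.asarray(positions[0], dtype=float)
    for r, p in zip(rotations[1:], positions[1:]):
        a_rows.append(r - r0)                       # (R_i - R_0) t = p_0 - p_i
        b_rows.append(p0 - np.asarray(p, dtype=float))
    a = np.vstack(a_rows)
    b = np.concatenate(b_rows)
    offset, *_ = np.linalg.lstsq(a, b, rcond=None)
    return offset

# Example with two extra orientations and a true offset of (10, 0, 100) mm.
rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rx90 = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
rots = [np.eye(3), rz90, rx90]
fixed_point = np.array([300.0, 0.0, 0.0])
poss = [fixed_point - r @ np.array([10.0, 0.0, 100.0]) for r in rots]
print(estimate_tool_offset(rots, poss))   # ~ [10, 0, 100]
```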
Further, the tool settings are performed by display processing using the instruction windows, which will be described later. Note that the display processing in the tool settings will be explained later.
Then, the computer 11 issues instructions to the robot control apparatus 12 and the image processing apparatus 13 and performs a calibration for correlating the image coordinate system of the fixed camera 32 with the local coordinates (robot coordinate system) using the marker attached to the center of the lower surface of the work 93 (
The specific method of the calibration is not particularly limited, but includes e.g. a method of positioning a target (object to be imaged) such as a single marker at three or more camera points within the captured image and obtaining a transformation matrix between the image coordinates and the robot coordinates using the image coordinates based on the captured images at the respective camera points and the robot coordinates of the target such as the marker at the time of each imaging. The robot coordinates of the marker or the like at the respective camera points may be calculated using the position and attitude of the distal end axis coordinates of the robot, the above described tool settings (offset), etc. Thereby, the image coordinate system may be correlated with the robot coordinate system and the image coordinates may be converted into the robot coordinates. Accordingly, the robot coordinates of the object to be imaged on the captured image may be obtained. Note that, at step S122, nine camera points are set.
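The following sketch illustrates one simple way such a transformation could be obtained from the camera points; a planar affine model fitted by least squares is assumed here for illustration, whereas an actual calibration may use a richer model that also accounts for lens distortion. All numerical values are hypothetical.

```python
import numpy as np

def fit_image_to_robot(image_pts, robot_pts):
    """Fit a 2D affine transform mapping image coordinates (pixels) to
    robot coordinates (mm) from corresponding camera points.

    image_pts, robot_pts: arrays of shape (n, 2) with n >= 3 camera points,
    e.g. the nine points used at step S122.
    """
    image_pts = np.asarray(image_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    ones = np.ones((len(image_pts), 1))
    a = np.hstack([image_pts, ones])                          # rows of [xc, yc, 1]
    coeffs, *_ = np.linalg.lstsq(a, robot_pts, rcond=None)    # (3, 2) matrix
    return coeffs

def image_to_robot(coeffs, image_pt):
    """Convert one image point into robot coordinates with the fitted model."""
    xc, yc = image_pt
    return np.array([xc, yc, 1.0]) @ coeffs

# Example: nine hypothetical camera points (pixels) and their robot coordinates (mm).
img = [[100, 100], [400, 100], [700, 100],
       [100, 400], [400, 400], [700, 400],
       [100, 700], [400, 700], [700, 700]]
rob = [[p[0] * 0.1 + 200.0, -p[1] * 0.1 + 50.0] for p in img]   # synthetic ground truth
model = fit_image_to_robot(img, rob)
print(image_to_robot(model, (250, 330)))   # ~ [225.0, 17.0]
```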
The calibration of the fixed camera 32 is performed by the display processing using the instruction windows, which will be described later. Note that the display processing in the calibration will be explained later.
Then, the computer 11 issues instructions to the robot control apparatus 12 and the image processing apparatus 13 and performs creation of a third image processing sequence (third vision sequence) for detection of two points, a point A (not shown) and a point B (not shown), attached to the lower surface of the work 93 (
The creation of the third image processing sequence is performed by the display processing using the instruction windows, which will be described later. Note that the display processing in the third image processing will be explained later.
Then, the computer 11 issues a movement instruction to the second point set at step S117 to the robot control apparatus 12, and issues a mounting instruction to mount the work 93 on the feed board 91 (
Then, the computer 11 allows the robot control apparatus 12 to separate from the work 93 on the feed board 91, then moves to the first point obtained at step S114, and issues a movement instruction to position the mobile camera 31 at a position where the work 93 mounted on the feed board 91 may be imaged (
Then, the computer 11 issues instructions to the robot control apparatus 12 and the image processing apparatus 13 and performs a calibration of the image coordinate system of the mobile camera 31 and the robot coordinate system using the marker attached to the center of the upper surface of the work 93 and the robot coordinates saved as the fourth point (at the fourth point) (
In the calibration at step S127, the marker attached to the center of the upper surface of the work 93 is used as one target (object to be imaged), and the mobile camera 31 is moved with respect to the work 93 to image the marker at nine camera points within the captured image. Then, a transformation matrix for image coordinates and distal end coordinates is obtained using the image coordinates based on the captured images at the nine camera points and the robot coordinates of the marker attached to the upper surface of the work 93 saved as the fourth point (at the fourth point). Thereby, the image coordinates of the mobile camera 31 may be converted into the robot coordinates.
In the calibration at step S127, for the robot 2a to properly grasp the work 93, the first image processing sequence and the calibration result of the mobile camera 31 are correlated.
The calibration of the mobile camera 31 is performed by the display processing using the instruction windows, which will be described later. Note that the display processing in the calibration will be explained later.
Then, the computer 11 issues a movement instruction to position the tool center point P onto the removal board for pass 94 to the robot control apparatus 12, mounts the work 93, sets the distal end axis coordinates (components xr, yr, zr, ur, vr, wr) of the robot 2a as a fifth point and stores the point in the storage unit 113 (
Then, the computer 11 issues a movement instruction to position the distal end axis of the robot 2a onto the removal board for fail 95 to the robot control apparatus 12, mounts the work 93, sets the distal end axis coordinates (components xr, yr, zr, ur, vr, wr) of the robot 2a as a sixth point and stores the point in the storage unit 113 (
Then, the computer 11 (control program edit part 1111) creates a job program of the robot 2a based on steps S111 to S129 (
This is the end of the creation of the job program.
Here, in related art, when teaching of normal job coordinates is performed, the work 93 is grasped by the hand 270, then, the user jog-feeds the hand 270, inserts and places the work 93 in an assembly position of the object for job or the like, and teaches the job coordinates. However, the teaching job has the following problems (1), (2).
(1) When it is necessary to insert and place the work 93 in a job position with high accuracy, the user takes a long time to manually jog-feed the work 93 to the location and insert and place the work with high accuracy.
(2) When many job positions are taught, in the method of related art, the user manually jog-feeds and inserts and places the work 93 from start to finish with respect to all teaching points, and thereby, automation is difficult.
In order to solve the above described problems, of the processing from step S116 to step S125, the processing except steps S122 and S123 may be used in teaching the job positions (job coordinates) at which the jobs are performed. In the teaching job, the work 93 that has been manually and accurately placed by the user is grasped and pulled up by the hand 270 afterward, and thereby, the jog-feeding of the hand 270 that takes time for accurate positioning may be omitted and the teaching time may be significantly shortened. In the case where many job coordinates are taught, the works 93 are placed in the respective job positions in advance and the coordinates at which the respective works 93 are grasped are taught, and thereby, the subsequent acquisition processing of the job coordinates may be easily automated.
Or, in the case where teaching is repeatedly performed, of the processing from step S116 to step S125, step S119, step S120 are executed (performed) at the initial time only, and the initial values may be used for the subsequent teachings. Further, step S124, step S125 are not absolutely necessary, and the coordinates of the fourth point may be calculated from the coordinates set at the second point and the tool offset acquired in the tool settings.
Next, the execution of the job program by the robot control apparatus 12 and the image processing apparatus 13 based on the execution instruction at step S132 will be explained.
First, for execution of the job program, the work 93 is mounted on the feed board 91.
As shown in
Then, the robot control apparatus 12 issues an execution instruction of the first image processing sequence to the image processing apparatus 13 (step S213). The image processing apparatus 13 receives the execution instruction and executes the first image processing sequence for detection of the work 93 by the mobile camera 31 (step S214). At step S214, the image processing apparatus 13 executes the first image processing sequence, performs image processing based on the captured image (image data) obtained by imaging of the work 93 using the mobile camera 31, and detects the center position of the work 93. Further, the image processing apparatus 13 converts the center position of the work 93 on the captured image into local coordinates (components xd, yd, ud) using the calibration of the mobile camera 31.
Then, the image processing apparatus 13 transmits a first image processing sequence result (robot coordinates of the image detection point etc.) to the robot control apparatus 12 (step S215). When receiving the first image processing sequence result (step S216), the robot control apparatus 12 sets the position of the center of the work 93 according to the local coordinate system (components xd, yd, zd, ud) as a seventh point based on the result, and records the point in the storage unit 113 (step S217). Here, for the position (component zd) of the work 93, the position (component zd) of the second point is used.
Then, the robot control apparatus 12 drives the robot arm 20 and moves the tool center point P to the seventh point based on the tool settings to grasp the work 93 by the hand 270 (step S218). Then, the robot control apparatus 12 drives the robot arm 20 and moves the distal end axis of the robot 2a to the third point (step S219). Then, the robot control apparatus 12 issues an execution instruction of the third image processing sequence to the image processing apparatus 13 (step S220). The image processing apparatus 13 receives the execution instruction and executes the third image processing sequence for detection of the work 93 by the fixed camera 32 (step S221). At step S221, the image processing apparatus 13 executes the third image processing sequence, performs image processing based on the captured image (image data) obtained by imaging of the work 93 using the fixed camera 32, and detects the point A and the point B of the work 93. Further, the image processing apparatus 13 converts the positions of the point A and the point B of the work 93 on the captured image into robot coordinates (base coordinates) using the calibration result of the fixed camera 32. Then, the image processing apparatus 13 transmits a third image processing sequence result (the respective robot coordinates of the point A and the point B, etc.) to the robot control apparatus 12 (step S222). When receiving the third image processing sequence result (step S223), the robot control apparatus 12 performs an inspection of measuring the distance between the point A and the point B of the work 93 according to the local coordinate system based on the result (step S224).
Then, the robot control apparatus 12 performs a pass/fail determination: pass if the distance between the point A and the point B is within a predetermined threshold value, and fail if the distance is beyond the threshold value (step S225). If the work passes, the robot control apparatus 12 moves to step S226a, drives the robot arm 20 and moves the distal end axis of the robot 2a to the fifth point. On the other hand, if the work fails, the robot control apparatus 12 moves to step S226b, drives the robot arm 20 and moves the distal end axis of the robot 2a to the sixth point.
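A minimal sketch of the pass/fail determination at step S225 is shown below; the threshold value and the coordinates are placeholders, since the text only states that the measured distance is compared with a predetermined threshold value.

```python
import numpy as np

def inspect_work(point_a_rc, point_b_rc, threshold_mm):
    """Pass/fail check on the distance between the point A and the point B.

    point_a_rc, point_b_rc: robot coordinates of the two points returned by
    the third image processing sequence. threshold_mm is an assumed
    inspection parameter (placeholder value in the example below).
    """
    distance = float(np.linalg.norm(np.asarray(point_a_rc) - np.asarray(point_b_rc)))
    return "pass" if distance <= threshold_mm else "fail"

# Example with placeholder coordinates and threshold.
print(inspect_work([120.0, 45.0, 10.0], [150.2, 45.1, 10.0], threshold_mm=31.0))
```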
Then, the robot control apparatus 12 counts up (step S227), and determines whether or not the number of processed works 93 has reached a predetermined number (step S228). If the predetermined number has been reached, the robot control apparatus 12 moves to step S229, notifies the computer 11 that the job has ended (step S229), and ends the job. On the other hand, if the predetermined number has not been reached, the apparatus returns to step S211 and repeats steps S211 to S228 until the predetermined number is reached.
This is the end of the job. Further, the computer 11 recreates the above described job program based on the job result (e.g. whether or not the work 93 has been successfully grasped, variations of the grasp position, whether or not the image sequence has been successfully executed, or the like). For example, if determining that the grasping of the work 93 is frequently unsuccessful, the computer 11 recreates (rewrites) the job program and performs the job again. In this manner, creation (updating) of the job program and the job based on the job program are performed until the grasping of the work 93 becomes stable, and thereby, the job accuracy by the robot 2a may be improved. Or, the computer 11 may recreate only a part of the job program, not the whole job program. For example, if the accuracy of the grasping of the work 93 is not within a predetermined threshold value, only the calibration of the mobile camera 31 (
The creation of the job program for the job and settings of necessary various kinds of processing are executed by the control system 10 based on operation instructions using the various instruction windows (input using the input device 42) by the user as described above. As below, the various instruction windows, the operations of the instruction windows (input using the input device 42) by the user, the display processing by the computer 11 (display control unit 114), etc. will be explained. Note that, hereinafter, various settings by the control system 10 based on the operation instructions (input using the input device 42) by the user using the various instruction windows etc. are referred to as “display processing”.
As shown in
The display control unit 114 may display a sub-window for robot operation 51 shown in
The sub-window for robot operation 51 shown in
The jog motion group 521 has a plurality of buttons 5212 that receive operation instructions for jog motion of predetermined parts of the robot 2 by the user. The jog motion group 521 has the visually recognizable buttons 5212 as described above, and thereby, the user may easily instruct jog feed of the robot 2. The jog motion group 521 is used at steps S111, S113, S116, S118, S124, S126, S128, S129 in the above described creation of the job program. Specifically, for example, at step S111, when the input control unit 115 receives operation instructions to the plurality of the buttons 5212 from the user, the control unit 111 issues a movement command to position the mobile camera 31 on the calibration plate 92 to the robot control apparatus 12 (
Further, the teach group 522 is used for setting of the teaching point by the user. The teach group 522 is used at steps S114, S117, S119, S125, S128, S129 in the above described creation of the job program. Specifically, for example, at step S114, when the input control unit 115 receives an operation instruction to the teach button from the user, the control unit 111 sets the first point and allows the storage unit 113 to record the first point (
The sub-window for image processing 61 shown in
The toolbar 615 has an icon 671 used for displaying a group of windows for creation of the image processing sequence. The picture image display part 612 displays captured images imaged by the imaging units 3 and image processing results. The execution group 613 has various buttons that receive operation instructions to execute the image processing sequence by the user. The flowchart display part 62 displays the image processing procedure of the image processing sequence, the teaching procedure of the calibration, etc.
Further, the sub-window 61 has a jog panel 54 having the same configuration as the jog motion group 521, and a panel (not shown) for setting various parameters (e.g. movement velocity etc.) of the robot 2. The sub-window 61 has two tabs 56 used for displaying one of the panel for setting various parameters of the robot 2 and the jog panel 54 on the top. Note that these panels may be displayed side by side.
As described above, the sub-window 61 has the jog panel 54, and thereby, the user may perform a robot operation using the sub-window for image processing 61. Further, similarly, the sub-window 61 has the panel for setting various parameters of the robot 2, and thereby, the user may set various parameters of the robot 2 using the sub-window for image processing 61.
Further, though not shown in
As described above, the display control unit 114 may display the plurality of kinds of sub-windows (including the sub-windows 51, 61) in superimposition or side by side at the same time with the one main window 50 under the control of the control unit 111, and the user may efficiently perform a plurality of kinds of jobs. Particularly, as described above, in the embodiment, the sub-window for robot operation 51, the sub-window for image processing 61, the sub-window for command input, and the sub-window relating to force control may be displayed on the display device 41, and the convenience is especially high.
Further, as described above, the display control unit 114 may display the panel for setting various parameters of the robot 2 (not shown) and the jog panel 54 for jog motion of the robot 2 in the sub-window for image processing 61 in superimposition or side by side. Accordingly, the user may properly and efficiently perform the operation of the robot 2 when the image processing sequence is executed.
Next, the local settings in the above described job program and the display processing in the local settings will be explained.
As below, settings of the local coordinate system (step S112) in the above described creation of the job program will be explained with reference to the flowchart shown in
First, the control system 10 executes various settings in the local settings based on the input by the input device 42.
Specifically, first, when the user gives an operation instruction to (clicks) the icon for local settings 702 of the main window 50 shown in
As shown in
Further, the local setting window 72a has a button 7201 labeled “Cancel”, a button 7202 labeled “Back”, a button 7203 labeled “Next”, a button 7204 labeled “Teach” (teach button), and a button 7205 labeled “Finish”. The button 7201 is used for cancelling a local setting wizard. The button 7202 is used for returning to the previous local setting window 72 in the sequentially displayed local setting windows 72. The button 7203 is used for proceeding to the next local setting window 72 in the sequentially displayed local setting windows 72. Note that the local setting window 72a is the first one of the sequentially displayed local setting windows 72, and the button 7202 is grayed out. Also, the buttons 7204, 7205 are grayed out.
With respect to the local setting window 72a, when the user gives an operation instruction to select a desired mode (clicks or touches one desired radio button 721) and gives an operation instruction to the button 7203 labeled "Next", the input control unit 115 receives the selection of the local setting mode (
The second local setting window is a window for selection (setting) of a save number (local number) for saving the local setting result (not shown). Note that the second local setting window is in nearly the same display form as that of the first local setting window 72a except that the selection of the save number is displayed in place of the display of the selection of the local setting mode.
For the display for selecting the save number in the second local setting window, e.g. a listbox or the like may be used. The second local setting window has a configuration of receiving the selection of the save number, and thereby, input errors by the user may be prevented. Note that what to select may be a save name in place of the save number. In addition, the second local setting window also has the same buttons (not shown) as the buttons 7201 to 7205 of the first local setting window 72a.
When the user gives an operation instruction to select a desired save number to the second local setting window, and then, gives an operation instruction to the button labeled “Next” (the button corresponding to the button 7203), the input control unit 115 receives the selection of the save number of the result of the local settings (
As shown in
Further, the local setting window 72b has a checkbox 729 for selecting whether or not to teach a local reference point. When receiving the selection, the control unit 111 sets the local plane containing the local coordinate system to a position passing through the designated teaching point. Thereby, the convenience when using the set local coordinates in the robot 2a is improved.
Here, the display contents of the third local setting window 72b change according to the type (selection) of the imaging unit 3 of the first local setting window 72a. Specifically, the display contents of the dropdown list 722 change according to the type (selection) of the imaging unit 3 of the first local setting window 72a. For example, at step S112 in the above described generation of the job program, local settings are made using the mobile camera 31. Therefore, when the input control unit 115 receives the selection of the second radio button 721 from the top in the drawing of the first local setting window 72a by the user, the display control unit 114 allows the display device 41 to display the local setting window 72b having the dropdown list 722 with the display contents relating to the mobile camera 31. Further, for example, when the input control unit 115 receives the selection of the third radio button 721 from the top in the drawing of the first local setting window 72a, the display control unit 114 allows the display device 41 to display the local setting window 72b having the dropdown list 722 with the display contents relating to the fixed camera 32.
In the above described manner, the limited contents according to the selection in the previously displayed local setting window 72a are displayed on the subsequently displayed local setting window 72b, and selection errors by the user may be reduced.
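The dependency described above can be thought of as a simple filtering of the choices offered by a later window based on an earlier selection. The following sketch illustrates the idea for the dropdown list 722; the catalogue of sequence names is an illustrative assumption, and only the filtering itself is taken from the text.

```python
# Sketch of limiting the contents of a subsequently displayed window according to the
# selection made in the previously displayed window. The sequence names below are assumed
# for illustration only.

SEQUENCES_BY_IMAGING_UNIT = {
    "mobile camera 31": ["detect calibration plate (mobile)", "detect marker (mobile)"],
    "fixed camera 32":  ["detect calibration plate (fixed)",  "detect marker (fixed)"],
}

def dropdown_722_contents(selected_imaging_unit: str) -> list:
    """Return only the image processing sequences relating to the selected imaging unit."""
    return SEQUENCES_BY_IMAGING_UNIT.get(selected_imaging_unit, [])

if __name__ == "__main__":
    # Selecting the mobile camera 31 in window 72a limits what window 72b offers.
    print(dropdown_722_contents("mobile camera 31"))
```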
With respect to the third local setting window 72b, when the user gives an operation instruction to select the details of the respective vision components and gives an operation instruction to the button 7203 labeled “Next”, the input control unit 115 receives the selection of the details of the respective vision components (
The fourth local setting window is a window for setting the camera point (start position) at which the local settings are started and the local reference point (not shown). The local reference point is set only when the selection of teaching the local reference point is made in the third local setting window. Note that the fourth local setting window is in nearly the same display form as that of the first local setting window 72a except that the selection details (setting details) are different. Further, the fourth local setting window also has the same buttons as the buttons 7201 to 7205 of the first local setting window 72a.
With respect to the fourth local setting window, when the user gives an operation instruction to the teach button (the ungrayed-out button corresponding to the button 7204), the input control unit 115 receives the operation instruction (
As shown in
Using the window 720, the user gives an instruction (operation instruction) to move a predetermined part of the robot 2a using the plurality of buttons 7241 so that the calibration plate may be positioned at the center of the picture image display part 725 (the center of the captured image). When the input control unit 115 receives the operation instruction of the user, the control unit 111 issues a movement instruction to move the hand 270 based on the operation instruction to the robot control apparatus 12. Further, together with the movement instruction, the control unit 111 issues, to the image processing apparatus 13, an imaging instruction to capture images with the imaging units 3 and displays the images in the picture image display part 725. Thereby, the user moves the imaging unit to a position in which the object to be imaged is appropriately captured and teaches the camera point at which the local settings are started (
Then, when the user gives an operation instruction to the button labeled “Next” (corresponding to the button 7203) in the above described fourth local setting window (not shown), the input control unit 115 receives the operation instruction by the user (
As shown in
Further, the local setting window 72c has buttons 7201 to 7205 like the first local setting window 72a. The fifth local setting window 72c is the final local setting window of the group of windows for local settings, and the buttons 7203 to 7205 are grayed out.
Furthermore, the local setting window 72c has a button 7208 labeled “EXECUTE”.
With respect to the local setting window 72c, when the user gives an operation instruction to select the details of the various parameters and gives an operation instruction to the button 7208 labeled “EXECUTE”, the input control unit 115 receives the selection of the details of the respective parameters and the execution instruction of the local settings from the user (
Next, the control system 10 executes the processing of the local settings.
Specifically, the robot control apparatus 12 and the image processing apparatus 13 execute the processing of the local settings based on the execution instructions from the control unit 111 at step S326.
First, when receiving the execution instruction, the robot control apparatus 12 acquires the status of the robot 2a (e.g. whether or not the motor of the drive unit 280 is ON or the like) from the robot 2a. Then, the robot control apparatus 12 issues a movement instruction to the robot 2a so that the calibration plate may enter the field of view of the mobile camera 31 and the mobile camera 31 may move to the camera point taught as the start position of the local settings. In this regard, the robot 2a returns information on the movement of the robot arm 20 (values of the position sensor) or the like to the robot control apparatus 12 each time. Then, the robot control apparatus 12 issues an execution instruction of the image processing sequence to the image processing apparatus 13. The image processing apparatus 13 receives the execution instruction and executes the image processing sequence for detection of the calibration plate using the mobile camera 31 (imaging unit 3). As the image processing sequence, the image processing sequence received by the above described local setting window 72b is executed. The image processing apparatus 13 executes the image processing sequence, performs image processing based on the captured image (image data) obtained by imaging of the calibration plate using the mobile camera 31, and detects the relative position and attitude of the calibration plate with respect to the mobile camera 31. Then, when the image processing ends, the image processing apparatus 13 transmits an execution result of the image processing sequence (the position and attitude of the calibration plate) to the robot control apparatus 12. Then, the robot control apparatus 12 calculates the local coordinate system based on the acquired position and attitude of the calibration plate and the robot coordinates (base coordinates) of the mobile camera 31 at imaging. Then, the robot control apparatus 12 transmits a setting result (local setting result) of the local coordinate system to the computer 11. Note that, as described above, the specific methods for the execution details of the local settings etc. are not particularly limited. The processing (program) is stored with respect to each of various settings in the storage unit 113, and the control unit 111 executes the processing (program) according to the selected settings.
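Although the text leaves the specific computation open, one common way to obtain the local coordinate system from the detected plate pose is to compose the robot-base pose of the mobile camera 31 at imaging with the plate pose expressed in the camera frame. The sketch below illustrates this with 4x4 homogeneous transforms; the numerical values and function names are assumptions used for illustration only.

```python
import numpy as np

# Hedged sketch: one way to obtain a local coordinate system from (a) the pose of the
# calibration plate relative to the mobile camera 31 and (b) the robot (base) pose of the
# camera at imaging, by composing 4x4 homogeneous transforms. The computation actually used
# by the robot control apparatus 12 is not specified in the text.

def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def local_coordinate_system(base_T_camera: np.ndarray, camera_T_plate: np.ndarray) -> np.ndarray:
    """Pose of the calibration plate (the local plane) expressed in robot base coordinates."""
    return base_T_camera @ camera_T_plate

if __name__ == "__main__":
    base_T_camera = pose_to_matrix(np.eye(3), np.array([0.40, 0.00, 0.30]))   # assumed camera pose
    camera_T_plate = pose_to_matrix(np.eye(3), np.array([0.00, 0.00, 0.25]))  # detected plate pose
    print(local_coordinate_system(base_T_camera, camera_T_plate))
```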
Next, the control system 10 reflects the local setting result.
Specifically, first, when the communication unit 116 of the computer 11 receives the local setting result (
When the input control unit 115 receives the selection of reflecting the local setting result by the user (
On the other hand, when the user makes the selection not to reflect the result of the local settings, though not shown in
In the above explained local settings, as described above, in [1A] various settings in local settings, the display control unit 114 outputs the group of windows for local settings of dialogically displaying the plurality of (five in the embodiment) local setting windows 72 with the user. Then, the user gives instructions of various settings to the control system 10 using the plurality of local setting windows 72. Thereby, the user may dialogically select the setting details (information) along a predetermined sequence, and thereby, various settings in the local settings may be easily and readily completed without complex operations. Accordingly, time and effort of programming of various settings as in related art may be saved. Further, the setting details necessary in local settings are displayed, and thereby, even a beginner may reduce insufficient settings of the setting details necessary for local settings, and occurrence of e.g. an error in the execution of the local settings may be reduced.
Note that, as described above, the group of windows for local settings has the five local setting windows 72, however, the number of local setting windows 72 is not limited to that. Another local setting window may be further added or one of the five local setting windows 72 may be omitted. Further, the sequence of display of the five local setting windows 72 is not limited to the above described sequence, but is arbitrary. Note that it is preferable that the display contents of the local setting window 72 to be subsequently displayed change according to the selected details of the local setting window 72 previously displayed. That is, it is preferable that the display contents of the local setting window 72 to be subsequently displayed are limited contents according to the selected details of the previously displayed local setting window 72. Therefore, it is preferable to set so that the local setting window 72b may be displayed after the above described local setting window 72a. Further, the above described five local setting windows 72 are displayed in the above explained order, and thereby, the user may particularly easily grasp the setting details and the convenience for the user may be improved.
Next, the tool settings and the display processing in the tool settings in the above described job program will be explained.
As below, the tool settings (step S121) in the creation of the above described job program will be explained with reference to the flowchart shown in
First, the control system 10 executes various settings in the tool settings based on operation instructions by the user.
Specifically, first, when the user gives an operation instruction to the icon 701 for tool settings of the main window 50 shown in
As shown in
Further, the tool setting window 71a has a button 7101 labeled “Cancel”, a button 7102 labeled “Back”, a button 7103 labeled “Next”, a button 7104 labeled “Teach” (teach button), and a button 7105 labeled “Finish” like the above described local setting window 72a.
With respect to the tool setting window 71a, when the user gives an operation instruction to select a desired mode and gives an operation instruction to the button 7103, the input control unit 115 receives the selection of the tool setting mode (
The second tool setting window is a window for selection (setting) of a save number (tool number) for saving the tool setting result (not shown). Note that the second tool setting window is in nearly the same display form as that of the first tool setting window 71a except that the selection details (setting details) are different.
For the display for selection of the save number in the second tool setting window, e.g. a listbox or the like may be used. The second tool setting window has a configuration of receiving the selection of the save number, and thereby, input errors by the user may be prevented. Note that what to select may be a save name in place of the save number. In addition, the second tool setting window also has the same buttons (not shown) as the buttons 7101 to 7105 of the first tool setting window 71a.
When the user gives an operation instruction to select a desired save number to the second tool setting window, and then, gives an operation instruction to the button labeled “Next” (the button corresponding to the button 7103), the input control unit 115 receives the selection of the save number of the result of the tool settings (
As shown in
Here, the display contents of the third tool setting window 71b change according to the type (selection) of the imaging unit 3 of the first tool setting window 71a. Specifically, the display contents of the dropdown list 712 change according to the type (selection) of the imaging unit 3 of the first tool setting window 71a. For example, at step S121 in the above described generation of the job program, tool settings are performed using the fixed camera 32 that has not been calibrated. Therefore, when the input control unit 115 receives the selection of the third radio button 711 from the top in the drawing of the first tool setting window 71a, the display control unit 114 allows the display device 41 to display the tool setting window 71b having the dropdown list 712 with the display contents relating to the fixed camera 32.
In the above described manner, the limited details according to the selection in the previously displayed tool setting window 71a are displayed on the subsequently displayed tool setting window 71b, and selection errors by the user may be reduced.
With respect to the third tool setting window 71b, when the user gives an operation instruction to select the details of the respective vision components and gives an operation instruction to the button 7103 labeled “Next”, the input control unit 115 receives the selection of the details of the respective vision components (
The fourth tool setting window is a window for receiving teaching of the camera point at which the tool settings are started (not shown). Note that the fourth tool setting window is in nearly the same display form as that of the first tool setting window 71a except that the selection details (setting details) are different. Further, the fourth tool setting window also has the same buttons as the buttons 7101 to 7105 of the first tool setting window 71a.
With respect to the fourth tool setting window (not shown), when the user gives an operation instruction to the teach button (the ungrayed-out button corresponding to the button 7104), the input control unit 115 receives the operation instruction (
Using the window for teaching, the user gives an instruction (operation instruction) to position e.g. the marker (target) attached to the work 93 grasped by the hand 270 as the tool close to the center of the captured image. When the input control unit 115 receives the operation instruction of the user, the control unit 111 issues a movement instruction to move the hand 270 to the robot control apparatus 12 based on the instruction, and issues an imaging instruction to image the marker using the imaging unit 3 to the image processing apparatus 13 (
Then, when the user gives an operation instruction to the button labeled “Next” (corresponding to the button 7103) in the above described fourth tool setting window (not shown), the input control unit 115 receives the selection by the user (
As shown in
Further, the tool setting window 71c has buttons 7101 to 7105 like the first tool setting window 71a. Furthermore, the tool setting window 71c has a button 7106 labeled “EXECUTE”.
With respect to the tool setting window 71c, when the user gives an operation instruction to select the details of the various parameters and gives an operation instruction to the button 7106 labeled “EXECUTE”, the input control unit 115 receives the selection of the details of the respective parameters and the execution instruction of the tool settings from the user (
Next, the control system 10 executes the processing of the tool settings.
Specifically, the robot control apparatus 12 and the image processing apparatus 13 execute the processing of the tool settings based on the execution instructions from the control unit 111 at step S425.
First, when receiving the execution instruction, the robot control apparatus 12 acquires the status of the robot 2a from the robot 2a. Then, the robot control apparatus 12 issues a movement instruction to the robot 2a so that the marker attached to the work 93 may be imaged by the fixed camera 32 for the tool settings. Here, for example, the apparatus issues the movement instruction so that the marker is located at the center of the captured image. In this regard, the robot 2a returns information on the movement of the robot arm 20 (values of the position sensor) or the like to the robot control apparatus 12 each time. Then, the robot control apparatus 12 issues an execution instruction of the image processing sequence to the image processing apparatus 13. The image processing apparatus 13 receives the execution instruction, detects the marker using the fixed camera 32, and executes the image processing sequence. Here, the image processing sequence received by the above described tool setting window 71b is executed. The image processing apparatus 13 executes the image processing sequence, and performs image processing based on the captured image (image data) obtained by imaging of the marker using the fixed camera 32. Then, the robot control apparatus 12 issues a movement instruction to the robot 2a so that the axis coordinates may be rotated with, for example, the center of the captured image as the center of rotation. Then, the image processing apparatus 13 receives the execution instruction, detects the marker using the fixed camera 32, and executes the image processing sequence. Note that the robot control apparatus 12 may perform e.g. an operation of rotating the marker with respect to the axis coordinates or further rotate the axis coordinates in addition to the above described processing, for example. Then, the image processing apparatus 13 transmits an execution result of the image processing sequence (the detection result of the marker) to the robot control apparatus 12. Then, the robot control apparatus 12 calculates the offset based on the acquired detection result of the marker, the robot coordinates at imaging, etc. Then, the robot control apparatus 12 transmits a result of the tool settings to the computer 11. Note that, as described above, the specific methods for the execution details of the tool settings etc. are not particularly limited. The processing (program) is stored with respect to each of various settings in the storage unit 113, and the control unit 111 executes the processing (program) according to the selected settings.
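The text does not fix the mathematics of the offset calculation, but one standard way to estimate the offset of a marker from the distal end axis coordinates is to use several flange poses at which the marker occupies the same position in robot coordinates: with poses (R_i, p_i) and offset t, the quantity p_i + R_i t is constant, which gives a linear least-squares problem. The sketch below only illustrates that idea; all names and values are assumptions, not the method of the embodiment.

```python
import numpy as np

# Hedged sketch: estimating a tool offset t (position of the marker in the distal end axis
# frame) from flange poses (R_i, p_i) at which the marker sits at one fixed point m in robot
# coordinates, i.e. p_i + R_i @ t = m for every pose. The offset calculation actually used by
# the robot control apparatus 12 is not specified in the text; all names here are assumptions.

def rot_z(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def estimate_tool_offset(rotations, positions) -> np.ndarray:
    """Solve [R_i, -I] @ [t; m] = -p_i for t and m in the least-squares sense and return t."""
    rows = [np.hstack([R, -np.eye(3)]) for R in rotations]
    rhs = [-p for p in positions]
    solution, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return solution[:3]

if __name__ == "__main__":
    true_offset = np.array([0.05, 0.02, 0.10])            # "unknown" offset used for the demo
    marker = np.array([0.50, 0.10, 0.00])                 # fixed marker position (robot coords)
    rotations = [np.eye(3), rot_z(0.6), rot_x(0.6)]       # flange orientations during teaching
    positions = [marker - R @ true_offset for R in rotations]
    print(estimate_tool_offset(rotations, positions))     # approximately [0.05, 0.02, 0.10]
```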
Next, the control system 10 reflects the tool setting result and executes the settings.
Specifically, first, when the communication unit 116 of the computer 11 receives the tool setting result (
If the input control unit 115 receives the selection of reflecting the tool setting result by the user (
On the other hand, when the user makes the selection not to reflect the tool setting result, though not shown in
In the above explained tool settings, as described above, in [1B] various settings in tool settings, the display control unit 114 outputs the group of windows for tool settings of dialogically displaying the plurality of (five in the embodiment) tool setting windows 71 with the user. Then, the user gives instructions of various settings to the control system 10 using the plurality of tool setting windows 71. Thereby, the user may dialogically select the setting details (information) along a predetermined sequence, and thereby, various settings in the tool settings may be easily and readily completed without complex operations. Accordingly, time and effort of programming of various settings as in related art may be saved. Further, the setting details necessary for tool settings are displayed, and thereby, even a beginner may reduce insufficient settings of the setting details necessary for tool settings, and occurrence of e.g. an error in the execution of the tool settings may be reduced.
Note that, as described above, the group of windows for tool settings has the five tool setting windows 71, however, the number of tool setting windows 71 is not limited to that. Another tool setting window may be further added or one of the five tool setting windows 71 may be omitted. Further, the sequence in which the five tool setting windows 71 are displayed is not limited to the above described sequence, but is arbitrary. Note that it is preferable that the display contents of the tool setting window 71 to be subsequently displayed change according to the selected details of the tool setting window 71 previously displayed. That is, it is preferable that the display contents of the tool setting window 71 to be subsequently displayed are limited contents according to the selected details of the previously displayed tool setting window 71. Therefore, it is preferable to set so that the tool setting window 71b may be displayed after the above described tool setting window 71a. Further, the above described five tool setting windows 71 are displayed in the above explained order, and thereby, the user may particularly easily grasp the setting details and the convenience for the user may be improved.
Next, the calibration in the above described job program and the display processing in the calibration will be explained.
As below, the calibration (steps S122, S127) in the above described creation of the job program will be explained with reference to the flowcharts shown in
First, the control system 10 executes various settings in a calibration, i.e., creation of a calibration based on the operation instructions by the user.
Specifically, first, when the user gives an operation instruction to the icon 703 for calibration creation of the main window 50 shown in
As shown in
In the case where the calibration that has been already set is saved, the settings of the calibration of the copy source may be copied using the dropdown list 7313. Thereby, the setting details of the calibration of the copy source are displayed in the plurality of calibration creation windows 73 to be displayed after the calibration creation window 73a. Therefore, in the case where the user desires to create a new calibration by slightly altering the various details of the calibration that has been already set, the user may easily perform alteration by designating the calibration of the copy source.
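Copying a calibration in this way amounts to using the saved settings of the copy source as the initial values of the new calibration and altering only what differs. The sketch below illustrates that behaviour; the dictionary layout and the calibration names are assumptions made only for illustration.

```python
import copy

# Sketch of the copy-source behaviour of the dropdown list 7313: the settings of an already
# saved calibration become the initial values shown in the subsequent calibration creation
# windows 73, and the user only alters what differs. The data layout is an assumption.

SAVED_CALIBRATIONS = {
    "Cal_FixedUpward": {
        "robot": "robot 2a",
        "camera": "fixed camera 32",
        "attachment": "Fixed upward",
        "target_sequence": "detect marker (fixed)",
    },
}

def new_calibration_from_copy_source(name: str, copy_source: str, **overrides) -> dict:
    """Start a new calibration from the copy source's settings, overriding only what changes."""
    settings = copy.deepcopy(SAVED_CALIBRATIONS[copy_source])
    settings.update(overrides)
    settings["name"] = name
    return settings

if __name__ == "__main__":
    print(new_calibration_from_copy_source("Cal_FixedUpward_2", "Cal_FixedUpward",
                                            target_sequence="detect calibration plate (fixed)"))
```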
Further, the calibration creation window 73a has a button 7301 labeled “Cancel”, a button 7302 labeled “Back”, a button 7303 labeled “Next”, a button 7304 labeled “Finish”.
With respect to the calibration creation window 73a, when the user gives an operation instruction to input of the calibration name or the like and gives an operation instruction to the button 7303, the input control unit 115 receives the input of the calibration name or the like (
As shown in
Further, the calibration creation window 73b has a dropdown list 7322 for selection of one robot 2 from the plurality of kinds of robots 2, and a group 7323 (area) that receives selection of an attachment location of the imaging unit 3. Here, in the specification, the attachment location of the imaging unit 3 includes a placement location in which the imaging unit 3 is placed and an imaging direction (orientation) of the imaging unit 3.
In the embodiment, the display content of the dropdown list 7322 is the robot 2a as the vertical articulated robot and the robot 2b as the horizontal articulated robot. The calibration creation window 73b is adapted to select one of these robots 2a, 2b. Further, in the embodiment, the group 7323 has four radio buttons 7324 and is adapted to receive one of the four attachment locations of the imaging unit 3 shown in
Here, the display details of the second calibration creation window 73b change according to the selected detail (type) of the robot 2. Specifically, the attachment locations of the group 7323 of the second calibration creation window 73b change according to the selected detail of the robot 2 (see
For example, at step S122 in the above described generation of the job program, a calibration between the robot coordinate system of the robot 2a as the vertical articulated robot and the image coordinate system of the fixed camera 32 is performed. Therefore, when the input control unit 115 receives the selection of the robot 2a, the display control unit 114 allows the display device 41 to display the group 7323 having the attachment locations relating to the robot 2a as shown in
In the above described manner, the display contents of the group 7323, i.e., the information including the placement locations of the imaging unit 3, are displayed to a limited extent depending on the type of the robot 2, and selection errors by the user may be reduced.
With respect to the second calibration creation window 73b, as described above, when the user gives an operation instruction to select the desired robot 2 and attachment location of the imaging unit 3 and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the selection of the robot 2 and the attachment location of the imaging unit 3 (
Here, as below, the explanation will be made with a focus on the settings of the calibration at step S122. That is, the explanation will be made with a focus on the calibration with respect to the fixed camera 32. Accordingly, as below, the explanation will be made assuming that the robot 2a is selected and the upwardly fixed camera 32 (Fixed upward) is selected in the above described calibration creation window 73b in
As shown in
The display details of the dropdown list 733 change according to the type (selection) of the imaging unit 3 in the first calibration creation window 73a. For example, when the input control unit 115 receives the selection of the fixed camera 32 in the first calibration creation window 73a, the display control unit 114 allows the display device 41 to display the calibration creation window 73c having the dropdown list 733 with the display contents relating to the fixed camera 32. Note that, when the mobile camera 31 is selected in the first calibration creation window 73a, the details on the mobile camera 31 are displayed in the dropdown list 733. As described above, the display contents of the dropdown list 733 of the subsequently displayed calibration creation window 73c are limited contents corresponding to the selection in the previously displayed calibration creation window 73a, and thereby, selection errors by the user may be reduced.
With respect to the calibration creation window 73c, when the user gives an operation instruction to select the target sequence and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the selection of the target sequence (
As shown in
In the dropdown list 7342, the save number of the local settings that have been already set and save numbers of local settings to be set and saved (save numbers of local settings not yet set) are displayed. Further, the local wizard button 7341 is used for starting the group of windows for local settings having the above described plurality of local setting windows 72.
With respect to the calibration creation window 73d, for example, when the user does not give an operation instruction to the local wizard button 7341, but gives an operation instruction to select the save number of the local settings that have been set from the dropdown list 7342, and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the selection of the save number of the local settings (
On the other hand, with respect to the calibration creation window 73d, when the user selects the save number of the local settings that have not been set from the dropdown list 7342 and gives an operation instruction to the local wizard button 7341, the input control unit 115 receives the operation instruction by the user (
As shown in
The calibration creation window 73e has a radio button 7353 that receives the selection of the end effector, a dropdown list 7352 for selection of the save number of the tool settings, a tool wizard button 7351, and a checkbox 7354 that receives use of two reference points.
The radio button 7353 is displayed when the calibration with respect to the fixed camera 32 is performed. In the dropdown list 7352, the save number of the tool settings that have been already set and the save number of the tool settings to be set and saved (the save number of the tool settings that have not been set) are displayed. In the embodiment, not only the target provided on the hand 270 as the end effector but also e.g. a target attached to the work grasped by the hand 270 may be set as the reference point. In the dropdown list 7352, the save number of the tool settings that hold the offset of the reference point as the target at the distal end axis coordinates of the robot 2a is selected. The tool wizard button 7351 is used for tool settings by starting the group of windows for tool settings having the above described plurality of tool setting windows 71 in the case where the tool settings of the above described reference point as the target have not yet been set.
With respect to the calibration creation window 73e, for example, when the user does not give an operation instruction to the tool wizard button 7351, but gives an operation instruction to select the acquisition type of the reference point (the end effector and the save number of the tool settings) that has been set from the dropdown list 7352, and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the selection of the acquisition type of the reference point (the end effector and the save number of the tool settings) (
On the other hand, with respect to the calibration creation window 73e, when the user selects the save number of the tool settings that have not been set from the dropdown list 7352 and gives an operation instruction to the tool wizard button 7351, the input control unit 115 receives the operation instruction by the user (
As shown in
With respect to the calibration creation window 73f, when the user checks the checkbox 736 and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the selection to perform automated generation of the camera points (
The seventh calibration creation window is a window for selection as to whether or not to perform distortion correction of the lens of the imaging unit 3 and setting of an image processing sequence when the distortion correction is performed (not shown). Note that the seventh calibration creation window has nearly the same configuration as the first calibration creation window 73a except the selection details (setting details) are different.
With respect to the seventh calibration creation window (not shown), when the user selects whether or not to perform the distortion correction of the lens and, if performing the distortion correction, selects the image processing sequence, and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives whether or not to perform the distortion correction of the lens (
The eighth calibration creation window is a window for setting of an illumination at execution of the calibration (not shown). In the eighth calibration creation window, for example, a wait time until the illumination is turned on, an output bit that turns on the illumination, etc. may be set. Note that the eighth calibration creation window has nearly the same configuration as the first calibration creation window 73a except the selection details (setting details) are different.
With respect to the eighth calibration creation window (not shown), when the illumination is set and an operation instruction to the button 7303 labeled “Next” is given, the input control unit 115 receives the setting of the illumination (
As shown in
Further, the calibration creation window 73g has a checkbox 7372 for selection as to whether or not to use an approach point and a teach button 7373. The approach point refers to a location serving as a base point of movement to the camera point at execution of the calibration. When the approach point is used, the predetermined part of the robot 2 is constantly moved from the approach point to the camera point at execution of the calibration. Accordingly, stability of the position of the robot 2 at the camera point may be improved and, as a result, the accuracy of the calibration result may be further improved.
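In effect, the approach point inserts a fixed intermediate move before every camera point, so that each camera point is always reached from the same direction. The following sketch, with assumed names, shows the resulting motion sequences with and without an approach point.

```python
from typing import List, Optional

# Sketch (assumed names) of how an approach point changes the motion at calibration
# execution: with an approach point, every camera point is reached from the same base
# point, which is what improves the repeatability of the position at the camera point.

def motion_sequence(camera_points: List[str], approach_point: Optional[str]) -> List[str]:
    """Return the ordered move targets, inserting the approach point before each camera point."""
    moves: List[str] = []
    for point in camera_points:
        if approach_point is not None:
            moves.append(approach_point)   # always depart from the same base point
        moves.append(point)
    return moves

if __name__ == "__main__":
    points = ["camera point 1", "camera point 2", "camera point 3"]
    print(motion_sequence(points, "approach point"))  # with the checkbox 7372 checked
    print(motion_sequence(points, None))              # without an approach point (faster)
```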
With respect to the calibration creation window 73g, for example, when the user checks the checkbox 7372 and gives an operation instruction to the teach button 7373, the input control unit 115 receives the operation instruction by the user. Thereby, the control unit 111 determines to set the approach point (
As shown in
Using the window 730 and the window (not shown) that receives the operation instruction of the jog motion, when the user sets the approach point and gives an operation instruction to the OK button 7305 in the window 730, the input control unit 115 receives the setting of the approach point (
Then, when the user gives an operation instruction to the button 7303 labeled “Next” of the calibration creation window 73g shown in
Note that, with respect to the calibration creation window 73g, when the user does not check the checkbox 7372 and gives an operation instruction to the button 7303 labeled “Next”, the input control unit 115 receives the operation instruction by the user. Thereby, the control unit 111 determines not to make the setting of the approach point (
The tenth calibration creation window (not shown) is a window in which the details set in the first to ninth calibration creation windows are listed. Note that the tenth calibration creation window has nearly the same configuration as the first calibration creation window 73a except the selection details (setting details) are different. By visually recognizing the tenth calibration creation window, the user may confirm the setting details of the calibration at a glance.
Further, the tenth calibration creation window has buttons corresponding to the buttons 7301 to 7304 like the first calibration creation window 73a. Therefore, when the input control unit 115 receives an operation instruction to the button labeled “Back”, the control unit 111 makes resetting. Or, when the input control unit 115 receives an operation instruction to the button labeled “Cancel”, the control unit 111 cancels the set calibration. Or, when the input control unit 115 receives an operation instruction to the button labeled “Finish”, the control unit 111 allows the storage unit 113 to store the calibration set by the display processing using the first to tenth calibration creation windows.
Note that, when the input control unit 115 receives an operation instruction to the button labeled “Cancel” or the button labeled “Finish”, the control unit 111 ends the display processing of the group of windows for calibration creation and issues an instruction to erase the calibration creation window 73 from the display device 41 to the display control unit 114. Thereby, the display control unit 114 erases the calibration creation window 73 from the display device 41.
In the above described manner, the various settings (creation of calibration) of the calibration by the display processing of calibration are completed.
Next, the control system 10 executes teaching of a plurality of camera points.
As described above, when the input control unit 115 receives an operation instruction to the button labeled “Finish” by the user (
When the various settings of the calibration are completed, the property setting window 60, a teach button 6151, and a picture image display part 612 are displayed in the sub-window 61. Note that, in the picture image display part 612 shown in
The property setting window 60 has a view part 63 and a list 57 (property list).
The view part 63 is an area that displays the created calibration settings (calibration name) and the image processing sequence created by display processing to be described (image processing sequence name) together. In the embodiment, the display form of the view part 63 is a tree view in which the calibration settings and the image processing sequence settings are respectively hierarchically displayed. Thereby, the user may easily grasp the plurality of calibration settings and the plurality of image processing sequences at a glance. Accordingly, the desired calibration settings and image processing sequence are easily selected. Note that the display form of the view part 63 is not limited to that, but may be e.g. a list view in which the calibration settings and the image processing sequence settings are respectively displayed in parallel or the like.
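A tree view of this kind can be represented as nested groups in which each calibration setting is shown together with the image processing sequence it refers to. The sketch below, with assumed names and contents, only illustrates that hierarchical arrangement.

```python
# Sketch of the tree view of the view part 63: calibration settings and image processing
# sequences are shown hierarchically, each calibration correlated with the sequence it uses.
# The concrete entries below are illustrative assumptions.

TREE = {
    "Calibrations": {
        "Cal_FixedUpward": ["uses: detect marker (fixed)"],
        "Cal_MobileJ6":    ["uses: detect calibration plate (mobile)"],
    },
    "Image processing sequences": {
        "detect marker (fixed)": [],
        "detect calibration plate (mobile)": [],
    },
}

def print_tree(node, indent: int = 0) -> None:
    """Print nested dictionaries/lists as an indented tree."""
    items = node.items() if isinstance(node, dict) else ((leaf, None) for leaf in node)
    for name, child in items:
        print("  " * indent + str(name))
        if child:
            print_tree(child, indent + 1)

if __name__ == "__main__":
    print_tree(TREE)
```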
The list 57 is an area that displays various setting details of the calibration settings and the image processing sequence selected in the view part 63. The property list is adapted to receive operation instructions (input) by the user. Accordingly, the user may set (change) the specific setting details of the calibration settings using the property list.
With respect to the sub-window 61, when the user gives an operation instruction to the teach button 6151, the input control unit 115 receives the operation instruction (execution instruction of teaching) (
For example, when receiving the selection to perform automated generation of the camera point at the above described step S527, the computer 11 issues execution instructions to teach one camera point to the robot control apparatus 12 and the image processing apparatus 13. Further, in this case, the control unit 111 issues an output instruction to display a flowchart 660a (flowchart 660) and a jog panel 54 to the display control unit 114 (see
On the other hand, when receiving the selection not to perform automated generation of the camera point in the above described step S527, the computer 11 issues execution instructions to teach predetermined all camera points (nine camera points in the embodiment) to the robot control apparatus 12 and the image processing apparatus 13. Further, in this case, the control unit 111 issues an output instruction to display a flowchart 660b (flowchart 660) and the jog panel 54 to the display control unit 114 (see
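How the remaining camera points are generated from the single taught point is not specified in the text; purely as an illustration, the sketch below assumes a 3x3 grid of offsets around the taught point, which yields the nine camera points of the embodiment. All names and numbers are assumptions.

```python
import numpy as np

# Hedged sketch: when automated generation of the camera points is selected, only one camera
# point is taught and the remaining points are generated around it. The generation method of
# the embodiment is not specified; a 3x3 grid of in-plane offsets is assumed here.

def generate_camera_points(taught_point: np.ndarray, spacing: float = 0.02) -> list:
    """Return nine camera points laid out as a 3x3 grid centred on the taught point."""
    offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return [taught_point + np.array([dx * spacing, dy * spacing, 0.0]) for dx, dy in offsets]

if __name__ == "__main__":
    centre = np.array([0.40, 0.10, 0.25])  # assumed taught camera point (robot coordinates)
    for i, p in enumerate(generate_camera_points(centre), start=1):
        print(f"camera point {i}: {p}")
```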
Here, the flowcharts 660a, 660b respectively show the flows of the processing of teaching, and the top flow 661 shows the selected calibration settings (calibration name). Further, the second and subsequent flows 662 from the top in the drawings show teaching steps contained in the selected calibration settings.
For example, as shown in
When the teaching of all flows 662 of the flowchart 660a or flowchart 660b is completed, in other words, teaching of all reference points is completed, the control unit 111 issues an output command of display of a calibration execution button 6152 shown in
Then, when the user gives an operation instruction to the calibration execution button 6152, the input control unit 115 receives the operation instruction (execution instruction of the calibration) (
In the above described manner, teaching of the camera points is completed.
Next, the control system 10 executes the calibration.
Specifically, the robot control apparatus 12 and the image processing apparatus 13 execute the calibration based on the execution instructions from the control unit 111 at step S543.
First, when receiving the execution instruction, the robot control apparatus 12 acquires the status of the robot 2a from the robot 2a. Then, the robot control apparatus 12 issues a movement instruction to the robot 2a so that the target is positioned at the first camera point. In this regard, the robot 2a returns information on the movement of the robot arm 20 (values of the position sensor) or the like to the robot control apparatus 12 each time. Then, the robot control apparatus 12 issues an execution instruction of the image processing sequence to the image processing apparatus 13. The image processing apparatus 13 receives the execution instruction, detects the target (e.g. a marker) using the fixed camera 32 (imaging unit 3), and executes the image processing sequence. The image processing apparatus 13 executes the image processing sequence and performs image processing based on the captured image (image data) obtained by imaging of the target using the fixed camera 32. Then, when the image processing ends, the image processing apparatus 13 transmits an execution result of the image processing sequence (a detection result of the target) to the robot control apparatus 12. Thereby, the robot control apparatus 12 acquires the execution result of the image processing sequence at the first camera point. Then, the robot control apparatus 12 performs, on the remaining second to ninth camera points, the same series of processing up to the above described acquisition of the execution result of the image processing sequence at the first camera point. Then, the robot control apparatus 12 calculates a calibration result obtained by correlation of the image coordinate system of the fixed camera 32 (imaging unit 3) and the local coordinates (robot coordinates) of the robot 2a (robot 2) based on the execution results of the image processing sequences at the first to ninth camera points and the local coordinates (robot coordinates) of the target at the first to ninth camera points. Then, the robot control apparatus 12 transmits the calculated calibration result to the computer 11. Note that, as described above, the specific methods for the execution details of the calibration etc. are not particularly limited. The processing (program) is stored with respect to each of various settings in the storage unit 113, and the control unit 111 executes the processing (program) according to the selected settings.
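The text leaves the calibration model open; as a simple illustration of the correlation step, the sketch below fits a 2D affine mapping from image coordinates to robot (local) coordinates by least squares using nine point pairs. All names and numbers are assumptions, not the method of the embodiment.

```python
import numpy as np

# Hedged sketch: one simple way to correlate the image coordinate system with the robot
# (local) coordinates from the nine camera points is to fit a 2D affine transform by least
# squares. The calibration model of the robot control apparatus 12 is not specified.

def fit_affine(image_pts: np.ndarray, robot_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix A such that robot ~= A @ [u, v, 1]."""
    ones = np.ones((image_pts.shape[0], 1))
    G = np.hstack([image_pts, ones])              # N x 3 design matrix
    A, *_ = np.linalg.lstsq(G, robot_pts, rcond=None)
    return A.T                                    # 2 x 3

def image_to_robot(A: np.ndarray, uv: np.ndarray) -> np.ndarray:
    return A @ np.append(uv, 1.0)

if __name__ == "__main__":
    # Nine image points (pixels) and matching robot coordinates (metres), as assumptions.
    image_pts = np.array([[u, v] for v in (100, 300, 500) for u in (100, 300, 500)], float)
    robot_pts = image_pts * 0.0005 + np.array([0.30, -0.10])   # synthetic ground truth
    A = fit_affine(image_pts, robot_pts)
    print(image_to_robot(A, np.array([300.0, 300.0])))          # approximately [0.45, 0.05]
```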
In the above described manner, the execution of the calibration is completed.
Next, the control system 10 executes reflection and settings of the calibration result.
Specifically, first, when the communication unit 116 of the computer 11 receives the calibration result (
The window 68 has an area 681 that displays the previous calibration result, an area 682 that displays the current calibration result, an OK button 683, and a cancel button 684. The OK button 683 and the cancel button 684 are provided, and thereby, the user may select the OK button 683 when desiring reflection of the calibration result and select the cancel button 684 when not desiring the reflection. As described above, the window 68 is adapted for the user to select whether or not to reflect the calibration result. Further, the areas 681, 682 are provided, and thereby, the user may select whether or not to reflect the current calibration result in comparison with the details of the previous calibration result.
With respect to the window 68, when the input control unit 115 receives the selection to reflect the calibration result by the user, i.e., an operation instruction to the OK button 683 (
On the other hand, when the user makes the selection not to reflect the calibration result, though not shown in
As described above, in [1C] various settings in calibration, the display control unit 114 allows the display device 41 as “display unit” to display the calibration creation windows 73 as “guide windows for calibration” that guides input of information for calibration. Thereby, the user selects information (setting details) according to the details displayed in the calibration creation windows 73, and thereby, may easily and readily complete the settings of the calibration without complex operations. Accordingly, even a beginner may easily make settings of the calibration.
Further, as described above, the control system 10 of the embodiment has the input device 42 as "receiving unit" that receives input. Further, based on the input received by the input device 42, the display control unit 114 allows the display device 41 as "display unit" to sequentially display the calibration creation windows 73 as the plurality of "guide windows for calibration". In the embodiment, the display control unit 114 displays the group of windows for calibration creation for dialogically displaying the plurality of (ten in the embodiment) calibration creation windows 73 with the user. Thereby, the user may select information (setting details) in the dialogic form (wizard form) according to the sequentially displayed calibration creation windows 73 (wizard windows). As described above, the user may dialogically select the setting details along a predetermined sequence, and thereby, may easily and readily complete settings of the calibration without complex operations. Accordingly, input errors, insufficient input, etc. may be reduced. Further, time and effort of programming of various settings as in related art may be saved. Furthermore, the setting details necessary in calibration creation are displayed to a limited extent, and thereby, even a beginner may reduce insufficient settings of the setting details necessary for calibration creation. Accordingly, occurrence of e.g. an error in the execution of the calibration may be reduced.
Note that, as described above, the group of windows for calibration creation has the ten calibration creation windows 73, however, the number of calibration creation windows 73 is not limited to that. Another calibration creation window may be further added or one of the ten calibration creation windows 73 may be omitted. Further, the sequence of display of the ten calibration creation windows 73 is not limited to the above described sequence, but is arbitrary. Note that it is preferable that the display contents of the calibration creation window 73 to be subsequently displayed change according to the selected details of the previously displayed calibration creation window 73. That is, it is preferable that the display contents of the calibration creation window 73 to be subsequently displayed are limited contents according to the selected details of the calibration creation window 73 previously displayed. Especially, the above described ten calibration creation windows 73 are displayed in the above explained order, and thereby, the user may particularly easily grasp the setting details and the convenience for the user may be improved.
Here, as described above, the control system 10 is “control apparatus” that can control driving of the robot 2, the imaging unit 3 and the display device 41 as “display unit” based on the input of the input device 42 as “input unit”. Further, as described above, the control system 10 includes the display control unit 114 that allows the display device 41 to display the calibration creation window 73b as “input window” for input of the robot 2 as an object to be controlled and allows the display device 41 to display the group 7323 (area) as “imaging unit input part” that guides the input of the attachment position (placement position) of the imaging unit 3 corresponding to the input robot 2 (e.g. robot 2a), and the calibration edit part 1113 and the calibration execution part 1114 as “calibration control unit” that performs a calibration of correlating the coordinate system (robot coordinate system) of the robot 2 and the coordinate system (image coordinate system) of the imaging unit 3 based on the input attachment position of the imaging unit 3. That is, the above described respective processing (steps S51 to S54) of the calibration are mainly performed by the calibration edit part 1113 and the calibration execution part 1114 of the control unit 111.
According to the control system 10, the attachment position of the imaging unit 3 corresponding to the input (selected) robot 2 is displayed on the display device 41. That is, the attachment position of the imaging unit 3 not corresponding to the input robot 2a is undisplayed. For example, as described above, in the group 7323 of the calibration creation window 73b, only the attachment position of the imaging unit 3 corresponding to the robot 2 selected using the dropdown list 7322 is displayed, but the attachment position of the imaging unit 3 not corresponding thereto is undisplayed. Thereby, the user may easily make the selection of the attachment position of the imaging unit 3 corresponding to the input robot 2. As a result, various settings for calibration may be easily and appropriately made.
Here, “input” includes “selection”. Further, in the input of the robot 2, for example, the user may input the robot 2 using a keyboard or the like or select the robot 2.
Especially, as described above, the display control unit 114 can display the vertical articulated robot and the horizontal articulated robot on the calibration creation window 73b as “input window”, and the display form of the group 7323 (area) as “imaging unit input part” differs between the case where the robot 2a as an example of “vertical articulated robot” is input and the case where the robot 2b as an example of “horizontal articulated robot” is input. Thereby, the selection of the attachment position of the imaging unit 3 corresponding to the robot 2a and the selection of the attachment position of the imaging unit 3 corresponding to the robot 2b may be respectively easily made. According to the control system 10 (application software installed in the control system 10), calibrations with respect to the plurality of kinds of robots 2 may be performed using the single control system 10 (single application software) without preparing control systems (application software) respectively corresponding to the robot 2a and the robot 2b, and the convenience is excellent.
Especially, as described above, the display control unit 114 displays the local wizard button 7341 as "local setting call-up part" for calling up the local setting window 72 as "guide window for local settings" that guides input of the information for setting the local coordinate system different from the coordinate system of the robot 2 (robot coordinate system) in the calibration creation window 73d as "guide window for calibration". Thereby, the user may easily call up the local setting window 72 by giving an operation instruction to call up the local setting window 72 via the local wizard button 7341 (by clicking or touching). Accordingly, for example, time and effort to once cancel the settings being made in the calibration creation windows 73 in order to make the local settings, and then remake the settings of the calibration again from the start, may be saved. Therefore, the time and effort of the user may be significantly reduced.
Especially, in the embodiment, the display control unit 114 allows the display device 41 to sequentially display the plurality of local setting windows 72 based on the operation instructions to the local wizard button 7341 received by the input control unit 115. Thereby, the user may easily and readily make the local settings without complex operations by dialogically selecting information (setting details) according to the sequentially displayed local setting windows 72.
Further, as described above, the display control unit 114 displays the tool wizard button 7351 as "tool setting call-up part" for calling up the tool setting window 71 as "guide window for tool settings" for guiding input of the information for obtaining offset of the tool (e.g. work 93) attached to the robot 2 (information for tool settings for obtaining offset) in the calibration creation window 73e as "guide window for calibration". Thereby, the user may easily call up the tool setting window 71 by giving an operation instruction to call up the tool setting window 71 via the tool wizard button 7351. Accordingly, for example, time and effort to once cancel the settings being made in the calibration creation windows 73 in order to make the tool settings, and then remake the settings of the calibration again from the start, may be saved. Therefore, the time and effort of the user may be significantly reduced.
Especially, in the embodiment, the display control unit 114 allows the display device 41 to sequentially display the plurality of tool setting windows 71 based on the operation instructions to the tool wizard button 7351 received by the input control unit 115. Thereby, the user may easily and readily make the tool settings without complex operations by dialogically selecting information (setting details) according to the sequentially displayed tool setting windows 71.
Further, as described above, the display control unit 114 displays the check box 736 as “calibration point selection part” for selecting whether or not to perform automated generation of the camera points as “calibration points” used in calibration in the calibration creation window 73f as “guide window for calibration”. Thereby, the user may select whether or not to perform automated generation of the camera points easily via the checkbox 736 according to the purpose of the user. For example, the user may reduce time and effort to make settings of the plurality of camera points by automated generation of the camera points. On the other hand, for example, in the case where the drive range of the robot arm 20 is limited, it is effective not to perform automated generation of the camera points. Without the automated generation of the camera points, the user may make settings of the respective camera points in a region in which the robot arm 20 does not interfere with peripherals. Therefore, the calibration creation window 73f has the checkbox 736, and thereby, settings according to the purpose of the user may be easily made.
Further, as described above, the display control unit 114 displays the check box 7372 as “approach point selection part” for selecting whether or not to perform automated generation of the approach point as the base point of the movement to the camera points as “calibration points” of the predetermined part (e.g. tool center point P) of the robot 2 in the calibration creation window 73g as “guide window for calibration”. Thereby, the user may select whether or not to perform automated generation of the approach point easily and according to the purpose of the user by giving the operation instruction via the checkbox 7372. For example, the user may improve the stability of the position of the robot 2 at the camera points because the robot 2 constantly moves from the approach point to the camera points by the automated generation of the approach point. As a result, the accuracy of the calibration result may be further improved. On the other hand, in the case where the automated generation of the approach point is not performed, the execution of the calibration may be performed more quickly. Therefore, the calibration creation window 73g has the checkbox 7372, and thereby, settings according to the purpose of the user may be easily made.
Especially, as described above, the processing unit 110 has the display control unit 114 that controls driving of the display device 41 as “display unit” and the display control unit 114 calls up (displays) the plurality of image processing sequences edited by the image processing sequence edit part 1115 in the calibration creation window 73c as “guide window for calibration” that guides input of the setting details on the calibration. In the embodiment, the display control unit 114 displays the edited (existing) plurality of image processing sequences in the dropdown list 733 in the calibration creation window 73c. Thereby, the user may select a desired image processing sequence from the plurality of image processing sequences via the dropdown list 733. That is, in the settings of the calibration using the calibration creation windows 73, the user may call up the edited existing image processing sequences. Accordingly, for example, when settings of a plurality of calibrations are desired, time and effort to create the image processing sequences in the settings of the respective calibrations may be omitted, and thereby, creation of the image processing sequence at each time when the calibration is set may be omitted. As a result, settings of the calibration may be simple and the user-friendliness may be significantly improved.
As described above, in the embodiment, the desired image processing sequence may be selected (changed) also in the list of the property setting window 60 shown in
As described above, the control system 10 is "control apparatus" that can control driving of the robot 2 and the imaging unit 3 and includes the processing unit 110 having the image processing sequence edit part 1115 that edits the setting details on the image processing sequence containing image processing of the captured image (image data) imaged by the imaging unit 3 and the calibration edit part 1113 that edits the setting details on the calibration of correlating the coordinate system of the robot 2 (robot coordinate system) and the coordinate system of the imaging unit 3 (image coordinate system), and the processing unit 110 can call up the image processing sequence edited by the image processing sequence edit part 1115 in (at) editing (calibration creation) of the setting details on the calibration by the calibration edit part 1113. According to the control system 10, when a plurality of calibrations are desired, the existing (edited) image processing sequences may be called up in the settings of the respective calibrations, and thereby, time and effort to create the image processing sequence at each time when the calibration is set may be omitted. Accordingly, time and labor taken for the settings of the calibration may be reduced. Further, the edited image processing sequence may be called up also in processing using commands, which will be described later. Note that, in the specification, "call up image processing sequence" includes displaying the image processing sequence on the display unit and making the image processing sequence executable in the control program.
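As an illustration only, the following minimal Python sketch shows the idea of calling up an already edited image processing sequence by name when a calibration is created; the class and function names are assumptions made for the sketch and are not identifiers used in the embodiment.

class SequenceRegistry:
    # Holds image processing sequences that have already been edited.
    def __init__(self):
        self._sequences = {}  # name -> setting details (camera, objects, ...)

    def register(self, name, details):
        # Called when editing of an image processing sequence is finished.
        self._sequences[name] = details

    def names(self):
        # What a dropdown list in a calibration creation window could show.
        return sorted(self._sequences)

    def call_up(self, name):
        # Makes the existing sequence available to the calibration being created.
        return self._sequences[name]

registry = SequenceRegistry()
registry.register("TestVisSeq", {"camera": "mobile", "objects": ["Geometric"]})

# A calibration setting references the existing sequence instead of re-creating it.
calibration = {"name": "Calib1", "vision_sequence": registry.call_up("TestVisSeq")}
print(registry.names())  # ['TestVisSeq']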
Further, the processing unit 110 has the display control unit 114 that controls driving of the display device 41 as “display unit” and the display control unit 114 allows the display device 41 to display the calibration settings in which the setting details on the calibration have been edited and the image processing sequence settings in which the setting details on the image processing sequence have been edited in the property setting window 60 as the same “window” (see the view part 63 in
Especially, the display control unit 114 displays the calibration settings and the image processing sequence settings in the tree view (see the view part 63 in
Furthermore, the display control unit 114 displays the calibration settings and the image processing sequence settings correlating with each other (see the view part 63 in
Next, the creation of the image processing sequence (vision processing sequence) will be explained.
Hereinafter, for convenience of explanation, the upside in
In the creation of the image processing sequence, input of items of the image processing sequence, addition of an image processing object to the image processing sequence, execution of the image processing sequence, and reflection of the execution result (detection result) of the image processing sequence are sequentially executed. The input of items of the image processing sequence and the addition of the image processing object to the image processing sequence are performed, and thereby, various settings of the image processing sequence are made and image detection (image processing) for detection of the target such as a marker is realized. As below, the creation of the image processing sequence will be explained.
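As a rough sketch of these four stages only, the following Python code models item input, object addition, execution, and reflection of the result as one function; all names are illustrative assumptions, and the actual processing is performed by the units described above.

def create_image_processing_sequence(name, camera, objects, execute):
    sequence = {"name": name, "camera": camera, "objects": []}   # input of items
    for obj in objects:                                          # addition of image processing objects
        sequence["objects"].append(obj)
    result = execute(sequence)                                   # execution of the sequence
    sequence["last_result"] = result                             # reflection of the execution result
    return sequence

# Hypothetical executor standing in for the image processing apparatus.
seq = create_image_processing_sequence(
    "TestVisSeq", "mobile camera", ["Geometric"],
    execute=lambda s: {"found": True, "image_xy": (312.4, 188.7)})
print(seq["last_result"])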
For the creation of the image processing sequence, the items of the image processing sequence are input.
The user makes an operation instruction to the icon 601 in the main window 50 shown in
As shown in
Then, the user makes an operation instruction to the icon 671 in the sub-window 61 shown in
Then, in the sub-window 61, the user inputs (makes an operation instruction for) the necessary items among the respective items of e.g. "sequence name", "camera used in sequence", "sequence of copy source", etc. using the dialogue box 663. "Sequence of copy source" is input (designated) when an existing image processing sequence is copied.
When the input control unit 115 of the computer 11 receives the operation instruction by the user, the image processing sequence edit part 1115 starts creation of the image processing sequence.
At this stage, the image processing sequence has been partially created. When the image processing sequence of the copy source is designated, the settings of the image processing sequence of the copy source are copied. Accordingly, the setting details of the image processing sequence of the copy source are displayed in the sub-window 61, a guide window 65, etc. Therefore, when desiring creation of an image processing sequence in which various details of an image processing sequence that has already been set are slightly changed, the user may easily create the sequence by designating the image processing sequence of the copy source.
Here, a flowchart is displayed in the flowchart display part 62. The currently selected sequence is displayed in the display part 621 on the top (uppermost part) of the flowchart. Further, in the second display part 622 (see
In the view part 63, the image processing sequence settings in which the setting details on the image processing sequence have been edited and the calibration settings in which the setting details on the calibration have been edited are respectively displayed in tree views. Thereby, the user may grasp the types and the number of the existing calibration settings and the types and the number of the existing image processing sequence settings at a glance.
Further, in the view part 63, the calibration settings and the image processing sequence settings correlating with each other are displayed. In the embodiment, the image processing sequence settings are displayed on the upside of the view part 63 and the calibration settings are displayed on the downside of the image processing sequence settings.
All sequences that have been edited (set) are displayed in the tree of the image processing sequence settings, and all calibrations that have been edited (set) are displayed in the tree of the calibration settings.
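The tree structure may be pictured, purely as an assumption-laden sketch, as follows: the image processing sequence settings form the upper branch and the calibration settings form the lower branch, each calibration referring to the sequence it uses. The branch and entry names are placeholders.

view_tree = {
    "Image processing sequences": ["Seq1", "Seq2", "Seq3"],
    "Calibrations": [
        {"name": "MobileCamCalib", "uses_sequence": "Seq1"},
        {"name": "FixedCamCalib", "uses_sequence": "Seq2"},
    ],
}

def print_tree(tree):
    # Plain-text stand-in for the tree view of the view part.
    for branch, leaves in tree.items():
        print(branch)
        for leaf in leaves:
            print("  -", leaf if isinstance(leaf, str)
                  else f'{leaf["name"]} (sequence: {leaf["uses_sequence"]})')

print_tree(view_tree)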
When the input control unit 115 receives an operation instruction to “Sequence” of the flowchart or “Sequence” of the tree with respect to the image processing sequence (creation of the image processing sequence) by the user, the display control unit 114 allows the display device 41 to display the list 57 (sequence window). In the list 57, e.g. properties with respect to the image processing sequence, execution results of the image processing sequence, etc. are displayed. Further, settings of the properties with respect to the image processing sequence etc. can be performed using the list 57.
When the input control unit 115 receives an operation instruction to “Object” of the flowchart or “Object” of the tree with respect to the image processing sequence, the display control unit 114 allows the display device 41 to display the list 57 (object window). In the list 57, e.g. properties with respect to the image processing sequence, image processing objects or execution results of the image processing objects when the image processing sequence is executed, etc. are displayed. Further, settings of the properties with respect to the image processing objects etc. can be performed using the list 57.
When the input control unit 115 receives an operation instruction to the tree with respect to the calibration, the display control unit 114 allows the display device 41 to display the list 57 (calibration window). In the list 57, e.g. settings of the calibration and execution results of the calibration, etc. are displayed. Further, settings of the properties of the calibration etc. can be performed using the list 57.
In order to designate image processing in the image processing sequence, a predetermined image processing object is added to the image processing sequence.
In this case, there are two methods of adding an image processing object: a method using the toolbar 615 (first method) and a method using guide windows that guide input of information, i.e., a method using an image processing selection wizard (step wizard) (second method).
First, the first method will be explained.
The first method is the method of adding the image processing object using the toolbar 615.
In the case using the first method, the user does not give an operation instruction to the icon 672 in the sub-window shown in
In the first method, first, the user selects a type of predetermined image processing from types of image processing (image detection) (types of operation) from a menu of the image processing guide (the plurality of toolbar items of the toolbar 615), and then, selects a predetermined image processing object (a function relating to image processing) from the plurality of image processing objects in the selected type of image processing.
Specifically, first, the user performs an operation of selecting a predetermined toolbar item from the plurality of toolbar items of the toolbar 615 in the sub-window 61. Then, when the input control unit 115 receives an operation instruction to the predetermined toolbar item (selection of the type of image processing) by the user (step S621), the image processing sequence edit part 1115 gives an output command of a list 6540 according to the selected type (step S622). In response to the output command, the display control unit 114 allows the display device 41 to display the corresponding lists 6540 shown in
When “Detection” is selected, the list 6540 shown in
In the respective lists 6540, items including correlated icons 6541 and character strings 6542 are respectively displayed. As an example, when “Detection” is selected, e.g. a character string 6542 of “Geometric” and an icon 6541 of a predetermined figure are correlated and displayed in the list 6540.
The user selects (designates) an item of the correlated predetermined icon 6541 and character string 6542 using the necessary list 6540 of the respective lists 6540, and thereby, performs an operation of adding an image processing object corresponding to the item. Then, when the input control unit 115 receives the operation instruction by the user (selection of the image processing object) (step S623), the image processing sequence edit part 1115 adds the designated image processing object to the current image processing sequence. In the above described manner, the settings of the image processing sequence are completed. Note that, at the step of adding the image processing object, e.g. model registration (teaching) of the marker 680 (mark) or the like is performed.
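The first method can be summarized by the following minimal sketch: a type of image processing is chosen from a toolbar-like menu, then an image processing object offered for that type is added to the current sequence. The menu contents and function names are illustrative assumptions only, not the actual menu of the toolbar 615.

TOOLBAR_MENU = {
    "Detection": ["Geometric", "Correlation", "Blob"],
    "Inspection": ["DistanceCheck"],
}

def add_image_processing_object(sequence, processing_type, object_name):
    # Only objects listed for the selected type of image processing can be added.
    if object_name not in TOOLBAR_MENU.get(processing_type, []):
        raise ValueError(f"{object_name} is not offered for {processing_type}")
    sequence["objects"].append({"type": processing_type,
                                "object": object_name,
                                "taught": False})  # teaching (model registration) still needed
    return sequence

seq = {"name": "TestVisSeq", "objects": []}
add_image_processing_object(seq, "Detection", "Geometric")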
Next, the second method will be explained.
The second method is the method of adding an image processing object using the image processing selection wizard (step wizard). The types of image processing (image detection) include e.g. detection, count, inspection, read, image processing (image), all tools, etc.
In the case of using the second method, the user gives an operation instruction to the icon 672 in the sub-window 61. Then, when the input control unit 115 receives the operation instruction to the icon 672 by the user, the image processing sequence edit part 1115 executes the second method. That is, the image processing sequence edit part 1115 determines whether or not to use the image processing selection wizard (step S613) and, if determining to use the image processing selection wizard, executes the second method.
In the second method, first, the image processing sequence edit part 1115 gives an output command of the first guide window (step S614), and the display control unit 114 displays the guide window 65 (window) shown in
In the lower part of the guide window 65, a button 6501 (icon) labeled “Cancel”, a button 6502 (icon) labeled “Back”, a button 6503 (icon) labeled “Next (N)>”, and a button 6504 (icon) labeled “Finish (F)” are displayed. In the embodiment, of the buttons 6501, 6502, 6503, 6504, operation instructions (selection) can be made to the buttons 6501, 6503.
The user selects a type of image processing of the plurality of types of image processing, and the case where “Detection” is selected will be representatively explained as below.
When the input control unit 115 receives an operation instruction to select an item 6511 of “‘Detection’, ‘Set Coordinate Values of Parts’” (selection of the type of image processing) by the user (step S615), the display control unit 114 changes the color of the part of the selected item 6511 to a color different from those of the other parts of the box 651.
Then, when the input control unit 115 receives an operation instruction to the button 6503 labeled “Next (N)>” by the user, the image processing sequence edit part 1115 gives an output command of the second guide window (step S616), and the display control unit 114 displays the guide window 65 shown in
Further, in the embodiment, of the buttons 6501, 6502, 6503, 6504, operation instructions (selection) can be made to the buttons 6501, 6502, 6503.
The user selects a predetermined type of part detection tool of the plurality of types of part detection tools, and the case where “Geometric” is selected will be representatively explained as below.
When the input control unit 115 receives an operation instruction to select an item 6521 of “‘Geometric’, ‘Detect Part using Geometric Model of Edge Base’” (selection of the image processing object) by the user (step S617), the display control unit 114 changes the color of the part of the selected item 6521 to a color different from those of the other parts of the box 652.
Then, when the input control unit 115 receives an operation instruction to the button 6503 labeled “Next (N)>” by the user, the image processing sequence edit part 1115 gives an output command of the third guide window (step S618), and the display control unit 114 displays the guide window 65 shown in
In this manner, the display control unit 114 allows the display device 41 to sequentially display the three guide windows 65 dialogically with the user based on the input received by the input control unit 115. Thereby, the user selects information (items) in the dialogical form according to the sequentially displayed guide windows 65, and may simply, readily, and quickly perform the job of adding the image processing objects without complex operations.
Further, in the embodiment, of the buttons 6501, 6502, 6503, 6504, operation instructions (selection) can be made to the buttons 6501, 6502, 6504.
The user enters a predetermined name in an input dialogue of “Input Name of New Step” and performs an operation of designating an insertion location.
Then, when the input control unit 115 receives an operation instruction to the button 6504 labeled "Finish (F)" (an addition completion instruction of the image processing object) by the user (step S619), the image processing sequence edit part 1115 adds the designated image processing object to the current image processing sequence.
When the addition of the image processing object to the image processing sequence is completed, the image processing sequence edit part 1115 gives an output command of display of the added image processing object (step S620), and the display control unit 114 respectively displays the added image processing object in the tree of the image processing sequence displayed in the view part 63 and the flowchart displayed in the flowchart display part 62.
Here, as described above, the image processing sequence has at least one image processing object. The display control unit 114 can display the image processing object, and the display form of the image processing object without teaching is different from the display form of the image processing object with teaching.
Thereby, the user may grasp whether or not the image processing object has been taught at a glance. As below, the specific explanation will be made.
The image processing sequence edit part 1115 determines whether or not teaching has been performed with respect to the added image processing object (step S624) and, if determining that the teaching has not been performed (untaught), gives an output command of untaught display (step S625). Then, the display control unit 114 displays an untaught indication for the added untaught image processing object. That is, the display form of the untaught part of the flowchart is made different from that of the taught part. In the embodiment, an icon 6221 of "!" is displayed in the untaught part of the flowchart. Thereby, the user may distinguish the untaught image processing object at a glance.
Note that, in place of the icon 6221 of "!" or together with the display of the icon 6221 of "!", the color, e.g. the background color, of the untaught part of the flowchart may be made different from that of the taught part.
When the added image processing object is untaught, in other words, teaching is necessary, the user gives an operation instruction to a button 614 (icon) labeled “Teach” with respect to the added image processing object, and performs predetermined teaching.
When the input control unit 115 receives the operation instruction to the button 614 by the user and the instructed teaching is completed with respect to the added untaught image processing object, the image processing sequence edit part 1115 adds a detail of the completed teaching. Thereby, the added image processing object can be executed. In the above described manner, the settings of the image processing sequence are completed.
Note that, in the case of “Geometric search”, an example of teaching includes model registration of the marker 680.
When determining that the teaching has been performed (taught) at step S624, the image processing sequence edit part 1115 gives an output command of taught display (step S626). Then, the display control unit 114 displays a taught indication for the added image processing object. In the embodiment, the icon 6221 of "!" displayed in the flowchart is erased. Thereby, the user may distinguish at a glance that the previously untaught image processing object has been taught.
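The switching of the display form depending on the teaching status may be sketched as follows; the plain-text label stands in for the actual flowchart display, and the names are assumptions made only for this sketch.

def flowchart_label(obj):
    # Untaught objects are flagged so that the user can distinguish them at a glance.
    return f'! {obj["object"]}' if not obj["taught"] else obj["object"]

objects = [{"object": "Geometric", "taught": False}]
print(flowchart_label(objects[0]))   # "! Geometric"  (teaching still necessary)

objects[0]["taught"] = True          # e.g. after model registration of the marker 680
print(flowchart_label(objects[0]))   # "Geometric"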
The image processing sequence (creation of the image processing sequence) includes a step of inspection based on the image captured by the imaging unit 3.
Thereby, the object imaged by the imaging unit 3 may be inspected by the image processing sequence. As below, the specific explanation will be made.
First, when the input control unit 115 receives an operation instruction of "Execution of Image Processing Sequence" (execution command of the image processing sequence) by the user (step S627), the image processing sequence edit part 1115 transmits the setting details of the image processing sequence to the image processing apparatus 13. In this case, the robot control apparatus 12 may relay the transmission or not.
Then, the image processing apparatus 13 receives the setting details of the image processing sequence. The image processing sequence execution part 1311 reflects the setting details of the image processing sequence.
Then, the image processing sequence edit part 1115 transmits an image processing sequence execution command to execute the image processing sequence to the image processing apparatus 13. In this case, the robot control apparatus 12 may relay the transmission or not.
Then, the image processing apparatus 13 receives the image processing sequence execution command. Then, the image processing sequence execution part 1311 executes the image processing sequence.
In this case, first, the imaging unit 3 is driven by the control of the imaging unit control part 1313. The imaging unit 3 images a predetermined object such as a marker, for example, and transmits image data of the captured image to the image processing apparatus 13. Then, the image processing apparatus 13 receives the image data. Then, the image processing part 1312 performs predetermined image processing based on the image data.
As a specific example, the marker 680 (geometric model: figure) shown in
In the embodiment, in the case of the creation of the first image processing sequence, the marker 680 attached to the center of the upper surface of the work 93 is imaged using the mobile camera 31 and predetermined processing is performed. The processing includes e.g. confirmation of detection accuracy.
Or, in the case of the creation of the second image processing sequence, the marker 680 attached to the center of the lower surface of the work 93 is imaged using the fixed camera 32 and predetermined processing is performed.
Or, in the case of the creation of the third image processing sequence, two marks (not shown) attached to the lower surface of the work 93, e.g., two markers 680 placed at a predetermined interval (object) are imaged using the fixed camera 32 and predetermined processing is performed. The processing includes e.g. correlation between the third image processing sequence and the calibration result of the fixed camera 32, inspection, etc.
Further, a specific example of the inspection is an inspection as to whether or not the distance between a point A and a point B falls within a predetermined threshold value. In this case, the two markers 680 correspond to the point A and the point B. In the inspection, the distance between the point A and the point B is measured based on the calibration result of the fixed camera 32 and the captured image, and whether or not the distance falls within the predetermined threshold value is determined. Then, if the distance between the point A and the point B falls within the predetermined threshold value, "Pass" is determined and, if not, "Fail" is determined.
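As a minimal sketch of this inspection, assuming that the calibration result can be reduced to a single millimeters-per-pixel scale factor (a simplification made only for illustration), the determination may look like the following.

import math

def inspect_distance(point_a_px, point_b_px, mm_per_pixel, threshold_mm):
    # Measure the distance between point A and point B in the captured image and
    # convert it with the calibration result before comparing with the threshold.
    ax, ay = point_a_px
    bx, by = point_b_px
    distance_mm = math.hypot(bx - ax, by - ay) * mm_per_pixel
    return ("Pass" if distance_mm <= threshold_mm else "Fail"), distance_mm

result, distance = inspect_distance((120.0, 80.0), (320.0, 80.0),
                                    mm_per_pixel=0.05, threshold_mm=10.5)
print(result, round(distance, 2))  # Pass 10.0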
The communication unit 116 receives the execution result (detection result) of the image processing sequence transmitted from the image processing apparatus 13 (step S628), and the computer 11 reflects the execution result of the image processing sequence.
Specifically, first, the image processing sequence edit part 1115 transmits an image processing sequence execution result transmission command to transmit the execution result of the image processing sequence to the image processing apparatus 13. In this case, the robot control apparatus 12 may relay the transmission or not.
Then, the image processing apparatus 13 transmits the execution result of the image processing sequence to the computer 11. In this case, the robot control apparatus 12 may relay the transmission or not.
Then, the computer 11 receives the execution result of the image processing sequence using the communication unit 116. Then, the image processing sequence edit part 1115 reflects the execution result of the image processing sequence.
That is, the image processing sequence edit part 1115 gives an output command of display with respect to the execution result of the image processing sequence or the like (step S629), and the display control unit 114 allows the display device 41 to display the execution result of the image processing sequence (reflects the result on the display). Further, the execution result of the image processing sequence is also reflected on the properties etc. The execution result of the image processing sequence includes e.g. image coordinates at which the marker is detected etc.
Note that, in the creation of the image processing sequence, a window for setting distortion correction of the mobile camera 31, fixed camera 32, etc. and a window for setting illumination conditions at imaging may be provided.
Then, processing of tool settings, local settings, calibration and the creation of image processing sequence, etc. using commands will be explained.
First, the summary will be briefly explained in correspondence with the claims, and then, the details will be explained later.
The processing unit 110 has the control program edit part 1111 that can edit the control program for driving the robot 2. The control program edit part 1111 can insert a command to call up the edited image processing sequence (in the embodiment, as the argument of the command) into the control program.
Thereby, for example, in a control program of allowing the robot 2 to perform a predetermined action and a predetermined job, the existing (edited) image processing sequence may be called up. Accordingly, time and effort to create the image processing sequence at each time when the control program is created may be omitted.
Further, the robot control apparatus 12 of the control system 10 (control apparatus) includes the control program execution part 1211 that can execute the control program for driving the robot 2. The control program execution part 1211 executes the setting of the local coordinate system using a command that enables setting of the local coordinate system different from the coordinate system of the robot 2.
Thereby, the setting of the local coordinate system may be made more quickly. The use of commands is particularly effective in the case where the calibration is regularly and repeatedly executed, and in the case where, after the calibration is executed, correction of various settings of the calibration including the setting of the local coordinate system is repeated a plurality of times based on the execution result. This is because the correction based on the execution results may be easily and quickly performed.
Further, the control program execution part 1211 uses the command VDefTool as an example of the commands that enable tool settings for obtaining the offset of the tool attached to the robot 2 to execute the tool settings.
Thereby, the tool settings may be made more quickly. The use of commands is particularly effective in the case where the calibration is regularly and repeatedly executed, and in the case where, after the calibration is executed, correction of various settings of the calibration including the tool settings is repeated a plurality of times based on the execution result. This is because the correction based on the execution results may be easily and quickly performed. As below, the specific explanation will be made.
The control system 10 may perform processing using commands in place of the above described display processing using the various operation windows.
The commands include action commands for execution of target processing. For example, the commands include a processing command for tool settings to make tool settings (calculate offset) using the execution result of the image processing sequence, a processing command for local settings to make local settings using the execution result of the image processing sequence, a processing command for calibration to perform a calibration using the execution result of the image processing sequence, and a command for driving the robot arm 20 so that e.g. a target within a captured image of the imaging unit 3 may move to a predetermined position using the execution result of the image processing sequence.
Further, the commands have e.g. arguments for designating the parameters.
As below, the tool settings will be representatively explained as an example, however, local settings, calibration and creation of image processing sequence may be performed in the same manner.
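The relation between command names and the respective processing may be pictured by the following dispatch sketch; "VDefTool" is the command described below, while the other command names and the handler functions are hypothetical placeholders, not names defined in the embodiment.

def run_tool_settings(**kwargs):
    return {"processing": "tool settings", **kwargs}

def run_local_settings(**kwargs):
    return {"processing": "local settings", **kwargs}

def run_calibration(**kwargs):
    return {"processing": "calibration", **kwargs}

COMMANDS = {
    "VDefTool": run_tool_settings,
    "VDefLocal": run_local_settings,   # hypothetical name for the local-setting command
    "VDefCalib": run_calibration,      # hypothetical name for the calibration command
}

def execute_command(name, **arguments):
    # The arguments of the command designate the parameters of the processing.
    return COMMANDS[name](**arguments)

print(execute_command("VDefTool", toolNumber=1, sequence="TestVisSeq"))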
The user creates a program using a command and inputs (creates and inputs) the program in the computer 11 using the input device 42. The input control unit 115 (receiving unit) of the computer 11 receives the program input from the input device 42 and stores the program in the storage unit 113. Note that the creation of the program includes the case where a program is newly created and the case where an existing program is rewritten or added to.
An example of the program is as follows. Further, the program enables the same settings as the above described tool settings.
Reset
Motor On
VDefTool 1, VISION_DEFTOOL_FIXEDNOCAL, TestVisSeq, 180, 5
Fend
“VDefTool 1, VISION_DEFTOOL_FIXEDNOCAL, TestVisSeq, 180, 5” is a command (the respective arguments are examples).
Of the command, “VDefTool” is a command name.
Further, the arguments (argument names) of the command VDefTool include e.g. the same parameters as the parameters that can be set in the first tool setting window 71a, the second tool setting window (not shown), the third tool setting window 71b, the fourth tool setting window (not shown), and the fifth tool setting window 71c in the above described tool settings. The specific examples include "toolNumber", "toolDefType", "sequence", "[finalAngle]", "[initialAngle]", "[targetTolerance]".
“toolNumber” is the save number (tool number) for saving the tool setting results. The specific examples include 1 to 15.
Further, "toolDefType" is the tool type. The specific examples are as follows.
VISION_DEFTOOL_FIXEDNOCAL: make tool settings using the fixed camera with no calibration.
VISION_DEFTOOL_J4CAM: calculate image center of the mobile camera provided on the fourth arm.
VISION_DEFTOOL_J6CAM: calculate image center of the mobile camera provided on the sixth arm.
“sequence” is an image processing sequence used for detection of a tool (object).
“[finalAngle]” is an angle to which a tool/camera tool is rotated (final rotation angle).
“[initialAngle]” is an angle to which the tool/camera tool is rotated (initial rotation angle) at tentative tool settings.
“[targetTolerance]” is a pixel distance at which the execution result of the image processing sequence (detection result) is regarded as the same as a target position (tolerance of the target).
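Purely as an illustrative model of the arguments listed above, the following sketch gathers them into a small data structure with basic checks; the class and its validation are assumptions made for this sketch and are not part of the command specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VDefToolArgs:
    toolNumber: int                          # save number of the tool setting result (1 to 15)
    toolDefType: str                         # e.g. "VISION_DEFTOOL_FIXEDNOCAL"
    sequence: str                            # image processing sequence used for detection
    finalAngle: Optional[float] = None       # final rotation angle
    initialAngle: Optional[float] = None     # initial rotation angle at tentative tool settings
    targetTolerance: Optional[float] = None  # pixel distance regarded as reaching the target

    def validate(self):
        if not 1 <= self.toolNumber <= 15:
            raise ValueError("toolNumber must be 1 to 15")
        if not self.sequence:
            raise ValueError("an image processing sequence must be designated")

args = VDefToolArgs(1, "VISION_DEFTOOL_FIXEDNOCAL", "TestVisSeq", 180, 5)
args.validate()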
The created program (project) is built and converted (compiled) into a language (data strings) that can be interpreted by the robot control apparatus 12.
In this case, first, when the input control unit 115 receives an operation instruction of “Build” by the user, the control program build part 1112 of the computer 11 builds a program and compiles the program to a language that can be interpreted by the robot control apparatus 12.
Then, the computer 11 transmits the compiled program to the robot control apparatus 12. The robot control apparatus 12 receives the program transmitted from the computer 11 and stores the program in the storage unit 123.
Then, the computer 11 transmits necessary respective information including image processing detection settings to the image processing apparatus 13. In this case, the robot control apparatus 12 may relay the transmission or not. The image processing apparatus 13 receives the image processing detection settings transmitted from the computer 11 and stores the settings in the storage unit 133.
In the case where the user allows the robot vision system 100 to perform processing of tool settings, the user makes an operation instruction to a predetermined icon (not shown) displayed in the display device 41.
When the input control unit 115 receives an operation instruction of “start selection, execution of main function of program” (execution command of processing of tool settings) by the user, the computer 11 first transmits a command of execution processing of the program to the robot control apparatus 12.
Then, the robot control apparatus 12 receives the command (instruction) of execution processing of the program. Then, the control program execution part 1211 of the robot control apparatus 12 starts the execution processing of the main function of the program. Then, when the control program execution part 1211 finds execution processing of the command VDefTool, the part transmits a command of the execution processing of the command VDefTool (execution command of the processing of tool settings) with the arguments of the command VDefTool to the computer 11.
Then, the computer 11 receives the command of the execution processing of the command VDefTool and executes (starts) the processing of the command VDefTool, i.e., the processing of tool settings. Note that the execution of the processing of tool settings is the same as the above described [2B] and the explanation is omitted. Further, the reflection of the tool setting result is the same as the above described [3B] and the explanation is omitted.
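The flow in which the control program execution part hands the VDefTool processing to the computer together with its arguments may be sketched as follows; the function names are assumptions, and the other program lines are left to the robot control apparatus side.

def computer_execute_vdeftool(arguments):
    # Stands in for the computer 11 executing the processing of tool settings.
    return f"tool settings executed with {arguments}"

def run_program(program_lines):
    results = []
    for line in program_lines:
        if line.startswith("VDefTool"):
            arguments = line[len("VDefTool"):].strip()
            results.append(computer_execute_vdeftool(arguments))
        # Reset, Motor On, etc. would be handled by the robot control apparatus itself.
    return results

program = ["Reset", "Motor On",
           "VDefTool 1, VISION_DEFTOOL_FIXEDNOCAL, TestVisSeq, 180, 5"]
print(run_program(program))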
Using the above described commands, the respective processing including tool settings may be easily and quickly executed.
In the above described tool settings using the tool setting windows, at each time the processing of tool settings is executed, the five tool setting windows are sequentially displayed by the display device 41 and various settings are made in the tool settings before the execution of the processing of the tool settings is enabled, and accordingly, time and labor are taken for preparation at each time. On the other hand, using the commands, once the arguments of the commands are set, the settings are unnecessary in the next processing, and accordingly, the processing of tool settings may be easily and quickly executed.
Further, for example, in the case where the calibration is executed after the execution of the processing of tool settings, using the commands, the processing of tool settings and the calibration may be continuously and automatically executed and the convenience is high.
Furthermore, in the case where various settings in the tool settings are changed, the various settings may be changed by a simple job of changing the corresponding arguments to the commands.
As above, the tool settings are explained as an example, and the local settings, calibration, creation of image processing sequence, etc. may be respectively and similarly executed using commands.
For example, in the local settings, a command that enables setting of a local coordinate system different from the coordinate system of the robot 2 is created. Then, the control program execution part 1211 executes setting of the local coordinate system in response to the command. Note that the setting of the local coordinate system (execution of the processing of local settings) is the same as the above described [2A] and the explanation is omitted. Further, the reflection of the local setting result is the same as the above described [3A] and the explanation is omitted.
The above described robot vision system 100 includes the control system 10 as “control apparatus” and the robot 2 and the imaging unit 3 controlled by the control system 10. According to the robot vision system 100, the system includes the above described control system 10, and thereby, the robot may properly perform the action with respect to the calibration based on the captured image (image data) from the imaging unit 3. Accordingly, the accuracy of the calibration may be improved. As a result, the accuracy of the job of the robot 2 may be improved.
Further, the robot 2 is controlled by the control system 10 as “control apparatus”. Accordingly, the robot 2 may properly perform the action with respect to the calibration under the control of the control system 10.
As above, the control apparatus, the robot, and the robot system of the invention are explained based on the illustrated embodiments; however, the invention is not limited to those. The configurations of the respective parts may be replaced by arbitrary configurations having the same functions. Further, another arbitrary configuration may be added to the invention.
This application claims the benefit of U.S. Provisional Application No. 62/437,922 filed on Dec. 22, 2016 and Japanese Application No. 2017-059553 filed on Mar. 24, 2017. The entire disclosures of the above applications are incorporated herein by reference.