The present application claims priority under 35 U.S.C. §119 to Japanese Patent Application No. 2014-053072, filed Mar. 17, 2014. The contents of this application are incorporated herein by reference in their entirety.
1. Field of the Invention
The embodiments disclosed herein relate to a robot system, a calibration method in a robot system, and a position correcting method in a robot system.
2. Discussion of the Background
Japanese Unexamined Patent Application Publication No. 2010-172986 discloses a robot system. The robot system includes a robot, a camera, a robot controller, and a personal computer. The robot includes a multi-articular robot main body and a hand mounted to a distal end of the multi-articular robot main body so as to hold a workpiece. The camera picks up an image of the workpiece held by the hand of the robot. The robot controller controls operations of the robot main body. The personal computer performs three-dimensional measurement and recognition based on the image picked up by the camera. In the robot system, the robot is provided with a calibration checkerboard, and the camera picks up an image of the checkerboard. The image is used for a calibration in which a coordinate system imparted to the robot is correlated to a coordinate system imparted to the camera.
According to one aspect of the present disclosure, a robot system includes a robot, an imaging device, a robot controller, and an image pick-up controller. The robot includes a hand and a multi-articular robot main body. The hand is configured to hold a workpiece. To the multi-articular robot main body, the hand is mounted. The imaging device is configured to pick up an image of the workpiece held by the hand. The robot controller is configured to control the robot main body to operate. The image pick-up controller is configured to control the imaging device to pick up the image of the workpiece held by the hand. The image pick-up controller is configured to send movement information to the robot controller so as to control the robot controller to move the workpiece held by the robot to a plurality of positions, configured to control the imaging device to pick up the image of the workpiece held by the robot at the plurality of positions, configured to recognize a registered portion of the workpiece held by the robot at the plurality of positions, and configured to perform a calibration based on the movement information and based on a result of recognizing the registered portion of the workpiece held by the robot at the plurality of positions. The calibration includes correlating positional information of the image picked up by the imaging device to positional information of the robot.
According to another aspect of the present disclosure, a calibration method in a robot system includes holding a workpiece using a hand mounted to a main body of a multi-articular robot. The workpiece held by the robot is moved to a plurality of positions based on movement information sent from an image pick-up controller, and an imaging device is controlled to pick up an image of the workpiece at the plurality of positions. A registered portion of the workpiece held by the robot is recognized at the plurality of positions based on a result of the step of controlling the imaging device to pick up the image of the workpiece at the plurality of positions. A calibration is performed including correlating positional information of the image picked up by the imaging device to positional information of the robot based on the movement information and based on a result of the recognizing step, and the calibration is ended.
According to a further aspect of the present disclosure, a position correcting method in a robot system includes holding a workpiece using a hand mounted to a main body of a multi-articular robot. The workpiece held by the robot is moved to a plurality of positions based on movement information sent from an image pick-up controller, and an imaging device is controlled to pick up an image of the workpiece at the plurality of positions. A registered portion of the workpiece held by the robot is recognized at the plurality of positions based on a result of the step of controlling the imaging device to pick up the image of the workpiece at the plurality of positions. A calibration is performed including correlating positional information of the image picked up by the imaging device to positional information of the robot based on the movement information and based on a result of the recognizing step. A position and a posture of a targeted workpiece held by the robot are recognized, and the position of the targeted workpiece is corrected based on a result of the calibration.
A more complete appreciation of the present disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
The embodiments will now be described with reference to the accompanying drawings, wherein like reference numerals designate corresponding or identical elements throughout the various drawings.
By referring to the drawings, a robot system 100 according to an embodiment will be described.
As illustrated in the drawings, the robot system 100 includes a robot 10, a robot controller 20, a vision controller 30, and an imaging device 40.
The robot 10 holds the workpiece 1 using the hand 13. As illustrated in the drawings, the robot 10 includes a multi-articular robot main body 11, and the hand 13 is mounted to a distal end of the robot main body 11.
The robot controller 20 controls the robot 10 (robot main body 11) based on a program to perform a predetermined operation. The robot controller 20 is coupled to the vision controller 30. From the vision controller 30, the robot controller 20 receives movement information to move the workpiece 1 held by the robot 10. Based on the movement information, the robot controller 20 drives the robot main body 11 into operation.
The vision controller 30 is coupled to the imaging device 40 to control the imaging device 40 to perform an image pick-up operation. Based on a result of the imaging device 40 picking up an image of the workpiece 1, the vision controller 30 recognizes the position of the workpiece 1 and the posture of the workpiece 1. Then, the vision controller 30 sends information of the recognized position and posture of the workpiece 1 to the robot controller 20. Then, the vision controller 30 performs a calibration including correlating positional information of the image picked up by the imaging device 40 to positional information of the robot 10. In order to correct the position of the targeted workpiece 1 based on a result of the calibration and based on the result of the imaging device 40 picking up the image of the targeted workpiece 1, the vision controller 30 sends movement information to the robot controller 20 so as to move the targeted workpiece 1 held by the robot 10.
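By way of illustration only, the position correction described above can be sketched as follows. The sketch assumes that the calibration result is held as a 2x3 affine matrix A mapping image coordinates to robot coordinates and uses NumPy; the matrix representation and all names are illustrative assumptions rather than the literal implementation of this embodiment.

    import numpy as np

    def image_to_robot(A, pt_img):
        # Map an image coordinate to a robot coordinate through the
        # calibration result, assumed here to be a 2x3 affine matrix A.
        x, y = pt_img
        return A @ np.array([x, y, 1.0])

    def correction_shift(A, detected_img_xy, reference_img_xy):
        # Robot-frame shift that moves the detected workpiece position
        # onto the registered reference position.
        return (image_to_robot(A, reference_img_xy)
                - image_to_robot(A, detected_img_xy))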
The imaging device 40 picks up an image of the workpiece 1 held by the hand 13 of the robot 10. The imaging device 40 is fixed over the robot 10. The image picked up by the imaging device 40 has an exemplary coordinate system as illustrated in the drawings.
In this embodiment, the vision controller 30 sends the movement information to the robot controller 20 so as to control the robot controller 20 to move the workpiece 1 to a plurality of positions, and controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of positions. The workpiece 1 has a registered portion 1b, and the vision controller 30 recognizes the registered portion 1b at the plurality of positions. Based on the movement information and based on a result of the vision controller 30 recognizing the registered portion 1b, the vision controller 30 performs the calibration including correlating the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10.
Specifically, as illustrated in the drawings, the vision controller 30 extracts a characteristic 1a from within the registered portion 1b of the workpiece 1 held by the robot 10, controls the imaging device 40 to pick up an image of the characteristic 1a of the workpiece 1 at the plurality of positions to which the held workpiece 1 is moved, obtains a movement state of the characteristic 1a of the workpiece 1, and performs the calibration based on the movement information and based on the movement state of the characteristic 1a of the workpiece 1.
The movement information includes movement distance information. The vision controller 30 sends the movement information including the movement distance information to the robot controller 20 so as to control the robot controller 20 to make a plurality of parallel movements of the workpiece 1 held by the robot 10 to the plurality of positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of positions, and correlates coordinate information of the image picked up by the imaging device 40 to coordinate information of the robot 10 based on the movement distance information and based on the result of the imaging device 40 picking up the image of the workpiece 1 at the plurality of positions.
Specifically, as illustrated in the drawings, the vision controller 30 controls the robot controller 20 to make the plurality of parallel movements of the workpiece 1 held by the robot 10 to the plurality of positions in the form of a grid on a horizontal plane in a first direction (X direction) and a second direction (Y direction) orthogonal to the first direction on the horizontal plane.
Referring to step (1) illustrated in the drawings, the robot controller 20 makes a parallel movement of the workpiece 1 held by the robot 10 to one of the plurality of positions in the form of the grid, and the imaging device 40 picks up an image of the workpiece 1 at that position.
Then, referring to step (5) illustrated in the drawings, the parallel movements and the image pick-up operations are repeated until the image of the workpiece 1 has been picked up at all of the plurality of positions, and the coordinate information of the image picked up by the imaging device 40 is correlated to the coordinate information of the robot 10.
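One plausible way to perform this correlation is a least-squares fit of an affine map over the grid measurements. The following sketch assumes that the matched image positions and commanded robot positions are available as NumPy arrays; the affine model and all names are illustrative assumptions, not the literal method of this embodiment.

    import numpy as np

    def fit_affine(img_pts, robot_pts):
        # Least-squares 2x3 affine map from image coordinates to robot
        # coordinates, given points matched at the grid positions.
        # img_pts, robot_pts: (N, 2) arrays with N >= 3, not collinear.
        img_pts = np.asarray(img_pts, dtype=float)
        robot_pts = np.asarray(robot_pts, dtype=float)
        M = np.hstack([img_pts, np.ones((len(img_pts), 1))])  # (N, 3)
        sol, *_ = np.linalg.lstsq(M, robot_pts, rcond=None)   # (3, 2)
        return sol.T  # (2, 3): robot_xy ~= A @ [x_img, y_img, 1]

With, for example, a 3-by-3 grid of commanded positions, the nine matched point pairs overdetermine the six affine parameters, so measurement noise is averaged out by the fit.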
The movement information includes turning information. As illustrated in the drawings, the vision controller 30 controls the robot controller 20 to make a plurality of turns of the workpiece 1 held by the robot 10 to a plurality of turning positions.
Referring to step (11) illustrated in the drawings, the robot controller 20 turns the workpiece 1 held by the robot 10 to one of the plurality of turning positions, and the imaging device 40 picks up an image of the workpiece 1 at that turning position.
Referring to step (15) illustrated in the drawings, the turns and the image pick-up operations are repeated, and the in-image turning center of the image picked up by the imaging device 40 is obtained by calculation based on the picked-up images.
The vision controller 30 sends the movement information including the turning information to the robot controller 20 so as to control the robot controller 20 to make a plurality of turns of the workpiece 1 to a plurality of turning positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of turning positions, repeats a calculation to obtain the in-image turning center of the image picked up by the imaging device 40 based on the result of the imaging device 40 picking up the image of the workpiece 1 at the plurality of turning positions, and correlates the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10 based on the movement information and based on the in-image turning center.
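The in-image turning center can, for example, be estimated by an algebraic circle fit to the imaged positions of the characteristic at the turning positions, repeated until the estimate converges. In the sketch below, the fit, the tolerance, and the collect_point callback are illustrative assumptions.

    import numpy as np

    def fit_circle_center(pts):
        # Algebraic (Kasa) circle fit: solve x^2 + y^2 = D*x + E*y + F
        # in the least-squares sense; the circle center is (D/2, E/2).
        pts = np.asarray(pts, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        M = np.column_stack([x, y, np.ones_like(x)])
        D, E, F = np.linalg.lstsq(M, x**2 + y**2, rcond=None)[0]
        return np.array([D / 2.0, E / 2.0])

    def converged_center(collect_point, tol=0.5, max_iter=20):
        # collect_point(): hypothetical callback that commands one more
        # turn via the robot controller and returns the recognized image
        # position of the characteristic. Repeat until the estimated
        # center moves less than tol pixels between iterations.
        pts, prev = [], None
        for _ in range(max_iter):
            pts.append(collect_point())
            if len(pts) >= 3:
                center = fit_circle_center(pts)
                if prev is not None and np.linalg.norm(center - prev) < tol:
                    return center
                prev = center
        return prev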
The portion 1b of the workpiece 1 may be registered during the calibration. The vision controller 30 uses the portion 1b of the workpiece 1 registered during the calibration to recognize the position and the posture of the targeted workpiece 1 held by the robot 10, and corrects the position of the targeted workpiece 1 based on the result of the calibration. Specifically, when the kind of the targeted workpiece 1 is identical to the kind of the previous workpiece 1 used in the calibration, the vision controller 30 uses the portion 1b of the previous workpiece 1 registered during the calibration to recognize the targeted workpiece 1; it is not necessary to register the targeted workpiece 1 for recognition purposes.
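Recognizing an identical workpiece from the portion 1b registered during the calibration amounts to locating a stored template in a new image. A minimal sketch using template matching follows; OpenCV and normalized cross-correlation are illustrative choices rather than the recognition method recited in this disclosure, and posture recognition would additionally require, for example, matching over rotated templates.

    import cv2

    def locate_registered_portion(image, template):
        # Find the registered portion (template) in a grayscale image by
        # normalized cross-correlation; return the match center and score.
        result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        h, w = template.shape[:2]
        center = (top_left[0] + w / 2.0, top_left[1] + h / 2.0)
        return center, score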
Next, by referring to the drawings, description will be made with regard to the workpiece recognition processing performed by the robot system 100.
At steps S1 through S3 illustrated in the drawings, preparations for the workpiece recognition processing, such as settings of the imaging device 40 and the vision controller 30, are made.
At step S4, the robot controller 20 is provided with definitions of parameters and variables. Specifically, the parameters and variables that are defined include driving information for driving the robot 10 into operation. At step S5, a detection job is prepared and executed for the robot controller 20. This establishes communication between the robot controller 20 and the vision controller 30, and ensures that the robot 10 is controlled based on a result of image processing by the vision controller 30.
At step S6, calibration processing is performed. This correlates the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10. At step S7, a measurement reference workpiece is registered. Specifically, a reference position of the workpiece 1 for work on the workpiece 1 is registered. At step S8, a targeted workpiece 1 is measured (imaged and recognized). Then, at step S9, the targeted workpiece 1 is corrected (moved) to the position registered in the reference workpiece registration.
Then, when the workpiece 1 registered in the reference workpiece registration is subjected to additional work, the processing at steps S8 and S9 is repeated. When a workpiece 1 that is unregistered in the reference workpiece registration is subjected to work, the procedure returns to the measurement reference workpiece registration processing at step S7. When the positional relationship between the robot 10 and the imaging device 40 is changed, the procedure returns to the calibration processing at step S6. After a predetermined number of workpieces 1 have been subjected to work, the workpiece recognition processing ends.
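By way of illustration, one measurement-and-correction cycle (steps S8 and S9) might look as follows; the measure and move_robot interfaces and the 2x3 affine matrix A are hypothetical stand-ins for the actual controllers.

    import numpy as np

    def work_cycle(measure, move_robot, reference_img_xy, A):
        # measure(): hypothetical call returning the image position of
        # the targeted workpiece (step S8).
        # move_robot(shift): hypothetical call commanding a robot-frame
        # shift (step S9).
        def to_robot(p):  # image -> robot via the assumed affine matrix A
            return A @ np.array([p[0], p[1], 1.0])
        detected_img_xy = measure()
        move_robot(to_robot(reference_img_xy) - to_robot(detected_img_xy))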
Next, by referring to the drawings, the calibration processing at step S6 will be described in detail.
At step S11 illustrated in the drawings, the robot 10 holds the workpiece 1 using the hand 13 and moves the workpiece 1 into the field of view of the imaging device 40. At step S12, the portion 1b of the workpiece 1 is registered, and the vision controller 30 extracts the characteristic 1a from within the registered portion 1b.
At step S13, the vision controller 30 sends shift data (X, Y) (movement information) to the robot controller 20. At step S14, based on the shift data (X, Y) (movement information), the robot controller 20 drives the robot main body 11 into operation to make a parallel movement of the workpiece 1 in the XY directions. At step S15, the imaging device 40 picks up an image of the workpiece 1.
At step S16, a determination is made as to whether the workpiece 1 has been detected (the characteristic 1a of the workpiece 1 has been recognized). When the workpiece 1 has been detected, the procedure proceeds to step S17. When the workpiece 1 has not been detected, the procedure proceeds to step S23. At step S17, a determination is made as to whether to continue a judgment (image pick-up processing and recognition processing). Specifically, a determination is made as to whether the judgment (image pick-up processing and recognition processing) has been performed a predetermined number of times. When a determination is made to continue the judgment, the procedure returns to step S13. When a determination is made to end the judgment, the procedure proceeds to step S18. In this manner, the vision controller 30 correlates the movement direction and movement distance measured in the image picked up by the imaging device 40 to the movement direction and movement distance of the robot 10.
At step S18, the vision controller 30 sends the shift data (θ) (movement information) to the robot controller 20. At step S19, based on the shift data (θ) (movement information), the robot controller 20 drives the robot main body 11 into operation to turn the workpiece 1 by an angle θ. At step S20, the imaging device 40 picks up an image of the workpiece 1.
At step S21, a determination is made as to whether the workpiece 1 has been detected (the characteristic 1a of the workpiece 1 has been recognized). When the workpiece 1 has been detected, the procedure proceeds to step S22. When the workpiece 1 has not been detected, the procedure proceeds to step S23. At step S22, a determination is made as to whether to continue the judgment (image pick-up processing and recognition processing). Specifically, a determination is made as to whether the judgment (image pick-up processing and recognition processing) has been performed a predetermined number of times. Also, a determination is made as to whether to repeat the image pick-up processing and the calculation processing to cause the in-image turning center obtained by calculation using the picked-up image of the workpiece 1 to converge within a particular range. When a determination is made to continue the judgment, the procedure returns to step S18. When a determination is made to end the judgment, the result of the calibration is registered and the calibration processing ends. In this manner, the vision controller 30 correlates the in-image turning center of the image picked up by the imaging device 40 to the turning center about which the robot 10 turns the workpiece 1.
When the workpiece 1 is not detected at step S16 or step S21, an error notification is issued at step S23, and the calibration processing is stopped.
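Putting the pieces together, the flow of steps S13 through S23 could be driven as sketched below, reusing fit_affine and fit_circle_center from the earlier sketches. The commanded grid shifts and turn angles, the send/capture/recognize interfaces, and the error handling are all illustrative assumptions.

    GRID_SHIFTS = [(x, y) for x in (0.0, 10.0, 20.0)
                   for y in (0.0, 10.0, 20.0)]   # mm from start pose (assumed)
    TURN_ANGLES = [5.0, 10.0, 15.0, 20.0]        # degrees (assumed)

    def calibrate(send_shift, send_turn, capture, recognize):
        # send_shift(x, y): command an absolute parallel movement from
        # the start pose; send_turn(theta): command a turn; capture():
        # pick up an image; recognize(img): image position of the
        # characteristic, or None when not detected (step S23).
        img_pts, robot_pts = [], []
        for x, y in GRID_SHIFTS:                 # steps S13 through S17
            send_shift(x, y)
            p = recognize(capture())
            if p is None:
                raise RuntimeError("workpiece not detected; calibration stopped")
            img_pts.append(p)
            robot_pts.append((x, y))
        A = fit_affine(img_pts, robot_pts)
        turn_pts = []
        for theta in TURN_ANGLES:                # steps S18 through S22
            send_turn(theta)
            p = recognize(capture())
            if p is None:
                raise RuntimeError("workpiece not detected; calibration stopped")
            turn_pts.append(p)
        return A, fit_circle_center(turn_pts)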
This embodiment provides the following advantageous effects.
In this embodiment, the vision controller 30 sends the movement information to the robot controller 20 so as to control the robot controller 20 to move the workpiece 1 to a plurality of positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of positions, recognizes the registered portion 1b of the workpiece 1 held by the robot 10 at the plurality of positions, and performs a calibration including correlating the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10 based on the movement information and based on a result of recognizing the registered portion 1b of the workpiece 1 held by the robot 10 at the plurality of positions. This eliminates the need for a calibration checkerboard and ensures a calibration using the workpiece 1 to correlate the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10. This, in turn, ensures a calibration including correlating the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10 while eliminating or minimizing increase in piece-part count. In addition, the robot 10 is driven into operation based on the movement information sent from the vision controller 30 at the time of the calibration. This ensures that the vision controller 30 performs the calibration processing without the need for the robot controller 20 to store the movement state of the robot 10. This eliminates or minimizes complication in the calibration processing.
Also in this embodiment, the vision controller 30 uses the portion 1b of the previous workpiece 1 registered during the calibration to recognize the position and the posture of the targeted workpiece 1 held by the robot 10, and corrects the position of the targeted workpiece 1 based on the result of the calibration. Thus, when the kind of the targeted workpiece 1 is identical to the kind of the previous workpiece 1 registered in the calibration, it is not necessary to register the targeted workpiece 1 for recognition purposes. This reduces the labor and processing load involved in recognizing the position and the posture of the workpiece 1.
Also in this embodiment, the vision controller 30 extracts the characteristic 1a from within the registered portion 1b of the workpiece 1 held by the robot 10, controls the imaging device 40 to pick up an image of the characteristic 1a of the workpiece 1 at a plurality of positions to which the held workpiece 1 is moved, obtains the movement state of the characteristic 1a of the workpiece 1, and performs a calibration based on the movement information and based on the movement state of the characteristic 1a of the workpiece 1. This ensures that the vision controller 30 automatically extracts a characteristic 1a readily recognizable for the vision controller 30 from within the registered portion 1b of the workpiece 1, and uses the extracted characteristic 1a to perform the calibration. This, in turn, ensures a more accurate calibration.
Also in this embodiment, the movement information includes movement distance information. The vision controller 30 sends the movement information including the movement distance information to the robot controller 20 so as to control the robot controller 20 to make a plurality of parallel movements of the workpiece 1 held by the robot 10 to a plurality of positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of positions, and correlates the coordinate information of the image picked up by the imaging device 40 to the coordinate information of the robot 10 based on the movement distance information and based on the result of the imaging device 40 picking up the image of the workpiece 1 at the plurality of positions. This facilitates correlating the coordinate information of the image picked up by the imaging device 40 to the coordinate information of the robot 10 based on the actual movement of the workpiece 1 caused by the movement distance information and based on the movement of the workpiece 1 in the image picked up by the imaging device 40.
Also in this embodiment, the vision controller 30 sends the movement information to the robot controller 20 so as to control the robot controller 20 to make a plurality of parallel movements of the workpiece 1 held by the robot 10 to a plurality of positions in the form of a grid on a horizontal plane in the first direction (X direction) and the second direction (Y direction) orthogonal to the first direction on the horizontal plane, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of positions, and correlates the coordinate information of the image picked up by the imaging device 40 to the coordinate information of the robot 10 based on the movement information and based on the result of the imaging device 40 picking up the image of the workpiece 1 at the plurality of positions. Thus, the calibration is based on a result of picking up an image of the workpiece 1 at a plurality of positions that form a grid resulting from two-dimensional movement of the workpiece 1 on a horizontal plane, and based on the actual movement of the workpiece 1 caused by the movement information. This ensures a more accurate calibration than a calibration in which the workpiece 1 is moved linearly (one-dimensionally).
Also in this embodiment, the movement information includes turning information, and the vision controller 30 sends the movement information including the turning information to the robot controller 20 so as to control the robot controller 20 to make, in addition to the plurality of parallel movements of the workpiece 1, a plurality of turns of the workpiece 1 held by the robot 10 to a plurality of turning positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of turning positions. Then, based on the turning information and based on the result of the imaging device 40 picking up the image of the workpiece 1 at the plurality of turning positions, the vision controller 30 correlates the in-image turning center of the image picked up by the imaging device 40 to the turning center about which the robot 10 turns the workpiece 1. This facilitates correlating the in-image turning center of the image picked up by the imaging device 40 to the turning center about which the robot 10 turns the workpiece 1 based on the actual movement (turning) of the workpiece 1 caused by the turning information and based on the movement (turning) of the workpiece 1 in the image picked up by the imaging device 40.
Also in this embodiment, the vision controller 30 sends the movement information to the robot controller 20 so as to control the robot controller 20 to make a plurality of turns of the workpiece 1 to a plurality of turning positions, controls the imaging device 40 to pick up an image of the workpiece 1 at the plurality of turning positions, repeats a calculation to obtain the in-image turning center of the image picked up by the imaging device 40 based on the result of the imaging device 40 picking up the image of the workpiece 1 held by the robot 10 at the plurality of turning positions, and correlates the positional information of the image picked up by the imaging device 40 to the positional information of the robot 10 based on the movement information and based on the in-image turning center. By repeating the image pick-up operation and the calculating operation until the in-image turning center obtained by picking up the image of the workpiece 1 at the plurality of turning positions converges within a particular range, the in-image turning center is calculated accurately. This ensures improved accuracy in correlating the in-image turning center of the image picked up by the imaging device 40 to the turning center about which the robot 10 turns the workpiece 1.
While in the above-described embodiment the imaging device has been described as recognizing a position on a plane (two-dimensional position), the imaging device may include a three-dimensional camera to recognize a three-dimensional position.
While in the above-described embodiment the workpiece has been described as being moved in the form of a grid on a horizontal plane in the calibration, the workpiece may be moved in the form of a three-dimensional grid (rectangular parallelepiped grid) in the calibration.
While in the above-described embodiment the vision controller (imaging device control means) and the robot controller (robot control means) are separate from each other, the imaging device control means and the robot control means may be integral with each other in a single controller.
While in the above-described embodiment the imaging device has been described as fixed, the imaging device may be movable. In this case, the calibration may be performed every time the imaging device makes a movement, or a movement state of the imaging device may be used to correct the calibration.
In the above-described embodiment, for the sake of description, the control processing performed by the control means has been described using a flow-driven flowchart, in which the control processing proceeds in order according to a processing flow. This, however, should not be construed in a limiting sense. The control processing performed by the control means may be event-driven processing, in which the control processing is performed on an event basis. In this case, the control processing may be complete event-driven processing or may be a combination of event-driven processing and flow-driven processing.
Obviously, numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the present disclosure may be practiced otherwise than as specifically described herein.