Claims
- 1. A method of three-dimensional object location and guidance to allow robotic manipulation of an object with variable position and orientation by a robot using a sensor array, comprising:
(a) calibrating the sensor array to provide a Robot—Eye Calibration by finding the intrinsic parameters of said sensor array and the position of the sensor array relative to a preferred robot coordinate system (“Robot Frame”) by placing a calibration model in the field of view of said sensor array;
(b) training object features by:
  (i) positioning the object and the sensor array such that the object is located in the field of view of the sensor array and acquiring and forming an image of the object;
  (ii) selecting at least 5 visible object features from the image;
  (iii) creating a 3D model of the object (“Object Model”) by calculating the 3D position of each feature relative to a coordinate system rigid to the object (“Object Space”);
(c) training a robot operation path by:
  (i) computing the “Object Space→Sensor Array Space” transformation using the “Object Model” and the positions of the features in the image;
  (ii) computing the “Object Space” position and orientation in “Robot Frame” using the transformation from “Object Space→Sensor Array Space” and “Robot—Eye Calibration”;
  (iii) coordinating the desired robot operation path with the “Object Space”;
(d) carrying out object location and robot guidance by:
  (i) acquiring and forming an image of the object using the sensor array, searching for and finding said at least 5 trained features;
  (ii) with the positions of features in the image and the corresponding “Object Model” as determined in the training step, computing the object location as the transformation between the “Object Space” and the “Sensor Array” and the transformation between the “Object Space” and “Robot Frame”;
  (iii) communicating said computed object location to the robot and modifying robot path points according to said computed object location.
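The transformation chaining recited in steps c) ii), d) ii) and d) iii) can be sketched with 4×4 homogeneous matrices. This is a minimal illustrative sketch, not the claimed implementation: the matrices `T_robot_cam` and `T_cam_obj` and all numeric values are assumed stand-ins for the “Robot—Eye Calibration” and an estimated “Object Space→Sensor Array Space” transformation.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation matrix about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assumed "Robot-Eye Calibration": pose of the sensor array in the Robot
# Frame, as produced by the calibration step a).
T_robot_cam = make_transform(rot_z(np.pi / 2), [0.5, 0.0, 1.2])

# Assumed "Object Space -> Sensor Array Space" transformation, as produced
# by the model-based pose estimation of step d) ii).
T_cam_obj = make_transform(rot_z(0.1), [0.0, 0.2, 0.8])

# Step d) ii): the Object Space expressed in the Robot Frame is the
# composition of the two transforms.
T_robot_obj = T_robot_cam @ T_cam_obj

# Step d) iii): a path point trained relative to the Object Space can be
# mapped into the Robot Frame for execution.
p_obj = np.array([0.1, 0.0, 0.05, 1.0])   # homogeneous point in Object Space
p_robot = T_robot_obj @ p_obj
print(p_robot[:3])                         # path point in the Robot Frame
```

The same composition is reused below for the calibration and path-correction claims; only the source of each matrix changes.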
- 2. The method of claim 1 wherein the sensor array is a single camera.
- 3. The method of claim 2 wherein said camera is fixed onto a stationary structure and the object is at rest upon a surface.
- 4. The method of claim 2 wherein said camera is fixed onto a stationary structure and the object is in a robot's grasp such that the position and orientation of the object can be modified by known values.
- 5. The method of claim 2 wherein the object is at rest and stationary and the camera is attached onto the robot such that its position and orientation can be modified by known values.
- 6. The method of claim 2 whereby the object is in a robot's grasp such that its position and orientation can be modified by known values and the camera is attached onto another robot such that its position and orientation can be modified by known values.
- 7. The method of claim 2 whereby step a) is accomplished by:
I) the operator moving the “Calibration Model” relative to the camera and capturing images at multiple positions to determine camera intrinsic parameters;
II) the operator taking an image of the “Calibration Model” at a stationary position to determine the extrinsic calibration parameters;
III) the operator determining the position of the “Calibration Model” in the robot space using the robot end-effector while the “Calibration Model” is at the same position as in step II);
IV) calculating the “Robot—Eye Calibration” using results of I), II) and III).
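Step IV) of this calibration sequence amounts to composing the two measured poses of the stationary “Calibration Model”. A minimal sketch, assuming illustrative pose values and helper functions that are not part of the claim:

```python
import numpy as np

def make_transform(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0, 0], [0, c, -s], [0, s, c]])

# Step II): extrinsic parameters -- assumed pose of the "Calibration Model"
# in the camera frame.
T_cam_model = make_transform(rot_x(0.2), [0.05, -0.1, 0.9])

# Step III): assumed pose of the same, stationary "Calibration Model"
# measured in the Robot Frame with the robot end-effector.
T_robot_model = make_transform(rot_x(-0.3), [0.7, 0.1, 0.4])

# Step IV): both measurements observe the same model, so the camera pose in
# the Robot Frame (the "Robot-Eye Calibration") is:
T_robot_cam = T_robot_model @ np.linalg.inv(T_cam_model)

# Sanity check: mapping the model's camera-frame pose through the
# calibration reproduces its robot-frame pose.
assert np.allclose(T_robot_cam @ T_cam_model, T_robot_model)
```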
- 8. The method of claim 3 wherein step a) is accomplished by mounting the “Calibration Model” on the robot and using the robot to automatically move the “Calibration Model” relative to the camera and capturing images at multiple known robot positions.
- 9. The method of claim 4 wherein step a) is accomplished by mounting the “Calibration Model” on the robot and using the robot to automatically move the “Calibration Model” relative to the camera and capturing images at multiple known robot positions.
- 10. The method of claim 5 wherein step a) is accomplished by placing the “Calibration Model” in the camera's field of view and using the robot to automatically move the camera relative to the “Calibration Model” and capturing images at multiple known robot positions.
- 11. The method of claim 6 wherein step a) is accomplished by placing the “Calibration Model” in the camera's field of view and using the robot to automatically move the camera relative to the “Calibration Model” and capturing images at multiple known robot positions.
- 12. The method of claim 2 wherein step b) iii) is accomplished by using the relative heights of features and the “Robot—Eye Calibration”.
- 13. The method of claim 3 wherein step b) iii) is accomplished by the operator entering the 3D position of each feature manually from a CAD model, measurement or other source.
- 14. The method of claim 4 wherein step b) iii) is accomplished by using the robot to automatically move the object relative to the camera and capturing images at multiple known robot positions.
- 15. The method of claim 5 wherein step b) iii) is accomplished by using the robot to automatically move the camera relative to the object and capturing images at multiple known robot positions.
- 16. The method of claim 6 wherein step b) iii) is accomplished by changing the relative position of the object and camera using movement of one or both robots and capturing images at multiple known robots' positions.
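One common way to realize steps like b) iii) in claims 14 through 16, recovering a feature's 3D position from images captured at multiple known robot positions, is linear triangulation. The sketch below assumes two views with projection matrices built from robot-reported poses and an assumed intrinsic matrix `K`; it is an illustration of the idea, not the patented method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one feature seen from two known camera poses.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the homogeneous point
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Assumed camera intrinsics.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Two camera poses, e.g. reported by the robot as it moves the camera
# (illustrative values): world-to-camera rotation and translation.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 1.5])
R2 = np.array([[np.cos(0.3), 0, np.sin(0.3)],
               [0, 1, 0],
               [-np.sin(0.3), 0, np.cos(0.3)]])
t2 = np.array([-0.4, 0.0, 1.5])
P1 = K @ np.hstack([R1, t1[:, None]])
P2 = K @ np.hstack([R2, t2[:, None]])

# Ground-truth feature, projected into both images to simulate detections.
X_true = np.array([0.1, -0.05, 0.2])
def project(P, X):
    x = P @ np.array([*X, 1.0])
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

Repeating this for each trained feature yields the per-feature 3D positions that make up the “Object Model”.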
- 17. The method of claim 1 wherein step c) iii) is accomplished by sending the “Object Space” to the robot and training the intended operation path relative to the “Object Space” and step d) iii) is accomplished by computing the “Object Space” inside the “Robot Frame” using the transformation between the “Object Space” and the “Sensor Array” and the “Robot—Eye Calibration” and sending the “Object Space” to the robot and executing the robot path relative to the “Object Space”.
- 18. The method of claim 1 wherein step c) iii) is accomplished by memorizing the “Object Space” and step d) iii) is accomplished by calculating the transformation between the memorized “object space” and the current “object space” and communicating this transformation to the robot to be used for correcting the operation path points.
- 19. The method of claim 1 wherein step c) iii) is accomplished by memorizing the “Object Space” and step d) iii) is accomplished by calculating the transformation between the memorized “object space” and the current “object space” and using this transformation to modify the robot operation path points and communicating the modified path points to the robot for playback.
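The correction recited in claims 18 and 19 can be expressed as a single rigid transformation between the memorized and current “Object Space”, applied to the trained path points. A hedged sketch with assumed example poses:

```python
import numpy as np

def make_transform(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# "Object Space" in the Robot Frame as memorized at training time, and as
# located at run time (illustrative values).
T_trained = make_transform(np.eye(3), [1.0, 0.0, 0.5])
T_current = make_transform(rot_z(0.05), [1.1, -0.02, 0.5])

# Transformation mapping the trained situation onto the current one:
# anything rigid to the object moves by T_corr = T_current @ inv(T_trained).
T_corr = T_current @ np.linalg.inv(T_trained)

# Claim 19's variant: apply the correction to the trained path points and
# send the modified points to the robot for playback.
trained_path = np.array([[1.0, 0.0, 0.6, 1.0],
                         [1.2, 0.1, 0.6, 1.0]])   # homogeneous path points
corrected_path = (T_corr @ trained_path.T).T
print(corrected_path[:, :3])
```

In claim 18's variant, `T_corr` itself would be communicated to the robot, which performs the same correction internally.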
- 20. The method of claim 1 wherein steps c) i) and d) ii) are accomplished by a model-based pose estimation method selected from the following group:
(a) 3D pose estimation using non-linear optimization refinement based on maximum likelihood criteria; (b) 3D pose estimation from line correspondences, in which the selected features are edges, using an image Jacobian; (c) 3D pose estimation using “orthogonal iteration”; (d) 3D pose approximation under weak perspective conditions; (e) 3D pose approximation using the Direct Linear Transformation (DLT).
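Option (e) can be sketched as follows: with at least six non-coplanar 2D-3D correspondences, the 3×4 projection matrix is recovered up to scale from the null space of a linear system, after which a rotation and translation are extracted. This is a textbook DLT sketch under noise-free assumptions, not the patent's implementation:

```python
import numpy as np

def dlt_pose(K, pts3d, pts2d):
    """Approximate camera pose from >= 6 non-coplanar 2D-3D correspondences
    via the Direct Linear Transformation."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, (X, x) in enumerate(zip(pts3d, pts2d)):
        Xh = np.array([*X, 1.0])
        u, v = x
        A[2 * i, 0:4] = Xh          # row for:  p1.X - u * p3.X = 0
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh      # row for:  p2.X - v * p3.X = 0
        A[2 * i + 1, 8:12] = -v * Xh
    _, _, Vt = np.linalg.svd(A)     # projection matrix P is the null vector of A
    P = Vt[-1].reshape(3, 4)
    M = np.linalg.inv(K) @ P        # strip the intrinsics: M ~ [R | t]
    s = 1.0 / np.linalg.norm(M[2, :3])   # fix scale from the third rotation row
    if M[2, 3] * s < 0:             # keep the object in front of the camera
        s = -s
    B, t = s * M[:, :3], s * M[:, 3]
    U, _, Vt2 = np.linalg.svd(B)    # project B onto the nearest rotation matrix
    return U @ Vt2, t

# Synthetic check with an assumed intrinsic matrix and ground-truth pose.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
t_true = np.array([0.1, -0.05, 2.0])
pts3d = np.array([[0, 0, 0], [0.2, 0, 0], [0, 0.2, 0], [0, 0, 0.2],
                  [0.2, 0.2, 0.1], [0.1, 0.05, 0.15]], float)
cam = (R_true @ pts3d.T).T + t_true
pts2d = (K @ cam.T).T
pts2d = pts2d[:, :2] / pts2d[:, 2:3]

R_est, t_est = dlt_pose(K, pts3d, pts2d)
```

In practice the DLT result is typically used only to initialize one of the iterative refinements in options (a) through (c).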
- 21. The method of claim 4 wherein in step d) i) if a sufficient number of features are not found in the field of view of the camera, the relative position and/or orientation of the object is changed until sufficient features are found.
- 22. The method of claim 5 wherein in step d) i) if a sufficient number of features are not found in the field of view of the camera, changing the relative position and/or orientation of the camera until sufficient features are found.
- 23. The method of claim 6 wherein in step d) i) if a sufficient number of features are not found in the field of view of the camera, changing the relative position and/or orientation of the camera and/or object until sufficient features are found.
- 24. The method of claim 4 wherein the following steps are carried out after step d) ii) and before step d) iii):
(a) calculating the necessary movement of the object relative to the camera using the transformation between the “Object Space” and “Robot Frame” such that the relative position and orientation of the object and the camera is similar to that at the time of training; (b) executing the relative movement as calculated in previous step; (c) finding the “Object Space→Sensor Array Space” transformation in the same way as in step d) ii).
- 25. The method of claim 5 wherein the following steps are carried out after step d) ii) and before step d) iii):
(a) calculating the necessary movement of the object relative to the camera using the transformation between the “Object Space” and “Robot Frame” such that the relative position and orientation of the object and the camera is similar to that at the time of training; (b) executing the relative movement as calculated in previous step; (c) finding the “Object Space→Sensor Array Space” transformation in the same way as in step d) ii).
- 26. The method of claim 6 wherein the following steps are carried out after step d) ii) and before step d) iii):
(a) calculating the necessary movement of the object relative to the camera using the transformation between the “Object Space” and “Robot Frame” such that the relative position and orientation of the object and the camera is similar to that at the time of training; (b) executing the relative movement as calculated in previous step; (c) finding the “Object Space→Sensor Array Space” transformation in the same way as in step d) ii).
- 27. The method of claim 1 wherein in steps b) and d) the object features are extracted from multiple images captured by the same sensor array located in the same position, and each image is formed under a different combination of lighting and filters to highlight a group of object features that are not apparent in other images.
- 28. The method of claim 2 wherein steps a) and b) are accomplished by using a self calibration method of robotic eye and hand-eye relationship with model identification.
- 29. The method of claim 10 wherein step b) iii) is accomplished by using the robot to automatically move the camera relative to the object and capturing images at multiple known robot positions; wherein step c) iii) is accomplished by creating a new frame called the “Object Frame” that is in constant relationship with the “Object Space”, sending the “Object Frame” to the robot, and training the intended operation path relative to the “Object Frame”; wherein step d) iii) is accomplished by computing the “Object Space” inside the “Robot Frame” using the transformation between the “Object Space” and the “Sensor Array” and the “Robot—Eye Calibration”, then calculating and sending the “Object Frame” to the robot and executing the robot path relative to the “Object Frame”; wherein the following steps are carried out after step d) ii) and before step d) iii):
(a) calculating the necessary movement of the object relative to the camera using the transformation between the “Object Space” and “Robot Frame” such that the relative position and orientation of the object and the camera is similar to that at the time of training; (b) executing the relative movement as calculated in previous step; (c) finding the “Object Space→Sensor Array Space” transformation in the same way as in step d) ii); and wherein in step d) i) the search area for some of the features is based upon the position and orientation of some other features.
- 30. A system for three-dimensional object location and guidance to allow robotic manipulation of an object with variable position and orientation by a robot using a sensor array, comprising:
(a) a sensor array;
(b) a robot;
(c) means for calibrating the sensor array to provide a Robot—Eye Calibration by finding the intrinsic parameters of said sensor array and the position of the sensor array relative to a preferred robot coordinate system (“Robot Frame”) by placing a calibration model in the field of view of said sensor array;
(d) means for training object features by:
  (i) positioning the object and the sensor array such that the object is located in the field of view of the sensor array and acquiring and forming an image of the object;
  (ii) selecting at least 5 visible object features from the image;
  (iii) creating a 3D model of the object (“Object Model”) by calculating the 3D position of each feature relative to a coordinate system rigid to the object (“Object Space”);
(e) means for training a robot operation path by:
  (i) computing the “Object Space→Sensor Array Space” transformation using the “Object Model” and the positions of the features in the image;
  (ii) computing the “Object Space” position and orientation in “Robot Frame” using the transformation from “Object Space→Sensor Array Space” and “Robot—Eye Calibration”;
  (iii) coordinating the desired robot operation path with the “Object Space”;
(f) means for carrying out object location and robot guidance by:
  (i) acquiring and forming an image of the object using the sensor array, searching for and finding said at least 5 trained features;
  (ii) with the positions of features in the image and the corresponding “Object Model” as determined in the training step, computing the object location as the transformation between the “Object Space” and the “Sensor Array” and the transformation between the “Object Space” and “Robot Frame”;
  (iii) communicating said computed object location to the robot and modifying robot path points according to said computed object location.
- 31. The method of claim 1 whereby during the operation of the system there exists a relative velocity between the sensor array and the object, wherein step d) is executed in a continuous control loop and provides real-time positional feedback to the robot for the purpose of correcting the intended robot operation path.
- 32. The method of claim 1 whereby the object features are created using markers, lighting or other means.
Priority Claims (1)

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2,369,845 | Jan 2002 | CA | |
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation-in-part of U.S. patent application Ser. No. 10/153,680, filed May 24, 2002, which is pending.
Continuation in Parts (1)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10153680 | May 2002 | US |
| Child | 10634874 | Aug 2003 | US |