Claims
- 1. A method of computing a relative pose for autonomous localization for a mobile device, the method comprising:
identifying matching features of a stored landmark and of an image, where the image is provided by a visual sensor coupled to the mobile device; determining 2-dimensional coordinates within the image for the matching features of the image; retrieving 3-dimensional coordinates of the matching features of the stored landmark; computing a hypothetical device pose by:
projecting the 3-dimensional coordinates of the matching features of the stored landmark onto new 2-dimensional coordinates of a hypothetical image, where the hypothetical image corresponds to an image that would be observed by the visual sensor if the device were to be re-posed according to the hypothetical device pose; generating a projection error by comparing the new 2-dimensional coordinates to the 2-dimensional coordinates for the matching features of the image; and solving for the hypothetical device pose that corresponds to a low projection error; and using the hypothetical device pose as the computed relative device pose.
- 2. The method as defined in claim 1, wherein the low projection error is lower than an initial projection error, where the initial projection error is calculated by comparing the 2-dimensional coordinates for the matching features of the image to the 2-dimensional coordinates that would be obtained if the device were at an origin of a landmark reference frame and were to have zero heading relative to the landmark reference frame.
- 3. The method as defined in claim 1, wherein the low projection error corresponds to a minimum root mean square (RMS) projection error.
- 4. The method as defined in claim 1, wherein the matching features correspond to scale-invariant features (SIFT).
- 5. The method as defined in claim 1, wherein the 3-dimensional coordinates relate to displacements from a visual sensor coupled to the mobile device to the corresponding features at the time when the landmark was created.
- 6. The method as defined in claim 5, wherein the visual sensor corresponds to one or more cameras, and further comprising transforming the relative pose from a camera reference frame to a device reference frame.
- 7. The method as defined in claim 1, wherein the 3-dimensional coordinates for the matching features of the stored landmark are retrieved from a data store.
- 8. The method as defined in claim 1, wherein the 2-dimensional coordinates correspond to pixel locations.
- 9. A circuit for a mobile device that is configured to compute a relative pose for autonomous localization of the mobile device, the circuit comprising:
a means for identifying matching features of a stored landmark and of an image, where the image is provided by a visual sensor coupled to the mobile device; a means for determining 2-dimensional coordinates within the image for the matching features of the image; a means for retrieving 3-dimensional coordinates of the matching features of the stored landmark; a means for computing a hypothetical device pose further comprising:
a means for projecting the 3-dimensional coordinates of the matching features of the stored landmark onto new 2-dimensional coordinates of a hypothetical image, where the hypothetical image corresponds to an image that would be observed by the visual sensor if the device were to be re-posed according to the hypothetical device pose; a means for generating a projection error by comparing the new 2-dimensional coordinates to the 2-dimensional coordinates for the matching features of the image; and a means for solving for the hypothetical device pose that corresponds to a low projection error; and a means for using the hypothetical device pose as the computed relative device pose.
- 10. The circuit as defined in claim 9, wherein the matching features correspond to scale-invariant features (SIFT).
- 11. The circuit as defined in claim 9, wherein the mobile device comprises a mobile robot.
- 12. A computer program embodied in a tangible medium for computing a relative pose for autonomous localization for a mobile device, the computer program comprising:
a module with instructions configured to identify matching features of a stored landmark and of an image, where the image is provided by a visual sensor coupled to the mobile device; a module with instructions configured to determine 2-dimensional coordinates within the image for the matching features of the image; a module with instructions configured to retrieve 3-dimensional coordinates of the matching features of the stored landmark; a module with instructions configured to compute a hypothetical device pose, further comprising:
instructions configured to project the 3-dimensional coordinates of the matching features of the stored landmark onto new 2-dimensional coordinates of a hypothetical image, where the hypothetical image corresponds to an image that would be observed by the visual sensor if the device were to be re-posed according to the hypothetical device pose; instructions configured to generate a projection error by comparing the new 2-dimensional coordinates to the 2-dimensional coordinates for the matching features of the image; and instructions configured to solve for the hypothetical device pose that corresponds to a low projection error; and a module with instructions configured to use the hypothetical device pose as the computed relative device pose.
- 13. The computer program as defined in claim 12, wherein the matching features correspond to scale-invariant features (SIFT).
- 14. A circuit in a mobile device for computing a relative pose for autonomous localization for the mobile device, the circuit comprising:
a circuit configured to identify matching features of a stored landmark and of an image, where the image is provided by a visual sensor coupled to the mobile device; a circuit configured to determine 2-dimensional coordinates within the image for the matching features of the image; a circuit configured to retrieve 3-dimensional coordinates of the matching features of the stored landmark; a circuit configured to compute a hypothetical device pose, further comprising:
a circuit configured to project the 3-dimensional coordinates of the matching features of the stored landmark onto new 2-dimensional coordinates of a hypothetical image, where the hypothetical image corresponds to an image that would be observed by the visual sensor if the device were to be re-posed according to the hypothetical device pose; a circuit configured to generate a projection error by comparing the new 2-dimensional coordinates to the 2-dimensional coordinates for the matching features of the image; and a circuit configured to solve for the hypothetical device pose that corresponds to a low projection error; and where the circuit is configured to use the hypothetical device pose as the computed relative device pose.
- 15. The circuit as defined in claim 14, wherein the low projection error corresponds to a minimum root mean square (RMS) projection error.
- 16. The circuit as defined in claim 14, wherein the matching features correspond to scale-invariant features (SIFT).
- 17. The circuit as defined in claim 14, wherein the circuit is embodied in a robot for navigation of the robot.
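The pose-recovery method recited in independent claims 1, 9, 12, and 14 can be sketched in code. The claims leave the camera model, pose parameterization, and solver unspecified, so the pinhole intrinsics, the planar (x, y, heading) pose, and the coarse grid search below are all illustrative assumptions, not part of the claimed method; a real system would typically use an iterative nonlinear solver in place of the grid search.

```python
import math

# Hypothetical pinhole-camera intrinsics (not from the claims).
FX = FY = 500.0        # focal lengths, pixels
CX, CY = 320.0, 240.0  # principal point, pixels

def project(point_3d, pose):
    """Project a stored landmark feature (x, y, z in the landmark frame)
    into pixel coordinates for a device at pose = (tx, ty, heading).
    The camera is assumed to look along the device heading."""
    px, py, pz = point_3d
    tx, ty, heading = pose
    dx, dy = px - tx, py - ty
    c, s = math.cos(heading), math.sin(heading)
    depth = c * dx + s * dy        # forward distance from the camera
    lateral = -s * dx + c * dy     # leftward distance from the camera
    if depth <= 0:
        return None                # feature behind the camera
    u = CX - FX * lateral / depth  # leftward features map left of centre
    v = CY - FY * pz / depth       # higher features map up the image
    return (u, v)

def rms_projection_error(features_3d, observed_2d, pose):
    """The claim-1 projection error: reproject each stored 3-D feature
    under the hypothetical pose and compare with the matched 2-D image
    coordinates (claim 3's RMS formulation)."""
    total, n = 0.0, 0
    for p3, (u_obs, v_obs) in zip(features_3d, observed_2d):
        proj = project(p3, pose)
        if proj is None:
            return float("inf")
        u, v = proj
        total += (u - u_obs) ** 2 + (v - v_obs) ** 2
        n += 1
    return math.sqrt(total / n)

def solve_pose(features_3d, observed_2d):
    """Solve for the hypothetical pose with the lowest projection error.
    A coarse grid search stands in for a real iterative solver."""
    best_pose, best_err = None, float("inf")
    for tx in [i * 0.1 for i in range(-10, 11)]:
        for ty in [i * 0.1 for i in range(-10, 11)]:
            for th in [i * math.radians(5) for i in range(-6, 7)]:
                err = rms_projection_error(features_3d, observed_2d,
                                           (tx, ty, th))
                if err < best_err:
                    best_pose, best_err = (tx, ty, th), err
    return best_pose, best_err

# Synthetic check: features stored in the landmark frame, "observed"
# from a known true pose; the solver should recover that pose.
features = [(2.0, 0.5, 0.3), (3.0, -0.8, -0.2),
            (2.5, 0.2, 0.5), (4.0, 1.0, 0.0)]
true_pose = (0.3, -0.2, math.radians(10))
observations = [project(p, true_pose) for p in features]
pose, err = solve_pose(features, observations)
```

Because the synthetic observations are generated from the same camera model used in the search, the recovered pose matches the true pose to within the grid resolution; claim 6's camera-to-device transform would then be applied as a final fixed-offset composition.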
RELATED APPLICATION
[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 60/434,269, filed Dec. 17, 2002, and U.S. Provisional Application No. 60/439,049, filed Jan. 9, 2003, the entireties of which are hereby incorporated by reference.
[0002] Appendix A, which forms a part of this disclosure, is a list of commonly owned copending U.S. patent applications. Each one of the applications listed in Appendix A is hereby incorporated herein in its entirety by reference thereto.
Provisional Applications (2)

| Number   | Date     | Country |
|----------|----------|---------|
| 60434269 | Dec 2002 | US      |
| 60439049 | Jan 2003 | US      |