Claims
- 1. A method of autonomous localization and mapping, the method comprising:
visually observing an environment via a visual sensor; maintaining a map of landmarks in a data store, where the map of landmarks is based at least in part on visual observations of the environment; receiving data from a dead reckoning sensor, where the dead reckoning sensor relates to movement of the visual sensor within the environment; using data from the dead reckoning sensor and a prior pose estimate to predict a new device pose in a global reference frame at least partly in response to a determination that a known landmark has not at least recently been encountered; and using data from the visual sensor to predict a new device pose in the global reference frame at least partly in response to a determination that a known landmark has been recognized, where the new device pose estimate is based at least in part on a previous pose estimate associated with the known landmark, and using the visual sensor data to update one or more maps.
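In algorithmic terms, claim 1 recites a predict-correct loop: between landmark sightings the pose is propagated by dead reckoning, and a recognized landmark re-anchors it. The Python sketch below is illustrative only; the names `Pose`, `compose`, and `predict_pose`, and the planar (x, y, heading) state, are assumptions for the sketch, not taken from the specification.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Device pose in the global reference frame (planar pose assumed)."""
    x: float
    y: float
    theta: float  # heading, in radians

def compose(a: Pose, b: Pose) -> Pose:
    """Apply relative pose b, expressed in the frame of pose a."""
    c, s = math.cos(a.theta), math.sin(a.theta)
    return Pose(a.x + c * b.x - s * b.y,
                a.y + s * b.x + c * b.y,
                a.theta + b.theta)

def predict_pose(prior: Pose, odometry: Pose, visual_match=None) -> Pose:
    """One iteration of the claim-1 method.

    odometry: dead-reckoned motion since the prior pose (a relative pose).
    visual_match: None when no known landmark has recently been
        encountered; otherwise a (landmark_pose, relative_pose) pair
        recovered by recognizing a known landmark in the current image.
    """
    if visual_match is None:
        # Predict from the dead reckoning sensor and the prior pose.
        return compose(prior, odometry)
    # Predict from the pose previously associated with the landmark
    # and the relative pose measured by the visual sensor.
    landmark_pose, relative = visual_match
    return compose(landmark_pose, relative)
```

The map-update step recited at the end of claim 1 would follow each such call, recording the new pose against any observed features.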
- 2. The method as defined in claim 1, further comprising using the autonomous localization and mapping in a mobile robot.
- 3. The method as defined in claim 1, wherein the map comprises one or more maps.
- 4. The method as defined in claim 1, further comprising using data from the dead reckoning sensor and a prior pose estimate to add a new landmark to the map at least partly in response to a determination that a new landmark has been created, wherein using data from the dead reckoning sensor and the prior pose estimate to add a new landmark to the map further comprises:
determining that a new landmark has been detected; storing selected identifiable features of the new landmark; storing the new device pose estimate; and identifiably associating the new device pose estimate with the new landmark.
- 5. The method as defined in claim 1, further comprising using data from the dead reckoning sensor and a prior pose estimate to add a new landmark to the map at least partly in response to a determination that a new landmark has been created, wherein using data from the dead reckoning sensor and the prior pose estimate to add a new landmark to the map further comprises:
determining that a new landmark has been detected; storing selected identifiable features of the new landmark; calculating 3-D coordinates of the selected identifiable features; relating the selected identifiable features to 2-D image locations, wherein the images are received from the visual sensor; storing the new device pose estimate; and identifiably associating the new device pose estimate and the calculated 3-D coordinates with the new landmark.
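Claims 4 and 5 together describe the record created for a new landmark. A minimal sketch of such a record follows, reusing `Pose` from the sketch under claim 1; all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """One landmark-map entry per claim 5 (field names illustrative)."""
    landmark_id: int
    features: list          # selected identifiable features (descriptors)
    points_3d: list         # calculated 3-D coordinates of those features
    image_locations: list   # corresponding 2-D locations in the source image
    creation_pose: "Pose"   # new device pose estimate at creation time

def add_landmark(landmark_map: dict, landmark: Landmark) -> None:
    """Identifiably associate the pose and 3-D coordinates with the
    landmark by keying the record on the landmark's identifier."""
    landmark_map[landmark.landmark_id] = landmark
```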
- 6. The method as defined in claim 1, wherein using data from the visual sensor to predict the new device pose in the global reference frame at least partly in response to the determination that the known landmark has been recognized further comprises:
retrieving a landmark pose and calculated 3-D coordinates associated with the landmark; determining the relative pose that projects at least a portion of the calculated 3-D coordinates onto the corresponding features observed in the new image; and computing the new device pose estimate based at least in part on the retrieved landmark pose and the relative pose.
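The "determining the relative pose" step of claim 6 is what is now commonly called a perspective-n-point (PnP) solve: find the camera pose under which the stored 3-D points project onto the observed 2-D features. The final composition step is simple; the sketch below reuses `compose` and `Landmark` from the earlier sketches and abstracts the projection solver behind a parameter.

```python
def pose_from_known_landmark(landmark: "Landmark", solve_relative_pose) -> "Pose":
    """Claim-6 computation.  solve_relative_pose is the caller's
    projection solver: given the landmark's calculated 3-D coordinates,
    it returns the relative pose that projects at least a portion of
    them onto the corresponding features observed in the new image."""
    relative = solve_relative_pose(landmark.points_3d)
    # New device pose estimate = retrieved landmark pose composed with
    # the determined relative pose.
    return compose(landmark.creation_pose, relative)
```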
- 7. The method as defined in claim 1, wherein the visual sensor corresponds to one or more cameras.
- 8. The method as defined in claim 1, wherein the dead reckoning sensor corresponds to at least one of an odometer and a pedometer.
- 9. A computer program embodied in a tangible medium for autonomous localization and mapping, the computer program comprising:
a module with instructions configured to visually observe an environment via a visual sensor; a module with instructions configured to maintain a map of landmarks in a data store, where the map of landmarks is based at least in part on visual observations of the environment; a module with instructions configured to receive data from a dead reckoning sensor, where the dead reckoning sensor relates to movement of the visual sensor within the environment; a module with instructions configured to use data from the dead reckoning sensor and a prior pose estimate to predict a new device pose in a global reference frame at least partly in response to a determination that a known landmark has not at least recently been encountered; and a module with instructions configured to use data from the visual sensor to predict a new device pose in the global reference frame at least partly in response to a determination that a known landmark has been recognized, where the new device pose estimate is based at least in part on a previous pose estimate associated with the known landmark, and using the visual sensor data to update one or more maps.
- 10. The computer program as defined in claim 9, further comprising a module with instructions configured to use data from the dead reckoning sensor and a prior pose estimate to add a new landmark to the map at least partly in response to a determination that a new landmark has been created, wherein the module with instructions configured to use data from the dead reckoning sensor and the prior pose estimate to add a new landmark to the map further comprises:
instructions configured to determine that a new landmark has been detected; instructions configured to store selected identifiable features of the new landmark; instructions configured to calculate 3-D coordinates of the selected identifiable features; instructions configured to relate the selected identifiable features to 2-D image locations, wherein the images are received from the visual sensor; instructions configured to store the new device pose estimate; and instructions configured to identifiably associate the new device pose estimate and the calculated 3-D coordinates with the new landmark.
- 11. A method of localization and mapping in a mobile device that travels in an environment, the method comprising:
receiving images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment; extracting visual features from one or more images; matching at least a portion of the visual features to previously observed features; estimating one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed; using the one or more estimated relative poses to localize the mobile device within one or more maps; and updating the one or more maps.
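Claim 11 strings the per-image steps into a pipeline. The sketch below assumes the extractor, matcher, and relative-pose estimator are supplied as callables (for example a SIFT extractor, per claim 15), reuses `compose` and `Landmark` from the earlier sketches, and uses hypothetical `localize` and `update` methods as stand-ins for the map operations.

```python
def process_image(image, maps, extract, match, estimate_relative_pose):
    """One claim-11 pass: extract, match, estimate, localize, update."""
    features = extract(image)               # extract visual features
    for m in maps:
        # Match at least a portion of the features to previously
        # observed features; each match names a landmark in this map.
        for landmark, paired_features in match(features, m.landmarks):
            # Estimate the device pose relative to the previously
            # observed feature set, then lift it into the global frame.
            relative = estimate_relative_pose(paired_features)
            pose = compose(landmark.creation_pose, relative)
            m.localize(pose)                # localize within the map
        m.update(features)                  # update the map
    return maps
```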
- 12. The method as defined in claim 11, wherein estimating the relative pose of the device further comprises calculating a change in pose of the device from a first pose corresponding to the stored features to a second pose corresponding to the analyzed image.
- 13. The method as defined in claim 11, further comprising:
retrieving data from one or more dead reckoning sensors; using the data from the one or more dead reckoning sensors to estimate a pose for the device when the process determines that there has not been a match between the visually-detectable features of the image and the stored features; and estimating the pose of the device using dead reckoning data acquired approximately after the mobile device was at a last estimated position, where the last estimated position corresponds to a pose determined at least in part by a visual measurement.
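Claim 13's fallback integrates dead-reckoning increments forward from the last visually determined pose. A sketch, again reusing `Pose` and `compose`:

```python
def dead_reckon_from_last_fix(last_visual_pose: "Pose", increments) -> "Pose":
    """Estimate the current pose when no visual match is available.

    increments: relative poses from the dead reckoning sensors,
    acquired after the device was at last_visual_pose (a pose that was
    determined at least in part by a visual measurement).
    """
    pose = last_visual_pose
    for step in increments:
        pose = compose(pose, step)
    return pose
```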
- 14. The method as defined in claim 11, wherein using one or more estimated relative poses to localize the mobile device within one or more maps further comprises computing one or more pose hypotheses.
- 15. The method as defined in claim 11, wherein the visual features correspond to scale-invariant features (SIFT).
- 16. The method as defined in claim 11, further comprising:
matching the visual features from the image to one or more sets of previously-observed features, where a set of previously-observed features relates to a landmark within a map; estimating one or more relative poses for the mobile device based at least in part on selected matches to the one or more sets of previously-observed features; and localizing the mobile device within one or more maps by updating the corresponding one or more poses with the plurality of estimated relative poses.
- 17. A circuit for localization and mapping in a mobile device that travels in an environment, the circuit comprising:
a circuit configured to receive images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment; a circuit configured to extract visual features from one or more images; a circuit configured to match at least a portion of the visual features to previously-observed features; a circuit configured to estimate one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed; a circuit configured to use the one or more estimated relative poses to localize the mobile device within one or more maps; and a circuit configured to update the one or more maps.
- 18. The circuit as defined in claim 17, wherein the circuit is embodied in a robot for navigation of the robot.
- 19. The circuit as defined in claim 17, wherein the circuit configured to estimate the relative pose of the device is further configured to calculate a change in pose of the device from a first pose corresponding to the stored features to a second pose corresponding to the analyzed image.
- 20. The circuit as defined in claim 17, further comprising:
a circuit configured to retrieve data from one or more dead reckoning sensors; a circuit configured to use the data from the one or more dead reckoning sensors to estimate a pose for the device when the process determines that there has not been a match between the visually-detectable features of the image and the stored features; and a circuit configured to estimate the pose of the device using dead reckoning data acquired approximately after the mobile device was at a last estimated position, where the last estimated position corresponds to a pose determined at least in part by a visual measurement.
- 21. The circuit as defined in claim 17, further comprising:
a circuit configured to match the visual features from the image to one or more sets of previously-observed features, where a set of previously-observed features relates to a landmark within a map; a circuit configured to estimate one or more relative poses for the mobile device based at least in part on selected matches to the one or more sets of previously-observed features; and a circuit configured to localize the mobile device within one or more maps by updating the corresponding one or more poses with the plurality of estimated relative poses.
- 22. A computer program embodied in a tangible medium for localization and mapping in a mobile device that travels in an environment, the computer program comprising:
a module with instructions configured to receive images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment; a module with instructions configured to extract visual features from one or more images; a module with instructions configured to match at least a portion of the visual features to previously-observed features; a module with instructions configured to estimate one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed; a module with instructions configured to use the one or more estimated relative poses to localize the mobile device within one or more maps; and a module with instructions configured to update the one or more maps.
- 23. The computer program as defined in claim 22, wherein the module with instructions configured to estimate the relative pose of the device further comprises instructions configured to calculate a change in pose of the device from a first pose corresponding to the stored features to a second pose corresponding to the analyzed image.
- 24. The computer program as defined in claim 22, further comprising:
a module with instructions configured to retrieve data from one or more dead reckoning sensors; a module with instructions configured to use the data from the one or more dead reckoning sensors to estimate a pose for the device when the process determines that there has not been a match between the visually-detectable features of the image and the stored features; and a module with instructions configured to estimate the pose of the device using dead reckoning data acquired approximately after the mobile device was at a last estimated position, where the last estimated position corresponds to a pose determined at least in part by a visual measurement.
- 25. A method of autonomous localization, the method comprising:
using dead reckoning data for navigation between observations of visually-identifiable landmarks; and using a visual observation of a landmark with a reference in a global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data.
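Claim 25 states the central drift argument: dead-reckoning error grows without bound between sightings, and each observation of a globally referenced landmark pulls the estimate back before dead reckoning resumes. As a toy illustration only, the adjustment could be a weighted blend of the two estimates; the specification's actual estimator may differ, and the gain below is arbitrary. `Pose` is reused from the sketch under claim 1.

```python
def adjust_estimate(dead_reckoned: "Pose", visual: "Pose", gain: float = 0.8) -> "Pose":
    """Pull the dead-reckoned pose toward the visually derived pose so
    that subsequent dead reckoning restarts from a lower-drift estimate.
    gain=1.0 would trust the visual fix completely.  Angle wraparound
    is ignored for brevity."""
    def blend(a: float, b: float) -> float:
        return a + gain * (b - a)
    return Pose(blend(dead_reckoned.x, visual.x),
                blend(dead_reckoned.y, visual.y),
                blend(dead_reckoned.theta, visual.theta))
```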
- 26. The method as defined in claim 25, wherein the autonomous localization is used to estimate the pose of a mobile robot.
- 27. The method as defined in claim 25, wherein the amount of drift is reduced such that a resulting amount of drift is substantially less than the error in most of the visual measurements.
- 28. The method as defined in claim 25, wherein a resulting amount of drift is substantially negligible.
- 29. The method as defined in claim 25, wherein the dead reckoning data corresponds to data derived from at least one of an odometer and a pedometer.
- 30. The method as defined in claim 25, wherein the visual observation is made by one or more cameras.
- 31. The method as defined in claim 25, further comprising:
observing a visually-identifiable landmark that is not referenced in a data store; storing the estimated pose corresponding to when the visually-identifiable landmark was observed; and storing references to the visually-identifiable landmark such that a relative pose to the landmark can be calculated when the visually-identifiable landmark is re-observed.
- 32. A circuit for autonomous localization, the circuit comprising:
a circuit configured to use dead reckoning data for navigation between observations of visually-identifiable landmarks; and a circuit configured to use a visual observation of a landmark with a reference in a global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data.
- 33. The circuit as defined in claim 32, wherein the circuit is embodied in a mobile robot to estimate the pose of the mobile robot.
- 34. The circuit as defined in claim 32, wherein the dead reckoning data corresponds to data derived from at least one of an odometer and a pedometer.
- 35. The circuit as defined in claim 32, further comprising:
a circuit adapted to observe a visually-identifiable landmark that is not referenced in a data store; a circuit adapted to store the estimated pose corresponding to when the visually-identifiable landmark was observed; and a circuit adapted to store references to the visually-identifiable landmark such that a relative pose to the landmark can be calculated when the visually-identifiable landmark is re-observed.
- 36. A computer program embodied in a tangible medium for autonomous localization, the computer program comprising:
a module with instructions configured to use dead reckoning data for navigation between observations of visually-identifiable landmarks; and a module with instructions configured to use a visual observation of a landmark with a reference in the global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data.
- 37. The computer program as defined in claim 36, wherein the dead reckoning data corresponds to data derived from at least one of an odometer and a pedometer.
- 38. The computer program as defined in claim 36, further comprising:
a module with instructions configured to observe a visually-identifiable landmark that is not referenced in a data store; a module with instructions configured to store the estimated pose corresponding to when the visually-identifiable landmark was observed; and a module with instructions configured to store references to the visually-identifiable landmark such that a relative pose to the landmark can be calculated when the visually-identifiable landmark is re-observed.
- 39. A circuit for autonomous localization, the circuit comprising:
a means for using dead reckoning data between observations of visually-identifiable landmarks; and a means for using a visual observation of a landmark with a reference in the global reference frame to adjust an estimate of a pose such that an amount of drift in a pose later estimated with the dead reckoning data is substantially reduced.
- 40. The circuit as defined in claim 39, wherein the circuit is embodied in a mobile robot to estimate the pose of the mobile robot.
- 41. The circuit as defined in claim 39, wherein the dead reckoning data corresponds to data derived from at least one of an odometer and a pedometer.
- 42. A method of autonomous localization and mapping, the method comprising:
receiving images from a visual sensor; receiving data from a dead reckoning sensor; generating a map based on landmarks observed in the images, where a landmark is associated with a device pose as at least partly determined by data from the dead reckoning sensor, where the landmarks are identified by visual features of an unaltered or unmodified environment and not by detection of artificial navigational beacons; and localizing within the map by using a combination of recognition of visual features of the environment and dead reckoning data.
- 43. The method as defined in claim 42, further comprising using the localization and mapping for a mobile robot.
- 44. The method as defined in claim 42, wherein the visual sensor corresponds to a single camera.
- 45. The method as defined in claim 44, wherein the visual sensor is coupled to a mobile robot, further comprising having the mobile robot move to provide images with different perspective views.
- 46. The method as defined in claim 42, wherein the visual sensor corresponds to multiple cameras.
- 47. The method as defined in claim 42, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
- 48. The method as defined in claim 42, wherein generating the map and localizing within the map are performed in real time.
- 49. The method as defined in claim 42, further comprising updating the map by using a combination of recognition of visual features of the environment and dead reckoning data.
- 50. A computer program embodied in a tangible medium for autonomous localization and mapping, the computer program comprising:
a module with instructions configured to receive images from a visual sensor; a module with instructions configured to receive data from a dead reckoning sensor; a module with instructions configured to generate a map based on landmarks observed in the images, where a landmark is associated with a device pose as at least partly determined by data from the dead reckoning sensor, where the landmarks are identified by visual features of an unaltered or unmodified environment and not by detection of artificial navigational beacons; and a module with instructions configured to localize within the map by using a combination of recognition of visual features of the environment and dead reckoning data.
- 51. The computer program as defined in claim 50, wherein the visual sensor is coupled to a mobile robot, further comprising a module with instructions configured to have the mobile robot move to provide images with different perspective views.
- 52. The computer program as defined in claim 50, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
- 53. A method of adding a landmark to a map of landmarks, the method comprising:
using visual features observed in an environment as landmarks; referencing poses for landmarks in a map of landmarks in a global reference frame; storing one or more coordinates of the landmark's 3-D features in the landmark reference frame; and storing an initial estimate of landmark pose.
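Claim 53 maintains two frames per landmark: the landmark's pose referenced in the global frame, and the 3-D feature coordinates in the landmark's own reference frame. One simple convention, assumed here for illustration, anchors the landmark frame at the device pose that created it, so the displacements measured from the visual sensor (claim 55) serve directly as landmark-frame coordinates. `Pose` is reused from the sketch under claim 1.

```python
def new_map_entry(device_pose: "Pose", displacements_3d: list) -> dict:
    """Build a claim-53 map entry.

    device_pose: initial estimate of the landmark pose in the global
        reference frame (refinable by a subsequent measurement, per
        claim 54).
    displacements_3d: 3-D displacements of the observed features,
        measured from the visual sensor; under the anchoring convention
        above, these are the coordinates in the landmark frame.
    """
    return {"landmark_pose": device_pose,
            "points_3d_landmark_frame": list(displacements_3d)}
```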
- 54. The method as defined in claim 53, further comprising altering the initial estimate of landmark pose by a subsequent measurement.
- 55. The method as defined in claim 53, wherein storing one or more coordinates further comprises measuring 3-dimensional displacements from a visual sensor coupled to a mobile robot.
- 56. The method as defined in claim 53, wherein the observed visual features correspond to scale-invariant features (SIFT).
- 57. The method as defined in claim 53, wherein the method is performed in real time.
- 58. The method as defined in claim 53, further comprising using images from a single camera to detect the visual features.
- 59. A computer program embodied in a tangible medium for adding a landmark to a map of landmarks, the computer program comprising:
a module with instructions configured to use visual features observed in an environment as landmarks; a module with instructions configured to reference poses for landmarks in a map of landmarks in a global reference frame; a module with instructions configured to store one or more coordinates of the landmark's 3-D features in the landmark reference frame; and a module with instructions configured to store an initial estimate of landmark pose.
- 60. The computer program as defined in claim 59, wherein the module with instructions configured to store one or more coordinates further comprises instructions configured to measure 3-dimensional displacements from a visual sensor coupled to a mobile robot.
- 61. The computer program as defined in claim 59, wherein the observed visual features correspond to scale-invariant features (SIFT).
RELATED APPLICATION
[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 60/434,269, filed Dec. 17, 2002, and U.S. Provisional Application No. 60/439,049, filed Jan. 9, 2003, the entireties of which are hereby incorporated by reference.
[0002] Appendix A, which forms a part of this disclosure, is a list of commonly owned copending U.S. patent applications. Each one of the applications listed in Appendix A is hereby incorporated herein in its entirety by reference thereto.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60/434,269 | Dec. 2002 | US |
| 60/439,049 | Jan. 2003 | US |