Claims
- 1. A method of creating a landmark for navigation, the method comprising:
receiving a plurality of images from a visual sensor;
retrieving dead reckoning data corresponding to the received plurality of images;
using the dead reckoning data to select at least 2 images from the plurality of images, where the selected images are spaced apart;
identifying visual features common to at least 2 of the selected images;
determining 3-dimensional coordinates of the identified visual features using the selected images; and
identifiably storing the 3-dimensional coordinates of the identified visual features such that the visual features and their corresponding 3-dimensional coordinates are associated, wherein the landmark is used for navigation.
- 2. The method as defined in claim 1, wherein using the landmark for navigation further comprises using the landmark for robot navigation.
- 3. The method as defined in claim 1, wherein the visual sensor comprises a single camera.
- 4. The method as defined in claim 1, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
- 5. The method as defined in claim 1, wherein determining 3-dimensional coordinates further comprises simultaneously solving for the 3-D coordinates and relative poses of the camera or cameras for each image.
- 6. The method as defined in claim 1, wherein the at least 2 images comprise at least 3 images spaced apart, and wherein determining 3-dimensional coordinates further comprises:
using distances between the at least 3 images computed from dead reckoning data;
using 2-dimensional image coordinates for selected features of the images; and
calculating the 3-dimensional coordinates by simultaneously calculating 3-dimensional coordinates and relative poses from the distances and the 2-dimensional image coordinates.
- 7. The method as defined in claim 6, wherein calculating the 3-dimensional coordinates comprises using the trifocal tensor.
- 8. The method as defined in claim 1, wherein the at least 2 images are spaced apart by at least a predetermined nonzero baseline that permits determining the 3-dimensional coordinates of the identified visual features.
- 9. The method as defined in claim 1, wherein the identified visual features correspond to scale-invariant features (SIFT).
- 10. The method as defined in claim 1, further comprising providing an indication of a new landmark to a mapping process.
- 11. The method as defined in claim 10, further comprising relating the new landmark to a corresponding timestamp, and providing the timestamp with the indication of the new landmark as inputs to the mapping process.
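Claims 1–11 describe selecting spaced-apart images with dead reckoning data and recovering 3-D feature coordinates from them. The sketch below, in Python with NumPy, illustrates two of those steps under stated assumptions: a greedy odometry-baseline frame selector (the spacing of claims 1 and 8) and midpoint triangulation, one standard way to compute a 3-D point from two calibrated views. The claims leave the actual solving method open (claims 5–7 mention simultaneous solving and the trifocal tensor), so function names and data layouts here are illustrative, not the patented method.

```python
import numpy as np

def select_spaced_frames(positions, min_baseline):
    """Greedily pick frame indices whose dead-reckoning positions are at
    least `min_baseline` apart (hypothetical helper for the image
    selection step of claims 1 and 8)."""
    picked = [0]
    for i in range(1, len(positions)):
        if np.linalg.norm(positions[i] - positions[picked[-1]]) >= min_baseline:
            picked.append(i)
    return picked

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3-D point closest to two viewing rays,
    each given by a camera center c and a unit direction d."""
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b  # near zero when the baseline is too small
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    # Midpoint of the closest points on the two rays.
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

With an exact correspondence the two rays intersect and the midpoint is the feature's 3-D position; a degenerate (near-parallel) ray pair is exactly the situation the nonzero-baseline requirement of claim 8 avoids.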
- 12. A computer program embodied in a tangible medium for creating a landmark for navigation, the computer program comprising:
a module with instructions configured to receive a plurality of images from a visual sensor;
a module with instructions configured to retrieve dead reckoning data corresponding to the received plurality of images;
a module with instructions configured to use the dead reckoning data to select at least 2 images from the plurality of images, where the selected images are spaced apart;
a module with instructions configured to identify visual features common to at least 2 of the selected images;
a module with instructions configured to determine 3-dimensional coordinates of the identified visual features using the selected images; and
a module with instructions configured to identifiably store the 3-dimensional coordinates of the identified visual features such that the visual features and their corresponding 3-dimensional coordinates are associated, wherein the landmark is used for navigation.
- 13. The computer program as defined in claim 12, wherein the module with instructions configured to use the landmark for navigation further comprises instructions configured to use the landmark for robot navigation.
- 14. The computer program as defined in claim 12, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
- 15. The computer program as defined in claim 12, wherein the module with instructions configured to determine 3-dimensional coordinates further comprises instructions configured to simultaneously solve for the 3-D coordinates and relative poses of the camera or cameras for each image.
- 16. The computer program as defined in claim 12, wherein the identified visual features correspond to scale-invariant features (SIFT).
- 17. A method of creating a landmark for navigation, the method comprising:
receiving a plurality of images from a visual sensor coupled to a mobile device;
selecting at least 2 images from a plurality of images, where the at least 2 images are spaced apart;
identifying visual features common to at least 2 of the selected images;
determining 3-dimensional coordinates of the identified visual features using the at least 2 images; and
identifiably storing information related to the identified visual features such that the visual features and the corresponding information are associated, wherein the landmark is used for navigation.
- 18. The method as defined in claim 17, wherein using the landmark for navigation further comprises using the landmark for robot navigation.
- 19. The method as defined in claim 17, further comprising selecting a reference frame corresponding to one of the at least two images.
- 20. The method as defined in claim 17, wherein the identifiably stored information related to the identified visual features corresponds to at least one of 3-D coordinates and feature descriptors.
- 21. The method as defined in claim 17, wherein the visual sensor comprises a single camera.
- 22. The method as defined in claim 17, wherein the visual sensor comprises a plurality of cameras that are spaced apart.
- 23. The method as defined in claim 17, wherein determining 3-dimensional coordinates further comprises simultaneously solving for the 3-D coordinates and relative poses of the visual sensor for each of the selected images.
- 24. The method as defined in claim 17, wherein the at least 2 images are spaced apart by at least a predetermined nonzero baseline that permits determining the 3-dimensional coordinates of the visual features.
- 25. The method as defined in claim 17, wherein the identified visual features correspond to scale-invariant features (SIFT).
- 26. The method as defined in claim 17, further comprising providing an indication of a new landmark to a mapping system.
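Claims 17–26 require that stored information (3-D coordinates and/or feature descriptors, per claim 20) stay associated with the visual features, with an optional reference frame tied to one of the images (claim 19). A minimal record type could look like the following; the field names and the 128-wide SIFT-style descriptor length are assumptions for illustration, not details taken from the claims.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Landmark:
    """Hypothetical landmark record. Row i of `descriptors` and row i of
    `points_3d` describe the same visual feature, keeping feature and
    coordinates associated as claims 17 and 20 require."""
    landmark_id: int
    descriptors: np.ndarray    # (N, 128) feature descriptors (claim 20)
    points_3d: np.ndarray      # (N, 3) 3-D coordinates (claim 20)
    reference_image: int       # index of the image used as the reference frame (claim 19)
```

Keeping descriptors and coordinates row-aligned in one record is one simple way to make the association "identifiable" without a separate lookup table.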
- 27. A circuit for creating a landmark for navigation, the circuit comprising:
a circuit configured to receive a plurality of images from a visual sensor coupled to a mobile device;
a circuit configured to select at least 2 images from a plurality of images, where the at least 2 images are spaced apart;
a circuit configured to identify visual features common to at least 2 of the selected images;
a circuit configured to determine 3-dimensional coordinates of the identified visual features using the at least 2 images; and
a circuit configured to identifiably store information related to the identified visual features such that the visual features and the corresponding information are associated, wherein the landmark is used for navigation.
- 28. The circuit as defined in claim 27, wherein the circuit is embodied in a robot for navigation of the robot.
- 29. The circuit as defined in claim 27, further comprising a circuit configured to select a reference frame corresponding to one of the at least two images.
- 30. A computer program embodied in a tangible medium for creating a landmark for navigation, the computer program comprising:
a module with instructions configured to receive a plurality of images from a visual sensor coupled to a mobile device;
a module with instructions configured to select at least 2 images from a plurality of images, where the at least 2 images are spaced apart;
a module with instructions configured to identify visual features common to at least 2 of the selected images;
a module with instructions configured to determine 3-dimensional coordinates of the identified visual features using the at least 2 images; and
a module with instructions configured to identifiably store information related to the identified visual features such that the visual features and the corresponding information are associated, wherein the landmark is used for navigation.
- 31. The computer program as defined in claim 30, wherein the module with instructions configured to use the landmark for navigation further comprises instructions configured to use the landmark for robot navigation.
- 32. The computer program as defined in claim 30, further comprising a module with instructions configured to select a reference frame corresponding to one of the at least two images.
- 33. A method of determining whether to add a landmark to a map for navigation, the method comprising:
retrieving an image from a visual sensor;
comparing features from the image to a plurality of stored features;
generating a list of matching landmarks from the comparison;
filtering matching landmarks from the list based at least in part on reliability tests; and
proceeding to a landmark creation process when there are no remaining matching landmarks in the list.
- 34. The method as defined in claim 33, wherein the visual sensor is coupled to a mobile robot.
- 35. The method as defined in claim 33, wherein filtering matching landmarks further comprises:
comparing the number of features common to the image and to a landmark to a predetermined number; and
filtering the landmark from the list when the number of common features for the landmark is below the predetermined number.
- 36. The method as defined in claim 33, wherein filtering matching landmarks further comprises:
initiating a computation of a camera pose corresponding to the matching features and to the landmark; and
filtering the landmark from the list upon a failure of the computation to converge to a result.
- 37. The method as defined in claim 33, further comprising providing match results when at least one matching landmark remains in the list.
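Claims 33–37 describe a match-or-create decision: filter candidate landmarks by reliability tests, then create a new landmark only when no candidate survives. A minimal sketch of that control flow, with a hypothetical tuple layout for the candidates, follows.

```python
def match_or_create(candidates, min_common=10):
    """Sketch of claims 33-37. `candidates` is a hypothetical list of
    (landmark_id, n_common_features, pose_converged) tuples produced by
    comparing image features against stored features. A landmark is
    filtered out if it shares too few features with the image (claim 35)
    or if its camera-pose computation failed to converge (claim 36)."""
    survivors = [c for c in candidates
                 if c[1] >= min_common and c[2]]
    if not survivors:
        # Claim 33: no remaining matches, so proceed to landmark creation.
        return ("create_landmark", [])
    # Claim 37: at least one match remains, so provide match results.
    return ("match_results", survivors)
```

The `min_common` threshold stands in for claim 35's "predetermined number"; its value here is arbitrary.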
- 38. A method of obtaining depth information for visual navigation, the method comprising:
receiving a first image from a visual sensor that is coupled to a mobile device, where the first image corresponds to a first pose;
receiving a second image from the visual sensor corresponding to a second pose, where the second pose is different from the first pose;
using common features identified in the first image and the second image to obtain depth information; and
using the depth information to create a landmark for visual navigation.
- 39. The method as defined in claim 38, further comprising:
receiving a third image from the visual sensor corresponding to a third pose different from the first pose and the second pose; and
wherein using common features further comprises using common features identified in the first image, the second image, and the third image to obtain the depth information.
- 40. The method as defined in claim 38, wherein the visual sensor comprises a camera.
- 41. The method as defined in claim 38, wherein the visual sensor comprises a camera that is coupled to the mobile device such that the camera generally faces a forward direction.
- 42. The method as defined in claim 38, wherein the visual sensor comprises a camera that is coupled to the mobile device such that the camera generally faces an upwards direction.
- 43. The method as defined in claim 38, further comprising having the mobile device coupled to the visual sensor move in a deliberate path such that, in consecutive images, the visual sensor is facing in approximately the same direction.
- 44. The method as defined in claim 38, further comprising selecting the first pose and the second pose such that a separation between the first pose and the second pose corresponds to at least a predetermined threshold, and where the predetermined threshold is adaptively determined at least in part based on the operating environment of the mobile device.
- 45. The method as defined in claim 38, wherein the mobile device moves by at least 5% of an average displacement to observed features between the first pose and the second pose.
- 46. The method as defined in claim 38, further comprising controlling movement of the mobile device under computer control.
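Claims 44–45 gate depth computation on sufficient separation between poses: the device should move by at least 5% of the average displacement to the observed features, with the threshold possibly adapted to the operating environment. A sketch of that check, assuming planar poses and a list of per-feature distances, follows; the `ratio` parameter is where claim 44's adaptive threshold would be varied.

```python
import numpy as np

def baseline_sufficient(pose1, pose2, feature_distances, ratio=0.05):
    """Claim 45's criterion, sketched: accept the pose pair only when the
    device has moved at least `ratio` (5%) of the average displacement to
    the observed features. Varying `ratio` with the environment would
    give the adaptive threshold of claim 44."""
    baseline = np.linalg.norm(np.asarray(pose2) - np.asarray(pose1))
    return bool(baseline >= ratio * float(np.mean(feature_distances)))
```

Intuitively, nearby features tolerate a short baseline while distant features need a longer one, which is why the threshold scales with the average feature distance rather than being a fixed length.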
- 47. A computer program embodied in a tangible medium for obtaining depth information for visual navigation, the computer program comprising:
a module with instructions configured to receive a first image from a visual sensor that is coupled to a mobile device, where the first image corresponds to a first pose;
a module with instructions configured to receive a second image from the visual sensor corresponding to a second pose, where the second pose is different from the first pose;
a module with instructions configured to use common features identified in the first image and the second image to obtain depth information; and
a module with instructions configured to use the depth information to create a landmark for visual navigation.
- 48. The computer program as defined in claim 47, further comprising:
a module with instructions configured to receive a third image from the visual sensor corresponding to a third pose different from the first pose and the second pose; and
wherein the module with instructions configured to use common features further comprises instructions configured to use common features identified in the first image, the second image, and the third image to obtain the depth information.
- 49. The computer program as defined in claim 47, further comprising a module with instructions configured to select the first pose and the second pose such that a separation between the first pose and the second pose corresponds to at least a predetermined threshold, and where the predetermined threshold is adaptively determined at least in part based on the operating environment of the mobile device.
- 50. A method of obtaining depth information for visual navigation, the method comprising:
acquiring an image from a visual sensor that is coupled to a mobile device where the image corresponds to a first pose;
acquiring one or more additional images from the visual sensor corresponding to one or more additional poses; and
using identified features common to at least two of the acquired images to obtain depth information for visual navigation.
- 51. The method as defined in claim 50, further comprising deliberately having the mobile device move from each pose to the subsequent pose prior to acquiring each of the one or more additional images.
- 52. The method as defined in claim 50, wherein the visual sensor comprises a camera.
- 53. The method as defined in claim 50, wherein the visual sensor comprises a camera that is coupled to the mobile device such that the camera generally faces a forward direction.
- 54. The method as defined in claim 50, wherein the visual sensor comprises a camera that is coupled to the mobile device such that the camera generally faces an upwards direction.
- 55. The method as defined in claim 50, further comprising having the mobile device coupled to the visual sensor move in a deliberate path such that, in consecutive images, the visual sensor is facing in approximately the same direction.
- 56. The method as defined in claim 50, wherein the separation between the first pose and the one or more additional poses used corresponds to at least a predetermined threshold, and where the predetermined threshold is adaptively determined at least in part based on the operating environment of the mobile device.
- 57. The method as defined in claim 50, wherein the mobile device moves by at least 5% of an average displacement to observed features between the two acquired images.
- 58. A computer program embodied in a tangible medium for obtaining depth information for visual navigation, the computer program comprising:
a module with instructions configured to acquire an image from a visual sensor that is coupled to a mobile device where the image corresponds to a first pose;
a module with instructions configured to acquire one or more additional images from the visual sensor corresponding to one or more additional poses; and
a module with instructions configured to use identified features common to at least two of the acquired images to obtain depth information for visual navigation.
- 59. The computer program as defined in claim 58, further comprising a module with instructions configured to deliberately have the mobile device move from each pose to the subsequent pose prior to acquiring each of the one or more additional images.
- 60. A method of adding a new landmark to a map for navigation of a mobile device, the method comprising:
retrieving dead reckoning data from at least a time corresponding to a prior update of device pose and a time corresponding to an observation of the new landmark;
retrieving a prior device pose for the map, where the prior device pose corresponds to a prior update time; and
adding the new landmark to the map, wherein the pose associated with the new landmark is computed at least in part by using the retrieved dead reckoning data to calculate a change to device pose corresponding to the prior update.
- 61. The method as defined in claim 60, wherein the prior update to device pose and the retrieved dead reckoning data are referenced by one or more timestamps.
- 62. The method as defined in claim 60, further comprising visually observing the new landmark using a digital imaging device.
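Claims 60–62 compute the pose at which a new landmark is added by replaying timestamped dead reckoning data from the last map update to the observation. The sketch below assumes planar (x, y, heading) poses and a dictionary of timestamped body-frame odometry increments; both the data layout and the function name are illustrative.

```python
import numpy as np

def landmark_device_pose(prior_pose, odometry_log, t_prior, t_observed):
    """Claims 60-61, sketched: compose the map's prior device pose with
    the dead-reckoning increments whose timestamps fall between the
    prior update and the landmark observation. `odometry_log` maps
    timestamps to body-frame (dx, dy, dtheta) increments."""
    x, y, th = prior_pose
    for t, (dx, dy, dth) in sorted(odometry_log.items()):
        if t_prior < t <= t_observed:
            # Rotate each body-frame increment into the map frame.
            x += dx * np.cos(th) - dy * np.sin(th)
            y += dx * np.sin(th) + dy * np.cos(th)
            th += dth
    return (x, y, th)
```

Selecting increments by timestamp is what lets the claim 61 variant work even when landmark observations arrive out of step with map updates.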
- 63. A computer program embodied in a tangible medium for adding a new landmark to a map for navigation of a mobile device, the computer program comprising:
a module with instructions configured to retrieve dead reckoning data from at least a time corresponding to a prior update of device pose and a time corresponding to an observation of the new landmark;
a module with instructions configured to retrieve a prior device pose for the map, where the prior device pose corresponds to a prior update time; and
a module with instructions configured to add the new landmark to the map, wherein the pose associated with the new landmark is computed at least in part by using the retrieved dead reckoning data to calculate a change to device pose corresponding to the prior update.
- 64. The computer program as defined in claim 63, further comprising a module with instructions configured to visually observe the new landmark.
- 65. A method of adding a new landmark to a plurality of maps in a multiple-particle navigation system for navigation of a mobile device, the method comprising:
retrieving dead reckoning data from at least a time corresponding to a prior update of device pose and a time corresponding to an observation of the new landmark;
retrieving prior device poses for the plurality of maps, where the prior device poses correspond to a prior update time; and
adding the new landmark to the plurality of maps, wherein the poses associated with the new landmark are computed at least in part by using the retrieved dead reckoning data to calculate a change to device pose from the device poses corresponding to the prior update.
- 66. The method as defined in claim 65, wherein the prior update to device pose and the retrieved dead reckoning data are referenced by one or more timestamps.
- 67. The method as defined in claim 65, further comprising visually observing the new landmark using a visual sensor coupled to the mobile device.
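Claims 65–67 extend the same idea to a multiple-particle system, where each particle carries its own map and prior device pose. One way to read the claim is that a single body-frame dead-reckoning change is composed onto every particle's prior pose, yielding one landmark pose per map; the sketch below assumes planar poses and is illustrative only.

```python
import numpy as np

def add_landmark_to_all_maps(particle_poses, body_delta):
    """Claim 65, sketched: apply the same body-frame dead-reckoning
    change (dx, dy, dtheta) to every particle's prior device pose, so
    the new landmark can be placed in each particle's map at that
    particle's own estimate of the device pose."""
    dx, dy, dth = body_delta
    updated = []
    for x, y, th in particle_poses:
        # The same increment lands differently per particle because each
        # particle has its own heading.
        updated.append((x + dx * np.cos(th) - dy * np.sin(th),
                        y + dx * np.sin(th) + dy * np.cos(th),
                        th + dth))
    return updated
```

Because the increment is expressed in the body frame, particles with different headings place the landmark at different map positions, which is exactly the diversity a particle filter relies on.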
- 68. A computer program embodied in a tangible medium for adding a new landmark to a plurality of maps in a multiple-particle navigation system for navigation of a mobile device, the computer program comprising:
a module with instructions configured to retrieve dead reckoning data from at least a time corresponding to a prior update of device pose and a time corresponding to an observation of the new landmark;
a module with instructions configured to retrieve prior device poses for the plurality of maps, where the prior device poses correspond to a prior update time; and
a module with instructions configured to add the new landmark to the plurality of maps, wherein the poses associated with the new landmark are computed at least in part by using the retrieved dead reckoning data to calculate a change to device pose from the device poses corresponding to the prior update.
- 69. The computer program as defined in claim 68, further comprising a module with instructions configured to visually observe the new landmark.
- 70. A method of creating a new landmark in a navigation system for a mobile device, the method comprising:
detecting a new landmark;
storing a first reference to the new landmark in a reference frame that is local to the landmark for localizing; and
storing a second reference to the new landmark in a global reference frame for mapping.
- 71. The method as defined in claim 70, wherein detecting the new landmark further comprises visually detecting the new landmark.
- 72. The method as defined in claim 70, wherein the mobile device is a robot.
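Claims 70–72 store each landmark twice: feature coordinates in a frame local to the landmark (used when localizing against it) and the landmark's placement in the global map frame (used for mapping). The relation between the two references is a rigid-body transform; a planar sketch with illustrative names follows.

```python
import numpy as np

def local_to_global(point_local, landmark_global_pose):
    """Claim 70, sketched: map a feature stored in the landmark-local
    frame (the first reference) through the landmark's global pose (the
    second reference) to get its position in the map frame. Planar
    (x, y, theta) poses are an assumption for illustration."""
    x, y, th = landmark_global_pose
    px, py = point_local
    return (x + px * np.cos(th) - py * np.sin(th),
            y + px * np.sin(th) + py * np.cos(th))
```

Keeping the local reference fixed lets the landmark's global pose be re-estimated during mapping without rewriting the per-feature data.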
- 73. A circuit for creating a new landmark in a navigation system for a mobile device, the circuit comprising:
a circuit configured to detect a new landmark;
a circuit configured to store a first reference to the new landmark in a reference frame that is local to the landmark for localizing; and
a circuit configured to store a second reference to the new landmark in a global reference frame for mapping.
- 74. The circuit as defined in claim 73, wherein the circuit configured to detect the new landmark is further configured to visually detect the new landmark.
- 75. The circuit as defined in claim 73, wherein the circuit is embodied in a robot.
- 76. A computer program embodied in a tangible medium for creating a new landmark in a navigation system for a mobile device, the computer program comprising:
a module with instructions configured to detect a new landmark;
a module with instructions configured to store a first reference to the new landmark in a reference frame that is local to the landmark for localizing; and
a module with instructions configured to store a second reference to the new landmark in a global reference frame for mapping.
- 77. The computer program as defined in claim 76, wherein the module with instructions configured to detect the new landmark further comprises instructions configured to visually detect the new landmark.
- 78. The computer program as defined in claim 76, wherein the computer program is embodied in a robot.
- 79. A circuit for creating a new landmark in a navigation system for a mobile device, the circuit comprising:
a means for detecting a new landmark;
a means for storing a first reference to the new landmark in a reference frame that is local to the landmark for localizing; and
a means for storing a second reference to the new landmark in a global reference frame for mapping.
- 80. The circuit as defined in claim 79, wherein the means for detecting the new landmark further comprises a means for visually detecting the new landmark.
- 81. The circuit as defined in claim 79, wherein the circuit is embodied in a robot for navigation of the robot.
RELATED APPLICATION
[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 60/434,269, filed Dec. 17, 2002, and U.S. Provisional Application No. 60/439,049, filed Jan. 9, 2003, the entireties of which are hereby incorporated by reference.
[0002] Appendix A, which forms a part of this disclosure, is a list of commonly owned copending U.S. patent applications. Each one of the applications listed in Appendix A is hereby incorporated herein in its entirety by reference thereto.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60/434,269 | Dec. 17, 2002 | US |
| 60/439,049 | Jan. 9, 2003 | US |