This application claims the benefit under 35 U.S.C. §119(a) of Korean Patent Applications No. 10-2008-0099128, filed on Oct. 9, 2008, and No. 10-2009-0005058, filed on Jan. 21, 2009, the disclosures of which are incorporated herein by reference in their entirety for all purposes.
1. Field
One or more embodiments set forth in the following description relate to simultaneous localization and mapping (SLAM), a technique used by robots, and more particularly to measurement updates in SLAM.
2. Description of the Related Art
Conventionally, a robot has been regarded as a machine that resembles a human being and performs various complex human acts. Nowadays, the term robot generally refers to any entity which is programmed to move and perform certain tasks autonomously.
In particular, mobile robots can perform tasks in place of a human being in extreme situations or dangerous regions. Home appliance robots, such as robot cleaners, have also been popularized.
A mobile robot typically requires a localization and mapping algorithm to move and perform certain tasks autonomously. Simultaneous localization and mapping (SLAM) is an exemplary localization and mapping technique by which a robot builds up a map of an unknown environment while at the same time keeping track of its current position.
SLAM can use many different types of sensors to acquire data used in building the map. Since the data from such sensors may contain errors, SLAM involves recursive data processing to obtain more reliable values based on the data from the sensors. For instance, SLAM reduces errors in positions of a robot or errors in acquired data through measurement updates.
A measurement update refines a value calculated in a previous step by comparing it with a measured value. The comparison between the measured value and the calculated value is typically performed in a two-dimensional space. Accordingly, SLAM involves converting three-dimensional data about the positions of a robot and its surrounding environment into two-dimensional data in a two-dimensional space. However, since this conversion process includes a number of non-linear components, the operation slows down the overall processing speed and degrades SLAM performance.
One or more embodiments set forth in the following description relate to an improved method and apparatus for SLAM through a measurement update in a three-dimensional coordinate system.
In one or more embodiments, a simultaneous localization and mapping (SLAM) apparatus of a robot includes a converter to convert a two-dimensional (2D) position of measured feature data to a three-dimensional (3D) position, and an updater to update registered feature data by comparing the 3D position of the measured feature data with a 3D position of the registered feature data.
In another one or more embodiments, a SLAM method of a robot includes converting a two-dimensional (2D) position of measured feature data to a three-dimensional (3D) position, and updating registered feature data by comparing the 3D position of the measured feature data with a 3D position of the registered feature data.
The conversion may be a coordinate conversion from an image coordinate system to a camera coordinate system. For example, a 2D position in an image coordinate system may be converted to a 3D position in a camera coordinate system.
The measurement update may be performed in the camera coordinate system.
The feature data may be a feature point or a feature line.
In the case where the feature data is a feature point, upon coordinate conversion, a depth of the feature point may be set to a predetermined value and an error covariance in the depth direction may be set to infinity.
In the case where the feature data is a feature line, a depth of each end point may be set to a predetermined value and an error covariance in the depth direction and an error covariance in a direction parallel to the feature line may be set to infinity.
Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to embodiments set forth herein. Accordingly, embodiments are merely described below, by referring to the figures, to explain aspects of the present invention.
A SLAM apparatus 100 includes a camera 101, a feature data extractor 102, a driver 103, a relative position estimator 104, an absolute position estimator 105, and a map memory 106.
The camera 101 detects light reflected from a subject and converts the light to a digital signal to acquire an image of its surroundings. The camera 101 may include a charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) image sensor, and an image processing module which receives an output from the image sensor and creates a two-dimensional (2D) image. Accordingly, a robot can acquire an image of its surroundings while moving around a work space.
The feature data extractor 102 extracts feature data from the image acquired by the camera 101. Examples of the feature data include a feature point or a feature line. For example, the feature data extractor 102 may extract a specific “point” using the Harris corner detection technique, or a specific “line” using the Canny edge detection technique. When feature lines are extracted, both end points of straight lines may be extracted while curved lines are discarded.
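As an illustration of the kind of corner detector the feature data extractor 102 might use, the following is a minimal, NumPy-only sketch of the Harris corner response. The window size, the constant k, and the synthetic test image are illustrative choices rather than part of the original description; a practical extractor would more likely call a library routine such as OpenCV's `cornerHarris`.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the 3x3-windowed structure tensor of image gradients."""
    iy, ix = np.gradient(img.astype(float))       # gradients along rows (y) and cols (x)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box3(a):                                  # 3x3 box filter (structure-tensor window)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    return (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2

# Synthetic image: a bright square. Its corners should score highest,
# edges should score negative, and the flat interior should score zero.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

The response is large and positive only where gradients vary in two directions at once, which is exactly the "specific point" property the extractor needs for a feature point.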
The driver 103 which drives a robot may include a wheel, a steering unit, and a drive motor.
The relative position estimator 104 receives an output signal from the driver 103 and calculates a positional variation and a directional variation between a previous position of a robot and a current position of the robot. For example, the relative position estimator 104 estimates a current position of the robot by measuring and integrating a traveling distance of the robot from detected rotational motion of a wheel of the driver 103. The relative position estimator 104 may include an encoder, a gyro sensor, an acceleration sensor, and an operational module.
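A minimal sketch of the dead-reckoning performed by the relative position estimator 104, assuming a differential-drive robot whose linear speed v and turn rate w come from the encoder and gyro; the function name and the constant-velocity-per-step model are illustrative assumptions.

```python
import math

def integrate_odometry(pose, v, w, dt):
    """Dead-reckon one step: pose = (x, y, theta),
    v = linear speed from the encoder, w = turn rate from the gyro."""
    x, y, th = pose
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt
    return (x, y, th)

# Drive straight 1 m, turn 90 degrees in place, then drive straight 1 m.
pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(pose, v=1.0, w=0.0, dt=1.0)
pose = integrate_odometry(pose, v=0.0, w=math.pi / 2, dt=1.0)
pose = integrate_odometry(pose, v=1.0, w=0.0, dt=1.0)
# pose is now approximately (1, 1, pi/2)
```

Because each step only accumulates increments, errors grow without bound over time, which is why the absolute position estimator 105 must correct the estimate with feature measurements.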
The absolute position estimator 105 calculates an absolute position of the robot using the relative position of the robot and feature data. The absolute position of the robot is determined from a relation between the robot and the feature data in a world coordinate system. Therefore, the calculation of the absolute position may be a process for obtaining the position of the robot and the position of the feature data in a world coordinate system.
For example, the absolute position estimator 105 matches feature data of an image acquired at a current position with feature data of an image acquired at a previous position and calculates absolute positions of a robot and feature data through measurement update. The feature data of an image acquired at a previous position may be stored as reference feature data in the map memory 106.
The matching may be a process for determining whether any previously stored feature data appears in an image acquired at the current position. If matching is unsuccessful, the feature data of the newly acquired image is stored as new reference feature data. Conversely, if matching is successful, the positions of the robot and the feature data are updated through a measurement update. Since sensing data typically contains error components, the measurement update makes previously calculated values more reliable.
The absolute position estimator 105 may employ a number of algorithms to perform data processing. The present embodiment may employ SLAM using extended Kalman filters.
In
The converter 201 converts a two-dimensional (2D) position of measured feature data to a three-dimensional (3D) position.
The measured feature data may be a feature point or a feature line of an image extracted from the feature data extractor 102. The 2D position may be a 2D coordinate in an image coordinate system with respect to the feature point or feature line. The 3D position may be a 3D coordinate in a camera coordinate system of the camera 101. The converter 201 converts the 2D position in the image coordinate system to the 3D position in the camera coordinate system.
When the converter 201 performs the coordinate conversion, a depth may be set to a certain value and an error covariance in the depth direction may be set to infinity. If the feature data is a feature line, an error covariance in a direction parallel to the feature line may also be set to infinity. The term depth indicates the direction perpendicular to the 2D image, i.e., the direction in which the camera 101 takes an image.
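The conversion and the "infinite" depth covariance can be sketched as follows, assuming a pinhole camera model. The intrinsics fx, fy, cx, cy, the assumed depth d0, and the use of a large finite number in place of infinity are illustrative assumptions; a fuller implementation would also rotate the covariance so its large axis aligns with the viewing ray rather than the optical axis.

```python
import numpy as np

HUGE = 1e12  # numerical stand-in for the "infinite" depth covariance

def backproject(u, v, fx, fy, cx, cy, d0=1.0):
    """Lift an image point (u, v) to camera coordinates with an assumed
    depth d0. Returns the 3D point and a covariance whose depth (z)
    entry is effectively infinite, encoding that a single image
    provides no depth information."""
    p_cam = np.array([(u - cx) / fx * d0, (v - cy) / fy * d0, d0])
    cov = np.diag([0.01, 0.01, HUGE])   # small image-plane error, unknown depth
    return p_cam, cov

p, C = backproject(u=400.0, v=300.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0, d0=1.0)
```

The assumed depth fixes the magnitude of the direction vector, while the huge variance tells the subsequent update to place essentially no trust in that coordinate.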
The updater 202 updates registered feature data through a comparison between a converted 3D position of measured feature data and a 3D position of registered feature data. The registered feature data may be feature data which is extracted in a previous step, updated, and stored in the map memory 106. The updater 202 may employ various types of update algorithms such as an EKF update equation. The update algorithm may be performed in the camera coordinate system of the camera 101.
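The effect of the inflated depth covariance on such an update can be sketched as follows. The measurement model is taken as the identity, since both the registered and the measured feature are 3D points in the same camera frame; this is an assumption for illustration rather than the exact EKF equations of the embodiment.

```python
import numpy as np

def kalman_update_3d(x, P, z, R):
    """Kalman-style update with identity measurement model:
    x, P = registered feature position and covariance,
    z, R = measured feature position and covariance."""
    S = P + R                        # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - x)          # corrected position
    P_new = (np.eye(3) - K) @ P      # corrected covariance
    return x_new, P_new

x = np.array([0.0, 0.0, 5.0])        # registered feature (depth believed ~5)
P = np.eye(3)
z = np.array([1.0, 1.0, 1.0])        # measurement whose depth is meaningless
R = np.diag([0.01, 0.01, 1e12])      # depth variance effectively infinite
x_new, P_new = kalman_update_3d(x, P, z, R)
```

The x and y components move toward the measurement, while the depth component stays near the prior value: the huge measurement variance makes the corresponding gain vanish, so the bogus measured depth is ignored without any explicit 3D-to-2D projection.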
Coordinate systems for the measurement update will be described in detail with reference to
In
To perform the measurement update, the registered feature data 304 and the measured feature data 305 need to be subject to comparison and calculation in the same coordinate system. Therefore, the converter 201 converts the measured feature data 305 to corresponding data in the camera coordinate system 302.
The updater 202 updates the originally stored feature data, i.e., the registered feature data 304, by performing comparison and calculation of the registered feature data 304 and the measured feature data 305 in the camera coordinate system 302. Thus, the updater 202 may perform an additional conversion from feature data in the world coordinate system 301 to feature data in the camera coordinate system 302. However, since this conversion takes 3D data to 3D data, it is a simpler process than the conversion performed by the converter 201.
Accordingly, when the measurement update is performed in the camera coordinate system 302, non-linear components contained in the coordinate conversion are reduced since 3D data does not need to be converted to 2D data.
The conversion process of the converter 201 will be described in detail with reference to
In
The converter 201 converts the 2D position of the point P, i.e., P:(u, v), to a 3D position of the point Pc:(x, y, z)c, where the subscript c indicates the camera coordinate system. Since position data in the direction perpendicular to the image, i.e., the depth d, is unknown, the magnitude of the directional vector ct is set to a certain value.
Since the position of a feature point has a probabilistic value, the feature point has an error covariance at its position. The directional component of ct will have a very large error covariance. In
Referring
A 2D position of the feature line is converted to a 3D position as described above with reference to
The error covariance with respect to the direction parallel to the straight line, i.e., the directional component of clt, may also be set to infinity. The feature line is represented by two different points on the straight line, preferably its two end points. However, the measurement update may also be performed with any two points on the straight line, provided the error covariance in the direction parallel to the line is set to infinity.
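A sketch of why any two points on the line suffice: once the covariance is inflated along the line direction, the Kalman gain suppresses the innovation component parallel to the line, so sliding a point along the line has no effect on the update. The helper name and variance values are illustrative assumptions.

```python
import numpy as np

HUGE = 1e12  # numerical stand-in for "infinite" variance

def line_point_measurement_cov(direction, lateral_var=0.01):
    """Covariance for an end point of a measured feature line: effectively
    infinite along the line direction. (Per the text, the depth direction is
    also inflated; it is omitted here to isolate the along-line effect.)"""
    d = direction / np.linalg.norm(direction)
    return lateral_var * np.eye(3) + HUGE * np.outer(d, d)

d = np.array([1.0, 0.0, 0.0])          # line runs along x
P = np.eye(3)                           # prior covariance of the registered point
R = line_point_measurement_cov(d)
K = P @ np.linalg.inv(P + R)            # Kalman gain (identity measurement model)

shift_along = K @ d                     # correction for an innovation along the line
shift_across = K @ np.array([0.0, 1.0, 0.0])  # correction perpendicular to the line
```

The correction along the line is vanishingly small while the perpendicular correction is nearly full strength, which is exactly the behavior that makes the choice of points on the line immaterial.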
In operation 101, an image is acquired as the robot travels around a work space. For instance, the image may be acquired using the camera 101.
In operation 102, a relative position of the robot is calculated. For example, the relative position estimator 104 may calculate a difference between a previous position and a current position and a change in direction of the robot.
In operation 103, feature data is extracted from the image. For example, the feature data extractor 102 may extract from the image a feature point, such as a corner, or a feature line, such as an edge.
In operation 104, a matching process is performed between the extracted feature data and registered feature data. For example, the absolute position estimator 105 may compare the extracted feature data with the registered feature data. In a case of unsuccessful matching, the extracted feature data may be stored as new feature data, or feature data may be extracted from a reacquired image.
In a case of successful matching, measurement update is performed in operation 105. The measurement update may be a process where the absolute position estimator 105 updates the registered feature data and calculates absolute positions of the robot and the feature data.
In operation 106, feature data with which the robot may collide is registered as an obstacle.
The measurement update, i.e., operation 105, will be described in detail with reference to
In operation 201, the 2D position of the extracted feature data is converted to a 3D position. For example, the converter 201 may convert the 2D coordinate in the image coordinate system to the 3D coordinate in the camera coordinate system.
As described above, insufficient data upon the conversion process may be supplemented through an appropriate assumption. For instance, depth data is set to a certain value, and an error covariance in the depth direction is set to infinity. If the feature data is a feature line, an error covariance in the direction parallel to the feature line may also be set to infinity.
In operation 202, the measurement update is performed in the converted 3D coordinate system. For example, the updater 202 may update the registered feature data by comparing the extracted feature data with the registered feature data in the camera coordinate system.
As apparent from the above description, since the measurement update of SLAM is performed in the 3D coordinate system, it is possible to reduce non-linear components contained in the calculation process and to improve the SLAM performance.
A number of exemplary embodiments have been described above. Nevertheless, it will be understood that these embodiments equally encompass various modifications. For example, suitable results may be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, module, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents.
Thus, while aspects of the present invention have been particularly shown and described with reference to differing embodiments thereof, it should be understood that these exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in the remaining embodiments.
Accordingly, although a few embodiments have been shown and described, with additional embodiments being equally available, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
| Number | Date | Country | Kind |
|---|---|---|---|
| 10-2008-0099128 | Oct 2008 | KR | national |
| 10-2009-0005058 | Jan 2009 | KR | national |
| Number | Date | Country |
|---|---|---|
| 20100094460 A1 | Apr 2010 | US |