Tracking Poses of 3D Camera Using Points and Planes

Abstract
A method registers data using a set of primitives including points and planes. First, the method selects a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane. A transformation is predicted from the first coordinate system to a second coordinate system. The first set of primitives is transformed to the second coordinate system using the transformation. A second set of primitives is determined according to the first set of primitives transformed to the second coordinate system. Then, the second coordinate system is registered with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system. The registration can be used to track a pose of a camera acquiring the data.
Description
FIELD OF THE INVENTION

This invention relates generally to computer vision, and more particularly to estimating a pose of a camera.


BACKGROUND OF THE INVENTION

Systems and methods that track the pose of a camera while simultaneously reconstructing the 3D structure of a scene are widely used in augmented reality (AR) visualization, robotic navigation, scene modeling, and computer vision applications. Such a process is commonly referred to as simultaneous localization and mapping (SLAM). Real-time SLAM systems can use conventional cameras that acquire a two-dimensional (2D) image, depth cameras that acquire a three-dimensional (3D) point cloud (a set of 3D points), or red, green, blue and depth (RGB-D) cameras that acquire both a 2D image and a 3D point cloud, such as Kinect®. Tracking refers to a process that uses a predicted motion of a camera for sequentially estimating the pose of the camera, while relocalization refers to a process that uses some feature-based global registration for recovering from tracking failures.


SLAM systems using a 2D camera are generally successful for textured scenes, but are likely to fail in textureless regions. Systems using a depth camera rely on geometric variations in the scene, such as curved surfaces and depth boundaries, with the help of iterative closest point (ICP) methods. However, ICP-based systems often fail when the geometric variations are small, such as in planar scenes. Systems using an RGB-D camera can exploit both texture and geometric features, but they still require distinctive textures.


Many methods do not clearly address the difficulty in reconstructing 3D models that are larger than a single room. To extend those methods to larger scenes, better memory management techniques are required. However, memory limitation is not the only challenge. Typically, room-scale scenes have many objects that have both texture and geometric features. To extend to larger scenes, one needs to track the camera pose in regions, such as corridors, with limited texture and insufficient geometric variations.


Camera Tracking


Systems that use 3D sensors to acquire 3D point clouds reduce the tracking problem to a registration problem given some 3D correspondences. The ICP method locates point-to-point or point-to-plane correspondences iteratively, starting from an initial pose estimate given by camera motion prediction. ICP has been widely used for line-scan 3D sensors in mobile robotics, also known as scan matching, as well as for depth cameras and 3D sensors producing full 3D point clouds. U.S. 20120194516 uses point-to-plane correspondences with the ICP method for pose tracking of the Kinect® camera. That method represents the map as a set of voxels, where each voxel stores a truncated signed distance function for the distance to the closest surface point. That method does not extract planes from 3D point clouds; instead, the point-to-plane correspondences are established by determining the normal of a 3D point using its local neighborhood. Such ICP-based methods require scenes to have sufficient geometric variations for accurate registration.


Another method extracts features from RGB images and performs descriptor-based point matching to determine point-to-point correspondences and estimate the camera pose, which is then refined with the ICP method. That method uses texture (RGB) and geometric (depth) features in the scene. However, handling textureless regions and regions with repetitive textures using only point features is still problematic.


SLAM Using Planes


Plane features have been used in several SLAM systems. To determine the camera pose, at least three planes whose normals span ℝ³ are required. Thus, using only planes causes many degeneracy issues, especially when the field of view (FOV) or range of the sensor is small, as in Kinect®. A combination of a large-FOV line-scan 3D sensor and a small-FOV depth camera can avoid the degeneracy at an additional system cost.
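The degeneracy condition can be stated concretely: plane-only registration is well posed only if the stacked plane normals have full rank. The following is a minimal sketch of that check in Python with numpy; the helper name and tolerance are illustrative, not part of the described system.

```python
import numpy as np

def normals_span_r3(normals, tol=1e-6):
    # The camera pose is fully constrained by plane correspondences only
    # when the stacked normals have rank 3; rank <= 2 leaves at least one
    # translation direction unconstrained.
    N = np.asarray(normals, dtype=float)  # shape (n, 3)
    return np.linalg.matrix_rank(N, tol=tol) == 3

# Example: two parallel corridor walls plus the floor give rank 2,
# so plane-only registration is degenerate.
corridor = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(normals_span_r3(corridor))  # False
```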


The method described in the related Application uses a point-plane SLAM, which uses both points and planes to avoid the failure modes that are common in methods using one of these primitives. That system does not use any camera motion prediction. Instead, that system performs relocalization for all the frames by locating point and plane correspondences globally. As a result, that system can only process about three frames per second and encounters failures with some repetitive textures due to descriptor-based point matching.


The method described in the related Application also presents registering 3D data in different coordinate systems using both point-to-point and plane-to-plane correspondences.


SUMMARY OF THE INVENTION

In indoor and outdoor scenes including man-made structures, planes are dominant. The embodiments of the invention provide a system and method for tracking an RGB-D camera that uses points and planes as primitive features. By fitting planes to the depth data, the method implicitly handles the noise typical of 3D sensors. The tracking method is supported by relocalization and bundle adjustment processes to provide a real-time simultaneous localization and mapping (SLAM) system using a hand-held or robot-mounted RGB-D camera.


It is an object of the invention to enable fast and accurate registration while minimizing degeneracy issues causing registration failures. The method locates point and plane correspondences using camera motion prediction, and provides a tracker based on a prediction-and-correction framework. The method incorporates relocalization and bundle adjustment processes using both the points and planes to recover from tracking failures and to continuously refine camera pose estimates.


Specifically, a method registers data using a set of primitives including points and planes. First, the method selects a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane.


A transformation is predicted from the first coordinate system to a second coordinate system. The first set of primitives is transformed to the second coordinate system using the transformation. A second set of primitives is determined according to the first set of primitives transformed to the second coordinate system.


Then, the second coordinate system is registered with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system. The registration can be used to track a pose of a camera acquiring the data.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a flow diagram of a method for tracking a pose of a camera according to embodiments of the invention; and



FIG. 2 is a schematic of a procedure to establish point-to-point and plane-to-plane correspondences between a current frame and a map using a predicted pose of the camera according to embodiments of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The embodiments of our invention provide a system and method for tracking a pose of a camera. The invention extends the embodiments described in our related U.S. application Ser. No. 13/539,060 by using camera motion prediction for faster correspondence search and registration. We use point-to-point and plane-to-plane correspondences, which are established between a current frame and a map. The map includes points and planes from frames previously registered in a global coordinate system. Here, our focus is on establishing plane-to-plane correspondences using camera motion prediction, as well as on the mixed case of establishing both point-to-point and plane-to-plane correspondences.


System Overview


In the preferred system, the RGB-D camera 102 is a Kinect® or an ASUS® Xtion PRO LIVE, which acquires a sequence of frames 101. We use a keyframe-based SLAM system, where we select several representative frames as keyframes and store the keyframes, registered in a single global coordinate system, in a map. We use both points and planes as primitives in all the processes in the system, in contrast with prior art SLAM systems that use only points. Points and planes in each frame are called measurements, and measurements from the keyframes are stored in the map as landmarks.


Given the map, we use a prediction-and-correction framework to estimate the pose of the current frame: We predict the pose of the camera, and use the pose to determine correspondences between the point and plane measurements and the point and plane landmarks, which are then used to determine the camera pose.


Tracking may fail due to incorrect or insufficient correspondences. After a predetermined number of consecutive tracking failures, we relocalize using a global point and plane correspondence search between the current frame and the map. We also apply bundle adjustment using points and planes to asynchronously refine the landmarks in the map.


Method Overview


As shown in FIG. 1, a current frame 101 is acquired 110 by a red, green, blue and depth (RGB-D) camera 102 of a scene 103. A pose of the camera when acquiring the frame is predicted 120, which is used to locate 130 point and plane correspondences between the frame and a map 194. The point and plane correspondences are used in a RANdom SAmple Consensus (RANSAC) framework 140 to register the frame to the map. If 150 the registration fails, then the number of consecutive failures is counted 154; if the count is below a threshold (F), continue with the next frame, otherwise (T) relocalize 158 the camera using a global registration method that does not use the camera motion prediction.


If the RANSAC registration succeeds, then the pose 160 estimated in the RANSAC framework is used as the pose of the frame. Next, determine 170 whether the current frame is a keyframe or not, and proceed with the next frame at step 110 if false. Otherwise, extract 180 additional points and planes in the current frame, update 190 the map 194, and proceed to the next frame. The map is asynchronously refined 198 using bundle adjustment.


The steps can be performed in a processor connected to memory and input/output interfaces as known in the art.


Camera Pose Tracking


As stated above, our tracking uses features that include both points and planes. The tracking is based on a prediction-and-correction scheme, which can be summarized as follows. For every frame, we predict the pose using a camera motion model. Based on the predicted pose, we locate the point and plane measurements in the frame corresponding to the point and plane landmarks in the map. We perform the RANSAC-based registration using the point and plane correspondences. If the resulting pose is sufficiently different from the poses of the keyframes currently stored in the map, then we extract additional point and plane measurements and add the frame to the map as a new keyframe.


Camera Motion Prediction


We represent the pose of the $k$th frame as

$$T_k = \begin{pmatrix} R_k & t_k \\ \mathbf{0}^T & 1 \end{pmatrix}, \qquad (1)$$

where $R_k$ and $t_k$ respectively denote a rotation matrix and a translation vector. We define the coordinate system of the map using the first frame; thus $T_1$ is the identity matrix, and $T_k$ represents the pose of the $k$th frame with respect to the map.


We predict the pose of the $k$th frame, $\hat{T}_k$, by using a constant velocity assumption. Let $\Delta T$ denote the previously estimated motion between the $(k-1)$th frame and the $(k-2)$th frame, i.e., $\Delta T = T_{k-1} T_{k-2}^{-1}$. Then, we predict the pose of the $k$th frame as $\hat{T}_k = \Delta T\, T_{k-1}$.
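As a worked illustration, the constant velocity prediction amounts to two matrix products on 4x4 homogeneous poses. The following is a minimal numpy sketch under that representation; the helper names are illustrative.

```python
import numpy as np

def make_pose(R, t):
    # Assemble the 4x4 homogeneous pose of equation (1) from R_k and t_k.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def predict_pose(T_km1, T_km2):
    # Constant-velocity model: Delta T = T_{k-1} T_{k-2}^{-1}, and the
    # predicted pose is T^_k = Delta T * T_{k-1}.
    delta_T = T_km1 @ np.linalg.inv(T_km2)
    return delta_T @ T_km1

# Example: the camera translated 10 mm along x between frames k-2 and
# k-1; the prediction extrapolates that motion to frame k.
T_km2 = np.eye(4)
T_km1 = make_pose(np.eye(3), [0.01, 0.0, 0.0])
print(predict_pose(T_km1, T_km2)[:3, 3])  # -> [0.02 0. 0.]
```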


Locating Point and Plane Correspondences


As shown in FIG. 2, we locate point and plane measurements in the $k$th frame corresponding to landmarks in the map using the predicted pose $\hat{T}_k$. Given the predicted pose 201 of the current frame, we locate correspondences between point and plane landmarks in the map 202 and point and plane measurements in the current frame 203. We first transform the landmarks in the map to the current frame using the predicted pose. Then, for every point, we perform a local search using an optical flow procedure from the predicted pixel location in the current frame. For every plane, we first locate the parameters of the predicted plane. Then, we consider a set of reference points on the predicted plane, and locate pixels connected from each reference point that lie on the predicted plane. The reference point with the largest number of connected pixels is chosen, and the plane parameters are refined using all the connected pixels.


Point Correspondence: Let $p_i = (x_i, y_i, z_i, 1)^T$ denote the $i$th point landmark 210 in the map, represented as a homogeneous vector. The 2D image projection 220 of $p_i$ in the current frame is predicted as






$$\hat{p}_i^k = \hat{T}_k p_i, \qquad \hat{u}_i^k = \mathrm{FP}(\hat{p}_i^k), \qquad (2)$$


where $\hat{p}_i^k$ is the 3D point transformed to the coordinate system of the $k$th frame, and the function $\mathrm{FP}(\cdot)$ determines the forward projection of the 3D point onto the image plane using the internal camera calibration parameters. We locate the corresponding point measurement by using the Lucas-Kanade optical flow method, starting from the initial position $\hat{u}_i^k$. Let $\Delta u_i^k$ be the determined optical flow vector 230. Then, the corresponding point measurement $p_i^k$ is






$$u_i^k = \hat{u}_i^k + \Delta u_i^k, \qquad p_i^k = \mathrm{BP}(u_i^k)\, D(u_i^k), \qquad (3)$$


where the function $\mathrm{BP}(\cdot)$ back-projects the 2D image pixel to a 3D ray and $D(\cdot)$ refers to the depth value of the pixel. If the optical flow vector is not determined or the pixel location $u_i^k$ has an invalid depth value, then the feature is regarded as lost.
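A minimal numpy sketch of equations (2) and (3) follows, assuming a standard pinhole camera model; the intrinsic matrix K and all values in the example are illustrative, and the flow vector would in practice come from a pyramidal Lucas-Kanade tracker (e.g., OpenCV's calcOpticalFlowPyrLK) seeded at the predicted pixel.

```python
import numpy as np

# Assumed pinhole intrinsics; focal lengths and principal point are
# illustrative values, not ones given in the text.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def FP(p):
    # Forward projection of equation (2): 3D point in the frame's
    # coordinate system -> 2D pixel.
    u = K @ p[:3]
    return u[:2] / u[2]

def BP_D(u, depth_map):
    # Equation (3): back-project pixel u to a 3D ray (BP) and scale it by
    # the measured depth (D); return None (feature lost) on invalid depth.
    row, col = int(round(u[1])), int(round(u[0]))
    z = depth_map[row, col]
    if not np.isfinite(z) or z <= 0:
        return None
    ray = np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])  # ray with z = 1
    return ray * z

# Example with a synthetic depth map.
T_hat = np.eye(4)                          # predicted pose
p_i = np.array([0.1, 0.0, 1.0, 1.0])       # homogeneous point landmark
u_hat = FP(T_hat @ p_i)                    # predicted pixel (eq. 2)
depth = np.full((480, 640), 1.0)           # 1 m everywhere
du = np.array([1.5, -0.5])                 # flow vector Delta u from LK
print(BP_D(u_hat + du, depth))             # measurement p_i^k (eq. 3)
```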


Plane Correspondence: Instead of performing a time-consuming plane extraction procedure on each frame independently of other frames, as in the prior art, we make use of the predicted pose to extract planes. This leads to faster plane measurement extraction, and also provides the plane correspondences.


Let $\pi_j = (a_j, b_j, c_j, d_j)^T$ denote the plane equation of the $j$th plane landmark 240 in the map. We assume that the plane landmark and the corresponding measurement have some overlapping regions in the image. To locate such a corresponding plane measurement, we randomly select several reference points 250 $q_{j,r}$ $(r = 1, \ldots, N)$ from the inliers of the $j$th plane landmark, and transform the reference points to the $k$th frame as 255






$$\hat{q}_{j,r}^k = \hat{T}_k q_{j,r} \quad (r = 1, \ldots, N). \qquad (4)$$


We also transform $\pi_j$ to the $k$th frame as 245





$$\hat{\pi}_j^k = \hat{T}_k^{-T} \pi_j. \qquad (5)$$


We locate connected pixels 260 from each transformed reference point $\hat{q}_{j,r}^k$ that lie on the plane $\hat{\pi}_j^k$, and select the reference point with the maximum number of inliers. The inliers are used to refine the plane equation, resulting in the corresponding plane measurement $\pi_j^k$. If the number of inliers is smaller than a threshold, then the plane landmark is declared as lost. For example, we use $N = 5$ reference points, a threshold of 50 mm for the point-to-plane distance to determine inliers on a plane, and 9000 as the minimum number of inliers.
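The plane transform of equation (5) and the 50 mm inlier test can be sketched as follows. This is an illustration under assumed conventions (planes as homogeneous 4-vectors, map-to-frame point transforms); it omits the connected-pixel region growing described above.

```python
import numpy as np

def transform_plane(T_hat, pi):
    # Equation (5): planes transform by the inverse transpose of the
    # point transform, so transformed points remain on the plane.
    return np.linalg.inv(T_hat).T @ pi

def plane_inliers(points, pi, thresh=0.05):
    # Indices of Nx3 points within `thresh` (here 50 mm) of plane pi;
    # the tracker requires, e.g., 9000 such inliers to accept a match.
    n_norm = np.linalg.norm(pi[:3])
    dist = np.abs(points @ (pi[:3] / n_norm) + pi[3] / n_norm)
    return np.nonzero(dist < thresh)[0]

# Example: the floor z = 0 in the map, seen from a camera 1 m above it.
T_hat = np.eye(4); T_hat[2, 3] = -1.0   # predicted map-to-frame transform
floor = np.array([0.0, 0.0, 1.0, 0.0])  # plane z = 0 in map coordinates
print(transform_plane(T_hat, floor))    # -> [0. 0. 1. 1.], i.e. z = -1
```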


Landmark Selection


Performing the above process using all the landmarks in the map can be inefficient. Therefore, we use the landmarks appearing in the single keyframe that is closest to the current frame. The closest keyframe is selected by using the pose of the previous frame, $T_{k-1}$, before the tracking process.
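A minimal sketch of this selection follows; the text does not fix the distance metric between poses, so the translation-only distance between camera centers used here is one plausible choice, not the described system's necessarily.

```python
import numpy as np

def closest_keyframe(T_prev, keyframe_poses):
    # For a map-to-frame pose T, the camera center in map coordinates is
    # the translation of T^{-1}; pick the keyframe with the nearest center.
    c_prev = np.linalg.inv(T_prev)[:3, 3]
    centers = [np.linalg.inv(T)[:3, 3] for T in keyframe_poses]
    return int(np.argmin([np.linalg.norm(c - c_prev) for c in centers]))

# Example: the second keyframe is closest to the previous frame's pose.
kf = [np.eye(4)]
T2 = np.eye(4); T2[0, 3] = -0.5
kf.append(T2)
T_prev = np.eye(4); T_prev[0, 3] = -0.4
print(closest_keyframe(T_prev, kf))  # -> 1
```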


RANSAC Registration


The prediction-based correspondence search provides candidate point-to-point and plane-to-plane correspondences, which may include outliers. Thus, we perform the RANSAC-based registration to determine the inliers and the camera pose. To determine the pose without ambiguity, we need at least three correspondences. Thus, if there are fewer than three candidate correspondences, then we immediately declare a tracking failure. For accurate camera tracking, we also declare a tracking failure when the number of candidate correspondences is small.


If there is a sufficient number of candidates, then we solve the registration problem using the mixed correspondences in closed form. The procedure prioritizes plane correspondences over point correspondences, because the number of planes is typically much smaller than the number of points, and planes have less noise due to the support of many points. Tracking is considered successful if the RANSAC locates a sufficient number of inliers, e.g., 40% of the number of all point and plane measurements. The method yields the corrected pose of the $k$th frame, $T_k$.
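The following sketches the RANSAC loop with the minimal sample size of three and the 40% inlier criterion. For brevity it uses a point-only closed-form solver (the Kabsch method) as the hypothesis generator; the described method instead uses a closed-form solution over mixed point and plane correspondences and prioritizes planes when sampling.

```python
import numpy as np

def kabsch(P, Q):
    # Closed-form rigid transform (R, t) with Q ~= R P + t, for Nx3 point
    # sets P and Q. Stand-in for the mixed point/plane solver.
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_register(P, Q, iters=200, thresh=0.05, min_inlier_ratio=0.4):
    # RANSAC over minimal samples of three correspondences; tracking is
    # declared successful at >= 40% inliers.
    rng = np.random.default_rng(0)
    best = (None, None, 0)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        inl = np.linalg.norm((P @ R.T + t) - Q, axis=1) < thresh
        if inl.sum() > best[2]:
            best = (R, t, inl.sum())
    ok = best[2] >= min_inlier_ratio * len(P)
    return ok, best[0], best[1]

# Example: recover a known rotation despite one gross outlier.
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T
Q[0] += 5.0                               # one outlier correspondence
ok, R, t = ransac_register(P, Q)
print(ok, np.allclose(R, Rz, atol=1e-6))  # True True
```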


Map Update


We determine the $k$th frame as a keyframe if the estimated pose $T_k$ is sufficiently different from the pose of every existing keyframe in the map. To check this condition, we can, for example, use thresholds of 100 mm in translation and 5° in rotation. For the new keyframe, the point and plane measurements located as inliers in the RANSAC-based registration are associated with the corresponding landmarks, while those located as outliers are discarded. We then extract additional point and plane measurements that newly appear in this frame. The additional point measurements are extracted using a keypoint detector, such as the Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF), on pixels that are not close to any existing point measurements. The additional plane measurements are extracted by using a RANSAC-based plane fitting on pixels that are not inliers of any existing plane measurements. The additional point and plane measurements are added as new landmarks to the map. In addition, we extract feature descriptors, such as SIFT and SURF, for all point measurements in the frame, which are used for relocalization.
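A minimal sketch of the keyframe test follows, using the example thresholds of 100 mm and 5°. Measuring the difference through the relative transform, as done here, is one plausible reading of "sufficiently different", not the only one.

```python
import numpy as np

def is_new_keyframe(T_k, keyframe_poses, t_thresh=0.1, r_thresh_deg=5.0):
    # Declare frame k a keyframe if its pose differs from every existing
    # keyframe by more than 100 mm in translation or 5 deg in rotation.
    for T in keyframe_poses:
        D = T_k @ np.linalg.inv(T)             # relative pose
        dt = np.linalg.norm(D[:3, 3])
        cos_a = np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))   # rotation angle of D
        if dt <= t_thresh and angle <= r_thresh_deg:
            return False                       # too close to a keyframe
    return True

# Example: 50 mm and 0 deg away from the only keyframe -> not new.
T_k = np.eye(4); T_k[0, 3] = 0.05
print(is_new_keyframe(T_k, [np.eye(4)]))  # False
```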


Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims
  • 1. A method for registering data using a set of primitives, wherein the data have three dimensions (3D) and the primitives include points and planes, comprising the steps of: selecting a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane; predicting a transformation from the first coordinate system to a second coordinate system; transforming the first set of primitives to the second coordinate system using the transformation; determining a second set of primitives according to the first set of primitives transformed to the second coordinate system; and registering the second coordinate system with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system corresponding to each other, wherein the steps are performed in a processor.
  • 2. The method of claim 1, wherein the first set of primitives includes at least one point and at least one plane in the first coordinate system and the second set of primitives includes at least one point and at least one plane in the second coordinate system.
  • 3. The method of claim 1, wherein the data are acquired by a movable camera.
  • 4. The method of claim 1, wherein the data include textures and depths.
  • 5. The method of claim 1, wherein the registering uses RANdom SAmple Consensus (RANSAC).
  • 6. The method of claim 1, wherein the data are in a form of a sequence of frames acquired by a camera.
  • 7. The method of claim 6, further comprising: selecting a set of frames as keyframes from the sequence of frames; and storing the keyframes in a map, wherein the keyframes include the points and the planes, and the points and the planes are stored in the map as landmarks.
  • 8. The method of claim 7, further comprising: predicting a pose of the camera for each frame; and determining the pose of the camera for each frame according to the registering to track the camera.
  • 9. The method of claim 1, wherein the registering is in real time.
  • 10. The method of claim 7, further comprising: applying bundle adjustment using the points and the planes to refine the landmarks in the map.
  • 11. The method of claim 8, wherein the pose of a kth frame is $T_k = \begin{pmatrix} R_k & t_k \\ \mathbf{0}^T & 1 \end{pmatrix}$, where $R_k$ and $t_k$ respectively denote a rotation matrix and a translation vector.
  • 12. The method of claim 8, wherein the predicting uses a constant velocity assumption.
  • 13. The method of claim 6, wherein an optical flow procedure is used to locate the points in the frames.
  • 14. The method of claim 1, wherein correspondences of the planes are prioritized over correspondences of the points.
  • 15. The method of claim 1, wherein the registering is used for simultaneous localization and mapping (SLAM).
RELATED APPLICATION

This is a Continuation-in-Part of U.S. application Ser. No. 13/539,060, "Method for Registering Points and Planes of 3D Data in Multiple Coordinate Systems," filed by the Assignee of the present application and incorporated herein by reference.

Continuation in Parts (1)
Number Date Country
Parent 13539060 Jun 2012 US
Child 13921296 US