The present disclosure relates to the computer vision field, and in particular, to a camera tracking method and apparatus.
Camera tracking is one of the most fundamental issues in the computer vision field: a three-dimensional location of a feature point in a shooting scene and a camera motion parameter corresponding to each frame image are estimated according to a video sequence shot by a camera. As science and technology advance rapidly, camera tracking technologies are applied in a wide range of fields, for example, robot navigation, intelligent positioning, integration of virtual and real scenes, augmented reality, and three-dimensional scene browsing. To adapt camera tracking to application in various fields, after decades of research, camera tracking systems have been launched one after another, for example, Parallel Tracking and Mapping (PTAM) and the Automatic Camera Tracking System (ACTS).
In actual application, a PTAM or ACTS system performs camera tracking according to a monocular video sequence, and needs to select two frames as initial frames in a camera tracking process.
Embodiments of the present disclosure provide a camera tracking method and apparatus. Camera tracking is performed using a binocular video image, thereby improving tracking precision.
To achieve the foregoing objective, the following technical solutions are used in the present disclosure.
According to a first aspect, an embodiment of the present disclosure provides a camera tracking method, including obtaining an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately extracting feature points of the first image and feature points of the second image in the image set of the current frame, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; obtaining a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other; separately estimating, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame; and optimizing the motion parameter of the binocular camera on the next frame using a random sample consensus (RANSAC) algorithm and a Levenberg-Marquardt (LM) algorithm.
In a first possible implementation manner of the first aspect, with reference to the first aspect, the obtaining a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other includes obtaining a candidate matching feature point set between the first image and the second image; performing Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set; traversing sides of each triangle with a ratio of a height to a base side less than a first preset threshold; and if a parallax difference |d(x1)−d(x2)| of two feature points (x1,x2) connected by a first side is less than a second preset threshold, adding one vote for the first side; otherwise, subtracting one vote, where a parallax of the feature point x is: d(x)=uleft−uright, where uleft is a horizontal coordinate, of the feature point x, in a planar coordinate system of the first image, and uright is a horizontal coordinate, of a feature point that is in the second image and matches the feature point x, in a planar coordinate system of the second image; and counting a vote quantity corresponding to each side, and using a set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image.
In a second possible implementation manner of the first aspect, with reference to the first possible implementation manner of the first aspect, the obtaining a candidate matching feature point set between the first image and the second image includes traversing the feature points in the first image; searching, according to locations xleft=(uleft,vleft)T of the feature points in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for a point xright that makes ∥χleft−χright∥22 smallest; searching, according to locations xright=(uright,vright)T of the feature points in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for a point xleft′ that makes ∥χright−χleft′∥22 smallest; and if xleft′=xleft, using (xleft,xright) as a pair of matching feature points, where χleft is a description quantity of a feature point xleft in the first image, χright is a description quantity of a feature point xright in the second image, and a and b are preset constants; and using a set including all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
In a third possible implementation manner of the first aspect, with reference to the first aspect, the separately estimating, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame includes obtaining a three-dimensional location Xt of the scene point corresponding to matching feature points (xt,left,xt,right) in the local coordinate system of the current frame by triangulation according to the parallax of the matching feature points and the attribute parameter of the binocular camera, and estimating a three-dimensional location Xt+1 of the scene point in the local coordinate system of the next frame according to the preset model.
In a fourth possible implementation manner of the first aspect, with reference to the first aspect, the estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame includes representing, in a world coordinate system, the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, that is, Xi=αi1C1+αi2C2+αi3C3+αi4C4 with αi1+αi2+αi3+αi4=1, and calculating center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system; representing the three-dimensional location of the scene point in the local coordinate system of the next frame using the center-of-mass coordinates, that is, Xti=αi1Ct1+αi2Ct2+αi3Ct3+αi4Ct4, where Ctj (j=1, . . . , 4) are coordinates of the control points in the local coordinate system of the next frame; solving for the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame according to a correspondence between the matching feature points and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and estimating a motion parameter (Rt,Tt) of the binocular camera on the next frame according to a correspondence Xt=RtX+Tt between a three-dimensional location of the scene point corresponding to the matching feature points in the world coordinate system and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame, where Rt is a 3×3 rotation matrix, and Tt is a three-dimensional vector.
In a fifth possible implementation manner of the first aspect, with reference to the first aspect, the optimizing the motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm includes sorting matching feature points included in the matching feature point set according to a similarity of matching feature points in local image windows between two consecutive frames; successively sampling four pairs of matching feature points according to descending order of similarities, and estimating a motion parameter (Rt,Tt) of the binocular camera on the next frame; separately calculating a projection error of each pair of matching feature points in the matching feature point set using the estimated motion parameter of the binocular camera on the next frame, and using matching feature points with a projection error less than a second preset threshold as interior points; repeating the foregoing process k times, selecting the four pairs of matching feature points that yield a largest quantity of interior points, and recalculating a motion parameter of the binocular camera on the next frame; and using the recalculated motion parameter as an initial value, and calculating the motion parameter (Rt, Tt) of the binocular camera on the next frame according to an optimization formula: min(Rt,Tt) Σi=1n′ ∥xti−π(RtXi+Tt)∥22, where n′ is a quantity of interior points obtained using the RANSAC algorithm.
According to a second aspect, an embodiment of the present disclosure provides a camera tracking method, including obtaining a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtaining a matching feature point set between the first image and the second image in the image set of each frame; separately estimating a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame according to the method in the third possible implementation manner of the first aspect; separately estimating a motion parameter of the binocular camera on each frame according to the method in any implementation manner of the first aspect or any implementation manner of the first to the fifth possible implementation manner of the first aspect; and optimizing the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame.
In a first possible implementation manner of the second aspect, with reference to the second aspect, the optimizing the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame includes optimizing the motion parameter of the binocular camera on each frame according to an optimization formula:
min(Rt,Tt),Xi Σt=1M Σi=1N ∥xti−π(RtXi+Tt)∥22, where N is a quantity of scene points corresponding to matching feature points included in the matching feature point set, M is a frame quantity, xti=(ut,lefti,vt,lefti,ut,righti)T, and π(X)=(πleft(X)[1],πleft(X)[2],πright(X)[1])T.
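For illustration only, the projection function π used above can be made concrete. The following sketch is a hypothetical Python helper, not part of the disclosure; it assumes a rectified binocular pair with a shared intrinsic matrix K and a baseline B standing in for the attribute parameters of the binocular camera, and maps a point X in a frame's local coordinate system to the observed coordinates (uleft, vleft, uright):

```python
import numpy as np

def project_binocular(X, K, baseline):
    """Sketch of pi(X)=(pi_left(X)[1], pi_left(X)[2], pi_right(X)[1]).

    X is a 3D point in a frame's local (left-camera) coordinate system; K is a
    3x3 intrinsic matrix shared by both cameras and baseline is the distance
    between the two camera centers (both are assumptions of this sketch).
    """
    p_left = K @ X                                      # perspective projection, left camera
    u_left, v_left = p_left[0] / p_left[2], p_left[1] / p_left[2]
    p_right = K @ (X - np.array([baseline, 0.0, 0.0]))  # right camera of a rectified pair
    u_right = p_right[0] / p_right[2]
    # pi(X) keeps u_left, v_left, and u_right, matching xti=(ut,lefti, vt,lefti, ut,righti)T.
    return np.array([u_left, v_left, u_right])
```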
According to a third aspect, an embodiment of the present disclosure provides a camera tracking apparatus, including a first obtaining module configured to obtain an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; an extracting module configured to separately extract feature points of the first image and feature points of the second image in the image set of the current frame obtained by the first obtaining module, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; a second obtaining module configured to obtain, according to a rule that scene depths of adjacent regions on an image are close to each other, a matching feature point set between the first image and the second image in the image set of the current frame from the feature points extracted by the extracting module; a first estimating module configured to separately estimate, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in the matching feature point set, obtained by the second obtaining module, in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; a second estimating module configured to estimate a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame that are estimated by the first estimating module; and an optimizing module configured to optimize the motion parameter, estimated by the second estimating module, of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm.
In a first possible implementation manner of the third aspect, with reference to the third aspect, the second obtaining module is configured to obtain a candidate matching feature point set between the first image and the second image; perform Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set; traverse sides of each triangle with a ratio of a height to a base side less than a first preset threshold; and if a parallax difference |d(x1)−d(x2)| of two feature points (x1,x2) connected by a first side is less than a second preset threshold, add one vote for the first side; otherwise, subtract one vote, where a parallax of the feature point x is: d(x)=uleft−uright, where uleft is a horizontal coordinate, of the feature point x, in a planar coordinate system of the first image, and uright is a horizontal coordinate, of a feature point that is in the second image and matches the feature point x, in a planar coordinate system of the second image; and count a vote quantity corresponding to each side, and use a set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image.
In a second possible implementation manner of the third aspect, with reference to the first possible implementation manner of the third aspect, the second obtaining module is configured to traverse the feature points in the first image; search, according to locations xleft=(uleft,vleft)T of the feature points in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for a point xright that makes ∥χleft−χright∥22 smallest; search, according to locations xright=(uright,vright)T of the feature points in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for a point xleft′ that makes ∥χright−χleft′∥22 smallest; and if xleft′=xleft, use (xleft,xright) as a pair of matching feature points, where χleft is a description quantity of a feature point xleft in the first image, χright is a description quantity of a feature point xright in the second image, and a and b are preset constants; and use a set including all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
In a third possible implementation manner of the third aspect, with reference to the third aspect, the first estimating module is configured to obtain a three-dimensional location Xt of the scene point corresponding to matching feature points (xt,left,xt,right) in the local coordinate system of the current frame by triangulation according to the parallax of the matching feature points and the attribute parameter of the binocular camera, and estimate a three-dimensional location Xt+1 of the scene point in the local coordinate system of the next frame according to the preset model.
In a fourth possible implementation manner of the third aspect, with reference to the third aspect, the second estimating module is configured to represent, in a world coordinate system, the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, that is, Xi=αi1C1+αi2C2+αi3C3+αi4C4 with αi1+αi2+αi3+αi4=1, and calculate center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system; represent the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame using the center-of-mass coordinates, that is, Xti=αi1Ct1+αi2Ct2+αi3Ct3+αi4Ct4, where Ctj (j=1, . . . , 4) are coordinates of the control points in the local coordinate system of the next frame; solve for the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame according to a correspondence between the matching feature points and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and estimate a motion parameter (Rt, Tt) of the binocular camera on the next frame according to a correspondence Xt=RtX+Tt between a three-dimensional location of the scene point corresponding to the matching feature points in the world coordinate system and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame, where Rt is a 3×3 rotation matrix, and Tt is a three-dimensional vector.
In a fifth possible implementation manner of the third aspect, with reference to the third aspect, the optimizing module is configured to sort matching feature points included in the matching feature point set according to a similarity of matching feature points in local image windows between two consecutive frames; successively sample four pairs of matching feature points according to descending order of similarities, and estimate a motion parameter (Rt, Tt) of the binocular camera on the next frame; separately calculate a projection error of each pair of matching feature points in the matching feature point set using the estimated motion parameter of the binocular camera on the next frame, and use matching feature points with a projection error less than a second preset threshold as interior points; repeat the foregoing process k times, select the four pairs of matching feature points that yield a largest quantity of interior points, and recalculate a motion parameter of the binocular camera on the next frame; and use the recalculated motion parameter as an initial value, and calculate the motion parameter (Rt, Tt) of the binocular camera on the next frame according to an optimization formula: min(Rt,Tt) Σi=1n′ ∥xti−π(RtXi+Tt)∥22, where n′ is a quantity of interior points obtained using the RANSAC algorithm.
According to a fourth aspect, an embodiment of the present disclosure provides a camera tracking apparatus, including a first obtaining module configured to obtain a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; a second obtaining module configured to separately obtain a matching feature point set between the first image and the second image in the image set of each frame; a first estimating module configured to separately estimate a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; a second estimating module configured to separately estimate a motion parameter of the binocular camera on each frame; and an optimizing module configured to optimize the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame.
In a first possible implementation manner of the fourth aspect, with reference to the fourth aspect, the optimizing module is configured to optimize the motion parameter of the binocular camera on each frame according to an optimization formula:
min(Rt,Tt),Xi Σt=1M Σi=1N ∥xti−π(RtXi+Tt)∥22, where N is a quantity of scene points corresponding to matching feature points included in the matching feature point set, M is a frame quantity, xti=(ut,lefti, vt,lefti, ut,righti)T, and π(X)=(πleft(X)[1], πleft(X)[2], πright(X)[1])T.
According to a fifth aspect, an embodiment of the present disclosure provides a camera tracking apparatus, including a binocular camera configured to obtain an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of the binocular camera at a same moment; and a processor configured to separately extract feature points of the first image and feature points of the second image in the image set of the current frame obtained by the binocular camera, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; obtain, according to a rule that scene depths of adjacent regions on an image are close to each other, a matching feature point set between the first image and the second image in the image set of the current frame from the feature points extracted by the processor; separately estimate, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in the matching feature point set, obtained by the processor, in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimate a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame that are estimated by the processor; and optimize the motion parameter, estimated by the processor, of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm.
In a first possible implementation manner of the fifth aspect, with reference to the fifth aspect, the processor is configured to obtain a candidate matching feature point set between the first image and the second image; perform Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set; traverse sides of each triangle with a ratio of a height to a base side less than a first preset threshold; and if a parallax difference |d(x1)−d(x2)| of two feature points (x1,x2) connected by a first side is less than a second preset threshold, add one vote for the first side; otherwise, subtract one vote, where a parallax of the feature point x is: d(x)=uleft−uright, where uleft is a horizontal coordinate, of the feature point x, in a planar coordinate system of the first image, and uright is a horizontal coordinate, of a feature point that is in the second image and matches the feature point x, in a planar coordinate system of the second image; and count a vote quantity corresponding to each side, and use a set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image.
In a second possible implementation manner of the fifth aspect, with reference to the first possible implementation manner of the fifth aspect, the processor is configured to traverse the feature points in the first image; search, according to locations xleft=(uleft,vleft)T of the feature points in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for a point xright that makes ∥χleft−χright∥22 smallest; search, according to locations xright=(uright,vright)T of the feature points in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for a point xleft′ that makes ∥χright−χleft′∥22 smallest; and if xleft′=xleft, use (xleft,xright) as a pair of matching feature points, where χleft is a description quantity of a feature point xleft in the first image, χright is a description quantity of a feature point xright in the second image, and a and b are preset constants; and use a set including all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
In a third possible implementation manner of the fifth aspect, with reference to the fifth aspect, the processor is configured to obtain a three-dimensional location Xt of the scene point corresponding to matching feature points (xt,left,xt,right) in the local coordinate system of the current frame by triangulation according to the parallax of the matching feature points and the attribute parameter of the binocular camera, and estimate a three-dimensional location Xt+1 of the scene point in the local coordinate system of the next frame according to the preset model.
In a fourth possible implementation manner of the fifth aspect, with reference to the fifth aspect, the processor is configured to represent, in a world coordinate system, the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, that is, Xi=αi1C1+αi2C2+αi3C3+αi4C4 with αi1+αi2+αi3+αi4=1, and calculate center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system; represent the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame using the center-of-mass coordinates, that is, Xti=αi1Ct1+αi2Ct2+αi3Ct3+αi4Ct4, where Ctj (j=1, . . . , 4) are coordinates of the control points in the local coordinate system of the next frame; solve for the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame according to a correspondence between the matching feature points and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and estimate a motion parameter (Rt,Tt) of the binocular camera on the next frame according to a correspondence Xt=RtX+Tt between a three-dimensional location of the scene point corresponding to the matching feature points in the world coordinate system and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame, where Rt is a 3×3 rotation matrix, and Tt is a three-dimensional vector.
In a fifth possible implementation manner of the fifth aspect, with reference to the fifth aspect, the processor is configured to sort matching feature points included in the matching feature point set according to a similarity of matching feature points in local image windows between two consecutive frames; successively sample four pairs of matching feature points according to descending order of similarities, and estimate a motion parameter (Rt,Tt) of the binocular camera on the next frame; separately calculate a projection error of each pair of matching feature points in the matching feature point set using the estimated motion parameter of the binocular camera on the next frame, and use matching feature points with a projection error less than a second preset threshold as interior points; repeat the foregoing process k times, select the four pairs of matching feature points that yield a largest quantity of interior points, and recalculate a motion parameter of the binocular camera on the next frame; and use the recalculated motion parameter as an initial value, and calculate the motion parameter (Rt,Tt) of the binocular camera on the next frame according to an optimization formula: min(Rt,Tt) Σi=1n′ ∥xti−π(RtXi+Tt)∥22, where n′ is a quantity of interior points obtained using the RANSAC algorithm.
According to a sixth aspect, an embodiment of the present disclosure provides a camera tracking apparatus, including a binocular camera configured to obtain a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of the binocular camera at a same moment; and a processor configured to separately obtain a matching feature point set between the first image and the second image in the image set of each frame; separately estimate a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimate a motion parameter of the binocular camera on each frame; and optimize the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame.
In a first possible implementation manner of the sixth aspect, with reference to the sixth aspect, the processor is configured to optimize the motion parameter of the binocular camera on each frame according to an optimization formula:
min(Rt,Tt),Xi Σt=1M Σi=1N ∥xti−π(RtXi+Tt)∥22, where N is a quantity of scene points corresponding to matching feature points included in the matching feature point set, M is a frame quantity, xti=(ut,lefti,vt,lefti,ut,righti)T, and π(X)=(πleft(X)[1],πleft(X)[2],πright(X)[1])T.
It can be learned from the foregoing that, the embodiments of the present disclosure provide a camera tracking method and apparatus, where the method includes obtaining an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately extracting feature points of the first image and feature points of the second image in the image set of the current frame, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; obtaining a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other; separately estimating, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame; and optimizing the motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. The accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
The following clearly describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. The described embodiments are merely some but not all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
Step 201: Obtain an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment.
The image set of the current frame belongs to a video sequence shot by the binocular camera, and the video sequence is a set of image sets shot by the binocular camera in a period of time.
Step 202: Separately extract feature points of the first image and feature points of the second image in the image set of the current frame, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image.
The feature point generally refers to a point whose gray scale sharply changes in an image, and includes a point with a largest curvature change on an object contour, an intersection point of straight lines, an isolated point on a monotonic background, and the like.
Preferably, the feature points of the first image and the feature points of the second image in the image set of the current frame may be separately extracted using a scale-invariant feature transform (SIFT) algorithm. Description is made below using a process of extracting the feature points of the first image as an example.
(1) Detect scale space extrema, and obtain candidate feature points. Searching is performed over all scales and image locations using a difference of Gaussian (DoG) operator, to preliminarily determine a location of a key point and a scale of the key point, and scale space of the first image at different scales is defined as a convolution of an image I(x, y) and a Gaussian kernel G(x, y, σ): L(x, y, σ)=G(x, y, σ)*I(x, y),
where G(x, y, σ)=(1/(2πσ2))exp(−(x2+y2)/(2σ2)), and the DoG response is:
D(x,y,σ)=(G(x,y,kσ)−G(x,y,σ))*I(x,y)=L(x,y,kσ)−L(x,y,σ).
All points are traversed in scale space of the image, and a value relationship between each point and the points in its neighborhood is determined. If there is a first point with a value greater than or less than the values of all the points in its neighborhood, the first point is a candidate feature point.
(2) Screen all candidate feature points, to obtain the feature points in the first image.
Preferably, an edge response point and a feature point with a poor contrast ratio and poor stability are removed from all the candidate feature points, and remaining feature points are used as the feature points of the first image.
(3) Separately perform direction allocation on each feature point in the first image.
Preferably, a scale factor m and a main rotation direction θ are specified for each feature point using a gradient direction distribution characteristic of feature point neighborhood pixels, so that an operator has scale and rotation invariance, where m(x, y)=((L(x+1, y)−L(x−1, y))2+(L(x, y+1)−L(x, y−1))2)1/2 and θ(x, y)=arctan((L(x, y+1)−L(x, y−1))/(L(x+1, y)−L(x−1, y))).
(4) Perform feature description on each feature point in the first image.
Preferably, a coordinate axis of a planar coordinate system is rotated to a main direction of the feature point, a square image region that has a side length of 20 s and is aligned with θ is sampled using a feature point x as a center, the region is evenly divided into 16 sub-regions of 4×4, and four components of Σdx, Σ|dx|, Σdy, and Σ|dy| are calculated for each sub-region. Then, the feature point x corresponds to a description quantity χ of 16×4=64 dimensions, where dx and dy respectively represent Haar wavelet responses (with a filter width of 2 s) in x and y directions.
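As a concrete illustration of sub-steps (1) to (4), the following sketch extracts scale- and rotation-invariant feature points and description quantities with OpenCV's SIFT implementation. This is only an approximation for illustration: OpenCV's descriptor is 128-dimensional rather than the 64-dimensional description quantity derived above, and the file names are hypothetical.

```python
import cv2

def extract_features(image_path):
    """Approximate sub-steps (1)-(4): DoG extrema, screening, orientation, description."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # SIFT internally builds the Gaussian/DoG scale space, discards low-contrast and
    # edge responses, assigns a main orientation, and computes a descriptor per point.
    sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

# The feature points of the first and second images are extracted separately.
kp_left, desc_left = extract_features("left.png")    # hypothetical file names
kp_right, desc_right = extract_features("right.png")
```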
Step 203: Obtain a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other.
Exemplarily, the obtaining a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other may include:
(1) Obtain a candidate matching feature point set between the first image and the second image.
(2) Perform Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set.
For example, if there are 100 pairs of matching feature points (xleft,1,xright,1) to (xleft,100,xright,100) in the candidate matching feature point set, any three feature points in 100 feature points xleft,1 to xleft,100 in the first image corresponding to the candidate matching feature point set are connected as a triangle, and connecting lines cannot be crossed in a connecting process, to form a grid diagram including multiple triangles.
(3) Traverse sides of each triangle with a ratio of a height to a base side less than a first preset threshold; and if a parallax difference |d(x1)−d(x2)| of two feature points (x1,x2) connected by a first side is less than a second preset threshold, add one vote for the first side; otherwise, subtract one vote, where a parallax of the feature point x is: d(x)=uleft−uright, where uleft is a horizontal coordinate, of the feature point x, in a planar coordinate system of the first image, and uright is a horizontal coordinate, of a feature point that is in the second image and matches the feature point x, in a planar coordinate system of the second image.
The first preset threshold is set according to experiment experience, which is not limited in this embodiment. If a ratio of a height to a base side of a triangle is less than the first preset threshold, it indicates that a depth variation of a scene point corresponding to a vertex of the triangle is not large, and the vertex of the triangle may meet the rule that scene depths of adjacent regions on an image are close to each other. If a ratio of a height to a base side of a triangle is greater than or equal to the first preset threshold, it indicates that a depth variation of a scene corresponding to a vertex of the triangle is relatively large, and the vertex of the triangle may not meet the rule that scene depths of adjacent regions on an image are close to each other, and matching feature points cannot be selected according to the rule.
Likewise, the second preset threshold is also set according to experiment experience, which is not limited in this embodiment. If a parallax difference between two feature points is less than the second preset threshold, it indicates that scene depths between the two feature points are similar. If a parallax difference between two feature points is greater than or equal to the second preset threshold, it indicates that a scene depth variation between the two feature points is relatively large, and that there is mismatching.
(4) Count a vote quantity corresponding to each side, and use a set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image.
For example, feature points connected by all sides with a positive vote quantity are xleft,20 to xleft,80, and a set of matching feature points (xleft,20, xright,20) to (xleft,80,xright,80) is used as the matching feature point set between the first image and the second image.
The obtaining a candidate matching feature point set between the first image and the second image includes traversing the feature points in the first image; searching, according to locations xleft=(uleft,vleft)T of the feature points in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for a point xright=(uright,vright)T that makes ∥χleft−χright∥22 smallest; searching, according to locations xright=(uright,vright)T of the feature points in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for a point xleft′ that makes ∥χright−χleft′∥22 smallest; and if xleft′=xleft, using (xleft,xright) as a pair of matching feature points, where χleft is a description quantity of a feature point xleft in the first image, χright is a description quantity of a feature point xright in the second image, a and b are preset constants, and a=200 and b=5 in an experiment; and using a set including all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
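A minimal sketch of this two-stage matching is given below, assuming NumPy and SciPy, with scipy.spatial.Delaunay standing in for the Delaunay triangularization; for brevity, the filter on the ratio of a triangle's height to its base side is omitted, and the thresholds follow the experimental values a=200 and b=5 given above.

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_matches(pts_l, desc_l, pts_r, desc_r, a=200, b=5):
    """Bidirectional search: keep (x_left, x_right) only if each is the other's best match."""
    matches = []
    for i, (u, v) in enumerate(pts_l):
        window = [j for j, (ur, vr) in enumerate(pts_r)
                  if u - a <= ur <= u and v - b <= vr <= v + b]
        if not window:
            continue
        j = min(window, key=lambda j: np.sum((desc_l[i] - desc_r[j]) ** 2))
        ur, vr = pts_r[j]                 # reverse check around x_right in the first image
        back = [m for m, (ul, vl) in enumerate(pts_l)
                if ur <= ul <= ur + a and vr - b <= vl <= vr + b]
        if min(back, key=lambda m: np.sum((desc_r[j] - desc_l[m]) ** 2)) == i:
            matches.append((i, j))
    return matches

def vote_filter(pts_l, pts_r, matches, parallax_thr=5.0):
    """Keep matches whose Delaunay edges receive a positive parallax-consistency vote.
    (The height/base-side ratio filter on triangles is omitted in this sketch.)"""
    d = np.array([pts_l[i][0] - pts_r[j][0] for i, j in matches])  # d(x) = u_left - u_right
    tri = Delaunay(np.array([pts_l[i] for i, _ in matches]))
    votes = {}
    for s in tri.simplices:
        for e in [(s[0], s[1]), (s[1], s[2]), (s[0], s[2])]:
            key = tuple(sorted(e))
            votes[key] = votes.get(key, 0) + (1 if abs(d[e[0]] - d[e[1]]) < parallax_thr else -1)
    keep = {v for edge, n in votes.items() if n > 0 for v in edge}
    return [matches[v] for v in sorted(keep)]
```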
Step 204: Separately estimate, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame.
Exemplarily, the separately estimating, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame includes obtaining the three-dimensional location Xt of the scene point corresponding to matching feature points (xt,left,xt,right) in the local coordinate system of the current frame by triangulation according to the parallax d(x)=uleft−uright and the attribute parameter of the binocular camera, and estimating the three-dimensional location Xt+1 of the scene point in the local coordinate system of the next frame by solving a preset optimization formula (optimization formula 2).
Preferably, optimization formula 2 is solved using an iteration algorithm in which an increment δX is computed and applied repeatedly until convergence; Xt+1 obtained in this case is the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame.
In each iteration, δX is obtained by solving a linear equation derived from optimization formula 2.
It should be noted that, to further accelerate convergence and improve a computation rate, a graphics processing unit (GPU) is used to establish a Gaussian pyramid for an image; optimization formula 2 is first solved on a low-resolution image, and then optimization is further performed on a high-resolution image. In an experiment, a pyramid layer quantity is set to 2.
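Because the formulas of the preset model are given here only by reference, the following sketch shows the standard triangulation for a rectified binocular pair, which is consistent with the parallax d(x)=uleft−uright defined in step 203; the focal length f, principal point (cu, cv), and baseline B stand in for the attribute parameters of the binocular camera, and the subsequent iterative refinement of Xt+1 (optimization formula 2) is not reproduced.

```python
import numpy as np

def triangulate(u_left, v_left, u_right, f, cu, cv, B):
    """Rectified-stereo triangulation consistent with the parallax d(x)=u_left-u_right.

    f, (cu, cv), and B are the focal length, principal point, and baseline; they
    stand in for the attribute parameters of the binocular camera (assumptions).
    """
    d = u_left - u_right        # parallax d(x), as defined in step 203
    Z = f * B / d               # depth shrinks as the parallax grows
    X = (u_left - cu) * Z / f   # back-project the left-image pixel at depth Z
    Y = (v_left - cv) * Z / f
    return np.array([X, Y, Z])  # Xt, in the local coordinate system of the current frame
```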
Step 205: Estimate a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame.
Exemplarily, the estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame may include:
representing, in a world coordinate system, the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, that is, Xi=αi1C1+αi2C2+αi3C3+αi4C4 with αi1+αi2+αi3+αi4=1, and calculating center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system; representing the three-dimensional location of the scene point in the local coordinate system of the next frame using the center-of-mass coordinates, that is, Xti=αi1Ct1+αi2Ct2+αi3Ct3+αi4Ct4, where Ctj (j=1, . . . , 4) are coordinates of the control points in the local coordinate system of the next frame; solving for the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame according to a correspondence between the matching feature points and the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and estimating the motion parameter (Rt,Tt) of the binocular camera on the next frame according to the correspondence Xt=RtX+Tt between the three-dimensional location of the scene point in the world coordinate system and the three-dimensional location in the local coordinate system of the next frame, where Rt is a 3×3 rotation matrix, and Tt is a three-dimensional vector.
When the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame are being solved for, direct linear transformation (DLT) is performed on the projection relation xti=π(αi1Ct1+αi2Ct2+αi3Ct3+αi4Ct4), to convert it, for each pair of matching feature points, into three linear equations about 12 variables of ((Ct1)T, (Ct2)T, (Ct3)T, (Ct4)T)T.
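The center-of-mass coordinate machinery can be sketched as follows under the stated assumption of four non-coplanar control points: computing (αi1, . . . , αi4) is a 4×4 linear solve, and re-applying the same coefficients to the control points Ctj of the next frame reproduces Xti, which is exactly the invariance to rigid transformation relied on above. The full DLT solve for the 12 control-point coordinates is not reproduced here.

```python
import numpy as np

def barycentric_coords(X, C):
    """Compute alpha with X = alpha_1*C_1 + ... + alpha_4*C_4 and sum(alpha) = 1.

    X is a 3D point in the world coordinate system; C is a 4x3 array holding the
    four non-coplanar control points C_1..C_4, so the 4x4 system below is invertible.
    """
    A = np.vstack([C.T, np.ones(4)])   # three coordinate equations plus the sum-to-1 constraint
    rhs = np.append(X, 1.0)
    return np.linalg.solve(A, rhs)     # (alpha_i1, ..., alpha_i4)T

def reconstruct_in_next_frame(alpha, C_t):
    """Invariance to rigid transformation: Xti = sum_j alpha_ij * Ctj, where C_t (4x3)
    holds the control points expressed in the local coordinate system of the next frame."""
    return alpha @ C_t
```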
Step 206: Optimize the motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm.
Exemplarily, the optimizing the motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm may include sorting the matching feature points according to a similarity of matching feature points in local image windows between two consecutive frames, repeatedly sampling four pairs of matching feature points and counting interior points as described above, and then, using the recalculated motion parameter as an initial value, calculating the motion parameter (Rt,Tt) of the binocular camera on the next frame according to an optimization formula: min(Rt,Tt) Σi=1n′ ∥xti−π(RtXi+Tt)∥22, where n′ is a quantity of interior points obtained using the RANSAC algorithm.
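A schematic of this RANSAC-plus-LM refinement follows, assuming SciPy's Levenberg-Marquardt-capable least_squares solver and a projection function like the one sketched earlier; the pose is parameterized by a rotation vector purely for illustration, random sampling stands in for the similarity-ordered sampling, and k and the error threshold are placeholder values.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ransac_lm_pose(X, x_obs, project, k=200, thr=2.0):
    """X: Nx3 scene points; x_obs: Nx3 observations (u_left, v_left, u_right) per point;
    project: a projection function such as the binocular one sketched earlier."""
    rng = np.random.default_rng(0)

    def residuals(p, idx):
        # Pose p = (rotation vector, translation); stacked reprojection errors.
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        pred = np.array([project(R @ X[i] + p[3:]) for i in idx])
        return (pred - x_obs[idx]).ravel()

    best_inliers, p_best = np.zeros(len(X), dtype=bool), np.zeros(6)
    for _ in range(k):
        sample = rng.choice(len(X), 4, replace=False)    # four pairs of matching points
        p_trial = least_squares(residuals, np.zeros(6), args=(sample,)).x
        err = residuals(p_trial, np.arange(len(X))).reshape(-1, 3)
        inliers = np.linalg.norm(err, axis=1) < thr      # interior points
        if inliers.sum() > best_inliers.sum():
            best_inliers, p_best = inliers, p_trial

    # Final Levenberg-Marquardt refinement over the n' interior points only.
    idx = np.flatnonzero(best_inliers)
    p = least_squares(residuals, p_best, args=(idx,), method="lm").x
    return Rotation.from_rotvec(p[:3]).as_matrix(), p[3:]   # (Rt, Tt)
```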
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking method, which includes obtaining an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately extracting feature points of the first image and feature points of the second image in the image set of the current frame, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; obtaining a matching feature point set between the first image and the second image in the image set of the current frame according to a rule that scene depths of adjacent regions on an image are close to each other; separately estimating, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimating a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame; and optimizing the motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
Step 301: Obtain a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment.
Step 302: Separately obtain a matching feature point set between the first image and the second image in the image set of each frame.
It should be noted that, a method for obtaining a matching feature point set between the first image and the second image in the image set of each frame is the same as the method in Embodiment 1 for obtaining the matching feature point set between the first image and the second image in the image set of the current frame, and details are not described herein.
Step 303: Separately estimate a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame.
It should be noted that, a method for estimating a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame is the same as step 204 in Embodiment 1, and details are not described herein.
Step 304: Separately estimate a motion parameter of the binocular camera on each frame.
It should be noted that, a method for estimating a motion parameter of the binocular camera on each frame is the same as the method in Embodiment 1 for calculating the motion parameter of the binocular camera on the next frame, and details are not described herein.
Step 305: Optimize the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame.
Exemplarily, the optimizing the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame includes optimizing the motion parameter of the binocular camera on each frame according to an optimization formula:
min(Rt,Tt),Xi Σt=1M Σi=1N ∥xti−π(RtXi+Tt)∥22, where N is a quantity of scene points corresponding to matching feature points included in the matching feature point set, M is a frame quantity, xti=(ut,lefti, vt,lefti, ut,righti)T, and π(X)=(πleft(X)[1], πleft(X)[2], πright(X)[1])T.
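This joint refinement over all M frames and N scene points is a bundle-adjustment-style problem. The sketch below is an illustrative formulation, not the disclosure's exact solver; it reuses the hypothetical projection helper from earlier and a generic least-squares routine.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_all(poses0, X0, obs, project):
    """Minimize sum_t sum_i ||xti - pi(Rt*Xi + Tt)||^2 over all M poses and N points.

    poses0: list of M initial (rotation-vector, translation) pairs; X0: Nx3 initial
    scene points; obs: dict mapping (t, i) to an observed (u_left, v_left, u_right).
    """
    M, N = len(poses0), len(X0)
    p0 = np.concatenate([np.concatenate([r, t]) for r, t in poses0] + [X0.ravel()])

    def residuals(p):
        res = []
        for (t, i), x_ti in obs.items():
            rt, Tt = p[6 * t:6 * t + 3], p[6 * t + 3:6 * t + 6]
            Xi = p[6 * M + 3 * i:6 * M + 3 * i + 3]
            Rt = Rotation.from_rotvec(rt).as_matrix()
            res.append(project(Rt @ Xi + Tt) - x_ti)
        return np.concatenate(res)

    p = least_squares(residuals, p0).x   # joint refinement of every pose and scene point
    poses = [(Rotation.from_rotvec(p[6*t:6*t+3]).as_matrix(), p[6*t+3:6*t+6]) for t in range(M)]
    return poses, p[6 * M:].reshape(N, 3)
```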
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking method, which includes obtaining a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtaining a matching feature point set between the first image and the second image in the image set of each frame; separately estimating a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimating a motion parameter of the binocular camera on each frame; and optimizing the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
The first obtaining module 401 is configured to obtain an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment.
The image set of the current frame belongs to a video sequence shot by the binocular camera, and the video sequence is a set of image sets shot by the binocular camera in a period of time.
The extracting module 402 is configured to separately extract feature points of the first image and feature points of the second image in the image set of the current frame obtained by the first obtaining module 401, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image.
The feature point generally refers to a point whose gray scale sharply changes in an image, and includes a point with a largest curvature change on an object contour, an intersection point of straight lines, an isolated point on a monotonic background, and the like.
The second obtaining module 403 is configured to obtain, according to a rule that scene depths of adjacent regions on an image are close to each other, a matching feature point set between the first image and the second image in the image set of the current frame from the feature points extracted by the extracting module 402.
The first estimating module 404 is configured to separately estimate, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in the matching feature point set, obtained by the second obtaining module 403, in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame.
The second estimating module 405 is configured to estimate a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the current frame and the three-dimensional location of the scene point in the local coordinate system of the next frame that are estimated by the first estimating module 404.
The optimizing module 406 is configured to optimize the motion parameter, estimated by the second estimating module, of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm.
Further, the extracting module 402 is configured to separately extract the feature points of the first image and the feature points of the second image in the image set of the current frame using an SIFT algorithm. Description is made below using a process of extracting the feature points of the first image as an example.
The scale space of the first image is defined as a convolution of an image I(x, y) and a Gaussian kernel G(x, y, σ), that is, L(x, y, σ)=G(x, y, σ)*I(x, y), where G(x, y, σ)=(1/(2πσ2))exp(−(x2+y2)/(2σ2)), and a DoG response is D(x, y, σ)=(G(x, y, kσ)−G(x, y, σ))*I(x, y)=L(x, y, kσ)−L(x, y, σ). All points are traversed in scale space of the image, and a value relationship between each point and the points in its neighborhood is determined. If there is a first point with a value greater than or less than the values of all the points in its neighborhood, the first point is a candidate feature point.
Preferably, an edge response point and a feature point with a poor contrast ratio and poor stability are removed from all the candidate feature points, and remaining feature points are used as the feature points of the first image.
Preferably, a scale factor m and a main rotation direction θ are specified for each feature point using a gradient direction distribution characteristic of feature point neighborhood pixels, so that an operator has scale and rotation invariance, where m(x, y)=((L(x+1, y)−L(x−1, y))2+(L(x, y+1)−L(x, y−1))2)1/2 and θ(x, y)=arctan((L(x, y+1)−L(x, y−1))/(L(x+1, y)−L(x−1, y))).
Preferably, a coordinate axis of a planar coordinate system is rotated to a main direction of the feature point, a square image region that has a side length of 20 s and is aligned with θ is sampled using a feature point x as a center, the region is evenly divided into 16 sub-regions of 4×4, and four components of Σdx, Σ|dx|, Σdy, and Σ|dy| are calculated for each sub-region. Then, the feature point x corresponds to a description quantity χ of 16×4=64 dimensions, where dx and dy respectively represent Haar wavelet responses (with a filter width of 2 s) in x and y directions.
Further, the second obtaining module 403 is configured to obtain the matching feature point set between the first image and the second image in the manner described in step 203 above: obtain a candidate matching feature point set between the first image and the second image, and perform Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set.
For example, if there are 100 pairs of matching feature points (xleft,1,xright,1) to (xleft,100,xright,100) in the candidate matching feature point set, any three feature points in 100 feature points xleft,1 to xleft,100 in the first image corresponding to the candidate matching feature point set are connected as a triangle, and connecting lines cannot be crossed in a connecting process, to form a grid diagram including multiple triangles.
The first preset threshold is set according to experiment experience, which is not limited in this embodiment. If a ratio of a height to a base side of a triangle is less than the first preset threshold, it indicates that a depth variation of a scene point corresponding to a vertex of the triangle is not large, and the vertex of the triangle may meet the rule that scene depths of adjacent regions on an image are close to each other. If a ratio of a height to a base side of a triangle is greater than or equal to the first preset threshold, it indicates that a depth variation of a scene corresponding to a vertex of the triangle is relatively large, and the vertex of the triangle may not meet the rule that scene depths of adjacent regions on an image are close to each other, and matching feature points cannot be selected according to the rule.
Likewise, the second preset threshold is also set according to experimental experience, which is not limited in this embodiment. If the parallax difference between two feature points is less than the second preset threshold, the scene depths of the two feature points are similar. If the parallax difference is greater than or equal to the second preset threshold, the scene depth variation between the two feature points is relatively large, and the pair is probably a mismatch.
(4) Count the vote quantity corresponding to each side, and use the set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image. For example, if the feature points connected by all sides with a positive vote quantity are xleft,20 to xleft,80, the set of matching feature points (xleft,20,xright,20) to (xleft,80,xright,80) is used as the matching feature point set between the first image and the second image.
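Purely as a sketch of this voting scheme, the following Python outline uses scipy's Delaunay triangulation as a stand-in for the triangularization step; the choice of base side and the thresholds tau1 and tau2 are assumptions:

    import numpy as np
    from scipy.spatial import Delaunay

    def filter_matches(pts_left, parallax, tau1, tau2):
        # pts_left: Nx2 feature locations in the first image;
        # parallax[i] = u_left - u_right for candidate pair i
        tri = Delaunay(pts_left)
        votes = {}
        for s in tri.simplices:
            a, b, c = pts_left[s]
            base = max(np.linalg.norm(b - a), np.linalg.norm(c - b),
                       np.linalg.norm(c - a))
            area = 0.5 * abs((b - a)[0] * (c - a)[1]
                             - (b - a)[1] * (c - a)[0])
            if (2.0 * area / base) / base >= tau1:   # height-to-base test
                continue
            for i, j in ((0, 1), (1, 2), (0, 2)):
                e = tuple(sorted((s[i], s[j])))
                close = abs(parallax[e[0]] - parallax[e[1]]) < tau2
                votes[e] = votes.get(e, 0) + (1 if close else -1)
        # keep matches whose feature points lie on a positively voted side
        return sorted({v for e, n in votes.items() if n > 0 for v in e})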
The obtaining a candidate matching feature point set between the first image and the second image includes: traversing the feature points in the first image; searching, according to the location xleft=(uleft,vleft)T of a feature point in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for the point xright that makes ∥χleft−χright∥₂² smallest; searching, according to the location xright=(uright,vright)T of the feature point in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for the point xleft′ that makes ∥χright−χleft′∥₂² smallest; and if xleft′=xleft, using (xleft,xright) as a pair of matching feature points, where χleft is the description quantity of the feature point xleft in the first image, χright is the description quantity of the feature point xright in the second image, and a and b are preset constants (a=200 and b=5 in an experiment); and using the set of all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
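A minimal sketch of this left-right consistency search, assuming the locations and description quantities of the two images are stacked as arrays (all names are hypothetical):

    import numpy as np

    def candidate_matches(pl, chi_l, pr, chi_r, a=200, b=5):
        # pl, pr: Nx2 arrays of (u, v) locations; chi_l, chi_r: NxD
        # description quantities of the first and second images
        def best(pts, descs, d, umin, umax, v):
            win = ((pts[:, 0] >= umin) & (pts[:, 0] <= umax) &
                   (np.abs(pts[:, 1] - v) <= b))
            idx = np.flatnonzero(win)
            if idx.size == 0:
                return None
            dist = np.linalg.norm(descs[idx] - d, axis=1)
            return idx[dist.argmin()]
        pairs = []
        for i, (u, v) in enumerate(pl):
            j = best(pr, chi_r, chi_l[i], u - a, u, v)      # left -> right
            if j is None:
                continue
            ur, vr = pr[j]
            k = best(pl, chi_l, chi_r[j], ur, ur + a, vr)   # right -> left
            if k == i:                                      # consistent pair
                pairs.append((i, j))
        return pairs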
Further, the first estimating module 404 is configured to estimate, according to the attribute parameter of the binocular camera and the preset model, the three-dimensional location, in the local coordinate system of the current frame, of the scene point corresponding to each pair of matching feature points, and to estimate the three-dimensional location of the scene point in the local coordinate system of the next frame according to an optimization formula (formula 2).
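The preset model itself is not reproduced in this text. Purely as an illustration of one common choice, back-projection of a pair of matching feature points under a rectified pinhole stereo model would look like the following sketch, where the focal length f, principal point (cu, cv), and baseline are assumptions standing in for the attribute parameter of the binocular camera:

    import numpy as np

    def back_project(u_left, v_left, u_right, f, cu, cv, baseline):
        d = u_left - u_right          # parallax of the feature point
        Z = f * baseline / d          # depth in the local coordinate system
        X = (u_left - cu) * Z / f
        Y = (v_left - cv) * Z / f
        return np.array([X, Y, Z])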
Preferably, the optimization formula 2 is solved using an iterative algorithm that repeatedly updates the estimate by an increment δX until convergence; Xt+1 at convergence is the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame.
A process of obtaining δX in each iteration is as follows: (1) perform first-order Taylor expansion on fleft(δX) and fright(δX) at 0; (2) take the derivative of f(δX) and set the first-order derivative to 0, so that f(δX) attains an extremum; and (3) substitute the expansion into the derivative condition to obtain a 3×3 linear system equation A·δX=b, and solve the equation to obtain δX, where A and b are formed from the first-order expansion coefficients of fleft and fright.
It should be noted that, to further accelerate convergence and improve computation speed, a graphics processing unit (GPU) is used to establish a Gaussian pyramid for the image; the foregoing formula is first solved on a low-resolution image, and optimization is then further performed on a high-resolution image. In an experiment, the pyramid layer quantity is set to 2.
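A CPU-side sketch of this coarse-to-fine strategy follows; this text builds the pyramid on a GPU, so cv2.pyrDown and the refine callback are stand-ins only:

    import cv2

    def coarse_to_fine(img, refine, x0, layers=2):
        # refine(image, x, level) is any per-layer solver for the formula
        pyramid = [img]
        for _ in range(layers - 1):
            pyramid.append(cv2.pyrDown(pyramid[-1]))
        x = x0
        for level in reversed(range(layers)):    # low resolution first
            x = refine(pyramid[level], x, level)
        return x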
Further, the second estimating module 405 is configured to:
express the three-dimensional location Xi of each scene point in the world coordinate system as a weighted combination of four control points and calculate the center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, such that Xi=αi1C1+αi2C2+αi3C3+αi4C4 and αi1+αi2+αi3+αi4=1, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system;
combine, using the invariance of center-of-mass coordinates to rigid transformation, the same center-of-mass coordinates with the control points in the local coordinate system of the next frame, where Ctj (j=1, . . . , 4) are the coordinates of the control points in the local coordinate system of the next frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and determine the motion parameter of the binocular camera on the next frame according to the correspondence between the control points Cj and Ctj.
When the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame are being solved for, direct linear transformation (DLT) is performed on the projection equations of the scene points to convert them, for each scene point, into three linear equations in the 12 variables ((Ct1)T, (Ct2)T, (Ct3)T, (Ct4)T)T.
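As a sketch of the center-of-mass machinery used here (standard for control-point formulations; the function names are illustrative):

    import numpy as np

    def center_of_mass_coords(X, C):
        # Solve X = sum_j alpha_j * C_j with sum_j alpha_j = 1;
        # C is 4x3 (four non-coplanar control points), X is a 3-vector
        A = np.vstack([C.T, np.ones((1, 4))])    # 4x4 linear system
        rhs = np.append(X, 1.0)
        return np.linalg.solve(A, rhs)           # (alpha_1, ..., alpha_4)

    def transform_point(alpha, Ct):
        # Invariance to rigid transformation: the same coordinates combine
        # the transformed control points Ct (4x3) into the new location
        return alpha @ Ct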
Further, the optimizing module 406 is configured to optimize the motion parameter of the binocular camera on the next frame according to an optimization formula over the inliers, where n′ is the quantity of inliers (interior points) obtained using the RANSAC algorithm.
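An outline of this two-stage optimization is sketched below, with hypothetical callbacks solve_minimal (a motion estimate from a minimal sample) and residuals (per-point reprojection error for a motion parameter); scipy's Levenberg-Marquardt solver stands in for the LM step:

    import numpy as np
    from scipy.optimize import least_squares

    def ransac_lm(solve_minimal, residuals, n_pts, n_iter=200, tau=2.0):
        rng = np.random.default_rng(0)
        best = np.zeros(n_pts, dtype=bool)
        best_theta = None
        for _ in range(n_iter):
            sample = rng.choice(n_pts, size=4, replace=False)
            theta = solve_minimal(sample)
            inliers = np.abs(residuals(theta)) < tau
            if inliers.sum() > best.sum():
                best, best_theta = inliers, theta
        idx = np.flatnonzero(best)               # the n' interior points
        fit = least_squares(lambda t: residuals(t)[idx],
                            best_theta, method='lm')
        return fit.x, idx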
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking apparatus 40, which obtains a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtains a matching feature point set between the first image and the second image in the image set of each frame; separately estimates a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimates a motion parameter of the binocular camera on each frame; and optimizes the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
It should be noted that, the second obtaining module 502 is configured to obtain the matching feature point set between the first image and the second image in the image set of each frame using a method the same as the method in Embodiment 1 for obtaining the matching feature point set between the first image and the second image in the image set of the current frame, and details are not described herein.
The first estimating module 503 is configured to separately estimate the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame using a method the same as step 204, and details are not described herein.
The second estimating module 504 is configured to estimate the motion parameter of the binocular camera on each frame using a method the same as the method in Embodiment 1 for calculating the motion parameter of the binocular camera on the next frame, and details are not described herein.
Further, the optimizing module 505 is configured to optimize the motion parameter of the binocular camera on each frame according to an optimization formula, where N is the quantity of scene points corresponding to the matching feature points included in the matching feature point set, M is the frame quantity, xti=(ut,lefti, vt,lefti, ut,righti)T, and π(X)=(πleft(X)[1], πleft(X)[2], πright(X)[1])T.
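The optimization formula itself is not reproduced in this text. Given the definitions of xti and π(X) above, one standard reading of such a binocular refinement objective, stated as an assumption rather than the verbatim formula, is, in LaTeX notation:

    \min_{\{R_t, T_t\}, \{X^i\}} \sum_{t=1}^{M} \sum_{i=1}^{N}
        \left\| x_t^i - \pi\!\left( R_t X^i + T_t \right) \right\|_2^2

where R_t and T_t denote the rotation and translation of the binocular camera on frame t.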
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking apparatus 50, which obtains a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtains a matching feature point set between the first image and the second image in the image set of each frame; separately estimates a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimates a motion parameter of the binocular camera on each frame; and optimizes the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
The processor 601 may be a central processing unit (CPU).
The memory 602 may be a volatile memory, such as a random access memory (RAM); a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD); or may be a combination of memories of the foregoing types, and provide an instruction and data to the processor 601.
The binocular camera 603 is configured to obtain an image set of a current frame, where the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of the binocular camera 603 at a same moment.
The image set of the current frame belongs to a video sequence shot by the binocular camera, and the video sequence is a set of image sets shot by the binocular camera in a period of time.
The processor 601 is configured to: separately extract feature points of the first image and feature points of the second image in the image set of the current frame obtained by the binocular camera 603, where a quantity of feature points of the first image is equal to a quantity of feature points of the second image; obtain, according to a rule that scene depths of adjacent regions on an image are close to each other, a matching feature point set between the first image and the second image in the image set of the current frame from the extracted feature points; separately estimate, according to an attribute parameter of the binocular camera and a preset model, a three-dimensional location of a scene point corresponding to each pair of matching feature points in the matching feature point set in a local coordinate system of the current frame and a three-dimensional location of the scene point in a local coordinate system of a next frame; estimate a motion parameter of the binocular camera on the next frame using invariance of center-of-mass coordinates to rigid transformation according to the estimated three-dimensional locations of the scene point in the local coordinate systems of the current frame and the next frame; and optimize the estimated motion parameter of the binocular camera on the next frame using a RANSAC algorithm and an LM algorithm.
The feature point generally refers to a point whose grayscale sharply changes in an image, for example, a point with the largest curvature change on an object contour, an intersection point of straight lines, or an isolated point on a monotonic background.
Further, the processor 601 is configured to separately extract the feature points of the first image and the feature points of the second image in the image set of the current frame using the SIFT algorithm. The following uses the process of extracting the feature points of the first image as an example.
A scale space of the image is constructed by convolving the image I(x, y) with Gaussian kernels G(x, y, σ) of different scales, L(x, y, σ)=G(x, y, σ)*I(x, y), and a difference-of-Gaussians space is obtained from adjacent scales, where D(x, y, σ)=(G(x, y, kσ)−G(x, y, σ))*I(x, y)=L(x, y, kσ)−L(x, y, σ). All points in the scale space of the image are traversed, and the value of each point is compared with the values of the points in its neighborhood. If a point has a value greater than (or less than) the values of all the points in its neighborhood, the point is a candidate feature point.
Preferably, edge response points and feature points with low contrast and poor stability are removed from the candidate feature points, and the remaining feature points are used as the feature points of the first image.
Preferably, a scale factor m and a main rotation direction θ are specified for each feature point using the gradient direction distribution characteristic of the pixels in the feature point neighborhood, so that the operator has scale and rotation invariance, where m(x, y)=√((L(x+1, y)−L(x−1, y))²+(L(x, y+1)−L(x, y−1))²) and θ(x, y)=arctan((L(x, y+1)−L(x, y−1))/(L(x+1, y)−L(x−1, y))).
Preferably, the coordinate axes of the planar coordinate system are rotated to the main direction of the feature point, a square image region that has a side length of 20s and is aligned with θ is sampled centered on the feature point x, the region is evenly divided into a 4×4 grid of 16 sub-regions, and four components Σdx, Σ|dx|, Σdy, and Σ|dy| are calculated for each sub-region. The feature point x then corresponds to a description quantity χ of 16×4=64 dimensions, where dx and dy respectively represent the Haar wavelet responses (with a filter width of 2s) in the x and y directions, and s is the scale of the feature point.
Further, the processor 601 is configured to:
(1) Obtain a candidate matching feature point set between the first image and the second image.
(2) Perform Delaunay triangularization on feature points in the first image that correspond to the candidate matching feature point set.
For example, if there are 100 pairs of matching feature points (xleft,1,xright,1) to (xleft,100,xright,100) in the candidate matching feature point set, any three feature points among the 100 feature points xleft,1 to xleft,100 in the first image corresponding to the candidate matching feature point set are connected into a triangle, where the connecting lines cannot cross in the connecting process, to form a mesh including multiple triangles.
(3) Traverse sides of each triangle with a ratio of a height to a base side less than a first preset threshold; and if a parallax difference |d(x1)−d(x2)| of two feature points (x1,x2) connected by a first side is less than a second preset threshold, add one vote for the first side; otherwise, subtract one vote, where a parallax of the feature point x is: d(x)=uleft−uright, where uleft is a horizontal coordinate, of the feature point x, in a planar coordinate system of the first image, and uright is a horizontal coordinate, of a feature point that is in the second image and matches the feature point x, in a planar coordinate system of the second image.
The first preset threshold is set according to experimental experience, which is not limited in this embodiment. If the ratio of a height to a base side of a triangle is less than the first preset threshold, the depth variation of the scene points corresponding to the vertices of the triangle is small, and the vertices may meet the rule that scene depths of adjacent regions on an image are close to each other. If the ratio is greater than or equal to the first preset threshold, the depth variation of the scene points corresponding to the vertices is relatively large, the vertices may not meet the rule, and matching feature points cannot be selected according to the rule.
Likewise, the second preset threshold is also set according to experimental experience, which is not limited in this embodiment. If the parallax difference between two feature points is less than the second preset threshold, the scene depths of the two feature points are similar. If the parallax difference is greater than or equal to the second preset threshold, the scene depth variation between the two feature points is relatively large, and the pair is probably a mismatch.
(4) Count a vote quantity corresponding to each side, and use a set of matching feature points corresponding to feature points connected by a side with a positive vote quantity as the matching feature point set between the first image and the second image.
For example, if the feature points connected by all sides with a positive vote quantity are xleft,20 to xleft,80, the set of matching feature points (xleft,20,xright,20) to (xleft,80,xright,80) is used as the matching feature point set between the first image and the second image.
The obtaining a candidate matching feature point set between the first image and the second image includes: traversing the feature points in the first image; searching, according to the location xleft=(uleft,vleft)T of a feature point in the first image in the two-dimensional planar coordinate system, a region of the second image of u∈[uleft−a,uleft] and v∈[vleft−b,vleft+b] for the point xright that makes ∥χleft−χright∥₂² smallest; searching, according to the location xright=(uright,vright)T of the feature point in the second image in the two-dimensional planar coordinate system, a region of the first image of u∈[uright,uright+a] and v∈[vright−b,vright+b] for the point xleft′ that makes ∥χright−χleft′∥₂² smallest; and if xleft′=xleft, using (xleft,xright) as a pair of matching feature points, where χleft is the description quantity of the feature point xleft in the first image, χright is the description quantity of the feature point xright in the second image, and a and b are preset constants (a=200 and b=5 in an experiment); and using the set of all matching feature points that satisfy xleft′=xleft as the candidate matching feature point set between the first image and the second image.
Further, the processor 601 is configured to estimate, according to the attribute parameter of the binocular camera and the preset model, the three-dimensional location, in the local coordinate system of the current frame, of the scene point corresponding to each pair of matching feature points, and to estimate the three-dimensional location of the scene point in the local coordinate system of the next frame according to an optimization formula (formula 2).
Preferably, the optimization formula 2 is solved using an iterative algorithm that repeatedly updates the estimate by an increment δX until convergence; Xt+1 at convergence is the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame.
A process of obtaining δX in each iteration by solving the foregoing formula is as follows:
(1) Perform first-order Taylor expansion on fleft(δX) and fright(δX) at 0 to obtain formula 3.
(2) Take the derivative of f(δX) and set the first-order derivative to 0, so that f(δX) attains an extremum; the resulting condition is formula 4.
(3) Substitute formula 3 into formula 4 to obtain a 3×3 linear system equation A·δX=b, and solve the equation A·δX=b to obtain δX, where A and b are formed from the first-order expansion coefficients of fleft and fright.
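A sketch of forming and solving this 3×3 system for one scene point follows, assuming the stacked Jacobians and residuals of fleft and fright are available (formulas 3 and 4 themselves are not reproduced in this text):

    import numpy as np

    def gauss_newton_step(J_left, r_left, J_right, r_right):
        # J_*: image Jacobians (rows x 3); r_*: matching residual vectors
        J = np.vstack([J_left, J_right])
        r = np.concatenate([r_left, r_right])
        A = J.T @ J                        # 3x3 system matrix
        b = -J.T @ r
        return np.linalg.solve(A, b)       # the increment deltaX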
It should be noted that, to further accelerate convergence and improve computation speed, a graphics processing unit (GPU) is used to establish a Gaussian pyramid for the image; the foregoing formula is first solved on a low-resolution image, and optimization is then further performed on a high-resolution image. In an experiment, the pyramid layer quantity is set to 2.
Further, the processor 601 is configured to:
express the three-dimensional location Xi of each scene point in the world coordinate system as a weighted combination of four control points and calculate the center-of-mass coordinates (αi1, αi2, αi3, αi4)T of Xi, such that Xi=αi1C1+αi2C2+αi3C3+αi4C4 and αi1+αi2+αi3+αi4=1, where Cj (j=1, . . . , 4) are any four non-coplanar control points in the world coordinate system;
combine, using the invariance of center-of-mass coordinates to rigid transformation, the same center-of-mass coordinates with the control points in the local coordinate system of the next frame, where Ctj (j=1, . . . , 4) are the coordinates of the control points in the local coordinate system of the next frame, to obtain the three-dimensional location of the scene point corresponding to the matching feature points in the local coordinate system of the next frame; and determine the motion parameter of the binocular camera on the next frame according to the correspondence between the control points Cj and Ctj.
When the coordinates Ctj (j=1, . . . , 4) of the control points in the local coordinate system of the next frame are being solved for, direct linear transformation (DLT) is performed on the projection equations of the scene points to convert them, for each scene point, into three linear equations in the 12 variables ((Ct1)T, (Ct2)T, (Ct3)T, (Ct4)T)T.
Further, the processor 601 is configured to optimize the motion parameter of the binocular camera on the next frame according to an optimization formula over the inliers, where n′ is the quantity of inliers (interior points) obtained using the RANSAC algorithm.
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking apparatus 60, which obtains a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtains a matching feature point set between the first image and the second image in the image set of each frame; separately estimates a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimates a motion parameter of the binocular camera on each frame; and optimizes the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
The processor 701 may be a CPU.
The memory 702 may be a volatile memory, such as a RAM; a non-volatile memory, such as a ROM, a flash memory, an HDD, or an SSD; or may be a combination of memories of the foregoing types, and provides instructions and data to the processor 701.
The binocular camera 703 is configured to obtain a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of the binocular camera at a same moment.
The processor 701 is configured to separately obtain a matching feature point set between the first image and the second image in the image set of each frame; separately estimate a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimate a motion parameter of the binocular camera on each frame; and optimize the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame.
It should be noted that, the processor 701 is configured to obtain the matching feature point set between the first image and the second image in the image set of each frame using a method the same as the method in Embodiment 1 for obtaining the matching feature point set between the first image and the second image in the image set of the current frame, and details are not described herein.
The processor 701 is configured to separately estimate the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame using a method the same as step 204, and details are not described herein.
The processor 701 is configured to estimate the motion parameter of the binocular camera on each frame using a method the same as the method in Embodiment 1 for calculating the motion parameter of the binocular camera on the next frame, and details are not described herein.
Further, the processor 701 is configured to optimize the motion parameter of the binocular camera on each frame according to an optimization formula, where N is the quantity of scene points corresponding to the matching feature points included in the matching feature point set, M is the frame quantity, xti=(ut,lefti, vt,lefti, ut,righti)T, and π(X)=(πleft(X)[1], πleft(X)[2], πright(X)[1])T.
It can be learned from the foregoing that, this embodiment of the present disclosure provides a camera tracking apparatus 70, which obtains a video sequence, where the video sequence includes an image set of at least two frames, the image set includes a first image and a second image, and the first image and the second image are respectively images shot by a first camera and a second camera of a binocular camera at a same moment; separately obtains a matching feature point set between the first image and the second image in the image set of each frame; separately estimates a three-dimensional location of a scene point corresponding to each pair of matching feature points in a local coordinate system of each frame; separately estimates a motion parameter of the binocular camera on each frame; and optimizes the motion parameter of the binocular camera on each frame according to the three-dimensional location of the scene point corresponding to each pair of matching feature points in the local coordinate system of each frame and the motion parameter of the binocular camera on each frame. In this way, camera tracking is performed using a binocular video image, which improves tracking precision, and avoids a disadvantage in the prior art that tracking precision of camera tracking based on a monocular video sequence is relatively low.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one location, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of hardware in addition to a software functional unit.
When the foregoing integrated unit is implemented in a form of a software functional unit, the integrated unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform some of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes any medium that can store program code, such as a universal serial bus (USB) flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended for describing the technical solutions of the present disclosure but not for limiting the present disclosure. Although the present disclosure is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof, without departing from the spirit and scope of the technical solutions of the embodiments of the present disclosure.
This application is a continuation of International Application No. PCT/CN2014/089389, filed on Oct. 24, 2014, which claims priority to Chinese Patent Application No. 201410096332.4, filed on Mar. 14, 2014, both of which are hereby incorporated by reference in their entireties.