The present invention relates to a measuring apparatus that captures, with an image capturing device, structured light emitted onto a target object by a pattern projector and estimates a 3D shape from the captured image by the principle of triangulation, and more particularly to a 3D shape measuring apparatus that does not require the positional relationship between the pattern projector and the image capturing device to be measured in advance.
3D acquisition stereo systems can be categorized into two basic types: passive stereo systems and active stereo systems. The former recover 3D shapes from multiple images alone. No special devices are necessary, so such systems are usually easy to use and convenient. However, when recovering 3D shapes from images by passive stereo, it is generally difficult to eliminate errors in searching for correspondences between the images. Furthermore, candidates for correspondence points are usually limited to feature points on the images. Thus, interpolation is necessary to recover dense 3D points, and the accuracy of the data away from the feature points may be unreliable.
Active stereo systems, on the other hand, utilize light projectors or laser projectors for scanning, and thus can measure 3D shapes with high precision without the need to solve the correspondence problem. In addition, dense 3D points can easily be captured in those systems by scanning lasers or by using structured light methods. Therefore, in many cases, active 3D measurement systems are adopted for scanning objects with complicated shapes.
One of the disadvantages of active stereo systems is that they usually require special devices. For example, there are high-precision and efficient active 3D scanners equipped with servo actuators for controlling a laser projecting device. However, such equipment usually becomes complex and expensive because of the need for accurate control of motors and the like. Structured light projection systems using special light projectors are also utilized, and these projectors are usually expensive.
Recently, low-cost video projectors for computers have become commonly available, and it is possible to construct a practical scanning system easily with active vision techniques using those devices. Among such systems, the structured light projection method (coded structured light method) is widely used because of several advantages. It can retrieve dense 3D points in a short period of time because of its relatively short scanning time, and a commercially available video projector can be used, so no special devices such as servo motors are needed for the scanning process. Documents 1 to 5 describe examples of research on structured light projection methods.
Another disadvantage of active stereo systems is that calibration between the camera and the projector is required each time the conditions of the system are changed. Especially for a system based on a structured light projection method, in which the light projector and the image sensor are separated, precalibration is required each time the system is moved, and this significantly compromises the convenience of the system. If the extrinsic calibration process could be eliminated from an active stereo system, the system could be simplified and the biggest problem of active 3D scanners would be solved; thus, the system would be more practical.
Much research has been conducted on applying self-calibration methods for passive stereo to active stereo systems by substituting a projector for one camera of a stereo camera pair, e.g., the system described in Document 6. However, these methods perform 3D shape reconstruction only in projective space and are not practical, because Euclidean reconstruction in those systems requires many impractical assumptions, such as an affine camera model or a plane in the scene that must be captured by the camera.
Document 1: J. Batlle, E. Mouaddib and J. Salvi, “Recent progress in coded structured light as a technique to solve the correspondence problem: a survey”, Pattern Recognition, 31, 7, pp. 963-982, 1998
Document 2: D. Caspi, N. Kiryati and J. Shamir, “Range imaging with adaptive color structured light”, IEEE Trans. on Patt. Anal. Machine Intell., 20, 5, pp. 470-480, 1998
Document 3: K. L. Boyer and A. C. Kak, “Color-encoded structured light for rapid active ranging”, IEEE Trans. on Patt. Anal. Machine Intell., 9, 1, pp. 14-28, 1987
Document 4: S. Inokuchi, K. Sato and F. Matsuda, “Range imaging system for 3-D object recognition”, ICPR, pp. 806-808, 1984
Document 5: O. Hall-Holt and S. Rusinkiewicz, “Stripe boundary codes for real-time structured-light range scanning of moving objects”, Int. Conf. Computer Vision, Vol. 2, pp. 359-366, 2001
Document 6: D. Fofi, J. Salvi and E. M. Mouaddib, “Uncalibrated vision based on structured light”, ICRA, pp. 3548-3553, 2001
In this invention, a 3D reconstruction method using an uncalibrated active stereo vision system is proposed. The method is equivalent to a self-calibration technique for a passive stereo pair with one of the cameras replaced by a projector.
The main difference between a passive and an active stereo system is that the proposed system can capture dense 3D points, whereas a passive stereo system captures only a sparse data set. Therefore, the proposed method can achieve robust self-calibration using dense correspondence points, as well as dense 3D reconstruction.
In addition, a simultaneous 3D reconstruction method is proposed, which achieves more robust and precise 3D reconstruction using multiple scans measured repeatedly to increase accuracy. Also, a method using a simple device is provided to eliminate the ambiguity in the scaling of the 3D data, which usually cannot be resolved by uncalibrated 3D reconstruction systems.
An embodiment of the present invention will be explained below referring to the drawings.
First, the configuration of the 3D measuring system of the preferred embodiment will be described referring to the drawings.
In the preferred embodiment, the 3D positions of points on the surface of the target object are obtained from a set of images captured by the above method. The method will be briefly described referring to the drawings.
The methods to acquire the correspondence points between the projector and the camera in the invention are now described. In the invention, a large number of correspondence points are required to acquire a dense shape. In active stereo systems, 3D positions are usually acquired by triangulation using structured light projection. To acquire a large number of correspondence points efficiently, structured light projection methods have been proposed (see Documents 1 to 5). In those methods, the position of the projected light can be identified by projecting coded patterns onto the target object and then decoding the patterns in the captured image. Many correspondence points can be acquired by applying this technique. Therefore, in this embodiment, the correspondences between the camera and the projector are also acquired using a structured light projection method. For example, the gray code pattern method proposed by Inokuchi (Document 4) can be used.
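To make the gray code approach concrete, the following is a minimal sketch in Python (with NumPy) of how the stripe patterns might be generated and decoded; the capture and binarization of the camera images are assumed to be handled elsewhere, and all names are illustrative rather than part of the invention.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate n_bits stripe images that encode every projector column
    with a binary-reflected Gray code (cf. Document 4).
    Returns an array of shape (n_bits, width) with values 0/1."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                      # binary -> Gray code
    shifts = np.arange(n_bits - 1, -1, -1)
    return ((gray[None, :] >> shifts[:, None]) & 1).astype(np.uint8)

def decode_gray_images(bit_images):
    """Recover the projector column index at every camera pixel from the
    binarized captured images, one 0/1 image per projected pattern.
    bit_images: array of shape (n_bits, H, W)."""
    n_bits = bit_images.shape[0]
    weights = 1 << np.arange(n_bits - 1, -1, -1)
    gray = np.tensordot(weights, bit_images.astype(np.int64), axes=1)
    binary = gray.copy()                           # Gray code -> binary
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift
        shift <<= 1
    return binary                                  # column index per pixel
```

Projecting the same patterns rotated by 90 degrees encodes the projector rows as well, so each camera pixel receives the full 2D projector coordinate needed as a correspondence point.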
Next, the parameters estimated by the embodiment will be described. In the preferred embodiment, an active stereo system is constructed in which recalibration is not required even if the camera and the projector are arbitrarily moved. Accordingly, it is necessary to carry out self-calibration of the camera parameters, in particular the extrinsic parameters, from the measured data. In the preferred embodiment, the intrinsic and extrinsic parameters of the camera and the projector can be estimated. Some of the intrinsic parameters may be known.
An embodiment in which the intrinsic parameters of the camera are known and, among the intrinsic parameters of the projector, the focal length is unknown will be explained. This is an important case in practical use because, although the intrinsic parameters of the camera can be obtained relatively simply and accurately by many existing methods, methods for obtaining the intrinsic parameters of the projector are not so common, and the focal length of the projector is changed more often than that of the camera during measurement.
Methods of self-calibration and 3D reconstruction will be disclosed for the above embodiment.
In this embodiment, self-calibration of the respective parameters is carried out by non-linear optimization such as the Newton method. Recently, the computational cost of non-linear optimization has ceased to be a problem owing to the improved computing capacity of computers, and some studies reconstruct 3D shapes by non-linear optimization from the outset. This approach can also be used in the preferred embodiment. First, a method of carrying out self-calibration of the positional relationship between the camera and the projector will be briefly explained referring to the drawings.
When the projection model of the camera and the projector is applied (10-2) to the set (10-1) of correspondence relations between the camera and the projector, a set (10-3) of directions from the camera and the projector to the correspondence points is obtained. These correspondence points, the camera, and the projector satisfy the epipolar constraint. Here, an error function, which is minimized when the respective correspondence points satisfy the epipolar constraint, is defined taking as inputs the parameters of the positional relationship between the camera and the projector and the focal length of the projector. The positional relationship between the camera and the projector is estimated by minimizing this function with respect to the parameters of the positional relationship. Specifically, an update value (10-6) that reduces the error function with respect to the estimated values (10-5) of the positional relationship and the intrinsic parameters is determined (10-4) by solving a linear equation. The estimated values of the parameters are updated iteratively until the update value becomes small (10-7).
In the following, a method to calculate the values for updating the parameters of the relative positions from the set of correspondence points and the current estimates of those parameters is described. The camera coordinate system is defined as a coordinate system fixed to the camera, and coordinate values expressed in the camera coordinate system are the camera coordinates. The projector coordinate system and the projector coordinates are defined in the same way. The origin of the camera (projector) coordinate system is the optical center of the camera (projector). The forward direction of the camera (projector) is the negative direction of the z-axis of the camera (projector) coordinate system. The x- and y-axes of the camera coordinate system are parallel with the horizontal and vertical directions, respectively, of the image coordinate system of the screen. The axes of the projector coordinate system are defined in the same way.
As already described, in this embodiment the intrinsic parameters of the camera are assumed to be known, whereas the focal length of the projector is assumed to be unknown. Let the focal length of the projector be ƒp, and let the direction vector of the i-th correspondence point expressed in the projector coordinates be
(upi, vpi, −ƒp)^t.   (1)
Here, the rigid transformation from the projector coordinates to the camera coordinates is expressed by the rotation matrix Rp and the translation vector tp. The rotation is parametrized by the Euler angles αp, βp and γp, and the rotation matrix is thus written Rp(αp, βp, γp). Since the norm |tp| of the translation vector cannot be recovered from the correspondence points alone, tp is assumed to be a unit vector and is expressed by the two polar-coordinate parameters (ρp, φp). Thus, tp expressed in polar coordinates is written tp(ρp, φp).
The coordinates of the i-th correspondence point observed by the camera are converted to the screen coordinates of a normalized camera after correcting for the effect of lens distortion. Let the coordinates be expressed as
(uci, vci, −1)^t.   (2)
If the epipolar constraint is met, the lines of sight from the camera and the projector to the correspondence point intersect in 3D space. The line of sight from the projector to the correspondence point is
r Rp(αp, βp, γp) (upi/ƒp, vpi/ƒp, −1)^t + tp(ρp, φp)   (3)
in the camera coordinates, where r is a parameter. The line from the camera to the correspondence point is expressed as
s (uci, vci, −1)^t   (4)
where s is a parameter.
The parameters that conform to the epipolar constraint are estimated by minimizing the 3D distance between the above two lines (3) and (4). Let the direction vectors of the two lines be expressed as
pci := N(uci, vci, −1)^t,   qci(θ, ƒp) := N{Rp(αp, βp, γp) (upi/ƒp, vpi/ƒp, −1)^t}   (5)
where N is an operator that normalizes a vector (i.e., Nx := x/|x|), and θ := (αp, βp, γp) represents the rotational part of the extrinsic parameters of the projector. Then, the signed distance between the two lines is
Ei(θ, τ, ƒp) := tp(τ) · N(pci × qci(θ, ƒp))   (6)
where “·” indicates a dot product, and τ:=(ρp, φp) represents the parameters of the translation.
Ei(θ, τ, ƒp) includes systematic errors whose variances change with the parameters (θ, τ, ƒp) and with the index i of the correspondence point. To achieve an unbiased evaluation of the parameters (θ, τ, ƒp), Ei(θ, τ, ƒp) should be normalized with respect to the expected variance of the errors. Assuming that the epipolar constraint is met, the distances from the camera and the projector to the reference point are represented by
Dci(θ, τ, ƒp) := |tp(τ) × qci(θ, ƒp)| / |pci × qci(θ, ƒp)|,
Dpi(θ, τ, ƒp) := |tp(τ) × pci| / |pci × qci(θ, ƒp)|   (7)
Using these distances, the distance Ẽi(θ, τ, ƒp) between the two lines, normalized with respect to the errors, is
wi(θ, τ, ƒp) := {εc Dci(θ, τ, ƒp) + εp Dpi(θ, τ, ƒp)/ƒp}^(−1),
Ẽi(θ, τ, ƒp) := wi(θ, τ, ƒp) Ei(θ, τ, ƒp)   (8)
where εc and εp are the errors intrinsic to the camera and the projector expressed as lengths on the normalized screens. In this embodiment, for example, the pixel sizes measured in the image planes of the normalized cameras can be used as these values. If the probability distributions of the errors of the camera and the projector are known in advance, the variances of those distributions can be used instead. If the error levels of the camera and the projector do not differ greatly, εc and εp can be set equal. Then, the function ƒ(θ, τ, ƒp) to be minimized by the non-linear optimization is expressed in the following form:

ƒ(θ, τ, ƒp) := Σi=1…K {Ẽi(θ, τ, ƒp)}²   (9)

where K is the number of the reference points.
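As a concrete illustration of equations (5) to (9), the following Python sketch evaluates the normalized errors Ẽi and the objective ƒ for a given parameter vector; the Euler-angle and polar-coordinate conventions are assumptions chosen for the sketch, and εc, εp are taken as given constants.

```python
import numpy as np

def rotation(alpha, beta, gamma):
    """R_p from the Euler angles; the x-y-z convention here is an assumption."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def translation(rho, phi):
    """Unit vector t_p(rho, phi) from the two polar-coordinate parameters."""
    return np.array([np.sin(rho) * np.cos(phi),
                     np.sin(rho) * np.sin(phi),
                     np.cos(rho)])

def normalized_errors(params, up, vp, uc, vc, eps_c=1e-3, eps_p=1e-3):
    """Vector of normalized signed distances E~_i of equation (8)."""
    alpha, beta, gamma, rho, phi, fp = params
    R, t = rotation(alpha, beta, gamma), translation(rho, phi)
    p = np.stack([uc, vc, -np.ones_like(uc)], axis=1)
    p /= np.linalg.norm(p, axis=1, keepdims=True)              # p_ci, eq. (5)
    q = (R @ np.stack([up / fp, vp / fp, -np.ones_like(up)], axis=1).T).T
    q /= np.linalg.norm(q, axis=1, keepdims=True)              # q_ci, eq. (5)
    n = np.cross(p, q)
    n_norm = np.linalg.norm(n, axis=1)
    E = (n / n_norm[:, None]) @ t                              # eq. (6)
    Dc = np.linalg.norm(np.cross(t, q), axis=1) / n_norm       # eq. (7)
    Dp = np.linalg.norm(np.cross(t, p), axis=1) / n_norm
    w = 1.0 / (eps_c * Dc + eps_p * Dp / fp)                   # eq. (8)
    return w * E

def objective(params, *data):
    """f(theta, tau, f_p): sum of squared normalized errors, equation (9)."""
    return np.sum(normalized_errors(params, *data) ** 2)
```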
The above function can be minimized using the Newton method as follows. Assume that the solution of the minimization at the j-th iteration is (θj, τj, ƒp,j). Then (θj, τj, ƒp,j) is updated by the solutions Δα, …, Δƒ of the equations

(∂Ẽi/∂αp)Δα + (∂Ẽi/∂βp)Δβ + (∂Ẽi/∂γp)Δγ + (∂Ẽi/∂ρp)Δρ + (∂Ẽi/∂φp)Δφ + (∂Ẽi/∂ƒp)Δƒ = −Ẽi(θj, τj, ƒp,j),   i = 1, …, N   (10)

where the partial derivatives are evaluated at θ = θj, τ = τj, ƒp = ƒp,j. Since N > 6 (i.e., the number of equations is larger than the number of variables), these equations generally have no exact solution. However, the approximate solutions Δα, …, Δƒ that minimize the sum of the squares of the differences between both sides of the equations can be calculated using a pseudo-inverse matrix. The target function is minimized by updating the estimated values iteratively with

θj+1 = θj + (Δα, Δβ, Δγ),   τj+1 = τj + (Δρ, Δφ),   ƒp,j+1 = ƒp,j + Δƒ.   (11)
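The update of equations (10) and (11) can be sketched as follows, reusing normalized_errors() from the previous sketch; a numerical Jacobian stands in for the analytic derivatives, and np.linalg.lstsq plays the role of the pseudo-inverse.

```python
import numpy as np

def calibrate(params, data, n_iter=100, tol=1e-9, h=1e-7):
    """Iterate the updates (11): linearize E~_i around the current estimate,
    solve the overdetermined equations (10) in the least-squares sense,
    and stop when the update becomes small."""
    params = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        r = normalized_errors(params, *data)
        J = np.empty((r.size, params.size))
        for k in range(params.size):           # one column per parameter
            d = np.zeros_like(params)
            d[k] = h
            J[:, k] = (normalized_errors(params + d, *data) - r) / h
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)  # pseudo-inverse step
        params = params + delta
        if np.linalg.norm(delta) < tol:
            break
    return params
```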
Since qci depends on αp and tp does not, the derivative of Ei with respect to αp is calculated as

∂Ei/∂αp = tp(τ) · {(pci × ∂qci/∂αp) ⊥ (pci × qci)} / |pci × qci|   (12)

where x ⊥ y represents the component of the vector x that is perpendicular to the vector y. The derivatives with respect to βp and γp can be calculated similarly. Since tp depends on ρp and qci does not, the derivative with respect to ρp is calculated as

∂Ei/∂ρp = (∂tp/∂ρp)(τ) · N(pci × qci)   (13)

The derivative with respect to φp can be calculated similarly.
Let M be the N×6 coefficient matrix of the simultaneous linear equations (10). The stability of the solutions of these simultaneous equations can be evaluated from the ratio λmin/λmax of the minimum singular value λmin of M to the maximum singular value λmax, which can be considered an estimate of the “strength of the linear independence” of the 6 column vectors of M.
Equation (13) shows that, when the normal vectors N(pci × qci) of the epipolar planes have a small variance, the linear independence of the column vectors of M is weak, and the stability of the solution therefore deteriorates. This corresponds to a case in which the vertical angle of view of the scene from the camera or the projector is small while the projector is placed beside the camera.
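This stability measure is simple to compute; the sketch below takes the coefficient matrix M (the Jacobian J built in the earlier sketch) and returns the singular-value ratio.

```python
import numpy as np

def stability_ratio(M):
    """lambda_min / lambda_max of the N x 6 coefficient matrix of (10);
    a value near zero warns that the 6 columns are nearly dependent and
    the estimated parameters will be unstable."""
    s = np.linalg.svd(M, compute_uv=False)
    return s.min() / s.max()
```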
In the reconstruction achieved by minimizing the evaluation function of equation (9), when particularly large errors in (upi, vpi, −ƒp)^t and (uci, vci, −1)^t are included (i.e., outliers exist), the reliability of the estimated parameters may deteriorate. As a reconstruction method with high robustness to a small number of outliers, it is effective to use only the correspondence points having relatively small values of the error Ẽi(θj, τj, ƒp,j) to determine the update values Δα, …, Δƒ.
In the above robust reconstruction method, when the parameters (θj, τj, ƒp,j) are converged from the beginning using only the correspondence points whose errors rank within the smallest 90%, the stability of convergence may deteriorate. As a countermeasure, the reconstruction may apply the above robust method only after the values (θj, τj, ƒp,j) have first been updated to convergence using all the correspondence points.
Further, when the rate of outliers is high, robustness can be enhanced by reducing the fraction of correspondence points used. In this case, the stability of convergence may be improved by using all the correspondence points at the beginning and thereafter gradually reducing the fraction of points used to 90%, 80%, and so on.
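A sketch of this gradually trimmed optimization, built on the calibrate() and normalized_errors() routines above (the fraction schedule is an example from the text, not a fixed prescription):

```python
import numpy as np

def robust_calibrate(params, data, rates=(1.0, 0.9, 0.8)):
    """First converge on all correspondence points, then re-optimize while
    keeping only the fraction of points with the smallest errors |E~_i|."""
    up, vp, uc, vc = data
    for rate in rates:
        err = np.abs(normalized_errors(params, up, vp, uc, vc))
        keep = np.argsort(err)[: int(rate * err.size)]   # discard outliers
        params = calibrate(params, (up[keep], vp[keep], uc[keep], vc[keep]))
    return params
```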
When the focal length ƒp of the projector is included in the parameters to be estimated, the stability of convergence of the optimization may deteriorate. In this case, convergence may be obtained stably by first fixing ƒp and optimizing only the extrinsic parameters and then, after convergence is obtained, optimizing with ƒp included in the variables.
Further, when the focal length ƒp of the projector has been accurately determined in advance, the accuracy of the final solution can be enhanced by keeping ƒp fixed throughout and excluding it from the optimization.
When information such as whether the projector is located on the right side or the left side of the camera is available as an assumption, that information is preferably used to set the initial values of the extrinsic parameters (for example, the case in which the projector is located on the left side of the camera is shown in the drawings).
A 3D shape can then be restored directly by applying the estimated parameters tp and Rp to the stereo method.
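One way to sketch this final step: with Rp and tp estimated, each 3D point can be taken as the midpoint of the shortest segment between the camera ray s·pci and the projector ray r·qci + tp (all in camera coordinates); the midpoint choice is an assumption of the sketch.

```python
import numpy as np

def triangulate(p, q, t):
    """Midpoint triangulation.  p: camera ray directions p_ci (K x 3),
    q: projector ray directions q_ci already rotated by R_p (K x 3),
    t: estimated translation t_p.  Returns K points in camera coordinates."""
    points = []
    for pi, qi in zip(p, q):
        A = np.stack([pi, -qi], axis=1)          # solve s*p - r*q = t
        (s, r), *_ = np.linalg.lstsq(A, t, rcond=None)
        points.append(0.5 * ((s * pi) + (r * qi + t)))  # segment midpoint
    return np.array(points)
```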
Since the ambiguity of scaling cannot be resolved by self-calibrated stereo methods, several problems occur in practical use. One of them is that, when an object is scanned from various view directions to capture its entire shape, the scaling parameters differ for each view direction. In this invention, two solutions are proposed for this problem: the first keeps the scaling factors consistent over multiple scans by estimating the camera parameters simultaneously from those scans, and the second determines the real scale by using a laser pointer attached to the projector.
First, the solution that keeps the scaling consistent over multiple scans by the simultaneous 3D reconstruction method is explained.
In this invention, “simultaneous 3D reconstruction” means carrying out camera parameter estimation and 3D reconstruction simultaneously using the inputs of multiple views. The outline of the simultaneous 3D reconstruction is described below, referring to the drawings.
An advantage of simultaneous reconstruction using data obtained from multiple scans is that the scaling factors of all the reconstructed scenes become consistent. This is important especially for registering multiple scans of an object and integrating them into a single shape.
Another advantage is that this method can be expected to improve the stability of the solution. Generally, one of the reasons for instability in uncalibrated 3D reconstruction is that the change in the observed values (qci in the case above) caused by a change of the focal length appears similar to the change caused by a change of the distance between the projector and the scene. Using multiple depth values of corresponding regions of multiple scenes, the differences between those changes increase and become easier to observe, so the solution can be expected to become more stable. In particular, when the target object is nearly planar, measuring it with only a single scan leads to an unstable solution; however, the solution is effectively stabilized by scanning the object multiple times while moving it and processing the scans by simultaneous reconstruction.
Next, the method to determine the real scaling parameter of the reconstructed shape by attaching a laser pointer to the projector is explained.
The scaling factor of the 3D shape reconstructed by this method differs from the scale of the real shape. The simultaneous reconstruction described in the previous subsection is useful for obtaining multiple reconstructions with a constant scaling, but the “real” scaling factor of the scene cannot be determined with that method. To obtain it, the following approaches can be applied:
(1) actually measuring the distance between two specific points on the target object,
(2) measuring an object with a known shape and the target object successively without moving the camera or the projector, or
(3) measuring an object with a known shape and the target object simultaneously.
However, all of these techniques normally require some human intervention in measuring the scene, which makes it difficult to develop a completely automatic measuring process.
In this embodiment, a method is proposed for estimating the scale factor by attaching a laser pointer to the projector and observing the reflection of the laser beam while measuring the target object. The laser pointer projects a beam along a line fixed in the projector coordinate system. The parameters of the line of the beam (4-5) should be estimated in advance. To achieve this, multiple points on the laser line are obtained by measuring an object with a known shape lit by the laser pointer. Then, the multiple points are represented in the projector coordinates and fitted to a line to obtain its parameters. This process is required only once, when the laser pointer is attached to the projector.
To determine the scaling factor for a target object (4-3), the laser beam is projected while the scene is measured, and the point lit by the laser is detected (4-4) by the camera. Let the 3D location (4-3) estimated up to scale be (xpm, ypm, zpm)^t in the projector coordinates. The real 3D coordinates of the point can be expressed as λ(xpm, ypm, zpm)^t, where λ is the scaling multiplier. Regarding λ as a parameter, the points of this form define the line that passes through both the origin of the projector coordinate system and the point (xpm, ypm, zpm)^t. The point marked by the pointer lies on the calibrated light beam of the laser pointer; thus, by taking the intersection of the line λ(xpm, ypm, zpm)^t with the calibrated beam, the unknown scaling parameter λ (4-7) can be determined.
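A minimal sketch of this scale recovery: the unknown λ is found from the closest approach of the line λ·(xpm, ypm, zpm)^t to the calibrated laser line a + μ·b (both in projector coordinates); the line parametrization (a, b) is an assumption of the sketch.

```python
import numpy as np

def scale_from_laser(x_m, a, b):
    """x_m: up-to-scale position of the laser-lit point (projector coords).
    a, b: point and direction of the calibrated laser beam line.
    Solves lambda * x_m - mu * b ~= a in the least-squares sense, i.e. the
    closest approach of the two lines, and returns lambda."""
    A = np.stack([x_m, -b], axis=1)
    (lam, _mu), *_ = np.linalg.lstsq(A, a, rcond=None)
    return lam
```

Multiplying the reconstructed point cloud by the recovered λ then brings the whole shape to the real scale.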
Next, a method will be described of carrying out 3D measurements of a still target object a plurality of times, while fixing one of the projector and the camera and moving the other, and representing the 3D shapes obtained by the respective measurements in the same coordinate system (see the drawings).
In the 3D measurement method of the embodiment, the relative position between the projector and the camera need not be measured in advance, and it is possible to conduct a measurement while moving the devices. Here, the measurement is carried out a plurality of times while fixing one of the projector and the camera (18-1) and moving the other (18-2, 18-3, 18-4, 18-5), and a 3D shape is obtained for each measurement. All of these 3D shapes are represented in the coordinate system of the fixed device. Referring to the explanatory figure, measurements are carried out between (18-1) and (18-2) and between (18-1) and (18-3), and all the shapes obtained by these measurements are represented in the coordinate system of the device (18-1). Thus, all the results of the plurality of measurements are represented in the same coordinate system. Although these results do not share a common scaling factor, the scalings can be made consistent with each other by taking one or more points from the common region of the measurement results and comparing the depths to those points in the coordinate system of the device (18-1), as in the sketch below.
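A sketch of the depth comparison, assuming the depths of the shared points have already been extracted from the common region of two measurement results; the least-squares ratio is one possible choice when several common points are used.

```python
import numpy as np

def relative_scale(depths_fixed, depths_other):
    """Least-squares scale factor that maps the second measurement onto the
    first, computed from the depths of common points expressed in the
    coordinate system of the fixed device (18-1)."""
    d1 = np.asarray(depths_fixed, dtype=float)
    d2 = np.asarray(depths_other, dtype=float)
    return float(d1 @ d2) / float(d2 @ d2)
```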
One of the advantages of this method is that defects caused in the measurement results by occlusion can be suppressed, because regions occluded in one measurement can be covered by the others.
When a still target object is measured while fixing one of the projector and the camera and moving the other as described above, the accuracy of the measurement results can be enhanced by a method called bundle adjustment. A simple bundle adjustment method is described below.
Using the 3D reconstruction method based on the self-calibration and the stereo method already described, it is sometimes hard to obtain a result with high precision even if a sufficiently large number of correspondence points are provided. One example is the case in which the distance from the camera or the projector to the target object is relatively large compared to the depth variation of the scene, which means that the projection of the scene onto the devices is nearly a pseudo-perspective projection. Under pseudo-perspective projection, self-calibration using correspondences between only two images is impossible. Similarly, when the projections are nearly pseudo-perspective, the solution becomes unstable even with a sufficient number of correspondence points. Sometimes this instability can be alleviated by increasing the depth variation of the scene using the simultaneous 3D reconstruction already described; however, this approach is hard to apply when the depth of focus is shallow. For these cases, using three or more camera images can stabilize the solution, since 3D reconstruction from three or more images is possible even under pseudo-perspective projection.
Measurements are thus carried out a plurality of times while fixing one of the projector and the camera and moving the other (as shown in the drawings); bundle adjustment can then be applied to the results.
To process a bundle adjustment, reference points (13-3, 13-4) that are commonly observable from all the cameras and the projector are selected by sampling. The 3D locations of the reference points and the extrinsic parameters of all the cameras are estimated by the following processes. Let the estimated depth (13-11, 13-12) of the i-th reference point (in the projector coordinates) observed from the projector be expressed as di (i = 1, 2, …, Nr), where Nr is the number of the reference points, and let the extrinsic parameters (13-6, 13-7) of camera j be expressed as θj. As the initial values, the results of the self-calibration and 3D reconstruction method already described are used. The algorithm of the bundle adjustment is as follows.
Step 1
Sample 3D points (13-3, 13-4) that are commonly observable from all the cameras and the projector as reference points.
Step 2
Calculate the initial positions of the reference points by self-calibration and 3D reconstruction using one of the cameras and the projector.
Step 3
Repeat the following steps until the changes of θj become sufficiently small.
Step 3.1
Repeat the following steps for all the camera indexes j=1, 2, . . . , Nc.
Step 3.1.1
Update the extrinsic camera parameters θj (13-6, 13-7) of camera j, using the current estimates of the depths di (i = 1, 2, …, Nr) (13-14) of the reference points from the projector.
Step 3.2
Update di (i=1, 2, . . . , Nr) using the current θj.
The update process of the camera parameters θj is as follows. Let Trans(θj, x) denote the coordinates in the projector coordinate system of a 3D point x given in the coordinate system of the j-th camera, transformed using the extrinsic parameters θj := (αc,j, βc,j, γc,j, tc,j) of that camera. Let Proj be the projection mapping of the normalized camera.
By minimizing, with respect to θj, the sum over all the reference points of the squared distances on the normalized screen between the points projected through Trans(θj, ·) and Proj and the corresponding observed points, the extrinsic parameters of the j-th camera can be estimated (13-5, 13-8). This minimization is the same problem as ordinary camera calibration from 3D-2D correspondences. It can be solved by, for example, the Newton method; other non-linear optimizations, such as the simplex descent method, can also be used. The initial estimate can be obtained by a linear method proposed for camera calibration.
The update of the estimated depths di (i = 1, 2, …, Nr) (13-14) of the reference points from the projector is processed as follows. Using the current estimates of the extrinsic parameters, the reference points are reconstructed from the correspondence points between the projector and each of the cameras, and the depth values from the projector to the reference points are calculated. Then, the new depths of the reference points are obtained by averaging the depth values calculated over all the cameras. This process is carried out as follows.
Step 1
Repeat the following steps for each camera index j=1, 2, . . . , Nc
Step 1.1
Calculate the distances (13-11, 13-12) between the projector and the reference points from the correspondence points (13-3, 13-4) between the projector and the j-th camera and from the extrinsic parameters θj (13-6, 13-7) of the j-th camera. The outputs of this calculation are the depths dk,j (13-11, 13-12) from the projector, where k is the index of the reference point.
Step 2
Calculate the average of the distances (13-14) between the projector and each reference point by the form

dk = (1/Nc) Σj=1…Nc dk,j.
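The alternation of Steps 3.1 and 3.2 can be sketched as follows; update_pose() and depths_via_camera() stand for the pose refinement and the per-camera depth computation described above and are assumed to be supplied by the caller.

```python
import numpy as np

def bundle_adjust(thetas, depths, update_pose, depths_via_camera,
                  n_outer=50, tol=1e-6):
    """Alternating bundle adjustment.
    thetas : list of extrinsic parameter vectors theta_j.
    depths : depths d_i of the reference points from the projector.
    update_pose(theta_j, depths, j)  -> refined theta_j       (Step 3.1.1)
    depths_via_camera(theta_j, j)    -> depths d_{k,j}        (Step 1.1)"""
    for _ in range(n_outer):
        previous = [t.copy() for t in thetas]
        for j in range(len(thetas)):                     # Step 3.1
            thetas[j] = update_pose(thetas[j], depths, j)
        depths = np.mean([depths_via_camera(thetas[j], j)    # depth update,
                          for j in range(len(thetas))], axis=0)  # Steps 1-2
        change = max(np.linalg.norm(t - p) for t, p in zip(thetas, previous))
        if change < tol:                                 # Step 3 criterion
            break
    return thetas, depths
```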
Next, an entire shape estimation method in the embodiment will be disclosed.
To obtain the entire shape of a target object, it is necessary to carry out 3D reconstruction from the respective directions while changing the direction of the target object, and to align and integrate the resulting 3D shapes.
Here, by measuring the correspondence points from the respective directions while fixing the camera and the projector and changing only the direction of the target object, and applying the method of “simultaneous 3D reconstruction of multiple scenes” described above, the scalings of the 3D shapes reconstructed for the respective directions can be made consistent with each other, and the shapes can thus be aligned easily.
In the above method, when a means such as a turntable is further used to restrict the change in the direction of the target object to a rotation about a certain axis, the change in direction can be estimated easily. With this arrangement, the processing for aligning the 3D shapes can be simplified further.
In the above method, when a turntable whose amount of rotation can be controlled is used, the change in the direction of the target object can be estimated still more easily, and the estimation accuracy of the entire shape can be enhanced. As another method of estimating the change in the direction of the target object, a turntable with a marker can be used, as shown in the drawings.
A method of calculating the rotation angle using the turntable will be described. First, the rotation axis and the rotation plane of the turntable are calibrated in advance.
When the turntable is turned, an LED mounted on it traces an ellipse in the two-dimensional image, so the rotation axis can be estimated by analyzing that ellipse. To calibrate the turntable, a calibration pattern is placed on it and measured while the turntable is turned; the turntable can then be calibrated by an optimization calculation using the measured patterns (see the drawings).
Next, the 3D coordinates of the point at which the line connecting the center of the camera and the LED on the image plane intersects the actual LED are determined. Using these coordinates and the calibration parameters, it is possible to calculate the rotation angle of the turntable and the distance between the LED and the center of rotation (the radius of rotation).
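A sketch of this angle computation, assuming the rotation plane of the LED (a point on it on the axis, plus the axis direction as its normal) has been calibrated and the camera center is at the origin:

```python
import numpy as np

def led_position(view_dir, axis_point, axis_dir):
    """Intersect the viewing ray through the observed LED with the
    calibrated rotation plane (normal = axis direction)."""
    s = (axis_dir @ axis_point) / (axis_dir @ view_dir)
    return s * np.asarray(view_dir, dtype=float)

def rotation_angle(p0, p1, axis_point, axis_dir):
    """Signed turntable angle between two LED positions about the axis,
    plus the radius of rotation of the LED."""
    n = axis_dir / np.linalg.norm(axis_dir)
    r0 = p0 - axis_point
    r0 -= (r0 @ n) * n                 # project into the rotation plane
    r1 = p1 - axis_point
    r1 -= (r1 @ n) * n
    angle = np.arctan2(np.cross(r0, r1) @ n, r0 @ r1)
    return angle, np.linalg.norm(r0)
```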
Further, when the target to be measured is placed on the turntable, an LED occluded by the object cannot be observed, so a plurality of LEDs are attached such that at least one of them can be observed at all times. In this case, it is preferable to make the LEDs distinguishable by setting their radii of rotation to different values.
Further, a method that does not require a prior calibration is also conceivable. This is possible because the axis of rotation can easily be calculated from the conditions that the 3D coordinates of the LED positions, obtained by the present invention while the turntable rotates, lie on the same plane and rotate about the same axis.
Further, as another use of the LED, it can serve as a switch. When, for example, a video camera is used as the image capturing device, the video camera can easily detect a movement of the LED, and can likewise detect that the LED is stationary. Accordingly, the LED can be used as a switch by carrying out no measurement while the LED moves, that is, while the turntable turns, and automatically starting the 3D measurement according to the present invention when it is detected that the turntable has stopped. With this arrangement, the user can measure the entire shape of the target object automatically simply by placing the target on the turntable and turning it together with the turntable, without operating a personal computer or other equipment.
As described above, according to the present invention, the 3D shape of a target object can be measured from a captured image using the projector and the camera, while the positional relationship between the projector and the camera is estimated automatically. With this arrangement, the workload and the knowledge necessary for measurements can be greatly reduced compared with similar conventional systems, which require prior calibration of the positional relationship.
Further, when the apparatus in which the laser pointer is attached to the projector is used, the scale of the target object can be determined. This is important for creating shape models that require correct scaling.
Further, the difficulty of reconstructing the entire shape of a target object can be reduced by measuring the object from a plurality of directions and reconstructing the measurements simultaneously. In this case, when the turntable with the LEDs is used, the entire shape can be reconstructed automatically without any additional equipment.