The present disclosure relates to methods and systems for processing images.
It is well known that any pair of images, taken from two different positions, contains parallax information relating to the range to various objects in the scene. A three dimensional point cloud of features/objects can be constructed from stereo ranging measurements to the various scene features. However, if the physical locations of the cameras are unknown, the physical size of the point cloud will remain unknown and the cloud is defined as unscaled. To properly scale the point cloud to its true physical dimensions, the true camera locations must be known, as well as the focal length (or angle calibration) of the camera.
If the camera is moved to three undefined locations, along some trajectory, it is possible to generate at least two stereo pairs of images. Each of these image pairs can generate a three dimensional point cloud of scene features. However, rescaling, rotating, and merging point clouds generated from multiple unknown camera positions is a challenging task. The problem compounds as the camera is moved to multiple positions so as to create multiple stereo pairs of images.
Existing techniques for merging point clouds from multiple images do not provide satisfactory results. The merged point clouds obtained with existing techniques only roughly approximate the original structures being modeled. Furthermore, these techniques require that all essential parts of a scene be visible in all of the images.
One embodiment relates to a method for processing stereo rectified images, each stereo rectified image being associated with a camera position, the method comprising: selecting a first pair of stereo rectified images; determining a first point cloud of features from the first pair of stereo rectified images; determining the locations of the features of the first point cloud with respect to a reference feature in the first point cloud; selecting a second pair of stereo rectified images so that one stereo rectified image of the second pair is common to the first pair; and scaling a second point cloud of features associated with the second pair of stereo rectified images to the first point cloud of features.
Another embodiment relates to an article of manufacture comprising a physical, non-transitory computer readable medium encoded with machine executable instructions for performing a method for processing stereo rectified images, each stereo rectified image being associated with a camera position, the method comprising: selecting a first pair of stereo rectified images; determining a first point cloud of features from the first pair of stereo rectified images; determining the locations of the features of the first point cloud with respect to a reference feature in the first point cloud; selecting a second pair of stereo rectified images so that one stereo rectified image of the second pair is common to the first pair; and scaling a second point cloud of features associated with the second pair of stereo rectified images to the first point cloud of features.
These and other aspects of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. In one example of the present disclosure, the structural components illustrated herein can be considered drawn to scale. It is to be expressly understood, however, that many other configurations are possible and that the drawings are for the purpose of example, illustration and description only and are not intended as a definition or to limit the scope of the present disclosure. It shall also be appreciated that the features of one embodiment disclosed herein can be used in other embodiments disclosed herein. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
Various embodiments will now be disclosed, by way of example only, with reference to the accompanying schematic drawings in which corresponding reference symbols indicate corresponding parts, in which:
The present disclosure provides a method for automatically constructing and merging and/or scaling feature point clouds from images taken from multiple unknown camera positions. The present disclosure provides the methodology and equations for reconstructing a three dimensional point cloud from the parallax positions of the feature points in multiple pairs of stereo rectified images. The images may be taken from multiple cameras or from a moving camera.
The present disclosure creates a single, correlated, point cloud from multiple stereo rectified image pairs using unposed cameras. For example, as a camera moves along some arbitrarily curved trajectory it views terrain and structures from multiple points of view. The camera may move all around the area of interest so that this area of interest is viewed from all directions.
In an embodiment of the invention, the methods and systems disclosed herein are configured to merge and/or scale point clouds generated from multiple captured images in accordance with the following scenarios. These scenarios are listed in order of increasing difficulty.
Scenario 1: The camera is attached to a high accuracy Inertial Navigation System (INS). In this scenario, the camera locations and orientations (poses) for each image in the sequence are known with considerable precision.
Scenario 2: In this scenario, shown in
Scenario 3: In this scenario, motion is constrained and there are major obscurations. The typical situation is a sequence of imagery taken from an automobile as it drives around a building. The constraint is provided by the roadway. Over flat terrain, this constraint significantly reduces the degrees of freedom of the problem. On the other hand, the front side and area around a building will become invisible when the vehicle is on the back side. Thus, a starting ensemble of feature points will progressively become invisible as new features come into view. It is desirable that the point cloud be continuously merged with the growing ensemble cloud as the camera moves to new positions. The real difficulty occurs when the camera moves back to, or close to, its starting position (i.e. closing the circle). In an embodiment, the methods and systems disclosed herein adjust the individual point clouds to get the best possible match to the real physical situation.
Scenario 4: In this scenario, shown in
The camera is defined by a camera center and an image plane.
The camera has a local x,y,z coordinate system (attached to the camera) which is centered at the camera center. In this local coordinate system there is a vector, z, the principal vector, from the camera center to the center of the image plane. This z vector is perpendicular to the image plane and touches the image plane at the image center point, c. The distance from the camera center to the image plane, f, is the focal length of the camera. In this local coordinate system the x and y axes are parallel to the image plane. Conventionally, the y-axis lies in the vertical direction.
The two cameras are observing a common world point, W. This point, together with the two camera centers RCC, LCC, defines a plane, referred to as the common plane CP. Because the baseline also joins the two camera centers, the stereo baseline SB therefore lies in this plane CP. It will be appreciated that there are an infinite number of points which lie in this common plane CP. Any two of these points will form the ends of a line in the image plane.
As can be seen in
The scene object, at point, W, as seen by the camera in either of its positions, has vector coordinates XWL, YWL, ZWL or XWR, YWR, ZWR with respect to the camera centers. From a single camera position, it is not possible to determine how far away the object W is from the camera center. The direction may be known, but not the distance. The direction with respect to the camera's orientation is known from the location of the image of W on the image plane. The ray between the camera center and W intercepts the image plane at the vector location xp, yp, f. Thus, the image location of this object can be defined by the vector p=[xp, yp, f]. The focal length of the camera can be adjusted to unity: f=1. The rescaling is the result of dividing all the vector components by f. This is a reasonable rescaling because, in many cases, the focal length is unknown, but the angles to various object locations can be measured. The abstract image of W is therefore defined by the vector [xp, yp, 1]. Here, the focal length is retained as part of the image vector.
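The unit-focal-length rescaling described above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not the patent's):

```python
def normalized_image_vector(xp, yp, f):
    """Rescale the image vector p = [xp, yp, f] to unit focal length.

    Dividing every component by f leaves the angles to the feature
    unchanged, which is useful when f itself is unknown but angles
    to object locations can be measured.
    """
    return (xp / f, yp / f, 1.0)
```

For example, a feature imaged at (2, 4) on an image plane with f = 2 maps to the abstract image vector (1, 2, 1).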
In general, there will be a multitude of world points, Wa, Wb, Wc, . . . in the observed far field. Each of these points will have its own common plane and each of these common planes will intersect the stereo baseline. As a result, these planes radiate out from the stereo baseline.
Objects, or features, Wa, Wb, Wc, in the far field are connected to the camera center by light rays. The images of these features occur where their rays pass through the image plane at points, pa, pb, pc. Each far field feature is one corner of a triangle, with the other corners being the camera centers. Such a triangle is part of an extended plane which passes through the stereo baseline SB. Where this feature plane also passes through the image plane, this intersection is defined as an epipolar line EL. Each plane and feature has a corresponding epipolar line EL which radiates from the epipole. Specifically, as shown in
The relative camera geometry of
In another case, the cameras may both point perpendicular to the stereo baseline SB. In this case, the epipoles are at infinity and it is no longer possible to define finite epipoles. As long as the cameras are also parallel—a condition defined as stereo rectified—the relative distances to various objects can be determined by their relative parallax motions. However, if the cameras have relative tilt (the cameras are rotated through different angles around the stereo baseline so that they are no longer parallel to each other), then the image motions are confused. Without correction to the stereo rectified camera position, range measurement becomes difficult.
Stereo rectification involves aligning the cameras perpendicular to the stereo baseline SB and parallel to each other. When this is done all the features in the left camera will move along parallel, and horizontal, lines to reach their positions in the right camera. The epipoles eR, eL, will be at infinity in both cameras. In practice, the cameras (or a single moving camera) will typically not be in stereo rectified position when the images are formed. Virtual, or homographic, mapping of the original images should therefore be performed to create the desired pair of stereo rectified images.
Referring now to
The method 500 comprises operation 510 where a first pair of stereo rectified images is selected. In an embodiment, stereo rectification of the pair of images is carried out in accordance with one or more methods described in co-pending U.S. application Ser. No. 13/445,454, entitled “Stereo Rectification Method” filed Apr. 12, 2012, the entire disclosure of which is incorporated by reference in its entirety. However, this is not limiting. It is envisioned in other embodiments to stereo rectify the images in accordance with other methods, including, for example, those that are based on the construction of the Essential Matrix.
The method 500 then proceeds to operation 520 where a first point cloud of features from the pair of stereo rectified images is determined. The first point cloud of features includes one or more objects and/or object features that are captured by the images.
In one embodiment, operation 520 may include determining the unscaled range to each feature from the stereo rectified pair of images. This is done as follows. For example,
The cameras observe a feature in the far field. The feature is a distance, R, perpendicular to (the extension of) the stereo baseline. The feature is also a distance x from the left camera along the stereo baseline.
The angle between the ray to this object and the principal axis is β. The complementary angle, between this ray and the stereo baseline, is α.
With these definitions, it is now possible to develop the equations for the distance R of the object from the stereo baseline and the distance x of the object along the stereo baseline from the left camera position.
As seen in
Substituting equation (2) into equation (1) and rearranging, R can be defined as follows:
Using trigonometric substitutions, equation (3) becomes:
It is noted that αL=90−βL, and αR=90−βR. Substituting these relationships into equation (4) and using the appropriate trigonometric conversions provides:
The distance, x, is given by:
x=B+b  (6)
Rearranging equation (4) to isolate B and substituting this, together with equation (2), into equation (6), after some manipulation and trigonometric substitution, yields equation (7):
With further manipulation, resulting from trigonometric expansion of the angle difference sine, this reduces to the desired form:
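Since the equation images are not reproduced here, the following sketch shows a standard stereo-rectified range computation that is consistent with the geometry described (both β angles measured the same signed way from the parallel principal axes, so the parallax enters through the angle-difference sine; the function names are ours, not the patent's):

```python
import math

def range_from_baseline(S, beta_L, beta_R):
    """Perpendicular distance R of a feature from the stereo baseline.

    S is the baseline length; beta_L and beta_R are the signed angles
    of the feature's rays from the left and right principal axes.
    """
    return S * math.cos(beta_L) * math.cos(beta_R) / math.sin(beta_L - beta_R)

def distance_along_baseline(S, beta_L, beta_R):
    """Distance x of the feature along the baseline from the left camera."""
    return S * math.sin(beta_L) * math.cos(beta_R) / math.sin(beta_L - beta_R)
```

As a check, a feature directly in front of the right camera (beta_R = 0) at twice the baseline distance gives beta_L = atan(S/R), and the formulas recover R and x exactly.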
The angle, β, indicates the direction to a feature within the common plane CP defined by that feature and the two camera centers. However, it will be appreciated that this common plane will be oriented differently for each such feature. Assume the cameras are stereo rectified, parallel, and pointing in some arbitrary direction. The cameras then define a reference plane which passes through the image centers of both cameras (i.e. the plane lies along the x-axes). In general, the plane defined by a particular feature and the camera centers (i.e. the stereo baseline) will not coincide with this standard camera reference plane. The two planes will intersect with an angle, η, between them.
The angle, β, defines the direction to the distant feature within the common stereo plane. Thus, this angle is defined with respect to a reference vector, ry, which also lies within this plane. The vector ry terminates at the image plane, but its length is not equal to the camera focal length. The angle β is thus given by:
The angle, η, is given by:
In developing a three dimensional point cloud from the stereo parallax measurements, it is desirable to provide a reference orientation so that both cameras are tilted the same. In one embodiment, this orientation is developed by arbitrarily assigning one of the features to be the reference feature. Through perspective rotations of the left and right camera images, this reference feature is placed on the x-axis in both the left and right images. It will be appreciated that moving the reference feature to the x-axis results in a corresponding perspective transformation motion of all the other features, as well. Once these rotations have been completed, so that the cameras are parallel and fully stereo rectified, then the angle, η, provides the vertical reference angle of each feature with respect to the reference feature.
An alternative scheme would be to bring the reference feature to an arbitrary y-axis location in both cameras and then to simply compute the relative angles, ηi, for each of the other features.
Referring back to
As noted previously, these features in the captured images are presumed to be elements of some object out in the world. In an embodiment, in order to find the geometrical relationships of these features relative to each other, a local Cartesian coordinate system is established into which the locations of the features are embedded.
This local coordinate system is related to the relationship between the world object and the stereo camera pair. In general, the local coordinate system will not correspond to a coordinate system growing out of the ground plane. Rather, the local coordinate system may be established by selecting some world object feature as the origin of the local coordinate system. The reference plane is the plane defined by the locations of this feature and the two camera centers.
The coordinate system origin is at the principal feature P. The principal reference plane RP passes from this feature through the two anchoring cameras (and camera centers LCC and RCC).
The local z and y coordinates of feature i are:
Zi=Ri cos ηi−R0 (11)
and, using equation (10):
Yi=Ri sin ηi (12)
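Equations (11) and (12) can be sketched directly (a minimal illustration, assuming R0 is the reference feature's perpendicular distance from the stereo baseline; the function name is ours):

```python
import math

def local_yz(R_i, eta_i, R0):
    """Local Y and Z coordinates of feature i from its perpendicular
    range R_i, its vertical reference angle eta_i, and the reference
    feature's range R0."""
    Z_i = R_i * math.cos(eta_i) - R0   # eq. (11)
    Y_i = R_i * math.sin(eta_i)        # eq. (12)
    return Y_i, Z_i
```

The reference feature itself (eta = 0, R_i = R0) lands at the local origin of the Y-Z plane, as expected.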
The third local coordinate, Xi, is found by reexamining
Using equation (8), equation (13) is obtained:
The foregoing embodiment shows how to compute the locations of multiple feature points with respect to a reference feature point P. In this embodiment, the reference feature P is located at the origin of a local Cartesian coordinate system and the orientation of this local coordinate system is controlled by the locations of the two cameras. It will be appreciated that, in the foregoing embodiment, any feature point can serve as the origin of the local coordinate system.
Referring back to
In an embodiment, this is done by linking the third camera orientation to the first and second camera orientations. This is done in an embodiment by constructing and/or defining a master local coordinate system.
Once R12 has been determined as a scale reference distance, it is possible to obtain the similarly scaled B12, b12, r1 and r2 through simple trigonometry, as follows:
As shown in
In an embodiment, the location of camera C3 is determined in the local coordinate system by the following procedure:
First, the angle, ε123, between the stereo baselines SB12 and SB23 is determined. This can be done with the unrectified initial image from camera C2. During the process of stereo rectification of the camera C1 and camera C2 images, it is possible to recover the location of the camera C1 epipole in camera C2's original image. This may be accomplished by standard techniques developed by the Multiview Geometry community. Similarly, during the process of separately stereo rectifying the images from cameras C2 and C3, it is possible to recover the location of the camera C3 epipole in camera C2's original image of the scene. The angle, with respect to the camera center, between these two epipoles is ε123.
After cameras C2 and C3 have been stereo rectified together, the reference feature P is located and the newly rectified camera C2 and camera C3 images are virtually tilted so that the image of this feature P lies along the x-axis of cameras C2 and C3. This establishes a new plane with triangular corners at camera C2, camera C3 and the reference feature P.
With this new plane, it is possible to create a secondary local Cartesian coordinate system, defined by the coordinates, x′, y′, z′. These coordinates are oriented to the plane defined by cameras C2 and C3 and the reference feature P, but they are oriented to the SB23 stereo baseline. This new local coordinate system is illustrated by
In order to determine the intercept, BI23, of R23 with the SB23 baseline, a transformation is defined to transform the x′, y′, z′ coordinate system to the reference x, y, z coordinate system.
In an embodiment, the desired transformation is defined according to the matrix equation which carries an object at some x′, y′, z′ location to its equivalent x, y, z coordinates:
The nine coefficients of matrix [A] can be determined as follows.
It is assumed that the procedures associated with
Because the location of the second camera C2 is initially unknown, the distance from the stereo baseline SB23 to the reference feature and the other common feature set will also be unknown. Thus, the apparent size of this common feature point cloud will usually be different for the camera pair C1-C2 than for the camera pair C2-C3. Finding the transformation matrix which couples the x, y, z coordinate system with the x′, y′, z′ coordinate system therefore will involve a rescaling of both point clouds so that they match in all but orientation (operation 550).
In an embodiment, the rescaling process is done as follows. Each point cloud is treated as a single vector and the norm of the vector is determined. Then, each point is rescaled in the cloud by dividing it by this norm. Thus, the following equations are obtained:
norm12=(x1^2+y1^2+z1^2+x2^2+y2^2+z2^2+ . . . +xn^2+yn^2+zn^2)^1/2  (19)
and
norm23=(x′1^2+y′1^2+z′1^2+x′2^2+y′2^2+z′2^2+ . . . +x′n^2+y′n^2+z′n^2)^1/2  (20)
Then, the measured locations of the feature points are replaced with the normalized locations of the feature points:
x1→x1/norm12, y1→y1/norm12, . . . x′1→x′1/norm23, y′1→y′1/norm23, . . . etc.  (21)
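The rescaling of equations (19) through (21) amounts to dividing every coordinate by the norm of the cloud treated as one long vector; a sketch using NumPy (assumed available; the function name is ours):

```python
import numpy as np

def normalize_cloud(points):
    """Divide every feature coordinate by the norm of the whole
    cloud treated as a single vector, per equations (19)-(21)."""
    pts = np.asarray(points, dtype=float)
    norm = np.sqrt((pts ** 2).sum())
    return pts / norm
```

After this step both point clouds have unit overall size, leaving only the orientation difference to be recovered by the matrix transformation.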
Next, a matrix transformation is developed between the two feature point matrices, a system which may be overdetermined. The desired solution should use as many good features as possible so as to minimize the impact of feature position errors. Equation (22) provides the starting matrix description:
For convenience, equation (22) is rewritten as:
[X]=[A][X′] (23)
Then, both sides of (23) are right multiplied by the transpose of X′:
[X][X′]T=[A][X′][X′]T (24)
Next, a correlation (or autocorrelation) matrix is defined:
So that equation (24) can be written as:
[X][X′]T=[A][C] (26)
Finally, both sides are right multiplied by the inverse of [C]:
[X][X′]T[C]−1=[A][C][C]−1=[A]
Thus, [A] is defined as:
[A]=[X][X′]T[C]−1 (27)
Equation (27) contains an additional correlation matrix:
Equation (27) provides the relationship between the two coordinate systems.
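The least-squares construction of equations (22) through (27) can be sketched with NumPy (the feature matrices are assumed to be 3xN, one column per common feature; names are ours):

```python
import numpy as np

def estimate_transform(X, Xp):
    """Solve [X] = [A][X'] for the 3x3 matrix [A] in the
    least-squares sense, following equations (24)-(27)."""
    C = Xp @ Xp.T                        # correlation matrix, eq. (25)
    return X @ Xp.T @ np.linalg.inv(C)   # eq. (27)
```

With noise-free features and at least three linearly independent feature columns, [C] is invertible and [A] is recovered exactly; with noisy, overdetermined data the result is the least-squares fit.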
Equation (23) can be used to find the direction along R23 to the intercept with the stereo baseline SB23. But first, the length of R23 should be determined. This can be done as follows.
From simple measurements in the stereo rectified image of camera C2, the angle, α23, of the reference feature with respect to the (SB23 rectified) camera C2 center is determined. Using equation (17) and the rectified images from cameras C1 and C2, the distance r2 to the origin of the local reference coordinate system(s) is determined. From
R23=r2 sin α23 (29)
and:
B23=r2 cos α23 (30)
In order to find BI23, the z′ direction is converted to the x, y, z reference coordinate system and the resulting vector is extended a distance R23 to the baseline intercept, BI23. This is done with appropriate normalization of the transformed coordinates.
The desired direction vector is developed using equation (23):
Where the normalization factor is:
norm=(xA^2+yA^2+zA^2)^1/2  (32)
The location of BI23 in the reference local coordinate system is therefore a vector position with the components:
xBI23=R23xA/norm, yBI23=R23yA/norm, zBI23=R23zA/norm  (33)
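The intercept computation of equations (31) through (33) can be sketched as follows (assuming (xA, yA, zA) is the transformed z′ direction produced by equation (31); the function name is ours):

```python
import math

def baseline_intercept(R23, xA, yA, zA):
    """Extend the transformed z' direction a distance R23 to the
    baseline intercept BI23, per equations (32) and (33)."""
    norm = math.sqrt(xA**2 + yA**2 + zA**2)          # eq. (32)
    return (R23 * xA / norm, R23 * yA / norm, R23 * zA / norm)
```

The returned point lies a distance R23 from the origin along the transformed direction, regardless of the length of (xA, yA, zA).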
Next, the relative camera coordinates are determined. It is now possible to find the locations of the three cameras C1, C2 and C3 in the local reference coordinate system, x, y, z. According to
x1=−B12,y1=0,z1=−R12 (34)
Similarly, camera C2 is located at:
x2=b12,y2=0,z2=−R12 (35)
To determine the location of camera C3, a vector is extended from camera 2, through intercept BI23 to a location SB23 from camera C2. The distance from camera C2 to intercept BI23 is, using equations (33) and (35):
B23=((R23xA/norm−b12)^2+(R23yA/norm)^2+(R23zA/norm+R12)^2)^1/2  (36)
The length S23 is also determined. In order to do so, the length b23 is determined. Referring to
The distance from camera C2 to camera C3 is therefore:
S23=B23+b23 (38)
Using vector locations (33) and (35) and equation (38), the coordinates of the third camera C3, in the reference coordinate system x, y, z, are:
The foregoing embodiment describes how to chain three cameras together, and thus, the features from the second pair of stereo rectified images to the first pair of stereo rectified images. This can be done because there are scene features (e.g. P) that are visible to cameras C1, C2 and C3. The second point cloud can be rescaled and merged with the first point cloud once the third camera location has been determined. Thus, the foregoing process is used to scale and merge the features of the second point cloud associated with the second pair of stereo rectified images to the features of the first point cloud associated with the first pair of stereo rectified images. This is done using the matrix transformations defined above.
In an embodiment, the foregoing chaining process can be generalized to multiple camera positions. In an embodiment, this is done as follows.
Referring now to
In an embodiment, the basic procedure for chaining is to pair up the cameras so that each camera is paired with at least two other cameras (more, if sufficient features are shared). Stereo rectified images are created for each pair of cameras. From these rectified image pairs three dimensional feature point clouds are created for each image pair. The method is outlined in
In an embodiment, the chaining process involves successive observations of those feature points which are visible from at least three camera positions. The process starts with the first two camera positions P1, P2 in the chain. In accordance with the above described embodiment, these two cameras create a master coordinate system by selecting a particular prominent feature to act as the origin of the master coordinate system. The third camera (or third camera position P3) is then linked to the first two cameras (or first two camera positions P1, P2) by the method described above. This determines the position P3 of the third camera with respect to the first two cameras and the reference coordinate system. It also provides a way of describing the positions of the feature points with respect to the second and third cameras. And, it scales the point cloud generated by the second and third cameras to the reference point cloud created by the first and second cameras.
A fourth camera or fourth camera position P4 is then added to the mix and a second camera triplet is then created consisting of the second, third and fourth cameras (or second, third and fourth camera positions P2, P3, P4). These three cameras will see a different mix of trackable features. In order to register this new collection of features with the old collection, a new reference point P′, which is visible by both the original triplet and the new triplet, is established. This new reference point P′ then acts as the link for the chain which is being constructed.
In this chaining process, the points in the secondary point cloud are properly (though indirectly) registered in the primary point cloud. This means that the secondary point cloud is properly rescaled and oriented so as to match the original reference coordinate system. For this to be possible the position P4 of the fourth camera in the camera 1, camera 2 and reference point coordinate system is determined. The procedures outlined in the above embodiment provide the coordinate transformations that are used.
Chaining then continues, until the last camera position is reached—all new feature points being registered in the original master coordinate system. For all-around point cloud construction, feature clouds from successive camera triplets (cameras N, N+1 and N+2) are successively merged until the starting point is reached. The following double pairings are the minimal case: Camera 1 with camera 2 with camera 3, . . . camera N−1 with camera N with camera 1.
The last double pairing makes it possible to chain backwards, as well as forwards, so as to adjust the estimates in camera location and pose. In an embodiment, two chains are established: Camera 1 to 2 to 3 to 4, etc. And Camera 1 to N to N−1 to N−2, etc.
It will be appreciated that the above described chaining does not require simultaneous visibility of all feature points.
In order to correct the position and pose estimates for each camera position, the full circuit of the clockwise and counterclockwise chains can be performed. Counter chaining may be used to correct residual errors after 360° fly-around. In an embodiment, the position and pose estimates for each camera position can then be averaged between the two chains. This new position information is then used to modify the point cloud fragments and remerge them. The process is then repeated until it reaches a stable equilibrium.
Referring to
The system 1800 includes a stereo rectification module 1811 configured to stereo rectify images 1801 (e.g. captured in real-time). The system 1800 also includes a position determination module 1812 configured to determine the locations of the features of the first point cloud with respect to a reference feature in the first point cloud, and a scaling and merging module 1813 configured to scale and merge a second point cloud of features associated with the second pair of stereo rectified images to the first point cloud of features. The system 1800 outputs a stream of linked images 1802.
The position determination module 1812 and the scaling and merging module 1813 are configured to perform one or more transformations described above.
The different modules of
The methods described above can be used to process images in many applications, including, for example, military and commercial applications. Military applications may include: improved situational awareness; persistent surveillance; training; battle rehearsal; target recognition and targeting; GPS denied precision navigation; sensor fusion; mission system integration; and military robotics. Commercial and industrial applications may include: geography and mapping; motion picture and television applications; architecture and building planning; advertising; industrial and general robotics; police and fire protection; and school instruction. However, this is not limiting. It is envisioned to use the above described methods and systems in other applications.
Furthermore, the processing of the images described according to the above embodiments can be performed in real-time.
It will be appreciated that the different operations involved in processing the images may be executed by hardware, software or a combination of hardware and software. Software may include machine executable instructions or codes. These machine executable instructions may be embedded in a data storage medium of the processor module. For example, the machine executable instructions may be embedded in a data storage medium of modules 1811, 1812 and 1813 of the system of
The software code may be executable by a general-purpose computer. In operation, the code and possibly the associated data records may be stored within a general-purpose computer platform. At other times, however, the software may be stored at other locations and/or transported for loading into an appropriate general-purpose computer system. Hence, the embodiments discussed above involve one or more software or computer products in the form of one or more modules of code carried by at least one physical, non-transitory, machine-readable medium. Execution of such codes by a processor of the computer system enables the platform to implement the functions in essentially the manner performed in the embodiments discussed and illustrated herein.
As used herein, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-transitory non-volatile media, and volatile media. Non-volatile media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) operating as discussed above. Volatile media include dynamic memory, such as the main memory of a computer system. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, less commonly used media such as punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, and any other memory chip or cartridge. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
Although the disclosure has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.
References Cited

U.S. Patent Documents:
US 2012/0177284 A1, Wang, Jul. 2012.

Foreign Patent Documents:
EP 2 597 620, May 2013.

Other Publications:
Mordohai, et al., “Real-Time Video-Based Reconstruction of Urban Environments”, Proceedings of 3DARCH: 3D Virtual Reconstruction and Visualization of Complex Architectures, 2007 (8 pgs.).
Sinha, et al., “Interactive 3D Architectural Modeling From Unordered Photo Collections”, ACM Transactions on Graphics, Proceedings of SIGGRAPH Asia 2008 (10 pgs.).
Van Gool, et al., “The MURALE Project: Image-based 3D Modeling for Archaeology”, In: Niccolucci, F. (Ed.), VAST: International Symposium on Virtual Reality, Archaeology and Cultural Heritage, 2000 (13 pgs.).
Pollefeys, et al., “Hand-Held Acquisition of 3D Models With a Video Camera”, 3-D Digital Imaging and Modeling, Proceedings, Second International Conference, Ottawa, Ont., Canada, Oct. 4-8, 1999, IEEE Comput. Soc., Los Alamitos, CA, USA (pp. 14-23).
Park, et al., “A Multiview 3D Modeling System Based on Stereo Vision Techniques”, Machine Vision and Applications, Springer, Berlin, DE, vol. 16, No. 3, May 1, 2005 (pp. 148-156).
Liu, et al., “High-Accuracy Three-Dimensional Shape Acquisition of a Large-Scale Object From Multiple Uncalibrated Camera Views”, Applied Optics, Optical Society of America, Washington, D.C., US, vol. 50, No. 20, Jul. 10, 2011 (pp. 3691-3702).
Written Opinion of the International Searching Authority for International Application No. PCT/US2013/040208, filed May 8, 2013, mailed Aug. 7, 2013 (7 pgs.).
International Search Report for International Application No. PCT/US2013/040208, filed May 8, 2013, dated Jul. 31, 2013 and mailed Aug. 7, 2013 (4 pgs.).

Publication Number: US 2014/0016857 A1, Jan. 2014.