The present disclosure relates generally to the field of computer modeling of structures. More specifically, the present disclosure relates to systems and methods for adjusting model locations and scales using point clouds.
Accurate and rapid identification and depiction of objects from digital images (e.g., aerial images, satellite images, etc.) is increasingly important for a variety of applications. For example, information related to various features of buildings, such as roofs, walls, doors, etc., is often used by construction professionals to specify materials and associated costs, both for newly-constructed buildings and for replacing and upgrading existing structures. Further, in the insurance industry, accurate information about structures may be used to determine the proper costs for insuring buildings/structures. Still further, government entities can use information about the known objects in a specified area for planning projects such as zoning, construction, parks and recreation, housing projects, etc.
Various systems have been implemented to generate three-dimensional (“3D”) models of structures and objects present in the digital images. However, these systems have drawbacks, such as an inability to accurately depict elevation and correctly locate the 3D models on a coordinate system (e.g., geolocation). As such, the ability to generate an accurate 3D model having correct geolocation data is a powerful tool.
Thus, in view of existing technology in this field, what would be desirable is a system that automatically and efficiently processes a 3D model of an object, along with digital imagery and/or geolocation data for the same object, to generate a corrected 3D model of the object present in the digital imagery. Accordingly, the systems and methods disclosed herein solve these and other needs.
The present disclosure relates to systems and methods for adjusting three-dimensional (“3D”) model locations and scales using point clouds. Specifically, the present disclosure includes systems and methods for adjusting a 3D model of an object so that the 3D model conforms to a correctly georeferenced point cloud corresponding to the same object, when rendered in a shared 3D coordinate system, thereby ensuring that the geolocation of the 3D model after adjustment is also correct. The system can include a first database storing a 3D model of an object, a second database storing georeferenced point cloud data corresponding to the object, and a processor in communication with the first and second databases. The processor can be configured to retrieve the 3D model from the first database, retrieve the georeferenced point cloud data from the second database, and render the 3D model and the georeferenced point cloud data in a shared coordinate system, such that the 3D model and the georeferenced point cloud data are aligned from a first point of view. The processor can then calculate an affine transformation matrix based on the 3D model and the georeferenced point cloud data to align the 3D model and the georeferenced point cloud data from a second point of view. Finally, the processor applies the affine transformation matrix to the 3D model to generate a new 3D model.
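By way of a non-limiting illustration only, the foregoing workflow could be sketched in Python as follows; the function below uses a simple centroid-and-base-elevation shift as a stand-in for the per-face alignment method described in the detailed description, and all names shown are illustrative assumptions rather than part of the present disclosure:

import numpy as np

def adjust_model(vertices, cloud):
    """Illustrative sketch: vertices is an (N, 3) array of 3D model coordinates and
    cloud is an (M, 3) array of georeferenced point cloud coordinates, both expressed
    in a shared coordinate system."""
    # Stand-in alignment: translate the model so that its footprint centroid and its
    # base elevation match the point cloud (translation only; no rotation).
    dx, dy = cloud[:, :2].mean(axis=0) - vertices[:, :2].mean(axis=0)
    dz = cloud[:, 2].min() - vertices[:, 2].min()
    # 4x4 affine transformation matrix in the row-vector convention (M' = M x T).
    T = np.eye(4)
    T[3, :3] = (dx, dy, dz)
    new_vertices = (np.hstack([vertices, np.ones((len(vertices), 1))]) @ T)[:, :3]
    return new_vertices, T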
The foregoing features of the invention will be apparent from the following Detailed Description of the Invention, taken in connection with the accompanying drawings, in which:
The present disclosure relates to systems and methods for adjusting model locations and scales using point clouds, as described in detail below in connection with the accompanying drawings.
According to the embodiments of the present disclosure, the 3D model can represent a complete object (e.g., a building, structure, device, toy, etc.) or a portion thereof, and can be generated by any means known to those of ordinary skill in the art. For example, the 3D model could be built manually by an operator using computer-aided design (CAD) software, or generated through semi-automated or fully-automated systems, including but not limited to, technologies based on heuristics, computer vision, and machine learning. It should also be understood that the point cloud corresponding to the object, as described herein, is correctly georeferenced and can also be generated by various means, such as being extracted from stereoscopic image pairs, captured by a system with a 3D sensor (e.g., LiDAR), or other mechanisms for generating georeferenced point clouds known to those of ordinary skill in the art.
The system 10 includes system code 18 (i.e., non-transitory, computer-readable instructions) stored on a computer-readable medium and executable by the hardware processor or one or more computer systems. The code 18 could include various custom-written software modules that carry out the steps/processes discussed herein, including, but not limited to, a point cloud selection module 20, a 3D model selection module 22, a 3D rendering module 24, an affine matrix generation module 26, and a 3D model transformation module 28. The code 18 could be programmed using any suitable programming language including, but not limited to, C, C++, C#, Java, Python, or any other suitable language. Additionally, the code 18 could be distributed across multiple computer systems in communication with each other over a communications network, and/or stored and executed on a cloud computing platform and remotely accessed by a computer system in communication with the cloud platform. The code 18 could communicate with the point cloud database 14 and the 3D model database 16, which could be stored on the same computer system as the code 18, or on one or more other computer systems in communication with the code 18.
Still further, the system 10 could be embodied as a customized hardware component such as a field-programmable gate array (“FPGA”), application-specific integrated circuit (“ASIC”), embedded system, or other customized hardware component without departing from the spirit or scope of the present disclosure. It should be understood that
In step 108, the system 10 renders the 3D model and the point cloud in a shared 3D environment, such that the 3D model and the point cloud are aligned from at least one point of view (e.g., orthogonal or perspective). However, it should be understood that the 3D model and the point cloud may be misaligned from a different point of view. For example,
The system of the present disclosure aligns the 3D model 130 with the point cloud 132 from at least one point of view. As discussed herein, a point of view can be an orthometric or perspective view, can be directed at the 3D model and point cloud from any distance, scale, and orientation, and can be defined by intrinsic and extrinsic camera parameters. For example, intrinsic camera parameters can include focal length, pixel size, and distortion parameters, as well as other alternative or similar parameters. Extrinsic camera parameters can include the camera projection center (e.g., origin) and angular orientation (e.g., omega, phi, kappa, etc.), as well as other alternative or similar parameters.
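By way of a non-limiting illustration only, a point of view and its intrinsic and extrinsic camera parameters could be represented programmatically as follows (a Python sketch; the field names and default values are illustrative assumptions):

from dataclasses import dataclass, field
import numpy as np

@dataclass
class PointOfView:
    # Intrinsic camera parameters
    focal_length: float = 50.0                 # e.g., millimeters
    pixel_size: float = 0.005                  # e.g., millimeters per pixel
    distortion: tuple = (0.0, 0.0, 0.0)        # e.g., radial distortion coefficients
    # Extrinsic camera parameters
    projection_center: np.ndarray = field(default_factory=lambda: np.zeros(3))  # camera origin
    omega_phi_kappa: tuple = (0.0, 0.0, 0.0)   # angular orientation, in radians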
Returning to the process, the system 10 evaluates the set of points from the point cloud falling within the region 198 (e.g., the face of the 3D model), where n is the number of points in the set, as discussed in greater detail below.
The system 10 then proceeds to step 112, where the system 10 calculates an affine transformation matrix based on the single best fitting plane identified in step 111 and the corresponding face of the 3D model. Additional processing steps for calculating the affine transformation matrix are discussed herein in greater detail below.
As discussed above, the system 10 calculates an affine transformation matrix that is multiplied by all of the coordinates in the 3D model to generate a new 3D model. The new 3D model is transformed in such a way that it substantially matches the point cloud on the shared coordinate system, such that the two are substantially aligned from every point of view. The method for creating the affine transformation matrix can be given by: CreateAffineTransformation(Tx, Ty, Tz, S, Sz), which returns a 3D affine transformation defined by the following parameters: a 3D translation Tx, Ty, Tz; a 3D scale factor S (affecting all three components X, Y, and Z); and a scale factor Sz applied to the Z component. Accordingly, the resulting matrix can be arranged as a homogeneous 3D affine transformation matrix combining these translation and scale parameters.
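Although the exact arrangement of the matrix depends on whether model coordinates are treated as row vectors or column vectors, a minimal sketch of CreateAffineTransformation consistent with the row-vector product M′=M×T could be written in Python as follows (the layout shown is an assumption and is not intended to be limiting):

import numpy as np

def create_affine_transformation(tx, ty, tz, s=1.0, sz=1.0):
    """Sketch of CreateAffineTransformation(Tx, Ty, Tz, S, Sz): uniform scale S on all
    three components, an additional Z-only scale Sz, and a translation (Tx, Ty, Tz);
    no rotation component. Assumes homogeneous row vectors [x, y, z, 1] so that the
    transformed model is M' = M @ T."""
    T = np.diag([s, s, s * sz, 1.0])
    T[3, :3] = (tx, ty, tz)
    return T

Under this convention, applying the matrix to a 3D model M stored as an (N, 4) array of homogeneous vertex coordinates is simply M_prime = M @ T, corresponding to the equation M′=M×T below.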
The transformation matrix (T) can be applied to the 3D model (M) to generate a new 3D model (M′), given by the equation: M′=M×T. It should be noted that this method does not rotate or otherwise deform the 3D model, except for scaling in the Z direction during a specific stage when Sz is different from 1, as discussed in greater detail herein.
Similarly,
In step 170, the system 10 determines the point of view (V) projection center 190. As discussed above, the point of view (V) can be represented as the entire set of parameters that define a point of view, and the point of view (V) can be defined by both intrinsic and extrinsic camera parameters. Intrinsic camera parameters can include focal length, pixel size, and distortion parameters, as well as other alternative or similar parameters. Extrinsic camera parameters can include camera projection center and angular orientation (omega, phi, kappa), as well as other alternative or similar parameters. In step 172, the system 10 generates a point of view (V) projection plane 192. In step 174, the system 10 can select a point 194 on a given face of the 3D model 196, or alternatively, the system can receive an input from a user selecting a face of the 3D model 196. In step 176, the system 10 projects the selected point 194 towards the point of view (V) projection center 190 and onto the point of view (V) projection plane 192. In step 178, the system 10 defines a region 198 around the selected point 194 that was projected onto the (V) projection plane 192. For example, the region 198 could correspond to the entire face of the 3D model, or a portion thereof. In step 180, the system 10 projects the point cloud 200 towards the (V) projection center 190 and onto the (V) projection plane 192. In step 182, the system 10 identifies a set of points (e.g., point 200a) from the point cloud 200 that were projected onto the (V) projection plane 192 and fall within the region 198. Steps 170-182 for obtaining the set of points from the point cloud falling inside the region when projected onto the (V) projection plane can be given by: PointSelectionFromViewInsideRegion(P, V, R=F), where P corresponds to the point cloud 200, V corresponds to the parameters defining the point of view, R corresponds to the region 198 on the projection plane 192, and F corresponds to a given face of the model 196. The system 10 can then proceed to step 184, where the system 10 generates a best fitting plane (e.g., corresponding to the selected face of the 3D model) based on the set of points in the point cloud 200 falling inside the region 198 when projected onto the (V) projection plane 192. Those of ordinary skill in the art will understand that the best fitting plane can be calculated using well-known algorithms, such as RANSAC. The system 10 then determines if there are additional faces of the 3D model. If a positive determination is made, the system 10 returns to step 174, and if a negative determination is made, the system 10 proceeds to step 111, discussed herein.
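By way of a non-limiting illustration only, steps 170-184 could be sketched in Python as follows, under several simplifying assumptions noted in the comments (the region 198 is taken to be the full projected face polygon, the projection plane 192 is described by a point and a unit normal, and the best fitting plane is computed with a basic RANSAC loop):

import numpy as np
from matplotlib.path import Path

def project_toward_center(points, center, plane_point, plane_normal):
    """Project 3D points along rays toward the projection center 190 onto the
    projection plane 192 (the plane through plane_point with unit normal plane_normal)."""
    d = center - points                                          # ray directions toward the center
    t = ((plane_point - points) @ plane_normal) / (d @ plane_normal)
    return points + t[:, None] * d

def plane_basis(normal):
    """Two orthonormal in-plane axes used to express projected points in 2D."""
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, a)
    u /= np.linalg.norm(u)
    return u, np.cross(normal, u)

def point_selection_from_view_inside_region(cloud, face_vertices, center, plane_point, plane_normal):
    """Sketch of PointSelectionFromViewInsideRegion(P, V, R=F): returns the points of the
    point cloud 200 whose projections fall inside the projected face polygon (region 198)."""
    axes = np.column_stack(plane_basis(plane_normal))
    face_2d = project_toward_center(face_vertices, center, plane_point, plane_normal) @ axes
    cloud_2d = project_toward_center(cloud, center, plane_point, plane_normal) @ axes
    inside = Path(face_2d).contains_points(cloud_2d)
    return cloud[inside]

def fit_plane_ransac(points, iterations=200, tolerance=0.05, seed=0):
    """Best fitting plane (returned as a point and unit normal) for a set of 3D points,
    estimated with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_point, best_normal, best_count = None, None, -1
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                             # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        count = int((np.abs((points - p0) @ n) < tolerance).sum())
        if count > best_count:
            best_point, best_normal, best_count = p0, n, count
    return best_point, best_normal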
In step 210, the system 10 determines if the point of view is a vertical orthometric point of view. If a positive determination is made in step 210, the system 10 proceeds to step 212, where the system determines the height (z) of any point 250 on the face (F) 252 of the 3D model. The transformation matrix (T) can then be generated from the following component transformations (the parameters are defined in greater detail below):
T1=CreateAffineTransformation(Tx=0, Ty=0, Tz=z′, S=1, Sz=1);
T2=CreateAffineTransformation(Tx=0, Ty=0, Tz=0, S=1, Sz=s); and
T3=CreateAffineTransformation(Tx=0, Ty=0, Tz=−z, S=1, Sz=1).
After the system 10 has generated the transformation matrix (T) in step 222, the system 10 can proceed to step 114, discussed above.
If a negative determination is made in step 210, the system 10 proceeds to step 224, where the system 10 determines the point of view origin (O) 270. The transformation matrix (T) can then be generated from the following component transformations (the parameters are defined in greater detail below):
T1=CreateAffineTransformation(Tx=v′.x, Ty=v′.y, Tz=v′.z, S=1, Sz=1);
T2=CreateAffineTransformation(Tx=0, Ty=0, Tz=0, S=s, Sz=1); and
T3=CreateAffineTransformation(Tx=−v.x, Ty=−v.y, Tz=−v.z, S=1, Sz=1).
In the equations above, the scale factor (s) is given by: s=length(v′−O)/length(v−O). After the system 10 has generated the transformation matrix (T) in step 240, the system 10 can proceed to step 114, discussed above.
As shown in step 402, a system of the present disclosure identifies a first face of the 3D model, where (F0) is the first face in model (M). In step 404, the system executes code (e.g., system code 18) to carry out a method for obtaining a set of points (PP), given by: PointSelectionFromViewInsideRegion(P, V, R=F0), where (P) corresponds to the point cloud (e.g., point cloud 200, discussed above).
In this calculation, n is the number of points in the set of points falling within the region (R), and d(pi) is the distance from each point in the set of points to the projection plane (e.g., plane 192, discussed above).
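The calculation referenced above is not reproduced here; one natural reading of these definitions (a set of n points and a per-point distance d(pi)) is an average point-to-plane distance used to score how well a candidate plane fits the selected points, which could be sketched in Python as follows (this reading is an assumption and is not intended to be limiting):

import numpy as np

def average_distance(points, plane_point, plane_normal):
    """Assumed scoring of a candidate plane: the mean of the distances d(p_i) from the
    n selected points to the plane (given by a point and a unit normal); lower is better."""
    distances = np.abs((points - plane_point) @ plane_normal)   # d(p_i) for each point
    return distances.mean()                                     # (1/n) * sum of d(p_i)

In step 414, discussed below, the system then determines whether the point of view is a vertical orthometric point of view. If a positive determination is made, the system generates a transformation matrix (T) given by the following parameters, where p denotes a point on the face (F) and F′ denotes the corresponding best fitting plane: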
Let z be p.z;
Let L be the vertical line passing through point p;
Let i be the intersection between line L and plane F′;
Let z′ be i.z;
Let s=slope(F′)/slope(F);
Let T1=CreateAffineTransformation(Tx=0, Ty=0, Tz=z′, S=1, Sz=1);
Let T2=CreateAffineTransformation(Tx=0, Ty=0, Tz=0, S=1, Sz=s);
Let T3=CreateAffineTransformation(Tx=0, Ty=0, Tz=−z, S=1, Sz=1); and
T=T1×T2×T3.
In step 418, the system applies the transformation matrix (T) to the 3D model (M) to generate a new 3D model (M′), given by the equation: M′=M×T.
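By way of a non-limiting illustration only, the vertical orthometric case above could be sketched in Python as follows, reusing the create_affine_transformation sketch given earlier. Two items are assumptions rather than part of the present disclosure: slope() is taken to be the tangent of a plane's tilt from horizontal (computed from its unit normal), and, because the sketch uses the row-vector convention M′ = M @ T, the three component matrices are multiplied so that the translation by −z is applied first:

import numpy as np

def plane_slope(normal):
    """Assumed slope(): tangent of the plane's tilt from horizontal, from its unit normal."""
    nx, ny, nz = normal
    return np.hypot(nx, ny) / abs(nz)

def vertical_orthometric_transform(p, face_normal, fit_point, fit_normal):
    """Sketch of the vertical orthometric case: p is a point on face F, face_normal is the
    unit normal of F, and (fit_point, fit_normal) describe the best fitting plane F'.
    Assumes F is not horizontal and F' is not vertical."""
    z = p[2]                                                     # Let z be p.z
    # Intersection i of the vertical line L through p with plane F'; z' is i.z
    z_prime = fit_point[2] - (fit_normal[0] * (p[0] - fit_point[0]) +
                              fit_normal[1] * (p[1] - fit_point[1])) / fit_normal[2]
    s = plane_slope(fit_normal) / plane_slope(face_normal)       # s = slope(F') / slope(F)
    T1 = create_affine_transformation(0, 0, z_prime, 1, 1)
    T2 = create_affine_transformation(0, 0, 0, 1, s)
    T3 = create_affine_transformation(0, 0, -z, 1, 1)
    # With row vectors (M' = M @ T), the leftmost factor is applied first, so the
    # translation by -z (T3) is applied first, then the Z scale (T2), then T1.
    return T3 @ T2 @ T1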
If a negative determination is made in step 414, the system proceeds to step 420 and generates a transformation matrix (T), given by the following parameters (e.g., as discussed above):
Let o be the point of view;
Let p be the center point of F;
Let L be the line passing through o and p;
Let i be the intersection of line L with plane F′;
Let F″ be a plane with the same normal as F passing through i;
Let v be another point from F;
Let L′ be the line passing through o and v;
Let v′ be the intersection of line L′ with plane F″;
Let s=length(v′−o)/length(v−o);
Let T1=CreateAffineTransformation(Tx=v′.x, Ty=v′.y, Tz=v′.z, S=1, Sz=1);
Let T2=CreateAffineTransformation(Tx=0, Ty=0, Tz=0, S=s, Sz=1);
Let T3=CreateAffineTransformation(Tx=−v.x, Ty=−v.y, Tz=−v.z, S=1, Sz=1); and
Let T=T1×T2×T3.
In step 422, the system applies the transformation matrix (T) to the 3D model (M) to generate a new 3D model (M′), given by the equation: M′=M×T. The process 400 then ends.
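By way of a non-limiting illustration only, the perspective case of process 400 could be sketched in Python as follows, again reusing the create_affine_transformation sketch given earlier and composing the matrices under the same row-vector convention (so that the translation by −v is applied first); the helper for intersecting a line with a plane is an illustrative assumption:

import numpy as np

def intersect_line_with_plane(line_origin, line_target, plane_point, plane_normal):
    """Point at which the line through line_origin and line_target meets the plane
    given by plane_point and unit normal plane_normal."""
    d = line_target - line_origin
    t = ((plane_point - line_origin) @ plane_normal) / (d @ plane_normal)
    return line_origin + t * d

def perspective_transform(o, p, v, face_normal, fit_point, fit_normal):
    """Sketch of the perspective case: o is the point of view, p is the center point of
    face F, v is another point of F, face_normal is the unit normal of F, and
    (fit_point, fit_normal) describe the best fitting plane F'."""
    i = intersect_line_with_plane(o, p, fit_point, fit_normal)   # line L through o and p meets F'
    # F'' is the plane with the same normal as F passing through i
    v_prime = intersect_line_with_plane(o, v, i, face_normal)    # line L' through o and v meets F''
    s = np.linalg.norm(v_prime - o) / np.linalg.norm(v - o)      # s = length(v' - o) / length(v - o)
    T1 = create_affine_transformation(v_prime[0], v_prime[1], v_prime[2], 1, 1)
    T2 = create_affine_transformation(0, 0, 0, s, 1)
    T3 = create_affine_transformation(-v[0], -v[1], -v[2], 1, 1)
    # With row vectors (M' = M @ T), the leftmost factor is applied first: translate by -v,
    # scale uniformly by s, then translate by v'.
    return T3 @ T2 @ T1

As composed above, the net effect of this sketch is a uniform scaling about the point of view o, which preserves the alignment seen from the original point of view while moving the selected face onto the plane F″.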
Having thus described the system and method in detail, it is to be understood that the foregoing description is not intended to limit the spirit or scope thereof. It will be understood that the embodiments of the present disclosure described herein are merely exemplary and that a person skilled in the art may make variations and modifications without departing from the spirit and scope of the disclosure. All such variations and modifications, including those discussed above, are intended to be included within the scope of the disclosure.
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/135,004 filed on Jan. 8, 2021, the entire disclosure of which is hereby expressly incorporated by reference.