The present disclosure relates generally to the field of computer modeling of structures. More particularly, the present disclosure relates to computer vision systems and methods for supplying missing point data in point clouds derived from stereoscopic image pairs.
Accurate and rapid identification and depiction of objects from digital images (e.g., aerial images, satellite images, etc.) is increasingly important for a variety of applications. For example, information related to various features of buildings, such as roofs, walls, doors, etc., is often used by construction professionals to specify materials and associated costs for newly-constructed buildings, as well as for replacing and upgrading existing structures. Further, in the insurance industry, accurate information about structures may be used to determine the proper costs for insuring buildings/structures. Still further, government entities can use information about the known objects in a specified area for planning projects such as zoning, construction, parks and recreation, housing projects, etc.
Various software systems have been implemented to process aerial images to generate 3D models of structures present in the aerial images. However, these systems have drawbacks, such as missing point cloud data and an inability to accurately depict elevation, detect internal line segments, or segment the models sufficiently for accurate cost estimation. This may result in an inaccurate or incomplete 3D model of the structure. As such, the ability to generate an accurate and complete 3D model from 2D images is a powerful tool.
Thus, what would be desirable is a system that automatically and efficiently processes digital images, regardless of the source, to generate a model of a 3D structure present in the digital images. Accordingly, the computer vision systems and methods disclosed herein solve these and other needs.
The present disclosure relates to computer vision systems and methods for supplying missing point data in point clouds derived from stereoscopic image pairs. The system retrieves at least one stereoscopic image pair from memory based on a received geospatial region of interest, and processes the stereoscopic image pair to generate a disparity map. The system then processes the disparity map to generate a depth map, and processes the depth map to generate a point cloud that is free of missing point data. Finally, the point cloud is stored for future use.
The foregoing features of the invention will be apparent from the following Detailed Description of the Invention, taken in connection with the accompanying drawings, in which:
The present disclosure relates to computer vision systems and methods for supplying missing point data in point clouds derived from stereoscopic image pairs, as described in detail below in connection with
By way of background,
PBTerrain(PA)=ProjectionOntoImageB(PA, TerrainZ) Equation 1
In step 52, the system determines each pixel of image B (PB) that corresponds to the pixel PA projected onto image B, denoted by PBCandidates(PA). In particular, the system determines the set of pixels PB that form an epipolar line via Equation 2 below:
PBCandidates(PA)=set of pixels that form the epipolar line Equation 2
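By way of illustration only, a minimal sketch of the candidate enumeration of Equation 2 is shown below. It assumes that a 3×3 fundamental matrix F relating image A to image B is available (for example, derived from the camera parameters) rather than using the terrain-based projection of Equation 1, and the function name, sampling step, and use of OpenCV's computeCorrespondEpilines are illustrative choices, not requirements of the present disclosure:

```python
# Illustrative sketch only: enumerate candidate pixels of image B lying on the
# epipolar line associated with pixel PA of image A (cf. Equation 2). Assumes a
# known 3x3 fundamental matrix F; the terrain projection of Equation 1 is not
# reproduced here.
import numpy as np
import cv2

def pb_candidates(pa, F, image_b_shape, step=1.0):
    """Return pixels (x, y) of image B on the epipolar line of pixel PA."""
    pa_arr = np.array([[pa]], dtype=np.float32)                   # shape (1, 1, 2)
    a, b, c = cv2.computeCorrespondEpilines(pa_arr, 1, F).reshape(3)
    h, w = image_b_shape[:2]
    if abs(b) < 1e-9:                                             # vertical line: x is constant
        x0 = -c / a
        return [(float(x0), float(y)) for y in np.arange(0, h, step) if 0 <= x0 < w]
    candidates = []
    for x in np.arange(0, w, step):                               # walk the line a*x + b*y + c = 0
        y = -(a * x + c) / b
        if 0 <= y < h:
            candidates.append((float(x), float(y)))
    return candidates
```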
In step 54, the system determines a pixel matching confidence value, denoted by PixelMatchingConfidence(PA, PB), using at least one pixel matching algorithm for each pixel of image B corresponding to the pixel PA projected onto image B (PBCandidates(PA)) according to Equation 3 below:
PixelMatchingConfidence(PA, PB)=someFunctionA(PA, PB) Equation 3
It should be understood that the pixel matching confidence value is a numerical value that denotes a similarity factor value between a region near the pixel PA of image A and a region near the pixel PB of image B. In step 56, the system determines a best candidate pixel of image B corresponding to the pixel PA projected onto image B, denoted by BestPixelMatchingInB(PA), that maximizes the pixel matching confidence value via Equation 4 below:
BestPixelMatchingInB(PA)=PB Equation 4
where PB is the pixel of PBCandidates(PA) for which PixelMatchingConfidence(PA, PB) is a maximum.
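For illustration, one possible choice of someFunctionA in Equation 3 is a zero-mean normalized cross-correlation of small windows centered on PA and PB, as sketched below for grayscale images stored as NumPy arrays. The window size, the fallback value of -1.0, and the function names are illustrative assumptions; the present disclosure does not mandate any particular pixel matching algorithm:

```python
# Illustrative sketch only: normalized cross-correlation as one possible pixel
# matching confidence (Equation 3) and the arg-max selection of the best
# candidate pixel in image B (Equation 4).
import numpy as np

def pixel_matching_confidence(image_a, image_b, pa, pb, half_window=5):
    """Similarity of the region near PA in image A and the region near PB in image B."""
    ax, ay = int(round(pa[0])), int(round(pa[1]))
    bx, by = int(round(pb[0])), int(round(pb[1]))
    wa = image_a[ay - half_window:ay + half_window + 1,
                 ax - half_window:ax + half_window + 1].astype(np.float64)
    wb = image_b[by - half_window:by + half_window + 1,
                 bx - half_window:bx + half_window + 1].astype(np.float64)
    if wa.shape != wb.shape or wa.size == 0:
        return -1.0                                    # a window fell outside an image
    wa, wb = wa - wa.mean(), wb - wb.mean()
    denom = np.sqrt((wa ** 2).sum() * (wb ** 2).sum())
    return float((wa * wb).sum() / denom) if denom > 0 else -1.0

def best_pixel_matching_in_b(image_a, image_b, pa, candidates):
    """Candidate pixel PB of image B that maximizes the matching confidence."""
    return max(candidates,
               key=lambda pb: pixel_matching_confidence(image_a, image_b, pa, pb))
```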
In step 58, the system determines whether the maximum pixel matching confidence value of the best candidate pixel of image B is greater than a threshold. If the maximum pixel matching confidence value of the best candidate pixel of image B is greater than the threshold, then the system determines a disparity map value at the pixel PA as a distance between the best candidate pixel of image B and the pixel PA projected onto image B according to Equation 5 below:
If PixelMatchingConfidence(PA,BestPixelMatchingInB(PA))>threshold: DisparityMap(PA)=distance between BestPixelMatchingInB(PA) and PBTerrain(PA) Equation 5
Alternatively, if the maximum pixel matching confidence value of the best candidate pixel of image B is less than the threshold, then the system determines that the disparity map value at the pixel PA is null. It should be understood that null is a value different from zero. It should also be understood that if the maximum pixel matching confidence value of the best candidate pixel of image B is less than the threshold, then the system discards all matching point pairs between the image A and the image B as these point pairs can yield an incorrect disparity map value. Discarding these point pairs can result in missing point data (e.g., holes) in the disparity map. Accordingly and as described in further detail below, the system of the present disclosure addresses the case in which the disparity map value at the pixel PA is null by supplying missing point data.
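For illustration, step 58 and Equation 5 can be sketched as follows, reusing the hypothetical helpers from the preceding sketches; the threshold value of 0.7 and the use of None to represent a null disparity value are illustrative assumptions:

```python
# Illustrative sketch only: assign a disparity value at pixel PA when a confident
# match exists, and a null value (None) otherwise, leaving a hole to be supplied later.
import numpy as np

def disparity_at(image_a, image_b, pa, pb_terrain, candidates, threshold=0.7):
    """Disparity map value at PA, or None when no sufficiently confident match exists."""
    best_pb = best_pixel_matching_in_b(image_a, image_b, pa, candidates)
    confidence = pixel_matching_confidence(image_a, image_b, pa, best_pb)
    if confidence > threshold:
        return float(np.hypot(best_pb[0] - pb_terrain[0],      # distance between the best
                              best_pb[1] - pb_terrain[1]))      # match and PBTerrain(PA)
    return None                                                 # null: matching pair discarded
```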
Returning to
DepthMap(PA)=someFunctionB(PA, DisparityMap(PA), Image A camera intrinsic parameters) Equation 6
It should be understood that Equation 6 requires image A camera intrinsic parameters including, but not limited to, focal distance, pixel size and distortion parameters. It should also be understood that the system can only determine the depth map value at the pixel PA if the disparity map value at the pixel PA is not null.
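For illustration, in the common case of a rectified stereoscopic image pair, one possible form of someFunctionB in Equation 6 is the classical relation depth = focal length × baseline / disparity, as sketched below. The present disclosure leaves someFunctionB open (and measures disparity relative to the terrain projection), so the parameters focal_length_px and baseline_m are assumptions made only for this example:

```python
# Illustrative sketch only: one conventional disparity-to-depth conversion for a
# rectified stereo pair; not necessarily the form of someFunctionB used herein.
def depth_at(disparity_value, focal_length_px, baseline_m):
    """Depth at a pixel, in the same units as the baseline; None where disparity is null."""
    if disparity_value is None or disparity_value == 0:
        return None                       # depth is undefined where the disparity is null
    return focal_length_px * baseline_m / disparity_value
```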
Lastly, in step 16, the system generates a point cloud. In particular, for each pixel PA of the depth map, the system determines a real three-dimensional (3D) geographic coordinate, denoted by RealXYZ(PA), according to Equation 7 below:
RealXYZ(PAx, PAy)=someFunctionC(PA, DepthMap(PA), Image A camera extrinsic parameters) Equation 7
It should be understood that Equation 7 requires the pixel PA, the DepthMap(PA), and image A camera extrinsic parameters such as the camera projection center AO and at least one camera positional angle (e.g., omega, phi, and kappa). It should also be understood that the system can only determine the real 3D geographic coordinate for a pixel PA of the depth map if the disparity map value at the pixel PA is not null. Accordingly, the aforementioned processing steps of
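For illustration, one standard realization of someFunctionC in Equation 7 back-projects the pixel through the camera model, as sketched below. The intrinsic matrix K, the rotation matrix R built from the omega, phi and kappa angles, and the projection center are assumed inputs, and the present disclosure does not limit someFunctionC to this form:

```python
# Illustrative sketch only: back-project pixel PA at a known depth into a real 3D
# geographic coordinate using assumed intrinsic (K) and extrinsic (R, center) parameters.
import numpy as np

def real_xyz(pa, depth, K, R, projection_center):
    """World coordinate of pixel PA at the given depth, or None where depth is null."""
    if depth is None:
        return None
    ray_cam = np.linalg.inv(K) @ np.array([pa[0], pa[1], 1.0])   # viewing ray, camera frame
    point_cam = ray_cam / ray_cam[2] * depth                     # scale so Z equals the depth
    return np.asarray(projection_center) + R.T @ point_cam       # rotate into the world frame
```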
As mentioned above, when the disparity map value at the pixel PA is null (e.g., if the maximum pixel matching confidence value of the best candidate pixel of image B is less than a pixel matching confidence threshold), then the system discards all matching point pairs between the image A and the image B as these point pairs can yield an incorrect disparity map value. Discarding these point pairs can result in missing point data (e.g., holes) in the disparity map such that the point cloud generated therefrom is incomplete (e.g., the point cloud is sparse in some areas). Accordingly and as described in further detail below, the system of the present disclosure addresses the case in which the disparity map value at the pixel PA is null by supplying the disparity map with missing point data such that the point cloud generated therefrom is complete.
The image database 104 could include digital images and/or digital image datasets comprising aerial nadir and/or oblique images, unmanned aerial vehicle images or satellite images, etc. Further, the datasets could include, but are not limited to, images of rural, urban, residential and commercial areas. The image database 104 could store one or more 3D representations of an imaged location (including objects and/or structures at the location), such as 3D point clouds, LiDAR files, etc., and the system 100 could operate with such 3D representations. As such, by the terms “image” and “imagery” as used herein, it is meant not only optical imagery (including aerial and satellite imagery), but also 3D imagery and computer-generated imagery, including, but not limited to, LiDAR, point clouds, 3D images, etc.
The system 100 includes computer vision system code 106 (i.e., non-transitory, computer-readable instructions) stored on a computer-readable medium and executable by a hardware processor or one or more computer systems. The code 106 could include various custom-written software modules that carry out the steps/processes discussed herein, and could include, but is not limited to, a disparity map generator 108a, a depth map generator 108b, and a point cloud generator 108c. The code 106 could be programmed using any suitable programming language including, but not limited to, C, C++, C#, Java, Python or any other suitable language. Additionally, the code 106 could be distributed across multiple computer systems in communication with each other over a communications network, and/or stored and executed on a cloud computing platform and remotely accessed by a computer system in communication with the cloud platform. The code 106 could communicate with the image database 104, which could be stored on the same computer system as the code 106, or on one or more other computer systems in communication with the code 106.
Still further, the system 100 could be embodied as a customized hardware component such as a field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), embedded system, or other customized hardware component without departing from the spirit or scope of the present disclosure. It should be understood that
The geospatial ROI can also be represented as a polygon bounded by latitude and longitude coordinates. In a first example, the bound can be a rectangle or any other shape centered on a postal address. In a second example, the bound can be determined from survey data of property parcel boundaries. In a third example, the bound can be determined from a selection of the user (e.g., in a geospatial mapping interface). Those skilled in the art would understand that other methods can be used to determine the bound of the polygon.
The ROI may be represented in any computer format, such as, for example, well-known text (WKT) data, TeX data, HTML data, XML data, etc. For example, a WKT polygon can comprise one or more computed independent world areas based on the detected structure in the parcel. After the user inputs the geospatial ROI, a stereoscopic image pair associated with the geospatial ROI is obtained from the image database 104. As mentioned above, the images can be digital images such as aerial images, satellite images, etc. However, those skilled in the art would understand that any type of image captured by any type of image capture source can be used. For example, the aerial images can be captured by image capture sources including, but not limited to, a plane, a helicopter, a paraglider, or an unmanned aerial vehicle. In addition, the images can be ground images captured by image capture sources including, but not limited to, a smartphone, a tablet or a digital camera. It should be understood that multiple images can overlap all or a portion of the geospatial ROI.
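For illustration, a geospatial ROI expressed as a WKT polygon might be parsed as follows; the coordinates are fictitious and the use of the Shapely library is an illustrative choice, not a requirement of the present disclosure:

```python
# Illustrative example only: a WKT polygon bounding a region of interest.
from shapely import wkt

roi = wkt.loads(
    "POLYGON((-97.7431 30.2672, -97.7421 30.2672, "
    "-97.7421 30.2662, -97.7431 30.2662, -97.7431 30.2672))"
)
print(roi.bounds)   # (min_lon, min_lat, max_lon, max_lat) of the region of interest
```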
In step 144, the system 100 determines an overlap region R between the image A and the image B. Then, in step 146, the system 100 generates a disparity map by iterating over pixels of image A (PA) within the overlap region R where a pixel PA is denoted by (PAx, PAy). In step 148, the system 100 identifies a pixel PA in the overlap region R and, in step 150, the system 100 determines whether the disparity map value at the pixel PA is null. If the system 100 determines that the disparity map value at the pixel PA is not null, then the process proceeds to step 152. In step 152, the system 100 assigns and stores interpolation confidence data for the pixel PA denoted by InterpolationConfidence(PA). In particular, the system 100 assigns a specific value to the pixel PA indicating that this value is not tentative but instead extracted from a pixel match (e.g., MAX) according to Equation 8 below:
InterpolationConfidence(PAx, PAy)=MAX Equation 8
The process then proceeds to step 156.
Alternatively, if the system 100 determines that the disparity map value at the pixel PA is null, then the process proceeds to step 154. In step 154, the system 100 determines and stores missing disparity map and interpolation confidence values for the pixel PA. In particular, the system 100 determines a tentative disparity map value for the pixel PA when the maximum pixel matching confidence value of the best candidate pixel of image B is less than the pixel matching confidence threshold such that the pixel PA can be assigned an interpolation confidence value. It should be understood that the tentative disparity map value can be utilized optionally and can be conditioned on the pixel matching confidence value in successive processes that can operate on the point cloud. The process then proceeds to step 156. In step 156, the system 100 determines whether additional pixels are present in the overlap region R. If the system 100 determines that additional pixels are present in the overlap region R, then the process returns to step 148. Alternatively, if the system 100 determines that additional pixels are not present in the overlap region R, then the process ends.
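For illustration, steps 148 through 156 can be sketched as a single pass over the overlap region, assuming the disparity map is stored as a two-dimensional NumPy array in which null values are represented by NaN; the constant MAX_CONFIDENCE and the estimate_missing helper (a placeholder for the interpolation sketched after the next paragraph) are illustrative assumptions:

```python
# Illustrative sketch only: assign MAX interpolation confidence where a pixel match
# exists (step 152) and supply tentative disparity and confidence values where the
# disparity is null (step 154).
import numpy as np

MAX_CONFIDENCE = 1.0   # illustrative value for the MAX of Equation 8

def fill_disparity_holes(disparity_map, estimate_missing):
    """Return (disparity, interpolation_confidence) arrays with null values supplied."""
    filled = disparity_map.copy()
    confidence = np.full(disparity_map.shape, MAX_CONFIDENCE)
    for (y, x), value in np.ndenumerate(disparity_map):
        if np.isnan(value):                                # step 150: null disparity value
            filled[y, x], confidence[y, x] = estimate_missing(disparity_map, x, y)
    return filled, confidence
```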
It should be understood that the algorithm can determine the disparity map and interpolation confidence values for the pixel PA based on other pixels proximate to the pixel PA having different weight factors and can consider other information including, but not limited to, DisparityMap(P), DepthMap(P), RGB(P) and RealXYZ(P). It should also be understood that the algorithm can determine the disparity map value for the pixel PA by utilizing bicubic interpolation or any other algorithm that estimates a point numerical value based on proximate pixel information having different weight factors including, but not limited to, algorithms based on heuristics, computer vision and machine learning. Additionally, it should be understood that the interpolation confidence value is a fitness function and can be determined by any other function including, but not limited to, functions based on heuristics, computer vision and machine learning.
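For illustration, one way to realize such an interpolation is to fit the known disparity values with cubic interpolation and to derive a simple distance-based confidence, as sketched below using SciPy. The present disclosure equally contemplates heuristic, computer vision and machine learning estimators, and the confidence function shown is only an illustrative fitness function:

```python
# Illustrative sketch only: supply missing (NaN) disparity values by interpolating
# over the known values, with a nearest-neighbor fallback outside the convex hull,
# and an illustrative confidence that decays with distance to the nearest known pixel.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import distance_transform_edt

def interpolate_missing_disparity(disparity_map):
    """Return (filled disparity map, per-pixel interpolation confidence)."""
    h, w = disparity_map.shape
    known = ~np.isnan(disparity_map)
    ys, xs = np.nonzero(known)
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    filled = griddata((ys, xs), disparity_map[known], (grid_y, grid_x), method='cubic')
    nearest = griddata((ys, xs), disparity_map[known], (grid_y, grid_x), method='nearest')
    filled = np.where(np.isnan(filled), nearest, filled)
    distance = distance_transform_edt(~known)              # distance to nearest known pixel
    confidence = 1.0 / (1.0 + distance)                    # known pixels keep confidence 1.0
    return filled, confidence
```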
Having thus described the system and method in detail, it is to be understood that the foregoing description is not intended to limit the spirit or scope thereof. It will be understood that the embodiments of the present disclosure described herein are merely exemplary and that a person skilled in the art can make any variations and modification without departing from the spirit and scope of the disclosure. All such variations and modifications, including those discussed above, are intended to be included within the scope of the disclosure. What is desired to be protected by Letters Patent is set forth in the following claims.
The present application claims the priority of U.S. Provisional Application Ser. No. 63/151,392 filed on Feb. 19, 2021, the entire disclosure of which is expressly incorporated herein by reference.