This disclosure is related to computer-implemented detection of physical layout elements of an architectural structure. More particularly, the embodiments disclosed herein are directed at estimating a first floor height of an architectural structure for location-based applications and analyses.
First floor elevation of an architectural structure is important for the insurance industry to evaluate the risk of property damage in the event of flooding. For example, the lower the first floor elevation of a building relative to the ground level, the higher the risk of flooding. The potential cost of damages in a flood event increases when flood waters reach the occupied levels of a home. Flooding of the first floor not only causes damage to the structure itself, but also to the possessions on the first floor. The value of these assets, in addition to the cost of repairs to the structure, is factored into the insurance assessment of flood risk. The conventional standard is to have a land surveyor physically establish the first floor elevation. The surveyor assesses flood risk by physically measuring the height of the first floor from the ground level. However, this is cumbersome, time-consuming, prone to human error, and expensive. This becomes even more challenging when the number of buildings (whose flood risks are to be assessed) in a geographical area is in the millions or hundreds of millions.
Embodiments of the present disclosure are directed at computationally generating an estimate of the first floor elevation (FFE) of a building using a Digital Terrain Model (DTM) and an image (e.g., showing the front) of the building. First, machine learning (ML) algorithms are used to segment the image of the building to identify doorways, stairs, garage doors, building extents, and other physical layout elements. Information relating to the physical layout elements is used to generate an estimate of the height of the first floor of the building above ground. The estimated height is added to a DTM height at the same location to provide an estimate of the FFE. One patentable benefit of the disclosed technology is that the estimate of the first floor height generated using the techniques disclosed herein corresponds to a true, first-floor height of the building. For example, the estimate of the first floor height of a building can be applied in real-world use cases as a substitute for the true, first-floor height of the building. The disclosed embodiments can be applied in estimating the first floor height of any kind of building, for example, single-family homes, townhomes, high-rise buildings, stores, malls, shops, movie theaters, or any other architectural structure.
The following definitions are provided for ease in understanding the concepts discussed herein.
Datum: Reference surface defining the mean sea level, e.g., zero elevation.
Digital Terrain Model (DTM): Height of terrain above the datum (coincident with the building grade).
First Floor Height (FFH): Height of the first floor of a structure above the terrain (DTM height).
First Floor Elevation (FFE): Height of the first floor above the datum. Sum of the DTM height and FFH.
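The relationship between these quantities can be summarized with a minimal sketch; the function and variable names below are illustrative and not part of the disclosure:

```python
def first_floor_elevation(dtm_height: float, first_floor_height: float) -> float:
    """Return the First Floor Elevation (FFE) above the datum.

    dtm_height: terrain height above the datum at the building location (DTM).
    first_floor_height: height of the first floor above the terrain (FFH).
    """
    return dtm_height + first_floor_height

# Example: terrain 12.3 m above the datum and a first floor 0.9 m above grade
# give an FFE of 13.2 m above the datum.
ffe = first_floor_elevation(12.3, 0.9)
```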
2.1 Digital Terrain Model (DTM)
In some embodiments, a Digital Terrain Model can be used to determine the ground elevation at the location of a structure (e.g., the terrain height above the datum). The DTM can be generated using an elevation dataset (prepared using data captured by remote aerial sensors) providing Digital Elevation Models (DEMs) and schedules for a geographical area corresponding to a location of the building. For example, the schedules can indicate lists of geographical locations of buildings in the fifty (50) states in the United States. Further, the lists of geographical building locations can be validated/verified, implying that the information about the geographical building locations is accurate. A few of the advantages of the disclosed technology are as follows.
In some embodiments, the principal elevation dataset built from the IFSAR (interferometric synthetic aperture radar) data collection is a Digital Surface Model (DSM), which includes trees, buildings, and other objects that represent the first surface encountered by the radar signals. From the DSM, a DTM can be created, a DTM being a representation of the bare earth (e.g., without trees, buildings, etc.).
2.2 Machine Learning
In some embodiments, First Floor Elevation (FFE) estimation employs machine learning tools. For example, deep neural networks with over 100 neural layers can be used in image classification because of their ability to develop a set of abstract relationships between data points. These relationships are established based on the activation pathways of nodes through each layer of the model. The learned features are then capable of identifying classes of interest on a new set of data. This ability to amalgamate information and recognize key patterns makes machine learning tools suitable for the disclosed technology. For example, images used as input in the present system include significant variations in content (e.g., design of the buildings) and structure (e.g., look angle, zoom level, etc.).
Deep convolutional neural networks (CNNs) are a specific class of machine learning algorithms which are implemented for image analysis and/or image segmentation (e.g., as disclosed herein), or, simply, pixel-based classification of an image. These networks are ideal for image-based applications because convolutional layers preserve the spatial relationships between pixels. The actual identification of specific classes of pixels is done in the final layer using a non-convolutional neural layer. Because the convolutional layers learn generic, reusable features, only the final layers need to be retrained for a new task. This is advantageous because abstract features learned by pre-trained models can be fine-tuned for a specific dataset of interest, considerably reducing processing time, computational power, and the amount of reference data (e.g., manually labeled example images) needed to train the model for the desired classes.
In some embodiments, the building recognition model is trained on a few dozen example images with manually defined features. The example images can be augmented using image manipulation techniques—flipping, rotating, adding noise, etc.—to build a dataset of several hundred images. These images can be used to fine-tune a pre-trained model for the present technology. In some embodiments, the training can be initialized using a pre-trained version of Microsoft's ResNet-101 model. During pre-training, the weights of the model are computed using the ImageNet dataset with 1.28 million images and 1,000 classes. Leveraging the ImageNet model can improve training speed and accuracy of the final model.
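As a rough illustration of this fine-tuning step, the sketch below uses PyTorch/torchvision. The disclosure references Matterport's Mask R-CNN with a ResNet-101 backbone, so the torchvision ResNet-50-based model, the class count, and the optimizer settings here are illustrative stand-ins rather than the actual implementation:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Background plus the five building-feature classes described in this disclosure.
NUM_CLASSES = 1 + 5  # background, doorway, window, stairs, garage door, building extent

# Start from a pretrained Mask R-CNN rather than training from scratch
# (the disclosure uses an ImageNet-pretrained ResNet-101; this is a stand-in).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head so it predicts the building-feature classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Replace the mask head for the same set of classes.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, NUM_CLASSES)

# Fine-tune on the augmented set of labeled facade images (flips, rotations,
# added noise, etc. applied in the dataset/dataloader, not shown here).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```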
The first stage of the disclosed process is to use an image segmentation tool (generally, a machine learning tool) to classify/segment features of the image of the building. Each feature extracted from the image is mapped to an instance of a class. For example, each individual car (e.g., an extracted feature) in an image of a street can be regarded as a separately recorded instance of the class ‘vehicle.’
In some implementations, an instance segmentation algorithm identifies one or more polygons around each instance of a class. As one example implementation, the present technology utilizes the instance segmentation algorithm named “Mask R-CNN” from Matterport and available on GitHub. Mask R-CNN is an extension of the object detection algorithm Faster R-CNN. Mask R-CNN adds a mask ‘head’—neural classification layer—which classifies pixels within every bounding box as ‘object’ or ‘not-object.’ The resulting output is a binary mask for each object instance in the image. Thus, Mask R-CNN detects individual objects in an image and draws a bounding box (polygon) around an object.
Not only does Mask R-CNN allow detection of precise locations of objects in an image, but additionally provides (as output) the number of different objects within a class. This is useful for objects, such as stairs, where multiple instances of a class are adjacent to each other in the image. Mask R-CNN detects and classifies each of these instances separately so that the number of stairs in a staircase is included in the output.
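A minimal inference sketch is shown below, assuming a torchvision-style Mask R-CNN; the pretrained model and the class map are placeholders for the fine-tuned building-feature model described above, and are included only to illustrate the per-instance boxes, labels, scores, and binary masks that the segmentation stage produces:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Hypothetical label map for a model fine-tuned on the building-feature classes.
CLASS_NAMES = {1: "doorway", 2: "window", 3: "stairs", 4: "garage_door", 5: "building_extent"}

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("facade.jpg").convert("RGB"))
with torch.no_grad():
    (prediction,) = model([image])  # one output dict per input image

# Each detected instance carries a bounding box, a class label, a confidence
# score, and a soft mask that is thresholded into a per-pixel binary mask.
for box, label, score, mask in zip(
    prediction["boxes"], prediction["labels"], prediction["scores"], prediction["masks"]
):
    if score < 0.5:
        continue
    binary_mask = mask[0] > 0.5  # mask has shape (1, H, W)
    print(CLASS_NAMES.get(int(label), "other"),
          [round(v, 1) for v in box.tolist()],
          int(binary_mask.sum()), "pixels")
```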
The present technology is not limited to employing the Mask R-CNN algorithm. For example, alternative implementations of Mask R-CNN (such as Detectron provided by FACEBOOK®) or an entirely different instance segmentation algorithm, such as DeepMask, can be used in the disclosed technology.
Furthermore, in alternate embodiments, image segmentation algorithms which classify pixels without differentiating individual objects can also be used with suitable modification. One modification can be a post-processing step to separate non-adjacent clusters of same-class pixels.
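One such post-processing step can be sketched as follows, assuming a NumPy/SciPy environment; it splits a single-class semantic mask into per-instance masks using connected-component labeling:

```python
import numpy as np
from scipy import ndimage

def split_into_instances(class_mask: np.ndarray) -> list[np.ndarray]:
    """Split one class's semantic mask into separate instance masks.

    class_mask: 2-D boolean array where True marks every pixel of a single
    class (e.g., all 'stairs' pixels), with no instance information.
    """
    labeled, num_instances = ndimage.label(class_mask)  # connected components
    return [labeled == i for i in range(1, num_instances + 1)]

# Each returned mask covers one non-adjacent cluster of same-class pixels, so
# separate staircases in the image become separate instances.
```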
3.1 Feature Identification
Advantageously, the present technology has broad applicability because there is no limitation on the machine learning algorithm used for image segmentation. Regardless of which machine learning algorithm is used, in some embodiments, a machine learning algorithm for image segmentation can identify several key features (e.g., physical layout elements) of a building. As used herein, the terms "features" and "physical layout elements" refer to a special case of generic "objects": characteristics of a building that are of real-world interest for purposes of estimating the first floor height of the building. For example, the five key physical layout elements or features can be doorways, windows, stairs, garage doors, and building extents defining the outer edges of the building visible in the image. (In alternate embodiments, more than five features or physical layout elements can be identified from an image. For example, additional features can include double-wide doorways as a class separate from single doors, retaining walls along the sidewalk, and other features.)
Advantageously, the example features shown in
In some images, windows can indicate presence of a basement. Windows are included as a class because they can be similar in appearance (e.g., similar shape, frames, etc.) to doors. One advantage of including windows as a separate class (rather than part of the class representing the background of non-object pixels) is that it enables the model to learn properties that differentiate doors and windows. This reduces the likelihood of falsely classifying a window as a door.
Garage doors and stairs are secondary indicators of the residential levels of a home. If a doorway is not visible in the image, the garage doors and/or stairs can alternatively provide the contextual information to infer the location of a doorway or the first floor position.
If none of the above features are present, then it can be inferred that there are not enough contextual clues in the image to draw reliable conclusions.
The building extents (e.g., boundaries of a building) are collected in order to provide context for the location of other features relative to the surrounding environment. The lower edge of the building can be useful as a point of reference relative to the positions of the other features. In some embodiments, contextual information about a building can be gleaned from sources other than the building, such as, for example, the state/geographical territory where the building is located, information related to one or more other buildings in the same location as the building, etc.
At the end of the training phase, the image to be segmented is fed into the image segmentation model (a/k/a a machine learning algorithm) as input. Accordingly, the image segmentation tool predicts the structural layout elements of the building as output. Examples of the structural layout elements are doorways, windows, stairs, garage doors, and building extents.
In the next stage of the process, the structural layout features (output by the ML tool) are used to determine at least two locations in the segmented image: first floor position and grade position. The first floor position is the point of interest to determine first floor height and elevation. However, without the grade position, i.e., the point where the building meets the ground in the image, there is no point of reference to measure the first floor position.
4.1 First Floor Position
In some embodiments, the disclosed techniques can identify the first floor position at the bottom of the lowest doorway in the image, based on identifying a pixel associated with the first floor position and another pixel associated with the grade position. In some embodiments, the disclosed techniques set the first floor position in the image as the bottom row of pixels in the lowest valid doorway in the segmented image. In some embodiments, as a check to increase robustness, the disclosed techniques can verify that the ratio of doorway height to width (in the segmented image) falls within the range of standard external door height-to-width ratios. Features or physical layout elements that do not conform to the expected ratio may be misclassified. Thus, a doorway is determined to be valid if the ratio of doorway height to width falls within the range of standard external door height-to-width ratios.
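A sketch of this validity check is shown below; the numeric tolerance band around the standard door sizes is an assumption for illustration and is not specified by the disclosure:

```python
import numpy as np

# Standard exterior doors are about 80 inches tall and 30-36 inches wide, giving
# height-to-width ratios of roughly 80/36 ≈ 2.2 up to 80/30 ≈ 2.7. The tolerance
# band below is an illustrative assumption.
MIN_RATIO, MAX_RATIO = 1.9, 3.0

def is_valid_doorway(doorway_mask: np.ndarray) -> bool:
    """Return True if a detected doorway has a plausible exterior-door shape."""
    rows, cols = np.nonzero(doorway_mask)
    if rows.size == 0:
        return False
    height_px = rows.max() - rows.min() + 1
    width_px = cols.max() - cols.min() + 1
    return MIN_RATIO <= height_px / width_px <= MAX_RATIO
```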
If a doorway is not detected, the top of a staircase or the bottom of a garage door can be good potential indicators of the first floor position. In some embodiments, both features can be present in a segmented image. In some embodiments, the bottom of a garage door can be set as the first floor if the lowest point on the garage door is higher than the top stair. This can imply that the stairs may not lead directly to a door (possibly up a sloped lawn instead), since garages are not built above the first floor of a home. However, if the stairs are higher, then the garage door is likely either a basement garage or there is a front porch, requiring stairs to the doorway. In those embodiments, the top of the stairs can be considered as a choice for the first floor position.
At step 408, the process determines if stairs are detected as one of the features in the segmented image. If yes, the process moves to step 410. If no, the process moves to step 420. Both at step 410 and at step 420, the process determines if a garage door is detected as one of the features in the segmented image. At step 410, if the process determines that a garage door is detected as one of the features in the segmented image, then the process moves to step 412. At step 410, if the process determines that a garage door is not detected as one of the features in the segmented image, then the process estimates (step 414) the first floor height based on a pixel located at the top of the stairs in the segmented image. At step 420, if the process determines that a garage door is detected as one of the features in the segmented image, then the process moves to step 422 in which the process estimates the first floor elevation based on a pixel located at the bottom of the garage door in the segmented image. At step 420, if the process determines that a garage door is not detected as one of the features in the segmented image, then the process determines (step 424) that a first floor height cannot be estimated from the segmented image. This scenario arises when no doorways, no stairs, and no garage doors are detected in the segmented image.
At step 412, the process checks if a pixel located at the top of the stairs is lower than a pixel located at the bottom of the garage door. If yes, the process estimates (step 416) the first floor height based on a pixel located at the bottom of the garage door in the segmented image. The pixel at the bottom of the garage door can be the pixel used in decision block 412. If no, the process estimates (step 418) the first floor height based on a pixel located at the top of the stairs in the segmented image. The pixel at the top of the stairs can be the pixel used in decision block 412.
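The branching logic of steps 408-424, together with the doorway rule described above, can be summarized in the following sketch; it assumes image row indices increase downward, so a larger row index is physically lower:

```python
def first_floor_row(door_row=None, stairs_top_row=None, garage_bottom_row=None):
    """Pick the image row treated as the first floor position.

    Rows are pixel row indices; a larger index is physically lower in the image.
    Any argument may be None if that feature was not detected.
    """
    if door_row is not None:                    # lowest valid doorway wins
        return door_row
    if stairs_top_row is not None and garage_bottom_row is not None:
        # Step 412: if the top of the stairs is physically lower than the bottom
        # of the garage door, the garage bottom marks the first floor (step 416);
        # otherwise the top of the stairs does (step 418).
        if stairs_top_row > garage_bottom_row:
            return garage_bottom_row
        return stairs_top_row
    if stairs_top_row is not None:
        return stairs_top_row                   # step 414: stairs only
    if garage_bottom_row is not None:
        return garage_bottom_row                # step 422: garage door only
    return None                                 # step 424: cannot estimate
```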
4.2 Grade Position
The grade position is the point where the building meets ground level. While terrain corresponds to the ground surrounding a building, "grade" refers to the terrain immediately adjacent to a building. The ground elevation at the building is obtained from the DTM elevation (in the DTM derived from IFSAR data) of the geographical area corresponding to a location of the building.
In some embodiments, the grade position is determined in the image by the extents of the building. To determine the grade position, in accordance with disclosed embodiments, pixels above the first floor position in the building extents are discarded. If the top of a garage door is below the first floor position (indicating a basement garage), then pixel columns in line with the garage door are also discarded; in other words, entire columns of pixels aligned with the garage door are removed. This is to avoid biasing the grade level downwards by including basement pixels. In some embodiments, the median value of the lowest pixel row across all remaining columns is calculated and set as the grade position. The median value is used to reduce the impact of features like shrubs, which tend to bias the building extents too high, and physical layout elements like porch stairs, which bias the building extents too low. The median, in particular, is less sensitive to outliers than the mean value, which also reduces the risk of a biased position. In alternate embodiments, a suitable statistic (e.g., an arithmetic mean or a weighted average) different from the median value can be used.
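A sketch of this grade-position computation is shown below, assuming NumPy arrays and image rows that increase downward; the garage-column exclusion and the fallback when no pixels remain follow the description above:

```python
import numpy as np

def grade_row(extent_mask: np.ndarray, first_floor_row: int, garage_cols=None):
    """Estimate the grade (ground-contact) row from the building-extent mask.

    extent_mask: 2-D boolean array for the 'building extent' class.
    first_floor_row: row index of the first floor position (larger = lower).
    garage_cols: optional column indices aligned with a basement garage door,
    excluded so basement pixels do not bias the grade downward.
    """
    mask = extent_mask.copy()
    mask[:first_floor_row, :] = False           # discard pixels above the first floor
    if garage_cols is not None:
        mask[:, list(garage_cols)] = False      # discard columns over a basement garage

    lowest_rows = [
        np.nonzero(mask[:, col])[0].max()       # lowest building pixel in this column
        for col in range(mask.shape[1])
        if mask[:, col].any()
    ]
    if not lowest_rows:                         # nothing below the first floor
        return first_floor_row
    return int(np.median(lowest_rows))          # median resists shrubs and porch stairs
```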
In some embodiments, once the first floor position and the grade position are determined, the number of pixels between the two positions (pixel height differential) is computed as the pixel height of the first floor above the ground. The first floor position is detected/recognized as a first pixel and the grade position is detected/recognized as a second pixel, and the pixel height differential is computed as the number of intervening pixels (in the segmented image) spatially located between the first pixel and the second pixel. At this stage, the pixel height differential is expressed in pixels.
In some embodiments, a scale of the image is computed so that the first floor height (calculated in pixels) can be converted to real-world dimension units (e.g., inches, feet, meters, or other distance-based units of measurement). Thus, the final step of the process is to estimate or compute an image scale (e.g., a scale factor) which allows conversion of the first floor height expressed in pixels into real-world dimension units. Traditional photogrammetric techniques are very accurate at extracting real-world measurements from photos. However, photogrammetry requires images taken from a known position with calibrated cameras.
The disclosed first floor elevation (FFE) estimation process is designed to work with uncontrolled images, e.g., those collected using consumer-grade cameras which do not need to be calibrated. Further, the information about the environment surrounding a building with respect to a known position, as used in conventional photogrammetric techniques, may not be available. Advantageously, the methods disclosed herein for computing or estimating a scale factor are agnostic to the type of camera (e.g., camera-agnostic) and the type of image. That is, no special cameras are required for capturing images used in generating an estimate of the first floor elevation of a building.
Typically, doorways have standardized dimensions that can be used in determining height-to-width ratios. In embodiments disclosed herein, this property of doorways is used for computing image scale. External doorways for human entry in North America typically have a small range of standard sizes. For example, the height of a standard doorway is 80 inches and the width is generally 36 inches. As other examples, doorway widths of 30 and 32 inches also exist.
The first floor height estimated in step 708 is in pixels. In some embodiments, the process converts the first floor height from pixels to real-world dimension units based on computing a scaling ratio (or equivalently, a scaling factor) of the image of the building. In some embodiments, the scaling ratio can be computed using a size of a doorway (or, generally, a size of a physical layout element) in the segmented image of the building. In some embodiments, the scaling ratio can be estimated from the sizes of more than one physical layout element (e.g., elements with standardized real-world dimensions). In some embodiments, the process generates a DTM elevation of a geographical area surrounding (or corresponding to) a location of the building. The process adds the DTM elevation to the first floor height (in real-world dimension units) to compute a first floor elevation expressed in real-world dimension units. Advantageously, the estimated first floor height can be used for modeling flood risk, earthquake risk, or other types of risk arising from man-made, economic, or natural calamities. Alternatively, the estimated first floor height can also be used for city reconstruction, urban planning, design, mapping, and navigation purposes.
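A sketch of the pixel-to-elevation conversion is shown below, assuming the image scale is derived from a single detected doorway of standard 80-inch height; the function name and units are illustrative:

```python
STANDARD_DOOR_HEIGHT_IN = 80.0  # typical exterior doorway height, in inches

def first_floor_elevation_ft(ffh_pixels: float, door_height_pixels: float,
                             dtm_elevation_ft: float) -> float:
    """Convert a pixel-space first floor height into a first floor elevation.

    The image scale (inches per pixel) is derived from a detected doorway whose
    real-world height is assumed to be the standard 80 inches.
    """
    inches_per_pixel = STANDARD_DOOR_HEIGHT_IN / door_height_pixels
    ffh_feet = (ffh_pixels * inches_per_pixel) / 12.0
    return dtm_elevation_ft + ffh_feet

# Example: a doorway spanning 200 px gives a scale of 0.4 in/px; a first floor
# 90 px above grade is then 36 in (3 ft) high, and with a DTM elevation of
# 31.5 ft the estimated FFE is 34.5 ft above the datum.
print(first_floor_elevation_ft(90, 200, 31.5))
```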
Alternatively, if the process determines (step 906) that a garage door was not detected as a physical layout element, or the top of the garage door is not below (step 908) the FFH position, the process moves to step 912. At step 912, the process determines indices (e.g., row numbers) of the first non-zero row for each column of pixels in the segmented image. Step 912 is performed on the building extents polygon, which is one of the classes or features obtained from image segmentation. The first non-zero row corresponds to the lower edge of the polygon (and thus the bottom of the structure is detected). In some embodiments, the process scans each column of pixels for better accuracy in estimating the position of the base of the structure. At step 914, the process stores (e.g., in a list) the indices for each column of pixels. At step 916, the process determines whether the list is empty. If the list is empty, then the process determines that there are no pixels below the FFH. Accordingly, the process sets (step 922) the grade position to be the same as the position of the FFH. If the list, however, is non-empty, then the process uses a mathematical function to compute (step 918) a value based on the indices stored in the list. For example, the mathematical function can be a median, an arithmetic mean, or a weighted mean. The detected base of the structure may have natural variations due to obscuring features like shrubs. In use cases where the detected base is obscured by shrubs, taking the median value of the indices can reduce the impact of these variations. At step 920, the process sets the grade position to the value computed in step 918. The process terminates thereafter.
In this disclosure, a technique is proposed to leverage datasets and machine learning to computationally estimate the first floor height of a building. Such estimation can significantly reduce the cost and manpower required to carry out flood risk assessment of buildings. The result is an automated first floor height estimation process that is faster, machine-generated (without requiring human inputs), dimensionally accurate, and inexpensive. More generally, embodiments disclosed herein can be used for measuring physical features based on an image of an object, where physical or in-person measurements are not practical. Although the discussions herein are presented using examples of first floor height estimation, such discussions are merely for illustrative purposes. In other embodiments, this technology can be suitably used for measuring features present in digital images. Although the discussions herein are presented in terms of using the first floor height estimation techniques for determining flood risk, the embodiments disclosed herein have broader applicability and can be used for automatic feature identification using images, where human analysis of the images is impractical. For insurance-related use cases, this includes identification and measurement of physical features of buildings, roads, bridges, and other infrastructure. The disclosed embodiments can also be used for measuring dimensions/physical features of objects that change over time, such as inventories, pre-milled timber, or mine tailings.
Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, and executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read-Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media may include a non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
Some of the disclosed embodiments may be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation may include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules may be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that is known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols.
The foregoing description of embodiments has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit embodiments of the present invention(s) to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and their practical application to enable one skilled in the art to utilize the present invention(s) in various embodiments and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products.
This application claims the benefit of U.S. Provisional Application No. 62/913,631, filed on Oct. 10, 2019, which is hereby incorporated by reference in its entirety.