The following disclosure relates generally to techniques for automatically analyzing visual data of images acquired at a building to determine building information that includes building dimensions and for subsequently using such information in one or more automated manners, such as to determine scale information for building images using visible structural building objects of defined types, to use the image scale information to determine resulting building dimensions for walls and other elements visible in the images (e.g., for use with a floor plan generated from analysis of the images), and to provide navigational data for the building in accordance with the determined building information.
In various circumstances, such as architectural analysis, property inspection, real estate acquisition and development, general contracting, improvement cost estimation, etc., it may be desirable to know the interior of a house or other building without physically traveling to and entering the building. However, it can be difficult to effectively capture, represent and use such building interior information, including to identify buildings that satisfy criteria of interest, and to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to understand the layout and other details of the interior, including to control the display in user-selected manners). Also, even if a user is present at a building, it can be difficult to effectively navigate the building and determine information about the building that is not readily apparent. While a floor plan of a building may provide some information about layout and other details of a building interior, such use of floor plans has some drawbacks, including that floor plans can be difficult to construct and maintain, to accurately scale and populate with information about room interiors, to visualize and otherwise use, etc.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The present disclosure describes techniques for using computing devices to perform automated operations involving analyzing visual data of images acquired at a building to determine building information that includes building dimensions, and subsequently using the determined building information in one or more automated manners, such as to provide navigational data for the building in accordance with the building dimensions (e.g., for controlling navigation of mobile devices, such as autonomous vehicles, in the building). The automated determination of building dimensions and other building information may include analyzing visual data of building images to determine an estimated camera height of the camera(s) during capture of the images and other scale information for the images based on identified visible installed building objects of defined types, using the determined image scale information to further determine resulting building dimensions (e.g., lengths of and heights of walls visible in the images, such as to determine room sizes), and optionally associating the building dimension data with a floor plan generated from analysis of the images. Such a floor plan may, in at least some embodiments, be for an as-built multi-room building (e.g., a house, office building, etc.) and generated from or otherwise associated with panorama images or other images (e.g., rectilinear perspective images) acquired at acquisition locations in and around the building (e.g., without having or using information from any depth sensors or other distance-measuring devices about distances from an image's acquisition location to walls or other objects in the surrounding building).
The determined building information for the building may be further used in various manners, such as to improve automated navigation of the building, for display or other presentation on one or more client devices in corresponding GUIs (graphical user interfaces) to enable virtual navigation of the building, etc. Additional details are included below regarding automated determination and use of building information from analysis of building images, and some or all techniques described herein may, in at least some embodiments, be performed via automated operations of a Building Object-Based Scale Determination Manager (“BOBSDM”) system, as discussed further below.
As noted above, automated operations of a BOBSDM system may include analyzing visual data of images acquired at a building to determine building dimensions and other building information. In at least some embodiments, the automated analysis of the visual data of the images acquired at a building includes identifying visible structural objects or other installed objects of one or more defined types at the building that have standardized sizes or otherwise have expected sizes, such as doorways (e.g., with an expected height of 80 inches or 2.03 meters), and/or cabinets positioned on the floor (also referred to herein as ‘floor cabinets’) (e.g., with an expected height of 36 inches)—in other embodiments, other object types may be used that each have one or a limited quantity of standardized sizes, whether in addition to or instead of doorways and/or floor cabinets, such as one or more of the following: ovens, dishwashers, electrical plates, air outlets, fan blades, beds, televisions, electrical outlets, wall switch plates, etc. 
As one non-exclusive example, the four corners of a doorway may be identified in an image, the corresponding coordinates of those points within the image may be translated into real-world coordinates using the expected height of 80 inches along the sides of the doorway between pairs of corners, and an estimated height of the camera used to acquire the image may then be determined—for example, for a panorama image in straightened form (with vertical objects in the actual environment, such as the sides of a typical rectangular door frame or a typical border between 2 adjacent walls, being shown vertically in the image, such as in a single pixel column) and in an equirectangular format (e.g., with straight vertical object data remaining straight, and with straight horizontal data such as the top of a typical rectangular door frame or a border between a wall and a floor remaining straight at a horizontal midline of the image but being increasingly curved in the equirectangular projection image in a convex manner relative to the horizontal midline as the distance increases in the image from the horizontal midline), the determining of the estimated camera height h1 (in inches) may be performed using the following formula (1):
where x1 is the horizontal (left-and-right) position of the lower left corner of the doorway, x2 is the horizontal position of the lower right corner of the doorway, x3 is the horizontal position of the upper left corner of the doorway, x4 is the horizontal position of the upper right corner of the doorway, z1 is the vertical (up-and-down) position of the lower left corner of the doorway, z2 is the vertical position of the lower right corner of the doorway, z3 is the vertical position of the upper left corner of the doorway, and z4 is the vertical position of the upper right corner of the doorway. A similar estimated camera height determination may be performed in a simplified manner for images that are not in equirectangular format (e.g., perspective images in rectilinear format), with formula (1) adjusted to account for the lack of curvature for horizontal data in perspective images in rectilinear format. Such estimated camera heights may similarly be determined for additional identified doorway objects in the building image(s) (e.g., the same doorway as seen in a different image, a different doorway seen in the same image, a different doorway seen in a different image, etc.), such as for multiple building images captured for a building using the same constant actual camera height above the floor or other underlying surface (e.g., multiple images captured in an image acquisition session by one or more cameras using the same tripod at a fixed height or held by an operator user at a consistent height), and a doorway-based estimated camera height may be determined from one or more individual camera height estimations for the one or more identified doorways (e.g., an aggregated estimated camera height for multiple identified doorways, such as using a mean or other average of the multiple individual camera height estimations).
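The doorway-based estimation above can be sketched as follows. This is a minimal illustration of the underlying geometry rather than a literal transcription of formula (1), which operates on the image coordinates directly: the sketch assumes the corner coordinates have already been converted to elevation angles relative to the horizon, and the function name is hypothetical.

```python
import math

DOOR_HEIGHT_IN = 80.0  # expected doorway height (80 inches)

def camera_height_from_doorway_side(phi_bottom, phi_top):
    """Estimate camera height (inches) from one vertical side of a doorway.

    phi_bottom: elevation angle (radians) from the horizon down to the
        side's bottom corner; phi_top: elevation angle from the horizon up
        to the side's top corner.  In a straightened equirectangular
        panorama both corners lie in the same pixel column, so they share
        the same horizontal distance d from the camera, giving
        tan(phi_bottom) = h / d and tan(phi_top) = (80 - h) / d.
    """
    tb, tt = math.tan(phi_bottom), math.tan(phi_top)
    return DOOR_HEIGHT_IN * tb / (tb + tt)
```

Estimates from the left and right sides of each identified doorway, and from doorways in other images captured at the same camera height, can then be aggregated as described above.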
In addition, estimated camera heights may similarly be determined based on one or more identified floor cabinets. As one non-exclusive example, for a floor cabinet identified in an image in straightened format, and for one or some or all pixel columns in the image that include visual data of that floor cabinet, the cabinet bottom (e.g., intersection with the floor) and cabinet top (e.g., top of a countertop or other upper surface of the floor cabinet) are identified. The corresponding coordinates in such a pixel column for the rows of the image that show the cabinet bottom and top may then be translated into real-world coordinates using the expected height of 36 inches between those rows, and an estimated height of the camera used to acquire the image may then be determined for an image (e.g., a panoramic image) in a manner analogous to that discussed above with respect to formula (1) if the image is in equirectangular format, or in a manner adjusted for the lack of curvature of horizontal data for images in non-equirectangular formats. As one non-exclusive example for a straightened panorama image having visual data of a floor cabinet across a sequence of pixel columns, the determining of the estimated camera height h2 (in inches) for each such pixel column may be performed using the following formula (2):
h2 = 36 · tan(alpha1) / (tan(alpha1) − tan(alpha2))    (2)
where alpha1 is the angle between a line from the camera center to the cabinet top front edge for that pixel column and a line straight down from the camera center in the direction of gravitational force, and alpha2 is the angle between a line from the camera center to the cabinet bottom front edge for that pixel column and a line straight down from the camera center in the direction of gravitational force. Such estimated camera heights may similarly be determined for additional identified floor cabinet objects in the building image(s) (e.g., the same cabinet object as seen in a different image, a different cabinet object seen in the same image, a different cabinet object seen in a different image, etc.), such as for multiple building images captured for a building using the same constant actual camera height above the floor or other underlying surface (e.g., multiple images captured in an image acquisition session by one or more cameras using the same tripod at a fixed height or held by an operator user at a consistent height), and an aggregate cabinet-based estimated camera height may be determined from individual camera height estimations for the one or more identified floor cabinets (e.g., an aggregated estimated camera height based on multiple individual pixel-column camera height estimations for a single floor cabinet in a single panorama image and/or based on multiple individual pixel-column camera height estimations for multiple identified floor cabinets in one or more panorama images, such as using a mean or other average of the multiple individual camera height estimations).
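Under the angle definitions above, the cabinet's vertical front face gives both edges the same horizontal distance d from the camera, so tan(alpha1) = d / (h2 − 36) and tan(alpha2) = d / h2, from which the per-column estimate follows; per-column estimates can then be aggregated with a mean. The following is a minimal sketch with hypothetical function names:

```python
import math
from statistics import mean

CABINET_HEIGHT_IN = 36.0  # expected floor-cabinet height (36 inches)

def camera_height_from_cabinet_column(alpha1, alpha2):
    """Per-pixel-column camera-height estimate (inches) from a floor cabinet.

    alpha1 / alpha2: angles (radians) from the straight-down gravity
    direction to the cabinet top / bottom front edges in that column.
    """
    t1, t2 = math.tan(alpha1), math.tan(alpha2)
    return CABINET_HEIGHT_IN * t1 / (t1 - t2)

def aggregate_cabinet_estimate(angle_pairs):
    """Aggregate per-column (alpha1, alpha2) estimates using a mean,
    one of the averaging choices mentioned above."""
    return mean(camera_height_from_cabinet_column(a1, a2)
                for a1, a2 in angle_pairs)
```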
After such a doorway-based estimated camera height and a cabinet-based estimated camera height are determined for one or more building images acquired at a building (or one or more other camera height estimations based on one or more other object types, whether in addition to or instead of doorway-based and/or cabinet-based estimated camera heights), the information from the multiple estimated camera heights may be used to determine a final estimated camera height for use with the image(s). In at least some embodiments and situations, the determination of the final estimated camera height for the image(s) may include comparing the multiple estimated camera heights to determine whether they satisfy one or more defined validation criteria (e.g., differ from each other by less than a defined maximum amount, such as a fixed distance, or a percentage, or other relative difference between the smallest and largest of the estimated camera heights, etc.), and if so determining the final estimated camera height for the image(s) from the multiple estimated camera heights (e.g., using a mean or other average, such as a weighted average based on object type and number of instances of an object; selecting a minimum value or maximum value or other representative value; etc.). In other embodiments and situations, a final camera height for a group of one or more building images may instead be determined using a single type of object visible in the image(s).
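One simple realization of the validation-and-combination step described above, using a fixed maximum spread as a (hypothetical) validation criterion and a mean as the combination rule, might look like the following sketch:

```python
from statistics import mean

MAX_SPREAD_IN = 3.0  # hypothetical validation threshold (inches)

def final_camera_height(estimates):
    """Combine per-object-type camera-height estimates (inches).

    Returns the mean if the estimates agree to within MAX_SPREAD_IN;
    otherwise None, signaling that a fallback technique (e.g., overhead
    image fitting, as discussed below) should be used instead.
    """
    if not estimates:
        return None
    if max(estimates) - min(estimates) > MAX_SPREAD_IN:
        return None  # validation criteria not satisfied
    return mean(estimates)
```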
Once a final camera height is determined for one or more images of a building, that camera height data may be further used to determine additional scaling information for objects and other elements visible in the images, such as building dimensions based at least in part on widths/lengths and/or heights of walls visible in the image(s), widths of doorways and other non-doorway wall openings visible in the image(s), etc., including to determine actual sizes of rooms and/or other areas on a floor plan (rather than merely determining relative sizes and positions of room shapes). In addition, such determined camera height data and/or resulting building dimensions may be used in other manners in other embodiments, whether in addition to or instead of determining actual size information for floor plans, such as to determine actual sizes of structural and/or non-structural objects (e.g., for use in remodeling, such as to determine sizes of objects to be replaced, sizes of empty areas in which to add objects, etc.). Additional details are included below related to using the determined final camera height data in various manners, including to determine building dimensions and other building information.
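As one illustration of such scaling, once a final camera height is known, the horizontal distance to any point on the floor follows from its angle below the horizon, and a wall's width follows from the distances to its two floor corners and the azimuth between them. The following is a sketch under those assumptions, with hypothetical names:

```python
import math

def distance_to_floor_point(camera_height, phi_down):
    """Horizontal distance to a floor point seen phi_down radians below
    the horizon, given the estimated camera height (same length units)."""
    return camera_height / math.tan(phi_down)

def wall_width(camera_height, corner_a, corner_b):
    """Width of a wall from its two floor corners, each given as an
    (azimuth, angle-below-horizon) pair in radians; computed via the law
    of cosines on the two horizontal distances and the azimuth difference.
    """
    (th_a, ph_a), (th_b, ph_b) = corner_a, corner_b
    da = distance_to_floor_point(camera_height, ph_a)
    db = distance_to_floor_point(camera_height, ph_b)
    return math.sqrt(da * da + db * db - 2 * da * db * math.cos(th_a - th_b))
```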
In addition, in at least some embodiments and situations, one or more other techniques may be used to determine a final camera height and/or resulting building dimensions for a group of one or more building images, such as to further validate a final camera height determined using visible objects of one or more determined types (e.g., to confirm that the camera heights from multiple techniques differ from each other by less than a defined maximum amount), or to use the one or more other techniques in place of visible objects of one or more determined types (e.g., if the one or more validation criteria are not satisfied for estimated camera height data using visible objects of one or more determined types). As one non-exclusive example, a floor plan may be generated for a building using images acquired at the building, an exterior of the floor plan may be fitted to an exterior of the building as shown in an overhead image of the building (e.g., an image from a drone, airplane, satellite, etc.), and room dimensions and other building information (including a final camera height for one or more images acquired at the building) may be determined using the fitted floor plan and information about the size of the building exterior in the overhead image (e.g., using GPS data points associated with the overhead image). Additional details are included below related to determining a final camera height for building images using identified structural building objects.
The described techniques provide various benefits in various embodiments, including to allow floor plans of multi-room buildings and other structures to be identified and used more efficiently and rapidly and in manners not previously available, including to automatically determine building dimensions for a building by analyzing visual data of images acquired at the building to identify objects of one or more defined types and to use standardized sizes or otherwise expected sizes of such objects to determine the camera height and other scaling information for the images. In addition, such automated techniques allow such building dimensions and other building information to be determined using information acquired from the actual building environment (rather than from plans on how the building should theoretically be constructed), as well as enabling the capture of changes to structural elements and/or visual appearance elements that occur after a building is initially constructed. Such described techniques further provide benefits in allowing improved automated navigation of a building by mobile devices (e.g., semi-autonomous or fully autonomous vehicles), based at least in part on the determined building dimensions, including to significantly reduce computing power and time used to attempt to otherwise learn a building's layout.
In addition, in some embodiments the described techniques may be used to provide an improved GUI in which a user may more accurately and quickly obtain and use building information that includes building dimensions (e.g., for use in navigating an interior of one or more buildings), including in response to search requests, as part of providing personalized information to the user, as part of providing value estimates and/or other information about a building to a user (e.g., after analysis of information about one or more target building floor plans that are similar to one or more initial floor plans or that otherwise match specified criteria), etc. Various other benefits are also provided by the described techniques, some of which are further described elsewhere herein.
As noted above, automated operations of a BOBSDM system may include determining information for a building floor plan in at least some embodiments. Such a floor plan of a building may include a 2D (two-dimensional) representation of various information about the building (e.g., the rooms, doorways between rooms and other inter-room connections, exterior doorways, windows, etc.), and may be further associated with various types of supplemental or otherwise additional information about the building (e.g., data for a plurality of other building-related attributes)—such additional building information may, for example, include one or more of the following: a 3D, or three-dimensional, model of the building that includes height information (e.g., for building walls and other vertical areas); a 2.5D, or two-and-a-half dimensional, model of the building that when rendered includes visual representations of walls and/or other vertical surfaces without explicitly modeling measured heights of those walls and/or other vertical surfaces; images and/or other types of data captured in rooms of the building, including panoramic images (e.g., 360° panorama images); etc., as discussed in greater detail below.
In addition, in at least some embodiments and situations, some or all of the images acquired for a building and associated with the building's floor plan may be panorama images that are each acquired at one of multiple acquisition locations in or around the building, such as to generate a panorama image at each such acquisition location from one or more of a video at that acquisition location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that acquisition location), or multiple images acquired in multiple directions from the acquisition location (e.g., from a smartphone or other mobile device held by a user turning at that acquisition location), or a simultaneous capture of all the image information (e.g., using one or more fisheye lenses), etc. Such images may include visual data, and in at least some embodiments and situations, acquisition metadata regarding the acquisition of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between acquisition locations (e.g., compass heading data, GPS location data, etc.). It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and provide up to 360° coverage around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Additional details are included below related to the acquisition and usage of panorama images or other images for a building.
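As one concrete detail of working with such panorama images, a pixel in a full 360° equirectangular image maps linearly to a viewing direction in the spherical coordinate system. A minimal sketch of that conversion follows, assuming the horizon lies at the image's vertical midline (the function name is illustrative):

```python
import math

def pixel_to_direction(col, row, width, height):
    """Map a pixel in a full 360x180-degree equirectangular panorama to a
    (azimuth, elevation) viewing direction in radians, with azimuth in
    [-pi, pi) left-to-right and elevation in (-pi/2, pi/2] with the
    horizon at the middle row."""
    azimuth = (col / width) * 2.0 * math.pi - math.pi
    elevation = math.pi / 2.0 - (row / height) * math.pi
    return azimuth, elevation
```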
In at least some embodiments, a BOBSDM system may operate in conjunction with one or more separate ICA (Image Capture and Analysis) systems and/or MIGM (Mapping Information and Generation Manager) systems, such as to obtain and use images from the ICA system and/or to obtain floor plan and other associated information for buildings from the MIGM system, while in other embodiments such a BOBSDM system may incorporate some or all functionality of such ICA and/or MIGM systems as part of the BOBSDM system. In yet other embodiments, the BOBSDM system may operate without using some or all functionality of the ICA and/or MIGM systems, such as if the BOBSDM system obtains information about images and/or building floor plans and associated information from other sources (e.g., from manual capture of one or more such images by one or more users, from manual creation or provision of such building floor plans and/or associated information by one or more users, etc.).
With respect to functionality of such an ICA system, it may perform automated operations in at least some embodiments to acquire images (e.g., panorama images) at various acquisition locations associated with a building (e.g., in the interior of multiple rooms of the building), and optionally further acquire metadata related to the image acquisition process (e.g., compass heading data, GPS location data, etc.) and/or to movement of a capture device between acquisition locations—in at least some embodiments, such acquisition and subsequent use of acquired information may occur without having or using information from depth sensors or other distance-measuring devices about distances from images' acquisition locations to walls or other objects in a surrounding building or other structure. For example, in at least some such embodiments, such techniques may include using one or more mobile devices (e.g., a camera having one or more fisheye lenses and mounted on a rotatable tripod or otherwise having an automated rotation mechanism; a camera having one or more fisheye lenses sufficient to capture 360 degrees horizontally without rotation; a smart phone held in a constant position relative to a user (e.g., chest height, eye height, etc.) and moved by the user, such as to rotate the user's body and held smart phone in a 360° circle around a vertical axis; a camera held by or mounted on a user or the user's clothing; a camera mounted on an aerial and/or ground-based drone or other robotic device; etc.) to capture visual data from a sequence of multiple acquisition locations within multiple rooms of a house (or other building). Additional details are included elsewhere herein regarding operations of device(s) implementing an ICA system, such as to perform such automated operations, and in some cases to further interact with one or more ICA system operator user(s) in one or more manners to provide further functionality.
With respect to functionality of such an MIGM system, it may perform automated operations in at least some embodiments to analyze multiple 360° panorama images (and optionally other images) that have been acquired for a building interior (and optionally an exterior of the building), and determine room shapes and locations of passages connecting rooms for some or all of those panorama images, as well as to determine wall elements and other elements of some or all rooms of the building in at least some embodiments and situations. The types of connecting passages between two or more rooms may include one or more of doorway openings, inter-room non-doorway wall openings, windows, stairways, non-room hallways, etc., and the automated analysis of the images may identify such elements based at least in part on identifying the outlines of the passages, identifying different content within the passages than outside them (e.g., different colors or shading), etc. The automated operations may further include using the determined information to generate a floor plan for the building and to optionally generate other mapping information for the building, such as by using the inter-room passage information and other information to determine relative positions of the associated room shapes to each other, and to optionally add distance scaling information and/or various other types of information to the generated floor plan.
In addition, the MIGM system may in at least some embodiments perform further automated operations to determine and associate additional information with a building floor plan and/or specific rooms or locations within the floor plan, such as to analyze images and/or other environmental information (e.g., audio) captured within the building interior to determine particular attributes (e.g., a color and/or material type and/or other characteristics of particular features or other elements, such as a floor, wall, ceiling, countertop, furniture, fixture, appliance, cabinet, island, fireplace, etc.; the presence and/or absence of particular features or other elements; etc.), or to otherwise determine relevant attributes (e.g., directions that building features or other elements face, such as windows; views from particular windows or other locations; etc.). Additional details are included below regarding operations of computing device(s) implementing an MIGM system, such as to perform such automated operations and in some cases to further interact with one or more MIGM system operator user(s) in one or more manners to provide further functionality.
In some embodiments and situations, an adjacency graph may also be generated that represents information about room adjacencies for a building and that further stores or otherwise includes some or all such data for the building, such as from analysis of a floor plan and with at least some such data stored in or otherwise associated with nodes of the adjacency graph that represent some or all rooms of the floor plan (e.g., with each node containing information about attributes of the room represented by the node), and/or with at least some such attribute data stored in or otherwise associated with edges between nodes that represent connections between adjacent rooms via doorways or other non-doorway inter-room wall openings, or in some situations further represent adjacent rooms that share at least a portion of at least one wall and optionally a full wall without any direct inter-room opening connecting those two rooms (e.g., with each edge containing information about connectivity status between the rooms represented by the nodes that the edge inter-connects, such as whether an inter-room opening exists between the two rooms, and/or a type of inter-room opening or other type of adjacency between the two rooms such as without any direct inter-room wall opening connection). 
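A minimal in-memory form of such an adjacency graph, with nodes carrying room attributes and edges carrying connectivity status and inter-room opening type, might be sketched as follows (all class and field names are illustrative, not part of any described system):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RoomNode:
    room_id: str
    attributes: dict = field(default_factory=dict)  # e.g., room type, dimensions

@dataclass
class AdjacencyEdge:
    room_a: str
    room_b: str
    connected: bool              # True if a direct inter-room opening exists
    opening_type: Optional[str]  # e.g., "doorway", "non-doorway opening", or None

class AdjacencyGraph:
    """Rooms as nodes; adjacencies (with or without openings) as edges."""

    def __init__(self):
        self.nodes = {}  # room_id -> RoomNode
        self.edges = []  # list of AdjacencyEdge

    def add_room(self, room_id, **attributes):
        self.nodes[room_id] = RoomNode(room_id, dict(attributes))

    def add_adjacency(self, a, b, connected, opening_type=None):
        self.edges.append(AdjacencyEdge(a, b, connected, opening_type))

    def neighbors(self, room_id, connected_only=False):
        """Adjacent rooms; optionally only those reachable via an opening."""
        result = []
        for e in self.edges:
            if room_id in (e.room_a, e.room_b) and (e.connected or not connected_only):
                result.append(e.room_b if e.room_a == room_id else e.room_a)
        return result
```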
In some embodiments and situations, the floor plan and/or adjacency graph may further represent at least some information external to the building, such as exterior areas adjacent to doorways or other wall openings between the building and the exterior and/or other accessory structures on the same property as the building (e.g., a garage, shed, pool house, separate guest quarters, mother-in-law unit or other accessory dwelling unit, pool, patio, deck, sidewalk, garden, yard, etc.), or more generally some or all external areas of a property that includes one or more buildings (e.g., a house and one or more outbuildings or other accessory structures)—such exterior areas and/or other structures may be represented in various manners in the adjacency graph and/or on the floor plan, such as via separate nodes for each such exterior area or other structure in the adjacency graph or by placing the floor plan on a representation of the property that includes the external areas, or instead as attribute information associated with corresponding nodes or edges of the adjacency graph or instead with the adjacency graph as a whole (for the building as a whole). The adjacency graph may further have associated attribute information for the corresponding rooms and inter-room connections in at least some embodiments, such as to represent within the adjacency graph some or all of the information available on a floor plan and otherwise associated with the floor plan (or in some embodiments and situations, information in and associated with a 3D model of the building)—for example, if there are images associated with particular rooms of the floor plan or other associated areas (e.g., external areas), corresponding visual attributes may be included within the adjacency graph, whether as part of the associated rooms or other areas, or instead as a separate layer of nodes within the graph that represent the images. 
In embodiments with adjacency information in a form other than an adjacency graph, some or all of the above-indicated types of information may be stored in or otherwise associated with the adjacency information, including information about rooms, about adjacencies between rooms, about connectivity status between adjacent rooms, about attributes of the building, etc. Additional details are included below regarding the generation and use of floor plans and/or adjacency graphs.
In addition, automated operations of a BOBSDM system may further include generating and using one or more vector-based embeddings (also referred to herein as a “vector embedding”) to concisely represent information in an adjacency graph for a floor plan of a building, such as to summarize the semantic meaning and spatial relationships of the floor plan in a manner that enables reconstruction of some or all of the floor plan from the vector embedding. Such a vector embedding may be generated in various manners in various embodiments, such as via the use of representation learning and one or more trained machine learning models, and in at least some such embodiments may be encoded in a format that is not easily discernible to a human reader. Non-exclusive examples of techniques for generating such vector embeddings are included in the following documents, which are incorporated herein by reference in their entirety: “Symmetric Graph Convolution Autoencoder For Unsupervised Graph Representation Learning” by Jiwoong Park et al., 2019 International Conference On Computer Vision, Aug. 7, 2019; “Inductive Representation Learning On Large Graphs” by William L. Hamilton et al., 31st Conference On Neural Information Processing Systems (2017), Jun. 7, 2017; and “Variational Graph Auto-Encoders” by Thomas N. Kipf et al., 30th Conference On Neural Information Processing Systems 2016 (Bayesian Deep Learning Workshop), Nov. 21, 2016.
As noted above, a floor plan may have various information that is associated with individual rooms and/or with inter-room connections and/or with a corresponding building and/or encompassing property as a whole, and the corresponding adjacency graph and/or vector embedding(s) for such a floor plan may include some or all such associated information (e.g., represented as attributes of nodes for rooms in an adjacency graph and/or attributes of edges for inter-room connections in an adjacency graph and/or represented as attributes of the adjacency graph as a whole, such as in a node representing the overall building, and with corresponding information encoded in the associated vector embedding(s)). Such associated information may include a variety of types of data, including information about one or more of the following non-exclusive examples: room types, room dimensions, locations of windows and doors and other inter-room openings in a room, room shape, a view type for each exterior window, information about and/or copies of images taken in a room, information about and/or copies of audio or other data captured in a room, information of various types about features of one or more rooms (e.g., as automatically identified from analysis of images, as supplied by operator users of the BOBSDM system and/or by end-users viewing information about the floor plan and/or by operator users of ICA and/or MIGM systems as part of capturing information about a building and generating a floor plan for the building, etc.), attributes of structures and objects (e.g., colors, shapes, materials, age, condition, quality, etc.), types of inter-room connections, dimensions of inter-room connections, etc. 
Furthermore, in at least some embodiments, one or more additional subjective attributes may be determined for and associated with the floor plan, such as via analysis of the floor plan information (e.g., an adjacency graph for the floor plan) by one or more trained machine learning models (e.g., classification neural network models) to identify floor plan characteristics for a building as a whole or a particular building floor (e.g., an open floor plan; a typical/normal versus atypical/odd/unusual floor plan; a standard versus nonstandard floor plan; a floor plan that is accessibility friendly, such as by being accessible with respect to one or more characteristics such as disability and/or advanced age; etc.)—in at least some such embodiments, the one or more classification neural network models are part of the BOBSDM system and are trained via supervised learning using labeled data that identifies floor plans having each of the possible characteristics, while in other embodiments such classification neural network models may instead use unsupervised clustering. Additional details are included below regarding the determination and use of attribute information for floor plans, including with respect to the examples of
After an adjacency graph and one or more vector embeddings are generated for a floor plan and associated information of a building, that generated information may be used by the BOBSDM system as specified criteria to automatically determine one or more other similar or otherwise matching floor plans of other buildings in various manners in various embodiments. For example, in some embodiments, an initial floor plan is identified, and one or more corresponding vector embeddings for the initial floor plan are generated and compared to generated vector embeddings for other candidate floor plans in order to determine a difference between the initial floor plan's vector embedding(s) and the vector embeddings of some or all of the candidate floor plans, with smaller differences between two vector embeddings corresponding to higher degrees of similarity between the building information represented by those vector embeddings. Differences between two such vector embeddings may be determined in various manners in various embodiments, including, as non-exclusive examples, by using one or more of the following distance metrics: Euclidean distance, cosine distance, graph edit distance, a custom distance measure specified by a user, etc.; and/or by otherwise determining similarity without use of such a distance metric. In at least some embodiments, multiple such initial floor plans may be identified and used in the described manner to determine a combined distance between a group of vector embeddings for the multiple initial floor plans and the vector embeddings for each of multiple other candidate floor plans, such as by determining individual distances for each of the initial floor plans to a given other candidate floor plan and by combining the multiple individual determined distances in one or more manners (e.g., a mean or other average, a cumulative total, etc.)
to generate the combined distance for the group of vector embeddings of the multiple initial floor plans to that given other candidate floor plan.
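The comparison step just described can be sketched concretely: two of the example distance metrics, plus a combined distance that averages the individual distances from a group of initial floor plans' embeddings to one candidate's embedding. Function names are illustrative.

```python
import math

# Sketch of the embedding-comparison step: smaller distances between two
# vector embeddings indicate more similar floor plans, and distances from
# several initial floor plans to one candidate are combined by a mean.

def euclidean_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return 1.0 - dot / norm if norm else 1.0

def combined_distance(initial_embeddings, candidate,
                      metric=euclidean_distance):
    """Mean of the individual distances from each initial floor plan's
    embedding to the candidate floor plan's embedding."""
    distances = [metric(e, candidate) for e in initial_embeddings]
    return sum(distances) / len(distances)

initial = [[1.0, 0.0], [0.0, 1.0]]   # embeddings of two initial plans
candidate_a = [1.0, 1.0]             # closer to both initial plans
candidate_b = [5.0, 5.0]             # farther from both
```

A cumulative total rather than a mean, or a user-specified custom measure, could be substituted for `metric` as the text notes.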
Furthermore, in some embodiments, one or more explicitly specified criteria other than one or more initial floor plans are received (whether in addition to or instead of receiving one or more initial floor plans), and the corresponding vector embedding(s) for each of multiple candidate floor plans are compared to information generated from the specified criteria in order to determine which of the candidate floor plans satisfy the specified criteria (e.g., are a match above a defined similarity threshold), such as by generating a representation of a building that corresponds to the criteria (e.g., has attributes identified in the criteria) and generating one or more vector embeddings for the building representation for use in vector embedding comparisons in the manners discussed above. The specified criteria may be of various types in various embodiments and situations, such as one or more of the following non-exclusive examples: search terms corresponding to specific attributes of rooms and/or inter-room connections and/or buildings as a whole (whether objective attributes that can be independently verified and/or replicated, and/or subjective attributes that are determined via use of corresponding classification neural networks); information identifying adjacency information between two or more rooms or other areas; information about views available from windows or other exterior openings of the building; information about directions of windows or other structural features or other elements of the building (e.g., such as to determine natural lighting information available via those windows or other structural elements, optionally at specified days and/or seasons and/or times); etc. 
Non-exclusive illustrative examples of such specified criteria include the following: a bathroom adjacent to a bedroom (e.g., without an intervening hall or other room); a deck adjacent to a family room (optionally with a specified type of connection, such as French doors); 2 bedrooms facing south; a kitchen with a tile-covered island and a northward-facing view; a master bedroom with a view of the ocean or more generally of water; any combination of such specified criteria; etc. In addition, in some embodiments, one or more target floor plans are identified that are similar to specified criteria associated with a particular end-user (e.g., based on one or more initial target floor plans that are selected by the end-user and/or are identified as previously being of interest to the end-user, whether based on explicit and/or implicit activities of the end-user to specify such floor plans; based on one or more search criteria specified by the end-user, whether explicitly and/or implicitly; etc.), and are used in further automated activities to personalize interactions with the end-user. Such further automated personalized interactions may be of various types in various embodiments, and in some embodiments may include displaying or otherwise presenting information to the end-user about the target floor plan(s) and/or additional information associated with those floor plans.
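To make the criteria-matching idea concrete, the following simplified sketch checks candidates' attribute records directly against specified criteria and applies a similarity threshold. This is a hypothetical stand-in: the embedding-based variant described above would instead build a building representation from the criteria and compare vector embeddings.

```python
# Simplified stand-in for criteria-based floor plan matching: the fraction
# of specified criteria satisfied by a candidate's attributes is compared
# against a defined similarity threshold.

def matches_criteria(candidate_attrs, criteria, threshold=1.0):
    satisfied = sum(1 for key, wanted in criteria.items()
                    if candidate_attrs.get(key) == wanted)
    return satisfied / len(criteria) >= threshold

candidates = {
    "plan_1": {"bedrooms_facing_south": 2,
               "bathroom_adjacent_to_bedroom": True},
    "plan_2": {"bedrooms_facing_south": 0,
               "bathroom_adjacent_to_bedroom": True},
}
criteria = {"bedrooms_facing_south": 2,
            "bathroom_adjacent_to_bedroom": True}

matching = [name for name, attrs in candidates.items()
            if matches_criteria(attrs, criteria)]
```

Lowering `threshold` below 1.0 would correspond to accepting partial matches above a defined similarity threshold, as the text describes.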
The described techniques may further include additional operations in some embodiments. For example, in at least some embodiments, machine learning techniques may be used to learn the attributes and/or other characteristics of adjacency graphs to encode in corresponding vector embeddings that are generated, such as the attributes and/or other characteristics that best enable subsequent automated identification of building floor plans having attributes satisfying target criteria (e.g., number of bedrooms; number of bathrooms; connectivity between rooms; size and/or dimensions of each room; number of windows/doors in each room; types of views available from exterior windows, such as water, mountain, a back yard or other exterior area of the property, etc.; location of windows/doors in each room; etc.). In addition, in at least some embodiments, machine learning techniques may be used to identify objects of one or more types in one or more images (e.g., doorways, kitchen cabinets, etc.), such as by one or more machine learning models trained to determine such information (e.g., a machine learning model specific to each defined object type). Furthermore, in at least some embodiments, machine learning techniques may be used to determine an estimated camera height for one or more images, such as by one or more machine learning models trained to determine such information (e.g., a machine learning model specific to each defined object type).
For illustrative purposes, some embodiments are described below in which specific types of information are acquired, used and/or presented in specific ways for specific types of structures and by using specific types of devices; however, it will be understood that the described techniques may be used in other manners in other embodiments, and that the invention is thus not limited to the exemplary details provided. As one non-exclusive example, while specific types of data structures (e.g., floor plans, adjacency graphs, vector embeddings, etc.) are generated and used in specific manners in some embodiments, it will be appreciated that other types of information to describe floor plans and other associated information may be similarly generated and used in other embodiments, including for buildings (or other structures or layouts) separate from houses, and that floor plans identified as matching specified criteria may be used in other manners in other embodiments. In addition, the term “building” refers herein to any partially or fully enclosed structure, typically but not necessarily encompassing one or more rooms that visually or otherwise divide the interior space of the structure; non-limiting examples of such buildings include houses, apartment buildings or individual apartments therein, condominiums, office buildings, commercial buildings or other wholesale and retail structures (e.g., shopping malls, department stores, warehouses, etc.), supplemental structures on a property with another main building (e.g., a detached garage or shed on a property with a house), etc. 
The term “acquire” or “capture” as used herein with reference to a building interior, acquisition location, or other location (unless context clearly indicates otherwise) may refer to any recording, storage, or logging of media, sensor data, and/or other information related to spatial characteristics and/or visual characteristics and/or otherwise perceivable characteristics of the building interior or subsets thereof, such as by a recording device or by another device that receives information from the recording device. As used herein, the term “panorama image” may refer to a visual representation that is based on, includes or is separable into multiple discrete component images originating from a substantially similar physical location in different directions and that depicts a larger field of view than any of the discrete component images depict individually, including images with a sufficiently wide-angle view from a physical location to include angles beyond that perceivable from a person's gaze in a single direction. The term “sequence” of acquisition locations, as used herein, refers generally to two or more acquisition locations that are each visited at least once in a corresponding order, whether or not other non-acquisition locations are visited between them, and whether or not the visits to the acquisition locations occur during a single continuous period of time or at multiple different times, or by a single user and/or device or by multiple different users and/or devices. In addition, various details are provided in the drawings and text for exemplary purposes, but are not intended to limit the scope of the invention. For example, sizes and relative positions of elements in the drawings are not necessarily drawn to scale, with some details omitted and/or provided with greater prominence (e.g., via size and positioning) to enhance legibility and/or clarity. 
Furthermore, identical reference numbers may be used in the drawings to identify the same or similar elements or acts.
In the illustrated embodiment, the BOBSDM system 140 analyzes obtained building images 141 (e.g., some or all images 165 acquired by the ICA system 160) for a building in order to determine dimension information 143 for the building (e.g., sizes of visible objects, rooms, etc.), such as by using visible objects 142 that are identified in the images of one or more defined types (e.g., doorways, floor cabinets, etc.) and that have standardized or otherwise expected heights or other sizes, in order to estimate the camera height(s) of one or more camera devices during capturing of those images and optionally other image scale information 143. The BOBSDM system may further use the determined building dimension information in various manners, including to determine and associate sizes of rooms and/or the building as a whole with a floor plan of the building, such as for use in improved navigation of the building. In some embodiments and situations, the BOBSDM system may optionally further use supporting information supplied by system operator users via computing devices 105 over intervening computer network(s) 170, and in some embodiments and situations some or all of the determinations performed by the BOBSDM system may include using one or more trained machine learning models (e.g., one or more trained neural networks). In some embodiments, the building images 141 that are analyzed by the BOBSDM system may be obtained in manners other than via ICA and/or MIGM systems 160 (e.g., if such ICA and/or MIGM systems are not part of the BOBSDM system), such as to receive building images from other sources. Additional details related to the automated operations of the BOBSDM system are included elsewhere herein, including with respect to
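The scale-determination idea just described can be illustrated with simple pinhole-style arithmetic: a visible object of a defined type with a standardized height (a doorway of roughly 2.03 m, i.e., 6 ft 8 in, is a common US standard) yields a meters-per-pixel scale for its wall plane, which can then size other elements on that plane. The function names and pixel values below are hypothetical, and real embodiments may instead use trained machine learning models as noted above.

```python
# Sketch of determining image scale from a visible object of a defined
# type with a standardized height, then using that scale to estimate the
# dimensions of other elements on the same wall plane.

STANDARD_DOORWAY_HEIGHT_M = 2.03  # assumed standardized object height

def scale_from_object(pixel_height,
                      real_height_m=STANDARD_DOORWAY_HEIGHT_M):
    """Meters per pixel on the object's wall plane."""
    return real_height_m / pixel_height

def estimate_length_m(pixel_length, scale_m_per_px):
    """Real-world size of another element measured on the same plane."""
    return pixel_length * scale_m_per_px

scale = scale_from_object(pixel_height=406)   # doorway spans 406 px
wall_width_m = estimate_length_m(812, scale)  # wall spans 812 px
```

The same measured scale, combined with the camera's optics, is what allows an estimated camera height to be recovered and propagated to room and building dimensions.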
In addition, an Interior Capture and Analysis (“ICA”) system (e.g., an ICA system 160 executing on the one or more server computing systems 180, such as part of the BOBSDM system; an optional ICA system application 154 executing on a mobile image acquisition device 185; etc.) captures information 165 with respect to one or more buildings or other structures (e.g., by capturing one or more 360° panorama images and/or other images for multiple acquisition locations 210 in an example house 198), and a MIGM (Mapping Information Generation Manager) system 160 executing on the one or more server computing systems 180 (e.g., as part of the BOBSDM system) further uses that captured building information and optionally additional supporting information (e.g., supplied by system operator users via computing devices 105 over intervening computer network(s) 170) to generate and provide building floor plans 155 and/or other mapping-related information (not shown) for the building(s) or other structure(s). In the illustrated embodiment, the ICA and MIGM systems 160 are operating as part of the BOBSDM system 140 that analyzes building images 141 (e.g., images 165 acquired by the ICA system) and generates and uses corresponding building information 144 (e.g., as part of floor plan generation by the MIGM system) in one or more further automated manners, but in other embodiments may operate separately from the BOBSDM system. 
Similarly, while the ICA and MIGM systems 160 are illustrated in this example embodiment as executing on the same server computing system(s) 180 as the BOBSDM system (e.g., with all systems being operated by a single entity or otherwise being executed in coordination with each other, such as with some or all functionality of all the systems integrated together), in other embodiments the ICA system 160 and/or MIGM system 160 and/or BOBSDM system 140 may operate on one or more other systems separate from the system(s) 180 (e.g., on mobile device 185; one or more other computing systems, not shown; etc.), whether instead of or in addition to the copies of those systems executing on the system(s) 180 (e.g., to have a copy of the MIGM system 160 executing on the device 185 to incrementally generate at least partial building floor plans as building images are acquired by the ICA system 160 executing on the device 185 and/or by that copy of the MIGM system, while another copy of the MIGM system optionally executes on one or more server computing systems to generate a final complete building floor plan after all images are acquired), and in yet other embodiments the BOBSDM may instead operate without an ICA system and/or MIGM system and instead obtain panorama images (or other images) and/or building floor plans from one or more external sources. Additional details related to the automated operation of the ICA and MIGM systems are included elsewhere herein, including with respect to
Various components of the mobile computing device 185 are also illustrated in
One or more users (not shown) of one or more client computing devices 175 may further interact over one or more computer networks 170 with the BOBSDM system 140 (and optionally the ICA system 160 and/or MIGM system 160), such as to obtain determined building dimension information and/or to assist in the determining of the building dimension information, as well as obtaining and using the underlying images and/or resulting floor plans in one or more further automated manners. Such interactions by the user(s) may include, for example, specifying target criteria to use in searching for corresponding floor plans or otherwise providing information about target criteria of interest to the users, or obtaining and optionally interacting with one or more particular identified floor plans and/or with additional associated information (e.g., to change between a floor plan view and a view of a particular image at an acquisition location within or near the floor plan; to change the horizontal and/or vertical viewing direction from which a corresponding view of a panorama image is displayed, such as to determine a portion of a panorama image to which a current user viewing direction is directed; etc.). In addition, a floor plan (or portion of it) may be linked to or otherwise associated with one or more other types of information, including for a floor plan of a multi-story or otherwise multi-level building to have multiple associated sub-floor plans for different stories or levels that are interlinked (e.g., via connecting stairway passages), for a two-dimensional (“2D”) floor plan of a building to be linked to or otherwise associated with a three-dimensional (“3D”) rendering of the building, etc. Also, while not illustrated in
In the depicted computing environment of
In the example of
One or more end users (not shown) of one or more building information access client computing devices 175 may further interact over computer networks 170 with the BOBSDM system 140 (and optionally the MIGM system 160 and/or ICA system 160), such as to obtain, display and interact with a generated floor plan (and/or other generated mapping information) and/or associated images (e.g., by supplying information about one or more indicated buildings of interest and/or other criteria and receiving information about one or more corresponding matching buildings), as discussed in greater detail elsewhere herein, including with respect to
In operation, the mobile device 185 and/or camera device(s) 184 arrive at a first acquisition location 210A within a first room of the building interior (in this example, in a living room accessible via an external door 190-1), and capture or acquire a view of a portion of the building interior that is visible from that acquisition location 210A (e.g., some or all of the first room, and optionally small portions of one or more other adjacent or nearby rooms, such as through doorway wall openings, non-doorway wall openings, hallways, stairways or other connecting passages from the first room). The view capture may be performed in various manners as discussed herein, and may include a number of objects or other features (e.g., structural details) that may be visible in images captured from the acquisition location—in the example of
After the first acquisition location 210A has been captured, the mobile device 185 and/or camera device(s) 184 may move or be moved to a next acquisition location (such as acquisition location 210B), optionally recording images and/or video and/or other data from the hardware components (e.g., from one or more IMUs, from the camera, etc.) during movement between the acquisition locations. At the next acquisition location, the mobile device 185 and/or camera device(s) 184 may similarly capture a 360° panorama image and/or other type of image from that acquisition location. This process may repeat for some or all rooms of the building and in some cases external to the building, as illustrated for additional acquisition locations 210C-210P in this example, with the images from acquisition locations 210A to 210O being captured in a single image acquisition session in this example (e.g., in a substantially continuous manner, such as within a total of 5 minutes or 15 minutes), and with the image from acquisition location 210P optionally being acquired at a different time (e.g., from a street adjacent to the building or front yard of the building). In this example, multiple of the acquisition locations 210K-210P are external to but associated with the building 198, including acquisition locations 210L and 210M in one or more additional structures 189 on the same property 241 (e.g., an ADU, or accessory dwelling unit; a garage; a shed; etc.), acquisition location 210K on an external deck or patio 186, and acquisition locations 210N-210P at multiple yard locations on the property (e.g., backyard 187, side yard 188, front yard including acquisition location 210P, etc.). 
The acquired images for each acquisition location may be further analyzed, including in some embodiments to render or otherwise place each panorama image in an equirectangular format, whether at the time of image acquisition or later, as well as further analyzed by the MIGM and/or BOBSDM systems in the manners described herein.
Various details are provided with respect to
In particular,
While not illustrated in
Additional details related to embodiments of a system providing at least some such functionality of an MIGM system or related system for generating floor plans and associated information and/or presenting floor plans and associated information are included in co-pending U.S. Non-Provisional patent application Ser. No. 16/190,162, filed Nov. 14, 2018 and entitled “Automated Mapping Information Generation From Inter-Connected Images” (which includes disclosure of an example Floor Map Generation Manager, or FMGM, system that is generally directed to automated operations for generating and displaying a floor map or other floor plan of a building using images acquired in and around the building); in U.S. Non-Provisional patent application Ser. No. 16/681,787, filed Nov. 12, 2019 and entitled “Presenting Integrated Building Information Using Three-Dimensional Building Models” (which includes disclosure of an example FMGM system that is generally directed to automated operations for displaying a floor map or other floor plan of a building and associated information); in U.S. Non-Provisional patent application Ser. No. 16/841,581, filed Apr. 6, 2020 and entitled “Providing Simulated Lighting Information For Three-Dimensional Building Models” (which includes disclosure of an example FMGM system that is generally directed to automated operations for displaying a floor map or other floor plan of a building and associated information); in U.S. Provisional Patent Application No. 62/927,032, filed Oct. 28, 2019 and entitled “Generating Floor Maps For Buildings From Automated Analysis Of Video Of The Buildings' Interiors” (which includes disclosure of an example Video-To-Floor Map, or BOBSDM, system that is generally directed to automated operations for generating a floor map or other floor plan of a building using video data acquired in and around the building); in U.S. Non-Provisional patent application Ser. No. 16/807,135, filed Mar. 
2, 2020 and entitled “Automated Tools For Generating Mapping Information For Buildings” (which includes disclosure of an example MIGM system that is generally directed to automated operations for generating a floor map or other floor plan of a building using images acquired in and around the building); and in U.S. Non-Provisional patent application Ser. No. 17/013,323, filed Sep. 4, 2020 and entitled “Automated Analysis Of Image Contents To Determine The Acquisition Location Of The Image” (which includes disclosure of an example MIGM system that is generally directed to automated operations for generating a floor map or other floor plan of a building using images acquired in and around the building, and an example ILMM system for determining the acquisition location of an image on a floor plan based at least in part on an analysis of the image's contents); each of which is incorporated herein by reference in its entirety.
Various details have been provided with respect to
As noted above, in some embodiments, the described techniques include using machine learning to learn the attributes and/or other characteristics of adjacency graphs to encode in corresponding vector embeddings that are generated, such as the attributes and/or other characteristics that best enable subsequent automated identification of building floor plans having attributes satisfying target criteria, and with the vector embeddings that are used in at least some embodiments to identify target building floor plans being encoded based on such learned attributes or other characteristics. In particular, in at least some such embodiments, graph representation learning is used to search for a mapping function that can map the nodes in a graph to d-dimensional vectors, such that similar nodes in the graph have similar embeddings in the learned space. Unlike traditional methods such as graph kernel methods (see, for example, “Graph Kernels” by S. V. N. Vishwanathan et al., Journal of Machine Learning Research, 11:1201-1242, 2010; and “A Survey On Graph Kernels”, Nils M. Kriege et al., arXiv: 1903.11835, 2019), graph neural networks remove the need for hand-engineered features and directly learn high-level embeddings from the raw features of nodes or the (sub)graph. Various techniques exist for extending and re-defining convolutions in the graph domain, which can be categorized into the spectral approach and the spatial approach. 
The spectral approach employs the spectral representation of a graph and is specific to a particular graph structure, such that models trained on one graph are not applicable to a graph with a different structure (see, for example, “Spectral Networks And Locally Connected Networks On Graphs”, Joan Bruna et al., International Conference on Learning Representations 2014, 2014; “Convolutional Neural Networks On Graphs With Fast Localized Spectral Filtering”, Michael Defferrard et al., Proceedings of Neural Information Processing Systems 2016, 2016, pp. 3844-3852; and “Semi-Supervised Classification With Graph Convolutional Networks”, Thomas N. Kipf et al., International Conference on Learning Representations 2017, 2017). The convolution operation for the spectral approach is defined in the Fourier domain by computing the eigendecomposition of the graph Laplacian, and the filters may be approximated via a Chebyshev expansion of the graph Laplacian to avoid the expensive eigendecomposition, generating local filters, with the filters optionally limited to work on neighbors one step away from the current node. The spatial approach, in contrast, learns embeddings for a node by recursively aggregating information from its local neighbors. Various numbers of neighboring nodes and corresponding aggregation functions can be handled in various ways. For example, a fixed number of neighbors may be sampled for each node, and different aggregation functions such as mean, max and long short-term memory networks (LSTM) may be used (see, for example, “Inductive Representation Learning On Large Graphs”, Will Hamilton et al., Proceedings of Neural Information Processing Systems 2017, 2017, pp. 1024-1034). Alternatively, each neighboring node may be considered to contribute differently to a central node, with the contribution factors being learnable via self-attention models (see, for example, “Graph Attention Networks”, P. 
Velickovic et al., International Conference on Learning Representations 2018, 2018). Furthermore, each attention head captures feature correlation in a different representation subspace, and may be treated differently, such as by using a convolutional sub-network to weight the importance of each attention head (see, for example, “GaAN: Gated Attention Networks For Learning On Large And Spatiotemporal Graphs”, Jiani Zhang et al., Proceedings of Uncertainty in Artificial Intelligence 2018, 2018).
In addition, in some embodiments, the creation of an adjacency graph and/or associated vector embedding for a building may be further based in part on partial information that is provided for the building (e.g., by an operator user of the BOBSDM system, by one or more end users, etc.). Such partial information may include, for example, one or more of the following: some or all room names for rooms of the building being provided, with the connections between the rooms to be automatically determined or otherwise established; some or all inter-room connections between rooms of the building being provided, with likely room names for the rooms to be automatically determined or otherwise established; some room names and inter-room connections being provided, with the other inter-room connections and/or likely room names to be automatically determined or otherwise established. In such embodiments, the automated techniques may include using the partial information as part of completing or otherwise generating a floor plan for the building, with the floor plan subsequently used for creating a corresponding adjacency graph and/or vector embedding.
In the illustrated embodiment, the BOBSDM system 340 executes in memory 330 of the server computing system(s) 300 in order to perform at least some of the described techniques, such as by using the processor(s) 305 to execute software instructions of the system 340 in a manner that configures the processor(s) 305 and computing system 300 to perform automated operations that implement those described techniques. The illustrated embodiment of the BOBSDM system may include one or more components, not shown, to each perform portions of the functionality of the BOBSDM system, such as in a manner discussed elsewhere herein, and the memory may further optionally execute one or more other programs 335—as one specific example, a copy of the ICA and/or MIGM systems may execute as one of the other programs 335 in at least some embodiments, such as instead of or in addition to the ICA and/or MIGM systems 388-389 on the server computing system(s) 380, and/or a copy of a Building Information Access system may execute as one of the other programs 335. 
The BOBSDM system 340 may further, during its operation, store and/or retrieve various types of data on storage 320 (e.g., in one or more databases or other data structures), such as information 326 about identified target objects (e.g., doorways and floor cabinets) and other objects/elements from analysis of building images, determined camera height estimations and other image scaling information 325, determined building dimension information 327 (e.g., dimensions of target objects and other visible objects/elements, walls, rooms, the building as a whole, etc.), floor plans and images and other associated information 324 (e.g., images captured and/or generated by the ICA system; 2D and/or 2.5D and/or 3D models generated by the MIGM system; building and room dimensions for use with associated floor plans, such as generated by the BOBSDM system; additional images and/or annotation information; etc.), optionally various types of user information 322 for users who interact with the BOBSDM system, and/or various types of optional additional information 329 (e.g., various analytical information related to presentation or other use of one or more building interiors or other environments).
In addition, embodiments of the ICA and MIGM systems 388-389 execute in memory 387 of the server computing system(s) 380 in the illustrated embodiment in order to perform techniques related to generating panorama images and floor plans for buildings, such as by using the processor(s) 381 to execute software instructions of the systems 388 and/or 389 in a manner that configures the processor(s) 381 and computing system(s) 380 to perform automated operations that implement those techniques. The illustrated embodiment of the ICA and MIGM systems may include one or more components, not shown, to each perform portions of the functionality of the ICA and MIGM systems, respectively, and the memory may further optionally execute one or more other programs 383. The ICA and/or MIGM systems 388-389 may further, during operation, store and/or retrieve various types of data on storage 384 (e.g., in one or more databases or other data structures), such as video and/or image information 386 acquired for one or more buildings (e.g., 360° video or images for analysis to generate floor plans, to provide to users of client computing devices 390 for display, etc.), floor plans and/or other generated mapping information 387, and optionally other information 385 (e.g., additional images and/or annotation information for use with associated floor plans, building and room dimensions for use with associated floor plans, various analytical information related to presentation or other use of one or more building interiors or other environments, etc.)—while not illustrated in
The server computing system(s) 300 and executing BOBSDM system 340, server computing system(s) 380 and executing ICA and MIGM systems 388-389, and optionally executing Building Information Access system (not shown), may communicate with each other and with other computing systems and devices in this illustrated embodiment, such as via one or more networks 399 (e.g., the Internet, one or more cellular telephone networks, etc.), including to interact with user client computing devices 390 (e.g., used to view floor plans, and optionally associated images and/or other related information, such as by interacting with or executing a copy of the Building Information Access system), and/or mobile image acquisition devices 360 (e.g., used to acquire images and/or other information for buildings or other environments to be modeled), and/or optionally other navigable devices 395 that receive and use floor plans and optionally other generated information for navigation purposes (e.g., for use by semi-autonomous or fully autonomous vehicles or other devices). In other embodiments, some of the described functionality may be combined into fewer computing systems, such as to combine the BOBSDM system 340 and a Building Information Access system in a single system or device, to combine the BOBSDM system 340 and the image acquisition functionality of device(s) 360 in a single system or device, to combine the ICA and MIGM systems 388-389 and the image acquisition functionality of device(s) 360 in a single system or device, to combine the BOBSDM system 340 and the ICA and MIGM systems 388-389 in a single system or device, to combine the BOBSDM system 340 and the ICA and MIGM systems 388-389 and the image acquisition functionality of device(s) 360 in a single system or device, etc.
Some or all of the user client computing devices 390 (e.g., mobile devices), mobile image acquisition devices 360, optional other navigable devices 395 and other computing systems (not shown) may similarly include some or all of the same types of components illustrated for server computing system 300. As one non-limiting example, the mobile image acquisition devices 360 are each shown to include one or more hardware CPU(s) 361, I/O components 362, memory and/or storage 367, one or more imaging systems 365, IMU hardware sensors 369 (e.g., for use in acquisition of video and/or images, associated device movement data, etc.), and optionally other components 364. In the illustrated example, one or both of a browser and one or more client applications 368 (e.g., an application specific to the BOBSDM system and/or to ICA system and/or to the MIGM system) are executing in memory 367, such as to participate in communication with the BOBSDM system 340, ICA system 388, MIGM system 389 and/or other computing systems. While particular components are not illustrated for the other navigable devices 395 or other computing devices/systems 390, it will be appreciated that they may include similar and/or additional components.
It will also be appreciated that computing systems 300 and 380 and the other systems and devices included within
It will also be appreciated that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Thus, in some embodiments, some or all of the described techniques may be performed by hardware means that include one or more processors and/or memory and/or storage when configured by one or more software programs (e.g., by the BOBSDM system 340 executing on server computing systems 300, by a Building Information Access system executing on server computing systems 300 or other computing systems/devices, etc.) and/or data structures, such as by execution of software instructions of the one or more software programs and/or by storage of such software instructions and/or data structures, and such as to perform algorithms as described in the flow charts and other disclosure herein. Furthermore, in some embodiments, some or all of the systems and/or components may be implemented or provided in other manners, such as by consisting of one or more means that are implemented partially or fully in firmware and/or hardware (e.g., rather than as a means implemented in whole or in part by software instructions that configure a particular CPU or other processor), including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc. 
Some or all of the components, systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage medium, such as a hard disk or flash drive or other non-volatile storage device, volatile or non-volatile memory (e.g., RAM or flash RAM), a network storage device, or a portable media article (e.g., a DVD disk, a CD disk, an optical disk, a flash memory device, etc.) to be read by an appropriate drive or via an appropriate connection. The systems, components and data structures may also in some embodiments be transmitted via generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of the present disclosure may be practiced with other computer system configurations.
The illustrated embodiment of the routine begins at block 405, where instructions or information are received. At block 410, the routine determines whether the received instructions or information indicate to acquire visual data and/or other data representing a building interior (optionally in accordance with supplied information about one or more additional acquisition locations and/or other guidance acquisition instructions), and if not continues to block 490. Otherwise, the routine proceeds to block 412 to receive an indication to begin the image acquisition process at a first acquisition location (e.g., from a user of a mobile image acquisition device that will perform the acquisition process). After block 412, the routine proceeds to block 415 in order to perform acquisition location image acquisition activities for acquiring a 360° panorama image for the acquisition location in the interior of the target building of interest, such as via one or more fisheye lenses and/or non-fisheye rectilinear lenses on the mobile device and to provide horizontal coverage of at least 360° around a vertical axis, although in other embodiments other types of images and/or other types of data may be acquired. As one non-exclusive example, the mobile image acquisition device may be a rotating (scanning) panorama camera equipped with a fisheye lens (e.g., with 180° of horizontal coverage) and/or other lens (e.g., with less than 180° of horizontal coverage, such as a regular lens or wide-angle lens or ultrawide lens). The routine may also optionally obtain annotation and/or other information from the user regarding the acquisition location and/or the surrounding environment, such as for later use in presentation of information regarding that acquisition location and/or surrounding environment.
After block 415 is completed, the routine continues to block 425 to determine if there are more acquisition locations at which to acquire images, such as based on corresponding information provided by the user of the mobile device and/or received in block 405—in some embodiments, the ICA routine will acquire only a single image and then proceed to perform blocks 430 and 440, followed by block 480 to provide that image and corresponding information (e.g., to return the image and corresponding information to the MIGM system for further use before receiving additional instructions or information to acquire one or more next images at one or more next acquisition locations). If there are more acquisition locations at which to acquire additional images at the current time, the routine continues to block 427 to optionally initiate the capture of linking information during movement of the mobile device along a travel path away from the current acquisition location and towards a next acquisition location within the building interior—the captured linking information may include additional sensor data (e.g., from one or more IMU, or inertial measurement units, on the mobile device or otherwise carried by the user) and/or additional visual information (e.g., images, video, etc.) recorded during such movement. Initiating the capture of such linking information may be performed in response to an explicit indication from a user of the mobile device or based on one or more automated analyses of information recorded from the mobile device. 
In addition, the routine may further optionally monitor the motion of the mobile device in some embodiments during movement to the next acquisition location, and provide one or more guidance cues (e.g., to the user) regarding the motion of the mobile device, quality of the sensor data and/or visual information being captured, associated lighting/environmental conditions, advisability of capturing a next acquisition location, and any other suitable aspects of capturing the linking information. Similarly, the routine may optionally obtain annotation and/or other information from the user regarding the travel path, such as for later use in presentation of information regarding that travel path or a resulting inter-panorama image connection link. In block 429, the routine determines that the mobile device has arrived at the next acquisition location (e.g., based on an indication from the user, based on the forward movement of the mobile device stopping for at least a predefined amount of time, etc.), for use as the new current acquisition location, and returns to block 415 in order to perform the acquisition location image acquisition activities for the new current acquisition location.
If it is instead determined in block 425 that there are not any more acquisition locations at which to acquire image information for the current building or other structure at the current time, the routine proceeds to block 430 to optionally analyze information about the one or more acquisition locations to identify possible additional areas in the building for which to acquire visual data (e.g., based on not obtaining visual data for a kitchen or a bathroom, on only obtaining visual data for 2 bathrooms while textual description information for the building indicates 3 bathrooms, etc.) and/or other information to gather (e.g., audio data), and to optionally further provide user suggestions and/or directions if so identified and/or to otherwise assist in capturing corresponding additional data. In block 435, the routine then optionally preprocesses the acquired 360° panorama images before their subsequent use (e.g., for generating related mapping information, for providing information about features of rooms or other enclosing areas, etc.), such as to produce images of a particular type and/or in a particular format (e.g., to generate a straightened equirectangular projection for each such image, with straight vertical data such as the sides of a typical rectangular door frame or a typical border between 2 adjacent walls remaining straight, and with straight horizontal data such as the top of a typical rectangular door frame or a border between a wall and a floor remaining straight at a horizontal midline of the image but being increasingly curved in the equirectangular projection image in a convex manner relative to the horizontal midline as the distance increases in the image from the horizontal midline). 
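For illustration, the geometry of such a straightened equirectangular projection can be sketched as follows (a hypothetical Python sketch, not part of the described embodiments; the image dimensions and the coordinate conventions are assumptions chosen for the example):

```python
import math

def equirect_project(x, y, z, width, height):
    """Project a 3D direction (camera at origin, y up, z forward) onto an
    equirectangular image of the given pixel dimensions.

    Returns fractional (column, row); the horizontal axis spans 360 degrees
    of yaw and the vertical axis spans 180 degrees of pitch."""
    yaw = math.atan2(x, z)                     # -pi..pi around the vertical axis
    pitch = math.atan2(y, math.hypot(x, z))    # -pi/2..pi/2 above/below the horizon
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v

# A vertical door-frame edge (fixed x, z; varying height y) maps to a single
# pixel column -- it remains straight in the projection.
cols = {round(equirect_project(1.0, y, 2.0, 2048, 1024)[0], 6) for y in (0.0, 0.5, 1.0)}
assert len(cols) == 1

# A horizontal edge above the camera (fixed y > 0; varying x) curves: its row
# moves back toward the horizontal midline (row 512 here) as it recedes from
# the point nearest the camera, giving the convex appearance described above.
_, v_center = equirect_project(0.0, 1.0, 2.0, 2048, 1024)
_, v_side = equirect_project(2.0, 1.0, 2.0, 2048, 1024)
assert v_center < v_side < 512
```

The assertions mirror the behavior described in the text: straight vertical data stays straight, while straight horizontal data curves relative to the horizontal midline.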
After block 435, the routine continues to block 440 to perform a Building Object-Based Scale Determination Manager (BOBSDM) routine to determine estimated camera heights and optionally other scaling information for the images acquired in block 415 and to use that scaling information to determine building dimension information, with one example of such a routine further illustrated with respect to
If it is instead determined in block 410 that the instructions or other information received in block 405 are not to acquire images and other data representing a building interior, the routine continues instead to block 490 to perform any other indicated operations as appropriate, such as to configure parameters to be used in various operations of the system (e.g., based at least in part on information specified by a user of the system, such as a user of a mobile device who captures one or more building interiors, an operator user of the ICA system, etc.), to respond to requests for generated and stored information (e.g., to identify one or more groups of inter-connected linked panorama images each representing a building or part of a building that match one or more specified search criteria, one or more panorama images that match one or more specified search criteria, etc.), to generate and store inter-panorama image connections between panorama images for a building or other structure (e.g., for each panorama image, to determine directions within that panorama image toward one or more other acquisition locations of one or more other panorama images, such as to enable later display of an arrow or other visual representation with a panorama image for each such determined direction from the panorama image to enable an end-user to select one of the displayed visual representations to switch to a display of the other panorama image at the other acquisition location to which the selected visual representation corresponds), to obtain and store other information about users of the system, to perform any housekeeping tasks, etc.
Following blocks 480 or 490, the routine proceeds to block 495 to determine whether to continue, such as until an explicit indication to terminate is received, or instead only if an explicit indication to continue is received. If it is determined to continue, the routine returns to block 405 to await additional instructions or information, and if not proceeds to block 499 and ends.
The illustrated embodiment of the routine begins at block 505, where information or instructions are received. The routine continues to block 510 to determine whether image information is already available to be analyzed for one or more rooms (e.g., for some or all of an indicated building, such as based on one or more such images received in block 505 as previously generated by the ICA routine), or if such image information instead is to be currently acquired. If it is determined in block 510 to currently acquire some or all of the image information, the routine continues to block 512 to acquire such information, optionally waiting for one or more users or devices to move throughout one or more rooms of a building and acquire panoramas or other images at one or more acquisition locations in one or more of the rooms (e.g., at multiple acquisition locations in each room of the building), optionally along with metadata information regarding the acquisition and/or interconnection information related to movement between acquisition locations, as discussed in greater detail elsewhere herein. Implementation of block 512 may, for example, include invoking an ICA system routine to perform such activities, with
After blocks 512 or 515, the routine continues to block 520, where it determines whether to generate mapping information that includes a linked set of target panorama images (or other images) for a building or other group of rooms (referred to at times as a ‘virtual tour’, such as to enable an end user to move from any one of the images of the linked set to one or more other images to which that starting current image is linked, including in some embodiments via selection of a user-selectable control for each such other linked image that is displayed along with a current image, optionally by overlaying visual representations of such user-selectable controls and corresponding inter-image directions on the visual data of the current image, and to similarly move from that next image to one or more additional images to which that next image is linked, etc.), and if so continues to block 525. The routine in block 525 selects pairs of at least some of the images (e.g., based on the images of a pair having overlapping visual content), and determines, for each pair, relative directions between the images of the pair based on shared visual content and/or on other captured linking interconnection information (e.g., movement information) related to the images of the pair (whether movement directly from the acquisition location for one image of a pair to the acquisition location of another image of the pair, or instead movement between those starting and ending acquisition locations via one or more other intermediary acquisition locations of other images). The routine in block 525 may further optionally use at least the relative direction information for the pairs of images to determine global relative positions of some or all of the images to each other in a common coordinate system, and/or generate the inter-image links and corresponding user-selectable controls as noted above. Additional details are included elsewhere herein regarding creating such a linked set of images.
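As a rough illustration of how such inter-image links and their directions might be represented once relative positions of the acquisition locations are known, consider the following sketch (hypothetical; the `build_tour_links` helper, the positions and the room names are invented for the example, and real embodiments derive relative poses from shared visual content and/or captured linking movement data rather than from given positions):

```python
import math

def build_tour_links(positions, pairs):
    """Given global 2D acquisition-location positions and a list of image
    pairs judged to share visual content, produce bidirectional links
    annotated with a compass-style bearing (degrees, 0 = +y axis, clockwise)
    from each image toward the other -- the direction at which a
    user-selectable control could be overlaid on the panorama's visual data."""
    links = {name: [] for name in positions}
    for a, b in pairs:
        (ax, ay), (bx, by) = positions[a], positions[b]
        bearing_ab = math.degrees(math.atan2(bx - ax, by - ay)) % 360.0
        links[a].append((b, round(bearing_ab, 1)))
        # The reverse link points back along the opposite bearing.
        links[b].append((a, round((bearing_ab + 180.0) % 360.0, 1)))
    return links

positions = {"livingroom": (0.0, 0.0), "kitchen": (3.0, 0.0), "hall": (3.0, 4.0)}
links = build_tour_links(positions, [("livingroom", "kitchen"), ("kitchen", "hall")])
assert links["livingroom"] == [("kitchen", 90.0)]  # kitchen lies due "east"
assert ("hall", 0.0) in links["kitchen"]           # hall lies straight "north"
```

Following a link and then one of the next image's links reproduces the virtual-tour movement described above.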
After block 525, or if it is instead determined in block 520 that the instructions or other information received in block 505 are not to determine a linked set of images, the routine continues to block 535 to determine whether the instructions received in block 505 indicate to generate other mapping information for an indicated building (e.g., a floor plan), and if so the routine continues to perform some or all of blocks 537-585 to do so, and otherwise continues to block 590. In block 537, the routine optionally obtains additional information about the building, such as from activities performed during acquisition and optionally analysis of the images, and/or from one or more external sources (e.g., online databases, information provided by one or more end users, etc.)—such additional information may include, for example, exterior dimensions and/or shape of the building, additional images and/or annotation information acquired corresponding to particular locations external to the building (e.g., surrounding the building and/or for other structures on the same property, from one or more overhead locations, etc.), additional images and/or annotation information acquired corresponding to particular locations within the building (optionally for locations different from acquisition locations of the acquired panorama images or other images), etc.
After block 537, the routine continues to block 540 to select the next room (beginning with the first) for which one or more images (e.g., 360° panorama images) acquired in the room are available, and to analyze the visual data of the image(s) for the room to determine a room shape (e.g., by determining at least wall locations), optionally along with determining uncertainty information about walls and/or other parts of the room shape, and optionally including identifying other wall and floor and ceiling elements (e.g., wall structural elements/features, such as windows, doorways and stairways and other inter-room wall openings and connecting passages, wall borders between a wall and another wall and/or ceiling and/or floor, etc.) and their positions within the determined room shape of the room. In some embodiments, the room shape determination may include using boundaries of the walls with each other and at least one of the floor or ceiling to determine a 2D room shape (e.g., using one or more trained machine learning models), while in other embodiments the room shape determination may be performed in other manners (e.g., by generating a 3D point cloud of some or all of the room walls and optionally the ceiling and/or floor, such as by analyzing at least visual data of the panorama image and optionally additional data captured by an image acquisition device or associated mobile computing device, optionally using one or more of SfM (Structure from Motion) or SLAM (Simultaneous Localization And Mapping) or MVS (Multi-View Stereo) analysis). 
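One simplified way such a 2D room-shape determination could recover metric wall positions from a detected wall-floor boundary in a straightened equirectangular panorama is sketched below (hypothetical; it assumes a known camera height, idealized per-column boundary detections, and the same coordinate conventions as a straightened equirectangular image):

```python
import math

def floor_boundary_to_room_points(boundary_rows, width, height, camera_height_m):
    """Back-project wall/floor boundary pixels from a straightened
    equirectangular panorama onto the floor plane, yielding 2D points that
    trace the room outline around the camera.

    boundary_rows maps pixel column -> pixel row of the detected wall-floor
    border in that column (rows below the horizontal midline). The assumed
    camera height above the floor supplies the metric scale."""
    points = []
    for col, row in boundary_rows.items():
        yaw = (col / width - 0.5) * 2.0 * math.pi
        pitch_down = (row / height - 0.5) * math.pi  # angle below the horizon
        if pitch_down <= 0:
            continue  # a floor boundary must lie below the camera's horizon
        dist = camera_height_m / math.tan(pitch_down)  # horizontal range to wall base
        points.append((dist * math.sin(yaw), dist * math.cos(yaw)))
    return points

# A wall base whose boundary sits 45 degrees below the horizon, straight
# ahead of the camera, must be exactly one camera-height away horizontally.
pts = floor_boundary_to_room_points({512: 768}, 1024, 1024, 1.5)
assert len(pts) == 1
x, y = pts[0]
assert abs(x) < 1e-9 and abs(y - 1.5) < 1e-9
```

Connecting such points column by column yields a polygonal approximation of the wall locations around the acquisition location.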
In addition, the activities of block 545 may further optionally determine and use initial pose information for each of those panorama images (e.g., as supplied with acquisition metadata for the panorama image), and/or obtain and use additional metadata for each panorama image (e.g., acquisition height information of the camera device or other image acquisition device used to acquire a panorama image relative to the floor and/or the ceiling). Additional details are included elsewhere herein regarding determining room shapes and identifying additional information for the rooms. After block 540, the routine continues to block 545, where it determines whether there are more rooms for which to determine room shapes based on images acquired in those rooms, and if so returns to block 540 to select the next such room for which to determine a room shape.
If it is instead determined in block 545 that there are not more rooms for which to generate room shapes, the routine continues to block 560 to determine whether to further generate at least a partial floor plan for the building (e.g., based at least in part on the determined room shape(s) from block 540, and optionally further information regarding how to position the determined room shapes relative to each other). If not, such as when determining only one or more room shapes without generating further mapping information for a building (e.g., to determine the room shape for a single room based on one or more images acquired in the room by the ICA system), the routine continues to block 588. Otherwise, the routine continues to block 565 to retrieve one or more room shapes (e.g., room shapes generated in block 545) or otherwise obtain one or more room shapes (e.g., based on human-supplied input) for rooms of the building, whether 2D or 3D room shapes, and then continues to block 570. In block 570, the routine uses the one or more room shapes to create an initial floor plan (e.g., an initial 2D floor plan using 2D room shapes and/or an initial 3D floor plan using 3D room shapes), such as a partial floor plan that includes one or more room shapes but less than all room shapes for the building, or a complete floor plan that includes all room shapes for the building. If there are multiple room shapes, the routine in block 570 further determines positioning of the room shapes relative to each other, such as by using visual overlap between images from multiple acquisition locations to determine relative positions of those acquisition locations and of the room shapes surrounding those acquisition locations, and/or by using other types of information (e.g., using connecting inter-room passages between rooms, optionally applying one or more constraints or optimizations, etc.). 
In at least some embodiments, the routine in block 570 further refines some or all of the room shapes by generating a binary segmentation mask that covers the relatively positioned room shape(s), extracting a polygon representing the outline or contour of the segmentation mask, and separating the polygon into the refined room shape(s). Such a floor plan may include, for example, relative position and shape information for the various rooms without providing any actual dimension information for the individual rooms or building as a whole, and may further include multiple linked or associated sub-maps (e.g., to reflect different stories, levels, sections, etc.) of the building. The routine further optionally associates positions of the doors, wall openings and other identified wall elements on the floor plan.
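The mask-based refinement described above can be sketched in simplified form (hypothetical; axis-aligned room rectangles and a coarse grid are assumed, and a full implementation would trace the outline cells into an ordered polygon and separate it back into per-room shapes):

```python
def rasterize_rooms(rooms, grid_w, grid_h):
    """Rasterize relatively-positioned axis-aligned room rectangles
    (x0, y0, x1, y1 in grid cells) into one binary segmentation mask, so
    that overlaps and slivers between adjacent room shapes merge into a
    single coherent region."""
    mask = [[False] * grid_w for _ in range(grid_h)]
    for x0, y0, x1, y1 in rooms:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = True
    return mask

def outline_cells(mask):
    """Cells on the contour of the mask: filled cells with at least one
    empty (or out-of-bounds) 4-neighbor."""
    h, w = len(mask), len(mask[0])
    edge = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if not (0 <= nx < w and 0 <= ny < h) or not mask[ny][nx]:
                    edge.add((x, y))
                    break
    return edge

# Two abutting rooms merge into one L-shaped region; the outer corner stays
# on the outline while interior cells -- including the former seam between
# the two room shapes -- do not.
mask = rasterize_rooms([(0, 0, 4, 4), (4, 0, 8, 2)], 10, 6)
edge = outline_cells(mask)
assert (0, 0) in edge
assert (2, 2) not in edge
assert (3, 1) not in edge  # seam cell is interior after the merge
```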
After block 570, the routine optionally performs one or more steps 580-585 to determine and associate additional information with the floor plan. In block 580, the routine optionally estimates the dimensions of some or all of the rooms, such as from analysis of images and/or their acquisition metadata or from overall dimension information obtained for the exterior of the building, and associates the estimated dimensions with the floor plan—it will be appreciated that if sufficiently detailed dimension information were available, architectural drawings, blueprints, etc. may be generated from the floor plan. After block 580, the routine continues to block 583 to optionally associate further information with the floor plan (e.g., with particular rooms or other locations within the building), such as additional existing images with specified positions and/or annotation information. In block 585, if the room shapes from block 545 are not 3D room shapes, the routine further optionally estimates heights of walls in some or all rooms, such as from analysis of images and optionally sizes of known objects in the images, as well as height information about a camera when the images were acquired, and uses that height information to generate 3D room shapes for the rooms. The routine further optionally uses the 3D room shapes (whether from block 540 or block 585) to generate a 3D computer model floor plan of the building, with the 2D and 3D floor plans being associated with each other—in other embodiments, only a 3D computer model floor plan may be generated and used (including to provide a visual representation of a 2D floor plan if so desired by using a horizontal slice of the 3D computer model floor plan).
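The optional lifting of a 2D room shape to a 3D room shape once a wall height has been estimated can be illustrated as a simple extrusion (a hypothetical sketch; real 3D room shapes may also model sloped ceilings and other non-prismatic geometry):

```python
def extrude_room_shape(polygon_2d, wall_height_m):
    """Extrude a 2D room outline into a simple 3D room shape by pairing
    floor and ceiling vertices and emitting one wall quad per polygon edge."""
    floor = [(x, y, 0.0) for x, y in polygon_2d]
    ceiling = [(x, y, wall_height_m) for x, y in polygon_2d]
    walls = []
    n = len(polygon_2d)
    for i in range(n):
        j = (i + 1) % n
        walls.append((floor[i], floor[j], ceiling[j], ceiling[i]))
    return {"floor": floor, "ceiling": ceiling, "walls": walls}

# A 4 m x 3 m rectangular room with an estimated 2.4 m wall height.
room = extrude_room_shape([(0, 0), (4, 0), (4, 3), (0, 3)], 2.4)
assert len(room["walls"]) == 4
assert room["ceiling"][0] == (0, 0, 2.4)
```

Taking a horizontal slice of such a 3D shape recovers the original 2D outline, matching the note above that a 2D floor plan can be presented from a 3D computer model.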
After block 585, or if it is instead determined in block 560 not to determine a floor plan, the routine continues to block 588 to store the determined room shape(s) and/or generated mapping information and/or other generated information, to optionally provide some or all of that information to one or more recipients (e.g., to block 440 of routine 400 if invoked from that block), and to optionally further use some or all of the determined and generated information, such as to provide the generated 2D floor plan and/or 3D computer model floor plan for display on one or more client devices and/or to one or more other devices for use in automating navigation of those devices and/or associated vehicles or other entities, to similarly provide and use information about determined room shapes and/or a linked set of panorama images and/or about additional information determined about contents of rooms and/or passages between rooms, etc.
If it is instead determined in block 535 that the information or instructions received in block 505 are not to generate mapping information for an indicated building, the routine continues instead to block 590 to perform one or more other indicated operations as appropriate. Such other operations may include, for example, receiving and responding to requests for previously generated floor plans and/or previously determined room shapes and/or other generated information (e.g., requests for such information for display on one or more client devices, requests for such information to provide it to one or more other devices for use in automated navigation, etc.), obtaining and storing information about buildings for use in later operations (e.g., information about dimensions, numbers or types of rooms, total square footage, adjacent or nearby other buildings, adjacent or nearby vegetation, exterior images, etc.), etc.
After blocks 588 or 590, the routine continues to block 595 to determine whether to continue, such as until an explicit indication to terminate is received, or instead only if an explicit indication to continue is received. If it is determined to continue, the routine returns to block 505 to wait for and receive additional instructions or information, and otherwise continues to block 599 and ends.
While not illustrated with respect to the automated operations shown in the example embodiment of
The illustrated embodiment of the routine begins at block 605, where one or more images for a building are received (e.g., multiple images captured throughout multiple rooms of a building). The routine continues to block 610 to analyze each image to identify any visible objects of one or more defined types (e.g., to identify the four corners of doorways; to identify at least the bottom and top of floor cabinets, such as for each pixel column having visual data showing such a cabinet, etc.). In block 615, the routine then, for each identified physical object, transforms points in the image corresponding to the object from a local relative coordinate system for the image to an actual physical coordinate system, including to determine the estimated camera height during the image acquisition using assumed physical dimensions (e.g., height) of that object's defined type, and to use that estimated camera height to determine actual distances corresponding to the transformed points. In block 620, the routine then, if more than one physical object was identified in the one or more images, compares some or all of the determined estimated camera heights (e.g., all heights for all objects of multiple object types across multiple images, multiple heights for different objects of the same type in one or more images, multiple heights for objects of different types in the same image, etc.), and determines if they satisfy one or more defined validation criteria. In some embodiments and situations, an intermediate aggregated estimated camera height may first be determined for each object type, such as by combining estimated camera heights for multiple objects of that type in one or more images, with multiple such intermediate aggregated estimated camera heights then compared to determine if they satisfy the one or more defined validation criteria.
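The camera-height estimation of block 615 can be illustrated with the 'doorway' object type: if the angles from the camera's horizon to a doorway's top and bottom edges are recovered from pixel rows in a straightened equirectangular image, an assumed standard doorway height fixes the metric scale (a hypothetical sketch; the 2.0 m door height and the single-doorway geometry are assumptions chosen for the example):

```python
import math

def camera_height_from_doorway(top_pitch_up, bottom_pitch_down,
                               assumed_door_height_m=2.0):
    """Estimate camera height above the floor from one identified doorway.

    top_pitch_up / bottom_pitch_down are the angles (radians) from the
    camera's horizon up to the doorway's top edge and down to its bottom
    edge. The assumed door height is the defined physical dimension for
    the 'doorway' object type."""
    t_up, t_down = math.tan(top_pitch_up), math.tan(bottom_pitch_down)
    # Both edges lie at the same horizontal distance d from the camera, so
    #   camera_height = d * t_down   and   door_height = d * (t_up + t_down).
    d = assumed_door_height_m / (t_up + t_down)
    return d * t_down

# Synthetic check: camera at 1.5 m, doorway 3 m away, 2 m tall.
d, h_cam, h_door = 3.0, 1.5, 2.0
up = math.atan2(h_door - h_cam, d)   # top edge sits 0.5 m above the horizon
down = math.atan2(h_cam, d)          # bottom edge sits 1.5 m below it
est = camera_height_from_doorway(up, down, assumed_door_height_m=h_door)
assert abs(est - h_cam) < 1e-9
```

With the camera height fixed, the same transform scales every other point visible in the image, which is how the estimated camera height yields actual distances for the transformed points.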
In block 625, the routine then determines if the validation criteria are satisfied (or if no validation criteria are used, such as if a single estimated camera height was determined in block 615), and if so proceeds to perform blocks 630-640, and otherwise proceeds to block 650. In block 630, the routine uses the compared estimated camera heights to determine the final estimated camera height for the one or more images (e.g., by determining a mean or other average of them; by selecting a single estimated camera height, such as a minimum or maximum; etc.), and in block 635 proceeds to determine building dimensions for portions of the building visible in those images based on the final estimated camera height (e.g., lengths/widths and/or heights of walls, dimensions of other visible objects and/or elements, overall room sizes, overall building dimensions if the images cover all of the building, etc.). In block 640, the routine then optionally further validates the determined camera height and resulting building dimensions in an alternative manner, such as by using a floor plan generated from the images (optionally waiting until it is generated if not yet done), fitting an exterior of the floor plan to a visible exterior of the building in an overhead image, and using size information associated with the overhead image (e.g., GPS data) to determine the resulting overall building dimensions, from which the dimensions of particular rooms may be determined and matched to corresponding room dimensions determined from using the final estimated camera height. If such a comparison is performed and the building and/or room dimensions from the two techniques are not a match (e.g., differ by more than a defined threshold amount), final building dimension information to use may be determined in one or more manners, such as by selecting the results from one of the two techniques (e.g., a predefined preferred one of the two), by averaging or otherwise combining the dimensions from the two techniques, by requesting a separate review and determination (e.g., by a human operator user), etc. If it is instead determined in block 625 that the validation criteria are not satisfied, the routine continues to block 650 to perform a technique similar to that discussed with respect to block 640, with the resulting information used instead of information based on the estimated camera heights.
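The comparison and aggregation of blocks 620-630 may likewise be sketched in a non-limiting manner; the relative-spread validation criterion and the use of a median as the final estimated camera height are illustrative choices among those discussed above, with a `None` result standing in for the fall-back to block 650:

```python
import statistics

def validate_and_aggregate(heights, max_spread_fraction=0.1):
    """Compare per-object estimated camera heights against a validation
    criterion and, if satisfied, combine them into a final estimate.

    Returns the final estimated camera height, or None if the validation
    criteria are not satisfied (corresponding to the fall-back of block 650).
    """
    if len(heights) == 1:
        return heights[0]  # a single estimate; nothing to compare
    median = statistics.median(heights)
    # Validation criterion (an illustrative choice): every estimate must lie
    # within a fixed fraction of the median of all estimates.
    if max(abs(h - median) for h in heights) > max_spread_fraction * median:
        return None
    return median  # the median serves as the final estimated camera height
```

In practice the `heights` sequence may hold per-object estimates from one image or from multiple images, or the intermediate per-object-type aggregates discussed with respect to block 620.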
After blocks 640 or 650, the routine continues to block 695 to determine whether to continue, such as until an explicit indication to terminate is received, or instead only if an explicit indication to continue is received. If it is determined to continue, the routine returns to block 605 to wait for and receive additional images to analyze, and otherwise continues to block 699 and ends.
The illustrated embodiment of the routine begins at block 705, where instructions or information are received. At block 710, the routine determines whether the received instructions or information in block 705 are to display determined information for one or more target buildings, and if so continues to block 715 to determine whether the received instructions or information in block 705 are to select one or more target buildings using specified criteria (e.g., based at least in part on an indicated building), and if not continues to block 725 to obtain an indication of a target building to use from the user (e.g., based on a current user selection, such as from a displayed list or other user selection mechanism; based on information received in block 705; etc.). Otherwise, if it is determined in block 715 to select one or more target buildings from specified criteria (e.g., based at least in part on an indicated building), the routine continues instead to block 720, where it obtains indications of one or more search criteria to use, such as from current user selections or as indicated in the information or instructions received in block 705, and then searches stored information about buildings to determine one or more of the buildings that satisfy the search criteria or otherwise obtains indications of one or more such matching buildings, such as based at least in part on building dimensions information generated by the BOBSDM system. 
In the illustrated embodiment, the routine then further selects a best match target building from the one or more returned buildings (e.g., the returned other building with the highest similarity or other matching rating for the specified criteria, or using another selection technique indicated in the instructions or other information received in block 705), while in other embodiments the routine may instead present multiple candidate buildings that satisfy the search criteria (e.g., in a ranked order based on degree of match) and receive a user selection of the target building from the multiple candidates.
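A non-limiting sketch of the criteria-based search and best-match selection of block 720 follows; representing search criteria as per-attribute predicates and scoring one point per satisfied criterion are illustrative assumptions rather than features of any described embodiment:

```python
def select_best_match(buildings, criteria):
    """Search stored building records for those satisfying search criteria
    and select a best match (cf. block 720 and the subsequent selection).

    buildings -- iterable of dicts of stored building attributes
    criteria  -- mapping of attribute name to a predicate over its value
    """
    def score(building):
        # One point per satisfied criterion; missing attributes score zero.
        return sum(1 for attr, predicate in criteria.items()
                   if predicate(building.get(attr)))
    matching = [b for b in buildings if score(b) > 0]
    # The highest-scoring building is treated as the best match, if any;
    # alternatively, all of `matching` could be presented in ranked order.
    return max(matching, key=score) if matching else None
```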
After blocks 720 or 725, the routine continues to block 735 to retrieve a floor plan for the target building and/or other generated mapping information for the building (e.g., a group of inter-linked images for use as part of a virtual tour), and optionally indications of associated linked information for the building interior and/or a surrounding location external to the building, and/or information about one or more generated explanations or other descriptions of why the target building is selected as matching specified criteria (e.g., based in part or in whole on one or more other indicated buildings), and selects an initial view of the retrieved information (e.g., a view of the floor plan, a particular room shape, a particular image, etc., optionally along with generated explanations or other descriptions of why the target building is selected to be matching if such information is available). In block 740, the routine then displays or otherwise presents the current view of the retrieved information, and waits in block 745 for a user selection. After a user selection in block 745, if it is determined in block 750 that the user selection corresponds to adjusting the current view for the current target building (e.g., to change one or more aspects of the current view), the routine continues to block 755 to update the current view in accordance with the user selection, and then returns to block 740 to update the displayed or otherwise presented information accordingly. 
The user selection and corresponding updating of the current view may include, for example, displaying or otherwise presenting a piece of associated linked information that the user selects (e.g., a particular image associated with a displayed visual indication of a determined acquisition location, such as to overlay the associated linked information over at least some of the previous display; a particular other image linked to a current image and selected from the current image using a user-selectable control overlaid on the current image to represent that other image; etc.), and/or changing how the current view is displayed (e.g., zooming in or out; rotating information if appropriate; selecting a new portion of the floor plan to be displayed or otherwise presented, such as with some or all of the new portion not being previously visible, or instead with the new portion being a subset of the previously visible information; etc.). If it is instead determined in block 750 that the user selection is not to display further information for the current target building (e.g., to display information for another building, to end the current display operations, etc.), the routine continues instead to block 795, and returns to block 705 to perform operations for the user selection if the user selection involves such further operations.
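The display loop of blocks 740-755 may be sketched, again in a non-limiting manner, as follows; the dictionary view representation and the selection protocol are illustrative assumptions:

```python
def presentation_loop(retrieved_info, present, get_user_selection):
    """Present the current view of retrieved building information, wait for
    user selections, and update the view until a non-view selection is made.

    present            -- callable that displays the current view (block 740)
    get_user_selection -- callable that waits for a selection (block 745)
    """
    view = {"content": retrieved_info, "zoom": 1.0}  # initial view (block 735)
    while True:
        present(view)                     # block 740
        selection = get_user_selection()  # block 745
        if selection.get("action") == "adjust_view":   # block 750
            view.update(selection["changes"])          # block 755
        else:
            # e.g., a request for another building or to end display operations
            return selection
```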
If it is instead determined in block 710 that the instructions or other information received in block 705 are not to present information representing a building, the routine continues instead to block 760 to determine whether the instructions or other information received in block 705 correspond to identifying other images (if any) corresponding to one or more indicated target images, and if so continues to blocks 765-770 to perform such activities. In particular, the routine in block 765 receives the indications of the one or more target images for the matching (such as from information received in block 705 or based on one or more current interactions with a user) along with one or more matching criteria (e.g., an amount of visual overlap), and in block 770 identifies one or more other images (if any) that match the indicated target image(s), such as by interacting with the ICA and/or MIGM systems to obtain the other image(s). The routine then displays or otherwise provides information in block 770 about the identified other image(s), such as to provide information about them as part of search results, to display one or more of the identified other image(s), etc. If it is instead determined in block 760 that the instructions or other information received in block 705 are not to identify other images corresponding to one or more indicated target images, the routine continues instead to block 775 to determine whether the instructions or other information received in block 705 correspond to obtaining and providing guidance acquisition instructions during an image acquisition session with respect to one or more indicated target images (e.g., a most recently acquired image), and if so continues to block 780, and otherwise continues to block 790. 
In block 780, the routine obtains information about guidance acquisition instructions of one or more types, such as by interacting with the ICA system, and displays or otherwise provides information in block 780 about the guidance acquisition instructions, such as by overlaying the guidance acquisition instructions on a partial floor plan and/or recently acquired image in manners discussed in greater detail elsewhere herein.
In block 790, the routine continues instead to perform other indicated operations as appropriate, such as to configure parameters to be used in various operations of the system (e.g., based at least in part on information specified by a user of the system, such as a user of a mobile device who acquires one or more building interiors, an operator user of the BOBSDM and/or MIGM systems, etc., including for use in personalizing information display for a particular user in accordance with his/her preferences), to obtain and store other information about users of the system, to respond to requests for generated and stored information, to perform any housekeeping tasks, etc.
Following blocks 770 or 780 or 790, or if it is determined in block 750 that the user selection does not correspond to the current building, the routine proceeds to block 795 to determine whether to continue, such as until an explicit indication to terminate is received, or instead only if an explicit indication to continue is received. If it is determined to continue (including if the user made a selection in block 745 related to a new building to present), the routine returns to block 705 to await additional instructions or information (or to continue directly on to block 735 if the user made a selection in block 745 related to a new building to present), and if not proceeds to block 799 and ends.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It will be appreciated that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. It will be further appreciated that in some implementations the functionality provided by the routines discussed above may be provided in alternative ways, such as being split among more routines or consolidated into fewer routines. Similarly, in some implementations illustrated routines may provide more or less functionality than is described, such as when other illustrated routines instead lack or include such functionality respectively, or when the amount of functionality that is provided is altered. In addition, while various operations may be illustrated as being performed in a particular manner (e.g., in serial or in parallel, or synchronous or asynchronous) and/or in a particular order, in other implementations the operations may be performed in other orders and in other manners. Any data structures discussed above may also be structured in different manners, such as by having a single data structure split into multiple data structures and/or by having multiple data structures consolidated into a single data structure. Similarly, in some implementations illustrated data structures may store more or less information than is described, such as when other illustrated data structures instead lack or include such information respectively, or when the amount or types of information that is stored is altered.
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by corresponding claims and the elements recited by those claims. In addition, while certain aspects of the invention may be presented in certain claim forms at certain times, the inventors contemplate the various aspects of the invention in any available claim form. For example, while only some aspects of the invention may be recited as being embodied in a computer-readable medium at particular times, other aspects may likewise be so embodied.