Indoor navigation using a portable computer (e.g., cellular phone) is a subset of overall navigation. In indoor navigation, the ability for the portable computer to receive signals from global positioning system (GPS) satellites may be limited, and thus portable computers may determine indoor location using signals from locally placed beacon devices, such as Bluetooth Low Energy (BLE) devices or Ultra Wide Band (UWB) devices. Knowing the location of the portable computer relative to beacon devices is only part of the process. Conveying the location information to the user may also involve showing the user's position on an indoor map, and perhaps even providing route guidance to arrive at the endpoint location.
Indoor maps may be created from architectural drawings, such as CAD drawings generated by the architect as the basis to build out the indoor space. However, there are no universal standards for the contents of architectural drawings. For example, one architect may call a room a “meeting room” and a second architect may call the same room a “conference room.” Moreover, the symbols used by one architect to depict objects (e.g., furniture) are likely not the same as the symbols used by another architect to depict the same objects.
Notwithstanding the lack of universal standards for architectural drawings, the CAD drawings are merely data depicting points, lines, polygons, and text stored within the computer file that is the CAD drawing. The points, lines, polygons, and text do not inherently identify the depicted space or location within the building. For example, a set of lines within the CAD drawing may depict a room, and another set of lines may depict a door into the room. However, the lines are not inherently a room, as lines could be used to depict many different objects (e.g., outside walls, inside walls, conference tables). At very fine detail, it may be that the lines that depict the walls of the room do not even touch at their respective ends.
For these reasons, and many others, creating indoor maps from the architectural CAD drawings is a cumbersome process. While some “automated” tools exist, those tools require significant human interaction at many stages throughout the process. Thus, any method or system that increases the automation of creating indoor maps from CAD drawings would provide a competitive advantage in the marketplace.
One example is a computer-implemented method of extracting room names from CAD drawings, the method comprising: preprocessing, by a device, a CAD drawing to create a text database containing text from the CAD drawing and associations of the text with locations within the CAD drawing; determining, by a device, a floor depicted in the CAD drawing, the determining results in a floor-level outline; identifying, by a device, a plurality of room-level outlines within the floor-level outline, the plurality of room-level outlines corresponds to a respective plurality of rooms; selecting, by a device, a name of a first room from the plurality of rooms, the selecting based on text within the text database; and creating, by a device, an indoor map including the name of the first room, the name of the first room associated with a location of the first room within the floor-level outline.
Another example is a computer-implemented method of determining a geo-location, the method comprising: determining, by a device, a floor-level outline of a floor depicted in a CAD drawing; receiving, by a device, an approximate geo-location of a building to which the CAD drawing applies; obtaining, by a device, an overhead image of a target area encompassing the approximate geo-location, the overhead image comprising a plurality of buildings within the target area; identifying, by a device, a plurality of building footprints within the target area; calculating, by a device, a plurality of distance functions that relate the floor-level outline to each of the plurality of building footprints, the calculating creates a plurality of similarity scores; selecting, by a device, a building footprint from the plurality of building footprints, the selecting based on the plurality of similarity scores; and calculating, by a device, a final geo-location of the building corresponding to the building footprint.
Yet another example is a computer-implemented method of machine learning, the method comprising: receiving, by a device, a first set of updates to a first indoor map, the first indoor map previously created by a production machine-learning system having a production map accuracy; training, by a device, a supporting machine-learning system using the first set of updates to the first indoor map; then applying, by a device, test data to the supporting machine-learning system, the applying results in a first-evaluation indoor map with a first-evaluation map accuracy; and when the first-evaluation map accuracy is within a predetermined window above the production map accuracy, refraining, by a device, from updating the production machine-learning system based on the first set of updates.
Another example is a computer-implemented method of creating an indoor map from a CAD drawing, the method comprising: preprocessing, by a device, an original CAD drawing to create a modified CAD drawing, a text database containing text from the original CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; determining, by a device, a floor depicted in the CAD drawing by applying the CAD raster-image, the CAD vector-image, and the text database to a floor-level machine-learning algorithm, the determining results in a floor-level outline; sensing, by a device, furniture depicted on the floor by applying the floor-level outline, the CAD vector-image, and the text database to a furniture-level machine-learning algorithm, the sensing creates a set of furniture entities; identifying, by a device, each room depicted in the CAD drawing by applying the floor-level outline, the set of furniture entities, the CAD vector-image, and the text database to a room-level machine-learning algorithm, the identifying creates a plurality of room-level outlines; and creating, by a device, an indoor map by combining the set of furniture entities and the plurality of room-level outlines.
Yet another example is a computer-implemented method of creating an indoor map from a CAD drawing, the method comprising: preprocessing, by a device, an original CAD drawing and thereby creating a modified CAD drawing, a text database containing text from the modified CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; creating, by a device, a floor-level bounding line that encircles a floor depicted in the modified CAD drawing, the creating by applying the CAD raster-image, the CAD vector-image, and the text database to a floor-level machine-learning algorithm; applying, by a device, an active contour model to an initial floor-level segmentation created from the floor-level bounding line, the active contour model creates an intermediate floor outline that delineates the floor; removing, by a device, drawing-entities from the modified CAD drawing that are a predetermined distance away from the intermediate floor outline to create a final floor outline; and creating, by a device, an indoor map for the floor using the final floor outline.
Another example is a computer-implemented method of creating an indoor map from a CAD drawing, the method comprising: preprocessing, by a device, an original CAD drawing to create a modified CAD drawing, a text database containing text from the original CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; determining, by a device, a floor depicted in the CAD drawing, the determining results in a floor-level bounding line; sensing, by a device, furniture depicted on the floor by applying the floor-level bounding line, the CAD vector-image, and the text database to machine-learning algorithms, the sensing results in a plurality of furniture entities and associated location information; identifying, by a device, each room depicted in the CAD drawing within the floor-level bounding line, the identifying results in a plurality of room outlines; and creating, by a device, an indoor map for the floor by combining the plurality of furniture entities and associated location information with the plurality of room outlines.
Yet another example is a computer-implemented method of creating an indoor map from a CAD drawing, the method comprising: preprocessing, by a device, an original CAD drawing to create a modified CAD drawing, a text database containing text from the original CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; creating, by a device, a floor-level outline; sensing, by a device, furniture depicted on the floor, the sensing creates a set of furniture entities; identifying, by a device, a room depicted in the CAD drawing by: applying the floor-level outline and the CAD vector-image to a first machine-learning algorithm to produce a room-level bounding line and a first probability distribution regarding identity of the room; applying the room-level bounding line and the text database to a second machine-learning algorithm to produce a second probability distribution regarding identity of the room; applying the first and second probability distributions to a third machine-learning algorithm to generate a room identity; and selecting, based on the room-level bounding line, a room-level outline; and creating, by a device, the indoor map for a floor using the floor-level outline, the room-level outline, and the room identity.
While the preceding examples are presented as computer-implemented methods, such examples may be equivalently stated as non-transitory computer-readable mediums and/or computer systems.
For a detailed description of example embodiments, reference will now be made to the accompanying drawings in which:
Various terms are used to refer to particular system components. Different companies may refer to a component by different names—this document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple” or “couples” is intended to mean either an indirect or a direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
“Bounding box” shall mean a closed line segment having four vertices.
“Bounding line” shall mean a closed line segment having four or more vertices. Thus, a bounding line may be, in some cases, a bounding box.
“CAD” shall mean computer-aided design.
“CAD drawing” shall mean a computer file containing data that, when rendered by a CAD program, shows a design. One example file format for a CAD drawing is the DXF format.
“Vector-image” shall mean a computer file containing data indicating relative locations of geometric shapes that, when rendered by a computer program, show a design.
“Raster-image” shall mean a computer file containing data indicating pixels of an array of a raster-graphics image that, when rendered by a computer program, show a design.
“Machine-learning algorithm” shall mean a computer algorithm, such as a convolutional neural network, that creates a mathematical or computational model of relationships between input data and output data based on being trained by a set of training data, and then applies the mathematical or computational model to non-training data to produce predictions.
“Active contour model,” sometimes referred to as a snake algorithm, shall mean a deformable model that deforms in the direction of gradients, and stops deformation at high gradient locations.
“Generative adversarial network” or “GAN” shall mean two or more machine-learning algorithms (e.g., two neural networks) that work together (e.g., in an adversarial sense) to produce a room-level bounding-line.
The terms “input” and “output” when used as nouns refer to connections (e.g., software), and shall not be read as verbs requiring action. In systems implemented in software, these “inputs” and “outputs” define parameters read by or written by, respectively, the instructions implementing the function.
“Assert” shall mean changing the state of a Boolean signal. Boolean signals may be asserted high or with a higher voltage, and Boolean signals may be asserted low or with a lower voltage, at the discretion of the circuit designer. Similarly, “de-assert” shall mean changing the state of the Boolean signal to a voltage level opposite the asserted state.
“GeoJSON” shall mean an open standard geospatial data interchange format that represents geographic features and related non-spatial attributes.
In the claims, reference to “a processor” and later to “the processor”, in conformance with antecedent basis requirements, shall not be read to require only one processor. The reference to “a processor” may be one or more processors, and similarly the later reference with proper antecedent to “the processor” may likewise be one or more processors.
The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
Various examples are directed to systems and methods for automating conversion of drawings to indoor maps and plans. The example process may be conceptually, though not necessarily physically, separated into preprocessing of the input CAD drawing, performing floor detection (e.g., first floor or story, second floor or story) within the CAD drawing, performing furniture detection for each floor, performing room detection for each floor, and then generating an indoor map based on the outputs from each stage. In many cases, the processing may proceed with little or no human interaction, and thus greatly improves the speed and quality of creating indoor maps from CAD drawings. The description now turns to a high-level overview.
The next example stage in the processing is the floor detection 104 stage. As the name implies, the floor detection 104 is used to detect stories or floors shown in the CAD drawings, and to generate a floor outline for each floor. For example, the original CAD drawing may show the layout of one or more floors of a building within a single computer file. The floor detection 104 identifies each floor shown in the CAD drawings. For buildings that have a uniform exterior footprint for all floors, the floor detection may be a relatively straightforward task—once the first floor outline is determined, all the floors have the same outline. However, for buildings that change shape with changing exterior elevation, or for CAD drawings in which only partial floors are depicted, determining the outline for each floor is more challenging. In various examples, the floor detection 104 may be implemented by applying the CAD raster-image, the CAD vector-image, and the text database to a floor-level machine-learning algorithm.
Still referring to
The next example stage in the processing is room detection 108. As the name implies, the room detection 108 is used to identify each room shown on each floor in the CAD drawing. The identifying may have two conceptual components: identifying a room outline; and identifying the intended use of the room (e.g., executive office, conference room, water closet). In various examples, the room detection 108 may be implemented by applying the output of the floor detection 104 (e.g., a floor outline), the output of the furniture detection 106 (e.g., furniture outlines), the CAD vector-image, and the text database to a room-level machine-learning algorithm. The room-level machine-learning algorithm may create a plurality of room outlines, one each for each room on each floor. The room outlines, and corresponding room identities, may be supplied to the export 110 stage.
Still referring to
A couple of points before proceeding. Though the example flow diagram of
Preprocessing
The example workflow starts 200 and turns to parsing the original CAD drawing 202 to create scale information. In the example of
The next step in the example process is identifying blocks and polylines 206, and similarly detecting text, arrows, and hatches 208. These two steps may be described as finding, within the CAD drawing, various drawing-entities of interest, and then where possible deleting unnecessary entities 210. The steps are described visually below in reference to example drawing objects, but for now the steps 206, 208, and 210 may be considered to be finding and removing drawing-entities within the CAD drawing that obfuscate underlying floor-level and room-level detection (e.g., leader lines, text, words), and likewise saving information that may be helpful later in identifying drawing objects (e.g., chair, desk) or identifying the purpose of an interior space (e.g., conference room, water closet). For example, the method of
In example cases, the text information extracted from the CAD drawing is used to create a text database with associations, and the resultant is shown in
Thus, the result of the preprocessing is a modified CAD drawing with leader lines, text, duplicate entities, and cross-hatching removed. The text information, and associated location information, becomes the text database 214. The modified CAD drawing is used as the basis to create the CAD vector-image 218 and the CAD raster-image 222. The discussion now turns to the floor detection.
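Before moving on, the text database 214 may be illustrated with a minimal sketch that pairs each extracted string with its insertion point, so that later stages can look up the label nearest a given room or furniture entity. The class names, fields, and sample strings below are hypothetical and not mandated by the specification:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TextEntry:
    text: str   # string extracted from the CAD drawing (e.g., "CONFERENCE ROOM")
    x: float    # insertion point of the text in drawing coordinates
    y: float

class TextDatabase:
    """Stores extracted text with its drawing location, so later stages can
    look up which label falls nearest a given room or furniture entity."""
    def __init__(self):
        self.entries = []

    def add(self, text, x, y):
        self.entries.append(TextEntry(text, x, y))

    def nearest(self, x, y):
        # Return the text entry closest to the query location.
        return min(self.entries, key=lambda e: hypot(e.x - x, e.y - y))

db = TextDatabase()
db.add("CONFERENCE ROOM", 12.0, 8.0)
db.add("WC", 40.0, 3.0)
label = db.nearest(13.0, 9.0).text   # closest label to a room centroid
```

In this sketch, a room-detection stage could query the database with a candidate room centroid to retrieve the most plausible name.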
Floor Detection
Returning briefly to
In example systems, each floor bounding line may be a raster-image that indicates the location of a floor shown in the CAD drawing. The floor bounding line may not “tightly” show the exterior footprint of the floor. Rather, in some examples the floor bounding line depicts a polygon (e.g., a square) that fully encircles an identified floor, even though the floor may have a smaller and more complex exterior footprint. Stated otherwise, in the coordinate space of the CAD drawing, there may be non-zero offsets between the actual floor outline and an inside dimension of the floor bounding line. The specification now turns to the floor post processing 602.
The floor post processing 602 is summarized here first, and then described in greater detail below. In summary, starting from a floor bounding line created by the floor-level machine-learning algorithm, the floor post processing 602 creates an intermediate floor outline (e.g., raster-image) that indicates the outer footprint of the floor. The intermediate floor outline is then simplified, converted into an intermediate CAD drawing, and the intermediate CAD drawing is overlaid with the modified CAD drawing. The example floor post processing then removes drawing-entities that are a predetermined distance outside the intermediate floor outline, and removes drawing-entities that are a predetermined distance within the intermediate floor outline. Stated otherwise, the example floor post processing 602 removes drawing-entities from the modified CAD drawing that are a predetermined distance away from the intermediate floor outline. The drawing-entities that remain are used to create a reduced-vertex shape, and thus a final floor outline, from which an indoor map may be created.
Active Contour Model
Referring to
Smoothing the Bounding Line
Returning to
In particular, the example method takes as input the bounding line, the scale information 204, simplification thresholds 730, and the converged snake 722 or initial floor outline created by the active contour model 700. Using the initial floor outline, the scale information 204, and the simplification thresholds 730, the example method calculates a distance 732 corresponding to one pixel of the raster-image comprising the floor outline. The next step in the example method is polygon simplification 734 (e.g., using the Douglas-Peucker algorithm). The resultant smoothed floor outline (e.g., again a raster-image) may then be converted 736 into a CAD drawing for use in the further processing.
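The polygon simplification 734 may be sketched with the classic Douglas-Peucker algorithm, using the per-pixel distance 732 as the simplification tolerance. The coordinates and the tolerance value below are hypothetical:

```python
from math import hypot

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline: drop vertices whose deviation from the
    chord between the endpoints is below the tolerance."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest vertex and recurse on the two halves.
    left = douglas_peucker(points[:idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right

# Tolerance expressed as the drawing distance covered by one raster pixel
# (hypothetical value derived from the scale information 204).
pixel_distance = 0.5
outline = [(0, 0), (5, 0.1), (10, 0), (10, 10), (0, 10)]
simplified = douglas_peucker(outline, pixel_distance)
```

Here the near-collinear vertex at (5, 0.1) deviates less than one pixel from the wall line and is dropped, while the true corners survive.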
Returning to
Filtering Entities
Turning to
The filtering of the example method may further comprise operations on drawing-entities that reside between the inflated bounding line and the deflated bounding line. That is to say, the area between the inflated bounding line and the deflated bounding line (sometimes referred to as the snake patch) may contain many desirable drawing-entities (e.g., lines showing the exterior walls), but may also contain undesirable drawing-entities. The undesirable drawing-entities may include small stray lines (e.g., remnants of leader lines), duplicate lines (e.g., two coextensive lines representing a single wall), and data errors that manifest as standalone vertices—termed spanned entities. Thus, the various spanned entities may be removed from the modified CAD drawing.
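Removal of the undesirable entities may be sketched as a single pass that drops segments shorter than a threshold and collapses direction-independent duplicates. The threshold and sample segments below are hypothetical:

```python
from math import hypot

def segment_length(seg):
    (x1, y1), (x2, y2) = seg
    return hypot(x2 - x1, y2 - y1)

def filter_spanned_entities(segments, min_length):
    """Drop short stray segments (e.g., leader-line remnants) and exact
    duplicates drawn in either direction. min_length is a hypothetical
    threshold in drawing units."""
    seen, kept = set(), []
    for seg in segments:
        a, b = seg
        key = (min(a, b), max(a, b))   # direction-independent key
        if segment_length(seg) < min_length:
            continue                   # short spanned entity
        if key in seen:
            continue                   # duplicate (e.g., doubled wall line)
        seen.add(key)
        kept.append(seg)
    return kept

segments = [
    ((0, 0), (10, 0)),    # wall
    ((10, 0), (0, 0)),    # same wall drawn twice, reversed
    ((3, 1), (3.2, 1)),   # leader-line remnant, 0.2 units long
    ((10, 0), (10, 8)),   # wall
]
clean = filter_spanned_entities(segments, min_length=0.5)
```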
Referring specifically to the flow diagram of
The inflated bounding line 1008 and deflated bounding line 1010 delineate several categories of drawing-entities. For example, tree 1014 resides outside the inflated bounding line 1008. Within the snake patch 1012 resides various drawing-entities, such as outer walls of the floor (e.g., walls 1016 and 1018), short spanned entities 1020, and duplicate entity 1022 (i.e., a duplicate wall line). In practice, the duplicate entity 1022 may be hidden by being directly “behind” a line or set of lines showing the wall at that location, but duplicate entity 1022 is offset for clarity of the drawing. Inside the deflated bounding line 1010 resides various lines defining the rooms on the floor. In the example method, the drawing-entities that reside outside the inflated bounding line 1008, such as tree 1014, may be filtered or removed. Moreover, drawing-entities that reside inside the deflated bounding line 1010, including entities that cross the deflated bounding line 1010, may be filtered or removed. For example, all the interior lines (e.g., the lines depicting the rooms on the floor, furniture), may be removed.
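As a simplified sketch of this filtering, the example below models the inflated and deflated bounding lines as axis-aligned rectangles (the general case would use arbitrary polygons) and keeps only segments whose vertices fall in the snake patch between them. Entities that merely cross the deflated line are treated as interior here, which is a simplification of the text:

```python
def in_rect(pt, rect):
    # rect = (xmin, ymin, xmax, ymax)
    x, y = pt
    xmin, ymin, xmax, ymax = rect
    return xmin <= x <= xmax and ymin <= y <= ymax

def snake_patch_filter(segments, bounding, offset):
    """Keep only segments lying between the inflated and deflated
    bounding rectangles (the snake patch)."""
    xmin, ymin, xmax, ymax = bounding
    inflated = (xmin - offset, ymin - offset, xmax + offset, ymax + offset)
    deflated = (xmin + offset, ymin + offset, xmax - offset, ymax - offset)
    kept = []
    for seg in segments:
        # Discard entities outside the inflated line (e.g., landscaping).
        if not all(in_rect(p, inflated) for p in seg):
            continue
        # Discard entities inside the deflated line (rooms, furniture).
        if all(in_rect(p, deflated) for p in seg):
            continue
        kept.append(seg)
    return kept

bounding = (0, 0, 20, 10)      # initial floor outline, drawing units
walls = [((0, 0), (20, 0))]    # outer wall: stays in the snake patch
tree = [((25, 5), (26, 6))]    # outside the inflated line: removed
room = [((5, 5), (8, 5))]      # interior room line: removed
kept = snake_patch_filter(walls + tree + room, bounding, offset=1.0)
```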
Still referring to
Converging Remaining Entities
Returning to
More precisely then, the example method deletes short spanned entities, extra vertices, and duplicate entities 750. Before proceeding, a brief digression into shortcomings of CAD drawings is in order. It may be that, within the original CAD drawing, various drawing-entities (e.g., lines) that represent the exterior walls do not fully touch or “connect up.” At a drawing resolution where an entire floor is visible, the fact that two lines do not meet may not be evident or even relevant. However, the lack of continuity between drawing-entities (e.g., lines defining an outer wall of a floor) may cause issues in finding and refining the final floor outline. In order to address these potential shortcomings, the remaining entities are dilated with a given dilation increment 752. Thereafter, a unary union of the dilated entities is calculated, as shown by process 754, to create a unary entity or union shape. A determination 756 is made as to whether the union shape represents a closed-loop path. If no closed-loop path is found, the dilation was insufficient to make contiguous or “connect up” the drawing-entities, and the example method takes the “No” path out of the determination 756. Along the “No” path, the dilation increment is increased, and the dilation 752 and calculation of the unary union (process 754) are repeated until a closed-loop path is found (again determination 756).
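For illustration only, the dilate-and-union loop can be approximated without a geometry library: two dilated segments merge when their endpoints come within twice the dilation distance, so for a simple ring of wall segments the closed-loop test reduces to checking that every endpoint has a partner on another segment. This connectivity check is a stand-in for the true buffer-and-unary-union computation, which a polygon library would perform directly; the wall coordinates and increments are hypothetical:

```python
from math import hypot

def endpoints_connect(segments, dilation):
    """Approximate the dilate-and-union test: after dilating each segment
    by `dilation`, two segments merge when their endpoints are within twice
    the dilation. For this simple ring of segments, a closed-loop path
    exists when every endpoint has a partner on another segment."""
    pts = [p for seg in segments for p in seg]
    for i, p in enumerate(pts):
        near = False
        for j, q in enumerate(pts):
            # j // 2 recovers the segment index of a flattened endpoint.
            if j // 2 != i // 2 and hypot(p[0] - q[0], p[1] - q[1]) <= 2 * dilation:
                near = True
                break
        if not near:
            return False
    return True

def find_closing_dilation(segments, increment=0.25, max_dilation=5.0):
    # Increase the dilation until the entities "connect up".
    d = increment
    while d <= max_dilation:
        if endpoints_connect(segments, d):
            return d
        d += increment
    return None

# Four wall segments with small gaps at the corners (data errors in the CAD file).
walls = [
    ((1, 0), (9, 0)),
    ((10, 1), (10, 9)),
    ((9, 10), (1, 10)),
    ((0, 9), (0, 1)),
]
closing = find_closing_dilation(walls)
```

With corner gaps of √2 units, the loop closes at the third increment (dilation 0.75, reach 1.5 per endpoint pair).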
Once a closed-loop path is found (e.g., the “Yes” path out of determination 756), the example method performs an exterior and interior fit 760 to find an exterior polygon of the union shape and an interior polygon of the union shape. In the example method, the exterior polygon is deflated, while the interior polygon is inflated, and a minimum or reduced vertex solution is selected, as shown by process 762.
The example method may continue refining the closed-loop path in increments until the closed-loop path meets predetermined criteria 764. For example, a Hausdorff distance may be calculated between the reduced vertex solution and the floor outline or converged snake 722 (
So as not to unduly lengthen the specification, not specifically shown in the graphical representations are the pre- and post-versions of removal of the short spanned entities 1020 (
In accordance with example methods, the various lines are dilated with a dilation increment, and a resultant is shown as the upper-middle drawing 1108. In particular, the upper-middle drawing 1108 shows a situation in which the dilation increment was insufficient to have all the dilated drawing-entities become contiguous, overlap, or “connect up.” For example, there are still gaps between the dilated line 1102 and the dilated line 1104. Similarly, the dilated lower line 1110 does not overlap or “connect up” with the other lines. As can be gleaned, any union of the drawing-entities of the upper-middle drawing 1108 will not result in a union shape defining a closed-loop path. Thus, the discussion assumes the example method retreats to dilation 752 (
Still referring to
In the next step (block 758 of
Still referring to
The lower drawing 1132 shows an opposite scenario. In particular, the lower drawing 1132 shows a situation in which the union shape 1134 has an interior outline 1136 having more vertices than the exterior outline 1138. It follows that after deflation and inflation of the exterior outline 1138 and the interior outline 1136, respectively, the reduced vertex solution will be the deflated exterior outline. The examples provided to this point have shown one outline (i.e., either the exterior or the interior) having four vertices; however, in practice both the exterior and interior outlines are likely to have many features resulting in many vertices, and thus the selected reduced vertex solution is not necessarily a square or rectangle.
Returning to
Geo-Location
An optional next step is finding a geo-location for the building for which the final building outline 766 is found. While finding a geo-location does not necessarily aid in indoor navigation, such geo-location may be useful in selecting the appropriate indoor maps for use at any particular building and/or determining the building's location in world coordinates.
The next step in the example is exporting candidate polygons using a segmentation machine-learning algorithm, such as a neural network, as shown by step 1146. That is, based on the buildings visible in a target area encompassing the approximate geo-location, a plurality of building footprints are identified using any suitable method. In one example, the neural network may be designed and trained to directly determine the footprint of each building in the target area from building footprints previously extracted. In other cases, the neural network may make a rough segmentation, identifying each building in the visible image, and additional steps may be implemented to extract the building footprints, such as converging a snake around the visible image of each building in the target area. Regardless of the precise method, the resultant is a plurality of candidate polygons or candidate building footprints, as shown by 1148.
The next step in the example method is determining how closely each of the candidate building footprints matches the floor-level outline or final building outline 766 generated by the floor detection of
Calculating the plurality of distance functions that relate the final building outline 766 to each of the plurality of building footprints may take any suitable form. In the example shown in
The Hausdorff distance is a metric used to quantify the dissimilarity or similarity between two sets of points or two geometrical objects. The Hausdorff distance measures how far apart two sets are by considering the distance between any point in one set to its closest point in the other set. Hausdorff distance is sensitive to outliers or isolated points. A single point far away from the other points in one set can significantly affect the distance. To mitigate this, modified versions of the Hausdorff distance, such as the Modified Hausdorff distance or the Average Hausdorff distance, can be used.
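A sketch of the symmetric Hausdorff computation between a floor-level outline and a candidate footprint, using SciPy's directed variant; the coordinates are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Two outlines sampled as point sets: a floor-level outline and one
# candidate building footprint (hypothetical coordinates).
floor_outline = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
footprint = np.array([[0, 0], [10, 0], [10, 10], [0, 12]], dtype=float)

# The symmetric Hausdorff distance is the larger of the two directed distances.
d_forward = directed_hausdorff(floor_outline, footprint)[0]
d_backward = directed_hausdorff(footprint, floor_outline)[0]
hausdorff = max(d_forward, d_backward)
```

Only one vertex differs (by 2 units), so both directed distances, and thus the symmetric distance, equal 2. Note the sensitivity to outliers discussed above: moving that single vertex farther away would grow the distance without limit.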
Procrustes distance, also known as Procrustes analysis or Procrustes shape analysis, is a mathematical technique used to compare and analyze geometric shapes or configurations. In the context of shape analysis, Procrustes distance measures the dissimilarity between two shapes by aligning them, scaling them, and then calculating the Euclidean distance between corresponding points.
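SciPy's `procrustes` routine performs exactly this align-scale-compare sequence. For a footprint that is a translated, scaled copy of the outline, the residual disparity (sum of squared point differences after alignment) is essentially zero; the shapes below are hypothetical:

```python
import numpy as np
from scipy.spatial import procrustes

# Two footprints with the same vertex count; the second is a translated,
# scaled copy of the first, so after Procrustes alignment the disparity
# is essentially zero.
shape_a = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], dtype=float)
shape_b = 3.0 * shape_a + np.array([7.0, -5.0])
_, _, disparity = procrustes(shape_a, shape_b)
```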
The Fréchet distance is a measure of similarity between two curves or paths. The Fréchet distance quantifies how similar two curves are by considering the minimum leash length that allows a hypothetical person and a dog to traverse their respective paths without backtracking or removing the leash. More formally, given two curves or paths, the Fréchet distance measures the shortest leash length that enables the person and the dog to simultaneously walk along their respective paths from start to end. The person and the dog may vary their speeds independently, but are not allowed to backtrack or leave the paths. The Fréchet distance takes into account both the geometric shape of the curves and the parameterization of the paths. It provides a notion of similarity that considers the overall shape and spatial relationship between the points on the curves.
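For sampled outlines, the discrete Fréchet distance is a common computable variant, sketched below with the standard dynamic-programming recurrence; the paths are hypothetical:

```python
from math import hypot

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines, each a list of
    (x, y) points: the shortest 'leash' that lets two walkers traverse
    their curves monotonically (no backtracking)."""
    n, m = len(p), len(q)
    d = lambda i, j: hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = d(i, j)
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], cost)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1],
                                   ca[i - 1][j - 1]), cost)
    return ca[n - 1][m - 1]

path_a = [(0, 0), (5, 0), (10, 0)]
path_b = [(0, 1), (5, 1), (10, 1)]   # the same path shifted up by one unit
leash = discrete_frechet(path_a, path_b)
```

For the parallel paths above, the leash never needs to exceed the one-unit offset.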
Turning distance is a measure of similarity between two polygons that quantifies the difference in their shape and orientation. Turning distance involves calculating the cumulative angular difference between corresponding edges or vertices of the polygons. By summing the absolute values of these angular deviations, the Turning distance captures the overall dissimilarity in the rotational transformations used to align the polygons. A smaller Turning distance indicates a higher degree of similarity, implying that the polygons share similar shapes and orientations, while a larger Turning distance suggests greater dissimilarity in their overall configurations.
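A simplified Turning-distance sketch that sums the wrapped angular differences between corresponding edges is shown below. Full turning-function methods also minimize over starting vertices and rotations, which is omitted here; the polygons are hypothetical:

```python
from math import atan2, pi

def edge_angles(polygon):
    # Heading of each edge of a closed polygon, in radians.
    angles = []
    for i in range(len(polygon)):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % len(polygon)]
        angles.append(atan2(y2 - y1, x2 - x1))
    return angles

def turning_distance(poly_a, poly_b):
    """Sum of absolute angular differences between corresponding edges.
    Assumes both polygons have the same edge count and a consistent
    starting vertex (a simplification of the general technique)."""
    total = 0.0
    for a, b in zip(edge_angles(poly_a), edge_angles(poly_b)):
        diff = abs(a - b)
        total += min(diff, 2 * pi - diff)   # wrap the angle difference
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
same_square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # scaled: same orientation
score = turning_distance(square, same_square)
```

Scaling leaves every edge heading unchanged, so the distance is zero; rotating the square by 45 degrees would add pi/4 per edge, for a total of pi.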
Each of the example distance measures, Hausdorff distance 1152, Procrustes Shape Analysis 1154, Fréchet distance 1156, and Turning distance 1158, generates a value indicative of similarity, one each for each proposed building footprint. In this example, and for each proposed building footprint, four values indicative of similarity are created, relating the similarity of the proposed building footprint to the final building outline 766. In the example
Still referring to
The next step in the example method is computing a geo-reference matrix, as shown in step 1164. In particular, an example satellite image will rarely be from directly above the target area, so the image is likely from an oblique angle. The oblique angle of the image means that the building footprints used and/or extracted may be distorted. Thus, in this example step, a transformation matrix is calculated which corrects the distortion of the selected building footprint, such as caused by the obliqueness in the imaging. And finally, with the distortion-corrected building footprint, a final geo-location may be calculated relative to the approximate geo-location previously provided, as shown in step 1166.
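One way to sketch the geo-reference matrix of step 1164 is a least-squares affine fit between corresponding points of the footprint as extracted from the oblique image and the corrected footprint. The point pairs below are hypothetical:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src -> dst, playing the
    role of the geo-reference matrix that corrects the oblique-image
    distortion of the extracted building footprint."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix [x, y, 1]; solve for each output coordinate.
    a = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return coef.T   # 2x3 affine matrix

def apply_affine(matrix, pt):
    x, y = pt
    return tuple(matrix @ np.array([x, y, 1.0]))

# Footprint corners as seen in the oblique image, and the corresponding
# corrected (distortion-free) positions.
image_pts = [(0, 0), (100, 10), (110, 90), (10, 80)]
world_pts = [(0, 0), (100, 0), (100, 100), (0, 100)]
matrix = fit_affine(image_pts, world_pts)
```

Once fitted, the matrix maps any point of the skewed footprint onto the corrected footprint, from which the final geo-location can be computed.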
Still referring to
The next step in the example method is determining how closely each of the candidate building footprints matches the floor-level outline or final building outline 766 by computing the distance functions, as shown in block 1180, and combining the distance functions, as shown in step/block 1160, to generate similarity scores. In
The next step in the example method is computing a geo-reference matrix, as shown in step 1164, and with a partial matrix 1188 shown. The transformed building footprint is then geo-referenced to world coordinates, as shown by step 1190. The result is a final geo-location of the building, as shown in
The specification now turns to furniture detection.
Furniture Detection
Returning briefly to
In example systems, each furniture bounding box may be a raster-image that indicates the location of a piece of furniture (e.g., desk, guest chair, conference table, door) on a floor. The furniture bounding box may not “tightly” show the exterior footprint of the piece of furniture. Rather, in some examples the furniture bounding box depicts a polygon (e.g., a square, rectangle) that fully encircles an identified furniture drawing-entity, even though the furniture drawing-entity may have a smaller and more complex exterior footprint. Stated otherwise, in the coordinate space of the CAD drawing, there may be non-zero offsets between the footprint of any particular furniture drawing-entity and an inside dimension of the furniture bounding box.
The furniture-level machine-learning algorithms 1200 are summarized here first, and then described in greater detail below. In summary, example methods utilize a plurality of machine-learning algorithms, and in one example three machine-learning algorithms. In particular, in one example method the floor-level bounding line and the CAD vector-image are applied to a furniture-level machine-learning algorithm. The example furniture-level machine-learning algorithm is designed and trained to produce furniture bounding boxes around each furniture drawing-entity on the floor identified by the floor-level bounding line. The furniture-level machine-learning algorithm may also make, for each bounding box, a furniture class prediction (e.g., chair, desk, conference table, door, double door) based on the size and drawing-entities within each bounding box (and keeping in mind that text was removed before creating the CAD vector image). Further in example methods, a parsed text database may be applied to a text-level machine-learning algorithm. In particular, the text database (e.g., generated in the preprocessing 102 of
Thus, at this stage the furniture-level machine-learning algorithm has produced the furniture bounding boxes (and possibly furniture class predictions). The text-level machine-learning algorithm has produced furniture identities with associated location information. The output of each of the furniture-level machine-learning algorithm and the text-level machine-learning algorithm may be applied to a third machine-learning algorithm, namely an ensemble machine-learning algorithm. The example ensemble machine-learning algorithm may be designed and trained to generate or select a set of final-furniture bounding boxes and associated identity information. The resultant may then be applied to the furniture post processing 1202.
Now considering the furniture-level machine-learning algorithm. In example cases, the furniture-level machine-learning algorithm 1308 is provided the floor-level bounding line (not specifically shown) and the CAD vector image (partially shown). The furniture-level machine-learning algorithm is designed and trained to generate furniture bounding boxes around each drawing-entity representative of furniture. In
In example methods, the resultants or outputs of the text-level machine-learning algorithm 1304 and the furniture-level machine-learning algorithm 1308 are both applied to a third machine-learning algorithm, the ensemble machine-learning algorithm 1320. As alluded to above, the ensemble machine-learning algorithm 1320 is designed and trained to generate and/or select final-furniture bounding boxes and identity information regarding the furniture within each bounding box. In example cases, and as shown in the figure, the ensemble machine-learning algorithm 1320 produces two classes of outputs: 1) a set of bounding boxes for doors and associated door identity information (e.g., single door swing left, single door swing right, double door, sliding door); and 2) furniture bounding boxes and associated identity information. As shown in
Before turning to furniture post processing 1202, however, the specification turns to a gridding technique associated with the CAD drawings applied to the furniture-level machine-learning algorithm 1308. The scale of CAD drawings may vary significantly. For example, the scale of a CAD drawing of a relatively small office space having only two or three offices may be vastly different from the scale of a CAD drawing showing the office layout for an entire floor of a multiple-story building. The difference in scale may complicate detection by the furniture-level machine-learning algorithm. For example, if the furniture-level machine-learning algorithm is trained with training data having a different scale than a CAD drawing applied for analysis, the furniture-level machine-learning algorithm may be unable to correctly find and identify the furniture drawing-entities. Issues associated with the scale of the underlying training data and applied CAD drawings are addressed, in at least some cases, by a gridding technique.
In particular, in preprocessing 102 (
Gridding has the positive consequence that the furniture-level machine-learning algorithm 1308, in being provided data of consistent scale, is more likely to correctly identify bounding boxes for furniture drawing-entities. However, non-overlapped gridding (i.e., each grid defines an area that is not duplicated in any adjacent grid) creates a difficulty in that some furniture drawing-entities may span two or more grids. Inasmuch as the furniture-level machine-learning algorithm 1308 may have difficulty identifying partial furniture drawing-entities, in example methods the grids are overlapped a predetermined amount (e.g., 5% of grid width, 10% of grid width) sufficient to address the divided furniture issue. In example cases, the overlap may be 2.5 meters, which in most cases is sufficient to ensure that a furniture entity partially shown on one grid will be fully shown on the adjacent grid. While gridding addresses the divided furniture issue, gridding also may create a duplicate furniture detection issue. Identifying duplicate furniture detections, and removing duplicate detections, takes place in furniture post processing 1202.
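The overlapped gridding described above may be sketched as follows. The grid size (25 square meters, i.e., 5 meters per side) and 2.5-meter overlap follow the example values in the text; the function name and tuple layout are illustrative assumptions.

```python
def make_grids(min_x, min_y, max_x, max_y, grid_size=5.0, overlap=2.5):
    """Divide a drawing's bounding box into square grids of grid_size
    meters per side, each extended by `overlap` meters so that a
    furniture entity split at a grid edge appears whole in an
    adjacent grid.  Returns (min_x, min_y, max_x, max_y) tuples."""
    grids = []
    y = min_y
    while y < max_y:
        x = min_x
        while x < max_x:
            # Each grid keeps its nominal origin but extends past the
            # nominal edge, producing the intentional overlap.
            grids.append((x, y, x + grid_size + overlap, y + grid_size + overlap))
            x += grid_size
        y += grid_size
    return grids
```

Because adjacent grids share the overlap band, an entity lying wholly inside the band is detected twice; the duplicate detections are removed later in furniture post processing 1202.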
In summary, in the example furniture post processing 1202, the final-furniture bounding boxes created by the ensemble machine-learning algorithm are converted from raster-image form to CAD drawing form, and with overlapping bounding lines removed, the furniture bounding boxes are used in conjunction with the CAD vector image to create a set of filtered entities residing within each furniture bounding box. For directly recognized furniture drawing-entities (e.g., chair), the recognized drawing-entities may be replaced with predefined entities (e.g., drawing-entities showing a high-back chair replaced with a circle) for use on the indoor map. For filtered entities that are not directly identifiable (e.g., non-standard furniture items), the non-standard drawing-entities are reduced in complexity. Finally, the example furniture post processing may remove any duplicate entities created based on the overlaps of the gridding.
The example furniture post processing conceptually takes two parallel paths at this point based on the furniture drawing-entities that reside in each inflated bounding box. If a furniture drawing-entity within an inflated bounding box is recognized as a “standard” entity (e.g., chair, rectangular office desk), then the example method replaces the “standard” entity with a predefined shape for use within an indoor map (e.g., complex drawing of a chair replaced with an opaque circle), and the method proceeds directly to the final furniture polygon 1418 for that furniture drawing-entity. On the other hand, if a furniture drawing-entity within an inflated bounding box is a “non-standard” entity (e.g., odd-shaped conference table, or a desk with internal boxes showing power and data connections), then the example method continues as shown by
Referring initially to bounding box 1500, in example cases the bounding box is inflated by a predetermined inflation increment, with the resultant shown as inflated bounding box 1506. The inflated bounding box 1506 thus encircles the desk, the associated internal drawing-entities of the desk, and a connecting entity associated with the return 1504, but not the return 1504 itself. The drawing-entities within the inflated bounding box 1506 are kept, while the drawing-entities not contained within the inflated bounding box, in this example the return 1504, are deleted. The desk and associated internal drawing-entities of the desk are an example of a “non-standard” furniture entity, and thus the example method continues with filtering the entities and creation of polygons using the filtered entities (processes 1408 and 1410).
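The bounding-box inflation and keep-or-delete test described above may be sketched as follows. This is an illustrative sketch with assumed function names; the actual inflation increment is implementation-specific.

```python
def inflate_box(box, increment):
    """Inflate an axis-aligned bounding box (min_x, min_y, max_x, max_y)
    outward by `increment` on every side."""
    min_x, min_y, max_x, max_y = box
    return (min_x - increment, min_y - increment,
            max_x + increment, max_y + increment)

def contains(box, pt):
    """True when point (x, y) lies inside the box -- drawing-entities
    whose points all satisfy this test are kept; others are deleted."""
    return box[0] <= pt[0] <= box[2] and box[1] <= pt[1] <= box[3]
```

In the example of bounding box 1500, an entity such as the return 1504 lying outside the inflated box fails the containment test and is deleted, while the desk's internal drawing-entities pass and are kept.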
Still referring to
Returning to the upper-left drawing, and particularly the bounding box 1502. Again, in example cases the bounding box 1502 is inflated by the predetermined inflation increment, with the resultant being inflated bounding box 1520. The inflated bounding box 1520 encircles the chair. The drawing-entities within the inflated bounding box 1520 are kept. The chair is an example of a recognized “standard” furniture entity, and thus the example method skips the various steps of the example method (the skipping shown by line 1522) to replacing the chair drawing-entity 1530 with a predefined shape for use within an indoor map. For example, the chair drawing-entity 1530 may be replaced with an opaque circle or a polygon. Thus, the method proceeds directly to the final furniture polygon 1524 for that furniture drawing-entity.
A few points to consider regarding “standard” furniture drawing-entities before proceeding. As mentioned above, there are no universally accepted standards for furniture drawing-entities in CAD drawings. Nevertheless, there may be duplicate uses of drawing-entities within a CAD drawing (e.g., each floor may use the same “chair” drawing-entity), there may be duplicate uses by the same architect across different CAD drawings, and there may be duplicate uses based on many architects having access to the same predefined sets of furniture drawing-entities (akin to clipart). Thus, the furniture-level machine-learning algorithms may be able to detect with high confidence, based on the training data set as well as later incremental training with furniture post processing results, that a particular furniture drawing-entity is a known entity (e.g., chair, desk, door, water fountain). In those situations then, the floor post processing may skip the processes 1408 through 1416, and proceed directly to replacing the known furniture drawing-entity with a reduced complexity drawing-entity, such as in
Still considering furniture post processing. In order to address scale issues, in example cases the CAD drawings are divided into a plurality of grids having predetermined size. In example methods, each grid has a size of 25 square meters, and to address the divided furniture issue each grid may be overlapped a predetermined amount. While gridding addresses the divided furniture issue, gridding also may create a duplicate furniture detection issue. That is, the furniture-level machine-learning algorithms may detect the same piece of furniture in two adjacent grids because of the overlap. Thus, in accordance with example embodiments, another function of the furniture post processing is to remove duplicate furniture detection from the final furniture entities.
In the lower-left portion of
Identifying and removing duplicate furniture entities from the final list of furniture entities may take any suitable form. For example, each furniture entity is associated with location information. The furniture post processing may thus analyze the furniture entities and respective location information from each grid, and based on two furniture entities having the same location, or being within a predetermined distance of each other given slight processing differences (e.g., within 5 centimeters (cm), within 10 cm), remove duplicate furniture detections or duplicate furniture entities from the final furniture entities. The final furniture entities (one set each for each detected floor) are passed to the room detection 108 (
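The location-based de-duplication just described may be sketched as follows, using the example 10 cm threshold. The function name and tuple layout are illustrative assumptions, not the production implementation.

```python
import math

def remove_duplicates(entities, tolerance=0.10):
    """Drop furniture detections whose locations fall within `tolerance`
    meters (here 10 cm) of an already-kept entity of the same class.
    `entities` is a list of (class_name, x, y) tuples."""
    kept = []
    for cls, x, y in entities:
        duplicate = any(
            k_cls == cls and math.hypot(k_x - x, k_y - y) <= tolerance
            for k_cls, k_x, k_y in kept)
        if not duplicate:
            kept.append((cls, x, y))
    return kept
```

Two chairs detected 5 cm apart in adjacent grids collapse to one entity, while a desk at the same location as a chair is retained because the classes differ.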
Room Detection
Returning briefly to
In example methods, each room bounding line may be a raster-image that indicates the location of a room on a floor. The room bounding line may not “tightly” show the exterior footprint of the room. Rather, in some examples the room bounding line depicts a polygon (e.g., a square, rectangle) that fully encircles an identified room, even though the room may have a more complex footprint. Stated otherwise, in the coordinate space of the CAD drawing, there may be non-zero offsets between the footprint of any particular room and an inside dimension of the room bounding line.
Room-Level Machine Learning Algorithms
The room-level machine-learning algorithms 1700 are summarized here first, and then described in greater detail below. In summary, example methods utilize a plurality of machine-learning algorithms, and in one example three machine-learning algorithms. In particular, in one example method the floor-level bounding line and the CAD vector-image are applied to a room-level machine-learning algorithm. The example room-level machine-learning algorithm is designed and trained to produce room bounding lines around each room on the floor. The room-level machine-learning algorithm may also create, for each bounding line, a class probability distribution regarding identity of the room (e.g., office, conference room, water closet). Given that the text was removed from CAD drawing that formed the basis of the CAD vector-image, the class probability distributions created by the room-level machine-learning algorithm may be referred to as the graphics-based probability distributions.
Further in example methods, a parsed text database may be applied to a text-level machine-learning algorithm. In particular, the text database 214 (
Thus, at this stage the room-level machine-learning algorithm has produced the room bounding lines and graphics-based probability distributions. The text-level machine-learning algorithm has produced text-based probability distributions regarding room identities. The output of the room-level machine-learning algorithm and the output of the text-level machine-learning algorithm, along with the furniture entities and associated location information (from the furniture detection 106 (
Now considering the room-level machine-learning algorithm. In example cases, the room-level machine-learning algorithm 1808 is provided the floor-level bounding line (not specifically shown) and the CAD vector-image 1821 (not including the text). The room-level machine-learning algorithm 1808 is designed and trained to generate room bounding lines around each room depicted on the floor identified by the floor-level bounding line. In
In example methods, the resultant of the room-level machine-learning algorithm 1808 produces not only a plurality of room bounding lines, one each for each room, but also produces a plurality of graphics-based probability distributions, one each for each room bounding line.
In example methods, the resultants or outputs of the text-level machine-learning algorithm 1804 and the room-level machine-learning algorithm 1808 are applied, along with the furniture detection output 1826 (i.e., the furniture entities and associated location information from the furniture detection 106 (
Room Post Processing
The specification now turns to room post processing 1702. In example methods, the resultants from the room-level machine-learning algorithms 1700 are applied in parallel to various post processing algorithms. More particularly still, in example methods the resultants from the room-level machine-learning algorithms 1700 are applied to: 1) entity-based post processing algorithms; 2) graph-based post processing algorithms; and 3) generative adversarial network (GAN) post processing algorithms. Each of the example three post processing algorithms may be particularly suited for finding final room outlines, each in their own particular situation. The example entity-based post processing algorithm generates a set of entity-based bounding lines, one entity-based bounding line for each room on the floor. The example graph-based post processing algorithm generates a set of graph-based bounding lines, one graph-based bounding line for each room on the floor. The example GAN-based post processing algorithm generates a set of GAN-based bounding lines, one GAN-based bounding line for each room on the floor.
The resultants from the example three post processing algorithms are then applied to a rule-based evaluator that selects, for each room and from all the bounding lines generated with respect to the room, a rule-based selection. The resultants from the example three post processing algorithms are also applied to a selection machine-learning algorithm that selects, for each room and from all the bounding lines generated with respect to the room, a ML-based selection. At this stage then, each room on the floor is associated with a rule-based selection of the room outline and a ML-based selection of the room outline. For each room, the two room outlines (i.e., the rule-based selection and the ML-based selection) are applied to a room decisions engine that selects between the two, resulting in a final room outline. The final room outline may then be the basis for further processing to identify or extract the walls for each room. Finally, the extracted walls for each room on the floor are merged to create the indoor map.
Entity-Based Post Processing Algorithms
Drawing Entity Selection—Entity Based
Referring again to
Merging Close Lines
Still referring to
Finding Polygons
Referring to
In summary, the example method removes from the polygon patch: spanned entities whose size is below a predetermined size; duplicate entities; and extra vertices. For the drawing-entities remaining in each polygon patch, the example method: dilates the remaining entities; performs a unary union of the dilated entities to create a union shape; and attempts to find a closed-loop path within the union shape. When a closed loop is found, the example method determines an internal and external outline of the union shape, deflates the exterior outline, and inflates the interior outline, as discussed with respect to finding the floor-level outlines. The example method selects either the deflated external outline or the inflated internal outline, in example methods the selection being based on which outline has the lowest number of vertices. From there, various further simplifications are performed to arrive at a final room outline passed to the further processing, and the method continues for each room bounding line.
More precisely then, the example method deletes short spanned entities, extra vertices, and duplicate entities, as shown by process 2036. The remaining drawing-entities are dilated with a given dilation increment, as shown by process 2038. Thereafter, a unary union of the dilated entities is calculated, as shown by process 2040, to create a unary entity or union shape. A determination 2042 is made as to whether the union shape defines a closed-loop path. If no closed-loop path is found, this means the dilation was insufficient to make contiguous or “connect up” the drawing-entities, and the example method takes the “No” path out of the determination 2042. Along the “No” path, the dilation increment is increased, and the dilation, calculation of the unary union, and closed-loop determination are repeated until a closed-loop path is found.
With the closed-loop path found (i.e., the “Yes” path out of determination 2042), an exterior and interior fit is performed, as shown by process 2044. In particular, in example methods the exterior and interior fit process 2044 involves extracting interior and exterior outlines or polygons of the union shape created by process 2040. The exterior and interior polygons may be found using any suitable method, such as a converging active contour model or an expanding active contour model. The exterior polygon is deflated to create a deflated exterior polygon, while the interior polygon is inflated to create an inflated interior polygon, and from the deflated exterior polygon and the inflated interior polygon a minimum or reduced vertex solution is selected, as shown by determination 2046. From there, the reduced vertex solution may be the final room outline for the room under consideration, as implied by process 2052. If there are further rooms to process, determination 2054, the example method returns to process the further rooms, otherwise the results are passed to the next stage of room post processing (e.g., rule-based evaluator 1906 and selection machine-learning algorithm 1908 (both
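The dilate-union-retry loop of processes 2038 through 2042 may be sketched as follows, assuming the shapely geometry library is available. A closed-loop path exists when the merged shape encloses a hole (an interior ring); the parameter values and function name are illustrative assumptions.

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

def find_room_outline(segments, dilation=0.05, step=0.05, max_dilation=1.0):
    """Dilate wall segments and union them until the union encloses a
    closed loop, i.e., the merged polygon has an interior hole.
    Returns (exterior, interior) outline coordinates, or None if no
    closed loop is found within max_dilation."""
    while dilation <= max_dilation:
        merged = unary_union([LineString(s).buffer(dilation) for s in segments])
        # A closed-loop path exists when the dilated walls fully
        # enclose an interior region (a hole in the union polygon).
        if merged.geom_type == "Polygon" and len(merged.interiors) > 0:
            return list(merged.exterior.coords), list(merged.interiors[0].coords)
        dilation += step  # insufficient dilation: increase and retry
    return None
```

Four wall segments with small corner gaps, as in the text's observation that wall lines may not touch at their ends, become contiguous once the dilation increment exceeds half the gap width.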
The inflated bounding line 2112 and deflated bounding line 2114 delineate several categories of drawing-entities. For example, lines 2118 and 2120 cross the inflated bounding line 2112. Within the entity patch 2116 resides various drawing-entities, such as outer walls of the room (e.g., walls 2122 and 2124), short spanned entities 2126, and duplicate entity 2128 (i.e., a duplicate wall line). In practice, the duplicate entity 2128 may be hidden by being directly “behind” a line or set of lines showing the wall at that location, but duplicate entity 2128 is offset for clarity of the drawing. Inside the deflated bounding line 2114 resides various drawing-entities, likely defining furniture within the room. In the example method, the drawing-entities that reside wholly outside the inflated bounding line 2112 (none specifically shown), and portions of drawing-entities that cross the inflated bounding line 2112, such as portions of the lines 2118 that cross the inflated bounding line 2112, may be extracted or removed. Moreover, drawing-entities that reside wholly inside the deflated bounding line 2114 may be extracted or removed. For example, all the drawing-entities representing furniture within the room, may be removed.
Still referring to
The upper-right drawing of
In the example method, the interior and exterior outlines or polygons of the union shape 2226 are extracted, as shown by the middle-left drawing 2228. The resultant interior and exterior outlines are shown as the middle drawing 2230. That is, the middle-left drawing 2228 shows the union shape 2226, along with an exterior outline 2232 and an interior outline 2234. Finding the exterior outline 2232 and interior outline 2234 may take any suitable form. For example, the union shape 2226 may be in, or converted to, a raster-image. A shrinking active contour model may be initialized outside the union shape 2226, and thus may be used to find the exterior outline 2232. An expanding active contour model may be initialized within the union shape 2226, and thus may be used to find the interior outline 2234. In any event, the exterior outline 2232 and the interior outline 2234 are determined.
Still referring to
Drawing 2240 shows the example exterior outline 2232 after deflation, and drawing 2242 shows the example interior outline 2234 after inflation. In accordance with example methods, either the exterior outline 2232 or the interior outline 2234 is selected as the candidate for the room outline. Here, the exterior outline 2232 has more vertices than the interior outline 2234, and thus the interior outline 2234 is selected as the entity-based room outline 2250 for further processing. The specification now turns to the graph-based post processing 1902 (
Graph-Based Machine Post Processing
Returning briefly to
Drawing Entity Selection—Graph Based
The example drawing-entity selection 2300 is provided the outputs of the room-level machine-learning algorithms 1700 (
Graph Conversion
Still referring to
Parallel Line Elimination
In example cases, the set of line segments created by the graph conversion 2302 is passed to the parallel line elimination 2304 process. In particular, the slopes of the line segments in the graph patch are calculated, as shown by process 2318. The drawing-entities within each graph patch are then grouped with respect to their slopes, as shown by process 2320. Much like the grouping with respect to slope discussed in reference to
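The slope calculation and grouping of processes 2318 and 2320 may be sketched as follows. Quantizing the slope angle into tolerance-sized bins is one way to group nearly parallel segments; the bin width and function name are illustrative assumptions.

```python
import math
from collections import defaultdict

def group_by_slope(segments, angle_tol_deg=1.0):
    """Group line segments ((x1, y1), (x2, y2)) by their slope angle,
    quantized to `angle_tol_deg`-degree bins, as a first step toward
    eliminating redundant parallel lines within a graph patch."""
    groups = defaultdict(list)
    for (x1, y1), (x2, y2) in segments:
        # Fold headings into [0, 180) so a segment and its reverse
        # land in the same group.
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        groups[round(angle / angle_tol_deg)].append(((x1, y1), (x2, y2)))
    return groups
```

Within each slope group, close parallel lines (e.g., the two faces of a drawn wall) can then be collapsed to a single representative segment.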
Extending Line Segments
Still referring to
The specification now turns to a graphical explanation of the example graph-based room post processing 1902. In order not to unduly lengthen the specification, the drawing-entity selection 2300, which utilizes the room-level bounding-line inflation and deflation procedure with respect to the graph-based room post processing, is not specifically shown. Several prior discussions in this specification graphically show several example processes in which a room-level bounding line is inflated and deflated to create a patch, and where entities within the patch are kept and entities outside the patch are discarded or removed from further consideration with respect to that particular process (e.g.,
The next step in the example method is extending the line segments. Still referring to
Gan-Based Post Processing
Returning briefly to
In example methods, the GAN-based post processing 1904 starts with dataset creation 2500. The example GAN-based post processing 1904 is provided the CAD raster-image 2502 (e.g., CAD raster-image 222 of
Turning now to
Similarly, and again as the name implies, the example grayscale GAN 2522 is designed and trained to operate on the room-level bounding line in the form of a grayscale raster-image. The example grayscale GAN 2522 may perform a series of 2D convolutions as part of a down-sampling procedure, as shown by process 2528. With the resultants from the down sampling, the example method may perform a series of linear up-sampling operations, as shown by process 2530. The example grayscale GAN 2522 may create a plurality of intermediate bounding lines (e.g., three intermediate bounding lines). Each of the plurality of intermediate bounding lines may also be considered a proposal, by the grayscale GAN 2522, regarding the outline of the room at issue.
Still referring to
In example methods, the upper-middle drawing 2604 is applied to the RGB GAN 2520 while the room-level bounding line 2610 is applied to the grayscale GAN 2522. The result, after upscaling and concatenation, is a set of mask proposals, as shown by the lower-left drawing 2620. The members of the set of mask proposals are subject to 2D convolution, resulting in a final bounding line, being the GAN-based bounding line 2622. The GAN-based bounding line 2622, and particularly the inside surface, is the GAN-based post processing's proposal for the actual room outline; however, additional rule-based and machine-learning evaluators select from among the bounding lines from graph-based post processing, the entity-based post processing, and the GAN-based post processing to arrive at the room outline.
Rule-Based and ML-Based Evaluators
Returning to
The example rule-based evaluator 1906 receives as input the three bounding lines—the entity-based bounding line, the graph-based bounding line, and the GAN-based bounding line. The rule-based evaluator 1906 is designed and constructed to choose among the three using any suitable method. For example, in some cases each of the entity-based post processing 1900, the graph-based post processing 1902, and the GAN-based post processing 1904 generate a respective confidence factor (e.g., value between 0.0 (i.e., no confidence) and 1.0 (complete confidence)) regarding their respective bounding lines. In one example case, the rule-based evaluator 1906 may thus select a bounding line based, at least in part, by choosing the bounding line with the highest confidence factor. Regardless of the precise method, the rule-based evaluator 1906 produces a rule-based selection being a bounding line that the rule-based evaluator 1906 considers to be the bounding line closest to the actual room outline of the room under consideration. In some cases, the rule-based evaluator 1906 also produces a rule-based confidence factor, indicating a confidence of the rule-based evaluator 1906 that the rule-based selection matches the actual room outline of the room under consideration.
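The confidence-based rule described above (one example rule among any that the evaluator may apply) can be sketched as follows. The source names and tuple layout are illustrative assumptions.

```python
def rule_based_select(candidates):
    """Choose among candidate bounding lines by highest confidence
    factor.  `candidates` maps a source name (e.g., 'entity', 'graph',
    'gan') to a (bounding_line, confidence) pair, with confidence a
    value between 0.0 (no confidence) and 1.0 (complete confidence)."""
    best_source = max(candidates, key=lambda k: candidates[k][1])
    line, conf = candidates[best_source]
    return best_source, line, conf
```

The returned confidence can double as the rule-based confidence factor the text mentions, indicating how closely the rule-based selection is believed to match the actual room outline.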
The example selection machine-learning algorithm 1908 also receives as input the three bounding lines—the entity-based bounding line, the graph-based bounding line, and the GAN-based bounding line. The selection machine-learning algorithm 1908 may be a machine-learning algorithm (e.g., a convolutional neural network) designed and trained to choose among the three bounding lines. For example, in some cases each of the entity-based post processing 1900, the graph-based post processing 1902, and the GAN-based post processing 1904 generate a respective confidence factor regarding their produced bounding lines. In one example case, the selection machine-learning algorithm 1908 may thus select a bounding line based, at least in part, on the confidence factors and any other suitable criteria. Regardless of the precise method, the selection machine-learning algorithm 1908 produces a ML-based selection being the bounding line that the selection machine-learning algorithm 1908 considers to be closest to the actual room outline of the room under consideration. In some cases, the selection machine-learning algorithm 1908 also produces a ML-based confidence factor, indicating a confidence of the selection machine-learning algorithm 1908 that the ML-based selection matches the actual room outline of the room under consideration.
Room Decision Engine
Still referring to
Wall Extraction
Still referring to
The portions of
Identifying and Filling Wall Spaces
Referring initially to
Regardless of the precise mechanism for the inflation, the next step in the example method may be merging polygons to find an outer polygon, and then deflating the merged entity, as shown by processing 2710. The input polygons may then be combined with the deflated outer polygons to find the empty regions, as shown by process 2712. Conceptually here, the empty regions may be wall spaces or wall areas between rooms. The empty regions may then be filled by inflating certain of the polygons into the empty regions, as shown by process 2714. In some cases, a solution is found in which a reduced number of polygons, less than all the polygons, are inflated to fill the empty spaces, and in some cases a solution is selected in which a minimum number of polygons are inflated. Regardless of the precise method of inflation to fill the empty spaces, the resultant is a set of final wall polygons (e.g., a vector-image, or a CAD drawing) that depicts the walls on the floor.
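The inflate-merge-deflate sequence of processes 2710 and 2712 may be sketched as follows, assuming the shapely geometry library is available. Inflating then deflating (a morphological closing) yields the outer footprint with the inter-room gaps filled; subtracting the original rooms leaves the empty regions that are candidate wall spaces. The function name and inflation increment are illustrative assumptions.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def find_wall_spaces(room_polys, inflation=0.2):
    """Approximate the wall spaces between rooms: inflate each room
    polygon, merge, deflate back to obtain the outer footprint, then
    subtract the original rooms.  What remains are the empty regions
    (candidate wall areas) between rooms."""
    rooms = [Polygon(p) for p in room_polys]
    # Closing operation: buffer out, union, buffer back in.
    outer = unary_union([r.buffer(inflation) for r in rooms]).buffer(-inflation)
    # Empty regions are the outer footprint minus the rooms themselves.
    return outer.difference(unary_union(rooms))
```

For two unit-square rooms separated by a 0.2-meter gap, the returned geometry is approximately the strip between them; process 2714 would then inflate selected room polygons to absorb such strips.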
Identifying Walls
Referring to
Door Cropping
The next step in the example method is door cropping 2704. The example door cropping 2704 procedure takes as input: the set of walls within the floor-level outline from the identifying walls 2702 procedure; the furniture entities from the furniture detection 106 (
Merging Rooms and Walls
Returning to
Now referring to
Now referring to
Now referring to
Auto-Naming of Rooms
In the examples discussed to this point, the text in the text database 214 (
Thus, examples of the extraction of room names from the CAD drawings utilize the various methods described above. For example, the CAD drawing is preprocessed to create the text database 214 containing text from the CAD drawing and associations of the text with locations within the CAD drawing; a floor depicted in the CAD drawing is determined, resulting in a floor-level outline; and a plurality of room-level outlines are identified within the floor-level outline, the plurality of room-level outlines corresponding to a respective plurality of rooms. The further examples may include selecting a name of a room from the plurality of rooms, the selecting based on text within the text database. An example of the selecting is discussed below. The selecting of names may continue for each room found in the floor-level outline. Thereafter, an indoor map may be created, the indoor map including the name of the room, and the name of the room may be associated with a location of the room within the floor-level outline.
With respect to the grouped text for each room, the example method separates the numbers and words, as shown in step 2908. In particular, in some cases the text of the grouped text may lack spaces between words, or between words and numbers. The example separation separates or delineates the words and numbers. With the separated words and numbers of the grouped text, the example method may perform lemmatization to form a plurality of lemmas, as shown in step 2910. Lemmatization may be considered finding a root of a word in cases in which the word may be presented in two or more inflected forms. Examples of lemmatization, and the resultant lemmas, are discussed in greater detail below. Each lemma of the plurality of lemmas is assigned a member probability, as shown in step 2912. Each member probability represents the likelihood of a lemma being a member of the name of the room. In one example, assignment of the member probabilities is performed by a machine-learning algorithm, such as a neural network. The example method then generates a combination using each lemma having a member probability above a predetermined threshold, as shown in step 2914, and the generating creates a moniker. That is, for lemmas having member probability above a predetermined threshold, the lemmas are combined in various ways to generate a list of possible names or monikers for the room.
In various examples, a machine-learning algorithm, such as a neural network, may be used to check the combinations of lemmas, as shown in step 2916. In one example, the neural network used to assign the member probabilities at step 2912 may also be trained and used to check the combinations. In particular, each moniker is assigned a moniker probability, with each moniker probability representing the likelihood of a moniker being the name of the room.
The next example step may be a comparison of the combinations in the form of the monikers and the individual words in the form of the lemmas, as shown in step 2918. In particular, the additional checking may be performed by an analysis of the monikers and respective moniker probabilities, and the individual words or lemmas and their respective probabilities. Checking the combinations at step 2916 may be considered an analysis of the functioning of the machine-learning algorithm at generating monikers as candidates for room names. If the machine-learning algorithm is consistently generating probabilities that result in monikers with incoherent names, the check at step 2916 may trigger updated training or retraining machine-learning algorithms, as shown in step 2930. For example, if a lemma has a very high probability of being part of a room name, but the moniker with the highest moniker probability does not include the lemma with the highest probability, additional training may be needed.
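The consistency check described above can be sketched in Python. The function name, probability values, and subset test below are illustrative assumptions; the actual check may use any analysis of the monikers, lemmas, and their respective probabilities.

```python
def needs_retraining(lemma_probs, moniker_probs, lemma_threshold=0.95):
    """Assumed consistency check: flag retraining when the best-scoring
    moniker omits a lemma whose member probability is very high."""
    # Moniker with the highest moniker probability
    top_moniker = max(moniker_probs, key=moniker_probs.get)
    # Lemmas the member probabilities say should be part of the name
    high_lemmas = {lemma for lemma, p in lemma_probs.items()
                   if p > lemma_threshold}
    # Retrain when some high-probability lemma is missing from the moniker
    return not high_lemmas <= set(top_moniker.split())
```

For instance, if "room" has member probability 0.99 but the top-scoring moniker is "red corner", the check flags the machine-learning algorithm for updated training.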
Thereafter, the name of the room is selected from the monikers based on the moniker probabilities, as illustrated by step 2920. In one example, selecting the name of the room may be by selecting the moniker with the highest moniker probability, but any suitable selection from the monikers with moniker probability above a predetermined threshold may be used.
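The generation and selection of monikers may be sketched as follows in Python. The member probabilities are hypothetical, the 95% threshold mirrors the example figure given later, and the additive scoring function is only a stand-in for the neural network that assigns moniker probabilities.

```python
from itertools import permutations

# Hypothetical member probabilities; in the described system a trained
# neural network produces these values (step 2912)
member_prob = {"San": 0.97, "Jose": 0.96, "Meet": 0.98, "Room": 0.995,
               "Red": 0.40, "Carpet": 0.0}
THRESHOLD = 0.95
candidates = [lemma for lemma, p in member_prob.items() if p >= THRESHOLD]

# Generate monikers: ordered combinations of the high-probability lemmas
# (step 2914)
monikers = [" ".join(c) for n in range(2, len(candidates) + 1)
            for c in permutations(candidates, n)]

# Stand-in moniker scoring; the described system assigns moniker
# probabilities with a neural network rather than this additive heuristic
def moniker_probability(moniker):
    words = moniker.split()
    return sum(member_prob[w] for w in words) / len(member_prob)

# Select the moniker with the highest moniker probability (step 2920)
name = max(monikers, key=moniker_probability)
```

Under this stand-in scoring, the selected name uses all four high-probability lemmas, while low-probability lemmas such as "Carpet" never enter a moniker.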
In the next example step, formatting characters and reserved words are removed from the text, with the resultant shown in block 3002. For example, in the character set “%%kSanJoseMeetingRoom%%k”, both instances of the formatting characters “%%k” are removed from the sequence. Again, the resultant shown in block 3002 includes words and numbers associated with the room of interest, as well as words and numbers not associated with the room of interest.
In the next example step, the words and numbers are grouped based on each respective word's or number's location association. In particular, the grouping takes as input the text with the reserved characters removed (e.g., block 3002) and the room-level outlines (e.g., “Post Processed Unit Polygons” 2904 in the figure). The grouping creates, for each room, an indication of text associated with or inside the rooms. In the example of
In the next example step, words and numbers are separated, with the resultant as shown in block 3006. For example, the single string of characters “SanJoseMeetingRoom” is separated into four distinct words. As another example, the indication of 12 people (“12pp”) is separated into the number 12 and the abbreviation for people (“pp”).
The next step in the example may be lemmatization to form a plurality of lemmas, with the resultant as shown in block 3008. Lemmatization may be considered finding a root for a word, where prior to lemmatization the word may be presented in two or more inflected forms. For example, the word “walk” may be presented in several inflected forms depending on the part of speech and usage, such as “walks”, “walked”, or “walking.” Lemmatizing, or lemmatization, may thus be finding the root word, such as “walk” for any of the usages “walks”, “walked”, or “walking.” In the context of the separated words as shown in block 3006, several of the words are already in lemma form, such as “San”, “Room”, and “Red.” However, other words may be reduced to their lemma form, such as “Meet” for “Meeting,” “Chair” for “Chairs,” and “Carpet” for “Carpets.”
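A minimal suffix-stripping sketch of lemmatization in Python is shown below. Real lemmatizers (e.g., WordNet-based ones) use a vocabulary and part-of-speech information rather than bare suffix rules; this simplified version merely illustrates the inflected-form-to-root mapping described above.

```python
def lemmatize(word):
    # Simplified suffix stripping -- an assumption standing in for a real
    # vocabulary-driven lemmatizer
    lower = word.lower()
    for suffix, replacement in (("ing", ""), ("ed", ""), ("ies", "y"), ("s", "")):
        # Guard against stripping suffixes from very short words
        if lower.endswith(suffix) and len(lower) > len(suffix) + 2:
            return lower[: -len(suffix)] + replacement
    return lower

words = ["Meeting", "chairs", "Carpets", "walked", "Room"]
lemmas = [lemmatize(w) for w in words]
# -> ["meet", "chair", "carpet", "walk", "room"]
```

Consistent with the examples above, "Meeting" reduces to "meet", "chairs" to "chair", and "Carpets" to "carpet", while "Room" is already in lemma form.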
The next step in the example is assigning member probabilities to each lemma, with each member probability representing the likelihood of a lemma being a member of the name of the room of interest, as shown in block 3010. That is, a machine-learning algorithm, such as a neural network, is designed and trained to produce probabilities of each word or lemma being a member of the name of the room of interest. For example, the word “room” has a very high probability of being part of the room name, 99.5%; whereas the word “carpet” has a very low probability of being part of the room name, 0%.
The next step in the example method is generating combinations of the lemmas having probability above a predetermined threshold, the generating creates a plurality of monikers. For example, lemmas having member probabilities of 95% and above may be combined. The example of
For each moniker created, a machine-learning algorithm, such as a neural network, checks each combination and assigns a moniker probability to each moniker. In
The next step in the example method is comparing the monikers and the lemmas to determine whether issues exist in the training, as implied by line 3016. The example representation of
Export
Returning to
Automated Training
The updates or new user data may then be divided or split, as shown by block 3112. A portion of the new user data is used to create a training data set 3114, and another portion of the new user data is used to create a test data set 3116. In one example, the training data set 3114 is combined with existing training data 3130 to create an updated training data set 3118, and the test data set 3116 is combined with existing test data 3132 to create an updated test data set 3120. Stated otherwise, the example implementation generates the updated training data set 3118 by combining prior training data with a first portion of the new user data, and generates an updated test data set 3120 by combining prior test data with a second portion of the new user data.
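The split-and-fold step may be sketched in Python as follows. The 80/20 split fraction and the function name are assumed choices for illustration; the text does not specify the proportion used to divide the new user data.

```python
import random

def fold_in_new_user_data(new_user_data, existing_train, existing_test,
                          train_fraction=0.8, seed=0):
    """Split new user data (block 3112) and fold it into the existing sets.
    The 80/20 fraction is an assumption, not stated in the text."""
    shuffled = list(new_user_data)
    random.Random(seed).shuffle(shuffled)  # avoid ordering bias in the split
    cut = int(len(shuffled) * train_fraction)
    # Updated training data set (block 3118) and test data set (block 3120)
    updated_train = list(existing_train) + shuffled[:cut]
    updated_test = list(existing_test) + shuffled[cut:]
    return updated_train, updated_test

updated_train, updated_test = fold_in_new_user_data(
    new_user_data=list(range(10)),
    existing_train=["t1"], existing_test=["e1"])
```

Shuffling before the cut keeps any ordering in the incoming updates from skewing which samples land in the training versus test portions.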
The example implementation has two machine-learning systems. One machine-learning system is the production machine-learning system 3122 that generates the indoor maps from the CAD drawings, designated as the MapScale™ engine within the dashed box 3100. The second machine-learning system is an offline or supporting machine-learning system, generally shown within dashed box 3124, and hereafter just supporting machine-learning system. The supporting machine-learning system 3124 is used to evaluate how the new user data 3110 affects the one or more machine-learning models of the production machine-learning system 3122 before implementing the changes in the production machine-learning system 3122.
In various examples then, the supporting machine-learning system 3124 is trained using the updates to the indoor maps. More particularly, in various examples the supporting machine-learning system 3124 is trained, as illustrated by block 3134, with the updated training data set 3118 comprising the previously existing training data 3130 and the training data set 3114 derived from the new user data 3110. The training results in a machine-learning model registry 3136, having the parameters used to implement the machine-learning system, such as the weights and parameters of the nodes of a neural network. Once the supporting machine-learning system 3124 is trained, the updated test data set 3120 is applied to the supporting machine-learning system 3124, as implied by line 3138. The resultant will be one or more indoor maps. More particularly, when the updated test data set 3120 is applied to the supporting machine-learning system 3124, an evaluation indoor map is created along with an evaluation map accuracy.
The next step in the example method is a determination as to whether the newly trained supporting machine-learning system 3124 experienced a performance degradation, as shown in step 3140. Performance improvements and/or degradation may be determined by comparing the resultant indoor map to ground truth (e.g., comparing the map to the physical space). That is, if the updated training data set 3118 decreases the performance and accuracy of the supporting machine-learning system 3124, then of course the updates should not be transferred to the production machine-learning system 3122. If the updates result in degradation, then the example implementation moves to model refinement, as shown in block 3142, and as discussed in greater detail below. However, if the updates increase performance and/or accuracy of the supporting machine-learning system 3124, then those updates may be rolled out to the production machine-learning system 3122.
In some implementations, any update that results in increased performance and/or better map accuracy is automatically rolled out to the production machine-learning system 3122. For example, the machine-learning model registry 3136 may be copied to the production machine-learning system 3122. Alternatively, the production machine-learning system 3122 may be retrained with the updated training data set 3118. However, in other cases, the system may refrain from rolling out updates that result in minor improvements over the existing performance. In particular, in the example implementation a determination is made as to whether the evaluation map accuracy is above a predetermined threshold, as shown by block 3144. If the evaluation map accuracy is above a predetermined threshold (e.g., greater than 2% improvement), then the example method sends the updates to the production machine-learning system 3122, as shown by block 3146. If, on the other hand, the evaluation map accuracy is within a predetermined window above the production map accuracy (e.g., between 0.0% and 2.0%), then the example implementation may refrain from updating the production machine-learning system 3122, as shown by block 3148. It is noted, however, that the new user data is retained within the database 3102, and becomes part of the existing training data 3130 and the existing test data 3132 when second and subsequent updates are received from the example reviewers 3104, 3106, and 3108.
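The three-way rollout decision may be sketched as follows in Python. The 2% window mirrors the example figures above; the function and return labels are illustrative.

```python
def rollout_decision(evaluation_accuracy, production_accuracy,
                     improvement_window=0.02):
    """Sketch of the rollout logic; the 2% window mirrors the example
    figures given for blocks 3144-3148."""
    delta = evaluation_accuracy - production_accuracy
    if delta < 0:
        return "refine"     # degradation: go to model refinement (block 3142)
    if delta > improvement_window:
        return "roll_out"   # meaningful improvement: update production (3146)
    return "refrain"        # minor improvement: retain data, don't deploy (3148)
```

For example, an evaluation accuracy of 93% against a production accuracy of 90% exceeds the 2% window and triggers a rollout, while a 1% improvement is retained in the database but not deployed.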
Still referring to
The model refinement may take any suitable form. For example, the various machine-learning models, such as neural networks, may have the registry weights manually and significantly altered based on the new user data 3110. In addition to or in place of the adjustments to the registry weights, the architecture of the machine-learning models, again such as neural networks, may be manually altered to account for a statistically significant change in the data. If the model refinement ultimately results in increased map accuracy, the updated machine-learning system may be rolled out to the production environment. In some exceptional cases, one or more users' data may significantly differ from all other data. This may be due to geographical and/or architectural preferences. In such cases, a new user or new user group may be assigned to a more suitable machine-learning model. That is, when the new user's or new user group's data arrives, the system automatically switches to and deploys the newly architected and trained machine-learning models.
The computer system 3200 includes a processing device 3202, a volatile memory 3204 (e.g., random access memory (RAM)), a non-volatile memory 3206 (e.g., read-only memory (ROM), flash memory, solid state drives (SSDs)), and a data storage device 3208, the foregoing of which are enabled to communicate with each other via a bus 3210.
Processing device 3202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 3202 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 3202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a system on a chip, a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 3202 may include more than one processing device, and each of the processing devices may be the same or different types. The processing device 3202 is configured to execute instructions for performing any of the operations and steps discussed herein.
The computer system 3200 may further include a network interface device 3212. The network interface device 3212 may be configured to communicate data (e.g., original CAD drawings, final indoor maps) via any suitable communication protocol. In some embodiments, the network interface device 3212 may enable wireless (e.g., WiFi, Bluetooth, ZigBee, etc.) or wired (e.g., Ethernet, etc.) communications. The computer system 3200 also may include a video display 3214 (e.g., a liquid crystal display (LCD), a light-emitting diode (LED), an organic light-emitting diode (OLED), a quantum LED, a cathode ray tube (CRT), a shadow mask CRT, an aperture grille CRT, or a monochrome CRT), one or more input devices 3216 (e.g., a keyboard or a mouse), and one or more speakers 3218. In one illustrative example, the video display 3214 and the input device(s) 3216 may be combined into a single component or device (e.g., an LCD touch screen).
The network interface device 3212 may transmit and receive data from a computer system application programming interface (API). The data may pertain to any suitable information described herein, such as floor outlines, room outlines, furniture identities and locations, and indoor maps, among other information.
The data storage device 3208 may include a computer-readable storage medium 3220 on which the instructions 3222 embodying any one or more of the methods, operations, or functions described herein are stored. The instructions 3222 may also reside, completely or at least partially, within the volatile memory 3204 or within the processing device 3202 during execution thereof by the computer system 3200. As such, the volatile memory 3204 and the processing device 3202 also constitute computer-readable media. The instructions 3222 may further be transmitted or received over a network via the network interface device 3212.
While the computer-readable storage medium 3220 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium capable of storing, encoding, or carrying a set of instructions for execution by the machine, where such set of instructions cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The following clauses define various examples. The clauses are presented as computer-implemented method claims, but such clauses may be equivalently stated as non-transitory computer-readable medium claims and/or computer system claims:
Clause 1. A computer-implemented method of extracting room names from CAD drawings, the method comprising: preprocessing, by a device, a CAD drawing to create a text database containing text from the CAD drawing and associations of the text with locations within the CAD drawing; determining, by a device, a floor depicted in the CAD drawing, the determining results in a floor-level outline; identifying, by a device, a plurality of room-level outlines within the floor-level outline, the plurality of room-level outlines corresponds to a respective plurality of rooms; selecting, by a device, a name of a first room from the plurality of rooms, the selecting based on text within the text database; and creating, by a device, an indoor map including the name of the first room, the name of the first room associated with a location of the first room within the floor-level outline.
Clause 2. The computer-implemented method of clause 1 wherein selecting the name for the first room further comprises: grouping text in the text database associated with the first room, the grouping creates a first grouped text; lemmatizing words of the first grouped text to form a first plurality of lemmas; assigning a member probability to each lemma of the first plurality of lemmas, each member probability representing likelihood of a lemma being a member of the name of the first room; generating combinations of the first plurality of lemmas having probability above a predetermined threshold, the generating creates a first plurality of monikers; assigning a moniker probability to each moniker of the first plurality of monikers, each moniker probability representing likelihood of a moniker being the name of the first room; and selecting the name of the first room based on the first plurality of monikers and the moniker probabilities.
Clause 3. The computer-implemented method of clause 2 further comprising, after grouping but before lemmatizing, separating words and numbers of the first grouped text.
Clause 4. The computer-implemented method of clause 2 further comprising, after selecting the name of the first room, de-lemmatizing the name of the first room.
Clause 5. The computer-implemented method of clause 2 further comprising, before selecting the name of the first room: determining that the first plurality of monikers each lack a lemma having member probability above a predetermined threshold; and retraining a machine-learning model configured to generate the combinations of the first plurality of lemmas.
Clause 6. The computer-implemented method of clause 2 further comprising selecting a name for a second room, distinct from the first room, the selecting by: grouping text in the text database associated with the second room, the grouping creates a second grouped text; lemmatizing words of the second grouped text to form a second plurality of lemmas; assigning a member probability to each lemma of the second plurality of lemmas, each member probability of the second plurality of lemmas representing likelihood of a lemma being a member of the name of the second room; generating combinations of the second plurality of lemmas having probability above a predetermined threshold, the generating creates a second plurality of monikers; assigning a moniker probability to each moniker of the second plurality of monikers, each moniker probability of the second plurality of monikers representing likelihood of a moniker being the name of the second room; and selecting the name of the second room based on the second plurality of monikers and the moniker probabilities of the second plurality of monikers.
Clause 7. The computer-implemented method of clause 6 further comprising, after selecting the name of the second room, de-lemmatizing the name of the second room.
Clause 8. The computer-implemented method of clause 2 further comprising, prior to grouping text, removing predetermined characters.
Clause 9. The computer-implemented method of clause 8 wherein removing predetermined characters further comprises removing characters associated with formatting of the text.
Clause 10. The computer-implemented method of any preceding clause wherein determining the floor depicted in the CAD drawing further comprises receiving the floor-level outline from a user.
Clause 11. A computer system comprising: a processor; and a memory coupled to the processor. The memory storing instructions that, when executed by the processor, cause the processor to: preprocess a CAD drawing to create a text database containing text from the CAD drawing and associations of the text with locations within the CAD drawing; determine a floor depicted in the CAD drawing, the determining results in a floor-level outline; identify a plurality of room-level outlines within the floor-level outline, the plurality of room-level outlines corresponds to a respective plurality of rooms; select a name of a first room from the plurality of rooms, the selection based on text within the text database; and create an indoor map including the name of the first room, the name of the first room associated with a location of the first room within the floor-level outline.
Clause 12. The computer system of clause 11 wherein when the processor selects the name for the first room, the instructions further cause the processor to: group text in the text database associated with the first room, the grouping creates a first grouped text; lemmatize words of the first grouped text to form a first plurality of lemmas; assign a member probability to each lemma of the first plurality of lemmas, each member probability representing likelihood of a lemma being a member of the name of the first room; generate combinations of the first plurality of lemmas having probability above a predetermined threshold, the generating creates a first plurality of monikers; assign a moniker probability to each moniker of the first plurality of monikers, each moniker probability representing likelihood of a moniker being the name of the first room; and select the name of the first room based on the first plurality of monikers and the moniker probabilities.
Clause 13. The computer system of clause 12 wherein the instructions further cause the processor to, after grouping but before lemmatizing, separate words and numbers of the first grouped text.
Clause 14. The computer system of clause 12 wherein the instructions further cause the processor to, after selecting the name of the first room, de-lemmatize the name of the first room.
Clause 15. The computer system of clause 12 wherein the instructions further cause the processor to, before selecting the name of the first room: determine that the first plurality of monikers each lack a lemma having member probability above a predetermined threshold; and trigger retraining of a machine-learning model configured to generate the combinations of the first plurality of lemmas.
Clause 16. The computer system of clause 12 wherein the instructions further cause the processor to select a name for a second room, distinct from the first room, by causing the processor to: group text in the text database associated with the second room, the grouping creates a second grouped text; lemmatize words of the second grouped text to form a second plurality of lemmas; assign a member probability to each lemma of the second plurality of lemmas, each member probability of the second plurality of lemmas representing likelihood of a lemma being a member of the name of the second room; generate combinations of the second plurality of lemmas having probability above a predetermined threshold, the generating creates a second plurality of monikers; assign a moniker probability to each moniker of the second plurality of monikers, each moniker probability of the second plurality of monikers representing likelihood of a moniker being the name of the second room; and select the name of the second room based on the second plurality of monikers and the moniker probabilities of the second plurality of monikers.
Clause 17. The computer system of clause 16 wherein the instructions further cause the processor to, after selecting the name of the second room, de-lemmatize the name of the second room.
Clause 18. The computer system of clause 12 wherein the instructions further cause the processor to, prior to grouping text, remove predetermined characters.
Clause 19. The computer system of clause 18 wherein when the processor removes predetermined characters, the instructions cause the processor to remove characters associated with formatting of the text.
Clause 20. The computer system of any of clauses 11-19 wherein when the processor determines the floor depicted in the CAD drawing, the instructions further cause the processor to receive the floor-level outline from a user.
Clause 21. A computer-implemented method of determining a geo-location, the method comprising: determining, by a device, a floor-level outline of a floor depicted in a CAD drawing; receiving, by a device, an approximate geo-location of a building to which the CAD drawing applies; obtaining, by a device, an overhead image of a target area encompassing the approximate geo-location, the overhead image comprising a plurality of buildings within the target area; identifying, by a device, a plurality of building footprints within the target area; calculating, by a device, a plurality of distance functions that relate the floor-level outline to each of the plurality of building footprints, the calculating creates a plurality of similarity scores; selecting, by a device, a building footprint from the plurality of building footprints, the selecting based on the plurality of similarity scores; and calculating, by a device, a final geo-location of the building corresponding to the building footprint.
Clause 22. The computer-implemented method of clause 21 wherein calculating the plurality of distance functions further comprises calculating at least two selected from a group consisting of: a Hausdorff distance; a Modified Hausdorff distance; a Procrustes Shape Analysis; a Fréchet distance; and a turning distance.
Clause 23. The computer-implemented method of any of clauses 21-22 wherein calculating the plurality of distance functions further comprises, for each building footprint of the plurality of building footprints, combining a plurality of values indicative of similarity based on a respective plurality of weight values, the combining results in a similarity score.
Clause 24. The computer-implemented method of any of clauses 21-23 wherein selecting the building footprint further comprises: determining that at least two of the plurality of building footprints have similarity scores within a predetermined range of each other; selecting, as between the at least two of the plurality of building footprints, the building footprint closest to the approximate geo-location.
Clause 25. The computer-implemented method of clause 24 wherein determining that at least two of the plurality of building footprints have similarity scores within a predetermined range of each other further comprises determining that at least two of the plurality of building footprints have identical similarity scores.
Clause 26. The computer-implemented method of any of clauses 21-25 further comprising correcting the building footprint for at least one selected from the group consisting of: obliqueness of the overhead image; scale of the building footprint in relation to the floor-level outline; and rotational orientation of the building footprint in relation to the floor-level outline.
Clause 27. The computer-implemented method of any of clauses 21-26 wherein determining the floor-level outline further comprises: preprocessing, by a device, an original CAD drawing and thereby creating a modified CAD drawing, a text database containing text from the modified CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; and applying the CAD raster-image, the CAD vector-image, and the text database to a floor-level machine-learning algorithm, the determining results in the floor-level outline.
Clause 28. The computer-implemented method of any of clauses 21-27 wherein determining the floor depicted in the CAD drawing further comprises receiving the floor-level outline from a user.
Clause 29. A computer system comprising: a processor; and a memory coupled to the processor. The memory storing instructions that, when executed by the processor, cause the processor to: determine a floor-level outline of a floor depicted in a CAD drawing; receive an approximate geo-location of a building to which the CAD drawing applies; obtain an overhead image of a target area encompassing the approximate geo-location, the overhead image comprising a plurality of buildings within the target area; identify a plurality of building footprints within the target area; calculate a plurality of distance functions that relate the floor-level outline to each of the plurality of building footprints, the calculating creates a plurality of similarity scores; select a building footprint from the plurality of building footprints, the selecting based on the plurality of similarity scores; and calculate a final geo-location of the building corresponding to the building footprint.
Clause 30. The computer system of clause 29 wherein when the processor calculates the plurality of distance functions, the instructions cause the processor to calculate at least two selected from a group consisting of: a Hausdorff distance; a Modified Hausdorff distance; a Procrustes Shape Analysis; a Fréchet distance; and a turning distance.
Clause 31. The computer system of any of clauses 29-30 wherein when the processor calculates the plurality of distance functions, the instructions further cause the processor to, for each building footprint of the plurality of building footprints, combine a plurality of values indicative of similarity based on a respective plurality of weight values, the combining results in a similarity score.
Clause 32. The computer system of any of clauses 29-31 wherein when the processor selects the building footprint, the instructions cause the processor to: determine that at least two of the plurality of building footprints have similarity scores within a predetermined range of each other; select, as between the at least two of the plurality of building footprints, the building footprint closest to the approximate geo-location.
Clause 33. The computer system of clause 32 wherein when the processor determines that at least two of the plurality of building footprints have similarity scores within a predetermined range of each other, the instructions further cause the processor to determine that at least two of the plurality of building footprints have identical similarity scores.
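The tie-breaking of clauses 32-33 — preferring the footprint nearest the approximate geo-location when top scores fall within a predetermined range (including identical scores) — might be sketched like this, with all names and the `tie_range` default being hypothetical:

```python
import math

def select_footprint(footprints, scores, centroids, approx_geo, tie_range=0.01):
    # Highest similarity wins outright; when the top scores fall within
    # tie_range of each other (identical scores included), prefer the
    # footprint whose centroid is closest to the approximate geo-location.
    best = max(scores)
    tied = [i for i, s in enumerate(scores) if best - s <= tie_range]
    return footprints[min(tied, key=lambda i: math.dist(centroids[i], approx_geo))]
```

For example, with two footprints scoring identically, the one whose centroid is nearer the approximate geo-location is returned.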
Clause 34. The computer system of any of clauses 29-33 wherein the instructions further cause the processor to correct the building footprint for at least one selected from the group consisting of: obliqueness of the overhead image; scale of the building footprint in relation to the floor-level outline; and rotational orientation of the building footprint in relation to the floor-level outline.
Clause 35. The computer system of any of clauses 29-34 wherein when the processor determines the floor-level outline, the instructions cause the processor to: preprocess an original CAD drawing, thereby creating a modified CAD drawing, a text database containing text from the modified CAD drawing, a CAD vector-image of the modified CAD drawing, and a CAD raster-image of the modified CAD drawing; and apply the CAD raster-image, the CAD vector-image, and the text database to a floor-level machine-learning algorithm, the applying results in the floor-level outline.
Clause 36. The computer system of any of clauses 29-35 wherein when the processor determines the floor depicted in the CAD drawing, the instructions cause the processor to receive the floor-level outline from a user.
Clause 37. A computer-implemented method of machine learning, the method comprising: receiving, by a device, a first set of updates to a first indoor map, the first indoor map previously created by a production machine-learning system having a production map accuracy; training, by a device, a supporting machine-learning system using the first set of updates to the first indoor map; and then applying, by a device, test data to the supporting machine-learning system, the applying results in a first-evaluation indoor map with a first-evaluation map accuracy; and when the first-evaluation map accuracy is within a predetermined window above the production map accuracy, refraining, by a device, from updating the production machine-learning system based on the first set of updates.
Clause 38. The computer-implemented method of clause 37 further comprising, when the first-evaluation map accuracy is below the production map accuracy, triggering, by a device, a manual intervention for the supporting machine-learning system.
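The gating logic spanning clauses 37-38 (and mirrored in clause 40) can be summarized as a three-way decision. The sketch below is purely illustrative — the function name, return labels, and the 0.02 window are assumptions, not terms from the claims:

```python
def gate_model_update(production_acc, evaluation_acc, window=0.02):
    # Illustrative gating of a retrained supporting model:
    # - below production accuracy: regression, so trigger manual intervention
    # - within `window` above production accuracy: marginal gain, refrain
    #   from updating the production system
    # - otherwise: clear improvement, update the production registry
    if evaluation_acc < production_acc:
        return "manual_intervention"
    if evaluation_acc <= production_acc + window:
        return "refrain"
    return "promote"
```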
Clause 39. The computer-implemented method of any of clauses 37-38 wherein training further comprises training the supporting machine-learning system with a training set incorporating at least a portion of the first set of updates and original CAD drawings.
Clause 40. The computer-implemented method of any of clauses 37-39 further comprising: receiving, by a device, a second set of updates to an indoor map; training, by a device, the supporting machine-learning system using the first and second sets of updates, the training results in a second-support machine-learning registry; and then applying, by a device, test data to the supporting machine-learning system, the applying results in a second-evaluation indoor map with a second-evaluation map accuracy; and when the second-evaluation map accuracy is greater than the production map accuracy, updating, by a device, a production machine-learning registry of the production machine-learning system.
Clause 41. The computer-implemented method of clause 40 wherein the test data includes changes based on the second set of updates.
Clause 42. The computer-implemented method of clause 40 wherein the test data includes changes based on the first and second sets of updates.
Clause 43. The computer-implemented method of any of clauses 37-42 wherein receiving the first set of updates to the first indoor map further comprises receiving from a plurality of human reviewers.
Clause 44. The computer-implemented method of clause 43 wherein receiving from the plurality of human reviewers comprises receiving reviews that are asynchronous.
Clause 45. A computer system comprising: a processor; and a memory coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to: receive a first set of updates to a first indoor map, the first indoor map previously created by a production machine-learning system having a production map accuracy; train a supporting machine-learning system using the first set of updates to the first indoor map; and then apply test data to the supporting machine-learning system, the applying results in a first-evaluation indoor map with a first-evaluation map accuracy; and when the first-evaluation map accuracy is within a predetermined window above the production map accuracy, refrain from updating the production machine-learning system based on the first set of updates.
Clause 46. The computer system of clause 45 wherein the instructions further cause the processor to, when the first-evaluation map accuracy is below the production map accuracy, trigger a manual intervention for the supporting machine-learning system.
Clause 47. The computer system of any of clauses 45-46 wherein when the processor trains the supporting machine-learning system, the instructions cause the processor to train the supporting machine-learning system with a training set incorporating at least a portion of the first set of updates and original CAD drawings.
Clause 48. The computer system of any of clauses 45-47 wherein the instructions further cause the processor to: receive a second set of updates to an indoor map; train the supporting machine-learning system using the first and second sets of updates, the training results in a second-support machine-learning registry; and then apply test data to the supporting machine-learning system, the applying results in a second-evaluation indoor map with a second-evaluation map accuracy; and when the second-evaluation map accuracy is greater than the production map accuracy, update a production machine-learning registry of the production machine-learning system.
Clause 49. The computer system of clause 48 wherein the test data includes changes based on the second set of updates.
Clause 50. The computer system of clause 48 wherein the test data includes changes based on the first and second sets of updates.
Clause 51. The computer system of any of clauses 45-50 wherein when the processor receives the first set of updates to the first indoor map, the instructions cause the processor to receive from a plurality of human reviewers.
Clause 52. The computer system of clause 51 wherein when the processor receives from the plurality of human reviewers, the instructions cause the processor to receive reviews that are asynchronous.
The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
This application is a continuation of U.S. application Ser. No. 18/216,900 filed Jun. 30, 2023 titled “Systems and Methods for Automating Conversion of Drawings to Indoor Maps and Plans.” The '900 Application is a continuation-in-part of U.S. application Ser. No. 18/052,852 filed Nov. 4, 2022 titled “Systems and Methods for Automating Conversion of Drawings to Indoor Maps and Plans.” The '852 Application is a continuation of U.S. application Ser. No. 17/732,652 filed Apr. 29, 2022 titled “Systems and Methods for Automating Conversion of Drawings to Indoor Maps and Plans.” The '652 Application claims the benefit of U.S. Provisional Application No. 63/318,522 filed Mar. 10, 2022 and titled “Systems and Methods for Automating Conversion of Drawings to Indoor Maps and Plans.” All the noted applications are incorporated herein by reference as if reproduced in full below.
Number | Name | Date | Kind |
---|---|---|---|
7227893 | Srinivasa et al. | Jun 2007 | B1 |
11190902 | Brosowsky et al. | Nov 2021 | B1 |
11514633 | Cetintas et al. | Nov 2022 | B1 |
11657555 | Cetintas et al. | May 2023 | B1 |
11769287 | Cetintas et al. | Sep 2023 | B1 |
20040049307 | Beatty et al. | Mar 2004 | A1 |
20100214290 | Shiell et al. | Aug 2010 | A1 |
20150199557 | Zhang et al. | Jul 2015 | A1 |
20190311533 | Doh | Oct 2019 | A1 |
20210073433 | Austern et al. | Mar 2021 | A1 |
20210073435 | Segev | Mar 2021 | A1 |
20210150088 | Gallo et al. | May 2021 | A1 |
20210409903 | Shapiro | Dec 2021 | A1 |
20220035973 | Liebman | Feb 2022 | A1 |
20220138621 | Patil | May 2022 | A1 |
20220147026 | Poelman | May 2022 | A1 |
20230157506 | Tamino et al. | May 2023 | A1 |
20230306664 | Cetintas et al. | Sep 2023 | A1 |
Number | Date | Country |
---|---|---|
111854758 | Oct 2020 | CN |
Entry |
---|
Extended European Search Report dated Aug. 3, 2023 for European Application No. 23155925.3-1009, 6 pages. |
Simonsen, Christoffer P. et al., “Generalizing Floor Plans Using Graph Neural Networks”, International Conference on Image Processing (ICIP), Sep. 2021, pp. 654-658. |
“Import AutoCAD Files in 1 Minute Only”, https://www.mapwize.io/news/2019-01-import-autocad-files-in-1-minute--only/, ServiceNow, Jan. 2019, 2 pages. |
“Manage all your indoor maps from a dedicated platform”, https://visioglobe.com/indoor-mapping-tools, Visioglobe.com, Accessed: Jul. 6, 2022, 6 pages. |
“Integrate Indoor Mapping Data Format (IMDF) Using FME”, https://visioglobe.com/indoor-mapping-tools, Safe Software, Accessed: Jul. 6, 2022, 9 pages. |
“DIY Augmented Reality Indoor Mapping for your venue”, https://www.safe.com/blog/2018/09/diy-augmented-reality-indoor-mapping-for-your-venue/, Safe Software, Accessed: Jul. 6, 2022, 11 pages. |
“Convert DWG to IMDF,” https://www.safe.com/blog/2018/09/diy-augmented-reality-indoor-mapping-for-your-venue/, Safe Software, Accessed: Jul. 6, 2022, 6 pages. |
Sarker, Iqbal H., “Deep Learning: A Comprehensive Overview of Techniques, Taxonomy, Applications and Research Directions”, SN Computer Science, Aug. 18, 2021, 20 pages. |
Huang H.C., et al., “Graph Theory-based Approach for Automatic Recognition of CAD Data”, ScienceDirect, Engineering Applications of Artificial Intelligence 21, 2008, 7 pages. |
Tang, Rui, et al., “Automatic Structural Scene Digitalization”, Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath UK, Nov. 17, 2017, 16 pages. |
Tang, Hao, et al., “Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan”, The Journal on Technology and Persons with Disabilities, Sep. 2017, 17 pages. |
“Apple Indoor Maps and Positioning”, Apple Inc., Apr. 2021, 22 pages. |
De Las Heras, Lluis-Pere, et al., “Statistical Segmentation and Structural Recognition for Floor Plan Interpretation”, German Research Center for AI, Dec. 2013, 18 pages. |
Number | Date | Country | |
---|---|---|---|
20230360295 A1 | Nov 2023 | US |
Number | Date | Country | |
---|---|---|---|
63318522 | Mar 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 18216900 | Jun 2023 | US |
Child | 18221485 | US | |
Parent | 17732652 | Apr 2022 | US |
Child | 18052852 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 18052852 | Nov 2022 | US |
Child | 18216900 | US |