The present disclosure relates generally to augmented reality computing devices. More specifically, the present disclosure relates to a system and method for collaboratively measuring an object and/or a feature of a structure, which may include a video and audio connection (e.g., a video collaboration web portal) between a user utilizing a mobile device and a remote user utilizing a computing device, or which may be utilized as a stand-alone feature by a mobile device user.
In the insurance underwriting, building construction, solar, field services, and real estate industries, computer-based systems for generating floor plans and layouts of physical structures such as residential homes and commercial buildings, as well as of objects within those structures (e.g., furniture, cabinets, appliances, etc.), are becoming increasingly important. In particular, to generate an accurate floor plan of a physical structure, one must have an accurate set of data which adequately describes that structure. Moreover, it is becoming increasingly important to provide computer-based systems which have adequate capabilities to measure interior and exterior features of buildings, as well as to measure specific interior objects and features of such buildings (e.g., a countertop length, a ceiling height, a room width, doors, windows, closets, etc.).
With the advent of mobile data capturing devices, including phones and tablets, it is now possible to gather and process accurate data from sites located anywhere in the world. The data can be processed either directly on a hand-held computing device or on some other type of device (provided that such devices have adequate computing power). However, industry professionals (e.g., a claims adjuster, a foreman, a utility installer, a real estate agent, etc.) are often not readily available for an on-site visit.
Accordingly, what would be desirable is a system and method for collaboratively measuring an object and/or feature of a structure, which may include a video and audio connection (e.g., a video collaboration web portal) between a user (e.g., a homeowner) utilizing a mobile device and a remote user (e.g., an industry professional) utilizing a computing device, or which may be utilized as a stand-alone feature by a mobile device user.
The present invention relates to systems and methods for collaborative augmented reality measurement of an object using computing devices. The system establishes an audio and video connection between a mobile device of a first user and a remote device of a second user such that the second user can view and edit an augmented reality scene displayed on a display of the mobile device of the first user. The system receives a measurement tool selection from the first user or the second user to measure an object and/or feature present in the augmented reality scene displayed on the display of the mobile device of the first user. Then, the system detects a plane (e.g., a vertical or horizontal plane) of the augmented reality scene as a reference to position and capture points to execute a measurement of the object and/or feature present in the augmented reality scene. The system determines a measurement of the object and/or feature based on the selected measurement tool and transmits the measurement of the object and/or feature to a server.
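By way of a non-limiting illustration only, the following sketch outlines the flow just summarized; all type and function names (e.g., MeasurementTool, runMeasurement) are hypothetical placeholders rather than details of the disclosure, and plane detection and point capture are assumed to be supplied by the mobile device's augmented reality framework.

```swift
import simd

// Hypothetical, illustrative flow: select a tool, detect a reference plane,
// capture points against it, compute the measurement, and transmit the result.
enum MeasurementTool { case line, rectangle, polygonPrism }

struct DetectedPlane { let origin: simd_float3; let normal: simd_float3 }

func runMeasurement(tool: MeasurementTool,
                    detectPlane: () -> DetectedPlane,
                    capturePoint: (DetectedPlane) -> simd_float3,
                    transmit: (Double) -> Void) {
    let plane = detectPlane()                             // reference plane for the AR scene
    let pointA = capturePoint(plane)                      // first captured point
    let pointB = capturePoint(plane)                      // second captured point
    let distance = Double(simd_distance(pointA, pointB))  // e.g., a line measurement
    transmit(distance)                                    // send the measurement to the server
}
```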
The foregoing features of the invention will be apparent from the following Detailed Description of the Invention, taken in connection with the accompanying drawings.
The present disclosure relates to a system and method for the collaborative augmented reality measurement of an object using computing devices, as described in detail below in connection with the accompanying drawings.
Turning to the drawings, the system 10 includes system code 20 (non-transitory, computer-readable instructions) stored on a computer-readable medium and executable by the hardware processor 12 or one or more computer systems. The code 20 could include various custom-written software modules that carry out the steps/processes discussed herein, and could include, but is not limited to, an audio/video (A/V) remote connection module 22a, a plane detection module 22b, and a measurement module 22c. The code 20 could be programmed using any suitable programming language including, but not limited to, Swift, Kotlin, C, C++, C#, Java, Python, or any other suitable language. Additionally, the code 20 could be distributed across multiple computer systems in communication with each other over a communications network, and/or stored and executed on a cloud computing platform and remotely accessed by a computer system in communication with the cloud platform. The code 20 could communicate with the server 14 and the remote device 16, which could be stored on one or more other computer systems in communication with the code 20.
Still further, the system 10 could be embodied as a customized hardware component such as a field-programmable gate array (“FPGA”), application-specific integrated circuit (“ASIC”), embedded system, or other customized hardware components without departing from the spirit or scope of the present disclosure.
The system 10 can snap to a point by executing a raycast hit test originating from a center of the display of the mobile device 12. If an existing point on the detected plane is hit (contacted), then the system 10 can update a world position (e.g., a position relative to the scene's world coordinate space) of the reticle overlay to be the world position of the existing point. If an existing point is not hit, the system 10 can update the world position of the reticle overlay to a position where a raycast hit test originating from the center of the display of the mobile device 12 hits a plane. The system 10 can also snap to the orthogonal guideline by executing a raycast hit test originating from a center of the display of the mobile device 12. The orthogonal guideline can be defined by a collision shape (e.g., planes, spheres, boxes, cylinders, convex hulls, ellipsoids, compounds, arbitrary shapes, or any suitable shape defining the orthogonal guideline). The collision shape can be hit by cast rays. If a collision shape of the orthogonal guideline is hit, the system 10 can utilize the hit position and project it onto a vector indicative of a direction of the orthogonal guideline, and update a position of the reticle overlay to be the hit position adjusted to the orthogonal guideline direction. If the guideline collision shape is not hit, the system 10 can update a position of the reticle to a position where a center of the display raycast hits a plane.
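As a minimal sketch of the projection step just described (hypothetical helper names; the raycast itself is assumed to be supplied by the AR framework), the hit position can be projected onto the guideline's direction vector as follows:

```swift
import simd

// Project a raycast hit position onto the orthogonal guideline: the hit point
// is expressed as a signed distance along the guideline's direction, measured
// from an anchor point on the guideline, and the reticle is moved to that spot.
func snapToGuideline(hitPosition: simd_float3,
                     guidelineAnchor: simd_float3,
                     guidelineDirection: simd_float3) -> simd_float3 {
    let direction = simd_normalize(guidelineDirection)
    let t = simd_dot(hitPosition - guidelineAnchor, direction)  // distance along the guideline
    return guidelineAnchor + t * direction                      // hit position adjusted to the guideline
}
```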
Additionally, the system 10 can snap to a plane on an orthogonal guideline. In particular, when the reticle is snapped to the orthogonal guideline, the system 10 can execute a raycast hit test with the origin set to the reticle position (e.g., a position of the reticle overlay on the orthogonal guideline) and the direction set to the orthogonal guideline direction. If a plane is hit, the system 10 can determine a distance from the reticle to a plane hit position, and if the distance is within a “snap range” (e.g., a predetermined centimeter threshold), the system 10 can update the reticle position to the plane hit position. If a plane is not hit, the system 10 can execute a raycast hit test with the origin set to the reticle position and the direction set to the negated orthogonal guideline direction. If a plane is hit, the system 10 can determine a distance from the reticle to a plane hit position, and if the distance is within the “snap range,” the system 10 can update the reticle position to the plane hit position. If a plane is not hit in the negated orthogonal guideline direction, the system 10 can maintain a position of the reticle on the guideline. The system 10 can execute the aforementioned raycast hit tests with each new position of the reticle.
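A minimal sketch of this snap-range logic is shown below, assuming a hypothetical raycast closure that stands in for the AR framework's hit test and an illustrative 5 cm snap range:

```swift
import simd

// Cast from the reticle along the guideline direction and, failing that, along
// the negated direction; snap to the plane hit position only if it falls
// within the snap range, otherwise keep the reticle on the guideline.
func snapToPlaneOnGuideline(reticle: simd_float3,
                            guidelineDirection: simd_float3,
                            snapRange: Float = 0.05,
                            raycast: (_ origin: simd_float3, _ direction: simd_float3) -> simd_float3?) -> simd_float3 {
    for direction in [guidelineDirection, -guidelineDirection] {
        if let hit = raycast(reticle, direction), simd_distance(reticle, hit) <= snapRange {
            return hit          // update the reticle position to the plane hit position
        }
    }
    return reticle              // maintain the reticle's position on the guideline
}
```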
The system 10 can also extend a measurement along the orthogonal line. When an initial measurement is positioned along an orthogonal guideline, a second point of the initial measurement becomes oriented along the directional vector of the orthogonal guideline. If a new measurement is started from the initial measurement's second point, the orthogonal guideline uses that point's orientation to extend along the same directional vector. The new measurement can then be completed along the guideline making it collinear with the initial measurement.
It should be understood that the system 10 allows the second user 18 to remotely position a point on the augmented reality scene. In particular, the second user 18 and/or the remote device 16 can transmit a signal via a video client's server to the first user 11 and/or the mobile device 12 requesting that the first user 11 and/or the mobile device 12 add a measurement point. The first user 11 and/or the mobile device 12 receives this signal and executes the operation to add a measurement point on behalf of the second user 18. This signal transmission can also be utilized to remotely initiate and close a measurement tool, select the type of measurement to be conducted, change a unit of measurement, and modify or discard a captured point.
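By way of illustration, the remote-control signals described above could be modeled as a small command set such as the following; the command names and the JSON encoding are assumptions made for this sketch, not details of the disclosure:

```swift
import Foundation

// Hypothetical command set relayed from the remote device, via the video
// client's server, to the mobile device, which performs the operation locally.
enum RemoteCommand: String, Codable {
    case openMeasurementTool, closeMeasurementTool
    case selectMeasurementType, changeUnitOfMeasurement
    case addMeasurementPoint, modifyCapturedPoint, discardCapturedPoint
}

// Example: serialize a command on the remote device and decode it on the mobile device.
let payload = try! JSONEncoder().encode(RemoteCommand.addMeasurementPoint)
let received = try! JSONDecoder().decode(RemoteCommand.self, from: payload)
```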
In step 122, the system 10 determines a distance between the captured points. In particular, the system 10 can determine the distance between two points by applying the distance formula to the three-dimensional coordinates of each point. In step 124, the system 10 labels and displays the determined distance between the captured points. It should be understood that the system 10 can carry out different operations for labeling and displaying the determined distance between two points based on an operating system executing on the mobile device 12.
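For instance, the distance of step 122 follows directly from the Euclidean distance formula applied to the two captured three-dimensional points, as in the brief sketch below:

```swift
import simd

// Distance between two captured points: √(Δx² + Δy² + Δz²).
func distanceBetween(_ a: simd_float3, _ b: simd_float3) -> Float {
    let d = b - a
    return (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot()
}

let length = distanceBetween(simd_float3(0, 0, 0), simd_float3(1.2, 0, 0.9))  // 1.5
```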
For example, if iOS is executing on the mobile device 12, then the distance for the line measurement is displayed in a label using shape and label nodes from the Apple SpriteKit library. When a line measurement is pending (indicated by a solid line or a dashed line), the measurement label is positioned on the guideline no greater than four times the label's width from the reticle, or it is positioned above the reticle, thus keeping the line measurement visible on the screen until the line measurement is complete. Once a line measurement is complete, a solid line is placed between the two points in two-dimensional space. When the line measurement is complete, the label is positioned at a midpoint of the line in three-dimensional space, with the midpoint determined by using a midpoint segment formula. Measurements can be displayed in feet and inches or meters and/or centimeters, depending on the region settings of the mobile device 12 or the configuration override set in a menu on the system 10.
In another example, if the Android operating system is executing on the mobile device 12, then the system 10 can create a view that can be rendered in three-dimensional space, called a label, that displays a distance of the line measurement. When a line measurement is pending (indicated by a solid line or a dashed line), the label is displayed and positioned no further away from the reticle than a defined maximum distance that keeps the label visible while the line measurement is pending. On every frame, rotation, size, and position adjustments are required. For rotation adjustments, the system 10 aligns the label's up vector with the up vector of the camera of the mobile device 12 and subsequently aligns the label's forward vector with its screen point ray vector, thereby keeping the label facing the camera and tilting with the camera. For size adjustments, the system 10 adjusts the label's size to be proportional to a base height and the distance from the camera. As the camera moves further away from a completed line measurement, the label will increase in size. Once a line measurement is complete, a solid line is placed between the two points in three-dimensional space. When the line measurement is complete, the label is positioned at the x, y, z coordinates that lie at the center between the start and end points of the line measurement. On every frame, the rotation, size, and position adjustments are made.
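As a minimal sketch of the per-frame size adjustment and midpoint placement described above (hypothetical helpers, not the disclosed Android implementation), the label's scale can be made proportional to a base height and its distance from the camera:

```swift
import simd

// Scale the label in proportion to a base height and its distance from the
// camera, so the label grows as the camera moves away from the measurement.
func labelScale(labelPosition: simd_float3,
                cameraPosition: simd_float3,
                baseHeight: Float = 0.02) -> Float {
    baseHeight * simd_distance(labelPosition, cameraPosition)
}

// The completed label sits at the midpoint of the two measurement endpoints.
func midpoint(_ start: simd_float3, _ end: simd_float3) -> simd_float3 {
    (start + end) * 0.5
}
```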
In some embodiments, the system 10 can extend a measurement along a different orthogonal guideline. The system 10 can generate a new orthogonal guideline that is tilted relative to a previous orthogonal guideline. For example, there is a non-zero angle between the new orthogonal guideline and the previous orthogonal guideline. A new measurement can be started from the previous measurement along the new orthogonal guideline. For example, the system 10 can capture a third point along the new orthogonal guideline. The system 10 can calculate a distance between the second and third points. The system 10 can label and display the distance between the second and third points.
In step 144, the system 10 captures additional points and links the additional points to point A to close a polygon formed by point A, point B, and the additional points. In step 146, the system 10 captures a point C indicative of a vertical distance of a height of the polygon prism. Then, in step 148, the system 10 determines geometrical parameters of the polygon prism, such as a perimeter and an area of each face of the polygon prism and a volume of the polygon prism. For example and with respect to a rectangular measurement, the system 10 determines a perimeter of a rectangular plane by applying a perimeter formula of a rectangle and determines an area of the rectangular plane by applying an area formula of a rectangle. Additionally, it should be understood that the system 10 can optionally merge coplanar polygons where a polygon refers to a closed, non-self-intersecting path formed by an ordered list of coplanar vertices. The system 10 can merge two polygons by positioning a first polygon on a ground plane, positioning a second polygon on the ground plane such that it overlaps with the first polygon, and determining a union between the first and second polygons. The system 10 can merge an additional polygon by determining a union between the additional polygon and the merged first and second polygons. In this way, the system 10 can merge any number of polygons. The system 10 can remove a section from the first polygon, or merged polygons, by creating a polygon within the interior of the existing polygon where at least one side of the polygon snaps to the perimeter of the existing polygon and no side of the additional polygon extends beyond the perimeter of the existing polygon. A line tool can create a face of the polygon that is not 90 degrees by marking a point on one face of the polygon and marking another point on a different face of the polygon. With this combination of tools a polygon with varying shapes can be created.
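A brief sketch of the geometric quantities in step 148 follows, under the assumption that each base face is supplied as an ordered list of coplanar, non-self-intersecting vertices; the area uses the cross-product (shoelace) form, and the prism volume is the base area multiplied by the captured height.

```swift
import simd

// Perimeter: sum of edge lengths around the closed polygon.
func perimeter(of vertices: [simd_float3]) -> Float {
    var total: Float = 0
    for i in vertices.indices {
        total += simd_distance(vertices[i], vertices[(i + 1) % vertices.count])
    }
    return total
}

// Area of a planar polygon: half the magnitude of the summed cross products.
func area(of vertices: [simd_float3]) -> Float {
    var sum = simd_float3(0, 0, 0)
    for i in vertices.indices {
        sum += simd_cross(vertices[i], vertices[(i + 1) % vertices.count])
    }
    return 0.5 * simd_length(sum)
}

// Volume of the polygon prism: base area times the captured height (point C).
func prismVolume(base vertices: [simd_float3], height: Float) -> Float {
    area(of: vertices) * height
}
```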
In step 150, the system 10 determines whether to exclude an area from a face of the polygon prism. If the system 10 determines not to exclude an area from a face of the polygon prism, then the process ends. Alternatively, if the system 10 determines to exclude an area from a face of the polygon prism, then the process proceeds to step 152. In step 152, the system 10 captures a point D utilizing the reticle overlay at a first corner. Then, in step 154, the system 10 captures a point E utilizing the reticle overlay at a second corner diagonally across the same plane as point D. In step 156, the system 10 determines the area bounded by the points and excludes the determined area from the polygon prism face, and subsequently the process returns to step 150.
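For illustration only, and assuming the excluded region of steps 152-156 is an axis-aligned rectangle on a horizontal face with points D and E as diagonally opposite corners, the exclusion reduces to subtracting the rectangle's area from the face area:

```swift
import simd

// Area excluded by diagonal corners D and E on a horizontal face, subtracted
// from the face's total area (clamped so the result is never negative).
func faceAreaExcluding(faceArea: Float, cornerD: simd_float3, cornerE: simd_float3) -> Float {
    let excluded = abs(cornerE.x - cornerD.x) * abs(cornerE.z - cornerD.z)
    return max(faceArea - excluded, 0)
}
```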
In step 174, the system 10 determines whether there are additional horizontal planes to capture. If the system 10 determines that there are additional horizontal planes to capture, then the process returns to step 170. Alternatively, if the system 10 determines that there are not additional horizontal planes to capture, then the process proceeds to step 176. In step 176, the system 10 captures at least one point C indicative of a vertical distance of a height of the polygon prism. It should be understood that the system 10 can carry out different operations for vertical and/or horizontal plane snapping based on an operating system executing on the mobile device 12.
For example, if an iOS operating system is executing on the mobile device 12, then when a vertical plane is detected the system 10 can extend a bounding box thereof to increase a likelihood of plane intersections to facilitate hit testing. Once the reticle is positioned on a ground plane, the system 10 can execute a hit test along an x-axis line segment and a z-axis line segment of the reticle. If the system 10 detects a vertical plane, then the system 10 can position the reticle at the position of the hit test and orient the reticle along a surface of the detected vertical plane. The system 10 can execute another hit test along the line segment that is oriented along the surface of the first detected plane to detect if the reticle intersects with a second plane. If the system 10 detects a second plane, then the system 10 can position the reticle at the position of the resulting hit test.
In another example, if the Android operating system is executing on the mobile device 12, then the system 10 determines all lines in three-dimensional space where horizontal and vertical planes intersect and adds a guideline at each of the intersections with a collision box that is larger than the actual rendered guideline. Then, the system 10 executes a raycast hit test from a center of the display of the mobile device 12. If a result of the raycast hits the guideline, then the system 10 can snap to a corresponding position on the horizontal plane where the planes intersect.
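A minimal sketch of locating such an intersection guideline is shown below; each plane is assumed to be given in the form normal · x = d, the line direction is the cross product of the two normals, and a point on the line follows from a standard two-plane intersection formula (the helper names are illustrative, not part of the disclosure).

```swift
import simd

// Each plane satisfies normal · x = d.
struct Plane { let normal: simd_float3; let d: Float }

// Intersection line of two non-parallel planes: the direction is the cross
// product of the normals, and the returned point lies on both planes.
func intersectionLine(_ a: Plane, _ b: Plane) -> (point: simd_float3, direction: simd_float3)? {
    let direction = simd_cross(a.normal, b.normal)
    let lengthSquared = simd_length_squared(direction)
    guard lengthSquared > 1e-6 else { return nil }   // planes are parallel; no guideline
    let point = simd_cross(a.d * b.normal - b.d * a.normal, direction) / lengthSquared
    return (point, simd_normalize(direction))
}
```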
Then, in step 178, the system 10 determines a perimeter and an area of each face of the polygon prism and a volume of the polygon prism. For example and with respect to a rectangular measurement, the system 10 determines a perimeter of a rectangular plane by applying a perimeter formula of a rectangle and determines an area of the rectangular plane by applying an area formula of a rectangle. Additionally, it should be understood that the system 10 can optionally merge coplanar polygons where a polygon refers to a closed, non-self-intersecting path formed by an ordered list of coplanar vertices. The system 10 can merge two polygons by positioning a first polygon on a ground plane, positioning a second polygon on the ground plane such that it overlaps with the first polygon, and determining a union between the first and second polygons. The system 10 can merge an additional polygon by determining a union between the additional polygon and the merged first and second polygons. In this way, the system 10 can merge any number of polygons. The system 10 can remove a section from the first polygon, or merged polygons, by creating a polygon within the interior of the existing polygon where at least one side of the polygon snaps to the perimeter of the existing polygon and no side of the additional polygon extends beyond the perimeter of the existing polygon. A line tool can create a face of the polygon that is not 90 degrees by marking a point on one face of the polygon and marking another point on a different face of the polygon. With this combination of tools a polygon with varying shapes can be created.
In step 180, the system 10 determines whether to exclude an area from a face of the polygon prism. Alternatively, the first user 11 or the second user 18 can determine whether to exclude an area from a face of the polygon prism. If the system 10 (or the users 11 or 18) determines not to exclude an area from a face of the polygon prism, then the process ends. Alternatively, if the system 10 determines to exclude an area from a face of the polygon prism, then the process proceeds to step 182. In step 182, the system 10 captures a point D utilizing the reticle overlay at a fourth corner. Then, in step 184, the system 10 captures a point E utilizing the reticle overlay at a fifth corner diagonally across the same plane as point D. In step 186, the system 10 determines the area bounded by the points and excludes the determined area from the polygon prism face, and subsequently the process returns to step 180.
It is noted that the augmented reality scene disclosed herein can be displayed by either, or both, of the mobile device (e.g., of the first user) and the remote device (e.g., of the second user). Moreover, the various tools and processes disclosed herein could also be accessed, utilized, and/or executed by either, or both, of the mobile device and the remote device, thus permitting flexible augmented reality visualization and collaboration using either, or both, of the devices.
Having thus described the system and method in detail, it is to be understood that the foregoing description is not intended to limit the spirit or scope thereof. It will be understood that the embodiments of the present disclosure described herein are merely exemplary and that a person skilled in the art can make variations and modifications without departing from the spirit and scope of the disclosure. All such variations and modifications, including those discussed above, are intended to be included within the scope of the disclosure. What is desired to be protected by Letters Patent is set forth in the following Claims.
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/121,156 filed on Dec. 3, 2020, the entire disclosure of which is hereby expressly incorporated by reference.