The present disclosure relates generally to a system and method for generating computerized models of structures using computing devices. More specifically, the present disclosure relates to a system and method for generating computerized models of structures using geometry extraction and reconstruction techniques.
In the insurance underwriting, building construction, solar, and real estate industries, computer-based systems for generating models of physical structures such as residential homes, commercial buildings, etc., are becoming increasingly important. In particular, in order to create an accurate model of a physical structure, one must have an accurate set of data which adequately describes that structure. Moreover, it is becoming increasingly important to provide computer-based systems which have adequate capabilities to generate three-dimensional (3D) models of both the interior and exterior features of buildings, as well as to identify specific features of such buildings (e.g., interior wall/floor/ceiling features, etc.) and their condition (e.g., exterior wall damage, roof damage, etc.).
With the advent of mobile data capturing devices, including phones, tablets, and unmanned aerial and ground-based vehicles, it is now possible to gather and process accurate data from sites located anywhere in the world. The data can be processed either directly on a hand-held computing device or some other type of device, such as an unmanned aerial vehicle (UAV) or system (UAS) (provided that such devices have adequate computing power), or remotely on a data processing server.
Accordingly, what would be desirable is a system and method for generating three-dimensional models of structures using geometry extraction (such as feature growing) and feature reconstruction techniques.
The present invention relates to a system and method for generating computerized models of structures (such as buildings, homes, dwellings, etc., including interior and exterior features of such structures) using geometry extraction and reconstruction techniques. The system includes a structure modeling engine executed by a computing device, such as a mobile smart phone. The system obtains raw data scanned by a sensor in communication with the smart phone, such as a series of photos, RGB image data (still, fisheye, panoramic, video, etc.), infrared (IR) image data, mobile sensor data (gyroscope/accelerometer/barometer, etc.), laser range data (point cloud data), LIDAR, global positioning system (GPS) data, X-ray data, magnetic field data, depth maps, and other types of data. A data fusion process is applied to fuse the raw data, and a geometry extraction process is performed on the fused data to extract features such as walls, floors, ceilings, roof planes, etc. Large-scale features of the structure are then reconstructed by the system using the extracted features. Optionally, small-scale features of the structure could also be reconstructed by the system. The large- and small-scale features are reconstructed by the system into a floor plan (contour) and/or a polyhedron corresponding to the structure. Optionally, the system can also process exterior features such as roof and wall image data to automatically identify condition and areas of roof damage.
The foregoing features of the invention will be apparent from the following Detailed Description, taken in connection with the accompanying drawings, in which:
The present disclosure relates to a system and method for generating three-dimensional computer models of structures using geometry extraction and reconstruction techniques, as described in detail below in connection with
The smart phone 14 can communicate via a network 16 with a remote user's computer system 18, and/or with a remote structure modeling server 20. The remote structure modeling server 20 could also be programmed with and execute the structure modeling engine 12, if desired. Such an arrangement is particularly advantageous where the smart phone 14 does not have sufficient processing power to rapidly generate three-dimensional models of structures, in which case the server 20 can remotely perform such functions. In such circumstances, the remote server 20 would receive raw captured data from the smart phone 14 via the network 16. The network 16 could include the Internet, a cellular data network, a wireless network, a wired network, etc. Of course, the server 20 could be a stand-alone server, or it could be part of a “cloud” computing environment/platform, if desired.
In the event that a determination is made in step 44 that scanning of the inside of a structure is to be done, step 60 occurs wherein the user moves through each room of the structure using the smart phone 14. Then, in step 62, as the user moves through a room, the user takes overlapping photos, videos, and/or LIDAR point clouds using the phone 14 and/or sensor associated with the phone 14. Then, in step 64, a determination is made as to whether the last room has been captured. If a positive determination is made, step 52 occurs; otherwise, control returns to step 60.
In step 74, a data fusion process is performed on the raw data. In order to utilize the raw data, it must be collated/fused into a coherent data set. For example, if the structure data is captured a frame at a time, the data from one frame is not sufficient to reconstruct the entire structure. The data from each frame must be merged, aligned and made to be consistent with the rest of the frame data from the entire structure. This process is called frame registration and may use position and orientation to calculate the sensor's position for each frame. In addition, data from multiple sources must be synchronized. For example, the color image data must be synchronized with the fisheye, IR, range, gyro/accelerometer and GPS data. The end result is a coherent data set from which meaningful information about the structure can be extracted. The processed data is stored in a fused data set database 76.
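The frame registration described above can be sketched in Python as follows. This is a minimal, illustrative example only (not part of the disclosure): it transforms each frame's points into a common world coordinate system using an assumed per-frame sensor pose limited to position and yaw, whereas a full implementation would use the complete fused orientation data.

```python
import math

def rotate_z(point, yaw):
    """Rotate a 3D point about the Z axis by yaw radians."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y, s * x + c * y, z)

def register_frames(frames):
    """Merge per-frame points into one world-space point set.

    Each frame is a dict holding the sensor pose ('position', 'yaw')
    and the points captured in that frame (sensor coordinates).
    """
    merged = []
    for frame in frames:
        px, py, pz = frame["position"]
        for point in frame["points"]:
            rx, ry, rz = rotate_z(point, frame["yaw"])
            merged.append((rx + px, ry + py, rz + pz))
    return merged

# Two frames taken one meter apart, each observing the same wall point.
frames = [
    {"position": (0.0, 0.0, 0.0), "yaw": 0.0, "points": [(2.0, 0.0, 1.0)]},
    {"position": (1.0, 0.0, 0.0), "yaw": 0.0, "points": [(1.0, 0.0, 1.0)]},
]
world = register_frames(frames)
```

Once registered, both observations coincide at the same world coordinate, which is the consistency property the fusion step requires before extraction can proceed.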
In step 78, the fused data is processed by the system to extract geometry features therefrom. Data extraction depends on the ability to identify specific geometric elements in the data set. Specific algorithms with optimized pipelines and parameters could be utilized depending on the nature of the input and its estimated accuracy, and could include the following steps:
The geometries could be classified as real elements by assigning them a meaning, discarding, adapting or merging geometries from step 3 above, e.g., extracting the structure façade and rooms using concave and convex hull algorithms, identifying geometries as real objects using neural networks and then refining the polyhedral geometry, etc. In addition, geometry extraction can be performed using publicly-available software algorithm libraries such as Point Cloud Library (PCL), Eigen Math Library, Open Computer Vision library (OpenCV), and others. Examples of features which can be extracted in step 78 include, but are not limited to, edges, lines, planes, points, corners (where lines and/or planes intersect), and other features.
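By way of illustration of the hull-based footprint extraction mentioned above, the following is a self-contained Python sketch of a standard monotone-chain convex hull (a well-known algorithm, substituted here for whichever hull routine an implementation actually uses); the sample points are hypothetical wall samples for a square room.

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2D points; returns hull vertices
    in counter-clockwise order starting from the lowest point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Z component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop each chain's last point (it repeats the other chain's first).
    return lower[:-1] + upper[:-1]

# Wall sample points for a square room, plus one interior point
# (e.g., furniture) that the hull correctly excludes.
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
hull = convex_hull(pts)
```

A concave-hull variant would be needed for non-convex footprints (L-shaped rooms, etc.), as the text's mention of concave hull algorithms implies.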
In step 80, the system reconstructs large-scale structure features, such as walls, ceilings, floors, etc., from the extracted geometry features. This process is described in greater detail below in connection with
In step 94, the system squares angles formed by adjacent walls. This can be accomplished by using a PCL library to identify the major planes of the room, finding the plane centroids, finding the angle between two planes' normal vectors in the X-Z plane, adjusting the normal vectors to the nearest increment of 22.5 degrees, re-computing the plane equations (the dot product of the new normal and the centroid), and eliminating obstructions such as cabinets and furniture.
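The normal-snapping and plane re-computation steps above can be sketched as follows; this is an illustrative Python fragment (function names are assumptions), showing a wall normal a few degrees off-axis being snapped to the nearest 22.5-degree increment and the plane offset recomputed as the dot product of the new normal and the centroid.

```python
import math

def snap_normal_xz(nx, nz, increment_deg=22.5):
    """Snap a wall normal (projected to the X-Z plane) to the nearest
    multiple of increment_deg; returns the adjusted unit normal."""
    angle = math.degrees(math.atan2(nz, nx))
    snapped = round(angle / increment_deg) * increment_deg
    rad = math.radians(snapped)
    return (math.cos(rad), math.sin(rad))

def plane_offset(normal, centroid):
    """Re-compute the plane equation offset d = n . c for the
    snapped normal and the plane's centroid."""
    return sum(n * c for n, c in zip(normal, centroid))

# A wall normal 3 degrees off-axis snaps back onto the axis.
nx, nz = math.cos(math.radians(3.0)), math.sin(math.radians(3.0))
snapped = snap_normal_xz(nx, nz)
d = plane_offset((1.0, 0.0, 0.0), (3.0, 2.0, 1.0))
```

The 22.5-degree increment comes directly from the text; it allows both square corners and the 45-degree wall segments common in bay windows and angled entries.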
In step 96, the system eliminates vertical walls which do not contribute to the room structure, such as walls from other rooms seen through doorways. This can be accomplished by using a PCL library to identify the major planes of the room, identifying possible candidates (two planes that are parallel and exist in the same quadrant when the camera is positioned at the origin), finding the two planes' bounding boxes, and creating a pyramid for each plane. The top point of the pyramid is identified as the camera location, and the base is the plane's bounding box. Additionally, the system could identify closer and farther planes, and could automatically remove the farther plane if its pyramid is completely contained inside the closer plane's pyramid. The pyramid discussed herein can be seen and described in greater detail below in connection with
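The pyramid-containment test above can be illustrated with a simplified two-dimensional stand-in: instead of full 3D pyramids from the camera to each plane's bounding box, the sketch below compares the angular extents that two wall segments subtend at the camera. This simplification, and all names and coordinates, are assumptions for illustration only.

```python
import math

def angular_extent(camera, segment):
    """Angular interval (min, max) subtended at the camera by a 2D
    wall segment given as two (x, z) endpoints."""
    angles = [math.atan2(pz - camera[1], px - camera[0]) for px, pz in segment]
    return (min(angles), max(angles))

def is_occluded(camera, near_wall, far_wall):
    """True if the far wall's extent lies entirely inside the near
    wall's extent as seen from the camera -- the 2D analog of one
    plane's pyramid being contained in the other's."""
    n_lo, n_hi = angular_extent(camera, near_wall)
    f_lo, f_hi = angular_extent(camera, far_wall)
    return n_lo <= f_lo and f_hi <= n_hi

camera = (0.0, 0.0)
near = [(-2.0, 3.0), (2.0, 3.0)]   # close wall of the current room
far = [(-1.0, 6.0), (1.0, 6.0)]    # wall of another room behind it
occluded = is_occluded(camera, near, far)
```

When `occluded` is true, the farther wall is a candidate for removal, since it can only have been observed through an opening (e.g., a doorway) in the nearer wall.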
Referring to both
In step 118, the system assigns a score to each possible floor plan. The score reflects how well the edges of the floor plan line up with high value cells in the grid. Then, in step 120, the system sorts the scores. In step 122, the system identifies the floor plan having the highest score as the most likely to reflect the actual floor plan of the room.
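Steps 118-122 can be sketched as follows. In this illustrative Python fragment (the grid values, cell representation, and function names are assumptions), each candidate floor plan is scored by summing the evidence-grid values under the cells its edges cross, and the highest-scoring candidate is selected.

```python
def score_floor_plan(edge_cells, grid):
    """Score a candidate floor plan by summing the evidence-grid
    values in every cell its edges pass through."""
    return sum(grid[r][c] for r, c in edge_cells)

def best_floor_plan(candidates, grid):
    """Rank the candidates by score and return the highest-scoring
    one, i.e., the most likely actual floor plan of the room."""
    return max(candidates, key=lambda cells: score_floor_plan(cells, grid))

# 3x3 evidence grid: high values mark cells where wall points
# were observed in the scan data.
grid = [
    [9, 9, 9],
    [9, 0, 9],
    [9, 9, 9],
]
# Candidate A hugs the observed walls; candidate B cuts across the room.
plan_a = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]
plan_b = [(0, 0), (1, 1), (2, 2)]
winner = best_floor_plan([plan_a, plan_b], grid)
```

Here `winner` is the wall-hugging candidate, since its edges line up with the high-value cells, matching the criterion stated in step 118.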
The processing steps 100 illustrated in
Referring to
In step 148, the point of intersection between the ray 162 and the point cloud 160 is used by the system as a seed for a region growing plane extraction algorithm, such that the system processes the point cloud data using such an algorithm. The region growing algorithm can be described in two stages. The first stage initializes the region to extract and finds an arbitrary number of new region candidates. The region is initialized with the seed from the point cloud and its nearest neighbor in the point cloud 160. At this stage, the region is two points, and from this information, the system calculates the center and normal for an optimal plane of the region. Then, a set of candidates to be added to the region is initialized. The nearest neighbors to the seed of the region are tested to determine if they are close enough to be good region candidates. The second stage grows the region incrementally by evaluating each region candidate. In order for a region candidate to be added to the region, it must meet the following criteria: (1) the region candidate is within a distance threshold of the region; (2) if the region candidate is added to the region, the new mean squared error of the region must be less than a mean squared error threshold; and (3) if the region candidate is added to the region, it must be within a distance threshold of the new optimal plane calculated for the region. Many of these checks are computationally intensive if there are thousands of points being evaluated. Therefore, in order to optimize this portion so that it can run on mobile devices, the mean squared error and principal component analysis are calculated incrementally as candidates are added to the region. If a region candidate meets the above criteria, then it is added to the region. The candidate added to the region is then used to update the list of region candidates. In this way, the region can continue to grow beyond the immediate neighbors of the seed.
The region continues growing until there are no more candidates or it has reached a maximum region size. If a grown region does not meet a minimum threshold for region size, it is not considered a valid planar region. Thus, as is shown in step 152, a determination is made as to whether there are more walls (planes) to extract. This is illustrated in
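The two-stage region growing described above can be sketched as follows. This illustrative Python fragment makes a strong simplifying assumption: the target plane is treated as near-horizontal, so the region's running mean height stands in for the full optimal plane with incremental PCA and mean-squared-error tracking that the text describes. All names, thresholds, and sample points are assumptions.

```python
def grow_planar_region(points, seed_idx, dist_thresh=0.5, plane_thresh=0.1):
    """Simplified region growing: a candidate joins the region if it
    is within dist_thresh of an existing region point and within
    plane_thresh of the region's running mean height (a stand-in for
    the distance-to-optimal-plane test)."""
    region = {seed_idx}
    mean_z = points[seed_idx][2]
    candidates = set(range(len(points))) - region
    changed = True
    while changed:
        changed = False
        for i in sorted(candidates):
            x, y, z = points[i]
            # Criterion (1): the candidate is close to the region.
            near_region = any(
                abs(x - points[j][0]) + abs(y - points[j][1]) <= dist_thresh
                for j in region
            )
            # Criterion (3), simplified: close to the region's plane.
            if near_region and abs(z - mean_z) <= plane_thresh:
                region.add(i)
                candidates.discard(i)
                # Incremental update, as the text suggests for mobile devices.
                mean_z += (z - mean_z) / len(region)
                changed = True
    return region

# A row of floor points at z ~ 0, plus one off-plane outlier at z = 1.
points = [(0.0, 0.0, 0.0), (0.4, 0.0, 0.02), (0.8, 0.0, -0.01),
          (1.2, 0.0, 0.03), (1.6, 0.0, 1.0)]
region = grow_planar_region(points, seed_idx=0)
```

The region grows outward from the seed through its neighbors and correctly excludes the off-plane outlier, mirroring how growth continues beyond the seed's immediate neighbors until no valid candidates remain.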
In steps 156 and 158, the generated floor contour and/or polyhedron can be displayed to the user, and optionally downloaded or exported by the user for use in another program (such as a computer-aided design (CAD) program), if desired. As illustrated in
It is noted that, in addition to identifying the basic size and shape of the structure, other relevant data can be extracted by the system as well. This step is illustrated in step 82 of
Once the 3D models and all associated data have been extracted, this information can be made available through a database maintained by the system. The information can also be requested by specifying an individual property via an address, a geocode, etc. The information can also be aggregated and reports generated on multiple structures. For example, the system could be queried to display a list of all properties in postal code 84097 that have more than 4,000 square feet of living space and a pool. Properties in the database can be viewed online or on any mobile or desktop device.
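The aggregation query described above can be sketched as a simple filter. In this illustrative Python fragment, the field names and records are assumptions (the disclosure does not specify a schema); the criteria mirror the example in the text.

```python
def query_properties(db, postal_code, min_sqft, needs_pool):
    """Return properties matching a postal code, a minimum living
    area, and (optionally) the presence of a pool."""
    return [
        p for p in db
        if p["postal_code"] == postal_code
        and p["living_sqft"] > min_sqft
        and (p["has_pool"] or not needs_pool)
    ]

# Hypothetical records extracted from generated 3D models.
db = [
    {"address": "A", "postal_code": "84097", "living_sqft": 4500, "has_pool": True},
    {"address": "B", "postal_code": "84097", "living_sqft": 3200, "has_pool": True},
    {"address": "C", "postal_code": "84097", "living_sqft": 5000, "has_pool": False},
]
matches = query_properties(db, "84097", 4000, True)
```

Only property "A" satisfies all three criteria from the text's example (postal code 84097, more than 4,000 square feet of living space, and a pool).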
In step 196, the system optionally maps the imagery to a data model. For example, it is possible to extract only the portions of the imagery that are roof and wall faces, or to attempt to segment the images using roof/wall material detection algorithms or known “grabcut” algorithms. These imagery clips could be sent through neural networks/algorithms to identify possible areas of wind and hail damage, returning a list of coordinates tied to the imagery where damage is suspected. The user can then be walked through a wizard-like process where they see the possible damage locations highlighted and review, edit and confirm the results. It is noted that step 196 could be carried out by projecting the property in three dimensions in the photo. For each roof plane in the photo, the system could identify suspected damaged areas using segmentation and neural networks to classify suspected areas as corresponding to actual damage, as well as identifying the importance of the damage.
In step 198, the system processes the photo(s) to identify areas of roof damage. Based on the number of shingles damaged and pre-defined threshold parameters, the system can determine whether a particular face needs to be repaired or replaced. As shown in step 200, the areas requiring repair can be displayed to the user, if desired. The damaged area information could also be utilized to obtain full estimate details and produce a line item and labor list detailing the costs to do the repairs, using another software package such as the PROPERTY INSIGHT package by XACTWARE SOLUTIONS, INC.
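The repair-or-replace decision above can be sketched as a simple threshold test. In this illustrative Python fragment, the 30% threshold and the function name are assumptions; the disclosure only says the decision is based on the number of damaged shingles against pre-defined threshold parameters.

```python
def repair_or_replace(damaged_shingles, face_shingles, replace_threshold=0.3):
    """Decide, per roof face, whether to repair or replace based on
    the fraction of damaged shingles versus a pre-defined threshold
    (the 0.3 default is an assumed, illustrative value)."""
    ratio = damaged_shingles / face_shingles
    return "replace" if ratio >= replace_threshold else "repair"

light_damage = repair_or_replace(5, 100)    # 5% damaged -> repair
heavy_damage = repair_or_replace(40, 100)   # 40% damaged -> replace
```

In practice, the chosen decision would feed the downstream estimating step, where line items and labor are priced per face.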
In step 216, the system calculates main guidelines. This can be accomplished by calculating the most important axis from the point cloud data and rotating the model so that it is aligned with Cartesian lines, splitting raw lines and grouping them over horizontal and vertical lines, identifying line subgroups by adjacency, creating a guideline per subgroup, removing outliers from raw lines using the guidelines, and calculating main guidelines using the filtered lines. This step is illustrated in screenshot 254 in
In step 222, the system identifies individual rooms, as shown in screenshots 260-262 in
Referring to
In step 1004, lines are extended and trimmed to create intersections with each other, as illustrated in
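The extend-and-trim operation in step 1004 reduces to computing intersections of the infinite lines through each wall segment. The following illustrative Python fragment (names and coordinates are assumptions) intersects two lines given in point-direction form; the resulting point is where the trimmed segments would meet at a corner.

```python
def intersect(l1, l2):
    """Intersection of two infinite 2D lines, each given as
    (point, direction); returns None if the lines are parallel."""
    (p1, d1), (p2, d2) = l1, l2
    # 2D cross product of the direction vectors; zero means parallel.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None
    # Solve p1 + t*d1 = p2 + s*d2 for t by crossing both sides with d2.
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# A horizontal and a vertical wall line meet at a corner once extended.
horizontal = ((0.0, 2.0), (1.0, 0.0))
vertical = ((5.0, 0.0), (0.0, 1.0))
corner = intersect(horizontal, vertical)
```

After each pairwise corner is computed this way, each segment's endpoints are trimmed (or extended) to the nearest corners, closing the floor plan contour.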
It is noted that the system of the present disclosure could compensate for the presence of objects in a space undergoing floorplan estimation. For example, if particular objects (such as appliances, etc.) have known measurements, then the system can compensate for the presence of such objects, and/or include estimates of such objects into floorplans generated by the system. Such measurements could be obtained by looking up dimensions of objects (e.g., refrigerators, stoves, washing machines, toilets, etc.) in a database using detected makes/brands of such objects. Further, the system could utilize measurements of other objects, such as countertop measurements, in order to estimate measurements of room features, such as wall measurements. Still further, the floorplans estimated by the present system could be utilized to estimate other parameters of a particular structure, such as the type of construction of the structure. Additionally, it is contemplated that previous (stored) information relating to building/construction materials could be utilized with the system in order to predict materials in a particular structure undergoing estimation by the system. Finally, it is noted that the system could generate multiple floorplans of a particular structure, which floorplans could then be “assembled” to form a model of the entire structure (e.g., assembling floorplans of each room in the structure until an entire model of the structure is generated).
Having thus described the system and method in detail, it is to be understood that the foregoing description is not intended to limit the spirit or scope thereof. It will be understood that the embodiments of the present disclosure described herein are merely exemplary and that a person skilled in the art may make any variations and modification without departing from the spirit and scope of the disclosure. All such variations and modifications, including those discussed above, are intended to be included within the scope of the disclosure.
This application is a continuation of, and claims the benefit of priority to, U.S. patent application Ser. No. 15/374,695 filed on Dec. 9, 2016, now U.S. Pat. No. 10,387,582, issued on Aug. 20, 2019, which claims priority to U.S. Provisional Application No. 62/265,359 filed on Dec. 9, 2015, the contents of each of which are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
5701403 | Watanabe et al. | Dec 1997 | A |
6446030 | Hoffman et al. | Sep 2002 | B1 |
6448964 | Isaacs et al. | Sep 2002 | B1 |
8533063 | Erickson | Sep 2013 | B2 |
8843304 | Dupont et al. | Sep 2014 | B1 |
8868375 | Christian | Oct 2014 | B1 |
8983806 | Labrie et al. | Mar 2015 | B2 |
9158869 | Labrie et al. | Oct 2015 | B2 |
9501700 | Loveland et al. | Nov 2016 | B2 |
9679227 | Taylor et al. | Jun 2017 | B2 |
10127670 | Lewis et al. | Nov 2018 | B2 |
10181079 | Labrie et al. | Jan 2019 | B2 |
10289760 | Oakes, III et al. | May 2019 | B1 |
10387582 | Lewis et al. | Aug 2019 | B2 |
10445438 | Motonaga et al. | Oct 2019 | B1 |
10529028 | Davis et al. | Jan 2020 | B1 |
11314905 | Childs et al. | Apr 2022 | B2 |
20020116254 | Stein et al. | Aug 2002 | A1 |
20030009315 | Thomas et al. | Jan 2003 | A1 |
20070080961 | Inzinga et al. | Apr 2007 | A1 |
20070276626 | Bruffey | Nov 2007 | A1 |
20090179895 | Zhu et al. | Jul 2009 | A1 |
20100110074 | Pershing | May 2010 | A1 |
20100114537 | Pershing | May 2010 | A1 |
20100296693 | Thornberry et al. | Nov 2010 | A1 |
20110056286 | Jansen | Mar 2011 | A1 |
20110157213 | Takeyama et al. | Jun 2011 | A1 |
20110191738 | Walker et al. | Aug 2011 | A1 |
20120026322 | Malka et al. | Feb 2012 | A1 |
20120179431 | Labrie et al. | Jul 2012 | A1 |
20120253725 | Malka et al. | Oct 2012 | A1 |
20120253751 | Malka et al. | Oct 2012 | A1 |
20130201167 | Oh et al. | Aug 2013 | A1 |
20130206177 | Burlutskiy | Aug 2013 | A1 |
20130226451 | O'Neill et al. | Aug 2013 | A1 |
20130262029 | Pershing | Oct 2013 | A1 |
20130267260 | Chao et al. | Oct 2013 | A1 |
20130314688 | Likholyot | Nov 2013 | A1 |
20140043436 | Bell | Feb 2014 | A1 |
20140195275 | Pershing et al. | Jul 2014 | A1 |
20140301633 | Furukawa et al. | Oct 2014 | A1 |
20140320661 | Sankar et al. | Oct 2014 | A1 |
20150029182 | Sun et al. | Jan 2015 | A1 |
20150073864 | Labrie et al. | Mar 2015 | A1 |
20150093047 | Battcher et al. | Apr 2015 | A1 |
20150116509 | Bidder et al. | Apr 2015 | A1 |
20150153172 | Starns et al. | Jun 2015 | A1 |
20150193971 | Dryanovski et al. | Jul 2015 | A1 |
20150213558 | Nelson | Jul 2015 | A1 |
20150227645 | Childs et al. | Aug 2015 | A1 |
20150269438 | Samarasekera et al. | Sep 2015 | A1 |
20150302529 | Jagannathan | Oct 2015 | A1 |
20160098802 | Bruffey et al. | Apr 2016 | A1 |
20160110480 | Randolph | Apr 2016 | A1 |
20160246767 | Makadia et al. | Aug 2016 | A1 |
20160282107 | Roland et al. | Sep 2016 | A1 |
20170124713 | Jurgenson et al. | May 2017 | A1 |
20170132711 | Bruffey et al. | May 2017 | A1 |
20170132835 | Halliday et al. | May 2017 | A1 |
20170169459 | Bruffey et al. | Jun 2017 | A1 |
20170193297 | Michini et al. | Jul 2017 | A1 |
20170206648 | Marra et al. | Jul 2017 | A1 |
20170221152 | Nelson et al. | Aug 2017 | A1 |
20170316115 | Lewis et al. | Nov 2017 | A1 |
20170330207 | Labrie et al. | Nov 2017 | A1 |
20170345069 | Labrie et al. | Nov 2017 | A1 |
20180053329 | Roberts et al. | Feb 2018 | A1 |
20180067593 | Tiwari et al. | Mar 2018 | A1 |
20180089833 | Lewis et al. | Mar 2018 | A1 |
20180286098 | Lorenzo | Oct 2018 | A1 |
20180330528 | Loveland et al. | Nov 2018 | A1 |
20180357819 | Oprea | Dec 2018 | A1 |
20180373931 | Li | Dec 2018 | A1 |
20190114717 | Labrie et al. | Apr 2019 | A1 |
20190147247 | Harris et al. | May 2019 | A1 |
20190221040 | Shantharam et al. | Jul 2019 | A1 |
20190340692 | Labrie et al. | Nov 2019 | A1 |
20200100066 | Lewis et al. | Mar 2020 | A1 |
20200143481 | Brown et al. | May 2020 | A1 |
20210076162 | Wang et al. | Mar 2021 | A1 |
20210103687 | Harris et al. | Apr 2021 | A1 |
20210350038 | Jenson et al. | Nov 2021 | A1 |
20220309748 | Lewis et al. | Sep 2022 | A1 |
Number | Date | Country |
---|---|---|
2014151122 | Sep 2014 | WO |
2016154306 | Sep 2016 | WO |
2017100658 | Jun 2017 | WO |
Entry |
---|
Office Action dated Sep. 22, 2020, issued in connection with U.S. Appl. No. 16/580,741 (14 pages). |
Office Action dated Feb. 2, 2021, issued in connection with U.S. Appl. No. 14/620,004 (28 pages). |
Communication Pursuant to Article 94(3) EPC issued by the European Patent Office dated Feb. 18, 2021, issued in connection with European Patent Application No. 16873975.3 (5 pages). |
International Search Report of the International Searching Authority dated Dec. 12, 2019, issued in connection with International Application No. PCT/US2019/52670 (3 pages). |
Written Opinion of the International Searching Authority dated Dec. 12, 2019, issued in connection with International Application No. PCT/US2019/52670 (5 pages). |
Office Action dated Feb. 5, 2020, issued in connection with U.S. Appl. No. 16/580,741 (15 pages). |
International Search Report of the International Searching Authority dated May 14, 2015, issued in connection with International Application No. PCT/US15/015491 (3 pages). |
Written Opinion of the International Searching Authority dated May 14, 2015, issued in connection with International Application No. PCT/US15/015491 (9 pages). |
Fung, et al., “A Mobile Assisted Localization Scheme for Augmented Reality,” Department of Computer Science and Engineering, The Chinese University of Hong Kong, 2012 (76 pages). |
Sankar, et al., “Capturing Indoor Scenes With Smartphones,” UIST'12, Oct. 7-10, 2012, Cambridge, Massachusetts (9 pages). |
Office Action dated Aug. 8, 2017, issued in connection with U.S. Appl. No. 14/620,004 (26 pages). |
Office Action dated Aug. 28, 2018, issued in connection with U.S. Appl. No. 14/620,004 (33 pages). |
Farin, et al., “Floor-Plan Reconstruction from Panoramic Images,” Sep. 23-28, 2007, MM '07, ACM (4 pages). |
Office Action dated Mar. 29, 2019, issued in connection with U.S. Appl. No. 14/620,004 (22 pages). |
Office Action dated Dec. 10, 2019, issued in connection with U.S. Appl. No. 14/620,004 (27 pages). |
Zhang, et al., “Walk&Sketch: Create Floor Plans with an RGB-D Camera,” Sep. 5-8, 2012, UbiComp '12, ACM (10 pages). |
Office Action dated Jul. 8, 2020, issued in connection with U.S. Appl. No. 14/620,004 (27 pages). |
Examination Report No. 1 dated Mar. 30, 2021, issued by the Australian Patent Office in connection with Australian Patent Application No. 2016366537 (6 pages). |
Office Action dated Apr. 21, 2021, issued in connection with U.S. Appl. No. 16/580,741 (15 pages). |
Office Action dated Dec. 27, 2021, issued in connection with U.S. Appl. No. 16/580,741 (13 pages). |
Notice of Allowance dated Dec. 16, 2021, issued in connection with U.S. Appl. No. 14/620,004 (12 pages). |
Invitation to Pay Additional Fees issued by the International Searching Authority dated Feb. 2, 2022, issued in connection with International Application No. PCT/US21/63469 (2 pages). |
Extended European Search Report dated Feb. 18, 2022, issued in connection with European Patent Application No. 19866788.3 (9 pages). |
Notice of Allowance dated Aug. 19, 2021, issued in connection with U.S. Appl. No. 14/620,004 (11 pages). |
Examiner-Initiated Interview Summary dated Aug. 10, 2021, issued in connection with U.S. Appl. No. 14/620,004 (1 page). |
International Search Report of the International Searching Authority dated Feb. 11, 2019, issued in connection with International Application No. PCT/US18/60762 (3 pages). |
Written Opinion of the International Searching Authority dated Feb. 11, 2019, issued in connection with International Application No. PCT/US18/60762 (7 pages). |
Office Action dated Apr. 16, 2020, issued in connection with U.S. Appl. No. 16/189,512 (10 pages). |
U.S. Appl. No. 62/512,989, filed May 31, 2017 entitled, “Systems and Methods for Rapidly Developing Annotated Computer Models of Structures” (47 pages). |
Office Action dated Dec. 14, 2020, issued in connection with U.S. Appl. No. 16/189,512 (10 pages). |
Extended European Search Report dated Jul. 1, 2021, issued by the European Patent Office in connection with European Application No. 18876121.7 (8 pages). |
Office Action dated Jul. 20, 2021, issued in connection with U.S. Appl. No. 16/189,512 (11 pages). |
International Search Report of the International Searching Authority dated Mar. 27, 2017, issued in connection with International Application No. PCT/US2016/65947 (3 pages). |
Written Opinion of the International Searching Authority dated Mar. 27, 2017, issued in connection with International Application No. PCT/US2016/65947 (7 pages). |
Office Action dated Sep. 26, 2018, issued in connection with U.S. Appl. No. 15/374,695 (33 pages). |
Notice of Allowance dated May 13, 2019, issued in connection with U.S. Appl. No. 15/374,695 (7 pages). |
Extended European Search Report dated Jun. 11, 2019, issued in connection with European Patent Application No. 16873975.3 (8 pages). |
Communication Pursuant to Article 94(3) EPC issued by the European Patent Office dated Apr. 22, 2020, issued in connection with European Patent Application No. 16873975.3 (6 pages). |
Dino, et al., “Image-Based Construction of Building Energy Models Using Computer Vision,” Automation in Construction (2020) (15 pages). |
Fathi, et al., “Automated as-Built 3D Reconstruction of Civil Infrastructure Using Computer Vision: Achievements, Opportunities, and Challenges,” Advanced Engineering Informatics (2015) (13 pages). |
International Search Report of the International Searching Authority dated Jul. 25, 2022, issued in connection with International Application No. PCT/US22/22024 (3 pages). |
Written Opinion of the International Searching Authority dated Jul. 25, 2022, issued in connection with International Application No. PCT/US22/22024 (5 pages). |
Office Action dated Sep. 2, 2022, issued in connection with U.S. Appl. No. 16/580,741 (13 pages). |
Notice of Allowance dated Sep. 6, 2022, issued in connection with U.S. Appl. No. 16/189,512 (7 pages). |
International Search Report of the International Searching Authority dated Apr. 8, 2022, issued in connection with International Application No. PCT/US21/63469 (5 pages). |
Written Opinion of the International Searching Authority dated Apr. 8, 2022, issued in connection with International Application No. PCT/US21/63469 (6 pages). |
Notice of Allowance dated Apr. 8, 2022, issued in connection with U.S. Appl. No. 16/189,512 (8 pages). |
Notice of Allowance dated Jun. 21, 2022, issued in connection with U.S. Appl. No. 16/189,512 (7 pages). |
International Search Report of the International Searching Authority dated Nov. 18, 2022, issued in connection with International Application No. PCT/US22/030691 (6 pages). |
Written Opinion of the International Searching Authority dated Nov. 18, 2022, issued in connection with International Application No. PCT/US22/030691 (11 pages). |
Notice of Allowance dated Dec. 9, 2022, issued in connection with U.S. Appl. No. 17/705,130 (10 pages). |
Examination Report No. 1 dated Dec. 15, 2022, issued by the Australian Patent Office in connection with Australian Patent Application No. 2021282413 (3 pages). |
Communication Pursuant to Article 94(3) EPC dated Jan. 31, 2023, issued in connection with European Patent Application No. 16873975.3 (8 pages). |
Notice of Allowance dated Feb. 14, 2023, issued in connection with U.S. Appl. No. 17/705,130 (5 pages). |
Communication Pursuant to Article 94(3) EPC issued by the European Patent Office dated Apr. 28, 2023, issued in connection with European Patent Application No. 19866788.3 (5 pages). |
Office Action dated Jun. 30, 2023, issued in connection with U.S. Appl. No. 17/729,613 (49 pages). |
Number | Date | Country | |
---|---|---|---|
20190377837 A1 | Dec 2019 | US |
Number | Date | Country | |
---|---|---|---|
62265359 | Dec 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15374695 | Dec 2016 | US |
Child | 16545607 | US |