The subject matter of this application relates generally to methods and apparatuses, including computer program products, for three-dimensional (3D) object capture and object reconstruction using edge cloud computing resources.
With the start of 5G deployments, wireless/mobile computing systems and applications can now take advantage of ‘edge’ cloud computing and offload computationally intensive processing to the cloud. This has significant benefits in terms of being able to run computer vision applications that previously would have been too slow or would have consumed too much battery power, due to the inherent limitations of mobile devices. This is especially true in the case of 3D computer vision processing applications, such as the 3D scanning technology described in:
the real-time object recognition and modeling techniques as described in U.S. Pat. No. 9,715,761, titled “Real-Time 3D Computer Vision Processing Engine for Object Recognition, Reconstruction, and Analysis;”
the dynamic 3D modeling techniques as described in U.S. patent application Ser. No. 14/849,172, titled “Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction;”
the 3D model generation techniques as described in U.S. Pat. No. 9,710,960, titled “Closed-Form 3D Model Generation of Non-Rigid Complex Objects from Incomplete and Noisy Scans;”
the 3D photogrammetry techniques described in U.S. Pat. No. 10,192,347, titled “3D Photogrammetry;”
the sparse SLAM techniques described in U.S. patent application Ser. No. 15/638,278, titled “Sparse Simultaneous Localization and Mapping with Unified Tracking;”
the 2D and 3D video compression techniques described in U.S. Pat. No. 10,380,762, titled “Real-Time Remote Collaboration and Virtual Presence using Simultaneous Localization and Mapping to Construct a 3D Model and Update a Scene Based on Sparse Data;”
the 3D geometry reconstruction techniques described in U.S. patent application Ser. No. 16/118,894, titled “Enhancing Depth Sensor-Based 3D Geometry Reconstruction with Photogrammetry;”
the 3D tracking techniques described in U.S. patent application Ser. No. 16/123,256, titled “Combining Sparse Two-Dimensional (2D) and Dense Three-Dimensional (3D) Tracking;”
the 4D hologram generation and control techniques described in U.S. patent application Ser. No. 16/240,404, titled “4D Hologram: Real-Time Remote Avatar Creation and Animation Control;” and
the object scanning techniques described in U.S. patent application Ser. No. 16/421,822, titled “Keyframe-Based Object Scanning.”
Each of the above-referenced patents and patent applications is incorporated by reference herein in its entirety.
These types of 3D computer vision processing applications require much more processing power than typical 2D vision applications, and existing non-5G wireless networks and the corresponding mobile devices cannot provide the bandwidth and processing power necessary to perform these applications efficiently and quickly while minimizing or avoiding excessive battery consumption and heat generation on the mobile devices.
Therefore, the technology described herein advantageously utilizes edge cloud computing available in new 5G-based wireless networks for more reliable and robust implementation of real-time 3D computer vision capture technology using mobile devices—while also reducing excessive heat generation and battery consumption of said devices.
The invention, in one aspect, features a system for three-dimensional (3D) object capture and object reconstruction using edge cloud computing resources. The system comprises a sensor device, coupled to a mobile computing device, that captures (i) one or more depth maps of a physical object in a scene, the depth maps including related pose information of the physical object, and (ii) one or more color images of the physical object in the scene. The system comprises an edge cloud computing device, coupled to the mobile computing device via a 5G network connection, that receives the one or more depth maps and the one or more color images from the mobile computing device. The edge cloud computing device generates a new 3D model of the physical object in the scene based on the received one or more depth maps and one or more color images, when a 3D model of the physical object has not yet been generated. The edge cloud computing device updates an existing 3D model of the physical object in the scene based on the received one or more depth maps and one or more color images, when a 3D model of the physical object has previously been generated. The edge cloud computing device transmits the new 3D model or the updated 3D model to the mobile computing device.
The invention, in another aspect, features a computerized method of three-dimensional (3D) object capture and object reconstruction using edge cloud computing resources. A sensor device, coupled to a mobile computing device, captures (i) one or more depth maps of a physical object in a scene, the depth maps including related pose information of the physical object, and (ii) one or more color images of the physical object in the scene. An edge cloud computing device, coupled to the mobile computing device via a 5G network connection, receives the one or more depth maps and the one or more color images from the mobile computing device. The edge cloud computing device generates a new 3D model of the physical object in the scene based on the received one or more depth maps and one or more color images, when a 3D model of the physical object has not yet been generated. The edge cloud computing device updates an existing 3D model of the physical object in the scene based on the received one or more depth maps and one or more color images, when a 3D model of the physical object has previously been generated. The edge cloud computing device transmits the new 3D model or the updated 3D model to the mobile computing device.
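By way of a non-limiting illustration, the following Python sketch shows one possible organization of the capture-and-reconstruction exchange described in the aspects above. The class and method names (Frame, Model3D, EdgeReconstructionService, handle_frame) are hypothetical, and the sensor, 5G networking, and reconstruction layers are reduced to placeholders; this is a sketch of the data flow, not the claimed implementation.

```python
# Illustrative sketch only: hypothetical names, reconstruction details omitted.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class Frame:
    depth_map: np.ndarray      # HxW depth values captured by the sensor device
    color_image: np.ndarray    # HxWx3 RGB image of the physical object
    pose: np.ndarray           # 4x4 pose estimate associated with the depth map


@dataclass
class Model3D:
    points: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))


class EdgeReconstructionService:
    """Runs on the edge cloud; keeps the working 3D model between frames."""

    def __init__(self):
        self.model = None

    def handle_frame(self, frame: Frame) -> Model3D:
        if self.model is None:
            # No model yet: generate a new 3D model from the first frame.
            self.model = self._generate_model(frame)
        else:
            # Model exists: update it with the newly received frame.
            self.model = self._update_model(self.model, frame)
        return self.model  # transmitted back to the mobile device over the 5G link

    def _generate_model(self, frame: Frame) -> Model3D:
        # Placeholder: a real system would back-project the depth map here.
        return Model3D()

    def _update_model(self, model: Model3D, frame: Frame) -> Model3D:
        # Placeholder: a real system would track the pose and fuse new geometry.
        return model


if __name__ == "__main__":
    service = EdgeReconstructionService()
    frame = Frame(np.zeros((480, 640)), np.zeros((480, 640, 3)), np.eye(4))
    model = service.handle_frame(frame)   # first frame: new model is generated
    model = service.handle_frame(frame)   # subsequent frames: existing model updated
```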
Any of the above aspects can include one or more of the following features. In some embodiments, the mobile computing device crops the captured depth maps and color images prior to transmitting the captured depth maps and color images to the edge cloud computing device. In some embodiments, cropping the captured depth maps and color images comprises removing a portion of the captured depth maps and color images that corresponds to a background of the scene. In some embodiments, the edge cloud computing device performs the generating step, the updating step, and the transmitting step within less than 10 milliseconds after receiving the one or more depth maps and the one or more color images from the mobile computing device.
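As a non-limiting illustration of the cropping step, the sketch below removes the background of an RGB-D frame before upload by keeping only pixels whose depth falls below a threshold. The depth threshold and margin values are assumptions chosen for illustration, not parameters of the claimed method.

```python
# Illustrative sketch: crop an RGB-D frame to the foreground object before upload.
# The depth-threshold heuristic and the fixed margin are assumptions.
import numpy as np


def crop_to_object(depth_map, color_image, max_depth_m=1.5, margin_px=16):
    """Keep only the image region whose depth is closer than max_depth_m."""
    mask = (depth_map > 0) & (depth_map < max_depth_m)   # candidate foreground pixels
    if not mask.any():
        return depth_map, color_image                    # nothing to crop
    rows, cols = np.where(mask)
    top = max(rows.min() - margin_px, 0)
    bottom = min(rows.max() + margin_px, depth_map.shape[0] - 1)
    left = max(cols.min() - margin_px, 0)
    right = min(cols.max() + margin_px, depth_map.shape[1] - 1)
    return (depth_map[top:bottom + 1, left:right + 1],
            color_image[top:bottom + 1, left:right + 1])


if __name__ == "__main__":
    depth = np.full((480, 640), 3.0)        # background roughly 3 m away
    depth[200:280, 300:380] = 0.8           # object roughly 0.8 m away
    color = np.zeros((480, 640, 3), dtype=np.uint8)
    d, c = crop_to_object(depth, color)
    print(d.shape, c.shape)                 # much smaller than the full 480x640 frame
```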
In some embodiments, updating an existing 3D model comprises tracking the physical object in the scene based upon the pose information received from the mobile computing device. In some embodiments, transmitting the updated 3D model to the mobile computing device comprises providing, to the mobile computing device, tracking information associated with the physical object in the scene based upon the tracking step. In some embodiments, the physical object is a non-rigid object.
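The following sketch illustrates, under simplifying assumptions, how pose information received from the mobile device could be used when updating an existing model: new depth points are back-projected, transformed by the reported pose into the model frame, and appended to the model, and a small tracking summary is returned. The function names and the contents of the tracking summary are hypothetical.

```python
# Illustrative sketch: fuse a new depth frame into the model using the pose
# reported by the mobile device. Names and the tracking summary are assumptions.
import numpy as np


def backproject(depth_map, fx, fy, cx, cy):
    """Turn a depth map into an Nx3 point cloud in camera coordinates."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]


def fuse_frame(model_points, depth_map, pose, intrinsics):
    """Transform new points by the received pose and append them to the model."""
    fx, fy, cx, cy = intrinsics
    pts = backproject(depth_map, fx, fy, cx, cy)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    world_pts = (pose @ pts_h.T).T[:, :3]              # camera frame -> model frame
    updated = np.vstack([model_points, world_pts])
    tracking_info = {"pose": pose, "points_added": len(world_pts)}
    return updated, tracking_info


if __name__ == "__main__":
    depth = np.full((4, 4), 1.0)                       # toy 4x4 depth map, 1 m away
    model, info = fuse_frame(np.empty((0, 3)), depth, np.eye(4),
                             (525.0, 525.0, 2.0, 2.0))
    print(model.shape, info["points_added"])           # (16, 3) 16
```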
In some embodiments, the edge cloud computing device performs one or more post-processing functions on the updated 3D model. In some embodiments, the one or more post-processing functions comprise a bundle adjustment process, a de-noising process, a mesh refinement process, a texture alignment process, a shadow removal process, or a blending process. In some embodiments, the edge cloud computing device generates a final 3D model from the updated 3D model after performing the one or more post-processing functions. In some embodiments, the edge cloud computing device transmits the final 3D model to the mobile computing device and/or to a cloud-based server computing device.
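The post-processing functions named above could be organized as a simple sequential pipeline on the edge cloud computing device, as in the sketch below. Each stage is a placeholder standing in for the corresponding algorithm (bundle adjustment, de-noising, and so on) rather than an implementation of it.

```python
# Illustrative sketch: chain the named post-processing stages into one pipeline.
# Every stage is a placeholder; the real algorithms are not reproduced here.
def bundle_adjustment(model): return model
def denoise(model): return model
def refine_mesh(model): return model
def align_texture(model): return model
def remove_shadows(model): return model
def blend_textures(model): return model

POST_PROCESSING_PIPELINE = [
    bundle_adjustment,
    denoise,
    refine_mesh,
    align_texture,
    remove_shadows,
    blend_textures,
]


def finalize_model(updated_model):
    """Run every post-processing stage in order, producing the final 3D model."""
    model = updated_model
    for stage in POST_PROCESSING_PIPELINE:
        model = stage(model)
    return model
```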
In some embodiments, a latency of the 5G network connection between the mobile computing device and the edge cloud computing device is less than 5 milliseconds.
Other aspects and advantages of the technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating the principles of the technology by way of example only.
The advantages of the technology described above, together with further advantages, may be better understood by referring to the following description taken in conjunction with the accompanying drawings. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the technology.
Specifically, using the edge cloud 106 instead of the cloud computing environment 108 has the following benefits for real-time 3D object capture and object reconstruction technology:
1. Object tracking is faster. Because the edge cloud 106 provides a much more powerful computing resource (including, in some embodiments, a dedicated or shared GPU) than the processor available in the mobile device 102, 3D object tracking applications provided by the mobile device are faster and more robust. The edge cloud 106 can leverage its processing power to track at higher frame rates and to use more robust 2D and 3D features, resulting in more accurate pose information for 3D reconstruction.
2. 3D model quality is better. The edge cloud 106 can utilize denser point clouds and higher resolution RGB images, the computation of which requires more processing power than is available on current mobile devices. Due to the processing strength and speed of the edge cloud, the time required to refine a 3D mesh and texture a 3D object is also reduced.
3. The edge cloud 106 enables the utilization of, and improves the performance of, extremely processing-intensive techniques such as dynamic fusion to scan flexible objects (e.g., people) in real time. Such processing is not currently possible on typical mobile devices.
Therefore, the availability of the edge cloud 106 allows for:
1. Processing of more object types, such as smaller and larger objects and flexible objects, as well as scene reconstruction.
2. Easier use of 3D object capture and tracking applications because tracking is more robust, and applications can handle faster movements of the camera and of the objects in a scene.
3. Higher quality 3D models, with greater details and photorealistic textures.
Generally, all of the dynamic capture functions can run natively on the mobile device 102 without any cloud-based processing, in which case scanning and processing takes about one minute. However, due to the processing power, heat, and battery consumption constraints of most mobile devices, many of these dynamic capture functions run at lower resolutions and frame rates. As a result, the overall quality and reliability of the functions is limited.
One option is to use generic cloud-based processing (i.e., by connecting the mobile device 102 to a cloud computing environment 108). However, this is typically not feasible because of the more-than-100-millisecond latency (in one direction) from the device 102 to the cloud computing environment 108 in typical wireless networks: when images are captured by the mobile device 102, it takes too long for the images to be processed and the resulting 3D model to be sent back to the mobile device 102. Because dynamic capture generally requires real-time interaction, the desired framework must have minimal latency (i.e., less than a few milliseconds). Utilization of the edge cloud 106 overcomes this technical drawback.
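A back-of-the-envelope check makes the latency argument concrete: at 30 frames per second the per-frame budget is roughly 33 milliseconds, so a round trip over a link with more than 100 milliseconds of one-way latency cannot keep up, whereas a roughly 5-millisecond edge-cloud link leaves room for the processing itself. The frame rate and the 10-millisecond processing figure below are illustrative assumptions drawn from the embodiments described above.

```python
# Back-of-the-envelope latency check (illustrative numbers only).
def round_trip_fits(one_way_latency_ms, processing_ms, fps=30.0):
    """Return True if a full send-process-return cycle fits in one frame period."""
    frame_budget_ms = 1000.0 / fps          # ~33.3 ms at 30 fps
    total_ms = 2 * one_way_latency_ms + processing_ms
    return total_ms <= frame_budget_ms


print(round_trip_fits(100.0, 10.0))   # distant cloud (>100 ms one way): False
print(round_trip_fits(5.0, 10.0))     # edge cloud (~5 ms one way):      True
```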
As shown in
After the scanning process, the rest of the post-processing steps (shown in
In doing so, it is estimated that the overall scan time is reduced significantly (from one minute to a few seconds), battery consumption of the mobile device 102 is reduced by 80% or more, thermal heating problems on the mobile device 102 are eliminated, and 1080p or higher RGB textures are added, making the entire process more reliable and easier to use.
In view of the above, the methods and systems described herein provide certain technological advantages, including but not limited to:
1) Leveraging 5G edge cloud architecture for real-time 3D capture;
2) Real-time feedback to a mobile device user of the 3D scan progress and quality;
3) Real-time tracking of objects and people that are being scanned; and
4) Reduction in the amount of bandwidth used between the mobile device 102 and edge cloud 106 by ‘segmenting’ the depth and RGB images intelligently based on the object pose.
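With respect to item 4 above, a rough estimate of uncompressed stream sizes illustrates why pose-based segmentation of the depth and RGB images reduces bandwidth. The resolutions, crop sizes, and frame rate below are assumptions for illustration only, not measured values.

```python
# Rough, illustrative bandwidth estimate; frame sizes and crop ratio are assumptions.
def stream_megabits_per_s(width, height, fps, bytes_per_pixel):
    """Uncompressed stream rate in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6


full_rgb = stream_megabits_per_s(1920, 1080, 30, 3)       # uncompressed 1080p RGB
full_depth = stream_megabits_per_s(640, 480, 30, 2)       # 16-bit depth map
cropped_rgb = stream_megabits_per_s(640, 360, 30, 3)      # object-only RGB crop
cropped_depth = stream_megabits_per_s(256, 192, 30, 2)    # object-only depth crop

print(f"full frames:    {full_rgb + full_depth:.0f} Mbit/s")
print(f"cropped frames: {cropped_rgb + cropped_depth:.0f} Mbit/s")
```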
The above-described techniques can be implemented in digital and/or analog electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The implementation can be as a computer program product, i.e., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computers. A computer program can be written in any form of computer or programming language, including source code, compiled code, interpreted code and/or machine code, and the computer program can be deployed in any form, including as a stand-alone program or as a subroutine, element, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one or more sites.
Method steps can be performed by one or more specialized processors executing a computer program to perform functions by operating on input data and/or generating output data. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an FPAA (field-programmable analog array), a CPLD (complex programmable logic device), a PSoC (Programmable System-on-Chip), an ASIP (application-specific instruction-set processor), or an ASIC (application-specific integrated circuit), or the like. Subroutines can refer to portions of the stored computer program and/or the processor, and/or the special circuitry that implement one or more functions.
Processors suitable for the execution of a computer program include, by way of example, special purpose microprocessors. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and/or data. Memory devices, such as a cache, can be used to temporarily store data. Memory devices can also be used for long-term data storage. Generally, a computer also includes, or is operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. A computer can also be operatively coupled to a communications network in order to receive instructions and/or data from the network and/or to transfer instructions and/or data to the network. Computer-readable storage mediums suitable for embodying computer program instructions and data include all forms of volatile and non-volatile memory, including by way of example semiconductor memory devices, e.g., DRAM, SRAM, EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and optical disks, e.g., CD, DVD, HD-DVD, and Blu-ray disks. The processor and the memory can be supplemented by and/or incorporated in special purpose logic circuitry.
To provide for interaction with a user, the above described techniques can be implemented on a computer in communication with a display device, e.g., a CRT (cathode ray tube), plasma, or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse, a trackball, a touchpad, or a motion sensor, by which the user can provide input to the computer (e.g., interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, and/or tactile input.
The above described techniques can be implemented in a distributed computing system that includes a back-end component. The back-end component can, for example, be a data server, a middleware component, and/or an application server. The above described techniques can be implemented in a distributed computing system that includes a front-end component. The front-end component can, for example, be a client computer having a graphical user interface, a Web browser through which a user can interact with an example implementation, and/or other graphical user interfaces for a transmitting device. The above described techniques can be implemented in a distributed computing system that includes any combination of such back-end, middleware, or front-end components.
The components of the computing system can be interconnected by transmission medium, which can include any form or medium of digital or analog data communication (e.g., a communication network). Transmission medium can include one or more packet-based networks and/or one or more circuit-based networks in any configuration. Packet-based networks can include, for example, the Internet, a carrier internet protocol (IP) network (e.g., local area network (LAN), wide area network (WAN), campus area network (CAN), metropolitan area network (MAN), home area network (HAN)), a private IP network, an IP private branch exchange (IPBX), a wireless network (e.g., radio access network (RAN), Bluetooth, Wi-Fi, WiMAX, general packet radio service (GPRS) network, HiperLAN), and/or other packet-based networks. Circuit-based networks can include, for example, the public switched telephone network (PSTN), a legacy private branch exchange (PBX), a wireless network (e.g., RAN, code-division multiple access (CDMA) network, time division multiple access (TDMA) network, global system for mobile communications (GSM) network), and/or other circuit-based networks.
Information transfer over transmission medium can be based on one or more communication protocols. Communication protocols can include, for example, Ethernet protocol, Internet Protocol (IP), Voice over IP (VOIP), a Peer-to-Peer (P2P) protocol, Hypertext Transfer Protocol (HTTP), Session Initiation Protocol (SIP), H.323, Media Gateway Control Protocol (MGCP), Signaling System #7 (SS7), a Global System for Mobile Communications (GSM) protocol, a Push-to-Talk (PTT) protocol, a PTT over Cellular (POC) protocol, Universal Mobile Telecommunications System (UMTS), 3GPP Long Term Evolution (LTE) and/or other communication protocols.
Devices of the computing system can include, for example, a computer, a computer with a browser device, a telephone, an IP phone, a mobile device (e.g., cellular phone, personal digital assistant (PDA) device, smart phone, tablet, laptop computer, electronic mail device), and/or other communication devices. The browser device includes, for example, a computer (e.g., desktop computer and/or laptop computer) with a World Wide Web browser (e.g., Chrome™ from Google, Inc., Microsoft® Internet Explorer® available from Microsoft Corporation, and/or Mozilla® Firefox available from Mozilla Corporation). Mobile computing devices include, for example, a Blackberry® from Research in Motion, an iPhone® from Apple Corporation, and/or an Android™-based device. IP phones include, for example, a Cisco® Unified IP Phone 7985G and/or a Cisco® Unified Wireless Phone 7920 available from Cisco Systems, Inc.
Comprise, include, and/or plural forms of each are open ended and include the listed parts and can include additional parts that are not listed. And/or is open ended and includes one or more of the listed parts and combinations of the listed parts.
One skilled in the art will realize the technology may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the technology described herein.
This application claims priority to U.S. Provisional Patent Application No. 62/843,680, filed on May 6, 2019, the entirety of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5675326 | Juds et al. | Oct 1997 | A |
6259815 | Anderson et al. | Jul 2001 | B1 |
6275235 | Morgan, III | Aug 2001 | B1 |
6525722 | Deering | Feb 2003 | B1 |
6525725 | Deering | Feb 2003 | B1 |
7248257 | Elber | Jul 2007 | B2 |
7420555 | Lee | Sep 2008 | B1 |
7657081 | Blais et al. | Feb 2010 | B2 |
8209144 | Anguelov et al. | Jun 2012 | B1 |
8542233 | Brown | Sep 2013 | B2 |
8766979 | Lee et al. | Jul 2014 | B2 |
8942917 | Chrysanthakopoulos | Jan 2015 | B2 |
8995756 | Lee et al. | Mar 2015 | B2 |
9041711 | Hsu | May 2015 | B1 |
9104908 | Rogers et al. | Aug 2015 | B1 |
9171402 | Mien et al. | Oct 2015 | B1 |
9607388 | Lin et al. | May 2017 | B2 |
9710960 | Hou | Jul 2017 | B2 |
9886530 | Mehr et al. | Feb 2018 | B2 |
9978177 | Mehr et al. | May 2018 | B2 |
20050068317 | Amakai | Mar 2005 | A1 |
20050128201 | Warner et al. | Jun 2005 | A1 |
20050253924 | Mashitani | Nov 2005 | A1 |
20060050952 | Blais et al. | Mar 2006 | A1 |
20060170695 | Zhou et al. | Aug 2006 | A1 |
20060277454 | Chen | Dec 2006 | A1 |
20070075997 | Rohaly et al. | Apr 2007 | A1 |
20080180448 | Anguelov et al. | Jul 2008 | A1 |
20080310757 | Wolberg et al. | Dec 2008 | A1 |
20090232353 | Sundaresan et al. | Sep 2009 | A1 |
20100111370 | Black et al. | May 2010 | A1 |
20100198563 | Plewe | Aug 2010 | A1 |
20100209013 | Minear et al. | Aug 2010 | A1 |
20100302247 | Perez et al. | Dec 2010 | A1 |
20110052043 | Hyung et al. | Mar 2011 | A1 |
20110074929 | Hebert et al. | Mar 2011 | A1 |
20120056800 | Williams et al. | Mar 2012 | A1 |
20120063672 | Gordon et al. | Mar 2012 | A1 |
20120098937 | Sajadi et al. | Apr 2012 | A1 |
20120130762 | Gale et al. | May 2012 | A1 |
20120194516 | Newcombe et al. | Aug 2012 | A1 |
20120194517 | Izadi | Aug 2012 | A1 |
20120306876 | Shotton et al. | Dec 2012 | A1 |
20130069940 | Sun et al. | Mar 2013 | A1 |
20130123801 | Umasuthan et al. | May 2013 | A1 |
20130156262 | Taguchi et al. | Jun 2013 | A1 |
20130201104 | Ptucha et al. | Aug 2013 | A1 |
20130201105 | Ptucha et al. | Aug 2013 | A1 |
20130208955 | Zhao et al. | Aug 2013 | A1 |
20140160115 | Keitler et al. | Jun 2014 | A1 |
20140176677 | Valkenburg et al. | Jun 2014 | A1 |
20140206443 | Sharp et al. | Jul 2014 | A1 |
20140240464 | Lee | Aug 2014 | A1 |
20140241617 | Shotton et al. | Aug 2014 | A1 |
20140270484 | Chandraker et al. | Sep 2014 | A1 |
20140321702 | Schmalstieg | Oct 2014 | A1 |
20150009214 | Lee et al. | Jan 2015 | A1 |
20150045923 | Chang et al. | Feb 2015 | A1 |
20150142394 | Mehr et al. | May 2015 | A1 |
20150213572 | Loss | Jul 2015 | A1 |
20150234477 | Abovitz et al. | Aug 2015 | A1 |
20150262405 | Black et al. | Sep 2015 | A1 |
20150269715 | Jeong et al. | Sep 2015 | A1 |
20150279118 | Dou et al. | Oct 2015 | A1 |
20150301592 | Miller | Oct 2015 | A1 |
20150325044 | Lebovitz | Nov 2015 | A1 |
20150371440 | Pirchheim et al. | Dec 2015 | A1 |
20160026253 | Bradski et al. | Jan 2016 | A1 |
20160071318 | Lee et al. | Mar 2016 | A1 |
20160171765 | Mehr | Jun 2016 | A1 |
20160173842 | De La Cruz et al. | Jun 2016 | A1 |
20160358382 | Lee et al. | Dec 2016 | A1 |
20170053447 | Chen et al. | Feb 2017 | A1 |
20170054954 | Keitler et al. | Feb 2017 | A1 |
20170054965 | Raab et al. | Feb 2017 | A1 |
20170221263 | Wei et al. | Aug 2017 | A1 |
20170243397 | Hou et al. | Aug 2017 | A1 |
20170278293 | Hsu | Sep 2017 | A1 |
20170316597 | Ceylan et al. | Nov 2017 | A1 |
20170337726 | Bui et al. | Nov 2017 | A1 |
20180005015 | Hou et al. | Jan 2018 | A1 |
20180025529 | Wu et al. | Jan 2018 | A1 |
20180114363 | Rosenbaum | Apr 2018 | A1 |
20180144535 | Ford et al. | May 2018 | A1 |
20180241985 | O'Keefe et al. | Aug 2018 | A1 |
20180288387 | Somanath | Oct 2018 | A1 |
20180300937 | Chien et al. | Oct 2018 | A1 |
20180336714 | Stoyles et al. | Nov 2018 | A1 |
20190122411 | Sachs et al. | Apr 2019 | A1 |
20190208007 | Khalid | Jul 2019 | A1 |
20190244412 | Yago Vicente et al. | Aug 2019 | A1 |
20200086487 | Johnson | Mar 2020 | A1 |
20200105013 | Chen | Apr 2020 | A1 |
Number | Date | Country |
---|---|---|
1308902 | May 2003 | EP |
10-1054736 | Aug 2011 | KR |
10-2011-0116671 | Oct 2011 | KR |
2006027339 | Mar 2006 | WO |