The present invention relates to the area of server-based software services, and more particularly to computer-implemented methods for providing location identification services as well as augmented reality functionality and branded experiences in relation to usage of computer devices such as mobile smart phones.
Since the presently described invention touches on several fields, it is useful to discuss prior art in these separate areas.
Google and other companies such as Apple, Microsoft, and MapQuest have built mapping services which are based on a combination of tile servers as well as vector graphics. Google has integrated its mapping services with its “Street View” fleet of cars with cameras on the roofs in order to create a further ground-level view of the world's streets.
The use of SLAM mapping for the purposes of augmented reality was pioneered by a company called 13th Lab in Scandinavia.
Microsoft Live Labs created a product called Photosynth capable of building point clouds of physical scenes from a series of photographs.
In a first embodiment of the invention there is provided a computer-implemented method of providing a server-based feature cloud model of a realm. The method of this embodiment includes:
Optionally, the contributions are in the form of feature clouds. Alternatively or in addition, the contributions are in a form other than feature clouds, and processing by the server includes converting the contributions to feature clouds. Also alternatively or in addition, the method further includes
In another related embodiment, an object database is coupled to the server, and the method further includes storing, by the server, an association between a subset of feature cloud data in the feature cloud model of the realm and a selected one of the objects in the object database. Optionally, the method further includes causing, by the server, the subset to be presented as part of a displayed view of a part of the feature cloud model of the realm, wherein the subset is identified using the association. Optionally, the method further includes, before storing the association, processing by the server of the subset of feature cloud data to determine the association.
In another related embodiment, receiving by the server the series of digital contributions includes receiving a digital contribution, from a given one of the remote computing devices, of which the subset of feature cloud data is a part, the subset being identified by the given computing device as a candidate for object processing, and the method further comprises, before storing the association, processing by the server of the subset of feature cloud data to determine the association.
In yet another related embodiment, the method further includes:
In further related embodiments, the realm is a domain.
The foregoing features of embodiments will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:
Definitions. As used in this description and the accompanying claims, the following terms shall have the meanings indicated, unless the context otherwise requires:
A “server” includes a hardware-implemented server, a cluster of computers configured to operate as a hardware-implemented server, and a set of logical processes configured to perform functions associated with a hardware-implemented server.
A “set” includes at least one member.
A “brand” is a trademark or an affinity group.
An “affinity group” is a grouping of individuals having a shared interest, which may, for example, be (1) an interest in goods or services from a particular source, which in turn may be identified by a trademark, (2) an interest in celebrating the birthday of a particular child, or (3) an interest in a program of a school in rewarding performance or participation by students in some activity, such as a spelling bee, an athletic competition, academic achievement, etc.
An “end-user” is a consumer using any device that consumes any service provided by a brand through an embodiment of the present system.
A “realm” is a geographic area that is sufficiently large that it cannot be fully captured in a single camera view and that may be sufficiently large as to encompass the world.
A “domain” is a geographic region that is sufficiently large that it cannot be fully captured in 100 views and that may be sufficiently large as to encompass the world. A domain is always a realm, but not vice versa.
A “feature cloud” is a digital representation, created using machine vision, of a three-dimensional volume that is populated with a set of features derived from physical objects. A SLAM map is an example of a feature cloud.
An “object” is any shape that can be viewed, and includes a natural geographical feature, such as a mountain or a rock, regardless of scale, an erected building or collection of buildings, or any other man-made item, regardless of scale, including an item of inventory on a shelf in a store, etc.
“Object data” means data that characterizes the appearance of an object.
“Real-time” refers to a product interaction experience which occurs simultaneously, or near-simultaneously, to an end-user's actions or interactions with that product.
An “application” is a program that is written for deployment on a device running in its regular native mode.
A “device” is a machine or product, containing a CPU and memory that can execute programs. A “mobile device” is a device that is readily portable.
A “computer process” is the performance of a described function in a computer using computer hardware (such as a processor, field-programmable gate array or other electronic combinatorial logic, or similar device), which may be operating under control of software or firmware or a combination of any of these or operating outside control of any of the foregoing. All or part of the described function may be performed by active or passive electronic components, such as transistors or resistors. In using the term “computer process” we do not necessarily require a schedulable entity, or operation of a computer program or a part thereof, although, in some embodiments, a computer process may be implemented by such a schedulable entity, or operation of a computer program or a part thereof. Furthermore, unless the context otherwise requires, a “process” may be implemented using more than one processor or more than one (single- or multi-processor) computer.
In various embodiments, the presently-described invention is implemented by means of a specialized server that, in combination with a feature cloud model implemented by the server and services offered by the server in receiving contributions to the model and in serving data associated with the model, we call a “SLAM-Map Fabric”. Its purpose is to provide a 3-D mapping service of the world which is considerably more accurate and faster than current GPS-based technology such as Google Maps, Apple Maps, Bing Maps, or MapQuest, and to classify 3-D objects within this map. SLAM is an acronym which stands for Simultaneous Localization And Mapping, and it was originally developed in the field of robotics and autonomous vehicles. SLAM allows an autonomous robot or vehicle to use its robotic “eyes” to build a three-dimensional map of an environment which it has never visited before, in effect giving it the ability to model new environments and locations in real time in order to avoid obstacles. This is done by processing feeds from cameras, radars, lidars, or other sensors to build a three-dimensional model of the robot's line-of-sight. In one embodiment, this 3-D model is constructed as a “feature cloud” which is created using computer vision technology based on parallax (stereoscopically or monoscopically over time) to capture “features” from the environment which are then modeled as sets of 3-D points. With enough points forming a feature cloud, it becomes simple to build and process a 3-D model of the immediate environment (for instance, it is easy to turn a point cloud into a 3-D triangle mesh, and then skin and shade it using standard 3-D libraries such as OpenGL), and this form of SLAM-mapping has found applications outside the field of robotics in areas such as augmented reality, where it can be used to accurately place virtual objects in the real world.
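By way of a non-limiting illustrative sketch of the feature cloud construction just described (the class name, function name, and use of the NumPy library are assumptions made solely for illustration and are not part of any described embodiment), the following Python fragment shows one way a minimal feature cloud could be represented, and how a single feature's 3-D position could be recovered by parallax from two camera observations using mid-point triangulation:

```python
from dataclasses import dataclass, field
import numpy as np


@dataclass
class FeatureCloud:
    """Minimal feature cloud: 3-D feature positions plus per-feature descriptors."""
    points: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))
    descriptors: list = field(default_factory=list)

    def add(self, point, descriptor):
        self.points = np.vstack([self.points, np.asarray(point, dtype=float)])
        self.descriptors.append(descriptor)


def triangulate_point(c1, d1, c2, d2):
    """Mid-point triangulation of one feature observed from two camera poses.

    c1, c2 are camera centers; d1, d2 are unit viewing rays toward the feature
    (the parallax between the two observations).  Returns the 3-D point midway
    between the closest points on the two rays."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    # Solve for ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

Repeating such triangulation over many tracked features yields the set of 3-D points that populate the feature cloud, which can subsequently be meshed and rendered with standard 3-D libraries as noted above.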
In various embodiments, we use standard SLAM-Mapping techniques in order to build feature clouds which can be “stitched together” to form our universal map of the world. The overall embodiment of the present invention includes this ability as a component, and can then use the virtual SLAM-Map Fabric in order to place objects or triggers in the real world via augmented reality. Similarly, it can also use the same technology to identify objects and products in the real world and to provide various subsequent services based on this information to end-users. A reader interested in learning more about SLAM technology is referred to “The SLAM Problem: A Survey”, Josep Aulinas, Yvan Petillot, Joaquim Salvi, and Xavier Llado, University of Girona and University of Edinburgh, available at http://atc.edg.edu/˜llado/CVpapers/ccia08b.pdf. This paper is hereby incorporated herein by reference in its entirety.
Since then, this SLAM technology has been adapted for augmented reality purposes in order to overlay virtual 3-D objects (videos, user interface, etc.) in the real-world space using a device such as a smart phone. This technology creates the illusion that a virtual object exists even though it doesn't. Our SLAM-Map Fabric makes use of this technology in a new way. Rather than building a dynamic map in real time, the embodiment of the present system will provide a service which is used to build a SLAM-Map of the world which can then be accessed by end-users. The idea is to create a service similar to GPS and Google Maps, except that instead of being GPS-based, this model of the world is SLAM-based. The construction of the system's SLAM-Map Fabric can be carried out in several different ways using client software on mobile devices. In the simplest manner, an individual carrying a mobile device which is running this software visits a location and uses the phone's camera to scan the physical features of the world. The SLAM-mapping software on the phone constructs a feature cloud or other 3-D model of every physical location which is scanned, and then uploads this data to the SLAM-Map Fabric's centralized server, together with the GPS coordinates of each area along with compass information. The GPS coordinates and compass data are used to create a rough estimate of where each modeled area is located, and how it is oriented. The individual creating the SLAM-Map need not necessarily be in the employ of the owner of the Fabric's server, but rather could be totally unaffiliated. Since the client software requires nothing more than a modern camera-enabled smart phone, by open-sourcing or otherwise distributing the client software used to build the SLAM-Map, the task of SLAM-Mapping the world could be crowd-sourced to thousands or even millions of volunteers, possibly even in the context of using a different service. Alternatively, instead of using smart phones carried by individuals on foot, the SLAM-mapping software could be deployed in cars with cameras built on top in the same way that Google currently uses its Street View cars to drive around, gathering data for Google Maps. All of these separate streams of SLAM-mapped data are gathered by the central server where they are stitched together to create a model of the world. This service can then be used by arbitrary parties to enable SLAM-mapped experiences for their end-users. In the presently-described embodiment, the SLAM-Map Fabric's central server carries out the additional orthogonal role of processing the 3-D map of the world using machine learning classifier algorithms in order to identify portions of the world's 3-D model as objects within the world. For instance, machine learning classifiers for trees, houses, cars, fire hydrants, streets, and other common objects in the world can label those subsets of the feature cloud as being members of those objects, thereby giving the SLAM-Map Fabric a certain semantic understanding of the 3-D world that it contains. A degenerate case of this involves using a SLAM-based Object Database rather than the SLAM-Map Database in order to identify individual items such as products.
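As a purely hypothetical sketch of the client-side upload step described above (the function name, field names, and JSON encoding are illustrative assumptions rather than a required format), a contribution bundling the locally built feature cloud with the coarse GPS and compass readings might be assembled as follows:

```python
import json


def build_contribution(points, descriptors, gps_lat_lon, compass_heading_deg, device_id):
    """Assemble one client contribution for upload to the SLAM-Map Fabric's
    centralized server: the locally constructed feature cloud plus the coarse
    GPS and compass readings the server uses as a rough prior for where the
    modeled area is located and how it is oriented."""
    return json.dumps({
        "device_id": device_id,
        "gps": {"lat": gps_lat_lon[0], "lon": gps_lat_lon[1]},    # rough position prior
        "compass_heading_deg": compass_heading_deg,               # rough orientation prior
        "points": [list(map(float, p)) for p in points],          # 3-D feature positions
        "descriptors": [list(map(float, d)) for d in descriptors],
    })
```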
The experience of building a SLAM-mapped scene is shown in
Once a SLAM-mapped scene has been created, it can be used by administrators of the system as well as third parties such as brands to create new experiences for end-users. For instance, a scene could be used to create an augmented reality experience, in which a virtual object is placed in the virtual SLAM-mapped universe, and can then be seen in the real world through an end-user's phone when he/she visits that location. More specific to the embodiment of the presently-described system, another compelling use-case involves a type of “trigger”. A SLAM Trigger can be placed in the virtual SLAM-mapped world, and it fires when an end-user, interacting with this virtual world by means of his/her mobile device, comes within a certain distance of this trigger. SLAM Triggers are created by placing them into precise 3-D locations in the SLAM-Map Fabric using a graphical interface which allows for this virtual world to be explored on a computer. Once SLAM Triggers have been created, they function by causing arbitrary end-user-centric events to occur which have been programmed by the associated brand. In addition to these augmented reality experiences, the SLAM-Map Fabric has the ability to identify and classify the objects which it contains, making many more use-cases possible, such as the ability for end-users to point their camera-enabled mobile devices at certain objects, and to have the system identify the object in question and then respond with information or a list of possible actions.
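A minimal, non-limiting sketch of the SLAM Trigger behavior described above might look like the following (the class name, constructor parameters, and use of NumPy are assumptions for illustration only; the fired action is whatever arbitrary end-user-centric event the associated brand has programmed):

```python
import numpy as np


class SlamTrigger:
    """A trigger anchored at a precise 3-D position in the SLAM-Map Fabric.

    fire_action is an arbitrary callable supplied by the associated brand;
    it runs once when an end-user first comes within `radius` of the anchor."""

    def __init__(self, position, radius, fire_action):
        self.position = np.asarray(position, dtype=float)
        self.radius = float(radius)
        self.fire_action = fire_action
        self.fired = False

    def update(self, user_position):
        """Call as the end-user's estimated position in the fabric changes."""
        distance = np.linalg.norm(np.asarray(user_position, dtype=float) - self.position)
        if not self.fired and distance <= self.radius:
            self.fired = True
            self.fire_action()
```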
A typical end-user's experience interacting with the system's SLAM-Map Fabric is illustrated in
In
Once the SLAM-Map Data Processor has determined the model's approximate location, it queries the SLAM-Map Database 570, which contains both map and organizational information identifying object SLAM models, in order to determine if it is entering a new SLAM-mapped model for a location that wasn't previously in the system, or whether it is augmenting or modifying a location which has already been stored after having been partially mapped. If the location in question is entirely new in the system, then it is simply stored in the SLAM-Map Database 570 together with its object information and a relevant reference to the Object Database, if any. On the other hand, if there are one or more similar locations already in the system, then the SLAM-Map Data Processor fetches all of them from the SLAM-Map Database and tries to match the new location with them, using the traditional location information such as GPS, wifi, and compass information as guidance for a first approximation. Because of the unlikelihood that any two scenes in the real world would have any significant overlaps in their feature clouds, especially within the local area of the first approximation, the system has a high tolerance to noise, and partial matches are good enough to determine that a scene is the same as one which was previously mapped. If a match is found, then it stitches the models together, saves all relevant object information and Object Database references, and saves the overall result to the SLAM-Map Database. In some cases, a new model may cause several old models to be stitched together, and in this manner, the model in the database grows from fragments into a unified whole, until finally the whole world is modeled. Stitching is performed by determining exact alignment, and then performing standard matrix operations such as translations, rotations, and scaling, as necessary.
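The alignment step of stitching can be illustrated by the following non-limiting sketch, which estimates the translation, rotation, and scaling that map matched points of a new feature cloud onto a stored one using the well-known Kabsch/Procrustes (Umeyama) method; the function name and the use of NumPy are assumptions made for illustration:

```python
import numpy as np


def align_clouds(src_pts, dst_pts):
    """Estimate a similarity transform (scale, rotation, translation) mapping
    matched points of a new feature cloud (src) onto a stored one (dst).
    Both arrays are N x 3, with row i of src matched to row i of dst."""
    src, dst = np.asarray(src_pts, dtype=float), np.asarray(dst_pts, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - src_c, dst - dst_c
    # Singular value decomposition of the cross-covariance of the two clouds.
    u, s, vt = np.linalg.svd(src0.T @ dst0)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against a reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    scale = (s * [1.0, 1.0, d]).sum() / (src0 ** 2).sum()
    trans = dst_c - scale * rot @ src_c
    return scale, rot, trans                          # dst ≈ scale * rot @ src + trans
```

Applying the returned transform to every point of the new model places it in the coordinate frame of the stored model, after which the two feature clouds can be merged.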
Similarly, it is possible that the area in which the new model falls has been largely or entirely mapped, in which case the system doesn't fetch the entire map and return it to the Data Processor. Instead, it returns only a fixed portion of the model which contains the new model. This is made easy because the SLAM-Map model is stored using a standard technique called “binary space partitioning” in which the world is divided into a “binary space tree”. The root of the tree contains the entire world, and its children contain the hemispheres. Then each node has children, further dividing up the space. In the leaves of this tree are the actual SLAM-models. Because the height of the tree is logarithmic, accessing any particular model/location can be done very quickly.
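A simplified, illustrative sketch of such a partitioning tree is shown below (the class name and the two-dimensional latitude/longitude bounds are assumptions for illustration; an actual embodiment may partition in three dimensions):

```python
class BspNode:
    """Node of a simple axis-aligned binary space partitioning tree.

    Interior nodes split their region in half along one axis; leaves hold the
    SLAM models that fall inside their region."""

    def __init__(self, lo, hi, depth=0, max_depth=20):
        self.lo, self.hi = tuple(lo), tuple(hi)   # region bounds, e.g. (lat, lon)
        self.left = self.right = None
        self.models = []                          # populated only at leaves
        self.depth, self.max_depth = depth, max_depth
        self.axis = depth % len(self.lo)

    def insert(self, position, model):
        if self.depth == self.max_depth:
            self.models.append((tuple(position), model))
            return
        mid = 0.5 * (self.lo[self.axis] + self.hi[self.axis])
        if self.left is None:                     # lazily create the two halves
            left_hi = list(self.hi); left_hi[self.axis] = mid
            right_lo = list(self.lo); right_lo[self.axis] = mid
            self.left = BspNode(self.lo, left_hi, self.depth + 1, self.max_depth)
            self.right = BspNode(right_lo, self.hi, self.depth + 1, self.max_depth)
        child = self.left if position[self.axis] < mid else self.right
        child.insert(position, model)

    def find_leaf(self, position):
        """Descend to the leaf containing `position`; cost is O(tree height)."""
        if self.left is None:
            return self
        mid = 0.5 * (self.lo[self.axis] + self.hi[self.axis])
        child = self.left if position[self.axis] < mid else self.right
        return child.find_leaf(position)
```

For example, a root node covering the whole world could be created as BspNode((-90.0, -180.0), (90.0, 180.0)), with each insertion and lookup descending at most max_depth levels.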
Because the real world is ever-changing, the machine learning algorithms for matching new models to ones that were already stored need to have the ability to make matches despite a certain amount of noise, which is something that is well-understood in the technical literature. For instance, since trees lose their leaves and bloom at different times of the year, SLAM-models of a given tree could vary greatly if taken at different times. For this reason, the matching algorithm must be able to match models even if they are somewhat different. This isn't difficult because the majority of a typical scene stays relatively static. Updating models therefore doesn't overwrite old features; rather, when matches are found, the differences are stored as well, thereby making future matches easier and providing a greater amount of data, such as the seasonal variations in our tree example.
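A non-limiting sketch of such noise-tolerant matching is given below (the function name, the tolerance value, and the use of NumPy are illustrative assumptions); it scores a new cloud by the fraction of its points that have a close counterpart in the stored cloud and returns the unmatched remainder so that it can be stored as a difference rather than overwriting old features:

```python
import numpy as np


def match_score(new_pts, stored_pts, tol=0.05):
    """Return (score, unmatched) for a candidate match between two clouds.

    score is the fraction of new points lying within `tol` (same units as the
    cloud, e.g. meters) of some stored point; a partial score above a chosen
    threshold is treated as a match despite seasonal or other changes, and the
    unmatched remainder is kept as a stored difference."""
    new_pts = np.asarray(new_pts, dtype=float)
    stored_pts = np.asarray(stored_pts, dtype=float)
    # Pairwise distances; fine for sketch-sized clouds, use a spatial index at scale.
    dists = np.linalg.norm(new_pts[:, None, :] - stored_pts[None, :, :], axis=2)
    matched = dists.min(axis=1) <= tol
    return matched.mean(), new_pts[~matched]
```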
As the system's Object Database grows, and as the SLAM-Map Database starts to receive several duplicated SLAM-Mappings of the same areas, the present embodiment periodically has the SLAM-Map Data Processor, in conjunction with the Object Identification Processor, revisit the main SLAM-Map in order to identify and classify objects which it may have previously missed. As before, this is done using machine learning classifiers, but a particularly easy case is to identify an object from two scenes of the same area in which the object is present in one but absent from the other, thereby showing that it was not a permanent fixture.
For instance, if the SLAM-Map contains two overlaid maps of the same street taken at different times, then it is possible for one or more cars to be present in one, but absent in the other, making it trivial to identify, isolate, and categorize them as objects. These efforts are further improved by providing an interface 594 coupled via 593 with the Object Database 592. This interface allows trusted system administrators to upload 3-D models of various objects, including high-resolution 3-D models of various products along with identifying metadata such as names, manufacturers, SKU codes, etc. to the Object Database in order to provide it with a rich source of object information. This interface also provides an interactive component which shows a model to the administrator and asks if it has been correctly identified and categorized. As an example, the interface can show a fire hydrant and query the human operator as to whether it is a stump, in which case the operator would respond in the negative. At other times, the interface may show a car and query to verify that it is indeed a car, in which case the operator would respond in the affirmative. In addition, this interface can be used to associate additional metadata with identified objects. All of these human responses provide valuable training inputs to the system's machine learning classification training algorithms, and as the system classifies more and more objects in its map as being temporary rather than permanent fixtures, this also makes it progressively easier to match future incoming feature sets because it can ignore temporary objects (in the main map as well as the incoming feature cloud) for the purpose of matching.
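As an illustrative, non-limiting sketch of the differencing case described above (the function name, tolerance, and NumPy usage are assumptions), points present in one pass over a scene but absent from another can be isolated as candidate transient objects and then routed to the administrator interface for confirmation and use as classifier training data:

```python
import numpy as np


def transient_candidates(scan_a, scan_b, tol=0.1):
    """Return the points of scan_a that have no counterpart in scan_b within `tol`.

    A compact cluster of such points is a candidate transient object (e.g. a
    parked car present in one pass over the street but absent from the other),
    which can then be shown to a trusted administrator for labeling and used as
    a training example for the object classifiers."""
    a, b = np.asarray(scan_a, dtype=float), np.asarray(scan_b, dtype=float)
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return a[dists.min(axis=1) > tol]
```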
In an alternative embodiment, client data can be processed on the server rather than locally in order to construct the local SLAM-Map. Instead of receiving pre-constructed SLAM-Map data from client devices 501, 502, and 503 over channel 510, the SLAM-Map Data Receiver 530 receives raw data in the form of camera images, video, LIDAR, RADAR, or other technological formats for mapping a three-dimensional space. The Data Receiver then performs the initial step of processing the raw data itself in order to construct a local SLAM-Map from the client data, and then proceeds as before.
In another alternative embodiment, a copy of the Object Database is stored or cached locally on the client, thereby giving it the ability to match and identify objects much more quickly. This Object Database can be routinely updated from a central source, if required.
Another important use-case that is enabled by this SLAM-Map Fabric system, and not related to the consumption of virtual augmented reality, has to do with object identification. In this use-case, the user interface 641 can be used purely to identify objects in the real world. For instance, this could involve the end-user pointing his/her mobile device's camera at a particular make and model of car, and having the device respond by identifying it, including make, model, year, and so on. Similarly, it could be used to identify a consumer product and to then purchase it online. In this use-case, the interface 641 on the end-user's device sends a SLAM-Map scene containing the object to the SLAM-Map Data Receiver 530 as before, with the additional information that the intent is to identify the object rather than to use the system's SLAM-Mapping service. The Data Receiver then passes the model via channel 540 to the SLAM-Map Data Processor 550, which processes the scene and isolates the models in it. These are sent via channel 580 to the Object Identification Processor 590, which via channel 591 attempts to determine a match with objects in the Object Database 592. If a match is found, the database responds with any information it has, such as the name, type of object, and SKU information, if applicable. For instance, the system could be used to recognize a famous statue or landmark, and to then provide historical or contextual information regarding it. Having identified the object(s), the Object Identification Processor 590 now passes this information via 580 to the Data Processor, and then via 710 to the Data Sender, which then sends it via 720 to the end-user's device, which can interpret that information according to the use-case for which it was designed. For instance, having identified the object(s) and received SKU data, it may send this to online shopping services such as Amazon.com or Google Shopping and allow the end-user to purchase the item(s) online, thereby enabling a seamless real-world mobile device-based shopping experience. Similarly, any piece of information or action can be associated with a specifically-identified object, in a sense turning identified objects into a type of primary key for data retrieval or invoking an action.
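The server-side portion of this identification flow can be sketched, purely hypothetically, as follows (the function name, the dictionary-based Object Database entries, and the injected matcher callable are illustrative assumptions and do not correspond to any actual interface of the described components):

```python
def identify_and_act(scene_cloud, object_db, actions_by_sku, matcher):
    """Sketch of the identification use-case: match the object model isolated
    from the submitted scene against the Object Database, then use the
    identified object as a key for retrieving associated information or
    actions (e.g. an online-purchase link keyed by its SKU)."""
    match = matcher(scene_cloud, object_db)          # best-scoring entry, or None
    if match is None:
        return {"identified": False}
    response = {
        "identified": True,
        "name": match.get("name"),
        "type": match.get("type"),
        "sku": match.get("sku"),
    }
    if match.get("sku") in actions_by_sku:           # identified object as a lookup key
        response["actions"] = actions_by_sku[match["sku"]]
    return response
```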
The presently-described embodiment of the SLAM-Map Fabric has considerable advantages over regular GPS-based mapping technology. Because it is based on having line-of-sight to satellites, GPS technology is inherently limited to the outdoors. It works very poorly indoors, and not at all when underground or in the core of a multi-level building. In addition, GPS-based location services are slow to load because they first need to receive information from several satellites, and even once they have locked onto them, GPS accuracy is extremely variable, at best getting the end-user to within a few meters. GPS location also constantly varies, so fine-grained location tends to “jump around” a lot. By contrast, the present embodiment of the SLAM-Map Fabric of the world is much more reliable in several different ways: It works in virtually all environments including indoors and underground. SLAM-Maps load far faster than GPS because they need not lock onto satellites, and they are also far more accurate; rather than having an accuracy of a few meters, their accuracy is on the order of a few centimeters, and does not jump around like GPS-based location services. Of course, SLAM technology also has its limitations, including the inability to work in totally dynamic environments, such as the open ocean where waves are constantly changing, but in that case mobile devices tend to lack reception in any event. In that sense, our SLAM-Map Fabric represents a “third layer” in positioning technology. The first layer involves techniques such as IP lookup. The second layer involves GPS, and the third layer is our system, thereby creating a series of progressively superior technologies for determining location. Because IP and GPS are used as a first approximation for determining location within our SLAM-Map Fabric, it is always guaranteed to be at least as accurate as these precursor technologies.
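A minimal sketch of this layered approach is shown below (the function name and the specific accuracy figure assumed for IP lookup are illustrative; the centimeter-level and meter-level figures reflect the accuracies discussed above):

```python
def locate(ip_estimate, gps_fix, slam_fix):
    """Return the most accurate location estimate available, falling back to
    progressively coarser layers: a SLAM-Map Fabric match (centimeter-level)
    when available, otherwise GPS (meter-level), otherwise IP lookup."""
    for estimate, accuracy_m in ((slam_fix, 0.05), (gps_fix, 5.0), (ip_estimate, 5000.0)):
        if estimate is not None:
            return estimate, accuracy_m
    return None, float("inf")
```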
Similarly, the presently-described embodiment of the SLAM-based object identification service has considerable advantages over existing identification technologies such as Google Shopper, which currently depend on using mobile devices to scan bar codes and, in some cases, packaging. Those methods will not work with objects that have been removed from their packaging, whereas the presently-described embodiment can identify such objects directly.
The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art. All such variations and modifications are intended to be within the scope of the present invention as defined in any appended claims.
This application is a continuation application of and claims priority to U.S. application Ser. No. 15/661,247, entitled “Platform for Constructing and Consuming Realm and Object Feature Clouds,” filed on Jul. 27, 2017, and issued on Jun. 9, 2020 as U.S. Pat. No. 10,681,183, which itself is a continuation application of and claims priority to U.S. application Ser. No. 14/289,103, entitled “Platform for Constructing and Consuming Realm and Object Feature Clouds,” filed on May 28, 2014, and issued on Aug. 1, 2017 as U.S. Pat. No. 9,723,109, all of which are incorporated herein by reference in their entirety.