The present disclosure relates generally to intelligent deployment of airborne agents (e.g., drones), and more specifically to systems and methods for deploying airborne agents to capture data related to a physical environment on demand.
A physical environment can include static objects (e.g., roads, vegetation, lane markings, traffic signs) as well as dynamic objects (e.g., vehicles, pedestrians) and dynamic events (e.g., collisions, illegal activities). Currently, a variety of platforms, entities, and tools need to be involved to fully capture such a physical environment. For example, road data is usually captured using GPS systems by map services, while traffic data is often captured using traffic cameras. Further, because of the inconsistencies among the types and qualities of data captured, it is difficult to interpret and present the data in a coherent and precise manner. Additionally, much of the data related to a physical environment cannot be generated on demand. For example, to update map data, significant resources must be used to procure and deploy the necessary equipment, gather the data, and process the data. As another example, to study traffic patterns, a different set of resources may be needed to obtain the right data and process the data to extract information of interest.
Thus, there is a need for a platform that can receive requests for high-fidelity data (e.g., data related to a physical environment and objects/events in the physical environment) on demand and automatically fulfill the requests in an efficient, scalable, and intelligent manner.
In some embodiments, a computer-enabled method for deploying an airborne agent to capture data related to a physical environment comprises receiving a user request indicative of a geographical region and a data type; in response to receiving the user request, generating a flight path based on the region; causing the airborne agent to traverse at least a portion of the region based on the generated flight path; causing the airborne agent to gather data based on the data type in the user request; processing the gathered data to obtain a set of data of interest; and providing an output based on the set of data of interest.
An exemplary electronic device comprises: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a user request indicative of a geographical region and a data type; in response to receiving the user request, generating a flight path based on the geographical region; causing the airborne agent to traverse at least a portion of the geographical region based on the generated flight path; causing the airborne agent to gather data based on the data type in the user request; processing the gathered data to obtain a set of data of interest; and providing an output based on the set of data of interest.
An exemplary non-transitory computer-readable storage medium stores one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device having a display, cause the electronic device to: receive a user request indicative of a geographical region and a data type; in response to receiving the user request, generate a flight path based on the geographical region; cause the airborne agent to traverse at least a portion of the geographical region based on the generated flight path; cause the airborne agent to gather data based on the data type in the user request; process the gathered data to obtain a set of data of interest; and provide an output based on the set of data of interest.
Provided are systems and methods for receiving requests for high-fidelity data (e.g., data related to a physical environment and objects and events in the environment) on demand and automatically fulfilling the requests in an efficient, scalable, and intelligent manner. As discussed below, an exemplary platform includes one or more airborne agents (e.g., drones) and techniques for intelligent flight path generation, intelligent data collection/selection, intelligent data processing, and a streamlined user interface for requesting and visualizing the data. The platform produces rich, high-fidelity, and correlated information about the physical environment and the objects/events in the physical environment.
Data produced by the system can be used by human users and robot users for a variety of purposes. For example, the data can include up-to-date knowledge of the physical environments (e.g., new lane markings) and thus can be used to provide accurate and up-to-date navigation guidance of the physical environment. Further, the data can be used to train algorithms to improve the driving capabilities of autonomous vehicles. Further, the data can be used to aid predictive analytics (e.g., of traffic patterns, of human behaviors). The knowledge of real-world events, patterns, and interactions can be valuable data for urban planning (e.g., operation of traffic lights). Further, the data can be used to detect and respond to real-world events. For example, the system can automatically detect accidents and deploy drones to send emergency supplies. As another example, the system can monitor an environment (e.g., a port, a warehouse) and alert authorities when anomalies are detected.
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments.
Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first sensing device could be termed a second sensing device, and, similarly, a second sensing device could be termed a first sensing device, without departing from the scope of the various described embodiments. The first sensing device and the second sensing device are both sensing devices, but they are not the same sensing device.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
The system 100 includes a web portal 102, through which users of the system 100 can submit data requests, track the status of the data requests, view the requested data, and submit follow-up requests. Users of the system include human users 104 and robot users 106 (e.g., autonomous vehicles). For example, a human user (e.g., an engineer) can access the web portal 102 and submit a request to view an ortho-image of a particular neighborhood or view accidents at a particular intersection on a particular date. Exemplary graphical user interfaces of the web portal 102 are described in detail below.
The robot user 106 is communicatively coupled to the system 100, for example, via a wireless network, such that the robot user can transmit data requests to the system 100 and receive the requested data. For example, a robot user (e.g., an autonomous vehicle) can transmit a request for “pedestrians' behavior around T intersections in Palo Alto, Calif.” and receive from the system 100 one or more sets of data accordingly.
Data gathered by the system 100 can be used by the human users 104 and the robot users 106 for a variety of purposes. For example, the data can include updates in the physical environment (e.g., new lane markings) and thus can be used to provide accurate and up-to-date navigation guidance of the physical environment. Further, the data can be used to train algorithms to improve the driving capabilities of autonomous vehicles. Further, the data can be used to aid predictive analytics (e.g., of traffic patterns, of human behaviors). The knowledge of real-world events, patterns, and interactions can be valuable data for urban planning (e.g., operation of traffic lights). The data can be used to detect and respond to real-world events. For example, the system can automatically detect accidents and deploy drones to send emergency supplies. As another example, the system can monitor an environment (e.g., a port, a warehouse) and alert authorities when anomalies are detected.
The system 100 includes a distributed database 110. The distributed database 110 can provide information needed to generate flight instructions to direct airborne agents to gather data efficiently and accurately. Further, the distributed database 110 can provide information needed for the data processing pipeline 112 to process the gathered data and extract data of interest. Further still, the distributed database 110 can store the extracted data in order to fulfill future data requests.
In some embodiments, the distributed database 110 stores geographical information such as local civil structures (e.g., roads, buildings, vegetation), traffic information (e.g., local live traffic information), local event information (e.g., gatherings), and local environmental information (e.g., weather, time). This information can be obtained or derived from external sources (e.g., third-party service providers that provide weather information or map information), the airborne agents, the data processing pipeline, or a combination thereof.
In some embodiments, the distributed database 110 can reside on one or more local data centers, a central data center, the cloud, or a combination thereof. The local data centers can have local storage and can communicate with each other. In some embodiments, the local data centers are geographically distributed. These local data centers can be existing data facilities, such as facilities of wireless service providers, depending on the data throughput, latency requirements, and the area covered. In some embodiments, the information stored at the local data centers is synced to the cloud. In some embodiments, some information stored at the local data centers (e.g., license plates, human faces) is not synced to the cloud but is stored only locally for latency and privacy considerations.
Generation of the flight instructions can be performed at the local data centers hosting the distributed database 110, the one or more airborne agents 114, a separate set of devices, or a combination thereof. In some embodiments, the generation of flight instructions involves the collaboration between the data centers and the airborne agents. In some embodiments, the airborne agents receive a set of precise flight instructions (e.g., specifying flight coordinates, trajectories, directions, speed, time stamps) and thus need to perform minimal calculation before and/or during flight. In some embodiments, the airborne agents receive high-level instructions (e.g., a flight route including a set of checkpoints) and thus need to calculate the precise flight instructions before and/or during flight. During flight, the data centers and the airborne agents can communicate to generate updated flight instructions as necessary. In some examples, the airborne agents receive only a high-level goal (e.g., specifying the region and the type of data to be gathered) and thus need to generate the necessary flight instructions. In some embodiments, one or more human pilots 116 can formulate part of the flight instructions, which are transmitted to the airborne agents before and/or during flight.
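To illustrate the different levels of abstraction described above, the following Python sketch shows one possible way to represent precise flight instructions and high-level routes, and how a route might be expanded into timed instructions onboard. All class names, fields, and the expansion logic are illustrative assumptions, not the system's actual interfaces.

```python
# Hypothetical flight-instruction structures; names and fields are illustrative
# assumptions rather than the system's actual interfaces.
import math
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Waypoint:
    lat: float      # degrees
    lon: float      # degrees
    alt_m: float    # altitude above ground, meters


@dataclass
class PreciseInstruction:
    """Low-level instruction: the agent performs minimal onboard planning."""
    waypoint: Waypoint
    speed_mps: float        # commanded ground speed
    heading_deg: float      # commanded heading (0 = north)
    timestamp_s: float      # when the waypoint should be reached


@dataclass
class HighLevelRoute:
    """High-level instruction: a set of checkpoints; the agent computes the
    precise trajectory onboard, possibly updating it during flight."""
    checkpoints: List[Waypoint]
    data_types: List[str] = field(default_factory=list)   # e.g., ["3d_point_cloud"]
    region_id: Optional[str] = None


def expand_route(route: HighLevelRoute, cruise_speed_mps: float) -> List[PreciseInstruction]:
    """Naively expand a high-level route into timed precise instructions,
    assuming straight-line legs flown at a constant cruise speed."""
    instructions, t = [], 0.0
    for prev, nxt in zip(route.checkpoints, route.checkpoints[1:]):
        # Rough planar distances in meters (adequate for short legs).
        dx = (nxt.lon - prev.lon) * 111_320 * math.cos(math.radians(prev.lat))
        dy = (nxt.lat - prev.lat) * 110_540
        t += math.hypot(dx, dy) / cruise_speed_mps
        heading = math.degrees(math.atan2(dx, dy)) % 360
        instructions.append(PreciseInstruction(nxt, cruise_speed_mps, heading, t))
    return instructions
```

Under this sketch, an agent given PreciseInstruction objects would execute them with minimal onboard calculation, while an agent given only a HighLevelRoute would call something like expand_route before or during flight.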
The data processing pipeline 112 can further incorporate intervention by human annotators 120. In some embodiments, a local data center aggregates the map update information from all the agents over a temporal period. The transient changes over the temporal period are filtered out and the consistent changes are packaged and sent to the annotators 120. In some embodiments, the system automatically detects semantic types (e.g., traffic signs, lane markings, and road boundaries) in the gathered data, and only sends a portion (e.g., a fixed percentage, a portion having low confidence scores) of the detected semantic types to human annotators 120 for verification. In some embodiments, the system deploys one or more airborne agents to the geographical region to verify or recapture data before requesting intervention by human annotators 120.
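As a concrete illustration of this triage, the sketch below routes low-confidence detections, plus a small random audit sample, to human annotators while auto-accepting the rest. The confidence threshold, audit fraction, and detection format are illustrative assumptions.

```python
# Sketch of confidence-based routing of detections to human annotators;
# thresholds and the detection format are illustrative assumptions.
import random
from typing import Dict, List, Tuple


def select_for_annotation(detections: List[Dict],
                          confidence_threshold: float = 0.7,
                          audit_fraction: float = 0.05,
                          seed: int = 0) -> Tuple[List[Dict], List[Dict]]:
    """Each detection is a dict with at least a 'confidence' key.
    Returns (auto_accepted, sent_to_annotators)."""
    rng = random.Random(seed)
    auto, to_annotate = [], []
    for det in detections:
        # Send low-confidence detections, plus a random audit sample, to annotators.
        if det["confidence"] < confidence_threshold or rng.random() < audit_fraction:
            to_annotate.append(det)
        else:
            auto.append(det)
    return auto, to_annotate
```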
In some embodiments, in addition to airborne agents, other agents such as vehicles, pedestrians, and bikers can gather data and provide the gathered data to the system (e.g., by transmitting the data to a nearby local data center). The data gathered by these agents can include location data (e.g., GPS signals of a pedestrian), motion data (e.g., movement of a vehicle), and image data (e.g., a photo captured by the pedestrian). The data gathered by these agents can be integrated into the data processing pipeline. Depending on the hardware and software used by these other agents, they may gather only lower-fidelity data. The lower-fidelity data (e.g., location and motion of a vehicle) can be refined in the airborne agent's observation frame, and the refined data (e.g., more precise location and motion data) can be stored by the system (e.g., at a local data center). In some embodiments, the data gathered by these agents can be used to filter the moving objects from the data gathered by the airborne agents, when necessary.
After the data processing pipeline 112 extracts data of interest, the data of interest can be displayed via the web portal 102. Subsequently, the users of the system 100 can view the data in different formats or visualization settings and submit follow-up requests, as discussed below. In some embodiments, when the data request is submitted by a robot user, the system automatically transmits the data of interest to the robot user.
At block 202, a system (e.g., the system 100 described above) receives a data request from a user. In some embodiments, the data request is submitted via a user interface, such as the exemplary user interface 300, which allows the user to specify a geographical region of interest and the type of data to be gathered.
The user interface 300 further includes a dialog box 304 for specifying the data to be requested. The dialog box 304 includes a plurality of user affordances (e.g., multi-level drop-down menus) for specifying attributes of the requested data according to a predefined data taxonomy. In the depicted example, the data taxonomy includes static data 306, which refers to data representing the physical environment and the static objects in the physical environment, and dynamic data, which refers to data representing the dynamic objects or events in the physical environment.
For static data 306, the dialog box 304 allows the user to specify a data format 310, a terrain type 312, a road type 314, and a surface type 316, according to an exemplary taxonomy. Exemplary data formats 310 include: 3D point cloud, 2D ortho-projected images, and 3D semantic map. Exemplary terrain types 312 include: local streets, highway, industrial park, farmland, and water areas (e.g., river, lake). Exemplary road types 314 include: straight road, curvy road, 4-way signaled intersection, 3-way signaled intersection, 4-way stop, 3-way stop, and roundabout. Exemplary surface types 316 include: paved, unpaved, mountain, swamp, and beach.
For dynamic data, the dialog box 304 allows the user to specify an object type 320, an object motion 322, and a multi-object interaction 324, according to an exemplary taxonomy. Exemplary object types 320 include: small vehicles (which may be further categorized into sedan, coupe, crossover, two-wheeler, bicycle, motorcycle, etc.), large vehicles (which may be further categorized into SUV, van, bus, truck, etc.), and humans (which may be further categorized into pedestrian, rider, etc.). Exemplary object motions 322 include: vehicle motion (which may be further categorized into straight motion, left turn, right turn, U-turn, roundabout turn, merging onto highway, illegal turn, etc.) and human motion (which may be further categorized into walking on the sidewalk, sitting on the sidewalk, crossing the street, walking on the street, jaywalking, etc.). Exemplary multi-object interactions 324 include: following, passing, towing, rear-end crashing, and head-on crashing.
In some embodiments, the user interface 300 allows the user to further narrow the data request by additional parameters such as time period, traffic volume, and weather condition. In some embodiments, the user interface 300 allows the user to specify logical relationships among the attributes in the data request. For example, the user can request data related to "all Toyota Camrys or Toyota Corollas making U-turns at a particular intersection in Sunnyvale, Calif. at 8-10 AM on weekdays". The user interface 300 can also allow the user to specify customized data of interest that is not included in the default taxonomy. For example, a user can specify a customized data request to identify manholes of a particular size in a particular geographical region, even if manholes are not defined in the default taxonomy of the system. Customized data requests can be submitted via the user interface 300 (e.g., via the text box 330), for example. In some embodiments, the system supports a plurality of taxonomies (e.g., traffic data, port data), and the user interface 300 allows the user to select a particular taxonomy before further specifying attributes of the data to be requested.
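For illustration only, a data request following a taxonomy like the one above might be serialized as shown in the following sketch; the keys, values, and filter syntax are assumptions and not the portal's actual schema.

```python
# Illustrative serialization of a data request; keys and values are assumptions.
import json

request = {
    "region": {                       # geographical region of interest
        "city": "Sunnyvale, CA",
        "bounding_box": [[37.37, -122.05], [37.39, -122.02]],  # [lat, lon] corners
    },
    "static": {
        "data_format": "2d_ortho_image",      # e.g., 3d_point_cloud, 3d_semantic_map
        "terrain_type": "local_streets",
        "road_type": "4_way_signaled_intersection",
        "surface_type": "paved",
    },
    "dynamic": {
        "object_type": {"category": "small_vehicle", "subcategory": "sedan"},
        "object_motion": "u_turn",
        "multi_object_interaction": None,
    },
    # Additional constraints and logical relationships among attributes.
    "filters": {
        "time_window": {"days": "weekdays", "hours": [8, 10]},
        "logic": "static AND dynamic",
    },
}

print(json.dumps(request, indent=2))
```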
Turning back to the exemplary process, in response to receiving the data request, the system generates one or more flight paths, for example according to the exemplary process 400 described below.
At block 402, the system obtains a ground map based on a data request. For example, based on the geographical region specified in the data request, the system can obtain a high-fidelity map from the distributed database (e.g., from a local data center). If the high-fidelity map is not available from the distributed database, the system can download a ground map (e.g., OpenStreetMap) corresponding to the specified geographical region. Thus, the ground map used in process 400 can be a high-fidelity map or a low-fidelity map.
At block 404, the system extracts the loops and/or road boundaries from the obtained ground map. In some embodiments, the system first identifies closed loops from the map. For an open road segment that is not part of a closed loop, the system identifies the boundaries of the open road segment (e.g., beginning, end, edges of the segment).
At block 406, the system computes a required buffer width to be maintained between the airborne agent and the edge of a road. In some embodiments, the airborne agent may not be allowed to fly directly over the road, and thus flies beside the road but maintains a buffer width such that the airborne agent is still close enough to the road to gather data of the road surface. The buffer width can be determined based on the required accuracy, road coverage, standard altitude, camera specifications (field of view), or a combination thereof. An exemplary buffer width is around 2 meters from an edge of the road.
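One simple way the buffer-width feasibility described in block 406 could be checked is sketched below for a flat-ground, nadir-pointing camera: the half-width of the ground footprint at the flight altitude must cover both the buffer and the road width. The geometry, function names, and example values are illustrative assumptions.

```python
# Minimal buffer-width feasibility check; geometry and values are illustrative.
import math


def ground_footprint_half_width(altitude_m: float, fov_deg: float) -> float:
    """Half-width of the area imaged directly below the agent
    (flat ground, nadir-pointing camera)."""
    return altitude_m * math.tan(math.radians(fov_deg) / 2)


def buffer_is_feasible(buffer_m: float, road_width_m: float,
                       altitude_m: float, fov_deg: float) -> bool:
    """True if flying buffer_m outside the road edge still keeps the full
    road surface inside the camera footprint."""
    return ground_footprint_half_width(altitude_m, fov_deg) >= buffer_m + road_width_m


# Example: 2 m buffer, 7 m road, 40 m altitude, 90-degree field of view.
print(buffer_is_feasible(2.0, 7.0, 40.0, 90.0))   # True: footprint half-width is 40 m
```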
At block 408, the system generates a flight path. Specifically, the system obtains sets of way points along the loops and/or road segments based on the computed buffer widths. Based on the sets of way points, the system generates a flight path, for example, by interpolating between the way points.
At block 410, the system checks the coverage of the generated flight path. The coverage can be calculated based on the camera specifications (e.g., field of view) and the flight path.
After the system verifies that the generated flight paths cover the entire geographical region, one or more airborne agents can traverse the geographical region based on the generated flight paths to gather data.
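To make blocks 408 and 410 more concrete, the following simplified planar sketch offsets a road polyline by the buffer width, interpolates additional way points along it, and estimates what fraction of the road surface falls within the camera swath. The geometry and parameter values are illustrative assumptions, not the system's planner.

```python
# Simplified planar sketch of way-point generation (block 408) and coverage
# checking (block 410); the geometry and values are illustrative assumptions.
import numpy as np


def offset_waypoints(road_pts: np.ndarray, buffer_m: float) -> np.ndarray:
    """Shift a road polyline (N x 2, meters) sideways by buffer_m, using the
    left-hand normal of each segment."""
    d = np.diff(road_pts, axis=0)
    normals = np.stack([-d[:, 1], d[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    normals = np.vstack([normals, normals[-1]])       # reuse last normal for final point
    return road_pts + buffer_m * normals


def densify(path: np.ndarray, step_m: float = 5.0) -> np.ndarray:
    """Interpolate extra way points so consecutive points are ~step_m apart."""
    seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])
    samples = np.arange(0.0, s[-1], step_m)
    x = np.interp(samples, s, path[:, 0])
    y = np.interp(samples, s, path[:, 1])
    return np.stack([x, y], axis=1)


def coverage_fraction(path: np.ndarray, targets: np.ndarray,
                      swath_half_width_m: float) -> float:
    """Fraction of target ground points within the camera swath of any path point."""
    dists = np.linalg.norm(targets[:, None, :] - path[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1) <= swath_half_width_m))


road = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0]])   # simple L-shaped road
flight_path = densify(offset_waypoints(road, buffer_m=2.0))
targets = densify(road, step_m=2.0)                          # ground points to be imaged
print(f"coverage: {coverage_fraction(flight_path, targets, swath_half_width_m=20.0):.0%}")
```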
Turning back to the exemplary process, the system deploys one or more airborne agents to traverse the geographical region based on the generated flight paths and gather the requested data.
The system is able to deploy airborne agents having different hardware and software configurations. For example, a first airborne agent may be equipped with a first set of equipment (e.g., including high-fidelity 3D sensing devices such as LiDAR), while a second airborne agent may be equipped with a second set of equipment (e.g., excluding LiDAR equipment). As discussed below, the gathering of static data and the gathering of dynamic data may require different hardware and software configurations. For example, a LiDAR system is needed to obtain a high-fidelity 3D point cloud of a physical environment, but is not necessary to capture dynamic objects. Thus, based on the data to be gathered, different airborne agents may be selected and deployed. Further, the system can deploy the airborne agent(s) based on the physical proximity of the agent to the geographical region to be surveyed. For example, the system can deploy an airborne agent located at a local data center close to the region to be surveyed.
At block 208, the system gathers data based on the data request. During flight, the airborne agent(s) can adapt the flight paths based on real-time contextual information. In some embodiments, if the airborne agent encounters events that prevent the agent from gathering data efficiently, the agent may update its flight path accordingly. For example, if there is heavy traffic blocking the surface of a road, or a red light preventing the agent from traveling forward, the agent may travel elsewhere and return at a later time. In some embodiments, the airborne agent may update its flight path based on environmental factors. For example, the airborne agent may update its flight path to avoid gathering data from a location while the location is experiencing excessive shadows. In some embodiments, the airborne agent can time the capturing of data. For example, in order to capture the behavior of cars running a yellow light, the airborne agent can plan its flight path and time the activation of its sensors accordingly. In some embodiments, the airborne agent is programmed to handle unexpected scenarios. For example, when the airborne agent's battery level is below a threshold, the airborne agent can automatically detect what is underneath it and travel to a location that minimizes damage during landing.
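A minimal sketch of such in-flight adaptation is shown below: segments that are temporarily blocked are deferred and revisited on a later pass, and the agent aborts to a safe landing when its battery drops below a threshold. The helper callables stand in for onboard perception and control and are assumptions for illustration.

```python
# Illustrative in-flight adaptation loop; the callables stand in for onboard
# perception/control and are assumptions, not the system's actual interfaces.
from collections import deque
from typing import Callable, Deque, List, Tuple

Segment = Tuple[float, float]   # simplified: a segment is just an (x, y) target


def survey(segments: List[Segment],
           is_blocked: Callable[[Segment], bool],
           battery_fraction: Callable[[], float],
           capture: Callable[[Segment], None],
           land_safely: Callable[[], None],
           min_battery: float = 0.2,
           max_passes: int = 3) -> None:
    queue: Deque[Segment] = deque(segments)
    deferred: Deque[Segment] = deque()
    passes = 0
    while queue:
        if battery_fraction() < min_battery:
            land_safely()               # e.g., pick the least risky spot underneath
            return
        seg = queue.popleft()
        if is_blocked(seg):             # heavy traffic, excessive shadow, etc.
            deferred.append(seg)        # revisit later instead of waiting
        else:
            capture(seg)
        if not queue and deferred and passes < max_passes:
            queue, deferred = deferred, deque()   # return to skipped segments
            passes += 1


# Tiny demonstration with stubbed sensors and actuators.
survey(
    segments=[(0.0, 0.0), (50.0, 0.0)],
    is_blocked=lambda seg: False,
    battery_fraction=lambda: 0.9,
    capture=lambda seg: print("captured", seg),
    land_safely=lambda: print("landing"),
)
```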
As discussed above, the data requested may be static data (e.g., a 3D point cloud representing a geographical region) or dynamic data (e.g., traffic patterns at an intersection). As such, block 208 can comprise gathering static data (block 210), dynamic data (block 212), or a combination thereof.
In some embodiments, processing of dynamic data requires a preexisting high-fidelity map. If the system does not have a high-fidelity map for the specified geographical region, the system needs to gather static information needed to construct the map, even if the user has only requested dynamic data for the region. On the other hand, if the system already has a high-fidelity map for a region (e.g., stored at a nearby local data center hosting a portion of the distributed database), the system only needs to gather dynamic data specified in the user request. In some embodiments, if the system already has a high-fidelity map for a geographical region (e.g., stored at a nearby local data center) but the user has requested an update, the system needs to gather static information and identify updates to the map, if any.
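The decision just described can be summarized in a small sketch; the flag and task names are illustrative assumptions.

```python
# Sketch of the gathering decision above; flag and task names are illustrative.
def plan_gathering(has_high_fidelity_map: bool,
                   wants_dynamic: bool,
                   wants_map_update: bool) -> set:
    tasks = set()
    if not has_high_fidelity_map or wants_map_update:
        tasks.add("static")     # build the map, or gather data to detect updates
    if wants_dynamic:
        tasks.add("dynamic")    # dynamic processing relies on the high-fidelity map
    return tasks


# Example: dynamic data requested for a region without an existing map.
print(plan_gathering(has_high_fidelity_map=False, wants_dynamic=True, wants_map_update=False))
# -> {'static', 'dynamic'}
```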
At block 214, after data is gathered, the system processes the gathered data to extract data of interest. The processing can be performed by the one or more airborne agents gathering the data, the one or more local data centers, the cloud, or a combination thereof. Further, the processing can be performed during the flight of the airborne agents, after the flight of the airborne agents, or a combination thereof. Block 208 and block 214 may be performed simultaneously at the same device(s), in some examples.
In some embodiments, block 214 may comprise generating a high-fidelity map, selecting data for further processing, updating the high-fidelity map based on the selected data, extracting dynamic objects/scenarios based on the selected data, or a combination thereof.
With reference to an exemplary process 500 for generating a high-fidelity map, the one or more airborne agents gather point cloud scans 502, GPS/IMU signals 504, and color images 506, which are processed as described below.
At block 508, the point cloud scans 502 and the GPS/IMU signals 504 are used to perform point cloud aggregation, and data representing dynamic objects (e.g., cars, pedestrians) are identified as transient changes and removed from the point cloud. For example, the system can aggregate results from different 3D scans (e.g., at different times, by different 3D sensors) to construct a single point cloud. Further, the system can identify correlations between the point cloud scans 502 and the GPS/IMU data 504 to associate points in the point cloud with geographical information (e.g., longitude, latitude, elevation).
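The following simplified sketch illustrates one way such aggregation with transient-point removal could work: each scan is transformed into a common frame using its GPS/IMU-derived pose, points are voxelized, and voxels observed in only a few scans (likely moving objects) are dropped. The voxel size, threshold, and pose format are illustrative assumptions.

```python
# Simplified aggregation with transient-point removal; voxel size, threshold,
# and pose format are illustrative assumptions.
import numpy as np


def aggregate_scans(scans, poses, voxel_m=0.2, min_scans=3):
    """scans: list of (N_i, 3) arrays in the sensor frame.
    poses: list of (R, t) with R a 3x3 rotation and t a 3-vector (world frame)."""
    voxel_hits = {}          # voxel index -> set of scan ids that observed it
    voxel_points = {}        # voxel index -> (running point sum, count)
    for scan_id, (pts, (R, t)) in enumerate(zip(scans, poses)):
        world = pts @ R.T + t                       # transform into the world frame
        keys = np.floor(world / voxel_m).astype(np.int64)
        for key, p in zip(map(tuple, keys), world):
            voxel_hits.setdefault(key, set()).add(scan_id)
            s, n = voxel_points.get(key, (np.zeros(3), 0))
            voxel_points[key] = (s + p, n + 1)
    # Keep only voxels seen in enough scans; average the points in each voxel.
    static = [s / n for key, (s, n) in voxel_points.items()
              if len(voxel_hits[key]) >= min_scans]
    return np.array(static) if static else np.empty((0, 3))
```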
At block 510, the point cloud scans 502 and the color images 506 are used to perform cross-modality calibration. For example, the system can identify correlations between the point cloud scans 502 and the color images 506 to associate points in the point cloud with color information. In some embodiments, the correlations can be established based on the time stamps associated with the data and/or the known positioning of the sensors.
At block 512, 3D reconstruction and colorization are performed to obtain a colorized, geo-referenced 3D point cloud 514 representative of the physical environment surveyed by the airborne agents.
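A minimal pinhole-camera sketch of how calibrated color images could be used to colorize points (blocks 510 and 512) is shown below: world points are transformed into the camera frame, projected with the intrinsic matrix, and assigned the color of the pixel they land on. The calibration parameters and function signature are illustrative assumptions.

```python
# Minimal pinhole-projection colorization sketch; calibration values are assumed.
import numpy as np


def colorize(points_world, image, K, R_wc, t_wc):
    """points_world: (N, 3); image: (H, W, 3) uint8; K: 3x3 intrinsics;
    R_wc, t_wc: world-to-camera rotation and translation."""
    cam = points_world @ R_wc.T + t_wc
    in_front = cam[:, 2] > 0                      # keep points in front of the camera
    uv = cam[in_front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # keep projections inside the image
    colors = np.zeros((points_world.shape[0], 3), dtype=np.uint8)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image[v[ok], u[ok]]
    return colors
```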
At block 516, orthographic projection is performed based on the colorized point cloud 514 to obtain an orthographic color map and a height map 518. At block 520, deep-learning based semantic segmentation is performed to identify predefined semantic types (or semantic mask 522) in the physical environment. The predefined semantic types can include objects of interest and shapes of interest, such as traffic signs, lane markings, and road boundaries. In some embodiments, the system can identify a portion of the map(s) to be associated with a predefined semantic type based on physical characteristics (e.g., color, shape, pattern, dimension, irregularity or uniqueness) of the pixels of the map(s). Further, the system can identify a portion of the map(s) to be associated with a predefined semantic type based on metadata (e.g., location) of the portion of the map(s). Further, the system can identify a portion of the map and/or assign a confidence value to the identification by analyzing the map measured at different times. In some embodiments, one or more neural network classifiers are used to identify predefined semantic types applicable to the maps.
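As an illustration of the orthographic projection of block 516, the sketch below bins colorized points into a top-down grid and keeps the highest point per cell, yielding a height map and an orthographic color map. The grid resolution is an assumed value.

```python
# Simplified orthographic projection sketch; grid resolution is an assumed value.
import numpy as np


def ortho_project(points, colors, cell_m=0.1):
    """points: (N, 3) x, y, z in meters; colors: (N, 3) uint8."""
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell_m).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    height = np.full((h, w), np.nan)
    color = np.zeros((h, w, 3), dtype=np.uint8)
    # Write points from lowest to highest so the top surface wins each cell.
    order = np.argsort(points[:, 2])
    rows, cols = ij[order, 1], ij[order, 0]
    height[rows, cols] = points[order, 2]
    color[rows, cols] = colors[order]
    return color, height
```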
In some embodiments, the identified semantic types are associated with the corresponding point(s) in the point cloud or the corresponding pixels in the maps as labels or annotations. In some embodiments, the identified semantic types form a tactical layer that can be referenced to other data in a 3D map dataset, such as the 3D point cloud.
At block 524, the semantic masks 522 are used to obtain roads or lanes. In some embodiments, the roads or lanes are represented as 3D vectors. As discussed above, annotations may be manually performed by human annotators. Accordingly, a road network 526 is obtained. The road network 526 can be stored in the distributed database. The maps generated by process 500 (e.g., 3D map, 2D color map, height map) include high-fidelity data (e.g., expected error is less than 10 cm, or even less than 4 cm). In some embodiments, a portion of the road network can be stored at a local data center in proximity to the corresponding geographical location.
In some embodiments, the system detects dynamic scenarios from the gathered data. In the context of traffic scenarios, dynamic scenarios can include accidents (e.g., collisions), traffic conditions (e.g., traffic jams), and abnormal activities (e.g., traffic violations). In some embodiments, the system detects the dynamic scenarios even if they are not explicitly requested by the user. For example, the system can detect a traffic jam in a geographical region in real time and issue alerts to vehicles in the region. As another example, the system can actively respond to a detected event by deploying emergency supplies using the airborne agents. As another example, the system can detect a car's behavior when getting around a street cleaning vehicle as a traffic violation and flag the abnormal behavior to the user. These special or abnormal behaviors can be used to train autonomous vehicles.
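One example of such a detector is sketched below: a traffic jam is flagged when most tracked vehicles on a road segment move well below free-flow speed. The thresholds and track format are illustrative assumptions, not the system's actual detection logic.

```python
# Illustrative traffic-jam detector; thresholds and track format are assumptions.
from statistics import median
from typing import Dict, List


def is_traffic_jam(speed_history_mps: Dict[str, List[float]],
                   free_flow_mps: float = 13.0,      # roughly 30 mph
                   jam_fraction: float = 0.3,
                   min_vehicles: int = 5) -> bool:
    """speed_history_mps maps a vehicle track id to its recent speed samples."""
    if len(speed_history_mps) < min_vehicles:
        return False
    medians = [median(v) for v in speed_history_mps.values() if v]
    if not medians:
        return False
    slow = sum(1 for m in medians if m < jam_fraction * free_flow_mps)
    return slow / len(medians) > 0.8        # most tracked vehicles are crawling
```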
Additional exemplary systems and methods for efficiently and accurately generating a high-fidelity three-dimensional representation of a physical environment that can include dynamic objects and scenarios are provided in U.S. Provisional Patent Application Ser. No. 62/727,986, entitled “INTELLIGENT CAPTURING OF A DYNAMIC PHYSICAL ENVIRONMENT,” filed Sep. 6, 2018, the content of which is hereby incorporated by reference in its entirety for all purposes.
At block 222, the system provides an output based on the extracted data. In some embodiments, the user can access a web portal (e.g., the web portal 102 described above) to view the extracted data in different formats or visualization settings.
In some embodiments, after the user submits a data request, the system can provide a status of the data request on the web portal. For example, the web portal can display progress information (e.g., percentage of necessary data gathered), time estimate, and/or information about the activities of the airborne agents (e.g., their locations). Further, the web portal allows the user to submit follow-up requests. For example, the web portal can allow the user to request additional data for a geographical region (e.g., to patch up a hole in the map), to request additional processing to the data (e.g., to obtain annotations of lane markings), or to request exporting of the data (e.g., to a training algorithm). In some embodiments, when the data request is submitted by a robot user, the system automatically transmits the data of interest to the robot user.
At block 602, the system receives a user request indicative of a geographical region and a data type. At block 604, the system, in response to receiving the user request, generates a flight path based on the region. At block 606, the system causes the airborne agent to traverse at least a portion of the region based on the generated flight path. At block 608, the system causes the airborne agent to gather data based on the data type in the user request. At block 610, the system processes the gathered data to obtain a set of data of interest. At block 612, the system provides an output based on the set of data of interest.
The operations described above can be implemented by one or more components of an exemplary computing device 700, such as a processor 710, an input device 720, an output device 730, a storage 740, software 750, and a communication device 760.
Input device 720 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, or voice-recognition device. Output device 730 can be any suitable device that provides output, such as a touch screen, haptics device, or speaker.
Storage 740 can be any suitable device that provides storage, such as an electrical, magnetic or optical memory including a RAM, cache, hard drive, or removable storage disk. Communication device 760 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computer can be connected in any suitable manner, such as via a physical bus or wirelessly.
Software 750, which can be stored in storage 740 and executed by processor 710, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices as described above).
Software 750 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 740, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 750 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
Device 700 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
Device 700 can implement any operating system suitable for operating on the network. Software 750 can be written in any suitable programming language, such as C, C++, Java or Python. In various embodiments, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
Although the disclosure and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application 62/751,227, filed on Oct. 26, 2018, the entire content of which is incorporated herein by reference for all purposes. This application relates to U.S. Provisional Patent Application Ser. No. 62/727,986, entitled “INTELLIGENT CAPTURING OF A DYNAMIC PHYSICAL ENVIRONMENT,” filed Sep. 6, 2018, the content of which is hereby incorporated by reference in its entirety for all purposes.