Embodiments described herein generally relate to systems and methods for the dynamic generation of restricted flight zones for drones.
Drones have proliferated in recent years due to advances in manufacturing and technology, and with this proliferation comes the need for better control to ensure safe operation. Geo-fencing is a technique that may be used to restrict the operational space of autonomous vehicles such as drones. In a typical implementation, the user of a drone defines a path as a series of waypoints that may be connected by straight lines. The drone then uses its internal global positioning system (GPS) sensors to determine its position and ensure that it will not fly across “virtual fences” defined by boundaries associated with the path.
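For illustration only, the following is a minimal sketch of such a static geo-fence check in two dimensions; it is not taken from any particular drone stack, and the function and fence names are hypothetical. Each GPS fix, projected into a local planar frame, is tested against a fence polygon associated with the waypoint path.

```python
# Minimal sketch (hypothetical names): test a drone position against a static
# 2D geo-fence polygon using a ray-casting point-in-polygon check.
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in meters in a local frame

def inside_fence(position: Point, fence: List[Point]) -> bool:
    """Return True if the position lies inside the fence polygon."""
    x, y = position
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: a square fence around a waypoint path
fence = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
assert inside_fence((50.0, 50.0), fence)
assert not inside_fence((150.0, 50.0), fence)
```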
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of some example embodiments. It will be evident, however, to one skilled in the art that the present disclosure may be practiced without these specific details.
The following acronyms and definitions are provided for the discussion below:
The use of virtual fences with drone technology is a convenient way to control a flight path of the drone. In some situations, it may be advantageous if virtual fences are automatically generated by a drone or by a group of drones based on their mission and on information gathered from onboard sensors, such as radar, cameras, etc., and external sensors such as beacons. A few examples of situations where virtual fences may be useful are: (1) multiple drones flying in close proximity to each other that require a protected space around them; (2) government regulations in countries such as the United States requiring that drones remain inside a field of view of the pilot at all times and avoid flying over groups of people; and (3) camera drones that are capturing a same scene as other camera drones (in which case such camera drones should try to avoid entering inside the camera FOV of another drone). Described below are examples of systems, devices, algorithms, and methods for dynamically generating restricted flight zones for drones, as well as their application to the above problems.
The FIGS. below illustrate certain aspects in 2D form—however, one of skill in the art would understand how the illustrations may be properly extended into 3D space.
In one implementation, this line representing the viewing direction 145 may be approximated by presuming that the operator 110A is looking in the direction of the drone 130, and thus determined based on the operator's 110A location (or possibly the operator's 110A head location) and a current position of the drone 130. In another implementation, image recognition utilizing information from the drone's camera or other sensor may be utilized to determine an actual direction that the drone operator 110A is looking to determine the line 145 representing the viewing direction.
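For illustration, a minimal sketch (with assumed names, not the disclosure's implementation) of the first approximation, in which the viewing direction 145 is taken as the line from the operator's 110A location (or head location) toward the drone's 130 current position:

```python
# Minimal sketch (assumed names): approximate the viewing direction 145 as the
# unit vector from the operator 110A toward the drone 130.
import numpy as np

def viewing_direction(operator_pos: np.ndarray, drone_pos: np.ndarray) -> np.ndarray:
    """Unit vector d_hat pointing from the operator toward the drone."""
    d = drone_pos - operator_pos
    return d / np.linalg.norm(d)

d_hat = viewing_direction(np.array([0.0, 0.0, 1.7]),    # operator head, ~1.7 m above ground
                          np.array([20.0, 15.0, 10.0])) # current drone position
```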
For the sake of convenience, as discussed herein, boundaries may be indicated as attractive (e.g., inward forces 150, as shown in
Static geo-fencing and path planning are methods that may be used to solve the problems described above; however, both of these solutions require prior knowledge of restricted flight zones, which are either static or change slowly relative to an intended mission. The mission planner, e.g., the pilot or some type of centralized planning system, defines the boundaries and goals of the mission so that a lower-level planner may generate conflict-free trajectories for all agents involved. Existing solutions for static geo-fencing require the user to manually define the boundaries for the drone. However, this approach is not possible when the boundaries are dynamically defined based on the states of various agents (drones, cameras, subjects, obstacles, zones), as in the case of using multiple drones 130A, 130B with cameras or flying a drone 130 (reference numbers may have character suffixes omitted herein when referring to multiple elements collectively, or to one element representatively) over changing groups of people.
Online planning algorithms may be used to generate collision-free trajectories around static and even some dynamic obstacles. However, when multiple agents are involved, as in the multi-drone with camera problem, planning for multiple agents is computationally intensive and may be difficult to do online.
Described herein are possible solutions which may provide drones 130 with a fast reaction capability for avoiding close moving obstacles. Some solutions may enable coexistence with multiple agents even if not all of them apply the same or similar collision avoidance algorithms. Moreover, various algorithms may be modified to enable an automatic creation of restricted flight zones which may depend on the dynamic characteristics of one or multiple interacting agents.
The solutions discussed herein may be embedded in a flight controller of the drones 130, enabling low-level compliance with safety regulations and new multi-drone applications via the creation of dynamically generated restricted flight zones. This approach may be used to supplement and simplify the tasks of higher-level planners, especially in multi-drone applications, by automatically managing collision-prevention with obstacles as well as restrictions imposed by the dynamic boundaries defined by the application.
As shown in
The default control component 310 may have a drone state 330 as an input that includes the drone's 130 location, orientation, velocity, etc. The reactive control component 320 may also have the drone state 330 as an input, as well as an obstacle state 340 that may include forbidden zones (which may include a position and a velocity of a nearest point for the obstacle/forbidden zone). Although the obstacle state 340 is described as a region to be avoided, a dynamic environment state with regions to be favored as well as, or in place of, regions to be avoided may be utilized in some implementations. The drone state 330 and obstacle state 340 may be obtained by localization (e.g., GPS and other forms of determining position and orientation), and may include vision algorithms with feature recognition algorithms.
The outputs of the default control component 310 and the reactive control component 320 may be combined 350 with the combined output being fed to the actuators 360.
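For illustration, a minimal sketch (with assumed data structures and names, not the disclosure's actual flight-stack interface) of how the drone state 330, the obstacle state 340, and the combination 350 of the two control outputs might be represented:

```python
# Minimal sketch (assumed names): state containers for 330/340 and the
# additive combiner 350 whose output is fed to the actuators 360.
from dataclasses import dataclass
import numpy as np

@dataclass
class DroneState:              # drone state 330
    position: np.ndarray       # meters, in a common world frame
    velocity: np.ndarray       # m/s

@dataclass
class ObstacleState:           # obstacle / forbidden-zone state 340
    nearest_point: np.ndarray  # nearest point of the forbidden zone
    velocity: np.ndarray       # velocity of that nearest point

def combine(default_cmd: np.ndarray, reactive_cmd: np.ndarray) -> np.ndarray:
    """Combiner 350: the reactive term is simply added to the default command."""
    return default_cmd + reactive_cmd

drone = DroneState(position=np.array([0.0, 0.0, 10.0]), velocity=np.zeros(3))
zone = ObstacleState(nearest_point=np.array([5.0, 0.0, 10.0]), velocity=np.zeros(3))

default_cmd = np.array([0.0, 1.0, 0.0])         # e.g., track the next waypoint
reactive_cmd = np.array([-0.3, 0.0, 0.0])       # e.g., reaction away from the zone
total_cmd = combine(default_cmd, reactive_cmd)  # fed to the actuators 360
```

Keeping the reactive term as a simple additive correction is consistent with embedding it in the flight controller and supplementing, rather than replacing, the higher-level planner.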
In operations S415 and S420, the relative velocity and position between the agent 130 and obstacle are calculated by taking the difference between the previously determined position and velocity components.
The reaction may be separated into two parts: a velocity reaction $U_V$ (a scalar) and a position reaction $U_R$ (a vector). The position reaction may take into account the distance and direction to the nearest point in the obstacle, which can be virtually modified in order to add a safe region between the agent and the obstacle. It may be determined as shown in operation S430.
The position reaction $U_R$ (as a vector) points away from the obstacle, thereby moving the agent 130 away from it. The magnitude of this reaction may increase inversely with the distance to the obstacle, i.e., there is a greater position reaction $U_R$ when the agent is closer to the obstacle. The order of calculating the velocity reaction S425 and the position reaction S430 is not important, and these may be sequenced differently in different implementations.
In operation S425, the velocity reaction may be calculated by:
$$U_V = e^{-\lambda\,\vec{v}\cdot\hat{r}}$$

where $\lambda$ is a tuning parameter, $\vec{v}$ is the relative velocity between the agent 130 and the obstacle, and $\hat{r}$ is the unit vector along the relative position to the nearest point of the obstacle.
The velocity reaction $U_V$ may serve to diminish the magnitude of the position reaction $U_R$ such that obstacles moving towards the agent 130 generate a bigger reaction than an obstacle moving away from the agent 130. In operation S435, the full reaction $U$ is calculated by combining both the position and velocity reactions, where $U = -U_R\,U_V$. In operation S440, this full reaction may be added to the standard control input (at 350), which, in operation S445, is ultimately provided to the actuators 360 of the agent 130. Thus, in the end, the vector of reactions $U = (u_x, u_y, u_z)$ may be converted into commands for the roll, pitch, yaw, and thrust. This reaction control thereby allows the agent 130 to move in scenarios with various obstacles (and possibly other agents), when all reactions are added together. At operation S450, the process 400 returns to S405 to wait for the next control cycle.
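For illustration, the following is a minimal sketch of one pass through process 400 under assumed sign conventions. Because the exact form of the position reaction $U_R$ is not reproduced above, a simple inverse-distance term consistent with its description is used, and the directions of $\hat{r}$ and $\vec{v}$ are chosen so that the net reaction pushes the agent away from an approaching obstacle; none of these choices should be read as the disclosure's exact formulation.

```python
# Minimal sketch of one control cycle of process 400 (S405-S450).
# Assumptions: inverse-distance form for U_R; r measured from the obstacle's
# nearest point toward the agent; v is the agent velocity minus obstacle velocity.
import numpy as np

def reaction(drone_pos, drone_vel, obs_point, obs_vel,
             lam=0.5, r_safe=1.0, eps=1e-6):
    r_rel = drone_pos - obs_point                 # S415: relative position (obstacle -> agent)
    v_rel = drone_vel - obs_vel                   # S420: relative velocity
    dist = max(np.linalg.norm(r_rel), eps)
    r_hat = r_rel / dist

    u_v = np.exp(-lam * float(v_rel @ r_hat))     # S425: velocity reaction U_V (scalar)
    u_r = r_hat / max(dist - r_safe, eps)         # S430: assumed position reaction U_R (vector)
    return u_r * u_v                              # S435: full reaction U (sign folded into r_hat)

def control_cycle(drone_pos, drone_vel, obs_point, obs_vel, default_cmd):
    u = reaction(drone_pos, drone_vel, obs_point, obs_vel)
    return default_cmd + u                        # S440: add to the standard control input

# Example: an obstacle 3 m away and closing produces a larger reaction than one receding.
closing  = reaction(np.zeros(3), np.zeros(3), np.array([3.0, 0, 0]), np.array([-1.0, 0, 0]))
receding = reaction(np.zeros(3), np.zeros(3), np.array([3.0, 0, 0]), np.array([ 1.0, 0, 0]))
assert np.linalg.norm(closing) > np.linalg.norm(receding)
```

In the example at the end of the sketch, an obstacle closing on the agent yields a larger reaction magnitude than the same obstacle moving away, which matches the role described for $U_V$ above.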
In the US, Federal Aviation Administration (FAA) regulations require that whenever a person wants to fly a drone 130, the drone 130 always needs to be in the field-of-view (FOV) of the pilot, except when waivers are in place. The reactive collision avoidance technique described herein may be applied to ensure compliance with this requirement.
In this situation (and referring to
This situation differs from general collision avoidance in that the nearest point from the drone 130 to the forbidden zone (here, the boundary of the field of view) is not known beforehand. For such a purpose, the following calculation may be used. In this use case, the state of the agent $r_A$ is needed, as well as the position of the pilot $r_P$ with respect to the same frame of reference (here, the position of the pilot may be defined as the origin of that frame of reference).
The pose of the pilot's head may also be provided to estimate the field of view (although in one implementation, it may be presumed that the pilot is facing the direction of the drone 130). These quantities may be easy to obtain: the pilot's remote control 120 may be equipped with sensors in order to measure the orientation, distance to the drone 130, and direction towards the drone 130. If the direction of view is represented as a unitary vector $\hat{d}$, and $\theta$ is the estimated angle of view of the pilot 110A (which may be an estimation known beforehand), then in order to obtain the coordinates of the nearest point on the boundary of the visual cone to the current position of the agent (i.e., the drone 130), the following may be calculated.
First, a unitary vector between the pilot and the agent may be computed:

$$\hat{r}_A = \frac{r_A - r_P}{\lVert r_A - r_P\rVert}$$
The vectors that lie in the boundary of the cone (more specifically, the nearest line in the cone to the agent 130 starting at the pilot 110A) are a linear combination of $\hat{r}_A$ and $\hat{d}$ (as long as the agent 130 does not lie on the line formed by $\hat{d}$, which is not a problem since those points are not in danger of falling outside the cone); thus there is at least one point on that line that is a combination of the form:
$$v_c = \alpha\,\hat{d} + \hat{r}_A$$
Combining this equation with the relation that defines the boundary of the visual cone (namely, that the angle between $v_c$ and $\hat{d}$ equals $\theta$), $\alpha$ is the positive solution of the resulting quadratic polynomial in $\alpha$, whose coefficients depend on $\theta$ and on

$$W = \hat{r}_A\cdot\hat{d}.$$

The nearest point on the boundary of the visual cone to the agent may then be expressed as

$$v_\lambda = \lambda\, v_c + r_P,$$

where $\lambda$ is chosen so that $v_\lambda$ is the point on the line from the pilot 110A along $v_c$ that is nearest to the agent 130.
Then $v_\lambda$ is used as the “obstacle” coordinates and a repulsive force is calculated.
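For illustration, a minimal sketch of this computation follows. The closed-form value of $\alpha$ below is derived here from the cone condition (the angle between $v_c$ and $\hat{d}$ equals $\theta$) rather than taken from the disclosure's polynomial, so it should be read as one consistent reconstruction under that assumption.

```python
# Minimal sketch (assumed names): nearest point on the boundary of the pilot's
# visual cone to the drone, used as the "obstacle" coordinate v_lambda.
import numpy as np

def nearest_point_on_fov_cone(r_a, r_p, d_hat, theta):
    """Return v_lambda, the point on the view-cone boundary nearest the agent."""
    r_hat_a = (r_a - r_p) / np.linalg.norm(r_a - r_p)   # unit vector pilot -> agent
    w = float(r_hat_a @ d_hat)                          # W = r_hat_A . d_hat
    s = np.sqrt(max(1.0 - w * w, 0.0))                  # sine of the angle between them
    alpha = s / np.tan(theta) - w                       # derived so v_c lies on the cone boundary
    v_c = alpha * d_hat + r_hat_a                       # direction of the nearest boundary line
    lam = max(float((r_a - r_p) @ v_c) / float(v_c @ v_c), 0.0)
    return lam * v_c + r_p                              # v_lambda = lambda * v_c + r_P

# Example: pilot at the origin looking along +x with a 30-degree half-angle;
# the drone sits at (20, 15, 0), just outside the cone.
v_lam = nearest_point_on_fov_cone(np.array([20.0, 15.0, 0.0]),
                                  np.zeros(3),
                                  np.array([1.0, 0.0, 0.0]),
                                  np.radians(30.0))
```

Feeding $v_\lambda$ into the same repulsive-reaction computation described above pushes the drone back toward the interior of the pilot's field of view.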
Another FAA regulation requires that drones 130 must not fly over people. The reactive control component 320 may be used to address this restriction by dynamically placing virtual forces over individuals and over groups of people. For such a purpose, the positions of the people need to be detected using a down-facing camera and appropriate people detection and tracking algorithms.
The equations that allow determining the dimensions of the image 810, considering the available information, may be:
Likewise, the width of the image may be calculated as:
Once the total length L and width W of the image are calculated in meters and in pixels, the positions of the people 710 detected can be calculated using the ratio between pixels and meters. Then a reaction force U may be placed on a boundary of the cylinder 720 (extruded from a circle in a direction perpendicular to a ground plane) with a radial axis on the person's 710 coordinates—see
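For illustration, a minimal sketch of this mapping follows. It assumes a pinhole camera model with a down-facing camera aligned with the drone's body axes and uses hypothetical names; the disclosure's own equations for L and W are not reproduced above, so the standard footprint formulas are used here as an assumption.

```python
# Minimal sketch (assumed pinhole model and names): map a person detection in a
# down-facing image to ground coordinates, then find the nearest point on the
# cylinder 720 placed over the person, to be used as the obstacle coordinate.
import numpy as np

def ground_footprint(altitude_m, fov_v_rad, fov_h_rad):
    """Length L and width W of the imaged ground patch, in meters."""
    length = 2.0 * altitude_m * np.tan(fov_v_rad / 2.0)
    width = 2.0 * altitude_m * np.tan(fov_h_rad / 2.0)
    return length, width

def person_world_position(px, py, image_w_px, image_h_px,
                          drone_xy, altitude_m, fov_v_rad, fov_h_rad):
    """Convert a detection at pixel (px, py) to ground coordinates in meters."""
    length, width = ground_footprint(altitude_m, fov_v_rad, fov_h_rad)
    dx = (px / image_w_px - 0.5) * width     # meters along the image x-axis (assumed alignment)
    dy = (0.5 - py / image_h_px) * length    # meters along the image y-axis
    return np.array([drone_xy[0] + dx, drone_xy[1] + dy])

def nearest_point_on_cylinder(drone_pos, person_xy, radius):
    """Nearest point, at the drone's altitude, on the lateral surface of the
    cylinder of the given radius extruded vertically over the person."""
    offset = drone_pos[:2] - person_xy
    dist = np.linalg.norm(offset)
    if dist < 1e-6:                              # directly overhead: pick any direction
        offset, dist = np.array([1.0, 0.0]), 1.0
    boundary_xy = person_xy + radius * offset / dist
    return np.array([boundary_xy[0], boundary_xy[1], drone_pos[2]])
```

The returned boundary point can then be fed to the same reaction computation described above so that the drone is pushed away from the airspace over the detected person 710.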
In the situation illustrated in
The algorithm may be to place forces on the space as illustrated in
Embodiments may be implemented in one or a combination of hardware, firmware, and software. Embodiments may also be implemented as instructions stored on a machine-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A machine-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.
A processor subsystem may be used to execute the instructions on the machine-readable medium. The processor subsystem may include one or more processors, each with one or more cores. Additionally, the processor subsystem may be disposed on one or more physical devices. The processor subsystem may include one or more specialized processors, such as a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or a fixed function processor.
Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules may be hardware, software, or firmware communicatively coupled to one or more processors in order to carry out the operations described herein. Modules may be hardware modules, and as such modules may be considered tangible entities capable of performing specified operations and may be configured or arranged in a certain manner. In an example, logic or circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations. Accordingly, the term hardware module is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. Modules may also be software or firmware modules, which operate to perform the methodologies described herein.
Example computer system 1100 includes at least one processor 1102 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.), a main memory 1104 and a static memory 1106, which communicate with each other via a link 1108 (e.g., bus). The computer system 1100 may further include a video display unit 1110, an alphanumeric input device 1112 (e.g., a keyboard), and a user interface (UI) navigation device 1114 (e.g., a mouse). In one embodiment, the video display unit 1110, input device 1112 and UI navigation device 1114 are incorporated into a touch screen display. The computer system 1100 may additionally include a storage device 1116 (e.g., a drive unit), a signal generation device 1118 (e.g., a speaker), a network interface device 1120, and one or more sensors (not shown), such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
The storage device 1116 includes a machine-readable medium 1122 on which is stored one or more sets of data structures and instructions 1124 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1124 may also reside, completely or at least partially, within the main memory 1104, static memory 1106, and/or within the processor 1102 during execution thereof by the computer system 1100, with the main memory 1104, static memory 1106, and the processor 1102 also constituting machine-readable media.
While the machine-readable medium 1122 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 1124. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 1124 may further be transmitted or received over a communications network 1126 using a transmission medium via the network interface device 1120 utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Example 1 is drone controller logic at least partially comprising hardware logic to: determine or receive a drone state comprising a position and a velocity of the drone; determine or receive a relative obstacle state, including a relative position and a relative velocity of the drone with respect to an obstacle; determine a reaction to avoid the obstacle based on the relative obstacle state; and apply a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at a first time when the drone is in flight to avoid the obstacle.
In Example 2, the subject matter of Example 1 includes, wherein the controller logic is further to define an obstacle boundary as a geofence associated with the obstacle that utilizes information collected at the first time.
In Example 3, the subject matter of Example 2 includes, wherein the obstacle is at least one of: a space outside of a field-of-view (FOV) of an operator of the drone; a space of and over an obstacle object; a space within an FOV of an other drone; a space proximate to an object; and a first space proximate to an object and a second space away from the object that leaves a gap between the first and second space.
In Example 4, the subject matter of Example 3 includes, wherein the controller logic is further to determine the space outside of the drone operator FOV by being operable to: determine a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a viewing line.
In Example 5, the subject matter of Example 4 includes, wherein the controller logic is further to determine the viewing line as a line originating from the drone operator location and ending with a location of the drone.
In Example 6, the subject matter of Examples 4-5 includes, wherein the controller logic is further to determine the viewing line as a line along a viewing direction of the drone operator.
In Example 7, the subject matter of Examples 3-6 includes, wherein the controller logic is further to determine the space outside of the drone operator FOV by being operable to: determine a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a line originating from the drone operator location and ending with a location of the drone.
In Example 8, the subject matter of Examples 3-7 includes, wherein the controller logic is further to determine the space of and over the obstacle object by being operable to determine a cylindrical space centered on a location of the obstacle person, wherein the cylindrical space has a predefined radius and extends perpendicular to a ground plane.
In Example 9, the subject matter of Examples 3-8 includes, wherein the controller logic is further to determine of the space within the other drone FOV by being operable to determine the space using state information of the other drone.
In Example 10, the subject matter of Example 9 includes, wherein the other drone state information is information received from the other drone.
In Example 11, the subject matter of Examples 9-10 includes, wherein the other drone state information is information received from a source other than the drone and the other drone.
In Example 12, the subject matter of Examples 9-11 includes, wherein the controller logic is further to determine the other drone state information from imaging information taken by a drone camera.
In Example 13, the subject matter of Examples 1-12 includes, wherein the reaction comprises a position reaction component and a velocity reaction component.
In Example 14, the subject matter of Example 13 includes, wherein: the position reaction is determined as a function of the distance and direction to a nearest point of the obstacle; and the velocity reaction is determined by:

$$U_V = e^{-\lambda\,\vec{v}\cdot\hat{r}}$$

where $\lambda$ is a tuning parameter, $\vec{v}$ is the relative velocity, and $\hat{r}$ is the unit vector along the relative position to the nearest point of the obstacle.
In Example 15, the subject matter of Examples 1-14 includes, wherein the controller logic is further to determine the relative obstacle state by being operable to: estimate an obstacle state comprising a relative position and a relative velocity of the obstacle; and determine the relative obstacle state by determining a difference between the drone state and the obstacle state.
In Example 16, the subject matter of Examples 1-15 includes, a drone camera mounted on a drone; and memory coupled to the drone controller logic and the drone camera.
Example 17 is a drone controller apparatus of a drone comprising a drone camera mounted on the drone, the apparatus comprising: memory; and processing circuitry coupled to the memory, the processing circuitry to: at a first time when the drone is in flight, using the processing circuitry of the drone to: determine a drone state comprising a position and a velocity of the drone; determine a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; determine a reaction to avoid the obstacle based on the relative obstacle state; and apply a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at the first time to avoid the obstacle.
Example 18 is a method for operating drone controller logic that at least partially comprises hardware logic, the method comprising: determining a drone state comprising a position and a velocity of the drone; determining a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; determining a reaction to avoid the obstacle based on the relative obstacle state; and applying a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at a first time when the drone is in flight to avoid the obstacle.
In Example 19, the subject matter of Example 18 includes, defining an obstacle boundary as a geofence associated with the obstacle that utilizes information collected at the first time.
In Example 20, the subject matter of Example 19 includes, wherein the obstacle is at least one of: a space outside of a field-of-view (FOV) of an operator of the drone; a space of and over an obstacle object; a space within an FOV of an other drone; a space proximate to an object; and a first space proximate to an object and a second space away from the object that leaves a gap between the first and second space.
In Example 21, the subject matter of Example 20 includes, determining the space outside of the drone operator FOV by: determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a viewing line.
In Example 22, the subject matter of Example 21 includes, determining the viewing line as a line originating from the drone operator location and ending with a location of the drone.
In Example 23, the subject matter of Examples 21-22 includes, determining the viewing line as a line along a viewing direction of the drone operator.
In Example 24, the subject matter of Examples 20-23 includes, determining the space outside of the drone operator FOV by: determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a line originating from the drone operator location and ending with a location of the drone.
In Example 25, the subject matter of Examples 20-24 includes, wherein the determining the space of and over the obstacle object is performed by determining a cylindrical space centered on a location of the obstacle person, wherein the cylindrical space has a predefined radius and extends perpendicular to a ground plane.
In Example 26, the subject matter of Examples 20-25 includes, wherein the determining of the space within the other drone FOV is performed by determining the space using state information of the other drone.
In Example 27, the subject matter of Example 26 includes, wherein the other drone state information is received from the other drone.
In Example 28, the subject matter of Examples 26-27 includes, wherein the other drone state information is received from a source other than the drone and the other drone.
In Example 29, the subject matter of Examples 26-28 includes, wherein the other drone state information is determined from imaging information taken by a drone camera.
In Example 30, the subject matter of Examples 18-29 includes, wherein the reaction comprises a position reaction component and a velocity reaction component.
In Example 31, the subject matter of Example 30 includes, wherein: the position reaction is determined as a function of the distance and direction to a nearest point of the obstacle; and the velocity reaction is determined by:

$$U_V = e^{-\lambda\,\vec{v}\cdot\hat{r}}$$

where $\lambda$ is a tuning parameter, $\vec{v}$ is the relative velocity, and $\hat{r}$ is the unit vector along the relative position to the nearest point of the obstacle.
In Example 32, the subject matter of Examples 18-31 includes, wherein determining the relative obstacle state comprises: estimating an obstacle state comprising a relative position and a relative velocity of the obstacle; and determining the relative obstacle state by determining a difference between the drone state and the obstacle state.
Example 33 is a method for operating a drone controller of a drone comprising a drone camera mounted on the drone, the drone controller comprising: at a first time when the drone is in flight, using a processor of the drone to perform operations of determining a drone state comprising a position and a velocity of the drone; determining a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; determining a reaction to avoid the obstacle based on the relative obstacle state; and applying a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at the first time to avoid the obstacle.
Example 34 is a computer-readable storage medium that stores instructions for execution by drone controller logic that at least partially comprises hardware logic, the instructions to configure the drone controller logic to cause the drone to: determine a drone state comprising a position and a velocity of the drone; determine a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; determine a reaction to avoid the obstacle based on the relative obstacle state; and apply a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at a first time when the drone is in flight to avoid the obstacle.
In Example 35, the subject matter of Example 34 includes, instructions for defining an obstacle boundary as a geofence associated with the obstacle that utilizes information collected at the first time.
In Example 36, the subject matter of Example 35 includes, wherein the obstacle is at least one of a space outside of a field-of-view (FOV) of an operator of the drone; a space of and over an obstacle object; a space within an FOV of an other drone; a space proximate to an object; and a first space proximate to an object and a second space away from the object that leaves a gap between the first and second space.
In Example 37, the subject matter of Example 36 includes, instructions for determining the space outside of the drone operator FOV by: determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a viewing line.
In Example 38, the subject matter of Example 37 includes, instructions for determining the viewing line as a line originating from the drone operator location and ending with a location of the drone.
In Example 39, the subject matter of Examples 37-38 includes, instructions for determining the viewing line as a line along a viewing direction of the drone operator.
In Example 40, the subject matter of Examples 36-39 includes, instructions for determining the space outside of the drone operator FOV by: determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a line originating from the drone operator location and ending with a location of the drone.
In Example 41, the subject matter of Examples 36-40 includes, wherein the determining the space of and over the obstacle object is performed by instructions for determining a cylindrical space centered on a location of the obstacle person, wherein the cylindrical space has a predefined radius and extends perpendicular to a ground plane.
In Example 42, the subject matter of Examples 36-41 includes, wherein the determining of the space within the other drone FOV is performed by instructions for determining the space using state information of the other drone.
In Example 43, the subject matter of Example 42 includes, wherein the other drone state information is received from the other drone.
In Example 44, the subject matter of Examples 42-43 includes, wherein the other drone state information is received from a source other than the drone and the other drone.
In Example 45, the subject matter of Examples 42-44 includes, wherein the other drone state information is determined from imaging information taken by a drone camera.
In Example 46, the subject matter of Examples 34-45 includes, wherein the reaction comprises a position reaction component and a velocity reaction component.
In Example 47, the subject matter of Example 46 includes, wherein: the position reaction is determined as a function of the distance and direction to a nearest point of the obstacle; and the velocity reaction is determined by:

$$U_V = e^{-\lambda\,\vec{v}\cdot\hat{r}}$$

where $\lambda$ is a tuning parameter, $\vec{v}$ is the relative velocity, and $\hat{r}$ is the unit vector along the relative position to the nearest point of the obstacle.
In Example 48, the subject matter of Examples 34-47 includes, wherein the medium further comprises instructions for the determining of the relative obstacle state by: estimating an obstacle state comprising a relative position and a relative velocity of the obstacle; and determining the relative obstacle state by determining a difference between the drone state and the obstacle state.
Example 49 is a computer-readable storage medium that stores instructions for execution by processing circuitry of a drone controller apparatus of a drone comprising a drone camera mounted on the drone, the instructions to configure the processing circuitry to cause the drone to, at a first time when the drone is in flight: determine a drone state comprising a position and a velocity of the drone; determine a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; determine a reaction to avoid the obstacle based on the relative obstacle state; and apply a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at the first time to avoid the obstacle.
Example 50 is drone controller logic, at least partially comprising hardware logic, and comprising: means for determining a drone state comprising a position and a velocity of the drone; means for determining a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; means for determining a reaction to avoid the obstacle based on the relative obstacle state; and means for applying a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at a first time when the drone is in flight to avoid the obstacle.
In Example 51, the subject matter of Example 50 includes, means for defining an obstacle boundary as a geofence associated with the obstacle that utilizes information collected at the first time.
In Example 52, the subject matter of Example 51 includes, wherein the obstacle is at least one of: a space outside of a field-of-view (FOV) of an operator of the drone; a space of and over an obstacle object; a space within an FOV of an other drone; a space proximate to an object; and a first space proximate to an object and a second space away from the object that leaves a gap between the first and second space.
In Example 53, the subject matter of Example 52 includes, means for determining the space outside of the drone operator FOV with: means for determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a viewing line.
In Example 54, the subject matter of Example 53 includes, means for determining the viewing line as a line originating from the drone operator location and ending with a location of the drone.
In Example 55, the subject matter of Examples 53-54 includes, means for determining the viewing line as a line along a viewing direction of the drone operator.
In Example 56, the subject matter of Examples 52-55 includes, means for determining the space outside of the drone operator FOV with: means for determining a conical space representing the drone operator FOV based on location information of the drone operator and a predefined subtended angle of the conical space from a line originating from the drone operator location and ending with a location of the drone.
In Example 57, the subject matter of Examples 52-56 includes, wherein the means for determining the space of and over the obstacle object is performed by means for determining a cylindrical space centered on a location of the obstacle person, wherein the cylindrical space has a predefined radius and extends perpendicular to a ground plane.
In Example 58, the subject matter of Examples 52-57 includes, wherein the means for determining of the space within the other drone FOV is performed by means for determining the space using state information of the other drone.
In Example 59, the subject matter of Example 58 includes, wherein the other drone state information is received from the other drone.
In Example 60, the subject matter of Examples 58-59 includes, wherein the other drone state information is received from a source other than the drone and the other drone.
In Example 61, the subject matter of Examples 58-60 includes, wherein the other drone state information is determined from imaging information taken by a drone camera.
In Example 62, the subject matter of Examples 50-61 includes, wherein the reaction comprises a position reaction component and a velocity reaction component.
In Example 63, the subject matter of Example 62 includes, wherein: the position reaction is determined as a function of the distance and direction to a nearest point of the obstacle; and the velocity reaction is determined by:

$$U_V = e^{-\lambda\,\vec{v}\cdot\hat{r}}$$

where $\lambda$ is a tuning parameter, $\vec{v}$ is the relative velocity, and $\hat{r}$ is the unit vector along the relative position to the nearest point of the obstacle.
In Example 64, the subject matter of Examples 50-63 includes, wherein the means for determining the relative obstacle state comprises: means for estimating an obstacle state comprising a relative position and a relative velocity of the obstacle; and means for determining the relative obstacle state by determining a difference between the drone state and the obstacle state.
In Example 65, the subject matter of Examples 50-64 includes, means for capturing an image mounted on a drone; and memory means coupled to the drone controller logic and the drone camera.
Example 66 is a drone controller apparatus of a drone comprising a drone camera mounted on the drone, the apparatus comprising, at a first time when the drone is in flight: means for determining a drone state comprising a position and a velocity of the drone; means for determining a relative obstacle state comprising a relative position and a relative velocity of the drone with respect to an obstacle; means for determining a reaction to avoid the obstacle based on the relative obstacle state; and means for applying a signal related to the reaction to one or more actuator control inputs of the drone that modifies a drone path existing at the first time to avoid the obstacle.
Example 67 is a computer program product comprising one or more computer readable storage media comprising computer-executable instructions operable to, when executed by processing circuitry of a device, cause the device to perform any of the methods of Examples 18-33.
Example 68 is a system comprising means to perform any of the methods of Examples 18-33.
Example 69 is a system to perform any of the operations of Examples 1-66.
Example 70 is a method to perform any of the operations of Examples 1-66.
Example 71 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-66.
Example 72 is an apparatus comprising means to implement any of Examples 1-66.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, also contemplated are examples that include the elements shown or described. Moreover, also contemplated are examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to suggest a numerical order for their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with a claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2017/068588 | 12/27/2017 | WO | 00 |