The present disclosure relates generally to unmanned aerial systems. In particular, methods and systems for detection of obstructions within the approach path of unmanned aerial vehicles (UAVs) executing autonomous landings are described.
Unmanned aerial vehicles, like any aerial vehicle, run the risk of collision with objects in their flight paths. A collision between a ground object and a UAV will typically result in damage to the UAV and, depending upon the size of the UAV in question, possible damage to the struck object. When the object is a person or animal, severe bodily harm or death could result. Where a UAV is under continuous control from a ground operator, as is the case with most model aircraft, the ground operator is responsible for spotting possible obstructions and altering the UAV's course to avoid them. In recent years, however, UAVs have gained autonomous flight capabilities to the point where a UAV can be preprogrammed with a mission comprising a set of flight paths between waypoints, concluding with a landing at a predetermined landing spot. Thus, it is possible for a UAV to take off, fly, and land without real-time input or guidance from a ground operator.
Known landing systems for UAVs are not entirely satisfactory for the range of applications in which they are employed. For example, existing systems and methods typically do not provide object detection during an autonomous landing. Thus, the UAV operator must monitor a landing area for potential objects within the UAV's path and either clear the obstructions in a timely fashion, or take control of the UAV to manually avoid the obstructions. Where the operator cannot be present at the landing site during landing, the landing site must be secured in advance to avoid a possible obstruction collision. Furthermore, even clearing and securing a site in advance may not prevent unexpected incursions by unforeseen persons or animals.
Thus, there exists a need for systems and methods that improve upon and advance the design of known systems and methods for conducting UAV autonomous landings. Examples of new and useful systems and methods relevant to the needs existing in the field are discussed below.
Disclosure addressing one or more of the identified existing needs is provided in the detailed description below. Examples of references relevant to methods and systems for obstruction detection during an autonomous unmanned aerial vehicle landing include U.S. patent application Ser. No. 15/017,263, filed on 5 Feb. 2016, and directed to Visual Landing Aids for Unmanned Aerial Systems. The complete disclosure of the above patent application is herein incorporated by reference for all purposes.
The present disclosure is directed to systems and methods for obstruction detection during autonomous unmanned aerial vehicle landings that include an unmanned aerial vehicle equipped with at least one video camera, an image processor that analyzes a feed from the video camera to detect possible obstructions, and an autopilot programmed to abort an autonomous landing if it receives a signal indicating an obstruction was detected. In some examples, the UAV is in communication with a ground station that performs the obstruction detection analysis, rather than such processing being performed on board the UAV. In some further examples, the landing area includes a ground-based visual target that the UAV can locate and home in upon from the air.
The disclosed methods and systems will become better understood through review of the following detailed description in conjunction with the figures. The detailed description and figures provide merely examples of the various inventions described herein. Those skilled in the art will understand that the disclosed examples may be varied, modified, and altered without departing from the scope of the inventions described herein. Many variations are contemplated for different applications and design considerations; however, for the sake of brevity, each and every contemplated variation is not individually described in the following detailed description.
Throughout the following detailed description, examples of various methods and systems for obstruction detection during autonomous UAV landings are provided. Related features in the examples may be identical, similar, or dissimilar in different examples. For the sake of brevity, related features will not be redundantly explained in each example. Instead, the use of related feature names will cue the reader that the feature with a related feature name may be similar to the related feature in an example explained previously. Features specific to a given example will be described in that particular example. The reader should understand that a given feature need not be the same or similar to the specific portrayal of a related feature in any given figure or example.
With reference to FIG. 1, a first example of a system for obstruction detection during an autonomous unmanned aerial vehicle landing, system 100, will now be described. System 100 functions to detect possible obstructions within a designated landing area while a UAV executes an autonomous landing.
For example, system 100 allows a UAV to continuously monitor a designated landing area during an autonomous landing procedure for possible obstructions, such as persons or animals, impinging upon the UAV's flight path. Upon detection, a collision can be avoided through a variety of approaches, such as holding until the obstruction clears, diverting to an alternate landing site, or maneuvering around the obstruction. Thus, potential damage to both the UAV and any ground obstructions can be avoided. Further, by providing obstruction detection capabilities, the UAV operator is freed from having to monitor the landing area, secure it, or even pre-clear it of obstructions.
System 100 for detecting an obstruction by an unmanned aerial vehicle 102 (UAV) during an autonomous landing includes at least one image sensor 104 on board UAV 102 that is capable of producing a video feed and possesses a field of view 108 encompassing a target landing area 110. An image processing unit 106 is in data communication with the at least one image sensor 104 so as to receive the video feed. Image processing unit 106 analyzes at least the portion of the video feed corresponding to the part of field of view 108 that encompasses target landing area 110, using one or more object detection algorithms, to detect an obstruction 112 within the flight path of unmanned aerial vehicle 102. An autopilot is in data communication with image processing unit 106 and is programmed to abort the autonomous landing if an obstruction is detected.
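By way of illustration only, the following minimal sketch shows what a single detection pass by image processing unit 106 might look like, using OpenCV's stock HOG-based pedestrian detector as a stand-in for the one or more object detection algorithms; the function name and confidence threshold are illustrative assumptions rather than part of this disclosure.

```python
# Minimal sketch of one detection pass over a single video frame. OpenCV's
# stock HOG pedestrian detector stands in for whatever object detection
# algorithm image processing unit 106 employs; the threshold is illustrative.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_obstructions(frame):
    """Return bounding boxes of possible obstructions (here, persons)."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    # Keep only reasonably confident detections; any non-empty result would
    # be signaled to the autopilot as a detected obstruction.
    return [box for box, w in zip(boxes, weights) if float(w) > 0.5]
```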
As can be seen in FIG. 1, UAV 102 is depicted descending toward target landing area 110, with field of view 108 of image sensor 104 encompassing target landing area 110, safety buffer zone 109, and a potential obstruction 112.
UAV 102 is preferably of a multi-rotor or single-rotor conventional helicopter format, or a similar style of aircraft that is capable of vertical take-off and landing (VTOL). However, it will be appreciated by a person skilled in the relevant art that the disclosed systems and methods could be easily modified to work with a fixed-wing aircraft or other UAV that lands conventionally or over short distances (STOL). As will be discussed further below, UAV 102 must be capable of executing an autonomous landing, in which the UAV approaches and lands at a predesignated location without input from a ground controller. Autonomous landing capabilities range from the relatively primitive GPS-based return-to-home feature offered on the DJI Phantom and similarly equipped multirotors, where the UAV flies back and lands at a predetermined GPS location if the signal from the ground controller is lost, to fully autonomous flight, where the UAV can be programmed to take off, fly a mission, and land without direct input from a ground station.
In the example shown in FIG. 1, image sensor 104 is a video camera carried on board UAV 102 that produces a video feed of field of view 108.
The video feed is in the well-known format of a series of successive frames, and may use a compressed or uncompressed format. Examples of such video formats include AVC-HD, MPEG-4, DV, or any other video encoding format now known or later developed. Selection of a video encoding method may inform the selection of detection algorithms subsequently employed, or may require the video feed to be decompressed and/or decoded into a series of uncompressed successive frames. Image sensor 104 may be sensitive to infrared, ultraviolet, or visible light, a combination of the foregoing, or any other type of electromagnetic radiation as appropriate to accurately detect and image target landing area 110. For example, where image sensor 104 can detect infrared light or is equipped with image-intensifying equipment, low-light or nighttime landings may be facilitated. Image sensor 104 may use CCD, CMOS, or any other suitable imaging technology now known or later developed.
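By way of illustration, the sketch below shows one conventional way of decoding a compressed video feed into successive uncompressed frames suitable for per-frame analysis, here using OpenCV; the capture source is a placeholder assumption.

```python
# Illustrative sketch: decode a compressed video feed into successive
# uncompressed frames for per-frame analysis. The capture source (device
# index 0) is a placeholder for the feed from image sensor 104.
import cv2

capture = cv2.VideoCapture(0)
while capture.isOpened():
    ok, frame = capture.read()   # one uncompressed BGR frame per iteration
    if not ok:
        break
    # 'frame' is now a numpy array ready for object detection processing
capture.release()
```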
Image sensor 104 provides a video feed constrained to a field of view 108, which depends upon the optics as well as the size of the imaging technology utilized in image sensor 104. During an autonomous landing, field of view 108 will encompass at least the target landing area 110, and preferably at least a safety buffer zone 109 that surrounds and includes target landing area 110. Field of view 108 may extend beyond safety buffer zone 109, especially when UAV 102 is relatively distant from target landing area 110. As will be described further herein, those portions of field of view 108 outside of safety buffer zone 109 may be disregarded by image processing unit 106.
Referring to FIG. 2, an example field of view 200 as captured by the image sensor during an autonomous landing is depicted, including a target landing area 204 surrounded by a safety buffer zone 206, along with a person shown in two possible positions 208 and 210.
Safety buffer zone 206 (and its counterpart 109) constitutes that portion of field of view 200 that image processing unit 106 monitors for obstructions. When a person is in position 208, inside safety buffer zone 206, image processing unit 106 will signal the autopilot on UAV 102 to abort the landing. However, a person in position 210 will not be registered as an obstruction by image processing unit 106 until the person moves into position 208. Although safety buffer zone 206 is depicted as a rectangle in FIG. 2, it may take any shape and size appropriate to UAV 102 and its approach path.
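A minimal sketch of the zone test described above follows, distinguishing a person at position 208 (inside safety buffer zone 206) from one at position 210 (outside); the pixel coordinates are invented for the example, and a non-rectangular zone could use a point-in-polygon test instead.

```python
# Hypothetical zone test: a detection triggers an abort only if its bounding
# box overlaps safety buffer zone 206. Rectangles are (x, y, width, height)
# in pixel coordinates; all coordinate values below are invented examples.

def intersects(zone, box):
    """True if a detected bounding box overlaps the safety buffer zone."""
    zx, zy, zw, zh = zone
    bx, by, bw, bh = box
    return bx < zx + zw and bx + bw > zx and by < zy + zh and by + bh > zy

buffer_zone = (100, 80, 440, 320)   # safety buffer zone 206
person_208 = (300, 200, 40, 90)     # inside the zone: abort is signaled
person_210 = (600, 40, 40, 90)      # outside the zone: not an obstruction
assert intersects(buffer_zone, person_208)
assert not intersects(buffer_zone, person_210)
```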
Target landing area 204 (and its counterpart 110 in FIG. 1) may be marked with a ground-based visual landing target of the type described in the above-referenced patent application directed to Visual Landing Aids for Unmanned Aerial Systems, which UAV 102 can locate and home in upon from the air.
Alternatively, target landing area 204 can be implemented using a visual or optical landing target of a different style than those depicted in the patent application for Visual Landing Aids for Unmanned Aerial Systems, including existing ground features or spaces, provided such features can be distinguished from other features within field of view 200. Still further, target landing area 204 need not be implemented with a fixed ground target, but instead could be implemented using any guidance and/or navigation mechanism now known or later developed, such as GPS location, GPS-RTK location, a visual-based or radio-based beacon, radar signal guidance, or via any other navigational aid that allows UAV 102 to locate a predetermined target landing area. With any of the foregoing implementations, the autonomous landing is guided with reference to the implemented guidance mechanism. For example, where target landing area 204 is determined by a GPS location, UAV 102 will possess a GPS navigation device, which in turn supplies GPS guidance to the autopilot to guide the autonomous landing to the target landing area 204. Other guidance mechanism implementations will have UAV 102 equipped with corresponding guidance devices, such as radar signal generators, radio receivers, or other such equipment as appropriate to the technology used to determine target landing area 204. In such implementations, safety buffer zone 206 may be established with reference to the predetermined location in conjunction with a detected altitude.
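As one illustration of establishing the safety buffer zone from a predetermined location in conjunction with a detected altitude, the sketch below converts a ground-referenced buffer radius into pixels, assuming a nadir-pointing camera; the field of view, image width, and buffer radius values are assumptions for the example only.

```python
# Illustrative sketch: size the safety buffer zone in pixel coordinates from
# a detected altitude, assuming a nadir-pointing camera centered on the
# predetermined landing location. The FOV, image width, and 5 m buffer
# radius are assumed values, not values taken from this disclosure.
import math

def zone_radius_px(altitude_m, fov_deg=78.8, image_width_px=1280,
                   buffer_radius_m=5.0):
    """Pixels spanned by the ground-referenced buffer radius at altitude."""
    ground_width_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    meters_per_px = ground_width_m / image_width_px
    return buffer_radius_m / meters_per_px

# As UAV 102 descends, meters-per-pixel shrinks, so the same 5 m buffer
# occupies more of the frame and the monitored region grows accordingly.
print(zone_radius_px(20.0), zone_radius_px(5.0))
```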
Returning to FIG. 1, image processing unit 106 will now be described in greater detail.
Image processing unit 106 is preferably implemented using a dedicated microcontroller sized to be placed on board UAV 102. Suitable technologies may include a general-purpose embedded microcontroller, such as Atmel's ATmega AVR technology or an ARM architecture processor similar to the microprocessors used in many smartphones. Where such microcontrollers are used, the functionality of image processing unit 106 is typically implemented in software executed by the microcontroller. Other possible implementing technologies include application-specific integrated circuits (ASICs), where an integrated circuit or collection of integrated circuits is specifically designed to carry out the functionality required of image processing unit 106 at a hardware level.
Image processing unit 106 is in data communication with the autopilot of UAV 102. The autopilot in turn either provides flight control functionality itself, or interfaces with an inertial measurement unit or similar device that provides flight control. The autopilot preferably handles autonomous flight mission tasks, such as interfacing with position sensors for directing UAV 102 along a predesignated course and/or handling take-offs and landings. In this context, image processing unit 106 effectively comprises an additional position sensor providing flight data to the autopilot. The autopilot may be any suitable commercially available flight control system that supports autonomous flight capabilities. Alternatively, autopilot functionality could be integrated into image processing unit 106 to comprise a single unit that receives a video feed, detects obstructions, and controls UAV 102.
Turning attention to FIG. 3, a second example system, system 300, is depicted. In system 300, an image sensor 302 supplies a video feed 304 to an image processing unit 306, which in turn provides a detection status 308 to an autopilot 310. Unlike system 100, system 300 includes a radio transceiver 314 and associated data links 316a and 316b, allowing some or all of the obstruction detection analysis to be performed by off-site processing equipment 312 located at a ground station.
Image sensor 302 and image processing unit 306 each have functionality similar to image sensor 104 and image processing unit 106, described above. Likewise, video feed 304 is identical in format to the video feed described above that is generated by image sensor 104, and autopilot 310 possesses the functionality described above for the autopilot with reference to FIG. 1.
Radio transceiver 314 and associated data links 316a and 316b are implemented using any radio control link technology now known or later developed. Examples of such technology include DJI's Lightbridge data link, which is capable of communicating a video feed along with control information from a UAV to a ground station. Radio transceiver 314 will typically be implemented using a pair of transceivers, with one transceiver located on UAV 102 and in data communication with image processing unit 306 and autopilot 310, and a corresponding transceiver located on a ground station in data communication with off-site processing equipment 312. In this configuration, the pair of transceivers communicates bi-directionally using predetermined wireless frequencies and protocols. In addition to video feed 304 and detection status 308, data links 316a and 316b could be used to transmit control information to autopilot 310 for manual control of UAV 102, to upload mission parameters to autopilot 310 for autonomous flight, or to provide a location for a target landing area.
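The data links mentioned above use proprietary protocols, so purely as a hedged illustration, the sketch below sends a detection status message over a generic UDP link; the address, port, and message shape are assumptions, not the protocol of any actual data link product.

```python
# Hedged sketch: a generic stand-in for carrying detection status 308 over
# data links 316a/316b. Real links (e.g., DJI's Lightbridge) use their own
# proprietary protocols; UDP and this JSON message shape are assumptions.
import json
import socket
import time

GROUND_STATION = ("192.168.1.10", 14550)   # placeholder address and port

def send_detection_status(obstruction_detected):
    message = json.dumps({
        "type": "detection_status",
        "obstruction": obstruction_detected,
        "timestamp": time.time(),
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, GROUND_STATION)
```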
Turning attention to FIG. 4, a method 400 for obstruction detection during an autonomous UAV landing is depicted. Method 400 includes receiving a video feed of the target landing area in step 402, optionally isolating the safety buffer zone from the video feed in step 404, processing the video feed for obstructions in step 406, determining whether an obstruction was detected in step 408, and, accordingly, either proceeding with the landing in step 410 or aborting the landing in step 412.
Receiving a video feed of the target landing area from an image sensor in step 402 has been discussed above with reference to FIGS. 1 and 3. Likewise, processing the video feed for obstructions in step 406 using one or more object detection algorithms has been discussed above with reference to image processing unit 106 and off-site processing equipment 312.
Optional step 404 can be performed prior to step 406. Step 404 includes isolating and extracting from the video feed the safety buffer zone that includes the target landing area, to reduce the amount of video data that must be processed in step 406. Where the safety buffer zone is defined to be coextensive with the target landing area, step 404 includes isolating the target landing area itself from the video feed for obstruction processing.
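A minimal sketch of optional step 404 follows, cropping each frame down to the safety buffer zone so that step 406 has less video data to process; the frame dimensions and zone coordinates are invented for the example.

```python
# Illustrative sketch of optional step 404: crop each frame to the safety
# buffer zone so step 406 processes fewer pixels. Dimensions are invented.
import numpy as np

def extract_buffer_zone(frame, zone):
    """Return only the safety-buffer-zone portion of a frame."""
    x, y, w, h = zone
    return frame[y:y + h, x:x + w]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)     # stand-in video frame
cropped = extract_buffer_zone(frame, (100, 80, 440, 320))
print(cropped.shape)   # (320, 440, 3): far less data for step 406
```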
As described above, at step 408, if an obstruction is detected within the safety buffer zone, the landing is aborted in step 412. If no obstruction is detected, the landing proceeds in step 410. Method 400 is an iterative process, being continually performed until the UAV finally lands. Accordingly, following step 410, method 400 cycles back to step 402. Typical implementations run steps 402 through 408 continuously while an autonomous landing is in progress, with the autopilot executing the programmed landing unless an abort signal is received.
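Expressed as a hedged sketch, the iterative structure of method 400 might look like the loop below; the sensor, processor, and autopilot objects and their method names are stand-ins for the components described above, not defined by this disclosure.

```python
# Hedged sketch of method 400 as an iterative loop: steps 402-408 repeat
# until touchdown, and a positive detection triggers the abort of step 412.
# The three collaborator objects and their methods are illustrative stand-ins.

def run_autonomous_landing(sensor, processor, autopilot):
    while not autopilot.has_landed():
        frame = sensor.read_frame()                    # step 402
        zone = processor.extract_buffer_zone(frame)    # optional step 404
        obstruction = processor.detect(zone)           # step 406
        if obstruction:                                # step 408
            autopilot.abort_landing()                  # step 412
            return
        autopilot.continue_descent()                   # step 410
```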
If an obstruction is detected, in step 412 the autopilot is instructed to abort the autonomous landing. Aborting the landing can be accomplished in a number of different ways, and the selected way can depend upon mission parameters, the size of the UAV involved, the altitude of the UAV, the remaining battery life of the UAV, and other similar factors. For example, an abort signal may trigger the UAV to hold in position and wait until the safety buffer zone is cleared of the obstruction. Alternatively, the UAV may divert to a predetermined alternate landing site; in some instances, the alternate landing site can be designated as the UAV's point of takeoff. Still further, the UAV may revert to manual control and hold in place, awaiting further instructions from a ground controller. The UAV may also implement combinations of the foregoing, such as holding in place for a predetermined length of time before proceeding to an alternate site if the safety buffer zone does not clear within that time. In addition to aborting an autonomous landing, the UAV could be programmed to illuminate a landing light when a potential obstruction is detected within or approaching the safety buffer zone, in an attempt to alert the person or animal to the presence of the approaching UAV.
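One possible combination of the abort behaviors described above, holding for a bounded time and then diverting if the zone does not clear, is sketched below; the timeout value and the autopilot method names are assumptions for illustration only.

```python
# Hedged sketch of a combined abort behavior: hold position, wait up to a
# fixed time for the safety buffer zone to clear, then divert to an
# alternate site (e.g., the takeoff point). Names and timing are assumed.
import time

HOLD_TIMEOUT_S = 30.0

def abort_landing(sensor, processor, autopilot):
    autopilot.hold_position()               # wait for obstruction to clear
    deadline = time.monotonic() + HOLD_TIMEOUT_S
    while time.monotonic() < deadline:
        if not processor.detect(sensor.read_frame()):
            autopilot.resume_landing()      # zone cleared: retry approach
            return
        time.sleep(1.0)
    autopilot.divert_to_alternate()         # zone never cleared: divert
```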
The disclosure above encompasses multiple distinct inventions with independent utility. While each of these inventions has been disclosed in a particular form, the specific embodiments disclosed and illustrated above are not to be considered in a limiting sense as numerous variations are possible. The subject matter of the inventions includes all novel and non-obvious combinations and subcombinations of the various elements, features, functions and/or properties disclosed above and inherent to those skilled in the art pertaining to such inventions. Where the disclosure or subsequently filed claims recite “a” element, “a first” element, or any such equivalent term, the disclosure or claims should be understood to incorporate one or more such elements, neither requiring nor excluding two or more such elements.
Applicant(s) reserves the right to submit claims directed to combinations and subcombinations of the disclosed inventions that are believed to be novel and non-obvious. Inventions embodied in other combinations and subcombinations of features, functions, elements and/or properties may be claimed through amendment of those claims or presentation of new claims in the present application or in a related application. Such amended or new claims, whether they are directed to the same invention or a different invention and whether they are different, broader, narrower or equal in scope to the original claims, are to be considered within the subject matter of the inventions described herein.