The present invention relates generally to the control and landing of unmanned aerial vehicles. More specifically, the present invention relates to systems, methods, devices, and computer readable media for landing unmanned aerial vehicles using sensor input and image processing techniques.
Unmanned aerial vehicles (UAVs) are aircraft that fly without onboard pilots. They rely on complete or partial automation for control during their flight. UAVs have become increasingly popular for use in support of military operations, but the logistical complexity of UAV control, and the resultant cost, often makes their use burdensome. First, the soldiers who fly UAVs will always have other duties or circumstances that draw their attention away from their flight controls for at least some period of time. Second, larger UAVs require highly trained pilots for takeoff and landing. As a result, units which fly large UAVs often have one set of crew to fly the mission phase and a second crew for the takeoff and landing phases. These larger UAVs also must be landed on a prepared runway, which requires soldiers to clear a landing location. Micro and small UAVs require somewhat less overhead than larger UAVs. Micro and small UAVs do not require two crews—they are usually flown and landed by the same soldiers throughout the entire mission—but flying is often the secondary occupational specialty of the pilots who operate these UAVs. While micro and small UAVs can usually land in any open area at a non-prepared airfield, infrequent practice of UAV landings often results in hard or inexpert landings, which can damage the UAV.
Automated landing systems can mitigate some of the risk associated with landing a UAV by improving the accuracy of the landing touchdown point. This reduces wear and tear on the vehicle, and reduces both the training level and active attention required of the operator. First, an automated landing system can more accurately control the velocity of the UAV—both speed, and direction—than a human operator. This increased level of control can reduce bumps and scrapes on landing. Additionally, the higher level of control can reduce the soldiers' work in preparing a runway. With an automated landing system, the UAV can often be guided to a small, more precise landing area, which reduces the amount of preparation work required from the soldiers. Finally, the use of automation allows human operators to oversee the landing, but permits them to focus their attention elsewhere for most of that time.
Several types of automated landing systems are currently available in different UAVs. GPS and altimeter-based systems are the most common automated landing systems. In these systems, a human operator enters the latitude and longitude of the intended landing location, and the ground altitude, into a software controller. The operator then creates an approach pattern with waypoints, and designates the direction of landing. The autopilot flies the aircraft on the designated approach pattern and lands the aircraft at the intended landing location, within the accuracy limits of the GPS navigation system. To reduce the impact of landing, it is possible to either cut power or deploy a parachute at a preprogrammed location shortly before touchdown.
GPS and altimeter-based systems are sufficient for establishing the aircraft in a landing pattern and beginning the approach descent, but the actual touchdown control is less than optimal. Although GPS latitude and longitude are extremely accurate, altitude calculations may be off by several meters. Pitot-static systems, which use pressure-sensitive instruments (e.g. air pressure-sensitive instruments) to calculate the aircraft's airspeed and altitude, are generally more accurate than GPS-based systems, but are susceptible to similar problems—changes in ambient air pressure during the flight can affect altitude measurements. In both cases, then, the aircraft may touch down several meters before or after reaching its intended landing site. An off-site landing can easily damage the aircraft when working on an unprepared landing strip or in an urban area.
Certain UAVs use energy-absorption techniques for landing. These are simple for the operator to use and have a high rate of survivability. A human operator programs the latitude and longitude of the intended landing location, the ground altitude, and an approach pattern into a software controller. Using GPS, these aircraft fly the approach path, and then just before reaching the intended landing site, enter a controlled stall. The stall causes the aircraft to lose forward speed and drop to the ground. Although these UAVs sustain a heavy impact, they are designed to break apart and absorb the energy of the impact without damaging the airframe. Advantages of this system are that it requires minimal control input for the landing, and new operators are able to learn to use it quickly and effectively. A major disadvantage, however, is that this system is not portable to many other UAVs. It requires specially-designed aircraft that are capable of absorbing the shock of hard belly-landings. Larger, heavier aircraft create greater kinetic energy in a stall and would most likely suffer significant airframe damage if they attempted this sort of landing. Additionally, aircraft must also have adequate elevator authority to enter and maintain a controlled stall. Finally, any payloads installed on the UAV would need to be specially reinforced or protected to avoid payload damage.
As an alternative to GPS, there are several radar-based solutions to auto-landing. These systems track the inbound trajectory of the aircraft as they approach the runway for landing, and send correction signals to the autopilots. Radar-based systems have the advantage of working in fog and low-visibility conditions that confound visual solutions. Their primary disadvantage is that they require substantial ground-based hardware, which makes them impractical for use with small and micro UAVs. The use of ground-based hardware also increases their logistics footprint for larger UAVs, which may reduce their practicality in expeditionary warfare.
Although not automated, the U.S. Navy has used a visual-based system for manually landing aircraft on aircraft carriers since the 1940s. Aircraft carrier pilots currently use a series of Fresnel lenses, nicknamed the ‘meatball’, to guide them to the aircraft carrier during landing. Different lenses are visible to the pilot depending on whether the pilot is above, below, left, or right of the ideal approach path. The pilot steers onto the proper glideslope and lineup by following the lights on the meatball, and maintains that approach path all the way to touchdown. The meatball is a proven system for directing the landing of Navy aircraft. However, it is expensive and requires accurate, human-directed adjustment to effect the proper glideslope, so it would not be practical to use it for most UAV operations.
The present invention discloses vision-based automated systems and methods for landing unmanned aerial vehicles. The system of the invention includes one or more UAVs, and one or more targets, of known geometry, positioned at one or more intended landing locations. The system further includes one or more sensors coupled to each UAV, such that at least one sensor is aligned with the direction of movement of the UAV, and captures one or more images in the direction of movement of the UAV. The system further includes at least one processor-based device, which determines the visual distortion of at least one target visible in one or more of the captured images as a function of the UAV's current position. This processor-based device calculates the UAV's current glideslope and lineup angle, and adjusts the current glideslope and alignment of the UAV to an intended glideslope and lineup angle, so as to safely land the UAV.
a) shows a wave-off procedure.
b) shows a wave-off procedure initiated by a human operator.
c) shows a wave-off procedure initiated upon the occurrence of a preprogrammed condition.
d) shows a wave-off procedure initiated upon a determination that the UAV cannot land safely.
The present invention provides a vision-based automated system for landing UAVs, as shown in
The placement of the target 120 at the intended landing location may be permanent or temporary (i.e. removable). In one embodiment of the invention, the target 120 may be painted on a runway or other landing site. In another embodiment, the target 120 may be fixed on a portable mat, such that the mat can be placed on the landing site when necessary, but stored away when out of use. In yet another embodiment, the target 120 may be designated by any light source, such as chemical lights or infrared strobe lights, on three or more signature corners 250 of the target 120.
One end of the vertical arm 220 may be pre-designated as an approach end by a special marker 230. The special marker 230 can be any marker of known geometry capable of identifying a single arm or piece of a target 120. Special markers 230 may be of any color or easily-identified shape that clearly differentiates the special marker 230 from the rest of the target 120, such as, for example, a star, rectangle, or circle. The special marker 230 must indicate the approach end of the target 120 without interfering with the sensor's 130 ability to measure the length of the arm. In a preferred embodiment, as shown in
It should be noted, however, that despite the use of particular colors and lengths as described above, the respective lengths of the horizontal arm 210 and the vertical arm 220, and the colors of the cross 200 and the special marker 230, may be varied as applicable to the situation, provided that the target 120 is of a shape that is identifiable and is of a known configuration.
In accordance with a preferred embodiment, the system includes at least one sensor 130, capable of detecting the targets 120, that is connected to the UAV 110, so that the sensor 130 is aligned with the direction of movement 140 of the UAV 110, and captures one or more images of the landscape in the direction of movement 140 of the UAV 110. In a preferred embodiment of the invention, the sensor 130 is a digital camera, which produces a digital image of the landscape in the direction of movement of the UAV 110. In alternative embodiments, the sensor 130 may be a single-lens reflex (SLR) camera, or an infrared camera, or any other device capable of capturing one or more images of the landscape and detecting the target 120 placed at the intended landing location.
The system determines the visual distortion of any target 120 visible in one or more of the captured images as a function of the UAV's 110 current position. As the UAV's 110 position changes with respect to the position of the target 120, the target 120 will appear to be skewed, or distorted, in any captured images.
The present invention also includes methods for landing a UAV, as shown in
The method of the invention may also include analyzing the image to determine whether it includes a target 720. In a preferred embodiment, the method includes analyzing the image to determine whether the image contains any objects which may be a target, which will be referred to as a “possible target,” and to determine whether that possible target is the “actual target” where the UAV is intended to land. In one embodiment, the analyzing may be performed by a human operator, who manually confirms that the image includes an actual target. In an alternative embodiment, the analyzing may be performed by image processing techniques (e.g. computer-based image processing techniques). Examples of such targets may include, but are not limited to, runways, taxiways, buildings (e.g. building rooftops), or the entire airfield. In a preferred embodiment, the target is a bilaterally symmetric cross (e.g. a bilaterally symmetric cross placed horizontally on the landing surface).
Image processing may be done in any manner known to one of skill in the art. In one embodiment, image processing may include identifying the outline of the possible target. In a preferred embodiment, the outline of the possible target may be determined by first identifying the region of the captured image which contains a contiguous area dominated by the color of the actual target. For example, if the actual target is red, any contiguous region in the image which is red is noted. The red channel of the image can then be converted into a binary mask, such that, for example, the red region is designated by a ‘1’, and all other colors are designated as a ‘0’. It should be noted that any equivalent binary formulation such as, for example, ‘true’ and ‘false’, or ‘positive’ and ‘negative’ could also be used for the designation. For simplicity, the binary mask will hereafter be referred to with reference to ‘1’ and ‘0’, but this is not intended to limit the scope of the invention in any way. Using basic morphology operations, it is possible to smooth the silhouette of the region to form a more precise outline of the possible target.
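The thresholding and binary-mask steps described above can be sketched as follows. This is an illustrative sketch only: the color thresholds and the 3x3 structuring element are assumptions not specified in the text, and the wrap-around behavior of `np.roll` at image borders is accepted as a simplification.

```python
import numpy as np

def red_binary_mask(image, red_min=150, other_max=100):
    """Convert an RGB image (H x W x 3 uint8 array) into a binary mask in
    which pixels dominated by the target color (here red) are 1 and all
    other pixels are 0. The threshold values are illustrative assumptions."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return ((r >= red_min) & (g <= other_max) & (b <= other_max)).astype(np.uint8)

def smooth_mask(mask):
    """Smooth the silhouette with a basic morphological opening (erosion
    followed by dilation) over a 3x3 neighborhood, implemented with array
    shifts so that only NumPy is required. Note np.roll wraps at image
    borders, which is adequate for this sketch."""
    def erode(m):
        out = m.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
        return out
    def dilate(m):
        out = m.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
        return out
    return dilate(erode(mask.astype(bool))).astype(np.uint8)
```

The opening removes isolated noise pixels while preserving the overall silhouette of a contiguous target region.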
Image processing may also include identifying at least three signature corners of the possible target. The three signature corners of the possible target may be compared to the known signature corners of the actual target. Based on the comparison, it may be determined whether the signature corners of the possible target substantially match the signature corners of the actual target.
Using the outline of the possible target, it is then possible to isolate signature corners of the possible target, and to compare the signature corners of the possible target to signature corners of the actual target.
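One simple way to compare corner sets, sketched below, is to normalize each set for position and size and then require every normalized corner of one set to lie near a normalized corner of the other. The normalization scheme and the tolerance value are illustrative assumptions, not the patent's specified method.

```python
import numpy as np

def normalize_corners(corners):
    """Translate a set of corner points to their centroid and scale so the
    mean distance from the centroid is 1, removing position and size so
    only the shape of the corner layout is compared."""
    pts = np.array(corners, dtype=float)
    pts -= pts.mean(axis=0)
    return pts / np.mean(np.linalg.norm(pts, axis=1))

def corners_match(possible, actual, tol=0.1):
    """Return True if the possible target's signature corners substantially
    match the actual target's: after normalization, every corner of each
    set must lie within `tol` of some corner of the other set."""
    p, a = normalize_corners(possible), normalize_corners(actual)
    if len(p) != len(a):
        return False
    d = np.linalg.norm(p[:, None, :] - a[None, :, :], axis=2)
    return bool(d.min(axis=1).max() <= tol and d.min(axis=0).max() <= tol)
```

Because the comparison is done on normalized layouts, a possible target that is merely a shifted and scaled copy of the actual target still matches.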
The use of a special marker 230 in the actual target may improve the accuracy of the determination whether a possible target is an actual target by creating additional signature corners. For example, if the actual target is red, but contains a green stripe, a captured image will reflect this green stripe. When the red channel of the image is converted to a binary mask, all of the green stripe will be designated as a ‘0’, or the equivalent binary inverse of the red region, appearing as if it were a hole in the possible target. This creates additional signature corners, which are comparable to the special marker 230 of the actual target. In yet another embodiment, the analysis of the image to determine whether the image contains any objects which may be a target is performed using image processing (e.g. computer-based image processing), using a technique such as that described above, with a human operator verifying that the determination made via the image processing (e.g. automated computer-based image processing) is correct.
It should be noted that other image processing techniques may also be used to analyze the image and the above examples are in no way meant to limit the scope of the invention.
The method of the invention may also include assessing the dimensions of a possible target 730 and comparing those dimensions to the known dimensions of an actual target to determine a current glideslope 580 and lineup angle 680. The present invention is capable of working with a UAV traveling on any initial glideslope. In a preferred embodiment, the glideslope is between 2 and 45 degrees. In a most preferred embodiment, the glideslope is between 3 and 10 degrees. In one embodiment of the invention, the current glideslope is determined as a function of the apparent height-to-width ratio of the target, as captured in the image taken by a digital sensor in the direction of movement of the UAV. This height-to-width ratio can be determined by the equation H/W=PAR*(h/w)*sin(α), where:
H=the apparent height of the target as captured in the image;
W=the apparent width of the target as captured in the image;
PAR=the pixel aspect ratio of the sensor;
h=the known, actual height of the target;
w=the known, actual width of the target; and
α=current glideslope of the UAV.
Transforming this equation, the current glideslope can then be calculated by solving the equation α=sin⁻¹(H*w/(PAR*h*W)). These calculations, and any other calculations described herein, may be performed by software running on a processor-based device, such as a computer. The instructions associated with such calculations may be stored in a memory within, or coupled to, the processor-based device. Examples of such memory may include, for example, RAM, ROM, SDRAM, EEPROM, hard drives, flash drives, floppy drives, and optical media. In one embodiment, the processor-based device may be located on the UAV itself. In an alternative embodiment, the processor-based device may be located remotely from the UAV and may communicate wirelessly with the UAV.
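The glideslope calculation above reduces to a single arcsine; a minimal sketch, using the variable names defined above:

```python
import math

def current_glideslope(H, W, h, w, par=1.0):
    """Solve the ratio H/W = PAR * (h/w) * sin(alpha) for the current
    glideslope, alpha = arcsin(H*w / (PAR*h*W)), returned in degrees.
    H, W are the target's apparent height and width in the captured image;
    h, w are its known actual height and width; par is the sensor's pixel
    aspect ratio."""
    return math.degrees(math.asin((H * w) / (par * h * W)))
```

For a square target (h = w) with unit pixel aspect ratio, an apparent height-to-width ratio of 0.5 corresponds to a 30-degree glideslope.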
There are straightforward mathematical techniques to determine the current glideslope and the lineup angle from known measurements and constraints by solving a system of equations. In the preferred embodiment of the invention, both the current lineup angle 680 and the current glideslope 580 can be calculated by solving the system of equations generated by calculating the unit vectors for three signature corners. For example, in one embodiment, the current lineup angle 680 and the current glideslope 580 may be calculated by applying the equation
where
SX, SY, SZ=world coordinates for one signature corner of the target;
DX, DY=unit vector of that signature corner;
α=the current glideslope; and
β=the lineup angle,
to at least three signature corners of said target. The method may further include using the current lineup angle and current glideslope to force the UAV to adjust its altitude and alignment 740 to conform to the intended approach path 585. In one embodiment of the invention, the current glideslope and the current lineup angle can be sent to an autopilot control loop, which then adjusts the UAV's altitude and direction.
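The system of equations itself is not reproduced in the text above. As one illustrative sketch of the general idea—recovering both angles from the unit vectors toward three signature corners—the problem can be posed as a small search under an assumed camera model. Everything here is hypothetical: the corner coordinates, the assumption that the camera range is known, and the angle search ranges are all stand-ins, not the patent's actual formulation.

```python
import numpy as np

# Hypothetical world coordinates (x, y, z) of three signature corners, in metres.
CORNERS = np.array([
    [0.0, -2.0, 0.0],
    [0.0,  2.0, 0.0],
    [6.0,  0.0, 0.0],
])

def camera_position(alpha, beta, rng):
    """Camera placed `rng` metres from the target origin, approaching on a
    glideslope `alpha` and lineup angle `beta` (both in degrees)."""
    a, b = np.radians(alpha), np.radians(beta)
    return rng * np.array([-np.cos(a) * np.cos(b),
                           -np.cos(a) * np.sin(b),
                            np.sin(a)])

def unit_vectors(alpha, beta, rng):
    """Unit vectors from the camera toward each signature corner."""
    v = CORNERS - camera_position(alpha, beta, rng)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def _search(observed, rng, a_range, b_range, step):
    best, best_err = (0.0, 0.0), np.inf
    for a in np.arange(*a_range, step):
        for b in np.arange(*b_range, step):
            err = np.sum((unit_vectors(a, b, rng) - observed) ** 2)
            if err < best_err:
                best, best_err = (a, b), err
    return best

def solve_angles(observed, rng):
    """Recover (glideslope, lineup) by a coarse grid search followed by a
    fine refinement, minimizing the residual between predicted and
    observed corner unit vectors."""
    a, b = _search(observed, rng, (1.0, 45.0), (-30.0, 30.0), 1.0)
    return _search(observed, rng, (a - 1.5, a + 1.5), (b - 1.5, b + 1.5), 0.02)
```

In practice an analytic or least-squares solution of the actual system of equations would replace the grid search; the sketch only shows that three corner unit vectors suffice to pin down both angles.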
In some cases, it may be desirable to perform the method of the invention repeatedly, to ensure that the UAV maintains the intended approach path until the UAV has landed safely. The method may be repeated at any regular interval as desired.
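The repeated capture-analyze-correct cycle described above can be sketched as a simple loop; the callable parameters are placeholders for the platform-specific pieces (sensor capture, image processing, and autopilot interface), not names from the original text.

```python
import time

def landing_loop(capture_image, process_image, send_correction, landed,
                 interval_s=0.1):
    """Repeat the capture -> analyze -> correct cycle at a regular interval
    until the UAV has landed. `process_image` returns a correction (e.g. a
    glideslope/lineup adjustment) or None if no target was found."""
    while not landed():
        image = capture_image()
        correction = process_image(image)
        if correction is not None:
            send_correction(correction)
        time.sleep(interval_s)
```

The interval can be tuned to the autopilot's control-loop rate; a faster cycle gives smoother corrections at the cost of more processing.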
As shown in
where
TTI1=expected time to impact;
t1=the time at which a first image is captured;
t2=the time at which a subsequent image is captured;
w1=the apparent width of the target as captured in said first image; and
w2=the apparent width of the target as captured in said subsequent image.
In other embodiments, the apparent height of the target or any other appropriate dimension may be used instead of width. If the expected time to impact is calculated 852, and it is determined that the UAV cannot land safely 854, the UAV will not land, but will instead execute the wave-off procedure 800.
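The expected-time-to-impact equation itself is given in the figures and not reproduced above. Assuming the standard time-to-contact relation—apparent width inversely proportional to range at constant closing speed, which gives TTI = w1*(t2−t1)/(w2−w1) measured from the later image—a sketch using the variables defined above is:

```python
def expected_time_to_impact(t1, t2, w1, w2):
    """Estimate the time to impact, measured from t2, from the growth of the
    target's apparent width between two images. Assumes apparent width is
    inversely proportional to range and the closing speed is constant:
        TTI = w1 * (t2 - t1) / (w2 - w1)
    This standard time-to-contact relation is an assumption; the patent's
    own equation appears in its figures."""
    if w2 <= w1:
        raise ValueError("target must appear larger in the later image")
    return w1 * (t2 - t1) / (w2 - w1)
```

For example, if the target's apparent width doubles over one second, the range has halved, so at constant closing speed the impact is one second away.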
What has been described and illustrated herein is a preferred embodiment of the invention along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention, which is intended to be defined by the following claims, in which all terms are meant in their broadest reasonable sense unless otherwise indicated therein.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 61/043,360, filed Apr. 8, 2008, the entirety of which is incorporated herein by reference.