The present invention relates to an automatic recovery method for an unmanned aerial vehicle (UAV), and more particularly to a vision-based automatic recovery method using a reference trajectory, in which information from a front vision camera and information from navigation equipment embedded in the UAV are combined by a weighting method.
An unmanned aerial vehicle flies under an automatic algorithm, and must therefore take off and land automatically to achieve a fully automatic system. In mountainous areas, the distance available for landing is so short that usable recovery methods are limited. However, existing research provides no automatic recovery method that uses a net to effectively recover the unmanned aerial vehicle within a short distance.
Net recovery needs a higher degree of precision than other recovery methods because the recovery point and range are only about two to three times the size of the flight vehicle. If a differential global positioning system (DGPS), a laser or radar is used, high precision can be ensured. However, the topography of mountainous areas is unsuitable for receiving DGPS signals, and methods using a laser or radar, while precise, are disadvantageous because construction costs are high and independent technology development is difficult.
Also, a method of using vision information has the advantages that construction and development costs are low and precision within a short distance is high. However, when only the vision information is used, it is difficult to control altitude because only a relative location and a distance are available.
Further, there is a method of employing a reference trajectory. With a reference trajectory, an altitude optimized for flying toward an aiming point can be maintained because an optimum path connecting given start and end points is generated and traced. However, the reference trajectory alone cannot provide the high precision required for net recovery, since the recovery point and range are only about two to three times the size of the flight vehicle.
The present invention is conceived to solve the foregoing problems, and an aspect of the present invention is to provide a vision-based automatic recovery method using a reference trajectory, in which both vision information and the reference trajectory are used. That is, a reference trajectory connecting a recovery point and an align point, determined by a wind direction and a mission, is generated. The reference trajectory is then followed based on information from navigation equipment in the initial recovery section (between the align point and the recovery point), and the weight of the command generated from the vision information, which has higher precision, is increased as the vehicle approaches the recovery point. Accordingly, it is possible to exploit both the advantage of using the reference trajectory and the advantage of using the vision information.
In accordance with one aspect of the present invention, there is provided a vision-based automatic recovery method using a reference trajectory for an unmanned aerial vehicle, the method comprising: a reference trajectory generating step of generating, on board, the reference trajectory connecting an align point and a recovery point, based on the recovery point given to the unmanned aerial vehicle through wireless communication; an image processing step of taking an image containing a symbol (e.g., a net) through a front vision camera embedded in the unmanned aerial vehicle, determining the position and size of the symbol in the taken image through an image processing computer, and calculating a distance between the symbol and the unmanned aerial vehicle; an attitude command issuing step of generating an attitude command for centering the symbol in the image by using the position of the symbol in the obtained image; and a virtual trajectory generating step of generating a virtual trajectory by combining the reference trajectory generated in the reference trajectory generating step with the attitude command generated in the attitude command issuing step, in which the virtual trajectory is generated by shifting a weight toward the attitude command as the vehicle moves from the align point to the recovery point.
Here, the reference trajectory may be set up to pass one or more waypoints from the align point to the recovery point, in which a virtual reference trajectory for going toward the next waypoint is generated before passing each waypoint, and a guide control command is issued to follow the generated virtual reference trajectory based on information of embedded navigation equipment.
Also, the attitude command may be a command to calculate the azimuth angle ψVPUCmd and the elevation angle θVPUCmd between the position of the symbol and the center of the taken image, and to change the direction of the unmanned aerial vehicle by the calculated azimuth angle ψVPUCmd and elevation angle θVPUCmd.
Further, the moving of the weight in the virtual trajectory generating step may include linearly increasing the weight of the attitude command from 0% to 100% as the vehicle moves from the align point to the recovery point.
Also, distance calculation for moving the weight in the virtual trajectory generating step may use a distance between the symbol and the unmanned aerial vehicle calculated in the image processing step.
Further, the generation of the virtual trajectory improves precision by following the reference trajectory through navigation information in the initial align section and generating an attitude command based on highly precise image information in the final section, i.e., at the recovery point.
In accordance with an aspect of the present invention, there is provided a vision-based automatic recovery method using a reference trajectory, in which both vision information and the reference trajectory are used.
That is, a reference trajectory connecting a recovery point and an align point, determined by a wind direction and a mission, is generated. The reference trajectory is then followed based on information from navigation equipment in the initial recovery section (between the align point and the recovery point), and the weight of the command generated from the vision information, which has higher precision, is increased as the vehicle approaches the recovery point. Accordingly, both the advantage of using the reference trajectory and the advantage of using the vision information are exploited, thereby enabling automatic recovery with higher precision for the unmanned aerial vehicle.
Hereinafter, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. Prior to this, terms or words used in this specification and claims are to be interpreted according to the meaning and concept consistent with the technical idea of the present invention, rather than by typical or dictionary interpretation, on the principle that an inventor is allowed to properly define the concept of the terms in order to explain his or her own invention in the best way.
Therefore, because the embodiments disclosed in this specification and the configurations illustrated in the drawings are merely preferred examples of the present invention and do not fully describe its technical idea, it will be appreciated that various equivalents and alterations could replace them as of the filing date of the present application.
A method of using vision information has the advantages that construction and development costs are low and precision within a short distance is high. However, when only the vision information is used, it is difficult to control altitude because only a relative location and a distance are available. To overcome this shortcoming, an embodiment of the present invention also uses a reference trajectory. With the reference trajectory, an altitude optimized for flying toward an aiming point can be maintained because an optimum path connecting given start and end points is generated and traced.
According to an embodiment of the present invention, a front vision camera, an image processing computer and a flight control computer are needed as basic hardware, and algorithms for generating the reference trajectory, processing the obtained image, generating a control command, etc. are required as software, as described below.
The step S10 of generating the reference trajectory includes generating the reference trajectory connecting the align point and the recovery point for recovering the unmanned aerial vehicle.
An initial guidance algorithm for performing the vision-based automatic recovery generates the reference trajectory passing through the respective points P0, P1, P2, P3 and P4.
That is, the reference trajectory is set up to pass through one or more waypoints from the align point to the recovery point, in which a virtual reference trajectory toward the next waypoint is generated before each waypoint is passed, and a control command is issued to follow the generated virtual reference trajectory.
If the position of each waypoint is known in an inertial coordinate system, R and dref can be calculated as shown in the following expression (1). The setup variable δ is then used to calculate dvir, from which Rvir is obtained.
R = √((XWP2−XWP1)² + (YWP2−YWP1)² + (HWP2−HWP1)²)
dref = √((XWP2−Xuav)² + (YWP2−Yuav)² + (HWP2−Huav)²)  Expression (1)
dvir = dref − δ
Rvir = R − dvir  Expression (2)
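As a minimal sketch (not from the patent; the function and variable names are illustrative), the distance bookkeeping of expressions (1) and (2) can be written as:

```python
import math

def trajectory_distances(wp1, wp2, uav, delta):
    """Compute R, d_ref, d_vir and R_vir per expressions (1) and (2).

    wp1, wp2 and uav are (X, Y, H) triples in the inertial frame;
    delta is the setup variable used to place the virtual waypoint.
    """
    # R: total length of the reference-trajectory segment WP1 -> WP2.
    R = math.dist(wp1, wp2)
    # d_ref: remaining distance from the UAV to the next waypoint WP2.
    d_ref = math.dist(uav, wp2)
    # Expression (2): back off by delta, then measure from WP1.
    d_vir = d_ref - delta
    R_vir = R - d_vir
    return R, d_ref, d_vir, R_vir
```

For example, with WP1 at the origin, WP2 at (3, 4, 0) and the vehicle still at WP1, R and dref are both 5, so the virtual point sits δ short of WP2 along the segment.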
To calculate the virtual waypoint on the reference trajectory, an azimuth angle ψref and a path angle γref of the reference trajectory are obtained as follows.
In result, the coordinates of the virtual waypoint can be calculated using the following expression (5).
Xvir=XWP1+Rvir cos ψref cos γref
Yvir=YWP1+Rvir sin ψref cos γref
Hvir=HWP1+Rvir sin γref Expression (5)
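Expression (5) can be sketched as follows. Since expressions (3) and (4) are not reproduced here, the azimuth angle ψref and path angle γref are computed with an assumed atan2/asin form consistent with expression (5); all names are illustrative:

```python
import math

def virtual_waypoint(wp1, wp2, R_vir):
    """Place the virtual waypoint R_vir along the segment WP1 -> WP2.

    The azimuth psi_ref and path angle gamma_ref are computed with
    atan2/asin; this is an assumed form consistent with expression (5).
    """
    dx, dy, dh = (wp2[i] - wp1[i] for i in range(3))
    R = math.sqrt(dx * dx + dy * dy + dh * dh)
    psi_ref = math.atan2(dy, dx)    # azimuth of the reference trajectory
    gamma_ref = math.asin(dh / R)   # path (climb) angle
    # Expression (5): advance R_vir along the reference direction from WP1.
    x_vir = wp1[0] + R_vir * math.cos(psi_ref) * math.cos(gamma_ref)
    y_vir = wp1[1] + R_vir * math.sin(psi_ref) * math.cos(gamma_ref)
    h_vir = wp1[2] + R_vir * math.sin(gamma_ref)
    return x_vir, y_vir, h_vir
```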
The guidance command for flying toward the virtual waypoint on the reference trajectory is issued to thereby trace the reference trajectory. The guidance command for flying toward the virtual waypoint is generated by setting an aiming point between the current position of the unmanned aerial vehicle and the virtual waypoint and using the altitude and azimuth angle of the aiming point. Let a distance from the unmanned aerial vehicle to the aiming point be η. Then, the position of the aiming point can be calculated using the expressions (6) and (7).
The positions (Xaim, Yaim, Haim) of the aiming point are as shown in the following expression (8).
Xaim=Xuav+η cos ψvir cos γvir
Yaim=Yuav+η sin ψvir cos γvir
Haim=Huav+η sin γvir Expression (8)
Since the positions (Xaim, Yaim, Haim) of the aiming point are calculated in expression (8) and the azimuth angle ψvir of the virtual reference trajectory is calculated in expression (6), these values are applied as guidance commands to the controller. At this time, a velocity command may be set to maintain the initial entry velocity for landing.
ψvir → ψcmd, Haim → Hcmd, Vinitial → Vcmd  Expression (9)
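Assuming ψvir and γvir are the azimuth and path angles from the vehicle toward the virtual waypoint (an assumption, since expressions (6) and (7) are not reproduced here), the aiming point of expression (8) and the commands of expression (9) can be sketched as follows; all names are illustrative:

```python
import math

def guidance_commands(uav, vwp, eta, v_initial):
    """Aiming point per expression (8) and commands per expression (9).

    uav and vwp are (X, Y, H) triples; eta is the look-ahead distance.
    psi_vir and gamma_vir are assumed to be the azimuth and path angles
    from the UAV toward the virtual waypoint (expressions (6) and (7)).
    """
    dx, dy, dh = (vwp[i] - uav[i] for i in range(3))
    d = math.sqrt(dx * dx + dy * dy + dh * dh)
    psi_vir = math.atan2(dy, dx)
    gamma_vir = math.asin(dh / d)
    # Expression (8): aiming point eta ahead of the UAV toward the waypoint.
    x_aim = uav[0] + eta * math.cos(psi_vir) * math.cos(gamma_vir)
    y_aim = uav[1] + eta * math.sin(psi_vir) * math.cos(gamma_vir)
    h_aim = uav[2] + eta * math.sin(gamma_vir)
    # Expression (9): azimuth, altitude and (initial entry) velocity commands.
    return {"psi_cmd": psi_vir, "h_cmd": h_aim,
            "v_cmd": v_initial, "aim": (x_aim, y_aim, h_aim)}
```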
The step S20 of image processing includes taking an image containing a symbol (i.e., a net) through the front vision camera provided in the unmanned aerial vehicle, determining the position and size of the symbol in the taken image through the image processing computer, and calculating the distance between the unmanned aerial vehicle and the symbol.
Here, (up, vp) is the position of the symbol in the image, which can be obtained by taking the number of extracted pixels and their area into account. Also, bim is the size of the symbol, which can be obtained from the x or y coordinates of the rightmost and leftmost pixels because the symbol has a symmetrical shape. Using the position and size of the symbol, the center position of the net can be obtained as a function of distance, thereby issuing an align command (i.e., an attitude command) for the unmanned aerial vehicle.
where kuf and kvf are scale factors corresponding to the focal distance f expressed in pixel units along each image axis.
The camera characteristics obtained in expression (12) are used to obtain the azimuth and elevation viewing angles and the position of the symbol relative to the unmanned aerial vehicle. As shown in the following expression (13), the azimuth angle ψVPUCmd and the elevation angle θVPUCmd can be obtained. If the size bim of the symbol is known, the distance Zc to the symbol can be obtained by the following expression (14).
The step S30 of issuing an attitude command generates an attitude command for positioning the symbol at the center of the image taken in the image processing step S20. This step includes calculating the azimuth angle ψVPUCmd and the elevation angle θVPUCmd between the position of the symbol and the center of the taken image through the foregoing expressions (13) and (14), and issuing a command to change the direction of the unmanned aerial vehicle by the calculated azimuth angle ψVPUCmd and elevation angle θVPUCmd.
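A sketch of the pinhole-camera relations behind expressions (13) and (14) follows. The exact expressions are not reproduced in this text, so this is an assumed standard form; all names, including net_width_m, are illustrative:

```python
import math

def attitude_command(u_p, v_p, b_im, u0, v0, f_px, net_width_m):
    """Azimuth/elevation commands and range from the symbol in the image.

    (u_p, v_p) is the symbol centroid in pixels, (u0, v0) the image
    center, f_px the focal length in pixels and net_width_m the true net
    width. An assumed pinhole form of expressions (13) and (14).
    """
    # Angular offsets that would center the symbol in the image.
    psi_cmd = math.atan2(u_p - u0, f_px)    # azimuth command
    theta_cmd = math.atan2(v0 - v_p, f_px)  # elevation command
    # Similar triangles: range from apparent size of a known-width symbol.
    z_c = f_px * net_width_m / b_im
    return psi_cmd, theta_cmd, z_c
```

When the symbol already sits at the image center, both angle commands are zero and only the range estimate remains.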
The step S40 of generating the virtual trajectory generates the virtual trajectory by combining the reference trajectory generated in the reference trajectory generating step with the attitude command generated in the attitude command issuing step, in which the virtual trajectory is generated by increasing the weight of the attitude command as the vehicle moves from the align point to the recovery point.
Here, various methods of moving the weight are possible in the virtual trajectory generating step. For example, the weight of the attitude command may be increased linearly from 0% to 100%, or increased exponentially, as the vehicle moves from the align point to the recovery point, but the method is not limited thereto. In the linear case, the change in the weight of the attitude command is proportional to the distance the unmanned aerial vehicle has traveled between the align point and the recovery point, with the same ratio over any portion of the path. Alternatively, various combinations or other methods are possible.
Further, the distance calculation for moving the weight in the virtual trajectory generating step may be based on the distance between the symbol and the unmanned aerial vehicle calculated in the image processing step.
An embodiment of the present invention achieves high precision and stable control by applying a proper weighting factor to the reference trajectory and the image processing results in accordance with the distance between the unmanned aerial vehicle and the recovery point. In this embodiment, each algorithm is continuously calculated, and a weighting function at the end determines which result is more reliable. According to an embodiment of the present invention, a linear weighting function, which is very simple and capable of weighting by distance, is employed. The following expression (15) gives the weights for generating a final control command.
Bcmd = (1−W)·BTRJCmd + W·(BVPUCmd + Btrim + BVPC)  Expression (15)
where W and 1−W are weighting functions whose sum is 1. Each B multiplied by a weighting function is a matrix consisting of an elevation angle and an azimuth angle: BTRJCmd in the first term refers to the attitude command generated to follow the reference trajectory using navigation information, and (BVPUCmd + Btrim + BVPC) in the last term combines the attitude command generated using vision information, a command considering the characteristics of the flight vehicle, and a correction command due to the installation of the image processing camera.
With this combination, the intrinsic characteristics of the flight vehicle are reflected, and errors that may occur due to the installation or characteristics of the image processing camera are canceled, thereby enabling more precise control.
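The distance-dependent blending described above can be sketched as follows, assuming a linear weight W and treating each B term as an (elevation, azimuth) pair. The function name and the exact weight schedule are illustrative assumptions, not the patent's implementation:

```python
def blend_commands(b_trj, b_vpu, b_trim, b_vpc, d, d_align):
    """Blend trajectory and vision attitude commands per expression (15).

    Each b_* is an (elevation, azimuth) pair; d is the current distance
    to the recovery point and d_align the distance at the align point.
    The linear weight W grows from 0 at the align point to 1 at the
    recovery point, shifting authority toward the vision command.
    """
    w = min(max(1.0 - d / d_align, 0.0), 1.0)
    vision = tuple(b_vpu[i] + b_trim[i] + b_vpc[i] for i in range(2))
    # (1 - W) * B_TRJCmd + W * (B_VPUCmd + B_trim + B_VPC)
    return tuple((1.0 - w) * b_trj[i] + w * vision[i] for i in range(2))
```

At the align point the output equals the trajectory-following command; at the recovery point it equals the corrected vision command.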
Although some embodiments have been described herein with reference to the accompanying drawings, it will be understood by those skilled in the art that these embodiments are provided for illustration only, and various modifications, changes, alterations and equivalent embodiments can be made without departing from the scope of the present invention. Therefore, the scope and spirit of the present invention should be defined only by the accompanying claims and equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
10-2010-0127276 | Dec 2010 | KR | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/KR2010/009187 | 12/22/2010 | WO | 00 | 8/22/2013

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2012/081755 | 6/21/2012 | WO | A

Number | Name | Date | Kind
---|---|---|---
5235513 | Velger | Aug 1993 | A
5716032 | McIngvale | Feb 1998 | A
9064222 | Saad | Jun 2015 | B2
20070023582 | Steele | Feb 2007 | A1
20100084513 | Gariepy | Apr 2010 | A1
20100198514 | Miralles | Aug 2010 | A1
20120050525 | Rinner | Mar 2012 | A1
20120210853 | Abershitz | Aug 2012 | A1
20140192193 | Zufferey | Jul 2014 | A1
20150051758 | Cho | Feb 2015 | A1

Number | Date | Country
---|---|---
10-0842101 | Jun 2008 | KR
10-0985195 | Sep 2010 | KR

Number | Date | Country
---|---|---
20130325222 A1 | Dec 2013 | US