The present invention relates generally to the field of methods and systems for controlling rotary-wing drones.
With the development of robotics and aeronautics, drones (also known as Unmanned Aerial Vehicles, or UAVs) have become increasingly integrated and easier to control, even in non-military contexts. Many applications making use of such devices can now be found in civilian life, for instance surveillance, video recording or even gaming.
Several kinds of drones exist, including fixed-wing drones (plane-like), rotary-wing drones (helicopter-like) and flapping-wing drones (hummingbird-like). Due to the good trade-off between payload, control and price that they offer, rotary-wing drones are the most developed drones on the civil market. For instance, some rotary-wing drones controllable from a remote tablet are now available for sale to non-professional consumers. Such drones are able to embed a video camera and broadcast the captured data over a wireless network.
However, despite the new possibilities offered by this kind of drone to ease its control, a trained human is often still required to pilot the drone, as well as another person to control the embedded camera in the case of simultaneous video recording. In addition, controlling the drone very accurately in terms of position, speed and acceleration, as well as in terms of orientation, may not be possible even for a well-trained pilot. Such a level of control may be especially required when a drone is navigating close to moving persons, or indoors among numerous three-dimensional obstacles.
In order to ease the understanding of the issues met by a person manually operating a rotary-wing drone, some basic notions of flight dynamics are briefly described hereafter. A rotary-wing drone relies only on its rotors to move in space; it is therefore by changing the rotation speed and/or the angular inclination of these rotors that an operator controls the drone. From a dynamic point of view, all the motions performed by the drone can be summarized by the variation of four parameters, also called flight controls: the pitch angle, the roll angle, the yaw speed and the elevation speed. More precisely, the pitch angle governs the longitudinal inclination of the drone and hence its forward or backward displacement, the roll angle governs its lateral inclination and hence its sideways displacement, the yaw speed governs its rotation rate about the vertical axis, and the elevation speed governs its vertical displacement.
It is therefore by continuously adapting these four flight controls that the operator defines a six-dimensional path (three translations and three rotations) to be followed by the drone. One can easily understand the difficulties an operator may encounter in controlling a drone while continuously adapting these four parameters and, at the same time, taking into account the position of the drone, its instantaneous speed and orientation, as well as the constraints imposed by its dimensions and its environment (wind, rain).
The background art discloses several works aiming to make the drone autonomous while following a controlled path. Those works amount to the resolution of a control theory problem, which is defined by the following steps:
Several systems and methods from the background art tend to solve this control problem, but some major technical issues still remain, as mentioned hereafter. As a matter of example, document U.S. 2004/245378 A1 discloses a control method for an unmanned helicopter comprising a GPS (Global Positioning System) and other sensors to measure its elevation. The control method comprises a feedback control loop based on a well-known LQG (Linear Quadratic Gaussian) control implementing independent controllers to control the longitudinal, lateral and vertical displacements of the drone.
This clear dissociation of the controllers for each translation dimension makes the computation of the whole controlling system less optimal, in terms of flight-control commands, than a method that would process the whole dynamics at the same time.
In addition, the longitudinal and lateral controllers use a serial arrangement of separate controllers for velocity and orientation. Hence, it is not possible for an operator to attach more importance to the control of velocity than to that of orientation, or vice versa, as both flight dynamics must be processed serially. Such an architecture does not give the operator enough freedom of use to best answer his current needs.
As another drawback, it shall be noticed that the system disclosed by U.S. 2004/245378 A1 implements a specific model to control the servomotor of a dedicated helicopter. Said model is therefore not easily adaptable to another type of helicopter, which forces the operator to use a single type of drone even though another type might be more suitable for the mission to undertake.
More generally, there are several prior art techniques for controlling UAVs, but they are all designed for a specific type of drone.
It would hence be desirable to provide a method for controlling a rotary-wing drone showing improvements over the prior art.
Notably, it would be desirable to provide such a method that would propose a generic control interface for UAVs and would hence be suited to controlling the path of any rotary-wing drone.
It would also be desirable to provide such a method, which would ease the control of a rotary-wing drone, and which would optimize the path followed to reach its final destination.
It would also be desirable to provide such a method, which would reduce the computation load over the prior art.
It would also be desirable to provide such a control method whose computational complexity would be low enough for it to be embedded in a drone processing unit.
In one particular embodiment of the invention, a method for controlling a path of a rotary-wing drone is disclosed, which comprises steps for:
Said steps of estimating are performed independently.
In the following description, the expression “rotary-wing drone” refers to an Unmanned Aerial Vehicle (UAV) whose displacements in space can be controlled by variations of the four flight controls mentioned hereinbefore: the roll angle, the pitch angle, the yaw speed and the elevation speed. The term “path” refers to a track defined in six dimensions in a global three-dimensional frame, including three translations and three rotations, to be followed by the rotary-wing drone. The term “course” refers to the orientation of the drone about the vertical axis, measured clockwise, this value being linearly related to the yaw angle of the drone. The term “trajectory” refers to the translational displacement of the drone from one point to another. The expression “flight dynamics” refers to the position, the speed and the orientation of the drone, with respect to a global three-dimensional frame.
Based on its features, the present invention relies on a novel and inventive approach to the control of a rotary-wing drone, and includes several advantages and benefits. First of all, the step of estimating the drone's position and its course allows the drone to control its flight dynamics in real time and in an autonomous way. The drone is then able to stick as closely as possible to a specific path that has been assigned to it, while keeping the capacity to adapt its flight dynamics to face any unexpected event that may occur and move it away from its initial path. As a matter of example, one can order the drone to reach some point in space by following a specific path. On the way to its destination, the drone is then moved away from its initial path under the action of a strong wind. Thanks to the step of estimating its position according to the present invention, the drone is able to make a new estimation of the path to follow and to adjust its flight dynamics accordingly. Thus, the present invention allows the operator to transfer part of his decision-making capacity to the drone, in order to ease its control while optimizing the path followed by the drone to reach its final destination.
Another advantage of the invention relies on the estimation of no more than two variables when performing the step of controlling the six-dimensional path to be followed by the drone: the trajectory and the course. This limitation of the number of controlled variables introduced by the invention eases the computation of the whole controlling method while remaining valid enough for a rotary-wing drone, whose trajectory is in general close to planar. The path followed by the drone is also smoother, since the whole dynamics are processed in no more than two operations. In addition, the linearization of the control problem on the basis of the two first-order temporal models implemented by the invention also contributes to reducing the computational load of said controlling method. The controlling method of the invention thus advantageously relies on a compound and coupled model of a generic rotary-wing drone.
Another advantage of the invention relies on the independence of the two sub-steps of estimating the course and the position. In contrast with some methods disclosed in the background art, these two estimating sub-steps do not need to be performed serially. In other terms, the course and the position can be estimated either in parallel or one after the other. An operator also has the option to perform more estimations of one variable than of the other within a given amount of time. The operator is then able to allocate different computing resources to the run of each of these two estimating sub-steps, based on the accuracy required in the determination of the path and on the current activity of the drone.
Another advantage of the present invention relies on the fact that the position and the course are estimated in relation to the flight controls. Nowadays, as mentioned hereinbefore, the displacement of any kind of rotary-wing drone is determined based on these four flight controls. Therefore, the method for controlling according to the present invention can be easily adapted to any kind of rotary-wing drone, provided that the operator inputs a few drone-dependent values prior to the first use. A method for determining these drone-dependent values, based on the running of a basic unitary test, is described hereafter.
In one particular embodiment, the step of controlling the path of said drone comprises estimating a speed of said rotary-wing drone on the basis of said Explicit Discrete Time-Variant State-Space Representation of a translation control of said drone.
An advantage of a method for controlling the drone according to this particular embodiment is that it allows estimating the translational speed that said rotary-wing drone should have to reach its final destination in a required time frame.
In one particular embodiment, the step of controlling the path of said drone comprises estimating the flight controls, which shall be applied to said drone for it to follow a predetermined path.
An advantage of a method for controlling the drone according to this particular embodiment is that the flight dynamics, estimated following the run of the method for controlling, are directly converted into related orders based on a variation of the drone flight controls. As a matter of example, an order based on the variation of the yaw speed of the drone can be given in order to modify its course. A command based on the variation of the pitch angle, the roll angle and the elevation speed of the drone can be given in order to modify its position and its translational speed. The flight controls estimated following the run of the method for controlling can therefore be applied to the drone so that it follows a predetermined path.
According to another aspect, the method for controlling also comprises a step of measuring flight dynamics of said drone, said flight dynamics belonging to the group comprising:
Hence, a flight dynamic belonging to the group mentioned hereinbefore can first be measured and then used in the sub-steps of estimating the position, the speed and/or the course of the drone, in order to make the estimation more accurate. Therefore, a method for controlling according to this particular embodiment is able to take into account the variations of the flight dynamics of the drone occurring during its displacements.
In one particular embodiment, the step of measuring comprises detecting by at least one thermal camera at least one predetermined reference point on said drone.
The use of a thermal camera is an advantageous alternative to other kinds of localization systems known from the background art (GPS, lasers). It is particularly well suited to an indoor environment. Of course, any other measuring technique may also be used according to the invention.
In another particular embodiment, the steps of controlling and measuring are successively repeated at a predetermined period of time k.
At each period of time k, the flight dynamics and flight controls of the drone are updated in order to make the drone follow a path as close as possible to the predetermined one. As a consequence, the value of the period of time k is in direct relation with the efficiency of the method for controlling the drone, regarding both its accuracy and its computation load. Decreasing the period of time k increases the accuracy of the method for controlling, but also increases the computation load required for its implementation. One should therefore define the period of time k as a good trade-off between the advantages and drawbacks mentioned hereinbefore. According to particular embodiments of the invention, different values of the period of time k can be respectively assigned to the sub-steps of estimating the course and the position of the drone, depending on the needs of the operator.
In one particular embodiment, the period of time k is smaller than or equal to 0.1 second.
Series of measurements conducted on basic rotary-wing drones showed that a period of time k equal to or smaller than 0.1 second allows the drone to adapt its path efficiently to unexpected events that may occur in its direct environment. If the operator adopts a value of k greater than 0.1 second, sufficient reactivity of the drone is not guaranteed under normal conditions of use. When using the drone in an environment requiring faster adjustments, the operator has the option to reduce the value of k in order to make the control of the drone comply with his specific needs.
In one particular embodiment, the step of establishing a first-order temporal relation between flight control parameters and flight dynamics for said drone implements at least one of the predetermined coefficients K and T that refer respectively to the linear gain value and the amortization coefficient value of said drone.
Such coefficients are specific to each rotary-wing drone and depend on the mass of the drone, its shape, the power of its rotors and several other parameters known in the background art. An advantage of a method for controlling the drone according to this particular embodiment is that this method takes into account the specific technical features of each kind of rotary-wing drone while remaining easily adaptable to others. Both the coefficients K and T are determined following the run of a unitary test. Such a unitary test consists of a process comprising the following steps:
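The individual test steps are not reproduced here. As a purely illustrative sketch, assuming the unitary test amounts to a standard step-response identification of a first-order model (a constant flight-control step is applied along one axis and the resulting speed is recorded), the linear gain and the amortization coefficient could be read off the response as follows; all function and variable names are hypothetical:

```python
# Minimal sketch of a unitary test along one axis (illustrative only).
# Assumption: the speed responds to a constant flight-control step u0 as a
# first-order system v(t) = K*u0*(1 - exp(-t/tau)), so K and tau can be read
# directly off the recorded step response.
import numpy as np

def estimate_gain_and_time_constant(t, v, u0):
    """Estimate the linear gain K and the amortization coefficient tau
    from a recorded step response.

    t  : array of time stamps (s), starting at the step instant
    v  : array of measured speeds along the tested axis (m/s)
    u0 : amplitude of the constant flight-control step applied at t = 0
    """
    v_ss = np.mean(v[-max(1, len(v) // 10):])  # steady-state speed (last 10% of samples)
    K = v_ss / u0                              # first-order steady-state gain
    # tau is the time at which the response reaches ~63.2% of its final value
    idx = np.searchsorted(v, 0.632 * v_ss)
    tau = t[min(idx, len(t) - 1)]
    return K, tau

# Example with synthetic data (true values K = 2.0, tau = 0.5 s, step u0 = 0.3):
t = np.linspace(0.0, 5.0, 500)
v = 2.0 * 0.3 * (1.0 - np.exp(-t / 0.5))
print(estimate_gain_and_time_constant(t, v, u0=0.3))  # approximately (2.0, 0.5)
```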
In one particular embodiment, the method for controlling the path of a rotary-wing drone comprises a step of inputting spatial coordinates of a geographical point to be reached by the drone.
According to this particular embodiment, the operator inputs the spatial coordinates of a geographical point to be reached by the drone. This input can be performed by manually keying in the spatial coordinates of said point, or by using a more advanced localization interface such as one known from the background art. Once the point to be reached is determined, the method for controlling according to this particular embodiment is able to direct the drone along an optimal path. It shall be noticed that the step of inputting the coordinates can be performed either prior to the flight of the drone, or at any time during its flight. The operator is therefore able to update at any time the geographical point to be reached by the drone.
In one particular embodiment, the method for controlling the drone comprises a step implemented by said drone of locating a point of the space to be reached.
According to this particular embodiment, the drone itself inputs, in an autonomous way, the spatial coordinates of a geographical point to reach. An advantage of this embodiment is the autonomy of the drone in the determination of the point in space to reach. The drone is then able to modify its destination according to a target that can be seen by the drone only, and not by the operator. This superiority of the drone over the operator in the localization of the target to be reached can be due to a better angle of view or to the use by the drone of additional detecting devices. Another advantage of this embodiment is the possibility of defining the position to be reached by the drone according to the position of a moving target. The drone is then able to update, during its flight, the localization of the position to be reached according to the displacements of the assigned target. As a matter of example, this particular technical feature can be implemented in some media applications in which a drone equipped with a camera is instructed to reach, and to remain at, a certain distance ahead of a person walking in the street. According to the displacements of this person, the drone then updates, during its flight, the localization of the point to reach in order to comply with the instructions given by the operator.
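By way of illustration only, the refreshed destination could be derived from the target's measured position and heading as sketched below; the helper name and the fixed offset are assumptions and not part of the original text:

```python
import numpy as np

def destination_ahead_of_target(target_position, target_heading, distance):
    """Return a point located `distance` metres ahead of a moving target in the
    horizontal plane (hypothetical helper, not the invention's actual interface)."""
    direction = np.array([np.cos(target_heading), np.sin(target_heading), 0.0])
    return np.asarray(target_position, dtype=float) + distance * direction

# Each time the target is re-localized, the drone would refresh the point to reach:
new_destination = destination_ahead_of_target(
    target_position=[12.0, 4.0, 0.0],  # measured target position (m)
    target_heading=np.pi / 2,          # measured walking direction (rad)
    distance=3.0)                      # required standoff distance ahead of the target (m)
```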
According to another aspect of the invention, a system for controlling a path of a rotary-wing drone is disclosed, which implements two feedback control loops:
Such a system hence relies on a compound and coupled model of a generic rotary-wing drone, which gives a first-order temporal relation between its flight controls and its flight dynamics. Moreover, it offers a global control architecture integrating such a drone model and a Full State Feedback strategy to control the generic rotary-wing drone. Both the drone model and the architecture are generic enough to control the path of any rotary-wing drone with a generic input interface.
According to yet another aspect of the invention, said first feedback control loop computes:
The advantages of such a system are the same as the advantages related to the method for controlling a drone described here before.
The invention also concerns a rotary-wing drone characterized in that it comprises the system for controlling according to this particular embodiment of the invention.
The invention also concerns a use of a drone according to this particular embodiment of the invention for recording audio-visual data.
As described hereinbefore, an advantage of the use of such a drone for recording audio-visual data is that this drone can capture sounds and pictures from a point in space inaccessible to the operator, or while performing complex motions that could not be conducted by a person with such a level of accuracy, or not without numerous costly and time-consuming additional technical means.
While not explicitly described, the present embodiments may be employed in any combination or sub-combination.
The invention can be better understood with reference to the following description and drawings, given by way of example and not limiting the scope of protection, and in which:
The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
The present invention relates to systems and methods for controlling a rotary-wing drone embodying two feedback control loops that can be performed independently. Many specific details of certain embodiments of the invention are set forth in the following description and in the accompanying figures.
Flight Controls and Dynamics of a Rotary-Wing Drone
The variation of these four flight controls induces the variation of the drone's flight dynamics, namely its position pr, its speed vr and its course cr.
Description of System for Controlling the Path of a Rotary-Wing Drone
Regarding
The system 3 also comprises a course feedback control loop COURSE FCL 8 and a translational feedback control loop TRANS FCL 9.
COURSE FCL 8 is intended to estimate the course cr of the drone 1 on the basis of a course model detailed hereafter, whereas TRANS FCL 9 is intended to estimate the position pr of the drone 1 on the basis of a translational model detailed hereafter.
According to one embodiment of the invention, TRANS FCL 9 is also intended to estimate the speed vr of the drone 1 on the basis of the same translational model.
The system 3 also comprises a flight controls commanding unit 10 that converts the commands u[k] provided by the processing unit 5 into commanding orders transmitted to each of the four rotors 2 of the drone 1, in order to vary their rotational speed and their orientation respectively.
In one embodiment of the invention, the system 3 also comprises sensors 11 that determine in real time, during the flight of the drone 1, the instantaneous values of its flight dynamics. As a matter of example, these sensors 11 can comprise a thermal camera intended to detect the displacements of a predetermined reference point located on the drone 1. In other embodiments of the invention, alternative forms of sensors can be implemented, such as GPS systems or lasers.
In one embodiment of the invention, the system 3 also comprises a camera 12 used to localize the position of obstacles to be avoided by the drone and/or of a point in space to be reached.
According to one embodiment of the invention, the whole system 3 is embedded in a rotary-wing drone 1. According to another embodiment of the invention, part of the system 3 is carried on board while another part remains external to the drone 1, for example the interface H/M 7 and the sensors 11.
Description of a Method for Controlling a Rotary-Wing Drone
After an initial step INIT 13, the operator conducts the step INPUT K&T 14 in which the operator inputs, using the interface H/M 7, the technical features specific to the rotary-wing drone 1.
In one embodiment, these features are limited to the linear gain Kα and the amortization coefficient τα, both determined following the run of a unitary test of the type described in the summary of invention.
In one embodiment, the respective values of the linear gain Kα and the amortization coefficient τα as set for a previous flight are saved by the system 3, for example in the memory 4. These values are then re-used when running a further flight. According to this embodiment, the operator only performs the step INPUT K&T 14 when the correction of at least one of these values is required.
The operator then conducts the step INPUT DEST 15 in which the operator inputs the spatial coordinates of a point of the space to be reached by the drone 1.
According to another embodiment of the invention, the drone 1 locates in an autonomous way the point to be reached, for example by using a built-in camera 12, and then performs by itself the step INPUT DEST 15.
Following the run of the step INPUT DEST 15, the system 3 runs the step M-DYNAMIC 16 in which the system 3 measures at least one of the instantaneous dynamics of the drone 1. This measurement can be performed using the sensors 11.
In one embodiment, the system 3 first runs the step COMPUT COURSE 17 in which the system 3 determines a first command to be implemented on the yaw speed {dot over (ψ)} of the drone 1, by using the COURSE FCL 8.
The system 3 then runs the step COMPUT TRANS 18 in which the system 3 determines a first command to be implemented on the pitch angle θ, the roll angle Φ and the elevation speed ż of the drone 1, by using the TRANS FCL 9.
In another embodiment, the system 3 first runs the step COMPUT TRANS 18 before running the step COMPUT COURSE 17.
In another embodiment, the steps COMPUT COURSE 17 and COMPUT TRANS 18 are run in parallel by the system 3.
In all of these embodiments, it shall be noticed that the steps COMPUT COURSE 17 and COMPUT TRANS 18 are run independently of each other.
Following the run of COMPUT COURSE 17 and COMPUT TRANS 18, the command u2[k] to be implemented on the yaw speed {dot over (ψ)} of the drone 1 and the command u1[k] to be implemented on the pitch angle θ, the roll angle Φ and the elevation speed ż are transmitted to the flight controls commanding unit 10 (step COMMANDING 19).
The succession of the steps M-DYNAMIC 16, COMPUT COURSE 17, COMPUT TRANS 18 and COMMANDING 19 constitutes a general step named CONTROLLING.
Following the run of the step COMMANDING 19, the step CONTROLLING is repeated at a predetermined period of time k equal to or smaller than 0.1 second.
In one embodiment of the invention, the operator can input and correct at will the value of this predetermined period of time k while running the system 3.
Prior to a new iteration of the step CONTROLLING, the operator may decide to define a new point to be reached by the drone (test 20) and therefore to run the step INPUT DEST 15 again before the following steps 16 to 19.
In one embodiment of the invention, this decision 20 can be taken by the drone itself, autonomously, using for example its on-board camera 12 to determine the position of a new point to be reached.
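As a purely illustrative sketch of this sequence of steps, and not of the actual implementation, the CONTROLLING step could be organized as the loop below; the methods of the `system` object are hypothetical placeholders:

```python
import time

def controlling_loop(system, destination, period_k=0.05):
    """Illustrative sketch of the CONTROLLING sequence (steps 16 to 19),
    repeated at a predetermined period of time k <= 0.1 s. The methods of the
    `system` object are hypothetical placeholders, not the actual interface."""
    while not system.destination_reached(destination):
        t_start = time.monotonic()
        y = system.measure_dynamics()                    # M-DYNAMIC 16 (sensors 11)
        u2 = system.compute_course_command(y)            # COMPUT COURSE 17 (COURSE FCL 8), yaw speed command u2[k]
        u1 = system.compute_translation_command(y)       # COMPUT TRANS 18 (TRANS FCL 9), pitch/roll/elevation command u1[k]
        system.send_commands(u1, u2)                     # COMMANDING 19 (commanding unit 10)
        if system.new_destination_requested():           # test 20
            destination = system.get_new_destination()   # INPUT DEST 15 run again
        # wait for the remainder of the period k before the next iteration
        time.sleep(max(0.0, period_k - (time.monotonic() - t_start)))
```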
The following gives additional details about the model according to one embodiment of the invention, i.e. the mathematical formulation linking the flight controls and the flight dynamics of the drone 1. The architecture of the control system 3 is also described.
Model
In the following description, the notations introduced in the accompanying figures are used.
G is the origin of a global frame (G; g1; g2; g3) whereas M is the center of mass of the drone but also the origin of a mobile frame (M; m1; m2; m3).
p(t)=GM(t) (respectively p[k]=GM[k]) is the position vector of the drone in the global frame and v(t) (respectively v[k]) its speed.
Φ(t) (respectively Φ[k]), θ(t) (respectively θ[k]), {dot over (ψ)}(t) (respectively {dot over (ψ)}[k]) and ż(t) (respectively ż[k]) are the flight controls of a rotary-wing drone.
The modeling part described here after relies on the two following assumptions:
Considering the previous assumptions, to model the translation control part, a continuous simple linear amortized model of the speed of the generic rotary-wing drone is initially proposed. It can be mathematically formulated by:
With Kα and τα being respectively the linear gain and the amortization coefficient of the model along each direction. Those gains are drone-dependent and may be easily determined for one specific drone making use of basic unitary tests of the type described in the summary of invention.
In addition, m1, m2 and m3 are considered independent of time during one integration step (due to assumption a)), with
Where ĉ(t−) is an estimate of the course of the drone just before one step of integration.
Under those conditions, it can be shown that {dot over (v)}(t) may be written as:
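The displayed equation itself is not reproduced here. A form consistent with the discretized equation given just after, and with the assumptions stated hereinbefore, would be the following first-order relation; this reconstruction is an assumption, with M(t) the matrix built from the mobile-frame axes evaluated at the estimated course ĉ(t−):

```latex
\dot{v}(t) = -\frac{1}{\tau_\alpha}\, v(t)
           + \frac{K_\alpha}{\tau_\alpha}\, M(t)\, u_1(t),
\qquad
M(t) = \begin{pmatrix} m_1\big(\hat{c}(t^-)\big) & m_2\big(\hat{c}(t^-)\big) & m_3 \end{pmatrix}
```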
After a discretization step, this same equation may be rewritten as:
v[k+1]=(I3−TD[k])v[k]+MD[k]TD[k]Ku1[k]
Where I3 is the identity matrix in dimension 3,
With Δ[k] designating the discretization period of time k.
From this last equation, one can then easily derive an Explicit Discrete Time-Variant State-Space Representation to model the translation control part of the drone.
One has the following Explicit Discrete Time-Variant State Space Representation:
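The representation itself is not reproduced here. One plausible form, given as an assumption consistent with the discrete speed update above and with a simple integration of the position, is:

```latex
x_1[k+1] =
\begin{pmatrix} I_3 - T_D[k] & 0_3 \\ \Delta[k]\, I_3 & I_3 \end{pmatrix} x_1[k]
+ \begin{pmatrix} M_D[k]\, T_D[k]\, K \\ 0_3 \end{pmatrix} u_1[k] + f_1[k],
\qquad
x_1[k] = \begin{pmatrix} v[k] \\ p[k] \end{pmatrix},
\qquad
y_1[k] = C_1\, x_1[k] + h_1[k]
```

Here f_1[k] and h_1[k] would play the same role as the terms f2[k] and h2[k] of the course model hereafter, and C_1 would select the measured components; these symbols are illustrative only.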
The course modeling can be mathematically formulated by:
One has the following Explicit Discrete Time-Variant State Space Representation:
c[k+1]=1c[k]+K{dot over (ψ)}Δ[k]{dot over (ψ)}[k]+f2[k]
and
y2[k]=1c[k]+h2[k]
Where K{dot over (ψ)} is a drone-dependent linear gain that may be easily determined for one specific drone making use of basic unitary tests.
By noting u2[k]={dot over (ψ)}[k], we thus obtain the course control model:
x2[k+1]=a2x2[k]+b2[k]u2[k]+f2[k]
and
y2[k]=c2c[k]+h2[k]
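As a purely illustrative sketch, the two discrete models above can be stepped jointly as follows; the exact forms of TD[k], MD[k] and K are assumptions inferred from the speed update equation given hereinbefore, and the disturbance terms f1[k] and f2[k] are omitted:

```python
import numpy as np

def simulate_one_step(v, p, c, u1, u2, dt, K_alpha, tau_alpha, K_psi):
    """One discretization step of the (assumed) translation and course models.
    v : speed (3,), p : position (3,), c : course (scalar, rad),
    u1 : translation flight controls (3,), u2 : yaw speed command (scalar).
    The forms of T_D[k] and M_D[k] are assumptions consistent with
    v[k+1] = (I3 - T_D[k]) v[k] + M_D[k] T_D[k] K u1[k]."""
    I3 = np.eye(3)
    TD = (dt / tau_alpha) * I3                    # assumed T_D[k]
    K = K_alpha * I3                              # assumed gain matrix
    MD = np.array([[np.cos(c), -np.sin(c), 0.0],  # assumed rotation matrix built
                   [np.sin(c),  np.cos(c), 0.0],  # from the estimated course c
                   [0.0,        0.0,       1.0]])
    v_next = (I3 - TD) @ v + MD @ TD @ K @ u1     # translation model (speed update)
    p_next = p + dt * v                           # simple integration of the position
    c_next = c + K_psi * dt * u2                  # course model
    return v_next, p_next, c_next
```

This sketch also makes the coupling of the two models visible: the translation update depends on the course through MD, while the course is driven only by the yaw speed command u2[k].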
Control Architecture
Given those two models,
As illustrated by
One can notice that the steps COMPUT COURSE 17 and COMPUT TRANS 18 remain independent of each other, since these steps can be performed in parallel or one after the other. In one embodiment, one step is performed more often than the other within a given amount of time. In particular, when the step COMPUT TRANS 18 is performed more often than the step COMPUT COURSE 17 within a given amount of time, the value of the estimated course {circumflex over (x)}2[k+1] remains unchanged between two iterations of the step COMPUT COURSE 17 and is therefore used several times, once at each iteration of the step COMPUT TRANS 18.
The command u[k] at each period of time k is computed from:
where
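The expression and the definitions of its terms are not reproduced here. As an assumption only, a Full State Feedback law consistent with the architecture described hereinbefore could take the form:

```latex
u[k] = \begin{pmatrix} u_1[k] \\ u_2[k] \end{pmatrix}
= -\begin{pmatrix} G_1 & 0 \\ 0 & G_2 \end{pmatrix}
\left( \begin{pmatrix} \hat{x}_1[k] \\ \hat{x}_2[k] \end{pmatrix} - x_{\mathrm{ref}}[k] \right)
```

where G_1 and G_2 would be the feedback gain matrices of the translational and course loops respectively, \hat{x}_1[k] and \hat{x}_2[k] the estimated states, and x_ref[k] the reference state derived from the path to be followed; these symbols are illustrative and not taken from the original text.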