This application claims priority to foreign French patent application No. FR 1871698, filed on Nov. 22, 2018, the disclosure of which is incorporated by reference in its entirety.
The present invention relates to a learning method for a neural network embedded in an aircraft for assisting in the landing of said aircraft. The invention also relates to a server for implementing such a method.
The technical field of the invention is that of the detection and recognition of an environment relative to the position of an observer. The main field of operation is radar, for landing assistance applications. This invention more specifically targets "EVS" (Enhanced Vision System) landing assistance systems. The invention could be applied to other sensors (for example optical or electro-optical).
The invention addresses in particular the problem of assisting in the landing of aircraft on a landing runway in conditions of reduced visibility, in particular because of difficult weather conditions, for example in case of fog. The standards impose visibility rules for the landing phase. These rules are reflected by decision thresholds tied to the altitude of the aeroplane during its descent phase. At each of these thresholds, identified visual markers must be acquired to continue the landing manoeuvre, failing which it must be aborted. Aborted landing manoeuvres represent a real problem for air traffic management and for flight scheduling. The capacity to land at the destination must be estimated before take-off on the basis of weather forecasts, which are more or less reliable, and, if necessary, fallback solutions must be provided.
The problem of the landing of aircraft in conditions of reduced visibility has been the subject of the development of multiple techniques which are currently used.
One of these techniques is the instrument landing system (ILS). The ILS system relies on radio frequency equipment installed on the ground, at the landing runway, and a compatible instrument placed onboard the aircraft. The use of such a guidance system requires expensive equipment and specific qualification of the pilots. Moreover, it cannot be installed at all airports. It requires maintenance using dedicated calibration aircraft. This system has not been generalised and is currently being withdrawn from operation.
Another alternative is landing assistance by GPS. Although this has sufficient accuracy, the reliability of this solution is too low since it can easily be scrambled, whether deliberately or not. Its integrity is not guaranteed.
Finally, an augmented vision technique is also employed (Enhanced Vision System, EVS). The principle is to use sensors more powerful than the eye of the pilot in degraded weather conditions, and to embed the information collected in the field of view of the pilot, by means of a head-up display or on the visor of a headset worn by the pilot. This technique relies essentially on the use of sensors to detect the radiation of the lamps disposed along the runway and on the approach ramp. The incandescent lamps produce visible light but they also emit in the infrared range. Sensors in the infrared range make it possible to detect these radiations, and the detection range is better than that of the human eye in the visible range in degraded weather conditions. A visibility enhancement therefore makes it possible, to a certain extent, to improve the approach phases and to limit aborted approaches. However, this technique relies on the residual infrared radiation from the lamps present in the vicinity of the runway. In the interests of durability of the lamps, the current trend is to replace incandescent lamps with LED lamps. The latter have a narrower spectrum in the infrared range. A collateral effect is therefore to render EVS systems based on infrared sensors technically obsolete.
An alternative to infrared sensors is to obtain images by a radar sensor, in a centimetric or millimetric band. Some frequency bands chosen outside of the water vapour absorption peaks exhibit a very low sensitivity to difficult weather conditions. Such sensors therefore make it possible to produce an image through fog, for example. However, even if these sensors have a fine distance resolution, they have a much coarser angular resolution than the optical solutions. The resolution is directly linked to the size of the antennas used, and it is often too coarse to obtain an accurate positioning of the landing runway at a distance sufficient to perform adjustment manoeuvres.
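The link between antenna size and angular resolution can be illustrated with orders of magnitude. The following sketch uses assumed, purely illustrative values (antenna aperture, wavelength and range are not taken from the text): the cross-range cell is approximately the beamwidth (wavelength over aperture) multiplied by the range.

```python
def cross_range_resolution(wavelength_m: float, antenna_m: float, range_m: float) -> float:
    """Approximate cross-range resolution of a real-aperture radar.

    The beamwidth is roughly wavelength / antenna size (radians); the
    resolvable cell at a given range is that beamwidth times the range.
    """
    beamwidth_rad = wavelength_m / antenna_m
    return beamwidth_rad * range_m

# Illustrative (hypothetical) values: a 30 cm antenna at ~8.5 mm wavelength
# (millimetric band) observing a runway 5 km away.
cell = cross_range_resolution(0.0085, 0.30, 5000.0)
print(f"cross-range cell = {cell:.0f} m")  # of the order of 100 m: far coarser than optical
```

This order of magnitude shows why a raw radar image alone is too coarse for accurate runway positioning, motivating the SAR processing and learning-based approach described below.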
There is therefore a need for new technical solutions that make it possible to guide the approach manoeuvre with a view to a landing in reduced visibility conditions.
One aim of the invention is notably to allow such guidance in reduced visibility conditions. To this end, the subject of the invention is a learning method for a neural network embedded in an aircraft for assisting in the landing of said aircraft on at least one given runway, said neural network positioning said aircraft relative to the runway, said method using a fleet of aircraft each equipped with at least one radar sensor and comprising at least:
In a particular implementation, said neural network transmits, to a display and/or control means, the trajectory of said aircraft.
Said database comprises, for example, labelled radar images specific to several landing runways, the labelled images comprising identification of the imaged runway.
Each labelled image comprises, for example, the identification of the aircraft having transmitted said image.
In a particular implementation:
The estimation of said bias for a given aircraft and for a given runway is, for example, produced by comparison between at least one radar image obtained by the radar sensor with which said aircraft is equipped and a reference image of said runway and of its environment. Said reference image consists, for example, of a digital terrain model.
The means for transmitting said labelled images between an aircraft and said database are implemented, for example, by the radar sensor with which said aircraft is equipped, the transmissions being performed by modulating the data forming said images onto the radar wave.
In a particular implementation, for an aircraft carrying said radar sensor, the labelling of said images comprises at least one of the following indications:
Said database is, for example, updated over the course of the nominal landings performed by said aircraft on at least said runway.
Another subject of the invention is a server comprising a database for the learning of an embedded neural network for the implementation of the method as described previously, said server being capable of communicating with aircraft. Said neural network is, for example, trained in said server, the trained network being transmitted to at least one of said aircraft.
Other features and advantages of the invention will become apparent from the following description, given in light of the attached drawings which represent:
To guide an aircraft to rejoin a runway, the invention advantageously combines a radar sensor, with very little sensitivity to weather conditions, and a neural network, both embedded in the aircraft. This neural network shares a learning base of radar images with the neural networks of other aircraft, this base being updated collectively by these aircraft via a stream of radar images taken during landing phases. The use of the complete environment of an airport and of the landing runway allows for an accurate positioning using the embedded neural network, trained over several landings.
The part of the landing assistance system used to guide an aircraft is described first. The description is given for rejoining a given runway.
The device comprises, for example, a system 5 for displaying the runway, visual marks and relevant navigation data incorporated in the field of view of the pilot via a head-up display (HUD) system or a headset, any other display system being possible.
The collection block, the neural network and the block 4 for analysing the data are, for example, incorporated in the flight computer of the aircraft.
Once labelled, the radar images are used by the neural network 3 as will be described hereinbelow. They are used to train the latter. More specifically, the neural network performs the learning of the landing runway from all the images stored and labelled in the learning base. Once trained, the neural network 3 is capable, from a series of radar images, of positioning the runway and its environment with respect to the carrier, more particularly of positioning the landing point. The images input to the neural network are the images taken by the radar 1 in the current landing phase.
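The supervised learning step just described can be sketched schematically. In the following toy example, which is not the patented implementation, flattened numerical feature vectors stand in for radar images, the two-component label stands in for the runway position relative to the carrier, and a single linear layer trained by gradient descent stands in for the full neural network 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each "radar image" is flattened to a feature
# vector; the label is the runway offset (e.g. lateral, vertical) seen from
# the carrier, as recorded during a nominal landing.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, 2))
Y = X @ true_w + 0.01 * rng.normal(size=(n, 2))

# One linear layer trained by gradient descent: the simplest schematic of
# the supervised learning step; a real system would use a deep network.
W = np.zeros((d, 2))
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n   # gradient of mean squared error
    W -= lr * grad

mse = float(np.mean((X @ W - Y) ** 2))
print(f"training MSE = {mse:.5f}")
```

Once trained on the labelled base, such a model maps a new series of radar images, taken during the current landing phase, to an estimated position of the runway relative to the carrier.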
It is then possible, for the functional block 4, to estimate and correct the deviation of the corresponding trajectory (trajectory of the carrier in the current landing) relative to a nominal landing trajectory. It is also possible to display the runway in the field of view of the pilot by means of the display system 5.
The radar sensor 1 operates, for example, in a centimetric or millimetric band. It makes it possible to position the carrier with respect to a landing runway on which the latter wants to land, independently of the conditions of visibility of the pilot. The radar images obtained can be direct radar images or images of SAR (Synthetic Aperture Radar) type. The latter make it possible to refine the angular accuracy by exploiting the viewing angle that changes with the movement of the carrier.
The radar images are obtained by the radar sensor 1 in each landing phase, which makes it possible to continuously enrich the database of radar images. As indicated previously, the acquisition of these radar data is performed during nominal landing manoeuvres, in clear weather, during the day or at night. This acquisition is also done in different aerology and manoeuvre conditions (different types of winds, different skewed arrivals, different approach slopes), the information on all these conditions being contained in the labelling of the images. Once the landing is finished, the radar data obtained during the landing phase are recorded in the database and labelled as forming part of a nominal landing manoeuvre. The labelling comprises at least this nominal landing information, but it can advantageously be extended to the following additional information, depending on the availability on the carrier:
Two preliminary steps that are not represented concern the collection of the images and the learning of the runway from the collected images.
A first step 21 performs the taking of a first series of radar images, these images being radar images or SAR images obtained by the radar sensor 1 embedded on the aircraft. Each radar image is labelled consistently with the images already recorded in the collection system 2.
In the second step 22, the situation of the carrier relative to the runway and to its environment is estimated by means of the neural network, from the series of radar images obtained. The series may consist of a single radar image, since the estimation can be performed from a single image.
In a third step 23, the estimation supplied by the neural network 3 is analysed, this analysis being performed by the functional block 4 for example. The latter formats the data for display (performed by the display system 5) and provides the flight controls that make it possible to correct the trajectory. It also presents, in a usable form, the confidence indicator calculated by the neural network.
At the end of this third step (test 24), if the aeroplane is not in the final landing phase (that is to say at the point of rejoining the landing runway), the method loops back to the first step 21, where a new series of radar images is obtained. Otherwise, if the aeroplane is in the final landing phase, the method arrives (25) at the final positioning of the aircraft with respect to the runway, with the definitive landing trajectory.
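The loop formed by steps 21 to 25 can be sketched as follows. All function bodies here are hypothetical stubs standing in for the blocks named in the text (radar 1, neural network 3, analysis block 4, display 5); only the control flow reflects the described method:

```python
from typing import List, Tuple

def acquire_radar_images(step: int) -> List[str]:
    """Step 21: take a new series of radar (or SAR) images (stub)."""
    return [f"img-{step}"]

def estimate_position(images: List[str], altitude: float) -> Tuple[float, float]:
    """Step 22: neural-network estimate of the carrier situation,
    returned here as (position proxy, confidence indicator) (stub)."""
    return (altitude, 0.95)

def analyse_and_correct(position: Tuple[float, float], nominal: float) -> float:
    """Step 23: deviation of the current trajectory from the nominal one (stub)."""
    return position[0] - nominal

altitude, nominal_final = 300.0, 0.0
step = 0
while altitude > 0.0:                    # test 24: loop until the final landing phase
    images = acquire_radar_images(step)
    position = estimate_position(images, altitude)
    deviation = analyse_and_correct(position, nominal_final)
    altitude -= 100.0                    # descent between two image series
    step += 1
print(f"final positioning (25) reached after {step} iterations")
```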
The landing method used by the invention therefore relies on the learning of the landing sequence and of the associated radar images. This method requires learning data (the labelled radar images) to operate appropriately. In particular, the accuracy of the positioning depends on the quantity of learning data available and on their updating.
The method according to the invention uses several aircraft each equipped with the same landing assistance device as that described with respect to
That requires a prior learning step, during which the radar images obtained during nominal landing phases are labelled with the data available during the landing, as described previously. These labelled images are used to train the neural network of the landing assistance device. Once trained, the neural network makes it possible, using the images obtained during a landing in conditions of reduced visibility, to obtain the relative positioning of the carrier with respect to the runway. The operational behaviour has been described with respect to
This network takes as input radar images 31, and optionally data 32 originating from additional sensors, such as GPS for example. Using these data, the neural network establishes the positioning of the carrier with respect to the runway. This positioning includes the attitude of the carrier. It can be enriched with its speed and with the estimated point of touchdown of the wheels.
The method according to the invention uses a fleet of N aircraft A1, . . . AN each equipped with a landing assistance device according to
This communication means can be incorporated in the radar sensor 1 of each embedded device, the data transmissions being performed by modulation of the data on the radar wave. In other words, the modulation of the transmitted radar wave codes the transmitted data.
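The text does not specify the modulation scheme used to code the data on the radar wave. Purely as an illustration of the principle, the following sketch modulates a bit stream onto a sampled carrier by binary phase-shift keying and recovers it by correlation; this is an assumption, not the patented scheme:

```python
import numpy as np

def bpsk_modulate(bits, samples_per_bit: int = 8) -> np.ndarray:
    """Code each bit as one carrier period with phase 0 (bit 1) or pi (bit 0)."""
    t = np.arange(samples_per_bit)
    carrier = np.cos(2 * np.pi * t / samples_per_bit)
    return np.concatenate([(1 if b else -1) * carrier for b in bits])

def bpsk_demodulate(signal: np.ndarray, samples_per_bit: int = 8) -> list:
    """Recover the bits by correlating each chunk with the reference carrier."""
    t = np.arange(samples_per_bit)
    carrier = np.cos(2 * np.pi * t / samples_per_bit)
    chunks = signal.reshape(-1, samples_per_bit)
    return [int(chunk @ carrier > 0) for chunk in chunks]

bits = [1, 0, 1, 1, 0]
recovered = bpsk_demodulate(bpsk_modulate(bits))
print(recovered)  # the transmitted bits are recovered
```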
This server uses these labelled data in order to train neural networks associated respectively with the corresponding runways. The training (or learning) of the neural networks is done from the data stored in the server 41, the learning consisting in particular in learning at least one landing trajectory on the identified runway.
The server sends 43, to the different aircraft, the trained neural networks (forming the functional block 3 in each landing assistance device).
More specifically, in this step 43 of sending a trained neural network to an aircraft, the neural network is transmitted to a means for controlling the trajectory of this aircraft, this means typically being the functional block 3 followed by the formatting and analysis block 4, whose operations have been described previously. This control means allows the trajectory to be displayed to assist in piloting, or directly makes it possible to control and correct the trajectory of the aircraft.
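The pooling of labelled data on the server and the per-runway training can be sketched as follows. The runway identifiers, the feature vectors and the least-squares fit standing in for neural-network training are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pooled database on the server: labelled samples keyed by
# runway identifier, contributed by several aircraft of the fleet.
database = {"RUNWAY-A": [], "RUNWAY-B": []}

def upload(runway: str, features: np.ndarray, label: float) -> None:
    """An aircraft sends a labelled sample for a given runway to the server."""
    database[runway].append((features, label))

# Each nominal landing contributes labelled samples (synthetic here: the
# label is an exact linear function of the features, for illustration).
for _ in range(50):
    for runway in database:
        x = rng.normal(size=4)
        upload(runway, x, float(x.sum()))

def train(samples) -> np.ndarray:
    """Least-squares fit as a schematic stand-in for network training."""
    X = np.stack([s[0] for s in samples])
    y = np.array([s[1] for s in samples])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# One trained model per runway; each is then sent back to every aircraft.
trained = {runway: train(samples) for runway, samples in database.items()}
print({runway: np.round(w, 2).tolist() for runway, w in trained.items()})
```

The design point illustrated here is that every aircraft, including one landing on a runway for the first time, receives a model trained on the whole fleet's data for that runway.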
Given that the different radar sensors 1 can exhibit a bias, notably in the mounting plane on installation in the aircraft, provision is made, according to the invention, to compensate for these different biases.
With this neural network, each aircraft has a landing assistance function relying on an extended database, which advantageously offers good accuracy by virtue of the pooling of the data.
In the example of
This comparison 62 between the labelled radar images (labelled notably with the position of the aircraft) and the reference images makes it possible, for each aircraft A1, . . . AN, to estimate 63 the bias between the image taken by the radar sensor and the point of view of the aircraft projected into the reference images, for example, into the digital terrain model. The main bias is linked to the mounting plane of the radar sensor and it leads to a systematic angular error with respect to a normalised reference frame linked to the axes of the aircraft. The cross-referencing of the data relative to several runways enhances the estimation of this systematic error which can then be finely corrected.
Advantageously, the invention makes it possible to produce a collective database for the learning of the neural networks. A larger dataset is thus obtained which enhances the quality of the learning, in order to achieve a good accuracy of positioning of the runways with respect to the aircraft. In particular, an aircraft landing for the first time on a runway benefits from the collective experience, while taking account of the features specific to its sensor.
Again advantageously, the landing by means of a device according to the invention is particularly robust to random variations of the environment, for example the presence of vehicles or of seasonal vegetation, which pose problems for fixed algorithms. Furthermore, the invention adapts to ongoing variations of the environment, such as new constructions or infrastructures for example, by incorporating these elements in the learning base.