This application claims priority to foreign French patent application No. FR 1871696, filed on Nov. 22, 2018, the disclosure of which is incorporated by reference in its entirety.
The present invention relates to a device and to a method for landing assistance in conditions of reduced visibility.
The technical field of the invention is that of detecting and recognizing an environment in relation to the position of an observer. The primary field of use is that of radar, for landing assistance applications. This invention more precisely targets “EVS” (enhanced vision system) landing assistance systems. The invention could apply to other sensors (for example optical or electro-optical sensors).
The invention notably addresses the problem of assisting the landing of aircraft on a runway in conditions of reduced visibility, in particular in challenging weather conditions such as fog. The standards impose visibility requirements during the landing phase. These requirements are reflected in decision thresholds that refer to the altitude of the aeroplane during its descent phase. At each of these thresholds, identified visual markers must be acquired in order to continue the landing manoeuvre, failing which the manoeuvre has to be abandoned. Abandoned landing manoeuvres represent a real problem for air traffic control and for flight planning. It is necessary, before take-off, to estimate the ability to land at the destination on the basis of weather forecasts, which are more or less reliable, and where applicable to provide backup solutions.
The problem of landing aircraft in conditions of reduced visibility has driven the development of numerous techniques that are in use today.
One of these techniques is the instrument landing system (ILS). The ILS system is based on a radiofrequency device installed on the ground, on the runway, and a compatible instrument situated on board the aircraft. The use of such a guidance system requires expensive equipment and a specific qualification for the pilots. Moreover, it cannot be installed at all airports. This system is not widespread, and it is being phased out.
Another alternative is GPS landing assistance. Although it is sufficiently precise, this solution is too unreliable, since it can easily be jammed, whether intentionally or unintentionally. Its integrity is not guaranteed.
Lastly, an enhanced viewing technique is also used (enhanced vision system, EVS). The principle is that of using sensors with a better performance than the pilot's eye in degraded weather conditions, and of superimposing the collected information in the pilot's field of view by way of a head-up display or on the visor of a headset worn by the pilot. This technique is essentially based on using sensors to detect the radiation from the lamps positioned along the runway and on the approach ramp. Incandescent lamps produce visible light, but they also emit in the infrared range. Sensors in the infrared range make it possible to detect this radiation, and their detection range is better than that of the human eye in the visible range in degraded weather conditions. Improving visibility in this way therefore makes it possible, to a certain extent, to improve approach phases and to limit abandoned approaches. However, this technique relies on stray infrared radiation from the lamps present close to the runway. In order to ensure that the lamps have a long life, the current trend is to replace incandescent lamps with LED lamps. These have a narrower spectrum in the infrared range. One collateral effect is therefore to render infrared sensor-based EVS systems technically obsolete.
One alternative to infrared sensors is that of acquiring images by way of a radar sensor in the centimetre or millimetre band. Certain frequency bands chosen outside of the absorption peaks of water vapour exhibit very low sensitivity to challenging weather conditions. Such sensors therefore make it possible to produce an image through fog, for example. However, even though these sensors have a fine distance resolution, they have an angular resolution that is far coarser than that of optical solutions. The resolution is linked directly to the size of the antenna that is used, and it is often too coarse to position the runway precisely at a distance sufficient to perform recalibration manoeuvres.
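The link between antenna size and angular resolution can be illustrated with the classical beamwidth approximation θ ≈ λ/D. The sketch below uses illustrative values (a 94 GHz millimetre-wave radar and a 30 cm antenna, figures assumed for the example and not taken from the text) to show the resulting cross-range cell size at approach distances:

```python
import math

def angular_resolution_rad(wavelength_m: float, aperture_m: float) -> float:
    """Approximate beamwidth of an antenna: theta ~ lambda / D."""
    return wavelength_m / aperture_m

def cross_range_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    """Cross-range cell size at a given distance from the antenna."""
    return angular_resolution_rad(wavelength_m, aperture_m) * range_m

# Illustrative values (assumptions): 94 GHz -> lambda ~ 3.2 mm, 30 cm antenna.
wavelength = 3.2e-3
aperture = 0.30
print(cross_range_m(wavelength, aperture, 2000.0))  # ~21 m at 2 km
```

A cell of this order at 2 km is far coarser than what an optical sensor resolves, which is why raw radar imagery alone struggles to position the runway precisely.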
There is therefore a need for new technical solutions for guiding the approach manoeuvre for the purpose of landing in conditions of reduced visibility.
One aim of the invention is notably to allow such guidance in conditions of reduced visibility. To this end, one subject of the invention is a landing assistance device for an aircraft for joining up with a given runway, said device detecting and positioning said runway with respect to said aircraft, and comprising at least:
With said interface making it possible to display said runway or symbols representing it, said device comprises for example a display system linked to said functional formatting block. This display system is for example a head-up viewing system or a headset.
Said interface for example supplies flight commands allowing said aircraft to join up with a nominal landing trajectory.
In one particular embodiment, for an aircraft carrying an image acquisition radar sensor, tagging said images comprises at least one of the following indications:
With said collection system comprising a memory for storing said tagged radar images, said memory is for example updated throughout the nominal landings performed by a set of aircraft on said runway. Said storage memory is for example shared by several aircraft equipped with said device for the learning of said learning network.
Said learning network supplies for example a performance indicator that quantifies the positioning precision of said runway with respect to said aircraft, this indicator being acquired through correlation between the image of said runway as calculated by said learning network and a reference image.
The invention also relates to a landing assistance method for an aircraft for joining up with a given runway, said method detecting and positioning said runway with respect to said aircraft, and comprising at least:
Said estimated position is for example transmitted to an interface for displaying the runway or representative symbols by way of a display system.
Said estimated position is for example transmitted to an interface supplying flight commands allowing said aircraft to join up with a nominal landing trajectory.
In one particular mode of implementation, for an aircraft carrying an image acquisition radar sensor, tagging said images comprises at least one of the following indications:
With said collected and tagged radar images being stored in a storage memory, said memory is for example updated throughout the nominal landings performed by a set of aircraft on said runway. Said storage memory is for example shared for the learning of learning networks of several aircraft.
Other features and advantages of the invention will become apparent with the aid of the following description, given with reference to the appended drawings in which:
The device comprises for example a system 5 for displaying the runway, visual markers and relevant navigation data integrated into the pilot's field of view via a head-up display (HUD) viewing system or a headset, any other viewing system being possible.
The radar sensor, carried by the aircraft, operates for example in the centimetre band or in the millimetre band. It makes it possible to position the carrier with respect to a runway on which said carrier wishes to land, independently of the visibility conditions of the pilot.
The radar images are supplied by the sensor at each landing phase, thereby making it possible to continuously enrich the database of the collection system 2. As indicated above, these radar data are acquired during nominal landing manoeuvres, in clear weather, during the day or at night. This acquisition is also performed in various possible aerology and manoeuvring conditions (various types of wind, various angles of arrival, various approach gradients), the information about all of these conditions being contained in the tagging of the images. Once the landing has ended, the radar data acquired during the landing phase are recorded in the database and tagged as forming part of a nominal or possible landing manoeuvre. The tagging comprises at least this nominal landing information, but it may advantageously be expanded to the following additional information depending on the availability on the carrier:
Once they have been tagged, these radar images are used by the neural network 3. They serve to train said neural network. More precisely, the neural network learns the runway on the basis of all of the images that are stored and tagged in the database of the collection system 2. Once it has been trained, the neural network 3 is capable, on the basis of a series of radar images, of positioning the runway and its environment with respect to the carrier, more particularly of positioning the landing point. The series of images at the input of the neural network are the images captured by the radar 1 in the current landing phase.
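The learning stage can be sketched schematically. The stand-in below replaces the neural network 3 with a linear least-squares fit purely for illustration (the true device uses a trained neural network); each tagged radar image becomes a feature vector, and the tag supplies the runway position used as the training target. All array shapes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic stand-in for the learning stage: each tagged radar image is
# flattened into a feature vector, and its tag provides the runway
# position (lateral offset, distance) used as the training target.
n_images, n_pixels = 200, 64
X = rng.normal(size=(n_images, n_pixels))               # tagged radar images
true_W = rng.normal(size=(n_pixels, 2))
Y = X @ true_W + 0.01 * rng.normal(size=(n_images, 2))  # tagged positions

# "Training": a least-squares fit plays the role of the neural network 3.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Inference on a new radar image captured during the current landing.
x_new = rng.normal(size=n_pixels)
estimated_position = x_new @ W  # (lateral offset, distance) estimate
```

The key property mirrored here is that the quality of the estimate depends entirely on the coverage of the tagged training set, as the text notes.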
It is then possible, for the functional block 4, to estimate and to correct the difference between the corresponding trajectory (trajectory of the carrier in the current landing) and a nominal or possible landing trajectory. It is also possible to display the runway in the pilot's field of view by way of the display system 5. The positioning and the trajectory correction become more precise as the base of learning images (stored by the collection system 2) grows.
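The trajectory-difference estimation performed by the functional block 4 can be sketched as a comparison against a nominal glide path. The coordinate convention and the 3° gradient below are assumptions chosen for the example:

```python
import numpy as np

def trajectory_deviation(position, runway_threshold, glide_deg=3.0):
    """Deviation of the carrier from a nominal approach trajectory.

    position, runway_threshold: (x, y, z) in metres, with x along the
    runway axis (negative on approach), y lateral, z altitude.
    glide_deg: nominal approach gradient.
    Returns (lateral_error, vertical_error) in metres.
    """
    rel = np.asarray(position, float) - np.asarray(runway_threshold, float)
    dist = -rel[0]  # distance to the threshold along the runway axis
    nominal_alt = dist * np.tan(np.radians(glide_deg))
    return rel[1], rel[2] - nominal_alt

# 3 km out on a 3 degree slope the nominal altitude is ~157 m, so a
# carrier at 180 m altitude and 25 m off-axis is ~25 m lateral, ~23 m high.
lat, vert = trajectory_deviation((-3000.0, 25.0, 180.0), (0.0, 0.0, 0.0))
```

These two error terms are exactly what the block would translate into display symbols or corrective flight commands.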
This base of learning images may be fed collectively by all of the aircraft that use the same system. Thus, all of the aircraft that land on one and the same runway may enrich this base with the radar images acquired by their radars. Advantageously, each aircraft then benefits from an exhaustive and up-to-date base.
Given that each database is updated from several on-board devices, it is necessary to take into account the biases of each device contributing to a database. These biases are linked in particular to the technological differences on the radar of each device and to installation discrepancies. The recorded data (radar images) are therefore for example also tagged with the information of the carrier, so as to be able to identify and correct the biases. These biases are corrected by the neural network, which utilizes the tagged images so as to position the runway and its environment with respect to the carrier.
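One way to make the carrier information usable for bias identification is to carry it in the tag of each recorded image. The record layout and field names below are hypothetical, chosen only to illustrate the idea of per-carrier bias estimation over a shared database:

```python
from dataclasses import dataclass

@dataclass
class TaggedRadarImage:
    """Hypothetical record for one tagged radar image (field names assumed)."""
    pixels: bytes                  # raw radar image
    runway_id: str                 # runway on which the landing was made
    nominal_landing: bool          # acquired during a nominal manoeuvre
    wind: tuple                    # aerology conditions at acquisition
    approach_gradient_deg: float
    carrier_id: str                # identifies the contributing aircraft, so
                                   # per-carrier sensor/installation biases
                                   # can be identified and corrected

def per_carrier_bias(images, errors):
    """Mean positioning error per carrier over a shared database."""
    acc = {}
    for img, err in zip(images, errors):
        acc.setdefault(img.carrier_id, []).append(err)
    return {cid: sum(v) / len(v) for cid, v in acc.items()}
```

Grouping errors by `carrier_id` in this way is what lets the learning stage separate installation discrepancies from genuine scene content.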
The neural network 3 may also supply a performance indicator for quantifying the positioning and guidance precision of the carrier in real time during landing phases, so as to achieve a degree of confidence. This performance indicator is for example an index of correlation between the image rendered by the neural network and in which the carrier is positioned and a reference image, such as a recorded image or a digital terrain model (DTM).
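The index of correlation mentioned above can be sketched as a normalised cross-correlation between the rendered image and the reference (a recorded image or a DTM); this is a minimal interpretation of the indicator, not the device's exact formula:

```python
import numpy as np

def correlation_index(rendered, reference):
    """Normalised cross-correlation between the image rendered by the
    network and a reference image; 1.0 means a perfect match, values
    near 0 indicate low confidence in the positioning."""
    a = np.asarray(rendered, float) - np.mean(rendered)
    b = np.asarray(reference, float) - np.mean(reference)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

Thresholding such an index in real time gives the degree of confidence the text refers to during the landing phase.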
The convergence and performance metrics associated with the database of each recorded runway may be calculated. They make it possible to evaluate the quality of the guidance able to be achieved by the device on each of the runways.
This quality depends on the size of the database, but also on the environment of the runways and on the quality of the noteworthy structures, which carry all the more weight in the learning of the neural network. Noteworthy structures such as the runway itself or the various approach lamps are systematically encountered. Other elements specific to the environment of each runway play an important role in improving the positioning; these specific elements are for example the fencing of the airport area, antennas or buildings.
The device according to the invention may be provided with a function for viewing the neural network in order to guarantee the observability thereof. This viewing function makes it possible notably to view the highly weighted reference structures and schemes that dominate recognition and marking.
The acquired radar images may be direct radar images or SAR (“synthetic aperture radar”) images. The latter make it possible to refine angular precision while at the same time benefiting from the change in viewing angle of the movement of the carrier.
In one particular embodiment of a device according to the invention, all of the components thereof (radar sensor 1, radar image collection system 2, neural network 3, block 4 for utilizing and formatting the data from the neural network, and display system 5) are on the aircraft. The database of the collection system is regularly enriched with the tagged images from the collection systems of other aircraft. The neural network maintains or improves its learning of the runway or runways as this database is enriched. The learning takes place outside of the landing phases, when the neural network is not called upon to render the parameters of the runway. The learning takes place on the tagged images as stored in the database of the device.
The databases of the collection systems may be updated by any communication means. Each update is performed for example after each nominal landing, at least with the images of the carrier that has just landed. Rules for updating on the basis of the images from the collection systems of other aircraft may be established in particular in order to define the periodicity of these updates and the update modes that are used, notably in terms of the communication means.
The collection system 2 and the neural network and the block 4 for utilizing the data are for example integrated into the flight computer of the aircraft.
In another embodiment, the collection system 2 is not on board the carrier, in particular its storage memory. The tagging function is performed for example in-flight with the captured radar images. The tagged images are sent from each aircraft to the collection system and the storage memory by appropriate communication means, in real time throughout the landing or in a deferred manner, for example after each landing. In this other embodiment, the learning by the neural network is performed on the ground. In this case as well, one and the same memory for storing the tagged images may be shared by several aircraft. Advantageously, the shared storage memory thus comprises a larger amount of data, promoting learning.
The landing method according to the invention, implemented for example by a device of the type in
Two preliminary steps, not shown, relate to collecting images and learning the runway on the basis of the collected images, as described above.
A first step 21 captures a first series of radar images, these images being radar images or SAR images acquired by the radar sensor 1 on board the aircraft. Each radar image is tagged in accordance with the images already recorded in the collection system 2.
In the second step 22, the situation of the carrier with respect to the runway and its environment is estimated by way of the neural network, on the basis of the series of acquired radar images. It is possible to provide a series consisting of a single radar image, the estimation being able to be performed on the basis of a single image.
In a third step 23, the estimation supplied by the neural network 3 is utilized, this utilization being performed by way of the functional block 4 for example. Said functional block supplies the formatted data for display (performed by the display system 5) and in order to supply the flight commands for correcting the trajectory. It also makes it possible to present the confidence indicator calculated by the neural network in a usable form.
At the end of this third step, a test 24 determines whether the aeroplane is in the final landing phase (that is to say at the point of joining up with the runway). If it is not, the method loops back to the first step 21, at which a new series of radar images is acquired. If it is, the final positioning of the aircraft with respect to the runway is reached 25 with the definitive landing trajectory.
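The loop over steps 21 to 25 can be sketched as follows; the callables are assumed interfaces standing in for the radar 1, the neural network 3, the functional block 4 and the display system 5, not APIs defined by the text:

```python
def landing_method(radar, network, functional_block, display, in_final_phase):
    """Schematic loop over steps 21-25 of the landing method."""
    while True:
        images = radar.capture_series()           # step 21: acquire images
        position = network.estimate(images)       # step 22: estimate position
        data = functional_block.format(position)  # step 23: utilize estimate
        display.show(data)                        #          display / commands
        if in_final_phase(position):              # test 24: final phase?
            return position                       # step 25: final positioning
```

Each pass refines the estimate with a fresh series of radar images until the final landing phase is reached.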
Advantageously, the landing by way of a device according to the invention is particularly robust to one-off variations in the environment, for example the presence of vehicles or seasonal vegetation, which pose problems for fixed algorithms. The invention furthermore adapts to long-term variations in the environment, such as new structures or infrastructures for example, by integrating these elements into the learning base.