The field of the invention is that of human-machine interactions in the cockpit of an aircraft and, more specifically, that of systems comprising a voice command device and a touch device.
In modern cockpits, interactions between the pilot and the aircraft take place by means of various human-machine interfaces. The main ones occur via interactions with instrument panel display devices which display the main flight and navigation parameters required for the flight plan to be carried out smoothly or for the mission to be executed. Increasingly, touch surfaces, which allow simple interactions with display devices, are used to this end.
In order to further simplify the pilot's interactions with the onboard system, it is possible to use speech as a means for interacting via a voice recognition system.
Voice recognition has been studied experimentally in the field of avionics. In order to guarantee recognition that is compatible with use in an aeronautical environment, which may be noisy, solutions based on a limited dictionary of commands and on user prior learning have been implemented. Furthermore, these solutions require the use of a push-to-talk device, for example a physical button in the cockpit, which allows voice recognition to be triggered or stopped.
It is also possible to use a touch surface in order to trigger voice recognition. Thus, the application WO2010/144732, entitled “Touch anywhere to speak”, describes a system for mobile electronic devices in which voice recognition is triggered through a touch interaction. This application makes no mention of the safety aspects specific to the field of aeronautics and proposes no solution for improving the reliability of voice recognition in noisy environments.
Thus, the current solutions require a physical push-to-talk device, pilot learning of the list of commands available through voice recognition, and a system for acknowledging the result. Moreover, the performance levels of voice recognition generally limit its use.
The method for using a human-machine interface device for an aircraft comprising a speech recognition unit according to the invention does not have these drawbacks. It makes it possible:

- to do away with a physical push-to-talk device;
- to reduce the learning of voice commands required of the pilot;
- to make voice recognition more reliable by restricting it, at any given time, to a limited lexicon associated with the designated command.
It also ensures, in a simple manner, the management of critical commands and non-critical commands. The term “critical command” is understood to mean a command liable to endanger the safety of the aircraft. Thus, starting or stopping the engines is a critical command. The term “non-critical command” is understood to mean a command having no significant impact on flight safety or the safety of the aircraft. Thus, changing a radiocommunication frequency is not a critical command.
More specifically, the subject of the invention is a method for using a human-machine interface device for an aircraft comprising at least one speech recognition unit, one display device with a touch interface, one graphical interface computer and one electronic computing unit, the set being designed to graphically present a plurality of commands, each command being classed in at least a first category, referred to as the critical category, and a second category, referred to as the non-critical category, each non-critical command having a plurality of options, each option having a name, said names being assembled in a database called a “lexicon”, characterized in that, when a command is activated by a user by means of the touch interface:

- the speech recognition unit is activated;
- when the command is critical, the command is validated only if a phrase associated with said command is recognized by the speech recognition unit while the user maintains the touch interaction;
- when the command is non-critical, the lexicon associated with said command is selected and a name uttered by the user is searched for among the names of this lexicon.
Advantageously, when the command is non-critical, the option corresponding to the name in the lexicon is automatically implemented.
Advantageously, the function for activating the speech recognition unit is active only for a limited duration starting from the time at which the command activated by a user by means of the touch interface is recognized.
Advantageously, this duration is proportional to the size of the lexicon.
Advantageously, this duration is less than or equal to 10 seconds.
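Purely by way of illustration, the command structure and duration rule set out above may be sketched in a few lines of Python; the description specifies no implementation, so the names Command, Category and activation_duration, as well as the constant of proportionality between lexicon size and duration, are assumptions.

```python
# Illustrative sketch only; the description specifies no implementation.
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    CRITICAL = "critical"          # e.g. starting or stopping an engine
    NON_CRITICAL = "non-critical"  # e.g. changing a radio frequency


@dataclass
class Command:
    name: str
    category: Category
    # For a non-critical command, the names of its options form its "lexicon".
    lexicon: list[str] = field(default_factory=list)


def activation_duration(command: Command,
                        seconds_per_name: float = 0.05,
                        max_seconds: float = 10.0) -> float:
    # The duration is proportional to the size of the lexicon and capped at
    # 10 seconds, as stated above; the 0.05 s-per-name factor is arbitrary.
    return min(max_seconds, seconds_per_name * len(command.lexicon))
```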
The invention will be better understood and other advantages will become apparent upon reading the following non-limiting description and by virtue of the appended figure.
The method according to the invention is implemented in a human-machine interface device for an aircraft and, more specifically, in its electronic computing unit.
By way of example, the set of means of the human-machine interface device 1 is shown in the appended figure.
The display device 10 is conventionally a liquid crystal flat screen. Other technologies may be envisaged. It presents flight or navigation information, or information on the avionics system of the aircraft. The touch interface 11 takes the form of a transparent touchpad positioned on the screen of the display device. This touchpad is akin to the touchpads implemented on tablets or smartphones intended for the general public. Multiple technical solutions, well known to those skilled in the art, allow this type of touchpad to be produced.
The graphical interface 12 is a computer which, from various data arising from the sensors or from the databases of the aircraft, generates the graphical information sent to the display device. This information comprises a certain number of commands. Each command has a certain number of possible options. For example, the “transmission frequency” command has a certain number of possible frequency options.
The graphical interface 12 also retrieves information arising from the touchpad which is converted into command or validation instructions for the rest of the avionics system.
The speech recognition unit 13 conventionally comprises a microphone 130 and speech processing means allowing the words uttered by a user to be recognized. Here again, these various means are known to those skilled in the art. This unit is configurable in the sense that the lexicons of commands/words to be recognized can be specified thereto at any time.
The speech recognition unit is active only for a limited duration starting from the time at which the command activated by a user by means of the touch interface is recognized. The triggering and stopping of voice recognition is therefore a smart mechanism: recognition is triggered by the touch interaction that designates the command and stops automatically once the limited duration has elapsed, so that it is active only when it actually needs to be called upon.
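A minimal sketch of this timed activation, assuming a monotonic-clock deadline (the description does not prescribe any particular timing implementation):

```python
# Hypothetical sketch of the timed activation described above.
import time


class SpeechRecognitionUnit:
    """Configurable recognizer: its lexicon can be specified at any time."""

    def __init__(self) -> None:
        self.lexicon: list[str] = []
        self._deadline = 0.0

    def configure(self, lexicon: list[str]) -> None:
        self.lexicon = lexicon

    def activate(self, duration_s: float) -> None:
        # Called when a command is designated through the touch interface.
        self._deadline = time.monotonic() + duration_s

    @property
    def active(self) -> bool:
        # Recognition stops by itself once the duration has elapsed, so no
        # physical push-to-talk device is needed to stop it.
        return time.monotonic() < self._deadline
```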
For non-critical commands, the electronic computing unit 14 comprises a certain number of databases, referred to as “lexicons” 140. Each lexicon comprises words or names corresponding to a particular command option. Thus, the “Frequency” command comprises only names indicative of frequency or frequency values.
The electronic computing unit 14 carries out the following specific tasks:

- selecting the lexicon associated with the command designated by means of the touch interface;
- activating the speech recognition unit for the limited duration defined above;
- comparing the name recognized by the speech recognition unit with the names of the selected lexicon and, by means of the gate 144, proposing the corresponding option.
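These tasks may be sketched as follows; the lexicon table, its values and the function names select_lexicon and propose_option are hypothetical, the latter standing in for the role of the gate 144.

```python
# Hypothetical model of the computing unit's tasks; values are illustrative.
LEXICONS: dict[str, list[str]] = {
    # Each lexicon holds only the names relevant to one particular command.
    "frequency": ["118.000", "121.500", "132.250"],
}


def select_lexicon(command_name: str) -> list[str]:
    # Task 1: select the lexicon associated with the designated command.
    return LEXICONS.get(command_name, [])


def propose_option(recognized_name: str, lexicon: list[str]) -> str | None:
    # Task 3 (the role of the gate 144): retain the recognized name only if
    # it belongs to the selected lexicon; otherwise nothing is proposed.
    return recognized_name if recognized_name in lexicon else None
```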
As stated above, there are two types of command, referred to as critical and non-critical commands.
By way of first example, in order to illustrate the operation of the human-machine interface according to the invention in the case of a critical command, it is supposed that a fire has broken out on the left engine and the pilot wishes to stop this engine.
The pilot presses a virtual button displayed on the touch interface which allows the left engine to be stopped and must simultaneously utter “stop left engine” while continuing to press this button. The action is validated by the system only if the phrase “stop left engine” is recognized by the speech recognition unit.
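This simultaneity rule may be reduced to a single hypothetical check; the function validate_critical and its inputs are assumptions, standing in for signals that would come from the touch interface and the speech recognition unit.

```python
# Hypothetical check for the critical-command example above.
def validate_critical(button_pressed: bool,
                      recognized_phrase: str | None,
                      expected_phrase: str = "stop left engine") -> bool:
    # Touch input and voice recognition are redundant: the action is carried
    # out only if the expected phrase is recognized while the virtual button
    # is still being pressed.
    return button_pressed and recognized_phrase == expected_phrase
```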
By way of second example, in order to illustrate the operation of the human-machine interface according to the invention in the case of a non-critical command, it is supposed that the graphical interface is displaying a radio frequency and the pilot wishes to change this frequency.
On a display screen of the cockpit, the current value of said radio frequency for VHF communications is displayed. When the pilot presses the touchpad at the position of the representation of this frequency, voice recognition is triggered for a determined duration and the lexicon allowing radio frequencies to be recognized is selected. This lexicon comprises, for example, a set of particular values. Since the pilot has designated a frequency, he or she can naturally utter a new value for this frequency; voice recognition carries out an analysis according to the lexicon restricted to the possible frequencies. If the recognized word appears in the lexicon, then the gate 144 proposes a text value which is displayed in proximity to the current value. The pilot may or may not validate the new value through a second touch interaction. Validation may also be automatic when the new choice does not entail any negative consequences.
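This flow may be sketched end to end by reusing the hypothetical helpers introduced after the task list above (select_lexicon and propose_option); the pilot's second touch interaction is reduced here to a boolean.

```python
# Hypothetical walk-through of the frequency-change example.
def change_frequency(current_value: str, utterance: str,
                     pilot_confirms: bool) -> str:
    lexicon = select_lexicon("frequency")    # touch designates the frequency
    proposal = propose_option(utterance, lexicon)
    if proposal is None:
        return current_value                 # not in the lexicon: no change
    # The proposal is displayed next to the current value; a second touch
    # interaction validates it (validation may also be automatic when the
    # change entails no negative consequences).
    return proposal if pilot_confirms else current_value


# "121.500" belongs to the illustrative lexicon, so it is proposed and,
# once confirmed, replaces the current value.
assert change_frequency("118.000", "121.500", pilot_confirms=True) == "121.500"
```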
This human-machine interface has the following advantages.
The first advantage is the safety of the device in the case of both critical commands and non-critical commands. Safety is an essential feature of interfaces intended for aeronautical applications. First, voice recognition is restricted to a particular context, the recognition of a frequency in the preceding example, which makes it possible to guarantee a higher level of safety for the device than for devices operating blind. Furthermore, touch information and voice recognition are redundant. Lastly, by limiting the time for which voice recognition is active, unintentional recognitions are avoided and the result of the command can be verified with respect to possible values.
The second advantage is the wider range of options of the device. The combination of touch and voice recognition allows a greater number of commands to be recognized while making the use of voice recognition safe. Specifically, instead of a single lexicon of words to be recognized, voice recognition is based on a plurality of lexicons. Each of these lexicons is of limited size but the sum of these lexicons makes a large number of command options possible.
The third advantage is the highly ergonomic nature of the device. Specifically, the designation of the object to be modified allows the pilot to intuitively know the nature of the voice command to be issued and therefore decreases the learning required by the voice command. Moreover, the selection of the right lexicon and voice recognition are intuitively triggered via a touch interaction on an element of the human-machine interface of the cockpit. This device thus allows the pilot to interact intuitively and efficiently with the onboard system, since touch is used to designate the parameter to be modified and voice is used to give the new value.
The fourth advantage is doing away with a physical “push-to-talk” device, i.e. means for starting and stopping voice recognition. This push-to-talk device is most commonly a mechanical control button. In the device according to the invention, starting and stopping is achieved intelligently, solely when voice recognition must be called upon.
Number | Date | Country | Kind
1502480 | Nov 2015 | FR | national