The present application claims the benefit of European Patent Application No. 23164433.7, entitled PILOT POSTURE RECOGNITION SYSTEM AND METHOD, filed Mar. 27, 2023, which is incorporated by reference in its entirety.
The subject-matter disclosed herein relates to a pilot posture recognition system, e.g., in a cockpit of an aircraft. Particularly, the subject-matter disclosed herein relates to systems and methods for the detection/recognition of pilot posture, pilot position, pilot fall and pilot-object interaction.
Pilot state assessment systems may be used to assess the state of a pilot and/or co-pilot during operation of an aircraft. The position and posture of a pilot and/or co-pilot is an important factor for identifying/predicting fatigue and incapacitation situations during a flight—for example, slouching and/or a loose head position (e.g., neck bending) can be indicators that a pilot is tired or asleep. Thus, the inventors have identified a need for an improved pilot monitoring system to better detect and recognise the state/activity of a pilot and/or a co-pilot.
In a first aspect, there is provided a system comprising: one or more imaging devices configured to collect image data of a cockpit area of an aircraft; a pilot detection module configured to determine a presence of one or more pilots in the cockpit area based on the image data; and a pilot posture module configured to determine three-dimensional posture of the one or more pilots based on the image data.
As used herein, the term ‘module’ should be understood to mean a ‘software module’, e.g., a software component comprising code corresponding to a part of or a whole of a program. A module is configured to be executed by/run on one or more processors of the system.
The pilot posture module may be capable of determining that a pilot is falling or has fallen.
The one or more imaging devices may include at least one conventional charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) based digital camera having a two-dimensional array of photosensitive pixels, optionally provided with the capability to determine range or depth (such as through one or more phase detect elements). The photosensitive pixels are capable of sensing electromagnetic radiation in the visible wavelength range. Alternatively, the one or more imaging devices may include at least one three-dimensional camera such as a time-of-flight camera (e.g., a camera comprising, or associated with, a time-of-flight sensor including, for example, an infrared (IR) emitter and receiver) or other scanning or range-imaging camera capable of imaging a scene in three dimensions. Alternatively, the one or more imaging devices may include a pair of like two-dimensional cameras operating in a stereo configuration and calibrated to extract depth. In some examples, the one or more imaging devices comprise a two-dimensional digital camera configured to image in the visible wavelength range and a time-of-flight sensor (depth sensor), optionally including an IR emitter and receiver.
The one or more imaging devices may comprise a plurality of two-dimensional digital cameras configured to image in the visible wavelength range and a plurality of time-of-flight sensors, optionally each including an IR emitter and receiver.
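By way of illustration only, the following is a minimal sketch of how a synchronised capture from a two-dimensional visible-wavelength camera and a time-of-flight (depth) sensor could be represented in software; the CockpitFrame type and its field names are illustrative assumptions and do not appear in the application.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CockpitFrame:
    """One synchronised capture from a visible-wavelength camera and a time-of-flight sensor."""
    timestamp_s: float   # capture time in seconds
    rgb: np.ndarray      # (H, W, 3) uint8 image from the two-dimensional digital camera
    depth_m: np.ndarray  # (H, W) float32 range map, in metres, from the depth sensor
    device_id: str       # identifies which imaging device/mounting position produced the frame


# Example with dummy data standing in for real sensor output.
frame = CockpitFrame(
    timestamp_s=0.0,
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    depth_m=np.full((480, 640), 2.5, dtype=np.float32),
    device_id="overhead-centre",
)
```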
The modules and models of the system described herein may be configured (e.g., trained) to process/handle image data from a number of different imaging device configurations. In other words, the modules and models may be presented (during training) with a plurality of different view configurations of the cockpit, each view configuration comprising one or more image views (with each image view being provided by a particular imaging device in a particular position). In this way, the modules of the system may be configured to be ‘view independent’, in that they can process/handle image data from any imaging device configuration (i.e., one or more imaging devices arranged in any position(s)) and yet still provide robust and accurate evaluations of the scene in the cockpit.
The system may include a data buffer (or frame buffer) to temporarily store the image data. The image data may be provided from the data buffer to one or more modules of the system, the one or more modules being configured to process the data and derive conclusions. As described above, the system may comprise one or more processors on which the one or more modules may be run.
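By way of illustration only, a minimal sketch of such a data/frame buffer, assuming a fixed-length rolling window of recent frames that the modules read from; the class and method names are illustrative assumptions.

```python
from collections import deque


class FrameBuffer:
    """Temporarily stores the most recent image frames for the modules to process."""

    def __init__(self, max_frames: int = 64):
        # Oldest frames are discarded automatically once the buffer is full.
        self._frames = deque(maxlen=max_frames)

    def push(self, frame) -> None:
        self._frames.append(frame)

    def latest(self, n: int = 1) -> list:
        """Return up to the n most recent frames, oldest first."""
        return list(self._frames)[-n:]


buffer = FrameBuffer(max_frames=64)
buffer.push({"timestamp_s": 0.0})   # e.g. a frame object from an imaging device
window = buffer.latest(16)          # a module pulls a short window of recent frames
```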
The pilot detection module may comprise a pre-trained pilot detection machine learning model. In other words, the pilot detection machine learning model may be an object detection model configured (e.g., trained) to detect/identify a person (e.g., a pilot) in the cockpit.
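By way of illustration only, the following sketch shows a person-detection step of this kind. It uses torchvision's generic COCO-pretrained Faster R-CNN (in which class label 1 is "person") purely as a stand-in for the application's pre-trained pilot detection model; the score threshold is an illustrative assumption, and the weights argument requires torchvision 0.13 or later.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO-pretrained detector used as a stand-in for the pilot detection model.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()


def detect_pilots(rgb: torch.Tensor, score_threshold: float = 0.8) -> torch.Tensor:
    """Return (N, 4) bounding boxes of persons detected in a (3, H, W) float image in [0, 1]."""
    with torch.no_grad():
        prediction = detector([rgb])[0]
    keep = (prediction["labels"] == 1) & (prediction["scores"] >= score_threshold)
    return prediction["boxes"][keep]


# Example with a dummy frame; in the system the frame would come from the data buffer.
boxes = detect_pilots(torch.rand(3, 480, 640))
print(f"{len(boxes)} pilot(s) detected")
```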
The system may comprise an object detection image module configured to determine a presence and three-dimensional position of one or more objects in the cockpit area based on the image data. In the present context, an ‘object’ is understood to refer to inanimate cockpit objects, and in particular to flight deck objects. Relevant objects may include, for example, control levers, control panels including buttons and switches, sidesticks, control pedals, and control wheels/yokes/tillers.
The object detection module may comprise a pre-trained flight deck object detection machine learning model. In other words, the flight deck object machine learning model may be configured (e.g., trained) to detect/identify an object (e.g., a control stick of a flight deck) in the cockpit.
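By way of illustration only, a minimal sketch of how the three-dimensional position of a detected flight deck object could be derived from its two-dimensional bounding box and the depth map, assuming a pinhole camera model; the intrinsic parameters and the helper name are illustrative assumptions.

```python
import numpy as np

# Assumed pinhole camera intrinsics (focal lengths and principal point, in pixels).
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0


def object_position_3d(box, depth_m: np.ndarray) -> np.ndarray:
    """Back-project the centre of an (x1, y1, x2, y2) box to camera coordinates in metres."""
    x1, y1, x2, y2 = box
    u, v = int((x1 + x2) / 2), int((y1 + y2) / 2)
    # Median depth inside the box is robust against missing or noisy depth pixels.
    z = float(np.median(depth_m[int(y1):int(y2), int(x1):int(x2)]))
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])


depth = np.full((480, 640), 1.2, dtype=np.float32)       # dummy depth map, in metres
print(object_position_3d((300, 200, 340, 260), depth))   # e.g. a detected control lever
```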
The system may comprise a pilot-object module configured to, if the pilot posture module detects that one or more pilots have fallen, determine that a pilot-object interaction event has occurred based on the three-dimensional posture of the one or more pilots and the three-dimensional positions of the one or more objects.
Detection of a pilot-object interaction may be carried out by a pre-trained long short-term memory (LSTM) network model. Herein, an LSTM network model is a deep neural network model capable of remembering a plurality of past image frames, and predicting a plurality of future image frames. The pilot-object LSTM network model may have been pre-trained using a dataset comprising videos of pilot interactions with flight deck objects.
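By way of illustration only, a minimal sketch of an LSTM network of this kind, classifying a short window of per-frame features (e.g., pilot keypoints together with object positions) as interaction/no interaction; the feature layout and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class InteractionLSTM(nn.Module):
    """Classifies a sequence of per-frame feature vectors as pilot-object interaction or not."""

    def __init__(self, feature_dim: int = 48, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, sequence: torch.Tensor) -> torch.Tensor:
        # sequence: (batch, frames, feature_dim), built from past frames in the buffer.
        output, _ = self.lstm(sequence)
        return self.head(output[:, -1, :])   # classify from the final time step


model = InteractionLSTM()
logits = model(torch.randn(1, 30, 48))        # e.g. a 30-frame window of features
print(logits.softmax(dim=-1))                 # [P(no interaction), P(interaction)]
```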
The system may be configured to pass the determined three-dimensional posture and/or the determined pilot-object interaction event to an alert system.
The alert system may be configured to: verify that an action has been input into a flight system log of the aircraft as a result of the determined pilot-object interaction event; and alert a control system of the aircraft that the action input was a result of the determined pilot-object interaction event.
The alert system may be configured to alert aircraft crew and/or ground control and/or Air Traffic Control that one or more pilots have fallen.
The pilot posture module may comprise a depth information module configured to extract depth information from the image data; and a pilot pose module configured to identify a plurality of two-dimensional keypoints corresponding to features of the one or more pilots in the image data, where the pilot posture module is configured to combine the depth information and the two-dimensional keypoints to determine a three-dimensional posture of the one or more pilots in the cockpit.
The pilot pose module may identify the plurality of two-dimensional keypoints by use of a pre-trained human pose estimation model.
The two-dimensional keypoints may comprise the location of the pilot's eyes, ears, nose, shoulders, elbows, wrists, and hips.
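By way of illustration only, a sketch of extracting two-dimensional keypoints and combining them with the depth information. Torchvision's COCO-pretrained Keypoint R-CNN is used purely as a stand-in for the application's pre-trained human pose estimation model (its 17 COCO keypoints include the nose, eyes, ears, shoulders, elbows, wrists and hips); the camera intrinsics are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0   # assumed pinhole camera intrinsics (pixels)

pose_model = keypointrcnn_resnet50_fpn(weights="DEFAULT")
pose_model.eval()


def pilot_keypoints_3d(rgb: torch.Tensor, depth_m: np.ndarray) -> np.ndarray:
    """Return (17, 3) camera-space keypoints, in metres, for the highest-scoring person."""
    with torch.no_grad():
        person = pose_model([rgb])[0]
    if len(person["keypoints"]) == 0:
        return np.zeros((17, 3))                        # no pilot detected in this frame
    kpts_2d = person["keypoints"][0, :, :2].numpy()     # (17, 2) pixel coordinates
    points = []
    for u, v in kpts_2d:
        row = int(np.clip(v, 0, depth_m.shape[0] - 1))
        col = int(np.clip(u, 0, depth_m.shape[1] - 1))
        z = float(depth_m[row, col])                    # depth at the keypoint location
        points.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
    return np.asarray(points)


# Example with dummy inputs; real frames would come from the imaging devices.
kpts_3d = pilot_keypoints_3d(torch.rand(3, 480, 640), np.full((480, 640), 1.5, np.float32))
```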
Determination of the three-dimensional posture of the one or more pilots may be carried out by a pre-trained LSTM network model. The pilot posture LSTM network model may have been pre-trained using a dataset comprising images and videos, e.g., of one or more pilots in the cockpit.
The LSTM network model may be configured (e.g., trained) to determine feature relationships between keypoints (i.e., the three-dimensional distances and angles between them) and to utilise the determined feature relationships to identify a three-dimensional posture. During (pre-)training, the LSTM network model will learn to recognise or classify different postures based on characteristic/distinctive feature relationships.
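By way of illustration only, a sketch of the kind of feature relationships referred to above: pairwise three-dimensional distances between keypoints plus one or more joint angles. The particular keypoint indices used for the angle are illustrative assumptions (COCO ordering: 0 = nose, 5/6 = shoulders).

```python
from itertools import combinations

import numpy as np


def pairwise_distances(kpts_3d: np.ndarray) -> np.ndarray:
    """Flat vector of Euclidean distances between every pair of (K, 3) keypoints."""
    return np.array([np.linalg.norm(kpts_3d[i] - kpts_3d[j])
                     for i, j in combinations(range(len(kpts_3d)), 2)])


def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle (radians) at keypoint b formed by the segments b-a and b-c."""
    v1, v2 = a - b, c - b
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.arccos(np.clip(cosine, -1.0, 1.0)))


kpts = np.random.rand(17, 3)                   # stand-in for one frame's 3-D keypoints
features = np.concatenate([
    pairwise_distances(kpts),                  # 136 distances for 17 keypoints
    [joint_angle(kpts[0], kpts[5], kpts[6])],  # e.g. a head/neck bend proxy (nose-shoulders)
])
```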
In another aspect, there is provided a method, comprising: collecting, using one or more imaging devices, image data of a cockpit area of an aircraft; determining, using a pilot detection module, a presence of one or more pilots in the cockpit area based on the image data; and determining, using a pilot posture image module, three-dimensional posture of the one or more pilots based on the image data.
The method may comprise use of the system of the first aspect, optionally including any of the optional features thereof.
The method may comprise determining, using the pilot posture recognition module, that a pilot is falling or has fallen.
The method may comprise determining, using an object detection image module, a presence and three-dimensional position of one or more objects in the cockpit area based on the image data.
After determining that one or more pilots have fallen, the method may comprise determining, using a pilot-object module, that a pilot-object interaction event occurred based on the three-dimensional posture of the one or more pilots and the three-dimensional positions of the one or more objects before, during, and/or after the fall. In other words, if the pilot posture recognition module identifies a fall event of one or more pilots, the pilot-object module will determine (based on the three-dimensional posture of the one or more pilots and the three-dimensional positions of the one or more objects at the time of the fall event) whether a pilot-object interaction event occurred (e.g., identifying whether the pilot(s) knocked any controls as they fell down).
The method may comprise verifying, using an alert system, that an action has been input into a flight system log of the aircraft as a result of the determined pilot-object interaction event; and alerting, using the alert system, a control system of the aircraft that the action input was a result of the determined pilot-object interaction event.
The method may comprise alerting, using an alert system, aircraft crew and/or ground control and/or Air Traffic Control that one or more pilots have fallen.
The method may comprise extracting, using a depth information module of the pilot posture image module, depth information from the image data; identifying, using a pilot pose module of the pilot posture image module, a plurality of two-dimensional keypoints corresponding to features of the one or more pilots in the image data; and combining, using the pilot posture image module, the depth information and the two-dimensional keypoints to determine three-dimensional posture of the one or more pilots in the cockpit.
The modules and models of the systems 100, 200 described herein may be configured (e.g., trained) to process/handle image data from a number of different imaging device configurations. In other words, the modules and models may be presented (during training) with a plurality of different view configurations of the pilot(s)/cockpit, each view configuration comprising one or more image views (with each image view being provided by a particular imaging device 202 in a particular position/orientation). In this way, the modules of the system 100, 200 may be configured to be ‘view independent’, in that they can process/handle image data from any imaging device configuration (i.e., one or more imaging devices 202 arranged in any position(s)/orientation(s)) and yet still provide robust and accurate evaluations of the scene in the cockpit.
The depth information module 211 extracts depth information from the image data, and the pilot pose module 213 identifies a plurality of two-dimensional keypoints corresponding to features of the one or more pilots in the image data (e.g., by use of a pre-trained human pose estimation model).
The processed image data from the depth information module 211 and the processed image data from the pilot pose module 213 are then combined by the posture position module 214 to create a three-dimensional reconstruction of the pilot's pose (e.g., to determine three-dimensional positions of the keypoints of the one or more pilots in the cockpit). The three-dimensional keypoints of the one or more pilots in the cockpit are then fed into an LSTM network model of the posture position module 214. The LSTM network model is configured, e.g., pre-trained, to determine feature relationships between the keypoints (i.e., the three-dimensional distances and angles between each pair of keypoints) and to utilise the determined feature relationships to identify a three-dimensional posture. During training, the LSTM network model will learn to recognise or classify different postures based on characteristic/distinctive feature relationships. Accordingly, the LSTM network model is configured, e.g., trained, to recognise/infer postures of the one or more pilots from the three-dimensional keypoints (e.g., postures including at least: head/neck leaning forward, backward, to the right or to the left; body leaning to the right or to the left; normal upright; slouching; sliding down the seat; falling off the seat; etc.).
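By way of illustration only, a minimal sketch of this final classification step, assuming one feature vector per frame (e.g., the 137-dimensional vector of keypoint distances and angles sketched above) stacked into a short sequence and classified by an LSTM into the posture classes named above; the layer sizes and exact label set are illustrative assumptions.

```python
import torch
import torch.nn as nn

POSTURE_LABELS = [
    "normal upright", "slouching", "sliding down seat", "falling off seat",
    "head/neck leaning forward", "head/neck leaning backward",
    "head/neck leaning right", "head/neck leaning left",
    "body leaning right", "body leaning left",
]

lstm = nn.LSTM(input_size=137, hidden_size=128, num_layers=2, batch_first=True)
classifier = nn.Linear(128, len(POSTURE_LABELS))


def classify_posture(feature_sequence: torch.Tensor) -> str:
    """feature_sequence: (1, frames, 137) sequence of per-frame keypoint feature vectors."""
    output, _ = lstm(feature_sequence)
    logits = classifier(output[:, -1, :])      # classify from the last time step
    return POSTURE_LABELS[int(logits.argmax(dim=-1))]


print(classify_posture(torch.randn(1, 30, 137)))   # e.g. a 30-frame window of features
```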
Once the pilot posture(s) have been determined by the LSTM network model of the posture position module 214, the information is sent to the decision module 120 and/or an alert system/module 215, which may additionally be provided with information from other sensors and subsystems in the system (e.g., from subsystems 112a-e). The decision module 120 and/or alert system/module 215 can then determine a final pilot state 130 and decide if an alert needs to be raised.
Another exemplary usage of the system is described below.
In the event that the system determines that one or more pilots have fallen (e.g., the posture position module 214 indicates that a pilot is falling/has fallen), the processed image data from the object detection module 220 (e.g., the three-dimensional object positions) and the processed image data from the posture position module 214 (e.g., the three-dimensional postures of the one or more pilots P1, P2) are passed to the pilot-object interaction module 222. As described above, the pilot-object interaction module 222 may comprise a pre-trained LSTM network model configured to determine, from the three-dimensional postures and the three-dimensional object positions before, during, and/or after the fall, whether a pilot-object interaction event has occurred (e.g., whether the pilot(s) knocked any controls as they fell).
If the system determines that a pilot-object interaction has occurred after a fall, then this information is passed to the decision module 120 and/or alert system/module 215. The decision module 120 and/or alert system 215 is configured to check the time at which the determined pilot-object interaction event occurred and identify if an action was input to a flight system log at the time of the determined pilot-object interaction event. If an action was input to the flight system log at the same time as a determined pilot-object interaction event, then the alert system 215 is configured to alert an aircraft management system/control system that the action input was a result of the determined pilot-object interaction. Accordingly, the aircraft management system/control system may be instructed to automatically address the action input (e.g., to rectify or undo the action performed by the inadvertent interaction with the flight controls), or notify aircraft crew (e.g., the remaining one or more pilots) and/or ground control and/or Air Traffic Control. Similarly, the alert system 215 may alert aircraft crew and/or ground control and/or Air Traffic Control that one or more pilots have fallen (e.g., even if a pilot-object interaction has not occurred).
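By way of illustration only, a minimal sketch of the log-correlation check described above: given the time of a determined pilot-object interaction event, find any action input into the flight system log within a small time window and flag it as a result of the interaction. The LogEntry fields, the window size, and the function name are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LogEntry:
    timestamp_s: float   # time at which the action was input into the flight system log
    action: str          # e.g. "AUTOPILOT_DISENGAGED"


def actions_caused_by_interaction(log: List[LogEntry],
                                  interaction_time_s: float,
                                  window_s: float = 2.0) -> List[LogEntry]:
    """Return log entries close enough in time to be attributed to the interaction event."""
    return [entry for entry in log
            if abs(entry.timestamp_s - interaction_time_s) <= window_s]


flight_log = [LogEntry(1042.3, "AUTOPILOT_DISENGAGED"), LogEntry(980.0, "FLAPS_SET_2")]
for entry in actions_caused_by_interaction(flight_log, interaction_time_s=1041.8):
    # In the system, this would trigger an alert to the aircraft management/control
    # system and/or crew that the action input resulted from the inadvertent interaction.
    print(f"Action '{entry.action}' attributed to pilot-object interaction event")
```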
In other words, the system 200 is capable of recognising a first anomalous event (e.g., a pilot fall) and instructing the alert system/module 215 to raise a corresponding alert (e.g., to notify at least one of the aircraft management system, aircraft crew, ground control, and Air Traffic Control), and capable of recognising a second anomalous event (e.g., a pilot-object interaction after a pilot fall) and instructing the alert system/module 215 to check if a flight action input was a result of the determined pilot-object interaction and, if so, to raise a corresponding alert (e.g., to notify at least one of the aircraft management system, aircraft crew, ground control, and Air Traffic Control).
Although this disclosure has been described in terms of preferred examples, it should be understood that these examples are illustrative only and that the claims are not limited to those examples. Those skilled in the art will be able to make modifications and alternatives in view of the disclosure which are contemplated as falling within the scope of the appended claims.
Foreign Application Priority Data: 23164433.7, Mar. 27, 2023, EP (regional).