Geographically separated teams, such as pilots and ground control or trainees and instructors, experience challenges assessing each other's level of engagement and attention when they cannot see what others are seeing or how they are reacting. Gaze and eye movements are key indicators of attention, but remote participants do not share a common point of reference. Further, remote teams may not be able to assess other cues such as facial expressions.
In one aspect, embodiments of the inventive concepts disclosed herein are directed to a team monitoring system that receives data for determining user engagement for each team member. A team engagement metric is determined for the entire team based on individual user engagement correlated to discrete portions of a task. User engagement may be determined based on arm/hand positions, gaze and pupil dynamics, and voice intonation.
In a further aspect, individual user engagement is weighted according to a task priority for that individual user at the time.
In a further aspect, the system determines a team composition based on individual user engagement during a task and team engagement during the task, even where the users have not engaged as a team during the task.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and should not restrict the scope of the claims. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments of the inventive concepts disclosed herein and together with the general description, serve to explain the principles.
The numerous advantages of the embodiments of the inventive concepts disclosed herein may be better understood by those skilled in the art by reference to the accompanying figures in which:
Before explaining various embodiments of the inventive concepts disclosed herein in detail, it is to be understood that the inventive concepts are not limited in their application to the arrangement of the components or steps or methodologies set forth in the following description or illustrated in the drawings. In the following detailed description of embodiments of the instant inventive concepts, numerous specific details are set forth in order to provide a more thorough understanding of the inventive concepts. However, it will be apparent to one of ordinary skill in the art having the benefit of the instant disclosure that the inventive concepts disclosed herein may be practiced without these specific details. In other instances, well-known features may not be described in detail to avoid unnecessarily complicating the instant disclosure. The inventive concepts disclosed herein are capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
As used herein a letter following a reference numeral is intended to reference an embodiment of a feature or element that may be similar, but not necessarily identical, to a previously described element or feature bearing the same reference numeral (e.g., 1, 1a, 1b). Such shorthand notations are used for purposes of convenience only, and should not be construed to limit the inventive concepts disclosed herein in any way unless expressly stated to the contrary.
Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of “a” or “an” is employed to describe elements and components of embodiments of the instant inventive concepts. This is done merely for convenience and to give a general sense of the inventive concepts, and “a” and “an” are intended to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
Also, while various components may be depicted as being connected directly, direct connection is not a requirement. Components may be in data communication with intervening components that are not illustrated or described.
Finally, as used herein any reference to “one embodiment” or “some embodiments” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the inventive concepts disclosed herein. The appearances of the phrase “in at least one embodiment” in the specification do not necessarily refer to the same embodiment. Embodiments of the inventive concepts disclosed may include one or more of the features expressly described or inherently present herein, or any combination or sub-combination of two or more such features.
Broadly, embodiments of the inventive concepts disclosed herein are directed to a team monitoring system that receives data for determining user engagement for each team member. A team engagement metric is determined for the entire team based on individual user engagement correlated to discrete portions of a task. User engagement may be determined based on arm/hand positions, gaze and pupil dynamics, and voice intonation. Individual user engagement is weighted according to a task priority for that individual user at the time. The system determines a team composition based on individual user engagement during a task and team engagement during the task, even where the users have not engaged as a team during the task.
We describe a method for capturing the gaze, facial expressions, and intonation of each operator and inferring their individual levels of engagement with the team task. This invention combines gaze, facial expression, and intonation to assess the level of engagement of each individual team member. Speech analysis is added to evaluate turn-taking and reaction-time responses to further assess engagement of participants with the group activity, providing a measure of the team's level of collective engagement.
Referring to
In at least one embodiment, one or more audio/video sensors 108 record eye movement/gaze of a user, eyelid position, hand/arm position and movement, other physical data landmarks, and voice intonation and volume. The processor executable code configures the processor 102 to continuously log the audio/video sensor data in a data storage element 106. The processor 102 analyzes the audio/video sensor data to identify gaze and pupil dynamics (e.g., pupil response and changes over time), and a physical pose estimate for the user.
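As a non-limiting illustration, pupil dynamics may be characterized from logged samples as sketched below; the sampling format and example values are illustrative assumptions, and extraction of pupil diameter from the video stream is presumed to have already occurred.

```python
# Minimal sketch: characterizing pupil dynamics (changes over time) from
# samples already extracted from the audio/video sensor stream.
from statistics import mean

def pupil_dynamics(samples):
    """samples: list of (timestamp_s, pupil_diameter_mm) tuples in time order."""
    rates = []
    for (t0, d0), (t1, d1) in zip(samples, samples[1:]):
        if t1 > t0:
            rates.append((d1 - d0) / (t1 - t0))  # dilation rate in mm per second
    return {
        "mean_diameter_mm": mean(d for _, d in samples),
        "mean_dilation_rate": mean(rates) if rates else 0.0,
    }

# Example: pupil dilating while a stimulus (e.g., an alert) is presented.
print(pupil_dynamics([(0.0, 3.1), (0.5, 3.3), (1.0, 3.6)]))
```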
In at least one embodiment, the audio/video sensor data are correlated with discrete portions of a task, and/or specific stimuli such as instrument readings, alerts, or the like. Furthermore, the processors 102 from each node 100, 116 correlate audio/video sensor data from different users engaged in a common or collective task. Each processor 102 may receive different discrete portions of a task, specific stimuli, and alerts based on the specific function of the user; such different discrete portions, stimuli, and alerts are correlated in time such that user responses may be individually analyzed and correlated to each other to assess total team engagement. In at least one embodiment, team engagement may be weighted according to a priority of team members in time. For example, a first node 100 may analyze the engagement of a first team member performing a critical portion of the task while a second node 116 analyzes the engagement of a second team member simultaneously performing a less critical portion of the task. The assessment of total team engagement may be weighted toward the engagement of the first team member.
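As a non-limiting illustration, a priority-weighted team engagement score for one discrete portion of a task may be computed as sketched below; the member identifiers, engagement values, and priorities are illustrative assumptions.

```python
# Minimal sketch of a priority-weighted team engagement score for one
# discrete portion of a task.
def team_engagement(member_scores, priorities):
    """member_scores and priorities are dicts keyed by team member identifier.
    Members performing higher-priority task portions dominate the result."""
    total_weight = sum(priorities.values())
    if total_weight == 0:
        return 0.0
    return sum(member_scores[m] * priorities[m] for m in member_scores) / total_weight

# First member performs a critical portion; second, a less critical one.
print(team_engagement({"pilot": 0.9, "ground": 0.4}, {"pilot": 3.0, "ground": 1.0}))
```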
In at least one embodiment, total team engagement may be at least partially based on lag between team members. Each processor 102 may identify a delay between team member actions and determine if the delay exceeds some threshold. For example, the engagement of the first team member may be partially based on the first team member's response to the actions of the second team member as communicated to the first node processor 102 by the second node processor 102 via corresponding data communication devices 112.
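As a non-limiting illustration, the lag check may be sketched as follows; the threshold value is an illustrative assumption.

```python
# Minimal sketch: flagging excessive lag between one member's action and
# another member's response.
def lag_exceeds_threshold(action_time_s, response_time_s, threshold_s=2.0):
    """Return whether the responder's delay exceeds the threshold, and the delay."""
    delay = response_time_s - action_time_s
    return delay > threshold_s, delay

exceeded, delay = lag_exceeds_threshold(100.0, 103.5)
print(exceeded, delay)  # True 3.5 -> may reduce the responder's engagement assessment
```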
In at least one embodiment, each node 100, 116 may share audio/video sensor data between nodes 100, 116 via the data communication device 112 to render on a corresponding display 114. The first team member's engagement may be characterized with respect to the second team member's audio/video sensor data (facial expression, voice intonation, etc.). Furthermore, the second node 116 may provide a real-time engagement assessment of the second team member to other team members; the engagement of the first team member may be at least partially characterized with respect to the engagement assessment of the second team member. For example, the first node 100 may identify a gaze of the first team member and determine that the first team member is observing the second team member or the engagement assessment of the second team member; the second team member's engagement may be associated with a predicted or characteristic response of the first team member. The engagement of the first team member may be assessed with respect to the speed and appropriateness of their response to the second team member, including interacting with the second team member or assuming some of their functions.
Each processor 102 may also receive physiological data from one or more corresponding physiological sensors 110. In at least one embodiment, the processor 102 may correlate audio/video sensor data (including at least gaze, pupil dynamics, and voice intonation) with physiological data. The processor 102 may compare the audio/video and physiological data to stored profiles. Such profiles may be specific to the user.
In at least one embodiment, the processor 102 transfers the stored audio/video sensor data and other correlated system and task data to an offline storage device for later analysis and correlation to historic data and other outside factors such as crew rest, crew sleep rhythms, flight schedules, etc. Such transfer may be in real time via the wireless communication device 112. Furthermore, team members may be correlated against each other and other team members for similar tasks to identify teams with complementary engagement patterns. For example, offline analysis may identify a team wherein no team members consistently demonstrate reduced engagement at the same time during similar tasks.
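As a non-limiting illustration, the offline complementarity check may be sketched as follows, assuming historic engagement scores have been aligned per task portion across comparable tasks; the data layout and low-engagement threshold are illustrative assumptions.

```python
# Minimal sketch of the offline complementarity check: a candidate team is
# flagged as complementary only if no two members show reduced engagement
# during the same portion of similar historic tasks.
def complementary(history, low=0.5):
    """history: dict mapping team member -> list of engagement scores per task
    portion, aligned across members for comparable historic tasks."""
    members = list(history)
    portions = len(next(iter(history.values())))
    for i in range(portions):
        if sum(history[m][i] < low for m in members) > 1:
            return False  # two or more members showed reduced engagement together
    return True

print(complementary({"a": [0.9, 0.4, 0.8], "b": [0.7, 0.9, 0.6]}))  # True
```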
Referring to
In at least one embodiment, each computer system (or some separate computer system) receives 208 each user engagement metric, and potentially each audio/video stream. Based on the multiple user engagement metrics and audio/video streams, the computer system produces 212 a team engagement metric. The team engagement metric may be an average of engagement metrics, the minimum engagement for each discrete task, or the like. In at least one embodiment, the team engagement metric may be weighted 210 per team member. For example, the computer system may define or receive a priority associated with the task of each team member and weight the corresponding engagement metric by the associated priority.
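As a non-limiting illustration, producing 212 a team engagement metric from the received 208 individual metrics, using either an average or a per-portion minimum, may be sketched as follows; the values are illustrative.

```python
# Minimal sketch: aggregating received individual engagement metrics into a
# team engagement metric for the current discrete task portion.
def team_metric(per_member, mode="min"):
    """per_member: dict mapping team member -> engagement for the current task portion."""
    values = list(per_member.values())
    return min(values) if mode == "min" else sum(values) / len(values)

print(team_metric({"pilot": 0.9, "ground": 0.5}, mode="min"))   # minimum -> weakest member
print(team_metric({"pilot": 0.9, "ground": 0.5}, mode="mean"))  # arithmetic mean
```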
In at least one embodiment, the system receives physiological data from one or more physiological sensors such as an EEG and/or an fNIRS sensor. Such physiological data provides the additional metric of neuroactivity when assessing 204, 206 user engagement. Likewise, the system may receive data related to factors specific to the task. Such task-specific data provides the additional metric of context when assessing 204, 206 user engagement. Such analysis may include processing via machine learning or neural network algorithms. Tasks may define specific future actions or future action potentialities from which to make a weighted engagement assessment.
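As a non-limiting illustration, the fusion of audio/video features, neuroactivity, and task context into a single engagement estimate may be sketched with a simple logistic combination; this stands in for whatever machine learning or neural network model an implementation actually trains, and the feature names and weights are illustrative assumptions.

```python
# Minimal sketch: fusing gaze/voice features, EEG- or fNIRS-derived
# neuroactivity, and task context into a [0, 1] engagement estimate.
import math

def engagement_estimate(features, weights):
    """Combine normalized features into a single engagement estimate."""
    z = sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

features = {"gaze_on_task": 0.8, "pupil_dilation": 0.3,
            "neuroactivity": 0.6, "task_phase_critical": 1.0}
weights = {"gaze_on_task": 2.0, "pupil_dilation": 1.0,
           "neuroactivity": 1.5, "task_phase_critical": 0.5}
print(engagement_estimate(features, weights))
```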
In at least one embodiment, the system may compile data to facilitate the implementation of one or more of the future actions without the intervention of the user, and potentially before the user has made a determination of what future actions will be performed. The system may prioritize data compilation based on the determined probability of each future action.
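As a non-limiting illustration, data compilation may be ordered by the determined probability of each future action as sketched below; the action names and probabilities are illustrative assumptions.

```python
# Minimal sketch: stage data for the most probable future actions first,
# before the user has committed to any of them.
def compilation_order(action_probabilities):
    """Return future actions sorted from most to least probable."""
    return sorted(action_probabilities, key=action_probabilities.get, reverse=True)

print(compilation_order({"divert": 0.15, "continue": 0.70, "hold": 0.15}))
# ['continue', 'divert', 'hold']
```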
Referring to
Outputs 312 from each of the nodes 310 in the input layer 302 are passed to each node 336 in a first intermediate layer 306. The process continues through any number of intermediate layers 306, 308 with each intermediate layer node 336, 338 having a unique set of synaptic weights corresponding to each input 312, 314 from the previous intermediate layer 306, 308. It is envisioned that certain intermediate layer nodes 336, 338 may produce a real value within a range while other intermediate layer nodes 336, 338 may produce a Boolean value. Furthermore, it is envisioned that certain intermediate layer nodes 336, 338 may utilize a weighted input summation methodology while others utilize a weighted input product methodology. It is further envisioned that synaptic weights may correspond to bit shifting of the corresponding inputs 312, 314, 316.
An output layer 304 including one or more output nodes 340 receives the outputs 316 from each of the nodes 338 in the previous intermediate layer 308. Each output node 340 produces a final output 326, 328, 330, 332, 334 via processing the previous layer inputs 316, the final output 326, 328, 330, 332, 334 corresponding to an engagement metric for one or more team members. Such outputs may comprise separate components of an interleaved input signal, bits for delivery to a register, or other digital output based on an input signal and DSP algorithm. In at least one embodiment, multiple nodes may each instantiate a separate neural network 300 to process an engagement metric for a single corresponding team member. Each neural network 300 may receive data from other team members as inputs 318, 320, 322, 324. Alternatively, a single neural network 300 may receive inputs 318, 320, 322, 324 from all team members, or a separate neural network 300 may receive inputs 318, 320, 322, 324 from each team member's neural network 300 to determine a team engagement metric.
In at least one embodiment, each node 310, 336, 338, 340 in any layer 302, 306, 308, 304 may include a node weight to boost the output value of that node 310, 336, 338, 340 independently of the weighting applied to the output of that node 310, 336, 338, 340 in subsequent layers 304, 306, 308. It may be appreciated that certain synaptic weights may be zero to effectively isolate a node 310, 336, 338, 340 from an input 312, 314, 316, from one or more nodes 310, 336, 338 in a previous layer, or an initial input 318, 320, 322, 324.
In at least one embodiment, the number of processing layers 302, 304, 306, 308 may be constrained at a design phase based on a desired data throughput rate. Furthermore, multiple processors and multiple processing threads may facilitate simultaneous calculations of nodes 310, 336, 338, 340 within each processing layer 302, 304, 306, 308.
Layers 302, 304, 306, 308 may be organized in a feed forward architecture where nodes 310, 336, 338, 340 only receive inputs from the previous layer 302, 304, 306 and deliver outputs only to the immediately subsequent layer 304, 306, 308, or a recurrent architecture, or some combination thereof.
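As a non-limiting illustration, the feed-forward arrangement described above may be sketched as follows, with each node applying synaptic weights to the outputs of the immediately previous layer and an optional node weight to its own output; the layer sizes, weights, and linear activation are illustrative assumptions rather than the claimed network.

```python
# Minimal sketch of a feed-forward arrangement: nodes only receive inputs
# from the previous layer and deliver outputs to the immediately subsequent layer.
def feed_forward(inputs, layers):
    """layers: list of layers; each layer is a list of (synaptic_weights, node_weight)
    pairs, one pair per node."""
    values = inputs
    for layer in layers:
        values = [node_weight * sum(w * v for w, v in zip(weights, values))
                  for weights, node_weight in layer]
    return values  # final outputs, e.g. engagement metrics

example_layers = [
    # intermediate layer; a zero synaptic weight isolates a node from an input
    [([0.5, 0.2, 0.1], 1.0), ([0.0, 0.9, 0.4], 1.0)],
    # output layer; node weight boosts the output independently of later weighting
    [([0.7, 0.3], 1.2)],
]
print(feed_forward([0.8, 0.4, 0.9], example_layers))  # approximately [0.738]
```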
Embodiments of the inventive concepts disclosed herein are critical to enabling reduced crew operations. An autonomous system can use detections of team engagement to estimate when to provide appropriate information to the users for an adaptive user interface scenario.
The ability to understand whether a remote team is fully engaged in their collective task or not is an important metric for team performance. This measure can help trainers evaluate team dynamics and team formation.
It is believed that the inventive concepts disclosed herein and many of their attendant advantages will be understood by the foregoing description of embodiments of the inventive concepts, and it will be apparent that various changes may be made in the form, construction, and arrangement of the components thereof without departing from the broad scope of the inventive concepts disclosed herein or without sacrificing all of their material advantages; and individual features from various embodiments may be combined to arrive at other embodiments. The forms herein before described being merely explanatory embodiments thereof, it is the intention of the following claims to encompass and include such changes. Furthermore, any of the features disclosed in relation to any of the individual embodiments may be incorporated into any other embodiment.
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided by the terms of DE-AR0001097 awarded by The United States Department of Energy.