The present invention relates, in general terms, to a cognitive training platform. The cognitive training platform may have application in digital therapeutics, for example.
Digital therapeutics have emerged as a non-pharmacological alternative for prevention and treatment of cognitive decline and mild dementia, among other conditions. Multiple studies have shown the efficacy of digitally delivered cognitive training in assessing dementia state, slowing the progression of amnestic mild cognitive impairment, and even remediating age-related deficits in cognitive control.
Learning and training regimens in existing cognitive training products are often delivered at a fixed intensity level. This often leads to sub-optimal responses, or even no response at all. Similarly, fixed intensity training can lead to plateaus in learning trajectories and training outcomes. Such training regimens are therefore undesirable for digital therapeutics.
Some existing digital applications for enhancing cognition utilize basic methods of adjusting stimulus intensity, such as task difficulty, based on the user's performance. However, such applications do not take into account the complexity associated with an individual's response to different stimuli at different times.
It would be desirable to overcome or alleviate at least one of the above-described problems, or at least to provide a useful alternative.
Disclosed herein is a computer-implemented cognitive training process, comprising:
The cognitive response function may be generated by:
In some embodiments, at least two stimulus parameters are varied independently.
The process may comprise fitting said response data to a functional form that is non-linear in the one or more stimulus parameters. For example, the cognitive response function may be a quadratic function of the one or more stimulus parameters.
In some embodiments the process comprises obtaining an updated cognitive response function based on the one or more first cognitive performance values; and/or based on one or more second cognitive performance values, the one or more second cognitive performance values indicative of a response to the second stimulus.
The process may comprise monitoring changes in the cognitive response function within a training session and/or across training sessions.
In some embodiments, the process comprises classifying the user into a subpopulation of users; wherein the classification is based on one or more of: the cognitive response function; the updated cognitive response function; and the changes in the cognitive response function.
Advantageously, by classifying the user into a subpopulation, for example based on similarity of their cognitive response function or changes in the cognitive response function to an average cognitive response function of the subpopulation, the behaviour of the subpopulation can be used to make predictions of the user's cognitive performance. For example, a user may have a cognitive response function after a certain number of training sessions that is similar to a subpopulation of users who do not improve in performance after that number of training sessions. That may enable a clinician or other therapist to switch the cognitive training process in favour of another process, as it is likely to be unproductive if continued. In another example, a user may have a cognitive response function that is similar to a subpopulation of users who improve dramatically when particular stimuli are presented to them in future sessions, thereby guiding the clinician to adopt those stimuli.
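As an illustrative sketch of such a classification, a user's fitted response-function coefficients can be compared with average coefficient vectors of known subpopulations and the user assigned to the nearest one. The subpopulation names, coefficient values, and the classify_user helper below are all hypothetical; a deployed platform could equally use any other clustering or classification method.

```python
import numpy as np

# Hypothetical average response-function coefficient vectors
# (e.g. flattened [x0, x1, y11, z12] terms) for two subpopulations.
# These values are illustrative only.
SUBPOPULATION_CENTROIDS = {
    "responders": np.array([0.2, 0.8, -0.3, 0.1]),
    "non_responders": np.array([0.1, 0.05, -0.02, 0.0]),
}

def classify_user(user_coeffs):
    """Assign the user to the subpopulation whose average cognitive
    response function coefficients are nearest in Euclidean distance."""
    return min(
        SUBPOPULATION_CENTROIDS,
        key=lambda name: np.linalg.norm(user_coeffs - SUBPOPULATION_CENTROIDS[name]),
    )

print(classify_user(np.array([0.18, 0.7, -0.25, 0.05])))  # → responders
```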
One or more of said stimuli may be presented via a user interface of a computing device.
In certain embodiments, one of said one or more stimulus parameters is indicative of a training intensity.
One or more of said stimuli may comprise a prompt to provide an input at said computing device. The prompt may be a prompt to provide an input at the user interface of the computing device.
In certain embodiments, the at least one sensor is a sensor of a user input device.
The cognitive response function may also be a function of one or more previously measured cognitive performance values for the user.
Also disclosed herein is a cognitive training platform, comprising:
The at least one processor may be configured to generate the cognitive response function by:
In certain embodiments, the at least one processor is configured to fit said response data to a functional form that is non-linear in the one or more stimulus parameters. For example, the cognitive response function may advantageously be a quadratic function of the one or more stimulus parameters.
In certain embodiments, the at least one processor is configured to present one or more of said stimuli via a user interface of a computing device.
One of said one or more stimulus parameters may be indicative of a training intensity.
In certain embodiments, one or more of said stimuli comprises a prompt to provide an input at said computing device. The prompt may be a prompt to provide an input at the user interface of the computing device.
At least one sensor of said one or more sensors may be a sensor of a user input device.
In certain embodiments, the cognitive response function is also a function of one or more previously measured cognitive performance values for the user.
Also disclosed herein is a method of obtaining a cognitive response function for a user, the cognitive response function representing a cognitive state, or change in cognitive state, as a function of one or more stimulus parameters of a stimulus to which the user is exposed, the method comprising:
The cognitive response function may be a quadratic function of the one or more stimulus parameters.
Further disclosed herein is a non-transitory computer-readable medium having stored thereon instructions for causing at least one processor to perform a cognitive training process according to any preceding paragraph, or a method of obtaining a cognitive response function for a user according to any preceding paragraph.
Embodiments of the present invention will now be described, by way of non-limiting example, with reference to the drawings in which:
Embodiments of the invention relate to a cognitive training process and a cognitive training platform that advantageously make use of user-specific cognitive response profiles, also referred to herein as N-of-1 learning trajectory profiles, to dynamically adjust and thereby optimize a user's response to cognitive training.
Embodiments may identify N-of-1 learning trajectory profiles and perform learning optimization via a digital interface. By varying the nature of the stimulus presented to a user, and measuring the responses to the varying stimulus, it is possible to develop N-of-1 learning trajectory profiles that may actionably mediate training optimization at the single-subject level by dynamically identifying training inputs (for example, the type and/or intensity of the training inputs) that drive the best possible scoring outcome or output relating to cognitive ability and/or state. Accordingly, embodiments of the invention may serve as a powerful optimization platform for digital therapy, student learning, cognitive decline prevention, and other indications.
Advantageously, population-based big data sets are not required by certain embodiments, nor is synergy prediction between the various inputs required in order to globally optimize training for an individual. Empirically recorded or derived measurements or information from the individual can be used to define the individual's profile, which can in turn be used to identify or recommend a training stimulus, and/or be used in a direct feedback-based manner to choose a training stimulus, that will yield the desired response of the individual.
Cognitive Training Process 100
With reference to
The cognitive response function may be a pre-generated function that is stored on a computer-readable medium and retrieved by the one or more processors as part of the training process 100. Alternatively, the training process 100 may itself generate the cognitive response function, in a manner which will be described below.
The cognitive response function depends on one or more variables and may represent a measurement of a cognitive state or change in cognitive state of the user. The one or more variables include one or more stimulus parameters of a stimulus to which the user is exposed. The one or more variables may also include one or more current or past cognitive state values of the user. That is, in some embodiments, the current cognitive state or change in cognitive state may depend both on the past cognitive state, and the nature and/or intensity of the stimulus to which the user is exposed. In some embodiments, the cognitive response function may depend on one or more variables that characterize the environment of the user, such as ambient temperature, background noise levels, and the like. Accordingly, the stimulus parameters may include both “active” parameters that are controllable, and “passive” parameters that are not controllable but nonetheless measurable such that the impact of their variability on the user's cognitive state (or change in cognitive state) can be determined.
The stimulus may, for example, be presented to the user via a user interface, such as a display of a computing device, another output device such as a speaker or tactile feedback device that is coupled to a computing device, and/or a brain-computer interface. In some embodiments, the stimulus is a prompt to perform one or more tasks, for example a prompt to enter a certain type of input at the user interface, such as tapping or clicking on a target presented on the display, or entering a text response to a question presented on the display. One or more measurements of the response to the stimulus may be made by one or more sensors. For example, the speed and/or accuracy of the response as recorded by an electromechanical sensor of a user input device such as a mouse, keyboard or gesture-based input device may be measured.
In other embodiments, the stimulus may be a visible or audible cue to which the user reacts. One or more sensors may measure a response of the user to the visible or audible cue. For example, a camera may capture one or more images of at least part of the user and determine a cognitive state measurement, for example a cognitive performance measurement (such as reaction speed), based on the one or more images.
At block 120, a first stimulus is presented to the user. The first stimulus is characterized by first stimulus parameters. For example, if the stimulus is a prompt to perform a task, the first stimulus parameters may include an intensity of the task. The intensity may be characterized as low, medium or high, or by a numerical value, for example. In some embodiments, the stimulus may be a prompt to perform multiple different tasks, and the stimulus parameters may be the respective intensities of the tasks, which may be varied together or independently.
At block 130, at least one first cognitive performance value is determined, for example by the computing device that presents the user interface. The first cognitive performance value or values may be determined by capturing data from the one or more sensors, and processing the data to compute one or more numerical values, such as the speed and/or accuracy of the response to the first stimulus.
At block 140, the process 100 determines second stimulus parameters that result in an improved cognitive performance value or values relative to the first cognitive performance value or values. It does so based on the cognitive response function and, optionally, the first cognitive performance value(s). For example, process 100 may determine second stimulus parameters that optimize the cognitive response function given the first (i.e., current) cognitive state value. That is, if the cognitive response function is F(x,p), where x is the first (current) cognitive state value and p is the set of parameters characterizing a stimulus, process 100 optimizes F for fixed x to determine second stimulus parameters p_optimum that result in the optimum value of F. In some embodiments, F may be independent of x, so that all that is required is to optimize a function F(p).
At block 150, a second stimulus is presented to the user, where the second stimulus is characterized by the second stimulus parameters. For example, if the outcome of block 140 is that p_optimum corresponds to a task intensity of "high", the process 100 adjusts the user interface (for example) to present a second stimulus at high intensity.
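The determination at block 140 can be sketched as a grid search over candidate stimulus parameters, assuming a previously fitted quadratic response function of two stimulus parameters and taking F to be independent of the current cognitive state for simplicity. The coefficient values, level grid, and function names below are hypothetical:

```python
import itertools

# Hypothetical fitted coefficients of a quadratic response function
# F(c1, c2) = x0 + x1*c1 + x2*c2 + y11*c1**2 + y22*c2**2 + z12*c1*c2.
COEFFS = dict(x0=0.0, x1=0.6, x2=0.4, y11=-0.25, y22=-0.1, z12=0.05)

def predicted_response(c1, c2, k=COEFFS):
    """Evaluate the fitted quadratic response surface at (c1, c2)."""
    return (k["x0"] + k["x1"] * c1 + k["x2"] * c2
            + k["y11"] * c1 ** 2 + k["y22"] * c2 ** 2 + k["z12"] * c1 * c2)

# Block 140: choose the second stimulus parameters as the candidate
# grid point with the best predicted response.
LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]  # candidate intensity levels
p_optimum = max(itertools.product(LEVELS, LEVELS),
                key=lambda p: predicted_response(*p))
print(p_optimum)  # → (1.5, 2.0)
```

A continuous optimizer could replace the grid search; for a quadratic with negative second-order terms the optimum can also be found in closed form.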
Calibration Process 200
Turning now to
At block 210, the process 200 begins by initializing respective values of the stimulus parameters.
Next, at block 220, a stimulus is presented to the user, the stimulus being characterized by the initial values of the stimulus parameters. The stimulus is presented by the user interface of the computing device, for example.
At block 230, the user response to the stimulus is recorded. For example, if the stimulus is a prompt to perform a task, one or more sensors (such as electromechanical or optical sensors) measure a user input or other action that is performed in relation to the task, such as a user input made via a mouse, keyboard or other input device. The process 200 may record response data indicative of the speed and/or accuracy of the user input.
At block 240, the process 200 checks whether one or more criteria relating to the measurement have been satisfied. These may include a time criterion (e.g., whether a predetermined time elapsed since process 200 commenced) and/or a requirement for a certain number of measurements.
If the measurement criteria have not been satisfied, process 200 loops back to block 210, where the stimulus parameters are adjusted. The stimulus parameters may be adjusted independently. For example, each stimulus parameter may be adjusted on each iteration, or one or more parameters may be adjusted while the others are maintained at the same level.
Presentation 220 and measurement 230 steps are then repeated, and the process 200 continues until the measurement criterion has, or measurement criteria have, been satisfied.
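A minimal sketch of the calibration loop of blocks 210-250 follows. The present_stimulus function is a stand-in for the user interface and sensors, and the simulated response it returns is purely illustrative; a real implementation would capture response data from the sensors as described above.

```python
import random

def present_stimulus(params):
    """Stand-in for blocks 220-230: present a stimulus characterized by
    the current parameter values and measure the user response. Here a
    noisy illustrative score is simulated instead of a real measurement."""
    c = params["intensity"]
    return 0.9 * c - 0.3 * c ** 2 + random.gauss(0, 0.05)

def calibrate(levels=(0.0, 0.5, 1.0, 1.5, 2.0), passes=3):
    """Blocks 210-250: sweep the stimulus parameter values, looping until
    the measurement criterion (here, a fixed number of passes) is met,
    and return the recorded (parameter, response) data."""
    response_data = []
    for _ in range(passes):
        for intensity in levels:  # adjust the stimulus parameter each iteration
            score = present_stimulus({"intensity": intensity})
            response_data.append((intensity, score))
    return response_data

data = calibrate()
print(len(data))  # → 15
```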
At block 250, after the one or more measurement criteria have been satisfied, the process 200 determines a cognitive response function from the measured response data. For example, a non-linear function may be fitted to the response data or to values derived therefrom.
In one embodiment, the non-linear function may be a quadratic function of the one or more parameters characterizing the stimulus presented to the user. For example, a healthy and optimized individual's response can be represented as F(S), and a different, non-optimized individual (e.g. mild cognitive impairment/mental decline, or a healthy baseline) by F(S′), where S represents the individual's optimized cognitive/learning/training network mechanisms and S′ the aberrant, sub-optimal, and/or average baseline cognitive/learning/training network mechanisms. The indicator of the individual's cognitive response is the human response of interest that can be measured (e.g. via a digital interface), such as improvement in cognitive performance or function via a quantifiable score (e.g. based on clinically established scoring, game scoring, etc.). The non-optimized individual's response can be parametrized by a set of parameters C, namely the manipulation or characteristic (e.g. fatigue) amplitudes/levels and/or manipulation or characteristic types. Owing to the complexity of these mechanistic networks, explicit forms of these functions (F(S), F(S′), and F(S′,C)) are unknown. However, F(S′,C) can be expanded about F(S′) to give the following expression:
F(S′,C)=F(S′)+x0+Σxici+Σyiici2+Σzijcicj+high-order terms (1)
where xi is the individual response coefficient to a factor i (which may be a stimulus parameter or a characteristic of the individual) at amplitude/level ci, yii is the individual second-order response coefficient to factor i, and zij is the individual response coefficient to the interaction of manipulation/characteristic i and manipulation/characteristic j at their respective amplitudes/levels.
Advantageously, the high-order terms (order higher than 2 in the ci) may be dropped from Eq. (1). Because human cognition is thought to respond to inputs in a nonlinear fashion with respect to a manipulation i, the second-order term yiici2 is retained to capture the response to the manipulation amplitude/level ci; truncating at second order thus introduces non-linearity into the response while keeping the number of parameters that must be fitted as low as possible. The values of x0, xi, yii, and zij can be experimentally determined by calibrating performance outcomes of a specific individual against the manipulation-level inputs (e.g. intensity or difficulty level). Hence, the optimized manipulation-level combination is dynamically personalized to this specific individual, using only their own data; this approach does not require population-level information. Accordingly, the response function can be determined empirically without modelling the underlying physiological or cognitive mechanisms.
By moving F(S′) to the left side of Eq. (1) and removing the high-order terms, the following expression is obtained:
R(C)=F(S′,C)−F(S′)=x0+Σxici+Σyiici2+Σzijcicj (2)
The difference between the two unknown functions F(S′,C) and F(S′) is the overall individual performance response R(C) to the manipulation(s), which can be approximated by a second-order algebraic equation of manipulation levels (stimulus parameters) alone, independent of the specific physiological and/or cognitive mechanisms. Therefore, embodiments provide a platform that is cognitive/learning/training physiological mechanism-independent and disease indication-agnostic. Additionally, because experimental data are used to construct this response surface by calibrating the coefficients, the process 200 is not a model-specific algorithm.
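Under the quadratic form of Eq. (2), calibrating the coefficients amounts to an ordinary least-squares fit of the recorded (stimulus, response) data. The sketch below, for two stimulus parameters, uses synthetic data generated from known coefficients to check that the fit recovers them; all names and values are illustrative:

```python
import numpy as np

def fit_response_surface(C, R):
    """Least-squares fit of Eq. (2) for two stimulus parameters:
    R(c1, c2) = x0 + x1*c1 + x2*c2 + y11*c1^2 + y22*c2^2 + z12*c1*c2."""
    c1, c2 = C[:, 0], C[:, 1]
    X = np.column_stack([np.ones_like(c1), c1, c2, c1 ** 2, c2 ** 2, c1 * c2])
    coeffs, *_ = np.linalg.lstsq(X, R, rcond=None)
    return coeffs  # [x0, x1, x2, y11, y22, z12]

# Self-check on synthetic data generated from known coefficients.
rng = np.random.default_rng(0)
C = rng.uniform(0.0, 2.0, size=(40, 2))
true = np.array([0.1, 0.6, 0.4, -0.2, -0.1, 0.05])
design = np.column_stack([np.ones(40), C[:, 0], C[:, 1],
                          C[:, 0] ** 2, C[:, 1] ** 2, C[:, 0] * C[:, 1]])
R = design @ true
print(np.allclose(fit_response_surface(C, R), true))  # → True
```

With noisy measurements the same call returns the best-fitting coefficients in the least-squares sense rather than an exact recovery.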
Accordingly, process 200 may be summarized in a broad sense as including steps of:
Once the parameters of cognitive response function R(C) are obtained, they may be used to predict user response to a particular stimulus ci, and to thereby determine a stimulus that will produce an optimized response as discussed above.
Advantageously, in some embodiments, the cognitive response function of a user can be monitored for changes over time. For example, the cognitive response function may change within a training session, and/or between training sessions, thus changing the identified optimised training levels for the desired outcome. To this end, the process 100 and/or the process 200 may comprise determining an updated cognitive response function based on the one or more first cognitive performance values; and/or based on one or more second cognitive performance values, the one or more second cognitive performance values indicative of a response to the second stimulus.
For example, as shown in
In some embodiments, within-session or session-to-session changes in the behavior of the cognitive response function can be analysed by a comparison with population data.
Cognitive Training Platform 300
An example cognitive training platform 300, as depicted in
The one or more sensors 310 may be electrical, electromechanical, electromagnetic, and/or optical sensors. For example, the sensors may comprise one or more of: a sensor of a user input device such as a keyboard, mouse or stylus; a camera; a microphone; one or more electrodes of a brain-computer interface; or one or more physiological sensors such as a heart rate monitor, a blood pressure sensor, a temperature sensor or a muscle tension sensor.
The user interface 320 may be a display of a computing device, such as a mobile computing device or a desktop computing device. In some embodiments, the user interface 320 may itself include one or more sensors for detecting user input, such as touchscreen sensors (which may be resistive, capacitive, surface acoustic wave, infrared or optical imaging sensors, for example). In addition, one or more other input devices 322 may be provided to detect user input which is detected and analysed by the training module 340.
The calibration module 330 is a hardware and/or software component that is configured to execute the steps of calibration process 200. Calibration module 330 is in communication with sensors 310 and receives data from sensors 310 to determine the response of the user 301 to stimuli that are presented by user interface 320, for example. Calibration module 330 is also in communication with user interface 320, and adjusts the current values of the stimulus parameters to alter the stimulus that is presented by user interface 320 at any given time. Calibration module 330 comprises parameter fitting sub-module 332, that receives the response data indicative of the recorded user responses to the stimuli presented by user interface 320, and fits the parameters of the quadratic form R(C) in Eq. (2) to the response data.
The training module 340 is also a hardware and/or software component that executes the cognitive training process 100. In particular, training module 340 is in communication with the sensors 310, user interface 320 and other input devices 322 to receive data recorded by the sensors 310 and/or other input devices 322, and adjust the stimuli presented by user interface 320 in accordance with the cognitive response function determined by the calibration process 200 of calibration module 330, to thereby optimize the cognitive response of user 301.
The cognitive training platform 300 may have an architecture that is based on the architecture of a desktop or laptop computing device, or a mobile computing device, such as the architecture depicted in
In other embodiments, the cognitive training platform 300 may comprise a plurality of computing devices, with different components being implemented via different computing devices of the platform. For example, UI 320 and at least some of the sensors 310 may be implemented in a mobile computing device which is operated by the user 301, while calibration module 330 and training module 340 may be implemented in one or more desktop computing devices or servers that are in communication with the mobile computing device.
As shown, the mobile computer device 300 includes the following components in electronic communication via a bus 406:
Although the components depicted in
The display 402 generally operates to provide a presentation of content to a user, and may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro-projector and OLED displays).
In general, the non-volatile data storage 404 (also referred to as non-volatile memory) functions to store (e.g., persistently store) data and executable code.
In some embodiments, for example, the non-volatile memory 404 includes bootloader code, modem software, operating system code, file system code, and code to facilitate the implementation of components which are well known to those of ordinary skill in the art, and which are neither depicted nor described for simplicity.
In many implementations, the non-volatile memory 404 is realized by flash memory (e.g., NAND or ONENAND memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory 404, the executable code in the non-volatile memory 404 is typically loaded into RAM 408 and executed by one or more of the N processing components 410.
The N processing components 410 in connection with RAM 408 generally operate to execute the instructions stored in non-volatile memory 404. As one of ordinary skill in the art will appreciate, the N processing components 410 may include a video processor, modem processor, DSP, graphics processing unit (GPU), and other processing components.
The transceiver component 412 includes N transceiver chains, which may be used for communicating with external devices via wireless networks. Each of the N transceiver chains may represent a transceiver associated with a particular communication scheme. For example, each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a CDMA network, a GPRS network, a UMTS network), and other types of communication networks.
The mobile computer device 300 can execute mobile applications. The cognitive training application 418 could be a mobile application, web page application, or computer application. The cognitive training application 418 may be accessed by a computing device such as mobile computer device 300, a desktop computing device or laptop, or a wearable device such as a smartwatch.
It should be recognized that
A pilot study pertaining to the application of embodiments of the invention towards the derivation of N-of-1 learning trajectory profiles (cognitive response functions) was performed.
We used the Multi-Attribute Test Battery (MATB) platform in this pilot study. Initially developed by NASA and further refined by the US Air Force, the MATB is a flight deck operations simulator that requires the user to perform four tasks concurrently. These tasks include managing fuel tank levels, tracking a target via joystick, adjusting a radio in response to verbal commands, and responding to indicator lights and gauges. Individuals are given instructions for each task, but they must learn effective strategies for task performance and coordination through experience. As such, the difficulty of the tasks could be expected to affect what is learned, though it is challenging—if not impossible—to specify the best difficulty level a priori.
The MATB includes a sophisticated parameterization of task control and user performance, rendering it a potent evaluative tool in several domains. The parameters of the task control may be the parameters p of the first and second stimuli as discussed above.
Foremost, it has been used to characterize subjective and objective measures of mental workload (e.g. scale ratings and EEG signatures) across different levels of task intensity. Individuals perform differently on the MATB even with the same event sequences and control settings, and some of these inter-individual differences have been associated with stable variation in cognitive abilities or personality traits. The MATB has also been used to study improvements in performance with training or experience. Multitasking costs can be reduced or, in some cases, even virtually eliminated without direct instruction. But again, the degree and completeness of such performance improvements varies considerably across individuals. Despite such differences, training regimens typically involve a single difficulty level, and relevant adaptive procedures that could be employed are limited.
Given the previous findings with the platform and its features, the MATB may serve as an ideal candidate for training optimization based on embodiments of the present invention. We tested this notion in three experiments. In the first, we characterized training benefits on each task and interindividual differences in performance, including both baseline and training trajectories. In the second, we tested for training effects on each measure within a single session, and whether task intensity affected performance and improvement rate. In the third, we varied training intensities for each participant, thereby allowing us to attempt the creation of N-of-1 cognitive response functions.
Experimental Results
Constant Training Intensity Over Multiple Sessions
Twenty-eight individuals participated in a five-day training study designed to characterize the variability of individual performance improvement trajectories at a fixed training intensity. A schematic of the training study is shown in
During each day's training session, participants completed a 10-minute session of the MATB-II, NASA's current version of the simulator. An example display of MATB-II is shown in
Across the five days of training, participants improved their performance. Significant training gains were found for five of the six metrics and three of the four tasks, with only Resource Management being too variable for a clear trend to emerge, as shown in Table 2.
[Table 2: per-metric training-gain statistics paired with Session 1 to Session 5 test-retest correlations: 3.27 (p = 0.006) with 0.58 (p = 0.003); −5.55 (p < 0.001) with 0.57 (p = 0.005); 5.60 (p < 0.001) with 0.58 (p = 0.003); −13.0 (p < 0.001) with 0.78 (p < 0.001); −0.25 (p = 0.80) with 0.66 (p < 0.001); −8.23 (p < 0.001).]
There were also significant inter-individual differences in performance on each task. Importantly, these inter-individual differences emerged even though each participant experienced exactly the same event sequence within a given session number. The differences were also relatively stable across sessions (and event sequences), with significant test-retest (Session 1 to Session 5) correlations across participants (Table 2). Nevertheless, training trajectories across the sessions also showed substantial variability that was not fully attributable to either noise or initial performance.
Alternating Training and Testing Blocks within a Single Session
Six individuals completed a pilot study for assessing the feasibility of a design that used alternating testing and training blocks within a single session and determining each task's performance improvement trajectory within such a design. A schematic of the experimental design is shown in
Performance in each block was sensitive to training intensity. Comparisons between average performance on training blocks and average performance on testing blocks by subject revealed that performance was significantly worse on the TRCK task and COMM reaction time metric, and marginally worse on the SYSM reaction time metric, during higher-intensity blocks (Table 4).
Performance also improved across time within even the single MATB session (Table 5). As measured by each subject's performance change slope across the eight testing blocks, significant improvements were found for the COMM and RMAN tasks, whereas the other two tasks remained statistically unchanged. We therefore combined the metrics for the tasks showing improvement, calculating a sum of each measure's z-scores across blocks to produce an overall performance metric sensitive to training gains (
[Table 5: within-session performance-change statistics: 5.33 (p = 0.003); slope −0.280 ± 0.130 with −5.26 (p = 0.003); slope −67.0 ± 59.9 with −2.74 (p = 0.041).]
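The combined performance metric described above (a sum of per-measure z-scores across testing blocks) can be sketched as follows. The per-block scores are hypothetical stand-ins for the COMM and RMAN measures, and measures for which lower values are better (such as reaction times) would be sign-flipped before summing.

```python
import statistics

def z_scores(values):
    """Standardize a sequence of per-block scores."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

# Hypothetical per-testing-block scores for the two improving tasks.
comm = [2.1, 2.3, 2.2, 2.6, 2.8, 2.9, 3.1, 3.3]
rman = [5.0, 5.2, 5.1, 5.5, 5.4, 5.8, 6.0, 6.1]

# Overall metric: sum of each measure's z-scores per testing block.
overall = [zc + zr for zc, zr in zip(z_scores(comm), z_scores(rman))]
print([round(v, 2) for v in overall])
```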
Modulated Training Intensity
Three individuals successfully completed a pilot study to demonstrate the ability to identify N-of-1 optimization of performance improvement in a single session. Two additional individuals stopped performing one or more component tasks; their cases are discussed below, but they were not used for creating individualized profiles. The AF-MATB session was composed of alternating two-minute testing blocks, set at medium intensity to collect performance scores, and two-minute training blocks set at low, medium, or high intensity, with training intensity defined per task (Table 3). Performance improvement was defined as the difference in performance during testing blocks before and after the intervening training block (
Individualized profiles for subjects 1-3 were constructed from the collected AF-MATB data, and were found to represent the unique relationship between training intensity in the training blocks and the resulting performance improvement (
Subject 2 had a performance range of −1.59 to 1.04 (
Discussion
Based on the performance profiles during both multiple and single training sessions, the MATB demonstrated its potential utility as a platform for performance optimization based on embodiments of the present invention. Across five sessions of training, performance improvements were substantial for almost all metrics and MATB tasks. Furthermore, even without modulating task difficulty, baseline performance and improvement trajectories varied greatly across individuals. These same features were observed even during a single session with modulated training intensity. In addition, participants were sensitive to these training intensity manipulations, with performance during higher-intensity blocks generally poorer compared to lower-intensity blocks. Modulation of training intensity may therefore be similar to the dose modulations to which embodiments of the present platform have now been extensively and successfully applied.
The profiles from individual subjects further demonstrate the potential of embodiments of the present invention for optimizing behavioral performance and its rate of improvement. The individual surfaces varied in overall shape, ranging from convex to saddle-like. Specifically, the convex behavior of the profile for subject 1 (
Importantly, training intensity is also not expected to have a monotonic effect on training improvements. Difficulty settings that are too high may result in individuals giving up on one or more tasks, whereas settings that are too easy may result in little or inefficient learning. Indeed, out of the five subjects recruited for the modulated training intensity experiment, one subject gave up performing the resource management task when training intensity increased to the high level, and one subject ceased to perform the communication task. Under other circumstances, such difficulty settings may be beneficial. For example, easier settings may enable individuals to focus on improving each task's performance to their overall benefit, whereas difficult settings may be needed to detect and address “latent bottlenecks” in multitasking. These latent bottlenecks induce coordination problems or other costs that are only revealed in challenging circumstances. As such, further performance gains could be found by stretching the training intensity space, a key feature of cognitive training optimization using the presently described embodiments. As noted above, the most useful training intensity will vary across the course of training and across individuals. Such differences will be due in part to the specific difficulties an individual has with a given task, as well as stable inter-individual differences in general cognitive or motivational capacities.
Individuals in multi-tasking situations often trade off performance across tasks, improving on one by sacrificing effort on another. For example, the two participants who each stopped performing a task in the modulated training intensity experiment potentially could not cope with the demands of the high-intensity training blocks. Unfortunately, their solution effectively halted learning of the dropped task and coordination across the full set of tasks. Less dramatically, the tasks affected by training intensity modulation were often not the tasks that showed training gains. Individuals may trade off performance as a deliberate strategy, or such effects may emerge as a byproduct of the training procedure. Regardless of the cause, the presently disclosed embodiments may be useful for optimizing desired regimen compliance.
Experimental Section
Participants. The study was conducted according to a protocol approved by NUS Institutional Review Board (S-17-180) and listed on Clinicaltrials.gov (identifier NCT03832101). In total, 41 individuals were recruited, gave informed consent, and participated in MATB simulator experiments. Participants were required to be fluent in English and to have no history of perceptual or memory deficits. No participants had prior experience with the MATB or a similar platform.
Apparatus. The MATB is a flight deck simulator with versions developed by the National Aeronautics and Space Administration (NASA) and the United States Air Force (USAF). The two MATB versions, NASA's MATB-II (v2.0) and the USAF's AF-MATB (v3.03), use highly similar displays and interfaces (
Tasks and Measures. The MATB consists of four primary tasks: Communications (COMM), System Monitoring (SYSM), Resource Management (RMAN), and Tracking (TRCK). All four tasks were used in each experiment in a similar way. In the Communications (COMM) task, participants acted upon messages preceded by their call sign (true comms), while ignoring other messages (false comms). True comms required the participant to select the radio and adjust its frequency using the mouse, trackpad, and/or keyboard; the speed and accuracy of these responses were the dependent measures. System Monitoring (SYSM) consisted of two subtasks, lights and gauges. For the former, participants needed to click on a green indicator light if it turned off and on a red indicator light if it turned on (
Each task was controlled by a script of event timings and other settings (e.g. tracking difficulty). Experiment-specific settings for each task and condition are listed in Tables 1 and 3.
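A script of event timings and settings such as those listed in Tables 1 and 3 could be represented, for instance, as follows. This is a minimal illustrative sketch only; the class and field names are hypothetical and are not drawn from the actual MATB software.

```python
from dataclasses import dataclass, field

@dataclass
class TaskScript:
    """Hypothetical per-task script controlling one block of the simulator."""
    task: str                  # one of "COMM", "SYSM", "RMAN", "TRCK"
    intensity: str             # "low", "medium", or "high"
    event_times_s: list = field(default_factory=list)  # onset of each event, in seconds
    settings: dict = field(default_factory=dict)       # other settings, e.g. tracking difficulty

# Example: a medium-intensity communications block with three message events.
comm_block = TaskScript(task="COMM", intensity="medium",
                        event_times_s=[12.0, 45.5, 78.0],
                        settings={"true_comm_ratio": 0.5})
```

Using different event timings per session, as described below, then amounts to regenerating `event_times_s` while keeping `settings` fixed.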
Procedure. Participants were seated approximately 60 cm from the screen for each session. During the first (or only) session, participants were instructed on each task and allowed to experience each task in isolation, prior to the real experimental session. On subsequent days, no additional familiarization was provided. Specific procedural and analytical aspects of the three experiments follow below.
Constant Training Intensity over Multiple Sessions Experiment. Twenty-eight individuals (12 males, 27 right-hand dominant, mean age of 23.0, age range of 19-30) were recruited for a five-day training study. On each day, participants completed several cognitive tasks for up to 90 minutes total, including a 10-minute session of the MATB-II. Within each session, identical settings and event timings were used for all participants. The same task settings were used across sessions as well, though different event timings were used to prevent participants from learning specific sequences.
Alternating Testing and Training Blocks Experiment. Six individuals (1 male, all right-hand dominant, mean age of 23.2, age range of 21-29) were randomized into two groups, and then each completed a single session of the AF-MATB. The data from two additional participants could not be used due to technical difficulties with the computer. Each session contained a total of 15 blocks of two minutes each. Blocks alternated between testing at medium intensity (8 blocks) and training at either low or high intensity (7 blocks), with training intensity set by group (Table 3).
The metrics that showed either significant or marginally significant training gains (COMM accuracy) were used to construct an overall performance measure that was sensitive to training improvements. Each metric's performance scores across blocks were converted into z-scores. This conversion was done separately for each subject, and scales were flipped if necessary so that positive z-scores indicated better performance. The converted metrics were then summed (with equal weighting by task) and the results normalized again. Mathematically, the conversion was RMAN-COMM z-score = z(−2 × z(RMAN deviation) − 1 × z(COMM reaction time) + 1 × z(COMM accuracy)).
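The composite conversion just described can be implemented as a short routine; a minimal sketch, assuming per-block score lists as input and sample standard deviation for the z-scoring.

```python
import statistics

def zscore(xs):
    """Convert a list of per-block scores to z-scores."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def rman_comm_z(rman_dev, comm_rt, comm_acc):
    """Composite RMAN-COMM performance measure.

    Signs are flipped so positive z-scores mean better performance: lower RMAN
    deviation and lower COMM reaction time are better, higher COMM accuracy is
    better. The RMAN metric gets weight 2 and each COMM metric weight 1, so the
    two tasks contribute equally; the summed metric is normalized again.
    """
    combined = [-2 * d - 1 * r + 1 * a
                for d, r, a in zip(zscore(rman_dev), zscore(comm_rt), zscore(comm_acc))]
    return zscore(combined)
```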
Calibration experiment. Five individuals (4 males, all right-handed, mean age of 24, age range of 21-24) took part in an MATB simulator training session with a total of 17 blocks, each lasting two minutes. Every other block, including the first (the testing blocks), was set at a constant medium task intensity to collect the performance values from which the RMAN-COMM z-scores could be calculated (see above). The training intensity for the blocks administered between the testing blocks alternated among low, high, and medium. For analytical purposes, numeric values of one, two, and three were assigned to the low, medium, and high intensity conditions for these training blocks. The performance improvement associated with each training block was defined as the difference in performance between the testing blocks before and after the training block in question. A subject's profile (cognitive response function) represented the performance improvement during each training block as a function of the performance of the immediately preceding testing block and the training block's intensity. A visual representation of each profile's phenotypic surface was plotted using MATLAB R2017a (MathWorks Inc.).
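The profile construction just described can be sketched as a quadratic surface fit in the two predictors (preceding-test performance and training intensity), consistent with the quadratic cognitive response function described earlier. This is an illustrative least-squares sketch, not the MATLAB analysis actually used.

```python
import numpy as np

def performance_improvements(test_scores):
    """Improvement attributed to each training block: the difference between
    the testing-block scores after and before it."""
    return [test_scores[i + 1] - test_scores[i] for i in range(len(test_scores) - 1)]

def fit_quadratic_profile(pre_perf, intensity, improvement):
    """Least-squares fit of the profile surface:
    improvement = b0 + b1*p + b2*i + b3*p^2 + b4*i^2 + b5*p*i,
    where p is preceding-test performance and i is training intensity (1/2/3)."""
    p, i, y = (np.asarray(v, dtype=float) for v in (pre_perf, intensity, improvement))
    X = np.column_stack([np.ones_like(p), p, i, p**2, i**2, p * i])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs
```

The sign of the fitted quadratic terms then indicates whether a subject's surface is convex, concave, or saddle-like, as discussed above.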
Statistical Analysis: The R-squared values for the profiles (n=3) were calculated using regression analysis features in MATLAB R2018a (MathWorks Inc.). All other analyses were performed in RStudio version 1.0.136 (R Foundation for Statistical Computing) running R version 3.2.4 and MATLAB R2018a. The nominal alpha criterion level for all tests was set at 0.05, and correction for multiple comparisons was achieved through p-value adjustment according to the False Discovery Rate (FDR) procedure.
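For reference, the FDR adjustment mentioned above is commonly the Benjamini-Hochberg procedure (as in R's `p.adjust(method = "BH")`); a minimal self-contained sketch:

```python
def fdr_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values.

    Each p-value is scaled by m/rank (rank taken over the ascending-sorted
    p-values), then a cumulative minimum from the largest p downward enforces
    monotonicity, matching R's p.adjust(method="BH").
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda k: pvals[k])  # indices sorted by p ascending
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):  # walk from the largest p to the smallest
        k = order[rank]
        running_min = min(running_min, pvals[k] * m / (rank + 1))
        adjusted[k] = running_min
    return adjusted
```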
Following this proof-of-concept analysis of the two sets of parameters, three volunteers (D1, D2, D3) underwent a 62-minute session on the AF-MATB. The MATB involves using a joystick, keyboard, and trackball to perform four different tasks simultaneously. The four tasks (TRCK, COMM, RMAN, SYSM) were blocked together at the same difficulty (e.g. medium difficulty for all four tasks), as in the conditions of the Advanced Therapeutics publication, with difficulty changed dynamically throughout the session. The volunteers' z-scores (performance scores) can be seen increasing and then decreasing over time. Two CURATE.AI N-of-1 profiles were calibrated per participant using the initial and last nine testing blocks (
For the session-to-session analysis, volunteer participants P6 and P3 each underwent MATB training in which the difficulty of two of the four MATB modules, SYSM and RMAN, was changed dynamically throughout each session. Both P6 and P3 began with concave behavior in their first-session CURATE.AI profiles; from session 3 onward, their profiles shifted in different directions and became increasingly dissimilar over the remaining sessions. Though more volunteer data and studies are needed, this preliminary analysis of the two participants suggests that outcomes may be correlated with changes in CURATE.AI profiles and volunteer/patient responses, such that the profiles could be used to assign individuals to subpopulations (e.g. those that improve the most, those that improve moderately, and those that do not improve) and to serve as a predictor of outcomes (e.g. z-score, training).
For volunteer P6 (
For volunteer P3 (
It will be appreciated that many further modifications and permutations of various aspects of the described embodiments are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
Throughout this specification and the claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavor to which this specification relates.
| Number | Date | Country | Kind |
|---|---|---|---|
| 10201903518P | Apr 2019 | SG | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/SG2020/050240 | 4/17/2020 | WO | 00 |