The present invention relates in general to the field of cognitive training, and more specifically to computer programs and systems implementing cognitive training techniques for improving cognition in a myriad of ways. Such systems and methods are useful to normal populations as well as individuals with a wide range of cognitive disorders including but not limited to stress, anxiety, social anxiety disorder, depression, ADHD, Autism/Asperger's, Schizophrenia, TBI, PTSD and stroke.
Our everyday activities also drive cognitive changes. Those activities might be intentional, like learning how to play tennis or how to speak a new language, but they may also be unintentional, such as the formation of habits. It is the intention of our training to bring a specific set of skills related to attention and attentional control back to a normal range of intentional control, with the goal of enhancing health and performance.
Cognitive training exists in many different forms, ranging from simple practice and repetition of activity with intentional goals in a therapeutic or coaching setting, to computer-directed tasks that are highly controlled, systematic, and adaptive to the individual. Tasks are any set of stimuli and related cognitive activities that require cognitive or behavioral responses by an individual. With the advent of mobile computing systems, many software programs for cognitive training are now available to individuals as they progress through their day, also known as in situ availability.
Many cognitive training programs focus on specific recognition tasks or memory tasks. Focus is the quality of a person's attentional control to appropriately accomplish a task. It is the objective of this invention to train higher-level cognitive skills commonly referred to as executive function. Executive function knowledge and awareness are required for improving these processes through training.
Brain training methods use repetitive tasks to drive changes in neural responses, without explicit learning, that can impact future performance. Cognitive training, as opposed to brain training, relates to how learning through concepts and experiences causes change in the neurology and behavior of individuals. Neuroscience has shown that such guided learning activities can have a significant impact on human maladies and performance. That impact appears not only on pre- and post-tests administered to individuals to measure behavioral changes but also through modern imaging techniques such as fMRI and EEG.
Despite the significant advantages that cognitive training can have on an individual's life, it is generally still inaccessible to the population as a whole. Cognitive training has been relegated to laboratory and clinical settings, which are generally costly or difficult to access.
It is therefore apparent that an urgent need exists for systems and methods for improved means of cognitive training that are more widespread and accessible to the general population. Such systems and methods are designed to provide cognitive training to alleviate disorders and improve functioning by helping users make guided enhancements of their cognitive abilities through increased awareness and consequent training of control based on that awareness. This invention is designed to both teach and train users how to increase self-awareness, and to provide tools for the benefit of one's well-being.
The present systems and methods relate to improving cognitive functioning through guided training activities on a computerized device. Such systems and methods enable improvements not only in cognitive functioning while engaged in these cognitive exercises, but improvements that transfer directly into real-life situations.
In some embodiments, contextual information regarding the user and her environment is collected. This context includes biometric and psychological data collected from the user, environmental conditions, date, and time. Subsequently, the user is presented with a cognitive task, wherein the cognitive task includes a stimulus and the task specifies a desired response to the stimulus. In some cases, the stimuli are presented at random inter-stimulus intervals (ISIs), which are substantially between a reaction-reset interval and an attention-sustaining interval. The device used to present the cognitive task to the user may include any of a smartphone, a tablet, a personal computer, a VR headset, a smart TV and other home entertainment systems, an augmented reality headset, and holographic projection systems. Additionally, the device could be integrated into physical exercise equipment and digital home entertainment and home control systems for lighting, heating and sleep systems. The device may also be used to present the cognitive task to multiple people at the same time and to incorporate both an individual user's performance and the performance of others in a group through group interfaces (e.g. digital theater), and may be implemented across multiple group interfaces around the world at the same time.
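By way of illustration only, the following minimal Python sketch shows one way such randomized inter-stimulus intervals could be generated; the numeric bounds assumed for the reaction-reset interval and the attention-sustaining interval, and the function name, are illustrative assumptions rather than prescribed values.

```python
import random

# Illustrative bounds (assumed values, in seconds): the reaction-reset interval
# is the shortest gap that still lets the prior response settle, and the
# attention-sustaining interval is the longest gap over which attention to the
# task is assumed to remain sustained.
REACTION_RESET_S = 1.0
ATTENTION_SUSTAINING_S = 5.0

def next_isi(low: float = REACTION_RESET_S, high: float = ATTENTION_SUSTAINING_S) -> float:
    """Draw a random inter-stimulus interval substantially between the two bounds."""
    return random.uniform(low, high)

# Example: schedule ten stimulus onset times.
onsets, t = [], 0.0
for _ in range(10):
    t += next_isi()
    onsets.append(round(t, 2))
print(onsets)
```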
Feedback is then collected from the user. This feedback may include free-form text notes and other collected data (such as EEG data, facial recognition/emotion data, pupil dilation, heart rate and heart rate variability data, respiratory data, body temperature data, electrodermal response data, etc.). Additionally, the exercise results are assessed. The feedback, results and context information are then all aggregated in order to generate guidance for the user of the system. Guidance may be determined by a rule based system, via AI modeling (of both individual and group data), or by some combination of the two. The guidance includes resource documents and progression to an advanced cognitive task.
Note that the various features of the present invention described above may be practiced alone or in combination. These and other features of the present invention will be described in more detail below in the detailed description of the invention and in conjunction with the following figures.
In order that the present invention may be more clearly ascertained, some embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:
The present invention will now be described in detail with reference to several embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention. The features and advantages of embodiments may be better understood with reference to the drawings and discussions that follow.
Aspects, features and advantages of exemplary embodiments of the present invention will become better understood with regard to the following description in connection with the accompanying drawing(s). It should be apparent to those skilled in the art that the described embodiments of the present invention provided herein are illustrative only and not limiting, having been presented by way of example only. All features disclosed in this description may be replaced by alternative features serving the same or similar purpose, unless expressly stated otherwise. Therefore, numerous other embodiments and modifications thereof are contemplated as falling within the scope of the present invention as defined herein and equivalents thereto. Hence, use of absolute and/or sequential terms, such as, for example, “always,” “will,” “will not,” “shall,” “shall not,” “must,” “must not,” “first,” “initially,” “next,” “subsequently,” “before,” “after,” “lastly,” and “finally,” is not meant to limit the scope of the present invention as the embodiments disclosed herein are merely exemplary. Conversely, terms such as “can” or “may” are used interchangeably and are intended to describe alternative and/or optional features, i.e., features that may not be necessary or preferred, for the disclosed embodiments.
The present invention relates to systems and methods for training an individual's cognitive abilities. In some embodiments, attention will focus on cognitive tasks that are designed to improve a person's meta-attention and attentional control. In yet other embodiments, the guided training may be more widely applicable, focusing on a greater selection of cognitive functions. The training program uses the techniques enumerated below to drive a user to experience all forms of attentional state change and control. Other tasks include tailored guidance and feedback to the user in order to maximize cognitive enhancements. The user's performance can be monitored in real time and/or periodically, and feedback can be provided relative to the user's history and/or compared to other users.
Referring to
Now referring to
The following example describes the inhibition of response cognitive task. The training system displays a number between 0-9, which is the non-response stimulus 510. The training task begins and a number between 0-9 is displayed 520. The user recognizes the number as being either the non-response stimulus or not. The training system can recognize whether the user responds to the stimulus, e.g. touches the screen or does not touch the screen 530. After the response the training system can show a new number between 0-9 540. Once again, the user responds according to whether the number displayed is the non-response stimulus or not. The training continues for a prescribed amount of time or until the user has a prescribed number of incorrect responses 550.
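The trial-scoring logic of this inhibition-of-response task can be sketched in Python as follows; the user's behavior is simulated here purely for illustration, and the function name, trial count and error threshold are assumptions rather than fixed parameters of the invention.

```python
import random

def run_inhibition_task(non_response_digit: int, n_trials: int = 40, max_errors: int = 3):
    """Scoring loop for the inhibition-of-response task (go/no-go style).

    A digit 0-9 is 'displayed' on each trial; the user should respond (e.g. touch
    the screen) to every digit EXCEPT the designated non-response stimulus. Here
    the response is simulated; in a real system `responded` would come from the
    touch screen within the trial's response window.
    """
    errors, results = 0, []
    for _ in range(n_trials):                       # or run for a prescribed amount of time
        digit = random.randint(0, 9)
        should_respond = digit != non_response_digit
        responded = random.random() < (0.9 if should_respond else 0.2)  # simulated user
        correct = responded == should_respond
        errors += 0 if correct else 1
        results.append((digit, responded, correct))
        if errors >= max_errors:                    # prescribed number of incorrect responses
            break
    return results, errors

trials, errors = run_inhibition_task(non_response_digit=7)
print(f"trials run: {len(trials)}, errors: {errors}")
```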
The following example describes the timing specificity cognitive task. An image of a firefly enters from any side of the display screen 610. The user focuses on the firefly and touches the screen as it passes through a specific, highlighted spot on the display screen 620. In real-time the training system can optionally inform the user as to whether the user's response was timely, early, or late 630. The training system can display a new firefly and repeat until the cognitive task is complete 640. Throughout the cognitive task, the firefly can flash on and off, requiring the user to maintain a consistency of response, whether the stimulus is visible or not, and determine when to touch the screen.
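A minimal sketch of how a response in this task could be classified as timely, early, or late is shown below; the tolerance window and function name are assumed values for illustration only.

```python
def classify_timing(touch_time_s: float, crossing_time_s: float, tolerance_s: float = 0.15) -> str:
    """Classify a touch relative to the moment the firefly passes the highlighted spot.

    `tolerance_s` is an assumed window; responses within it count as timely.
    """
    delta = touch_time_s - crossing_time_s
    if abs(delta) <= tolerance_s:
        return "timely"
    return "early" if delta < 0 else "late"

print(classify_timing(touch_time_s=2.30, crossing_time_s=2.50))  # -> "early"
```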
The following example describes the low variance of response time cognitive task. The display screen has a rippled, concentric-circle-like pattern with slight variation in color from the peaks to the troughs of the waves. The cognitive task displays a stimulus, which is a patch of light, also known as a Gabor patch, that is slightly brighter than the background 710. This type of stimulus is called a near-threshold stimulus. The near-threshold stimulus is slightly above a person's perceptual threshold, the level of stimulus salience that is just enough to be perceived. The user taps anywhere on the screen after the stimulus is displayed and before the next stimulus. The system records whether or not the user responded 720. The cognitive task varies the timing between the old stimulus and the new stimulus 730. A new stimulus is displayed 740 and the cognitive task repeats for three minutes or until the user makes a total of three misses (e.g. a miss is defined as the user not responding within a pre-defined window of time following a stimulus) 750.
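The scheduling and termination logic of this task (varied inter-stimulus timing, a miss window, and a three-minute or three-miss stopping rule) can be sketched as follows; the interval range, simulated response probability and function name are illustrative assumptions.

```python
import random

def run_detection_block(max_duration_s: float = 180.0, max_misses: int = 3):
    """Scheduling and termination logic for the low-variance-of-response-time task.

    A near-threshold stimulus is presented at a varied interval; a miss is a
    stimulus with no response inside an assumed post-stimulus window. The block
    ends after three minutes or after three misses. User taps are simulated.
    """
    t, misses, trials = 0.0, 0, 0
    while t < max_duration_s and misses < max_misses:
        t += random.uniform(1.0, 4.0)        # varied timing between stimuli (assumed range)
        responded = random.random() < 0.85   # stand-in for a tap detected within the miss window
        if not responded:
            misses += 1
        trials += 1
    return {"trials": trials, "misses": misses, "elapsed_s": round(t, 1)}

print(run_detection_block())
```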
It has been shown that patients (elderly with cognitive decline) can be differentiated on the basis of fMRI-based brain measures of distractor suppression versus target enhancement, demonstrating that these separate top-down control systems can be altered independently as well. This implies a need for intra-individual measures of brain processes over time to characterize maintenance of both processes: attending targets and ignoring distractors. Objective assessment tests must obtain brain measures of both components of attentional control and their intra-individual fluctuations during sustained attention. Detecting fluctuations over time requires continuous measures of both target and distractor processing. Doing so improves diagnostics and monitoring of the brain effects of treatment.
Neurophysiological Attention Test (NAT) utilizes a novel EEG method to measure infra-slow fluctuations in BOTH attending and ignoring simultaneously during sustained attention tasks. The NAT is the first EEG-based method for continuously tracking neurophysiological indices of attending targets and ignoring distractors simultaneously during sustained tasks. We adapt the steady-state visual evoked potential (SSVEP) method to take advantage of the fact that the magnitude of stimulus processing in sensory cortex (measured by the SSVEP) provides an index of attentional top-down control from frontal-parietal systems. Our method makes it possible for the first time to continuously measure the intra-individual variability (infra-slow fluctuations) of electrophysiological brain activity representing the top-down controlled processing of BOTH attended targets and of ignored distractors in ADHD. Measurements of infra-slow fluctuations can also be fMRI-based.
It is challenging, however, to define EEG measures that can be used unambiguously to measure the responses to each of multiple stimuli presented simultaneously. The SSVEP frequency tagging method allows the attended target signal and the ignored distractor signal to be identified by the frequency of the SSVEP. Each stimulus type (target, distractor) is assigned a flicker frequency (e.g., 15 and 17 Hz respectively) that drives visual sensory cortices at the flicker frequency of each stimulus, thereby isolating and stabilizing the EEG activity corresponding to each stimulus type even when they are presented at the same time or even in the same location, e.g. in a figure/background configuration. It is the infra-slow fluctuations in attending and ignoring that one can be aware of (meta-attention), and thus these measures can be used for assessment of meta-attention, for feedback about attending and ignoring, and for training meta-attention to improve attentional control. Many patients, for example those with ADHD, stress, anxiety and depression, have less meta-attention, e.g. they are less self-aware of their attention, or are less frequently self-aware of their attention. This results in negative symptoms and a decrement in quality of life. Improving their meta-attention, and thereby also their attentional control, can greatly benefit these patients.
In accordance with embodiments of the present invention, in addition to relating physiological indices to performance, the physiological indices themselves have functional significance. In fact, they can uncover important brain function differences, in the presence of similar behavioral measures, that can be differentially targeted by therapeutics. At a minimum, they can provide additional, otherwise unavailable, information to supplement behavioral measures that may not completely differentiate sub-groups of patients (but suggest that there may be sub-groups), to aid in that differentiation.
Referring also to the screenshots of
The user may also be provided with an optional suitable focal cue, such as to focus the user's eyes on fixation point 1640. In this example, the first flickering frequency can be 12 Hertz and the second flickering frequency can be 16 Hertz, and the fixation point 1640 can be located approximately midway between the attended circle 1620 and the ignored circle 1630. In some embodiments, as illustrated by the respective screenshot 16C and screenshot 17C, a randomized plurality of targets and/or distractors, e.g. target 1652 and/or distractor 1752, can be presented to the user either in attended circle 1628 and/or ignored circle 1738. The presentation interval of targets and/or optional distractors can be approximately between one and five seconds, and can be randomized with respect to presentation rate, location and/or duration. Although squares are used for targets/distractors in this embodiment, other shapes are also possible, e.g., circles, ovals, rectangles, polygons, triangles or any other regular or irregular shapes.
A neural scanner measures the user's SSVEP (steady state visual evoked potential) and/or SSAEP (steady state auditory evoked potential). Neural scanner can be an EEG headset as seen at
This can be computed by frequency tagging, e.g., using a moving-window fast Fourier transform (FFT) of the EEG to extract the magnitude (e.g., amplitude) of the SSVEP signal over time, thereby yielding a waveform that includes the infra-slow fluctuations of the SSVEP magnitude over a sustained period of time (e.g., from about five seconds to about two minutes, preferably approximately between 10 seconds and 100 seconds). Note that the “raw” SSVEP signal includes measurable representations of the above described flickering frequencies.
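A minimal sketch of such a moving-window FFT, assuming a single EEG channel sampled at 250 Hz and a 15 Hz tagged flicker, is shown below; the window and step sizes, and the synthetic demo signal, are illustrative choices rather than prescribed parameters.

```python
import numpy as np

def ssvep_envelope(eeg: np.ndarray, fs: float, tag_hz: float,
                   win_s: float = 2.0, step_s: float = 0.25) -> np.ndarray:
    """Moving-window FFT: magnitude of the SSVEP at the tagged flicker frequency.

    Returns one magnitude per window, i.e. a slow waveform whose changes over
    tens of seconds are the infra-slow fluctuations of interest.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    tag_bin = int(np.argmin(np.abs(freqs - tag_hz)))
    mags = []
    for start in range(0, len(eeg) - win + 1, step):
        spectrum = np.abs(np.fft.rfft(eeg[start:start + win] * np.hanning(win)))
        mags.append(spectrum[tag_bin])
    return np.asarray(mags)

# Synthetic demo: 60 s of EEG-like data containing a 15 Hz component whose
# amplitude drifts slowly (an artificial infra-slow fluctuation).
fs = 250.0
t = np.arange(0, 60.0, 1.0 / fs)
eeg = (1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)) * np.sin(2 * np.pi * 15.0 * t) + np.random.randn(t.size)
env = ssvep_envelope(eeg, fs, tag_hz=15.0)   # one sample every 0.25 s (4 Hz)
print(env.shape)
```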
Different frequencies may be measured based upon the cognitive task being performed by the user. For example, when measuring attention, the FFT of the waveform can be used to compute the magnitude of the infra-slow fluctuations (ISF) in an exemplary frequency band (such as 0.01-0.2 Hz). The infra-slow fluctuations of the extracted SSVEP can be used to compose an attention score, which is useful, for example, in diagnosing ADHD. This technique results in an attention score that aids clinicians in their assessment of an individual's capacity to attend to and ignore inputs, in simple form, for example:
A_ignore = Power(infraslow band over time)
A_attend = Power(infraslow band over time)
A_performance = Power(infraslow band over time)
Wherein A is the attention score for each.
These techniques can be summarized by the exemplary equations:
A_I = P_I(t)_ISF      (EQUATION A)
A_A = P_A(t)_ISF      (EQUATION B)
A_P = P_P(t)_ISF      (EQUATION C)
Wherein:
A = attention score (subscript I = Ignore; subscript A = Attend; subscript P = Performance)
ISF = infra-slow fluctuations: very low frequencies, such as 0.01-0.2 Hz.
P(t) = power over time for the infraslow frequencies
NOTE: The magnitude of the power can be found using an FFT, wavelet, or other type of transform.
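Continuing the earlier sketch, the power of the infra-slow fluctuations per Equations A-C could be computed from an SSVEP-magnitude waveform as follows; the band limits mirror the exemplary 0.01-0.2 Hz range, and the synthetic demo waveform is purely illustrative.

```python
import numpy as np

def isf_power(envelope: np.ndarray, env_fs: float, band=(0.01, 0.2)) -> float:
    """Attention score per Equations A-C: power of the infra-slow fluctuations,
    i.e. band power of the SSVEP-magnitude waveform in the 0.01-0.2 Hz band."""
    centered = envelope - envelope.mean()                 # remove DC before the transform
    spectrum = np.abs(np.fft.rfft(centered)) ** 2
    freqs = np.fft.rfftfreq(centered.size, d=1.0 / env_fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[in_band].sum())

# Demo with a synthetic SSVEP-magnitude waveform sampled at 4 Hz; in practice the
# input would be the envelope for the attended (A_A), ignored (A_I) or
# performance-related (A_P) signal.
env_fs = 4.0
t = np.arange(0, 120.0, 1.0 / env_fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)       # 0.05 Hz infra-slow fluctuation
print(isf_power(envelope, env_fs))
```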
Other modifications and additions are also possible. For example, the magnitude of the waxing and waning of attention reflected in the ISF magnitude can also vary over tens of minutes and hours of the day, or across days or longer periods which can be measured with the ISF magnitude at different periods.
Alternatively or in addition, a new measure that combines the ISF with other EEG measures can be created by relating the ISF to other EEG measures extracted at the same time. For example, frontal lobe activity could be found that co-varies with the ISF.
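Such a combined measure could be as simple as a correlation computed between the two time courses, as in the following sketch; the "frontal" measure here is a synthetic placeholder array, not an implemented extraction.

```python
import numpy as np

def covariation(isf_waveform: np.ndarray, other_measure: np.ndarray) -> float:
    """Pearson correlation between the ISF waveform and another EEG measure
    (e.g. a frontal-lobe band-power time course) extracted on the same time base."""
    a = (isf_waveform - isf_waveform.mean()) / isf_waveform.std()
    b = (other_measure - other_measure.mean()) / other_measure.std()
    return float(np.mean(a * b))

# Demo with synthetic, partially correlated time courses.
rng = np.random.default_rng(0)
isf = rng.standard_normal(240)
frontal = 0.7 * isf + 0.3 * rng.standard_normal(240)
print(covariation(isf, frontal))
```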
Many other modifications and additions are also possible. For example, it may be possible to present targets substantially outside of the attended area, and identify the targets using a color, a shape or an alphanumeric character (or any suitable symbol such as an Arabic character or Chinese calligraphic symbol).
There are also alternate methods for explicitly and/or implicitly incorporating a target within the attended area, such as superimposing a “target” onto the attended area by, for example, substantially lengthening or shortening the duration of the flickering pulse and/or varying the color, shape and/or size of the attended area. There can be any number of attended and ignored areas, which can be located anywhere, even overlapping.
Turning now from the data collection side of the systems, outputs provided to the user will now be discussed. In the initial iterations of the proposed systems and methods, the user may rely upon a smart device, such as a smartphone or tablet, which allows for display of visual cues, audio cues, and usually vibration or other physical sensations. In the future however, it is entirely possible that such devices will be able to provide olfactory cues, or other sensory outputs.
In other instantiations, other input and output devices may be leveraged by the proposed systems and methods. Examples of such IO devices are presented in reference to
Moving on,
Each user's devices are coupled to a backend AI system 2040 via a network 2030. In most cases the network comprises a cellular network and/or the Internet. However, it is envisioned that the network may include any wide area network (WAN) architecture, including private WANs, or private local area networks (LANs) in conjunction with private or public WANs. The AI system 2040 has access to data stores 2050 to effectuate the system's operations.
Turning to
In addition to the larger data store 2050 leveraged by the external AI system 2040, the device may include its own internal memory 2150, which generally stores the training application 2120, which may be executed by the device's processor. A radio 2140 (or wired connection) couples the device to the network 2030, as previously discussed, and ultimately to the external AI system 2040.
In this disclosure, the local training application is treated as a component separate from the backend AI system. This is a result of the AI processing requirements being substantial, while the local user's device is generally incapable of performing such operations. However, it is contemplated that, in the future, local devices may have sufficient processing power and internal memory to effectuate local AI processing. In such a situation, the AI system 2040 may physically reside within the user's device. Model updates will generally require connectivity to an external system that generates and updates the AI models, but actual model operation may be performed at the user's device.
The AI system accesses data located in the data store 2050 for training, modeling and selection of personalized guidance for the user. The data maintained in the datastore includes context data 2151 for each user. Context data can be extremely varied, and may include data about the user's psychological, physical and mental state, as well as information about the user's environment. Some of this data may be collected by the device and/or sensors, provided by the user, obtained from other users, or collected from ancillary sources (for example, the weather where the user is located can be collected from the National Weather Service or similar third-party sources).
Past performance data 2152 from prior sessions in the training tool for the user is also collected and stored, as are any notes and/or feedback 2153 supplied from the user to the system as part of an exercise. Historical guidance 2154 that was provided to the user is also stored. The trained machine learning algorithms 2155 are likewise retained in the datastore, along with EEG data 2156 and/or other massive data sets collected from the user (fMRI, for example, in some embodiments).
Feedback from the user is elicited by a feedback collector 2230. The contextual data collected for the user, performance data (both real-time and historical) and collected feedback may be provided to the guidance system 2240 for generation of tailored guidance for the user. The guidance system may operate locally, based upon a deterministic rule-based model, and remotely, via the AI system, to provide guidance based upon a more complex, context-driven data set that is consumed by the ML models for determining the most effective guidance. This guidance may be provided to a real-time feedback provider 2260 for relay to the user. A distractor 2270 may generate distractions and/or interruptions to the user, to assist in attention exercises, or to otherwise promote a changed mental state in the user. A multimodal system 2280 may be leveraged to present stimuli to the user in two (or more) modalities. For example, the given exercise may include both audio and visual stimuli that the user is expected to react to.
Turning now to
This listing is not limiting in scope, as other options are likewise available, such as journaling and guidance (not illustrated). Likewise, not all of the options presented here are necessarily required to be present in the system. For example, “assessing their performance” may be omitted as a separate module, and instead be part of the loop in which the user performs the exercise, receives feedback, reflects upon the feedback to gain insights and then replays the exercise in order to apply the insights. This reflection step is augmented by guidance (provided separately from the options) to enhance the generation of insights. The past performance data may likewise be leveraged in the formation of insights. Regardless of the option selected, the user is then provided relevant real-time feedback based upon their selection (at 2480) and may then be returned to the home screen. Real-time feedback is provided during the exercise, whereas end of session (EOS) feedback may be provided after the exercise has been completed.
For the results process 2476, comprehensive data may be accessed, including the data/results from the last session as well as the user's entire dataset for all sessions and any other data input, parsed into modules that are meaningful to the user, such as type of results (accuracy, RT variability, self-reported improvement scores, etc.) and time (data displayed for a day, week, month, etc.). Patterns in the performance and self-report data, as well as relationships/patterns in those data, are displayed together with all contextual data over time as determined by algorithms and AI. This is performed for different categories, such as an association between improvement with one exercise type more than another, or improvement in that exercise as a function of time of day. These performance data and self-reports are related to the content in the user's notes and to their contextual data.
The exercise process 2471 is provided in greater detail in relation to
For ‘Focus’, warm energizing colors may be employed. The intention is to immerse the mind in background sounds, and return a wandering mind to the target. Some may recognize this as a “mindfulness” type meditation state. The user may have the ability to configure the sound scene, time and difficulty of the exercise. The timing of the exercise is typically 2 minutes; it is at a fast pace, with more lively audio scenes, and harder options, such as requiring responses on every other target (tone). Assessments for this exercise could include accuracy and the variability in response speed over time, a measure of consistency in sustained attentional control. As with other exercises (except sleep), the assessment for the exercise is performed at the end of the exercise, and includes a request for feedback regarding improvement in the relevant mental state. These answers are tracked, using a Likert scale for example, to show users relevant patterns in their data. For example, under this focus exercise, it may show that focus is improved more or less at particular times or on particular days.
For ‘Shifting Gears’, the color scheme may be neutral and cool. The graphics are relaxing. The intention of this mode is to place attention gently on a target tone and allow it to occupy the user's thoughts. It is also intended to clear the user's mind such that they are capable of preparing for a change in activities and are able to be focused for that activity. The parameters that may be customized by the user are again the sounds available, time and difficulty. The target time for the exercise is 4 minutes, and it is a slow pace. Metrics for this exercise that are assessed include the variability in response speed over time, a measure of consistency in sustained attentional control.
For ‘Refresh’, again the color scheme may be neutral and cool. The graphics are relaxing. The intention of this mode is to address mental fatigue, to put thoughts and thinking on hold, and to relax while giving just a little attention to the target sound in order to keep the mind from drifting too much. The parameters that may be customized by the user are again the sounds available, time and difficulty. The target time for the exercise is 4 minutes, and it is a slow pace. Metrics for this exercise that are assessed include the variability in response speed over time, a measure of consistency in sustained attentional control.
For ‘Spinning Mind’, cool colors and calming graphics are employed. The intention of this exercise is to let the user's thoughts go, diminish recurrent thinking and attachment to the thoughts, and allow attention to be drawn away from the thoughts by attending to the target sound. Parameters that are configurable for this exercise include sounds and time. The target time for the exercise is 4 minutes, the pace is slow, and a breathing pre-exercise is required. Metrics for this exercise that are assessed include the variability in response speed over time, a measure of consistency in sustained attentional control.
For ‘Calm’, a subdued color scheme with relaxing graphics is employed. The intention of this exercise is to let the user relax and reduce stress by periodically taking deeper breaths, opening up their mind and being immersed in the sounds, while gently paying attention to the target sound to keep them focused at the same time. The target time for the exercise is 4 minutes, the pace is slow, and a breathing pre-exercise is required. Metrics for this exercise that are assessed include the variability in response speed over time, a measure of consistency in sustained attentional control.
For ‘Sleep’, a dark color scheme that avoids blue hues is used, with sleep graphics. The intention of this exercise is to allow the user to let go of thoughts in order to prepare the user for sleep. Only sound options are configurable by the user, while the exercise is on a preset timer. The exercise includes an active response segment that lasts for 4 minutes, which then transitions to an audio segment. There is no assessment with this exercise; the system merely times out.
For ‘Meditate’, a calm color scheme is utilized. The intention of this exercise is to allow the user to let go of their thoughts and feelings, immersed in the background sounds, while gently focusing on the target sound as an anchor for their attention to come back to if their mind drifts to thoughts or feelings. The configurable parameters include the option for a simple mindfulness preparation versus an extended meditation session. The target time for this exercise is 4 minutes (extendable), and the pace is slow. Metrics for this exercise that are assessed include the variability in response speed over time, a measure of consistency in sustained attentional control.
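The consistency metric referenced for these exercises, the variability in response speed over time, could for example be computed as the standard deviation and coefficient of variation of the response times collected during a session, as in the following sketch; the function name and sample values are illustrative assumptions.

```python
import statistics

def rt_consistency(response_times_ms):
    """Variability of response speed over time: standard deviation and coefficient
    of variation of the response times collected during one exercise session."""
    mean_rt = statistics.fmean(response_times_ms)
    sd_rt = statistics.stdev(response_times_ms)
    return {"mean_ms": round(mean_rt, 1), "sd_ms": round(sd_rt, 1), "cv": round(sd_rt / mean_rt, 3)}

print(rt_consistency([420.0, 455.0, 431.0, 502.0, 440.0, 468.0]))
```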
The user may select between the different exercises (at 2510A-G, respectively). Subsequently, the user is routed to perform the given exercise (at 2520A-H, respectively).
The exercise is then run (at 2650) based upon the type of exercise selected, as noted in the previous figure. The exercise is then concluded (at 2660).
After the exercise, the user is given an option to go back to the start screen, or to proceed further (at 2730). If the user proceeds, they are presented a prompt (at 2735). Again, the prompt may be visual in nature, or may include audio, tactile, olfactory, or other inputs (or a combination thereof). The user then reacts to these prompts, and this feedback by the user is collected by the system (at 2740). The user is also presented the option, at the end of the exercise, to provide notes (at 2745). If the user opts to provide these notes they are collected (at 2750). Notes may be processed by natural language processors to identify conceptual information. This may be performed by normalization of the text, parsing the text by constituent parts, and matching tokens in the parsed text to a conceptual lexigraphy hierarchy, based upon distance measurements between the token and the given abstracted concept. These concepts may be provided to the AI system as inputs to generate additional contextual information for the user.
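A minimal sketch of such note processing, assuming a toy concept lexicon in place of a full conceptual hierarchy and distance measurements, might look like the following; the lexicon entries and function name are illustrative only.

```python
import re

# Assumed miniature concept lexicon; a production system would use a much richer
# conceptual hierarchy together with distance measurements between tokens and concepts.
CONCEPT_LEXICON = {
    "calm": "relaxation", "relaxed": "relaxation",
    "tired": "fatigue", "sleepy": "fatigue",
    "focused": "attention", "distracted": "attention",
}

def concepts_from_note(note: str):
    """Normalize free-form note text, parse it into tokens, and map tokens to concepts."""
    tokens = re.findall(r"[a-z']+", note.lower())        # normalization + tokenization
    return {CONCEPT_LEXICON[t] for t in tokens if t in CONCEPT_LEXICON}

print(concepts_from_note("Felt tired at first but more focused by the end."))
```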
Regardless of whether notes are collected, the user is then presented with a completion screen (at 2755). The completion screen can provide the user a congratulations message and options to either view their data from the exercise, replay the exercise, or complete the exercise (at 2760). If the user wishes to replay the exercise, they are routed back to the start screen. If the user is interested in viewing her data, she is routed to a results screen (at 2765) which displays the user's data measures for the given exercise and/or data for past exercises as well, in addition to data analytics derived from machine learning pattern recognition. These may include performance data from the exercise (e.g. number of hits and misses), derivative measures from those data (e.g. the variability in response time over the period of the exercise), the user's responses to the prompts/questions (e.g. are you more calm now) and the user's notes. In addition to the display of the user data, the user may be requested to provide feedback and reflect upon their performance in a guidance framework in order to generate insights. This also allows the user to repeat the exercise, improving their performance on each iteration. After reviewing their data, the user is asked to reflect upon their performance and consider it together with what they are learning via guidance that is provided to them (not shown). This provides the user insights into how best to control their attention and how to improve on the next exercise. Subsequently, the user is re-routed back to the completion screen. Lastly, if the user selects to move on, the user is presented with a completion message (at 2770) and the exercise completes.
It should be noted that the above disclosed method of exercise operation is exemplary, and an attempt to genericize the process for the sake of clarity and brevity. In reality, each of the various exercises may deviate from the given process. For example, for the sleep exercise, after the system collects feedback on the user's progress, the system may slowly wind down. This may include playing soothing sounds/music at a gradually lowering amplitude, dimming the screen slowly, and requiring no further input or prompts from the user. In such an exercise, the user is not asked for notes, there is no completion screen, and the user does not select between options for the exercise to fully complete. As such, it is intended that the above exercise processes are merely illustrative in nature, and are in no way intended to limit the scope of the system's operation.
Returning now to
In addition to providing guidance, the user naturally “gets better” at the mental tasks presented in the exercises. This occurs in two ways: first, due to practice and learning, and second, due to insights gained during the reflection (2355) that occurs in the guidance process (2350). This enhancement of learning (at 2340) assists in improving performance in subsequent exercises. Likewise, this enhancement in the user's learning, coupled with the guidance provided, is applicable to real-life situations. This transfer of skills from the system to the user's everyday life (at 2360) is the ultimate goal of the system.
Turning now to
In
In this example interface, the user is able to return to the previous welcome screen by selecting the ‘back’ button at the top left section of the screen. The user account and settings may be accessed by the set of ellipses at the top right side of the screen. Each of the exercises may be selected directly, or information for the given exercise may be accessed by selecting the “i” symbol next to the given exercise. Along the bottom of the screen, there are quick access icons to bring the user back to the home screen, view prior results, access their library of resources and guidance, and access their settings.
When a user selects an exercise, they may be taken to a start screen for the given exercise. An example of such a screen is provided at
Likewise, the set of sounds played, or selectable by the user, may be tailored based upon the exercise selected. For example, for the sleep exercise, very soothing and quiet sounds may be selected, whereas for focus, even but louder sounds may be provided for the user. Even the same types of sounds may differ between the mental exercises. Again, returning to our prior examples, rain in the sleep exercise may include a soft drizzle type sound, whereas a focus exercise rain sound may include thunder and a downpour of rain sounds.
As seen in this example figure, the user may select among a variety of suitable sound options for the mental exercise, and select a duration (within an accepted range for the given exercise). Likewise the tempo for the exercise can be selected by the user. Once all parameters are thus set, the user can begin the exercise.
Turning now to
At the same time, the system may provide the user with notifications to assist in motivating the user to continue engaging with the system. Effectively, this process of guidance and transfer of learning is a loop: the exercise teaches an aspect of attentional control, then the user implements that control in real-life activity, and then the user sees how the control learned in the exercise impacts real life, thereby making the learning in the exercise more relevant/meaningful and useful in improving real life.
In addition to the notifications, or possibly as part of the notification process, in some embodiments users may get credit for improving their cognitive control and well-being by doing other activities with other devices and apps. This is generally known as “gamification” of the process. Examples of this are receiving credits, badges, or other rewards for activities such as running, yoga, mindfulness class or other cognitive apps. The benefits of these other activities are revealed to the user by seeing how these activities impact their ability to control their attention during the present exercises, e.g., their scores in the above disclosed exercises. As a result the user gets cognitive control/well-being credits for improving their scores on the presently disclosed exercises after doing these other activities.
Which guidance to provide the user, and whether to move the user from a given module of an exercise to a more advanced module, is determined based upon several factors. These may include the number of sessions a user completes (in aggregate, or for a given type of exercise). The user's performance on the exercise may likewise help determine guidance and advancement in modules. The user's pattern of usage, context, user-defined goals, and user desires/feedback may also all be leveraged by either the rule-based engine, or by the AI models, to determine when the user is ready for advancement and/or the type of guidance the user should get as feedback.
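For illustration only, a deterministic rule of the kind the rule-based engine might apply could look like the following sketch; the thresholds and factor names are assumptions, and in practice such decisions may instead be made or refined by the AI models together with context and user goals.

```python
def ready_to_advance(sessions_completed: int, recent_accuracy: float,
                     rt_cv: float, user_requested: bool) -> bool:
    """Illustrative rule for advancing a user to a more advanced module.

    Thresholds here are assumptions for the sketch; in practice they may be set
    per exercise, refined by the AI models, or combined with usage patterns,
    context, user-defined goals and user feedback.
    """
    enough_practice = sessions_completed >= 5
    performing_well = recent_accuracy >= 0.85 and rt_cv <= 0.20
    return user_requested or (enough_practice and performing_well)

print(ready_to_advance(sessions_completed=6, recent_accuracy=0.90, rt_cv=0.15, user_requested=False))
```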
Now that the systems and methods for the mental exercises and improvements in mental acuity have been provided, attention shall now be focused upon apparatuses capable of executing the above functions in real-time. To facilitate this discussion,
Processor 3022 is also coupled to a variety of input/output devices, such as Display 3004, Keyboard 3010, Mouse 3012 and Speakers 3030. In general, an input/output device may be any of: video displays, track balls, mice, keyboards, microphones, touch-sensitive displays, transducer card readers, magnetic or paper tape readers, tablets, styluses, voice or handwriting recognizers, biometrics readers, motion sensors, brain wave readers, or other computers. Processor 3022 optionally may be coupled to another computer or telecommunications network using Network Interface 3040. With such a Network Interface 3040, it is contemplated that the Processor 3022 might receive information from the network, or might output information to the network, in the course of performing the above-described cognitive training operations. Furthermore, method embodiments of the present invention may execute solely upon Processor 3022 or may execute over a network such as the Internet in conjunction with a remote CPU that shares a portion of the processing.
Software is typically stored in the non-volatile memory and/or the drive unit. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this disclosure. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.
In operation, the computer system 3000 can be controlled by operating system software that includes a file management system, such as a medium operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile memory and/or drive unit and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile memory and/or drive unit.
Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is, here and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods of some embodiments. The required structure for a variety of these systems will appear from the description below. In addition, the techniques are not described with reference to any particular programming language, and various embodiments may, thus, be implemented using a variety of programming languages.
In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a client-server network environment or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, Glasses with a processor, Headphones with a processor, Virtual Reality devices, a processor, distributed processors working together, a telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
While the machine-readable medium or machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the term “machine-readable medium” and “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the presently disclosed technique and innovation.
In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer (or distributed across computers), and when read and executed by one or more processing units or processors in a computer (or across computers), cause the computer(s) to perform operations to execute elements involving the various aspects of the disclosure.
Moreover, while embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
While this invention has been described in terms of several embodiments, there are alterations, modifications, permutations, and substitute equivalents, which fall within the scope of this invention. Although sub-section titles have been provided to aid in the description of the invention, these titles are merely illustrative and are not intended to limit the scope of the present invention. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, modifications, permutations, and substitute equivalents as fall within the true spirit and scope of the present invention.
This non-provisional application claims the benefit of U.S. Provisional Application No. 63/216,448 filed on Jun. 29, 2021, of the same title, pending, which application is incorporated herein in its entirety by this reference.
Number | Date | Country
--- | --- | ---
63/216,448 | Jun. 29, 2021 | US