A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the reproduction of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Not Applicable
Not Applicable
The present disclosure relates generally to devices, systems, and methods for influencing behavior change in humans and more particularly to devices, systems, and methods for providing multi-sensory stimuli to users in a dynamic virtual environment to influence behavior and decision-making.
It is widely known in healthcare fields that behaviors and lifestyle choices greatly impact individual health conditions. Numerous health risk behaviors such as smoking and other tobacco use, lack of exercise, poor nutrition, and excessive alcohol consumption lead to higher incidences of illness and premature death. These risk behaviors also contribute greatly to obesity, type 2 diabetes, heart disease, stroke, cancer, and other ailments.
Although some conventional educational and therapy systems aim to inform users on behavior and lifestyle choices in an attempt to influence users and patients to make healthier decisions and daily choices, such systems are generally perceived by users as overly clinical and uninteresting. This makes such systems generally ineffective at moderating and constructively influencing behavior over time.
Also, existing content platforms aiming to influence behavior and lifestyle decisions are generally not personalized to individual users, but instead include generic content distributed to various users of different backgrounds and life experiences. This “one size fits all” approach to conventional behavior change content is often ill-suited for providing effective results in patients of diverse ages and backgrounds.
Further, difficulty with financial management of physician practices is often cited as a leading obstacle to providing efficient and profitable healthcare. Much of this difficulty is related to management of chronic diseases and health problems related to lifestyle choices and risk behaviors. By better educating and influencing patients to make beneficial lifestyle choices, health outcomes can be improved and administrative and financial burdens on healthcare providers can be lessened. Healthcare providers need better platforms for assisting patients in addressing lifestyle choices and risk behaviors.
What is needed then are improvements in devices, systems, and methods for influencing behavior and lifestyle choices in users and patients.
This Brief Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
One aspect of the disclosure is to provide a hardware and software-based system to provide a user or patient with interactive, dynamic digital content in a simulation experience to influence behavior and lifestyle choices.
Another aspect of the disclosure is to provide a system to monitor patient feedback and/or visual activity to make dynamic content selections.
A further aspect of the disclosure is to provide a system to monitor patient biometric activity such as breathing patterns, respiration rate, muscle activity, heart rate, body temperature, heart rate variability, electrodermal activity (EDA), galvanic skin response (GSR), electroencephalogram (EEG), eye movement, and/or other physiological or psychological parameters and to make dynamic content selections and time-optimized content introduction based on the measured patient biometric activity.
Another aspect of the disclosure is to provide a system to monitor both patient feedback and patient biometric activity, and to make dynamic content selections based on the measured activity. The dynamically-selected content is provided to the user within a session via a display interface such as a computer screen, an augmented-reality headset, or a virtual-reality headset. The system further makes a determination of time-optimization to introduce the dynamically-selected content based on the patient feedback and patient biometric activity.
Yet another aspect of the disclosure is to provide a software-based dynamic content selection engine including at least one database housing numerous content packages available for dynamic selection. Over time, user data and content selection performance data is logged. The logged data is used to make future predictive enhancements to dynamic content selection.
Numerous other objects, advantages and features of the present disclosure will be readily apparent to those of skill in the art upon a review of the following drawings and description of a preferred embodiment.
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts that are embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention and do not limit the scope of the invention. Those of ordinary skill in the art will recognize numerous equivalents to the specific apparatus and methods described herein. Such equivalents are considered to be within the scope of this invention and are covered by the claims.
The present disclosure relates to a dynamic, multi-sensory simulation system for effecting behavior change. The system includes three main parts, an example of which is shown in
User interface 102 includes any suitable display operable to provide visual or other types of content to a user. As shown in
Sensory stimulation is provided to the user via the user interface 102. Sensory stimulation may take many forms, including visual, auditory, haptic, olfactory, gustatory, or other forms to create a cognitive experience for a user. By providing sensory stimulation, it is possible to affect the mental state of the user and to place the user into a relaxed state of mental activity such that the user may be more susceptible to selected behavior change content.
The simulations communicated to the user via the user interface 102 are generally created using devices and software to replace the normal sensory inputs the user experiences with dynamic and personalized sensory inputs that guide the user through a simulated and interactive experience. For example, a remote software platform 110 includes software configured to make dynamic selections of content for communication to the user based on various types of feedback associated with the user during a session, or obtained from prior sessions.
Sensor 106 may include any suitable biometric monitoring device to monitor the state of a user's body during the simulated experience. For example, sensor 106 may include biometric sensors to measure heart rate, heart rate variability, electrodermal activity (EDA), galvanic skin response (GSR), electroencephalogram (EEG), eye-tracking, body temperature, and others. As shown in an embodiment in
Software residing on the remote computer 116 is operable to process the measured data to make a determination of what content to dynamically select from a database 118 for transmission to the user interface 102. The software residing on remote computer 116 is also operable to make a determination of when to transmit the dynamically-selected content from the database 118 to user interface 102 during a session based on the measured data. In some embodiments, the full content package including available content options to be displayed to user interface 102 is stored locally on local computer 112, and the remote computer 116 makes a determination of which selected portions of that content to send to the user interface 102. The remote computer 116 then sends an instruction of which content portions to send to the user interface 102. The remote computer 116 also sends an instruction of when to send the selected content portions based on the measured data. The measured data may also be analyzed in combination with other feedback acquired from the user, such as voice inputs or detected activity within a virtual space.
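By way of illustration only, a minimal Python sketch of this what/when instruction pattern follows. The type, field names, and decision step here are hypothetical and are not part of this disclosure; they simply illustrate an instruction that identifies which locally stored content portion to send and whether the measured data indicates it should be sent now.

```python
from dataclasses import dataclass

@dataclass
class ContentInstruction:
    """Instruction from the remote computer to the local computer:
    which locally stored content portion to send to the user interface,
    and whether the measured data indicates it should be sent now."""
    content_id: str     # identifies a content portion stored on the local computer
    deliver_now: bool   # True once the targeted biometric state is detected

def decide(measured_state: dict, selection_rules: dict) -> ContentInstruction:
    """Hypothetical decision step: select content keyed to the user's
    measured state and gate delivery on a readiness flag."""
    content_id = selection_rules.get(measured_state.get("mindset"), "default")
    return ContentInstruction(content_id, bool(measured_state.get("ready")))
```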
For example, during a session the sensor array 106 may detect data indicating certain content stored on database 118 should be selected and transmitted to a user to facilitate behavior change objectives. However, sensor array 106 may not yet detect a physiological or mental condition under which the content would have optimal effect. Sensor array 106 will continue to monitor the physiological and/or mental condition of the user, and when a predetermined set of parameters is detected in the biometric data, the system will transmit the dynamically selected content via network 114 to local computer 112 and to user interface 102. Alternatively, in some embodiments, the system will send an instruction via network 114 to local computer 112 identifying a specific portion of the content stored locally on local computer 112 to send to the user interface 102. In this exemplary embodiment, the acquired biometric data may be aggregated on the local computer 112 prior to transmission to remote computer 116 as shown in
Referring to
User interface 102 communicates with a local computer 112 via a wired or a wireless signal path. Digital content is transmitted to user interface 102 from local computer 112 for communication to the user. Additionally, biometric data from sensor array 106 is transmitted to local computer 112. Local computer 112 communicates over a network 114 with one or more remote computers. In another embodiment, the biometric data is transmitted directly to a remote computer.
The communications signal between local computer 112 and one or more remote computers includes two main components, an example of which is demonstrated in
In some embodiments, the threshold values are determined in relation to data captured for each user. For example, if a user's baseline heart rate, captured at the start of the experience, is 80 bpm, the system determines how much the user's average heart rate declines or increases in relation to the user's baseline by using measures of variation or change, such as the standard deviation across all data captured from the user during the session. Threshold values are not limited specifically to heart rate; any metric used to determine a user's state during a session may be used.
In other embodiments, the threshold values are determined in relation to data captured across a population. For example, the system can receive data associated with a population's baseline heart rate during a state of relaxation. The system determines that a user has not reached a state of relaxation based on the user's heart rate relative to the population's baseline heart rate indicative of a state of relaxation. The system may deliver content to a user once the user's heart rate has reached a threshold value based on a population's baseline heart rate during a state of relaxation. Other embodiments may use a hybrid approach, wherein the system determines threshold values based on both user-specific values and population values.
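A minimal Python sketch of these three threshold approaches follows. The k multiplier and blend weight are assumptions for illustration, not values specified in this disclosure.

```python
import statistics

def user_threshold(session_hr: list[float], k: float = 0.5) -> float:
    """Per-user threshold: the user's baseline heart rate, captured at the
    start of the experience, offset by k standard deviations of all heart
    rate samples captured from the user during the session."""
    baseline = session_hr[0]
    return baseline - k * statistics.stdev(session_hr)

def population_threshold(relaxed_hr: list[float]) -> float:
    """Population threshold: the mean baseline heart rate observed across
    a population during a state of relaxation."""
    return statistics.mean(relaxed_hr)

def hybrid_threshold(session_hr: list[float], relaxed_hr: list[float],
                     weight: float = 0.5) -> float:
    """Hybrid approach: a weighted blend of the user-specific and
    population-based threshold values."""
    return (weight * user_threshold(session_hr)
            + (1 - weight) * population_threshold(relaxed_hr))
```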
Second, a dynamic user experience service collects log file information sent from the local computer 112 of the multi-sensory simulation machine. These log files may include one or more of: answers to questions posed to the user during the simulation, records of which virtual objects inside the simulation the user fixed their gaze on or interacted with, and navigation and/or locomotion choices inside the simulation that caused the user to move around inside the simulated experience. These log files are transmitted to a second dedicated dynamic content selection program 116b, where they are collected, stored, and interpreted to ascertain elements of the user's motivation and mindset during the experience (for example, the user may have answered the question of why they are motivated to quit smoking by selecting one or more answers inside the experience). These data, combined with business rules encoded inside the dynamic user experience service and with predictive models, will be used to decide what specific content is best to deliver to the user of the multi-sensory experience at a given time. That content may then be selected from second database 118b. The dynamic user experience service may use various types of information previously collected and stored about the user and their experience, including, but not limited to: user demographic data, explicit answers to questions posed inside the experience, and other physiologic or psychologic indicators that may be ascertained through passive monitoring of how the user interacts with the simulation.
Additionally, the simulation service computer 112 may collect various records (logs) of how the user interacts with the experience, and will store and forward this information to the dynamic user experience service 116b periodically. The dynamic user experience service 116b will send messages to the simulation service computer 112 instructing it on what content to deliver to the user and when to deliver it. Such content includes explicit descriptions of computer-generated stimuli, which may include computer graphic simulations of people, places or things, video recordings of the real world, audio content (music, voice, sounds), or other simulations of the real world.
In many embodiments, a user may interact with a front-end software application, such as a Physician Control Panel or Administrative Control Panel. The front-end application or remote biometrics services 116a record biometric data captured from sensor array 106, including one or more devices connected to or worn by the patient. The biometric data is captured in data packets and streamed via network 114 in some embodiments. In some embodiments, the sensor array 106 and front-end software application, including associated data acquisition hardware, may be programmed to different data acquisition sampling rates. In some embodiments, the sensor array 106 is configured for a data acquisition sampling rate of once every sixteen seconds. In other embodiments, the sensor array 106 is configured for a data acquisition sampling rate of once every 160 milliseconds. The sampling rate is adjustable. The front-end application collects the data in a local database on local computer 112. In other embodiments, the sensor array 106 directly transmits the biometric data to the remote service 116a over the network 114. The collected biometric data may be transmitted via network 114 at a programmable transmission frequency. In some embodiments, the data is transmitted at 1 Hz, or once per second. The data is transmitted via network 114 to a remote server 116 on which first and second programs 116a, 116b are stored. In alternative embodiments, the data is transmitted to more than one remote server. For example, in some embodiments a first remote server houses first program 116a and accesses first database 118a, and a second remote server houses second program 116b and accesses second database 118b.
The front-end software application on local computer 112 or the sensor array 106 may perform analysis of the acquired biometric data prior to transmission over network 114. For example, in some applications, the front-end software application is programmed to calculate the mean of the biometric data every ten seconds for the prior ten-second interval. The calculated data is sent via network 114 to the remote computer 116. The back end server 116 then calculates a moving average of the mean and standard deviation of a predetermined number "n" of previous iterations of the biometric summaries. In some embodiments, the back end server 116 calculates a moving average of the mean and standard deviation of the previous five transmitted biometric summaries.
When a user begins a simulation session that is dynamically driven by the acquired biometric data, the remote computer 116 sets baseline values of the average and standard deviation of the "n" most recent biometric summaries. As the simulation experience continues, the back end server calculates a moving average of the "n" most recent summaries and compares the moving average to the baseline values. When a target differential is met (for example: Moving Average Heart Rate < Baseline Heart Rate − [0.5 × Baseline Standard Deviation]), the back end server sends a signal via application programming interface (API) to the simulation experience computer 112 that the patient has achieved the targeted biometric state and is ready for the delivery of behavior-influencing content. This type of example calculation may be used to determine when to send the dynamically selected content to a user based on the acquired biometric data.
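A minimal Python sketch of this summarize-and-trigger calculation follows. The function names are hypothetical; the choice of the first n summaries as the baseline window and the 0.5 multiplier mirror the example above and are assumptions rather than fixed requirements of the system.

```python
from statistics import mean, stdev

def summarize(samples: list[float]) -> float:
    """Front-end step: the mean of the biometric samples collected over
    the prior ten-second interval (the summary sent over the network)."""
    return mean(samples)

def targeted_state_reached(summaries: list[float], n: int = 5, k: float = 0.5) -> bool:
    """Back-end step: set baseline values from the summaries available as
    the session begins, then compare the moving average of the n most
    recent summaries against the target differential:

        Moving Average < Baseline - k * Baseline Standard Deviation
    """
    baseline = mean(summaries[:n])       # baseline average at session start
    baseline_sd = stdev(summaries[:n])   # baseline standard deviation
    moving_avg = mean(summaries[-n:])    # moving average of n most recent
    return moving_avg < baseline - k * baseline_sd
```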
All of the time intervals, such as the frequency of collecting, storing, and sending biometric data to the back end server 116, are configurable on the back end server 116 in some embodiments. The number of data points that will be aggregated to evaluate the above condition is also configurable. The mathematical condition used above is a preliminary hypothesis, subject to change based on the results gathered over time.
At the start of a patient session at interface 102, an operator collects information in one or more of the following ways: a) the operator asks the patient questions and enters the information manually into the Physician Control Panel or Administrative Control Panel application on the local computer 112 or remote computer 116; b) the front-end application or remote computer 116 retrieves information electronically via an API connection to the office practice management system or electronic medical records database; or c) a combination of both methods is used. The information captured is demographic information such as name, age, gender, ethnicity, etc., condition-related information such as disease state, success/failure of prior attempts at behavior change, etc., or both. This demographic and condition-related information is sent to the back end server 116 where it is persistently stored.
As a simulation experience commences, and during the simulation experience, data is collected in several ways. Log files are collected on the local computer 112 that record patient actions inside the simulation experience, such as navigational choices and which tagged virtual objects were examined (i.e., looked at) or interacted with by the user. These log files are sent to the back end server 116 for storage.
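A minimal sketch of such a log record follows; the field names and event types are hypothetical placeholders, not a format specified in this disclosure.

```python
import json
import time

def log_event(path: str, event_type: str, detail: dict) -> None:
    """Append one patient action (e.g., a navigational choice, or a gaze
    fixation on a tagged virtual object) to a session log file that is
    later forwarded to the back end server for storage."""
    record = {"timestamp": time.time(), "type": event_type, **detail}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the user examined a tagged object for 2.3 seconds.
# log_event("session.log", "gaze", {"object": "cigarette_pack", "duration_s": 2.3})
```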
The patient is also asked questions while inside the simulation experience, and responses to these questions are recorded. Responses may be captured by way of digital interfaces inside the simulation enabling answers to be chosen (i.e., multiple choice), or by way of voice recording from a microphone that is part of the VR head mounted display or worn on the person of the patient.
Biometric values are captured during the experience via one or more sensors on sensor array 106 and are used as indicators of, for example, physiological or psychological arousal or relaxation.
All three of these types of data are captured and stored continually. Patient success at achieving desired behavior changes is evaluated by asking patients about their success and readiness to change inside the simulation experience, and also by follow-up outside of the simulation experience. All data collected about patient success is recorded in the same persistent data store as the other patient data.
The system then utilizes a variety of statistical learning and analytical techniques to evaluate which simulation experiences for which types of patients (types being indicated through analysis of demographic data) have the best outcomes in terms of desired behavior changes. The techniques utilized include, but are not limited to: logistic regression, linear regression, linear discriminant analysis, K-nearest neighbors classification, decision trees, bagging, random forests, boosting, and support vector machines.
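As an illustration of one of these techniques, a minimal scikit-learn sketch follows. The feature encoding and toy rows are purely hypothetical examples, not data from this disclosure.

```python
# Requires scikit-learn (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Hypothetical feature rows: [age, gender_code, prior_attempts, experience_id]
X = [[52, 0, 1, 3], [34, 1, 0, 1], [61, 0, 2, 3], [45, 1, 1, 2]]
y = [1, 0, 1, 0]  # 1 = desired behavior change reported at follow-up

model = LogisticRegression().fit(X, y)
# Predicted probability of a desired outcome for a new patient/experience pair:
print(model.predict_proba([[40, 1, 1, 3]])[0][1])
```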
Referring further to
An example of this conditional logic may look like (but is not limited to):
IF condition A is true: State 1 should be followed by State 2.
OTHERWISE: State 1 should be followed by State 3.
The conditional logic could be dependent on multiple factors such as the actions the user has taken in the current VRX session or in any previous VRX sessions, demographic data about the user, or predictive models using biometric, demographic, and user interaction data. Thus, the system has the capability to provide personalized content to different users based on complex analysis.
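Expressed as a minimal Python sketch (the state names follow the pseudocode above; the context keys are hypothetical):

```python
def next_state(current_state: str, ctx: dict) -> str:
    """Evaluate the conditional transition logic for the current state.
    ctx bundles the factors named above: actions from the current and
    prior VRX sessions, demographic data, and predictive model outputs."""
    if current_state == "State 1":
        # IF condition A is true: State 1 is followed by State 2;
        # OTHERWISE: State 1 is followed by State 3.
        return "State 2" if ctx.get("condition_a") else "State 3"
    return "End"
```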
After processing the actions of each state, the VRX makes a request via API to the DXE software 116b on remote computer or server 116 to get the next state it should transition to and the content it should present. This continues until the VRX is instructed by the DXE software 116b that the last state has been reached and to exit the program.
The workflow is defined for all possible instructions that are available at any time during any session. An instruction describes what should happen during the session, including, but not limited to, displaying content. In one embodiment, the front-end application (VRX) makes a request to the DXE 116b for instructions that the VRX needs to process. The VRX repeatedly makes requests to the DXE 116b for new instructions as the VRX finishes processing the instructions already delivered from the DXE 116b. The instructions are conditional and are evaluated by an in-house rules engine which is part of the DXE 116b. The rules engine is defined using various technologies, including, but not limited to, SQL statements, stored procedures, functions, and web service methods. The conditions can be evaluated on any data in the system (biometrics, user input, demographic information, etc.).
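The request loop described above might be sketched as follows; the `dxe` and `vrx` objects and their methods are hypothetical stand-ins for the API request and the front-end handler, not interfaces defined by this disclosure.

```python
import time

def run_session(dxe, vrx) -> None:
    """VRX side of the loop: process the instructions already delivered
    from the DXE, then request new ones, until the last state has been
    reached and the VRX is instructed to exit the program."""
    while True:
        for instruction in dxe.get_instructions(vrx.session_id):
            if instruction["action"] == "exit":
                return                    # last state reached; exit the program
            vrx.process(instruction)      # e.g., display the described content
        time.sleep(0.5)                   # request again after processing finishes
```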
An exemplary embodiment of the Dynamic Multi-Sensory Simulation System includes a user interface 102, a sensor array 106, and a software platform 110. Information is presented to the user via the user interface 102, the user's reaction to the information is recorded by the sensor array 106, and the software platform determines subsequent information to present to the user based on the user's reaction. The system 100 is operable to present a therapy session to the user based on inputs recorded from the user. A therapy session may consist of a sequence of modules.
As seen in
The various modules include content of the types shown in
In one exemplary embodiment, a session for smoking cessation is provided. The session begins with an Avatar welcoming the user and continues with walking the user through numerous pieces of content as well as gathering data. Potentially, a session could be any combination of educational videos, audio tracks, animations, or mindfulness exercises. In this exemplary embodiment of smoking cessation, the program includes ten modules, structured as five knowledge modules and five mindfulness modules which are delivered alternately. A knowledge module typically consists of one or more of the following sections: (1) motivational interviewing (e.g., why the user smokes, why the user wants to quit smoking, etc.), (2) educational videos (e.g., harmful chemicals in cigarette smoke, the effect of smoking on different parts of the body, etc.), and (3) animations (e.g., a short animated story about how quitting smoking can impact the user's life). A mindfulness module typically consists of the user selecting the virtual location (e.g., a beach in the Maldives or open green fields in Germany) and their guide (e.g., a male or female guide) for mindfulness, followed by guided audio tracks. A module typically ends by describing what the user can expect in the upcoming modules as well as gathering user experience data such as Net Promoter Score.
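The alternating module structure can be illustrated with a short sketch; the module names are hypothetical placeholders.

```python
# Five knowledge modules and five mindfulness modules, delivered alternately.
knowledge = [f"knowledge_{i}" for i in range(1, 6)]
mindfulness = [f"mindfulness_{i}" for i in range(1, 6)]
session_modules = [m for pair in zip(knowledge, mindfulness) for m in pair]
# ['knowledge_1', 'mindfulness_1', 'knowledge_2', 'mindfulness_2', ...]
```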
An exemplary embodiment of a module in which a physiological state triggers specific content delivery is provided. The mindfulness module in the session begins by trying to make the user calm and comfortable by lowering the user's heart rate. The lowering of the user's heart rate may be achieved by using a specific set of audio scripts. As long as the desired heart rate drop is not achieved, audio scripts from this set are repeatedly delivered to the user.
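A minimal sketch of this gating loop follows; the sensor and player interfaces and the 5 bpm target drop are assumptions for illustration, not parameters specified in this disclosure.

```python
def run_heart_rate_gated_audio(sensor, player, scripts, target_drop_bpm=5.0):
    """Repeatedly deliver audio scripts from the set until the desired
    drop from the user's baseline heart rate is achieved."""
    baseline = sensor.heart_rate()               # captured as the module begins
    i = 0
    while sensor.heart_rate() > baseline - target_drop_bpm:
        player.play(scripts[i % len(scripts)])   # cycle through the script set
        i += 1
```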
An exemplary embodiment of a module in which user interactions with the system trigger specific content delivery is provided. Prior to launching the mindfulness module, a user is asked to choose the virtual location where they would like to practice mindfulness. Based on this choice, the appropriate 360° video or 3D environment is delivered to the user.
In other embodiments, the system may further provide for various programs including content tailored for effecting specific behavioral changes. The system can be used for treatment of any suitable undesirable behavior or condition. The system may implement the following programs for: smoking, obesity, diabetes, pain management, lower-back pain recovery, pain neuroscience education, medication adherence, surgical peri-operative program, addiction recovery, COPD management, hypertension management, and cognitive behavioral therapy-based interventions for anxiety, obsessive compulsive disorder, post-traumatic stress disorder, and phobias.
Numerous other configurations for executing the disclosed system and method may be achieved, and the illustrations and description provided herein provide an exemplary embodiment. The overall system is operable to utilize biometric data in combination with user feedback during a real-time simulation session to dynamically select behavior-change content optimized for the user, and the system further assesses the biometric data in combination with the user feedback during the session to determine the optimal time to present the dynamically-selected content to the user for the greatest effect. The dynamically-selected content will vary from user to user, and by utilizing a virtual-reality or augmented-reality interactive user interface, it is possible to present the dynamically-selected content at an optimal time within a session in a profound and engaging way to better influence behavior and lifestyle decisions in users.
Included in
Thus, although there have been described particular embodiments of the present invention of a new and useful DYNAMIC MULTI-SENSORY SIMULATION SYSTEM FOR EFFECTING BEHAVIOR CHANGE, it is not intended that such references be construed as limitations upon the scope of this invention.
This application is a continuation of U.S. patent application Ser. No. 15/912,200, entitled "Dynamic Multi-sensory Simulation System for Effecting Behavioral Change," filed Mar. 5, 2018, which is pending, and which claims priority to U.S. Provisional Patent Application No. 62/466,709, filed Mar. 3, 2017; all of which are incorporated by reference in their entirety.
| Number | Date | Country |
|---|---|---|
| 62/466,709 | Mar. 2017 | US |

| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | 15/912,200 | Mar. 2018 | US |
| Child | 17/443,897 | | US |