All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety, as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference in its entirety.
This disclosure relates generally to the field of post health event care, and more specifically to the field of post-acute stroke management. Described herein are systems and methods of stroke care management.
Stroke is the third most common cause of death in the United States and the most disabling neurologic disorder. Approximately 800,000 patients suffer from stroke annually, and there are about 6 to 8 million stroke survivors. Stroke is a medical emergency characterized by the acute onset of a neurological deficit that persists for at least 24 hours, reflecting focal involvement of the central nervous system, and is the result of a disturbance of the cerebral circulation. Its incidence increases with age. Risk factors for stroke include systolic or diastolic hypertension, hypercholesterolemia, cigarette smoking, heavy alcohol consumption, diabetes, and oral contraceptive use.
Hemorrhagic stroke accounts for about 13% of the annual stroke population. Hemorrhagic stroke often occurs due to rupture of an aneurysm or arteriovenous malformation bleeding into the brain tissue, resulting in cerebral infarction. The remaining 87% of strokes are ischemic and are caused by occluded vessels that deprive the brain of oxygen-carrying blood. Ischemic strokes are often caused by emboli or pieces of thrombotic tissue that have dislodged from other body sites, or from the cerebral vessels themselves, and occlude the narrower cerebral arteries more distally. When a patient presents with neurological symptoms and signs that resolve completely within 1 hour, the term transient ischemic attack (TIA) is used. Etiologically, TIA and stroke share the same pathophysiologic mechanisms and thus represent a continuum based on persistence of symptoms and extent of ischemic insult.
Notwithstanding the foregoing, once the patient is discharged from the hospital, the patient's road to recovery is long and arduous, especially since existing therapies are incomplete at treating or reversing the effects of the stroke. Many disabilities or effects of the stroke may present early or may not present until days or weeks or months later. As such, patients and their caregivers and/or care partners require many resources, tools, and support in adapting to their current state, recovering after the stroke event, and connecting with their doctor, care network, and other survivors. Accordingly, there exists a need for improved stroke care management after a stroke event.
Disclosed herein is a method for generating and updating one or more user interfaces of a mobile software application operating on a smart phone of a stroke survivor having an impairment. In some implementations, the method comprises: tracking, from a plurality of network-based non-transitory storage devices, a first state of the stroke survivor post discharge from a hospital; generating a first user interface configured to be displayed on the smart phone, wherein the first user interface is dynamically updated based on the tracked first state of the stroke survivor; including a first ribbon in the first user interface, said first ribbon corresponding to a first learning content enabled on the first user interface based on the first tracked state; determining a plurality of first learning content user interfaces to split the learning content based on the impairment associated with the stroke survivor and a first property of the first learning content; and including a second ribbon in the first user interface, said second ribbon corresponding to a second learning content, wherein the second ribbon is disabled until the plurality of first learning content user interfaces are viewed by the stroke survivor.
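The ribbon gating and content splitting described above can be sketched in a few lines. The following Python is an illustrative sketch only; names such as `Ribbon`, `split_learning_content`, and the impairment labels are assumptions for exposition and are not part of the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Ribbon:
    """A ribbon on the first user interface tied to one learning content."""
    content_id: str
    enabled: bool = False

def split_learning_content(pages: list, impairment: str, max_per_screen: int) -> list:
    """Split learning content into per-screen chunks; certain impairments
    (hypothetical labels here) may warrant fewer items per screen."""
    if impairment in ("aphasia", "visual"):
        max_per_screen = max(1, max_per_screen // 2)
    return [pages[i:i + max_per_screen] for i in range(0, len(pages), max_per_screen)]

def build_interface(first_content_viewed: bool) -> list:
    """The second ribbon stays disabled until the first content's screens are viewed."""
    first = Ribbon("learning-1", enabled=True)
    second = Ribbon("learning-2", enabled=first_content_viewed)
    return [first, second]
```

For example, six content pages split for an aphasia impairment with a nominal four items per screen would yield three two-page screens, and the second ribbon would remain disabled until all three are viewed.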
In some implementations, the method further includes storing one or more metrics corresponding to the stroke survivor after one or more first learning content user interfaces are viewed by the stroke survivor, wherein the one or more metrics are stored in relation with a first electronic identification corresponding to the first learning content. In some implementations, the method further includes generating an analysis based on the stored one or more metrics corresponding to a plurality of stroke survivors for the first learning content. In some implementations, the method further includes changing a timing of delivery of the first learning content to the mobile software application based on the generated analysis. In some implementations, the one or more metrics comprise a compliance measurement. In some implementations, the one or more metrics comprise a fall detection event. In some implementations, the one or more metrics comprise an indication of infection.
In some implementations, a first electronic identification is associated with the learning content. In some implementations, the method further includes removing the learning content from the mobile software application based on the first electronic identification. In some implementations, the method further includes providing a dashboard user interface, said dashboard interface comprising a plurality of tabs; and including a notification indicator adjacent to one of the plurality of tabs, said notification indicator corresponding to an activity detected from the mobile software application. In some implementations, the method further includes updating the first learning content based on one or more scores measured from an assessment. In some implementations, the method further includes generating a suggested list of questions based on the first state prior to an appointment with a health care team member. In some implementations, the method further includes providing an ability to digitally record an answer from the appointment. In some implementations, the method further includes enabling for a plurality of users, from a web interface, an ability to add ideas for a plurality of learning content, write a second learning content, review the second learning content, and send the second learning content to the mobile software application, without requiring the plurality of users to download any local copies corresponding to the learning content and without requiring opening of separate software applications.
Disclosed herein is a system for generating and updating one or more user interfaces of a mobile software application operating on a smart phone of a stroke survivor having an impairment. In some implementations, the system comprises one or more hardware processors configured to: track, from a plurality of network-based non-transitory storage devices, a first state of the stroke survivor post discharge from a hospital; generate a first user interface configured to be displayed on the smart phone, wherein the first user interface is dynamically updated based on the tracked first state of the stroke survivor; include a first ribbon in the first user interface, said first ribbon corresponding to a first learning content enabled on the first user interface based on the first tracked state; determine a plurality of first learning content user interfaces to split the learning content based on the impairment associated with the stroke survivor and a first property of the first learning content; and include a second ribbon in the first user interface, said second ribbon corresponding to a second learning content, wherein the second ribbon is disabled until the plurality of first learning content user interfaces are viewed by the stroke survivor.
In some implementations, the one or more hardware processors are further configured to store one or more metrics corresponding to the stroke survivor after one or more first learning content user interfaces are viewed by the stroke survivor, wherein the one or more metrics are stored in relation with a first electronic identification corresponding to the first learning content. In some implementations, the one or more hardware processors are further configured to generate an analysis based on the stored one or more metrics corresponding to a plurality of stroke survivors for the first learning content. In some implementations, the one or more hardware processors are further configured to change a timing of delivery of the first learning content to the mobile software application based on the generated analysis. In some implementations, the one or more metrics comprise a compliance measurement. In some implementations, the one or more hardware processors are further configured to enable for a plurality of users, from a web interface, an ability to add ideas for a plurality of learning content, write a second learning content, review the second learning content, and send the second learning content to the mobile software application, without requiring the plurality of users to download any local copies corresponding to the learning content and without requiring opening of separate software applications.
For purposes of summarizing the disclosure, certain aspects, advantages, and novel features are discussed herein. It is to be understood that not necessarily all such aspects, advantages, or features will be embodied in any particular implementation of the disclosure, and an artisan would recognize from the disclosure herein a myriad of combinations of such aspects, advantages, or features.
The foregoing is a summary, and thus, necessarily limited in detail. The above-mentioned aspects, as well as other aspects, features, and advantages of the present technology are described below in connection with various implementations, with reference made to the accompanying drawings.
The illustrated implementations are merely examples and are not intended to limit the disclosure. The schematics are drawn to illustrate features and concepts and are not necessarily drawn to scale.
The road to recovery following a health event can be challenging and demanding for any party involved. The systems and methods described herein can offer a path towards recovery in which a user can progress through the stages of recovery. With the assistance of the health application platform, the systems and methods described herein can assist patients and care partners during the recovery process. Following a health event, such as a stroke, patients can be left unsure of their quality of life and may not be aware of the resources available to them. Further, caretakers wanting to assist patients may not have the necessary tools or knowledge readily available to effectively assist during the recovery period. Care partners may have to manage hundreds of survivors of a health event. The computing systems and methods described herein can improve management and treatment of survivors. In some implementations, the management and treatment is improved through improvements in user interfaces that address challenges in communication with a stroke survivor.
The healthcare system 170 can be implemented in computer hardware and/or software. The healthcare system 170 can execute on one or more remote computing devices 110, such as one or more physical server computers. In implementations where the healthcare system 170 is implemented on multiple servers, these servers can be co-located or can be geographically separate (such as in separate data centers). Additionally or alternatively, the healthcare system 170 can be implemented on one or more virtual machines that execute on a physical server or group of servers. Further, the healthcare system 170 can be hosted in a cloud computing environment, such as in Amazon Web Services (AWS) Elastic Compute Cloud or the Microsoft® Windows® Azure Platform.
The user computing device 120 can remotely access some or all of the healthcare system 170 on these servers or through the network 115. The user computing device 120 can include thick or thin client software that can access the healthcare system 170 on the one or more servers through the network 115. The network 115 can be a local area network (LAN), a wide area network (WAN), such as the Internet, combinations of the same, or the like. For example, the network 115 can include any combination of associated computer hardware, switches, etc. (for example, an organization's private intranet, the public Internet, and/or a combination of the same). In some implementations, the user software on the user computing device 120 is a browser software or other application. The user computing device can access the healthcare system 170 through the browser software. In certain implementations, some or all of the healthcare system 170's functionality can be implemented on the user computing device.
The user computing device 120 can receive user input 160 (e.g., via one or more user input elements via a graphical user interface) from a user, for example, biographic data, symptom data (e.g., based on self-reporting, fatigue, mood, pain, diet, spasticity, etc.), emotion data, questions for a healthcare provider, requests for help to caregivers, patient-initiated assessment (e.g., general, lifestyle, home safety, fall risk, etc.), and the like. Further, the user computing device 120 may receive data, software updates, healthcare provider information (e.g., recommendations, answers to questions, etc.) from a remote computing device 110, for example a server or remote workstation.
The user computing device 120 may be communicatively coupled, for example, directly or indirectly, to one or more devices that can include a third-party device 130, a wearable device 140, or an electronic health record or medical record 150 (EHR/EMR) such that the user computing device 120 receives data or information from any one or more of these sources or devices 130, 140, 150. The data or information may include, but not be limited to, activity tracking data (e.g., steps, minutes of activity, etc.), heart rate data, breathing rate (e.g., indicator of stress level), blood pressure (e.g., from a communicatively coupled cuff, manually input, EHR/EMR), blood oxygen saturation, blood sugar (e.g., from a communicatively coupled glucose monitor, manually input, EHR/EMR), and/or clinician-generated assessment data (e.g., PROMIS questionnaires, Fugl Meyer, ARAT, PHQ2/9, etc.) stored in an EHR/EMR or completed directly in the application. Some possible wearables that are configurable with the present systems and methods are described in related International Patent Application PCT/US2020/055604, filed Oct. 14, 2020, the contents of which are herein incorporated by reference in their entirety. Other possible wearables that may be configured to work with the present systems and methods include devices, systems, or wearables available from Apple®, Fitbit®, Garmin®, Samsung®, and/or Beats®, or pedometers, blood pressure cuffs, SpO2 monitors, heart rate monitors, and/or scales.
The permissions may be set by an administrator, for example a healthcare provider, a primary caregiver, or the patient. The application 180 may be downloaded from a remote computing device 110 onto each user's personal device or a remote workstation, a mobile device, a wearable device, a laptop, desktop, etc. to which the user has access. The application 180 may have various levels of permissions such that the patient 192 has access to all information and resources. A caregiver 194 may have access to an overall view of the patient's health and/or wellbeing, a healthcare provider interface (e.g., to ask questions of or interact with the healthcare provider, etc.), caregiver resources, etc. An optional user, such as a healthcare provider 190, may have access to all the patient's health information (e.g., historical and in real-time), user input at least related to symptoms and emotions, etc. An optional user, such as a navigator 198, may have access to patient onboarding materials and biographic data to help a patient and their care team to begin to use the application and related materials. The system 200 may have any number of users 196 with varying levels of permission or access to system features and components. For example, some users may only receive alerts from the system 200, for example in the event that the patient has a recurrence or has reached a milestone. Some users may only receive task requests from the system 200, for example to help the patient manage day to day activities and tasks, to transport the patient to and from appointments, to provide meals to the patient, to monitor the patient, etc.
Still referring to
In some implementations, a user computing device 120 further includes a power supply 126. Power supply 126 may include a rechargeable battery (e.g., Lithium-ion battery), a disposable battery, solar energy-based source, kinetic energy-based source, or other renewable energy source. Power supply 126 may provide energy for one or more components in user computing device 120, for example one or more sensors, hardware processor 118, memory 122, etc.
In some implementations, user computing device 120 includes an antenna 128 communicatively coupled to the hardware processor 118. Antenna 128 may receive and demodulate data over a communication network and/or prepare and transmit data over a communication network. Antenna 128 may act as a receiver, transmitter, or both (i.e., transceiver). Alternatively, or in addition to antenna 128, a data bus (e.g., serial or parallel) may be included to receive data from, or send data to, one or more sensors from memory and/or hardware processor via a wired connection.
In some implementations, user computing device 120 includes a display 112 communicatively coupled to the hardware processor 118. Display 112 may present one or more GUIs based on user input; inputs from one or more devices, as shown in
In some implementations, content presented to a user includes a set of curated, personalized learnings based on the patient's specific health event details available at discharge, along with a more expanded (and, in certain implementations, less personalized) library of learning materials and resources that patients can access later on. Some of the information may be linked to or may prompt the user to execute certain actions; for example, an article about the importance of medication management may then prompt the user to set up a medication list for tracking over time. Various learning materials may also cover understanding one or more complications experienced at the hospital (e.g., aspiration pneumonia, brain swelling, difficulty swallowing, elevated pressure, reperfusion hemorrhage, salt imbalance, vessel spasm, etc.).
In some implementations, user computing device 120 optionally includes speaker 116 and/or microphone 132. Such components may provide greater accessibility to systems for post-health event care management. For example, depending on a disability of the patient, a GUI presented on display 112 may be updated to enhance accessibility for the user. In one embodiment, a patient that experienced a stroke in the brainstem region may suffer from total or partial alterations in hearing or vision, or a patient suffering from MS may experience vision loss. As such, a GUI presented on display 112 may be updated at any time so that the user may interact with the GUI primarily through audible means (i.e., utilizing speaker 116 and/or microphone 132) in situations of vision loss or through visual means (i.e., relying more on text or tactile based inputs and outputs) in situations of hearing loss. In another embodiment, a patient that experienced a stroke in the cerebral cortex or a patient suffering from amyotrophic lateral sclerosis may have difficulty with verbal expression, auditory comprehension, or may present with dysarthria. As such, a GUI presented on display 112 may be updated to present information using text or tactile based inputs and outputs. In still another embodiment, a patient that experienced a stroke in the central nervous system (e.g., spinothalamic tract, corticospinal tract, dorsal column) may suffer from hemiplegia such that tactile interaction with the GUI may be difficult. As such, the GUI presented by display 112 may be updated such that the user primarily interacts with the GUI via audible (e.g., using microphone 132, speaker 116, etc.) and/or visual means. Stroke location and/or type, symptoms, disabilities, etc. may be received from an EHR/EMR, wearable, remote computing device, etc. such that a hardware processor of the system receives these inputs, determines how information should be displayed/output to the user based on the received information (and/or any other information in the system), and updates a GUI presented by the display based on said determining.
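The modality selection described above can be illustrated with a short sketch. This Python is illustrative only; the impairment labels and modality names are assumptions, not part of the disclosed implementation.

```python
def select_output_modality(impairments: set) -> dict:
    """Choose which GUI channels to emphasize given detected impairments.

    Hypothetical labels: 'vision_loss', 'hearing_loss', 'hemiplegia'.
    """
    modality = {"visual": True, "audio": True, "tactile": True}
    if "vision_loss" in impairments:
        modality["visual"] = False   # rely on speaker 116 / microphone 132
    if "hearing_loss" in impairments:
        modality["audio"] = False    # rely on text or tactile I/O
    if "hemiplegia" in impairments:
        modality["tactile"] = False  # tactile interaction may be difficult
    # Keep at least one channel open as a fallback.
    if not any(modality.values()):
        modality["audio"] = True
    return modality
```

For instance, a patient flagged with vision loss would receive a GUI that disables the visual channel and keeps audio and tactile channels active.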
In some implementations, speaker 116 and/or microphone 132 is further, or alternatively, used to determine a speech quality of the user. For example, a speaker 116 may prompt the user to speak and/or a user may speak into the microphone, such that a speech of the user may be analyzed by the processor, and an indication of speech quality may be output. For example, a speech quality indicator may be based on, but not be limited to, slurred speech, disorganized speech, dysarthria, etc. A quality of speech over time of the patient may be input into the system or received by the system to assess a recovery or health quality of the patient over time, as shown in
In some implementations, user computing device 120 optionally includes an image sensor 134. Image sensor 134 may be used to image a body portion of the user, for example a face to detect one or more emotions, facial symptoms (e.g., eyelid drooping, facial muscle weakness, etc.); one or more limbs to detect, for example, muscle spasticity, flaccidity, a gait or balance of the patient, etc. Such information may be input or received by the system to assess a recovery or health quality of the patient over time, as shown in
In some implementations, user computing device 120 optionally includes a location sensor 136, for example a global positioning device. Location sensor 136 may be configured to determine a location of a patient in the patient's home, for example to monitor a patient when he/she is alone; to monitor whether the patient is immobile during periods of time in which the patient is typically mobile (possibly indicating a health event like falling or stroke); to monitor a patient when he/she is away from home; etc. Location sensor 136 may be used independently or together with accelerometer 114 to determine whether the patient is exhibiting normal activity or activity that may be indicative of a health event. In some implementations, location sensor 136 additionally, or alternatively, tracks a frequency with which a patient leaves her home, how frequently she is stopping during her route, how long she is away from her home, how far (distance) she goes from her home, elevation changes, how many places she goes, etc. as leading or lagging indicators of her wellbeing, stress, depression, and/or the like.
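One distance-from-home indicator of the kind described above can be computed from GPS fixes with the standard haversine formula. This is an illustrative sketch; the function names and the choice of metric are assumptions, not the disclosed implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def max_distance_from_home(home, fixes):
    """Farthest excursion (meters) from home across a set of GPS fixes --
    one possible leading or lagging indicator of mobility and wellbeing."""
    return max((haversine_m(home[0], home[1], la, lo) for la, lo in fixes), default=0.0)
```

A day's worth of fixes could be reduced to a single scalar this way and trended over weeks to flag declining mobility.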
In some implementations, accelerometer 114 is further used to assess one or more symptoms, disabilities, or a recovery status of the user. For example, accelerometer may be used to determine a gait, balance, muscle weakness, motor activity quality, muscle spasticity, muscle flaccidity, vertigo, coordination, activity, etc. of the user. These accelerometer data may also, or alternatively, be used as leading or lagging indicators of her wellbeing, stress, depression, etc.
In some implementations, user computing device 120 optionally includes haptics 138, for example piezo electronics, eccentric rotating mass motors, linear resonant actuators, etc. In some implementations, a GUI presented by display 112 is updated or personalized to provide haptic feedback provided by haptics 138. For example, in instances where user has visual and/or auditory impairments, information may be communicated to the user, at least in part, via haptic feedback.
The healthcare system 170 can optionally consider multiple factors in determining how to present data and/or information to the users. More specifically, if the system, which includes the user computing device 120 and the system 170, determines that there is an elevated probability of at least one sensory impairment, the system 170 may be configured to modify the output format of data/information accordingly. In some implementations, the system 170 determines the elevated probability of at least one sensory impairment from an input by a user, a caregiver, medical records, and the like, from which data may be retrieved. User preferences may also be used to determine how to present data/information to users. For example, a user may prefer a certain type or style or font of visual presentation. User preferences, such as the visual presentation, are considered by the healthcare system 170, and these preferences may be modified over time by either or both the user and the system 170. Another consideration that may alter how data/information is presented to users is the computing device or devices they are utilizing. For example, different devices may have different displays, and the type and size of the particular display is an important factor in how data/information is presented to users.
The healthcare system 170 analyzes and processes the monitored data in S2020. Analyzing and processing may involve machine learning systems including, but not limited to, classification, regression, and/or clustering techniques; linear algorithms; regularization techniques, such as lasso, ridge, and elastic net; and approaches such as random forest, decision trees, nearest neighbors, support vector machines, gradient boosting, neural networks, deep learning techniques, etc. The monitored data may be provided in various formats, including, but not limited to: numerical, text, natural language, video, image, and other formats, and may also include both structured and unstructured data. The healthcare system 170 receives and processes the monitored data regardless of the different formats. The healthcare system 170 analyzes the monitored data in order to determine the probability that the patient experienced, or is currently exhibiting, any impairment. More specifically, the analysis assesses the data using one or more of the following, without limitation: machine learning algorithms, various sensory baselines, or trends of recorded data associated with the patient. Based on the analysis, the healthcare system 170 is configured to assign a probability of impairment, such as a visual, hearing, and/or a speech impairment. It will be appreciated that the analysis may use patient-specific data, such as their baselines, trends, and previously received data, such as shown in
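One simple way to assign an impairment probability from baseline deviations, as described above, is a weighted-deviation score squashed to the unit interval. This sketch is illustrative only: the weights, metric names, and the tanh squashing are assumptions, not clinically derived values or the disclosed algorithm.

```python
import math

def impairment_probability(observed: dict, baseline: dict, weights: dict) -> float:
    """Weighted absolute deviation of observed metrics from the patient's
    baseline, mapped to [0, 1). Zero deviation yields probability 0; larger
    deviations push the probability toward 1."""
    score = sum(
        weights.get(k, 0.0) * abs(observed[k] - baseline.get(k, observed[k]))
        for k in observed
    )
    return math.tanh(score)
```

For example, a hypothetical speech-rate metric that matches the baseline would yield probability 0.0, while a large drop from baseline would push the probability toward 1, where it could be compared against a predetermined threshold.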
Several output presentation modifications may be defined in the event the probability exceeds the predetermined threshold. It will be appreciated that modifications to the output presentation may occur at any time over the monitoring period. For example, in response to an elevated probability of a visual impairment, the healthcare system 170 may modify the output presentation of visual data/information text and/or graphics by one or more of the following: provide corresponding or enhanced auditory output to the patient; or modify the visual output presentation, such as increasing a font size of the text or changing the brightness of the text and graphics. As another example, in response to an elevated probability of a hearing impairment, the healthcare system 170 may modify the output presentation of auditory data/information by one or more of the following: provide corresponding visual texts and graphics; increase a volume level of transmitted auditory sounds provided to the patient; modify the frequency level to a higher or lower frequency; or transmit a command to the user computing and/or monitoring devices to raise the volume of the speaker. Further, in response to an elevated probability of a speech impediment, which would reduce an accuracy of a speech recognition interface provided by the healthcare system 170, the healthcare system 170 may modify the output presentation by one or more of the following: solicit input responses from the patient via text, such as through a keyboard or other user-selectable display options, such as response buttons or a menu; or present a text chat box. In further examples, in response to an elevated probability of visual, auditory, speech, and/or other impairment in the patient, the healthcare system 170 may also automatically contact another human being, such as a healthcare provider or emergency contact.
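The threshold-gated modifications described above can be represented as a small rules table. This Python sketch is illustrative; the threshold value and modification names are assumptions chosen to mirror the examples in the text.

```python
# Hypothetical threshold; a real system would tune this per patient.
THRESHOLD = 0.7

# Modification names loosely mirror the examples given in the text.
MODIFICATIONS = {
    "visual": ["enable_audio_output", "increase_font_size", "increase_brightness"],
    "hearing": ["enable_visual_output", "raise_volume", "shift_frequency"],
    "speech": ["solicit_text_input", "show_response_buttons", "open_text_chat"],
}

def select_modifications(probabilities: dict, threshold: float = THRESHOLD) -> list:
    """Collect output-presentation modifications for every impairment whose
    probability exceeds the threshold."""
    mods = []
    for impairment, p in probabilities.items():
        if p > threshold:
            mods.extend(MODIFICATIONS.get(impairment, []))
    return mods
```

For instance, an elevated visual-impairment probability alone would trigger the auditory-output and font/brightness modifications without touching the hearing or speech rules.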
Additional factors that may also prompt modifications in the output data/information provided to users can include, for example, the type of computing device or coupled monitoring device that the user is using or has initialized. More specifically, in S2060, the healthcare system 170 receives input parameters regarding one or more computing devices. Input parameters could include computing device type and capabilities, coupled monitoring device types, such as display, camera, microphone, speaker, sensors, etc., memory size, data output format, transmission protocols, etc. The healthcare system 170 receives the input parameters at block S2040 and may modify the data/information output to correspond with the parameters of the device(s) being used. For example, the healthcare system 170 may be configured to increase font sizes for devices having a larger display or provide auditory sounds for devices having a speaker.
The healthcare system 170 may also be configured to receive user preferences regarding the type or format of desired data/information output presentation. For example, the output presentation may be configured to provide an enhanced sound output or an enhanced visual output, or both, depending on user preferences. Further, the output presentation may be configured with varying output presentations based on the time of day. For example, a user preference may specify sound outputs during periods when they are away from the display and visual outputs during periods when they do not want to be disturbed by sound. User preferences may also include muting some or all of the output presentations during periods in which users may be sleeping. In some situations, the healthcare system 170 detects an emergency when a patient may be sleeping, for example, and may be configured to output an alarm, other loud sound, and/or vibration in order to wake them. User preferences may also be configured to store emergency names and numbers (see
The healthcare system 170 may also monitor the responses users provide in order to customize the type of output. In S2050, the healthcare system 170 is configured to analyze user responses over time and modify the output presentation accordingly in order to provide output presentations that are most meaningful for every user. For example, the healthcare system 170 may provide output presentations in different visual styles, such as, different font sizes, colors, and/or placement of information. The healthcare system 170 is configured to determine the visual style that results in the most responsiveness from the user and may modify or vary the output format to optimize the presentation and responsiveness from the user. As another example, the healthcare system 170 may provide output presentations using both sound and visual information. Depending on the responsiveness from the user, the healthcare system 170 may provide an emphasis of one or both sound and visual information. For example, the healthcare system 170 may determine that a user is more responsive to provided visual information, and the output presentation for that user is modified or updated for an emphasis on visual information. Alternatively, the healthcare system 170 may determine that the user is more responsive when both visual and auditory output information is provided, and the output presentation for that user is modified or updated to provide both visual and auditory information.
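The responsiveness-driven customization in S2050 amounts to tracking per-modality response rates and emphasizing whichever presentation style elicits the most responses. The following sketch is illustrative; the class and method names are assumptions, not the disclosed implementation.

```python
from collections import defaultdict

class ResponsivenessTracker:
    """Track how often a user responds to each presentation modality."""

    def __init__(self):
        self.shown = defaultdict(int)
        self.responded = defaultdict(int)

    def record(self, modality: str, responded: bool):
        """Log one output presentation and whether the user responded."""
        self.shown[modality] += 1
        if responded:
            self.responded[modality] += 1

    def preferred_modality(self) -> str:
        """Modality with the highest response rate; future output would be
        modified to emphasize this style."""
        rates = {m: self.responded[m] / self.shown[m] for m in self.shown}
        return max(rates, key=rates.get)
```

Over time the tracker converges on the style, visual, auditory, or both, that is most meaningful for that particular user.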
In some implementations, the hardware processor, for example on a local or remote computing device 110, is configured to output information to a user, for example a care team, a patient and/or a caregiver. (see
In some implementations, the software further includes one or more tools for a user, for example a patient or caregiver. The tools may help manage disabilities of a user: for example, a bathroom finder may be included to help manage incontinence; games may be included to help improve memory; and exercises or games may help the patient develop new ways of functioning, understanding, conversing, and the like. The tools may also help manage underlying health issues, for example high blood pressure, heart conditions, or chronic kidney disease.
In general, the system 300 may include various stages, for example an onboarding stage 310; a processing stage 320; a content presentation stage 330; and an updated content stage 340. Stages 320, 330, 340 may be repeated any number of times, as shown by arrow 350 and/or various inputs at blocks S342, S344, S346. The various inputs at blocks S342, S344, S346 can be fed back into the rules and/or filters at block S322 to further adjust the GUI, update the content, and/or achieve timely delivery of content. For example, stages 320, 330, 340 can be repeated every time a new or updated input (e.g., user input, input from EHR/EMR, input from wearable, input from third-party device, etc.) is received; automatically based on a certain time interval or various criteria; manually based on user request or user selection; or based on any other criteria or parameters.
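The repetition of stages 320, 330, 340 (arrow 350) can be triggered by new inputs, elapsed-time criteria, or user request. A hedged sketch of one such trigger check, where the milestone days and trigger conditions are invented examples rather than the system's actual criteria:

```python
from datetime import date, timedelta

# Hypothetical milestone days post-event at which content is re-filtered.
MILESTONE_DAYS = (7, 30, 90)

def refresh_due(event_date: date, last_refresh: date, today: date,
                params_changed: bool = False) -> bool:
    """Decide whether stages 320-340 should repeat (arrow 350).

    A repeat is triggered when any input changed since the last pass, or
    when a recovery milestone was crossed between the last refresh and today.
    """
    if params_changed:
        return True
    for days in MILESTONE_DAYS:
        milestone = event_date + timedelta(days=days)
        if last_refresh < milestone <= today:
            return True
    return False
```

A manual user request would simply bypass this check and re-run the stages directly.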
Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the implementations described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those of ordinary skill in the art.
Turning now to onboarding stage 310, which includes one or more of: receiving one or more confidential patient specific parameters at block S312; receiving a history of a patient at block S314; and/or receiving information about one or more health events of the patient at block S316. For example, patient specific parameters may include, but not be limited to: name, date of birth, contact information (e.g., phone number, email address, etc.), biographic data, demographics (e.g., living situation, financial situation, etc.), identity, goals, and the like. Further for example, history of a patient may include, but not be limited to: underlying health conditions (e.g., high blood pressure, diabetes, etc.), social history, medical history, surgical history, family history, tests or assessments, clinical data, etc. Still further for example, health event information may include, but not be limited to: a type of health event (e.g., type of stroke, type of multiple sclerosis, etc.), a frequency of the health event (e.g., frequency of relapses, frequency of symptoms, etc.), a date of the health event, a location of treatment, a location of the health event, a location of the health event within the body (e.g., location of stroke within the brain, location of demyelination in multiple sclerosis, etc.), one or more disabilities as an outcome of the health event, one or more outcomes as a result of the health event, and the like. Information for any of blocks S312, S314, S316 may be received by the system via user input, pulled from one or more databases (e.g., EHR/EMR, profile on third party platform, etc.), or otherwise pulled from one or more devices (e.g., wearable, third party device, remote computing device, etc.).
One or more patient specific parameters may be displayable over time, for example to determine a trend or trajectory for the user. Symptom trends may be invaluable for a user to determine his/her progress.
In some implementations, in response to received symptom data, the application provides resources, a list of one or more future symptoms that may occur, future signs or symptoms to watch for, and/or on-going concerns as a result of the symptom. Table 1 below shows various symptoms and possible outcomes, results, or additional issues/symptoms for which a care team or the application can monitor.
In some implementations, as shown at block S322 of stage 320, the one or more inputs from blocks S312, S314, S316 are processed. For example, processing at block S322 may include using various rules and/or filters to determine one or more resources (e.g., videos, support groups, written materials, appointments, etc.) that the patient needs to support his/her recovery; determine a likelihood of the patient developing one or more symptoms or disabilities; determine a likelihood of partial or full recovery; determine a treatment and/or therapy plan (e.g., pharmaceuticals, physical therapy, appointments, surgeries, etc.) for the patient; determine a care plan (e.g., in conjunction with one or more caregivers) for the patient; determine a safety plan (e.g., recommended home safety adjustments, recommended work safety adjustments, recommended mobility adjustments, etc.) for the patient; etc.
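The rules/filters at block S322 can be sketched as a list of predicate-to-recommendation pairs applied to the merged onboarding inputs. The rule contents below are invented examples for illustration, not the system's actual rule set:

```python
def process_inputs(patient: dict, rules) -> list:
    """Apply simple predicate -> recommendation rules to onboarding inputs.

    `patient` merges the inputs from blocks S312, S314, and S316 into one
    dict; each rule pairs a predicate with the resources or plan items it
    recommends when the predicate matches.
    """
    recommendations = []
    for predicate, items in rules:
        if predicate(patient):
            recommendations.extend(items)
    return recommendations

# Invented example rules, keyed on hypothetical patient-record fields.
EXAMPLE_RULES = [
    (lambda p: p.get("event_type") == "ischemic stroke",
     ["stroke education video", "secondary-prevention appointment"]),
    (lambda p: "high blood pressure" in p.get("conditions", ()),
     ["blood pressure lowering plan"]),
    (lambda p: "aphasia" in p.get("disabilities", ()),
     ["speech therapy referral", "aphasia support group"]),
]
```

Likelihood estimates (of symptoms, disabilities, or recovery) would plug into the same structure as predicates producing scored outputs rather than fixed item lists.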
In some implementations, the software includes tools, action items, and/or information for creating a care plan for the patient. The care plan may include clinical action plans or non-clinical action plans. The care plan may be personalized for a patient and/or dynamically updated over time based on one or more user inputs or inputs about a user. The care plan may be focused on the first few weeks after a health event, after the initial few weeks, or for any length of time or time frame. In some implementations, a care plan includes a rehabilitation plan, for example that includes personalized exercises for the patient based on the patient's health event type and/or disability. Optionally, the care plan may include non-clinical action plans, for example, but not limited to, home improvements to increase the safety of the patient in their home (e.g., fall prevention) and/or paperwork that the patient should have on file (e.g., advanced directive, power of attorney, etc.). Optionally, the care plan may further include clinical action plans such as smoking cessation programs and/or blood pressure lowering plans. Also, the care plan may include tools to facilitate the user in navigating the healthcare system.
In some implementations, as shown at stage 330, a graphical user interface presented on a display of a user device is updated to improve accessibility of the display for the user at block S332, display patient specific content at block S334, and/or to display time appropriate content at block S336. For example, and as described elsewhere herein, the type of health event may dictate the type or types of symptoms and/or disabilities that the patient will experience at any particular point in time. Patients may experience muscle weakness or loss of function, vision changes or loss, hearing changes or loss, mental capacity changes, etc. As such, a GUI of a user device may update to accommodate these changes in abilities, symptoms, and/or disabilities so that the user can easily and effectively interact with the system. This may include switching to audio content delivery, text delivery, haptic delivery, switching a distribution or location of material on the GUI, changing a complexity of the material presented, etc. or a combination thereof. Additionally or alternatively, the GUI may update to divide and/or separate content into different distinct pages based on the user's impairment. For example, at least based on the impairment, the font, images, selectable elements, and the like can be adjusted, causing the content to be divided amongst separate sections and/or pages. Further, the user can input preferences to adjust GUI parameters to further adjust the presentation of the content. In some implementations, the symptom or disability may impact the patient asymmetrically, such that the GUI is updated to accommodate this asymmetry. For example, when a patient experiences vision loss or motor function loss on a left side, content and/or inputs may be positioned toward a right side of the GUI.
Additionally or alternatively, a GUI may be updated to have enhanced speech recognition capabilities, such as when a patient is experiencing altered speech or is having difficulty articulating well. Further, a GUI may be updated to increase or reduce the complexity of the content that is delivered based on a perceived cognitive capacity of the user. For example, patients who are severely cognitively impaired as a result of their health event may receive more basic content, while patients who are more severely impaired in motor function, but not cognitively, may receive more complex content.
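The accessibility adjustments at block S332 can be sketched as a mapping from assessed impairments to layout changes. The impairment labels, layout keys, and values below are illustrative assumptions, not the actual GUI schema:

```python
def adapt_gui(impairments: set, base_layout: dict) -> dict:
    """Return an updated layout dict reflecting the user's impairments."""
    layout = dict(base_layout)
    if "vision loss" in impairments:
        layout["font_scale"] = 1.5          # larger text
        layout["audio_delivery"] = True     # shift toward audio content
    if "hearing loss" in impairments:
        layout["captions"] = True
    if "left-side deficit" in impairments:
        layout["content_anchor"] = "right"  # position content away from the affected side
    if "right-side deficit" in impairments:
        layout["content_anchor"] = "left"
    if "motor impairment" in impairments:
        layout["target_size"] = "large"     # larger tap targets
        layout["speech_input"] = True       # enhanced speech recognition input
    if "cognitive impairment" in impairments:
        layout["content_complexity"] = "basic"
    return layout
```

For an asymmetric deficit, for instance, anchoring content toward the unaffected visual field follows directly from the side-specific branches above.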
In some implementations, as shown at block S334, content in one or more databases is filtered and presented on a GUI to the user based on patient specific information. For example, the content may be filtered based on a type of health event; type of symptoms and/or disabilities currently being experienced by the patient or expected to be experienced by the patient at a future time point; type of treatment or therapy prescribed for the patient; based on patient reported symptoms, emotions, etc.; based on a history of a patient; based on a support network of the patient; etc.
In some implementations, as shown at block S336, content is selected and/or displayed in a GUI to the user based on time appropriateness. For example, time appropriateness may be based on at least an amount of elapsed time from the health event; a projected health trajectory based on a severity of the health event; a prescribed treatment or therapy; a history of the patient; and/or symptoms and/or disabilities experienced by the patient.
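Blocks S334 and S336 together amount to filtering a content library by patient specifics and by time appropriateness. A minimal sketch, where the item fields (`event_types`, `day_window`) and example library are invented assumptions about the content schema:

```python
def select_content(library: list, patient: dict, days_since_event: int) -> list:
    """Filter a content library by patient specifics and elapsed time."""
    selected = []
    for item in library:
        # Patient-specific filter: skip content aimed at other event types.
        if item["event_types"] and patient["event_type"] not in item["event_types"]:
            continue
        # Time-appropriateness filter: only show content within its window
        # of days post-event.
        start, end = item["day_window"]
        if not (start <= days_since_event <= end):
            continue
        selected.append(item["title"])
    return selected

# Invented example library; empty event_types means "applies to all events".
EXAMPLE_LIBRARY = [
    {"title": "What happened to me?", "event_types": (), "day_window": (0, 14)},
    {"title": "Preventing a second ischemic stroke",
     "event_types": ("ischemic stroke",), "day_window": (7, 365)},
    {"title": "Returning to work", "event_types": (), "day_window": (60, 365)},
]
```

Additional filter axes named above (prescribed therapy, reported symptoms, support network, history) would be further predicates of the same shape.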
Turning to stage 340, which includes one or more of: receiving updated patient specific parameters at block S342, receiving measured patient parameters at block S344, and/or computing elapsed time since a health event at block S346. Patient specific parameters, as described in connection with block S312, may be updated over time. For example, they may be updated in response to a request or prompt to verify and/or update one or more parameters; updated at a predefined interval; updated when there is a change in any one or more of the parameters; etc. Further, additional inputs may be received by the system over time, for example, one or more measured parameters may be received by the system. For example, a step rate, blood pressure, blood oxygen saturation level, weight, activity level, heart rate, heart rate variability, etc. may be collected by one or more wearables, third party devices, etc. communicatively coupled to the system. The system may then update a GUI presented on the display, content displayed to the patient, and/or timing of content delivery. The system, at block S346, may further be configured to compute an elapsed time since the health event. Such calculated timing may be used to further filter content so that the displayed content is specific for the patient for where they are in their recovery and/or based on a current status of their symptoms and/or disabilities post the health event.
In some implementations, one or more patient specific parameters include completing a clinical self-assessment, for example a Taking Charge after Stroke (TaCAS) assessment; a Mood Assessment (Patient Health Questionnaire-2, Mental Component Summary of the Short Form 36); an Autonomy-Mastery-Purpose-Connectedness (AMP-C) Assessment; an Activation Assessment (Patient Activation Measure); a medication adherence assessment (Medication Adherence Questionnaire); the Modified Rankin Scale for Neurologic Disability (mRS; measures the degree of disability or dependence in the daily activities of people who have suffered a stroke or other causes of neurological disability); patient health questionnaires (PHQ); the PROMIS GH (Patient-Reported Outcomes Measurement Information System Global Health) Scale; or the like. For example, the application may comprise one or more of: image upload or capture capabilities, audio recording (and optionally storing) capabilities, drawing capabilities (e.g., the user can draw in the app using its touch-responsive capabilities), and/or optional or mandatory questionnaires.
In some implementations, an application for post-health event management also includes one or more questionnaires or assessments to determine a preparedness level of one or more caregivers and/or a strain or stress level of a caregiver (e.g., Modified Caregiver Strain Index).
In some implementations, a GUI is configured to request and receive notes from a user, for example user observations, accomplishments, and/or setbacks related to pain, fatigue, mood, sleep, and/or overall wellbeing.
In some implementations, the application includes discussion board functionality, for example in private and public settings.
Oftentimes, patients may be discharged from care with too much or too little information regarding their health event and the recovery process. In addition, caregivers may be assisting several patients at once in varying stages of recovery and/or have limited knowledge regarding the care of the patient. The navigator dashboard interfaces can provide a tailored experience for other users based on each patient's needs. With the relevant information in one centralized location, navigators can search for relevant information quickly using the various tabs and ribbons of the system 170. Further, navigators can monitor a patient's progress, assign educational content and assessments, assist a patient with their questions, provide guidance, and much more.
In
In
Further, the navigator GUI 1700 can be configured to display various learning lessons and the corresponding title and tasks of the learning lessons in display area 1704. The learning lessons can be personally curated by a navigator towards a patient's specific health event or level of progress in the recovery stage. In some implementations, a patient and their caregiver are assigned similar learning lessons that appear differently to each user depending on the enrollee status of the user. In some implementations, the metrics related to the learning lessons are analyzed to determine trends in the recovery process and to assist in predicting events.
The navigator can also notify any of the corresponding users when the navigator assigns a new lesson, or to complete an uncompleted lesson, using the “Notify Uncompleted Lesson” element 1706. In some implementations, the system automatically submits notifications at least based on the date the learning lesson was assigned, recognition of a lack of activity, periodic reminders such as a daily or weekly notification, and the like. In some implementations, an algorithm assigns and populates new lessons at least partly based on the demographic data and medical history of the patient. In some implementations, the navigator views a completion or progress status of the learning lesson, the responses of the patient with respect to the learning lesson, and/or comments or notes from the patient related to the learning lessons.
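The automatic notification logic described above can be sketched as an inactivity check over assigned lessons. The field names and the seven-day threshold are illustrative assumptions, not the system's actual schema:

```python
from datetime import date

def lessons_needing_reminder(lessons: list, today: date,
                             inactivity_days: int = 7) -> list:
    """Return titles of lessons that should trigger a reminder.

    A lesson qualifies when it is still incomplete and the user has shown
    no activity on it for `inactivity_days` (counted from the assignment
    date when there has been no activity at all).
    """
    due = []
    for lesson in lessons:
        if lesson["completed"]:
            continue
        last_activity = lesson.get("last_activity") or lesson["assigned"]
        if (today - last_activity).days >= inactivity_days:
            due.append(lesson["title"])
    return due
```

Periodic daily or weekly reminders would re-run the same check on a schedule rather than on demand.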
In
Assessments can cover a wide range of topics which can inform the navigator, for example, whether the navigator, caregiver, or clinician should provide assistance, whether a learning lesson should be assigned that covers topics related to an assessment, and/or whether the navigator should further curate the types of assessments. In some implementations, the GUI 1800 is configured to display an assigned assessment in a display area 1804 along with a due date 1806, a completion element 1808 indicating whether the patient answered or completed the assessment, a view element 1810 to view the assessment, and/or a delete element 1812 to delete the assigned assessment.
In
In
In
Through the use of a mobile application, users can participate in a post-health event care program. The mobile application can provide an assortment of aids such as learning lessons, assessments, facilitation, communication, requesting assistance, and many more. Once the user is approved to participate in the program, the user can download the mobile application to their user device for accessing.
Turning now to ideation stage 3110, which includes one or more of: receiving learning article requests at block S3112; storing learning articles in a database and generating a list of learning articles at block S3114; reviewing the database for similar completed articles at block S3116; and/or prioritizing learning article requests at block S3118. Any type of user can submit learning article requests to the content management system for consideration. Following the receipt of one or more requests, the learning content management system can prioritize the requests at block S3118 at least based on the requester, a keyword search, and/or the population served. For example, users such as medical professionals can request learning articles to provide patients with information in response to commonly asked questions. While any user can submit a learning article request, requests from medical professionals may be prioritized above certain others. Additionally or alternatively, the click rate, reading count, and/or keyword searches of learning articles can prompt the generation of similar articles to address similar issues, further aiding the transition following a health event. One or more similar requests may be made over time, showing a need for a learning article topic. Trends in requests can assist the system in determining user needs.
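The prioritization step of the ideation stage can be sketched as a composite sort key over the request attributes named above. The roles, weights, and field names are invented for illustration:

```python
def prioritize_requests(requests: list) -> list:
    """Order learning-article requests by requester role, repeated demand,
    and the size of the population served (highest priority first)."""
    role_weight = {"medical professional": 3, "navigator": 2,
                   "caregiver": 1, "patient": 1}

    def score(req: dict):
        # Tuple comparison: role outranks demand, demand outranks reach.
        return (role_weight.get(req["requester_role"], 0),
                req.get("similar_request_count", 0),
                req.get("population_served", 0))

    return sorted(requests, key=score, reverse=True)
```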
In some implementations, as shown at block S3122 of stage 3120, the one or more requests from ideation stage 3110 are assigned for drafting. For example, block S3122 can include assigning a learning article to a writer. In some implementations, articles can be assigned based on the availability of the writers. In other implementations, the articles can be assigned based on the professional expertise of the writer and/or the writer's educational background. Along with assigning a learning article request to a writer, a deadline to produce the learning article can be assigned at block S3124.
At block S3126, learning articles assigned for drafting can be assigned a reference number. In some implementations, the reference number can be used to track the progress of the learning article, to track the learning article for editing, and/or to recall the learning article. Once the learning article request is assigned a drafter, deadline, and/or reference number, the drafter can begin preparing and editing the learning article as shown in block S3128. For example, a drafter can begin preparing a new learning article at least based on an unmet need and/or a request for a new topic or category. In other implementations, a drafter can, for example, edit articles to update them based on new learnings or information. In cases of updating articles, drafters can use the reference number to retrieve previously prepared articles for revisions.
In some implementations, as shown at stage 3130, the learning article can be reviewed internally as in block S3132 or externally by third-party reviewers in block S3134. For example, as shown in block S3132, an internal reviewer may review the article to determine whether the learning article is responsive to the learning article request, whether the learning article is properly formatted, and/or whether the learning article satisfies accessibility standards. During the internal review, the learning article can be assigned to content reviewers who specialize in certain topics. Learning articles can be edited in response to the internal review by the same learning article drafter from stage 3120 or a new drafter.
At block S3134, the learning articles can be distributed to third-party reviewers. Third-party reviewers can include, but are not limited to, caregivers, stroke survivors, medical professionals, and the like. Third-party reviewers can determine the accessibility of the learning articles before publication. Further, if the learning article is medically related, medical professionals can provide a quality review.
The learning articles can also be recalled based at least partially on the assigned reference number. A learning article can be recalled from publication to be updated, deleted, hidden, and/or the like. In some implementations, recalling an article deletes a local copy stored on a user device.
Following the review process, the learning article is published at stage 3140. The learning article can be built into the health application mentioned herein at block S3142. Feedback gleaned from stage 3130 can be considered in building the article into the health application. The learning article can be stored on a remote computing server or downloaded and stored on the user device. Once the build is completed, the learning article can be released to users at block S3144. Over time, metrics related to viewership can be collected and processed at block S3146. The metrics can be used to improve learning article requests and provide feedback for further edits and recalls.
Turning to enrollment stage 3210, which can include: enrolling a user in a post-health event care management program at block S3212; creating a user record and profile in a database at block S3214; and/or a user receiving welcome content and an initial contact with the post-health event care management program at block S3216. The enrollment process at block S3212 can include a user being selected to participate in a post-health event program. A user can begin enrollment any time between admission to a care facility following a health event and discharge. In some implementations, enrollment may not follow a health event and may instead begin following a recommendation from a caregiver. Enrollment at block S3214 can include creating a user profile and updating the profile with the relevant medical records. A user's information can be uploaded into a database using the hospital medical records and self-reported information. Following the creation of a user profile, the user can receive introductory information and an initial contact at block S3216. The user can download an application associated with the post-health event care management program and begin reviewing any welcome content available.
Turning to the pre-discharge stage 3220, which in some instances may overlap with the enrollment stage 3210, any missing records and additional health event information can be further identified and collected to update the user profile at block S3222. A user can then receive an initial curated package and application content at block S3224 that can be at least partly based on the initial records. In some implementations, a post-care health program automatically selects the initial content from the medical records collected in blocks S3212 and S3222. Before the user is discharged from the care facility, the user can begin reviewing assigned learning content.
In some implementations, as shown at the post-discharge stage 3230, the user can report profile updates at block S3232. For example, the user may submit impairments, health events, additional records, and/or changes in living situation. Following discharge, the user can also begin receiving post-discharge assessments at block S3234. Such assessments can be directed towards the physical and emotional state of the user. In some implementations, the assessments can be based on the severity of the health event and/or the level of impairment a patient is experiencing. A patient can receive assessments and learning content directed towards their situation or more general content. At block S3236, as the user progresses through the recovery process, the user can receive resources and learning content at least based on the progression and recovery rate of the user. For instance, a navigator may assign content that reflects improvement or setbacks to further curate the patient's experience. In some implementations, the results from the assigned content and learning experience are reported back to the care facility to determine courses of action. In the post-discharge stage 3230, the patient can continuously receive content from the navigator at least based on the recovery process. Content assigned to a user recently discharged from a care facility, such as a hospital, can differ from the content a user further along the recovery process could be assigned. As the user progresses through the recovery process, the user's state can change, which influences the recovery program. For example, as the user completes learning lessons, the navigator and/or system may assign content that corresponds to the state of recovery, such as more complicated exercises or returning to work. Additionally or alternatively, the GUI of a user computing device can further update to reflect the state of the user during the recovery process following discharge from a care facility.
For example, the GUI may update to increase user interaction with the user device or alter the display of the content on the user device.
The foregoing is a summary, and thus, necessarily limited in detail. The above-mentioned aspects, as well as other aspects, features, and advantages of the present technology will now be described in connection with various implementations. The inclusion of the following implementations is not intended to limit the disclosure to these implementations, but rather to enable any person skilled in the art to make and use the contemplated invention(s). Other implementations may be utilized, and modifications may be made without departing from the spirit or scope of the subject matter presented herein. Aspects of the disclosure, as described and illustrated herein, can be arranged, combined, modified, and designed in a variety of different formulations, all of which are explicitly contemplated and form part of this disclosure.
In general, as used herein, a “user” may include, but not be limited to, a patient, a caregiver, a care partner, a healthcare provider, a navigator (e.g., user that helps or enables system setup), a friend, a relative, a family member, a nurse, a support group member, a therapist, a service provider, etc.
In general, as used herein, a “health event” may include a stroke, a traumatic brain injury, a multiple sclerosis relapse, episode, or diagnosis; a Parkinson's Disease diagnosis, a Diabetes diagnosis; a hypo or hyperinsulinemia episode; a Fibromyalgia diagnosis; a Cervical spondylosis episode or diagnosis; a Guillain-Barre syndrome diagnosis or episode; a Lambert-Eaton myasthenic syndrome diagnosis or episode; a Myasthenia gravis episode or diagnosis; an amyotrophic lateral sclerosis diagnosis; a spinal cord injury; or any other event, condition, or disease that causes neurological changes, disruptions, or loss of function or disruption or muscle weakness, spasticity, or loss of function.
In general, as used herein, “symptoms” of stroke onset or “disabilities” as a result of a stroke event may include, but not be limited to: blurred vision; speech impediments; slurring speech; involuntary eye or other body part movement; memory changes; balance changes; hemiplegia; gait changes; motor activity changes; muscle stiffness; muscle spasticity; behavioral changes (e.g., anxiety, anger, irritability, lack of concentration, lack of comprehension, etc.); emotional changes (e.g., depression); shoulder pain; shoulder subluxation (i.e., partial shoulder joint dislocation); altered smell, taste, hearing, and/or vision; drooping of eyelid (i.e., ptosis); weakness of ocular muscles; decreased reflexes; decreased sensation and muscle weakness of the face; nystagmus; altered breathing rate; altered heart rate; weakness in tongue; weakness in sternocleidomastoid muscle; aphasia; dysarthria; apraxia; visual field defects; hemineglect; disorganized thinking; confusion; hypersexual gestures; lack of insight of his/her disability; vertigo; disequilibrium; lack of consciousness; headache; vomiting; etc.
As used herein, user input received into the system may be either static or dynamic. For example, a demographic of a user may be static (e.g., requiring infrequent updating), while a biometric of a user may be dynamic (e.g., requiring frequent updating).
Dynamic user input, either collected directly by the software, a device communicatively coupled to the software, or a user of the software, includes but is not limited to: living situation, financial situation, clinical data (e.g., vitals, lab test results, scans, assessments, health history, etc.), recent assessments (e.g., neurological, motor skills, cognitive, etc.), identity, goals (e.g., be able to drive again, regain a motor skill, etc.), disabilities related or unrelated to the stroke (e.g., aphasia, dysphagia, fatigue, incontinence, etc.), life events, life changes, etc.
The systems and methods of the preferred embodiment and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with the system and one or more portions of the hardware processor on the user device, wearable, and/or computing device. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (e.g., CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application-specific hardware processor, but any suitable dedicated hardware or hardware/firmware combination can additionally or alternatively execute the instructions.
As used in the description and claims, the singular form “a”, “an” and “the” include both singular and plural references unless the context clearly dictates otherwise. For example, the term “input” may include, and is contemplated to include, a plurality of inputs. At times, the claims and disclosure may include terms such as “a plurality,” “one or more,” or “at least one;” however, the absence of such terms is not intended to mean, and should not be interpreted to mean, that a plurality is not conceived.
The term “about” or “approximately,” when used before a numerical designation or range (e.g., to define a length or pressure), indicates approximations which may vary by (+) or (−) 5%, 1% or 0.1%. All numerical ranges provided herein are inclusive of the stated start and end numbers. The term “substantially” indicates mostly (i.e., greater than 50%) or essentially all of a device, substance, or composition.
As used herein, the term “comprising” or “comprises” is intended to mean that the devices, systems, and methods include the recited elements, and may additionally include any other elements. “Consisting essentially of” shall mean that the devices, systems, and methods include the recited elements and exclude other elements of essential significance to the combination for the stated purpose. Thus, a system or method consisting essentially of the elements as defined herein would not exclude other materials, features, or steps that do not materially affect the basic and novel characteristic(s) of the claimed disclosure. “Consisting of” shall mean that the devices, systems, and methods include the recited elements and exclude anything more than a trivial or inconsequential element or step. Implementations defined by each of these transitional terms are within the scope of this disclosure.
The examples and illustrations included herein show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
This application is related to International Patent Application PCT/US2020/055604, filed Oct. 14, 2020, and claims the benefit of U.S. Provisional Application No. 63/236,876, filed Aug. 25, 2021, entitled “SYSTEMS AND METHODS FOR STROKE CARE MANAGEMENT,” the contents of which are herein incorporated by reference in their entirety.
Number | Date | Country
---|---|---
63236876 | Aug 2021 | US