This application relates to rehabilitation, and particularly to virtual rehabilitation.
As science and technology advance, so does medicine. Technology allows users to explore simulated environments in which they can, for example, explore their anatomy or train by attempting simulated surgical procedures. The environments are often limited and not well suited to distributed processing.
Many computer-generated environments are inaccessible to users because they require specialized interfaces that do not accommodate the users' needs or medical conditions. Some equipment dampens the users' natural reactions because it is not tailored to the users' special needs. Children diagnosed with hyperactivity, for example, may not adapt to adult-sized equipment.
Another concern is safety. When equipment is used to rehabilitate an injury, for example, any movement presents a risk of reinjury due to the equipment's setup (some equipment is tethered to other equipment or weighted), a user moving beyond prescribed therapy levels, and/or the user engaging beyond prescribed therapy sessions. Similarly, not moving to prescribed levels or engaging for extended time periods can also cause injury.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee. The elements in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. Moreover, in the figures, like referenced numerals designate corresponding parts throughout the different views.
The disclosed systems and processes (referred to as systems) render new and unique treatment plans through virtual rehabilitation. The systems provide rehabilitation programs by enabling users to move and react to computer-simulated virtual environments. The systems allow users to sense, move, and/or influence virtual objects much like they do physical objects in a natural real-life environment. The natural flow and interactions of some therapeutic activities provide entertainment and real-time movements without revealing to the user that the user is undergoing therapy meant to restore their good health. The immersion into simulated environments allows users to forget about their surroundings, health, and other external situations. Some simulated environments create sight sensations that encourage physical movements. Others transmit sight, sound, and sensations that simulate the real world, making the simulations different from traditional computer simulations. Some systems record movements, and some alternate systems record and track sound, facial expressions, movements, and/or positions. The systems monitor and assess progress as users move through rehabilitative programs while ensuring user safety. Alternate systems include safety monitoring programs that guard against reinjury by processing feedback and adjusting the program in real-time when users exercise beyond their prescribed activities, beyond their prescribed durations, or express pain. A real-time operation comprises an operation matching or occurring faster than a human's perception of time (e.g., processing information or data at the same rate or at a faster rate than it is received) or a virtual process that occurs (or is perceived to occur) like a process in a real-life environment.
In some systems, the virtual worlds are generated by mathematical models and programming that create the illusion of motion, the appearance of objects, and their motion and/or removal. By manipulating the transitions between scenes and recalculating positions between them, the systems provide the illusion of continuous motion of virtual objects. Some processes calculate the viewer's perspective of the virtual object, calculate lighting, and add shadings, reflections, shadows, and textures to simulate the manipulation of what appear to be real-life objects. By specifying motions, appearances, and transparency (virtual-object removal), scenes are rendered and action is specified through displays such as a head-mounted display.
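By way of a non-limiting illustrative sketch (the function name, parameter values, and pinhole-camera simplification below are assumptions for illustration, not part of the disclosure), recalculating a viewer's perspective of a virtual object may reduce to a perspective projection of each vertex onto the display plane:

```python
def project(point, viewer_distance=2.0):
    """Perspective-project a 3-D virtual-object vertex onto a 2-D display
    plane: points farther from the viewer are scaled down, creating the
    illusion of depth as positions are recalculated between scenes."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)
```

Recomputing such projections as the tracked head position changes is one way transitions between scenes can yield the illusion of continuous motion.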
To see in the virtual environment, the users wear a head-mounted display with screens directed to each of the user's eyes. The head-mounted display includes a position tracker to monitor the location and movement of the user's head and appendages and, in alternate systems, includes vision sensors and microphones to track the user's eyes, facial expressions, and sound. With respect to the tracking of the user's head and eyes, the systems process position data to recalculate the images rendered in the virtual environment to match the direction in which the user is looking and display those images on the head-mounted display. Using a plurality of sensors, such as optical sensors, capacitive sensors, and/or force sensors, for example, the systems detect a volume of a motion cloud in which an appendage moves. This is the amount of space, or cloud of space, in which the appendage can move. When improving the motor training of an appendage extension or extension motion, the cloud of space comprises the amount of spatial volume in which an appendage can reach out, extend, and move. It may encompass the spatial volume that allows a user to point, reach, or extend toward an object, or simulate the spatial volume required to point to, throw, or throw at an object. By employing a very high sampling rate (e.g., some systems track motion coordinates about every 100 milliseconds), the systems capture subtle gestures and track their movements within these spatial volumes, providing a highly granular measurement of the user's range of motion. This is valuable in therapy sessions, where incremental movements can make the difference between a successful rehabilitation therapy and an injury. In some systems, appendage tracking, such as arm and hand tracking, for example, may occur through one or more optional handheld devices in wireless communication with the head-mounted display.
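As a non-limiting illustrative sketch (the function name, voxel-occupancy approach, and grid size are assumptions for illustration, not taken from the disclosure), a motion cloud's volume may be estimated from appendage positions sampled at the high rate described above:

```python
def motion_cloud_volume(samples, voxel=0.05):
    """Estimate the spatial volume swept by an appendage.

    samples: iterable of (x, y, z) positions (e.g., captured roughly
             every 100 milliseconds during a therapeutic session).
    voxel:   edge length of each occupancy-grid cell, in the same units.
    """
    occupied = set()
    for x, y, z in samples:
        # Quantize each sampled position into a grid cell; the number of
        # distinct occupied cells approximates the reachable volume.
        occupied.add((int(x // voxel), int(y // voxel), int(z // voxel)))
    return len(occupied) * voxel ** 3
```

Because every sample contributes to the occupancy grid, even subtle incremental gestures enlarge the measured cloud, yielding the granular range-of-motion measurement the therapy relies on.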
The optional handheld devices may be adjusted to different hand sizes and, in some systems, may be made with anti-microbial surfaces to maintain sterile environments.
In some systems, pain is tracked by monitoring sound (e.g., within or outside of the aural range), facial expressions, and head movements via a pain tracking engine. Because pain often coincides with facial expressions and head movements, alternate systems use active models that compare features extracted from motion analytics captured while the user is in pain with pain-free features extracted from benchmark data, and that compare the user's head motions and rates of those motions against head motions and rates associated with pain. In other alternate systems, pain may be recognized through voice recognition that acts on verbal commands, verbal expressions, or other sounds to detect, through the pain tracking engine, the sound inflections that are often associated with pain. In these alternate systems, when pain is recognized (or a pain level is reached), the alternate systems initiate assessments, and some systems initiate remedial actions such as terminating the interactive virtual therapeutic session to prevent reinjury and/or initiating real-time adjustments to achieve the most impactful recovery path and care plan.
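As a non-limiting illustrative sketch (the function name, Euclidean-distance comparison, and threshold value are assumptions for illustration only), comparing extracted features against a user's pain-free benchmark may be reduced to a deviation test:

```python
import math

def pain_suspected(features, benchmark, threshold=2.0):
    """Flag possible pain when the current feature vector (e.g., head-motion
    rates or facial-expression metrics) deviates from the user's pain-free
    benchmark by more than `threshold`, measured as Euclidean distance."""
    dist = math.sqrt(sum((f - b) ** 2 for f, b in zip(features, benchmark)))
    return dist > threshold
```

In an actual pain tracking engine, a flag such as this could trigger an assessment or a remedial action such as session termination.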
In some fully immersive systems, users may hear sounds via speakers, such as the popping of a balloon in an interactive virtual therapeutic session. And, in some alternate systems, the user may sense touch that is simulated by physical feedback via a haptic interface in the handheld devices. The haptic interfaces relay feedback (that in other systems includes sound and/or visual cues), such as when a user is making an unprescribed movement and/or engaging beyond a prescribed therapy session, to provide an alert or notification. The alert or notification may also be relayed to the therapist via an alert engine and a messaging system. Some haptic interfaces relay physical sensations that coincide with the limits and boundaries in the virtual environment.
With the initial assessment completed, a dynamic active care plan engine (e.g., a machine learning trained engine) generates a recovery path and care plan at 104 that may be based on the therapist's or therapeutic team's recommendations. The care prescribed and the speed of the recovery path may be based on the user's diagnosis (e.g., a rotator cuff injury may require a less aggressive therapy than an overextended muscle), current health, desired rehabilitation period, and other factors.
In preparation for the virtual therapeutic session, the systems process the care plan and assessments made from any prior therapeutic sessions. For physical rehabilitation, the systems measure various distances. For a shoulder rehabilitation, for example, the systems measure the distances of the user's arms between their joints. The distances are translated into three-dimensional virtual environment vectors that establish a starting rehabilitative cloud centered around the simulated user. The systems also align the virtual environment vectors with the prescribed range of motion established by the care plan.
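As a non-limiting illustrative sketch (the function name and the simplification of the cloud to a spherical reach radius are assumptions for illustration), measured joint-to-joint distances may be translated into the extent of a starting rehabilitative cloud scaled by the prescribed range of motion:

```python
def starting_cloud_radius(segment_lengths, prescribed_rom=1.0):
    """Translate measured joint-to-joint segment lengths into the radius
    of a starting rehabilitative cloud centered on the simulated user.

    prescribed_rom: fraction of the full anatomical reach permitted by
    the care plan (1.0 = the full prescribed range of motion).
    """
    full_reach = sum(segment_lengths)   # straight-line appendage extension
    return full_reach * prescribed_rom
```

A system could then orient virtual environment vectors of this length around the simulated user to bound the session's starting cloud.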
Based on the care plan, a game play challenge zone is created at 106. A game play challenge zone comprises an area, such as a spatial volume in the virtual environment, in which a user attempts to move part of his/her virtual self. Movement in the virtual environment is linked to the user's relative movement of the handheld or other device, but not to the precise physical location of the device. For example, if a user picks up the device without extending his/her arm during an arm reach rehabilitation therapy and then sets it down in another location, the representation or motion of the user in the virtual environment does not change because no arm extension was detected. When the user executes an extension, the movement is rendered in the virtual environment to reflect the user's extension relative to their starting position. A virtual therapeutic session is the time during which a program is running. In some systems, it is the time during which the user interacts within the interactive environment, which may be described as the time during which the program accepts rehabilitative input from the user and processes that information.
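As a non-limiting illustrative sketch (the function name and the boolean extension flag are assumptions for illustration; an actual system would derive the flag from its sensors), relative-movement mapping may be expressed as:

```python
def virtual_motion(start, current, extension_detected):
    """Map device movement into the virtual environment relative to the
    session starting position. Relocating the device with no detected
    extension (e.g., picking it up and setting it down elsewhere)
    produces no virtual motion."""
    if not extension_detected:
        return (0.0, 0.0, 0.0)
    return tuple(c - s for c, s in zip(current, start))
```

This captures the distinction drawn above: the avatar moves with the user's relative extension, not with the device's absolute physical location.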
At 108, the virtual therapeutic session begins. In some systems, the virtual therapeutic session occurs for a predetermined amount of time on a scheduled number of days of the week. For example, a virtual therapeutic session may last thirty minutes a day, three to five days a week. During virtual therapeutic sessions, the user executes certain motions in response to requested activities prompted by the virtual world. A motor training of an appendage extension, for example, may be prompted by virtual reality exercises that encourage an extension and, in some applications, a desired trajectory to complete a task or movement. The virtual reality exercises recruit the user to move his/her injured appendage into the challenge zone throughout the virtual therapeutic session. The system records the location of the patient's appendage at a predetermined rate to determine if the user is able to reach the prescribed targets. The movements are measured and translated into vector movements during the virtual therapeutic session.
A game play challenge may present a challenge that requires a user to reach or extend to as many balloons (targets) as possible that surround their virtual presence after reaching or extending toward them first with their healthy appendage (establishing a benchmark). The system analyzes the user's performance during the therapeutic session by comparing the user's current volume of motion capability to their full volume of motion capability at 110. When a desired capability is reached and maintained for a predetermined number of virtual therapeutic sessions 112, the virtual therapeutic session and episode (of care) terminate at 114. An episode refers to the length of time in which the user undergoes therapeutic activities to rehabilitate an injury. It comprises the total number of virtual therapeutic sessions required to execute the care plan.
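As a non-limiting illustrative sketch (the function name, default target, and session count are assumptions for illustration), the termination test at 112 may be expressed over per-session ratios of current to full volume-of-motion capability:

```python
def episode_complete(session_ratios, target=0.9, sessions_required=3):
    """True when the ratio of current to full volume-of-motion capability
    has met the target for the last `sessions_required` consecutive
    virtual therapeutic sessions."""
    recent = session_ratios[-sessions_required:]
    return len(recent) == sessions_required and all(r >= target for r in recent)
```

Requiring the capability to be maintained across consecutive sessions, rather than reached once, guards against ending an episode on a single outlier measurement.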
Unlike conventional therapeutic approaches that rely on ranges of motion, the disclosed systems process volumes of motion for motor training, such as the motor training of an appendage. In this application, the volume of motion is described by the perimeter of the volume through which a user's appendage can move. The system monitors this cubic range by recording points and vectors in three-dimensional space that a user (a.k.a. a patient in some field-of-use applications) can reach, as measured between the head-mounted display and the wireless devices.
As explained, the system analyzes the user's current volume of motion capability against their full volume of motion capacity. The full volume of motion capacity is the volume through which a user can move their unimpaired arm/appendage, or a final targeted range based on other factors. This is the full range of motion that the system is attempting to achieve during the episode of care. Based on a current volume of motion capability, the system measures the progress, or lack thereof, that the user makes and provides that assessment to the dynamic active care plan engine as feedback. Based on the feedback, the dynamic active care plan engine modifies the challenge zone and volume of motion cloud for the next virtual therapeutic session to ensure that the user progresses to the desired target level during the episode of care. The volume of motion cloud is the space (a.k.a. the cloud space) in which a movement can be made. When executing a training of an appendage motion, for example, the volume of motion cloud determines the space in which the user can manipulate their arm. Should the user reach beyond the boundary, a safety monitoring program intervenes to prevent reinjury in some alternate systems.
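As a non-limiting illustrative sketch (the function name and the growth/shrink factors are assumptions for illustration, not values from the disclosure), the feedback loop that resizes the challenge zone between sessions may be expressed as:

```python
def next_challenge_zone(current_volume, full_volume, zone_volume,
                        growth=1.10, shrink=0.95):
    """Adjust the challenge-zone volume for the next virtual therapeutic
    session based on progress feedback: expand it toward the full
    capacity when the user fills the current zone, ease it back when
    the user falls short."""
    if current_volume >= zone_volume:
        # Progress: grow the zone, capped at the full volume of motion.
        return min(zone_volume * growth, full_volume)
    # Shortfall: shrink the zone to keep targets reachable and safe.
    return zone_volume * shrink
```

Capping the zone at the full volume of motion capacity is one way a safety monitoring program could keep the next session's targets within safe bounds.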
Each virtual therapeutic session establishes zones or boundaries and difficulty levels. The metrics are analogous to a perimeter for a real geographic area and to resistance training.
By aggregating movement data from all participants of the systems (impaired and fully functional, in the appendage extension use case, for example) retained in an on-line database, the game play zone algorithm matches participants with similar demographics and those with similar impairments as measured during each monitored virtual therapeutic session to identify similar prognoses. Based on those comparisons, the game play zone algorithm identifies assessments and identifies the virtual rehabilitation experiences, point targets, and movement patterns over time of users who exhibit the best progress in increasing and/or improving their volume of movement. In some exemplary machine learning applications, the best practices render training data that is processed to train the dynamic active care plan engine. The training data may represent three-dimensional coordinates (e.g., x, y, and z coordinates) of low- and high-limit clouds for participants and the percentage of points that should be applied in a game play zone (representing how aggressive the experience should be). The dynamic active care plan engine is trained on this data to minimize a loss function, such as a mean squared error (when the mean is used) or a mean absolute error (when the median is used), for example, by recursively splitting each of the many classes of the training data (e.g., three in this exemplary system) in a way that maximizes the best practices until an accuracy limit or threshold is met for each category. Thereafter, the trained dynamic active care plan engine establishes the game play zones as shown in the figures.
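As a non-limiting illustrative sketch (the function names, the one-dimensional simplification, and the MSE stopping limit are assumptions for illustration; an actual engine would split on three-dimensional cloud coordinates), recursive splitting that minimizes a mean-squared-error loss may be expressed as a small regression tree:

```python
def variance(ys):
    """Mean squared deviation from the mean (the MSE of predicting the mean)."""
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys) / len(ys)

def fit_tree(xs, ys, mse_limit=0.01):
    """Recursively split 1-D training data (e.g., a cloud coordinate vs. the
    percentage of points to apply in a game play zone), choosing the split
    that minimizes weighted MSE, until the MSE limit is met."""
    if variance(ys) <= mse_limit or len(set(xs)) == 1:
        return sum(ys) / len(ys)              # leaf: predict the mean
    best = None
    for split in sorted(set(xs))[1:]:         # candidate split points
        left = [y for x, y in zip(xs, ys) if x < split]
        right = [y for x, y in zip(xs, ys) if x >= split]
        score = (len(left) * variance(left)
                 + len(right) * variance(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, split)
    split = best[1]
    left_x = [x for x in xs if x < split]
    right_x = [x for x in xs if x >= split]
    left_y = [y for x, y in zip(xs, ys) if x < split]
    right_y = [y for x, y in zip(xs, ys) if x >= split]
    return (split, fit_tree(left_x, left_y, mse_limit),
            fit_tree(right_x, right_y, mse_limit))

def predict(node, x):
    """Walk the tree to a leaf and return its prediction."""
    if not isinstance(node, tuple):
        return node
    split, left, right = node
    return predict(left if x < split else right, x)
```

Swapping the mean for the median at each leaf, with a mean-absolute-error score, would give the median variant the passage mentions.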
A game difficulty level algorithm enhances the level of difficulty while the user is engaging in the game play challenge zones. By processing the user's performance on their most current clinical range of motion assessments, their goal range of motion performance assessments (from the recovery path of the episode of care), variances that exceed care plan recovery paths, and their historical virtual rehabilitation assessments, the systems generate game difficulty levels. The levels are rendered as prescribed high-range intensity levels (preferably in percentages, where 100%=full range of motion). The game difficulty level algorithm further processes the current date and/or time, the most recent pain level indicators, user identifications, and episode identifications.
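As a non-limiting illustrative sketch (the function name, pain threshold, and back-off factor are assumptions for illustration only), rendering a difficulty level as a percentage of full range of motion, tempered by recent pain indicators, may be expressed as:

```python
def difficulty_level(current_rom, goal_rom, pain_level, max_pain=3):
    """Render a prescribed high-range intensity level as a percentage of
    full range of motion (100% = full ROM), reduced when the most recent
    pain level indicators exceed a threshold."""
    level = min(current_rom / goal_rom, 1.0) * 100.0
    if pain_level > max_pain:
        level *= 0.8   # back off when recent pain indicators are high
    return level
```
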
In use, each of the virtual worlds may prompt users to target points and follow trajectories. When target points are rendered, the systems prompt users to reach or extend to specific points in their physical space by prompting them to reach for or extend toward renderings in their virtual worlds (e.g., a reach task). The user's performance is processed by the game play zone algorithm and the game difficulty level algorithm to determine the game play challenge zones for future virtual therapeutic sessions. Target vector movements in the game play zone prompt the user to make specific movements along desired therapy trajectories from a starting point to an ending point. The user's progress is likewise processed by the game play zone algorithm and the game difficulty level algorithm to determine game play challenge zones for future virtual therapeutic sessions. Because recommendations occur automatically in some systems, a virtual menu generator determines the correct menu choices available to the user based on the user's recovery path, current assessment, and prior performance from earlier virtual therapeutic sessions.
In practice, the volume of motion current capacity will improve or decline after each virtual therapeutic session. As a result, the improvement or decline is processed by the dynamic active care plan engine, which automatically modifies the user's recovery path and care plan to reflect their current health. As a result, the game play challenge zone may also change for the next virtual therapeutic session. If little progress is recognized or perceived over a predetermined number of sessions, the alert engine may provide the user and/or therapist with a notification (e.g., an audible, haptic, or visual alarm) in some systems to indicate or report the lack of progress, which may further indicate that the user is not rehabilitating their injury to their full abilities, for personal or financial reasons, for example, or that the rehabilitation is not remediating the injury. The notification allows for adjustments that may improve the quality of care and outcomes, and reduce healthcare costs.
The memory 1106 and 1108 and/or storage disclosed may retain an ordered listing of executable instructions for implementing the functions described above in non-transitory computer code. The machine-readable medium may be, but is not limited to, an electronic, a magnetic, an optical, an electromagnetic, an infrared, or a semiconductor medium. A non-exhaustive list of examples of a machine-readable medium includes: a portable magnetic or optical disk; a volatile memory, such as a Random-Access Memory (RAM); a Read-Only Memory (ROM); an Erasable Programmable Read-Only Memory (EPROM or Flash memory); or a database management system. The memory 1106 and 1108 may comprise a single device or multiple devices that may be disposed on one or more dedicated memory devices or disposed on a processor or other similar device. An “engine” comprises a hardware processor or a portion of a program executed by a processor that executes or supports unique treatment functions through the disclosed virtual rehabilitation.
When functions, steps, etc. are said to be “responsive to” or occur “in response to” another function or step, etc., the functions or steps necessarily occur as a result of another function or step, etc. It is not sufficient that a function or act merely follow or occur subsequent to another. Further, the systems disclosed herein may be practiced in the absence of any elements not specifically disclosed herein. Some systems may be practiced without disclosed elements that are described as alternative elements or alternative systems. The term “substantially” or “about” encompasses a range that is largely (any range, or discrete number within a range, between ninety-five percent and one-hundred five percent), but not necessarily wholly, that which is specified. It encompasses all but an insignificant amount.
The disclosed systems rely on computers and wireless sensors to render and display exercises and interfaces to monitor and resolve user actions. In the disclosed systems, data flows asynchronously at a sampling rate and frequency faster than the physical process it encourages. The high temporal granularity of the volumes of motion tracks not only rehabilitation progress but also user decisions not to exercise to their capability. The data gathered during the virtual therapeutic sessions can be stored locally or through on-line servers without the user's or therapist's direction. The distributed nature of the systems increases accessibility for users who cannot travel or do not have access to rehabilitation practices. Further, the distributed nature of the systems allows therapists to track several patients simultaneously from one location, and the artificial intelligence may supplement and improve the therapists' judgments, increasing the probability of a successful outcome.
Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the disclosure, and be protected by the following claims.
This application claims priority to U.S. Provisional Patent Application No. 62/804,990, titled “System and Method for Virtual Reality Enhanced Adaptive Rehabilitation,” filed Feb. 13, 2019, which is herein incorporated by reference.
Number | Date | Country
---|---|---
62804990 | Feb 2019 | US