The present disclosure relates generally to virtual reality (VR) systems and more particularly to capturing and classifying movements for therapeutic exercises in VR activities.
Helping patients learn or relearn to perform movements for regular everyday tasks and activities of daily living (ADLs) can be an important goal of therapeutic treatment. Virtual reality systems may be used in various applications, including therapeutic activities and exercises, to assist patients with their rehabilitation and recovery from illness or injury. Sometimes, a patient may struggle with certain movements and parts of movements (e.g., micromovements) required to perform one or more ADLs. VR therapy may be used to monitor and help patients, in a safe and observable environment, to retrain their brains and muscles to perform certain tasks that may be difficult for them. For instance, VR applications can immerse a patient in a virtual world and guide the patient to perform virtual tasks to help practice movements or micromovements beneficial to ADLs. VR systems can track body motion using sensors and, e.g., using machine learning, build a movement library of ADLs and key tasks based on captured and classified movements. VR activities may involve practicing motions (and parts of motions) foundational to vital everyday tasks, and further activities may be developed to help train specific movements and micromovements found in a movement library.
In some cases, a therapist may have difficulty recognizing and/or identifying the VR activities that may be appropriate for a patient needing to exercise problematic ADL areas. A VR therapeutic movement platform can identify VR activities that feature movements and micromovements involved in challenging ADLs and tailor a therapy session to practice similar smaller movements in a game-like virtual world that would be engaging as well as entertaining. As such, the patient would not be focused on the difficulties in performing the various activities or exercises, but would instead be entertained and carried away by a carefully crafted virtual environment and therapeutic exercises. The VR therapeutic movement platform may identify VR activities that can direct a patient to perform ADL-critical movements (and/or parts of movements) in a manner that challenges and engages the patient physically and mentally.
ADLs are basic self-care activities necessary for independent living, e.g., at home and/or in a community. Typically, many ADLs are performed on a daily basis. For instance, some of the most common ADLs may include the activities that many people complete when they wake up in the morning and get ready to leave their home, e.g., get up out of bed, use a toilet, shower or bathe, get dressed, groom, and eat. There are typically five basic categories of ADLs, including personal hygiene, dressing, self-feeding, bathroom activities, and functional mobility. People who experience mental or physical impairments may struggle to perform some of the movements necessary to accomplish one or more of these routine daily tasks, and, thus, may not be able to live independently. For example, someone with limited shoulder mobility may have difficulty washing his or her hair. A patient experiencing a movement disorder, such as Parkinson's disease, may find drinking water challenging. A woman recovering from a stroke may not be able to lift a toothbrush to brush her own teeth. Doctors and therapists, such as occupational therapists, may evaluate a patient's abilities using tools such as the Katz Index of Independence in Activities of Daily Living. Based on measurements and diagnoses, doctors and therapists may design a course of therapy to help a patient improve the basic movements that are essential for regular as well as vital activities.
Other important activities for living independently in a community may include instrumental activities of daily living (IADLs). IADLs are considered a little more complex than ADLs and may not be necessary for fundamental functioning. The Occupational Therapy Practice Framework from the American Occupational Therapy Association (AOTA) identifies IADLs such as care of others, care of pets, child rearing, communication management, driving and community mobility, financial management, health management and maintenance, home establishment and management, meal preparation and cleanup, religious and spiritual activities and expressions, safety procedures and emergency responses, and shopping. Sometimes included as IADLs are activities like rest and sleep, education, work, play, leisure, and social participation. Not all IADLs may be absolutely vital for everyday life, but a person's inability to perform movements necessary for some IADLs may encumber his or her independence, social interactions, etc.
Understandably, there are other activities outside of basic ADLs and IADLs that may challenge a person and that may help make daily living more independent and fulfilling. Less serious injuries with shorter-term recovery may inhibit certain movements that one may be accustomed to performing, even though such movements may not be necessary for typical daily functioning. For instance, someone experiencing back pain may not be able to jump the same way as when they were healthy. A person recovering from a broken leg may not be able to jog. A normally high-performing athlete rehabilitating an upper body injury may not even be able to perform motions necessary for swinging, e.g., a golf club or tennis racquet. Losing the ability to perform certain movements may, unfortunately, mean losing the corresponding activities in one's life.
VR activities have shown promise as engaging therapies for patients suffering from a multitude of conditions, including various physical, neurological, cognitive, and/or sensory impairments. Generally, VR systems can be used to instruct users in their movements, while therapeutic VR can recreate practical exercises that may further rehabilitative goals such as physical development and neurorehabilitation. For instance, patients with physical and neurocognitive disorders may use therapy to improve, e.g., range of motion, balance, coordination, mobility, flexibility, posture, endurance, and strength. Physical therapy may also help with pain management. Some therapies, e.g., occupational therapies, may help patients with various impairments develop or recuperate physically and mentally to better perform everyday living functions and ADLs. VR systems can encourage patients by depicting avatars performing tasks that a patient with various impairments may not be able to fully perform.
VR therapy can be used to treat various disorders, including physical disorders causing difficulty or discomfort with reach, grasp, positioning, orienting, range of motion (ROM), conditioning, coordination, control, endurance, accuracy, and others. VR therapy can be used to treat neurological disorders disrupting psycho-motor skills, visual-spatial manipulation, control of voluntary movement, motor coordination, coordination of extremities, dynamic sitting balance, eye-hand coordination, visual-perceptual skills, and others. VR therapy can be used to treat cognitive disorders causing difficulty or discomfort with cognitive functions such as executive functioning, short-term and working memory, sequencing, procedural memory, stimuli tolerance and endurance, sustained attention, attention span, cognitive-dependent IADLs, and others. In some cases, VR therapy may be used to treat sensory impairments with, e.g., sight, hearing, smell, touch, taste, and/or spatial awareness.
A VR system may use an avatar as a representation of the patient and animate the avatar in the virtual world. Using sensors in VR implementations of therapy allows for real-world data collection, as the sensors can capture movements of various body parts, such as hands and arms, for the system to convert into avatar animation in a virtual environment. Such an approach may reproduce the real-world movements of a patient as virtual-world movements with a high degree of accuracy. Data from the many sensors may be able to produce statistical feedback for viewing and analysis by doctors and therapists. Generally, a VR system collects raw sensor data (e.g., position, orientation, linear and rotational movements, movement vectors, acceleration data, etc.) from patient movements, filters the raw data, passes the filtered data to an inverse kinematics (IK) engine, and then an avatar solver may generate a skeleton and mesh model in order to render and animate the patient's virtual avatar.
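As a non-limiting illustration of this pipeline, the following Python sketch shows one way raw sensor samples might flow through filtering, an inverse-kinematics step, and an avatar solver; the function and class names (e.g., low_pass_filter, solve_ik, AvatarSolver) are hypothetical placeholders rather than the interfaces of any particular VR engine.

    import numpy as np

    def low_pass_filter(samples, alpha=0.2):
        """Exponentially smooth raw sensor samples (an assumed, simple filter choice)."""
        filtered = [np.asarray(samples[0], dtype=float)]
        for sample in samples[1:]:
            filtered.append(alpha * np.asarray(sample, dtype=float) + (1 - alpha) * filtered[-1])
        return np.array(filtered)

    def solve_ik(filtered_samples):
        """Placeholder inverse-kinematics step mapping filtered sensor poses to joint targets."""
        return {"joint_targets": filtered_samples}

    class AvatarSolver:
        """Hypothetical avatar solver that builds a skeleton and deforms a mesh for rendering."""
        def build_skeleton(self, joint_targets):
            return {"bones": joint_targets}

        def deform_mesh(self, skeleton):
            return {"mesh": skeleton["bones"]}

    def animate_avatar(raw_samples):
        filtered = low_pass_filter(raw_samples)   # filter the raw sensor data
        ik_result = solve_ik(filtered)            # pass filtered data to an IK engine
        solver = AvatarSolver()                   # avatar solver generates skeleton and mesh
        skeleton = solver.build_skeleton(ik_result["joint_targets"])
        return solver.deform_mesh(skeleton)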
Typically, avatar animations in a virtual world may closely mimic the real-world movements, but virtual movements may be exaggerated or modified in order to aid in therapeutic activities. Visualization of patient movements through avatar animation may stimulate and promote physical and neurological repair, recovery, and regeneration for a patient. For example, a VR therapy activity may depict an avatar feeding a squirrel some seed from a spoon held by the avatar, corresponding to a patient's actual movements of grabbing and shaking a seed dispenser into a spoon held by the other hand. A VR therapy activity may ask a patient to stack virtual ingredients for a specific sandwich by requiring the patient to reach towards and grab various ingredients, such as bread, meats, cheeses, lettuce, and condiments, in a step-by-step fashion. VR activities have shown promise as engaging therapies for patients suffering from a multitude of conditions, and these activities can add engaging features to otherwise mentally and physically demanding exercises. Therapy can be stress-inducing and can cause patient fatigue and frustration. More VR activities are being developed to address specialized impairments with tailored exercises.
With a variety of VR activities comes a variety of exercises for therapy patients. However, not every exercise or activity is correct or properly suited for every patient. Patients may each have various physical, neurological, cognitive, and/or sensory impairments to be treated. A therapist must be cognizant of movements that a patient may have difficulty performing and adjust the therapy by, e.g., selecting more appropriate VR activities, difficulty and/or intensity levels, etc. Asking a therapist to memorize the movements of each VR activity, as well as the movements that may be difficult for, e.g., dozens of patients, is not feasible. There exists a need to store movements and/or lists of movements particular to each activity and relevant for each patient. There exists a need for some type of a movement library.
As disclosed herein, a movement library may be a collection of movements (e.g., including parts of movements and micromovements) and activities found in ADLs, VR activities, problematic patient motions, and other important activities. For instance, a movement library may comprise organized lists, databases, catalogs, charts, plots, graphs, images, animations, videos, and other compilations of ADLs and activities that various patients may perform on a regular basis. In some cases, motion capture sessions may be performed by a motion-capture (mocap) performer—e.g., a performer working with a doctor, therapist, VR application designer, supervised assistant, actor, or other trained professional—performing an activity, e.g., brushing teeth or drinking water from a glass, to capture proper form of the motion in a movement library. For instance, VR hardware and sensors (e.g., electromagnetic sensors, inertia sensors, accelerometers, optical sensors, and any other position, orientation, angular and rotation sensors) may be positioned on the body of the mocap performer, and she may perform the necessary steps or movements of the ADL for capture by the VR system and incorporation into a movement library. Sensors of the VR system may capture a performed movement and the VR system may translate real-world movement to movement by a VR avatar in a virtual world. Real-world motion-capture performance may be completed with or without props, inside or outside the usual setting (e.g., bathroom, kitchen). In some embodiments, a mocap performer may be able to pantomime an ADL using movements proper enough for capture and training. In some cases, portions of a movement library may be built from patient capture data.
From the data of each movement in a movement library, a movement signature may be derived for each input movement. Generally, generating a movement signature comprises generating a visual, graphical, vector-map, vector-chart, vector-plot, mathematical, and/or other similar representation of data describing a movement. For instance, a movement signature may be a graph of sensor data over time. In some embodiments, generating a movement signature may include steps of plotting one or more portions of sensor data as graphs and generating a movement signature based on the graphs. Each movement signature may be stored within the movement library—and each movement may be identified based on its movement signature. In some embodiments, a movement signature may be generated based on plotting or charting one or more of position data, rotation data, and/or acceleration data, captured from one or more of the plurality of sensors, over time. For instance, a movement signature may comprise a composite of multiple sensors' position, orientation, and tri-axial accelerometer data.
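As a minimal sketch of how such a signature might be assembled, assuming each sensor reports a nine-channel vector (three position, three rotation, and three acceleration values) at each timestep, the Python below stacks per-sensor channels into a single time-indexed array; the channel layout and function name are illustrative assumptions rather than a prescribed format.

    import numpy as np

    def generate_movement_signature(sensor_frames):
        """
        sensor_frames: list of per-timestep dicts mapping a sensor name to a
        9-element vector (x, y, z position; three rotation values; ax, ay, az).
        Returns a (timesteps, sensors * 9) array that can be plotted against
        time or treated as a movement signature.
        """
        sensor_names = sorted(sensor_frames[0].keys())
        rows = []
        for frame in sensor_frames:
            row = np.concatenate(
                [np.asarray(frame[name], dtype=float) for name in sensor_names]
            )
            rows.append(row)
        return np.vstack(rows)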
Using data from a movement library, such as movement signatures, an activity classification model may be trained using machine learning techniques. For instance, inputting sensor data, VR avatar skeletal data, and/or movement signatures, a trained model can classify movement as a particular activity that was documented in the movement library previously.
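As a hedged sketch of such training, assuming movement signatures are two-dimensional arrays of sensor channels over time, the example below resamples each signature to a fixed length and fits a generic classifier; the choice of a random forest and the resampling scheme are illustrative assumptions, not a required design.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def resample_signature(signature, length=100):
        """Linearly resample a (timesteps, channels) signature to a fixed length
        so signatures of different durations can be compared and learned from."""
        t_old = np.linspace(0.0, 1.0, signature.shape[0])
        t_new = np.linspace(0.0, 1.0, length)
        return np.column_stack(
            [np.interp(t_new, t_old, signature[:, c]) for c in range(signature.shape[1])]
        )

    def train_activity_classifier(signatures, labels):
        """signatures: movement signatures from a movement library.
        labels: activity names such as 'brushing teeth' or 'hand washing'."""
        features = np.array([resample_signature(s).ravel() for s in signatures])
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(features, labels)
        return model

    def classify_movement(model, signature):
        """Label a new movement signature with the closest documented activity."""
        return model.predict([resample_signature(signature).ravel()])[0]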
As disclosed herein, a VR activity platform can classify movements, such as ADLs and essential activities, using a virtual reality system. Generally, a VR system may receive input from a plurality of sensors, generate a movement signature based on the input from the plurality of sensors, determine a movement classification based on the movement signature, and then provide the movement classification. The VR system may generate a movement signature based on the input from the plurality of sensors by, e.g., plotting time against sensor data, such as position data, rotation data, and acceleration data, from one or more of the plurality of sensors.
In some embodiments, movement classification may be performed by, e.g., using a trained machine learning model to generate data indicative of a classification of the movement signature based on a plurality of stored movement signatures. The model may be trained to receive the movement signature as input and output at least one movement classification describing the input movement signature. In some embodiments, determining the movement classification may be performed with data analytics techniques to generate data indicative of a classification of the movement signature based on a plurality of stored movement signatures.
In some embodiments, a movement library may comprise a visual representation of each movement, such as an avatar animation performing the movement. A movement library interface, e.g., a user interface, may render each movement animation when selected or may provide a demonstration activity to allow a patient to directly see and practice vital activities. Still, asking a therapy patient to mimic or mirror specific motions may be boring and may ultimately discourage focus on and completion of therapy, even when the patient knows how valuable practicing such actions may be for independent living. There further exists a need for providing VR therapy applications that help teach and develop motions for ADLs in an engaging manner.
VR therapy can be quite engaging when, e.g., using a VR activity to help develop coordination and strength to perform a real-world task. However, matching real-world tasks with motions performed in VR activities is not always easily done. For instance, practicing motions like feeding some seeds via spoon to a virtual squirrel may be a helpful exercise for brushing one's teeth. Motions like grabbing a spoon handle and lifting and lowering the spoon can be analogous and beneficial to grabbing and lifting a toothbrush. Small movements like those may be considered micromovements. In some embodiments, identifying activities helpful for practicing ADL-like motions may be performed by matching micromovements between VR activities and critical real-world activities.
Generally, micromovements may be considered smaller portions of a movement required to perform the movement. For instance, a hand-washing movement by a patient (or motion capture actor) may comprise soaping his hands, lowering his hands under a faucet, and rinsing his hands in separate steps. Micromovements may be referred to as smaller movements, sub-movements, partial movements, and/or small motions. In some embodiments, signatures of micromovements may be incorporated in a movement signature. In some embodiments, micromovements, such as a lather action, a lowering action, and a rinsing action, may each have their own movement signature, e.g., within a larger movement signature, such as hand washing.
In some embodiments, micromovements may be identified within a movement signature. Extracting a micromovement from a larger movement signature may comprise identifying breaks in the movement signature. In some embodiments, breaks may be identified by, e.g., abrupt changes in position, rotation, and/or acceleration in one or more directions. For instance, a large change in height position (e.g., y-position) may indicate a lifting or lowering micromovement. A large change in horizontal position (e.g., z-position) may indicate a pulling or pushing micromovement. In some embodiments, breaks may be identified by brief or instantaneous pauses in activity. In some embodiments, breaks may be identified by changes in patterns of motion, such as when oscillating starts or stops, or when raising or lowering starts/stops. In some embodiments, a machine learning model may be trained to identify changes between micromovements within a movement. Micromovements and their respective movement signatures (and portions of full movement signatures) may be recorded in a movement library.
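As an illustrative, non-limiting sketch of such break detection, assuming a single signature channel (for example, a hand sensor's y-position) sampled at a fixed rate, the Python below flags abrupt changes and brief pauses as candidate micromovement boundaries; the threshold values are assumptions chosen only for illustration.

    import numpy as np

    def find_breaks(channel, rate_hz=200.0, jump_thresh=0.5, pause_thresh=0.01, pause_len_s=0.25):
        """Return sample indices where a micromovement boundary may occur, based on
        abrupt per-sample velocity changes or sustained near-zero velocity (pauses)."""
        velocity = np.diff(np.asarray(channel, dtype=float)) * rate_hz
        breaks = set(np.where(np.abs(velocity) > jump_thresh)[0].tolist())
        window = max(1, int(pause_len_s * rate_hz))
        quiet = np.abs(velocity) < pause_thresh
        for i in range(len(quiet) - window):
            if quiet[i:i + window].all():
                breaks.add(i)
        return sorted(breaks)

    def split_micromovements(channel, rate_hz=200.0):
        """Split one signature channel into per-micromovement segments at detected breaks."""
        indices = [0] + find_breaks(channel, rate_hz) + [len(channel)]
        return [channel[a:b] for a, b in zip(indices[:-1], indices[1:]) if b - a > 1]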
As disclosed herein, a VR activity platform can identify and provide one or more therapeutic activities to encourage practicing of particularly challenging movements related to ADLs and activities essential for independent living. Generally, a VR platform may receive an input associated with a first movement, determine a plurality of micromovements based on the first movement, access a plurality of activities, e.g., from a movement library, each with one or more micromovements and compare the micromovements of the first movement with the micromovements of the activities to determine an activity that is a good match. In some embodiments, the VR platform may compare the plurality of micromovements with the one or more exercise micromovements associated with each of the plurality of activities and identify a subset of the plurality of activities based on the comparison of the plurality of micromovements with the one or more exercise micromovements associated with each of the plurality of activities, then provide the subset of the plurality of activities.
In some embodiments, there may be classifications, sub-classifications, as well as groups and sub-groups, and/or other hierarchies for classifying and grouping movements.
In some embodiments, the VR platform may then assign each of the subset of the plurality of activities a match score based on the comparison of the plurality of micromovements with the one or more exercise micromovements associated with each of the plurality of activities, rank the subset of the plurality of activities based on the match score, and select an activity of the plurality of activities based on the ranking of the subset of the plurality of activities before providing the selected activity of the plurality of activities. In some embodiments, the VR platform can capture movements and micromovements of a patient, e.g., within VR activities, and compare them to one or more exemplary movement signatures to identify potentially problematic ADLs and activities.
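As a hedged sketch of this matching and ranking step, the example below scores each candidate activity by the overlap between its exercise micromovement labels and the micromovements of the target movement; the Jaccard-style score scaled to 0-99, the micromovement labels, and the example catalog of activity names are illustrative assumptions.

    def match_score(target_micros, exercise_micros):
        """Illustrative match score: overlap of micromovement labels, scaled to 0-99."""
        target, exercise = set(target_micros), set(exercise_micros)
        if not target or not exercise:
            return 0
        return round(99 * len(target & exercise) / len(target | exercise))

    def rank_activities(target_micros, activities):
        """activities: mapping of activity name to its exercise micromovement labels.
        Returns (activity, score) pairs sorted by descending match score."""
        scored = [(name, match_score(target_micros, micros)) for name, micros in activities.items()]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    # Example: suggesting an activity to practice micromovements of hand washing.
    catalog = {
        "Start the Campfire": ["rubbing motion", "grip object", "lowering hands"],
        "Feed the Squirrels": ["grip object", "raise utensil", "tilt utensil"],
    }
    ranking = rank_activities(["rubbing motion", "lowering hands", "turn on/off faucet"], catalog)
    selected_activity = ranking[0][0]  # highest-scoring suggestion is provided to the patient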
In some embodiments, the first movement may be a movement classification determined by a trained machine learning model. The VR platform may, for instance, receive input from a plurality of sensors for a movement, generate a first movement signature based on the input from the plurality of sensors for the movement, and then pass the movement signature to a trained model. The trained model may generate data indicative of a classification of the first movement signature based on training from a plurality of stored movement signatures and output the movement classification. A movement signature, in some embodiments, may be determined based on sensor input by, e.g., charting time against position data, rotational data, and/or acceleration data. In some embodiments, micromovements may have signatures that may be found, e.g., within larger movement signatures when compared.
With a VR platform identifying and suggesting therapy activity to aid movements and micromovements in ADLs, a therapist may be able to better focus on the patient. A VR platform may also allow a patient to independently practice portions of a guided VR activity regimen outside of a therapist's office, e.g., at home under the supervision of a family member and/or a remote supervisor.
The above and other objects and advantages of the disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
Various systems and methods disclosed herein are described in the context of a therapeutic system for helping patients with physical, neurological, cognitive, and/or sensory impairments, but this application is only illustrative. Such a VR system may be suitable for wellness and athletic pursuits or guided activities and the like, for example, coaching athletics, training dancers or musicians, teaching students, and other activities. Such systems and methods disclosed herein may apply to various VR applications. Moreover, embodiments of the present disclosure may be suitable for augmented reality, mixed reality, and assisted reality systems.
In the context of the VR system, the word “therapy” may be considered equivalent to physical therapy, cognitive therapy, neurological therapy, sensory therapy, behavioral therapy, meditational therapy, occupational therapy, preventative therapy, assessment for therapies, and/or any other methods to help manage an impairment or condition, as well as a combination of one or more therapeutic programs. Such a VR system may be suitable with, for example, therapy, coaching, training, teaching, entertainment and other activities. Such systems and methods disclosed herein may apply to various VR applications.
In the context of the VR systems, the word “patient” may be considered equivalent to a subject, user, participant, student, etc. and the term “therapist” may be considered equivalent to doctor, physical therapist, clinician, coach, teacher, supervisor, or any non-participating operator of the system. A therapist may configure and/or monitor via a clinician tablet, which may be considered equivalent to a personal computer, laptop, mobile device, gaming system, or display. Some disclosed embodiments include a digital hardware and software medical device that uses VR for health care, focusing on physical and neurological rehabilitation. The VR device may typically be used in a clinical environment under the supervision of a medical professional trained in rehabilitation therapy.
In some embodiments, the VR device may be configured for personal use at home, e.g., with remote monitoring. A therapist or supervisor, if needed, may monitor the experience in the same room or remotely. In some cases, a therapist may be physically remote or in the same room as the patient. For instance, some embodiments may need only a remote therapist. Some embodiments may require a remote therapist with someone, e.g., a nurse or family member, assisting the patient to place or mount the sensors and headset and/or observe for safety. Generally, the systems are portable and may be readily stored and carried by, e.g., a therapist visiting a patient.
Movements such as brushing teeth may be incorporated into the VR system, e.g., as part of a movement library. In some embodiments, a movement library may include data describing proper motions for various ADLs and activities. In some embodiments, each movement of a movement library may have a movement signature. A movement library may comprise a user interface that provides an avatar animation of an ADL, e.g., scenario 100A. In some embodiments, scenario 100A may be a portion of a VR application or activity used to help train patients. Some embodiments may add a movement to a movement library through a process such as the steps depicted in
Scenario 100A of
Scenario 100A may be displayed to a patient view via the head-mounted display, e.g., “Patient View.” Scenario 100A may also be considered a user interface of the same VR application as depicted to a spectator, such as a therapist. For instance, a spectator, such as a therapist, may view scenario 100A and see a reproduction or mirror of a patient's view in the HMD, e.g., “Spectator View.” In some embodiments, scenario 100A may be displayed as another view via a VR application interface, e.g., exploring a movement library.
Patient View may be considered a view of the VR world from the VR headset. A VR environment rendering engine (sometimes referred to herein as a “VR application”) on a device, e.g., an HMD, such as the Unreal® Engine, may use the position and orientation data to generate a virtual world including an avatar that mimics the patient's movement and view. Unreal Engine is a software-development environment with a suite of developer tools designed for developers to build real-time 3D video games and applications, virtual and augmented reality graphics, immersive technology simulations, 3D videos, digital interface platforms, and other computer-generated graphics and worlds. A VR application may incorporate the Unreal Engine or another three-dimensional environment developing platform, e.g., sometimes referred to as a VR engine or a video game engine. Some embodiments may utilize a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device to render scenario 100A. For instance, a VR engine may be incorporated in one or more of head-mounted display 201 and clinician tablet 210 of
Spectator View, as seen, e.g., in scenario 100A, may be a copy of what the patient sees on the HMD while participating in a VR activity, e.g., Patient View. In some embodiments, Scenario 100A may be depicted on a therapist's tablet or display, such as clinician tablet 210 as depicted in
In some embodiments, to generate and animate an avatar based on user movements, generally, a VR system collects raw sensor data (e.g., position, orientation, and acceleration data) from patient movements, filters the raw data, passes the filtered data to an inverse kinematics (IK) engine, and then an avatar solver may generate a skeleton and mesh in order to render and animate the patient's virtual avatar. An avatar includes virtual bones and comprises an internal anatomical structure that facilitates the formation of limbs and other body parts. To animate the avatar, an avatar solver may employ inverse kinematics and a series of local offsets to constrain the skeleton of the avatar to the position and orientation of the sensors. The VR skeleton then deforms a polygonal mesh to approximate the movement of the sensors. In some embodiments, systems and methods of the present disclosure may animate an avatar by blending the nearest key poses in proportion to their proximity to the user's tracked position, e.g., vertex animation. In some embodiments, a combination of vertex animation and skeletal animation may be applied to render an avatar. In some embodiments, movements of a movement library, such as those depicted in scenario 100A, may be mirrored for left-handed or right-handed individuals.
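As a minimal sketch of blending the nearest key poses in proportion to their proximity to a tracked position, the following Python assumes each key pose is stored as an anchor position plus a vertex array; the inverse-distance weighting is one assumed scheme among many.

    import numpy as np

    def blend_nearest_poses(tracked_position, key_poses, k=2, eps=1e-6):
        """key_poses: list of (anchor_position, vertex_array) pairs.
        Blend the k key poses nearest to the tracked position, weighting each
        in inverse proportion to its distance from the tracked position."""
        tracked = np.asarray(tracked_position, dtype=float)
        distances = [np.linalg.norm(tracked - np.asarray(anchor, dtype=float))
                     for anchor, _ in key_poses]
        nearest = np.argsort(distances)[:k]
        weights = np.array([1.0 / (distances[i] + eps) for i in nearest])
        weights /= weights.sum()
        return sum(w * np.asarray(key_poses[i][1], dtype=float)
                   for w, i in zip(weights, nearest))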
In some embodiments, for example, a tooth-brushing activity may be performed by a mocap performer wearing an HMD and sensors of
Sensor data may be construed as one or more matrices of, e.g., position, orientation, and/or acceleration data as depicted in
As the mocap performer brushes her teeth, while wearing an HMD and sensors, the VR system may generate avatar skeletal data based on sensor data and may render and animate a virtual avatar mimicking her movements. This animation may be recorded to a movement library so that someone may access the library and view movements at a later time. In some embodiments, the VR system may render and animate a virtual avatar at a later time, e.g., when providing the movement to the movement library. The movement library may be stored with profile and/or application data, e.g., in a database, e.g., as depicted in
In scenario 100A, exemplary teeth brushing signature 110 corresponds to the teeth-brushing movement(s) depicted by the avatar illustration. For instance, a movement signature may be derived from captured sensor data to designate the movement. A movement signature may be generated based on plotting one or more of position, rotation, and acceleration, for one or more sensors, on a chart against another variable, e.g., time. For instance, for each unit of time, each of several sensors may produce a matrix of position, orientation, and acceleration data (e.g., as depicted in
In some embodiments, a movement may be classified using a machine learning algorithm based on a movement library. For instance, a trained model may receive input of new movement data (e.g., a movement signature) and output an appropriate movement classification. In some embodiments, a movement classifier model labels any movement that is input as a particular activity. In some embodiments, a movement classifier model is trained on (a portion of) a movement library.
Depicted in scenario 100A, avatar 111 performs at least two smaller movements (e.g., micromovements) including raising action 112 and brushing action 114. For instance, avatar 111 may be displayed as raising a toothbrush and brushing his or her teeth in separate steps. In some embodiments, micromovements may be incorporated in one movement signature. In some embodiments, micromovements, such as raising action 112 and brushing action 114, may each have their own movement signature. In some embodiments, micromovements may be identified within a movement signature. For instance, a VR application may identify breaks, changes in direction, changes in rotation, changes in acceleration, changes in angular acceleration, etc. of one or more body parts based on a movement signature and extract one or more micromovements based on the identified breaks.
An activity of daily living, such as teeth brushing depicted in scenario 100A, may involve movements that could prove difficult for some people experiencing one or more various impairments. VR applications may incorporate therapeutic activities to help patients learn, relearn, strengthen, and control body parts to better perform ADLs with which they may have difficulty. In some embodiments, a VR application may directly ask a patient to perform a movement from an ADL as an exercise. For instance, a VR activity may display avatar 111 brushing his or her teeth and ask a patient to mimic the movements of the avatar, e.g., step by step. In some embodiments, such an activity may be used for instructions for or evaluation of a therapy patient.
In some embodiments, a VR application may indirectly ask a patient to perform a micromovement from an ADL as an exercise. For instance, a VR activity may require movement that incorporates a micromovement from a specific ADL, e.g., brushing one's teeth as depicted in scenario 100A. In some embodiments, for example, a VR activity involving feeding a virtual bird with a spoon may be used to encourage motions similar to raising a toothbrush, e.g., raising action 112 depicted in scenario 100A. In some embodiments, a VR activity involving painting shapes like circles on a virtual canvas may be used to help exercise a patient's hand dexterity and control to help practice motions like brushing teeth, e.g., brushing action 114 depicted in scenario 100A. In some embodiments, a patient's movement signature for each session may be compared to an exemplary movement signature to identify improvement (or deterioration) of the patient's movements and his or her ability to perform everyday activities. For instance, if a patient's movement signature gets closer (e.g., better match) to the mocap movement signature, it may be a sign of improvement, development, and/or strengthening.
In some embodiments, the VR platform can capture movements and micromovements of a patient, e.g., within VR activities, and compare them to one or more exemplary movement signatures to identify potentially problematic ADLs and activities. For instance, in a VR activity involving feeding a virtual bird with a spoon, capturing a patient's micromovement of lifting the spoon may allow a comparison to a movement signature from motion capture in the movement library, which may reveal problems with the patient's movement, e.g., lifting a toothbrush, lifting a hairbrush, and/or raising a fork. In some embodiments, potential motion problems may be associated with each VR activity. In some embodiments, a movement library can help diagnose a patient with potential issues, e.g., based on their prior motions in VR activity, as well as identify and provide VR activities that may help a patient practice and exercise particular motions to help with problematic movements.
Movements such as hand washing may be incorporated into the VR system, e.g., as part of a movement library. In some embodiments, a movement library may include data describing proper motions for various ADLs and activities. In some embodiments, each movement of a movement library may have a movement signature. A movement library may comprise a user interface that provides an avatar animation of an ADL, e.g., scenario 100B. In some embodiments, scenario 100B may be a portion of a VR application or activity used to help train patients. Some embodiments may add a movement to a movement library through a process such as the steps depicted in
Scenario 100B of
As the mocap performer washes his hands, while wearing an HMD and sensors, the VR system may generate avatar skeletal data based on sensor data and may render and animate a virtual avatar mimicking his movements.
In scenario 100B, exemplary hand washing signature 120 corresponds to the hand washing movement(s) depicted by the avatar illustration. For instance, a movement signature may be derived from captured sensor data to designate the movement. Hand washing signature 120 of scenario 100B may be considered a representation of a corresponding movement signature based on changes of a hand on a vertical y-axis over time, which is not necessarily drawn to scale.
Depicted in scenario 100B, avatar 121 performs at least three smaller movements (e.g., micromovements) including lather action 122, lowering action 124, and rinsing action 126. For instance, avatar 121 may be displayed as soaping his hands, lowering his hands under a faucet, and rinsing his hands in separate steps. In some embodiments, micromovements may be incorporated in one movement signature. In some embodiments, micromovements, such as lather action 122, lowering action 124, and rinsing action 126, may each have their own movement signature. In some embodiments, micromovements may be identified within a movement signature. Looking at hand washing signature 120 of scenario 100B, featuring a graph of a y-position of a hand sensor, lather action 122 appears to match an initial oscillating portion of the curve, lowering action 124 appears to correlate to a long, steep part of the curve, and rinsing action 126 appears to match a latter oscillating portion of the curve. An alternative depiction or illustration of these micromovements may involve diagrams of velocity vectors to show the incremental motions or displacements that make up a movement signature.
An activity of daily living, such as hand washing depicted in scenario 100B, may involve movements that could prove difficult for some people experiencing one or more various impairments. VR applications may incorporate therapeutic activities to help patients exercise certain movements with which they may have difficulty. In some embodiments, a VR activity may display avatar 121 washing his hands and ask a patient to mimic the movements of the avatar, e.g., step by step. In some embodiments, such an activity may be used for instructions for or evaluation of a therapy patient.
In some embodiments, a VR activity may require movement that incorporates a micromovement from a specific ADL, e.g., washing one's hands as depicted in scenario 100B. In some embodiments, for example, a VR activity involving rubbing a stick to start a fire, e.g., at different speeds, may be used to encourage motions similar to lathering and/or rinsing one's hands, e.g., lather action 122 and rinsing action 126 depicted in scenario 100B. In some embodiments, a VR activity involving lifting and lowering a bird using both hands may be used to help exercise a patient's arm dexterity and control to help practice motions like lowering hands below a faucet, e.g., lowering action 124 depicted in scenario 100B.
In some embodiments, the VR platform may identify potentially problematic ADLs and activities based on comparing captured movements/micromovements of a patient, e.g., within VR activities, with one or more exemplary movement signatures from a movement library. For instance, in a VR activity involving rubbing a stick to start a fire, capturing a patient's micromovement of rubbing sticks together may allow a comparison to a movement signature from motion capture in the movement library, which may reveal problems with the patient's movement, e.g., lathering soap, drying hands, and/or coordinating hand motions.
Movements related to opening a refrigerator may be incorporated into the VR system, e.g., as part of a movement library. In some embodiments, a movement library may include data describing proper motions for various ADLs and activities. In some embodiments, each movement of a movement library may have a movement signature. A movement library may comprise a user interface that provides an avatar animation of an ADL, e.g., scenario 100C. In some embodiments, scenario 100C may be a portion of a VR application or activity used to help train patients. Some embodiments may add a movement to a movement library through a process such as the steps depicted in
Scenario 100C of
As the mocap performer opens a refrigerator door, while wearing an HMD and sensors, the VR system may generate avatar skeletal data based on sensor data and may render and animate a virtual avatar mimicking her movements.
In scenario 100C, exemplary refrigerator opening signature 130 corresponds to the refrigerator opening movement(s) depicted by the avatar illustration. For instance, a movement signature may be derived from captured sensor data to designate the movement. Refrigerator opening signature 130 of scenario 100C may be considered a representation of a corresponding movement signature based on changes of a hand on a horizontal z-axis over time, which is not necessarily drawn to scale.
Depicted in scenario 100C, avatar 131 performs at least two smaller movements (e.g., micromovements) including jerk action 132 and pulling action 134. For instance, avatar 131 may be displayed as jerking the refrigerator door open and gently pulling the refrigerator door further open in separate steps. In some embodiments, micromovements may be incorporated in one movement signature. In some embodiments, micromovements, such as jerk action 132 and pulling action 134, may each have their own movement signature. In some embodiments, micromovements may be identified within a movement signature. Looking at refrigerator opening signature 130 of scenario 100C, featuring a graph of a z-position of a hand sensor, jerk action 132 appears to match an initial steep part of the curve and pulling action 134 appears to correlate to a later less-steep part of the curve.
An activity of daily living, such as opening a refrigerator depicted in scenario 100C, may involve movements that could prove difficult for some people experiencing one or more various impairments. VR applications may incorporate therapeutic activities to help patients exercise certain movements with which they may have difficulty. In some embodiments, a VR activity may display avatar 131 opening a refrigerator and ask a patient to mimic the movements of the avatar, e.g., step by step. In some embodiments, such an activity may be used for instructions for or evaluation of a therapy patient.
In some embodiments, a VR activity may require movement that incorporates a micromovement from a specific ADL, e.g., opening a refrigerator as depicted in scenario 100C. In some embodiments, for example, a VR activity involving quickly pulling a rope, e.g., in a tug of war, may be used to encourage motions similar to jerking a door from a vacuum seal, e.g., jerk action 132 depicted in scenario 100C. In some embodiments, a VR activity involving opening doors as part of a hide-and-seek with woodland critters may be used to help exercise a patient's arm dexterity and control to help practice motions like gently pulling a door open, e.g., pulling action 134 depicted in scenario 100C.
In some embodiments, the VR platform may identify potentially problematic ADLs and activities based on comparing captured movements/micromovements of a patient, e.g., within VR activities, with one or more exemplary movement signatures from a movement library. For instance, in a VR activity involving pulling a rope, capturing a patient's micromovement of pulling quickly may allow a comparison to a movement signature from motion capture in the movement library, which may reveal problems with the patient's movement, e.g., opening doors, opening refrigerators, turning on faucets, pulling luggage and/or grabbing and retrieving objects.
Interface 150 of scenario 140 of
Interface 150 depicts patient profile 152 for “Jane Doe.” Patient profile 152 is shown to be documented with the patient having ADL goals of, e.g., teeth brushing, hand washing, and refrigerator opening. Patient profile 152 may include further impairment data, health data, VR activity data, and other relevant data. Patient profile 152 may be accessed and loaded, e.g., as a patient logs in to interface 150, e.g., the VR therapy platform. In some embodiments, loading patient profile 152 may be initiated by a therapist or supervisor.
Interface 150 further depicts VR activities 172, 174, 176, and 178. In some embodiments, VR activities 172, 174, 176, and 178 may be, e.g., applications, environments, activities, games, characters, sub-activities, tasks, videos, and other content. In scenario 140 of
In some embodiments, VR activities 172, 174, 176, and 178 may be selected as suggested or recommended for patient profile 152. For instance, interface 150 may analyze impairments of patient profile 152 and impairments of each of the VR activities/exercises in the system to determine which activities would be most appropriate for the patient. Selecting activities to present may be accomplished in several ways. Process 600 of
In some embodiments, the VR platform may identify potentially problematic ADLs and activities based on comparing captured movements/micromovements of a patient, e.g., within VR activities, with one or more exemplary movement signatures from a movement library. For instance, required movements in “Feed the Squirrels” may reveal problems with the patient's movement, e.g., lifting utensils, brushing teeth, and/or steadily coordinating hand movement. In some embodiments, a VR activity “Start the Campfire” may allow a comparison to a movement signature from motion capture in the movement library and reveal problems with the patient's movement, e.g., lathering soap, drying hands, and/or coordinating hand motions. In some embodiments, a VR activity requiring squeezing fruit in, e.g., “Grab the Fruit,” may reveal problems with the patient's movement, e.g., grabbing items, squeezing items, using doorknobs and handles, and/or grabbing and retrieving objects. In some embodiments, a VR activity requiring opening doors to find animals in, e.g., “Find the Woodland Friends,” may reveal problems with the patient's movement, e.g., opening doors, opening refrigerators, turning on faucets, pulling luggage and/or grabbing and retrieving objects.
Movement library data structure 200 depicts movement library 221. Movement library 221 includes activity categories such as hygiene activities 222 and kitchen activities 224. In some embodiments, movement library 221 may include categories for activities, e.g., based on importance to independent living. Within each category are activities 230, 240, 250, 260, 265, 270, 275, and 280. For instance, hygiene activities 222 includes activity 230, “hands washing,” activity 240, “hair washing,” activity 250, “brushing teeth,” and activity 260, “soaping body,” among others. Kitchen activities 224 includes activity 265, “refrigerator opening,” activity 270, “cupboard opening,” activity 275, “drinking a glass of water,” and activity 280, “taking medicine,” among others.
In some embodiments, a movement library data structure may include movements and smaller movements, e.g., sub-movements or micromovements, that may be required to carry out an activity. Movement library 221 includes micromovements linked with each movement of an activity. For instance, activity 250 is labeled with movement 251, “Brushing Teeth,” and includes micromovements such as micromovement 253, “Applying toothpaste,” micromovement 255, “Raising toothbrush,” micromovement 257, “Brushing motions,” and micromovement 259, “Turn on/off faucet.” Of course, each activity may have several micromovements associated with it. For example, teeth brushing may also involve gripping a toothbrush, squeezing toothpaste, rinsing a toothbrush, rinsing one's mouth out, and various brushing motions, as well as spitting and some movements which may or may not be practical to practice via a VR system. In some embodiments, micromovements may be identified within a movement signature, e.g., by plotting sensor data versus time. For instance, a VR application may identify breaks, changes in direction, changes in rotation, changes in acceleration, changes in angular acceleration, etc. of one or more body parts based on a movement signature and extract one or more micromovements based on the identified breaks.
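As a non-limiting sketch of how such a data structure might be organized, assuming a simple nested mapping of categories to activities and micromovements, the fragment below mirrors a portion of movement library 221; the schema itself (the dataclasses and field names) is an illustrative assumption.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Micromovement:
        name: str
        signature_id: str = ""  # reference to a stored movement signature, if any

    @dataclass
    class Activity:
        name: str
        micromovements: List[Micromovement] = field(default_factory=list)
        linked_exercises: List[str] = field(default_factory=list)  # VR activities practicing these motions

    # A fragment of a movement library resembling data structure 200.
    movement_library = {
        "hygiene activities": [
            Activity("brushing teeth", [
                Micromovement("applying toothpaste"),
                Micromovement("raising toothbrush"),
                Micromovement("brushing motions"),
                Micromovement("turn on/off faucet"),
            ]),
        ],
        "kitchen activities": [
            Activity("refrigerator opening", [
                Micromovement("initial door pull"),
                Micromovement("open fridge door"),
            ]),
        ],
    }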
Some micromovements may be shared among several activities. For instance, some general micromovements like turning on or off a faucet may be shared among hand washing, teeth brushing, shampooing/showering, drinking water, and more. Likewise, micromovements for raising one's arms for hair washing and reaching for a high object from a kitchen cupboard may be very similar and may invoke similar VR activities to help practice. Even within activity 230, “hand washing,” micromovements of lathering one's hands and rinsing one's hands are very similar and the motions may be practiced together within one VR activity, at times.
In some embodiments, movement library 221 may include micromovements like “applying soap,” “lathering,” “rinsing hands,” and “turning on/off a faucet,” among others, for activity 230, “hands washing.” In some embodiments, movement library 221 may include micromovements like “pour shampoo/soap,” “lathering scalp,” “arm raising,” “rinse hair,” and “turning on/off the shower,” among others, for activity 240, “hair washing.” In some embodiments, movement library 221 may include micromovements such as “initial door pull,” “open fridge door,” “grab and lift object,” “close fridge door,” among others, for activity 265, “refrigerator opening.” In some embodiments, movement library 221 may include micromovements such as “lifting a glass of water,” “drinking from the glass,” “lowering a glass of water,” among others, for activity 275, “drinking a glass of water.” In some embodiments, movement library 221 may include micromovements like “identifying a (correct) medicine bottle,” “grabbing the bottle,” “opening the bottle,” “pouring the medicine,” among others, for activity 280, “taking medicine,” which may be considered an IADL.
Identifying micromovements associated with movements and activities may be valuable when selecting VR therapy activities to help strengthen and improve micromovements. In some embodiments, a movement library data structure may include links between activities that may allow a patient to practice micromovements associated with an ADL, e.g., activity 230, “hands washing.” For instance, activity 174, “Start the Campfire,” of
In some embodiments, a movement library data structure may be stored in or with application and activity data in a database, e.g., as depicted in
At any given moment in time, each sensor may capture and transmit sensor data including a position, e.g., in the form of three-dimensional coordinates, a rotational measurement around each of the three axes, and/or three dimensions of acceleration data. In some embodiments, each sensor may transmit sensor data at a predetermined frequency, such as 200 Hz. In some embodiments, each sensor may capture and transmit sensor data at different intervals, such as 15-120 times per second.
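As an illustrative sketch of how an individual sensor reading with these fields might be represented, the dataclass below captures position, rotation, and acceleration along with a timestamp; the field names and the sample-rate constant are assumptions used only for illustration.

    from dataclasses import dataclass
    from typing import Tuple

    SAMPLE_RATE_HZ = 200  # illustrative; some sensors may report 15-120 samples per second

    @dataclass
    class SensorSample:
        sensor_id: str
        timestamp: float                          # seconds since the start of capture
        position: Tuple[float, float, float]      # three-dimensional coordinates
        rotation: Tuple[float, float, float]      # rotational measurement around each axis
        acceleration: Tuple[float, float, float]  # three dimensions of acceleration data

    def sample_interval_seconds(rate_hz: float = SAMPLE_RATE_HZ) -> float:
        """Time between consecutive samples for a given reporting frequency."""
        return 1.0 / rate_hz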
Some embodiments may utilize a VR engine to perform one or more parts of process 300, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, a VR engine may be incorporated in one or more of head-mounted display 201 and clinician tablet 210 of
At step 302, a first sensor captures a movement performed by a user. A first sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 304, a second sensor captures the movement performed by a user. A second sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 306, an N sensor captures the movement performed by a user. An N sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 308, the VR engine receives movement input from a plurality of sensors. Generally, each sensor may detect position and orientation (PnO) data and transmit such data to the HMD for processing. In some embodiments, each sensor may detect and transmit acceleration data e.g., based on inertial measurement units (IMUs), described in
At step 310, the VR engine generates a movement signature based on the movement input. Generally, generating a movement signature comprises generating a visual or mathematical representation of data describing a movement. For instance, a movement signature may be a graph of sensor data over time. In some embodiments, generating a movement signature may include steps of plotting one or more portions of sensor data as graphs and generating a movement signature based on the graphs. Process 400 of
At step 312, the VR engine accesses a plurality of stored movement signatures. In some embodiments, a movement library may store a plurality of movement signatures. For instance, movement library 221 in data structure 200 of
At step 314, the VR engine compares the generated movement signature to stored movement signatures. Generally, the VR engine may compare the generated movement signature to the stored movement signatures and classify the movement signature based on the comparison. In some embodiments, the generated movement signature may be compared to each of the stored movement signatures and a movement match score may be determined. For instance, process 320 of
At step 315, the VR engine determines whether the movement fits a prior movement classification. For instance, in some embodiments, a movement match score may be determined based on the comparison between the generated movement signature and each of the stored movement signatures, and the highest movement match score will determine the movement classification. In some embodiments, if the highest movement match score is not above a threshold, e.g., 75 on a scale of 0-99, then the highest movement match score may not be a good fit or sufficient match. For example, a patient with a movement disorder may perform a teeth-brushing motion and, due to tremors, may produce a movement match score of 77 with a motion capture teeth-brushing movement signature, which may be sufficient. In some cases, a patient rehabilitating a serious shoulder injury may perform a motion of washing his hair and may produce a movement match score of 65 with a motion capture movement signature, which may be insufficient to classify the patient's movement as hair washing. In some embodiments, other statistical analyses may be used to determine whether the highest-ranked match score is or is not a good fit. In some embodiments, a trained model may determine if the best matching classification is a good enough fit or requires a new classification.
If, at step 315, the VR engine determines the movement fits a prior movement classification, at step 317, the VR engine classifies the movement in a prior classification. For instance, in some embodiments, if the highest movement match score is above a threshold, e.g., 76 on a scale of 0-99, then the highest movement match score may be deemed a good fit and/or a sufficient match. For example, a patient recovering from a stroke may perform a refrigerator-door-opening motion and may produce a movement match score of 78 with a motion capture movement signature, which may be sufficient to classify the patient's movement. In some embodiments, a trained model may determine if the best matching classification is a good enough fit for the classification.
If, at step 315, the VR engine determines the movement does not fit a prior movement classification, at step 319, the VR engine generates a new movement classification. In some cases, the best match is not sufficient as a classification. For instance, in some embodiments, if the highest movement match score is below a threshold, e.g., 67 on a scale of 0-99, then the highest movement match score may not be deemed a good fit and/or a sufficient match. For example, a motion capture actor may perform a motion for using hand sanitizer for her hands and may produce a movement match score of 63 with a motion capture movement signature for hand washing. Sanitizing one's hands may use very similar movements to hand washing, but if a match score is below a threshold, the motion signature may require a new classification. In some embodiments, a trained model may determine if the best matching classification is not a good enough fit and requires a new classification.
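As a minimal sketch of the decision made at steps 315, 317, and 319, the snippet below accepts the best-matching prior classification when its match score meets a threshold and otherwise signals that a new classification is needed; the threshold of 75 follows the example above and is otherwise an assumption.

    def classify_or_create(best_label, best_score, threshold=75):
        """Decide whether the highest-scoring stored classification (score on a 0-99
        scale) is a sufficient fit or whether a new movement classification is needed."""
        if best_score >= threshold:
            return best_label  # step 317: classify the movement in the prior classification
        return f"new classification (near '{best_label}')"  # step 319: generate a new classification

    # Example following the text: a tremor-affected teeth-brushing motion scoring 77 is
    # accepted, while a hand-sanitizing motion scoring 63 against hand washing is not.
    assert classify_or_create("brushing teeth", 77) == "brushing teeth"
    assert classify_or_create("hand washing", 63).startswith("new classification")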
Some embodiments may utilize a VR engine to perform one or more parts of process 320, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device.
At step 322, a VR engine receives a new movement signature, e.g., as input. A movement signature may be considered a visual, graphical, or mathematical representation of data describing a movement. For instance, a movement signature may be a graph of sensor data, or VR avatar data, over time. Process 400 of
At step 324, the VR engine accesses a plurality of stored movement signatures. In some embodiments, a movement library may store a plurality of movement signatures. For instance, movement library 221 in data structure 200 of
At step 326, the VR engine compares new movement signature to a plurality of stored movement signatures. In some embodiments, the generated movement signature may be compared to each of the stored movement signatures and a movement match score may be determined. In some embodiments, a movement match score may be determined based on correlation of the generated movement signature with each of the stored movement signatures. For instance, a match score of 0-99 may be attributed to a comparison of two movement signatures based on, e.g., a measure of correlation. For example, a patient with a movement disorder may perform a hand washing motion and produce a motion signature that correlates to a motion-captured hand-washing signature at a movement match score of, e.g., 85. In some embodiments, a movement match score may be determined based on regression and/or other statistical comparisons. In some embodiments, a movement match score may be determined based on image analytics of graphic representations of movements.
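A minimal, non-authoritative sketch of one correlation-based scoring approach is shown below; it assumes two movement signatures can be flattened into equal-length numeric arrays, and the mapping of a Pearson correlation onto the 0-99 scale is an illustrative assumption.

```python
import numpy as np

def movement_match_score(signature_a, signature_b):
    """Map the correlation of two equal-length signatures onto a 0-99 match score."""
    a = np.asarray(signature_a, dtype=float).ravel()
    b = np.asarray(signature_b, dtype=float).ravel()
    r = np.corrcoef(a, b)[0, 1]          # Pearson correlation in [-1, 1]
    return int(round(max(r, 0.0) * 99))  # clamp negative correlation, scale to 0-99
```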
At step 328, the VR engine classifies new movement signature based on the comparison to the stored signatures. For instance, in some embodiments, a movement match score may be determined based on the comparison between the generated movement signature and each of the stored movement signatures and the highest movement match score will determine the movement classification. In some embodiments, a patient with a movement disorder may perform a teeth-brushing motion and may produce a movement match score of 77 (e.g., on a scale of 0-99), when compared to a motion capture teeth-brushing movement signature and may produce a movement match score of, e.g., 45 when compared to a movement signature based on motion capture of a trained person eating soup. In some cases, a patient rehabilitating a serious shoulder injury may perform a motion of washing his hair and may produce a movement match score of 65 with a motion capture movement signature and may produce a movement match score of 45 when compared to a motion capture hand-washing movement signature, e.g., due to his limited motion. In some embodiments, other statistical analysis may be used to determine the highest movement match score. In some embodiments, this may be based on a count of multiple high-ranking match scores. For instance, there may be several movement signatures for hand washing and hair washing—each derived from trained motion capture—and while one movement signature for hand washing may generate a high movement match score, there may be several highly ranked movement match scores for hair washing (e.g., four, above a threshold of three) that indicate a classification of hair washing rather than hand washing.
At step 330, the VR engine provides the movement classification of the input movement signature. For instance, the VR engine may output or relay a movement classification based on the input movement signature (or other movement data). In some embodiments this may be the highest movement match score. In some embodiments, this may be based on a count of multiple high-ranking match scores of similar classifications.
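The following hypothetical sketch combines the two approaches described above for steps 328 and 330: classification by the single highest match score, or by counting high-ranking matches per class; the data layout and threshold values are assumptions.

```python
from collections import Counter

def classify_by_matches(scored, high_rank=70, min_votes=3):
    """scored is a list of (classification_label, match_score) pairs for the new signature."""
    votes = Counter(label for label, score in scored if score >= high_rank)
    if votes and max(votes.values()) >= min_votes:
        return votes.most_common(1)[0][0]            # class with the most high-ranking matches
    return max(scored, key=lambda pair: pair[1])[0]  # otherwise, the single best score wins
```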
Some embodiments may utilize a VR engine to perform one or more parts of process 340, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device.
At step 342, the VR engine receives a new movement signature. A movement signature may be considered a visual or mathematical representation of data describing a movement. For instance, a movement signature may be a graph of sensor data, or VR avatar data, over time. Process 400 of
At step 344, the VR engine accesses a classifier model that was trained from a plurality of stored movement signatures. For instance, a classifier model may be trained similarly to training movement classifier model 540 of process 500 as depicted in the flow diagram of
At step 346, the VR engine classifies new movement signature based on the trained movement classifier model. For instance, patient motion may be input and classified as a particular activity or movement. Generally, in some embodiments, movement signature data may be processed to yield one or more movement features and passed to movement classifier model for a classification. In some embodiments, a trained movement classifier model may evaluate movement features and present a classification label of a movement and/or activity.
At step 348, the VR engine provides movement classification for the input movement signature. For instance, the movement classifier model may output the determined movement or activity classification label based on the input data, in light of training from prior movement data. If a classification label for new movement data can be verified outside the system, the movement classifier model may be further updated with feedback and reinforcement for further accuracy.
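A rough sketch of process 340 under stated assumptions appears below; scikit-learn stands in for the trained movement classifier model, and the feature extraction and label names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(signature):
    """Summarize a time-by-channel signature into a fixed-length feature vector."""
    sig = np.asarray(signature, dtype=float)
    return np.concatenate([sig.mean(axis=0), sig.std(axis=0), sig.max(axis=0) - sig.min(axis=0)])

def classify_movement(model: RandomForestClassifier, signature):
    """model is assumed to have been trained on features of stored movement signatures."""
    features = extract_features(signature).reshape(1, -1)
    return model.predict(features)[0]  # e.g., an activity label such as "teeth_brushing"
```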
In some embodiments, movements of process 400 may be performed by patients learning or re-learning how to perform vital activities, e.g., ADLs. In some cases, motion capture sessions may be performed by a motion-capture performer—working with, e.g., a doctor, therapist, VR application designer, supervised assistant, actor, or other trained professional— performing the activity, e.g., brushing teeth or drinking water from a glass, to capture proper form of the motion in a movement library. For instance, VR hardware and sensors may be positioned on the body of the mocap performer, and she may perform the necessary steps of the ADL for capture by the VR system and incorporation into a movement library. Sensors of the VR system may capture a performed movement and the VR system may translate real-world movement to movement by a VR avatar in a virtual world. Real-world performance may be completed with or without props, inside or outside the usual setting (e.g., bathroom, kitchen). In some embodiments, a mocap performer may be able to pantomime an ADL using movements proper enough for capture and training. Process 400 may be performed on movement data generated by a patient or a mocap actor alike.
At step 402, a plurality of sensors captures position data of movement performed by a user. For instance, each sensor may capture position data for three axes. In some embodiments, for example, a position data matrix for sensor “n” may include a position on the x-axis, xn, a position on the y-axis, yn, and a position on the z-axis, zn.
At step 404, a plurality of sensors captures orientation data of movement performed by a user. For instance, each sensor may capture orientation and/or rotational data about three axes. In some embodiments, a sensor data matrix may include rotational measurements of yaw, pitch, and roll, e.g., ψ, θ, and φ.
At step 406, a plurality of sensors captures acceleration data of movement performed by a user. For instance, each sensor may capture IMU and/or acceleration data for three axes. In some embodiments, each sensor may include data from inertial measurement units (IMUs), described in
At step 408, the VR engine receives sensor data (e.g., position, orientation, and/or acceleration). For example, a receiver connected to the VR engine may receive signals transmitted by each sensor. In some embodiments, sensor data may be construed as one or more matrices of, e.g., position, orientation, and/or acceleration data as depicted in
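As one hypothetical arrangement of the measurements named in steps 402, 404, and 406, each sensor sample may be collected into a small matrix; the field order and units below are assumptions for illustration.

```python
import numpy as np

def sensor_sample_matrix(x, y, z, yaw, pitch, roll, ax, ay, az):
    """Arrange one sensor's sample at a single time step into a 3x3 matrix."""
    return np.array([
        [x,   y,     z],     # position on the x-, y-, and z-axes
        [yaw, pitch, roll],  # orientation about three axes (psi, theta, phi)
        [ax,  ay,    az],    # acceleration along three axes
    ])
```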
At step 410, the VR engine plots one or more of position, rotation, and acceleration data against time as one or more graphs. In some embodiments, the VR engine may plot one or more of position, rotation, and acceleration, for one or more sensors, on a chart against another variable, e.g., time. For instance, for each unit of time, each of several sensors may produce a matrix of position, orientation, and acceleration data (e.g., as depicted in
At step 412, the VR engine generates a movement signature based on the one or more graphs. Generally, generating a movement signature comprises generating a visual or mathematical representation of data describing a movement. For instance, a movement signature may be generated based on plotting one or more of position, rotation, and acceleration, for one or more sensors, on a chart against another variable, e.g., time. Some embodiments may generate a movement signature represented by a graph with many measurements (e.g., height, width, depth, pitch, roll, yaw, and acceleration in three directions) over a period of time, from a plurality of sensors (e.g., back, trunk, left and right hands, left and right arms, head, legs, knees, feet, etc.). In some embodiments, a movement signature may be generated based on collecting one or more of position, rotation, and acceleration data for a specific sensor or body part in a data structure with a corresponding time. In some embodiments, generating a movement signature may comprise performing one or more mathematical operations on one or more graphs, e.g., rounding, averaging, integration, differentiation, regression analysis, Fourier transforms, etc.
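A minimal sketch of step 412 under stated assumptions: each sensor stream is a time-indexed array of channels, and the movement signature is simply the channels stacked over time with optional smoothing; the smoothing window is a hypothetical parameter.

```python
import numpy as np

def generate_movement_signature(sensor_streams, smooth_window=5):
    """sensor_streams: dict of sensor name -> array of shape (T, channels), all with the same T."""
    stacked = np.hstack([np.asarray(s, dtype=float) for s in sensor_streams.values()])
    if smooth_window > 1:
        kernel = np.ones(smooth_window) / smooth_window  # simple moving-average smoothing
        stacked = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, stacked)
    return stacked  # the signature: shape (T, total channels across sensors)
```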
Movement signatures may be very basic representations of a movement or activity. For example, Teeth brushing signature 110 of
At step 414, the VR engine adds the movement signature to a movement library. A movement library may record movement data so that motions and activities may be used to help present movement form, as well as classify future movement. Movements and activities (e.g., ADLs) such as brushing teeth, washing hands, and opening a refrigerator door may be incorporated into a movement library. In some embodiments, a movement library may include data describing proper motions for various ADLs and activities. In some embodiments, each movement of a movement library may have a movement signature. A movement library may comprise a user interface that provides an avatar animation of an ADL, e.g., scenario 100A. In some embodiments, scenario 100A may be a portion of a VR application or activity used to help train patients.
In some embodiments, adding a movement signature to a movement library may comprise creating an activity entry for the movement signature and adding the entry to a movement library data structure. For instance, movement library 221 in data structure 200 of
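One hypothetical way to represent a movement library entry is sketched below; the field names (activity, signature, micromovements) are illustrative and not taken from the disclosure.

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class MovementLibraryEntry:
    activity: str                 # e.g., "hand washing"
    signature: np.ndarray         # time-by-channel movement signature
    micromovements: list = field(default_factory=list)  # e.g., ["lather", "lower", "rinse"]

movement_library: list = []

def add_to_library(entry: MovementLibraryEntry) -> None:
    """Step 414: append the new entry to the movement library."""
    movement_library.append(entry)
```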
In some embodiments, signatures of micromovements may be incorporated in a movement signature. In some embodiments, micromovements, such as lather action, lowering action, and rinsing action, may each have their own movement signature (e.g., within a larger movement signature). In some embodiments, micromovements may be identified within a movement signature. There are many ways to extract smaller movements and micromovements and process 450 is one example. Breaks in a movement signature may signify a change in micromovement as part of a larger movement.
Generally, process 450 of
At step 452, a first sensor captures a movement performed by a user. A first sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 454, a second sensor captures the movement performed by a user. A second sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 456, an Nth sensor captures the movement performed by a user. An Nth sensor may be one of a plurality of sensors placed on a user's body, e.g., as depicted in
At step 458, the VR engine receives movement input from a plurality of sensors. Generally, each sensor may detect position and orientation (PnO) data and transmit such data to the HMD for processing. In some embodiments, each sensor may detect and transmit acceleration data, e.g., based on inertial measurement units (IMUs), described in
At step 460, the VR engine generates a movement signature based on the movement input. Generally, generating a movement signature comprises generating a visual or mathematical representation of data describing a movement. For instance, a movement signature may be a graph of sensor data over time. In some embodiments, generating a movement signature may include steps of plotting one or more portions of sensor data as graphs and generating a movement signature based on the graphs. Process 400 of
At step 462, the VR engine identifies breaks in the movement signature. Breaks in a movement signature may signify a change in micromovement as part of a larger movement. In some embodiments, breaks may be identified by, e.g., abrupt changes in position, rotation, and/or acceleration in one or more directions. For instance, a large change in height position (e.g., y-position) may indicate a lifting or lowering micromovement. A large change in horizontal position (e.g., z-position) may indicate a pulling or pushing micromovement. In some embodiments, breaks may be identified by brief or instantaneous pauses in activity. In some embodiments, breaks may be identified by changes in patterns of motion, such as when oscillating starts or stops, or when raising or lowering starts/stops. In some embodiments, a machine learning model may be trained to identify changes between micromovements within a movement.
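A hedged sketch of step 462 follows, flagging breaks where the signal changes abruptly or briefly pauses; the jump and pause thresholds are assumptions.

```python
import numpy as np

def find_breaks(signature, jump_threshold=2.0, pause_threshold=0.05):
    """Return indices where frame-to-frame change is abruptly large or nearly zero."""
    sig = np.asarray(signature, dtype=float)
    step = np.linalg.norm(np.diff(sig, axis=0), axis=1)   # magnitude of change between frames
    jumps = np.where(step > jump_threshold)[0]            # abrupt changes in position/rotation
    pauses = np.where(step < pause_threshold)[0]          # brief or instantaneous pauses
    return sorted(set(jumps) | set(pauses))
```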
At step 464, the VR engine extracts a plurality of micromovements based on the identified breaks. For instance, as depicted in
At step 466, the VR engine outputs a first micromovement. For instance, as depicted in
At step 468, the VR engine outputs a second micromovement. For instance, as depicted in
At step 470, the VR engine outputs an Mth micromovement. For instance, as depicted in
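An illustrative continuation of steps 464 through 470, splitting a signature at the identified break indices so each segment can be output as a candidate micromovement; the minimum segment length is a hypothetical parameter.

```python
import numpy as np

def extract_micromovements(signature, break_indices, min_length=10):
    """Split the signature at the break indices; each surviving segment is a candidate micromovement."""
    sig = np.asarray(signature, dtype=float)
    bounds = [0] + sorted(int(i) for i in break_indices) + [len(sig)]
    segments = [sig[start:end] for start, end in zip(bounds[:-1], bounds[1:])]
    return [seg for seg in segments if len(seg) >= min_length]  # drop trivially short segments
```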
Training models to accurately classify movements may be accomplished in many ways. Some embodiments may use supervised learning where, e.g., a training data set includes labels identifying movements by an activity label. Some embodiments may use unsupervised learning that may classify movements in training data by clustering similar data. Some embodiments may use semi-supervised learning where a portion of labeled movement data may be combined with unlabeled data during training. In some embodiments, reinforcement learning may be used. With reinforcement learning, a predictive model is trained from a series of actions by maximizing a “reward function,” via rewarding correct labeling and penalizing improper labeling. Process 500 includes data labels 512, indicating a supervised or semi-supervised learning situation. A trained model may return a movement labeled by an activity category describing the movement or may simply be labeled as similar to other movements (or not).
Process 500 depicts training movement data 510 along with data labels 512. Training data for movement classification may be collected by manually labeling training movements with the activities they represent. In some embodiments, movement data may comprise VR sensor data, VR avatar skeletal data, movement signature data, and/or other data. Movement data without an activity classification, e.g., from a control group, may also be captured and used. In some embodiments, a capture session may be performed by a motion-capture performer (and supervised by a doctor, therapist, VR therapy application designer, etc.) performing the activity, e.g., brushing teeth, washing hands, drinking water, opening a refrigerator, showering, using a toilet, etc., to capture training data of motions for the VR system. In some circumstances, an analyst may mark captured movement data with a label of the corresponding activity, e.g., in near real time, to create the training data set. From the movement data collected, at least two groups of data may be created: training movement data 510 and test data 524.
In process 500, training movement data 510 is pre-processed using feature extraction to form training movement features 516. Pre-processing of training data is used to obtain proper data for training. In some embodiments, pre-processing may involve, for example, scaling, transforming, rotating, converting, normalizing, changing of bases, and/or translating coordinate systems in video and/or audio movement data. In some embodiments, pre-processing may involve filtering video and/or audio movement data, e.g., to eliminate video and/or audio movement noise.
After pre-processing, training movement features 516 are fed into Machine Learning Algorithm (MLA) 520 to generate an initial machine learning model, e.g., movement classifier model 540. Different types of movement classification problem spaces may require several different models. In some embodiments, MLA 520 may comprise multivariate time series classification models. In some embodiments, MLA 520 may comprise hierarchical multivariate time series classification models. In some embodiments, time series data may undergo feature engineering, such as calculating averages or other derived values of sensor data, e.g., so that MLA 520 is not limited solely to time-series-based algorithms. In some embodiments, MLA 520 uses numbers between 0 and 1 to indicate how closely the provided data, e.g., training movement features 516, matches a given movement classification. The more data that is provided, the more accurate MLA 520 will be in creating a model, e.g., movement classifier model 540.
Once MLA 520 creates movement classifier model 540, test data may be fed into the model to verify the system and evaluate how well model 540 performs based on metrics such as accuracy, precision, specificity, and/or recall. In some embodiments, an additional subset of test data may be reserved for hyperparameter tuning to maximize performance of model 540. In some embodiments, test data 524 is pre-processed to become a movement feature 536 and passed to movement classifier model 540 for a classification. Movement classifier model 540 identifies an activity classification label for the input test data. In some embodiments, each iteration of test data 524 is classified and evaluated for performance based on metrics such as accuracy, precision, specificity, and/or recall. For example, if expected label 550 is not correct, false result 552 may be fed as learning data back into MLA 520. If, after test data 524 is classified and reviewed, model 540 does not perform as expected (e.g., accuracy below 75%), then additional training data may be provided until the model meets the expected criteria. In some embodiments, a poorly performing model may necessitate replacement with a higher performing algorithm. In some embodiments, a reinforcement learning method may be incorporated with test data to reward or punish MLA 520.
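By way of a nonlimiting sketch of process 500, labeled movement signatures may be pre-processed into features, fit to a classifier, and evaluated against held-out test data; scikit-learn and the 75% accuracy check below are illustrative stand-ins for MLA 520 and the criteria described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def extract_features(signature):
    """Summarize a time-by-channel signature into a fixed-length feature vector."""
    sig = np.asarray(signature, dtype=float)
    return np.concatenate([sig.mean(axis=0), sig.std(axis=0), sig.max(axis=0) - sig.min(axis=0)])

def train_movement_classifier(signatures, labels, target_accuracy=0.75):
    """Fit a classifier on labeled movement signatures and check it against held-out test data."""
    X = np.stack([extract_features(s) for s in signatures])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    if accuracy_score(y_test, model.predict(X_test)) < target_accuracy:
        raise RuntimeError("Model below target accuracy; provide additional training data.")
    return model
```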
Once movement classifier model 540 performs at an acceptable level, new real-time data may be fed to the model, and determinations may be made as to whether the data can be classified as a particular activity with confidence. For instance, patient motion may be input and classified as a particular activity or movement. In some embodiments, such as process 500, new movement data 530 may be pre-processed as a movement feature 536 and passed to movement classifier model 540 for a classification as an activity. Movement classifier model 540 may evaluate movement feature 536 and present classification label 550 based on the input data. If a classification label 550 for new movement data 530 can be verified outside the system, model 540 may be further updated with feedback and reinforcement for further accuracy.
In some embodiments, user data (e.g., a patient profile) may be input with movement data. For instance, a patient's height, weight, sex, age, body mass, and impairment and/or health history may affect a movement. Likewise, a movement classifier model may take into account patient qualities that may help the model better predict and classify movement. For instance, data coming from patients with movement disorders may be used to identify a motion by another patient suffering from a similar movement disorder. In some embodiments, a movement classifier model may be used to help identify movement similar to patients suffering from particular disorders and suggest a diagnosis.
Some embodiments may utilize a VR engine to perform one or more parts of process 600, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, VR engine may be incorporated in one or more of head-mounted display 201 and clinician tablet 210 of
At step 602, the VR engine accesses the micromovements used in Activity 1. For instance, with steps 602, 604, and 606, there may be a library of VR activities and each VR activity is associated with at least one movement. Each movement may be broken down into one or more micromovements. In some embodiments, a movement library comprising various activities may be used to store activities, movements, and micromovements. For instance, movement library 221 in data structure 200 of
By way of a nonlimiting example, Activity 1 may be considered an activity titled “Feed the Squirrels” as depicted as VR activity 172 in
At step 604, the VR engine accesses a list of micromovements used in Activity 2. For example, Activity 2 may be considered a virtual therapy activity titled “Start the Campfire,” depicted as VR activity 174 of
At step 606, the VR engine receives a list of micromovements in Activity Q. In some embodiments, Activity Q may represent the last of Q activities potentially available in the movement library. Some embodiments may feature several available activities, while other embodiments may include dozens or more VR applications and/or activities. For instance, Activity Q may be considered an activity titled "Find the Woodland Friends," e.g., depicted as VR activity 178 of
At step 608, the VR engine receives an input associated with a first movement related to a specific activity, such as an ADL. This first movement may be a problematic movement for a patient or an essential motion that a particular patient needs to practice, e.g., with fun, challenging, immersive therapy. In some embodiments, this may be a simple selection on a user interface of an activity a patient needs to improve to gain more independence. In some embodiments, the input may be performance of a movement by a patient that falls short of a proper motion. In some embodiments, a patient profile may be used to identify and select a first movement. For instance, a patient may be participating in VR therapy and her profile is prepared for access by the VR engine. In scenario 140, patient profile 152 for "Jane Doe" is received. A patient, for example, may need to work on the ADLs of teeth brushing, hand washing, and refrigerator opening. These activities may then be added to the patient's profile 152 and prepared for access by the VR engine, as demonstrated in
In some embodiments, the first activity, e.g., from the input, may be divided (or previously divided) into a plurality of micromovements. An exemplary process for extracting micromovements may be found in process 300 of
At step 610, the VR engine compares the micromovements identified in the patient profile to each activity's list of included micromovements. In some embodiments, micromovements from each activity's list and the patient profile may be compared. In some embodiments, matches between a patient profile and an activity's movement list are identified and counted. In some embodiments, matches from each activity's list may be prioritized or weighted based on prevalence within the activity or in the patient profile. For instance, matches may be identified and weighted based on a tier of each micromovement (e.g., prioritization). A match of an activity prioritizing brushing teeth for a patient with minimal ability to do so may be weighted (e.g., 125%) more than a match with an activity focusing on working memory when the patient has only minor memory issues (e.g., 50%).
Comparisons may be performed in several ways. In some embodiments, micromovements from each activity's list and the patient profile may be correlated and a match score (e.g., 1-100) for the activity calculated. For instance, each micromovement included in an ADL in a patient profile and each VR activity may be given a numeric identifier and a weight value. In some embodiments, numeric identifiers and each corresponding weight value for a profile or VR activity may form matrices and the matrices are correlated. In some embodiments, numeric identifiers and each corresponding weight value for a profile or VR activity may be charted as coordinates and use linear regression to compare. In some embodiments, an index of every micromovement in all of the applications may be used, wherein each micromovement is associated with one or more applications and/or activities that may include the micromovement or ADL as a whole.
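One hypothetical implementation of the weighted comparison of step 610 is sketched below; the weight values, normalization, and 0-99 score mapping are assumptions.

```python
def activity_match_score(profile_weights, activity_micromovements):
    """profile_weights: dict of micromovement name -> priority weight (e.g., 1.25 or 0.5)."""
    if not profile_weights:
        return 0, 0
    matched = [m for m in activity_micromovements if m in profile_weights]
    weighted = sum(profile_weights[m] for m in matched) / sum(profile_weights.values())
    return len(matched), int(round(min(weighted, 1.0) * 99))  # (match count, 0-99 match score)
```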
In some embodiments, a comparison may be made by a trained model using, e.g., a neural network. For instance, a model may be trained to accept a patient profile as input and identify one or more VR activities suitable for the patient profile. Such a model may be trained by doctors and/or therapists who provide training data of profiles and identify which VR activities may be appropriate for use. Then, using a feedback loop, the model may be further trained with test patient profiles by, e.g., rewarding the neural network for correct predictions of suitable VR activities and retraining with incorrect predictions. In some embodiments, a comparison may use a combination of a trained model and comparative analysis.
At step 620, for each activity, the VR engine determines whether the activity's micromovements match the micromovements identified in the patient's profile, e.g., above a predetermined threshold. For instance, if the counted matches between an activity and the patient profile meet or exceed a threshold (e.g., five matches), the activity may be further analyzed. In some embodiments, e.g., when a weighting or correlation is used, the threshold may be a score from 1-100, such as 75, and the activity match score must meet or exceed the match score threshold. In some embodiments, the threshold may be based on the number of micromovements in a patient profile, e.g., the threshold may be two-thirds (e.g., 66%) of the total number of micromovements in a profile. In some embodiments, the match threshold may be one match, e.g., in situations where patients have only one or a few micromovements or ADLs.
If the VR engine determines an analyzed activity's micromovements do not match the micromovements identified in the patient's profile above a threshold, then, at step 622, the VR engine does not add activity to a subset of activities for further analysis. For instance, if the threshold is five matches and the activity only has two matches, the activity is discarded for now. In some embodiments, if the threshold is a match score of 75 (e.g., on a 0-99 scale) and the activity only has a match score of 40, the activity is discarded for now. In some embodiments, if all the activities are evaluated for matches and none meet the predetermined threshold, a second (lower) predetermined threshold may be used (e.g., half or two-thirds of the first threshold).
If the VR engine determines an activity's micromovements do match the micromovements identified in the patient's profile above a threshold, then, at step 624, the VR engine adds the matching activity to a subset of activities. For instance, if the threshold is four matches and the activity has six matches, the activity is added to the subset for further review. In some embodiments, if the threshold is a match score of 85 (on a 0-99 scale) and the activity has a match score of 92, the activity is added to the subset for further review.
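The thresholding of steps 620 through 624, with the fallback to a lower threshold mentioned above, might be sketched as follows; the threshold values are illustrative.

```python
def select_activity_subset(scored_activities, threshold=75):
    """scored_activities: dict of activity name -> match score (0-99)."""
    subset = {a: s for a, s in scored_activities.items() if s >= threshold}
    if not subset:  # no activity met the threshold; retry with a lower threshold
        subset = {a: s for a, s in scored_activities.items() if s >= threshold * 2 / 3}
    return subset
```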
At step 626, the VR engine accesses more information for each activity of subset of activities. For instance, additional information for an activity may comprise calendar data of when the activity was last accessed, compatibility data, activity version and update data, average activity duration data, activity performance data, and other data. For instance, some additional activity data may indicate recent participation in an activity and/or recent success/struggles with the activity. In some embodiments, additional information may include recommendation/weighting by a doctor or therapist indicating a preference to use (or not use) a particular motion required by one or more activities. In some embodiments, an activity may be eliminated from the subset if, e.g., a conflict arises based on additional activity data. In some embodiments, a warning of a potential conflict may be provided.
At step 628, the VR engine provides one or more activities from the subset of activities. For instance, scenario 140 of
Further disclosed herein is an illustrative medical device system including a virtual reality system to enable therapy for a patient. Such a VR medical device system may include a headset, sensors, and a therapist tablet, among other hardware, to enable exercises and activities to train (or re-train) a patient's body movements.
As described herein, VR systems suitable for use in physical therapy may be designed to be durable and portable and to allow for quick and consistent setup. In some embodiments, a virtual reality system for therapy may be a modified commercial VR system using, e.g., a headset and several body sensors configured for wireless communication. A VR system capable of use for therapy may need to collect patient movement data. In some embodiments, sensors, placed on the patient's body, can translate patient body movement to the VR system for animation of a VR avatar. Sensor data may also be used to measure patient movement and determine motion for patient body parts.
Clinician tablet 210 may be configured to use a touch screen, a power/lock button that turns the component on or off, and a charger/accessory port, e.g., USB-C. For instance, pressing the power button on clinician tablet 210 may power on the tablet or restart the tablet. Once clinician tablet 210 is powered on, a therapist or supervisor may access a user interface and be able to log in; add or select a patient; initialize and sync sensors; select, start, modify, or end a therapy session; view data; and/or log out.
Headset 201 may comprise a power button that turns the component on or off, as well as a charger/accessory port, e.g., USB-C. Headset 201 may also provide visual feedback of virtual reality applications in concert with the clinician tablet and the small and large sensors.
Charging headset 201 may be performed by plugging a headset power cord into the storage dock or an outlet. To turn on headset 201 or restart headset 201, the power button may be pressed. A power button may be on top of the headset. Some embodiments may include a headset controller used to access system settings. For instance, a headset controller may be used only in certain troubleshooting and administrative tasks and not necessarily during patient therapy. Buttons on the controller may be used to control power, connect to headset 201, access settings, or control volume.
The large sensor 202B and small sensors 202 are equipped with mechanical and electrical components that measure position and orientation in physical space and then translate that information to construct a virtual environment. Sensors 202 are turned off and charged when placed in the charging station. Sensors 202 turn on and attempt to sync when removed from the charging station. The sensor charger acts as a dock to store and charge the sensors. In some embodiments, sensors may be placed in sensor bands on a patient. Sensor bands 205, as depicted in
As shown in illustrative
HMD 201 is a piece central to immersing a patient in a virtual world in terms of presentation and movement. A headset may allow, for instance, a wide field of view (e.g., 110°) and tracking along six degrees of freedom. HMD 201 may include cameras, accelerometers, gyroscopes, and proximity sensors. VR headsets typically include a processor, usually in the form of a system on a chip (SoC), and memory. In some embodiments, headsets may also use, for example, additional cameras as safety features to help users avoid real-world obstacles. HMD 201 may comprise more than one connectivity option in order to communicate with the therapist's tablet. For instance, an HMD 201 may use an SoC that features WiFi and Bluetooth connectivity, in addition to an available USB connection (e.g., USB Type-C). The USB-C connection may also be used to charge the built-in rechargeable battery for the headset.
A supervisor, such as a health care provider or therapist, may use a tablet, e.g., tablet 210 depicted in
In some embodiments, such as depicted in
A wireless transmitter module (WTM) 202B may be worn on a sensor band 205B that is laid over the patient's shoulders. WTM 202B sits between the patient's shoulder blades on their back. Wireless sensor modules 202 (e.g., sensors or WSMs) are worn just above each elbow, strapped to the back of each hand, and on a pelvis band that positions a sensor adjacent to the patient's sacrum on their back. In some embodiments, each WSM communicates its position and orientation in real-time with an HMD Accessory located on the HMD. Each sensor 202 may learn its relative position and orientation to the WTM, e.g., via calibration.
The HMD accessory may include a sensor 202A that may allow it to learn its position relative to WTM 202B, which then allows the HMD to know where in physical space all the WSMs and WTM are located. In some embodiments, each sensor 202 communicates independently with the HMD accessory which then transmits its data to HMD 201, e.g., via a USB-C connection. In some embodiments, each sensor 202 communicates its position and orientation in real-time with WTM 202B, which is in wireless communication with HMD 201. In some embodiments HMD 201 may be connected to input supplying other data such as biometric feedback data. For instance, in some cases, the VR system may include heart rate monitors, electrical signal monitors, e.g., electrocardiogram (EKG), eye movement tracking, brain monitoring with Electroencephalogram (EEG), pulse oximeter monitors, temperature sensors, blood pressure monitors, respiratory monitors, light sensors, cameras, sensors, and other biometric devices. Biometric feedback, along with other performance data, can indicate more subtle changes to the patient's body or physiology as well as mental state, e.g., when a patient is stressed, comfortable, distracted, tired, over-worked, under-worked, over-stimulated, confused, overwhelmed, excited, engaged, disengaged, and more. In some embodiments, such devices measuring biometric feedback may be connected to the HMD and/or the supervisor tablet via USB, Bluetooth, Wi-Fi, radio frequency, and other mechanisms of networking and communication.
A VR environment rendering engine on HMD 201 (sometimes referred to herein as a “VR application”), such as the Unreal Engine™, uses the position and orientation data to create an avatar that mimics the patient's movement.
A patient or player may “become” their avatar when they log in to a virtual reality activity. When the player moves their body, they see their avatar move accordingly. Sensors in the headset may allow the patient to move the avatar's head, e.g., even before body sensors are placed on the patient. A system that achieves consistent high-quality tracking facilitates the patient's movements to be accurately mapped onto an avatar.
Sensors 202 may be placed on the body, e.g., of a patient by a therapist, in particular locations to sense and/or translate body movements. The system can use measurements of position and orientation of sensors placed in key places to determine movement of body parts in the real world and translate such movement to the virtual world. In some embodiments, a VR system may collect performance data for therapeutic analysis of a patient's movements and range of motion.
In some embodiments, systems and methods of the present disclosure may use electromagnetic tracking, optical tracking, infrared tracking, accelerometers, magnetometers, gyroscopes, myoelectric tracking, other tracking techniques, or a combination of one or more of such tracking methods. The tracking systems may be parts of a computing system as disclosed herein. The tracking tools may exist on one or more circuit boards within the VR system (see
Sensors 202 may be attached to body parts via band 205. In some embodiments, a therapist attaches sensors 202 to proper areas of a patient's body. For example, a patient may not be physically able to attach band 205 to herself. In some embodiments, each patient may have her own set of bands 205 to minimize hygiene issues. In some embodiments, a therapist may bring a portable case to a patient's room or home for therapy. The sensors may include contact ports for charging each sensor's battery while storing and transporting in the container, such as the container depicted in
As illustrated in
Once sensors 202 are placed in bands 205, each band may be placed on a body part, e.g., according to
Each of sensors 202 may be placed at any of the suitable locations, e.g., as depicted in
Generally, sensor assignment may be based on the position of each sensor 202. Sometimes, such as in cases where patients vary greatly in height, assigning a sensor merely based on height is not practical. In some embodiments, sensor assignment may be based on relative position to, e.g., wireless transmitter module 202B.
The arrangement shown in
One or more system management controllers, such as system management controller 912 or system management controller 932, may provide data transmission management functions between the buses and the components they integrate. For instance, system management controller 912 provides data transmission management functions between bus 914 and sensors 902. System management controller 932 provides data transmission management functions between bus 934 and GPU 920. Such management controllers may facilitate the arrangement's orchestration of these components, each of which may utilize separate instructions within defined time frames to execute applications. Network interface 980 may include an Ethernet connection or a component that forms a wireless connection, e.g., 802.11b, g, a, or n connection (WiFi), to a local area network (LAN) 987, wide area network (WAN) 983, intranet 985, or internet 981. Network controller 982 provides data transmission management functions between bus 984 and network interface 980.
Processor(s) 960 and GPU 920 may execute a number of instructions, such as machine-readable instructions. The instructions may include instructions for receiving, storing, processing, and transmitting tracking data from various sources, such as electromagnetic (EM) sensors 903, optical sensors 904, infrared (IR) sensors 907, inertial measurement units (IMUs) sensors 905, and/or myoelectric sensors 906. The tracking data may be communicated to processor(s) 960 by either a wired or wireless communication link, e.g., transmitter 910. Upon receiving tracking data, processor(s) 960 may execute an instruction to permanently or temporarily store the tracking data in memory 962 such as, e.g., random access memory (RAM), read only memory (ROM), cache, flash memory, hard disk, or other suitable storage component. Memory may be a separate component, such as memory 968, in communication with processor(s) 960 or may be integrated into processor(s) 960, such as memory 962, as depicted.
Processor(s) 960 may also execute instructions for constructing an instance of virtual space. The instance may be hosted on an external server and may persist and undergo changes even when a participant is not logged in to said instance. In some embodiments, the instance may be participant-specific, and the data required to construct it may be stored locally. In such an embodiment, new instance data may be distributed as updates that users download from an external source into local memory. In some exemplary embodiments, the instance of virtual space may include a virtual volume of space, a virtual topography (e.g., ground, mountains, lakes), virtual objects, and virtual characters (e.g., non-player characters “NPCs”). The instance may be constructed and/or rendered in 2D or 3D. The rendering may offer the viewer a first-person or third-person perspective. A first-person perspective may include displaying the virtual world from the eyes of the avatar and allowing the patient to view body movements from the avatar's perspective. A third-person perspective may include displaying the virtual world from, for example, behind the avatar to allow someone to view body movements from a different perspective. The instance may include properties of physics, such as gravity, magnetism, mass, force, velocity, and acceleration, which cause the virtual objects in the virtual space to behave in a manner at least visually similar to the behaviors of real objects in real space.
Processor(s) 960 may execute a program (e.g., the Unreal Engine or VR applications discussed above) for analyzing and modeling tracking data. For instance, processor(s) 960 may execute a program that analyzes the tracking data it receives according to algorithms described above, along with other related pertinent mathematical formulas. Such a program may incorporate a graphics processing unit (GPU) 920 that is capable of translating tracking data into 3D models. GPU 920 may utilize shader engine 928, vertex animation 924, and linear blend skinning algorithms. In some instances, processor(s) 960 or a CPU may at least partially assist the GPU in making such calculations. This allows GPU 920 to dedicate more resources to the task of converting 3D scene data to the projected render buffer. GPU 920 may refine the 3D model by using one or more algorithms, such as an algorithm learned on biomechanical movements, a cascading algorithm that converges on a solution by parsing and incrementally considering several sources of tracking data, an inverse kinematics (IK) engine 930, a proportionality algorithm, and other algorithms related to data processing and animation techniques. After GPU 920 constructs a suitable 3D model, processor(s) 960 executes a program to transmit data for the 3D model to another component of the computing environment (or to a peripheral component in communication with the computing environment) that is capable of displaying the model, such as display 950.
In some embodiments, GPU 920 transfers the 3D model to a video encoder or a video codec 940 via a bus, which then transfers information representative of the 3D model to a suitable display 950. The 3D model may be representative of a virtual entity that can be displayed in an instance of virtual space, e.g., an avatar. The virtual entity is capable of interacting with the virtual topography, virtual objects, and virtual characters within virtual space. The virtual entity is controlled by a user's movements, as interpreted by sensors 902 communicating with the system. Display 950 may display a Patient View. The patient's real-world movements are reflected by the avatar in the virtual world. The virtual world may be viewed in the headset in 3D and monitored on the tablet in two dimensions. In some embodiments, the VR world is an activity that provides feedback and rewards based on the patient's ability to complete activities. Data from the in-world avatar is transmitted from the HMD to the tablet to the cloud, where it is stored for later analysis. An illustrative architectural diagram of such elements in accordance with some embodiments is depicted in
A VR system may also comprise display 970, which is connected to the computing environment via transmitter 972. Display 970 may be a component of a clinician tablet. For instance, a supervisor or operator, such as a therapist, may securely log in to a clinician tablet, coupled to the system, to observe and direct the patient to participate in various activities and adjust the parameters of the activities to best suit the patient's ability level. Display 970 may depict a view of the avatar and/or replicate the view of the HMD.
In some embodiments, HMD 201 may be the same as or similar to HMD 1010 in
The clinician operator device, clinician tablet 1020, runs a native application (e.g., Android application 1025) that allows an operator such as a therapist to control a patient's experience. Cloud server 1050 includes a combination of software that manages authentication, data storage and retrieval, and hosts the user interface, which runs on the tablet. This can be accessed by tablet 1020. Tablet 1020 has several modules.
As depicted in
The second part is an application, e.g., Android Application 1025, configured to allow an operator to control the software of HMD 1010. In some embodiments, the application may be a native application. A native application, in turn, may comprise two parts, e.g., (1) socket host 1026 configured to receive native socket communications from the HMD and translate that content into web sockets, e.g., web sockets 1027, that a web browser can easily interpret; and (2) a web browser 1028, which is what the operator sees on the tablet screen. The web browser may receive data from the HMD via the socket host 1026, which translates the HMD's native socket communication 1018 into web sockets 1027, and it may receive UI/UX information from a file server 1052 in cloud 1050. Tablet 1020 comprises web browser 1028, which may incorporate a real-time 3D engine, such as Babylon.js, using a JavaScript library for displaying 3D graphics in web browser 1028 via HTML5. For instance, a real-time 3D engine, such as Babylon.js, may render 3D graphics, e.g., in web browser 1028 on clinician tablet 1020, based on received skeletal data from an avatar solver in the Unreal Engine 1016 stored and executed on HMD 1010. In some embodiments, rather than Android Application 1025, there may be a web application or other software to communicate with file server 1052 in cloud 1050. In some instances, an application of Tablet 1020 may use, e.g., Web Real-Time Communication (WebRTC) to facilitate peer-to-peer communication without plugins, native apps, and/or web sockets.
The cloud software, e.g., cloud 1050, has several different, interconnected parts configured to communicate with the tablet software: authorization and API server 1062, GraphQL server 1064, and file server (static web host) 1052.
In some embodiments, authorization and API server 1062 may be used as a gatekeeper. For example, when an operator attempts to log in to the system, the tablet communicates with the authorization server. This server ensures that interactions (e.g., queries, updates, etc.) are authorized based on session variables such as operator's role, the health care organization, and the current patient. This server, or group of servers, communicates with several parts of the system: (a) a key value store 1054, which is a clustered session cache that stores and allows quick retrieval of session variables; (b) a GraphQL server 1064, as discussed below, which is used to access the back-end database in order to populate the key value store, and also for some calls to the application programming interface (API); (c) an identity server 1056 for handling the user login process; and (d) a secrets manager 1058 for injecting service passwords (relational database, identity database, identity server, key value store) into the environment in lieu of hard coding.
When the tablet requests data, it will communicate with the GraphQL server 1064, which will, in turn, communicate with several parts: (1) the authorization and API server 1062; (2) the secrets manager 1058, and (3) a relational database 1053 storing data for the system. Data stored by the relational database 1053 may include, for instance, profile data, session data, application data, activity performance data, and motion data.
In some embodiments, profile data may include information used to identify the patient, such as a name or an alias. Session data may comprise information about the patient's previous sessions, as well as, for example, a "free text" field into which the therapist can input unrestricted text, and a log 1055 of the patient's previous activity. Logs 1055 are typically used for session data and may include, for example, total activity time, e.g., how long the patient was actively engaged with individual activities; activity summary, e.g., a list of which activities the patient performed, and how long they engaged with each one; and settings and results for each activity. Activity performance data may incorporate information about the patient's progression through the activity content of the VR world. Motion data may include specific range-of-motion (ROM) data that may be saved about the patient's movement over the course of each activity and session, so that therapists can compare session data to previous sessions' data.
In some embodiments, file server 1052 may serve the tablet software's website as a static web host.
In some embodiments, the activities and exercises may include gazing activities that require the player to turn and look. A gaze activity may be presented as a hide-and-seek activity, a follow-and-seek exercise, or a gaze and trigger activity. The activities may include sun rising activities that require the player to raise his or her arms. The activities may include hot air balloon exercises that require the player to lean and bend. The activities may include bird placing activities that require the player to reach and place. The exercises may include a soccer-like activity that requires a player to block and/or dodge projectiles. These activities may be presented as sandbox activities, with no clear win condition or end point. Some of these may be free-play environments presented as an endless interactive lobby. Sandbox versions of the activities may typically be used to introduce the player to the activity mechanics and allow the player to explore the specific exercise's unique perspective of the virtual reality environment. Additionally, the sandbox activities may allow a therapist to use objects to augment and customize therapy, such as with resistance bands, weights, and the like. After the player has learned how the exercise mechanics work, they can be loaded into a version of the activity with a clear objective. In these versions of the activity, the player's movements may be tracked and recorded. After completing the prescribed number of repetitions (reps) of the therapeutic exercise (a number that is adjustable), the activity may come to an end and the player may be rewarded for completing it. In some embodiments, activities and exercises may be dynamically adjusted during the activity to optimize patient engagement and/or therapeutic benefits.
The transition from activity to activity may be seamless. Several transition options may be employed. The screen may simply fade to black and slowly reload through a fade from black. A score board or a preview of the next exercise may be used to distract the player during transition. A slow and progressive transition ensures that the patient is not startled by a sudden change of their entire visual environment. This slow progression may limit any disorientation that might occur from a total, quick change in scenery while in VR.
At the end of an activity or exercise session, the player may be granted a particular view of the VR environment, such as a bird's-eye view of the world or area. From this height, players may be offered a view of an ever-changing village. Such changes in the village are a direct response to the player's exercise progression, and therefore offer a visual indication of progression. These changes will continue as the player progresses through the activities to provide long-term feedback visual cues. Likewise, such views of the village may provide the best visual indicia of progress for sharing with family members or on social media. Positive feedback from family and friends is especially important when rehab progress is limited. These images will help illustrate how hard the player has been working and they will provide an objective measure of progress when, perhaps, physically the player feels little, if any, progress. Such features may enhance the positivity of the therapy experience and help fulfill the VR activities' overall goal of being as positive as possible while encouraging continued participation and enthusiasm.
While the foregoing discussion describes exemplary embodiments of the present invention, one skilled in the art will recognize from such discussion, the accompanying drawings, and the claims, that various modifications can be made without departing from the spirit and scope of the invention. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope and spirit of the invention should be measured solely by reference to the claims that follow.