The present invention relates to systems and methods for physical training. More specifically, the present invention is related to the use of sensor assisted systems and methods for physical training and rehabilitation.
Millions of people around the world require physical rehabilitation (injured athletes, post-surgery patients, etc.). Most rehabilitation activities require repetitive exercises, where proper temporal/spatial execution is the key to a faster recovery. This is also applicable to refining motions and techniques in sports (e.g. a golf swing, karate moves, etc.). Common rehabilitation practice requires patients to visit the physiotherapist (PT)'s office multiple times a week, as well as to exercise at home. While physical rehabilitation is successful for the majority of patients, there are currently multiple issues with the overall activities that are troublesome for both patients and healthcare providers. For example, going to the PT's office is inconvenient and time consuming. PTs overloaded with patients often end up supervising multiple patients simultaneously, which is stressful for the healthcare professional and can decrease the quality of treatment for certain patients. Additionally, PTs currently must record and document patients' progress manually, which is a time consuming and inconvenient activity for most providers, and they could benefit greatly from an automatic, accurate way to perform such tasks.
Regarding home exercising, patients must learn (from the PTs) how to perform each exercise, which can be time consuming and, in many cases, confusing. Moreover, patients' compliance with home exercises is usually below an ideal 100%, among other reasons because they cannot remember how to perform the exercises and/or because they simply lack motivation. Missed or skipped home exercises contribute to delays in a patient's recovery and can diminish the overall quality of the rehabilitation program. Documentation of a patient's progress (for follow-ups, PT-physician communication, insurance purposes, etc.) is time consuming and inconvenient for the PT, and measurements are often not accurate or consistent enough.
Attempts have been made to overcome these problems (and others related to physical rehabilitation), ranging from online or offline instructional videos all the way to replacing the human physical trainer altogether with virtual trainers, cameras, motion tracking, etc. For all these alternative technologies, it is extremely important to have an accurate system for movement/motion tracking of the anatomical structure of the user, and also a system which can guide the user through a set of exercises involving one or more body parts and provide feedback on the actions performed. Proper registration of the sensors to the body part being tracked is a key aspect of obtaining the desired results. The present day systems and methods available for registering sensors to body parts are either very complicated, not accurate, or not user friendly. In the case of physical rehabilitation, the user may have limitations in terms of body part movement and, in such cases, the system must offer user friendly steps for sensor registration. At the same time, the system must have a user interface which can provide interactive guidance and feedback to the user without necessarily needing the user to be in close proximity to the system display. The present day systems and methods for physical training do not offer effective three-dimensional visual guidance to users. Moreover, in most present day applications, network connectivity is a must, as the system needs support from a remote server.
Consequently, there exists in the art a long-felt need for a system and method for imparting physical training which can overcome the above mentioned shortcomings of the prior art.
It is, therefore, an object of the present invention to provide a system and method for physical rehabilitation and motion training.
Yet another object of the present invention is to provide a system and method for real time motion tracking of anatomical parts through wireless sensors.
Another object of the present invention is to provide a system and method for easy registration of sensors to anatomical parts of a user for motion tracking.
Yet another object of the present invention is to provide a highly accurate sensor calibration process.
Still another object of the present invention is to provide a method for registering wearable sensors to body parts using an external device.
Another object of the present invention is to provide a highly interactive user interface for physical rehabilitation and motion training.
Yet another object of the present invention is to provide a user interface for multidimensional display of instructions and feedback for physical rehabilitation and motion training.
A further object of the present invention is to provide a user interface which requires minimal physical contact from the user for receiving instructions.
Still another object of the present invention is to provide a system and method for real time localized processing of physical rehabilitation and motion training data, which can work as a standalone system and does not require network connections with other remote systems or servers.
Another object of the present invention is to provide one or more views of the movements of a particular anatomical part of the user being monitored for physical rehabilitation and motion training.
A further object of the present invention is to provide a system and method for monitoring of an anatomical part of a user, allowing visualization from multiple views and various angles and different distances.
Yet another object of the present invention is to provide a smart virtual camera which can be auto-controlled or controlled by the user or by a third party for obtaining optimum views of one or more anatomical parts of a user for physical rehabilitation and motion training.
Another object of the present invention is to provide a system and method for identifying the location and orientation of a wearable sensor based on the motion of the body part to which it is attached or based on the type of exercise selected.
A further object of the present invention is to provide feedback to the user in terms of physical stimulus against correct or wrong motion of an anatomical part.
Yet another object of the present invention is to provide a system having contextual awareness of the anatomy of the user based on the context and the exercises selected.
Still another object of the present invention is to provide a system and method for calibration of sensors with the help of a mobile computing device.
Details of the foregoing objects and of the invention, as well as additional objects, features and advantages of the invention will become apparent to those skilled in the art upon consideration of the following detailed description of the preferred embodiments exemplifying the best mode of carrying out the invention as presently perceived.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
The present invention is directed to a sensor assisted physical training and rehabilitation system and method. The system, hereinafter referred to as Smart Trainer, comprises one or more sensors (custom made, although some existing commercial products such as smart watches, e.g. Apple Watch, Samsung Gear 2, etc., and/or smart phones could also be used as ‘sensors’) which a user can wear on a body part to accurately capture and pre-process motion; a mobile computing device (such as a smartphone); and an application or app (based on Android, Windows, iOS or any other operating system) operably installed in the mobile computing device, which provides a unique experience through real time guidance with a 2D and full 3D graphical user interface (GUI), a smart UX/UI, and audio-visual and tactile instructions/feedback. The system can further comprise an optional back-end cloud infrastructure implemented for data storage, statistical analysis, neural networks and data mining. It can also implement an optional web-based application for account management.
The Smart Trainer uses the sensors to dynamically obtain position, orientation, and motion parameters (e.g. speed, acceleration, etc.) of the user's body parts, and analyzes the errors or deviations of each joint, limb, part, etc. compared to a predefined sequence of movements. In addition to the raw values collected from the sensors, Smart Trainer uses a calculus and prediction engine to estimate the range of motion, acceleration, force, metabolism, calories and activity of the main muscle groups involved in the exercise. Using some or all of these parameters, the Smart Trainer presents useful information to the user in real-time (text, numbers, color coded parameters, 2D and 3D graphics, audio, tactile indications, etc.) to show users how to improve their movements, in the way a coach or health care professional would, but based on quantitative analysis as opposed to expert opinion alone.
The Smart Trainer system provides users contextual smart help to control their performance not only during physical rehabilitation but also during other types of physical activities (e.g. sports, fitness, etc.). It takes into account the type of exercise that the user is performing (e.g. stretching, jogging, weight lifting, squatting, flexing, etc.) as well as the body's and limbs' position/orientation, movements, and acceleration.
The Smart Trainer system can behave as an expert (a physician, PT or a personal trainer, depending on the type of use) assessing and indicating corrections in a similar way a person would do, based on its capability of changing the virtual view of a 3D scene/rendering, showing/hiding tools and graphics, and providing custom guides to show the correct posture and movements versus the user's real posture and movements. The Smart Trainer also shows virtual 3D paths in the virtual scene to teach and to guide the user to the next step of the exercise.
The Smart Trainer system tracks body joints and parts in 3D, using gyros, accelerometers, and a compass (9 degree-of-freedom sensors) and integrating all values through data fusion. The Smart Trainer system aims to help teach, guide, correct, and document users' movements in real time for health and fitness applications. Moreover, the Smart Trainer system behaves as a smart assistant that does all of this while showing the most useful information for each instance, in a smart way, without requiring user interaction while the user performs any kind of exercise or movement in any kind of activity.
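The specification names the fused sensor types but does not fix a particular fusion algorithm. As a minimal, illustrative sketch (not the claimed implementation), a complementary filter is one common way to combine a gyroscope rate with an accelerometer-derived tilt; all function names and readings below are hypothetical:

```python
import math

def accel_tilt(ax, ay, az):
    """Tilt (pitch) angle in radians derived from a 3-axis accelerometer
    reading; valid while the sensor is not accelerating strongly."""
    return math.atan2(ax, math.sqrt(ay * ay + az * az))

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the integrated gyroscope rate (fast, but drifting) with the
    accelerometer angle (noisy, but drift-free); alpha weights the gyro path."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated stream: limb held at a constant ~0.1 rad tilt while the
# gyroscope reports a small spurious drift rate.
angle = 0.0
for _ in range(200):
    drift_rate = 0.005  # rad/s of pure gyro drift (true rotation is zero)
    acc_angle = accel_tilt(math.sin(0.1), 0.0, math.cos(0.1))  # ~0.1 rad
    angle = complementary_filter(angle, drift_rate, acc_angle, dt=0.01)
# `angle` settles near the accelerometer's 0.1 rad despite the gyro drift
```

In a full 9-DOF fusion the compass would similarly correct heading drift; quaternion-based filters (e.g. Madgwick- or Mahony-style) are typical in practice.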
The Smart Trainer system enables a user to decrease the level of attention the user needs to pay to the user interface while carrying out an exercise. The Smart Trainer helps the user to follow directions on how to perform an exercise (motion or combination of movements) by providing an intuitive way (3D and/or 2D and/or audio and/or tactile) without needing to physically reach for any conventional system-input type interface.
One exemplary non-transitory computer-readable storage medium is also described, the non-transitory computer-readable storage medium having embodied thereon a program executable by a processor to perform an exemplary method for assisting a user in physical rehabilitation and exercising. The exemplary program method describes attaching a sensor module over an anatomical part of the user. The wearable sensor module comprises one or more sensors and is configured to acquire and transmit a first set of data generated by the sensors. The program method further describes processing a second set of data acquired from the sensors included in the mobile computing device and registering the sensors of the sensor modules to the anatomical part of the user after calculating a matrix/transformation of the data acquired from the sensor modules relative to the data acquired from the mobile device sensors. The mobile computing device should be positioned substantially aligned with the anatomical part of the user. The program method also describes determination of the position, orientation and motion of the anatomical part being tracked, and provides visual, audible and tactile instructions to carry out the exercise steps correctly.
To the accomplishment of the foregoing and related ends, certain illustrative aspects of the disclosed invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles disclosed herein can be employed and are intended to include all such aspects and their equivalents. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings.
In order to describe the manner in which features and other aspects of the present disclosure can be obtained, a more particular description of certain subject matter will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, nor drawn to scale for all embodiments, various embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the present invention.
In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
Still referring to
For the sake of explanation, let us take a situation where a person is wearing one or more sensor modules 102 of the present invention. The one or more sensors 103 of the sensor module 102 are configured to send signals to the transmitter/processing module 106, transferring the values of the properties sensed by the one or more sensors 103. The data from the one or more sensors 103 can be collected by the processor 114. The connection between the module 102 and the mobile computing device 202 (e.g. a mobile phone, tablet, etc.) is achieved through the transmitter/processing module 106, and it may be through electrical connector(s), but is more often implemented through wireless transmission. Wireless transmission referred to herein includes, but is not limited to, Bluetooth, BLE (Bluetooth low energy), WiFi, Zigbee, etc. For a non-wireless mode of signal transmission between the one or more sensor modules 102 and the mobile computing device 202 (e.g. a smart phone), the transmitter/processing module 106 can use different types of insulated flexible wire connections.
The Smart Trainer app 250, custom built for the present invention, enables one or more persons to perform various tasks related to physical rehabilitation and motion training. Examples of tasks carried out by the Smart Trainer app 250 include, but are not limited to, facilitating calibration of the one or more sensors 103, registration or association of the one or more sensors 103 to an anatomical part, tracking of the position/orientation of the one or more sensors 103, providing guidance and feedback for physical rehabilitation and motion training, and communication with one or more other mobile computing devices and/or computers through a local or wide area network.
As illustrated in
Reference to
The remote server 402 includes an application server or executing unit and a data store. The application server or executing unit further comprises a web server and a computer server that can serve as the application layer of the present invention. It would be obvious to any person skilled in the art that, although described herein as the data being stored in a single data store with necessary partitions, a plurality of data stores can also store the various data and files of multiple users. The Smart Trainer server 402 can provide facilities such as data storage, statistical analysis, neural networks and data mining. It also implements an optional web-based application for user account management. In some embodiments, the functions of Smart Trainer server 402 can be implemented in a cloud computing environment.
Reference to
Examples of different configurations supported by the Smart Trainer system 400 include, but are not limited to—
The sensor modules 102 can be identified by the mobile computing device 202 in a number of ways. Examples of sensor module identification include, but are not limited to, identification based on user input, identification based on color coding or bar-coding of the sensors (so that each one has a pre-defined position), identification based on motion pattern detection for each sensor corresponding to an exercise, and identification based on detection of the motion pattern of each sensor even without defining the exercise.
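As a hedged sketch of the motion-pattern-based identification option, the snippet below matches an observed per-sensor motion magnitude profile against stored templates; the template values, the exercise, and all names are illustrative assumptions, not part of the specification:

```python
def identify_sensor(observed, templates):
    """Return the template position whose motion magnitude profile is
    closest (by sum of squared differences) to the observed profile."""
    best, best_err = None, float("inf")
    for position, template in templates.items():
        err = sum((o - t) ** 2 for o, t in zip(observed, template))
        if err < best_err:
            best, best_err = position, err
    return best

# Hypothetical templates for a seated knee-extension exercise: the thigh
# sensor barely moves while the shank sensor sweeps a large arc.
templates = {
    "thigh": [0.0, 0.1, 0.1, 0.1, 0.0],
    "shank": [0.0, 0.8, 1.5, 0.8, 0.0],
}
print(identify_sensor([0.1, 0.9, 1.4, 0.7, 0.1], templates))  # prints "shank"
```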
In a preferred embodiment, the smart GUI on the mobile computing device 202 provides step-by-step directions/guidance to the user for wearing the sensor module 102 in a particular way which may vary depending on the exercise to be done. The optimum nominal place for the sensor module positioning depends on the application and the part of the anatomy to be tracked. For example, for an exercise involving leg 502 of a user, the smart GUI instructs the user to put sensor modules 102A and 102B in the positions as shown in
Once the one or more sensor modules 102 are attached to an anatomical part, the smart GUI provides further instructions for facilitating registration/association of the one or more sensor modules to the anatomical part to which they are attached. Correct spatial interpretation of information from these sensor modules requires knowledge of their position and orientation (that is, their pose) in a frame of reference coordinate system. The task of determining the sensor pose relative to the body part pose is called sensor registration, and it amounts to estimating a plurality of parameters that define the coordinate transformation locating the sensor coordinates. A sensor registered to an anatomical part, i.e. a sensor-anatomy registration, allows tracking the motion of the anatomical part from the data acquired by the sensor registered to it.
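The coordinate transformation at the heart of sensor registration can be sketched as follows. This is an illustrative Python example, not the claimed implementation: it assumes orientation matrices are available both from the sensor module and from a reference (such as the mobile device) momentarily aligned with the anatomy, and all function names are hypothetical:

```python
def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_transpose(a):
    """3x3 transpose; for a rotation matrix this equals its inverse."""
    return [[a[j][i] for j in range(3)] for i in range(3)]

def registration_matrix(r_sensor, r_reference):
    """With the reference device aligned to the limb, its orientation stands
    in for the anatomy orientation, so the fixed sensor-to-anatomy rotation
    is R_reg = R_sensor^T * R_reference."""
    return mat_mul(mat_transpose(r_sensor), r_reference)

def anatomy_orientation(r_sensor_now, r_reg):
    """During exercise, recover the anatomy orientation from the current
    sensor orientation and the stored registration."""
    return mat_mul(r_sensor_now, r_reg)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # sensor strapped on rotated 90° about z
r_reg = registration_matrix(rz90, identity)
recovered = anatomy_orientation(rz90, r_reg)  # equals `identity` again
```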
In a preferred embodiment, the present invention enables convenient and accurate sensor-anatomy registration using a registration by reference method wherein the mobile computing device 202 is required to be positioned substantially aligned with the sensor module over the anatomical part of the body of a user which needs motion tracking. The device sensors 225 of a mobile computing device are generally configured to obtain readings with respect to an XYZ coordinate system 512, 514 and 516 of the device. The coordinate-system of a mobile computing device can be defined relative to the screen of the device in its default orientation as shown in
There could be multiple ways available for aligning the virtual coordinate system of the mobile computing device 202 with respect to a sensor module. For example, as shown in
While the information from either of the methods shown in
In some embodiments, after registering a sensor module to an anatomical part with the help of the mobile computing device 202, the sensor-anatomy registration can be improved further without using any external device (not even the mobile computing device). This can be done by performing a series of known/defined movements while dynamically collecting position/orientation data from the one or more sensor modules and then analyzing the acquired data to obtain patterns and key information (e.g. axis of rotation, pivoting center, etc.). This method includes providing instructions to the user through the Smart GUI by the GUI module 302 of the Smart Trainer app 250 to strap/clip/place/wear the sensor modules in a specific way (e.g. one sensor on the ankle and another sensor over the knee as shown in
In some other embodiments, the present invention allows sensor-anatomy registration without requiring positioning of the mobile computing device over the anatomical part with respect to the sensor module. The Smart GUI module 302 provides instructions through the GUI (displayed on the mobile computing device or on a TV/computer screen, etc.) to the user for positioning himself/herself (or the limbs or body part to be tracked) in certain ways. Once the user is in the proper position (detected by the Smart Trainer app 250 in different ways, such as a voice command, tapping on a touch screen GUI, a gesture detected by the motion sensors, or simply a lack of further movement), it calculates the registration matrices. The data related to the sensor-anatomy registration are stored in the data store 245 of the mobile computing device 202 or, in some other embodiments, in the Smart Trainer Server 402. The sensor-anatomy registration process of the present invention can also be used for the initial calibration of the sensors.
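One simplified way the axis-of-rotation extraction from known movements (described above) could work is to average the direction of the angular-velocity vectors recorded during a pure hinge motion. The sample values below are hypothetical, and a real analysis may differ:

```python
import math

def estimate_rotation_axis(gyro_samples):
    """Estimate a fixed joint axis as the mean direction of sign-aligned
    angular-velocity vectors recorded during a hinge movement."""
    sx = sy = sz = 0.0
    for gx, gy, gz in gyro_samples:
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        if norm > 1e-9:  # skip stationary samples
            sx += gx / norm
            sy += gy / norm
            sz += gz / norm
    n = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / n, sy / n, sz / n)

# Noisy knee-flexion readings, mostly about the sensor's y axis.
samples = [(0.05, 1.0, -0.02), (0.02, 1.4, 0.01), (-0.03, 0.9, 0.04)]
axis = estimate_rotation_axis(samples)  # close to (0, 1, 0)
```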
During the registration process described with reference to
It could be difficult and inconvenient for users to reach a touch screen or keyboard while performing an exercise. In a preferred embodiment, the one or more smart modules included in the Smart Trainer app 250 of the present invention allow users to interact with and control various functions of the Smart Trainer app 250 without coming into physical contact with the user interface. For example, once the sensor-anatomy registration is complete, a user can control the display and other content of the GUI through gesture control, without touching the touch screen of the mobile computing device 202. The gesture control module 308 uses the data acquired from the one or more sensor modules 102 worn by a user for motion tracking to read the gestures made by the user and interprets the data into appropriate commands for controlling the functions of the Smart Trainer app 250. The gesture control module 308 can also detect and evaluate whether the user is having trouble following the directions or instructions for any given exercise.
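A minimal sketch of touch-free gesture control of the kind described: a simple threshold detector that maps a repeated shake of the tracked limb to a hypothetical "next step" command. The threshold and command name are assumptions, not part of the specification:

```python
def detect_gesture(accel_magnitudes, threshold=2.0, min_peaks=2):
    """Report a 'shake' command when the acceleration magnitude window
    rises above `threshold` at least `min_peaks` separate times."""
    peaks, above = 0, False
    for m in accel_magnitudes:
        if m > threshold and not above:
            peaks += 1
            above = True
        elif m <= threshold:
            above = False
    return "next_step" if peaks >= min_peaks else None

print(detect_gesture([1.0, 2.5, 1.2, 2.8, 0.9]))  # prints "next_step"
print(detect_gesture([1.0, 1.1, 0.9]))            # prints "None" (no gesture)
```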
In a preferred embodiment, a large set of physical exercise instructions approved by experts (e.g. physiotherapists, personal trainers, etc.) is stored in the data store 245 and/or in the Smart Trainer server 402. These instructions are used as reference parameters to provide instructions and to compare movements of body part(s) and/or sequences of movements of body parts of users. Once a user selects a particular exercise, the Smart Trainer app 250 provides instructions related to the targets or goals for that exercise through the GUI.
The smart camera module 304 provides a virtual camera which can render an optimum view of the user as a whole and/or, in particular, of the anatomy being tracked, relevant to the exercise selected, and presents the view(s) on the GUI as decided by the user or per pre-set or real-time conditions. The virtual camera of the present invention can be set at any angle and focus to render 2D (2-dimensional) and/or 3D (3-dimensional) visuals of the anatomy being tracked.
When the virtual camera 806 moves automatically per the settings, on demand, or by automatic error detection, it shows the different targets for a specific exercise, which get activated at different moments of the exercise sequence.
The Smart Trainer app 250 provides guidance to the user in the form of various visual and audible cues. For example, reference to
In a preferred embodiment, the feedback module 310 compares actual motion/movement/position of an anatomical part being tracked with an ideal motion/movement/position and provides visual and/or audible instructions for correcting the motion/movement/position on finding an error/deviation. By way of example, reference to
The models (desired 1208 & measured position/motion 1204, 1206 in
While the Smart Trainer app 250 can present a vast amount of information related to the user's exercise execution at any time, the system presents the user with only the information relevant to the current instance of the exercise sequence (hiding unnecessary data/graphics, which remain available on demand). The system evaluates in real time, applying custom algorithms to determine in a smart way what stage of the process the user is in at any time, and selects what to display accordingly.
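The deviation-based feedback described above (comparing the measured position/motion against the ideal) can be sketched as a simple color-coded mapping. The tolerance bands below are assumed values, not taken from the specification:

```python
def feedback_color(measured_deg, target_deg, warn_band=5.0, error_band=15.0):
    """Map the deviation between measured and target joint angle to a
    color-coded cue of the kind shown on the GUI."""
    deviation = abs(measured_deg - target_deg)
    if deviation <= warn_band:
        return "green"   # within tolerance: keep going
    if deviation <= error_band:
        return "yellow"  # small correction suggested
    return "red"         # large deviation: trigger audible/tactile alert

print(feedback_color(88.0, 90.0))  # prints "green"
print(feedback_color(80.0, 90.0))  # prints "yellow"
print(feedback_color(60.0, 90.0))  # prints "red"
```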
Based on the exercise type and the user's preferences, the system 400 can play, through the output device 235 of the mobile computing device 202 and/or through the external output device 406, audio, sounds, voice messages, etc. that change dynamically based on the magnitude of error (ideal vs. measured position). These audio signals can change dynamically:
In addition to the raw values collected from the sensor modules 102, the Smart Trainer app 250 uses a calculus and prediction engine (prediction module 306) to estimate a plurality of parameters such as the range of motion, acceleration, force, metabolism, calories, and activity of the main muscle groups involved in the exercise. The prediction module 306 can then provide feedback on errors and predict to what extent the exercise execution can be improved in the current session. Using these parameters, the Smart Trainer app 250 presents useful information to the user in real-time (text, numbers, color coded parameters, 2D and 3D graphics, audio, tactile indications, etc.) to show users how to improve their movements, in the way a coach or health care professional would, but based on quantitative analysis as opposed to expert opinion alone.
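As a hedged illustration of one such estimated parameter, the range of motion can be derived per repetition from the tracked joint angle and summarized for automatic progress documentation; function and field names are illustrative:

```python
def range_of_motion(angles_deg):
    """Range of motion over one repetition: peak minus trough joint angle."""
    return max(angles_deg) - min(angles_deg)

def per_rep_stats(reps):
    """Summarize ROM across repetitions for automatic progress records."""
    roms = [range_of_motion(r) for r in reps]
    return {"best": max(roms), "mean": sum(roms) / len(roms)}

# Two knee-flexion repetitions sampled as joint angles in degrees.
reps = [[5, 40, 85, 42, 6], [4, 50, 95, 48, 5]]
print(per_rep_stats(reps))  # prints {'best': 91, 'mean': 85.5}
```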
The Smart Trainer app 250 can perform not only analysis of the sequence of movements and their execution performance in real-time; additionally, it can also calculate and predict physiological parameters, such as main muscle group activity and metabolism, using the local prediction engine 306 for the disconnected mode, and a more accurate prediction engine for the connected mode, where it takes help from the server system.
The specific muscle activity for an anatomical part of the user can be measured directly with the actual sensor modules 102 (e.g. electromyography and/or thermal sensors), or can be estimated by the (local or remote) ‘prediction engine’ 306 based on the motion/position/orientation readings acquired from the sensor modules 102. The prediction engine 306 uses neural networks and fuzzy logic for the local engine, based on existing training data (obtained from actual sensors on multiple users during neural network training), or uses a deep learning based prediction engine. In both of the latter two cases, where prediction is used, muscle activity would present a predictable percentage error.
Using the sensor-anatomy registration techniques (described above with reference to
Virtual sensors' readings, in accordance with an embodiment of the present invention, are calculated based on the 9-DOF motion/orientation sensor modules 102, which represent the position of body members. These virtual sensors provide an estimation of the specific muscle activity of the body member (the muscles involved in the analyzed movement), the neural control, and the metabolism, based on a machine learning system trained using the same exercise, patient features, and real sensors to obtain real training data. The sensor modules 102 provide the orientation of body parts/members using accelerometers, gyros, a compass, and a customized fusion algorithm. The orientation and position are translated to anatomical coordinates and analyzed. Virtual sensors provide the muscle group activity, neural control, and metabolism using the local prediction module 306 in stand-alone mode and, optionally, using the server 402 in a cloud environment, if connectivity exists, for more powerful processing and/or more accurate values.
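A minimal stand-in for such a virtual sensor, with an assumed linear mapping in place of the trained machine learning system described in the text; the weights, inputs, and output scale are purely illustrative:

```python
def virtual_muscle_activity(joint_angle_deg, angular_speed,
                            weights=(0.004, 0.02), bias=0.05):
    """Hypothetical virtual sensor: map kinematics from the real 9-DOF
    modules to an estimated muscle activation level clamped to [0, 1].
    A trained model would replace this linear form."""
    raw = bias + weights[0] * joint_angle_deg + weights[1] * angular_speed
    return max(0.0, min(1.0, raw))

activation = virtual_muscle_activity(45.0, 10.0)  # moderate activation
```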
As shown in
In addition to improving and miniaturizing the control and guidance for the execution of a sequence of movements as part of a physical rehab treatment or motion exercise, the Smart Trainer system 400 can be used to track the movement/motion sequence performance and the muscle and neural control activity of the anatomy being tracked. Therefore, the system 400 can be used for training on a new program to increase force, resistance, and ability, or during different stages of a championship, or to evaluate another kind of rehab treatment, like other types of therapy including the ones that require specific medicaments.
The Smart Trainer system 400 can present the information in multiple (simultaneous or otherwise) devices, and automatically detects the number of display devices 202 and 406 (e.g. smart watch, phone, tablet, TV, etc.) and their resolution in pixels. The system 400 implements different modes for presenting the information/guidance/feedback to the user and/or a physical trainer.
The Smart Trainer app 250 can implement a unique feature related to the position/posture of a user with respect to real world coordinates. The sensor-anatomy registration and/or calibration process enables the Smart Trainer app 250 to define the relationship between the coordinate system of the sensors worn by a user and the global coordinate system. The orientation of the anatomy of the user can be represented by an orientation matrix based on which the position/orientation of the anatomy of the user can be determined with respect to the real world coordinates. Reference to
There are multiple parameters that the user (and/or the Smart Trainer app) can dynamically change:
In addition to the 3D rendering of the scene, the patient model, and the ‘shadow’/instructor, in some embodiments the system implements immersive reality features like ‘Google Cardboard’. This allows the user not only to have a feeling of perspective/depth, but also to change the point of view (camera location) based on movements of his/her head and body.
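The use of a sensed gravity direction to relate the user's posture to real-world coordinates, as discussed above, can be illustrated with a simplified sketch; the axis convention (device y axis along the spine) and the tilt threshold are assumptions:

```python
import math

def posture_from_gravity(ax, ay, az, upright_limit_deg=30.0):
    """Classify gross posture from the gravity vector sensed by a torso-worn
    accelerometer, assuming the device y axis points along the spine."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, ay / norm))))
    return "upright" if tilt < upright_limit_deg else "reclined"

print(posture_from_gravity(0.0, 9.7, 1.0))  # prints "upright"
print(posture_from_gravity(0.0, 1.0, 9.7))  # prints "reclined"
```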
One of the key aims of this Smart Trainer system is to help increase patient/athlete compliance. Some examples of the key features designed to keep the user motivated are—
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The terms “affixed”, “fitted”, “attached”, “tied” are to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
Preferred embodiments of this invention are described herein. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventor intends for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
This application claims the benefit of U.S. Provisional Application No. 62/256,732, filed Nov. 18, 2015, the contents of which are incorporated herein by reference.