The present invention relates to a system, method and apparatus for diagnosis and therapy, and in particular, to such a system, method and apparatus for diagnosis and therapy of neurological and/or neuromuscular deficits.
Patients who suffer from one or more neurological and/or neuromuscular deficits often need specialized therapy in order to regain at least partial functionality, for example in terms of ADL (activities of daily living). For example, specialized physical therapy may be required to enable a patient suffering from a brain injury, such as a stroke or traumatic brain injury, to regain at least some lost functionality. However, such specialized physical therapy requires dedicated, highly trained therapists, and so may not be available to all patients who need it.
Although various games and other solutions are available for physical therapy, none of them are designed for the specific needs of patients having neuromuscular or neurological deficits. Such patients require solutions that feature a much more granular and calibrated ability to isolate specific body parts and encourage a simulated range of motions that influence the virtual capabilities of the patient. Such an ability would have a significant impact on accelerating, extending and broadening patient recovery, while at the same time providing important psychological motivation and support.
This is especially important within the first few weeks following a trauma, when the neuroadaptive and neuroplastic capacities of the patient are most likely to benefit from additional motivational treatment. However, for these patients in particular, any solution has many stringent requirements which are not currently being met. For example, such patients require personalized treatments that are based on an understanding of the pathologies involved and a variety of therapeutic techniques for treating them. On the other hand, gaming or other physical activities for such patients should not require the use of any tools (e.g., joysticks), as the patients may not be able to use them. Any solution should have graduated levels of difficulty that are based on an integrated understanding of brain sciences, neuroplasticity and self-motivated learning, which can also be personalized for each patient. Unfortunately, no such solution is currently available.
The present invention provides, in at least some embodiments, a system, method and apparatus for diagnosis and therapy. Preferably, the system, method and apparatus is provided for diagnosis and therapy of neurological and/or neuromuscular deficits by using a computational device. Optionally and preferably, the system, method and apparatus track one or more physical movements of the user, which are then analyzed to determine whether the user has one or more neurological and/or neuromuscular deficits. Additionally or alternatively, the system, method and apparatus monitor the user performing one or more physical movements, whether to diagnose such one or more neurological and/or neuromuscular deficits, to treat such one or more neurological and/or neuromuscular deficits, or a combination thereof.
By “neurological deficit”, it is meant any type of central nervous system deficit, peripheral nervous system deficit, or combination thereof, whether due to injury, disease or a combination thereof. Non-limiting examples of causes for such deficits include stroke and traumatic brain injury.
By “neuromuscular deficit” it is meant any combination of any type of neurological deficit with a muscular component, or any deficit that has both a neurological deficit and a muscular deficit, or optionally any deficit that is musculoskeletal in origin.
In regard to a physical user limitation, such as a limited range of motion in at least one body part (for example, a limited range of motion when lifting an arm), the “limitation” is preferably determined according to the normal or expected physical action or activity in which the user would have engaged without the presence of the limitation.
A physical limitation or deficit may optionally have a neurological or neuromuscular cause, but is referred to herein generally as a “physical” limitation or deficit in regard to the impact that it has on movement of one or more body parts of the user.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware or by software on any operating system of any firmware or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
Although the present invention is described with regard to a “computer” on a “computer network”, it should be noted that optionally any device featuring a data processor and the ability to execute one or more instructions may be described as a computer or as a computational device, including but not limited to any type of personal computer (PC), a server, a cellular telephone, an IP telephone, a smart phone, a PDA (personal digital assistant), a thin client, a mobile communication device, a smart watch, head mounted display or other wearable that is able to communicate externally, a virtual or cloud based processor, or a pager. Any two or more of such devices in communication with each other may optionally comprise a “computer network”.
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
Sensor data from the sensors is collected by a device abstraction layer 108, which preferably converts the sensor signals into data which is sensor-agnostic. Device abstraction layer 108 preferably handles all of the necessary preprocessing such that if different sensors are substituted, only changes to device abstraction layer 108 would be required; the remainder of system 100 would preferably continue functioning without changes, or at least without substantive changes. Device abstraction layer 108 preferably also cleans up the signals, for example to remove or at least reduce noise as necessary, and may optionally also normalize the signals. Device abstraction layer 108 may be operated by a computational device (not shown). Any method steps performed herein may optionally be performed by a computational device; also all modules and interfaces shown herein are assumed to incorporate, or to be operated by, a computational device, even if not shown.
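By way of non-limiting illustration, the following sketch (in Python) shows one possible form of such a device abstraction layer; the frame format, the class name and the simple moving-average smoothing are assumptions made for the purpose of illustration only, not a definitive implementation.

```python
# Minimal sketch of a device abstraction layer; the frame format and class name
# are hypothetical, and smoothing is a simple moving average for illustration.
from collections import deque
from statistics import mean


class DeviceAbstractionLayer:
    """Converts sensor-specific frames into sensor-agnostic joint positions."""

    def __init__(self, window=5):
        self._history = {}   # per-joint history used for noise reduction
        self._window = window

    def process(self, raw_frame):
        """raw_frame: mapping of joint name -> (x, y, z) in sensor units."""
        clean = {}
        for joint, coords in raw_frame.items():
            buf = self._history.setdefault(joint, deque(maxlen=self._window))
            buf.append(coords)
            # Smooth each coordinate over the recent window to reduce jitter.
            clean[joint] = tuple(mean(c[i] for c in buf) for i in range(3))
        return clean  # sensor-agnostic output consumed by the data analysis layer
```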
The preprocessed signal data from the sensors is then passed to a data analysis layer 110, which preferably performs data analysis on the sensor data for consumption by a game layer 116. By “game” it is optionally meant any type of interaction with a user. Preferably such analysis includes gesture analysis, performed by a gesture analysis module 112. Gesture analysis module 112 preferably decomposes physical actions made by the user to a series of gestures. A “gesture” in this case may optionally include an action taken by a plurality of body parts of the user, such as taking a step while swinging an arm, lifting an arm while bending forward, moving both arms and so forth. The series of gestures is then provided to game layer 116, which translates these gestures into game play actions. For example and without limitation, and as described in greater detail below, a physical action taken by the user to lift an arm is a gesture which could translate in the game as lifting a virtual game object.
Data analysis layer 110 also preferably includes a system calibration module 114. As described in greater detail below, system calibration module 114 optionally and preferably calibrates the physical action(s) of the user before game play starts. For example, if a user has a limited range of motion in one arm, in comparison to a normal or typical subject, this limited range of motion is preferably determined as being the user's full range of motion for that arm before game play begins. When playing the game, data analysis layer 110 may indicate to game layer 116 that the user has engaged the full range of motion in that arm according to the user calibration—even if the user's full range of motion exhibits a limitation. As described in greater detail below, preferably each gesture is calibrated separately.
System calibration module 114 may optionally perform calibration of the sensors in regard to the requirements of game play; however, preferably device abstraction layer 108 performs any sensor specific calibration. Optionally the sensors may be packaged in a device, such as the Kinect, which performs its own sensor specific calibration.
In stage 3, session calibration is optionally performed. By “session”, it is meant the interactions of a particular user with the system. Session calibration may optionally include determining whether the user is placed correctly in regard to the sensors, such as whether the user is placed correctly in regard to the camera and depth sensor. As described in greater detail below, if the user is not placed correctly, the system may optionally cause a message to be displayed to the user, preferably at least as a visual display, an audio notification, or a combination thereof. The message indicates to the user that the user needs to adjust his or her placement relative to one or more sensors. For example, the user may need to adjust his or her placement relative to the camera and/or depth sensor. Such placement may optionally include adjusting the location of a specific body part, such as of the arm and/or hand of the user.
Optionally and preferably, at least the type of game that the user will engage in is indicated as part of the session calibration. For example, the type of game may require the user to be standing, or may permit the user to be standing, sitting, or even lying down. The type of game may optionally engage the body of the user or may alternatively engage specific body part(s), such as the shoulder, hand and arm for example. Such information is preferably provided so that the correct or optimal user position may be determined for the type of game(s) to be played. If more than one type of game is to be played, optionally this calibration is repeated for each type of game or alternatively may only be performed once.
Alternatively, the calibration process may optionally be sufficiently broad such that the type of game does not need to be predetermined. In this non-limiting example, the user could potentially play a plurality of games or even all of the games, according to one calibration process. If the user is potentially not physically capable of performing one or more actions as required, for example by not being able to remain standing, and hence could not play one or more games, optionally a therapist who is controlling the system could decide which game(s) could be played.
In stage 4, user calibration is performed, to determine whether the user has any physical limitations. User calibration is preferably adjusted according to the type of game to be played as noted above. For example, for a game requiring the user to take a step, user calibration is preferably performed to determine whether the user has any physical limitations when taking a step. Alternatively, for a game requiring the user to lift his or her arm, user calibration is preferably performed to determine whether the user has any physical limitations when lifting his or her arm. If game play is to focus on one side of the body, then user calibration preferably includes determining whether the user has any limitations for one or more body parts on that side of the body.
User calibration is preferably performed separately for each gesture required in a game. For example, if a game requires the user to both lift an arm and a leg, preferably each such gesture is calibrated separately for the user, to determine any user limitations. As noted above, user calibration for each gesture is used to inform the game layer of what can be considered a full range of motion for that gesture for that specific user.
In stage 5, such calibration information is received by a calibrator, such as the previously described system calibration module for example. In stage 6, the calibrator preferably compares the actions taken by the user to an expected full range of motion action, and then determines whether the user has any limitations. These limitations are then preferably modeled separately for each gesture.
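As a non-limiting illustration of such per-gesture limitation modeling, the following Python sketch compares an observed range of motion to the expected full range and records the resulting coverage; the function name, the units (degrees) and the returned fields are assumptions for illustration.

```python
# Illustrative sketch of per-gesture limitation modeling; units are degrees and
# the returned fields are hypothetical.
def model_limitation(observed_min, observed_max, expected_min, expected_max):
    """Compare the user's observed range of motion to the expected full range
    for one gesture and return a simple limitation model."""
    observed_span = observed_max - observed_min
    expected_span = expected_max - expected_min
    # Fraction of the expected range that the user can actually cover.
    coverage = observed_span / expected_span if expected_span else 0.0
    return {
        "observed_range": (observed_min, observed_max),
        "expected_range": (expected_min, expected_max),
        "coverage": coverage,  # e.g. 0.5 means half of the expected range
    }


# e.g. model_limitation(-20.0, 30.0, -45.0, 45.0)["coverage"] is approximately 0.56
```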
In stage 7, the gesture provider receives calibration parameters. In stage 8, the gesture provider adjusts gestures according to the modeled limitations for the game layer, as described in greater detail below. The gesture provider therefore preferably abstracts the calibration and the modeled limitations, such that the game layer relates only to the determination of the expected full range of motion for a particular gesture by the user. However, the gesture provider may also optionally represent the deficit(s) of a particular user to the game layer (not shown), such that the system may optionally recommend a particular game or games, or type of game or games, for the user to play, in order to provide a diagnostic and/or therapeutic effect for the user according to the specific deficit(s) of that user.
The system according to at least some embodiments of the present invention preferably monitors a user behavior. The behavior is optionally selected from the group consisting of performing a physical action, response time for performing the physical action and accuracy in performing the physical action. Optionally, the physical action comprises a physical movement of at least one body part. The system is optionally further adapted for therapy and/or diagnosis of a user behavior.
Optionally, alternatively or additionally, the system according to at least some embodiments is adapted for cognitive therapy of the user through an interactive computer program. For example, the system is optionally adapted for performing an exercise for cognitive training.
Optionally the exercise for cognitive training is selected from the group consisting of attention, memory, and executive function.
Optionally the system calibration module further determines if the user has a cognitive deficit, such that the system calibration module also calibrates for the cognitive deficit if present.
As shown, game layer 116 preferably features a game abstraction interface 200. Game abstraction interface 200 preferably provides an abstract representation of the gesture information to a plurality of game modules 204, of which only three are shown for the purpose of description only and without any intention of being limiting. The abstraction of the gesture information by game abstraction interface 200 means that changes to data analysis layer 110, for example in terms of gesture analysis and representation by gesture analysis module 112, may optionally only require changes to game abstraction interface 200 and not to game modules 204. Game abstraction interface 200 preferably provides an abstraction of the gesture information and also optionally and preferably what the gesture information represents, in terms of one or more user deficits. With regard to such deficits, game abstraction interface 200 may optionally poll game modules 204, to determine which game module(s) 204 would be most appropriate for that user. Alternatively or additionally, game abstraction interface 200 may optionally feature an internal map of the capabilities of each game module 204, and optionally of the different types of game play provided by each game module 204, such that game abstraction interface 200 may optionally be able to recommend one or more games to the user according to an estimation of any user deficits determined by the previously described calibration process. Of course, such information could also optionally be manually entered and/or the game could be manually selected for the user by medical, nursing or therapeutic personnel.
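The following Python sketch illustrates, by way of non-limiting example only, how such an internal capability map could be used to recommend games according to estimated user deficits; the game names and deficit labels are invented for illustration.

```python
# Hypothetical capability map and recommendation helper; game names and deficit
# labels are invented for illustration only.
CAPABILITY_MAP = {
    "plane_game": {"trunk_flexion", "forearm_rotation", "steering"},
    "car_game": {"shoulder_lift", "trunk_flexion", "steering"},
    "climbing_game": {"hand_over_hand", "shoulder_lift"},
}


def recommend_games(user_deficits, capability_map=CAPABILITY_MAP):
    """Rank games by how many of the estimated user deficits they exercise."""
    scored = []
    for game, exercised in capability_map.items():
        overlap = len(exercised & user_deficits)
        if overlap:
            scored.append((overlap, game))
    return [game for overlap, game in sorted(scored, reverse=True)]


# e.g. recommend_games({"hand_over_hand"}) -> ["climbing_game"]
```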
Upon selection of a particular game for the user to play, a particular game module 204 is activated and begins to receive gesture information, optionally according to the previously described calibration process, such that game play can start.
Game abstraction interface 200 also optionally is in communication with a game results analyzer 202. Game results analyzer 202 optionally and preferably analyzes the user behavior and capabilities according to information received back from game module 204 through to game abstraction interface 200. For example, game results analyzer 202 may optionally score the user, as a way to encourage the user to play the game. Also game results analyzer 202 may optionally determine any improvements in user capabilities over time and even in user behavior. An example of the latter may occur when the user is not expending sufficient effort to achieve a therapeutic effect with other therapeutic modalities, but may show improved behavior with a game in terms of expended effort. Of course, increased expended effort is likely to lead to increased improvements in user capabilities, such that improved user behavior may optionally be considered as a sign of potential improvement in user capabilities. Detecting and analyzing such improvements may also optionally be used to determine where to direct medical resources, within the patient population and also for specific patients.
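As a non-limiting illustration of determining improvement over time, the following Python sketch computes a simple session-to-session trend from game scores; the assumption that each session yields a single numeric score is made for illustration only.

```python
# Simple sketch of analyzing improvement across sessions, assuming each session
# produces a single numeric score.
def improvement_trend(session_scores):
    """Return the average session-to-session change in score."""
    if len(session_scores) < 2:
        return 0.0
    deltas = [later - earlier
              for earlier, later in zip(session_scores, session_scores[1:])]
    return sum(deltas) / len(deltas)


# e.g. improvement_trend([10, 12, 15, 19]) == 3.0, a positive (improving) trend
```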
Game layer 116 may optionally comprise any type of application, not just a game. Optionally, game results analyzer 202 may analyze the results for the interaction of the user with any type of application.
Game results analyzer 202 may optionally store these results locally or alternatively, or additionally, may optionally transmit these results to another computational device or system (not shown). Optionally, the results feature anonymous data, for example to improve game play but without any information that ties the results to the game playing user's identity or any user parameters.
Also optionally, the results feature anonymized data, in which an exact identifier for the game playing user, such as the user's name and/or national identity number, is not kept; but some information about the game playing user is retained, including but not limited to one or more of age, disease, capacity limitation, diagnosis, gender, time of first diagnosis and so forth. Optionally such anonymized data is only retained upon particular request of a user controlling the system, such as a therapist for example, in order to permit data analysis to help suggest better therapy for the game playing user, for example, and/or to help diagnose the game playing user (or to adjust that diagnosis).
Optionally the following information is transmitted and/or otherwise analyzed, at least to improve game play:
System 300 as shown optionally and preferably includes four levels: a sensor API level 302, a sensor abstraction level 304, a gesture level 306 and a game level 308. Sensor API level 302 preferably communicates with a plurality of sensors (not shown) to receive sensor data from them. According to the non-limiting implementation described herein, the sensors include a Kinect sensor and a Leap Motion sensor, such that sensor API level 302 as shown includes a Kinect sensor API 310 and a Leap Motion sensor API 312, for receiving sensor data from these sensors. Typically such APIs are third party libraries which are made available by the manufacturer of a particular sensor.
The sensor data is then passed to sensor abstraction level 304, which preferably handles any sensor specific data analysis or processing, such that the remaining components of system 300 can be at least somewhat sensor agnostic. Furthermore, changes to the sensors themselves preferably only necessitate changes to sensor API level 302 and optionally also to sensor abstraction level 304, but preferably not to other levels of system 300.
Sensor abstraction level 304 preferably features a body tracking data provider 314 and a hands tracking data provider 316. Optionally all parts of the body could be tracked with a single tracking data provider, or additional or different body parts could optionally be tracked (not shown). For this implementation, with the two sensors shown, preferably data from the Kinect sensor is tracked by body tracking data provider 314, while data from the Leap Motion sensor is tracked by hands tracking data provider 316.
Next, the tracked body and hand data is provided to gesture level 306, which includes modules featuring the functionality of a gesture provider 318, from which specific classes inherit their functionality as described in greater detail below. Gesture level 306 also preferably includes a plurality of specific gesture providers, of which only three are shown for the purpose of illustration only and without any intention of being limiting. The specific gesture providers preferably include a trunk flexion/extension gesture provider 320, which provides information regarding leaning of the trunk; a steering wheel gesture provider 322, which provides information regarding the user interactions with a virtual steering wheel that the user could grab with his/her hands; and a forearm pronation/supination gesture provider 324, which provides information about the rotation of the hand along the arm.
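By way of non-limiting illustration, the following Python sketch shows the inheritance pattern described above, with a base gesture provider from which specific providers inherit; the method signature, field names and angle thresholds are assumptions, and the actual system could be implemented in any suitable language.

```python
# Sketch of the described inheritance pattern; signatures, field names and
# thresholds are assumptions for illustration.
from abc import ABC, abstractmethod


class GestureProvider(ABC):
    """Base class: transforms tracking data into a normalized gesture value."""

    @abstractmethod
    def update(self, tracking_frame):
        """Return a normalized value, typically in the range -1.0 to 1.0."""


class TrunkFlexionExtensionGestureProvider(GestureProvider):
    def __init__(self, max_lean_degrees=30.0):
        self.max_lean_degrees = max_lean_degrees

    def update(self, tracking_frame):
        # Signed lean of the trunk: forward positive, backward negative.
        lean = tracking_frame["trunk_lean_degrees"]
        return max(-1.0, min(1.0, lean / self.max_lean_degrees))


class ForearmPronationSupinationGestureProvider(GestureProvider):
    def __init__(self, max_rotation_degrees=45.0):
        self.max_rotation_degrees = max_rotation_degrees

    def update(self, tracking_frame):
        # Rotation of the forearm along its axis; 0 corresponds to thumb up.
        rotation = tracking_frame["forearm_rotation_degrees"]
        return max(-1.0, min(1.0, rotation / self.max_rotation_degrees))
```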
Each gesture provider relates to one specific action which can be translated into game play. As shown, some gesture providers receive information from more than one tracking data provider, while each tracking data provider can feed data into a plurality of gesture providers, which then focus on analyzing and modeling a specific gesture.
A non-limiting list of gesture providers is given below:
An optional additional or alternative gesture provider is a ClimbingGestureProvider, which provides information about a hand-over-hand gesture by the user.
Optionally any of the above gesture providers may be included or not included in regard to a particular game or the system.
Optionally each such gesture provider has a separate calibrator that can calibrate the potential range of motion for a particular user and/or also determine any physical deficits that the user may have in regard to a normal or expected range of motion, as previously described. The gesture providers transform tracking data into normalized output values that will be used by the game controllers of game level 308 as inputs, as described in greater detail below. Those output values are generated by using a predefined set of ranges or limits that can be adjusted. For instance, the above Forearm Pronation/Supination Gesture Provider will return a value between −1.0 (pronation) and 1.0 (supination) which represents the current rotation of the forearm along its axis (normalized). Note that the initial position (value equal to 0) is defined as the thumb-up position. Similar ranges could easily be determined by one of ordinary skill in the art for all such gesture providers.
Suppose that a given patient was not able to perform the full range of motion (−45° to 45°) for such a motion. In that case, the Gesture Provider parameters could be adjusted to allow the patient to cover the full range of the normalized value (−1.0 to 1.0). With those adjustments, the patient will therefore be able to fully play the game like everyone else. This adjustment process is called Gesture Provider Calibration and is a non-limiting example of the process described above. It is important to note that preferably nothing has changed in the game logic; the game always expects a normalized value between −1.0 and 1.0, so the adjustment requires no changes to the game logic.
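The following Python sketch illustrates this calibration by way of non-limiting example: the same normalized −1.0 to 1.0 output is produced, but the mapping is computed over the patient's own calibrated range; the function name and the example patient range (−20° to +30°) are assumptions for illustration.

```python
# Worked sketch of Gesture Provider Calibration: the game-facing output stays
# in -1.0..1.0, but the mapping is computed over the calibrated range.
def normalize(angle_degrees, calibrated_min=-45.0, calibrated_max=45.0):
    """Map a measured angle onto -1.0..1.0 over the calibrated range."""
    span = calibrated_max - calibrated_min
    value = -1.0 + 2.0 * (angle_degrees - calibrated_min) / span
    return max(-1.0, min(1.0, value))


# Unimpaired default range: -45 degrees maps to -1.0 and +45 degrees to +1.0.
assert normalize(45.0) == 1.0
# After calibration for a patient limited to -20..+30 degrees, the patient's
# achievable motion covers the same game-facing range of -1.0..1.0.
assert normalize(30.0, calibrated_min=-20.0, calibrated_max=30.0) == 1.0
```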
At game level 308, a plurality of game controllers is provided, of which only three are shown for the sake of description only and without wishing to be limited in any way. These game controllers are shown in the context of a game called the “plane game”, in which the user controls the flight of a virtual plane with his/her body part(s). Each such game controller receives the gesture tracking information from a particular gesture provider, such that trunk flexion/extension gesture provider 320 provides tracking information to a trunk flexion/extension plane controller 326. Steering wheel gesture provider 322 provides tracking information to a steering wheel plane controller 328; and forearm pronation/supination gesture provider 324 provides tracking information to a forearm pronation/supination plane controller 330.
Each of these specific game controllers feeds information to a general plane controller 332, such that the game designer can design a game, such as the plane game, to exhibit specific game behaviors as shown as a plane behaviors module 334. General plane controller 332 determines how the tracking from the gesture providers is fed through the specific controllers and is then provided, in a preferably abstracted manner, to plane behaviors module 334. The game designer would then only need to be aware of the requirements of the general game controller and of the game behaviors module, which would increase the ease of designing, testing and changing games according to user behavior.
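As a non-limiting illustration, the following Python sketch shows one way a general game controller could aggregate the outputs of the specific controllers before handing them to a behaviors module; the control names (pitch, roll, throttle) and the stub behaviors module are assumptions for illustration only.

```python
# Sketch of a general controller aggregating specific controller outputs; the
# behaviors module is a stub and the control names are assumptions.
class PlaneBehaviors:
    """Stub of a plane behaviors module; a real game would animate the plane."""

    def apply(self, pitch, roll, throttle):
        print(f"pitch={pitch:+.2f} roll={roll:+.2f} throttle={throttle:+.2f}")


class GeneralPlaneController:
    def __init__(self, behaviors):
        self.behaviors = behaviors

    def update(self, trunk_value, steering_value, pronation_value):
        """Each input is a normalized gesture value in -1.0..1.0."""
        self.behaviors.apply(
            pitch=trunk_value,         # lean forward/back -> nose down/up
            roll=steering_value,       # steering wheel -> bank left/right
            throttle=pronation_value,  # forearm rotation -> speed
        )


# e.g. GeneralPlaneController(PlaneBehaviors()).update(0.2, -0.5, 1.0)
```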
Data from Kinect API 402 first goes to a color camera source view 406, after which the data goes to a camera color texture provider 408. Color camera source view 406 provides raw pixel data from the Kinect camera. Camera color texture provider 408 then translates the raw pixel data to a texture which then can be used for display on the screen, for example for trouble shooting.
Next the data is provided to an optional body tracking trouble shooting panel 410, which determines for example if the body of the user is in the correct position and optionally also orientation in regard to the Kinect sensor (not shown). From there, the data is provided to a body tracking provider 412, which is also shown in
For the tracking feedback flow, body tracking provider 412 also preferably communicates with a sticky avatar module 414, which shows an avatar representing the user or a portion of the user, such as the user's hand for example, modeled at least according to the body tracking behavior. Optionally the avatar could also be modeled according to the dimensions or geometry of the user's body. Both sticky avatar module 414 and body tracking provider 412 preferably communicate with a body tracking feedback manager 416. Body tracking feedback manager 416 controls the sticky avatar provided by sticky avatar module 414, which features bones and joints, by translating data from body tracking to visually update the bones and joints. For example, the sticky avatar could optionally be used with this data to provide visual feedback on the user's performance.
From body tracking trouble shooting panel 410, the data communication preferably moves to an overlay manager 418, which is also shown in
Turning now to the other side of the drawing, data from Leap Motion API 404 is transmitted to a Leap Motion camera source view 420, after which the data goes to a Leap Motion camera texture provider 422. Leap Motion camera source view 420 provides raw pixel data from the Leap Motion device. Leap Motion camera texture provider 422 then translates the raw pixel data to a texture which then can be used for display on the screen, for example for trouble shooting.
Next the data is provided to an optional hand tracking trouble shooting panel 424, which determines for example if the hand or hands of the user is/are in the correct position and optionally also orientation in regard to the Leap Motion sensor (not shown). From there, the data is provided to a hand tracking provider 426, which is also shown in
For the tracking feedback flow, hand tracking provider 426 also preferably communicates with a sticky hand module 428, which shows an avatar representing the user's hand or hands, modeled at least according to the hand tracking behavior. Optionally the hand could also be modeled according to the dimensions or geometry of the user's hand(s). Both sticky hand module 428 and hand tracking provider 426 preferably communicate with a hand tracking feedback manager 430.
From hand tracking trouble shooting panel 424, the data communication preferably moves to the previously described overlay manager 418. In this non-limiting example, if hand tracking trouble shooting panel 424 determines that the hand(s) of the user (playing the game) is not correctly positioned with regard to the Leap Motion sensor, then hand tracking trouble shooting panel 424 could provide this information to overlay manager 418. Overlay manager 418 would then cause a message to be displayed to the user controlling the computational device, to indicate the incorrect positioning of the hand(s) of the user playing the game.
User interface entry point 902 preferably also controls an apps query module 910 to provide a list of all applications according to criteria, for example to filter by functions, body part, what is analyzed and so forth; and a user app storage module 912, optionally for the user's own applications, or for metering the number of applications provided in the license.
Next, in a main menu panel 1004, the user may optionally be presented with a list of choices to be made, for example regarding which game to play and/or which user deficits are to be diagnosed or corrected. From there, once a game is selected, the user is taken to a game information panel 1006 and then to a gesture calibration panel 1008, to initiate the previously described gesture calibration process.
Optionally from main menu panel 1004, the user may select one or more languages through an options panel 1010.
Turning now to
The user may then optionally personalize one or more functions in a user creation edition panel 1018.
Next the user optionally can access data regarding a particular user (the “user” in this case is the game player) in a performance panel 1020. This data may optionally be represented as a graph in performance graph 1022.
Next, if the user hasn't done so already, the user is prompted by checking module 1104 to insert a hardware dongle 1106 into a port of the computational device, such as a USB (universal serial bus) port as a non-limiting example. Checking module 1104 checks for user security, optionally to verify user login details match, but at least to verify that dongle 1106 is valid. Checking module 1104 also checks to see if a valid, unexpired license is still available through dongle 1106. If dongle 1106 is not valid or does not contain a license that at one point was valid (even if expired now), the process stops and the software launch is aborted. An error message may optionally be shown.
If dongle 1106 is valid and contains a license that at one point was valid, software launch 1108 continues. Next checking module 1104 checks to see that the license is not expired. If the license is currently valid and not expired, then a time to expiration message 1110 is shown. Otherwise, if the license is expired, then an expired license message 1112 is shown.
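By way of non-limiting illustration, the following Python sketch captures the license checking flow described above; the dongle is represented as a simple mapping rather than any real dongle API, and the field names are assumptions.

```python
# Sketch of the license checks; the dongle is a plain mapping, not a real API.
from datetime import date


def check_license(dongle, today=None):
    """Return 'abort', 'expired' or 'valid' according to the flow above."""
    today = today or date.today()
    if not dongle.get("is_valid") or "expiry" not in dongle:
        return "abort"    # launch is aborted; an error message may be shown
    if dongle["expiry"] < today:
        return "expired"  # an expired-license message is shown
    return "valid"        # a time-to-expiration message is shown


# e.g. check_license({"is_valid": True, "expiry": date(2030, 1, 1)}) == "valid"
```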
In stage 6, access to patient information and to other parts of the system is preferably only possible if the dongle or other secondary verification device is validated in stage 4.
A launcher 1200 is initiated upon launch of the system, as shown in
An initial screen 1252 invites the user to log in through a login screen 1254, if the launcher detects that the system is offline or cannot validate through the internet. The offline validation method optionally includes checking the time remaining on the USB dongle as previously described. Alternatively, launcher 1200 uses the grace period (by checking how long the application is allowed to run without a license). License status information is preferably provided by a license status view 1256. Any update information is provided by an update view 1258. If a software update is available, preferably update view 1258 enables the user to select and download the software update. Optionally the update is automatically downloaded in the background and then the user is provided with an option as to whether to install it. If an update is considered to be urgent or important, optionally it will also install automatically, for example as a default.
When the user successfully logs in, the system is started with the launch of the software interface. From that point, both applications (the software interface and the launcher) are linked through a TCP channel; if one application terminates or loses communication with the other, optionally both terminate. The launcher then periodically verifies that the user license is still valid.
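As a non-limiting illustration of the launcher-side link, the following Python sketch periodically checks both that the companion process is reachable over the TCP channel and that the license is still valid; the host, port, interval and license check hook are assumptions for illustration only.

```python
# Sketch of a launcher-side watchdog; host, port, interval and the license
# check hook are assumptions for illustration.
import socket
import time


def watch(host="127.0.0.1", port=5555, interval_seconds=60,
          license_is_valid=lambda: True):
    while True:
        try:
            # Heartbeat over the TCP channel linking launcher and interface.
            with socket.create_connection((host, port), timeout=5):
                pass
        except OSError:
            break  # companion process died or lost communication; exit as well
        if not license_is_valid():
            break  # license is no longer valid; stop the session
        time.sleep(interval_seconds)
```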
System 1300 preferably features two core processes for operating a games session and the launcher. The games session is operated by a framework 1306, which supports game play. Launch is operated by a launcher 1312, which optionally operates as described for
Framework 1306 is preferably supported by one or more engines 1316, which may optionally be third party engines. For example and without limitation, engines 1316 may optionally include the mono runtime, the Unity Engine and one or more additional third party engines. Engines 1316 may then optionally be able to communicate with one or more sensors 1322 through one or more drivers 1318. Drivers 1318 in turn communicate with one or more sensors 1322 through an operating system 1320, which assists in abstracting data collection and communication. Sensors 1322 may optionally include a Kinect and a Leap Motion sensor, as shown. The user may optionally provide inputs through user inputs 1324, such as a keyboard and mouse for example. All of the various layers are preferably operated by and/or through a computational device 1326 as shown.
Patient inputs 1502 are provided to a car control layer 1506 of a game subsystem 1504. Car control layer 1506 includes control inputs which receive their information directly from patient inputs 1502. For example, a shoulder control input receives information from the shoulder movement of patient inputs 1502. Steering wheel movement from patient inputs 1502 is provided to steering wheel control in car control layer 1506. Trunk movement from patient inputs 1502 is provided to trunk control in car control layer 1506.
Car control layer 1506 then provides the collected inputs to a car control module 1508, to determine how the patient is controlling the car. Car control module 1508 then provides the control information to a car behavior output 1510, which determines how the car in the game will behave, according to the patient movement.
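By way of non-limiting illustration, the following Python sketch shows one possible mapping from patient inputs to car behavior outputs in this example; the specific mapping of shoulder, steering wheel and trunk inputs to steering, acceleration and braking is an assumption for illustration.

```python
# Sketch of mapping normalized patient inputs to car behavior outputs; the
# specific mapping is an assumption for illustration.
def car_behavior(shoulder, steering_wheel, trunk):
    """Inputs are normalized gesture values in -1.0..1.0."""
    return {
        "steering_angle": steering_wheel,    # turn left/right
        "acceleration": max(0.0, shoulder),  # lift the shoulder to accelerate
        "braking": max(0.0, trunk),          # lean the trunk forward to brake
    }


# e.g. car_behavior(shoulder=0.8, steering_wheel=-0.2, trunk=0.0)
#      -> {"steering_angle": -0.2, "acceleration": 0.8, "braking": 0.0}
```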
A controller 1604 optionally performs one or more of the following: redirect a user to the screen shown in
While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made, including different combinations of various embodiments and sub-embodiments, even if not specifically described herein.