The present invention relates to a rehabilitation assistance system, a rehabilitation assistance method, and a rehabilitation assistance program.
In the above technical field, patent literature 1 discloses a system configured to assist rehabilitation performed for a hemiplegic patient suffering from apoplexy or the like.
Patent literature 1: Japanese Patent Laid-Open No. 2015-228957
In the technique described in the above literature, however, it is impossible to perform active target updating according to an action of a user, and the same load needs to be repeated for any user.
The present invention provides a technique for solving the above-described problem.
One example aspect of the present invention provides a rehabilitation assistance system comprising:
an action detector configured to detect a first rehabilitation action of a user;
a display controller configured to display an avatar image that moves in accordance with the detected first rehabilitation action and a target image representing a target of the first rehabilitation action;
an evaluator configured to evaluate a rehabilitation ability of the user by comparing the first rehabilitation action and a target position represented by the target image; and
an updater configured to update the target position in accordance with an evaluation result by the evaluator,
wherein the display controller performs display to request a second rehabilitation action in addition to the first rehabilitation action, and
the evaluator evaluates the rehabilitation ability based on both the first rehabilitation action and the second rehabilitation action.
Another example aspect of the present invention provides a rehabilitation assistance method comprising:
detecting a first rehabilitation action of a user;
displaying an avatar image that moves in accordance with the detected first rehabilitation action and a target image representing a target of the first rehabilitation action;
evaluating a rehabilitation ability of the user by comparing the first rehabilitation action and a target position represented by the target image; and
updating the target position in accordance with an evaluation result in the evaluating,
wherein in the displaying, display to request a second rehabilitation action in addition to the first rehabilitation action is performed, and
in the evaluating, the rehabilitation ability is evaluated based on both the first rehabilitation action and the second rehabilitation action.
Still another example aspect of the present invention provides a rehabilitation assistance program for causing a computer to execute a method, comprising:
detecting a first rehabilitation action of a user;
displaying an avatar image that moves in accordance with the detected first rehabilitation action and a target image representing a target of the first rehabilitation action;
evaluating a rehabilitation ability of the user by comparing the first rehabilitation action and a target position represented by the target image; and
updating the target position in accordance with an evaluation result in the evaluating,
wherein in the displaying, display to request a second rehabilitation action in addition to the first rehabilitation action is performed, and
in the evaluating, the rehabilitation ability is evaluated based on both the first rehabilitation action and the second rehabilitation action.
According to the present invention, it is possible to perform active target updating according to the rehabilitation action of a user.
Example embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that the relative arrangement of the components, the numerical expressions and numerical values set forth in these example embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
A rehabilitation assistance system 100 according to the first example embodiment of the present invention will be described with reference to
As shown in
The action detector 101 detects a rehabilitation action of a user 110. The display controller 102 displays an avatar image that moves in accordance with the detected rehabilitation action and a target image representing the target of the rehabilitation action.
The evaluator 103 evaluates the rehabilitation ability of the user in accordance with the difference between the rehabilitation action and a target position represented by the target image. The updater 104 updates the target position in accordance with the evaluation result by the evaluator 103.
The action detector 101 further detects a second rehabilitation action of the user during a first rehabilitation action. When the evaluation of the first rehabilitation action alone reaches or exceeds a predetermined level, the evaluator 103 evaluates the rehabilitation ability based on both the first rehabilitation action and the second rehabilitation action. This makes it possible to perform active and proper target updating according to the rehabilitation action of the user.
A rehabilitation assistance system 200 according to the second example embodiment of the present invention will be described next with reference to
As shown in
In addition, the rehabilitation assistance server 210 includes an action detector 211, a display controller 212, an evaluator 213, an updater 214, a voice input/output unit 215, a target database 216, and a background image+question answer database 217.
The action detector 211 acquires the positions of the controllers 234 and 235 in the hands of a user 220 and the position of the head mounted display 233 via the base stations 231 and 232, and detects the rehabilitation action of the user 220 based on changes in the positions.
The display controller 212 causes the head mounted display 233 to display an avatar image that moves in accordance with the detected rehabilitation action and a target image representing the target of the rehabilitation action.
In addition, for example, as shown in
The evaluator 213 compares the rehabilitation action detected by the action detector 211 and the target position represented by the target image displayed by the display controller 212, and evaluates the rehabilitation ability of the user. More specifically, the evaluator 213 decides, by comparing the positions in a three-dimensional virtual space, whether the avatar image 311 that moves in correspondence with the rehabilitation action detected by the action detector 211 overlaps the object 411 serving as the target image. If these overlap, the evaluator 213 evaluates that one rehabilitation action is completed, and adds a point. As for the position of the object 411 in the depth direction, various steps (for example, three steps) are prepared and set to different points (a high point for a far object, and a low point for a close object), respectively.
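The literature discloses no source code, but the overlap decision and depth-based scoring can be sketched as follows (Python); the distance test, the three depth labels, and the point values are illustrative assumptions:

    import math

    # Assumed points per depth step: a far object scores high, a close one low.
    DEPTH_POINTS = {"close": 1, "middle": 2, "far": 3}

    def action_completed(avatar_pos, object_pos, reach):
        # Compare positions in the three-dimensional virtual space and decide
        # whether the avatar image overlaps the object serving as the target image.
        return math.dist(avatar_pos, object_pos) <= reach

    def add_point(total, avatar_pos, object_pos, reach, depth):
        # When one rehabilitation action is completed, add the point set for
        # the depth step at which the object was placed.
        if action_completed(avatar_pos, object_pos, reach):
            total += DEPTH_POINTS[depth]
        return total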
The updater 214 updates the target task in accordance with the accumulated points. For example, the target task may be updated using a task achievement ratio (number of achieved targets / number of tasks) or the like.
In addition, a target according to the attribute information of the user (for example, whether the user is an athlete or suffers from Parkinson's disease) is set by referring to the target database 216. For example, in the case of an injured athlete, an initial value that does not make the injury worse is set. In the case of a user suffering from Parkinson's disease, an exercise that slows the progress of the disease is set as the initial value. Furthermore, each patient may first be asked to move through his/her action range, which is then set as the initial value, so that the target is initialized in accordance with the user.
Next, in step S503, the avatar images 311 and 312 are displayed in accordance with the positions of the controllers 234 and 235 detected by the action detector 211. Furthermore, in step S505, the object 411 is displayed at a position and speed according to the set task.
In step S507, the motions of the avatar images 311 and 312 and the motion of the object 411 are compared, and it is determined whether the task is completed. If the task is not completed, the process directly returns to step S505, and the next object is displayed without changing the difficulty of the task.
If the task is completed, the process advances to step S509 to calculate an accumulated point, a task achievement probability, and the like. The process further advances to step S511 to compare the accumulated point, the task achievement probability, or the like with a threshold T. If the accumulated point, the task achievement probability, or the like exceeds the predetermined threshold T, the process advances to step S513 to update the exercise intensity of the task. If the accumulated point, the task achievement probability, or the like does not reach the threshold T, the process returns to step S505, and the next object is displayed without changing the difficulty of the task.
For example, when the achievement level in a short range exceeds 80% (or a count such as 10 times may be used), the display frequency of an object in a middle range is raised. When the achievement level of the object in the middle range exceeds 80% (or a count such as 10 times may be used), the display frequency of an object in a long range is raised. Conversely, if the achievement level is low, the target value may be set to the short range.
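One possible form of this staged updating is sketched below (Python); the 80% threshold follows the text, while the frequency increments and the low-achievement fallback cutoff are assumptions:

    def update_task(achieved, shown, freq):
        # achieved/shown: per-range counts; freq: per-range display frequencies.
        ratio = {r: (achieved[r] / shown[r]) if shown[r] else 0.0
                 for r in ("short", "middle", "long")}
        if ratio["short"] > 0.8:       # short-range achievement level exceeds 80%
            freq["middle"] += 0.1      # raise middle-range display frequency (assumed step)
        if ratio["middle"] > 0.8:      # middle-range achievement level exceeds 80%
            freq["long"] += 0.1        # raise long-range display frequency (assumed step)
        if ratio["short"] < 0.3:       # low achievement (assumed cutoff): fall back
            freq.update(short=1.0, middle=0.0, long=0.0)
        return freq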
As for the task updating here as well, the task is changed in accordance with the attribute of the user (for example, whether the user is an injured athlete or a patient suffering from Parkinson's disease). As the task updating method, a method of switching the background image is also conceivable.
After the task is updated, the process advances to step S515, and the fatigue level of the user is calculated and compared with a threshold N. If the fatigue level exceeds the predetermined threshold, the "stop condition" is satisfied, and the processing is ended. For example, (fatigue level = 1 − collection ratio of closest objects) can be calculated. Alternatively, (fatigue level = 1 / eye motions) may be calculated. If it is obvious that the user is not concentrating (for example, the user is not searching for an object at all or does not move the head), continuing the rehabilitation would be meaningless, and the user is made to take a break. In addition, the fatigue level may be calculated by detecting a decrease in the speed (acceleration) of stretching out the hand.
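The two fatigue measures named above and the stop condition against the threshold N can be sketched as follows (Python; the function and variable names are hypothetical):

    def fatigue_level(closest_collected, closest_shown, eye_motions=None):
        # fatigue level = 1 - collection ratio of closest objects, or,
        # when eye tracking is available, fatigue level = 1 / eye motions.
        if eye_motions:
            return 1.0 / eye_motions
        return 1.0 - (closest_collected / closest_shown if closest_shown else 0.0)

    def stop_condition(fatigue, threshold_n):
        # The processing is ended when the fatigue level exceeds the threshold N.
        return fatigue > threshold_n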
Additionally, for example, when the accumulated point exceeds a predetermined threshold, the user is instructed which of the two left and right controllers 234 and 235 should be used to touch the object 411 (the right one here), as indicated by a character image 601 shown in
Note that in
(Dual Task)
An able-bodied person makes two or more actions simultaneously, for example, "walks while talking" in daily life. Such "an ability to make two actions simultaneously" declines with age. For example, "stopping when talked to during walking" occurs. It is considered that an elderly person falls not only because of "deterioration of the motor function" but also because of such "decline in the ability to make two actions simultaneously". In fact, there are many elderly persons who are judged to have sufficiently recovered the motor function by rehabilitation but fall after returning home. One factor responsible for this is that the rehabilitation is performed in a state in which the environment and conditions are organized to allow the person to concentrate on the rehabilitation action. That is, a living environment includes factors that impede concentration on an action, and an action is often made under a condition in which, for example, the view is poor, an obstacle exists, or consciousness is turned to a conversation.
Hence, it is considered important to perform rehabilitation that divides the user's attention. It is preferable to give a specific dual task and perform training. Such dual task training is an effective program not only to prevent a fall of an elderly person but also to prevent dementia.
Dual task training includes not only training that combines a cognitive task and an exercise task but also training that combines two types of exercise tasks.
As a cognitive task + exercise task, training such as walking while counting down from 100 by ones can be performed. As an exercise task + exercise task, training such as walking without spilling water from a glass can be performed.
In a case in which the walking speed is about 20% lower in a dual-task walking test than in simple walking, the evaluator 213 evaluates that the risk of a fall is high, and notifies the display controller 212 to repeat the dual task.
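This evaluation rule reduces to a one-line comparison (Python sketch; the variable names are illustrative):

    def high_fall_risk(simple_walk_speed, dual_task_walk_speed):
        # The risk of a fall is evaluated as high when dual-task walking is
        # about 20% (or more) slower than simple walking.
        return dual_task_walk_speed <= 0.8 * simple_walk_speed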
Note that the dual task is more readily effective for "a person having a relatively high moving ability". For example, for an elderly person who cannot move without a stick even indoors, strengthening the balance ability (muscle power, sense of equilibrium, and the like) is given higher priority than the dual task ability. As a rough rule, the dual task ability is important for a person requiring support, whereas the balance ability, rather than the dual task ability, is important for a person requiring care. A time-series change in calibration is displayed, so the improvement in the exercise range of the user is shown visually.
(Setting Change by User Attribute)
For a patient expected to improve normally (a patient suffering from an orthopedic disease such as a bone fracture and assumed to recover completely), the hardest rehabilitation actions are set to speed up the improvement.
For a patient whose degree of improvement varies individually (in a case of brain infarction or the like, paralysis of a different form occurs depending on the morbid portion), the load of a task is increased to some extent, and the increase of the load is stopped at a certain level.
In the case of a patient whose function declines in principle due to Parkinson's disease or the like, periodically evaluating the currently possible range of exercise is useful.
(Other Examples of Dual Task Training)
In addition, a number may simply be displayed on each object, and only the acquisition of objects with large numbers may be evaluated. Alternatively, a traffic signal may be displayed in the background image 313, and when the user acquires an object at a red light, the evaluator 213 may deduct points.
According to this example embodiment, since the task is updated in accordance with the achievement level (for example, achievement probability) of the rehabilitation action, a load according to the degree of progress of the rehabilitation can be given to the user. In addition, when the background image 313 is displayed, the patient can enjoy the rehabilitation and also perform it in a situation in which he/she turns his/her consciousness to the periphery, and can thus lead a safer life after returning to the physical world.
As the user's reaction, the object collection ratio is expected to drop under a dual task. The goal is a state in which the object collection ratio does not change even when the dual task is displayed. The object collection ratio or object reach ratio in a single task is compared with that in a dual task, and training is repeated until the difference falls within a predetermined range.
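The repetition criterion can be sketched as follows (Python); the tolerance value is an assumption, since the text only says "a predetermined range":

    def dual_task_mastered(single_task_ratio, dual_task_ratio, tolerance=0.05):
        # Compare the object collection (or reach) ratio in a single task with
        # that in a dual task; training is repeated until the difference falls
        # within the predetermined range (here assumed to be 5 points).
        return abs(single_task_ratio - dual_task_ratio) <= tolerance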
Dual task training that simultaneously requires a motor function and a cognitive function has been described above. However, the present invention is not limited to this, and dual task training that simultaneously requires two motor functions may be performed.
For example, as shown in
Additionally, for example, as indicated by an image 1201 shown in
In addition, the user may be required to keep the avatar image on the side opposite to the one collecting the object in constant contact with a designated place. The user may be required to collect the object while pressing a designated one of the buttons provided on the controllers 234 and 235 a predetermined number of times. In addition, when a sensor configured to acquire the motion of a foot of the user is provided, the user may be required to move a designated foot.
A rehabilitation assistance system according to the third example embodiment of the present invention will be described next with reference to
In the second example embodiment, the moving distance of an avatar image 1320, that is, an exercise distance 1312 of a user 220, is measured based on the distance between a reference 1310 and the sensor of the avatar image 1320 (the head portion of the avatar image 1320). A target distance 1311, that is, the distance through which the user 220 is required to move an arm or the like, is decided based on the distance between the reference 1310 and a reference line 1331 of an object 1330 serving as a target image. As a rehabilitation exercise, the user 220 moves the avatar image 1320 and brings it close to the object 1330.
However, as shown in
The system provider side wants the avatar image 1320 to touch the object 1330 only when the user 220 completely stretches out the arm as the rehabilitation exercise. However, if the size of the object 1330 is large (the distance between the apex and the reference line 1331 is long), it is determined that the avatar image 1320 touches the object 1330 even when it just touches an edge of the object 1330. Hence, the user 220 does not move the arm by the initially assumed distance, and the expected rehabilitation effect is difficult to obtain.
In addition, since the user 220 can touch the object 1330 before he/she completely stretches out the arm, a feeling of achievement or satisfaction cannot sufficiently be obtained, and the motivation for rehabilitation may decrease.
In this case, the exercise distance 1312, that is, the distance the avatar image 1320 has actually moved, deviates from the target distance 1311, that is, the distance the user 220 should move. For this reason, the user 220 cannot do the exercise through the exercise distance 1312 set before the start of the rehabilitation, and the effect obtained by the rehabilitation is less than the expected effect.
For example, the length of one side of the object 1330 is set to 20.0 cm, and a diameter 1321 of the sensor portion of the avatar image 1320 (the head portion of the avatar image 1320) is set to 5.0 cm. In this case, when the user 220 makes the avatar image 1320 touch not the reference line 1331 but the apex 1332 of the object 1330, an error of about 10.0 cm is generated between the target distance 1311 and the exercise distance 1312.
For this reason, since the user 220 does not move the avatar image 1320 by the exercise distance 1312 assumed before the start of the rehabilitation, the effect of the rehabilitation the user 220 should enjoy decreases.
On the other hand, if the object 1330 is made small enough that the user 220 can touch it only by completely stretching out the arm, it becomes difficult for the user 220 to visually recognize the position of the object 1330 on the screen. If the object 1330 cannot be visually recognized, the rehabilitation cannot be performed at all.
In this example embodiment, the sensor portion (reactive portion) of the avatar image 1320 is formed as a region smaller than the head portion of the avatar image 1320. This can decrease the deviation (error) between the target distance 1311 and the exercise distance 1312.
From the viewpoint of the user 220, when the size of the object 1330 is made small, the object 1330 becomes difficult to see (the visibility lowers). To compensate for this decrease in visibility, a visual recognition support image 1333 is arranged around the object 1330 that has been made small.
For example, the length of one side of the object 1330 is set to 5.0 cm, the length of one side of the visual recognition support image 1333 is set to 20.0 cm, and the diameter of a sensor portion 1322 of the avatar image 1320 is set to 2.0 cm. Then, the error (deviation) between the target distance 1311 and the exercise distance 1312 decreases to about 2.0 cm.
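This configuration can be expressed as follows (Python sketch; the axis-aligned box test is an assumed simplification of the undisclosed collision logic, and only the small object, never the surrounding support image, is used for the touch decision):

    # Sizes quoted in the text, in meters.
    OBJECT_SIDE = 0.05       # 5.0 cm object 1330 (target image)
    SUPPORT_SIDE = 0.20      # 20.0 cm visual recognition support image 1333
    SENSOR_DIAMETER = 0.02   # 2.0 cm sensor portion 1322 of the avatar image

    def touches_object(sensor_pos, object_center):
        # Collision is tested against the small object only; the larger
        # visual recognition support image is display-only.
        half = OBJECT_SIDE / 2 + SENSOR_DIAMETER / 2
        return all(abs(s - c) <= half for s, c in zip(sensor_pos, object_center))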
This makes it possible to decrease the deviation (error) between the target distance 1311 and the exercise distance 1312 while the gradation difference and the size difference between the object 1330 and the visual recognition support image 1333 prevent the visibility of the object 1330 from lowering. Additionally, as a secondary effect, the quality of the experience obtained by bringing the avatar image 1320 into contact with the object 1330 increases. That is, the sensation of touching the object 1330 is clear for the user 220, and the joy in achieving the target of the rehabilitation also increases.
However, as shown in
Other shapes of the visual recognition support image 1333 will be described next with reference to
As shown in
As shown in
As shown in
Using the alternate long and short dashed line of the visual recognition support image 1360 as a guideline, the user 220 moves the line of sight along the alternate long and short dashed line and visually recognizes the object 1330, thereby recognizing the existence position of the object 1330. Furthermore, by moving the avatar image 1320 along the alternate long and short dashed line, the user can make the avatar image 1320 touch the object 1330. Note that when the visual recognition support image 1333 is displayed together with the visual recognition support image 1360, the visibility of the object 1330 further improves.
As shown in
As shown in
In addition, the rehabilitation assistance server may change the size of the visual recognition support image 1333 not in accordance with the degree of progress of rehabilitation of the user 220 but in accordance with, for example, the eyesight of the user 220. That is, the rehabilitation assistance server displays the large visual recognition support image 1333 for the user 220 with poor eyesight, and displays the small visual recognition support image 1333 for the user 220 with relatively good eyesight. In this way, the rehabilitation assistance server may display the visual recognition support image having a size according to the eyesight of the user 220.
Additionally, for example, if the user 220 has dementia, the rehabilitation assistance server may display the visual recognition support image 1333 in a size according to the degree of progress of the dementia or the cognitive function. Note that the size of the visual recognition support image 1333 may be changed automatically by the rehabilitation assistance server, or manually by an operator, such as a doctor, who operates the rehabilitation assistance system, or by the user 220.
The action detector 1411 acquires the position of a controller in the hand of the user 220 and the position of a head mounted display or the like worn by the user 220, and detects the motion (rehabilitation action) of the user 220 based on changes in the acquired positions.
The display controller 1412 causes the display unit 1402 to display the avatar image 1320 that moves in accordance with the detected rehabilitation action, the target image representing the target of the rehabilitation action, and at least one visual recognition support image 1333 used to improve the visibility of the target image.
The display controller 1412 displays the target image and the visual recognition support image 1333 in a superimposed manner. For example, the size of the target image is made smaller than the size of the visual recognition support image 1333, and the target image is displayed such that it is included in the visual recognition support image 1333.
The display controller 1412 may display the target image, for example, near the center of the visual recognition support image 1333. In addition, the display controller 1412 may display the target image not near the center of the visual recognition support image 1333 but at a position included in the visual recognition support image 1333 and on a side close to the avatar image 1320, that is, on the near side when viewed from the user 220.
The display controller 1412 may identifiably display the object 1330 and the visual recognition support image 1333. More specifically, for example, the gradation of the object 1330 is displayed darker than that of the visual recognition support image 1333. Since the object 1330 is displayed darker, a contrast difference is generated with respect to the visual recognition support image 1333 displayed lighter, and the user 220 can reliably recognize the object 1330. Note that how to apply gradation to the object 1330 and the visual recognition support image 1333 is not limited to the method described here. For example, gradation may be applied such that even the user 220 with poor eyesight can reliably identify the object 1330 and the visual recognition support image 1333.
In addition, the display controller 1412 displays the object 1330 and the visual recognition support image 1333 in different colors so as to identifiably display the object 1330 and the visual recognition support image 1333. The display controller 1412 applies, for example, a dark color to the object 1330 and a light color to the visual recognition support image 1333. However, the combination (pattern) of applied colors is not limited to this. For example, a combination of colors that allows even the user 220 with color anomaly (color blindness) to reliably identify the object 1330 and the visual recognition support image 1333 may be used. Furthermore, the display controller 1412 may perform coloring capable of coping with users 220 having various conditions such as weak eyesight, a narrowed visual field, and color anomaly. Note that the colors to be applied to the object 1330 and the visual recognition support image 1333 may be selected by the user 220 or by an operator such as a doctor.
Note that the gradations and colors of the object 1330 and the visual recognition support image 1333 have been described here. The gradations and colors may similarly be changed for the other visual recognition support images 1340, 1350, 1360, 1370, and 1380 as well.
Furthermore, the display controller 1412 controls the change of the display of the visual recognition support image 1333 in accordance with at least one of the eyesight of the user 220 and the evaluation result of the evaluator 1413. For example, the display controller 1412 changes the size of the visual recognition support image 1333 in accordance with the eyesight of the user 220, the degree of progress of the rehabilitation of the user 220, the degree of progress of the dementia of the user 220, or the like.
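A minimal sketch of this size control follows (Python); the breakpoints and magnifications are assumptions, since the text leaves the concrete values open:

    def support_image_magnification(eyesight=None, rehab_progress=None):
        # Display a larger visual recognition support image for a user with
        # poor eyesight or early in rehabilitation, and a smaller one otherwise.
        if eyesight is not None and eyesight < 0.5:          # assumed eyesight cutoff
            return 2.0                                       # assumed large magnification
        if rehab_progress is not None and rehab_progress < 0.3:  # assumed progress cutoff
            return 1.5
        return 1.0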
The evaluator 1413 compares the rehabilitation action detected by the action detector 1411 and the target position represented by the object 1330 serving as the target image displayed by the display controller 1412 and evaluates the rehabilitation ability of the user 220.
The updater 1414 updates the target position represented by the object 1330 in accordance with the evaluation result of the evaluator 1413.
The display unit 1402 displays the target image, the visual recognition support image, and the like under the control of the display controller 1412. The display unit 1402 is a head mounted display, a display, a screen, or the like but is not limited to these.
The current level 1514 is data representing the current rehabilitation level of the patient. That is, the current level 1514 is data representing the degree of progress or the like of the rehabilitation of the patient. The data is data dividing rehabilitation stages from the initial stage to the final stage into a plurality of ranks, for example, A rank, B rank, and the like. Note that the rehabilitation level division method is not limited to this. The rehabilitation menu 1515 is information concerning the menu of rehabilitation that the patient should undergo.
Next,
The target image ID 1521 is an identifier used to identify the object 1330 to be displayed on the display unit 1402. The visual recognition support image ID 1522 is an identifier used to identify the visual recognition support image 1333, 1340, 1350, 1360, 1370, or 1380 to be displayed on the display unit 1402. The display parameter 1523 is a parameter necessary for displaying the object 1330 or the visual recognition support image 1333, 1340, 1350, 1360, 1370, or 1380 on the display unit 1402. The display parameter 1523 includes, for example, pieces of information such as a position and a magnification. However, the pieces of information included in the display parameter 1523 are not limited to these.
The image type 1531 is information for discriminating whether the image to be displayed is a target image or a visual recognition support image. The image data 1532 is the image data of the object 1330 or the visual recognition support image 1333 to be displayed on the display unit 1402 and includes image data of various image file formats. The display position 1533 is data representing a position in the display unit 1402 at which an image should be displayed, and is, for example, the data of a set of (X-coordinate position, Y-coordinate position, Z-coordinate position). The magnification 1534 is data used to decide the size to display the object 1330, the visual recognition support image 1333, or the like on the display unit 1402.
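For illustration only, the two tables can be modeled as the following record types (Python sketch; the field names are paraphrases of the column descriptions above, not disclosed identifiers):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class DisplayParameterRow:                 # one row of the display parameter table 1502
        target_image_id: str                   # target image ID 1521
        support_image_id: str                  # visual recognition support image ID 1522
        position: Tuple[float, float, float]   # part of display parameter 1523
        magnification: float                   # part of display parameter 1523

    @dataclass
    class ImageRow:                            # one row of the image table 1503
        image_type: str                        # image type 1531: "target" or "support"
        image_data: bytes                      # image data 1532 (any image file format)
        display_position: Tuple[float, float, float]  # display position 1533 (X, Y, Z)
        magnification: float                   # magnification 1534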
The rehabilitation assistance server 1401 refers to the tables 1501, 1502, and 1503 and displays the visual recognition support images 1333, 1340, 1350, 1360, 1370, and 1380 on the display unit 1402.
The RAM 1640 is a random access memory used by the CPU 1610 as a work area for temporary storage. In the RAM 1640, an area to store data necessary for implementation of this example embodiment is allocated. Patient data 1641 is data concerning a patient who undergoes rehabilitation using the rehabilitation assistance system. Image data 1642 is the data of the object 1330 serving as a target image or the visual recognition support image 1333 to be displayed on the display unit 1402. A display position 1643 is data representing a position in the display unit 1402 at which the object 1330 or the visual recognition support image 1333 should be displayed. A magnification 1644 is data representing the size to display an image such as the object 1330 or the visual recognition support image 1333 on the display unit 1402. These data are read out from, for example, the patient table 1501, the display parameter table 1502, and the image table 1503.
Input/output data 1645 is data input/output via the input/output interface 1660. Transmission/reception data 1646 is data transmitted/received via the network interface 1630. In addition, the RAM 1640 includes an application execution area 1647 used to execute various kinds of application modules.
The storage 1650 stores databases, various kinds of parameters, and the following data and programs necessary for implementation of this example embodiment. The storage 1650 stores the patient table 1501, the display parameter table 1502, and the image table 1503. The patient table 1501 is a table that manages the relationship between the patient ID 1511 and the attribute information 1512 and the like shown in
The storage 1650 further stores an action detection module 1651, a display control module 1652, an evaluation module 1653, and an updating module 1654.
The action detection module 1651 is a module configured to detect the rehabilitation action of the user 220. The display control module 1652 is a module configured to display the avatar image 1320, the object 1330 serving as a target image, the visual recognition support image 1333 used to improve the visibility of the object 1330, and the like on the display unit 1402. The evaluation module 1653 is a module configured to evaluate the rehabilitation ability of the user 220. The updating module 1654 is a module configured to update the target position represented by the target image in accordance with the evaluation result. The modules 1651 to 1654 are loaded into the application execution area 1647 of the RAM 1640 and executed by the CPU 1610. A control program 1655 is a program configured to control the entire rehabilitation assistance server 1401.
The input/output interface 1660 interfaces input/output data to/from an input/output device. A display unit 1661 and an operation unit 1662 are connected to the input/output interface 1660. In addition, a storage medium 1664 may further be connected to the input/output interface 1660. Furthermore, a speaker 1663 that is a voice output unit, a microphone that is a voice input unit, or a GPS (Global Positioning System) position determiner may be connected. Note that programs and data concerning general-purpose functions or other implementable functions of the rehabilitation assistance server 1401 are not illustrated in the RAM 1640 and the storage 1650 shown in
In step S1701, the rehabilitation assistance server 1401 causes the display unit 1402 or the like to display a visual recognition support image.
In step S1721, the rehabilitation assistance server 1401 acquires patient information representing the attribute of the patient who undergoes rehabilitation using the rehabilitation assistance system 1400 and what kind of rehabilitation menu the patient should undergo.
In step S1723, the rehabilitation assistance server 1401 acquires display parameters necessary for displaying, on the display unit 1402, the visual recognition support image 1333 and the like. The display parameters to be acquired are parameters concerning the position and magnification of the visual recognition support image 1333 and the like.
In step S1725, the rehabilitation assistance server 1401 acquires image data of the visual recognition support image 1333. In step S1727, the rehabilitation assistance server 1401 displays the visual recognition support image 1333 and the like on the display unit 1402.
In step S1729, the rehabilitation assistance server 1401 judges whether the display of the visual recognition support image 1333 and the like needs to be changed. If the display change is not needed (NO in step S1729), the rehabilitation assistance server 1401 ends the processing. If the display change is needed (YES in step S1729), the rehabilitation assistance server 1401 advances to the next step.
In step S1731, the rehabilitation assistance server 1401 changes the size of the visual recognition support image 1333 in accordance with the eyesight of the user 220 or the evaluation result of the rehabilitation ability of the user 220.
According to this example embodiment, even if the size of the target image is made small to reduce the deviation between the target distance and the exercise distance, the effect of the rehabilitation can be increased by keeping the target distance and the exercise distance close while maintaining the visibility of the target image. In addition, since the sensation of touching the target image is clear for the user, the user can experience a feeling of satisfaction in achieving the target.
A rehabilitation assistance system according to the fourth example embodiment of the present invention will be described next with reference to FIGS. 18 to 21.
A rehabilitation assistance system 1800 includes a rehabilitation assistance server 1801 and a sound output unit 1802. The rehabilitation assistance server 1801 includes a sound output controller 1811. The sound output controller 1811 controls output of a sound in accordance with the positional relationship between an object 1330 serving as a target image and an avatar image 1320. The sound whose output is controlled by the sound output controller 1811 is output from the sound output unit 1802.
For example, when the object 1330 falls downward from above, the sound output controller 1811 outputs a sound based on the distance, that is, the positional relationship between the object 1330 and the avatar image 1320.
The output sound may be changed to a sound of a higher frequency as the distance between the object 1330 and the avatar image 1320 decreases, that is, as the object 1330 moves closer to the avatar image 1320. Similarly, the output sound may be changed to a sound of a lower frequency as the distance between the object 1330 and the avatar image 1320 increases, that is, as the object 1330 moves away from the avatar image 1320. That is, an acoustic effect like the Doppler effect, in which the observed frequency of a sound (wave) differs in accordance with the distance between the object 1330 (sound source) and the avatar image 1320 (the user 220 as observer), may be expressed. Note that instead of changing the frequency of the output sound, the volume of the output sound may be increased/decreased in accordance with the distance between the object 1330 and the avatar image 1320.
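A sketch of the distance-to-pitch (or distance-to-volume) mapping follows (Python); the base frequency and scaling constants are assumptions, since the text does not specify them:

    def output_frequency(distance, base_freq=440.0, k=2.0):
        # Higher frequency as the object 1330 approaches the avatar image 1320,
        # lower frequency as it moves away, in the manner of the Doppler effect.
        return base_freq / (1.0 + k * distance)

    def output_volume(distance, max_volume=1.0):
        # Alternative: increase/decrease volume instead of frequency.
        return max_volume / (1.0 + distance)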
In addition, the position of the object 1330 may be instructed to the user 220 by outputting a sound from the sound output controller 1811. That is, the position of the object 1330 is instructed using the sense of hearing of the user 220.
For example, consider a case in which the user 220 wears a headphone when using the rehabilitation assistance system 1800. When the object 1330 serving as a target image is located on the right side of the avatar image 1320 (user 220), the rehabilitation assistance server 1801 outputs a sound from the right ear side of the headphone. Similarly, when the object 1330 is located on the left side of the avatar image 1320 (user 220), the rehabilitation assistance server 1801 outputs a sound from the left ear side of the headphone. This allows the user 220 to judge, based on the direction of the sound, whether the object 1330 is located on the right side or left side of the user 220. In addition, when the object 1330 is located in front of the avatar image 1320 (user 220), the rehabilitation assistance server 1801 outputs a sound from both sides of the headphone.
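The left/right cue can be sketched as simple stereo gains (Python; the width of the "in front" dead zone is an assumption):

    def headphone_gains(object_x, avatar_x, dead_zone=0.05):
        # Returns (left, right) gains: sound from the right ear side when the
        # object is on the right of the avatar image, the left ear side when
        # on the left, and both sides when the object is in front.
        dx = object_x - avatar_x
        if abs(dx) <= dead_zone:
            return (1.0, 1.0)
        return (0.0, 1.0) if dx > 0 else (1.0, 0.0)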
In the above description, the position of the object 1330 is instructed using the sense of sight or the sense of hearing of the user 220. However, one of the five senses other than the sense of sight and the sense of hearing, for example, the sense of taste, the sense of touch, or the sense of smell may be used to instruct the position of the object 1330 to the user 220.
For example, a sensor is placed on the tongue of the user 220 to cause the user 220 to feel a taste according to the position of the object 1330. Alternatively, the controller in the hand of the user 220 or the headphone or head mounted display worn by the user 220 may be vibrated. That is, the position of the object 1330 may be instructed using the sense of touch of the user 220.
A storage 2050 stores databases, various kinds of parameters, and the following data and programs necessary for implementation of this example embodiment. The storage 2050 stores the sound table 1901. The sound table 1901 is a table that manages the relationship between the image type 1531 and the sound data 1911 shown in
The storage 2050 further stores a sound output control module 2051. The sound output control module 2051 is a module configured to control output of a sound in accordance with the positional relationship between the object 1330 serving as a target image and the avatar image 1320. The module 2051 is loaded into an application execution area 1647 of the RAM 2040 and executed by the CPU 1610. Note that programs and data concerning general-purpose functions or other implementable functions of the rehabilitation assistance server 1801 are not illustrated in the RAM 2040 and the storage 2050 shown in
In step S2101, the rehabilitation assistance server 1801 controls output of a sound. In step S2121, the rehabilitation assistance server 1801 acquires the position of the avatar image 1320. In step S2123, the rehabilitation assistance server 1801 acquires the position of the object 1330. In step S2125, the rehabilitation assistance server 1801 determines the positional relationship between the avatar image 1320 and the object 1330. In step S2127, the rehabilitation assistance server 1801 controls the output of a sound in accordance with the determined positional relationship.
According to this example embodiment, since the rehabilitation is executed using the sense of hearing in addition to the sense of sight of the user, the user can more easily visually recognize the object, and the effect obtained by the rehabilitation can further be enhanced. In addition, the user can grasp the position of the object not only by the sense of sight but also by the sense of hearing. Furthermore, since a sound is output, even a user with poor eyesight can undergo the rehabilitation according to this example embodiment.
A system according to the fifth example embodiment of the present invention will be described next with reference to
On the other hand, in a case of patient ID 002, the exercise level is low, but the cognitive level is high. In this case, the distance to the object, that is, the distance to stretch out the hand at maximum is short (here, for example, level 2 in five levels), the object appearance range is wide (here, for example, level 5), and the speed of the motion of the object is low (here, for example, level 1). On the other hand, the object appearance interval is short (here, for example, 5 in five levels), and both the object size and the sensor size are small (here, for example, 5 in five levels).
In a case of patient ID 003, both the exercise level and the cognitive level are low. In this case, the distance to the object, that is, the distance to stretch out the hand at maximum is short (here, for example, level 1 in five levels), the object appearance range is narrow (here, for example, level 1), and the speed of the motion of the object is low (here, for example, level 1). In addition, the object appearance interval is long (here, for example, 1 in five levels), and both the object size and the sensor size are large (here, for example, 1 in five levels).
In this way, the parameters are variously changed in accordance with the attributes of the patient.
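The examples above suggest that the motor level drives the reach distance and the object speed while the cognitive level drives the appearance range, the appearance interval, and the object/sensor sizes; the following sketch (Python) encodes that reading, with 1 the easiest setting and 5 the hardest, as an assumed approximation of the disclosed table:

    def task_parameters(motor_level, cognitive_level):
        # motor_level, cognitive_level: 1 (low ability) to 5 (high ability).
        # E.g., a patient with low motor but high cognitive ability gets a short
        # reach distance and slow objects, but a wide appearance range, a short
        # appearance interval (5 = short), and small object/sensor sizes (5 = small).
        return {
            "reach_distance": motor_level,       # distance to stretch out the hand
            "object_speed": motor_level,         # speed of the motion of the object
            "appearance_range": cognitive_level,
            "appearance_interval": cognitive_level,
            "object_size": cognitive_level,
            "sensor_size": cognitive_level,
        }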
In general, the relationship between the motor function, the cognitive function, and various kinds of parameters is expected as shown in
After that, the user 220 moves the controllers 234 and 235 in accordance with the position of the falling object 2411 to move an avatar image 311 (not shown in
In this example as well, it is possible to set a rehabilitation intensity appropriate for the user by changing various kinds of parameters (the distance to the falling sweet potato, the range of appearance of the farmer, the falling speed of the sweet potato, the interval at which the farmer throws the sweet potato, and the size of the basket) in accordance with the motor function and the cognitive function of the user.
According to the above-described examples, it is possible to give a task to both the motor function and the cognitive function of the user. For example, a task for the cognitive function of the user can be given by displaying a preliminary state such as a farmer bending forward or a monkey appearing, and a task for the motor function of the user can be given by changing the distance, direction, speed, and the like of an object. That is, the patient is caused to perform both a motor rehabilitation action of stretching out an arm and a cognitive rehabilitation action of predicting the next appearance position of an object and moving the line of sight. This makes it possible to perform more effective rehabilitation.
Note that the visual recognition support image described in the third example embodiment may be additionally displayed for the object in each of
While the invention has been described with reference to example embodiments thereof, the invention is not limited to these example embodiments. For example, the display device is not limited to the head mounted display but may be a large screen. The controller is not limited to a grip type but may be a wearable sensor.
While the invention has been particularly shown and described with reference to example embodiments thereof, the invention is not limited to these example embodiments. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims.
The present invention is applicable to a system including a plurality of devices or a single apparatus. The present invention is also applicable even when an information processing program for implementing the functions of example embodiments is supplied to the system or apparatus directly or from a remote site. Hence, the present invention also incorporates the program installed in a computer to implement the functions of the present invention by the computer, a medium storing the program, and a WWW (World Wide Web) server that causes a user to download the program. Especially, the present invention incorporates at least a non-transitory computer readable medium storing a program that causes a computer to execute processing steps included in the above-described example embodiments.
This application is based upon and claims the benefit of priority from Japanese patent application No. 2017-086674, filed on Apr. 25, 2017, and Japanese patent application No. 2017-204243, filed on Oct. 23, 2017, the disclosures of all of which are incorporated herein in their entireties by reference.