This application is based upon and claims the benefit of priority from Japanese patent application No. 2021-148075, filed on Sep. 10, 2021, and Japanese patent application No. 2022-027232, filed on Feb. 24, 2022, the disclosures of which are incorporated herein in their entirety by reference.
The present invention relates to a cognitive ability estimation apparatus, a method thereof, and a program.
In the above technical field, patent literature 1 discloses a technique of supporting rehabilitation of a patient.
Patent literature 1: Japanese Patent Laid-Open No. 2015-228957
However, the literature describes that “it is possible to intuitively grasp the progress of treatments, that is, the recovery of motor functions or reflect it on a current action as needed”; the purpose of the treatments is therefore “recovery of motor functions”. For this reason, the conventional technique does not perform cognitive function determination.
The present invention provides a technique for solving the above-described problem.
One example aspect of the invention provides a cognitive ability estimation apparatus comprising:
In the cognitive ability estimation apparatus, the estimator further estimates one of a driving risk and a falling risk based on the estimated cognitive ability level.
Another example aspect of the present invention provides a cognitive ability estimation method comprising:
Still another example aspect of the present invention provides a cognitive ability estimation program for causing a computer to execute:
According to the present invention, it is possible to accurately evaluate the cognitive ability of a user.
Example embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that the relative arrangement of the components, the numerical expressions and numerical values set forth in these example embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
There are a lot of indices for evaluating a cognitive level, including MMSE (Mini-Mental State Examination), HDS-R (Hasegawa's Dementia Scale-Revised), MoCA (Montreal Cognitive Assessment), TMT (Trail-Making Test), and FAB (Frontal Assessment Battery).
However, these indices pose questions or tasks to a user on paper, and are therefore evaluation indices that require language elements or include many elements, such as literacy and writing ability, that are irrelevant to pure cognition.
To obtain correct results, an examination takes 15 to 20 minutes or more. If examinations are repeated within a short period, the user learns the answers to the questions. In addition, if the condition is mild, a so-called ceiling effect occurs, and the cognitive function is hardly reflected in the examination result. As described above, these methods can evaluate the cognitive level only in a specific range.
Hence, there is a demand for a cognitive ability evaluation method that does not depend on literacy, writing ability, or a language, can be performed in a short period, can estimate various cognitive ability levels, and can avoid a situation where a subject knows answers due to repetitive examinations.
If such a method is established, training suited to the estimated cognitive function can be conducted continuously, which is expected to be effective in improving cognitive function (https://www.neurology-jp.org/guidelinem/degl/degl_2017_02.pdf, p. 26).
A cognitive ability estimation apparatus 100 according to the first example embodiment of the present invention will be described with reference to
As shown in
The display controller 101 generates and displays, in a virtual space 150, a target object 152 configured to urge a user 110 to do a three-dimensional body action.
As the attributes of the target object 152 in the virtual space 150, the setter 102 can set at least one of the moving speed, the number of displayed objects, the size, the display position, and the display interval.
If the user 110 achieves the body action to the target object 152, the estimator 103 estimates the cognitive ability level of the user 110 in accordance with the attributes of the target object 152. As for the estimation of the cognitive ability level, one body action is defined as one task, and the estimation may be performed based on one task, or may be done based on the results of a plurality of tasks.
With the above-described configuration, it is possible to estimate the cognitive ability level of the user more accurately.
A rehabilitation support system 200 according to the second example embodiment of the present invention will be described with reference to
As shown in
The two base stations 231 and 232 sense the motion of the head mounted display 233 and the motions of the controllers 234 and 235, and send these to the rehabilitation support apparatus 210. The rehabilitation support apparatus 210 performs display control of the head mounted display 233 based on the motion of the head mounted display 233. The rehabilitation support apparatus 210 also evaluates the rehabilitation action of the user 220 based on the motions of the controllers 234 and 235. Note that the head mounted display 233 can be of a non-transmissive type, a video see-through type, an optical see-through type, or a spectacle type. In this example embodiment, a virtual space of VR (Virtual Reality) is presented to the user. However, a physical space and a virtual space may be displayed in a superimposed manner, like AR (Augmented Reality), physical information may be reflected on a virtual space, like MR (Mixed Reality), or a hologram technology may be used as an alternative means.
In this example embodiment, the controllers 234 and 235 held in the hands of the user 220 and the base stations 231 and 232 have been described as an example of sensors configured to detect the position or action of the hand or head of the user. However, the present invention is not limited to this. A camera (including a depth sensor) configured to detect the positions or actions of the hands themselves of the user by image recognition processing, or a sensor configured to detect the positions of the hands of the user by temperature, may be used. A wristwatch-type wearable terminal put on an arm of the user, a motion capture device, or the like can also be applied to the present invention in cooperation with an action detector 211. That is, using a three-dimensional tracking device such as Kinect® or an action analysis device, or attaching a marker to the body, is also one example embodiment.
The rehabilitation support apparatus 210 includes the action detector 211, display controllers 212 and 213, a feedback unit 214, an evaluation updater 215, a task set database 216, a setter 217, an estimator 218, and a storage unit 219.
The action detector 211 acquires, via the base stations 231 and 232, the positions of the controllers 234 and 235 held in the hands of the user 220, and detects the rehabilitation action of the user 220 based on changes in the positions of the hands of the user 220.
The display controller 212 generates and displays, in a virtual space 240, a target object 242 configured to urge the user 220 to do a three-dimensional body action. In particular, the display controller 212 generates, in the virtual space, avatar objects 241 that move in accordance with a detected rehabilitation action and the target object 242 representing the target of the rehabilitation action. The display controller 212 displays, on a display screen (in the virtual space 240), the images of the avatar objects 241 and the target object 242 in accordance with the direction and position of the head mounted display 233 detected by the action detector 211. The images of the avatar objects 241 and the target object 242 are superimposed on a background image 243. Here, the avatar objects 241 have the same shape as the controllers 234 and 235. However, the present invention is not limited to this, and the size, shape, or color may be changed on the left and right sides. The avatar objects 241 move in the virtual space 240 in accordance with the motions of the controllers 234 and 235. The controllers 234 and 235 are each provided with at least one button and configured to perform various kinds of settings including initial settings such as origin setting by operating the button. The button can be disabled, or instead of arranging the button, all settings may be executed separately using an external operation unit. The background image 243 is cut out from a virtual space including a horizontal line 244 and a ground surface image 245.
The display controller 212 generates the target object 242 in the virtual space and moves it downward from above the user 220. Accordingly, on the head mounted display 233, the target object is displayed such that its display position and size gradually change (for example, it gradually becomes large and then becomes small). Note that as for the moving direction of the target object, for example, the target object may be displayed such that it rises from the floor surface to above the head. The movement may also be any three-dimensional movement, including movement in the depth direction or the left-and-right direction in addition to movement in the vertical direction, or the target object may be fixed at a specific coordinate position without moving.
The user 220 moves the controllers 234 and 235 to bring the avatar objects 241 in the screen close to the target object 242. If the avatar object 241 hits the target object 242, the display controller 212 causes the target object 242 to disappear, and the feedback unit 214 determines that the target action is achieved and displays a message. More specifically, if the shortest distance between the target object 242 and a sensor object included in the avatar object 241 falls within a predetermined range, the target is achieved, and the target object 242 disappears. At this time, if the shortest distance between the target object 242 and the sensor object (for example, a spherical object including the center point of the avatar object 241) is equal to or less than a predetermined threshold S1, the target is completely achieved. In the case of complete achievement, “excellent” is displayed, and a corresponding voice is simultaneously output as feedback. At the same time, the controller 234 or 235 held by the hand that has achieved the task may be vibrated, or stimuli may be imparted to the sense of smell or the sense of taste. The rehabilitation action may be evaluated in three or more levels depending on how much the distance between the sensor object and the target object 242 decreases. Two or more target objects may also be generated and displayed simultaneously in the virtual space. Notifications of task achievement by stimulation of the five senses may be combined in any way in accordance with the action type or the like. If the shortest distance between the target object 242 and the sensor object included in the avatar object 241 is not less than the threshold S1 and not more than a threshold S2, “well done” is displayed because the target is achieved, and a corresponding voice is output as feedback. Note that the output voice need not be the same as the displayed message, and a nonverbal sound effect, for example, “dingdong”, may be used.
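The distance-threshold grading described above can be summarized in a short sketch. The following is a minimal illustration, not the apparatus's actual implementation; it assumes spherical sensor and target objects whose center coordinates are known, and uses the thresholds S1 and S2 from the description.

```python
import math

# Minimal sketch of the threshold-based grading described above, assuming
# spherical sensor/target objects with known center coordinates.
def grade_catch(sensor_center, target_center, s1, s2):
    """Return "excellent", "well done", or None for one catch attempt."""
    d = math.dist(sensor_center, target_center)  # shortest center distance
    if d <= s1:
        return "excellent"   # complete achievement of the target
    if d <= s2:
        return "well done"   # target achieved, but away from the center
    return None              # not achieved; the target object remains
```

For example, `grade_catch((0, 1.2, 0.5), (0.02, 1.25, 0.5), s1=0.06, s2=0.15)` returns "excellent" because the centers are within S1 of each other.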
The display controller 213 displays a radar screen image 250 on the display screen of the head mounted display 233. The radar screen image 250 is a notification image used to notify the user of the generation of the target object 242. The radar screen image 250 notifies the user of the direction in which the generated target object 242 is located relative to a reference direction in the virtual space (initially set in the front direction of the user sitting straight on a chair). The radar screen image 250 also notifies the user of how far the position of the generated target object 242 is from the user 220. Note that the notification image is not limited to the radar screen image, and the notification may be made using characters, an arrow, a symbol, an illustration, or the type, intensity, blinking, or the like of light or a color. The notification method is not limited to an image, and may use a voice, a vibration, or any combination of a voice, a vibration, and an image. Independently of the direction of the head of the user 220, the display controller 213 displays the radar screen image 250 at the center (for example, within the range of −50° to 50°) of the display screen of the head mounted display 233. However, the display portion is not limited to the center, and may be, for example, any place such as the four corners, the upper end, the lower end, the left end, or the right end of the screen.
The radar screen image 250 includes a head image 251 representing the head of the user viewed from above, a block image 252 obtained by dividing the periphery of the head image 251 into a plurality of blocks, and a fan-shaped image 253 as a visual field image representing the visual field of the user. A target position image representing the position of a target object is shown by coloring, blinking, or lighting a block in the block image 252. This allows the user 220 to know whether the target object exists on the left side or the right side with respect to the direction in which he/she faces. Note that in this example embodiment, the block image 252 is fixed, and the fan-shaped image 253 moves. However, the present invention is not limited to this, and the block image 252 may be moved in accordance with the direction of the head while fixing the fan-shaped image 253 and the head image 251. More specifically, if the head turns to the left, the block image 252 may rotate to the right.
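As an illustration of how a target direction might be mapped to one block of the block image 252, consider the following sketch; the block count and the coordinate convention are assumptions for illustration, not part of the described apparatus.

```python
import math

# Assumed convention: user at the origin, z axis = reference (front)
# direction, x axis = rightward; the circle is divided into equal blocks.
def radar_block(target_x, target_z, num_blocks=12):
    angle = math.degrees(math.atan2(target_x, target_z)) % 360
    return int(angle // (360 / num_blocks))  # index of the block to light
```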
The feedback unit 214 preferably changes the message type via the display controller 212 in accordance with the evaluation of the rehabilitation action. For example, if the sensor object contacts the center of the target object 242, “excellent” is displayed. If the sensor object contacts only the peripheral portion around the center of the target object 242, “well done” is displayed. The size of the target object 242 and the size of the peripheral portion can be set by the setter 217. The size of the sensor object can also be set by the setter 217.
The feedback unit 214 performs feedback to impart stimuli to two or more of the five senses (sense of sight, sense of hearing, sense of touch, sense of taste, and sense of smell) of the user who has virtually touched the target object 242. The feedback is performed almost at the same time as the timing at which the sensor object enters a predetermined distance from the center of the target object 242 or the timing at which the sensor object contacts the target object 242 (real-time multi-channel biofeedback). The effect is large if a delay from the timing to feedback is, for example, 1 sec or less. The shorter the interval between the operation timing of the user and the timing of feedback is (the smaller the delay is), the larger the effect is. While performing feedback of giving stimuli to the sense of sight of the user by an image “excellent!”, the feedback unit 214 simultaneously performs feedback of giving stimuli to the sense of hearing of the user by a voice output from a speaker 236. The stimulating feedback to the five senses (sense of sight, sense of hearing, sense of touch, sense of taste, and sense of smell) is performed to notify the user of the presence/absence of achievement in each task.
Also, the feedback unit 214 may simultaneously output feedback of giving stimuli to the sense of sight of the user 220 by the image “excellent!”, feedback of giving stimuli to the sense of hearing of the user 220 by the voice output from the speaker, and feedback of giving stimuli to the sense of touch of the user 220 by causing the controller 234 to vibrate. Alternatively, the feedback unit 214 may simultaneously output only two types of feedback, including feedback of giving stimuli to the sense of sight of the user 220 by the image “excellent!”, and feedback of giving stimuli to the sense of touch of the user 220 by causing the controller 234 to vibrate. The feedback unit 214 may simultaneously output only two types of feedback, including feedback of giving stimuli to the sense of hearing of the user 220 by a voice “excellent!”, and feedback of giving stimuli to the sense of touch of the user 220 by causing the controller 234 to vibrate.
The action of the user 220 moving the controllers 234 and 235 is a rehabilitation action, and display of one target object urging the user 220 to perform one rehabilitation action is called a task. Information (task data) representing one task includes at least the moving speed, the number of displayed objects, the size, the display position, and the display interval as the attributes of the target object 242 in the virtual space 240. The task data may also include the appearance direction of the target object (right 90°, right 45°, front, left 45°, and left 90° with respect to the front direction of the chair), the distance to the target object, the shape (size) of the target object, the appearance position (the distance from the user in the depth direction), the appearance interval (time interval), the moving speed in falling, rising, or the like, the color of the target object, which of the left and right controllers should be used for acquisition, the number of target objects that appear simultaneously, the size of the sensor object, and the like. The distance from the user 220 to the fall position of the target object 242 in the depth direction may be set continuously, or may be set to one of, for example, three stages. For example, the target object can be made to fall quite near the user 220, or to fall at a position that the user 220 cannot reach without inclining the body far forward. This makes it possible to control the exercise load given to the user and the load on the spatial cognitive ability or spatial comprehension ability.
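One possible in-memory representation of such task data is sketched below; the field names and types are illustrative assumptions, not the actual schema of the task set database 216.

```python
from dataclasses import dataclass

# Illustrative encoding of the task data enumerated above (assumed names).
@dataclass
class TaskData:
    moving_speed_cm_s: float    # falling/rising speed of the target object
    num_displayed: int          # number of simultaneously displayed targets
    size_cm: float              # shape (size) of the target object
    direction_deg: float        # e.g. -90, -45, 0, 45, 90 from chair front
    depth_stage: int            # e.g. one of three depth stages
    display_interval_s: float   # appearance (time) interval
    color: str                  # color of the target object
    hand: str                   # "left" or "right" controller to use
    sensor_size_cm: float       # size of the avatar's sensor object
```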
The evaluation updater 215 evaluates the rehabilitation action of the user in accordance with the amount and quality of the tasks achieved by the user 220, and adds points. Here, the quality of an achieved task refers to “well done” or “excellent”, that is, how close the avatar object could be brought to the target object. The evaluation updater 215 compares the rehabilitation action detected by the action detector 211 with the target position represented by the target object displayed by the display controller 212, and evaluates the rehabilitation capability of the user 220. More specifically, it is decided, by comparing the positions in the three-dimensional virtual space, whether the target object 242 and the avatar object 241 that moves in correspondence with the detected rehabilitation action overlap. If they overlap, it is evaluated that one rehabilitation action is cleared, and a point is added. The display controller 212 can make the target object 242 appear at different positions in the depth direction (for example, positions of three stages), and the evaluation updater 215 gives different points accordingly (a high point for a far object and a low point for a close object).
The evaluation updater 215 updates a target task in accordance with the integrated point. For example, a target task may be updated using a task achievement ratio (the number of achieved targets/the number of tasks).
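For example, a ratio-driven update could look like the following sketch; the concrete update policy (which attribute changes and by how much) is an assumption, since the text leaves it open.

```python
# Hypothetical ratio-based task update (the actual policy is unspecified).
def achievement_ratio(achieved: int, total: int) -> float:
    return achieved / total if total else 0.0

def update_speed(current_speed_cm_s: float, ratio: float) -> float:
    if ratio >= 0.8:                   # user clears most tasks: raise load
        return current_speed_cm_s * 1.1
    if ratio <= 0.4:                   # user struggles: lower the load
        return current_speed_cm_s * 0.9
    return current_speed_cm_s          # keep the current difficulty
```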
The task set database 216 stores a set of a plurality of tasks. A task represents one rehabilitation action that the user should perform. More specifically, as information representing one task, the position of a target object that appears, its speed and size, and the size of an avatar object at that time, and the like are stored. The task set database 216 stores a task set that decides the order of providing the plurality of tasks to the user. For example, task sets may be stored as templates for each hospital, or a history of executed task sets may be stored for each user. The rehabilitation support apparatus 210 may be configured to be communicable with another rehabilitation support apparatus via the Internet. In this case, one task set may be executed by the same user in a plurality of places, or various templates may be shared by a plurality of users in remote sites.
As the attributes of the target object 242 in the virtual space 240, the setter 217 sets at least one of the moving speed, the number of displayed objects, the size, the display position, and the display interval. If the user 220 achieves a body action to the target object 242, the estimator 218 estimates the cognitive ability level of the user 220 in accordance with the attributes of the target object 242. The setter 217 can set a delay time from the timing of notifying the generation of the target object 242 to the timing of actually generating the target object 242, thereby controlling the cognitive load given to the user 220. That is, during the time after the user learns, from the radar screen image 250 or the like, the position in the virtual space where the target object will be generated (a position indicating in which direction the head mounted display should be directed to display the target object) until the actual generation of the target object, the user needs to continuously memorize and hold the action that he/she should perform. This “memorization time” is a cognitive load for the user. Also, the setter 217 may control the cognitive load by changing the time not “until the timing of generating the target object 242” but “until the target object 242 approaches the range the user 220 can reach”. The setter 217 may also give a cognitive load to the user 220 by displaying the background image 243 in addition to the target object 242 on the head mounted display 233.
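The “memorization time” control can be pictured as follows; this is a schematic sketch with assumed callbacks, not the actual interface of the setter 217.

```python
import time

# Schematic sketch: the delay between notifying the target position and
# actually generating the target is the user's working-memory load.
class TaskScheduler:
    def __init__(self, notify, generate):
        self.notify = notify          # e.g. light a block on the radar image
        self.generate = generate      # e.g. spawn the target object

    def run(self, position, memorization_time_s):
        self.notify(position)            # the user learns where to look
        time.sleep(memorization_time_s)  # load interval set by the setter
        self.generate(position)          # the target object actually appears
```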
Note that when changing the cognitive load, it is preferable to notify the user in advance that the cognitive load is to be increased or decreased. As for the notification method, the notification may be done by visually using characters or a symbol, by a voice, or by touching a part of the body, for example, by tapping a shoulder, an elbow, an arm, or a foot.
The display that displays the operation panel 300 can be any device, and may be an external display connected to the rehabilitation support apparatus 210 or a display incorporated in the rehabilitation support apparatus 210. The operation panel 300 includes a user visual field display region 301, various parameter setting regions 321 to 324, a score display region 303, and a re-center button 305. For descriptive convenience,
The user visual field region 301 shows an image actually displayed on the head mounted display 233. A reference direction in the virtual space is displayed in the user visual field region 301. As described with reference to
The various parameter setting regions 321 to 324 are screens configured to set a plurality of parameters for defining a task. The setter 217 can accept inputs to the various parameter setting regions 321 to 324 from an input device (not shown). The input device may be a mouse, a ten-key pad, or a keyboard, or may be various kinds of controllers, a joystick for game, or a touch panel, and can use any technical component.
The various parameter setting regions 321 to 324 include the input region 321 that decides the sizes of left and right target objects, the input region 322 that decides the size of the sensitivity range of the avatar object 241, the input region 323 that decides the moving speed of the target object, and the input region 324 that decides the position of a target object that appears next. The operation panel 300 also includes a check box 325 that sets whether to accept an operation of a target object appearance position by an input device (hot key).
The input region 321 can set, on each of the right and left sides, the radius (visual recognition size) of a visual recognition object that makes the target object position easy for the user to see, and the radius (evaluation size) of a target object that reacts with the avatar object 241. That is, in the example shown in
In the input region 322, the left and right sensor sizes of the avatar object 241 (the size of the sensor range of the sensor object) can separately be set. If the sensor size is large, a task is achieved even if the position of a hand largely deviates from the target object. Hence, the difficulty of the rehabilitation action is low. Conversely, if the sensor size is small, it is necessary to correctly move the hand to the center region (evaluation size) of the target object. Hence, the difficulty of the rehabilitation action is high. In the example shown in
In the input region 323, the speed of the target object 242 moving in the virtual space can be defined on each of the left and right sides. In this example, the speed is set to 45 cm/s.
The input region 324 is an image used to input the position of the next target object (the distance and the angle), and has the shape of an enlarged radar screen image 250. When the check box 325 is checked, if an operation of clicking or tapping one of the plurality of blocks in the input region 324 is performed, the target object 242 is generated at the position in the virtual space corresponding to the block for which the operation is performed.
That is, in the example shown in
At the point of time when the avatar object 241 contacts the visual recognition object 312, the target object 242 does not disappear, and complete achievement of the task is not obtained (a predetermined point is added, and a “good” evaluation is given). Complete achievement of the task (a “perfect” evaluation) is obtained only when the avatar object 241 contacts the target object 242.
On the other hand, the score display region 303 shows the total number of tasks at each position and a count representing how many times a task has been achieved. The point may be expressed as a fraction, a percentage, or a combination thereof. After a series of rehabilitation actions decided by one task set, the feedback unit 214 derives a rehabilitation evaluation point using the values in the score display region 303.
An example of calculation of the rehabilitation evaluation point is as follows:

Rehabilitation evaluation point = 100 × ([determination score × count of Short] + [determination score × count of Middle] × 1.1 + [determination score × count of Long] × 1.2 + [count of five continuous Perfect catches] × 10)

where the determination score is Perfect (excellent) = 2, Good (well done) = 1, and Failure = 0.
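A direct transcription of this formula is given below; the inputs are assumed to be the per-distance sums of determination scores and the number of five-catch Perfect streaks.

```python
# Direct transcription of the evaluation-point formula above; each
# "score_*" argument is the sum of determination scores (Perfect=2,
# Good=1, Failure=0) over catches at that distance.
def rehab_evaluation_point(score_short, score_middle, score_long,
                           five_perfect_streaks):
    return 100 * (score_short
                  + score_middle * 1.1
                  + score_long * 1.2
                  + five_perfect_streaks * 10)
```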
The re-center button 305 is a button that accepts, from the operator, a reconstruction instruction for reconstructing the virtual space in accordance with the position of the user 220. If the re-center button 305 is operated, the display controller 212 sets the position of the head mounted display 233 at the instant to the origin, and reconstructs the virtual space having, as the reference direction, the direction of the head mounted display 233 at that instant.
The inro 511 is displayed at a position (depth and angle) set in the input region 324 of the operation panel 300. The position of the inro 511 does not change until the user makes the avatar object 241 touch it in the virtual space. That is, the inro 511 is a target object fixed in the space (this is also called a horizontal task because the user is required to stretch the body horizontally). Such a fixed target object is very effective as rehabilitation for a disease such as cerebellar ataxia. That is, an image of a limited body motion can be imprinted, by feed forward, in the brain of a patient who has forgotten how to move his/her body. If the distance of the inro 511 in the depth direction is increased, the motion intensity can be changed. Also, if multi-channel biofeedback by five-sense stimulation notifying the achievement of each task is combined, the memory is easily fixed in the brain, and exercise ability greatly improves. Furthermore, such a horizontal task can improve chronic pains along with the reconstruction of the cerebral cortex. It is also possible to recover cognitive impairment called chemobrain, or a phenomenon in which the position sense of a cancer patient using an anticancer drug lowers due to neuropathy. The place where the target object appears may be notified in advance to give a hint and reduce the cognitive load. A touch notice given by touching the body without any language is effective in reducing the cognitive load. Repeated verbal notices are also effective in reducing the cognitive load. As for the verbal notification method, the cognitive load may be reduced by giving a simple short instruction close to the imperative form. Alternatively, a more complex instruction may be given in a questioning format, for example, “blue? (take by the right hand)”. A verbal notice may also include a cognitive task such as calculation, for example, “take by the right hand if you hear a number divisible by 2”. Note that not only the horizontal position and depth where the inro 511 is generated but also its height may be set.
The setter 217 can set the delay time from the timing at which the trigger object 602 throws up the target object 603, thereby notifying the generation of the target object 603, until the actual generation of the target object 603. Thus, the cognitive load given to the user can be adjusted. Note that in synchronism with the motion of the trigger object 602, the generation of the target object may be notified at the same timing using the radar-chart-type notification image 250, or a notification by voice may be combined.
In this way, the setter 217 can give a cognitive load to the user not only by a task of a background including only a horizontal line as shown in
In particular, the setter 217 changes the mode of the task and changes at least a part of the background image 243 over time, thereby giving a cognitive load to the user 220. In the example shown in
Also, the setter 217 causes at least two to five target objects 803 to simultaneously exist in the three-dimensional virtual space 240 and displays these on the display screen, thereby giving a cognitively stronger load to the user 220. In other words, the setter 217 generates the at least two target objects 803 at different positions in the left-and-right direction in the three-dimensional virtual space.
In particular, if the at least two target objects 803 are generated at a plurality of positions in a direction (the left-and-right direction in
The evaluation updater 215 evaluates the cognitive ability of the user using information such as whether the avatar object has reached, with good timing, the three-dimensional target position represented by the target object, the time interval from the notification of target object generation to the actual generation, the number of target objects, and the degree of the load on the background image that causes the attention disorder.
Next, in step S903, a task (that is, display of the target object and evaluation of achievement by an avatar object by the display controller 212) is started.
In step S905, if all the tasks in three continuous task sets obtain a perfect determination, it is determined that the task sets are achieved, and the process advances to step S907, where the estimator 218 calculates the cognitive ability as, for example, a cognitive age in accordance with the attributes of the target objects in the task sets.

The attribute parameters used by the estimator 218 can be selected in various ways. The cognitive ability level of the user is calculated based on at least the moving speed of the target object and the number of displayed target objects (for example, one to five) in one task set. The estimator 218 calculates the cognitive ability level using different parameters or different parameter coefficients between a case where the target object moves in the vertical direction (vertical task) and a case where it does not (horizontal task). The estimator 218 also calculates the cognitive ability level of the user using different parameters or different parameter coefficients in accordance with the background of the screen on which the target object is displayed. That is, the cognitive ability level is evaluated to be high for a user who has achieved the task on a complex background. Note that for an adult, a high cognitive ability level and a young cognitive age can be considered equivalent.

The higher the moving speed of the target object in the virtual space is in a task, the higher the cognitive ability level estimated by the estimator 218 is if the user can achieve the task. The larger the number of target objects generated in one task set in the virtual space is, the higher the estimated cognitive ability level is if the user can achieve the task. Here, the number of target objects generated in one task set indicates the number of target objects that can simultaneously exist in the virtual space. That is, if the number is two, at most two target objects are displayed simultaneously in the virtual space. The larger the number is, the more targets the user needs to recognize simultaneously, and the larger the load of parallel processing in the brain is.
The shorter the appearance interval of the target object in the virtual space is, the higher the cognitive ability level estimated by the estimator 218 is if the user can achieve the task. The smaller the size of the target object is, the higher the estimated cognitive ability level is if the user can achieve the task. The wider the appearance range of the target object is, the higher the estimated cognitive ability level is if the user can achieve the task. By employing such an estimation method, various problems of the cognitive function evaluation indices represented by MMSE, HDS-R, MoCA, TMT, and FAB can be solved. More specifically, it is possible to estimate pure cognitive ability while excluding elements irrelevant to it, such as literacy and writing ability, to keep the examination time very short, and to eliminate the learning effect and the ceiling effect caused by repetitive examinations.
The storage unit 219 stores an equation or table representing the relationship between the cognitive age and the attributes of the target object for which the user has achieved a body action. Using the equation or the table, the estimator 218 estimates the cognitive ability level of the user as a cognitive age. The apparatus may include an updater configured to update the equation or the table stored in the storage unit 219 based on big data that associates actual ages with results of cognitive ability estimation. Machine learning may be performed using the actual age as supervisory data. As the population data grows, the estimation accuracy of the cognitive ability level improves.
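As a sketch of such an update step, an ordinary-least-squares refit over accumulated records could look as follows; this is only one of many possible learning methods and is not prescribed by the text.

```python
import numpy as np

# Illustrative refit of the age-estimation coefficients from accumulated
# (task attributes, actual age) records by ordinary least squares.
def refit_coefficients(attributes: np.ndarray, actual_ages: np.ndarray):
    # attributes: one row per record (speed, number of targets, ...).
    X = np.hstack([attributes, np.ones((len(attributes), 1))])  # add bias
    coef, *_ = np.linalg.lstsq(X, actual_ages, rcond=None)
    return coef  # updated equation/table parameters for the storage unit
```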
The setter 217 can also set the attributes of the target object to be displayed next in accordance with the cognitive ability level of the user estimated by the estimator 218.
The estimator 218 can calculate the cognitive age as the cognitive ability using, for example, the following expressions.

(1) Estimation accuracy priority (adjusted coefficient of determination, Adjusted R-squared: 0.8974; Akaike Information Criterion AIC = 2936): as many parameters as possible are introduced to obtain the highest estimation accuracy, so that the accuracy is high enough to make additional learning unnecessary.

Cognitive age = 80 (simple background, horizontal task, number of target objects = 1, size 10 cm, range 90°, speed 25 cm/sec)
+ [−2.5 years if the complexity of the background is raised]
+ [−10 years if a vertical task is possible]
+ [−5 years every time the number of target objects is increased by one]
+ [−1 year every time the size of the target object is decreased by 1 cm from 10 cm]
+ [−0.4 years every time the generation range of the target object is extended by 10°]
+ [−2.4 years every time the speed in the task is increased by 10 cm/s]

(2) Practicality priority (adjusted coefficient of determination, Adjusted R-squared: 0.7806; Akaike Information Criterion AIC = 3310): while estimation accuracy is maintained at a predetermined level, the number of variable parameters is reduced to two.

Cognitive age = 84 (simple background, horizontal task, number of target objects = 1, size 10 cm, range 90°, speed 25 cm/sec)
+ [−5 years every time the number of target objects is increased by one]
+ [−2.7 years every time the speed in the task is increased by 10 cm/s]
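Both expressions translate directly into code, as in the sketch below; the function names are ours, and the argument baselines mirror the stated reference conditions (simple background, horizontal task, one 10 cm target, 90° range, 25 cm/s).

```python
# Direct transcription of expressions (1) and (2) above (assumed names).
def cognitive_age_accuracy(complex_background: bool, vertical_task: bool,
                           num_targets: int, size_cm: float,
                           range_deg: float, speed_cm_s: float) -> float:
    age = 80.0                                    # baseline conditions
    if complex_background:
        age -= 2.5
    if vertical_task:
        age -= 10.0
    age -= 5.0 * (num_targets - 1)                # per extra target
    age -= 1.0 * (10.0 - size_cm)                 # per 1 cm below 10 cm
    age -= 0.4 * (range_deg - 90.0) / 10.0        # per extra 10 degrees
    age -= 2.4 * (speed_cm_s - 25.0) / 10.0       # per extra 10 cm/s
    return age

def cognitive_age_practical(num_targets: int, speed_cm_s: float) -> float:
    # Expression (2): only two variable parameters, baseline 84 years.
    return 84.0 - 5.0 * (num_targets - 1) - 2.7 * (speed_cm_s - 25.0) / 10.0
```

For example, `cognitive_age_practical(4, 45)` yields 84 − 15 − 5.4 = 63.6 years.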
If the cognitive age is 65 years or more, the cognitive function is low, and it is evaluated that driving is dangerous. If the cognitive age is 55 to 65 years, the cognitive function is in decline, and it is evaluated that attention is needed in driving. On the other hand, if the cognitive age is 54 years or less, the cognitive function has no problem, and it is evaluated that the user is sufficiently capable of driving. A falling risk in daily life may be evaluated similarly. Here, the thresholds are set to 65 years and 55 years. However, the thresholds may be changed in accordance with the required action. For example, if the user drives in a region with little traffic, it may be evaluated that driving is possible even if the cognitive age is 65 years.
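A minimal sketch of this threshold rule follows; the thresholds are kept as parameters because, as noted above, they may be changed in accordance with the required action.

```python
# Threshold rule from the text (65 and 55 years), kept adjustable.
def driving_risk(cognitive_age: float, caution: float = 55,
                 danger: float = 65) -> str:
    if cognitive_age >= danger:
        return "driving is dangerous"
    if cognitive_age >= caution:
        return "attention is needed in driving"
    return "sufficiently capable of driving"
```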
Note that it has been found that use of the rehabilitation support system 200 according to this example embodiment can improve and maintain the cognitive function level (the effect of one training session continues for three weeks, and the effect can be fixed at an almost constant level if the training is performed three times). Hence, for a user once judged to have a driving risk, the evaluation may later be changed to “driving OK”.
When estimating the cognitive age, the setter 217 may display a message concerning control of the cognitive load (attribute parameters), for example, “reduce the speed and test again”. A recommended numerical value may be presented, for example, “set the speed to 25 cm/sec”. Also, to correctly calculate the cognitive function level, various kinds of parameters may be controlled automatically.
If the rehabilitation support system 200 according to this example embodiment is used, it is possible to identify which cognitive function has declined and conduct rehabilitation more effectively. For example, a user evaluated to be 65 years old in a vertical task at 25 cm/sec is capable of spatial recognition, and if the speed is low (25 cm/sec), he/she can handle a multitask (four targets). For such a user, training that gradually increases the speed is effective.
As described above, in this example embodiment, the cognitive function is estimated by adjusting two or more parameters including the time limit (falling speed or the like), the size of the target, the motion of the target, the number of targets, the display position of the target in the three-dimensional space, and background information for intentionally causing an attention disorder. The system according to this example embodiment is a system that requires color identification. The system according to this example embodiment is also a system configured to judge the type of an attention disorder or treat the attention disorder, and a system configured to evaluate or treat a falling risk or a driving risk.
According to this example embodiment, it is also possible to judge which of the following attention disorders the user has: a sustained attention disorder (for example, the user cannot continuously concentrate on an object), a conversion attention disorder (for example, if two target objects are generated, the user diverts attention to the next object before completely taking the first one), a divided attention disorder (for example, the user cannot take a target object outside the visual field in which the space can be recognized (cannot see a notification image)), and a selective attention disorder (for example, the user cannot concentrate on an object as soon as a background is displayed (the user cannot discriminate the object that should be seen)).
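A toy mapping from observed failure patterns to these four types might look as follows; the pattern labels are assumptions for illustration, not outputs the system is described as producing.

```python
# Assumed illustration: observed failure pattern -> attention-disorder type.
def classify_attention_disorder(pattern: str) -> str:
    rules = {
        "loses_concentration_over_time": "sustained attention disorder",
        "switches_before_completing_a_target": "conversion attention disorder",
        "misses_targets_outside_the_visual_field": "divided attention disorder",
        "distracted_when_background_appears": "selective attention disorder",
    }
    return rules.get(pattern, "no attention disorder identified")
```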
While the invention has been particularly shown and described with reference to example embodiments thereof, the invention is not limited to these example embodiments. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims. A system or apparatus including any combination of the individual features included in the respective example embodiments may be incorporated in the scope of the present invention.
The present invention is applicable to a system including a plurality of devices or to a single apparatus. The present invention is also applicable even when an information processing program for implementing the functions of the example embodiments is supplied to the system or apparatus directly or from a remote site. Hence, the present invention also incorporates the program installed in a computer to implement the functions of the present invention by the computer, a medium storing the program, and a WWW (World Wide Web) server that allows a user to download the program. In particular, the present invention incorporates at least a non-transitory computer readable medium storing a program that causes a computer to execute the processing steps included in the above-described example embodiments.
Number | Date | Country | Kind
---|---|---|---
2021-148075 | Sep 2021 | JP | national
2022-027232 | Feb 2022 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/012211 | 3/17/2022 | WO |