The present disclosure relates to an evaluation apparatus, an evaluation method, and an evaluation program.
In recent years, the number of people with cognitive impairment and brain impairment has tended to increase, and there is a demand to detect cognitive impairment and brain impairment early and to quantitatively evaluate the severity of symptoms. It is known that symptoms of cognitive impairment and brain impairment affect cognitive ability. Therefore, it is common to evaluate a subject based on the cognitive ability of the subject. For example, an apparatus has been proposed that displays a plurality of kinds of numbers, instructs a subject to add the numbers to obtain an answer, and checks the answer provided by the subject (for example, see Japanese Laid-open Patent Publication No. 2011-083403 A).
However, in the method of JP 2011-083403 A or the like, the subject selects an answer by operating a touch panel or the like, so a correct answer may be given by chance, and it is difficult to verify answers while taking such contingency into account and to ensure high evaluation accuracy. Therefore, there is a need to evaluate cognitive impairment and brain impairment with high accuracy.
The present disclosure has been conceived in view of the foregoing situation, and an object of the present disclosure is to provide an evaluation apparatus, an evaluation method, and an evaluation program capable of evaluating cognitive impairment and brain impairment with high accuracy.
An evaluation apparatus according to the present disclosure comprises: a display screen; a gaze point detection unit that detects a position of a gaze point of a subject who observes the display screen; a display control unit that performs display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of displaying, at positions that do not overlap with the target position on a same circumference of a circle centered at the target position, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; a region setting unit that sets a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; a determination unit that determines whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; an arithmetic unit that calculates gaze point data during the display period on the basis of a determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject on the basis of the gaze point data.
An evaluation apparatus according to the present disclosure comprises: a display screen; a gaze point detection unit that detects a position of a gaze point of a subject who observes the display screen; a display control unit that performs display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of detecting the gaze point of the subject and displaying, at positions that do not overlap with the gaze point of the subject at an end of the second display operation, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; a region setting unit that sets a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; a determination unit that determines whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; an arithmetic unit that calculates gaze point data during the display period on the basis of a determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject on the basis of the gaze point data.
An evaluation method according to the present disclosure comprises: displaying an image on a display screen; detecting a position of a gaze point of a subject who observes the display screen; performing display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of displaying, at positions that do not overlap with the target position on a same circumference of a circle centered at the target position, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; setting a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; determining whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; calculating gaze point data during the display period on the basis of a result of the determining; and obtaining evaluation data of the subject on the basis of the gaze point data.
A non-transitory computer readable recording medium storing therein an evaluation program according to the present disclosure that causes a computer to execute: a process of displaying an image on a display screen; a process of detecting a position of a gaze point of a subject who observes the display screen; a process of performing display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of displaying, at positions that do not overlap with the target position on a same circumference of a circle centered at the target position, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; a process of setting a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; a process of determining whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; a process of calculating gaze point data during the display period on the basis of a result of the determination process; and a process of obtaining evaluation data of the subject on the basis of the gaze point data.
An evaluation method according to the present disclosure comprises: displaying an image on a display screen; detecting a position of a gaze point of a subject who observes the display screen; performing display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of detecting the gaze point of the subject and displaying, at positions that do not overlap with the gaze point of the subject at an end of the second display operation, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; setting a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; determining whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; calculating gaze point data during the display period on the basis of a result of the determining; and obtaining evaluation data of the subject on the basis of the gaze point data.
A non-transitory computer readable recording medium storing therein an evaluation program according to the present disclosure that causes a computer to execute: a process of displaying an image on a display screen; a process of detecting a position of a gaze point of a subject who observes the display screen; a process of performing display operation including first display operation of displaying question information that is a question for the subject on the display screen, second display operation of displaying a guidance target object that guides the gaze point of the subject to a target position on the display screen, and third display operation of detecting the gaze point of the subject and displaying, at positions that do not overlap with the gaze point of the subject at an end of the second display operation, a plurality of answer target objects that are answers for the question on the display screen after the second display operation; a process of setting a specific region corresponding to a specific target object among the plurality of answer target objects and comparison regions corresponding to comparison target objects that are different from the specific target object; a process of determining whether the gaze point is present in the specific region and the comparison regions during a display period in which the third display operation is performed, on the basis of the position of the gaze point; a process of calculating gaze point data during the display period on the basis of a result of the determination process; and a process of obtaining evaluation data of the subject on the basis of the gaze point data.
Embodiments of an evaluation apparatus, an evaluation method, and an evaluation program according to the present disclosure will be described below based on the drawings. The present disclosure is not limited by the embodiments below. In addition, constituent elements described in the embodiments below include one that can be easily replaced by a person skilled in the art and one that is practically identical.
In the description below, a three-dimensional global coordinate system is set to describe positional relationships among components. A direction parallel to a first axis of a predetermined plane is referred to as an X-axis direction, a direction parallel to a second axis perpendicular to the first axis in the predetermined plane is referred to as a Y-axis direction, and a direction perpendicular to each of the first axis and the second axis is referred to as a Z-axis direction. The predetermined plane includes an XY plane.
Line-of-Sight Detection Apparatus
The display device 101 includes a flat panel display, such as a liquid crystal display (LCD) or an organic electroluminescence (EL) display. In the present embodiment, the display device 101 includes a display screen 101S. The display screen 101S displays an image. In the present embodiment, the display screen 101S displays, for example, an index for evaluating visual performance of a subject. The display screen 101S is substantially parallel to the XY plane. The X-axis direction corresponds to a horizontal direction of the display screen 101S, the Y-axis direction corresponds to a vertical direction of the display screen 101S, and the Z-axis direction corresponds to a depth direction perpendicular to the display screen 101S.
The stereo camera device 102 includes a first camera 102A and a second camera 102B. The stereo camera device 102 is arranged below the display screen 101S of the display device 101. The first camera 102A and the second camera 102B are arranged in the X-axis direction. The first camera 102A is arranged in the negative X direction relative to the second camera 102B. Each of the first camera 102A and the second camera 102B includes an infrared camera, an optical system capable of transmitting near-infrared light with a wavelength of, for example, 850 nanometers (nm), and an imaging element capable of receiving the near-infrared light.
The lighting device 103 includes a first light source 103A and a second light source 103B. The lighting device 103 is arranged below the display screen 101S of the display device 101. The first light source 103A and the second light source 103B are arranged in the X-axis direction. The first light source 103A is arranged in the negative X direction relative to the first camera 102A. The second light source 103B is arranged in the positive X direction relative to the second camera 102B. Each of the first light source 103A and the second light source 103B includes a light emitting diode (LED) light source and is able to emit near-infrared light with a wavelength of 850 nm, for example. Meanwhile, the first light source 103A and the second light source 103B may be arranged between the first camera 102A and the second camera 102B.
The lighting device 103 emits near-infrared light as detection light and illuminates an eyeball 111 of the subject. The stereo camera device 102 captures an image of a part of the eyeball 111 (hereinafter, the part of the eyeball is also referred to as the “eyeball”) by the second camera 102B when the eyeball 111 is irradiated with the detection light emitted from the first light source 103A, and captures an image of the eyeball 111 by the first camera 102A when the eyeball 111 is irradiated with the detection light emitted from the second light source 103B.
At least one of the first camera 102A and the second camera 102B outputs a frame synchronous signal. The first light source 103A and the second light source 103B output detection light based on the frame synchronous signal. The first camera 102A captures image data of the eyeball 111 when the eyeball 111 is irradiated with the detection light emitted from the second light source 103B. The second camera 102B captures image data of the eyeball 111 when the eyeball 111 is irradiated with the detection light emitted from the first light source 103A.
If the eyeball 111 is irradiated with the detection light, a part of the detection light is reflected by a pupil 112, and light from the pupil 112 enters the stereo camera device 102. Further, if the eyeball 111 is irradiated with the detection light, a corneal reflection image 113 that is a virtual image of a cornea is formed on the eyeball 111, and light from the corneal reflection image 113 enters the stereo camera device 102.
By appropriately setting the relative positions between the first camera 102A and the second camera 102B on the one hand and the first light source 103A and the second light source 103B on the other, the intensity of light that enters the stereo camera device 102 from the pupil 112 is reduced, and the intensity of light that enters the stereo camera device 102 from the corneal reflection image 113 is increased. In other words, an image of the pupil 112 captured by the stereo camera device 102 has low luminance, and an image of the corneal reflection image 113 has high luminance. The stereo camera device 102 is able to detect the position of the pupil 112 and the position of the corneal reflection image 113 on the basis of the luminance of the captured image.
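For illustration only, the luminance-based detection described above can be sketched in Python as follows. The thresholds, the NumPy-based processing, and the function name are assumptions made for this sketch; the disclosure does not specify the image processing implementation of the apparatus.

```python
# A minimal sketch of luminance-based detection: the pupil appears dark and
# the corneal reflection bright, so each center is estimated as the centroid
# of pixels beyond a hypothetical threshold on an 8-bit grayscale image.
import numpy as np

def detect_centers(image: np.ndarray,
                   dark_thresh: int = 40,
                   bright_thresh: int = 220):
    """Return ((pupil_x, pupil_y), (reflection_x, reflection_y)) in pixels."""
    ys, xs = np.nonzero(image < dark_thresh)    # low-luminance pupil pixels
    pupil = (xs.mean(), ys.mean()) if xs.size else None
    ys, xs = np.nonzero(image > bright_thresh)  # high-luminance reflection pixels
    reflection = (xs.mean(), ys.mean()) if xs.size else None
    return pupil, reflection
```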
The computer system 20, the driving circuit 40, the output device 50, and the input device 60 perform data communication via the input-output interface device 30. The computer system 20 includes an arithmetic processing device 20A and a storage device 20B. The arithmetic processing device 20A includes a microprocessor, such as a central processing unit (CPU). The storage device 20B includes a memory, such as a read only memory (ROM) and a random access memory (RAM), or a storage. The arithmetic processing device 20A performs an arithmetic process in accordance with a computer program 20C that is stored in the storage device 20B.
The driving circuit 40 generates a driving signal and outputs the driving signal to the display device 101, the stereo camera device 102, and the lighting device 103. Further, the driving circuit 40 supplies image data of the eyeball 111 that is captured by the stereo camera device 102 to the computer system 20 via the input-output interface device 30.
The output device 50 includes a display device, such as a flat panel display. The output device 50 may include a printing device. The input device 60 generates input data by being operated. The input device 60 includes a keyboard or a mouse for a computer system. The input device 60 may include a touch sensor that is arranged on a display screen of the output device 50 that serves as a display device.
In the present embodiment, the display device 101 and the computer system 20 are separated devices. However, the display device 101 and the computer system 20 may be integrated. For example, if the line-of-sight detection apparatus 100 includes a tablet personal computer, the computer system 20, the input-output interface device 30, the driving circuit 40, and the display device 101 may be mounted on the tablet personal computer.
Further, the first camera input-output unit 404A supplies image data of the eyeball 111 that is captured by the first camera 102A to the computer system 20 via the input-output unit 302. The second camera input-output unit 404B supplies image data of the eyeball 111 that is captured by the second camera 102B to the computer system 20 via the input-output unit 302.
The computer system 20 controls the line-of-sight detection apparatus 100. The computer system 20 includes a display control unit 202, a light source control unit 204, an image data acquisition unit 206, an input data acquisition unit 208, a position detection unit 210, a curvature center calculation unit 212, a gaze point detection unit 214, a region setting unit 216, a determination unit 218, an arithmetic unit 220, a storage unit 222, an evaluation unit 224, and an output control unit 226. Functions of the computer system 20 are implemented by the arithmetic processing device 20A and the storage device 20B.
The display control unit 202 performs display operation including first display operation of displaying question information that is a question for the subject on the display screen 101S, second display operation of displaying a guidance target object that guides a gaze point of the subject to a target position on the display screen, and third display operation of displaying a plurality of answer target objects that are answers for the question at positions that do not overlap with a guidance position on the display screen 101S after the second display operation. The question information includes characters, figures, and the like. The guidance target object includes an eye-catching video or the like that guides the gaze point to a desired position on the display screen 101S. The eye-catching video allows the subject to start viewing from a target position of an evaluation image. The target position may be set at a certain position that is desired to be gazed at by the subject in the evaluation image at the start of display of the evaluation image. The plurality of answer target objects include, for example, a specific target object that is a correct answer for the question and comparison target objects that are different from the specific target object. The question information, the guidance target object, and the answer target objects as described above are included in, for example, an evaluation video or an evaluation image that is to be viewed by the subject. The display control unit 202 displays the evaluation video or the evaluation image as described above on the display screen 101S.
The light source control unit 204 controls the light source driving unit 406, and controls operation states of the first light source 103A and the second light source 103B. The light source control unit 204 controls the first light source 103A and the second light source 103B such that the first light source 103A and the second light source 103B emit detection light at different timings.
The image data acquisition unit 206 acquires the image data of the eyeball 111 of the subject that is captured by the stereo camera device 102 including the first camera 102A and the second camera 102B, from the stereo camera device 102 via the input-output unit 302.
The input data acquisition unit 208 acquires the input data that is generated through operation of the input device 60, from the input device 60 via the input-output unit 302.
The position detection unit 210 detects positional data of a pupil center on the basis of the image data of the eyeball 111 acquired by the image data acquisition unit 206. Further, the position detection unit 210 detects positional data of a corneal reflection center on the basis of the image data of the eyeball 111 acquired by the image data acquisition unit 206. The pupil center is a center of the pupil 112. The corneal reflection center is a center of the corneal reflection image 113. The position detection unit 210 detects the positional data of the pupil center and the positional data of the corneal reflection center for each of the right and left eyeballs 111 of the subject.
The curvature center calculation unit 212 calculates positional data of a corneal curvature center of the eyeball 111 on the basis of the image data of the eyeball 111 acquired by the image data acquisition unit 206.
The gaze point detection unit 214 detects positional data of a gaze point of the subject on the basis of the image data of the eyeball 111 acquired by the image data acquisition unit 206. In the present embodiment, the positional data of the gaze point indicates positional data of an intersection point between a line-of-sight vector of the subject that is defined by the three-dimensional global coordinate system and the display screen 101S of the display device 101. The gaze point detection unit 214 detects a line-of-sight vector of each of the right and left eyeballs 111 of the subject on the basis of the positional data of the pupil center and the positional data of the corneal curvature center that are acquired from the image data of the eyeball 111. After detection of the line-of-sight vector, the gaze point detection unit 214 detects the positional data of the gaze point that indicates the intersection point between the line-of-sight vector and the display screen 101S.
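As a minimal sketch of this intersection calculation, assuming for illustration that the display screen 101S lies in the plane z = 0 of the global coordinate system (the actual screen position is established during calibration), the gaze point can be computed as follows; all identifiers are illustrative.

```python
# A sketch of intersecting the line-of-sight ray with the screen plane.
import numpy as np

def gaze_point_on_screen(eye_pos: np.ndarray,
                         gaze_vec: np.ndarray,
                         screen_z: float = 0.0):
    """Return the (X, Y) gaze point on the screen plane, or None if the ray
    is parallel to the plane or points away from it."""
    if abs(gaze_vec[2]) < 1e-9:
        return None                    # ray parallel to the screen plane
    t = (screen_z - eye_pos[2]) / gaze_vec[2]
    if t < 0:
        return None                    # screen lies behind the gaze direction
    p = eye_pos + t * gaze_vec         # point on the ray at parameter t
    return float(p[0]), float(p[1])
```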
The region setting unit 216 sets a specific region corresponding to the specific target object and comparison regions corresponding to the respective comparison target objects on the display screen 101S of the display device 101 during a display period in which the third display operation is performed.
The determination unit 218 determines, on the basis of the positional data of the gaze point, whether the gaze point is present in each of the specific region and the comparison regions during the display period in which the third display operation is performed, and outputs determination data. The determination unit 218 determines whether the gaze point is present in the specific region and the comparison regions at a constant time interval, for example. The constant time interval may be set to, for example, a cycle of the frame synchronous signal (for example, every 20 milliseconds (msec)) that is output from the first camera 102A and the second camera 102B.
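A minimal sketch of this per-sample determination, assuming rectangular regions stored as (left, top, right, bottom) tuples in screen coordinates (the rectangular region shape is described later in connection with the region setting unit 216), might look as follows; the coordinates shown are hypothetical.

```python
# A sketch of the region hit test performed at each sampling interval.
def region_of(gaze_xy, regions: dict):
    """Return the name of the region containing the gaze point, or None."""
    x, y = gaze_xy
    for name, (left, top, right, bottom) in regions.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

regions = {"A": (100, 100, 300, 200),  # hypothetical specific region
           "B": (400, 100, 600, 200),  # hypothetical comparison regions
           "C": (100, 300, 300, 400),
           "D": (400, 300, 600, 400)}
```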
The arithmetic unit 220 calculates movement course data (may be described as gaze point data) that indicates a course of movement of the gaze point during the display period, on the basis of the determination data of the determination unit 218. The movement course data includes arrival time data indicating a time period from a start time of the display period to an arrival time at which the gaze point first arrives at the specific region, movement frequency data indicating the number of times of movement of the position of the gaze point among the plurality of comparison regions before the gaze point first arrives at the specific region, presence time data indicating a presence time in which the gaze point is present in the specific region or the comparison regions during the display period, and final region data indicating a region in which the gaze point is finally located among the specific region and the comparison regions during the display period.
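For illustration, the four pieces of movement course data listed above could be collected in a container such as the following sketch; the field names are illustrative and not taken from the disclosure.

```python
# A possible container for the movement course (gaze point) data.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class MovementCourseData:
    arrival_time: Optional[float] = None  # seconds until first arrival at region A
    movement_count: int = 0               # moves among regions before first arrival
    presence_counts: Dict[str, int] = field(  # per-region sample counts (CNTA..CNTD)
        default_factory=lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    final_region: Optional[str] = None    # region gazed at when the display ends
```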
Meanwhile, the arithmetic unit 220 includes a management timer for managing a video replay time, and a detection timer T for detecting an elapsed time since start of display of the video on the display screen 101S. The arithmetic unit 220 includes a counter that counts the number of times the gaze point is determined as being present in the specific region.
The evaluation unit 224 obtains evaluation data of the subject on the basis of the movement course data. The evaluation data is data for evaluating whether the subject is able to gaze at the specific target object that is displayed on the display screen 101S in the display operation.
The storage unit 222 stores therein the determination data, the movement course data (the presence time data, the movement frequency data, the final region data, and the arrival time data), and the evaluation data as described above. Further, the storage unit 222 stores therein an evaluation program that causes a computer to execute a process of displaying an image on the display screen, a process of detecting the position of the gaze point of the subject who observes the display screen, a process of performing the display operation including the first display operation of displaying the question information that is a question for the subject on the display screen, the second display operation of displaying the guidance target object that guides the gaze point of the subject to the target position on the display screen, and the third display operation of displaying the plurality of answer target objects that are answers for the question at positions that do not overlap with the guidance position, a process of setting the specific region corresponding to the specific target object among the plurality of answer target objects and the comparison regions corresponding to the comparison target objects that are different from the specific target object, a process of determining, on the basis of the position of the gaze point, whether the gaze point is present in the specific region and the comparison regions during the display period in which the third display operation is performed, a process of calculating the gaze point data during the display period on the basis of a determination result, and a process of obtaining the evaluation data of the subject on the basis of the gaze point data.
The output control unit 226 outputs data to at least one of the display device 101 and the output device 50.
An overview of a process performed by the curvature center calculation unit 212 according to the present embodiment will be described below. The curvature center calculation unit 212 calculates the positional data of the corneal curvature center of the eyeball 111 on the basis of the image data of the eyeball 111.
First, an example in which a single light source is used will be described.
Next, an example in which two light sources are used will be described.
As described above, even when two light sources are provided, the corneal curvature center 110 is calculated by the same method as the method that is adopted when a single light source is provided.
The corneal curvature radius 109 is a distance between the corneal surface and the corneal curvature center 110. Therefore, the corneal curvature radius 109 is calculated by calculating positional data of the corneal surface and the positional data of the corneal curvature center 110.
Next, an example of the line-of-sight detection method according to the present embodiment will be described.
A gaze point detection process will be described below. The gaze point detection process is performed after the calibration process. The gaze point detection unit 214 calculates a line-of-sight vector of the subject and the positional data of the gaze point on the basis of the image data of the eyeball 111.
Evaluation Method
The evaluation method according to the present embodiment will be described below. In the evaluation method according to the present embodiment, cognitive impairment and brain impairment of the subject are evaluated by using the line-of-sight detection apparatus 100 as described above.
After displaying the question information I1 on the display screen 101S, the display control unit 202 displays, as the second display operation, a guidance target object E1 on the display screen 101S.
The display control unit 202 arranges the plurality of answer target objects M1 to M4 at positions that do not overlap with one another. Further, the display control unit 202 arranges the plurality of answer target objects M1 to M4 at positions that do not overlap with the guidance position. For example, the display control unit 202 arranges the plurality of answer target objects M1 to M4 around the guidance position. In the present embodiment, the guidance position is the target position P1 to which the gaze point of the subject is guided by the guidance target object E1. The display control unit 202 may arrange the plurality of answer target objects M1 to M4 at positions at equal distances from the target position P1 that is the guidance position.
The region setting unit 216 sets the specific region A in a rectangular range including the specific target object M1 that is a correct answer for the question information I1. Similarly, the region setting unit 216 sets the comparison regions B to D in respective rectangular ranges including the comparison target objects M2 to M4 that are incorrect answers for the question information I1. Meanwhile, the specific region A and the comparison regions B to D need not always have rectangular shapes, but may have different shapes, such as circles, ellipses, or polygons.
It is known that symptoms of cognitive impairment and brain impairment affect the cognitive ability and the memory ability of the subject. If the subject does not have cognitive impairment and brain impairment, the subject is able to view, one by one, the comparison target objects M2 to M4 that are displayed on the display screen 101S in the third display operation, determine that it is difficult to assemble a square from them, and finally detect and gaze at the specific target object M1. In contrast, if the subject has cognitive impairment and brain impairment, the subject may have difficulty in performing the assembly as described above and gazing at the specific target object M1. Meanwhile, in a method of simply displaying the answer target objects M1 to M4 on the display screen 101S, the gaze point of the subject may, in some cases, accidentally be located at the specific target object M1 that is the correct answer at the start of the third display operation. In this case, the answer may be determined to be correct regardless of whether the subject has cognitive impairment or brain impairment, so that it becomes difficult to evaluate the subject with high accuracy.
To cope with this, it is possible to evaluate the subject with high accuracy through the procedure as described below, for example. First, as the first display operation, the question information I1 is displayed on the display screen 101S so as to be checked by the subject. Further, as the second display operation, the guidance target object is displayed on the display screen 101S and the gaze point of the subject is guided to the target position P1. Thereafter, as the third display operation, the plurality of answer target objects M1 to M4 are displayed around the guidance position (the target position P1) on the display screen 101S.
Through the procedure as described above, it is possible to prevent the gaze point of the subject from moving to or being fixed on any of the answer target objects M1 to M4 at the start of the third display operation. Accordingly, it is possible to prevent a situation in which the subject unintentionally gazes at an answer target object at the start time. Consequently, it is possible to evaluate the subject with high accuracy from the standpoints of, for example, whether the subject gazes at the plurality of comparison target objects M2 to M4 one by one, whether the subject is able to finally reach the specific target object M1 that is the correct answer, how long it takes the subject to reach the specific target object M1, and whether the subject is able to gaze at the specific target object M1.
In the third display operation, if the positional data of the gaze point P of the subject is detected, the determination unit 218 determines whether the gaze point of the subject is present in the specific region A and the plurality of comparison regions B to D, and outputs determination data.
The arithmetic unit 220 calculates movement course data that indicates a course of movement of the gaze point P during the display period, on the basis of the determination data. The arithmetic unit 220 calculates, as the movement course data, the presence time data, the movement frequency data, the final region data, and the arrival time data.
The presence time data indicates a presence time in which the gaze point P is present in the specific region A or the comparison regions B to D. In the present embodiment, it is possible to estimate that the presence time in which the gaze point P is present in the specific region A or the comparison regions B to D increases with an increase in the number of times the gaze point is determined as being present in the specific region A or the comparison regions B to D by the determination unit 218. Therefore, it is possible to adopt, as the presence time data, the number of times the gaze point is determined as being present in the specific region A or the comparison regions B to D by the determination unit 218. In other words, the arithmetic unit 220 is able to adopt count values CNTA, CNTB, CNTC, and CNTD of the counter as the presence time data.
Further, the movement frequency data indicates the number of times of movement of the gaze point P among the plurality of comparison regions B to D before the gaze point P first arrives at the specific region A. Therefore, the arithmetic unit 220 is able to count the number of times of movement of the gaze point P among the specific region A and the comparison regions B to D, and adopt, as the movement frequency data, a result of counting that is performed before the gaze point P arrives at the specific region A.
Furthermore, the final region data indicates a region in which the gaze point P is finally located among the specific region A and the comparison regions B to D, that is, a region that is finally gazed at, as the answer, by the subject. The arithmetic unit 220 updates a region in which the gaze point P is present every time the gaze point P is detected, and is accordingly able to adopt a detection result at the end of the display period as the final region data.
Moreover, the arrival time data indicates a time period from the start time of the display period to the arrival time at which the gaze point first arrives at the specific region A. The arithmetic unit 220 measures the elapsed time from the start of the display period with the timer T, sets a flag value to 1 when the gaze point first arrives at the specific region A, and captures the measurement value of the timer T at that moment; the arithmetic unit 220 is thus able to adopt the captured measurement value of the timer T as the arrival time data.
In the present embodiment, the evaluation unit 224 obtains evaluation data on the basis of the presence time data, the movement frequency data, the final region data, and the arrival time data.
Here, a data value of the final region data is denoted by D1, a data value of the presence time data of the specific region A is denoted by D2, a data value of the arrival time data is denoted by D3, and a data value of the movement frequency data is denoted by D4. The data value D1 of the final region data is set to 1 if the final gaze point P of the subject is present in the specific region A (that is, if the answer is correct), and set to 0 if the final gaze point P of the subject is not present in the specific region A (that is, if the answer is incorrect). Further, the data value D2 of the presence time data is the number of seconds in which the gaze point P is present in the specific region A. Meanwhile, it may be possible to set, for the data value D2, an upper limit value that is a smaller number of seconds than the display period. Furthermore, the data value D3 of the arrival time data is set to a reciprocal of the arrival time (for example, 1/(arrival time)/10, where 10 is a coefficient used to set the arrival time evaluation value to 1 or smaller based on the assumption that the minimum value of the arrival time is 0.1 second). Moreover, the count value is used as it is as the data value D4 of the movement frequency data. Meanwhile, it may be possible to appropriately set an upper limit value of the data value D4.
In this case, an evaluation value ANS may be represented as follows, for example.
ANS=D1×K1+D2×K2+D3×K3+D4×K4
Meanwhile, K1 to K4 are constants for weighting. The constants K1 to K4 may be set appropriately.
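A worked sketch of this evaluation value, assuming equal weights K1 = K2 = K3 = K4 = 1 and converting the count value CNTA to seconds at the 20 msec sampling period described above, might look as follows; the sample inputs are hypothetical.

```python
# A sketch of ANS = D1*K1 + D2*K2 + D3*K3 + D4*K4 with illustrative weights.
def evaluation_value(d1, cnt_a, arrival_time, movement_count,
                     k1=1.0, k2=1.0, k3=1.0, k4=1.0):
    d2 = cnt_a * 0.02               # presence time [s] from the sample count
    d3 = 1.0 / arrival_time / 10.0  # arrival-time term (<= 1 for times >= 0.1 s)
    d4 = movement_count             # movement frequency count, used as-is
    return d1 * k1 + d2 * k2 + d3 * k3 + d4 * k4

# Example: correct final answer, 75 samples (1.5 s) in region A,
# arrival after 2 s, 5 moves among the regions -> ANS = 7.55.
ans = evaluation_value(d1=1, cnt_a=75, arrival_time=2.0, movement_count=5)
```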
The evaluation value ANS represented by the expression above increases when the data value D1 of the final region data is 1, when the data value D2 of the presence time data increases, when the data value D3 of the arrival time data increases (that is, when the arrival time decreases), and when the data value D4 of the movement frequency data increases. In other words, the evaluation value ANS increases when the final gaze point P is present in the specific region A, when the presence time of the gaze point P in the specific region A increases, when the arrival time at which the gaze point P arrives at the specific region A from the start time of the display period decreases, and when the number of times of movement of the gaze point P among the regions increases.
In contrast, the evaluation value ANS decreases when the data value D1 of the final region data is 0, when the data value D2 of the presence time data decreases, when the data value D3 of the arrival time data decreases (that is, when the arrival time increases), and when the data value D4 of the movement frequency data decreases. In other words, the evaluation value ANS decreases when the final gaze point P is not present in the specific region A, when the presence time of the gaze point P in the specific region A decreases, when the arrival time at which the gaze point P arrives at the specific region A from the start time of the display period increases, and when the number of times of movement of the gaze point P among the regions decreases.
Therefore, the evaluation unit 224 is able to obtain the evaluation data by determining whether the evaluation value ANS is equal to or larger than a predetermined value. For example, if the evaluation value ANS is equal to or larger than the predetermined value, it is possible to evaluate that the subject is less likely to have cognitive impairment and brain impairment. Further, if the evaluation value ANS is smaller than the predetermined value, it is possible to evaluate that the subject is highly likely to have cognitive impairment and brain impairment.
Furthermore, the evaluation unit 224 may obtain the evaluation value of the subject on the basis of at least one of the pieces of gaze point data described above. For example, the evaluation unit 224 is able to evaluate that the subject is less likely to have cognitive impairment and brain impairment if the presence time data CNTA of the specific region A is equal to or larger than a predetermined value. Moreover, the evaluation unit 224 may perform evaluation by using the pieces of presence time data CNTB, CNTC, and CNTD of the comparison regions B to D. In this case, the evaluation unit 224 is able to evaluate that the subject is less likely to have cognitive impairment and brain impairment if the ratio of the presence time data CNTA of the specific region A to the sum of the pieces of presence time data CNTB, CNTC, and CNTD of the comparison regions B to D (the ratio of the gazing rate of the specific region A to the gazing rates of the comparison regions B to D) is equal to or larger than a predetermined value. Furthermore, the evaluation unit 224 is able to evaluate that the subject is less likely to have cognitive impairment and brain impairment if the ratio of the presence time data CNTA of the specific region A to the total gazing time (the ratio of the gazing time of the specific region A to the total gazing time) is equal to or larger than a predetermined value. Moreover, the evaluation unit 224 is able to evaluate that the subject is less likely to have cognitive impairment and brain impairment if the final region is the specific region A, and that the subject is highly likely to have cognitive impairment and brain impairment if the final region is one of the comparison regions B to D.
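As a sketch of the ratio-based variants described above, with a hypothetical threshold value (the disclosure leaves the predetermined value unspecified):

```python
# A sketch of the gazing-rate ratio evaluation using the presence counts.
def gaze_rate_ratio(cnt_a, cnt_b, cnt_c, cnt_d):
    comparison_total = cnt_b + cnt_c + cnt_d
    if comparison_total == 0:
        return float("inf")   # gaze never entered a comparison region
    return cnt_a / comparison_total

def likely_unimpaired(cnt_a, cnt_b, cnt_c, cnt_d, threshold=1.0):
    """Threshold is an assumption; a higher ratio suggests the specific
    region A was gazed at more than the comparison regions combined."""
    return gaze_rate_ratio(cnt_a, cnt_b, cnt_c, cnt_d) >= threshold
```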
Furthermore, the evaluation unit 224 is able to store the value of the evaluation value ANS in the storage unit 222. For example, it may be possible to cumulatively store the evaluation value ANS for the same subject, and perform evaluation by comparison with past evaluation values. For example, if the evaluation value ANS has a higher value than a past evaluation value, it is possible to evaluate that a cognitive function and a brain function have improved relative to those at the previous evaluation. Moreover, if a cumulative value of the evaluation value ANS is gradually increased for example, it is possible to evaluate that the cognitive function and the brain function have gradually improved.
Furthermore, the evaluation unit 224 may perform evaluation by using the presence time data, the movement frequency data, the final region data, and the arrival time data independently or in combination. For example, if the gaze point P accidentally arrives at the specific region A while a number of target objects are viewed, the data value D4 of the movement frequency data decreases. In this case, it is possible to perform evaluation by additionally using the data value D2 of the presence time data as described above. For example, even when the number of times of movement is small, if the presence time is long, it is possible to evaluate that the specific region A as the correct answer is gazed at. Moreover, if the number of times of movement is small and the presence time is short, it is possible to evaluate that the gaze point P has accidentally passed through the specific region A.
Furthermore, when the number of times of movement is small, if the final region is the specific region A, it is possible to evaluate that the gaze point arrives at the specific region A that is the correct answer through a small number of times of movement, for example. In contrast, when the number of times of movement as described above is small, if the final region is not the specific region A, it is possible to evaluate that the gaze point P has accidentally passed through the specific region A, for example.
In the present embodiment, when the evaluation unit 224 outputs the evaluation data, the output control unit 226 is able to cause the output device 50 to output character data indicating that “it seems that the subject is less likely to have cognitive impairment and brain impairment” or character data indicating that “it seems that the subject is highly likely to have cognitive impairment and brain impairment” in accordance with the evaluation data, for example. Further, if the evaluation value ANS has increased relative to a past evaluation value ANS of the same subject, the output control unit 226 is able to cause the output device 50 to output character data indicating that “a cognitive function and a brain function have improved” or the like.
Even in this case, the display control unit 202 arranges the plurality of answer target objects M5 to M8 at positions that do not overlap with one another. Further, the display control unit 202 arranges the plurality of answer target objects M5 to M8 at positions that do not overlap with the guidance position. For example, the display control unit 202 arranges the plurality of answer target objects M5 to M8 around the target position P1 that is the guidance position. The display control unit 202 may arrange the plurality of answer target objects M5 to M8 at radial positions at equal distances from the target position P1 that is the guidance position. For example, the display control unit 202 may arrange the plurality of answer target objects M5 to M8 at regular pitches on the same circumference of a circle centered at the target position P1.
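A minimal sketch of this placement, computing n positions at regular angular pitches on a circle of radius r centered at the target position P1, might look as follows; all coordinates are hypothetical.

```python
# A sketch of arranging answer target objects at regular pitches on the
# circumference of a circle centered at the guidance position, so that no
# object overlaps the target position itself.
import math

def place_on_circle(center, radius, n, phase=0.0):
    """Return n (x, y) positions at regular angular pitches around center."""
    cx, cy = center
    return [(cx + radius * math.cos(phase + 2 * math.pi * k / n),
             cy + radius * math.sin(phase + 2 * math.pi * k / n))
            for k in range(n)]

# Hypothetical screen coordinates for the target position P1 and radius.
positions = place_on_circle(center=(480, 270), radius=200, n=4)
```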
The region setting unit 216 sets the comparison region B for a comparison target object M6a indicating a number “14” and a comparison target object M6b indicating a number “12”, for each of which the difference from the reference number “13” indicated by the specific target object M5 that is the correct answer is 1, for example. Further, the comparison region C is set for a comparison target object M7a indicating a number “15” and a comparison target object M7b indicating a number “11”, for each of which the difference is 2. Furthermore, the comparison region D is set for a comparison target object M8a indicating a number “16”, a comparison target object M8b indicating a number “10”, and a comparison target object M8c indicating a number “9”, for each of which the difference is 3 or more.
In this setting, when the data value D1 is obtained in the evaluation, if the final gaze point P of the subject is not present in the specific region A, it is possible to assign a graduated data value according to how close the number gazed at as the answer is to the correct answer, instead of simply setting the data value D1 to 0. For example, it may be possible to set the data value D1 to 0.6 if the final gaze point P of the subject is present in the comparison region B, 0.2 if the final gaze point P of the subject is present in the comparison region C, and 0 if the final gaze point P of the subject is present in the comparison region D.
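For illustration, this graduated assignment can be expressed as a simple lookup; the mapping follows the example values given above.

```python
# A sketch of the graduated data value D1 by final region.
D1_BY_FINAL_REGION = {"A": 1.0, "B": 0.6, "C": 0.2, "D": 0.0}

def graded_d1(final_region: str) -> float:
    """Return D1 for the region finally gazed at; 0.0 for unknown regions."""
    return D1_BY_FINAL_REGION.get(final_region, 0.0)
```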
Similarly to the above, the display control unit 202 arranges the plurality of answer target objects M9 to M12 at positions that do not overlap with one another. Further, the display control unit 202 arranges the plurality of answer target objects M9 to M12 at positions that do not overlap with the guidance position. For example, the display control unit 202 arranges the plurality of answer target objects M9 to M12 around the target position P1 that is the guidance position. The display control unit 202 may arrange the plurality of answer target objects M9 to M12 at positions at equal distances from the target position P1 that is the guidance position. For example, the display control unit 202 may arrange the plurality of answer target objects M9 to M12 at regular pitches on a circular arc R centered at the target position P1.
It is known that symptoms of cognitive impairment and brain impairment affect cognitive ability. If the subject does not have cognitive impairment and brain impairment, the subject is able to view, one by one, the comparison target objects M14 to M16 that are displayed on the display screen 101S in the third display operation, determine, by comparison, that the comparison target objects M14 to M16 are not the same as the person indicated by the image information I4b that is memorized in the first display operation, and finally detect and gaze at the specific target object M13. In contrast, if the subject has cognitive impairment and brain impairment, in some cases, it may be difficult to memorize the specific target object M13 or immediately forget the specific target object M13 after memorizing it. Therefore, in some cases, it may be difficult to perform comparison as described above, and it may be difficult to gaze at the specific target object M13. In the present embodiment, it is possible to prevent the gaze point of the subject from moving to or being fixed to any of the answer target objects M13 to M16 at the start of the third display operation, so that it is possible to evaluate memory ability of the subject with high accuracy.
Next, an example of the evaluation method according to the present embodiment will be described with reference to a flowchart.
The gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 at a defined sampling period (for example, every 20 msec) while the video displayed on the display device 101 is shown to the subject (Step S106). If the positional data is detected, that is, if detection has not failed (No at Step S107), the determination unit 218 determines the region in which the gaze point P is present on the basis of the positional data (Step S108). Further, if the positional data is not detected (Yes at Step S107), the processes from Step S130 to be described later are performed.
If it is determined that the gaze point P is present in the specific region A (Yes at Step S109), the arithmetic unit 220 determines whether the flag value is set to 1, that is, whether the gaze point P has already arrived at the specific region A (1: has already arrived, 0: has not arrived yet) (Step S110). If the flag value is set to 1 (Yes at Step S110), the arithmetic unit 220 skips Step S111 to Step S113 to be described below, and performs the process at Step S114 to be described later.
Further, if the flag value is not set to 1, that is, if the gaze point P arrived at the specific region A for the first time (No at Step S110), the arithmetic unit 220 extracts a measurement result of the timer T as the arrival time data (Step S111). Furthermore, the arithmetic unit 220 stores movement frequency data, which indicates the number of times of movement of the gaze point P among the regions before the gaze point P arrives at the specific region A, in the storage unit 222 (Step S112). Thereafter, the arithmetic unit 220 changes the flag value to 1 (Step S113).
Subsequently, the arithmetic unit 220 determines whether the region in which the gaze point P is present at the last detection, that is, the final region, is the specific region A (Step S114). If it is determined that the final region is the specific region A (Yes at Step S114), the arithmetic unit 220 skips Step S115 and Step S116 to be described below, and performs the process at Step S117 to be described later. Furthermore, if it is determined that the final region is not the specific region A (No at Step S114), the arithmetic unit 220 increments the cumulative number, which indicates the number of times of movement of the gaze point P among the regions, by 1 (Step S115), and changes the final region to the specific region A (Step S116). Moreover, the arithmetic unit 220 increments the count value CNTA, which indicates the presence time data of the specific region A, by 1 (Step S117). Thereafter, the arithmetic unit 220 performs the processes from Step S130 to be described later.
Furthermore, if it is determined that the gaze point P is not present in the specific region A (No at Step S109), the arithmetic unit 220 determines whether the gaze point P is present in the comparison region B (Step S118). If it is determined that the gaze point P is present in the comparison region B (Yes at Step S118), the arithmetic unit 220 determines whether the region in which the gaze point P is present at the last detection, that is, the final region, is the comparison region B (Step S119). If it is determined that the final region is the comparison region B (Yes at Step S119), the arithmetic unit 220 skips Step S120 and Step S121 to be described below, and performs the process at Step S130 to be described later. Moreover, if it is determined that the final region is not the comparison region B (No at Step S119), the arithmetic unit 220 increments the cumulative number, which indicates the number of times of movement of the gaze point P among the regions, by 1 (Step S120), and changes the final region to the comparison region B (Step S121). Thereafter, the arithmetic unit 220 performs the processes from Step S130 to be described later.
Furthermore, if it is determined that the gaze point P is not present in the comparison region B (No at Step S118), the arithmetic unit 220 determines whether the gaze point P is present in the comparison region C (Step S122). If it is determined that the gaze point P is present in the comparison region C (Yes at Step S122), the arithmetic unit 220 determines whether the region in which the gaze point P is present at the last detection, that is, the final region, is the comparison region C (Step S123). If it is determined that the final region is the comparison region C (Yes at Step S123), the arithmetic unit 220 skips Step S124 and Step S125 to be described below, and performs the process at Step S130 to be described later. Moreover, if it is determined that the final region is not the comparison region C (No at Step S123), the arithmetic unit 220 increments the cumulative number, which indicates the number of times of movement of the gaze point P among the regions, by 1 (Step S124), and changes the final region to the comparison region C (Step S125). Thereafter, the arithmetic unit 220 performs the processes from Step S130 to be described later.
Furthermore, if it is determined that the gaze point P is not present in the comparison region C (No at Step S122), the arithmetic unit 220 determines whether the gaze point P is present in the comparison region D (Step S126). If it is determined that the gaze point P is present in the comparison region D (Yes at Step S126), the arithmetic unit 220 determines whether the region in which the gaze point P is present at the last detection, that is, the final region, is the comparison region D (Step S127). Moreover, if it is determined that the gaze point P is not present in the comparison region D (No at Step S126), the process at Step S130 to be described later is performed. Furthermore, if it is determined that the final region is the comparison region D (Yes at Step S127), the arithmetic unit 220 skips Step S128 and Step S129 to be described below, and performs the process at Step S130 to be described later. Moreover, if it is determined that the final region is not the comparison region D (No at Step S127), the arithmetic unit 220 increments the cumulative number, which indicates the number of times of movement of the gaze point P among the regions, by 1 (Step S128), and changes the final region to the comparison region D (Step S129). Thereafter, the arithmetic unit 220 performs the processes from Step S130 to be described later.
Thereafter, the arithmetic unit 220 determines whether a time at which replay of the video is completed has come, on the basis of a detection result of the detection timer T (Step S130). If the arithmetic unit 220 determines that the time at which replay of the video is completed has not yet come (No at Step S130), the arithmetic unit 220 repeats the processes from Step S106 as described above.
If the arithmetic unit 220 determines that the time at which replay of the video is completed has come (Yes at Step S130), the display control unit 202 stops replay of the video (Step S131). After replay of the video is stopped, the evaluation unit 224 calculates the evaluation value ANS on the basis of the presence time data, the movement frequency data, the final region data, and the arrival time data that are obtained from processing results as described above (Step S132), and obtains evaluation data on the basis of the evaluation value ANS. Thereafter, the output control unit 226 outputs the evaluation data obtained by the evaluation unit 224 (Step S133).
As described above, the evaluation apparatus according to the present embodiment includes the display screen 101S, the gaze point detection unit that detects a position of a gaze point of a subject who observes an image displayed on the display screen 101S, the display control unit 202 that performs display operation including the first display operation of displaying question information that is a question for the subject on the display screen 101S, the second display operation of displaying a guidance target object that guides the gaze point P of the subject to the predetermined target position P1 on the display screen 101S, and the third display operation of displaying a plurality of answer target objects, which are answers for the question, at positions that do not overlap with the guidance position on the display screen 101S after the second display operation, the region setting unit 216 that sets the specific region A corresponding to the specific target object among the plurality of answer target objects and the comparison regions B to D corresponding to the comparison target objects that are different from the specific target object, the determination unit 218 that determines whether the gaze point P is present in the specific region A and the comparison regions B to D during the display period in which the third display operation is performed, on the basis of the position of the gaze point P, the arithmetic unit 220 that calculates gaze point data during the display period on the basis of a determination result, and the evaluation unit 224 that obtains evaluation data of the subject on the basis of the gaze point data.
Furthermore, the evaluation method according to the present embodiment includes detecting a position of a gaze point of a subject who observes an image displayed on the display screen 101S, performing display operation including the first display operation of displaying question information that is a question for the subject on the display screen 101S, the second display operation of displaying a guidance target object that guides the gaze point P of the subject to the predetermined target position P1 on the display screen 101S, and the third display operation of displaying a plurality of answer target objects, which are answers for the question, at positions that do not overlap with the guidance position on the display screen 101S after the second display operation, setting the specific region A corresponding to the specific target object among the plurality of answer target objects and the comparison regions B to D corresponding to the comparison target objects that are different from the specific target object, determining whether the gaze point P is present in the specific region A and the comparison regions B to D during the display period in which the third display operation is performed, on the basis of the position of the gaze point P, calculating gaze point data during the display period on the basis of a determination result, and obtaining evaluation data of the subject on the basis of the gaze point data.
Moreover, the evaluation program according to the present embodiment causes a computer to execute a process of detecting a position of a gaze point of a subject who observes an image displayed on the display screen 101S, a process of performing display operation including the first display operation of displaying question information that is a question for the subject on the display screen 101S, the second display operation of displaying a guidance target object that guides the gaze point P of the subject to the predetermined target position P1 on the display screen 101S, and the third display operation of displaying a plurality of answer target objects, which are answers for the question, at positions that do not overlap with the guidance position on the display screen 101S after the second display operation, a process of setting the specific region A corresponding to the specific target object among the plurality of answer target objects and the comparison regions B to D corresponding to the comparison target objects that are different from the specific target object, a process of determining whether the gaze point P is present in the specific region A and the comparison regions B to D during the display period in which the third display operation is performed, on the basis of the position of the gaze point P, a process of calculating gaze point data during the display period on the basis of a determination result, and a process of obtaining evaluation data of the subject on the basis of the gaze point data.
According to the present embodiment, it is possible to prevent the gaze point of the subject from already resting on, or being drawn to, any of the answer target objects at the start of the third display operation, so that it is possible to reduce contingency and evaluate the subject with high accuracy. Furthermore, because the evaluation data of the subject is obtained on the basis of the course of movement of the gaze point during the display period, it is possible to evaluate the subject with higher accuracy. Therefore, the evaluation apparatus 100 is able to evaluate the subject with high accuracy.
Furthermore, in the evaluation apparatus 100 according to the present embodiment, the region setting unit 216 sets the specific region A and the comparison regions B to D at positions that do not overlap with one another. Therefore, it is possible to distinguish an answer of the subject with high accuracy, so that it is possible to evaluate the subject with high accuracy.
Moreover, in the evaluation apparatus 100 according to the present embodiment, the display control unit 202 arranges the plurality of answer target objects at positions at equal distances from the guidance position. Therefore, it is possible to further reduce contingency and distinguish an answer of the subject with high accuracy.
Furthermore, in the evaluation apparatus 100 according to the present embodiment, the movement course data includes at least one of the arrival time data that indicates a time period from the start time of the display period to the arrival time at which the gaze point P first arrives at the specific region A, the movement frequency data that indicates the number of times the position of the gaze point P moves among the plurality of comparison regions B to D before the gaze point P first arrives at the specific region A, and the presence time data that indicates the presence time in which the gaze point P is present in the specific region A during the display period, and also includes the final region data that indicates the region, among the specific region A and the comparison regions B to D, in which the gaze point P is finally present during the display period. Therefore, it is possible to efficiently obtain highly accurate evaluation data.
Moreover, in the evaluation apparatus 100 according to the present embodiment, the evaluation unit 224 weights at least one piece of data included in the movement course data to obtain the evaluation data. Therefore, by prioritizing individual pieces of data, it is possible to obtain the evaluation data with high accuracy.
The technical scope of the present disclosure is not limited to the embodiment as described above, and it is possible to apply modifications appropriately within a scope not departing from the gist of the present disclosure.
If it is determined that the gaze point P is present in the predetermined region Q (Yes at Step S143), the timer T is reset (Step S103), the count value CNTA of the counter is reset (Step S104), and the flag value is set to 0 (Step S105). Then, the processes from Step S106 are performed. Furthermore, if the positional data is not detected (Yes at Step S141), or if it is determined that the gaze point P is not present in the predetermined region Q (No at Step S143), replay of the video is stopped (Step S144), and the processes from Step S101 are repeated. Therefore, it is possible to more reliably locate the gaze point P of the subject at the target position P1 or in the predetermined region Q around the target position P1.
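As a rough illustration of this modification, the following sketch shows the guidance re-check before measurement proceeds; the helper names (confirm_guidance, region_q.contains) are assumptions for illustration only.

```python
# A minimal sketch, assuming hypothetical helper names, of the
# modification above: before the answer display starts, the apparatus
# re-checks that the gaze point actually sits in the region Q around
# the target position, and restarts the sequence otherwise.

def confirm_guidance(gaze_sample, region_q, state):
    """gaze_sample: (x, y) or None when detection failed.
    region_q: object with a contains((x, y)) predicate.
    Returns True when the main measurement loop may proceed."""
    if gaze_sample is not None and region_q.contains(gaze_sample):
        # Steps S103-S105: reset the timer, the presence counter, and
        # the flag, then continue with the measurement from Step S106.
        state["timer"] = 0
        state["cnt_a"] = 0
        state["flag"] = 0
        return True
    # Detection failed or gaze outside Q: stop replay (Step S144)
    # and repeat from Step S101.
    return False
```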
Moreover, in the embodiments as described above, a case has been described as one example in which the evaluation apparatus 100 is used as an evaluation apparatus that evaluates the possibility of cognitive impairment and brain impairment, but embodiments are not limited to this example. For example, the evaluation apparatus 100 may be used as an evaluation apparatus that evaluates a subject who has a developmental disability, rather than cognitive impairment and brain impairment.
Furthermore, the question information that is displayed on the display screen 101S in the first display operation is not limited to the question information of the embodiments as described above, which instructs the subject to gaze at a correct figure or a correct number for the question. The question information may be, for example, a question that instructs the subject to memorize the number of figures that match a predetermined condition among a plurality of figures, and then instructs the subject to perform a calculation using the memorized number.
The display control unit 202 displays the question information I5, the plurality of apple graphic images FA1, and the plurality of lemon graphic images FB1 as described above on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A1 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates first gaze time data indicating a gaze time for the apple graphic images FA1 indicated by the question information I5, on the basis of the determination data.
After displaying the question information I5, the plurality of apple graphic images FA1, and the plurality of lemon graphic images FB1 on the display screen 101S for the predetermined period, the display control unit 202 displays question information I6, a plurality of banana graphic images FA2, and a plurality of strawberry graphic images FB2 on the display screen 101S.
The display control unit 202 displays the question information I6, the plurality of banana graphic images FA2, and the plurality of strawberry graphic images FB2 as described above on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A2 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates second gaze time data indicating a gaze time for the banana graphic images FA2 indicated by the question information I6, on the basis of the determination data.
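The first and second gaze time data are accumulated in the same way: a counter is incremented at each sample in which the gaze point lies in the corresponding region, so the count multiplied by the sampling period approximates the gaze time. The following is a hedged sketch of that accumulation; the function name gaze_time_ms and the contains predicate are illustrative assumptions.

```python
# Hedged sketch of the gaze-time measurement described above: at each
# 20 msec sample, a counter is incremented while the gaze point lies in
# the corresponding region (A1 or A2), so the count times the sampling
# period approximates the gaze time.

SAMPLING_PERIOD_MS = 20  # example sampling period from the text

def gaze_time_ms(samples, region):
    """samples: iterable of (x, y) gaze positions or None (not detected).
    region: object with a contains((x, y)) predicate.
    Returns the approximate gaze time in milliseconds."""
    count = 0
    for s in samples:
        if s is not None and region.contains(s):
            count += 1  # CNTA1 / CNTA2 style counter increment
    return count * SAMPLING_PERIOD_MS
```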
After displaying the question information I6, the plurality of banana graphic images FA2, and the plurality of strawberry graphic images FB2 on the display screen 101S for the predetermined period, the display control unit 202 displays, as question information I7, a question that instructs the subject to calculate a difference between the number of apples and the number of bananas on the display screen 101S.
After displaying the question information I7 on the display screen 101S for a predetermined period, the display control unit 202 displays, as the second display operation, a guidance target object on the display screen 101S.
After performing the second display operation, the display control unit 202 performs the third display operation.
The display control unit 202 arranges the plurality of answer target objects M21 to M24 at positions that do not overlap with one another. Furthermore, the display control unit 202 arranges the plurality of answer target objects M21 to M24 at positions that do not overlap with the guidance position. For example, the display control unit 202 arranges the plurality of answer target objects M21 to M24 around the target position that is the guidance position. The display control unit 202 may arrange the plurality of answer target objects M21 to M24 at radial positions at equal distances from the target position that is the guidance position. For example, the display control unit 202 may arrange the plurality of answer target objects M21 to M24 at regular pitches on the same circumference of a circle centered at the target position.
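As one way to picture this arrangement, the following sketch computes n positions at a regular angular pitch on a circle around the target position; the radius and phase values are arbitrary examples, not values from the disclosure.

```python
# Illustrative geometry for the arrangement described above: answer
# target objects placed at regular pitches on the circumference of a
# circle centered at the target position, so each is at an equal
# distance from the guidance position.

import math

def answer_positions(center, radius=300.0, n=4, phase=0.0):
    """Return n (x, y) positions at equal angular pitch on a circle
    around `center` (the target position P1)."""
    cx, cy = center
    return [
        (cx + radius * math.cos(phase + 2 * math.pi * i / n),
         cy + radius * math.sin(phase + 2 * math.pi * i / n))
        for i in range(n)
    ]
```

For example, answer_positions((960, 540)) returns four points at 0, 90, 180, and 270 degrees around the screen center, each at the same distance from the guidance position, so no answer target object overlaps the target position itself.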
During the display period in which the third display operation is performed, the region setting unit 216 sets the specific region A corresponding to the specific target object M21. Further, the region setting unit 216 sets the comparison regions B to D corresponding to the respective comparison target objects M22 to M24. In this case, the region setting unit 216 sets the specific region A and the comparison regions B to D at positions that do not overlap with one another.
The region setting unit 216 adopts the comparison region B for a comparison target object M22a indicating a number "5" and a comparison target object M22b indicating a number "3", for each of which the difference from the number "4" indicated by the specific target object M21, that is, the correct answer, is 1. Further, the comparison region C is adopted for a comparison target object M23a indicating a number "6" and a comparison target object M23b indicating a number "2", for each of which the difference is 2. Furthermore, the comparison region D is adopted for a comparison target object M24a indicating a number "1" and a comparison target object M24b indicating a number "8", for each of which the difference is 3 or more.
In this setting, similarly to the embodiment as described above, when the data value D1 is obtained in the evaluation and the final gaze point P of the subject is not present in the specific region A, it is possible to assign graded data values in order of how close the gazed number is to the correct answer, instead of simply setting the data value D1 to 0. For example, the data value D1 may be set to 0.6 if the final gaze point P of the subject is present in the comparison region B, to 0.2 if it is present in the comparison region C, and to 0 if it is present in the comparison region D.
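The graded assignment described above can be summarized as a small lookup, sketched below with the example values from the text; using the region labels as dictionary keys is an illustrative assumption.

```python
# Sketch of the graded scoring described above: instead of a binary
# correct/incorrect value, the data value D1 falls off with the numeric
# distance between the finally gazed answer and the correct answer.
# The 0.6 / 0.2 / 0 values follow the example in the text.

GRADED_D1 = {"A": 1.0, "B": 0.6, "C": 0.2, "D": 0.0}

def data_value_d1(final_region):
    """final_region: 'A' (correct), or 'B'/'C'/'D' ordered by how close
    the gazed number is to the correct number."""
    return GRADED_D1.get(final_region, 0.0)
```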
In the third display operation, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the specific region A and the plurality of comparison regions B to D with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates the movement course data that indicates the course of movement of the gaze point P during the display period, on the basis of the determination data. The arithmetic unit 220 calculates, as the movement course data, the presence time data, the movement frequency data, the final region data, and the arrival time data.
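As a hedged sketch of how these four measures could be derived from the per-sample sequence of regions, consider the following; the exact definitions in the disclosure may differ in detail (for example, in which region transitions are counted).

```python
# Hedged sketch of deriving the four pieces of movement course data
# described above from a per-sample region sequence.

def movement_course_data(regions, sampling_period_ms=20):
    """regions: list of 'A', 'B', 'C', 'D', or None per sample, in order."""
    # Presence time: samples spent in the specific region A.
    presence_samples = sum(1 for r in regions if r == "A")
    # Arrival time: index of the first sample inside region A.
    arrival_index = next((i for i, r in enumerate(regions) if r == "A"), None)
    # Movement frequency: moves among comparison regions before first
    # arriving at A.
    before_a = regions[:arrival_index] if arrival_index is not None else regions
    comparison = [r for r in before_a if r in ("B", "C", "D")]
    moves = sum(1 for prev, cur in zip(comparison, comparison[1:]) if prev != cur)
    # Final region: last region in which the gaze point was detected.
    final_region = next((r for r in reversed(regions) if r is not None), None)
    return {
        "presence_time_ms": presence_samples * sampling_period_ms,
        "arrival_time_ms": (arrival_index * sampling_period_ms
                            if arrival_index is not None else None),
        "movement_frequency": moves,
        "final_region": final_region,
    }
```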
The evaluation unit 224 obtains the evaluation data by using the first gaze time data and the second gaze time data that are obtained in the first display operation, in addition to the presence time data, the movement frequency data, the final region data, and the arrival time data that are obtained in the third display operation. Alternatively, the evaluation data may be obtained based only on the presence time data, the movement frequency data, the final region data, and the arrival time data that are obtained in the third display operation, similarly to the embodiment as described above.
A subject who is highly likely to have cognitive impairment and brain impairment tends not to carefully view the figures indicated by the question information I5 and the question information I6. Further, a subject who is less likely to have cognitive impairment and brain impairment tends to carefully view the figures indicated by the question information I5 and the question information I6 in accordance with the question displayed on the display screen 101S. Accordingly, by referring to the first gaze time data and the second gaze time data that are obtained in the first display operation, it is possible to reflect the gaze time for the figures indicated by the question information in the evaluation.
Therefore, assuming that a sum of the first gaze time data and the second gaze time data is denoted by D5, the evaluation value ANS is represented as follows, for example.
ANS=D1×K1+D2×K2+D3×K3+D4×K4+D5×K5
Meanwhile, D1 to D4 and K1 to K4 are the same as those of the embodiment as described above. Further, K5 is a constant for weighting, and can be set appropriately similarly to K1 to K4. It may be possible to appropriately set an upper limit value for the data value D5. Further, the evaluation unit 224 may obtain the evaluation value of the subject on the basis of at least a single piece of data in the gaze point data, similarly to the embodiment as described above.
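Transcribed into code, the example formula reads as follows; the weights and the optional upper limit for D5 are placeholders to be set appropriately, as noted above, not values taken from the disclosure.

```python
# Direct transcription of the example formula above. D5 is the sum of
# the first and second gaze time data; K1 to K5 are weighting constants.

def evaluation_value_ans(d1, d2, d3, d4, d5,
                         k=(1.0, 1.0, 1.0, 1.0, 1.0),
                         d5_upper_limit=None):
    """ANS = D1*K1 + D2*K2 + D3*K3 + D4*K4 + D5*K5."""
    if d5_upper_limit is not None:
        d5 = min(d5, d5_upper_limit)  # optional cap on the gaze time term
    k1, k2, k3, k4, k5 = k
    return d1 * k1 + d2 * k2 + d3 * k3 + d4 * k4 + d5 * k5
```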
Next, an example of the evaluation method according to the present embodiment will be described.
The gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with the defined sampling period (for example, 20 msec) while the video displayed on the display device 101 is shown to the subject (Step S204). If the positional data is not detected (Yes at Step S205), processes from Step S209 to be described later are performed. If the positional data is detected (No at Step S205), the determination unit 218 determines a region in which the gaze point P is present on the basis of the positional data (Step S206).
If it is determined that the gaze point P is present in the corresponding region A1 (Yes at Step S207), the arithmetic unit 220 increments the count value CNTA1 indicating the first gaze time data in the corresponding region A1 by 1 (Step S208). Thereafter, the arithmetic unit 220 performs processes from Step S209 to be described later. If it is determined that the gaze point P is not present in the corresponding region A1 (No at Step S207), the processes from Step S209 are performed.
Thereafter, the arithmetic unit 220 determines whether a time at which replay of the video is completed has come, on the basis of a detection result of the timer T1 (Step S209). If the arithmetic unit 220 determines that the time at which replay of the video is completed has not yet come (No at Step S209), the arithmetic unit 220 repeats the processes from Step S204 as described above.
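The loop in Steps S204 to S209 can be sketched as follows, assuming hypothetical helpers detect_gaze, region_a1.contains, and replay_finished that stand in for the gaze point detection unit, the determination unit, and the timer T1.

```python
# Hedged sketch of the loop in Steps S204-S209: sample the gaze point
# at the defined period until video replay completes, incrementing the
# first-gaze-time counter whenever the gaze point lies in the
# corresponding region A1.

def run_first_question_loop(detect_gaze, region_a1, replay_finished):
    """detect_gaze(): returns (x, y) or None; replay_finished(): bool."""
    cnt_a1 = 0
    while not replay_finished():        # Step S209
        sample = detect_gaze()          # Step S204
        if sample is None:              # Yes at Step S205
            continue
        if region_a1.contains(sample):  # Steps S206-S207
            cnt_a1 += 1                 # Step S208
    return cnt_a1
```

The operation for the question information I6 (Steps S304 to S309) follows the same pattern with the corresponding region A2 and the counter CNTA2.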
If the arithmetic unit 220 determines that the time at which replay of the video is completed has come (Yes at Step S209), the display control unit 202 stops replay of the video (Step S210). After replay of the video is stopped, operation of displaying the question information I6 or the like is performed.
The gaze point detection unit 214 detects the positional data of the gaze point of the subject, similarly to operation of displaying the question information I5 or the like (Step S304), and if the positional data is not detected (Yes at Step S305), the processes from Step S309 to be described later are performed. If the positional data is detected (No at Step S305), the determination unit 218 determines a region in which the gaze point P is present, on the basis of the positional data (Step S306).
If it is determined that the gaze point P is present in the corresponding region A2 (Yes at Step S307), the arithmetic unit 220 increments the count value CNTA2, which indicates the second gaze time data in the corresponding region A2, by 1 (Step S308). Thereafter, the arithmetic unit 220 performs the processes from Step S309 to be described later. If it is determined that the gaze point P is not present in the corresponding region A2 (No at Step S307), the processes from Step S309 to be described later are performed.
Thereafter, the arithmetic unit 220 determines whether a time at which replay of the video is completed has come, on the basis of a detection result of the timer T2 (Step S309). If the arithmetic unit 220 determines that the time at which replay of the video is completed has not yet come (No at Step S309), the arithmetic unit 220 repeats the processes from Step S304 as described above.
If the arithmetic unit 220 determines that the time at which replay of the evaluation video including the question information I6 and the like is completed has come (Yes at Step S309), the display control unit 202 displays the part of the evaluation video that includes the question information I7 on the display screen 101S. After displaying the question information I7 for a predetermined time, the display control unit 202 performs the second display operation by displaying the video of the guidance target object E4 as an eye-catching video (Step S310). After displaying the video of the guidance target object E4, the display control unit 202 stops replay of the video (Step S311).
After Step S311, the display control unit 202 performs the third display operation by displaying an evaluation video including the plurality of answer target objects M21 to M24 on the display screen 101S. After performing the third display operation, the evaluation unit 224 obtains the evaluation data. Thereafter, the output control unit 226 outputs the evaluation data. The processes in the third display operation, the process of obtaining the evaluation data, and the process of outputting the evaluation data are the same as those at Step S101 to Step S133 described above.
As described above, the display control unit 202 is configured to, in the first display operation, display a plurality of graphic images, display the first question information for instructing the subject to memorize the number of graphic images that match a predetermined condition among the plurality of graphic images, and display the second question information for instructing the subject to perform a calculation using the number of graphic images memorized based on the first question information. With this configuration, it is possible to obtain a more objective and accurate evaluation in a short time, and to alleviate the influence of incidental mistakes made by a healthy subject.
The display control unit 202 displays the instruction information I8, the plurality of apple graphic images GA1, the bag graphic image GA2, and the plurality of orange graphic images GB1 as described above on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A1 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates the first gaze time data indicating a gaze time for the apple graphic image GA1 indicated by the instruction information I8 on the basis of the determination data.
After displaying the instruction information I8, the plurality of apple graphic images GA1, the bag graphic image GA2, and the plurality of orange graphic images GB1 on the display screen 101S for a predetermined period, the display control unit 202 displays question information I9, a plurality of bag graphic images GA2, and a plurality of orange graphic images GB2 on the display screen 101S.
The display control unit 202 displays the question information I9, the plurality of bag graphic images GA2, and the plurality of orange graphic images GB2 as described above on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A2 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates the second gaze time data indicating a gaze time for the bag graphic image GA2 indicated by the question information I9, on the basis of the determination data.
After displaying the question information I9, the plurality of bag graphic images GA2, and the plurality of orange graphic images GB2 on the display screen 101S, the display control unit 202 displays, as the second display operation, the guidance target object on the display screen 101S.
The display control unit 202 displays the question information I10, the plurality of kettle graphic images HA1, and the plurality of creature graphic images HB1 as described above on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A1 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates the first gaze time data indicating a gaze time for the kettle graphic images HA1 indicated by the question information I10, on the basis of the determination data.
After displaying the question information I10, the plurality of kettle graphic images HA1, and the plurality of creature graphic images HB1 on the display screen 101S for a predetermined period, the display control unit 202 displays the question information I11, a plurality of cup graphic images HA2, and a plurality of creature (fish and frog) graphic images HB2 on the display screen 101S.
The display control unit 202 displays the question information I11, the plurality of cup graphic images HA2, and the plurality of creature graphic images HB2 on the display screen 101S for a predetermined period. During this period, the gaze point detection unit 214 detects the positional data of the gaze point of the subject on the display screen 101S of the display device 101 with a defined sampling period (for example, 20 msec). If the positional data of the gaze point P is detected, the determination unit 218 determines whether the gaze point of the subject is present in the corresponding region A2 with the above-described sampling period, and outputs determination data. The arithmetic unit 220 calculates the second gaze time data indicating a gaze time for the cup graphic images HA2 indicated by the question information I11, on the basis of the determination data.
The question information I11, the plurality of cup graphic images HA2, and the plurality of creature graphic images HB2 are displayed on the display screen 101S for a predetermined period. Thereafter, the display control unit 202 displays, as question information I12, a question that instructs the subject to calculate a difference between the number of the cup graphic images HA2 and the number of the kettle graphic images HA1 on the display screen 101S.
After displaying the question information I12 on the display screen 101S for a predetermined time, the display control unit 202 displays, as the second display operation, the guidance target object on the display screen 101S.
According to the present disclosure, it is possible to provide an evaluation apparatus, an evaluation method, and an evaluation program capable of evaluating cognitive impairment and brain impairment with high accuracy.
This application is a Continuation of PCT International Application No. PCT/JP2019/021401, filed on May 29, 2019, which designates the United States and claims the benefit of priority from Japanese Patent Application No. 2018-149559, filed on Aug. 8, 2018, and Japanese Patent Application No. 2019-013002, filed on Jan. 29, 2019; the entire contents of these applications are incorporated herein by reference.