The present application claims priority to and incorporates by reference the entire contents of Japanese Patent Application No. 2014-156812 filed in Japan on Jul. 31, 2014, Japanese Patent Application No. 2014-156872 filed in Japan on Jul. 31, 2014, and Japanese Patent Application No. 2014-242033 filed in Japan on Nov. 28, 2014.
1. Field of the Invention
The present invention relates to a diagnosis supporting device, a diagnosis supporting method and a computer-readable recording medium.
2. Description of the Related Art
It is often the case that developmentally disabled people have characteristics such as deviated gazing points or difficulty in understanding causal relations. In providing care and education (training) to developmentally disabled people, it is important to understand what point they gaze at to obtain information for understanding a causal relation, whether they cannot understand the causal relation despite gazing at it, and the like. Conventional techniques are described in Japanese Patent Application Laid-open No. 2011-206542 and Pierce K et al., "Preference for Geometric Patterns Early in Life as a Risk Factor for Autism," Arch Gen Psychiatry. 2011 January; 68(1): 101-109, for example.
However, the conventional methods cannot reveal what point developmentally disabled people gaze at to obtain information for understanding a causal relation, whether they cannot understand the causal relation despite gazing at it, and the like. For this reason, the conventional methods may fail to appropriately support diagnosis, and a diagnosis supporting method with higher precision has been demanded.
It is an object of the present invention to at least partially solve the problems in the conventional technology.
There is provided a diagnosis supporting device that includes a display, an imaging unit that images a subject, a visual line detector that detects a visual line direction of the subject from an image imaged by the imaging unit, a visual point detector that detects a visual point of the subject in a display area of the display based on the visual line direction, an output controller that displays a diagnostic image representing a cause of a certain event and the event on the display, and an evaluator that calculates an evaluation value of the subject based on the visual point detected by the visual point detector when the diagnostic image is displayed.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
The following describes embodiments of a diagnosis supporting device, a diagnosis supporting method, and a computer-readable recording medium for supporting diagnosis according to the present invention in detail with reference to the drawings. The present invention is not limited by the embodiments. Although the following describes a case in which the device supports diagnosis of a developmental disorder or the like using a visual line detection result and can also be used for training, applicable devices are not limited to this example.
The diagnosis supporting device of the present embodiment displays images (video) indicating scenes before and after an event, measures the dwell time at the position of a gazing point, and performs evaluation computation. As an explanation image indicating the causal relation between a cause and an event, a continuous moving image containing a scene representing the cause and scenes before and after the event is displayed. With this configuration, efficient training support that is easily understood by trainees is achieved.
The diagnosis supporting device of the present embodiment detects a visual line using an illuminator placed at one position; the device is not limited to this arrangement. Using a result measured by causing a subject to gaze at one point before visual line detection, the diagnosis supporting device of the present embodiment calculates a corneal curvature center position with high accuracy.
The illuminator includes a light source and is a component that can apply light to an eyeball of the subject. The light source is, for example, an element that emits light such as a light emitting diode (LED). The light source may include one LED or a plurality of LEDs combined and arranged at one position. In the following, the term "light source" may be used to refer to the illuminator.
From the positions of the pupil 112 and the corneal reflex 113 obtained by the two cameras, three-dimensional world coordinate values of those positions are calculated. In the present embodiment, the three-dimensional world coordinates are defined with the central position of the screen of the display 101 as the point of origin: the up-and-down direction is the Y coordinate (up is +), the lateral direction is the X coordinate (the right side viewed from the front is +), and the depth direction is the Z coordinate (the near side is +).
The speaker 205 functions as a voice output unit that outputs a voice for attracting the subject's attention during calibration or the like.
The drive/IF 313 drives units included in the stereo camera 102. The drive/IF 313 serves as an interface between the units included in the stereo camera 102 and the controller 300.
The controller 300 is implemented by a computer or the like including, for example, a controller such as a central processing unit (CPU), storage devices such as a read only memory (ROM) and a random access memory (RAM), a communication I/F that connects to a network to perform communication, and a bus that connects the units to each other.
The storage 150 stores therein various types of information such as control programs, measurement results, and diagnosis support results. The storage 150, for example, stores therein images or the like to be displayed on the display 101. The display 101 displays various types of information such as images to be diagnosed.
The right camera 202 and the left camera 203 are connected to the drive/IF 313 via the camera IFs 314 and 315, respectively. The drive/IF 313 drives the cameras, thereby imaging the subject.
The speaker driver 322 drives the speaker 205. The diagnosis supporting device 100 may include an interface (a printer IF) for connecting to a printer as a printing unit. The printer may be incorporated into the diagnosis supporting device 100.
The controller 300 controls the entire diagnosis supporting device 100. The controller 300 includes a first calculator 351, a second calculator 352, a third calculator 353, a visual line detector 354, a visual point detector 355, an output controller 356, and an evaluator 357. A visual line detection supporting device that detects a visual line only needs to include at least the first calculator 351, the second calculator 352, the third calculator 353, and the visual line detector 354.
Each of the components (the first calculator 351, the second calculator 352, the third calculator 353, the visual line detector 354, the visual point detector 355, the output controller 356, and the evaluator 357) included in the controller 300 may be implemented by software (a computer program), may be implemented by a hardware circuit, or may be implemented by using both the software and the hardware circuit.
When each of the components is implemented by the program, the program is recorded in a computer-readable recording medium such as a compact disc read only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), and a digital versatile disc (DVD) as an installable or executable file and is provided as a computer program product. The program may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network. The program may be provided or distributed via a network such as the Internet. The program may be embedded and provided in a ROM, for example.
The first calculator 351 calculates a position (a first position) of a pupil center indicating the center of a pupil from an image of an eyeball imaged by the stereo camera 102. The second calculator 352 calculates a position (a second position) of a corneal reflex center indicating the center of a corneal reflex from the taken image of the eyeball. The first calculator 351 and the second calculator 352 correspond to a position detector that detects the first position indicating the center of the pupil and the second position indicating the center of the corneal reflex.
The third calculator 353 calculates a corneal curvature center (a fourth position) from a line (a first line) connecting between the LED light source 103 and the corneal reflex center. The third calculator 353, for example, calculates a position that is on the line and the distance of which from the corneal reflex center is a certain value as the corneal curvature center. The certain value can be a value determined in advance from a general corneal curvature radius value or the like.
The corneal curvature radius value can have individual differences, and when the corneal curvature center is calculated using the value determined in advance, a large error may possibly occur. Given this situation, the third calculator 353 may calculate the corneal curvature center in consideration of the individual differences. In this case, the third calculator 353 first, using the pupil center and the corneal reflex center calculated when the subject is made to gaze at a target position (a third position), calculates a point of intersection between a line (a second line) connecting between the pupil center and the target position and a line (the first line) connecting between the corneal reflex center and the LED light source 103. The third calculator 353 then calculates a distance (a first distance) between the pupil center and the calculated point of intersection and stores the distance, for example, in the storage 150.
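The geometric core of this calibration step is the intersection of two three-dimensional lines. The following is a minimal numpy sketch with made-up coordinates; because measured lines are in practice slightly skew, the point on the first line closest to the second is used in place of an exact intersection.

```python
import numpy as np

def line_intersection_3d(p1, d1, p2, d2):
    """Point on line 1 (p1 + t*d1) closest to line 2 (p2 + s*d2).

    Ideal lines would intersect exactly; with measurement noise they are
    slightly skew, so the closest point stands in for the intersection.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    b = d1 @ d2
    # Minimize |(p1 + t*d1) - (p2 + s*d2)|^2 over t and s.
    t = (b * (w @ d2) - (w @ d1)) / (1.0 - b * b)
    return p1 + t * d1

# Hypothetical calibration-time positions in world coordinates (mm).
led = np.array([0.0, -150.0, 80.0])     # LED light source 103
reflex = np.array([10.0, 5.0, 600.0])   # corneal reflex center (second position)
pupil = np.array([10.0, 8.0, 601.0])    # pupil center (first position)
target = np.array([0.0, 0.0, 0.0])      # target position (screen center)

# First line: LED -> corneal reflex center; second line: target -> pupil center.
center = line_intersection_3d(led, reflex - led, target, pupil - target)
first_distance = np.linalg.norm(pupil - center)  # stored for later detection
```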
The target position may be a position that is determined in advance and whose three-dimensional world coordinate values can be calculated. For example, the central position of the display screen 201 (the point of origin of the three-dimensional world coordinates) can be the target position. In this case, the output controller 356, for example, displays an image (a target image) at the target position (the center) of the display screen 201 and makes the subject gaze at it. With this configuration, the subject can be made to gaze at the target position.
The target image may be any image so long as the subject can be made to gaze at it. Examples include an image whose display manner, such as brightness or color, changes, and an image whose display manner differs from that of other areas.
The target position is not limited to the center of the display screen 201 and may be any position. Setting the target position at the center minimizes the maximum distance to any edge of the display screen 201, which can reduce a measurement error at the time of visual line detection, for example.
The processing up to the calculation of the distance is performed in advance, for example, before starting actual visual line detection. At the time of the actual visual line detection, the third calculator 353 calculates, as the corneal curvature center, a position that is on the line connecting between the LED light source 103 and the corneal reflex center and whose distance from the pupil center is the distance calculated in advance. The third calculator 353 corresponds to a calculator that calculates a corneal curvature center (the fourth position) from the position of the LED light source 103, a certain position (the third position) indicating the target image on the display 101, the position of the pupil center, and the position of the corneal reflex center.
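The detection-time counterpart picks the point on the light-source line at the stored distance from the pupil center. A sketch under the same made-up world frame; solving the quadratic and taking the larger root, i.e. the candidate farther from the LED, is an assumption about which of the two solutions is anatomically correct.

```python
import numpy as np

def curvature_center_at_distance(led, reflex, pupil, stored_distance):
    """Point on the LED-to-corneal-reflex line whose distance from the
    pupil center equals the distance stored during calibration."""
    u = reflex - led
    u = u / np.linalg.norm(u)
    v = reflex - pupil
    # |reflex + t*u - pupil|^2 = stored_distance^2  =>  t^2 + 2*b*t + c = 0
    b = u @ v
    c = v @ v - stored_distance ** 2
    disc = b * b - c
    if disc < 0.0:
        raise ValueError("no point on the line at the stored distance")
    t = -b + np.sqrt(disc)   # larger root: deeper into the eye, away from the LED
    return reflex + t * u
```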
The visual line detector 354 detects a visual line of the subject from the pupil center and the corneal curvature center. The visual line detector 354, for example, detects a direction from the corneal curvature center toward the pupil center as a visual line direction of the subject.
The visual point detector 355 detects a visual point of the subject using the detected visual line direction. The visual point detector 355, for example, detects a visual point (a gazing point), which is a point at which the subject gazes on the display screen 201. For example, the visual point detector 355 detects, as the gazing point, the point of intersection between a visual line vector represented in the three-dimensional world coordinate system and the plane containing the display screen 201.
The output controller 356 controls output of various types of information for the display 101 and the speaker 205. The output controller 356, for example, outputs the target image at the target position on the display 101. The output controller 356 controls output of a diagnostic image, evaluation results by the evaluator 357, or the like to the display 101.
The diagnostic image may be any image appropriate for evaluation processing based on a visual line (visual point) detection result. When a developmental disorder is diagnosed, for example, a diagnostic image containing an image that subjects with the developmental disorder prefer (a geometrical pattern video or the like) and other images (portrait videos or the like) may be used.
The evaluator 357 performs evaluation processing based on the diagnostic image and the gazing point detected by the visual point detector 355. When a developmental disorder is diagnosed, for example, the evaluator 357 analyzes the diagnostic image and the gazing point and evaluates whether the subject has gazed at the image that subjects with the developmental disorder prefer. The evaluator 357, for example, calculates an evaluation value based on the position of the gazing point of the subject when the diagnostic images described later are displayed.
A pupil center 407 and a corneal reflex center 408 represent the center of a pupil and the center of a corneal reflex point, respectively, detected when the LED light source 103 is turned on. A corneal curvature radius 409 represents the distance from a corneal surface to a corneal curvature center 410.
The method A uses two LED light sources 511 and 512 in place of the LED light source 103. The method A calculates a point of intersection between a line 515 connecting between a corneal reflex center 513 illuminated by the LED light source 511 and the LED light source 511 and a line 516 connecting between a corneal reflex center 514 illuminated by the LED light source 512 and the LED light source 512. This point of intersection is a corneal curvature center 505.
In contrast, the present embodiment considers a line 523 connecting between a corneal reflex center 522 illuminated by the LED light source 103 and the LED light source 103. The line 523 passes through the corneal curvature center 505. The curvature radius of the cornea is known to have a nearly constant value with little influence of individual differences. From this fact, the corneal curvature center illuminated by the LED light source 103 is present on the line 523 and can be calculated using a general curvature radius value.
However, when a visual point is calculated using the position of the corneal curvature center determined from the general curvature radius value, the visual point position deviates from the true position due to individual differences in the eyeball, which may prevent accurate visual point detection.
A target position 605 is a position at which the target image or the like is displayed at one point on the display 101 and at which the subject is made to gaze. In the present embodiment, the target position 605 is the central position of the screen of the display 101. A line 613 is a line connecting between the LED light source 103 and a corneal reflex center 612. A line 614 is a line connecting between the target position 605 (the gazing point) at which the subject gazes and a pupil center 611. A corneal curvature center 615 is the point of intersection between the line 613 and the line 614. The third calculator 353 calculates a distance 616 between the pupil center 611 and the corneal curvature center 615 and stores the distance.
First, the output controller 356 reproduces the target image at one point on the screen of the display 101 (Step S101) and causes the subject to gaze at the one point. Next, the controller 300 turns on the LED light source 103 toward an eye of the subject using the LED drive controller 316 (Step S102). The controller 300 images the eye of the subject by the left and right cameras (the right camera 202 and the left camera 203) (Step S103).
Under the emission from the LED light source 103, the pupil part is detected as a dark part (a dark pupil), and a virtual image of the corneal reflex occurs as a reflection of the LED emission, so that the corneal reflex point (the corneal reflex center) is detected as a bright part. Specifically, the first calculator 351 detects the pupil part from the taken image and calculates coordinates indicating the position of the pupil center; for example, an area of certain brightness or less containing the darkest part in a certain area containing the eye is detected as the pupil part, and an area of certain brightness or more containing the brightest part is detected as the corneal reflex. The second calculator 352 detects the corneal reflex part from the taken image and calculates coordinates indicating the position of the corneal reflex center. The first calculator 351 and the second calculator 352 calculate these coordinate values for each of the two images acquired by the left and right cameras (Step S104).
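As a concrete illustration of the dark-pupil and bright-reflex detection, the following is a toy thresholding sketch; the threshold values and the centroid-based center estimate are assumptions, since the embodiment does not fix a specific image-processing method.

```python
import numpy as np

def detect_pupil_and_reflex(eye_gray, dark_thresh=40, bright_thresh=220):
    """Toy threshold-based detection on a grayscale eye image (numpy array).

    The pupil appears as a dark blob (dark pupil) and the corneal reflex as
    a small bright blob under the LED illumination; each center is estimated
    as the centroid of the thresholded pixels. Thresholds are illustrative.
    """
    pupil_mask = eye_gray <= dark_thresh
    reflex_mask = eye_gray >= bright_thresh

    def centroid(mask):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None  # detection failed
        return float(xs.mean()), float(ys.mean())

    return centroid(pupil_mask), centroid(reflex_mask)
```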
The right and left cameras are subjected to camera calibration by a method of stereo calibration in advance in order to acquire three-dimensional world coordinates, and transformation parameters are calculated. The method of stereo calibration may be any of conventionally used methods such as the method using the camera calibration theory by Tsai.
The first calculator 351 and the second calculator 352, using the transformation parameters, transform the coordinates obtained from the left and right cameras into three-dimensional world coordinates of the pupil center and the corneal reflex center (Step S105). The third calculator 353 obtains a line connecting between the determined world coordinates of the corneal reflex center and the world coordinates of the center position of the LED light source 103 (Step S106). Next, the third calculator 353 calculates a line connecting between the world coordinates of the center of the target image displayed at one point on the screen of the display 101 and the world coordinates of the pupil center (Step S107). The third calculator 353 obtains the point of intersection between the line calculated at Step S106 and the line calculated at Step S107 and determines the point of intersection to be the corneal curvature center (Step S108). The third calculator 353 calculates the distance between the pupil center and the corneal curvature center in this situation and stores the distance in the storage 150 or the like (Step S109). The stored distance is used for calculating the corneal curvature center at the time of subsequent visual point (visual line) detection.
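Step S105 amounts to stereo triangulation with the pre-computed transformation parameters. A hedged sketch using OpenCV's triangulation routine; OpenCV itself is an assumption here, since the embodiment only requires cameras calibrated in advance, for example by Tsai's method.

```python
import numpy as np
import cv2  # assumption: any triangulation using the stereo calibration
            # parameters would serve equally well

def to_world(P_right, P_left, pt_right, pt_left):
    """Triangulate one matched image point into 3D world coordinates (Step S105).

    P_right and P_left are the 3x4 projection matrices produced by the stereo
    calibration performed in advance; pt_right and pt_left are the (x, y)
    pixel coordinates of the same feature (pupil center or corneal reflex
    center) in the right and left camera images.
    """
    pr = np.asarray(pt_right, dtype=float).reshape(2, 1)
    pl = np.asarray(pt_left, dtype=float).reshape(2, 1)
    h = cv2.triangulatePoints(P_right, P_left, pr, pl)  # 4x1 homogeneous
    return h[:3, 0] / h[3, 0]                           # (X, Y, Z) world coords
```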
The distance between the pupil center and the corneal curvature center calculated when the subject gazes at the one point on the display 101 is maintained constant within the range in which the visual point is detected within the display 101. The distance may be determined from the average of all values calculated during the reproduction of the target image or from the average of several of those values.
A pupil center 811 and a corneal reflex center 812 indicate the position of the pupil center and the position of the corneal reflex center, respectively, calculated at the time of visual point detection. A line 813 is a line connecting between the LED light source 103 and the corneal reflex center 812. A corneal curvature center 814 is the position of the corneal curvature center calculated from the general curvature radius value. A distance 815 is the distance between the pupil center and the corneal curvature center calculated by the advance calculation processing. A corneal curvature center 816 is the position of the corneal curvature center calculated using the distance determined in advance: it is determined as the point that is present on the line 813 and whose distance from the pupil center equals the distance 815. With this determination, a visual line 817 calculated when the general curvature radius value is used is corrected to a visual line 818, and the gazing point on the screen of the display 101 is corrected from a gazing point 805 to a gazing point 806.
Steps S201 to S205 are similar to Steps S102 to S106 of the calculation processing described above.
The third calculator 353 calculates, as the corneal curvature center, a position that is on the line calculated at Step S205 and whose distance from the pupil center is equal to the distance determined in advance by the calculation processing (Step S206).
The visual line detector 354 determines a vector (a visual line vector) connecting between the pupil center and the corneal curvature center (Step S207). The vector indicates the direction in which the subject is looking. The visual point detector 355 calculates three-dimensional world coordinate values of the point of intersection between the visual line direction and the screen of the display 101 (Step S208). These values are the world coordinates of the one point on the display 101 at which the subject gazes. The visual point detector 355 transforms the determined three-dimensional world coordinate values into coordinate values (x, y) represented in the two-dimensional coordinate system of the display 101 (Step S209). With this transformation, the visual point (gazing point) on the display 101 at which the subject gazes can be calculated.
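Steps S207 to S209 reduce to a line-plane intersection followed by a change of coordinates. A minimal sketch under the world frame defined earlier (origin at the screen center, X right, Y up, Z toward the subject, so the display lies in the z = 0 plane); the physical screen size and resolution are illustrative parameters.

```python
import numpy as np

def gaze_point_on_screen(curvature_center, pupil_center,
                         screen_w_mm=510.0, screen_h_mm=290.0,
                         screen_w_px=1920, screen_h_px=1080):
    """Visual line vector, screen intersection, and pixel conversion."""
    d = pupil_center - curvature_center        # visual line direction (Step S207)
    if d[2] == 0.0:
        return None                            # line parallel to the screen
    t = -curvature_center[2] / d[2]
    hit = curvature_center + t * d             # world coords on screen (Step S208)
    x_px = (hit[0] / screen_w_mm + 0.5) * screen_w_px   # Step S209: to pixels
    y_px = (0.5 - hit[1] / screen_h_mm) * screen_h_px   # top-left pixel origin
    return float(x_px), float(y_px)
```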
First Modification
The calculation processing for calculating the distance between the pupil center position and the corneal curvature center position is not limited to the method described above. The following describes another example of the calculation processing.
A segment 1101 is a segment (a first segment) connecting between the target position 605 and the LED light source 103. A segment 1102 is a segment (a second segment) that is parallel to the segment 1101 and connects between the pupil center 611 and the line 613. The present modification calculates and stores the distance 616 between the pupil center 611 and the corneal curvature center 615 using the segment 1101 and the segment 1102 as follows.
Steps S301 to S307 are similar to Steps S101 to S107 of the calculation processing described above.
The third calculator 353 calculates the first segment (the segment 1101) connecting between the target position 605 and the LED light source 103 (Step S308).
The third calculator 353 calculates the second segment (the segment 1102) that is parallel to the first segment and passes through the pupil center 611 (Step S309).
The third calculator 353, based on a similarity relation between a triangle with the corneal curvature center 615 as a vertex and with the segment calculated at Step S308 as a base and a triangle with the corneal curvature center 615 as a vertex and with the segment calculated at Step S309 as a base, calculates the distance 616 between the pupil center 611 and the corneal curvature center 615 (Step S310). The third calculator 353, for example, calculates the distance 616 so that the ratio of the length of the segment 1102 to the length of the segment 1101 is equal to the ratio of the distance 616 to the distance between the target position 605 and the corneal curvature center 615.
The distance 616 can be calculated by the following equation (1), where L614 is the distance from the target position 605 to the pupil center 611.
Distance 616=(L614×L1102)/(L1101−L1102) (1)
The third calculator 353 stores the calculated distance 616 in the storage 150 or the like (Step S311). The stored distance is used for calculating the corneal curvature center at the time of subsequent visual point (visual line) detection.
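For reference, equation (1) transcribes directly into code. A minimal sketch; all lengths must share one unit, and the geometry requires L1101 > L1102 > 0.

```python
def distance_616(L614, L1101, L1102):
    """Equation (1): pupil-center to corneal-curvature-center distance from
    the similar triangles formed by the two parallel segments."""
    assert L1101 > L1102 > 0, "the second segment must be shorter than the first"
    return (L614 * L1102) / (L1101 - L1102)
```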
The following describes the details of the diagnosis support processing. The present embodiment uses an image representing a cause of a certain event and the event itself as the diagnostic image. Measuring the dwell time of the gazing point for each area set in the diagnostic image can support diagnosis. This configuration can support diagnosis concerning, for example, what point developmentally disabled people gaze at to obtain information for understanding a causal relation, and whether they are unable to understand the causal relation despite gazing at it. In other words, diagnosis support with higher precision than conventional methods is provided.
The present embodiment uses, as the diagnostic images, a still image obtained by capturing a part of the moving image before an event occurs (or an equivalent still image) and a still image obtained by capturing a part of the moving image after the event occurs (or an equivalent still image).
In the examples described below, the still image 1 and the still image 2 respectively show scenes before and after an event in which a person trips over a stone and falls down. In the diagnostic images, an area M is set around the person, an area H around the head, an area C around the stone that is the cause of the event, and an area S around an object that is not the cause.
First, the output controller 356 displays the still image 1 on the display 101. The subject sees the displayed still image 1. In this situation, the visual point detector 355 detects the gazing point (Step S401).
Next, the output controller 356 displays the still image 2 on the display 101. The subject sees the displayed still image 2. In this situation, the visual point detector 355 detects the gazing point (Step S402). The advancement from Step S401 to Step S402 may be performed in accordance with the pressing of a "proceed to next" button (not illustrated) or the like by the subject or an operator, or the display may be advanced continuously without any instruction from the subject or the operator.
Next, the evaluator 357 receives a selection of a primary answer by the subject (Step S403).
The evaluator 357 acquires position information indicating a position touched by the subject or the operator from the display 101 configured as, for example, a touch panel and receives the selection of the option corresponding to the position information. The evaluator 357 may instead receive the primary answer designated by the subject or the operator using an input device (a keyboard or the like, not illustrated). The selection of the primary answer is not limited to the method using the selection screen described above.
Returning to the diagnosis support processing, the output controller 356 next displays the moving image corresponding to the diagnostic images on the display 101. The subject sees the displayed moving image, and in this situation the visual point detector 355 detects the gazing point (Step S404).
Details of gazing point detection processing at Step S401, Step S402, and Step S404 will be described below.
The evaluator 357 then receives a selection of a secondary answer by the subject (Step S405). The secondary answer is an answer selected after displaying the moving image corresponding to the diagnostic images. The selection of the secondary answer may be performed by, for example, a similar method to the selection of the primary answer. Options of the secondary answer may be the same as the options of the primary answer or different therefrom. The secondary answer is selected in order to enable the determination (the primary answer) after seeing the still image 1 and the still image 2 and the determination (the secondary answer) after seeing the continuous moving image subsequently to be compared with each other.
Next, the output controller 356 displays the right answer to the question on the display 101 (Step S406). The output controller 356 displays an explanation on the display 101 (Step S407).
On this right answer screen, for example, when a next button 2001 is pressed, an explanation screen is displayed.
Returning to the diagnosis support processing, the evaluator 357 finally performs the analysis processing described later based on the detected gazing points.
The following describes the details of the gazing point detection processing.
First, the output controller 356 starts reproduction (display) of the diagnostic image (the still image 1) (Step S501). Next, the output controller 356 resets a timer for measuring a reproduction time (Step S502). Next, the visual point detector 355 resets counters (counters ST1_M, ST1_H, ST1_C, ST1_S, and ST1_OT) that count up at the time of gazing at the respective areas (Step S503).
The counters ST1_M, ST1_H, ST1_C, ST1_S, and ST1_OT are counters used when the still image 1 (ST1) is displayed. The counters correspond to the following areas, and counting up a counter measures the dwell time, that is, the time during which the gazing point is detected within the corresponding area.
The counter ST1_M: the area M
The counter ST1_H: the area H
The counter ST1_C: the area C
The counter ST1_S: the area S
The counter ST1_OT: an area other than the above areas
Next, the visual point detector 355 detects the gazing point of the subject (Step S504). The visual point detector 355, for example, can detect the gazing point by the visual point (visual line) detection procedure described above.
If the detection of the gazing point has failed (Yes at Step S505), the process advances to Step S516. If the detection of the gazing point has succeeded (No at Step S505), the visual point detector 355 acquires the coordinates of the gazing point (gazing point coordinates) (Step S506).
The visual point detector 355 determines whether the acquired gazing point coordinates are within the area M (around the person) (Step S507). If so (Yes at Step S507), the visual point detector 355 further determines whether the coordinates are within the area H (around the head) (Step S508). If the coordinates are within the area H (Yes at Step S508), the visual point detector 355 increments (counts up) the counter ST1_H (Step S510), and the process proceeds to Step S516. If the coordinates are not within the area H (No at Step S508), the visual point detector 355 increments the counter ST1_M (Step S509), and the process proceeds to Step S516.
If the gazing point coordinates are not within the area M (No at Step S507), the visual point detector 355 determines whether they are within the area C (around the object that is the cause in the causal relation) (Step S511). If so (Yes at Step S511), the visual point detector 355 increments the counter ST1_C (Step S512), and the process proceeds to Step S516.
If the gazing point coordinates are not within the area C (No at Step S511), the visual point detector 355 determines whether they are within the area S (around the object that is not the cause in the causal relation) (Step S513). If so (Yes at Step S513), the visual point detector 355 increments the counter ST1_S (Step S514), and the process proceeds to Step S516.
If the gazing point coordinates are not within the area S (No at Step S513), the gazing point is not within any of the set areas, and the visual point detector 355 increments the counter ST1_OT (Step S515).
Next, the output controller 356 checks whether the timer that manages the reproduction time of the video has reached a time-out (Step S516). If the certain time has not elapsed, that is, if the timer has not reached a time-out (No at Step S516), the process returns to Step S504 to continue the measurement. If the timer has reached a time-out (Yes at Step S516), the output controller 356 stops the reproduction of the video (Step S517).
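The loop of Steps S503 to S517 reduces to per-frame classification of the gazing point into the nested areas plus counter updates. A sketch with illustrative rectangular areas; the embodiment fixes only the area semantics (the area H lies inside the area M, while the areas C and S are outside M), not the shapes or coordinates used below.

```python
# Illustrative rectangles (x0, y0, x1, y1) in display pixels; H lies inside M.
AREAS = {
    "H": (430, 80, 530, 180),    # around the head
    "M": (380, 80, 580, 520),    # around the person
    "C": (700, 440, 780, 510),   # around the object causing the event
    "S": (120, 440, 200, 510),   # around the object unrelated to the cause
}

def in_area(pt, rect):
    x, y = pt
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def classify(pt):
    """Map one gazing-point sample to a counter key (Steps S505 to S515)."""
    if pt is None:                       # detection failed (Yes at Step S505)
        return None
    if in_area(pt, AREAS["M"]):          # Step S507
        return "H" if in_area(pt, AREAS["H"]) else "M"   # Steps S508 to S510
    if in_area(pt, AREAS["C"]):          # Step S511
        return "C"
    if in_area(pt, AREAS["S"]):          # Step S513
        return "S"
    return "OT"                          # Step S515

def count_dwell(gaze_samples):
    """Accumulate per-area dwell counters over one image's reproduction,
    one gazing-point sample (or None) per frame until the timer expires."""
    counters = {k: 0 for k in ("M", "H", "C", "S", "OT")}  # e.g. the ST1_* set
    for pt in gaze_samples:
        key = classify(pt)
        if key is not None:
            counters[key] += 1
    return counters
```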
The gazing point detection processing when the still image 2 (ST2) is displayed at Step S402 can use a procedure similar to that described above, with the counters replaced as follows:
The counter ST1_M→a counter ST2_M
The counter ST1_H→a counter ST2_H
The counter ST1_C→a counter ST2_C
The counter ST1_S→a counter ST2_S
The counter ST1_OT→a counter ST2_OT
The gazing point detection processing when the moving image (MOV) is displayed at Step S404 can likewise use a procedure similar to that described above, with the counters replaced as follows:
The counter ST1_M→a counter MOV_M
The counter ST1_H→a counter MOV_H
The counter ST1_C→a counter MOV_C
The counter ST1_S→a counter MOV_S
The counter ST1_OT→a counter MOV_OT
The following describes the details of the analysis processing.
First, the evaluator 357 determines whether the selected primary answer is the right answer (Step S601). If the selected primary answer is the right answer (Yes at Step S601), the evaluator 357 calculates an evaluation value indicating that the capacity of understanding causal relations is high (Step S602).
If the primary answer is not the right answer (No at Step S601), or after Step S602, the evaluator 357 calculates ans1 = ST1_M + ST2_M (Step S603). Here, ST1_M represents the value of the counter ST1_M; in the following, the value of a counter X is similarly represented simply as "X."
Next, the evaluator 357 determines whether ans1 is larger than a threshold k11 (Step S604). If ans1 is larger than the threshold k11 (Yes at Step S604), the evaluator 357 calculates an evaluation value indicating that the degree of attention toward changes in events is high (Step S605). This is because ans1 indicates the degree to which the gazing point is contained within the area M containing the person. The evaluation value may be a binary value indicating whether the degree of attention toward changes in events is high or low, or a multiple value varying in accordance with, for example, the magnitude of ans1.
If ans1 is equal to or less than the threshold k11 (No at Step S604), or after Step S605, the evaluator 357 calculates ans2 = ST1_H + ST2_H (Step S606). The evaluator 357 then determines whether ans2 is larger than a threshold k12 (Step S607). If ans2 is larger than the threshold k12 (Yes at Step S607), the evaluator 357 calculates an evaluation value indicating that the degree of attention toward the head containing the face is high and that the development of sociality is high (Step S608). This is because ans2 indicates the degree to which the gazing point is contained within the area H containing the head.
If ans2 is equal to or less than the threshold k12 (No at Step S607), or after Step S608, the evaluator 357 calculates ans3 = ST1_C + ST2_C (Step S609). The evaluator 357 then determines whether ans3 is larger than a threshold k13 (Step S610). If ans3 is larger than the threshold k13 (Yes at Step S610), the evaluator 357 calculates an evaluation value indicating that the capacity of predicting relevance is high and that attention is paid to the object related to the causal relation (Step S611). This is because ans3 indicates the degree to which the gazing point is contained within the area C containing the stone that causes the person to fall down.
If ans3 is equal to or less than the threshold k13 (No at Step S610), or after Step S611, the evaluator 357 calculates ans4 = ST1_M + ST2_M + ST1_C + ST2_C + ST1_S + ST2_S (Step S612). The evaluator 357 then determines whether ans4 is larger than a threshold k14 (Step S613). If ans4 is larger than the threshold k14 (Yes at Step S613), the evaluator 357 calculates an evaluation value indicating that the degree of interest toward various objects and events is high (Step S614). This is because ans4 indicates the degree to which the gazing point is contained within the areas (the area M, the area C, and the area S) containing objects such as the person and the stone.
If ans4 is equal to or less than the threshold k14 (No at Step S613), or after Step S614, the analysis processing ends. Similarly to the evaluation value based on ans1, the evaluation values based on ans2, ans3, and ans4 may be binary values or multiple values.
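Taken together, the analysis is a set of threshold comparisons on counter sums. A sketch with a parameterized list of counter sets, so the same function serves both the still images (ST1 and ST2 with thresholds k11 to k14) and the moving-image variant described later (MOV with k21 to k24); all numeric values in the example call are assumptions, not values given here.

```python
def analyze(counter_sets, thresholds):
    """Threshold tests on summed dwell counters (Steps S603 to S614).

    counter_sets: list of counter dicts with keys "M", "H", "C", "S"
                  (e.g. [ST1, ST2] for the still images, [MOV] for the video).
    thresholds:   (k1, k2, k3, k4), i.e. k11..k14 or k21..k24.
    """
    k1, k2, k3, k4 = thresholds

    def total(key):
        return sum(c[key] for c in counter_sets)

    ans1, ans2, ans3 = total("M"), total("H"), total("C")
    ans4 = ans1 + ans3 + total("S")
    return {
        "attention_to_event_changes": ans1 > k1,   # Step S605
        "sociality_development":      ans2 > k2,   # Step S608
        "relevance_prediction":       ans3 > k3,   # Step S611
        "interest_in_objects":        ans4 > k4,   # Step S614
    }

# Example with made-up counters and thresholds:
st1 = {"M": 120, "H": 60, "C": 25, "S": 5}
st2 = {"M": 110, "H": 55, "C": 30, "S": 8}
result = analyze([st1, st2], thresholds=(200, 100, 40, 280))
```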
Subjects having a developmental disorder often have difficulty in understanding causal relations. It is desirable to change the method of care and education depending on whether the causal relation cannot be understood even though the subject gazes at its cause and takes the information into the brain, or the causal relation cannot be understood because the subject does not try to see the cause, so that the information itself does not reach the brain. In particular, when a case has neither the evaluation value indicating that the capacity of understanding causal relations is high (Step S602), the evaluation value indicating that the development of sociality is high (Step S608), nor the evaluation value indicating that the capacity of predicting relevance is high (Step S611), the case has a possibility of a developmental disorder.
The diagnosis supporting device of the present embodiment measures what part a subject sees when images before and after an event (for example, the still image 1 and the still image 2) are displayed. With this configuration, diagnosis can be supported with high precision even as to whether causal relations can be understood. With the analyzed (diagnosed) result as a reference, a policy of care and education can be determined.
When, for example, only the evaluation value indicating that the capacity of predicting relevance is high (Step S611) is calculated, diagnosis about the understanding of causal relations can still be supported. In this case, there is no need to display the right answer screen or to receive the selection of the primary answer, because only the detection result of the gazing point while the diagnostic image is displayed is needed to calculate the evaluation value as in Step S611.
The following describes diagnosis support processing that compares the present measurement data with past measurement data of the same subject.
First, the evaluator 357 stores subject information, such as the name of the subject and the measurement date, in the storage 150 before the measurement (Step S701). Next, the diagnosis support processing (measurement) described above is performed, and whether past measurement data of the subject is stored is then determined (Step S703).
If the past measurement data is stored (Yes at Step S703), the output controller 356 displays information indicating the past measurement data and a change in the present measurement data relative to the past measurement data on the display 101 (Step S704).
Concerning the evaluation value of "capacity of understanding causal relations is high," for example, it may be indicated that there is no change. ansn_old (n = 1 to 4) indicates the values of the previous measurement data, and ansn_new (n = 1 to 4) indicates the values of the present measurement data. The change in the measurement data can be determined, for example, from the difference between ansn_new and ansn_old.
The output controller 356 displays the information indicating the changes in the measurement data thus determined (values indicating the differences), for example, on the display 101. The output controller 356 may output the measurement data and the information indicating the changes to another device (an external communication device connected via a network, a printer, or the like) in place of the display 101.
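As a small sketch of the comparison step, assuming the ansn values are kept in dicts keyed "ans1" to "ans4" (the dict representation is an assumption):

```python
def measurement_changes(ans_old, ans_new):
    """Per-measure difference between the previous and present sessions;
    a positive value indicates improvement on that measure."""
    return {name: ans_new[name] - ans_old[name] for name in ans_old}

# Example with made-up values:
# measurement_changes({"ans1": 40, "ans3": 12}, {"ans1": 55, "ans3": 20})
# -> {"ans1": 15, "ans3": 8}
```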
Returning to the flow, if no past measurement data is stored (No at Step S703), or after Step S704, the evaluator 357 stores the present measurement data in the storage 150 for use in later comparisons, and the processing ends.
Thus, training by seeing the diagnostic image several times can increase the capacity of understanding causal relations, and the effects of the training can be checked.
The analysis processing may also be performed when the moving image is displayed. In that case, the values and the thresholds used in the analysis processing are replaced as follows:
ans1=MOV_M
ans2=MOV_H
ans3=MOV_C
ans4=MOV_M+MOV_C+MOV_S
k11→k21
k12→k22
k13→k23
k14→k24
The analysis processing described above can then be performed using these replaced values and thresholds.
In order to support the determination of the policy of care and education, methods of training (a policy of care and education) to be recommended may be displayed on the display 101 or the like in accordance with the diagnostic result. For example, the evaluator 357 may compare the measurement data with a threshold for policy determination determined in advance, and the output controller 356 may display different methods of training for a case below the threshold and a case at or above the threshold. The output controller 356 may also display different methods of training in accordance with a combination of values of different pieces of measurement data (for example, the evaluation value indicating that the capacity of understanding causal relations is high and the evaluation value indicating that the capacity of predicting relevance is high). The method of training may be a method using the present diagnosis supporting device 100, a method using illustrations and photographs, or any other method of training.
Although the above example uses the moving image containing the diagnostic images (the still image 1 and the still image 2) used at the time of diagnosis as the explanation image, the explanation image is not limited to such a moving image. Any image can be used so long as it represents a causal relation between a cause and an event and serves as support for training. For example, an explanation image containing one or more still images different from the diagnostic images may be used.
Second Modification
The above embodiment describes an example in which the diagnosis supporting device that supports diagnosis of a developmental disorder or the like is used also as a training supporting device. Any device that can display, for example, an explanation image, a right answer, an explanation, or the like other than the diagnosis supporting device can be used as the training supporting device. The present modification describes an example in which a portable terminal such as a tablet, a smartphone, and a notebook personal computer (PC) is used as the training supporting device. Other than that, an information processing device such as an ordinary personal computer can be used as the training supporting device.
In the present modification, the gazing point detection processing performed at Step S401, Step S402, and Step S404 of the diagnosis support processing described above is omitted.
When a program for training support is started, for example, the output controller 356 displays a menu screen (Step S901).
Returning to the flow of the training support processing, the output controller 356 determines whether an end button 2811 on the menu screen has been pressed (Step S902). If the end button 2811 has been pressed (Yes at Step S902), the training support processing ends.
If the end button 2811 has not been pressed (No at Step S902), the output controller 356 determines whether any of the selection buttons 2801 to 2806 has been pressed (Step S903). If any of the selection buttons 2801 to 2806 has not been pressed (No at Step S903), the process returns to Step S902 to repeat the processing.
If any of the selection buttons 2801 to 2806 has been pressed (Yes at Step S903), the output controller 356 receives the selection of the question corresponding to the pressed button among the selection buttons 2801 to 2806 (Step S904). The output controller 356 displays the still image 1 indicating the cause of the event among the images corresponding to the received question (Step S905). Suppose that, for example, a user (trainee) has pressed the selection button 2801 on the menu screen.
Returning to the flow, the output controller 356 next displays an answer selection screen for the question (Step S906).
Next, the output controller 356 receives selection of an answer (Step S907).
Returning to the flow, the output controller 356 displays the right answer to the question and then displays the explanation screen, similarly to the diagnosis support processing.
If the button 3301 is pressed on the explanation screen, the output controller 356 displays the reproduction screen on the display 101 (Step S910). The reproduction screen displays a moving image containing, for example, a process from the still image 1 to the still image 2.
When the display of the reproduction screen ends, the process returns to the menu display (Step S901).
Such processing enables training even on a device that does not incorporate a gazing point detector and is less expensive, such as a tablet. It is noted, however, that a doctor or the like cannot perform evaluation or guidance based on a gazing point during such training.
If the answer by the user is the right answer, points or the like may be given for each right answer. With this configuration, the user is motivated to perform training, and training can be supported more effectively.
As described above, the present embodiment produces the following advantageous effects, for example.
(1) By causing a subject to view images for training several times, the subject can be enabled to understand a causal relation effectively.
(2) Effects of training can be measured.
(3) Whether a subject gazes at an object related to a causal relation can be known.
(4) Points and directions of care and education can be set.
(5) Self-analysis is enabled.
(6) Development of sociality can also be checked.
(7) Without the need to arrange light sources (the illuminator) at two sites, visual line detection is enabled using a light source arranged at one site.
(8) Owing to the light source arranged at one site, the device can be compact, and cost reduction can also be achieved.
The diagnosis supporting device, the diagnosis supporting method, and the computer-readable recording medium according to the present embodiment produce the advantageous effect of increasing the accuracy of diagnosis.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.