The present disclosure relates to ophthalmological diagnosis, and more particularly to a technique for making a diagnosis on a cornea.
The pupil at the anterior eye part of an eyeball is covered with a thin film called the “cornea”. The surface of the cornea is covered with a thin layer of tears called the “lachrymal fluid layer”, and is thereby protected from dust and the like. However, in recent years, the widespread use of contact lenses, long hours of desk work with PCs (Personal Computers), and the like have led to an increased number of patients who complain of symptoms of so-called dry eye, in which the lachrymal fluid layer breaks up. As a dry-eye examination method, there is a method of observing a cornea for a certain period of time and making a diagnosis based on changes in the state of the surface of the cornea due to drying.
Regarding the dry-eye examination method, for example, Japanese Patent Laying-Open No. 2005-211633 (PTL 1) discloses a cornea shape analysis apparatus configured as follows: “An arbitrary pattern is projected on a cornea of a subject, and reflection of the pattern from the cornea is captured in images. In the measurement, the subject is made to blink once to form a homogeneous lachrymal fluid layer on the surface of the cornea. After that, while the eyelid is maintained in an open state for about 10 seconds, a plurality of images of the pattern reflected from the cornea are recorded onto a digital memory in a time-series manner at arbitrarily determined time intervals. From the data recorded on the digital memory, an initial image just after the opening of the eyelid is employed as a reference, and a cross-correlation with the images captured at the arbitrary time intervals is calculated” (see [Abstract]).
Further, Japanese Patent Laying-Open No. 2004-321508 (PTL 2) discloses an ophthalmological measurement apparatus configured as follows: “The ophthalmological measurement apparatus is aligned when measurement is started. A calculation unit executes initial settings for measurement intervals, measurement times, and the like of the apparatus using a wavefront measurement unit. Triggering for the start of the measurement is provided by an input unit or the calculation unit. The calculation unit repeats the measurement of the shape of the cornea and the wavefront aberration of the cornea using the measurement unit until a measurement end time is reached. When the measurement end time is reached, a determination unit analyzes a break-up state, which is one index for determining a dry-eye state. The determination unit finds and outputs a value about the break-up, and performs an automatic diagnosis on the dry eye based on the value” (see [Abstract]).
PTL 1: Japanese Patent Laying-Open No. 2005-211633
PTL 2: Japanese Patent Laying-Open No. 2004-321508
According to each of the techniques disclosed in PTL 1 and PTL 2, a dedicated examination apparatus is required to make a diagnosis on a cornea, a great burden is imposed on the patient during the examination, and an objective and visual examination result cannot be presented. Thus, there has been a need for a technique that reduces the burden on a patient during an examination and presents an objective and visual examination result.
The present disclosure has been made in view of the above background, and an object in an aspect of the present disclosure is to provide a technique for reducing a burden on a patient during an examination and presenting an objective examination result.
An ophthalmological diagnosis device for providing information to assist in making a diagnosis on a cornea according to a certain embodiment includes: an input unit that receives an input of video image data from an external device; an output unit that outputs image data; a storage unit that stores an evaluation criterion for an injury of the cornea; and a processing unit that processes data. The processing unit generates a diagnosis image based on a position of the cornea in each still image of the video image data; divides the cornea captured in the generated diagnosis image into a plurality of regions and evaluates the injury in each of the divided regions based on the evaluation criterion; generates diagnosis information in which the diagnosis image and information about the evaluation on the injury in each of the regions are superimposed on each other; and outputs the diagnosis information through the output unit.
In a certain aspect, the generating of the diagnosis image by the ophthalmological diagnosis device includes a process of extracting a reference still image from the video image data based on the position of the cornea, and combining the reference still image with another still image extracted from the video image data, by superimposing the other still image on the reference still image based on positions of an iris and the injury of the cornea captured in the reference still image.
In a certain aspect, the generating of the diagnosis image by the ophthalmological diagnosis device includes a process of comparing portions of the reference still image and the superimposed still image, the portions being estimated to represent the same position on the cornea, and avoiding a portion determined to have dust from being included in the diagnosis image, the portion being determined to have the dust based on a contrast ratio or high-frequency component in each of the portions of the still images estimated to represent the same position.
In a certain aspect, the generating of the diagnosis image by the ophthalmological diagnosis device includes a process of using, for alignment of the reference still image and the superimposed still image, a wrinkle of a conjunctiva captured in each of the reference still image and the superimposed still image.
In a certain aspect, the generating of the diagnosis image by the ophthalmological diagnosis device includes a process of determining portions of the reference still image and the superimposed still image as light reflected by a conjunctiva, and avoiding the portions determined as the light reflected by the conjunctiva from being included in the diagnosis image, the portions determined as the light reflected by the conjunctiva being portions in each of which a luminance on the conjunctiva is more than a predetermined value.
In a certain aspect, the generating of the diagnosis image by the ophthalmological diagnosis device includes a process of emphasizing a portion of the captured cornea having a high contrast ratio or high-frequency component in the diagnosis image after the combining.
In a certain aspect, the generating of the diagnosis information by the ophthalmological diagnosis device includes a process of detecting the position and size of the cornea captured in the diagnosis image, dividing the cornea into the plurality of regions in the form of a grid based on the detected position and size of the cornea, and evaluating the injury in each of the regions divided in the form of the grid, based on a contrast ratio or high-frequency component in the region.
In a certain aspect, the generating of the diagnosis information by the ophthalmological diagnosis device includes a process of superimposing a score on each of the regions divided in the form of the grid in the diagnosis image, the score being based on the evaluation on the injury in the region.
In a certain aspect, the generating of the diagnosis information by the ophthalmological diagnosis device includes a process of superimposing a frame line on each of the regions divided in the form of the grid in the diagnosis image, the frame line having a color that is based on the evaluation on the injury in the region.
In a certain aspect, the generating of the diagnosis information by the ophthalmological diagnosis device includes a process of calculating a comprehensive score for the evaluations on the injury in the regions.
In a certain aspect, the generating of the diagnosis information by the ophthalmological diagnosis device includes a process of blurring an iris portion captured in the diagnosis image.
In a certain aspect, the ophthalmological diagnosis device further includes a communication unit that communicates through a network. The processing unit transmits, to an external server via the communication unit, the diagnosis information to which metadata is added.
In a certain aspect, the storage unit further stores data about medicine administration. The processing unit obtains the data about the medicine administration from the storage unit, and the processing unit generates information of a medicine related to the diagnosis information, based on the diagnosis information and the data about the medicine administration.
According to the present disclosure, in a certain aspect, there can be provided a technique for reducing a burden on a patient during an examination and presenting an objective examination result.
The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
Hereinafter, embodiments of technical ideas according to the present disclosure will be described with reference to figures. In the description below, the same components are denoted by the same reference characters. Their names and functions are also the same. Therefore, they will not be described repeatedly in detail.
<A. Overview of System>
Camera 101 captures an image of an anterior eye part of a patient. Camera 101 is connected to diagnosis device 102 via a cable, and transmits a video image of the anterior eye part to diagnosis device 102 via the cable.
In a certain aspect, camera 101 may be connected to diagnosis device 102 via an HDMI (registered trademark) (High-Definition Multimedia Interface) cable. Alternatively, camera 101 may be connected to diagnosis device 102 via a USB (Universal Serial Bus) cable. Alternatively, camera 101 may be connected to diagnosis device 102 via an RCA cable. Alternatively, camera 101 may wirelessly communicate with diagnosis device 102 instead of a cable. Alternatively, camera 101 may temporarily store a captured video image therein and may transmit the video image to diagnosis device 102 via the Internet or the like.
In a certain aspect, camera 101 may be a camera using a CMOS (Complementary Metal-Oxide-Semiconductor) sensor or may be a camera using a CCD (Charge-Coupled Device) sensor. Alternatively, camera 101 may be a camera of a smartphone or may be a web camera.
Diagnosis device 102 extracts still images from the video image received from camera 101. Further, diagnosis device 102 selects, from the extracted still images, a still image usable for a diagnosis on the cornea, or diagnosis device 102 combines a plurality of still images to generate an image by which a diagnosis on the cornea can be made. Alternatively, diagnosis device 102 may encode the video image received from camera 101 into an appropriate moving image format, and then may extract a still image therefrom. Further, diagnosis device 102 quantitatively makes a diagnosis on an injury of the cornea in the still image based on pattern data of injuries of corneas.
In a certain aspect, diagnosis device 102 may be a PC, a workstation, a virtual machine on a cloud service, or dedicated hardware. Alternatively, diagnosis device 102 may be a parallel machine in which a plurality of PCs or the like are connected.
Monitor 103 is connected to diagnosis device 102 via a cable, and presents a diagnosis result of diagnosis device 102. Further, monitor 103 may present the still image of the diagnosis result as well as related diagnosis information, patient information, a medicine administration proposal, and the like.
In a certain aspect, monitor 103 may be a liquid crystal monitor, an organic EL (Electro-Luminescence) display, or an OLED (Organic Light Emitting Diode) display. In a certain aspect, monitor 103 may be connected to diagnosis device 102 via an HDMI (registered trademark) cable, a D-Sub15 cable, or a DVI (Digital Visual Interface) cable.
Diagnosis device 102 encodes moving image 201 into data in a processable format, and extracts still images from moving image 201. It can be said that moving image 201 is a collection of continuous still images with each frame being regarded as a unit. As shown in
Next, diagnosis device 102 extracts still images 202A to 202E from moving image 201. Next, from still images 202A to 202E, diagnosis device 102 selects still images to serve as candidates for use in cornea diagnosis. As an example, it is assumed that diagnosis device 102 selects still images 202A to 202C.
Next, diagnosis device 102 determines whether or not the cornea can be detected in each of selected still images 202A to 202C. When the cornea can be detected in one still image, diagnosis device 102 may directly use the still image for the cornea diagnosis. When the cornea cannot be detected in one still image, diagnosis device 102 generates a combined image for cornea detection by combining a plurality of still images. For example, when still image 202A is an image of the anterior eye part ideally captured from the front with no problem in terms of brightness, diagnosis device 102 determines that still image 202A is solely usable for the cornea diagnosis. On the other hand, when each of still images 202B and 202C is solely unusable for the cornea diagnosis, diagnosis device 102 combines still images 202B and 202C to generate a combined image for the cornea diagnosis.
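Purely for illustration, the flow up to this point (extracting frames from the received video and keeping rough candidates) might be sketched as follows. The sketch assumes OpenCV; the function name extract_candidates and the brightness threshold are hypothetical rather than part of the disclosed apparatus.

```python
# Illustrative sketch only: extracting still images from a video and keeping
# rough candidates for the cornea diagnosis (assumes OpenCV; the threshold
# value is an arbitrary example, not a disclosed value).
import cv2

def extract_candidates(video_path, min_brightness=40.0):
    cap = cv2.VideoCapture(video_path)
    candidates = []
    while True:
        ok, frame = cap.read()  # one frame is regarded as one still image
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Discard frames that are clearly too dark to show the anterior eye part.
        if gray.mean() >= min_brightness:
            candidates.append(frame)
    cap.release()
    return candidates
```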
Finally, diagnosis device 102 presents a cornea diagnosis still image 204, diagnosis information 205, and related information 206 on monitor 103. It should be noted that diagnosis device 102 may select one still image or a plurality of still images as cornea diagnosis still image 204.
<B. Hardware Configuration>
CPU 11 executes programs on diagnosis device 102 and processes data. Primary storage device 12 stores the programs executed by CPU 11 and the data to be referenced. In a certain aspect, a DRAM (Dynamic Random Access Memory) may be used as the primary storage device.
Secondary storage device 13 stores programs, data, and the like for a long period of time. Generally, the secondary storage device is slower in access than the primary storage device. Hence, data to be directly used by CPU 11 is placed in the primary storage device, whereas the other data is placed in the secondary storage device. In a certain aspect, a non-volatile storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive) may be used as the secondary storage device.
External device interface 14 is used when connecting an auxiliary device to diagnosis device 102. In a certain aspect, a USB (Universal Serial Bus) interface may be used as external device interface 14. Input interface 15 is used to connect a keyboard, a mouse, or the like. In a certain aspect, a USB interface may be used as input interface 15.
Output interface 16 is used to connect an output device such as a display. In a certain aspect, HDMI (registered trademark) or DVI may be used as output interface 16. Alternatively, when diagnosis device 102 is a server machine, diagnosis device 102 may include a serial interface for communication with an external terminal.
Communication interface 17 is used to communicate with an external communication device. In a certain aspect, a LAN (Local Area Network) port, a Wi-Fi (registered trademark) (Wireless Fidelity) transmission/reception device, or the like may be used as communication interface 17. Further, in a certain aspect, diagnosis device 102 may be a personal computer (PC), a workstation, or a virtual machine provided on a data center cloud.
Moving image processing unit 401 encodes the moving image received from camera 101. When the moving image received from camera 101 has already been encoded in a predetermined format, moving image processing unit 401 does not perform the encoding process. In the present embodiment, examples of the predetermined format include, but are not limited to, MP4 and AVI.
Moving image analysis unit 402 analyzes the encoded moving image with each frame being regarded as a unit, and extracts still images from the moving image. For example, when the moving image is of 60 fps (frames per second), 60 still images are included in one second of the moving image. Moving image analysis unit 402 performs circle detection and brightness evaluation on each still image in the moving image, and extracts only images in which the cornea can be detected. It should be noted that the cornea included in each still image may not be in an ideal state, and moving image analysis unit 402 extracts still images each having a certain score or higher from the moving image.
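As one possible, non-limiting realization of the circle detection and brightness evaluation, the sketch below uses Hough circle detection from OpenCV; the disclosure does not fix a particular detector, and all threshold values are assumed examples.

```python
# Illustrative sketch: circle detection and brightness evaluation per frame
# (assumes OpenCV; all parameters below are arbitrary examples, not
# disclosed values).
import cv2

def cornea_detectable(frame_bgr, min_radius=50, min_brightness=40.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if gray.mean() < min_brightness:          # too dark to evaluate
        return False
    gray = cv2.medianBlur(gray, 5)            # suppress noise before detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
        param1=100, param2=30, minRadius=min_radius, maxRadius=0)
    return circles is not None                # a circle of sufficient size found
```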
Image analysis unit 403 evaluates whether or not each of the still images extracted from the moving image is usable for the cornea diagnosis. Image analysis unit 403 makes reference to cornea pattern 409 to determine whether or not the cornea in each still image is solely usable for the cornea diagnosis. Image analysis unit 403 stores, as a diagnosis still image candidate, a still image determined to be solely usable for the cornea diagnosis. On the other hand, image analysis unit 403 sends, to combining processing unit 404, a still image determined to be solely unusable for the cornea diagnosis.
Combining processing unit 404 makes reference to cornea pattern 409 to combine the still images each solely unusable for the cornea diagnosis, thereby generating a combined image usable for the cornea diagnosis. Combining processing unit 404 sends the generated combined image to diagnosis evaluation unit 405.
Diagnosis evaluation unit 405 analyzes the still image sent from image analysis unit 403, and makes a diagnosis on the cornea using the analysis result. For example, diagnosis evaluation unit 405 makes reference to evaluation master 410 to evaluate the amount and size of an injury on the surface of the cornea in the still image. Diagnosis evaluation unit 405 sends, to presentation information generation unit 407, the evaluation information and the still image used for the diagnosis.
Communication processing unit 406 performs a process for communication with an external device. Communication processing unit 406 updates the data of the various types of tables as well as the programs. Communication processing unit 406 may transmit the diagnosis information to an external device instead of presentation information generation unit 407 described later.
Presentation information generation unit 407 generates a screen to be presented on monitor 103. Presentation information generation unit 407 generates an image in which the evaluation information obtained from diagnosis evaluation unit 405 is superimposed on the still image used for the diagnosis. Further, presentation information generation unit 407 makes reference to patient master 411, medicine master 412, diagnosis data 413, and medical case data 414 to generate various types of information related to the diagnosis result. Patient master 411 is used to present information about the patient subjected to the diagnosis. Medicine master 412 and medical case data 414 are mainly used to present past medicine administration information and proposal information for the diagnosis result. Diagnosis data 413 is used to present a past diagnosis history.
Editing UI 408 is a user interface for a doctor or researcher to correct a diagnosis content presented on monitor 103. Diagnosis evaluation unit 405 makes reference to evaluation master 410 to automatically perform the cornea diagnosis. When the diagnosis result presented on monitor 103 is not appropriate, the doctor may operate editing UI 408 via input interface 15 to manually correct the diagnosis result. The corrected content may be fed back to evaluation master 410.
Cornea pattern 409 includes cornea pattern data. Cornea pattern 409 is used by combining processing unit 404 to recognize the cornea in the still image.
Evaluation master 410 includes pattern data for the cornea diagnosis. Evaluation master 410 is used by diagnosis evaluation unit 405 to evaluate the amount and size of the injury on the surface of the cornea captured in the still image.
Patient master 411 includes information about the patient, and is referenced by presentation information generation unit 407 when generating the presentation screen. Medicine master 412 includes information about medicines, and is referenced by presentation information generation unit 407 when generating the presentation screen. Diagnosis data 413 includes diagnosis information. Diagnosis data 413 is referenced by presentation information generation unit 407 when generating the presentation screen. Medical case data 414 includes past medical case information. Medical case data 414 is referenced by presentation information generation unit 407 when presenting the information related to the diagnosis result.
<C. Procedure in Generation of Diagnosis Image>
Moving image analysis unit 402 generates the still images and excludes unnecessary still images from the large number of still images by simple determinations based on brightness, circle detection, and the like. In this way, moving image analysis unit 402 reduces the used areas of primary storage device 12 and secondary storage device 13, and reduces the workloads of the other functional units.
When image analysis unit 403 obtains still images 701A to 701D, image analysis unit 403 makes reference to cornea pattern 409 to determine whether or not the cornea in each of still images 701A to 701D is solely usable for the cornea diagnosis. In the example shown in
Combining processing unit 404 obtains the plurality of still images each unusable solely for the cornea diagnosis, and generates a combined image solely usable for the cornea diagnosis. In the example shown in
Image analysis unit 403 selects, through pattern matching or the like, the still image solely usable for the cornea diagnosis, thereby facilitating the below-described cornea diagnosis. Further, since combining processing unit 404 produces, based on the plurality of still images, the combined image solely usable for the cornea diagnosis, an effective cornea diagnosis can be facilitated even with still images obtained by an ordinary camera.
In still image 801A, cornea C is inclined to the left side when viewed from the front, and the entirety of a region X1 is not captured. Hence, still image 801A is not solely usable for the cornea diagnosis. On the other hand, in still image 801B, cornea C is inclined to the right side when viewed from the front, and the entirety of a region Y2 is not captured. Hence, still image 801B is also not solely usable for the cornea diagnosis.
Combining processing unit 404 aligns the respective anterior eye parts in the still images based on the shapes and relative positions of conjunctiva A, conjunctiva B, cornea C, and pupil D. In the example shown in
Combining processing unit 404 collects and combines clearly captured portions from still images 801A and 801B, so as to generate a combined image 802. For example, combining processing unit 404 selects region Y1 from still image 801A and selects region X2 from still image 801B and combines them.
It should be noted that when combining a plurality of still images, combining processing unit 404 may employ one still image as a reference still image and may superimpose another still image on the reference still image based on a combination of the positions of the cornea, the iris, the injury of the cornea, and the conjunctiva in the reference still image. When it is difficult to superimpose the still images only using the cornea, combining processing unit 404 may use a wrinkle of the conjunctiva at the end portion of the anterior eye part for the sake of alignment.
Further, combining processing unit 404 may estimate and correct the inclination of the combined region. It is assumed that region Y1 and region X2 are used in combined image 802. In this case, based on the shapes and relative positions of conjunctiva A, conjunctiva B, cornea C, and pupil D, combining processing unit 404 may estimate at what degrees of angles region Y1 and region X2 are inclined to the left and right sides with respect to the front, and may combine region Y1 and region X2 after distorting them so as to attain a shape close to the shape when oriented to the front.
Based on the shapes and relative positions of the regions specific to the anterior eye part of the human being, combining processing unit 404 generates, from the plurality of still images, the combined image of the anterior eye part oriented to the front. In this way, combining processing unit 404 can appropriately generate the combined image even when the patient's face is moved during the image capturing, thereby facilitating the cornea diagnosis process.
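For illustration, assuming the two still images have already been aligned as described above, the combining might select the more clearly captured side block by block, with Laplacian variance used as one possible proxy for how clearly a block is captured; the block size and function name are hypothetical.

```python
# Illustrative sketch: combining two already-aligned still images by taking,
# per block, the more sharply captured side (Laplacian variance as a proxy
# for "clearly captured"; assumes OpenCV, and the block size is arbitrary).
import cv2

def combine_aligned(img_a, img_b, block=64):
    out = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = img_a[y:y+block, x:x+block]
            pb = img_b[y:y+block, x:x+block]
            sharp_a = cv2.Laplacian(cv2.cvtColor(pa, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var()
            sharp_b = cv2.Laplacian(cv2.cvtColor(pb, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var()
            if sharp_b > sharp_a:             # B's block is captured more clearly
                out[y:y+block, x:x+block] = pb
    return out
```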
First, combining processing unit 404 makes reference to cornea pattern 409 to specify foreign objects D1, D2, S1, and S2 on the cornea in still images 901A and 901B through image recognition.
Next, based on the relative positions of the conjunctiva, cornea, pupil, and the like in still images 901A and 901B, combining processing unit 404 estimates that foreign object D1 in still image 901A and foreign object D2 in still image 901B are the same foreign object. Similarly, based on the relative positions of the conjunctiva, cornea, pupil, and the like in still images 901A and 901B, combining processing unit 404 estimates that foreign object S1 of still image 901A and foreign object S2 of still image 901B are the same foreign object.
In the example shown in
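Assuming the same-position patches have been identified as above, one illustrative decision rule is to keep the patch with less high-frequency content, on the premise that a dust particle adds spurious edges; this rule is only an example of the evaluation based on a contrast ratio or high-frequency component, and the function names are hypothetical.

```python
# Illustrative sketch: deciding which of two same-position patches contains
# dust, using high-frequency energy as the criterion (assumes OpenCV/NumPy;
# the decision rule here is an example, not the disclosed one).
import cv2
import numpy as np

def high_freq_energy(patch_bgr):
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    return float(np.abs(cv2.Laplacian(gray, cv2.CV_64F)).mean())

def cleaner_patch(patch_a, patch_b):
    # Dust adds spurious edges, raising high-frequency energy, so the patch
    # with less high-frequency content is kept for the combined image.
    return patch_a if high_freq_energy(patch_a) <= high_freq_energy(patch_b) else patch_b
```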
Combining processing unit 404 determines that a portion (light reflection B1) of still image 1001A having a luminance higher than a predetermined value is light reflection. Similarly, combining processing unit 404 determines that a portion (light reflection B2) of still image 1001B having a luminance higher than a predetermined value is light reflection.
Since the position of the cornea in still image 1001A is different from the position of the cornea in still image 1001B, the position of the light reflection on the cornea in still image 1001A is also different from the position of the light reflection on the cornea in still image 1001B (the position of light reflection B1 and the position of light reflection B2 are different). Combining processing unit 404 combines still images 1001A and 1001B in which light is reflected at different positions on the cornea, and generates a combined image 1002 in which the respective light reflections are removed. By performing the process of
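A minimal sketch of this light-reflection removal, assuming two aligned still images whose reflections lie at different positions, might be written as follows; the luminance threshold is an arbitrary example.

```python
# Illustrative sketch: masking saturated specular reflections and filling the
# masked pixels from a second aligned image whose reflection lies elsewhere
# (assumes OpenCV; the luminance threshold is an arbitrary example).
import cv2

def remove_reflection(img_a, img_b, luma_threshold=240):
    luma_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    mask = luma_a > luma_threshold            # pixels judged to be light reflection
    out = img_a.copy()
    out[mask] = img_b[mask]                   # take the same pixels from the other image
    return out
```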
Combining processing unit 404 performs an emphasizing process onto a portion of the still image involving large changes in lightness/darkness, contrast ratio, and sharpness. In the example shown in
Combining processing unit 404 may perform the process of
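As one possible realization of the emphasizing process, unsharp masking strengthens portions with large changes in lightness/darkness and contrast; the disclosure does not fix a particular method, and the weights below are assumed.

```python
# Illustrative sketch: emphasizing high-contrast portions such as injuries by
# unsharp masking (one common choice; the amount and sigma are arbitrary
# examples, not disclosed values).
import cv2

def emphasize(img_bgr, amount=1.5, sigma=3.0):
    blurred = cv2.GaussianBlur(img_bgr, (0, 0), sigma)
    # out = img * (1 + amount) - blurred * amount, saturated to [0, 255]
    return cv2.addWeighted(img_bgr, 1.0 + amount, blurred, -amount, 0)
```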
<D. Flow of Procedure in Generation of Diagnosis Image>
In a step S1205, CPU 11 serves as moving image processing unit 401 to obtain a moving image from camera 101. It should be noted that CPU 11 may perform the processes of a step S1210 and subsequent steps while obtaining the moving image from camera 101. Further, CPU 11 may divide, for each certain reproduction time, the data obtained from camera 101, may temporarily store the divided data into primary storage device 12 or secondary storage device 13, and may apply the processes of step S1210 and subsequent steps onto each divided moving image. Further, after the process of step S1205, CPU 11 may delete the moving image temporarily stored in primary storage device 12 or secondary storage device 13.
In step S1210, CPU 11 serves as moving image processing unit 401 to encode the moving image obtained from camera 101 into a predetermined format and temporarily store the moving image into primary storage device 12 or secondary storage device 13. It should be noted that after the process of step S1210, CPU 11 may delete the moving image encoded in the predetermined format and temporarily stored in primary storage device 12 or secondary storage device 13. Further, when the moving image is encoded in the predetermined format by camera 101 in advance, CPU 11 may skip the process of step S1210.
In a step S1215, CPU 11 serves as moving image analysis unit 402 to generate still images from the moving image encoded in the predetermined format, with each frame being regarded as a unit. Further, CPU 11 determines whether or not each of the still images has a portion in which a circle having a certain size or larger can be detected, determines whether or not the brightness of each still image is more than or equal to a certain brightness, and so on. Further, CPU 11 deletes a still image in which the cornea is apparently not captured and a still image that is too dark, and temporarily stores, into primary storage device 12 or secondary storage device 13, only still images usable for the cornea diagnosis. The process of step S1215 corresponds to each of the processes in
In a step S1220, CPU 11 serves as image analysis unit 403 to select, from the still images generated in step S1215, a still image usable for the cornea diagnosis. Further, from the still images generated in step S1215, CPU 11 selects a still image to be used to generate a combined image for the cornea diagnosis. When selecting a still image, CPU 11 makes reference to cornea pattern 409 from secondary storage device 13 for the sake of use in pattern matching or the like for the selection of the still image.
CPU 11 serves as combining processing unit 404 to perform a loop process from steps S1225A to S1225B. In the loop from steps S1225A to S1225B, CPU 11 performs the below-described combining process onto M still images selected in step S1220. In the description below, the process onto the N-th still image will be described as an example.
In a step S1230, CPU 11 serves as combining processing unit 404 to determine whether or not the cornea diagnosis can be made by using the N-th still image solely. When making the determination, CPU 11 makes reference to cornea pattern 409 from secondary storage device 13. When CPU 11 determines that the cornea diagnosis can be made by using the N-th still image solely (YES in step S1230), CPU 11 transitions the control to a step S1240. When CPU 11 determines that the cornea diagnosis cannot be made by using the N-th still image solely (NO in step S1230), CPU 11 transitions the control to a step S1235.
It should be noted that the determination criterion in step S1230 is stricter than the determination criterion in step S1220. For example, in step S1220, CPU 11 selects a still image usable to generate a combined image even when the cornea is not completely captured therein, whereas in step S1230, CPU 11 selects only a still image solely usable for the cornea diagnosis.
In step S1235, CPU 11 serves as combining processing unit 404 to perform a combining process to combine the N-th still image with one or more other still images. For example, when the cornea diagnosis cannot be made by using the N-1-th still image solely and by using the N-2-th still image solely, CPU 11 combines the N-2-th to N-th still images to generate a combined image usable for the cornea diagnosis. It should be noted that any number of still images may be combined. The processes of steps S1220 to S1235 correspond to the processes in
In step S1240, CPU 11 serves as combining processing unit 404 to perform dust removal processing onto the N-th still image. The process of step S1240 corresponds to the process of
In step S1245, CPU 11 serves as combining processing unit 404 to remove, from the N-th still image, a portion having a saturated luminance. The process of step S1245 corresponds to the process of
In step S1250, CPU 11 serves as combining processing unit 404 to perform emphasizing processing onto the N-th still image with regard to an injury or the like on the cornea. It should be noted that the process of step S1250 is not essential in the cornea diagnosis. Therefore, CPU 11 may perform the process of step S1250 only when an emphasizing processing instruction is received from the user via input interface 15. The process of step S1250 corresponds to the process of
In step S1225B, CPU 11 serves as combining processing unit 404 to determine whether or not all the still images selected in step S1220 have been subjected to the processes in the loop of steps S1225A to S1225B. When CPU 11 determines that the processes in the loop of steps S1225A to S1225B have not been completed for all the still images selected in step S1220 (NO in step S1225B), CPU 11 transitions the control to step S1225A. Then, CPU 11 performs the processes in the loop of steps S1225A to S1225B onto the N+1-th still image. When CPU 11 determines that the processes in the loop of steps S1225A to S1225B have been completed for all the still images selected in step S1220 (YES in step S1225B), CPU 11 ends the process.
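For illustration, the loop of steps S1225A to S1225B might be summarized by the following sketch, in which each helper callable is a hypothetical stand-in for the corresponding step described above.

```python
# Illustrative sketch of the loop over the selected still images
# (steps S1225A to S1225B); the helper callables are hypothetical
# stand-ins for the steps described above, supplied by the caller.
def process_selected_stills(stills, solely_usable, combine_with_neighbors,
                            remove_dust, remove_reflection, emphasize,
                            do_emphasis=False):
    results = []
    for n, still in enumerate(stills):                 # loop S1225A..S1225B
        if not solely_usable(still):                   # step S1230 (NO)
            still = combine_with_neighbors(stills, n)  # step S1235
        still = remove_dust(still)                     # step S1240
        still = remove_reflection(still)               # step S1245
        if do_emphasis:                                # step S1250 is optional
            still = emphasize(still)
        results.append(still)
    return results
```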
By performing the processes of
<E. Procedure in Generation of Diagnosis Result>
Diagnosis evaluation unit 405 divides, into certain regions, the cornea captured in the combined image for the cornea diagnosis generated by the processes of
Diagnosis evaluation unit 405 makes reference to evaluation master 410 to evaluate the size of an injury in each region. Evaluation master 410 includes a parameter for pattern matching and is used to evaluate the size of the injury on the cornea. In accordance with the amount of injury, diagnosis evaluation unit 405 provides different colors to the squares of the grid to facilitate a visual determination on a distribution of the injury. Alternatively, diagnosis evaluation unit 405 may provide translucent colors to the squares of the grid such that the injury in the combined image can be also seen. Alternatively, diagnosis evaluation unit 405 may provide colors only to the frames of the squares of the grid. Further, the combined image, the squares of the grid, and the colors of the squares of the grid may be individually stored in primary storage device 12 or secondary storage device 13 as layer information.
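For illustration, dividing the detected cornea into a grid, scoring each square by its high-frequency content, and overlaying translucent colors might be sketched as follows; the grid size, score buckets, and colors are arbitrary examples, and cx, cy, and radius stand for the detected center and radius of the cornea (assumed to lie fully inside the image).

```python
# Illustrative sketch: grid division of the cornea, per-square scoring by
# high-frequency content, and a translucent color overlay (assumes
# OpenCV/NumPy; thresholds and colors are arbitrary examples).
import cv2
import numpy as np

def grade_grid(img_bgr, cx, cy, radius, cells=8):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    overlay = img_bgr.copy()
    step = (2 * radius) // cells
    x0, y0 = cx - radius, cy - radius
    scores = np.zeros((cells, cells))
    for i in range(cells):
        for j in range(cells):
            x, y = x0 + j * step, y0 + i * step
            cell = gray[y:y+step, x:x+step]
            # More high-frequency content is read here as more injury.
            scores[i, j] = np.abs(cv2.Laplacian(cell, cv2.CV_64F)).mean()
            level = min(int(scores[i, j] / 5.0), 2)   # bucket into 3 levels
            color = [(0, 255, 0), (0, 255, 255), (0, 0, 255)][level]  # G/Y/R
            cv2.rectangle(overlay, (x, y), (x + step, y + step), color, -1)
    # Translucent overlay so the injury itself remains visible underneath.
    graded = cv2.addWeighted(overlay, 0.3, img_bgr, 0.7, 0)
    return graded, scores
```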
Diagnosis evaluation unit 405 divides, into certain regions, the cornea captured in the combined image for the cornea diagnosis generated by the processes of
An injury of the cornea is a cause of dry eye. Therefore, in a diagnosis on dry eye, it is very important to quantitatively evaluate the injury of the cornea. Diagnosis evaluation unit 405 performs the diagnosis process of
Editing UI 408 includes a cornea diagnosis result 1501, a cursor 1502, and an evaluation value selector 1503. The user may operate cursor 1502 via input interface 15, may select an evaluation value from evaluation value selector 1503, and may rewrite an evaluation value by selecting a corresponding square of the grid in cornea diagnosis result 1501.
It should be noted that editing UI 408 in
Editing UI 408 provides a function of editing the evaluation value of the diagnosis result as shown in
Editing UI 408 includes an anterior eye part image 1601 and a cursor 1602. The user may select a region targeted for the diagnosis by operating cursor 1602 via input interface 15 to select two arbitrary points, i.e., Point A and Point B.
It should be noted that editing UI 408 in
Editing UI 408 provides a function of editing the diagnosis range of the cornea as shown in
Cornea diagnosis image 1701 is a presented image in which the combined image for the cornea diagnosis generated by the processes of
Patient information 1702 is information about the patient subjected to the cornea diagnosis. Presentation information generation unit 407 makes reference to patient master 411 to obtain patient information 1702. Diagnosis history 1703 is information about past diagnosis for the patient subjected to the cornea diagnosis. Presentation information generation unit 407 makes reference to diagnosis data 413 to obtain diagnosis history 1703.
Medicine administration information 1704 is information about a medicine administered to the patient subjected to the cornea diagnosis. Presentation information generation unit 407 makes reference to medicine master 412 to obtain medicine administration information 1704. Related data 1705 is information about past related medical cases and medicine administrations. Proposal information 1706 is proposal information or the like about medicine administration content and treatment method considered to be effective in view of past medical cases. Presentation information generation unit 407 makes reference to medicine master 412 and medical case data 414 to generate related data 1705 and proposal information 1706. Comprehensive evaluation 1707 is a comprehensive evaluation that is based on the diagnosis result for each region in
As shown in
<F. Flow of Procedure in Generation of Diagnosis Result>
In a step S1805, CPU 11 serves as diagnosis evaluation unit 405 to select a combined image to be used for the diagnosis from the combined images for the diagnosis generated in the flow of
In a step S1810, CPU 11 makes reference to evaluation master 410 on secondary storage device 13. Evaluation master 410 includes parameters for evaluating the size, depth, or the like of the injury on the surface of the cornea.
In a step S1815, CPU 11 serves as diagnosis evaluation unit 405 to evaluate the combined image selected in step S1805. When making the evaluation, CPU 11 uses the parameter read from evaluation master 410. The process of step S1815 corresponds to the process of
In a step S1820, CPU 11 makes reference to patient master 411 and diagnosis data 413 on secondary storage device 13. CPU 11 uses the respective pieces of data read from patient master 411 and diagnosis data 413 to present the respective pieces of data as patient information 1702 and diagnosis history 1703 in
In a step S1825, CPU 11 makes reference to medicine master 412 and medical case data 414 on secondary storage device 13. CPU 11 uses the respective pieces of data read from medicine master 412 and medical case data 414 to generate medicine administration information 1704, related data 1705, and proposal information 1706 in
In a step S1830, CPU 11 serves as presentation information generation unit 407 to generate a diagnosis result screen. It should be noted that in the case of making the cornea diagnosis in the plurality of combined images, CPU 11 may present the diagnosis results for the plurality of combined images in cornea diagnosis image 1701. The manner of the presentation of the diagnosis results of the plurality of combined images is not limited to a specific manner. For example, the diagnosis results may be presented in the form of a slideshow or a list.
In a step S1835, CPU 11 serves as editing UI 408 to receive a correction process from the user. When CPU 11 receives the correction process from the user (YES in step S1835), CPU 11 transitions the control to step S1830, and produces the presentation data again by reflecting the corrected content. When the correction process is not received from the user (NO in step S1835), CPU 11 transitions the control to step S1840. The process of step S1835 corresponds to the process of
In a step S1840, CPU 11 serves as editing UI 408 to receive a diagnosis record input from the user. Then, CPU 11 adds the diagnosis record to diagnosis data 413.
By executing the processes of
<G. Reusability of Various Types of Information>
Diagnosis ID 1901 is an identifier for uniquely identifying an individual diagnosis. Diagnosis image 1902 is the combined image for the diagnosis generated in the process of
Evaluation master 410 is improved in precision by receiving feedback of correction on evaluation information using editing UI 408. Therefore, by storing diagnosis image 1902 separately from evaluation information 1903 in diagnosis data 413, CPU 11 can examine the combined image for past diagnosis again using evaluation master 410 improved in precision.
Patient ID 1904 is used as a key for searching patient master 411 for information of the patient subjected to the diagnosis. Diagnosis information 1905 represents the content recorded by the doctor during the diagnosis. Diagnosis date/time 1906 represents the date/time when the diagnosis was made.
By analyzing evaluation information 1903 and diagnosis information 1905, CPU 11 can find a tendency of diagnosis results by diagnosis device 102 and a tendency of diagnosis results by the doctor, thereby facilitating proposal of treatment and presentation of related information in next and subsequent diagnoses.
As described above, diagnosis data 413 individually includes: the combined image for diagnosis (diagnosis image 1902) generated by diagnosis device 102; the evaluation information (evaluation information 1903) generated by diagnosis device 102; and the content (diagnosis information 1905) recorded by the doctor. Therefore, the reusability of the various types of data can be improved.
Medical case ID 2001 is an identifier for uniquely identifying an individual medical case. Medicine ID 2002 is an identifier for uniquely identifying a used medicine. CPU 11 uses medicine ID 2002 as a search key to perform search in medicine master 412.
Symptom detail 2003 represents the specific content of a symptom. It should be noted that different medical case IDs 2001 may be associated with the same symptom detail 2003. This means that the same symptom was observed multiple times.
Diagnosis ID 2004 is an identifier for uniquely identifying a diagnosis content associated with medical case ID 2001. CPU 11 uses diagnosis ID 2004 as a search key to perform a search in diagnosis data 413. Recording date/time 2005 represents the date/time when the medical case was recorded. As described above, medical case data 414 stores a medicine, diagnosis information, and the like for each medical case, thereby facilitating an analysis on the progression of treatment with regard to each medicine.
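For illustration, the records of diagnosis data 413 and medical case data 414 described above might be represented as the following data structures; the field names follow the description, while the types are assumptions.

```python
# Illustrative sketch: one row each of diagnosis data 413 and medical case
# data 414 as simple data structures (field names follow the description
# above; the types are assumptions).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DiagnosisRecord:            # one row of diagnosis data 413
    diagnosis_id: str             # 1901: unique diagnosis identifier
    diagnosis_image: bytes        # 1902: combined image used for the diagnosis
    evaluation_info: dict         # 1903: per-region evaluation by the device
    patient_id: str               # 1904: key into patient master 411
    diagnosis_info: str           # 1905: content recorded by the doctor
    diagnosed_at: datetime        # 1906: date/time of the diagnosis

@dataclass
class MedicalCaseRecord:          # one row of medical case data 414
    case_id: str                  # 2001: unique medical case identifier
    medicine_id: str              # 2002: key into medicine master 412
    symptom_detail: str           # 2003: specific content of the symptom
    diagnosis_id: str             # 2004: key into diagnosis data 413
    recorded_at: datetime         # 2005: date/time the case was recorded
```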
<H. Exemplary Applications>
Ophthalmological diagnosis system 100 according to the present embodiment is not limited to the implementation shown in
It should be noted that in the example shown in
The exemplary configuration of
Machine learning engine 2202 can update evaluation master 410 based on a history of correction operation using editing UI 408, thereby improving precision in the evaluation on the combined image. Similarly, machine learning engine 2202 can update cornea pattern 409 based on the history of correction operation using editing UI 408 and the still image generated during the diagnosis, thereby improving precision in the cornea detection. Further, machine learning engine 2202 may process diagnosis data 413 and medical case data 414 as training data so as to update related data 1705 and proposal information 1706 to be presented on monitor 103.
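For illustration, the feedback of the correction history into evaluation master 410 might be realized as incremental training of a classifier, as sketched below with scikit-learn; the disclosure does not specify the learning algorithm, features, or score levels, all of which are assumed here.

```python
# Illustrative sketch: feeding doctor corrections back into the evaluation
# model as incremental training (one possible realization using scikit-learn;
# the algorithm, features, and the five score levels are assumptions).
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # evaluation master as a classifier

def feed_back_corrections(model, region_features, corrected_scores):
    # region_features: one feature vector per corrected grid region
    # corrected_scores: the evaluation values selected in editing UI 408
    X = np.asarray(region_features)
    y = np.asarray(corrected_scores)
    model.partial_fit(X, y, classes=np.arange(5))  # classes needed on first call
    return model
```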
Further, as with the example of
In the exemplary configuration of
As with the example of
Further, as with the other exemplary configurations described above, server device 2301 may provide an interface as a web application. In this case, diagnosis device 102 can access a function and data provided by server device 2301 using a browser without installing dedicated software. Further, diagnosis device 102 and server device 2301 may be integrated.
In the exemplary configuration of
The embodiments disclosed herein are illustrative and non-restrictive in any respect. The scope of the present invention is defined by the terms of the claims, rather than the embodiments described above, and is intended to include any modifications within the scope and meaning equivalent to the terms of the claims.
11: CPU; 12: primary storage device; 13: secondary storage device; 14: external device interface; 15: input interface; 16: output interface; 17: communication interface; 100: ophthalmological diagnosis system; 101: camera; 102: diagnosis device; 103: monitor; 205, 1905: diagnosis information; 206: related information; 401: moving image processing unit; 402: moving image analysis unit; 403: image analysis unit; 404: combining processing unit; 405: diagnosis evaluation unit; 406: communication processing unit; 407: presentation information generation unit; 408: editing UI; 409: cornea pattern; 410: evaluation master; 411: patient master; 412: medicine master; 413: diagnosis data; 414: medical case data; 702, 802, 902, 1002: combined image; 1501: diagnosis result; 1502, 1602: cursor; 1503: evaluation value selector; 1601: image; 1701, 1902: diagnosis image; 1702: patient information; 1703: diagnosis history; 1704: medicine administration information; 1705: related data; 1706: proposal information; 1707: comprehensive evaluation; 1901, 2004: diagnosis ID; 1903: evaluation information; 1904: patient ID; 1906: diagnosis date/time; 2001: medical case ID; 2002: medicine ID; 2003: detail; 2005: recording date/time; 2101, 2201, 2301: server device; 2202: machine learning engine; 2302: public network.
Priority claim: Japanese Patent Application No. 2018-239976, filed in December 2018 (JP, national).
International filing: PCT/JP2019/049996, filed on December 20, 2019 (WO).