The present invention relates to an image processing system that supports capturing of an image of an eye, and more particularly, to an image processing system using tomograms of an eye.
For the purpose of conducting early diagnoses of various diseases that rank among the leading causes of adult disease and blindness, eye examinations are widely conducted. In such examinations, it is necessary to find diseases across the entire eye. Therefore, examinations using images of a wide area of an eye (hereinafter called wide images) are essential. Wide images are captured using, for example, a retinal camera or a scanning laser ophthalmoscope (SLO). In contrast, eye tomogram capturing apparatuses such as an optical coherence tomography (OCT) apparatus can observe the three-dimensional state of the interior of retina layers, and therefore, these eye tomogram capturing apparatuses are expected to be useful in accurately diagnosing diseases. Hereinafter, an image captured with an OCT apparatus will be referred to as a tomogram or tomogram volume data.
When an image of an eye is captured using an OCT apparatus, a certain amount of time elapses from the beginning of image capturing to the end of image capturing. During this time, the eye being examined (hereinafter referred to as the subject's eye) may suddenly move or blink, resulting in a shift or distortion in the image. However, such a shift or distortion may not be recognized while the image is being captured. Such a shift or distortion may also be overlooked when the captured image data is checked after image capturing is completed, because of the vast amount of image data. Since this checking operation is not easy, the diagnosis workflow of a doctor is inefficient.
To overcome the above-described problems, a technique of detecting blinking while an image is being captured (Japanese Patent Laid-Open No. 62-281923) and a technique of correcting a positional shift in a tomogram due to movement of the subject's eye (Japanese Patent Laid-Open No. 2007-130403) have been disclosed.
However, the known techniques have the following problems.
In the method described in the foregoing Japanese Patent Laid-Open No. 62-281923, blinking is detected using an eyelid open/close detector. When the eyelid level changes from a closed level to an open level, an image is captured after a predetermined time set by a delay time setter has elapsed. Therefore, although blinking can be detected, a shift or distortion in the image due to the movement of the subject's eye cannot be detected. Thus, the image capturing state including the movement of the subject's eye cannot be obtained.
Also, the method described in Japanese Patent Laid-Open No. 2007-130403 aligns two or more tomograms using a reference image (one tomogram orthogonal to the two or more tomograms, or an image of the fundus of the eye). Therefore, when the eye moves greatly, the tomograms are corrected, but no accurate image can be generated. Also, there is no concept of detecting the image capturing state, which is the state of the subject's eye at the time the image is captured.
The present invention provides an image processing system that determines the accuracy of a tomogram.
According to an aspect of the present invention, there is provided an image processing apparatus for determining the image capturing state of a subject's eye, including an image processing unit configured to obtain information indicating continuity of tomograms of the subject's eye; and a determining unit configured to determine the image capturing state of the subject's eye on the basis of the information obtained by the image processing unit.
According to another aspect of the present invention, there is provided an image processing method of determining the image capturing state of a subject's eye, including an image processing step of obtaining information indicating continuity of tomograms of the subject's eye; and a determining step of determining the image capturing state of the subject's eye on the basis of the information obtained in the image processing step.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings. However, the scope of the present invention is not limited to examples illustrated in the drawings.
An image processing apparatus according to the present embodiment generates an integrated image from tomogram volume data when tomograms of a subject's eye (eye serving as an examination target) are obtained, and determines the accuracy of the captured images by using the continuity of image features obtained from the integrated image.
The tomogram capturing apparatus 20 is an apparatus that captures a tomogram of an eye. The tomogram capturing apparatus 20 is, for example, an OCT apparatus using time domain OCT or Fourier domain OCT. In response to an operation entered by an operator (not shown), the tomogram capturing apparatus 20 captures a three-dimensional tomogram of a subject's eye (not shown). The tomogram capturing apparatus 20 sends the obtained tomogram to the image processing system 10.
The data server 40 is a server that holds a tomogram of a subject's eye and information obtained from the subject's eye. The data server 40 holds a tomogram of a subject's eye, which is output from the tomogram capturing apparatus 20, and the result output from the image processing system 10. In response to a request from the image processing system 10, the data server 40 sends past data regarding the subject's eye to the image processing system 10.
Referring now to
The subject's eye information obtaining unit 210 obtains information for identifying a subject's eye from the outside. Information for identifying a subject's eye is, for example, a subject identification number assigned to each subject's eye. Alternatively, information for identifying a subject's eye may include a combination of a subject identification number and an identifier that represents whether an examination target is the right eye or the left eye.
Information for identifying a subject's eye is entered by an operator. When the data server 40 holds information for identifying a subject's eye, this information may be obtained from the data server 40.
The image obtaining unit 220 obtains a tomogram sent from the tomogram capturing apparatus 20. In the following description, it is assumed that a tomogram obtained by the image obtaining unit 220 is a tomogram of a subject's eye identified by the subject's eye information obtaining unit 210. It is also assumed that various parameters regarding the capturing of the tomogram are attached as information to the tomogram.
The command obtaining unit 230 obtains a process command entered by an operator. For example, the command obtaining unit 230 obtains a command to start, interrupt, end, or resume an image capturing process, a command to save or not to save a captured image, and a command to specify a saving location. The details of a command obtained by the command obtaining unit 230 are sent to the image processing apparatus 250 and the result output unit 270 as needed.
The storage unit 240 temporarily holds information regarding a subject's eye, which is obtained by the subject's eye information obtaining unit 210. Also, the storage unit 240 temporarily holds a tomogram of the subject's eye, which is obtained by the image obtaining unit 220. Further, the storage unit 240 temporarily holds information obtained from the tomogram, which is obtained by the image processing apparatus 250 as will be described later. These items of data are sent to the image processing apparatus 250, the display unit 260, and the result output unit 270 as needed.
The image processing apparatus 250 obtains a tomogram held by the storage unit 240, and executes a process on the tomogram to determine continuity of tomogram volume data. The image processing apparatus 250 includes an integrated image generating unit 251, an image processing unit 252, and a determining unit 253.
The integrated image generating unit 251 generates an integrated image by integrating tomograms in a depth direction. The integrated image generating unit 251 performs a process of integrating, in a depth direction, n two-dimensional tomograms captured by the tomogram capturing apparatus 20. Here, two-dimensional tomograms will be referred to as cross-sectional images. Cross-sectional images include, for example, B-scan images and A-scan images. The specific details of the process performed by the integrated image generating unit 251 will be described in detail later.
The image processing unit 252 extracts, from tomograms, information for determining three-dimensional continuity. The specific details of the process performed by the image processing unit 252 will be described in detail later.
The determining unit 253 determines continuity of tomogram volume data (hereinafter this may also be referred to as tomograms) on the basis of information extracted by the image processing unit 252. When the determining unit 253 determines that items of tomogram volume data are not continuous, the display unit 260 displays the determination result. The specific details of the process performed by the determining unit 253 will be described in detail later. On the basis of the information extracted by the image processing unit 252, the determining unit 253 determines how much the subject's eye moved or whether the subject's eye blinked.
The display unit 260 displays, on a monitor, tomograms obtained by the image obtaining unit 220 and the result obtained by processing the tomograms using the image processing apparatus 250. The specific details displayed by the display unit 260 will be described in detail later.
The result output unit 270 associates an examination time and date, information for identifying a subject's eye, a tomogram of the subject's eye, and an analysis result obtained by the image processing apparatus 250, and sends the associated information as information to be saved to the data server 40.
A central processing unit (CPU) 701 controls the entire computer by using programs and data stored in a random-access memory (RAM) 702 and/or a read-only memory (ROM) 703. The CPU 701 also controls execution of software corresponding to the units of the image processing system 10 and realizes the functions of the units. Note that programs may be loaded from a program recording medium and stored in the RAM 702 and/or the ROM 703.
The RAM 702 has an area that temporarily stores programs and data loaded from an external storage device 704 and a work area needed for the CPU 701 to perform various processes. The function of the storage unit 240 is realized by the RAM 702.
The ROM 703 generally stores a basic input/output system (BIOS) and setting data of the computer. The external storage device 704 is a device that functions as a large-capacity information storage device, such as a hard disk drive, and stores an operating system and programs executed by the CPU 701. Information regarded as being known in the description of the present embodiment is saved in the ROM 703 and is loaded to the RAM 702 as needed.
A monitor 705 is a liquid crystal display or the like. The monitor 705 can display the details output by the display unit 260, for example.
A keyboard 706 and a mouse 707 are input devices. By operating these devices, an operator can give various commands to the image processing system 10. The functions of the subject's eye information obtaining unit 210 and the command obtaining unit 230 are realized via these input devices.
An interface 708 is configured to exchange various items of data between the image processing system 10 and an external device. The interface 708 is, for example, an IEEE 1394, USB, or Ethernet (registered trademark) port. Data obtained via the interface 708 is taken into the RAM 702. The functions of the image obtaining unit 220 and the result output unit 270 are realized via the interface 708.
The above-described components are interconnected by a bus 709.
Referring now to the flowchart illustrated in
In step S301, the subject's eye information obtaining unit 210 obtains a subject identification number as information for identifying a subject's eye from the outside. This information is entered by an operator by using the keyboard 706, the mouse 707, or a card reader (not shown). On the basis of the subject identification number, the subject's eye information obtaining unit 210 obtains information regarding the subject's eye, which is held by the data server 40. This information regarding the subject's eye includes, for example, the subject's name, age, and sex. When there are other items of examination information including measurement data of, for example, the eyesight, length of the eyeball, and intraocular pressure, the subject's eye information obtaining unit 210 may obtain the measurement data. The subject's eye information obtaining unit 210 sends the obtained information to the storage unit 240.
When an image of the same eye is captured again, this processing in step S301 may be skipped. When there is new information to be added, this information is obtained in step S301.
In step S302, the image obtaining unit 220 obtains tomograms sent from the tomogram capturing apparatus 20. The image obtaining unit 220 sends the obtained information to the storage unit 240.
In step S303, the integrated image generating unit 251 generates an integrated image by integrating cross-sectional images (e.g., B-scan images) in a depth direction.
Hereinafter, a process performed by the integrated image generating unit 251 will be described using
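The depth-direction integration described above can be illustrated with the following minimal NumPy sketch. This is an illustration rather than the embodiment itself; the axis order of the volume data and all names are assumptions.

```python
import numpy as np

def generate_integrated_image(volume):
    """Project tomogram volume data onto the x-y plane by integrating
    (summing) luminance along the depth (z) axis. The axis order
    volume[y, x, z], with y indexing the cross-sectional images, is an
    assumption for this sketch."""
    integrated = volume.sum(axis=2, dtype=np.float64)
    # Normalize to [0, 1] so that later thresholds do not depend on depth.
    rng = integrated.max() - integrated.min()
    if rng > 0:
        integrated = (integrated - integrated.min()) / rng
    return integrated

# Example: 128 cross-sectional images, 256 A-scans each, depth 64.
volume = np.random.rand(128, 256, 64)
integrated_image = generate_integrated_image(volume)
print(integrated_image.shape)  # (128, 256)
```

The resulting two-dimensional image resembles a fundus projection, on which the blood vessel features used in the subsequent steps can be detected.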
In step S304, the image processing unit 252 extracts information for determining continuity of tomogram volume data from the integrated image.
The image processing unit 252 detects blood vessels in the integrated image as information for determining continuity of tomogram volume data. A method of detecting blood vessels is a generally known technique, and a detailed description thereof will be omitted. Blood vessels may not necessarily be detected using one method, and may be detected using a combination of multiple techniques.
In step S305, the determining unit 253 performs a process on the blood vessels obtained in step S304 and determines continuity of tomogram volume data.
Hereinafter, a specific process performed by the determining unit 253 will be described using
The image processing unit 252 tracks the individual blood vessels, starting from the blood vessels that are concentrated near the macula lutea, and labels the tracked blood vessels as “tracked”. The image processing unit 252 stores the positional coordinates of the tracked blood vessel ends as position information in the storage unit 240. The image processing unit 252 counts together the positional coordinates of blood vessel ends existing on a line parallel to the scanning direction at the time of image capturing using OCT (x-direction). This represents the number of blood vessel ends in the tomograms. For example, the image processing unit 252 counts together the points (x1, yi), (x2, yi), (x3, yi), . . . , (xn−1, yi), (xn, yi) existing on the same y-coordinate. When the image capturing using OCT was successful as in
Therefore, the threshold Th may be a fixed numeric threshold, or the ratio of the number of coordinates of blood vessel ends on a line to the number of coordinates of all blood vessel ends. Alternatively, the threshold Th may be set on the basis of statistical data or patient information (age, sex, and/or race). The degree of concentration of blood vessel ends is not limited to that obtained using blood vessel ends existing on a single line. Taking into consideration variations in blood vessel detection, the determination may be made using the coordinates of blood vessel ends on two or more consecutive lines. When a blood vessel end is positioned at the border of the image, it may be regarded that the blood vessel continues to the outside of the image, and the coordinate point of this blood vessel end may be excluded from the count. Here, the fact that a blood vessel end is positioned at the border of the image means that, in the case where the image size is (X, Y), the coordinates of the blood vessel end are (0, yj), (X−1, yj), (xj, 0), or (xj, Y−1). The border of the image need not be limited to the outermost pixels; there may be a margin of a few pixels from the border of the image.
Cy ≥ Th; 0 ≤ y ≤ Y−1 (not continuous)
Cy < Th; 0 ≤ y ≤ Y−1 (continuous) [Math.1]
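The counting and threshold comparison of the determination above can be sketched as follows. The function and variable names are illustrative, and the input is assumed to be a list of detected blood vessel end coordinates.

```python
import numpy as np

def determine_continuity(vessel_ends, height, th):
    """Count blood vessel ends on each line y parallel to the OCT
    scanning direction (x-direction), and flag the volume data as not
    continuous when any line holds th or more ends. All names here
    are illustrative."""
    counts = np.zeros(height, dtype=int)
    for _x, y in vessel_ends:
        counts[y] += 1
    discontinuous_lines = np.nonzero(counts >= th)[0]
    return len(discontinuous_lines) == 0, discontinuous_lines

# Vessel ends clustered on the line y = 40 suggest a shift at that slice.
ends = [(10, 40), (50, 40), (90, 40), (130, 40), (30, 12)]
is_continuous, lines = determine_continuity(ends, height=128, th=4)
print(is_continuous, lines)  # False [40]
```

In practice the threshold th would be chosen as described above, for example as a fixed value or from statistical data.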
In step S306, the display unit 260 displays, on the monitor 705, the tomograms or cross-sectional images obtained in step S302. For example, images as schematically illustrated in
When the determining unit 253 determines in step S305 that the items of tomogram volume data are not continuous, the determining unit 253 displays that fact in step S306 using the display unit 260.
In this manner, the determining unit 253 determines continuity of tomograms, and determines the image capturing state, such as the movement or blinking of the subject's eye.
In step S307, the command obtaining unit 230 obtains, from the outside, a command to capture or not to capture an image of the subject's eye again. This command is entered by the operator via, for example, the keyboard 706 or the mouse 707. When a command to capture an image again is given, the flow returns to step S301, and the process on the same subject's eye is performed again. When no command to capture an image again is given, the flow proceeds to step S308.
In step S308, the command obtaining unit 230 obtains, from the outside, a command to save or not to save the result of this process on the subject's eye in the data server 40. This command is entered by the operator via, for example, the keyboard 706 or the mouse 707. When a command to save the data is given, the flow proceeds to step S309. When no command to save the data is given, the flow proceeds to step S310.
In step S309, the result output unit 270 associates the examination time and date, information for identifying the subject's eye, tomograms of the subject's eye, and information obtained by the image processing unit 252, and sends the associated information as information to be saved to the data server 40.
In step S310, the command obtaining unit 230 obtains, from the outside, a command to terminate or not to terminate the process on the tomograms. This command is entered by the operator via, for example, the keyboard 706 or the mouse 707. When a command to terminate the process is obtained, the image processing system 10 terminates the process. In contrast, when a command to continue the process is obtained, the flow returns to step S301, and the process on the next subject's eye (or the process on the same subject's eye again) is executed.
In the foregoing manner, the process performed by the image processing system 10 is conducted.
With the foregoing structure, whether tomograms are continuous is determined from an integrated image generated from items of tomogram volume data, and the result is presented to a doctor. Thus, the doctor can easily determine the accuracy of the tomograms of an eye, and the efficiency of the diagnosis workflow of the doctor can be improved. Further, the image capturing state such as the movement or blinking of the subject's eye at the time of image capturing using OCT can be obtained.
In the present embodiment, the details of the process performed by the image processing unit 252 are different. A description of portions of the process that are the same as or similar to the first embodiment will be omitted.
The image processing unit 252 detects an edge region in the integrated image. By detecting an edge region parallel to the scanning direction at the time tomograms were captured, the image processing unit 252 obtains, in numeric terms, the degree of similarity between cross-sectional images constituting tomogram volume data.
When the eye moves at the time the tomograms are captured, so that some tomograms are captured at a position shifted away from the intended position on the retina, the integrated value in the resulting integrated image differs at the location of the positional shift because the retina layer thickness differs there.
Alternatively, when the eye blinked at the time the tomograms were captured, the integrated value becomes 0 or extremely small. Thus, a luminance difference appears at a boundary where there is a positional shift or blinking.
In
The image processing unit 252 detects, in the edge image Pb′, a range of a certain number of consecutive edge regions that are parallel to the scanning direction at the time of image capturing using OCT (x-direction) and that are greater than or equal to a threshold. By detecting a certain number of consecutive edge regions E that are parallel to the scanning direction (x-direction), these can be distinguished from blood vessel edges and noise.
In the determination of the continuity of tomograms and the image capturing state of the subject's eye, the image processing unit 252 obtains, in numeric terms, the length of a certain number of consecutive edge regions E.
The determining unit 253 determines the continuity of tomograms and the image capturing state of the subject's eye by performing a comparison with a threshold Th′.
For example, the determination is made on the basis of the following equation (2), where E denotes the length of consecutive edge regions. The threshold Th′ may be a fixed value or may be set on the basis of statistical data. Alternatively, the threshold Th′ may be set on the basis of patient information (age, sex, and/or race). It is preferable that the threshold Th′ be dynamically changeable in accordance with the image size; for example, the smaller the image size, the smaller the threshold Th′. Further, the range of a certain number of consecutive edge regions is not limited to that on a single parallel line. The determination can also be made by using the range of a certain number of consecutive edge regions on two or more consecutive parallel lines.
E ≥ Th′ [Math.2]
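A sketch of the edge-region measurement and the comparison with Th′ follows, assuming the integrated image is a two-dimensional luminance array. The edge detector (line-to-line differencing) and all parameter names are simplifications chosen for illustration.

```python
import numpy as np

def longest_horizontal_edge(integrated, edge_th):
    """Measure the longest run E of consecutive edge pixels on any
    line parallel to the scanning direction (x-direction). Edges are
    taken as large luminance changes between adjacent lines of the
    integrated image; this detector is a simplification."""
    grad = np.abs(np.diff(integrated, axis=0))
    best = 0
    for row in grad:
        run = 0
        for value in row:
            run = run + 1 if value >= edge_th else 0
            best = max(best, run)
    return best

def is_discontinuous(integrated, edge_th, th_prime):
    # A discontinuity is determined when E >= Th'.
    return longest_horizontal_edge(integrated, edge_th) >= th_prime

# A blink yields near-zero integrated values on a few lines, producing
# a long edge run parallel to the x-direction.
img = np.ones((64, 128))
img[30:33, :] = 0.0
print(is_discontinuous(img, edge_th=0.5, th_prime=100))  # True
```

Requiring a long run of consecutive edge pixels is what distinguishes a shift or blink boundary from blood vessel edges and noise, which produce only short runs.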
In the present embodiment, the image processing unit 252 performs a frequency analysis based on Fourier transform to extract frequency characteristics. The determining unit 253 determines whether items of tomogram volume data are continuous, in accordance with the intensity in the frequency domain.
Using these results, the determining unit 253 determines the continuity of tomograms and the image capturing state of the subject's eye.
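One way to realize such a frequency-domain determination is sketched below, under the assumption that a sharp slice-to-slice transition (a shift or blink) raises the high-frequency energy of the integrated image along the slice axis. The band cutoff fraction is an illustrative choice, not a value from the embodiment.

```python
import numpy as np

def high_frequency_ratio(integrated, cutoff=0.25):
    """Take the 1-D Fourier transform across the slice axis (y) for
    each column of the integrated image and measure the fraction of
    spectral energy in the upper frequency band. A positional shift or
    blink injects a sharp slice-to-slice transition, which raises this
    fraction."""
    spectrum = np.abs(np.fft.rfft(integrated, axis=0)) ** 2
    n = spectrum.shape[0]
    hi = spectrum[int(n * (1 - cutoff)):].sum()
    total = spectrum.sum()
    return hi / total if total > 0 else 0.0

smooth = np.ones((64, 128))            # continuous volume data
step = np.zeros((64, 128))
step[32:, :] = 1.0                     # abrupt change between slices
print(high_frequency_ratio(smooth) < high_frequency_ratio(step))  # True
```

The determining unit would then compare this ratio with a threshold, in the same manner as the other determinations.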
The image processing system 10 according to the first embodiment obtains tomograms of a subject's eye, generates an integrated image from tomogram volume data, and determines the accuracy of the captured images by using the continuity of image features obtained from the integrated image. An image processing apparatus according to the present embodiment is similar to the first embodiment in that a process is performed on the obtained tomograms of the subject's eye. However, the present embodiment is different from the first embodiment in that, instead of generating an integrated image, the continuity of tomograms and the image capturing state of the subject's eye are determined from image features obtained from the tomograms.
Referring now to the flowchart illustrated in
In step S1003, the image processing unit 252 extracts, from the tomograms, information for determining the continuity of tomogram volume data.
The image processing unit 252 detects, in the tomograms, a visual cell layer as a feature for determining the continuity of tomogram volume data, and detects a region in which a luminance value is low in the visual cell layer. Hereinafter, a specific process performed by the image processing unit 252 will be described using
The image processing unit 252 detects the boundary between layers in tomograms. Here, it is assumed that a three-dimensional tomogram serving as a processing target is a set of cross-sectional images (e.g., B-scan images), and the following two-dimensional image processing is performed on the individual cross-sectional images. First, a smoothing filtering process is performed on a target cross-sectional image to remove noise components. In the tomogram, edge components are detected, and, on the basis of their connectivity, a few lines are extracted as candidates for the boundary between layers. From among these candidates, the top line is selected as the inner limiting membrane 1. A line immediately below the inner limiting membrane 1 is selected as the nerve fiber layer boundary 2. The bottom line is selected as the pigmented layer of the retina 3. A line immediately above the pigmented layer of the retina 3 is selected as the visual cell inner/outer segment junction 4. A region enclosed by the visual cell inner/outer segment junction 4 and the pigmented layer of the retina 3 is regarded as the visual cell layer 5. When there is not much change in the luminance value, and when no edge component greater than or equal to a threshold can be detected along an A-scan, the boundary between layers may be interpolated by using the coordinate points of a group of detected points on the left and right sides or in the entire region.
By applying an active contour method such as a Snakes or level set method using these lines as initial values, the detection accuracy may be improved. The boundary between layers may also be detected using a technique such as graph cuts. Boundary detection using an active contour method or a graph cut technique may be performed three-dimensionally on a three-dimensional tomogram. Alternatively, a three-dimensional tomogram serving as a processing target may be regarded as a set of cross-sectional images, and such boundary detection may be performed two-dimensionally on the individual cross-sectional images. The method of detecting the boundary between layers is not limited to the foregoing methods, and any method can be used as long as it can detect the boundary between layers in tomograms of the eye.
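The per-A-scan boundary detection described above might be sketched as follows. This simplified stand-in keeps only the smoothing and edge-candidate steps; it is not the exact procedure of the embodiment, and all names are illustrative.

```python
import numpy as np

def detect_layer_boundaries(bscan, smooth=3, n_layers=2):
    """For each A-scan (column) of a cross-sectional image: smooth the
    depth profile, detect luminance edges, and keep the depths of the
    n_layers strongest edges as boundary candidates (the top candidate
    corresponding to the inner limiting membrane, the bottom one to
    the pigmented layer of the retina)."""
    depth, width = bscan.shape
    kernel = np.ones(smooth) / smooth
    boundaries = np.zeros((n_layers, width), dtype=int)
    for x in range(width):
        profile = np.convolve(bscan[:, x], kernel, mode="same")
        grad = np.abs(np.diff(profile))     # edge strength along depth
        idx = np.argsort(grad)[-n_layers:]  # strongest n_layers edges
        boundaries[:, x] = np.sort(idx)     # ordered from top to bottom
    return boundaries

# Noise-free synthetic B-scan with one bright layer (smoothing disabled).
bscan = np.zeros((64, 8))
bscan[20:40, :] = 1.0
b = detect_layer_boundaries(bscan, smooth=1, n_layers=2)
print(b[:, 0])  # [19 39]
```

A real implementation would add the connectivity analysis and interpolation described above, and could refine the candidates with an active contour or graph cut method.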
As illustrated in
In the foregoing case, a region where luminance values are low is detected in the visual cell layer 5. However, the blood vessel feature is not limited thereto. A blood vessel may also be detected by detecting a change in the thickness between the inner limiting membrane 1 and the nerve fiber layer boundary 2 (i.e., the thickness of the nerve fiber layer), or a change in that thickness relative to the neighboring regions on the left and right. For example, as illustrated in
In step S1004, the image processing unit 252 performs a process on the blood vessels obtained in step S1003, and determines continuity of tomogram volume data.
The image processing unit 252 tracks, from blood vessel ends near the macula lutea, the individual blood vessels, and labels the tracked blood vessels as “tracked”. The image processing unit 252 stores the coordinates of the tracked blood vessel ends in the storage unit 240. The image processing unit 252 counts together the coordinates of the blood vessel ends existing on a line parallel to the scanning direction at the time of image capturing using OCT. In the case of
With the foregoing structure, continuity of tomograms is determined from tomogram volume data, and the determination result is presented to a doctor. Therefore, the doctor can easily determine the accuracy of tomograms of the eye, and the efficiency of the diagnosis workflow of the doctor can be improved.
The present embodiment describes the method of computing the degree of similarity in the first embodiment in a more detailed manner. The image processing unit 252 further includes a degree-of-similarity computing unit 254 (not shown), which computes the degree of similarity or difference between cross-sectional images. The determining unit 253 determines the continuity of tomograms and the image capturing state of the subject's eye by using the degree of similarity or difference. In the following description, it is assumed that the degree of similarity is to be computed.
The degree-of-similarity computing unit 254 computes the degree of similarity between consecutive cross-sectional images. The degree of similarity can be computed using the sum of squared difference (SSD) of a luminance difference or the sum of absolute difference (SAD) of a luminance difference. Alternatively, mutual information (MI) may be obtained. The method of computing the degree of similarity between cross-sectional images is not limited to the foregoing methods. Any method can be used as long as it can compute the degree of similarity between cross-sectional images. For example, the image processing unit 252 extracts a density value average or variance as a color or density feature, extracts a Fourier feature, a density co-occurrence matrix, or the like as a texture feature, and extracts the shape of a layer, the shape of a blood vessel, or the like as a shape feature. By computing the distance in an image feature space, the degree-of-similarity computing unit 254 may determine the degree of similarity. The distance computed may be a Euclidean distance, a Mahalanobis distance, or the like.
The determining unit 253 determines that the consecutive cross-sectional images (B-scan images) have been normally captured when the degree of similarity obtained by the degree-of-similarity computing unit 254 is greater than or equal to a threshold. The degree-of-similarity threshold may be changed in accordance with the distance between two-dimensional tomograms or the scan speed. For example, given the case in which an image of a 6×6-mm range is captured in 128 slices (B-scan images) and the case in which the same range is captured in 256 slices (B-scan images), the degree of similarity between cross-sectional images becomes higher in the case of 256 slices. The degree-of-similarity threshold may be set as a fixed value or may be set on the basis of statistical data. Alternatively, the degree-of-similarity threshold may be set on the basis of patient information (age, sex, and/or race). When the degree of similarity is less than the threshold, it is determined that the consecutive cross-sectional images are not continuous. Accordingly, a positional shift or blinking at the time the image was captured can be detected.
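The consecutive-slice similarity check can be sketched as follows, mapping SSD or SAD to a similarity value in (0, 1]. The mapping and the threshold values are illustrative assumptions; the embodiment only requires some similarity measure and a threshold.

```python
import numpy as np

def slice_similarity(a, b, method="ssd"):
    """Degree of similarity between two consecutive cross-sectional
    images, mapped to (0, 1] from the mean SSD or SAD of the
    luminance difference."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    d = np.mean(diff ** 2) if method == "ssd" else np.mean(np.abs(diff))
    return 1.0 / (1.0 + d)

def check_volume(volume, threshold=0.5):
    """Return the indices where consecutive B-scan images fall below
    the similarity threshold, indicating a shift or blink."""
    return [i for i in range(len(volume) - 1)
            if slice_similarity(volume[i], volume[i + 1]) < threshold]

volume = np.ones((5, 32, 32))
volume[2] = 0.0                 # a blank slice, as caused by a blink
print(check_volume(volume, threshold=0.6))  # [1, 2]
```

Both transitions around the blank slice fall below the threshold, which is why a blink appears as a pair of low-similarity indices.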
An image processing apparatus according to the present embodiment is similar to the first embodiment in that a process is performed on the obtained tomograms of the subject's eye. However, the present embodiment is different from the foregoing embodiments in that a positional shift or blinking at the time the image was captured is detected from image features obtained from tomograms of the same patient that are captured at a different time in the past, and from image features obtained from the currently captured tomograms.
The functional blocks of the image processing system 10 according to the present embodiment are different from the first embodiment (
Referring now to the flowchart illustrated in
In step S1201, the subject's eye information obtaining unit 210 obtains, from the outside, a subject identification number as information for identifying a subject's eye. This information is entered by an operator via the keyboard 706, the mouse 707, or a card reader (not shown). On the basis of the subject identification number, the subject's eye information obtaining unit 210 obtains information regarding the subject's eye, which is held in the data server 40. For example, the subject's eye information obtaining unit 210 obtains the name, age, and sex of the patient. Furthermore, the subject's eye information obtaining unit 210 obtains tomograms of the subject's eye that are captured in the past. When there are other items of examination information including measurement data of, for example, the eyesight, length of the eyeball, and intraocular pressure, the subject's eye information obtaining unit 210 may obtain the measurement data. The subject's eye information obtaining unit 210 sends the obtained information to the storage unit 240.
When an image of the same eye is captured again, this processing in step S1201 may be skipped. When there is new information to be added, this information is obtained in step S1201.
In step S1202, the image obtaining unit 220 obtains tomograms sent from the tomogram capturing apparatus 20. The image obtaining unit 220 sends the obtained information to the storage unit 240.
In step S1203, the integrated image generating unit 251 generates an integrated image by integrating cross-sectional images (e.g., B-scan images) in the depth direction. The integrated image generating unit 251 obtains, from the storage unit 240, the past tomograms obtained by the subject's eye information obtaining unit 210 in step S1201 and the current tomograms obtained by the image obtaining unit 220 in step S1202. The integrated image generating unit 251 generates an integrated image from the past tomograms and an integrated image from the current tomograms. Since a specific method of generating these integrated images is the same as that in the first embodiment, a detailed description thereof will be omitted.
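A minimal sketch of the depth-direction integration described above (the axis order and function name are assumptions of this sketch, not prescribed by the embodiment):

```python
import numpy as np

def integrate_depth(volume):
    """Collapse a tomogram volume into a 2-D integrated image.

    volume: array of shape (num_slices, depth, width), one B-scan per slice.
    Summing each A-scan along the depth axis yields a fundus-like
    projection in which retinal blood vessels stand out (vessels shadow
    the layers beneath them).
    """
    return volume.sum(axis=1)  # result shape: (num_slices, width)
```

Applying this once to the past tomograms and once to the current tomograms yields the two integrated images that are compared in step S1204.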
In step S1204, the degree-of-similarity computing unit 254 computes the degree of similarity between the integrated images generated from the tomograms captured at different times.
Hereinafter, a specific process performed by the degree-of-similarity computing unit 254 will be described using
The degree of similarity between images can be obtained using, for example, the sum of squared differences (SSD) of luminance values, the sum of absolute differences (SAD) of luminance values, or mutual information (MI). The method of computing the degree of similarity between integrated images is not limited to the foregoing methods; any method can be used as long as it can compute the degree of similarity between images.
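Illustrative implementations of the three measures named above (the helper names are hypothetical; note that SSD and SAD decrease as images become more similar, while MI increases):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences of luminance; lower means more similar."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def sad(a, b):
    """Sum of absolute differences of luminance; lower means more similar."""
    return float(np.sum(np.abs(a.astype(float) - b.astype(float))))

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information; higher means more similar."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint luminance distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because SSD/SAD and MI point in opposite directions, the threshold comparison in the determination step must be oriented accordingly (e.g., "less than or equal to" for SSD/SAD).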
When the degree-of-similarity computing unit 254 computes the degree of similarity between each of the partial integrated images Pa1 to Pan and the integrated image Pb, if all of the computed degrees of similarity are greater than or equal to a threshold, the determining unit 253 determines that the eyeball movement is small and that the image capturing is successful.
If there is any partial integrated image whose degree of similarity is less than the threshold, the degree-of-similarity computing unit 254 further divides that partial integrated image into m images, computes the degree of similarity between each of the m divided images and the integrated image Pb, and determines the places (images) whose degrees of similarity are greater than or equal to the threshold. These processes are repeated until the partial integrated image can no longer be divided or until a cross-sectional image whose degree of similarity is less than the threshold is specified. In an integrated image generated from tomograms captured while the eyeball moves or the eye blinks, a positional shift occurs in the space, and hence some of the partial integrated images that would be present in a successful capture are missing. Thus, the determining unit 253 determines as missing a partial integrated image whose degree of similarity remains less than the threshold even after further division, or a partial integrated image whose degree of similarity reaches the threshold only at a positionally conflicting place (that is, where the order of the partial integrated images is changed).
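The coarse-to-fine localization described above can be sketched as follows, assuming a generic `similarity(a, b)` score in which higher means more alike, division along the slice (row) axis, and binary rather than m-way splitting; all names are hypothetical:

```python
import numpy as np

def find_discontinuities(part_a, image_b, similarity, threshold, min_rows=1):
    """Recursively localize rows of part_a that never match image_b.

    part_a and image_b are 2-D integrated images covering the same region,
    one row per B-scan slice. Returns row ranges (start, stop) whose score
    stays below the threshold even at the finest subdivision -- the places
    deemed "missing" by the determining unit in this sketch.
    """
    def recurse(lo, hi, out):
        if similarity(part_a[lo:hi], image_b[lo:hi]) >= threshold:
            return                      # this region matches: accept it
        if hi - lo <= min_rows:
            out.append((lo, hi))        # cannot divide further: flag it
            return
        mid = (lo + hi) // 2            # otherwise subdivide and recurse
        recurse(lo, mid, out)
        recurse(mid, hi, out)

    missing = []
    recurse(0, part_a.shape[0], missing)
    return missing
```

The recursion narrows a below-threshold partial image down to the individual cross-sectional images responsible for the mismatch.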
When the degree of similarity computed by the degree-of-similarity computing unit 254 is greater than or equal to the threshold, the determining unit 253 determines that the consecutive two-dimensional tomograms have been normally captured. If the degree of similarity is less than the threshold, the determining unit 253 determines that the tomograms are not consecutive and that there was a positional shift or blinking at the image capturing time.
In step S1206, the display unit 260 displays the tomograms obtained in step S1202 on the monitor 705. The details displayed on the monitor 705 are the same as those displayed in step S306 in the first embodiment. Alternatively, tomograms of the same subject's eye captured at a different time, which are obtained in step S1201, may additionally be displayed on the monitor 705.
In the present embodiment, an integrated image is generated from tomograms, the degree of similarity is computed, and continuity is determined. However, instead of generating an integrated image, the degree of similarity may be computed between tomograms, and continuity may be determined.
With the foregoing structure, continuity of tomograms is determined from the degree of similarity between integrated images generated from tomograms captured at different times, and the determination result is presented to a doctor. Therefore, the doctor can easily determine the accuracy of tomograms of the eye, and the efficiency of the diagnosis workflow of the doctor can be improved.
In the present embodiment, the degree-of-similarity computing unit 254 computes the degree of similarity between blood vessel models generated from tomograms captured at different times, and the determining unit 253 determines continuity of tomogram volume data by using the degree of similarity.
Since a method of detecting blood vessels by using the image processing unit 252 is the same as that in step S304 in the first embodiment, a description thereof will be omitted. For example, a blood vessel model is an image in which blood vessel pixels are set to 1 and all other tissues are set to 0 (a binary model), or in which only blood vessel portions retain grayscale values and all other tissues are set to 0 (a multilevel model).
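For binary blood vessel models of this kind, one possible similarity measure is the Dice overlap; this is an illustrative choice, not prescribed by the embodiment, which only requires some measure of inter-image similarity:

```python
import numpy as np

def dice_similarity(v1, v2):
    """Overlap of two binary vessel models; 1.0 means identical patterns.

    v1, v2: arrays where nonzero marks a detected blood vessel pixel.
    Dice = 2 * |intersection| / (|v1| + |v2|).
    """
    v1 = v1 > 0
    v2 = v2 > 0
    denom = v1.sum() + v2.sum()
    if denom == 0:
        return 1.0  # both models empty: treat as identical
    return float(2.0 * np.logical_and(v1, v2).sum() / denom)
```

A low Dice score between vessel models from tomograms captured at different times suggests that the vessel pattern has shifted, i.e., the volume data is discontinuous.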
As in steps S1204 and S1205 of the third embodiment, continuity of tomogram volume data is determined from the degree of similarity obtained from tomograms captured at different times.
In the foregoing embodiments, the determining unit 253 may perform determination by combining the evaluation of the degree of similarity and the detection of blood vessel ends. For example, using the partial integrated images Pa1 to Pan or the partial blood vessel models Va1 to Van, the determining unit 253 evaluates the degree of similarity between tomograms captured at different times. Then, only for those partial integrated images Pa1 to Pan or partial blood vessel models Va1 to Van whose degrees of similarity are less than the threshold, the determining unit 253 may track blood vessels, detect blood vessel ends, and determine continuity of the tomogram volume data.
In the foregoing embodiments, whether to capture an image of the subject's eye again may automatically be determined. For example, when the determining unit 253 determines discontinuity, an image is captured again. Alternatively, an image is captured again when the place where discontinuity is determined is within a certain range from the image center. Alternatively, an image is captured again when discontinuity is determined at multiple places. Alternatively, an image is captured again when the amount of a positional shift estimated from a blood vessel pattern is greater than or equal to a threshold. Estimation of the amount of a positional shift need not be performed from a blood vessel pattern; it may instead be performed by comparison with a past image. Alternatively, whether to capture an image again may depend on whether the eye is normal or has a disease; when the eye has a disease, an image is captured again when discontinuity is determined. Alternatively, an image is captured again when discontinuity is determined at a place where a disease (leucoma or bleeding) existed according to past data. Alternatively, an image is captured again when there is a positional shift at a place whose image is specified by a doctor or an operator to be captured. These processes need not be performed independently; a combination of them may be performed. When it is determined to capture an image again, the flow returns to the beginning, and the process is performed on the same subject's eye again.
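A hypothetical sketch combining several of the re-capture criteria above into one decision (every field name and threshold here is illustrative, not part of the embodiment):

```python
def should_recapture(result, image_center, center_range,
                     max_places=2, shift_threshold=5.0):
    """Decide whether to capture the subject's eye again.

    result: dict with 'discontinuities' (list of slice positions where
    discontinuity was determined), 'estimated_shift' (float, pixels,
    e.g. estimated from a blood vessel pattern), and 'has_disease' (bool).
    """
    discs = result['discontinuities']
    if not discs:
        return False                 # nothing discontinuous: keep the scan
    if result.get('has_disease'):
        return True                  # stricter handling for diseased eyes
    if any(abs(p - image_center) <= center_range for p in discs):
        return True                  # discontinuity near the image center
    if len(discs) >= max_places:
        return True                  # discontinuity at multiple places
    # Otherwise, re-capture only if the estimated shift is large.
    return result.get('estimated_shift', 0.0) >= shift_threshold
```

In an actual system, a positive decision would loop the flow back to the beginning, as described above.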
In the foregoing embodiments, a display example of the display unit 260 is not limited to that illustrated in
In the foregoing embodiments, an analysis process is performed on a captured image of the macula lutea. However, a place for the image processing unit to determine continuity is not limited to a captured image of the macula lutea. A similar process may be performed on a captured image of the optic disk. Furthermore, a similar process may be performed on a captured image including both the macula lutea and the optic disk.
In the foregoing embodiments, an analysis process is performed on the entirety of an obtained three-dimensional tomogram. However, a target cross section may be selected from a three-dimensional tomogram, and a process may be performed on the selected two-dimensional tomogram. For example, a process may be performed on a cross section including a specific portion (e.g., fovea) of the fundus of an eye. In this case, the boundary between detected layers, a normal structure, and normal data constitute two-dimensional data on this cross section.
Determination of continuity of tomogram volume data using the image processing system 10, which has been described in the foregoing embodiments, may not necessarily be performed independently, and may be performed in combination. For example, continuity of tomogram volume data may be determined by simultaneously evaluating the degree of concentration of blood vessel ends, which is obtained from an integrated image generated from tomograms, as in the first embodiment, and the degree of similarity between consecutive tomograms and image feature values, as in the second embodiment. As another example, detection results and image feature values obtained from tomograms with no positional shift and from tomograms with positional shifts may be learned, and continuity of tomogram volume data may be determined by using a classifier. Needless to say, any of the foregoing embodiments may be combined.
In the foregoing embodiments, the tomogram capturing apparatus 20 may not necessarily be connected to the image processing system 10. For example, tomograms serving as processing targets may be captured and held in advance in the data server 40, and processing may be performed by reading these tomograms. In this case, the image obtaining unit 220 requests the data server 40 to send tomograms, obtains the tomograms sent from the data server 40, and performs layer boundary detection and quantification processing. The data server 40 may not necessarily be connected to the image processing system 10; the external storage device 704 of the image processing system 10 may serve the role of the data server 40.
Needless to say, the present invention may be achieved by supplying a storage medium storing program code of software for realizing the functions of the foregoing embodiments to a system or apparatus, and reading and executing the program code stored in the storage medium by using a computer (or a CPU or a microprocessing unit (MPU)) of the system or apparatus.
In this case, the program code itself read from the storage medium realizes the functions of the foregoing embodiments, and a storage medium storing the program code constitutes the present invention.
As a storage medium for supplying the program code, for example, a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a compact disc read-only memory (CD-ROM), a compact disc-recordable (CD-R), a magnetic tape, a nonvolatile memory card, or a ROM can be used.
As well as realizing the functions of the foregoing embodiments by executing the program code read by the computer, an operating system (OS) running on the computer may execute part of or the entirety of actual processing on the basis of instructions of the program code to realize the functions of the foregoing embodiments.
Furthermore, a function expansion board placed in the computer or a function expansion unit connected to the computer may execute part of or the entirety of the processing to realize the functions of the foregoing embodiments. In this case, the program code read from the storage medium may be written into a memory included in the function expansion board or the function expansion unit. On the basis of the instructions of the program code, a CPU included in the function expansion board or the function expansion unit may execute the actual processing.
The foregoing description merely presents an example of a preferred image processing apparatus according to the present invention, and the present invention is not limited thereto.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the claims.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2008-287754, filed Nov. 10, 2008, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind |
---|---|---|---|
2008-287754 | Nov 2008 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2009/005935 | 11/9/2009 | WO | 00 | 3/4/2011 |