The present invention relates to methods and systems for automatically processing images of the anterior chamber of the eye, and in particular HD-OCT (high-definition optical coherence tomography) images, to automatically obtain information such as the location of Schwalbe's line, as part of a process for the assessment of the anterior chamber angle.
Glaucoma is one of the major eye diseases globally, causing irreversible loss of vision through damage to the optic nerve. It is largely caused by poor filtration of aqueous fluid from the eyeball through the anterior chamber angle (ACA). If untreated, it leads to raised intraocular pressure, permanent nerve damage and blindness. It is the second leading cause of blindness worldwide after cataract and the leading cause of irreversible visual loss [1]. It accounts for 40% of blindness in Singapore [2].
There are two main types of glaucoma, depending on how the flow of fluid is blocked:
Glaucoma is asymptomatic in an early stage and is often only recognized when the disease is quite advanced and vision is lost. Detection of ACG in the early stage using clinical imaging modalities could lead to treatment to arrest its development or slow down the progression.
Anterior chamber angle assessment is used for the detection of ACG and is essential in deciding whether or not to perform laser iridotomy. Three approaches are used, namely, gonioscopy, ultrasound biomicroscopy (UBM) and anterior segment optical coherence tomography (AS-OCT).
Gonioscopy involves imaging with a contact lens placed on the eye. Though considered the ‘gold standard’, gonioscopy is highly subjective: the definition of angle findings varies across grading schemes and there is no universal standard. It is also prone to measurement errors arising from how the lens is placed on the eye [5] and from differing illumination intensities, and it is uncomfortable for the patient. As such, there are severe constraints on its potential as a screening tool.
Ultrasound biomicroscopy (UBM) uses a higher frequency transducer than regular ultrasound for more detailed assessment of the anterior ocular structures [6]. The different parameters defined in [7] to quantify ACA are as follows:
Ishikawa et al. designed a semi-automated program (UBM Pro2000) [9] to calculate several important parameters, i.e. AOD 250, 500 and ARA 750, based on the manual identification of the scleral spur, which is prone to intra-observer and inter-observer variability. Although UBM is useful in quantifying the ACA, the equipment is costly and the resolution is sometimes unsatisfactory. Furthermore, it is neither user nor patient friendly as a water bath is needed to image the eye [5].
AS-OCT is another instrument for imaging the anterior chamber angle. Optical coherence tomography is analogous to ultrasound imaging, as the image is formed by detecting the signal backscattered from different tissue structures. Instead of sound waves, light is used for OCT imaging, which avoids the need for direct contact with the eyes to transmit or receive the signal. Furthermore, the use of light achieves higher spatial resolution than ultrasound. From the experiments in [10], AS-OCT is found to be at least as sensitive in detecting angle closure when compared with gonioscopy.
The existing angle assessment parameters used in AS-OCT remain the same as those for UBM images. The current Visante™ built-in angle assessment software requires substantial user labeling (the scleral spur, cornea and iris), hence the measurements are subjective. The Zhongshan Angle Assessment Program [11] is able to define the borders of the corneal epithelium, endothelium and iris to measure AOD, ARA and TISA using the location of the scleral spur as the only observer input. However, the scleral spur cannot be identified in 20% to 30% of Visante™ OCT images, and measurements using the scleral spur as the landmark are subject to intra-observer and inter-observer variability and so are not very reproducible.
With the advancement in OCT imaging technology, higher resolution images could be produced in shorter time. Zeiss Cirrus™ HD-OCT [12] uses spectral domain technology to achieve image acquisition speed of 27,000 axial scans per second, which results in approximately 50 times faster data acquisition in practice as compared to the time-domain Visante™ OCT. Furthermore, the transverse resolution of HD-OCT images is improved from 60 μm/pixel to 15 μm/pixel and axial resolution is improved from 18 μm/pixel to 5 μm/pixel [13], [14]. However, as Cirrus™ uses a shorter wavelength (840 nm) than Visante™ (1310 nm) to achieve better spatial resolution, the penetration depth is decreased.
An object of the present invention is to make possible automatic processing of an optical coherence tomography image of the anterior chamber of an eye, and in particular an HD-OCT image.
A further object of the invention is the automatic identification of Schwalbe's line in an HD-OCT image, and the use of this novel landmark to provide a more reliable and consistent way of quantifying the anterior chamber angle in an HD-OCT image.
In general terms the invention proposes that a location in an OCT image corresponding to Schwalbe's line (i.e. a point in the OCT image corresponding to the landmark in the eye at which Schwalbe's line intersects the plane in which the OCT image was captured) is detected automatically by: detecting the location of the corneal endothelium; fitting a model to the detected corneal endothelium; and determining the location of Schwalbe's line based on the relationship between the detected corneal endothelium and the fitted model, such as the point where the detected corneal endothelium diverges most from the fitted model.
The detection of the Schwalbe's line is part of a process for obtaining data indicative of the anterior chamber angle, i.e. one or more numerical measures of the angle. That process may be used to screen patients reliably for reduced anterior chamber angle using optical coherence tomography images, and without gonioscopy or ultrasound biomicroscopy. Patients with reduced anterior chamber angle may be identified for further testing and/or treatment. Such treatment may include laser iridotomy. The effect of the treatment may be monitored using OCT images captured periodically, and the images may be processed according to the invention.
The detection of the corneal endothelium may include a thresholding step applied to the image, to identify first pixels having an intensity to one side of the threshold (e.g. above the threshold) and second pixels having an intensity on the other side of the threshold (e.g. below the threshold). Naturally, it would be equivalent to obtain a negative version of the OCT image, and perform thresholding swapping the meanings of “above” and “below”.
The term “automatic” is used here in the sense that it is substantially without human interaction, except as to the initialization of the algorithm.
Although the embodiments of the invention described below use a single OCT image captured in a single plane, there are possible applications of the invention to datasets having multiple OCT images for a single eye, such as a three-dimensional dataset composed of multiple OCT images in different respective planes, e.g. parallel spaced-apart planes. Each of the planes may have a different respective intersection with Schwalbe's line.
Embodiments of the invention will now be described for the sake of example only with reference to the following figures, in which:
1. Explanation of the First Embodiment
A block diagram of the first embodiment of the invention is given in
1.1 Segmentation and Edge Detection
In step 1, the cornea and the iris regions are extracted by an image segmentation method, and then the inner surface of the cornea and the upper boundary of the iris are detected.
First, we convert the input HD-OCT image f(x,y) of dimension M×N to a binary image defined by
where the threshold can be calculated by Otsu's method [21], which minimizes the intra-class variance (equivalently, maximizes the inter-class variance). The cornea and the iris can then be separated by the connected component labeling [17] segmentation method. The basic idea of this method is to scan the image and group its pixels into components based on connectivity, assigning each component a unique label. The components with the largest and the second largest number of pixels are identified as the cornea, gc(x,y), and the iris, gI(x,y), respectively. These segmented binary images are illustrated in
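The segmentation just described can be sketched in Python with numpy and scipy. This is an illustrative implementation of Otsu's threshold and connected-component selection, not the patent's own code; the function names are invented for this sketch.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Return the threshold maximizing the inter-class variance (Otsu)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # inter-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def segment_cornea_iris(img):
    """Binarize with Otsu's threshold, then return the two largest
    connected components as binary masks (cornea, iris)."""
    g = (img >= otsu_threshold(img)).astype(np.uint8)
    labels, n = ndimage.label(g)
    sizes = ndimage.sum(g, labels, range(1, n + 1))
    order = np.argsort(sizes)[::-1]
    cornea = labels == order[0] + 1
    iris = labels == order[1] + 1
    return cornea, iris
```

On a real HD-OCT frame the two largest bright components are, as in the text, taken to be the cornea and the iris.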
A rectangular region of interest (ROI) is identified and defined by
ROI=[xL,xL+w]×[yT,yT+h], where (xL,yT) are the coordinates of the top left corner of the rectangle and w and h specify its width and height. The ROI is a rectangular area provided by the user and contains the location of Schwalbe's line; the size of the window is 50×50 pixels. Although it is a user input, we have found that the results of the algorithm are usually not affected by the exact choice of the ROI as long as Schwalbe's line is located inside it.
It is assumed that Schwalbe's line is located inside the region of interest, thus the lower boundary of the cornea is defined by
Ec={(x,y)∈ROI | gc(x,y)=1 and gc(x,v)=0, ∀v>y} (2)
and the upper boundary of the iris is defined by
EI={(x,y) | gI(x,y)=1 and gI(x,v)=0, ∀v<y} (3)
1.2 Auto-Detection of Schwalbe's Line
In step 2, using linear regression, the embodiment first determines a linear function lc(x)=mcx+b which best fits the points on the lower boundary of the cornea, that is,
and then the location of Schwalbe's line is the point at which there is a maximum distance between the points in Ec and the regression line lc(x), that is,
(xsb,ysb)=arg max(lc(x)−y), (x,y)∈Ec (5)
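Equations (4)-(5) amount to a least-squares line fit followed by a maximum-deviation search. A minimal numpy sketch, assuming Ec has been sampled into an array of (x,y) points inside the ROI (the function name is invented for this sketch):

```python
import numpy as np

def detect_schwalbe_line(edge_points):
    """First-embodiment sketch: fit a regression line to the lower corneal
    boundary and return the point lying furthest above it (Eq. 5, with y
    increasing downward).
    edge_points: (K, 2) array of (x, y) samples of Ec inside the ROI."""
    x, y = edge_points[:, 0], edge_points[:, 1]
    m, b = np.polyfit(x, y, 1)          # least-squares line l_c(x) = m x + b
    deviation = (m * x + b) - y         # l_c(x) - y, as in Eq. (5)
    k = int(np.argmax(deviation))
    return x[k], y[k]
```

With a synthetic boundary that follows a straight line except for one dip, the dip is returned as the landmark.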
1.3 Schwalbe's Line Bounded Area (SLBA)
As illustrated in
2) The upper boundary of the iris and the regression line segment obtained from the points in EI:
The three detected boundaries, lc
1.4 Find the Area of the SLBA
In step 4 the area of the SLBA is found, which is a value indicative of the ACA. A small SLBA value suggests the possibility of angle-closure glaucoma. The proposed parameter uses area to quantify the ACA, similar to ARA and TISA; however, Schwalbe's line is more distinct in HD-OCT images, and hence SLBA is a more reliable parameter for quantifying angle-closure glaucoma.
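If the three SLBA boundary segments are concatenated into an ordered closed contour, its area can be computed with the standard shoelace formula. This is a generic sketch under that assumption, not the embodiment's own area computation (which may instead count pixels):

```python
import numpy as np

def polygon_area(vertices):
    """Shoelace formula for the area enclosed by an ordered list of (x, y)
    vertices; usable for the SLBA once its bounding curves have been
    concatenated into one closed contour."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))
```

For example, a unit square gives area 1 and the 3-4-5 right triangle gives area 6.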
2 Data, Experiments and Results
We now turn to experimental testing of the embodiment. HD-OCT images were captured using the Cirrus™ [12] under dark-room conditions at the Tan Tock Seng Hospital Eye Center, Singapore, and the algorithm was implemented in Matlab™ 2008a on an Intel® Core™ 2 Quad CPU Q9400 @ 2.66 GHz computer. The embodiment was tested on a data set consisting of ten images labeled fn, n=1, 2, . . . , 10, with pixel resolution M×N ranging from 540×270 to 1815×1126; the identified Region of Interest is a 50×50 square in the neighborhood of Schwalbe's line.
where a=7.2 mm and b=2.7 mm are the horizontal and vertical scales of the images, respectively. The results are summarized in Table I.
The average values of εx and εy are 0.0388 mm and 0.00945 mm, respectively, which correspond to less than 0.5% and 0.35% of the image dimension in each direction. Thus, the location of Schwalbe's line obtained with our algorithm agrees closely with the ophthalmologist's manual identification. The algorithm's average processing time is about 0.111 seconds.
1. Explanation of the Second Embodiment
A block diagram of the second embodiment is given in
The definitions of the variables used in this explanation are as follows:
mask after dewarping:
1.1 Image Segmentation & Edge Detection
The grayscale image f(x,y) is first segmented into two sets of pixels, the foreground object (B) and the background object (B̂):
B={(x,y) | f(x,y)>Ta, (x,y)∈[1,M]×[1,N]}
B̂={(x,y) | f(x,y)≤Ta, (x,y)∈[1,M]×[1,N]}
where Ta is a threshold used to segment the image. Ta is selected experimentally (it is selected as 40 in the experiments below). Morphological operations, e.g. erosion, dilation, opening and closing [16], are performed to remove the speckle noise.
Using the method described in [17], the components of B and B̂ are labeled as {B1, B2, . . . , Bk} and {B̂1, B̂2, . . . , B̂k} such that |B1|>|B2|> . . . >|Bk| and Cx(B̂1)>Cx(B̂2)> . . . >Cx(B̂k), where |•| and Cx(•) denote the cardinality (the number of pixels) and the x-coordinate of the centroid of a component, respectively.
The following segmentation and edge detection algorithm (“Algorithm 1”) is then performed:
Algorithm 1
if |B2|/|B1|<0.01 then the cornea is assumed to be connected to the iris, so
1. Define the MAC as MAC=B̂1
2. Perform edge detection to obtain two edges:
Ec={(x,y) | (x,y)∈MAC, (x,v)∉MAC, ∀v<y}
EI={(x,y) | (x,y)∈MAC, (x,v)∉MAC, ∀v>y}
else
1. Perform edge detection:
Ec={(x,y) | (x,y)∈B1 and (x,v)∉B1, ∀v>y}
EI={(x,y) | (x,y)∈B2 and (x,v)∉B2, ∀v<y}
2. Find MAC from
MAC={(x,y) | u<y<v, where (x,u)∈Ec and (x,v)∈EI}
end if
In other words, Algorithm 1 first determines whether the cornea is connected to or disjoint from the iris. It assumes that they are connected if |B2|/|B1|<0.01, implying that the area of the object B2 is insignificant compared with that of B1, so B2 is classified as noise. This is the situation illustrated in . The MAC is then taken as B̂1, i.e. the background component with the right-most centroid. Note that the embodiment uses the right-most centroid (rather than the left-most centroid) because it assumes that the images are temporal scans of the left eye or nasal scans of the right eye. Ec and EI are then chosen as the upper and lower boundaries of the MAC respectively. Note that if the image is a nasal scan of the left eye or a temporal scan of the right eye, we need to flip the image horizontally before the segmentation; alternatively, the background component with the left-most centroid is taken as B̂1 in Algorithm 1.
Conversely, if |B2|/|B1|≥0.01 then the cornea and iris are disjoint. This is the situation illustrated in
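Algorithm 1 can be sketched as follows, assuming image coordinates with y increasing downward and the angle on the right (as in the text). The function and helper names are invented for this sketch, and boundary extraction is done per column:

```python
import numpy as np
from scipy import ndimage

def upper_boundary(mask):
    """Per-column smallest y where mask is True (-1 for empty columns)."""
    out = np.full(mask.shape[1], -1)
    for x in range(mask.shape[1]):
        ys = np.nonzero(mask[:, x])[0]
        if ys.size:
            out[x] = ys.min()
    return out

def lower_boundary(mask):
    """Per-column largest y where mask is True (-1 for empty columns)."""
    out = np.full(mask.shape[1], -1)
    for x in range(mask.shape[1]):
        ys = np.nonzero(mask[:, x])[0]
        if ys.size:
            out[x] = ys.max()
    return out

def detect_chamber_edges(binary, ratio=0.01):
    """Algorithm 1 sketch: if the second-largest foreground component is
    negligible (|B2|/|B1| < ratio), treat cornea and iris as connected and
    take the MAC as the background component with the right-most centroid;
    otherwise use the two largest foreground components separately."""
    fg, nf = ndimage.label(binary)
    sizes = np.bincount(fg.ravel())[1:]
    order = np.argsort(sizes)[::-1]
    if nf < 2 or sizes[order[1]] / sizes[order[0]] < ratio:
        bg_img = (~binary).astype(np.uint8)
        bg, nb = ndimage.label(bg_img)
        cents = ndimage.center_of_mass(bg_img, bg, range(1, nb + 1))
        mac = bg == int(np.argmax([c[1] for c in cents])) + 1
        return upper_boundary(mac), lower_boundary(mac)   # Ec, EI
    b1 = fg == order[0] + 1
    b2 = fg == order[1] + 1
    return lower_boundary(b1), upper_boundary(b2)         # Ec, EI
```

The same two edges Ec and EI come out of either branch, which is what the subsequent Schwalbe's line detection consumes.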
1.2 Automatic Detection of Schwalbe's Line
It is found that Ec fits a fourth-order polynomial p(x)=p4x4+p3x3+p2x2+p1x+p0, where p4, p3, p2, p1 and p0 are real numbers, very well. This is true except in the region around Schwalbe's line, where the curvature of Ec changes suddenly due to the termination of Descemet's membrane. Although it would be possible in principle to use a polynomial of still higher degree, the fit is hardly improved for powers of x above 4. Based on this observation, we develop a method to detect Schwalbe's line automatically (“Algorithm 2”):
Algorithm 2
1. Ec is first fitted to the fourth order polynomial p(x)=p4x4+p3x3+p2x2+p1x+p0
as shown in
2. A window region containing Schwalbe's line is detected automatically in this step. Consider
where H(•) is the Heaviside step function, (u,v)∈Ec and
b(x)=max{u | u<x, p(u)≤v, (u,v)∈Ec}
Note that although this expression is given here as an integral, since Ec is known only at a set of points, the integral is actually a summation over those points.
As illustrated in
3. The regression line of Ec around the neighborhood of xR is lc(x)=mcx+bc, where
and where Δ is the size of the neighborhood, i.e. xR−xL. That is, the regression is over points outside the window, and just to the right of the window.
4. Schwalbe's line, (xsb,ysb), is the point at which there is a maximum distance between lc and Ec as seen in
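A simplified sketch of Algorithm 2, assuming Ec is given as sampled arrays x and y: instead of the integral criterion of step 2, this sketch centres the window on the sample with the largest quartic-fit residual, which serves the same purpose of localizing the kink (the function name and `half_window` parameter are invented here):

```python
import numpy as np

def detect_schwalbe_quartic(x, y, half_window=25):
    """Second-embodiment sketch (simplified): Ec fits a fourth-order
    polynomial well except near Schwalbe's line, so take the sample with
    the largest residual as a coarse estimate, fit a regression line to
    the samples just to the right of that window, and return the point of
    maximum distance from the line within the window (steps 3-4)."""
    p = np.polyfit(x, y, 4)                    # p(x) = p4 x^4 + ... + p0
    residual = np.abs(y - np.polyval(p, x))
    k = int(np.argmax(residual))               # coarse location of the kink
    lo, hi = max(0, k - half_window), min(len(x), k + half_window)
    right = slice(hi, min(len(x), hi + half_window))
    if right.stop - right.start >= 2:
        m, b = np.polyfit(x[right], y[right], 1)   # local regression line
    else:
        m, b = np.polyfit(x, y, 1)                 # fallback: global line
    dist = (m * x[lo:hi] + b) - y[lo:hi]           # l_c(x) - y in the window
    j = lo + int(np.argmax(dist))
    return x[j], y[j]
```

On a boundary that is linear except for a single dip, the dip is recovered exactly.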
1.3 Dewarping
The optical coherence tomography images are distorted because light changes direction at the interface between the air and the cornea. Moreover, OCT records the optical path length that the light travels, not the actual physical distance. Both the direction and the physical distance therefore need to be corrected before making any measurements.
Westphal [18] corrected the scanning geometry (convergent or divergent) and proposed a backward transformation approach for refraction correction based on Fermat's Principle. However, the user needs to input several points on the corneal epithelium to perform active contour in the refraction correction step of the algorithm. Dewarping remains a subjective and time-consuming process.
The second embodiment includes a step 13 of automatic edge detection of the corneal epithelium in the dewarping process to avoid the need for user input. There are three steps in the dewarping algorithm: (1) Resizing f(x,y) so that the axial and transverse resolutions are the same; (2) Automatic detection of the corneal epithelium and (3) Refraction correction using Snell's law.
The details of the steps are as follows:
1) Resizing: The original HD-OCT image f(x,y), (x,y)∈[1,M]×[1, N], is resized to fr(xr,yr), (xr,yr)∈[1,
where w and d are the scan width and depth, respectively. Hence, the axial resolution and transverse resolution are the same. The point (x,y) on the original image is mapped to (xr,yr) in the resized image as defined by the following transformation:
2) Corneal epithelium detection: Using a method similar to that described in Section 1.1, the cornea region in fr(xr,yr) is identified using a connected component labeling method, and the upper boundary of the cornea, Ep0, is initially taken as the corneal epithelium. However, the embodiment removes any sudden change caused by non-uniform illumination, as illustrated in
Algorithm 3
1. In an image segmentation substep, the embodiment converts the image fr(xr,yr) to a binary image using Tp as the threshold and labels the components {B1r, B2r, . . . , Bkr} such that |B1r|>|B2r|> . . . >|Bkr|. B1r is taken as the cornea component.
2. In an initial edge detection substep, Ep0 is defined by
Ep0={(xr,yr) | (xr,yr)∈B1r, (xr,vr)∉B1r, ∀vr<yr}
3. In a sub-step of removing any sudden change of Ep0, the embodiment removes the sample points of Ep0 between xs and xe, where
xs=min{xr | |yr(xr)−yr(xr−1)|>δ, (xr,yr)∈Ep0}
xe=max{xr | |yr(xr)−yr(xr−1)|>δ, (xr,yr)∈Ep0}
δ is the threshold value to define the sudden change, and it is set to 5 during the experiments reported below.
4. In an interpolation sub-step, the embodiment interpolates the remaining points of Ep0 using the fourth-order polynomial q(xr)=q4xr4+q3xr3+q2xr2+q1xr+q0, such that for (xr,yr)∈Ep0,
5. Finally, the resulting estimate of the position of the corneal epithelium, Ep, is defined by Ep={(xr,yr)|xr∈[1,
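Steps 3 and 4 of Algorithm 3 (jump removal followed by quartic interpolation) can be sketched as follows; the function name is invented, and the threshold default matches δ=5 from the text:

```python
import numpy as np

def clean_epithelium(x, y, delta=5.0):
    """Algorithm 3 sketch: drop the span of the initial epithelium edge
    between the first and last sudden jump (|y[i] - y[i-1]| > delta), then
    fit a fourth-order polynomial to the surviving samples and evaluate it
    over the full x range to fill the gap."""
    jumps = np.nonzero(np.abs(np.diff(y)) > delta)[0] + 1
    keep = np.ones(len(x), dtype=bool)
    if jumps.size:
        keep[jumps.min():jumps.max() + 1] = False   # remove [xs, xe]
    q = np.polyfit(x[keep], y[keep], 4)             # quartic q(x_r)
    return np.polyval(q, x)
```

A corrupted span inserted into a smooth boundary is detected by the jump test and replaced by the polynomial's values.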
3) Refraction Correction: An AS-OCT image is distorted by refraction as the light changes direction and speed when it passes the corneal epithelium. This is illustrated in
According to Snell's law [19]
where (θ1,θ2) are the refraction and incident angles for the backscattered light from point Ā and (n1, n2) are the refraction indices of the air and cornea respectively.
The refraction angle θ1 is equal to the angle between the tangent line and the x-axis. Therefore θ1=arctan(q′(xr)) and θ2=arcsin(n1 sin(θ1)/n2). The actual physical distance from the incident point O=(xr,q(xr)) to light source Ā is OĀ=OA·n1/n2=|yr−q(xr)|n1/n2.
Hence, the transformation from A to Ā is
x(xr)=⌈xr−OĀ sin(θ1−θ2)⌉ (11)
y(xr)=⌈q(xr)+OĀ cos(θ1−θ2)⌉ (12)
where ⌈•⌉ is the ceiling function (i.e. the smallest integer not less than its argument). This function is used so that, after the transformation, the results are integers and the pixels in the old image map to pixels in the dewarped image. In order to assess the anterior chamber angle, (xsb,ysb), Ec, EI and MAC are transformed to correct the distortion caused by refraction as follows:
(x̄sb,ȳsb), Ēc, ĒI and the dewarped MAC are obtained by applying the transformations of equations (11) and (12) to (xsb,ysb) and to each point of Ec, EI and MAC, respectively.
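The refraction correction of equations (11) and (12) together with Snell's law can be sketched as follows. The corneal refractive index value 1.376 is a common literature figure assumed here (not stated in the text), the epithelium is represented by polynomial coefficients `q_coeffs`, and equation (12) is taken to measure from the incident point q(xr):

```python
import numpy as np

def dewarp_point(xr, yr, q_coeffs, n1=1.0, n2=1.376):
    """Refraction-correction sketch (Eqs. 11-12): map an apparent point
    (xr, yr) below the epithelium q(x) to its physical location.
    q_coeffs: highest-power-first coefficients of q(x), as for np.polyval."""
    q = np.polyval(q_coeffs, xr)                  # incident point height
    dq = np.polyval(np.polyder(q_coeffs), xr)     # surface slope q'(xr)
    theta1 = np.arctan(dq)                        # refraction angle
    theta2 = np.arcsin(n1 * np.sin(theta1) / n2)  # Snell's law
    oa = abs(yr - q) * n1 / n2                    # true path length OA-bar
    x_new = np.ceil(xr - oa * np.sin(theta1 - theta2))
    y_new = np.ceil(q + oa * np.cos(theta1 - theta2))
    return int(x_new), int(y_new)
```

As a sanity check, with n1=n2 the mapping leaves the point unchanged, and with a flat epithelium it simply compresses the depth by n1/n2.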
1.4 Anterior Chamber Angle Assessment
The anterior chamber angle (ACA) measurements based on Schwalbe's line are defined as follows (see
The calculation of the three ACA measurements is as follows (“Algorithm 4”):
Algorithm 4
1. Linear regression is used to estimate the tangent line of Ec anterior to Schwalbe's line, by obtaining the line lc(
2. The left and right boundaries of TISAsb, denoted by
3. Calculate AODsb, ARAsb and TISAsb as follows:
Again, |•| means the cardinality.
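Since the exact formulas of Algorithm 4 are elided above, the following is only a hypothetical illustration of Schwalbe's-line-based measurements by analogy with the classical AOD/TISA definitions; the function name, the recess input and the resolution value are all assumptions, and the area is pixel-counted in keeping with the cardinality remark:

```python
import numpy as np

def angle_measures(ec, ei, x_sb, x_recess, res_mm=0.009):
    """Hypothetical sketch of Schwalbe's-line-based angle measurements.
    ec[x], ei[x]: per-column y-coordinates of the dewarped cornea and iris
    boundaries; x_sb: Schwalbe's line column; x_recess: angle recess column;
    res_mm: assumed isotropic pixel size after resizing.
    AODsb: cornea-to-iris opening at the landmark column, in mm.
    TISAsb: pixel-counted area between the boundaries from the recess to
    the landmark, in mm^2."""
    aod = (ei[x_sb] - ec[x_sb]) * res_mm
    cols = range(min(x_sb, x_recess), max(x_sb, x_recess) + 1)
    gap = sum(max(ei[x] - ec[x], 0) for x in cols)   # cardinality of region
    tisa = gap * res_mm ** 2
    return aod, tisa
```

With constant boundaries 20 pixels apart over an 11-column span, this gives AODsb = 0.18 mm and TISAsb = 0.01782 mm².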
2. Experiments and Discussion
Forty HD-OCT images captured using the Cirrus™ at the Singapore Eye Research Institute were used to test the second embodiment. The size of the test images was (M,N)=(750,500) pixels or (924,616) pixels, and the scan dimensions are w=7.2 mm and d=2.7 mm, respectively. The test images are resized to (
2.1 Processing Time
The algorithm was implemented in Matlab™ R2009a using an Intel® Core™ 2 Duo CPU P7450 @ 2.13 GHz and 1.72 GHz computer. The average processing time of each stage is computed and listed as follows:
Hence, using the proposed algorithm, the operator of Cirrus™ HD-OCT can get the angle assessment measurements in 1 second without any manual labeling.
The embodiment thus simplifies the job of the ophthalmologists and saves their time.
2.2 Accuracy
As there is no existing algorithm that can detect Schwalbe's line automatically, manual labeling is used as the ground truth to evaluate the accuracy of the second embodiment. Each original image is labeled by three ophthalmologists (three individuals who are here called BM, WHT and AT). BM performed the evaluation twice, and WHT and AT each performed it once. The labelings are denoted by L1a, L1b, L2 and L3, respectively. The reference point L0=(xsb0,ysb0) is the weighted average of the labelings, defined by
L0=(0.5L1a+0.5L1b+L2+L3)/3
The detection error, defined as
ε=(εx,εy)=(xsb−xsb0, ysb−ysb0) pixels,
is compared with the intra-observer difference L1a−L1b and the inter-observer differences L1a−L0, L1b−L0, L2−L0 and L3−L0.
As the detection error, inter-observer difference and intra-observer difference are vectors, the performance of the algorithm is evaluated using the first order statistics (average and standard deviation) in two directions (x and y) separately. Four statistics of the vector on forty images are defined as follows
where a(n)=(xa(n),ya(n)) is either the inter-observer difference, the intra-observer difference or the detection error on image n.
Overall, the results of automatic detection of Schwalbe's line are in good agreement with the ophthalmologist's labeling.
As seen from Table 2, the mean of the detection error in the transverse direction, μx(ε), is smaller than that of the intra-observer difference and the inter-observer differences; the mean of the detection error in the axial direction, μy(ε), is only 0.5042 pixels, which is smaller than the inter-observer differences but larger than the intra-observer difference.
In the final column of the table, the first row gives the mean of εx, the second row the mean of εy, the third row the standard deviation of εx, denoted σx(ε), and the last row the standard deviation of εy, denoted σy(ε). The reliability of our algorithm is evaluated using the standard deviation. σy(ε) across the forty images is 1.92 pixels, which is smaller than the standard deviation of the inter-observer and intra-observer differences in the axial direction. However, σx(ε) is 5.4160 pixels, which is a large variation compared with that of the intra-observer difference, 3.1199 pixels. This is due to poor contrast and heavy speckle noise around the angle recess in three of the images. The distribution of εx is shown in
We turn to an evaluation of the effect of the Schwalbe's line detection error on the angle assessment. The reference point is transformed to its dewarped counterpart (x̄sb0,ȳsb0) using equations (11) and (12).
Consequently, angle assessment measurements are computed based on the reference point and are denoted by AODsb0, ARAsb0 and TISAsb0.
The performance metrics of angle assessment are defined as:
εAOD=(AODsb−AODsb0) mm
εARA=(ARAsb−ARAsb0) mm2
εTISA=(TISAsb−TISAsb0) mm2
The statistics of the performance metrics of the second embodiment are shown in Table 2. The standard deviation of the detection error in the dewarped images is (4.2480, 0.9380) pixels, which corresponds to (0.0382, 0.0084) mm. The angle assessment measurements based on the detected Schwalbe's line are quite reliable. The standard deviation of εAOD is 0.0048 mm, which is less than a pixel. The standard deviations of εARA and εTISA are 0.0027 mm2 and 0.0070 mm2, which are 0.0252% and 0.0376% of the whole image area (2.7×7.2 mm2). We therefore conclude that the angle assessment measurements using automatic detection of Schwalbe's line are in good agreement with those using manual labeling.
For the cases of images containing heavy speckle noise around the angle recess, the second embodiment may also provide a manual labeling option, which may be used if the ophthalmologist does not agree with the automatically detected location of Schwalbe's line.
Although two embodiments of the invention have been described, many variations are possible within the scope of the appended claims. Firstly, better segmentation methods could be employed to reduce the speckle noise and eliminate the image-dependent threshold value.
Another possibility is to model the lower boundary of the cornea as a piecewise linear function and investigate Finite Rate of Innovation (FRI) [22] methods to detect Schwalbe's line which appears to be the point of discontinuity. Furthermore, given that the Cirrus™ provides images at different depths of the eye, the multidimensional theoretical concepts of FRI can further be explored and applied to angle-closure glaucoma.
The disclosure of the following references is incorporated herein by reference.
[12] The certainty of Cirrus. Carl Zeiss Meditec. [Online]. Available: http://www.cirrusoctdemo.com/index.html
This application claims benefit of Ser. No. 61/449,380, filed 4 Mar. 2011 in the United States and which application is incorporated herein by reference. A claim of priority, to the extent appropriate, is made.
Number | Date | Country
---|---|---
61449380 | Mar 2011 | US