This application relates generally to fundus imaging and, more specifically, to fast generation of 2D fundus images of the eye using an interferometric imaging modality.
Fundus imaging provides essential diagnostic information in ophthalmology. From a signal detection point of view, there are three criteria for imaging performance: (1) light collection efficiency; (2) system detection sensitivity; and (3) artifact suppression. Due to the particular structure of the eye, there are several constraints on fundus imaging. For example, both the illumination and imaging apertures are limited by the pupil size, the light scattered from the retina is weak, and strong reflections from the anterior segment (particularly the cornea) can eclipse the weak signal from the retina and spoil the image contrast.
For conventional fundus imaging modalities, including fundus cameras and scanning laser ophthalmoscopes (SLO), it is critical to suppress image artifacts such as glare from the corneal reflection, for example by separating the illumination and imaging apertures. The pupil area is split into two parts: one for illumination, and another for imaging. However, this approach reduces the usable aperture for both illumination and imaging and, therefore, compromises the overall imaging efficiency.
Besides artifact suppression, a sufficient signal to noise ratio (SNR) may be needed for diagnostic images. Since the reflectivity of the retina is low, imaging of the retina requires either high illumination power (in the case of a fundus camera) or high sensitivity with an SNR close to the shot noise limit (in the case of an SLO) to generate desirable image quality. Due to safety limits, fundus cameras cannot provide high quality fundus images at video rate. In a traditional SLO, special detector and data acquisition modules are employed to achieve high sensitivity, including but not limited to a confocal pinhole, high gain detectors (e.g., photomultiplier tubes), and avalanche photodiodes (APDs). However, such a design can typically achieve high sensitivity only within a certain range of scattered light power. For a scattered signal that is not within this optimal range, some system performance, for instance imaging speed, is sacrificed. Despite such compromises, shot-noise-limited high sensitivity may still not be possible considering the wide variation in retina backscattering among different people.
Recently, optical coherence tomography (OCT) has become another widely used imaging modality in ophthalmology. By utilizing a broadband light source with a short coherence length, OCT can provide sub-10 μm resolution in depth. With two-dimensional scanning, OCT can also acquire 3D volumetric images. These 3D images can further generate en-face fundus images by projection in the depth direction. However, in the more recent and more sensitive Fourier domain OCT (FD-OCT), including spectral domain OCT (SD-OCT) based on a spectrometer and swept source OCT (SS-OCT) based on a tunable laser, complex ambiguities (also known as mirror image artifacts) reduce the useful depth measurement range by half in many clinical applications. This is due to the Hermitian Fourier transform of the real-valued spectral interferogram. In addition, the sensitivity may also be reduced when Fourier transforms are employed and en-face fundus images are generated from the resulting OCT images. Due to limited scanning speed and/or data acquisition, OCT fundus imaging has yet to achieve video rate. Still further, OCT systems are generally expensive due to the high quality and sophistication of their components, such as high resolution high speed spectrometers, fast tunable lasers with large sweeping ranges, high speed digitizers, and computers for sophisticated processing. As a result, OCT fundus images are generally compromised by eye motion artifacts and insufficient for eye tracking purposes.
To address the limitations in existing fundus imaging modalities described above, the description herein provides an interferometric fundus imaging system and method that can approach or achieve video rate by eliminating the computation intensive Fourier transforms. The resulting image can be more sensitive than conventional OCT fundus images and useful for eye tracking applications.
In a first example, a method of imaging comprises: applying a plurality of different spectrums of light from a swept source light source to an object via a two-dimensional scanner; detecting light of each of the plurality of different spectrums of light that is backscattered by the object, detected light of each applied spectrum of light corresponding to a unique pixel of an en-face image of the object having an M×N pixel array; and generating the en-face image of the object from data corresponding to the detected light, wherein the plurality of different spectrums of light each comprise at least one unique wavelength of light.
According to various embodiments of the first example, the method further comprises: synchronizing the two-dimensional scanner with a duty cycle of the light source such that as an output of the light source changes spectrums, the two-dimensional scanner causes the light from the light source to be applied at a different location of the object; the two-dimensional scanner does not alter a location of light applied to the object while the light source is inactive; an instantaneous linewidth of the swept source light source is smaller than 0.72 nanometers; a wavelength tuning range of the swept source light source is larger than 0.017 nanometers; each pixel of the en-face image is generated by calculating the sum of the squared signal intensities for the detected light of the spectrum of light corresponding to each pixel; the en-face image is generated by normalizing pixels of the M×N pixel array corresponding to each of the at least two different spectrums; the method further comprises: frequency filtering data corresponding to the detected light to selectively retain a portion of the data corresponding to depths of interest of the object; a filtering bandwidth is adjusted based on an estimate of curvature of the object and an evaluation of the en-face image; the light is applied and detected according to an interferometric system, and the method further comprises: adjusting a path length of a reference arm of the interferometric system such that the path length of the reference arm and a path length of a detection arm of the interferometric system are equal at varying depths corresponding to a curvature of the object; the en-face image is a fundus image; the object is an eye ball; the method further comprises aligning and/or tracking an eye ball based on the generated en-face image, wherein the method is performed at least in part by an interferometric system; and/or the method further comprises: digitizing each detected spectrum at at least 15 sample points within 
the spectrum, the en-face image being generated at least in part from the digitized sample points.
In a second example, a method of imaging comprises: detecting spectrums of light that are backscattered by an object at various depths of the object, each detected spectrum of light corresponding to a unique pixel of an en-face image of the object having an M×N pixel array and being output by a swept source light source; filtering data corresponding to the detected spectrums of light by applying a frequency filter corresponding to a depth of interest; selectively retaining the filtered data; and generating the en-face image of the object by performing a statistical calculation on the selectively retained data.
In various embodiments of the above example, the spectrums of light are the same; the spectrums of light comprise at least two different spectrums within the bandwidth of the swept source light source, the at least two different spectrums each comprising at least one unique wavelength of light; the method further comprises: synchronizing the two-dimensional scanner with a duty cycle of the light source such that as an output of the light source changes spectrums, the two-dimensional scanner causes the light from the light source to be applied at a different location of the object; the two-dimensional scanner does not alter a location of light applied to the object while the light source is inactive; an instantaneous linewidth of the swept source light source is greater than 0.72 nanometers; a wavelength tuning range of the swept source light source is less than 0.017 nanometer; each pixel of the en-face image is generated by calculating the sum of the squared signal intensities for the detected light of the spectrum of light corresponding to each pixel; the en-face image is generated by normalizing pixels of the M×N pixel array corresponding to each of the at least two different spectrums; a bandwidth of the frequency filter is adjusted based on an estimate of curvature of the object and an evaluation of the en-face image; the light is applied and detected according to an interferometric system, and the method further comprises: adjusting a path length of a reference arm of the interferometric system such that the path length of the reference arm and a path length of a detection arm of the interferometric system are equal at varying depths corresponding to a curvature of the object; the en-face image is a fundus image; the object is an eye ball; the method further comprises aligning and/or tracking an eye ball based on the generated en-face image, wherein the method is performed at least in part by an interferometric system; the method further comprises: digitizing 
each detected spectrum at at least 15 sample points within the spectrum, the en-face image being generated at least in part from the digitized sample points.
These and other embodiments are described in more detail below.
Certain terminology is used herein for convenience only and is not to be taken as a limitation on the present invention. Relative language used herein is best understood with reference to the drawings, in which like numerals are used to identify like or similar items. Further, in the drawings, certain features may be shown in somewhat schematic form.
The present disclosure describes a new interferometric imaging modality that enables fast generation of a 2D fundus image by accumulation of a filtered interferogram, reducing or eliminating the need for sophisticated computation to generate an A-profile. One particular implementation may use a split-spectrum technique to increase the imaging rate by a factor of K, where K is the number of sub-bands of the split spectrum and is flexibly adjustable according to system specifications and imaging requirements. It should be noted that because the imaging modality described herein utilizes an interferometric setup, it may be implemented alongside other interferometric modalities using a single interferometric setup. While the disclosure contained herein illustrates the present invention with respect to ophthalmic imaging and OCT, it is to be understood that this is not a limiting embodiment. That is, interferometric techniques are also used, for example, in astronomy, spectroscopy, oceanography, and seismology.
Interferometric imaging modalities, such as OCT, rely on the principle of interferometry. A simplified interferometric setup is shown in
When backscattered light from a sample arm is combined with back-reflected light from a reference arm, the intensity of the detected light is determined according to equation (1):
I=Ir+Is+2√{square root over (IrIs)} cos(Δz·k+ϕ) (1)
where Is is the intensity of the backscattered light from the sample arm and Ir is the intensity of the back-reflected light from the reference arm. Usually Is is very weak and contributes negligible power, while Ir can be removed as a constant background signal from the reference arm. Therefore, the interference signal, i.e., the interferogram, is determined according to equation (2):
I=2√{square root over (IrIs)} cos(Δz·k+ϕ) (2)
When using a tunable light source, equation (2) can be rewritten according to equation (3):
I=A cos(Δz·R·t+ϕ′) (3)
where ϕ′=Δz·ks+ϕ, Δz is the difference in the respective path lengths of the sample and reference arms, and A is the interferogram amplitude A=√{square root over (IrIs)}. It should be noted that, unlike frequency domain OCT applications, the system and method described herein do not require a Fourier transform or other method to convert the signal to the frequency domain and do not rely on a short coherence gate.
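As an illustrative sketch (not part of the described system), equation (3) can be evaluated numerically; the tuning speed R, path difference Δz, and sweep duration below are arbitrary assumed values:

```python
import numpy as np

# Illustrative, assumed values -- not system specifications
R = 2 * np.pi * 1e9   # tuning speed of the wavenumber k, rad/(m*s)
delta_z = 1e-3        # path-length difference between sample and reference arms (m)
A = 1.0               # interferogram amplitude, sqrt(Ir*Is)
phi = 0.0             # phase offset phi'

t = np.linspace(0.0, 1e-5, 1000)                    # time within one sweep (s)
interferogram = A * np.cos(delta_z * R * t + phi)   # equation (3)

# The beat frequency of the interferogram grows linearly with delta_z:
freq_hz = delta_z * R / (2 * np.pi)
print(freq_hz)  # 1e6: a 1 mm path difference beats at 1 MHz with these values
```

The last line reflects the property used throughout this disclosure: the interferogram frequency scales linearly with the path-length difference Δz, which is what makes depth-selective frequency filtering possible.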
Typically a tunable laser, a laser which emits light over a particular bandwidth (its tuning range), is characterized by a wavelength λ, while a wavenumber k is widely used in interference descriptions. As wavelength and wavenumber are directly related through k=2π/λ, both k and λ are used in this disclosure and are interchangeable according to the above relationship. The relationship between the wavenumber (k) and the time (t) for a tunable laser is shown in
k=ks+R·t (4)
where R is the tuning speed of k and is constant in this example. It is noted that in the case of a typical tunable laser, R may not be constant, which results in nonlinear tuning of k in time t. Such nonlinearity broadens the bandwidth of the interferogram, which can be readily accommodated by adjusting the signal processing accordingly.
In addition, as illustrated in
According to equation (3), the frequency of the interferogram increases as Δz, the difference in the respective path lengths of the reference and sample arms, increases. Therefore, it is possible to selectively keep the signal from a region of interest (according to depth) with standard signal processing techniques (e.g., frequency filters) and remove the others which may appear as imaging artifacts. An example of such a technique is illustrated in
It is noted that the Δz corresponding to the retina and cornea can be reversed because the depth of Δz=0 is determined based on the path length of the reference arm. That is, the optical path length such that Δz=0 can be set to be the length to the cornea. In such a case, the cornea generates an interferogram with a lower frequency, which can be removed by a high pass filter. However, because a system with bandwidth at low frequency is often much easier to implement, it may be preferable to set Δz for the cornea to be larger than Δz of the retina (the length of the reference arm equals the length of the detection arm to the retina).
It is also noted that because of the above relationships, the filter's frequency response can be flexibly specified to select signals from a specific depth. For instance, as shown in
When the field of view of the fundus image increases, a dynamic configuration may be desirable to accommodate the curvature of the eye ball. According to a first embodiment, the filter bandwidth is adjusted based on a set of parameters determined by a standard model eye to estimate the curvature of the eye ball. In such an implementation, it is possible to further fine tune the filter bandwidth dynamically by utilizing a feedback loop based on evaluation of the resulting fundus image. According to a second embodiment, the path length of the reference arm is dynamically adjusted (instead of filter bandwidth) so that the signal from the region of interest (ROI) will always fall within the filter bandwidth. As such, the portion of the eye in which the path lengths are equal changes to accommodate curvature of the eye.
In the context of signal processing, and particularly time-frequency analysis, the uncertainty principle imposes the following condition on any real waveform: Δf·Δt≥1, where Δf is a measure of the bandwidth (Hz), and Δt is a measure of time duration (second). In the case of the present disclosure, the corresponding variables are Δz and Δk, respectively. Thus, Δk is the tuning range in wavenumber according to equation (5):
Δk=|2π/λs−2π/λe|≈2πΔλ/λ0² (5)
where λs and λe are the starting and ending wavelengths of the tunable laser respectively, λ0 is the center wavelength, and Δλ=|λs−λe| is the wavelength tuning range. For fundus imaging of the eye, the retina and the cornea are about 32.4 mm apart in optical path length (24 mm physically with a refractive index of ˜1.35) according to the average axial length of the human eye. Therefore, to separate the retina signal from the cornea reflection, Δk is determined according to: Δk≥π/Δz=π/(32.4 mm). For instance, for a near infrared light source with a center wavelength of ˜1050 nm, the wavelength tuning range is found to be Δλ≥0.017 nm to separate cornea glare from the retina signal. For visible light centered at 500 nm, this range is reduced to 0.004 nm. Compared to typical tuning ranges of OCT systems (on the order of tens of nanometers), the above tuning range makes it possible to employ faster and less expensive tunable light sources by reducing the burden of providing a large tuning range. In other words, by having a tuning range less than that of typical OCT systems, cheaper and faster light sources can be used. Further, it is possible to effectively increase imaging speed with a typical OCT tunable laser by splitting its spectrum into small portions that still satisfy the above ranges.
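The numerical examples in this paragraph follow directly from combining Δk≥π/Δz with Δk≈2πΔλ/λ0²; a short sketch using the values above is:

```python
def min_tuning_range_nm(center_wavelength_nm, separation_mm):
    """Minimum wavelength tuning range (nm) needed to separate two
    reflectors whose optical path lengths differ by `separation_mm`.
    Combines delta-k >= pi/delta-z with delta-k ~= 2*pi*delta-lambda/lambda0^2,
    giving delta-lambda >= lambda0^2 / (2 * delta-z)."""
    lam0_m = center_wavelength_nm * 1e-9
    dz_m = separation_mm * 1e-3
    dlam_m = lam0_m ** 2 / (2 * dz_m)
    return dlam_m * 1e9

# Retina-to-cornea optical distance ~32.4 mm (24 mm * n ~ 1.35)
print(round(min_tuning_range_nm(1050, 32.4), 3))  # 0.017 nm
print(round(min_tuning_range_nm(500, 32.4), 3))   # 0.004 nm
```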
In a system with a tunable laser as the light source, the amplitude of the interference signal (and therefore the SNR) is further affected by two more factors: (1) coherence length of the tunable laser, and (2) the electrical bandwidth of the system. To avoid or minimize the retinal signal loss, the coherence length of the tunable laser should be large enough to accommodate the retina structure and its curvature. The governing formula is derived from the formula of the coherence length as shown in equation (6):
Zr≤(2 ln 2/π)·λ0²/δλ (6)
where Zr is the depth range in free space that contains the desirable retina structure, and δλ is the instantaneous linewidth of the tunable laser. For instance, with a light source having a center wavelength of ˜1050 nm, when the retina is placed close to the path length match position (Δz=0) and Zr is estimated to be ˜1.35 mm (1 mm physically with a refractive index of ˜1.35), the instantaneous linewidth is found to be δλ<0.36 nm to avoid retina signal loss.
In the present invention, because depth-resolvable cross-sectional tomography is not required for a fundus image, the path length match position can essentially be set inside the retinal structure, thereby avoiding the mirror image problem. As a result, the required instantaneous linewidth is relaxed by a factor of two. In the example above, the requirement becomes δλ<0.72 nm to avoid retina signal loss. Compared to a typical corresponding instantaneous linewidth of much less than 0.1 nm in OCT systems, this instantaneous linewidth is far more tolerant and thus makes it possible to employ faster and less expensive tunable light sources that are generally not useful for OCT systems.
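A numerical check of these linewidth limits can be sketched as follows, assuming the common Gaussian coherence-length relation Lc≈(2 ln 2/π)·λ0²/δλ (the exact constant depends on the source's line shape, so this is an assumption rather than a prescription):

```python
import math

def max_linewidth_nm(center_wavelength_nm, depth_range_mm, matched_inside=False):
    """Maximum instantaneous linewidth (nm) such that the coherence length
    covers the depth range Zr, assuming the Gaussian relation
    Lc = (2*ln 2 / pi) * lambda0^2 / delta-lambda."""
    lam0_m = center_wavelength_nm * 1e-9
    zr_m = depth_range_mm * 1e-3
    dlam_m = (2 * math.log(2) / math.pi) * lam0_m ** 2 / zr_m
    if matched_inside:      # path length match position set inside the retina
        dlam_m *= 2         # requirement relaxed by a factor of two
    return dlam_m * 1e9

print(round(max_linewidth_nm(1050, 1.35), 2))                      # 0.36 nm
print(round(max_linewidth_nm(1050, 1.35, matched_inside=True), 2)) # 0.72 nm
```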
To acquire the interferogram, the optical signal is converted to an electrical signal by a detector and/or a data acquisition device in the interferometric setup according to equation (3). The detector and data acquisition response frequency should be sufficient for the resulting retinal signal in order to preserve the interferogram from the retina. In particular, the detector and data acquisition device should have a frequency bandwidth according to: fBW>Zr·R/2π, where Zr is the depth range in free space that contains the desirable retina structure. When the path length match position is set inside the retinal structure, the cutoff frequency can be further reduced by half.
After the artifacts are removed, a fundus image can be rendered with further signal processing, in either analog or digitized format. Digitization of the signal leverages the wealth of sophisticated digital signal processing techniques and therefore is further explored in detail. For instance, each pixel of the fundus image can be calculated as a statistical result, e.g., the sum of squares of the signal within the sweeping range at the location(s) corresponding to the pixel. For example, according to one embodiment, at least 15 sample points within the bandwidth for each pixel on the fundus image are digitized and used to generate the fundus image. The insights gained from digitized signal processing, however, can apply equally to implementations based on analog signals.
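A minimal sketch of this per-pixel statistic, assuming digitized samples of the filtered interferogram, is:

```python
import numpy as np

def pixel_value(samples):
    """Pixel intensity as the sum of squared interferogram samples acquired
    within the pixel's sweep; at least 15 samples are suggested to keep the
    statistical error small."""
    samples = np.asarray(samples, dtype=float)
    if samples.size < 15:
        raise ValueError("need >= 15 sample points per pixel")
    return float(np.sum(samples ** 2))

# 16 samples spanning one full cycle of a unit-amplitude interferogram
t = np.arange(16)
print(pixel_value(np.cos(2 * np.pi * t / 16)))  # 8.0 (sum of cos^2 over a period)
```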
The above description discusses the use of a tunable laser where the interferogram is frequency modulated in the time domain. However, it is noted that a broadband light source can also be used in the invention. In such a case, the interferogram is frequency modulated in the spectral domain. As a result, for broadband light source implementations, signal processing, such as noise filtering, should be done in the spectral domain instead of the time domain.
One example of the implementation of the modality of the present disclosure is shown in
To generate the fundus image of an eye, a 2D scanning scheme is usually implemented in the sample arm 404 for fast flying spot scanning over the fundus. For example, to generate a fundus image with M×N pixels, the x-direction scan of the 2D scanner 406 will run M steps for each horizontal line, and the y-direction scan of the 2D scanner 406 will run one step forward after each x-line scan for N steps, as illustrated in
In an ideal case, the 2D scanner 406 should stop at each scanning spot until the signal of this spot is acquired, then jump to the next spot, as illustrated in
Due to the hysteretic nature of tunable lasers, the lasing performance differs depending on the sweeping direction. Additionally, tunable lasers are typically optimized for sweeping from short to long wavelengths. As a result, at the end of the tuning period, the laser requires a return time during which the laser output is usually suppressed. This inactive period, together with the linearity requirement, reduces the duty cycle of the tunable laser to less than 100%, typically ˜50%.
As the duty cycle of a tunable laser is much lower than 100%, continuous transverse scanning may not be the optimal scanning protocol since some dummy pixels (pixels having little to no intensity/background noise only) are acquired during the inactive period of the laser. To address the problem, the scanning controller 408 can be used to control the 2D scanner 406 at the sample arm to either exclude the dummy pixels or minimize the effect of dummy pixels.
One approach to address this problem is to set up the scanning controller to generate new sweep triggers according to each sub-band (also applicable for entire tuning range without splitting), which synchronize the 2D scanner 406 with the swept laser source 400 and the data acquisition at detector 410. In doing so, the starting and ending wavenumbers/wavelengths of each sub-band are consistent during the whole scanning process and the 2D scanner 406 steps for each sub-band and stops when the laser 400 is inactive. As a result, each scanning spot corresponds to one sub-band.
The timing diagram of such a scanning controller that addresses the problem of a limited duty cycle is shown in
For high speed imaging, the “stop-and-start” scan control requirement previously described may be practically difficult due to the inertia and hysteresis of mechanical scanners. However, it is also possible to interleave the active scan (scan during the time when laser is active) and the dummy scan (scan during the time when laser is inactive) so that the dummy pixels (pixels acquired by dummy scan) always have active pixels (pixels acquired by active scan) next to them. As a result, each dummy pixel can be interpolated or otherwise determined based in part on its neighboring active pixels.
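One possible realization of this interpolation is sketched below; the neighbor rule (averaging horizontally adjacent active pixels) is an assumed choice, and any weighting of neighboring active pixels could be substituted:

```python
import numpy as np

def fill_dummy_pixels(image, dummy_mask):
    """Replace dummy pixels (acquired while the laser was inactive) with the
    average of their horizontally adjacent active pixels.  Assumes active and
    dummy pixels are interleaved so every dummy pixel has at least one active
    neighbor in its row."""
    out = image.astype(float).copy()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            if dummy_mask[r, c]:
                neighbors = []
                if c > 0 and not dummy_mask[r, c - 1]:
                    neighbors.append(out[r, c - 1])
                if c < cols - 1 and not dummy_mask[r, c + 1]:
                    neighbors.append(out[r, c + 1])
                out[r, c] = np.mean(neighbors)
    return out
```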
This technique is illustrated in
First, as shown in
where m is a positive integer, and T is the laser's duty cycle. As such, if the laser status at the start of scan line n is active, after
cycles when the scan line n+1 starts, the laser status becomes inactive. Or if the laser status at the start of scan line n is inactive, after
cycles when the scan line n+1 starts, the laser status becomes active. Therefore, the laser status at the start of each scan line alternates between active and inactive, resulting in interleaved active and dummy pixels.
Second, as shown in
At the same time, instead of treating the less-than-100% duty cycle as a problem, it may be utilized to reduce the potential phase washout effect while the light beam is scanning across a large range. Theoretically, a smaller duty cycle is less susceptible to phase washout, although it requires a higher detection bandwidth to accommodate it.
As imaging speed is increased, artifacts caused by eye motion can be decreased. The disclosure herein describes a split spectrum technique to increase the imaging speed by a factor of K, which is a flexible number that can be adjusted according to system specifications and imaging requirements. In order to increase the scanning speed and generation speed of the fundus image, one spectrum of the tunable source 400 is split into K sub-bands, as is illustrated in
As discussed above, a small wavelength tuning range (e.g., 0.017 nm for a 1050 nm light source) is sufficient to differentiate retinal signals from major noise sources such as the cornea reflection. For modern tunable lasers such as swept source lasers with >50 nm tuning ranges, it is thus possible to split the spectrum into hundreds of sub-bands that are each still capable of filtering out reflection noise from the cornea. In practical applications, the number of sub-bands can be flexibly determined depending on several factors including but not limited to: (1) the repetition rate of the tunable laser, where a lower repetition rate can be mitigated by increasing the number of sub-bands; (2) the total tuning range of the light source, where a larger tuning range allows a larger number of sub-bands; and (3) the required imaging speed, where higher imaging speeds can be obtained using a larger number of sub-bands.
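As a rough sketch, an upper bound on K can be estimated by dividing the total tuning range by the minimum sub-band range derived above; practical constraints such as the repetition rate and the suggested ≥15 sample points per sub-band are ignored here:

```python
def max_sub_bands(total_tuning_range_nm, min_sub_band_nm=0.017):
    """Upper bound on the number K of sub-bands, each still wide enough to
    separate retinal signal from corneal reflection (illustrative only;
    practical K also depends on repetition rate and required imaging speed)."""
    return int(total_tuning_range_nm // min_sub_band_nm)

print(max_sub_bands(50))  # 2941: a 50 nm sweep admits thousands of 0.017 nm sub-bands
```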
It is noted that as the pixel value is statistically calculated, a certain number of data points within each sub-band are beneficial to minimize statistical errors. Computer-based simulation and empirical analysis suggests that this number of data points be greater than or equal to 15.
When the whole spectrum is split into K sub-bands, each pixel along the x-direction scan of the 2D scanner 406 corresponds to a sub-band of the full spectrum of the tunable source 400. The value of the pixel can then be calculated by processing a signal of the corresponding sub-band acquired by detector 410. The processing may occur at the detector 410 or by a separate image processor 414. An acquired signal broken down by sub-bands is shown in
When using a split spectrum, the light intensity varies for each sub-band. In addition, the actual bandwidth and the number of sampled signal points of each sub-band could also vary for different sub-bands. To generate a better representative image of the fundus, these differences between different sub-bands can be compensated for. This can be addressed by pixel value calibration. One example of a calibration process that may be used after the initial calculation of all the pixel values of the fundus image is described hereinafter. First, pixel values are calculated from the signals within each sub-band. Next, pixels are grouped into K groups according to their sub-bands, and the pixel values for each group are averaged. Finally, pixels are calibrated by dividing the value of each pixel by the corresponding average value for its sub-band.
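The calibration steps above can be sketched as follows; the data layout (a per-pixel map of sub-band indices alongside the image array) is an assumed representation:

```python
import numpy as np

def calibrate_pixels(image, sub_band_index):
    """Normalize pixel values so that differences in sub-band intensity,
    bandwidth, and sample count do not imprint a pattern on the fundus image.
    `sub_band_index[r, c]` gives the sub-band (0..K-1) used for pixel (r, c)."""
    out = image.astype(float).copy()
    for k in np.unique(sub_band_index):
        group = sub_band_index == k
        mean_k = out[group].mean()   # average pixel value for sub-band k
        out[group] /= mean_k         # divide each pixel by its group average
    return out
```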
With respect to the above descriptions, it is therefore possible to image an object according to the following method, as illustrated in
The en-face images generated according to the above method may be used for the same purposes as en-face images generated by other methods and modalities. According to one example, the en-face images may be used to identify the location of a structure, such as an eye ball, or structures within the eye. If the method is performed iteratively, or a plurality of en-face images are otherwise generated, movement of the identified structure may be tracked by comparing differences between the en-face images. In this manner, the above method can be used, for example, to align and/or track an eye ball.
It should be evident that this disclosure is by way of example and that various changes may be made by adding, modifying or eliminating details without departing from the fair scope of the teaching contained in this disclosure. The invention is therefore not limited to particular details of this disclosure except to the extent that the following claims are necessarily so limited.