This invention relates generally to optical coherence tomography methods and devices used to measure the characteristics of human tissue and other samples.
Optical coherence tomography (OCT) has emerged as an important imaging modality in clinical settings, especially in ophthalmology and dermatology. Acquiring deep-tissue images at cellular resolution is highly desirable for both biological research and clinical diagnosis. However, tissue heterogeneity can introduce optical aberrations, which degrade the lateral point spread function (PSF), i.e., the lateral resolution. For example, imperfect ocular optics commonly degrades retinal imaging, just as the skull degrades brain imaging. Adaptive optics OCT (AO-OCT) addresses this challenge by reshaping the wavefront of the illumination beam to focus it to a diffraction-limited PSF in a targeted region. Wavefront sensor-based AO-OCT (WAO-OCT) optimizes the PSF based on the metric from a wavefront sensor. Despite the success of WAO-OCT demonstrated in research labs, its translation into clinics has been hampered by complexity, cost, and size.
As an alternative, simpler sensorless AO-OCT (SAO-OCT) eliminates the wavefront sensor by using image metrics (e.g., intensity or sharpness) to optimize the PSF. However, since image metrics are only indirectly related to the PSF, SAO-OCT cannot guarantee globally optimal results. In addition, because a strong and stable backscattered signal is required during SAO-OCT optimization iterations, the image metric acquired through 2D enface scanning is susceptible to motion. The slow iteration has hindered the clinical adoption of SAO-OCT.
Recently, artificial neural networks (ANNs) have been explored to derive the aberrated wavefront using the PSF generated from a point source. Trained ANNs can optimize the wavefront instantly, eliminating time-consuming iteration. To accomplish this, the ANNs must be trained with a universal metric, like the PSF, or its counterpart in the frequency domain, the modulation transfer function (MTF). If an image metric is used, the ANN has to be retrained whenever the imaged object or the system optics change. Retraining is not acceptable during clinical diagnosis as it is costly and time-consuming.
It appears that the ideal metric for SAO-OCT should be either PSF or MTF. However, accessing the PSF or MTF in a scattering medium without a guide star is challenging. To the applicant's knowledge, no solution has been discovered.
The contrast in OCT images originates from backscattered photons, resulting from the refractive index variation in tissue. New contrasts, such as tissue property-related optical attenuation coefficient (OAC), have been intensively studied to improve the diagnostic capability of OCT. For instance, a decrease of the OAC in the retinal nerve fiber layer has been linked to glaucoma severity. In dermatology, OAC has been tested to monitor the healing process of burn wounds.
OAC is derived from the original scattering contrast and depends on the backscattered photons detected by OCT. The backscattered photons consist of least scattered photons (LSPs) and multiple scattered photons (MSPs). LSPs “remember” the spatial locations from which they were backscattered, inasmuch as those locations can be surmised from the measured data. MSPs “lose” this memory, inasmuch as the locations generally cannot be surmised from the measured data. Although OCT uses a low-coherence gate to capture LSPs and reject MSPs, certain MSPs can still enter the coherence gate and skew the quantification of OAC.
Currently, the extraction of tissue-related OAC is largely based on the single-scattering model, which has two major limitations. First, the single-scattering model ignores MSPs, which is problematic when quantifying the OAC of highly scattering media, such as skin, or in deep tissue, where MSPs are dominant. Second, multiparameter nonlinear fitting introduces significant variation. A minimum of three interdependent parameters, the focal depth (zf), the Rayleigh range (zR), and the OAC (μs), are required to fit a nonlinear equation. Prior knowledge about zf and zR is required to minimize uncertainty during the fitting process, but it can only be obtained by carefully controlling the imaging process. Despite these efforts, significant variation in OAC can still be observed, and the underlying mechanism of this variation is not yet well understood. Translating OAC measurement into clinical use also faces challenges due to patients' differing ocular optics and motion artifacts during in vivo imaging.
In conventional OCT, the illumination and detection beams share the same optical path, while in BO-OCT, as shown in
The setup illustrated in prior art systems shown in
Disclosed herein is an interferometer using methods to capture backscattered photons from at least one position that has a small offset from the illuminated area. Detection of photons from an offset position is known, and the technology has been used in other optical detection schemes, such as Raman scattering and fluorescence detection. However, the disclosed approach of detecting photons at offset positions in parallel, and of processing the resulting data, is different, significantly affecting how the technology can be realized and used in clinics. Furthermore, the devices and methods disclosed are different from the prior art as noted herein.
It is demonstrated herein that the offset OCT images can be used to reconstruct a Backscattered Photon Profile (BSPP). A BSPP permits visualization of the beam profile in the scattering medium, and an example of this is shown in
Because the skirt of the BSPP represents MSPs, the skirt can be ignored and one can use only the LSPs. Thus, two things may be accomplished with BO-OCT. One can (a) obtain the PSF and MTF, and (b) separate the MSPs from the LSPs. But the problem still remains that it takes too long to gather data, such as 100 images, needed to have a reliable BO-OCT reading. The mechanical translation for offsetting the detection beam in prior art OCT systems can limit imaging speed, significantly affecting clinical use. In addition, because the prior art acquired offset images sequentially, the phase stability was lost. Patients cannot hold their eyes still for even a few seconds, but for a reliable reading the data must be gathered in real-time, such as at about 30 images per second.
The present invention relates to a Field Scanning OCT (FSOCT) system, which overcomes the bottleneck of imaging speed. The FSOCT system accomplishes this through simultaneous (parallel) detection and thus provides both speed and phase stability during imaging. This thereby provides a solution to the limitations of previous methods and devices.
The herein-disclosed FSOCT methods and devices overcome these problems by simultaneously detecting backscattered photons in parallel from multiple orientations and/or locations, without relying on mechanical motion to capture them at offset positions. For example, with the invention, 100 OCT images may be obtained at the same time over a 100-micron span. This significantly improves the performance of OCT imaging by enabling faster and more stable imaging for clinical use.
The equipment disclosed herein can simultaneously detect offset photons, that is, photons backscattered from positions parallel to the illuminated spot. Methods for different innovative applications of the disclosed technology are also described herein.
In describing the preferred embodiment of the invention which is illustrated in the drawings, specific terminology will be resorted to for the sake of clarity. However, it is not intended that the invention be limited to the specific terms so selected, and it is to be understood that each specific term includes all technical equivalents which operate in a similar manner to accomplish a similar purpose. For example, the word "connected" or terms similar thereto are often used. They are not limited to direct connection, but include connection through other elements where such connection is recognized as being equivalent by those skilled in the art.
Patent application Ser. No. 63/330,999, which is the above-claimed priority application, is incorporated in this application by reference.
Looking first to an embodiment shown in
The detector array 14 detects spectral interference fringes. The light backscattered from the sample 2 (the solid line) is focused on a slit 12 after a focusing lens 7. The slit 12 serves as a spatial filter to gather the signal from only one orientation, such as the solid line 19, and filter the signal from other orientations of the sample 2. The signal from different offset positions on the sample 2 is first dispersed by a dispersive component (e.g., grating 13), then focused on different rows of the detector array 14 so data can be collected at the detector array 14 and stored in data storage or further processed by a computer 15. After the slit 12, the light is collimated again by a lens 8 and then dispersed by a grating 13 and focused onto the detector array 14 by a lens 9. In order to obtain further enhanced precision, one can add a second spatial filter, such as the slit 17, along with duplicates of the components that follow slit 12 (all such duplicate components are shown in
The dashed line following the light path reflected by mirrors 4 and 5 is the reference light path, which illuminates the detector array and interferes with the photons backscattered from the sample. One can further reshape the reference light into a line using optical components 16, such as a cylinder lens or a Powell lens; other phase modulators can also be used to reshape the reference light and generate a uniform reference light field on the photodetector array(s) 14 and those in reference numeral 18.
The spectral interference fringes will be acquired by the detector array 14 for data processing, which may be by a computer, or storage in a computer's local drive. Only the light paths of two positions, the illuminated spot and a single beam offset, are shown in
In addition to configuring the setup with free-space components, fiber components can also be used as an alternative for the same purposes. It is also possible to detect the backscattered photons from more than two orientations by further splitting the backscattered light with multiple beam splitters, dispersive components, and lenses after the beam splitter 11. The configuration is the same for adapting slits with different orientations.
The data processing flow after acquiring the images with FSOCT or FSOCM is described below. In either spectrometer-based OCT or swept light source-based OCT, the processing of an A-scan of OCT images follows a standard process typically including DC removal, resampling, dispersion compensation, Fourier transform, and image reconstruction. Different and specific data analyses based on FSOCT are described in detail, and it is demonstrated that the offset OCT images can be used to reconstruct a BSPP. The BSPP allows one to visualize the beam profile in the scattering medium. With the BSPP, one can track the focus during imaging, quantify LSPs and MSPs, use the point spread function (PSF) or modulation transfer function (MTF) as feedback for adaptive optical imaging, and perform highly sensitive phase measurement, as described below.
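The standard A-scan processing chain just described can be sketched as follows. The synthetic fringe, the array size, and the linear-in-wavenumber assumption are illustrative choices for this sketch and are not taken from the disclosure.

```python
import numpy as np

def reconstruct_ascan(fringe, k_uniform_idx=None, dispersion_phase=None):
    """Sketch of standard spectral-domain OCT A-scan processing:
    DC removal -> optional resampling to uniform wavenumber ->
    optional dispersion compensation -> Fourier transform -> magnitude."""
    spec = fringe - fringe.mean()                    # DC removal
    if k_uniform_idx is not None:                    # resample to uniform k
        spec = np.interp(k_uniform_idx, np.arange(spec.size), spec)
    spec = spec.astype(complex)
    if dispersion_phase is not None:                 # apply e^{-i*phi(k)}
        spec *= np.exp(-1j * dispersion_phase)
    return np.abs(np.fft.fft(spec))[: spec.size // 2]  # positive depths only

# Synthetic fringe: one reflector producing 40 fringe cycles across 512 samples
k = np.arange(512)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 40 * k / 512)
ascan = reconstruct_ascan(fringe)
print(int(np.argmax(ascan)))  # the A-scan peaks at the reflector depth bin
```

In FSOCT the same chain is applied to every row of the detector array, producing one offset A-scan per row.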
As shown in
Det(Δr, z) ∝ ∫ Ei(r − Δr, z) Ei(r, z) R(r, z) d²r  (1)
The light intensity distribution, Id(Δr, z), measured by BO-OCT can be written as

Id(Δr, z) ∝ |∫ Ei(r − Δr, z) Ei(r, z) R(r, z) d²r|²  (2)
Multiple A-scans can be averaged by scanning the illumination beam in a small range to reduce speckles. Assuming R({right arrow over (r)},z) is independent and random at different locations, the averaged intensity can be simplified as
I(Δr, z) ∝ ∫ Hi(r − Δr, z) Hi(r, z) d²r = Hi(r, z) ⋆ Hi(r, z) = Hi(r, z) ∗ Hi(−r, z)  (3)
Here, Hi(r, z) = |Ei(r, z)|², which is called the intensity point spread function, or simply PSF, a real function. The symbol ⋆ is used to represent correlation and the symbol ∗ is used to represent convolution. Taking the Fourier transform of both sides of Eq. (3) with respect to Δr gives
F[I(Δr, z)] ∝ F[Hi(Δr, z)] F[Hi(−Δr, z)] = |M(fr, z)|²  (4)
Here, M(fr, z) represents the MTF. Therefore, the averaged BO-OCT intensity signal is the PSF autocorrelation, or equivalently the inverse Fourier transform of the squared MTF, at each depth. Thus, with BO-OCT, one can reconstruct the depth-resolved PSF autocorrelation or MTF, two quantities that are widely used for evaluating imaging system performance but have never before been obtained in depth-resolved form in a scattering medium.
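The relations in Eqs. (3) and (4) can be checked numerically: for a Gaussian intensity PSF, the averaged offset profile equals the PSF autocorrelation, and its Fourier transform equals the squared MTF magnitude. The grid and waist below are illustrative values chosen for this sketch.

```python
import numpy as np

# Gaussian intensity PSF on a fine lateral grid (waist and grid are illustrative)
x = np.linspace(-50, 50, 1001)
w = 5.0
psf = np.exp(-2 * x**2 / w**2)

# Eq. (3): the averaged offset profile is the PSF autocorrelation
bspp = np.correlate(psf, psf, mode="same")

# Eq. (4): its Fourier transform is the squared MTF magnitude
mtf = np.abs(np.fft.fft(psf))
ft_bspp = np.abs(np.fft.fft(np.fft.ifftshift(bspp)))

# Compare both sides at low frequencies after normalization
lhs = ft_bspp[:20] / ft_bspp[0]
rhs = (mtf[:20] ** 2) / (mtf[0] ** 2)
print(np.allclose(lhs, rhs, atol=1e-6))
```

The comparison is restricted to low frequencies, where the MTF is well above the numerical noise floor.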
The illumination beam and the detection beam could have different apertures or wavefronts.
Note that I is the cross-correlation between the PSFs of the illumination beam and the detection beam. As the two PSFs are different, I is not a symmetric function. Asymmetric aberrations, such as coma, can be detected in this way.
Considering a Gaussian-approximated illumination beam and only LSPs, the normalized detected intensity at each depth with BO-OCT can be written as Eq. (5), where zR is the Rayleigh range of the Gaussian beam, d is the distance from the surface of the imaged medium to the beam focus, and w0 is the beam waist at the focus. From Eq. (5), the reconstructed LSP profile in the scattering medium is the illumination beam field, showing the beam waist at different depths, the focal position, and the Rayleigh range. Therefore, even if the wavelength and the optics used to focus the illumination beam are not known, FSOCT provides information about how the light beam is distributed in the scattering medium based on the reconstructed LSP profile.
In FSOCT, the depth-resolved PSF autocorrelation function can be accessed, and then this function can be used to obtain the true PSF. To control the PSFs of the illumination and detection beams, components 20 and 21 in
I(Δr, z) = Hi(r, z) ⋆ Hd(r, z)  (6)
where I is the cross-correlation between the PSFs of the illumination beam and the detection beam. In an optical system, the PSF of a beam with a large diameter suffers significant distortion due to the aberration of the optical system, while the distortion is small with a beam having a small diameter.
The following steps, which are illustrated in
As shown in
In addition, a second detection array 113 can detect a similar pattern with an opposite phase, forming balanced detection and reducing image noise after subtracting the signal acquired with arrays 108 and 113. Both detection arrays 108 and 113 are similar to CCDs and may have M rows and N columns of pixels. However, having a larger number of pixels increases the amount of data that needs to be processed by subsequent units such as a computer. To address this issue, a detector array can be designed in various configurations to reduce the data processing load. For example, for the first pattern in the box 116 of
There are different variations of this setup. Reference numerals 114 and 115 in
Optical Coherence Microscopy (OCM) is a variation of OCT that captures an enface view image at a specific depth in a sample. Similarly, FSOCM is an improvement of OCM, just as FSOCT is an improvement of OCT. In FSOCM, an example of which is shown in
There are different ways of generating such a phase change, and they can all be adapted to FSOCM. In one example, the light beam illuminating the sample 205 can be shifted from the pivot point of the scanner 203. During scanning, a phase modulation will be introduced. The photodetector arrays 209 and 210 can capture such phase modulation, and the signal will be demodulated during data processing. Components 214 and 215 can be used to create different wavefronts or apertures between the illumination beam and the detection beam. Another way to generate phase modulation is to introduce a configuration in the reference arm that is synchronized with the scanning mirror. For example, a scanner can replace the mirror 206, with the beam offset from the pivot point. This scanner must be synchronized with the scanner 203. Alternatively, a phase modulator 216 or 208 can be introduced in the sample or reference arm and synchronized with the scanning mirror.
The backscattered photons from the illuminated and surrounding spots are simultaneously focused on the detector arrays 209 and 210 through the lens 213. The data from the detector arrays 209 and 210 can be captured, stored and/or processed. As the phase modulation is introduced during scanning, interference modulation due to the phase modulation is captured in the form of amplitude modulation (AM). The AM signal from each pixel can be demodulated using Fourier transform or the principle of a locked-in amplifier or filters to recover the interference signal and reconstruct the FSOCM image at a specific depth.
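The per-pixel demodulation just described can be sketched in the lock-in style. The carrier frequency, sample rate, and signal model below are assumptions made for this sketch, not values from the disclosure.

```python
import numpy as np

# One detector pixel records a DC term plus an AM fringe at the known
# phase-modulation frequency; mixing with quadrature references and
# averaging (a software lock-in) recovers the interference amplitude.
fs = 10_000.0                                  # samples per second along the scan
fc = 1_000.0                                   # phase-modulation carrier (Hz)
t = np.arange(0, 0.05, 1 / fs)                 # exactly 50 carrier periods
amp = 0.8                                      # interference amplitude to recover
pixel = 2.0 + amp * np.cos(2 * np.pi * fc * t + 0.3)

i_mix = pixel * np.cos(2 * np.pi * fc * t)     # in-phase mixing
q_mix = pixel * np.sin(2 * np.pi * fc * t)     # quadrature mixing
recovered = 2 * np.hypot(i_mix.mean(), q_mix.mean())
print(round(recovered, 6))                     # prints 0.8
```

Averaging over an integer number of carrier periods acts as the low-pass filter of a hardware lock-in amplifier; an FFT bandpass around fc would serve the same purpose.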
Reference for the modulation method described in (a): https://doi.org/10.1117/1.3155523.
Reference for OCM: https://link.springer.com/protocol/10.1007/978-1-4939-6810-7_12.
Another embodiment is referred to as a fiber bundle-based OCT or OCM (FSOCT or FSOCM, respectively). The embodiment of
The invention uses a fiber bundle which provides flexibility for imaging in a cavity.
The output of the fiber coupler 505 is connected to one of the fiber cores 512 in the fiber bundle 508. The light from the fiber core 512 is focused on a sample 511 through a lens 509 and a scanning mirror 510. The backscattered photons from the sample 511 can be collected from the illuminated spot through the fiber core 512 shown as the solid line or from an offset position through another fiber core 513. The backscattered light out of the fiber core 512 interferes with the light out of the fiber coupler 506 in the fiber coupler 505. Similarly, the backscattered photons from the offset position collected by the fiber core 513 will be delivered to the fiber coupler 507 and interfere with the light from another output of the fiber coupler 506. The interference will be detected by the components 501 and 502, which may be a spectrometer (if the light source 503 is a broadband light source) or a photodetector (if the light source 503 is a swept light source). The light out of the fiber coupler 506 serves as reference arms for the interferometers built on fiber couplers 505 and 507. The optical path length of the reference arms must match the length of the sample arm constructed by the fiber core 512 or 513 and lens 509 and scanning mirror 510.
In the fiber bundle 508, multiple fiber cores can be used to collect light from different offset positions, as shown in the illustration of the cross section of the fiber bundle 508 having multiple cores 514. The invention only requires a lens 509 after the fiber bundle 508 for either collimation or focusing; the light from each core need not be individually collimated. To scan a cavity, the mirror 510 can be rotated to form circumferential scanning, or the fiber bundle can be vibrated without the mirror 510 to form forward scanning.
As an experiment, a solid phantom was constructed by mixing 2% agarose with 0.5% intralipid and then was imaged by the
For the OCT image at each position, a mean A-scan was calculated from all A-scans to suppress speckle. It is known in the art that each A-scan contains numerical data related to the detected photons, and that these data relate to the depths from which the photons returned. Taking the mean or average is one example of a mathematical process by which speckle may be mitigated or eliminated. The BSPP was then reconstructed on a logarithmic scale, as shown in
To further validate observations, the focus was shifted to a location at the phantom surface (as shown in
With the data and images described above, one can observe how the illumination beam is focused and spread out in the scattering medium because the profile of the beams as shown in
Therefore, it is possible to determine the focal point in a medium by taking OCT images at various offset positions, calculating the mean A-scan at each position, and then reconstructing the BSPP from the mean A-scans as a function of offset. The BSPP may be displayed on a logarithmic scale; using the same dataset, the intensity at each depth may be normalized and the BSPP plotted on a linear scale, as predicted by Eq. (3). The focal point is the location, in the image and in the normalized data, where the illumination beam is narrowest. Because the process displays the beam profile as a function of depth and offset, it permits a calculation of where the beam is narrowest, and thereby a determination of where the focus is located in the medium. This provides information about how the beam is focused and distributed inside human tissue, which has not previously been obtainable.
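The focus-determination procedure just described can be sketched numerically. The synthetic BSPP below is built from the Gaussian-beam width model of Eq. (5); all dimensions and the "true" focal depth are illustrative assumptions.

```python
import numpy as np

# Synthetic BSPP: Gaussian lateral profile whose width follows the
# Gaussian-beam equation w(z) = w0 * sqrt(1 + ((z - zf) / zR)^2)
offsets = np.linspace(-30.0, 30.0, 121)        # lateral offset (um)
depths = np.arange(200)                        # depth index (pixels)
z_focus, z_rayleigh, w0 = 120, 30.0, 4.0       # "true" values for the synthetic data

width_vs_z = w0 * np.sqrt(1 + ((depths - z_focus) / z_rayleigh) ** 2)
bspp = np.exp(-2 * offsets[None, :] ** 2 / width_vs_z[:, None] ** 2)

# Estimate the profile width at each depth by its normalized second moment,
# then take the depth of the narrowest profile as the focal position
profiles = bspp / bspp.sum(axis=1, keepdims=True)
widths = np.sqrt((profiles * offsets[None, :] ** 2).sum(axis=1))
focal_depth = int(np.argmin(widths))
print(focal_depth)  # prints 120, the depth where the beam is narrowest
```

On measured data the same second-moment (or a Gaussian-fit) width estimate would be applied to each depth row of the reconstructed BSPP.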
There is a desire to separate least scattering photons (LSPs) from multiple scattering photons (MSPs). As shown in
Another function that corresponds to the PSF is the Modulation Transfer Function (MTF). The MTF is calculated from the Fourier transform of the PSF correlation function represented by the BSPP, and it is another way to characterize an optical system. A depth-resolved MTF can therefore be obtained.
The inset in
With the flow chart of
The present disclosure contemplates applying standard OCT or OCM signal processing to each detector on the detector array. The reconstructed BSPP can be displayed in a two dimensional image, similar to that shown in
From the BSPP, one can carry out one or more of at least three other steps or methods (see
In conventional OCT, the phase of the OCT signal is not stable during mechanical scanning. This is because small amounts of motion by the subject can create large amounts of noise, preventing accurate extraction of the phase information. However, with the FSOCT/FSOCM methods carried out as described above using the devices disclosed above, the OCT signals from the illumination spot and the surrounding spots are acquired simultaneously. This simultaneous acquisition of OCT images guarantees a stable phase because there can be no motion between the acquisitions of the images. This stability permits the phase information to be extracted through the inverse Fourier transform of the image at an imaged location. From another point of view, the FSOCT/FSOCM signal can be considered the diffraction pattern of an imaged subject.
Complex variation: The complex OCT signal can be written as Soct = Ae^(iψ). By exploring the phase variation ψ, OCT can be used to extract flow or tiny motion in a subject, for example, the motion of blood cells in blood vessels. Conventional OCT compares the difference between two OCT A-scans at different time points T1 and T2 (see
However, the noise associated with the phase at different times T1 and T2 could be different. Although phase differences are very sensitive to motion, they are overshadowed by noise. In other words, it may be difficult to differentiate the phase variations induced by motion from those induced by noise. With FSOCT/FSOCM, the influence of noise on the motion-induced phase variation is eliminated because the phases at different offset positions are acquired in parallel at the same time. As shown in
When imaging a flow with scattering particles, such as red blood cells, at a specific time T1, the phase difference between the FSOCT signals at two offset positions can be written directly as
Ψ1D = (ψ1 + ψnoise1) − (ψ1off + ψnoise1) = (ψ1 − ψ1off)
The phase noise term is eliminated due to both FSOCT signals being acquired at the same time. When the noise has been removed from the above operation, then one can compare the difference between two FSOCT signals at two time points T1 and T2 as
SD = S1offset_D − S2offset_D, or ΨD = Ψ1D − Ψ2D = (Ψ1 − Ψ2) + (Ψ2off − Ψ1off)
It should be noted that phase variation is different at different locations. For example, if (Ψ2off − Ψ1off) is acquired for a region that does not have a flow or where the flow rate is small, then (Ψ2off − Ψ1off) ≈ 0. One can extract the absolute phase variation due only to the flow as (Ψ1 − Ψ2), without the influence of noise. For the purpose of illustration, only two positions are shown and described. In practice, the FSOCT signal from multiple offset locations can be processed similarly. As the noise has been removed, the signal-to-noise ratio can be significantly improved.
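The cancellation described above can be illustrated numerically: phase noise common to signals acquired at the same instant cancels within each time point, leaving only the flow-induced phase. All magnitudes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
flow = 0.5                            # flow-induced phase at P1 between T1 and T2
n1, n2 = rng.normal(0.0, 2.0, 2)      # large phase noise, common to both positions

psi1, psi1_off = 1.0 + n1, 0.2 + n1           # T1: illuminated spot and offset spot
psi2, psi2_off = 1.0 + flow + n2, 0.2 + n2    # T2: offset region assumed static

psi_1d = psi1 - psi1_off              # noise cancels within time point T1
psi_2d = psi2 - psi2_off              # noise cancels within time point T2
psi_d = psi_1d - psi_2d               # = (psi1 - psi2): the flow term, noise-free
print(round(psi_d, 9))                # prints -0.5 regardless of the noise draw
```

Changing the seed changes n1 and n2 but not psi_d, which is the point of the parallel offset acquisition.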
After the FSOCT or FSOCM images have been acquired, the BSPP has been reconstructed, and the phase information has been recovered, the imaging methods for different applications may be carried out as noted in the far-right boxes of
In accordance with equations (4) and (5) and
By compensating for variations in the focal position, accurate tissue structure quantification can be achieved, which is essential for monitoring disease progression over extended periods. In addition to controlling the focus, the focal position recorded during imaging can also be used to remove motion artifacts. One method of controlling the focus is to obtain OCT images and focus at a particular depth of tissue, such as in the human retina. However, with conventional OCT it is not clear whether the focus of the beam is at a particular depth in the tissue. With FSOCT, the focal point, in terms of depth in the tissue, can be tracked with the methods and devices described herein. If the focal point can be tracked, then even if a patient moves slightly, the focal point can be moved to stay at the same depth in the tissue, or the focus can be locked on a specific feature, similar to focus locking in photography.
Feedback for adaptive imaging: In tissue imaging, such as the retina, the wavefront of the illumination beam can be distorted by the aberrations, such as those induced by the cornea and lens. This results in a reduction of the lateral resolution of images. Adaptive imaging can address this issue by using a wavefront shaping component to compensate for the distortion and focus the beam to a diffraction-limited spot on the targeted tissue. This can be achieved by obtaining the distorted wavefront of the illumination beam prior to compensation or by using a metric as an indicator during optimization. Various methods, such as OCT, confocal, and nonlinear imaging, have been developed for adaptive imaging. However, these methods still have limitations, such as complexity, cost, or phototoxicity.
Adaptive imaging by optimizing PSF/MTF. Because PSF/MTF can be accessed through FSOCT/FSOCM, PSF/MTF may be used as the metric to realize adaptive imaging. FSOCT/FSOCM can obtain depth-resolved PSF/MTF in scattering media. In
Adaptive imaging through neural network training. In one embodiment, a method for utilizing a deep-learning neural network to extract the phase of a wavefront is disclosed. The method comprises the steps of providing a series of light beams with known wavefront distortions, measuring the Point Spread Function (PSF) or Modulation Transfer Function (MTF) using FSOCT or FSOCM, and training the neural network using the measured PSF or MTF. This proceeds for many different light beams with known wavefront distortions, measuring the PSF or MTF for each. The trained neural network can then be used to derive the phase of an unknown distorted wavefront when the PSF or MTF is known. The wavefront shaping component then generates the opposite of the phase of the distorted wavefront to correct the distortion at the focal spot. A flow chart demonstrating the method is depicted in
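The training workflow just described can be illustrated with a deliberately simplified stand-in: a toy forward model maps a wavefront coefficient to an MTF-like curve, and a linear least-squares model (standing in for the deep network) learns the inverse mapping. The forward model, the feature sizes, and the squared-coefficient target (the MTF of an even aberration does not reveal the sign of the coefficient) are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.linspace(0.0, 1.0, 32)                  # normalized spatial frequency axis

def mtf_curve(coeff):
    # Toy forward model (assumption): a defocus-like coefficient shrinks the MTF
    return np.exp(-(1.0 + coeff**2) * f**2)

train_coeffs = rng.uniform(-2.0, 2.0, 200)     # known wavefront distortions
X = np.stack([mtf_curve(c) for c in train_coeffs])   # "measured" MTFs
y = train_coeffs**2                            # recoverable target (MTF hides the sign)

A = np.c_[X, np.ones(len(X))]                  # linear model standing in for the NN
W, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ W
print(np.abs(pred - y).mean() < 0.1)           # the learned inverse map fits well
```

In the disclosed method, a deep network trained on measured PSF/MTF data replaces the linear model, and the outputs would typically be Zernike coefficients of the wavefront rather than a single scalar.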
Adaptive imaging through extracting phase variation from the OCT complex signal. The wavefront of a light beam is determined by the phase of the light wave. If the phase of the light beam can be directly extracted (measured), then the opposite phase of the distorted wavefront can be provided to the wavefront shaping component 3 to compensate for the distortion without requiring iteration. This method includes determining the phase of the wavefront by eliminating noise, and it avoids the need either to perform numerous iterations or to train a neural network, as the previous two methods require. An illustration of the method is shown with the
FSOCT/FSOCM captures in parallel the complex OCT signal S1 (x1, y1, z) at P1 and S2 (x2, y2, z) at P2, as shown in sample 407 of
This results in the phase signal S because, across different imaged locations, the aberrated wavefront ψr is similar while the phase variation ψs is random. After averaging a large number (e.g., about 100 or more) of such measurements as shown in the equation above, only the aberrated wavefront ψr will remain, because ψs is cancelled by the averaging due to its randomness. Once the aberrated wavefront is obtained, the wavefront shaping component 3 can generate −ψr ("negative ψr", a shape that is the opposite of ψr), which compensates for the induced wavefront distortion. Even though the signals are measured at different times and different locations, this is acceptable because the distortion (or aberration) does not change substantially over time and location.
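The averaging step can be sketched as follows: each measurement carries the shared aberration phase ψr plus a random sample-dependent phase ψs, and averaging many measurements cancels ψs. The zero-mean noise model and the number of measurements are illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
psi_r = 0.7                                  # shared aberrated-wavefront phase (rad)
psi_s = rng.normal(0.0, 0.8, 10_000)         # random phase, one per imaged location

measurements = psi_r + psi_s                 # phase extracted at each location pair
psi_r_est = measurements.mean()              # psi_s averages toward zero
print(abs(psi_r_est - psi_r) < 0.05)         # the estimate converges to psi_r
```

With about 100 measurements, as suggested above, the residual of the random term shrinks roughly as one over the square root of the number of measurements.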
Extracting tissue optical properties. The tissue optical scattering coefficient, absorption coefficient, and anisotropy (g) are valuable for diagnosis. Although various methods have been proposed to estimate these optical properties from OCT images, the estimation requires prior knowledge of the optical system, such as the wavelength, the focal location, and the refractive index of the imaged subject. Moreover, almost all models ignore MSPs by considering only LSPs. It remains challenging to translate the technology into clinics.
With FSOCT/FSOCM, LSPs and MSPs can be separated using function fitting. To separate LSPs and MSPs, the BSPP is first normalized. This was described above in relation to
G(Δr) = GL(Δr) + GM(Δr)

Here, GL(Δr) is used to fit the LSP central beam and GM(Δr) is used to fit the MSP skirt. Both are Gaussian functions:

GL(Δr) = C exp(−2Δr²/wL²), GM(Δr) = (1 − C) exp(−2Δr²/wM²)

Here, C and (1 − C) are the coefficients of GL(Δr) and GM(Δr), wL is the beam waist of the LSP central beam, and wM is the beam waist of the MSP skirt. The Gaussian function is used here as an example; other functions, such as a Lorentzian function, can also be used.
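The two-Gaussian separation can be sketched as follows. A coarse grid search stands in for a proper nonlinear fit, and the synthetic profile and its "true" parameters are illustrative.

```python
import numpy as np

r = np.linspace(-40.0, 40.0, 161)            # lateral offset axis (illustrative)
true_c, true_wl, true_wm = 0.7, 4.0, 20.0    # "true" parameters of the synthetic BSPP
bspp = (true_c * np.exp(-2 * r**2 / true_wl**2)
        + (1 - true_c) * np.exp(-2 * r**2 / true_wm**2))

def model(c, wl, wm):
    # G(dr) = GL(dr) + GM(dr): narrow LSP central beam plus broad MSP skirt
    return c * np.exp(-2 * r**2 / wl**2) + (1 - c) * np.exp(-2 * r**2 / wm**2)

best = min(
    ((c, wl, wm)
     for c in np.linspace(0.1, 0.9, 17)      # coarse parameter grids
     for wl in np.linspace(2.0, 8.0, 25)
     for wm in np.linspace(10.0, 30.0, 21)),
    key=lambda p: float(np.sum((model(*p) - bspp) ** 2)),
)
print(best)  # the grid point nearest (0.7, 4.0, 20.0)
```

In practice a nonlinear least-squares routine would refine these grid estimates, and the fitted C and wL at each depth quantify the LSP contribution.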
Objectively quantifying stray light in the eye: As depth-resolved PSF and MTF can be accessed and LSPs and MSPs can be separated, this technology can be used to quantify ocular stray light, such as the stray light induced by a cataract. One can capture the PSF from the top surface of the retina. By quantifying the contribution of MSPs through the MSP skirt, the stray light induced by the crystalline lens can be evaluated.
This detailed description in connection with the drawings is intended principally as a description of the presently preferred embodiments of the invention, and is not intended to represent the only form in which the present invention may be constructed or utilized. The description sets forth the designs, functions, means, and methods of implementing the invention in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and features may be accomplished by different embodiments that are also intended to be encompassed within the spirit and scope of the invention and that various modifications may be adopted without departing from the invention or scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 63/330,999 filed Apr. 14, 2022.