The present disclosure relates to optical apparatuses, in particular optical apparatuses for calculating the wavefront error at different planes in the eye.
The use of wavefront aberrometry for vision correction surgical procedures, such as custom LASIK (laser in-situ keratomileusis), and for the design of custom contact lenses and glasses is well established [1] [2]. Currently, several aberrometers based on different working principles are available on the market, the most prominent being the Shack-Hartmann sensor [3] [4]. However, due to their limited dynamic range, they are mostly suitable for pre-operative examination.
The use of intra-operative aberrometry for guiding intra-ocular lens (IOL) implant surgery and for the surgical management of astigmatism has been demonstrated [5]. These devices are based either on Talbot interferometry or on the time-sequential wavefront sensing principle, both of which offer a large diopter range [6] [7] suitable for measurement in an aphakic state, when the eye's optics becomes highly hyperopic.
Although wavefront aberrometry provides information regarding the optical properties of the eye, it lacks biometric or anatomical information. Hence, intra-operative aberrometry measurements are still used with generalized regression-based IOL power calculation formulae [8], thus limiting the accuracy of IOL power selection and subsequently the precision of the surgical outcome.
Both pre-operative and intra-operative optical coherence tomography (OCT) can provide anatomical and biometric information of the eye [9] [10] [11] [12] [13]. It has been shown that an IOL power calculation formula that uses anterior chamber depth (ACD) and a real IOL position estimate yields more accurate results than the new generation of regression-based formulae [14]. Although some optical properties such as corneal power and corneal astigmatism can be estimated using anterior chamber OCT imaging, the accuracy is limited due to calibration issues and subject motion caused by slow volume imaging speeds [15] [16].
Since OCT can also provide phase information along with intensity-based imaging, various digital and computational methods, termed computational or digital adaptive optics (CAO or DAO), have been demonstrated that can compute the wavefront error in OCT images and also digitally correct it in post-processing [17] [18] [19] [20].
It is an object of the present invention to overcome the aforementioned deficiencies of the prior art and to provide apparatuses to calculate wavefront error at different planes in the eye. The object is achieved with the features of the independent claims. Dependent claims define preferred embodiments of the invention.
In particular, the present disclosure relates to an optical apparatus, comprising: a source of wavelength tunable laser light or a broadband partially coherent light source, a first beam splitter receiving the light and directing a part of the light to a sample arm as illumination light and another part of the light to a reference arm as reference light, the sample arm comprising: means for directing the illumination light via the first beam splitter as a light spot to a sample, wherein an image of the light spot is reflected from the sample, and focus tunable optics receiving the image of the light spot from the sample after being transmitted through the first beam splitter and focusing the image onto a detection plane, wherein a photodetector unit is adapted to receive the recombined light from the sample arm and the reference arm.
Various embodiments may preferably implement the following features.
Preferably, the sample arm comprises a separate illumination channel configured to inject light via a mirror and the first beam splitter to the sample without passing through the focus tunable optics.
That is, a configuration is preferably provided in which a separate illumination channel avoids the focus tunable optics on its way into the eye via the beam splitter.
This allows the formation of a stationary spot on the retina of the eye. The light reflected back from the eye passes through the beam splitter and the focus tunable optics only on the way out and is translated at the fiber detection plane of the sample arm. This arrangement is referred to as the “Single Path” configuration.
Preferably, the focus tunable optics is configured to be controlled manually, wherein the focus tunable optics preferably comprises a plurality of lenses arranged in a Badal system, such that the image is focused to the detection plane.
Preferably, the focus tunable optics is configured to be controlled automatically, wherein the focus tunable optics preferably comprises electrically focus tunable liquid crystal optical elements, such that the image is focused to the detection plane.
Preferably, the focus tunable optics is adapted to increase the dynamic range of the signal detection, more preferably adapted to compensate defocus and/or astigmatism of the sample.
Preferably, the sample is an eye and wherein the focus tunable optics is adapted to ensure that the retinal plane of the eye is continuously conjugated to or imaged at the detection plane.
Preferably, the optical apparatus further comprises a computing unit being connected to the photodetector unit, wherein the computing unit is configured for digitization and further data processing.
That is, the optical apparatus preferably further comprises a computing unit connected to the photodetector unit, wherein the computing unit is configured to digitize the signal and use digital techniques to calculate wavefront error at different planes, e.g. in the human eye.
Preferably, the sample arm further comprises a scanner, preferably a 2-D scanner placed at the Fourier plane of collimation optics located in front of the detection plane.
Preferably, the sample arm further comprises a detection fiber being adapted to receive the image of the illuminated spot at the detection plane and guide light to the photodetector and more preferably an actuator configured to translate the detection fiber laterally across the image of the illuminated spot.
Preferably, in an embodiment, the photodetector comprises either a dual balanced photodetector or a spectrometer adapted to receive the light reflected back from the illuminated spot at the detection plane, which is preferably conjugated to the retinal plane of the eye.
Preferably, in another embodiment, the photodetector comprises a 2-D camera sensor adapted to receive the light reflected back from the illuminated spot at the detection plane, which is preferably conjugated to the pupil plane of the eye.
Preferably, the optical apparatus further comprises a first mirror directing the light from a broadband light source to the first beam splitter for splitting the light into the sample arm and the reference arm, wherein the reference arm further comprises: a second mirror adapted to receive light transmitted through the first beam splitter, a beam expander adapted to receive light via the second mirror and to expand the beam preferably to a diameter of 5 to 10 mm, more preferably of around 8 mm, and a third mirror receiving the light from the beam expander and directing it to a diffraction grating for directing the reference light to a second beam splitter adapted to combine it with the light reflected back from the sample arm passing through the first beam splitter and the focus tunable optics, and wherein the combined sample and reference light directed via the second beam splitter is preferably captured by the 2-D camera, which is preferably conjugated to the pupil plane of the eye.
Preferably, the computing unit is configured to generate a volume image of the sample, preferably of a full eye, and point spread functions, PSF, and to determine a wavefront error using a digital adaptive optics, DAO, algorithm, preferably a digital lateral shearing based digital adaptive optics algorithm, DLS-DAO algorithm.
According to the present disclosure, any computational algorithm that can extract wavefront error information from digitized OCT data is considered a DAO algorithm.
Preferably, the computing unit is configured to: obtain volumetric optical coherence tomography data, OCT data, of PSF scans of the eye, extract an enface PSF field at a retinal layer of the eye after OCT based data processing, derive a defocus distance in an image space from a shift in the image plane at which an image of the light spot is best focused when the focal length of the focus tunable optics is changed, calculate a 2-D fast Fourier transform, FFT, of the PSF field and add a defocus phase corresponding to the derived defocus distance to the phase of the calculated Fourier field, numerically wave propagate the resulting field at the Fourier plane to the image location of the pupil of the eye, and reconstruct the phase or wavefront error using the digital adaptive optics, DAO, algorithm, preferably the DLS-DAO algorithm, from the calculated field at a pupil plane of the eye.
Preferably, the computing unit is configured to: obtain volumetric optical coherence tomography data, OCT data, of PSF scans of the eye, extract an enface PSF field at a retinal layer of the eye after OCT based data processing, calculate the 2-D FFT of the PSF field, numerically wave propagate the resulting field at the Fourier plane to the image location of the pupil of the eye, derive a defocus distance of a focal plane of the eye from the retina from a change in focus of the focus tunable optics, add a defocus phase corresponding to the derived defocus distance to the phase of the calculated pupil field, and reconstruct the phase or wavefront error using the digital adaptive optics, DAO, algorithm, preferably the DLS-DAO algorithm, from the calculated field at a pupil plane of the eye.
Preferably, the computing unit is configured to: obtain volumetric optical coherence tomography data, OCT data, of PSF scans of the eye in an aphakic state, extract an enface PSF field at a retinal layer after OCT based data processing, derive a defocus distance of a focal plane of the aphakic eye from the retina from a change in focus of the focus tunable optics, numerically wave propagate the PSF field to an estimated IOL location plane within the eye taking into account the defocus distance, and reconstruct the phase or wavefront error using the digital adaptive optics, DAO, algorithm, preferably the DLS-DAO algorithm, from the calculated field at the IOL location plane within the eye.
Preferably, the computing unit is configured to evaluate an IOL performance, preferably comprising: multiplying a field at the IOL location plane with a complex exponential of a known phase of a selected IOL, calculating the field back at the retinal plane using a numerical wave propagation algorithm, and calculating a spot size and a modulation transfer function, MTF, for quantifying a visual performance.
Preferably, the computing unit is configured to: obtain volumetric optical coherence tomography data, OCT data, of PSF scans of the eye in a phakic state, extract an enface PSF field at a retinal layer after OCT based data processing, derive a defocus distance of a focal plane of the phakic eye from the retina from a change in focus of the focus tunable optics, numerically wave propagate the PSF field to a plane at a last surface of a crystalline lens considering the vitreous media within the eye and taking into account the defocus distance, multiply the calculated field with a complex conjugate of a transmission function of the crystalline lens to cancel its refractive effect, numerically wave propagate the resulting field to an estimated IOL location, and reconstruct the phase or wavefront error using the digital adaptive optics, DAO, algorithm, preferably the DLS-DAO algorithm, from the calculated field at an IOL location plane within the eye.
Preferably, the computing unit is configured to determine at least one of a sphere, a cylinder and cylinder axis values from the reconstructed phase error and to use the determined values for selecting an IOL or for designing a custom IOL having an optimal sphere and cylinder power and cylinder axis for correcting the phase or wavefront error of the eye.
Preferably, the computing unit is configured to evaluate an IOL performance, preferably comprising: multiplying a field at the IOL location plane with the transmission function of a selected IOL, calculating the field back at the retinal plane using a numerical wave propagation algorithm, and calculating a spot size and a modulation transfer function, MTF, for quantifying a visual performance.
Preferably, the computing unit is configured to: numerically wave propagate a sample field at a camera plane to the image location of the pupil of the eye, derive a defocus phase at a pupil plane of the eye corresponding to a focal length change of the focus tunable optics, add the derived defocus phase to the phase of a calculated pupil field, and reconstruct a phase or wavefront error using the digital adaptive optics, DAO algorithm, preferably the DLS-DAO algorithm from the calculated field at the pupil plane of the eye.
Preferably, the computing unit is configured to: provide a PSF corresponding to the retinal layer and the wavefront error of an eye, determine the cylinder power and axis of an astigmatism or cylinder error correcting toric IOL, determine the residual cylinder error after the toric IOL is implanted in the eye during the IOL implant surgery from a quality metric based on the spot size of the PSF profile, determine the angle by which the toric IOL needs to be rotated to achieve optimal axis alignment in order to cancel the residual cylinder error, and finally confirm the accuracy of the axis alignment of the toric IOL based on the spot-size quality metric of the PSF profile.
The embodiments aim at utilizing the combination of full eye OCT imaging and wavefront aberrometry based on DAO in order to obtain full information regarding the optical and biometric properties of the whole eye, which can then be used for more accurate IOL power calculations, for the evaluation of various IOL designs, and also to help guide accurate IOL positioning.
Furthermore, an embodiment for digital aberrometry is based on an off-axis digital holography approach that can be useful in situations where only wavefront error measurement is required without biometry.
Embodiments of the present disclosure may allow wavefront error calculation for a human eye suitable for clinical applications with high accuracy over a wide dynamic range.
The exemplary embodiments disclosed herein are directed to providing features that will become readily apparent by reference to the following description when taken in conjunction with the accompanying drawings. In accordance with various embodiments, exemplary systems, methods, devices and computer program products are disclosed herein. It is understood, however, that these embodiments are presented by way of example and not limitation, and it will be apparent to those of ordinary skill in the art who read the present disclosure that various modifications to the disclosed embodiments can be made while remaining within the scope of the present disclosure.
Thus, the present disclosure is not limited to the exemplary embodiments and applications described and illustrated herein. Additionally, the specific order and/or hierarchy of steps in the methods disclosed herein are merely exemplary approaches. Based upon design preferences, the specific order or hierarchy of steps of the disclosed methods or processes can be re-arranged while remaining within the scope of the present disclosure. Thus, those of ordinary skill in the art will understand that the methods and techniques disclosed herein present various steps or acts in a sample order, and the present disclosure is not limited to the specific order or hierarchy presented unless expressly stated otherwise.
The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.
In general, the multimodal OCT system 10 of
In general,
With reference to
As can be seen in
In
The data is digitized and processed on a computer and displayed via a display unit (computer and display unit 140). A double path channel, which consists of 2-D scanner 113 and focus tunable optics 112, is used to deliver light to and from the eye 200 for imaging.
In order to scan the PSF, the sample arm 110 is adapted to include a separate illumination channel that can inject a narrow beam of light of less than 1 mm diameter into the eye 200 via the mirror 116 and the beam splitter 111, so that a perfect diffraction limited spot can be formed on the retina.
The image of the spot is translated over the single mode detection fiber 117 in the sample arm 110. The method of PSF scan with an OCT based system is described in more detail in [20].
According to the embodiment, focus tunable optics 112 is included in the sample arm 110 to increase the dynamic range of the signal detection. This focus tunable optics 112, which may consist of a plurality of lenses with one or more focus tunable optical elements, can be adapted to compensate both defocus and astigmatism. Either manually or electrically focus tunable optical elements can be used. However, electrically focus tunable liquid crystal optical elements are preferred for real-time applications where fast focus tuning is required.
The lenses in the tunable optics 112 can also be arranged as in a Badal system in case mechanical focus tuning is preferred over fast automatic focus tuning. With the focus tunable system, it is ensured that the retinal plane is always conjugated to or imaged at the detection fiber plane. Note that without the focus tunable optics 112, in the presence of defocus error due to myopia or hyperopia, the best focus plane for the illumination spot could be shifted away from the detection fiber plane as shown in
With a defocus error of >±2 diopters, the signal-to-noise ratio (SNR) can even drop to a level at which the PSF cannot be recorded. Thus, the focus tunable optics 112 is preferably used in order to form a focused image of the illuminated spot on the retina and keep the SNR of the detected PSF signal high. This allows the scanned volumetric PSF detection using an OCT system 10 over a defocus error range of >±20 diopters, which can make the system versatile and robust in different scenarios.
In general,
Control of the focus tunable optics 112 system is achieved through a calibration process. A physical model eye that can mimic a normal and a defocused eye can be used. Defocus error in the physical model eye can be introduced either by adding a sphere lens with varying diopters in the pupil plane or by changing the axial length.
With the physical model eye set to a normal condition of zero diopter defocus error, a 2D PSF profile corresponding to the retinal layer is obtained with the modified OCT system as illustrated in
In the case of an electrically focus tunable optics system, in which liquid-crystal-based optical elements change curvature, and thus focus, upon application of different voltages, the voltage can be tuned to a value that provides a focused PSF spot with a diameter matching the determined ideal-case value. The defocus error of the physical model eye can be changed progressively from −20 diopters to +20 diopters in steps of 0.25 diopters, and for each defocus error condition within that range a corresponding voltage for the focus tunable system 112 that provides the ideal focused PSF spot size can be determined.
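The calibration loop described above can be sketched as follows. This is a minimal illustration only: the spot-diameter model, the lens gain, and the drive-voltage range are hypothetical stand-ins for the actual measurement and hardware, and in practice each spot diameter would come from a real PSF acquisition.

```python
import numpy as np

def spot_diameter(voltage, defocus_d, ideal_um=10.0, gain=0.5):
    """Toy model: the PSF spot grows as the lens drive voltage fails to
    cancel the model eye's defocus (stand-in for a real measurement)."""
    residual = gain * voltage - defocus_d          # residual defocus in diopters
    return ideal_um * np.sqrt(1.0 + (20.0 * residual) ** 2)

def calibrate(defocus_steps, voltages, ideal_um=10.0):
    """For each defocus condition, pick the voltage whose (simulated) PSF
    spot diameter best matches the ideal focused value."""
    lut = {}
    for d in defocus_steps:
        diam = spot_diameter(voltages, d, ideal_um)  # vectorized over voltages
        lut[round(float(d), 2)] = float(voltages[int(np.argmin(diam))])
    return lut

# -20 D ... +20 D in 0.25 D steps, scanned over an assumed voltage range
steps = np.arange(-20.0, 20.0 + 0.25, 0.25)
volts = np.linspace(-45.0, 45.0, 1801)
lut = calibrate(steps, volts)
```

The resulting lookup table maps each defocus condition of the model eye to the drive voltage restoring the ideal focused spot; the Badal variant would map defocus to a lens separation instead.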
In the case of mechanical focus tuning based on a Badal system, in which the distance between the two lenses of the Badal arrangement is varied, for example by moving one of the lenses on a translation stage, in order to change the focus of the system, the lens separation that provides a focused PSF spot with a diameter matching the determined ideal-case value can be found for any given defocus condition of the physical model eye.
Even though defocus and possibly astigmatism can be roughly compensated with the focus tunable system 112 in order to enhance signal detection over the wide defocus error range of >±20 diopters, a residual sphere and cylinder error of <1 diopter can still remain, along with significant higher order aberrations (HOA). These aberrations can be detected using a computational or digital adaptive optics technique, in particular a DLS-DAO technique.
In order to be clinically preferable for vision correction, the wavefront error or aberration may be calculated at 1) the pupil plane 201 in cases where corneal treatment (e.g. LASIK), contact lens and custom glasses are required; 2) the estimated position of intra ocular lens (IOL) implant in cases where IOL implant is necessary (e.g. cataract surgery).
In particular, at step S400a, the intra-operative device is used to obtain volumetric OCT data of the PSF scan of the patient's eye 200. At step S401a the enface PSF field is extracted at a retinal layer after OCT based data processing. At step S402a the defocus distance in the image space is derived, i.e. the distance by which the plane at which the image of the spot is best focused shifts when the focal length of the focus tunable system is changed.
At S403a the 2D-FFT of the PSF field is calculated and the defocus phase corresponding to the derived defocus distance is added to the phase of the calculated Fourier field. At S404a the resulting field at the Fourier plane is numerically wave propagated to the image location of the pupil of the eye. The coordinate of the calculated field is scaled back to the co-ordinate of the original pupil in the object space. At S405a the phase or wavefront error is reconstructed using digital adaptive optics (DAO) algorithm such as DLS-DAO from the calculated field at the pupil plane of the eye.
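Steps S403a and the defocus-phase addition can be sketched as follows, assuming a square enface PSF field sampled on a uniform grid. The FFT shift conventions and the paraxial (Fresnel) form of the defocus phase are illustrative choices, not the only possible implementation; wavelength, pitch, and defocus distance below are placeholder values.

```python
import numpy as np

def add_defocus_phase(psf_field, dz, wavelength, pitch):
    """2-D FFT of the enface PSF field, then pixel-wise multiplication by the
    unit-amplitude paraxial defocus phase for image-space defocus distance dz."""
    n = psf_field.shape[0]
    # Fourier field with zero frequency centered
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_field)))
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel (paraxial) defocus transfer phase
    defocus = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return F * defocus
```

Because the defocus factor has unit amplitude, only the phase of the Fourier field is modified; the subsequent numerical propagation to the pupil image plane then starts from this corrected field.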
In particular, in the computation or digital adaptive optics approach, when a 2-D FFT of the selected enface PSF field from the volumetric scanned PSF data is calculated on a computer, it yields the field information at the Fourier plane of the collimation lens 114 placed just in front of the detection fiber 117.
The phase of the calculated Fourier field may be adjusted in case a focus tunable lens system 112 is used in the detection path to compensate the defocus or the spherical equivalent error of the eye 200 and keep the retinal plane 202 conjugated to the detection fiber plane. In this case, a relationship between the defocus distance in the image space and the change in focal length of the focus tunable lens system 112, relative to the ideal focal length set for a normal emmetropic eye, is established by theoretical modeling of the system using either 1) an approximate ABCD matrix formulation or 2) computer simulation of the system with ray tracing software (e.g. Zemax, Code V).
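The ABCD-matrix approach can be sketched for the simplest case of a single thin tunable lens imaging a point source; a real system model would chain the ray-transfer matrices of all elements between the retinal conjugate and the detection fiber. The focal lengths and object distance below are arbitrary illustrative values.

```python
import numpy as np

def image_distance(f, s_obj):
    """Image distance behind a thin lens via ABCD matrices: find s_img such
    that the B element of prop(s_img) @ lens(f) @ prop(s_obj) vanishes."""
    lens = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    prop = lambda d: np.array([[1.0, d], [0.0, 1.0]])
    M = lens @ prop(s_obj)
    # B of the full system is s_img * M[1,1] + M[0,1]; imaging requires B = 0
    return -M[0, 1] / M[1, 1]

def defocus_shift(f_nominal, f_tuned, s_obj):
    """Image-space defocus distance: shift of the best-focus plane when the
    tunable lens focal length changes from its nominal setting."""
    return image_distance(f_tuned, s_obj) - image_distance(f_nominal, s_obj)
```

Evaluating `defocus_shift` over the tuning range of the lens yields the lookup between focal-length change and image-space defocus distance that the defocus-phase correction requires.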
The defocus phase error corresponding to defocus distance in the image space is calculated. The calculated Fourier field is pixel-by-pixel multiplied by a complex exponential function with unit amplitude and argument as the calculated defocus phase, as shown in
The image location of the pupil for different focus configurations of the focus tunable lens system 112 can be determined by using 1) approximate ABCD matrix formulation or 2) precisely with a ray tracing software. Using either method a lookup table of the distance between the Fourier and the image of the pupil plane, for different focus configuration of the focus tunable lens system 112, can be determined.
Using the known distance, the calculated Fourier field can be numerically wave propagated on the computer to the location of the image of the pupil using a numerical technique such as Angular spectrum or Fresnel propagation or a combination or a variation of both, which yields the field at the image location of the pupil plane 201.
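A minimal sketch of the angular spectrum method referred to above follows; the grid size, pixel pitch, and wavelength used in practice would come from the system parameters, and hybrid or Fresnel variants can be substituted depending on the propagation distance.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pitch):
    """Propagate a sampled complex field by distance dz using the angular
    spectrum transfer function (evanescent components suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    # keep only propagating plane-wave components
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Negative `dz` propagates backwards, so a forward step followed by the matching backward step recovers the original band-limited field, which is a convenient self-check of the implementation.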
Finally, the coordinate of the image of the pupil plane can be scaled back to the original pupil plane dimension by taking into account the magnification factor between the image and the original pupil plane. This approach is suitable for the pupil plane field calculation irrespective of whether the eye is in phakic, aphakic or pseudo-phakic state. Once the pupil field is determined, wavefront error at the pupil plane can be calculated using a digital adaptive optics technique such as DLS-DAO. The flow chart of the respective method is shown in
An alternative way to take into account the defocus phase due to change in the focal length of the focus tunable lens system 112 is shown in
In particular, at step S400b, the intra-operative device is used to obtain volumetric OCT data of the PSF scan of the patient's eye 200. At step S401b the enface PSF field is extracted at a retinal layer after OCT based data processing. At step S402b the defocus distance in the object space is derived, i.e. the distance between the retinal plane and the plane of the best focus in the eye.
At S403b the 2D-FFT of the PSF field is calculated to derive the field at the Fourier plane. At S404b the resulting field at the Fourier plane is numerically wave propagated to the image location of the pupil of the eye. The coordinate of the calculated field is scaled back to the coordinate of the original pupil in the object space, and the defocus phase corresponding to the derived defocus distance in the object space is added. At S405b the phase or wavefront error is reconstructed using a digital adaptive optics (DAO) algorithm such as DLS-DAO from the calculated field at the pupil plane of the eye.
The calculation of the field at the estimated location 209 of IOL 180, as shown in
According to the disclosure, a PSF field in the image space is first scaled to the coordinates of the object space, taking into account the magnification of the optical system. This effectively yields the virtual PSF field at the retina, which, by the reversibility principle of optics, is equivalent to the focused light field distribution at the retina produced by the incoming light beam passing through the full pupil 207 and refracting through the various refractive surfaces and optical media of the eye 200.
In case the measurement is done in the aphakic state during the surgery, the virtual PSF field at the retina can be numerically wave propagated directly to an estimated IOL location 209, taking into account the refractive index of the vitreous media.
In case the defocus error of the eye 200 is compensated using a focus tunable system 112, the defocus distance in the object space corresponding to the change in focal length of the focus tunable system 112 is calculated, which is then added to the propagation distance, while propagating the PSF field to the estimated location 209 of the IOL 180. The flow chart of the method is shown in
In particular, at step S600, the intra-operative device is used to obtain volumetric OCT data of the PSF scan of the patient's eye 200 in the aphakic state. At step S601 the enface PSF field is extracted at a retinal layer after OCT based data processing. At step S602 the defocus distance is derived, i.e. the distance of the focal plane of the aphakic eye from the retina, from the change of focus of the focus tunable system.
At S603 the resulting PSF field is propagated using a numerical wave propagation algorithm on a computer (processor) to an estimated IOL location plane, considering the vitreous media and taking the defocus distance into account. At S604 the IOL performance is evaluated by multiplying the field at the IOL location plane with the complex exponential of the known phase function of a selected IOL, calculating the field back at the retinal plane using a numerical wave propagation algorithm, and calculating the spot size and MTF to quantify the visual performance.
At S605 the phase or wavefront error is reconstructed using a digital adaptive optics (DAO) algorithm such as DLS-DAO from the calculated field at the IOL location plane. At S606 the sphere, cylinder and cylinder axis values are derived from the reconstructed phase error. At S607 the derived values are used to select an IOL with optimal sphere and cylinder power and cylinder axis for the measured patient or the derived values are used to design a custom IOL.
In case of phakic eye, the virtual PSF field at the retina plane 202 can be numerically propagated through the vitreous media to the back surface of the crystalline lens taking into account the distance between the last surface of the crystalline lens and retina and the refractive index of the vitreous media.
In case the defocus error of the eye 200 is compensated using a focus tunable system 112, the defocus distance in the object space corresponding to the change in focal length of the focus tunable system 112 is calculated, which is then added to the propagation distance while propagating the PSF field to the last surface of the crystalline lens.
If the crystalline lens front and rear surface curvature, thickness and effective refractive index can be determined from 2-D or 3-D anatomical OCT imaging, then the phase function of the crystalline lens can be derived and the corresponding phase transmission function, which is the complex exponential function with unit amplitude and argument as lens phase function, can be determined.
Since the phase of the wavefront is most relevant, an assumption of unit amplitude may be sufficient. The calculated field at the back surface of the crystalline lens can be multiplied pixel-by-pixel with the complex conjugate of the determined lens phase transmission function, which cancels out the refraction effect of the lens and yields the field at the front surface of the crystalline lens.
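The unit-amplitude transmission function and its conjugate multiplication can be sketched as follows. For illustration a simple paraxial thin-lens phase stands in for the phase function that would actually be derived from the OCT-measured lens surfaces and refractive index; the focal length and grid values are placeholders.

```python
import numpy as np

def thin_lens_phase(x, y, f, wavelength):
    """Paraxial thin-lens phase function: phi = -k (x^2 + y^2) / (2 f).
    A stand-in for the phase derived from measured lens geometry."""
    k = 2 * np.pi / wavelength
    return -k * (x**2 + y**2) / (2.0 * f)

def cancel_lens(field_back_surface, lens_phase):
    """Multiply pixel-by-pixel with the complex conjugate of the unit-amplitude
    transmission function t = exp(i*phi), cancelling the lens refraction."""
    t = np.exp(1j * lens_phase)
    return field_back_surface * np.conj(t)
```

If the field behind the lens is exactly an incoming field times the lens transmission, the conjugate multiplication recovers the incoming field, which is the cancellation property the method relies on.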
The calculated field at the front surface of the crystalline lens is effectively equivalent to the field due to the planar wavefront refracting through the anterior and posterior cornea, passing through the full pupil and propagating through the aqueous humour media to the front lens surface. The flow chart of the method is shown in
In particular, at step S700, the intra-operative device is used to obtain volumetric OCT data of the PSF scan of the patient's eye 200 in the phakic state. At step S701 the enface PSF field is extracted at a retinal layer after OCT based data processing. At step S702 the defocus distance is derived, i.e. the distance of the focal plane of the phakic eye from the retina, from the change in focus of the focus tunable system.
At S703 the PSF field is propagated using a numerical wave propagation algorithm on a computer (processor) to the plane at the last surface of the crystalline lens, considering the vitreous media and taking the defocus distance into account. At S704 the calculated field is multiplied with the complex conjugate of the transmission function of the crystalline lens to cancel its refractive effect, and the resulting field is numerically wave propagated to the estimated IOL location.
At S705 the IOL performance is evaluated by multiplying the field at the IOL location plane with the transmission function of a selected IOL, calculating the field back at the retinal plane using a numerical wave propagation algorithm and calculating the spot size and MTF to quantify the visual performance.
At S706 the phase or wavefront error is reconstructed using a digital adaptive optics (DAO) algorithm such as DLS-DAO from the calculated field at the IOL location plane. At S707 the sphere, cylinder and cylinder axis values are derived from the reconstructed phase error. At S708 the derived values are used to select an IOL with optimal sphere and cylinder power and cylinder axis for the measured patient or the derived values are used to design a custom IOL.
A computational or DAO technique, in particular DLS-DAO, can be applied to the calculated field at the IOL location in either the phakic or the aphakic case in order to determine the wavefront error, i.e. the deviation from a spherical wavefront with radius of curvature equal to the distance of the IOL location from the fovea of the retina, as shown in
Based on the determined wavefront error, the sphere and cylinder power and the cylinder axis of the IOL 180 can be determined such that they result in an optimally focused spot on the retina, free from sphere and cylinder error, which can ensure 20/20 vision for the patient.
Also, based on the determined wavefront error, a custom IOL 180 can be designed and manufactured with anterior and posterior refracting surfaces, thickness and refractive index that minimize not only second order aberrations such as sphere and cylinder but also higher order aberrations (HOAs) such as spherical aberration, coma, trefoil, etc., to ensure the best visual performance for the patient.
Furthermore, a simulation of objective refraction can be done to analyze the visual performance of the designed IOL or of an IOL with the selected sphere and cylinder power and cylinder axis. In this case, the calculated field at the location of the front surface of the IOL is pixel-by-pixel multiplied by the phase transmission function of the IOL, and then numerically wave propagated in the vitreous media to the retinal plane 202 to get the focused spot. The root mean square (RMS) spot size and the corresponding modulation transfer function (MTF) are quantified for visual performance analysis.
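A minimal sketch of these two visual-performance metrics, assuming a sampled complex field at the retinal plane: the RMS spot radius is taken about the intensity centroid, and the MTF is taken as the normalized magnitude of the Fourier transform of the intensity PSF.

```python
import numpy as np

def spot_and_mtf(field_at_retina, dx):
    """RMS spot radius (about the intensity centroid, in units of dx) and
    MTF (normalized |FFT| of the intensity PSF) of a focused field."""
    I = np.abs(field_at_retina) ** 2
    ny, nx = I.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = I.sum()
    cx, cy = (x * I).sum() / total, (y * I).sum() / total
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) * dx ** 2
    rms_spot = np.sqrt((r2 * I).sum() / total)
    # Optical transfer function is the Fourier transform of the intensity PSF;
    # its magnitude, normalized to the DC value, is the MTF.
    otf = np.fft.fftshift(np.fft.fft2(I))
    mtf = np.abs(otf) / np.abs(otf).max()
    return rms_spot, mtf

# Example: an ideal single-pixel focal spot gives zero RMS radius and a
# perfectly flat MTF.
field = np.zeros((64, 64), dtype=complex)
field[32, 32] = 1.0
rms, mtf = spot_and_mtf(field, dx=5e-6)
```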
The steps shown in the flowchart of
That is,
According to the disclosure, the scanned PSF obtained using an OCT system 10 can also be used to guide IOL implant surgery, which may be particularly useful in guiding the alignment of a toric IOL that corrects astigmatism or cylinder error.
When the axis of the toric IOL is not aligned with the measured cylinder error axis of the eye, a smeared PSF will be obtained using the PSF generating OCT system, as shown in
When only standalone aberrometry is needed, i.e. without biometry, a setup similar to full field off-axis digital holography may be useful, as it can exploit the advantages of digital wavefront error calculation while still being economically attractive.
The schematic of the setup is shown in
The light in the sample arm 1010 is injected into the eye 1200 and is focused on the retina to form an ideal spot, which acts as a pseudo point source reflecting light back. The light reflected back from the retina passes through the full aperture of the eye's pupil, between 3 and 7 mm in diameter, and is transmitted through the beam splitter 1011 and the focus tunable optics 1012, which may consist of a plurality of lenses in which one or more optical elements can change their focal length using either an electrical or a mechanical mechanism. The lenses in the tunable optics system 1012 can also be arranged as in a Badal system. The focus tunable optics 1012 is used for rough compensation of defocus, and optionally of the cylinder error of the eye 1200, in order to increase the dynamic range of wavefront error detection.
The light passing through the tunable optics 1012 is transmitted through a beam splitter 1005 and is captured by a 2-D camera sensor 1014, where it is interfered with the light from the reference arm 1020. The detection plane of the camera sensor 1014 is placed at the focal plane of the tunable optics system 1012.
The light that is transmitted by the beam splitter 1011 placed just before the eye 1200 is directed to a beam expander 1022, whereby the beam diameter is increased to around 8 mm.
The expanded light is then directed to a diffraction grating 1004 via a mirror 1023. The arrangement of beam expander 1022 and mirrors 1021 and 1023 can be mounted on a translation stage that can change the path length in the reference arm 1020 in order to match the path length in the sample arm 1010 and control the coherence gate.
The first-order diffracted light from the diffraction grating 1004 is then reflected by the beam splitter 1005 to the camera 1014, where it interferes with the light from the sample. The generated interference pattern, or hologram, is captured by the camera 1014 and the data acquisition system 1030, and is then further processed on the computer and display unit 1040 in order to extract the field information of the sample as shown in
The grating 1004 is used in the reference arm 1020 to introduce a tilt on the order of a wavelength in the reference beam, so that the required spatial carrier frequency is added to the interference signal. This allows the object term related to the sample field to be separated from its complex conjugate and from the DC and auto-correlation terms when the 2-D FFT of the hologram is calculated, as shown in the flow chart in
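The separation of terms in the Fourier domain can be sketched as follows; the carrier position and window size are illustrative assumptions, and a synthetic unit-amplitude object field stands in for the measured sample field:

```python
import numpy as np

def demodulate_hologram(hologram, carrier_frac=(0.25, 0.0), radius_frac=0.1):
    """Extract the complex object field from an off-axis hologram: 2-D FFT,
    crop a window around the carrier frequency (isolating the object term
    from its conjugate and the DC/auto-correlation terms), re-center the
    window at DC, and inverse FFT."""
    ny, nx = hologram.shape
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    # Carrier location set by the grating tilt, given here as a fraction of
    # the sampling bandwidth (assumed values for illustration).
    ky = int(ny / 2 + carrier_frac[1] * ny)
    kx = int(nx / 2 + carrier_frac[0] * nx)
    ry, rx = int(radius_frac * ny), int(radius_frac * nx)
    window = spec[ky - ry:ky + ry, kx - rx:kx + rx]
    # Place the cropped object term back at DC, zero elsewhere.
    recentred = np.zeros_like(spec)
    recentred[ny // 2 - ry:ny // 2 + ry, nx // 2 - rx:nx // 2 + rx] = window
    return np.fft.ifft2(np.fft.ifftshift(recentred))

# Synthetic example: a tilted plane-wave reference puts the carrier at 1/4 of
# the sampling bandwidth; the object is a smooth unit-amplitude phase field.
n = 256
x = np.arange(n)
X, Y = np.meshgrid(x, x)
obj = np.exp(1j * 2 * np.pi * (X**2 + Y**2) / (50 * n**2))   # weak curvature
ref = np.exp(1j * 2 * np.pi * 0.25 * X)                      # spatial carrier
holo = np.abs(obj + ref) ** 2
recovered = demodulate_hologram(holo)
```

The recovered field has near-unit magnitude, as expected for the unit-amplitude object, confirming that the carrier-shifted term was isolated from the DC and conjugate terms.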
That is,
Finally, the resulting pupil field is processed using a digital adaptive optics technique such as DLS-DAO in order to calculate the wavefront error of the eye at the pupil plane with a resolution comparable to the camera pixel size. Note that the reference arm path length can be adjusted by moving the translation stage, and the coherence gate can thereby be set at a given retinal layer, for example the nerve fiber layer (NFL) or the photoreceptor layer, such that only signals from the particular layer where the coherence gate is set interfere with the reference light. This results in a depth resolved interference signal, which upon further post-processing as shown in
The embodiments described herein provide, inter alia, the following advantages.
Techniques and methods for determining wavefront error with respect to different planes in the eye are applied to a multimodal OCT system that can yield both anatomical and aberrometry information of the human eye. This obviates the need for separate devices for anatomical and aberrometry measurements, which not only eliminates inter-device errors in the measurements, but also significantly reduces the device footprint, cost and time for clinicians and surgeons.
The techniques and methods offer truly personalized measurements and calculations by utilizing the measured PSF field of the human eye, which encodes unique information about the light field that has actually travelled through the anatomical refracting surfaces and optical media of the individual subject's eye.
The techniques and methods utilize a focus tunable lens system during the volumetric PSF generation using the multimodal OCT system in order to increase the SNR and the dynamic range of aberration measurements, which can significantly improve the robustness of measurements in different eyes with different anatomical and optical properties.
The methods and techniques offer personalized simulation of post-operative refraction that can be performed in real time, which can be a powerful tool in the objective evaluation of power and design of the IOL to be used for implantation in the patient's eye. This can guide the surgeons to choose the IOL with the power and design that can ensure best quality vision to the patient after surgery.
Using the numerical wave propagation technique in combination with the computational/digital adaptive optics technique such as DLS-DAO, wavefront error can be calculated at different planes without requiring any extra or additional hardware, thus saving cost and reducing system layout and design complexity.
By adapting the multimodal system so that it can be attached to or mounted on a surgical microscope to provide intra-operative measurements, real-time wavefront error calculations can be made that guide surgeons during surgery and help them select the correct IOL for the patient. Particularly for a toric IOL, this can also help position and align the lens accurately by monitoring the quality of the PSF in real time.
Customized computer generated eye models for IOL power calculation have been demonstrated [22] [23] [24] [25] [26] [27]. However, they all use measurements from multiple separate biometric devices in order to generate a computerized geometrical eye model and use virtual ray tracing through the computer generated eye model in order to simulate the PSF. In comparison, the present disclosure, as pointed out above, uses a single multi-modal system and measures the real PSF field of the individual patient's eye, which contains the real optical and biometric properties specific to that eye.
Multi-modal systems that can deliver OCT imaging together with biometry or aberrometry have been demonstrated and reported for both pre-operative and intra-operative applications [12] [7] [28] [29] [30] [31]. However, they all combine devices that work on different physical principles and are often bulky, complex in design and significantly more expensive to manufacture.
The present disclosure for calculation of wavefront error at different planes and IOL power utilizes a system that performs both imaging and aberrometry based on the same OCT principle, thereby reducing design complexity and cost. Also, the use of numerical wave propagation to calculate light distribution at a given plane is expected to be computationally faster than ray tracing methods.
Use of numerical wave propagation techniques to calculate the light field at a given plane is well established in Fourier optics and digital holography [32] [33]. The feasibility of numerical wavefront propagation based on the Fresnel approximation and PSF simulation has been demonstrated in a numerical eye model. Also, in combination with an iterative phase unwrapping technique derived from synthetic aperture radar (SAR), wavefront error calculation and the study of the optical quality of some IOL designs have been shown [34] [35]. However, measurement of the input light beam is required, and the reconstructed numerical eye model is based on a combination of biometric and statistical data, which relies on approximations and assumptions.
Another technique measures the wavefront error at one plane and scales it to a second plane using a method based on a geometrical optics approximation [36]. However, this approximation does not capture the full characteristics of light propagation, as it neglects diffraction effects and has limited accuracy when the actual wavefront is more complex and irregular [37].
Although there are several ways to perform numerical wave propagation, the present disclosure focuses on its utilization as a tool to calculate the optical field in a multimodal OCT system that can generate a volumetric PSF field capturing the real optical and biometric properties of each individual patient's eye without any approximations or assumptions.
Also, in combination with a DAO technique the present disclosure calculates wavefront error at a given plane with high precision and high computational speed that is suitable for real time and intra-operative applications.
PSF imaging of the eye and the estimation of visual quality have been demonstrated [38] [39] [40]. However, the images provide only intensity-based information, and the retrieval of phase requires complex iterative numerical or digital methods [41] [42]. The present disclosure, on the other hand, directly uses the phase information of the generated PSF field and calculates the wavefront error at a given plane using a combination of numerical wave propagation and digital adaptive optics such as DLS-DAO, which enables faster non-iterative calculations and makes it especially suitable for real-time applications.
Focus tunable lenses have been demonstrated to improve the dynamic range of PSF imaging in double-pass configurations and autorefractors, and to improve the depth of focus in OCT imaging [43] [44] [45] [46]. However, a focus tunable system has not previously been implemented to increase the SNR and dynamic range in a multimodal OCT system that can generate a volumetric PSF scan of the eye with both amplitude and phase information, as disclosed herein.
Also, according to the present disclosure, the tunable focus is used only to roughly correct primarily sphere or defocus error and optionally cylinder or astigmatism error, and these rough lower order corrections are taken into account in the calculation of the total, more precise wavefront error at different planes of the eye using the combination of numerical wave propagation and DAO.
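One simple way to fold the rough tunable-lens correction back into the total wavefront error is to add the compensated defocus, modeled here as a thin-lens quadratic phase (an illustrative simplification; the actual compensation would use the calibrated response of the tunable system), to the DAO-reconstructed residual phase:

```python
import numpy as np

def total_wavefront(residual_phase, tunable_power_D, wavelength, dx):
    """Add the coarse defocus compensated by the focus tunable lens back to
    the DAO-reconstructed residual phase at the pupil plane.
    Assumes a thin-lens quadratic phase: phi = -pi * P * r^2 / lambda."""
    ny, nx = residual_phase.shape
    y = (np.arange(ny) - ny / 2) * dx
    x = (np.arange(nx) - nx / 2) * dx
    X, Y = np.meshgrid(x, y)
    defocus = -np.pi * tunable_power_D * (X**2 + Y**2) / wavelength
    return residual_phase + defocus

# Hypothetical example: a flat residual plus 2 D of compensated defocus.
w = total_wavefront(np.zeros((8, 8)), tunable_power_D=2.0,
                    wavelength=840e-9, dx=1e-3)
```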
Intra-operative systems for guiding the selection of IOL power and positioning the IOL, especially axis alignment in the case of a toric IOL, have also been reported and demonstrated [8] [47] [48] [49] [50] [51] [52]. However, they use intra-operative measurements in a regression formula in order to calculate the IOL power and hence do not provide a personalized calculation according to each individual patient's eye's real optical and biometric properties, as is done in the present disclosure.
Also, the prior art related to intra-operative toric IOL alignment relies only on wavefront measurements in a feedback loop to achieve the best possible alignment. The Callisto eye system from Zeiss [53] provides a digital marker during surgery but uses pre-operative keratometry and iris images to guide the alignment. The present disclosure provides direct PSF imaging and PSF width monitoring in real time during surgery for toric IOL alignment, which is a more direct and reliable approach, as the ultimate goal of any wavefront error based vision correction procedure is to achieve the optimal PSF.
Digital holographic techniques have been applied extensively in the fields of metrology and microscopy [54]. Digital holography in off-axis configuration has been proposed as a wavefront sensor for retinal imaging [55]. The present disclosure, however, uses a broad band light source that can provide the coherence gating required for depth resolved wavefront sensing, a digital adaptive optics technique such as DLS-DAO to extract the wavefront error at the pupil plane in a form suitable for clinical applications, and focus tunable systems for dynamic range enhancement.
More recent prior art has demonstrated depth resolved retinal imaging using digital holography in off-axis configuration with a broad band light source [56] [57] [58] [59] [60], as well as digital adaptive optics techniques to achieve cellular level retinal imaging [61] [59]. However, the sample arm is based on a double path configuration, wherein the illumination beam entering the eye and the back-reflected detection beam have the same diameter. Due to this configuration, the detected wavefront error has amplified even aberration terms and suppressed odd aberration terms, and thus no longer accurately represents the true wavefront error of the optics of the eye; hence it cannot be used directly for any clinical application such as wavefront guided LASIK. The present disclosure overcomes this problem by using a separate narrow illumination beam of around 0.6 mm diameter that avoids aberrations in the incoming path, and detects the wavefront error of the eye from the reflected beam passing through the full aperture of the pupil (5-7 mm diameter).
Also, the present disclosure employs focus tunable systems in combination with post-processing steps involving numerical wave propagation and a digital adaptive optics technique such as DLS-DAO in order to accurately detect the wavefront error with high dynamic range at the pupil plane of the eye, in a form that can be directly used for clinical applications such as wavefront guided LASIK or the design of custom glasses, contact lenses and custom IOLs.
The present disclosure of the digital aberrometer based on off-axis digital holography has, inter alia, the following advantages.
The wavefront error can be obtained from the measurement of a single 2D camera shot, which enables high measurement speed in the order of hundreds of frames per second.
In comparison to point scanning systems, high speed parallel detection is enabled by a 2-D camera that provides better phase stability with respect to eye motion, which can lead to better accuracy in the calculation of wavefront error.
A very high resolution wavefront error map with resolution comparable to camera pixel size can be obtained.
The use of a focus tunable system in combination with post-processing steps involving numerical wave propagation and a digital adaptive optics technique such as DLS-DAO can enable wavefront error calculation with high accuracy and dynamic range.
Coherence gating enabled by the use of broad-band light source allows signal detection only from a selected depth layer in retina, thus providing depth resolved wavefront error measurement.
Moreover, the use of a narrow illumination beam enables a single path measurement, which provides the accurate representation of the wavefront error due to the optics of the eye.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not by way of limitation. Likewise, the various diagrams may depict an example architectural or configuration, which are provided to enable persons of ordinary skill in the art to understand exemplary features and functions of the present disclosure. Such persons would understand, however, that the present disclosure is not restricted to the illustrated example architectures or configurations, but can be implemented using a variety of alternative architectures and configurations. Additionally, as would be understood by persons of ordinary skill in the art, one or more features of one embodiment can be combined with one or more features of another embodiment described herein. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments.
It is also understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations can be used herein as a convenient means of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements can be employed, or that the first element must precede the second element in some manner.
Additionally, a person having ordinary skill in the art would understand that information and signals can be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits and symbols, which may be referenced in the above description, can be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
A skilled person would further appreciate that any of the various illustrative logical blocks, units, processors, means, circuits, methods and functions described in connection with the aspects disclosed herein can be implemented by electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two), firmware, various forms of program or design code incorporating instructions (which can be referred to herein, for convenience, as “software” or a “software unit”), or any combination of these techniques.
To clearly illustrate this interchangeability of hardware, firmware and software, various illustrative components, blocks, units, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware, firmware or software, or a combination of these techniques, depends upon the particular application and design constraints imposed on the overall system. Skilled artisans can implement the described functionality in various ways for each particular application, but such implementation decisions do not cause a departure from the scope of the present disclosure. In accordance with various embodiments, a processor, device, component, circuit, structure, machine, unit, etc. can be configured to perform one or more of the functions described herein. The term “configured to” or “configured for” as used herein with respect to a specified operation or function refers to a processor, device, component, circuit, structure, machine, unit, etc. that is physically constructed, programmed and/or arranged to perform the specified operation or function.
Furthermore, a skilled person would understand that various illustrative logical blocks, units, devices, components and circuits described herein can be implemented within or performed by an integrated circuit (IC) that can include a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, or any combination thereof. The logical blocks, units, and circuits can further include antennas and/or transceivers to communicate with various components within the network or within the device. A general purpose processor can be a microprocessor, but in the alternative, the processor can be any conventional processor, controller, or state machine. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other suitable configuration to perform the functions described herein. If implemented in software, the functions can be stored as one or more instructions or code on a computer-readable medium. Thus, the steps of a method or algorithm disclosed herein can be implemented as software stored on a computer-readable medium.
Computer-readable media includes both computer storage media and communication media including any medium that can be enabled to transfer a computer program or code from one place to another. A storage media can be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In this document, the term “unit” refers to software, firmware, hardware, or any combination of these elements for performing the associated functions described herein. Additionally, for purposes of discussion, the various units are described as discrete units; however, as would be apparent to one of ordinary skill in the art, two or more units may be combined to form a single unit that performs the associated functions according to embodiments of the present disclosure.
Additionally, memory or other storage, as well as communication components, may be employed in embodiments of the present disclosure. It will be appreciated that, for clarity purposes, the above description has described embodiments of the present disclosure with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units, processing logic elements or domains may be used without detracting from the present disclosure. For example, functionality illustrated to be performed by separate processing logic elements, or controllers, may be performed by the same processing logic element, or controller. Hence, references to specific functional units are only references to a suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.
Various modifications to the implementations described in this disclosure will be readily apparent to those skilled in the art, and the general principles defined herein can be applied to other implementations without departing from the scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the novel features and principles disclosed herein, as recited in the claims below.
| Number | Date | Country | Kind |
|---|---|---|---|
| 21177197.7 | Jun 2021 | EP | regional |