Field of the Disclosure
The present disclosure relates in general to a spectrally encoded endoscope (SEE) and, more particularly, to a three-dimensional endoscope design and an image reconstruction process for a spectrally encoded endoscope.
Description of the Related Art
The first endoscope was invented more than 50 years ago and consisted of a bundle of optical fibers. Since then, significant progress has been made in minimally invasive surgery, thereby reducing the risk of complications, costs, and recovery times.
With the advent of the inexpensive, miniature CMOS sensors used mainly in smartphones, endoscopes are shifting from fiber bundles to designs with imaging sensors at the distal tip of a probe. One significant drawback of these CMOS-sensor-based endoscopes is the tradeoff between the scope diameter and the resolution. The spectrally encoded endoscope (SEE) is one of the smallest endoscopes and has shown great potential for use in minimally invasive surgeries. The original SEE system is designed for side-view applications. Due to its small diameter of only several hundred microns, the probe itself is much more flexible than other scopes available on the market. However, current SEE probes provide a distorted two-dimensional view of the target to be imaged. It is desirable to provide stereovision or even three-dimensional (3D) vision of the target using SEE technology without increasing the diameter of the probe.
A three-dimensional (3D) endoscope is provided. The 3D endoscope comprises a probe; a positioning device to locate a tip of the probe at a first position O and a second position O′; a light guide extending through the probe and configured to guide light propagating through the probe so as to project onto a surface a first light beam from the tip at the first position and a second light beam from the tip at the second position; a detector configured to detect the first light beam reflected from the surface and the second light beam reflected from the surface; and an image processor configured to determine a first distance R between the first position and the surface and a second distance R′ between the second position and the surface based on a position difference between the first and second positions, a first deflection angle θ of the first light beam deflected from the probe at the first position, and a second deflection angle θ′ of the second light beam deflected from the probe at the second position; determine a first rotation angle φ and a second rotation angle φ′; obtain image data carried by the first and second light beams detected by the detector; and obtain a 3D image based on the first distance R, the first deflection angle θ, the first rotation angle φ, the second distance R′, the second deflection angle θ′, and the second rotation angle φ′.
In one embodiment, the three-dimensional endoscope further comprises a first optical fiber extending through the probe and configured to guide the light propagating through the probe to project the first and second light beams. The three-dimensional endoscope also comprises a second optical fiber adjacent to the first optical fiber and configured to guide the first and second light beams detected by the detector to the image processor. The position difference between the first and second positions may include a translation z0 along a z-axis along which the probe extends, and the first distance R and the second distance R′ may be determined based on the relationship of:
wherein θ is an angle of the first light beam deflected from the z-axis, and θ′ is an angle of the second light beam deflected from the z-axis.
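The referenced relationship is not reproduced above; however, the geometry it describes (a single surface point viewed from two tip positions separated by z0 along the z-axis, at deflection angles θ and θ′ measured from that axis) admits a standard law-of-sines triangulation. The sketch below is a minimal illustration under that assumption; the function name `triangulate_axial` and the sign convention (tip advanced toward the surface so that θ′ > θ) are illustrative, not taken from the disclosure.

```python
import math

def triangulate_axial(theta, theta_prime, z0):
    """Estimate the distances R and R' to a surface point P seen from two tip
    positions separated by z0 along the probe (z) axis.

    Assumes the law of sines in triangle O-O'-P, with theta and theta_prime
    the deflection angles from the z-axis at the first and second positions.
    """
    apex = theta_prime - theta          # angle subtended at the surface point P
    if abs(math.sin(apex)) < 1e-9:
        raise ValueError("Rays are (nearly) parallel; depth is unresolvable.")
    r = z0 * math.sin(theta_prime) / math.sin(apex)      # distance from O to P
    r_prime = z0 * math.sin(theta) / math.sin(apex)      # distance from O' to P
    return r, r_prime

# Example: tip advanced by 0.5 mm, deflection angles of 20 and 22 degrees.
R, R_prime = triangulate_axial(math.radians(20), math.radians(22), 0.5)
print(f"R = {R:.2f} mm, R' = {R_prime:.2f} mm")
```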
In another embodiment, the position difference may include a distance l between the first position O and the second position O′ and an angle difference of the probe between the tip located at the first position and the tip located at the second position. In this embodiment, the first distance R and the second distance R′ may be determined based on the relationship of:
wherein l is the distance between the tip located at the first position O and the tip located at the second position O′;
δ2 is the angle difference between the first light beam deflected from the probe at the first position and the vector OO′ extending through the first and second positions; and
δ1 is the angle difference between the second light beam deflected from the tip located at the second position and the vector OO′.
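As above, the omitted relationship can be illustrated with a law-of-sines sketch, here over the baseline of length l. The angle convention assumed below (δ2 measured at O between the first beam and OO′, δ1 measured at O′ between the second beam and the same vector direction, giving an apex angle of δ1 − δ2 at P) is an assumption made only for illustration.

```python
import math

def triangulate_baseline(delta1, delta2, baseline_l):
    """Estimate R and R' from the baseline length l = |OO'| and the angles
    delta2 (first beam vs. OO', at O) and delta1 (second beam vs. OO', at O').

    Assumes interior angles delta2 at O and (pi - delta1) at O' in triangle
    O-O'-P, so the apex angle at P is delta1 - delta2.
    """
    apex = delta1 - delta2
    if abs(math.sin(apex)) < 1e-9:
        raise ValueError("Rays are (nearly) parallel; depth is unresolvable.")
    r = baseline_l * math.sin(delta1) / math.sin(apex)        # distance O-P
    r_prime = baseline_l * math.sin(delta2) / math.sin(apex)  # distance O'-P
    return r, r_prime

# Example: 0.5 mm baseline, beams at 25 and 20 degrees from the vector OO'.
print(triangulate_baseline(math.radians(25), math.radians(20), 0.5))
```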
The positioning device may be further configured to locate the tip of the probe at a third position. The light source is further configured to generate light propagating through the probe via a light guide to project onto the surface a third light beam from the tip at the third position, and the detector is further configured to detect the third light beam reflected from the surface at the third position. Accordingly, the image processor may determine a third distance R″ between the third position and the surface based on a position difference between the first and third positions and a third deflection angle θ″ of the third light beam deflected from the probe at the third position; determine a third rotation angle φ″ at the third position; obtain image data carried by the third light beam detected by the detector; and obtain the 3D image based on the first distance R, the first deflection angle θ, the first rotation angle φ, the second distance R′, the second deflection angle θ′, the second rotation angle φ′, the third distance R″, the third deflection angle θ″, and the third rotation angle φ″. The system then becomes overdetermined, and it is possible to solve for R optimally while accounting for measurement errors and noise.
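With three or more viewpoints, each ray toward the same surface point gives an independent constraint, so the point, and hence R, can be solved in a least-squares sense. The following sketch is one possible formulation, not the formulation of the disclosure: it finds the 3D point minimizing the summed squared distance to all measured rays and then reads off R from the first tip position.

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares intersection of several rays (origin o_k, direction d_k).

    Minimizes sum_k || (I - d_k d_k^T)(p - o_k) ||^2 over p, which tolerates
    measurement noise when the rays do not meet exactly.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Three tip positions along z and slightly noisy viewing directions toward P.
origins = [np.array([0.0, 0.0, z]) for z in (0.0, 0.5, 1.0)]
target = np.array([1.5, 0.0, 6.0])
dirs = [(target - o) + np.random.normal(0.0, 1e-3, 3) for o in origins]
p_hat = closest_point_to_rays(origins, dirs)
R = np.linalg.norm(p_hat - origins[0])      # distance from the first tip position
print(p_hat, R)
```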
In another embodiment, the probe comprises at least a first light guide and a second light guide, and the light source is configured to generate light that is guided by the first light guide to project a first light beam from a tip of the first light guide onto a surface and guided by the second light guide to project a second light beam from a tip of the second light guide onto the surface. The detector is configured to detect the first and second light beams reflected from the surface. The image processor is configured to determine a first distance R between the tip of the first light guide and the surface and a second distance R′ between the tip of the second light guide and the surface based on a position difference between the tips of the first and second light guides, a first deflection angle θ of the first light beam projected from the tip of the first light guide, and a second deflection angle θ′ of the second light beam projected from the tip of the second light guide; determine a first rotation angle φ of the first light guide and a second rotation angle φ′ of the second light guide; obtain image data carried by the first and second detected light beams; and obtain a 3D image based on the first distance R, the first deflection angle θ, and the first rotation angle φ, or the second distance R′, the second deflection angle θ′, and the second rotation angle φ′.
A three-dimensional endoscopic image reconstruction method is provided in another embodiment. The method comprises the following steps. A first light beam is projected from a first position onto a surface, and a second light beam is projected from a second position onto the surface. The first light beam and the second light beam reflected from the surface are detected. A first distance R between the first position and the surface is determined based on a position difference between the first and second positions and a first deflection angle θ of the first light beam projected from the first position. A second distance R′ between the second position and the surface is determined based on the position difference and a second deflection angle θ′. Image data carried by the detected first and second light beams are obtained, and a 3D image can be obtained based on the first distance R, the first deflection angle θ, and a first rotation angle φ at the first position, or the second distance R′, the second deflection angle θ′, and a second rotation angle φ′ at the second position.
To implement the three-dimensional endoscopic image reconstruction method, a probe is provided. A tip of the probe is located at the first position and the second position, such that the first and second light beams are projected onto the surface from the first and second positions, respectively. Alternatively, a probe including a first light guide and a second light guide is provided, and the first light beam is projected from a tip of the first light guide at the first position, while the second light beam is projected from a tip of the second light guide at the second position. The three-dimensional endoscopic image reconstruction method may also comprise projecting a third light beam from a third position onto the surface; detecting the third light beam reflected from the surface; determining a third distance R″ between the third position and the surface based on a position difference between the first and third positions and a third deflection angle θ″ of the third light beam projected from the third position; determining a third rotation angle φ″ of the probe at the third position; obtaining image data carried by the detected third light beam; and obtaining the 3D image based on the third distance R″, the third deflection angle θ″, and the third rotation angle φ″. The system then becomes overdetermined, and it is possible to solve for R optimally while accounting for measurement errors and noise.
The three-dimensional endoscopic image reconstruction method may further comprise measuring the irradiance E of the first and second light beams reflected from the surface to determine the first distance R and the second distance R′.
In another embodiment, a three-dimensional endoscope is provided. The three-dimensional endoscope comprises a probe; a light guide configured to guide light propagating through the probe such that a light beam is projected from a tip of the probe onto a surface; a first detector configured to detect the light beam reflected from the surface at a first position; a second detector configured to detect the light beam reflected from the surface at a second position, wherein the second position is displaced from the first position by a distance z0 along an axis of the probe; and an image processor configured to determine a first distance R between the first position and the surface based on a first intensity I of the light beam detected at the first position, a second intensity I′ of the light beam detected at the second position, and the distance z0. The first distance R is determined based on the relationship of:
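The referenced relationship is not reproduced above; a minimal sketch, assuming a pure inverse-square falloff of the collected intensity with the detector-to-surface distance, is given below. The function name and the example numbers are illustrative only.

```python
import math

def distance_from_intensity_ratio(i_near, i_far, z0):
    """Estimate the distance R from the first (nearer) detector position to the surface.

    Assumes the collected intensity falls off as 1/R^2, so that
    i_near / i_far = ((R + z0) / R)^2 when the second detector is farther
    from the surface by z0 along the probe axis.
    """
    ratio = math.sqrt(i_near / i_far)
    if ratio <= 1.0:
        raise ValueError("Expected the nearer detector to collect more light.")
    return z0 / (ratio - 1.0)

# Example: the far detector sits 1 mm behind the near one and collects 20% less light.
print(distance_from_intensity_ratio(1.0, 0.8, 1.0))   # ~8.47 mm
```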
The three-dimensional imaging reconstruction discussed here may also be applied to other types of imaging reproduction scopes.
The following description is of certain illustrative embodiments, although other embodiments may include alternatives, equivalents, and modifications. Additionally, the illustrative embodiments may include several novel features, and a particular feature may not be essential to practice the devices, systems, and methods described herein.
Due to its small diameter of only several hundred microns, the probe 30 is flexible and can be maneuvered to inspect hard-to-reach areas with a minimum bending radius of several millimeters. It is also possible to obtain color images with a modified design, as well as fluorescence capability. See, for example, color probes and methods as disclosed in U.S. Pat. No. 9,254,089; WO 2015/116939; U.S. patent application Ser. No. 15/340,253; and U.S. Pat. App. Ser. No. 62/363,119, and fluorescence as described in U.S. Pat. No. 8,928,889 and U.S. Pat. Pub. 2012/0101374, each of which is herein incorporated by reference. The SEE apparatus as shown in
where r is the parameter that determines the distance of the point from the origin. The target surface P can be represented by a function of x, y, and z, that is, f(x, y, z). At the interception point of the light ray on the target surface P,
f(x,y,z)=0 (2).
From Equations (1) and (2), the length of the light ray, that is, the distance r between the tip and the interception point of the light ray on the target surface, can be solved, and thus the interception point of each light ray in three dimensions is determined. This is evident if one considers a plane in three dimensions as:
where (a, b, c) is the surface normal to the plane as shown in
As discussed above, the probe 30 rotates about the z-axis. The azimuth angle φ can be determined by an encoder of the motor driving the probe, for example, the motor 40 as shown in
where Γ is the total scanning angle, for example, 70° in one embodiment of the current invention, N is the number of pixels in the linear portion, for example, 800, and m is the step index between 1 and N.
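As a minimal sketch of this linear mapping from encoder step index to azimuth angle (using the example values Γ = 70° and N = 800 quoted above), the helper below is illustrative only; whether the mapping uses m/N or (m − 1)/(N − 1) is an encoder convention not spelled out here.

```python
def azimuth_from_step(m, total_angle_deg=70.0, n_steps=800):
    """Map an encoder step index m (1..N) to an azimuth angle in degrees.

    Assumes a linear mapping over the total scanning angle; the endpoint
    convention (m/N vs. (m-1)/(N-1)) is an assumption.
    """
    if not 1 <= m <= n_steps:
        raise ValueError("step index out of range")
    return total_angle_deg * (m - 1) / (n_steps - 1)

print(azimuth_from_step(1), azimuth_from_step(800))   # 0.0 and 70.0 degrees
```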
Each wavelength of the light propagating through the grating 31 is diffracted to a distinct angle towards the target surface. Equation (5) shows the relationship between the spectral distribution of the light ray projected from the probe 30 and the incident and diffractive angles of the light propagating through the grating 31:
−ni sin θi+nd sin θd=mGλ (5),
where ni and nd are the refractive indices of the media through which the light propagates on the incident side and the diffractive side of the grating 31, respectively; θi is the incident angle of the light onto the grating 31; θd is the diffractive angle of the light projected from the grating 31; m is the diffraction order; G is the grating constant of the grating 31; and λ is the wavelength of the light. The sign convention follows the definition in
θ(λ)=θi−θd(λ) (6)
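The spectral-to-angular mapping of Equations (5) and (6) can be evaluated directly. The sketch below assumes example values for the grating constant, the incident angle, and the refractive indices; none of these values are taken from the disclosure.

```python
import math

def deflection_angle(wavelength_nm, theta_i_deg, n_i=1.0, n_d=1.0,
                     grating_lines_per_mm=1000.0, order=1):
    """Deflection angle theta(lambda) = theta_i - theta_d(lambda), from the
    grating equation -n_i*sin(theta_i) + n_d*sin(theta_d) = m*G*lambda.

    The grating constant (1000 lines/mm), incident angle, and refractive
    indices are illustrative example values.
    """
    G = grating_lines_per_mm * 1e3            # grating lines per meter
    lam = wavelength_nm * 1e-9                # wavelength in meters
    theta_i = math.radians(theta_i_deg)
    sin_theta_d = (order * G * lam + n_i * math.sin(theta_i)) / n_d
    if abs(sin_theta_d) > 1.0:
        raise ValueError("No propagating diffraction order for these parameters.")
    theta_d = math.asin(sin_theta_d)
    return math.degrees(theta_i - theta_d)

for lam in (450, 550, 650):                   # blue, green, red (nm)
    print(lam, round(deflection_angle(lam, theta_i_deg=10.0), 2))
```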
The wavelength λ of the light at the spectrometer may be calibrated based on interpolation or extrapolation from two or more known wavelengths, that is, light of two or more colors, and the pixel index P(λ) of each pixel by Equation (7):
where λ1 and λ2 are the wavelengths of known spectra, for example, blue and red lasers. The linearity of the spectral distribution at the spectrometer is shown in
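A minimal sketch of this two-point calibration, assuming a linear pixel-to-wavelength mapping at the spectrometer, is given below; the pixel indices and wavelengths in the example are illustrative.

```python
def calibrate_wavelength(pixel, p1, lam1, p2, lam2):
    """Linearly interpolate or extrapolate the wavelength at a given pixel index
    from two known calibration points (p1, lam1) and (p2, lam2), e.g. the pixel
    positions of blue and red laser lines."""
    return lam1 + (lam2 - lam1) * (pixel - p1) / (p2 - p1)

# Example: a 450 nm line lands on pixel 120 and a 650 nm line on pixel 920.
print(calibrate_wavelength(520, 120, 450.0, 920, 650.0))   # 550.0 nm
```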
By applying the deflection angle θ and the azimuth angle φ obtained from Equations (4) and (6), the distance R between the tip of the probe 30 and the interception point of the light beam on the target surface P can be obtained by assuming a target surface defined by Eq. (2). If the target surface is unknown, it is also possible to reconstruct the surface mesh. Referring to
With the known variables in the triangle OO′P, including θ, θ′, and z0, the distance R between the tip of the probe 30 at the first position and the target surface P and the distance R′ between the tip of the probe 30 at the new position and the target surface P can be obtained by:
Once the distances R and R′ are known, combining the known information about θ and φ, the unknown coordinates of the point of interest can be obtained as [R, θ, φ]. The 3D image construction can thus be performed point by point from the image information carried by the light rays reflected from the interception points of the target surface, including the coordinate information discussed here.
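For point-by-point reconstruction, each recovered triple [R, θ, φ] can be converted into Cartesian coordinates in the probe frame. The sketch below assumes the spherical convention used above, with θ measured from the z-axis and φ the azimuth about it.

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert a reconstructed point [R, theta, phi] (distance, deflection angle
    from the z-axis, azimuth about the z-axis) into x, y, z coordinates in the
    probe frame."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

print(spherical_to_cartesian(5.0, math.radians(20), math.radians(45)))
```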
where C is the transition matrix between the two coordinate systems and (x0, y0, z0) are the coordinates of the new origin O′ in the XYZ coordinate system. The transition matrix C can be presented as
and calculated as:
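The transition matrix itself is not reproduced above; as a sketch, the coordinate change amounts to a rigid transform composed of the rotation C and the translation to the new origin O′. The composition order and the row/column convention assumed below are illustrative, not taken from the disclosure.

```python
import numpy as np

def transform_point(point_xyz, rotation_c, origin_o_prime):
    """Express a point given in the original XYZ frame in the new frame whose
    origin is O' = (x0, y0, z0) and whose orientation differs by the transition
    (rotation) matrix C.

    The exact composition (rotate-then-translate vs. translate-then-rotate)
    used in the omitted equation is an assumption here.
    """
    return rotation_c @ (np.asarray(point_xyz, dtype=float)
                         - np.asarray(origin_o_prime, dtype=float))

# Example: new frame translated by z0 = 0.5 along z and rotated 5 degrees about x.
a = np.radians(5.0)
C = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(a), -np.sin(a)],
              [0.0, np.sin(a), np.cos(a)]])
print(transform_point([1.0, 0.0, 6.0], C, [0.0, 0.0, 0.5]))
```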
In
With the known parameters, including δ1, δ2, R, and R′, the coordinate [R, θ, φ] of the interception point of the light ray on the target surface P can be derived. The 3D image construction can thus be performed from the image information carried by the light rays reflected from the interception point of the target surface P, including the coordinate information discussed here.
Instead of moving the tip of the probe 30 to two different positions, the probe 30 may include more than one fiber extending through it to project multiple light beams onto the target surface P. As shown in
The intensity of the light ray reflected from the target surface provides further information for 3D image reconstruction. As understood, the light intensity I is a function of the distance r between the tip of the probe and the target surface:
where J1 is the Bessel function of the first kind and the normalized radius r′ is given by
D is the aperture diameter, f is the focal length, N is the f-number, and λ is the wavelength of the light. Once the light intercepts the target surface, it is reflected and collected by a detection fiber. The bidirectional reflectance distribution function (BRDF) is often used for this calculation. As shown in
where L is the radiance, or power per unit solid angle in the direction of a ray per unit projected area perpendicular to the ray; E is the irradiance, or power per unit surface area; and θi is the angle between ωi and the surface normal. The index i indicates incident light, whereas the index r indicates reflected light. If the reflective surface is Lambertian, the amount of light that can be collected is inversely proportional to cos²θ.
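Returning to the focused-spot model referenced above, the Bessel-function intensity profile can be evaluated as follows. The normalization r′ = πr/(λN) and the example parameters are assumptions, since the omitted equations are not reproduced here.

```python
import numpy as np
from scipy.special import j1

def airy_intensity(r, wavelength, f_number, i0=1.0):
    """Focused-spot (Airy) intensity profile, I(r') = i0 * (2*J1(r')/r')**2,
    with normalized radius r' = pi * r / (wavelength * f_number).

    The normalization of r' and the parameters used below are assumptions.
    """
    r_norm = np.pi * np.atleast_1d(np.asarray(r, dtype=float)) / (wavelength * f_number)
    out = np.full_like(r_norm, i0)          # the profile peaks at i0 for r = 0
    nz = r_norm != 0
    out[nz] = i0 * (2.0 * j1(r_norm[nz]) / r_norm[nz]) ** 2
    return out

# Spot profile for a 550 nm beam at f/4, sampled out to 10 microns from the axis.
radii_m = np.linspace(0.0, 10e-6, 6)
print(airy_intensity(radii_m, wavelength=550e-9, f_number=4.0))
```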
The amount of light that can be collected is also inversely proportional to the square of the distance between the target and the detection fiber. If the image is assumed to be right at the focus, the total collectable energy at the MMF (multi-mode fiber) is:
where R is the distance between the tip of the probe and the interception point of the light on the target surface, and θi is the incident angle at the target surface. Equation (15) can be combined with Equation (9) to obtain a more accurate solution for the distances R and R′. As the critical power P is so sensitive to the distance R, it may be sufficient to estimate the distance based on the reflective power received, assuming a fixed θi and proper calibrations for different tissue reflectivities (BRDFs). By varying the distance from the tip of the probe to the target by a known amount, the drop in received power provides a hint about the distance values. For example, by moving the probe away from the target until the power falls to 1/16 of its original value, the distance moved equals R, assuming the incident angle θi at the target surface is unchanged.
Thereby, the distance R can be derived from:
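Equation (17) is not reproduced above; a minimal sketch consistent with the 1/16-power example (i.e., assuming the received power falls off as the inverse fourth power of the distance, from the combined spread of illumination and collection) is given below. The function name and scaling assumption are illustrative only.

```python
def distance_from_power_drop(p_before, p_after, displacement):
    """Estimate the tip-to-target distance R from the drop in received power
    after pulling the probe back by a known displacement d.

    Assumes the received power scales as 1/R^4, consistent with the 1/16-power
    example above: p_after / p_before = (R / (R + d))**4, hence
    R = d / ((p_before / p_after)**0.25 - 1).
    """
    ratio = (p_before / p_after) ** 0.25
    if ratio <= 1.0:
        raise ValueError("Power is expected to drop when moving away from the target.")
    return displacement / (ratio - 1.0)

# Moving back by R itself should drop the power to 1/16 of its original value:
print(distance_from_power_drop(16.0, 1.0, displacement=3.0))   # returns 3.0
```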
The distance R obtained from Equation (17) and the distance R′ derived in a similar manner can be combined with Equations (9), (12), and (15) to ensure an accurate solution for the distances R and R′. For forward-view applications, fibers with larger NAs could be used to increase the light acceptance angle. For side-view applications, both fibers may be covered by gratings in order to accept light at very large incident angles. In both cases, the amount of light that can enter the waveguide will depend on the light detection angle θ, which is also the angle between the reflected light and the surface normal of the detection fiber. A proper calibration is necessary to account for this loss, together with other losses on the detection side, in order to calculate the amounts of light I and I′ that reach the waveguide surface. It is worth noting that only the ratio between I and I′ matters here, i.e., absolute calibration is not necessary. It is also interesting to note that Equations (9), (12), and (15) do not contain the intensity, which means they are immune to this potential issue.
It will be appreciated that the 3D image reconstruction structure and method can be applied to non-planar target surfaces as well as planar target surfaces. Stereo vision can also be implemented based on the 3D information obtained, which aims to provide depth perception and comfort for the users. Spectral power compensation based on distances and angles, that is, the critical relationship, and spot-size deconvolution can also be implemented.
While the above disclosure describes certain illustrative embodiments, the invention is not limited to the above-described embodiments, and the following claims include various modifications and equivalent arrangements within their scope.
Number | Name | Date | Kind |
---|---|---|---|
6564087 | Pitris et al. | May 2003 | B1 |
6831781 | Tearney et al. | Dec 2004 | B2 |
7304724 | Durkin et al. | Dec 2007 | B2 |
7625335 | Deichmann et al. | Dec 2009 | B2 |
8928889 | Tearney et al. | Jan 2015 | B2 |
9254089 | Tearney et al. | Feb 2016 | B2 |
9846940 | Wang | Dec 2017 | B1 |
20040222987 | Chang et al. | Nov 2004 | A1 |
20060017720 | Li | Jan 2006 | A1 |
20080230705 | Rousso | Sep 2008 | A1 |
20100210937 | Tearney et al. | Aug 2010 | A1 |
20110275899 | Tearney et al. | Nov 2011 | A1 |
20120101374 | Tearney et al. | Apr 2012 | A1 |
20120190928 | Boudoux | Jul 2012 | A1 |
20130093867 | Schick | Apr 2013 | A1 |
20140276108 | Vertikov | Sep 2014 | A1 |
20160005185 | Geissler | Jan 2016 | A1 |
20160316999 | Lamarque | Nov 2016 | A1 |
20170020627 | Tesar | Jan 2017 | A1 |
20170143442 | Tesar | May 2017 | A1 |
20170180704 | Panescu | Jun 2017 | A1 |
20170181798 | Panescu | Jun 2017 | A1 |
20170280970 | Sartor | Oct 2017 | A1 |
20180103246 | Yamamoto | Apr 2018 | A1 |
20180164574 | Wang | Jun 2018 | A1 |
Number | Date | Country |
---|---|---|
908849 | Apr 1999 | EP |
200238040 | May 2002 | WO |
2015116939 | Aug 2015 | WO |
Entry |
---|
Park, H., et al., “Forward imaging OCT endoscopic catheter based on MEMS lens scanning”, Optics Letters, Jul. 1, 2012, vol. 37, No. 13. |
Kang, D., et al., “Spectrally-encoded color imaging”, Optics Express, Aug. 17, 2009, pp. 15239-15247, vol. 17, No. 17. |
Penne, J. et al., “Time-of-Flight 3-D Endoscopy”, 12th International Conference, London, UK, Sep. 20-24, 2009, pp. 467-474, vol. 5761. |
Thormahlen, T. et al., “Three-Dimensional Endoscopy”; Information Technology Laboratory, University of Hannover, Germany. |
McLaughlin, R.A., et al., “Static and dynamic imaging of alveoli using optical coherence tomography needle probes”, Journal of Applied Physiology, Sep. 15, 2012, vol. 113, No. 6. |
Zeidan, A., et al, “Spectral imaging using forward-viewing spectrally encoded endoscopy”, Biomedical Optics Express, Feb. 1, 2016, pp. 392-398, vol. 7, No. 2. |
Zeidan, A., et al, “Miniature forward-viewing spectrally encoded endoscopic probe”, Optics Letters, Aug. 15, 2014, pp. 4871-4874, vol. 39, No. 16. |
Number | Date | Country | |
---|---|---|---|
20180164574 A1 | Jun 2018 | US |