HIGH RESOLUTION DIFFUSE OPTICAL TOMOGRAPHY USING SHORT RANGE INDIRECT IMAGING

Information

  • Patent Application
  • Publication Number
    20230046768
  • Date Filed
    April 21, 2021
  • Date Published
    February 16, 2023
Abstract
A fast imaging apparatus and method for high resolution diffuse optical tomography with a line imaging and illumination system is disclosed. The method uses an algorithm comprising a convolution approximation of the forward heterogeneous scattering model that can be inverted to recover structures beneath the surface deeper than ever before. The method can detect reasonably accurate boundaries and relative depths of absorption variations up to a depth of approximately 8 mm below a highly scattering medium such as skin.
Description
BACKGROUND OF THE INVENTION

Imaging below the skin and through tissue is important for diagnosis of several dermatological and cardiovascular conditions. MRI remains the best current approach to obtain a three-dimensional visualization below the skin. But MRI is expensive, requires visits to a hospital or imaging center, and is highly inconvenient for patients. Non-invasive imaging using visible or near-infrared light has the potential to make devices portable, safe, and convenient to use at home or at point-of-care centers.


While light penetrates deep through tissue, it scatters continuously, resulting in poor image contrast. This makes it challenging to recover useful properties of the anatomical structures below the skin. Further, the anatomical structures include a complex heterogeneous distribution of tissue, vasculature, and tumors (benign or malignant) that vary in optical properties (density, scattering, absorption) and depth below the skin. This makes the modeling of light propagation below the skin challenging.


Fortunately, under the highly scattering regime, the photons can be assumed to be traveling diffusely in the medium and can be described as a random walk. This has enabled accurate forward models under diffuse photon propagation. In order to improve contrast, imaging detectors and sources are placed at different locations on the skin. This arrangement captures only indirectly scattered light while eliminating the dominant direct reflection and backscatter. The approaches that attempt to invert the diffusion model with such indirect light imaging systems are commonly classified as “Diffuse Optical Tomography” (DOT).


DOT is an approach to recover subsurface structures beneath the skin by measuring light propagation beneath the surface. The method is based on optimizing the difference between the images collected and a forward model that accurately represents diffuse photon propagation within a heterogeneous scattering medium. However, to date, most works have used only a few source-detector pairs and recover the medium at very low resolution, and increasing the resolution requires prohibitive computation and storage.


Due to its portability and ease of use, DOT is becoming an attractive choice over traditional modalities like MRI for cerebral as well as hemodynamic imaging. More recently, DOT has been shown to be a promising tool for detecting strokes, breast cancer, and for thyroid imaging. There are, however, two important drawbacks to existing DOT approaches. First, the number of source-detector pairs limits the form-factor of the device. Even with multiple source-detector pairs, applying traditional inverse methods for DOT results in poor resolution, as shown in the second column of View (B) of FIG. 1. Second, as the number of source-detector pairs increases, the computational complexity of the algorithms that recover the volumetric medium increases prohibitively.


SUMMARY OF THE INVENTION

The apparatus and method disclosed herein comprise a novel hardware arrangement and a fast imaging algorithm for high resolution diffuse optical tomography with a line imaging and illumination system.


An imaging and algorithmic approach to resolve these fundamental drawbacks in DOT systems is presented. Instead of separate source-detector pairs, a high resolution 2D camera and a MEMS projector are used to obtain a large number of effective source-detector pairs, as is commonly done in vision and graphics. This makes the system much more compact and programmable. Second, to increase the speed of acquisition, a line on the skin is illuminated and a different line is captured by the sensor. This arrangement captures short-range indirect light transport much faster than point-to-point illumination and sensing, but prior systems use a rectified configuration in which the projector and camera planes are parallel, leading to low spatial resolution over a large overlapping stereo volume. The design of the present invention instead uses a verged configuration that enables high spatial resolution imaging within a small region on the skin (approximately 8 cm×8 cm).


Using this verged design, the invention includes an efficient algorithm that is based on the convolution approximation to the forward model for light propagation in a heterogeneous subsurface medium. The convolution kernel is independent of the heterogeneous structure and only depends on the imaging configuration and the scattering properties of the homogeneous scattering medium. This allows the recovery of the heterogeneous structures at much higher spatial resolution compared with the traditional DOT, as shown in the bottom image in View (B) of FIG. 1.


Key to the present invention is a convolution approximation of the forward heterogeneous scattering model that can be inverted to recover structures beneath the surface deeper than ever before. The method can detect reasonably accurate boundaries and relative depths of heterogeneous structures up to a depth of 8 mm below a highly scattering medium such as skin. This work extends the potential of DOT to recover intricate structures (blood vessels, tissue, tumors, etc.) deep beneath the skin for diagnosing many dermatological and cardiovascular conditions.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1, View (A) shows diffuse optical tomography (DOT) with line-scanned camera and line scanning MEMS projector compared, in View (B), with traditional DOT with 25 source-detector pairs.



FIG. 2 shows the source-detector configuration in a typical DOT system. The fluence rate at the detector is given by the superposition of the real diffuse source located z0 below the surface and a negative image source mirrored about the zero-flux line imposed by the extrapolated boundary condition (EBC).



FIG. 3 shows the generation of short-range indirect images for a small and a large pixel-to-illumination distance, shown in View (A) and View (B) respectively.



FIG. 4 shows visualizations of phase functions for different pixel-to-illumination line distances in View (A) and View (B), in the y-z plane (top row), and the x-y plane (bottom row).



FIG. 5 shows images of a paper sticker with letters occluded by several other pages captured using different devices in Views (A-D).



FIG. 6, View (A) shows a configuration for the calibration of the device of the present invention. View (B) shows a graph accounting for mis-alignment of the MEMS mirror.



FIG. 7, Views (A, B) shows different planes illuminated with poses relative to the camera. View (C) shows laser rays fitted in 3D space.



FIG. 8, Views (A-C) show simulated direct/global light image, short-range images with different Δy settings, and the DOT results.





DEFINITIONS

Absorption Variation (also referred to as a “heterogeneity”) is defined as any subdermal area having an absorption coefficient different from the background homogeneous scattering medium, which has a relatively homogeneous absorption coefficient. The absorption variation can be caused by, for example, a subdermal structure such as blood vessels, tumors, benign tissue, etc.


DETAILED DESCRIPTION

As a prelude to a discussion of the convolution approximation of the heterogeneous model, the basic theory in DOT for dense scattering tissues is derived. First, the Diffusion Equation for the surrounding homogeneous medium is derived from the Radiative Transfer Equation, assuming that the homogeneous scattering medium surrounding the absorption variation is dense enough such that the light propagation in the medium is dominated by the high-order light scattering events and the angular distribution of light propagation direction is isotropic. Thereafter, the Diffusion Equation for the heterogeneities is derived, assuming that the light absorption coefficient discrepancy dominates the scattering property difference between the heterogeneous embedding and the surrounding medium. Although the assumptions do not always hold perfectly, the method of the present invention is robust to the cases where the assumptions fail.


The Radiative Transfer Equation (RTE) describes the light radiance, which models light propagation, at a particular position in the scattering medium at a specific time instant. It is generally difficult to solve the RTE in closed form. When the medium is highly scattering, as in the case of biological tissue, the diffusion approximation is commonly applied to obtain the diffusion equation. The photon diffusion equation models the fluence rate Φ, defined as the total light radiance integrated over all directions, at a position {right arrow over (r)} in the scattering medium for a continuous intensity light source S, given as:





\left(-D(\vec{r})\,\nabla^2 + \mu_a(\vec{r})\right)\Phi(\vec{r}) = S(\vec{r})  (1)


where μa({right arrow over (r)}), μ′s({right arrow over (r)}) are the absorption coefficient and the reduced scattering coefficient of the medium respectively, and D({right arrow over (r)})=1/(3(μ′s({right arrow over (r)})+μa({right arrow over (r)}))) is known as the diffusion coefficient of the scattering medium. The tissue is commonly modeled as a semi-infinite scattering homogeneous medium, with the source and the detector positioned at the air-medium interface. When the light source is treated as a constant pencil beam source, i.e. S({right arrow over (r)})=Sδ({right arrow over (r)}−{right arrow over (rs)}), the solution for the fluence rate in Eq. (1) for the configuration in FIG. 2 can be written in an analytical form using extrapolated zero boundary conditions (EBC):











\Phi_0(\vec{r}_d, \vec{r}_s) = \frac{S}{4\pi D_0}\left[\frac{e^{-\beta r_1}}{r_1} - \frac{e^{-\beta r_2}}{r_2}\right]  (2)







where Φ0({right arrow over (rd)},{right arrow over (rs)}) is the fluence rate at a detector kept at position {right arrow over (rd)} with a constant source at {right arrow over (rs)}. The diffusion coefficient of the homogeneous medium is denoted by D0 and the term β=√(3μ′sμa) depends on the optical properties of the homogeneous scattering medium. The extrapolated boundary condition (EBC) accounts for the refractive index mismatch between air and the medium.


Solving for the boundary condition defines a zero fluence rate line located zb above the air-medium interface. This boundary line is imposed by placing a negative image of the source around the zero-crossing line. The terms r1 and r2 are the distances from the detector to the real and the negative image source respectively, and they are defined as:






r_1 = \left|\vec{r}_s + z_0\hat{z} - \vec{r}_d\right|, \qquad r_2 = \left|\vec{r}_s - (z_0 + 2z_b)\hat{z} - \vec{r}_d\right|  (3)


where z0=3D0 is the depth of the diffuse source in the medium. The term zb is the distance of the zero fluence rate boundary from the air-medium interface.
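
For illustration, a minimal numerical sketch of Eqs. (2) and (3) in Python is given below. The optical coefficients, the source and detector coordinates, and the extrapolated-boundary distance zb used here are assumed placeholder values, not values taken from the disclosure.

import numpy as np

def fluence_homogeneous(r_s, r_d, mu_a, mu_s_prime, S=1.0, z_b=None):
    """Fluence rate Phi_0 at detector r_d for a pencil-beam source at r_s on a
    semi-infinite homogeneous medium (Eq. 2), using the extrapolated boundary
    condition (Eq. 3). r_s and r_d are 3-vectors on the surface (z = 0), with
    the z-axis pointing into the medium."""
    D0 = 1.0 / (3.0 * (mu_s_prime + mu_a))       # diffusion coefficient
    beta = np.sqrt(3.0 * mu_s_prime * mu_a)      # effective attenuation
    z0 = 3.0 * D0                                # depth of the diffuse source
    if z_b is None:
        z_b = 2.0 * D0                           # assumed EBC distance (illustrative)
    z_hat = np.array([0.0, 0.0, 1.0])
    r1 = np.linalg.norm(r_s + z0 * z_hat - r_d)                 # real source to detector
    r2 = np.linalg.norm(r_s - (z0 + 2.0 * z_b) * z_hat - r_d)   # image source to detector
    return S / (4.0 * np.pi * D0) * (np.exp(-beta * r1) / r1 - np.exp(-beta * r2) / r2)

# Example: skin-like coefficients (assumed), source and detector 5 mm apart on the surface.
phi = fluence_homogeneous(np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0]),
                          mu_a=0.02, mu_s_prime=1.0)  # coefficients in 1/mm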


Often, it is desirable to reconstruct objects like veins or tumors embedded within human tissue. Typically, these objects have different optical parameters than the background medium. In the presence of an absorption variation, the change in absorption coefficient of the medium can be defined as:





μa({right arrow over (r)})=μa0+δμa({right arrow over (r)})  (4)


where δμa({right arrow over (r)}) is the difference in absorption coefficient of the absorption variation over the background medium. It is assumed that the change in the reduced scattering coefficient μ′s({right arrow over (r)}) is negligible and can be ignored. The resultant fluence rate at the detector position {right arrow over (rd)} for a source at {right arrow over (rs)} is written as a linear addition of fluence rate from homogeneous component Φ0 ({right arrow over (rd)}, {right arrow over (rs)}) and the change in fluence rate ΔΦ({right arrow over (rd)},{right arrow over (rs)}) due to the absorption variation:





Φ({right arrow over (rd)},{right arrow over (rs)})=Φ0({right arrow over (rd)},{right arrow over (rs)})+ΔΦ({right arrow over (rd)},{right arrow over (rs)})  (5)


The change in fluence rate is due to the absorption coefficient change across the volume around the point of interest:










\Delta\Phi(\vec{r}_d, \vec{r}_s) = -\int \Phi_0(\vec{r}_s, \vec{r}_j)\,\frac{\delta\mu_a(\vec{r}_j)}{D_0}\,G_0(\vec{r}_j, \vec{r}_d)\,d\vec{r}_j  (6)







where G0 represents the Green's function for a homogeneous slab and is related to Φ0 in Eq. (2) as G0=D0Φ0/S. Images are acquired using a CCD camera, which records the radiant exitance at the surface. The radiant exitance is proportional to the diffuse reflectance R, which is the projection of current density along the surface normal of the detector for a unit-power source:










R(\vec{r}_d, \vec{r}_s) = D_0\left[\frac{\partial\Phi}{\partial z_d}\right]_{z_d=0}  (7)







where zd is the z component of the detector location {right arrow over (rd)}. Applying a derivative to Eq. (5) with respect to zd and multiplying by D0, yields:






R({right arrow over (rd)},{right arrow over (rs)})=R0({right arrow over (rd)},{right arrow over (rs)})+ΔR({right arrow over (rd)},{right arrow over (rs)})  (8)


where R0=D0[δΦ0/δzd]zd=0 is the diffuse reflectance due to the homogeneous background medium and is obtained by taking a derivative of Φ0 in Eq. (2) with respect to zd, given by:










R_0 = \frac{1}{4\pi}\left[\frac{z_0\,(1+\beta r_1)\,e^{-\beta r_1}}{r_1^3} + \frac{(z_0+2z_b)\,(1+\beta r_2)\,e^{-\beta r_2}}{r_2^3}\right]  (9)







Similarly, ΔR represents the change in diffuse reflectance due to the absorption variation. The expression for ΔR is obtained by taking a derivative of Eq. (6) with respect to zd and multiplying by D0, resulting in the following expression:










\Delta R(\vec{r}_d, \vec{r}_s) = -\int \Phi_0(\vec{r}_s, \vec{r}_j)\,\delta\mu_a(\vec{r}_j)\left[\frac{\partial G_0(\vec{r}_j, \vec{r}_d)}{\partial z_d}\right]_{z_d=0} d\vec{r}_j = -\int \Phi_0(\vec{r}_s, \vec{r}_j)\,\delta\mu_a(\vec{r}_j)\,R_0(\vec{r}_j, \vec{r}_d)\,d\vec{r}_j  (10)







The integral above is discretized by dividing the medium into N voxels, with the optical properties defined for each voxel. If each voxel has volume h3, then Eq. (10) can be written in the discrete summation form:





\Delta R(\vec{r}_d, \vec{r}_s) = \sum_{j=1}^{N} P(\vec{r}_s, \vec{r}_j, \vec{r}_d)\,\delta\mu_a(\vec{r}_j)  (11)

with

P(\vec{r}_s, \vec{r}_j, \vec{r}_d) = \Phi_0(\vec{r}_s, \vec{r}_j)\,R_0(\vec{r}_j, \vec{r}_d)\,h^3  (12)


The term P({right arrow over (rs)},{right arrow over (rj)},{right arrow over (rd)}) is commonly known as the phase function defined at each voxel position {right arrow over (rj)} in the medium. The values of the phase function depend on the optical properties of the background homogeneous medium as well as the location of the source {right arrow over (rs)} and the detector {right arrow over (rd)}. Note that the phase function is independent of the structure of the absorption variation.
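
A short sketch of how the discrete phase function of Eq. (12) could be evaluated follows, reusing fluence_homogeneous from the earlier sketch and adding a helper for the homogeneous diffuse reflectance of Eq. (9). Substituting a subsurface voxel position directly into these closed-form surface expressions is a simplification made only for this illustration.

import numpy as np

def diffuse_reflectance_homogeneous(r_j, r_d, mu_a, mu_s_prime):
    """Homogeneous diffuse reflectance R_0 between a point r_j and a detector r_d,
    following the closed form of Eq. (9)."""
    D0 = 1.0 / (3.0 * (mu_s_prime + mu_a))
    beta = np.sqrt(3.0 * mu_s_prime * mu_a)
    z0, z_b = 3.0 * D0, 2.0 * D0                 # z_b is an assumed EBC distance
    z_hat = np.array([0.0, 0.0, 1.0])
    r1 = np.linalg.norm(r_j + z0 * z_hat - r_d)
    r2 = np.linalg.norm(r_j - (z0 + 2.0 * z_b) * z_hat - r_d)
    return (1.0 / (4.0 * np.pi)) * (z0 * (1 + beta * r1) * np.exp(-beta * r1) / r1**3
            + (z0 + 2 * z_b) * (1 + beta * r2) * np.exp(-beta * r2) / r2**3)

def phase_function(r_s, r_j, r_d, mu_a, mu_s_prime, h):
    """Discrete phase function P of Eq. (12) for source r_s, voxel r_j, detector r_d,
    interpreted as the relative number of photons traversing the voxel."""
    return (fluence_homogeneous(r_s, r_j, mu_a, mu_s_prime)
            * diffuse_reflectance_homogeneous(r_j, r_d, mu_a, mu_s_prime)
            * h**3)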


Convolution Approximation of Heterogeneous Model


Herein is described how the diffuse forward model can be adapted for use in the present invention. A line illumination is projected on the scene using a laser projector. As such, the light source is now considered as a slit source instead of a point source. By using a slit source, the acquisition time is reduced because line scanning is significantly faster than point scanning. A slit source is incorporated in the forward model by using the linear superposition principle. The quantities described in the previous section which are functions of the source location {right arrow over (rs)} are now obtained by adding up the contributions corresponding to all the point sources on the illumination line.


Note that the invention is explained in terms of embodiments using a laser. However, as would be realized by one of skill in the art, light sources and sensors of any wavelength could be used, for example, visible, near-infrared, short wave infrared, etc. In addition, any sensor modalities could be employed, for example, intensity, color, multi-spectral, time-of-flight, correlation time-of-flight or any other modality.


On the detector side, a rolling shutter camera synchronized with the laser projector is used. The advantage of using a camera is that each pixel in the camera sensor can now be considered as an individual detector, resulting in a detector array with millions of detectors. Secondly, because the camera sensor can be considered as a grid array of detectors, a convolution form of the forward model can be derived, significantly speeding up the computation time. Several images are acquired by varying the pixel-to-illumination line distance, as shown in FIG. 3, which shows the generation of short-range indirect images for a small pixel-to-illumination distance, in View (A), and a large pixel-to-illumination distance, in View (B). The simulated scene consists of three cylinders embedded in a scattering medium. The offset Δy is kept constant while scanning the entire scene to obtain an image. For a shorter Δy, as in View (A), the rods are distinctly visible, whereas for a longer Δy, as in View (B), the blurring increases and the signal-to-noise ratio decreases.


These images are referred to as short-range indirect images. The boundaries of the absorption variations become blurrier in the short-range indirect image as the pixel to illumination line distance Δy increases. The blurring effect is related to Δy and the depth of the structures.


The values of the phase function at each voxel for the short-range indirect images can be interpreted as the number of photons that have traversed the corresponding voxel for a given pixel to illumination line distance. FIG. 4 is a visualization of the phase function for different pixel-to-illumination line distances in the y-z plane (top row) and the x-y plane (bottom row). S and D represent the illumination line and pixel location respectively. As the pixel to illumination line distance increases, as shown between View (A) and View (B), the photons tend to travel deeper into the scattering medium, but fewer photons reach the pixel, thereby reducing the signal-to-noise ratio. Typically, the most probable path between a pixel and the source illumination line follows a well-known “banana shape” and is shown for different pixel to illumination line distances in FIG. 4.


Two important properties of the phase function P should be noted. First, in case of simple geometry, like the semi-infinite homogeneous background medium under consideration, the expression for the Green's function G0 as well as Φ0 can be written in terms of relative voxel location rather than the absolute location:













P(\vec{r}_d, \vec{r}_j, \vec{r}_s) = \Phi_0(\vec{r}_s - \vec{r}_j)\,R_0(\vec{r}_j - \vec{r}_d) = P(\vec{r}_j - \vec{r}_d,\; \vec{r}_s - \vec{r}_d)  (13)







Second, it should be noted that the values of the phase function decay rapidly as the distance between a voxel and the source or detector position increases. Hence, the contribution of voxels that are far away from both the illumination line and the pixel can be neglected. Because a projected line is used as the illumination source, the phase function in Eq. (13) can be approximated by the summation of truncated phase functions for each source point along the illumination line. Additionally, as evident from FIG. 4, the contribution of light from the illumination line to a center pixel is dominant only near the center of the illumination line, and hence a spatially-invariant phase kernel can be used. The pixel-to-illumination line distance Δy is defined as ys−yd, where ys and yd are the y components of the illumination row {right arrow over (rs)} and the pixel location {right arrow over (rd)} respectively. The phase kernel ϰ for a line illumination can then be written as:












\varkappa(\vec{r}_j - \vec{r}_d;\, \Delta y) = \sum_{\vec{r}_s} P(\vec{r}_j - \vec{r}_d,\; \vec{r}_s - \vec{r}_d)  (14)







where the summation over {right arrow over (rs)} is for all the point sources lying on the illumination line. In the following, the phase kernel is denoted as ϰ(Δy) for notational simplicity unless the spatial dependency needs to be emphasized.
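
Assuming the helpers sketched earlier, the truncated phase kernel of Eq. (14) could be tabulated once per offset Δy as below. The kernel extent, voxel pitch and illumination line length are placeholder choices, and a practical implementation would vectorize these loops.

import numpy as np

def phase_kernel(delta_y, mu_a, mu_s_prime, h=0.5, half_size=20, depth_voxels=64,
                 line_half_length=40):
    """Truncated, spatially invariant phase kernel kappa(.; delta_y) of Eq. (14):
    the sum of phase functions over point sources on the illumination line,
    expressed in voxel coordinates relative to a pixel at the origin."""
    kernel = np.zeros((2 * half_size + 1, 2 * half_size + 1, depth_voxels))
    r_d = np.zeros(3)                                       # pixel at the origin
    xs = np.arange(-line_half_length, line_half_length + 1) * h
    for x_s in xs:                                          # point sources along the line
        r_s = np.array([x_s, delta_y, 0.0])
        for ix in range(-half_size, half_size + 1):
            for iy in range(-half_size, half_size + 1):
                for iz in range(depth_voxels):
                    r_j = np.array([ix * h, iy * h, (iz + 0.5) * h])  # voxel center
                    kernel[ix + half_size, iy + half_size, iz] += \
                        phase_function(r_s, r_j, r_d, mu_a, mu_s_prime, h)
    return kernel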


Similarly, the diffuse reflectance R({right arrow over (rd)},{right arrow over (rs)}), the change in diffuse reflectance ΔR({right arrow over (rd)},{right arrow over (rs)}) and the homogeneous diffuse reflectance R0({right arrow over (rd)},{right arrow over (rs)}) in Eq. (8) are modified for line illumination as the sum of contributions from all point sources lying on the illumination line, and are defined as R({right arrow over (rd)}; Δy), ΔR({right arrow over (rd)}; Δy) and R0({right arrow over (rd)}; Δy) respectively. xd, yd are the surface coordinates of the pixel location {right arrow over (rd)} as shown in FIG. 3. If the change in absorption coefficient δμa({right arrow over (rj)}) in Eq. (11) is represented by a 3D volume Q, then the change in diffuse reflectance ΔR in Eq. (11) can be expressed in convolution notation as:










\Delta R(x_d, y_d;\, \Delta y) = -\sum_{\vec{r}_s}\sum_{j=1}^{N} P(\vec{r}_j - \vec{r}_d,\; \vec{r}_s - \vec{r}_d)\,\delta\mu_a(\vec{r}_j) = -\sum_{j=1}^{N} \varkappa(\vec{r}_j - \vec{r}_d;\, \Delta y)\,\delta\mu_a(\vec{r}_j)  (15)







where ΔR∈ℝ^{M×N} is defined over a sensor array of dimension M×N and corresponds to each pixel to illumination line distance Δy as shown in FIG. 3. By representing the change of absorption coefficient δμa by a 3D volume Q, Eq. (15) can be rewritten as the sum of a 3D convolution result:










\Delta R(x_d, y_d;\, \Delta y) = -\sum_{z} \varkappa(\Delta y) * Q(x_d, y_d, z)  (16)







The change in absorption coefficient in the 3D volume is denoted by Q∈ℝ^{M×N×D} where D is the depth resolution. The 3D truncated kernel ϰ∈ℝ^{M×N×D} is then defined for each Δy, and has the same depth resolution as the 3D volume Q. Using Eq. (8), the resultant diffuse reflectance R acquired at each pixel to illumination line distance Δy can be written as a linear summation of the contribution from the homogeneous background medium R0 and the perturbation term ΔR due to the presence of a heterogeneity:







R(x_d, y_d;\, \Delta y) = R_0(x_d, y_d;\, \Delta y) - \sum_{z} \varkappa(\Delta y) * Q(x_d, y_d, z)








where R∈ℝ^{M×N} is the diffuse reflectance on an M×N grid.
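
To make the convolutional forward model concrete, the following sketch evaluates R = R0 − Σz ϰ(Δy) * Q as a sum of per-depth 2D convolutions, with SciPy's fftconvolve standing in for the convolution operator. The per-offset kernels, the homogeneous images R0 and the scale factor are assumed to be available from the earlier steps.

import numpy as np
from scipy.signal import fftconvolve

def forward_model(R0, kernels, Q, scale=1.0):
    """Predicted short-range indirect images R(., .; delta_y) per Eq. (16) and the
    expression above.
    R0:      dict delta_y -> (M, N) homogeneous reflectance image
    kernels: dict delta_y -> (m, n, D) truncated phase kernel
    Q:       (M, N, D) volume of absorption-coefficient changes
    scale:   unknown scaling factor l from Eq. (17), assumed known here."""
    images = {}
    for dy, kappa in kernels.items():
        delta_R = np.zeros(Q.shape[:2])
        for z in range(Q.shape[2]):                  # sum of per-depth 2D convolutions
            delta_R += fftconvolve(Q[:, :, z], kappa[:, :, z], mode="same")
        images[dy] = scale * (R0[dy] - delta_R)
    return images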


Reconstruction of Heterogeneous Structure


A set of short-range indirect images I(Δy) is captured, one for each pixel to illumination line distance Δy. For the given set of images, the volume Q of unknown optical parameters is reconstructed by solving the following optimization problem:











\min_{Q}\; \sum_{\Delta y = \Delta y_{\min}}^{\Delta y_{\max}} \left\| I(\Delta y) - l\left(R_0(\Delta y) - \varkappa(\Delta y) * Q\right) \right\|_F^2 + \lambda\,\|Q\|_1  (17)







where ∥·∥F denotes the Frobenius norm, and l is an unknown scaling factor which depends on the intensity and width of the laser profile and the sensitivity of the camera. It is also assumed that the reconstructed volume is sparse, which essentially implies that the absorption variation only occupies a fraction of the total reconstructed volume.


The optimization is done over a range of Δy values. For smaller Δy values, the diffusion approximation breaks down, as the photon propagation is largely governed by single or very few scattering events. For very large Δy, not enough photons reach the camera pixels, and therefore the measurement images have a poor signal-to-noise ratio. Therefore, the range of Δy values needs to be chosen appropriately.


If the exact optical parameters μ′s and μa of the homogeneous background medium are known, then the kernel ϰ(Δy) can be constructed as in Eq. (14). However, in some cases, the background optical parameters of the material are not known. In those cases, a homogeneous patch inside the field of view is selected, and the pixel intensity measurements are fitted with lR0 with respect to the unknown optical coefficients as in Eq. (9). The estimated values of the coefficients can be used to construct the phase kernel ϰ(Δy) for solving the optimization in Eq. (17).
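
One way the fit described above could be carried out is with a standard nonlinear least-squares routine, as sketched below. The line-source reflectance model simply sums the point-source expression of Eq. (9) over the illumination line, reusing diffuse_reflectance_homogeneous from the earlier sketch, and the initial guesses are placeholder values.

import numpy as np
from scipy.optimize import curve_fit

def homogeneous_line_reflectance(delta_y, mu_a, mu_s_prime, h=0.5, line_half_length=40):
    """R_0(delta_y) for a line source: sum of Eq. (9) over point sources on the line."""
    r_d = np.zeros(3)
    xs = np.arange(-line_half_length, line_half_length + 1) * h
    return sum(diffuse_reflectance_homogeneous(np.array([x, delta_y, 0.0]), r_d,
                                               mu_a, mu_s_prime) for x in xs)

def fit_background(delta_ys, patch_means):
    """Fit the scale l and the coefficients mu_a, mu_s' so that l * R_0(delta_y)
    matches the mean intensity of a homogeneous patch at each offset delta_y."""
    def model(dys, l, mu_a, mu_s_prime):
        return np.array([l * homogeneous_line_reflectance(dy, mu_a, mu_s_prime)
                         for dy in dys])
    p0 = (1.0, 0.02, 1.0)                            # assumed initial guess
    (l, mu_a, mu_s_prime), _ = curve_fit(model, np.asarray(delta_ys),
                                         np.asarray(patch_means), p0=p0)
    return l, mu_a, mu_s_prime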


PyTorch may be used for the implementation, given that it is highly optimized for convolution operations. The running time on a workstation with a Titan X GPU is around 5 minutes for 300 iterations for Q with a depth resolution of 64. The λ value in Eq. (17) is heuristically chosen to be 0.0001. The optimization is started with an initial value of all zeros for Q, and the reconstruction accuracy can be further improved if a better initialization is provided based on prior knowledge of the scene.
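
A minimal PyTorch sketch of the optimization in Eq. (17) is shown below. The measured images, homogeneous images R0, per-offset kernels, the scale factor l and all hyperparameters are assumed inputs; conv3d (strictly a cross-correlation, kernel flipping is omitted here) with zero depth padding stands in for the kernel convolution and the summation over depth, and odd kernel sizes are assumed.

import torch
import torch.nn.functional as F

def reconstruct(images, R0, kernels, scale=1.0, lam=1e-4, iters=300, lr=0.01):
    """Solve Eq. (17) for the volume Q of absorption-coefficient changes.
    images, R0: dicts delta_y -> (M, N) tensors (measured / homogeneous images).
    kernels:    dict  delta_y -> (kh, kw, D) truncated phase kernel tensors.
    Q is stored depth-first, (D, M, N), to match conv3d's layout."""
    M, N = next(iter(images.values())).shape
    D = next(iter(kernels.values())).shape[2]
    Q = torch.zeros(1, 1, D, M, N, requires_grad=True)        # initialized to all zeros
    opt = torch.optim.Adam([Q], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = lam * Q.abs().sum()                            # sparsity prior ||Q||_1
        for dy, kappa in kernels.items():
            w = kappa.permute(2, 0, 1).unsqueeze(0).unsqueeze(0)   # (1, 1, D, kh, kw)
            pad = (0, kappa.shape[0] // 2, kappa.shape[1] // 2)    # no padding along depth
            delta_R = F.conv3d(Q, w, padding=pad)[0, 0, 0]         # depth collapses to 1: (M, N)
            pred = scale * (R0[dy] - delta_R)                      # forward model per offset
            loss = loss + ((images[dy] - pred) ** 2).sum()         # Frobenius-norm data term
        loss.backward()
        opt.step()
    return Q.detach()[0, 0]                                    # (D, M, N) reconstruction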


Hardware


The imaging setup for capturing short-range indirect images will now be described. To capture high resolution images of a small area of interest, high spatial resolution over a smaller overlapping stereo volume is needed. One way to achieve a smaller overlapping stereo volume is to verge the projector and camera. Therefore, a verged setup is used for capturing high resolution short-range indirect images.


The setup consists of a synchronized rolling shutter camera and a laser projector implemented with Micro-Electro-Mechanical-Systems (MEMS). The imaging setup is shown in View (A) of FIG. 1. In some embodiments, an IDS-3950CPv2 industrial camera and Mirrorcle MEMS development kit may be used. The central wavelength for the laser light may be selected as 680 nm. The MEMS mirror reflects the laser beam from the laser diode and the orientation of the MEMS mirror can be controlled in terms of both vertical and horizontal rotation axes. Using this arrangement, the size of the imaged area on the sample is approximately 8 cm by 8 cm. The laser diode and MEMS mirror pair is modeled as a pinhole projector whose center of projection is the center of rotation of the MEMS.


During the imaging process, the projector is scanned through the epipolar planes of the projector-camera pair. The camera is synchronized such that the pixels having a pre-defined offset from the corresponding epipolar line on the camera image plane are exposed. Each offset corresponds to one pixel to illumination line distance Δy as discussed above. For a physically rectified projector-camera pair, the epipolar lines on the projector and camera images are horizontal. This corresponds to illuminating and exposing the corresponding rows of the projector and camera. In contrast, in the described configuration, the epipolar lines in the projector and camera are not horizontal due to the verged setup. As such, the short-range indirect images cannot be captured by illuminating and exposing corresponding rows. Instead, on the projector side, the angle of the MEMS mirror is controlled to scan the laser beam across a pencil of epipolar lines with different 2D slopes in the projector image plane. On the camera side, interpolation is used over offset epipolar lines to get the short-range indirect images. As a special case, for Δy=0 the interpolated line overlaps with the epipolar line, and the resultant image is the direct light image.
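
As an illustration of the offset-epipolar sampling described above, the sketch below computes, for a given projector point, the camera epipolar line and the camera rows offset from it by Δy pixels. The projector-to-camera fundamental matrix F is an assumed calibration output, not a quantity specified in the disclosure, and non-vertical epipolar lines are assumed.

import numpy as np

def offset_epipolar_rows(F, x_p, cam_cols, delta_y):
    """For a projector point x_p (homogeneous pixel coordinates), compute the camera
    epipolar line l = F @ x_p and return, for each camera column in cam_cols, the row
    lying delta_y pixels below that line (these rows are exposed or interpolated)."""
    a, b, c = F @ x_p                      # epipolar line a*x + b*y + c = 0 in the camera
    rows = -(a * cam_cols + c) / b         # row of the epipolar line at each column
    return rows + delta_y                  # offset rows corresponding to this delta_y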


Because the described configuration has a smaller FOV than a rectified system due to the non-zero vergence angle between the projector and camera, the sample can be placed closer to the camera while still being illuminated by the projector. This enables a higher image resolution for a smaller area of interest so that more fine-grained (sub)surface details can be captured.



FIG. 5 shows images of a paper sticker captured with different devices. The sticker page with letters is occluded by several other pages so no letters can be seen under regular lighting. View (A) shows an image captured with a cellphone camera under regular lighting. View (B) shows a short-range indirect image captured with a device in a rectified configuration. View (C) is an enlarged image of the sticker region in View (B). View (D) shows a short-range indirect image captured with the configuration of the present invention. The configuration of the device of the present invention has a smaller FOV due to the non-zero vergence angle. The images captured with this device have higher resolution, SNR and contrast, as shown in Views (C) and (D). The bright spot in the center and the lines in View (D) are due to the reflection and inter-reflections from the protective glass in front of the MEMS mirror.


Calibration



FIG. 6 shows a setup to calibrate the device to compensate for laser-mirror misalignment and the non-linearity of the MEMS mirror. Due to misalignment, the laser beam incident on the MEMS mirror will be neither perpendicular to the mirror surface nor aligned with the MEMS rotation center. Due to the non-linearity of the MEMS mechanics, the input control signal and the degrees of rotation are not linearly related.


The device is mounted vertically above a liquid container as shown in View (A) of FIG. 6, with no cover above the scattering medium. The laser-MEMS pair is modeled as a pinhole projector whose center of projection is the rotation center of the MEMS mirror. During the calibration process, the relative pose between the MEMS mirror and the camera is estimated. For the MEMS, the non-linear mapping between the input voltage and the mirror tilt angle is compensated for, and the mis-alignment of the MEMS mirror and the laser is accounted for, as shown in the graph of View (B).


More specifically, planes with known poses relative to the camera are illuminated with a set of dot patterns. As shown in Views (A) and (B) of FIG. 7, given the laser dot images for different plane positions, the laser rays can be fitted in 3D and the rays triangulated to get the origin of the rays (i.e. the rotation center of the MEMS mirror), as shown in View (C) of FIG. 7. Due to the laser-MEMS misalignment and the fitting error for the rays, the rays will not intersect at one 3D point. A least-squares problem is solved for the intersection point where the point-to-ray distances are minimized. The fitted rays are also used to account for the non-linear relation between the MEMS input voltage and the rotation angle. In calibration, a lookup table relating the input voltage for the MEMS and the rotation angle of the mirror is built to account for their non-linear relation. During imaging, given the target laser ray direction, the required input voltage can be estimated by interpolating over the lookup table.
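
The least-squares intersection point described above can be obtained in closed form; a sketch is shown below, with the ray origins and unit directions assumed to come from the ray-fitting step.

import numpy as np

def intersect_rays(origins, directions):
    """Find the 3D point minimizing the sum of squared distances to a set of rays
    (used here to estimate the MEMS rotation center). Each ray has an origin o and
    a direction d; the normal equations are assembled and solved directly."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projection onto the plane orthogonal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)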


Results


The device and method of the present invention were tested using Monte Carlo rendered images. For the homogeneous medium, the scattering coefficients of human skin were used. The heterogeneous inclusions are located up to 4 mm below the surface. For the imaging setup, the width of the laser illumination line is 1 mm. The distance between the illumination line and the camera pixel ranges from 0 to 15 mm. To make the diffusion approximation valid for the algorithm, only images with the illumination-to-pixel distance Δy larger than 2 mm were used.



FIG. 8 shows the simulated direct/global light image, the short-range images with different Δy settings, and the DOT results. The homogeneous medium is skin in View (A) and View (C) and skim milk in View (B). Photon and read-out noise are added to the input images. The depths of the inclusions in Views (A) and (B) are 4 mm and 3 mm respectively. In View (C), the depth of the vein structure is 5 mm while the solid circle is 10 mm deep. The inclusion boundaries in the global and short-range indirect images are either blurred or shifted due to light scattering. The signal-to-noise ratio decreases as the pixel-to-illumination line distance increases because fewer photons are received by the detectors. The method of the present invention recovered the inclusion boundaries and their relative depths despite the blurring and noise.


The simulated direct and global light images are shown in the first two rows of all views in FIG. 8. The global light image is the sum of the images captured with different Δy's, except for Δy=0. The inclusions cannot be seen in the direct-only image. For the global light image, because of the highly scattering property of skin, the contrast is low for some of the deeper inclusions, such as the solid circle in View (C). This makes the detection and localization of such objects (e.g. a tumor beneath the skin surface) difficult. For each short-range indirect image, the image intensity is contributed in part by the indirect light that travels from the illumination line with a preset distance to the imaged scene point. As a result, compared with the global light image, the contrast of the inclusions is much higher for the short-range indirect images shown in the third and fourth rows of all views in FIG. 8. For larger pixel to illumination line distances, the signal-to-noise ratio is low because fewer photons reach the imaged scene point due to multiple scattering of light. In addition, because the non-zero support of the diffuse scattering kernel increases with the pixel-to-illumination line distance, the boundaries of the inclusions in the image become blurrier for larger distances. Despite the low signal-to-noise ratio and blurring in the short-range indirect images, the 2D boundaries of the inclusions were localized and their relative depths estimated using the novel method of the present invention.


For the input short-range indirect images, the contrast of the image decreases with the inclusion depth. This is because, as the inclusions become deeper, most of the light reaching each pixel is contributed by light scattered by the homogeneous medium without traveling through the inclusions. Another intuition is that the diffuse scattering phase function diminishes with increasing depth, as shown in FIG. 4. One key factor for the proposed method is the size of the diffuse scattering kernel. A smaller kernel enables a faster optimization process but leads to more error in the convolutional approximation and therefore less accurate results. A larger kernel leads to better performance but requires more processing time. The choice of the size of the diffuse scattering kernel is also related to the pixel to illumination line distance. In addition, as shown in FIG. 4, the non-zero support region of the kernel varies with the pixel to illumination line distance. For large pixel to illumination line distances, the magnitude of the kernel is small due to multiple scattering, so the performance saturates at a certain distance.


The performance of the apparatus and method of the present invention may be evaluated using the IoU score of the segmentation results for the inclusions. For a highly scattering medium such as human skin, the performance saturates when the kernel size approaches 20 mm in diameter.


The present invention addresses two fundamental limitations of existing diffuse optical tomography methods: low resolution reconstruction and high computational complexity. These limitations are overcome by (1) extending the design for short-range indirect subsurface imaging to a verged scanning projector and camera configuration; and (2) a novel convolution based model and efficient computational algorithm for estimating the subsurface medium with absorption variation. This allowed the recovery of detailed absorption variations immersed up to 8 mm deep in a highly scattering medium, such as milk.


The invention has been described herein using specific apparatuses, methods and parameters. The scope of the invention is not meant to be limited thereby. Other embodiments are contemplated to be within the scope of the invention. For example, other source spectra (e.g., near-infra red) may be used to recover structures deeper within tissue. Additionally, the invention may be extended to use resonant MEMS scanning for capturing subsurface videos of dynamic structures, such as blood flow in microscopic capillary veins. The scope of the invention is captured in the claims that follow.

Claims
  • 1. A computer-implemented method comprising: varying a distance between a line illumination source and a line detection sensor, the distance varying by a set of discrete distances; obtaining a short-range indirect image from the surface of a medium at each discrete distance; calculating a phase kernel for the line illumination source at each discrete distance; defining a volume representing a change in absorption coefficient based on the presence of an absorption variation within the medium, the volume having a depth below the surface of the medium; calculating a change in diffuse reflectance for each discrete distance and for each pixel in the detection sensor as a sum over the depth of the defined volume of a convolution of the phase kernel for each discrete distance and the defined volume; and calculating an overall diffuse reflectance of each discrete distance as a linear sum of a diffuse reflectance of the medium and the change in diffuse reflectance due to the absorption variation.
  • 2. The method of claim 1, the phase kernel for the line illumination calculated as the sum of values of a phase function at each voxel in the medium between a point light source on the line illumination and a corresponding pixel on the detection sensor.
  • 3. The method of claim 2, the phase kernel having the same depth resolution as the defined volume.
  • 4. The method of claim 1 wherein the change in diffuse reflectance represents a perturbation in reflectance due to the presence of the absorption variation.
  • 5. The method of claim 1, further comprising determining a structure of the absorption variation.
  • 6. The method of claim 5, further comprising constructing a 3D representation of the absorption variation.
  • 7. The method of claim 6, the structure of the absorption variation being calculated by optimizing a sum over each discrete distance of a function of the image captured at each discrete distance, the diffuse reflectance of the medium and a convolution of the phase kernel and the defined volume, the sum being optimized for minimum defined volume.
  • 8. The method of claim 7 wherein, when the optical parameters of the medium are unknown, estimating the optical parameters of the medium by selecting a homogeneous patch of the medium inside a field of view of the detector and determining the optical parameters of the homogeneous patch.
  • 9. The method of claim 8, the optical parameters of the homogeneous patch being used to construct the phase kernel used in the determination of the structure of the absorption variation.
  • 10. An apparatus comprising: a laser projector; and a camera, the laser projector and camera being in a verged epipolar configuration.
  • 11. The apparatus of claim 10, the laser projector comprising: a laser source; and a steerable MEMS mirror for directing a line of light along a medium.
  • 12. The apparatus of claim 11, the camera being a rolling shutter camera, the camera capturing a line of light from the surface of the medium offset from the line of light projected by the laser projector.
  • 13. The apparatus of claim 12, the camera capturing a series of images of the medium, each image captured with a different offset between a scan line of the camera and the line of light projected by the laser projector.
  • 14. A method using the apparatus of claim 13, comprising: scanning the laser projector through the epipolar planes of the camera-projector pair; and synchronizing the camera with the laser projector such that pixels having a pre-defined offset from a corresponding epipolar line on the camera image plane are exposed.
  • 15. The method of claim 14, wherein the steerable MEMS mirror is controlled to scan a line of laser light across a series of epipolar lines having different 2D slopes with respect to the medium.
  • 16. The method of claim 15, further comprising: determining the differences in the intensity of each pixel in the series of collected images; and using the determined differences to estimate an absorption coefficient for each voxel in a volume of voxels located under the surface of the medium.
  • 17. The method of claim 16 further comprising: using the determined absorption coefficients for each voxel to visualize a 3D representation of a sub-surface absorption variation in the medium causing variations in the absorption coefficient from an absorption coefficient of a homogeneous medium.
RELATED APPLICATIONS

This application claims priority to U.S. Patent Application Ser. No. 63/013,790 entitled “HIGH RESOLUTION DIFFUSE OPTICAL TOMOGRAPHY USING SHORT RANGE INDIRECT IMAGING” filed on Apr. 22, 2020. The contents of the aforementioned application are incorporated herein by reference in their entirety.

GOVERNMENT INTEREST

This invention was made with government support under the National Science Foundation Expeditions “See Below the Skin” Grants No. 1730147 and No. 1730574, and U.S. Army Medical Research and Materiel Command contract for “Autonomous Trauma Care in the Field” No. W81XWH-19-C-0083. The government has certain rights in the invention.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2021/028421 4/21/2021 WO
Provisional Applications (1)
Number Date Country
63013790 Apr 2020 US