This application is the U.S. national phase of International Application No. PCT/IB2019/056370 filed Jul. 25, 2019 which designated the U.S. and claims priority to IT 102018000007857 filed Aug. 3, 2018, the entire contents of each of which are hereby incorporated by reference.
The present invention relates to a technique for the plenoptic acquisition of images in the field of microscopy, stereoscopy and, in general, of three-dimensional imaging techniques.
In particular, the plenoptic acquisition procedure according to the present invention is called "Correlation Plenoptic Imaging" (CPI), i.e. it is a plenoptic image acquisition based on the spatio-temporal correlation of the light intensities recorded by the sensors arranged to acquire the spatial and angular measurements of the image.
The term “plenoptic acquisition of images” refers to a particular optical method according to which it is possible to acquire both the position and the direction of propagation of light in a given scene. In this way, it is possible to obtain a spatial measurement and an angular measurement which allow reconstructing the acquired image three-dimensionally.
In fact, in an image processing step following its acquisition, it is possible, for example, to change the position of the focal plane of the image or to extend the depth of field of the image or to reconstruct a three-dimensional image.
The currently known conventional image acquisition technique allows choosing the magnification, the focal plane position and the depth of field by means of suitable lenses positioned upstream with respect to the image acquisition sensor.
The traditional image acquisition technique, however, has the limitation of offering a two-dimensional representation of an originally three-dimensional scene. The three-dimensional representation of images is useful in many technical applications, such as those concerning the modeling of components to be used in virtual simulation environments, or those concerning the representation of objects for prototyping, design, production, marketing, inspection and maintenance, or those generally concerning an improved representation of an object of a three-dimensional scene in order to ensure an improved experience for the user and a more realistic result.
Moreover, the traditional image acquisition technique does not allow changing the focal plane position or the depth of field at a time subsequent to the image acquisition. In the photographic field, the need to focus on a particular plane, or to choose the depth of field of the image, very commonly arises only after the acquisition.
With reference to the field of microscopy, it is worth noting that high resolutions correspond to small depths of field. Since it is not possible to change the focal plane after the acquisition, characterizing the sample under examination in depth requires a large number of scans with different focusing planes. In this regard, it should be noted that exposing the sample to radiation for a long time, especially in the case of biological samples, may damage it or, in the case of in vivo observations, cause damage to the patient.
Therefore, the traditional microscopic image acquisition technique has several drawbacks that the plenoptic acquisition technique has the purpose of solving.
The currently known plenoptic image acquisition technique allows obtaining images with different focal planes in different positions of the three-dimensional space of the scene. This feature is made possible by the acquisition of the spatial and angular measurement of light in the scene.
The term “spatial measurement” refers to the traditional two-dimensional image acquisition of a plane within the scene, while “angular measurement” refers to the acquisition of the information necessary to determine the direction of propagation of the beam of light from the scene to be acquired. In a processing step following the image acquisition, it is possible to combine the spatial and the angular measurements in order to reconstruct a three-dimensional image.
The currently known plenoptic image acquisition technique is based on the insertion of an array of microlenses arranged between a main lens, adapted to focus the image of the scene of interest on the array of microlenses, and a sensor, adapted to acquire the image of a given scene. The array of microlenses plays a double role. On the one hand, it behaves like an array of points capable of acquiring the spatial measurement of the scene; on the other, it reproduces a sequence of images of the main lens (one for each microlens) on the sensor, thus providing the angular measurement of the scene.
Unlike traditional image acquisition techniques, a plenoptic image acquisition device captures, for each pixel of the sensor, twofold information on the position and on the direction of light. This means that, in processing an image, it is possible to obtain different perspectives or views of the scene, thus allowing the user to choose the scene plane in focus and the depth of field, as well as to obtain a three-dimensional reconstruction of the scene.
However, the currently known plenoptic image acquisition technique has the drawback of producing images at a lower resolution than the physical limit (“diffraction limit”) determined by the diameter and focal length of the main lens. In fact, the currently known plenoptic image acquisition technique provides for the use of a single sensor for the simultaneous acquisition of the spatial and angular measurement of the scene. This feature limits the spatial resolution of the acquired image as part of the sensor's resolution capability is sacrificed to the benefit of the angular measurement. Moreover, in the currently known plenoptic image acquisition technique, the maximum spatial and angular resolution are linked by an inverse proportionality ratio, due to the use of a single sensor to obtain both spatial and angular information. Therefore, the images produced by known plenoptic image acquisition devices have the drawback of being at low resolution, i.e. they are characterized in that the resolution of the images is well below the resolution given by the diffraction limit.
A better understanding of the present invention and of its objects and advantages with respect to what is currently known will result from the following detailed description, given with reference to the accompanying drawings, which illustrate, by way of non-limiting example, some preferred embodiments of the invention.
As already mentioned, the plenoptic imaging devices currently on the market, including plenoptic microscopes, are based on the standard structure of imaging devices, in which the images are acquired through the measurement of the light intensity distribution on a sensor. These devices are adapted to plenoptic imaging by inserting an array of microlenses in front of the sensor. On the one hand, the image of the object is formed on the microlenses: they thus act as "effective pixels", determining the limit of spatial resolution of the image, and each microlens corresponds to a given portion of the object. On the other hand, each microlens reproduces an image of the main lens on the sensor portion behind it. Each of these images of the main lens provides information on the direction of the light propagating from the portion of the object corresponding to the microlens to the portion of the lens corresponding to the pixel of the sensor.
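By way of a purely illustrative, non-limiting example, the following sketch shows how the raw image recorded by such a standard plenoptic sensor can be rearranged into spatial and directional coordinates. The idealized square microlens grid, the pitch `nu` and the function name are assumptions made for the example, not features of any specific commercial device.

```python
import numpy as np

def decode_light_field(raw: np.ndarray, nu: int) -> np.ndarray:
    """Rearrange a raw plenoptic sensor image into a 4D light field.

    Assumes an idealized layout: a square grid of microlenses, each
    covering exactly nu x nu sensor pixels. The microlens index samples
    the position (x, y); the pixel behind each microlens samples the
    direction (u, v), i.e. the portion of the main lens traversed.
    """
    ny, nx = raw.shape
    if ny % nu or nx % nu:
        raise ValueError("sensor size must be a multiple of the microlens pitch")
    # Split each axis into (microlens index, pixel-under-microlens index).
    lf = raw.reshape(ny // nu, nu, nx // nu, nu)
    # Reorder to (u, v, x, y): directional indices first, spatial indices last.
    return lf.transpose(1, 3, 0, 2)

# Example: a 900 x 900 pixel sensor with nu = 3 yields 3 x 3 directional
# views of only 300 x 300 spatial pixels each.
views = decode_light_field(np.zeros((900, 900)), nu=3)
assert views.shape == (3, 3, 300, 300)
```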
As a result of this configuration, shown in the drawings, the spatial and the angular resolution of the acquired image are inversely related: the finer the directional sampling behind each microlens, the coarser the effective pixels, so that the spatial resolution remains well below the diffraction limit of the main lens.
It is worth noting that plenoptic imaging devices based on correlation measurements (CPI: Correlation Plenoptic Imaging), already developed by some of the present inventors, solve the above limitations by decoupling the sensor dedicated to the spatial measurement (image of the object) from the sensor dedicated to the directional measurement (image of the lens).
In fact, in such devices, once the total number of pixels per side of the sensor (Ntot) is fixed, the constraint which links spatial and directional resolution is Nx+Nu=Ntot. Furthermore, there are no limitations on the resolution of the image, which can thus reach the diffraction limit. Finally, in the aforementioned devices already developed by some of the inventors, the image of the entire lens is projected onto a single sensor dedicated to this purpose. This feature allows obtaining arbitrary magnifications, even larger than unity. Thus, in the regime in which geometric optics is valid, the directional resolution (determined by Nu) can be much more precise than in standard plenoptic imaging devices, and the depth of field can be much more extended.
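The following minimal sketch contrasts the two pixel budgets: the additive constraint Nx + Nu = Ntot stated above for CPI and, as an assumption made only for this comparison, the multiplicative form Nx = Ntot / Nu implied by the inverse proportionality of standard plenoptic devices discussed earlier. The numbers are illustrative only.

```python
# Spatial resolution Nx (pixels per side) left over for a given directional
# resolution Nu, under the two constraints discussed in the text.
def nx_standard(ntot: int, nu: int) -> int:
    return ntot // nu   # shared sensor: Nx * Nu = Ntot (assumed form)

def nx_cpi(ntot: int, nu: int) -> int:
    return ntot - nu    # decoupled sensors: Nx + Nu = Ntot

for ntot, nu in [(1000, 3), (1000, 10), (4000, 10)]:
    print(f"Ntot={ntot}, Nu={nu}: "
          f"standard Nx={nx_standard(ntot, nu)}, CPI Nx={nx_cpi(ntot, nu)}")
```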
A first object of the present invention, with respect to the preceding CPI devices, is to provide a plenoptic device in which the object whose image is to be obtained is positioned before the beam separator.
A second object of the present invention is to provide a plenoptic device in which the light source coincides with the object itself.
It should be noted that this last object is of fundamental importance in view of the application to plenoptic microscopy. In fact, the principle of operation of previous setups was based on the possibility of accurately reconstructing the direction of light in its propagation from a chaotic source through the object. Previous setups cannot therefore work to obtain the plenoptic images of fluorescent or diffusive samples, which are extremely common in microscopy, and in which the direction of the emitted light is essentially unrelated to that of the incident light.
The first object, on the other hand, is relevant from the point of view of the attenuation of turbulence, that is, of the noise effects which determine a random, unpredictable and generally time-dependent variation in the amplitude and the phase of the light. In fact, if the turbulence modifies the phase and the direction of light propagation only along the common path from the object S to the beam separator BS, measuring the correlations of intensity between the two beams after the beam separator BS has the effect of partially canceling the noise due to phase turbulence along this stretch of the plenoptic device. The ability to perform imaging in the presence of turbulence is a relevant and, to date, practically unsolved problem, especially in the microscopic context. In particular, the images acquired with the present invention are practically insensitive to turbulence within the sample or close to its surface. This feature is not shared by previous CPI setups, whose effectiveness is actually very sensitive to the presence of turbulence near the object.
Compared to previous proposals of correlation microscopy and of imaging insensitive to turbulence, the device described is the first one which combines with these features the possibility of performing plenoptic imaging, and thus of refocusing out-of-focus objects, extending the depth of field and obtaining three-dimensional images. Furthermore, it is noted that the present device requires neither the coherence of the light emitted by the sample nor quantum entanglement properties of the emitted photons.
With reference to the accompanying drawings, three embodiments of the plenoptic imaging device according to the present invention, hereinafter referred to as Setup I, Setup II and Setup III, are described below.
The constructive schemes of these three setups, shown in the accompanying drawings, are described in detail in the following.
According to the invention described, the plenoptic microscopy systems may also include additional components which, although not necessarily required by the operating principle, can help to optimize the structure and efficiency of the device.
In all cases, while the ordinary image may be obtained directly on sensor Da of the transmitted beam (provided the object is in focus, f=fO), the plenoptic image, which also contains information on the direction of the light, is obtained by analyzing the correlations of intensity between the pixels of the two sensors. Specifically, the image emerges from the correlation between the intensity fluctuations
$$\Gamma(\rho_a,\rho_b)=\langle \Delta I_a(\rho_a)\,\Delta I_b(\rho_b)\rangle, \qquad (1)$$
where ⟨···⟩ denotes an average over the statistics of the light emitted by the sample, Ia,b(ρa,b) are the intensities at positions ρa and ρb on the respective sensors, and ΔIa,b = Ia,b − ⟨Ia,b⟩ are the intensity fluctuations with respect to their average values ⟨Ia,b⟩. The statistical mean is in practice replaced by a time average over N successive frames of duration τ, acquired in a time window of duration T. Under the assumption that the emitted light has negligible transverse coherence, the correlation of intensity fluctuations reads, apart from irrelevant constant factors [3],
$$\Gamma(\rho_a,\rho_b)=\left|\int d^2\rho_s\; g_a(\rho_a,\rho_s)\,g_b^*(\rho_b,\rho_s)\,\mathcal{F}(\rho_s)\right|^2, \qquad (2)$$
with ga and gb the optical transfer functions of paths a and b, respectively, and F the intensity profile of the sample. In the following, the correlation of intensity fluctuations is calculated for each of the three setups shown in the drawings.
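By way of a non-limiting illustration, the time-average estimate of equation (1) can be sketched as follows; the function name and array shapes are assumptions of the example, and for full-size sensors the resulting four-index matrix is large, so in practice one may correlate each pixel of Da with a region of interest of Db only.

```python
import numpy as np

def correlation_of_fluctuations(frames_a: np.ndarray,
                                frames_b: np.ndarray) -> np.ndarray:
    """Estimate Gamma(rho_a, rho_b) = <dI_a(rho_a) dI_b(rho_b)>.

    frames_a: stack of N synchronized frames from sensor Da, shape (N, Ha, Wa).
    frames_b: stack of N synchronized frames from sensor Db, shape (N, Hb, Wb).
    Returns the (Ha, Wa, Hb, Wb) correlation matrix of equation (1),
    with the statistical mean replaced by the average over the N frames.
    """
    d_a = frames_a - frames_a.mean(axis=0)   # fluctuations on sensor Da
    d_b = frames_b - frames_b.mean(axis=0)   # fluctuations on sensor Db
    n_frames = frames_a.shape[0]
    # Average, over frames, of the product of fluctuations at every pixel pair.
    return np.einsum('nij,nkl->ijkl', d_a, d_b) / n_frames
```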
In all three setups, the positioning of the sample before the beam separator BS ensures robustness with respect to the effects of turbulence in the vicinity of the sample, unlike other CPI devices, in which the object is placed after the BS, in the arm of the transmitted beam or in the arm of the reflected beam. Advantageously, in the setups according to the present invention, the effects of the turbulence present at a longitudinal distance d from the object can be neglected, provided that the transverse dimension δt, within which the phase variations due to turbulence are practically constant, satisfies
$$\delta_t \gg \frac{d}{k\,\delta},$$

with k=2π/λ the light wave number and δ the size of the smallest sample detail.
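As a purely numerical illustration of this condition, the following sketch evaluates it for hypothetical values; the numeric margin standing in for the "≫" sign, as well as all parameter values, are assumptions of the example.

```python
import math

def turbulence_negligible(delta_t: float, d: float,
                          wavelength: float, delta: float,
                          margin: float = 10.0) -> bool:
    """Check delta_t >> d / (k * delta), with a numeric margin standing
    in for '>>'; k = 2*pi/wavelength, d = distance of the turbulent
    layer from the object, delta = size of the smallest sample detail."""
    k = 2 * math.pi / wavelength
    return delta_t > margin * d / (k * delta)

# Hypothetical values: visible light (0.5 um), 1 um smallest detail,
# turbulent layer 100 um from the sample, phase patch delta_t = 100 um.
print(turbulence_negligible(delta_t=100e-6, d=100e-6,
                            wavelength=0.5e-6, delta=1e-6))
```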
In the first setup, a beam separator BS is placed between the objective lens O and the second lens T. The beam transmitted by the beam separator BS impinges on the second lens T and is focused on the sensor Da; the beam reflected by the beam separator BS reaches the sensor Db, which is placed at the same distance from the beam separator BS as the second lens T. In other words, the optical paths from the sample to the second lens T and from the sample to the detector Db are practically identical. This feature ensures that, when measuring the correlations between the intensities measured by sensors Da and Db, a ghost image of the second lens T is formed at Db [1, 2]. Thus, the combined information of the image of the second lens T and of the image of the object plane of the microscope (usually different from the plane in which the sample effectively lies) helps the reconstruction of the out-of-focus image of the object.
It is assumed that the aperture of the objective lens O is irrelevant, i.e. that PO(ρO) can be replaced with a constant in the transfer functions without significantly altering their value. This assumption is based on the fact that the resolutions are essentially fixed by the aperture PT of the second lens T and by the intensity profile F of the sample. When this hypothesis is not satisfied, the finite aperture of the objective can be included in the analysis by replacing the pupil function PT of the second lens T with an effective pupil function. The correlation function (2), in this scheme, becomes
where k=2π/λ is the light wave number and F is a length, depending on the focal lengths and distances of the setup, introduced for convenience; in the case of focus (f=fO), F=f. In this case, the integration of the correlation function over the sensor plane ρb provides the incoherent image of the sample, magnified by a factor m=fT/fO,
whose point-spread function is determined by the Fourier transform of the pupil function of the second lens T, as in the image reproduced directly on sensor Da. Unlike the latter, however, the image obtained in correlation contains the sample profile inside the square modulus of an integral: this is irrelevant for roughly binary objects, but in the general case it can lead to variations with respect to ordinary imaging. In both cases, the resolution of the image increases as the diameter of the second lens T increases, while the natural depth of field decreases quadratically. Likewise, it is possible to show that the integration over the plane of sensor Da returns an image of the second lens T as a function of ρb, whose point-spread function is determined by the intensity profile of the sample.
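As a non-limiting numerical illustration of the relation between pupil size and point-spread function, the following sketch computes the point-spread function as the squared Fourier transform of a circular pupil; grid units, sizes and the rough width estimate are assumptions of the example.

```python
import numpy as np

def psf_from_pupil(radius_px: int, grid: int = 512) -> np.ndarray:
    """Point-spread function as |FT(pupil)|^2 for a circular pupil.
    All quantities are in arbitrary grid units; the mapping to physical
    coordinates through wavelength and focal length is omitted."""
    y, x = np.mgrid[-grid // 2:grid // 2, -grid // 2:grid // 2]
    pupil = (x**2 + y**2 <= radius_px**2).astype(float)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil))))**2
    return psf / psf.max()

# A larger pupil (larger lens diameter) gives a narrower central lobe,
# i.e. a finer resolution, consistently with the statement above.
for radius in (16, 64):
    psf = psf_from_pupil(radius)
    width = int((psf[psf.shape[0] // 2] > 0.5).sum())  # rough lobe width
    print(f"pupil radius {radius} px -> central lobe ~ {width} px wide")
```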
The dominant contribution to equation (4) in the limit of geometric optics (large frequency, i.e. small wavelength, of light) is determined by the stationary point of the phase of the integrand.
Thus, the correlation reduces to the product of two images, i.e. the image of the sample S (second term) and the image of the second lens T (first term). Because of the structure of equation (4), these images are coherent. The position of the sample image on sensor Da depends on the coordinate ρb on the other sensor Db, except in the case where the microscope is in focus (f=fO, F=f). When the image is out of focus, the integration on Db, performed to increase the signal-to-noise ratio as in equation (7), washes out the sample image. However, the point-by-point knowledge of Γ(ρa, ρb) allows reordering the correlation matrix, so as to factor the dependence on ρa and ρb and refocus the image.
In the limit of geometric optics, the “refocused” correlation matrix
provides a sample image independent of ρb, as in the case in focus. Therefore, the integration on ρb following the implementation of the refocusing algorithm (9) allows increasing the signal-to-noise ratio of the sample image, since it exploits all the light transmitted by the second lens T:
The results of equations (8)-(9) demonstrate, in the limit of geometric optics, the refocusing capability of the first setup of CPI microscopy.
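Since equations (8)-(10) are not reproduced in this text, the following sketch illustrates the general principle of the refocusing step with a generic shift-and-sum rule, in one transverse dimension for brevity; the linear remapping xa → xa + α·xb and the parameter α, which encodes the defocus, are stand-ins assumed only for illustration, not the exact algorithm of the invention.

```python
import numpy as np

def refocus(gamma: np.ndarray, alpha: float) -> np.ndarray:
    """Refocus a measured correlation matrix Gamma(xa, xb) (1D pixels).

    Each column xb of the correlation matrix is an image of the sample
    as seen from one point of the lens; when out of focus, these images
    are mutually shifted in proportion to xb. Realigning the columns
    (the role of the exact refocusing algorithm) and summing them over
    xb increases the signal-to-noise ratio, as described in the text.
    alpha = 0 corresponds to the in-focus case, where no shift is needed.
    """
    n_a, n_b = gamma.shape
    refocused = np.zeros(n_a)
    for xb in range(n_b):
        shift = int(round(alpha * (xb - n_b // 2)))  # realign this view
        refocused += np.roll(gamma[:, xb], -shift)
    return refocused
```

Here `gamma` could be, for instance, a one-dimensional section of the correlation matrix estimated from the measured frame stacks, as in the earlier sketch.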
In the second setup, the beam separator BS is placed between the sample S and the objective lens O. While the path of the beam transmitted by the beam separator BS is identical to that of the first setup, the beam reflected by the beam separator BS impinges on the reflected-beam sensor Db, which is positioned at the same distance from the beam separator BS as the objective lens O. This ensures that, by measuring the correlations between the intensities detected by the two sensors of the transmitted beam Da and of the reflected beam Db, the ghost image of the objective lens is reproduced on the sensor Db. The image of the sample S, in focus or out of focus, is reproduced on the sensor Da either directly or by measuring correlations with each pixel of Db.
Unlike the previous case, it is assumed for simplicity that the aperture of the second lens T is irrelevant, that is, PT(ρT) is a constant function, and that the resolutions are essentially fixed by the aperture PO of the objective lens and by the intensity profile of the sample. The correlation function (2) becomes
In the case of focus (f=fO), the integration of the correlation function over the plane of the reflected-beam sensor Db produces the incoherent image of the sample, magnified by M=fT/fO,
whose point-spread function is determined by the Fourier transform of the pupil function of the objective lens O, as in the image reproduced directly on the sensor Da. The resolution increases as the lens diameter increases, while the natural depth of field decreases quadratically. The integration of the correlation function on the sensor plane Da produces an image of the objective lens O as a function of ρb, with the point-spread function determined by the intensity profile of the sample S.
The dominant contribution to (11) in the limit of geometric optics is fixed by the stationary point of the phase of the integrand.
Also in this case, the position of the sample image on Da depends on the coordinate ρb on the other detector Db, except in the case where the microscope is in focus (f=fO, F=f), and the integration on Db to increase the signal-to-noise ratio may produce an out-of-focus image of the sample. However, the dependence on ρa and ρb can be factored through an appropriate choice of the first argument of the correlation matrix:
The integration over ρb after performing the refocusing operation (15) produces an image of the sample with a greater signal-to-noise ratio:
The results (14)-(15) show, in the limit of geometric optics, the refocusing capability of the second setup of CPI microscopy.
The second setup has advantages over the first one, since the directional reconstruction is based on the image of the objective, which generally defines the aperture of a microscope: it is therefore not necessary to introduce an effective pupil function, and the design of the apparatus is (at least in principle) simpler. Moreover, as regards the refocusing algorithm, formula (15) depends on simpler combinations of the system distances than formula (9). On the other hand, Setup II has a significant drawback from a practical-operational point of view, due to the need to insert a beam separator BS into the generally very small space between the sample S and the objective O, which also implies a fine adjustment of the distance between the sample S and the sensor Db in order to obtain a focused ghost image of the lens.
In the third setup (Setup III), the beam separator BS is placed downstream of the objective lens O, as in the first setup, and a third lens L, inserted in the arm of the reflected beam, reproduces the image of the objective lens O on the sensor Db.
The calculation of Γ(ρa, ρb) proceeds, as in case II, through the substitution ρb→−ρb/ML, with ML=SI/SO the magnification of the objective image given by the third lens L, and through the irrelevant multiplication by PO(−ρb/ML), leading to
The refocusing algorithm
and the high-SNR (Signal to Noise Ratio) refocused image
follow, as in the previous cases, from the approximation of geometric optics.
The refocusing algorithms (10), (16) and (20) were obtained in the geometric optics limit. To determine the physical limits, and therefore the maximum resolution and depth of field obtainable by refocusing, one should calculate, without approximations, the quantities (4), (11) and (17), which incorporate the effects of finite wavelength and coherence, such as diffraction and interference. In order to quantify the resolution and the depth of field of the three embodiments (three setups) described so far, we perform this calculation in a simple case, in which two slits of width δ, separated by a center-to-center distance d=2δ, are to be resolved. The minimum resolved distance d is defined, according to the Rayleigh criterion, as the distance at which the visibility of the double-slit image is 10%. Using this criterion, we compare the resolution of a CPI microscope, with fixed defocus f−fO, with those of a standard microscope and of a standard plenoptic microscope. To this end, we consider a plenoptic microscope with Nu=3, i.e. with 3×3=9 directional resolution cells [4, 5]; in fact, in a standard plenoptic device, the depth of field grows with Nu, while the resolution worsens by the same factor (compared to a standard device with the same numerical aperture), so this choice is typically a good compromise.
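A minimal numerical sketch of the visibility criterion follows; the Gaussian blur stands in for the actual point-spread functions of the three setups (which follow from equations (4), (11) and (17)), and all parameter values are assumptions of the example.

```python
import numpy as np

def double_slit_visibility(delta: float, blur_sigma: float,
                           n: int = 4096, span: float = 10.0) -> float:
    """Visibility of the image of two slits of width delta, separated
    by d = 2*delta center to center, after a Gaussian blur that models
    a generic imaging point-spread function."""
    x = np.linspace(-span * delta, span * delta, n)
    slits = ((np.abs(x - delta) < delta / 2) |
             (np.abs(x + delta) < delta / 2)).astype(float)
    kernel = np.exp(-x**2 / (2 * blur_sigma**2))
    image = np.convolve(slits, kernel, mode='same')
    dip = image[n // 2]   # intensity in the gap between the slit images
    peak = image.max()    # intensity at the slit images
    return (peak - dip) / (peak + dip)

# Rayleigh-like criterion: the slits count as resolved while the
# visibility of the double-slit image stays above 10%.
for sigma in (0.3, 0.6, 1.0):
    v = double_slit_visibility(delta=1.0, blur_sigma=sigma)
    print(f"blur sigma = {sigma} * delta -> visibility = {v:.2f}")
```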
The comparison results are shown in the accompanying drawings.
In conclusion, it can be said that the three proposed schemes are essentially analogous, and differ only in the positioning of some optical components and, consequently, in the refocusing algorithm. Setup III is expected to be favored for its greater practicality of assembly and use compared to the first two.
Finally, it is worth noting that the same inventive concept described thus far with reference to the three setups I, II and III can also be applied, beyond microscopy, to the other three-dimensional imaging fields mentioned in the introduction.
The only precaution to be observed in applying the formulas given in the present description is to adapt the distances and focal lengths involved to the specific configuration adopted.
References cited:

Scarcelli et al., "Can Two-Photon Correlation of Chaotic Light Be Considered as Correlation of Intensity Fluctuations?", Physical Review Letters, 2006, 4 pages.
D'Angelo et al., "Correlation Plenoptic Imaging", Physical Review Letters, vol. 116, 2016, 6 pages.
Levoy et al., "Light Field Microscopy", ACM Trans. Graph., vol. 25, 2006, pp. 924-934.
Georgiev et al., "Focused plenoptic camera and rendering", Journal of Electronic Imaging, vol. 19, no. 2, Apr.-Jun. 2010, 11 pages.
Pepe et al., "Exploring plenoptic properties of correlation imaging with chaotic light", arXiv.org, Cornell University Library, Oct. 6, 2017, XP080826623, 9 pages.
International Search Report for PCT/IB2019/056370, dated Dec. 5, 2019, 3 pages.
Written Opinion of the ISA for PCT/IB2019/056370, dated Dec. 5, 2019, 6 pages.