The technical field generally relates to methods and devices used for high-resolution imaging. More particularly, the methods and devices relate to digital holographic imaging techniques, although the methods and concepts described herein may also be used in conjunction with lenses in some embodiments.
High-resolution, wide-field optical imaging is needed in various fields, especially in medical and engineering applications that demand large space-bandwidth products. Originally invented for electron microscopy, holography has become an emerging solution for high-resolution and wide-field digital imaging. The concept of holography relies on reconstructing the image of a sample using interference patterns created by the diffracted object fields, which can be recorded and digitized even without the use of any lenses. Recent advances in digital holographic microscopy have largely benefited from the rapid evolution of, e.g., opto-electronic sensor technology and computing power, which have led to the development of various new imaging configurations and reconstruction techniques.
To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. For example, various super-resolution techniques have been implemented to mitigate resolution loss by utilizing sub-pixel displacements in the imaging system, achieved, for example, by shifting the light source, the sensor-array and/or the sample, followed by digital synthesis of a smaller effective pixel by merging these sub-pixel-shifted low-resolution images.
Conventional pixel super-resolution relies on digital synthesis of the high spatial frequency content of the sample using multiple low-resolution measurements that are recorded at different sub-pixel displacements (e.g., lateral displacements in the x, y directions) between the image sensor and object planes. Using this mathematical framework, high-resolution (i.e., super-resolved) holograms can be obtained, and then used for digital phase retrieval. To retrieve the lost optical phase in an in-line imaging geometry, multiple super-resolved holograms can be utilized at, e.g., different sample-to-image sensor distances or illumination angles. Each one of these holograms essentially serves as an independent physical constraint on the amplitude of the optical field, which enables the use of an iterative algorithm to force the complex object field to be consistent with all these measurements. Although this sequential implementation of pixel super-resolution followed by phase retrieval has enabled digital holographic microscopy to deliver high-resolution and wide-field reconstructions with giga-pixel level throughput, these methods currently require large amounts of holographic data, which can be limiting for high-speed or cost-effective imaging applications. For instance, in a multi-height configuration (i.e., using multiple sample-to-image sensor distances), if 4×4 pixel super-resolution (e.g., 4×4 lateral shifts in the x, y directions) is implemented at eight different heights, the total number of raw holograms to be captured becomes 128, which could be a limitation for high-speed imaging applications.
In one embodiment, a method for obtaining a high resolution image of objects contained within a sample is disclosed that combines pixel super-resolution and phase retrieval techniques into a unified algorithmic framework that enables new holographic image reconstruction methods with significantly improved data efficiency, i.e., using far fewer raw measurements to obtain high-resolution and wide-field reconstructions of the sample. Using the unified algorithmic framework, the twin image noise and spatial aliasing signals, along with other digital holographic artifacts, can be interpreted as noise terms modulated by digital phasors, which are all analytical functions of the imaging parameters including, e.g., the lateral displacement between the hologram and the sensor array planes (e.g., x, y shifts), the sample-to-image sensor distance (z), the illumination wavelength (λ), and the angle of incidence (θ,φ). Based on this new propagation phasor approach, a two-stage holographic image reconstruction algorithm was developed that merges phase retrieval and pixel super-resolution into the same unified framework. Compared to previous holographic reconstruction algorithms, this new computational method reduces the number of raw measurements by five to seven-fold, while at the same time achieving a competitive spatial resolution across a large field-of-view.
In one aspect of the invention, the computational framework that employs the propagation phasor-based approach acquires a plurality of raw measurements or holograms of a sample. These holograms are used to generate a high-resolution initial guess of the sample in a first stage (Stage I) of the process. This initial guess is made by first upsampling each raw measurement using an upsampling factor. The upsampled raw measurements are then offset with lateral displacements (xshift, k and yshift, k) and subject to back-propagation. The upsampled and back-propagated holograms are then summed to generate the initial guess. The initial guess that is generated as a result of the first stage is then used as the input to an iterative algorithm to reconstruct and refine the image of the sample. This iterative process is used to eliminate the remaining twin image noise, aliasing signal, and upsampling artifacts. This second stage (Stage II) involves four different operations. First, phase modulation is applied to the initial guess and the field is propagated from the object plane to the image sensor plane (i.e., forward-propagation). Second, at the image sensor plane, the low-resolution, undersampled holograms are used to update the amplitude of the high-resolution, forward-propagated field. Third, the amplitude-updated, high-resolution field is back-propagated to the object plane and the phase modulation caused by the illumination angle is removed. Fourth, the back-propagated field is used to update the transmitted field on the object plane in the spatial frequency domain. After this update, the phase of the field is converted into an optical path length map of the sample, and the amplitude of the field is the reconstructed, final transmission image. The operations of Stage II are performed for every imaging configuration and are repeated iteratively until convergence is reached.
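The two-stage flow described above can be sketched numerically. The following is a minimal NumPy sketch under simplifying assumptions (normal-incidence plane-wave illumination, a single wavelength, no pixel-function deconvolution, and simple replication upsampling); all function names and the angular-spectrum propagator are illustrative choices, not the exact implementation of this disclosure:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field by distance z using the angular spectrum
    method; the transfer function has unit magnitude for propagating waves."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    kz = 2.0 * np.pi * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def stage1_initial_guess(holograms, zs, wavelength, dx_hi, up):
    """Stage I: upsample each raw hologram, back-propagate it to the
    object plane, and sum the results to form the initial guess."""
    acc = 0.0
    for hol, z in zip(holograms, zs):
        hi = np.kron(hol, np.ones((up, up)))   # replication upsampling
        acc = acc + angular_spectrum(hi.astype(complex), -z, wavelength, dx_hi)
    return acc / len(holograms)

def stage2_refine(guess, holograms, zs, wavelength, dx_hi, up, n_iter=10):
    """Stage II: forward-propagate, enforce the measured amplitudes at
    the sensor plane, back-propagate, and repeat."""
    obj = guess.copy()
    for _ in range(n_iter):
        for hol, z in zip(holograms, zs):
            fwd = angular_spectrum(obj, z, wavelength, dx_hi)
            amp = np.sqrt(np.kron(hol, np.ones((up, up))))  # measured amplitude
            fwd = amp * np.exp(1j * np.angle(fwd))          # amplitude update
            obj = angular_spectrum(fwd, -z, wavelength, dx_hi)
    return obj
```

Here dx_hi denotes the pixel pitch of the upsampled (high-resolution) grid, i.e., the sensor pitch divided by the upsampling factor; a practical implementation would interleave the measured low-resolution pixels with the forward-propagated field rather than simply replicating them.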
In one aspect of the invention, the computational framework that employs the propagation phasor-based approach is executed on a computing device having one or more processors. The propagation phasor-based holographic reconstruction algorithm may be executed using, for example, MATLAB® or other software or imaging program (e.g., the software may be written in C or similar programming language). Because the bulk of computation time is spent on wave propagation between the sample plane and the image sensor plane, which relies on Fast Fourier Transforms (FFTs), the use of one or more graphic processing units (GPUs) or other parallel computing architecture may reduce the total computation time and accelerate the image reconstruction process.
In one embodiment of the invention, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images using diversity of sample-to-image sensor distance (i.e., z). In another embodiment of the invention, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images using diversity of angle (i.e., (θ,φ). In another embodiment of the invention, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images using diversity of wavelength (λ). In another embodiment, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images based on diversity of sample-to-image sensor distance as well as sub-pixel lateral shifts performed at one sample-to-image sensor distance. In another embodiment, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images based on diversity of angle as well as sub-pixel lateral shifts performed at one angle. In another embodiment, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images based on diversity of wavelength as well as diversity of sample-to-image sensor distance. In another embodiment, the propagation phasor-based holographic reconstruction algorithm is used to reconstruct pixel super-resolution images based on diversity of wavelength as well as diversity of angle.
In one embodiment, wavelength-scanning is used for pixel super-resolution imaging that generates high-resolution reconstructions from undersampled raw measurements captured at multiple wavelengths within a relatively narrow spectral range (e.g., 10-30 nm). Compared with lateral (x, y) shift-based super-resolution, the wavelength-scanning method avoids the need for shifting mechanical components and, more importantly, provides uniform resolution improvement along all directions across the image sensor or sample plane. This framework enables one to improve the resolution and effective NA of a wide-field lens-free microscope by a factor of approximately four, achieving a half-pitch resolution of 250 nm with an NA of approximately 1.0 using significantly fewer measurements compared with those required by lateral shift-based super-resolution methods. Because the wavelength-scanning super-resolution technique utilizes a narrow spectral range, it allows for super-resolved imaging of both colorless (e.g., unstained) and stained/dyed biological samples. Reconstructions at multiple wavelengths also enable robust phase unwrapping to reveal the optical path length differences between the sample and surrounding media. This new wavelength-scanning super-resolution approach would broadly benefit various lens-free as well as lens-based wide-field and high-throughput microscopy applications that require large space-bandwidth products.
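As a rough numerical consistency check on the figures quoted above, half-pitch resolution and effective NA follow the familiar d ≈ λ/(2·NA) relation; the 500 nm wavelength below is an assumed illustrative value within the visible range, not a value stated here:

```python
# Half-pitch resolution d ~ wavelength / (2 * NA). With an effective NA of
# ~1.0 and an assumed 500 nm illumination, this yields the ~250 nm
# half-pitch resolution quoted above.
wavelength_nm = 500.0   # assumed illustrative wavelength
effective_na = 1.0
half_pitch_nm = wavelength_nm / (2.0 * effective_na)
print(half_pitch_nm)  # 250.0
```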
In one embodiment, a method for obtaining a high resolution image of one or more objects contained within a sample includes illuminating the sample with a light source emitting partially coherent light or coherent light, wherein the sample is interposed between the light source and an image sensor with a sample-to-image sensor distance zk. A plurality of lower resolution hologram image frames of the objects are obtained with the image sensor, wherein the plurality of lower resolution hologram image frames are obtained at (1) different sample-to-image sensor distances zk or (2) different illumination angles θk,φk. A high-resolution initial guess of the objects is generated from the plurality of lower resolution hologram image frames based on a summation of upsampled holograms in the lower resolution image frames. Twin image noise, aliasing signal, and artifacts are iteratively eliminated and phase information is retrieved from the high-resolution initial guess. The iterative process involves: forward-propagating the high-resolution initial guess from an object plane to an image sensor plane to generate a high-resolution forward-propagated field; updating an amplitude of the high-resolution forward-propagated field using holograms in the lower resolution hologram image frames; back-propagating the updated high-resolution forward-propagated field to the object plane; updating a transmitted field of the object at the object plane in the spatial frequency domain; and outputting a phase retrieved high resolution image of the one or more objects contained within the sample based on the updated transmitted field of the one or more objects.
In another embodiment, a method for obtaining a high resolution image of one or more objects contained within a sample includes sequentially illuminating the sample at a plurality of different wavelengths with a light source emitting partially coherent light or coherent light, wherein the sample is interposed between the light source and an image sensor with a sample-to-image sensor distance zk. The light source may be a wavelength-tunable light source or multiple different light sources emitting light at different wavelengths. A plurality of lower resolution hologram image frames of the objects are obtained with the image sensor at the plurality of different wavelengths, wherein the plurality of lower resolution hologram image frames are also obtained at different (1) sample-to-image sensor distances zk or (2) different illumination angles θk,φk. A high-resolution initial guess of the objects is generated from the plurality of lower resolution hologram image frames based on a summation of upsampled holograms in the lower resolution image frames. The twin image noise, aliasing signal, and artifacts are iteratively eliminated and phase information is retrieved from the high-resolution initial guess. The iterative process includes forward-propagating the high-resolution initial guess from an object plane to an image sensor plane to generate a high-resolution forward-propagated field; updating an amplitude of the high-resolution forward-propagated field using holograms in the lower resolution hologram image frames; back-propagating the updated high-resolution forward-propagated field to the object plane; updating a transmitted field of the object at the object plane in the spatial frequency domain; and outputting a phase retrieved high resolution image of the one or more objects contained within the sample based on the updated transmitted field of the one or more objects.
In another embodiment, a wavelength scanning pixel super-resolution microscope device for imaging a sample includes a sample holder configured to hold the sample. The device includes a wavelength-tunable light source (or multiple different light sources) configured to illuminate the sample at a plurality of different wavelengths λk along an optical path. A lens or set of lenses is disposed along the optical path. The device includes an image sensor configured to receive illumination passing through the sample and lens or lens set along the optical path, wherein at least one of the sample holder, lens/lens set, or image sensor is moveable along the optical path to introduce incremental defocusing conditions to the microscope device, wherein the image sensor obtains a plurality of images of the sample at the different wavelengths under the incremental defocusing conditions. The device includes at least one processor configured to (1) generate a high-resolution initial guess of the sample image based on the plurality of images of the sample at the different wavelengths under the incremental defocusing conditions, (2) iteratively eliminate artifacts and retrieve phase information from the high-resolution initial guess of the sample image, and (3) output a phase-retrieved, high-resolution image of the sample.
The configuration of an in-line holographic imaging system 10 according to one embodiment of the invention is seen with reference to
As seen in
Thus, some of the controllable parameters of the imaging system are illustrated in
Still referring to
The components of the system 10 described above may be incorporated into a small, benchtop unit that can be used to obtain high resolution images of a sample 12. In other embodiments, it may be possible to incorporate the imaging components into a small, portable device. While any type of sample 12 may be imaged using the system 10, it has particular suitability for imaging a sample 12 that includes a biological sample (e.g., bodily fluid such as blood or the like). The sample 12 may also include a pathological sample such as histological slide that may be unstained or stained depending on the particular application. The system 10 may also be used to image environmental samples containing small particulate matter or pathogens found in air or water sources.
Mathematical Formalism of Propagation Phasor Approach in Digital Holography
The concept of propagation phasors is established by deriving the analytical expressions that contain not only the holographic information of the sample 12, but also the twin image noise, spatial aliasing signal, and upsampling related spatial artifacts that are present. Throughout this analysis, plane wave illumination is assumed, which is also supported by the imaging set-up of
Under these different imaging configurations, each labeled with index k, the transmission properties of a two-dimensional (2D) sample 12 can be generally expressed as ok(x, y)=1+sk(x, y), where sk refers to the scattered object field that interferes with the background unscattered light. The frequency spectrum Ok(fx, fy) of ok(x, y) can be written as:
Ok(fx,fy)=δ(fx,fy)+Sk(fx,fy) (1)
Similarly, one can write the 2D spatial frequency spectrum of the transfer function hk(x, y, zk, λk, θk) as:
Hk(fx,fy,zk,λk,θk)≡FT{hk(x,y,zk,λk,θk)} (2)
where FT refers to the Fourier Transform operation. From now on, the expressions of all the frequency spectra will be simplified in the equations by hiding the spatial frequency variables fx, and fy. The frequency spectrum of the field intensity ik(x, y) on the image sensor plane can then be expressed as:
Ik=δ+Tk·Sk+(Tk−·Sk−)*+SSk (3)
where the superscript ‘−’ represents using the variable set (−fx, −fy) instead of (fx, fy) and the asterisk stands for the complex conjugate operation. SSk represents the self-interference terms, which can be written as SSk=Γf
The term Tk in Eq. (3) is defined as:
Tk=Hk*(fx,k,fy,k)·Hk(fx+fx,k,fy+fy,k) (4)
where fx, k=nk sin θk cos φk/λk, fy, k=nk sin θk sin φk/λk, and nk is the refractive index of the medium, which is assumed to be a function of only the illumination wavelength. It is important to note that Hk is a complex function with unit magnitude, defining a phasor. Based on Eq. (4), as a product of Hk*(fx, k, fy, k) and Hk, the function Tk is also a phasor; the term Tk is therefore deemed a propagation phasor, whose role in the reconstruction framework will become clear below.
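Equation (4) can be checked numerically. The sketch below assumes the standard free-space angular-spectrum transfer function for Hk; the function names are illustrative:

```python
import numpy as np

def transfer_phasor(fx, fy, z, wavelength, n_medium=1.0):
    """Angular-spectrum transfer function H(fx, fy): unit magnitude for
    propagating (non-evanescent) spatial frequencies."""
    arg = (n_medium / wavelength) ** 2 - fx**2 - fy**2
    if arg < 0:
        raise ValueError("evanescent spatial frequency")
    return np.exp(1j * 2.0 * np.pi * z * np.sqrt(arg))

def propagation_phasor(fx, fy, fxk, fyk, z, wavelength):
    """Tk = Hk*(fx,k, fy,k) · Hk(fx + fx,k, fy + fy,k), per Eq. (4)."""
    return (np.conj(transfer_phasor(fxk, fyk, z, wavelength))
            * transfer_phasor(fx + fxk, fy + fyk, z, wavelength))

# Tk is a product of unit-magnitude complex numbers, hence itself a phasor.
T = propagation_phasor(0.1e6, 0.0, 0.05e6, 0.0, 300e-6, 530e-9)
print(round(abs(T), 6))  # 1.0
```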
When any intensity distribution ik(x, y) is sampled by an image sensor 14 with a pixel pitch of Δx and Δy in lateral directions, the discrete Fourier transform (DFT) of the sensor's output can be expressed as:
In Eq. (5), u and v are integers representing the aliasing orders, and (u, v)=(0, 0) denotes the non-aliased target signal of the object. Pk(fx, fy) is the 2D FT of the pixel function that defines the responsivity distribution within each pixel of the image sensor 14. Originally, fx and fy in Eq. (5) are discrete frequency values confined within the Nyquist window. Based on the periodic nature of the DFT, Eq. (5) and all of the further derivations can be numerically extended to a broader frequency domain by simply upsampling the raw measurements. Therefore, without a change of notation, Isampled, k refers to the DFT of the upsampled version of the raw measurements.
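The aliasing orders (u, v) in Eq. (5) arise because pixel sampling folds spatial frequencies above the sensor's Nyquist limit back into the passband. A 1-D toy demonstration of this folding (not the actual 2-D sensor model):

```python
import numpy as np

# A sinusoid above the coarse grid's Nyquist limit folds to a lower frequency.
n_fine, down = 64, 4                               # fine grid and 4x undersampling
x = np.arange(n_fine)
signal = np.cos(2.0 * np.pi * (20.0 / n_fine) * x)  # 20 cycles over the fine grid
sampled = signal[::down]                            # coarse "pixel" sampling: 16 samples
spectrum = np.abs(np.fft.rfft(sampled))
# 20 cycles over 16 coarse samples = 1.25 cycles/sample, which aliases to
# 0.25 cycles/sample, i.e., bin 4 of the 16-point spectrum.
print(np.argmax(spectrum[1:]) + 1)  # 4
```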
Now, the lateral displacements between the holograms and the image sensor 14 are incorporated into Eq. (5). If one adds lateral shifts (xshift, k, yshift, k) to each hologram, then Eq. (5) can be re-written as:
where the expression of spatial aliasing order is simplified by using the subscript uv, and φshift, uv, k represents the phase change caused by a lateral shift:
In Eq. (6), by replacing the expression of Iuv, k with Eq. (3), one can obtain an expanded expression for Isampled, k:
On the right side of Eq. (8), one can see that, for each aliasing order (i.e., each combination of u and v, including the target signal: u=0, v=0), there are four items inside the square brackets. The first item, δuv, represents the background light; the second item, Tuv, k·Suv, k, represents the real image; the third item, (Tuv, k−·Suv, k−)*, represents the twin image; and the last item, SSuv, k, is the self-interference term.
Two-Stage Holographic Reconstruction Algorithm
As explained herein, a generic, two-stage holographic reconstruction algorithm using propagation phasors is set forth which aims to recover the object term δ00+S00, k from a series of measured holograms.
Stage I of Propagation Phasor Based Holographic Reconstruction: Generation of an Initial Guess
With reference to
On the left side of Eq. (9), the pixel function P00, k is kept, multiplied with (δ00, k+S00, k); note, however, that it can later be removed using deconvolution techniques as the last step of the holographic reconstruction. The right side of Eq. (9) shows that, in order to extract (δ00+S00, k)·P00, k, there are five terms that need to be eliminated from the back-propagated intensity (i.e., T00, k*·ejφshift, 00, k·Isampled, k). The first term, T00, k*·T00, k−*·P00, k·S00, k−*, represents the twin image noise; the second and third terms, which contain Suv, k or Suv, k−* (u≠0, v≠0), represent the spatial aliasing signals for the real and twin images, respectively; the fourth term, with δuv (u≠0, v≠0), contains the high frequency artifacts generated during the upsampling process; and the last term, with SSuv, k, is the self-interference signal.
where Q00, ktwin=T00, k*·T00, k−*, Quv, kreal=T00, k*·Tuv, k·ej(φ
Also note that, except for the illumination wavelength λk, changes in the imaging parameters zk, θk, φk, xshift, k, and yshift, k do not affect the transmission properties of the 2D sample. During the imaging process, the illumination wavelengths are confined within a narrow spectral range, typically less than 10 nm, so that the transmission properties of the sample and the image sensor's pixel function can be approximately considered identical when generating an initial guess of the object, i.e., Suv, k≈Suv and Puv, k≈Puv. If one lists Eq. (10) for all the possible K imaging conditions (e.g., as a function of various illumination wavelengths, sub-pixel shifts, etc.), and then sums them up with a set of weighting factors {ck}, this produces:
By finding a set of weighting factors {ck} that satisfy
one can have “complete elimination” of the twin image noise, aliasing signals and upsampling related spatial artifacts, while still maintaining the target object function, (δ00+S00)·P00. However, considering the fact that Quv, ktwin, Quv, kreal and Quv, kartifact are also functions of spatial frequencies (fx, fy), it is computationally expensive to obtain a set of ideal {ck} values. Therefore, an alternative strategy is adopted as shown in
In fact, as illustrated in
This initial guess is then used as the input to an iterative algorithm (Stage II) to reconstruct and refine the object function/image, which will be detailed below.
Stage II of Propagation Phasor Based Holographic Reconstruction: Iterative Image Reconstruction
Using the initial guess defined by Eq. (13), an iterative process is then implemented as the second stage of the propagation phasor based holographic reconstruction algorithm to eliminate the remaining twin image noise, aliasing signal, and the upsampling related artifacts. Each iteration of Stage II comprises four steps (i.e., Steps 4 through 7—see
Wave propagation from the object plane to the image sensor is deemed as forward-propagation, and the spatial form of the forward-propagated field is denoted as gforward, k(x, y).
These above-outlined steps (Steps 4 to 7) are performed for every imaging configuration; one iteration cycle is completed when all the K raw measurements have each been used once. Similar to the convergence condition defined in Allen, L. J. et al., Exit wave reconstruction at atomic resolution, Ultramicroscopy 100, 91-104 (2004), which is incorporated by reference, convergence of the iterations and the reconstruction is determined when the sum-squared error (SSEavgitr) between the raw measurement and the downsampled intensity map satisfies the following criterion:
|SSEavgitr−SSEavgitr-1|<ε (15)
where ‘itr’ is the index of the iteration cycle, and ε is the convergence constant, empirically defined as ~0.2% of SSEavgitr.
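A sketch of the stopping rule of Eq. (15), with the convergence constant taken as ~0.2% of the current SSE as stated above (the function name is illustrative):

```python
def has_converged(sse_history, rel_tol=0.002):
    """Return True once |SSE_itr - SSE_(itr-1)| < eps, with eps defined as
    rel_tol (~0.2%) of the current iteration's average sum-squared error."""
    if len(sse_history) < 2:
        return False
    curr, prev = sse_history[-1], sse_history[-2]
    return abs(curr - prev) < rel_tol * curr

print(has_converged([10.0, 5.0, 4.0]))   # False: SSE still changing
print(has_converged([5.0, 4.001, 4.0]))  # True: change below 0.2% of SSE
```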
To demonstrate the propagation phasor approach for holographic image reconstruction, it was implemented using lens-free holographic microscopy, although it is broadly applicable to other holographic microscopy platforms (including those with lenses). As depicted in
Sample Preparation
Besides a standard 1951 USAF resolution test target, the propagation phasor approach was also tested by imaging biological samples, including unstained Papanicolaou (Pap) smear slides and blood smears. Pap smears were prepared using the ThinPrep® method (Hologic, Massachusetts, USA). The blood smear samples were prepared using EDTA (ethylenediaminetetraacetic acid) anticoagulated human blood and stained with Wright's Stain.
Computation Platform Used for Propagation Phasor Based Holographic Reconstructions
For proof-of-concept implementation, the propagation phasor approach based reconstruction algorithm has been implemented using MATLAB® (Version R2012a, MathWorks, Massachusetts, USA) on a desktop computer with a 3.60-GHz central processing unit (Intel Xeon E5-1620) and 16 GB of random-access memory. Of course, dedicated imaging software or other programs may be used besides MATLAB®. Using an upsampling factor of seven, the computation time of one iteration in reconstruction Stage II is ~1.2 seconds for a region-of-interest of ~1×1 mm2. As for the total computation time including Stages I and II, assuming that the number of intensity distribution updates is ~8-10 per iteration (see e.g.
Results and Discussion
The main challenges of wide field-of-view, high-resolution holographic imaging include: (1) phase retrieval, and (2) mitigating the undersampling caused by an image sensor chip. The propagation phasor approach described herein relies on the fact that in the digital hologram of a sample, the twin image noise and spatial aliasing signals vary under different imaging configurations. Such variations enable one to eliminate these unwanted noise terms (twin image noise and aliasing signal) and obtain phase-retrieved and high-resolution (i.e., super-resolved) reconstructions of the object. The imaging configuration in a holographic microscope can in general be changed by varying different parameters: (1) the lateral displacements between the holograms and the sensor-array (i.e., lateral relative shifts xshift, k and yshift, k), (2) the sample-to-image sensor distance (zk), (3) the illumination wavelength (λk), and (4) the angle of incidence (θk, φk).
To better illustrate the inner workings of the propagation phasor approach, the dependencies of the twin image noise and the aliasing signal on these controllable imaging parameters are demonstrated followed by a summary of the combinations of these imaging parameters that can create phase-retrieved and high-resolution reconstructions while also improving the data efficiency of holographic imaging.
Dependency of Twin Image Noise and Aliasing Signal on Imaging Parameters
From Eq. (10), one can see that all the terms which need to be eliminated from an upsampled and back-propagated hologram T00, k*·ejφshift, 00, k·Isampled, k are modulated by phasors, including: (1) the twin image term, modulated by Q00, ktwin; (2) the aliasing signals, modulated by Quv, kreal and Quv, ktwin (u≠0, v≠0); (3) the upsampling artifacts (δuv terms), modulated by Quv, kartifact (u≠0, v≠0); and (4) the self-interference patterns (SSuv, k terms, modulated by Quv, kartifact). From the perspective of the propagation phasor approach, it is desirable that the phasors that modulate these unwanted noise terms or artifacts exhibit sufficient variations across [0, 2π], so that they can be significantly suppressed during the initial summation in the reconstruction Stage I. Here, the focus of the discussion is on the twin image phasor Q00, ktwin and the aliasing related phasors Quv, kreal and Quv, ktwin (u≠0, v≠0), where the conclusions would be broadly applicable to a wide range of holographic imaging systems (lens-based or lens-free). Meanwhile, the self-interference patterns/artifacts are much weaker in signal strength compared to the holographic interference terms and can be easily suppressed by the iterative reconstruction algorithm (Stage II).
To illustrate the dependencies of the twin image noise and the aliasing signal on the holographic imaging parameters, the twin image phasor ejφtwin≡Q00, ktwin and one of the spatial aliasing phasors, i.e., ejφalias≡Quv, kreal (u=1, v=1) were chosen as examples.
From
Lateral Shifts (xshift, k, yshift, k):
Since the twin image phasor ejφtwin≡Q00, ktwin≡T00, k*·T00, k−* (see Eq. 4) does not contain variables xshift, k or yshift, k, the absolute value of its partial derivatives with respect to xshift, k and yshift, k is zero, i.e., |∂φtwin/∂xshift, k|=0 and |∂φtwin/∂yshift, k|=0 (
Sample-to-Image Sensor Distance (zk):
Using the diversity of the sample-to-image sensor distance (zk) to eliminate the twin image noise has been one of the most widely-used phase retrieval techniques in holographic image reconstruction. For completeness, here the effect of zk on the twin image noise is analyzed from the perspective of the propagation phasor approach. As shown in
Wavelength (λk):
The diversity of illumination wavelength can be used for twin image elimination (i.e., phase retrieval). As shown in
Angle of Incidence (θk, φk):
As shown in
The propagation phasor approach described herein: (1) provides a unique mathematical formalism that combines/merges various existing phase retrieval and pixel super-resolution techniques used in digital holography into the same unified framework, and (2) creates two new techniques to eliminate the aliasing signal in digital holography, namely using the diversity of the sample-to-image sensor distance, and the diversity of the illumination angle. For consistency with the previously used terminology, these two new methods are called multi-height based pixel super-resolution and multi-angle based pixel super-resolution, respectively.
Propagation Phasor Approach Using Multi-Height and Multi-Angle Holographic Data
Using this new propagation phasor based reconstruction framework, the diversities of sample-to-image sensor distance or illumination angle can enable not only twin image elimination, but also resolution enhancement, i.e., super-resolution. To demonstrate the resolution enhancement brought by the diversity of zk (i.e., multi-height based pixel super-resolution), holograms of a standard resolution test target were captured at eight different heights, where the values of zk are evenly distributed between 200 μm and 305 μm with a spacing of ~15 μm. For comparison, the sample was first reconstructed using a previously reported technique, the multi-height based phase retrieval algorithm, as seen in
In addition to multi-height based pixel super-resolution, a similar resolution enhancement can also be achieved using the diversity of illumination angles (i.e., multi-angle based pixel super-resolution). As shown in
Improving the Data Efficiency in High-Resolution Holographic Reconstructions Using the Propagation Phasor Approach
It was also found that much higher resolution images can be reconstructed using the propagation phasor approach by simply adding lateral shift-based pixel super-resolution to only one of the measurement heights or angles, which is used as an initial guess at Stage I of the reconstruction algorithm. This approach is also quite efficient in terms of its data requirement compared to existing approaches.
Using the multi-height imaging configuration outlined earlier, 4×4 lateral shift-based pixel super-resolution was performed at only one sample-to-image sensor distance (i.e., 190 μm), which added fifteen (15) extra raw measurements/holograms to the original data set that is composed of measurements at eight (8) heights. In the propagation phasor based reconstruction, the back-propagation of this super-resolved hologram at this height (190 μm) was used as the initial guess (Stage I of the algorithm). The resolution improvement that was obtained by using these additional fifteen (15) raw measurements in the propagation phasor approach is significant: a half-pitch resolution of ~0.55 μm (corresponding to an NA of ~0.48 at 530 nm illumination) was achieved, which is the same level of resolution that is achieved by performing lateral shift-based super-resolution at every height (see
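The arithmetic behind this data-efficiency comparison is straightforward:

```python
# Raw-hologram counts for the multi-height example discussed above.
psr_at_every_height = 4 * 4 * 8   # 4x4 lateral shifts at each of 8 heights
phasor_approach = 8 + 15          # 8 multi-height holograms plus 15 extra
                                  # shifted holograms at a single height
print(psr_at_every_height, phasor_approach)              # 128 23
print(round(psr_at_every_height / phasor_approach, 1))   # 5.6
```

The roughly 5.6-fold reduction in raw measurements is consistent with the five to seven-fold improvement in data efficiency stated earlier.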
A similar level of improvement in the data efficiency of the propagation phasor approach is also observed in the multi-angle imaging configuration: by simply performing 6×6 pixel super-resolution at only the vertical illumination, the propagation phasor based reconstruction can achieve a half-pitch resolution of ˜0.49 μm (corresponding to an NA of ˜0.53 at 530 nm illumination). As a comparison, the synthetic aperture approach achieves a half-pitch resolution of ˜0.44 μm; however, it uses 6×6 pixel super-resolution at every illumination angle (
Imaging Biological Samples Using the Propagation Phasor Approach
To demonstrate the success of the propagation phasor approach in imaging a biological sample, unstained Papanicolaou (Pap) smears were imaged (see
Based on these results, it can be seen that the propagation phasor approach would greatly increase the speed of high-resolution and wide-field holographic microscopy tools. In previously reported holographic imaging modalities, multiple laterally shifted images are captured to achieve pixel super-resolution at every one of the sample-to-image sensor distances or illumination angles. As demonstrated in
Wavelength-Scanning Pixel Super-Resolution
In this particular embodiment, pixel super-resolution is achieved by utilizing wavelength scanning to significantly improve the resolution of an undersampled or pixelated imaging system, without the use of any lateral shifts or displacements. In this technique, the sample is sequentially illuminated at a few wavelengths that are sampled from a rather narrow spectral range of 10-30 nm (although a larger wavelength range may also be used in some embodiments). Compared with sub-pixel displacement or lateral shift-based super-resolution techniques, wavelength scanning introduces a uniform resolution improvement across all the directions on the sample plane and requires significantly fewer raw measurements to be made. Making fewer measurements without sacrificing performance could greatly benefit high-speed wide-field imaging, field-portable microscopy and telemedicine applications, which are all sensitive to data transmission and storage.
Mathematical Formalism of Wavelength-Scanning Pixel Super-Resolution
An assumption is made that the sample is a thin object mounted on a plane parallel to the image sensor chip and that the sample is sequentially illuminated by multiple wavelengths {λk}. At a given wavelength λk, the object wave can be written as ok(x, y)=1+sk(x, y), where sk(x, y) represents the scattered object wave, immediately at the exit of the object plane (z=0). The 2D Fourier transform of ok(x, y) can be written as Ok(fx, fy)=δ(fx, fy)+Sk(fx, fy). At the image sensor plane (z=z0), the Fourier transform of the intensity distribution, ik(x, y), can be written as Equation (3), above. On the right-hand side of Eq. (3), the first item, δ, represents the background intensity; the second and third items are conjugate holographic terms, which represent the interference of the scattered object wave with the background wave at the sensor plane. The fourth item is the self-interference term, which can be considered negligible for weakly scattering objects. The expression for Tk can be written as follows:
Tk(fx,fy)=Hk*(fx,k,fy,k)·Hk(fx+fx,k,fy+fy,k)  (16)
where Hk(fx, fy) is the free space transfer function, and the frequency shifts fx, k and fy, k are determined by the illumination wavelength and the incident angle. After the object intensity is sampled by an image sensor array with a pixel pitch of Δx and Δy, the discrete Fourier transform (DFT) of the sensor's output can be expressed as follows:
where u and v are integers and fx and fy are discrete spatial frequency values. Note that Ipix, k(fx, fy)=Ik(fx, fy)·Pk(fx, fy), where Pk(fx, fy) represents the Fourier transform of the pixel function, i.e., the 2D responsivity distribution within each pixel: pk(x, y). Variables u and v represent the order of spatial aliasing due to pixelation, and (u, v)=(0, 0) corresponds to the non-aliased real (i.e., target) signal. The periodic nature of the DFT enables one to extend the expression of Isampled, k to a broader frequency space by upsampling. Based on these definitions, one can express the undersampled or pixelated lens-free hologram at a given wavelength λk as follows:
The non-aliased target signal ok(x, y) or its spatial Fourier spectrum can be obtained under (u, v)=(0, 0), i.e., δ00+S00, k which can be written as follows:
On the left side of Eq. (19), one retains the pixel function P00, k, which can be removed later in the last step of the image reconstruction, using, for example, spatial deconvolution with a Wiener filter. Eq. (19) also shows that to obtain the non-aliased object at (u, v)=(0, 0), one needs to eliminate or subtract four terms from the upsampled and back-propagated holographic term (i.e., T00, k*·Isampled, k). To this end, the first item to eliminate, T00, k*·T00, k*·P00, k·S00, k−*, is the twin image noise, a characteristic artifact of in-line holography. The second term in Eq. (19), which contains Suv, k and Suv, k−* (u≠0, v≠0) in the summation, represents the effects of spatial aliasing and undersampling for both the real and twin image terms. The third item, which contains δuv in the summation, is the periodic background artifact generated during the upsampling process, and the last item is the self-interference term and its upsampling related artifacts.
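For illustration, the free-space transfer function Hk and the combined propagation term Tk of Eq. (16) can be sketched numerically as follows. This is a minimal sketch; the grid parameters and function names are illustrative and not part of the disclosed implementation:

```python
import numpy as np

def H(fx, fy, wavelength, z):
    """Free-space angular spectrum transfer function evaluated at the given
    spatial frequencies; evanescent components are set to zero."""
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    out = np.zeros_like(arg, dtype=complex)
    prop = arg > 0
    out[prop] = np.exp(2j * np.pi * z * np.sqrt(arg[prop]))
    return out

def T_k(fx, fy, wavelength, z, fxk=0.0, fyk=0.0):
    """Eq. (16): Tk(fx, fy) = Hk*(fxk, fyk) . Hk(fx + fxk, fy + fyk), where
    (fxk, fyk) is the frequency shift set by the illumination angle."""
    carrier = np.conj(H(np.array([fxk]), np.array([fyk]), wavelength, z)[0])
    return carrier * H(fx + fxk, fy + fyk, wavelength, z)
```

At normal incidence (fxk=fyk=0), Tk has unit modulus for all propagating spatial frequencies, i.e., it only modulates the phase of the hologram spectrum; its strong wavelength sensitivity is what the reconstruction exploits.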
As with the other embodiments, a two-stage reconstruction algorithm is used to eliminate all four of these items listed on the right side of Eq. (19), using wavelength scanning to enable super-resolved reconstructions of complex (i.e., phase and amplitude) object functions.
Reconstruction Stage 1: Generation of a High-Resolution Initial Guess of the Sample Using Wavelength Diversity
As depicted in
This expression indicates that by summing all the back propagations at different wavelengths (over a narrow spectral range of 10-30 nm, for example), the reconstructed image, i.e., δ00+S00, k or (δ00+S00, k)·P00, k, can be significantly enhanced, whereas the spatial aliasing and undersampling related terms with T00, k*·Tuv, k will be considerably suppressed. Therefore, in this first stage of the reconstruction process, a high-resolution initial guess of the sample is generated by summing all the upsampled and back-propagated raw measurements, i.e., low-resolution diffraction patterns. The artifact items {δuv} (u≠0, v≠0) are subtracted before the back-propagation step to create a cleaner image.
It should be noted that modifying Eq. (20) into a weighted average at each spatial frequency point could achieve better suppression of spatial aliasing and undersampling-related artifacts. However, using the computation platform used for experimental results, which is based on a central processing unit (CPU), the search for optimal weighting factors at each frequency point will significantly increase the total computation time. Therefore, in this implementation, a simpler summation approach was chosen to minimize the computation time for the generation of the initial object guess. The spatial aliasing and undersampling-related artifacts of this initial guess will be further eliminated and cleaned up during the second stage of the algorithm as explained below.
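The Stage 1 summation described above can be sketched as follows. This is an illustrative sketch assuming a square grid, normal illumination, and zero-order upsampling by pixel replication; all names and parameters are hypothetical:

```python
import numpy as np

def stage1_initial_guess(raw_frames, wavelengths, z, dx_raw, upsample):
    """Stage 1 (illustrative sketch): sum the upsampled and back-propagated
    raw holograms over all wavelengths. The aliasing terms carry
    wavelength-sensitive phases and largely cancel, while the target signal
    adds constructively."""
    n = raw_frames[0].shape[0] * upsample
    dx = dx_raw / upsample
    fx = np.fft.fftfreq(n, d=dx)[:, None]
    fy = np.fft.fftfreq(n, d=dx)[None, :]
    guess = np.zeros((n, n), dtype=complex)
    for frame, wl in zip(raw_frames, wavelengths):
        # upsample by pixel replication (periodic replication of the DFT)
        up = np.kron(frame, np.ones((upsample, upsample)))
        spectrum = np.fft.fft2(up)
        # back-propagation: conjugate of the free-space transfer function,
        # with evanescent components suppressed
        arg = 1.0 / wl**2 - fx**2 - fy**2
        Hk = np.where(arg > 0,
                      np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                      0.0)
        guess += np.fft.ifft2(spectrum * np.conj(Hk))
    return guess / len(raw_frames)
```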
Reconstruction Stage 2: Multi-Wavelength-Based Iterative Pixel Super-Resolution and Phase Retrieval
The second stage of the numerical reconstruction involves an iterative algorithm, which contains four sub-steps in each iteration:
(1) Knowing each raw measurement's corresponding wavelength and incidence angle, one applies the corresponding plane wave illumination to the initial guess of the sample (from Stage 1) and propagates the optical field from the object plane to the image sensor plane using the angular spectrum approach.
(2) The amplitude of the high-resolution field on the image sensor plane is updated using the low-resolution measurement at the corresponding wavelength. To this end, the intensity of the high-resolution field is convolved with the image sensor's pixel function and downsampled to the same grid size as the pixelated raw measurement. The difference between the raw measurement and the downsampled intensity map is considered a low-resolution “correction” map for each illumination wavelength. A high-resolution correction map can then be generated by taking the Kronecker product of this low-resolution map and the pixel function. To perform a smooth update, this high-resolution correction map is added to the high-resolution intensity distribution with a relaxation parameter, typically set to approximately 0.5. After the smoothed update, a Wiener deconvolution filter that incorporates the image sensor's noise level is applied to this updated intensity distribution. The square root of this filtered high-resolution intensity distribution is then applied to the amplitude of the field on the sensor plane while the phase map is kept unaltered.
(3) This updated field is then back-propagated to the object plane.
(4) The back-propagated field is used to update the transmission field on the object plane. This update is performed in the frequency domain within a circular area as seen in
The four steps described above are performed for every raw (i.e., undersampled) measurement captured by the image sensor array. One iteration cycle is complete when every raw measurement has been used for an amplitude update. The reconstruction typically converges after 5-10 iteration cycles. The convergence condition for the iteration is defined using Equation (15) above.
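Sub-step (2), the amplitude update, can be sketched as follows. This is a simplified sketch: the Wiener deconvolution step is omitted for brevity, the pixel function is assumed shift-invariant, and all names are illustrative:

```python
import numpy as np

def amplitude_update(field_sensor, raw_meas, pixel_fn, upsample, alpha=0.5):
    """Sub-step (2) of Stage 2 (illustrative sketch). Updates the amplitude
    of the high-resolution field on the sensor plane using one undersampled
    raw measurement; the phase map is kept unaltered."""
    intensity = np.abs(field_sensor) ** 2
    # convolve the high-resolution intensity with the pixel function
    # (circular convolution via FFT, assuming a shift-invariant pixel)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(intensity) *
                                   np.fft.fft2(pixel_fn, s=intensity.shape)))
    # downsample to the raw (pixelated) grid by block averaging
    h, w = raw_meas.shape
    down = blurred.reshape(h, upsample, w, upsample).mean(axis=(1, 3))
    # low-resolution correction map, lifted back to the fine grid by a
    # Kronecker product with the pixel footprint
    corr = np.kron(raw_meas - down, np.ones((upsample, upsample)))
    # relaxed (smooth) update with relaxation parameter alpha ~ 0.5
    updated = np.clip(intensity + alpha * corr, 0.0, None)
    return np.sqrt(updated) * np.exp(1j * np.angle(field_sensor))
```

Sub-steps (1) and (3) are plain angular-spectrum propagations of this updated field, and sub-step (4) constrains the object spectrum within the passband before the next measurement is processed.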
Phase Retrieval Using Multi-Height and Synthetic Aperture Techniques
Multi-height and synthetic aperture techniques have been proven to be robust phase retrieval methods for lens-free on-chip imaging. In previously reported lens-free reconstructions, pixel super-resolution and phase retrieval are carried out sequentially: at each height or illumination angle, lateral shift-based pixel super-resolution is first performed to obtain high-resolution diffraction patterns on the image sensor plane. These super-resolved diffraction patterns are then used by an iterative phase retrieval algorithm, in which wave propagations between the object plane and the image sensor plane are executed repeatedly. However, in wavelength-scanning-based pixel super-resolution, raw measurements are essentially undersampled versions of different holograms. Therefore, the same iterative algorithm detailed in the previous subsection (i.e., Reconstruction Stage 2) is used to realize resolution enhancement and phase retrieval altogether. More specifically, in the multi-height configuration, the sample is illuminated sequentially at each wavelength, and the corresponding lens-free holograms are captured before the vertical scanning stage moves the sample or the image sensor to the next height. Each height will be labeled with index l; therefore, all the measurements {Isampled, k} and the corresponding transfer functions {Hk} and {Tuv, k} that are used in the previous derivations can be relabeled as {Isampled, kl}, {Hkl} and {Tuv, kl}, respectively. During the numerical reconstruction process, all the raw holograms are upsampled, back-propagated, and then summed together to generate the high-resolution initial guess at a given height. In Stage 2 of the reconstruction algorithm, the aforementioned four-step process is applied to each raw measurement. The same set of operations and processing also apply to the synthetic aperture technique, except that index l now refers to each illumination angle instead of sample height.
In general, for pathology slides such as blood smears and Pap smears, the optical path length difference between the sample (i.e., biological tissue) and the medium (i.e., air or the sealing glue) is rather small. Under these circumstances, phase unwrapping is not a concern; therefore, in the phase recovery process, one can use a scrambled order of {Isampled, kl} in each iteration cycle. However, when working with samples with larger optical path length differences, such as grating lines carved into a glass substrate, one extra step, i.e., phase unwrapping, must be added after the reconstruction, and the order of iterations must be modified accordingly.
Multi-Wavelength Phase Unwrapping
A robust phase unwrapping algorithm requires high-resolution and phase-retrieved reconstructions at multiple wavelengths; therefore, the raw measurements are divided into subsets, in which the wavelengths are identical or very similar (e.g., Δλ≦5 nm), and the four-step reconstruction process previously discussed (as part of Reconstruction Stage 2) is performed on each subset separately. For example, reconstruction No. 1 uses subset {Isampled, kl|k=1, l=1, . . . L}, No. 2 uses {Isampled, kl|k=2, l=1, . . . L}, etc. When the iterations for all these subsets are completed, one obtains high-resolution (i.e., super-resolved) phase-retrieved reconstructions at multiple wavelengths, i.e., {Ok}, whose phase maps {φk, wrapped} need unwrapping. Using these wrapped phase maps {φk, wrapped} at multiple wavelengths, phase unwrapping is performed to accurately reveal the optical path length differences between the sample and the surrounding medium. Assuming that the optical path length difference is ΔL(x, y), the phase distribution at the object plane at each wavelength can be written as φk(x, y)=2π·ΔL(x, y)/λk. The wrapped phase can thus be expressed as φk, wrapped(x, y)=φk(x, y)±2Nπ, where −π<φk, wrapped≦π and N is an integer. These resulting wrapped phase maps {φk, wrapped} that are generated through super-resolved and phase-retrieved reconstructions at multiple wavelengths are then fed into an optimization algorithm that finds the optimum path length ΔLopt(x, y) at each spatial point (x, y) by minimizing a cost function defined as follows:
To avoid convergence to a local minimum and to reduce the computation cost/time, a search range of [ΔL0−min{λk}/2, ΔL0+min{λk}/2] is defined, where ΔL0 is the initial guess of the optical path length:
where the total number of wavelengths (K) is typically 5-10. Within this search interval, the values are scanned to find the optical path length ΔLopt(x, y) that minimizes the cost function, resulting in an unwrapped object phase image.
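The search described above can be sketched for a single spatial point as follows. This is an illustrative sketch under stated assumptions: the squared circular-distance cost and the averaged single-wavelength initial guess for ΔL0 are assumptions, since the exact expressions are not reproduced here:

```python
import numpy as np

def unwrap_pixel(phi_wrapped, wavelengths, n_steps=4096):
    """Multi-wavelength phase unwrapping at one spatial point (sketch).
    Scans trial path lengths within [dL0 - min(lam)/2, dL0 + min(lam)/2]
    and returns the one minimizing an assumed squared circular-distance
    cost against the measured wrapped phases."""
    lam = np.asarray(wavelengths, dtype=float)
    phi = np.asarray(phi_wrapped, dtype=float)
    # assumed initial guess: average of the single-wavelength path lengths
    dL0 = np.mean(phi * lam / (2 * np.pi))
    half = lam.min() / 2.0
    trials = np.linspace(dL0 - half, dL0 + half, n_steps)
    # predicted phase at each wavelength for every trial path length
    pred = 2 * np.pi * trials[:, None] / lam[None, :]
    # circular (wrapped) difference between prediction and measurement
    diff = np.angle(np.exp(1j * (pred - phi[None, :])))
    cost = np.sum(diff**2, axis=1)
    return trials[np.argmin(cost)]
```

In practice this scan is applied at every pixel of the super-resolved phase maps; the narrow search interval keeps the per-pixel cost low while the multiple wavelengths remove the 2π ambiguity within that interval.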
In one embodiment, a fundamentally new pixel super-resolution method is described that utilizes wavelength scanning to significantly improve the resolution of an undersampled or pixelated imaging system, without the use of any lateral shifts or displacements. In this technique, the sample is sequentially illuminated at a few wavelengths that are sampled from a rather narrow spectral range of 10-30 nm. Compared with sub-pixel displacement or lateral shift-based super-resolution techniques, wavelength scanning introduces a uniform resolution improvement across all the directions on the sample plane and requires significantly fewer raw measurements to be made. Making fewer measurements without sacrificing performance could greatly benefit high-speed wide-field imaging, field-portable microscopy and telemedicine applications, which are all sensitive to data transmission and storage.
Using an experimental setup, the effectiveness of this new wavelength-scanning-based pixel super-resolution approach has been demonstrated using lens-free holographic microscopy (
Optical Setup
For wavelength scanning experiments, an in-line holographic imaging system 10 that is disclosed in
Wavelength Calibration and Dispersion Compensation
Wavelength calibration of the light source is achieved using an optical spectrum analyzer (HR2000+, Ocean Optics, Amersham, UK). The intensity-weighted average wavelength of each measured spectrum is considered as the illumination wavelength. To achieve optimal resolution, the refractive index of the glass substrate (100 μm, N-BK7, Schott AG, Mainz, Germany) at each wavelength is also corrected using the dispersion formula for borosilicate glass.
Sample Preparation
The grating lines used for resolution quantification are fabricated on an approximately 100-μm glass slide (N-BK7, Schott AG, Mainz, Germany) using focused ion beam (FIB) milling. Unstained Papanicolaou (Pap) smear slides are prepared through the ThinPrep® method (Hologic, Massachusetts, USA). The blood smear samples are prepared using EDTA (ethylenediaminetetraacetic acid) anticoagulated human blood and stained with Wright's stain.
Computation Platform Used for Super-Resolved Image Reconstructions
The reconstructions were performed using MATLAB® (Version R2012a, MathWorks, Massachusetts, USA) on a desktop computer equipped with a 3.60-GHz central processing unit (Intel Xeon E5-1620) and 16 GB of random-access memory. For a 1×1 mm2 sub-region with an upsampling factor of seven, one iteration of the wavelength-scanning super-resolution routine takes approximately 1.2 seconds. One full cycle of the algorithm, which runs through all the undersampled measurements (e.g., seven wavelengths for each angle/height, and 22 angles/heights in total), takes approximately 3 minutes. In this implementation, the iterations used neither a GPU (graphics processing unit) nor parallel computing, either of which could significantly improve the overall computation time. The total image reconstruction time could be further improved by implementing the algorithm in the C language rather than MATLAB®.
Results and Discussion
The physical basis for wavelength-scanning pixel super-resolution is the strong wavelength dependence of the undersampled interference patterns in coherent or partially coherent diffraction imaging systems such as lens-free holographic microscopy or defocused lens-based imaging systems. When illuminated at slightly different wavelengths, the high-frequency interference fringes caused by object scattering will change, causing the undersampled output of the image sensor chip to change as well. In the spatial frequency domain, the aliasing signal caused by pixel-induced undersampling is modulated by a complex transfer function whose phase is rather sensitive to even small wavelength changes, which makes it possible to use wavelength diversity within a narrow spectral range (i.e., 10-30 nm) to cancel out the spatial aliasing term and enhance the resolution of the reconstructions beyond the pixel pitch.
Without pixel super-resolution, lens-free microscopy with a unit magnification on-chip imaging geometry (in which the sample FOV equals the active area of the sensor array) can achieve a half-pitch resolution close to the image sensor's pixel pitch (i.e., approximately 1 μm in
In addition to delivering a competitive resolution and NA, the wavelength-scanning-based super-resolution approach also offers better data efficiency compared with lateral shift-based pixel super-resolution techniques, i.e., fewer raw measurements are needed for the same resolution improvement. In lateral shift-based pixel super-resolution, the sub-pixel shifts between the raw measurements are obtained by moving the light source, image sensor and/or the sample with respect to each other, and the resolution improvement is direction dependent. Therefore, sub-pixel shifts that spread uniformly within a two-dimensional pixel area are preferred in lateral shift-based pixel super-resolution techniques to achieve optimal resolution enhancement. As a result, the number of raw measurements generally increases as a quadratic function of the pixel super-resolution factor. In the wavelength-scanning super-resolution approach, however, the resolution improvement provided by wavelength diversity is uniform in all lateral directions across the sensor array, which enables one to achieve competitive resolution with far fewer raw measurements than lateral shift-based super-resolution. To exemplify this important advantage, consider a normal-illumination configuration: the lateral shift technique needs nine measurements to achieve a half-pitch resolution of approximately 0.6 μm (
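The quadratic versus linear scaling of the raw-measurement count can be made concrete with a short sketch. The 4×4-shift/8-height and seven-wavelength figures follow examples given elsewhere in this document; the function names are illustrative:

```python
def n_raw_lateral(shift_factor, n_positions):
    """Lateral shift-based pixel super-resolution: an n x n sub-pixel shift
    grid is captured at every height or illumination angle."""
    return shift_factor**2 * n_positions

def n_raw_wavelength(n_wavelengths, n_positions):
    """Wavelength-scanning pixel super-resolution: one frame per wavelength
    at every height or illumination angle."""
    return n_wavelengths * n_positions

# 4x4 shifts at 8 heights versus 7 wavelengths at the same 8 heights
print(n_raw_lateral(4, 8), n_raw_wavelength(7, 8))  # 128 56
```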
One should re-emphasize that wavelength-scanning super-resolution requires only a few wavelengths taken from a narrow spectral range (e.g., 10-30 nm). With this new super-resolution approach, one can obtain high-resolution amplitude reconstructions of not only colorless but also colored (i.e., stained/dyed) samples without further modifications in the reconstruction algorithm. This capability has been demonstrated by imaging various biological samples, including unstained Pap smears (
In addition to significantly improving resolution and mitigating undersampling, wavelength diversity also enables one to perform robust phase unwrapping and reveal the optical path length differences between a given sample and the surrounding medium. The retrieved phase reconstruction from a single wavelength is constrained to its principal value (−π, π], and therefore large optical path length differences can cause polarity errors that may not be corrected even by using state-of-the-art phase unwrapping algorithms (see, for example,
The wavelength-scanning pixel super-resolution approach, together with phase retrieval methods, including multi-height, synthetic aperture, and object-support-based techniques, could constitute high-resolution imaging systems with greatly improved imaging speed. For a bench-top system, high-speed wavelength scanning can be realized using a fast tunable source (employing, for example, an acousto-optic tunable filter with a supercontinuum source) synchronized with the image sensor chip. Compared with lateral shift-based super-resolution setups, such a combination avoids motion blur and could increase the data acquisition speed to the maximum frame rate of the image sensor. Furthermore, the lateral shifts generated by the source-shifting approach are determined by both the sample-to-image sensor and sample-to-aperture distances, which can make it challenging to generate optimized lateral shifts for samples at different vertical heights. The wavelength-scanning approach, however, is performed with evenly distributed wavelengths regardless of sample height. Therefore, wavelength-scanning pixel super-resolution may be more favorable than lateral shifting techniques for building high-resolution wide-field microscopes with high imaging speeds. Additionally, the better data efficiency of the wavelength-scanning super-resolution approach can reduce the cost of data storage and transmission, benefiting telemedicine implementations and server-based remote reconstructions.
In addition to delivering competitive results on a bench-top system, the presented wavelength-scanning pixel super-resolution approach also holds great potential for field-portable microscopy applications. Compared with lateral shift-based pixel super-resolution, the wavelength-scanning approach does not require any mechanical motion or fiber bundle (although such components may optionally be integrated), which could make the mobile imaging platform more robust. Because the wavelength scanning range is narrow (i.e., 10-30 nm), the combination of a few light-emitting diodes (LEDs), each with a standard spectral bandwidth of 15-30 nm, and a variable optical thin-film filter to narrow the LED spectra can be used to implement wavelength-scanning super-resolution in a field-portable and cost-effective design.
It should further be emphasized that the wavelength-scanning pixel super-resolution approach, in addition to lens-free or holographic diffraction-based imaging systems, can also be applied to lens-based point-to-point imaging modalities. By introducing a simple defocus condition into a lens-based imaging system (by, for example, introducing a relative axial shift of the sensor array, object and/or the objective lens along the z direction of the optical path), the entire wavelength diversity framework described herein would be able to achieve pixel super-resolution. This was experimentally demonstrated with the wavelength-scanning pixel super-resolution approach using a conventional lens-based microscope with a 10× objective lens (NA=0.3) and a CMOS sensor chip with a pixel size of 3.75 μm (see
While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. For example, while several embodiments are described in the context of lens-free embodiments, the reconstruction methods described here may also be used with lens-based microscope systems. The invention, therefore, should not be limited, except to the following claims, and their equivalents.
This application claims priority to U.S. Provisional Patent Application No. 62/267,186 filed on Dec. 14, 2015, which is hereby incorporated by reference in its entirety. Priority is claimed pursuant to 35 U.S.C. §119 and any other applicable statute.