ANGULAR PTYCHOGRAPHIC IMAGING WITH CLOSED-FORM RECONSTRUCTION

Information

  • Patent Application
  • Publication Number
    20240353665
  • Date Filed
    March 29, 2024
  • Date Published
    October 24, 2024
Abstract
Embodiments pertain to angular ptychographic imaging with closed-form imaging methods, computer program products, and imaging systems.
Description
FIELD

Certain aspects generally pertain to computational imaging and, more specifically, to ptychographic imaging.


BACKGROUND

Computational imaging empowers modern microscopy with the ability to produce high-resolution, large field-of-view, aberration-free images that improve digital pathology and broadly apply to other high-throughput imaging fields. One dominant computational label-free imaging method, Fourier ptychographic microscopy (FPM), effectively increases the spatial-bandwidth product of conventional microscopy using multiple tilted illuminations to achieve high-throughput imaging. However, its iterative reconstruction is subject to convergence criteria. Kramers-Kronig methods have shown the potential to analytically reconstruct a complex field from intensity measurements, but these methods do not address aberrations.


Background and contextual descriptions contained herein are provided solely for the purpose of generally presenting the context of the disclosure. Much of this disclosure presents work of the inventors, and simply because such work is described in the background section or presented as context elsewhere herein does not mean that such work is admitted prior art.


SUMMARY

Certain embodiments pertain to an imaging method comprising (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements, (b) combining the complex field spectrums to determine a reconstructed sample spectrum, and (c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the imaging method further comprises (i) extracting a system aberration from the plurality of complex field spectrums and (ii) removing the system aberration from each of the complex field spectrums prior to (b).


Certain embodiments pertain to a computer program product comprising a non-transitory computer readable medium having computer-executable instructions for performing (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements, (b) combining the complex field spectrums to determine a reconstructed sample spectrum, and (c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the computer program product comprises additional computer-executable instructions for (i) extracting a system aberration from the plurality of complex field spectrums and (ii) removing the system aberration from each of the complex field spectrums prior to (b).


Certain embodiments pertain to an imaging system comprising an optical system having collection optics, an illumination device configured to provide illumination at a plurality of NA-matching illumination angles at a first sequence of sample times and provide illumination at a plurality of darkfield illumination angles at a second sequence of sample times, at least one radiation detector configured to receive light from the optical system and acquire a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements, and a computing device. The computing device is configured to: (a) reconstruct a plurality of complex field spectrums from the plurality of NA-matching intensity measurements, (b) combine the complex field spectrums to determine a reconstructed sample spectrum, and (c) use the plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the computing device is further configured to (i) extract a system aberration from the plurality of complex field spectrums and (ii) remove the system aberration from each of the complex field spectrums prior to (b).


These and other features and embodiments will be described in more detail with reference to the drawings.


Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A depicts a schematic diagram of an angular ptychographic imaging with closed-form (APIC) system during acquisition of an NA-matching intensity measurement, according to embodiments.



FIG. 1B depicts a schematic diagram of the APIC system of FIG. 1A during acquisition of a darkfield intensity measurement.



FIG. 2 depicts example operations of a reconstruction process of an APIC imaging method, according to embodiments.



FIG. 3 depicts a schematic diagram depicting example operations of an APIC imaging method, according to various embodiments.



FIG. 4 depicts a flow diagram depicting an APIC imaging method 400, according to various embodiments.



FIG. 5 depicts a flow diagram depicting examples of suboperations of an operation of FIG. 4.



FIG. 6 depicts a flow diagram depicting examples of suboperations of an operation of FIG. 4.



FIG. 7 depicts a block diagram of an APIC system, according to various embodiments.



FIG. 8A depicts a plan view of an illumination device of an APIC system, according to embodiments.



FIG. 8B depicts an isometric view of an illumination device of an APIC system, according to embodiments.



FIG. 9 depicts a side view of an APIC system, according to an embodiment.



FIG. 10 depicts a side view of an APIC system with multiple detectors, according to an embodiment.



FIG. 11 depicts a side view of an APIC system with laser illumination, according to an embodiment.



FIG. 12 depicts a block diagram of a computing device of an APIC system, according to various embodiments.



FIG. 13 depicts illustrations of APIC and FPM reconstruction results for a small number of measurements, according to embodiments.



FIG. 14A depicts illustrations of complex fields under defocus of 0 μm reconstructed by an APIC imaging system, according to an embodiment.



FIG. 14B depicts illustrations of complex fields under defocus of 16 μm reconstructed by an APIC imaging system, according to an embodiment.



FIG. 14C depicts illustrations of complex fields under defocus of 32 μm reconstructed by an APIC imaging system, according to an embodiment.



FIG. 15 depicts illustrations of complex fields of a human thyroid carcinoma cell sample reconstructed using APIC according to embodiments and FPM for comparison.



FIG. 16 depicts high-resolution images of hematoxylin and eosin (H&E) stained breast cancer cells resulting from using APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 17 is a diagram depicting an original spectrum of a sample and a phase difference between two individual spectrums with overlap, according to an implementation.



FIG. 18 depicts a schematic diagram of APIC reconstruction using darkfield measurements to extend a known sample spectrum, according to embodiments.



FIG. 19 depicts a graphical illustration of the construction of a correlation operator, according to embodiments.



FIG. 20 depicts an illustration of autocorrelation, according to embodiments.



FIG. 21A depicts images reconstructed with both a full dataset and a reduced dataset using APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 21B depicts images reconstructed using APIC imaging method according to embodiments and FPM imaging method for comparison of resolution, according to embodiments.



FIG. 22A depicts images of hematoxylin and eosin (H&E) stained breast cancer cells reconstructed using APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 22B depicts a plot of reconstruction runtime vs. image patch size for APIC imaging method according to embodiments and FPM for comparison.



FIG. 23A depicts images reconstructed using an APIC imaging method under different levels of aberrations, according to embodiments.



FIG. 23B depicts images reconstructed using an APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 24A depicts images reconstructed using an APIC imaging method under different signal-to-noise ratios (SNRs), according to embodiments.



FIG. 24B depicts images reconstructed under different noise levels using APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 25 depicts images of a weak absorption target reconstructed under different noise levels using APIC imaging method according to embodiments and FPM imaging method for comparison.



FIG. 26 depicts images of aberrations reconstructed using an APIC imaging method using different numbers of NA-matching measurements, according to embodiments.



FIG. 27A includes images reconstructed using an APIC imaging method with different levels of illumination angle calibration error, according to embodiments.



FIG. 27B depicts amplitude and phase images generated by a spatial Kramers-Kronig imaging method and amplitude and phase images generated by an APIC imaging method of certain embodiments, according to embodiments.





The figures and components therein may not be drawn to scale.


DETAILED DESCRIPTION

Different aspects are described below with reference to the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the presented embodiments. The disclosed embodiments may be practiced without one or more of these specific details. In other instances, well-known operations have not been described in detail to avoid unnecessarily obscuring the disclosed embodiments. While the disclosed embodiments will be described in conjunction with specific embodiments, it will be understood that such description is not intended to limit the disclosed embodiments.


I. Introduction

Over the past few decades, remarkable progress has been made in both fluorescence and label-free imaging. One such representative label-free technique, Fourier ptychographic microscopy (FPM), leverages the power of computation to provide high-resolution and aberration-correction abilities to low numerical aperture (NA) objectives. Conventional FPM operates by collecting a series of low-resolution images under tilted illumination and applies an iterative phase retrieval algorithm to reconstruct the sample's high spatial-frequency features and optical aberration, resulting in high-resolution aberration-free imaging that preserves the inherently large FOV associated with low-NA objectives. However, its iterative algorithm can pose challenges. First, its iterative reconstruction is typically a non-convex optimization, which means it is not guaranteed to converge. As a result, FPM does not guarantee that the globally optimal solution is ever reached. This may be problematic for exacting applications, such as digital pathology, where even small errors in the image are not tolerable. Furthermore, the joint optimization of aberration and sample spectrum in conventional FPM can sometimes fail when the system's aberrations are sufficiently severe, which may lead to poor reconstructions.


Spatial-domain Kramers-Kronig relations have shown that a complex field can be non-iteratively reconstructed in one specific varied illumination microscopy scenario by matching the illumination angle to the objective's maximal acceptance angle and exploiting the signal analyticity. However, this approach does not possess the ability to correct for hybrid aberrations or provide resolution enhancement beyond the diffraction limit of the objective NA.


Certain embodiments pertain to angular ptychographic imaging with closed-form (APIC) techniques that can recover complex fields, retrieve aberrations, and/or reconstruct the darkfield-associated high spatial frequency spectrum to expand the sample spectrum in a purely analytical way. By avoiding iterative algorithms, APIC techniques do not require convergence metrics and have been demonstrated to consistently obtain a closed-form solution. This may enable faster computational time over existing techniques that use iterative algorithms. APIC techniques have also demonstrated robustness against aberrations, including complex aberrations, where existing iterative techniques have failed. Using NA-matching and darkfield measurements, APIC techniques can be used to reconstruct high-resolution aberration-free complex fields when a low magnification, large FOV objective is used for data acquisition. Due to their analytical nature, APIC techniques are inherently insensitive to optimization parameters such as convergence metrics.


As used herein, an “NA-matching measurement” (also sometimes referred to as an “NA-matching intensity measurement”) refers to an intensity measurement (also sometimes referred to herein as an “intensity image” or “raw image”) acquired while incident illumination is at an NA-matching illumination angle. As used herein, an “NA-matching illumination angle” refers to an illumination angle that is equal to, or nearly equal to (e.g., within 1 degree, within 2 degrees, or within 3 degrees), the maximum acceptance angle of the collection optics (e.g., objective) of the imaging system acquiring the image. In various embodiments, a plurality of NA-matching intensity measurements is acquired at a sequence of exposure times during which incident illumination sequentially shifts to each one of the NA-matching illumination angles. In these examples, each NA-matching intensity measurement is acquired while incident illumination is at one of the NA-matching illumination angles. In alternate embodiments, multiplexing illumination may be employed where incident illumination is cycled through a sequence of illumination patterns. Each illumination pattern includes simultaneous incident illumination from multiple NA-matching illumination angles. In these multiplexing embodiments, each NA-matching intensity measurement is acquired during an exposure time during which incident illumination is at one of the illumination patterns.
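Purely as an illustrative aid (not part of the disclosed embodiments), the relationship between an objective's NA and the maximum acceptance angle that an NA-matching illumination angle equals or nearly equals can be sketched as follows; the function names and the air-immersion default are our assumptions:

```python
import math

def na_matching_angle(objective_na: float, medium_index: float = 1.0) -> float:
    """Maximum acceptance angle (degrees) of the collection optics, which an
    NA-matching illumination angle equals or nearly equals."""
    # NA = n * sin(theta_max)  =>  theta_max = asin(NA / n)
    return math.degrees(math.asin(objective_na / medium_index))

def is_na_matching(illum_angle_deg: float, objective_na: float,
                   tol_deg: float = 3.0, medium_index: float = 1.0) -> bool:
    """True if the illumination angle is within tol_deg of the maximum
    acceptance angle (e.g., within 1, 2, or 3 degrees as described above)."""
    return abs(illum_angle_deg - na_matching_angle(objective_na, medium_index)) <= tol_deg
```

For example, for an NA 0.25 objective in air, the maximum acceptance angle is about 14.5 degrees.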


As used herein, a “darkfield measurement” (also sometimes referred to as a “darkfield intensity measurement”) refers to an intensity measurement acquired while incident illumination is at a darkfield illumination angle. As used herein, a “darkfield illumination angle” refers to an illumination angle that is greater than the maximum acceptance angle of the collection optics. Each darkfield measurement is acquired during an exposure time while the specimen is illuminated at one of the darkfield illumination angles in the sequence. In one example, the darkfield illumination angles may be in a range of 1 degree to 5 degrees greater than the maximum acceptance angle. In another example, the darkfield illumination angles may be in a range of 3 degrees to 5 degrees greater than the maximum acceptance angle. In another example, the darkfield illumination angles may be more than 1 degree greater than the maximum acceptance angle.


As used herein, an “NA-matching illumination source” refers to an illumination source that is configured to provide incident illumination that is equal to, or nearly equal to (e.g., within 1 degree, within 2 degrees, or within 3 degrees), the maximum acceptance angle of the collection optics (e.g., objective) of the imaging system acquiring the image.


As used herein, a “darkfield illumination source” refers to an illumination source that is configured to provide incident illumination that is greater than the maximum acceptance angle of the collection optics of the imaging system acquiring the image.


As used herein, a “spectrum” (also sometimes referred to as a “complex field spectrum”) generally refers to a spatial frequency spectrum, which is the Fourier transform of the sample's complex field. The spectrum is different from the Fourier transform of an acquired intensity image, which is the Fourier transform of an intensity-only measurement.
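The distinction drawn above can be illustrated with a short numerical sketch (our illustration, not part of the disclosure): the spectrum is the Fourier transform of the complex field, which generally differs from the Fourier transform of the intensity image because the intensity discards phase:

```python
import numpy as np

rng = np.random.default_rng(0)
amplitude = 1.0 + 0.1 * rng.standard_normal((64, 64))
phase = 0.5 * rng.standard_normal((64, 64))
field = amplitude * np.exp(1j * phase)        # sample's complex field

# The "spectrum" in this disclosure: Fourier transform of the complex field.
spectrum = np.fft.fft2(field)

# Not the same as the Fourier transform of the acquired intensity image,
# which is computed from an intensity-only measurement.
intensity = np.abs(field) ** 2
intensity_spectrum = np.fft.fft2(intensity)

assert not np.allclose(spectrum, intensity_spectrum)
```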


As used herein, a “known sample spectrum” refers to a prior reconstructed spectrum. The reconstructed spectrum expands as more images are used in the reconstruction, and the known sample spectrum grows during this process. Thus, the known sample spectrum at step i can be a subset of the known sample spectrum at step i+1.


II. Angular Ptychographic Imaging with Closed-form (APIC) Imaging Systems and Methods
A. Example APIC Imaging System


FIG. 1A depicts a schematic diagram of an APIC system 100 during acquisition of an NA-matching intensity measurement, according to embodiments. FIG. 1B depicts a schematic diagram of APIC system 100 of FIG. 1A during acquisition of a darkfield intensity measurement. At the instant in time depicted in the illustrated example, a specimen 120 being imaged is located on a transparent receptacle 122 (e.g., transparent slide). FIG. 1A depicts APIC system 100 at an instant in time during data acquisition of one of a plurality of NA-matching intensity measurements. FIG. 1B depicts APIC system 100 at an instant in time during data acquisition of one of a plurality of darkfield intensity measurements.


APIC system 100 includes an illumination device 110, an optical system 130, and a radiation detector 140. Optical system 130 includes an aperture 132, an objective 134 in optical communication with the aperture 132, and a lens 136 in optical communication with the objective 134. Radiation detector 140 is configured to acquire intensity images based on light passing from lens 136. In one implementation, the objective is a low magnification objective such as, e.g., a 10× magnification, NA 0.25 objective (e.g., a 10× magnification, NA 0.25 objective sold by Olympus).


Illumination device 110 includes a rectangular light emitting diode (LED) array 111 having 81 illumination sources (e.g., LEDs) 115 and a ring of 16 NA-matching illumination sources 116a (e.g., a NeoPixel Ring 16 made by Adafruit) disposed on the LED array 111. The ring of NA-matching illumination sources 116a has a diameter (defined along a circle at the centerline of the illumination sources) configured so that the illumination angle changes through a plurality of NA-matching illumination angles as different illumination sources along the circumference of the circle are illuminated in sequence in the clockwise direction as depicted by an arrow. The NA-matching illumination angles are equal to, or nearly equal to, the maximum acceptance angle of objective 134. Additional or fewer illumination sources may be included in the LED array 111 or the ring 116a in other implementations. Also, in another implementation, the illumination sources may be illuminated in a different order.
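As a hypothetical geometry sketch (our illustration; the actual ring dimensions are not specified here), the illumination angle of a ring LED and the ring radius needed for NA matching can be related through simple trigonometry, assuming each LED sits a radial distance r from the optical axis and a height h below the specimen plane, in air:

```python
import math

def illumination_angle_deg(radius_mm: float, height_mm: float) -> float:
    """Illumination angle for an LED at radial distance `radius_mm` from the
    optical axis, `height_mm` below the specimen plane (hypothetical geometry)."""
    return math.degrees(math.atan2(radius_mm, height_mm))

def ring_radius_for_na(objective_na: float, height_mm: float) -> float:
    """Ring radius that makes the LED illumination angle equal the maximum
    acceptance angle asin(NA) of the objective (air immersion assumed)."""
    theta = math.asin(objective_na)
    return height_mm * math.tan(theta)
```

For an NA 0.25 objective with ring LEDs 60 mm below the specimen, this gives a ring radius of about 15.5 mm.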


In FIG. 1A, the ring of NA-matching illumination sources 116a includes an illumination source 115a that is illuminated at the instant in time depicted in the illustrated example. The illumination device 110 also includes an electrical connector 112 that can electrically connect to, e.g., a computing device (e.g., computing device 780 in FIG. 7 or computing device 1280 in FIG. 12) to receive trigger signals to activate different illumination sources at different exposure times. The ring of NA-matching illumination sources 116a includes illumination sources (e.g., LEDs) that when activated have illumination that matches up with the maximum acceptance angle of the objective 134 and that may be lit (activated) sequentially while the radiation detector 140 obtains a plurality of NA-matching measurements.


As depicted in FIG. 1B, illumination device 110 also includes a plurality of darkfield illumination sources 116b. The darkfield illumination sources 116b are configured for darkfield illumination where the illumination angles are larger than the maximum acceptance angle of objective 134. The illumination sources 116b are arranged to be able to change the illumination angle through a plurality of darkfield illumination angles as different darkfield illumination sources in the plurality of darkfield illumination sources 116b are illuminated in sequence in the clockwise direction as depicted by an arrow. The plurality of darkfield illumination sources 116b includes an illumination source 115b that is illuminated at the instant in time depicted in the illustrated example during which radiation detector 140 may take a darkfield measurement.


In FIGS. 1A and 1B, APIC system 100 has components arranged in a transmission mode where light transmitted through specimen 120 is received at the optical system 130. In another implementation, APIC system 100 may have components arranged in a reflection mode where light reflected from specimen 120 is received at optical system 130, which may be advantageous for use in imaging thick specimens. For example, illumination device 110 may be located on the opposite side of transparent receptacle 122.


Although not shown, APIC system 100 may also include a computing device (e.g., computing device 780 in FIG. 7 or computing device 1280 in FIG. 12). The computing device may be in communication with radiation detector 140 and/or the illumination device 110. The computing device may send control signals to the radiation detector 140 to trigger acquisition of intensity measurements and/or send control signals to illumination device 110 to activate different illumination sources to change illumination angle. In one implementation, the control signals synchronize image acquisition with activation of different illumination sources. The computing device may also receive signals with intensity measurement data from radiation detector 140 and/or perform other functions of the APIC system such as reconstruction process.


In one implementation, APIC system 100 may also include at least one motorized stage upon which the illumination device 110 is mounted. The at least one motorized stage may be configured to adjust the position of the illumination device 110.


B. Reconstruction Process

In certain embodiments, an APIC imaging method includes a reconstruction process that analytically solves for a sample's spatial frequency spectrum and system aberration with NA-matching intensity measurements. In additional or alternate embodiments, the reconstruction process may use one or more darkfield intensity measurements to extend the sample's spatial frequency spectrum, which may advantageously enhance resolution of an NA-limited imaging system.



FIG. 2 depicts example operations of a reconstruction process of an APIC imaging method, according to embodiments. In FIG. 2, the reconstruction process of the APIC imaging method includes obtaining a plurality of darkfield intensity images and a plurality of NA-matching intensity images. Image-wise field reconstruction is performed to reconstruct complex spectrums from the NA-matching intensity images. The system aberration is extracted from the reconstructed complex spectrums, and the extracted aberration is used to correct the aberration in the reconstructed complex spectrums of the image-wise field reconstruction. The aberration-corrected spectrums are then stitched together to form a known sample spectrum that serves as prior knowledge in the spectrum extension. Using darkfield measurements, the sample spectrum is extended in closed form to obtain a higher-resolution, aberration-free reconstruction.


The reconstruction process of the FPM imaging method includes iteratively updating the sample spectrum and the aberration to minimize the differences between the measurements and the reconstruction output. This iterative process is terminated upon convergence to obtain the sample spectrum and the coherent transfer function (CTF) estimate. The illustrated pupil depicts the reconstructed aberrations. FPM reconstruction treats darkfield and brightfield measurements without distinction; all the data are given to the FPM algorithm for optimization. APIC reconstruction, on the contrary, handles the NA-matching measurements and darkfield measurements in different ways.



FIG. 3 is a schematic diagram depicting example operations of an APIC imaging method, according to various embodiments. The system 100 shown in FIG. 3 is analogous to the system shown in FIGS. 1A and 1B. For the sake of brevity, the prior discussion of such elements may be assumed to be equally applicable to FIG. 3, unless indicated otherwise in the following discussion. During image acquisition, the illumination angle is changed. By changing the illumination angle, the coherent transfer function (CTF) is effectively shifted to different positions in the spatial frequency domain, and the radiation detector samples different regions of the sample's spectrum. During acquisition of a plurality of NA-matching intensity measurements, the illumination angle is changed to a plurality of NA-matching illumination angles. During acquisition of a plurality of darkfield intensity measurements, the illumination angle is changed to a plurality of darkfield illumination angles. The APIC imaging method recovers the complex field spectrums for the intensity measurements under NA-matching angle illumination (NA-matching intensity measurements) using, e.g., Kramers-Kronig relations. The APIC imaging method extracts the imaging system's aberration using the phase differences of pairs of spectrums with overlaps in their sampled spectrum. The APIC imaging method corrects the reconstructed spectrums for aberration and stitches the corrected spectrums together to form a priori knowledge for the reconstruction process involving using darkfield measurements to extend the known sample spectrum. The APIC method extends the sample spectrum using darkfield measurements by using the known spectrum of a selected darkfield measurement to isolate a portion of a cross-correlation term from the other auto-correlation terms. The APIC method solves a linear equation involving the isolated cross-correlation term to determine a correlation operator. The APIC method analytically obtains an unknown spectrum using the correlation operator. The APIC method extends the original reconstructed sample spectrum by adding the unknown spectrum.


C. Example APIC Reconstruction Process

In certain embodiments, an APIC imaging method includes reconstructing the complex field spectrums corresponding to a plurality of NA-matching measurements acquired by an APIC system (e.g., APIC system 100 in FIGS. 1A and 1B, APIC system 700 in FIG. 7, APIC system 900 in FIG. 9, APIC system 1000 in FIG. 10, and APIC system 1100 in FIG. 11). A complex field spectrum may be reconstructed for each NA-matching measurement in an image-wise field reconstruction. In some implementations, Kramers-Kronig relations may be employed to reconstruct the complex field spectrums from the NA-matching measurements. An example of Kramers-Kronig relations can be found in Baek, Y. & Park, Y., “Intensity-based holographic imaging via space-domain Kramers-Kronig relations,” Nat. Photonics 15, 354-360 (2021), which is hereby incorporated by reference for the Kramers-Kronig relations. The NA-matching measurements may be taken at exposure times when illumination angles match the maximum acceptance angle of the collection optics (e.g., objective) of the optical system of the APIC imaging system.
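The Kramers-Kronig field recovery can be sketched in simplified 1D form (our illustration, not the disclosed implementation): when the total field is analytic along the illumination direction, its log-amplitude and phase form a Hilbert-transform pair, so the phase can be computed from the measured intensity alone:

```python
import numpy as np

def kk_field_1d(intensity: np.ndarray) -> np.ndarray:
    """Sketch of space-domain Kramers-Kronig field recovery in 1D.
    Assumes the total field is analytic along this axis (illumination at the
    pupil edge), so log-amplitude and phase are a Hilbert-transform pair."""
    log_amp = 0.5 * np.log(intensity)           # ln|f| from measured I = |f|^2
    n = intensity.shape[-1]
    hilbert = -1j * np.sign(np.fft.fftfreq(n))  # Hilbert multiplier in frequency
    phase = np.real(np.fft.ifft(np.fft.fft(log_amp) * hilbert))
    return np.exp(log_amp + 1j * phase)         # recovered complex field
```

In practice the transform would be applied along the illumination direction in 2D, and the recovered fields are then positioned in the frequency domain according to the illumination angle; sign conventions for the Hilbert multiplier vary.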


The APIC imaging method may use various numbers of NA-matching measurements to reconstruct the complex field spectrums. In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 10 NA-matching measurements.


For realistic imaging systems, aberrations may be superimposed on the sample spectrum's phase. As discussed in some detail in Section V (C), the sample-dependent phases are identical in the overlapping region of two spectrums, and subtracting their phases cancels out the sample-dependent phase term, leaving only the phase differences between different parts of the pupil function (the aberrations). Referring to FIG. 17, for example, illustration 1761 depicts the sample's phase of second constructed spectrum 1714 at the overlap between the first constructed spectrum 1712 and second constructed spectrum 1714. As shown, in the overlapping region between individual spectrums 1712 and 1714, the phase of the sample cancels out and the phase difference of the shifted CTF solely contributes to the measured phase difference. Consequently, the phase differences in the overlapping regions provide a linear relationship with respect to the aberration term. By solving the linear equation, the aberration of the imaging system can be extracted, which can in turn be used to correct the reconstructed complex field spectrums. The corrected spectrums can then be stitched together to obtain an aberration-free, resolution-enhanced sample image.
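The linear relationship described above can be sketched as a small least-squares problem (our simplified discretization; in practice the aberration would typically be parameterized over the pupil): each overlap contributes an equation of the form phase[i] − phase[j] = measured phase difference:

```python
import numpy as np

def solve_aberration(pairs, n_pupil):
    """Least-squares recovery of a pupil aberration phase vector from
    pairwise phase differences measured in overlapping spectrum regions.
    `pairs` is a list of (i, j, delta) with delta ~ phase[i] - phase[j].
    Hypothetical discretization: the pupil is flattened to n_pupil samples."""
    rows, b = [], []
    for i, j, delta in pairs:
        r = np.zeros(n_pupil)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        b.append(delta)
    # Pin the global phase offset (differences fix the phase only up to a constant).
    r0 = np.zeros(n_pupil)
    r0[0] = 1.0
    rows.append(r0)
    b.append(0.0)
    A = np.vstack(rows)
    phase, *_ = np.linalg.lstsq(A, np.asarray(b), rcond=None)
    return phase
```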


In one implementation, an APIC imaging method includes extracting an aberration such as an aberration introduced by an objective (e.g., objective 134). In this example, the APIC imaging method determines a phase difference in each overlapping portion of the reconstructed complex field spectrums and uses the phase differences to extract the aberration. The APIC imaging method uses the extracted aberration to correct the reconstructed complex field spectrums and stitches (sums) the corrected spectrums together to obtain a reconstructed sample spectrum (also sometimes referred to herein as a “known sample spectrum”). FIG. 5 shows an example of suboperations that may be used to extract a system aberration. Once the aberration-free sample spectrum is reconstructed, an aberration-free sample image can be obtained (e.g., via inverse Fourier transform).


In addition, or in an alternate embodiment, the APIC imaging method may include extending the reconstructed sample spectrum using one or more darkfield measurements. FIG. 6 shows an example of suboperations that may be used to extend a reconstructed sample spectrum using a darkfield measurement. In some cases, the reconstructed sample spectrum (also referred to as a “known sample spectrum”) and/or a system aberration may serve as a priori knowledge. The APIC imaging method may select a spectrum of one of the darkfield measurements for reconstruction. In one example, the spectrum closest to the known spectrum is selected. The APIC method crops out (subtracts from the signal) the known sample spectrum from what is sampled in this selected darkfield measurement as shown, e.g., in FIG. 3. This cropped portion only contains the unknown part of the darkfield measurement. The APIC method includes operations that recover the unknown spectrum so that it can be filled in for spectrum spanning. The Fourier transform of the darkfield measurement consists of cross-correlation terms of the known and unknown spectrums and their autocorrelations. Using the known sample spectrum, a linear relationship can be constructed with respect to the unknown spectrum. The correlation coefficient relating the unknown part of the darkfield measurement to the known sample spectrum may be determined by analytically solving this linear relationship. To construct the linear relationship, the APIC method calculates the autocorrelation of the known part (e.g., autocorrelation of known part 1848 in FIG. 18) and subtracts the autocorrelation from the spectrum of the darkfield measurement (e.g., spectrum of darkfield measurement 1830 in FIG. 18). After subtraction, the autocorrelation of the unknown part (e.g., autocorrelation of the unknown part 1842 in FIG. 18) and the two cross-correlations (e.g., cross-correlations 1844, 1846 in FIG. 18) remain. As shown in FIG. 3 and FIG. 18, these terms do not fully coincide in the spatial frequency domain. In a non-overlapping region of the cross-correlation terms, only one cross-correlation contributes to the signal. When calculating a cross-correlation of the known sample spectrum and the unknown part of the darkfield measurement, one of the signals is shifted and multiplied with the other signal. The correlation coefficient is the summation of this product. Since one of the two signals (the known sample spectrum) is known, the known signal can be considered a weight, and the summation produces a weighted version of the unknown signal (the unknown part of the darkfield measurement). This is a linear operation. Thus, by applying the known spectrum in this way, a linear operator is applied to the unknown spectrum and produces a cross-correlation. By extracting a non-overlapping portion of the cross-correlation term, a linear relationship between the known spectrum and the unknown spectrum can be analytically determined to obtain a closed-form solution for the unknown spectrum. The APIC method then determines an unknown spectrum for the darkfield measurement. The unknown spectrum is filled back into the reconstructed spectrum. This process may continue until all darkfield measurements have been reconstructed. Once the reconstructed spectrum is filled in with the unknown spectrums of one or more darkfield measurements, an increased-resolution sample image can be obtained (e.g., via inverse Fourier transform).
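The subtraction step above can be sketched in 1-D (an illustrative toy, not the patented procedure): the Fourier transform of the darkfield intensity is the autocorrelation of the total field's spectrum, so subtracting the Fourier transform of the known field's intensity removes the known-known autocorrelation and leaves the unknown autocorrelation plus the two cross-correlation terms.

```python
import numpy as np

def unknown_terms_spectrum(darkfield_intensity, known_field):
    """Sketch (1-D): subtract the autocorrelation of the known spectrum
    (i.e., the Fourier transform of |known|^2) from the spectrum of the
    darkfield intensity, leaving the unknown autocorrelation and the
    two cross-correlation terms. Assumes co-registered fields sampled
    on the same grid (illustrative assumption)."""
    return np.fft.fft(darkfield_intensity) - np.fft.fft(np.abs(known_field) ** 2)
```

This relies only on the identity |k + u|² = |k|² + |u|² + 2·Re(k·conj(u)); the remaining cross terms are what the closed-form recovery then isolates in their non-overlapping region.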


In some embodiments, an APIC imaging method that extends the sample spectrum using one or more darkfield measurements may also correct for system aberration. In these embodiments, to match up the known spectrum with the spectrum of the darkfield measurement, a system aberration (e.g., a recovered aberration) may be introduced to the cropped-out known spectrum. After recovering the unknown spectrum, the aberration may be corrected and the corrected unknown spectrum filled back into the reconstructed spectrum. Once the reconstructed spectrum is filled in with the corrected unknown spectrum of one or more darkfield measurements, an increased-resolution, aberration-free sample image can be obtained (e.g., via inverse Fourier transform).


The theoretical optical resolution of APIC methods is determined by the sum of the illumination NA and the objective NA. Since APIC techniques analytically recover the actual sample spectrum, these techniques may be advantageous over existing iterative techniques that rely on optimization parameters.
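The scaling of this resolution claim can be illustrated with a small helper (the function name and the λ/NA prefactor are assumptions for illustration; the exact limit depends on the resolution criterion used):

```python
def apic_resolution_nm(wavelength_nm, objective_na, illumination_na):
    """Coherent resolution limit for a synthetic NA equal to the sum of
    the illumination NA and the objective NA (hypothetical helper)."""
    synthetic_na = objective_na + illumination_na
    return wavelength_nm / synthetic_na

# NA-matching illumination doubles the effective NA of a 0.25 NA objective:
# apic_resolution_nm(500, 0.25, 0.25) -> 1000.0 nm, versus 2000.0 nm for
# the objective alone (500 / 0.25)
```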


In certain embodiments, an APIC imaging method includes a data acquisition procedure for acquiring a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements. In some cases, control signals are communicated to the radiation detector to trigger taking the intensity measurements and/or control signals are communicated to the illumination device to activate different illumination sources to change the illumination angle. For example, different illumination sources 115 in the ring of NA-matching illumination sources may be activated to change illumination angle. In other implementations, multiplexing illumination may be employed to illuminate with a series of illumination patterns.
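The acquisition sequence might be sketched as follows; the `leds` and `camera` objects and their `activate`/`deactivate`/`snap` methods are hypothetical placeholders for the actual illumination-device and detector drivers, not an API described in this disclosure.

```python
def acquire_measurements(leds, camera, na_matching_ids, darkfield_ids):
    """Sequentially activate each illumination source, trigger one
    exposure per source, and group the frames by measurement type
    (hypothetical driver interface)."""
    frames = {"na_matching": [], "darkfield": []}
    for group, source_ids in (("na_matching", na_matching_ids),
                              ("darkfield", darkfield_ids)):
        for source_id in source_ids:
            leds.activate(source_id)             # set the illumination angle
            frames[group].append(camera.snap())  # trigger one exposure
            leds.deactivate(source_id)
    return frames
```

A multiplexed variant would pass lists of source IDs per exposure instead of single IDs.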


D. Examples of APIC Imaging Methods


FIG. 4 is a flow diagram depicting an APIC imaging method 400, according to various embodiments. FIG. 5 is a flow diagram depicting examples of suboperations of operation 440 of FIG. 4. FIG. 6 is a flow diagram depicting examples of suboperations of operation 470 of FIG. 4. One or more of the functions of the APIC imaging method may be performed by a computing device (e.g., computing device 780 in FIG. 7 or computing device 1200 of FIG. 12). Means for performing the functionality illustrated in one or more of the operations shown in FIG. 4 and suboperations shown in FIGS. 5 and 6 may include hardware and/or software components of a computing device, such as, for example, a controller apparatus, or a computer-readable apparatus including a storage medium storing computer-readable and/or computer-executable instructions that are configured to, when executed by at least one processor, cause the at least one processor or another apparatus to perform the functions.


It should also be noted that the operations of the APIC imaging method may be performed in any suitable order, not necessarily the order depicted in FIGS. 4-6. For example, operation 420 may be performed before operation 410 or after operation 430, operation 440, operation 450, or operation 460. Further, the APIC imaging method of other implementations may include additional or fewer operations/suboperations than those depicted in FIGS. 4-6. For example, in one implementation, an APIC imaging method performs operations 410, 420, 430, 460, and 470 and omits the aberration correction operations 440 and 450. As another example, in one implementation, an APIC imaging method performs operations 410, 420, 430, 440, and 460 to determine, and correct for, the system aberration and omits operation 470. As another example, in one implementation, the APIC imaging method in FIG. 4 omits operations 410 and 420.


At operation 410, a plurality of NA-matching intensity measurements of a specimen is retrieved from memory or received directly in a signal from a radiation detector (e.g., radiation detector 740 in FIG. 7, radiation detector 140 in FIGS. 1A, 1B, and 3, radiation detector 940 in FIG. 9, or cameras 1040a-d in FIG. 10) of an APIC imaging system (e.g., APIC system 100 in FIGS. 1A and 1B or APIC system 700 in FIG. 7). The NA-matching intensity measurements are acquired at varying illumination angles that are equal to, or nearly equal to, the maximum acceptance angle of the collection optics (e.g., objective 134 in FIGS. 1A, 1B, and 3) of the APIC imaging system. In some embodiments, each NA-matching intensity measurement is acquired during an exposure duration during which the specimen is illuminated at one of the NA-matching illumination angles. In an alternate embodiment, multiplexing illumination may be employed where the specimen is illuminated by a sequence of illumination patterns where each illumination pattern includes simultaneous illumination from multiple NA-matching illumination angles. In some embodiments, a computing device (e.g., a controller) may send trigger signals to an illumination device (e.g., illumination device 110 (e.g., an LED array) in FIGS. 1A, 1B, and 3, illumination device 710 in FIG. 7, illumination device 810A in FIG. 8A, illumination device 810B in FIG. 8B, illumination device 910 in FIG. 9, illumination device 1010 in FIG. 10, or illumination device 1110 in FIG. 11) to activate different illumination sources to provide illumination at the NA-matching illumination angles sequentially. Alternatively or in addition, the computing device may send trigger signals to the radiation detector to take the NA-matching intensity measurements.


At operation 420, a plurality of darkfield measurements is retrieved from memory or received directly in a signal from the radiation detector. The darkfield measurements are acquired at varying illumination angles that are greater than the maximum acceptance angle of the collection optics of the imaging system. In one implementation, the illumination angles may be in a range of 1 degree to 5 degrees greater than the maximum acceptance angle. In another implementation, the illumination angles may be in a range of 3 degrees to 5 degrees greater than the maximum acceptance angle. In another implementation, the illumination angles may be more than 1 degree greater than the maximum acceptance angle. In some embodiments, the darkfield measurements are acquired while a specimen is being illuminated by a sequence of darkfield illumination angles. Each darkfield illumination angle is greater than the maximum acceptance angle of the collection optics of the APIC imaging system. In some embodiments, each darkfield measurement is acquired during an exposure time while the specimen is illuminated at one of the darkfield illumination angles in the sequence. In an alternate embodiment, multiplexing illumination may be employed where the specimen is illuminated by a sequence of illumination patterns where each illumination pattern includes simultaneous illumination from multiple darkfield illumination angles. In some embodiments, a computing device may send trigger signals to the illumination device to activate different illumination sources to provide illumination at the darkfield illumination angles sequentially. Alternatively or in addition, the computing device may send trigger signals to the radiation detector to take the darkfield intensity measurements.


At operation 430, a complex field spectrum is reconstructed from each of the NA-matching intensity measurements to obtain a plurality of complex field spectrums. In some embodiments, a transformed signal of each complex field spectrum is reconstructed from a corresponding NA-matching intensity measurement using Eqn. 32. The complex field may be restored in real space by applying an inverse Fourier transform and then applying an exponential function to each point of the inverse transform, as provided in Eqn. 33. In some cases, Kramers-Kronig relations may be used to reconstruct complex field spectrums from NA-matching measurements.
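A 1-D sketch of such a Kramers-Kronig reconstruction is shown below; the document's Eqns. 32-33 give the actual 2-D expressions, and the FFT-based `hilbert_transform` helper here is a generic signal-processing construction assumed for illustration. For a field whose spectrum is single-sided, as arranged by NA-matching illumination, the phase is the Hilbert transform of the log amplitude, so the complex field follows from intensity alone.

```python
import numpy as np

def hilbert_transform(x):
    """FFT-based Hilbert transform of a real 1-D signal (the imaginary
    part of the analytic signal)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(spec * h))

def kk_field_from_intensity(intensity):
    """Sketch: recover the complex field from an intensity measurement
    by taking the phase as the Hilbert transform of the log amplitude
    (illustrative 1-D analog of the closed-form step)."""
    log_amp = 0.5 * np.log(np.maximum(intensity, 1e-12))
    return np.exp(log_amp + 1j * hilbert_transform(log_amp))
```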


At operation 440, a system aberration is extracted from the plurality of complex field spectrums reconstructed from the NA-matching intensity measurements. As discussed in more detail in Section V (C) with reference to FIG. 17, at an overlapping portion of two reconstructed spectrums, the phase difference due to the sample spectrum cancels out and the phase difference depends (linearly) on the system aberration. The system aberration is determined by multiplying the signal by a linear operator. The linear operator maps the phase differences of the overlapping portions of the complex field spectrums to the system aberration. An example of a linear operator is provided by Eqn. 41. Another example of a linear operator is provided in Eqn. 48. A discussion of a process of solving for the linear operator is provided in Section V (C). An example of a 2D aberration of an imaging system is represented in Eqn. 51. Details regarding an example of an operation for extracting a system aberration are provided in Section V (C). FIG. 5 illustrates an example of suboperations that may be included in operation 440, according to some embodiments.


At operation 450, aberration is corrected in each of the complex field spectrums to generate a plurality of aberration-corrected spectrums. To remove the system aberration from each reconstructed spectrum, the conjugate of the system aberration is subtracted from the reconstructed spectrum to generate a corresponding aberration-corrected spectrum.
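Removing a phase aberration from a spectrum can be sketched as multiplication by the conjugate phase factor (a minimal illustration, assuming the system aberration is a pure phase function over the pupil):

```python
import numpy as np

def remove_aberration(spectrum, aberration_phase):
    """Remove a phase-only system aberration from a reconstructed
    spectrum: subtracting the aberration phase is equivalent to
    multiplying by the conjugate factor exp(-i * aberration)."""
    return spectrum * np.exp(-1j * aberration_phase)
```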


At operation 460, the aberration-corrected spectrums are stitched together to generate a reconstructed sample spectrum (also referred to herein as an “initial known sample spectrum” or a “calculated sample spectrum”). For example, the aberration-corrected spectrums may be summed together using a weighted average.
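A minimal sketch of weighted-average stitching, assuming equal weights and boolean support masks (a real implementation might instead weight each spectrum by signal strength):

```python
import numpy as np

def stitch_spectrums(spectrums, masks):
    """Stitch aberration-corrected sub-spectrums by a per-frequency
    weighted average: sum each spectrum over its support mask and
    divide by the overlap count at each frequency."""
    total = np.zeros(spectrums[0].shape, dtype=complex)
    count = np.zeros(spectrums[0].shape)
    for s, m in zip(spectrums, masks):
        total += np.where(m, s, 0.0)
        count += m
    # average where at least one spectrum contributes; zero elsewhere
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)
```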


At operation 470, the reconstructed (known) sample spectrum is extended using the plurality of darkfield measurements. An inverse Fourier transform is applied to the extended sample spectrum to obtain an aberration-free reconstructed image with higher resolution than the NA-matching intensity measurements or the darkfield measurements. In certain implementations, unknown parts of the sample spectrum are recovered from the darkfield measurements and a spectrum spanning procedure is used to fill in the sample spectrum with the recovered unknown parts. Details regarding an example of an operation for extending the known sample spectrum are provided in Section V (D). FIG. 6 illustrates an example of suboperations that may be included in operation 470, according to some embodiments.


In some embodiments, the APIC imaging method discussed with respect to FIG. 4 may further include a calibration procedure. An example of a calibration procedure is discussed in Section V.


Aberration Extraction


FIG. 5 is a flow diagram depicting an example of suboperations that may be included in operation 440 of FIG. 4, according to some embodiments. At optional (denoted by dashed line) suboperation 510, one or more weighting factors (e.g., weighting factors in a weighting matrix as discussed, for example, with reference to Eqn. 52) may be applied to one or more spatial frequencies in each complex field spectrum reconstructed from a corresponding NA-matching measurement. In one aspect, the one or more weighting factors place a greater weight on spatial frequencies with a stronger signal than other spatial frequencies in the spectrum. It may be advantageous to apply greater weight to these spatial frequencies since stronger signals generally have higher signal-to-noise ratio (SNR), and emphasizing these frequencies may provide a better result. In one aspect, a weighting matrix is applied to emphasize one or more frequencies in the spectrum with higher SNR.


As discussed in more detail in subsection V (C) with reference to FIG. 17, at an overlapping portion of two reconstructed spectrums, the phase difference due to the sample spectrum cancels out and the phase difference solely depends on the system aberration. As shown by the illustration 1790 in FIG. 17, the phase difference in the overlapping portion is a linear combination of the system aberration. Due to this, a linear operator may be determined that accounts for the phase differences. This linear operator may then be used to recover the system aberration from the phase differences in the overlapping portions.


Returning to FIG. 5, at suboperation 520, the phase difference in each overlapping portion of the reconstructed spectrums is determined by subtracting the phases of the overlapping reconstructed spectrums. For example, as illustrated in FIG. 17, the phase of the overlapping part of reconstructed spectrum 1712 (consisting of 1750, 1760, and an offset) may be subtracted from the phase of the overlapping part of spectrum 1714 (consisting of 1751, 1761, and another offset) to determine the phase difference.


At suboperation 530, a system aberration may be determined using the phase differences from the overlapping portions. In certain implementations, the system aberration is determined by solving for the linear operator that maps the phase differences of the overlapping portions of the complex field spectrums to the system aberration. An example of a linear operator is provided in Eqn. 48. In this example, phase differences for the overlapping portions can be entered for different pairs of overlapping spectrums into Eqn. 48 and the system aberration can be solved from the populated matrix with the phase differences. In one implementation, the phase differences of a smaller set of pairs of overlapping spectrums from the total number of pairs of overlapping spectrums in the full spectrum of the sample are used, which may be advantageous to reduce computational resources. An example of a 2D aberration of the APIC imaging system is given by Eqn. 51.
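Solving for the aberration from the populated phase differences can be sketched as a generic least-squares solve (a stand-in for Eqn. 48; the matrix `D` below is an assumed placeholder whose actual entries would encode which pupil pixels each overlap compares):

```python
import numpy as np

def solve_aberration(D, phase_diffs):
    """Recover the (sampled) system aberration from overlap phase
    differences by solving D @ aberration = phase_diffs in the
    least-squares sense."""
    aberration, *_ = np.linalg.lstsq(D, phase_diffs, rcond=None)
    return aberration
```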


Extending Sample Spectrum Using Darkfield Measurements


FIG. 6 is a flow diagram depicting an example of suboperations that may be included in operation 470 of FIG. 4, according to certain embodiments. One or more of the suboperations in the flow diagram may use a reconstructed sample spectrum (e.g., reconstructed sample spectrum from operation 460 of FIG. 4) and/or a system aberration (e.g., system aberration extracted at operation 440 of FIG. 4) as known parameters to extend the sample spectrum using one or more darkfield measurements.


In some implementations, the suboperations in FIG. 6 may be performed for two or more of the darkfield measurements obtained at operation 420 of FIG. 4, for example. In one implementation, the suboperations in FIG. 6 are performed for all of the darkfield measurements. Where suboperations in FIG. 6 are performed for multiple darkfield measurements, the suboperations may be performed in parallel (parallel processing), for example.


At suboperation 610, one of the darkfield measurements may be selected from the plurality of darkfield measurements. In one example, the darkfield measurement with a spectrum closest to a known reconstructed sample spectrum is selected (e.g., known sample spectrum from operation 460).


The sampled spectrum of the darkfield measurement (e.g., sampled spectrum 1810 in FIG. 18) consists of the reconstructed known sample spectrum (e.g., known sample spectrum 1814 in FIG. 18) and an unknown spectrum (e.g., unknown spectrum 1812 in FIG. 18). The unknown spectrum of the darkfield measurement can be reconstructed using the known sample spectrum. At suboperation 620, the known sample spectrum may be cropped out from the prior reconstructed spectrum based on which part of the sample spectrum is measured in the selected darkfield measurement (e.g., spectrum of darkfield measurement 1830 in FIG. 18). This cropped spectrum contains only part of the information of the spectrum associated with the selected darkfield measurement. A correlation operator and the autocorrelation of the cropped known spectrum are used to obtain the unknown spectrum of the selected darkfield measurement. In one example, the known sample spectrum is subtracted from the spectrum of the darkfield measurement.


At optional (denoted by dashed line) suboperation 630, the known system aberration may be added back into the cropped out known part of the spectrum of the selected darkfield measurement to obtain an uncorrected known part. In another implementation, the known system aberration may be added into the spectrum of the darkfield measurement prior to suboperation 620.


In certain embodiments, APIC imaging methods correct for aberrations such as an imaging system aberration (e.g., system aberration extracted at operation 440 in FIG. 4). For embodiments with aberration correction, suboperation 630 may be included to add the system aberration back into the unknown spectrum and suboperation 680 may be included to remove the system aberration from the recovered unknown part of the sample spectrum.


As discussed in detail in Section V(D), the spectrum of the darkfield measurement (e.g., Fourier transform of the darkfield intensity measurement 1830 in FIG. 18) includes a first cross correlation term (e.g., first cross-correlation term 1844 in FIG. 18) and a second cross-correlation term (e.g., second cross-correlation term 1846 in FIG. 18) of the unknown and known spectrum, an autocorrelation term of the unknown spectrum (e.g., autocorrelation of the unknown spectrum 1842 in FIG. 18), and an autocorrelation of the known spectrum (e.g., autocorrelation of the known spectrum 1848 in FIG. 18). At suboperation 640, the autocorrelation term of the known reconstructed sample spectrum is calculated and subtracted out of the spectrum of the darkfield measurement, leaving the auto-correlation term of the unknown spectrum, and the first and second cross-correlation terms.


At suboperation 650, the portion of one of the cross-correlation terms that does not overlap with the other cross-correlation term or the autocorrelation term of the unknown spectrum is extracted (by discarding the overlap). For example, the possible non-zero area of the other cross-correlation term may be determined using the shape of the known sample spectrum and that of the unknown spectrum of the darkfield measurement. The non-zero area in their autocorrelation term can be determined from the two as well. The non-overlapping portion can be extracted by setting any possible non-zero area covered by autocorrelation of unknown spectrum (autocorrelation of unknown spectrum 1842) and the other cross-correlation term (first cross-correlation term 1844 in FIG. 18) to zero in the result of subtracting the auto-correlation of the known spectrum (e.g., auto-correlation of the known spectrum 1848 in FIG. 18) from the spectrum of the darkfield intensity measurement (e.g., darkfield intensity measurement 1830 in FIG. 18).


The non-overlapping portion is linearly related to the unknown part of the spectrum of the darkfield measurement. A linear equation (e.g., part of a correlation operator) may be obtained with respect to the unknown part of the darkfield measurement. This linear equation may be used to form a closed-form solution of an unknown part from each darkfield measurement. At suboperation 660, a correlation operator may be constructed from the known spectrum based on the extracted portion (1852 in FIG. 18). FIG. 19 depicts a graphical illustration of the construction of a correlation operator according to certain implementations. The correlation operator may be used in a closed-form solution to determine the unknown part of the sample spectrum based on the extracted non-overlapping spectrum 1852 in FIG. 18. Correlation operators for multiple darkfield measurements can be used to obtain a correlation matrix. An example of a correlation matrix, CiF, is described in Section V(D).


In some implementations, by constructing a correlation operator, a linear equation with respect to the unknown part of the sample spectrum is determined. In calculating cross-correlations of the known part of the signal and the unknown part of the signal, one of the signals is shifted and multiplied by the other signal. The summation of this product is a correlation coefficient. The known signal can be used as the weight to calculate a weighted version of the unknown signal. The summation of the weighted version of the unknown signal is the correlation coefficient, and this process is linear in terms of the unknown spectrum. A linear equation can thus be formed and solved for the unknown spectrum.
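The linearity described above can be made concrete with a 1-D toy of the correlation-operator construction; the array sizes, shift sampling, and cross-correlation convention below are illustrative assumptions, not the patent's CiF matrix.

```python
import numpy as np

def build_correlation_operator(known, n_unknown, shifts):
    """Build C such that C @ unknown yields the cross-correlation
    samples corr[s] = sum_k conj(known[k]) * unknown[k + s]: each row
    applies the known spectrum as a weight at one shift."""
    C = np.zeros((len(shifts), n_unknown), dtype=complex)
    for r, s in enumerate(shifts):
        for k in range(len(known)):
            if 0 <= k + s < n_unknown:
                C[r, k + s] = np.conj(known[k])
    return C

def recover_unknown(known, corr_samples, shifts, n_unknown):
    """Closed-form recovery of the unknown spectrum from sampled
    cross-correlation values by solving the linear system."""
    C = build_correlation_operator(known, n_unknown, shifts)
    unknown, *_ = np.linalg.lstsq(C, corr_samples, rcond=None)
    return unknown
```

With enough sampled shifts, the operator is full column rank and the unknown spectrum is recovered exactly, which is the sense in which the solution is closed-form rather than iterative.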


At suboperation 670, the correlation operator is applied to the spectrum of the darkfield measurement to recover the unknown part.


In implementations where the imaging method corrects for aberration, after recovering the unknown part, the aberration is corrected in the unknown part. At optional (denoted by dashed line) suboperation 680, the known system aberration is removed (subtracted) from the unknown part.


At suboperation 690, once the unknown part of the sample spectrum is recovered, the sample spectrum is filled in (summed) with the recovered unknown part.


In certain implementations, the suboperations in FIG. 6 are completed for two or more of the plurality of darkfield measurements. In one example, the suboperations are completed for all the darkfield measurements. An inverse Fourier transform is applied to the extended sample spectrum to obtain an aberration-free reconstructed image with higher resolution than the NA-matching intensity measurements or darkfield measurements.


In certain implementations, one or more unknown parts of the sample spectrum are recovered and a spectrum spanning procedure is used to fill in the sample spectrum with the recovered unknown parts. For example, the suboperations in FIG. 6 may be performed for multiple darkfield measurements to obtain corresponding multiple unknown parts of the sample spectrum. The spectrum spanning procedure may be used to fill in (add to) the sample spectrum to obtain an extended sample spectrum.
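The fill-in step of the spectrum spanning procedure might look like the following toy; representing the newly sampled frequencies as a boolean `support_mask` is an assumption for illustration.

```python
import numpy as np

def fill_spectrum(sample_spectrum, recovered_part, support_mask):
    """Extend the sample spectrum by writing a recovered unknown part
    into the frequencies newly covered by a darkfield measurement,
    leaving the already-known region untouched."""
    extended = sample_spectrum.copy()
    extended[support_mask] = recovered_part[support_mask]
    return extended
```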


According to embodiments, an APIC imaging method uses a plurality of NA-matching intensity measurements and/or a plurality of darkfield intensity measurements in a reconstruction process. Various numbers of these intensity measurements may be used.


In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 9 NA-matching measurements.


In some embodiments, an APIC imaging method uses at least one darkfield measurement. In some embodiments, an APIC imaging method uses between 1-10 darkfield measurements. In some embodiments, an APIC imaging method uses between 10-20 darkfield measurements. In some embodiments, an APIC imaging method uses between 20 and 30 darkfield measurements. In some embodiments, an APIC imaging method uses more than 10 darkfield measurements.


E. Additional Examples of APIC Imaging Systems and System Components


FIG. 7 is a block diagram of components of an APIC imaging system 700, according to embodiments. The APIC imaging system 700 comprises an APIC imaging device 701 and a computing device 780 in electronic communication with APIC imaging device 701. The APIC imaging device 701 includes an illumination device 710 for providing illumination at a plurality of NA-matching illumination angles and at a plurality of darkfield illumination angles incident on a specimen 720 being imaged. The illumination device 710 may be configured to provide illumination in reflection mode and/or in transmission mode. The APIC imaging system 700 is shown at an instant when the specimen 720 is being imaged. It would be understood that the presence of a specimen is optional (denoted by a dotted line). APIC imaging device 701 also includes an optical system 730 with collection optics such as an objective for receiving illumination scattered by the specimen 720. An example of an objective that may be used is a low magnification objective such as, e.g., a 10× magnification, NA 0.25 objective (e.g., a 10× magnification, NA 0.25 objective sold by Olympus).


APIC imaging device 701 also includes one or more radiation detectors 740 in communication with the optical system 730 to receive light. The one or more radiation detectors 740 are configured to acquire a plurality of NA-matching intensity measurements while the illumination device 710 provides illumination at NA-matching illumination angles and acquire a plurality of darkfield intensity measurements while the illumination device 710 provides illumination at darkfield illumination angles. The computing device 780 includes one or more processors 782, a non-transitory computer readable medium (CRM) 784, and an optional (denoted by dashed line) display 786. The one or more processors 782 are in electrical communication with one or more radiation detectors 740 to receive a signal with a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements and/or to send control signals to the one or more radiation detectors 740, for example, to trigger image acquisition. Communication between one or more system components may be in wired and/or wireless form.


In some embodiments, the illumination device of an APIC imaging system includes one or more illumination sources. In other cases, the illumination device is in communication with (e.g., via optical fibers) the one or more illumination sources to receive illumination. For example, illumination device 1110 in FIG. 11 receives a laser beam 1118 from one or more laser sources (e.g., via optical fibers) that are separate from the illumination device 1110.


An illumination device of various formats may be used. In one embodiment, an illumination device includes a galvo motor configured to receive a laser beam and a plurality of mirrors configured to reflect the laser beam at different illumination angles. In another embodiment, an illumination device includes one or more rings of illumination sources mounted onto a supporting structure (e.g., a flat plate, a hemispherical plate, a semi-hemispherical plate, a partial conical plate, etc.). For example, an illumination device may have a first ring of illumination sources configured for NA-matching illumination angles and a second ring of illumination sources configured for darkfield illumination angles. In another embodiment, the illumination device includes a single illumination source. In another embodiment, the illumination device includes a rectangular array of illumination sources and/or a ring of illumination sources. For example, an illumination device may include an LED array (e.g., RGB LED Matrix sold by Adafruit) and/or LED ring (e.g., Neopixel ring 16 sold by Adafruit). As another example, an LCD pixel array and/or LCD pixel ring may be used. One skilled in the art would contemplate other sources of radiation and other formats of illumination devices that can be implemented by an APIC imaging system.


The illumination sources of different embodiments may provide electromagnetic waves of various wavelengths. In some cases, the illumination sources of the illumination device provide wavelengths within the visible spectrum. In other cases, the illumination sources provide electromagnetic waves in the ultraviolet spectrum. In still other cases, the illumination sources provide electromagnetic waves in the infrared spectrum.


In some embodiments, the illumination device includes a first plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of NA-matching illumination angles for acquiring a plurality of NA-matching measurements and a second plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of darkfield illumination angles for acquiring a plurality of darkfield measurements. For example, illumination device 110 in FIGS. 1A and 1B includes a ring of NA-matching illumination sources 116a and a plurality of darkfield illumination sources 116b. As another example, illumination device 810A in FIG. 8A includes a first ring of NA matching illumination sources 812A and a second ring of darkfield illumination sources 814A. As another example, illumination device 810B in FIG. 8B includes a first ring of NA matching illumination sources 812B and a second ring of darkfield illumination sources 814B.


In some embodiments, an APIC imaging system has one or more motorized translational stages upon which the illumination device is mounted to adjust the position (height and x-y translational position) of the illumination device. In some cases, the motorized translational stage(s) is/are in communication with the computing device of the APIC imaging system to control the movement. In one implementation, two motorized translational stages are used. In this implementation, two circuit boards (e.g., Arduino Uno boards) may be in communication with the motorized translational stages respectively to control them individually.


In various embodiments, an APIC imaging system includes one or more radiation detectors (e.g., radiation detector 140 in FIGS. 1A and 1B and radiation detector(s) in FIG. 7) configured to receive light from the optical system and acquire a plurality of NA-matching intensity measurements and a plurality of darkfield measurements. An example of a suitable radiation detector is a camera such as the Prosilica GT6400 camera made by Allied Vision. In some embodiments, multiple radiation detectors may be employed to acquire intensity measurements at different angles. In FIG. 10, for example, APIC imaging system 1000 includes a first ring of radiation detectors 1040a configured to acquire a plurality of NA-matching intensity measurements and a second ring of radiation detectors 1040b configured to acquire a plurality of darkfield measurements. A radiation detector may include one or more light detecting elements such as, for example, a charge coupled device (CCD), a CMOS imaging sensor, an avalanche photo-diode (APD) array, a photo-diode (PD) array, or a photomultiplier tube (PMT) array.


An APIC system may include a computing device for performing one or more functions of the APIC imaging system such as, e.g., one or more operations of an APIC imaging method. The computing device may include one or more processors and a non-transitory computer readable medium in electrical communication with the one or more processors. Optionally, the computing device may also have a display that is in electrical communication with the one or more processors. The computing device can be in various forms such as, for example, a smartphone, laptop, desktop, tablet, etc. An example of a suitable computing device is a personal computer having a non-transitory computer readable medium with 16 GB RAM and a processor including a CPU (e.g., Intel Core i5-8259U).


In some cases, the computing device may include a controller for controlling functionality of the APIC system. In one example, the controller may include one or more circuit boards (e.g., an Arduino Uno board made by Arduino). In one implementation, a first plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of NA-matching illumination angles for acquiring a plurality of NA-matching measurements is controlled by a first circuit board, and a second plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of darkfield illumination angles for acquiring a plurality of darkfield measurements is controlled by a second circuit board.


In various embodiments, the APIC imaging system includes one or more processors (e.g., processor(s) 782 in FIG. 7, GPU(s) 1282B in FIG. 12, and CPU(s) 1282A in FIG. 12) and non-transitory computer readable medium (CRM) (e.g., memory) (e.g., computer readable medium 784 in FIG. 7 and memory 1284 in FIG. 12) in communication with the processor(s). An example of a suitable CPU is an Intel Core i9-10940X with 64 GB RAM. The one or more processors and computer readable medium may be part of a controller. The processor-executable code (or “instructions”) for performing the APIC imaging method can be stored on the CRM or other memory. The processor(s) can retrieve the processor-executable code from the CRM or other memory to perform various functions or operations of the APIC imaging method. The CRM or other memory can store raw and/or processed image data. In some implementations, the CRM or other memory can additionally or alternatively include a volatile memory array for temporarily storing code to be executed as well as image data to be processed, stored, or displayed.



FIG. 8A depicts a plan view of an illumination device 810A of an APIC system, according to embodiments. Illumination device 810A includes a first ring of NA matching illumination sources 812A and a second ring of darkfield illumination sources 814A. First ring of NA matching illumination sources 812A and second ring of darkfield illumination sources 814A are mounted onto a surface of a flat plate 811A. The first ring 812A has a circumference through the centers of the NA matching illumination sources. The second ring 814A has a circumference through the centers of the darkfield illumination sources. The circumference of the first ring 812A has a diameter, and the illumination device 810A is positioned during operation with respect to the collection optics, such that the NA matching illumination sources 812A can provide illumination at each of a plurality of NA-matching illumination angles. The circumference of the second ring 814A has a diameter, and the illumination device is positioned during operation with respect to the collection optics, such that the darkfield illumination sources 814A can provide illumination at each of a plurality of darkfield illumination angles. Additional rings of darkfield illumination sources may be used in other implementations. In other implementations, the darkfield illumination sources can be in arrangements other than rings, provided the darkfield illumination sources have illumination angles greater than the maximum acceptance angle of the collection optics. Also, each of the rings or other arrangements of darkfield illumination sources may include fewer or more illumination sources according to other implementations.
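As an illustrative check of this geometry, the ring radius needed to deliver light at the NA-matching angle can be computed from the ring's height above the sample. This is a simplified sketch assuming point-like sources and a sample centered on the ring axis; the `ring_radius_for_na` helper and the 40 mm height are hypothetical values for illustration, not values from this disclosure:

```python
import math

def ring_radius_for_na(na_match: float, height_mm: float) -> float:
    """Lateral radius at which a ring of point-like sources must sit, at
    height_mm above the sample plane, so that light from each source reaches
    a sample on the ring axis at the NA-matching angle asin(na_match)."""
    angle = math.asin(na_match)          # NA-matching illumination angle (rad)
    return height_mm * math.tan(angle)   # lateral offset producing that angle

# Hypothetical example: NA 0.25 objective with the ring 40 mm above the sample.
r = ring_radius_for_na(0.25, 40.0)       # about 10.3 mm
```

In practice the height is found by calibration (see the calibration section below) rather than computed from nominal values.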



FIG. 8B depicts an isometric view of an illumination device 810B of an APIC system, according to embodiments. Illumination device 810B includes a first ring of NA matching illumination sources 812B and a second ring of darkfield illumination sources 814B. First ring of NA matching illumination sources 812B and second ring of darkfield illumination sources 814B are mounted onto a surface of a curved support 811B. The first ring 812B has a circumference through the centers of the NA matching illumination sources. The second ring 814B has a circumference through the centers of the darkfield illumination sources. The circumference of the first ring 812B has a diameter, and the illumination device 810B is positioned during operation with respect to the collection optics, such that the NA matching illumination sources 812B can provide illumination at each of a plurality of NA-matching illumination angles. The circumference of the second ring 814B has a diameter, and the illumination device is positioned during operation with respect to the collection optics, such that the darkfield illumination sources 814B can provide illumination at each of a plurality of darkfield illumination angles. Additional rings of darkfield illumination sources may be used in other implementations. In other implementations, the darkfield illumination sources can be in arrangements other than rings, provided the darkfield illumination sources have illumination angles greater than the maximum acceptance angle of the collection optics. Also, each of the rings or other arrangements of darkfield illumination sources may include fewer or more illumination sources according to other implementations.



FIG. 9 is a side view of components of an APIC system 900, according to certain embodiments. APIC system 900 includes an illumination device 910 including a single illumination source 912 and an optical system 930 with an aperture 932 and an objective 934 having a lens with an acceptance angle, θ. APIC system 900 also includes a radiation detector 940. At the instant in time depicted in the illustrated example, a specimen 920 being imaged is located on a transparent receptacle 922 (e.g., transparent slide).


APIC system 900 also includes a mechanism (not shown) coupled to the single illumination source 912. The mechanism is configured to move (e.g., scan) the illumination source 912 in a direction of an x-axis and/or in a direction of a y-axis (not shown) perpendicular to a plane of the x-axis and the z-axis. In FIG. 9, the single illumination source 912 is shown moved between three positions. At the first two positions on the right, the illumination source 912 is depicted as providing NA-matching illumination 916 along an NA-matching illumination angle, α, that is equal to the acceptance angle, θ. At a third position on the left, the illumination source 912 is depicted as providing darkfield illumination 917 along a darkfield illumination angle that is greater than the acceptance angle, θ. Although three positions are shown, it would be understood that the single illumination source 912 may be moved to additional positions. During operation, the radiation detector 940 acquires a plurality of NA-matching intensity measurements while illumination is provided at the corresponding plurality of NA-matching illumination angles and acquires a plurality of darkfield intensity measurements while illumination is provided at the corresponding plurality of darkfield illumination angles. In some implementations, an NA-matching intensity measurement is acquired at each of the plurality of NA-matching illumination angles and a darkfield intensity measurement is acquired at each of the plurality of darkfield illumination angles. In another implementation, the APIC system 900 may be configured to move the transparent receptacle 922 relative to the single illumination source 912 while keeping the single illumination source 912 stationary. In another implementation, the APIC system 900 may be configured to move the optical system 930 and radiation detector 940 relative to the illumination source 912 and transparent receptacle 922 while keeping the illumination source 912 and transparent receptacle 922 stationary.
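The acquisition sequence described above can be sketched as a simple loop. The `set_source_position` and `trigger_camera` callables are hypothetical stand-ins for the real motion and camera drivers, not APIs from this disclosure:

```python
def acquire_dataset(na_positions, df_positions, set_source_position, trigger_camera):
    """Scan a single illumination source through the NA-matching positions and
    then the darkfield positions, capturing one intensity image per position."""
    na_images, df_images = [], []
    for pos in na_positions:
        set_source_position(pos)           # illumination angle equals the acceptance angle
        na_images.append(trigger_camera())
    for pos in df_positions:
        set_source_position(pos)           # illumination angle exceeds the acceptance angle
        df_images.append(trigger_camera())
    return na_images, df_images
```

The same loop structure applies whether the source, the receptacle, or the optics are the moving element.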



FIG. 10 is a side view of components of an APIC system 1000, according to an embodiment. APIC system 1000 includes an illumination device 1010 with a single illumination source 1012 and an optical system 1030 with an aperture 1032 and an objective 1034 having a lens with an acceptance angle, θ. At the instant in time depicted in the illustrated example, a specimen 1020 being imaged is located on a transparent receptacle 1022 (e.g., transparent slide). APIC imaging system 1000 also includes a first ring of radiation detectors 1040a configured to acquire a plurality of NA-matching intensity measurements and a second ring of radiation detectors 1040b configured to acquire a plurality of darkfield measurements. The first ring of radiation detectors 1040a have respective detection planes 1041a. The radiation detectors in the first ring 1040a are positioned such that their respective detection planes 1041a are perpendicularly facing the NA-matching illumination angles to be able to capture a plurality of NA-matching intensity measurements during operation. The radiation detectors in the second ring 1040b are positioned such that their respective detection planes 1041b are perpendicularly facing the darkfield illumination angles to be able to capture a plurality of darkfield intensity measurements during operation. In the illustrated example, the single illumination source 1012 may remain stationary and NA-matching and darkfield intensity measurements may be taken at a single exposure time during operation.


In certain embodiments, an APIC imaging system includes an illumination device that directs a laser beam at different NA-matching illumination angles and darkfield illumination angles. For example, the illumination device may include a galvo motor (two-axis rotatable mirror system) and an array of mirrors (e.g., an arrangement of mirrors in concentric circles along a flat surface). The galvo motor may have mirrors that are rotatable to direct a laser beam to different mirrors in the array that then reflect the laser beam at the different illumination angles. The galvo motor is in communication with one or more laser sources (e.g., via optical fibers) to receive the laser beam.



FIG. 11 depicts an orthogonal view of an APIC imaging system 1100 having an illumination device 1110 with a galvo motor 1111 and a circular mirror array 1115, according to an embodiment. Galvo motor 1111 includes first and second rotatable mirrors 1113 and 1114. Circular mirror array 1115 includes ninety-five (95) mirrors 1116 (1)-1116 (95). Other numbers of mirrors can be used in other implementations. Each mirror 1116 (i) of the mirror array 1115 is coupled to one end of a rectangular tower that is coupled to a plate. The rectangular towers may be 3D-printed structures. Each tower's top surface is sloped at a certain angle such that collimated laser beam 1104 may be reflected towards the sample plane at one of the illumination angles in the sequence of illumination angles. In one implementation, a neutral density filter may be placed on each of the mirrors 1116 (1)-1116 (95) in circular mirror array 1115. Use of such neutral density filters allows for increasing the input laser intensity to obtain higher SNR in darkfield images while preventing the brightfield images from over-exposure. While two rotatable mirrors 1113 and 1114 are depicted in FIG. 11, a single mirror rotatable about two different axes can be used in another implementation.
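The slope of each tower's top surface can be illustrated with a simplified in-plane model: a horizontal beam arriving from the array center is folded up and back toward the sample at the desired illumination angle measured from the vertical optical axis. Both the geometry and the `tower_slope_deg` helper are assumptions for illustration, not design values from this disclosure:

```python
def tower_slope_deg(illum_angle_deg: float) -> float:
    """Slope (from horizontal) of a tower's mirrored top surface that folds a
    horizontal beam, arriving from the array center, up and back toward the
    sample at illum_angle_deg from the vertical optical axis.

    In this simplified in-plane geometry the mirror must deviate the beam by
    90 + illum_angle degrees, and a flat mirror tilted by s degrees deviates
    a horizontal beam by 2*s, giving s = 45 + illum_angle / 2."""
    return 45.0 + illum_angle_deg / 2.0
```

For on-axis illumination (0 degrees) this reduces to the familiar 45 degree fold mirror; steeper darkfield angles require steeper tower tops.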


The APIC system 1100 also includes an optical system 1130 and a radiation detector 1140 for receiving laser light propagated by optical system 1130. Optical system 1130 includes a collection element 1134 (e.g., objective) having a focal length, f1, and a focusing element 1136 (e.g., lens) having a focal length, f2. Collection element 1134 is located to receive light issuing from the specimen during operation. Focusing element 1136 is configured to focus light propagated from collection element 1134 to radiation detector 1140. Optical system 1130 may be in a 4f arrangement or a 6f arrangement in certain implementations. APIC system 1100 also includes a specimen receptacle 1122 (e.g., slide) for receiving a specimen being imaged. Specimen receptacle 1122 includes a first surface at a sample plane. The illustrated example is shown at an instant in time during an image acquisition process where a specimen 1120 is located in specimen receptacle 1122.


The first and second rotatable mirrors 1113 and 1114 may be controlled in a variety of manners. By way of example, a controller may be coupled with the first and second rotatable mirrors 1113 and 1114 of FIG. 11. The controller may access instructions stored in a memory. The instructions may be executable by the controller to cause the first and second rotatable mirrors 1113 and 1114 of FIG. 11 to rotate to any given position such that the specimen is illuminated at each of the illumination angles at each of the different sampling times, as described in the preceding paragraph. Alternatively, the first and second rotatable mirrors 1113 and 1114 may have an internal memory and processor and may be programmed to cause each of the first and second rotatable mirrors 1113 and 1114 to follow a pre-determined rotation path.



FIG. 12 depicts a block diagram of an example of components of a computing device 1280, according to various embodiments. Computing device 1280 includes a bus 1201 coupled to an input/output (I/O) subsystem 1202, one or more processors 1282, one or more communication interfaces 1207, a main memory 1208, a secondary memory 1210, and a power supply 1240. One or more of these components may be in separate housings. According to certain implementations, the illustrated components may be examples of components that are part of computing device 780 in FIG. 7 or part of a computing device that is part of, or in communication with, APIC system 100 in FIGS. 1A, 1B, and FIG. 3.


Computing device 1280 includes I/O subsystem 1202, which includes, or is in communication with, one or more components which may implement an interface for interacting with human users and/or other computer devices depending upon the application. Certain embodiments disclosed herein may be implemented in program code on computing device 1280 with I/O subsystem 1202 used to receive input program statements and/or data from a human user (e.g., via a graphical user interface (GUI), a keyboard, touchpad, etc.) and to display them back to the user, for example, on a display. The I/O subsystem 1202 may include, e.g., a keyboard, mouse, graphical user interface, touchscreen, or other interfaces for input, and, e.g., an LED or other flat screen display, or other interfaces for output. Other elements of embodiments may be implemented with a computing device like computing device 1280 without I/O subsystem 1202. According to various embodiments, a processor may include a CPU, GPU or computer, analog and/or digital input/output connections, controller boards, etc.


Program code may be stored in non-transitory computer readable media such as secondary memory 1210 or main memory 1208 or both. One or more processors 1282 may read program code from one or more non-transitory media and execute the code to enable computing device 1280 to accomplish the methods performed by various embodiments described herein, such as APIC imaging methods. Those skilled in the art will understand that the one or more processors 1282 may accept source code and interpret or compile the source code into machine code that is understandable at the hardware gate level of the one or more processors 1282.


Communication interfaces 1207 may include any suitable components or circuitry used for communication using any suitable communication network (e.g., the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a virtual private network (VPN), and/or any other suitable type of communication network). For example, communication interfaces 1207 can include network interface card circuitry, wireless communication circuitry, etc.


In certain embodiments, computing device 1280 may be part of or connected to a controller that is employed to control functions of an APIC system such as controlling image acquisition by the radiation detector (e.g., radiation detector(s) 140 in FIGS. 1A, 1B, and 3, radiation detector(s) 740 in FIG. 7, radiation detector 940 in FIG. 9, radiation detectors 1040a and 1040b in FIG. 10, radiation detector 1140 in FIG. 11) and controlling illuminations in an illumination device (e.g., illumination device 110 in FIGS. 1A, 1B, and 3, illumination device 710 in FIG. 7, illumination device 810A in FIG. 8A, illumination device 810B in FIG. 8B, illumination device 910 in FIG. 9, illumination device 1010 in FIG. 10, and illumination device 1110 in FIG. 11). The controller will typically include one or more memory devices and one or more processors. The processor may include a CPU or computer, analog and/or digital input/output connections, stepper motor controller boards, etc.


III. Demonstration Results

For certain implementations, APIC reconstruction may be computationally faster than certain examples of FPM reconstruction. For certain implementations, APIC reconstruction may be more robust than these examples of FPM reconstruction against large aberrations. In one implementation, APIC reconstruction may be capable of addressing aberrations whose maximal phase difference exceeds 3.8π when using an NA 0.25 objective.


Implementation A

The APIC system in FIGS. 1A and 1B according to one implementation (Implementation A) was used. In this implementation, the objective 134 was a low magnification objective (the 10× magnification, NA 0.25 objective sold by Olympus) used for data acquisition. The illumination device 110 included an LED ring 112 that was the NeoPixel ring 16 made by Adafruit. The LED ring 112 was glued onto an LED array that was the 32×32 RGB LED Matrix sold by Adafruit to provide illumination at NA-matching illumination angles. Another cluster of illumination sources 114 configured to provide illumination sequentially at a corresponding plurality of darkfield illumination angles was also included. The LED ring 112 (first cluster) and the second cluster of illumination sources 114 were mounted on a motorized stage for position and height adjustment, and were individually controlled by two Arduino Uno circuit boards. In the data acquisition process, one LED was lit up at a time, and the radiation detector 140 was simultaneously triggered to capture an image when the LED was on. The radiation detector 140 was the Prosilica GT6400 camera made by Allied Vision. The acquisition process continued until all LEDs in the two clusters of LEDs were lit once. After data acquisition, an APIC imaging method and an FPM imaging method were used to perform reconstruction. The computing device used to perform the APIC and FPM reconstructions was a personal computer including a CPU (Intel Core i9-10940X) with 64 GB RAM.


First Experiment with Small Dataset


In a first experiment, a Siemens star target was imaged and a small dataset was acquired to perform reconstruction using APIC and FPM. The dataset acquired consisted of 9 brightfield measurements, 8 NA-matching measurements and 27 darkfield measurements. The nominal scanning pupil overlap rate was approximately 65%. The results suggest that the APIC reconstruction was able to render a more accurate complex field than the FPM reconstruction.
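The quoted nominal overlap rate can be sanity-checked with the standard circle-intersection (lens-area) formula for two equal pupils in the spatial frequency domain. The function and the example values below are illustrative, not calibrated system parameters:

```python
import math

def pupil_overlap_rate(pupil_radius: float, shift: float) -> float:
    """Fractional overlap of two equal circular pupils whose centers are
    separated by `shift` (same spatial-frequency units as the radius),
    using the standard circle-circle intersection (lens-area) formula."""
    if shift >= 2.0 * pupil_radius:
        return 0.0
    lens_area = (2.0 * pupil_radius ** 2 * math.acos(shift / (2.0 * pupil_radius))
                 - 0.5 * shift * math.sqrt(4.0 * pupil_radius ** 2 - shift ** 2))
    return lens_area / (math.pi * pupil_radius ** 2)
```

For a unit-radius pupil, a center-to-center shift of about 0.45 gives roughly a 72% overlap, in the neighborhood of the 65% nominal rate quoted above.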



FIG. 13 depicts illustrations of APIC and FPM reconstruction results for a small number of measurements, according to embodiments. FIG. 13 includes a ground truth image 1301, which was imaged under a high-NA, 40× objective, and a Köhler illuminated image 1302 under the same 10× objective used in APIC's data acquisition. FIG. 13 also includes zoomed images 1303 of the reconstructed amplitude and phase. FIG. 13 also includes reconstructed pupils 1304 using APIC and FPM reconstruction.


As shown by the result, the reconstructed finer spokes were distorted in the reconstruction result of FPM. Moreover, noticeable wavy reconstruction artifacts existed in the phases reconstructed by FPM. This suggests that large redundancy may be needed for FPM to provide a good reconstruction. When the measurements were given to APIC reconstruction, the reconstructed phases and amplitudes were less noisy. The reconstructed amplitude is also closer to the ground truth, which is sampled using a high-NA objective. This experiment showed the ability of APIC reconstruction to better retrieve a high-resolution complex field when the raw data size is constrained because it is an analytical method and does not rely as heavily on pupil overlap redundancy for solution convergence.


For an image tile with a side length of 256 pixels, APIC reconstruction took 9 seconds on the personal computer, while FPM took 25 seconds to finish reconstruction. The relative computational efficiency of APIC can again be attributed to the analytical nature of its approach. This computational efficiency is image tile size dependent: the smaller the tile, the more efficient. Section VI includes more detail regarding image tile size. As it is generally preferred to divide a large image into many smaller tiles in parallel computing, APIC's computational efficiency for smaller tiles aligns well with practical computation considerations.
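The tiling strategy mentioned above can be sketched as follows; `split_into_tiles` is a hypothetical helper that lists the corner coordinates of square tiles so each tile can be reconstructed independently in parallel:

```python
def split_into_tiles(height, width, tile):
    """Corner coordinates (y0, x0, y1, x1) for dividing a large field of view
    into square tiles that can be reconstructed independently in parallel;
    tiles at the bottom/right edges may be smaller than the nominal size."""
    return [(y, x, min(y + tile, height), min(x + tile, width))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]
```

A 512x512 field with a 256-pixel tile side, for example, yields four tiles that can be dispatched to separate workers.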


Second Experiment with 316 Images


In a second experiment, the robustness of APIC and FPM reconstruction at addressing optical aberrations was examined. For this experiment, a total of 316 images were acquired including 52 normal brightfield measurements, 16 NA-matching measurements and 248 darkfield measurements. The nominal scanning pupil overlap rate of our dataset was approximately 87% and the final theoretical synthetic NA was equal to 0.75 when all darkfield measurements were used. This large degree of spectrum overlap was chosen to provide sufficient data redundancy for the best performance of FPM. APIC does not typically use such a large dataset. In our reconstruction, APIC only used the NA-matching and darkfield measurements, whereas FPM used the entire dataset, including the additional 52 brightfield measurements corresponding to illumination angles that were below the objective's acceptance angle. The second order Gauss-Newton FPM reconstruction algorithm was applied for reconstruction as it was found to be the most robust FPM reconstruction algorithm. A set of 6 parameters was used in the reconstruction of FPM.


The Siemens star target was deliberately defocused to assess how the two methods perform under different aberration levels. In this experiment, the sample was defocused to different levels and the defocus information was hidden from both methods.



FIGS. 14A, 14B, and 14C depict results of reconstructed complex fields under different levels of aberrations, according to embodiments. FIG. 14A depicts results of reconstructed complex fields under defocus of 0 μm. FIG. 14B depicts results of reconstructed complex fields under defocus of 16 μm. FIG. 14C depicts results of reconstructed complex fields under defocus of 32 μm. APIC reconstructed amplitude and phase are shown on the upper right of each group and highlighted by the dashed line. FPM reconstructed amplitude and phase are shown on the lower left of each group and highlighted by the dash dotted line.





From the results, for large aberrations whose phase standard deviation exceeded 1.1π on the pupil function (the case when the Siemens star target was defocused by 32 μm, and the maximal phase difference is approximately 3.8π), FPM may not have found the correct solution and the reconstructed images were different from the ground truth. At a lower aberration level, the amplitude reconstructions of FPM appeared to be close to the ideal case. However, the reconstructed phases were substantially different from the result when no defocus was introduced. In contrast, APIC was highly robust to different levels of aberrations. Although the contrast of APIC's reconstruction dropped under larger aberrations, it retrieved the correct aberrations and gave high-resolution complex field reconstructions that match the in-focus result. The measured resolution for both FPM and APIC is approximately 870 nm when the in-focus measurements were used, which is close to the 840 nm theoretical resolution.
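The quoted maximal phase difference can be reproduced with a short defocus calculation: a pure defocus Δz introduces a pupil phase proportional to sqrt(1 − (NA·ρ)²), so the edge-to-center difference is (2π/λ)·Δz·(1 − sqrt(1 − NA²)). The wavelength used below is an assumption for illustration, as it is not stated in this passage:

```python
import math

def max_defocus_phase(defocus_m: float, na: float, wavelength_m: float) -> float:
    """Maximum phase difference (radians) that a pure defocus introduces
    across a pupil of the given numerical aperture: the defocus phase at the
    pupil edge minus the phase at its center."""
    return (2.0 * math.pi / wavelength_m) * defocus_m * (1.0 - math.sqrt(1.0 - na ** 2))

# Assumed wavelength of 520 nm (an illustrative value); 32 um defocus, NA 0.25.
phase = max_defocus_phase(32e-6, 0.25, 520e-9)
```

With these assumed numbers the 32 μm defocus under an NA 0.25 objective gives roughly 3.9π of maximal phase difference, consistent with the approximately 3.8π quoted above.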


Implementation B with Complex Aberration


The APIC system in FIGS. 1A and 1B according to Implementation A was used, except the objective 134 was an obsolete Olympus objective (10× magnification, NA 0.25) designed to work with another type of tube lens in order to introduce complex aberration.


First Experiment

A human thyroid adenocarcinoma cell sample was imaged using the APIC imaging method and the FPM imaging method. FIG. 15 depicts images of a human thyroid adenocarcinoma cell sample that resulted from using the APIC imaging method and the FPM imaging method, according to embodiments. Sum of measurements denotes the summation of all 316 images acquired, which can be treated as the incoherent image under the same objective. The ground truth was acquired using an objective whose magnification power is 40 and whose NA equals 0.75. The NA of this 40× objective equals the theoretical synthetic NA of APIC and FPM. The square root of the summed image was calculated to match up with the amplitude reconstruction. The zoomed images of the highlighted boxes are shown on the lower right. The Zernike decompositions of retrieved aberrations using FPM and APIC are shown on the far-right side.


In FIG. 15, the reconstructed amplitude of FPM reconstruction appears distorted by reconstruction artifacts. In comparison, APIC reconstruction recovered the finer details that were in good correspondence with the image acquired using a 0.75 NA objective.


Second Experiment

A hematoxylin and eosin (H&E) stained breast cancer cell sample was imaged using the APIC imaging method and the FPM imaging method. Red, green and blue LEDs were used to acquire datasets for these three different channels, and APIC was then applied for the reconstruction. In this experiment, the sample was placed at a fixed height in the data acquisition process. As a result, different levels of defocus are shown in different channels lying on top of the chromatic aberrations of the objective. A 40× objective was used to acquire the ground truth image. The illumination angles were calibrated for the central patch (side length: 512 pixels) and then the angles were calculated for off-axis patches using geometry. These calibrated illumination angles were used as input parameter data in reconstruction. The reconstructed color image is shown in FIG. 16. The comparison of all three channels is discussed in Section VI (C).



FIG. 16 depicts constructed high-resolution images of hematoxylin and eosin (H&E) stained breast cancer cells resulting from using the APIC imaging method according to embodiments, with the FPM imaging method for comparison. For each group, FPM reconstructions are shown in the lower left and APIC reconstructions are shown in the upper right. The zoomed images of the amplitude and phase reconstructions are shown on the right of each box. Scale bar for the zoomed images: 5 μm.


From the zoomed images in FIG. 16, the reconstructions of FPM appeared noisy for the blue channel. FPM did not appear to work well with this weakly absorptive sample under a relatively high aberration level. FPM appeared unable to extract the aberration of the imaging system. As such, the color image generated by FPM appeared grainy and the high spatial frequency information was only partially recovered. The color reconstruction of APIC retained all high spatial frequency features, which closely matched the ground truth. This demonstrates that the aberration and complex field reconstruction of APIC appears to be more accurate as compared with FPM.


In the experiments discussed in this Section, an APIC imaging method of certain implementations was shown to be able to extract relatively large aberrations and synthesize a large FOV and higher resolution images using low NA objectives.


APIC methods advantageously avoid problems with conventional phase retrieval algorithms, such as being sensitive to optimization parameters and getting stuck in a local minimum. As APIC techniques directly solve for the complex field, they advantageously avoid a potentially time-consuming iterative process. APIC techniques may provide high-resolution and large field-of-view label-free imaging with robustness to aberrations. As an analytical method, APIC is insensitive to parameter selection and can compute the correct imaging field without getting trapped in local minima. APIC's analyticity is particularly important in a range of exacting applications, such as digital pathology, where even minor errors are not tolerable. APIC guarantees the correct solution while iterative methods cannot. Additionally, APIC brings new possibilities to label-free computational microscopy as it affords greater freedom in the use of engineered pupils for various imaging purposes. APIC techniques' use of the known spectrum to reconstruct the unknown spectrum may be adapted for other uses.


V. APIC System Calibration

In some embodiments, an APIC imaging method includes a calibration procedure, which may be performed once for a particular APIC imaging system. In some implementations, the calibration procedure determines the illumination angle of each tilted illumination for each intensity measurement. This information determines the area of the sample's spectrum being measured. The calibration procedure may use a circle-finding algorithm to find the exact illumination angle for the intensity measurements. In one example, brightfield measurements whose illumination angles are below the acceptance angle of the imaging system may be collected as well. These brightfield measurements can be used for geometrically calibrating the angles associated with the darkfield measurements.


In some embodiments, a calibration procedure includes optimizing illumination angles by maximizing the correlation between the real measurements and the images obtained from the reconstruction. For example, the calibration data can be used to reconstruct the complex field with the geometrically calculated darkfield LED illumination angle, and then the reconstructed complex field can be used to optimize the illumination angle by searching over a pre-defined finer grid. Once this is done, the calibrated illumination angles may be fixed and used in the reconstruction process. FIG. 20 depicts a calibration process that optimizes the illumination angles, according to an embodiment. By locating the center of the circle in the Fourier transform of the measurement, the corresponding illumination wavevector k_i can be extracted in the spatial frequency domain. Using the separation of the LEDs on the array and the estimated brightfield LED illumination angles, the darkfield LED illumination angles can be calculated using the geometry.
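The circle-center step can be illustrated with a minimal stand-in: threshold the spectrum's magnitude to a binary support and take its centroid. A real circle-finding algorithm is more involved; the synthetic disk below simply demonstrates recovering a known center:

```python
def circle_center(mask):
    """Centroid of a thresholded spectral support: a minimal stand-in for the
    circle-finding step that locates the pupil-circle center (and hence the
    illumination wavevector) in the Fourier transform of a measurement."""
    pts = [(y, x) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return sum(y for y, _ in pts) / n, sum(x for _, x in pts) / n

# Synthetic check: a binary disk of radius 20 centered at (70, 40) on a 128x128 grid.
mask = [[(y - 70) ** 2 + (x - 40) ** 2 <= 400 for x in range(128)]
        for y in range(128)]
cy, cx = circle_center(mask)
```

The recovered center, relative to the spectrum's origin, gives the illumination wavevector estimate that the finer grid search then refines.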


In one calibration example, an illumination device was used that included an LED ring attached on top of an LED array. The LED ring was used for the NA-matching measurements. This illumination device was mounted on a motorized translational stage for height adjustment. The motorized translational stage was configured such that the tilt and height exactly matched the illumination angle of the ring LEDs to the maximum acceptance angle of the objective of the optical system. To find the exact height, the illumination device was moved toward the sample being imaged until the LED ring produced a darkfield measurement. Then, the illumination device was moved away, gradually increasing the separation between the LED ring and the sample, until the image under the ring LED illumination transitioned from darkfield to brightfield. The transition point indicated the desired height. Once the height and tilt of the system were fixed, calibration data was acquired and the illumination angles for all LEDs in the LED ring were calibrated with the calibration data. During the calibration process, the optical system was configured with a high-NA objective to measure the relative intensity of each LED with a blank slide. The high-NA objective was used so that the incident light from any LED could directly enter the optical system. The calibration intensity measurements were normalized.


VI. APIC Reconstruction

This section presents the forward model and APIC reconstruction according to various embodiments. In certain embodiments, APIC reconstruction includes (1) reconstructing a complex field for each of a plurality of NA-matching measurements (e.g., intensity measurements acquired under tilted illumination where the illumination angles are equal to (match), or nearly match, a maximum acceptance angle of the collection optics of the imaging system acquiring the intensity measurements), (2) extracting a system aberration using the complex field spectrums and correcting for aberration in the complex field spectrums using the extracted system aberration, and (3) expanding the sample spectrum using darkfield measurements. Some detail regarding reconstructing a complex field spectrum for each of a plurality of NA-matching measurements is discussed in subsection B below. Some detail regarding extracting a system aberration using the complex field spectrums is discussed in subsection C. Some detail regarding expanding the sample spectrum using darkfield measurements is discussed in subsection D. In alternate embodiments, an APIC reconstruction method may omit the operations involving extracting and correcting for aberrations. In other alternate embodiments, an APIC reconstruction method may omit expanding the complex field using darkfield measurements and simply extract and correct for the system aberration. The extracted aberration may be used to correct other types of images, such as fluorescence images acquired using the same optical imaging system (e.g., objectives).


In certain embodiments, an APIC reconstruction method includes (1) reconstructing complex field spectrums using NA-matching measurements, (2) extracting a system aberration and correcting for aberrations in the reconstructed spectrums, and (3) expanding the known sample spectrum using darkfield measurements. In some embodiments, Kramers-Kronig relations may be employed to reconstruct the complex fields from the NA-matching measurements. Using the reconstructed fields, the system aberration can be retrieved. The extracted aberration may then be applied to correct the currently reconstructed aberrated fields and the subsequent darkfield-associated field reconstructions. The aberration-corrected reconstructed complex field can serve as initial a priori knowledge, referred to as the known field, with its Fourier transform as the known spectrum. To achieve high-resolution imaging, darkfield measurements may be incorporated and used to expand the reconstructed sample spectrum in an orderly way. At each sub-step, a new spectrum may be reconstructed corresponding to an unused darkfield measurement whose illumination angle is the smallest among all remaining unused measurements. This newly reconstructed spectrum, together with the original reconstructed spectrum, can serve as the new a priori knowledge in subsequent reconstructions. To recover the field associated with one darkfield measurement, the method works in the spatial frequency space and forms a linear equation with respect to the unknown spectrum. By solving this linear equation, a closed-form solution of the unknown spectrum sampled in this darkfield measurement can be obtained. Adding this newly reconstructed spectrum effectively expands the sample spectrum coverage to achieve higher resolution. After all measurements are reconstructed, a high-resolution, aberration-free complex field reconstruction may be obtained.


In this section, APIC techniques are described with reference to system components of APIC system 100 in FIGS. 1A-1B. It will be understood that these aspects may apply to system components of other embodiments, such as those illustrated in FIGS. 7, 8A-8B, 9, 10, 11, and 12. Also, the described aspects may be practiced without one or more of the specific details set forth in this section.


A. The Forward Model

When a thin specimen is illuminated by a plane wave emitted by the ith (i=1, 2, . . . , n) light-emitting diode (LED), or another illumination source, with a transverse k-vector, ki, and then imaged by an APIC imaging system, the modulated sample spectrum is given by:

$$\hat{S}_i(\mathbf{k}) = \hat{O}(\mathbf{k}-\mathbf{k}_i)\,H(\mathbf{k}) \qquad \text{(Eqn. 1)}$$
where H is the coherent transfer function (CTF) associated with the optical system, Ô is the original sample's spectrum, Ŝi is the ith sampled spectrum, and k is the 2D (transverse) spatial frequency vector. Reciprocally, the spatial coordinate on the sample plane is denoted by x. Without loss of generality, it is assumed that Ô(0) is a real number. For a thin specimen, it can further be assumed that the majority of the incident illumination light does not change its original direction. That is, at any position x:

$$\left|\left(\mathcal{F}^{-1}\left[\hat{O}(\mathbf{k})\,\delta(\mathbf{k})\right]\right)(\mathbf{x})\right| > \left|\left(\mathcal{F}^{-1}\left[\hat{O}(\mathbf{k})\left(1-\delta(\mathbf{k})\right)\right]\right)(\mathbf{x})\right| \qquad \text{(Eqn. 2)}$$
where ℱ⁻¹ is the inverse Fourier transform, [ℱ⁻¹(·)](x) means the inverse Fourier transform is evaluated at x, |·| gives the modulus of a complex number, and δ(k) is the Kronecker delta:

$$\delta(\mathbf{k}) = \begin{cases} 1, & \mathbf{k} = 0,\\ 0, & \text{otherwise} \end{cases} \qquad \text{(Eqn. 3)}$$
The physical meaning of this assumption is that the ballistic light exiting from the sample plane is dominant over the scattering light almost everywhere within the field of view.


As only the intensity of the light field is directly measured, the signal measured by the radiation detector of the APIC system is:

$$I_i(\mathbf{x}) = \left|\left[\mathcal{F}^{-1}\left(\hat{S}_i\right)\right](\mathbf{x})\right|^2 = \left|S_i(\mathbf{x})\right|^2 \qquad \text{(Eqn. 4)}$$
where Ii is the ith intensity image captured when lighting up the ith LED, and Si denotes the ith sampled field in real space (the inverse Fourier transform of Ŝi).


This intensity measurement is insensitive to any phase shift applied to the inverse Fourier transform:

$$\left|\left[\mathcal{F}^{-1}\left(\hat{S}_i\right)\right](\mathbf{x})\; e^{j\xi(\mathbf{x})}\right|^2 = \left|\left[\mathcal{F}^{-1}\left(\hat{S}_i\right)\right](\mathbf{x})\right|^2 = I_i(\mathbf{x}) \qquad \text{(Eqn. 5)}$$
where ξ(x) stands for an arbitrary phase function and j is the unit imaginary number. Thus, a particular phase ramp can be selected, namely −2πki·x, which effectively shifts Ŝi along the opposite direction of ki.
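This shift property can be checked numerically in discrete form. The sketch below is a 1D analog with an assumed grid, where the ramp frequency ki is an integer number of cycles across the window; it confirms that the ramp translates the spectrum while leaving the intensity untouched:

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # stand-in field S_i(x)

ki = 3                                            # ramp frequency, grid units
x = np.arange(N)
s_prime = s * np.exp(-2j * np.pi * ki * x / N)    # apply the phase ramp

# The ramp shifts the spectrum: F{S_i'}(k) = F{S_i}(k + k_i)
shifted = np.fft.fft(s_prime)
expected = np.roll(np.fft.fft(s), -ki)
```

The intensity |s_prime|² equals |s|² point-wise, which is exactly why the measurement cannot distinguish the two fields (Eqn. 5).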


Using the properties of the Fourier transform when applying this special phase ramp, the measured intensity image is identical to the intensity of the inverse Fourier transform of the following (translated) spectrum:

$$\hat{S}'_i(\mathbf{k}) = \left[\mathcal{F}\left(S'_i\right)\right](\mathbf{k}) = \left[\hat{S}_i * \mathcal{F}\left(e^{-j 2\pi \mathbf{k}_i\cdot\mathbf{x}}\right)\right](\mathbf{k}) = \hat{O}(\mathbf{k})\,H(\mathbf{k}+\mathbf{k}_i) \qquad \text{(Eqn. 6)}$$
where · is the dot product, S′i(x) := Si(x) e−2πjki·x, and * denotes convolution. This forward model suggests that different components of the sample's spectrum can be measured with different illumination angles. By increasing the illumination angle, higher spatial frequency information can effectively be sampled; such information would normally be inaccessible due to the limitation of the numerical aperture (NA) of the optical system.
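A minimal numerical sketch of this forward model (Eqn. 6) follows. The grid size, cutoff radius, and random test object are assumed purely for illustration, and the aberration-free CTF of Eqn. 7 is used; it shows a darkfield tilt sampling spectral content beyond the native passband.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)

# Assumed test object and its (centered) spectrum O_hat
obj = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
O_hat = np.fft.fftshift(np.fft.fft2(obj))

# Centered frequency grid, in pixel units
k = np.arange(N) - N // 2
KU, KV = np.meshgrid(k, k, indexing="ij")

NA_RADIUS = 10.0  # assumed cutoff radius k_NA (aberration-free CTF)

def ctf(ku, kv):
    """Circular coherent transfer function with phi(k) = 0."""
    return (np.hypot(ku, kv) <= NA_RADIUS).astype(complex)

# Eqn. 6: tilted illumination k_i shifts the CTF across the object spectrum,
# so a darkfield tilt (|k_i| > NA) samples frequencies beyond the passband.
k_i = np.array([15.0, 0.0])                       # a darkfield illumination
S_hat_i = O_hat * ctf(KU + k_i[0], KV + k_i[1])   # sampled spectrum (Eqn. 6)

sampled = np.abs(S_hat_i) > 0   # support is the pupil re-centered at -k_i
```

The sampled region is centered at −k_i and reaches spatial frequencies up to |k_i| + NA, which is the resolution-extension mechanism the darkfield measurements exploit.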


The coherent transfer function (CTF) of the optical system of the APIC imaging system can be assumed to be circular with an NA-dependent radius, kNA, and the phase ϕ of the CTF depicts the system's aberration. For simplicity, the term "NA" or "system NA" used in this section represents this radius kNA. This provides:

$$H(\mathbf{k}) = \mathrm{Circ}_{\mathrm{NA}}(\mathbf{k})\, e^{j\phi(\mathbf{k})} = \begin{cases} e^{j\phi(\mathbf{k})}, & \text{if } |\mathbf{k}| \le \mathrm{NA},\\ 0, & \text{otherwise} \end{cases} \qquad \text{(Eqn. 7)}$$
where ϕ(k) stands for the system's aberration, and CircNA(k) := 𝟙(|k| ≤ NA) is an indicator function that is equal to one when the modulus of the k-vector is at or below the system NA and is zero otherwise.


The sample spectrum Ô(k) can be further decomposed into its amplitude and phase. Together with Eqn. 7, Eqn. 6 can be written as:

$$\hat{S}'_i(\mathbf{k}) = \hat{O}(\mathbf{k})\,H(\mathbf{k}+\mathbf{k}_i) = \hat{A}(\mathbf{k})\, e^{j\hat{\alpha}(\mathbf{k})}\, \mathrm{Circ}_{\mathrm{NA}}(\mathbf{k}+\mathbf{k}_i)\, e^{j\phi(\mathbf{k}+\mathbf{k}_i)} \qquad \text{(Eqn. 8)}$$
where Â(k) := |Ô(k)| ∈ ℝ is the amplitude of the sample's spectrum and α̂(k) := arg Ô(k) ∈ ℝ is its phase (the operator arg gives the argument of a complex number).


In the following subsections B-D, based on Eqn. 6, a closed-form solution to the sample's spectrum, Ô(k), is obtained, and a technique is shown for analytically retrieving the aberration, ϕ(k), of an APIC imaging system. This methodology uses NA-matching measurements (|ki| = NA) and darkfield measurements (|ki| > NA). For simplicity, it is assumed that |ki| ≥ NA and that the ki are sequentially ordered so that |ki| ≤ |ki+1|.


B. Complex Field Reconstruction for NA-Matching Angle Illumination

In certain embodiments, an APIC reconstruction method includes reconstructing the complex field spectrums using the NA-matching intensity measurements acquired when the illumination angles are equal to (match), or nearly match, the maximal acceptance angle of the optical system in the Fourier domain. In some cases, the Kramers-Kronig methodology may be employed to reconstruct the complex field spectrums from the NA-matching measurements. However, the Kramers-Kronig method does not take into account system aberration. In the complex field reconstruction method discussed in this subsection, aberration is taken into account, which is advantageous over existing techniques that do not consider aberration in their complex field reconstruction. The reconstruction can be performed by employing Eqn. 32.


In this subsection, only the first n0 measurements whose k-vectors ki satisfy Eqn. 9 below are considered:

$$|\mathbf{k}_i| = \mathrm{NA}, \quad i = 1, 2, \ldots, n_0, \qquad \text{where } n_0 := \max\left\{\, i : |\mathbf{k}_i| = \mathrm{NA} \,\right\} \qquad \text{(Eqn. 9)}$$
These measurements are the NA-matching measurements used in this subsection. A signal is constructed whose Fourier transform is one-sided (a term defined later in this subsection), and it is shown that this structure allows the imaginary part of the signal to be calculated from its real part, based on the fact that the Fourier transform is a linear operator. This constructed signal can then be mapped back to the desired complex field. The approach also generalizes to higher dimensional spaces. Although this methodology allows for analytical complex field reconstruction, both the desired sample field and the aberration function of the imaging system are entangled in the reconstructed field.


The goal is to compute the imaginary part of a complex signal from its real part using, e.g., the Kramers-Kronig relations. To do so, the real part must be known first. However, the APIC system only measures the intensity (squared modulus) of a complex field, which is neither its real nor its imaginary part. To resolve this mismatch, a nonlinear transformation is applied that maps the intensity and phase of a complex number to the real and imaginary parts of its output, respectively. This can be done by taking the logarithm of a nonzero complex number. Applying this to the complex field S′i(x) in a point-wise manner yields:

$$\log\left[S'_i(\mathbf{x})\right] = \log\left[\left|S'_i(\mathbf{x})\right|\right] + j\,\arg\left[S'_i(\mathbf{x})\right] \qquad \text{(Eqn. 10)}$$
The nonzero condition is guaranteed by Eqn. 2. Note that the first term on the right-hand side is purely real and the second term is purely imaginary. Ii(x) are the intensity measurements.


As the intensity Ii(x) = |Si(x)|² = |Si(x)·e−2πjki·x|² = |S′i(x)|² is measured, everything is known about the real part of log S′i(x). What remains is to verify that its imaginary part can be reconstructed using this known real part.


Below it is shown that the Fourier transform of this transformed signal is “one-sided.” Such structure allows one to perform the desired reconstruction.


For any positive integer m ∈ ℤ⁺, a function g: ℝᵐ → ℂ is said to be one sided if there exists a nonzero vector e ∈ ℝᵐ, e ≠ 0, such that its Fourier transform ĝ satisfies:

$$\hat{g}(\mathbf{k}) = 0, \quad \forall\, \mathbf{k} \in \mathbb{R}^m \ \text{s.t.}\ \mathbf{k}\cdot\mathbf{e} < 0 \qquad \text{(Eqn. 11)}$$
Correspondingly, g is said to be strictly one sided if ĝ(k) = 0, ∀k ∈ ℝᵐ s.t. k·e ≤ 0.


To show that the transformed signal is one sided, the "offset" field, R(x), is factored out. R(x) is the part of the field that does not change its direction:

$$R(\mathbf{x}) = \left(\mathcal{F}^{-1}\left[\hat{S}'_i(\mathbf{k})\,\delta(\mathbf{k})\right]\right)(\mathbf{x}) = \left(\mathcal{F}^{-1}\left[\hat{O}(\mathbf{k})\,H(\mathbf{k}+\mathbf{k}_i)\,\delta(\mathbf{k})\right]\right)(\mathbf{x}) = r\,e^{j\phi(\mathbf{k}_i)} \qquad \text{(Eqn. 12)}$$

where r is real. The "offset" field is then factored out from S′i(x):

$$\frac{S'_i(\mathbf{x})}{R(\mathbf{x})} = \frac{S'_i(\mathbf{x})}{r\,e^{j\phi(\mathbf{k}_i)}} = \frac{S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}}{r} \qquad \text{(Eqn. 13)}$$

It is seen that ϕ(ki) serves as a phase offset and r can be treated as a normalization factor. Note that r and ϕ(ki) are unknowns, and only the intensity of S′i(x) is measured. Thus, the discussion below focuses primarily on the "offset" field, S′i(x)e−jϕ(ki), because adding a phase offset to a complex field does not change its intensity.


Applying the logarithm transformation to the "offset" field, S′i(x)e−jϕ(ki):

$$\log\left[S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}\right] = \log\left[\frac{S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}}{r}\right] + \log(r) = \log\left[1 + \frac{S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)} - r}{r}\right] + \log(r) \qquad \text{(Eqn. 14)}$$
With the assumption that the majority of the light does not change its direction (Eqn. 2), the "offset" field is larger than the sample-modulated field: r = |R(x)| > |S′i(x) − R(x)| = |S′i(x) − r e^{jϕ(ki)}|. So, r > |S′i(x) − r e^{jϕ(ki)}| |e^{−jϕ(ki)}| = |S′i(x) e^{−jϕ(ki)} − r|. The first term on the right-hand side of Eqn. 14 can therefore be written as a convergent Taylor series (which will make it easier to analyze the structure of its Fourier transform, as discussed below):

$$T(\mathbf{x}) := \log\left[1 + \frac{S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)} - r}{r}\right] = \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m}\,\frac{\left[S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)} - r\right]^m}{r^m} \qquad \text{(Eqn. 15)}$$

Note that this applies to all x. For simplicity, Δi(x) is defined as:

$$\Delta_i(\mathbf{x}) := S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)} - r = \left[S'_i(\mathbf{x}) - r\,e^{j\phi(\mathbf{k}_i)}\right] e^{-j\phi(\mathbf{k}_i)} = \left[S'_i(\mathbf{x}) - R(\mathbf{x})\right] e^{-j\phi(\mathbf{k}_i)} \qquad \text{(Eqn. 16)}$$

Eqn. 14 can then be rewritten as:

$$\log\left[S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}\right] = \log(r) + T(\mathbf{x}) = \log(r) + \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m}\,\frac{\Delta_i^m(\mathbf{x})}{r^m} \qquad \text{(Eqn. 17)}$$

The Fourier transform of Δi(x) yields:

$$\hat{\Delta}_i(\mathbf{k}) := \left[\mathcal{F}(\Delta_i)\right](\mathbf{k}) = \left[\mathcal{F}\left(\left[S'_i(\mathbf{x}) - R(\mathbf{x})\right] e^{-j\phi(\mathbf{k}_i)}\right)\right](\mathbf{k}) = \hat{S}'_i(\mathbf{k})\left[1 - \delta(\mathbf{k})\right] e^{-j\phi(\mathbf{k}_i)} \qquad \text{(Eqn. 18)}$$

For x ∈ ℝ², let f1: ℝ² → ℂ and f2: ℝ² → ℂ be two complex l² functions with Fourier transforms f̂1(k) and f̂2(k), respectively. Assume there exists a (common) nonzero vector e ∈ ℝ², e ≠ 0, such that:

$$\hat{f}_1(\mathbf{k}) = \hat{f}_2(\mathbf{k}) = 0, \quad \forall\, \mathbf{k}\ \text{s.t.}\ \mathbf{k}\cdot\mathbf{e} < 0 \qquad \text{(Eqn. 19)}$$
Then their product f′(x) = f1(x) f2(x) is one sided. Furthermore, if f1 and f2 are strictly one sided (with the same e), their product is also strictly one sided.


To prove this, the Fourier transform is taken of both sides, which yields:

$$\hat{f}'(\mathbf{k}) = \left[\mathcal{F}(f')\right](\mathbf{k}) = \left[\mathcal{F}(f_1 f_2)\right](\mathbf{k}) = \left(\left[\mathcal{F}(f_1)\right] * \left[\mathcal{F}(f_2)\right]\right)(\mathbf{k}) = \left[\hat{f}_1 * \hat{f}_2\right](\mathbf{k}) \qquad \text{(Eqn. 20)}$$
For k0 ∈ ℝ² such that k0·e < 0 (e ≠ 0), it is shown:

$$\hat{f}'(\mathbf{k}_0) = \left[\hat{f}_1 * \hat{f}_2\right](\mathbf{k}_0) = \int d\mathbf{k}'\, \hat{f}_1(\mathbf{k}')\,\hat{f}_2(\mathbf{k}_0-\mathbf{k}') = \int_{\mathbf{k}'\cdot\mathbf{e}<0} d\mathbf{k}'\, \hat{f}_1(\mathbf{k}')\,\hat{f}_2(\mathbf{k}_0-\mathbf{k}') + \int_{\mathbf{k}'\cdot\mathbf{e}\ge 0} d\mathbf{k}'\, \hat{f}_1(\mathbf{k}')\,\hat{f}_2(\mathbf{k}_0-\mathbf{k}') \qquad \text{(Eqn. 21)}$$
If k′·e < 0, then f̂1(k′) = 0. If k′·e ≥ 0, then (k0 − k′)·e = k0·e − k′·e < 0. As f̂1 and f̂2 are both one sided and share the same e:

$$\hat{f}'(\mathbf{k}_0) = \int_{\mathbf{k}'\cdot\mathbf{e}<0} d\mathbf{k}'\, \hat{f}_1(\mathbf{k}')\,\hat{f}_2(\mathbf{k}_0-\mathbf{k}') + \int_{\mathbf{k}'\cdot\mathbf{e}\ge 0} d\mathbf{k}'\, \hat{f}_1(\mathbf{k}')\,\hat{f}_2(\mathbf{k}_0-\mathbf{k}') = 0, \quad \forall\, \mathbf{k}_0\ \text{s.t.}\ \mathbf{k}_0\cdot\mathbf{e} < 0 \qquad \text{(Eqn. 22)}$$
That is, f′ is one sided. The proof for f1 and f2 being strictly one sided follows the same structure: the integration is split over k′·e ≤ 0 and k′·e > 0, and the strict version follows with the same technique.
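The product lemma can be checked numerically in a 1D discrete analog (e = +1, so "one sided" means the FFT vanishes on the negative-frequency bins). The signal length and band limits below are assumed so that the circular convolution of the two spectra does not wrap into the negative bins:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)

def one_sided(n_pos):
    """Signal whose FFT is nonzero only on bins 0..n_pos-1 (nonneg. freqs)."""
    spec = np.zeros(N, complex)
    spec[:n_pos] = rng.standard_normal(n_pos) + 1j * rng.standard_normal(n_pos)
    return np.fft.ifft(spec)

f1 = one_sided(N // 4)      # spectra confined to the first quarter of bins,
f2 = one_sided(N // 4)      # so their convolution stays clear of negative bins

# Product in x corresponds to convolution of the spectra in k (Eqn. 20)
prod_spec = np.fft.fft(f1 * f2)

neg_bins = np.arange(N // 2 + 1, N)   # indices holding negative frequencies
```

As the lemma predicts, the product's spectrum vanishes on all negative-frequency bins while remaining nonzero on the nonnegative side.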


It can now be seen that Δi(x) is one sided. To show this, e = −ki is chosen. As the illumination angle is matched with the NA (Eqn. 9), it can be verified that for arbitrary k such that k·e < 0:

$$|\mathbf{k}+\mathbf{k}_i| = \sqrt{|\mathbf{k}|^2 + |\mathbf{k}_i|^2 + 2\,\mathbf{k}\cdot\mathbf{k}_i} = \sqrt{|\mathbf{k}|^2 + |\mathbf{k}_i|^2 - 2\,\mathbf{k}\cdot\mathbf{e}} > \sqrt{|\mathbf{k}_i|^2} = \mathrm{NA} \qquad \text{(Eqn. 23)}$$
The last equality holds because of the interest in the NA-matching angle illumination condition (Eqn. 9) in this subsection. So, H(k + ki) = 0 for all k subject to k·e = k·(−ki) < 0. It can then be proved that Δi is one sided:

$$\hat{\Delta}_i(\mathbf{k}) = \hat{S}'_i(\mathbf{k})\left[1-\delta(\mathbf{k})\right] e^{-j\phi(\mathbf{k}_i)} = \hat{O}(\mathbf{k})\,H(\mathbf{k}+\mathbf{k}_i)\left[1-\delta(\mathbf{k})\right] e^{-j\phi(\mathbf{k}_i)} = 0, \quad \forall\, \mathbf{k}\ \text{s.t.}\ \mathbf{k}\cdot(-\mathbf{k}_i) < 0 \qquad \text{(Eqn. 24)}$$

Furthermore, for k·e = −k·ki = 0, i.e., k ⊥ ki, the above two equations still hold when k ≠ 0. It only remains to consider the special case, namely k = 0. At k = 0:

$$\hat{\Delta}_i(0) = \hat{S}'_i(0)\left[1-\delta(0)\right] e^{-j\phi(\mathbf{k}_i)} = \hat{S}'_i(0)\,(1-1)\, e^{-j\phi(\mathbf{k}_i)} = 0 \qquad \text{(Eqn. 25)}$$
That is, Δi is strictly one sided. Using Eqn. 19, it can be concluded that the Taylor series (the last term on the right-hand side of Eqn. 17) is strictly one sided by induction.


If one wanted to use Si directly instead, the "offset" field should be modified to r e^{2πj ki·x + jϕ(ki)}. This is because the original model uses tilted illumination, so the unchanged light field is associated with an incident angle. With such a modification, one can verify that an identical result is obtained.


Here, it has been proved that the transformed signal log[S′i(x)e−jϕ(ki)] is one sided (it is easy to see that log[S′i(x)] is also one sided, as e−jϕ(ki) is a constant). Below it is shown that such special structure allows one to reconstruct the imaginary part from the real part of the signal.


A modified version of Eqn. 10 gives:

$$\begin{aligned} \log\left[S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}\right] &= \log\left[\left|S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}\right|\right] + j\,\arg\left[S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)}\right] \\ &= \log\left[\left|S'_i(\mathbf{x})\right|\right] + j\left(\arg\left[S'_i(\mathbf{x})\right] - \phi(\mathbf{k}_i)\right) \\ &= \log(r) + \Re\left[T(\mathbf{x})\right] + j\,\Im\left[T(\mathbf{x})\right] \end{aligned} \qquad \text{(Eqn. 26)}$$
where the last equality follows from Eqn. 17, ℜ(·) denotes the real part of a complex number, and ℑ(·) denotes the imaginary part. The Fourier transform of a real signal is conjugate symmetric, and that of a purely imaginary signal is conjugate antisymmetric, and so:

$$\left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}) = \left[\mathcal{F}\left(\log|S'_i|\right)\right]^*(-\mathbf{k}) \qquad \text{(Eqn. 27)}$$

and

$$\left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right](\mathbf{k}) = -\left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right]^*(-\mathbf{k}) \qquad \text{(Eqn. 28)}$$

where the superscript * denotes the complex conjugate. Since it has already been proved that log[S′i(x)e−jϕ(ki)] is one sided:

$$\left[\mathcal{F}\left(\log\left[S'_i\,e^{-j\phi(\mathbf{k}_i)}\right]\right)\right](\mathbf{k}) = \left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}) + \left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right](\mathbf{k}) = 0 \qquad \text{(Eqn. 29)}$$

$$\left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right](\mathbf{k}) = -\left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}), \quad \forall\, \mathbf{k}\ \text{s.t.}\ \mathbf{k}\cdot(-\mathbf{k}_i) < 0$$
For k·(−ki) > 0, symmetry can be used to conclude:

$$\left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right](\mathbf{k}) = \left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}), \quad \forall\, \mathbf{k}\ \text{s.t.}\ \mathbf{k}\cdot(-\mathbf{k}_i) > 0 \qquad \text{(Eqn. 30)}$$
As T(x) is strictly one sided, its Fourier transform is zero for all k such that k ⊥ ki. Moreover, the Fourier transform of the constant log(r) is a real (Dirac delta) function centered at zero; in other words, the Fourier transform of log(r) has no imaginary part. So:

$$\left[\mathcal{F}\left(j\left[\arg(S'_i) - \phi(\mathbf{k}_i)\right]\right)\right](\mathbf{k}) = 0, \quad \forall\, \mathbf{k}\ \text{s.t.}\ \mathbf{k}\cdot(-\mathbf{k}_i) = 0 \qquad \text{(Eqn. 31)}$$
By collecting all of the pieces from the above three equations and noticing that Ii(x) = |S′i(x)|²:

$$\left[\mathcal{F}\left(\log\left[S'_i\,e^{-j\phi(\mathbf{k}_i)}\right]\right)\right](\mathbf{k}) = \begin{cases} 2\left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}) = \left[\mathcal{F}\left(\log I_i\right)\right](\mathbf{k}), & \mathbf{k}\cdot\mathbf{k}_i < 0,\\[4pt] \left[\mathcal{F}\left(\log|S'_i|\right)\right](\mathbf{k}) = \tfrac{1}{2}\left[\mathcal{F}\left(\log I_i\right)\right](\mathbf{k}), & \mathbf{k}\cdot\mathbf{k}_i = 0,\\[4pt] 0, & \mathbf{k}\cdot\mathbf{k}_i > 0 \end{cases} \qquad \text{(Eqn. 32)}$$
That is, the complex field can be reconstructed using its real part, log|S′i|. The desired field (up to a constant phase offset) can then be restored by taking the inverse Fourier transform and applying the exponential function point-wise:

$$S'_i(\mathbf{x})\,e^{-j\phi(\mathbf{k}_i)} = \exp\left[\left(\mathcal{F}^{-1}\left[\mathcal{F}\left(\log\left[S'_i\,e^{-j\phi(\mathbf{k}_i)}\right]\right)\right]\right)(\mathbf{x})\right] \qquad \text{(Eqn. 33)}$$
If there is no aberration in the system, ϕ(k) = 0, Ŝ′i(k) = Ô(k) CircNA(k + ki), and the phase offset ϕ(ki) = 0.
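The chain from Eqn. 10 through Eqn. 33 can be exercised end-to-end in a 1D discrete analog. In the sketch below, the ground-truth field is constructed so that its logarithm has a strictly one-sided spectrum (the property Eqns. 15-25 establish for an NA-matching measurement; the analog uses e = +1), and the field is then recovered from the intensity alone via the masks of Eqn. 32 and the exponentiation of Eqn. 33. All sizes, amplitudes, and the value of r are assumed for the demonstration:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)

# Ground truth: log(s_true) = log(r) + T(x), with T strictly one sided
# (spectrum nonzero only on strictly positive frequency bins)
T_hat = np.zeros(N, complex)
T_hat[1 : N // 2] = 0.05 * (rng.standard_normal(N // 2 - 1)
                            + 1j * rng.standard_normal(N // 2 - 1))
T = np.fft.ifft(T_hat) * N          # small complex perturbation
r = 1.3
s_true = r * np.exp(T)              # the field the camera cannot see directly

# The measurement: intensity only (Eqn. 4); note log|s| = log(I) / 2
I = np.abs(s_true) ** 2

# Closed-form reconstruction: apply the Eqn. 32 masks, then Eqn. 33
L_hat = np.fft.fft(np.log(I))
G_hat = np.zeros(N, complex)
G_hat[1 : N // 2] = L_hat[1 : N // 2]   # "k . k_i < 0" side: full weight
G_hat[0] = 0.5 * L_hat[0]               # k = 0: half weight
#                                         remaining (negative) bins: zero
s_rec = np.exp(np.fft.ifft(G_hat))
```

Because the construction satisfies the one-sidedness assumption exactly, the recovery is exact up to floating-point error; with real data the same masks are applied to the 2D spectrum of log Ii for each NA-matching measurement, with the half-weight line taken on k ⊥ ki.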


C. Aberration Extraction

This subsection discusses the principle underlying a procedure for extracting the system aberration from the complex field spectrums reconstructed from the NA-matching intensity measurements, according to certain embodiments.


The previous subsection ends with the reconstruction of S′i(x)e−jϕ(ki). Looking at its spectrum:

$$\hat{S}'_i(\mathbf{k})\,e^{-j\phi(\mathbf{k}_i)} = \hat{O}(\mathbf{k})\,H(\mathbf{k}+\mathbf{k}_i)\,e^{-j\phi(\mathbf{k}_i)} = \hat{A}(\mathbf{k})\,e^{j\hat{\alpha}(\mathbf{k})}\,\mathrm{Circ}_{\mathrm{NA}}(\mathbf{k}+\mathbf{k}_i)\,e^{j\phi(\mathbf{k}+\mathbf{k}_i) - j\phi(\mathbf{k}_i)} \qquad \text{(Eqn. 34)}$$
As can be seen from Eqn. 34, the sample's spectrum and the system's aberration function are superimposed in the reconstruction. For a practical imaging system with aberration, this is not yet a reconstruction of a clean sample spectrum. Instead it is an aberrated version of the desired spectrum.


Without correction, aberrations of an imaging system, including defocus due to the sample's height unevenness, can degrade reconstruction quality. This can be tackled by working in the spatial frequency domain (that is, working with the spectrum). As shown below, the aberration of the system is analytically solvable by considering the phases of the reconstructed spectrums.


To extract the aberration from the reconstructed spectrums, the procedure separates the contribution from the sample itself and the imaging system. This can be achieved by considering the phases of multiple reconstructed spectrums.


First, the overlap of two spectrums is defined. For two reconstructed spectrums Ŝ′i(k) and Ŝ′l(k), a set 𝒞il is defined as:

$$\mathcal{C}_{il} := \left\{\, \mathbf{k} \in \mathbb{R}^2 \;\middle|\; H(\mathbf{k}+\mathbf{k}_i)\,H(\mathbf{k}+\mathbf{k}_l) \ne 0 \,\right\} \qquad \text{(Eqn. 35)}$$
These two spectrums are overlapped if the set 𝒞il is nonempty (𝒞il ≠ ∅), and the overlap between Ŝ′i(k) and Ŝ′l(k) is 𝒞il.


Consider two spectrums Ŝ′i(k) and Ŝ′l(k) with (nonempty) overlap 𝒞il ≠ ∅. The phase difference within the overlapped region 𝒞il gives:

$$\begin{aligned} \arg\left[\hat{S}'_i(\mathbf{k})\,e^{-j\phi(\mathbf{k}_i)}\right] - \arg\left[\hat{S}'_l(\mathbf{k})\,e^{-j\phi(\mathbf{k}_l)}\right] &= \left[\hat{\alpha}(\mathbf{k}) + \phi(\mathbf{k}+\mathbf{k}_i) - \phi(\mathbf{k}_i)\right] - \left[\hat{\alpha}(\mathbf{k}) + \phi(\mathbf{k}+\mathbf{k}_l) - \phi(\mathbf{k}_l)\right] \\ &= \phi(\mathbf{k}+\mathbf{k}_i) - \phi(\mathbf{k}_i) - \phi(\mathbf{k}+\mathbf{k}_l) + \phi(\mathbf{k}_l) \\ &= \left[\phi(\mathbf{k}+\mathbf{k}_i) - \phi(\mathbf{k}+\mathbf{k}_l)\right] - \left[\phi(\mathbf{k}_i) - \phi(\mathbf{k}_l)\right], \quad \text{if } \mathbf{k} \in \mathcal{C}_{il} \end{aligned} \qquad \text{(Eqn. 36)}$$

The first term, ϕ(k+ki) − ϕ(k+kl), on the right-hand side is referred to as the "aberration difference," and the last term, ϕ(ki) − ϕ(kl), is referred to as "the offset." When considering the phase difference of the two spectrums, the contribution from the sample spectrum cancels out and the difference depends solely on the system's aberration. The remaining phase difference is linear with respect to the aberration function. As such, a linear operator can be constructed that maps the aberration into phase differences of two overlapping spectrums. To do so, the 2D spectrum is rearranged into a vector.
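The cancellation in Eqn. 36 can be verified numerically. The 1D analog below assumes a small, smooth aberration (so no 2π phase wrapping occurs) and two illustrative illumination shifts with overlapping pupils; in the overlap, the measured phase difference matches the aberration difference minus the offset, independent of the random object:

```python
import numpy as np

N = 33
k = np.arange(N) - N // 2          # centered 1D frequency grid
rng = np.random.default_rng(0)

NA = 8                                             # assumed pupil radius
phi = lambda q: 0.3 * np.sin(2 * np.pi * q / N)    # assumed smooth aberration

def H(q):
    """1D analog of the aberrated CTF (Eqn. 7): circular support, phase phi."""
    return np.where(np.abs(q) <= NA, np.exp(1j * phi(q)), 0)

O_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # random object

ki, kl = 8, 4                       # two illustrative shifts with overlap
S_i = O_hat * H(k + ki) * np.exp(-1j * phi(ki))   # reconstructed spectra in
S_l = O_hat * H(k + kl) * np.exp(-1j * phi(kl))   # the Eqn. 34 form

overlap = (np.abs(k + ki) <= NA) & (np.abs(k + kl) <= NA)
measured = np.angle(S_i[overlap] * np.conj(S_l[overlap]))   # object cancels
predicted = (phi(k + ki) - phi(k + kl))[overlap] - (phi(ki) - phi(kl))
```

Stacking such measured differences over all overlapping pairs yields the linear system in the aberration that the difference operator below formalizes.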



FIG. 17 is a diagram depicting an original spectrum 1711 of the sample, Ô(k), and a phase difference between two individual spectrums 1712, 1714 with overlap, according to an implementation. The original spectrum 1711 includes six individual spectrums corresponding to a plurality of six (6) NA-matching illumination angles. For aberration extraction, one or more illumination sources (e.g., LEDs) are used whose illumination k-vector matches the maximal receiving angle of the imaging system, |ki| = NA. In the illustrated example, the illumination sources are LEDs. For illustration purposes, l=i+1 is selected as the LEDs are lit up along an annulus. The original spectrum 1711 includes a plurality of six overlapping spectrums including a first constructed spectrum 1712 and a second constructed spectrum 1714 with overlap. Ô(k) stands for the original sample's spectrum, Ŝ′i(k) is the acquired spectrum under the ith illumination, and H(k) is the coherent transfer function (CTF). Illustration 1720 depicts the aberration of first constructed spectrum 1712 and illustration 1721 depicts the aberration of second constructed spectrum 1714. Illustration 1730 depicts the sample's phase of first constructed spectrum 1712 and illustration 1731 depicts the sample's phase of second constructed spectrum 1714. Illustration 1750 depicts the aberration of first constructed spectrum 1712 at the overlap between the first constructed spectrum 1712 and second constructed spectrum 1714. Illustration 1751 depicts the aberration of second constructed spectrum 1714 at the overlap between the first constructed spectrum 1712 and second constructed spectrum 1714. Illustration 1760 depicts the sample's phase of first constructed spectrum 1712 at the overlap between the first constructed spectrum 1712 and second constructed spectrum 1714.
Illustration 1761 depicts the sample's phase of second constructed spectrum 1714 at the overlap between the first constructed spectrum 1712 and second constructed spectrum 1714. As shown, in the overlapping region between individual spectrums 1712 and 1714, the phase of the sample cancels out and the phase difference of the shifted CTF solely contributes to the measured phase difference. As shown in the illustration 1790, the phase difference in the overlapping portion is a linear combination of the system aberration. The phase difference is linearly dependent on the system aberration. Thus, an operator can be determined that accounts for the shift induced phase difference and this operator can be used to recover the imaging system's aberration.


Assume B is an m×t matrix; then a "flattening" operator, Flat_{m,t}, can be defined which concatenates every column of B and produces a vector b of length mt:

$$\mathbf{b} = \mathrm{Flat}_{m,t}(B) \in \mathbb{C}^{mt\times 1} \qquad \text{(Eqn. 37)}$$

and

$$\mathbf{b}\left[i_1 + (i_2-1)\,m\right] = B(i_1, i_2)$$

An inverse operator, Flat_{m,t}^{-1}, can be defined which restores the matrix when applied to the flattened vector:

$$B = \mathrm{Flat}_{m,t}^{-1}(\mathbf{b}) \qquad \text{(Eqn. 38)}$$
Assuming the acquired sample spectrum is an N×N matrix, Ŝ′i ∈ ℂ^{N×N} can be the matrix form of the sampled spectrum, and ϕ ∈ ℝ^{N×N} the matrix form of the aberration. The corresponding flattened versions are then si = Flat_{N,N}(Ŝ′i) ∈ ℂ^{N²×1} and φ = Flat_{N,N}(ϕ) ∈ ℝ^{N²×1}, respectively.
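The flattening operator of Eqns. 37-38 is simply a column-major ("Fortran-order") reshape, as the following sketch (with an assumed small example matrix) shows:

```python
import numpy as np

B = np.arange(1, 7).reshape(2, 3)     # an example m x t matrix (m=2, t=3)
b = B.flatten(order="F")              # Flat_{m,t}: concatenate the columns

# Check Eqn. 37's indexing rule b[i1 + (i2 - 1) m] = B(i1, i2)
# (1-based in the text; shifted to 0-based here)
m = 2
for i1 in range(1, 3):
    for i2 in range(1, 4):
        assert b[(i1 - 1) + (i2 - 1) * m] == B[i1 - 1, i2 - 1]

B_back = b.reshape(2, 3, order="F")   # Flat^{-1}_{m,t} restores the matrix
```

Column-major order is chosen to match the 1-based indexing convention of Eqn. 37; any consistent ordering would work as long as Flat and its inverse agree.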


Let K_u, K_v ∈ ℝ^{N×N} be the spatial frequency grids (with zero frequency at [c_0, c_0] = (⌈N/2⌉, ⌈N/2⌉), where ⌈N/2⌉ is the smallest integer that is larger than N/2) for the sampled spectrum, where u and v denote two orthogonal directions. Let 𝒦_u, 𝒦_v ∈ ℝ^{N²×1} be their flattened vectors 𝒦_u = Flat_{N,N}(K_u) and 𝒦_v = Flat_{N,N}(K_v). For simplicity, let K(m), m ∈ {1, 2, . . . , N²}, denote the k vector in the grid, which is defined as:










$$K(m) := [\mathcal{K}_u(m),\ \mathcal{K}_v(m)] \tag{Eqn. 39}$$







Additionally, an index set 𝒪_il is defined by:

$$\mathcal{O}_{il} := \{\, m = 1, 2, \ldots, N^2 \mid K(m) \in \mathcal{C}_{il} \,\} \tag{Eqn. 40}$$







Let |𝒪_il| be the cardinality of the set 𝒪_il and, abusing notation, let 𝒪_il(m) indicate the m th smallest element in 𝒪_il. For the transverse illumination k-vector k_i (i = 1, 2, . . . , n), its grid representation is denoted as κ_i.


With this notation, a difference operator D_il ∈ ℝ^{|𝒪_il|×N²} is designed. When acting on φ, this operator should calculate the aberration difference ϕ(k+k_i)−ϕ(k+k_l) associated with 𝒮̂_i and 𝒮̂_l (each row corresponds to one specific choice of k). Based on this principle, for every row of D_il, a value 1 is assigned to the place that corresponds to the phase ϕ(k+k_i) and a value −1 to the place that corresponds to the phase ϕ(k+k_l). That is to say, the operator D_il is constructed as:











$$D_{il}(i_1, i_2) = \begin{cases} 1, & \text{if } K(i_2) = K[\mathcal{O}_{il}(i_1)] + \kappa_i \\ -1, & \text{if } K(i_2) = K[\mathcal{O}_{il}(i_1)] + \kappa_l \\ 0, & \text{otherwise.} \end{cases} \tag{Eqn. 41}$$







The operator is shown to calculate the aberration difference:











$$D_{il}\,\varphi = \begin{bmatrix} \phi(K[\mathcal{O}_{il}(1)] + \kappa_i) - \phi(K[\mathcal{O}_{il}(1)] + \kappa_l) \\ \phi(K[\mathcal{O}_{il}(2)] + \kappa_i) - \phi(K[\mathcal{O}_{il}(2)] + \kappa_l) \\ \vdots \\ \phi(K[\mathcal{O}_{il}(|\mathcal{O}_{il}|)] + \kappa_i) - \phi(K[\mathcal{O}_{il}(|\mathcal{O}_{il}|)] + \kappa_l) \end{bmatrix} \tag{Eqn. 42}$$







Let 𝒮̂_i[𝒪_il] (i = 1, 2, . . . , n) be a ℂ^{|𝒪_il|×1} vector which is defined as












$$\hat{\mathcal{S}}_i[\mathcal{O}_{il}] := \left[\, \hat{\mathcal{S}}_i[\mathcal{O}_{il}(1)],\ \hat{\mathcal{S}}_i[\mathcal{O}_{il}(2)],\ \hat{\mathcal{S}}_i[\mathcal{O}_{il}(3)],\ \ldots,\ \hat{\mathcal{S}}_i[\mathcal{O}_{il}(|\mathcal{O}_{il}|)] \,\right]^T \tag{Eqn. 43}$$







where T in the superscript denotes ordinary transpose. Simplifying the expression of Dilφ:











$$D_{il}\,\varphi = \arg(\hat{\mathcal{S}}_i[\mathcal{O}_{il}]) - \arg(\hat{\mathcal{S}}_l[\mathcal{O}_{il}]) + [\phi(k_i) - \phi(k_l)] \tag{Eqn. 44}$$







Note ϕ(k_i)−ϕ(k_l) is a constant. To account for this constant, an offset operator D_il^0 ∈ ℝ^{|𝒪_il|×N²} is defined, where a 1 is assigned to the place that corresponds to κ_i and a −1 to the place that corresponds to κ_l. That is,











$$D_{il}^{0}(i_1, i_2) = \begin{cases} 1, & \text{if } K(i_2) = \kappa_i \\ -1, & \text{if } K(i_2) = \kappa_l \\ 0, & \text{otherwise.} \end{cases} \tag{Eqn. 45}$$







When acting on φ, this offset operator gives:











$$D_{il}^{0}\,\varphi = \begin{bmatrix} \phi(k_i) - \phi(k_l) \\ \phi(k_i) - \phi(k_l) \\ \vdots \\ \phi(k_i) - \phi(k_l) \end{bmatrix} \tag{Eqn. 46}$$







Thus, D_il and D_il^0 can be used to express the total phase difference between 𝒮̂_i and 𝒮̂_l, which gives:











$$(D_{il} - D_{il}^{0})\,\varphi = \arg(\hat{\mathcal{S}}_i[\mathcal{O}_{il}]) - \arg(\hat{\mathcal{S}}_l[\mathcal{O}_{il}]) \tag{Eqn. 47}$$







For different pairs of spectrums with overlap, those equations can be concatenated, which yields:











$$D\,\varphi = \beta_{\Delta}, \tag{Eqn. 48}$$

where:

$$D := \begin{bmatrix} D_{i_1 l_1} - D_{i_1 l_1}^{0} \\ D_{i_2 l_2} - D_{i_2 l_2}^{0} \\ \vdots \\ D_{i_m l_m} - D_{i_m l_m}^{0} \end{bmatrix} \quad \text{and} \quad \beta_{\Delta} := \arg\!\left( \begin{bmatrix} \hat{\mathcal{S}}_{i_1}[\mathcal{O}_{i_1 l_1}] \\ \hat{\mathcal{S}}_{i_2}[\mathcal{O}_{i_2 l_2}] \\ \vdots \\ \hat{\mathcal{S}}_{i_m}[\mathcal{O}_{i_m l_m}] \end{bmatrix} \right) - \arg\!\left( \begin{bmatrix} \hat{\mathcal{S}}_{l_1}[\mathcal{O}_{i_1 l_1}] \\ \hat{\mathcal{S}}_{l_2}[\mathcal{O}_{i_2 l_2}] \\ \vdots \\ \hat{\mathcal{S}}_{l_m}[\mathcal{O}_{i_m l_m}] \end{bmatrix} \right)$$







Instead of directly solving for the aberration term, a Zernike polynomial is used to represent the aberration of the system. Thus, the Zernike operator Z ∈ ℝ^{N²×z} is introduced, where z is the number of Zernike coefficients to be reconstructed and each column of Z represents a particular Zernike mode. With Zernike decomposition, the aberration of the imaging system is given by:











$$\varphi = Zc \tag{Eqn. 49}$$







where c ∈ ℝ^{z×1} is the corresponding Zernike coefficient vector. Using Zernike decomposition, Eqn. 48 can be rewritten as:












$$\beta_{\Delta} = D\varphi = DZc \tag{Eqn. 50}$$








The above linear equation (or the associated normal equation) can be solved to get the analytical solution of the Zernike coefficient c. The 2D aberration ϕ of the APIC imaging system is then given by:











$$\phi = \mathrm{Flat}_{N,N}^{-1}(\varphi) = \mathrm{Flat}_{N,N}^{-1}(Zc) \tag{Eqn. 51}$$








In reality, some spatial frequencies in the spectrum may have a stronger signal than other frequencies. It may be advantageous to emphasize those frequencies in the spectrum as they may have a higher signal-to-noise ratio (SNR). Thus, in some implementations, a weight matrix W may be employed to emphasize places with high SNR. This gives:













$$(WDZ)\,c = W\beta_{\Delta} \tag{Eqn. 52}$$








In one implementation, the logarithm of the modulus of the product Ŝi(k)Ŝl(k) can be used as a weight matrix.


In some cases, the phase difference of two spectrums might exceed 2π. Thus, the phase differences may first be unwrapped and Eqn. 52 then solved to extract the aberration of the imaging system.
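The overall procedure of Eqns. 48-52 reduces to a weighted linear least-squares solve for the Zernike coefficients. The sketch below is a minimal illustration with random stand-in operators — the matrices D, Z, and W here are hypothetical toys, not the actual operators of this disclosure — showing that known coefficients are recovered from noise-free phase differences:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_zern, n_rows = 64, 5, 200       # toy sizes (illustrative only)
Z = rng.normal(size=(n_pix, n_zern))     # stand-in Zernike operator
D = rng.normal(size=(n_rows, n_pix))     # stand-in difference operator
c_true = rng.normal(size=n_zern)         # ground-truth coefficients

beta = D @ (Z @ c_true)                  # noise-free phase differences (Eqn. 50)
W = np.diag(rng.uniform(0.5, 2.0, size=n_rows))  # SNR-based weights (Eqn. 52)

# Closed-form solve of (WDZ) c = W beta in the least-squares sense.
c_hat, *_ = np.linalg.lstsq(W @ D @ Z, W @ beta, rcond=None)
assert np.allclose(c_hat, c_true)
```

With measurement noise, the same solve returns the weighted least-squares estimate rather than an exact recovery.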


D. Reconstruction Extending Spectrum Using Darkfield Measurements

Since the aberration of the imaging system (system aberration) may be fully determined by the aberration extraction procedure discussed in subsection C, the contribution of the aberration term can be removed from the reconstructed spectrums determined by the reconstruction procedure discussed in subsection B.


In subsection B, the (modified) sampled spectrums were reconstructed under NA-matching angle illumination based on Eqns. 32 and 33. This means the following is known for all |ki|=NA:













$$\hat{S}'_i(k)\, e^{-j\phi(k_i)} = \hat{O}(k)\, H(k + k_i)\, e^{-j\phi(k_i)} = \hat{O}(k)\, \mathrm{Circ}_{NA}(k + k_i)\, e^{j\phi(k + k_i) - j\phi(k_i)} \tag{Eqn. 53}$$








As the last phase factor e^{jϕ(k+k_i)−jϕ(k_i)} entirely depends on the aberration, the aberration can now be corrected and a clean sample spectrum extracted, which yields:













$$\hat{O}(k)\, \mathrm{Circ}_{NA}(k + k_i). \tag{Eqn. 54}$$








This basically means that a piece of the sample spectrum region covered by the CTF support was recovered for each NA-matching intensity measurement. Thus, these reconstructed regions can be stitched for a larger coverage in the spatial frequency domain. That is, the coverage of the reconstructed Ô(k) can be gradually expanded in the spatial frequency domain. Let us define the sampled region 𝒮_m that denotes the spectrum covered by the first m measurements:













$$\mathcal{S}_m := \{\, k \in \mathbb{R}^2 \mid \exists\, i \in 1, 2, \ldots, m,\ \mathrm{Circ}_{NA}(k + k_i) \neq 0 \,\} \tag{Eqn. 55}$$








The mask M_m is defined which denotes the effective sampling mask for the first m measurements:













$$M_m(k) := \mathbb{1}(k \in \mathcal{S}_m) = \begin{cases} 1, & \text{if } k \in \mathcal{S}_m \\ 0, & \text{otherwise} \end{cases} \tag{Eqn. 56}$$








Assuming the complex spectrum can be reconstructed for every measurement, the reconstructed complex sample spectrum R̂_m(k) using the first m measurements can be expressed by:














$$\hat{R}_m(k) := \hat{O}(k)\, M_m(k) \tag{Eqn. 57}$$








After aberration correction, the reconstructed spectrum, R̂_{n_o}(k) = Ô(k)M_{n_o}(k), was obtained using the NA-matching measurements, i = 1, 2, . . . , n_o. For i > n_o, the corresponding illumination angle exceeds the maximal acceptance angle of the optical system, and a darkfield image is measured (see Eqn. 9 for the definition of n_o).


In this subsection, it is shown that if the sampled spectrum at the i th illumination Ŝ′_i (i > n_o) consists of a previously reconstructed (a priori) spectrum and another unknown part, the unknown part can be reconstructed using the known spectrum.
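The decomposition into known and unknown parts can be sketched numerically. In the toy example below — idealized binary circular CTF supports and an assumed random sample spectrum, chosen purely for illustration — the known part Ô(k)H(k+k_i)M_{i−1}(k) and the unknown part Ô(k)H(k+k_i)[1−M_{i−1}(k)] are verified to have disjoint supports and to sum to the sampled spectrum:

```python
import numpy as np

N = 64
ky, kx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

def circ(kc, r=12):
    # Support of the shifted CTF, Circ_NA(k + k_i) (idealized circular pupil).
    return (ky - kc[0]) ** 2 + (kx - kc[1]) ** 2 <= r ** 2

rng = np.random.default_rng(1)
O_hat = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # toy spectrum

# Mask M_{i-1}: union of the CTF supports of previous measurements (Eqn. 56).
M_prev = circ((0, 0)) | circ((0, 10))

H_i = circ((0, 18))                 # support of H(k + k_i) for the new angle
P_hat = O_hat * H_i * M_prev        # known part (Eqn. 58)
U_hat = O_hat * H_i * (~M_prev)     # unknown part (Eqn. 59)

# The two parts have disjoint supports and together make up the sampled spectrum.
assert not np.any((P_hat != 0) & (U_hat != 0))
assert np.allclose(P_hat + U_hat, O_hat * H_i)
```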


Let us decompose the sampled spectrum Ŝ′_i into the unknown and known parts, assuming all previous measurements are reconstructed, which means R̂_{i−1}(k) = Ô(k)M_{i−1}(k) is known. The known spectrum 𝒫̂_i(k) at this substep is given by:














$$\hat{\mathcal{P}}_i(k) = \hat{R}_{i-1}(k)\, H(k + k_i) = \hat{O}(k)\, H(k + k_i)\, M_{i-1}(k). \tag{Eqn. 58}$$








Then, the unknown part Ûi(k) is














$$\hat{U}_i(k) = \hat{S}'_i(k) - \hat{\mathcal{P}}_i(k) = \hat{O}(k) H(k + k_i) - \hat{O}(k) H(k + k_i) M_{i-1}(k) = \hat{O}(k) H(k + k_i)\,[1 - M_{i-1}(k)] \tag{Eqn. 59}$$








Let Supp(f) be the support of a function f: ℝ² → ℂ, which is a set given by:












$$\mathrm{Supp}(f) := \{\, x \in \mathbb{R}^2 \mid f(x) \neq 0 \,\} \tag{Eqn. 60}$$








By construction of 𝒫̂_i(k) and Û_i(k), it is easy to see that 𝒫̂_i(k) ≠ 0 suggests Û_i(k) = 0, and vice versa. Thus, 𝒫̂_i(k) and Û_i(k) have disjoint supports. Based on Eqns. 5 and 6, the measured intensity of one darkfield measurement (darkfield intensity measurement) can be expressed as













$$I_i(x) = \left|\,[\mathcal{F}^{-1}(\hat{S}'_i)](x)\,\right|^2 = \left|\,[\mathcal{F}^{-1}(\hat{U}_i + \hat{\mathcal{P}}_i)](x)\,\right|^2 = |U_i(x) + \mathcal{P}_i(x)|^2 = U_i(x)U_i^{*}(x) + U_i(x)\mathcal{P}_i^{*}(x) + \mathcal{P}_i(x)U_i^{*}(x) + \mathcal{P}_i(x)\mathcal{P}_i^{*}(x) \tag{Eqn. 61}$$








where 𝒫_i(x) := [ℱ^{−1}(𝒫̂_i)](x) and U_i(x) := [ℱ^{−1}(Û_i)](x) are the known field and the unknown field, respectively. Using the properties of the Fourier transform, the Fourier transform of I_i can be written as:













$$[\mathcal{F}(I_i)](k) = [\hat{U}_i \star \hat{U}_i](k) + [\hat{U}_i \star \hat{\mathcal{P}}_i](k) + [\hat{\mathcal{P}}_i \star \hat{U}_i](k) + [\hat{\mathcal{P}}_i \star \hat{\mathcal{P}}_i](k) \tag{Eqn. 62}$$








where * denotes correlation. As the known spectrum is a priori, its auto-correlation can be subtracted from the Fourier transform of the intensity measurement, which yields:














$$[\mathcal{F}(I_i)](k) - [\hat{\mathcal{P}}_i \star \hat{\mathcal{P}}_i](k) = [\hat{U}_i \star \hat{U}_i](k) + [\hat{U}_i \star \hat{\mathcal{P}}_i](k) + [\hat{\mathcal{P}}_i \star \hat{U}_i](k) \tag{Eqn. 63}$$








The two cross-terms are linear with respect to Û_i, since correlation is a linear operator. Considering one cross-term alone naturally leads to a linear equation with respect to Û_i, which can be solved analytically. However, the three remaining terms cannot be easily separated due to the presence of the unknown part.
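The four-term decomposition of Eqns. 61-63 can be verified numerically. In the spatial domain the darkfield intensity is |U_i(x)+𝒫_i(x)|², and subtracting the Fourier transform of the known auto-term |𝒫_i|² leaves exactly the three terms involving the unknown field; each x-domain product Fourier-transforms to the corresponding correlation of spectrums. A minimal check with random fields (illustrative only, exact by linearity of the FFT):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
u = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # unknown field U_i(x)
p = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # known field P_i(x)

F = np.fft.fft2
I = np.abs(u + p) ** 2   # darkfield intensity model (Eqn. 61)

# Subtracting the known auto-term leaves one quadratic term and two cross
# terms that are linear in the unknown field (Eqn. 63, expressed in the
# x-domain before Fourier transforming).
lhs = F(I) - F(np.abs(p) ** 2)
rhs = F(np.abs(u) ** 2) + F(u * np.conj(p)) + F(p * np.conj(u))
assert np.allclose(lhs, rhs)
```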


In the remaining part of this subsection, it is shown that the above three terms have different supports, as depicted in FIG. 18. Therefore, the non-overlapping part can be isolated, which allows for the construction of a linear equation with respect to Ûi.



FIG. 18 depicts a schematic illustration of the complex field reconstruction using a plurality of darkfield measurements to extend the known sample spectrum, according to embodiments. In this visualization, the sampled spectrum 1810 is centered, which is basically a cropped version of the spectrum translation model (Eqn. 6). The sampled spectrum 1810 is shown decomposed into two disjoint regions, an unknown spectrum 1812 and a known spectrum 1814. That is, the sampled spectrum (not directly measured) 1810 consists of the previously reconstructed known sample spectrum 1814 and an unknown part 1812. The unknown part of the darkfield measurement can be reconstructed using the known reconstructed sample spectrum 1814. The Fourier transform of the darkfield measurement can be decomposed into four terms: an autocorrelation of the unknown spectrum 1842, a first cross-correlation term 1844, a second cross-correlation term 1846, and an autocorrelation of the known spectrum 1848. A Fourier transform of the (directly measured) darkfield intensity measurement 1820 provides a Fourier transformed darkfield intensity measurement 1830, the spectrum of the darkfield measurement. As the known part of the spectrum of the darkfield measurement is given, the corresponding known field can be calculated by reverse Fourier transform. Upon subtracting the autocorrelation of the known spectrum 1848 from the Fourier transform of the (directly measured) darkfield intensity measurement 1830, three terms are left: the autocorrelation of the unknown spectrum 1842, the first cross-correlation term 1844, and the second cross-correlation term 1846. A portion 1852 of the second cross-correlation term 1846 does not overlap with the autocorrelation of the unknown spectrum 1842 or with the first cross-correlation term 1844.
This portion 1852 can be extracted by discarding (setting to zero) the regions covered by the possible nonzero areas of the autocorrelation of the unknown spectrum 1842 and the first cross-correlation term 1844 in the result of subtracting the autocorrelation of the known spectrum 1848 from the spectrum of the darkfield intensity measurement 1830. The portion 1852 is a linear combination of unknown parts. A linear equation (e.g., part of a correlation operator) may be obtained with respect to the unknown part 1852 of the darkfield measurement. This linear equation may be used to form a closed-form solution of an unknown part 1812 of the sample spectrum from each darkfield measurement. The unknown parts (e.g., portions at higher frequencies) determined from multiple darkfield measurements can be filled into the sample spectrum in, e.g., a sample spanning procedure, to extend the sample spectrum (e.g., the sample spectrum stitched together from complex field spectrums constructed from the NA-matching measurements). A reverse Fourier transform of the extended sample spectrum yields a higher resolution image (i.e., with higher resolution than the NA-matching intensity measurements or the darkfield measurements).
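The claim that a portion of one cross-correlation term does not overlap with the other terms can be illustrated with binary support masks. The sketch below uses a toy geometry — an assumed left-side known support and a right-side unknown support, not the actual CTF geometry of the disclosure — and computes the supports of the correlation terms by brute force, verifying that the non-overlapping region (the set Q_i of Eqn. 64) is nonempty:

```python
import numpy as np

def corr_support(A, B):
    # Support of the full cross-correlation of two N x N boolean masks:
    # nonzero wherever sum_x A(x) B(x + k) > 0 for some shift k.
    N = A.shape[0]
    out = np.zeros((2 * N - 1, 2 * N - 1), dtype=bool)
    for s1 in range(-N + 1, N):
        for s2 in range(-N + 1, N):
            a1, a2 = max(0, -s1), max(0, -s2)
            b1, b2 = max(0, s1), max(0, s2)
            n1, n2 = N - abs(s1), N - abs(s2)
            if np.any(A[a1:a1 + n1, a2:a2 + n2] & B[b1:b1 + n1, b2:b2 + n2]):
                out[s1 + N - 1, s2 + N - 1] = True
    return out

N = 16
P = np.zeros((N, N), dtype=bool); P[:, :6] = True    # known part (left region)
U = np.zeros((N, N), dtype=bool); U[:, 12:] = True   # unknown part (right strip)

# Q_i: where only the P-star-U cross term can be nonzero (Eqn. 64).
Q = corr_support(P, U) & ~(corr_support(U, P) | corr_support(U, U))
assert Q.any()
```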


In the APIC system 100 shown in FIGS. 1A and 1B, the same radiation detector (e.g., camera) is used to acquire both darkfield and NA-matching measurements. Thus, each intensity measurement is a matrix of fixed size. The matrix representation of the original sample's spectrum R_m is nevertheless a much larger matrix. The physical picture is that the tilted illumination translates the sample spectrum in the spatial frequency domain, as the initial forward model Eqn. 1 suggests, so that different spatial frequencies of the sample spectrum are moved into this smaller measurement-related grid and then get sampled. This is equivalent to cropping out a specific region of the sample spectrum and moving the cropped part to the center when applying the pupil translation model in Eqn. 6. In the following discussion, the matrix representations of both the unknown and the known spectrum are assumed to be centered, thus having the same dimension as the measurement.


Focusing on [𝒫̂_i ⋆ Û_i](k), let Q_i be the non-intersecting set, which is defined as:












$$Q_i := \left\{\, k \in \mathbb{R}^2 \;\middle|\; k \in \mathrm{Supp}(\hat{\mathcal{P}}_i \star \hat{U}_i) \setminus \left[ \mathrm{Supp}(\hat{U}_i \star \hat{\mathcal{P}}_i) \cup \mathrm{Supp}(\hat{U}_i \star \hat{U}_i) \right] \,\right\} \tag{Eqn. 64}$$








By construction:













$$[\mathcal{F}(I_i)](k) - [\hat{\mathcal{P}}_i \star \hat{\mathcal{P}}_i](k) = [\hat{\mathcal{P}}_i \star \hat{U}_i](k), \quad \forall\, k \in Q_i \tag{Eqn. 65}$$







The masked subtraction, L_i, is defined as:











$$L_i(k) := \left( [\mathcal{F}(I_i)](k) - [\hat{\mathcal{P}}_i \star \hat{\mathcal{P}}_i](k) \right) \mathbb{1}(k \in Q_i) = \begin{cases} [\hat{\mathcal{P}}_i \star \hat{U}_i](k), & \text{if } k \in Q_i \\ 0, & \text{otherwise} \end{cases} \tag{Eqn. 66}$$







Let Û_i ∈ ℂ^{N×N} be the matrix version of the (centered) unknown part and 𝒰̂_i ∈ ℂ^{N²×1} be its flattened vector 𝒰̂_i = Flat_{N,N}(Û_i); let L_i ∈ ℂ^{N×N} be the (centered) matrix version of L_i and ℒ_i ∈ ℂ^{N²×1} be its flattened vector ℒ_i = Flat_{N,N}(L_i); and let 𝒫̂_i be the (centered) matrix version of the known part. In general, the correlation of two matrices of size N×N would be a matrix of size (2N−1)×(2N−1). Based on the Nyquist theorem, the nonzero part of Û_i + 𝒫̂_i is contained in an N/2×N/2 box. As a consequence, the correlation of Û_i and 𝒫̂_i can be calculated using an N×N grid.


Let us construct a (sparse) correlation operator C_i that takes all nonzero elements in the unknown spectrum 𝒰̂_i and gives ℒ_i. First, focus on the t_1 th row and t_2 th column of L_i, which corresponds to the m th element of the vector ℒ_i, where m = t_1 + N(t_2 − 1) and t_1, t_2 ∈ 1, 2, . . . , N. Let t′_1 = t_1 − c_0 and t′_2 = t_2 − c_0. Then (recall that in Eqn. 39 K is defined as the grid version of the k vector and also K_u(c_0, c_0) = K_v(c_0, c_0) = 0):















$$\mathcal{L}_i(m) = \begin{cases} [\hat{\mathcal{P}}_i \star \hat{U}_i][K(m)], & \text{if } K(m) \in Q_i \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \displaystyle\sum_{i_1 = \max\{1,\, 1 + t'_1\}}^{\min\{N,\, N + t'_1\}} \; \sum_{i_2 = \max\{1,\, 1 + t'_2\}}^{\min\{N,\, N + t'_2\}} \hat{\mathcal{P}}_i^{*}(i_1 - t'_1,\ i_2 - t'_2)\, \hat{U}_i(i_1, i_2), & \text{if } K(m) \in Q_i \\ 0, & \text{otherwise} \end{cases} \tag{Eqn. 67}$$







For K(m) ∈ Q_i, a matrix G_i^m ∈ ℂ^{N×N} can be constructed that is defined as:











$$G_i^m(i_1, i_2) := \begin{cases} \hat{\mathcal{P}}_i^{*}(i_1 - t'_1,\ i_2 - t'_2), & \text{if } \max\{1,\, 1 + t'_\zeta\} \leq i_\zeta \leq \min\{N,\, N + t'_\zeta\},\ \zeta \in \{1, 2\} \\ 0, & \text{otherwise.} \end{cases} \tag{Eqn. 68}$$







With this definition:












$$\mathcal{L}_i(m) = \begin{cases} [\mathrm{Flat}_{N,N}(G_i^m)]^T\, \hat{\mathcal{U}}_i, & \text{if } K(m) \in Q_i \\ 0, & \text{otherwise} \end{cases} \tag{Eqn. 69}$$







An index set 𝔏_i is defined that denotes this special region that is linear in 𝒰̂_i:










$$\mathfrak{L}_i := \{\, m = 1, 2, \ldots, N^2 \mid K(m) \in Q_i \,\} \tag{Eqn. 70}$$







The notation is again abused so that 𝔏_i(m) indicates the m th smallest element in 𝔏_i, and ℒ_i(𝔏_i) is defined as












$$\mathcal{L}_i(\mathfrak{L}_i) := \left[\, \mathcal{L}_i[\mathfrak{L}_i(1)],\ \mathcal{L}_i[\mathfrak{L}_i(2)],\ \ldots,\ \mathcal{L}_i[\mathfrak{L}_i(|\mathfrak{L}_i|)] \,\right]^T \tag{Eqn. 71}$$







where |𝔏_i| is the cardinality of 𝔏_i. A correlation operator can then be constructed. Let C_i^F be a ℂ^{|𝔏_i|×N²} matrix and let its m th row C_i^F(m, ·) be:











$$C_i^F(m, \cdot) = \left[ \mathrm{Flat}_{N,N}\!\left( G_i^{\mathfrak{L}_i(m)} \right) \right]^T \tag{Eqn. 72}$$







and then:











$$C_i^F\, \hat{\mathcal{U}}_i = \mathcal{L}_i(\mathfrak{L}_i) \tag{Eqn. 73}$$








FIG. 19 depicts a graphical illustration of the construction of a correlation operator C_i^F 1912, according to embodiments. For a vector K(m) that admits a measurement that is linear in the unknown spectrum, an associated matrix G_i^m is constructed. This matrix is constructed such that the correlation of the unknown and known parts at K(m) equals Σ_{i_1,i_2=1}^N Û_i(i_1, i_2) G_i^m(i_1, i_2), the summation of the element-wise multiplication of this matrix and the unknown spectrum. The flattened vector of G_i^m serves as one row of the correlation matrix C_i^F, as illustrated in the figure.
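The row construction illustrated in FIG. 19 can be sketched as follows: for one shift (t_1, t_2), the matrix G_i^m holds the shifted conjugate known spectrum, and the inner product of its flattened form with the flattened unknown spectrum reproduces the correlation value directly, per Eqn. 69. The toy sizes, random spectrums, and the 1-indexed center c_0 = N/2 + 1 below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
c0 = N // 2 + 1                      # 1-indexed center index (zero frequency)
P = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # known spectrum
U = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # unknown spectrum

def G_matrix(t1, t2):
    # Build G_i^m for the entry at row t1, column t2 (1-indexed), per Eqn. 68.
    tp1, tp2 = t1 - c0, t2 - c0
    G = np.zeros((N, N), dtype=complex)
    for i1 in range(max(1, 1 + tp1), min(N, N + tp1) + 1):
        for i2 in range(max(1, 1 + tp2), min(N, N + tp2) + 1):
            G[i1 - 1, i2 - 1] = np.conj(P[i1 - tp1 - 1, i2 - tp2 - 1])
    return G

# Inner product of the flattened G with the flattened unknown spectrum
# reproduces the correlation value at that shift (Eqn. 69).
t1, t2 = 6, 3
value = G_matrix(t1, t2).flatten(order="F") @ U.flatten(order="F")

# Reference: the correlation sum evaluated directly from its definition.
tp1, tp2 = t1 - c0, t2 - c0
direct = sum(np.conj(P[i1 - tp1, i2 - tp2]) * U[i1, i2]
             for i1 in range(N) for i2 in range(N)
             if 0 <= i1 - tp1 < N and 0 <= i2 - tp2 < N)
assert np.isclose(value, direct)
```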


By construction of the unknown spectrum Û_i, it is known that the only locations where it can be nonzero are where the following holds:











$$H(k + k_i)\,[1 - M_{i-1}(k)] \neq 0 \tag{Eqn. 74}$$







Using the above equation, the corresponding nonzero elements in the flattened vector 𝒰̂_i can be easily found. An index set N_i is defined that consists of the indices of these nonzero elements:










$$N_i := \left\{\, m = 1, 2, \ldots, N^2 \;\middle|\; H[K(m) + \kappa_i]\left(1 - M_{i-1}[K(m)]\right) \neq 0 \,\right\} \tag{Eqn. 75}$$







and let N_i(m) be the m th smallest element in N_i, and |N_i| be the cardinality of N_i. To construct the correlation operator C_i that encodes the sparsity of the unknown spectrum, simply keep all columns of C_i^F whose indices belong to the set N_i and discard all other columns. This gives the definition of C_i ∈ ℂ^{|𝔏_i|×|N_i|}:













$$C_i(i_1, m) := C_i^F[i_1,\ N_i(m)], \quad i_1 = 1, 2, \ldots, |\mathfrak{L}_i| \ \text{ and } \ m = 1, 2, \ldots, |N_i| \tag{Eqn. 76}$$







Then, the following linear equation holds with respect to the nonzero elements of the unknown spectrum:












$$C_i\, \hat{\mathcal{U}}_i(N_i) = \mathcal{L}_i(\mathfrak{L}_i), \quad \text{where } \hat{\mathcal{U}}_i(N_i) := \left[\, \hat{\mathcal{U}}_i[N_i(1)],\ \hat{\mathcal{U}}_i[N_i(2)],\ \ldots,\ \hat{\mathcal{U}}_i[N_i(|N_i|)] \,\right]^T. \tag{Eqn. 77}$$







To solve this equation, the rank of the matrix C_i is required to be at least |N_i|. This can be satisfied if the known spectrum covers a semicircle of the circular CTF. Overlap criteria may be applied in certain embodiments. For example, the autocorrelation of a semicircle is around 4.5 times larger than the semicircle itself. If it is assumed that the CTF is of radius r_0 and the known spectrum is a semicircle, the area of the unknown spectrum is then (1/2)πr_0². For a circle (area πr_0²), its autocorrelation is strictly 4 times larger in size (area 4πr_0²). Thus, the linear region has an area of:










$$\mathrm{Area}(Q_i) = \frac{1}{2}\left[ 4\pi r_0^2 - \left( 4 r_0^2 + \pi r_0^2 \right) \right] = \frac{3\pi r_0^2 - 4 r_0^2}{2} \tag{Eqn. 78}$$







which is approximately 1.7 times larger than the area of the unknown part. That is, if the known spectrum occupies 50% of the spectrum, the rank of the matrix C_i can be well above |N_i|. Numerically, a safe choice is to let the unknown spectrum occupy no more than 42% of the measured spectrum, assuming the CTF is circular.
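The 1.7× figure follows directly from the areas above; as a quick arithmetic check:

```python
import math

r0 = 1.0
unknown_area = 0.5 * math.pi * r0**2         # semicircular unknown part
linear_area = (3 * math.pi - 4) * r0**2 / 2  # Area(Q_i) from Eqn. 78

# The ratio is 3 - 4/pi, approximately 1.73.
assert abs(linear_area / unknown_area - (3 - 4 / math.pi)) < 1e-12
assert abs(linear_area / unknown_area - 1.7) < 0.05
```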



FIG. 20 depicts an illustration of the autocorrelation of a semicircle with radius r_0, according to embodiments. The shape of the autocorrelation result has an area of 4r_0² + πr_0², which is around 4.5 times larger than the area enclosed by the semicircle.


By solving Eqn. 77, the closed-form solution of the unknown spectrum is obtained. That is, the following spectrum is reconstructed:












$$\hat{U}_i(k) = \hat{O}(k)\, H(k + k_i)\,[1 - M_{i-1}(k)] \tag{Eqn. 79}$$







As the aberration of the system is determined, the aberration can be corrected, which gives the aberration-corrected spectrum Ô(k)Circ_NA(k+k_i)[1−M_{i−1}(k)]. Because only the intensity of the field is directly measured in the darkfield measurement and the phase is unknown, the square root of this measured intensity is used as the modulus of the complex field to maintain the (point-wise) energy and robustness.


Until now, it has been shown that the complex spectrum sampled with the i th illumination can be reconstructed using a priori knowledge of the known spectrum. The entire reconstructed spectrum can be expanded by integrating this newly reconstructed spectrum. The overall reconstructed spectrum using the i th measurement is given by Ô(k)M_{i−1}(k) + Ô(k)Circ_NA(k+k_i)[1−M_{i−1}(k)], which is exactly Ô(k)M_i(k) by definition of M_i(k). That is, an extended complex spectrum reconstruction R̂_i after darkfield reconstruction at the i th sub-step is obtained. This reconstructed field serves as the a priori knowledge in the reconstruction of the (i+1) th sub-step.


For a dense LED array, the area of the unknown spectrum can be quite small because the effective CTFs of two closely spaced LEDs cover similar areas (Eqn. 6). In such a case, the new spectrum can be filled in only when it is necessary for solving the linear equation formed in Eqn. 77 (e.g., when the system becomes underdetermined if the new spectrum is not filled in). By doing this, the spectrum reconstructions prior to the stitching process are independent and have overlaps in the spatial frequency domain. Therefore, the overlap can be averaged over for improved robustness of the reconstruction algorithm.
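Averaging over the overlap can be sketched as a per-pixel accumulate-and-divide. The toy below (two assumed overlapping coverage masks over a random "true" spectrum) shows that, in the noise-free case, the stitched spectrum matches the truth exactly:

```python
import numpy as np

N = 32
rng = np.random.default_rng(5)
truth = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# Two independent patch reconstructions with overlapping frequency coverage.
mask_a = np.zeros((N, N), dtype=bool); mask_a[:, :20] = True
mask_b = np.zeros((N, N), dtype=bool); mask_b[:, 12:] = True
recon_a = truth * mask_a
recon_b = truth * mask_b

# Average over the overlap: accumulate values and per-pixel counts.
acc = recon_a + recon_b
count = mask_a.astype(int) + mask_b.astype(int)
stitched = np.where(count > 0, acc / np.maximum(count, 1), 0)

assert np.allclose(stitched, truth)   # noise-free case: exact recovery
```

With noisy per-measurement reconstructions, the same averaging reduces the variance in the overlap region rather than recovering the truth exactly.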


Once the darkfield reconstruction is done for all measurements, we have reconstructed all sampled spectrums and the extended complex spectrum R̂_n, which is our high-resolution, aberration-free complex field reconstruction.


VI. Additional Results
A. Comparison of APIC and FPM using Reduced Dataset

In one example, 316 images were used in a full dataset. In another example, 9 brightfield measurements, 8 NA-matching measurements, and 28 darkfield measurements were used in a reduced dataset. For the reduced dataset example, an FPM method used all forty-five (45) measurements and the APIC method used 36 intensity measurements, including the 8 NA-matching measurements and the 28 darkfield intensity measurements. The APIC imaging system 100 of FIGS. 1A and 1B was used to acquire the intensity measurements. An APIC imaging method of certain implementations and a conventional FPM imaging method were used to reconstruct images.



FIG. 21A includes images reconstructed with both the full dataset and the reduced dataset using the APIC imaging method according to embodiments and the FPM imaging method for comparison. The full dataset is largely redundant as the overlap of two adjacent spectrums is around 86%. The nominal overlap ratio reduces to 65% for the reduced dataset. The specimen being imaged was a Siemens star target that was defocused by 16 μm. The zoomed images of the amplitude and phase reconstructions of APIC and FPM are shown on the right, respectively.


B. Resolution Quantification

The APIC imaging system 100 of FIGS. 1A and 1B was used to acquire intensity measurements of a Siemens star target. The illumination device 110 was configured for illumination angles smaller than the acceptance angle of the objective 134. The illumination device 110 used a red LED with a central wavelength of 632 nm. An APIC imaging method of certain implementations and a conventional FPM imaging method were used to reconstruct images.



FIG. 21B includes images reconstructed using the APIC imaging method according to embodiments and the FPM imaging method for comparison of resolution. The reconstruction using in-focus data and the reconstructed amplitude and phase of FPM and APIC are shown in the middle. The radial profiles along the circles highlighted in the reconstructed amplitude images are shown on the right. The radial profiles were calculated at a radius where at least 10% contrast was preserved in either of the two reconstructed amplitudes for the Siemens star target. The APIC and FPM images have similar radial profiles. The maximal illumination NA in this experiment is approximately 0.5 and the objective NA equals 0.25. The theoretical resolution of the imaging system is 842 nm. Using the radial profiles in FIG. 21B, the experimentally determined resolution achieved for both FPM and APIC is approximately 867 nm.
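The quoted 842 nm theoretical resolution follows from the coherent synthetic-aperture limit λ/(NA_obj + NA_ill) with the stated values; as a quick check:

```python
wavelength_nm = 632.0   # red LED central wavelength
na_obj = 0.25           # objective NA
na_ill = 0.5            # approximate maximal illumination NA

# Resolution limit of the synthetic aperture: lambda / (NA_obj + NA_ill).
resolution_nm = wavelength_nm / (na_obj + na_ill)
assert abs(resolution_nm - 842.7) < 1.0   # about 842 nm, as stated
```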


C. Reconstruction of Hematoxylin and Eosin Stained Sample

The APIC imaging system 100 of FIGS. 1A and 1B was used to acquire intensity measurements of hematoxylin and eosin (H&E) stained breast cancer cells. In this example, the illumination device 110 included a plurality of LEDs that were each configured for providing red, green, and blue illumination. An APIC imaging method of a certain implementation (e.g., the APIC method described with reference to FIG. 3) and a conventional FPM imaging method were used to reconstruct images.



FIG. 22A includes images of hematoxylin and eosin (H&E) stained breast cancer cells reconstructed using the APIC imaging method according to embodiments and the FPM imaging method for comparison. The complex field reconstructions and retrieved aberrations of a red channel, a green channel, and a blue channel are shown. The zoomed images of the amplitude and phase reconstructions are shown on the right of each box. Scale bar for the zoomed images: 5 μm. The quantitative phases of the APIC images appear more consistent among all three channels than for the FPM images.


D. Reconstruction Time

In this implementation, a computing device (e.g., computing device 780 in FIG. 7 or computing device 1280 in FIG. 12) in the form of a personal computer with 16 GB RAM and a CPU (e.g., Intel Core i5-8259U) was used for reconstructions. An APIC method of an implementation (e.g., the APIC method described with reference to FIG. 3) was used for reconstruction. The reconstructions were performed on the computing device under different patch sizes. In this simulation, the overlap ratio was set to approximately 60%. The dataset consisted of 76 darkfield measurements and 16 NA-matching measurements. The FPM method was performed with the same convergence criteria for the first and second order FPM reconstruction methods as were used in the other experiments and simulations discussed herein. A moderate aberration was added, where the phase standard deviation of the simulated aberration is approximately 0.6 radian.



FIG. 22B is a plot of reconstruction runtime vs. image patch side length (pixels) for the APIC imaging method according to embodiments and FPM for comparison. For a patch side length of 256, the three methods required approximately the same amount of time to complete. The computational efficiency advantage of APIC for small patch sizes is in line with another important computational consideration, namely parallel processing. By splitting the whole image into smaller patches, more processors can be simultaneously used for APIC reconstruction.


E. Aberration Correction

Simulations under different aberration levels were conducted and an APIC method of an embodiment (e.g., the APIC method described with reference to FIG. 3) was used to reconstruct the complex field. A complex USAF target was used as the specimen being imaged and different aberrations were added to the intensity measurements. It was assumed the illumination angles were known and no noise was simulated. For the NA-matching angle illumination, 16 evenly spaced LEDs were simulated. The overlap ratio of two adjacent sampled spectrums is around 60%. The ground truth was generated by cropping the complex object spectrum using the sampled region M_n(k). In other words, the ground truth was the inverse Fourier transform of the spectrum Ô(k)M_n(k). To visually convey different aberration levels, the image that would be obtained by the aberrated imaging system under normal incidence illumination was also simulated. This normal incidence measurement was just for visualization and was not used in the APIC reconstruction process. The reconstruction results are shown in FIG. 23A.



FIG. 23A depicts images reconstructed using an APIC imaging method under different levels of aberrations, according to embodiments. The first row of images shows the corresponding raw intensity images under normal incidence with different levels of aberration. The second and third rows are the reconstructed amplitude and phase images, respectively. The last (bottom) row is the reconstructed aberration (pupil) and the actual (GT) simulated aberration. The last column shows the ground truth of the object's amplitude and phase, as well as the image under normal incidence when captured by an aberration-free imaging system. The results shown in FIG. 23A indicate that the APIC imaging used is exceptionally tolerant to aberration and can accurately extract the aberration of the imaging system under extremely high aberration levels (the largest phase standard deviation exceeds 1.5π). The reconstructed phase and amplitude for all cases matched the ground truth.
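The aberration-severity metric quoted above, the standard deviation of the pupil phase, can be computed as in this brief sketch. The defocus-like test aberration and the pupil geometry are hypothetical examples chosen only for illustration.

```python
import numpy as np

def pupil_phase_std(pupil_phase, pupil_mask):
    """Standard deviation (radians) of the aberration phase over the
    pupil support; the severity metric quoted above (e.g., 1.5*pi)."""
    return pupil_phase[pupil_mask].std()

# Hypothetical quadratic (defocus-like) aberration on a circular pupil.
n = 128
y, x = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                   indexing="ij")
r2 = x ** 2 + y ** 2
mask = r2 <= (n // 4) ** 2          # circular pupil support
phase = 0.002 * r2                  # defocus-like phase, in radians
severity = pupil_phase_std(phase, mask)
```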


For comparison, a conventional FPM imaging method was used for reconstruction with less aberration than that used above. The same number of brightfield images were used for the reconstruction using the conventional FPM imaging method as were used for the APIC imaging method. The darkfield measurements were shared by FPM and APIC. Two different reconstruction algorithms were used for FPM, namely the original alternating projection algorithm (the original Gerchberg-Saxton algorithm combined with EPRY for aberration correction) and the second order Gauss-Newton method. The FPM reconstruction was conducted with 6 different sets of parameters. As the ground truth was known in our simulation, the parameter set whose result corresponded closest to the ground truth was manually selected.



FIG. 23B includes images reconstructed using an APIC imaging method according to embodiments and an FPM imaging method for comparison. AP denotes the alternating projection method (combined with EPRY) and GN denotes the Gauss-Newton method. The ground truth of the system's aberrations and the complex object are shown in the middle. In this simulation, two levels of moderate aberration were applied. The results on the left correspond to the case where the gentler of the two simulated aberrations was used. The results on the right correspond to the more severe case.


From the simulation results, FPM worked well with mild aberrations. When the imaging system had a relatively small aberration (phase standard deviation of approximately 0.15π), both the original alternating projection method (implemented with EPRY for aberration correction) and the second order Gauss-Newton method successfully reconstructed the aberrations. Their amplitude reconstructions also closely matched the ground truth. The phase reconstruction of the second order method had better correspondence to the ground truth. When the aberration became more severe (standard deviation reaching 0.4π), neither FPM method worked. The second order FPM method worked slightly better than the first order algorithm, as it partially reconstructed the high-frequency information. APIC, on the other hand, worked well at all these aberration levels.


F. Comparison Under Different Signal-to-Noise Ratios

Simulations of an APIC method according to an embodiment (e.g., the APIC method described with reference to FIG. 3) were run using different signal-to-noise ratios (SNRs) to assess its performance under different scenarios. It was assumed the illumination angles were known and that an ideal aberration-free imaging system was used. Poisson noise was added to each measurement. The average number of photons for the NA-matching measurements is shown at the top of FIG. 24A. All other simulation parameters were the same.



FIG. 24A includes images reconstructed using an APIC imaging method under different signal-to-noise ratios (SNRs), according to embodiments. The measurement under normal incidence is shown on the far left, with no noise added. This measurement is for illustration purposes and is not used in APIC's reconstruction. Poisson noise was added to each measurement in this simulation. The numbers shown at the top are the three average numbers of photons under NA-matching angle illumination. When the average number of photons is 10 for the NA-matching measurements, the average number of photons for all darkfield measurements is around 0.11.
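The noise model described above, Poisson counts at a prescribed mean photon number, can be sketched as follows. The scaling convention and function name are assumptions for illustration only.

```python
import numpy as np

def add_poisson_noise(intensity, mean_photons, rng=None):
    """Scale an intensity image so its mean equals mean_photons,
    draw Poisson photon counts per pixel, and rescale back. This
    mirrors the SNR sweep described above (illustrative convention)."""
    rng = np.random.default_rng() if rng is None else rng
    scale = mean_photons / intensity.mean()
    return rng.poisson(intensity * scale) / scale

# Example: a flat intensity image at an average of 10 photons.
I = np.full((64, 64), 2.0)
noisy = add_poisson_noise(I, mean_photons=10,
                          rng=np.random.default_rng(1))
```

Lower mean photon numbers yield proportionally larger relative fluctuations, which is why darkfield measurements (about 0.11 photons on average in the lowest-SNR case above) are the most noise-limited.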


If no noise was added to the measurements, the APIC method produced results that matched the ground truth. When the SNR was low, the APIC reconstruction became noisier and exhibited degraded resolution. Nonetheless, it preserved the high-frequency details that were not captured in the normal incidence measurement.


Two further simulations were then conducted to compare APIC with FPM under different SNRs. As before, the NA-matching measurements were replaced with n0 brightfield measurements to construct the dataset for FPM. All darkfield measurements were shared by APIC and FPM. For FPM, 6 different sets of parameters were chosen and the best results were selected in the simulation.



FIG. 24B includes images reconstructed under different noise levels using APIC imaging method according to embodiments and FPM imaging method for comparison. For each group, the upper left panel shows the reconstruction result when the average number of photons of NA-matching measurements is 1,000, and the lower right panel shows the result when the number is 100. The zoomed version of the result within the box is shown on the far right of each group. For FPM, reconstruction was performed using 6 different parameter sets and the best reconstruction result is shown.


In the first simulation, a complex Siemens star target was simulated and different levels of noise were added to the simulated dataset. As shown in FIG. 24B, the second order FPM algorithm performed well under low SNR (when the average number of photons equals 100 for the NA-matching measurements). This iterative algorithm trades reconstruction speed for additional tolerance to noise. The first order FPM reconstruction algorithm, which was much faster than the second order algorithm, performed worse than the other two methods in both cases. The second order algorithm is more robust than the first order alternating projection method.



FIG. 25 includes images of a weak absorption target reconstructed under different noise levels using APIC imaging method according to embodiments and FPM imaging method for comparison. The numbers shown on the left are the average number of photons under the matching angle. The zoomed version of images inside the highlighted boxes, along with the zoomed ground truth, are shown on the lower right of this figure. The image under normal incidence was simulated with a noise-free imaging system. For a complex sample whose amplitude and phase are different in morphology, cross-talk between the two appears in FPM's reconstruction. APIC does not appear to suffer from such severe cross-talk.


In our second simulation, two different patterns for the amplitude and phase of the complex object were selected. This complex object was designed to have a weak amplitude variation while preserving a relatively strong phase variation, based on the common properties of unstained biological samples. There was cross-talk between phase and amplitude in both FPM algorithms when using low-SNR measurements.


When SNR increased, such cross-talk was less prominent in FPM. APIC did not suffer from such cross-talk. Although the reconstruction of APIC with low SNR was noisy, it maintained the features of the ground truth amplitude and phase and showed almost no cross-talk between the two. Also, the reconstructed phase of APIC was closer to the ground truth. For the low SNR dataset, the range of the reconstructed phase of FPM appeared to be compressed.


G. Number of NA-Matching Measurements

Simulations with different numbers of NA-matching measurements were run to determine an example of the minimum number of measurements needed for an APIC imaging method, according to certain implementations, to accurately reconstruct the imaging system's aberration. In the simulation, only aberration was introduced to the imaging system and all other parameters were assumed to be ideal. Illumination angles were assumed to be azimuthally uniformly distributed, which means their corresponding LEDs were uniformly distributed along the ring (e.g., ring 116a in FIGS. 1A and 1B).
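The azimuthally uniform NA-matching illumination described above amounts to placing LED k-vectors evenly on a ring whose radius matches the pupil cutoff. A minimal sketch, with arbitrary units and an illustrative function name:

```python
import numpy as np

def ring_illumination_kvectors(num_leds, k_na):
    """k-vectors for num_leds LEDs azimuthally uniformly spaced on a
    ring whose radius equals the objective cutoff k_na, i.e., the
    NA-matching illumination angles described above."""
    theta = 2 * np.pi * np.arange(num_leds) / num_leds
    return k_na * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Example: 8 NA-matching illuminations on a unit-radius cutoff.
kvecs = ring_illumination_kvectors(8, k_na=1.0)
```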



FIG. 26 includes images of aberrations reconstructed using an APIC imaging method using different numbers of NA-matching measurements, according to embodiments. The ground truths in the simulation are shown on top of each group. The number of NA-matching measurements used in the simulation are shown on the far left side. In addition to the reconstructed aberration, an error map between the reconstructed aberration and the ground truth is shown. This error map is placed next to the APIC's reconstruction and is highlighted by a dashed circle.


When using 4 images, APIC did not obtain a good aberration estimate. However, when there were 6 NA-matching measurements, APIC successfully reconstructed mild to moderately high aberrations (phase standard deviation below 0.8π). Residual aberration nonetheless remained in the reconstructed aberration under severe aberrations (phase standard deviation exceeding 1.5π), as depicted in the corresponding error map in FIG. 26. When the number of NA-matching measurements increased to 8, there were no residual aberrations in APIC's reconstructions regardless of the severity of the aberration. Generally speaking, 6 to 8 NA-matching measurements are sufficient for generating an accurate aberration estimate in most cases.


In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 9 NA-matching measurements.


H. Illumination Angle Estimates

Simulations of errors in the angle calibration were run to see how an APIC imaging method of an embodiment performs under different levels of calibration error. Random uniformly distributed estimation error was introduced to the actual illumination angles. The simulation was based on an APIC system similar to APIC system 100 shown in FIGS. 1A and 1B but with a different number of LEDs. The maximal amount of error in our simulation was proportional to the maximal acceptance angle of the imaging system. As the illumination angle was converted to the spatial frequency vector in practice, this maximum error is denoted by the ratio of the error to the k-vector of the maximal acceptance angle. In the simulation, it was assumed that the estimate of the NA-matching angle was no larger than the acceptance angle of the imaging system.



FIG. 27A includes images reconstructed using an APIC imaging method with different levels of illumination angle calibration error, according to embodiments. In this simulation, randomly generated angle errors were added to the actual illumination angles and this inaccurate angle estimate was fed into the APIC imaging method when performing the reconstruction. The numbers on top of each group are the maximal possible error in the simulation. If the maximal possible error is denoted by γ, the simulated angle errors for both the horizontal and vertical directions were uniformly sampled in [−γ, γ]. For each NA-matching measurement, the corresponding angle estimate was forced to be smaller than the acceptance angle of the system by tuning the sign of this randomly generated error.
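The error model in this simulation can be sketched as follows, assuming per-axis uniform errors and the sign-tuning step described above. The function name and the minimum-norm tie-break are illustrative assumptions.

```python
import numpy as np

def perturb_na_matching_kvector(k_true, gamma_frac, k_max, rng):
    """Add a uniform error in [-gamma, +gamma] per axis (gamma a
    fraction gamma_frac of the cutoff k_max) to a true NA-matching
    k-vector, tuning the sign of the error so the estimate does not
    exceed the acceptance angle, per the simulation described above."""
    gamma = gamma_frac * k_max
    err = rng.uniform(-gamma, gamma, size=2)
    k_plus, k_minus = k_true + err, k_true - err
    # Keep whichever signed error yields the smaller |k| estimate.
    if np.linalg.norm(k_plus) <= np.linalg.norm(k_minus):
        return k_plus
    return k_minus

# Example: a 5% maximal error on an NA-matching vector (|k| = k_max).
rng = np.random.default_rng(0)
k_true = np.array([1.0, 0.0])
k_est = perturb_na_matching_kvector(k_true, gamma_frac=0.05,
                                    k_max=1.0, rng=rng)
```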


From the simulation results, APIC performance was well-maintained until the error reached 9%. Beyond that, the reconstructions showed obvious artifacts. This suggests that the alignment requirement of the proposed APIC imaging system (e.g., imaging system 100 in FIGS. 1A, 1B, and 3) is fairly relaxed. In other words, APIC imaging is somewhat tolerant to LED position errors.


I. Kramers-Kronig Method and APIC

Using APIC system 100 shown in FIGS. 1A and 1B for data acquisition and a computing device (e.g., computing device 780 in FIG. 7 or computing device 1280 in FIG. 12) for executing instructions for performing operations of an APIC method of an embodiment, a complex field was reconstructed. A USAF target was used for simulation. It was assumed that the LED illumination angles were known and an aberration-free imaging system was used. Aberration was not considered in the reconstruction. For comparison, the spatial Kramers-Kronig method was also used to reconstruct a complex field. As the spatial Kramers-Kronig method cannot perform reconstruction with darkfield measurements, its resolution was limited. APIC advantageously employs complex field reconstruction using darkfield measurements.
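The contrast with the spatial Kramers-Kronig approach can be illustrated with a 1-D toy example: when the logarithm of the field is an analytic signal (one-sided spectrum, as arranged by NA-matching illumination), the phase is the Hilbert transform of the log-amplitude, so the phase follows from the intensity alone without iteration. This sketch is an assumption-laden illustration of the principle, not the method's actual implementation.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal x + i*H(x), H the Hilbert transform."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0     # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def kk_phase_from_intensity(intensity):
    """Spatial Kramers-Kronig sketch in 1-D: when log E(x) is an
    analytic signal, arg E = H(log|E|), so the phase follows from
    the measured intensity |E|^2 alone, in closed form."""
    return np.imag(analytic_signal(0.5 * np.log(intensity)))

# Demo field whose log is analytic by construction.
n = 256
x = np.arange(n)
r = 0.3 * np.cos(2 * np.pi * 3 * x / n) \
    + 0.2 * np.sin(2 * np.pi * 5 * x / n)   # log-amplitude
E = np.exp(analytic_signal(r))               # field with phase H(r)
phase = kk_phase_from_intensity(np.abs(E) ** 2)
```

As the text notes, this closed-form relation only uses measurements within the pupil passband; darkfield measurements carry frequencies outside it, which is where APIC's spectrum extension adds resolution beyond the spatial Kramers-Kronig result.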



FIG. 27B depicts (i) raw amplitude images of a USAF target under normal incidence, (ii) amplitude and phase images of a USAF target generated by a spatial Kramers-Kronig (spatial KK) imaging method, (iii) amplitude and phase images of a USAF target generated by an APIC imaging method of certain embodiments, and (iv) ground truth amplitude and phase images of a USAF target. In this simulation, no aberration was introduced. By comparing the images from the Kramers-Kronig method with the images from the APIC method, it is seen that the resolution of the images generated by the APIC method is much enhanced as compared with the resolution of the images generated by the spatial Kramers-Kronig method.


Many types of computing devices having any of various computer architectures may be employed as the disclosed systems for implementing algorithms. For example, the computing devices may include software components executing on one or more general purpose processors or specially designed processors such as Application Specific Integrated Circuits (ASICs) or programmable logic devices (e.g., Field Programmable Gate Arrays (FPGAs)). Further, the systems may be implemented on a single device or distributed across multiple devices. The functions of the computational elements may be merged into one another or further split into multiple sub-modules.


At one level a software element is implemented as a set of commands prepared by the programmer/developer. However, the module software that can be executed by the computer hardware is executable code committed to memory using “machine codes” selected from the specific machine language instruction set, or “native instructions,” designed into the hardware processor. The machine language instruction set, or native instruction set, is known to, and essentially built into, the hardware processor(s). This is the “language” by which the system and application software communicates with the hardware processors. Each native instruction is a discrete code that is recognized by the processing architecture and that can specify particular registers for arithmetic, addressing, or control functions; particular memory locations or offsets; and particular addressing modes used to interpret operands. More complex operations are built up by combining these simple native instructions, which are executed sequentially, or as otherwise directed by control flow instructions.


The inter-relationship between the executable software instructions and the hardware processor is structural. In other words, the instructions per se are a series of symbols or numeric values. They do not intrinsically convey any information. It is the processor, which by design was preconfigured to interpret the symbols/numeric values, which imparts meaning to the instructions.


The algorithms used herein may be configured to execute on a single machine at a single location, on multiple machines at a single location, or on multiple machines at multiple locations. When multiple machines are employed, the individual machines may be tailored for their particular tasks. For example, operations requiring large blocks of code and/or significant processing capacity may be implemented on large and/or stationary machines.


In addition, certain embodiments relate to tangible and/or non-transitory computer readable media or computer program products that include program instructions and/or data (including data structures) for performing various computer-implemented operations. Examples of computer-readable media include, but are not limited to, memory devices, phase-change devices, magnetic media such as disk drives, magnetic tape, optical media such as CDs, magneto-optical media, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM). The computer readable media may be directly controlled by an end user or the media may be indirectly controlled by the end user. Examples of directly controlled media include the media located at a user facility and/or media that are not shared with other entities. Examples of indirectly controlled media include media that is indirectly accessible to the user via an external network and/or via a service providing shared resources such as the “cloud.” Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.


In some embodiments, code executed during generation or execution of various models on an appropriately programmed system can be embodied in the form of software elements which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, etc.), including a number of instructions for making a computing device (such as personal computers, servers, network equipment, etc.).


In various embodiments, the data or information employed in the disclosed methods and apparatus is provided in an electronic format. Such data or information may include design layouts, fixed parameter values, floated parameter values, feature profiles, metrology results, and the like. As used herein, data or other information provided in electronic format is available for storage on a machine and transmission between machines. Conventionally, data in electronic format is provided digitally and may be stored as bits and/or bytes in various data structures, lists, databases, etc. The data may be embodied electronically, optically, etc.


Modifications, additions, or omissions may be made to any of the above-described embodiments without departing from the scope of the disclosure. Any of the embodiments described above may include more, fewer, or other features without departing from the scope of the disclosure. Additionally, the steps of described features may be performed in any suitable order without departing from the scope of the disclosure. Also, one or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the disclosure. The components of any embodiment may be integrated or separated according to particular needs without departing from the scope of the disclosure.


It should be understood that certain aspects described above can be implemented in the form of logic using computer software in a modular or integrated manner. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement the present invention using hardware and a combination of hardware and software.


Any of the software components or functions described in this application, may be implemented as software code using any suitable computer language and/or computational software such as, for example, Java, C, C#, C++ or Python, LabVIEW, Mathematica, or other suitable language/computational software, including low level code, including code written for field programmable gate arrays, for example in VHDL. The code may include software libraries for functions like data acquisition and control, motion control, image acquisition and display, etc. Some or all of the code may also run on a personal computer, single board computer, embedded controller, microcontroller, digital signal processor, field programmable gate array and/or any combination thereof or any similar computation device and/or logic device(s). The software code may be stored as a series of instructions, or commands on a CRM such as a random access memory (RAM), a read only memory (ROM), a magnetic media such as a hard-drive or a floppy disk, or an optical media such as a CD-ROM, or solid state storage such as a solid state hard drive or removable flash memory device or any suitable storage device. Any such CRM may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network. Although the foregoing disclosed embodiments have been described in some detail to facilitate understanding, the described embodiments are to be considered illustrative and not limiting. It will be apparent to one of ordinary skill in the art that certain changes and modifications can be practiced within the scope of the appended claims.


The terms “comprise,” “have” and “include” are open-ended linking verbs. Any forms or tenses of one or more of these verbs, such as “comprises,” “comprising,” “has,” “having,” “includes” and “including,” are also open-ended. For example, any method that “comprises,” “has” or “includes” one or more steps is not limited to possessing only those one or more steps and can also cover other unlisted steps. Similarly, any composition or device that “comprises,” “has” or “includes” one or more features is not limited to possessing only those one or more features and can cover other unlisted features.


All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the present disclosure and does not pose a limitation on the scope of the present disclosure otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the present disclosure.


Groupings of alternative elements or embodiments of the present disclosure disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.

Claims
  • 1. An imaging method comprising: (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements;(b) combining the complex field spectrums to determine a reconstructed sample spectrum; and(c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements.
  • 2. The method of claim 1, wherein: the NA-matching measurements are acquired for incident illumination angles equal, or nearly equal, to a maximum acceptance angle of collection optics of an imaging system configured to acquire the NA-matching intensity measurements and the darkfield intensity measurements; andthe darkfield intensity measurements are acquired for incident illumination angles greater than the maximum acceptance angle of the collection optics of the imaging system.
  • 3. The method of claim 1, further comprising: (i) extracting a system aberration from the plurality of complex field spectrums; and(ii) removing the system aberration from each of the complex field spectrums prior to (b).
  • 4. The method of claim 3, wherein (i) comprises: determining a phase difference in each overlapping portion of the complex field spectrums; anddetermining the system aberration from the phase differences.
  • 5. The method of claim 4, further comprising applying a weighting factor to one or more frequencies of each of the complex field spectrums.
  • 6. The method of claim 1, wherein (c) comprises, for each darkfield measurement, calculate an auto-correlation part of the reconstructed sample spectrum;subtract the auto-correlation part from a spectrum of the darkfield measurement;extract a non-overlapping portion of a first cross-correlation part, wherein the non-overlapping portion does not overlap with the auto-correlation part or with a second cross correlation part;construct a correlation operator from the non-overlapping portion;apply the correlation operator to the spectrum of the darkfield measurement to determine an unknown part; andfill in the reconstructed sample spectrum with the unknown part.
  • 7. A computer program product, comprising a non-transitory computer readable medium having computer-executable instructions for performing: (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements;(b) combining the complex field spectrums to determine a reconstructed sample spectrum; and(c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements.
  • 8. The computer program product of claim 7, wherein: the NA-matching measurements are acquired for incident illumination angles equal, or nearly equal, to a maximum acceptance angle of collection optics of an imaging system configured to acquire the NA-matching intensity measurements and the darkfield intensity measurements; andthe darkfield intensity measurements are acquired for incident illumination angles greater than the maximum acceptance angle of the collection optics of the imaging system.
  • 9. The computer program product of claim 7, further comprising additional computer-executable instructions for: (i) extracting a system aberration from the plurality of complex field spectrums; and(ii) removing the system aberration from each of the complex field spectrums prior to (b).
  • 10. The computer program product of claim 9, wherein (i) comprises: determining a phase difference in each overlapping portion of the complex field spectrums; anddetermining the system aberration from the phase differences.
  • 11. The computer program product of claim 10, further comprising additional computer-executable instructions for applying a weighting factor to one or more frequencies of each of the complex field spectrums.
  • 12. The computer program product of claim 7, wherein (c) comprises, for each darkfield measurement, calculate an auto-correlation part of the reconstructed sample spectrum;subtract the auto-correlation part from a spectrum of the darkfield measurement;extract a non-overlapping portion of a first cross-correlation part, wherein the non-overlapping portion does not overlap with the auto-correlation part or with a second cross correlation part;construct a correlation operator from the non-overlapping portion;apply the correlation operator to the spectrum of the darkfield measurement to determine an unknown part; andfill in the reconstructed sample spectrum with the unknown part.
  • 13. An imaging system, comprising: an optical system having collection optics;an illumination device configured to provide illumination at a plurality of NA-matching illumination angles at a first sequence of sample times and provide illumination at a plurality of darkfield illumination angles at a second sequence of sample times;at least one radiation detector configured to receive light from the optical system and acquire a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements; anda computing device configured to:(a) reconstruct a plurality of complex field spectrums from the plurality of NA-matching intensity measurements;(b) combine the complex field spectrums to determine a reconstructed sample spectrum; and(c) use the plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements.
  • 14. The imaging system of claim 13, wherein: the NA-matching illumination angles are equal, or nearly equal, to a maximum acceptance angle of the collection optics; andthe darkfield illumination angles are greater than the maximum acceptance angle of the collection optics of the imaging system.
  • 15. The imaging system of claim 13, further comprising a galvo motor configured to direct a laser beam from one or more lasers to each of a plurality of mirrors to reflect the laser beam to the plurality of NA-matching illumination angles and the plurality of darkfield illumination angles.
  • 16. The imaging system of claim 13, wherein: the illumination device includes a single illumination source; andthe at least one radiation detector comprises a plurality of cameras, each camera having a different detection plane.
  • 17. The imaging system of claim 13, wherein the computing device is further configured to: (i) extract a system aberration from the plurality of complex field spectrums; and(ii) remove the system aberration from each of the complex field spectrums prior to (b).
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims benefit of and priority to U.S. Provisional Patent Application No. 63/455,878, titled “High-Resolution, Large Field-Of-View Label-Free Imaging Via Aberration-Corrected, Closed-Form Complex Field Reconstruction,” and filed on Mar. 30, 2023 and to U.S. Provisional Patent Application No. 63/536,265, titled “High-Resolution, Large Field-Of-View Label-Free Imaging Via Aberration-Corrected, Closed-Form Complex Field Reconstruction,” and filed on Sep. 1, 2023, which are incorporated by reference herein in their entireties and for all purposes.

Provisional Applications (2)
Number Date Country
63536265 Sep 2023 US
63455878 Mar 2023 US