Certain aspects generally pertain to computational imaging and, more specifically, to ptychographic imaging.
Computational imaging empowers modern microscopy with the ability to produce high-resolution, large field-of-view, aberration-free images that improve digital pathology and broadly apply to other high-throughput imaging fields. One dominant computational label-free imaging method, Fourier ptychographic microscopy (FPM), effectively increases the spatial-bandwidth product of conventional microscopy using multiple tilted illuminations to achieve high-throughput imaging. However, its iterative reconstruction is subject to convergence criteria. Kramers-Kronig methods have shown the potential to analytically reconstruct a complex field from intensity measurements, but these methods do not address aberrations.
Background and contextual descriptions contained herein are provided solely for the purpose of generally presenting the context of the disclosure. Much of this disclosure presents work of the inventors, and simply because such work is described in the background section or presented as context elsewhere herein does not mean that such work is admitted prior art.
Certain embodiments pertain to an imaging method comprising (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements, (b) combining the complex field spectrums to determine a reconstructed sample spectrum, and (c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the imaging method further comprises (i) extracting a system aberration from the plurality of complex field spectrums and (ii) removing the system aberration from each of the complex field spectrums prior to (b).
Certain embodiments pertain to a computer program product comprising a non-transitory computer readable medium having computer-executable instructions for performing (a) reconstructing a plurality of complex field spectrums from a respective plurality of NA-matching intensity measurements, (b) combining the complex field spectrums to determine a reconstructed sample spectrum, and (c) using a plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the computer program product comprises additional computer-executable instructions for (i) extracting a system aberration from the plurality of complex field spectrums and (ii) removing the system aberration from each of the complex field spectrums prior to (b).
Certain embodiments pertain to an imaging system comprising an optical system having collection optics, an illumination device configured to provide illumination at a plurality of NA-matching illumination angles at a first sequence of sample times and provide illumination at a plurality of darkfield illumination angles at a second sequence of sample times, at least one radiation detector configured to receive light from the optical system and acquire a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements, and a computing device. The computing device is configured to: (a) reconstruct a plurality of complex field spectrums from the plurality of NA-matching intensity measurements, (b) combine the complex field spectrums to determine a reconstructed sample spectrum, and (c) use the plurality of darkfield intensity measurements to extend the reconstructed sample spectrum to obtain an image with higher resolution than the NA-matching intensity measurements. In some embodiments, the computing device is further configured to (i) extract a system aberration from the plurality of complex field spectrums and (ii) remove the system aberration from each of the complex field spectrums prior to (b).
These and other features and embodiments will be described in more detail with reference to the drawings.
Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The figures and components therein may not be drawn to scale.
Different aspects are described below with reference to the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the presented embodiments. The disclosed embodiments may be practiced without one or more of these specific details. In other instances, well-known operations have not been described in detail to avoid unnecessarily obscuring the disclosed embodiments. While the disclosed embodiments will be described in conjunction with the specific embodiments, it will be understood that it is not intended to limit the disclosed embodiments.
Over the past few decades, remarkable progress has been made in both fluorescence and label-free imaging. One such representative label-free technique, Fourier ptychographic microscopy (FPM), leverages the power of computation to provide high-resolution imaging and aberration-correction abilities to low numerical aperture (NA) objectives. Conventional FPM operates by collecting a series of low-resolution images under tilted illumination and applying an iterative phase retrieval algorithm to reconstruct the sample's high spatial-frequency features and the optical aberration, resulting in high-resolution, aberration-free imaging that preserves the inherently large field of view (FOV) associated with low-NA objectives. However, its iterative algorithm can pose challenges. First, its iterative reconstruction is typically a non-convex optimization, which means it is not guaranteed to converge. As a result, FPM does not guarantee that the global optimal solution is ever reached. This may be problematic for exacting applications, such as digital pathology, where even small errors in the image are not tolerable. Furthermore, the joint optimization of aberration and sample spectrum of conventional FPM can sometimes fail when the system's aberrations are sufficiently severe, which may lead to poor reconstructions.
Spatial-domain Kramers-Kronig relations have shown that a complex field can be non-iteratively reconstructed in one specific varied illumination microscopy scenario by matching the illumination angle to the objective's maximal acceptance angle and exploiting the signal analyticity. However, this approach does not possess the ability to correct for hybrid aberrations or provide resolution enhancement beyond the diffraction limit of the objective NA.
Certain embodiments pertain to angular ptychographic imaging with closed-form (APIC) techniques that can recover complex fields, retrieve aberrations, and/or reconstruct the darkfield-associated high spatial frequency spectrum to expand the sample spectrum in a purely analytical way. By avoiding iterative algorithms, APIC techniques are advantageous in that they do not require convergence metrics and have been demonstrated to consistently obtain a closed-form solution. This may enable faster computational time over existing techniques that use iterative algorithms. APIC techniques have also demonstrated robustness against aberrations, including complex aberrations, where existing iterative techniques have failed. Using NA-matching and darkfield measurements, APIC techniques can be used to reconstruct high-resolution aberration-free complex fields when a low magnification, large FOV objective is used for data acquisition. Due to their analytical nature, APIC techniques are inherently insensitive to optimization parameters such as convergence metrics.
As used herein, an “NA-matching measurement” (also sometimes referred to as a “NA-matching intensity measurement”) refers to an intensity measurement (also sometimes referred to herein as “intensity image” or “raw image,”) acquired while incident illumination is at an NA-matching illumination angle. As used herein, an “NA-matching illumination angle” refers to an illumination angle that is equal to, or nearly equal to (e.g., within 1 degree, within 2 degrees, or within 3 degrees), the maximum acceptance angle of the collection optics (e.g., objective) of the imaging system acquiring the image. In various embodiments, a plurality of NA-matching intensity measurements is acquired at a sequence of exposure times during which incident illumination sequentially shifts to each one of the NA-matching angles. In these examples, each NA-matching intensity measurement is acquired while incident illumination is at one of the NA-matching illumination angles. In alternate embodiments, multiplexing illumination may be employed where incident illumination is cycled through a sequence of illumination patterns. Each illumination pattern includes simultaneous incident illumination from multiple NA-matching illumination angles. In these multiplexing embodiments, each NA-matching intensity measurement is acquired during an exposure time during which incident illumination is at one of the illumination patterns.
As used herein, a “darkfield measurement” (also sometimes referred to as a “darkfield intensity measurement”) refers to an intensity measurement acquired while incident illumination is at a darkfield illumination angle. As used herein, a “darkfield illumination angle” refers to an illumination angle that is greater than the maximum acceptance angle of the collection optics. Each darkfield measurement is acquired during an exposure time while the specimen is illuminated at one of the darkfield illumination angles in the sequence. In one example, the darkfield illumination angles may be in a range of 1 degree to 5 degrees greater than the maximum acceptance angle. In another example, the darkfield illumination angles may be in a range of 3 degrees to 5 degrees greater than the maximum acceptance angle. In another example, the darkfield illumination angles may be more than 1 degree greater than the maximum acceptance angle.
As used herein, an “NA-matching illumination source” refers to an illumination source that is configured to provide incident illumination that is equal to, or nearly equal to (e.g., within 1 degree, within 2 degrees, or within 3 degrees), the maximum acceptance angle of the collection optics (e.g., objective) of the imaging system acquiring the image.
As used herein, a “darkfield illumination source” refers to an illumination source that is configured to provide incident illumination that is greater than the maximum acceptance angle of the collection optics of the imaging system acquiring the image.
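For illustration only, the relationship between the objective NA and the maximum acceptance angle, and the resulting classification of an illumination angle as NA-matching or darkfield, may be sketched as follows (the 2-degree tolerance, the function names, and the unit-index immersion medium are illustrative assumptions, not limitations of any embodiment):

```python
import numpy as np

def max_acceptance_angle_deg(objective_na, n_medium=1.0):
    """Maximum acceptance half-angle of the collection optics, in degrees."""
    return np.degrees(np.arcsin(objective_na / n_medium))

def classify_angle(illum_angle_deg, objective_na, tol_deg=2.0):
    """Label an illumination angle as brightfield, NA-matching, or darkfield."""
    theta_max = max_acceptance_angle_deg(objective_na)
    if abs(illum_angle_deg - theta_max) <= tol_deg:
        return "NA-matching"
    return "darkfield" if illum_angle_deg > theta_max else "brightfield"
```

For the NA 0.25 objective mentioned above, the maximum acceptance angle works out to roughly 14.5 degrees, so an illumination angle near 14.5 degrees would be NA-matching and one at, e.g., 18 degrees would be darkfield.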
As used herein, a “spectrum” (also sometimes referred to as a “complex field spectrum”) generally refers to a spatial frequency spectrum, which is the Fourier transform of the sample's complex field. The spectrum is different from the Fourier transform of an acquired intensity image, which is the Fourier transform of a purely intensity measurement.
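A minimal numerical illustration of this distinction (the toy complex field and array size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy complex sample field with random amplitude and random phase.
field = rng.random((32, 32)) * np.exp(1j * 2 * np.pi * rng.random((32, 32)))

spectrum = np.fft.fft2(field)                   # complex field spectrum
intensity_ft = np.fft.fft2(np.abs(field) ** 2)  # Fourier transform of an intensity image

# The two transforms carry different information: the intensity transform
# has already lost the field's phase, so it does not equal the spectrum.
print(np.allclose(spectrum, intensity_ft))  # False
```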
As used herein, a “known sample spectrum” refers to a prior reconstructed spectrum. The reconstructed spectrum expands as more images are used in the reconstruction, and this known sample spectrum also grows during this process. Thus, the known sample spectrum at step i can be a subset of the known spectrum at step i+1.
APIC system 100 includes an illumination device 110, an optical system 130, and a radiation detector 140. Optical system 130 includes an aperture 132, an objective 134 in optical communication with the aperture 132, and a lens 136 in optical communication with the objective 134. Radiation detector 140 is configured to acquire intensity images based on light passing from lens 136. In one implementation, the objective is a low magnification objective such as a 10× magnification, NA 0.25 objective (e.g., a 10× magnification, NA 0.25 objective sold by Olympus).
Illumination device 110 includes a rectangular light emitting diode (LED) array 111 having 81 illumination sources (e.g., LEDs) 115 and a ring of NA-matching illumination sources 116a (e.g., Neopixel ring 16 made by Adafruit) disposed on the LED array 111. The ring of 16 NA-matching illumination sources 116a has a diameter (defined along a circle at the centerline of the illumination sources) sized such that the illumination angle steps through a plurality of NA-matching illumination angles as different illumination sources of the ring 116a along the circumference of the circle are illuminated in sequence in the clockwise direction, as depicted by an arrow. The NA-matching illumination angles are equal to, or nearly equal to, the acceptance angle of objective 134. Additional or fewer illumination sources may be included in the LED array 111 or the ring 116a in other implementations. Also, in another implementation, the illumination sources may be illuminated in a different order.
In
As depicted in
In
Although not shown, APIC system 100 may also include a computing device (e.g., computing device 780 in
In one implementation, APIC system 100 may also include at least one motorized stage upon which the illumination device 110 is mounted. The at least one motorized stage may be configured to adjust the position of the illumination device 110.
In certain embodiments, an APIC imaging method includes a reconstruction process that analytically solves for a sample's spatial frequency spectrum and system aberration with NA-matching intensity measurements. In addition, or in alternate embodiments, the reconstruction process may use one or more darkfield intensity measurements to extend the sample's spatial frequency spectrum, which may advantageously enhance the resolution of an NA-limited imaging system.
The reconstruction process of the FPM imaging method includes iteratively updating the sample spectrum and the aberration to minimize the differences between the measurements and the reconstruction output. This iterative process is terminated upon convergence to obtain the sample spectrum and the coherent transfer function (CTF) estimate. The illustrated pupil represents the reconstructed aberrations. FPM reconstruction treats darkfield and brightfield measurements indistinguishably: all of the data are given to the FPM algorithm for optimization. APIC reconstruction, in contrast, handles the NA-matching measurements and darkfield measurements in different ways.
In certain embodiments, an APIC imaging method includes reconstructing the complex field spectrums corresponding to a plurality of NA-matching measurements acquired by an APIC system (e.g., APIC system 100 in
The APIC imaging method may use various numbers of NA-matching measurements to reconstruct the complex field spectrums. In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 10 NA-matching measurements.
For realistic imaging systems, aberrations may be superimposed on the sample spectrum's phase. As discussed in some detail in Section V (C), the sample-dependent phases are identical in the overlapping region of two spectrums, and subtracting their phases cancels out the sample-dependent phase term, leaving only the phase differences between different parts of the pupil function (the aberrations). Referring to
In one implementation, an APIC imaging method includes extracting an aberration such as an aberration introduced by an objective (e.g., objective 134). In this example, the APIC imaging method determines a phase difference in each overlapping portion of the reconstructed complex field spectrums and uses the phase differences to extract the aberration. The APIC imaging method uses the extracted aberration to correct the reconstructed complex field spectrums and stitches (sums) the corrected spectrums together to obtain a reconstructed sample spectrum (also sometimes referred to herein as a “known sample spectrum”).
In addition, or in an alternate embodiment, the APIC imaging method may include extending the reconstructed sample spectrum using one or more darkfield measurements.
In some embodiments, an APIC imaging method that extends the sample spectrum using one or more darkfield measurements may also correct for system aberration. In these embodiments, to be able to match up the known spectrum with the spectrum of the darkfield measurement, a system aberration (e.g., a recovered aberration) may be introduced into the cropped-out known spectrum. After recovering the unknown spectrum, the aberration may be corrected and the corrected unknown spectrum filled back into the reconstructed spectrum. Once the reconstructed spectrum is filled in with the corrected unknown spectrum of one or more darkfield measurements, an increased-resolution, aberration-free sample image can be obtained (e.g., via inverse Fourier transform).
The theoretical optical resolution of APIC methods is determined by the sum of the illumination NA and the objective NA. Since APIC techniques analytically recover the actual sample spectrum, these techniques may be advantageous over existing iterative techniques that rely on optimization parameters.
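A back-of-the-envelope calculation of this resolution limit under NA-matching illumination (the 0.52 µm wavelength is an assumed value chosen only for illustration):

```python
# Effective NA under oblique illumination: the illumination NA adds to the
# objective NA. With NA-matching illumination the two are equal.
wavelength_um = 0.52      # assumed green illumination (illustrative)
objective_na = 0.25
illumination_na = 0.25    # NA-matching: illumination NA equals objective NA

effective_na = objective_na + illumination_na
resolution_um = wavelength_um / effective_na  # coherent resolution limit

print(effective_na)             # 0.5
print(round(resolution_um, 2))  # 1.04
```

Darkfield measurements with illumination NA beyond the objective NA would push the effective NA, and hence the resolution, further.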
In certain embodiments, an APIC imaging method includes a data acquisition procedure for acquiring a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements. In some cases, control signals are communicated to the radiation detector to trigger taking the intensity measurements and/or control signals are communicated to the illumination device to activate different illumination sources to change the illumination angle. For example, different illumination sources in the ring of NA-matching illumination sources may be activated to change the illumination angle. In other implementations, multiplexing illumination may be employed to illuminate with a series of illumination patterns.
It should also be noted that the operations of the APIC imaging method may be performed in any suitable order, not necessarily the order depicted in
At operation 410, a plurality of NA-matching intensity measurements of a specimen is retrieved from memory or received directly in a signal from a radiation detector (e.g., radiation detector 740 in
At operation 420, a plurality of darkfield measurements is retrieved from memory or received directly in a signal from the radiation detector. The darkfield measurements are acquired while varying illumination angles that are greater than a maximum acceptance angle of the collection optics of the imaging system. In one implementation, the illumination angles may be in a range of 1 degree to 5 degrees greater than the maximum acceptance angle. In another implementation, the illumination angles may be in a range of 3 degrees to 5 degrees greater than the maximum acceptance angle. In another implementation, the illumination angles may be more than 1 degree greater than the maximum acceptance angle. In some embodiments, the darkfield measurements are acquired while a specimen is being illuminated by a sequence of darkfield illumination angles. Each darkfield illumination angle is greater than the maximum acceptance angle of the collection optics of the APIC imaging system. In some embodiments, each darkfield measurement is acquired during an exposure time while the specimen is illuminated at one of the darkfield illumination angles in the sequence. In an alternate embodiment, multiplexing illumination may be employed where the specimen is illuminated by a sequence of illumination patterns, where each illumination pattern includes simultaneous illumination from multiple darkfield illumination angles. In some embodiments, a computing device may send trigger signals to the illumination device to activate different illumination sources to provide illumination at the darkfield illumination angles sequentially. Alternatively or in addition, the computing device may send trigger signals to the radiation detector to take the darkfield intensity measurements.
At operation 430, a complex field spectrum is reconstructed from each of the NA-matching intensity measurements to obtain a plurality of complex field spectrums. In some embodiments, a transformed signal of each complex field spectrum is reconstructed from a corresponding NA-matching intensity measurement using Eqn. 32. The complex field may be restored in real space by applying an inverse Fourier transform and applying an exponential function to each point of the inverse transform, as provided in Eqn. 33. In some cases, Kramers-Kronig relations may be used to reconstruct complex field spectrums from NA-matching measurements.
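A highly simplified sketch of a Kramers-Kronig style field reconstruction is shown below. It is not the exact procedure of Eqns. 32-33; it only illustrates the general idea that, under NA-matching illumination, the field's phase can be obtained from the log-amplitude via a Hilbert transform along the illumination direction (the function names, the clipping floor, and the constant-intensity sanity check are illustrative assumptions):

```python
import numpy as np

def analytic_signal(x, axis=0):
    """FFT-based analytic signal of a real array (one-sided spectrum filter)."""
    n = x.shape[axis]
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * x.ndim
    shape[axis] = n
    return np.fft.ifft(np.fft.fft(x, axis=axis) * h.reshape(shape), axis=axis)

def kk_field_from_intensity(intensity, axis=0):
    """Sketch: treat the log-amplitude as the real part of a signal that is
    analytic along the illumination direction, so its Hilbert transform
    (imaginary part of the analytic signal) supplies the field's phase."""
    log_amp = 0.5 * np.log(np.maximum(intensity, 1e-12))  # ln|field|
    phase = np.imag(analytic_signal(log_amp, axis=axis))
    return np.sqrt(intensity) * np.exp(1j * phase)

# Sanity check: a featureless measurement yields a flat, zero-phase field.
field = kk_field_from_intensity(np.ones((8, 16)))
```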
At operation 440, a system aberration is extracted from the plurality of complex field spectrums reconstructed from the NA-matching intensity measurements. As discussed in more detail in Section V (C) with reference to
At operation 450, aberration is corrected in each of the complex field spectrums to generate a plurality of aberration-corrected spectrums. To remove the system aberration from each reconstructed spectrum, the aberration phase is subtracted from the phase of the reconstructed spectrum (i.e., the spectrum is multiplied by the conjugate of the aberration function) to generate a corresponding aberration-corrected spectrum.
At operation 460, the aberration-corrected spectrums are stitched together to generate a reconstructed sample spectrum (also referred to herein as a “(initial) known sample spectrum” or a “calculated sample spectrum”). For example, the aberration-corrected spectrums may be summed together using a weighted average.
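One possible sketch of the correction-and-stitching step is given below (the function signature, offset convention, and overlap-count weighting are illustrative assumptions, not the claimed procedure):

```python
import numpy as np

def correct_and_stitch(spectrums, offsets, ctf_mask, aberration_phase, shape):
    """Remove a common pupil aberration phase from each reconstructed
    spectrum, then stitch the corrected spectrums into one sample spectrum
    by averaging with overlap counts as weights."""
    acc = np.zeros(shape, dtype=complex)
    weight = np.zeros(shape)
    for spec, (dy, dx) in zip(spectrums, offsets):
        # Multiplying by exp(-i * phase) subtracts the aberration phase
        # inside the CTF support.
        corrected = spec * np.exp(-1j * aberration_phase) * ctf_mask
        ys, xs = np.nonzero(ctf_mask)
        acc[ys + dy, xs + dx] += corrected[ys, xs]
        weight[ys + dy, xs + dx] += 1.0
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0)
```

For example, a single flat spectrum carrying a pure aberration phase of π is corrected back to a constant by this routine.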
At operation 470, the reconstructed (known) sample spectrum is extended using the plurality of darkfield measurements. An inverse Fourier transform is applied to the extended sample spectrum to obtain an extended-resolution, aberration-free reconstructed image with higher resolution than the NA-matching intensity measurements or the darkfield measurements. In certain implementations, unknown parts of the sample spectrum are recovered from the darkfield measurements and a spectrum spanning procedure is used to fill in the sample spectrum with the recovered unknown parts. Details regarding an example of an operation for extending the known sample spectrum are provided in Section V (D).
In some embodiments, the APIC imaging method discussed with respect to
As discussed in more detail in subsection V (C) with reference to
Returning to
At suboperation 530, a system aberration may be determined using the phase differences from the overlapping portions. In certain implementations, the system aberration is determined by solving for the linear operator that maps the phase differences of the overlapping portions of the complex field spectrums to the system aberration. An example of a linear operator is provided in Eqn. 48. In this example, phase differences for the overlapping portions can be entered for different pairs of overlapping spectrums into Eqn. 48 and the system aberration can be solved from the populated matrix with the phase differences. In one implementation, the phase differences of a smaller set of pairs of overlapping spectrums from the total number of pairs of overlapping spectrums in the full spectrum of the sample are used, which may be advantageous to reduce computational resources. An example of a 2D aberration of the APIC imaging system is given by Eqn. 51.
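A miniature numerical analogue of this linear solve is sketched below. It is not Eqn. 48 itself; the pupil size, the choice of pairs, and the gauge constraint pinning one pupil value to zero are illustrative assumptions:

```python
import numpy as np

# The measured quantities are pairwise phase differences d = phi[i] - phi[j]
# between pupil samples that fall in overlapping spectrum regions. Stacking
# one row per pair gives a linear system whose least-squares solution
# recovers the pupil phase, up to a global constant fixed by phi[0] = 0.
rng = np.random.default_rng(1)
n = 6                                   # toy number of pupil samples
phi_true = rng.normal(size=n)           # ground-truth pupil (aberration) phase
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

A = np.zeros((len(pairs) + 1, n))
b = np.zeros(len(pairs) + 1)
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = 1.0, -1.0    # encodes phi[i] - phi[j]
    b[row] = phi_true[i] - phi_true[j]  # "measured" phase difference
A[-1, 0] = 1.0                          # gauge constraint: phi[0] = 0

phi_est, *_ = np.linalg.lstsq(A, b, rcond=None)
# phi_est matches phi_true up to the constant offset phi_true[0].
```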
In some implementations, the suboperations in
At suboperation 610, one of the darkfield measurements may be selected from the plurality of darkfield measurements. In one example, the darkfield measurement with a spectrum closest to a known reconstructed sample spectrum is selected (e.g., known sample spectrum from operation 460).
The sampled spectrum of the darkfield measurement (e.g., sampled spectrum 1810 in
At optional (denoted by dashed line) suboperation 630, the known system aberration may be added back into the cropped out known part of the spectrum of the selected darkfield measurement to obtain an uncorrected known part. In another implementation, the known system aberration may be added into the spectrum of the darkfield measurement prior to suboperation 620.
In certain embodiments, APIC imaging methods correct for aberrations such as an imaging system aberration (e.g., system aberration extracted at operation 440 in
As discussed in detail in Section V(D), the spectrum of the darkfield measurement (e.g., Fourier transform of the darkfield intensity measurement 1830 in
At suboperation 650, the portion of one of the cross-correlation terms that does not overlap with the other cross-correlation term or with the autocorrelation term of the unknown spectrum is extracted (by discarding the overlap). For example, the possible non-zero area of the other cross-correlation term may be determined using the shape of the known sample spectrum and that of the unknown spectrum of the darkfield measurement. The non-zero area in their autocorrelation term can be determined from the two as well. The non-overlapping portion can be extracted by setting to zero any possible non-zero area covered by the autocorrelation of the unknown spectrum (autocorrelation of unknown spectrum 1842) and the other cross-correlation term (first cross-correlation term 1844 in
The non-overlapping portion is linearly related to the unknown part of the spectrum of the darkfield measurement. A linear equation (e.g., part of a correlation operator) may be obtained with respect to the unknown part of the darkfield measurement. This linear equation may be used to form a closed-form solution for the unknown part from each darkfield measurement. At suboperation 660, a correlation operator may be constructed from the known spectrum based on the extracted portion (1852 in
In some implementations, by constructing a correlation operator, a linear equation with respect to the unknown part of the sample spectrum is determined. In calculating the cross-correlation of the known part of the signal and the unknown part of the signal, one of the signals is shifted and multiplied by the other signal; the summation of this product is a correlation coefficient. Equivalently, the known signal can be used as a weight to calculate a weighted version of the unknown signal, and the summation of the weighted unknown signal is the correlation coefficient. This process is linear in terms of the unknown spectrum, so a linear equation can be formed and solved for the unknown spectrum.
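The linearity argument above can be illustrated with a small 1D example (the correlation convention, array sizes, and least-squares recovery are illustrative assumptions; this is not the document's correlation operator itself):

```python
import numpy as np

def corr_matrix(known, m):
    """Matrix of the linear map u -> c with c[s] = sum_n conj(known[n]) * u[n + s].
    Cross-correlation with a fixed known signal is linear in the unknown u."""
    N = len(known)
    M = np.zeros((N + m - 1, m), dtype=complex)
    for row in range(N + m - 1):
        s = row - (N - 1)              # shift s runs from -(N-1) to m-1
        for j in range(m):             # column j multiplies the unknown sample u[j]
            n = j - s                  # known-signal index aligned with u[j]
            if 0 <= n < N:
                M[row, j] = np.conj(known[n])
    return M

# Because the map is linear, the unknown part can be recovered in closed
# form (least squares) once the correlation values are available.
rng = np.random.default_rng(2)
known = rng.normal(size=5) + 1j * rng.normal(size=5)    # "known" spectrum part
u_true = rng.normal(size=4) + 1j * rng.normal(size=4)   # "unknown" spectrum part
c = corr_matrix(known, 4) @ u_true                      # simulated correlation values
u_est, *_ = np.linalg.lstsq(corr_matrix(known, 4), c, rcond=None)
```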
At suboperation 670, the correlation operator is applied to the spectrum of the darkfield measurement to recover the unknown part.
In implementations where the imaging method corrects for aberration, after recovering the unknown part, the aberration is corrected in the unknown part. At optional (denoted by dashed line) suboperation 680, the known system aberration is removed (subtracted) from the unknown part.
At suboperation 690, once the unknown part of the sample spectrum is recovered, the sample spectrum is filled in (summed) with the recovered unknown part.
In certain implementations, the suboperations in
In certain implementations, one or more unknown parts of the sample spectrum are recovered and a spectrum spanning procedure is used to fill in the sample spectrum with the recovered unknown parts. For example, the suboperations in
According to embodiments, an APIC imaging method uses a plurality of NA-matching intensity measurements and/or a plurality of darkfield intensity measurements in a reconstruction process. Various numbers of these intensity measurements may be used.
In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 9 NA-matching measurements.
In some embodiments, an APIC imaging method uses at least one darkfield measurement. In some embodiments, an APIC imaging method uses between 1 and 10 darkfield measurements. In some embodiments, an APIC imaging method uses between 10 and 20 darkfield measurements. In some embodiments, an APIC imaging method uses between 20 and 30 darkfield measurements. In some embodiments, an APIC imaging method uses more than 10 darkfield measurements.
APIC imaging device 701 also includes one or more radiation detectors 740 in communication with the optical system 730 to receive light. The one or more radiation detectors 740 are configured to acquire a plurality of NA-matching intensity measurements while the illumination device 710 provides illumination at NA-matching illumination angles and acquire a plurality of darkfield intensity measurements while the illumination device 710 provides illumination at darkfield illumination angles. The computing device 780 includes one or more processors 782, a non-transitory computer readable medium (CRM) 784, and an optional (denoted by dashed line) display 786. The one or more processors 782 are in electrical communication with one or more radiation detectors 740 to receive a signal with a plurality of NA-matching intensity measurements and a plurality of darkfield intensity measurements and/or to send control signals to the one or more radiation detectors 740, for example, to trigger image acquisition. Communication between one or more system components may be in wired and/or wireless form.
In some embodiments, the illumination device of an APIC imaging system includes one or more illumination sources. In other cases, the illumination device is in communication with (e.g., via optical fibers) the one or more illumination sources to receive illumination. For example, illumination device 1110 in
Illumination devices of various formats may be used. In one embodiment, an illumination device includes a galvo motor configured to receive a laser beam and a plurality of mirrors configured to reflect the laser beam at different illumination angles. In another embodiment, an illumination device includes one or more rings of illumination sources mounted onto a supporting structure (e.g., a flat plate, a hemispherical plate, a semi-hemispherical plate, a partial conical plate, etc.). For example, an illumination device may have a first ring of illumination sources configured for NA-matching illumination angles and a second ring of illumination sources configured for darkfield illumination angles. In another embodiment, the illumination device includes a single illumination source. In another embodiment, the illumination device includes a rectangular array of illumination sources and/or a ring of illumination sources. For example, an illumination device may include an LED array (e.g., RGB LED Matrix sold by Adafruit) and/or LED ring (e.g., Neopixel ring 16 sold by Adafruit). As another example, an LCD pixel array and/or LCD pixel ring may be used. One skilled in the art would contemplate other sources of radiation and other formats of illumination devices that can be implemented by an APIC imaging system.
The illumination sources of different embodiments may provide electromagnetic waves of various wavelengths. In some cases, the illumination sources of the illumination device provide wavelengths within the visible spectrum. In other cases, the illumination sources provide electromagnetic waves in the ultraviolet spectrum. In yet other cases, the illumination sources provide electromagnetic waves in the infrared spectrum.
In some embodiments, the illumination device includes a first plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of NA-matching illumination angles for acquiring a plurality of NA-matching measurements and a second plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of darkfield illumination angles for acquiring a plurality of darkfield measurements. For example, illumination device 110 in
In some embodiments, an APIC imaging system has one or more motorized translational stages upon which the illumination device is mounted to adjust the position (height and x-y translational position) of the illumination device. In some cases, the motorized translational stage(s) is/are in communication with the computing device of the APIC imaging system to control the movement. In one implementation, two motorized translational stages are used. In this implementation, two circuit boards (e.g., Arduino Uno boards) may be in communication with the motorized translational stages respectively to control them individually.
In various embodiments, an APIC imaging system includes one or more radiation detectors (e.g., radiation detector 140 in
An APIC system may include a computing device for performing one or more functions of the APIC imaging system such as, e.g., one or more operations of an APIC imaging method. The computing device may include one or more processors and a non-transitory computer readable medium in electrical communication with the one or more processors. Optionally, the computing device may also have a display that is in electrical communication with the one or more processors. The computing device can be in various forms such as, for example, a smartphone, laptop, desktop, tablet, etc. An example of a suitable computing device is a personal computer having a non-transitory computer readable medium with 16 GB RAM and a processor including a CPU (e.g., Intel Core i5-8259U).
In some cases, the computing device may include a controller for controlling functionality of the APIC system. In one example, the controller may include one or more circuit boards (e.g., an Arduino Uno board). In one implementation, a first plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of NA-matching illumination angles for acquiring a plurality of NA-matching measurements are controlled by a first circuit board and a second plurality of illumination sources configured to provide illumination sequentially at a corresponding plurality of darkfield illumination angles for acquiring a plurality of darkfield measurements are controlled by a second circuit board.
In various embodiments, the APIC imaging system includes one or more processors (e.g., processor(s) 782 in
APIC system 900 also includes a mechanism (not shown) coupled to the single illumination source 912. The mechanism is configured to move (e.g., scan) the illumination source 912 in a direction of an x-axis and/or in a direction of a y-axis (not shown) perpendicular to a plane of the x-axis and the z-axis. In
In certain embodiments, an APIC imaging system includes an illumination device that directs a laser beam at different NA-matching illumination angles and darkfield illumination angles. For example, the illumination device may include a galvo motor (two-axis rotatable mirror system) and an array of mirrors (e.g., an arrangement of mirrors in concentric circles along a flat surface). The galvo motor may have mirrors that are rotatable to direct a laser beam to different mirrors in the array that then reflect the laser beam at the different illumination angles. The galvo motor is in communication with one or more laser sources (e.g., via optical fibers) to receive the laser beam.
The APIC system 1100 also includes an optical system 1130 and a radiation detector 1140 for receiving laser light propagated by optical system 1130. Optical system 1130 includes a collection element 1134 (e.g., objective) having a focal length, f1, and a focusing element 1136 (e.g., lens) having a focal length, f2. Collection element 1134 is located to receive light issuing from the specimen during operation. Focusing element 1136 is configured to focus light propagated from collection element 1134 to radiation detector 1140. Optical system 1130 may be in a 4f arrangement or a 6f arrangement in certain implementations. APIC system 1100 also includes a specimen receptacle 1122 (e.g., slide) for receiving a specimen being imaged. Specimen receptacle 1122 includes a first surface at a sample plane. The illustrated example is shown at an instant in time during an image acquisition process where a specimen 1120 is located in specimen receptacle 1122.
The first and second rotatable mirrors 1113 and 1114 may be controlled in a variety of manners. By way of example, a controller, may be coupled with the first and second rotatable mirrors 1113 and 1114 of
Computing device 1280 includes I/O subsystem 1202, which includes, or is in communication with, one or more components which may implement an interface for interacting with human users and/or other computer devices depending upon the application. Certain embodiments disclosed herein may be implemented in program code on computing device 1280 with I/O subsystem 1202 used to receive input program statements and/or data from a human user (e.g., via a graphical user interface (GUI), a keyboard, touchpad, etc.) and to display them back to the user, for example, on a display. The I/O subsystem 1202 may include, e.g., a keyboard, mouse, graphical user interface, touchscreen, or other interfaces for input, and, e.g., an LED or other flat screen display, or other interfaces for output. Other elements of embodiments may be implemented with a computer system like that of computer system 1200 without I/O subsystem 1202. According to various embodiments, a processor may include a CPU, GPU or computer, analog and/or digital input/output connections, controller boards, etc.
Program code may be stored in non-transitory computer readable media such as secondary memory 1210 or main memory 1208 or both. One or more processors 1204 may read program code from one or more non-transitory media and execute the code to enable computing device 1280 to accomplish the methods performed by various embodiments described herein, such as APIC imaging methods. Those skilled in the art will understand that the one or more processors 1204 may accept source code and interpret or compile the source code into machine code that is understandable at the hardware gate level of the one or more processors 1204.
Communication interfaces 1207 may include any suitable components or circuitry used for communication using any suitable communication network (e.g., the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a virtual private network (VPN), and/or any other suitable type of communication network). For example, communication interfaces 1207 can include network interface card circuitry, wireless communication circuitry, etc.
In certain embodiments, computing device 1280 may be part of or connected to a controller that is employed to control functions of an APIC system such as controlling image acquisition by the radiation detector (e.g., radiation detector(s) 140 in
For certain implementations, APIC reconstruction may be computationally faster than certain examples of FPM reconstruction. For certain implementations, APIC reconstruction may be more robust against large aberrations than these examples of FPM reconstruction. In one implementation, APIC reconstruction may be capable of addressing aberration whose maximal phase differences exceed 3.8π when using an NA 0.25 objective.
The APIC system in
First Experiment with Small Dataset
In a first experiment, a Siemens star target was imaged and a small dataset was acquired to perform reconstruction using APIC and FPM. The dataset acquired consisted of 9 brightfield measurements, 8 NA-matching measurements and 27 darkfield measurements. The nominal scanning pupil overlap rate was approximately 65%. The results suggest that the APIC reconstruction was able to render a more accurate complex field than the FPM reconstruction.
As shown by the result, the reconstructed finer spokes were distorted in the reconstruction result of FPM. Moreover, noticeable wavy reconstruction artifacts existed in the phases reconstructed by FPM. This suggests that large redundancy may be needed for FPM to provide a good reconstruction. When the measurements were given to APIC reconstruction, the reconstructed phases and amplitudes were less noisy. The reconstructed amplitude is also closer to the ground truth, which is sampled using a high-NA objective. This experiment showed the ability of APIC reconstruction to better retrieve a high-resolution complex field when the raw data size is constrained because it is an analytical method and does not rely as heavily on pupil overlap redundancy for solution convergence.
For an image tile with a side length of 256 pixels, APIC reconstruction took 9 seconds on the personal computer, while FPM took 25 seconds to finish reconstruction. The relative computational efficiency of APIC can again be attributed to the analytical nature of its approach. This computational efficiency is image tile size dependent: the smaller the tile, the more efficient the reconstruction. Section VI includes more detail regarding image tile size. As it is generally preferred to divide a large image into many smaller tiles in parallel computing, APIC's computational efficiency for smaller tiles aligns well with practical computation considerations.
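The tiling strategy discussed above can be illustrated with a short sketch. This is not the authors' implementation; the helper name `split_into_tiles`, the tile size, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def split_into_tiles(image, tile):
    """Split a 2D image into square tiles for independent (parallel) reconstruction.

    Illustrative helper only; assumes the image dimensions are multiples of `tile`.
    """
    tiles = []
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            tiles.append(image[r:r + tile, c:c + tile])
    return tiles

# toy usage: a 512x512 field of view divided into 256-pixel tiles
img = np.arange(512 * 512, dtype=float).reshape(512, 512)
tiles = split_into_tiles(img, 256)
```

Each tile can then be reconstructed independently, which is why per-tile efficiency matters in practice.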
Second Experiment with 316 Images
In a second experiment, the robustness of APIC and FPM reconstruction at addressing optical aberrations was examined. For this experiment, a total of 316 images were acquired including 52 normal brightfield measurements, 16 NA-matching measurements and 248 darkfield measurements. The nominal scanning pupil overlap rate of our dataset was approximately 87% and the final theoretical synthetic NA was equal to 0.75 when all darkfield measurements were used. This large degree of spectrum overlap was chosen to provide sufficient data redundancy for the best performance of FPM. APIC does not typically use such a large dataset. In our reconstruction, APIC only used the NA-matching and darkfield measurements, whereas FPM used the entire dataset, including the additional 52 brightfield measurements corresponding to illumination angles that were below the objective's acceptance angle. The second-order Gauss-Newton FPM reconstruction algorithm was applied for reconstruction as it was found to be the most robust FPM reconstruction algorithm. A set of 6 parameters was used in the reconstruction of FPM.
The Siemens star target was deliberately defocused to assess how the two methods perform under different aberration levels. In this experiment, the sample was defocused to different levels and the defocus information was hidden from both methods.
From the results, for large aberrations whose phase standard deviation exceeded 1.1π on the pupil function (the case when the Siemens star target was defocused by 32 μm, and the maximal phase difference is approximately 3.8π), FPM may not have found the correct solution and the reconstructed images were different from the ground truth. At a lower aberration level, the amplitude reconstructions of FPM appeared to be close to the ideal case. However, the reconstructed phases were substantially different from the result when no defocus was introduced. In contrast, APIC was highly robust to different levels of aberrations. Although the contrast of APIC's reconstruction dropped under larger aberrations, it retrieved the correct aberrations and gave high-resolution complex field reconstructions that match with the in-focus result. The measured resolution for both FPM and APIC is approximately 870 nm when the in-focus measurements were used, which is close to the 840 nm theoretical resolution.
Implementation B with Complex Aberration
The APIC system in
A human thyroid adenocarcinoma cell sample was imaged using the APIC imaging method and the FPM imaging method.
In
A hematoxylin and eosin (H&E) stained breast cancer cell sample was imaged using the APIC imaging method and the FPM imaging method. Red, green and blue LEDs were used to acquire datasets for these three different channels, and APIC was then applied for the reconstruction. In this experiment, the sample was placed at a fixed height in the data acquisition process. As a result, different levels of defocus are shown in different channels lying on top of the chromatic aberrations of the objective. A 40× objective was used to acquire the ground truth image. The illumination angles were calibrated for the central patch (side length: 512 pixels) and then the angles were calculated for off-axis patches using geometry. These calibrated illumination angles were used as input parameter data in reconstruction. The reconstructed color image is shown in
From the zoomed images in
In the experiments discussed in this Section, an APIC imaging method of certain implementations was shown to be able to extract relatively large aberrations and synthesize a large FOV and higher resolution images using low NA objectives.
APIC methods advantageously avoid problems with conventional phase retrieval algorithms, such as being sensitive to optimization parameters and getting stuck in a local minimum. As APIC techniques directly solve for the complex field, they advantageously avoid a potentially time-consuming iterative process. APIC techniques may provide high-resolution and large field-of-view label-free imaging with robustness to aberrations. As an analytical method, APIC is insensitive to parameter selection and can compute the correct imaging field without getting trapped in local minima. APIC's analyticity is particularly important in a range of exacting applications, such as digital pathology, where even minor errors are not tolerable. APIC guarantees the correct solution while other iterative methods cannot. Additionally, APIC brings new possibilities to label-free computational microscopy as it affords greater freedom in the use of engineered pupils for various imaging purposes. APIC techniques' use of the known spectrum to reconstruct the unknown spectrum may be adapted for other uses.
In some embodiments, an APIC imaging method includes a calibration procedure, which may be performed once for a particular APIC imaging system. In some implementations, the calibration procedure determines the illumination angle of each tilted illumination for each intensity measurement. This information determines the area of the sample's spectrum being measured. The calibration procedure may use a circle-finding algorithm to find the exact illumination angle for the intensity measurements. In one example, brightfield measurements whose illumination angles are below the acceptance angle of the imaging system may be collected as well. These brightfield measurements can be used for geometrically calibrating the angles associated with the darkfield measurements.
In some embodiments, a calibration procedure includes optimizing illumination angles by maximizing the correlation between the real measurements and the images obtained. For example, the calibration data can be used to reconstruct the complex field with the geometrically calculated darkfield LED illumination angle and then the reconstructed complex field can be used to optimize the illumination angle by searching over a pre-defined finer grid. Once this is done, the calibrated illumination angles may be fixed and used in the reconstruction process.
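The correlation-maximizing search over a finer grid might be sketched as follows. The function names (`simulate_intensity`, `refine_angle`), the parameterization of the illumination angle as an integer pixel shift of the spectrum, and the simple circular pupil are all hypothetical choices for illustration, not the authors' code.

```python
import numpy as np

def simulate_intensity(spectrum, pupil, shift):
    """Form a low-pass intensity image for a candidate illumination shift (in pixels)."""
    shifted = np.roll(spectrum, (-shift[0], -shift[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2

def refine_angle(spectrum, pupil, measured, coarse, search=2):
    """Search a finer grid around the geometric estimate, keeping the candidate
    whose simulated image best correlates with the real measurement."""
    best, best_corr = coarse, -np.inf
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = (coarse[0] + du, coarse[1] + dv)
            corr = np.corrcoef(simulate_intensity(spectrum, pupil, cand).ravel(),
                               measured.ravel())[0, 1]
            if corr > best_corr:
                best, best_corr = cand, corr
    return best

# toy usage: recover a small calibration error from the geometric estimate
n = 64
yy, xx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing='ij')
pupil = (np.hypot(xx, yy) <= 10).astype(float)
rng = np.random.default_rng(0)
spectrum = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
measured = simulate_intensity(spectrum, pupil, (3, -2))
```

Once refined, the calibrated angles would be fixed and reused in reconstruction, as described above.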
In one calibration example, an illumination device was used that included an LED ring attached on top of an LED array. The LED ring was used for the NA-matching measurements. This illumination device was mounted on a motorized translational stage for height adjustment. The motorized translational stage was configured such that the tilt and height exactly matched the illumination angle of the ring LED and the maximum acceptance angle of the objective of the optical system. To find the exact height, the illumination device was moved toward the sample being imaged until the LED ring produced the darkfield measurement. Then, the illumination device was moved away, gradually increasing the separation between the LED ring and the sample until the image under the ring LED illumination transitioned from darkfield to brightfield. The transition point indicated the desired height. Once the height and tilt of the system were fixed, calibration data was acquired and the illumination angles for all LEDs in the LED ring were calibrated with the calibration data. During the calibration process, the optical system was configured with a high NA objective to measure the relative intensity of each LED with a blank slide. The high NA objective was used so that the incident light from any LED could directly enter the optical system. The calibration intensity measurements were normalized.
Section V presents the forward model and APIC reconstruction according to various embodiments. In certain embodiments, APIC reconstruction includes (1) reconstructing a complex field for each of a plurality of NA-matching measurements (e.g., intensity measurements acquired under tilted illumination where illumination angles are equal to (match), or nearly match, a maximum acceptance angle of the collection optics of the imaging system acquiring the intensity measurements), (2) extracting a system aberration using the complex field spectrums and correcting for aberration in the complex field spectrums using the extracted system aberration, and (3) expanding the sample spectrum using darkfield measurements. Some detail regarding reconstructing a complex field spectrum for each of a plurality of NA-matching measurements is discussed in subsection V(B) below. Some detail regarding extracting a system aberration using the complex field spectrums is discussed in subsection V(C). Some detail regarding expanding the sample spectrum using darkfield measurements is discussed in subsection V(D). In alternate embodiments, an APIC reconstruction method may omit the operations involving extracting and correcting for aberrations. In other alternate embodiments, an APIC reconstruction method may omit expanding the complex field using darkfield measurements and simply extract and correct for the system aberration. The extracted aberration may be used to correct for other types of images, such as fluorescence images, acquired using the same optical imaging system (e.g., objectives).
In certain embodiments, an APIC reconstruction method includes (1) reconstructing complex field spectrums using NA-matching measurements, (2) extracting a system aberration and correcting for aberrations in the reconstructed spectrums, and (3) expanding the known sample spectrum using darkfield measurements. In some embodiments, Kramers-Kronig relations may be employed to reconstruct the complex fields from NA-matching measurements. Using the reconstructed fields, the system aberration can be retrieved. The extracted aberration may then be applied to correct the currently reconstructed aberrated fields and the subsequent darkfield associated field reconstructions. The aberration corrected reconstructed complex field can serve as the initial a priori knowledge, referred to as the known field, and its Fourier transform as the known spectrum. To achieve high-resolution imaging, darkfield measurements may be incorporated and used to expand the reconstructed sample spectrum in an orderly way. At each sub-step, a new spectrum may be reconstructed corresponding to an unused darkfield measurement whose illumination angle is the smallest among all remaining unused measurements. This newly reconstructed spectrum, together with the original reconstructed spectrum, can serve as the new a priori knowledge in subsequent reconstructions. To recover the field associated with one darkfield measurement, the method focuses on the spatial frequency space and forms a linear equation with respect to the unknown spectrum. By solving this linear equation, a closed-form solution of the unknown spectrum sampled in this darkfield measurement can be obtained. Adding this newly reconstructed spectrum effectively expands the sample spectrum coverage to achieve higher resolution. After all measurements are reconstructed, a high-resolution, aberration-free complex field reconstruction may be obtained.
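The ordered expansion step described above, in which the unused darkfield measurement with the smallest illumination angle is processed next, can be sketched in a few lines. The specific k-vector values below are arbitrary examples.

```python
import numpy as np

# Hypothetical darkfield illumination k-vectors (one row per measurement).
# The method expands the spectrum in order of increasing |k_i|, so each new
# patch borders the region of the spectrum that is already known.
k_vectors = np.array([[0.9, 0.1],
                      [0.5, 0.5],
                      [1.2, -0.3],
                      [0.6, -0.2]])
order = np.argsort(np.linalg.norm(k_vectors, axis=1))  # processing order
```

Processing in this order keeps the overlap between each new measurement and the known spectrum as large as possible.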
In Section V, APIC techniques are described with reference to system components of APIC system 100 in
When a thin specimen is illuminated by a plane wave emitted by an ith (where i=1, 2, . . . , n) light emitting diode (LED) or another illumination source with a transverse k-vector, ki, and then imaged by an APIC imaging system, the modulated sample spectrum is given by:
where H is the coherent transfer function (CTF) associated with the optical system, Ô is the original sample's spectrum, Ŝi is the ith sampled spectrum, and k is the 2D (transverse) spatial frequency vector. Reciprocally, the spatial coordinate on the sample plane is denoted by x. Without loss of generality, it is assumed that Ô(0) is a real number. For a thin specimen, it can be assumed further that the majority of the incident illumination light does not change its original direction. That is, at any position x:
where ℱ−1 is the inverse Fourier transform, [ℱ−1(·)](x) means the inverse Fourier transform is evaluated at x, |·| gives the modulus of a complex number, and δ(k) is the Kronecker delta as follows:
The physical meaning of this assumption is that the ballistic light exiting from the sample plane is dominant over the scattering light almost everywhere within the field of view.
As only the intensity of the light field is directly measured, the signal measured by the radiation detector of the APIC system is:
where Ii is the ith intensity image captured when lighting up the ith LED, and Si denotes the ith sampled field in real space (the inverse Fourier transform of Ŝi).
This intensity measurement is insensitive to a phase shift applied to the inverse Fourier transform:
where ξ(x) stands for an arbitrary phase function and j is the unit imaginary number. Thus, a particular phase ramp can be selected, namely −2πki·x, which effectively shifts Ŝi along the opposite direction of ki.
Using the properties of Fourier transform when applying this special phase ramp, the measured intensity image is identical to the intensity of the inverse Fourier transform of the following (translated) spectrum:
where · is the dot product and S′i(x):=Si(x)e−2πjki·x.
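Both properties of the phase ramp, that it leaves the intensity unchanged and that it translates the spectrum, can be checked numerically. This is a toy check, not the authors' code; the grid size and k-vector components (3 and −5 cycles over the window) are arbitrary example values.

```python
import numpy as np

n = 64
x = np.arange(n)
X, Y = np.meshgrid(x, x, indexing='ij')
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # a sampled field

# phase ramp e^{-2*pi*j*ki.x} with ki = (3, -5) cycles over the n-pixel window
ramp = np.exp(-2j * np.pi * (3 * X - 5 * Y) / n)
S_shift = S * ramp
```

The intensity |S_shift|² equals |S|², while the DFT of S_shift is the DFT of S translated by the (integer) k-vector of the ramp.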
The coherent transfer function (CTF) of the optical system of the APIC imaging system can be assumed to be circular with a NA-dependent radius, kNA, and the phase ϕ of the CTF function depicts the system's aberration. For simplicity, the term “NA” or “system NA” used in Section V represents this radius kNA. This provides:
where ϕ(k) stands for the system's aberration, and CircNA(k):=1(|k|≤NA) is an indicator function that is equal to one when the modulus of the k-vector is below the system NA and is zero otherwise.
The sample spectrum Ô(k) can be further decomposed to its amplitude and phase. Together with Eqn. 7, Eqn. 6 can be written as:
where Â(k):=|Ô(k)|∈ℝ is the amplitude of the sample's spectrum and α̂(k):=arg Ô(k)∈ℝ is its phase (the operator arg gives the argument of a complex number).
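A CTF of the form just described, a circular indicator multiplied by a pure-phase aberration term, can be constructed in a few lines. This is an illustrative sketch: the grid size, the pupil radius value, and the quadratic "defocus-like" phase are toy assumptions, not the system's actual aberration.

```python
import numpy as np

n = 128
k = np.fft.fftshift(np.fft.fftfreq(n))          # normalized spatial-frequency axis
KX, KY = np.meshgrid(k, k, indexing='ij')
k_na = 0.25                                      # pupil radius k_NA (example value)

circ = (np.hypot(KX, KY) <= k_na).astype(float)  # Circ_NA(k) indicator
phi = 20.0 * (KX ** 2 + KY ** 2)                 # toy defocus-like aberration phase
ctf = circ * np.exp(1j * phi)                    # H(k) = Circ_NA(k) * e^{j*phi(k)}
```

Inside the pupil the CTF has unit modulus (the aberration only changes its phase); outside the pupil it is identically zero.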
In the following subsections B-D, based on Eqn. 6, a closed form solution to the sample's spectrum, Ô(k), is obtained and a technique is shown for analytically retrieving the aberration, ϕ(k), of an APIC imaging system. This methodology uses NA-matching (e.g., |ki|=NA) and darkfield measurements (e.g., |ki|>NA). For simplicity, it is assumed that |ki|≥NA and that the ki are sequentially ordered so that |ki|≤|ki+1|.
In certain embodiments, an APIC reconstruction method includes reconstructing the complex field spectrums using the NA-matching intensity measurements acquired when the illumination angles are equal to (match), or nearly match, the maximal acceptance angle of the optical system in the Fourier domain. In some cases, the Kramers-Kronig methodology may be employed to reconstruct the complex field spectrums from the NA-matching measurements. However, the Kramers-Kronig method does not take into account system aberration. In the complex field reconstruction method discussed in this subsection, aberration is taken into account, which is advantageous over existing techniques that do not consider aberration in their complex field reconstruction. The reconstruction can be performed by employing Eqn. 32.
In this subsection, only the first n0 measurements whose k-vector ki that satisfy Eqn. 9 below are considered:
These measurements are the NA-matching measurements used in this subsection. In this subsection, a signal is constructed whose Fourier transform is one-sided (which will be defined later in this subsection), and it will be shown that this allows for the calculation of the imaginary part of the signal from its real part based on the fact that the Fourier transform is a linear operator. This constructed signal can then be mapped back to the desired complex field. This approach can also be generalized to higher dimensional spaces. Although this methodology allows for analytical complex field reconstruction, it was found that both the desired sample's field and the aberration function of the imaging system are entangled in the reconstructed field.
The goal is computing the imaginary part of a complex signal from its real part using, e.g., Kramers-Kronig relations. In order to do so, the real part must be known first. However, the APIC system only takes measurements of the intensity (squared modulus) of a complex field, which is neither its real nor its imaginary part. To solve this mismatch, a nonlinear transformation is applied that maps the intensity and phase of a complex number to the real and imaginary part of its output, respectively. This can be done by taking the logarithm of a nonzero complex number. Applying this to the complex field S′i(x) in a point-wise manner, it is found:
The nonzero condition is guaranteed by Eqn. 2. Note that the first term on the right-hand side is purely real and the second term is purely imaginary. Ii(x) are the intensity measurements.
As the intensity Ii(x)=|Si(x)|2=|Si(x)e−2πjki·x|2=|S′i(x)|2 is directly measured, the real part of the transformed signal is known.
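The mapping performed by the logarithm can be verified on a single complex value: its real part is half the logarithm of the intensity, and its imaginary part is the phase. The numbers below are arbitrary examples.

```python
import numpy as np

S = 2.0 * np.exp(0.7j)        # a nonzero complex field value (example numbers)
I = np.abs(S) ** 2            # what the detector records
T = np.log(S)                 # the point-wise logarithm transformation
```

The real part of T carries the (measurable) intensity and the imaginary part carries the (unknown) phase, which is exactly the separation the Kramers-Kronig step exploits.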
Below it is shown that the Fourier transform of this transformed signal is “one-sided.” Such structure allows one to perform the desired reconstruction.
For any positive integer m∈ℤ+, we say g:ℝm→ℂ is one sided if there exists a nonzero vector e∈ℝm, e≠0 such that its Fourier transform ĝ satisfies:
Hence, g is said to be strictly one sided if ĝ(k)=0, ∀k∈ℝm s.t. k·e≤0.
To show that the transformed signal is one sided, the "offset" field, R(x), is factored out. R(x) is the field that does not change its direction:
It is seen that ϕ(ki) serves as a phase offset and r can be treated as a normalization factor. Note that r and ϕ(ki) are unknowns, and only the intensity of Si(x) is measured. Thus, the discussion below focuses primarily on the "offset" field, S′i(x)e−jϕ(ki).
Applying the logarithm transformation to the "offset" field, S′i(x)e−jϕ(ki), yields:
With the assumption that the majority of the light does not change its direction (Eqn. 2), the "offset" field is larger than the sample modulated field: r=|R(x)|>|S′i(x)−R(x)|=|S′i(x)−re−jϕ(ki)|.
Note that this applies to all x. For simplicity, Δi(x) is defined as:
And, Eqn. 14 can be rewritten as:
The Fourier transform of Δi(x) yields:
For x∈ℝ2, let f1(x):ℝ2→ℂ and f2(x):ℝ2→ℂ be two complex l2 functions with Fourier transforms {circumflex over (f)}1(k) and {circumflex over (f)}2(k), respectively. Assume there exists a (common) nonzero vector e∈ℝ2, e≠0 such that:
then their product f′(x)=f1(x)f2(x) is one sided. Furthermore, if f1 and f2 are strictly one sided (with the same e), their product is also strictly one sided.
To prove this, the Fourier transform is taken of both sides, which yields:
For k0∈ℝ2 such that k0·e<0 (e≠0), it is shown:
If k′·e<0, {circumflex over (f)}1(k′)=0. If k′·e≥0, (k0−k′)·e=k0·e−k′·e<0. As f1 and f2 are both one sided and share the same e:
That is, f′ is one sided. The proof for f1 and f2 being strictly one sided follows the same structure: the integration is split over k′·e≤0 and k′·e>0, and the strict version can then be proven with the same technique.
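The product lemma can be checked numerically in a 1D analogue, where the direction e reduces to the sign of the frequency axis. The check below builds two random signals whose DFTs are supported only on low nonnegative frequencies (so that the circular convolution of the supports does not wrap into the negative half) and confirms that their product is also one sided. The grid size and support width are illustrative choices.

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)

def one_sided_signal(width):
    """A random 1D signal whose DFT lives only on low nonnegative frequencies."""
    spec = np.zeros(n, dtype=complex)
    spec[:width] = rng.standard_normal(width) + 1j * rng.standard_normal(width)
    return np.fft.ifft(spec)

f1, f2 = one_sided_signal(16), one_sided_signal(16)
prod_spec = np.fft.fft(f1 * f2)   # DFT of the product f1*f2
```

The product's spectrum is the (circular) convolution of the two one-sided spectra, so it remains confined to the nonnegative-frequency half.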
It is seen that Δi(x) is one sided. To show this, e=−ki is chosen. As the illumination angle is matched with the NA (Eqn. 9), it is verified that for arbitrary k such that k·e<0:
The last equality holds because of the interest in the NA-matching angle illumination condition (Eqn. 9) in this subsection. So, H(k+ki)=0 for all k subject to k·e=k·(−ki)<0. It can then be proved that Δi is one sided:
Furthermore, consider k·e=−k·ki=0, i.e., k⊥ki. When k≠0, the above two equations still hold. It is only needed to consider the special case, namely k=0. At k=0:
That is, Δi is strictly one sided. Using Eqn. 19, it can be concluded that the Taylor series (the last term on the right-hand side of Eqn. 17) is strictly one sided by induction.
If one were to want to directly use Si instead, the "offset" field should be modified as re2πjki·xe−jϕ(ki), which is no longer a constant.
Here, it has been proved that the transformed signal log[S′i(x)e−jϕ(ki)] is one sided.
A modified version of Eqn. 10:
where the last equality follows from Eqn. 17, ℜ(·) denotes the real part of a complex number, and ℑ(·) denotes the imaginary part. The Fourier transform exhibits even symmetry for a real signal and odd symmetry for an imaginary signal, and so:
where * in the superscript denotes the complex conjugate. Since it is already proved that log[S′i(x)e−jϕ(ki)] is one sided:
For k·(−ki)>0, symmetry can be used to conclude:
As T(x) is strictly one sided, its Fourier transform is zero for all k such that k⊥ki. Moreover, the Fourier transform of constant log(r) is a real (Dirac delta) function centered at zero. In other words, the Fourier transform of log(r) has no imaginary part. So:
By collecting all pieces from the above three equations and noticing Ii(x)=|S′i(x)|2:
That is, the complex field can be reconstructed using its real part log |S′i|. The desired field (up to a constant phase offset) can then be restored with inverse Fourier transform and applying exponential function to each point of the inverse Fourier transform:
If there is no aberration in the system, ϕ(k)=0, Ŝ′i(k)=Ô(k)CircNA(k+ki) and the phase offset ϕ(ki)=0.
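The chain of steps in this subsection, measure the intensity, take half its logarithm as the real part, fill in the positive frequencies by one-sidedness, and exponentiate, can be demonstrated end to end in a 1D toy. This sketch assumes no aberration (ϕ=0), a unit offset field (r=1), and a weak perturbation whose spectrum is strictly one sided; it is an illustration of the principle, not the authors' implementation.

```python
import numpy as np

n = 256
rng = np.random.default_rng(2)

# a weak, strictly one-sided perturbation on top of a unit "offset" field (r = 1)
spec = np.zeros(n, dtype=complex)
spec[1:21] = 0.3 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
S = 1.0 + np.fft.ifft(spec)               # ground-truth complex field, |S - 1| << 1

I = np.abs(S) ** 2                         # the only quantity the detector measures
re_T = 0.5 * np.log(I)                     # known real part of log(S)

# one-sidedness: positive frequencies of log(S) follow from the real part alone
re_hat = np.fft.fft(re_T)
T_hat = np.zeros(n, dtype=complex)
T_hat[1:n // 2] = 2.0 * re_hat[1:n // 2]   # doubled positive-frequency content
T_hat[0] = re_hat[0]                       # real DC term, log(r)
S_rec = np.exp(np.fft.ifft(T_hat))         # restored complex field
```

The recovered field matches the ground truth to high accuracy, illustrating that the intensity alone determines the complex field when the transformed signal is one sided.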
This subsection discusses the principle underlying a procedure for extracting the system aberration from the complex field spectrums reconstructed from the NA-matching intensity measurements, according to certain embodiments.
Subsection V(B) ends with the reconstruction of S′i(x)e−jϕ(ki), the sampled field up to a constant phase offset.
As can be seen from Eqn. 34, the sample's spectrum and the system's aberration function are superimposed in the reconstruction. For a practical imaging system with aberration, this is not yet a reconstruction of a clean sample spectrum. Instead it is an aberrated version of the desired spectrum.
Without correction, aberrations of an imaging system, including defocus due to sample's height unevenness, can degrade reconstruction quality. This can be tackled by working in the spatial frequency domain (that is, working with the spectrum). As shown below, aberration of the system can be analytically solvable by considering the phases of the reconstructed spectrums.
To extract the aberration from the reconstructed spectrums, the procedure separates the contribution from the sample itself and the imaging system. This can be achieved by considering the phases of multiple reconstructed spectrums.
First, the overlap of two spectrums is defined. For two reconstructed spectrums Ŝ′i(k) and Ŝ′l(k), a set il is defined as:
These two spectrums are overlapped if the set il is nonempty (il≠Ø), and the overlap between Ŝ′i(k) and Ŝ′l(k) is il.
Consider two spectrums Ŝ′i(k) and Ŝ′l(k) with (nonempty) overlap il≠Ø. The phase difference within the overlapped region il gives:
The first term ϕ(k+ki)−ϕ(k+kl) on the right-hand side is referred to as the “aberration difference,” and the last term ϕ(ki)−ϕ(kl) is referred to as “the offset.” When considering the phase difference of the two spectrums, the contribution from the sample spectrum cancels out and the difference depends solely on the system's aberration. It can be seen that the remaining phase difference is linear with respect to the aberration function. As such, a linear operator can be constructed that maps the aberration into phase differences of two overlapping spectrums. To do so, the 2D spectrum is rearranged into a vector.
Assume B is an m×t matrix. A “flattening” operator, Flatm,t, can then be defined which concatenates every column of B and produces a vector of length mt; Flatm,t(B) denotes this flattened vector:
An inverse operator, Flatm,t−1, can be defined which restores the matrix when applied to the flattened vector:
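In NumPy terms, the flattening operator and its inverse correspond to column-major reshaping. The following sketch (function names are illustrative) mirrors the definitions above:

```python
import numpy as np

def flat(B):
    """Flat_{m,t}: concatenate the columns of an m-by-t matrix B into a
    vector of length m*t (column-major order)."""
    return B.flatten(order="F")

def flat_inv(v, m, t):
    """Flat_{m,t}^{-1}: restore the m-by-t matrix from its flattened vector."""
    return v.reshape((m, t), order="F")
```

With 1-indexed (t1, t2), the entry B(t1, t2) lands at position t1+m(t2−1) of the flattened vector, which is the index relation used later in the construction of the correlation operator.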
Assuming the sampled spectrum acquired is an N×N matrix, let Ŝ′i∈ℂN×N be the matrix form of the sampled spectrum and ϕ∈ℝN×N the matrix form of the aberration. The corresponding flattened versions are then 𝔰′i=FlatN,N(Ŝ′i)∈ℂN2 and φ=FlatN,N(ϕ)∈ℝN2.
Let Ku, Kv∈ℝN×N be the spatial frequency grids (with zero frequency at [c0, c0]=(⌈N/2⌉, ⌈N/2⌉), where ⌈N/2⌉ is the smallest integer not less than N/2) for the sampled spectrum, where u and v denote two orthogonal directions. Let 𝒦u, 𝒦v∈ℝN2 be their flattened versions.
Additionally, an index set il is defined by:
Let |il| be the cardinality of the set il, and the notation is abused so that il(m) indicates the mth smallest element in il. For the transverse illumination k-vector ki (i=1, 2, . . . , n), its grid representation is denoted as κi.
With this notation, a difference operator Dil∈ℝ|il|×N2 can be defined:
This operator calculates the aberration difference:
Let 𝔰′i[il] (i=1, 2, . . . , n) be a vector which is defined as
where T in the superscript denotes ordinary transpose. Simplifying the expression of Dilφ:
Note ϕ(ki)−ϕ(kl) is a constant. To account for this constant, an offset operator Dil0∈ℝ1×N2 is defined, where a 1 is assigned to the place that corresponds to ki and a −1 to the place that corresponds to kl. That is,
When acting on φ, this offset operator gives:
Thus, Dil and Dil0 can be used to express the total phase difference between 𝔰′i and 𝔰′l, which gives:
For different pairs of spectrums with overlap, those equations can be concatenated, which yields:
Instead of directly solving for the aberration term, a Zernike polynomial representation is used for the aberration of the system. Thus, the Zernike operator Z∈ℝN2×z is introduced such that φ=Zc,
where c∈ℝz×1 is the corresponding Zernike coefficient vector. Using the Zernike decomposition, Eq. 48 can be rewritten as:
The above linear equation (or the associated normal equation) can be solved to get the analytical solution of the Zernike coefficient c. The 2D aberration ϕ of the APIC imaging system is then given by:
In reality, some spatial frequencies in the spectrum may have a stronger signal than other frequencies. It may be advantageous to emphasize those frequencies in the spectrum as they may have a higher signal-to-noise ratio (SNR). Thus, in some implementations, a weight matrix W may be employed to emphasize places with high SNR. This gives:
In one implementation, the logarithm of the modulus of the product Ŝi(k)Ŝl(k) can be used as a weight matrix.
In some cases, the phase difference of two spectrums might exceed 2π. Thus, the phase differences may first be unwrapped, and then Eqn. 52 may be solved to extract the aberration of the imaging system.
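The least-squares recovery of aberration coefficients from pairwise phase differences can be illustrated with a small one-dimensional toy problem. Here a polynomial basis stands in for the Zernike operator Z, integer shifts stand in for the illumination k-vectors, and the sign convention of the offset term is chosen for the toy; all names are illustrative assumptions. The constant (piston) term cancels in every difference, so it is not recoverable, consistent with reconstruction up to a constant phase.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
k = np.arange(N)
# Hypothetical 1-D polynomial basis standing in for the Zernike operator Z:
Z = np.vander((k - N / 2) / N, 4, increasing=True)   # N x 4, column 0 = piston
c_true = np.array([0.2, -0.5, 1.0, 0.7])
phi = Z @ c_true                                     # sampled aberration

shifts = [0, 5, 11, 17]          # stand-ins for illumination k-vectors k_i
rows, rhs = [], []
for a in range(len(shifts)):
    for j in range(a + 1, len(shifts)):
        ki, kl = shifts[a], shifts[j]
        ov = np.arange(N - max(ki, kl))              # overlap index set
        # measured phase difference = aberration difference + offset
        d = (phi[ov + ki] - phi[ov + kl]) + (phi[ki] - phi[kl])
        A_blk = (Z[ov + ki] - Z[ov + kl]) + (Z[ki] - Z[kl])
        rows.append(A_blk)
        rhs.append(d)

A = np.vstack(rows)
b = np.concatenate(rhs)
c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solve for c
# The piston column of A is identically zero, so lstsq's minimum-norm
# solution leaves c_hat[0] at 0; all other coefficients are recovered.
```

Concatenating the per-pair blocks into one system mirrors the stacked equations above; a weight matrix would simply scale the rows of A and b before the solve.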
Since the aberration of the imaging system (system aberration) may be fully determined by the aberration extraction procedure discussed in subsection C, the contribution of the aberration term can be removed from the reconstructed spectrums determined by the reconstruction procedure discussed in subsection B.
In subsection B, the (modified) sampled spectrums were reconstructed under NA-matching angle illumination based on Eqns. 32 and 33. This means the following is known for all |ki|=NA:
As the last phase factor e−jϕ(ki) is a constant phase offset, it does not affect the modulus of the reconstruction and can be safely ignored.
This means that a piece of the sample spectrum covered by the CTF support was recovered for each NA-matching intensity measurement. Thus, these reconstructed regions can be stitched together for a larger coverage in the spatial frequency domain. That is, the coverage of the reconstructed Ô(k) can be gradually expanded in the spatial frequency domain. Let us define the sampled region 𝒮m that denotes the spectrum covered by the first m measurements:
The mask Mm(k) is defined, which denotes the effective sampling mask for the first m measurements:
Assuming the complex spectrum can be reconstructed for every measurement, the reconstructed complex sample spectrum R̂m(k) using the first m measurements can be expressed as:
After aberration correction, the reconstructed spectrum R̂n0 using all n0 NA-matching measurements is given by:
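The stitching of the per-illumination reconstructions can be sketched as follows, where a boolean disk stands in for CircNA(k+ki) and overlapping regions are averaged (one possible choice, as discussed below for dense LED arrays); the function and variable names are illustrative.

```python
import numpy as np

def circ_mask(n, center, radius):
    """Boolean disk on an n-by-n grid, standing in for CircNA(k + k_i)."""
    yy, xx = np.mgrid[:n, :n]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def stitch(pieces, masks):
    """Combine per-illumination spectrum reconstructions into one extended
    spectrum, averaging wherever the sampled regions overlap. Returns the
    stitched spectrum and the effective sampling mask M_m."""
    acc = np.zeros_like(pieces[0])
    cnt = np.zeros(pieces[0].shape)
    for piece, mask in zip(pieces, masks):
        acc[mask] += piece[mask]
        cnt += mask
    covered = cnt > 0
    out = np.zeros_like(acc)
    out[covered] = acc[covered] / cnt[covered]
    return out, covered
```

Because every piece equals the same underlying sample spectrum on its support, the stitched result agrees with that spectrum over the union of the supports, matching the masked-sum expression above.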
In this subsection, it is shown that if the sampled spectrum at the ith illumination, Ŝ′i (i>n0), consists of a previously reconstructed (a priori) part and another unknown part, the unknown part can be reconstructed using the known spectrum.
Let us decompose the sampled spectrum Ŝ′i into the unknown and known parts, assuming all previous measurements are reconstructed, which means R̂i−1(k)=Ô(k)Mi−1(k) is known. The known spectrum K̂i(k) at this sub-step is given by:
Then, the unknown part Ûi(k) is
Let Supp(f) be the support of a function f:ℝ2→ℂ, which is a set given by:
By construction of K̂i(k) and Ûi(k), it is easy to see that K̂i(k)≠0 implies Ûi(k)=0, and vice versa. Thus, K̂i(k) and Ûi(k) have disjoint supports. Based on Eqns. 5 and 6, the measured intensity of one darkfield measurement (darkfield intensity measurement) can be expressed as
where K(x):=ℱ−1[K̂i](x) and U(x):=ℱ−1[Ûi](x) are the known field and the unknown field, respectively. Using the properties of the Fourier transform, the Fourier transform of Ii can be written as:
where ⋆ denotes correlation. As the known spectrum is known a priori, its autocorrelation can be subtracted from the Fourier transform of the intensity measurement, which yields:
The two cross terms are linear with respect to Ûi, as correlation is a linear operator. Considering one cross term alone naturally leads to a linear equation with respect to Ûi, which can be solved analytically. However, the three remaining terms cannot be easily separated due to the presence of the desired unknown part in each of them.
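The decomposition of the intensity's Fourier transform into auto- and cross-correlation terms, and the subtraction of the a priori autocorrelation, can be verified numerically in one dimension. Here the correlation of two spectra is computed as the Fourier transform of the product of one field with the conjugate of the other; all names are illustrative.

```python
import numpy as np

def spec_corr(A, B):
    # Correlation of two spectra: F[ F^{-1}(A) * conj(F^{-1}(B)) ]
    return np.fft.fft(np.fft.ifft(A) * np.conj(np.fft.ifft(B)))

rng = np.random.default_rng(2)
n = 128
K = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # known spectrum
U = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # unknown spectrum
field = np.fft.ifft(K + U)                                 # total field
FI = np.fft.fft(np.abs(field) ** 2)    # Fourier transform of the intensity

# Subtracting the known autocorrelation leaves the two cross terms plus
# the autocorrelation of the unknown spectrum:
residual = FI - spec_corr(K, K)
expected = spec_corr(K, U) + spec_corr(U, K) + spec_corr(U, U)
```

The identity holds by bilinearity of the correlation; the practical difficulty addressed next is isolating the single cross term that is linear in the unknown.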
In the remaining part of this subsection, it is shown that the above three terms have different supports, as depicted in
In the APIC system 100 shown in
Focusing on [K̂i⋆Ûi](k), let 𝒯i be the non-intersecting set, which is defined as:
By construction:
The masked subtraction, denoted Li, is defined as:
Let Ûi∈ℂN×N be the matrix version of the (centered) unknown part and 𝔲i=FlatN,N(Ûi)∈ℂN2 its flattened version.
Let us construct a (sparse) correlation operator Ci that takes all nonzero elements in the unknown spectrum 𝔲i and gives 𝔩i, the flattened version of Li. First, focus on the t1th row and t2th column of Li, which corresponds to the mth element of the vector 𝔩i, where m=t1+N(t2−1) and t1, t2∈{1, 2, . . . , N}. Let t′1=t1−c0 and t′2=t2−c0. Then (recall that in Eq. 39, 𝒦 is defined as the grid version of the k vector, and Ku(c0, c0)=Kv(c0, c0)=0):
For 𝒦(m)∈𝒯i, a matrix Gim of size N×N can be constructed that is defined as:
With this definition:
An index set ℛi is defined that denotes this special region that is linear in Ûi:
The notation is abused so that ℛi(m) indicates the mth smallest element in ℛi, and 𝔩i[ℛi] is defined as
where |ℛi| is the cardinality of ℛi. A correlation operator can then be constructed. Let Cif be a |ℛi|×N2 matrix and let its mth row, Cif(m, ·), be:
and then:
By construction of the unknown spectrum Ûi, it is known that the only locations where it can be nonzero are those where the following holds:
Using the above equation, the corresponding nonzero elements in the flattened vector 𝔲i can be easily found. An index set Ni is defined that consists of the indices of these nonzero elements:
and let Ni(m) be the mth smallest element in Ni, and |Ni| be the cardinality of Ni. To construct the correlation operator Ci that encodes the sparsity of the unknown spectrum, simply keep all columns of Cif whose indices belong to the set Ni and discard all other columns. This gives the definition of Ci∈ℂ|ℛi|×|Ni|:
Then, the following linear equation holds with respect to the nonzero elements of the unknown spectrum:
To solve this equation, the rank of the matrix Ci must be at least |Ni|. This can be satisfied if the known spectrum covers a semicircle of the circular CTF. Overlap criteria may be applied in certain embodiments. For example, the autocorrelation of a semicircle may be around 4 times larger than the semicircle itself. If it is assumed that the CTF is of radius r0 and the known spectrum is a semicircle, the area of the unknown spectrum is then (1/2)πr02. For a circle (area πr02), its autocorrelation is strictly 4 times larger in size (area 4πr02). Thus, the linear region has an area of:
which is approximately 1.7 times larger than the area of the unknown part. That is, if the known spectrum occupies 50% of the spectrum, the rank of the matrix Ci can be well above |Ni|. Numerically, a safe choice is to let the unknown spectrum occupy no more than 42% of the measured spectrum, assuming the CTF is circular.
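A one-dimensional toy version of the darkfield step can make the construction concrete: the known and unknown spectra are given disjoint supports, the known autocorrelation is subtracted, and on the lags where only the single cross term K̂⋆Û survives (the non-intersecting region) a linear system in the conjugated unknown entries is formed and solved in closed form. The supports, sizes, and variable names here are illustrative assumptions, not the embodiment's 2D operator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 64, 12, 6
K = np.zeros(n, dtype=complex)          # known spectrum, support [0, m)
K[:m] = rng.standard_normal(m) + 1j * rng.standard_normal(m)
U = np.zeros(n, dtype=complex)          # unknown spectrum, support [m, m+p)
U[m:m + p] = rng.standard_normal(p) + 1j * rng.standard_normal(p)

# "Measured" darkfield intensity and its Fourier transform:
field = np.fft.ifft(K + U)
FI = np.fft.fft(np.abs(field) ** 2)

def spec_corr(A, B):
    # (A ⋆ B)[k] = (1/n) sum_k' A[k'] conj(B[k' - k])  (circular lags)
    return np.fft.fft(np.fft.ifft(A) * np.conj(np.fft.ifft(B)))

# Subtract the a priori autocorrelation; on the non-intersecting lags only
# the single cross term K ⋆ U survives, which is linear in conj(U):
residual = n * (FI - spec_corr(K, K))
lags = np.arange(-(m + p - 1), -p + 1)  # lags where only K ⋆ U is nonzero

A = np.zeros((len(lags), p), dtype=complex)
for r, lag in enumerate(lags):
    for c in range(p):
        kp = (m + c) + lag              # k' = j + lag with j = m + c
        if 0 <= kp < m:
            A[r, c] = K[kp]
b = residual[lags % n]
v, *_ = np.linalg.lstsq(A, b, rcond=None)
U_hat = np.conj(v)                      # recovered unknown-spectrum entries
```

The 12 available equations for 6 unknowns mirror the rank condition stated above: the linear (non-intersecting) region must be sufficiently large relative to the unknown support for Ci to have full column rank.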
By solving Eqn. 77, the closed-form solution of the unknown spectrum is obtained. That is, the following spectrum is reconstructed:
As the aberration of the system has been determined, it can be corrected, which gives the aberration-corrected spectrum Ô(k)CircNA(k+ki)[1−Mi−1(k)]. Because the intensity of the field is directly measured in the darkfield measurement while the phase is unknown, the square root of this measured intensity is used as the modulus of the complex field to maintain the (point-wise) energy and improve robustness.
Thus far, it has been shown that the complex spectrum sampled with the ith illumination can be reconstructed using a priori knowledge of the known spectrum. The entire reconstructed spectrum can be expanded by integrating this newly reconstructed spectrum. The overall reconstructed spectrum using the ith measurement is given by Ô(k)Mi−1(k)+Ô(k)CircNA(k+ki)[1−Mi−1(k)], which is exactly Ô(k)Mi(k) by the definition of Mi(k). That is, an extended complex spectrum reconstruction R̂i is obtained after darkfield reconstruction at the ith sub-step. This reconstructed field serves as the a priori knowledge in the reconstruction of the (i+1)th sub-step.
For a dense LED array, the area of the unknown spectrum can be quite small because the effective CTFs of two closely spaced LEDs cover similar areas (Eq. 6). In such a case, the new spectrum can be filled in only when it is necessary for solving the linear equation formed in Eq. 77 (e.g., when the system would become underdetermined if the new spectrum were not filled in). By doing this, the spectrum reconstructions prior to the stitching process are independent and have overlaps in the spatial frequency domain. Therefore, averaging over the overlaps improves the robustness of the reconstruction algorithm.
Once the darkfield reconstruction is done for all measurements, all sampled spectrums have been reconstructed along with the extended complex spectrum R̂n, which is the high-resolution, aberration-free complex field reconstruction.
In one example, 316 images were used in a full dataset. In another example, 9 bright field measurements, 8 NA-matching measurements and 28 darkfield measurements were used in a reduced dataset. For the reduced dataset example, an FPM method used all forty five (45) measurements and the APIC method used 36 intensity measurements including 8 NA-matching measurements and 28 darkfield intensity measurements. The APIC imaging system 100 of
The APIC imaging system 100 of
The APIC imaging system 100 of
In this implementation, a computing device (e.g., computing device 780 in
Simulations under different aberration levels were conducted and an APIC method of an embodiment (e.g., APIC method described with reference to
For comparison, a conventional FPM imaging method was used for reconstruction with less aberration than that used above. The same number of brightfield images was used for the reconstruction with the conventional FPM imaging method as with the APIC imaging method. The darkfield measurements were shared by FPM and APIC. Two different reconstruction algorithms were used for FPM, namely the original alternating projection algorithm (the original Gerchberg-Saxton algorithm combined with EPRY for aberration correction) and the second-order Gauss-Newton method. The FPM reconstruction was conducted with 6 different sets of parameters. As the ground truth was known in the simulation, the parameter set whose result corresponded closest to the ground truth was manually selected.
From the simulation results, FPM worked well with mild aberrations. When the imaging system had a relatively small aberration (phase standard deviation of approximately 0.15π), both the original alternating projection method (implemented with EPRY for aberration correction) and the second-order Gauss-Newton method successfully reconstructed the aberrations. Their amplitude reconstructions also closely matched the ground truth. The phase reconstruction of the second-order method had better correspondence to the ground truth. When the aberration became more severe (standard deviation reaching 0.4π), neither FPM method worked. The second-order FPM method worked slightly better than the first-order algorithm, as it partially reconstructed the high-frequency information. APIC, on the other hand, worked well at all of these aberration levels.
Simulations of an APIC method according to an embodiment (e.g., APIC method described with reference to
If no noise was added to the measurement, the APIC method produced results that matched the ground truth. When the SNR was low, the APIC reconstruction became noisier and exhibited degraded resolution. Nonetheless, it preserved the high-frequency details that were not captured in the normal-incidence measurement.
Then another two simulations were conducted to compare APIC with FPM under different SNRs. As before, the NA-matching measurements were replaced with n0 brightfield measurements to construct the dataset for FPM. All darkfield measurements were shared by APIC and FPM. For FPM, 6 different sets of parameters were chosen and the best results were selected in the simulation.
In the first simulation, a complex Siemens star target was simulated and different levels of noise added to the simulated dataset. As shown in
In the second simulation, two different patterns were selected for the amplitude and phase of the complex object. This complex object was designed to have a weak amplitude variation while preserving a relatively strong phase variation, based on the common properties of unstained biological samples. There was cross-talk between phase and amplitude in both FPM algorithms when using low-SNR measurements.
When SNR increased, such cross-talk was less prominent in FPM. APIC did not suffer from such cross-talk. Although the reconstruction of APIC with low SNR was noisy, it maintained the features of the ground truth amplitude and phase and showed almost no cross-talk between the two. Also, the reconstructed phase of APIC was closer to the ground truth. For the low SNR dataset, the range of the reconstructed phase of FPM appeared to be compressed.
Simulations with different numbers of NA-matching measurements were run to determine an example of the minimum number of measurements needed for an APIC imaging method, according to certain implementations, to accurately reconstruct the imaging system's aberration. In the simulation, only aberration was introduced to the imaging system, and all other parameters were assumed to be ideal. Illumination angles were assumed to be azimuthally uniformly distributed, which means their corresponding LEDs were uniformly distributed along the ring (e.g., ring 116a in
When using 4 images, APIC did not obtain a good aberration estimate. With 6 NA-matching measurements, however, APIC successfully reconstructed mild to moderately high aberrations (phase standard deviation below 0.8π). Residual aberration remained in the reconstructed aberration under severe aberrations (phase standard deviation exceeding 1.5π), as depicted in the corresponding error map in
In some embodiments, an APIC imaging method uses at least 6 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 8 NA-matching measurements. In some embodiments, an APIC imaging method uses between 6 NA-matching measurements and 8 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 7 NA-matching measurements. In some embodiments, an APIC imaging method uses at least 9 NA-matching measurements. In some embodiments, an APIC imaging method uses more than 9 NA-matching measurements.
Simulations of errors in the angle calibration were run to see how an APIC imaging method of an embodiment performs under different levels of calibration error. Random uniformly distributed estimation error was introduced to the actual illumination angles. The simulation was based on a similar APIC system as APIC system 100 shown in
From the simulation results, APIC performance was well-maintained until the error reached 9%. Beyond that, the reconstructions showed obvious artifacts. This suggests that the alignment requirement of the proposed APIC imaging system (e.g., imaging system 100 in
Using APIC system 100 shown in
Many types of computing devices having any of various computer architectures may be employed as the disclosed systems for implementing algorithms. For example, the computing devices may include software components executing on one or more general purpose processors or specially designed processors such as Application Specific Integrated Circuits (ASICs) or programmable logic devices (e.g., Field Programmable Gate Arrays (FPGAs)). Further, the systems may be implemented on a single device or distributed across multiple devices. The functions of the computational elements may be merged into one another or further split into multiple sub-modules.
At one level a software element is implemented as a set of commands prepared by the programmer/developer. However, the module software that can be executed by the computer hardware is executable code committed to memory using “machine codes” selected from the specific machine language instruction set, or “native instructions,” designed into the hardware processor. The machine language instruction set, or native instruction set, is known to, and essentially built into, the hardware processor(s). This is the “language” by which the system and application software communicates with the hardware processors. Each native instruction is a discrete code that is recognized by the processing architecture and that can specify particular registers for arithmetic, addressing, or control functions; particular memory locations or offsets; and particular addressing modes used to interpret operands. More complex operations are built up by combining these simple native instructions, which are executed sequentially, or as otherwise directed by control flow instructions.
The inter-relationship between the executable software instructions and the hardware processor is structural. In other words, the instructions per se are a series of symbols or numeric values. They do not intrinsically convey any information. It is the processor, which by design was preconfigured to interpret the symbols/numeric values, which imparts meaning to the instructions.
The algorithms used herein may be configured to execute on a single machine at a single location, on multiple machines at a single location, or on multiple machines at multiple locations. When multiple machines are employed, the individual machines may be tailored for their particular tasks. For example, operations requiring large blocks of code and/or significant processing capacity may be implemented on large and/or stationary machines.
In addition, certain embodiments relate to tangible and/or non-transitory computer readable media or computer program products that include program instructions and/or data (including data structures) for performing various computer-implemented operations. Examples of computer-readable media include, but are not limited to, memory devices, phase-change devices, magnetic media such as disk drives, magnetic tape, optical media such as CDs, magneto-optical media, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM). The computer readable media may be directly controlled by an end user or the media may be indirectly controlled by the end user. Examples of directly controlled media include the media located at a user facility and/or media that are not shared with other entities. Examples of indirectly controlled media include media that is indirectly accessible to the user via an external network and/or via a service providing shared resources such as the “cloud.” Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
In some embodiments, code executed during generation or execution of various models on an appropriately programmed system can be embodied in the form of software elements which can be stored in a nonvolatile storage medium (such as an optical disk, flash storage device, mobile hard disk, etc.), including a number of instructions for enabling a computing device (such as a personal computer, server, or network equipment) to perform the methods described herein.
In various embodiments, the data or information employed in the disclosed methods and apparatus is provided in an electronic format. Such data or information may include design layouts, fixed parameter values, floated parameter values, feature profiles, metrology results, and the like. As used herein, data or other information provided in electronic format is available for storage on a machine and transmission between machines. Conventionally, data in electronic format is provided digitally and may be stored as bits and/or bytes in various data structures, lists, databases, etc. The data may be embodied electronically, optically, etc.
Modifications, additions, or omissions may be made to any of the above-described embodiments without departing from the scope of the disclosure. Any of the embodiments described above may include more, fewer, or other features without departing from the scope of the disclosure. Additionally, the steps of described features may be performed in any suitable order without departing from the scope of the disclosure. Also, one or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the disclosure. The components of any embodiment may be integrated or separated according to particular needs without departing from the scope of the disclosure.
It should be understood that certain aspects described above can be implemented in the form of logic using computer software in a modular or integrated manner. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement the present invention using hardware and a combination of hardware and software.
Any of the software components or functions described in this application may be implemented as software code using any suitable computer language and/or computational software such as, for example, Java, C, C#, C++, Python, LabVIEW, Mathematica, or other suitable language/computational software, including low-level code, such as code written for field programmable gate arrays, for example in VHDL. The code may include software libraries for functions like data acquisition and control, motion control, image acquisition and display, etc. Some or all of the code may also run on a personal computer, single-board computer, embedded controller, microcontroller, digital signal processor, field programmable gate array, and/or any combination thereof or any similar computation device and/or logic device(s). The software code may be stored as a series of instructions or commands on a computer readable medium (CRM) such as a random access memory (RAM), a read-only memory (ROM), magnetic media such as a hard drive or a floppy disk, optical media such as a CD-ROM, or solid state storage such as a solid state hard drive or removable flash memory device, or any suitable storage device. Any such CRM may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network. Although the foregoing disclosed embodiments have been described in some detail to facilitate understanding, the described embodiments are to be considered illustrative and not limiting. It will be apparent to one of ordinary skill in the art that certain changes and modifications can be practiced within the scope of the appended claims.
The terms “comprise,” “have” and “include” are open-ended linking verbs. Any forms or tenses of one or more of these verbs, such as “comprises,” “comprising,” “has,” “having,” “includes” and “including,” are also open-ended. For example, any method that “comprises,” “has” or “includes” one or more steps is not limited to possessing only those one or more steps and can also cover other unlisted steps. Similarly, any composition or device that “comprises,” “has” or “includes” one or more features is not limited to possessing only those one or more features and can cover other unlisted features.
All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the present disclosure and does not pose a limitation on the scope of the present disclosure otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the present disclosure.
Groupings of alternative elements or embodiments of the present disclosure disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.
This application claims benefit of and priority to U.S. Provisional Patent Application No. 63/455,878, titled “High-Resolution, Large Field-Of-View Label-Free Imaging Via Aberration-Corrected, Closed-Form Complex Field Reconstruction,” and filed on Mar. 30, 2023 and to U.S. Provisional Patent Application No. 63/536,265, titled “High-Resolution, Large Field-Of-View Label-Free Imaging Via Aberration-Corrected, Closed-Form Complex Field Reconstruction,” and filed on Sep. 1, 2023, which are incorporated by reference herein in their entireties and for all purposes.