The present disclosure relates generally to methods and systems for enhanced resolution imaging, and more specifically to methods and systems for performing enhanced resolution imaging for bioassay applications, e.g., nucleic acid detection and sequencing applications.
High performance imaging systems used for optical inspection and genome sequencing are designed to maximize imaging throughput, signal-to-noise ratio (SNR), image resolution, and image contrast, which are key figures of merit for many imaging applications. In genome sequencing, for example, high resolution imaging enables the use of higher packing densities of clonally-amplified nucleic acid molecules on a flow cell surface, which in turn may enable higher throughput sequencing in terms of the number of bases called per sequencing reaction cycle. However, a problem that may arise when attempting to increase imaging throughput while simultaneously trying to improve the ability to resolve small image features at higher magnification is the reduced number of photons available for imaging. In fluorescence imaging-based sequencing, for example, where fluorophores are used to label nucleic acid molecules tethered to a flow cell surface, high resolution imaging may in effect reduce the total number of fluorophores present in the region of the flow cell surface (e.g., a feature) being imaged, and thus result in the generation of fewer photons. Although this problem may be addressed, for example, by integrating over longer periods of time to acquire an acceptable image (e.g., an image that has a sufficient signal-to-noise ratio to resolve the features of interest), this approach may have an adverse effect on image data acquisition rates, imaging throughput, and overall sequencing reaction cycle times.
The image resolution of conventional imaging systems is limited by diffraction to a value determined by the effective numerical aperture (NA) of the object-facing imaging optics and the wavelength of light being imaged. In recent years, a number of imaging techniques (e.g., stimulated emission depletion microscopy (STED), photo-activated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), reversible saturable optical fluorescence transitions microscopy (RESOLFT), etc.) have been developed that may be used to acquire images that exceed diffraction-limited image resolution. However, these approaches generally have low imaging throughput (and in many cases require specialized fluorophores), thus precluding their use for high-speed imaging applications.
Other imaging techniques that utilize patterned illumination (e.g., confocal microscopy, structured illumination microscopy (SIM), and image scanning microscopy (ISM)) may be used to achieve more modest but still significant increases in image resolution. However, these techniques either suffer from a significant loss of signal in view of the modest increase in resolution obtained (e.g., due to use of pinhole apertures as spatial filters in the case of confocal microscopy) or require the acquisition of multiple images and a subsequent computational reconstruction of a resolution-enhanced image (thereby significantly increasing image acquisition times, imaging system complexity, and computational overhead for structured illumination microscopy (SIM) and image scanning microscopy (ISM)). Having to acquire multiple images also generally has the undesirable effect that read noise (or digitization noise) is accumulated with every image acquisition.
Time delay and integration (TDI) imaging enables a combination of high throughput imaging with high SNR by accumulating the image-forming signal onto a two-dimensional stationary sensor pixel array that shifts the acquired image signal from one row of pixels in the pixel array to the next synchronously with the motion of an object being imaged as it is moved relative to the imaging system, or vice versa. As is the case with conventional imaging systems, the image resolution for TDI imaging systems is diffraction-limited. Thus, there remains an unmet need for an imaging system capable of high-throughput imaging while simultaneously maintaining high image resolution, high SNR, and high image contrast.
Disclosed herein are systems and methods that combine: (i) the use of a first optical transformation to create patterned illumination that is directed to an imaged object such that light reflected, transmitted, scattered, or emitted by the object comprises high-resolution spatial information about the object that would not otherwise be obtained, and (ii) the use of a second optical transformation that generates an enhanced resolution optical image at a time delay and integration (TDI) image sensor that comprises all or a portion of the high-resolution information contained in said light due to the patterned illumination. The resulting enhanced-resolution images can be acquired without requiring a change in the configuration, position, or orientation of the optical transformation devices used to generate the first and second optical transformations, with no additional digital processing required, or, in some instances, using digital processing of substantially reduced computational complexity in comparison with conventional enhanced resolution imaging methods. All of these factors contribute to the simplicity and high throughput of the imaging system.
In one exemplary implementation, the disclosed systems and methods utilize a novel combination of optical photon reassignment (OPRA) with time delay and integration (TDI) imaging to provide high-throughput and high signal-to-noise ratio (SNR) images of an object while also providing enhanced image resolution. The disclosed systems and methods provide enhanced image resolution without compromising the imaging throughput and high SNR achieved using TDI imaging by incorporating passive optical transformation device(s) into both the illumination and detection optical paths of the imaging system. In some embodiments, the systems and methods described herein provide enhanced image resolution (e.g., enhanced raw image resolution) as compared to that for images acquired using an otherwise identical imaging system that lacks one or more of the passive optical transformation devices. In some embodiments, the enhanced-resolution image is obtained in a single scan, without the need to acquire or recombine multiple images. In some embodiments, the enhanced-resolution images are produced with little or no digital processing required.
The systems and methods provided herein, in some embodiments, may be standalone systems or may be incorporated into pre-existing imaging systems. In some embodiments, the imaging systems may be useful for imaging, for example, biological analytes, non-biological analytes, synthetic analytes, cells, tissue samples, or any combination thereof.
Disclosed herein are imaging systems, comprising: an imaging device, comprising: an illumination unit that includes a radiation source optically coupled to a first optical transformation device, wherein the first optical transformation device applies a first optical transformation to a light beam received from the radiation source to generate an illumination pattern that is directed to a corresponding area of an object; a projection unit that receives light reflected, transmitted, scattered, or emitted by the object and directs it to a detection unit, wherein the projection unit is configured to accept said light within a defined range of propagation angles; a detection unit that includes one or more image sensors configured for time delay and integration (TDI) imaging and optically coupled to a second optical transformation device, wherein the second optical transformation device applies a second optical transformation to light received from the projection unit; wherein the illumination pattern generated by the first optical transformation causes the light accepted by the projection unit to comprise high-resolution spatial information about the object that would not be contained in the light accepted by the projection unit in a comparable imaging device lacking the first optical transformation device; and wherein the second optical transformation generates an optical image at the one or more image sensors that comprises all or a portion of said high-resolution spatial information; and an actuator configured to create relative movement between the imaging device and the object during a scan of all or a portion of the object, wherein the relative movement is synchronized with the time delay and integration (TDI) imaging such that a scanned image of all or a portion of the object is acquired by the one or more image sensors.
In some embodiments, the illumination pattern comprises a plurality of light intensity maxima, and the second optical transformation compensates for a spatial offset between the plurality of light intensity maxima in the illumination pattern and a plurality of signal intensity maxima that would be measured by individual image sensor pixels laterally offset relative to the light intensity maxima in scanned images acquired using an otherwise identical imaging system that lacks the second optical transformation device, the second optical transformation thereby enabling acquisition of a scanned image of higher resolution than would be acquired using an otherwise identical imaging system that lacks the second optical transformation device.
In some embodiments, the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a lateral spatial resolution of an otherwise identical imaging system that lacks the second optical transformation device. In some embodiments, the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a diffraction-limited spatial resolution.
In some embodiments, the scanned image acquired by at least one of the one or more image sensors exhibits an increased signal-to-noise ratio (SNR) compared to a signal-to-noise ratio (SNR) of an otherwise identical imaging system that lacks the second optical transformation device.
In some embodiments, at any given point in time during the scan, the second optical transformation device reroutes and redistributes light received from the projection unit to present a modified optical image of the object to the one or more image sensors, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and a known illumination pattern projected on the object at that point in time, and wherein the one or more image sensors integrate signals from a plurality of modified optical images over a period of time required to perform the scan of the object.
In some embodiments, the first optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array. In some embodiments, the second optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
In some embodiments, the imaging system comprises only components whose position, relative orientation, and optical properties remain static during imaging, with the exception of (i) the actuator configured to create relative motion between the imaging device and the object, and (ii) components of an autofocus system.
In some embodiments, the second optical transformation device is a lossless optical transformation device. In some embodiments, at least 40%, 50%, 60%, 70%, 80%, 90%, 95%, or 99% of the light received from the projection unit that enters the second optical transformation device reaches the one or more image sensors.
In some embodiments, the actuator further comprises a moveable stage mechanically coupled to the object to support, rotate, or translate the object relative to the imaging device, or any combination thereof.
In some embodiments, the radiation source comprises a coherent source, a partially coherent source, an incoherent source, or any combination thereof.
In some embodiments, the one or more image sensors comprise one or more time delay and integration (TDI) cameras, or one or more cameras comprising a TDI mode of image acquisition, and wherein the relative movement between the imaging device and the object is synchronized to a line shift or an image shift in the one or more image sensors so as to minimize motion blurring during image acquisition.
In some embodiments, integration of illumination pattern light intensity directed to the object during a scan results in approximately the same total exposure to illumination light at every location of the object. In some embodiments, a separation distance between any two of the plurality of light intensity maxima in the illumination pattern is from 1× to 100× the full width at half maximum (FWHM) of a corresponding intensity peak profile.
In some embodiments, the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a regular arrangement of two or more micro-lenses. In some embodiments, the second optical transformation device comprises a micro-lens array, and wherein there is a 1:1 correspondence between the plurality of light intensity maxima in the illumination pattern and micro-lenses in the micro-lens array. In some embodiments, each micro-lens in the micro-lens array is configured to demagnify a corresponding beamlet in the light received from the projection unit. In some embodiments, the regular arrangement is a hexagonal pattern. In some embodiments, the regular arrangement includes a shift in micro-lens position between neighboring rows or columns of micro-lenses. In some embodiments, a projection of the regular arrangement onto an object plane comprising the object is rotated with respect to a direction of the relative movement. In some embodiments, the projection of the regular arrangement onto the object plane comprising the object is rotated by an angle, θ, with respect to the direction of relative movement, and wherein θ is chosen so as to result in the illumination pattern providing a uniform total exposure at every point on the object when integrated over a scan.
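By way of non-limiting illustration, the sketch below (in Python) shows one classical way to choose the rotation angle, θ, for a square lattice of illumination maxima: setting tan θ = 1/n for a lattice of n columns spaces the spot tracks evenly along the cross-scan axis, so that the integrated exposure is approximately uniform. The lattice geometry, pitch value, and function names are illustrative assumptions, not features required by the disclosed system.

```python
# Illustrative sketch (not part of the disclosure): choosing the lattice
# rotation angle theta so that, integrated over a scan, the illumination
# spot tracks tile the cross-scan axis uniformly.
import numpy as np

def track_positions(pitch, n_cols, n_rows, theta):
    """Cross-scan coordinates swept by the spots of a rotated lattice.

    During a scan each illumination spot traces a line parallel to the
    scan (x) axis; its cross-scan coordinate is the projection of the
    lattice point onto the perpendicular axis.
    """
    i, j = np.meshgrid(np.arange(n_cols), np.arange(n_rows))
    x, y = i.ravel() * pitch, j.ravel() * pitch
    # Rotate the lattice by theta relative to the scan axis and keep
    # only the cross-scan component of each spot position.
    return np.sort(-x * np.sin(theta) + y * np.cos(theta))

pitch, n = 10.0, 8                      # arbitrary units, 8 x 8 spots
theta = np.arctan(1.0 / n)              # candidate rotation angle
tracks = track_positions(pitch, n, n, theta)
gaps = np.diff(tracks)
print(f"theta = {np.degrees(theta):.2f} deg, "
      f"track gaps {gaps.min():.4f} to {gaps.max():.4f}")
# Equal gaps (here pitch / sqrt(n**2 + 1)) indicate that the
# integrated exposure is uniform across the cross-scan axis.
```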
In some embodiments, the first optical transformation device and the second optical transformation device comprise a plurality of harmonically-modulated phase masks or harmonically-modulated amplitude masks with different orientations. In some embodiments, a spatial frequency and orientation of the second optical transformation device match those of the first optical transformation device. In some embodiments, the first and second optical transformation devices comprise harmonically-modulated phase masks, and wherein the second optical transformation device is phase shifted relative to the first optical transformation device. In some embodiments, a final high-resolution image is reconstructed from the scanned image(s) acquired by the one or more image sensors by applying a Fourier reweighting process.
In some embodiments, the imaging device is configured to perform fluorescence imaging, and wherein the illumination unit is configured to provide excitation light at two or more excitation wavelengths. In some embodiments, the imaging device is configured to perform fluorescence imaging, and wherein the detection unit is configured to detect fluorescence at two or more emission wavelengths.
In some embodiments, the imaging system further comprises a synchronization unit configured to control the synchronization of the relative movement of the imaging device and the object to the time delay and integration (TDI) imaging of the one or more image sensors.
In some embodiments, the object comprises a flow cell or substrate for performing nucleic acid sequencing. In some embodiments, the flow cell or substrate comprises at least one surface, and wherein the at least one surface comprises a plurality of single nucleic acid molecules or clonally-amplified nucleic acid clusters.
In some embodiments, the second optical transformation device is not a diffraction grating. In some embodiments, the imaging system further comprises a compensator configured to correct for non-flatness of the second optical transformation device. In some embodiments, the imaging system further comprises one or more pinhole aperture arrays positioned on or in front of the one or more image sensors, wherein the pinhole aperture arrays are configured to reduce artifacts in a point spread function for the imaging system.
Also disclosed herein are methods of imaging an object, comprising: illuminating a first optical transformation device with a light beam, wherein the first optical transformation device is configured to apply a first optical transformation to the light beam to produce an illumination pattern that is projected through an object-facing optical component of a projection unit onto the object; directing light reflected, transmitted, scattered, or emitted by the object and accepted by the object-facing optical component of the projection unit to a second optical transformation device, wherein the second optical transformation device is configured to apply a second optical transformation to the light accepted by the projection unit and relay it to one or more image sensors configured for time delay and integration (TDI) imaging; wherein the illumination pattern generated by the first optical transformation causes the light accepted by the projection unit to comprise high-resolution spatial information about the object that would not be contained in the light accepted by a projection unit in a comparable imaging device lacking the first optical transformation device; and wherein the second optical transformation generates an optical image at the one or more image sensors that comprises all or a portion of said high-resolution spatial information; and scanning the object relative to the object-facing optical component, or the object-facing optical component relative to the object, wherein relative motion of the object and object-facing optical component during the scan is synchronized to the time delay and integration (TDI) imaging such that a scanned image of all or a portion of the object is acquired by each of the one or more image sensors.
In some embodiments, the illumination pattern comprises a plurality of light intensity maxima, and the second optical transformation compensates for a spatial offset between the plurality of light intensity maxima in the illumination pattern and a plurality of signal intensity maxima that would be measured by individual image sensor pixels laterally offset relative to the light intensity maxima in scanned images acquired using an otherwise identical imaging system that lacks the second optical transformation device, the second optical transformation thereby enabling acquisition of a scanned image of higher resolution than would be acquired using an otherwise identical imaging system that lacks the second optical transformation device.
In some embodiments, the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a lateral spatial resolution of an otherwise identical imaging system that lacks the second optical transformation device.
In some embodiments, the scanned image acquired by at least one of the one or more image sensors exhibits an increased signal-to-noise ratio (SNR) compared to a signal-to-noise ratio (SNR) of an otherwise identical imaging system that lacks the second optical transformation device.
In some embodiments, the light accepted by the projection unit passes through the second optical transformation device without significant loss. In some embodiments, the light accepted by the projection unit that passes through the second optical transformation device is at least 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, 98%, or 99% of the light accepted by the projection unit that reaches the second optical transformation device.
In some embodiments, at any given point in time during the scan, the second optical transformation device reroutes and redistributes light received from the projection unit to present a modified optical image of the object to the one or more image sensors, and wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and a known illumination pattern projected on the object at that point in time, and wherein the one or more image sensors integrate signals from a plurality of modified optical images over a period of time required to perform the scanning of the object.
In some embodiments, the first optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array. In some embodiments, the second optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
In some embodiments, an imaging system used to perform the method comprises only components that remain static during imaging, with the exception of (i) an actuator configured to create relative motion between the imaging system and the object, and (ii) components of an autofocus system.
In some embodiments, at least 40%, 50%, 60%, 70%, 80%, 90%, 95%, or 99% of the light received by the projection unit and entering the second optical transformation device reaches the one or more image sensors.
In some embodiments, the one or more image sensors comprise one or more time delay and integration (TDI) cameras, or one or more cameras comprising a TDI mode of image acquisition, and wherein the relative motion between the object-facing optical component and the object is synchronized to a line shift or an image shift in the one or more image sensors so as to minimize motion blurring during image acquisition.
In some embodiments, integration of illumination pattern light intensity directed to the object during a scan results in approximately the same total exposure to illumination light at every location of the object. In some embodiments, a separation distance between any two light intensity maxima of the plurality of light intensity maxima in the illumination pattern is from 1× to 100× the full width at half maximum (FWHM) of a corresponding intensity peak profile.
In some embodiments, the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a regular arrangement of two or more micro-lenses. In some embodiments, each micro-lens in the micro-lens array is configured to demagnify a corresponding beamlet in the light received from the projection unit. In some embodiments, the regular arrangement is a hexagonal pattern. In some embodiments, the regular arrangement includes a shift in micro-lens position between neighboring rows or columns of micro-lenses. In some embodiments, the regular arrangement is staggered. In some embodiments, a projection of the regular arrangement onto an object plane comprising the object is rotated with respect to a direction of the relative movement. In some embodiments, the projection of the regular arrangement onto the object plane comprising the object is rotated by an angle, θ, with respect to the direction of relative movement, and wherein θ is chosen so as to result in the illumination pattern providing a uniform total exposure at every point on the object when integrated over a scan.
In some embodiments, the first optical transformation device and the second optical transformation device comprise a plurality of harmonically-modulated phase masks or harmonically-modulated amplitude masks with different orientations. In some embodiments, a spatial frequency and orientation of the second optical transformation device match those of the first optical transformation device. In some embodiments, the first and second optical transformation devices comprise harmonically-modulated phase masks, and wherein the second optical transformation device is phase shifted relative to the first optical transformation device. In some embodiments, a final high-resolution image is reconstructed from the scanned image(s) acquired by the one or more image sensors by applying a Fourier reweighting process.
In some embodiments, the one or more image sensors comprise one or more time delay and integration (TDI) cameras, charge-coupled device (CCD) cameras, complementary metal-oxide semiconductor (CMOS) cameras, or single-photon avalanche diode (SPAD) arrays.
In some embodiments, the scanned image(s) comprise fluorescence images, and wherein the illuminating step comprises providing excitation light at two or more excitation wavelengths. In some embodiments, the scanned image(s) comprise fluorescence images, and wherein the one or more image sensors are configured to detect fluorescence at two or more emission wavelengths.
In some embodiments, the object comprises a flow cell or substrate for performing nucleic acid sequencing. In some embodiments, the flow cell or substrate comprises at least one surface, and wherein the at least one surface comprises a plurality of single nucleic acid molecules or clonally-amplified nucleic acid clusters.
All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference in their entirety to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference in its entirety. In the event of a conflict between a term herein and a term in an incorporated reference, the term herein controls.
The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
Disclosed herein are systems and methods that combine: (i) the use of a first optical transformation to create patterned illumination that is directed to an imaged object such that light reflected, transmitted, scattered, or emitted by the object comprises high-resolution spatial information about the object that would not otherwise be obtained, and (ii) the use of a second optical transformation that generates an enhanced resolution optical image at a time delay and integration (TDI) image sensor that comprises all or a portion of the high-resolution spatial information captured by the patterned illumination. The resulting enhanced-resolution images can be acquired without requiring a change in the configuration, position, or orientation of the optical transformation devices used to generate the first and second optical transformations, with no additional digital processing required, or, in some instances, using digital processing of substantially reduced computational complexity in comparison with conventional enhanced resolution imaging methods. All of these factors contribute to the simplicity and high throughput of the disclosed imaging system.
In one non-limiting implementation, the disclosed systems and methods combine optical photon reassignment (OPRA) with time delay and integration (TDI) imaging to provide high-throughput and high signal-to-noise ratio (SNR) images of an object using a system that has no moving parts while also providing a large field-of-view (FOV) and enhanced image resolution. The disclosed systems and methods provide enhanced image resolution without compromising the imaging throughput and high SNR that is achieved using TDI imaging by incorporating passive optical transformation device(s) into both the illumination and detection optical paths of the imaging system and by synchronizing a relative motion between the object being imaged and the imaging system with the TDI image acquisition process. In some instances, the systems and methods described herein provide enhanced image resolution (e.g., enhanced raw image resolution) as compared to that obtained for images acquired using an otherwise identical imaging system that lacks one or more of the passive optical transformation devices. In some instances, the enhanced-resolution image is obtained in a single scan, without the need to acquire or recombine multiple images. In some instances, the enhanced-resolution images are produced with little or no digital processing required. This advantageously increases the throughput rate for imaging applications.
For example, in some instances, the disclosed imaging system comprises: an imaging device that includes an illumination unit comprising a radiation source optically coupled to a first optical transformation device, where the first optical transformation device applies a first optical transformation to a light beam received from the radiation source to generate an illumination pattern that is directed to a corresponding area of an object; a projection unit that receives light reflected, transmitted, scattered, or emitted by the object and directs it to a detection unit, wherein the projection unit is configured to accept said light within a defined range of propagation angles; a detection unit that includes one or more image sensors configured for time delay and integration (TDI) imaging and optically coupled to a second optical transformation device, where the second optical transformation device applies a second optical transformation to light received from the projection unit (i.e., the light reflected, transmitted, scattered, or emitted by the object); where the illumination pattern generated by the first optical transformation causes the light accepted by the projection unit to comprise high-resolution spatial information about the object that would not be contained in the light accepted by the projection unit in a comparable imaging device lacking the first optical transformation device; and where the second optical transformation generates an optical image at the one or more image sensors that comprises all or a portion of said high-resolution spatial information; and an actuator configured to create relative movement between the imaging device and the object during a scan of all or a portion of the object, wherein the relative movement is synchronized with the time delay and integration (TDI) imaging such that a scanned image of all or a portion of the object is acquired by each of the one or more image sensors.
In some instances, the illumination pattern comprises a plurality of light intensity maxima, and the second optical transformation device is positioned so that the second optical transformation compensates for a spatial offset between the plurality of light intensity maxima in the illumination pattern and a plurality of signal intensity maxima that would be measured by individual image sensor pixels laterally offset relative to the light intensity maxima in scanned images acquired using an otherwise identical imaging system that lacks the second optical transformation device, the second optical transformation thereby enabling acquisition of a scanned image of higher resolution than would be acquired using an otherwise identical imaging system that lacks the second optical transformation device. In other words, at any given point in time during the scan, the second optical transformation device reroutes and redistributes the light reflected, transmitted, scattered, or emitted by the object to present a modified optical image of the object to the one or more image sensors, where the modified optical image represents a spatial structure of the object that is inferable from the properties of the light reflected, transmitted, scattered, or emitted from the object and the known illumination pattern projected on the object at that point in time, and wherein the one or more image sensors integrate signals from a plurality of instantaneous modified optical images over the period of time required to perform the scan of the object. In some instances, the optical transformation devices (e.g., micro-lens arrays (MLAs), diffractive optical elements (e.g., diffraction gratings), digital micro-mirror devices (DMDs), phase masks, amplitude masks, spatial light modulators (SLMs), or pinhole arrays) are passive (or static) components of the system, i.e., their position and/or orientation is not changed during the image acquisition process. In some instances, the optical transformation devices are configured so that at least 40%, 50%, 60%, 70%, 80%, 90%, 95%, or 99% of the light reflected, transmitted, scattered, or emitted by the object and entering the second optical transformation device reaches the one or more image sensors.
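For conceptual illustration only, the following Python sketch shows the computational analog of the optical rerouting described above, i.e., ISM-style pixel reassignment in which each detected photon is reassigned toward the midpoint between the excitation maximum and its detection position. In the disclosed systems this redistribution is performed optically (e.g., by micro-lens demagnification of each beamlet), so no such digital step is required; the coordinates and function names below are hypothetical.

```python
# Hedged illustration (not the disclosed optics): computational analog
# of optical photon reassignment. Each photon detected at position `d`
# while the excitation maximum is at `e` is reassigned to (e + d) / 2.
import numpy as np

def reassign(excitation_xy, detected_xy, alpha=0.5):
    """Reassign a detected photon coordinate toward the excitation spot.

    alpha = 0.5 is the standard midpoint reassignment, which narrows
    the effective point spread function by up to ~sqrt(2).
    """
    e = np.asarray(excitation_xy, dtype=float)
    d = np.asarray(detected_xy, dtype=float)
    return e + alpha * (d - e)

# Example: excitation maximum at (0, 0), photon detected at (0.4, -0.2)
print(reassign((0.0, 0.0), (0.4, -0.2)))   # -> [ 0.2 -0.1]
```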
In a second non-limiting implementation, the disclosed systems and methods combine an alternative optical transformation with time delay and integration (TDI) imaging to provide high-throughput and high signal-to-noise ratio (SNR) images of an object using a system that has no moving parts while also providing a large field-of-view (FOV) and enhanced image resolution. This implementation is a generalization of the concept of structured illumination microscopy (SIM), which is known to provide resolution enhancement relative to a diffraction-limited imaging system. The disclosed methods and systems differ from SIM by collecting the image information in a single pass, thus obviating the need to acquire and recombine a series of images as required by conventional SIM. The compatibility of the approach with TDI imaging supports high-throughput, high SNR imaging, and requires only computationally straightforward and inexpensive processing of the raw images.
In this second, non-limiting implementation, the illumination pattern generated by the first optical transformation device (e.g., one or more phase masks or intensity masks) consists of regions of harmonically-modulated light intensity at the maximal spatial frequency supported by the objective's numerical aperture (NA) and the illumination wavelength. The pattern consists of several regions with different orientations of harmonic modulation, so that each point of the object scanning through the illumination pattern is sequentially and uniformly exposed to modulations in all directions in the sample plane. Alternatively, the pattern can consist of a harmonically-modulated intensity with the orientation aligned with one or more selected directions, i.e., directions selected to improve resolution along specific directions in the plane (e.g., the directions connecting nearest neighbors in an array-shaped object).
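As a worked example (assuming that the finest illumination modulation corresponds to two coherent beams interfering at opposite edges of the objective pupil, i.e., a spatial frequency of 2NA/λ; the numerical values are arbitrary):

```python
# Worked example under stated assumptions: maximal illumination-pattern
# spatial frequency through the objective is f_max = 2 * NA / lambda,
# so the finest modulation period is lambda / (2 * NA).
NA, wavelength_nm = 1.0, 532.0
f_max = 2 * NA / wavelength_nm          # cycles per nm
period_nm = 1.0 / f_max
print(f"finest modulation period ~ {period_nm:.0f} nm")  # ~266 nm
```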
The second optical transformation device comprises, e.g., one or more harmonically-modulated phase masks or harmonically-modulated amplitude masks, with spatial frequencies and orientations matching those of the first optical transformation device in each region. In some instances, the second optical transformation device is complementary (e.g., phase-shifted by 90 degrees) relative to the first optical transformation device. The scanning of the object, synchronized with the TDI image shift, generates a set of images in the image plane that correspond to the different phase shifts of the harmonic modulation required for SIM imaging. In conventional TDI, the modulation would be averaged over time, yielding diffraction-limited images. In contrast, the disclosed systems and methods preserve and recombine the images obtained at different phase shifts, routing each image to the appropriate regions of frequency space. Thus, for the disclosed systems and methods, the required set of SIM images is acquired in a single TDI pass and is recombined in an analog fashion, without requiring computational overhead. The final high-resolution image can be reconstructed from the raw scanned image by Fourier reweighting, which is a computationally-inexpensive operation. Remarkably, one of the key difficulties of conventional SIM microscopy (the need to keep the specimen aligned during acquisition of the entire image set) is significantly relaxed for the disclosed methods due to near-simultaneous image acquisition and analog recombination.
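A minimal sketch of the Fourier reweighting step is given below for illustration. It assumes that the raw single-pass scan already contains the analog-recombined SIM passbands and that a model of the system's effective optical transfer function is available; the name otf_eff and the Wiener-style regularization are assumptions of this sketch rather than requirements of the disclosed methods.

```python
# Minimal sketch (illustrative assumptions noted above): recover the
# final image from the raw scan by a single, computationally-cheap
# frequency-space reweighting.
import numpy as np

def fourier_reweight(raw_image, otf_eff, wiener_eps=1e-2):
    """Reweight the raw scanned image in frequency space.

    Divides each spatial-frequency component by the effective OTF,
    regularized by `wiener_eps` to avoid amplifying noise where the
    OTF is weak.
    """
    F = np.fft.fft2(raw_image)
    weights = np.conj(otf_eff) / (np.abs(otf_eff) ** 2 + wiener_eps)
    return np.real(np.fft.ifft2(F * weights))

# Usage with a toy Gaussian OTF model on a 512 x 512 grid:
fx = np.fft.fftfreq(512)
FX, FY = np.meshgrid(fx, fx)
otf = np.exp(-(FX**2 + FY**2) / (2 * 0.15**2))   # illustrative OTF only
# enhanced = fourier_reweight(raw_scan, otf)     # raw_scan: measured image
```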
The systems and methods provided herein, in some instances, may be standalone systems or may be incorporated into pre-existing imaging systems. In some instances, the imaging systems may be useful for imaging, for example, biological analytes, non-biological analytes, synthetic analytes, cells, tissue samples, or any combination thereof.
While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.
Definitions: Unless defined otherwise, all terms of art, notations and other technical and scientific terms or terminology used herein are intended to have the same meaning as is commonly understood by one of ordinary skill in the art to which the claimed subject matter pertains. In some cases, terms with commonly understood meanings are defined herein for clarity and/or for ready reference, and the inclusion of such definitions herein should not necessarily be construed to represent a substantial difference over what is generally understood in the art.
Throughout this application, various embodiments may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the disclosure. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range irrespective of whether a specific numerical value or specific sub-range is expressly stated. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. The terms “about” and “approximately” shall generally mean an acceptable degree of error or variation for a given value or range of values, such as, for example, a degree of error or variation that is within 20 percent (%), within 15%, within 10%, or within 5% of a given value or range of values.
As used in the specification and claims, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a sample” includes a plurality of samples, including mixtures thereof.
The terms “determining,” “measuring,” “evaluating,” “assessing,” “assaying,” and “analyzing” are often used interchangeably herein to refer to forms of measurement. The terms include determining whether an element is present or not (for example, detection). These terms can include quantitative, qualitative, or quantitative and qualitative determinations. Assessing can be relative or absolute. “Detecting the presence of” can include determining the amount of something present in addition to determining whether it is present or absent, depending on the context.
Use of absolute or sequential terms, for example, “will,” “will not,” “shall,” “shall not,” “must,” “must not,” “first,” “initially,” “next,” “subsequently,” “before,” “after,” “lastly,” and “finally,” are not meant to limit the scope of the systems and methods disclosed herein but rather are meant to be exemplary.
Any systems, methods, software, compositions, and platforms described herein are modular and not limited to sequential steps. Accordingly, terms such as “first” and “second” do not necessarily imply priority, order of importance, or order of acts.
As used herein, the term “biological sample” generally refers to a sample obtained from a subject. The biological sample may be obtained directly or indirectly from the subject. A sample may be obtained from a subject via any suitable method, including, but not limited to, spitting, swabbing, blood draw, biopsy, obtaining excretions (e.g., urine, stool, sputum, vomit, or saliva), excision, scraping, and puncture. A sample may comprise a bodily fluid such as, but not limited to, blood (e.g., whole blood, red blood cells, leukocytes or white blood cells, platelets), plasma, serum, sweat, tears, saliva, sputum, urine, semen, mucus, synovial fluid, breast milk, colostrum, amniotic fluid, bile, bone marrow, interstitial or extracellular fluid, or cerebrospinal fluid. For example, a sample of bodily fluid may be obtained by a puncture method and may comprise blood and/or plasma. Such a sample may comprise cells and/or cell-free nucleic acid material. Alternatively, the sample may be obtained from any other source including but not limited to blood, sweat, hair follicle, buccal tissue, tears, menses, feces, or saliva. The biological sample may be a tissue sample, such as a tumor biopsy. The sample may be obtained from any of the tissues provided herein including, but not limited to, skin, heart, lung, kidney, breast, pancreas, liver, intestine, brain, prostate, esophagus, muscle, smooth muscle, bladder, gall bladder, colon, or thyroid. The biological sample may comprise one or more cells. A biological sample may comprise one or more nucleic acid molecules such as one or more deoxyribonucleic acid (DNA) and/or ribonucleic acid (RNA) molecules (e.g., included within cells or not included within cells). Nucleic acid molecules may be included within cells. Alternatively, or in addition, nucleic acid molecules may not be included within cells (e.g., cell-free nucleic acid molecules).
As used herein, the term “optical device” refers to a device comprising one, two, three, four, five, six, seven, eight, nine, ten, or more than ten optical elements or components (e.g., lenses, mirrors, prisms, beam-splitters, filters, diffraction gratings, apertures, etc., or any combination thereof).
As used herein, the term “optical transformation device” refers to an optical device used to apply an optical transformation to a beam of light (e.g., to effect a change in intensity, phase, wavelength, band-pass, polarization, ellipticity, spatial distribution, etc., or any combination thereof).
As used herein, the term “lossless” when applied to an optical device indicates that there is no significant loss of light intensity when a light beam passes through, or is reflected from, the optical device. For a lossless optical device, the intensity of the light transmitted or reflected by the optical device is at least 80%, 85%, 90%, 95%, 98%, or 99% of the intensity of the incident light.
The term “support” or “substrate,” as used herein, generally refers to any solid or semi-solid article on which analytes or reagents, such as nucleic acid molecules, may be immobilized. Nucleic acid molecules may be synthesized, attached, ligated, or otherwise immobilized. Nucleic acid molecules may be immobilized on a substrate by any method including, but not limited to, physical adsorption, ionic or covalent bond formation, or combinations thereof. An analyte or reagent (e.g., nucleic acid molecules) may be directly immobilized onto a substrate. An analyte or reagent may be indirectly immobilized onto a substrate, such as via one or more intermediary supports or substrates. In an example, an analyte (e.g., nucleic acid molecule) is immobilized to a bead (e.g., support or substrate) which bead is immobilized to a substrate. A substrate may be 2-dimensional (e.g., a planar 2D substrate) or 3-dimensional. In some cases, a substrate may be a component of a flow cell and/or may be included within or adapted to be received by a sequencing instrument. A substrate may include a polymer, a glass, or a metallic material. Examples of substrates include a membrane, a planar substrate, a microtiter plate, a bead (e.g., a magnetic bead), a filter, a test strip, a slide, a cover slip, and a test tube. A substrate may comprise organic polymers such as polystyrene, polyethylene, polypropylene, polyfluoroethylene, polyethyleneoxy, and polyacrylamide (e.g., polyacrylamide gel), as well as co-polymers and grafts thereof. A substrate may comprise latex or dextran. A substrate may also be inorganic, such as glass, silica, gold, controlled-pore-glass (CPG), or reverse-phase silica. The configuration of a support may be, for example, in the form of beads, spheres, particles, granules, a gel, a porous matrix, or a substrate. In some cases, a substrate may be a single solid or semi-solid article (e.g., a single particle), while in other cases a substrate may comprise a plurality of solid or semi-solid articles (e.g., a collection of particles). Substrates may be planar, substantially planar, or non-planar. Substrates may be porous or non-porous and may have swelling or non-swelling characteristics. A substrate may be shaped to comprise one or more wells, depressions, or other containers, vessels, features, or locations. A plurality of substrates may be configured in an array at various locations. A substrate may be addressable, e.g., for robotic delivery of reagents, or by detection approaches such as scanning by laser illumination and confocal or deflective light gathering. For example, a substrate may be in optical and/or physical communication with a detector. Alternatively, a substrate may be physically separated from a detector by a distance. A substrate may be configured to rotate with respect to an axis. The axis may be an axis through the center of the substrate. The axis may be an off-center axis. The substrate may be configured to rotate at any useful velocity. The substrate may be configured to undergo a change in relative position with respect to a first longitudinal axis and/or a second longitudinal axis.
The term “bead,” as described herein, generally refers to a solid support, resin, gel (e.g., hydrogel), colloid, or particle of any shape and dimensions. A bead may comprise any suitable material such as glass or ceramic, one or more polymers, and/or metals. Examples of suitable polymers include, but are not limited to, nylon, polytetrafluoroethylene, polystyrene, polyacrylamide, agarose, cellulose, cellulose derivatives, or dextran. Examples of suitable metals include paramagnetic metals, such as iron. A bead may be magnetic or non-magnetic. For example, a bead may comprise one or more polymers bearing one or more magnetic labels. A magnetic bead may be manipulated (e.g., moved between locations or physically constrained to a given location, e.g., of a reaction vessel such as a flow cell chamber) using electromagnetic forces. A bead may have one or more different dimensions including a diameter. A dimension of the bead (e.g., the diameter of the bead) may be less than about 1 mm, 0.1 mm, 0.01 mm, 0.005 mm, 1 μm, 0.1 μm, 0.01 μm, 1 nm, or may range from about 1 nm to about 100 nm, from about 100 nm to about 1 μm, from about 1 μm to about 100 μm, or from about 1 mm to about 100 mm.
The section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described.
In recent years, a number of approaches have been developed that overcome the diffraction limit by various physical mechanisms, such as stimulated emission depletion microscopy (STED), photo-activated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), reversible saturable optical fluorescence transitions microscopy (RESOLFT), etc. However, these approaches generally have low throughput, precluding their use for high-speed imaging applications, and in many cases these methods require specialized fluorophores. A faster and more general framework allowing for a modest but significant increase in resolution is provided by methods utilizing patterned illumination, such as confocal microscopy, structured illumination microscopy (SIM), and image scanning microscopy (ISM).
Prior publications have described the combination of TDI with multi-focal confocal microscopy, where a large number of stationary illumination points are projected onto the specimen, and an array of pinholes is used to spatially filter the resulting image to create a confocal image in a single TDI pass. This approach enables high-throughput imaging with sub-diffraction-limited resolution and requires no computational overhead. However, its significant drawback is the steep reduction of signal in view of the modest resolution increase achieved, which is an inherent limitation of confocal microscopy. This drawback is resolved in image scanning microscopy (ISM) and structured illumination microscopy (SIM), but these two approaches require the acquisition of multiple images with subsequent computational reconstruction of a resolution-enhanced image, which significantly increases the imaging system complexity and dramatically decreases throughput compared to conventional TDI imaging.
Another imaging modality based on the use of patterned illumination is a variant of ISM where the final image is generated directly on the image sensor without computational overhead. These techniques are known variously as re-scan confocal microscopy, optical photon (or pixel) reassignment microscopy (OPRA), or instant SIM. In these techniques, a resolution improvement is achieved by optical rerouting of light emanating from the sample to an appropriate location in the image plane, either by an arrangement of scanning mirrors or using a modified spinning disk. However, while some of these techniques provide enhanced-resolution images at relatively high frame rates, they are mainly applicable to imaging of a small field of view and are not readily compatible with scanning imaging modalities such as TDI. Therefore, as noted above, there remains an unmet need for systems, devices, and methods for imaging that can bring together the high throughput and SNR of TDI imaging with the resolution enhancement attainable using patterned illumination.
The trade-offs among imaging speed, signal-to-noise ratio (SNR), and image resolution are key considerations for many imaging applications (e.g., nucleic acid sequencing, small molecule or analyte detection, in-vitro cellular biological systems, synthetic and organic substrate analyses, etc.). In some cases, when optimizing an imaging system for a given attribute, others may be compromised. For example, current imaging systems and methods focused on improving image resolution beyond the diffraction limit (e.g., stimulated emission depletion microscopy (STED), photo-activated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), reversible saturable optical fluorescence transitions microscopy (RESOLFT), etc.) are indeed capable of producing images with resolution that exceeds the diffraction limit, yet have low imaging throughputs (e.g., long image acquisition times and/or small fields-of-view) that limit their applicability in applications where high-speed imaging is required. The present disclosure presents systems and methods that are capable of improving imaging speed, SNR, and image resolution simultaneously.
Provided herein are imaging systems that combine optical photon reassignment microscopy (OPRA) with time delay and integration (TDI) imaging to enable high-throughput, high signal-to-noise ratio (SNR) imaging while also providing enhanced image resolution. Optical photon reassignment microscopy (OPRA) is an optical technique for achieving enhanced image resolution without the need for the computer-based processing previously applied in methods such as image scanning microscopy (ISM) to computationally reassign the detected light intensity at any point in time to the corresponding most probable position of an emitter during a scan (Roth, et al. (2013), “Optical photon reassignment microscopy (OPRA)”, Optical Nanoscopy 2:5). OPRA is an improvement on ISM (discussed above). As with ISM, it is a method that does not reject light and is thus capable of generating high SNR images. However, it also does not require digital processing and only requires acquisition of a single image, which minimizes technical noise.
In TDI imaging, an image sensor (e.g., a time delay and integration (TDI) charge-coupled device (CCD)) is configured to capture images of moving objects without blurring by having multiple rows of photosensitive elements (pixels) which integrate and shift signals to an adjacent row of photosensitive elements synchronously with the motion of the image across the array of photosensitive elements. An image comprises a matrix of analog or digital signals corresponding to a numerical value of, e.g., photoelectric charge, accumulated in each image sensor pixel during exposure to light. During each clock cycle (typically from about 1 to 10 microseconds), the signal accumulated in each image sensor pixel is moved to an adjacent pixel (e.g., row by row in a “line shift” TDI sensor). The last row of pixels is connected to the readout electronics, and the rest of the image is shifted by one row. The motion of the object being imaged is synchronized with the clock cycle and image shifts so that each point in the object is imaged onto the same point in the image as it traverses the field of view (i.e., there is no motion blur). The image sensor (or TDI camera) is either continuously exposed, or line shifts may be alternated with exposure intervals. Each point in the image accumulates signal for N clock cycles, where N is the number of active pixel rows in the image sensor. The ability to integrate signal over the duration of a scan provides for high sensitivity imaging at low light levels.
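The following toy model (illustrative only; the row count, array shapes, and uniform per-cycle exposure are simplifying assumptions) demonstrates the line-shift TDI principle described above: charge shifts down one row per clock cycle in step with the object's motion, so each object line integrates signal over N exposures without motion blur.

```python
# Toy model of line-shift TDI accumulation (illustrative assumptions).
import numpy as np

def tdi_scan(object_lines, n_rows=16):
    """Simulate a line-shift TDI scan over a sequence of object lines.

    object_lines has shape (n_lines, width): the light arriving from
    each line of the object during one clock cycle. Returns an image
    of shape (n_lines, width) in which every line has integrated
    signal over n_rows exposures, as in an N-row TDI sensor.
    """
    n_lines, width = object_lines.shape
    charge = np.zeros((n_rows, width))    # accumulated charge per row
    readout = []
    for t in range(n_lines + n_rows - 1):
        # The object advances one row-width per clock cycle, so sensor
        # row r images object line (t - r) during cycle t.
        for r in range(n_rows):
            if 0 <= t - r < n_lines:
                charge[r] += object_lines[t - r]
        # The bottom row is read out, then all charge shifts down one
        # row, synchronously with the object motion (no motion blur).
        readout.append(charge[-1].copy())
        charge = np.roll(charge, 1, axis=0)
        charge[0] = 0.0
    return np.array(readout[n_rows - 1:])

lines = np.random.rand(100, 8)
image = tdi_scan(lines)
assert np.allclose(image, 16 * lines)    # N-fold signal integration
```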
The imaging systems described herein combine these techniques by using novel combinations of optical transformation devices (and other optical components) to create structured illumination patterns for imaging an object, to reroute and redistribute the light reflected, transmitted, scattered, or emitted by the object, and to project the rerouted and redistributed light onto one or more image sensors configured for TDI imaging. The combinations of OPRA and TDI disclosed herein allow the use of static optical transformation devices, which confers the advantages of: (i) being much simpler than existing implementations of OPRA-like systems, and (ii) enabling a wide field-of-view and hence a very high imaging throughput (similar to or exceeding the throughput of conventional TDI systems). The disclosed imaging systems may be configured to perform fluorescence, reflection, transmission, dark field, phase contrast, differential interference contrast, two-photon, multi-photon, single molecule localization, or other types of imaging.
In some cases, the disclosed imaging systems may be standalone imaging systems. Alternatively, or in addition, in some instances the disclosed imaging systems, or component modules thereof, may be configured as an add-on to a pre-existing imaging system.
The disclosed imaging systems may be used to image any of a variety of objects or samples. For example, the object may be an organic or inorganic object, or combination thereof. An organic object may comprise cells, tissues, nucleic acids, nucleic acids conjugated onto beads, nucleic acids conjugated onto a surface, nucleic acids conjugated onto a support structure, proteins, small molecule analytes, a biological sample as described elsewhere herein, or any combination thereof. An object may comprise a substrate comprising one or more analytes (e.g., organic, inorganic) immobilized thereto. The object may comprise any substrate as described elsewhere herein, such as a planar or substantially planar substrate. The substrate may be a textured substrate, such as a physically or chemically patterned substrate, to distinguish at least one region from another region. The object may comprise a substrate comprising an array of individually addressable locations. An individually addressable location may correspond to a patterned or textured spot or region of the substrate. In some cases, an analyte or cluster of analytes (e.g., clonally amplified population of nucleic acid molecules, optionally immobilized to a bead) may be immobilized at an individually addressable location, such that the array of individually addressable locations comprises an array of analytes or clusters of analytes immobilized thereto. The imaging systems and methods described herein may be configured to spatially resolve optical signals, at high throughput, high SNR, and high resolution, between individual analytes or individual clusters of analytes within an array of analytes or clusters of analytes that are immobilized on a substrate. At any one point in time when the object is illuminated, not all of the individually addressable locations within the scanned FOV will emit optical (e.g., fluorescent) signals. That is, for a given time point, at least one individually addressable location on the object and within the illuminated FOV will not emit an optical signal (e.g., a fluorescent intensity).
In some instances, the disclosed imaging systems may be used with a nucleic acid sequencing platform, non-limiting examples of which are described in PCT International Patent Application Publication No. WO 2020/186243, which is incorporated by reference herein in its entirety.
In some instances, the illumination unit 102 may comprise a light source 104, a first optical transformation device 106, optional optics 108, or any combination thereof. In some instances, the light source (or radiation source) 104 may comprise a coherent source, a partially-coherent source, an incoherent source, or any combination thereof. In some instances, the light source comprises a coherent source, and the coherent source may comprise a laser or a plurality of lasers. In some instances, the light source comprises an incoherent source, and the incoherent source may comprise a light emitting diode (LED), a laser driven light source (LDLS), an amplified spontaneous emission (ASE) source, a super luminescence light source, or any combination thereof.
In some instances, the first optical transformation device 106 is configured to apply an optical transformation (e.g., a spatial transformation) to a light beam received from light source 104 to create patterned illumination and may comprise one or more of a micro-lens array (MLA), diffractive element (e.g., a diffraction grating), digital micromirror device (DMD), phase mask, amplitude mask, spatial light modulator (SLM), pinhole array, or any combination thereof.
In some instances, the first optical transformation device comprises a plurality of optical elements that may generate an array of Bessel beamlets from a light beam produced by the light source or radiation source. In some instances, the first optical transformation device may comprise a plurality of individual elements that may generate the array of Bessel beamlets. The optical transformation device may comprise any other optical component configured to transform a source of light into an illumination pattern.
In some instances, the illumination pattern may comprise an array or plurality of intensity peaks that are non-overlapping. In some instances, the illumination pattern may comprise a plurality of two-dimensional illumination spots or shapes. In some instances, the illumination pattern may comprise a pattern in which the ratio of the spacing between illumination pattern intensity maxima and a full width at half maximum (FWHM) value of the corresponding intensity peaks is equal to a specified value. In some instances, for example, the ratio of the spacing between illumination pattern intensity maxima and a full width at half maximum (FWHM) value of the corresponding intensity peaks may be 1, 2, 3, 4, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, or 100.
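As a purely illustrative example of this design parameter (all values below are assumed, not taken from the text), the spacing-to-FWHM ratio for an array of Gaussian-shaped intensity peaks might be estimated as follows:

```python
import numpy as np

# Hypothetical illumination pattern: an array of Gaussian intensity peaks.
peak_spacing_um = 10.0   # distance between intensity maxima (assumed)
sigma_um = 1.0           # Gaussian width of each peak (assumed)

# FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma (~2.355*sigma).
fwhm_um = 2 * np.sqrt(2 * np.log(2)) * sigma_um
ratio = peak_spacing_um / fwhm_um
print(f"FWHM = {fwhm_um:.2f} um, spacing/FWHM ratio = {ratio:.1f}")  # ~4.2
```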
In some cases, an uneven spacing between illumination spots or shapes may be generated by the optical transformation device to accommodate linear or non-linear motion of the object being imaged. In some instances, for example, non-linear motion may comprise circular motion. Various optical configurations and systems for continuously scanning a substrate using linear and non-linear patterns of relative motion between the optical system and the object (e.g., a substrate) are described in PCT International Patent Application Publication No. WO 2020/186243, which is incorporated by reference herein in its entirety.
In some cases, the optional optics 108 of the illumination unit 102 may comprise one or more plano-convex lenses, bi-convex lenses, plano-concave lenses, bi-concave lenses, band-pass optical filters, low-pass optical filters, high-pass optical filters, notch-pass optical filters, quarter wave plates, half wave plates, or any combination thereof. In some instances, the illumination unit 102 is optically coupled with projection unit 120 such that patterned illumination 110a is directed to the projection unit.
In some instances, the projection unit 120 may comprise object-facing optics 124, additional optics 122, or any combination thereof. In some cases, the object-facing optics 124 may comprise a microscope objective lens, a plurality of microscope objective lenses, a lens array, or any combination thereof. In some instances, the additional optics 122 of the projection unit 120 may comprise one or more dichroic mirrors, beam splitters, polarization sensitive beam splitters, plano-convex lenses, bi-convex lenses, plano-concave lenses, bi-concave lenses, band-pass optical filters, low-pass optical filters, high-pass optical filters, notch-pass optical filters, quarter wave plates, half wave plates, or any combination thereof. In some instances, the projection unit 120 is optically coupled to the object 132 such that patterned illumination light 110b is directed to the object 132, and light 112a that is reflected, transmitted, scattered, or emitted by the object 132 is directed back to the projection unit 120 and relayed 112b to the detection unit 140.
In some cases, the object positioning system 130 may comprise one or more actuators (e.g., a linear translational stage, two-dimensional translational stage, three-dimensional translational stage, circular rotation stage, or any combination thereof) configured to support and move the object 132 relative to the projection unit 120 (or vice versa). In some instances, the one or more actuators are optically, electrically, and/or mechanically coupled with (i) the optical assembly comprising the illumination unit 102, the projection unit 120, and the detection unit 140, or individual components thereof, and/or (ii) the object 132 being imaged, to effect relative motion between the object and the optical assembly or individual components thereof during scanning. In some cases, the object positioning system 130 may comprise a built-in encoder configured to relay the absolute or relative movement of the object positioning system 130, e.g., to a system controller (not shown) or the detection unit 140.
In some instances, the object 132 may comprise, for example, a biological sample, biological substrate, nucleic acids coupled to a substrate, biological analytes coupled to a substrate, synthetic analytes coupled to a substrate, or any combination thereof.
In some instances, the detection unit 140 may comprise a second optical transformation device 142, one or more image sensors 144 (e.g., 1, 2, 3, 4, or more than 4 image sensors), optional optics 148, or any combination thereof. In some cases, the second optical transformation device 142 may comprise a micro-lens array (MLA), diffractive element, digital micromirror device (DMD), phase mask, amplitude mask, spatial light modulator (SLM), pinhole array, or any combination thereof. In some cases, the one or more image sensors 144 may comprise a time delay integration (TDI) camera, charge-coupled device (CCD) camera, complementary metal-oxide semiconductor (CMOS) camera, or a single-photon avalanche diode (SPAD) array. In some instances, the time delay and integration circuitry may be integrated directly into the camera or image sensor. In some instances, the time delay and integration circuitry may be external to the camera or image sensor. In some instances, the optional optics 148 may comprise one or more plano-convex lenses, bi-convex lenses, plano-concave lenses, bi-concave lenses, band-pass optical filters, low-pass optical filters, high-pass optical filters, notch-pass optical filters, quarter wave plates, half wave plates, or any combination thereof.
As noted above, the illumination unit 102 may be optically coupled to the projection unit 120. In some instances, the illumination unit 102 may emit illumination light 110a that is received by the projection unit 120. The projection unit 120 may direct the illumination light 110b toward the object 132. The object may absorb, scatter, reflect, transmit (in other optical configurations), refract, or emit light (112a), or any combination thereof, upon interaction between the object 132 and the illumination light 110b. The light emanating from the object 112a directed towards the projection unit 120 may be directed 112b to the detection unit 140.
During operation, the projection unit 120 may direct an illumination pattern (received from the illumination unit 102) to the object 132, and may receive and direct the resultant illumination pattern reflected, transmitted, scattered, emitted, or otherwise received from the object 132 (also referred to herein as a “reflected illumination pattern”) to the detection unit 140.
The optical elements, and configuration thereof, of the system 100 illustrated in
Non-limiting examples of imaging system optical configurations that may perform high-throughput, high SNR imaging of an object with an enhanced resolution are illustrated in
In some examples, the pattern illumination source may be in either a transmission optical geometry (see, e.g.,
In some cases, the projection optical assembly 213 may comprise a first dichroic mirror 208, tube lenses 209, and an objective lens 210 which directs the patterned illumination to object 220. In some instances, the detection unit 211 may comprise a second optical transformation device 207, tube lens 205, and one or more sensors 206. In some instances, the tube lens 205 receives and directs the illumination pattern emitted or otherwise received from the object via the projection optical assembly 213 to the sensor 206. The tube lens 205, in combination with tube lens 209 of the projection optical assembly 213, may be configured to provide a higher magnification of the illumination pattern emitted or received from the object 220 and relayed to the sensor 206. In some instances, the one or more image sensors 206 of the detection unit 211 are configured for time delay and integration (TDI) imaging.
In some instances, imaging system 200 (or any of the other imaging system configurations described herein) may comprise an autofocus (AF) mechanism (not shown). An AF light beam may be configured to provide feedback to adjust the position of the objective lens with respect to the object being imaged, or vice versa. In some instances, the AF beam may be co-axial with the pattern illumination source 212 optical path. In some instances, the AF beam may be combined with the pattern illumination source using a second dichroic mirror (not shown) that reflects the AF beam and transmits the pattern illumination source radiation to the object being imaged.
In some instances, imaging system 200 (or any of the other imaging system configurations described herein) may comprise a controller. In some instances, a controller (or control module) may be configured, for example, as a synchronization unit that controls the synchronization of the relative movement between the imaging system (or the projection optical assembly) and the object with the time delay integration (TDI) of the one or more image sensors. In some instances, a controller may be configured to control components of the patterned illumination unit (e.g., light sources, spatial light modulators (SLMs), electronic shutters, etc.), the projection optical assembly, the patterned illumination detector (e.g., the one or more image sensors configured for TDI imaging, etc.), the object positioning system (e.g., the one or more actuators used to create relative motion between the object and the projection optical assembly), the image acquisition process, post-acquisition image processing, etc. In some instances, a galvo-mirror is used to scan all or a portion of the object (e.g., to enable TDI imaging). In some instances, the scanning performed by the galvo-mirror may be used to provide apparent relative motion between the object and the projection optical assembly.
In some instances of any of the imaging system configurations described herein, one or both of the optical transformation devices (or optical transformation elements) may be tilted and/or rotated to allow collection of signal information in variable pixel sizes (e.g., to increase SNR, but at the possible cost of increased analysis requirements). Tilting and/or rotating of one or both of the optical transformation elements may be performed to alleviate motion blur.
In some instances, motion blur may be caused by different linear velocities across the imaging system FOV, as illustrated in
One strategy to compensate for this relative motion is to separate the motion into linear (translational) and rotational motion components. An alternative strategy is to use wedged counter scanning, where a magnification gradient can be created by, e.g., altering the working distance across the field-of-view of the image sensor (e.g., the camera). For example, a magnification gradient characterized by a magnification ratio (i.e., the ratio of magnification at the outer radius of the sensor to magnification at the inner radius of the sensor) given by Magnification Ratio = (S2/S1) = (r2/r1) = 1 + (FOV/r1), where FOV = r2 − r1, could be used to compensate for the relative motion. As an example, if FOV is 2.6 mm and r1 = 60 mm, then the Magnification Ratio between S2 and S1 is approximately 1.04.
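A quick numerical check of the worked example above, using the values given in the text:

```python
# Magnification ratio for wedged counter scanning: with FOV = r2 - r1,
# r2/r1 = 1 + FOV/r1 (values from the worked example in the text).
fov_mm, r1_mm = 2.6, 60.0
magnification_ratio = 1 + fov_mm / r1_mm
print(f"{magnification_ratio:.4f}")   # 1.0433, i.e., approximately 1.04
```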
For an imaging system with a typical Scheimpflug layout (see
Another strategy to compensate for this relative motion is to insert a tilted lens before a tilted image sensor (see
where α′ is similar to the concept of the photon reassignment coefficient, α.
If α′ is set to 1, then D2 will be 0 (and hence Δd will be 0), meaning that the sensor and the lens would be superimposed. If D2 is 0.04f, then α′ will be 1/1.04 and Δd will be 0.0015f. The relative change in magnification between one edge of the FOV and the other can be determined as:
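Although the exact expression for α′ depends on the optical layout shown in the referenced figure, the two worked cases above are consistent with the relations α′ = f/(f + D2) and Δd = D2(1 − α′); these relations are assumed, purely for illustration, in the following sketch:

```python
# Assumed relations (not reproduced from the text; chosen to be consistent
# with the two worked cases above: alpha' = 1 when D2 = 0, and
# alpha' = 1/1.04 with delta_d = 0.0015f when D2 = 0.04f).
f = 1.0                                  # focal length (normalized units)
for D2 in (0.0, 0.04 * f):
    alpha_prime = f / (f + D2)           # assumed relation for alpha'
    delta_d = D2 * (1 - alpha_prime)     # assumed relation for delta_d
    print(f"D2={D2:.2f}f: alpha'={alpha_prime:.4f}, delta_d={delta_d:.4f}f")
# D2=0.00f: alpha'=1.0000, delta_d=0.0000f
# D2=0.04f: alpha'=0.9615 (= 1/1.04), delta_d=0.0015f
```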
In some instances, the sensor and the lens are tilted at a same angle (and if so, there will be no variable magnification). In some instances, the sensor and the lens are tilted at different angles (e.g., β1 and β2, respectively). In some instances, β1 may be at least about 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 degrees. In some instances, β2 may be at least about 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, or 20 degrees. Those of skill in the art will recognize that β1 and β2 each may be any value within their respective ranges, e.g., about 0.54 degrees and about 11 degrees.
Accordingly, in some instances, the disclosed imaging systems may be configured to redirect light transmitted, reflected, or emitted by the object to one or more optical sensors (e.g., image sensors) through the use of a tiltable objective lens configured to deliver the substantially motion-invariant optical signal to the one or more optical sensors (e.g., image sensors). In some instances, the redirecting of light transmitted, reflected, or emitted by the object to the one or more optical sensors further comprises the use of a tiltable tube lens and/or a tiltable image sensor. In some instances, tiltable objectives, tube lenses, and/or image sensors may be actuated using, e.g., piezoelectric actuators.
In some instances, the tilt angles for the objective, tube lens, and/or image sensor used to create a magnification gradient across the field-of-view may be different when the image sensor is positioned at a different distance (e.g., a different radius) from the axis of rotation. In some instances, the tilt angles for the objective, tube lens, and/or image sensor may each independently range from about ±0.1 to about ±10 degrees. In some instances, the tilt angles for the objective, tube lens, and/or image sensor may each independently be at least ±0.1 degrees, ±0.2 degrees, ±0.4 degrees, ±0.6 degrees, ±0.8 degrees, ±1.0 degrees, ±2.0 degrees, ±3.0 degrees, ±4.0 degrees, ±5.0 degrees, ±6.0 degrees, ±7.0 degrees, ±8.0 degrees, ±9.0 degrees, or ±10.0 degrees. Those of skill in the art will recognize that the tilt angles may, independently, be any value within this range, e.g., about ±1.15 degrees.
In some instances, the nominal distance between the objective and tube lens may range from about 150 mm to about 250 mm. In some instances, the nominal distance between the objective and the tube lens may be at least 150 mm, 160 mm, 170 mm, 180 mm, 190 mm, 200 mm, 210 mm, 220 mm, 230 mm, 240 mm, or 250 mm. Those of skill in the art will recognize that the nominal distance between the objective and the tube lens may be any value within this range, e.g., about 219 mm. In some instances, the distance between the objective and tube lens may be increased or decreased from their nominal separation distance by at least about ±5 mm, ±10 mm, ±15 mm, ±20 mm, ±25 mm, ±30 mm, ±35 mm, ±40 mm, ±45 mm, ±50 mm, ±55 mm, ±60 mm, ±65 mm, ±70 mm, ±75 mm, or ±80 mm. Those of skill in the art will recognize that the distance between the objective and tube lens may be increased or decreased by any value within this range, e.g., about ±74 mm.
In some instances, the working distance may be increased or decreased by at least about ±0.01 mm, ±0.02 mm, ±0.03 mm, ±0.04 mm, ±0.05 mm, ±0.06 mm, ±0.07 mm, ±0.08 mm, ±0.09 mm, ±0.10 mm, ±0.20 mm, ±0.40 mm, ±0.60 mm, ±0.80 mm, ±1.00 mm, ±1.20 mm, ±1.40 mm, ±1.60 mm, ±1.80 mm, ±2.00 mm, ±2.20 mm, ±2.40 mm, ±2.60 mm, ±2.80 mm, or ±3.00 mm. Those of skill in the art will recognize that the working distance may be increased or decreased by any value within this range, e.g., about ±1.91 mm.
In some instances, the change in magnification across the field-of-view may be at least about ±0.2%, ±0.4%, ±0.6%, ±0.8%, ±1.0%, ±1.2%, ±1.4%, ±1.6%, ±1.8%, ±2.0%, ±2.2%, ±2.4%, ±2.6%, ±2.8%, ±3.0%, ±3.2%, ±3.4%, ±3.6%, ±3.8%, ±4.0%, ±4.2%, ±4.4%, ±4.6%, ±4.8%, ±5.0%, ±5.2%, ±5.4%, ±5.6%, ±5.8%, or ±6.0%. Those of skill in the art will recognize that the change in magnification across the field-of-view may be any value within this range, e.g., about ±0.96%.
In some instances of any of the imaging system configurations described herein, the position of the second optical transformation device (e.g., a second micro-lens array) may be varied. For example, in some instances, the second MLA may be positioned directly (e.g., mounted) on the image sensor. In some instances, the second MLA may be positioned on a translation stage or moveable mount so that its position relative to the image sensor (e.g., its separation distance from the sensor, or its lateral displacement relative to the sensor) may be adjusted. In some instances, the distance between the second MLA and the image sensor is less than 10 mm, 1 mm, 100 μm, 50 μm, 25 μm, 15 μm, 10 μm, 5 μm, or 1 μm, or any value within this range. The location of the second MLA with respect to the sensor may be determined by the MLA's focal length (i.e., the second MLA may be positioned such that the final photon reassignment coefficient is within a desired range). The photon reassignment coefficient is determined as the ratio of L1/L2, where L1 is the focal length of the second MLA and L2 is the effective distance of the second MLA to the sensor plane (see e.g.,
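A minimal numerical sketch of this ratio, with hypothetical values for L1 and L2 (neither is specified in the text):

```python
# Photon reassignment coefficient alpha = L1/L2 (per the text), where L1 is
# the focal length of the second MLA and L2 is its effective distance to the
# sensor plane. Both values below are hypothetical.
L1_um = 500.0    # focal length of the second MLA (assumed)
L2_um = 1000.0   # effective MLA-to-sensor distance (assumed)
alpha = L1_um / L2_um
print(f"photon reassignment coefficient alpha = {alpha:.2f}")   # 0.50
```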
In some instances of any of the imaging system configurations described herein, the system may further comprise line-focusing optics for adjusting the width of a laser line used for illumination or excitation. For example, the line width of the focused laser line may be made wider in order to reduce peak illumination intensity and avoid photodamage or heat damage of the object, while the line width of the focused laser line may be made narrower in order to reduce motion-induced blur. Photodamage is particularly problematic for objects comprising fluorophores (e.g., the fluorophores used in many sequencing applications).
As illustrated in
Mathematically, this optical compensation technique can be described in terms of a relative scaling of the point spread functions (PSFs) for the illumination and detection optics. The image intensity profile (or emitted light intensity profile) recorded during a scan is given by:

I(y) = ∫ h(x) g(y−x) dx

where h(x) is the illumination PSF, and g(y−x) is the detection PSF. The optical transformation device used to compensate for the spatial shift between intensity peaks in the image and intensity peaks in the illumination profile at the object plane may, for example, apply a demagnification, m, in the Y direction such that y→my, where 0<m<1. The image intensity profile is then given by:

I(y) = ∫ h(x) g(my−x) dx

or, in terms of a scan coordinate, s = y−x:

I(s) = ∫ h(x) g(ms−x̃) dx

where x̃ = (1−m)x. More generally,

I(s) = [h(·/(1−m)) ⊗ g(·/m)](s)

and ⊗ is the convolution operator. The PSF for the imaging system in this method is the convolution of the illumination point spread function, h, scaled by a factor (1−m), and the detection point spread function, g, scaled by a factor m. The PSF determines the resolution of the imaging system, and is comparable to, or better than, the point spread function (and image resolution) for a confocal imaging system (e.g., a diffraction limited conventional imaging system).
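The resolution gain implied by this convolution of rescaled PSFs can be illustrated numerically. The sketch below assumes Gaussian approximations for h and g and a demagnification of m = 0.5; it is illustrative only and not a model of any particular disclosed configuration:

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

def fwhm(x, y):
    """Width of a unimodal profile y(x) at half of its maximum."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

x = np.linspace(-2, 2, 4001)            # position, in units of lambda/NA
sigma = 0.21                             # ~0.5 lambda/NA FWHM for h and g (assumed)
m = 0.5                                  # demagnification applied by the device

h_scaled = gaussian(x / (1 - m), sigma)  # illumination PSF scaled by (1 - m)
g_scaled = gaussian(x / m, sigma)        # detection PSF scaled by m
psf_eff = np.convolve(h_scaled, g_scaled, mode="same")   # effective PSF

print(fwhm(x, gaussian(x, sigma)))       # ~0.49 (diffraction-limited PSF)
print(fwhm(x, psf_eff))                  # ~0.35, i.e., a ~1.4x narrower PSF
```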
The same considerations apply when using structured illumination (patterned illumination) in combination with TDI imaging. In some cases, the pattern illumination source may comprise an optical transformation device used to generate structured illumination (or patterned illumination).
In some examples, the plurality of micro-lenses may comprise a plurality of spherical micro-lenses, aspherical micro-lenses, or any combination thereof. In some instances, the MLA may comprise a plurality of micro-lenses with a positive or negative optical power. In some cases, the MLA may be configured such that the rows are aligned with respect to a scan or cross-scan direction. In some instances, the scan direction may be aligned with a length of the MLA defined by the number of columns of micro-lenses. Alternatively, the cross-scan direction may be aligned with a width of the MLA defined by the number of rows of micro-lenses.
The light pattern reflected, transmitted, scattered, or emitted by the object as a result of illumination by the patterned illumination (e.g., the “reflected light pattern”, “emitted light pattern”, etc.) is transformed (e.g., by the second optical transformation device) to create an intensity distribution representing a maximum-likelihood image of the object. In some cases, each point in the image plane may be represented by an intensity distribution that is substantially one-dimensional (1d) (i.e., the illumination pattern may consist of elongated illumination spots (line segments) that only confer a resolution advantage in the direction orthogonal to the line segments). In some cases, each point in the image plane may be re-routed to a different coordinate that represents the maximum-likelihood position of the corresponding emission coordinate on the object plane. In some cases, the light pattern emitted by the object and received at an image plane may be re-routed to form a two-dimensional (2d) intensity distribution that represents the maximum-likelihood 2d distribution of the corresponding emission coordinates on the object plane. In some instances, a series of illumination patterns may be used to create a larger illumination pattern that is used during a single scan. In some instances, a series of illumination patterns may be cycled through in a series of scans, and their signals and respective transformations accumulated, to generate a single enhanced resolution image. That is, the signal generated at each position and/or by each illumination pattern may be accumulated. In some cases, the illumination pattern may be selected such that each point of the object, when scanned through the field of view, receives substantially the same integral of illumination intensity over time (i.e., the same total illumination light exposure) as other points of the object (see, e.g.,
In some examples, the imaging system may illuminate the object by an illumination pattern comprising regions of harmonically modulated intensity at the maximum frequency supported by the imaging system objective lens NA and illumination light wavelength. The pattern may consist of several regions with varying orientations of harmonically modulated light such that each illumination point in the illumination pattern directed to the object may be sequentially exposed to modulations in all directions on the object plane uniformly. Alternatively, the illumination pattern may comprise a harmonically modulated intensity aligned with one or more selected directions. In some instances, the direction may be selected to improve resolution along a particular direction in the object plane (e.g., directions connecting the nearest neighbors in an array-shaped object).
In some instances, the imaging system may be configured to generate a harmonically-modulated illumination intensity pattern (e.g., a sinusoidally-modulated illumination intensity pattern generated by a first optical transformation device (or illumination mask)), and may be used to image an object at enhanced resolution in a single scan without the need to computationally reconstruct the enhanced resolution image from a plurality of images. In some instances, the imaging system may comprise a second optical transformation device (e.g., a harmonically-modulated phase mask or a harmonically-modulated amplitude mask (or detection mask)) with a spatial frequency and orientation matching that of the harmonically modulated intensity in each region of the illumination pattern. In some instances, a detection mask may comprise a mask that is complementary to the illumination mask (i.e., a mask that is phase-shifted by 90 degrees relative to the illumination mask). In some instances, when scanning the object and acquiring a series of “instantaneous” images with the harmonically modulated intensity illumination pattern (i.e., the plurality of optical images presented to the sensor during the course of a single scan; at each point during the scan, the object is in a different position relative to the illumination pattern and the second optical transformation device, so these “instantaneous” images are not identical and are not simply shifted versions of the same image), the enhanced resolution image is generated by analog phase demodulation of the series of “instantaneous” images without the need for computationally-intensive resources. In some instances, the enhanced-resolution image may be reconstructed from the analog phase demodulation using a Fourier re-weighting technique that is computationally inexpensive.
In some examples, the imaging system may employ methods of processing the images captured by the one or more image sensors. In some instances, the location of a photon reflected, transmitted, scattered, or emitted by the object may not accurately map to the corresponding location on the one or more image sensors. In some cases, photons may be re-mapped or reassigned to more precisely determine the location of a photon reflected from the object. In some instances, a maximum-likelihood position of a fluorescent molecule (or other source of optical signal) can be, for example, midway between the laser beam center point in the object plane and the corresponding photon detection center point in the image plane. Photon reassignment in confocal imaging is described in, for example, Sheppard, et al., Super-resolution in Confocal Imaging, International Journal for Light and Electron Optics (1988); Sheppard, et al., Super resolution by image scanning microscopy using pixel reassignment, Optics Letters (2013); and Azuma and Kei, Super-resolution spinning-disk confocal microscopy using optical photon reassignment, Optics Express 23(11):15003-15011 (2015); each of which is incorporated herein by reference in its entirety.
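The midpoint (maximum-likelihood) reassignment rule can be illustrated with a simple Monte Carlo sketch for a single emitter, assuming Gaussian illumination and detection PSFs of equal width (a common idealization; none of the values are taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Single emitter at x = 0 (object-plane coordinates, arbitrary units).
n_photons = 200_000
sigma = 1.0   # width of both the illumination and detection PSFs (assumed)

# Scan positions at which each photon is excited, weighted by the
# illumination PSF h, and detected positions, distributed per the
# detection PSF g centered on the emitter.
beam_centers = rng.normal(0.0, sigma, n_photons)
detected = rng.normal(0.0, sigma, n_photons)

# Reassign each photon to the midpoint between the beam center and the
# detected position (the maximum-likelihood emitter position).
reassigned = 0.5 * (beam_centers + detected)

print(np.std(beam_centers))  # ~1.0*sigma: spread if photons map to scan position
print(np.std(reassigned))    # ~0.71*sigma: a sqrt(2)-narrower effective PSF
```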
In contrast to
The inset in
As described with respect to the exemplary imaging system illustrated in
In some instances, the illumination unit may comprise one or more light sources or radiation sources, e.g., 1, 2, 3, 4, or more than 4 light sources or radiation sources. In some instances, the one or more light sources or radiation sources may be a laser, a set of lasers, an incoherent source, or any combination thereof. In some instances, the incoherent source may be a plasma-based light source. In some instances, the one or more light sources or radiation sources may provide radiation at one or more particular wavelengths for absorption by exogenous contrast fluorescence dyes. In addition, the one or more light sources or radiation sources may provide radiation at a particular wavelength for excitation of endogenous fluorescence, auto-fluorescence, phosphorescence, or any combination thereof. In some instances, the one or more light sources or radiation sources may provide continuous wave, pulsed, Q-switched, chirped, frequency-modulated, amplitude-modulated, or harmonic output light or radiation, or any combination thereof.
In any of the imaging system configurations described herein, the one or more light sources (or radiation sources, etc.) may produce light at a center wavelength ranging from about 400 nanometers (nm) to about 1,500 nm or any range thereof. In some instances, the center wavelength may be about 400 nm, 500 nm, 600 nm, 700 nm, 800 nm, 900 nm, 1,000 nm, 1,100 nm, 1,200 nm, 1,300 nm, 1,400 nm, or 1,500 nm. Those of skill in the art will recognize that the center wavelength may be any value within this range, e.g., about 633 nm.
In any of the imaging system configurations described herein, the one or more light sources (or radiation sources), alone or in combination with one or more optical components (e.g., optical filters and/or dichroic beam splitters), may produce light at the specified center wavelength within a bandwidth of ±2 nm, ±5 nm, ±10 nm, ±20 nm, ±40 nm, ±80 nm, or greater. Those of skill in the art will recognize that the bandwidth may have any value within this range, e.g., ±18 nm.
In any of the imaging system configurations described herein, the first and/or second optical transformation device may comprise one or more of a micro-lens array (MLA), diffractive element (e.g., a diffraction grating), digital micromirror device (DMD), phase mask, amplitude mask, spatial light modulator (SLM), pinhole array, or any combination thereof.
In some instances, the first and/or second optical transformation device in any of the imaging system configurations described herein may comprise a micro-lens array (MLA). In some instances, an MLA optical transformation device may comprise a plurality of micro-lenses 700 or 703 configured in a plurality of rows and columns, as seen for example in
In some instances, the MLA may comprise about 200 columns to about 4,000 columns of micro-lenses or any range thereof. In some instances, the MLA may comprise at least about 200 columns, 400 columns, 600 columns, 800 columns, 1,000 columns, 1,200 columns, 1,500 columns, 1,750 columns, 2,000 columns, 2,200 columns, 2,400 columns, 2,600 columns, 2,800 columns, 3,000 columns, 3,250 columns, 3,500 columns, 3,750 columns, or 4,000 columns of micro-lenses. In some instances, the MLA may comprise at most about 200 columns, 400 columns, 600 columns, 800 columns, 1,000 columns, 1,200 columns, 1,500 columns, 1,750 columns, 2,000 columns, 2,200 columns, 2,400 columns, 2,600 columns, 2,800 columns, 3,000 columns, 3,250 columns, 3,500 columns, 3,750 columns, or 4,000 columns of micro-lenses. Those of skill in the art will recognize that the MLA may comprise any number of columns within this range, e.g., about 2,600 columns. In some instances, the number of columns in the MLA may be determined by the size of the pupil plane (e.g., the number and organization of pixels in the pupil plane).
In some instances, the MLA may comprise about 2 rows to about 50 rows of micro-lenses, or any range thereof. In some instances, the MLA may comprise at least about 2 rows, 4 rows, 6 rows, 8 rows, 10 rows, 12 rows, 14 rows, 16 rows, 18 rows, 20 rows, 22 rows, 24 rows, 26 rows, 28 rows, 30 rows, 32 rows, 34 rows, 36 rows, 38 rows, 40 rows, 42 rows, 44 rows, 46 rows, 48 rows, or 50 rows of micro-lenses. In some instances, the MLA may comprise at most about 2 rows, 4 rows, 6 rows, 8 rows, 10 rows, 12 rows, 14 rows, 16 rows, 18 rows, 20 rows, 22 rows, 24 rows, 26 rows, 28 rows, 30 rows, 32 rows, 34 rows, 36 rows, 38 rows, 40 rows, 42 rows, 44 rows, 46 rows, 48 rows, or 50 rows of micro-lenses. Those of skill in the art will recognize that the MLA may comprise any number of rows within this range, e.g., about 32 rows. In some instances, the abovementioned values, and ranges thereof, for the rows and columns of micro-lenses may be reversed.
In some instances, the MLA may comprise a pattern of micro-lenses (e.g., a staggered rectangular or a tilted hexagonal pattern) that may comprise a length of about 4 mm to about 100 mm, or any range thereof. In some instances, the pattern of micro-lenses in an MLA may comprise a length of at least about 4 mm, 8 mm, 12 mm, 16 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 90 mm or 100 mm. In some instances, the pattern of micro-lenses in an MLA may comprise a length of at most about 4 mm, 8 mm, 12 mm, 16 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 90 mm or 100 mm. Those of skill in the art will recognize that the pattern of micro-lenses in the MLA may have a length of any value within this range, e.g., about 78 mm. In some instances, the length of the pattern of micro-lenses in the MLA may be determined with respect to a desired magnification. For example, the length of the pattern of micro-lenses in the MLA may be 2.6 mm×magnification.
In some cases, the pattern (e.g., the staggered rectangular or the tilted hexagonal pattern) of micro-lenses in an MLA may comprise a width of about 100 μm to about 1,500 μm, or any range thereof. In some instances, the pattern of micro-lenses in an MLA may comprise a width of at most about 100 μm, 150 μm, 200 μm, 250 μm, 300 μm, 350 μm, 400 μm, 450 μm, or 500 μm. In some instances, the pattern (e.g., staggered rectangular or tilted hexagonal pattern) of micro-lenses in an MLA may comprise a width of at least about 100 μm, 150 μm, 200 μm, 250 μm, 300 μm, 350 μm, 400 μm, 450 μm, or 500 μm. Those of skill in the art will recognize that the pattern of micro-lenses in the MLA may have a width of any value within this range, e.g., about 224 μm. In some instances, the width of the MLA pattern may be determined with respect to a desired magnification, e.g., 50 μm×magnification (i.e., similar to the determination of the length of the pattern of micro-lenses in the MLA).
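As a worked illustration of sizing the MLA pattern from a desired magnification (the magnification value below is assumed, chosen so that the result reproduces the ~78 mm length example given above):

```python
# MLA pattern sizing from a target magnification, per the scaling rules in
# the text: length = 2.6 mm x magnification, width = 50 um x magnification.
magnification = 30.0                 # assumed system magnification
length_mm = 2.6 * magnification      # 78 mm (matches the example above)
width_um = 50.0 * magnification      # 1,500 um = 1.5 mm
print(f"MLA pattern: {length_mm:.0f} mm x {width_um / 1000:.1f} mm")
```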
In some examples, the tilted hexagonal pattern of micro-lenses in an MLA may be tilted at an angle 702 with respect to the vertical axis of the MLA. For example, the angle (θ) of the tilted hexagonal patterned MLA may be determined by the following:
where N is a number of rows of micro-lenses in the tilted hexagonal pattern as described above.
In some instances, the angle (θ) of the tilted hexagonal pattern MLA may be configured to be about 0.5 degrees to about 45 degrees, or any range thereof. In some instances, the angle (θ) of the tilted hexagonal pattern MLA may be configured to be at most about 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 20, 25, 30, 35, 40, or 45 degrees. In some instances, the angle (θ) of the tilted hexagonal pattern MLA may be configured to be at least about 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 20, 25, 30, 35, 40, or 45 degrees. Those of skill in the art will recognize that the angle (θ) of the tilted hexagonal pattern MLA may have any value within this range, e.g., about 4.2 degrees. In some instances, the angle (θ) of the tilted hexagonal pattern may be configured to generate an illumination pattern with even spacing between illumination peaks in a cross-scan direction.
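The tilt-angle equation itself is not reproduced above. A common convention for tilted lens arrays, assumed here purely for illustration, is tan(θ) = 1/N, which staggers the N rows evenly across one lens pitch in the cross-scan direction:

```python
import math

# Assumed convention (not taken from the text): tan(theta) = 1/N, where N is
# the number of rows of micro-lenses in the tilted pattern.
for n_rows in (8, 14, 32):
    theta_deg = math.degrees(math.atan(1 / n_rows))
    print(f"N = {n_rows:2d} rows -> theta = {theta_deg:.2f} degrees")
# N = 14 gives ~4.09 degrees, close to the ~4.2 degree example in the text.
```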
In some instances, the MLA may be further characterized by pitch, micro-lens diameter, numerical aperture (NA), focal length, or any combination thereof. In some instances, a micro-lens of the plurality of micro-lenses may have a diameter of about 5 micrometers (μm) to about 40 μm, or any range thereof. In some instances, a micro-lens of the plurality of micro-lenses may have a diameter of at most about 5 μm, 10 μm, 15 μm, 20 μm, 25 μm, 30 μm, 35 μm, or 40 μm. In some instances, a micro-lens of the plurality of micro-lenses may have a diameter of at least about 5 μm, 10 μm, 15 μm, 20 μm, 25 μm, 30 μm, 35 μm, or 40 μm. Those of skill in the art will recognize that the diameters of micro-lenses may have any value within this range, e.g., about 28 μm. In some instances, each micro-lens in a plurality of micro-lenses in an MLA has a same diameter. In some instances, at least one micro-lens in a plurality of micro-lenses in an MLA has a different diameter from another micro-lens in the plurality.
In some instances, the distances between adjacent micro-lenses may be referred to as the pitch of the MLA. In some instances, the pitch of the MLA may be about 10 μm to about 70 μm or any range thereof. In some instances, the pitch of the MLA may be at least about 10 μm, 15 μm, 20 μm, 25 μm, 30 μm, 35 μm, 40 μm, 45 μm, 50 μm, 55 μm, 60 μm, 65 μm, or 70 μm. In some instances, the pitch of the MLA may be at most about 10 μm, 15 μm, 20 μm, 25 μm, 30 μm, 35 μm, 40 μm, 45 μm, 50 μm, 55 μm, 60 μm, 65 μm, or 70 μm. Those of skill in the art will recognize that the distances between adjacent micro-lenses in the MLA may have any value within this range, e.g., about 17 μm.
In some instances, the pitch (or spacing) of the individual lenses in the one or more micro-lens arrays of the disclosed systems may be varied in order to change the distance between illumination peak intensity locations and in addition may adjust (e.g., increase) the lateral resolution of the imaging system. In some instances, for example, the lateral resolution of the imaging system may be improved by increasing the pitch between individual lenses of the one or more micro-lens arrays.
In some instances, the numerical aperture (NA) of micro-lenses in the MLA may be about 0.01 to about 2.0 or any range thereof. In some instances, the numerical aperture of the micro-lenses in the MLA may be at least 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.5, 1.6, 1.7, 1.8, 1.9, or 2.0. In some instances, the numerical aperture of the micro-lenses in the MLA may be at most 2.0, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, or 0.01. In some instances, the NA of micro-lenses in the MLA may be about 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, or 0.12. Those of skill in the art will recognize that the NA of the micro-lenses in the MLA may have any value within this range, e.g., about 0.065.
In some instances, specifying tighter manufacturing tolerances for micro-lens array specifications (e.g., tighter manufacturing tolerances on individual microlens shape, pitch, numerical aperture, and/or the spacing between rows and columns of micro-lenses in the array) may provide for improved imaging performance, e.g., by eliminating artifacts such as star patterns or other non-symmetrical features in the illumination PSF that contribute to cross-talk between adjacent objects (such as adjacent sequencing beads). In some instances, the tolerable variation in MLA pitch is ±20% and the tolerable variation in focal length is ±15% (see e.g., Example 3 with regards to
In some instances, the use of a pinhole aperture array positioned on or in front of the image sensor, e.g., a pinhole aperture array that mirrors the array of micro-lenses in a microlens array (MLA) positioned in the optical path upstream from the image sensor, may be used to minimize or eliminate artifacts in the system PSF (see Example 3). In some instances, the pinhole aperture array may comprise a number of apertures that are equal to the number of micro-lenses in the MLA. In some instances, the apertures in the pinhole aperture array may be positioned in the same pattern and at the same pitch used for the micro-lenses in the MLA.
In some instances, the pinhole apertures in the aperture array may have diameters ranging from about 0.1 μm to about 2.0 μm. In some instances, the pinhole apertures in the aperture array may have diameters of at least 0.1 μm, 0.15 μm, 0.2 μm, 0.25 μm, 0.3 μm, 0.35 μm, 0.4 μm, 0.45 μm, 0.5 μm, 0.55 μm, 0.6 μm, 0.65 μm, 0.7 μm, 0.75 μm, 0.8 μm, 0.85 μm, 0.9 μm, 0.95 μm, 1.0 μm, 1.05 μm, 1.1 μm, 1.15 μm, 1.2 μm, 1.25 μm, 1.3 μm, 1.35 μm, 1.4 μm, 1.45 μm, 1.5 μm, 1.55 μm, 1.6 μm, 1.65 μm, 1.7 μm, 1.75 μm, 1.8 μm, 1.85 μm, 1.9 μm, 1.95 μm, or 2.0 μm. In some instances, the pinhole apertures in the aperture array may have diameters of at most 2.0 μm, 1.95 μm, 1.9 μm, 1.85 μm, 1.8 μm, 1.75 μm, 1.7 μm, 1.65 μm, 1.6 μm, 1.55 μm, 1.5 μm, 1.45 μm, 1.4 μm, 1.35 μm, 1.3 μm, 1.25 μm, 1.2 μm, 1.15 μm, 1.1 μm, 1.05 μm, 1.0 μm, 0.95 μm, 0.9 μm, 0.85 μm, 0.8 μm, 0.75 μm, 0.7 μm, 0.65 μm, 0.6 μm, 0.55 μm, 0.5 μm, 0.45 μm, 0.4 μm, 0.35 μm, 0.3 μm, 0.25 μm, 0.2 μm, 0.15 μm, or 0.1 μm. Those of skill in the art will recognize that the pinhole apertures in the aperture array may have diameters of any value within this range, e.g., about 1.26 μm.
As described with respect to the exemplary imaging system illustrated in
In some examples, the projection optical assembly may comprise a dichroic mirror configured to transmit patterned light in one wavelength range and reflect patterned light in another wavelength range. In some instances, the dichroic mirror may comprise one or more optical coatings that may reflect or transmit a particular bandwidth of radiative energy. Non-limiting examples of paired transmittance and reflectance ranges for the dichroic mirror include 425-515 nm and 325-395 nm, 454-495 nm and 375-420 nm, 492-510 nm and 420-425 nm, 487-545 nm and 420-475 nm, 520-570 nm and 400-445 nm, 512-570 nm and 440-492 nm, 512-570 nm and 455-500 nm, 520-565 nm and 460-510 nm, 531-750 nm and 480-511 nm, 530-595 nm and 470-523 nm, 537-610 nm and 470-523 nm, 550-615 nm and 480-532 nm, 567-620 nm and 490-550 nm, 575-650 nm and 500-560 nm, 587-650 nm and 500-565 nm, 592-660 nm and 540-582 nm, 612-675 nm and 532-585 nm, 608-700 nm and 525-588 nm, 619-680 nm and 540-590 nm, 647-710 nm and 575-626 nm, 620-720 nm and 570-646 nm, 667-725 nm and 585-650 nm, 673-740 nm and 600-652 nm, 686-745 nm and 615-669 nm, 692-760 nm and 590-667 nm, 705-780 nm and 630-684 nm, and 765-860 nm and 675-737 nm.
In some instances, the dichroic mirror may have a length of about 10 mm to about 250 mm or any range thereof. In some instances, the dichroic mirror may have a length of at least about 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 100 mm, 150 mm, 200 mm, or 250 mm. In some instances, the dichroic mirror may have a length of at most about 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 100 mm, 150 mm, 200 mm, or 250 mm. Those of skill in the art will recognize that the dichroic mirror may be any length within this range, e.g., 54 mm.
In some instances, the dichroic mirror may have a width of about 10 mm to about 250 mm, or any range thereof. In some instances, the dichroic mirror may have a width of at least about 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 100 mm, 150 mm, 200 mm, or 250 mm. In some instances, the dichroic mirror may have a width of at most about 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 100 mm, 150 mm, 200 mm, or 250 mm. Those of skill in the art will recognize that the dichroic mirror may be any width within this range, e.g., 22 mm.
In some instances, the dichroic mirror may be comprised of fused silica, borosilicate glass, or any combination thereof. The dichroic mirror may be tailored to a particular type of fluorophore or dye being used in an experiment. The dichroic mirror may be replaced by one or more optical elements (e.g., optical beam splitter or coating, wave plate, etc.) capable of and configured to direct an illumination pattern from the pattern illumination source to the object and direct the reflected pattern from the object to the detection unit.
In some instances, the projection optical assembly may comprise an object-facing optical component configured to direct the illumination pattern to, and receive the light reflected by, transmitted by, scattered from, or emitted from, the object. In some instances, the object-facing optics may comprise an objective lens or a lens array. In some instances, the objective lens may have a numerical aperture of about 0.2 to about 2.4. In some instances, the objective lens may have a numerical aperture of at least about 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 2, 2.2, or 2.4. In some instances, the objective lens may have a numerical aperture of at most about 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 2, 2.2, or 2.4. Those of skill in the art will recognize that the objective lens may have any numerical aperture within this range, e.g., 1.33.
In some instances, the objective lens aperture may be filled by an illumination pattern covering the total usable area of the objective lens aperture while maintaining well separated intensity peaks of the illumination pattern. In some instances, the tube lens or relay optics of the projection optical assembly may be configured to relay the patterned illumination to the objective lens aperture to fill the total usable area of the objective lens aperture while maintaining well separated illumination intensity peaks.
As described with respect to the exemplary imaging system illustrated in
In any of the imaging system configurations described herein, the detection unit may comprise one or more image sensors 144 as illustrated in
In any of the imaging system configurations described herein, the one or more image sensors may each comprise from about 256 pixels to about 65,000 pixels. In some instances, an image sensor may comprise at least 256 pixels, 512 pixels, 1,024 pixels, 2,048 pixels, 4,096 pixels, 8,192 pixels, 16,384 pixels, 32,768 pixels, or 65,536 pixels. In some instances, an image sensor may comprise at most 256 pixels, 512 pixels, 1,024 pixels, 2,048 pixels, 4,096 pixels, 8,192 pixels, 16,384 pixels, 32,768 pixels, or 65,536 pixels. Those of skill in the art will recognize that an image sensor may have any number of pixels within this range, e.g., 2,048 pixels.
In any of the imaging system configurations described herein, the one or more image sensors may have a pixel size of about 1 micrometer (μm) to about 7 μm. In some cases, the sensor may have a pixel size of at least about 1 μm, 2 μm, 3 μm, 4 μm, 5 μm, 6 μm, or 7 μm. In some instances, the sensor may have a pixel size of at most about 1 μm, 2 μm, 3 μm, 4 μm, 5 μm, 6 μm, or 7 μm. Those of skill in the art will recognize that the image sensor may have any pixel size within this range, e.g., about 1.4 μm.
In any of the imaging system configurations described herein, the one or more image sensors may operate on a TDI clock cycle (or integration time) ranging from about 1 nanosecond (ns) to about 1 millisecond (ms). In some instances, the TDI clock cycle may be at least 1 ns, 10 ns, 100 ns, 1 microsecond (μs), 10 μs, 100 μs, 1 ms, 10 ms, 100 ms, or 1 s. Those of skill in the art will recognize that the TDI clock cycle may have any value within this range, e.g., about 12 ms. In any of the imaging system configurations described herein, the one or more sensors may comprise TDI sensors that include a number of stages used to integrate charge during image acquisition. For example, in some instances, the one or more TDI sensors may comprise at least 64 stages, at least 128 stages, or at least 256 stages. In some instances, the one or more TDI sensors may be split into two or more (e.g., 2, 3, 4, or more than 4) parallel sub-sensors that can be triggered sequentially to reduce motion-induced blurring of the image, where the time delay between sequential triggering is proportional to the relative rate of motion between the sample to be imaged and the one or more TDI sensors.
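The timing arithmetic implied by these parameters is straightforward; the values below are assumed for illustration and are consistent with the ranges given above:

```python
# Per-point integration time for a TDI sensor is the number of active
# stages (rows) times the clock cycle. Both values below are assumed.
n_stages = 128                            # active TDI stages (rows)
clock_cycle_us = 5.0                      # line-shift period in microseconds
integration_time_us = n_stages * clock_cycle_us   # 640 us per object line
line_rate_hz = 1e6 / clock_cycle_us               # 200,000 line shifts per second
print(integration_time_us, line_rate_hz)
```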
In any of the imaging system configurations described herein, the system may be configured to acquire one or more images with a scan time ranging from about 0.1 millisecond (ms) to about 100 seconds (s). In some instances, the image acquisition time (or scan time) may be at least 0.1 ms, 1 ms, 10 ms, 100 ms, 1 s, 10 s, or 100 s. In some instances, the image acquisition time (or scan time) may have any value within the range of values described in this paragraph, e.g., 2.4 s.
In any of the imaging system configurations described herein, the optional optics included in the detection unit may comprise a plurality of relay lenses, a plurality of tube lenses, a plurality of optical filters, or any combination thereof. In some cases, the sensor pixel size and magnification of the imaging system may be configured to allow for adequate sampling of optical light intensity at the sensor imaging plane. In some instances, adequate sampling comprises sampling at a frequency that approaches or substantially exceeds the Nyquist sampling frequency.
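A minimal sketch of such a sampling check, assuming a diffraction-limited incoherent system (optical cutoff frequency 2·NA/λ, hence a Nyquist sampling period of λ/(4·NA) at the object plane); all numerical values are hypothetical:

```python
# Nyquist sampling check for an incoherent, diffraction-limited system:
# the object-plane sampling period must be at most lambda / (4 * NA).
wavelength_um = 0.532       # assumed illumination/emission wavelength
na = 1.0                    # assumed objective numerical aperture
pixel_um = 5.0              # assumed sensor pixel size
magnification = 50.0        # assumed total magnification

nyquist_period_um = wavelength_um / (4 * na)    # ~0.133 um at the object plane
sample_period_um = pixel_um / magnification     # 0.100 um at the object plane
print(sample_period_um <= nyquist_period_um)    # True: adequately sampled
```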
In any of the imaging system configurations described herein, the second optical transformation device 142 may comprise one or more of a micro-lens array (MLA), diffractive element, digital micromirror device (DMD), phase mask, amplitude mask, spatial light modulator (SLM), pinhole array, or other transformation elements. The second optical transformation device may transform the patterned light reflected, transmitted, scattered, or emitted from the object (in response to the illumination pattern generated by the first optical transformation device 106) into an array of intensity peaks that are non-overlapping. In some instances, the second optical transformation device 142 may comprise an optical transformation device that is complementary to the first optical transformation device 106 in the pattern illumination source 102. In some instances, the first and second optical transformation devices may be the same type of optical transformation device (e.g., micro-lens array). In some instances, the complementary first and second optical transformation devices may share common characteristics, such as the characteristics of the first optical transformation device 106 described elsewhere herein.
The first optical transformation device of the disclosed imaging systems may be configured to apply a first transformation to generate an illumination pattern that may be further transformed by the second optical transformation device. The first and second transformations by the first and second optical transformation devices may generate an enhanced resolution image of the object, compared to an image of the object generated without the use of these optical transformation devices. The resolution enhancement resulting from the inclusion of these optical transformation devices is seen in a comparison of
In any of the imaging system configurations described herein, the detection unit 140 as illustrated in
In any of the imaging system configurations described herein, the one or more image sensors, alone or in combination with one or more optical components (e.g., optical filters and/or dichroic beam splitters), may detect light at the specified center wavelength(s) within a bandwidth of ±2 nm, ±5 nm, ±10 nm, ±20 nm, ±40 nm, ±80 nm, or greater. Those of skill in the art will recognize that the bandwidth may have any value within this range, e.g., ±18 nm.
In any of the imaging system configurations described herein, the amount of light reflected, transmitted, scattered, or emitted by the object that reaches the one or more image sensors is at least 40%, 50%, 60%, 70%, 80%, or 90% of the reflected, transmitted, scattered, or emitted light entering the detection unit.
In any of the imaging system configurations disclosed herein, the imaging throughput (in terms of the number of distinguishable features or locations that can be imaged (or “read”) per second) may range from about 10⁶ reads/s to about 10¹¹ reads/s. In some instances, the imaging throughput may be at least about 10⁶, at least 5×10⁶, at least 10⁷, at least 5×10⁷, at least 10⁸, at least 5×10⁸, at least 10⁹, at least 5×10⁹, or at least 10¹⁰ reads/s. Those of skill in the art will recognize that the imaging throughput may be any value within this range, e.g., about 2.13×10⁹ reads/s.
In any of the imaging system configurations described herein, the imaging system may be capable of integrating signal and acquiring scanned images having an increased signal-to-noise ratio (SNR) compared to a signal-to-noise ratio (SNR) in images acquired by an otherwise identical imaging system that lacks the second optical transformation device. In some instances, the signal-to-noise ratio (SNR) exhibited by the scanned images acquired using the disclosed imaging systems is increased by greater than 20%, 40%, 60%, 80%, 100%, 120%, 140%, 160%, 180%, 200%, 300%, 400%, 500%, 600%, 700%, 800%, 900%, 1,000%, 1,200%, 1,400%, 1,600%, 1,800%, 2,000%, or 2500% relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device. In some instances, the signal-to-noise ratio (SNR) exhibited by the scanned images acquired using the disclosed imaging systems is increased by at least 2×, 3×, 4×, 5×, 6×, 7×, 8×, 9×, or 10× relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device.
In any of the imaging system configurations described herein, the imaging system may be capable of integrating signal and acquiring scanned images having an increased image resolution compared to the image resolution in images acquired by an otherwise identical imaging system that lacks the second optical transformation device. In some instances, the image resolution exhibited by the scanned images acquired using the disclosed imaging systems is increased by about 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%, 125%, 150%, 175%, 200%, 225%, 250%, 275%, 300%, or more than 300% relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device. In some instances, the image resolution exhibited by the scanned images acquired using the disclosed imaging systems is increased by at least 1.2×, at least 1.5×, at least 2×, or at least 3× relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device.
In some instances, the image resolution exhibited by the scanned images acquired using the disclosed imaging systems is better than 0.6 (FWHM of the effective point spread function in units of λ/NA), better than 0.5, better than 0.45, better than 0.4, better than 0.39, better than 0.38, better than 0.37, better than 0.36, better than 0.35, better than 0.34, better than 0.33, better than 0.32, better than 0.31, better than 0.30, better than 0.29, better than 0.28, better than 0.27, better than 0.26, better than 0.25, better than 0.24, better than 0.23, better than 0.22, better than 0.21, or better than 0.20. Those of skill in the art will recognize that the image resolution exhibited by the scanned images acquired using the disclosed imaging systems may be any value within this range, e.g., about 0.42 (FWHM of the effective point spread function in units of λ/NA).
In any of the imaging system configurations described herein, the object positioning system 130 as illustrated in
In some instances, the one or more actuators may be configured to move the object (or projection optical assembly) over a distance ranging from about 0.1 mm to about 250 mm or any range thereof. In some instances, the one or more actuators may be configured to move the object (or projection optical assembly) at least 0.1 mm, 0.5 mm, 1 mm, 2 mm, 4 mm, 6 mm, 8 mm, 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 90 mm, 100 mm, 110 mm, 120 mm, 130 mm, 140 mm, 150 mm, 160 mm, 170 mm, 180 mm, 190 mm, 200 mm, 210 mm, 220 mm, 230 mm, 240 mm, or 250 mm. In some instances, the one or more actuators may be configured to move the object (or projection optical assembly) at most about 250 mm, 240 mm, 230 mm, 220 mm, 210 mm, 200 mm, 190 mm, 180 mm, 170 mm, 160 mm, 150 mm, 140 mm, 130 mm, 120 mm, 110 mm, 100 mm, 90 mm, 80 mm, 70 mm, 60 mm, 50 mm, 40 mm, 30 mm, 20 mm, 10 mm, 8 mm, 6 mm, 4 mm, 2 mm, 1 mm, 0.5 mm, or 0.1 mm. Those of skill in the art will recognize that the one or more actuators may be configured to move the object (or projection optical assembly) over a distance having any value within this range, e.g., about 127.5 mm.
In some instances, the one or more actuators may travel with a resolution of about 20 nm to about 500 nm, or any range thereof. In some instances, the actuator may travel with a resolution of at least about 20 nm, 40 nm, 60 nm, 80 nm, 100 nm, 150 nm, 200 nm, 250 nm, 300 nm, 350 nm, 400 nm, or 500 nm. In some instances, the actuator may travel with a resolution of at most about 20 nm, 40 nm, 60 nm, 80 nm, 100 nm, 150 nm, 200 nm, 250 nm, 300 nm, 350 nm, 400 nm, or 500 nm. Those of skill in the art will recognize that the actuator may travel with a resolution of any value within this range, e.g., about 110 nm.
In some instances, the one or more actuators may be configured to translate the object (or projection optical assembly) at a rate of about 1 mm/s to about 220 mm/s or any range thereof. In some instances, the one or more actuators may be configured to translate the object (or projection optical assembly) at a rate of at least about 1 mm/s, 20 mm/s, 40 mm/s, 60 mm/s, 80 mm/s, 100 mm/s, 120 mm/s, 140 mm/s, 160 mm/s, 180 mm/s, 200 mm/s, or 220 mm/s. In some instances, the one or more actuators may be configured to translate the object (or projection optical assembly) at a rate of at most about 1 mm/s, 20 mm/s, 40 mm/s, 60 mm/s, 80 mm/s, 100 mm/s, 120 mm/s, 140 mm/s, 160 mm/s, 180 mm/s, 200 mm/s, or 220 mm/s. Those of skill in the art will recognize that the one or more actuators may be configured to translate the object (or projection optical assembly) at a rate of any value within this range, e.g., about 119 mm/s.
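The scan rate and the TDI line rate are coupled through the sensor pixel pitch and the optical magnification. The following Python sketch (all parameter values are assumed for illustration) estimates the line-shift rate required to keep row transfers synchronized with a given stage speed.

    # Minimal sketch: TDI line rate required to stay synchronized with the
    # stage. Parameter values are assumed, not taken from a specific system.
    scan_speed_mm_per_s = 119.0  # assumed translation rate
    pixel_pitch_um = 5.0         # assumed sensor pixel pitch
    magnification = 21.1         # assumed optical magnification
    object_pixel_um = pixel_pitch_um / magnification  # pixel footprint on the object
    line_rate_hz = (scan_speed_mm_per_s * 1000.0) / object_pixel_um
    print(f"required line rate: {line_rate_hz / 1e3:.0f} kHz")  # ~502 kHz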
Disclosed herein, in some examples, are methods of imaging an object with the imaging systems described herein. In some instances, imaging an object with the imaging systems described herein may provide high-throughput, high SNR imaging while maintaining an enhanced imaging resolution. In some cases, the method of imaging an object may comprise: (a) illuminating a first optical transformation device by a radiation source; (b) transforming light from the radiation source to generate an illumination pattern; (c) projecting the illumination pattern to a projection optical assembly configured to receive and direct the illumination pattern from the first optical transformation device to the object; (d) receiving a reflection of the illumination pattern from the object by a second optical transformation device; (e) transforming the illumination pattern by the second optical transformation device to generate a transformed illumination pattern; and (f) detecting the transformed illumination pattern with one or more image sensors, wherein the image sensors are configured for time delay and integration (TDI) imaging, and wherein the illumination pattern is moved relative to the object and/or the object is moved relative to the illumination pattern. The illumination pattern and/or the object may be moved via one or more actuators. For example, the actuator may be a linear stage with the object attached thereto. Alternatively, the actuator may be rotational.
In some instances, imaging an object using the disclosed imaging systems may comprise: illuminating a first optical transformation device with a light beam; applying, by the first optical transformation device, a first optical transformation to the light beam to produce an illumination pattern; projecting the illumination pattern onto the object via an object-facing optical component; directing light reflected, transmitted, scattered, or emitted by (e.g., output from) the object to a second optical transformation device; applying, by the second optical transformation device, a second optical transformation to the light reflected, transmitted, scattered, or emitted by the object and relaying it to one or more image sensors configured for time delay and integration (TDI) imaging; and scanning the object relative to the object-facing optical component, or the object-facing optical component relative to the object, wherein relative motion of the object and object-facing optical component during the scan is synchronized to the time delay and integration (TDI) imaging by the one or more image sensors such that a scanned image of all or a portion of the object is acquired by each of the one or more image sensors. In some instances, the illumination pattern is scanned across the object, where the scanning pattern is synchronized to the TDI imaging by the one or more image sensors to acquire the scanned image of all or a portion of the object. In some instances, the speed and the direction of the scanning are synchronized to the TDI imaging. In some instances, the scanning comprises moving the illumination pattern, moving the object, or both.
In some instances, only a portion of the object may be imaged within a scan. In some instances, a series of images is acquired, e.g., through performing a series of scans where the object is translated in one or two dimensions by all or a portion of the field-of-view (FOV) between scans, and the series of scans is aligned relative to each other to create a composite image of the object having a larger total FOV.
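A minimal sketch of the tiling step is shown below in Python, assuming each scan strip is offset from the previous one by a known, exact number of pixels; a practical implementation would refine these offsets by registering the overlap regions between adjacent strips.

    # Minimal sketch: place a series of scan strips into a composite image at
    # known offsets. Strip contents and the offset value are placeholders.
    import numpy as np

    strips = [np.random.rand(1000, 256) for _ in range(4)]  # placeholder strips
    step_px = 256  # assumed translation between scans, in pixels (no overlap)
    width = step_px * (len(strips) - 1) + strips[0].shape[1]
    composite = np.zeros((strips[0].shape[0], width))
    for i, strip in enumerate(strips):
        composite[:, i * step_px : i * step_px + strip.shape[1]] = strip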
Input device 1620 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, or voice-recognition device. Output device 1630 can be any suitable device that provides output for a user, such as a touch screen, haptics device, or speaker.
Storage 1640 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory including a RAM, cache, hard drive, or removable storage disk. Communication device 1660 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computer can be connected in any suitable manner, such as via a physical bus 1670 or wirelessly.
Software 1650, which can be stored in memory/storage 1640 and executed by processor 1610, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices described above).
Software 1650 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 1640, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 1650 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
Device 1600 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
Device 1600 can implement any operating system suitable for operating on the network. Software 1650 can be written in any suitable programming language, such as C, C++, Java, or Python. In various embodiments, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a web browser as a web-based application or web service, for example.
High resolution and fast speed are essential for some applications of high-throughput imaging in optical microscopy. The spatial resolution of optical microscopy is limited by the available numerical aperture (NA) options, even when using optical elements that have negligible aberrations. Photon reassignment (also known as "pixel reassignment") has been demonstrated to surpass the conventional optical resolution limit, enabling the resolution of fine structures in biological imaging that would otherwise be difficult to visualize. This example provides a description of confocal structured illumination (CoSI) fluorescence microscopy, a concept that combines the approaches of photon reassignment (for enhanced resolution), multi-foci illumination (for parallel imaging), and a time delay and integration (TDI) camera (for fast imaging using reduced irradiance to minimize photodamage of sensitive samples and dyes). Computer simulations demonstrated that the lateral resolution, measured as the full width at half maximum (FWHM) of the signal corresponding to a "point" object, can be improved by a factor of approximately 1.6×. That is, the FWHM of imaged objects (e.g., beads on a surface) decreased from 0.48 μm to approximately 0.3 μm by implementing CoSI. Experimental data showed that the lateral resolution was enhanced by a factor of 1.35×, with the FWHM reduced from 0.54 μm to 0.4 μm in images of 0.2 μm diameter beads.
Confocal microscopy: In a typical confocal microscope, the overall 3D intensity system point spread function (PSF) is given by:

H_sys(x, y, z) = H_i(x, y, z)·[H_d ⊗ P](x, y, z)   (1)
Here, ⊗ denotes a 3D convolution, H_i and H_d are the illumination and detection PSFs, respectively, and P denotes the confocal pinhole, which is assumed to be infinitely thin and is expressed as:

P(x2, y2, z2) = p(x2, y2)·δ(z2)   (2)
In practice, only 2D convolutions implemented in the x-y plane are needed due to the presence of δ(z2) (the Dirac delta function) in the definition of P. With uniform illumination on the back focal plane of the objective, the intensity PSF is governed by:

h(v, u) = |2 ∫₀¹ J0(vρ)·exp(iuρ²/2)·ρ dρ|²   (3)

(J0 being the zeroth-order Bessel function of the first kind)
where v and u are dimensionless radial and axial optical coordinates:

v = (2π/λ)·(a/l)·√(x² + y²),  u = (2π/λ)·(a/l)²·z   (4)
Here, λ is the wavelength of the light (λi and λd are the central wavelengths of the illumination light and of the fluorescence light in the detection path, respectively), l is the effective focal length of the objective, and a is the radius of the pupil aperture.
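As a numerical check of Eq. (3), the in-focus profile (u = 0) can be integrated directly; the Python sketch below (approximating a/l by the numerical aperture, an assumption valid for modest apertures) recovers the familiar Airy-profile FWHM of roughly 0.51 λ/NA.

    # Minimal sketch: evaluate Eq. (3) at u = 0 and report the FWHM in
    # lambda/NA units. Approximates a/l ~ NA (an assumption).
    import numpy as np
    from scipy.special import j0

    rho = np.linspace(0.0, 1.0, 2000)
    drho = rho[1] - rho[0]
    v = np.linspace(0.0, 6.0, 6000)
    # h(v, 0) = |2 * integral_0^1 J0(v*rho) * rho d(rho)|^2, via a Riemann sum
    h = np.array([abs(2.0 * np.sum(j0(vi * rho) * rho) * drho) ** 2 for vi in v])
    v_half = v[np.argmax(h <= 0.5 * h[0])]  # first v where h drops to half maximum
    # v = (2*pi/lambda) * NA * r, so FWHM = 2 * r_half = v_half * lambda / (pi * NA)
    print(f"FWHM ~ {v_half / np.pi:.2f} lambda/NA")  # ~0.51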
If the confocal pinhole is on-axis and infinitely small, P becomes a Dirac delta function in three dimensions, resulting in an ideal system PSF given by:

H_sys(x, y, z) = H_i(x, y, z)·H_d(x, y, z)   (5)
If the infinitely small confocal pinhole is placed off-axis, the system PSF is given by:

H_sys(x, y, z) = H_i(x, y, z)·H_d(x + Δx, y + Δy, z)   (6)
where Δx and Δy are the offset of the pinhole with respect to the optical axis.
If the PSF of a confocal microscope with an off-axis small confocal pinhole is plotted according to Eq. (6) (assuming an illumination wavelength of 0.623 μm, a central fluorescence emission wavelength of 0.670 μm, an objective numerical aperture of 0.72, and a displacement of the confocal pinhole of 0.2338 μm), the peak of the resulting system PSF is offset by about α = 0.46 times the displacement of the confocal pinhole (the peak of the confocal trace is offset by 0.11 μm, whereas the peak of the detection trace is offset by 0.2338 μm). The full width at half maximum (FWHM) is improved to 0.316 μm (comparable to that of a confocal microscope with an on-axis small confocal pinhole), versus 0.48 μm for the detection PSF and 0.44 μm for the illumination PSF.
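The reassignment factor of about 0.46 can be cross-checked with a simple Gaussian approximation of the PSFs (an assumption; the value quoted above comes from the full diffraction model): for Gaussian illumination and detection PSFs whose widths scale with their respective wavelengths, the peak of the product H_i(x)·H_d(x + d) sits at a fraction λi²/(λi² + λd²) of the pinhole displacement d.

    # Minimal sketch: Gaussian-PSF estimate of the photon reassignment factor.
    lam_i, lam_d = 0.623, 0.670  # illumination / emission wavelengths, um
    alpha = lam_i**2 / (lam_i**2 + lam_d**2)
    print(f"alpha ~ {alpha:.2f}")                     # ~0.46, consistent with the value above
    print(f"peak offset ~ {alpha * 0.2338:.2f} um")   # ~0.11 um for a 0.2338 um displacement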
Photon reassignment and CoSI: There are several strategies that may be used to implement photon reassignment for resolution improvement. These strategies belong to two main categories: digital approaches and all-optical approaches (e.g., as illustrated in the referenced figures).
Digital methods for photon reassignment are typically slow. For both non-descanned and descanned configurations (as illustrated in the referenced figures), digital photon reassignment requires acquiring and computationally processing a large number of camera frames, which limits imaging throughput.
In order to achieve accurate optical photon reassignment, the camera and the sample typically must be kept stationary relative to each other. Thus, one strategy for achieving this goal with non-stationary samples, in addition to rescanning, is to move the camera at a speed matched to that of the sample, which greatly simplifies the imaging system. This method is also compatible with the use of a TDI camera to compensate for constant, linear relative motion between the sample and camera. Matching camera and sample motion also leverages a TDI camera's capabilities for increasing imaging throughput while reducing the level of irradiance required.
In a non-descanned confocal microscope (as illustrated in the referenced figure), the photon distribution detected on the camera for a point emitter located at (x0, y0) is proportional to:

H_i(x0 − x1, y0 − y1)·H_d(x2 − x0, y2 − y0)   (7)
where (x1, y1) is the scanning position of the illumination light, and (x0, y0) and (x2, y2) are coordinates on the sample and camera planes, respectively. The chief ray of emitted light arising at the center of the illumination spot (x1, y1) on the sample arrives at (x1, y1) in the camera space, assuming that the magnification from the sample to the camera is 1× (and ignoring the negative sign). The photons arriving at (x2, y2) are displaced by a distance (xd = x2 − x1, yd = y2 − y1) away from the chief ray. Those photons should be reassigned to the position [xr = (1−α)x1 + αx2, yr = (1−α)y1 + αy2]. Integrating over scanning positions (x1, y1) over the sample yields:

H_pr(x, y) ∝ H_i(x/(1 − α), y/(1 − α)) ⊗ H_d(x/α, y/α)   (10)
where, in deriving Eq. (10), the integration over the scanning position (x1, y1) is replaced with integration over (xd, yd). Eq. (10) shows that the final PSF of the photon reassignment system is the convolution between the scaled illumination PSF and the scaled detection PSF. If α = 0.5, the system PSF simplifies to:

H_pr(x, y) ∝ H_i(2x, 2y) ⊗ H_d(2x, 2y)   (11)
where ⊗ indicates the convolution operation.
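The resolution gain implied by Eq. (11) can be illustrated numerically; the Python sketch below uses 1D Gaussian stand-ins for the illumination and detection PSFs (an assumption made for simplicity), convolves their 2×-scaled versions, and yields a system FWHM of roughly 0.33 μm versus 0.48 μm for widefield detection, i.e., close to the expected √2 improvement.

    # Minimal sketch: Eq. (11) with 1D Gaussian PSFs (an assumed approximation).
    import numpy as np

    x = np.linspace(-2.0, 2.0, 4001)
    dx = x[1] - x[0]
    to_sigma = lambda fwhm: fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = lambda s: np.exp(-(x**2) / (2.0 * s**2))
    # The 2x-scaled PSFs have half the width of the originals (0.44 and 0.48 um FWHM)
    h_sys = np.convolve(gauss(to_sigma(0.44) / 2.0),
                        gauss(to_sigma(0.48) / 2.0), mode="same")

    def fwhm(profile):
        above = np.where(profile >= 0.5 * profile.max())[0]
        return (above[-1] - above[0]) * dx

    print(f"system PSF FWHM ~ {fwhm(h_sys):.2f} um")  # ~0.33 um vs 0.48 um widefield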
The system PSF in 3D is given by:

H_all(x, y, z) = H_i(2x, 2y, z) ⊗xy H_d(2x, 2y, z)   (12)
Note that the convolution in Eq. (12) (denoted ⊗xy) takes place only in the x-y plane, and that H_all(x, y, z) comprises multiplication in the z direction.
Simulation results: Eq. (12) enables a straightforward estimation of the system PSF with photon reassignment. However, it does not account for potential crosstalk in the detected signals when a multi-foci illumination pattern is used. Therefore, the simulation of the full system uses a Fourier-optics model to predict actual system performance more accurately.
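The core of such a Fourier-optics simulation is the relationship between the pupil function and the PSF; the Python sketch below (grid size, sampling, and optical parameters are all assumed) computes a diffraction-limited intensity PSF as the squared magnitude of the Fourier transform of an ideal circular pupil. A full CoSI simulation would additionally model the MLA-generated illumination foci and the second MLA in the detection path.

    # Minimal sketch: diffraction-limited intensity PSF from a circular pupil.
    # Grid and optical parameters are assumed for illustration.
    import numpy as np

    n, dx_um = 1024, 0.05  # grid size and sample-plane sampling
    lam_um, na = 0.670, 0.72
    fx = np.fft.fftfreq(n, d=dx_um)  # spatial frequencies, cycles/um
    fxx, fyy = np.meshgrid(fx, fx)
    pupil = (np.hypot(fxx, fyy) <= na / lam_um).astype(float)  # coherent cutoff NA/lambda
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(pupil))) ** 2    # intensity PSF
    psf /= psf.max()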
Lateral resolution: In this simulation, the magnification of the system was set to 21.1×, the NA of the objective was 0.72, the pitch of both MLA1 and MLA2 was 23 μm, and the focal lengths of MLA1 and MLA2 were 340 μm and 170 μm, respectively. The photon reassignment coefficient α was set to 0.44 (L1/L2). The excitation wavelength was 0.623 μm and the emission wavelength was 0.670 μm. The overall improvement in lateral resolution compared to that for a wide-field microscope was a factor of ~1.6× (0.48 μm/0.3 μm).
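A quick check of the illumination geometry implied by these parameters follows (assuming, as an approximation, that the MLA focus pattern is demagnified onto the sample by the overall system magnification):

    # Minimal sketch: spacing of illumination foci at the sample plane,
    # assuming demagnification by the overall system magnification.
    mla_pitch_um = 23.0
    magnification = 21.1
    pitch_at_sample_um = mla_pitch_um / magnification
    print(f"focus pitch at sample ~ {pitch_at_sample_um:.2f} um")  # ~1.09 um,
    # i.e., roughly 2.3x the 0.48 um widefield FWHM, keeping adjacent foci separated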
Zero-order power on the pupil plane: The zero-order and/or the first-order diffraction patterns for the MLA or DOE may be projected onto the pupil plane (e.g., by tuning the focal length). If an MLA is used, the zero-order pattern comprises ~76% of the total power within the pupil aperture. By using a DOE of custom design, one can tune the power contained in the zero-order pattern. As the zero-order power becomes smaller, the FWHM of the system PSF improves, while the peak-to-mean intensity ratio of the illumination pattern on the sample increases. If the peak irradiance is too high, fluorescent dyes may approach their saturation levels, and photodamage or other damage mechanisms (e.g., due to excessive heat) may be induced. Therefore, a trade-off between lateral resolution and the peak-to-mean intensity ratio may be required; provided that the irradiance remains within a safe zone, the zero-order power should be minimized.
Axial resolution: The axial FWHM is a function of the photon reassignment coefficient, α. If α is within the range [0.4, 0.5], the axial FWHM is reduced by a factor of 1.3 (e.g., 2.6 μm / 2 μm = 1.3).
Orientation of the MLA: The orientation of the MLA affects the illumination uniformity at the sample.
The caveat of MLA orientation and PSF measurement: In a typical optical microscope, one can acquire 3D images of well-separated beads whose diameters are significantly smaller than the diffraction-limited resolution determined by the 3D PSF of the system. The size of the bead images (e.g., the lateral FWHM in the x-y plane, or the axial FWHM along the z axis) as measured for a given system 3D PSF is usually referred to as the resolution of the system. Although this relationship generally also holds for the CoSI systems described here, it may fail under certain extreme conditions. Non-limiting examples of system PSF plots are depicted in the referenced figures.
Lateral misalignment of the second MLA: the effect of lateral misalignment of the second MLA (MLA2) is illustrated in the referenced figures.
Tolerance analysis of the distance between MLA2 and the camera: The system PSF depends on the distance between the camera sensor and MLA2, and on their relative flatness (or parallelism); it is therefore important to understand what level of tolerance is required for accurately setting that distance. The required tolerance depends on the magnification of the system and the focal length of MLA2.
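As a rough, order-of-magnitude estimate (the Rayleigh-type criterion and the micro-lens NA approximation below are assumptions, not the tolerance analysis referenced above), the acceptable error in the MLA2-to-sensor distance can be related to the depth of focus of a single micro-lens:

    # Minimal sketch: Rayleigh-type depth-of-focus estimate for one micro-lens
    # of MLA2, using the simulation parameters above. The criterion is assumed.
    lam_um = 0.670
    pitch_um, f2_um = 23.0, 170.0
    na_microlens = (pitch_um / 2.0) / f2_um     # half-pitch over focal length
    dof_um = lam_um / (2.0 * na_microlens**2)   # +/- lambda / (2 NA^2)
    print(f"micro-lens NA ~ {na_microlens:.3f}, depth of focus ~ +/-{dof_um:.0f} um")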
If the topography of MLA2 and the sensor is known (or can be measured), a compensator (e.g., a piece of glass or an MLA2 substrate with an appropriate thickness profile) can, in principle, be used to compensate for the non-flatness (or non-parallelism) of MLA2 and the camera sensor. In the semiconductor field, the thickness tolerance of a coating on a wafer can be well controlled, provided that the overall thickness of the layer is not too great. Thus, semiconductor fabrication techniques may allow one to fabricate an appropriate compensator element.
Star pattern artifacts and mitigation thereof: To avoid high peak irradiance that could lead to saturation of the dye and potential damage of molecules in the sample, it can be beneficial to project the illumination light foci onto the sample as tightly packed as possible while maintaining the individual illumination spots at, or even below, the diffraction limit. The maximum diffraction pattern order that is allowed to pass through the pupil aperture is the 1st order, which in turn determines the smallest pitch that may be achieved for the illumination light foci at the sample. However, the smaller the pitch, the greater the likelihood of crosstalk between adjacent beamlets (arising from adjacent lenses in the microlens array), which gives rise to artifacts, e.g., star patterns, in the resulting images. Such artifacts can be mitigated through the use of a pinhole array positioned on or in front of the sensor.
Experimental results: The experimental setup used to demonstrate proof of concept of CoSI is illustrated in the referenced figure.
One method to compensate for rotational motion (e.g., where a rotating wafer is imaged by a stationary camera) is to create a gradient of magnification across the field-of-view of the camera's image sensor.
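The Python sketch below illustrates this idea with assumed, hypothetical numbers: for a wafer rotating at angular rate ω, the tangential speed is ωr, so holding the image-plane speed constant across the field of view requires the local magnification to vary inversely with radius.

    # Minimal sketch: required magnification gradient across the FOV to keep
    # image-plane speed constant for a rotating sample. All values are assumed.
    omega_rad_per_s = 2.0   # assumed rotation rate
    r_center_mm = 50.0      # assumed radius at the FOV center
    fov_mm = 1.0            # assumed radial extent of the FOV
    m_center = 21.1         # assumed magnification at the FOV center
    v_image = m_center * omega_rad_per_s * r_center_mm  # image-plane speed to hold
    for r in (r_center_mm - fov_mm / 2, r_center_mm, r_center_mm + fov_mm / 2):
        print(f"r = {r:5.1f} mm -> magnification {v_image / (omega_rad_per_s * r):.3f}")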
Among the embodiments provided herein are:
1. An imaging system, comprising: an imaging device comprising: an illumination unit comprising a radiation source and a first optical transformation device configured to apply a first optical transformation to a light beam provided by the radiation source to produce an illumination pattern; a projection unit configured to direct the illumination pattern onto an object and to accept light reflected, transmitted, scattered, or emitted by the object; and a detection unit comprising a second optical transformation device configured to apply a second optical transformation to the light received from the projection unit and to relay it to one or more image sensors configured for time delay and integration (TDI) imaging; and an actuator configured to create relative movement between the imaging device and the object, wherein the relative movement is synchronized to the TDI imaging by the one or more image sensors such that a scanned image of all or a portion of the object is acquired by each of the one or more image sensors.
2. The imaging system of embodiment 1, wherein the illumination pattern comprises a plurality of light intensity maxima, and wherein the second optical transformation compensates for a spatial offset between the plurality of light intensity maxima in the illumination pattern and a plurality of signal intensity maxima that would be measured by individual image sensor pixels laterally offset relative to the light intensity maxima in scanned images acquired using an otherwise identical imaging system that lacks the second optical transformation device, the second optical transformation thereby enabling acquisition of a scanned image of higher resolution than would be acquired using an otherwise identical imaging system that lacks the second optical transformation device.
3. The imaging system of embodiment 1 or embodiment 2, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a lateral spatial resolution of an otherwise identical imaging system that lacks the second optical transformation device.
4. The imaging system of any one of embodiments 1 to 3, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a diffraction-limited spatial resolution.
5. The imaging system of any one of embodiments 1 to 4, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution improvement by a factor of better than 1.2, 1.5, 2, or 3 relative to an image obtained by a comparable diffraction-limited imaging system.
6. The imaging system of any one of embodiments 1 to 5, wherein the scanned image acquired by at least one of the one or more image sensors exhibits an increased signal-to-noise ratio (SNR) compared to a signal-to-noise ratio (SNR) of an otherwise identical imaging system that lacks the second optical transformation device.
7. The imaging system of embodiment 6, wherein the signal-to-noise ratio (SNR) exhibited by the at least one scanned image is increased by up to 300% relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device.
8. The imaging system of any one of embodiments 1 to 7, wherein, at any given point in time during the scan, the second optical transformation device reroutes and redistributes light received from the projection unit to present a modified optical image of the object to the one or more image sensors, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and a known illumination pattern projected on the object at that point in time, and wherein the one or more image sensors integrate signals from a plurality of modified optical images over a period of time required to perform the scan of the object.
9. The imaging system of embodiment 8, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and the known illumination pattern projected on the object at the given point in time using a maximum-likelihood statistical method.
10. The imaging system of embodiment 9, wherein the modified optical image is described by a mathematical formula that utilizes: (i) an optical image of the object acquired by an otherwise identical imaging system that lacks the second optical transformation device, and (ii) a known illumination pattern projected on the object at the given point in time, as input.
11. The imaging system of embodiment 10, wherein the mathematical formula comprises calculation of a product of: (i) image intensities for the optical image of the object acquired by an otherwise identical imaging system that lacks the second optical transformation device, and (ii) light intensities for the known illumination pattern projected on the object at the given point in time.
12. The imaging system of embodiment 9, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit, the known illumination pattern projected on the object at the given point in time, and additional prior information about the object.
13. The imaging system of any one of embodiments 1 to 12, wherein the first optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
14. The imaging system of any one of embodiments 1 to 13, wherein the second optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
15. The imaging system of any one of embodiments 1 to 14, wherein the first optical transformation device and the second optical transformation device are the same type of optical transformation device.
16. The imaging system of any one of embodiments 1 to 15, wherein the imaging system comprises only components whose position, relative orientation, and optical properties remain static during imaging, with the exception of (i) the actuator configured to create relative motion between the imaging device and the object, and (ii) components of an autofocus system.
17. The imaging system of any one of embodiments 1 to 16, wherein the second optical transformation device is a lossless optical transformation device.
18. The imaging system of any one of embodiments 1 to 17, wherein at least 40%, at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, at least 95%, or at least 99% of the light received from the projection unit that enters the second optical transformation device reaches the one or more image sensors.
19. The imaging system of any one of embodiments 1 to 18, wherein the actuator further comprises a moveable stage mechanically coupled to the object to support, rotate, or translate the object relative to the imaging device, or any combination thereof.
20. The imaging system of any one of embodiments 1 to 19, wherein the radiation source comprises a coherent source, a partially coherent source, an incoherent source, or any combination thereof.
21. The imaging system of embodiment 20, wherein the coherent source comprises a laser or a plurality of lasers.
22. The imaging system of embodiment 20, wherein the incoherent source comprises a light emitting diode (LED), a laser driven light source (LDLS), an amplified spontaneous emission (ASE) source, a superluminescence light source, or any combination thereof.
23. The imaging system of any one of embodiments 1 to 22, wherein the illumination unit further comprises a first plurality of optical elements disposed between the radiation source and the first optical transformation device, or between the first optical transformation device and the projection unit.
24. The imaging system of embodiment 23, wherein the first plurality of optical elements is configured to condition the light beam, and wherein the conditioning includes adjustment of the light beam's size, position, direction, collimation, polarization, ellipticity, spatial filtering, spectral filtering, or any combination thereof.
25. The imaging system of any one of embodiments 1 to 24, wherein the detection unit further comprises a second plurality of optical elements disposed between the second optical transformation device and the one or more image sensors, or between the second optical transformation device and the projection unit.
26. The imaging system of embodiment 25, wherein the second plurality of optical elements is configured to condition the light reflected, transmitted, scattered, or emitted by the object before the light reaches the one or more image sensors, and wherein the conditioning includes adjustment of the light beam's size, position, direction, collimation, polarization, ellipticity, spatial filtering, spectral filtering, or any combination thereof.
27. The imaging system of any one of embodiments 1 to 26, wherein the one or more image sensors comprise one or more time delay and integration (TDI) cameras, or one or more cameras comprising a TDI mode of image acquisition, and wherein the relative movement between the imaging device and the object is synchronized to a line shift or an image shift in the one or more image sensors so as to minimize motion blurring during image acquisition.
28. The imaging system of any one of embodiments 1 to 27, wherein integration of illumination pattern light intensity directed to the object during a scan results in approximately the same total exposure to illumination light at every location of the object.
29. The imaging system of any one of embodiments 2 to 28, wherein a separation distance between any two of the plurality of light intensity maxima in the illumination pattern is at least 1× to 100× of a full width at half maximum (FWHM) of a corresponding intensity peak profile.
30. The imaging system of any one of embodiments 1 to 29, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a regular arrangement of two or more micro-lenses.
31. The imaging system of any one of embodiments 2 to 30, wherein the second optical transformation device comprises a micro-lens array, and wherein there is a 1:1 correspondence between the plurality of light intensity maxima in the illumination pattern and micro-lenses in the micro-lens array.
32. The imaging system of embodiment 30 or embodiment 31, wherein each micro-lens in the micro-lens array is configured to demagnify a corresponding beamlet in the light received from the projection unit.
33. The imaging system of any one of embodiments 30 to 32, wherein the regular arrangement is repeated for a predetermined number of times.
34. The imaging system of any one of embodiments 30 to 32, wherein the regular arrangement comprises one or more two-dimensional lattice patterns.
35. The imaging system of any one of embodiments 30 to 32, wherein the regular arrangement is a rectangular pattern or a square pattern.
36. The imaging system of any one of embodiments 30 to 32, wherein the regular arrangement is a hexagonal pattern.
37. The imaging system of any one of embodiments 30 to 36, wherein the regular arrangement includes a shift in micro-lens position between neighboring rows or columns of micro-lenses.
38. The imaging system of any one of embodiments 30 to 37, wherein a projection of the regular arrangement onto an object plane comprising the object is rotated with respect to a direction of the relative movement.
39. The imaging system of embodiment 38, wherein the projection of the regular arrangement onto the object plane comprising the object is rotated by an angle, θ, with respect to the direction of relative movement, and wherein θ is chosen so as to result in the illumination pattern providing a uniform total exposure at every point on the object when integrated over a scan.
40. The imaging system of embodiment 39, wherein θ is between about 1 degree and about 45 degrees.
41. The imaging system of any one of embodiments 30 to 40, wherein the regular arrangement is configured to provide equal spacing between the two or more micro-lenses of the micro-lens array (MLA).
42. The imaging system of any one of embodiments 1 to 41, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a plurality of spherical micro-lenses, aspherical micro-lenses, or any combination thereof.
43. The imaging system of any one of embodiments 1 to 42, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a plurality of micro-lenses with a positive or negative optical power.
44. The imaging system of any one of embodiments 1 to 43, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein each micro-lens in the micro-lens array (MLA) has a numerical aperture of at least 0.01, at least 0.05, at least 0.1, at least 0.5, at least 1, at least 1.5, or at least 2.
45. The imaging system of any one of embodiments 1 to 29, wherein the first optical transformation device and the second optical transformation device comprise a plurality of harmonically-modulated phase masks or harmonically-modulated amplitude masks with different orientations.
46. The imaging system of embodiment 45, wherein a spatial frequency and orientation of the second optical transformation device matches that of the first optical transformation device.
47. The imaging system of embodiment 45 or embodiment 46, wherein the first and second optical transformation devices comprise a plurality of harmonically-modulated phase masks, and wherein the second optical transformation device is phase shifted relative to the first optical transformation device.
48. The imaging system of any one of embodiments 45 to 47, wherein a final high-resolution image is reconstructed from the scanned image(s) acquired by the one or more image sensors by applying a Fourier reweighting process.
49. The imaging system of any one of embodiments 1 to 48, wherein the one or more image sensors comprise one or more time delay and integration (TDI) cameras, charge-coupled device (CCD) cameras, complementary metal-oxide semiconductor (CMOS) cameras, or single-photon avalanche diode (SPAD) arrays.
50. The imaging system of any one of embodiments 1 to 49, wherein the one or more image sensors have a pixel size of between about 0.1 and 20 micrometers.
51. The imaging system of any one of embodiments 1 to 50, further comprising an autofocus module comprising at least one sensor that determines a relative position of the imaging device relative to the object, and wherein the autofocus module is coupled to the actuator and configured to dynamically adjust the imaging system to provide optimal image resolution.
52. The imaging system of embodiment 51, wherein the dynamic adjustment of the imaging system by the autofocus module comprises positioning of an object-facing optical element relative to the object.
53. The imaging system of any one of embodiments 1 to 52, wherein the projection unit comprises an object-facing optical element, a dichroic mirror, a beam-splitter, a plurality of relay optics, a micro-lens array (MLA), or any combination thereof.
54. The imaging system of embodiment 53, wherein the object-facing optical element comprises an objective lens, a plurality of objective lenses, a lens array, or any combination thereof.
55. The imaging system of embodiment 54, wherein the object-facing optical element comprises an objective lens or a plurality of objective lenses having a numerical aperture of at least 0.3, at least 0.4, at least 0.5, at least 0.6, at least 0.7, at least 0.8, at least 0.9, at least 1.0, at least 1.1, at least 1.2, at least 1.3, at least 1.4, at least 1.5, at least 1.6, at least 1.7, or at least 1.8.
56. The imaging system of any one of embodiments 1 to 55, wherein the imaging device is configured to perform fluorescence imaging, reflection imaging, transmission imaging, dark field imaging, phase contrast imaging, differential interference contrast imaging, two-photon imaging, multi-photon imaging, single molecule localization imaging, or any combination thereof.
57. The imaging system of any one of embodiments 1 to 56, wherein the imaging device comprises a conventional time delay and integration (TDI) system, an illumination transformation device, and a detection transformation device that can be mechanically attached to the conventional TDI imaging system without further modification of the conventional TDI imaging system, or with only a minimal modification of the conventional TDI imaging system.
58. The imaging system of any one of embodiments 1 to 57, wherein corresponding sets of pixels in the one or more image sensors continuously integrate signals for a predetermined period of time ranging from about 1 ns to about 1 ms prior to transferring the signals to adjacent sets of pixels during a scan.
59. The imaging system of any one of embodiments 1 to 58, wherein a scan is completed in a predetermined period of time ranging from about 0.1 ms to about 100 s.
60. The imaging system of any one of embodiments 1 to 59, wherein the imaging device is configured to perform fluorescence imaging, and wherein the illumination unit is configured to provide excitation light at two or more excitation wavelengths.
61. The imaging system of any one of embodiments 1 to 60, wherein the imaging device is configured to perform fluorescence imaging, and wherein the detection unit is configured to detect fluorescence at two or more emission wavelengths.
62. The imaging system of any one of embodiments 1 to 61, further comprising a synchronization unit configured to control the synchronization of the relative movement of the imaging device and the object to the time delay integration (TDI) of the one or more image sensors.
63. The imaging system of any one of embodiments 1 to 62, wherein an individual scan comprises imaging the object over its entire length in a direction of the relative movement.
64. The imaging system of any one of embodiments 1 to 62, wherein an individual scan comprises imaging a portion of the object, and a series of scans is performed by translating the object relative to the imaging device by all or a portion of a field-of-view (FOV) of the imaging system between scans to create a series of images of the object.
65. The imaging system of embodiment 64, further comprising tiling the series of images to create a single composite image of the object.
66. The imaging system of any one of embodiments 1 to 65, wherein the object comprises a flow cell or substrate for performing nucleic acid sequencing.
67. The imaging system of embodiment 66, wherein the flow cell or substrate comprises at least one surface, and wherein the at least one surface comprises a plurality of single nucleic acid molecules or clonally-amplified nucleic acid clusters.
68. The imaging system of any one of embodiments 1 to 67, wherein the second optical transformation device is not a diffraction grating.
69. The imaging system of any one of embodiments 1 to 68, wherein the second optical transformation device is not a pinhole array.
70. A method of imaging an object, comprising: illuminating a first optical transformation device with a light beam; applying, by the first optical transformation device, a first optical transformation to the light beam to produce an illumination pattern; projecting the illumination pattern onto the object via an object-facing optical component; directing light reflected, transmitted, scattered, or emitted by the object to a projection unit that relays the light to a second optical transformation device; applying, by the second optical transformation device, a second optical transformation to the light and relaying it to one or more image sensors configured for time delay and integration (TDI) imaging; and scanning the object relative to the object-facing optical component, or the object-facing optical component relative to the object, wherein relative motion of the object and the object-facing optical component during the scan is synchronized to the TDI imaging by the one or more image sensors such that a scanned image of all or a portion of the object is acquired by each of the one or more image sensors.
71. The method of embodiment 70, wherein the illumination pattern comprises a plurality of light intensity maxima, and wherein the second optical transformation compensates for a spatial offset between the plurality of light intensity maxima in the illumination pattern and a plurality of signal intensity maxima that would be measured by individual image sensor pixels laterally offset relative to the light intensity maxima in scanned images acquired using an otherwise identical imaging system that lacks the second optical transformation device, the second optical transformation thereby enabling acquisition of a scanned image of higher resolution than would be acquired using an otherwise identical imaging system that lacks the second optical transformation device.
72. The method of embodiment 70 or embodiment 71, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a lateral spatial resolution of an otherwise identical imaging system that lacks the second optical transformation device.
73. The method of any one of embodiments 70 to 72, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution that exceeds a diffraction-limited spatial resolution.
74. The method of any one of embodiments 70 to 73, wherein the scanned image generated by at least one of the one or more image sensors exhibits a lateral spatial resolution improvement by a factor of better than 1.2, 1.5, 2, or 3 relative to an image obtained by a comparable diffraction-limited imaging system.
75. The method of any one of embodiments 70 to 74, wherein the scanned image acquired by at least one of the one or more image sensors exhibits an increased signal-to-noise ratio (SNR) compared to a signal-to-noise ratio (SNR) of an otherwise identical imaging system that lacks the second optical transformation device.
76. The method of embodiment 75, wherein the signal-to-noise ratio (SNR) exhibited by the at least one scanned image is increased by up to 300% relative to that of a scanned image acquired using an otherwise identical imaging system that lacks the second optical transformation device.
77. The method of any one of embodiments 70 to 76, wherein the light accepted by the projection unit passes through the second optical transformation device without significant loss.
78. The method of any one of embodiments 70 to 77, wherein the light accepted by the projection unit that passes through the second optical transformation device is at least 30%, at least 40%, at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, at least 95%, at least 98%, or at least 99% of the light accepted by the projection unit that reaches the second optical transformation device.
79. The method of any one of embodiments 70 to 78, wherein, at any given point in time during the scan, the second optical transformation device reroutes and redistributes light received from the projection unit to present a modified optical image of the object to the one or more image sensors, and wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and a known illumination pattern projected on the object at that point in time, and wherein the one or more image sensors integrate signals from a plurality of modified optical images over a period of time required to perform the scanning of the object.
80. The method of embodiment 79, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit and the known illumination pattern projected on the object at the given point in time using a maximum-likelihood statistical method.
81. The method of embodiment 79, wherein the modified optical image is described by a mathematical formula that utilizes: (i) an optical image of the object acquired by an otherwise identical imaging system that lacks the second optical transformation device, and (ii) a known illumination pattern projected on the object at the given point in time, as input.
82. The method of embodiment 81, wherein the mathematical formula comprises calculation of a product of: (i) image intensities for the optical image of the object acquired by an otherwise identical imaging system that lacks the second optical transformation device, and (ii) light intensities for the known illumination pattern projected on the object at the given point in time.
83. The method of embodiment 79, wherein the modified optical image represents a spatial structure of the object that is inferable from properties of the light received from the projection unit, the known illumination pattern projected on the object at the given point in time, and additional prior information about the object.
84. The method of any one of embodiments 70 to 83, wherein the first optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
85. The method of any one of embodiments 70 to 84, wherein the second optical transformation device comprises one or more components selected from the group consisting of a micro-lens array (MLA), a diffractive optical element, a digital micro-mirror device (DMD), a phase mask, an amplitude mask, a spatial light modulator (SLM), and a pinhole array.
86. The method of any one of embodiments 70 to 85, wherein the first optical transformation device and the second optical transformation device are a same type of optical transformation device.
87. The method of any one of embodiments 70 to 86, wherein an imaging system used to perform the method comprises only components that remain static during imaging, with the exception of (i) an actuator configured to create relative motion between the imaging system and the object, and (ii) components of an autofocus system.
88. The method of any one of embodiments 70 to 87, wherein at least 40%, at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, at least 95%, or at least 99% of the light received by the projection unit and entering the second optical transformation device reaches the one or more image sensors.
89. The method of any one of embodiments 70 to 88, wherein the light beam is provided by a radiation source, and wherein the radiation source comprises a coherent source, a partially coherent source, an incoherent source, or any combination thereof.
90. The method of embodiment 89, wherein the radiation source comprises a coherent source, and the coherent source comprises a laser or a plurality of lasers.
91. The method of embodiment 89, wherein the radiation source comprises an incoherent source, and the incoherent source comprises a light emitting diode (LED), a laser driven light source (LDLS), an amplified spontaneous emission (ASE) source, a superluminescence light source, or any combination thereof.
92. The method of any one of embodiments 70 to 91, further comprising adjusting the light beam's size, position, direction, collimation, polarization, ellipticity, spatial filtering, spectral filtering, or any combination thereof.
93. The method of any one of embodiments 70 to 92, further comprising adjusting the size, position, direction, collimation, polarization, ellipticity, spatial filtering, spectral filtering, or any combination thereof, of light received from the projection unit and relayed by the second optical transformation device to the one or more image sensors.
94. The method of any one of embodiments 70 to 93, wherein the one or more image sensors comprise one or more time delay and integration (TDI) cameras, or one or more cameras comprising a TDI mode of image acquisition, and wherein the relative motion between the object-facing optical component and the object is synchronized to a line shift or an image shift in the one or more image sensors so as to minimize motion blurring during image acquisition.
95. The method of any one of embodiments 70 to 94, wherein integration of illumination pattern light intensity directed to the object during a scan results in approximately the same total exposure to illumination light at every location of the object.
96. The method of any one of embodiments 71 to 95, wherein a separation distance between any two light intensity maxima of the plurality of light intensity maxima in the illumination pattern is at least 1× to 100× of a full width at half maximum (FWHM) of a corresponding intensity peak profile.
97. The method of any one of embodiments 70 to 96, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a regular arrangement of two or more micro-lenses.
98. The method of any one of embodiments 71 to 97, wherein the second optical transformation device comprises a micro-lens array, and wherein there is a 1:1 correspondence between the plurality of light intensity maxima in the illumination pattern and micro-lenses in the micro-lens array.
99. The method of embodiment 97 or embodiment 98, wherein each micro-lens in the micro-lens array is configured to demagnify a corresponding beamlet in the light received from the projection unit.
100. The method of any one of embodiments 97 to 99, wherein the regular arrangement is repeated for a predetermined number of times.
101. The method of any one of embodiments 97 to 99, wherein the regular arrangement comprises one or more two-dimensional lattice patterns.
102. The method of any one of embodiments 97 to 99, wherein the regular arrangement is a rectangular pattern or a square pattern.
103. The method of any one of embodiments 97 to 99, wherein the regular arrangement is a hexagonal pattern.
104. The method of any one of embodiments 97 to 103, wherein the regular arrangement includes a shift in micro-lens position between neighboring rows or columns of micro-lenses.
105. The method of any one of embodiments 97 to 104, wherein the regular arrangement is staggered.
106. The method of any one of embodiments 97 to 105, wherein the micro-lens array comprises a plurality of rows, wherein each row in the plurality of rows is staggered, with respect to a previous row in the plurality of rows, in a direction perpendicular to movement of the object-facing optical component relative to the object or to movement of the object relative to the object-facing optical component.
107. The method of embodiment 106, wherein a row of the plurality of rows is staggered in a perpendicular direction with respect to an immediately adjacent previous row of the plurality of rows.
108. The method of any one of embodiments 97 to 107, wherein the regular arrangement is configured to provide equal spacing between micro-lenses in the micro-lens array.
109. The method of any one of embodiments 97 to 108, wherein a projection of the regular arrangement onto an object plane comprising the object is rotated with respect to a direction of the relative movement.
110. The method of embodiment 109, wherein the projection of the regular arrangement onto the object plane comprising the object is rotated by an angle, θ, with respect to the direction of relative movement, and wherein θ is chosen so as to result in the illumination pattern providing a uniform total exposure at every point on the object when integrated over a scan.
111. The method of embodiment 110, wherein θ is between about 1 degree and about 45 degrees.
112. The method of any one of embodiments 97 to 111, wherein the regular arrangement is configured to provide equal spacing between the two or more micro-lenses of the micro-lens array (MLA).
113. The method of any one of embodiments 70 to 112, wherein the illumination pattern comprises an array of intensity peaks.
114. The method of embodiment 113, wherein each intensity peak in the array of intensity peaks is non-overlapping.
115. The method of any one of embodiments 70 to 114, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a plurality of spherical micro-lenses, aspherical micro-lenses, or any combination thereof.
116. The method of any one of embodiments 70 to 115, wherein the first optical transformation device or the second optical transformation device comprises a micro-lens array (MLA), and wherein the micro-lens array (MLA) comprises a plurality of micro-lenses with a positive or negative optical power.
117. The method of any one of embodiments 70 to 116, wherein the first optical transformation device comprises a micro-lens array (MLA), and wherein each micro-lens in the micro-lens array (MLA) has a numerical aperture of at least 0.01, at least 0.05, at least 0.1, at least 0.5, at least 1, at least 1.5, or at least 2.
118. The method of any one of embodiments 70 to 96, wherein the first optical transformation device and the second optical transformation device comprise a plurality of harmonically-modulated phase masks or harmonically-modulated amplitude masks with different orientations.
119. The method of embodiment 118, wherein a spatial frequency and orientation of the second optical transformation device matches that of the first optical transformation device.
120. The method of embodiment 118 or embodiment 119, wherein the first and second optical transformation devices comprise harmonically-modulated phase masks, and wherein the second optical transformation device is phase shifted relative to the first optical transformation device.
121. The method of any one of embodiments 118 to 120, wherein a final high-resolution image is reconstructed from the scanned image(s) acquired by the one or more image sensors by applying a Fourier reweighting process.
122. The method of any one of embodiments 70 to 121, wherein the one or more image sensors continuously accumulate an image-forming signal over a time course for performing the scan.
123. The method of any one of embodiments 70 to 122, wherein the one or more image sensors comprise one or more time delay and integration (TDI) cameras, charge-coupled device (CCD) cameras, complementary metal-oxide semiconductor (CMOS) cameras, or single-photon avalanche diode (SPAD) arrays.
124. The method of any one of embodiments 70 to 123, wherein the one or more image sensors have a pixel size of between about 0.1 and 20 micrometers.
125. The method of any one of embodiments 70 to 124, further comprising dynamically adjusting a focus of an object-facing optical component to provide optimal image resolution.
126. The method of embodiment 125, wherein the dynamic adjustment of the focus comprises adjusting a position of the object-facing optical component relative to the object.
127. The method of embodiment 125 or embodiment 126, wherein the object-facing optical component comprises an objective lens, a plurality of objective lenses, a lens array, or any combination thereof.
128. The method of any one of embodiments 125 to 127, wherein the object-facing optical component comprises an objective lens or a plurality of objective lenses having a numerical aperture of at least 0.3, at least 0.4, at least 0.5, at least 0.6, at least 0.7, at least 0.8, at least 0.9, at least 1.0, at least 1.1, at least 1.2, at least 1.3, at least 1.4, at least 1.5, at least 1.6, at least 1.7, or at least 1.8.
129. The method of any one of embodiments 70 to 128, wherein the first optical transformation device is included in a first illumination unit.
130. The method of any one of embodiments 70 to 129, wherein the second optical transformation device is included in a detection unit.
131. The method of any one of embodiments 70 to 130, wherein the scanned image(s) comprise fluorescence images, reflection images, transmission images, dark field images, phase contrast images, differential interference contrast images, two-photon images, multi-photon images, single molecule localization images, or any combination thereof.
132. The method of any one of embodiments 70 to 131, wherein the scanned image(s) comprise fluorescence images, and wherein the illuminating comprises providing excitation light at two or more excitation wavelengths.
133. The method of any one of embodiments 70 to 132, wherein the scanned image(s) comprise fluorescence images, and wherein the one or more image sensors are configured to detect fluorescence at two or more emission wavelengths.
134. The method of any one of embodiments 70 to 133, wherein an individual scan comprises imaging the object over its entire length in a direction of the relative motion.
135. The method of any one of embodiments 70 to 134, wherein an individual scan comprises imaging a portion of the object, and a series of scans is performed by translating the object relative to the object-facing optical component by all or a portion of a field-of-view (FOV) of the object-facing optical component between scans to create a series of images of the object.
136. The method of embodiment 135, further comprising tiling the series of images to create a single composite image of the object.
137. The method of any one of embodiments 70 to 136, wherein the object comprises a flow cell or substrate for performing nucleic acid sequencing.
138. The method of embodiment 137, wherein the flow cell or substrate comprises at least one surface, and wherein the at least one surface comprises a plurality of single nucleic acid molecules or clonally-amplified nucleic acid clusters.
139. The imaging system of any one of embodiments 1 to 69, further comprising a compensator configured to correct for non-flatness of the second optical transformation device.
140. The imaging system of any one of embodiments 1 to 69 or embodiment 139, further comprising one or more pinhole aperture arrays positioned on or in front of the one or more image sensors, wherein the pinhole aperture arrays are configured to reduce artifacts in a point spread function for the imaging system.
141. An imaging system as depicted in any one of the accompanying figures.
142. An imaging system as depicted in any one of the accompanying figures.
143. An imaging system configured to achieve the resolution improvement over widefield (WF) imaging depicted in the accompanying figures.
While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
This application is a continuation application of International Application No. PCT/US2022/077551, filed on Oct. 4, 2022, which claims the priority benefit of U.S. Provisional Patent Application No. 63/262,081, filed on Oct. 4, 2021, the contents of which are incorporated herein by reference in their entirety.
Provisional application: 63/262,081, filed October 2021 (US).
Parent application: PCT/US2022/077551, filed October 2022 (WO); child application: 18/623,877 (US).