This disclosure generally relates to imaging systems. This disclosure relates to hyperspectral imaging systems. This disclosure further relates to hyperspectral imaging systems that generate an unmixed color image of a target. This disclosure further relates to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of a target. This disclosure further relates to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of multiplexed fluorescence labels, enabling longitudinal imaging of multiple fluorescent signals with reduced illumination intensities. This disclosure further relates to hyperspectral imaging systems that are used in diagnosing a health condition.
The expanded application of fluorescence imaging in biomedical and biological research toward more complex systems and geometries may require tools that can analyze a multitude of components at widely varying time and length scales. A major challenge in such complex imaging experiments may be to cleanly separate multiple fluorescent labels with overlapping spectra from one another and from background autofluorescence, without perturbing the sample with high levels of light. Thus, there is a need for efficient and robust analysis tools capable of quantitatively separating these signals.
In recent years, high-content imaging approaches have been refined for decoding the complex and dynamic orchestration of biological processes. Fluorescence, with its high contrast, high specificity, and multiple parameters, has become the reference technique for imaging. Continuous improvements in fluorescence microscopes and the ever-expanding palette of genetically encoded and synthesized fluorophores have enabled the labeling and observation of a large number of molecular species. Such fluorescence techniques may offer the potential of using multiplexed imaging to follow multiple labels simultaneously in the same specimen, but these techniques have fallen short of their fully imagined capabilities. Standard fluorescence microscopes may collect multiple images sequentially, employing different excitation and detection bandpass filters for each label.
Recently developed fluorescence techniques may allow for massive multiplexing by utilizing sequential labeling of fixed samples but are not suitable for in vivo imaging. These approaches may be ill-suited to separating overlapping fluorescence emission signals, and the narrow bandpass optical filters used to increase selectivity decrease the photon efficiency of the imaging.
Hyperspectral Fluorescent Imaging (HFI) potentially overcomes the limitations of overlapping emissions by expanding signal detection into the spectral domain. HFI captures a spectral profile from each (image) pixel, resulting in a hyperspectral cube (x, y, wavelength) of data that can be processed to deduce the labels present in that pixel. Linear unmixing (LU) has been widely utilized to analyze HFI data, and has performed well with bright samples emitting strong signals from fully characterized, extrinsic fluorophores such as fluorescent proteins and dyes. However, in vivo fluorescence microscopy is almost always limited in the number of photons collected per pixel (due to the expression levels, the biophysical fluorescent properties, and the sensitivity of the detection system), which reduces the quality of the spectra acquired.
A further challenge which affects the quality of spectra is the presence of multiple forms of noise in the imaging of the sample. Two examples of instrumental noise may be photon noise and read noise.
Photon noise, also known as Poisson noise, may be an inherent property related to the statistical variation of photon emission from a source and of its detection. Poisson noise may be inevitable when imaging fluorescent dyes and is more pronounced in the low-photon regime. Such noise may pose challenges, especially in live and time-lapse imaging, where the power of the exciting laser is reduced to avoid photo-damage to the sample, decreasing the amount of fluorescent signal.
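As an illustration of the low-photon regime described above, the following sketch (a hypothetical example using NumPy; the spectrum shape, channel count, and photon budgets are illustrative assumptions, not part of this disclosure) simulates Poisson sampling of an emission spectrum at two photon budgets and compares the resulting relative noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical single-peak emission spectrum over 32 wavelength channels.
true_spectrum = np.exp(-0.5 * ((np.arange(32) - 12) / 4.0) ** 2)

def relative_noise(total_photons, trials=2000):
    """Average relative error of Poisson-sampled spectra at a photon budget."""
    expected = true_spectrum / true_spectrum.sum() * total_photons
    samples = rng.poisson(expected, size=(trials, expected.size))
    return np.abs(samples - expected).mean() / expected.mean()

low, high = relative_noise(100), relative_noise(10000)
# Poisson noise scales roughly as 1/sqrt(N): ~100x more photons gives
# roughly 10x less relative noise.
print(f"relative noise: low-light={low:.3f}, bright={high:.3f}")
```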
Read noise may arise from voltage fluctuations in microscopes operating in analog mode during the conversion of photons to digital intensity levels, and it commonly affects fluorescence imaging acquisition.
Most biological samples used for in vivo microscopy are labeled using extrinsic signals from fluorescent proteins or probes but often include intrinsic signals (autofluorescence). Autofluorescence may contribute photons that are undesired and difficult to identify and account for in LU.
The cumulative presence of noise may inevitably lead to a degradation of acquired spectra during imaging. As a result, the spectral separation by LU may often be compromised, and the signal-to-noise ratio (SNR) of the final unmixing is often limited by the weakest of the signals detected.
Increasing the amount of laser excitation may partially overcome these challenges, but the higher energy deposition in the sample may cause photo-bleaching and photo-damage, affecting both the integrity of the live sample and the duration of the observation.
Also, traditional unmixing strategies such as LU may be computationally demanding, requiring long analysis times and often slowing the interrogation.
Combined, the above potential compromises and shortcomings have reduced both the overall multiplexing capability and the adoption of HFI multiplexing technologies.
The following publications are related art for the background of this disclosure. One-digit or two-digit numbers in square brackets before each reference correspond to the numbers in square brackets used in other parts of this disclosure.
The entire content of each of the above publications is incorporated herein by reference.
Examples described herein generally relate to imaging systems. The examples of this disclosure also relate to hyperspectral imaging systems. The examples further relate to hyperspectral imaging systems that generate an unmixed color image of a target. The examples further relate to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of a target. The examples further relate to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of multiplexed fluorescence labels, enabling longitudinal imaging of multiple fluorescent signals with reduced illumination intensities. The examples further relate to hyperspectral imaging systems that are used in diagnosing a health condition.
In this disclosure, the hyperspectral imaging system may include an image forming system. The image forming system may have a configuration to acquire a detected radiation of the target, wherein the detected radiation comprises at least two (target) waves, each target wave having a detected intensity and a different detected wavelength; to form a target image using the detected target radiation, wherein the target image comprises at least two (image) pixels, and wherein each image pixel corresponds to one physical point on the target; to form at least one (intensity) spectrum for each image pixel using the (detected) intensity and the (detected) wavelength of each target wave; to transform the intensity spectrum of each image pixel using a Fourier transform into a complex-valued function, wherein each complex-valued function has at least one real component and at least one imaginary component; to form one phasor point on a phasor plane for each image pixel by plotting the value of the real component against the value of the imaginary component, wherein the value of the real component is referred to as the real value hereafter, and wherein the value of the imaginary component is referred to as the imaginary value hereafter; and to form a (phasor) histogram comprising at least two phasor bins, wherein each (phasor) bin comprises at least one phasor point.
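The transform and histogram steps above can be sketched as follows. This is a minimal illustration assuming NumPy; the cube dimensions, the synthetic data, and the 64-bin histogram are assumptions made for the example only:

```python
import numpy as np

def spectra_to_phasor(cube, harmonic=1):
    """Map a hyperspectral cube (y, x, wavelength) to phasor coordinates.

    For each pixel's intensity spectrum, the chosen Fourier harmonic,
    normalized by the total intensity, yields a real component g and an
    imaginary component s that place the pixel on the phasor plane.
    """
    n_ch = cube.shape[-1]
    phase = 2.0 * np.pi * harmonic * np.arange(n_ch) / n_ch
    total = cube.sum(axis=-1)
    total = np.where(total == 0, 1.0, total)  # avoid division by zero
    g = (cube * np.cos(phase)).sum(axis=-1) / total  # real values
    s = (cube * np.sin(phase)).sum(axis=-1) / total  # imaginary values
    return g, s

# Example: a 2x2-pixel image with 16 spectral channels of synthetic data.
rng = np.random.default_rng(1)
cube = rng.random((2, 2, 16))
g, s = spectra_to_phasor(cube)

# Phasor histogram: discretize the phasor plane into 64x64 bins.
hist, g_edges, s_edges = np.histogram2d(
    g.ravel(), s.ravel(), bins=64, range=[[-1, 1], [-1, 1]])
```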
In this disclosure, the image forming system may have a (further) configuration to aggregate the detected spectra belonging to the image pixels of each phasor bin, to generate a representative intensity spectrum for each phasor bin; to unmix the representative intensity spectra of the phasor bins by using an unmixing technique, thereby determining the abundance of each spectral endmember of the detected radiation; to assign a color to a corresponding image pixel of the target by using the abundance of each spectral endmember in the representative intensity spectra and the detected intensity belonging to the image pixel; and to generate a representative image of the target representing the abundance of each spectral endmember.
In this disclosure, the intensity spectra aggregated in each phasor bin may have an essentially similar or substantially the same spectral shape. Or, the intensity spectra aggregated in each phasor bin may have essentially similar or substantially the same spectral features. Such spectral features may include the detected spectral intensities and/or detected wavelengths of each detected spectrum. For example, when each detected spectrum's detected intensities are normalized using a standard (e.g., the maximum detected intensity of said spectrum), the relative (normalized) detected intensities of all intensity spectra aggregated in the same bin may have an essentially similar or substantially the same spectral shape. In one example of such a configuration of the image forming system, each detected spectrum belonging to the image pixels of the same bin may have at least two detected intensities and a detected wavelength for each detected intensity. In another example of such a configuration, the relative detected intensity values of each spectrum belonging to the same spectral bin may be substantially the same as those of the other spectra aggregated in that bin. In yet another example of such a configuration, the system may discretize the phasor plane into discrete phasor plane areas (wherein these phasor plane areas may have a similar or the same areal size and/or a similar or the same areal shape), and may treat the phasor points within each area as phasor points that belong to essentially similar or substantially the same detected spectra. In yet another example of such a configuration, the system may form at least four phasor bins by discretizing a phasor plot along its real dimension and its imaginary dimension.
For any such configuration, each phasor bin may have a phasor bin area on each phasor plot; wherein the phasor bin area may be 4/(total number of phasor bins), and wherein the total number of phasor bins may be the product of the number of discretizations along the real dimension of the phasor plot and the number of discretizations along the imaginary dimension of the phasor plot.
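The bin-area formula follows from the phasor plane spanning [-1, 1] along both the real and the imaginary dimensions, giving a total area of 4. A short worked example (the 128x128 discretization is an illustrative assumption):

```python
# Phasor plane spans [-1, 1] in both dimensions, so its total area is 2 * 2 = 4.
n_real, n_imag = 128, 128        # discretizations along each dimension (assumed)
total_bins = n_real * n_imag     # 16384 phasor bins
bin_area = 4 / total_bins        # area of each phasor bin
print(bin_area)
```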
Summing or averaging these essentially similar or substantially the same detected intensity spectra effectively averages the intensity spectra to generate a representative (or average) intensity spectrum for that phasor position. Summing or averaging these substantially similar intensity spectra may be achieved in any conventional or known mathematical manner. That is, any summing or averaging mathematical technique that may yield the representative intensity spectrum is within the scope of this disclosure.
In this disclosure, any (spectral) unmixing technique that may unmix a detected target radiation, intensity spectra, and/or representative intensity spectra is within the scope of this disclosure. The unmixing technique may be a linear unmixing technique. The unmixing technique may be a fully constrained least squares unmixing technique, a matrix inversion unmixing technique, a non-negative matrix factorization unmixing technique, a geometric unmixing technique, a Bayesian unmixing technique, a sparse unmixing technique, or any combination thereof.
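As one concrete instance of the techniques listed above, the sketch below performs least-squares (matrix-inversion style) unmixing of a single mixed spectrum. The two Gaussian endmember spectra and the abundances are hypothetical values chosen for the example, not data from this disclosure:

```python
import numpy as np

# Hypothetical endmember spectra (columns of E): two overlapping emitters
# modeled as Gaussians over 32 wavelength channels.
wl = np.arange(32)
E = np.stack([np.exp(-0.5 * ((wl - 10) / 3.0) ** 2),
              np.exp(-0.5 * ((wl - 16) / 3.0) ** 2)], axis=1)

# A noise-free mixed spectrum with known endmember abundances.
true_abundance = np.array([0.7, 0.3])
mixed = E @ true_abundance

# Least-squares unmixing recovers the abundances of each endmember.
recovered, *_ = np.linalg.lstsq(E, mixed, rcond=None)
```

On noise-free data with linearly independent endmembers, the least-squares solution matches the true abundances; a fully constrained variant would additionally enforce non-negativity and sum-to-one constraints.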
The image forming system of this disclosure may have a further configuration that applies a denoising filter to reduce a Poisson noise and/or instrumental noise of the detected radiation. The image forming system may also have a further configuration that applies a denoising filter on the real component and/or the imaginary component of each complex-valued function at least once so as to produce a denoised real value and a denoised imaginary value for each image pixel. The denoising filter may be applied after the image forming system transforms the formed intensity spectrum belonging to each image pixel using the Fourier transform into the complex-valued function; and/or before the image forming system forms one phasor point on the phasor plane for each image pixel. The image forming system may also have a further configuration that may apply a denoising filter to the value of the real component and/or the value of the imaginary component after the image forming system forms one phasor point on the phasor plane for each image pixel. The denoised real value may be used as the real value for each image pixel and the denoised imaginary value for each image pixel may be used as the imaginary value to form one phasor point on the phasor plane for each image pixel.
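The denoising of the real and imaginary components can be illustrated with a simple 3x3 median filter, which is one possible denoising filter among many; this disclosure does not mandate a particular filter, and the image sizes and noise levels below are assumptions for the example:

```python
import numpy as np

def median3x3(img):
    """Simple 3x3 median filter with edge replication (a denoising sketch)."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views of the image and take the per-pixel median.
    stacked = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(3) for dx in range(3)])
    return np.median(stacked, axis=0)

# Hypothetical noisy real (g) and imaginary (s) component images.
rng = np.random.default_rng(3)
g = rng.normal(0.5, 0.05, (16, 16))
s = rng.normal(0.2, 0.05, (16, 16))

# Denoised values are then used as the real/imaginary values for each pixel.
g_denoised, s_denoised = median3x3(g), median3x3(s)
```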
The hyperspectral imaging system may further comprise an optics system. The optics system may include at least one optical component. The at least one optical component may include at least one optical detector. The at least one optical detector may have a configuration that may detect electromagnetic radiation absorbed, transmitted, refracted, reflected, and/or emitted from at least one physical point on the target, thereby forming the target radiation; wherein the target radiation comprises at least two target waves, each target wave having an intensity and a different wavelength. The at least one optical detector may have a further configuration that may detect the intensity and the wavelength of each target wave. The at least one optical detector may also have a further configuration that may transmit the detected target radiation, and each target wave's detected intensity and detected wavelength to the image forming system to be acquired. The image forming system may further comprise a control system, a hardware processor, a memory, and a display. The image forming system may have a further configuration that may display the representative image of the target on the image forming system's display.
The unmixing technique of this disclosure may be any unmixing technique. For example, the unmixing technique may be a linear unmixing technique. For example, the unmixing technique may be a fully constrained least squares unmixing technique, a matrix inversion unmixing technique, a non-negative matrix factorization unmixing technique, a geometric unmixing technique, a Bayesian unmixing technique, a sparse unmixing technique, or any combination thereof.
The image forming system of this disclosure may have a further configuration that applies a denoising filter to reduce a Poisson noise and/or instrumental noise of the detected radiation. The denoising filter may be any denoising filter applied at least once. Each applied denoising filter may be the same denoising filter or a different denoising filter. The denoising filter may be applied, for example, to an intensity of the target radiation, an intensity of an intensity spectrum, the real component and/or the imaginary component of each complex-valued function, an intensity of a representative intensity spectrum, or a combination thereof. For example, the image forming system of this disclosure may have a configuration that applies a denoising filter on the real component and/or the imaginary component of each complex-valued function at least once so as to produce a denoised real value and a denoised imaginary value for each image pixel. For example, the image forming system of this disclosure may have a configuration that applies a denoising filter on both the real component and the imaginary component of each complex-valued function at least once so as to produce a denoised real value and a denoised imaginary value for each image pixel; wherein the denoising filter is applied: (1) after the image forming system transforms the formed intensity spectrum belonging to each image pixel using the Fourier transform into the complex-valued function; and/or (2) before the image forming system forms one phasor point on the phasor plane for each image pixel; and uses the denoised real value as the real value for each image pixel and the denoised imaginary value for each image pixel as the imaginary value to form one phasor point on the phasor plane for each image pixel.
For example, the image forming system of this disclosure may have a configuration that applies a denoising filter to the value of the real component and/or the value of the imaginary component after the image forming system forms one phasor point on the phasor plane for each image pixel.
The image forming system of this disclosure may have a further configuration that may aggregate the detected spectra belonging to the image pixels of each phasor bin, wherein the detected spectra belonging to the image pixels of the same bin have substantially the same detected intensities and detected wavelengths.
The image forming system of this disclosure may have a further configuration that may use at least one harmonic of the Fourier transform to generate the representative image of the target. The at least one harmonic may be a first harmonic and/or a second harmonic. Such a system may also use only one harmonic; the only one harmonic may be a first harmonic or a second harmonic. Such a system may also use both a first harmonic and a second harmonic.
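The choice of harmonic can be sketched as follows: the same spectrum yields different phasor coordinates at the first and second harmonics, so a system may use either or both. The single-peak spectrum below is a hypothetical example:

```python
import numpy as np

def phasor_harmonic(spectrum, harmonic):
    """Return the (real, imaginary) phasor coordinates at a given harmonic."""
    n = spectrum.size
    phase = 2.0 * np.pi * harmonic * np.arange(n) / n
    total = spectrum.sum()
    return ((spectrum * np.cos(phase)).sum() / total,
            (spectrum * np.sin(phase)).sum() / total)

# A hypothetical single-peak emission spectrum over 32 wavelength channels.
spectrum = np.exp(-0.5 * ((np.arange(32) - 12) / 4.0) ** 2)
g1, s1 = phasor_harmonic(spectrum, 1)  # first-harmonic phasor point
g2, s2 = phasor_harmonic(spectrum, 2)  # second-harmonic phasor point
```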
In this disclosure, the at least one optical component may further include at least one illumination source to illuminate the target, wherein the illumination source generates an illumination source radiation that comprises at least one illumination wave. Such a system may also further include at least one illumination source, wherein the illumination source generates an illumination source radiation that comprises at least two illumination waves, and wherein each illumination wave has a different wavelength.
In this disclosure, the image forming system may further include a control system, a hardware processor, a memory, and a display.
In this disclosure, the image forming system may have a further configuration that may display the representative image of the target on the image forming system's display.
In this disclosure, the image forming system may further include a control system, a hardware processor, a memory, and an information conveying system; wherein the information conveying system conveys the representative image of the target to a user in any manner. The information conveying system may convey the representative image of the target to a user as an image, a numerical value, a color, a sound, a mechanical movement, a signal, or a combination thereof.
In this disclosure, the at least one optical component may further include an optical lens, an optical filter, a dispersive optic system, or a combination thereof.
In this disclosure, the detected target radiation may be a fluorescence radiation.
Disclosed herein is a hyperspectral imaging system for generating a representative image of a target. The hyperspectral imaging system may comprise an image forming system. The image forming system may be configured to acquire a detected radiation of the target. The image forming system may be configured to form a target image using the detected target radiation, wherein the target image comprises at least two image pixels, and wherein each image pixel corresponds to one physical point on the target. The image forming system may be configured to form at least one intensity spectrum for each image pixel. The image forming system may be configured to transform the intensity spectrum of each image pixel into a complex-valued function based on the intensity spectrum of each image pixel. The image forming system may be configured to form one phasor point on a phasor plane for each image pixel. The image forming system may be configured to form a phasor histogram comprising at least two phasor bins, wherein each phasor bin comprises at least one phasor point. The image forming system may be configured to aggregate the detected spectra belonging to the image pixels of each phasor bin. The image forming system may be configured to generate a representative intensity spectrum for each phasor bin. The image forming system may be configured to unmix representative intensity spectra of the phasor bins using one or more unmixing techniques. The image forming system may be configured to determine an abundance of spectral endmembers in the representative intensity spectra. The image forming system may be configured to generate a representative intensity image of the target representing the abundance of the spectral endmembers.
Disclosed herein is a method for generating a representative image of a target. The method may comprise forming at least one intensity spectrum for image pixels of a target image, wherein the target image is based on a detected radiation. The method may comprise implementing a hyperspectral phasor system. The hyperspectral phasor system may be configured to form one phasor point on a phasor plane for each image pixel. The hyperspectral phasor system may be configured to form a phasor histogram comprising at least two phasor bins, wherein each phasor bin comprises at least one phasor point. The hyperspectral phasor system may be configured to aggregate the detected spectra of the image pixels of the at least two phasor bins. The hyperspectral phasor system may be configured to generate at least one representative intensity spectrum for the at least two phasor bins. The method may further comprise implementing an unmixing system. The unmixing system may be configured to unmix the at least one representative intensity spectrum of the at least two phasor bins using one or more unmixing techniques. The method may further comprise generating a representative intensity image of the target based on at least the representative intensity spectra and a detected intensity corresponding to the detected radiation.
Disclosed herein is a method for generating a representative image of a target. The method may comprise forming at least one intensity spectrum for image pixels of a target image, wherein the target image is based on a detected radiation. The method may comprise generating at least one representative intensity spectrum based on phasor points on a phasor plane corresponding to the image pixels. The method may comprise unmixing the at least one representative intensity spectrum using one or more linear unmixing techniques. The method may comprise generating a representative intensity image of the target based on at least the unmixed representative intensity spectrum.
Any combination of the above features/configurations is within the scope of the instant disclosure.
These, as well as other components, steps, features, objects, benefits, and advantages, will now become clear from a review of the following detailed description of illustrative implementations, the accompanying drawings, and the claims.
The drawings are illustrative implementations. They do not illustrate all implementations. Other implementations may be used in addition or instead. Details that may be apparent or unnecessary may be omitted to save space or for more effective illustration. Some implementations may be practiced with additional components or steps and/or without all of the components or steps that are illustrated. When the same numeral appears in different drawings, it refers to the same or like components or steps.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The colors disclosed in the following brief description of drawings and other parts of this disclosure refer to the color drawings and photos as originally filed with the U.S. provisional patent application 63/247,688, entitled “A Hyperspectral Imaging System with Hybrid Unmixing,” filed Sep. 23, 2021, attorney docket number AMISC.022PR. The entire content of the aforementioned provisional patent application is incorporated herein by reference.
Illustrative implementations are now described. Other implementations may be used in addition or instead. Details that may be apparent or unnecessary may be omitted to save space or for a more effective presentation. Some implementations may be practiced with additional components or steps and/or without all the components or steps that are described.
This disclosure generally relates to imaging systems. This disclosure relates to hyperspectral imaging systems. This disclosure further relates to hyperspectral imaging systems that generate an unmixed color image of a target. This disclosure further relates to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of a target. This disclosure further relates to a hyperspectral imaging system that is configured to use a hybrid unmixing technique to provide enhanced imaging of multiplexed fluorescence labels, enabling longitudinal imaging of multiple fluorescent signals with reduced illumination intensities. This disclosure further relates to hyperspectral imaging systems that are used in diagnosing a health condition.
This disclosure also relates to a hyperspectral imaging system for generating a representative image of a target. The hyperspectral imaging system may be configured to implement one or more hybrid unmixing technique(s). For example, one or more hardware computer processors may be configured to execute program instructions to cause the hyperspectral imaging system to perform one or more operations relating to hybrid unmixing. Hybrid unmixing, or operations relating thereto, such as those executed by a hardware computer processor and/or performed by a hyperspectral imaging system may be referred to herein, collectively or individually, as hybrid unmixing (HyU).
In this disclosure, the hyperspectral imaging system may include an image forming system. The image forming system may have a configuration to acquire a detected radiation of the target, wherein the detected radiation comprises at least two (target) waves, each target wave having a detected intensity and a different detected wavelength; to form a target image using the detected target radiation, wherein the target image comprises at least two (image) pixels, and wherein each image pixel corresponds to one physical point on the target; to form at least one (intensity) spectrum for each image pixel using the (detected) intensity and the (detected) wavelength of each target wave; to transform the intensity spectrum of each image pixel using a Fourier transform into a complex-valued function, wherein each complex-valued function has at least one real component and at least one imaginary component; to form one phasor point on a phasor plane for each image pixel by plotting the value of the real component against the value of the imaginary component, wherein the value of the real component is referred to as the real value hereafter, and wherein the value of the imaginary component is referred to as the imaginary value hereafter; and to form a (phasor) histogram comprising at least two phasor bins, wherein each (phasor) bin comprises at least one phasor point.
In this disclosure, the image forming system may have a (further) configuration to aggregate the detected spectra belonging to the image pixels of each phasor bin, to generate a representative intensity spectrum for each phasor bin; to unmix the representative intensity spectra of the phasor bins by using an unmixing technique, thereby determining the abundance of each spectral endmember of the detected radiation; to assign a color to a corresponding image pixel of the target by using the abundance of each spectral endmember in the representative intensity spectra and the detected intensity belonging to the image pixel; and to generate a representative image of the target representing the abundance of each spectral endmember.
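The full pipeline described above can be sketched end to end. This is an illustrative composite, not the disclosed implementation: the two Gaussian endmembers, the 16x16-pixel target, the 64-bin discretization, and the use of plain least squares for the per-bin unmixing step are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inputs: two endmember spectra and a 16x16-pixel target whose
# per-pixel abundances are known, so the sketch can be checked end to end.
wl = np.arange(32)
E = np.stack([np.exp(-0.5 * ((wl - 10) / 3.0) ** 2),
              np.exp(-0.5 * ((wl - 18) / 3.0) ** 2)], axis=1)
true_abund = rng.random((16, 16, 2))
cube = true_abund @ E.T                  # (y, x, wavelength) hyperspectral cube

# Step 1: Fourier-transform each pixel spectrum into first-harmonic phasor
# coordinates (real component g, imaginary component s).
phase = 2.0 * np.pi * wl / wl.size
total = cube.sum(-1)
g = (cube * np.cos(phase)).sum(-1) / total
s = (cube * np.sin(phase)).sum(-1) / total

# Step 2: discretize the phasor plane into bins and group pixels by bin.
n_bins = 64
gi = np.clip(((g + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
si = np.clip(((s + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
keys = (gi * n_bins + si).ravel()
spectra = cube.reshape(-1, wl.size)

# Step 3: average the spectra in each occupied bin into one representative
# spectrum, unmix it once, and assign the resulting endmember fractions back
# to every pixel of that bin.
fractions = np.zeros((keys.size, 2))
for key in np.unique(keys):
    members = keys == key
    rep = spectra[members].mean(0)                   # representative spectrum
    coef, *_ = np.linalg.lstsq(E, rep, rcond=None)   # unmix once per bin
    fractions[members] = coef / coef.sum()

# Step 4: scale the per-pixel fractions by the detected intensity to build
# the final abundance images of the target.
abundance = fractions.reshape(16, 16, 2) * total[..., None] / E.sum(0)
```

Because the representative spectrum is unmixed once per bin rather than once per pixel, far fewer unmixing operations are needed than in pixel-by-pixel LU, which reflects the computational advantage described in this disclosure.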
In this disclosure, the intensity spectra aggregated in each phasor bin may be relatively similar or substantially the same. Summing or averaging these substantially similar intensity spectra effectively averages them to generate a representative (or average) intensity spectrum for that phasor position. Summing or averaging these substantially similar intensity spectra may be achieved in any conventional or known mathematical manner. That is, any summing or averaging mathematical technique that may yield the representative intensity spectrum is within the scope of this disclosure.
In this disclosure, the image forming system may also have a (further) configuration that may aggregate the detected spectra belonging to the image pixels of each phasor bin. The detected spectra belonging to the image pixels of the same bin may have substantially the same detected intensities and detected wavelengths.
In this disclosure, any (spectral) unmixing technique that may unmix a detected target radiation, intensity spectra, and/or representative intensity spectra is within the scope of this disclosure. The unmixing technique may be a linear unmixing technique. The unmixing technique may be a fully constrained least squares unmixing technique, a matrix inversion unmixing technique, a non-negative matrix factorization unmixing technique, a geometric unmixing technique, a Bayesian unmixing technique, a sparse unmixing technique, or any combination thereof.
The image forming system of this disclosure may have a further configuration that applies a denoising filter to reduce a Poisson noise and/or instrumental noise of the detected radiation. The image forming system may also have a further configuration that applies a denoising filter on both the real component and the imaginary component of each complex-valued function at least once so as to produce a denoised real value and a denoised imaginary value for each image pixel. The denoising filter may be applied after the image forming system transforms the formed intensity spectrum belonging to each image pixel using the Fourier transform into the complex-valued function; and/or before the image forming system forms one phasor point on the phasor plane for each image pixel. The image forming system may also have a further configuration that may apply a denoising filter to the value of the real component and/or the value of the imaginary component after the image forming system forms one phasor point on the phasor plane for each image pixel. The denoised real value may be used as the real value for each image pixel and the denoised imaginary value for each image pixel may be used as the imaginary value to form one phasor point on the phasor plane for each image pixel.
An exemplary HyU hyperspectral imaging system, which may enhance analysis of multiplexed hyperspectral fluorescent signals in vivo, is shown in
The hyperspectral imaging system may further comprise an optics system. The optics system may include at least one optical component. The at least one optical component may include at least one optical detector. The at least one optical detector may have a configuration that may detect electromagnetic radiation absorbed, transmitted, refracted, reflected, and/or emitted from at least one physical point on the target, thereby forming the target radiation; wherein the target radiation comprises at least two target waves, each target wave having an intensity and a different wavelength. The at least one optical detector may have a further configuration that may detect the intensity and the wavelength of each target wave. The at least one optical detector may also have a further configuration that may transmit the detected target radiation, and each target wave's detected intensity and detected wavelength to the image forming system to be acquired. The image forming system may further comprise a control system, a hardware processor, a memory, and a display. The image forming system may have a further configuration that may display the representative image of the target on the image forming system's display.
One example of the exemplary hyperspectral imaging system comprising an optics system and an image forming system is schematically shown in
Any of the example optics systems shown and/or discussed herein may include at least one optical component. Examples of the at least one optical component are a detector (“optical detector”), a detector array (“optical detector array”), a source to illuminate the target (“illumination source”), a first optical lens, a second optical lens, an optical filter, a dispersive optic system, a dichroic mirror/beam splitter, a first optical filtering system placed between the target and the at least one optical detector, a second optical filtering system placed between the first optical filtering system and the at least one optical detector, or a combination thereof. For example, the at least one optical component may include at least one optical detector. For example, the at least one optical component may include at least one optical detector and at least one illumination source. For example, the at least one optical component may include at least one optical detector, at least one illumination source, at least one optical lens, at least one optical filter, and at least one dispersive optic system. For example, the at least one optical component may include at least one optical detector, at least one illumination source, a first optical lens, a second optical lens, and a dichroic mirror/beam splitter. For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, a dispersive optic; and wherein at least one optical detector is an optical detector array. For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, a dispersive optic, a dichroic mirror/beam splitter; and wherein at least one optical detector is an optical detector array.
For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, a dispersive optic, a dichroic mirror/beam splitter; wherein at least one optical detector is an optical detector array; and wherein the illumination source directly illuminates the target. These optical components may form, for example, the exemplary optics systems shown in
Any of the example optical systems shown and/or discussed herein may include an optical microscope. Examples of the optical microscope may be a confocal fluorescence microscope, a two-photon fluorescence microscope, or a combination thereof.
The at least one optical detector shown and/or discussed herein may have a configuration that detects electromagnetic radiation absorbed, transmitted, refracted, reflected, and/or emitted (“target radiation”) by at least one physical point on the target. The target radiation may include at least one wave (“target wave”). The target radiation may include at least two target waves. Each target wave may have an intensity and a different wavelength. The at least one optical detector may have a configuration that detects the intensity and the wavelength of each target wave. The at least one optical detector may have a configuration that transmits the detected target radiation to the image forming system. The at least one optical detector may have a configuration that transmits the detected intensity and wavelength of each target wave to the image forming system. The at least one optical detector may have any combination of these configurations.
The at least one optical detector shown and/or discussed herein may include a photomultiplier tube, a photomultiplier tube array, a digital camera, a hyperspectral camera, an electron multiplying charge coupled device, a Sci-CMOS, or a combination thereof. The digital camera may be any digital camera. The digital camera may be used together with an active filter for detection of the target radiation. The digital camera may also be used together with an active filter for detection of the target radiation comprising, for example, luminescence, thermal radiation, or a combination thereof.
The target radiation shown and/or discussed herein may include an electromagnetic radiation emitted by the target. The electromagnetic radiation emitted by the target may include luminescence, thermal radiation, or a combination thereof. The luminescence may include fluorescence, phosphorescence, or a combination thereof. For example, the electromagnetic radiation emitted by the target may include fluorescence, phosphorescence, thermal radiation, or a combination thereof. For example, the electromagnetic radiation emitted by the target may include fluorescence. The at least one optical component may further include a first optical filtering system. The at least one optical component may further include a first optical filtering system and a second optical filtering system. The first optical filtering system may be placed between the target and the at least one optical detector. The second optical filtering system may be placed between the first optical filtering system and the at least one optical detector. The first optical filtering system may include a dichroic filter, a beam splitter type filter, or a combination thereof. The second optical filtering system may include a notch filter, an active filter, or a combination thereof. The active filter may include an adaptive optical system, an acousto-optic tunable filter, a liquid crystal tunable bandpass filter, a Fabry-Perot interferometric filter, or a combination thereof.
The at least one optical detector shown and/or discussed herein may detect the target radiation at a wavelength in the range of 300 nm to 800 nm. The at least one optical detector may detect the target radiation at a wavelength in the range of 300 nm to 1,300 nm.
The at least one illumination source may generate an electromagnetic radiation (“illumination source radiation”). The illumination source radiation may include at least one wave (“illumination wave”). The illumination source radiation may include at least two illumination waves. Each illumination wave may have a different wavelength. The at least one illumination source may directly illuminate the target. In this configuration, there is no optical component between the illumination source and the target. The at least one illumination source may indirectly illuminate the target. In this configuration, there is at least one optical component between the illumination source and the target. The illumination source may illuminate the target at each illumination wavelength by simultaneously transmitting all illumination waves. The illumination source may illuminate the target at each illumination wavelength by sequentially transmitting all illumination waves.
In this disclosure, the illumination source may include a coherent electromagnetic radiation source. The coherent electromagnetic radiation source may include a laser, a diode, a two-photon excitation source, a three-photon excitation source, or a combination thereof.
The illumination source radiation may include an illumination wave with a wavelength in the range of 300 nm to 1,300 nm. The illumination source radiation may include an illumination wave with a wavelength in the range of 300 nm to 700 nm. The illumination source radiation may include an illumination wave with a wavelength in the range of 690 nm to 1,300 nm. For example, the illumination source may be a one-photon excitation source that can generate electromagnetic radiation in the range of 300 nm to 700 nm. For example, such one-photon excitation source may generate an electromagnetic radiation that may include a wave with a wavelength of about 405 nm, about 458 nm, about 488 nm, about 514 nm, about 554 nm, about 561 nm, about 592 nm, about 630 nm, or a combination thereof. In another example, the source may be a two-photon excitation source that can generate electromagnetic radiation in the range of 690 nm to 1,300 nm. Such excitation source may be a tunable laser. Yet in another example, the source may be a one-photon excitation source and a two-photon excitation source that can generate electromagnetic radiation in the range of 300 nm to 1,300 nm. For example, such one-photon excitation source may generate an electromagnetic radiation that may include a wave with a wavelength of about 405 nm, about 458 nm, about 488 nm, about 514 nm, about 554 nm, about 561 nm, about 592 nm, about 630 nm, or a combination thereof. For example, such two-photon excitation source may be capable of generating electromagnetic radiation in the range of 690 nm to 1,300 nm. Such two-photon excitation source may be a tunable laser.
The intensity of the illumination source radiation may be kept below a level at which the illumination source radiation would damage the illuminated target.
The hyperspectral imaging system may include a microscope. The microscope may be any microscope. For example, the microscope may be an optical microscope. Any optical microscope may be suitable for the system. Examples of an optical microscope may be a two-photon microscope, a one-photon confocal microscope, or a combination thereof. Examples of the two-photon microscopes are disclosed in Alberto Diaspro “Confocal and Two-Photon Microscopy: Foundations, Applications and Advances” Wiley-Liss, New York, November 2001; and Greenfield Sluder and David E. Wolf “Digital Microscopy” 4th Edition, Academic Press, Aug. 20, 2013. The entire content of each of these publications is incorporated herein by reference.
An exemplary optics system comprising a fluorescence microscope 100 is shown in
An exemplary optics system comprising a multiple illumination wavelength microscope 200 is shown in
Another exemplary hyperspectral imaging system comprising a multiple wavelength detection microscope 300 is shown in
Another exemplary hyperspectral imaging system comprising a multiple wavelength detection microscope 400 is shown in
Another exemplary hyperspectral imaging system comprising a multiple illumination wavelength and multiple wavelength detection device 500 is shown in
Another exemplary optical system comprising a multiple wavelength detection device 600 is shown in
Another exemplary optics system comprising a multiple wavelength detection device 700 is shown in
In this disclosure, the image forming system 30 may include a control system 40, a hardware processor 50, a memory system 60, a display 70, or a combination thereof. An exemplary image forming system is shown in
The image forming system may have a configuration that causes the optical detector to detect the target radiation and to transmit the detected intensity and wavelength of each target wave to the image forming system.
The image forming system may have a configuration that acquires the detected target radiation comprising the at least two target waves.
The image forming system may have a configuration that acquires a target radiation comprising at least two target waves, each wave having an intensity and a different wavelength.
The image forming system may have a configuration that acquires a target image, wherein the target image includes at least two pixels, and wherein each pixel corresponds to one physical point on the target.
The image forming system may have a configuration that forms an image of the target using the detected target radiation (“target image”). The target image may include at least one pixel. The target image may include at least two pixels. Each pixel corresponds to one physical point on the target.
The target image may be formed/acquired in any form. For example, the target image may have a visual form and/or a digital form. For example, the formed/acquired target image may be a stored data. For example, the formed/acquired target image may be stored in the memory system as data. For example, the formed/acquired target image may be displayed on the image forming system's display. For example, the formed/acquired target image may be an image printed on a paper or any similar media.
The image forming system may have a configuration that forms at least one spectrum for each pixel using the detected intensity and wavelength of each target wave (“intensity spectrum”).
The image forming system may have a configuration that acquires at least one intensity spectrum for each pixel, wherein the intensity spectrum includes at least two intensity points.
The intensity spectrum may be formed/acquired in any form. For example, the intensity spectrum may have a visual form and/or a digital form. For example, the formed/acquired intensity spectrum may be a stored data. For example, the formed/acquired intensity spectrum may be stored in the memory system as data. For example, the formed/acquired intensity spectrum may be displayed on the image forming system's display. For example, the formed/acquired intensity spectrum may be an image printed on a paper or any similar media.
The image forming system may have a configuration that transforms the formed intensity spectrum of each pixel using a Fourier transform into a complex-valued function based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real component and at least one imaginary component.
The image forming system may have a configuration that applies a denoising filter on both the real component and the imaginary component of each complex-valued function at least once so as to produce a denoised real value and a denoised imaginary value for each pixel.
The image forming system may have a configuration that forms one point on a phasor plane (“phasor point”) for each pixel by plotting the denoised real value against the denoised imaginary value of each pixel. The image forming system may form the phasor plane, for example, by using its hardware components, for example, the control system, the hardware processor, the memory or a combination thereof. The image forming system may display the phasor plane.
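The Fourier transform and phasor-point formation described above can be sketched as follows. This is a minimal illustrative implementation, not the exact code of this disclosure; the 32-channel spectra, the toy Poisson data, and the normalization of each component by the spectrum's total intensity are assumptions.

```python
import numpy as np

def spectra_to_phasor(spectra, harmonic=1):
    """Map per-pixel intensity spectra to phasor coordinates.

    spectra : array of shape (n_pixels, n_channels), one intensity
              spectrum per image pixel.
    harmonic: which Fourier harmonic to use (the first and/or second
              harmonic are typical choices).

    Returns (g, s): the real and imaginary phasor coordinates,
    normalized by the total intensity of each spectrum.
    """
    spectra = np.asarray(spectra, dtype=float)
    n = spectra.shape[1]
    k = np.arange(n)
    total = spectra.sum(axis=1)
    total[total == 0] = 1.0  # avoid division by zero for empty pixels
    # Real (G) and imaginary (S) components of the chosen harmonic
    g = (spectra * np.cos(2 * np.pi * harmonic * k / n)).sum(axis=1) / total
    s = (spectra * np.sin(2 * np.pi * harmonic * k / n)).sum(axis=1) / total
    return g, s

# Each pixel's (g, s) pair is one phasor point; a 2D histogram of all
# points forms the phasor plane.
spectra = np.random.poisson(50, size=(1000, 32))  # hypothetical 32-channel data
g, s = spectra_to_phasor(spectra, harmonic=2)
```

Because each component is normalized by the total intensity, every phasor point lands inside the unit square regardless of pixel brightness, which is what makes the 2D histogram comparable across dim and bright pixels.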
The phasor point and/or phasor plane may be formed/acquired in any form. For example, the phasor point and/or phasor plane may have a visual form and/or a digital form. For example, the formed/acquired phasor point and/or phasor plane may be a stored data. For example, the formed/acquired phasor point and/or phasor plane may be stored in the memory system as data. For example, the formed/acquired phasor point and/or phasor plane may be displayed on the image forming system's display. For example, the formed/acquired phasor point and/or phasor plane may be an image printed on a paper or any similar media.
The image forming system may have a configuration that maps back the phasor point to a corresponding pixel on the target image based on the phasor point's geometric position on the phasor plane. In this disclosure, the image forming system may have a configuration that maps back the phasor plane to the corresponding target image based on each phasor point's geometric position on the phasor plane. The image forming system may map back the phasor point, for example, by using its hardware components, for example, the control system, the hardware processor, the memory or a combination thereof.
The phasor point and/or phasor plane may be mapped back in any form. For example, the mapped back phasor point and/or phasor plane may have a visual form and/or a digital form. For example, the mapped back phasor point and/or phasor plane may be a stored data. For example, the mapped back phasor point and/or phasor plane may be stored in the memory system as data. For example, the mapped back phasor point and/or phasor plane may be displayed on the image forming system's display. For example, the mapped back phasor point and/or phasor plane may be an image printed on a paper or any similar media.
The image forming system may have a configuration that assigns an arbitrary color to the corresponding pixel based on the geometric position of the phasor point on the phasor plane.
The unmixed color image may be formed in any form. For example, the unmixed color image may have a visual form and/or a digital form. For example, the unmixed color image may be a stored data. For example, the unmixed color image may be stored in the memory system as data. For example, the unmixed color image may be displayed on the image forming system's display. For example, the unmixed color image may be an image printed on a paper or any similar media.
The image forming system may have a configuration that displays the unmixed color image of the target on the image forming system's display.
The image forming system may have any combination of any of the configurations shown and/or described herein, such as those described above.
The image forming system may use at least one harmonic of the Fourier transform to generate the unmixed color image of the target. The image forming system may use at least a first harmonic of the Fourier transform to generate the unmixed color image of the target. The image forming system may use at least a second harmonic of the Fourier transform to generate the unmixed color image of the target. The image forming system may use at least a first harmonic and a second harmonic of the Fourier transform to generate the unmixed color image of the target.
The denoising filter may be any denoising filter. For example, the denoising filter may be a denoising filter such that when the denoising filter is applied, the image quality is not compromised. For example, when the denoising filter is applied, the detected electromagnetic radiation intensity at each pixel in the image may not change. An example of a suitable denoising filter may include a median filter.
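A median filter of the kind named above, applied to the real (G) and imaginary (S) phasor images rather than to the intensity image itself, can be sketched as follows. This is a minimal numpy-only illustration; the 3×3 kernel size and the edge padding are assumptions.

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D array (edge pixels are
    filtered over an edge-padded neighborhood)."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views of the image and take the median
    stack = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# Denoise the real (G) and imaginary (S) phasor images independently.
# The intensity image itself is left untouched, so the detected
# radiation intensity at each pixel does not change.
g_img = np.random.rand(64, 64)  # hypothetical G image
s_img = np.random.rand(64, 64)  # hypothetical S image
g_denoised = median_filter_3x3(g_img)
s_denoised = median_filter_3x3(s_img)
```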
The unmixed color image of the target may be formed at a signal-to-noise ratio of the at least one spectrum in the range of 1.2 to 50. The unmixed color image of the target may be formed at a signal-to-noise ratio of the at least one spectrum in the range of 2 to 50.
One example implementation of a hyperspectral imaging system is schematically shown in
Another example implementation of the hyperspectral imaging system is schematically shown in
In this disclosure, the target may be any target. The target may be any target that has a specific spectrum of color. For example, the target may be a tissue, a fluorescent genetic label, an inorganic target, or a combination thereof.
The system may be calibrated by using a reference to assign colors to each pixel. The reference may be any known reference. For example, the reference may be any reference wherein the unmixed color image of the reference is determined prior to the generation of the unmixed color image of the target. For example, the reference may be a physical structure, a chemical molecule, a biological molecule, or a biological activity (e.g. a physiological change) as a result of a physical structural change and/or disease.
The target radiation may include fluorescence. The hyperspectral imaging system suitable for fluorescence detection may include an optical filtering system. An example of the optical filtering system is a first optical filter that substantially decreases the intensity of the source radiation reaching the detector. The first optical filter may be placed between the target and the detector. The first optical filter may be any optical filter. Examples of the first optical filter may be a dichroic filter, a beam splitter type filter, or a combination thereof.
The hyperspectral imaging system suitable for fluorescence detection may further include a second optical filter. The second optical filter may be placed between the first optical filter and the detector to further decrease the intensity of the source radiation reaching the detector. The second optical filter may be any optical filter. Examples of the second optical filter may be a notch filter, an active filter, or a combination thereof. Examples of the active filter may be an adaptive optical system, an acousto-optic tunable filter, a liquid crystal tunable bandpass filter, a Fabry-Perot interferometric filter, or a combination thereof.
The hyperspectral imaging system may be calibrated by using a reference material to assign colors to each pixel. The reference material may be any known reference material. For example, the reference material may be any reference material wherein the unmixed color image of the reference material is determined prior to the generation of the unmixed color image of the target. For example, the reference material may be a physical structure, a chemical molecule (i.e. compound), or a biological activity (e.g. a physiological change) as a result of a physical structural change and/or disease. The chemical compound may be any chemical compound. For example, the chemical compound may be a biological molecule (i.e. compound).
The hyperspectral imaging system may be used to diagnose any health condition. For example, the hyperspectral imaging system may be used to diagnose any health condition of any mammal. For example, the hyperspectral imaging system may be used to diagnose any health condition of a human. Examples of the health condition may include a disease, a congenital malformation, a disorder, a wound, an injury, an ulcer, an abscess, or the like. The health condition may be related to a tissue. The tissue may be any tissue. For example, the tissue may include a skin. Examples of a health condition related to a skin or tissue may be a skin lesion. The skin lesion may be any skin lesion. Examples of the skin lesion may be a skin cancer, a scar, an acne formation, a wart, a wound, an ulcer, or the like. Other examples of a health condition of a skin or tissue may be a makeup of a tissue or a skin, for example, the tissue or the skin's moisture level, oiliness, collagen content, hair content, or the like.
The target may include a tissue. The hyperspectral imaging system may display an unmixed color image of the tissue. The health condition may cause differentiation of the chemical composition of the tissue. This chemical composition may be related to chemical compounds such as hemoglobin, melanin, a protein (e.g., collagen), oxygen, water, the like, or a combination thereof. Due to the differentiation of the tissue's chemical composition, the color of the tissue that is affected by the health condition may appear different than that of the tissue that is not affected by the health condition. Because of such color differentiation, the health condition of the tissue may be diagnosed. The hyperspectral imaging system may therefore allow a user to diagnose, for example, a skin condition, regardless of room lighting and skin pigmentation level.
For example, an illumination source radiation delivered to a biological tissue may undergo multiple scattering from inhomogeneity of biological structures and absorption by chemical compounds such as hemoglobin, melanin, and water present in the tissue as the electromagnetic radiation propagates through the tissue. For example, absorption, fluorescence, and scattering characteristics of the tissue may change during the progression of a disease. Therefore, the reflected, fluorescent, and transmitted light from tissue detected by the optical detector of the hyperspectral imaging system of this disclosure may carry quantitative diagnostic information about tissue pathology.
The diagnostic information, obtained by using the hyperspectral imaging system, may determine the health condition of the tissue. As such, this diagnostic information may enhance a patient's clinical outcome, for example, before, during, and/or after surgery or treatment. This hyperspectral imaging system, for example, may be used to track a patient's evolution of health over time by determining the health condition of, for example, the tissue of the patient. In this disclosure, the patient may be any mammal. For example, the mammal may be a human.
The reference material disclosed above may be used in the diagnosis of the health condition.
The hyperspectral imaging system comprising Hyperspectral Phasors (HySP) may apply a Fourier transform to convert all photons collected across the spectrum into one point in the two-dimensional (2D) phasor plot (“density plot”). The reduced dimensionality may perform well in the low SNR regime compared to the linear unmixing method, where each channel's error may contribute to the fitting result. In any imaging system, the number of photons emitted by a dye during a time interval may be a stochastic (Poissonian) process, where the signal (total digital counts) may scale as the average number of acquired photons, N, and the noise may scale as the square root of N, √N. Such Poissonian noise of the fluorescence emission and the detector readout noise may become more significant at lower light levels. First, the error on HySP plots may be quantitatively assessed. Then, this information may be used to develop a noise reduction approach to demonstrate that the hyperspectral imaging system comprising HySP is a robust system for resolving time-lapse hyperspectral fluorescent signals in vivo in a low SNR regime.
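The Poissonian scaling described above, where the signal grows as N while the noise grows only as √N, can be checked numerically. This is an illustrative sketch; the mean photon counts and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# For Poisson-distributed photon counts, the mean signal is N and the
# noise (standard deviation) is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# Dim pixels are therefore disproportionately noisy, which is why
# denoising matters most in the low-light regime.
snr = {}
for mean_photons in (10, 100, 1000):
    counts = rng.poisson(mean_photons, size=100_000)
    snr[mean_photons] = counts.mean() / counts.std()
```

Quadrupling the collected photons only doubles the SNR, so increasing illumination intensity is an expensive (and, for live samples, damaging) way to buy signal quality.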
The following features are also within the scope of this disclosure.
Multispectral fluorescence microscopy may be combined with hyperspectral phasors and linear unmixing to create a Hybrid Unmixing (HyU) technique (HyU). In some examples, the dynamic imaging of multiple fluorescent labels in live, developing zebrafish embryos and mouse tissue may demonstrate the capabilities of HyU. HyU may be more sensitive to low light levels of fluorescence compared to conventional linear unmixing approaches, permitting better multiplexed volumetric imaging over time, with less bleaching. HyU may also simultaneously image both bright exogenous and dim endogenous labels because of its high dynamic range. This technique may allow interrogation of cellular behaviors, tagged components, and cell metabolism within the same specimen, offering a powerful window into the orchestrated complexity of biological systems.
Hybrid Unmixing (HyU) technique(s) may resolve many of the challenges that have limited the wider acceptance of HFI for applications such as in vivo imaging. HyU may employ the phasor approach merged with traditional unmixing algorithms to untangle the fluorescent signals more rapidly and more accurately from multiple exogenous and endogenous labels.
The phasor approach, which is a dimensionality reduction approach for the analysis of both fluorescence lifetime and spectral image analysis, may provide advantages to HyU, including spectral compression, denoising, and computational reduction for both pre-processing and unmixing of HFI datasets.
A conventional phasor analysis may be fully supervised and may require a manual selection of regions or points on a graphical representation of the transformed spectra, called the phasor plot.
HyU, as discussed herein, may utilize phasor processing as an encoder to aggregate similar spectra and may apply unmixing algorithms, such as LU, to the aggregated spectra to provide unsupervised analysis of the HFI data, thereby simplifying the data processing and removing user subjectivity.
HyU may offer, for example, three advantages over prior techniques: (1) improved unmixing over conventional LU, especially for low intensity images, e.g., down to 5 photons per spectrum; (2) simplified identification of independent spectral components; and (3) dramatically faster processing of large datasets, overcoming the typical unmixing bottleneck for in vivo fluorescence microscopy.
HyU, as discussed herein, may combine the best features of hyperspectral phasor analysis and unmixing techniques, resulting in faster computation speeds and more reliable results, especially at low light levels.
In this disclosure, the (intensity) spectra may be unmixed by any technique. An example of the unmixing technique is the linear unmixing (LU) technique. Examples of the unmixing techniques may include (1) fully constrained least squares, (2) matrix inversion, (3) non-negative matrix factorization, (4) geometric unmixing method, (5) Bayesian unmixing method, and (6) sparse unmixing method. For a review of such unmixing techniques, for example, see Jiaojiao Wei and Xiaofei Wang “An Overview on Linear Unmixing of Hyperspectral Data,” Mathematical Problems in Engineering, Volume 2020, Article ID 3735403, pages 1-12, https://doi.org/10.1155/2020/3735403. The entire content of this publication is incorporated herein by reference. Such unmixing techniques are within the scope of this disclosure.
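A minimal sketch of the matrix-inversion (least-squares) unmixing variant named above, assuming two hypothetical Gaussian-shaped endmember spectra over 32 channels; a production implementation would typically add non-negativity or sum-to-one constraints.

```python
import numpy as np

def linear_unmix(spectra, endmembers):
    """Least-squares linear unmixing (the matrix-inversion variant).

    spectra    : (n_pixels, n_channels) measured intensity spectra.
    endmembers : (n_labels, n_channels) reference spectra, one per
                 fluorescent label.

    Returns (n_pixels, n_labels) abundances such that
    spectra ~= abundances @ endmembers.
    """
    # Solve endmembers.T @ x = spectrum for every pixel at once
    abundances, *_ = np.linalg.lstsq(endmembers.T, spectra.T, rcond=None)
    return abundances.T

# Two hypothetical overlapping emission spectra (Gaussian-shaped)
channels = np.arange(32)
em = np.stack([np.exp(-((channels - 10) ** 2) / 20.0),
               np.exp(-((channels - 16) ** 2) / 20.0)])
true_ab = np.array([[3.0, 1.0], [0.5, 2.0]])  # known mixing weights
mixed = true_ab @ em                          # noiseless mixed spectra
recovered = linear_unmix(mixed, em)           # recovers true_ab exactly
```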
The phasor approaches of this disclosure may reduce the computational load because they are compressive, reducing, for example, the 32 channels of an HFI spectral plot into a position on a 2D-histogram, representing the real and imaginary Fourier components of the spectrum (
Because the spectral content of an entire 2D or 3D image set is rendered on a single phasor plot, there is a dramatic data compression—from a spectrum for each voxel in an image set (for example, up to or even beyond gigavoxels) to a histogram value on the phasor plot (for example, megapixels).
In addition, because each “bin” on the phasor plot histogram corresponds to multiple voxels with highly similar spectral profiles, the binning itself represents spectral averaging, which reduces the Poisson and instrumental noise (
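The per-bin spectral averaging described above can be sketched as follows. This is an illustrative implementation; the 64×64 histogram grid and the dictionary bookkeeping are assumptions, not the disclosure's exact data structures.

```python
import numpy as np

def bin_average_spectra(g, s, spectra, n_bins=64):
    """Aggregate pixel spectra by phasor-histogram bin and return the
    per-bin average spectrum. Averaging the spectra of pixels that
    share a bin is the spectral averaging that reduces Poisson and
    instrumental noise.

    g, s    : phasor coordinates per pixel, each in [-1, 1].
    spectra : (n_pixels, n_channels) spectra matching g and s.
    """
    # Map each (g, s) pair to a bin index on an n_bins x n_bins grid
    gi = np.clip(((g + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
    si = np.clip(((s + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
    flat = gi * n_bins + si
    avg = {}
    for b in np.unique(flat):
        avg[b] = spectra[flat == b].mean(axis=0)
    return flat, avg

rng = np.random.default_rng(1)
toy_spectra = rng.poisson(20, size=(500, 32)).astype(float)
toy_g = rng.uniform(-1, 1, 500)
toy_s = rng.uniform(-1, 1, 500)
bin_ids, bin_avgs = bin_average_spectra(toy_g, toy_s, toy_spectra)
```

Downstream unmixing then runs once per occupied bin instead of once per voxel, which is the source of the computational reduction described above.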
Poisson noise in the collected light is unavoidable in HFI unless the excitation is turned up so high that the statistics of collected fluorescence yield hundreds or thousands of photons per spectral bin. The clear separation of the spectral phasor plot and its referenced imaging data permits denoising algorithms to be applied to phasor plots with minimal degradation of the image resolution.
LU or other unmixing approaches may be applied to the spectra on the phasor plot, offering a dramatic reduction in computational burden for large image data sets (
In this example, to quantitatively assess the relative performance of HyU and the conventional LU, they were analyzed on synthetic hyperspectral fluorescent datasets, created by computationally modelling the biophysics of fluorescence spectral emission and microscope performance (
In addition to the computational efficiency mentioned above, HyU analysis shows better ability to capture spatial features over a wide dynamic range of intensities, when compared with conventional LU, in large part due to the denoising created by processing in phasor space (
The absolute MSE for HyU can be consistently up to 2× lower than that of the conventional LU, especially at low and ultra-low fluorescence levels (
To better characterize the performance in the experimental data without ground truth, the unmixing residual can be defined as the difference between the original multichannel hyperspectral images and their unmixed results. Residuals provide a measure of how closely the unmixed results reconstruct the original signal (
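The unmixing residual defined above, the difference between the original spectra and their unmixed reconstruction, can be sketched as follows; the toy endmembers and abundances are hypothetical.

```python
import numpy as np

def unmixing_residual(spectra, abundances, endmembers):
    """Per-pixel residual: how much of the original signal the
    unmixed result fails to reconstruct (summed over channels)."""
    reconstructed = abundances @ endmembers
    return np.abs(spectra - reconstructed).sum(axis=1)

# Hypothetical example: a perfect unmixing leaves zero residual, while
# a wrong abundance estimate leaves a positive residual.
channels = np.arange(32)
em = np.stack([np.exp(-((channels - 10) ** 2) / 20.0),
               np.exp(-((channels - 16) ** 2) / 20.0)])
ab = np.array([[2.0, 1.0]])
measured = ab @ em
res_exact = unmixing_residual(measured, ab, em)
res_wrong = unmixing_residual(measured, ab * 1.5, em)
```

Because no ground truth is needed, the residual can rank unmixing methods directly on experimental data.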
Analysis of experimental data, which reveals comparatively lower unmixing residuals and a higher dynamic range as compared to the conventional LU, supports the enhanced performance of HyU. Data was acquired from a quadra-transgenic zebrafish embryo Tg(ubiq:Lifeact-mRuby);Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato);Tg(fli1:mKO2), labelling actin, clathrin, plasma membrane, and pan-endothelial cells, respectively (
HyU unmixing of the data shows minimal signal cross-talk between channels while the conventional LU presents noticeable bleed-through (
The residual images (
Applying HyU to another HFI dataset further highlights HyU's improvements in noise reduction and reconstitution of spatial features for low-photon unmixing. (
HyU is more accurate, leading to more reliable unmixing results across the depth of the sample with greatly reduced unmixing residuals. The average residual for HyU is 9-fold lower than that of the conventional LU, with a 3-fold narrower variance. (
HyU's increased sensitivity can be utilized to overcome common challenges of multiplexed imaging, such as poor photon yield and spectral cross-talk, making it possible to visualize dynamics in a developing zebrafish embryo, for example a triple-transgenic zebrafish embryo with labeled pan-endothelial cells, vasculature, and clathrin-coated pits (Tg(fli1:mKO2); Tg(kdrl:mCherry); Gt(cltca-Citrine)). Multiplexing these spectrally close fluorescent proteins is enabled by HyU's increased sensitivity at lower photon counts.
The increased performance at lower SNR allowed us to maintain high quality results (
HyU provides the ability to combine the information from intrinsic and extrinsic signals during live imaging of samples, at both single (
HyU allows for reduced energy load, tiled imaging of the entire embryo without perturbing its development or depleting its fluorescence signal (
The HyU capabilities can be used to multiplex volumetric timelapses of extrinsic and intrinsic signals by imaging the tail region of the same quadra-transgenic zebrafish embryo. Extrinsic labels can be excited at 488/561 nm and the intrinsic signals with 740 nm two-photon excitation, collecting 6 tiled volumes over 125 mins (
The advantages of Hybrid Unmixing (HyU) over the conventional Linear Unmixing (LU) in performing complex multiplexing interrogations are discussed herein. HyU may overcome the significant challenges of separating multiple fluorescent and autofluorescent labels with overlapping spectra while minimally perturbing the sample with excitation light.
One example advantage of HyU over the conventional LU is its multiplexing capability when imaging in the presence of biological and instrumental noise, especially at low signal levels. HyU's increased sensitivity improves multiplexing in photon-limited applications (
Simplicity of use and versatility are other key advantages of HyU over the conventional LU, inherited from both the phasor approach and traditional unmixing algorithms. Phasors here operate as a spectral encoder, reducing computational load and integrating similar spectral signatures in histogram bins of the phasor plot. This representation simplifies identification of independent spectral signatures (
The simplicity of this approach is especially helpful in live imaging where identifying independent spectral components remains an open challenge, owing to the presence of intrinsic signals (
In single photon imaging (
HyU performs better than standard algorithms both in the presence and absence of phasor noise reduction filters. Compared with the conventional LU, the unmixing enhancement when such filters are applied is demonstrated by a decrease of the MSE of up to 21% (
In the absence of noise, for example in the ground truth simulations, the conventional LU produces an MSE 6-fold lower than HyU (
HyU can interface with different unmixing algorithms, adapting to existing experimental pipelines. Hybridization with iterative approaches such as non-negative matrix factorization, fully constrained and non-negative least-squares were tested. Speed tests with iterative fitting unmixing algorithms demonstrate a speed increase of up to 500-fold when the HyU compressive strategy is applied. (
One restriction of HyU may derive from the mathematics of linear unmixing, where linear equations representing the unmixed channels need to be solved for the unknown contributions of each analyzed fluorophore.
To obtain a better solution from these equations and to avoid an underdetermined equation system, the maximum number of spectra for unmixing may not exceed the number of channels acquired, generally 32 for commercial microscopes.
This number could be increased; however, due to the broad and photon-starved nature of fluorescence spectra, acquisition of a larger number of channels could negatively affect the sample, imaging time and intensities. Depending on the number of labels in the specimen of interest, extending the number of labels to simultaneously unmix beyond 32 will likely require spectral resolution upsampling strategies.
The HyU improvement is related to the presence of various types of signal disruption and noise in microscopy images, such as stochastic emission, Gaussian, Poisson and digital noise, as well as unidentified sources of spectral signatures, which affect SNR in a variety of ways (
The results of this example quantitatively show that HyU, a phasor-based computational unmixing framework, may be well suited to tackling many challenges present in live imaging of multiple fluorescence labels. HyU's reduced requirements in the amount of fluorescent signal permit a reduction of laser excitation load and imaging time. These features of HyU may enable multiplexed imaging of biological events with longer duration, higher speed and lower photo-toxicity while providing access to information-rich imaging across different spatio-temporal scales. The reduced requirements of HyU may make it fully compatible with any commercial and common microscopes capable of spectral detection, facilitating access to the technology.
The present disclosure provides examples which demonstrate HyU's robustness, simplicity and improvement in identifying both new and known spectral signatures, and vastly improved unmixing outputs, providing a much-needed tool for delving into the many questions still surrounding studies with live imaging.
Transgenic zebrafish lines were intercrossed over multiple generations to obtain embryos with multiple combinations of the transgenes. All lines were maintained as heterozygous for each transgene. Embryos were screened using a fluorescence stereo microscope (Axio Zoom, Carl Zeiss) for expression patterns of individual fluorescence proteins before imaging experiments. A confocal microscope (LSM 780, Carl Zeiss) was used to isolate Tg(ubiq:Lifeact-mRuby) lines from Tg(ubiq:lyn-tdTomato) lines by distinguishing spatially- and spectrally-overlapping signals.
For in vivo imaging, 5-6 zebrafish embryos at 18 to 72 hpf were immobilized and placed into 1% UltraPure low-melting-point agarose (catalog no. 16520-050, Invitrogen) solution prepared in 30% Danieau (17.4 mM NaCl, 210 µM KCl, 120 µM MgSO4·7H2O, 180 µM Ca(NO3)2, 1.5 mM HEPES buffer in water, pH 7.6) with 0.003% PTU and 0.01% tricaine in an imaging dish with no. 1.5 coverglass bottom (catalog no. D5040P, WillCo Wells). Following solidification of agarose at room temperature (1-2 min), the imaging dish was filled with 30% Danieau solution and 0.01% tricaine at 28.5° C.
A fluorescent silica bead solution (Nanocs, Inc.) labeled with Cy3 (Si500-S3-1, 0.5 mL, 1% solid, lot #1608BRX5) was characterized in its spectral fluorescence emission and physical size.
A 10× dilution of the beads in PBS was placed on a no. 1.5 imaging coverglass and spectrally characterized in spectral mode on a Zeiss LSM 780 laser scanning confocal microscope equipped with a 32-channel detector and a 40×/1.1 W LD C-Apochromat Korr UV-VIS-IR lens. A 2-photon laser at 740 nm was used to excite fluorescence from the beads, with a 690 nm lowpass filter separating excitation and fluorescence. Spectra obtained from multiple beads with the same label were averaged, producing the reference spectrum reported in
For autofluorescent measurements, mouse organ samples were collected from Balb-c mice. Following euthanasia, organs were resected and washed in Phosphate Buffered Saline (PBS) to remove residual blood and kept in PBS until imaging preparation. Organs were sectioned in order to image the internal architecture and mounted on a glass imaging dish with sufficient PBS to avoid dehydration of the sample. Following imaging, all samples were fixed in a 10% Neutral Buffered Formalin solution at 4° C.
For ex vivo bead characterization in tissue, mouse organ samples were collected from Balb-c mice. Following euthanasia, organs were resected and washed in PBS followed by incubation for at least 24 hours in 10% buffered formalin. The kidney was then removed from the fixative and sectioned into smaller ~5×5×5 mm pieces for imaging. A fluorescent silica bead working solution (Nanocs, Inc.) labeled with Cy3 (Si500-S3-1, 0.5 mL, 1% solid, lot #1608BRX5) and previously characterized was prepared using a 10× dilution of the fluorescent beads from their stock concentration. Beads were injected into the sample using 50 µL of the solution loaded into a 0.5 mL syringe with a 28-gauge needle. The kidney sections were then placed in imaging dishes with a small volume of PBS to keep the samples hydrated prior to imaging.
Images were acquired on a Zeiss LSM 780 laser confocal scanning microscope equipped with a 32-channel detector using 40×/1.1 W LD C-Apochromat Korr UV-VIS-IR lens at 28° C.
Samples of Gt(cltca-Citrine), Tg(ubiq:lyn-tdTomato), Tg(fli1:mKO2), and Tg(ubiq:Lifeact-mRuby) were simultaneously imaged with 488 nm and 561 nm laser excitation, for Citrine, tdTomato, mKO2, and mRuby. A narrow 488 nm/561 nm dichroic mirror was used to separate excitation and fluorescence emission. Samples were imaged with a 2-photon laser at 740 nm to excite autofluorescence, using a 690 nm lowpass filter to separate excitation and fluorescence.
Samples of mouse kidney tissue were imaged with 2-photon excitation at 740 nm or 850 nm with a 690+nm lowpass filter, at 37° C. incubation.
For all samples, detection was performed at the full available range (410.5-694.9 nm) with 8.9 nm spectral binning.
The model simulates spectral fluorescent emission by generating a stochastic distribution of photons with profile equivalent to the pure reference spectra (as described in Example 22). The effect of photon starvation, commonly observed on microscopes, is synthetically obtained by manually reducing the number of photons in this stochastic distribution. Detection, Poisson and signal transfer noises are then added to produce 32-channel fluorescence emission spectra that closely resemble those acquired on microscopes. The simulations include accurate integration of dichroic mirrors and imaging settings.
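The noise model described above can be sketched in Python. The Gaussian-shaped reference spectrum, photon budget, and `detector_gain` parameter below are illustrative assumptions, not the actual simulation parameters; dichroic-mirror effects are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_emission(reference_spectrum, total_photons, detector_gain=1.0):
    """Draw a photon-starved 32-channel spectrum from a pure reference.

    Photon starvation is modelled by drawing a limited number of photons
    from the reference profile; Poisson detection noise is then added,
    loosely following the simulation pipeline described in the text.
    """
    p = reference_spectrum / reference_spectrum.sum()
    # Stochastic photon emission: multinomial draw over the channels.
    photons = rng.multinomial(total_photons, p).astype(float)
    # Detection noise applied on top of the counted photons.
    noisy = rng.poisson(photons * detector_gain).astype(float)
    return noisy

# Illustrative Gaussian-shaped reference over 32 spectral channels.
channels = np.arange(32)
ref = np.exp(-0.5 * ((channels - 12) / 3.0) ** 2)
spectrum = simulate_emission(ref, total_photons=50)
```

Reducing `total_photons` reproduces the photon-starved regimes discussed in the examples.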
Experimentally matching simulations. To quantify the performance of HyU vs LU for microscopy data acquired experimentally, synthetic data were generated where each input spectrum was organized with intensity distributions taken from experimental data. The analog to photon counting rate was calibrated based on existing literature. Real data were discretized to photons to produce a realistic photon mask with a biologically relevant distribution of signal. This provided intensities and ratios which would match those acquired from the microscope while allowing control over the effects of photon starvation.
Overlapping simulations. Simulations to quantify the performance of HyU vs. the conventional LU with respect to the number of spectral combinations are included. These simulations were created with artificial intensity distributions so that a simulation with X % overlap and n fluorophores would have a specific percentage of pixels, X, with a randomized ratio of n input spectra. As an example, for a simulation with 6 fluorophores and 50% overlap, the simulated dataset would have 50% of the pixels contain a randomized combination of the 6 fluorophores, while the remaining pixels contain a single fluorophore. This allowed us to investigate the effects of an increasing number of spectral combinations on the compressive nature of the phasor method for HyU.
Independent spectral fingerprints can be obtained from samples, solutions, the literature, or spectral viewer websites (Thermo Fisher, BD Spectral Viewer, Spectra Analyzer). Fluorescent signals used herein were obtained by imaging single-labelled samples in areas morphologically and physiologically known to express the specific fluorescence, see
For autofluorescent signals, the spectrum for Elastin was obtained experimentally and compared with literature. Spectra for Nicotinamide Adenine Dinucleotide (NADH) free, NADH bound, Retinoic acid, Retinol and Flavin Adenine Dinucleotide (FAD) were acquired from in vitro solutions using the microscope. NADH free was prepared from β-Nicotinamide Adenine Dinucleotide (Sigma-Aldrich, St. Louis, MO, #43420) in Phosphate Buffered Saline (PBS) solution. NADH bound was prepared from β-Nicotinamide Adenine Dinucleotide and L-Lactic Dehydrogenase (Sigma-Aldrich, #43420, #L3916) in PBS. Retinoic acid was prepared from a solution of Retinoic Acid (Sigma-Aldrich, #R2625) in Dimethylsulfoxide (DMSO). Retinol was prepared from a solution of synthetic Retinol (Sigma-Aldrich, #R7632) in DMSO. FAD was prepared from Flavin Adenine Dinucleotide Disodium Salt Hydrate (Sigma-Aldrich, #F6625) in PBS.
For each pixel in a dataset, the Fourier coefficients of its normalized spectra define the coordinates (G(n),S(n)) in the phasor plane, where:
G(n) = [Σλ=λs→λf I(λ)·cos(2πn(λ−λs)/(λf−λs))] / [Σλ=λs→λf I(λ)]

S(n) = [Σλ=λs→λf I(λ)·sin(2πn(λ−λs)/(λf−λs))] / [Σλ=λs→λf I(λ)]
where λs and λf are the starting and ending wavelengths, respectively; I is the measured intensity; c is the number of spectral channels (32 in the present case); and n is the harmonic number. The first harmonic (n=1) is utilized for the autofluorescent signals and the second harmonic (n=2) for fluorescent signals, based on the sparsity of independent spectral components. A two-dimensional histogram with dimensions (S, G) is applied to the phasor coordinates in order to group pixels with similar spectra within a single square bin. This process can be defined as phasor encoding.
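The phasor encoding step can be sketched as follows. The discrete form, in which the channel index k stands in for the wavelength over the detection range, and the histogram range of [−1, 1] are assumptions of this sketch.

```python
import numpy as np

def spectral_phasor(intensity, n=2):
    """Map spectra with c channels (last axis) to phasor coordinates (G, S).

    Assumes a discrete form of the transform in which the channel
    index k plays the role of wavelength over the detection range.
    """
    c = intensity.shape[-1]
    angle = 2 * np.pi * n * np.arange(c) / c
    total = intensity.sum(axis=-1)
    g = (intensity * np.cos(angle)).sum(axis=-1) / total
    s = (intensity * np.sin(angle)).sum(axis=-1) / total
    return g, s

def phasor_encode(image, bins=256, n=2):
    """Group pixels with similar spectra into 2D histogram bins,
    i.e., the phasor encoding described in the text."""
    g, s = spectral_phasor(image.reshape(-1, image.shape[-1]), n=n)
    hist, g_edges, s_edges = np.histogram2d(
        g, s, bins=bins, range=[[-1, 1], [-1, 1]])
    return hist, g_edges, s_edges
```

A spectrum concentrated in a single channel maps to the unit circle, while broad spectra fall toward the center of the phasor plot.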
The hypothesis for linear unmixing in this work is that given i independent spectral fingerprints (fp), each collected spectrum (I(λ)) is a linear combination of fp, and the sum of each fp contribution (R) is 1.
In the pixel-by-pixel linear unmixing implementation in this work, the Jacobian Matrix inversion is applied on the acquired spectrum in each pixel with dimensions (t,z,c,y,x). Resulting ratios for each spectral vector are assembled in the form of a ratio cube with shape (t,z,i,y,x) where x,y,z,t are the original image spatial and time dimensions, respectively and i is the number of input spectral vectors. The ratio cube (t,z,i,y,x) is multiplied with the integral of intensity over channel dimension of the original spectral cube, with shape (t,z,y,x), to obtain the final resulting dataset with shape (t,z,i,y,x).
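A minimal sketch of the pixel-by-pixel unmixing step, shown here for a single (y, x, c) frame rather than the full (t, z, c, y, x) cube, and using ordinary least squares in place of the Jacobian matrix inversion:

```python
import numpy as np

def linear_unmix_pixels(image, fingerprints):
    """Pixel-by-pixel linear unmixing of a (y, x, c) image.

    fingerprints: (i, c) independent spectral signatures (fp).
    Returns the ratio cube (y, x, i) and the ratios multiplied by the
    channel-integrated intensity, as described in the text.
    """
    y, x, c = image.shape
    A = fingerprints.T                         # (c, i) mixing matrix
    pixels = image.reshape(-1, c).T            # (c, n_pixels)
    ratios, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    ratios = ratios.T.reshape(y, x, -1)        # (y, x, i) ratio cube
    total = image.sum(axis=-1, keepdims=True)  # integral over channels
    return ratios, ratios * total
```

Extending to the full cube amounts to reshaping the extra t and z dimensions into the pixel axis before solving.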
In the Hybrid Unmixing implementation, Jacobian Matrix Inversion is applied on the average spectrum of each phasor bin with dimensions (c,s,g) where g and s are the phasor histogram sizes and c is the number of spectral channels acquired. The average spectrum in each bin is calculated by using the phasor as an encoding, to reference each original pixel spectra to a bin. Resulting ratios for each component channel are assembled in the form of a phasor bin-ratio cube with shape (i,s,g) where i is the number of input independent spectra fp (Linear Unmixing section). This phasor bin-ratio cube is then referenced to the original image shape, forming a ratio cube with shape (t,z,i,y,x) where x, y, z, t are the original image dimensions. The ratio cube is multiplied with the integral of intensity over channel dimension of the original spectral cube, with shape (t,z,y,x), obtaining a final result dataset with shape (t,z,i,y,x).
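The bin-level unmixing described in this paragraph can be sketched as follows for a single (y, x, c) frame. The discrete phasor form, bin count, the normalization of the averaged bin spectrum, and the use of least squares in place of the Jacobian matrix inversion are all assumptions of this sketch, and zero-intensity pixels are not guarded against.

```python
import numpy as np

def hyu_unmix(image, fingerprints, bins=64, n=2):
    """Sketch of Hybrid Unmixing on a (y, x, c) image.

    Pixels are encoded into phasor-histogram bins, one averaged
    (denoised) spectrum per bin is unmixed by least squares, and the
    bin ratios are mapped back to every pixel in that bin.
    Assumes strictly positive pixel intensities.
    """
    y, x, c = image.shape
    flat = image.reshape(-1, c)
    angle = 2 * np.pi * n * np.arange(c) / c
    total = flat.sum(axis=-1)
    g = (flat * np.cos(angle)).sum(axis=-1) / total
    s = (flat * np.sin(angle)).sum(axis=-1) / total
    # Assign each pixel to a (g, s) histogram bin.
    gi = np.clip(((g + 1) / 2 * bins).astype(int), 0, bins - 1)
    si = np.clip(((s + 1) / 2 * bins).astype(int), 0, bins - 1)
    bin_id = gi * bins + si
    A = fingerprints.T                         # (c, i) mixing matrix
    ratios = np.zeros((flat.shape[0], fingerprints.shape[0]))
    for b in np.unique(bin_id):
        mask = bin_id == b
        avg = flat[mask].mean(axis=0)          # averaged (denoised) bin spectrum
        avg = avg / avg.sum()                  # normalize so ratios sum to ~1
        r, *_ = np.linalg.lstsq(A, avg, rcond=None)
        ratios[mask] = r                       # same ratios for all pixels in the bin
    # Multiply ratios by the channel-integrated intensity of each pixel.
    return ratios.reshape(y, x, -1) * total.reshape(y, x, 1)
```

Because one system is solved per occupied bin rather than per pixel, the loop runs over at most bins² systems regardless of image size, which is the source of the compression discussed in the text.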
Unmixing algorithms utilized for speed comparisons with the HyU algorithm (
Rendering of final result datasets were performed using Imaris 9.5-9.7. In
All box plots were generated using standard plotting methods. The center line corresponds to the median, the lower box border corresponds to the first quartile, and the upper box border corresponds to the third quartile. The lower- and upper-whiskers correspond to one and a half times the interquartile range below and above the first and third quartiles respectively.
A customized python script (Supplementary Code) was first utilized to pad the number of z slices across multiple time points, obtaining equally sized volumes. The “Correct 3D Drift” plugin (https://imagej.net/Correct_3D_Drift) in FIJI (https://imagej.net/Fiji) was used to register the data.
Box plots and line plots for timelapses were generated using ImarisVantage in Imaris 9.5-9.7. Box plot elements follow the same guidelines as described above. Line plots are connected box plots for each time point with the solid line denoting the median values, and the shaded region denoting the first and third quartiles.
For synthetic data, a ground truth is available for comparison of unmixing fidelity between HyU and LU. fp contributions, or ratios, were used for quantification, owing to the arbitrary nature of intensity values in microscopy data. Mean Square Error (MSE) is used for determining the quality of the ratios in synthetic data. MSE can be defined as the square difference of the ratio recovered by an unmixing algorithm (runmixed) and the ground truth ratio (r) divided by the total number of pixels (n).
To simplify comparison between different unmixing algorithms, Relative Mean Square Error (RMSE) can be defined as:
RMSE measures the improvement in MSE when using HyU as compared to the conventional LU.
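Under the MSE definition above, these metrics can be sketched as follows. The RMSE is hedged here as a simple ratio of the two MSEs, since the exact normalization is not reproduced in the text.

```python
import numpy as np

def mse(r_unmixed, r_truth):
    """Mean Square Error: squared difference between recovered and
    ground-truth ratios, divided by the total number of elements."""
    return float(np.square(r_unmixed - r_truth).sum() / r_truth.size)

def relative_mse(mse_hyu, mse_lu):
    """One plausible form of the Relative MSE: the HyU MSE expressed
    relative to the conventional-LU MSE (an assumption of this sketch)."""
    return mse_hyu / mse_lu
```

A relative MSE below 1 then indicates that HyU recovered ratios closer to the ground truth than the conventional LU.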
For experimental data, in the absence of ground truth, the performance of the results returned by the unmixing algorithms are quantified with the following measurements: Average Relative Residual, Residual Image Map, Residual Phasor Map, and finally, Residual Intensity Histogram.
Residual (R) is calculated as:
R(λ) = IRaw Image(λ) − IUnmixed(λ)
The spectral intensity difference between the unmixed image and the original image for each pixel or phasor bin depends on the following descriptions of the intensity image (I), where:
The original spectrum (IRaw Image) is the combination of each independent spectral component (fp) with its ratio (r) plus noise (N). The recovered spectrum is obtained by the multiplication of recovered ratios (runmixed) with each corresponding individual component.
Relative Residual (RR) is calculated as the sum of the residual values over C channels, normalized to the sum of the original intensity values over C channels (with C=32 in this example).
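The residual and Relative Residual can be sketched directly from these definitions; summing the absolute value of the residual over channels is an assumption of this sketch.

```python
import numpy as np

def residual(raw, ratios, fingerprints):
    """Residual spectra: raw spectra minus the spectra reconstructed
    from the recovered ratios. raw: (..., c); ratios: (..., i);
    fingerprints: (i, c)."""
    recovered = ratios @ fingerprints      # (..., i) @ (i, c) -> (..., c)
    return raw - recovered

def relative_residual(raw, ratios, fingerprints):
    """Relative Residual: residual summed over the C channels,
    normalized to the summed original intensity (absolute values
    of the per-channel residuals are assumed)."""
    r = residual(raw, ratios, fingerprints)
    return np.abs(r).sum(axis=-1) / raw.sum(axis=-1)
```

Projecting the Relative Residual back onto the image grid or the phasor histogram yields the Residual Image Map and Residual Phasor Map described below.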
The Average Relative Residual (
The Residual Image Map visualizes the residual values for each pixel of the image (
Residual Image Maps (Rimg map(x,y)) project the Relative Residual (RR) cube to the 2D image shape for each voxel, providing an estimated visualization of an algorithm ratio recovery performance in the spatial context of the original image.
Residual Phasor Map visualizes residuals for each bin of the phasor histogram (
The Residual Intensity Histogram RInt Hist(p,rr) (
Image contrast measures the distinguishability of a detail against the background. Percent contrast can refer to the relationship between the highest and lowest intensities in the image.
where the average signal intensity (Is) is the average of the top 20% of intensities in the image, and the average background intensity (IB) is the average of the bottom 20% of image intensities.
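A sketch of the percent-contrast computation; the (Is − IB)/IB × 100 form is an assumption of this sketch, since the exact formula is not reproduced in the text.

```python
import numpy as np

def percent_contrast(image):
    """Percent contrast from the top- and bottom-20% pixel intensities.

    Is: average of the top 20% of intensities; IB: average of the
    bottom 20%. Assumes a nonzero background average.
    """
    flat = np.sort(image.ravel())
    k = max(1, int(0.2 * flat.size))
    i_b = flat[:k].mean()   # average background intensity
    i_s = flat[-k:].mean()  # average signal intensity
    return (i_s - i_b) / i_b * 100.0
```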
Since each synthetic dataset has a ground truth, the SNR can be calculated by comparing the simulated image to the ground truth. Since these are hyperspectral images, the definition of SNR can be extended to the wavelength dimension of the data and use the term Spectral SNR. Two types of Spectral SNR can include Absolute Spectral SNR and Relative Spectral SNR.
Spectral SNR can be calculated as follows for each single spectrum simulation. First, for each pixel and channel, the absolute value of the difference is taken between the ground truth intensity and the simulated intensity. Then the mean is calculated over all of the pixels for each channel. Finally, the sum is taken over all of the channels and divided by either 32 for the absolute SNR, or the number of channels with signal for the relative SNR. The number of channels with signal is calculated by checking if there is a statistically significant number of pixels in a single channel with a pixel SNR value greater than zero.
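The recipe above can be sketched as follows. The signal-channel criterion is simplified here to channels with any nonzero ground-truth signal, in place of the statistical test described in the text.

```python
import numpy as np

def spectral_snr(simulated, ground_truth, relative=False):
    """Per-channel mean absolute deviation summed over channels,
    following the described recipe for (n_pixels, c) spectra.

    The absolute form divides the channel sum by the channel count c;
    the relative form divides by the number of channels with signal
    (simplified here to channels with nonzero ground truth).
    """
    # Mean over pixels of the absolute per-channel deviation.
    dev = np.abs(ground_truth - simulated).mean(axis=0)
    if relative:
        n_signal = int((ground_truth.sum(axis=0) > 0).sum())
        return float(dev.sum() / max(n_signal, 1))
    return float(dev.sum() / simulated.shape[1])
```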
Identification of independent spectral components has long been a challenge in unmixing hyperspectral data. First, the collected spectra may be distorted by reduced SNR. Second, excitation of intrinsic signals introduces uncertainty in the biological sample. HyU simplifies this process by adopting the phasor approach, enabling a semi- or fully-automated process for spectra identification and selection. In HyU, spectra can be loaded from an existing library, virtually automating the analysis process. Pre-identified cursors are generated for common fluorophores such as mKO2, tdTomato, mRuby, and Citrine. Obtaining fluorescence spectra from experimental samples has some advantages compared to utilizing spectra from an existing library, as they account for a multitude of experimental and instrumental settings. Imaging settings such as different types of lenses or optical filters (
Cellular metabolism is a key regulator of cell function and plays an essential role in the development of numerous diseases. Understanding of cellular metabolic pathways is critical for the development and assessment of novel therapies and diagnostics. A number of metabolites have been reported in literature to be fluorescent and to change their spectra according to their biochemical configurations. For example, the measurement of NADH in its free and bound state is possible thanks to a shift in the emission spectra when NADH is bound to enzymes such as Lactate Dehydrogenase (LDH). Likewise, retinol and retinoic acid are known to have different autofluorescent spectra. A map of the phasor position for common autofluorescence from pure solutions is reported in
An advantage of HyU is speed. HyU provides substantial speed boosts compared to other pixel-based unmixing algorithms. The exception is the conventional LU, owing to the highly optimized computational implementation of the functions utilized in the conventional LU. The speed boost occurs because unmixing is performed at the phasor-histogram level, where a single bin corresponds to a multitude of image pixels. For algorithms other than standard LU, HyU provides up to ~500-fold improvement in speed at comparable coding language and computing hardware, processing 2 GB in less than 100 seconds (
This improvement provides a solution for open image-analysis challenges in multiplexing fluorescence. First, the size of HFI data has increased, resulting from continuously higher-throughput and higher-resolution microscopes, and scales with the number of spectral channels. Second, the number of datasets has grown, owing to the demands of experimental reproducibility and biological variability.
Linearity of combinations is the general assumption for most of the spectral analysis algorithms in Hyperspectral Fluorescence Imaging (HFI). Each pixel is assumed to contain a linear combination of the independent spectral signatures, or endmembers, contained in the sample. This assumption requires knowledge, or identification, of the independent spectra within the sample. In standard linear unmixing algorithms, the extraction of relative amounts of spectra (ratios) is conducted on a pixel-by-pixel basis, at the expense of computational costs. Disrupted experimental signals, in the case of lower Signal to Noise Ratio (SNR) spectra, complicate the detection of spectral endmembers and reduce the accuracy of ratio determination. These standard unmixing algorithms, however, have the advantage of being unsupervised with the possibility of automating the analysis process.
The phasor approach has become a popular dimensionality reduction approach for the analysis of both fluorescence lifetime and spectral image analysis. Phasors provide key advantages, including spectral compression, denoising, and computational reduction for both pre-processing and unmixing of HFI datasets. Phasor analysis overcomes the challenge of low-SNR data analysis that limits standard unmixing algorithms, providing a much-needed multiplexing solution. The phasor transform is a lossy encoder that in principle carries a reduced percentage of the information compared to the original clean data. In the imaging of fluorescent signals, where the signal-to-noise ratio often decreases to single digits, the encoding loss is less relevant compared to the noise of the fluorescent signals. This fundamental advantage of increasing SNR in noisy data has made the phasor method a valuable tool for fluorescence microscopy, both for Lifetime and Spectral Fluorescence Microscopy. This point is reported by multiple groups using phasors and, more recently, nicely described in the work of Scipioni et al. Standard Phasor analysis is fully supervised and requires a manual selection of regions or points on a graphical representation of the transformed spectra, called the phasor plot. Each selection of a region in the phasor plot associates pixels containing similar spectra to the same fluorophore, forming an output channel that contains wavelength integral of intensities with unitary ratiometric value. This “winner takes all” approach is suitable when fluorophores for each single excitation light are spectrally overlapping and spatially disperse (
HyU uses the phasor transform to group pixels with similar spectral shape within each phasor histogram bin. This approach maintains the advantage of compressing, denoising and simplifying identification of clean endmember fluorescent spectra. However, HyU improves on the robustness of the analysis. The denoised signals are maintained in a hybrid phasor and wavelength domain, and therefore can be unmixed with a multitude of standard unmixing algorithms (
Unless otherwise stated, all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. They are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain.
As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible sub-ranges and combinations of sub-ranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as “up to,” “at least,” “greater than,” “less than,” and the like include the number recited and refer to ranges which can be subsequently broken down into sub-ranges as discussed herein. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 articles refers to groups having 1, 2, or 3 articles. Similarly, a group having 1-5 articles refers to groups having 1, 2, 3, 4, or 5 articles, and so forth.
While various aspects and implementations have been disclosed herein, other aspects and implementations will be apparent to those skilled in the art. The various aspects and implementations disclosed herein are for purposes of illustration and are not intended to be limiting.
All references cited herein, including but not limited to published and unpublished applications, patents, and literature references, are incorporated herein by reference for the subject matter referenced, and in their entirety and are hereby made a part of this specification. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
In this disclosure, the indefinite article “a” and phrases “one or more” and “at least one” are synonymous and mean “at least one”.
Relational terms such as “first” and “second” and the like may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between them. The terms “comprises,” “comprising,” and any other variation thereof when used in connection with a list of elements in the specification or claims are intended to indicate that the list is not exclusive and that other elements may be included. Similarly, an element preceded by an “a” or an “an” does not, without further constraints, preclude the existence of additional elements of the identical type.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
The phrase “means for” when used in a claim is intended to and should be interpreted to embrace the corresponding structures and materials that have been described and their equivalents. Similarly, the phrase “step for” when used in a claim is intended to and should be interpreted to embrace the corresponding acts that have been described and their equivalents. The absence of these phrases from a claim means that the claim is not intended to and should not be interpreted to be limited to these corresponding structures, materials, or acts, or to their equivalents.
In at least some of the previously described implementations, one or more elements used in an implementation can interchangeably be used in another implementation unless such a replacement is not technically feasible. It will be appreciated by those skilled in the art that various other omissions, additions and modifications may be made to the methods and structures described herein without departing from the scope of the claimed subject matter. All such modifications and changes are intended to fall within the scope of the disclosed subject matter.
The scope of protection is limited solely by the claims that now follow. That scope is intended and should be interpreted to be as broad as is consistent with the ordinary meaning of the language that is used in the claims when interpreted in light of this specification and the prosecution history that follows, except where specific meanings have been set forth, and to encompass all structural and functional equivalents.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to implementations containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). 
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
None of the claims are intended to embrace subject matter that fails to satisfy the requirement of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended coverage of such subject matter is hereby disclaimed. Except as just stated in this paragraph, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.
The abstract is provided to help the reader quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, various features in the foregoing detailed description are grouped together in various implementations to streamline the disclosure. This method of disclosure should not be interpreted as requiring claimed implementations to require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed implementation. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
This application is based upon and claims priority to U.S. provisional patent application 63/247,688, entitled “A Hyperspectral Imaging System with Hybrid Unmixing,” filed Sep. 23, 2021, attorney docket number AMISC.022PR. The entire content of the aforementioned provisional patent application is incorporated herein by reference.
This invention was made with government support under Grant No. DGE-1842487, awarded by the National Science Foundation Graduate Research Fellowship, and under Grant No. PR150666, awarded by the Department of Defense. The government has certain rights in the invention.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/US2022/076883 | 9/22/2022 | WO | |
| Number | Date | Country |
|---|---|---|
| 63247688 | Sep 2021 | US |