Pathologists and biomedical researchers often need to image centimeter-scale areas at micrometer resolution (e.g., whole-slide imaging). The most common approach to whole-slide imaging uses a scanning microscope that moves the specimen and/or the imaging lens while acquiring a sequence of images over time, which are then combined into a composite image of the entire specimen. Unfortunately, because the constituent images are captured at different times, important dynamic information can be impossible to obtain, such as the movement of organisms or of collections of cells within the specimen.
Other systems have attempted to resolve these issues by improving primary lenses and image sensors. However, the prohibitive cost of a primary lens that can resolve a centimeter-scale area at micrometer resolution (e.g., a lens having gigapixel capabilities), together with the prohibitive cost of a digital image sensor capable of capturing micrometer resolution over centimeter-scale areas, has prevented widespread adoption. Furthermore, in addition to being cost prohibitive, the primary lenses and digital image sensors in these systems still do not provide gigapixel capabilities (e.g., they are an order of magnitude below what traditional sequential imaging with existing scanning microscopes achieves), and these digital sensors provide very low imaging frame rates. Moreover, none of the systems in use today can capture 3D image information or easily record multiple fluorescence and/or polarization channels in a single snapshot.
Micro-camera arrays have been used in re-imaging techniques that attempt to resolve some of these issues. However, such systems may not be able to capture 3D images, their frame rates may be inadequate for video imaging, and some require a curved intermediate plane to avoid the spherical aberration introduced by the primary lens. The resulting curved micro-camera array requires expensive opto-mechanical calibration and precludes arranging all of the image sensors on a single printed circuit board (PCB). In other micro-camera arrays, the micro-cameras image the specimen directly (e.g., without a primary lens to achieve the required high resolution) and are therefore not re-imaging systems.
Re-imaging microscopy systems and methods that provide 3D video imaging capabilities at high resolution (e.g., micrometer resolution) and relatively low cost are provided. By using a macro-camera lens as the primary lens (e.g., a single lens), the space-bandwidth product (SBP) of the re-imaging microscopy systems disclosed herein can reach hundreds of megapixels up to gigapixels. Likewise, with a planar array of micro-cameras focused towards infinity, the cumulative pixel count of the digital sensors can reach gigapixel scale. Furthermore, because the portion of the target area captured by each micro-camera of the planar micro-camera array overlaps the portions captured by at least two other micro-cameras in any direction (e.g., with 67% or more overlap in any direction), the disclosed re-imaging microscopy systems enable 3D video imaging, multi-channel fluorescence, and/or polarimetry video functionality.
A microscopy system includes a planar array of micro-cameras with at least three micro-cameras of the planar array of micro-cameras each capturing a unique angular distribution of light reflected from a corresponding portion of a target area. The corresponding portions of the target area for the at least three micro-cameras contain an overlapping area of the target area. The microscopy system further includes a primary lens disposed in a path of the light between the planar array of micro-cameras and the target area.
In some cases, the microscopy system is configured to generate a 3D image from the captured unique angular distribution of light reflected from the corresponding portions of the target area of the at least three micro-cameras of the planar array of micro-cameras. In some cases, the at least three micro-cameras of the planar array of micro-cameras are at least nine micro-cameras of the planar array of micro-cameras. In some cases, the at least three micro-cameras of the planar array of micro-cameras are at least forty-eight micro-cameras of the planar array of micro-cameras.
In some cases, for each micro-camera of the planar array of micro-cameras, an overlap amount of the corresponding portion of the target area is at least 67% in any direction. In some cases, for each micro-camera of the planar array of micro-cameras, an overlap amount of the corresponding portion of the target area is at least 90% in any direction. In some cases, each micro-camera of the planar array of micro-cameras has a frame rate of at least twenty-four frames per second. In some cases, each micro-camera of the planar array of micro-cameras includes an aperture, wherein the unique angular distribution of light reflected from the portion of the target area for each micro-camera of the planar array of micro-cameras is determined based on the aperture of that micro-camera. In some cases, the primary lens is located one focal length away from the target area.
In some cases, the microscopy system further includes at least one filter on at least one micro-camera of the planar array of micro-cameras. In some cases, the at least one filter is an emission filter that selectively passes a range of wavelengths of light. In some cases, the at least one filter comprises at least two filters that selectively pass different ranges of wavelengths of light. In some cases, the at least one filter is a polarizing filter.
In some cases, the microscopy system further includes an illumination source configured to provide light from a plurality of directions to the target area. In some cases, the illumination source is further configured to provide light from a single direction of the plurality of directions at a time and the planar array of micro-cameras captures an image of the target area for each of the plurality of directions.
A method of microscopy imaging includes directing light to a target area; and simultaneously capturing a first set of images of the target area while the light illuminates the target area via a planar array of micro-cameras that are each configured to capture a unique angular distribution of the light reflected from a corresponding portion of the target area that travels through a primary lens. The corresponding portions for at least three micro-cameras of the planar array of micro-cameras contain an overlapping area of the target area. A different image of the first set of images is simultaneously captured by each micro-camera of the planar array.
In some cases, the method further includes generating a first composite image by stitching the first set of images together, generating a first height map using the first set of images, and generating a first 3D tomographic image by merging the first composite image and the first height map. In some cases, the method further includes capturing at least twenty-four sets of simultaneous images per second of the target area while the light illuminates the target area via the planar array of micro-cameras and generating a composite image video feed of the target area by stitching each set of images of the at least twenty-four sets of simultaneous images per second together to create at least twenty-four composite images per second. In some cases, the method further includes generating a height map video feed of the target area from the at least twenty-four sets of simultaneous images per second and generating a 3D tomographic video feed by merging the composite image video feed and the height map video feed. In some cases, the method further includes capturing at least twenty-four sets of simultaneous images per second of the target area while the light illuminates the target area via the planar array of micro-cameras and generating a 3D tomographic video feed from the at least twenty-four sets of simultaneous images per second.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Re-imaging microscopy systems and methods that provide 3D video imaging capabilities at high resolution (e.g., micrometer resolution) and relatively low cost are provided. By using a macro-camera lens as the primary lens (e.g., a single lens), the space-bandwidth product (SBP) of the re-imaging microscopy systems disclosed herein can reach hundreds of megapixels up to gigapixels. Likewise, with a planar array of micro-cameras focused towards infinity, the cumulative pixel count of the digital sensors can reach gigapixel scale. Furthermore, because the portion of the target area captured by each micro-camera of the planar micro-camera array overlaps the portions captured by at least two other micro-cameras in any direction (e.g., with 67% or more overlap in any direction), the disclosed re-imaging microscopy systems enable 3D video imaging, multi-channel fluorescence, and/or polarimetry video functionality.
There are numerous possible applications for the disclosed re-imaging microscopy system. One non-limiting example application is in vivo model organism imaging. By offering a large field of view and high resolution, the disclosed re-imaging microscopy system can observe organisms at the cellular level as they move unconstrained within a specimen. The re-imaging microscopy system can similarly be used in in vitro cell-culture imaging experiments. Other examples include whole-slide imaging in pathology, cytopathology, and hematology, examining large tissue sections, and large field of view light-sheet imaging experiments. The re-imaging microscopy system can also employ optimized illumination patterns, for example using a convolutional neural network (CNN) or other machine learning algorithms.
In certain embodiments, re-imaging microscopy systems and methods that provide a large field of view (e.g., centimeter scale areas) at high resolution (e.g., micrometer resolution) at a relatively low cost are provided via a planar array of micro-cameras having a field of view at an intermediate plane. These embodiments can also include a field of view of each micro-camera of the planar micro-camera array overlapping at least one other micro-camera's field of view at the intermediate plane in a direction (e.g., with the overlapping being 50% or more), enabling snapshot 3D imaging, multi-channel fluorescence and/or polarimetry functionality.
In one configuration, there is a relatively small overlap amount of the field of view for each micro-camera in the X direction, which can be useful for creating the composite image. However, overlap in the X direction is not required for creating the composite image, although there should not be a gap in any direction of the overlapping fields of view 230. It should be understood that although shown as rectangular, the field of view of each micro-camera may have other shapes, including but not limited to circular, ovular, and square shapes; the functionality described herein as requiring two images of the same portion of a composite image can be realized by capturing the same portion of the composite image with at least two micro-cameras of the micro-camera array, regardless of the shape of the field of view. In this case, an 8×12 micro-camera array is used to create the overlapping fields of view 230; however, in some cases, a micro-camera array as small as 2×2 can create an overlapping field of view, and in other cases, the micro-camera array may be as large as 50×50. It should be understood that embodiments may include any number of micro-cameras in the micro-camera array between the 2×2 embodiment and the 50×50 embodiment. In addition, as micro-camera technology reaches smaller footprints, it may be possible to use an array larger than 50×50.
In some cases, the intermediate platform 312 includes a fiber bundle array (e.g., as described in further detail below).
In some cases, a telecentric lens can be used as the primary lens 302, such that light at the intermediate plane 306 formed by the image-side telecentric primary lens 302 aligns with the optical axis of each micro-camera of the planar array of micro-cameras 304, thus producing an image without vignetting (e.g., vignetting that may occur with primary lens 102 of re-imaging system 100). Unfortunately, it can be expensive and challenging to create a primary lens with both a high SBP and image-side telecentricity. Therefore, a non-telecentric lens can be used when an intermediate platform 312 is included to re-direct the light to travel primarily along the optical axis of each micro-camera of the planar array of micro-cameras 304, each having its field of view at the corresponding portion of the intermediate platform 312.
The length of each (vertically aligned) fiber in the fiber bundle array 322 may be anywhere within the range of one centimeter to five centimeters, which can correspond to the height/thickness (H) of the fiber bundle array (e.g., because the individual fibers are aligned vertically). The width (W) (e.g., in an X direction) and length (L) (e.g., in a Y direction) of the fiber bundle array 322 can be any value within the range of five centimeters by five centimeters to thirty centimeters by thirty centimeters. It should be understood that although shown as square, the width and length dimensions of an intermediate platform may be altered to provide other shapes, including but not limited to circular, ovular, and rectangular shapes, depending upon the desired application. Furthermore, an intermediate platform may be incorporated into any of the systems described herein.
In any case, the numerical aperture of each fiber should be larger than the numerical aperture of the re-imaging system, and the fiber size should be finer than the resolution of the re-imaging system. Although glass fibers are used in the examples described herein, other fiber types may also be suitable.
Light filters may be included on at least one micro-camera of the planar array of micro-cameras 404. A portion 410 of the planar array of micro-cameras 404 is enlarged to illustrate this concept. As seen in the portion 410 of the planar array of micro-cameras, a first micro-camera 412 aligned in the X direction can include a first type of filter 414, a second micro-camera 416 aligned in the X direction can include a second type of filter 418, and a third micro-camera 420 aligned in the X direction can include the first type of filter 414. Although not illustrated, a fourth micro-camera aligned in the X direction can include the second type of filter 418, such that an alternating pattern of filter types is used in the planar array of micro-cameras 404. In some cases, patterns of unfiltered micro-cameras and one or more types of filters are used in the planar array of micro-cameras 404.
In some cases, other patterns of filter types may be used. For example, three types of filters, where each type repeats every third micro-camera in a direction, may be used. In some cases, four types of filters, where each type repeats every fourth micro-camera in a direction, may be used. In some cases, four types of filters in a 2×2 grid (e.g., with one type of filter for each micro-camera in the grid) may be used. Depending on the pattern used, the required overlap may increase or decrease, as quantified in the sketch below. For example, to obtain simultaneous composite images for each type of filter using an alternating pattern of two different types of filters (e.g., one composite image for each type of filter), at least 50% overlap is required for those micro-cameras' fields of view at the intermediate plane in the direction of the alternating pattern (e.g., the X direction in the portion 410 example). As another example, to obtain simultaneous composite images for each type of filter using a pattern where each type of filter repeats every third micro-camera in a direction, at least 67% overlap is required in that direction. As another example, for a pattern where each type of filter repeats every fourth micro-camera in a direction, at least 75% overlap is required in that direction. As yet another example, to obtain simultaneous composite images for each type of filter using a 2×2 grid pattern, at least 50% overlap is required for those micro-cameras' fields of view at the intermediate plane in both the X and Y directions.
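The required overlap follows from the pattern period: with N filter types repeating every N micro-cameras, same-filter fields of view sit N positions apart, so each must overlap its neighbor by at least (N−1)/N. A minimal Python sketch of this rule (the function name is illustrative, not from the original disclosure):

```python
def min_overlap_fraction(pattern_period: int) -> float:
    """Minimum field-of-view overlap along the pattern direction so that
    every point of the composite image is seen through each filter type."""
    # Same-filter micro-cameras sit pattern_period positions apart, so each
    # field of view must overlap its neighbor by (N - 1)/N to leave no gaps.
    return (pattern_period - 1) / pattern_period

# Matches the examples above: 2 -> 50%, 3 -> ~67%, 4 -> 75%.
for period in (2, 3, 4):
    print(period, min_overlap_fraction(period))
```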
By using patterns of different types of light filters, multi-channel imaging, multi-channel fluorescence imaging, and/or polarization imaging can be achieved with the re-imaging microscopy system 400. For example, fluorescence imaging requires one or more excitation illumination sources to illuminate the target area 408 and cause it to emit fluorescent light. For dual-channel fluorescence imaging, at least 50% overlap provides an extra channel for dual-labeling experiments. Using an alternating pattern of two different types of filters, the re-imaging microscopy system 400 can simultaneously capture signals from dual-labeled living cells within the specimen/target area 408. As another example, polarization imaging typically uses polarized illumination, although the illumination source itself need not emit polarized light. In some cases, the illumination source can be white light (or some color of light) or light from LEDs that passes through an analyzer/polarizer placed between the light source and the target area. In some cases, polarizing filters (e.g., filters 414, 418) can be positioned in front of micro-cameras (e.g., 412, 416, 420) of the planar array of micro-cameras 404. For dual-polarization imaging, the re-imaging microscopy system 400 can measure polarization in one snapshot by inserting a different analyzer in front of adjacent lenses in the lens array, with a polarizer before the target area (e.g., between the illumination source and the specimen), from which the birefringence of the target area can be reconstructed.
In some cases, one or more types of filters used are emission filters. In some cases, one or more types of filters used are fluorescence emission filters. In some cases, the types of emission filters and/or fluorescence emission filters include a red light filter that selectively passes wavelengths of red light, a green light filter that selectively passes wavelengths of green light, a blue light filter that selectively passes wavelengths of blue light, and/or a yellow light filter that selectively passes wavelengths of yellow light. In some cases, at least two filters that selectively pass different ranges of wavelengths of light are arranged in a pattern that includes a red light filter and a green light filter. In some cases, the at least two filters in a pattern include any of the types of filters (e.g., including polarizing filters) described herein.
In some cases, light is introduced to the target area by at least one illumination source coupled to a moveable arm. In some cases, the moveable arm can be moved by a user. In some cases, the moveable arm can be moved by servo motors that receive control signals from a system controller. In some cases, light is introduced to the target area through an aperture of a plurality of apertures positioned between the illumination source and the target area. For example, a dark box including a plurality of apertures may be positioned between the illumination source and the target area and be configured to open one aperture of the plurality of apertures at a time (e.g., manually or via a servo motor receiving signals from a system controller) to provide light from several different directions (with light from one direction at a time).
It should be understood that, for an illumination design 510 for phase contrast imaging, more or fewer than three different directions of light (and associated illumination sources and condenser lenses to enable light from that number of directions) may be included. In some cases, an illumination design 510 for phase contrast imaging may use light from a range of two to ten different directions.
Following a light path generated by the illumination source 610, light passes through the three condenser lenses 616, 618, 620 to illuminate the specimen 612. Light from the specimen 612 then passes through the primary lens 602 and is redirected off the surface mirror 614 to the fiber bundle array 606. The planar array of micro-cameras 604 can then capture the composite image formed on the backside of the fiber bundle array 606. The surface mirror 614 is included to provide the distance between the primary lens 602 and the fiber bundle array 606 needed for an image of the specimen to be focused at the intermediate plane 608, while keeping the re-imaging microscopy system 600 suitable for ergonomic use (e.g., allowing the re-imaging microscopy system 600 to remain compact).
In some cases, the method 700 further includes generating (706) a first composite image by stitching the first set of images together. In some cases, the method (700) further includes simultaneously capturing (708) a second set of images of the target area while the light illuminates the target area via the planar array of micro-cameras and generating (710) a second composite image by stitching the second set of images together. In some cases, the first composite image and the second composite image are compared/manipulated to form a single final composite image. In some cases, the method 700 further includes generating (712) a phase contrast image using at least the first composite image and the second composite image, wherein the light illuminates the target area from a different direction for the first set of images than the light that illuminates the target area for the second set of images.
In some cases, simultaneously capturing (708) the second set of images of the target area while the light illuminates the target area via the planar array of micro-cameras occurs at the same time as simultaneously capturing (704) a first set of images of the target area while the light illuminates the target area via a planar array of micro-cameras. Indeed, the planar array of micro-cameras may include alternating light filters that selectively pass different ranges of wavelengths of light, allowing for the simultaneous capture (704, 708) of two sets of images (or more, in cases where more than two different types of light filters are used by the micro-camera array, as described above).
In some cases, simultaneously capturing (708) the second set of images of the target area while the light illuminates the target area via the planar array of micro-cameras occurs at a different time than simultaneously capturing (704) a first set of images of the target area while the light illuminates the target area via a planar array of micro-cameras. For example, as described above with respect to generating (712) the phase contrast image, different sets of images can be captured (704, 708) utilizing light that illuminates the target area from different directions. Because the two (or more) sets of images are captured (704, 708) with light arriving from different directions, the capture of these different sets of images necessarily occurs at different times.
Generating (706, 710) composite images can be achieved using an image stitching algorithm. The stitching algorithm can employ feature-based detection, Fourier phase alignment, direct pixel-to-pixel comparison based on a gradient descent method, and/or a viewpoint correction method. Denoising methods can also be applied depending on the type of intermediate platform used (if any). General-purpose options include BM3D and machine-learning methods (e.g., noise2noise and/or noise2void), which allow training without a clean dataset. In some cases, depending on the type of intermediate platform used (if any), it is possible to calculate the transmittance of the intermediate platform and computationally remove its structural pattern. In some cases, the specimen can be shifted to obtain the general distribution of the pattern profile and remove the fixed pattern noise.
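As an illustration of the Fourier phase alignment option mentioned above, the relative shift between two overlapping micro-camera tiles can be estimated by phase correlation. The following is a minimal sketch, assuming grayscale tiles stored as NumPy arrays; the function name is illustrative:

```python
import numpy as np

def phase_correlation_shift(tile_a: np.ndarray, tile_b: np.ndarray):
    """Estimate the integer (dy, dx) translation between two overlapping
    grayscale tiles via Fourier phase correlation."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    # Normalized cross-power spectrum; epsilon avoids division by zero.
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12
    # The inverse transform peaks at the relative shift between the tiles.
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Convert wrap-around peak indices to signed shifts.
    dy, dx = (p if p <= s // 2 else p - s
              for p, s in zip(peak, correlation.shape))
    return dy, dx
```

In practice, a shift like this would be estimated for each pair of adjacent tiles, and the tiles blended in their overlap regions to form the composite image.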
Communications interface 880 can include wired or wireless interfaces for communicating with a system controller such as the system controllers described herein.
The inventors conducted an experiment following an implementation similar to the re-imaging microscopy systems described above.
A fiber bundle array used in the experiment included a thin, large plate of fused glass single-mode fibers to account for non-telecentricity of the primary lens. The entrance NA of each micro-camera lens was 0.032 while the NA of the primary lens was 0.36, which exceeds the cumulative entrance NA of the micro-camera array. A 6 μm fiber faceplate (SCHOTT, glass type: 47A) that is 5 mm thick was used. Four of these faceplates, each being 8 cm×12 cm in size, were placed directly beside one another to cover the intermediate image.
Each micro-camera simultaneously captured a de-magnified image of a unique portion of the intermediate plane. Stitching those images together produced a composite image. Using this design setup, the half-pitch resolution of the planar array of micro-cameras was 9 μm (that is, when directly imaging a target at the intermediate image plane without a primary objective lens). The planar array of micro-cameras was placed 150 mm away from the intermediate plane with around 0.14 magnification. For the whole system with magnification of 3.81, the re-imaging microscopy system obtained a 2.2 μm maximum two-point resolution.
While the re-imaging microscopy system provided high-power Köhler illumination via a single large LED (3 W) combined with multiple condenser lenses, vignetting effects introduced by the primary lens and fiber bundle array led to a fixed intensity fall-off towards the edges of the composite field of view. To address this issue, the inventors used a pre-captured image to correct the vignetting and other non-even illumination effects. By assuming the uneven illumination is a low spatial frequency effect, the inventors first convolved the background image with a Gaussian kernel to create a reference image and then divided any subsequent images of the specimen by the illumination-correction reference.
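A minimal sketch of this illumination correction, assuming images stored as NumPy arrays (the Gaussian width sigma is an illustrative parameter, not a value reported from the experiment):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(image: np.ndarray, background: np.ndarray,
                         sigma: float = 50.0) -> np.ndarray:
    """Divide out low-spatial-frequency illumination variation using a
    Gaussian-smoothed, pre-captured background image as the reference."""
    reference = gaussian_filter(background.astype(np.float64), sigma=sigma)
    reference /= reference.mean()          # preserve overall brightness
    return image / (reference + 1e-12)     # epsilon avoids division by zero
```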
Another issue is the introduction of a fixed, high-spatial-frequency modulation pattern by the fiber bundle array (6 μm average diameter per fiber): although its fibers are smaller than the re-imaging microscopy system's resolution (9 μm half-pitch) at the intermediate plane, they still lead to a speckle-like modulation in the captured images.
To assess the resolution of the re-imaging microscopy system, the inventors translated a USAF resolution target (Ready Optics) via a manual 3-axis stage to different areas of the entire field of view. At the center of the field of view of the central micro-camera in the planar array of micro-cameras, the inventors measured a 2.2 μm two-point resolution, which degrades towards the edges of that micro-camera's field of view. At the center of the field of view of an edge micro-camera of the planar array of micro-cameras, the inventors measured a 3.1 μm two-point resolution, which likewise degrades towards the edges of that micro-camera's field of view.
To explore the flexibility of the re-imaging microscopy system, the inventors modified the illumination unit to enable Differential Phase Contrast (DPC) imaging. After magnifying the 3 mm active-area LED source by 3× via a first 1-inch, 16 mm focal-length condenser (ACL25416U, Thorlabs) and two subsequent 2.95-inch, 60 mm focal-length condensers (ACL7560U, Thorlabs), the inventors inserted an absorption mask at the illumination source conjugate plane (between the two 60 mm focal-length condensers) for DPC modulation. In this design, the specimen is located at the focus of the second 60 mm condenser, which is the Fourier plane of the DPC mask. To first quantitatively evaluate performance, the inventors imaged a large collection of 10 μm diameter polystyrene microspheres (refractive index n=1.59) immersed in n=1.58 oil. To provide DPC modulation, the inventors inserted 4 unique half-circle masks oriented along the 4 cardinal directions and captured 4 snapshots, from which the phase contrast maps were computed.
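Each pair of snapshots captured with complementary half-circle masks can be combined into a phase contrast map as a normalized difference, the standard DPC computation; a minimal sketch follows (the function name is illustrative):

```python
import numpy as np

def dpc_map(i_pos: np.ndarray, i_neg: np.ndarray) -> np.ndarray:
    """Differential phase contrast along one axis from two images captured
    under complementary half-circle illumination masks."""
    total = i_pos + i_neg
    # The normalized difference is approximately proportional to the
    # phase gradient of the specimen along the mask orientation.
    return (i_pos - i_neg) / (total + 1e-12)

# With masks along the 4 cardinal directions, two orthogonal maps result:
# dpc_map(top, bottom) and dpc_map(left, right).
```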
Next, the inventors tested the effectiveness of the re-imaging microscopy system by capturing images of buccal epithelial cell samples, which are mostly transparent and thus exhibit minimal structural contrast under a standard brightfield microscope. For this experiment, the inventors employed ring-shaped DPC masks with 0.7 outer NA and 0.5 inner NA. The inventors fixed the buccal epithelial cells on a glass slide with 100% ethanol and dried them under a cover slip, producing a low-contrast bright-field image across the whole field of view.
As a first demonstration of snapshot multi-modal acquisition with a multi-scale microscope, the inventors configured the re-imaging microscopy system to acquire dual-channel fluorescence video. Along the X direction, which exhibited more than 50% field-of-view overlap between adjacent cameras, the inventors alternated red and green emission filters over successive micro-camera columns. Hence, the full field of view (except for the corner micro-cameras) was covered by both red and green filters, allowing the red and green fluorescence channels to be imaged simultaneously. For the proof of concept, the inventors imaged a mixture of 10 μm red (580 nm/605 nm) and 6 μm yellow-green (441 nm/486 nm) microspheres. The inventors demonstrated the dual-channel fluorescence imaging using customized red filters centered at 610 nm (Chroma, ET610/75m) and green filters centered at 510 nm (Chroma, ET510/20m). For fluorescence excitation, the inventors used a custom-built high-power 470 nm blue LED (Chanzon, 100 W Blue) with a short-pass filter (Thorlabs, FES500) inserted at the output. The filters were mounted on 4×6 3D-printed frames and attached to the system via magnets. The inventors stitched the two channels together separately using PTGui to produce the dual-channel result.
In certain embodiments, 3D image and/or video capabilities at high resolution (e.g., micrometer resolution) and relatively low cost are provided. Specifically, in these embodiments, no intermediate image plane is utilized, and the micro-cameras of the planar array of micro-cameras are focused to infinity. This results in a smaller overall field of view for the planar array of micro-cameras, but with more overlap between the individual cameras in the array. This overlap enables enhanced 3D imaging and/or 3D video imaging as well as additional filtering capabilities, such as patterns of filters on micro-cameras that filter specific wavelengths of light and/or polarizing filters, all taken from a simultaneous set of images (with each image of the set being taken by a micro-camera of the planar array of micro-cameras) of a target area/portion of a target area.
In some cases, corresponding portions of the target area 1306 for at least ten micro-cameras of the planar array of micro-cameras 1304 contain an overlapping area of the target area 1306. In some cases, corresponding portions of the target area 1306 for at least nine micro-cameras of the planar array of micro-cameras 1304 contain an overlapping area of the target area 1306. In some cases, corresponding portions of the target area 1306 for at least forty-eight micro-cameras of the planar array of micro-cameras 1304 contain an overlapping area of the target area 1306.
Advantageously, 3D imaging capabilities are realized by the re-imaging microscopy system. In some cases, a 3D image includes a 3D color image in which each pixel has an associated height, of the form I(r, g, b, h), where r, g, b are the color values and h is the height value. In some cases, a 3D image includes a brightness map of the form I(x, y, tx, ty), where x, y are the spatial coordinates of the point of interest and tx, ty are the angular coordinates.
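As a concrete illustration (the array shapes are arbitrary, not from the disclosure), the I(r, g, b, h) representation can be stored as a four-channel array:

```python
import numpy as np

# Illustrative shapes only: a 3D color image I(r, g, b, h) stores color
# values and a height value for each pixel.
rgb = np.zeros((1024, 1024, 3), dtype=np.float32)   # r, g, b per pixel
height = np.zeros((1024, 1024), dtype=np.float32)   # h per pixel
image_3d = np.dstack([rgb, height])                 # shape (1024, 1024, 4)
```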
In some cases, one or more different types of light filters are used by the planar array of micro-cameras 1304, as described above.
The primary lens 1302 is disposed in a path of light 1308 between the planar array of micro-cameras 1304 and the target area 1306 (e.g., a specimen). The primary lens 1302 can be a high throughput lens (such as megapixel, gigapixel, or self-designed lens). In some cases, the primary lens 1302 is located one focal length away from the target area 1306.
In certain embodiments, as opposed to a relatively small amount of overlap in the captured portions of the images between cameras (e.g., as described above), a substantially larger overlap between the corresponding portions of the target area captured by the micro-cameras is used.
In some cases, the at least three micro-cameras of the planar array of micro-cameras are at least nine micro-cameras of the planar array of micro-cameras. In some cases, the at least three micro-cameras of the planar array of micro-cameras are at least forty-eight micro-cameras of the planar array of micro-cameras. In some cases, for each micro-camera of the planar array of micro-cameras, an overlap amount (e.g., measurement and/or percentage of an overlapping area) of the corresponding portion of the target area is at least 67% in any direction. In some cases, for each micro-camera of the planar array of micro-cameras, an overlap amount of the corresponding portion of the target area is at least 90% in any direction.
The micro-camera 1410 captures the unique angular distribution of light 1412 reflected from a portion of the target area 1414 based on the aperture of the micro-camera 1410. The position of the micro-camera 1410 on the planar array of micro-cameras also affects the unique angular distribution of light 1412 that is captured. However, because the micro-camera 1410 is focused to infinity in this embodiment (as are the other micro-cameras in the micro-camera array), the difference in the portion of the target area 1414 that is captured is minimal (e.g., relative to the embodiments described above in which the micro-cameras of the micro-camera array are focused on an intermediate image plane).
In some cases, the unique angular distribution of light reflected from the portion of the target area (e.g., the shrimp claw 1602) for each micro-camera of the planar array of micro-cameras is determined based on the aperture of that micro-camera. In some cases, the aperture is the same for each micro-camera. In these cases, the variation between the unique angular distribution of light reflected from the portion of the target area (e.g., the shrimp claw 1602) for each micro-camera of the planar array of micro-cameras is determined based only on the positioning (e.g., in the X and/or Y direction) of that micro-camera on the planar array of micro-cameras.
In some cases, the method 1800 further includes generating (1806) a first composite image by stitching the first set of images 1718 together, generating (1808) a first height map using the first set of images 1718, and generating (1810) a first 3D tomographic image 1724 by merging the first composite image and the first height map. In some cases, such as those that utilize a deconvolution algorithm, the method 1800 alternatively includes generating (1810) a first 3D tomographic image 1724 from the first set of images 1718. It should be understood that the use of the deconvolution algorithm can eliminate the generating (1806) and generating (1808) steps.
In some cases, the method 1800 further includes 3D video imaging capabilities. Specifically, the method 1800 may further include capturing (1812), via the planar array of micro-cameras 1712, at least twelve sets of simultaneous images per second of the target area 1716 while the light 1714 illuminates the target area 1716, generating (1814) a composite image video feed of the target area by stitching each set of images of the at least twelve sets of simultaneous images per second together to create at least twelve composite images per second, generating (1816) a height map video feed of the target area from the at least twelve sets of simultaneous images per second, and generating (1818) a 3D tomographic video feed of at least twelve 3D tomographic images per second by merging the composite video feed and the height map video feed. In some cases, such as those that utilize a deconvolution algorithm, the method 1800 alternatively further includes generating (1818) a 3D tomographic video feed of at least twelve 3D tomographic images per second from the at least twelve sets of simultaneous images per second. It should be understood that the use of the deconvolution algorithm can eliminate the generating (1814) and generating (1816) steps.
In some cases, the method 1800 further includes capturing (1812) at least twenty-four sets of simultaneous images per second of the target area 1716 while the light 1714 illuminates the target area 1716 via the planar array of micro-cameras 1712, generating (1814) a composite image video feed of the target area by stitching each set of images of the at least twenty-four sets of simultaneous images per second together to create at least twenty-four composite images per second, generating (1816) a height map video feed of the target area from the at least twenty-four sets of simultaneous images per second, and generating (1818) a 3D tomographic video feed of at least twenty-four 3D tomographic images per second by merging the composite video feed and the height map video feed. In some cases, such as those that utilize a deconvolution algorithm, the method 1800 alternatively further includes generating (1818) a 3D tomographic video feed of at least twenty-four 3D tomographic images per second from the at least twenty-four sets of simultaneous images per second.
In some cases, the method 1800 further includes capturing (1812) up to three-hundred sets of simultaneous images per second of the target area 1716 while the light 1714 illuminates the target area 1716 via the planar array of micro-cameras 1712, generating (1814) a composite image video feed of the target area by stitching each set of images of the up to three-hundred sets of simultaneous images per second together to create up to three-hundred composite images per second, generating (1816) a height map video feed of the target area from the up to three-hundred sets of simultaneous images per second, and generating (1818) a 3D tomographic video feed of up to three-hundred 3D tomographic images per second by merging the composite video feed and the height map video feed. In some cases, the 3D video feed can be generated (1818) and displayed in real time. In some cases, merging the composite video feed and the height map video feed includes merging corresponding (e.g., according to time) images from the composite video feed and the height map video feed. In some cases, such as those that utilize a deconvolution algorithm, the method 1800 alternatively further includes generating (1818) a 3D tomographic video feed of up to three-hundred 3D tomographic images per second from the up to three-hundred sets of simultaneous images per second.
In some cases, the generating (1810 and/or 1818) of a 3D image(s) includes the use of a deconvolution algorithm. In the deconvolution algorithm, iterative convolution and/or deconvolution is used to converge on a 3D image. Prior to running the deconvolution algorithm, point-spread functions (PSFs) must be calculated or measured for each elemental image from each plane in the volume reconstruction. To accomplish this, the volume is split into distinct planes based on the axial resolution of the system. For example, if the axial resolution is 5 μm and the depth of field for the system is 50 μm, the volume can be split into 10 planes spaced 5 μm apart. A PSF is generated from each of these planes to each micro-camera. For example, if there are 50 planes and 10 micro-cameras, a total of 50×10=500 PSFs would be required. Because the microscopy system is space-invariant, the PSFs can be converted into optical transfer functions (OTFs), which can be directly multiplied with the Fourier transforms of each elemental image to perform forward projection.
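The PSF-to-OTF conversion is a per-plane, per-camera Fourier transform. A minimal sketch, assuming centered PSFs stored in a NumPy array indexed by (plane, camera):

```python
import numpy as np

def psfs_to_otfs(psfs: np.ndarray) -> np.ndarray:
    """Convert a (planes, cameras, H, W) stack of centered PSFs into OTFs.

    ifftshift moves each PSF's center to the array origin so the resulting
    OTF applies no spurious translation during forward projection.
    """
    return np.fft.fft2(np.fft.ifftshift(psfs, axes=(-2, -1)))
```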
At each iteration of the deconvolution algorithm, the OTFs are used to project a “volume guess” for the 3D object into the image space. This image “guess” is then compared with the actual acquired image, and an error is computed for each pixel as Image/Guess. This “error image” is then back-projected into object space using the inverse OTFs to form a “volume error.” The “volume error” is multiplied pixel-wise with the current “volume guess” to form a new guess, which is used in the next iteration. This is repeated until a reasonable reconstruction is reached (e.g., until the error falls to a predetermined level).
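One iteration of this update scheme might be sketched as follows, under stated assumptions: monochrome images, one OTF per plane-camera pair (see psfs_to_otfs above), and back-projected errors averaged across cameras, an aggregation choice the description above does not fix.

```python
import numpy as np

def deconvolution_iteration(volume: np.ndarray, images: np.ndarray,
                            otfs: np.ndarray) -> np.ndarray:
    """One iterative update of the volume guess.

    volume: (P, H, W) current guess, one slice per reconstruction plane.
    images: (C, H, W) acquired elemental images, one per micro-camera.
    otfs:   (P, C, H, W) optical transfer functions.
    """
    n_planes, n_cams = otfs.shape[0], otfs.shape[1]
    error_acc = np.zeros_like(volume)
    for c in range(n_cams):
        # Forward projection: filter each plane by its OTF and sum at the sensor.
        guess = sum(np.fft.ifft2(np.fft.fft2(volume[p]) * otfs[p, c]).real
                    for p in range(n_planes))
        ratio = images[c] / (guess + 1e-12)   # per-pixel error, Image / Guess
        for p in range(n_planes):
            # Back-projection with the conjugate (matched) OTF.
            error_acc[p] += np.fft.ifft2(np.fft.fft2(ratio)
                                         * np.conj(otfs[p, c])).real
    # Multiplicative update of the volume guess by the mean back-projected error.
    return volume * (error_acc / n_cams)
```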
In some cases, the generating (1808 and/or 1816) of a height map(s) includes the use of an algorithm that combines a ray optics model with a neural network. This algorithm computes height maps rather than a full volumetric reconstruction. However, it is an effective solution when the PSFs are not known, because the system parameters are estimated alongside the object reconstruction.
This algorithm uses a ray optics model to perform forward and backward projection between the image space and the object space. Because the PSFs are not known, the algorithm tracks several “camera” parameters for each elemental image, including the height, rotation, lateral offset, and focal length. For each iteration of the algorithm, the camera parameters are all updated as well as a “guess” at a flat version of the object/target area. The object/target area guess will show how all the elemental images could be stitched together, but does not contain 3D information. Several rounds of the algorithm are performed, with different resolutions and different weights on each parameter, until a reasonable guess is produced for both the object and the camera parameters.
Next, a 3D height map is generated (1808 and/or 1816) using a convolutional neural network (CNN). The original images are passed into the CNN, which produces a guess at the associated height map. This height map is used alongside the camera parameters and the object guess to perform the forward projection to image space and compute an error between the guessed images and the actual acquired images. This error can then be back-projected to update the object guess, and is also used to update the camera parameters and the CNN parameters. This process is performed iteratively until a final height map is generated.
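A skeleton of this joint optimization, sketched in PyTorch under several assumptions: the CNN architecture, the differentiable ray-optics forward model forward_project, and the mean-squared image error all stand in for details the description leaves open.

```python
import torch

def refine_height_map(cnn: torch.nn.Module, images: torch.Tensor,
                      camera_params: torch.Tensor, object_guess: torch.Tensor,
                      forward_project, n_iters: int = 200) -> torch.Tensor:
    """Iteratively refine the CNN height-map guess, the camera parameters,
    and the flat object guess by re-rendering the elemental images and
    back-propagating the image-space error. camera_params and object_guess
    are assumed to be tensors created with requires_grad=True."""
    params = list(cnn.parameters()) + [camera_params, object_guess]
    optimizer = torch.optim.Adam(params, lr=1e-3)
    for _ in range(n_iters):
        optimizer.zero_grad()
        height = cnn(images)                              # height-map guess
        rendered = forward_project(object_guess, height, camera_params)
        loss = torch.mean((rendered - images) ** 2)       # image-space error
        loss.backward()                                   # back-project the error
        optimizer.step()                                  # update all parameters
    return cnn(images)
```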
In some cases, the 3D height maps and composite images are each displayed separately and/or merged into a 3D tomographic map that is displayed in a 3D video feed. This 3D video feed can be generated and displayed in real time.
Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. For example, if a concentration range is stated as 1% to 50%, it is intended that values such as 2% to 40%, 10% to 30%, or 1% to 3%, etc., are expressly enumerated in this specification. These are only examples of what is specifically intended, and all possible combinations of numerical values between and including the lowest value and the highest value enumerated are to be considered to be expressly stated in this disclosure.
Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
This application is a continuation-in-part of U.S. application Ser. No. 17/675,538, filed Feb. 18, 2022, which claims the benefit of U.S. Provisional Application Ser. No. 63/150,671, filed Feb. 18, 2021.