The present invention relates to a microscopic imaging method and apparatus.
Fast volumetric imaging is an increasingly important tool in many fields of biological and biomedical research. Traditional methods of 3D imaging, such as confocal or light-sheet microscopy, require the mechanical scanning of a sample to build a volume from multiple 0D, 1D or 2D measurements. This necessitates complex optical setups and results in slow acquisition times to image a 3D volume. The development of faster volumetric imaging methods has typically been focused on one of two outcomes: high throughput or high temporal resolution.
High-throughput imaging enables the characterisation of large populations of cells, allowing meaningful statistics to be derived from structural information. This can be achieved using microfluidics to flow cells through the field of view of the microscope. The movement of the sample is often used as an integral part of the imaging technique. As with all forms of imaging, a 2D representation of a 3D object has the potential to mislead.
It is an aim of the present invention to at least partially address some of the problems discussed above.
According to a first aspect of the invention there is provided a microscopic imaging method for three-dimensional imaging of an object, comprising: flowing a three-dimensional object through a microfluidics channel such that the object position is varied relative to an imaging optical axis; illuminating the object with an illumination optical system as the object flows through the microfluidics channel; and capturing light-field information of the illuminated object with an imaging optical system as the object flows through the microfluidics channel.
Optionally, the illuminating of the three-dimensional object is performed with a light sheet generated by the illumination optical system.
According to a second aspect of the invention, there is provided a microscopic imaging method comprising: illuminating a three-dimensional object with a light sheet generated by an illumination optical system and capturing light-field information of the illuminated object with an imaging optical system.
Optionally, an optical axis of the light sheet extends in a direction having a substantial component parallel to an imaging optical axis. Optionally, the optical axis of the light sheet is parallel to the imaging optical axis. Alternatively, the optical axis of the light sheet is tilted with respect to the imaging optical axis.
Optionally, the light sheet illuminates a substantially planar portion of the object. Optionally, at least two different substantially planar portions of the object are illuminated and the respective light fields thereof imaged.
Optionally, the light sheet illuminates the entirety of the object. Optionally, at least two different objects are illuminated and the respective light fields thereof imaged.
Optionally, the at least two different portions of the object, or at least two different objects are illuminated by varying the relative position of the light sheet and the object or objects.
Optionally, the object position is varied relative to an imaging optical axis. Optionally, the object position is varied by flowing the object or objects through a microfluidics channel. Optionally, the microfluidics channel forms part of a flow cytometer.
Optionally, the illumination position is varied relative to an imaging optical axis. Optionally, the light illumination position is varied by scanning the light sheet across the object or objects.
Optionally, the illumination optical system comprises a laser light source.
Optionally, the captured light field is a Fourier light field.
Optionally, the imaging optical system comprises a micro lens array arranged to focus images of the illuminated object from different viewing angles.
Optionally, the micro lens array is arranged at a back focal plane of the imaging optical system.
Optionally, an effective numerical aperture of the micro lenses in the micro lens array provides a depth of field of the imaging optical system that is at least as deep as the object.
Optionally, imaginary lines connecting the centres of the adjacent micro lenses in the array form a grid and the micro lens array is arranged such that no line of the grid is parallel to the light sheet, with respect to the plane orthogonal to the optical axis.
Optionally, the micro lens array is segmented by different coloured filters.
Optionally, the micro lens array has an order of symmetry of three or more.
Optionally, the smallest dimension of the object is 100 μm or less, optionally 10 μm or less.
Optionally, the imaging optical system has a magnification of at least 10×, optionally in the range of 20× to 100×.
Optionally, the method further comprises processing the captured light-field information to generate a three-dimensional image of the object.
Optionally, the processing comprises a first step of generating one or more three-dimensional images corresponding to one or more different substantially planar portions of the object. Optionally, the processing further comprises a second step of combining a plurality of three-dimensional images corresponding to a plurality of substantially planar portions through the object to generate a composite three-dimensional image of the object.
Optionally, the method further comprises illuminating the three-dimensional object with one or more further light sheets generated by the illumination optical system and capturing light-field information of the object illuminated with the one or more further light sheets. Optionally, the light sheet and one or more further light sheets comprise different coloured light. Optionally, the light sheet and one or more further light sheets are translated or rotated relative to each other so as to reduce overlap.
Optionally, the method generates 2D images with respect to the flow direction without motion blur, by flowing cells at a rate sufficiently high that the light field integrates the whole object along the flow axis.
Optionally, the illumination may be pulsed to minimise motion blur.
Optionally, the illumination may be pulsed multiple times within one exposure of an imaging device to create multiple 3D measurements in one exposure.
According to a third aspect of the invention, there is provided a method of flow cytometry, comprising imaging cells using the imaging method according to the first aspect, e.g. when the object position is varied relative to an imaging optical axis by flowing the object or objects through a microfluidics channel forming part of a flow cytometer.
According to a fourth aspect of the invention, there is provided a method of sorting cells, comprising imaging cells using the imaging method according to the first aspect, analysing the images to identify one or more characteristics of the cells, and sorting the cells according to said characteristics.
According to a fifth aspect of the invention, there is provided a microscopic imaging apparatus comprising: a microfluidics channel through which a three-dimensional object is configured to flow such that the object position is varied relative to an imaging optical axis; an illumination optical system configured to generate a light sheet and illuminate a three-dimensional object with said light sheet as the object flows through the microfluidics channel; and an imaging optical system configured to capture light-field information of the illuminated object as the object flows through the microfluidics channel.
According to a sixth aspect of the invention, there is provided a microscopic imaging apparatus comprising: an illumination optical system configured to generate a light sheet and illuminate a three-dimensional object with said light sheet; and an imaging optical system configured to capture light-field information of the illuminated object.
Further features and advantages of the invention are described below by way of non-limiting examples and with reference to the accompanying drawings.
In some examples, an effective numerical aperture of the micro lenses in the micro lens array 4 may provide a depth of field of the imaging optical system 2 that is at least as deep as the object O. The effective numerical aperture may be defined as the proportion of the objective numerical aperture, back-projected from a micro lens focus to the object space.
A multiplicity of effective-numerical-aperture images, each translated in the back focal plane, may improve reconstruction of the image.
Depth of field (DOF) is inversely proportional to the square of the numerical aperture (NA):

DOF = nλ/NA²

where λ is the wavelength of emitted light and n is the refractive index of the medium between sample and microscope objective.
Extending the depth of field may require the loss of resolution, a smaller NA resulting in a larger diffraction limit. The effective NA (NAeff) of the micro lenses is given by the NA of a single micro lens, NAMLA, multiplied by the overall magnification of the system, which can be modified by changing the focal lengths of the lenses leading to the micro lens array.
In practice, changing the DOF involves changing the diameter of the back focal plane by changing the system magnification (the diameter of a single micro lens being fixed for a given micro lens array), using different combinations of lenses in the relay following the objective lens 3a of the imaging optical system 2.
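By way of non-limiting illustration, this trade-off can be sketched numerically. The following minimal Python sketch assumes the approximate relation DOF = nλ/NA² given above and treats the effective NA as the micro lens NA scaled by the overall magnification; the numerical values are hypothetical, not taken from the description:

```python
def depth_of_field(wavelength_um, refractive_index, na):
    """Approximate depth of field (um) from DOF = n * lambda / NA^2."""
    return refractive_index * wavelength_um / na ** 2


def effective_na(na_mla, magnification):
    """Effective NA of a micro lens back-projected to object space,
    assuming the stated relation: micro lens NA times system magnification."""
    return na_mla * magnification


# Hypothetical example: 0.52 um emission in water (n = 1.33)
na_eff = effective_na(na_mla=0.01, magnification=20)  # 0.2
dof = depth_of_field(0.52, 1.33, na_eff)              # about 17.3 um
print(na_eff, dof)
```

On these assumed numbers, the effective NA of 0.2 yields a depth of field of roughly 17 μm, deeper than a 15 μm cell, consistent with the requirement that the depth of field be at least as deep as the object O.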
The light-sheet LS may be formed by a substantially one-dimensional light beam, i.e. a light beam that forms a thin line of light. In other words the beam width may be substantially wider than the beam height. Although substantially planar, a light sheet LS has a finite width, albeit significantly smaller than the height and depth of the light sheet LS. Thus, the substantially planar light sheet LS may be a thin, three-dimensional, substantially sheet-shaped beam. The light sheet LS may be centred on a true plane in the optical axis. The width of the beam may change over the length of the imaging optical axis. The ratio of height to width may be at least 2, at least 4 or at least 10, for example. Sheet height and width may be determined by the FWHM of the beam in the orthogonal directions.
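By way of non-limiting illustration, the FWHM characterisation of the sheet mentioned above can be sketched as follows. The Python sketch below estimates FWHM from a 1D intensity profile by simple threshold counting; the Gaussian profile stands in for measured data and is purely hypothetical:

```python
import numpy as np


def fwhm(x, intensity):
    """Full width at half maximum of a 1D intensity profile,
    estimated by counting samples at or above half the peak."""
    half = intensity.max() / 2.0
    above = intensity >= half
    dx = x[1] - x[0]  # assumes uniform sampling
    return above.sum() * dx


# Hypothetical Gaussian beam profile: sigma = 1.0 um
x = np.linspace(-10, 10, 2001)
profile = np.exp(-x ** 2 / (2 * 1.0 ** 2))
print(fwhm(x, profile))  # about 2.35 (theory: 2.355 * sigma)
```

Measuring the FWHM along the two orthogonal directions of the beam cross-section in this way would give the sheet height and width, from which the height-to-width ratio can be checked.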
The light source 7 may be a laser light source. The beam shaping optics are configured to form the light sheet LS.
The arrangement of the beam shaping optics may be used to set the width of the light sheet. The beam shaping optics may comprise optical elements that expand or concentrate the beam in the width direction of the light sheet LS. The beam shaping optics may be adjustable, e.g. by varying the relative positions of two lenses, to set the width of the light sheet LS to a desired width. This tunability of light sheet width may allow imaging speeds to be varied. For example, imaging speeds can be faster if more of an object is captured by each image frame. However, increased imaging speed may come at the cost of spatial resolution or contrast.
As shown, the light sheet LS may pass through the microscope objective 3a to exit the illumination optical system 6 and the microscopic imaging apparatus 1. The light sheet LS may exit the microscope objective via a dielectric mirror 10 on the optical path of the imaging optical system 2. The dielectric mirror may reflect wavelengths of the illumination light toward the object O and transmit wavelengths emitted by the object O.
The light source 7 may be configured to emit light having a wavelength configured to excite molecules within the object O, which may in turn fluoresce. The light source 7 may emit light having a wavelength of less than 700 nm, and/or optionally greater than 400 nm, preferably in the range of 600 nm to 650 nm, e.g. around 638 nm.
The microscopic imaging system 1 may be configured to image an object O having a size of less than 100 μm, preferably in the range of 1 μm to 50 μm. For example, the object may have a size of around 15 μm. The object O may be a bacterial cell, a mammalian cell (e.g. a human or animal cell) or a cell aggregate such as an organoid or spheroid. The imaging optical system 2 may have a magnification of at least 10×, preferably in the range of 20× to 100×, depending on the choice of detector pixel pitch and objective lens.
The small diameter, and therefore NA, of each micro lens in the micro lens array 4 limits its resolving power when considered individually. This is a necessary trade-off, as the low NA of the micro lenses extends the depth of field. Preferably, the micro lenses of the array 4 have the depth of field required to image the entire object O at once, i.e. of at least the depth of the object (e.g. 15 μm).
At least two different substantially planar portions of the object O may be illuminated and the respective light fields thereof imaged. Different portions of the object O may be distinguished at least in part by being centred on different planes through the object O, though the portions themselves may overlap, due to the finite width of the light sheet LS.
Further, the at least two different portions of the object may be illuminated by varying the relative position of the illumination and the object O. For example, the object position may be varied relative to the imaging optical axis, and the illumination optical axis may be fixed relative to the imaging optical axis. In the same way, images of multiple objects may be captured.
In alternative examples, the illumination position may be varied relative to an imaging optical axis instead. For example, the illumination position may be varied by scanning illumination light across the object O. In other words, the illumination optical axis may be moved relative to the imaging optical axis.
Whether imaging an entire object in one frame or multiple frames corresponding to different slices through an object, the imaging method disclosed exploits the large collection angle of a high-NA objective with a micro lens array to produce an oblique perspective corresponding to a light sheet LS. Light emitted from the object O at high angles, i.e. at the edge of the cone of light captured by the objective, ends up at the outer edge of the back focal plane. The outer micro lenses therefore produce the most oblique perspectives, allowing the light sheet LS to be viewed from the side.
The captured light-field information may be processed to generate a three-dimensional image of the object O, or a portion of an object. The light sheet illumination provides a direct mapping from lateral coordinates in an image to axial coordinates of the object O, allowing reconstruction of the illuminated portion of the object O from one or more perspective views.
The off-axis perspectives, given by different points in the back focal plane BFP, may substantially correspond to a skew transformation of the object O. The imaging optical system 2 may be orthographic, meaning that there may be substantially no parallax or change in apparent size with depth within a given view. In this case, the off-axis perspectives may substantially correspond to a pure skew transformation of the object O.
The amount of skew for a particular perspective view is dictated by the position of the respective micro lens in the back focal plane BFP. Micro lenses further from the centre view the object O from greater angles, which correspond to a greater level of skew. The change of wave front curvature with depth creates a mapping from lateral position to axial z-position. For a given point in the back focal plane there is a certain amount of lateral shift per unit shift in z.
The projection of the light sheet LS captured in a perspective view may be blurred by the finite (and varying) width of the light sheet LS. Based on the light sheet shape, the perspective view may be deconvolved to reconstruct the 3D volume illuminated by the light sheet LS. This may be performed by known image processing techniques.
The generation of 3D images may comprise full deconvolution (e.g. of a plurality or all the perspective views together) or local deconvolution (e.g. of each perspective view) with subsequent fusion of the individual deconvolved images. In one simple example, the deconvolution may comprise a simple deskewing and summing of perspective views.
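By way of non-limiting illustration, the simple deskew-and-sum variant can be sketched as below. The shear values, the use of integer-pixel shifts and the array layout (rows indexed by depth) are hypothetical simplifications for clarity, not the disclosed implementation:

```python
import numpy as np


def deskew(view, shift_per_row):
    """Undo the depth-dependent lateral shift of an off-axis perspective
    view by shearing: each row (depth index) is shifted back in proportion
    to its index. Integer-pixel shifts are used for simplicity."""
    out = np.zeros_like(view)
    for z, row in enumerate(view):
        out[z] = np.roll(row, -int(round(shift_per_row * z)))
    return out


def deskew_and_sum(views, shifts):
    """Deskew each perspective view by its own shear and sum the results,
    i.e. the simple deskewing-and-summing described above."""
    return sum(deskew(v, s) for v, s in zip(views, shifts))
```

A view from a micro lens further from the centre of the back focal plane would be given a larger shift per row, reflecting its greater skew; subpixel interpolation would replace the integer roll in a real reconstruction.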
A single perspective image may be divided into a plurality of partial images, each corresponding to a different plane through the object O.
Each partial image may overlap with adjacent partial images. In other words, each pixel of the single perspective image may be represented in multiple partial images. A weighting function may be applied to achieve this. The weighting function may be a Gaussian corresponding to a profile of the light sheet LS.
Prior knowledge of the light sheet LS profile may be used to apply a weighting to the intensity of each pixel. This may allow the reconstruction to take into account the uncertainty resulting from using a sheet of finite thickness. By including information about the sheet profile, more information about the object O can be extracted. This may require the sheet profile to be measured, e.g. by scanning a dye-coated coverslip in the z-direction.
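By way of non-limiting illustration, the weighting of a perspective view into overlapping partial images can be sketched as follows. A Gaussian sheet profile of known sigma is assumed here for illustration; in practice the measured sheet profile would be used, and the mapping from column to plane is hypothetical:

```python
import numpy as np


def partial_images(perspective, centres, sigma):
    """Split a single perspective image into overlapping partial images.

    Each partial image weights the columns of the perspective view with a
    Gaussian centred on the lateral position mapped to its nominal plane;
    sigma models the finite light-sheet thickness (assumed Gaussian)."""
    cols = np.arange(perspective.shape[1])
    parts = []
    for c in centres:
        weight = np.exp(-(cols - c) ** 2 / (2 * sigma ** 2))
        parts.append(perspective * weight)  # weights broadcast across rows
    return parts
```

Because adjacent Gaussians overlap, a given pixel contributes to several partial images, mirroring the uncertainty introduced by the finite sheet thickness.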
More than one perspective image may be used to reconstruct a three-dimensional image corresponding to a slice through the object O. For example, these may be added or averaged. Alternatively, different perspective images may be compared to remove unwanted artefacts. Accordingly, the processing may comprise combining images of the object O having different viewing angles.
The captured light-field information of images corresponding to slices through the object O may be processed to generate a three-dimensional image of the object O. The three-dimensional image may correspond to a substantially three-dimensional portion of the object O or the entire object O, i.e. as opposed to a thin slice through the object O.
The processing method may comprise generating a plurality of three-dimensional images corresponding to different slices through the object O. Each of the three-dimensional images corresponding to slices through the object O may be generated by the methods described above. The plurality of three-dimensional images corresponding to slices through the object O may be combined to generate a composite three-dimensional image of the object O, e.g. that corresponds to a substantially three-dimensional portion of the object O or the entire object O.
The rate at which the object O moves relative to the light sheet LS, and the frame rate of the imaging device 5, define the spacing between the different substantially planar portions of the object O that are imaged. The preferred spacing may be narrower than, or substantially equal to, the finite waist thickness of the light sheet LS. A wider spacing may reduce the quality of a reconstructed image. However, a narrower spacing may not provide a substantial improvement in the quality of a reconstructed image.
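By way of non-limiting illustration, the slice spacing follows directly from the flow speed and frame rate. The values in the following minimal Python sketch are purely illustrative:

```python
def slice_spacing_um(flow_speed_um_per_s, frame_rate_hz):
    """Distance the object travels between consecutive frames, i.e. the
    spacing between the imaged substantially planar portions."""
    return flow_speed_um_per_s / frame_rate_hz


# Illustrative values: 1 mm/s flow imaged at 1000 frames per second
# gives a 1 um slice spacing, below a sheet waist of a few microns.
print(slice_spacing_um(1000.0, 1000.0))
```

Under these assumed numbers, the spacing satisfies the stated preference of being narrower than the light-sheet waist.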
As described above, the light sheet LS illumination provides a direct mapping from lateral to axial coordinates—allowing reconstruction of the illuminated portion of the object O from a single perspective view. This may free up the other lenses in the array to be used for other things, such as the multiplexing of colour channels. Therefore, in some examples, the micro lens array may be segmented by different coloured filters.
Increasing the number of micro lenses illuminated increases the number of perspective views and thus the sampling density in the angular domain. If resolution of the reconstructed volume were to be prioritised, an increased number of perspectives would allow tomographic reconstruction of greater fidelity, or the more effective application of computational super-resolution methods.
Accordingly, lines connecting the centres of the adjacent micro lenses in the array 4 may form a grid, and the micro lens array may be arranged such that no line of the grid is parallel to the light sheet LS, with respect to the plane orthogonal to the imaging optical axis A.
For a multiplexed system in which the micro lens array 4 is segmented by different colour filters, multiple excitation sheets of different colours may be positioned at different angles. A micro lens array 4 of higher order of symmetry could also be used. Accordingly, the micro lens array may have an order of symmetry of three or more. This may make it easier to segment the micro lens array into three colours, and to find an alignment of the light sheet LS that maximises the number of usable micro lenses.
In some examples, the 3D object may be illuminated additionally with one or more further light sheets. These may also be generated by the illumination optical system. In the same way as described above, the light-field information of the object illuminated with the one or more further light sheets may be captured. The light sheet and one or more further light sheets may comprise different coloured light. The imaging system may comprise corresponding imaging light paths and/or optical filters to capture the different coloured images. The light sheet and one or more further light sheets may be translated or rotated relative to each other so as to reduce, or eliminate, overlap between the light sheets.
One example application of the imaging method described above may be in a method of sorting cells (or other objects). Three-dimensional images of the cells may be analysed to identify one or more characteristics of the cells. The cells may then be sorted according to said characteristics. The image analysis and sorting may be performed using known techniques.
It should be understood that variations of the above examples may be made without departing from the spirit and scope of the present invention.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2104485.4 | Mar 2021 | GB | national |

| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/GB2022/050769 | 3/29/2022 | WO | |