The technical field of the invention is three-dimensional location of fluorescent regions in an object.
Fluorescence imaging is a technique that allows fluorescent markers to be located in a human or animal body. One of the main applications is the location of fluorescent markers, or fluorophores, which target cells of interest, cancerous cells for example. The protocol consists in injecting these markers into the organism before an examination by fluorescence imaging. Because it makes it possible to acquire an image indicating the location of various cancerous regions, fluorescence imaging is a useful complement, or even an alternative, to the use of radioactive tracers.
In the field of microscopy, one difficulty encountered in the reconstruction of the fluorescence is the spatial distribution of the refractive index of the sample, which may not be uniform, in particular when the sample is a real biological tissue. Specifically, biological samples exhibit non-uniformities in refractive index, for example at walls or other cellular structures (cell membrane, nucleus). In culture media, such non-uniformities appear at the interface between the cells and the culture medium.
One solution, aiming to limit the influence of the non-uniformity in refractive indices, consists in using a clarifying agent. The latter may be a membrane-destroying solvent. Such a method makes it possible to obtain 3D images of fluorescence that do not suffer from the non-uniformity in refractive index. However, use of a clarifying agent is invasive and does not allow a sample to be followed in real time.
The inventor has provided a method for reconstructing the spatial distribution of fluorescence in a sample that does not require a clarifying agent to be used. It is a non-invasive method that does not affect the integrity of the sample. The method described below makes it possible to account for a measured non-uniform spatial distribution of refractive indices, said spatial distribution being assumed to be known. It may for example have been obtained beforehand using optical diffraction tomography.
A first subject of the invention is a method for reconstructing a three-dimensional spatial distribution of fluorescence inside an object, the object emitting fluorescence light under the effect of illumination in an excitation spectral band, the object being discretized into object voxels, such that:
According to one embodiment, the method comprises, in each iteration, the following steps:
In each repetition of step i-bis), the phase value assigned to each image voxel of the acquired image may correspond to the phase value assigned to the same image voxel of the image reconstructed in a step i) of the same iteration.
In each step k-bis), for each object voxel, the emission intensity may be computed on the basis of a mean of the W″ emission intensities estimated, for said object voxel, in each step i-bis).
According to one embodiment:
Preferably,
According to one possibility:
In step e) and in step i), the algorithm modelling propagation of light may employ a one-way propagation model, of BPM type, BPM standing for beam propagation method, so as to determine light propagating through a voxel on the basis of light propagating from an adjacent voxel. The refractive index of the object may vary between two different object voxels. The algorithm modelling propagation of light then accounts for a variation in refractive index between two adjacent object voxels.
A second subject of the invention is a device for observing an object, the device comprising:
A third subject of the invention is a storage medium able to be read by a computer or able to be connected to a computer, the storage medium being configured to implement steps c) to m) of a method according to the first subject of the invention on the basis of images formed by an image sensor at various depths in an object.
The invention will be better understood on reading the text describing examples of embodiment that are given, in the rest of the description, with reference to the figures listed below.
The light source may be a laser source or a light-emitting diode. The light source may be fibre-coupled, the light emitted by the light source being guided to the object with an optical fibre.
The object is capable of containing fluorophores, which emit fluorescence light, in a fluorescence spectral band, when they are illuminated in an excitation spectral band. When the illumination spectral band of the light source is comprised in the excitation spectral band, the object emits fluorescence light in a fluorescence spectral band. The analysis of the object aims to determine the position of the fluorophores in the object. To do this, it is sought to determine a three-dimensional spatial distribution of emission of the fluorescence light inside the object, so as to be able to identify regions of the object containing a high concentration of fluorophores.
The object is for example a biological tissue, that it is desired to analyse in order to identify singularities, should any be present. The objective is to identify local concentrations of fluorophores in the object, said concentrations being able to be used to assist with determination of a pathological state.
The object 10 is discretized into voxels, which are referred to as “object voxels”. Each object voxel corresponds to an elementary volume of the object, for example from 100 nm × 100 nm to 10 µm × 10 µm. In each object voxel, the object has a refractive index, in the fluorescence spectral band. The refractive index may be non-uniform, and vary from one object voxel to another. Non-uniformity in refractive index is common when the object is a biological sample. Below, each object voxel is designated by a three-dimensional coordinate r.
The device comprises a pixelated image sensor 15, configured to form an image of the object. The image sensor 15 is coupled to an optical system 16, the latter allowing an object plane, called the focal plane, to be conjugated with the image sensor. The image sensor and the optical system are aligned along an optical axis Δ. The assembly formed by the image sensor and the optical system is configured so that the focal plane can be translated parallel to the optical axis Δ. The optical system may comprise an objective and a tube lens. The image sensor may be a CMOS sensor. The focal plane may thus be translated to various depths inside the object.
Alternatively, the image sensor is associated with a confocal diaphragm, the latter allowing various slices of the object to be observed in succession.
Thus, generally, the image sensor is configured to acquire images in various planes, lying at various depths in the object. The object is bounded by a surface 10s, forming an interface between the object and an ambient medium in which the image sensor lies. The ambient medium is generally air. A depth in the object corresponds to a distance with respect to the surface 10s, parallel to the optical axis Δ. The image sensor may be coupled to a spectral filter, for example a band-pass filter, so as to detect a light wave in the emission spectral band.
The device comprises a processing unit 20, arranged to process the images formed by the image sensor, so as to estimate the three-dimensional spatial distribution of emission of fluorescence light in the object.
The light source is configured to emit illuminating light 12, which propagates through the object. The illuminating light lies in an illumination spectral band in which the object has a non-zero transmittance. The illumination spectral band corresponds to, or is comprised in, the excitation spectral band of the fluorophores liable to be present in the object.
One difficulty is that fluorescence is an incoherent effect. Under the effect of illumination in the excitation spectral band, two different fluorophores generally emit incoherent light waves, forming the fluorescence light. This happens even if the exciting light is coherent.
Another difficulty is the non-uniformity in the refractive index of the object. In prior-art methods, the spatial distribution of emission of fluorescence is obtained by inverting an equation such as:
where
Such a formula is valid if the refractive index is uniform or only slightly non-uniform. The assumption of refractive-index uniformity is acceptable when the thickness of the object is small, or when the object is treated beforehand with a clarifying agent, allowing refractive-index uniformity to be increased to a certain extent inside the object. The clarifying agent is usually a solvent that destroys the cell membranes or even the nuclei of cells. This allows the effects of index jumps inside the object to be decreased.
The propagation of light through a medium, between an object voxel r of the object and a pixel r′ of the image sensor, is modelled by:
where:
The complex function Gλ(r,r′) describes the transport of light between r and r′. It is obtained by solving the Helmholtz equation:
where Δ is the Laplace operator and n is the complex refractive index.
Expression (3) describes propagation of a coherent, monochromatic light wave. It is employed in prior-art reconstructing methods.
The intensity measured by the pixel of an image sensor located at the coordinate r′ is such that:
where E is an operator that returns the expected value and * is an operator that returns the complex conjugate. Integration with respect to the wavelength λ makes it possible to account for a certain spectral width of the emission of light in each object voxel r. Generally, the light emitted is not monochromatic and lies in an emission spectral band that is described by a probabilistic spectral distribution P. Each term Pλ of the spectral distribution P is a probability of emission at the wavelength λ. The spectral distribution P is for example Gaussian.
The use of the expected value expresses the fact that the intensity is measured over a time that is long with respect to the period of the light wave reaching the pixel. The measured intensity corresponds to a mean of the squared modulus of the complex amplitude over the measurement time, i.e. the time taken by the image sensor to acquire the image.
Combining (4) and (2) gives:
where r1 and r2 designate object voxels.
The term sλ*(r2) corresponds to the complex conjugate of sλ(r2).
On account of the incoherent nature of the fluorescence emission, the term E[sλ(r1)sλ*(r2)] is zero except when r1 = r2. Thus, E[sλ(r1)sλ*(r2)] may be written:
Combining (5) and (6) gives:
Expression (7) corresponds to a direct model, relating the distribution of emission intensities F to the intensity measured by pixel r′. It may be expressed in matrix form:
where
However, the analysed object may be discretized into a large number of object voxels, a few million for example, or even a few tens or hundreds of millions. Likewise, the number of measurement points r′ may be of the same order of magnitude, especially if various images of the object are formed, as described below. Thus, it is not possible to establish a transfer matrix L, at each wavelength, each term of which is equal to Pλ|Gλ(r′,r)|². It would be far too great a consumption of resources. For example, if a number of object voxels r and a number of measurement points r′ equal to 100×10⁶ is considered, a transfer matrix, for each wavelength, established according to the direct model would have a size of (100×10⁶) × (100×10⁶), and is hence difficult to envisage.
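As an order-of-magnitude check of this claim, the following sketch computes the storage that such an explicit matrix would require, assuming each term Pλ|Gλ(r′,r)|² is stored as an 8-byte value (the voxel and pixel counts are taken from the text):

```python
# Memory footprint of an explicit transfer matrix L, for one wavelength.
# Assumption: each term stored as 8 bytes (e.g. a double-precision value).
n_voxels = 100 * 10**6   # object voxels r
n_pixels = 100 * 10**6   # measurement points r'
bytes_per_entry = 8
size_bytes = n_voxels * n_pixels * bytes_per_entry
print(f"{size_bytes / 1e15:.0f} PB per wavelength")
```

Roughly 80 petabytes per wavelength, which is why the method described below keeps the transfer matrix implicit.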
The processing unit 20 is also configured to implement a reconstructing method, the main steps of which are schematically shown in
The method first comprises steps allowing measured data to be obtained.
In step 100, the object is illuminated by the light source 11 in an illumination spectral band. In the described example, the illumination spectral band is an excitation spectral band of a fluorophore potentially present in the object.
The object may be interposed between the light source 11 and the image sensor 15, this corresponding to an acquisition configuration that is usually designated by the expression “in transmission”. Alternatively, as shown in
According to another possibility, the light source 11 may be moveable with respect to the object during acquisition. For example, as shown in
In this step, one or more elementary images I1, …, In, …, IN of the object are acquired, at various depths.
Alternatively, the image sensor has a fixed focal length and is moved with respect to the object, so that the focal plane is successively moved to various depths d1, …, dn, …, dN in the object.
Alternatively, the image sensor remains stationary and a light source that emits a narrow strip of light is used. The light source is then translated with respect to the object, preferably parallel to the optical axis. Thus, during each illumination, a layer of the object parallel to the focal plane of the image sensor is illuminated, the thickness of said layer being smaller than the separation between two successive depths dn-1 and dn. The sensor acquires an image in each position of the light source. Various images respectively corresponding to various depths in the object are thus formed in succession.
Generally, step 110 aims to obtain various images I1, …, In, …, IN representative of various depths d1, …, dn, …, dN in the object. Acquiring images with focal planes placed at various depths makes it possible to obtain images in which the fluorescent sources, in the object, appear more clearly. This allows a more accurate reconstruction to be obtained.
The following steps are implemented by the processing unit 20. The latter comprises a microprocessor connected to a memory. The memory contains instructions allowing the processing described below to be carried out. The memory of the processing unit contains the image I acquired in step 110.
In this step, the spatial distribution of fluorescence in the object is initialized. Each term F(r) takes a randomly determined or predefined initial value F0(r).
Steps 130 to 250 described below are carried out iteratively.
Steps 130 to 150 aim to estimate a reconstructed intensity Ix(r′) for each image voxel r′ of the acquired image I. x is an integer corresponding to the iteration rank. On account of the fact that it is not possible to employ an explicit direct model, such as expressed by (7), each reconstructed intensity Ix(r′) is estimated using a statistical approach.
In this step, the rank of the iteration is incremented. A fluorescence distribution Fx resulting either from the initialization, or from a preceding iteration, is employed. The objective of the iterations is to update the fluorescence distribution so as to obtain, following each iteration x, a fluorescence distribution Fx. The fluorescence distribution Fx is defined for each object voxel r. Each term Fx(r) of the fluorescence distribution corresponds to a fluorescence intensity in the object voxel r.
As indicated above, fluorescence emission is an incoherent effect. Two point fluorescent light sources respectively placed in two different object voxels respectively emit two incoherent fluorescence light waves, even if the exciting light is a coherent light wave. In order to take into account the incoherence of the emissions of the object voxels, a random phase value Φr,w is assigned to each object voxel r. w is a repetition rank described below.
In this step, a fluorescence wavelength λ is assigned to each object voxel. Preferably, the fluorescence wavelength λ is assigned taking into account the emission probability Pλ in the emission spectral band, which is defined by the probability distribution P, the latter being, in this example, considered to be Gaussian.
Sub-step 132 is optional. It is implemented if it is desired to take into account a certain width of the emission spectral band, this making the model more realistic.
Sub-steps 131 and 132 make it possible to define, depending on Fx, for each object voxel, a complex fluorescence-emission amplitude
such that:
The complex fluorescence-emission amplitude corresponds to the complex amplitude of a fluorescence light wave emitted by object voxel r.
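Sub-steps 131 and 132 can be sketched as follows. Since expression (8) is not reproduced above, the sketch assumes the common convention s(r) = √Fx(r)·exp(iΦr,w), so that |s(r)|² = Fx(r); all function and parameter names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def draw_phases_and_wavelengths(n_voxels, lambda_mean, lambda_sigma):
    """Sub-steps 131-132 (sketch): one uniform random phase per object voxel,
    modelling the mutual incoherence of the emitters, and one wavelength per
    voxel drawn from a Gaussian spectral distribution P."""
    phases = rng.uniform(0.0, 2.0 * np.pi, n_voxels)            # phi_{r,w}
    wavelengths = rng.normal(lambda_mean, lambda_sigma, n_voxels)
    return phases, wavelengths

def emission_amplitude(F_x, phases):
    """Complex fluorescence-emission amplitude per voxel, under the assumed
    convention s(r) = sqrt(F_x(r)) * exp(i * phi_r)."""
    return np.sqrt(F_x) * np.exp(1j * phases)
```

Averaging many such random draws is what later allows the incoherent intensity to be recovered from coherent propagations.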
In this step, an algorithm modelling propagation of the light is applied, this algorithm allowing a complex fluorescence-emission amplitude
corresponding to a fluorescence source located in object voxel r to be modelled. The light wave emitted by each object voxel r propagates through the object 10, to the image sensor 15.
The propagation-modelling algorithm is based on propagation of a coherent light wave, according to expressions (2) and (3). One important aspect of the invention is that the incoherent nature of the emissions from each object voxel is accounted for in sub-step 131, in which a random phase is assigned to each object voxel. Likewise, the non-monochromatic nature of the light emission, from each voxel, is accounted for in sub-step 132.
For each image voxel, it is possible to estimate a complex amplitude such that:
Taking into account the discretization of the object into object voxels, this expression may be written in matrix form:
where
Expression (9′) cannot be solved analytically, because the matrix G would be too large to instantiate.
The value of
in each image voxel, is approximated using a BPM propagation-modelling algorithm, BPM standing for beam propagation method. This type of algorithm is known to those skilled in the art, and allows propagation of light to be simulated using a one-way propagation model. The propagation-modelling algorithm may be applied stepwise, so as to model, in each object voxel, an entering complex amplitude and an exiting complex amplitude of the fluorescence light wave propagating through the voxel.
If:
Δz corresponds to the thickness of the voxel r.
The fact that the term nk(r) is taken into account shows that the propagation algorithm allows a variation in refractive index in the object to be taken into account.
On the basis of
obtained in (10), it is possible to estimate
relative to the voxel of rank k+1, using:
Expression (11) may be expressed in the frequency domain:
When the object voxel r contains a source
of fluorescence emission, determined by the fluorescence distribution Fx, expression (11′) becomes,
where
is as defined with respect to (8) and
It is thus possible to compute propagation of the complex amplitude
where K corresponds to the total number of layers considered in the BPM. When the surface 10s of the object is reached, the light wave propagates freely to the optical system, then from the optical system to the image sensor.
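One elementary layer-to-layer step of such a BPM can be sketched as follows: a minimal split-step implementation, assuming a paraxial (Fresnel) diffraction kernel and a real refractive-index map. The function and parameter names are illustrative, not the patent's notation:

```python
import numpy as np

def bpm_step(u, n_layer, n0, wavelength, dz, dx):
    """One split-step BPM propagation from layer k to layer k+1 (sketch).
    u        : 2-D complex field entering the layer
    n_layer  : refractive-index map of the layer (real values)
    n0       : background refractive index
    wavelength, dz, dx : wavelength, layer thickness, transverse pixel pitch
    Diffraction over dz is applied in the frequency domain; the local index
    contrast n(r) - n0 is applied as a thin phase screen in the spatial domain."""
    k0 = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(u.shape[0], d=dx)
    fy = np.fft.fftfreq(u.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    # Paraxial diffraction kernel for a homogeneous slab of index n0
    kernel = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2) / n0)
    u = np.fft.ifft2(np.fft.fft2(u) * kernel)
    # Thin phase screen accounting for the index variation within the layer
    return u * np.exp(1j * k0 * (n_layer - n0) * dz)
```

A fluorescence source located in a voxel of the layer would simply be added to the entering field u before the step, in line with the source term of the expressions above.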
In this step, the complex amplitude
of the fluorescence light wave propagating, stepwise, through each voxel of the object, is estimated. The light wave
emanating from each voxel of the last layer of the object is then propagated to the optics, then to the image sensor. In this model, the optical field emanating from the object results solely from the fluorescence sources
found in each voxel of the object, and is determined by the fluorescence distribution Fx. The field incident on the object, i.e. incident on layer k = 1, may be considered to be zero.
Following step 140, an estimate of the complex amplitude
in each image voxel r′ is obtained.
Following step 150, W complex-amplitude values
will have been obtained for each image voxel. Each complex amplitude
models one complex amplitude of the light wave detected by the image voxel r′ given the fluorescence distribution Fx resulting from the preceding iteration and given the phase and wavelength distributions taken into account in steps 131 and 132.
On the basis of the W complex-amplitude values
an intensity, referred to as the reconstructed intensity Ix(r′), is estimated for each image voxel. The reconstructed intensity is obtained by finding a mean of the squares of the moduli of the W complex amplitudes
resulting from step 150. Thus,
All of the values Ix(r′) together form a reconstructed image Ix.
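The averaging that produces the reconstructed image can be sketched in one function; the array layout and names are illustrative:

```python
import numpy as np

def reconstructed_intensity(amplitudes):
    """Step 160 (sketch): mean of the squared moduli of the W complex
    amplitudes produced by the W repetitions of steps 130-150.
    `amplitudes` has shape (W, n_image_voxels); returns I_x(r')."""
    return np.mean(np.abs(amplitudes) ** 2, axis=0)
```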
Steps 130 to 150 make it possible to obtain, statistically, the reconstructed image Ix such that
The symbol ≈ means “approximately equal to”.
In (15), the reconstructed image Ix is expressed vectorially, through concatenation of image voxels. Likewise, Fx is expressed in vector form, through concatenation of object voxels.
The transfer matrix L is not determined analytically. It is implicit, being approximated via the successive propagations of step 140, the results of which propagations are then averaged in step 160. In other words, steps 140 to 160 are equivalent to defining the transfer matrix L. An explicit definition of L is replaced by a mean over W repetitions of steps 140 and 150.
In this step, a reconstructed intensity is determined for each image voxel of the reconstructed image. The reconstructed intensity of each image voxel corresponds to the Ix(r′) resulting from step 160.
Analogously to step 131, a random phase value Φr′,w′ is assigned to each image voxel. w′ is a repetition rank described below.
Analogously to step 132, an emission wavelength is assigned to each image voxel. The emission wavelength is assigned taking into account the emission probability Pλ in the emission spectral band, which is defined by the probability distribution P. Just like sub-step 132, sub-step 172 is optional.
Sub-steps 171 and 172 make it possible to define, for each image voxel r′, a complex amplitude
such that:
(16). Each complex amplitude
is computed based on the reconstructed image Ix resulting from step 160. Each complex amplitude corresponds to an amplitude of a light wave detected in image voxel r′.
In this step, an algorithm modelling back-propagation of the light is applied, this algorithm allowing back-propagation of a light wave from the image sensor to each object voxel r to be modelled. It is a question of determining a set of values of complex fluorescence-emission amplitudes
in each object voxel, explaining each complex amplitude resulting from step 170.
The back-propagation-modelling algorithm is analogous to the algorithm implemented in step 140, i.e. it is a BPM algorithm. It makes it possible to estimate
such that
where H designates the Hermitian (conjugate-transpose) operator.
Following back-propagation, a complex fluorescence-emission amplitude
is obtained for each object voxel. This amounts to performing, via modelling, the matrix operation:
where:
is a vector each term of which is equal to an estimated amplitude of a fluorescence light wave
emitted by each object voxel;
is obtained by back-propagation of Ix.
Ux is a vector each term of which is equal to a fluorescence amplitude detected in each image voxel
based on the reconstructed image Ix; GλH is a transfer matrix, each term of which is GλH(r,r′).
The transfer matrix GλH cannot be instantiated, because it would be too large in size. The back-propagation-modelling algorithm allows
to be estimated from Ux.
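The implicit product by GλH can be sketched as the adjoint of the forward BPM step: undo each layer's phase screen, then apply the conjugated diffraction kernel, carrying the detected field from layer k+1 back to layer k. This is a minimal sketch under the same paraxial assumptions as the forward model; names are illustrative:

```python
import numpy as np

def backpropagate_layer(u, n_layer, n0, wavelength, dz, dx):
    """One adjoint BPM step (sketch): conjugate phase screen, then the
    conjugated paraxial diffraction kernel over -dz. Together with the
    forward step, this realizes the implicit product by G^H."""
    k0 = 2.0 * np.pi / wavelength
    u = u * np.exp(-1j * k0 * (n_layer - n0) * dz)   # undo the phase screen
    fx = np.fft.fftfreq(u.shape[0], d=dx)
    fy = np.fft.fftfreq(u.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    kernel = np.exp(+1j * np.pi * wavelength * dz * (FX**2 + FY**2) / n0)
    return np.fft.ifft2(np.fft.fft2(u) * kernel)
```

Because each elementary operation has unit modulus, the adjoint pass conserves energy, exactly like the forward pass.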
Following step 190, W′ complex-emission-amplitude values
will have been obtained for each object voxel.
On the basis of the W′ squares of the moduli of the complex amplitudes
resulting from step 190, it is possible to estimate, for each object voxel, a fluorescence emission intensity
corresponding to the reconstructed image Ix.
The transfer matrix Lᵀ is not determined analytically, but estimated via the propagations of step 190, the results of which propagations are then averaged in step 210. In other words, steps 180 to 210 are equivalent to defining the transfer matrix Lᵀ. The notation
expresses the fact that it is an estimate of S(r) in the iteration of rank x, said estimate being based on the reconstructed image Ix.
If
corresponds to a vector each term of which is equal to
then:
and, given that Ix ≈ LFx,
Steps 210 to 240 described below are equivalent to steps 170 to 200. Whereas steps 170 to 200 allow back-propagation of the reconstructed image Ix, steps 210 to 240 back-propagate, using the same approach, the acquired image I. Whereas the reconstructed image Ix results from a reconstruction algorithm, and therefore corresponds to a simulation, the acquired image I corresponds to the measured data.
Analogously to steps 131 and 171, a random phase value Φr′,w″ is assigned to each image voxel I(r′). w″ is a repetition rank.
Analogously to steps 132 and 172, an emission wavelength λ is assigned to each image voxel. The emission wavelength is assigned taking into account the emission probability Pλ in the emission spectral band, which is defined by the probability distribution P. Sub-step 212 is optional.
Sub-steps 211 and 212 make it possible to define, for each image voxel r′, a complex amplitude
such that:
In this step, an algorithm modelling back-propagation of the light is applied, this algorithm allowing a complex amplitude propagating from the image sensor to each object voxel r to be modelled. It is a question of determining a set of values of complex fluorescence-emission amplitudes
in each object voxel, explaining the acquired image I resulting from step 120.
The back-propagation-modelling algorithm is analogous to the algorithm implemented in step 180. It makes it possible to estimate
such that
where H designates the Hermitian (conjugate-transpose) operator.
Following back-propagation, a complex fluorescence-emission amplitude corresponding to the acquired image I is obtained for each object voxel.
Following step 230, W″ complex-amplitude values
will have been obtained for each object voxel.
On the basis of the W″ complex-amplitude values
resulting from step 230, it is possible to estimate, for each object voxel, a fluorescence emission intensity
corresponding to the acquired image I. The emission intensity may be obtained by finding a mean of the squares of the moduli of the W″ complex amplitudes
resulting from step 230. Thus,
The notation
expresses the fact that it is an estimate of
in the iteration of rank x, said estimate being based on the acquired image I.
If
takes the form of a vector each term of which is equal to
then:
In step 250, the fluorescence distribution Fx is updated according to the expression:
Thus, each term Fx+1(r) is updated such that:
The operator ← means “is replaced by”.
Expression (26) corresponds to an update achieved via non-negative matrix factorization (NMF).
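Expression (26) is not reproduced above; for a least-squares data term, the standard NMF multiplicative rule F ← F · (LᵀI) / (LᵀLF) can be written directly in terms of the two back-propagated intensity vectors, since S″x approximates LᵀI and Ŝx approximates LᵀLFx. This is a sketch of that standard rule, not necessarily the patent's exact formula; names are illustrative:

```python
import numpy as np

def nmf_update(F_x, S_measured, S_reconstructed, eps=1e-12):
    """Step 250 (sketch): multiplicative update F <- F * (L^T I) / (L^T L F).
    S_measured plays the role of L^T I (back-propagation of the acquired
    image) and S_reconstructed that of L^T L F (back-propagation of the
    reconstructed image). The eps guard against division by zero is an
    implementation choice."""
    return F_x * S_measured / (S_reconstructed + eps)
```

Non-negativity of Fx is preserved automatically, and voxels where the two back-projections agree are left unchanged.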
According to one variant, steps 170 to 200 consist not in back-propagation of the reconstructed image Ix, but rather of a differential image representative of a comparison between the reconstructed image Ix and the acquired image I. In step 170, a differential image ΔIx is defined such that
Steps 171 and 172 are implemented in a similar way to the way described above, so as to define, for each image voxel r′, a complex amplitude, referred to as the differential complex amplitude,
such that:
Step 180 described above is carried out using, by way of input data, differential complex amplitudes
such as described in (29). Thus, a differential amplitude
allowing the differential image ΔIx to be explained is obtained for each object voxel, the differential amplitude corresponding to a differential fluorescence emission.
Step 190 is implemented, so as to achieve W′ repetitions of steps 170 and 180.
Step 200 is performed, by computing, for each object voxel, a mean
of the W′ squares of the moduli of the differential amplitudes
It is thus possible to obtain, for each voxel, a differential intensity such that:
In this embodiment, steps 210 to 240 are not implemented. In step 250, the update formula consists in defining a value of Fx+1 allowing the norm, for example the L2 norm, of the vector ΔSx to be decreased. The optimization algorithm may be a gradient-descent algorithm. It allows a fluorescence distribution Fx that is used in a reiteration of steps 130 to 200 and 250 to be determined.
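A generic gradient-descent update of the kind mentioned can be sketched as follows; the step size, the non-negativity clip, and the use of the back-propagated differential as the descent direction are all assumptions, not the patent's exact formula:

```python
import numpy as np

def gradient_step(F_x, delta_S, step=0.1):
    """Variant of step 250 (sketch): move F_x along a descent direction
    derived from the back-propagated differential delta_S, then clip so
    that the fluorescence distribution stays non-negative."""
    return np.clip(F_x - step * delta_S, 0.0, None)
```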
The inventor considers that it is preferable for an NMF update formula to be used.
Whatever the embodiment, following the last iteration, a fluorescence-intensity distribution, corresponding to the vector Fx resulting from the last iteration, is obtained.
It is known that a fluorescence intensity corresponds to a fluorescence yield η(r) multiplied by an excitation light intensity. The excitation light intensity Uex(r) corresponds to the intensity of the illuminating light wave reaching each object voxel during step 110. If Uex(r) is known for each object voxel, from each term Fx(r) of the vector Fx, it is possible to estimate η(r) using the expression:
This makes it possible to obtain a map of the fluorescence η in each voxel of the object. The fluorescence map is independent of the illumination. It corresponds to an amount of fluorophore in each object voxel of the object. The fluorescence map η is a vector defined for each object voxel.
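Given the relation η(r) = Fx(r) / Uex(r) stated above, the fluorescence map can be computed voxel-wise; the eps guard for voxels receiving no excitation light is an implementation choice, and the names are illustrative:

```python
import numpy as np

def fluorescence_yield(F, U_ex, eps=1e-12):
    """Per-voxel fluorescence yield eta(r) = F(r) / U_ex(r). The eps guard
    avoids division by zero where the excitation intensity vanishes."""
    return F / (U_ex + eps)
```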
The inventor has implemented the method described with reference to steps 100 to 250, using an NMF update formula.
The dimensions of the object were:
Fluorescence images were acquired at various depths, with a spatial step size of Δz = 2 µm: 50 images were thus acquired from z = -100 µm to z = 0 µm.
with
for each layer k in the object.
corresponds to the amplitude of a complex light wave propagating through each object voxel,
being estimated via the BPM propagation model described with respect to step 140. Thus,
In each iteration, a criterion of fit to the data εx, corresponding to a difference between the reconstructed image (Ix ≈ L Fx) and the acquired image (I), is computed.
In each iteration, the sum of the light wave intensity in each plane perpendicular to the Z-axis, between -100 µm and 0 µm, is computed. It is a question of the sum of each term Fx(r) for object voxels lying at the same depth Z.
The invention will be able to be implemented on biological tissues, for diagnostic purposes. It will also be able to be implemented, in microscopy, in applications related to cellular culture or to the culture of micro-organisms, in culture media, given a sufficient transmittance with respect to the illuminating light and fluorescence light. More generally, the invention may be applied to the analysis of solid objects, liquid objects or objects taking the form of gels, in the food-processing industry or in other industrial fields.
Number | Date | Country | Kind |
---|---|---|---
21 14627 | Dec 2021 | FR | national |