The technical field of the invention is the reconstruction of holographic images, in particular with a view to characterizing a sample, for example a biological sample.
The observation of samples, and in particular biological samples, by lensless imaging has seen substantial development over the last ten years. This technique allows a sample to be observed by placing it between a light source and an image sensor, without placing any optically magnifying lenses between the sample and the image sensor. Thus, the image sensor collects an image of the light wave transmitted by the sample.
This image is formed of interference patterns generated by interference between the light wave emitted by the light source and transmitted by the sample, and diffracted waves resulting from the diffraction, by the sample, of the light wave emitted by the light source. These interference patterns are sometimes called diffraction patterns.
Document WO2008090330 describes a device allowing biological samples, in this case cells, to be observed by lensless imaging. The device allows an interference pattern, the morphology of which allows the type of cell to be identified, to be associated with each cell. Lensless imaging would thus appear to be a simple and inexpensive alternative to a conventional microscope. In addition, its field of observation is much larger than that of a microscope can be. It will thus be understood that the prospective applications related to this technology are many and various.
In order to obtain a satisfactory observation of the sample, iterative image-reconstruction algorithms have been developed, such as those described in WO2016189257 or in WO2017162985. These algorithms comprise iteratively applying a holographic propagation operator, so as to propagate the hologram formed in the detection plane to a reconstruction plane, the latter generally corresponding to a sample plane, i.e. the plane in which the sample lies. The sample plane is generally parallel to the detection plane. The algorithms described in the prior art successively propagate/back-propagate images between the detection plane and the sample plane. Specifically, the image acquired by the image sensor contains no information relating to the phase of the exposure light wave. The objective of these algorithms is to estimate, iteratively, the phase of the exposure light wave in the detection plane. This allows a correct image of the sample in the reconstruction plane to be formed. Thus, these algorithms allow optical properties of the exposure light wave, for example its modulus or its phase, to be obtained.
The inventors propose a method for observing a sample using a holographic imaging method, the method comprising a step of reconstructing a complex image of the sample, on the basis of which image it is possible to obtain a spatial representation of parameters of the sample.
A first subject of the invention is a method for observing a sample, the sample lying in a sample plane defining radial positions, parameters of the sample being defined at each radial position, the method comprising:
Step (iv) may comprise computing a validity indicator, such that the parameters of the sample are updated so as to make the validity indicator tend toward a preset value. In step (iv), the parameters of the sample are then updated so as to minimize the validity indicator.
By complex image of the sample, what is meant is a complex image of an exposure light wave, in the sample plane, the exposure light wave propagating to the image sensor.
The supervised machine learning algorithm may for example employ a neural network. The neural network may notably be a convolutional neural network.
Step (iv) may comprise determining a gradient of the validity indicator as a function of at least one parameter, such that the parameters are updated so as to decrease the validity indicator in the following iteration. Step (iv) may notably employ a gradient-descent algorithm.
According to one embodiment:
By image of a parameter, what is meant is a spatial distribution of the parameter in the sample plane.
According to one embodiment, in step (ii), the computation of the image of the sample in the detection plane comprises a convolution using a convolution kernel, the convolution kernel representing a spatial extent of the light source.
According to one embodiment, the parameters describing the sample comprise:
According to one embodiment, in step (vi), the supervised machine learning algorithm is fed with:
The supervised machine learning algorithm then allows an image of the updated second parameter to be obtained.
According to one embodiment, following a step (viii), steps (vi) to (viii) are repeated at least once, or even at least twice. Thus, following each step (viii), a series of iterations of steps (ii) to (v) is performed. After each series of iterations:
According to one embodiment, no image-forming optics are placed between the sample and the image sensor.
According to one embodiment, an optical system, such as a lens or objective, is placed between the sample and the image sensor, the optical system defining an image plane and an object plane, the method being such that, in step b):
The method may comprise:
d) characterizing the sample on the basis of the image of the sample resulting from step c), or on the basis of each image of the sample resulting from step c).
A second subject of the invention is a device for observing a sample, comprising:
The invention will be better understood on reading the description of examples of embodiments, which are presented, in the rest of the description, with reference to the figures listed below.
The sample 10 is a sample that it is desired to characterize. It may notably be a question of a medium 10m containing particles 10p. The particles 10p may be blood particles, for example red blood cells. It may also be a question of cells, microorganisms, for example bacteria or yeast, micro-algae, micro-spheres, or droplets that are insoluble in the liquid medium, for example lipid nanoparticles. Preferably, the particles 10p have a diameter, or are inscribed in a diameter, smaller than 1 mm, and preferably smaller than 100 μm. It is a question of microparticles (diameter smaller than 1 mm) or of nanoparticles (diameter smaller than 1 μm). The medium 10m, in which the particles bathe, may be a liquid medium, for example a liquid phase of a bodily fluid, a culture medium or a liquid sampled from the environment or from an industrial process. It may also be a question of a solid medium or a medium having the consistency of a gel, for example an agar substrate, which is conducive to the growth of bacterial colonies.
The sample may also be a solid sample, for example a thin slice of biological tissue, such as a pathology slide, or a dry extract of a fluid, for example of a biological fluid.
The sample is preferably transparent or sufficiently translucent to be able to allow an image to be formed with the image sensor.
The sample 10 may be contained in a fluidic chamber 16, for example a micro-cuvette, commonly used in point-of-care-type devices, into which the sample 10 penetrates, for example by capillary action. The thickness e of the sample 10, along the propagation axis, typically varies between 20 μm and 1 cm, and is preferably comprised between 50 μm and 500 μm, for example is 150 μm.
The sample lies in a plane P10, called the sample plane, perpendicular to the propagation axis. It is held on a holder 10s. The sample plane is defined by two orthogonal axes X and Y, respectively defining coordinates x and y. Each pair of coordinates (x,y) corresponds to one radial position r. The radial positions are defined in the sample plane and in a detection plane that is described below.
The distance D between the light source 11 and the sample 10 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Preferably, the light source, seen by the sample, may be considered to be point-like. This means that its diameter (or its diagonal) is preferably smaller than one tenth, better still one hundredth, of the distance between the sample and the light source. Thus, preferably, the light reaches the sample in the form of plane waves, or waves that may be considered as such.
The light source 11 may be a light-emitting diode or a laser diode. It may be associated with diaphragm 18, or spatial filter. The aperture of the diaphragm is typically comprised between 5 μm and 1 mm, preferably between 10 μm and 200 μm or 500 μm.
In this example, the diaphragm is that supplied by Thorlabs under the reference P150S and its diameter is 150 μm. The diaphragm may be replaced by an optical fiber, a first end of which is placed facing the light source 11 and a second end of which is placed facing the sample 10.
The device may comprise a diffuser 17, placed between the light source 11 and the diaphragm 18. The use of such a diffuser allows constraints on the centrality of the light source 11 with respect to the aperture of the diaphragm 18 to be relaxed. The function of such a diffuser is to distribute the light beam, produced by the elementary light source 11, in a cone of angle α, α being equal to 30° in the present case. Preferably, the scattering angle α varies between 10° and 80°.
Preferably, the emission spectral band Δλ of the incident light wave 12 has a width smaller than 100 nm. By spectral bandwidth, what is meant is the full width at half maximum of said spectral band. In the rest of the text, the spectral band is designated by a wavelength λ representative of the spectral band, and corresponding for example to the central wavelength.
The sample 10 is placed between the light source 11 and an image sensor 20. The image sensor 20 defines a detection plane P0. The latter preferably lies parallel, or substantially parallel, to the sample plane P10 in which the sample lies. The term substantially parallel means that the two elements may not be rigorously parallel, an angular tolerance of a few degrees, smaller than 20° or 10°, being acceptable.
The image sensor 20 is configured to form an image in the detection plane P0. In the example shown, it is a question of a CCD or CMOS image sensor comprising a matrix array of pixels. CMOS sensors are preferred because their pixels are smaller, which allows images with a more favorable spatial resolution to be acquired. The detection plane P0 preferably lies perpendicular to the propagation axis Z of the incident light wave 12. Thus, the detection plane P0 is parallel to the plane P10 of the sample. The image sensor comprises pixels, one radial position r being associated with each pixel, in the detection plane P0.
The distance d between the sample 10 and the matrix array of pixels of the image sensor 20 is preferably comprised between 50 μm and 2 cm, and more preferably comprised between 100 μm and 2 mm.
In the device shown in
Under the effect of the incident light wave 12, the sample 10 may generate a diffracted wave, liable to produce, in the detection plane P0, interference, in particular with a portion of the incident light wave 12 transmitted by the sample. Moreover, the sample may absorb one portion of the incident light wave 12. Thus, the light wave 14, transmitted by the sample, and to which the image sensor 20 is exposed, is formed following absorption and diffraction of the incident light wave 12 by the sample. Thus, the sample results in absorption of one portion of the incident light wave, and in a phase shift of the latter. The phase shift is due to a variation in refractive index (or optical index) when the light propagates through the sample.
The light wave 14 may also be designated by the term exposure light wave. A processor 30, for example a microprocessor, is configured to process each image acquired by the image sensor 20. In particular, the processor is a microprocessor connected to a programmable memory 31 in which a sequence of instructions for carrying out the image-processing and computing operations described in this description is stored. The processor may be coupled to a screen 32 allowing images acquired by the image sensor 20 or computed by the processor 30 to be displayed.
The image acquired by the image sensor forms a hologram. It generally does not allow a satisfactory visual representation of the sample, in particular when the sample comprises diffracting elements that are very close to one another. This is notably the case when the sample contains particles that are very close to one another, or when the sample is a thin slice of biological tissue.
Whatever the device used, the sample may be described by sample parameters. One or more sample parameters correspond to each radial position. The parameters corresponding to a given radial position may form a vector F(r), each vector being defined at a radial position r in the sample plane. Each term of each vector corresponds to one parameter of the sample at the radial position r. Each radial position corresponds to one or more pixels of the image sensor.
Each vector of parameters F(r) is of dimension W. W is a strictly positive integer. W corresponds to the number of parameters considered at each radial position. Each vector F(r) contains W terms Fw(r), with 1 ≤ w ≤ W, such that:

F(r) = (F1(r), F2(r) . . . FW(r))
All of the vectors F(r), for the various radial positions considered, together form a set, denoted ℱ, of parameters collating the parameters of the sample.
The following is based on the example described in patent application FR1859618, in which the sample may be described by an absorbance α(r) (first term of the vector) and an optical path difference L(r) (second term of the vector), these properties being liable to vary depending on the illumination spectral band. Thus, at each radial position r, the sample may be described by W=2 different parameters:
Fw=1(r)=F1(r)=α(r)
and
Fw=2(r)=F2(r)=L(r).
The absorbance α(r) corresponds to an ability of the sample to absorb all or a portion of the illumination light wave. When a particle is considered to be transparent, α(r)=0.
In other models, the parameters may be the optical index, i.e. the refractive index, of the sample, given that it may be a complex quantity. Thus, the parameters may comprise the real part of the refractive index, and/or the imaginary part of the refractive index.
The optical path difference L(r) depends on the thickness e(r) of the sample, parallel to the propagation axis of the light, and on the index difference induced by the sample. For example, when the sample comprises particles 10p bathing in a medium 10m, each particle 10p induces an optical path difference L(r) such that:
L(r)=(np−nm)×e(r)
where e(r) is the thickness of the particle at the radial position r; and np and nm correspond to the refractive indices of the particle 10p and of the medium 10m, respectively. For example, a particle of index np=1.38 and of thickness e(r)=5 μm, bathing in a medium of index nm=1.33, induces an optical path difference L(r)=0.25 μm, a value comparable to the illumination wavelength; this is what gives rise to the periodicity problem discussed below.
In
Let A10 be the image of the complex amplitude of the exposure light wave 14 in the plane P10 of the sample. This image, which is a complex image, may also be considered to be a complex image of the sample. At each radial position r, the complex amplitude A10(r) may be defined from the vector of parameters F(r) corresponding to the radial position r. When the vector of parameters F(r) contains the terms α(r) and L(r), the complex amplitude A10(r) may be expressed by the following expression:

A10(r) = b(r)·exp(−α(r))·exp(j2πL(r)/λ) (1)
The term b(r) is an amplitude representative of the incident light wave 12 reaching the sample. This amplitude may be measured by the image sensor 20 in the absence of sample 10 on the holder 10s. The light source 11 then directly illuminates the image sensor. From the image I0,b(r) acquired by the image sensor 20 in the emission spectral band Δλ, the amplitude b(r) is for example obtained using the expression:
b(r) = √(I0,b(r)) (1′)
Expression (1) allows an expression for a complex image to be determined from the vectors of parameters determined in the various radial positions.
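As an illustration, the construction of such a complex image from the parameter maps may be sketched in Python; this is a minimal sketch assuming NumPy, with placeholder array shapes, placeholder pixel values and a placeholder wavelength, not an implementation prescribed by the present description.

```python
import numpy as np

wavelength = 450e-9            # illumination wavelength (m); placeholder value

# Hypothetical parameter maps of the sample, one value per radial position r:
alpha = np.zeros((512, 512))   # absorbance alpha(r); 0 corresponds to a transparent sample
L = np.zeros((512, 512))       # optical path difference L(r), in meters

# Expression (1'): amplitude b(r) of the incident wave, estimated from a
# blank image I0_b acquired without any sample on the holder.
I0_b = np.ones((512, 512))     # placeholder blank acquisition
b = np.sqrt(I0_b)

# Expression (1): complex amplitude of the exposure light wave in the sample plane.
A10 = b * np.exp(-alpha) * np.exp(2j * np.pi * L / wavelength)
```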
The term exp(j2πL(r)/λ) determines the phase of the complex amplitude A10(r). It may be seen that this term is λ-periodic with respect to L(r). This means that various particles, the optical path differences of which are equal to L(r)+qλ, q being an integer, result in the same complex amplitude A10(r) between the sample and the image sensor. In other words, there are potentially an infinite number of objects capable of generating a given complex amplitude between the sample and the image sensor.
One of the objectives of holographic reconstruction algorithms is to reconstruct the optical properties of an object from an image acquired by an image sensor, forming a hologram. However, the hologram contains only partial information on the exposure light wave, to which the image sensor is exposed. In particular, the image acquired by the image sensor, i.e. the hologram, contains no information on the phase of the exposure light wave. The use of iterative holographic reconstruction algorithms allows information on the phase of the exposure light wave, which information corresponds to the shift in the phase of the incident light wave induced by the object, to be estimated iteratively. When the exposure light wave is expressed using expression (1), the phase-related term corresponds to the term exp(j2πL(r)/λ).
One problem with the estimation of a phase term using current algorithms, a problem usually called "phase unwrapping", is that of identifying the correct value of the phase term among the infinite number of λ-periodic values that result in the same expression for the complex amplitude of the exposure light wave.
The method described below, the main steps of which are illustrated in
Step 100: illuminating the sample and acquiring an image I0 in each illumination spectral band Δλ.
Step 110: Initialization. In this step, an initialization complex image A10⁰ is taken into account in the sample plane P10. The index 10 of the symbol A10⁰ designates the fact that the image is defined in the sample plane P10. The exponent 0 of the symbol A10⁰ designates the fact that it is a question of an initialization image.
Considering an exposure light wave such as defined by expression (1), the initialization amounts to considering, for each radial position r, parameters of the sample forming an initial vector of parameters, such that:

F⁰(r) = (α⁰(r), L⁰(r))
The terms from which the initial vector is composed may be defined arbitrarily, or depending on knowledge of the properties of the sample. For example, it is possible to attribute, to each term, the same value, for example a value of zero. The vectors of parameters F⁰(r) defined in this step form an initialization set ℱ⁰ describing the sample 10. They also form an initial complex image A10⁰ in the sample plane P10.
Steps 120 to 160 described below are implemented iteratively, according to an iteration rank n. n is an integer comprised between 1 (first iteration) and N, N corresponding to the number of iterations. In step 110, n=0. In the notations used, the iteration rank is presented in the form of an exponent.
A set ℱⁿ of vectors of parameters Fⁿ(r) is associated with each iteration. Each vector of parameters Fⁿ(r) associated with a radial position r is such that:

Fⁿ(r) = (αⁿ(r), Lⁿ(r))
Each iteration amounts to updating the vectors of parameters Fⁿ(r), i.e. the terms αⁿ(r) and Lⁿ(r), for the various radial positions r in question.
Step 120: Estimating an image in the detection plane.
For each radial position r, from the parameters Fⁿ⁻¹(r) resulting from step 110 or from step 150 of a preceding iteration, a complex amplitude A10ⁿ⁻¹(r) of the exposure light wave 14 in the sample plane P10 is determined. In the first iteration, n=1. In this example, the complex amplitude is determined from expression (1), from the initial set ℱ⁰ of vectors of parameters or from the set ℱⁿ⁻¹ of vectors of parameters resulting from a preceding iteration. The complex amplitudes A10ⁿ⁻¹(r) defined for the various radial positions r in question form a complex image A10ⁿ⁻¹ of the exposure light wave 14, in the sample plane P10. The complex image A10ⁿ⁻¹ is also called the complex image of the sample.
Thus, the complex image A10ⁿ⁻¹ taken into account in step 120 is generated from the initial set ℱ⁰ of parameters (when n=1) or from a set of parameters resulting from a preceding iteration of steps 120 to 150.
A holographic propagation operator hP10→P0 is applied to the complex image A10ⁿ⁻¹, so as to obtain a complex image A0ⁿ of the exposure light wave 14 in the detection plane P0, such that:

A0ⁿ = A10ⁿ⁻¹ * hP10→P0 (2)

hP10→P0 is for example a Fresnel operator, such that:

hP10→P0(x, y) = (1/(jλd))·exp(j2πd/λ)·exp(jπ(x² + y²)/(λd)) (3)

with r = (x, y), d being the distance between the sample plane P10 and the detection plane P0.
Generally, the holographic propagation operator hP10→P0 models the propagation of the light between the sample plane P10 and the detection plane P0.
Considering the square of the modulus of the exposure light wave 14, an estimation Î0ⁿ of the image I0 acquired by the image sensor is obtained. Thus,

Î0ⁿ = A0ⁿ·A0ⁿ* (4)

A0ⁿ* is the complex conjugate of the complex image A0ⁿ.

Expression (4) amounts to adopting a simple measurement model, in which the magnitude of the estimation of the image acquired by the sensor corresponds to the square of the modulus of the complex image A0ⁿ.
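Continuing the sketch above (which defined A10 and wavelength), expressions (2) to (4) may be prototyped with an FFT-based convolution; the sampled form of the Fresnel kernel, the function names and the distance value are illustrative assumptions, the pixel pitch being taken from the sensor described later in this text.

```python
import numpy as np

def fresnel_kernel(shape, pixel_pitch, distance, wavelength):
    """Sampled Fresnel impulse response h of expression (3), on the sensor grid."""
    ny, nx = shape
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    return (np.exp(2j * np.pi * distance / wavelength) / (1j * wavelength * distance)
            * np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * distance)))

def propagate(A, h):
    """Expression (2): circular convolution A * h, computed in the Fourier domain."""
    return np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(np.fft.ifftshift(h)))

# Propagate the sample-plane image to the detection plane, then estimate the
# acquired image (expression (4)): the square of the modulus of A0.
h = fresnel_kernel(A10.shape, pixel_pitch=1.67e-6, distance=1.5e-3, wavelength=wavelength)
A0 = propagate(A10, h)
I0_hat = np.abs(A0) ** 2
```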
According to one alternative, it is possible to take into account the spatial coherence of the light source, by considering a convolution kernel K, and lighting non-uniformities, such that:

Î0ⁿ = B·[(A0ⁿ·A0ⁿ*) * K] (4′)

The convolution kernel K represents the spatial extent of the light source, parallel to the detection plane.
B may be obtained by calibration, for example via an acquisition in the absence of an object between the source and the image sensor.
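A sketch of expression (4′), continuing the example above: here the Gaussian form and width of K are assumptions standing in for the measured extent of the source, and B is a placeholder calibration map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Expression (4'): convolve the ideal intensity with a kernel K representing
# the spatial extent of the source (assumed Gaussian here), then weight by a
# calibration map B accounting for lighting non-uniformities.
B = np.ones_like(I0_hat)                                    # placeholder calibration map
I0_hat_coherence = B * gaussian_filter(I0_hat, sigma=2.0)   # (A0.A0*) * K, K Gaussian
```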
Generally, in this step, the estimation Î0ⁿ may be described using the following expression:

Î0ⁿ = m(A10ⁿ⁻¹) (5)

where A10ⁿ⁻¹ corresponds to the complex image of the exposure light wave 14 in the sample plane, namely either the initial image (when n=1) or an image resulting from a preceding iteration (when n>1), and m is a function taking into account expression (4) (or (4′)) and expression (2).
The complex image A10ⁿ⁻¹ of the exposure light wave depends on the parameters contained in the vectors Fⁿ⁻¹(r) describing the sample, in the present case the absorbance αⁿ⁻¹(r) and the optical path difference Lⁿ⁻¹(r), according to expression (1).
Thus, it is possible to write:

Î0ⁿ = m′(αⁿ⁻¹, Lⁿ⁻¹) (5′)

where αⁿ⁻¹ and Lⁿ⁻¹ are the images of the absorbance and of the optical path difference resulting from the initialization or from a preceding iteration, respectively. m′ is a function taking into account expressions (1), (2) and (4) or (4′).
Step 130: comparing the image Î0ⁿ estimated in step 120 with the image I0 acquired by the image sensor 20 in step 100. The comparison may be expressed in the form of a difference, of a ratio, or of a squared deviation.
Step 140: computing a validity indicator εⁿ from the comparison made, in step 130, for each spectral band. The validity indicator εⁿ represents the relevance of the set ℱⁿ of vectors Fⁿ(r) describing the sample. The index n means that the validity indicator is established for the set ℱⁿ of vectors Fⁿ(r). In this example, the validity indicator decreases as the set describes the sample more correctly.
The validity indicator εⁿ comprises an error criterion ε0|n, the latter quantifying an overall error in the estimated image Î0ⁿ with respect to the measured image I0. By overall error, what is meant is an error for each radial position.
The error criterion ε0|n is established on the basis of the comparison of the images Î0ⁿ and I0. For example:

ε0|n = Σr (√(Î0ⁿ(r)) − √(I0(r)))² (10)

where the sum is computed over the radial positions r considered in the detection plane.
The index 0|n attributed to the error criterion ε0|n represents the fact that this indicator is computed in the detection plane P0, with respect to the set ℱⁿ of vectors taken into account in the iteration.
The error criterion ε0|n is a data-consistency criterion, in the sense that its minimization brings the estimation closer to the measured data, in the present case the image I0. Thus, when Î0ⁿ tends toward I0, i.e. when the set ℱⁿ of vectors correctly describes the sample 10, ε0|n tends toward its minimum. In step 150, a minimization algorithm, of gradient-descent type, may be applied so as to gradually approach, in each iteration, the set ℱⁿ allowing a satisfactory minimization of the validity indicator εⁿ. Thus, the objective of this step is to establish a set ℱⁿ of vectors Fⁿ(r) aiming to obtain, following a reiteration of steps 120 to 140, a validity indicator lower than the validity indicator of the current iteration n.
This step allows at least one term Fwⁿ(r) of each vector Fⁿ(r) to be updated.
To do this, for each radial position r, a gradient Gwⁿ(r) of the validity indicator εⁿ with respect to the optical parameter corresponding to the term Fwⁿ(r) is defined, such that:

Gwⁿ(r) = ∂εⁿ/∂Fwⁿ(r) (11)
A gradient-descent algorithm then defines a direction dwⁿ and a step size of advance σwⁿ. The term Fw(r) of each parameter vector is updated according to the expression:

Fwⁿ⁺¹(r) = Fwⁿ(r) + dwⁿσwⁿ (12)
The validity indicator εⁿ is a scalar variable. However, it depends on the set ℱⁿ of parameter vectors from which it was established, by way of the image Î0ⁿ estimated in step 120.
The gradient Gwⁿ(r) may be defined for each term Fwⁿ(r) of the vectors Fⁿ(r) considered in the iteration of rank n.
According to a first embodiment, the validity indicator takes into account only the error criterion: εⁿ = ε0|n.
In one variant, detailed below, the validity indicator also comprises a morphological criterion, allowing geometric or optical constraints on the sample or on the particles forming the sample to be taken into account.
Step 150: Updating the parameters of the sample, forming the vectors Fⁿ(r), by minimizing the validity indicator εⁿ. The parameters are updated by applying expression (12).
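Steps 120 to 150 thus amount to minimizing the validity indicator with respect to the parameter maps. One convenient way to prototype the gradients Gwⁿ(r) of expression (11) is automatic differentiation; the sketch below uses PyTorch, which is an assumption of this example (the description does not prescribe a library), takes the squared-deviation criterion of expression (10) as the validity indicator, and uses placeholder tensors for the measured hologram I0, the blank amplitude b and the precomputed kernel spectrum h_fft.

```python
import torch

# Assumed inputs of the sketch (placeholders; in practice these come from the
# acquisition of step 100 and from the calibration and propagation kernel):
I0 = torch.ones(512, 512)                            # measured hologram
b = torch.ones(512, 512)                             # amplitude b(r) of the incident wave
h_fft = torch.ones(512, 512, dtype=torch.complex64)  # FFT of the propagation kernel h
wavelength = 450e-9                                  # placeholder wavelength (m)

def estimate_hologram(alpha, L, b, h_fft, wavelength):
    """Forward model m' of expression (5'): parameter maps -> estimated hologram."""
    A10 = b * torch.exp(-alpha) * torch.exp(2j * torch.pi * L / wavelength)  # expression (1)
    A0 = torch.fft.ifft2(torch.fft.fft2(A10) * h_fft)                        # expression (2)
    return torch.abs(A0) ** 2                                                # expression (4)

# Step 110: arbitrary initialization of the parameter maps (here, zeros);
# requires_grad lets autograd supply the gradients of expression (11).
alpha = torch.zeros(512, 512, requires_grad=True)
L = torch.zeros(512, 512, requires_grad=True)
optimizer = torch.optim.SGD([alpha, L], lr=1e-2)     # realizes the update of expression (12)

for n in range(100):                                 # iterations of steps 120 to 150
    optimizer.zero_grad()
    I0_hat = estimate_hologram(alpha, L, b, h_fft, wavelength)   # step 120
    # Steps 130-140: squared-deviation error criterion of expression (10).
    validity = torch.sum((torch.sqrt(I0_hat) - torch.sqrt(I0)) ** 2)
    validity.backward()                              # gradients Gw(r) of expression (11)
    optimizer.step()                                 # step 150: update the parameters
```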
Step 160: new iteration of steps 120 to 150, taking into account, in step 120 of the following iteration (n+1), the set ℱⁿ updated in step 150 of the iteration carried out last.
Steps 120 to 160 are iterated until the value of the validity indicator εⁿ is considered to be representative of a good description of the sample by the set ℱⁿ of vectors Fⁿ(r). N designates the rank of the last iteration.
Taking into account an indicator such as defined in equations (10) and (13), the iterations cease when the value of the validity indicator is sufficiently low, or when a preset number of iterations has been reached, or when the validity indicator no longer varies significantly between two successive iterations.
Following the last iteration, parameters Fᴺ(r) of the sample are obtained. However, when the phase of the exposure light wave 14 depends on some of the parameters, such as the optical path difference, an indeterminacy may remain, because of the λ-periodicity described above.
Step 170: updating all or some of the parameters of the sample using a supervised machine learning algorithm.
At the end of step 160, vectors of parameters Fᴺ(r), defined for each radial position r and forming a set ℱᴺ of parameters, are obtained. Certain terms of these vectors, or even all of them, may be used as input data of a supervised machine learning algorithm. The set of parameters ℱᴺ contains R vectors Fᴺ(r), each vector containing W terms Fwᴺ(r); R designates the number of radial positions in question. At each radial position r, certain ones of these terms, or even all of them, may form the input data of the algorithm, as described below.
The machine learning algorithm may be a neural network, for example a convolutional neural network (CNN).
In this example, the neural network comprises two input layers IN. Each input layer represents a spatial distribution (or image) of a parameter Fwᴺ describing the sample, as updated in the last iteration of steps 120 to 160 preceding step 170. In this example, the first input layer represents a distribution of the first parameter F1ᴺ(r), in the present case the absorbance αᴺ(r), in the sample plane, whereas the second input layer represents a distribution of the second parameter F2ᴺ(r), in the present case the optical path difference Lᴺ(r), in the sample plane.
Generally, the algorithm is applied to at least one input layer IN, corresponding to a spatial distribution of a parameter Fwᴺ of rank w in the sample plane, resulting from the last iteration n=N of steps 120 to 160. In the example in question, the two spatial distributions of the parameters F1ᴺ and F2ᴺ resulting from the last iteration N of steps 120 to 160 are used as input layers.
Between the input layers IN and the output layer OUT, the neural network comprises 20 layers L1, L2 . . . L20, the ranks of which are comprised between 1 (layer adjacent to the layer IN) and 20 (layer adjacent to the layer OUT). Each layer contains 20 planes. A layer is obtained by convolving the 20 planes of the layer of preceding rank with a convolution kernel of 3×3 size. The layer IN is considered to be the layer of rank 0. The neural network may comprise one or more output layers OUT. Each output layer represents an image of a parameter, in the sample plane. In this example, the neural network comprises only a single output layer, corresponding to an image, called the output image, of the second parameter F2, i.e. the optical path difference L, in the sample plane. The output image comprises parameters, called output parameters, that are updated by the algorithm.
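The architecture just described may be read, for example, as the following PyTorch sketch; the class name is ours and the ReLU activations are an assumption, the description specifying only the number of layers and planes and the 3×3 kernels.

```python
import torch
import torch.nn as nn

class PhaseUnwrapCNN(nn.Module):
    """Sketch of the described network: two input planes (absorbance and optical
    path difference), 20 convolutional layers of 20 planes with 3x3 kernels,
    and one output plane (the updated optical path difference)."""
    def __init__(self, depth=20, planes=20):
        super().__init__()
        layers = [nn.Conv2d(2, planes, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(planes, planes, kernel_size=3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(planes, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 2, H, W) input layers IN -> (batch, 1, H, W) output layer OUT
        return self.net(x)

# Example call: stack the images of the two parameters as the two input layers.
# x = torch.stack([alpha_N, L_N]).unsqueeze(0); L_updated = PhaseUnwrapCNN()(x)
```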
Alternatively, it is possible to employ other neural-network architectures, or even other supervised machine learning algorithms, for example a support-vector-machine (SVM) algorithm.
Following step 170, the following will have been obtained, for each radial position:
The index ° represents the fact that the parameters have been updated by the CNN algorithm.
According to one embodiment, the algorithm allows all of the parameters to be updated.
In this example, step 170 allows vectors F°(r), forming a set ℱ° of parameters updated by the algorithm, to be generated.
The convolutional neural network (CNN) will have been trained beforehand, using well-characterized training data, as described below (see step 90).
Step 180: reiterating steps 120 to 160 taking into account the output parameters resulting from step 170 in order to form a complex initialization image. In this step, from the parameters of the sample resulting from step 170, the initialization image A10⁰,° is updated. In this example, the initialization image A10⁰,° is established, based on the values of α(r) and of L°(r) resulting from step 170, using expression (1).
The iterations of steps 120 to 160 are then repeated. In the first step 120 of the reiteration, the initialization image A10⁰,° resulting from step 180 is used. Following the iterations of steps 120 to 160:
Step 190: exiting the algorithm.
In this step, an image of a parameter is formed, this corresponding to a spatial distribution of the parameter in question. This allows a representation of the sample to be obtained. In this step, various images of various parameters may respectively be formed. In the described example, an image of the optical path difference and/or an image of the absorbance may be formed. These images may allow the sample to be characterized.
Thus, generally, the method comprises:
Thus, it is possible to attribute a rank x to each series of iterations of steps 120 to 160. In the first series of iterations, x=1. The series of iterations of rank x allows a set of parameters ℱN,x to be obtained. All or some of the parameters of ℱN,x may be updated by the machine learning algorithm, so as to obtain updated parameters ℱN,x,°. The latter form the initialization parameters used in the series of iterations of rank x+1.
The last series of iterations of steps 120 to 160, of rank X, allows parameters ℱN,X to be obtained, the latter allowing at least one image of a parameter of the sample to be obtained (step 190). The image of the parameter of the sample, or each such image, may allow the sample to be characterized.
The number of series of iterations of steps 120 to 160 to be performed may be defined a priori, on the basis of calibrations using calibration samples considered to be comparable to the analyzed sample. The number of series of iterations of steps 120 to 160 may also be established case by case, for example by comparing the parameters resulting from two different series. Steps 170 and 180 are repeated as many times as the number of series of iterations of steps 120 to 160, minus 1.
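Schematically, the overall alternation may be written as follows; the function names are placeholders of ours (reconstruct() standing for one series of iterations of steps 120 to 160, cnn() for the trained network of step 170), not identifiers from the description.

```python
def observe_sample(I0, initialize, reconstruct, cnn, n_series):
    """Alternate holographic reconstruction and machine learning updates.

    initialize: returns the initial parameter maps (step 110)
    reconstruct: one series of iterations of steps 120 to 160
    cnn: the trained network of step 170, updating the optical path difference
    """
    alpha, L = initialize()
    for x in range(n_series):                 # series of iterations of rank x
        alpha, L = reconstruct(alpha, L, I0)  # steps 120 to 160
        if x < n_series - 1:                  # steps 170-180: every series but the last
            L = cnn(alpha, L)                 # updated parameters re-initialize the next series
    return alpha, L                           # step 190: images of the parameters
```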
One advantageous aspect of the invention is that the parameters resulting from the neural network are used, not to obtain a final representation of the sample, but to initialize a holographic reconstruction algorithm. This allows the performance associated with the use of machine learning to be exploited, while taking into account the measured data.
Use of the CNN algorithm employed in step 170 assumes a prior phase of training with known samples, for example digital samples obtained by simulation. The training samples must preferably be representative of the samples analyzed subsequently. The training is the subject of step 90.
Step 90: training.
The inventors have simulated 1000 samples: they have thus established images of absorbance and of optical path difference for the 1000 simulated samples. Each image extends over 1000×1000 pixels.
For each image, representing a spatial distribution of the parameters of the sample in the sample plane, images acquired by the image sensor were simulated using a model such as described with reference to expression (5).
To each simulation of an image acquired by the image sensor, an iterative reconstruction algorithm, such as described with reference to steps 120 to 160, was applied so as to obtain complex images of training samples, in the sample plane. From the reconstructed complex images, an image of each parameter in the sample plane was obtained.
Considering the 1000 digital samples simulated and, for each simulated sample, one reconstructed image of the absorbance and one reconstructed image of the optical path difference, a total of 2000 reconstructed images was obtained. These reconstructed images were subdivided into thumbnails, which were used as input data for training the convolutional neural network.
From the simulated parameters, the output training data were established, namely, for each simulated sample, the ground-truth image of the optical path difference that the network is trained to output.
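For illustration, such a training phase could look like the sketch below, reusing the PhaseUnwrapCNN class sketched earlier; the MSE loss, the Adam optimizer and the training_loader (batches pairing reconstructed (α, L) thumbnails with ground-truth optical-path-difference images) are assumptions of this example.

```python
import torch
import torch.nn as nn

def train(model, training_loader, epochs=50, lr=1e-4):
    """Step 90 sketch: regress the simulated ground-truth optical path difference
    from the reconstructed (absorbance, optical path difference) thumbnails."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for inputs, targets in training_loader:   # (B, 2, H, W) -> (B, 1, H, W)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
```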
Variant
According to one variant, the validity indicator εⁿ also takes into account a morphological criterion ε10|n, allowing geometric or optical constraints on the sample or on the particles forming the sample to be taken into account. Unlike the error criterion ε0|n, which is defined on the basis of data measured or estimated in the detection plane P0, the morphological criterion ε10|n is defined in the plane P10 of the sample.
Generally, the morphological criterion ε10|n depends on the value of the terms of the vectors of parameters determined in step 110 or in step 150 of a preceding iteration, or on their spatial derivatives. It is representative of the morphology of the sample, as determined from the vectors of parameters. In other words, the morphological criterion is a criterion enabling consistency to be achieved with morphological data of the sample, the latter possibly being defined by hypotheses.
The morphological criterion may take into account a spatial derivative of the optical path difference, so as to take into account a predefined shape of a particle. For example, when the sample contains adherent cells, the predefined shape may be a hemisphere, such a particular case being shown in
For example, if the complex amplitude of the exposure light wave 14 is defined using expression (1), each parameter vector contains a term Lⁿ(r), and an example of a morphological criterion ε10|n is a term penalizing the spatial derivatives of Lⁿ(r), as sketched after expression (13) below.
This criterion tends to decrease when the quantity Lⁿ(r) exhibits a minimum of oscillations, this being the case for example when the particles have a spherical or hemispherical morphology. The values of Lⁿ(r) for which the criterion is minimal therefore correspond to particles, for example spherical or hemispherical particles, that are isolated from one another, with a minimum of oscillation of Lⁿ(r) between the particles or over the latter.
The morphological criterion ε10|n is minimal when the vectors of parameters Fⁿ(r) forming the set ℱⁿ describe objects meeting morphological hypotheses established beforehand.
When the validity indicator εⁿ takes into account the morphological criterion ε10|n, it may be defined in the form of a weighted sum of the error criterion ε0|n and of the morphological criterion ε10|n. The expression of the validity indicator may then be, for example:

εⁿ = ε0|n + γ·ε10|n (13)

where γ is a positive scalar.
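One plausible reading of such a morphological criterion, penalizing oscillations of Lⁿ(r) through its spatial derivatives, is sketched below together with the weighted sum of expression (13); the total-variation form is our assumption, the description giving only the qualitative behavior of the criterion.

```python
import torch

def morphological_criterion(L):
    """Penalize oscillations of L(r) via the L1 norm of its spatial derivatives
    (a total-variation-like term; one plausible reading of the criterion)."""
    dLx = L[:, 1:] - L[:, :-1]
    dLy = L[1:, :] - L[:-1, :]
    return torch.sum(torch.abs(dLx)) + torch.sum(torch.abs(dLy))

def validity_indicator(I0_hat, I0, L, gamma):
    """Expression (13): weighted sum of the error criterion of expression (10)
    and of the morphological criterion."""
    error = torch.sum((torch.sqrt(I0_hat) - torch.sqrt(I0)) ** 2)
    return error + gamma * morphological_criterion(L)
```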
Application of the Method to Real Data.
The method described with reference to steps 100 to 190 has been implemented using a sample containing floating Chinese-hamster-ovary (CHO) cells. The cells bathed in a culture medium, in a fluidic chamber of 20 μm thickness.
The light source was a light-emitting diode, the emission spectral band of which was centered on 450 nm, and which was filtered by a diaphragm of 50 μm. The image sensor was a monochromatic sensor comprising 3240×2748 pixels of 1.67 μm side length.
The image L°(r) of
The reiteration of steps 120 to 160 allowed second images of the parameters α(r) and L(r) to be defined. These images were used as input images of the convolutional neural network.
In a third series of trials, a sample containing adherent PC12 cells in a culture medium was used. Another neural network was used, the training of which was based on models of samples containing adherent cells.
The source used was an LED emitting in a spectral band centered on 605 nm, and of spectral width equal to 20 nm. The sample was placed at a distance of 3500 μm from the image sensor.
The method described above may allow the sample to be characterized, on the basis of the parameters determined subsequent to step 180. By characterized, what is notably meant is, non-exhaustively:
The invention will possibly be applied to biological samples, in the health field, for example to assist with diagnostics, or in the study of biological processes. It may also be applied to samples sampled from the environment, or from industrial installations, for example in the food-processing field.