The invention relates to the field of analysis of cells, and more precisely to the inspection of the proliferation of cells, in incubators or biological reactors.
The inspection of the development of cells in incubators or biological reactors is an essential step in the process of producing cells. In these applications, the cells are placed in a culture medium conducive to their development.
Their number and their state, and in particular whether they are alive or dead, are regularly inspected. These inspecting operations require the use of a microscope, the cells being marked beforehand using a fluorescent tag or a chromophore, the level of fluorescence of cells varying depending on whether they are alive or dead. Such a method has certain drawbacks: firstly, it requires the use of a microscope, a piece of equipment that is costly and bulky. In addition, since the field of observation is small, the analysis of a spatially extensive sample requires time because it is necessary to move the sample in front of the microscope. Moreover, marking cells with a fluorescent label or a chromophore may have consequences on their development.
One of the pursued avenues of research is the use of simple optical methods, such as lensless imaging. The observation of biological particles by lensless imaging has seen a certain amount of development since the late 2000s. This technique consists in placing a sample between a light source and a matrix-array photodetector or image sensor. The image captured by the photodetector is formed by interference between the incident wave, produced by the light source, and the wave diffracted by the particles making up the sample. This image is frequently referred to as a “hologram”. Thus, for each particle, it is possible to record, on the sensor, a diffraction pattern that is specific thereto. Applied to biological samples, this technique has been described in document WO2008090330. It is then possible to perform a simple analysis of each particle, by comparing the diffraction pattern that it generates with diffraction patterns established beforehand and corresponding to known particles. However, this method reaches its limits as the particle concentration increases.
It is possible to apply mathematical techniques, referred to as digital holographic reconstruction techniques, in order to construct what is called a complex image of each particle present in the sample. This type of technique consists in back-propagating the light wave to the object plane, in which the particles are located, said object plane being located a known distance from the image sensor. Applications to the characterization of cells on the basis of a reconstructed complex image have been described in documents US2012/0148141 and WO2014/012031, the cells being spermatozoa. However, these methods are limited to estimating the properties of said cells, and their path, from the reconstructed complex image. A complex image of a sample may be insufficient to identify a particle.
Therefore what is sought is a method for observing cells, and in particular a means for discriminating living and dead cells, which is simple, inexpensive, reliable, does not require cells to be marked and has an extensive field of observation.
The invention responds to this problem by providing a method for determining the state of a cell, said cell being placed in a sample, the method including the following steps:
The profile is defined by the values of the characteristic quantity determined at said plurality of distances.
In particular, the preset states may comprise a living state and a dead state. The method is then able to classify an examined cell and determine whether it is dead or alive.
By applying a digital reconstruction algorithm, what is meant is the application of a propagation operator to an image, generally in the form of a convolution product.
Each characteristic quantity is in particular obtained by estimating, at said reconstruction distance, a complex expression of the light wave to which the matrix-array photodetector is exposed. The characteristic quantity may be obtained from the modulus or argument of said complex expression.
The classification may be carried out by comparing said variation in said characteristic quantity to preset reference profiles.
According to one embodiment, the method includes:
The method may then include:
The reference complex image may be a complex image formed in a reconstruction plane that is away from the plane of the sample. It may also be a question of a complex image formed in the detection plane.
The method may comprise a step of reconstructing an image of said characteristic quantity in a plane parallel to the detection plane, at said reconstruction distance; the value of said characteristic quantity at the position of the cell, at said reconstruction distance, is then determined from this image.
The position of each cell, in a plane parallel to the detection plane, may be determined using the image acquired by the matrix-array photodetector or using a reconstructed image such as described in the preceding paragraph.
The light source may be a spatially coherent source. It may in particular be a question of a light-emitting diode. The light source may also be temporally coherent; it may in particular be a question of a laser diode.
The matrix-array photodetector or image sensor includes a matrix array of pixels that are able to collect the light wave to which the photodetector is exposed. The distance between the pixels and the sample may vary between 50 μm and 2 cm, and preferably between 100 μm and 5 mm. Preferably the sample is not placed in direct contact with the pixels of the matrix-array photodetector.
Preferably, no magnifying optics are placed between the sample and the matrix-array photodetector.
Another subject of the invention is a device for discriminating a living cell from a dead cell, said cell being placed in a sample, the device comprising:
Preferably, the device includes no magnifying optics between the photodetector and the analyzed sample.
The sample may be placed in a transparent chamber, placed between the photodetector and the light source.
Another subject of the invention is an incubator, intended for the growth of cells, comprising a device such as described above.
The distance Δ between the light source and the sample is preferably larger than 1 cm. It is preferably comprised between 2 and 10 cm and is typically 5 cm. Preferably, the light source, seen by the sample, may be considered to be point-like. This means that its diameter (or its diagonal) must be smaller than one fifth, and better still one tenth, of the distance between the sample and the light source. Thus, the light reaches the sample in the form of plane waves, or waves that may be considered as such.
The light source 11 may be a point source, or be associated with a diaphragm (not shown in
The diaphragm may be replaced by an optical fiber, a first end of which is placed facing a light source, and a second end of which is placed facing the sample. In this case, said second end may be likened to a point light source 11.
The sample 14 is bounded by a chamber, including a base 15 and a cover 13. The side walls of the chamber have not been shown. Typically, the chamber is a Petri dish or a well of a multi-well plate. In the example considered here, the base 15 and the cover 13 consist of two transparent slides that are a distance of 100 μm apart. The distance d between the cells 1, 2, 3, 4, 5 and the photodetector 16 is equal to 3450 μm.
Generally, the thickness of the chamber, along the propagation axis Z, is preferably smaller than a few cm, for example smaller than 5 cm, or even smaller than 1 cm.
The light source 11 may be temporally coherent but this is not necessary.
In this example, the light source is an OSRAM light-emitting diode, of reference LA E67B-U2AA-24-1. It is located a distance Δ equal to 5 cm from the sample.
The sample 14 is placed between the light source 11 and a matrix-array photodetector 16. The latter lies in a detection plane P that preferably extends parallel, or substantially parallel, to the base 15 of the chamber bounding the sample. The detection plane P preferably lies perpendicular to the propagation axis Z.
The expression substantially parallel means that the two elements may not be rigorously parallel, an angular tolerance of a few degrees, smaller than 10°, being acceptable.
Preferably, the light source is of small spectral width, for example of spectral width smaller than 200 nm or even 100 nm or indeed 25 nm. The expression spectral width designates the full width at half maximum of the emission peak of the light source.
The photodetector 16 may be a matrix-array photodetector including a matrix array of CCD or CMOS pixels. CMOS photodetectors are preferred because their smaller pixels allow images of more favorable spatial resolution to be acquired. In this example, the detector is a 12-bit APTINA sensor of reference MT9P031, an RGB CMOS sensor with an inter-pixel pitch of 2.2 μm. The useful area of the photodetector is 5.7×4.3 mm². Photodetectors with an inter-pixel pitch smaller than 3 μm are preferred, because they allow images with a satisfactory spatial resolution to be obtained.
Preferably, the photodetector comprises a matrix array of pixels, above which a transparent protective window is placed. The distance between the matrix array of pixels and the protective window is generally comprised between a few tens of μm and 150 to 200 μm.
Generally, and whatever the embodiment, the distance d between a particle and the pixels of the photodetector is preferably comprised between 50 μm and 2 cm and preferably comprised between 100 μm and 2 mm.
The absence of magnifying optics between the matrix-array photodetector 16 and the sample 14 will be noted. This does not preclude the optional presence of focusing micro-lenses at each pixel of the photodetector 16.
In this first example, the culture medium is Dulbecco's Modified Eagle's Medium (DMEM). The sample also contains 3T3 fibroblast cells, the concentration of which is about 0.5×10⁶ cells per ml.
Each elementary diffraction pattern (31, . . . 35) is formed by the interference between the incident light wave 12 produced by the source 11, upstream of the sample, and a diffraction wave produced by diffraction of this incident wave by each cell (1, . . . ,5). Thus, the photodetector 16 is exposed to a light wave 22 formed by the superposition:
A processor 20 receives the images from the matrix-array photodetector 16 and reconstructs characteristic quantities of the light wave 22 to which the photodetector is exposed, along the propagation axis Z. The reconstruction is in particular carried out between the photodetector and the observed sample. The processor 20 may be able to execute a sequence of instructions stored in a memory, in order to implement steps of the identifying method. The processor 20 is connected to a memory 23 able to store instructions for implementing the calculating steps described in this application. It may be linked to a screen 25. The processor may be a microprocessor, or any other electronic computer able to process the images delivered by the matrix-array photodetector, in order to execute one or more steps described in this description.
The image shown in
According to well-known digital holographic reconstruction principles, which are described in the publication by Ryle et al, “Digital in-line holography of biological specimens”, Proc. Of SPIE Vol. 6311 (2006), it is possible to reconstruct a complex expression U(x,y,z) for the light wave 22 at any point of spatial coordinates (x,y,z), and in particular in a plane located a distance |z| from the photodetector, and parallel to the plane P in which the photodetector lies, by determining the convolution product of the intensity I(x,y) measured by the photodetector and a propagation operator h(x,y,z).
The function of the propagation operator h(x,y,z) is to describe the propagation of the light between the photodetector 16 and a point of coordinates (x,y,z). It is then possible to determine the amplitude u(x,y,z) and the phase φ (x,y,z) of this light wave 22 at this distance |z|, which is called the reconstruction distance, where:
u(x,y,z)=abs [U(x,y,z)]
φ(x,y,z)=arg [U(x,y,z)]
The operators abs and arg return the modulus and argument, respectively.
Application of the propagation operator in particular allows the complex expression U(x,y,z) to be estimated at a distance |z| from the photodetector, upstream of the latter. The complex value of the light wave 22 before the latter reaches the detector is thus reconstructed. Back-propagation is then spoken of. If the coordinate z=0 is attributed to the detection plane P, this back-propagation is implemented by applying a propagation operator h(x,y,−|z|).
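By way of illustration, the back-propagation described above may be sketched numerically, for example in Python with numpy; the function names and the numerical values below are illustrative assumptions, and the constant phase term e^(j2πz/λ) of the kernel is omitted, since it affects neither the modulus nor the relative phase of the reconstructed wave:

```python
import numpy as np

def fresnel_kernel(shape, pixel_pitch, wavelength, z):
    # Sampled Fresnel-Helmholtz kernel h(x, y, z); the constant phase term
    # exp(j*2*pi*z/lambda) is omitted (no effect on modulus or relative phase).
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z)) / (1j * wavelength * z)

def propagate(field, pixel_pitch, wavelength, z):
    # Convolution with h(x, y, z) computed via FFT; z < 0 back-propagates.
    h = fresnel_kernel(field.shape, pixel_pitch, wavelength, z)
    H = np.fft.fft2(np.fft.ifftshift(h)) * pixel_pitch**2
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagating the square root of a (here featureless) hologram by 3450 um:
I = np.ones((256, 256))                               # measured intensity I(x, y)
U = propagate(np.sqrt(I), 2.2e-6, 450e-9, -3450e-6)   # complex expression
amplitude, phase = np.abs(U), np.angle(U)             # u(x, y, z) and phi(x, y, z)
```

With z < 0, the same routine implements the back-propagation h(x, y, −|z|) described above.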
The terms upstream and downstream are to be understood with respect to the propagation direction of the incident wave 12.
If I(x,y)=I(x,y,z=0) corresponds to the intensity of the signal measured by the photodetector, the relationship between the measured intensity I(x,y) and the complex expression U(x,y) of the light wave, in the detection plane P, is given by: I(x,y)=|U(x,y)|2.
The complex expression of the light wave (22), at a coordinate (x,y,z) is given by
U(x, y, z) = √(I(x, y)) * h(x, y, z),
the symbol * representing a convolution operator, where:
In the half-space delineated by the detection plane P and comprising the sample 14, the complex expression of the light wave may also be written:
U(x, y, z) = √(I(x, y)) * h(x, y, −|z|)
Preferably mathematical preprocessing is applied beforehand to the measured intensity I(x,y), before the holographic reconstruction. This allows the quality of the results to be improved, in particular by decreasing the number of artefacts created when the propagation operator is applied.
Thus, an intensity Ī(x,y), called the normalized intensity, is determined, such that
Ī(x, y) = (I(x, y) − Average(I)) / Average(I)
where Average(I) designates the average of the intensity I(x, y) measured over the whole of the acquired image.
This pre-processing is equivalent to a normalization of the measured intensity by the intensity of the incident light wave 12, the latter being estimated by the quantity Average(I). It allows artefacts generated by the reconstruction process to be limited.
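As a minimal sketch (with illustrative pixel values), this normalization may be written:

```python
import numpy as np

def normalize_hologram(I):
    # Normalized intensity: (I - Average(I)) / Average(I).
    avg = I.mean()
    return (I - avg) / avg

I = np.full((4, 4), 200.0)
I[1, 1] = 100.0                 # a darker pixel, e.g. a diffraction fringe
I_bar = normalize_hologram(I)
```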
The digital reconstruction may in particular be based on the Fresnel diffraction model. In this example, the propagation operator is the Fresnel-Helmholtz function, such that:
h(x, y, z) = (1/(jλz)) e^(j2πz/λ) exp(jπ (x² + y²)/(λz))
where λ is the wavelength. Thus,
U(x, y, z) = (1/(jλz)) e^(j2πz/λ) ∬ √(I(x′, y′)) exp(jπ ((x − x′)² + (y − y′)²)/(λz)) dx′ dy′
where x′ and y′ designate coordinates in the detection plane.
From values of the complex expression U(x,y,z), it is possible to extract characteristic quantities of the light wave 22 resulting from the diffraction, by the particles (1,2 . . . 9), of the incident light wave 12 emitted by the source 11. As mentioned above, it is possible to evaluate the amplitude u(x,y,z) or the phase φ(x,y,z), but it is also possible to evaluate any function of the amplitude or phase.
It is for example possible to evaluate a characteristic quantity that is called the complementary amplitude ũ(x, y, z) such that:
ũ(x, y, z)=abs(1−U(x, y, z))
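The passage from the reconstructed complex expression to these characteristic quantities may be sketched as follows (the pixel values being illustrative):

```python
import numpy as np

def characteristic_quantities(U):
    # Characteristic quantities derived from the complex expression U(x, y, z).
    return {
        "amplitude": np.abs(U),                       # u = abs(U)
        "phase": np.angle(U),                         # phi = arg(U)
        "complementary_amplitude": np.abs(1 - U),     # u~ = abs(1 - U)
    }

U = np.array([[1.0 + 0.0j, 0.8 - 0.2j]])   # two illustrative pixel values
q = characteristic_quantities(U)
```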
From each reconstructed complex expression U(x,y,z), it is possible to form:
In each reconstructed image φz, an elementary diffraction pattern (31, 32, 33, 34, 35) corresponding to each cell (1, 2, 3, 4, 5) of the sample may be seen, the central portion of each pattern allowing the respective coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5) of cells 1 to 5 in the detection plane P to be determined. The value of the phase φ(x1, y1, z), φ(x2, y2, z), φ(x3, y3, z), φ(x4, y4, z), φ(x5, y5, z) at the various values of z in question is determined:
Thus, in the plane |z|=3450 μm, corresponding to the plane in which the cells are actually located (z=d), the phase of the reconstructed light wave 22 passing through cells 1, 2 and 3, respectively, is negative, whereas the phase of the reconstructed light wave 22 passing through cells 4 and 5, respectively, is positive.
Moreover, following these reconstructions, the cells were treated with Trypan blue, then observed using a microscope at 10× magnification. Trypan blue is a dye commonly used in the field of cell viability. The cells referenced 1, 2 and 3 appeared to be alive, whereas the cells referenced 4 and 5 were dyed blue, indicating dead cells. These observations serve as a reference measurement in the analyses detailed below.
By reconstructing an image of the radiation to which the detector is exposed in the plane containing the cells (z=3450 μm), and by identifying, in this reconstructed image, the position of each cell, it is possible to discriminate living cells (negative phase) from dead cells (positive phase).
Thus, it is possible to establish a profile representing the variation in the phase of the wave 22 to which the detector is exposed along an axis, parallel to the propagation axis Z, passing through each cell. This profile may then be used to perform a classification between a living cell and a dead cell. This profile may in particular be compared to a library of profiles produced with “standard” cells the state of which is known. In other words, the profile representing the variation in the phase along the propagation axis of the light wave forms a signature of the state of the cell.
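Assuming such a library of reference profiles is available, the comparison may, for example, be carried out by a nearest-profile rule; the profiles, labels and distance metric below are hypothetical illustrations, not part of the method as claimed:

```python
import numpy as np

def classify_profile(profile, reference_profiles):
    # Assign the state of the reference profile closest to the measured
    # profile (Euclidean distance); profiles are sampled at the same z values.
    best_label, best_dist = None, np.inf
    for label, ref in reference_profiles.items():
        dist = np.linalg.norm(np.asarray(profile) - np.asarray(ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical reference phase profiles phi(z) for "standard" cells:
references = {
    "alive": np.array([-0.2, -0.4, -0.6, -0.4]),
    "dead":  np.array([ 0.2,  0.4,  0.6,  0.4]),
}
state = classify_profile([-0.1, -0.3, -0.5, -0.3], references)
```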
Reconstructing a characteristic quantity of the wave 22 resulting from diffraction by a particle and the incident wave 12 not at a single reconstruction distance, but along the propagation axis of the incident wave, at a plurality of reconstruction distances, allows richer information to be obtained. This allows the various states of a cell to be reliably classified. Moreover, this makes it possible to avoid needing to know the precise distance separating a cell to be characterized from the photodetector.
Another indicator may be the distance |z0| at which the phase value φ(xn, yn,z) passes through zero, a cell being considered to be alive if |z0| is lower than the distance d actually separating the cell from the photodetector (in the present case d=3450 μm), and dead in the contrary case.
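A minimal sketch of this zero-crossing indicator, with hypothetical sampled phase values and linear interpolation between reconstruction distances:

```python
import numpy as np

def zero_crossing_distance(z_values, phase_profile):
    # Smallest |z| at which the phase profile changes sign, found by linear
    # interpolation between samples; returns None if no sign change occurs.
    for i in range(len(phase_profile) - 1):
        p0, p1 = phase_profile[i], phase_profile[i + 1]
        if p0 == 0:
            return z_values[i]
        if p0 * p1 < 0:
            t = p0 / (p0 - p1)
            return z_values[i] + t * (z_values[i + 1] - z_values[i])
    return None

d = 3450.0                                     # um, actual cell-to-detector distance
z = np.array([3000.0, 3200.0, 3400.0, 3600.0])
z0 = zero_crossing_distance(z, np.array([-0.6, -0.3, 0.2, 0.5]))
alive = bool(z0 is not None and z0 < d)        # the rule stated above
```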
From
It is therefore possible to establish a profile representing the variation in the complementary amplitude ũ of the light wave 22 to which the detector is exposed, along the propagation axis Z and passing through each cell, and to use this profile to perform a classification between a living cell and a dead cell. This profile may in particular be compared to a library of profiles produced with “standard” cells the state of which is known. In other words, the profile representing the variation in the complementary amplitude ũ along the propagation axis forms a signature of the state of the cell.
In a second example, the device is similar to that implemented above. The characterized cells are PC12 cells. Just as in the first example above, an image was acquired on the matrix-array photodetector, in an identical configuration to the configuration shown in
A reference measurement was then carried out, using staining with Trypan blue, allowing dead cells D and living cells A to be identified.
Thus there are three criteria for classifying a cell: the value of the phase at |z| = d, the variation in the phase as a function of z, and the value |z0| at which the phase of the complex expression of the reconstructed wave 22 is zero.
In a third example, the device is similar to that implemented above. The characterized cells are CHO cells (CHO standing for Chinese hamster ovary—cell line derived from the ovary of the Chinese hamster). Just as in the two examples above, an image is acquired on the matrix-array photodetector, in an identical configuration to the configuration shown in
A reference measurement was then carried out, using staining with Trypan blue, allowing dead cells D and living cells A to be identified.
According to one variant, the classification between a living cell and a dead cell is achieved by combining, for a given height z, various parameters of the light radiation 22 to which the detector is exposed. According to one example, the phase φ(x,y,z) and the complementary amplitude ũ (x,y,z) are determined along the propagation axis Z, the classification being achieved using the ratio of these two parameters.
the term k(x6, y6, z) representing the ratio
determined in a portion 6 of the sample free of cells. This ratio may be called the reference ratio.
This figure shows the variation in the composite quantity k(xn,yn,z) for n cells, each cell n being identified by its position in the plane (xn,yn) of the photodetector.
The value of the composite quantity, at a given reconstruction distance z, is systematically higher for living cells than for dead cells. It is thus possible to define a threshold kthreshold(z), such that if k(xn, yn, z) ≥ kthreshold(z), the cell centered on the position (xn, yn), in the plane P, is classified as living, and as dead in the contrary case.
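The exact expression of the composite quantity is the ratio defined above; the formulation below is a plausible stand-in (the normalization by a cell-free reference ratio being an assumption), used here only to illustrate the threshold rule:

```python
import numpy as np

def composite_quantity(comp_amp, phase, comp_amp_ref, phase_ref):
    # Hypothetical composite quantity: ratio of the complementary amplitude
    # to the phase at the cell, normalized by the same ratio k(x6, y6, z)
    # measured in a cell-free portion of the sample.
    return (comp_amp / phase) / (comp_amp_ref / phase_ref)

def classify_cell(k_value, k_threshold):
    # Threshold rule from the text: k >= k_threshold -> living, else dead.
    return "alive" if k_value >= k_threshold else "dead"

k = composite_quantity(0.9, 0.3, 0.5, 0.5)   # cell ratio 3.0, reference ratio 1.0
state = classify_cell(k, 1.5)
```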
Application of a digital propagation operator h to an image I, or hologram, acquired by a matrix-array photodetector 16 may have certain limits, because the acquired image includes no phase-related information. Thus, before the profile is established, it is preferable to obtain information relating to the phase of the light wave 22 to which the photodetector 16 is exposed. This phase-related information may be obtained by reconstructing a complex image Uz of the sample 14, using methods described in the prior art, so as to obtain an estimation of the amplitude and phase of the light wave 22 in the plane P of the matrix-array photodetector 16 or in a reconstruction plane Pz located at a distance |z| from the latter. The inventors have developed a method based on the calculation of a reference complex image, which method is described with reference to
The algorithm presented in
Step 100: Image Acquisition
In this step, the image sensor 16 acquires an image I of the sample 14, and more precisely of the light wave 22 transmitted by the latter, to which light wave the image sensor is exposed. Such an image, or hologram, is shown in
This image was produced using a sample 14 including Chinese hamster ovary (CHO) cells immersed in a saline buffer, the sample being contained in a fluidic chamber of 100 μm thickness placed at a distance d of 1500 μm from a CMOS sensor. The sample was illuminated with a light-emitting diode 11, the spectral emission band of which was centered on a wavelength of 450 nm, and which was located at a distance D = 8 cm from the sample.
Step 110: Initialization
In this step, an initial image U0k=0 of the sample 14 is defined from the image I acquired by the image sensor 16. This step initializes the iterative algorithm described below with regard to steps 120 to 180, the exponent k indicating the rank of each iteration. The modulus u0k=0 of the initial image U0k=0 may be obtained by applying the square-root operator to the image I acquired by the image sensor, in which case u0k=0 = √I.
The phase φ0k=0 of the initial image U0k=0 is either considered to be zero in each pixel (x,y), or preset to an arbitrary value. Specifically, the initial image U0k=0 results directly from the image I acquired by the matrix-array photodetector 16. However, the latter includes no information relating to the phase of the light wave 22 transmitted by the sample 14, the image sensor 16 being sensitive only to the intensity of this light wave.
Step 120: Propagation
In this step, the image U0k−1 obtained in the plane of the sample is propagated to a reconstruction plane Pz, by applying a propagation operator such as described above, so as to obtain a complex image Uzk, representative of the sample 14, in the reconstruction plane Pz. The propagation is carried out by convoluting the image U0k−1 with the propagation operator h−z, such that:
Uzk = U0k−1 * h−z
the symbol * representing a convolution operator. The index −z represents the fact that the propagation is carried out in a direction opposite to that of the propagation axis Z. This is referred to as back-propagation.
In the first iteration (k=1), U0k=0 is the initial image determined in step 110. In the following iterations, U0k−1 is the complex image in the detection plane P updated in the preceding iteration.
The reconstruction plane Pz is a plane away from the detection plane P, and preferably parallel to the latter. Preferably, the reconstruction plane Pz is a plane P14 in which the sample 14 lies. Specifically, an image reconstructed in this plane allows a generally high spatial resolution to be obtained. It may also be a question of another plane, located a nonzero distance from the detection plane, and preferably parallel to the latter, for example a plane lying between the matrix-array photodetector 16 and the sample 14.
Step 130: Calculation of an Indicator in a Plurality of Pixels
In this step, a quantity εk(x,y) associated with each pixel of a plurality of pixels (x,y) of the complex image Uzk is calculated, preferably in each of these pixels. This quantity depends on the value Uzk(x,y) of the image Uzk, or of its modulus, in the pixel (x,y) for which it is calculated. It may also depend on a dimensional derivative of the image in this pixel, for example the modulus of a dimensional derivative of this image.
In this example, the quantity associated with each pixel (x,y) is based on the modulus of a dimensional derivative, such that:
εk(x, y) = √( |∂Uzk/∂x (x, y)|² + |∂Uzk/∂y (x, y)|² )
Since the image is discretized into pixels, the derivative operators may be replaced by Sobel operators, such that:
εk(x, y) = √( |(Uzk * Sx)(x, y)|² + |(Uzk * Sy)(x, y)|² )
where:
Sx = [[1, 0, −1], [2, 0, −2], [1, 0, −1]]
and Sy is the transposed matrix of Sx.
Step 140: Establishment of a Noise Indicator Associated with the Image Uzk
In step 130, quantities εk(x,y) were calculated in a plurality of pixels of the complex image Uzk. These quantities may form a vector Ek, the terms of which are the quantities εk(x,y) associated with each pixel (x,y). In this step, an indicator, called the noise indicator, is calculated from a norm of the vector Ek. Generally, an order is associated with a norm, such that the norm ∥x∥p of order p of a vector x of dimension n of coordinates (x1, x2, . . . xn,) is such that: ∥x∥p=(Σi=1n|xi|p)1/p, where p≥0.
In the present case, a norm of order 1 is used, in other words p=1. Specifically, the inventors have estimated that a norm of order 1, or of order lower than or equal to 1, is particularly suitable for such a sample, as explained below.
In this step, the quantity εk(x,y) calculated from the complex image Uzk, in each pixel (x,y) of the latter, is summed so as to form a noise indicator εk associated with the complex image Uzk.
Thus,
εk=Σ(x,y)εk(x, y)
This noise indicator εk corresponds to a norm of the total variation in the complex image Uzk.
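This indicator may be sketched as follows (a naive Sobel convolution with edge padding is assumed; a noiseless flat image yields a zero indicator, whereas fluctuations increase it):

```python
import numpy as np

SX = np.array([[1., 0., -1.],
               [2., 0., -2.],
               [1., 0., -1.]])
SY = SX.T

def conv2_same(img, kernel):
    # Naive "same"-size 2D convolution with edge padding.
    k = kernel[::-1, ::-1]
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=complex)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + 3, j:j + 3] * k).sum()
    return out

def noise_indicator(Uz):
    # epsilon^k(x, y): modulus of the Sobel derivatives of the complex image;
    # epsilon^k: their sum over all pixels (norm of order 1).
    eps = np.sqrt(np.abs(conv2_same(Uz, SX))**2 + np.abs(conv2_same(Uz, SY))**2)
    return eps.sum()

flat = np.ones((16, 16), dtype=complex)          # fluctuation-free reconstruction
rng = np.random.default_rng(0)
noisy = flat + 0.3 * np.exp(1j * rng.uniform(0, 2 * np.pi, flat.shape))
```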
With reference to the example of
Because a norm of order 1, or of order lower than or equal to 1, is used, the value of the noise indicator εk decreases as the complex image Uzk becomes more and more representative of the sample. Specifically, in the first iterations, the value of the phase φ0k(x,y), in each pixel (x,y) of the image U0k is poorly estimated. Propagation of the image of the sample from the detection plane P to the reconstruction plane Pz is then accompanied by substantial reconstruction noise, as mentioned with regard to the prior art. This reconstruction noise takes the form of fluctuations in the reconstructed image. Because of these fluctuations, a noise indicator εk, such as defined above, increases in value as the contribution of the reconstruction noise, in the reconstructed image, increases. Specifically, the fluctuations due to the reconstruction noise tend to increase the value of this indicator.
An important aspect of this step consists in determining, in the detection plane P, phase values φ0k(x,y) for each pixel of the image of the sample U0k, this allowing, in a following iteration, a reconstructed image Uzk+1 to be obtained, the indicator εk+1 of which is lower than the indicator εk.
In the first iteration, as explained above, relevant information is available only on the intensity of the light wave 22 and not on its phase. The first image UZk=1 reconstructed in the reconstruction plane Pz is therefore affected by a substantial amount of reconstruction noise, because of the absence of relevant information as to the phase of the light wave 22 in the detection plane P. Therefore, the indicator εk=1 is high. In following iterations, the algorithm carries out a gradual adjustment of the phase φ0k(x,y) in the detection plane P, so as to gradually minimize the indicator εk.
The image U0k in the detection plane is representative of the light wave 22 in the detection plane P, both from the point of view of its intensity and of its phase. Steps 120 to 160 aim to establish, iteratively, for each pixel of the image U0k, the value of the phase φ0k(x,y) which minimizes the indicator εk, the latter being obtained from the image Uzk obtained by propagating the image U0k−1 to the reconstruction plane Pz.
The minimization algorithm may be a gradient descent algorithm, or a conjugate gradient descent algorithm, the latter being described below.
Step 150: Adjustment of the Value of the Phase in the Detection Plane
Step 150 aims to determine a value of the phase φ0k(x,y) of each pixel of the complex image U0k, so as to minimize, in the following iteration k+1, the indicator εk+1 resulting from a propagation of the complex image U0k to the reconstruction plane Pz. To do this, a phase vector φ0k is established, each term of which is the phase φ0k(x,y) of a pixel (x,y) of the complex image U0k. The dimension of this vector is (Npix, 1), where Npix is the number of pixels in question. This vector is updated in each iteration, using the following updating expression:
φ0k(x, y) = φ0k−1(x, y) + αk pk(x, y)
where αk is a step size and pk(x, y) is a direction of descent, both described below.
This equation may be expressed in vectorial form as follows:
φ0k = φ0k−1 + αk pk
It may be shown that:
pk = −∇εk + βk pk−1
where ∇εk is the gradient of the indicator εk and βk is a scale factor, both described below.
Each term ∇εk(x,y) of the gradient vector ∇ε is such that
where Im is an operator returning the imaginary part of the operand and r′ is a coordinate (x,y) in the detection plane.
The scale factor βk may be expressed such that:
The step size αk may vary depending on the iteration, for example from 0.03 in the first iterations to 0.0005 in the last iterations.
The updating equation allows an adjustment of the vector φ0k to be obtained, this leading to an iterative update of the phase φ0k(x,y) in each pixel of the complex image U0k. This complex image U0k, in the detection plane, is then updated with these new values of the phase associated with each pixel. It will be noted that the modulus of the complex image U0k is not modified, the latter being determined from the image acquired by the matrix-array photodetector 16, such that u0k(x, y) = u0k=0(x, y) = √I(x, y).
Step 160: Reiteration of or Exit From the Algorithm
Provided that a convergence criterion has not been reached, step 160 consists in reiterating the algorithm, with a new iteration of steps 120 to 160, on the basis of the complex image U0k updated in step 150. The convergence criterion may be a preset number K of iterations, or a minimum value of the gradient ∇εk of the indicator, or a difference considered to be negligible between two consecutive phase vectors φ0k−1,φ0k. When the convergence criterion is reached, the estimation is considered to be a correct estimation of a complex image of the sample, in the detection plane P or in the reconstruction plane Pz.
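The loop of steps 110 to 160 may be mirrored by the following toy sketch, in which the analytic conjugate-gradient update of the method is replaced, purely for illustration, by a naive finite-difference gradient descent on a simplified L1 indicator (far slower, but structurally similar), the transfer function and numerical values being illustrative assumptions:

```python
import numpy as np

def fresnel_tf(shape, pitch, wavelength, z):
    # Fresnel transfer function; multiplying in Fourier space propagates by z.
    fy = np.fft.fftfreq(shape[0], pitch)
    fx = np.fft.fftfreq(shape[1], pitch)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

def propagate(field, tf):
    return np.fft.ifft2(np.fft.fft2(field) * tf)

def tv_indicator(Uz):
    # Simplified L1 indicator: sum of moduli of finite differences.
    return np.abs(np.diff(Uz, axis=0)).sum() + np.abs(np.diff(Uz, axis=1)).sum()

def reconstruct_phase(I, tf_back, n_iter=3, step=0.1, delta=1e-3):
    amp = np.sqrt(I)
    phi = np.zeros_like(I)                       # step 110: zero initial phase
    for _ in range(n_iter):                      # steps 120-160
        base = tv_indicator(propagate(amp * np.exp(1j * phi), tf_back))
        grad = np.zeros_like(phi)
        for idx in np.ndindex(phi.shape):        # finite-difference gradient
            phi_p = phi.copy()
            phi_p[idx] += delta
            eps = tv_indicator(propagate(amp * np.exp(1j * phi_p), tf_back))
            grad[idx] = (eps - base) / delta
        phi -= step * grad                       # step 150: update the phase
    return amp * np.exp(1j * phi)                # complex image in plane P

I = np.ones((8, 8))
I[3:5, 3:5] = 0.5                                # toy hologram
tf_back = fresnel_tf(I.shape, 2.2e-6, 450e-9, -100e-6)
U0 = reconstruct_phase(I, tf_back)
```

As in the method, the modulus in the detection plane is kept fixed at √I throughout; only the phase is adjusted.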
Step 170: Obtainment of the Reference Complex Image
At the end of the last iteration, the method may comprise propagating the complex image U0k resulting from the last iteration to the reconstruction plane Pz, so as to obtain a reference complex image Uref=Uzk. Alternatively, the reference complex image Uref is the complex image U0k resulting from the last iteration in the detection plane P. When the density of the particles is high, this alternative is however less advantageous because the spatial resolution in the detection plane P is lower than in the reconstruction plane Pz, in particular when the reconstruction plane Pz corresponds to a plane P14 in which the sample 14 lies.
Step 180: Selection of particle radial coordinates.
In this step, the radial coordinates (x,y) of a particle are selected from the reference image Uref=Uzk=30 (obtained here after K=30 iterations), for example from the image of its modulus uref=uzk=30 or the image of its phase φref=φzk=30. As mentioned above, the expression radial coordinate designates a coordinate in the detection plane or in the reconstruction plane. This selection may also be carried out on the basis of the hologram I0 or of the complex image U0k obtained in the detection plane following the last iteration. However, when the number of particles increases, it is preferable to carry out this selection on the image formed in the reconstruction plane, because of its better spatial resolution, in particular when the reconstruction plane Pz corresponds to the plane of the sample P14.
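As an illustration, the selection of radial coordinates from the modulus image may be sketched as a search for thresholded local maxima; both the extremum criterion and the threshold are assumptions, the patent leaving the selection method open:

```python
import numpy as np

def select_particle_coordinates(modulus_img, threshold):
    """Pick candidate particle radial coordinates (x, y): a pixel is kept
    when its value exceeds `threshold` and all eight of its neighbours.
    The one-pixel image border is not searched."""
    m = modulus_img
    core = m[1:-1, 1:-1]
    neighbours = [m[ys, xs]
                  for ys in (slice(0, -2), slice(1, -1), slice(2, None))
                  for xs in (slice(0, -2), slice(1, -1), slice(2, None))
                  if not (ys == slice(1, -1) and xs == slice(1, -1))]
    is_max = (core > threshold) & np.all([core > n for n in neighbours], axis=0)
    ys, xs = np.nonzero(is_max)
    # +1 compensates the trimmed border; (x, y) = (column, row)
    return [(int(x) + 1, int(y) + 1) for x, y in zip(xs, ys)]
```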
Step 185: Application of a propagation operator.
In this step 185, the reference complex image Uref is propagated to a plurality of reconstruction distances, using a propagation operator h as defined above, so as to obtain a plurality of what are called secondary complex images Uref,z reconstructed at various distances from the detection plane P or from the reconstruction plane Pz. Thus, this step comprises determining a plurality of complex images Uref,z such that:
Uref,z=Uref*hz with zmin≤z≤zmax.
The values zmin and zmax are the minimum and maximum coordinates, along the axis Z, to which the reference complex image is propagated. Preferably, the complex images are reconstructed at a plurality of coordinates z lying between the sample 14 and the image sensor 16. The complex images may also be formed on either side of the sample 14.
These secondary complex images are established by applying a holographic reconstruction operator h to the reference image Uref. The latter is a complex image correctly describing the light wave 22 to which the image sensor is exposed, and in particular its phase, following the iterations of the steps 120 to 160. Therefore, the secondary images Uref,z form a good descriptor of the propagation of the light wave 22 along the propagation axis Z.
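The reconstruction of the secondary complex images Uref,z over zmin ≤ z ≤ zmax can be sketched as follows, here with a Fresnel (paraxial) transfer function standing in for the operator hz defined earlier in the text:

```python
import numpy as np

def secondary_images(U_ref, z_values, wavelength, pixel_pitch):
    """Reconstruct the stack of secondary complex images
    U_ref,z = U_ref * h_z for each coordinate z in z_values."""
    ny, nx = U_ref.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel_pitch),
                         np.fft.fftfreq(ny, d=pixel_pitch))
    spectrum = np.fft.fft2(U_ref)  # computed once, reused for every z
    stack = {}
    for z in z_values:
        # Fresnel transfer function for propagation over a distance z
        H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
        stack[z] = np.fft.ifft2(spectrum * H)
    return stack

# e.g. coordinates z spanning either side of the sample:
# stack = secondary_images(U_ref, np.linspace(-50e-6, 150e-6, 21), 450e-9, 1.67e-6)
```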
Step 190: Formation of a profile.
In this step, a characteristic quantity of the light wave 22, as defined above, is determined from each secondary complex image Uref,z, so as to define a profile representing the variation in said characteristic quantity along the propagation axis Z. The characteristic quantity may, for example, be the modulus or the phase, or a combination thereof.
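Extraction of the profile along Z at the radial coordinate (x, y) of a selected particle may then be sketched as follows (names illustrative; `stack` maps each reconstruction distance z to a secondary complex image):

```python
import numpy as np

def axial_profile(stack, x, y, quantity="modulus"):
    """Build the profile of a characteristic quantity of the light wave
    along the propagation axis Z at the radial coordinate (x, y)."""
    zs = sorted(stack)
    if quantity == "modulus":
        values = [np.abs(stack[z][y, x]) for z in zs]
    elif quantity == "phase":
        values = [np.angle(stack[z][y, x]) for z in zs]
    else:
        raise ValueError("quantity must be 'modulus' or 'phase'")
    return np.array(zs), np.array(values)
```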
Step 200: Characterization.
The particle may then be characterized from the profile formed in the preceding step. Preferably, there is available a database of standard profiles formed in a learning phase using known standard samples. The characterization is then carried out by comparing or classifying the formed profile on the basis of the standard profiles.
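The comparison with standard profiles can be sketched as a minimal nearest-neighbour classifier; the patent leaves the classifying method open, so this is only one possible choice, with illustrative labels:

```python
import numpy as np

def classify_profile(profile, standard_profiles):
    """Characterize a particle by comparing its axial profile with the
    standard profiles recorded in a learning phase on known samples.
    `standard_profiles` maps a label, e.g. 'live' / 'dead', to a
    reference profile sampled at the same z coordinates; the label of
    the closest standard profile (Euclidean distance) is returned."""
    distances = {label: np.linalg.norm(np.asarray(profile) - np.asarray(ref))
                 for label, ref in standard_profiles.items()}
    return min(distances, key=distances.get)
```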
This embodiment, which is based on formation of a reference complex image, was implemented, using the norm of the total variation, on CHO (Chinese hamster ovary) cells immersed in a CD CHO culture medium (Thermo Fisher). The sample was placed in a fluidic chamber of 100 μm thickness and positioned at a distance of 8 cm from a light-emitting diode, the spectral band of which was centered on 450 nm. The sample was placed at a distance of 1500 μm from a CMOS image sensor of 2748×3840 pixels. The aperture of the spatial filter 18 had a diameter of 150 μm.
Moreover, following these reconstructions, the cells were treated with Trypan blue, then observed using a microscope at 10× magnification; the image obtained is shown in the corresponding figure. The modulus and phase profiles obtained for these cells are shown in the accompanying figures.
The examples described above provide simple identification criteria based on the variation in the profile of a characteristic quantity as a function of reconstruction distance, and on comparisons using preset thresholds. In addition, other classifying methods that are more complex and more robust may be implemented, without departing from the scope of the invention.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
15 52445 | Mar 2015 | FR | national

PCT Information

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/FR2016/050644 | Mar. 23, 2016 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2016/151249 | Sep. 29, 2016 | WO | A
US Patent Application Publication

Number | Date | Country
---|---|---
20180113064 A1 | Apr 2018 | US