The technical field of the invention is the observation of a cell event, in particular cell division or cell death, with the aid of optical detection means.
The observation of cells is frequently confronted with the detection of events affecting the life of the cells, in particular cell division (mitosis or meiosis) or cell death.
Mitosis is a complex cell division process which is still the subject of in-depth studies. This is because the observation of mitoses is an important element for the study of pathologies involving misregulation of cell division, in particular cancer.
At present, mitoses are generally observed by microscopy, whether fluorescence microscopy, standard microscopy, by using a coloured marker, or phase contrast microscopy. In microscopy, however, the observation field is restricted because of the high magnification imparted by the objectives with which microscopes are equipped.
It may also be necessary to monitor the occurrence of a cell death. Currently, on the industrial scale, the main optical devices for estimating cell viability are based on employing viability markers and analyses of the colorimetric type (marking with trypan blue) or fluorescence type (marking with propidium iodide). An optical method without marking has been described in U.S. Ser. No. 10/481,076.
A difficulty may arise when there is a sample, comprising numerous cells, extending over a surface larger than the observation field of a microscope. If the intention is to obtain a sufficient observation statistic, it is difficult to observe different parts of the same sample simultaneously, even more so since mitoses occur at random instants.
Patent EP3519899A1 (or U.S. Ser. No. 10/754,141) describes a dual-mode observation device allowing a combination between lensless imaging and conventional microscopy. The aim is to benefit from the wide observation field of lensless imaging and the high resolution imparted by microscopy. The device makes it possible to switch between a first mode, in this case lensless imaging, allowing a wide-field observation of the sample, and a second mode, in this case microscopy, for observing details of the sample.
The invention described below makes it possible to automatically identify different parts of a sample in which a cell event as described above is taking place. This makes it possible to perform a count or to observe each division individually, by microscopy.
A first subject of the invention is a method for detecting or predicting the occurrence of a cell event, selected from cell division and cell death, in a sample, the sample extending in at least one sample plane and comprising cells immersed in a medium, the method comprising the following steps:
The optical path difference at each transverse coordinate corresponds to a difference between the optical path of the exposure light wave respectively in the presence and in the absence of a cell at the said transverse coordinate. The optical path difference may in particular correspond:
During step c), each observation image may be calculated by using an iterative algorithm comprising the following substeps:
The method may comprise:
Step c) may comprise:
According to one embodiment, the supervised artificial intelligence algorithm uses a convolutional neural network, the observation images being used as an input layer of the convolutional neural network.
Step d) may comprise detection and location of an occurrence of the cell event during the acquisition time range.
Step d) may comprise detection and location of an occurrence of the cell event during a temporal prediction interval subsequent to the acquisition time range.
The temporal prediction interval may occur between 10 minutes and 1 hour after the acquisition time range.
According to one possibility, no image forming optics are arranged between the sample and the image sensor.
According to one possibility, the image sensor used during step b) is a first image sensor, the method comprising, following step d),
According to one possibility, the sample comprises a fluorescent marker, the fluorescent marker defining an excitation spectral band and a fluorescence spectral band, the method being such that during step g):
A second subject of the invention is a device for observing a sample, comprising:
The invention will be understood more clearly on reading the description of the exemplary embodiments which are presented in the rest of the description, in connection with the figures listed below.
The same is true of
The device comprises a first light source 11 capable of emitting a first light wave 12, referred to as the incident light wave, which propagates towards a sample 10 along a propagation axis Z, in a first spectral band Δλ.
The sample 10 is arranged on a sample support 10s. The sample comprises a culture medium 10m in which cells 10p are immersed. The culture medium is a medium conducive to the development of cells.
In this example, the sample comprises a fluorescent marker 10f adapted to form fluorescence images that make it possible to observe mitosis. The fluorescent marker generates fluorescence light in a fluorescence spectral band when it is illuminated by excitation light in an excitation spectral band. As an alternative, the sample may comprise a coloured marker. Employing a coloured or fluorescent exogenous marker is not an essential element of the invention. Such a marker is useful only when a focused microscopy mode as described below is being used.
The thickness of the sample 10 along the propagation axis Z is preferably between 20 μm and 500 μm. The sample extends in at least one plane P10, referred to as the plane of the sample, which is preferably perpendicular to the propagation axis Z. It is held on the support 10s at a distance d from a first image sensor 16. The plane of the sample is defined by the axes X and Y represented in
Preferably, the optical path travelled by the first light wave 12 before reaching the sample 10 is more than 5 cm. Advantageously, as seen by the sample, the first light source is considered to be a point source. This means that its diameter (or its diagonal) is preferably less than one tenth, more preferably less than one hundredth, of the optical path between the sample and the light source. The first light source 11 may, for example, be a light-emitting diode or a laser source, for example a laser diode. It may be associated with an aperture 18, or spatial filter. The aperture is not necessary, in particular when the light source is sufficiently point-like, especially when it is a laser source.
Preferably, the spectral band Δλ of the incident light wave 12 has a width of less than 100 nm. The spectral bandwidth is intended to mean the full width at half maximum of the said spectral band.
The device as represented in
The first image sensor 16 is capable of forming an image I1 in a detection plane P0. In the example represented, it is an image sensor comprising a matrix of pixels, of the CCD type, or a CMOS, the surface of which is generally more than 10 mm2. The surface of the matrix of pixels, referred to as the detection surface, depends on the number of pixels and their size. It is generally between 10 mm2 and 50 mm2. The detection plane P0 preferably extends perpendicularly to the propagation axis Z of the incident light wave 12. The distance d between the sample 10 and the matrix of pixels of the image sensor 16 is preferentially between 50 μm and 2 cm, preferably between 100 μm and 2 mm.
The absence of magnifying optics between the first image sensor 16 and the sample 10 will be noted. This does not preclude the optional presence of focusing microlenses at each pixel of the first image sensor 16, these not having the function of magnifying the image acquired by the first image sensor.
Because of the proximity between the first image sensor 16 and the sample 10, the image I1 (or first image) acquired by the first image sensor extends in a first observation field Ω1 slightly smaller than the area of the first image sensor 16, that is to say typically between 10 mm2 and 50 mm2. This is a large observation field when comparing it with the observation field provided by a high-magnification microscope objective, for example an objective with a magnification of more than 10, as described below in connection with the second mode. Thus, the image I1 acquired by the first image sensor 16 makes it possible to obtain usable information of the sample in a large first observation field Ω1. One important element of the invention is to profit from this large observation field in order to select a region of interest ROI of the sample on the basis of the image I1, then to analyse the selected region of interest by focused imaging, for example fluorescence imaging, according to the second optical mode.
The observation device 1 may comprise a second light source 21, as well as an optical system 25 provided with a magnification of more than 1. The second light source 21 emits a second light wave 22 which propagates to the sample. A second image sensor 26 is coupled to the optical system 25, the second image sensor 26 being arranged in the focal image plane of the optical system 25. The optical system defines a magnification preferably of the order of or more than 10. The second image sensor 26 makes it possible to obtain detailed information of the selected region of interest ROI of the sample. Because of the magnification of the optical system 25, the observation field Ω2 of the second image sensor 26 is reduced in comparison with the first observation field Ω1. The second image sensor 26 is configured to form an image of the region of interest with a high resolution. The second image sensor 26 makes it possible to acquire a second image I2, with a high spatial resolution, through the objective 25.
The sample may comprise a cell marker. This may be a fluorescent marker or a coloured marker. When the marker is fluorescent, the second light wave 22 is emitted in an excitation spectral band of the fluorescent marker. The second image sensor is configured to form an image in the fluorescence spectral band of the fluorescent marker. It is, for example, coupled to a bandpass filter 27 delimiting a passband included in the fluorescence spectral band of the fluorescent marker. The second image thus makes it possible to obtain a detailed representation of the fluorescence of a region of interest ROI identified on the first image I1. When the cell marker is a fluorescent marker, the second light source may be coupled to an excitation filter 23 which defines a passband included in the excitation spectral band of the fluorescent marker. A fluorescence image makes it possible to observe accurately the development of certain subcellular structures during the cell event considered: cell division or cell death.
The first light source 11 and the second light source 21 may be arranged facing the sample 10 and activated successively, in which case the prism 15 is superfluous.
According to one possibility, the first light source 11 and the second light source 21 form a single light source.
Preferably, the sample 10 is kept immobile between the first observation mode and the second observation mode, whereas the first image sensor 16 and the optical system 25/second image sensor 26 assembly are moved relative to the sample between the two observation modes. A mobile plate 30, which supports the first image sensor 16 and the optical system 25/image sensor 26 assembly and allows them to be moved relative to the sample 10, has been represented in
As an alternative, the sample is mounted on a mobile support 10s making it possible to move it either facing the first image sensor 16 or facing the optical system 25/second image sensor 26 assembly. The image sensor may also be arranged on a support 16s, as represented in
Preferably, the relative movement of the first image sensor 16 and the optical system 25 is calculated automatically by the control unit 40 as a function of the region of interest ROI of the sample which has been selected on the basis of the first image I1.
As described below, one of the aims of the invention is to detect or predict the occurrence of a cell event on the basis of images I1 acquired according to the first mode. The detection or prediction makes it possible to define a region of interest ROI in which the cell event is occurring. The multimode device makes it possible to form a high-resolution image I2 according to the second mode. This leads to sharper observation of the cell event. As mentioned above, the cell event may be a cell division (mitosis or meiosis) or a cell death. The region of interest ROI may be defined automatically on the basis of images I1 and transmitted to the control unit 40. The latter then actuates the plate 30 in order to place the sample in the second analysis mode, that is to say facing the objective 25, so that the region of interest ROI is in the object plane of the objective 25.
One important aspect of the invention relates to the detection or prediction of an occurrence of a cell event in the sample on the basis of an image I1 acquired according to the first mode. The first mode may be implemented independently of the second mode. In this case, the first mode may make it possible to carry out detection of the cell event without marking. This avoids employing a coloured marker or a fluorescent marker. When the first mode is implemented independently, the device may comprise a single light source 11 and a single image sensor 16.
During the acquisition of an image I1 according to the first mode, under the effect of the incident light wave 12, the sample may give rise to a diffracted wave 13 capable of producing, in the detection plane P0, interference with a part of the incident light wave 12 transmitted by the sample. During the acquisition of the image I1 according to the first mode, the first image sensor 16 is exposed to an exposure light wave 14. The exposure light wave 14 transmitted by the sample 10 comprises:
These components form interference in the detection plane P0. Thus, each image I1 acquired by the first image sensor 16 according to the first mode comprises interference patterns (or diffraction patterns), each interference pattern being generated by the sample.
The device comprises a processing unit 50 configured to process each image I1 acquired by the first image sensor 16 according to the first mode, so as to detect or predict the occurrence of a cell division in the sample. The processing unit 50 may comprise a processor configured to carry out the steps described below in connection with
According to a first approach, the sample is considered to be described by parameter vectors F(x,y), each parameter vector being defined at a transverse coordinate (x,y) in the plane P10 of the sample. Each term of each vector corresponds to an optical property of the sample at the coordinate (x,y). The expression “transverse coordinate” designates coordinates in a plane perpendicular to the propagation axis Z.
At least one term of each vector may be an optical path difference L(x,y), induced by the sample along the propagation axis Z. Each cell 10p is considered to have a refractive index np, whereas the culture medium 10m in which it is immersed has a refractive index nm. At each transverse coordinate (x,y), it is considered that the optical path difference L(x,y) induced by a cell positioned at the said coordinate is such that:
L(x,y)=(np−nm)×e(x,y) (1)
where e(x,y): thickness of the cell at the transverse coordinate (x,y) and × is the multiplication operator. In other words, the optical path difference induced by the sample corresponds to a difference between:
At each coordinate (x,y) of the sample, it is possible to define an optical path difference L(x,y). When there is no cell at (x,y), L(x,y)=0. In the presence of a cell, L(x,y)=(np−nm)×e(x,y).
The optical path difference of the sample L(x,y) is therefore defined as a function of the refractive index of a cell and of the thickness of a cell occupying a coordinate (x,y). The thickness of the cell is defined parallel to the propagation axis Z.
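By way of illustration, relation (1) can be applied numerically to build the optical path difference map of a model cell. The short Python sketch below is not taken from the description; the refractive indices, the cell radius and the pixel pitch are arbitrary example values, and a spherical cell shape is assumed purely for illustration.

```python
# Illustrative sketch: optical path difference map L(x, y) = (n_p - n_m) * e(x, y)
# (expression (1)) for a hypothetical spherical cell. All numerical values are
# arbitrary example values, not values from the description.
import numpy as np

n_p = 1.37        # assumed refractive index of a cell
n_m = 1.33        # assumed refractive index of the culture medium
radius_um = 8.0   # assumed cell radius (micrometres)
pixel_um = 1.67   # assumed pixel pitch in the sample plane (micrometres)

x = (np.arange(128) - 64) * pixel_um
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2

# Thickness e(x, y) of a sphere of the given radius, measured along Z; zero outside the cell.
e = 2.0 * np.sqrt(np.clip(radius_um ** 2 - r2, 0.0, None))

# Optical path difference of expression (1): zero where there is no cell.
L = (n_p - n_m) * e
print(f"peak optical path difference: {L.max():.2f} micrometres")
```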
The parameter vector F(x,y) has a dimension (1, Nw), where Nw denotes the number of parameters considered for each transverse coordinate (x,y). Another term of each vector F(x,y) may be an absorbance α(x,y) of the sample.
In the example considered, Nw=2. Each vector F(x,y) is such that:
A first part of the processing carried out by the processing unit 50 consists in obtaining an observation image I10 of the sample on the basis of each image I1 acquired according to the first mode. The observation image is a usable image of the sample, making it possible to detect or predict a cell division. The observation image of the sample may in particular be an image of the optical path difference L(x,y) of the sample, the latter being discretised at a plurality of transverse coordinates (x,y). It may also be an image of the absorbance α(x,y) of the sample. The observation image I10 may be obtained as described in application U.S. Pat. No. 16,907,407.
A second part of the processing consists in using different observation images, respectively coming from different images acquired according to the first mode, in order to detect or predict a cell division.
The processing of each image I1 acquired according to the first mode follows the steps in
Steps 100 to 180 constitute the first part of the processing, aiming to form the observation image I10 of the sample. The main principles have been explained here, knowing that details are given in application U.S. Pat. No. 16,907,407.
Step 100: Illumination of the sample 10 with the aid of the light source 11, and acquisition of an image I1 of the sample 10 according to the first mode by the first image sensor 16. This image forms a hologram of the sample.
The image I1 acquired by the first image sensor 16 is an image of the exposure light wave 14. The exposure light wave 14 may be defined at each transverse position (x,y) in the plane P10 of the sample by a complex expression, so that:
where:
When it is considered that a cell occupying a position (x,y) is transparent, α(x,y)=0. When the cell absorbs a part of the light, α(x,y)<0.
Step 110: Initialisation of the parameter vectors F(x,y) defined at each transverse coordinate of the sample.
The terms making up the initial parameter vector are defined arbitrarily or on the basis of an assumption about the sample. This step is carried out for each transverse coordinate (x,y) considered. The parameter vectors F(x,y) defined during this step form a set ℱ1 of vectors describing the sample 10. Each initialised vector is denoted F1(x,y). A possible initialisation of the parameter vector as a function of parameters resulting from step 170 is described below.
Steps 120 to 150 are then carried out iteratively, each iteration being allocated a rank n, n being a natural number. In the first iteration, n=1. During each iteration, a set ℱn of parameter vectors Fn(x,y), resulting from step 110 or from the preceding iteration, is considered. The superscript n designates the rank of the iteration.
Step 120: Estimation Î1n of the image I1 acquired by the first image sensor 16 during step 100, on the basis of the set ℱn of parameter vectors Fn(x,y).
At each position (x,y), a complex expression A10n(x,y) of the exposure light wave 14 in the plane P10 of the sample is calculated according to expression (2) on the basis of the parameter vectors Fn(x,y). Each complex expression A10n(x,y) at different coordinates (x,y) forms a complex image A10n of the sample.
On the basis of the complex image A10n calculated in the plane of the sample according to (2), application of a holographic propagation operator hP10→P0 makes it possible to obtain a complex image A0n of the exposure light wave in the detection plane P0:

A0n = A10n * hP10→P0 (3)

where hP10→P0 is a holographic propagation operator and * designates a convolution product.
In general, the holographic propagation operator models transport of the exposure light wave between at least two points distant from one another. In the application described, the convolution product described in connection with equation (3) models transport of the exposure light wave 14 between the plane P10 of the sample and the detection plane P0.
By considering the square root of the modulus of the complex image A0n, an estimation Î1n of the image I1 acquired by the image sensor 16 during step 100 is obtained. Thus,

Î1n = √(mod(A0n)) (5)

where mod designates the modulus operator.
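As an illustration of step 120 and of equations (3) and (5), the sketch below simulates this forward model with an angular-spectrum propagation kernel standing in for the holographic operator hP10→P0. Since the complex expression (2) of the exposure light wave is not reproduced above, the amplitude model A10 = (1 + alpha) exp(2iπL/λ) used here is only an assumption made for the purpose of the example, as are all numerical values.

```python
# Illustrative sketch of the forward model of step 120: propagation of a complex image
# of the sample to the detection plane (equation (3)) and estimation of the acquired
# image (equation (5)). The amplitude model used for A10 stands in for expression (2),
# which is not reproduced in the text, and is an assumption.
import numpy as np

def angular_spectrum_propagate(field, wavelength_um, pixel_um, distance_um):
    """Propagate a complex field along Z by `distance_um` (free-space angular spectrum)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_um)
    fy = np.fft.fftfreq(ny, d=pixel_um)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength_um ** 2 - FX ** 2 - FY ** 2
    propagating = arg > 0                                    # discard evanescent components
    kz = 2.0 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))
    H = np.where(propagating, np.exp(1j * kz * distance_um), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def estimate_hologram(L_um, alpha, wavelength_um=0.55, pixel_um=1.67, d_um=1500.0):
    A10 = (1.0 + alpha) * np.exp(2j * np.pi * L_um / wavelength_um)     # assumed stand-in for (2)
    A0 = angular_spectrum_propagate(A10, wavelength_um, pixel_um, d_um) # equation (3)
    return np.sqrt(np.abs(A0))                                          # equation (5)
```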
Step 130: Comparison of the image Î1n estimated during step 120 with the image I1 acquired by the image sensor during step 100. The comparison may be expressed in the form of a difference or a ratio, or of a mean square deviation.
Step 140: Determination of a validity indicator of the estimation Î1n.
During this step, a validity indicator ϵ|ℱn representing the relevance of the set ℱn of vectors Fn(x,y) describing the sample is calculated. The index |ℱn signifies that the validity indicator is established while knowing the set ℱn of vectors Fn(x,y). In this example, the validity indicator is lower the more correctly the sample is described by the set ℱn.
The validity indicator comprises an error criterion ϵ0|ℱn, the latter quantifying a global error of the estimated image Î1n in relation to the measured image I1. A global error is intended to mean an error taking each coordinate (x,y) into account. In this example, the error criterion is a scalar.
The error criterion is established on the basis of the comparison of the images Î1n and I1, for example in the form of a mean square deviation between them.
According to another possibility, as described in Application US20200124586, the validity indicator also takes into account a morphological criterion ϵ10|ℱn. Unlike the error criterion ϵ0|ℱn, which is defined on the basis of data measured or estimated in the detection plane P0, the morphological criterion is defined in the plane P10 of the sample.
The morphological criterion ϵ10|ℱn is established in the plane of the sample, on the basis of the optical path difference Ln(x,y) estimated during an iteration of rank n.
In this example, the morphological criterion is a scalar.
This criterion tends to decrease when the quantity Ln(x,y) exhibits a minimum of oscillations, which is the case for example when the particles have a spherical or hemispherical morphology. The values of Ln(x,y) for which the criterion ϵ10|ℱn is low are therefore considered to be the most morphologically plausible.
According to one possibility,

ϵ10|ℱn = ∫P10 [αn(x,y)>0] αn2(x,y)dxdy + ∫P10 [Ln(x,y)<0] √((Ln(x,y))2)dxdy (7)

where [αn(x,y)>0] and [Ln(x,y)<0] designate the fact that the quantities ∫P10 αn2(x,y)dxdy and ∫P10 √((Ln(x,y))2)dxdy are taken into account only at the coordinates for which αn(x,y)>0 and Ln(x,y)<0, respectively.
The validity indicator ϵ|ℱn is obtained by combining the error criterion and the morphological criterion:

ϵ|ℱn = ϵ0|ℱn + ϵ10|ℱn (8)

or, taking into account (7),

ϵ|ℱn = ϵ0|ℱn + γ × ϵ10|ℱn (9)

where γ is a positive weighting factor.
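A minimal sketch of one way of evaluating such a validity indicator is given below. It assumes a mean square deviation as the error criterion and penalises non-physical parameter values (positive absorbance, negative optical path difference) as the morphological criterion; neither choice is imposed by the description, and the weighting value is arbitrary.

```python
# Illustrative sketch of a validity indicator in the spirit of equations (8) and (9):
# a data-fidelity error criterion in the detection plane plus a weighted morphological
# criterion in the sample plane. The precise criteria of the description are not
# reproduced; these are assumed stand-ins.
import numpy as np

def validity_indicator(I1, I1_hat, L_um, alpha, gamma=0.1):
    # Error criterion: global deviation between estimated and acquired images (assumed MSE).
    eps_0 = np.mean((I1_hat - I1) ** 2)
    # Morphological criterion: penalise positive absorbance and negative optical path
    # difference, in line with the quantities discussed around equation (7).
    eps_10 = np.sum(alpha[alpha > 0] ** 2) + np.sum(np.abs(L_um[L_um < 0]))
    return eps_0 + gamma * eps_10      # weighted combination, cf. equation (9)
```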
Step 150: Refreshing of the vectors Fn(x,y) by minimisation of the validity indicator ϵ|ℱn. The validity indicator is a scalar variable. However, it depends on the set ℱn of parameter vectors, on the basis of which it has been established, by means of the image Î1n.
During step 150, a minimisation algorithm of the gradient descent type is applied so as to progressively approach, at each iteration, a set ℱn allowing satisfactory minimisation of the validity indicator. Thus, the objective of this step is to establish a set ℱn+1 of vectors Fn+1(x,y) aiming to obtain, after repetition of steps 110 to 140, a validity indicator lower than the validity indicator of the current iteration. This step makes it possible to refresh at least one term Fwn(x,y) of each vector Fn(x,y).
For this purpose, a gradient Gwn(x,y) of the validity indicator with respect to the optical parameter corresponding to the term Fwn(x,y) is defined, so that:

Gwn(x,y) = ∂ϵ|ℱn/∂Fwn(x,y) (10)
A gradient descent algorithm then defines a direction dwn as well as an increment σwn. The term Fw(x,y) of each parameter vector is refreshed according to the expression:
Fwn+1(x,y) = Fwn(x,y) + dwn × σwn (11)
The gradient Gwn(x,y) may be defined for each term Fwn(x,y) of each vector Fn(x,y).
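The skeleton below illustrates how steps 120 to 160 can be chained, with the refresh of equation (11) written as a plain gradient descent. The routines `forward` and `gradient` stand for user-supplied implementations (for example the propagation sketch given earlier and its differentiation); the step sizes and the stopping tolerance are arbitrary example values.

```python
# Illustrative sketch of the iterative loop of steps 120 to 160 with the refresh of
# equation (11). `forward` and `gradient` are assumed, user-supplied callables; they
# are not defined in the description.
import numpy as np

def reconstruct(I1, L0, alpha0, forward, gradient,
                step_L=0.05, step_alpha=0.01, n_iter=50, tol=1e-4):
    L, alpha = L0.copy(), alpha0.copy()
    for _ in range(n_iter):
        I1_hat = forward(L, alpha)                      # step 120: estimated hologram
        eps = np.mean((I1_hat - I1) ** 2)               # steps 130-140 (error part only)
        if eps < tol:                                   # step 160: stop when sufficiently low
            break
        grad_L, grad_alpha = gradient(I1, L, alpha)     # gradients G_w^n of the indicator
        L = L - step_L * grad_L                         # step 150: refresh, equation (11)
        alpha = alpha - step_alpha * grad_alpha
    return L, alpha
```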
Step 160: Repetition of steps 120 to 150, taking into account, during step 120 of the following iteration, the set ℱn+1 refreshed during step 150 of the last iteration carried out.
Steps 120 to 160 are repeated until the value of the validity indicator is considered to be representative of a good description of the sample by the set ℱn of vectors Fn(x,y). When taking into account an indicator as defined in equations (8) or (9), the iterations cease when the value of the validity indicator is sufficiently low.
Step 170: Updating of the set ℱn of vectors Fn(x,y).
During this step, the set ℱn of vectors Fn(x,y) is subjected to an update by using a convolutional neural network CNNa, referred to as the updating neural network. The convolutional neural network comprises two input layers IN and as many output layers OUT.
The updating neural network CNNa comprises two input layers IN. Each input layer IN represents a spatial distribution of a parameter FwN describing the sample, as refreshed during the last iteration of steps 120 to 160. N designates the rank of the last iteration. In this example, the first input layer represents the spatial distribution of the first parameter F1N(x,y), in this case the optical path difference LN(x,y), in the plane of the sample, whereas the second input layer represents the spatial distribution of the second parameter F2N(x,y), in this case the absorbance αN(x,y), in the plane of the sample.
In general, the algorithm is implemented on the basis of at least one input layer IN corresponding to the spatial distribution of a parameter FwN(x,y) of rank w in the plane of the sample, resulting from the last iteration n=N of steps 120 to 160. In the example considered, the two distributions of the parameters F1N(x,y) and F2N(x,y) resulting from the last iteration N of steps 120 to 160 are used as input layers.
Between the input layers IN and the output layer OUT, the neural network comprises 20 convolution layers L1, . . . L20, the ranks of which are between 1 (layer adjacent to the layer IN) and 20 (layer adjacent to the layer OUT). Each convolution layer is followed by a normalisation layer (batch normalisation) and a linear rectification layer, usually designated “RELU”, making it possible in particular to suppress certain values of the images forming the convolution layer. For example, the negative values may be suppressed by replacing them with the value 0. In a manner which is known in the field of convolutional neural networks, each convolution layer is obtained by applying a convolution kernel to a preceding layer. In this example, the convolution kernel has a size of 5×5. The output layer OUT represents at least one updated spatial distribution of a parameter Fw in the plane of the sample; it may be written Fw = CNNa(FwN), in so far as the neural network CNNa makes it possible to update the spatial distribution of each parameter Fw. In the example represented, the output layer comprises an updated spatial distribution F1 = CNNa(F1N) and F2 = CNNa(F2N) of each parameter considered.
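Purely as an illustration of the architecture just described, the sketch below builds a comparable network in PyTorch (the description does not specify an implementation framework for CNNa). The number of filters per convolution layer is not stated above and is set to 32 here as an assumption.

```python
# Illustrative sketch of an updating network with the structure described above:
# two input channels (optical path difference and absorbance maps), 20 convolution
# layers with 5x5 kernels, batch normalisation and ReLU, and two output channels.
# The filter count (32) is an assumption.
import torch
import torch.nn as nn

class UpdatingCNN(nn.Module):
    def __init__(self, channels=2, filters=32, depth=20):
        super().__init__()
        layers, in_ch = [], channels
        for _ in range(depth - 1):
            layers += [nn.Conv2d(in_ch, filters, kernel_size=5, padding=2),
                       nn.BatchNorm2d(filters),
                       nn.ReLU(inplace=True)]
            in_ch = filters
        # Final convolution maps the features back to the two parameter maps.
        layers += [nn.Conv2d(in_ch, channels, kernel_size=5, padding=2)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):              # x: (batch, 2, H, W) -> (batch, 2, H, W)
        return self.net(x)

# Example: update a pair of 256 x 256 reconstructed maps (L, alpha).
updated = UpdatingCNN()(torch.randn(1, 2, 256, 256))
```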
The updating neural network CNNa has previously been subjected to training by using training images corresponding to known situations, referred to as “ground truth”. On the training images, each spatial distribution Fw,gt of each parameter of order w is known, the index gt signifying “ground truth”. The training images are subjected to an iterative reconstruction according to steps 120 to 160 so as to obtain reconstructed spatial distributions FwN. The training phase allows parameterisation of the neural network, in particular the convolution filters, so that each updated spatial distribution CNNa(FwN) is as close as possible to Fw,gt. The training step corresponds to step 90 in
Following the first iteration of step 170, a set of vectors representing the sample is obtained, comprising the updated spatial distribution of each parameter Fw(x,y). The method comprises repetition of steps 110 to 160. In the repeated step 110, the set of vectors resulting from step 170 is used as the set of initialisation vectors ℱ1.
The use of the updating neural network CNNa makes it possible to correct errors, in particular phase aliasing phenomena, occurring during a first implementation of the reconstruction algorithm described in connection with steps 120 to 160.
Following the repetition of steps 110 to 160 initialised using the parameter vectors resulting from step 170, the set ℱN of vectors resulting from the reconstruction is used to form an observation image according to step 180.
Step 180: Formation of an observation image I10.
Following the second series of repetitions of steps 110 to 160, vectors F(x,y) which are considered to form a good description of the sample are available. Each vector F(x,y) comprises a term L(x,y) representing an optical path difference experienced by the exposure light wave 14 in relation to a reference exposure light wave in the absence of a cell at the coordinate (x,y). The observation image I10 corresponds to a spatial distribution of the quantity L(x,y) in the plane P10 of the sample.
an absorbance spatial distribution αN(x,y) in the plane of the sample, obtained after a first implementation of the iterative algorithm described in connection with steps 120 to 160;
Steps 100 to 180 are carried out again at various successive instants t so as to obtain a plurality of observation images of the sample I10(t), respectively corresponding to each instant. Thus, starting from an initial instant ti, K observation images I10(ti), I10(ti+1) . . . I10(ti+K) are formed. For example, K=5. The instants ti . . . ti+K form an acquisition time range.
On the basis of the observation images, a second processing phase aiming to detect or predict an occurrence of the cell event is carried out.
Steps 200 to 230, described below, aim to detect an occurrence of a cell event. They are based on employing an artificial intelligence algorithm with supervised training. In this example, it is a detection convolutional neural network CNNd.
Step 200: Formation of input data
During this step, a plurality of images I10(t), acquired at predetermined time intervals, for example every 10 minutes, becomes available.
Step 210: Use of the K observation images I10(t) as input data of a convolutional neural network CNNd, referred to as a detection convolutional neural network.
The detection convolutional neural network CNNd has previously been subjected to supervised training so as to be able to detect an occurrence of a cell event on the basis of the images forming the input data.
Such a convolutional neural network is represented in
The detection convolutional neural network CNNd comprises a block for extracting characteristics of the input images. In a manner which is known in the field of convolutional neural networks, the characteristic extraction block comprises a series of layers, each layer resulting from the application of a convolution kernel to a preceding layer. In this example, the convolutional neural network comprises 20 convolution layers L1 . . . LJ, each layer resulting from the application of a convolution kernel with a size of 3×3 to a preceding layer. The number of convolution filters per layer is equal to 32. The parameters of the convolution filters applied to each layer are determined during the training.
Between two successive layers, each layer undergoes batch normalisation and a linear rectification operation, usually designated “ReLU” (Rectified Linear Unit).
The last convolution layer LJ forms a first layer of a reconstruction block aiming to construct an output image representing a spatial distribution of a probability of occurrence of a cell division. In this application, the reconstruction aims to determine a probability of occurrence of a cell event in each pixel of an image forming the input layer. The output image Iout has the same dimension as each observation image forming the input layer, and its grey level corresponds to a probability of occurrence of the cell event considered during the acquisition time range.
The detection convolutional neural network CNNd makes it possible to use images of any dimension as input data. It was programmed with the “deep learning library” module of the Matlab software (The MathWorks).
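As the text indicates, the original network was programmed in Matlab; the following PyTorch sketch is therefore only an illustration of the input/output behaviour described above: K = 5 observation images stacked as input channels, 20 convolution layers with 3×3 kernels and 32 filters, and a single-channel output map of the same size. The final sigmoid, used here to read the output as a probability, is an assumption.

```python
# Illustrative sketch (not the original Matlab implementation) of a detection network
# CNNd: K observation images as input channels, 20 convolution layers (3x3 kernels,
# 32 filters, batch normalisation + ReLU), and a single-channel output map whose grey
# level is read as a probability of occurrence. The final sigmoid is an assumption.
import torch
import torch.nn as nn

def make_detection_cnn(k_images=5, filters=32, depth=20):
    layers, in_ch = [], k_images
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, filters, kernel_size=3, padding=1),
                   nn.BatchNorm2d(filters),
                   nn.ReLU(inplace=True)]
        in_ch = filters
    layers += [nn.Conv2d(in_ch, 1, kernel_size=3, padding=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

cnn_d = make_detection_cnn()
stack = torch.randn(1, 5, 512, 512)   # K observation images I10(t) stacked as channels
prob_map = cnn_d(stack)               # output image Iout, same spatial size as the inputs
```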
The detection convolutional neural network CNNd has previously been subjected to training with the aid of training images. The training is described below in connection with step 200′.
Step 220: On the basis of the image Iout forming the output of the detection convolutional neural network CNNd, determination of the occurrence of the cell event and, if applicable, location of a region of interest ROI of the sample in which it is estimated that the cell event took place. The grey level of the output image Iout increases with the probability of occurrence of a cell event, so the output image comprises bright spots, each bright spot corresponding to a region of interest ROI in which it is estimated that a cell division has taken place.
In order to improve the detection performance of the neural network, the observation images forming the input data may be subjected to intensity thresholding so as to eliminate the pixels whose intensity is below a predetermined threshold. The same is true for the output image: output image thresholding makes it possible to address events whose occurrence probability is higher than a predetermined threshold.
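One simple way of turning the thresholded output image into a list of regions of interest is sketched below; the connected-component labelling and the threshold value of 0.5 are illustrative choices, not prescribed by the description.

```python
# Illustrative sketch: threshold the output image Iout and return the centre of each
# bright spot as a candidate region of interest ROI. The threshold is an arbitrary
# example value.
import numpy as np
from scipy import ndimage

def extract_rois(i_out, threshold=0.5):
    """Return (row, col) centres of the regions whose probability exceeds the threshold."""
    mask = i_out > threshold
    labels, n = ndimage.label(mask)                       # one label per bright spot
    return ndimage.center_of_mass(i_out, labels, range(1, n + 1))
```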
Step 230: During this step, each region of interest ROI revealed by the output image Iout may be subjected to more precise observation according to the fluorescence imaging mode with high magnification, with the aid of the optical system 25 coupled to the second image sensor 26. In this mode, the sample is illuminated in the excitation spectral band of the fluorescent marker by the second light source 21.
The control unit 40 may be configured to move the sample automatically relative to the optical system 25 so that each region of interest ROI is successively in the object plane of the optical system 25. Step 230 is optional.
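By way of illustration, the conversion of a region of interest detected in the observation image into a displacement of the plate 30 might resemble the sketch below; the pixel pitch and the offset between the two optical axes are hypothetical parameters, and the actual control performed by the control unit 40 is not detailed in the description.

```python
# Illustrative sketch (assumed geometry): convert a region of interest detected in the
# observation image into an (x, y) stage displacement bringing it into the object plane
# of the objective 25. The pixel pitch and axis offset are hypothetical parameters.
def roi_to_stage_move(roi_rc, image_shape, pixel_um=1.67, axes_offset_um=(0.0, 0.0)):
    """Return the (dx, dy) displacement, in micrometres, to centre the ROI under the objective."""
    row, col = roi_rc
    dx = (col - image_shape[1] / 2.0) * pixel_um + axes_offset_um[0]
    dy = (row - image_shape[0] / 2.0) * pixel_um + axes_offset_um[1]
    return dx, dy
```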
Step 200′: Training.
Employing a neural network presupposes a training phase. During the training phase, training images corresponding to observation images of a training sample are used, of which it is known whether they correspond to an occurrence of a cell event. Positive training images corresponding to a cell event and negative training images not representative of a cell event are used.
The Inventors carried out training of a detection convolutional neural network as described in step 210.
The detection convolutional neural network was trained by using datasets comprising:
The images represented in
The training datasets were supplemented with “negative” datasets on which no mitosis was detected.
More than 10000 training datasets were used.
The acquisition parameters of the training images were:
Following the training, the Inventors carried out steps 100 to 240 on test images for which the occurrence and the position of a possible mitosis were known. The test images had not been used during the training phase. Their dimension was 121 pixels×121 pixels. The test images were taken from two different samples comprising cells of the HeLa type.
Table 1 summarises the results obtained:
According to a second embodiment, the neural network aims not to detect the occurrence of a cell event during the acquisition time range but to predict the occurrence of the cell event at an instant subsequent to the acquisition time range. As described in connection with steps 200 to 220, it is a convolutional neural network.
Step 300: Formation of input data
During this step, a plurality of images I10(t) acquired at time intervals of 10 minutes becomes available. Thus, starting from an initial instant ti, K observation images I10(ti), I10(ti+1) . . . I10(ti+K) are formed. For example, K=5.
Step 310: Use of the K observation images I10(t) as input data of a convolutional neural network CNNp, referred to as a prediction convolutional neural network.
The prediction convolutional neural network CNNp has previously been subjected to supervised training so as to be able to predict an occurrence of a cell event on the basis of the input images, the cell event occurring subsequent to the acquisition of the input images. The structure of the prediction convolutional neural network is similar to that of the detection neural network described in connection with
In this application, the neural network leads to an output image Iout with the same dimension as each observation image forming the input layer, and its grey level corresponds to a probability of occurrence of a cell event subsequent to the acquisition time range, that is to say subsequent to the instants ti and ti+K.
In contrast to the detection neural network CNNd, the prediction neural network CNNp makes it possible to predict the occurrence of a cell event in a prediction time interval subsequent to the acquisition time range [ti; ti+K] during which the images of the sample are acquired. This involves detecting a cell event not during the acquisition time range of the images, but subsequent to the latter, for example between 10 minutes and 1 hour after the acquisition time range. Thus, the cell event occurs in a prediction time interval subsequent to the acquisition time range, for example temporally offset from the latter by 10 minutes to 1 hour.
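For concreteness, the sketch below shows how one training example for such a prediction network could be assembled from a time-lapse series of observation images: the input stack is taken at a fixed set of offsets before an annotated mitosis at time tm (here 100 to 60 minutes, one of the configurations used further below), and the target map marks the future mitosis position. The data layout is an assumption made for illustration.

```python
# Illustrative sketch: build one (input, target) pair for the prediction network from
# a time-lapse of observation images. The input frames precede the annotated mitosis
# at t_m by fixed offsets; the target marks where the mitosis will occur. The data
# layout (a dict indexed by acquisition time, in minutes) is an assumption.
import numpy as np

def make_prediction_example(series, t_m, mitosis_rc, offsets_min=(100, 90, 80, 70, 60)):
    """series: {time_in_min: 2-D observation image}; returns (input_stack, target_map)."""
    stack = np.stack([series[t_m - dt] for dt in offsets_min])   # K frames before the event
    target = np.zeros_like(stack[0])
    target[mitosis_rc] = 1.0                                     # future mitosis position at t_m
    return stack, target
```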
The prediction convolutional neural network CNNp has previously been subjected to supervised training with the aid of training images. The training is described below in connection with step 300′.
Step 320: On the basis of the image forming the output of the prediction convolutional neural network CNNp, prediction of an occurrence of a cell event subsequent to the acquisition time range and location of the possible cell event. The grey level of the output image Iout increases as the probability of occurrence of a cell division increases. In this case the output image comprises bright spots, each bright spot corresponding to a region of interest ROI in which it is estimated that cell division will take place in the prediction time interval defined during the training. An example of an output image is described in connection with
In order to improve the prediction performance of the neural network, the images forming the input data may be subjected to intensity thresholding. The same is true for the output image. Output image thresholding makes it possible to address events whose occurrence probability is higher than a predetermined threshold.
Step 330: During this step, each region of interest revealed by the output image Iout may be subjected to more precise observation according to the imaging mode with high magnification, with the aid of the optical system 25 and the second image sensor 26. This may, for example, involve a fluorescence image. In this mode, the sample is illuminated in the excitation spectral band of the fluorescent marker by the second light source 21. The control unit 40 may be configured to move the sample automatically relative to the optical system 25 so that each region of interest ROI is successively in the object plane of the optical system 25. Step 330 is optional.
Step 300′: Training.
Employing a neural network presupposes a training phase. During the training phase, training images corresponding to observation images of a training sample are used, of which it is known whether they correspond to an occurrence of a cell event in a time interval subsequent to the acquisition time range of the images.
The Inventors carried out training of a prediction convolutional neural network as described in step 310. The cell event in question was a mitosis.
either on the basis of images respectively acquired at tm − 100 minutes, tm − 90 minutes, tm − 80 minutes, tm − 70 minutes, tm − 60 minutes (which is represented in
The prediction convolutional neural network was trained by using datasets comprising:
The images represented in
The mitosis detection network was used on cells of the “mouse lung fibroblast” type. This made it possible to detect the occurrence of mitoses in a sample. The device made it possible to identify, in the sample, regions of interest in which a mitosis occurred. It was possible to visualise the mitosis by fluorescence microscopy.
It should be noted that when it is used without being coupled with a fluorescence microscopy imaging mode, the method makes it possible to detect cell division without marking.
Following the training of the prediction neural network, the Inventors carried out the above-described steps 100 to 170 and 300 to 320 on test images for which the occurrence and the position of a possible mitosis were known. The test images were not used during the training phase. Their dimension was 121 pixels×121 pixels. The test images were taken from a sample comprising cells of the HeLa type.
During a first series of trials, use was made of a prediction neural network the training of which had been carried out by considering images acquired respectively 100, 90, 80, 70 and 60 minutes before the mitosis. During a second series of trials, use was made of a prediction neural network the training of which had been carried out by considering images acquired respectively 70, 60, 50, 40 and 30 minutes before the mitosis.
Table 2 summarises the results obtained:
Although performing less well than the detection neural network, as was expected, the prediction neural network makes it possible to obtain usable results, the rate of true positives being more than 70%.
In the examples described in connection with
A detection convolutional neural network CNNd has a structure which is similar to that described in connection with steps 200 to 220, as well as
The cell death detection convolutional neural network was trained by using datasets comprising:
The foregoing description has set out how to obtain an observation image of the sample, corresponding to a spatial distribution of an optical path difference L(x,y), by adopting a method described in U.S. Pat. No. 16,907,407.
Other ways of obtaining a spatial distribution of an optical path difference or of absorbance may also be used. For example, the optical path difference and the absorbance may be obtained as described in Application US20200124586.
Methods making it possible to estimate a refractive index np of each cell may also be used as a basis. More precisely, it is possible to estimate an average optical path difference L(x,y) induced by the cell on the basis of a difference between the index np of a cell and the refractive index of the medium. The refractive index of the cell may be estimated as described in Patent Application WO2020128282 or US20200110017.
On the basis of the refractive index of the cell, by means of taking into account a thickness of the cell, the optical path difference may be estimated according to expression (1).
According to another possibility, each observation image of the sample corresponds to a spatial distribution of an absorbance α(x,y) of the sample.
It should be noted that the method described in connection with
According to one embodiment, in the first mode, image forming optics are arranged between the sample and the image sensor. The device then comprises an optical system 19 defining an object plane Pobj and an image plane Pim, as represented in
In the example represented in
The method described in connection with steps 100 to 180 may be applied to images acquired according to such a configuration. A lensless imaging configuration is, however, preferred because of the larger observation field that it provides and its greater compactness.
Number | Date | Country | Kind |
---|---|---|---
20 11442 | Nov 2020 | FR | national |