The technical field of the invention is the observation of mobile or motile microscopic particles in a sample, with a view to the characterization thereof. One application targeted is the characterization of sperm cells.
The observation of motile cellular particles, such as sperm cells, in a sample, is usually performed using a microscope. The microscope comprises a lens defining an object plane, extending in the sample, and an image plane, merged with a detection plane of an image sensor. The microscope takes images of the sperm cells according to a focused configuration. The choice of such a modality presupposes a trade-off between spatial resolution, observed field and depth of field. The higher the numerical aperture of the lens, the better the spatial resolution, to the detriment of the size of the field observed. Similarly, a high numerical aperture reduces the depth of field.
It is understood that the observation of motile particles presupposes an optimization of the lens, bearing in mind that, in such a configuration, it is not possible to obtain both a good spatial resolution, a wide observed field and a great depth of field. Now, these properties are particularly important in the observation of motile microscopic particles, notably when the latter are numerous:
Given these imperatives, it is standard practice to use a microscope with high magnification. The small size of the observed field is compensated by the use of a translation plate. The latter allows an acquisition of images by offsetting the lens relative to the sample, parallel thereto. The small depth of field is compensated by limiting the thickness of the sample: the latter is for example disposed in a fluidic chamber of small thickness, typically less than 20 μm, so as to limit the displacement of the particles in a direction at right angles to the object plane. Moreover, the lens can be brought closer to or moved further away from the sample, so as to displace the object plane in the sample, according to its thickness. The result thereof is a device that is complex and costly, requiring accurate displacement of the lens.
The document WO2019/125583 describes the general principles relating to the analysis of the motility or of the morphology of sperm cells, by describing the use of a supervised learning artificial intelligence algorithm.
The publication by Amann R. et al. “Computer-assisted sperm analysis (CASA): Capabilities and potential developments”, describes the use of a conventional, focused imaging device, using a fluidic chamber of small thickness.
One alternative to conventional microscopy, as described in the abovementioned publication, has been proposed by lensless imaging. It is known that lensless imaging, coupled with holographic reconstruction algorithms, allows observation of cells while conserving a high observation field, as well as a great depth of field. The patents U.S. Pat. Nos. 9,588,037 or 8,842,901 describe for example the use of lensless imaging for the observation of sperm cells. Patents U.S. Pat. No. 10,481,076 or 10,379,027 also describe the use of lensless imaging, coupled with reconstruction algorithms, for characterizing cells.
It is known that the use of numerical reconstruction algorithms makes it possible to obtain sharp particle images. Such algorithms are for example described in US10564602, US20190101484 or US20200124586. In this type of algorithm, from a hologram acquired in a detection plane, an image of the sample is reconstructed in a reconstruction plane, remote from the detection plane. It is standard practice for the reconstruction plane to extend through the sample. However, this type of algorithm may require relatively lengthy computation time.
Such a constraint is acceptable when the particles are considered as immobile in the sample. However, when wanting to characterize mobile particles, and in particular motile particles, the computation time can become too great. Indeed, the characterization of mobile particles necessitates acquisition of several images, at high frequencies, so as to be able to characterize the motion of the particles in the sample.
The inventors propose an alternative to the abovementioned patents that makes it possible to characterize motile particles, and in particular sperm cells, by using a simple observation method. The method devised by the inventors allows for the characterization of a great number of particles without necessitating displacement of a lens relative to the sample.
A first subject of the invention is a method for characterizing at least one mobile particle in a sample, the method comprising:
According to one possibility, the steps a) to d) can be performed with a single image. In this case, the series of images comprises a single image.
According to one possibility, on the output image, each particle can be represented in the form of a dot.
The particle (or each particle) can notably be a sperm cell.
Step d) can comprise a characterization, notably morphological, of each detected sperm cell. The step d) then comprises:
The method can be such that each detected sperm cell is centered with respect to each thumbnail image.
Step d) can comprise a characterization of the motility of the sperm cell. Step d) can then comprise, from the positions of the sperm cell resulting from the step c):
The method can comprise:
According to one embodiment,
In such an embodiment, the method can be such that:
According to one embodiment, no image-forming optic extends between the sample and the image sensor.
According to one embodiment, the step a) comprises a normalization of each acquired image by an average of said image or by an average of images of the series of images. In this case, the series of images is taken from each acquired image, after normalization.
According to one embodiment, the step a) comprises an application of a high-pass filter to each acquired image. In this case, the series of images is taken from each acquired image, after application of the high-pass filter.
Advantageously, in the step b), the distribution of intensity assigned to each particle is decreasing, such that the intensity decreases as a function of the distance with respect to the particle.
Advantageously, in the step b), the distribution of intensity assigned to each particle can be a two-dimensional parametric statistical distribution.
A second subject of the invention is a device for observing a sample, the sample comprising mobile particles, the device comprising:
According to one embodiment, no image-forming optic extends between the image sensor and the sample. The holding structure can be configured to maintain a fixed distance between the image sensor and the sample.
According to one embodiment,
The invention will be better understood on reading the explanation of the exemplary embodiments presented, hereinafter in the description, in association with the figures listed below.
by implementing a classification neural network, of which the input layer comprises a series of images of a particle, each image being acquired according to a defocused configuration, without holographic reconstruction.
The device comprises a sample support 10s configured to receive the sample 10, such that the sample is held on the support 10s. The sample thus held extends on a plane, called sample plane P10. The sample plane corresponds for example to an average plane around which the sample 10 extends. The sample support can be a glass plate, for example 1 mm thick.
The sample notably comprises a liquid medium 10m in which mobile and possibly motile particles 10i are bathed. The medium 10m can be a biological liquid or a buffer liquid. It can for example comprise a bodily fluid, in the pure or diluted state. Bodily fluid should be understood to mean a liquid generated by a living body. It can in particular be, as a nonlimiting example, blood, urine, cerebrospinal fluid, semen, lymph.
The sample 10 is preferably contained in a fluidic chamber 10c. The fluidic chamber is for example a fluidic chamber of a thickness of between 20 μm and 100 μm. The thickness of the fluidic chamber, and therefore of the sample 10, on the propagation axis Z, varies typically between 10 μm and 200 μm, and preferably lies between 20 μm and 50 μm.
One of the objectives of the invention is the characterization of particles in motion in the sample. In the exemplary embodiment described, the mobile particles are sperm cells. In this case, the sample comprises semen, possibly diluted. In this case, the fluidic chamber 10c can be a counting chamber dedicated to analyzing the mobility or the concentration of cells. It may for example be a counting chamber marketed by Leja, of a thickness of between 20 μm and 100 μm.
According to other applications, the sample comprises mobile particles, for example microorganisms, for example microalgae or plankton, or cells, for example cells in the process of sedimentation.
The distance D between the light source 11 and the sample 10 is preferably greater than 1 cm. It is preferably between 2 and 30 cm. Advantageously, the light source 11, seen by the sample, is considered to be a spot source. That means that its diameter (or its diagonal) is preferentially less than a tenth, better a hundredth, of the distance between the sample and the light source.
The light source 11 is for example a light-emitting diode. It is preferably associated with a diaphragm 14, or spatial filter. The aperture of the diaphragm is typically between 5 μm and 1 mm, preferably between 50 μm and 1 mm. In this example, the diaphragm has a diameter of 400 μm. According to another configuration, the diaphragm can be replaced by an optical fiber, a first end of which is placed facing the light source and a second end of which is placed opposite the sample 10. The device can also comprise a diffuser 13, disposed between the light source 11 and the diaphragm 14. The use of a diffuser/diaphragm assembly is for example described in U.S. Pat. No. 10,418,399.
The image sensor 20 is configured to form an image of the sample on a detection plane P20. In the example represented, the image sensor 20 comprises a matrix of pixels, of CCD or CMOS type. The detection plane P20 preferably extends at right angles to the propagation axis Z. Preferably, the image sensor has a high sensitive surface area, typically greater than 10 mm2. In this example, the image sensor is an IDS-UI-3160CP-M-GL sensor comprising pixels of 4.8×4.8 μm2, the sensitive surface area being 9.2 mm×5.76 mm, i.e. 53 mm2.
In the example represented in
In this example:
Such a setup gives an observation field of 3 mm2, with a spatial resolution of 1 μm. The short focal length of the lens makes it possible to optimize the bulk of the device 1 and to adjust the magnification to the dimension of the image sensor.
The image sensor is configured to acquire images of the sample, according to an acquisition frequency of a few tens of images per second, for example 60 images per second. The sampling frequency is typically between 5 and 100 images per second.
The optical system 15 defines an object plane Po and an image plane Pi. In the embodiment represented in
According to such a modality, the image sensor 20 is exposed to a light wave, called exposure light wave. The image acquired by the image sensor comprises interference figures, that can also be referred to by the term “diffraction figures”, formed by:
A processing unit 30, comprising a microprocessor for example, is able to process each image acquired by the image sensor 20. In particular, the processing unit comprises a programmable memory 31 in which is stored a sequence of instructions for performing the image processing and computation operations described in this description. The processing unit 30 can be coupled to a screen 32 allowing the display of images acquired by the image sensor 20 or resulting from the processing performed by the processing unit 30.
The image acquired by the image sensor 20, according to a defocused imaging modality, is a diffraction figure of the sample, sometimes called a hologram. It does not make it possible to obtain an accurate representation of the observed sample. Usually, in the field of holography, it is possible to apply, to each image acquired by the image sensor, a holographic reconstruction operator so as to calculate a complex expression representative of the light wave to which the image sensor is exposed, which can be calculated at any point of coordinates (x, y, z) of the space, and in particular in a reconstruction plane corresponding to the plane of the sample. The complex expression makes it possible to obtain the intensity or the phase of the exposure light wave. Such a holographic reconstruction is described in association with the prior art, and in U.S. Pat. No. 10,545,329.
However, for processing speed reasons, the inventors have followed an approach that differs from that suggested by the prior art, without recourse to a holographic reconstruction algorithm applied to the images acquired by the image sensor. The method implemented by the device is described hereinbelow, in association with
In
Preferably, the holding structure is arranged such that the distance between the sample, when the sample 10 is disposed on the holding structure 17, and the image sensor 20, is constant. In the example represented in
Step 100: Acquisition of a series of images I0,n
During this step, a series of images I0,n is acquired according to one of the modalities previously described. n is a natural integer designating the rank of each image acquired, with 1≤n≤N, N being the total number of images acquired. The images acquired are images acquired either according to a defocused imaging modality or according to a lensless imaging modality. The number N of images acquired can be between 5 and 50. As previously indicated, the images can be acquired according to an acquisition frequency of 60 Hz.
In the results presented hereinbelow, the images were acquired by using a defocused imaging modality, as described in association with
Step 110: Preprocessing
The aim is to perform a preprocessing of each image acquired, so as to limit the effects of a fluctuation of the intensity of the incident light wave or of the sensitivity of the camera.
The preprocessing consists in normalizing each image acquired by an average of the intensity of at least one image acquired, and preferably all of the images acquired.
Thus, from each image I0,n, the normalization makes it possible to obtain a normalized image In, such that In = I0,n/⟨I0⟩, in which ⟨I0⟩ designates the average intensity of the image I0,n or, preferably, of all of the acquired images.
According to one possibility, the preprocessing can include an application of a high-pass filter to each image, possibly normalized. The high-pass filter makes it possible to eliminate the low frequencies of the image.
According to one possibility, a Gaussian filter is applied, by effecting a product of convolution of the image In by a Gaussian kernel K. The width at mid-height of the Gaussian kernel is for example 20 pixels. The preprocessing then consists in subtracting, from the image In, the image In*K resulting from the application of the Gaussian filter, such that the image resulting from the preprocessing is: I′n=In−In* K. * is the convolution product operator.
The effect of such a filtering is described hereinbelow, in association with
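As an illustration, the preprocessing of the step 110 (normalization followed by Gaussian high-pass filtering) can be sketched as follows, assuming grayscale frames stored as numpy arrays; the function name and default values are illustrative, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(frame, series_mean=None, fwhm_px=20.0):
    """Normalize a frame, then subtract a Gaussian low-pass copy (high-pass)."""
    # Normalization by the average intensity of the frame itself,
    # or by the average of the whole series if one is supplied
    mean = frame.mean() if series_mean is None else series_mean
    normalized = frame / mean
    # A width at mid-height (FWHM) of ~20 pixels corresponds to
    # sigma = FWHM / (2 * sqrt(2 * ln 2)) ≈ FWHM / 2.355
    sigma = fwhm_px / 2.355
    low_pass = gaussian_filter(normalized, sigma=sigma)
    return normalized - low_pass  # I'n = In - In * K
```

A uniform background is entirely removed by this filtering, leaving only the high-frequency diffraction figures of the particles.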
Step 120: Detection of Particles
This step consists in detecting and in accurately positioning the sperm cells, on each image I0,n resulting from the step 100 or on each image I′n preprocessed in the step 110. This step is performed using a detection convolutional neural network CNNd. The neural network CNNd comprises an input layer, in the form of an input image Iin,n. The input image Iin,n is either an acquired image I0,n, or a preprocessed image In, I′n. From the input image Iin,n, the detection neural network CNNd generates an output image Iout,n. The output image Iout,n is such that each sperm cell 10i detected on the input image Iin,n, in a position (xi, yi), appears in the form of a distribution of intensity centered around said position. In other words, from an input image Iin,n, the neural network CNNd allows:
Thus, Iout,n=CNNd(Iin,n)
Each position (xi, yi) is a two-dimensional position, in the detection plane P20. The distribution of intensity Di is such that the intensity is maximal at each position (xi, yi) and is considered negligible beyond a vicinity Vi of each position. Vicinity Vi is understood to mean a region extending over a predetermined number of pixels, for example between 5 and 20 pixels, around each position (xi, yi). The distribution Di can be of gate (top-hat) type, in which case each pixel in the vicinity Vi has a constant, high intensity, and each pixel beyond the vicinity Vi has a zero intensity.
Preferably, the distribution Di is centered on each position (xi, yi) and is strictly decreasing around the latter. It can for example be a two-dimensional Gaussian intensity distribution, centered on each position (xi, yi). The width at mid-height is for example less than 20 pixels, and preferably less than 10 or 5 pixels. Any other form of parametric distribution can be envisaged, bearing in mind that it is preferable for the distribution Di to be symmetrical around each position, and preferably strictly decreasing from the position (xi, yi). Assigning an intensity distribution Di to each position (xi, yi) makes it possible to obtain an output image Iout,n in which each sperm cell is simple to detect. In that sense, the output image Iout,n is a detection image, on the basis of which each sperm cell can be detected.
The output image from the neural network is formed by a resultant of each intensity distribution Di defined respectively around each position (xi, yi).
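As an illustration, the construction of such an output image, as a resultant of two-dimensional Gaussian distributions centered on each position (xi, yi), can be sketched as follows; the image size, positions and width at mid-height are illustrative values:

```python
import numpy as np

def detection_image(shape, positions, fwhm_px=5.0):
    """Build an output image as the sum of 2D Gaussian spots,
    one spot centered on each detected particle position (x, y)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = fwhm_px / 2.355  # FWHM to standard deviation
    out = np.zeros(shape)
    for (xi, yi) in positions:
        out += np.exp(-((xx - xi) ** 2 + (yy - yi) ** 2) / (2.0 * sigma ** 2))
    return out
```

The intensity is maximal at each position and decreases strictly with the distance to it, as preferred by the method.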
In the example implemented by the inventors, the detection convolutional neural network comprised 20 layers comprising either 10 or 32 characteristics (more usually referred to as “features”) per layer. The number of features was determined empirically. The transition from one layer to another is performed by applying a convolution kernel of 3×3 size. The output image is obtained by combining the features of the last convolution layer. The neural network was programmed in a Matlab environment (Matlab publisher: The Mathworks). The neural network was previously subjected to learning (step 80), so as to parameterize the convolution filters. During the learning, learning sets were used, each set comprising:
The learning was performed by using 10 000 or 20 000 annotated positions (i.e. between 1000 and 3000 annotated positions per image). The effect of the size of the learning set (10 000 or 20 000 annotations) is discussed in association with
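The network itself was programmed in Matlab; purely as an illustrative sketch of the structure described above (stacked 3×3 convolution layers whose last features are combined into the output image), a minimal numpy version with randomly initialized, untrained kernels could look like this — layer count and feature counts are parameters, the activation and combination choices are assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def conv_relu(x, kernels):
    """One 3x3 convolution layer followed by a ReLU; x has shape (H, W, C_in),
    kernels has shape (3, 3, C_in, C_out)."""
    h, w, c_in = x.shape
    c_out = kernels.shape[3]
    out = np.zeros((h, w, c_out))
    for o in range(c_out):
        for i in range(c_in):
            out[:, :, o] += convolve2d(x[:, :, i], kernels[:, :, i, o], mode="same")
    return np.maximum(out, 0.0)

def detection_cnn(image, layer_kernels, head):
    """Chain of conv+ReLU layers; the features of the last layer are
    combined (weighted by 'head') into a single-channel output image."""
    x = image[:, :, np.newaxis]
    for kernels in layer_kernels:
        x = conv_relu(x, kernels)
    return x @ head  # (H, W, C_last) combined into (H, W)
```

In practice the kernels would be obtained by the learning of the step 80, using the annotated positions as targets.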
The image of
Thus, on the basis of a hologram, and through possible preprocessings of normalization or high-pass filter type, to reveal the high frequencies, the application of the detection convolutional neural network CNNd allows a detection and an accurate positioning of sperm cells, and does so without recourse to an image reconstruction implementing a holographic propagation operator, as suggested in the prior art. The input image of the neural network is an image formed in the detection plane, and not in a reconstruction plane remote from the detection plane. Furthermore, the detection neural network generates output images with little noise: the signal-to-noise ratio associated with each sperm cell detection is high, which facilitates the subsequent operations.
The step 120 is reiterated for different images of a same series of images, so as to obtain an output series of images Iout,1 . . . Iout,N.
During the step 120, from each output image Iout,1 . . . Iout,N, a local maximum detection algorithm is applied, so as to obtain, for each image, a list of 2D coordinates, each coordinate corresponding to a position of a sperm cell. Thus, from each output image Iout,1 . . . Iout,N, a list Lout,1 . . . Lout,N is established. Each list Lout,n corresponds to the 2D positions, in the detection plane, of the sperm cells detected in the image Iout,n.
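A minimal sketch of such a local maximum extraction, assuming the output image is a numpy array; the neighborhood size and detection threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(out_img, neighborhood=9, threshold=0.5):
    """Return the (x, y) coordinates of the local maxima of a detection
    image that exceed a threshold, i.e. one coordinate per sperm cell."""
    is_peak = (out_img == maximum_filter(out_img, size=neighborhood)) \
              & (out_img > threshold)
    ys, xs = np.nonzero(is_peak)
    return list(zip(xs.tolist(), ys.tolist()))
```

The high signal-to-noise ratio of the output images makes this thresholded maximum search reliable.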
Step 130: Tracking of the Position of Each Sperm Cell

During this step, from the lists Lout,1 . . . Lout,N respectively established from the output images Iout,1 . . . Iout,N, resulting from a same series of images I0,1 . . . I0,N, a position tracking algorithm is applied, normally referred to as a "tracking algorithm". For each sperm cell detected following the step 120, an algorithm is implemented that allows the position of the sperm cell to be tracked, parallel to the detection plane, between the different images of the series of images. The implementation of the position tracking algorithm is efficient because it is performed from the lists Lout,1 . . . Lout,N resulting from the step 120. As previously indicated, the output images have a high signal-to-noise ratio, which facilitates the implementation of the position tracking algorithm. The position tracking algorithm can be an algorithm of "closest neighbor" type. The step 130 allows a determination of a trajectory of each sperm cell parallel to the detection plane.
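A closest-neighbor tracking of this kind can be sketched as follows; the greedy matching and the maximum allowed jump between consecutive frames are simplifying assumptions, not specified in the patent:

```python
import numpy as np

def link_tracks(position_lists, max_jump=15.0):
    """Greedy closest-neighbor linking of per-frame position lists
    (one list of (x, y) tuples per image) into per-particle tracks."""
    tracks = [[p] for p in position_lists[0]]
    for positions in position_lists[1:]:
        remaining = list(positions)
        for track in tracks:
            if not remaining:
                break
            last = np.array(track[-1])
            dists = [np.hypot(*(np.array(p) - last)) for p in remaining]
            j = int(np.argmin(dists))
            if dists[j] <= max_jump:  # accept only plausible displacements
                track.append(remaining.pop(j))
        # positions left unmatched start new tracks
        tracks.extend([[p] for p in remaining])
    return tracks
```

Each resulting track is the trajectory of one sperm cell, parallel to the detection plane, across the series of images.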
In
Step 140: Characterization of Motility
During the step 140, for each sperm cell, each trajectory can be characterized on the basis of metrics applied to the trajectories resulting from the step 130. Knowing the image acquisition frequency, it is possible to quantify the speeds of displacement of each sperm cell detected, and in particular:
From the calculated speeds, it is possible to define indicators making it possible to characterize the motility of the sperm cells that are known to the person skilled in the art. These are for example indicators of the following types:
The quantification of the speeds or parameters listed above makes it possible to categorize the sperm cells according to their motility. For example, a sperm cell is considered as:
The threshold values STRth, VSLth2, VSLth3 are previously determined, with VSLth2≥VSLth3. The same applies for the values VAPth1, VAPth2 and VAPth3, with VAPth1≥VAPth2≥VAPth3.
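Under the standard CASA definitions assumed here (curvilinear velocity VCL, straight-line velocity VSL, average path velocity VAP, and straightness STR = VSL/VAP — only VSL, VAP and STR are named in the text, VCL is an assumption), such speeds can be computed from a trajectory resulting from the step 130; the smoothing window used for the average path is illustrative:

```python
import numpy as np

def motility_metrics(track, fps, pixel_um=1.0):
    """CASA-type velocities from a 2D trajectory (one point per frame)."""
    pts = np.asarray(track, dtype=float) * pixel_um
    duration = (len(pts) - 1) / fps
    steps = np.diff(pts, axis=0)
    vcl = np.hypot(*steps.T).sum() / duration        # curvilinear velocity
    vsl = np.hypot(*(pts[-1] - pts[0])) / duration   # straight-line velocity
    # Average path: smooth the trajectory (moving average), then
    # measure the length of the smoothed path over its own duration
    kernel = np.ones(5) / 5.0
    smooth = np.column_stack(
        [np.convolve(pts[:, k], kernel, mode="valid") for k in (0, 1)])
    vap = np.hypot(*np.diff(smooth, axis=0).T).sum() / ((len(smooth) - 1) / fps)
    return {"VCL": vcl, "VSL": vsl, "VAP": vap,
            "STR": vsl / vap if vap else 0.0}
```

Comparing these values to the thresholds VSLth, VAPth and STRth then allows the categorization of each sperm cell according to its motility.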
Step 150: Extraction of Thumbnail Images for Each Sperm Cell
During this step, for each sperm cell detected by the neural network CNNd, a thumbnail image Vi,n is extracted from each image I0,n acquired by the image sensor in the step 100.
Each thumbnail image Vi,n is a portion of an image I0,n acquired by the image sensor. For each detected sperm cell 10i, from each image I0,n, a thumbnail image Vi,n is extracted around the position (xi, yi) assigned to the sperm cell. The position (xi, yi) of the sperm cell 10i, in each image I0,n, is obtained following the implementation of the position tracking algorithm (step 130).
The size of each thumbnail image Vi,n is predetermined. Relative to each thumbnail image Vi,n, the position (xi, yi) of the sperm cell 10i considered is fixed: preferably, the position (xi, yi) of the sperm cell 10i is centered in each thumbnail image Vi,n.
A thumbnail image Vi,n can for example comprise several tens or even hundreds of pixels per side, typically between 50 and 500 pixels. In this example, a thumbnail image comprises 64×64 pixels. Because of the size of the thumbnail images and of the concentration of the sperm cells in the sample, a thumbnail image Vi,n can include several sperm cells to be characterized. However, only the sperm cell 10i occupying a predetermined position in the thumbnail image Vi,n, for example at the center of the thumbnail image, is characterized by using said thumbnail image.
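The extraction of a thumbnail image centered on a position (xi, yi) can be sketched as follows, assuming integer pixel coordinates; zero-padding near the edges of the acquired image is an assumption of this sketch:

```python
import numpy as np

def extract_thumbnail(image, x, y, size=64):
    """Extract a size x size thumbnail centered on (x, y),
    zero-padded where the window extends beyond the acquired image."""
    half = size // 2
    thumb = np.zeros((size, size), dtype=image.dtype)
    y0, y1 = max(0, y - half), min(image.shape[0], y + half)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half)
    thumb[y0 - (y - half): y0 - (y - half) + (y1 - y0),
          x0 - (x - half): x0 - (x - half) + (x1 - x0)] = image[y0:y1, x0:x1]
    return thumb
```

The detected sperm cell is thus always at the same, centered position in every thumbnail image fed to the classification network.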
Step 160: Classification of the Morphology of Each Sperm Cell
During this step, a classification neural network CNNc is used, so as to classify each sperm cell 10i previously detected by the detection convolutional neural network CNNd. For each sperm cell 10i, the neural network is fed by the thumbnail images Vi,1 . . . Vi,N extracted in the step 150.
The classification convolutional neural network CNNc can for example comprise 6 convolutional layers, comprising between 16 and 64 features: 16 features for the first four layers, 32 features for the fifth layer, and 64 features for the sixth layer. The transition from one layer to another is performed by applying a convolution kernel of 3×3 size. The output layer comprises nodes, each node corresponding to a probability of belonging to a morphological class. Each morphological class corresponds to a morphology of the sperm cell being analyzed. It can for example be a known classification, the classes being:
The output layer can thus include 11 classes.
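The classification network was implemented in Matlab; purely as an illustrative Python sketch, the final step — combining the features of the last convolution layer into probabilities of belonging to each of the 11 morphological classes — could take the following form. Global average pooling followed by a dense softmax output layer is an assumption; the patent only specifies the layer structure and the 11-class output:

```python
import numpy as np

def classify(feature_maps, weights, bias):
    """Global average pooling over the last conv features (H, W, C),
    then a dense layer and a softmax over the morphological classes."""
    pooled = feature_maps.mean(axis=(0, 1))   # (C,)
    logits = pooled @ weights + bias          # (C,) @ (C, n_classes)
    z = np.exp(logits - logits.max())         # numerically stable softmax
    return z / z.sum()                        # one probability per class
```

The class assigned to the sperm cell is the one with the highest probability, possibly averaged over the thumbnail images Vi,1 . . . Vi,N of the series.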
The neural network is programmed in the Matlab environment (Matlab publisher: The Mathworks). The neural network was previously subjected to a learning (step 90), so as to parameterize the convolution filters. During the learning, learning sets were used, each set comprising:
Different tests were carried out to examine the performance levels of the detection neural network CNNd. The sensitivity of the detection was tested with respect to the application of a high-pass filter during the step 110.
It can be seen that the image 8C includes detected sperm cells, identified by arrows, which do not appear on the image 7C. The application of the filter thus enhances the efficiency of the detection neural network.
On each of the
The sensitivity and the specificity (ordinate axes) were determined on 14 different samples (abscissa axis) of diluted bovine semen (factor 10). It can be seen that the performance levels of the neural network, whatever the configuration (curves a, b and c), are greater than that of the conventional algorithm, which is noteworthy. Moreover, the best performance is obtained in the configuration c, according to which the neural network is trained with 20 000 annotations and is fed with an image having been subjected to a preprocessing with a high-pass filter.
It will be recalled that the sensitivity corresponds to the ratio of the number of true positives to the sum of the numbers of true positives and false negatives, and that the specificity, as defined here, corresponds to the ratio of the number of true positives to the sum of the numbers of true positives and false positives.
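With these definitions (the "specificity" defined here is what is more commonly called precision, or positive predictive value), the two scores follow directly from the detection counts:

```python
def detection_scores(tp, fp, fn):
    """Sensitivity and specificity as defined in the text, from the counts
    of true positives (tp), false positives (fp) and false negatives (fn)."""
    sensitivity = tp / (tp + fn)   # fraction of real sperm cells detected
    specificity = tp / (tp + fp)   # fraction of detections that are real
    return sensitivity, specificity
```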
The axes of each confusion matrix correspond to the classes 1 to 10 previously described.
Note that the classification performance levels of the algorithms described in association with
Note also that a single input image makes it possible to obtain a satisfactory classification performance level. So, for a morphological classification, there is no need to have a series of images comprising several images. A single image can suffice. However, the classification performance level is better when several images are used.
The invention allows an analysis of a sample including sperm cells without recourse to a translation plate. Moreover, it allows a characterization directly from the image acquired by the image sensor, that is to say on the basis of diffraction figures of the different sperm cells, and without requiring recourse to numerical reconstruction algorithms. Currently, the recourse to such an algorithm is reflected by a processing time of 10 seconds per image. In other words 300 seconds for a series of 30 images. Moreover, as represented in
Another advantage of the invention is the tolerance with respect to a defocusing. The inventors estimate that the invention tolerates offsets of ±25 μm between the optical system and the sample.
Number | Date | Country | Kind |
---|---|---|---|
FR2013979 | Dec 2020 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2021/086909 | 12/20/2021 | WO |