The technical field of the invention is related to the observation of a sample, in particular a biological sample, with an imaging device operating in ambient light.
The observation of samples, and in particular biological samples, by lensless imaging has seen substantial development over the last ten years. This technique allows a sample to be observed by placing it between a light source and an image sensor, without placing any image-forming lenses between the sample and the image sensor. Thus, the image sensor collects an image of a light wave transmitted by the sample, without conjugation between the image sensor and the sample.
Document WO2008090330 for example describes a device allowing biological particles to be observed by lensless imaging. The biological particles are for example cells. The device allows each cell to be associated with an interference pattern, the morphology of which allows the type of cell to be identified. Lensless imaging would thus appear to be a simple and inexpensive alternative to a conventional microscope. In addition, its field of observation is much larger than that of a microscope can be.
In the visible domain, lensless imaging has been applied to examine samples containing particles, in particular biological particles or cells, for characterization purposes. Examples may be found in WO2017178723, WO2016151248, or WO2016151249. The use of lensless imaging to count particles is described in WO2018115734 or in WO2015166009, or in WO2018060589.
Documents WO2016189257 and EP3199941 describe the use of lensless imaging to characterize tissue slides of pathology-slide type.
In the aforementioned documents, lensless imaging is implemented at wavelengths in the visible spectral band. In order to keep ambient light at bay, the devices are such that the main components (light source, image sensor) and the sample are combined in a chamber that is impermeable to light.
The inventors propose a device that is simple to implement, and that allows the constraint on isolation of the sensor with respect to ambient light to be relaxed. A greater ease-of-use results therefrom.
A first subject of the invention is a method for observing a sample, the sample being placed between a light source and an image sensor, preferably comprising at least 10000 pixels, the light source emitting an illuminating beam, which propagates to the sample, the illuminating beam being emitted in an illumination spectral band lying above 800 nm, the method comprising the following steps:
a) illuminating the sample with the light source;
b) acquiring an image of the sample with the image sensor;
the image sensor being configured such that its detection spectral band excludes wavelengths in the visible spectral band, the latter lying at least between 400 nm and 750 nm, or at least between 400 nm and 780 nm.
Thus, the image may be acquired in ambient light, the image sensor being exposed to light in the visible spectral band.
According to one embodiment, no image-forming optics are placed between the sample and the image sensor. According to another embodiment, the device comprises an optical system, placed between the sample and the image sensor, the optical system having an object focal plane and an image focal plane. The device is then such that:
Whatever the embodiment, the method may comprise any one of the following features, implemented alone or in any technically realizable combination:
A second subject of the invention is a device for observing a sample, comprising:
According to one embodiment, no image-forming optics are placed between the sample and the image sensor. According to another embodiment, the device comprises an optical system, placed between the sample and the image sensor, the optical system having an object focal plane and an image focal plane. The device is then such that:
Whatever the embodiment, the device may comprise any one of the following features, implemented alone or in any technically realizable combination:
Other advantages and features will become more clearly apparent from the following description of particular embodiments of the invention, which are given by way of nonlimiting example, and shown in the figures listed below.
The illuminating beam is emitted in an illumination spectral band Δλ12. The illumination spectral band Δλ12 preferably lies outside of the visible spectral band. By visible spectral band, what is meant is a spectral band comprised between 400 nm and 750 nm, or between 400 nm and 780 nm. Preferably, the illumination spectral band Δλ12 lies between 750 nm or 780 nm and 10 μm, and preferably between 800 nm and 10 μm, and preferably between 750 nm or even 800 nm and 5 μm, and more preferably between 750 nm or even 800 nm and 2 μm, or between 750 nm or even 800 nm and 1200 nm, or between 750 nm or even 800 nm and 1000 nm.
By "lies between m and n", m and n representing wavelength values, what is meant is that more than 80% of the intensity of the emitted light, or even more than 90% or 95% of the emitted intensity, is comprised between m and n. The expression "lies between m and n" does not necessarily mean "extends from m to n".
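By way of a purely illustrative check of this criterion, the sketch below evaluates it numerically on a sampled emission spectrum; the function name, the sampled spectrum and the use of the 80% threshold as a default parameter are assumptions made here for illustration only.

```python
import numpy as np

def lies_between(wavelengths_nm, spectrum, m_nm, n_nm, fraction=0.8):
    """Return True if at least `fraction` of the emitted intensity lies
    between m_nm and n_nm (wavelengths in nanometres; uniform sampling is
    assumed, so that a sum approximates the integrated intensity)."""
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    inside = (wavelengths_nm >= m_nm) & (wavelengths_nm <= n_nm)
    return spectrum[inside].sum() >= fraction * spectrum.sum()

# Example: a narrow emission band centred on 850 nm lies between 800 nm and 900 nm,
# but not in the visible spectral band (400 nm to 750 nm).
wl = np.arange(700.0, 1000.0, 1.0)
emission = np.exp(-0.5 * ((wl - 850.0) / 10.0) ** 2)
print(lies_between(wl, emission, 800.0, 900.0))   # True
print(lies_between(wl, emission, 400.0, 750.0))   # False
```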
The sample 10 is a sample that it is desired to characterize. It notably comprises a medium 10m in which particles 10p bathe. The medium 10m may be a liquid medium. It may comprise a bodily liquid, for example obtained from blood or urine or lymph or cerebrospinal fluid. It may also be a culture medium, comprising nutrients allowing microorganisms or cells to develop. By particle, what is notably meant, non-exhaustively, is:
A particle 10p may be solid or liquid.
The sample 10 may be a thin slide of biological tissue, such as a pathology slide. The thickness of such a slide is of the order of a few tens of microns.
The sample 10 is, in this example, contained in a fluidic chamber 15. The fluidic chamber 15 is for example a Gene Frame® fluidic chamber of thickness e=250 μm. The thickness e of the sample 10, along the propagation axis, typically varies between 10 μm and 1 cm, and is preferably comprised between 20 μm and 500 μm. The sample lies in a plane P10, called the sample plane. The sample plane P10 is preferably perpendicular to the propagation axis Z, or substantially perpendicular to the latter. By substantially perpendicular, what is meant is perpendicular to within an angular tolerance, for example to within ±10° or ±20°. The sample plane is defined by the axes X and Y shown in
The distance D between the light source 11 and the fluidic chamber 15 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Advantageously, the light source 11, seen by the sample, may be considered to be point-like. This means that its diameter (or its diagonal) is preferably smaller than one tenth and better still one hundredth of the distance between the fluidic chamber 15 and the light source. In
The diaphragm may be replaced by an optical fibre, a first end of which is placed facing the light source 11 and a second end of which is placed facing the sample 10. The device shown in
Alternatively, the light source may be a laser source, such as a laser diode, as shown in
Preferably, the illumination spectral band Δλ12 has a bandwidth narrower than 100 nm. By spectral bandwidth, what is meant is the full width at half maximum of said spectral band. Preferably, the bandwidth of the illumination spectral band Δλ12 is narrower than 50 nm, or even narrower than or equal to 20 nm.
The sample 10 is placed between the light source 11 and the image sensor 20. The image sensor 20 defines a detection plane P0, preferably lying parallel, or substantially parallel to the plane P10 in which the sample lies. The expression substantially parallel means that the two elements may not be rigorously parallel, an angular tolerance of a few degrees, of the order of ±20° or ±10° being acceptable.
The image sensor 20 is able to form an image I0 of the sample 10 in the detection plane P0. In the example shown, it is a CCD or CMOS image sensor 20 comprising a matrix array of pixels. The image sensor comprises a number of pixels preferably greater than 10000, and more preferably greater than 100000. The detection plane P0 preferably lies perpendicular to the propagation axis Z. The distance d between the sample 10 and the matrix array of pixels of the image sensor is preferably comprised between 50 μm and 2 cm, and more preferably comprised between 100 μm and 2 mm.
The absence of image-forming or magnifying optics between the image sensor 20 and the sample 10 in this embodiment will be noted. This does not preclude focusing micro-lenses possibly being present at the level of each pixel of the image sensor 20, said micro-lenses not performing the function of magnifying the image acquired by the image sensor, their function being to optimize detection efficiency.
The image sensor 20 is configured to form an image in a detection spectral band Δλ20. Advantageously, the detection spectral band does not lie in the visible spectral band, or does so only negligibly. It preferably lies between 750 nm or 780 nm and 10 μm, and preferably between 800 nm and 10 μm, and more preferably between 750 nm or even 800 nm and 5 μm, and even more preferably between 750 nm or even 800 nm and 2 μm, or between 750 nm or even 800 nm and 1200 nm, or between 750 nm or even 800 nm and 1000 nm. Because it lies outside of the visible spectral band, the detection spectral band Δλ20 allows images to be acquired when the device 1, and notably the image sensor 20, is exposed to ambient light, in the visible spectral band. The detection spectral band is configured such that the image acquired by the image sensor 20 is not affected, or is affected only negligibly, by the ambient light. Thus, the device 1 may be used without it being necessary to place it in a chamber that is impermeable to light. It may be used in ambient light. The ambient-light level in which the device is able to operate depends on the fraction of the visible spectral band detected by the image sensor.
Preferably, the detection spectral band Δλ20 has a bandwidth narrower than 100 nm. By spectral bandwidth, what is meant is the full width at half maximum of said spectral band. Preferably, the bandwidth of the detection spectral band Δλ20 is narrower than 50 nm, or even narrower than or equal to 20 nm.
It will be understood that the detection spectral band Δλ20 and the illumination spectral band Δλ12 overlap at least partially.
The detection spectral band Δλ20 may be defined by the intrinsic properties of the pixels. The image sensor then comprises pixels able to detect photons solely in the detection spectral band. More simply, the detection spectral band Δλ20 may be defined by a detection filter 29, of high-pass or band-pass type, placed between the image sensor 20 and the sample 10. Analogously, the illumination spectral band Δλ12 may be defined by the intrinsic properties of the light source 11. This is notably the case when the light source is a laser, as shown in
The image sensor 20 may be an RGB CMOS sensor comprising pixels the detection spectral band of which is defined by a Bayer filter. Thus, the pixels of the image sensor are sensitive in spectral bands corresponding to the colours red, green and blue of the visible spectral band, respectively.
As mentioned in the patent applications cited with respect to the prior art, under the effect of the incident light wave 12, the particles 10p present in the sample may generate a diffracted wave 13, liable to produce, in the detection plane P0, interference, in particular with a portion 12′ of the incident light wave 12 transmitted by the sample. Moreover, the sample 10 may absorb some of the incident light wave 12. Thus, the light wave 14 transmitted by the sample, to which the image sensor 20 is exposed, is called the "exposure light wave". The exposure light wave 14 may comprise:
These components form interference in the detection plane. Thus, the image I0 acquired by the image sensor comprises interference patterns (or diffraction patterns), each interference pattern possibly being associated with one particle 10p of the sample.
A processing unit 21, for example a microprocessor, is able to process each image I0 acquired by the image sensor 20. In particular, the processing unit 21 is a microprocessor connected to a programmable memory 22 in which a sequence of instructions for performing the image-processing and computing operations described in this description is stored. The processing unit may be coupled to a screen 24 allowing images acquired by the image sensor 20 or computed by the processor 21 to be displayed.
An image I0 acquired by the image sensor 20, also referred to as a hologram, may be subjected to a reconstruction, called a holographic reconstruction. As described with reference to the prior art, it is possible to apply, to the image I0 acquired by the image sensor 20, a holographic propagation operator h, so as to compute a complex amplitude A(x,y,z) representative of the exposure light wave 14, and to do so for every point of coordinates (x,y,z) of the space, and more particularly between the image sensor 20 and the sample 10. The coordinates (x,y) designate coordinates, called radial coordinates, parallel to the detection plane P0. The coordinate z is a coordinate along the propagation axis Z, expressing a distance between the sample 10 and the image sensor 20.
The complex amplitude may be obtained using one of the following expressions:
A(x,y,z) = I0(x,y,z) * h, the symbol * designating the convolution operator, or, preferably:
A(x,y,z) = √(I0(x,y,z)) * h, or even:
The function of the propagation operator h is to describe the propagation of light between the image sensor 20 and a point of coordinates (x,y,z), located at a distance |z| from the image sensor. The propagation operator is for example the Fresnel-Helmholtz function, such that:
It is then possible to determine a property of the exposure light wave 14, for example the modulus M(x,y,z) and/or the phase φ(x,y,z), at the distance |z|, with:
M(x,y,z)=abs[A(x,y,z)];
φ(x,y,z)=arg[A(x,y,z)];
The operators abs and arg respectively designate the modulus and argument.
The distance |z| is a reconstruction distance.
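As a purely illustrative sketch of this propagation step, the Python/NumPy fragment below implements a commonly used form of the Fresnel kernel, h(x,y,z) = (1/(jλz))·exp(j2πz/λ)·exp(jπ(x²+y²)/(λz)), and applies it by a convolution computed in the Fourier domain; the exact expression, normalization and sign conventions retained in the documents cited above may differ, and all function and parameter names are assumptions made here for illustration.

```python
import numpy as np

def fresnel_kernel(shape, pixel_pitch, wavelength, z):
    """Sampled Fresnel-Helmholtz kernel h(x, y, z) on the sensor grid.

    shape       : (rows, cols) of the acquired image I0
    pixel_pitch : pixel size of the image sensor, in metres
    wavelength  : central wavelength of the illumination spectral band, in metres
    z           : signed distance along the propagation axis Z, in metres
    """
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    return (1.0 / (1j * wavelength * z)) \
        * np.exp(1j * 2 * np.pi * z / wavelength) \
        * np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z))

def propagate(field, pixel_pitch, wavelength, z):
    """Convolve a field sampled in the detection plane P0 with h(x, y, z);
    the convolution is computed via FFTs (circular convolution)."""
    h = fresnel_kernel(field.shape, pixel_pitch, wavelength, z)
    # ifftshift moves the kernel's centre to index (0, 0) so that the
    # convolution output stays aligned with the pixel grid of `field`.
    H = np.fft.fft2(np.fft.ifftshift(h)) * pixel_pitch ** 2
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A(x, y, z) can then be obtained from an acquired image I0, for example as
# A = propagate(np.sqrt(I0), pixel_pitch, wavelength, z), and its modulus and
# phase extracted with np.abs(A) and np.angle(A).
```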
The complex expression A(x,y,z) of the light wave 14 at any point of coordinates (x,y,z) of the space is such that: A(x,y,z) = M(x,y,z)e^(jφ(x,y,z)).
The complex expression A is a complex quantity the argument and modulus of which are respectively representative of the phase and intensity of the exposure light wave 14.
By implementing holographic reconstruction algorithms, it is possible to determine the complex expression A in a reconstruction plane. The reconstruction plane is preferably parallel to the detection plane P0 and/or to the sample plane P10. A complex image AZ of the exposure light wave 14 in the reconstruction plane is then obtained. Advantageously, the reconstruction plane is the plane P10 in which the sample 10 lies. In order to obtain a holographic reconstruction of good quality, the image acquired by the image sensor may be subjected to an iterative reconstruction algorithm. Iterative reconstruction algorithms are for example described in WO2016189257 or in WO2017162985.
It is possible to form images MZ and ϕz respectively representing the modulus or the phase of a complex image AZ in a plane PZ located at a distance |z| from the detection plane P0, with MZ=mod(AZ) and ϕz=arg(AZ). When the reconstruction plane PZ corresponds to a plane in which the sample lies, the images MZ and ϕz allow the sample 10 to be observed with a correct spatial resolution.
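Continuing the sketch given above, and reusing the propagate function defined there, the fragment below illustrates how a complex image AZ could be reconstructed in the sample plane and the images MZ and ϕz deduced from it; the back-propagated field √I0 corresponds to the second expression given earlier, and the numerical values are merely illustrative assumptions (of the same order of magnitude as those of the trial described further on).

```python
import numpy as np

# Illustrative parameters (assumed values, for the sketch only).
pixel_pitch = 1.67e-6    # pixel size of the image sensor, in metres
wavelength = 980e-9      # centre of the illumination/detection spectral band
d = 1.5e-3               # sample-to-sensor distance along the Z axis

# Stand-in for a hologram I0 acquired by the image sensor.
I0 = np.ones((512, 512))

# Back-propagate sqrt(I0) from the detection plane P0 to the sample plane P10
# (negative distance: propagation from the sensor towards the sample).
A_z = propagate(np.sqrt(I0), pixel_pitch, wavelength, -d)

M_z = np.abs(A_z)        # modulus image MZ = mod(AZ)
phi_z = np.angle(A_z)    # phase image  ϕz = arg(AZ)
```

Iterative reconstruction algorithms of the kind cited above typically alternate such propagations between the detection plane and the sample plane, applying constraints in each plane; they are not reproduced here.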
Trials
A trial was carried out using a reference device and a device according to the invention. Each device comprises:
The reference device was placed in a dark chamber, forming a chamber that was impermeable to light. The device according to the invention comprised a detection filter 29 placed directly on the image sensor, defining a detection spectral band centred on 980 nm and of spectral width equal to 10 nm. Thus, the detection spectral band lay between 975 nm and 985 nm. In this example, the device according to the invention was used in daylight.
A sample, containing micron-sized particles in aqueous solution, was placed at a distance of 1.5 mm from the image sensor.
According to another embodiment, schematically shown in
The defocus distance may be comprised between 5 μm and 5 mm, and preferably between 10 μm and 2 mm. In the same way as in a lensless configuration, such a configuration allows an image to be obtained in which diffracting elements of the sample, particles for example, appear in the form of diffraction patterns, interference occurring between the light wave emitted by the light source and propagating to the image sensor and a diffracted wave generated by each diffracting element of the sample. In the example of
However, a lensless-imaging configuration is preferred, because of the larger observation field that it procures.
The invention will possibly be employed to observe samples in the field of biology or health, or in other industrial fields, for example food processing and/or environmental inspection.