The present invention relates to a hyperspectral imaging system.
A method is known for measuring spectrally-resolved images by placing a bandpass filter between an object and a camera, or by employing multichannel detectors, and acquiring one image for each spectral band. The number of detected bands and the spectral width of each band depend on the tuning capability of the spectral filter and of the detector. This technique, which collects spectral information in a discrete set of optical bands, is also referred to as multispectral imaging.
Another approach is based on Fourier-transform spectroscopy and uses an interferometer between the object and the detector. Document J. Craven-Jones et al. “Infrared hyperspectral imaging polarimeter using birefringent prisms”, Applied Optics, Vol. 50, n. 8, 10 Mar. 2011 describes a short-wavelength and middle-wavelength infrared hyperspectral imaging polarimeter (IHIP) including a pair of sapphire Wollaston prisms and several high-order retarders to form an imaging Fourier transform spectropolarimeter. The Wollaston prisms serve as a birefringent interferometer with reduced sensitivity to vibration versus an unequal path interferometer, such as a Michelson. Polarimetric data are acquired through the use of channeled spectropolarimetry to modulate the spectrum with the Stokes parameter information. The collected interferogram is Fourier filtered and reconstructed to recover the spatially and spectrally varying Stokes vector data across the image.
Moreover, document A. R. Harvey et al. “Birefringent Fourier-transform imaging spectrometer”, Optics Express, Vol. 12, No. 22, p. 5368, 1 Nov. 2004, discloses a birefringent Fourier-transform imaging spectrometer based on an entrance polarizer, two cascaded Wollaston prisms (one fixed and the other movable), a second polarizer and an imaging lens.
The Applicant noticed that the technique described in the above-mentioned documents shows a limited interferometric modulation due to chromatic dispersion and spatial separation of the replicas of the light propagating through the Wollaston prisms.
The present invention provides a Fourier-transform hyperspectral imaging system based on birefringence which is an alternative to the known ones and is less critically dependent on chromatic dispersion and spatial separation of the generated replicas with respect to the prior art systems.
The present invention relates to a Fourier-transform hyperspectral imaging system as defined by the appended claim 1. Particular embodiments of the system are described by the dependent claims 2-12.
Further characteristics and advantages will be more apparent from the following description of a preferred embodiment and of its alternatives given as an example with reference to the enclosed drawings in which:
The Fourier Transform hyperspectral imaging system 100 (hereinafter “hyperspectral imaging system”) comprises an optical imaging system 106, an adjustable birefringent common-path interferometer module 103 (hereinafter also called interferometer module), a two-dimensional light detector 104 and an analysis device 105.
The optical imaging system 106 is configured to produce an image of an object 102 and defines an optical axis z1. As schematically represented in
The optical imaging system 106 comprises one or more optical components (such as, for example, lenses or objectives) selected and designed according to the particular application to be implemented. The optical imaging system 106 is provided with an input 101 for the input radiation INR coming from the object 102. According to the example of
The input 101 can be an opening of a housing containing the components of the hyperspectral imaging system 100 and can be, as an example, provided by optical components (such as, a lens, an objective or a combination of them). In general, the input 101 can be a real or virtual aperture that constitutes the entrance window of the optical imaging system 106.
The adjustable birefringent common-path interferometer module 103 (hereinafter also called “interferometer module”) is configured to produce replicas of the input radiation INR which are delayed from each other by adjustable phase delays and adapted to interfere with each other. The interferometer module 103 comprises at least a movable birefringent element 110 and is configured to produce collinear replicas for entering optical rays parallel to the optical axis z1 of the optical imaging system 106.
The two-dimensional light detector 104 (also called “2D detector”) is optically coupled to the interferometer module 103 to receive said replicas. The 2D detector 104 is configured to provide a plurality of digital images of the object 102 depending on said adjustable delays.
Moreover, the hyperspectral imaging system 100 comprises an analysis device 105 connected to the light detector 104 and configured to translate the plurality of digital images into a frequency domain to obtain a hyperspectral representation of the object 102. The analysis device 105 is configured to compute a Fourier Transform of the digital signal corresponding to each point of the digital images provided by the light detector 104.
The embodiment shown in
It is observed that according to the embodiment of
The interferometer module 103 is provided, particularly, with an adjustable wedge pair 107 and an optical element 108. The adjustable wedge pair 107 is configured to provide an adjustable phase delay between radiation components passing through it and having reciprocally orthogonal polarizations.
The adjustable wedge pair 107 comprises a first optical wedge 109 and a second optical wedge 110. Both first 109 and second 110 optical wedges are made of a birefringent material. As an example, the first 109 and second 110 optical wedges have corresponding optical axes OX1 of the birefringent material parallel to each other. Particularly, the first optical wedge 109 and the second optical wedge 110 are optical prisms, having, preferably, the same apex angle. The first optical wedge 109 coupled to the second optical wedge 110 is equivalent to an optical plate having variable thickness.
At least one of the two optical wedges 109 and 110 is movable (e.g. it can be translated) along a direction transversal to the main direction z1 by means of an actuator 111, schematically represented in
The adjustable phase delay introduced by the wedge pair 107 is dependent on the variable position of the second optical wedge 110. Moreover, as an example, the actuator 111 may be connected to a computer-controlled precision translation stage. Alternatively, the analysis device 105 reads and suitably stores the position values assumed by the second optical wedge 110 shifted by the actuator 111.
It is noticed that the first optical wedge 109 and the second optical wedge 110 can be arranged very close to each other in order to avoid or minimize chromatic dispersion and spatial separation of the replicas. As an example, the distance between the corresponding adjacent tilted faces of the first optical wedge 109 and the second optical wedge 110 can be in the range 0-1 mm.
Optical element 108 is a birefringent plate (e.g. with fixed thickness) having a respective optical axis OX2 of the birefringent material perpendicular to the optical axis OX1 of wedge pair 107 and the main direction z1. Optical element 108 is coupled with the adjustable wedge pair 107 and configured to introduce a fixed phase delay between the radiations having reciprocally orthogonal polarizations.
Moreover, interferometer module 103 is equipped with an input polarizer 112 to provide an output radiation of linear polarization transversal to the optical axes OX1 and OX2 and, preferably, having a tilt of 45° with respect to such axes. According to the shown example, the input polarizer 112 is placed between the object 102 and the optical element 108.
The interferometer module 103 also includes an output polarizer 113, as an example, interposed between the adjustable wedge pair 107 and the optical imaging system 106. As the skilled person can recognize, the order of the elements of the interferometer module 103 can be different from the one shown in the drawings. Optionally, some of the above described components may be glued together.
With reference to the operation of the system 100 of
The adjustable wedge pair 107 introduces a phase delay of opposite sign with respect to optical element 108. By moving the second optical wedge 110, in accordance with the example indicated in
The combination of the optical element 108 and the adjustable wedge pair 107 allows setting phase delays ranging from positive to negative values. The output polarizer 113 projects the two delayed replicas to a common polarization state, allowing them to interfere.
As indicated above, by scanning the position Δz of one of the first wedge 109 and second wedge 110, it is possible to change continuously the phase delay between the two replicas; for each wedge position Δz, the imaging system acquires an image of the object S(x,y,Δz), where (x,y) are the spatial coordinates of the image points on the 2D detector 104.
A plurality of images IMn (
The analysis device 105 translates the digital signal in the frequency domain through a Fourier Transform (FT) procedure, leading to:
S(x,y,ω)=FT{S(x,y,Δz)}(x,y,ω)=∫S(x,y,Δz)e(ωΔzd(Δz) (1)
The function S(x,y,ω) is the so-called hypercube, and carries the spectrum of each point at coordinate (x,y) of the image.
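By way of illustration, the transform of expression (1) can be computed with a fast Fourier transform along the Δz axis of the acquired image stack. Below is a minimal Python sketch; the synthetic monochromatic stack, the 16×16 image size and the sampling step are illustrative assumptions, not part of the described system.

```python
import numpy as np

# Synthetic stack S(x, y, Dz): 16x16-pixel images acquired at N wedge
# positions; every pixel carries the interferogram of one spectral line.
N = 256
dz = 1.0                                   # wedge-position step (arb. units)
Dz = np.arange(N) * dz
w0 = 2 * np.pi * 0.1                       # angular frequency of the line
stack = np.ones((16, 16, 1)) * np.cos(w0 * Dz)

# Expression (1): Fourier transform along the delay axis, pixel by pixel.
hypercube = np.fft.rfft(stack, axis=-1)    # S(x, y, w)
freqs = np.fft.rfftfreq(N, d=dz) * 2 * np.pi

# The spectrum of every pixel peaks at the frequency of the line,
# within one frequency bin of the discrete transform.
peak = freqs[np.argmax(np.abs(hypercube[8, 8]))]
```

Each pixel of `hypercube` is then the spectrum of the corresponding image point, i.e. one element of the hypercube S(x,y,ω).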
The beam splitter 115 (e.g. dichroic) separates the entering radiation into a first portion that propagates along the main direction z1 and a second portion that propagates along a secondary direction z2.
Along the secondary direction z2, an additional optical imaging system 116 (having optical axis z2) followed by an additional two-dimensional light detector 117 are provided. The additional two-dimensional light detector 117 (connected to the analysis device 105) and the two-dimensional light detector 104 show sensitivity in different spectral regions. As an example, the additional two-dimensional light detector 117 may have proper sensitivity to operate in a spectral region different from the one of the two-dimensional light detector 104. Therefore, the additional optical imaging system 116 and the additional two-dimensional light detector 117 allow obtaining hyperspectral representations of the object 102 in a large spectral region which is the combination of the sensitivity ranges of detectors 117 and 104.
Preferably, the optical system 118 also includes an iris 121 configured to adjust the numerical aperture of the optical system 118 itself. The third embodiment 300 may operate as a hyperspectral camera as described for the previous embodiments. The first collimating lens 119 can be considered an example of the input 101.
Reference is made to the fourth embodiment 400, shown in
In addition to the components described with reference to
The microscope objective 122 provides the requested microscope magnification. The operation of the fourth embodiment is analogous to the one described with reference to
According to a sixth embodiment, by adding a light shutter 124 to the structure of the fifth embodiment 500, a sharp tunable filter 600 can be obtained. According to this sixth embodiment, a proper choice of the positions Δz allows implementing a sharp tunable filtering. The filter 600 can be controlled in terms of central wavelength and spectral width with a good degree of flexibility. This permits the acquisition of quasi-monochromatic images, spectrally matched to the emission wavelength of any sample 102 that must be discriminated from an unspecific background. This permits setting up an easily reconfigurable microscope that can highlight specific fluorescent labelling largely used in molecular biology. One or more labelling agents amongst the most common biomarkers (e.g. cyanine dyes and genetically encoded markers, like GFP, DsRed, etc.) can be separated in the same microscope field through the acquisition of only a few images. The advantage of this approach, with respect to the acquisition of the whole spectrum, is the dramatic reduction in the measurement and processing time, since the synthesis of each spectral filter requires the acquisition of only two images.
It is observed that the light shutter 124 can be placed in a position different from the one shown in
Once a central wavelength λ0 has been selected, a first set of positions {Δzn}Ph of the movable second optical wedge 110 is chosen such that the corresponding delays τn are integer multiples of the optical period T=λ0/c:
{Δzn}Ph→{τn=nT} (2)
The positions of the movable second optical wedge 110 in the set {Δzn}Ph imply constructive interference for the light at the wavelength λ0.
In the meantime, all the exposures are integrated by the 2D detector 104 into a single image. This implements a known method used in photography and scientific imaging, called “multiple exposure”.
Yet, the image acquired in this way, even if it possesses the maximum amount of signal at the wavelength λ0, also contains contributions from the other spectral components of the light. To remove them, a second set of positions {Δzn}Quad of the movable second optical wedge 110 is defined, such that the corresponding delays are odd multiples of the half period T/2:
{Δzn}Quad→{τn=(2n+1)T/2} (3)
Then, a second image is acquired for the {Δzn}Quad set of delays, leading to the so-called “Quadrature” image.
It is worth noting that the signal at the wavelength λ0 interferes destructively at the Quadrature positions, while the contributions of the other spectral components are substantially the same in the Phase and Quadrature images.
Finally, a subtraction of the Quadrature image of expression (3) from the Phase image of expression (2) gives the required spectral selectivity, since the signal is extracted from the noise, which is virtually removed or, at least, strongly attenuated. The subtraction can be implemented by the analysis device 105, which also controls the light shutter 124, that is switched on and off at the positions of the second wedge 110 corresponding to the Phase and Quadrature sets, while the second wedge 110 is continuously moving.
The above described method is especially convenient for performing fluorescence studies of biological specimens. The wavelength λ0 can be chosen to match the emission wavelength of the fluorescent label to be detected.
The above described method allows taking the Phase and Quadrature images in a very short time, only limited by the speed of the wedges and by the time constant of the shutter 124, which can be in the range of microseconds.
It is noticed that according to the sixth embodiment 600, the analysis device 105 does not perform a digital computation of a Fourier-Transform of the digital images provided by the two-dimensional light detector 104. Indeed, the combination of the image sampling associated with the light interruption (e.g. the action of the shutter 124) and the sequence of exposures made by the two-dimensional light detector 104 (equivalent to a summation) corresponds to a discrete Fourier-Transform.
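The Phase/Quadrature filtering described above can be illustrated numerically: exposures taken at delays nT add the target line constructively, exposures at (2n+1)T/2 add it destructively, and the off-band components contribute almost equally to both maps. Below is a minimal Python sketch for a single pixel; the two-line spectrum, the line weights and the 20 exposures per map are illustrative assumptions.

```python
import numpy as np

# Single-pixel interferogram I(tau) for a two-line spectrum: the target
# line at frequency f0 plus an unwanted line at fb (illustrative values).
f0, fb = 1.0, 0.43
T = 1.0 / f0                                # period of the target line

def interferogram(tau):
    return (1 + np.cos(2 * np.pi * f0 * tau)) + 0.8 * (1 + np.cos(2 * np.pi * fb * tau))

n = np.arange(20)                           # ~20 exposures per map
phase_img = interferogram(n * T).sum()              # delays = n*T ("Phase")
quad_img = interferogram((2 * n + 1) * T / 2).sum() # delays = (2n+1)*T/2 ("Quadrature")
filtered = phase_img - quad_img             # extracts the line at f0

# The target line adds 2 counts per exposure to the Phase map and 0 to the
# Quadrature map, so 'filtered' is close to 40, while the off-band line
# contributes almost equally to both maps and largely cancels.
```

In the described system the two summations are performed optically by the multiple-exposure mechanism, and only the final subtraction is done by the analysis device 105.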
The Fourier-transform hyperspectral imaging system according to any one of the above-described embodiments can operate in the wavelength ranges dictated by the transparency of the optical elements and the sensitivity of the two-dimensional detector 104. The ranges are, for example (but not limited to), 180-3500 nm (using alpha-BBO, α-BaB2O4, as a birefringent material), 500-5000 nm (using lithium niobate, LiNbO3, as a birefringent crystal) and 400-20000 nm (using calomel, Hg2Cl2, as a birefringent crystal).
The above described embodiments of the Fourier-transform hyperspectral imaging system can be employed in different fields, including pharmaceuticals, agriculture, food quality control, material identification, mapping of works of art and remote sensing, where hyperspectral images are acquired regularly in order to retrieve crucial information about the spectral properties of the sample as a function of the position.
It is observed that the hyperspectral images, especially in the infrared region, are helpful to access information below the surface layer and have been also used for many applications that include, in a non-exhaustive list:
i) the search for clues of oilfields through the detection of hydrocarbon micro-seepages;
ii) the analysis of forests and crop fields in agriculture;
iii) the extraction of preparatory drawings from below the painted layer and to detect forgeries in paintings;
iv) the forensic analysis of documents, fingerprint detection and blood stain dating at crime scenes.
The above described Fourier-transform hyperspectral imaging system shows several advantages over the known hyperspectral imaging systems.
Particularly, the employed Fourier-transform approach provides for the following advantages:
Moreover, the described hyperspectral imaging system shows the following additional advantages:
With reference to the above mentioned prior art documents J. Craven-Jones et al. “Infrared hyperspectral imaging polarimeter using birefringent prisms” and A. R. Harvey et al. “Birefringent Fourier-transform imaging spectrometer” it is noticed that the hyperspectral imaging system herein described does not show the limited interferometric modulation due to chromatic dispersion and spatial separation of the replicas.
The Wollaston prism employed in the above indicated prior art documents divides the light into two orthogonally polarized replicas that, after the prisms, are angularly separated and then need to be merged again by further optics (e.g. a lens) before the imaging detector. The two light replicas interfere at the detector coming from different directions; therefore, the wavefronts of the two beams are correspondingly inclined, and this causes a loss of interferometric modulation at the detector that reduces the signal-to-background ratio.
Moreover, the Wollaston prisms angularly separate the different spectral components of the light, and this further reduces the interferometric modulation at the detector, because the same colour of the two orthogonally polarized replicas impinges onto slightly different positions of the camera, thus reducing the interferometric signal.
These two undesired effects can be strongly attenuated in the present hyperspectral system, since the two wedges 109 and 110 can be placed close together and both the separation and spectral dispersion effects are negligible with respect to the case of two Wollaston prisms.
It is noticed that these two problems increase when the two Wollaston prisms are made bigger in order to increase the maximum interferometric delay and therefore the maximum spectral resolution.
In the following paragraphs the properties of the interferometer module 103 will be discussed and additional and preferred indications will be provided in order to design the interferometer module 103, used in the Fourier-transform hyperspectral imaging systems 100-600.
It is noticed that the figure of merit which qualifies an interferometer is the visibility of the interferogram it generates as a function of the phase delay. The visibility v is defined as:
v=(Imax−Imin)/(Imax+Imin)
where I represents the light intensity, and Imax and Imin are the maximum and minimum intensities of the interference fringes. The visibility v ranges from 0 (worst case, no fringes are detected) to 1 (best case, the modulation of the fringes is 100%).
An imaging system can be characterized by three aspects: (i) it collects light from a wide field of view, since light arises from an extended object or a scene; (ii) the light emerging from each point is a diverging field; (iii) light is collected by a two-dimensional matrix detector, each pixel of which images one point of the scene.
We call P a generic point of an object or scene, and J the pixel of the detector capturing its image. When an interferometer is applied to the imaging system, it produces a sequence of images as a function of the phase delay.
Any pixel J records the interferogram, which holds information about the corresponding point P. Generally, we can define the following preferred criterion: in a preferred embodiment of the Fourier-transform hyperspectral imaging system, the visibility of the interferogram at any pixel J of the image is larger than a given threshold: v>vmin.
Let's first consider which parameters influence the fringe visibility v at a generic pixel J. In the following we will consider monochromatic waves with angular frequency ω.
In order to estimate the fringe visibility in an imaging system, we will evaluate the behavior of the interferometer module 103 for a generic ray propagation direction.
Let's now consider a ray propagating at an angle α with respect to the normal direction. We get that the vertically and horizontally-polarized components accumulate a relative phase shift φ. Panel (b) plots φ as a function of α, when ψT=0 (i.e. both plates have the same thickness L). φ is evaluated for a wave at λ=600 nm in α-BBO blocks with L=2.4 mm.
Particularly,
The interferometer module 103 placed in the imaging system of
Ei=cos[ωt]+cos[ωt−ψT−φi] (5)
where unitary amplitude has been assumed for simplicity and where ψT and φi are the relative phase delays, as follows:
EJ(ψT)=ΣiEi=Σi{cos[ωt]+cos[ωt−ψT−φi]} (6)
By taking the summation, we get that the resulting interferogram intensity <EJ(ψT)^2> depends on the distribution of the phase shifts φi. To numerically evaluate the visibility, it is sufficient to scan ψT in the neighborhood of ψT=0 (following the book Born-Wolf, “Principles of Optics”, Cambridge University Press (2005), page 298).
When the phase-shifts φi of the ray bundles range from φ1 to φ2, it is possible to show that:
It is clarified that
It is noticed that if all rays have the same phase shift φ, (i.e. Δφ=0) the visibility is v=1.
To get v larger than a given threshold vmin, the phase range Δφ should be smaller than a maximum value Δφmax. Particularly, in the case of an interferometric imaging system, the condition v>vmin is, preferably, fulfilled for all the pixels in the image plane.
In our case, since φ depends on α, the phase spread Δφ depends on Δα as Δφ≈(dφ/dα)*Δα, where dφ/dα is the first derivative of φ.
Lines RY1 and lines RY2 are the peripheral rays of the bundle traveling from P to J. They impinge on the birefringent interferometer module 103 at angles α1 and α2, respectively, hence all the rays of the bundle have angles ranging from α1 to α2. The average angle is α0=(α1+α2)/2, and Δα=|α1−α2|.
From these considerations, we can state the preferred criterion to build a hyperspectral imaging system. This criterion defines a preferred example of the design procedure.
In preferred imaging systems, characterized by high visibility v at any pixel, the bundle of rays traveling from P into pixel J shows a phase range Δφ=|φ2−φ1|<Δφmax.
Preferably, the maximum phase difference Δφmax is comprised between 1.0 π rad and 1.8 π rad. More preferably, Δφmax is no greater than 1.0 π rad. Particularly, if Δφmax is no greater than 1.0 π rad, a visibility v>0.9 can be obtained; if Δφmax is no greater than 1.5 π rad, a visibility v>0.55 can be obtained; if Δφmax is no greater than 1.8 π rad, a visibility v>0.21 can be obtained.
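The correspondence between Δφmax and the visibility figures above can be checked numerically from expression (6): sum the two field terms contributed by every ray of the bundle, scan ψT around zero, and take v=(Imax−Imin)/(Imax+Imin) from the time-averaged intensity. Below is a minimal Python sketch; the uniform distribution of the phase shifts across the bundle is an assumption made for illustration.

```python
import numpy as np

def visibility(phis, n_scan=2001):
    """Fringe visibility per expression (6): every ray contributes the field
    cos(wt) + cos(wt - psiT - phi_i); psiT is scanned around zero and
    v = (Imax - Imin) / (Imax + Imin) is taken from the time-averaged
    intensity at the pixel (phasor notation)."""
    psi = np.linspace(-np.pi, np.pi, n_scan)
    # Phasor sum of all field terms; |A|^2 / 2 is the time-averaged intensity.
    A = len(phis) + np.exp(-1j * psi) * np.sum(np.exp(-1j * np.asarray(phis)))
    intensity = 0.5 * np.abs(A) ** 2
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

def uniform_bundle(dphi, n=1000):
    # Phase shifts spread uniformly over a range dphi (midpoint sampling).
    return (np.arange(n) + 0.5) * dphi / n

v_flat = visibility(np.full(1000, 0.3))           # identical shifts: v ~ 1
v_pi = visibility(uniform_bundle(1.0 * np.pi))    # just above 0.9
v_18pi = visibility(uniform_bundle(1.8 * np.pi))  # just above 0.21
```

Under this uniform-bundle assumption the sketch reproduces the thresholds quoted above: Δφmax of 1.0 π rad gives v≈0.906, 1.5 π rad gives v≈0.55 and 1.8 π rad gives v≈0.216, while identical phase shifts give v=1 as stated below.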
According to
This paragraph refers to the embodiments of
Δα≈arctan(D/h)
To design a camera with large visibility for all the pixels J of the detector, it is convenient to focus on the most unfavorable condition, i.e. those points P whose rays have the largest excursion Δφ. As evidenced in
Camera lens diameter: D=5 mm
Object distance: h=1 m
2D Detector: 1/1.8″; Camera lens f=25 mm;
field of view: 15.4°; largest value of α0=7.7°
which leads to:
Δα=0.28°→Δφ(α0)=0.537π rad (7)
From
v>0.9
The contrast of all other pixels will be higher than this value.
These figures demonstrate that it is possible to build a high-v hyperspectral camera by placing the interferometer module 103 in front of the camera objective of an imaging system.
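The angular spread of expression (7) follows directly from the geometry quoted above; a short numerical check is given below. Only D and h from the listed values are used: evaluating Δφ(α0) would additionally require the dispersion model of the birefringent plates, which is not reproduced here.

```python
import math

# Peripheral rays from a point at h = 1 m seen through a camera lens of
# diameter D = 5 mm (the values listed in the text) subtend the angular
# spread of expression (7).
D, h = 5e-3, 1.0
dalpha = math.degrees(math.atan(D / h))
# dalpha is about 0.286 deg, quoted (truncated) as 0.28 deg in expression (7)
```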
Hyperspectral Microscope with Uniform Zero Path-Difference
This paragraph refers to the embodiment 500 of
The microscope is illustrated in
a tube lens 123 with focal length fTL such that M=fTL/f0;
the birefringent interferometer module 103 located in the focal plane of the tube lens 123;
a secondary imaging system (the objective or lens 114) attached to the 2D detector 104 that forms the image of the plane of the birefringent interferometer onto the 2D detector 104.
It is noticed that the distance of the tube lens 123 from the objective 122 and the distance of the interferometer module 103 from the tube lens 123 are equal to each other and to the focal length fTL.
The numerical aperture N.A. of a microscope objective is defined as the sine of the angle θ subtended by the peripheral (i.e. marginal) ray.
With this configuration:
noho sin(θ)=nihi sin(α),
where no and ni are the refractive indices in the object and image spaces, respectively, while ho and hi are the size of the object and of its image, made by the tube lens. Finally, sin (θ) is the numerical aperture N.A. of the objective 122. Assuming an equal refractive index on both sides of the optical system, since M=hi/ho, it is very simple to get an expression for α:
α=arcsin(N.A./M);
α is typically very small, so that we can assume sin (α)≅α. Hence, we finally get:
α=N.A./M;
the average of the angle of incidence is α0=0 for any point P. As is clear from
As a consequence, for any point P, φ ranges from −φ(α) to +φ(α). This means that:
A microscope with these features is considered:
In this case,
Δα=α=1.72°→Δφ(α)=0.38π rad
From
v≅1
These figures demonstrate that it is possible to build a high-v hyperspectral microscope by placing the birefringent interferometer 103 in the focal plane of the tube lens 123.
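For completeness, the relation α=N.A./M derived above can be evaluated numerically. The objective parameters below (N.A.=0.3, M=10) are illustrative assumptions chosen to be consistent with the 1.72° figure used in this example; they are not stated explicitly in the text.

```python
import math

# alpha = arcsin(N.A./M) ~ N.A./M for an assumed objective with N.A. = 0.3
# and magnification M = 10 (hypothetical values reproducing the 1.72 deg
# figure of this example).
NA, M = 0.3, 10
alpha_exact = math.degrees(math.asin(NA / M))   # ~1.72 deg
alpha_small = math.degrees(NA / M)              # small-angle approximation
```

The two values differ by well under a hundredth of a degree, confirming that the small-angle approximation sin(α)≅α is justified here.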
In addition, since with this configuration the zero path difference ψ0=<φ> is 0 across all the pixels of the image, it is particularly suited for the application of the filtering procedure described with reference to the sixth embodiment 600.
Hyperspectral Microscope with Maximum Visibility
This paragraph refers to the embodiment 400 of
The microscope is illustrated in
With this configuration:
α0=h0/f0,
where h0 is the height of the object point P.
As a consequence, for any point P, φ=φ(α0), hence the zero path difference ψ0=<φ> is different for each pixel of the image.
A microscope with these features is considered:
f0=20 mm
h0Max=1 mm
In this case, α0 ranges from 0 to 2.86° across the image; conversely, Δφ≅0 for each pixel, which corresponds to v=1.
These figures demonstrate that it is possible to build a hyperspectral microscope with high-v by placing the birefringent interferometer module 103 after the microscope objective 122.
The fact that in configuration of
As already described, the protocol requires only two maps, named Phase and Quadrature. Each map is the superposition of images acquired at specific delays. The Phase map is obtained from images acquired at delays which are even multiples of T/2 (or equivalently, integer multiples of T, where T=1/f). On the contrary, the Quadrature map is obtained from images at delays which are odd multiples of T/2. By subtracting the Quadrature from the Phase map, one obtains information at f.
This protocol shows a variety of advantages: it produces only two maps instead of the numerous (≈100) images required by the hyperspectral detection; calculations show that each Phase or Quadrature map is typically the superposition of only 20 images; image superposition can be optically obtained by the so-called multiple exposure mechanism, which allows one to exploit the full camera dynamic range either by lowering the illumination dose, or by reducing the acquisition time of each exposure to few milliseconds.
Number | Date | Country | Kind |
---|---|---|---
10201800008171 | Aug 2018 | IT | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/IB2019/056368 | 7/25/2019 | WO | 00 |