The present invention relates to spectral imaging devices.
Referring to the comparative example shown in
The focusing distance LIMG2 between the lens FLNS and the image sensor SEN1 may represent a significant proportion of the total length LCAM1 of the spectral camera CAM1. The focusing distance LIMG2 may cause e.g. the size of the camera CAM1 to be too large for mobile applications. An attempt to reduce the focusing distance LIMG2 may increase the divergence of light beams transmitted through the Fabry-Perot interferometer FPI, which in turn may have an adverse effect on the spectral resolution of the Fabry-Perot interferometer FPI.
An object is to provide a spectral imaging device. An object is to provide a method for spectral imaging. An object is to provide an imaging spectrometer.
According to an aspect, there is provided a device according to claim 1.
Further aspects are defined in the other claims.
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
Optical micro-structures may be utilized to provide a compact size of the imaging device. In particular, the imaging device may comprise a micro lens array to reduce the length of the imaging device and to provide a compact size.
The imaging device may be used for multispectral imaging. The Fabry-Perot interferometer may operate as a tunable band-pass filter of the imaging device. The imaging device may simultaneously capture all input fields of a viewing sector at a single wavelength. The spectral position of the passband of the Fabry-Perot interferometer may be scanned to obtain spectral narrowband images of an object at several different wavelengths. The imaging device may scan the optical input spectrally to generate a band-pass image data set of a scene on the image sensor.
The imaging device may be arranged to operate such that the divergence of light transmitted through the Fabry-Perot interferometer is smaller than a predetermined limit. Light received from each field angle of the viewing sector may simultaneously pass through the Fabry-Perot interferometer, so as to provide a spectral image of an object. A single spectral image may represent a narrow spectral band of the total spectrum of the object. Several spectral images may be combined to provide a multi-wavelength spectral image, if desired.
Using the micro lens array may allow a substantial reduction of the size of the imaging device. The length of the imaging device may be e.g. in the range of 3 mm to 15 mm.
In an embodiment, the imaging device may comprise a telecentric system to form axial light beams from light beams received from different field angles of the viewing sector.
In an embodiment, the imaging device may comprise an afocal system to reduce the length of the imaging device. The afocal system may comprise a combination of a negative lens and a limiter unit. The limiter unit may prevent propagation of light rays which are outside an acceptance cone.
In an embodiment, the imaging device may comprise a combination of a modulator array and a filter array e.g. in order to enable using one of the several transmittance peaks of the Fabry-Perot interferometer. The modulator array may comprise e.g. a plurality of first modulable regions and a plurality of second modulable regions. The transmittance of the modulable regions may be changed e.g. by an external control signal. The filter array may comprise e.g. a plurality of first optical spectral filter regions, and a plurality of second optical spectral filter regions. The spectral transmittance of the first filter regions may be different from the spectral transmittance of the second filter regions. The transverse positions of the first modulable regions may match the transverse positions of the first filter regions. A first transmittance peak of the interferometer may be at a first wavelength, and a second transmittance peak of the interferometer may be at a second wavelength. The modulator array may be first controlled to allow light at the first wavelength to propagate to the image sensor, wherein the modulator array may prevent propagation of light at the second wavelength. Next, the modulator array may be controlled to allow light at the second wavelength to propagate to the image sensor, wherein the modulator array may prevent propagation of light at the first wavelength.
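The alternating control of the modulator array described above may be sketched e.g. as follows. The two-state model, the function name, and the frame-indexed toggling scheme are illustrative assumptions, not a definitive implementation:

```python
# Sketch of alternating modulator control: in one state the first modulable
# regions transmit (passing light at the first wavelength) while the second
# regions block; in the other state the roles are swapped. The two-state
# frame-by-frame toggle is an illustrative assumption.

def modulator_state(frame_index):
    """Return which set of modulable regions is transmitting for a frame."""
    if frame_index % 2 == 0:
        # pass light at the first wavelength, block the second
        return {"first_regions": "transmit", "second_regions": "block"}
    # pass light at the second wavelength, block the first
    return {"first_regions": "block", "second_regions": "transmit"}

for k in range(4):
    print(k, modulator_state(k))
```

A capture sequence may thus interleave frames at the first and second peak wavelengths without moving the interferometer mirrors.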
The imaging device may be used e.g. for hyperspectral imaging. The imaging device may also be called e.g. a hyperspectral camera device.
The imaging device may be e.g. a portable device. The imaging device may be e.g. a wearable device. The imaging device may be a pocketable device (i.e. may be easily carried in a pocket). The imaging device may be implemented e.g. in a smartphone. The imaging device may be implemented e.g. in a vehicle. The imaging device may be implemented e.g. in an unmanned aerial vehicle (drone).
The imaging device may be easily integrated as a part of an optical apparatus. The imaging device may be implemented e.g. in an industrial measuring device.
In the following examples, several variations will be described in more detail with reference to the appended drawings, in which
Referring to
The light beam modifier system SYS1 may form axial light beams LB2 from received light beams LB1 such that the radial position (r) of each formed axial beam LB2 is substantially proportional to the field angle (φ) of the corresponding received beam. The modifier system SYS1 may be e.g. a telecentric system or an afocal system (
The imaging device 500 may receive light LB1 from an object OBJ1. The imaging device 500 may be arranged to form a spectral image of the object OBJ1 by filtering the light LB1 with the Fabry-Perot interferometer FPI. The object OBJ1 may be located in the viewing sector VIEW1 of the device 500. Spectral images may be formed at several different wavelengths, and the spectral images of the different wavelengths may subsequently be combined to form a multi-wavelength spectral image (CIMG) of the object OBJ1, if desired.
The object OBJ1 may reflect, emit and/or transmit light LB1, which may be received by the imaging device 500. The device 500 may be used e.g. for measuring reflection, transmission (absorption) and/or emission of the light LB1 of the object OBJ1.
The object OBJ1 may comprise a plurality of object points P1a, P1b, P1c, P1d, P1e. The imaging device 500 may receive light LB1a from the point P1a, light LB1b from the point P1b, and light LB1c from the point P1c, respectively.
The imaging device 500 may have an optical axis AX1. The modifier system SYS1 may form an axial light beam from a received light beam such that the angular orientation (α,φ) of the received light beam is mapped into a transverse position (α,r) of the centerline of said axial light beam. For example, a field angle φb of the light beam LB1b may be mapped into a radial position rb of the centerline of the axial beam LB2b. For example, a field angle φc of the light beam LB1c may be mapped into a radial position rc of the centerline of the axial beam LB2c. The modifier system SYS1 may also be called e.g. an optical mapping system SYS1. The modifier system may convert light of inclined beams into axial beams. The modifier system SYS1 may also be called e.g. a conversion system SYS1.
Each axial beam may be substantially parallel with the optical axis AX1 of the device 500. Each light beam (LB1a, LB1b, LB1c) received from an object point may correspond to an axial beam, which has a different transverse position (α,r). The transverse position of each axial beam may be specified e.g. by an angle α and a radial distance r. The transverse position (α,r) of each axial beam may be a function of the angular orientation (α,φ) of the corresponding received light beam LB1. The modifier system SYS1 may comprise e.g. a telecentric system. The modifier system SYS1 may comprise e.g. a combination of a negative lens and a limiter unit (
The imaging device 500 may form an image point P4a by modifying, filtering and focusing the light LB1a received from the object point P1a. The imaging device 500 may form an image point P4b by modifying, filtering and focusing the light LB1b received from the object point P1b. The imaging device 500 may form an image point P4c by modifying, filtering and focusing the light LB1c received from the object point P1c.
The Fabry-Perot interferometer FPI comprises a pair of semi-transparent mirrors M1, M2, which are arranged to operate as an optical cavity. The spectral position of the transmittance peak (PEAK1) of the Fabry-Perot interferometer FPI may be changed by changing the distance (dF) between the mirrors M1, M2 (
SX, SY and SZ may denote orthogonal directions. The direction SZ may be parallel with the optical axis AX1 of the device 500. The mirrors M1, M2 of the Fabry-Perot interferometer may be perpendicular to the optical axis AX1. The mirrors M1, M2 of the Fabry-Perot interferometer may be parallel with a plane defined by the directions SX and SY.
L0 may denote a distance between the object OBJ1 and the device 500. L500 may denote the external length of the device 500 in the direction of the axis AX1. LSEN may denote a distance between a principal plane of the modifier system SYS1 and the image sensor SEN1.
Using the microlens array ARR1 together with the modifier system SYS1 may allow reducing the distance LSEN. Reducing the distance LSEN may allow reducing the total length L500 of the imaging device 500.
Referring to
The lens array ARR1 may form a plurality of spatially separate optical sub-images S-6,-6, …, S0,0, …, S6,6. The optical image IMG4 may consist of a plurality of spatially separate sub-images S-6,-6, …, S0,0, …, S6,6. The sub-images may also be called e.g. partial images.
The light LB3 for forming the plurality of sub-images may be transmitted simultaneously through the mirrors M1, M2 of the interferometer FPI. The light LB3 for forming the plurality of sub-images may be transmitted through the same single interferometer FPI.
The image sensor SEN1 may capture the sub-images S-6,-6, …, S0,0, …, S6,6. The image sensor SEN1 may convert the optical sub-images S-6,-6, …, S0,0, …, S6,6 into digital form. The image sensor SEN1 may provide image data of the sub-images S-6,-6, …, S0,0, …, S6,6 to one or more data processors.
The sub-images S-6,-6, …, S0,0, …, S6,6 may be stitched together to form a single continuous image (IMGλ1) of the object OBJ1. The device 500 may comprise a data processor (CNT1) for performing the stitching. The stitching may also be performed e.g. in an internet server.
In an embodiment, the stitching may be carried out as a device-specific image processing operation, without a need to analyze the captured sub-images to find image points corresponding to common object points.
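The device-specific stitching mentioned above may be sketched e.g. as follows. Because the geometry of the lens array is fixed, each sub-image may be pasted at a precalibrated offset without searching for image points of common object points. The sub-image size, the stride value, and the simple paste operation are illustrative assumptions:

```python
import numpy as np

SUB_H, SUB_W = 16, 16   # sub-image size in pixels (illustrative)
STEP = 12               # precalibrated stride between pasted sub-images;
                        # smaller than SUB_W because adjacent sub-images overlap

def stitch(sub_images):
    """Paste sub-images at fixed offsets.

    sub_images: dict mapping (row, col) -> 2D array of shape (SUB_H, SUB_W).
    """
    rows = max(r for r, _ in sub_images) + 1
    cols = max(c for _, c in sub_images) + 1
    out = np.zeros((SUB_H + (rows - 1) * STEP, SUB_W + (cols - 1) * STEP))
    for (r, c), s in sub_images.items():
        out[r * STEP:r * STEP + SUB_H, c * STEP:c * STEP + SUB_W] = s
    return out

# 3x3 grid of dummy sub-images, each filled with a distinct constant value
tiles = {(r, c): np.full((SUB_H, SUB_W), float(r * 10 + c))
         for r in range(3) for c in range(3)}
mosaic = stitch(tiles)
print(mosaic.shape)  # (40, 40)
```

In a real device the stride and any per-lens offsets would be determined once during calibration and stored, so that stitching becomes a fixed copy operation.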
Referring to
Referring to
Referring to
Referring to
In particular, four or more adjacent sub-images (S0,0, S0,1, S-1,0, S-1,1) may comprise images F1′ of the same object point, so as to allow forming a continuous larger image by stitching. For example, the vertical neighbor S0,1 of the first sub-image S0,0 may comprise the second image F1′0,1 of the feature F1. A horizontal neighbor S-1,0 of the first sub-image S0,0 may comprise a third image F1′-1,0 of the feature F1. A diagonal neighbor S-1,1 of the first sub-image S0,0 may comprise a fourth image F1′-1,1 of the feature F1.
A spectral transmittance peak (PEAK1) of the Fabry-Perot interferometer FPI may be adjusted e.g. to a first wavelength λ1 to capture a first set of sub-images S. The sub-images S of the first set may be stitched together to form a continuous spectral image IMGλ1 of the object OBJ1.
The spectral width ΔλFWHM of a transmittance peak PEAK1 may be e.g. in the range of 5 nm to 30 nm. FWHM denotes full width at half maximum.
The spectral transmittance TF(λ) may have one or more adjacent transmittance peaks PEAK1, PEAK2, PEAK3 of the Fabry-Perot interferometer FPI. For example, a first transmittance peak PEAK1 may be at a wavelength λ1, a second transmittance peak PEAK2 may be at a wavelength λ2, and a third transmittance peak PEAK3 may be at a wavelength λ3, in a situation where the mirror distance dF is equal to a first value dF,1. The interferometer FPI may be scanned by changing the mirror distance dF.
The spectral positions λ1, λ2, λ3 of the transmission peaks PEAK1, PEAK2, PEAK3 may depend on the mirror distance dF according to the Fabry-Perot transmission function. The spectral positions of the transmission peaks may be changed by changing the mirror gap dF. The transmission peaks PEAK1, PEAK2, PEAK3 may also be called passbands of the Fabry-Perot interferometer.
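The dependence of the peak positions on the mirror gap may be sketched e.g. as follows, assuming normal incidence and an air gap (refractive index n = 1), so that constructive interference occurs when 2·dF = m·λ. The numerical gap and wavelength range are illustrative:

```python
# Sketch: spectral positions of Fabry-Perot transmittance peaks at normal
# incidence with an air gap (n = 1). Constructive interference occurs when
# 2*dF = m*lambda, i.e. lambda_m = 2*dF/m for integer order m.

def peak_wavelengths_nm(d_f_nm, lambda_min_nm, lambda_max_nm):
    """Return peak wavelengths (nm) inside the given range for mirror gap d_f_nm."""
    peaks = []
    m = 1
    while True:
        lam = 2.0 * d_f_nm / m
        if lam < lambda_min_nm:
            break
        if lam <= lambda_max_nm:
            peaks.append(lam)
        m += 1
    return peaks

# Example: a 1500 nm gap has adjacent peaks at 1000 nm (m=3), 750 nm (m=4),
# 600 nm (m=5) and 500 nm (m=6) within the 500-1000 nm range.
print(peak_wavelengths_nm(1500.0, 500.0, 1000.0))  # [1000.0, 750.0, 600.0, 500.0]
```

Scanning the gap dF shifts all of these peaks simultaneously, which is why the device may use filters to select a single peak, as described below.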
Changing the mirror distance dF may move the spectral position of the transmittance peaks PEAK1, PEAK2, PEAK3. For example, the first transmittance peak PEAK1′ may be at a wavelength λ1b, a second transmittance peak PEAK2′ may be at a wavelength λ2b, and a third transmittance peak PEAK3′ may be at a wavelength λ3b, in a situation where the mirror distance dF is equal to a second value dF,2.
The device 500 may optionally comprise one or more optical filters (e.g. CFA1, FIL1, FIL2) to limit the spectral response of the device 500. The one or more filters may together provide a spectral transmittance. For example, the one or more filters may allow using a single selected transmittance peak of the Fabry-Perot interferometer (e.g. PEAK1, PEAK2, or PEAK3), by preventing transmission of light at the wavelengths of the other transmittance peaks.
For example, the device 500 may comprise one or more filters (CFA1, FIL1, FIL2) to provide a first band pass region PB1 defined e.g. by cut-off wavelengths λ11 and λ12. For example, the device 500 may comprise one or more filters (CFA1, FIL1, FIL2) to provide a second band pass region PB2 defined e.g. by cut-off wavelengths λ21, and λ22. For example, the device 500 may comprise one or more filters (CFA1, FIL1, FIL2) to provide a third band pass region PB3 defined e.g. by cut-off wavelengths λ31, and λ32.
In an embodiment, the device 500 may comprise a modulator (MOD1) and a filter array (FIL1) to alternately enable transmission of light via a first passband PB1 or via a second passband PB2 (
Referring to
Referring to
The first filter regions (R) may provide e.g. a first spectral sensitivity for first detector pixels DPX1 of the image sensor SEN1. The second filter regions (G) may provide e.g. a second spectral sensitivity for second detector pixels DPX2 of the image sensor SEN1. The third filter regions (G) may provide e.g. a third spectral sensitivity for third detector pixels DPX3 of the image sensor SEN1. The fourth filter regions (IR) may provide e.g. a fourth spectral sensitivity for fourth detector pixels DPX4 of the image sensor SEN1.
The first detector pixels DPX1 may detect light e.g. at the wavelength (λ1) of the first transmittance peak PEAK1 of the interferometer. The second detector pixels DPX2 may detect light e.g. at the wavelength (λ2) of the second transmittance peak PEAK2. The third detector pixels DPX3 may detect light e.g. at the wavelength (λ3) of the third transmittance peak PEAK3. The fourth detector pixels DPX4 may detect light e.g. at the wavelength (λ4) of the fourth transmittance peak PEAK4.
The first detector pixels DPX1 may spectrally selectively detect e.g. red light (R). The second detector pixels DPX2 may spectrally selectively detect e.g. green light (G). The third detector pixels DPX3 may spectrally selectively detect e.g. blue light (B). The fourth detector pixels DPX4 may spectrally selectively detect e.g. infrared light (IR).
The filter regions (R, G, B, IR) of the filter array CFA1 do not need to reject all spectral components which are outside the primary passband of each filter region. For example, the first filter regions (R) may allow transmission of light at the wavelengths λ1 and λ4. For example, spectral components of light LB1 at the wavelengths λ1, λ2, λ3, λ4 may be determined from the detected signals of the detector pixels (DPX1, DPX2, DPX3, DPX4) and from the known spectral sensitivity functions of the detector pixels, by solving a system of equations.
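The determination of the spectral components by solving a system of equations may be sketched e.g. as follows. The sensitivity matrix and the detected signal values are hypothetical; a real device would use the measured spectral sensitivity functions of its detector pixels:

```python
# Sketch: recover the spectral components of light LB1 at the peak
# wavelengths lambda1..lambda4 from the signals of the four detector pixel
# types (DPX1..DPX4) by solving a linear system y = A x.
import numpy as np

# A[i][j] = sensitivity of pixel type i (R, G, B, IR) at wavelength j.
# Values are hypothetical; note the R pixels also pass some light at
# lambda4, as the filter regions need not reject all out-of-band light.
A = np.array([
    [0.80, 0.10, 0.02, 0.30],
    [0.10, 0.75, 0.10, 0.05],
    [0.05, 0.15, 0.85, 0.02],
    [0.02, 0.05, 0.03, 0.90],
])

y = np.array([0.45, 0.32, 0.51, 0.28])  # detected pixel signals (hypothetical)

x = np.linalg.solve(A, y)  # spectral components at lambda1..lambda4
print(x)
```

The system is well conditioned as long as the four sensitivity functions are sufficiently different from each other.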
Referring to
A spectral transmittance peak (e.g. PEAK1) of the Fabry-Perot interferometer FPI may be adjusted to a second wavelength λ2 to capture a second set of sub-images S. The sub-images S of the second set may be stitched together to form a second spectral image IMGλ2 of the object OBJ1.
A spectral transmittance peak (e.g. PEAK1 or PEAK2) of the Fabry-Perot interferometer FPI may be adjusted to a third wavelength λ3 to capture a third set of sub-images S. The sub-images S of the third set may be stitched together to form a third spectral image IMGλ3 of the object OBJ1.
A spectral transmittance peak (e.g. PEAK1 or PEAK2) of the Fabry-Perot interferometer FPI may be adjusted to a fourth wavelength λ4 to capture a fourth set of sub-images S. The sub-images S of the fourth set may be stitched together to form a fourth spectral image IMGλ4 of the object OBJ1.
The spectral images IMGλ1, IMGλ2, IMGλ3, IMGλ4 may be combined to form a multi-spectral image CIMG. The multi-spectral image CIMG may also be called e.g. a hyperspectral cube. The image CIMG may comprise a three-dimensional array of pixel values, wherein each pixel value may represent a measured intensity value associated with transverse position coordinates (x,y) of said pixel and with a wavelength value of said pixel (λ1, λ2, λ3, or λ4).
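The structure of such a three-dimensional array may be sketched e.g. as follows; the image size and the wavelength values are illustrative assumptions:

```python
import numpy as np

# Sketch: combine single-wavelength spectral images into a hyperspectral
# cube. Image size and wavelength list are illustrative assumptions.
wavelengths_nm = [550, 600, 650, 700]                       # lambda1..lambda4
frames = [np.random.rand(64, 64) for _ in wavelengths_nm]   # IMGlambda1..IMGlambda4

cimg = np.stack(frames, axis=-1)  # shape (y, x, wavelength)
print(cimg.shape)                 # (64, 64, 4)

# Each value cimg[y, x, k] is the measured intensity at transverse position
# (x, y) and wavelength wavelengths_nm[k]; a single slice along the last
# axis is the spectrum of one object point.
spectrum_at_point = cimg[10, 20, :]
```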
The number of spectral positions (λ1, λ2, λ3, or λ4) used for capturing the image data for a single image CIMG may be e.g. in the range of 2 to 100.
In an embodiment, it may also be sufficient to form a single spectral image IMGλ1 without forming a multi-spectral image CIMG.
In an embodiment, the image data of the captured sub-images S may be used without stitching the sub-images S together. For example, a change of an object OBJ1 may be detected by comparing captured sub-images S with reference data, also without stitching the sub-images S together. A change of an optical property of the object OBJ1 may be detected by comparing the captured sub-images S with reference data.
In an embodiment, the interferometer FPI may be adjusted to a selected wavelength to capture a plurality of sub-images S, wherein the image data of the captured sub-images S may be used e.g. for background correction. The method may comprise capturing sub-images S without stitching the sub-images S together.
Referring to
The device 500 may receive a first light beam LB1a from a first object point P1a. The modifier system SYS1 may form a first axial beam LB2a from light of the first received light beam LB1a. The angular orientation (φa) of the received beam LB1a may be mapped into a radial position (ra) of the first axial light beam LB2a.
The modifier system SYS1 may form a substantially axial beam LB2 from light of each light beam LB1 received from the viewing sector VIEW1 of the device 500, wherein the radial position r of the formed axial beam LB2 may depend on the field angle φ of said received light beam LB1. The field angle φ may denote the angle between the centerline of the received beam LB1 and the optical axis AX1 of the device 500. The radial position r may indicate the distance between the centerline of the formed axial beam LB2 and the optical axis AX1 of the device 500. To the first approximation, the radial position (r) may be substantially proportional to the field angle (φ). For example, the modifier system SYS1 may form the axial beams LB2 such that r = kSYS1·φ, where kSYS1 may denote a proportionality constant.
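The first-order mapping performed by the modifier system SYS1 may be illustrated e.g. by the following sketch; the value of the proportionality constant kSYS1 and the function name are illustrative assumptions:

```python
import math

# Sketch of the first-order mapping of the modifier system SYS1: the field
# angle phi of a received beam is mapped to a radial position r = kSYS1*phi,
# with the azimuth angle alpha preserved. The constant below is a
# hypothetical device-dependent value.
K_SYS1_MM_PER_RAD = 5.0

def axial_beam_position(alpha_rad, phi_rad):
    """Return the (x, y) position (mm) of the axial beam centerline."""
    r = K_SYS1_MM_PER_RAD * phi_rad
    return (r * math.cos(alpha_rad), r * math.sin(alpha_rad))

# A beam arriving 0.1 rad off-axis maps to a centerline 0.5 mm off-axis.
x, y = axial_beam_position(0.0, 0.1)
print(x, y)  # 0.5 0.0
```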
The device 500 may receive a second light beam LB1b from a second object point P1b. The modifier system SYS1 may form a second axial beam LB2b from light of the second received light beam LB1b. The angular orientation (φb) of the received beam LB1b may be mapped into a radial position (rb) of the second axial light beam LB2b.
The device 500 may receive a third light beam LB1c from a third object point P1c. The modifier system SYS1 may form a third axial beam LB2c from light of the third received light beam LB1c. The angular orientation (φc) of the received beam LB1c may be mapped into a radial position (rc) of the third axial light beam LB2c.
The Fabry-Perot interferometer FPI may form transmitted light beams LB3a, LB3b, LB3c by filtering light of the axial light beams LB2a, LB2b, LB2c.
The lens array ARR1 may form a plurality of sub-images S by focusing light of the transmitted light beams LB3a, LB3b, LB3c to the image sensor SEN1.
The distance L1 between the aperture APE1 and the principal plane of the lens LNS1 may be e.g. substantially equal to the focal length fLNS1 of the lens LNS1 of the telecentric system SYS1. The focal length of the lens LNS1 may be e.g. in the range of 2 mm to 20 mm, advantageously in the range of 4 mm to 8 mm.
The aperture APE1 may be e.g. circular or rectangular. The diameter or width wAPE1 of the aperture APE1 may be e.g. in the range of 0.2 mm to 2 mm.
The diameter or width wAPE1 of the aperture APE1 may be selected to provide a desired spectral resolution of the Fabry-Perot interferometer FPI. Selecting a smaller aperture APE1 may improve spectral resolution. The device 500 may comprise a diaphragm DIA1 to define the aperture APE1.
L4 may denote the distance between the image sensor SEN1 and the principal plane of the lenses of the lens array ARR1. The distance L4 may be selected such that the lenses of the lens array ARR1 may form substantially sharp sub-images of the object OBJ1 on the image sensor SEN1. For example, the distance L4 may be smaller than the focal length of the lenses LNS of the lens array ARR1. The device 500 may be arranged to operate such that the lens array ARR1 does not form a sharp image of the input aperture APE1 on the image sensor SEN1.
The distance L4 may be selected such that at least one lens of the array ARR1 may form a sharp image F1′ of a feature F1 of the object OBJ1 on the image sensor SEN1. The distance L4 may be selected such that at least one lens of the array ARR1 may form a sharp image point (P4) of an object point (P1) on the image sensor SEN1. In an embodiment, the distance L0 between the object OBJ1 and the device 500 may be e.g. greater than or equal to 100 times the length L500 of the device 500. In an embodiment, the object OBJ1 may be at an infinite distance. The distance L4 may be selected to provide a sharp image point for an object point located at an infinite distance.
LSEN may denote the distance between the image sensor SEN1 and the principal plane of the lens LNS1 of the telecentric system SYS1. Using the lens array ARR1 may substantially reduce the distance LSEN. Using the lens array ARR1 may substantially reduce the total external length L500 of the spectral imaging device 500, in the direction of the optical axis AX1.
Referring to
The modifier system may form the axial beam LB2 from light of the received beam LB1 such that the radial position (r) of the axial beam LB2 is substantially proportional to the field angle φ of the received beam.
The modifier system SYS1 may form an axial beam LB2k from light of a received input beam LB1k. The input beam LB1k has a direction (αk,φk). The axial beam has a transverse position (αk, rk).
Referring to
The transmitted light beam LB3 may have a width wLB3 at the input surface of the lens array ARR1. d50 may denote the distance between centers (AX0,0, AX0,1) of adjacent lenses (LNS0,0, LNS0,1) of the array ARR1. The pitch distance d50 of the array ARR1 may be e.g. in the range of 25% to 100% of the width wLB3 so as to provide sufficient spatial resolution and to facilitate stitching of the sub-images S.
The lens array ARR1 may comprise a plurality of lenses arranged in a rectangular M×N array. The number (N) of rows of the lens array ARR1 may be e.g. greater than or equal to 8, and the number (M) of columns of the lens array may be e.g. greater than or equal to 8.
The number (N) of rows of the lens array ARR1 may be e.g. greater than or equal to 2, and the number (M) of columns of the lens array may be e.g. greater than or equal to 2. Using a 2×2 lens array may already provide a significant reduction of the length of the device.
The device 500 may form an image point P4a from light of a light beam LB1a received from an object point P1a. The device 500 may form an image point P4b from light of a light beam LB1b received from an object point P1b. The device 500 may form an image point P4c from light of a light beam LB1c received from an object point P1c.
A first lens LNS0,0 of the array ARR1 may form a first sub-image S0,0. A second adjacent lens LNS0,1 of the array ARR1 may form a second adjacent sub-image S0,1.
The system SYS1 may form an axial light beam LB2d from light of the received light beam LB1d. The interferometer FPI may form an axial filtered light beam LB3d by filtering light of the axial light beam LB2d. The transmitted light beam LB3d may overlap a first lens LNS0,0 and a second adjacent lens LNS0,1 of the array ARR1. The first lens LNS0,0 may form a first focused beam LB4d0,0 by focusing a first part of the transmitted light beam LB3d. The focused beam LB4d0,0 may impinge on the image sensor SEN1 to form the first image point P4d0,0. The first sub-image S0,0 may comprise the first image point P4d0,0. The second lens LNS0,1 may form a second focused beam LB4d0,1 by focusing a second part of the transmitted light beam LB3d. The focused beam LB4d0,1 may impinge on the image sensor SEN1 to form the second image point P4d0,1. The second sub-image S0,1 may comprise the second image point P4d0,1.
In an embodiment, the transverse position of the first image point P4d0,0 with respect to the transverse position of the second image point P4d0,1 may depend on the distance (L0) between the object point P1d and the spectral imaging device 500. This phenomenon may be significant at small distances L0, for example at distances L0 where the ratio wAPE1/L0 is greater than 1%. Consequently, the method may comprise determining a distance (L0) between an object point (P1d) and the spectral imaging device 500 (by triangulation) from the relative position of the second image point (P4d0,1) with respect to the first image point (P4d0,0). The device may be arranged to determine distance values for a plurality of different object points, e.g. for measuring three-dimensional geometric form of the object. The determined distance may also be used e.g. for autofocusing. The determined distance may also be used e.g. for verifying a distance which has been determined by another method.
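The triangulation described above may be sketched e.g. as follows. Under a thin-lens, small-angle approximation, two adjacent lenses separated by the pitch d50 and imaging at distance L4 produce a disparity of approximately d50·L4/L0 between the two image points of the same object point, so the distance follows from the measured disparity. The approximation and the numerical values are illustrative assumptions:

```python
# Sketch of distance determination by triangulation between two adjacent
# sub-images: disparity ~ d50 * L4 / L0 (thin-lens, small-angle
# approximation), hence L0 ~ d50 * L4 / disparity. Values are illustrative.

def object_distance_mm(disparity_mm, pitch_d50_mm, image_dist_l4_mm):
    """Estimate the object distance L0 from the measured disparity."""
    return pitch_d50_mm * image_dist_l4_mm / disparity_mm

# Pitch 0.5 mm, image distance 1.0 mm, measured disparity 0.005 mm:
print(object_distance_mm(0.005, 0.5, 1.0))  # ~100.0 (mm)
```

Repeating the estimate for many object points yields a coarse three-dimensional form of the object, consistent with the uses mentioned above.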
Referring to
Referring to
In an embodiment, the limiter unit NAL2 of the afocal system SYS1 may also be positioned between the Fabry-Perot interferometer FPI and the lens array ARR1. The limiter unit NAL2 may allow propagation of axial filtered light beams LB3 to the lens array ARR1, wherein the limiter unit NAL2 may eliminate unwanted light rays which are outside the acceptance cone. The limiter unit NAL2 may be e.g. a stack of aperture arrays. The limiter unit NAL2 may be e.g. a fiber optic array.
The system SYS1 may form axial light beams (LB2a, LB2b, LB2c) from received light beams (LB1a, LB1b, LB1c) such that the radial position (r) of each axial beam depends on the field angle (φ) of the corresponding received beam. The system SYS1 may form axial light beams (LB2a, LB2b, LB2c) from received light beams (LB1a, LB1b, LB1c) such that the radial position (r) of each axial beam may be substantially proportional to the field angle (φ) of the corresponding received beam.
Referring to
For example, an afocal system may comprise a combination of a Fresnel lens LNS2 and a limiter unit NAL2. The limiter unit NAL2 may be e.g. a stack of aperture arrays. The limiter unit NAL2 may be e.g. a fiber optic array.
Referring to
Referring to
The width of the mirrors M1, M2 of the interferometer may be e.g. in the range of 2 mm to 50 mm. The semi-transparent mirrors M1, M2 of the interferometer may be produced with a high degree of accuracy. The deviations of the semi-transparent mirrors from a perfectly planar shape may initially be e.g. smaller than λ/200. The flatness of the mirrors M1, M2 may be e.g. better than λN/200, in order to provide a suitable finesse (i.e. the ratio of the free spectral range to the spectral width of a transmission peak). λN denotes a predetermined operating wavelength. The predetermined operating wavelength λN may be e.g. in the range of 500 nm to 4000 nm. The distance dF between the semi-transparent mirrors M1, M2 may be e.g. in the range of 0.2 µm to 1 mm, depending on the desired spectral resolution and depending on the desired free spectral range.
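The finesse mentioned above may be estimated e.g. from the mirror reflectance: for reflectance R the reflectivity finesse is approximately π·√R/(1−R), and the peak width follows from FWHM = FSR/finesse. The numerical values below are illustrative, not device specifications:

```python
import math

# Sketch: finesse from mirror reflectance, and peak width (FWHM) from the
# free spectral range (FSR). Values below are illustrative.

def finesse(reflectance):
    """Reflectivity finesse F = pi * sqrt(R) / (1 - R)."""
    return math.pi * math.sqrt(reflectance) / (1.0 - reflectance)

def fwhm_nm(free_spectral_range_nm, reflectance):
    """Spectral width of a transmission peak, FWHM = FSR / finesse."""
    return free_spectral_range_nm / finesse(reflectance)

# Mirrors with R = 0.9 give a finesse of about 30; a 300 nm free spectral
# range then yields a peak width of roughly 10 nm, within the 5 nm to 30 nm
# range mentioned earlier.
print(round(finesse(0.9), 1), round(fwhm_nm(300.0, 0.9), 1))
```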
The width of the light-detecting area of the image sensor SEN1 may be e.g. greater than or equal to the width of the mirrors M1, M2. The width of the lens array ARR1 may be e.g. greater than or equal to the width of the mirrors M1, M2.
The second mirror M2 may be substantially parallel with the first mirror M1 during operation. The mirrors M1, M2 may have e.g. a substantially circular form or a substantially rectangular form.
The distance dF between the mirrors M1, M2 may be adjusted to provide constructive interference for transmitted light at one or more given wavelengths so that the interferometer FPI may transmit light. The distance dF may also be adjusted to provide destructive interference for transmitted light at the given wavelength so that the interferometer FPI may reflect light.
The mirror distance dF may be adjusted by one or more actuators ACU1, ACU2. One or more actuators may be arranged to move the second mirror plate 200 with respect to the first mirror plate 100. The actuator ACU1, ACU2 may be e.g. a piezoelectric actuator, an electrostatic actuator, an electrostrictive actuator, or a flexoelectric actuator.
The semi-transparent mirrors M1, M2 may be e.g. dielectric multilayer coatings deposited on a transparent substrate. The semi-transparent mirrors M1, M2 may be e.g. metallic coatings deposited on a transparent substrate. The substrate material of the mirror plates 100, 200 may be transparent in the operating wavelength range of the interferometer 300. The material of the mirror plates 100, 200 may be e.g. glass, silica, silicon or sapphire. The mirror plates 100, 200 may comprise ceramic material. The mirror plates 100, 200 may comprise dimensionally stable material, which is transparent in the operating range of wavelengths of the spectral imaging device 500.
The interferometer FPI may optionally comprise capacitive sensor electrodes G1a, G1b, G2 for capacitively monitoring mirror distance dF. Sensor electrodes G1a, G1b, G2 may together form a sensor capacitor C1, wherein the capacitance value of the sensor capacitor C1 may depend on the mirror distance dF. Consequently, the mirror distance dF may be monitored by monitoring the capacitance value of the sensor capacitor C1. The sensor capacitor C1 may be connected to a capacitance monitoring unit 410 e.g. by conductors CONa, CONb (
Referring to
The interferometer FPI may optionally comprise means for monitoring the distance dF between the mirrors and/or the mirror plates. The interferometer FPI may comprise e.g. capacitive means for monitoring the distance. The interferometer FPI may comprise e.g. inductive means for monitoring the distance. The interferometer FPI may comprise e.g. interferometric means for monitoring the distance.
The interferometer FPI may comprise e.g. capacitive sensor electrodes for capacitively monitoring mirror distance dF. Sensor electrodes may together form a sensor capacitor C1, wherein the capacitance value of the sensor capacitor C1 may depend on the mirror distance dF. Consequently, the mirror distance dF may be monitored by monitoring the capacitance value of the sensor capacitor C1. The sensor capacitor C1 may be connected to a capacitance monitoring unit 410 e.g. by conductors CONa, CONb. The capacitance monitoring unit 410 may provide a sensor signal Sd indicative of the mirror distance dF.
The capacitance monitoring unit 410 may provide a sensor signal Sd. The sensor signal may be used for monitoring the mirror gap dF. The spectral response of the interferometer FPI may be calibrated e.g. as a function of the mirror gap dF. The device 500 may comprise a memory MEM2 for storing spectral calibration parameters DPAR2. The mirror gap dF and/or a spectral position λ may be determined from the sensor signal Sd e.g. by using the spectral calibration parameters DPAR2.
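The mapping from the sensor signal Sd to the mirror gap dF and to a spectral position λ can be sketched as follows. This is an illustrative sketch only, not taken from the specification: it assumes a simple parallel-plate capacitor model C = ε0·A/dF and a linear form for the spectral calibration parameters DPAR2. The function names and all numeric values are hypothetical.

```python
# Hypothetical sketch: sensor capacitance C1 -> mirror gap dF -> spectral
# position, assuming a parallel-plate model and a linear calibration DPAR2.

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def mirror_gap_from_capacitance(c_sensor, electrode_area):
    """Estimate the mirror gap dF (m) from the sensor capacitance C1 (F),
    assuming C1 = EPS0 * electrode_area / dF (parallel-plate model)."""
    return EPS0 * electrode_area / c_sensor

def spectral_position(d_f, dpar2):
    """Map the mirror gap dF to a passband position using an assumed
    linear calibration: lambda = a * dF + b, with DPAR2 = (a, b)."""
    a, b = dpar2
    return a * d_f + b

# Hypothetical values: 1 mm^2 electrode area, ~8.85 pF reading, and a
# first-order Fabry-Perot calibration lambda = 2 * dF.
d_f = mirror_gap_from_capacitance(c_sensor=8.854e-12, electrode_area=1e-6)
lam = spectral_position(d_f, dpar2=(2.0, 0.0))
```

In practice the calibration stored in MEM2 need not be linear; the sketch only illustrates how a monitored capacitance value can be converted to a gap and a wavelength.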
The image sensor SEN1 may provide image data, which may be communicated as an image data signal SSEN. The image data signal SSEN may comprise e.g. the pixel values of an image frame captured at a selected wavelength.
The device 500 may optionally comprise a memory MEM1 for storing intensity calibration parameters CALPAR1. The device 500 may be arranged to obtain pixel values from the image sensor SEN1, and to determine intensity values X(λ) from the pixel values by using one or more intensity calibration parameters CALPAR1. An intensity value X(λ) of the light LB1 may be determined from a pixel value of a captured image frame as a function of the position (x,y) of the pixel and/or as a function of the mirror distance value dF, by using the one or more intensity calibration parameters CALPAR1. A calibrated intensity value may be determined for each pixel of a captured wavelength image.
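One common form of per-pixel intensity calibration is a gain/offset correction. The following is an assumed, illustrative sketch of such a correction, not the calibration actually specified for CALPAR1; the dictionary layout and the numeric values are hypothetical.

```python
# Hypothetical sketch: applying per-pixel intensity calibration parameters
# CALPAR1 to a raw pixel value, where the (assumed) gain and offset may
# depend on the pixel position (x, y) and on the mirror distance dF.

def calibrated_intensity(pixel_value, x, y, d_f, calpar1):
    """Return an intensity value X(lambda) for one pixel of a frame."""
    gain = calpar1["gain"](x, y, d_f)      # e.g. flat-field correction
    offset = calpar1["offset"](x, y, d_f)  # e.g. dark-signal correction
    return gain * (pixel_value - offset)

# Hypothetical calibration: uniform gain 0.5, uniform dark offset 100 counts.
calpar1 = {
    "gain": lambda x, y, d_f: 0.5,
    "offset": lambda x, y, d_f: 100.0,
}
x_val = calibrated_intensity(pixel_value=300.0, x=10, y=20, d_f=1e-6,
                             calpar1=calpar1)
```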
The image sensor SEN1 may be e.g. a CMOS sensor or a CCD sensor. CMOS means complementary metal oxide semiconductor. CCD means charge coupled device.
The device 500 may optionally comprise a memory MEM3 for storing output OUT1. The output OUT1 may comprise e.g. pixel values of one or more captured images IMGλ1, IMGλ2, one or more calibrated intensity values, and/or one or more combined images CIMG.
The device 500 may optionally comprise one or more filters FIL2 to at least partly define one or more passbands PB1.
The device 500 may optionally comprise a modulator array MOD1, a filter array FIL1, and a driver unit 430 for changing the state of the modulator array MOD1. The driver unit 430 may change the state of the modulator array MOD1 according to a modulator control signal SETMOD received from the control unit CNT1.
The device 500 may comprise a memory MEM4 for storing a computer program PROG1. The computer program PROG1 may be configured, when executed by one or more data processors (e.g. CNT1), to cause the apparatus 500, FPI to perform one or more of the following:
The device 500 may optionally comprise a user interface USR1 e.g. for displaying information to a user and/or for receiving commands from the user. The user interface USR1 may comprise e.g. a display, a keypad and/or a touch screen.
The device 500 may optionally comprise a communication unit RXTX1. The communication unit RXTX1 may transmit and/or receive a signal COM1 e.g. in order to receive commands, to receive calibration data, and/or to send output data OUT1. The communication unit RXTX1 may have e.g. wired and/or wireless communication capabilities. The communication unit RXTX1 may be arranged to communicate e.g. with a local wireless network (Bluetooth, WLAN), with the Internet and/or with a mobile communications network (4G, 5G).
The object OBJ1 may be e.g. a real object or a virtual object. A real object OBJ1 may be e.g. in solid, liquid, or gaseous form. The real object OBJ1 may be a cuvette filled with a gas. The real object OBJ1 may be e.g. a plant (e.g. tree or a flower), a combustion flame, or an oil spill floating on water. The real object OBJ1 may be e.g. the sun or a star observed through a layer of absorbing gas. The real object may be e.g. an image printed on a paper. A virtual object OBJ1 may be e.g. an optical image formed by another optical device.
The object may be e.g. a biological object, e.g. human body, animal body, tissue sample, or a plant. The object may be e.g. an inorganic object, e.g. a mineral sample or a gaseous sample. The formed spectral image (CIMG) may be compared with reference data e.g. in order to identify the object OBJ1. The formed spectral image (CIMG) may be compared with reference data e.g. in order to determine whether the object belongs to a given category or not. The formed spectral image (CIMG) may be compared with reference data e.g. in order to determine whether the state of the object is normal or abnormal.
The device 500 may be arranged to capture spectral images, which represent two or more wavelengths (λ1, λ2, λ3, λ4) selected e.g. from the range of 600 nm to 1050 nm. The device 500 may be arranged to capture spectral images, which represent several wavelengths (λ1, λ2, λ3, λ4) selected e.g. from the visible and/or near infrared range.
The device 500 may be arranged to capture spectral images, which represent two or more wavelengths (λ1, λ2, λ3, λ4) selected e.g. from the range of 950 nm to 1700 nm. The device 500 may be arranged to capture spectral images, which represent two or more wavelengths (λ1, λ2, λ3, λ4) selected e.g. from the shortwave infrared (SWIR) range. The image sensor SEN1 may be e.g. an InGaAs image sensor.
The dimensions of the spectral imaging device 500 may be selected e.g. such that the angular distribution (ΔθLB3) of light rays transmitted through the Fabry-Perot interferometer is as narrow as possible.
The F-number of the lenses of the lens array may be selected to be as small as possible in order to minimize the length of the device 500. The F-number of a lens is equal to the ratio f/D, where f denotes the focal length of said lens and D denotes the diameter of said lens.
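The ratio f/D stated above can be illustrated with the example microlens values given later in this description (focal length 1 mm, pitch 0.25 mm), under the additional assumption that the lens diameter equals the pitch. This is a minimal sketch of the formula only.

```python
# Minimal sketch of the F-number relation f/D. The diameter is assumed
# equal to the microlens pitch d50, which is an assumption made here for
# illustration only.

def f_number(focal_length, diameter):
    """F-number = f / D (dimensionless; inputs in the same unit)."""
    return focal_length / diameter

n = f_number(focal_length=1.0e-3, diameter=0.25e-3)  # f = 1 mm, D = 0.25 mm
```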
The one or more dimensions of the device 500 may be selected to optimize performance. Said dimensions may include e.g. the width wAPE1 of the input aperture APE1, the focal length of the lens (LNS1 or LNS2) of the light beam modifier system SYS1, the focal length of the lenses of the lens array ARR1, and/or the pitch dimension d50 between centers of adjacent lenses of the lens array ARR1.
Selecting a small aperture size (wAPE1) may improve spectral resolution of the Fabry-Perot interferometer. The aperture size (wAPE1) may be selected to be large enough so as to enable stitching of the sub-images.
By way of example, the width wAPE1 of the aperture APE1 of the telecentric system SYS1 may be e.g. substantially equal to 1.2 mm. The focal length of the lens LNS1 of the telecentric system SYS1 may be e.g. substantially equal to 6 mm. The width of the mirror M1, M2 may be e.g. substantially equal to 5 mm. The length L500 of the spectral imaging device 500 may be e.g. substantially equal to 9 mm. The lens array ARR1 may comprise e.g. a rectangular 15×15 array of microlenses LNS. The pitch dimension d50 of the lens array ARR1 may be e.g. substantially equal to 0.25 mm. The focal length of the lenses of the lens array ARR1 may be e.g. substantially equal to 1 mm. The image sensor SEN1 may comprise e.g. a rectangular 640×480 array of detector pixels. The diagonal field of view (VIEW1) may be e.g. substantially equal to 40°. The distance L0 between the object OBJ1 and the device 500 may be e.g. substantially equal to 500 mm.
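The example dimensions above can be cross-checked with simple geometry. The following is an illustrative arithmetic sketch, not part of the specification: it estimates the sub-image size available per microlens and the object area spanned by the stated diagonal field of view at the stated object distance.

```python
# Illustrative check of the example dimensions: 640 x 480 detector pixels
# shared by a 15 x 15 microlens array, and a 40 deg diagonal field of view
# at an object distance L0 = 500 mm.

import math

# Detector pixels available per microlens sub-image (integer division):
sub_w = 640 // 15  # pixels per sub-image horizontally
sub_h = 480 // 15  # pixels per sub-image vertically

# Object-side diagonal covered by the 40 deg field of view at L0 = 500 mm:
l0_mm = 500.0
diag_mm = 2.0 * l0_mm * math.tan(math.radians(40.0 / 2.0))
```

With these example values, each microlens sub-image spans roughly 42×32 detector pixels, and the field of view covers an object-side diagonal of roughly 364 mm at 500 mm distance.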
For the person skilled in the art, it will be clear that modifications and variations of the devices and methods according to the present invention are perceivable. The figures are schematic. The particular embodiments described above with reference to the accompanying drawings are illustrative only and not meant to limit the scope of the invention, which is defined by the appended claims.
Number | Date | Country | Kind
---|---|---|---
202054050 | Apr 2020 | FI | national
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/FI2021/050301 | Apr 21, 2021 | WO |