There are a number of applications in which it is of interest to detect or image an object. Detecting an object determines the absence or presence of the object, while imaging results in a representation of the object. The object may be imaged or detected in daylight or in darkness, depending on the application.
Wavelength-dependent imaging is one technique for imaging or detecting an object, and typically involves capturing one or more particular wavelengths that reflect off, or transmit through, an object. In some applications, only solar or ambient illumination is needed to detect or image an object, while in other applications additional illumination is required. Light, however, is transmitted through the atmosphere at many different wavelengths, both visible and non-visible. It can therefore be difficult to isolate the wavelengths of interest, particularly when those wavelengths are not visible.
Additionally, some filter materials exhibit a distinct absorption spectral peak with a tail extending towards a particular wavelength.
In accordance with the invention, a method and system for wavelength-dependent imaging and detection using a hybrid filter are provided. An object to be imaged or detected is illuminated by a single broadband light source or multiple light sources emitting light at different wavelengths. The light is received by a receiving module, which includes a light-detecting sensor and a hybrid filter. The hybrid filter includes a multi-band narrowband filter and a patterned filter layer. The patterned filter layer includes regions of filter material that transmit a portion of the light received from the narrowband filter and filter-free regions that transmit all of the light received from the narrowband filter. Because the regions of filter material absorb a portion of the light passing through the filter material, a gain factor is applied to the light that is transmitted through the regions of filter material. The gain factor is used to balance the scene signals in one or more images and maximize the feature signals in one or more images.
The invention will best be understood by reference to the following detailed description of embodiments in accordance with the invention when read in conjunction with the accompanying drawings, wherein:
The following description is presented to enable one skilled in the art to make and use the invention, and is provided in the context of a patent application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. Thus, the invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the appended claims and with the principles and features described herein. It should be understood that the drawings referred to in this description are not drawn to scale.
Embodiments in accordance with the invention relate to methods and systems for wavelength-dependent imaging and detection using a hybrid filter. A technique for pupil detection is included in the detailed description as an exemplary system that utilizes a hybrid filter in accordance with the invention. Hybrid filters in accordance with the invention, however, can be used in a variety of applications where wavelength-dependent detection and/or imaging of an object or scene is desired. For example, a hybrid filter in accordance with the invention may be used to detect movement along an earthquake fault, to detect the presence, attentiveness, or location of a person or subject, and to detect or highlight moisture in a manufacturing subject. Additionally, a hybrid filter in accordance with the invention may be used in medical and biometric applications, such as, for example, systems that detect fluids or oxygen in tissue and systems that identify individuals using their eyes or facial features. In these biometric identification systems, pupil detection may be used to aim an imager accurately in order to capture required data with minimal user training.
With reference now to the figures and in particular with reference to
In an embodiment for pupil detection, two images are taken of the face and/or eyes of subject 306 using detector 300. One of the images is taken using light source 302, which is close to or on axis 308 of the detector 300 (“on-axis light source”). The second image is taken using light source 304, which is located at a larger angle away from the axis 308 of the detector 300 (“off-axis light source”). When the eyes of subject 306 are open, the difference between the images highlights the pupils of the eyes, because specular reflection from the retina is detected only in the on-axis image. The diffuse reflections from other facial and environmental features largely cancel out, leaving the pupils as the dominant feature in the differential image. This can be used to infer that the eyes of subject 306 are closed when the pupils are not detectable in the differential image.
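In outline, the differential image can be formed as in the following sketch. This is a minimal illustration, not the controller's actual implementation; the array names and the clipping step are assumptions, and the gain factor is discussed later in this description.

```python
import numpy as np

def pupil_difference_image(on_axis: np.ndarray, off_axis: np.ndarray,
                           gain: float = 1.0) -> np.ndarray:
    """Subtract the off-axis frame from the on-axis frame.

    The bright retinal return appears only in the on-axis frame, so the
    pupils dominate the difference, while diffuse reflections from the
    face and background largely cancel. `gain` balances the two channels.
    """
    diff = on_axis.astype(np.float64) - gain * off_axis.astype(np.float64)
    return np.clip(diff, 0.0, None)  # negative residue carries no pupil signal
```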
The amount of time the eyes of subject 306 are open or closed can be monitored against a threshold in this embodiment in accordance with the invention. Should the threshold not be satisfied (e.g., the percentage of time the eyes are open falls below the threshold), an alarm or some other action can be taken to alert subject 306. The frequency or duration of blinking may be used as a criterion in other embodiments in accordance with the invention.
Differential reflectivity off a retina of subject 306 is dependent upon angle 310 between light source 302 and axis 308 of detector 300, and angle 312 between light source 304 and axis 308. In general, making angle 310 smaller will increase the retinal return. As used herein, “retinal return” refers to the intensity (brightness) that is reflected off the back of the eye of subject 306 and detected at detector 300. “Retinal return” is also used to include reflection from other tissue at the back of the eye (other than or in addition to the retina). Accordingly, angle 310 is selected such that light source 302 is on or close to axis 308. In this embodiment in accordance with the invention, angle 310 is typically in the range from approximately zero to two degrees.
In general, the size of angle 312 is chosen so that only a low retinal return from light source 304 will be detected at detector 300. The iris (surrounding the pupil) blocks this signal, and so pupil size under different lighting conditions should be considered when selecting the size of angle 312. In this embodiment in accordance with the invention, angle 312 is typically in the range from approximately three to fifteen degrees. In other embodiments in accordance with the invention, the size of angles 310, 312 may be different. For example, the characteristics of a particular subject may determine the size of angles 310, 312.
Light sources 302, 304 emit light at different wavelengths that yield substantially equal image intensity (brightness) in this embodiment in accordance with the invention. Even though light sources 302, 304 can emit at any wavelengths, the wavelengths selected in this embodiment are chosen so that the light will not distract the subject and the iris of the eye will not contract in response to the light. The selected wavelengths are typically within the response range of detector 300. In this embodiment in accordance with the invention, light sources 302, 304 are implemented as light-emitting diodes (LEDs) or multi-mode lasers having infrared or near-infrared wavelengths. Each light source 302, 304 may be implemented as one source or as multiple sources.
Controller 316 receives the images captured by detector 300 and processes the images. In the embodiment of
Referring now to
On-axis light source 302 emits a beam of light towards beam splitter 500. Beam splitter 500 splits the on-axis light into two segments, with one segment 502 directed towards subject 306. A smaller yet effective on-axis angle of illumination is permitted when beam splitter 500 is placed between detector 300 and subject 306.
Off-axis light source 304 also emits a beam of light 504 towards subject 306. Light from segments 502, 504 reflects off subject 306 towards beam splitter 500. Light from segments 502, 504 may reflect off subject 306 simultaneously or alternately, depending on when light sources 302, 304 emit light. Beam splitter 500 splits the reflected light into two segments and directs one segment 506 towards detector 300. Detector 300 captures two images of subject 306 using the reflected light and transmits the images to controller 316 for processing.
Each controller 316a, 316b performs an independent analysis to determine the position of the subject's 306 eye or eyes in two-dimensions. Stereo controller 600 uses the data generated by both controllers 316a, 316b to generate the position of the eye or eyes of subject 306 in three-dimensions. On-axis light sources 302a, 302b and off-axis light sources 304a, 304b may be positioned in any desired configuration. In some embodiments in accordance with the invention, an on-axis light source (e.g. 302b) may be used as the off-axis light source (e.g. 304a) for the opposite system.
Referring now to
A patterned filter layer 802 is formed on sensor 800 using filter materials that cover alternating pixels in the sensor 800. The choice of filter materials is determined by the wavelengths used by light sources 302, 304. For example, in this embodiment in accordance with the invention, patterned filter layer 802 includes regions (identified as 1) that include a filter material for blocking the light at the wavelength used by light source 302 and transmitting the light at the wavelength used by light source 304. Other regions (identified as 2) are left uncovered and receive light from both light sources 302, 304.
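By way of illustration only, the following sketch separates a frame from such an alternating-pixel layout into its two pixel populations. The checkerboard parity convention and array layout are assumptions, not the layout of sensor 800.

```python
import numpy as np

def split_checkerboard(raw: np.ndarray):
    """Split a sensor frame into the two interleaved pixel populations.

    Assumes region 1 (filter material, off-axis wavelength only) occupies
    one checkerboard parity and region 2 (filter-free, both wavelengths)
    the other.
    """
    rows, cols = np.indices(raw.shape)
    covered = (rows + cols) % 2 == 0       # region 1: filter material
    uncovered = ~covered                   # region 2: filter-free
    frame1 = np.where(uncovered, raw, 0)   # receives both wavelengths
    frame2 = np.where(covered, raw, 0)     # receives one wavelength only
    return frame1, frame2
```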
In the
Various types of filter materials can be used in the patterned filter layer 802. In this embodiment in accordance with the invention, the filter material includes a polymer doped with pigments or dyes. In other embodiments in accordance with the invention, the filter material may include interference filters, reflective filters, and absorbing filters made of semiconductors, other inorganic materials, or organic materials.
Narrowband filter 916 is a dielectric stack filter in this embodiment in accordance with the invention. Dielectric stack filters are designed to have particular spectral properties. In this embodiment in accordance with the invention, the dielectric stack filter is formed as a dual-band filter. Narrowband filter 916 (i.e., dielectric stack filter) is designed to have one peak at λ1 and another peak at λ2. The shorter wavelength λ1 is associated with the on-axis light source 302, and the longer wavelength λ2 with off-axis light source 304 in this embodiment in accordance with the invention. The shorter wavelength λ1, however, may be associated with off-axis light source 304 and the longer wavelength λ2 with on-axis light source 302 in other embodiments in accordance with the invention.
When light strikes narrowband filter 916, light at wavelengths other than the wavelengths of light source 302 (λ1) and light source 304 (λ2) is filtered out, or blocked, from passing through narrowband filter 916. Thus, light at visible wavelengths (λVIS) and at other wavelengths (λn) is filtered out in this embodiment, while light at or near the wavelengths λ1 and λ2 is transmitted through narrowband filter 916. Consequently, only light at or near the wavelengths λ1 and λ2 passes through glass cover 914. Thereafter, filter regions 910 transmit the light at wavelength λ2 while blocking the light at wavelength λ1, so that pixels 902 and 906 receive only the light at wavelength λ2.
Filter-free regions 912 transmit the light at wavelengths λ1 and λ2. In general, more light will reach uncovered pixels 900, 904 than will reach pixels 902, 906 covered by filter regions 910. Image-processing software in controller 316 can be used to separate the image generated in the second frame (corresponding to covered pixels 902, 906) and the image generated in the first frame (corresponding to uncovered pixels 900, 904). For example, controller 316 may include an application-specific integrated circuit (ASIC) with pipeline processing to determine the difference image. And MATLAB®, a product by The MathWorks, Inc. located in Natick, Mass., may be used to design the ASIC.
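Because each channel occupies only half of the pixel grid, the image-processing software typically estimates the missing pixels of each channel by interpolation before forming the difference. The following is a minimal sketch assuming simple 4-neighbor averaging; the pipeline's actual interpolation algorithm may differ.

```python
import numpy as np

def fill_from_neighbors(frame: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Estimate each missing pixel as the mean of its valid 4-neighbors.

    `valid` is a boolean mask marking the pixels that belong to this
    channel; the remaining pixels are filled in from their neighbors.
    """
    f = np.where(valid, frame, 0.0).astype(np.float64)
    v = valid.astype(np.float64)
    fp = np.pad(f, 1)                      # zero padding outside the frame
    vp = np.pad(v, 1)
    nsum = fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:]
    ncnt = vp[:-2, 1:-1] + vp[2:, 1:-1] + vp[1:-1, :-2] + vp[1:-1, 2:]
    est = np.divide(nsum, ncnt, out=np.zeros_like(nsum), where=ncnt > 0)
    return np.where(valid, f, est)
```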
Narrowband filter 916 and patterned filter layer 908 form a hybrid filter in this embodiment in accordance with the invention.
Those skilled in the art will appreciate that patterned filter layer 908 provides a mechanism for selecting channels at pixel spatial resolution. In this embodiment in accordance with the invention, channel one is associated with the on-axis image and channel two with the off-axis image. In other embodiments in accordance with the invention, channel one may be associated with the off-axis image and channel two with the on-axis image.
Sensor 800 sits in a carrier (not shown) in this embodiment in accordance with the invention. Glass cover 914 typically protects sensor 800 from damage and particle contamination (e.g. dust). In another embodiment in accordance with the invention, the hybrid filter includes patterned filter layer 908, glass cover 914, and narrowband filter 916. Glass cover 914 in this embodiment is formed as a colored glass filter, and is included as the substrate of the dielectric stack filter (i.e., narrowband filter 916). The colored glass filter is designed to have certain spectral properties, and is doped with pigments or dyes. Schott Optical Glass Inc., a company located in Mainz, Germany, is one company that manufactures colored glass that can be used in colored glass filters.
Referring now to
Broadband light source 1100 transmits light towards transparent object 1102. Broadband light source 1100 emits light at multiple wavelengths, two or more of which are the wavelengths of interest detected by detector 300. In other embodiments in accordance with the invention, broadband light source 1100 may be replaced by two light sources transmitting light at different wavelengths.
Lens 1104 captures the light transmitted through transparent object 1102 and focuses it onto the top surface of narrowband filter 916. For systems using two wavelengths of interest, detector 300 captures one image using light transmitted at one wavelength of interest and a second image using light transmitted at the other wavelength of interest. The images are then processed using the method for image processing described in more detail in conjunction with
As discussed earlier, narrowband filter 916 is a dielectric stack filter that is formed as a dual-band filter. Dielectric stack filters can include any combination of filter types. The desired spectral properties of the completed dielectric stack filter determine which types of filters are included in the layers of the stack.
For example, a dual-band filter can be fabricated by stacking three coupled-cavity resonators on top of each other, where each coupled-cavity resonator is formed with two Fabry-Perot resonators.
Cavity 1206 separates two DBR layers 1202, 1204. Cavity 1206 is configured as a half-wavelength (pλ/2) thick cavity, where p is an integer. The thickness of cavity 1206 and the materials in DBR layers 1202, 1204 determine the transmission peak for FP resonator 1200.
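The relationship between mirror design, cavity thickness, and the transmission peak can be illustrated numerically. The following sketch evaluates normal-incidence transmission of a lossless layer stack with the standard characteristic-matrix method; the refractive indices, layer counts, and 800 nm design wavelength are illustrative assumptions, not the materials of DBR layers 1202, 1204.

```python
import numpy as np

def stack_transmission(indices, thicknesses, wavelengths, n_in=1.0, n_sub=1.52):
    """Normal-incidence transmission of a lossless thin-film stack."""
    T = np.empty(len(wavelengths))
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(indices, thicknesses):
            delta = 2.0 * np.pi * n * d / lam   # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        t = 2.0 * n_in / (n_in * B + C)         # amplitude transmission
        T[k] = (n_sub / n_in) * abs(t) ** 2
    return T

# Example: one Fabry-Perot resonator -- quarter-wave DBR mirrors around a
# half-wave (p = 1) cavity, designed for a peak near 800 nm.
lam0 = 800e-9
nH, nL = 2.3, 1.45                              # assumed high/low indices
mirror = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 4
cavity = [(nL, lam0 / (2 * nL))]
layers = mirror + cavity + mirror[::-1]
idx = [n for n, _ in layers]
thk = [d for _, d in layers]
wl = np.linspace(700e-9, 900e-9, 801)
transmission = stack_transmission(idx, thk, wl)  # single peak near lam0
```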
In this first method for fabricating a dual-band narrowband filter, two FP resonators 1200 are stacked together to create a coupled-cavity resonator.
Stacking two FP resonators together splits single transmission peak 1300 in
Stacking three coupled-cavity resonators together splits each of the two peaks 1500, 1502 into a triplet of peaks 1700, 1702, respectively.
Referring now to
Next, one or more difference images are generated at block 2002. The number of difference images generated depends upon the application. For example, in the embodiment of
Next, convolution and local thresholding are applied to the images at block 2004. The pixel value for each pixel is compared with a predetermined value; the choice of predetermined value is contingent upon the application. Each pixel is assigned a color based on how its pixel value compares with the predetermined value. For example, pixels are assigned the color white when their pixel values exceed the predetermined value, and pixels whose pixel values are less than the predetermined value are assigned the color black.
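A minimal sketch of this step follows; the smoothing kernel size and the predetermined value are application-dependent placeholders, not claimed parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def binarize(diff: np.ndarray, threshold: float, smooth: int = 3) -> np.ndarray:
    """Convolve with a small averaging kernel, then threshold.

    Pixels whose smoothed value exceeds `threshold` become white (1);
    all other pixels become black (0).
    """
    smoothed = uniform_filter(diff.astype(np.float64), size=smooth)
    return (smoothed > threshold).astype(np.uint8)
```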
Image interpretation is then performed on each difference image to determine where a pupil resides within the difference image. For example, in one embodiment in accordance with the invention, algorithms for eccentricity and size analyses are performed. The eccentricity algorithm analyzes resultant groups of white and black pixels to determine the shape of each group. The size algorithm analyzes the resultant groups to determine the number of pixels within each group. A group is determined to not be a pupil when there are too few or too many pixels within a group to form a pupil. A group is also determined to not be a pupil when the shape of the group does not correspond to the shape of a pupil. For example, a group in the shape of a rectangle would not be a pupil. In other embodiments in accordance with the invention, only one algorithm may be performed. For example, only an eccentricity algorithm may be performed on the one or more difference images. Furthermore, additional or different image interpretation functions may be performed on the images in other embodiments in accordance with the invention.
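The size and eccentricity screening described above might be sketched as follows; the numeric limits and the covariance-based eccentricity measure are illustrative assumptions rather than the claimed algorithms.

```python
import numpy as np
from scipy.ndimage import label

def plausible_pupils(binary: np.ndarray, min_px=20, max_px=400, max_ecc=0.8):
    """Keep connected white groups whose size and shape could be a pupil."""
    labels, n = label(binary)
    candidates = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if not (min_px <= ys.size <= max_px):
            continue  # too few or too many pixels to form a pupil
        # Eccentricity from the covariance of the group's pixel coordinates:
        cov = np.cov(np.vstack([xs, ys]).astype(float))
        evals = np.linalg.eigvalsh(cov)  # ascending order
        ecc = np.sqrt(1.0 - evals[0] / evals[1]) if evals[1] > 0 else 1.0
        if ecc <= max_ecc:               # roughly circular groups pass
            candidates.append((xs.mean(), ys.mean()))
    return candidates
```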
The variables, equations, and assumptions used to calculate a gain factor depend upon the application.
(1) Maximize |feature signal in frame 1 − feature signal in frame 2|
(2) Balance scene signal in frame 1 with scene signal in frame 2
A pixel-based contrast can be defined from the expressions above as:

Cp = |feature signal in frame 1 − feature signal in frame 2| / |scene signal in frame 1 − scene signal in frame 2|
In this case, maximizing Cp maximizes contrast. For the pixels representing the background scene, a mean difference in pixel grayscale levels over the background scene is calculated with the equation

MA = (1/r) Σi (grayscale level in frame 1 − grayscale level in frame 2)i
where the index i sums over the background pixels and r is the number of pixels in the background scene. For the pixels representing the features of interest (e.g., pupil or pupils), a mean difference grayscale level over the features of interest is calculated with the equation

MB = (1/s) Σi (grayscale level in frame 1 − grayscale level in frame 2)i
where the index i sums over pixels showing the feature(s) of interest and s is the number of pixels representing the feature(s) of interest. Each histogram in
In this embodiment |MB−MA| is large compared to (σA+σB) by design. In spectral differential imaging, proper selection of the two wavelength bands yields high contrast to make |MB| large and proper choice of the gain will make |MA| small by balancing the background signal in the two frames. In eye detection, angle sensitivity of retinal reflection between the two channels will make |MB| large and proper choice of the gain will make |MA| small by balancing the background signal in the two frames. The standard deviations depend on a number of factors, including the source image, the signal gray levels, uniformity of illumination between the two channels, the gain used for channel two (e.g., off-axis image), and the type of interpolation algorithm used to represent pixels of the opposite frame.
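For concreteness, the statistics MA, MB, σA, and σB can be computed from a difference image and a mask of feature pixels, as in the following sketch; the mask and the whole-image scope are assumptions.

```python
import numpy as np

def contrast_statistics(diff: np.ndarray, feature_mask: np.ndarray):
    """Mean difference and spread over background and feature pixels.

    `diff` is the (gain-balanced) difference image; `feature_mask` marks
    the pixels showing the feature(s) of interest. Returns (M_A, sigma_A)
    for the background and (M_B, sigma_B) for the features.
    """
    background = diff[~feature_mask]
    feature = diff[feature_mask]
    return (background.mean(), background.std()), (feature.mean(), feature.std())

# Contrast based on mean values: C_M = |M_B - M_A| / (sigma_A + sigma_B)
```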
It is assumed in this embodiment that a majority of background scenes contain a wide variety of gray levels. Consequently, the standard deviation σA tends to be large unless the appropriate gain has been applied. In general, a larger value of the difference signal MA will lead to a larger value of the standard deviation σA, or
σA=αMA
where α is approximately constant and assumes the sign necessary to deliver a positive standard deviation σA. In other embodiments in accordance with the invention, other assumptions may be employed. For example, a more complex constant may be used in place of the constant α.
Contrast based on mean values can now be defined as

CM = |MB − MA| / (σA + σB)
It is also assumed in this embodiment that σA>σB, so CM is approximated as

CM ≈ |MB − MA| / σA = |MB − MA| / (αMA)
To maximize CM, the |MB − MA| portion of the equation is maximized by assigning the channels so that MB >> 0, and MA is minimized. The equation for CM then becomes

CM ≈ |feature signal in frame 1 − feature signal in frame 2| / (α|scene signal in frame 1 − scene signal in frame 2|)

where

feature signal in frame 1 = ∫(L1+A)P1T1,1S1dλ + ∫(L2+A)P2T1,2S2dλ,
feature signal in frame 2 = G[∫(L2+A)P2T2,2S2dλ + ∫(L1+A)P1T2,1S1dλ],
scene signal in frame 1 = ∫(L1+A)Xx,y,1T1,1S1dλ + ∫(L2+A)Xx,y,2T1,2S2dλ, and
scene signal in frame 2 = G[∫(L2+A)Xx,y,2T2,2S2dλ + ∫(L1+A)Xx,y,1T2,1S1dλ],
with the above parameters defined as:
λ=wavelength;
Lm(λ) is the power per unit area per unit wavelength of light source m of the differential imaging system at the object, where m represents one wavelength band. Integrating over wavelength band m, Lm=∫Lm(λ)dλ;
A(λ) is the ambient light source power per unit area per unit wavelength. Integrating over wavelength band m, Am=∫A(λ)dλ;
Pm(λ) is the reflectance (diffuse or specular) of the point (part of the feature) of interest at wavelength λ per unit wavelength, for wavelength band m. Integrating over wavelength band m, Pm=∫Pm(λ)dλ;
Xx,y,m(λ) is the background scene reflectance (diffuse or specular) at location x,y on the imager per unit wavelength as viewed at wavelength band m;
Tm,n(λ) is the filter transmission per unit wavelength for the pixels associated with wavelength band m measured at the wavelengths of band n. Integrating over wavelength for the case m=n, Tm,m=∫Tm,m(λ)dλ;
S(λ) is the sensitivity of the imager at wavelength λ; and
G is a gain factor which is applied to one frame.
In this embodiment, Tm,n(λ) includes all filters in series, for example both a dual-band narrowband filter and a patterned filter layer. For the feature signal in frame 1, if the wavelength bands have been chosen correctly, P1>>P2 and the second integral on the right becomes negligible. And the relatively small size of P2 makes the first integral in the equation for the feature signal in frame 2 negligible. Consequently, by combining integrands in the numerator, condition (1) from above becomes
Maximize |∫(L1+A)P1(T1,1−GT2,1)S1dλ|.
To meet condition (1), L1, P1, and S1 are maximized within eye safety/comfort limits in this embodiment in accordance with the invention. One approach maximizes T1,1, while using a smaller gain G in the wavelength band for channel two and a highly discriminating filter so that T2,1 equals or nearly equals zero. For eye detection in the near infrared range, P1 is higher when the shorter wavelength channel is the on-axis channel, due to water absorption in the vitreous humor and other tissues near 950 nm. S1 is also higher when the shorter wavelength channel is the on-axis channel due to higher detection sensitivity at shorter wavelengths.
Note that for the scene signal in frame 1, the second integral should be small if T1,2 is small. And in the scene signal in frame 2, the second integral should be small if T2,1 is small. More generally, by combining integrands in the denominator, condition (2) from above becomes
minimize |∫(L1+A)Xx,y,1(T1,1−GT2,1)S1dλ−∫(L2+A)Xx,y,2(GT2,2−T1,2)S2dλ|.
To meet condition (2), the scene signal levels in the two frames in the denominator are balanced in this embodiment in accordance with the invention. Therefore,
∫(L1+A)Xx,y,1(T1,1−GT2,1)S1dλ=∫(L2+A)Xx,y,2(GT2,2−T1,2)S2dλ.
Solving for gain G,

G = [∫(L1+A)Xx,y,1T1,1S1dλ + ∫(L2+A)Xx,y,2T1,2S2dλ] / [∫(L1+A)Xx,y,1T2,1S1dλ + ∫(L2+A)Xx,y,2T2,2S2dλ]
It is assumed in this embodiment that X≡Xx,y,1≈Xx,y,2 for most cases, so the equation for the gain is reduced to

G = [∫(L1+A)XT1,1S1dλ + ∫(L2+A)XT1,2S2dλ] / [∫(L1+A)XT2,1S1dλ + ∫(L2+A)XT2,2S2dλ]
Filter crosstalk in either direction does not exist in some embodiments in accordance with the invention. Consequently, T1,2 = T2,1 = 0, and the equation for the gain is

GnoXtalk = ∫(L1+A)XT1,1S1dλ / ∫(L2+A)XT2,2S2dλ
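Given sampled spectra, the no-crosstalk gain above can be evaluated numerically. The following sketch assumes all quantities are available as arrays on a common wavelength grid; the spectra themselves are placeholders for measured or modeled curves.

```python
import numpy as np

def _integral(f, wl):
    """Trapezoidal integration over the wavelength grid."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(wl))

def gain_no_crosstalk(wl, L1, L2, A, X, T11, T22, S):
    """G_noXtalk = ∫(L1+A)·X·T1,1·S dλ / ∫(L2+A)·X·T2,2·S dλ,
    evaluated on sampled spectra over the grid `wl`."""
    num = _integral((L1 + A) * X * T11 * S, wl)
    den = _integral((L2 + A) * X * T22 * S, wl)
    return num / den
```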
When a dielectric stack filter is used in series with other filters, the filter transmission functions may be treated the same, as the peak levels are the same for both bands. Thus, the equation for the gain becomes

GnoXtalk = ∫(L1+A)XS1dλ / ∫(L2+A)XS2dλ
Defining
the gain equation is
If the sources are turned off, L1 = L2 = 0 and

GAnoXtalk = ∫A·X·S1dλ / ∫A·X·S2dλ
where GAnoXtalk is the optimal gain for ambient lighting only. In this embodiment, the entire image is analyzed for this calculation in order to obtain relevant contrasts. The entire image does not have to be analyzed in other embodiments in accordance with the invention. For example, in another embodiment in accordance with the invention, only a portion of the image near the features of interest may be selected.
Since the ambient spectrum due to solar radiation and the ratio of ambient light in the two channels change both over the course of the day and with direction, the measurements to determine gain are repeated periodically in this embodiment. The ratio of measured light levels is calculated by taking the ratio of the scene signals in the two channels with the light sources off and by applying the same assumptions as above:
Solving for the ratio of the true ambient light levels A1/A2, the equation becomes
Substituting this expression into the equation for GAnoXtalk yields
GAnoXtalk=RAnoXtalk.
Thus the gain for ambient lighting can be selected as the ratio of the true ambient light levels in the two channels (A1/A2) as selected by the dielectric stack filter.
When the light sources are driven relative to the ambient lighting, as defined in the equation

L1/L2 = A1/A2,
the gain expressions for both the ambient- and intentionally-illuminated no-crosstalk cases will be equal, i.e. GnoXtalk=GAnoXtalk, even in dark ambient conditions where the system sources are more significant. Thus the gain is constant through a wide range of ambient light intensities when the sources are driven at levels whose ratio between the two channels matches the ratio of the true ambient light levels.
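One way to realize this drive condition is sketched below, under the assumption of a fixed total optical power budget shared by the two sources; the function and its parameters are illustrative.

```python
def source_drive_levels(ambient_ratio: float, total_power: float):
    """Split a power budget so that L1/L2 matches the measured ambient
    ratio A1/A2, keeping the gain constant as ambient intensity varies."""
    L2 = total_power / (1.0 + ambient_ratio)
    L1 = ambient_ratio * L2
    return L1, L2
```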
In those embodiments with crosstalk in only one of the filters, the expression for the gain can be written as

G = [∫(L1+A)Xx,y,1T1,1S1dλ + ∫(L2+A)Xx,y,2T1,2S2dλ] / ∫(L2+A)Xx,y,2T2,2S2dλ
where T2,1=0, thereby blocking crosstalk from wavelength band 1 into the pixels associated with wavelength band 2. Assuming Xx,y,1≈Xx,y,2, this expression can also be written as

G = [∫(L1+A)XT1,1S1dλ + ∫(L2+A)XT1,2S2dλ] / ∫(L2+A)XT2,2S2dλ
The filter transmission functions are treated similar to delta functions (at the appropriate wavelengths multiplied by peak transmission levels) in this embodiment, so the equation for the gain becomes
Defining
the equation simplifies to
The ratio of the true ambient light levels is calculated by taking the ratio of the scene signals in the two channels with light sources off and applying the same assumptions as above. Therefore, the ratio of the measured signal levels is
Solving for A1/A2, the equation becomes
and again GA=RA. Thus, in the embodiments with crosstalk, the ambient gain is set as the ratio of the measured ambient light levels. As in the no-crosstalk embodiment above, the illumination levels are set in proportion to the ratio of the true ambient light levels, and the system then operates with constant gain over a wide range of illumination conditions.
In practice, for some applications, the feature signal fills so few pixels that statistics for the entire subframes can be used to determine the gain factor. For example, for pupil detection at a distance of sixty centimeters using a VGA imager with a twenty-five degree full-angle field of view, the gain can be set as the ratio of the mean grayscale value of channel one divided by the mean grayscale value of channel two. Furthermore, those skilled in the art will appreciate that assumptions other than the ones made in the above calculations can be made when determining a gain factor. The assumptions depend on the system and application in use.
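That rule of thumb reduces to a one-line computation, sketched below for illustration.

```python
import numpy as np

def subframe_gain(channel1: np.ndarray, channel2: np.ndarray) -> float:
    """Gain as the ratio of mean grayscale values of the two subframes,
    valid when the feature fills so few pixels that whole-subframe
    statistics are dominated by the background scene."""
    return float(np.mean(channel1) / np.mean(channel2))

# e.g., in the pupil-detection geometry above, the pupils occupy a tiny
# fraction of a 640x480 subframe, so whole-frame means are effectively
# scene statistics.
```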
Although a hybrid filter and the calculation of a gain factor have been described with reference to detecting light at two wavelengths, λ1 and λ2, hybrid filters in other embodiments in accordance with the invention may be used to detect more than two wavelengths of interest.
A tri-band narrowband filter transmits light at or near the wavelengths of interest (λ1, λ2, and λ3) while blocking the transmission of light at all other wavelengths in this embodiment in accordance with the invention. Photoresist filters in a patterned filter layer then discriminate between the light received at wavelengths λ1, λ2, and λ3.
Determining a gain factor for the sensor of
(1) Maximize |feature signal in frame 1 − feature signal in frame 2|
(2) Maximize |feature signal in frame 3 − feature signal in frame 2| and
(3) Balance scene signal in frame 1 with scene signal in frame 2
(4) Balance scene signal in frame 3 with scene signal in frame 2
which becomes
Maximize |∫(L1+A)P1T1,1S1dλ − G1,2∫(L2+A)P2T2,2S2dλ|
Maximize |∫(L3+A)P3T3,3S3dλ − G3,2∫(L2+A)P2T2,2S2dλ|
and
∫(L1+A)Xx,y,1T1,1S1dλ=G1,2∫(L2+A)Xx,y,2T2,2S2dλ
∫(L3+A)Xx,y,3T3,3S3dλ=G3,2∫(L2+A)Xx,y,2T2,2S2dλ
where G1,2 is the gain applied to the reference channel (λ2) in order to match channel 1 (e.g., λ1), and G3,2 is the gain applied to the reference channel (λ2) in order to match channel 3 (e.g., λ3). Following the calculations from the two-wavelength embodiment (see
where G1,2=R1,2, the ratio of the scene signals. And
where G3,2=R3,2, the ratio of the scene signals.
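Mirroring the two-channel case, both gains can be set from subframe means, as in this illustrative sketch.

```python
import numpy as np

def three_channel_gains(frame1, frame2, frame3):
    """Gains for the tri-band embodiment: channel two (λ2) is the
    reference, and G_{1,2}, G_{3,2} balance it against channels one and
    three. Whole-frame means are used here for illustration."""
    g12 = float(np.mean(frame1) / np.mean(frame2))
    g32 = float(np.mean(frame3) / np.mean(frame2))
    return g12, g32
```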
Like the two-channel embodiment of