This disclosure relates to optical spectrometer instrumentation, and more particularly to a method of using a tristimulus detector common in commercially available cameras, including low cost consumer cameras and webcams, without the need for the high resolution optics or precision mechanical registration typically required by most spectrometers. Embodiments of the invention apply to general purpose spectrometry, with particular advantages in applications with spectral lines, including, for example, Raman spectrometry and emissions spectrometry. In addition, embodiments of the invention increase the effective dynamic range of the detector by reconstructing signals clipped due to over exposure.
Applications of spectrometers, especially in applied spectrometry for determining chemical composition and deformulation, chemical properties, structural integrity and chemical verification against counterfeit or mishap, are of great interest to the pharmaceutical, forensics, biotechnology, food and agriculture, mining, mineralogy, gemology, petroleum exploration, medical diagnostics, electronics and other industries. For food and agriculture, spectrometry has been applied for determining the composition and ripeness of food as well as for detecting contamination due to harmful chemical agents and pathogens such as infectious bacteria. Nutritional information of food ingredients and food in solution may be determined using methods such as Raman spectrometry, which uses a spectrometer to measure the shifts in light (visible and/or infrared) from a monochromatic excitation, typically from a laser, to other frequencies. In all applications, the wavelength, or the wavenumber (the inverse of wavelength), is important. In some cases, for example in Raman spectrometry, the difference between the excitation spectral line wavenumber and the measured spectral line wavenumbers is important.
Conventional spectrometer technologies include an internal or external light source 2, an optional specimen for determining absorption, transmission or re-emission 4, a spectroscope 6 that produces a spectrogram, a detector 8, processing for removing background signal and noise 10, and optionally further normalization 12 for the cases of absorption and transmission measurement.
Spectrometry, which in general is the application of spectrometers to study objects, typically requires a controlled light source, commonly a laser or broad band source. However, the spectrometer is often a separate device and does not include the light source. The detector within the spectrometer is generally sensitive across a broad band of radiation frequencies. In some cases, tristimulus detectors have been used, for example Charge Coupled Device (CCD) cameras with optical red (R), green (G) and blue (B) filters, but only for determining a single intensity estimate along the position of the spectrogram. This intensity estimate is taken directly from a color image, which is comprised of red, green and blue (RGB) primaries. Since the intensity is associated only with spatial position along the spectrogram, the resulting spectral resolution is limited to the resolution of the optics. The resolution of the optics is typically primarily determined by the width of the slit opening where the light enters the spectrometer.
For example, the Rspec Explorer is a relatively inexpensive commercially available spectrometer, available from fieldtestedsystems.com. Its USB camera is housed in a black box which also includes a diffraction grating and lens. A separate pair of adjustable black foamboard panels are supplied to provide an optical slit, which may be a few feet or further away from the black box. This external slit determines the limit of the optical resolution and therefore also the spectral resolution. A narrow slit improves resolution, but limits the light level for the spectrometer. Thus there is the classic trade-off between spectral resolution and signal to noise ratio. The USB camera is an RGB based CCD camera that captures the conventional image of the camera, with the diffraction grating image super-imposed on the right side. Software that runs on a PC takes this image and creates magnitudes from the RGB image of the spectrogram.
Measured wavelengths or wavenumbers in a typical spectrometer are generally determined by spatial location, which in turn is determined by design, requiring strict adherence to a particular alignment of all components in the optical path, and is usually further refined through calibration. The difficulty of alignment has been mitigated in many cases by using lenses and filters in two directions, with beam splitters and other optical components that tend to be lossy, sometimes causing marginal signal-to-noise ratios.
In the Rspec Explorer example, the user must align the peak of the slit from the direct camera image to a reference line (graticule) shown in a window of the corresponding software application on the computer. The alignment is typically somewhat tedious and imprecise as the peak is often too broad due to the slit being too wide. If instead the slit is narrow enough for precise alignment, the resulting spectrum magnitude is typically near or below the noise floor of the camera, or the reference peak is so high that it is beyond the dynamic range of the camera, resulting in clipping. This clipping means limiting the peak to the maximum camera digital code for amplitude in the respective channel(s). So, as with other spectrometers, the wavelength resolution of the Rspec Explorer is no better than the optical resolution determined by the combined point spread function of a slit, the resolution of the CCD, and the intermediate optics. And, as with other spectrometers, because of the trade-off between light intensity and resolution due to slit width, the camera's CCD dynamic range, determined by sensitivity and noise floor, also factors into the determination of the optical resolution and thus the spectral resolution.
Another type of existing spectrometer is one that uses a simple, very low cost spectrogram and a webcam detector. Many of the key performance issues with this prior art are summarized in Kong Man Seng, “Trace gas measurement using a web cam spectrometer,” March 2011, City University of Hong Kong, Department of Physics and Materials Science. The “Discussion” of section 5, pages 33-40, discusses the typical issues with slits, alignment, optics, noise and similar issues that cause limited optical resolution and dynamic range. The same section includes some of the typical methods for mitigating these issues, the vast majority of which depend on improving optical resolution directly, improving alignment mechanically, and increasing light source power along with the heat and other energy management required to prevent damage to components from the increased radiation.
In general, typical methods for improving spectrometer accuracy have been to increase optical resolution and precision, increase precision of overall mechanical and optical alignment, increase light power, and to increase detector dynamic range. Each of these increases cost with ever diminishing returns of improvement. Detector dynamic range is typically improved by reducing the noise floor and allowing for integrating steady state or repeated signals over time. The most common detector uses CCD technology. For highest dynamic range, the CCD is cryogenically cooled to mitigate one form of noise, while other forms of noise are still present. The noise types can be rebalanced by custom CCD design, thus improving cryogenic performance. The resulting detector system may be orders of magnitude more expensive than one based on consumer camera technology.
Many alternative designs attempt to mitigate issues associated with the loss of light through filters, narrow slits, etc., typically by adding more expensive optical components, light sources and the like.
Further, depending on the specific optical arrangement, often the wavelength as a function of distance along the spectrogram primary (frequency) axis is non-linear and follows a cosine function. Cosine correction is optionally included, often complicating the design.
These improvements in accuracy add significant expense. Many of the methods to improve accuracy, especially in combination, also cause the instruments to be large, bulky and prevent portability, or require additional significant expense to reduce size. Also, when integration is required to compensate for weak signals due to small slit size, the stability of the light source becomes critical, thus increasing cost, size and complexity of the light source. For the extra electronics and light power, the power supply required becomes significantly larger and more expensive.
Embodiments of the invention address these and other limitations of the prior art.
Embodiments of the invention are directed toward a simple, inexpensive method that does not require bulky, high power components, precision optical resolution, or careful alignment of the optical path, allowing for the detector to be more arbitrarily placed relative to the spectrogram image, and allowing for adaptation to spatial shifts, tilts, geometric distortions and other results of misalignment. This not only simplifies the rest of the design, but also allows for the detector to be separate from the spectrogram. Thus, using embodiments of the invention, the detector can be the camera on a smart phone, personal computer, webcam or consumer stand-alone electronic camera, hand-held or mounted, to digitally capture the spectrogram image. The remaining components for creating the spectrogram may be as few as the spectral separation component, such as a diffraction grating, the housing and a light source (for some applications such as gemology, an external, even natural light source). It is also particularly desirable that the optical resolution not determine the spectral resolution, especially for spectral lines as is inherent in Raman spectroscopy and emissions spectroscopy.
Accordingly, embodiments of the invention provide an optical spectrometer with significant improvements in accuracy for both simple, inexpensive components and systems as well as for more expensive and higher precision components and systems. Many applications formerly requiring expensive precision components may now be replaced with a spectrometer system with liberal spatial alignment of many principal components, and in particular with potentially very liberal spatial alignment of the spectrogram with the detector. For spectral lines, the optical resolution may be orders of magnitude worse for the same resulting spectral resolution. In one embodiment, the method uses a relatively low cost electronic camera that produces a tristimulus image of red, green and blue (RGB) samples per pixel. These channels with different wavelength sensitivities are used to determine wavelength mostly independently from the optical resolution. The dynamic range is less limited due to compensation for clipping.
For transmission, reflection and/or absorption, that typically requires a reference broadband light signal (for example, white light), the reference spectrogram is captured as RGB, converted to magnitude, and the wavelength of each spatial location of the spectrogram within the image is determined algorithmically. Subsequently, measurements may be made with magnitude relative to the reference for each wavelength according to spatial location in the RGB image.
For emissions, including those from burning, Raman scattering and similar line spectra, no reference broadband spectrogram need be captured. Generally, the relative line spectra magnitudes are used for analysis, including chemical identification and principal component analysis. For all embodiments, the spectral resolution and accuracy are independent from optical resolution except in the case where two or more adjacent spectral lines are close enough to be within a significant portion of the optical point spread function. In other words, for example, if a green line is smeared due to poor optical resolution, it is still a green line with unique wavelength determined by the method herein described. If a green line and a slightly more yellowish green line are blurred into each other by poor optical resolution then the resulting accuracy improvement using the method described herein may not be as significant. In many significant applications, this requirement is not a limitation.
The objects, advantages and other novel features of the invention are apparent from the following detailed description when read in conjunction with the appended claims and attached drawing views.
As described below, embodiments of the invention make significant improvements in measuring wavelength and magnitude from spectrogram images captured using relatively inexpensive tristimulus detectors. Such detectors are widely available as stand-alone RGB cameras, embedded in mobile devices such as smart phones (iPhone, Android, Blackberry), cell phones, notepads (iPad, etc.), laptops and other portable computers, and as accessories to computers such as USB cameras (webcams, in microscopes, telescopes, etc.). Many of these detectors include processors on which particular operations may be performed, as described in detail below.
In addition, embodiments of the invention improve the effective spectral resolution not only beyond the optical limits of the apparatus that captures the image of the spectrogram, but also beyond the optical limits of the spectrogram being captured. The amplitude measurement improvement includes both noise mitigation and non-linear distortion mitigation. The noise mitigation is achieved from both temporal and spatial integration at the appropriate wavelength. The non-linearity mitigation includes reconstructing peaks that have been clipped due to over-exposure. Together, these improvements in magnitude dynamic range can be over an order of magnitude. For sufficiently separated line spectra, the resulting improvement in spectral resolution and accuracy can be orders of magnitude.
Referring now to
Processors in the image capture device or in a spectrometer may perform the processing by running software on such processors. In some embodiments, functions or operations may be programmed into an FPGA or other firmware or hardware.
The RGB spectrogram image is optionally cropped in operation 44 to remove portions of the image that surround the spectrogram, thereby reducing the number of pixels to process for speed and/or reduced computation. For a nominally dark surround, cropping is performed by eliminating each line at the top and bottom, and each column on the right and left, where all pixels are below a useful amplitude threshold corresponding to a noise floor or black. In a preferred embodiment, the cropped spectrogram retains a small border of black, sufficient to measure the noise floor, on both sides, top and bottom. Alternatively, the cropping may be performed by removing the portions of the image that do not correlate well with the relatively saturated colors, in order, as is expected of a spectrogram. The cropped result is an image with mostly pure colors or black, with colors changing along the primary axis and colors being relatively constant, but with varying intensity, along the secondary axis. In an alternative embodiment, rotation of the spectrogram image is performed before or after cropping such that the primary axis is parallel to the image rows or lines, and the secondary axis is parallel to the image columns.
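As an illustrative sketch only (the disclosure does not specify an implementation), the dark-surround cropping might be expressed in Python as follows; the noise floor threshold and the width of the retained black border are assumed tuning parameters:

```python
import numpy as np

def crop_dark_border(img, noise_floor=8, keep_border=2):
    """Crop away rows and columns whose pixels all sit at or below the
    noise floor, keeping a small black border for noise measurement.
    img is an H x W x 3 RGB array; noise_floor and keep_border are
    illustrative values, not values from the disclosure."""
    bright = (img > noise_floor).any(axis=2)      # any channel above the floor
    rows = np.flatnonzero(bright.any(axis=1))     # rows containing signal
    cols = np.flatnonzero(bright.any(axis=0))     # columns containing signal
    if rows.size == 0 or cols.size == 0:
        return img                                # nothing above the floor
    r0 = max(rows[0] - keep_border, 0)
    r1 = min(rows[-1] + keep_border + 1, img.shape[0])
    c0 = max(cols[0] - keep_border, 0)
    c1 = min(cols[-1] + keep_border + 1, img.shape[1])
    return img[r0:r1, c0:c1]
```

The retained border corresponds to the preferred embodiment above, in which a small black margin is kept so the noise floor can still be measured after cropping.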
Next, two types of nonlinearity of the spectrogram are compensated in an operation 46 as shown in more detail in
As with most digitally encoded images, a gamma power function is used. So in order to apply linear operations such as integration, scaling, etc. to each channel, the inverse of the gamma power function must first be applied. For example, for sRGB, with R, G and B normalized to [0, 1], the linear representations Rlinear, Glinear and Blinear are calculated according to well known techniques as follows:
If R <= 0.03928
Rlinear = R/12.92
else
Rlinear = ((R+0.055)/1.055)^2.4
If G <= 0.03928
Glinear = G/12.92
else
Glinear = ((G+0.055)/1.055)^2.4
If B <= 0.03928
Blinear = B/12.92
else
Blinear = ((B+0.055)/1.055)^2.4
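A minimal Python sketch of the standard sRGB linearization for one normalized channel value c in [0, 1]:

```python
def srgb_to_linear(c):
    """Inverse sRGB gamma: linear-light value for an encoded channel c in [0, 1]."""
    if c <= 0.03928:
        return c / 12.92                      # linear segment near black
    return ((c + 0.055) / 1.055) ** 2.4       # power-law segment
```

The same function is applied independently to each of the R, G and B channels before any integration or scaling.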
Next, any clipping is mitigated as shown in
If any portion of a channel is clipped, clipping is located for each x location. In other words, for each location along the principal (x) axis, the locations along the secondary (y) axis of the start, clipStart(x) and end, clipEnd(x), of clipping are saved. For example,
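The per-column bookkeeping of clipStart(x) and clipEnd(x) might be sketched as follows, assuming 8-bit channel data in which the maximum digital code 255 indicates clipping (an assumption; the actual clip code depends on the camera encoding):

```python
import numpy as np

def find_clip_span(column, clip_code=255):
    """Return (clipStart, clipEnd) row indices bounding the clipped run in
    one column of a single channel, or None if nothing is clipped."""
    clipped = np.flatnonzero(np.asarray(column) >= clip_code)
    if clipped.size == 0:
        return None
    return int(clipped[0]), int(clipped[-1]) + 1   # half-open [start, end)
```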
Referring again to
The two principal methods of clip mitigation are: A) using an adjacent unclipped column, or the mean of consecutive unclipped columns, adjacent to and within the same channel as the clipped portion 78, or B) using the mean ratios of the unclipped portions of the set of Rlinear, Glinear and Blinear channels above and below the clipped portion of the column 84. For the second method, as an example, for each column of pixels in 82, the mean triplet {Rlinear, Glinear, Blinear} is calculated for the same column (same x value) over the combination of the regions above (94) and below (84) the clipped portion 82. In other words, the mean of nearby unclipped image segments is calculated for each channel: Rmuc, Gmuc, Bmuc. Then, for portions of the image where only Rlinear is clipped within 82, and at least one other channel is not clipped, the larger unclipped channel is the reference channel and the corresponding column is used as the local reference column within 82. The clipped portion of the Rlinear signal within 82 is then replaced with the local reference column scaled by the ratio of Rmuc to the mean reference channel (Gmuc or Bmuc). For example, for a given column x with clipped Rlinear(x) within 82, if Glinear(x) is the only unclipped channel, or if it is larger than Blinear(x), then the clipped portion of Rlinear(x,y) is replaced with Glinear(x,y)*Rmuc(x)/Gmuc(x). Let the scale factor
s=Rmuc(x)/Gmuc(x)
and the reference column for a given x be given by
refColumn(y)=Glinear(y)
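A sketch of this channel-ratio replacement (method B) for a single column, with illustrative names; the clipped span is an assumed input, and the unclipped means Rmuc and Gmuc are computed from the rows above and below it:

```python
import numpy as np

def repair_clipped_column(R, G, clip_start, clip_end):
    """Replace the clipped rows of an Rlinear column with the corresponding
    Glinear rows scaled by the ratio of unclipped means, s = Rmuc/Gmuc.
    R, G are 1-D arrays for one column; clip_start:clip_end marks clipping."""
    unclipped = np.r_[0:clip_start, clip_end:len(R)]   # rows above and below
    s = R[unclipped].mean() / G[unclipped].mean()      # Rmuc / Gmuc
    out = R.copy()
    out[clip_start:clip_end] = s * G[clip_start:clip_end]
    return out
```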
For the first embodiment, the same strategy of replacing a clipped signal with a ratio scaled nearby reference signal is applied. However, instead of referencing a different unclipped channel, the column segments adjacent (that is, adjacent along the secondary y axis) to the top 94 and bottom 84 of 82 in
An example method of matching skirts is as follows. As shown in the block diagram of
A) the optical point spread function guarantees non-clipped samples of the digital image at the boundaries of the clipped portion of the image (i.e. 94 and 84 are above 0 and not clipped) and
B) the intensity profile across the spectrum naturally includes some unclipped portion (i.e. 78 is above 0 and not clipped).
Using the same example of
The respective centroids are used for registration between respective reference unclipped and clipped columns. The result is shown in
s = (refColSegs^T * refColSegs)^-1 * refColSegs^T * RcolSegs
where refColSegs and RcolSegs are both N×1 column vectors.
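Since refColSegs and RcolSegs are column vectors, the expression reduces to a scalar least-squares fit, which can be sketched as:

```python
import numpy as np

def ls_scale(ref_col_segs, r_col_segs):
    """Least-squares scalar s minimizing ||s*refColSegs - RcolSegs||,
    i.e. s = (refColSegs^T refColSegs)^-1 refColSegs^T RcolSegs."""
    ref = np.asarray(ref_col_segs, dtype=float)
    r = np.asarray(r_col_segs, dtype=float)
    return float(ref @ r) / float(ref @ ref)   # normal equation for one unknown
```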
Shown in
Now applying this scale, s, also to the original portion of the reference 78, that is the portion corresponding to the clipped portion of Rlinear 82, we obtain a patch for and an estimate of the portion in Rlinear that was clipped. The result 160 is shown in
Next a single value mean for each column is calculated for each channel. The mean value increases resolution of the relative magnitudes and reduces noise. Referring again to
Next, in operation 50 shown in
The tristimulus set {Rm(x),Gm(x),Bm(x)} is then converted to magnitude, saturation and wavelength. The tristimulus set is converted to wavelength in steps of A) converting to coordinates in a color plane, B) projecting the color plane coordinates to the coordinates of purest form in the color plane, and C) selecting the wavelength whose coordinates come closest to those of the projection in step B. One embodiment uses the common pseudo-physiological CIE 1931 xy color plane.
So as not to confuse the spectrogram primary axis x, with the x of the color plane coordinate system, the remaining text of the invention details will substitute the spectrogram primary axis index variable x with n, as in {Rm(n), Gm(n), Bm(n)}. Thus the corresponding CIE1931 {x,y} values are {x(n),y(n)}.
The conversion of {Rm,Gm,Bm} to magnitude, saturation and wavelength is performed as follows. First, each of the {Rm,Gm,Bm} values along the centroid curve is converted to CIE1931 {x,y} values using the respective colorimetry conversion operation 52 of
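One such colorimetry conversion, assuming sRGB primaries with a D65 white point (the standard linear-RGB-to-XYZ matrix for sRGB), can be sketched as:

```python
import numpy as np

# Standard sRGB linear RGB -> CIE 1931 XYZ matrix (D65 white point)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_linear_to_xy(r, g, b):
    """CIE 1931 {x, y} chromaticity for a linear-light sRGB triplet."""
    X, Y, Z = M @ np.array([r, g, b], dtype=float)
    s = X + Y + Z
    return (X / s, Y / s)
```

For equal linear channels (white), the result lands on the sRGB D65 white point, {0.3127, 0.3290}.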
As depicted by the marked up plots of
The slope of the projected line 220 of
slope(n)=(y(n)−yw)/(x(n)−xw)
where {xw,yw} are the CIE 1931 coordinates for the reference white point for the camera colorimetry. In the case of sRGB, {xw,yw}={0.3127, 0.3290}.
The corresponding angle with the horizontal (x) axis of
Then angle(n) is matched to the angle in a table. The table has a column each for angles, x coordinates, y coordinates and wavelength. The angles are calculated from the arc tangent, atan, of the slope of the line between {xw, yw} and the respective {x,y} coordinates of pure monochromatic light of the given wavelength of each row of the table. The CIE 1931 {x,y} coordinate and wavelength data for the pure monochromatic light curve is given by Table 1 of section 3.3.1 of Gunter Wyszecki, W. S. Stiles, “Color Science: Concepts and Methods, Quantitative Data and Formulas, 2nd Edition,” 1982, John Wiley & Sons, NY, that is hereby incorporated by reference herein. The table with precalculated angles from the CIE 1931 {x,y} coordinates is used for expediency for converting angle to wavelength.
Thus, each {Rm,Gm,Bm} is converted to CIE 1931 {x,y} and projected to pure light 226. The corresponding wavelength, lambda, given by the aforementioned reference table is used. In an alternative embodiment, linear interpolation between corresponding angles in the table may be used to determine wavelength at finer resolution.
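A sketch of the angle-based table lookup, using a few illustrative rows of the CIE 1931 spectral locus (the full table is in the Wyszecki & Stiles reference cited above); angle wraparound near ±pi is ignored in this sketch:

```python
import math

# Excerpt of CIE 1931 {x, y} chromaticities of monochromatic light
# (wavelength in nm, x, y); the full table has many more rows.
SPECTRAL_LOCUS = [
    (480.0, 0.0913, 0.1327),
    (520.0, 0.0743, 0.8338),
    (560.0, 0.3731, 0.6245),
    (600.0, 0.6270, 0.3725),
]
XW, YW = 0.3127, 0.3290   # sRGB D65 reference white point

def angle_about_white(x, y):
    """Hue angle of {x, y} about the white point, via atan2."""
    return math.atan2(y - YW, x - XW)

def wavelength_from_xy(x, y):
    """Pick the table row whose hue angle about white is nearest."""
    a = angle_about_white(x, y)
    return min(SPECTRAL_LOCUS,
               key=lambda row: abs(angle_about_white(row[1], row[2]) - a))[0]
```

Any point on the line from the white point toward a locus entry maps to that entry's wavelength, which is the projection described in steps B and C above.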
Again referring to
In one embodiment, the purity estimation values are used to establish a nominal mapping between the spectrogram primary axis n and the remaining wavelengths. In typical spectrogram designs, there may be a non-linear relationship between spatial offset and wavelength. Cosine correction is often included to compensate. For the case where a direct image capture of the spectrogram is taken, such compensation may need to be performed through image processing. The lambda values with the highest respective purity estimation values are used to establish reference (control) points for registering the corresponding portions of the cosine uncorrected spectrogram, and then the remaining wavelengths follow cosine correction established using known methods. Typically the best of the purest wavelengths to use for this purpose are first near yellow, where the red and green channels are equal, and second near cyan, where green and blue are equal. These two wavelength points in the spectrum tend to be especially useful for this purpose because A) the points where red and green are equal tend to be at mid-range sensitivities for two channels, where they are less likely to suffer from low signal-to-noise ratio, or from clipping or other high level related distortion, and B) the human vision system is particularly sensitive to wavelength differences near yellow and cyan, where the corresponding cones have high derivatives of sensitivity with respect to wavelength, and thus for commercial success, cameras must be particularly accurate in these regions.
Yellow is better than cyan because in a typical colorimetry (including the example sRGB) yellow is fairly well saturated for the case where channels are R=G, B=0 (the yellow point on the red to green primary line within the CIE 1931 xy plane), whereas the corresponding cyan line, G=B, R=0 (cyan point on the green to blue line within the CIE 1931 xy plane) is not as saturated and the human eye is slightly less sensitive to the change in wavelength. The human eye is much less sensitive to changes in wavelengths at extremes of the visual spectra as well as in the middle of green.
Accordingly, an example of measured wavelength, measWlen(n) as 250, and cosine corrected theoretical mapping of measured wavelength, theoryWlen(n) as 252, using yellow and cyan measured points is shown in plots vs n in
The RGB values are also converted to a magnitude:
magnitude(n) = |{R(n), G(n), B(n)}| = sqrt(R(n)^2 + G(n)^2 + B(n)^2)
Next, since wavelength typically is not a linear function of n, and a spectrometer produces magnitude vs wavelength, the next step is to determine magnitude as a function of wavelength. Note that limits in optical resolution (optical blur) cause a single, essentially pure wavelength of light to be spread, and thus measured, across a span of the spectrogram primary axis n. For example, for the case where a single spectral line is alone in the spectrogram, the optical point spread function of the system will spread this wavelength of light spatially. Most applications are especially interested in wavelength and magnitude, typically with particular value given to magnitude peaks, and no particular value given to information to be gleaned from the optical point spread function. Accordingly, for each wavelength, many magnitudes may be measured across n. Among these many magnitudes for a given wavelength, magnitudes measured far from the expected location (after registration above) of the spectrogram are generally ignored since they are likely stray light or in some other way erroneous. Of the remaining magnitudes measured for the given wavelength, the maximum is taken for that wavelength. Thus the maximum magnitude within the vicinity of the theoretical location (once mapped according to the above method) is used as the measured magnitude for a given wavelength(n).
If (measWlen(n1)==measWlen(n2)) and (|n1−n2|<ndiffMax)
then mag(measWlen(n1))=max(magnitude(n1),magnitude(n2))
where measWlen is the measured wavelength and ndiffMax is the maximum allowed separation along the primary axis n.
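The per-wavelength maximum described above might be sketched as follows; the ndiffMax default is an illustrative assumption:

```python
def magnitude_vs_wavelength(meas_wlen, magnitude, ndiff_max=20):
    """For each measured wavelength, keep the maximum magnitude among the
    positions n mapping to it, ignoring positions farther than ndiff_max
    from the first occurrence. Returns {wavelength: magnitude}."""
    out = {}
    anchor = {}                              # first n seen for each wavelength
    for n, (w, m) in enumerate(zip(meas_wlen, magnitude)):
        if w not in out:
            out[w], anchor[w] = m, n
        elif abs(n - anchor[w]) < ndiff_max:
            out[w] = max(out[w], m)          # max within the allowed span
    return out
```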
For spectral lines this is a preferred embodiment. For broadband spectra, the measured magnitude vs. theoretical wavelength is a preferred embodiment. In the preferred embodiments, the purity estimate values are used to cross-fade between these two methods of determining magnitude for a given wavelength.
Embodiments of the invention may be used to make devices such as: A) a smart phone visible spectrometer, B) a smart phone Raman spectrometer, and C) an infra-red (IR) spectrometer made from a commercially available electronic camera with altered optical filters. In one embodiment, a Raman spectrometer includes a smart phone, a means of attaching and aligning the smart phone to the spectrogram housing such as a bracket or holder, a Raman excitation laser and a photo-detector trigger for the laser. The laser is turned on when the photo-detector senses the smart phone flash. In these examples, the imaging device and spectrometer may be either mounted or not mounted.
Embodiments of these may have spectrometer stimulus control via a camera flash as shown in
Although the flash 312, flash detector 334, and light source 332 are illustrated in
In some embodiments the energy source 332 is a laser and the spectrometer 330 is a Raman spectrometer. In some embodiments the energy source 332 is a broad-band infra-red source and the spectrometer 330 is an infra-red (IR) spectrometer. In some embodiments the energy source 332 is a broad-band ultra-violet source and the spectrometer 330 is an ultra-violet (UV) spectrometer. In some embodiments the energy source 332 is a broad-band IR-VIS-UV source and the spectrometer 330 is an IR-VIS-UV spectrometer. In some embodiments the energy source 332 is a broad-band terahertz source and the spectrometer 330 is a terahertz spectrometer. In some embodiments the energy source 332 is an electric arc or corona discharge source and the spectrometer 330 is an electric arc or corona discharge spectrometer, respectively.
Although specific embodiments of the invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
20060066265 | Plotz | Mar 2006 | A1 |
20140002481 | Broughton | Jan 2014 | A1 |
Entry |
---|
Svilainis, L., and V. Dumbrava. “Light Emitting Diode Color Estimation: the Initial Study.” Measurements 41.1 (2008). |
Kong Man Seng, Trace gas measurement using a web cam spectrometer, Bachelor of Science (Hons) in Applied Physics 2010-2011, 40 pages, Department of Physics and Materials Science, City University of Hong Kong. |