The present disclosure relates to image data processing, and to processing which reduces aliasing caused by the undersampling of images.
In the discussion that follows, reference is made to certain structures and/or methods. However, the following references should not be construed as an admission that these structures and/or methods constitute prior art. Applicant expressly reserves the right to demonstrate that such structures and/or methods do not qualify as prior art.
A focal plane array (FPA) is a device that includes pixel elements, also referred to herein as detector elements, which can be arranged in an array at the focal plane of a lens. The pixel elements operate to detect light energy, or photons, by generating, for instance, an electrical charge, a voltage or a resistance in response to detecting the light energy. This response of the pixel elements can then be used, for instance, to generate a resulting image of a scene that emitted the light energy. Different types of pixel elements exist, including, for example, pixel elements that are sensitive to, and respond differently to, different wavelengths/wavebands and/or different polarizations of light. Some FPAs include only one type of pixel element arranged in the array, while other FPAs exist that intersperse different types of pixel elements in the array.
For example, a single FPA device may include pixel elements that are sensitive to different wavelengths and/or to different polarizations of light. To utilize such arrays without grossly undersampling the resulting image detected by the pixel elements that are sensitive to one particular wavelength (or polarization), causing aliasing (e.g., distortion) in the resulting image, can require giving up the fundamental resolution of an individual detector element's dimensions by broadening the point spread function (PSF). The PSF of a FPA or other imaging system represents the response of the system to a point source. The width of the PSF can be a factor limiting the spatial resolution of the system, with resolution quality varying inversely with the dimensions of the PSF. For instance, the PSF can be broadened so that it encompasses not only a single pixel element, but also the space between like types of pixel elements (that is, the space between like-wavelength sensitive or like-polarization sensitive pixel elements), where the spaces between same-sense pixel elements are occupied by pixel elements of other wavelength/polarization sensitivities. Enlarging the PSF, however, not only degrades resolution of the resulting image, but also reduces energy on any given pixel element, thereby reducing the signal-to-noise ratio (SNR) for the array.
An exemplary method for processing undersampled image data includes: aligning an undersampled frame comprising image data to a reference frame; accumulating pixel values for pixel locations in the aligned undersampled frame; repeating the aligning and the accumulating for a plurality of undersampled frames; assigning the pixel values accumulated for the pixel locations in the aligned undersampled frames to closest corresponding pixel locations in an upsampled reference frame; and populating the upsampled frame with a combination of the assigned pixel values to produce a resulting frame of image data.
Another exemplary method for processing undersampled image data includes: aligning an undersampled frame comprising image data to a reference frame; assigning pixel values for pixel locations in the aligned undersampled frame to closest corresponding pixel locations in an upsampled reference frame; combining, for each upsampled pixel location, the pixel value or values assigned to the upsampled pixel location with a previously combined pixel value for the upsampled pixel location and incrementing a count of the number of pixel values assigned to the upsampled pixel location; repeating the aligning, the assigning, and the combining for a plurality of undersampled frames; and normalizing, for each upsampled pixel location, the combined pixel value by the count of the number of pixel values assigned to the upsampled pixel location to produce a resulting frame of image data.
An exemplary system for processing undersampled image data includes an image capture device and a processing device configured to process a plurality of undersampled frames comprising image data captured by the image capture device. The processing device is configured to process the undersampled frames by aligning each undersampled frame to a reference frame, accumulating pixel values for pixel locations in the aligned undersampled frames, assigning the pixel values accumulated for the pixel locations in the aligned undersampled frames to closest corresponding pixel locations in an upsampled reference frame, and populating the upsampled frame with a combination of the assigned pixel values to produce a resulting frame of image data.
Another exemplary system for processing undersampled image data includes an image capture device and a processing device. The processing device is configured to align an undersampled frame comprising image data captured by the image capture device to a reference frame, assign pixel values for pixel locations in the aligned undersampled frame to closest corresponding pixel locations in an upsampled reference frame, and combine, for each upsampled pixel location, the pixel value or values assigned to the upsampled pixel location with a previously combined pixel value for the upsampled pixel location and increment a count of the number of pixel values assigned to the upsampled pixel location. The processing device is also configured to repeat the aligning, the assigning, and the combining for a plurality of undersampled frames and, for each upsampled pixel location, normalize the combined pixel value by the count of the number of pixel values assigned to the upsampled pixel location to produce a resulting frame of image data.
Other objects and advantages of the invention will become apparent to those skilled in the relevant art(s) upon reading the following detailed description of preferred embodiments, in conjunction with the accompanying drawings, in which like reference numerals have been used to designate like elements, and in which:
Techniques are described herein for processing image data captured by an imaging system, such as, but not limited to, a focal plane array (FPA) having different types of detector elements interspersed in the array. For example, a frame of image data captured by all of the types of the detector elements interspersed in the FPA can be effectively separated into several image frames, each separated image frame including only the image data captured by one of the types of the detector elements. Because like-type, or same-sense, detector elements can be spaced widely apart in the FPA, separating the image frames according to like-type detector elements produces undersampled image frames that are susceptible to the effects of aliasing. In another example, an FPA having like-type detectors that are relatively small and widely-spaced apart also produces undersampled image frames that are susceptible to the effects of aliasing.
Different techniques are described herein for processing undersampled image frames. These techniques can be applied irrespective of how the undersampled image frames are obtained. In particular, techniques are described for processing the pixels of undersampled image frames to compute image data values for locations in an upsampled frame. As used herein, the term “upsampled” refers to pixel locations that are spaced at different intervals than the spacing of the undersampled frames. Typically, the pixels of the upsampled frame are spaced sufficiently close to avoid undersampling in the Nyquist sense, but the upsampled frame need not be limited to such spacing, and other spacings are possible. In embodiments, the upsampled frame is referred to as a “resampled” or “oversampled” frame. A detailed description of an accumulation technique for processing undersampled frames is presented herein, in accordance with one or more embodiments of the present disclosure. The explanation will be by way of exemplary embodiments to which the present invention is not limited.
In one technique for processing undersampled images, interpolation can be performed on the pixels of a given undersampled frame to compute image data values for locations in an upsampled frame. The upsampled frames, thus populated with values interpolated from the undersampled frames, can then be combined, for example, by averaging the frames, to produce a resulting image frame. Such averaging of the frames can reduce the effects of aliasing in the original undersampled image, and can also improve the SNR of the resulting image. Having reduced the aliasing effects (which occur mostly in the higher-frequency regions), image sharpening filters can also be used to enhance edges, somewhat improving the resolution of the resulting image.
The image capture device can experience a two-dimensional, frame-to-frame, angular dither. The dithering in two dimensions can be either deterministic or random. When the dither is not known, shift estimation processing can be performed, frame-to-frame, to estimate the horizontal and vertical dither shifts so that all frames can be aligned (or registered) to one another before frame integration. Thus, in step 110, integer and fractional shifts in pixel locations between the undersampled frame and a reference frame are determined. The reference frame for a given type of detector element can include, but is not limited to, the first undersampled frame captured during the process 100, a combination of the first several undersampled frames captured during the process 100, an upsampled frame, etc. To determine the shifts, correlation of the undersampled frame and the reference frame can be performed, among other approaches, where the result of the correlation (e.g., a shift vector) describes the two-dimensional shift of the pixel locations in the undersampled frame with respect to the pixel locations in the reference frame.
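The correlation-based shift estimation described above can be sketched as follows. This is a minimal, hypothetical illustration that recovers only whole-pixel shifts from the peak of an FFT-based circular cross-correlation; the function name is an assumption, and the fractional-shift estimation and other refinements discussed above are omitted for brevity.

```python
import numpy as np

def estimate_shift(frame, reference):
    """Estimate the whole-pixel (dy, dx) shift of `frame` relative to
    `reference` from the peak of their circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))).real
    peak_r, peak_c = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around and represent negative shifts.
    h, w = frame.shape
    dy = peak_r if peak_r <= h // 2 else peak_r - h
    dx = peak_c if peak_c <= w // 2 else peak_c - w
    return dy, dx
```

The returned shift vector describes how the frame's pixel locations are displaced with respect to the reference frame's pixel locations, and can then be used to register the frame.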
Then, in step 115, the undersampled image frame is aligned to the reference frame based on the pixel shifts determined in step 110. The alignment performed in step 115 is also referred to herein as frame “registration.” U.S. Pat. No. 7,103,235, issued Sep. 5, 2006, which is incorporated by reference herein in its entirety, provides a detailed description of techniques that can be employed to perform shift estimation and frame registration in accordance with steps 110 and 115. To produce a higher resolution resulting image, pixel values in the aligned/registered undersampled frame can be upsampled to populate pixel locations in an upsampled reference frame. The upsampled reference frame might include, for example, four times as many pixel locations as the undersampled frame. Thus, in step 120, upsampling is performed by interpolating (e.g., bilinear interpolation) the pixels of the aligned undersampled frame to compute image data values for the pixel locations in the upsampled reference frame that do not already exist in the aligned undersampled frame.
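The bilinear-interpolation upsampling of step 120 can be illustrated with a minimal sketch. The function name and the edge handling are assumptions; a full implementation would also account for the registration shifts and for pixel locations already present in the aligned undersampled frame.

```python
import numpy as np

def upsample_bilinear(frame, up_factor=4):
    """Populate an upsampled grid by bilinear interpolation of an
    aligned undersampled frame."""
    h, w = frame.shape
    # Coordinates of the upsampled grid expressed in undersampled pixel units.
    ys = np.linspace(0, h - 1, h * up_factor)
    xs = np.linspace(0, w - 1, w * up_factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)  # clamp at the bottom/right edges
    x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]  # fractional offsets within each cell
    fx = (xs - x0)[None, :]
    top = frame[np.ix_(y0, x0)] * (1 - fx) + frame[np.ix_(y0, x1)] * fx
    bot = frame[np.ix_(y1, x0)] * (1 - fx) + frame[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy
```

With `up_factor=2`, a 2x2 frame yields a 4x4 upsampled frame whose interior values are smeared combinations of the four surrounding detector samples, which is the resolution limitation of the interpolation technique noted below.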
In step 125, the populated upsampled frame is combined, or integrated, with previously integrated upsampled frames for the same type of detector. The integration can include, for example, averaging the upsampled frames to produce a resulting image frame for the same type of detector. Integration of multiple frames can result in an improvement in SNR that is proportional to the square root of the number of frames integrated.
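The square-root SNR improvement from frame integration can be demonstrated numerically. In this hypothetical sketch, 100 frames of a constant signal corrupted by unit-variance noise are averaged; the noise standard deviation falls by roughly √100 = 10, so the SNR improves by the same factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100
# Each simulated frame: a constant signal of 10 plus unit-variance noise.
frames = 10.0 + rng.standard_normal((n_frames, 64, 64))
single_noise = frames[0].std()        # close to 1
integrated = frames.mean(axis=0)      # integrate (average) the frames
integrated_noise = integrated.std()   # close to 1 / sqrt(100)
# Noise drops by roughly sqrt(n_frames), so SNR improves by the same factor.
print(single_noise / integrated_noise)
```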
Then, in step 130, the integrated frame for a given type of detector element can be combined with the integrated frames generated for the other types of detector elements in the FPA to produce a composite image frame. For example, if the FPA includes different types of wavelength-sensitive detector elements interspersed in the array, such as red, blue and green wavelength-sensitive detector types, then the integrated frame generated for the red detector type can be combined with the integrated frames generated for the blue and green detector types to produce the composite image frame. Similarly, in another example, if the FPA includes different types of polarization-sensitive detector elements interspersed in the array, such as detector elements having −45 degree, horizontal, vertical and +45 degree polarization sensitivities, then the integrated frame generated for the −45 degree detector type can be combined with the integrated frames generated for the horizontal, vertical and +45 degree detector types to produce the composite image.
Because most of the upsampled locations are populated by interpolation across multiple-pixel separations (that is, with smeared values from combinations of detector elements), the resolution of the image generated by the interpolation process 100 can be limited by the PSF, detector size, and detector spacing. That is, for the interpolation technique, spot size is typically matched to the spacing of like detector elements.
As described herein in conjunction with
Another technique for processing undersampled images is described herein that can efficiently use FPAs with widely spaced detector elements in a manner that can reduce aliasing produced by undersampling, while, at the same time, can maintain the inherent resolution of individual detector element dimensions. In accordance with this technique, the pixel samples of dithered undersampled frames can be accumulated and assigned to nearest pixel locations in an upsampled reference frame. In this manner, most, if not all, of the upsampled locations can be populated by values from single detector elements, thereby avoiding interpolating and populating the upsampled locations with smeared values from combinations of detector elements. Accordingly, the inherent resolution of individual detector dimensions can be maintained.
In one embodiment, dither can be used to obtain pixel samples at locations in the undersampled frames that, after registration, are close to all or most of the upsampled pixel locations. In order to populate all or most of the upsampled pixel locations using this technique, random and/or deterministic relative motion between an image capture device and the scene being imaged and/or angular dither of the image capture device are needed so that the closest upsampled pixels to the undersampled detector pixels are not always the same. The relative positions of the aligned undersampled pixels to the upsampled reference pixels resulting from the motion/dither allows contributions to be applied to most, if not all, of the upsampled reference pixels after several undersampled frames have been processed.
For example, the process 200 can be implemented in a variety of image capture systems, including staring systems (e.g., the array captures an image without scanning), step-stare systems and slowly scanning systems, among others, where dither can be supplied by platform motion, gimbal motion, and/or mirror dither motion of these systems. Such motion can be intentional or incidental, and may be deterministic or random. For example, in a step-stare system, the dither may be supplied by back-scanning less than the amount needed to completely stabilize the image on the detector array while scanning the gimbal.
As described herein, if the dither is not known, processing can be performed frame-to-frame to estimate the dither shifts in two dimensions in order to register the captured image frames to one another. Thus, in step 210, integer and fractional shifts in pixel locations between the undersampled frame and a reference frame are determined. As in the process 100, the reference frame for a given type of detector element in the process 200 can include, but is not limited to, the first undersampled frame captured during the process 200, a combination of the first several undersampled frames captured during the process 200, an upsampled frame, etc. Further, as described herein, the undersampled frame and the reference frame can be correlated, among other approaches, the result of which describes the two-dimensional shift of the pixel locations in the undersampled frame with respect to the pixel locations in the reference frame.
In step 215, the undersampled image frame is aligned to the reference frame based on the pixel shifts determined in step 210. Details of the alignment/registration performed in step 215 are described herein with respect to corresponding step 115 of the process 100 and are not repeated here. Registration of frames can be performed in software so that registration is not a function of mechanical vibration or temperature. Additionally, registration of the multiple polarization/wavelength detector sensitivities can be known and consistent because the physical arrangement of the detector elements in the FPA is known. Thus, in one embodiment, the pixel shifts determined for each type of detector element can be determined and combined (e.g., averaged), and the undersampled image frame for a given type of detector element can be aligned using the average shift determined based on all of the types of detector elements, as opposed to the shift determined based on one given type of detector element.
In step 220, pixel values for pixel locations in the aligned undersampled frame are accumulated. In step 225, it is determined whether data from a desired number of undersampled frames has been accumulated. If not, undersampled frames continue to be processed in accordance with steps 205-220 until data from the desired number of undersampled frames has been accumulated. In an embodiment, the accumulated data can be stored, for example, in a table in memory.
When data from the desired number of undersampled frames has been accumulated, upsampling is performed in step 230 by assigning the pixel values accumulated for the pixel locations of the aligned undersampled frames processed in steps 205-220 to closest corresponding pixel locations in an upsampled reference frame. That is, the pixel values from the aggregate of the pixel values accumulated from all of the processed undersampled frames can be assigned to closest pixel locations in the upsampled image. As described herein, the upsampled reference frame might include, for example, four times as many pixel locations as the undersampled frame, but the dithering and subsequent re-aligning of a frame can cause that frame's pixels to fall in various locations in between the original undersampled pixel locations, providing samples for most, if not all, of the pixel locations in the upsampled image.
In an embodiment, in step 230, each of the accumulated pixel values (e.g., pixel values from more than one undersampled frame) are assigned to an upsampled pixel location. For each pixel value from a registered, undersampled frame, the assigned location can be the upsampled reference location that is closest to the undersampled pixel location after registration shifts. Then, in step 235, all values assigned to the same location are combined (e.g., averaged) and the combined value is used to populate that location. In the process 200, to obtain image samples for locations of the upsampled image, the data from an entire set of undersampled frames can be collected. This aggregate can contain samples at locations which, after dithering and re-aligning, occur at locations closest to locations of most, if not all, of the upsampled pixel locations to be populated.
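The accumulate-assign-average flow of steps 220-235 can be sketched as follows. This is a minimal illustration under assumed conventions: shifts are given as known (dy, dx) registration offsets in undersampled pixel units, the function name is hypothetical, and samples falling outside the upsampled grid are simply discarded.

```python
import numpy as np

def accumulate_upsample(frames, shifts, up_factor=4):
    """Assign each registered undersampled pixel value to the closest
    location in an upsampled reference grid; values landing on the same
    location are averaged."""
    h, w = frames[0].shape
    total = np.zeros((h * up_factor, w * up_factor))
    count = np.zeros_like(total)
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    for frame, (dy, dx) in zip(frames, shifts):
        # Registered positions of this frame's samples, in upsampled units,
        # rounded to the nearest upsampled pixel location.
        r = np.rint((rows + dy) * up_factor).astype(int)
        c = np.rint((cols + dx) * up_factor).astype(int)
        ok = (r >= 0) & (r < total.shape[0]) & (c >= 0) & (c < total.shape[1])
        np.add.at(total, (r[ok], c[ok]), frame[ok])
        np.add.at(count, (r[ok], c[ok]), 1)
    # Average all values assigned to the same upsampled location.
    mean = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    return mean, count
```

Because each upsampled location receives values from single detector elements (rather than interpolated combinations), this sketch reflects how the accumulation technique can preserve the inherent resolution of individual detector dimensions. The same sums and counts can equivalently be maintained frame-by-frame, with the normalization by count deferred until all frames are processed.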
In an embodiment, in step 235, those locations in the upsampled frame for which no samples have been accumulated can be populated by copying or interpolating the nearest populated neighboring pixel values. Such interpolation can include, for example, bilinear or a simple nearest-neighbor interpolation. Because few locations in the upsampled frame are likely to be unpopulated by undersampled image data, only a small degree of resolution is likely to be affected by performing interpolation to fill in values for the unpopulated locations.
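Filling the few unpopulated upsampled locations from their nearest populated neighbors might look like the following hypothetical sketch (the function name is an assumption, and the brute-force nearest-neighbor search is adequate only because, as noted above, few locations are expected to be unpopulated):

```python
import numpy as np

def fill_holes_nearest(image, count):
    """Populate locations that received no samples (count == 0) by copying
    the value of the nearest populated location."""
    filled = image.copy()
    pop_r, pop_c = np.nonzero(count)  # locations that have samples
    for r, c in zip(*np.nonzero(count == 0)):
        # Copy from the populated location at the smallest squared distance.
        d2 = (pop_r - r) ** 2 + (pop_c - c) ** 2
        j = np.argmin(d2)
        filled[r, c] = image[pop_r[j], pop_c[j]]
    return filled
```

Bilinear interpolation over the populated neighbors could be substituted for the simple copy shown here.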
The image frame resulting from step 235 is referred to herein as an “integrated frame” because it includes a combination of data collected from a number of undersampled frames. As described herein, the integrated frame can experience an improvement in SNR that is proportional to the square root of the number of frames integrated. In an embodiment, image sharpening filters can be applied to enhance edges of the integrated image, since aliasing noise, which can be exacerbated by image sharpening filters, can also be reduced as a result of the integration process. In one embodiment, the number of frames processed and integrated can be based on whether the scene being imaged is undergoing motion. For example, if portions of the scene being imaged are undergoing motion relative to other scene components, fewer frames may be processed and integrated to avoid blurring those portions in the integrated frame.
As described herein, because the physical arrangement of the pixels in the imaging device (e.g., FPA) is known, in step 240, the integrated frame for the given type of detector can be combined with the integrated frames generated for the other types of detector elements in the imaging device to produce a composite image. For example, such composite image could be displayed on a display device for a human viewer, or could be processed by a computer application, such as an automatic target recognition application or a target tracking application, among other applications that can process data captured from multiple waveband/polarization detectors.
In another embodiment of process 200, illustrated in
By integrating the aggregate data of dithered frames of data, embodiments of the process 200 can overcome both the resolution degradation and the SNR reduction experienced as a result of the interpolation processing technique 100. Moreover, for embodiments of the process 200, resolution of the resulting image can be, in some instances, limited by the PSF and detector size, but not by the detector spacing. For example, spot size can be matched to the detector size for optimum resolution and SNR. Thus, in embodiments of the process 200, resolution on the order of the resolution of the detector/PSF combination can be achieved, rather than being degraded by interpolation across multiple-pixel separations, as in the process 100.
The processing techniques described herein in accordance with embodiments of the present disclosure can have many suitable applications including, but not limited to, electro-optical (EO) targeting systems, particularly those EO systems that utilize polarization and/or waveband differentiation imaging; high-definition television (e.g., improved resolution using a reduced number of detection elements); and still and/or video cameras (where processing can be traded for sensor costs and/or increased performance, especially where multicolor, multi-waveband or multiple-polarization information is needed). In these systems, a FPA can be divided so that a basic repeating pixel pattern includes pixels of varying polarizations and/or wavebands.
System 300 also includes a processing device 310. In accordance with an aspect of the present disclosure, the processing device 310 can be implemented in conjunction with a computer-based system, including hardware, software, firmware, combinations thereof. In an embodiment, the processing device 310 can be configured to implement the steps of the embodiments of the exemplary accumulation process 200, illustrated in
The processing device 310 can be configured to align an undersampled frame, which includes image data captured by a given one of the plurality of different types of detector elements of the image capture device 305, to a reference frame. For example, in an embodiment, the processing device can be configured to determine integer and fractional pixel shifts between the undersampled frame and the reference frame. As described herein, the reference frame can include, but is not limited to, the first undersampled frame, a combination of the first several undersampled frames for the given type of detector element, an upsampled frame, etc. Accordingly, in one embodiment, the processing device 310 can be configured to align the undersampled frame to the reference frame based on the pixel shifts. In an embodiment, the processing device 310 can be configured to pre-process the undersampled image prior to aligning the undersampled frame with the reference frame. As described herein, such pre-processing can include, but is not limited to, non-uniformity correction, dead-pixel replacement and pixel calibration.
The processing device 310 can also be configured to accumulate pixel values for pixel locations in the undersampled frame and populate pixel locations in an upsampled reference frame by combining (e.g., averaging) the accumulated pixel values from the undersampled pixel values whose registered locations are closest to a given upsampled pixel location. In embodiments, the resulting integrated image frame can experience an improvement in SNR that is proportional to the square root of the number of frames integrated.
In an embodiment, the undersampled frame includes dithered image data. As described herein, the dithering and subsequent re-aligning of a frame can cause that frame's pixels to fall in various locations in between the original undersampled pixel locations, providing samples for most, if not all, of the pixel locations in the upsampled frame. For example, as described herein, the image capture device 305 can experience a two-dimensional, frame-to-frame, angular dither. Such dither can be supplied by, among other techniques, platform motion, gimbal motion, and/or mirror dither motion of the image capture device 305 and the motion can be intentional or incidental, and may be deterministic or random.
In an embodiment, the processing device 310 can be configured to accumulate all of the pixel values for a number of undersampled frames before assigning and integrating the accumulated values to upsampled pixel locations, as illustrated in
In another embodiment, the processing device 310 can be configured to assign and integrate the undersampled pixel values to upsampled pixel locations on a frame-by-frame basis, as illustrated in
In an embodiment, the processing device 310 can be configured to process undersampled frames for each of the different types of detector elements in parallel to produce resulting image frames for each of the different types of detector elements of the image capture device 305. Further, the processing device 310 can be configured to combine the integrated frame for one type of detector element with the integrated frames for the other types of detector elements to produce a composite image. For example, the integrated frames might be combined according to color (such as for color television), pseudo-color (e.g., based on polarizations), multi-band features (e.g., for automatic target recognition), polarization features, etc. Such a composite image can be displayed by a display device 315 for a human viewer and/or can be further processed by computer algorithms for target tracking, target recognition, and the like.
According to further embodiments of the present disclosure, an FPA can be divided to include basic repeating patterns of pixel elements of varying wavelength/waveband sensitivities (e.g., pixel elements sensitive to red, blue, or green wavelengths, pixel elements sensitive to short, mid, or long wavebands, etc.) and/or polarization sensitivities.
In other embodiments, a combination of polarizations and wavebands can be used. For example, the unpolarized elements of
Moreover, according to embodiments of the present disclosure, the repeating pattern for a given FPA can be chosen to match a type of motion to be sampled or imaged, thereby optimizing image processing. For example, if the motion relative to the detector elements in the array is substantially linear and horizontal, a pattern such as the striped 4-polarization pattern illustrated in
Optical flow describes detector-to-scene relative motion, such as the apparent motion of portions of the scene relative to the distance of the detector to those portions (e.g., portions of the scene that are closer to the detector appear to be moving faster than more distant portions).
Likewise,
An exemplary simulation was implemented to compare performance of the interpolation and accumulation processing techniques described herein. The exemplary repeating patterns illustrated in
To simulate the two processing techniques described herein, that is, the first processing technique 100 illustrated in
Comparative results of the simulated processing are illustrated in
TABLE 1 summarizes the results of the simulated processing illustrated in
All numbers expressing quantities or parameters used herein are to be understood as being modified in all instances by the term “about.” Notwithstanding that the numerical ranges and parameters set forth herein are approximations, the numerical values set forth are indicated as precisely as possible. For example, any numerical value inherently contains certain errors necessarily resulting from the standard deviation reflected by inaccuracies in their respective measurement techniques.
Although the present invention has been described in connection with embodiments thereof, it will be appreciated by those skilled in the art that additions, deletions, modifications, and substitutions not specifically described may be made without departing from the spirit and scope of the invention as defined in the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 60/879,325, filed Jan. 9, 2007, the disclosure of which is herein incorporated by reference in its entirety.