1. Field of the Invention
The present invention relates to image processing, and in particular, to image processors and methods of image processing that can be employed, for example, to reduce blur.
2. Description of the Related Art
Astronomical telescopes that enable optical imaging of celestial objects such as the moon, planets, and stars can be outfitted with electronic detector arrays disposed at a focal plane of the telescope to record images of these objects. The detector array comprises a plurality of detectors that each output an electrical signal in response to illumination. The outputs from the plurality of detectors (the detectors individually being referred to as pixels) together reconstruct the image. The electrical output may be transferred electronically to memory such as RAM or to a storage device.
Images of celestial objects obtained from earth are commonly blurred as a result of atmospheric effects such as fluctuations in the refractive index of the atmosphere, which changes with time, temperature, location, and altitude. These fluctuations in refractive index alter the propagation of light in an irregular and unpredictable manner and result in image degradation. Additionally, the relatively low sensitivity of reasonably affordable detector arrays inhibits the recording of faint celestial objects of interest.
What is needed, therefore, are apparatus and methods for recording faint celestial objects and reducing image degradation resulting from atmospheric effects.
The system, method, and devices of the invention each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this invention, its more prominent features will now be discussed briefly. After considering this discussion, and particularly after reading the section entitled “Detailed Description of Certain Embodiments,” one will understand how the features of this invention provide advantages over other devices and methods.
One embodiment of the invention includes a method of forming a virtual image by processing multiple images from a telescope, the virtual image comprising an array of pixels, the method comprising capturing an image comprising an array of pixels using the telescope, the pixels in the array of pixels having associated pixel magnitudes, changing pixels of the virtual image based on the pixel magnitudes of the captured image using a drizzle algorithm, adjusting an imaging control parameter after the changing step, and repeating the capturing and changing steps after adjusting the imaging control parameter. In one aspect of the first embodiment, the imaging control parameter is adjusted based on information from the captured image. In a second aspect, the imaging control parameter is adjusted based on information from the virtual image. In a third aspect, the pixels in the captured image have a larger size than the pixels in the virtual image. In a fourth aspect, changing pixels of the virtual image using the drizzle algorithm comprises associating the array of pixels of the captured image with an array of regions of smaller size, respective pixel magnitudes for the array of pixels of the captured image being associated with corresponding regions in said array of regions, and distributing portions from the pixel magnitudes into the pixels in the virtual image, the distribution being based on overlap of the regions with the pixels of the virtual image. In a fifth aspect, the imaging control parameter comprises gain, DC offset, exposure time, focus, or position. In a sixth aspect, the method further comprises repositioning the telescope so that the captured image overlaps a portion of the virtual image that was not included in previously captured images. In a seventh aspect, repositioning the telescope comprises positioning the telescope so that the captured image overlaps a portion of the virtual image that was included in previously captured images. In an eighth aspect, the method further comprises repositioning the telescope so that the captured image is translated an amount comprising more than twice the pitch of the pixels for the captured images. In a ninth aspect, the telescope is translated an amount between about one-tenth (1/10) of a pixel and three-quarters (¾) of a length dimension of the virtual image. In a tenth aspect of the first embodiment, the method further comprises evaluating the quality of the captured image before including pixel magnitudes from the captured image in the virtual image. In an eleventh aspect, evaluating the quality of the captured image comprises comparing one or more characteristics of the captured image to one or more criteria, and rejecting the image if the one or more characteristics do not meet the corresponding criteria. In a twelfth aspect, the characteristic comprises sharpness, distortion, or smearing. In a thirteenth aspect, one or more of the criteria are dynamically determined.
Another embodiment of the invention includes a telescope system for generating enhanced images, comprising a telescope, a camera comprising a detector array disposed to capture images formed by the telescope, the captured images comprising arrays of pixels with associated pixel magnitudes, and at least one processor in communication with the camera and the telescope, the processor configured to define a virtual image comprising pixels, receive a first captured image from the detector array, change pixels of the virtual image based on the pixel magnitudes of the first captured image using a drizzle algorithm, adjust an imaging control parameter after changing the pixels of the virtual image, receive a second captured image from the detector array, and change pixels of the virtual image based on the pixel magnitudes of the second captured image using a drizzle algorithm after adjusting the imaging control parameter. In one aspect of the second embodiment, the processor is further configured to reposition the telescope using information from the first captured image to determine the position of the telescope for the second captured image. In a second aspect, the processor is further configured to evaluate the captured image before including pixel magnitudes from the captured image in the virtual image.
Another embodiment includes a method of forming an enlarged virtual image by processing multiple images from a telescope, the enlarged virtual image comprising an array of pixels, the method comprising capturing a first image comprising a first array of pixels using the telescope, the pixels in the first array of pixels having respective pixel magnitudes, capturing a second image comprising a second array of pixels using the telescope, the pixels in the second array of pixels having respective pixel magnitudes, moving the telescope prior to capturing the second image to introduce a shift between the first and second captured images that is at least as large as about 1/10 of the size of the first captured image, and changing pixels of the virtual image based on the pixel magnitudes of the first and second captured images using a drizzle algorithm. In one aspect of the third embodiment, the telescope is moved such that the second captured image is shifted by at least about one-tenth (1/10) to about ten (10) times the size of a length dimension of the first captured image. In a second aspect, the method further comprises moving the telescope and capturing images a plurality of times prior to capturing the second image. In a third aspect, the telescope is moved and images are captured between 1 and 100 times after capturing the first image and prior to capturing the second image. In a fourth aspect, the first array of pixels has a pixel pitch, and the telescope is moved sufficiently to provide a shift between captured images of at least about twice the pixel pitch. In a fifth aspect, the enlarged virtual image is at least about 100 to 1000 percent as large as the first captured image. In a sixth aspect, the virtual image is changed based on the pixel magnitudes of the first captured image prior to capturing the second image.
Another embodiment includes a system for generating enhanced images, comprising a telescope including a movable positioning system, a camera comprising a detector array disposed to capture images formed by the telescope, the captured images comprising arrays of pixels with associated pixel magnitudes, and at least one processor in communication with the detector array and positioning system, the processor configured to define a virtual image comprising pixels, capture a first image, capture a second image, move the telescope prior to capturing the second image to introduce a shift between the first and second captured images that is at least as large as about 1/10 of the size of the first captured image, and change pixels of the virtual image based on the pixel magnitudes of the first and second captured image using a drizzle algorithm.
Another embodiment includes a system that produces a virtual image by processing multiple images from a telescope, the virtual image comprising an array of pixels, the system comprising means for capturing an image formed by the telescope where the image comprises an array of pixels having a pixel magnitude, means for changing pixels of the virtual image based on the pixel magnitudes of the captured image using a drizzle algorithm, and means for adjusting an imaging control parameter after changing pixels of the virtual image. The means for capturing and said means for changing are configured to repeat the capturing and changing steps after adjustment of the imaging control parameter.
Another embodiment includes a computer-readable storage medium containing a set of instructions for a computer for forming a virtual image by processing multiple images from a telescope, the virtual image comprising an array of pixels, the set of instructions comprising capturing an image comprising an array of pixels using the telescope, the pixels in the array of pixels having respective pixel magnitudes, changing pixels of the virtual image based on the pixel magnitudes of the captured image using a drizzle algorithm, adjusting an imaging control parameter after changing pixels of the virtual image, and repeating the capturing and changing steps after adjusting the imaging control parameter.
The following detailed description is directed to certain specific embodiments. However, the invention can be embodied in a multitude of different ways. Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment,” “according to one embodiment,” or “in some embodiments” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
The various methods, systems, and techniques disclosed herein can be used to form composite images. Embodiments include methods of processing multiple images from a telescope to form a virtual composite image, which represents a desired area of interest, for example, an area of the sky showing particular stars of interest. The virtual image may be larger than any one of the images that are used to form the virtual image. Numerous electronic images, each encompassing a portion of the virtual image, are captured and processed using image processing techniques, including drizzling. Information from the images (e.g., pixel magnitudes) is used to change the pixel values of the virtual image until the virtual composite image is complete. The pixel magnitude of each pixel in the virtual image may be generated from corresponding pixels in multiple captured images that depict a portion of the virtual image. After an image is captured but before it is used to change the pixel values of the virtual image, the captured image can be analyzed and rejected if its quality is poor (for example, due to lack of sharpness, distortion, and/or smearing). Results from the image analysis can also be used to change an imaging control parameter for capturing subsequent images. The telescope can be repositioned, for example, after capturing an image, either based on the analysis of a captured image or other criteria. Embodiments also include a telescope system that may comprise a telescope, a camera that captures images formed by the telescope, and a computer processor configured to receive the captured images, analyze the images, change pixels in the virtual image using a drizzle algorithm, and adjust imaging control parameters to capture subsequent images for use in forming the virtual image.
Embodiments of the telescope 10 can include any type of earth-based telescope, such as a refractor telescope or a reflecting telescope. For example, the telescope 10 can comprise a Newtonian telescope, a Catadioptric telescope, a Maksutov-Cassegrain telescope, a Schmidt-Cassegrain telescope, or a Dobsonian telescope. The size of the telescope 10 can include those telescopes typically used by all levels of users, for example, amateur astronomers, professional astronomers, institutions, and/or land-based observatories, including a 60 mm or smaller telescope, up to an 8 m or larger telescope, or a set of telescopes used in combination to form an equivalent larger telescope. In other embodiments, the telescope 10 comprises binoculars. As with the telescope embodiments described above, a plurality of images can be captured and a composite image can be formed using the drizzle process and the other processes and devices described herein.
The telescope 10 can include a camera 12 that has a detector for capturing images formed by the telescope 10. In this embodiment, the camera 12 is a CMOS camera. The CMOS camera 12 comprises a CMOS detector array preferably disposed at a focal plane or image plane of the telescope 10. The CMOS detector array comprises a two-dimensional array of optoelectronic devices or more specifically, optical detectors that convert optical power into electronic signals. The optical detectors in the two-dimensional array are referred to as pixels. An optical image formed on the image plane of the telescope 10 will be sensed by the CMOS detector array, the various optical detectors each outputting an electrical signal dependent on the amount of light incident on the respective detector pixel. In this manner, an optical image can be recorded as an electronic image. The electronic image formed from the CMOS detector array includes an array of pixels that correspond to the CMOS detector array pixels. Each pixel of the electronic image can have a pixel magnitude and an associated position. Such electronic images are often referred to as digital images, e.g., in the case where the electronic signals are digitized.
As described above, the optical detectors in CMOS detector arrays are based on CMOS (Complementary Metal Oxide Semiconductor) device technology. Electronics for handling the electrical signals output from the plurality of detectors may be incorporated with the CMOS detector array. Advantageously, CMOS detector arrays are inexpensive and thus preferred. The camera employed in conjunction with the telescope 10, however, need not be limited to CMOS detector arrays. Other optoelectronic focal plane arrays, such as for example CCD detector arrays, may be employed in certain scenarios.
The telescope 10 can be focused on a celestial body such as the moon, planets, stars, comets, brighter deep space objects, or other objects in space or alternatively on a terrestrial object, thereby producing an optical image on the focal or image plane. With the CMOS camera 12, the optical image can be converted into an electronic image.
To reduce blurring, optical images are captured by the CMOS focal plane array, and the resultant electronic images are transferred to an image processor. The image processor performs processing that yields an improved image. A block diagram of an imaging system 14 comprising a CMOS detector array 16 and an image processor 18 is depicted in
One preferred embodiment of the imaging system 14 is illustrated by the block diagram shown in
The imaging system 14 shown in
The computer 22 shown in
The imaging system 14 shown in
During operation, the user may also define a desired area of interest for generating a composite image. The defined area of interest corresponds to, and is referred to herein as, a “virtual image,” a defined image space that comprises pixels. The user may in some cases designate a desired area and corresponding “virtual image” that is larger than any of the images used to form the composite image. In some embodiments, the virtual image is at least about 100 to 1000 percent as large as a captured image, although the size may be larger or smaller.
To facilitate generating a composite image and to reduce image degradation (e.g., blurring), a plurality of images are obtained, or captured, over the area of interest and are combined to form the composite image. After one or more images of sufficient quality are captured of a particular portion of the area of interest, the telescope positioning system 35 can reposition the telescope to capture an image that includes a portion of the area of interest not captured in the previous image, and/or not captured in any of the previously captured images. The telescope 10 can also be repositioned so that the captured image overlaps a portion of the virtual image that was included in a previously captured image. Although the telescope positioning system 35 may be used to alter the pointing of the telescope 10 prior to capturing the different images, as well as to maintain the telescope directed on a particular celestial object, in some embodiments drift in the field-of-view of the telescope 10 may produce images translated with respect to each other that may be combined to form the composite image.
In some applications, the telescope 10 moves sufficiently such that the captured image is translated an amount comprising more than the pitch of the pixels for the captured images, or more than twice the pitch. In some embodiments, the translated amount can be between one-tenth (1/10) of a pixel and three-quarters (¾) of the field-of-view of the camera 12. In some embodiments, the translated amount can be between one-tenth (1/10) of a pixel and three-quarters (¾) of the size of the virtual image. The telescope 10 can be moved so that images covering the entire area of interest are captured. In some embodiments, a first image is obtained, and the telescope is then moved and an image captured a plurality of times between the first and last images. In some embodiments, the telescope 10 is moved and images are captured between 1 and 100 times after capturing a designated first image and prior to capturing a designated second image. Values outside these ranges are possible.
As described above, in some cases the field-of-view of the telescope drifts, and this drift contributes to the respective shift between images captured at different times. This drift may occur even if the positioning system 35 is set to keep the telescope 10 directed in substantially the same direction. Accordingly, a multitude of images may be obtained as the telescope drifts. These images, or a portion thereof, may be combined to form the composite image, which may be larger than the individual captured images.
Each captured image comprises pixels. These pixels depict a portion of the area of interest and correspond to a portion of the virtual image. As discussed above, the virtual image also comprises pixels. The resulting composite image is formed by changing the pixels of the virtual image using information from the captured images. In various preferred embodiments, these images are acquired by the detector array 16 onto which optical images are focused by the telescope 10. The detector array 16 captures these images at various points in time and produces electronic representations of the images. The images can be somewhat faint and/or blurred, and can require image processing so that they are suitable for use in the composite image.
The images may be captured automatically with the assistance of computer or microprocessor control, or control electronics and/or control signals. Alternatively, the images may be taken manually in some embodiments. Multiple exposures can be captured using shutter control, wherein a shutter is opened to expose the detectors to the optical image. Automatic or manual control of exposure time may be provided. The exposure may range, for example, between about 1/5000 second and 30 seconds. Values outside this range may also be used. The images can be displayed in real time and analyzed. A quantitative measure of the quality of the image as well as other measurable characteristics can be provided to the user via the user interface, e.g., display. The quality of the images can be evaluated to determine if characteristics of the images meet certain criteria (e.g., sharpness, smearing, and distortion) so that the images can be used to create the composite image or for other purposes. Images whose characteristics do not meet the criteria can be rejected. Analysis of the images can also be used to determine imaging control parameters, for example, gain, DC offset, exposure time, focus, and/or position of the telescope. Signals based on these control parameters can be sent to the telescope positioning system 35 and to the camera electronics to change the imaging control parameters for subsequent images obtained using the imaging system 14. Adjustments to the telescope or telescope system can be made in real time as the images are being obtained. Similarly, data can be presented to the user in real time as the images are being captured. The user can, in response to such data, decide to adjust parameters of the telescope or telescope system.
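For purposes of illustration only, the following is a minimal sketch of one way such a control-parameter adjustment might be automated. The normalization of pixel values to [0, 1], the target mean level, and the clamping range are assumptions made for this example rather than details of the embodiments above.

```python
import numpy as np

def suggest_exposure(image, exposure_s, target_mean=0.4,
                     tolerance=0.05, min_exp=1/5000, max_exp=30.0):
    """Suggest a new exposure time that brings the mean pixel level
    toward a target. `image` holds pixel magnitudes normalized to
    [0, 1]; the target level and tolerance are illustrative choices."""
    mean_level = float(np.mean(image))
    if mean_level == 0.0 or abs(mean_level - target_mean) <= tolerance:
        return exposure_s  # close enough; leave the parameter unchanged
    # Scale the exposure proportionally, clamped to the supported range.
    return float(np.clip(exposure_s * target_mean / mean_level,
                         min_exp, max_exp))
```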
The multiple electronic images can be processed to reduce image degradations, such as blurring.
Selection of the images may be based, for example, on the amount of information contained in the image or the region of the image tested. The information content can be measured, for example, by determining the compressibility of the image or the portion of the image evaluated. The larger the information content, the less compressible the images. Conversely, less information content translates into increased compressibility. Images with larger amounts of information can be chosen. Other images below a threshold level of information content may be excluded from the subset of images combined to produce the higher quality composite image.
Selection may alternatively be based, for example, on the level of image degradation such as blurring or conversely on the level of clarity and contrast. Images with higher contrast, those with more variation in signal magnitude from pixel to pixel, can be chosen. Other images below a threshold contrast level may be excluded from the subset of images combined to produce the higher quality composite image.
Combining the images to form a composite image can comprise “summing” pixel magnitudes on a pixel-by-pixel basis using various summing techniques. The aggregate magnitude may be scaled in some cases. In various embodiments, for example, the value of a given pixel in the composite image is the average of the magnitudes of the corresponding pixel in each of the images contained in the subset that is used to form the composite.
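A minimal sketch of this pixel-by-pixel averaging, assuming the selected images are already aligned and of equal size, is as follows:

```python
import numpy as np

def average_composite(images):
    """Combine aligned, equally sized images by averaging pixel
    magnitudes on a pixel-by-pixel basis."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return stack.mean(axis=0)  # each composite pixel is the mean over the stack
```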
The images can also be combined to form a composite image using a drizzle algorithm, described hereinbelow. It will be appreciated that there may be various ways of implementing this algorithm, only one of which is described herein for purposes of illustration. The drizzle algorithm is described in available references, including, for example, “Drizzle: A Method for the Linear Reconstruction of Undersampled Images,” Publications of the Astronomical Society of the Pacific, 114: 144-152, February 2002. Composite images can be formed using the drizzle algorithm alone, or using the drizzle algorithm in combination with one or more other methods of image reconstruction or image processing.
Prior to combining the images, the images may be translated such that the common features in the image are substantially aligned. Translating the images preferably substantially removes the effects of movement of the features in the image over the period of time during which the plurality of images are obtained. Such movement may result, for example, from atmospheric disturbances, vibrations of the telescope, or the rotation of the earth. Additional filtering may be employed to improve the quality of the image. This filtering may comprise contrast-enhancing filtering for increasing the contrast. In some embodiments, this filtering may be performed after the images have been combined to form the composite. This filtering is, however, optional.
In some embodiments of forming a composite image, an image is received by the image processor 18 as exemplified by block 36 in
As shown, the screen can also include additional items such as controls for specifying parameters and options associated with the image processing, as well as measured values, for example, of information content, blur, contrast, or focus. The screen may also include a histogram showing the distribution of pixel intensity in a plot of intensity (x-axis) versus number of pixels (y-axis).
As illustrated by block 40 in
The figure of merit may be based on or related to the quantity of information in the region of interest. Background on information, information theory, and the measurement of information in a message is provided in the seminal paper by C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948, which is incorporated herein by reference in its entirety. The amount of information is one method for assessing the quality of the images. Images of the same object containing different amounts of information may indicate variation in the quality of the images. For example, an image with degradation such as blurring, low resolution, loss of detail, and/or other effects will generally contain a relatively low amount of information. Such degradation may result, for example, from optical distortion, vibration and movement of the telescope or optical system, electronic noise in the detection apparatus, or from other sources. Conversely, images with large information content may reflect significant resolvable detail. Information content, for example, is also related to the ability to predict, from the value of the signal in one pixel, the signal in an adjacent pixel. Accordingly, in various preferred embodiments the information content is measured to evaluate the quality of the images, such as the resolvable useful detail in the images.
In various embodiments, the information content of, e.g., the region of interest (that is, how much information it contains) is assessed by calculating the compressibility within the designated region 42. The compressibility is indicative of the amount of information contained in the image or designated region 42. For example, a completely dark image, such as of the dark sky, would have little information and be highly compressible. Conversely, a quality image with extensive detail, such as of the surface of the moon, would contain large amounts of information and be less compressible. Accordingly, an image file, such as a .TIFF or .JPG file, containing an image of the dark sky, if compressed, would be smaller compared to a similar compressed file of the detailed image of the moon. Similarly, optical images of the same object should include the same amount of information, and therefore compress to the same size, unless one of the images is substantially degraded. The degraded image would contain less information than the un-degraded image and could be compressed more. Accordingly, compressibility can be used as a measure of information content, and as described above, the amount of information in like images can be used to assess the quality of the image.
One process for determining the information content comprises adaptive delta modulation. Other approaches, both those well known and those yet to be devised, may also be employed. Other values besides the compressibility can be used to characterize the information content, and hence the quality of the image in the designated region.
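As one concrete sketch of the compressibility measure described above, the following uses a general-purpose DEFLATE compressor in place of adaptive delta modulation; the choice of compressor and the 8-bit input format are assumptions made for illustration.

```python
import zlib
import numpy as np

def information_score(region):
    """Estimate information content via compressibility: a dark,
    featureless region compresses well and scores low, while a
    detailed region compresses poorly and scores high."""
    raw = np.ascontiguousarray(region, dtype=np.uint8).tobytes()
    compressed = zlib.compress(raw, level=9)
    return len(compressed) / len(raw)  # nearer 1 indicates more information
```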
Useful background may be found, e.g., in the Space Telescope Science Institute STSDAS User's Guide, Science Computing and Research Support Division, STScI, Baltimore, 1994, and Barnes, Jeanette, A Beginner's Guide to Using IRAF, IRAF Version 2.10, NOAO, Tucson, 1993, which are also each incorporated herein by reference in their entirety. See also Dantowitz, R., “Sharper Images Through Video,” Sky and Telescope, Vol. 96, No. 2, p. 48, August 1998; Hale, A. S., Dantowitz, R., Kozubel, M., Teare, S., Gillam, S. G., “The Selective Image Reconstruction (SIR) Imaging Technique: Application to Planetary Science,” AAS DPS Meeting #33, Bull. of the AAS, Vol. 33, p. 1143; and Thompson, L. A., “Adaptive Optics in Astronomy,” Physics Today, Vol. 47, No. 12, pp. 24-31, 1994, which are also each incorporated herein by reference in their entirety.
In various alternative embodiments, the figure of merit used to assess the quality of the images is based on the level of contrast. The level of contrast may be assessed by calculating the variance or standard deviation of signal values among the pixels within the designated region 42. The variance can be computed according to the following equation:
$\sigma^2 = \langle I(i,j)^2 \rangle - \langle I(i,j) \rangle^2$
where I(i,j) is the signal level at pixel (i,j), i corresponds to the row, and j corresponds to the column for each of the M×N pixels in the array 42. The standard deviation, i.e., the square root of this value, may also be employed. Other values besides the variance and standard deviation can be used to characterize the variation, and hence the contrast level in the designated region.
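A minimal sketch of this variance figure of merit, assuming the designated region is available as a two-dimensional array of pixel magnitudes, is as follows:

```python
import numpy as np

def contrast_variance(region):
    """Figure of merit sigma^2 = <I(i,j)^2> - <I(i,j)>^2 over the
    designated region; larger variance suggests higher contrast."""
    I = np.asarray(region, dtype=np.float64)
    return float(np.mean(I ** 2) - np.mean(I) ** 2)
```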
In another approach for quantifying the level of contrast, the difference in signal intensity between adjacent pixels is determined across the array 42. For example, in one embodiment, the variation can be evaluated by assessing the difference in signal level between a given pixel and the pixel to the right as well as the pixel beneath. For example, for the pixel (3,4) shown in
The aggregate variation may then be expressed as $\sum_{i,j} \Delta_{i,j}$, where $\Delta_{i,j} = \delta_1 + \delta_2$ and $\delta_1$ and $\delta_2$ are the magnitudes of the differences between a given pixel and the pixel to its right and the pixel beneath it, respectively. Such a summation can be computed over the entire array 42 of M×N pixels and yields a figure indicative of the variation among the pixels. A larger value means larger variation and likely higher contrast. Conversely, a smaller value corresponds to smaller variation and lower contrast. This figure of merit can be normalized or scaled. A wide variety of other figures of merit for characterizing the variation and the contrast level can be employed in different embodiments. Moreover, a wide variety of measures of the quality of an image may be utilized.
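A corresponding sketch of this adjacent-difference figure of merit, under the same assumption that the region is a two-dimensional array, is as follows:

```python
import numpy as np

def contrast_gradient(region):
    """Figure of merit: sum over the region of delta_ij = d1 + d2, the
    magnitudes of the differences to the right-hand and lower neighbors."""
    I = np.asarray(region, dtype=np.float64)
    d1 = np.abs(np.diff(I, axis=1)).sum()  # each pixel vs. the pixel to its right
    d2 = np.abs(np.diff(I, axis=0)).sum()  # each pixel vs. the pixel beneath
    return float(d1 + d2)
```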
As indicated by block 44 in
Another image is received and this portion of the processing represented by blocks 36, 38, 40, and 44 is repeated as exemplified by block 48. Namely, a new image is obtained, the portion of the image to be quantitatively evaluated is determined, and the figure of merit within that region is measured. For this image, the region for quantitative analysis may remain the same as originally designated by the user or determined by the processor 18. In other embodiments, the location (and potentially the size) of the region may be reevaluated and redefined. The value of the figure of merit for this image is compared with the previously recorded high and low figure of merit values. If this figure of merit value is either higher than the recorded high figure of merit value or lower than the low figure of merit value, this figure of merit value is recorded as the high or low figure of merit value, respectively.
This portion of the processing, represented by blocks 36, 38, 40, and 44, is repeated a number of times. This number may be set by the user via the user interface. In other embodiments, this number may be established by the processor 18. This number may range, for example, between about 5 and 10, or up to 100 or more; however, the number of times that this portion of the processing is repeated may be outside these ranges.
As shown by block 50 in
In various preferred embodiments, upper and lower values such as the maximum and minimum value of the recorded information content or compressibility are identified. The threshold levels may be determined using these values of high and low information content or compressibility. For example, the threshold value may be a value between maximum and minimum recorded information content and/or compressibility, such as half-way between these values or about 50% of the difference between the maximum and minimum. The threshold need not be limited to the midway point. Other levels closer to maximum or closer to minimum may be used instead. In some embodiments the user can specify whether the threshold is about 10% above the minimum, about 20% or 30%, etc., or whatever value he or she desires. Other approaches can be employed to provide a threshold value.
In other preferred embodiments, upper and lower values such as the maximum and minimum value of the recorded variations are identified. In the case where the standard deviation is employed as a measure of contrast, these values may correspond to σmax and σmin, respectively. The threshold levels may be determined using these values of high and low variation. As discussed above, for example, the threshold value may be a value between σmax and σmin, such as half-way between these values or about 50% of the difference between the maximum and minimum. Other levels closer to maximum or closer to minimum may be used instead. In some embodiments the user can specify whether the threshold is about 10% above the minimum, about 20% or 30%, etc., or whatever value he or she desires. Other approaches can be employed to provide a threshold value.
The threshold determines the quality level of additional images that are used to form the composite image. Accordingly, blocks 52, 54, 56, 58, 60, and 62, represent another portion of the process wherein additional images are received and evaluated. In particular, for each image, the region for quantitative analysis is determined and the figure of merit evaluated within this region is computed. As discussed above, the region for analysis may be the region originally designated by the user or the image processor 18. Alternatively, a new region may possibly be employed. The figure of merit may be assessed by measuring the information content and/or compressibility, contrast and/or variation, as well as other quality indicators within the region of interest, as discussed above.
The figure of merit value of the region is compared with the threshold level as indicated by block 58. If the figure of merit value is larger than the threshold level, the image is added to the composite. If the figure of merit value is less than the threshold level, the image is not added to the composite. Accordingly, if the threshold is high, higher quality images will be added to form the composite. Similarly, if the threshold is low, lesser quality images will be included in forming the composite.
This portion of the process is repeated a number of times as indicated by block 62. The number of times that this process is repeated may depend on the number of images captured, may be specified by the user, may be determined by the processor 18, or may be otherwise realized. This number may be, for example, between about 15 and 100, e.g., between about 15 and 30 or between about 50 and 100, or more; however, the number of times that this portion of the process is repeated may be outside these ranges as well. The number of images selected and added to form the composite may, for example, be between about 50 and 100, although more or fewer can be used. In some embodiments, between about 200 and 300 images can be evaluated, although the number may be larger or smaller. Capturing 200 to 300 images may take 2 to 3 minutes with a 1/10 second exposure time.
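For purposes of illustration, the calibration and selection portions of this process can be sketched as follows. The names `capture` and `figure_of_merit` are placeholders for the camera interface and whichever quality measure is in use; the counts and the threshold fraction are illustrative defaults.

```python
import numpy as np

def build_composite(capture, figure_of_merit, n_calibrate=10,
                    n_select=50, threshold_fraction=0.5):
    """Calibrate a quality threshold from initial frames, then average
    only the frames whose figure of merit meets it."""
    # First portion: record the high and low figure-of-merit values.
    merits = [figure_of_merit(capture()) for _ in range(n_calibrate)]
    lo, hi = min(merits), max(merits)
    threshold = lo + threshold_fraction * (hi - lo)  # e.g., the halfway point

    # Second portion: add only frames whose merit meets the threshold.
    total, count = None, 0
    for _ in range(n_select):
        frame = np.asarray(capture(), dtype=np.float64)
        if figure_of_merit(frame) >= threshold:
            total = frame if total is None else total + frame
            count += 1
    return None if count == 0 else total / count
```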
As indicated above, a wide range of algorithms can be employed as a measure of quality and the specific measurement and/or calculation to assess such image quality need not be limited to those specifically recited herein. Moreover, although in discussing the process shown in
Note that the quality evaluation, e.g., information content, contrast, etc., can be employed to offer additional functions to the user. The calculated value of the figure of merit, such as information content or contrast, can be displayed for images obtained, providing the user with a quantitative measure of the image quality. Such a value can be presented graphically to the user. This feedback may assist the user, for example, in focusing the telescope. The processor can be set to monitor quality as the telescope is adjusted through focus. Preferably, the display provides the quality level of the current image as well as the highest quality obtained, so that the user can determine the best focus as indicated by the value calculated for the figure of merit or image quality.
As discussed above in connection with
For reasons explained above, the features in one image may be offset with respect to another as schematically illustrated in
In the case where the designated region contains such a high contrast feature, the feature may be located by calculating the centroid of the intensity distribution within the designated region. The centroid preferably corresponds to the point in the region at which the intensity within that region may be considered to be concentrated. Accordingly, in the case where the region comprises an image of a bright star, planet, or other celestial object on a dark background, the centroid can be useful in locating a central position of this bright feature in the image. This position can be monitored to track the shift of the feature(s) in the image.
Exemplary expressions that may be employed in calculating the X, Y position of the centroid are presented below:

$X = \frac{\sum_{i,j} i\, I(i,j)}{\sum_{i,j} I(i,j)} \qquad Y = \frac{\sum_{i,j} j\, I(i,j)}{\sum_{i,j} I(i,j)}$

where I(i,j) is the pixel intensity value at x=i and y=j. Other representations and methods for calculating the centroid are possible.
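A minimal sketch of this centroid computation, assuming the designated region is a two-dimensional array of intensities, is as follows:

```python
import numpy as np

def centroid(region):
    """Intensity-weighted centroid (X, Y) of a designated region,
    X = sum(i * I) / sum(I) and Y = sum(j * I) / sum(I). Here x is
    taken along columns and y along rows; the convention can be
    adapted to the detector layout."""
    I = np.asarray(region, dtype=np.float64)
    rows, cols = np.indices(I.shape)
    total = I.sum()
    return (cols * I).sum() / total, (rows * I).sum() / total
```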
In various preferred embodiments, the centroid of the designated region is determined as represented by block 64 in
Preferably, the images are shifted an amount, e.g., Δx, Δy, as shown in
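One way such a translation might be realized, assuming SciPy is available and the shift is computed from the centroids of a tracked feature in the reference image and in the image being aligned, is as follows:

```python
from scipy.ndimage import shift as nd_shift  # SciPy assumed available

def align_to_reference(image, ref_centroid, img_centroid):
    """Translate an image so its tracked feature coincides with the
    reference feature; (dx, dy) is the difference between centroids."""
    dx = ref_centroid[0] - img_centroid[0]
    dy = ref_centroid[1] - img_centroid[1]
    # ndimage.shift expects offsets in (row, col) order; edge values
    # are filled with zeros (a dark background).
    return nd_shift(image, (dy, dx), order=1, mode='constant', cval=0.0)
```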
As discussed more fully below, one of the two images may be rotated with respect to the other image to provide proper alignment. Two reference points may be monitored to determine rotation. For example, the centroids of two reference points such as two stars may be used to compute the amount of rotation, the center of rotation, and the direction of rotation. Other methods may also be employed.
As discussed above, and represented by block 70 in
The magnitude levels may be further adjusted, for example, by scaling or normalizing. Other adjustments are also possible. Such adjustments may be represented by block 72.
The composite image may be further processed by filtering. For example, a contrast-enhancing filter may be employed to further improve contrast. As the composite image possesses little noise, contrast-enhancing filtering will increase contrast and highlight features of the object without adding substantial noise. For example, kernel filtering can be employed. As is well known, with kernel filtering, a convolution kernel is applied to the pixels in the image to obtain new pixel values. See, e.g., Craig A. Lindley, “Practical Image Processing in C,” Wiley Professional Computing, John Wiley & Sons, Inc., 1991, pp. 368-369. Examples of convolution kernels for several high-pass spatial filters are presented below:
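The specific kernels given in the reference are not reproduced here; two representative 3×3 high-pass kernels and their application are sketched below, with SciPy assumed available for the convolution.

```python
import numpy as np
from scipy.ndimage import convolve  # SciPy assumed available

# Two representative 3x3 high-pass (sharpening) kernels; the specific
# kernels in the cited reference may differ.
HIGH_PASS_MILD = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float64)
HIGH_PASS_STRONG = np.array([[-1, -1, -1],
                             [-1,  9, -1],
                             [-1, -1, -1]], dtype=np.float64)

def kernel_filter(image, kernel=HIGH_PASS_MILD):
    """Convolve the kernel over the image to obtain new pixel values."""
    return convolve(np.asarray(image, dtype=np.float64), kernel,
                    mode='nearest')
```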
Other types of kernel filters can also be employed. Filters and filtering techniques other than kernel filtering may also be used for improving image quality or altering the image as desired.
For example, another technique that can be employed to improve image quality is dark subtraction, wherein the fixed pattern noise of the detector is subtracted out of the image. A table or database of fixed pattern detector noise can be created that comprises the fixed pattern noise for a variety of exposure levels for the detector. This database may be generated by capturing a number of images over different time intervals with a closed shutter over the detector array. For a given exposure setting, therefore, the appropriate fixed pattern noise can be obtained from the database by the processor and subtracted out of the electronic image. Fine adjustment can also be performed by scaling the fixed pattern noise that is subtracted out of the image. Such fine tuning may be useful where the database does not include fixed pattern noise exactly matching that produced for the exposure time selected. For example, if the database includes fixed pattern noise for 1/600 second and 1/500 second exposure times and the CMOS camera is set for a 1/650 second exposure, the fixed pattern noise for 1/500 second can be selected and scaled appropriately. Scaling can be employed in other circumstances as well to adjust the image.
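A minimal sketch of this dark subtraction, in which a dictionary keyed by exposure time stands in for the noise database described above, is as follows:

```python
import numpy as np

def dark_subtract(image, dark_frames, exposure_s):
    """Subtract fixed pattern noise using the nearest stored dark frame,
    scaled to the actual exposure. `dark_frames` maps exposure time in
    seconds to a stored dark frame."""
    nearest = min(dark_frames, key=lambda t: abs(t - exposure_s))
    # Scale the stored pattern when no entry exactly matches the
    # selected exposure time.
    scaled = dark_frames[nearest] * (exposure_s / nearest)
    return np.asarray(image, dtype=np.float64) - scaled
```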
Such improved image quality can be achieved by employing the embodiments discussed above, for example, in connection with
Additionally, logic may be executed on the architecture such as shown for example in
Additionally, some or all of the processing can be performed on the same device, on one or more other devices that communicate with the device, or in various other combinations. The processor may also be incorporated in a network, and portions of the process may be performed by separate devices in the network. Display of the images, such as the composite image, or display of other information, e.g., a user interface, can be provided on the device, on devices that communicate with the device, and/or on a separate device.
The structures and processes described above are not limited solely to use for astronomical applications. The image processor 18 and processing techniques can be used to reduce image blur for other imaging systems such as, for example, terrestrial telescopes and binoculars having an optoelectronic detector array.
In certain preferred embodiments, separate optical systems are employed for the user's eyes and the CMOS camera 110. The optics within the binoculars 100 may comprise a plurality of powered refractive optical elements (e.g., objective and ocular) and prisms for inverting the image. The CMOS camera 110 may also comprise refractive optical elements for forming an optical image on the CMOS detector array. As described above, other detection devices, such as for example CCDs, may be employed. Other optical designs and configurations are also possible, as described above.
As discussed above, CMOS detector arrays are substantially less expensive than CCD detector arrays. CMOS detectors, however, are also less sensitive. Accordingly, in low light conditions, such as, for example, dusk, indoors, or artificial lighting, these CMOS detectors have difficulty capturing high quality images.
Moreover, handheld binoculars suffer from anatomical vibration. The hands naturally have limited ability to hold the binoculars completely steady. As a result, the user holding the binoculars introduces movement into the optical system during the period over which the images are recorded. This movement is generally lateral movement (e.g., in the x and y directions) which is transverse to the optical axis (e.g., z-direction) of the optical systems. Such vibrations and other movements cause the CMOS camera 110 to capture a blurred image.
To reduce blur, the exposure time of the CMOS camera can be shortened such that the image is captured with a reduced amount of movement and vibration. For example, if an aperture is employed to control exposure of the detector array, the shutter can be opened for a shorter period of time during image capture. The images will therefore be underexposed. Shortening the exposure time limits the quantity of light and, thus, the image will be fainter as less light is collected by the CMOS detector array. As discussed above, however, the CMOS detector array is particularly susceptible to the effects of low light levels.
To mitigate these effects, which otherwise degrade the image quality, a plurality of short exposure images is obtained. The exposure length is sufficiently short to reduce the effects of vibration. These exposure times may, for example, range between about 1/5000 second and 1/100 second. For example, the exposure time may be between about 1/1000 and 1/100 second, or between about 1/5000 and 1/1000 second. Exposure times outside these ranges, however, are possible. The number of images captured is preferably between about 10 and 50, such as between about 10 and 20 or between about 30 and 50, although more or fewer images may be obtained. To improve image quality, preferably at least a portion of these images is combined to form a composite image as described above.
As described for other image combination techniques, the plurality of images used to create the composite image are preferably selected from a larger set of images, the subset selected being of superior quality. Selection may be based, for example, on information content and/or compressibility, on the level of image degradation such as blurring, or conversely on the level of clarity and contrast. Images with higher information content can be chosen. The compressibility may be used to determine the information content. As described above, images with higher contrast, those with more variation in signal magnitude from pixel to pixel, can also be chosen. Images below a threshold level may be excluded from the subset of images combined to produce the higher quality composite image. Combining the images may comprise summing the magnitudes on a pixel-by-pixel basis. The aggregate magnitude may be scaled in some cases. In various embodiments, for example, the value of a given pixel in the composite image is the average of the magnitudes of the corresponding pixel in each of the images contained in the subset that is used to form the composite.
Prior to combining the images using any of the compositing processes described herein or other known processes, the images may be translated such that the common features in the image are substantially aligned. Translating the images preferably substantially removes the effects of movement of the features in the image over the period of time during which the plurality of images are obtained. Such movement may result for example from vibrations. Additional filtering may be employed to improve the quality of the image. This filtering may comprise contrast-enhancing filtering for increasing the contrast. In some embodiments, this filtering may be performed after the images have been combined to form the composite. This filtering is, however, optional.
Preferred embodiments of the structures and configuration of the imaging system are extensively discussed above. Some of the applicable structures include those shown in
Preferred embodiments of the image processing techniques are also extensively discussed above. Some of these applicable processes are illustrated by FIGS. 6, 7A-7C, 8-12, and 25-28 and the discussions relating thereto. These processes can also advantageously be employed to improve the quality of the images obtained from the CMOS camera in the binoculars as well.
In one preferred embodiment, however, the region designated for quantitative analysis is presumed to be substantially located at the center of the field-of-view. A user is likely to orient the binoculars such that the object of interest is central. Accordingly, the region of interest is centrally located in certain preferred embodiments. Other approaches for determining the location of the region designated for analysis may be employed as well. As discussed above, evaluating the image over a smaller designated region expedites processing.
Further examples of the successful performance of the image processing described herein are shown in
As described above, a drizzle algorithm may be employed in combining captured images into a composite. A detailed description of this method of forming a composite image is described with reference to
If the captured image has acceptable quality, the captured image 172 can be incorporated into the virtual image 170. Pixels of the virtual image 170 are then changed based on pixel magnitudes of the captured image using a drizzle algorithm, as represented by block 154. In the drizzle algorithm, known also as Variable-Pixel Linear Reconstruction (or “drizzling”), pixels in the captured images (input images) are mapped into pixels in the virtual image, taking into account shifts and rotations between the images and the virtual image 170 as illustrated in
Magnitude values are associated with each of the drops. In various preferred embodiments, for example, the drop has the same value as the pixel in the captured image with which the drop is associated. These magnitudes are distributed into pixels in the virtual image 170. The association of the drops with one or more pixels in the virtual image 170 is illustrated in
As described above, the pixels in the virtual image 170 are typically reduced in size in comparison with the pixels in the captured images. The pixels in the virtual image 170 are also smaller than the drops in certain preferred embodiments. For example, the drops have linear dimensions one-half that of the input pixel, slightly larger than the dimensions of the pixels of the virtual image in some embodiments. The drops may range in size from between about one-fifth (⅕) as large as the pixels in the captured images to the same size as the pixels in the captured images, and between about one and two times the size of the pixels in the virtual image. Values outside these ranges are also possible.
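To make the geometry concrete, a minimal drizzle sketch is given below under simplifying assumptions: translation only (no rotation or optical distortion correction), square pixels, and flux distributed according to the fractional area overlap of each shrunken drop with the finer virtual grid. The parameter names (`scale`, `shift`, `pixfrac`) are illustrative.

```python
import numpy as np

def drizzle_add(virtual, weights, captured, scale=2.0, shift=(0.0, 0.0),
                pixfrac=0.5):
    """Drizzle one captured frame into the (finer) virtual image.

    `scale` is the ratio of captured-pixel size to virtual-pixel size,
    `shift` is this frame's (x, y) offset in virtual-pixel units, and
    `pixfrac` shrinks each input pixel into its "drop".
    """
    drop = pixfrac * scale  # drop edge length in virtual-pixel units
    ny, nx = captured.shape
    for iy in range(ny):
        for ix in range(nx):
            # Drop boundaries on the virtual grid, centered on the pixel.
            cx = (ix + 0.5) * scale + shift[0]
            cy = (iy + 0.5) * scale + shift[1]
            x0, x1 = cx - drop / 2, cx + drop / 2
            y0, y1 = cy - drop / 2, cy + drop / 2
            for oy in range(int(np.floor(y0)), int(np.ceil(y1))):
                for ox in range(int(np.floor(x0)), int(np.ceil(x1))):
                    if 0 <= oy < virtual.shape[0] and 0 <= ox < virtual.shape[1]:
                        # Fraction of the drop overlapping this virtual pixel.
                        wx = max(0.0, min(x1, ox + 1) - max(x0, ox))
                        wy = max(0.0, min(y1, oy + 1) - max(y0, oy))
                        a = wx * wy / (drop * drop)
                        virtual[oy, ox] += a * captured[iy, ix]
                        weights[oy, ox] += a
```

After all frames have been added, the composite can be read out as `virtual / np.maximum(weights, 1e-12)`, so that regions covered by more drops are properly normalized.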
When images are combined using a drizzle algorithm, a weight map can be specified for each input image (e.g., containing information on bad pixels in the image). When the drizzle process generates the final virtual image 170 from all the captured images, it can also create an output weight map that combines information from all the input weights. For example, when a drop with value $i_{xy}$ and user-defined weight $w_{xy}$ is added to an image with pixel value $I_{xy}$, weight $W_{xy}$, and fractional pixel overlap $0 < a_{xy} < 1$, the resulting value of the image $I'_{xy}$ and weight $W'_{xy}$ is

$W'_{xy} = a_{xy} w_{xy} + W_{xy}$

$I'_{xy} = \frac{a_{xy} i_{xy} w_{xy} + I_{xy} W_{xy}}{W'_{xy}}$
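Expressed in code, and assuming scalar quantities for a single drop and pixel pair, this update reads:

```python
def weighted_update(I, W, i_drop, w_drop, a):
    """Apply the drizzle weight-map update for one drop and one pixel:
    W' = a*w + W and I' = (a*i*w + I*W) / W'."""
    W_new = a * w_drop + W
    I_new = (a * i_drop * w_drop + I * W) / W_new
    return I_new, W_new
```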
Drizzle offers many advantages. Combining captured images using a drizzle algorithm or drizzle filtering preserves photometry and resolution. As discussed above, the drizzle approach takes into account the optical distortion of the camera. The drizzle filtering removes the effects of geometric distortion both on image shape and photometry, and increases the effective resolution. Additionally, the input images can be weighted according to the statistical significance of each pixel.
One example of image reconstruction using a drizzle algorithm is shown in
Alternative approaches are also possible. For example, the processing steps may be interchanged and may be executed in different order or may be excluded or replaced altogether. Additional processing steps and features can also be added.
It will be appreciated by those skilled in the art that various omissions, additions and modifications may be made to the processes described above without departing from the scope of the invention, and all such modifications and changes are intended to fall within the scope of the invention, as defined by the appended claims.