1. Field
The disclosed embodiments relate generally to digital image processing.
2. Background
Digital images, such as those captured by digital cameras, can have exposure problems. Consider, for example, the use of a conventional digital camera on a bright sunny day. The camera is pointed upwards to capture a scene involving the tops of graceful trees. The trees in the foreground are set against a bright blue background of a beautiful summer sky. The digital image to be captured involves two portions, a tree portion and a background sky portion.
If the aperture of the camera is set to admit a large amount of light, then the tree portion will contain detail. Subtle color shading will be seen within the trunks and foliage of the trees in the captured image. The individual sensors of the image sensor that detect the tree portion of the image are not saturated. The individual sensors that detect the sky portion of the image, however, may receive so much light that they become saturated. As a consequence, the sky portion of the captured image appears so bright that subtle detail and shading in the sky is not seen in the captured image. The sky portion of the image may be said to be “overexposed.”
If, on the other hand, the aperture of the camera is set to reduce the amount of light entering the camera, then the individual sensors that capture the sky portion of the image will not be saturated. The captured image shows the subtle detail and shading in the sky. Due to the reduced aperture, however, the tree portion of the image may appear as a solid black or very dark feature. Detail and shading within the tree portion of the image are now lost. The tree portion of the image may be said to be “underexposed.”
It is therefore seen that with one aperture setting, a first portion of a captured image is overexposed whereas a second portion is properly exposed. With a second aperture setting, the first portion is properly exposed, but the second portion is underexposed. A solution is desired.
A method compensates for improperly exposed areas in a first digital image taken with a first aperture setting by rapidly and automatically capturing a second digital image of the same scene using a second aperture setting. An optical characteristic is determined for each portion of the first digital image. The optical characteristic may, for example, be the luminance of the portion. If the optical characteristic is in an acceptable range (for example, the luminance of the portion is high enough), then image information for the portion of the first digital image is used in a third adjusted digital image. If, on the other hand, the optical characteristic of the portion of the first digital image is outside the acceptable range (for example, the luminance of the portion is too low or too high), then image information for the portion in the first digital image is combined with image information for a corresponding portion in the second image, thereby generating a composite portion. The composite portion is used in the third adjusted digital image.
The manner of combining can be based on the luminance of the portion in the first image. In one example, the portion in the first digital image is mixed with the corresponding portion in the second digital image, and the relative proportion taken from the first digital image versus the second digital image is dependent on the magnitude of the optical characteristic. A multiplication factor representing this proportion is generated and the multiplication factor is used in the combining operation. This process of analyzing a portion in the first digital image and of generating a corresponding portion in the third adjusted digital image is performed for each portion of the first digital image. The resulting third digital image is stored as a file (for example, a JPEG file). A header of the file contains an indication that the compensating method has been performed on the image information contained in the file.
The method can be performed such that a portion of the first digital image is analyzed and a composite portion of the third digital image is generated before a second portion of the first digital image is analyzed. Alternatively, all of the portions of the first digital image can be analyzed in a first step, thereby generating a two-dimensional array of multiplication factors for the corresponding two-dimensional array of portions of the first image. The multiplication factors are then used in a second step to combine corresponding portions of the first and second digital images to generate a corresponding two-dimensional array of composite portions of the third digital image. In the case where a two-dimensional array of multiplication factors is generated, the multiplication factors can be adjusted to reduce an abruptness in transitions in multiplication factors between neighboring portions. This abruptness is a sharp discontinuity in the multiplication factors of portions disposed along a line. Reducing such an abruptness makes boundaries between bright areas and dark areas in the resulting third digital image appear more natural. Reducing such an abruptness may also make undesirable “halo” effects less noticeable.
In accordance with another method, multiple digital images need not be captured in order to compensate for underexposed and/or overexposed areas in a digital image. A first digital image is captured using a relatively small aperture opening such that if a portion of the image is overexposed or underexposed it will most likely be underexposed. An optical characteristic is determined for the portion. If the optical characteristic is in a first range, then the portion of the first digital image is included in a second adjusted digital image in unaltered form. If, however, the optical characteristic is in a second range, then an optical characteristic adjustment process is performed on the portion of the first digital image to generate a modified portion. The modified portion is included in the second adjusted digital image.
In one example, the optical characteristic is luminance and the optical characteristic adjustment process is an iterative screening process. If the luminance of the portion is high enough, then the image information of the portion of the first digital image is used as the image information for the corresponding portion in the second adjusted digital image. If, on the other hand, the luminance of the portion is low, then the iterative screening process is performed to raise the luminance of the portion, thereby generating a modified portion having a higher luminance. The screening process raises the luminance of the portion while maintaining the relative proportions of the constituent red, green and blue colors in the starting portion. The modified portion is included in the second adjusted digital image. The iterative screening process is performed until either the luminance of the portion reaches a predetermined threshold, or until the screening process has been performed a predetermined maximum number of times. In this way, a second adjusted digital image is generated wherein areas that were dark in the first digital image are brighter in the second adjusted digital image. The second adjusted digital image is stored as a file (for example, a JPEG file). A header of the file contains an indication that the compensating method has been performed on the image information contained in the file.
A novel electronic circuit that carries out the novel methods is also disclosed. Additional embodiments are also described in the detailed description below.
Although electronic device 1 has been described above in the context of cellular telephones, electronic device 1 may also include digital camera electronics. The digital camera electronics includes a lens or lens assembly 9, a variable aperture 10, a mechanical shutter 11, an image sensor 12, and an analog-to-digital converter and sensor control circuit 13. Image sensor 12 may, for example, be a charge coupled device (CCD) image sensor or a CMOS image sensor that includes a two-dimensional array of individual image sensors. Each individual sensor detects light of a particular color. Typically, there are red sensors, green sensors, and blue sensors. The term pixel is sometimes used to describe a set of one red, one green and one blue sensor. A/D converter circuit 13 can cause the array of individual sensors to capture an image by driving an appropriate electronic shutter signal into the image sensor. A/D converter circuit 13 can then read the image information captured in the two-dimensional array of individual sensors out of image sensor 12 by driving appropriate readout pulses into image sensor 12 via lines 14. The captured image data flows in serial fashion from sensor 12 to A/D converter circuit 13 via leads 36. An electrical motor or actuator 15 is operable to open or constrict variable aperture 10 so that the aperture can be set to have a desired opening area. Processor 2 controls the motor or actuator 15 via control signals 16. Similarly, an electrical motor or actuator 17 is operable to open and close mechanical shutter 11. Processor 2 controls the motor or actuator 17 via control signals 18. Electronic device 1 also includes an amount of nonvolatile storage 19. Nonvolatile storage 19 may, for example, be flash memory or a micro-hard drive.
To capture a digital image, processor 2 sets the opening area size of variable aperture 10 using control signals 16. Once the aperture opening size is set, processor 2 opens mechanical shutter 11 using control signals 18. Light passes through lens 9, through the opening in variable aperture 10, through mechanical shutter 11, and onto image sensor 12. A/D converter and control circuit 13 supplies the electronic shutter signal to image sensor 12, thereby causing the individual sensors within image sensor 12 to capture image information. A/D converter and control circuit 13 then reads the image information out of sensor 12 using readout pulses supplied via lines 14, digitizes the information, and writes the digital image information into memory 3 across bus 7. Processor 2 retrieves the digital image information from memory 3, performs any desired image processing on the information, and then stores the resulting image as a file 38 in non-volatile storage 19. The digital image may, for example, be stored as a JPEG file. Processor 2 also typically causes the image to be displayed on display 5. The user can control camera functionality and operation, as well as cellular telephone functionality and operation, using switches 8.
The individual sensors of image sensor 12 that captured the second portion 22, however, were substantially saturated due to the brightness of the sky. Relative color information and detail that should have been captured in this second portion 22 has therefore been lost. This lack of detail and shading in the background sky in
In a second step (step 101), a second digital image of the same scene is automatically captured by electronic device 1 using a second aperture setting. The second digital image is captured automatically and as soon as possible after the first digital image so that the locations of the various objects in the scene will be identical or substantially identical in the first and second digital images.
The reduced area of opening 24 has, however, resulted in the proper exposure of the individual sensors that captured the relatively bright background sky. Whereas the second portion 22 of the first digital image 20 of
Next (step 102), processor 2 determines a multiplication factor Fm for each portion A1-An of the first digital image. The image information of the first digital image is considered in portions that make up a two-dimensional array of portions A1-An. In the present example, each portion is a pixel and the two-dimensional array of the pixels forms the first digital image. Each pixel is represented by three individual color values: a red color value, a green color value, and a blue color value. Each value is a value between 0 and 255. A value of 0 indicates completely dark, whereas a value of 255 indicates completely bright. In step 102, each of the portions Am, where m ranges from one to n, is considered one at a time and a multiplication factor Fm is determined for the portion Am.
The multiplication factor can be determined in any one of many different suitable ways. In the present example, the multiplication factor Fm is determined by first determining the luminance L of the pixel. From the red color value (R), the green color value (G) and the blue color value (B), a luminance value L of the pixel is given by Equation (1) below. Equation (1) boosts the brightness for certain colors, while limiting the brightness of others, so that the magnitude of the resulting luminance value L corresponds to the brightness of the composite pixel as perceived by the human eye.
(R*0.30)+(G*0.59)+(B*0.11)=L (1)
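Equation (1) maps the three color values to a single perceptual brightness number. It can be sketched in a few lines of Python; the function name and the example pixel values below are illustrative assumptions, not from the disclosure:

```python
def luminance(r, g, b):
    """Perceptual luminance per Equation (1); r, g, b are 0-255 color values."""
    return r * 0.30 + g * 0.59 + b * 0.11

# A bright sky pixel versus a dark tree-trunk pixel (example values assumed):
sky = luminance(200, 220, 255)    # ~217.9
trunk = luminance(40, 30, 20)     # ~31.9
```

Because the green coefficient is largest, a green-heavy pixel reads as brighter than a blue-heavy pixel with the same raw values, matching the stated goal of tracking brightness as perceived by the human eye.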
Once the luminance value L of the pixel has been determined, a multiplication factor function is used to determine the multiplication factor F for the pixel being considered.
If, on the other hand, the luminance value L for portion Am of the first digital image is too dark or too light (luminance L of the pixel is in a second predetermined range of from 0 to 15 or from 240 to 255), then the image information for portion Am in the first digital image is ignored (multiplied by a multiplication factor of 0%) and the image information for the corresponding portion Bm in the second image is used (multiplied by 100%). The second predetermined range is denoted by reference numeral 30 in
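The shape of the multiplication factor function between the extreme ranges and the acceptable range is not fully specified in this excerpt. The sketch below assumes hard 100%/0% levels at the ranges named above, joined by linear ramps of an assumed width of 16 luminance counts:

```python
def multiplication_factor(lum):
    """Sketch of a multiplication factor function Fm(L).

    Returns 1.0 (use the first image alone) when luminance is acceptable,
    and 0.0 (use the second image alone) when luminance is in the extreme
    ranges 0-15 or 240-255 named in the text.  The linear ramps joining
    the two levels, and their width, are illustrative assumptions.
    """
    if lum <= 15 or lum >= 240:
        return 0.0
    if lum < 31:                     # assumed ramp from 0% up to 100%
        return (lum - 15) / 16.0
    if lum > 224:                    # assumed ramp from 100% down to 0%
        return (240 - lum) / 16.0
    return 1.0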
The process of calculating a luminance value L for a portion Am of the first digital image and of then determining an associated multiplication factor Fm for portion Am is repeated n times for m equals one to n until a set of multiplication factors F1-Fn is determined. In the present example where a portion is a pixel, there is a one to one correspondence between the multiplication factors F1-Fn and the pixels A1-An of the first digital image.
Next (step 103), the multiplication factors F1-Fn are adjusted to reduce abruptness of transitions in the multiplication factors between neighboring portions. This adjusting process is explained in connection with a neighborhood of portions identified in
In the example of the first and second digital images of
Even if the halo were not to appear in the final third digital image, the sharpness of the transition of multiplication factors from 0% to 100% from one portion to the next may cause an unnatural looking boundary where first portion 21 of the first digital image 20 is joined to second portion 27 of the second digital image 25.
The multiplication factors determined in step 102 are therefore adjusted (step 103) to smooth out or dither the abrupt transition in multiplication factors.
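The precise smoothing operation is left open here; one minimal sketch, assuming a simple 3×3 box average over the two-dimensional factor array, is:

```python
def smooth_factors(factors):
    """Average each multiplication factor with its neighbors (3x3 box).

    The text calls only for reducing abrupt transitions between
    neighboring portions; a box average is one simple way to do that
    and is an illustrative assumption, not the disclosed method.
    `factors` is a list of rows of floats; edge cells average over
    whatever neighbors exist.
    """
    rows, cols = len(factors), len(factors[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total, count = 0.0, 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        total += factors[ni][nj]
                        count += 1
            out[i][j] = total / count
    return out
```

Applied to an array where factors jump from 0% to 100% between adjacent columns, the averaged factors step through intermediate values, so the blend between the two source images changes gradually across the boundary.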
Next (step 104), a composite portion Cm is generated for each portion C1-Cn of the third digital image by combining portion Am of the first digital image with portion Bm of the second digital image, wherein the combining is based on the multiplication factor Fm of the corresponding portion Am. In one example, the combining step is performed in accordance with Equations (2), (3) and (4) below.
RCm=(Fm*RAm)+((1−Fm)*RBm) (2)
GCm=(Fm*GAm)+((1−Fm)*GBm) (3)
BCm=(Fm*BAm)+((1−Fm)*BBm) (4)
The result is a red value RCm, a green value GCm, and a blue value BCm for portion Cm of the resulting third digital image. RAm is the red value for portion Am. GAm is the green value for portion Am. BAm is the blue value for portion Am. RBm, GBm and BBm are the corresponding red, green and blue values for portion Bm of the second digital image. Parameter m ranges from one to n so that one portion Cm is generated for each corresponding portion A1-An in the first digital image.
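Equations (2), (3) and (4) amount to a per-channel linear blend of the two pixels. A minimal Python sketch (the function name and the example pixel values are assumptions for illustration):

```python
def combine(pixel_a, pixel_b, fm):
    """Combine portion Am and Bm per Equations (2)-(4).

    pixel_a and pixel_b are (R, G, B) tuples from the first and second
    digital images; fm is the multiplication factor in 0.0-1.0.
    fm = 1.0 reproduces the first image's pixel; fm = 0.0 reproduces
    the second image's pixel.
    """
    return tuple(fm * a + (1.0 - fm) * b
                 for a, b in zip(pixel_a, pixel_b))

# Equal weighting of a bright pixel and a dark pixel (values assumed):
mixed = combine((200, 220, 255), (40, 30, 20), 0.5)   # (120.0, 125.0, 137.5)
```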
Processor 2 (see
Although the method described above compensates for improper exposure using image information from multiple digital images, problems due to improper exposure can be ameliorated without the use of image information from multiple digital images.
In a first step (200) a first digital image is captured using a relatively small aperture opening size such that if a portion of the image is overexposed or underexposed, it will most likely be underexposed. The first image is comprised of a two-dimensional array of portions Am, where m ranges from one to n.
Next (step 201), an optical characteristic of a portion Am is determined. In one example, the optical characteristic is pixel luminance L.
If the optical characteristic of portion Am is in a first range (step 202), then the image information in portion Am is included in a second digital image as portion Bm of the second digital image. Portion Am is included in second digital image in unaltered form.
If, however, the optical characteristic of portion Am is in a second range (step 203), then an optical characteristic adjustment process is performed on portion Am to generate a modified portion Am′. The modified portion Am′ is included in the second digital image as portion Bm.
In one example, portion Am is a pixel, the optical characteristic adjustment process is a screening process, the first range is an acceptable range of pixel luminance, and the second range is a range of unacceptably dark pixel luminance. If the pixel being considered has a luminance in the first range, then the pixel is included in the second image in unaltered form. If the pixel being considered has a luminance in the second range, then the pixel information of pixel Am is repeatedly run through the screening process to brighten the pixel. Each time the screening process is performed, the pixel is brightened. This brightening process is stopped when either the pixel luminance has reached a predetermined brightness threshold or when the screening process has been done on the pixel a predetermined number of times.
Equations (5), (6) and (7) below set forth one screening process.
A−(((A−RAm)*(A−RAm))>>8)=RAm′ (5)
A−(((A−GAm)*(A−GAm))>>8)=GAm′ (6)
A−(((A−BAm)*(A−BAm))>>8)=BAm′ (7)
In Equations (5), (6) and (7), A is a maximum brightness of a color value of the pixel being screened. RAm is a red color value of the portion Am that is an input to the screening process. RAm′ is a red color value output by the screening process. GAm is a green color value of the portion Am that is an input to the screening process. GAm′ is a green color value output by the screening process. BAm is a blue color value of the portion Am that is an input to the screening process. BAm′ is a blue color value output by the screening process. The “>>” characters represent an operation that shifts right by eight bits. As set forth above, the screening process is iteratively performed until pixel luminance has reached a predetermined brightness threshold or the number of iterations has reached a predetermined number. The screening process increases the luminance of the pixel while maintaining the relative proportions of the constituent red, green and blue colors of the pixel. The resulting color values RAm′, GAm′ and BAm′ are the color values of the modified portion Am′ that is included in the second digital image as portion Bm.
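The screening process of Equations (5), (6) and (7) can be sketched in Python as follows; the luminance threshold and iteration limit are illustrative assumptions, since the text says only that they are predetermined:

```python
def screen_once(r, g, b, a=255):
    """One pass of the screening process of Equations (5)-(7).

    A is the maximum brightness of a color value (255 for 8-bit values);
    ">> 8" is a right shift by eight bits, i.e. integer division by 256.
    """
    def s(v):
        return a - (((a - v) * (a - v)) >> 8)
    return s(r), s(g), s(b)

def screen(r, g, b, threshold=200, max_iters=8):
    """Iterate the screening process until pixel luminance (Equation (1))
    reaches a threshold or an iteration limit is hit.  The threshold of
    200 and the limit of 8 are assumed values for illustration."""
    for _ in range(max_iters):
        if r * 0.30 + g * 0.59 + b * 0.11 >= threshold:
            break
        r, g, b = screen_once(r, g, b)
    return r, g, b
```

Each pass moves every color value toward the maximum by an amount proportional to the square of its distance from the maximum, so dark values are brightened strongly while values already near 255 are left nearly unchanged.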
The optical characteristic adjustment process is repeated for all the portions A1-An of the first digital image such that a second digital image including portions B1-Bn is generated. This is represented in
In step 200, if some pixels of the first digital image are improperly exposed, it is desirable that they be underexposed rather than overexposed. If individual sensors of the image sensor are saturated such that they output their maximum brightness values (for example, 255 for the red value, 255 for the green color value, and 255 for the blue color value) for a given pixel, then the relative amounts of the colors at the pixel location cannot be determined. If, on the other hand, the pixel is underexposed, then it may appear undesirably dark in the first digital image, but there is a better chance that relative color information is present in the values output by the individual color sensors. The relative amounts of the colors red, green and blue may be correct. The absolute values are just too low. Accordingly, when the pixel is brightened using the screening process, the resulting pixel included in the second digital image will have the proper color ratio. A relatively small aperture area is therefore preferably used to capture the first digital image so that the chance of having saturated image sensors is reduced.
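The advantage of underexposure over saturation can be illustrated with a toy exposure model; the gain values and the clip-at-255 model below are assumptions for illustration, not the disclosed hardware:

```python
def expose(rgb, gain):
    """Toy exposure model (an illustrative assumption): scale scene
    color values by a gain and clip to the 0-255 sensor range."""
    return tuple(min(255, int(v * gain)) for v in rgb)

scene = (100, 150, 50)        # true relative color amounts, ratio 2:3:1

under = expose(scene, 0.4)    # (40, 60, 20): dark, but the 2:3:1 ratio survives
over = expose(scene, 3.0)     # (255, 255, 150): clipping destroys the ratio
```

The underexposed pixel is merely too dim, so brightening it can restore a pixel with the proper color ratio; the saturated pixel has lost the ratio information entirely.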
Although certain specific embodiments are described above for instructional purposes, the present invention is not limited thereto. An optical characteristic other than luminance can be analyzed, identified in certain portions, and compensated for. In one example, the red, green and blue component color values of a pixel are simply added and the resulting sum is the optical characteristic of the pixel. A portion can be one pixel or a block of pixels. The portions of an image that are analyzed in accordance with the novel methods can be of different sizes. A starting digital image can be of the RGB color space, or of another color space. A starting image can be of one color space, and the resulting output digital image can be of another color space. Although embodiments are described above that utilize a variable aperture, two fixed apertures of different opening sizes can be employed to capture the first digital image and the second digital image. Rather than using different aperture settings to obtain the first and second digital images, the duration that an image sensor is exposed in a first digital image and a second digital image can be changed using an electronic shutter signal that is supplied to the image sensor. Other ways of obtaining the first and second digital images that have different optical characteristics can be employed. In one embodiment, one of the images is taken without flash artificial illumination, whereas the other image is taken with flash artificial illumination.
The method using the first and second digital images described above is extendable to include the combining of more than two digital images of the same scene. Digital images of different resolutions can be combined to compensate for improperly exposed areas of an image. Screening or another optical characteristic adjustment process can be applied to adjust an optical characteristic of a part of an image, whereas the combining of a portion of the image with a corresponding portion of a second image can be applied to compensate for exposure problems in a different part of the image. An optical characteristic adjustment process can change color values for only a certain color component or certain color components of a portion. The multiplication factor adjusting process can be extended to smooth an abrupt change in multiplication factors out over two, or three, or more portions. Color information in adjacent portions can be used to influence the optical characteristic adjustment process under certain circumstances such as where individual color sensors have been completely saturated and information on the relative amounts of the composite colors has been lost.
The disclosed methods need not be performed by a processor, but rather may be embodied as dedicated hardware. The disclosed method can be implemented in an inexpensive manner in an electronic consumer device (for example, a cellular telephone, digital camera, or personal digital assistant) by having a processor that is provided in the electronic consumer device for other purposes perform the method in software when the processor is not performing its other functions. A compensating method described above can be a feature that a user of the electronic consumer device can enable and/or disable using a switch or button or keypad or other user input mechanism on the electronic consumer device. Alternatively, the method is always performed and cannot be enabled or disabled by the user. A compensating method can be employed with or without the multiplication factor smoothing process.
An indication that a compensating method has been performed can be displayed to the user of the electronic consumer device by an icon that is made to appear on the display of the electronic consumer device. An electronic consumer device can analyze a part of a first image, determine that the image has exposure problems, rapidly and automatically capture a second digital image of the same scene, and then apply a compensating method to combine the first and second digital images without knowledge of the user. Although optical characteristic adjustment methods are described above in connection with an electronic consumer device, the methods or portions of the methods can be performed by other types of imaging equipment. The described optical characteristic adjustment methods can be performed by a general purpose processing device such as a personal computer. A compensation method can be incorporated into an image processing software package commonly used on personal computers such as Adobe Photoshop. Rather than using just one image sensor, a first image sensor can be used to capture the first digital image and a second image sensor can be used to capture the second digital image. Optical characteristic adjustment methods described above can be applied to images in one or more streams of video information. Accordingly, various modifications, adaptations, and combinations of the various features of the described specific embodiments can be practiced without departing from the scope of the invention as set forth in the claims.