1. Field of the Invention
The present invention relates to an image pickup apparatus, an image processing apparatus, a control method for an image pickup apparatus, and an image processing method.
2. Description of the Related Art
Due to insufficient dynamic range of an image pickup element of an image pickup apparatus, a white spot or a black spot sometimes occurs in an image photographed by the image pickup apparatus. For example, in a case where a person as a main object is photographed outdoors in a backlight scene and the brightness of the background sky is extremely high, a white spot occurs in the sky part of an image photographed under an exposure condition in which the person is properly exposed. The resultant output image becomes considerably different from what is seen with the eyes.
To solve the above problem, there has been proposed a technique that performs photographing with an exposure amount lower than a proper exposure and, at the time of image output, performs brightness gradation conversion to obtain a brightness gradation equivalent to that obtained under proper exposure. With this technique, the brightness of the image photographed in underexposure is compressed on the high-brightness side (as shown by an ellipsoidal dotted line in the drawing), so that contrast is lowered in bright parts such as the sky.
To obviate this problem, a technique has been proposed in which, in a case where the exposure of a main object region is improper, two images are photographed while controlling exposures such that the main object region and a background region have appropriate brightnesses, and these two photographed images are weight-synthesized (see Japanese Laid-open Patent Publication No. 2008-048251). Another technique has been proposed in which an image is divided into a predetermined number of blocks, and each pixel value in each block is corrected using a correction amount calculated based on a gradation conversion characteristic suited to each block and using a weight that varies according to the distance between each pixel and the center of the block concerned, thereby obtaining an output image (see Japanese Laid-open Patent Publication No. 2008-085634).
With the technique disclosed in Japanese Laid-open Patent Publication No. 2008-048251, however, a problem arises in that the output image has low contrast and a small brightness difference between bright and dark parts. This is because, even when various background objects such as sky, plants, and artifacts, each having its own brightness range, are simultaneously present in the background region, brightness gradation conversion is performed on all of them collectively with the same conversion characteristic.
With the technique disclosed in Japanese Laid-open Patent Publication No. 2008-085634, main and background objects such as a person and sky that are present within the angle of view are simply divided into blocks, and the pixel values in each block are simply corrected using an amount of pixel correction based on the gradation conversion characteristic suited to that block. This makes it difficult to perform appropriate gradation control. In addition, since the amount of pixel correction is simply weighted according to the distance to the center of each block, the output image may become different from what is seen with the eyes.
The present invention provides an image pickup apparatus, an image processing apparatus, a control method for an image pickup apparatus, and an image processing method that are capable of obtaining a natural image close to what is seen with eyes and broad in dynamic range.
According to one aspect of this invention, there is provided an image pickup apparatus that acquires a plurality of images for use in generating a synthesized image, comprising a region determination unit configured to determine a plurality of object regions based on image data, a calculation unit configured to calculate representative brightness values of respective ones of the plurality of object regions determined by the region determination unit, a first decision unit configured to decide a first exposure condition based on the representative brightness value of a first object region calculated by the calculation unit, wherein the first object region is a main object region, a second decision unit configured to decide a second exposure condition based on the representative brightness value of the first object region and the representative brightness value of a second object region not including the first object region that are calculated by the calculation unit, wherein the second exposure condition differs from the first exposure condition, and an image acquisition unit configured to acquire a plurality of images by using the first and second exposure conditions.
With this invention, it is possible to obtain a natural image close to what is seen with eyes and broad in dynamic range.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The present invention will now be described in detail below with reference to the drawings showing preferred embodiments thereof.
In this embodiment, appropriate exposures for the sky, background, and person regions 101-103 into which the image 100 is divided are calculated, and processing is performed to obtain exposures appropriate for the respective regions of a synthesized image. It should be noted that in a case where a person is photographed in a backlight scene, the person and the background are usually darker than the sky and therefore become underexposed.
The exposure decision unit 201 has an AE image pickup unit 202, AE image division unit 203, region-dependent brightness calculation unit 204, main object region decision unit 205, and region-dependent exposure calculation unit 206, and decides exposures suitable to photograph respective regions (e.g., sky, background, and person regions) of an object to be photographed.
The AE image pickup unit 202 photographs and acquires an AE image used to decide exposures of respective regions of an object to be photographed.
An exposure condition (e.g., exposure value) in which an AE image is photographed is decided according to an exposure condition that is output from the region-dependent brightness calculation unit (hereinafter referred to as the brightness calculation unit) 204. It should be noted that in an initial state where no exposure condition has been output from the brightness calculation unit 204, the AE image is photographed under a default exposure condition, for example, an exposure condition under which the average brightness of the resultant image becomes a predetermined brightness value.
The AE image division unit 203 divides the AE image 400 into, e.g., a sky region image 401, a background region image 402, and a person region image 403.
The brightness calculation unit 204 reads the image regions into which the AE image is divided by the AE image division unit 203, and calculates the brightnesses of these region images. If it is determined, based on the calculated brightness values, that any of the region images has not been photographed with an exposure suitable for brightness calculation, the brightness calculation unit 204 outputs a new exposure value to the AE image pickup unit 202, and the AE image pickup unit 202 photographs an AE image again with the new exposure value.
At the start of the brightness calculation process, the brightness calculation unit 204 reads the region images into which the AE image has been divided by the AE image division unit 203.
Next, the brightness calculation unit 204 extracts an image of the region of interest from the AE image (step S302).
When the image of the region of interest is extracted from the AE image, a pixel value of 1 is set to each pixel in the region of interest and a pixel value of 0 is set to each pixel in regions other than the region of interest.
For example, when the sky region image is extracted from the AE image, a pixel value of 1 is set to each pixel of the sky region and a pixel value of 0 is set to each pixel of the other regions. When the person region is extracted from the AE image, only the person's face portion is extracted. At that time, a pixel value of 1 is set to each pixel of the face portion, whereas a pixel value of 0 is set to each pixel of the person's neck and the body portions below it, as well as to each pixel of the sky and background regions.
The region and the face portion to which a pixel value of 1 is set when the region of interest is extracted are shown in white in the drawings.
Next, in steps S303-S305, the brightness calculation unit 204 determines whether the AE image currently processed is suitable for brightness calculation.
The brightness calculation unit 204 creates a brightness histogram of the region of interest (step S303), and determines whether a brightness distribution in the created brightness histogram is deviated to either a low-brightness region or a high-brightness region (steps S304 and S305).
To this end, the brightness calculation unit 204 calculates the number of pixels, Nlow, contained in the low-brightness region where the brightness value Y falls within a range of 0≦Y≦Y1, and the number of pixels, Nhi, contained in the high-brightness region.
In step S304, the brightness calculation unit 204 determines whether the calculated number of pixels, Nlow, is equal to or larger than a predetermined threshold value N1. If a relation of Nlow≧N1 is satisfied (YES to step S304), i.e., if the ratio of the number of pixels Nlow to the total number of pixels in the image of the region of interest is large, the brightness calculation unit 204 determines that the brightness distribution is deviated to the low-brightness region and hence the AE image is not suitable for brightness calculation, and the flow proceeds to step S310.
In step S305, the brightness calculation unit 204 determines whether the number of pixels, Nhi, is equal to or larger than a predetermined threshold value N2. If a relation of Nhi≧N2 is satisfied (YES to step S305), i.e., if the ratio of the number of pixels Nhi to the total number of pixels in the image of the region of interest is large, the brightness calculation unit 204 determines that the brightness distribution is deviated to the high-brightness region and hence the AE image is not suitable for brightness calculation, and the flow proceeds to step S311.
If a relation of Nhi<N2 is satisfied (NO to step S305), i.e., if neither the ratio of the number of pixels Nlow nor the ratio of the number of pixels Nhi to the total number of pixels in the image of the region of interest is large, the brightness calculation unit 204 determines that the AE image is suitable for brightness calculation, and the flow proceeds to step S306.
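By way of illustration, the determination of steps S303-S305 can be sketched in Python as follows. The limits Y1 and Y2 of the low- and high-brightness regions and the thresholds (expressed here as ratios of the pixel count of the region of interest, standing in for N1 and N2) are hypothetical values chosen for the example; the embodiment does not fix them.

    import numpy as np

    # Hypothetical limits of the low- and high-brightness regions (8-bit brightness).
    Y1, Y2 = 32, 224
    # Hypothetical deviation thresholds, as ratios of the pixel count of the region of interest.
    RATIO_LOW, RATIO_HIGH = 0.5, 0.5

    def check_brightness_distribution(y, w1):
        """Return 'low', 'high', or 'ok' for the region of interest.
        y  : brightness image (H x W, uint8)
        w1 : region mask (1 inside the region of interest, 0 elsewhere)"""
        roi = y[w1 == 1]
        n_low = np.count_nonzero(roi <= Y1)   # Nlow: pixels with 0 <= Y <= Y1
        n_hi = np.count_nonzero(roi >= Y2)    # Nhi: pixels in the high-brightness region
        if n_low >= RATIO_LOW * roi.size:     # corresponds to Nlow >= N1 (step S304)
            return 'low'                      # deviated to the low-brightness side -> step S310
        if n_hi >= RATIO_HIGH * roi.size:     # corresponds to Nhi >= N2 (step S305)
            return 'high'                     # deviated to the high-brightness side -> step S311
        return 'ok'                           # suitable for brightness calculation -> step S306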
In step S306, the brightness calculation unit 204 sets weights for the read image that includes the image of the region of interest extracted from the AE image. For example, the brightness calculation unit 204 sets a weighting image that allocates a weighting value ranging from 0 to 1 to each pixel of the read image including the image of the region of interest.
Weighting images 421, 422, and 423 are set for the read images 411, 412, and 413 that include the sky region image 401, the background region image 402, and the person region image (face portion) 403, respectively.
The weighting image 421 allocates the same weighting value to all the pixels of the read image 411 including the sky region image 401. The weighting image 422 allocates a weighting value of 1 to each pixel of a central part of the read image 412 including the background region image 402 and allocates to other pixels a weighting value that decreases with increase of a distance from the center of the read image 412. The weighting image 423 allocates the same weighting value to all the pixels of the read image 413 including the person region image (face portion) 403.
Next, in step S307, the brightness calculation unit 204 calculates a brightness value Yarea of the region of interest by weighted average according to formula (3) given below.

Yarea=Σ{w1(i,j)×w2(i,j)×Y(i,j)}/Σ{w1(i,j)×w2(i,j)} (3)
In formula (3), symbol w1(i, j) denotes a pixel value at a coordinate (i, j) in the read image, and the pixel value has a value of 1 in the image of the region of interest and has a value of 0 in other region images. Symbol w2(i, j) denotes a pixel value at a coordinate (i, j) in the weighting image, and Y(i, j) denotes an input brightness value at the coordinate (i, j) in the read image.
In the following, the brightness values Yarea of the sky region, background region, and person region (also called the human region) that are calculated in step S307 are denoted by Y_SKY, Y_BACK, and Y_HUMAN, respectively.
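A minimal Python sketch of the weighted average of formula (3) is given below. The center-weighted w2 used for the background region is a hypothetical realization of the weighting image 422; the embodiment only states that the weight decreases with the distance from the center.

    import numpy as np

    def center_weight(h, w):
        # Hypothetical w2 for the background region (cf. weighting image 422):
        # weight 1 at the image center, decreasing with distance from the center.
        yy, xx = np.mgrid[0:h, 0:w]
        d = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
        return 1.0 - d / d.max()

    def region_brightness(y, w1, w2):
        # Formula (3): Yarea = sum(w1*w2*Y) / sum(w1*w2).
        # y: input brightness Y(i, j); w1: region mask; w2: weighting image.
        den = np.sum(w1 * w2)
        return float(np.sum(w1 * w2 * y.astype(np.float64)) / den) if den > 0 else 0.0

    # Y_SKY, Y_BACK, and Y_HUMAN are obtained by one call per region, e.g.:
    # Y_BACK = region_brightness(y, back_mask, center_weight(*y.shape))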
Next, in step S308, the brightness calculation unit 204 confirms whether the brightness values Yarea of all the regions of interest have been calculated. If the brightness values of all the regions of interest have not been calculated (NO to step S308), the brightness calculation unit 204 proceeds to step S309 where the region of interest is updated, whereupon the flow returns to step S302. On the other hand, if the brightness values of all the regions of interest have been calculated (YES to step S308), the present process is completed.
If determined in step S304 that the relation of Nlow≧N1 is satisfied and the AE image is not suitable for brightness calculation, the brightness calculation unit 204 confirms whether or not the number of times of AE image photographing is equal to or larger than a predetermined number of times (step S310).
If the number of times of photographing is less than the predetermined number of times (NO to step S310), the brightness calculation unit 204 determines that the current AE image has been photographed with an exposure value lower than a proper value, and outputs to the AE image pickup unit 202 a new exposure value higher than the current exposure value by a predetermined amount (step S313), whereupon the present process is completed.
If determined in step S305 that the relation of Nhi≧N2 is satisfied and the AE image is not suitable for brightness calculation, the brightness calculation unit 204 confirms whether the number of times of AE image photographing is equal to or larger than a predetermined number of times (step S311).
If the number of times of photographing is less than the predetermined number of times (NO to step S311), the brightness calculation unit 204 determines that the current AE image has been photographed with an exposure value higher than a proper value, and outputs to the AE image pickup unit 202 a new exposure value lower than the current exposure value by a predetermined amount (step S314), whereupon the present process is completed.
If determined in step S310 or S311 that the number of times of photographing is equal to or larger than the predetermined number of times, the brightness calculation unit 204 gives an instruction to execute exceptional processing, e.g., strobe photographing (step S312), and completes the present process.
As described above, according to the brightness calculation process, the brightness values of the sky, background, and person regions of the AE image are calculated by the brightness calculation unit 204.
Based on the brightness values of the sky, background, and person regions of the AE image calculated by the brightness calculation unit 204, the main object region decision unit 205 selects a main object region from among the sky, background, and person regions as will be described below.
At the start of the object region determination process, the main object region decision unit 205 reads the region images into which the AE image has been divided and selects one of them as a region of interest.
In step S602, processing is performed that is substantially the same as that performed in step S302 of the brightness calculation process, whereby an image of the region of interest is extracted from the AE image.
Next, the main object region decision unit 205 calculates an evaluation value VAL of the region of interest by multiplying an area (size) S of the region of interest by a predetermined coefficient k according to formula (4) given below (step S603).
VAL=S×k=Σw1(i,j)×k (4)
In formula (4), as in formula (3), symbol w1(i, j) represents a pixel value at a coordinate (i, j) in the read image; the pixel value is 1 in the image of the region of interest and 0 in other region images. Thus, the area (size) S of the region of interest can be calculated by summing the pixel values w1(i, j) over the entire read image. The predetermined coefficient k represents the degree of importance of the region of interest in the calculation of the evaluation value VAL and has a value specific to each region of interest. It should be noted that the predetermined coefficient k can be a fixed value or a variable that changes according to the photographic scene.
Next, the main object region decision unit 205 confirms whether the evaluation values of all the regions of interest have been calculated (step S604). If there is a region of interest whose evaluation value has not been calculated (NO to step S604), the main object region decision unit 205 updates the region of interest (step S605), and returns to step S602. On the other hand, if the evaluation values of all the regions of interest have been calculated (YES to step S604), the main object region decision unit 205 determines, as the main object region, the region of interest that is the largest in evaluation value VAL_SKY, VAL_BACK, or VAL_HUMAN among all the regions of interest, i.e., among the sky, background, and person regions (step S606), and completes the present process.
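The evaluation of formula (4) and the decision of step S606 can be sketched as follows; the coefficient values in K are hypothetical, since the embodiment only states that k is specific to each region and may vary with the scene.

    import numpy as np

    # Hypothetical importance coefficients k per region of interest.
    K = {'sky': 0.5, 'back': 1.0, 'human': 2.0}

    def main_object_region(masks):
        """masks: dict mapping region name to its binary mask w1 (steps S602-S606)."""
        vals = {name: np.sum(m) * K[name] for name, m in masks.items()}  # VAL = S x k, formula (4)
        return max(vals, key=vals.get)  # the region largest in VAL becomes the main object region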
As described above, the main object region is determined by the object region determination process. In the example described below, the background region is determined to be the main object region.
At the start of the exposure calculation process, the region-dependent exposure calculation unit (hereinafter referred to as the exposure calculation unit) 206 calculates Bv value correction amounts for the respective image regions based on the brightness values and target brightness values of the regions of the AE image (step S801).
A Bv value is a numerical value that represents the brightness of an image. In this example, the Bv value corresponds to the brightness value Y_SKY, Y_BACK, or Y_HUMAN of the sky, background, or person region of the AE image. The Bv value has a logarithmic characteristic relative to brightness; in other words, the brightness doubles each time the Bv value increases by one.
The Bv value correction amount is an amount of correction to the Bv value (exposure control value), and is used for exposure condition control to control the brightness value of each image region to a target brightness value Y_TARGET_SKY, Y_TARGET_BACK, or Y_TARGET_HUMAN of the image region.
The Bv value correction amounts ΔBv_SKY, ΔBv_BACK, and ΔBv_HUMAN for sky, background, and person regions of the AE image can be calculated according to formulae (5)-(7) given below.
ΔBv_SKY=log2(Y_SKY/Y_TARGET_SKY) (5)
ΔBv_BACK=log2(Y_BACK/Y_TARGET_BACK) (6)
ΔBv_HUMAN=log2(Y_HUMAN/Y_TARGET_HUMAN) (7)
In formulae (8)-(10) given below, Bv_CAPTURE denotes the Bv value corresponding to the exposure condition under which the AE image was photographed.
In step S802, the exposure calculation unit 206 calculates proper Bv values Bv_SKY, Bv_BACK, and Bv_HUMAN for respective image regions according to formulae (8)-(10) given below.
Bv_SKY=Bv_CAPTURE+ΔBv_SKY (8)
Bv_BACK=Bv_CAPTURE+ΔBv_BACK (9)
Bv_HUMAN=Bv_CAPTURE+ΔBv_HUMAN (10)
In step S803, the exposure calculation unit 206 decides, as a Bv value Bv_MAIN for the main object region, one of the proper Bv values for the respective regions calculated in step S802 (the proper Bv value Bv_BACK for the background region in this example), as shown in formula (11) given below.
Bv_MAIN=Bv_BACK (11)
The exposure calculation unit 206 also calculates output Bv values Bv_SKY_OUT, Bv_BACK_OUT, and Bv_HUMAN_OUT for the sky, background, and person regions based on the Bv value Bv_MAIN of the main object region and the proper Bv values Bv_SKY, Bv_BACK, and Bv_HUMAN for the respective image regions according to formulae (12)-(14) given below (step S803).
Bv_SKY_OUT=(Bv_SKY+Bv_MAIN)/2 (12)
Bv_BACK_OUT=(Bv_BACK+Bv_MAIN)/2 (13)
Bv_HUMAN_OUT=(Bv_HUMAN+Bv_MAIN)/2 (14)
As described above, the background region is selected as the main object region in this example. Formulae (12) and (14) indicate that the Bv values for the sky and person regions are controlled so as to be close to the Bv value Bv_MAIN for the main object region. In other words, appropriate exposures of the sky and person regions (which are different from proper exposures of these regions) are set by taking into account a relation between the brightness of the main object region and the brightnesses of other regions. This makes it possible to prevent a synthesized image from becoming an unnatural image (such as a synthesized image which is obtained by synthesizing images photographed with proper exposures for image regions into a single image) where brightness discontinuity is caused between image regions.
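The calculations of steps S801-S803, i.e., formulae (5)-(14), can be summarized in the following Python sketch; the numerical values in the usage example are hypothetical.

    import math

    def exposure_calculation(y, y_target, bv_capture, main):
        """y, y_target: measured and target brightnesses per region;
        bv_capture: Bv value of the exposure used for the AE image;
        main: name of the main object region."""
        dbv = {r: math.log2(y[r] / y_target[r]) for r in y}   # formulas (5)-(7): Bv correction amounts
        bv = {r: bv_capture + dbv[r] for r in y}              # formulas (8)-(10): proper Bv values
        bv_main = bv[main]                                    # formula (11): Bv value of main object region
        return {r: (bv[r] + bv_main) / 2.0 for r in bv}       # formulas (12)-(14): output Bv values

    # Hypothetical backlit scene with the background region as the main object region:
    out_bv = exposure_calculation(
        y={'sky': 230.0, 'back': 60.0, 'human': 25.0},
        y_target={'sky': 180.0, 'back': 120.0, 'human': 110.0},
        bv_capture=7.0, main='back')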
Next, the exposure calculation unit 206 decides exposure conditions for the respective image regions based on the output Bv values for the image regions (step S804). It is assumed in this example that the exposure conditions decided in step S804 are each determined by aperture value, shutter speed, and photographing sensitivity, and that each exposure condition is controlled based only on shutter speed and photographing sensitivity according to the output Bv value, by an exposure condition control method set beforehand in the image pickup apparatus. With this exposure condition control, it is possible to prevent the synthesized image from being degraded in quality due to a phenomenon (such as the extent of blur or the image magnification changing between photographed images) that occurs in a case where plural images are photographed while changing the aperture value.
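One conventional way to realize step S804, i.e., to convert an output Bv value into a shutter speed and a photographing sensitivity while the aperture is held fixed, is the APEX relation Tv = Bv + Sv - Av. The APEX formulation and the limit values below are assumptions made for illustration and are not taken from the embodiment.

    import math

    def exposure_condition(bv_out, av_fixed=4.0, t_max=1.0 / 30.0):
        """Assumed realization of step S804 using the APEX system:
        av_fixed = 4 corresponds to f/4, Sv = 5 corresponds to ISO 100,
        t_max is a hypothetical hand-shake limit on the shutter time."""
        sv = 5.0
        t = 2.0 ** (av_fixed - bv_out - sv)   # Tv = Bv + Sv - Av, t = 2^(-Tv)
        while t > t_max:                      # shutter too slow: double the sensitivity instead
            sv += 1.0
            t = 2.0 ** (av_fixed - bv_out - sv)
        iso = 100.0 * 2.0 ** (sv - 5.0)
        return t, iso                         # shutter time [s] and photographing sensitivity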
As described above, according to the exposure calculation process, exposure conditions suitable for photographing the respective image regions are decided by the exposure calculation unit 206.
The region-dependent exposure image pickup unit (also referred to as the exposure image pickup unit) 207 of the image processing apparatus 200 performs a photographing operation in exposure conditions decided by the exposure calculation unit 206. In this embodiment, three images (hereinafter, referred to as the sky exposure image, background exposure image, and person exposure image, respectively, and collectively referred to as the region-dependent exposure images) are photographed with exposures respectively appropriate to sky, background, and person.
The signal processing unit 208 has a first signal processing unit 208A that performs first signal processing to generate images 1010 for position deviation detection from the region-dependent exposure images 1000 supplied from the exposure image pickup unit 207, and has a second signal processing unit 208B that performs second signal processing to generate images 1020 for synthesis from the region-dependent exposure images 1000.
In the first signal processing, the images 1010 for position deviation detection are generated from the region-dependent exposure images 1000. More specifically, a sky image 1011 for position deviation detection, a background image 1012 for position deviation detection, and a person image 1013 for position deviation detection are respectively generated from the sky exposure image 1001, background exposure image 1002, and person exposure image 1003. These images 1010 for position deviation detection are output to the position deviation detection unit 209 of the image processing apparatus 200.
The region-dependent exposure images 1000 supplied from the exposure image pickup unit 207 to the signal processing unit 208 differ in brightness level from one another, whereas it is preferable that the images 1010 for position deviation detection be uniform in brightness level. To this end, in the first signal processing, a gain adjustment is made to the region-dependent exposure images 1000 so that the images 1010 for position deviation detection become uniform in brightness level. It should be noted that any one of the brightness levels of the images 1011-1013 may be used as the level to which the brightness levels of the other images are matched. In the first signal processing, brightness gradation conversion, noise reduction, etc. are also performed.
In the second signal processing, the images 1020 for synthesis are generated from the region-dependent exposure images 1000. More specifically, the sky image 1021 for synthesis, background image 1022 for synthesis, and person image 1023 for synthesis are generated from the sky exposure image 1001, background exposure image 1002, and person exposure image 1003, respectively. In the second signal processing, although brightness gradation conversion, noise reduction, etc. are performed, a gain adjustment to make the region-dependent exposure images 1000 uniform in brightness level is not performed unlike in the first signal processing. The images 1020 for synthesis are output to the image alignment unit 210 of the image processing apparatus 200.
Due to hand shake, position deviations are caused among the region-dependent exposure images 1000 photographed by the exposure image pickup unit 207. It is therefore necessary to detect and correct the position deviations among these images before they are synthesized.
The position deviation detection unit 209 of the image processing apparatus 200 receives the images 1010 for position deviation detection from the signal processing unit 208, and detects position deviations among the images. In this embodiment, the position deviation detection unit 209 uses an existing technique in which an image is divided into blocks, a group of movement vectors relative to a reference image is calculated from the blocks, and coefficients of a projective transformation representing the position deviations are calculated by the least squares method using the calculated movement vectors.
In this embodiment, the background image 1012 for position deviation detection is obtained by performing the first signal processing on the background exposure image 1002 photographed by the exposure image pickup unit 207 with the exposure for the background region (main object region), and this background image 1012 for position deviation detection is used as the reference image. Then, position deviation parameters H1, H2 (projective transformation coefficients) that represent position deviations of the sky and person images 1011, 1013 for position deviation detection relative to the reference image (i.e., the background image 1012 for position deviation detection) are calculated by and output from the position deviation detection unit 209.
The image alignment unit 210 of the image processing apparatus 200 aligns positions of the images 1020 for synthesis generated in the second signal processing. In this example, the image alignment unit 210 modifies the sky and person images 1021, 1023 for synthesis by using the position deviation parameters H1, H2 that are output from the position deviation detection unit 209, thereby obtaining sky and person images for synthesis that are aligned in position with the background image 1022 for synthesis.
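A sketch of the position deviation detection (unit 209) and alignment (unit 210) using OpenCV is given below. Grid-point tracking followed by a least-squares homography fit is one existing technique of the kind the embodiment refers to; the grid spacing is hypothetical, and single-channel 8-bit detection images are assumed.

    import cv2
    import numpy as np

    def align_to_reference(ref_det, mov_det, mov_synth):
        """ref_det, mov_det: images for position deviation detection (reference and moving);
        mov_synth: corresponding image for synthesis to be modified."""
        h, w = ref_det.shape
        # Block-wise grid of points in the reference image.
        ys, xs = np.mgrid[16:h - 16:32, 16:w - 16:32]
        pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)
        # Track the grid points into the moving image (movement vector group).
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref_det, mov_det, pts, None)
        good = status.ravel() == 1
        # Least-squares projective transform coefficients (position deviation parameters).
        H, _ = cv2.findHomography(nxt[good], pts[good], 0)
        # Modify the image for synthesis so that it is aligned with the reference image.
        return cv2.warpPerspective(mov_synth, H, (w, h))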
The image synthesis unit 211 of the image processing apparatus 200 synthesizes the position-aligned images for synthesis by using a ternary image 1110 in which different values are allocated to the sky, background, and person regions, so that each region of the resultant image is taken from the image photographed with the exposure appropriate to that region.
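The region-switched synthesis can be sketched as follows; the codes 0/1/2 allocated in the ternary image are hypothetical.

    import numpy as np

    SKY, BACK, HUMAN = 0, 1, 2   # hypothetical values allocated in the ternary image 1110

    def synthesize(ternary, sky_img, back_img, human_img):
        """Each pixel of the synthesized image 1120 is taken from the aligned image
        for synthesis photographed with the exposure appropriate to its region."""
        out = np.empty_like(back_img)
        for code, img in ((SKY, sky_img), (BACK, back_img), (HUMAN, human_img)):
            out[ternary == code] = img[ternary == code]
        return out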
The synthesized image 1120 whose sky, background, and person regions have appropriate exposures is generated by the image synthesis unit 211 as described above, and is output to the image display unit 212 and to the image storage unit 213 of the image processing apparatus 200. The image display unit 212 displays the synthesized image 1120, and the image storage unit 213 stores image data of the synthesized image 1120.
As described above, according to this embodiment, an image is divided into predetermined image regions, appropriate exposure conditions for these image regions are determined based on a relation between brightnesses of the image regions, and plural images respectively photographed in the determined exposure conditions are synthesized into a synthesized image. It is therefore possible to obtain an image closer to what is seen with eyes and broader in dynamic range, as compared to an image obtained by a conventional method that compresses the gradation of the whole image with a predetermined brightness gradation conversion characteristic.
In the following, a description will be given of an image processing apparatus according to a second embodiment.
In the second embodiment, unlike the first embodiment in which plural images respectively photographed under exposure conditions appropriate for the image regions are synthesized into a synthesized image, a desired image is generated from a single image by multiplying image regions, whose appropriate exposure conditions differ from one another, by different gains.
The reference exposure/gain decision unit 1201 has an AE image pickup unit 1202, AE image division unit 1203, region-dependent brightness calculation unit 1204, and reference exposure/region-dependent gain calculation unit 1205.
The AE image pickup unit 1202, AE image division unit 1203, and region-dependent brightness calculation unit 1204 respectively perform AE image pickup/acquisition processing, AE image division processing, and a brightness calculation process that are the same as those executed by the AE image pickup unit 202, AE image division unit 203, and region-dependent brightness calculation unit 204 of the first embodiment, and a description thereof will be omitted.
At the start of the gain calculation process, the reference exposure/region-dependent gain calculation unit (hereinafter referred to as the exposure/gain calculation unit) 1205 calculates Bv value correction amounts and proper Bv values for the respective image regions, as in the exposure calculation process of the first embodiment.
Next, the exposure/gain calculation unit 1205 executes a reference region decision process (step S1303), thereby deciding a reference region, i.e., a region that is to be used as a reference to decide an exposure condition in which a single image is to be photographed.
In this embodiment, a region area priority method, a brightness gradation priority method, or a low-noise priority method is used to decide the reference region. The desired method can be selected from these methods and can be set in advance in the image processing apparatus. Alternatively, the desired method can be switched according to a photographing scene, or can be selected by the user.
At the start of the reference region decision process, the exposure/gain calculation unit 1205 determines whether the reference region decision method that has been set is the region area priority method (step S1401).
The region area priority method refers to a method where a main object region is determined and decided as a reference region, as in the object region determination process executed by the main object region decision unit 205 in the first embodiment.
If the reference region decision method is the region area priority method (YES to step S1401), the exposure/gain calculation unit 1205 performs an object region determination process that is the same as that performed by the main object region decision unit 205 (step S1402). More specifically, the exposure/gain calculation unit 1205 calculates evaluation values VAL by multiplying the areas (sizes) S of the respective regions of the AE image by the predetermined coefficient k, and determines, as the main object region, the region that is the largest in evaluation value. Next, in step S1403, the exposure/gain calculation unit 1205 decides the main object region determined in step S1402 as the reference region, completes the present process, and returns to the gain calculation process.
If the reference region decision method is not the region area priority method (NO to step S1401), the exposure/gain calculation unit 1205 determines whether the reference region decision method is the brightness gradation priority method (step S1404).
If the reference region decision method is the brightness gradation priority method (YES to step S1404), the exposure/gain calculation unit 1205 decides, as the reference region, the region that is the largest in proper Bv value (step S1405).
If the reference region decision method is not the brightness gradation priority method (NO to step S1404), the exposure/gain calculation unit 1205 determines that the reference region decision method is the low-noise priority method.
The low-noise priority method refers to a method in which the reference region is set so as to prevent the gain amounts to be multiplied to an image from exceeding a gain amount upper limit, which is set based on a relation among the resolution and noise level of the image pickup element and an allowable noise amount in the image. The gain amounts are calculated based on Bv differences among the image regions, and therefore an allowable Bv difference (hereinafter referred to as the Bv difference threshold value) can be calculated once the upper limit of the gain amounts is set.
In the case of deciding the reference region by the low-noise priority method, the exposure/gain calculation unit 1205 selects, as a region of interest, the region that is the largest in proper Bv value (the sky region in this example), and selects, as a target region, the region that is the smallest in proper Bv value (the person region in this example).
Next, the exposure/gain calculation unit 1205 determines a difference between the proper Bv values for the region of interest and the target region (this difference corresponds to a maximum value of gain amounts to be applied to an image photographed such that the region of interest is appropriately photographed), and determines whether the determined difference is equal to or larger than the Bv difference threshold value (step S1408).
If the difference between the proper Bv values for the region of interest and the target region is smaller than the Bv difference threshold value (NO to step S1408), the exposure/gain calculation unit 1205 decides the current region of interest as the reference region.
On the other hand, if the difference between the proper Bv values is equal to or larger than the Bv difference threshold value (YES to step S1408), the exposure/gain calculation unit 1205 determines that the maximum value of the gain amounts is larger than the upper limit of the gain amounts, and changes the current region of interest to the region that is the next largest in proper Bv value (the background region in this example).
Next, the exposure/gain calculation unit 1205 determines whether the region of interest is the same as the target region (step S1411). If the region of interest is not the same as the target region (NO to step S1411), the flow returns to step S1408, where the difference between the proper Bv value for the changed region of interest and that for the target region is compared with the Bv difference threshold value.
In this example, the difference between the proper Bv values for the background region (the region of interest after the change) and the person region (the target region) is smaller than the Bv difference threshold value, and therefore the background region is decided as the reference region.
On the other hand, if the region of interest is the same as the target region (YES to step S1411), i.e., if the Bv difference between the target region and any other region is equal to or larger than the Bv difference threshold value, the exposure/gain calculation unit 1205 gives an instruction to perform exceptional processing, e.g., strobe emission processing (step S1412).
It should be noted that if the background region is selected as the reference region in this example, the sky region, which is larger in proper Bv value than the reference region, is multiplied by a gain amount less than 1, as described later.
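The low-noise priority loop of steps S1406-S1412 can be sketched as follows. The relation TH_ΔBv = log2(gain upper limit) follows from the fact that a Bv difference of d requires a gain of 2^d; the numerical values in the example are hypothetical.

    import math

    def low_noise_reference_region(bv, gain_max):
        """bv: proper Bv value per region; gain_max: gain amount upper limit."""
        th_dbv = math.log2(gain_max)                  # Bv difference threshold value
        order = sorted(bv, key=bv.get, reverse=True)  # regions by descending proper Bv
        target = order[-1]                            # target region: smallest proper Bv
        for roi in order:                             # region of interest walks downward
            if roi == target:                         # step S1411: reached the target region
                return None                           # -> exceptional processing (step S1412)
            if bv[roi] - bv[target] < th_dbv:         # step S1408
                return roi                            # region of interest becomes reference region
        return None

    # Hypothetical example: the sky region is too far from the person region,
    # so the background region becomes the reference region.
    print(low_noise_reference_region({'sky': 10.0, 'back': 7.0, 'human': 5.5}, gain_max=4.0))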
After the reference region is decided as described above, the exposure/gain calculation unit 1205 completes the reference region decision process, returns to the gain calculation process, and calculates output Bv values for the respective image regions (step S1304).
Next, the exposure/gain calculation unit 1205 selects, as the reference Bv value, the output Bv value for the reference region from among the output Bv values calculated in step S1304, and decides as the reference exposure an exposure condition corresponding to the reference Bv value (step S1305). The exposure/gain calculation unit 1205 then calculates differences between the reference Bv value and the Bv values for respective image regions, and calculates gain amounts to be multiplied to the image regions (step S1306).
In this example, where the background region has been decided as the reference region, the reference Bv value and the gain amounts are calculated according to formulae (15)-(21) given below.
Bv_STD_OUT=Bv_BACK_OUT (15)
ΔBv_SKY_OUT=Bv_STD_OUT−Bv_SKY_OUT (16)
ΔBv_BACK_OUT=Bv_STD_OUT−Bv_BACK_OUT (17)
ΔBv_HUMAN_OUT=Bv_STD_OUT−Bv_HUMAN_OUT (18)
GAIN_SKY=2^ΔBv_SKY_OUT (19)
GAIN_BACK=2^ΔBv_BACK_OUT (20)
GAIN_HUMAN=2^ΔBv_HUMAN_OUT (21)
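Formulae (15)-(21) reduce to the following sketch; the output Bv values in the usage example are hypothetical.

    def region_gains(bv_out, ref):
        """bv_out: output Bv value per region; ref: name of the reference region."""
        bv_std = bv_out[ref]                                        # formula (15)
        return {r: 2.0 ** (bv_std - v) for r, v in bv_out.items()}  # formulas (16)-(21)

    # Hypothetical example with the background region as the reference region:
    gains = region_gains({'sky': 9.0, 'back': 7.5, 'human': 6.5}, ref='back')
    # -> GAIN_SKY ~ 0.35 (< 1), GAIN_BACK = 1, GAIN_HUMAN = 2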
In this example, the difference ΔBv_BACK_OUT is zero, and hence the gain amount GAIN_BACK for the background region serving as the reference region equals 1.
When photographing is performed with an exposure that makes the output Bv value for the background region proper, the sky region, which is larger in Bv value than the background region (i.e., brighter), is photographed in more overexposure than under its appropriate exposure condition, and the person region, which is smaller in Bv value (i.e., darker) than the background region, is photographed in more underexposure than under its appropriate exposure condition. In that case, to appropriately control the brightnesses of the sky and person regions, a gain amount less than 1 is multiplied to the sky region, a gain amount equal to or larger than 1 is multiplied to the person region, and a gain amount equal to 1 is multiplied to the background region. To prevent color balance from being lost, special handling is applied to a gain amount less than 1 (e.g., the gain amount GAIN_SKY for the sky region in this example).
As described above, according to the gain calculation process, the reference exposure and the gain amounts to be multiplied to the respective image regions are calculated by the exposure/gain calculation unit 1205.
The gain processing unit 1207 receives the gain amounts GAIN_SKY, GAIN_BACK, and GAIN_HUMAN that are calculated by the exposure/gain calculation unit 1205 and are to be multiplied to the sky, background, and person regions, a reference exposure image (e.g., background exposure image) 1800 photographed by the reference exposure image pickup unit 1206, and a ternary image 1810. As with the ternary image 1110 of the first embodiment, the ternary image 1810 has different values allocated to its sky, background, and person regions.
The gain processing unit 1207 multiplies the gain amounts GAIN_SKY, GAIN_BACK, and GAIN_HUMAN respectively to the sky, background, and person regions of the reference exposure image, while changing the gain amounts according to the values allocated to respective regions of the ternary image 1810, thereby generating a gain image 1820.
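The operation of the gain processing unit 1207 can be sketched as follows; the region codes and the linear 16-bit pixel format are assumptions made for the example.

    import numpy as np

    SKY, BACK, HUMAN = 0, 1, 2   # hypothetical values allocated in the ternary image 1810

    def apply_region_gains(ref_img, ternary, gain_sky, gain_back, gain_human):
        """Multiply each region of the reference exposure image 1800 by its gain
        amount, switching the gain according to the ternary image 1810, to obtain
        the gain image 1820."""
        gain_map = np.choose(ternary, [gain_sky, gain_back, gain_human]).astype(np.float64)
        if ref_img.ndim == 3:
            gain_map = gain_map[..., None]   # broadcast over color channels
        out = ref_img.astype(np.float64) * gain_map
        return np.clip(out, 0, 65535).astype(np.uint16)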
As described above, the gain image 1820 having regions to which appropriate gain amounts are respectively multiplied is generated by and output from the gain processing unit 1207.
The signal processing unit 1208 of the image processing apparatus 1200 performs signal processing such as predetermined brightness gradation conversion and noise reduction on the gain image 1820 output from the gain processing unit 1207. The image processed in the signal processing unit 1208 is supplied as a final image (output image) to the image display unit 1209 and to the image storage unit 1210. The image display unit 1209 displays the output image, and the image storage unit 1210 stores image data of the output image.
As described above, according to this embodiment, an image is divided into predetermined image regions, a reference exposure condition is determined based on a relation among brightnesses of respective image regions, and gain amounts appropriate for the image regions are multiplied to an image photographed in the reference exposure condition to thereby obtain a final output image. It is therefore possible to obtain an image that is closer to what is seen with eyes and broader in dynamic range as compared to an image obtained by a conventional method that compresses the gradation of the whole image with a predetermined brightness gradation conversion characteristic.
In the following, a description will be given of an image processing apparatus according to a third embodiment.
In the first embodiment, processing is performed to synthesize plural images photographed with different exposures into a synthesized image (hereinafter referred to as the plural images-based processing). The plural images-based processing is advantageous in that the noise amount in the synthesized image can be made small and uniform between region images by controlling the exposures through the shutter speed while keeping the photographing sensitivity unchanged. However, if a person to be photographed is moving, a problem arises in that a region in which a part of the person image appears (hereinafter referred to as an occlusion region) is generated in the background or sky region of the synthesized image.
In the second embodiment, processing (hereinafter referred to as the single image-based processing) is performed in which gain amounts that differ between image regions are multiplied to a single photographed image to obtain a desired image. The single image-based processing is advantageous in that no occlusion region is generated in the photographed image. However, because the gain amounts differ between image regions, the amount of noise generation also differs between the image regions and becomes large in an image region multiplied by a large gain amount, resulting in degraded image quality.
In the third embodiment, in order to improve the image quality, the plural images-based processing or the single image-based processing is selectively carried out according to the object state, e.g., according to the brightness differences among image regions, which determine the amounts of noise generation, or according to the amount of a person's movement, which determines the degree of generation of an occlusion region.
This image processing apparatus 2000 has an AE image pickup unit 2001, AE image division unit 2002, region-dependent brightness calculation unit 2003, person's movement amount calculation unit 2004, processing type determination unit 2005, plural images-based processing unit 2006, single image-based processing unit 2007, image display unit 2008, and image storage unit 2009.
The AE image pickup unit 2001, AE image division unit 2002, and region-dependent brightness calculation unit 2003 perform AE image pickup/acquisition processing, AE image division processing, and a brightness calculation process that are the same as the processing performed by the AE image pickup unit 202, AE image division unit 203, and region-dependent brightness calculation unit 204 of the exposure decision unit 201 of the first embodiment, and a description thereof will be omitted.
At the start of the person's movement amount calculation process, the person's movement amount calculation unit (hereinafter referred to as the movement amount calculation unit) 2004 determines whether a person's face is present in each of the AE image and an image photographed immediately before it (hereinafter referred to as the preceding image) (step S2101).
The AE image is photographed one time or plural times. If, for example, a new exposure value is output in step S313 or S314 of the brightness calculation process, the AE image is photographed again with the new exposure value; in that case, the previously photographed AE image can be used as the preceding image.
In the following description, it is assumed that a person's face 2211 is detected from the preceding image 2201 and a person's face 2212 is detected from the AE image 2202.
If a person's face is present in each of the AE image and the preceding image (YES to step S2101), the movement amount calculation unit 2004 acquires the amount of the person's movement from a face detection history (step S2102). In this embodiment, when the persons' faces 2211 and 2212 are detected from the preceding image 2201 and the AE image 2202, at least the start coordinates of the face regions 2211 and 2212 are output. The movement amount calculation unit 2004 calculates the magnitude of a vector MV_FACE that represents the difference between the start coordinate of the face region 2211 in the preceding image 2201 and the start coordinate of the face region 2212 in the AE image 2202, and uses this magnitude as the amount of the person's movement.
On the other hand, if no person's face is present in either the AE image or the preceding image (NO to step S2101), the movement amount calculation unit 2004 sets the amount of the person's movement to zero (step S2103).
It should be noted that if plural persons' faces are detected from each of the AE image and the preceding image, the face of one person who is the main object is determined, and the amount of person's movement is calculated based on a determination result.
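The movement amount calculation of steps S2101-S2103 amounts to the following sketch; face detections are represented here simply by the start coordinates (x, y) of the face regions.

    import math

    def person_movement_amount(face_prev, face_curr):
        """face_prev, face_curr: start coordinates of the detected face region in the
        preceding image and the AE image, or None when no face was detected."""
        if face_prev is None or face_curr is None:
            return 0.0                          # step S2103
        (x0, y0), (x1, y1) = face_prev, face_curr
        return math.hypot(x1 - x0, y1 - y0)     # |MV_FACE| (step S2102)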
At the start of the processing type determination process, the processing type determination unit 2005 calculates Bv value correction amounts for the respective image regions based on the brightness values and target brightness values for the regions of the AE image, as in step S801 of the exposure calculation process of the first embodiment, and then calculates a difference ΔBv between the largest and smallest of the Bv values for the image regions.
Next, the processing type determination unit 2005 performs a processing type decision to decide either the plural images-based processing or the single image-based processing, whichever is to be used (step S2303), and completes the present process.
The gain amount to be multiplied to the image becomes smaller as the difference ΔBv decreases, and accordingly the degree of degradation of image quality due to execution of the single image-based processing becomes smaller. If the difference ΔBv is smaller than a predetermined threshold value TH_ΔBv, it is therefore decided that the single image-based processing is to be performed.
The degree of influence of an occlusion region, which is generated in a synthesized image by execution of the plural images-based processing, upon image quality becomes smaller as the amount of the person's movement decreases. If the amount of the person's movement is smaller than a predetermined threshold value TH_MV, it is decided that the plural images-based processing is to be performed.
If the difference ΔBv is larger than the predetermined threshold value TH_ΔBv and the amount of the person's movement is larger than the predetermined threshold value TH_MV, it is decided that the single image-based processing is to be performed.
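Under the ordering assumed above (the ΔBv test first, then the movement test), the processing type decision of step S2303 reduces to the following sketch.

    def decide_processing(delta_bv, movement, th_dbv, th_mv):
        """Returns which processing generates the output image (step S2303)."""
        if delta_bv < th_dbv:
            return 'single'   # small gain amounts: noise penalty of single image-based processing is small
        if movement < th_mv:
            return 'plural'   # small movement: occlusion penalty of plural images-based processing is small
        return 'single'       # both large: avoid occlusion regions at the cost of noise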
As described above, which of the plural images-based processing or the single image-based processing is to be performed is determined by the processing type determination unit 2005.
If the processing type determination unit 2005 determines that the plural images-based processing is to be performed, the plural images-based processing unit 2006 performs the plural images-based processing (which is the same as the processing performed by the units from the main object region decision unit 205 to the image synthesis unit 211 of the image processing apparatus 200 of the first embodiment), thereby synthesizing plural images into a synthesized image and outputting the synthesized image as an output image.
On the other hand, if the processing type determination unit 2005 determines that the single image-based processing is to be performed, the single image-based processing unit 2007 performs the single image-based processing (which is the same as the processing performed by the units from the exposure/gain calculation unit 1205 to the signal processing unit 1208 of the image processing apparatus of the second embodiment), thereby generating and outputting an output image.
The output image generated by and output from either the plural images-based processing unit 2006 or the single image-based processing unit 2007 is supplied to the image display unit 2008 and to the image storage unit 2009.
As described above, according to this embodiment, which of the plural images-based processing or the single image-based processing is to be performed is determined based on the difference ΔBv between reference Bv values for image regions and based on the amount of person's movement. It is therefore possible to generate the output image while enjoying the advantages of both the plural images-based processing and the single image-based processing, whereby the quality of the output image can be improved.
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2013-022309, filed Feb. 7, 2013, which is hereby incorporated by reference herein in its entirety.