1. Field of the Invention
The present invention relates to an image pickup apparatus which is capable of picking up moving images, and a control method thereof. In particular, the present invention relates to an image pickup apparatus that automatically controls exposure by driving the aperture based on picked-up images, and a control method thereof.
2. Description of the Related Art
In recent years, digital single lens reflex (DSLR) cameras with interchangeable lens systems have become capable of picking up moving images, and also come with a live-view function. Interchangeable lenses of DSLR cameras can be divided into two types: the first type performs aperture driving with an aperture varying means placed within the interchangeable lens; the second type performs aperture driving from the camera body through a mechanical transmission mechanism. Interchangeable lenses of the first type can drive the aperture in finely divided steps, allowing smoother adjustment of the exposure conditions. In comparison, it is difficult to drive the aperture smoothly with interchangeable lenses of the second type.
In order to resolve problems inherent in the second-type interchangeable lenses, Japanese Patent Laid-Open No. 2002-290828 proposes a technique of restricting the number of aperture value steps and controlling the shutter speed at each of the aperture values, thereby suppressing aperture driving. According to Japanese Patent Laid-Open No. 2002-290828, it is possible for the second-type interchangeable lenses to attain a level of smoothness close to that of the first-type interchangeable lenses when changing the exposure conditions in response to a change in luminance of the object.
On the other hand, in recent years, many DSLR cameras with the above-mentioned interchangeable lens systems employ CMOS (Complementary Metal-Oxide Semiconductor) image sensors as the image pickup device, in place of the traditional CCD sensors. CMOS image sensors are advantageous in that, compared to CCD image sensors, they consume less power, operate at a lower voltage, and can read out electric charge at higher speed.
Meanwhile, automatic exposure control in cameras is, as is already well known, performed by following a program diagram, which indicates the relationship between the aperture value of the lens, the shutter speed, and the EV value. Appropriate exposure is attained by controlling the aperture and the shutter speed so that they suit the EV value of the object according to the program diagram.
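For illustration only, the relationship encoded by a program diagram can be written in APEX units as EV = AV + TV, where AV = 2·log2(F-number) and TV = −log2(exposure time in seconds). The following sketch shows how a shutter time can be derived for a given EV once an aperture value has been fixed; the function names and values are illustrative and are not part of the disclosed apparatus.

import math

def apex_av(f_number):
    """Aperture value (AV) in APEX units for a given F-number."""
    return 2.0 * math.log2(f_number)

def shutter_time_on_program_line(ev, f_number):
    """Exposure time (seconds) that keeps exposure on the program line
    EV = AV + TV once the aperture value has been fixed."""
    tv = ev - apex_av(f_number)  # required time value
    return 2.0 ** (-tv)

# Example: an object at EV 12 with the aperture held at F4.0.
# AV = 4, so TV = 8 and the shutter time is 1/256 s.
print(shutter_time_on_program_line(12.0, 4.0))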
On the other hand, because of delays in communication between the camera body and the lens and mechanical delays in the aperture driving itself, the period of aperture driving may span multiple frames of a moving image. Control of the shutter speed can be performed electronically on the camera body side by controlling the electric charge accumulation time at the image pickup device. In this case, control of the aperture lags behind the control of the shutter speed, causing the shutter speed and aperture value to deviate from the line of the program diagram. As a result, appropriate exposure control cannot be attained, leading to quality deterioration in picked-up moving images.
When using a method such as that described in Japanese Patent Laid-Open No. 2002-290828 above, situations occur in which shutter speed and aperture value deviate from the line of the program diagram and exposure control is not appropriately performed. This is particularly prominent when a CMOS image sensor is used as the image pickup device and the shutter control is performed using an electronic rolling shutter.
This problem is explained below with reference to the accompanying drawings.
Because the image pickup device is driven by an electronic rolling shutter, a frame delay occurs relative to the change in luminance of the object, as in the example illustrated in the accompanying drawings.
In the case of an electronic rolling shutter, a commanded shutter speed takes effect immediately, as illustrated in the accompanying drawings.
To the contrary, driving of the aperture, as mentioned above, is delayed for communicative and mechanical reasons. Therefore, it takes time, for example several frames, from the initiation of control until the desired aperture value is reached, as in the example given in the accompanying drawings.
For this reason, until frame #5, at which the desired aperture value is attained, exposure control in accordance with the program diagram is not performed, resulting in overexposed (or underexposed) images. This causes problems such as deterioration of picked-up image quality and overexposure (or underexposure) of displayed or recorded images.
Accordingly, a feature of the present invention is to provide an image pickup apparatus capable of suppressing changes in exposure in response to changes in luminance of the object when picking up a moving image, and a method of controlling the image pickup apparatus.
According to an aspect of the present invention, there is provided an image pickup apparatus comprising: an image pickup unit that generates an image signal by photoelectric conversion of light flux incoming via an aperture; a detection unit that detects luminance of an image signal generated by the image pickup unit; a computing unit that computes an aperture value of the aperture based on the detection result of the detection unit; an exposure control unit that performs exposure control by adjusting the aperture to the aperture value computed by the computing unit; and a correction unit that performs correction of luminance on an image signal generated by photoelectric conversion of the light flux incoming to the image pickup unit when performing adjustment of the aperture in response to change in luminance of an object, based on luminance of an image signal generated prior to performing the adjustment of the aperture.
According to another aspect of the present invention, there is provided a method of controlling an image pickup apparatus having an image pickup unit that generates an image signal by photoelectric conversion of light flux incoming via an aperture, the method comprising: a detection step of detecting luminance of an image signal generated by the image pickup unit; a computing step of computing an aperture value of the aperture based on the detection result from the detection step; an exposure control step of performing exposure control by adjusting the aperture to the aperture value computed at the computing step; and a correction step of performing correction of luminance on an image signal generated by photoelectric conversion of the light flux incoming to the image pickup unit when performing adjustment of the aperture in response to change in luminance of an object, based on luminance of an image signal generated prior to performing the adjustment of the aperture.
The present invention can suppress changes in exposure in response to changes in luminance of the object when picking up a moving image.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
A first embodiment of the present invention will be explained below with reference to the drawings.
A lens unit 101 is configured to be interchangeable with respect to the camera body, includes an optical aperture mechanism, and guides incoming light onto an image pickup device 105 (described later). A lens driving unit 102, which acts as driving means, adjusts the aperture by driving the optical aperture mechanism (not shown) of the lens unit 101 under the control of an overall control and computing unit 109, which acts as control means. The driving of the optical aperture mechanism by the lens driving unit 102 is performed in a step-wise fashion. Additionally, the lens driving unit 102 drives a zoom optical system (not shown) and an image forming optical system (not shown) of the lens unit 101 under the control of the overall control and computing unit 109, thereby performing zoom control and focus control.
The lens driving unit 102 is incorporated, for example, on the camera body side, and mechanically transmits driving force to each of the mechanisms of the lens unit 101, thereby controlling these components. The arrangement is not restricted to this; the lens driving unit 102 may also be incorporated on the lens unit 101 side and control these components by communicating with the camera body side.
A shutter unit 103 is, for example, a mechanical shutter, and is driven by a shutter driving unit 104 that is controlled by the overall control and computing unit 109 and shields the image pickup device 105 during image pickup. The shutter unit 103 is driven by the shutter driving unit 104 and is maintained in a non-shielded state, i.e. in a flipped-up position, when picking up moving images. The image pickup device 105, which acts as image pickup means, has sensors that utilize an XY address scanning method, accumulates electric charge in accordance with the light amount of light flux received from an object, and generates image signals of the object based on the accumulated charge. In the first embodiment of the present invention, a CMOS image sensor is used as the sensor of the image pickup device 105.
An image signal processing unit 106 executes noise canceling and amplification processes on the image signals outputted from the image pickup device 105, and further executes A/D conversion to convert the signals into digital image data. Further, the image signal processing unit 106 executes various types of image processing such as gamma correction and white balance correction. In addition, the image signal processing unit 106 is capable of compression-encoding, by a given method, image data on which the image processing has been executed.
The overall control and computing unit 109 detects luminance components of the image signals supplied to the image signal processing unit 106, and can perform photometry based on the detected luminance components. Further, the overall control and computing unit 109 can calculate the sharpness of an image based on the luminance components, enabling acquisition of focus information.
A timing generation unit 107 generates timing signals for the image pickup device 105 and the image signal processing unit 106 under the control of the overall control and computing unit 109. The image pickup device 105 is driven based on the timing signals provided by this timing generation unit 107. Further, the image signal processing unit 106 can process the image signals outputted from, for example, the image pickup device 105 in synchronization with the timing signals provided from the timing generation unit 107.
A memory 108 temporarily stores compressed or non-compressed output image data outputted from the image signal processing unit 106. A storage medium control interface (I/F) 110 controls storage and replay of data to and from a storage medium 111. For example, the storage medium control I/F 110 reads out image data from the memory 108 and stores it in the storage medium 111. The storage medium 111 is, for example, a re-writable non-volatile memory which is removable from the DSLR camera 100.
A display unit 115 is made of, for example, a display device such as an LCD and a driving circuit therefor, and displays images according to the output image data from the image signal processing unit 106 on the display device. The display unit 115 may also display stored image data read out from the memory 108 on the display device. For example, live view is performed by continuously outputting frame image signals from the image pickup device 105 at predetermined intervals, for example at each frame cycle, sequentially processing the frame image signals at the image signal processing unit 106, and displaying them on the display unit 115.
An external I/F 112 is an interface for performing data communication with external devices. The DSLR camera 100 can perform data transmission with external computers and such via this external I/F 112.
A photometry unit 113 measures the luminance of objects. Further, a distance measuring unit 114 measures the distance to objects. Measurement results from the photometry unit 113 and the distance measuring unit 114 are each supplied to the overall control and computing unit 109. When picking up still images, the overall control and computing unit 109 calculates an EV value based on the luminance measurement result outputted from the photometry unit 113. Likewise, the overall control and computing unit 109 detects the focus state of the object based on the measurement result outputted from the distance measuring unit 114.
Next, the configuration of the image pickup device 105, which is an XY address scanning device, and its scanning method will be explained. In scanning the image pickup device 105, first, a scan (hereinafter, reset operation) to remove unnecessary accumulated electric charge is performed pixel by pixel or line by line. After the reset operation, electric charge is accumulated at each pixel by photoelectric conversion according to the light received at the image pickup device 105. Then, a scan to read out the signal electric charge is performed pixel by pixel or line by line, and the charge accumulation operation ends. The function of performing the reset scan and the readout scan at different times for each region of an image pickup device in this way is referred to as an electronic rolling shutter. By controlling the start timing of the readout scan, the shutter speed can be set.
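As a rough sketch of this behavior (the line period and exposure time below are hypothetical values, and the function does not appear in the embodiments), the staggered accumulation windows of an electronic rolling shutter can be modeled as follows; every line accumulates charge for the same duration, but at a slightly different time.

def rolling_shutter_schedule(num_lines, line_period_s, exposure_s):
    """Per-line (reset time, readout time) pairs for an electronic rolling shutter.

    Readout is staggered by one line period per line; the reset scan is
    scheduled so that the interval from reset to readout (the shutter speed)
    is the same for every line.
    """
    schedule = []
    for line in range(num_lines):
        readout_t = line * line_period_s   # readout scan timing for this line
        reset_t = readout_t - exposure_s   # reset scan timing sets the exposure
        schedule.append((reset_t, readout_t))
    return schedule

# Example: 1080 lines, a 30 microsecond line period, and a 1/250 s exposure.
for reset_t, readout_t in rolling_shutter_schedule(1080, 30e-6, 1 / 250)[:3]:
    print(f"reset at {reset_t:.6f} s, readout at {readout_t:.6f} s")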
The PD 202 converts received light into electric charge. A transfer switch 203 transfers the electric charge generated at the PD 202 to the FD 204 in response to a transfer pulse φTX. The FD 204 temporarily accumulates the electric charge transferred from the PD 202. The amplification MOS amp 205 functions as a source follower. A selection switch 206 selects the pixel 201 in response to a selection pulse φSELV. A reset switch 207 removes the electric charge accumulated at the FD 204 in response to a reset pulse φRES. The FD 204, the amplification MOS amp 205, and a constant current source 209, to be explained later, together constitute a floating diffusion amplifier.
A column of pixels 201 aligned in a vertical direction, and their respective selection switches 206, are connected to a signal output line 208. Electric charge, accumulated at the pixels 201 that are selected by the selection switches 206, is converted to electric voltage, and is outputted to a readout circuit 213 via the signal output line 208. To the signal output line 208, the constant current source 209, which acts as a load of the amplification MOS amp 205, is connected.
The selection switches 210, which select output signals from the readout circuit 213, are driven by a horizontal scanning circuit 214 based on the timing signals from the timing generation unit 107. Further, a vertical scanning circuit 212 outputs the transfer pulses φTX, the selection pulses φSELV, and the reset pulses φRES based on the timing signals provided from the timing generation unit 107. With these pulses, the vertical scanning circuit 212 controls the switches 203, 206, and 207 at each of the pixels 201.
For the nth line scan-selected by the vertical scanning circuit 212, the lines supplying the pulses φTX, φRES, and φSELV are referred to as scan line φTXn, scan line φRESn, and scan line φSELVn, respectively.
At the nth line, the reset pulse φRES and the transfer pulse φTX are applied to the scan lines φRESn and φTXn, respectively, between time t41 and time t42, turning on the transfer switch 203 and the reset switch 207. By doing so, each of the pixels 201 on the nth line is reset, and the unnecessary electric charge accumulated at the PD 202 and the FD 204 is removed.
After the reset operation is performed, the transfer switch 203 is turned off at time t42, and an accumulation operation of accumulating, at the PD 202, the photo-electric charge generated by the received light is initiated. Subsequently, at time t44, the transfer pulse φTX is applied to the scan line φTXn and the transfer switch 203 is turned on, whereby a transfer operation of transferring the photo-electric charge accumulated at the PD 202 to the FD 204 is performed. The period from time t42, at which the transfer switch 203 is turned off, to time t44, at which the transfer switch 203 is turned on again, is the electric charge accumulation time of the PD 202.
The reset switch 207 needs to be turned off prior to this transfer operation; thus, in this example, the transfer switch 203 and the reset switch 207 are turned off simultaneously at time t42.
After performing the transfer operation of the nth line, the selection pulse φSELV is applied to the scan line φSELVn, and the selection switch 206 is turned on. By doing so, the electric charge accumulated at the FD 204 is converted to electric voltage, which is outputted to the readout circuit 213 via the signal output line 208. The readout circuit 213 temporarily retains the signal provided via the signal output line 208.
The signals temporarily retained in the readout circuit 213 are read out by the horizontal scanning circuit 214 controlling the selection switches 210, and are sequentially outputted as individual pixel signals at time t46.
The period from the onset of the transfer at time t44 to the end of the readout at time t47 will be referred to as the readout interval T4read of the nth line, and the period between time t41 and time t43 will be referred to as the wait interval T4wait of the (n+1)th line. Likewise for the other lines, the period from the start of the transfer to the end of the readout is the readout interval T4read, and the period from the start of the reset of a line to the start of the reset of the subsequent line is the wait interval T4wait.
As discussed, in the operation of an electronic rolling shutter, the timing of electric charge accumulation differs depending on the position in the vertical direction of the image pickup device. To the contrary, the time required for accumulation of electric charge at each of the pixels can be made identical regardless of the position in the vertical direction of the image pickup device.
Next, exposure control according to the first embodiment when picking up a moving image will be explained with reference to the accompanying drawings.
Consider, as exemplified in the accompanying drawings, a case where the luminance of the object changes while a moving image is being picked up.
In this case, a sequence of control such as the one described below is performed. The change in luminance is detected at the overall control and computing unit 109 based on, for example, the luminance components of the image signals supplied to the image signal processing unit 106 from the image pickup device 105. When a change in luminance of the object is detected, the overall control and computing unit 109 determines whether the aperture value is to be advanced to the next step, based on the present shutter speed, the present aperture value, and the program diagram. If a decision is made to advance the aperture value, a control signal is output to the lens driving unit 102 to change the aperture value.
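The decision itself can be sketched roughly as follows; the discrete aperture steps, the usable shutter-speed range, and the function names below are illustrative assumptions, not values used by the embodiment.

import math

# Hypothetical discrete aperture steps available to the lens (F-numbers).
APERTURE_STEPS = [2.8, 4.0, 5.6, 8.0, 11.0]

def decide_next_aperture(ev, current_f, min_tv=4.0, max_tv=12.0):
    """Return the F-number to use next, stepping the aperture only when the
    shutter speed required by the program line leaves a usable range."""
    required_tv = ev - 2.0 * math.log2(current_f)  # EV = AV + TV, AV = 2*log2(F)
    idx = APERTURE_STEPS.index(current_f)
    if required_tv > max_tv and idx + 1 < len(APERTURE_STEPS):
        return APERTURE_STEPS[idx + 1]   # object became brighter: close one step
    if required_tv < min_tv and idx > 0:
        return APERTURE_STEPS[idx - 1]   # object became darker: open one step
    return current_f                     # shutter speed alone follows the change

# Example: EV rises to 17 while the aperture is at F5.6 -> step to F8.0.
print(decide_next_aperture(17.0, 5.6))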
The lens driving unit 102 drives the optical aperture mechanism so as to bring the aperture value to the designated value, according to the supplied control signal. As a result, the aperture value changes toward the designated value over the aperture driving duration, for example as illustrated in the accompanying drawings.
On the other hand, the shutter speed, which is controlled electronically, is changed immediately in accordance with the program diagram.
As described above, as a result of controlling the aperture value and the shutter speed in response to the change in luminance, appropriate exposure cannot be attained for frames that fall within the aperture driving duration, in which the aperture value is still changing.
In such a situation, in the present first embodiment, the image signals of the frames included in the aperture driving duration are corrected by a correction coefficient B, which is obtained based on the luminance of an image signal generated prior to the aperture driving.
The region used for calculation of the average luminance value is a predetermined region within each frame, as illustrated in the accompanying drawings.
The overall control and computing unit 109, for example, calculates the correction coefficient B based on the image signals supplied to the image signal processing unit 106 from the image pickup device 105. This correction coefficient B is handed over to the image signal processing unit 106. The image signal processing unit 106 then multiplies the image signals of the frames included in the aperture driving duration by the correction coefficient B.
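A minimal sketch of this correction follows, assuming the coefficient B is the ratio of the average luminance of the region in the frame picked up just before aperture driving to that of the current frame; the region tuple, the 10-bit clipping, and the function names are assumptions for illustration.

import numpy as np

def correction_coefficient_b(ref_frame, cur_frame, region):
    """Correction coefficient B: ratio of the average luminance of a region
    in the reference frame (before aperture driving) to that of the current
    frame (during aperture driving)."""
    top, bottom, left, right = region
    ref_avg = ref_frame[top:bottom, left:right].mean()
    cur_avg = cur_frame[top:bottom, left:right].mean()
    return ref_avg / cur_avg

def apply_correction(frame, b, max_level=1023):
    """Multiply the correction coefficient B onto the image signal,
    assuming a 10-bit signal range."""
    return np.clip(frame * b, 0, max_level)

# During the aperture driving duration, each frame would be corrected against
# the last frame obtained before the driving started, e.g.:
# b = correction_coefficient_b(ref, frame, (200, 880, 300, 1620))
# corrected = apply_correction(frame, b)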
The image signal processing unit 106 performs A/D conversion and other predetermined image processing on the image signals multiplied by the correction coefficient B, and outputs the resulting signals, which are displayed on the display unit 115 or stored in the storage medium 111. By this process, the exposure deviation of the frames included in the aperture driving duration is suppressed.
In the above, the correction using the correction coefficient B is performed at the image signal processing unit 106 by setting the gain for the image signals supplied from the image pickup device 105. However, the present invention is not limited to this particular example. For instance, the correction using the correction coefficient B can also be performed on images which are already A/D converted at the image signal processing unit 106. Further, it is also possible to have the overall control and computing unit 109 perform the correction to output image data stored in the memory 108.
Additionally, although the correction coefficient B is calculated using average luminance values from certain regions of concerned frames in the above description, the present invention is not limited to this example. For instance, the correction coefficient B can also be calculated using an accumulated luminance value of the region of the pertinent frames.
Next, a second embodiment of the present invention will be explained. Accurate timing of the aperture driving duration can easily be determined in an image pickup apparatus, such as a compact digital camera, in which the camera body and the lens are integrated into a single unit and controlled by a common system. On the other hand, in DSLR cameras having interchangeable lenses, in which the lens is separate from the camera body and controlled through communication between the camera body and the lens, it is difficult to determine accurate timing of the aperture driving duration.
The DSLR camera 300 is of the interchangeable-lens type, wherein the lens unit 101 and the lens driving unit 102 are built into the interchangeable lens side. Communication between the overall control and computing unit 109 and the lens driving unit 102 is performed via electrical contacts at a lens mounting unit. The position at which a vibration detection unit 116 is placed is not restricted as long as it is within the body of the DSLR camera 300, but it is possible to place the unit at a position convenient for detecting vibration generated from the lens unit 101, such as a position in close proximity to the lens mount.
The vibration detection unit 116, for example, utilizes a piezoelectric element as a vibration sensor, and supplies output to the image signal processing unit 106 or the overall control and computing unit 109. The image signal processing unit 106 or the overall control and computing unit 109 detects aperture driving duration based on the supplied vibration sensor output.
The output of the vibration sensor is compared to a given value ±a at a comparison device (not shown). When a driving command has been issued from the overall control and computing unit 109, the time period during which the amplitude of the output signal from the vibration detection unit 116 exceeds the given value ±a is determined, based on this comparison, to be the aperture driving duration.
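A simple sketch of this determination follows; the sampled sensor output, the threshold a, and the sampling period are illustrative assumptions and do not correspond to actual values of the embodiment.

def aperture_driving_duration(samples, sample_period_s, a):
    """Return (start_time, end_time) of the period during which the amplitude
    of the vibration sensor output exceeds the given value +/-a, taken as the
    aperture driving duration. Returns None if no such period exists."""
    exceeding = [i for i, s in enumerate(samples) if abs(s) > a]
    if not exceeding:
        return None
    return exceeding[0] * sample_period_s, exceeding[-1] * sample_period_s

# Example with a hypothetical 1 kHz sampled sensor signal and threshold a = 0.2.
signal = [0.01, 0.02, 0.5, -0.6, 0.4, -0.3, 0.05, 0.01]
print(aperture_driving_duration(signal, 1e-3, 0.2))  # -> (0.002, 0.005)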
The methods of calculating and applying the correction coefficient are identical to those of the first embodiment, and the explanation thereof will be omitted.
As explained above, according to the present second embodiment, it is possible to directly know the aperture driving duration by detecting the vibration generated by the aperture driving. Accordingly, it is possible to provide a system which does not require aperture control that is synchronized with frame timing.
In the present second embodiment, vibration that is generated during the driving of the aperture is detected using the vibration detection unit 116. However, the present invention is not limited to this, and other methods can be used to detect the aperture driving duration. For example, the aperture driving duration can be detected by detecting noise generated during aperture driving.
Next, a third embodiment of the present invention will be explained. In the above-described first embodiment, the correction coefficient B, which is used for correction of inappropriately exposed frames in the aperture driving duration, was calculated using the average luminance value or the accumulated luminance value of a certain region of the frames in question. In contrast, the correction coefficient in the present third embodiment is obtained based on information indicating differences in luminance in the vertical direction of the images formed by the image signals outputted by the image pickup device 105. Specifically, the present embodiment calculates the correction coefficient based on the horizontal projection of the images formed by the image signals outputted by the image pickup device 105.
In the present third embodiment, the configurations of the DSLR camera 100 and the image pickup device 105, the driving method of the image pickup device 105 and the program diagram can be identical to the above-described first embodiment, and thus the explanation thereof will be omitted.
The operation of the third embodiment will be explained below with reference to the accompanying drawings.
The image signal processing unit 106, for example, accumulates the luminance values of the individual pixels in each line of the image signals supplied from the image pickup device 105, thereby calculating the horizontal projection of the image signal. Then, when a luminance change is detected and aperture driving is started, the ratio of the horizontal projections obtained from the image signals during and prior to the aperture driving is calculated, and a vertical gain correction value G(v) is calculated based on this ratio. The vertical gain correction value G(v) is then multiplied, line by line, to the image signals of the frames included in the aperture driving duration.
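A minimal sketch of this line-wise correction follows, assuming the horizontal projection is taken as the per-line mean luminance and G(v) as the ratio of the projection before driving to the projection during driving; the function names are illustrative.

import numpy as np

def vertical_gain_correction(ref_frame, cur_frame, eps=1e-6):
    """Per-line gain G(v): ratio of the horizontal projection (mean luminance
    of each line) of the frame before aperture driving to that of the current
    frame during driving."""
    proj_ref = ref_frame.mean(axis=1)            # one value per line v
    proj_cur = cur_frame.mean(axis=1)
    return proj_ref / np.maximum(proj_cur, eps)  # guard against division by zero

def apply_vertical_gain(frame, g):
    """Multiply every pixel of line v by its gain G(v)."""
    return frame * g[:, np.newaxis]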
The image signal processing unit 106 performs A/D conversion and other predetermined image processing on the image signals to which the vertical gain correction values G(v) have been multiplied, and outputs them to be displayed on the display unit 115 or stored in the storage medium 111. By doing so, the exposure deviation of the frames included in the aperture driving duration is suppressed, and unevenness of exposure in the vertical direction is also corrected.
In the present third embodiment, gain correction based on horizontal projections is performed on the image signals of the frames within the aperture driving duration. This makes it possible to suppress the exposure deviation of frames during the aperture driving duration, and also has the effect of correcting unevenness in exposure, leading to higher-quality moving images.
In the above, occurrence of overexposure during aperture driving duration is explained. However, it is obvious that each of the embodiments of the present invention can be applied in the same way in situations where underexposure occurs during aperture driving duration.
Further, although a CMOS image sensor is utilized as the image pickup device 105 in the above, each of the embodiments of the present invention is just as effective even when the image pickup device 105 is a CCD sensor.
Furthermore, in each of the above-mentioned embodiments, the correction coefficient B or the vertical gain correction value G(v) is calculated based on the frames immediately before and after the frame in which a change in luminance is detected, and correction is performed by uniformly applying the calculated correction coefficient B or vertical gain correction value G(v) to the frames included in the aperture driving duration. The present invention is not limited to this; correction may also be performed for each frame by, for example, calculating the correction coefficient B or the vertical gain correction value G(v) sequentially for every frame included in the aperture driving duration.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (e.g., a computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2008-237185, filed on Sep. 16, 2008, which is hereby incorporated by reference herein in its entirety.