The present invention relates to an imaging apparatus and a control method thereof, and more particularly to an imaging apparatus configured to output, from an image element, a plurality of synchronized images with different accumulation periods.
When one camera can capture moving images and still images simultaneously, a captured scene can be viewed as a moving image, a decisive moment within it can be enjoyed as a still image, and the value of the captured material increases greatly. Likewise, if one camera can simultaneously capture moving images at a standard frame rate and at a high frame rate, a viewer can enjoy specific scenes in slow motion as high-quality work, which delivers a rich impression to the viewer. In general, when a reproduced moving image gives an impression of choppiness, as with frame-by-frame advance, its quality deteriorates greatly. To avoid this impression of choppiness, the accumulation time in the capturing sequence must be set close to one frame period. That is, when the frame rate is 30 fps, a relatively long accumulation time such as 1/30 sec or 1/60 sec is appropriate. This setting is particularly important when the orientation of the camera is unstable, for example, during aerial imaging.
On the other hand, a still image must capture a moment with sharpness, so a short accumulation time of about 1/1,000 sec, for example, is necessary to obtain a stop-motion effect. Similarly, in moving images with a high frame rate, one frame period is short; when the frame rate is 120 fps, for example, a short accumulation time of 1/125 sec or 1/250 sec is inevitably set. Japanese Patent Laid-Open No. 2014-48459 discloses a technology in which each pixel of an image element includes a pair of asymmetric photodiodes: one photodiode has high light receiving efficiency and the other has low light receiving efficiency. It is therefore suggested in Japanese Patent Laid-Open No. 2014-48459 that two images with different accumulation periods can be captured at the same time. Separately, when capturing moving images, shake correction may be performed using a motion vector calculated by correlating a past frame with a current frame, in order to reduce blur in the captured image caused by hand shake of the photographer. The calculated motion vector can also be used for compression of moving images or subject tracking.
However, as described above, the accumulation time of a moving image is relatively long in order to secure its quality. Movement of the subject or the imaging apparatus during this accumulation time therefore blurs each frame image and lowers its sharpness. As a result, the accuracy of the motion vector calculated by comparing frames is reduced, which degrades the performance of shake correction, moving image compression, and subject tracking.
The present invention proposes an imaging apparatus that improves the calculation accuracy of a motion vector for moving image capturing while capturing still images and moving images at the same time.
According to an aspect of the invention, an imaging apparatus comprises: a memory; and a controller which operates on the basis of data stored in the memory. The controller comprises: an imaging unit capable of continuously acquiring first images and second images for which the time from start to end of accumulation is longer than that of the first images; a computing unit configured to calculate a motion vector from a plurality of the first images; and an image processing unit configured to perform image processing, using the motion vector, on a moving image generated from the second images.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Examples of the present invention will be described below with reference to the drawings.
An imaging apparatus in which an imaging optical system and the like are added to an image processing device will be described below as a preferred example of the present invention.
A digital signal processing unit 187 performs various types of correction on digital image data output from the image element 184 and then compresses the image data. A timing generation unit (control device) 189 outputs various timing signals to the image element 184 and the digital signal processing unit 187. A system control CPU (control device) 178 controls various types of computing and the entire digital still motion camera.
An image memory 190 temporarily stores image data, and a display interface unit 191 displays a captured image. The display unit 153 is a display device such as a liquid crystal display. A recording medium 193 is a removable recording medium such as a semiconductor memory for recording image data, additional data, and the like. A recording interface unit 192 is an interface for performing recording or reading in or from the recording medium 193. An external interface unit 196 is an interface for communication with an external computer 197 and the like. A printer 195 is a printer such as a small ink jet printer. A print interface unit 194 is an interface unit configured to output to and print a captured image on the printer 195. A computer network 199 is a computer network such as the Internet. A wireless interface unit 198 is an interface unit configured to perform communication via the network 199. A switch input unit 179 includes the switch ST 154, the switch MV 155, and a plurality of switches for switching between various modes. A flight control device 200 is a flight control device for performing imaging in the air.
In the circuit diagram of
In addition, the first transfer transistor 501A is controlled by a transfer pulse φTX1A, and the second transfer transistor 502A is controlled by a transfer pulse φTX2A. In addition, the third transfer transistor 501B is controlled by a transfer pulse φTX1B, and the fourth transfer transistor 502B is controlled by a transfer pulse φTX2B. In addition, the reset transistor 504 is controlled by a reset pulse φRES, and the select transistor 506 is controlled by a select pulse φSEL. In addition, the fifth transfer transistor 503 is controlled by a transfer pulse φTX3. Here, control pulses are transmitted from a vertical scanning circuit (not shown).
In addition, in
First, when the luminance Bv is 14, the ISO sensitivity for the still image is set to ISO 100. The equal-Bv line for the still image intersects the line 358 of the still-image program line diagram at a point 351, from which a shutter speed of 1/4,000 sec and an aperture value of F11 are determined. On the other hand, the ISO sensitivity for the moving image is set to ISO 1. The equal-Bv line for the moving image (picture B) intersects the line 359 of the moving-image program line diagram at a point 352, from which a shutter speed of 1/60 sec and an aperture value of F11 are determined.
When the luminance Bv is 11, the ISO sensitivity for the still image is raised by one step to ISO 200. The equal-Bv line for the still image intersects the line 358 at a point 353, from which a shutter speed of 1/1,000 sec and an aperture value of F11 are determined. On the other hand, the ISO sensitivity for the moving image is set to ISO 12. The equal-Bv line for the moving image intersects the line 359 at the point 352, from which a shutter speed of 1/60 sec and an aperture value of F11 are determined.
When the luminance Bv is 7, the ISO sensitivity for the still image is set to ISO 200. The equal-Bv line for the still image intersects the line 358 at a point 354, from which a shutter speed of 1/1,000 sec and an aperture value of F2.8 are determined. On the other hand, the ISO sensitivity for the moving image is set to ISO 12. The equal-Bv line for the moving image intersects the line 359 at a point 355, from which a shutter speed of 1/60 sec and an aperture value of F2.8 are determined.
When the luminance Bv is 6, the ISO sensitivity for the still image is raised by one step to ISO 400. The equal-Bv line for the still image intersects the line 358 at the point 354, from which a shutter speed of 1/1,000 sec and an aperture value of F2.8 are determined. On the other hand, the ISO sensitivity for the moving image is set to ISO 25. The equal-Bv line for the moving image intersects the line 359 at the point 355, from which a shutter speed of 1/60 sec and an aperture value of F2.8 are determined. Thereafter, as the luminance decreases, both the still image and the moving image are given a higher gain and a higher ISO sensitivity, with no change in the shutter speed or the aperture value.
When the exposure operation shown in the program AE line diagram is performed, the still image maintains a shutter speed of 1/1,000 sec or faster over the entire indicated luminance range, and the moving image maintains a shutter speed of 1/60 sec over the same range. Therefore, a stop-motion effect is obtained in the still image while a high-quality moving image with no impression of choppiness with frame advance is obtained.
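The equal-Bv points quoted above can be checked numerically. As a sketch only (the text does not state it, but program line diagrams conventionally follow the APEX relation Bv = Av + Tv − Sv), the still-image settings can be verified as follows:

```python
import math

# APEX exposure values (assumed convention, not stated in the text):
#   Av = 2*log2(N), Tv = -log2(t), Sv = log2(ISO/3.125), Bv = Av + Tv - Sv
def apex_bv(f_number, shutter_s, iso):
    av = 2 * math.log2(f_number)   # aperture value
    tv = -math.log2(shutter_s)     # time value
    sv = math.log2(iso / 3.125)    # sensitivity value (ISO 100 -> Sv = 5)
    return av + tv - sv

# Still-image points from the program line diagram:
print(round(apex_bv(11, 1 / 4000, 100)))    # point 351: Bv 14
print(round(apex_bv(11, 1 / 1000, 200)))    # point 353: Bv 11
print(round(apex_bv(2.8, 1 / 1000, 200)))   # point 354: Bv 7
```

Each quoted combination of aperture, shutter speed, and ISO rounds to the luminance Bv stated in the text, which is consistent with the equal-Bv lines of the diagram.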
Incidentally, a still image and a moving image captured at the same time with the same aperture value are controlled to different ISO sensitivities. However, when exposure control is performed so that the still image is appropriately exposed, the moving image may saturate, making ISO control impossible. Therefore, in the imaging apparatus according to the present example, a short accumulation is performed and added Np (Np>1) times at uniform time intervals within the 1/60 sec shutter period corresponding to the frame rate of the moving image, so that the effective ISO of the moving image is substantially lowered.
In the present example, the shutter speed of 1/60 sec for the moving image is treated as its accumulation period, the shutter speed of 1/1,000 sec for the still image is treated as its accumulation time, and the moving image is controlled so that its total accumulation time equals that of the still image. That is, the total accumulation time of the moving image, generated by adding a short accumulation Np (Np>1) times in the signal holding unit 507 of the image element 184, equals the accumulation time of the still image, and the moving image is controlled with the same ISO as the still image captured in the same imaging period. For example, when the luminance Bv is 7, if the moving image is generated by performing accumulation and addition 16 times in a divided manner within the 1/60 sec shutter period, one accumulation time is set to 1/16,000 sec so as to perform the same ISO control as the still image at ISO 200.
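The divided-accumulation arithmetic above can be sketched as follows (a minimal illustration; the function name and structure are ours, not the patent's):

```python
def divided_accumulation(still_shutter_s, np_count):
    """Split the still-image accumulation time into np_count short
    accumulations; their sum, added in the sensor's signal holding unit,
    gives the moving image the same total charge (and thus the same
    effective ISO) as the still image."""
    one_accumulation_s = still_shutter_s / np_count
    total_s = one_accumulation_s * np_count
    return one_accumulation_s, total_s

one_acc, total = divided_accumulation(1 / 1000, 16)
# one accumulation: 1/16,000 s; the 16 additions total 1/1,000 s
```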
In
As a result, a moving image and a still image can be captured at the same time. As the still image, a blur-free image with the short accumulation time intended by the photographer can be acquired. As the moving image, a smooth image with no impression of choppiness can be acquired. In an imaging period 1 in the explanatory diagram of accumulation and read timings in
On the other hand, accumulation for the moving image is performed at uniform time intervals during one period, until immediately before reading of the moving image in rows (566) starts; in the present example, the time intervals are set so that the 16 divided accumulations are completed. In this case, the time interval between accumulations is set to an integer multiple of the interval Th of a horizontal synchronization signal 551. As a result, the accumulation timings of the moving image are the same in every row. In
On the other hand, an example in which a photographer sets a shutter speed T2 for still images to be longer (for example, T2= 1/500 sec) when the subject luminance is low in a part of an imaging period 2 is shown in the explanatory diagram of accumulation and read timings in
As in the imaging period 1, accumulation for the moving image is performed at uniform time intervals during one period, until immediately before reading of the moving image in rows (566) starts, and the time intervals are set so that the 16 divided accumulations are completed. In this case, the time interval between accumulations is set to an integer multiple of the interval Th of the horizontal synchronization signal 551, so the accumulation timings are the same in every row. One accumulation time of the moving image is set to T2/16 (= 1/8,000 sec). Here, the accumulation start times of the moving image in each row are fixed with respect to the vertical synchronization signal 550, while each accumulation end time is set, with respect to the vertical synchronization signal 550, according to the shutter speed T2 for still images set by the photographer. In the imaging period 2 in the explanatory diagram of accumulation and read timings in
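Assuming, as in the example timing later in the text, that successive accumulations start a fixed spacing of two horizontal synchronization intervals apart after an offset Tb from the vertical synchronization signal, the start schedule can be sketched as (names and example values are illustrative):

```python
def accumulation_starts(tb_us, th_us, n_acc=16, spacing_in_th=2):
    """Start times (in microseconds after the vertical sync) of the
    divided moving-image accumulations. Each interval is an integer
    multiple of the horizontal sync interval Th, so every row sees
    the same accumulation timing."""
    return [tb_us + k * spacing_in_th * th_us for k in range(n_acc)]

starts = accumulation_starts(tb_us=10, th_us=5)
# 16 start times spaced 2*Th = 10 us apart
```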
Next, a method of controlling the image element 184 that can capture a still image which is a first image and a moving image which is a second image in the imaging period 2 in the explanatory diagram of accumulation and read timings in
In the image element 184 according to the present example, there are m rows of pixel columns in the vertical direction. In
At a time t3, when a transfer pulse φTX2B(1) in the first row reaches a high level, the fourth transfer transistor 502B in the first row is turned on. Then, a signal charge of moving images added and accumulated in the signal holding unit 507B during an immediately preceding imaging period (the imaging period 1 in
At a time t4, a transfer pulse φTX2B(1) in the first row and transfer pulses φTX2A in all rows (in
At a time t5, when transfer pulses φTX3 in all rows become a low level, the fifth transfer transistors 503 in all rows are turned off. Then, resetting of the photodiodes 500 in all rows is released, and accumulation of signal charges of moving images in the photodiodes 500 in all rows starts (shown in accumulation (563) in
Incidentally, start of accumulation of moving images in the first row at the time t5 in the timing charts in
Immediately before a time t6, when a transfer pulse φTX1B(m) in the m-th row reaches a high level, the third transfer transistor 501B in the m-th row is turned on. Then, a signal charge accumulated in the photodiode 500 in the m-th row is transferred to the signal holding unit 507B that maintains charges of moving images in the m-th row (shown in moving image transfer (564) in
Here, the time t5 to the time t6 corresponds to one accumulation time (=T1/16) for moving images in the imaging period 1 in
Immediately before a time t7, when the transfer pulse φTX1B(1) in the first row reaches a high level, the third transfer transistor 501B in the first row is turned on. Then, a signal charge accumulated in the photodiode 500 in the first row is transferred to the signal holding unit 507B that maintains charges of moving images in the first row. In addition, at the time t7, when the transfer pulse φTX1B(1) in the first row reaches a low level, the third transfer transistor 501B in the first row is turned off, and transfer of the signal charge accumulated in the photodiode 500 in the first row to the signal holding unit 507B in the first row ends. Here, the time t5 to the time t7 corresponds to one accumulation time (=T2/16) for moving images in the imaging period 2 in
At a time t8, which is two horizontal synchronization signal intervals (2×Th) after the time t5 at which accumulation of the 1st moving image started in the imaging period beginning at the time t1, accumulation of the 2nd moving image starts. Since the accumulation operation of the 2nd moving image, which starts at the time t8 and ends at a time t10, is the same as that of the 1st moving image, which starts at the time t5 and ends at the time t7, description thereof will be omitted.
Here, in accumulation operations of the 1st and 2nd moving images, a signal charge of moving images in two accumulation periods is added to and held in the signal holding unit 507B. In addition, accumulation of the 6th moving image starts at a time t11. Then, the time t11 at which accumulation of the 6th moving image starts is set to a time of T=6×2×Th+Tb from the time t1 at which the vertical synchronization signal φV reaches a high level. Here, Th is a time interval of the horizontal synchronization signal φH, and Tb is a time interval between the time t1 at which the vertical synchronization signal φV reaches a high level and the time t5 at which accumulation of signal charges of the 1st moving image in the photodiode 500 starts. Since an accumulation operation of the 6th moving image which starts at the time t11 and ends at a time t13 is the same as an accumulation operation of the 1st moving image which starts at the time t5 and ends at the time t7, description thereof will be omitted.
Next, accumulation of a still image which is a first image is performed at a time t14. In the present example, the number of times of accumulation of still images during one imaging period is 1. Since a time at which reading of a still image (shown in still image reading (565) in
At the time t14 which is a time T2 earlier than the time t19 at which accumulation of still images ends, when transfer pulses φTX3 in all rows become a low level, the fifth transfer transistors 503 in all rows are turned off. Then, resetting of the photodiodes 500 in all rows is released, and accumulation of signal charges of still images in the photodiodes 500 in all rows starts (shown in still image accumulation (561) in
In addition, during accumulation of signal charges of still images, reading of moving images in the m-th row in the imaging period 1 ends. First, at a time t15, when the reset pulse φRES(m) in the m-th row reaches a low level, the reset transistor 504 in the m-th row is turned off, and a reset state of the floating diffusion region 508 is released. At the same time, when the select pulse φSEL(m) in the m-th row reaches a high level, the select transistor 506 in the m-th row is turned on, and an image signal in the m-th row can be read.
At a time t16, when the transfer pulse φTX2B(m) in the m-th row reaches a high level, the fourth transfer transistor 502B in the m-th row is turned on. Then, a signal charge of moving images added and accumulated in the signal holding unit 507B during an immediately preceding imaging period (the imaging period 1 in
At a time t17, when the transfer pulse φTX2B(m) in the m-th row reaches a high level, the fourth transfer transistor 502B in the m-th row is turned on. In this case, since the reset pulse φRES(m) in the m-th row has already become a high level and the reset transistor 504 is turned on, the floating diffusion region 508 in the m-th row and the signal holding unit 507B for moving images in the m-th row are reset. In addition, at the time t17, the select pulse φSEL(m) in the m-th row reaches a low level.
At a time t18, when the reset pulse φRES(1) in the first row reaches a low level, the reset transistor 504 in the first row is turned off, and a reset state of the floating diffusion region 508 is released. At the same time, when the select pulse φSEL(1) in the first row reaches a high level, the select transistor 506 in the first row is turned on, and an image signal in the first row can be read.
Immediately before the time t19, when the transfer pulses φTX1A in all rows become a high level, the first transfer transistors 501A in all rows are turned on. Then, a signal charge accumulated in the photodiodes 500 in all rows is transferred to the signal holding unit 507A that maintains charges of still images in all rows (shown in still image transfer (562) in
At a time t20, when the transfer pulse φTX2A(1) in the first row reaches a high level, the second transfer transistor 502A in the first row is turned on, and a signal charge of still images accumulated in the signal holding unit 507A in the first row is transferred to the floating diffusion region 508. In addition, an output corresponding to a change in potential of the floating diffusion region 508 is read out to the signal output line 523 through the amplifying transistor 505 and the select transistor 506 in the first row. Then, the result is supplied to a readout circuit (not shown) and is output to the outside as a still image signal in the first row (shown in still image reading (565) in
In addition, accumulation of the 7th moving image starts at the time t21. Here, the time t21 at which accumulation of the 7th moving image starts is set to a time of T=(7+2)×2×Th+Tb from the time t1 at which the vertical synchronization signal φV reaches a high level. In the present example, since an accumulation period of two moving images overlaps an accumulation period of still images (shown in still image accumulation (561) in
Since an accumulation operation of the 7th moving image which starts at the time t21 and ends at a time t23 is the same as an accumulation operation of the 1st moving image which starts at the time t5 and ends at the time t7, description thereof will be omitted. In addition, accumulation of the final 14th moving image of the imaging period 2 starts at a time t24. Here, the time t24 at which accumulation of the 14th moving image starts is set to a time of T=(14+2)×2×Th+Tb from the time t1 at which the vertical synchronization signal φV reaches a high level. Since an accumulation operation of the 14th moving image which starts at the time t24 and ends at a time t26 is the same as an accumulation operation of the 1st moving image which starts at the time t5 and ends at the time t7, description thereof will be omitted.
At a time t27, when the reset pulse φRES(m) in the m-th row reaches a low level, the reset transistor 504 in the m-th row is turned off and a reset state of the floating diffusion region 508 is released. At the same time, when the select pulse φSEL(m) in the m-th row reaches a high level, the select transistor 506 in the m-th row is turned on and an image signal in the m-th row can be read.
At a time t28, when the transfer pulse φTX2A(m) in the m-th row reaches a high level, the second transfer transistor 502A in the m-th row is turned on, and a signal charge of still images accumulated in the signal holding unit 507A in the m-th row is transferred to the floating diffusion region 508. In addition, an output corresponding to a change in potential of the floating diffusion region 508 is read out to the signal output line 523 through the amplifying transistor 505 and the select transistor 506 in the m-th row. Then, the result is supplied to a readout circuit (not shown) and is output to the outside as a still image signal in the m-th row (shown in still image reading (565) in
At a time t29, in the timing generation unit 189, the vertical synchronization signal φV reaches a high level and an imaging period 3 starts. As described above, in the imaging apparatus according to the present example, an accumulation end time of still images is fixed for a vertical synchronization signal, and an accumulation start time of accumulation of moving images performed a plurality of times during one imaging period is fixed for the vertical synchronization signal. Thereby, moving images and still images can be read in the same imaging period.
In addition, the imaging apparatus according to the present example can continuously acquire a still image, which is a first image, and a moving image, which is a second image for which the time from start to end of accumulation is longer than that of the first image. Here, the time from start to end of accumulation refers to the accumulation time in the plurality of still images, which are a plurality of first images, and to the accumulation period in the moving image, which is a second image. As a result, even when the shutter speed for still images is changed by the photographer, a blur-free still image with a short accumulation time and a moving image with no impression of choppiness can be captured at the same time during one imaging period. That is, still images and moving images can be captured at the same time with high quality.
Here, shake correction processing according to the present example will be described.
A motion vector calculation unit (computing device) 601 computes a correlation between a past frame and a current frame for a still image within the output of the image element 184 and outputs a motion vector. The past frame is an image obtained in still image reading (565) in the imaging period 1 in
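The correlation computation between the past frame and the current frame can be illustrated with a minimal sum-of-absolute-differences block matching sketch (the array layout, block size, and search range are illustrative assumptions, not taken from the patent):

```python
def block_match(past, cur, top, left, block=8, search=4):
    """Minimal sum-of-absolute-differences (SAD) block matching:
    locate the reference block taken from the past frame within a
    +/-search pixel window of the current frame and return the
    displacement (dx, dy) with the smallest SAD."""
    h, w = len(cur), len(cur[0])
    ref = [row[left:left + block] for row in past[top:top + block]]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate window falls outside the frame
            sad = sum(abs(ref[i][j] - cur[y + i][x + j])
                      for i in range(block) for j in range(block))
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv
```

In practice the search is performed for many blocks across the image, and the per-block displacements are combined into the motion vector output by the motion vector calculation unit 601.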
In order for a moving image to support a specified format such as 4K, the resolution may be adjusted by thinning the output of the image element 184. If a resolution of a still image and a resolution of a moving image are different from each other, the motion vector correction unit 602 enlarges and reduces the motion vector according to a ratio between the resolutions.
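The resolution correction is a simple ratio. A sketch (the function name and example resolutions are illustrative):

```python
def scale_vector(vec, still_res, movie_res):
    """Rescale a motion vector measured in still-image pixels into
    moving-image pixels when the two resolutions differ."""
    sx = movie_res[0] / still_res[0]
    sy = movie_res[1] / still_res[1]
    return (vec[0] * sx, vec[1] * sy)

# e.g. a still at 8000x4000 thinned to a 4000x2000 moving image
mv = scale_vector((40, 20), (8000, 4000), (4000, 2000))
# -> (20.0, 10.0): the vector is halved along each axis
```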
A frame interval of a moving image is constant at Tf. On the other hand, in a still image, when an interval between the center of an exposure time for the past frame and the center of an exposure time for the current frame is set as a frame interval, the frame interval is not constant.
A segmentation processing unit (image processing device) 603 performs segmentation processing on the output of the moving image of the image element 184 using the calculated corrected motion vector. The segmentation processing is processing of outputting only pixels in a certain specified area from all pixels of the image element 184.
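A minimal sketch of such segmentation, shifting the cut-out window against the detected shake (the window geometry and the clamping policy at the frame edges are our assumptions):

```python
def segment(frame, crop_w, crop_h, base_x, base_y, shake_vec):
    """Cut out a crop_w x crop_h window from the frame, shifting the
    window against the detected shake so the output image stays stable
    (electronic image stabilization by segmentation). The window is
    clamped so it never leaves the frame."""
    x = min(max(base_x - round(shake_vec[0]), 0), len(frame[0]) - crop_w)
    y = min(max(base_y - round(shake_vec[1]), 0), len(frame) - crop_h)
    return [row[x:x + crop_w] for row in frame[y:y + crop_h]]
```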
As a method of calculating a motion vector, a block matching method is used. Here,
When the interval between the center of the exposure time for the past frame and the center of the exposure time for the current frame is taken as the frame interval of the still image, the frame interval Tfs2 in the imaging period 2 is represented by the following formula.
Tfs2=Tf−(T2−T1)/2
Accordingly, if the shutter speeds for the still image differ between the past frame and the current frame, the frame intervals of the still image and the moving image differ from each other. Since the motion vector indicates the amount of movement of the image between frames, correction is necessary when the frame intervals differ. If the frame interval of the still image differs from that of the moving image, the motion vector correction unit 602 corrects the motion vector according to the ratio. When the motion vector calculated from the still image obtained in the imaging period 2 is denoted As2, the motion vector AS2′ obtained by correcting for the frame interval is calculated as follows.
AS2′=As2*Tf/Tfs2
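The two formulas above can be sketched directly (function names and example values are illustrative):

```python
def still_frame_interval(tf, t1_shutter, t2_shutter):
    """Tfs2 = Tf - (T2 - T1)/2 : interval between the exposure centers of
    the past still (shutter T1) and the current still (shutter T2)."""
    return tf - (t2_shutter - t1_shutter) / 2

def correct_frame_interval(as2, tf, tfs2):
    """AS2' = As2 * Tf / Tfs2 : rescale the still-derived motion vector
    to the constant moving-image frame interval Tf."""
    return as2 * tf / tfs2

# e.g. Tf = 1/60 s, T1 = 1/1000 s, T2 = 1/500 s (as in imaging period 2)
tfs2 = still_frame_interval(1 / 60, 1 / 1000, 1 / 500)
corrected = correct_frame_interval(10.0, 1 / 60, tfs2)
# Tfs2 is slightly shorter than Tf, so the corrected vector is
# slightly larger than the measured one
```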
In addition, the motion vector correction unit 602 performs (3) accumulation timing correction and outputs a corrected motion vector.
In the present example, since accumulation sequences differ between the moving image and the still image, timings at which the center of the image is accumulated are different. In a certain imaging period, a time from start of accumulation of the moving image to the center of an accumulation period of the (m+1)/2th row which is the center of the image is set as an accumulation timing Tm. An accumulation timing Tms1 in the still image and an accumulation timing Tmm1 in the moving image in the imaging period 1 can be shown as in
Tms1=Ta−T1/2
Tmm1=Th*(m+1)/2+Tf/2
In addition, a deviation dTm1 of accumulation timings of the still image and moving image in the imaging period 1 is represented by the following formula.
dTm1=Tmm1−Tms1
Similarly, an accumulation timing Tms2 of the still image, an accumulation timing Tmm2 of the moving image and a deviation dTm2 of the accumulation timings of the still image and the moving image in the imaging period 2 are represented by the following formulae.
Tms2=Ta−T2/2
Tmm2=Tmm1
dTm2=Tmm2−Tms2
Since the accumulation timing of the still image, from which the motion vector is calculated, differs from the accumulation timing of the moving image to be corrected, the motion vector must be corrected. If the accumulation timings of the still image and the moving image differ from each other, the motion vector correction unit 602 corrects the motion vector according to the amount of deviation. When the motion vectors calculated from the still images obtained in the imaging period 1 and the imaging period 2 are denoted As1 and As2, the motion vector AS2″ obtained by correcting for the accumulation timing is calculated as follows using linear interpolation.
AS2″=(As2−As1)/Tfs2*dTm2
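The timing correction can be sketched exactly as the formula is written (whether the result is then applied to As2 is not spelled out in the text, so this sketch simply returns the value of the formula):

```python
def timing_correction(as1, as2, tfs2, dtm2):
    """AS2'' = (As2 - As1) / Tfs2 * dTm2 : linear interpolation over the
    accumulation-timing deviation dTm2 between the still image and the
    moving image, using the vectors of two consecutive imaging periods."""
    return (as2 - as1) / tfs2 * dtm2

value = timing_correction(as1=4.0, as2=10.0, tfs2=0.016, dtm2=0.004)
# (10 - 4) / 0.016 * 0.004 = 1.5
```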
At least one of the above three corrections, for resolution, frame interval, and accumulation timing, is performed to adapt the motion vector to the moving image; this improves the calculation accuracy of the motion vector. It is desirable to perform all three corrections, in which case the motion vector can be fully adapted to the moving image, further improving calculation accuracy. With segmentation processing, the segmentation position of the moving image can be shifted according to the shake calculated as the motion vector, reducing blurring in the captured image caused by hand shake of the photographer.
As described above, the imaging apparatus according to the present example can acquire a still image which is a first image and a moving image which is a second image with a time from start of accumulation to end thereof which is longer than that of the first image. In addition, the shake correction processing unit 600 performs shake correction processing which is image processing on the moving image generated from the moving image which is a second image using the motion vector calculated from a plurality of still images which are a plurality of first images.
As described above, in the imaging apparatus of the present invention, the time from start to end of accumulation is shorter for the still image than for the moving image. Therefore, in the still image, image deterioration due to movement of the subject or movement of the camera caused by hand shake is reduced, and an image with high sharpness is obtained. Accordingly, computing the inter-frame correlation on still images rather than on moving images improves the computation accuracy, and hence the calculation accuracy of the motion vector.
As shown in
In the related art, calculation of the motion vector is started after reading of the moving image ends. In the present example, however, the start timing of the motion vector calculation becomes earlier and processing can be performed at a higher speed. The larger the number of pixels of the still image relative to the moving image, or the larger the reference block and the search range, the longer the motion vector calculation tends to take, and the more significant this effect becomes.
As shown in
First, when reproduction of the moving image starts, frames are sequentially reproduced at a determined frame rate from a head frame 572 of the frame group 571 of the moving image (picture B). Since the moving image (picture B) is captured in settings (in the present example, 1/60 sec) in which a shutter speed is not excessively high, the reproduced image has high quality with no impression of choppiness with frame advance.
If the user performs a pause manipulation when reproduction proceeds to a frame 573, a frame 582 with the same time code is automatically retrieved from the data file of the still image (picture A) corresponding to the moving image (picture B) and displayed. The still image (picture A), captured at a high shutter speed (in the present example, 1/1,000 sec) at which a stop-motion effect is readily obtained, is a powerful image in which a moment of a sports scene is captured. Even though the two images, the still image (picture A) and the moving image (picture B), are captured in settings with different accumulation periods (shutter speeds), the gain of the still image (picture A) is not increased, and the same level of signal charge is obtained by the image element; therefore, both images have a favorable S/N with no impression of noise.
Here, when printing is instructed, data of the frame 582 of the still image (picture A) is output to the printer 195 through the print interface unit 194. Therefore, the printed matter is powerful with a stop motion effect. When the user releases pausing, automatic returning to the frame group 571 of the moving image (picture B) is performed and reproduction is resumed from a frame 574. In this case, an image to be reproduced has high quality with no impression of choppiness with frame advance.
As described above, in the imaging apparatus according to the present example, while a still image and a moving image are captured at the same time with high quality, it is possible to improve the calculation accuracy of the motion vector used in moving image capturing. In addition, the configuration of the present example is not limited to the example above, and can be appropriately changed in a range without departing from the spirit and scope of the present invention. For example, the calculated motion vector may be used for image processing other than shake correction, for example, compression of a moving image or tracking of a subject. In such uses as well, improving the calculation accuracy of the motion vector as in the present example improves performance.
Next, a second example will be described. Parts the same as in the first example are denoted with the same reference numerals, and description thereof will be omitted. A main difference from the first example is that shake correction is performed by moving a part of the optical system.
As described above, the correction lens 1001 is moved within a plane orthogonal to the optical axis 180 according to shake calculated as the motion vector, and shake of a subject image on the image element 184 is corrected. That is, in the present example, the shake correction control unit 1002 which is an optical system control device controls the optical system using the motion vector calculated from the still image which is a first image. In addition, shake correction according to the present example is updated for each imaging period, and it is possible to correct shake when the moving image is captured. Therefore, as in the first example, it is possible to improve calculation accuracy of the motion vector according to moving image capturing.
Here, in the present example, optical shake correction is performed by moving a part of the imaging optical system 152. However, optical shake correction may be performed by moving the entire imaging optical system 152 and by moving the image element 184.
Next, a third example will be described. Parts the same as in the first example are denoted with the same reference numerals, and description thereof will be omitted. A main difference from the first example is that a corresponding motion vector is changed according to an accumulation time of the still image.
Like the motion vector correction unit 602 in the first example, a motion vector correction unit 1103 corrects the motion vector calculated from the still image according to the moving image and outputs it as a corrected motion vector. A moving image motion vector calculation unit (second computing device) 1104 calculates a moving image motion vector using the moving image, which is a second image. Like the motion vector calculation unit 1102, the moving image motion vector calculation unit 1104 computes a correlation between a past frame and a current frame of the moving image and outputs a moving image motion vector. However, unlike in the motion vector correction unit 1103, no correction according to the moving image is performed. A segmentation processing unit 1105 performs segmentation processing on the moving image output of the image element 184 using the corrected motion vector or the moving image motion vector, based on the selection of the motion vector selection unit 1101.
Next, in Step S1202, it is determined whether the accumulation time T of the still image is equal to or shorter than the accumulation period Tf of the moving image. If the accumulation time T is equal to or shorter than the accumulation period Tf (Yes), the process advances to Step S1203, and the motion vector calculation unit 1102 calculates a motion vector. Then, the process advances to Step S1204, the motion vector correction unit 1103 calculates a corrected motion vector, and the process advances to Step S1206. On the other hand, if the accumulation time T is longer than the accumulation period Tf in Step S1202 (No), the process advances to Step S1205, the moving image motion vector calculation unit 1104 calculates a moving image motion vector, and the process advances to Step S1206. Next, in Step S1206, the segmentation processing unit 1105 performs segmentation processing based on the corrected motion vector input in Step S1204 or the moving image motion vector input in Step S1205, and then the process advances to Step S1207, and the shake correction processing ends. When the time from the start of accumulation to the end thereof is shorter, image deterioration due to movement of the subject and movement of the camera resulting from hand shake is reduced, and an image with high sharpness is obtained.
In the present example, when a time from start of accumulation of a still image which is a first image to end thereof is longer than a time from start of accumulation of a moving image which is a second image to end thereof, shake correction processing is performed on the moving image using the motion vector calculated from the moving image which is a second image. Therefore, a motion vector can be calculated from an image having a shorter time from start of accumulation to end thereof between the still image and the moving image, that is, an image with high sharpness, and calculation accuracy of the motion vector according to moving image capturing is improved.
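The branch of Steps S1202 to S1205 above can be summarized in a short sketch. The function and parameter names are hypothetical, and the correction of Step S1204 (the motion vector correction unit 1103) is reduced to a single scaling factor purely for illustration:

```python
def select_motion_vector(t_still, tf_moving, mv_still, mv_moving, scale):
    """Pick the motion vector for shake correction (Steps S1202-S1205).

    t_still   -- accumulation time T of the still image (seconds)
    tf_moving -- accumulation period Tf of the moving image (seconds)
    mv_still / mv_moving -- (dx, dy) vectors computed from each image
    scale     -- hypothetical correction factor mapping the still image
                 vector to the moving image (stands in for Step S1204)
    """
    if t_still <= tf_moving:                 # Step S1202: Yes branch
        dx, dy = mv_still                    # Step S1203
        return (dx * scale, dy * scale)      # Step S1204: corrected vector
    return mv_moving                         # Step S1205

# 1/1000 sec still image vs. 1/60 sec moving image frame: the still
# image, being sharper, supplies the vector used in Step S1206.
print(select_motion_vector(1/1000, 1/60, (4.0, -2.0), (3.5, -1.8), 0.5))
```

With the accumulation times reversed (for example, a 1/30 sec still image against a 1/60 sec moving image frame), the same call returns the moving image motion vector unchanged.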
Next, a fourth example will be described. Parts the same as in the third example are denoted with the same reference numerals, and description thereof will be omitted. A main difference from the third example is that a corresponding motion vector is changed according to reliability of the motion vector.
There is a known method in which, when a motion vector is calculated, the reliability of the calculated motion vector is calculated at the same time (corresponding, for example, to a motion vector detection unit 103 in Japanese Patent Laid-Open No. 2015-111764). The motion vector calculation unit 1302 and the moving image motion vector calculation unit 1304 of the present example, described below, can calculate the reliability from the relationship between the position of a reference block in the search range and the correlation value. The reliability may also be calculated from the magnitude of a difference from the output of a separately provided gyro sensor.
In the present example, the motion vector calculation unit 601, which is a first computing device, and the motion vector correction unit 602 can calculate the reliability Rs of the motion vector calculated from a plurality of still images, which are a plurality of first images. In addition, the moving image motion vector calculation unit 1304, which is a second computing device, can calculate the reliability Rm of the motion vector calculated from a plurality of moving images, which are a plurality of second images. In addition, when the reliability Rs is lower than the reliability Rm, the shake correction processing unit 1300, which is an image processing device, performs shake correction processing on the moving image using the motion vector calculated by the moving image motion vector calculation unit 1304, which is a second computing device. Therefore, the motion vector with the higher reliability between the motion vectors calculated from the still image and the moving image can be selected, and the calculation accuracy of the motion vector used in moving image capturing is improved.
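This reliability comparison amounts to a one-line selection rule; a minimal sketch with hypothetical names follows:

```python
def select_by_reliability(mv_still, rs, mv_moving, rm):
    """Return the moving image motion vector when the still image
    reliability Rs is lower than the moving image reliability Rm;
    otherwise keep the (corrected) still image motion vector."""
    return mv_moving if rs < rm else mv_still

# Example: a heavily blurred still image yields a low-reliability
# vector, so the moving image vector is chosen for shake correction.
chosen = select_by_reliability((5.0, 1.0), 0.2, (4.6, 0.9), 0.7)
```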
The configuration of the present invention is not limited to the above examples, and can be appropriately changed in a range without departing from the spirit and scope of the present invention. For example, a configuration in which an area on the image element 184 is divided, a still image is captured in one area, and a moving image is captured in another area may be used. In addition, a configuration that includes a plurality of image elements, in which a still image is captured by one image element and a moving image by another, may be used. In this case, the optical path may be divided midway, and the plurality of image elements may be arranged on different imaging planes or on the same imaging plane. According to the above configurations, it is possible to accumulate a still image and a moving image at the same time and increase the degree of freedom of imaging sequences.
In the related art, there is a known technique for capturing by reducing the transmittance according to the imaging light intensity in order to perform capturing under an extremely bright environment in an imaging apparatus such as a digital camera. Light reduction is performed in order to realize subject display with a shallow depth of field by opening the aperture under a bright environment, or to render a subject movement trajectory, for example, waterfall water, without causing saturation even when a long exposure is performed. As a method of performing light reduction, a method using an ND filter is known. In addition, in Japanese Patent Laid-Open No. 2015-136087, a technology for reducing light by dividing an exposure time for an image element according to time is disclosed.
However, in the imaging apparatus in Japanese Patent Laid-Open No. 2015-136087, when a plurality of image data items acquired from separate exposures are added and synthesized to acquire light-reduced image data, if a moving subject is imaged, there is a risk of the movement of the subject appearing interrupted, which results in an unnatural image.
The second invention proposes an imaging apparatus that can capture a high-quality image even if a subject moves during a separate exposure.
The imaging apparatus 10 to which the second invention is applied is the same as in the first invention. Here, the imaging apparatus 10 will be described again with reference to
The imaging apparatus body 151 is a body part of the imaging apparatus 10 in which an image element and a shutter device are accommodated. The imaging optical system 152 is an imaging optical system having a lens and an aperture therein. The display unit 153 is a movable display unit configured to display imaging information and an image, and has a display luminance range in which an image having a wide dynamic range can be displayed without compressing that range. The switch ST 154 is a shutter button that is mainly used for capturing a still image. The propeller 162 is a propeller for causing the imaging apparatus 10 to rise into the air in order to perform imaging in the air.
The switch MV 155 is a button for starting and stopping capturing of a moving image. The selection lever 156 is a selection lever in an imaging mode for selecting an imaging mode. The menu button 157 is a menu button for performing transition to a function setting mode in which a function of the imaging apparatus 10 is set. The up switch 158 and the down switch 159 are up and down switches for changing various setting values. The dial 160 is a dial for changing various setting values. The reproduction button 161 is a button for performing transition to a reproduction mode in which an image recorded in a recording medium in the imaging apparatus 10 is reproduced on the display unit 153.
The imaging optical system 152 forms an optical image of a subject on the image element 184. The optical axis 180 is an optical axis of the imaging optical system 152. The aperture 181 is an aperture for adjusting an intensity of light that passes through the imaging optical system 152 and is controlled by the aperture control unit 182. The optical filter 183 limits wavelengths of light that enters the image element 184 and a spatial frequency that is transmitted to the image element 184. The image element 184 converts the optical image of the subject formed through the imaging optical system 152 into an electrical image signal (signal charge) in a photoelectric conversion unit. The image element 184 has a sufficient number of pixels, a signal reading speed, a color gamut, and a dynamic range which satisfy ultra high definition television standards.
The digital signal processing unit 187 performs various types of correction on digital image data acquired from the image element 184 and then compresses the image data. The timing generation unit 189 outputs various timing signals to the image element 184 and the digital signal processing unit 187 and controls various timings. The system control unit 178 is a CPU that performs various types of computing and controls the entire imaging apparatus 10. In addition, the system control unit 178 is used to identify a subject from the image processed in the digital signal processing unit 187 and detect a movement speed on the image plane of the subject. That is, the digital signal processing unit 187 and the system control unit 178 have a function of a speed detection device configured to detect a movement speed on the image plane of the subject from imaging results.
The display I/F 191 is an interface for displaying a captured image on the display unit 153. The display unit 153 is a display unit such as a liquid crystal display. The recording I/F unit 192 is an interface for performing recording or reading in or from the recording medium 193. The recording medium 193 is a removable recording medium such as a memory for recording image data, additional data, and the like. The wireless I/F 198 is an interface for communication via the external network 199. The network 199 is a computer network such as the Internet. The print I/F 194 is an interface for outputting a captured image to the external printer 195 for printing. The printer 195 is a printer such as a small ink jet printer. The external I/F 196 is an interface for communication with the external device 197 and the like. The external device 197 is a device, such as a computer or a TV, that can display an image.
The image memory 190 temporarily stores image data. The switch input unit 179 includes the switch ST 154, the switch MV 155, and a plurality of switches for switching between various modes, and receives manipulations by a photographer. In addition, the switch input unit 179 also has a function as a light intensity setting unit that receives a setting of the number of stages of neutral density (ND) which is a light intensity limit amount. The number of ND stages (light reduction stage number) is a value that corresponds to an optical density (light transmittance). The flight control device 200 is a flight control device for performing capturing in the air.
One pixel element of the image element 184 includes two signal holding units (the first signal holding unit 507A and the second signal holding unit 507B) for one photodiode 500. The signal holding units can accumulate charges at different timings and read different images. In the present example, the first signal holding unit 507A accumulates a signal charge for imaging, and the second signal holding unit 507B accumulates a signal charge for detecting a speed of a subject. That is, an image for capturing (first image) is generated from the signal charge accumulated in the first signal holding unit 507A and an image for detecting a speed of a subject (second image) is generated from the signal charge accumulated in the second signal holding unit 507B.
Since a basic structure of the image element 184 including signal holding units is disclosed in Japanese Patent Laid-Open No. 2013-172210 by the applicants, description thereof will be omitted. Since the image element 184 of the present example includes two signal holding units for one photodiode 500, it is possible to read two images with different accumulation periods without reducing S/N. Here, in the present example, an example in which two signal holding units are included will be described. However, the present invention is not limited thereto, and a plurality of signal holding units may be included.
The pixel element 50 includes the photodiode 500 which is a photoelectric conversion unit, and the first signal holding unit 507A and the second signal holding unit 507B. In addition, the pixel element 50 includes the first transfer transistor 501A, the second transfer transistor 502A, the third transfer transistor 501B, the fourth transfer transistor 502B, and the fifth transfer transistor 503. In addition, the pixel element 50 includes the reset transistor 504, the amplifying transistor 505, the select transistor 506 and the floating diffusion region 508. In addition, the power line 520, the power line 521 and the signal output line 523 are included in the pixel element 50.
The first transfer transistor 501A is controlled by a transfer pulse φTX1A. The second transfer transistor 502A is controlled by a transfer pulse φTX2A. The third transfer transistor 501B is controlled by a transfer pulse φTX1B. The fourth transfer transistor 502B is controlled by a transfer pulse φTX2B. The fifth transfer transistor 503 is controlled by a transfer pulse φTX3. The reset transistor 504 is controlled by a reset pulse φRES. The select transistor 506 is controlled by a select pulse φSEL. Here, control pulses are transmitted from a vertical scanning circuit (not shown).
According to the transfer pulse φTX3 used for controlling the fifth transfer transistor 503, the photodiode 500 is reset, and an accumulation start timing is determined. In addition, according to the transfer pulse φTX1A used for controlling the first transfer transistor 501A, a timing at which a charge accumulated in the photodiode 500 is transferred to the first signal holding unit 507A is determined. According to the transfer pulse φTX1B used for controlling the third transfer transistor 501B, a timing at which a charge accumulated in the photodiode 500 is transferred to the second signal holding unit 507B is determined. Here, control pulses are transmitted from a vertical scanning circuit (not shown).
In the present example, imaging in which the total exposure time Tc is divided to provide two stages of an ND effect will be described. One separate exposure is set as a first exposure time T1e. The first exposure time T1e is an exposure time for imaging, and represents the time from when the photodiode 500 is reset according to the transfer pulse φTX3 and accumulation starts until the charge is transferred to the first signal holding unit 507A according to the transfer pulse φTX1A. On the other hand, the first non-exposure time T1d is the time between separate exposures during which no imaging occurs. The first non-exposure time T1d is the time from the transfer according to the immediately preceding transfer pulse φTX1A until accumulation starts after resetting according to the transfer pulse φTX3 performed before the transfer according to the next transfer pulse φTX1A.
The charge obtained in the photodiode 500 in the first exposure time T1e is transferred to the first signal holding unit 507A each time. When the total exposure time Tc is completed, all charges accumulated in a plurality of first exposure times T1e are transferred to the first signal holding unit 507A. The signal accumulated in the first signal holding unit 507A is read after the total exposure time Tc ends, and an image for capturing is generated.
A transfer count d represents the number of transfers of signal charges from the photodiode 500 to the first signal holding unit 507A in one imaging period (the total exposure time Tc), that is, the number of exposures for imaging. In the present example, transfer of signal charges from the photodiode 500 to the first signal holding unit 507A is performed twice or more during one imaging period (the total exposure time Tc) in order to obtain an ND effect. In
In order to obtain two stages of an ND effect, the relationship Tc/2^2 = ΣT1e is established, where Σ indicates the sum over the plurality of first exposure times T1e. In addition, the relationship Tc = ΣT1e + ΣT1d is established. In the present example, the same first exposure time T1e and the same first non-exposure time T1d are repeated eight times. This is because, when a moving subject is imaged, the time is divided uniformly and light from the subject is acquired evenly so that no unevenness in the movement trajectory is caused. The transfer count d, the first exposure time T1e, and the first non-exposure time T1d may be changed within a range in which the above formulas hold. When the transfer count d is changed according to the movement speed of the subject, a moving subject can be captured with high quality. On the other hand, for a subject that does not move, the individual exposure times may be increased or decreased within one imaging period.
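The two relationships above fully determine one separate exposure once the total exposure time, the number of ND stages, and the transfer count are fixed. A small sketch of that arithmetic follows; the function name and the example value Tc = 1/15 sec are assumptions:

```python
def nd_division(tc, n_stages, d):
    """Divide the total exposure time Tc into d equal separate exposures.

    Uses the relationships from the text:
        sum(T1e) = Tc / 2**n_stages      (the ND effect)
        Tc       = sum(T1e) + sum(T1d)
    Returns one first exposure time T1e and one first non-exposure
    time T1d, assuming a uniform division.
    """
    sum_t1e = tc / (2 ** n_stages)   # total light-gathering time
    t1e = sum_t1e / d                # one separate exposure
    t1d = (tc - sum_t1e) / d         # one non-exposure interval
    return t1e, t1d

# The example in the text: two ND stages, eight transfers.
t1e, t1d = nd_division(tc=1/15, n_stages=2, d=8)
```

For Tc = 1/15 sec this gives T1e = 1/480 sec and T1d = 1/160 sec, so each cycle T1e + T1d is exactly Tc/8 and (T1e + T1d)/T1e = 4 = 2^2.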
Next, an exposure for speed detection will be described with reference to
In imaging for speed detection, during one imaging period (the total exposure time Tc), the number of transfers of signal charges from the photodiode 500 to the second signal holding unit 507B is 3 or more. In the example shown in
When the length by which the subject moves on the image element is associated with the known second non-exposure time T2d, it is possible to calculate the movement speed of the subject on the image element. In this case, when the second non-exposure time T2d is set to a plurality of lengths, it is possible to perform observation over a wide range of unknown subject movement speeds. In addition, if a non-exposure time is short, like the second non-exposure time T2dc, it can be determined that the imaging results can be observed without an unnatural impression. Here, the lengths of the second non-exposure times T2d have been compared and described, but this similarly applies to the first non-exposure times T1d. That is, if the movement speed of a subject is high, it is possible to capture imaging results with high quality, in which the movement of the subject is not interrupted, by reducing the first non-exposure time T1d.
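Assuming the displacement of the subject across a known non-exposure interval has already been measured in pixels, the image-plane speed follows by simple division (a hypothetical helper; the pixel units are an assumption):

```python
def image_plane_speed(displacement_px, t2d):
    """Movement speed of the subject on the image element, from the
    distance (in pixels) it moved during the known second
    non-exposure time T2d (in seconds)."""
    return displacement_px / t2d

# A gap of 12 pixels across a 4 ms non-exposure interval.
speed = image_plane_speed(12.0, 0.004)   # 3000 pixels per second
```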
The calculated movement speed of the subject is used to determine the method of dividing the exposure in the next imaging. For example, in a consecutive imaging scene in which the first exposure and the next exposure are continuous, since the change in the movement speed of the subject is considered not to be large, it is effective to determine the method of dividing the next exposure on the basis of the movement speed of the subject calculated in the first imaging. While the movement speed is detected at the same time as imaging in the present example, it is also possible to detect the movement speed during live view before imaging. In addition, while the movement speed of the subject is detected from the image in the present example, it may be calculated from a built-in accelerometer, or it may be input to the imaging apparatus in advance.
Next, a division state and imaging results will be described with reference to
When the first non-exposure time T1d is set to be longer as in the sequence in
The first non-exposure time T1d in the sequence in
In the sequence in
Also in the sequence in
Setting conditions of the first non-exposure time T1d are expressed by the formula of “V×T1d<Const.” Here, V denotes a speed of a subject on an image plane, and Const. is a value that is determined from imaging conditions such as a subject distance and a focusing state, and observation conditions. It is desirable that T1d be set to be longer in a range in which the formula is satisfied. This formula means that it is necessary to shorten the first non-exposure time T1d as a speed V of the subject on the image plane becomes higher. That is, if a speed V of a subject on the image plane is higher, the first non-exposure time T1d may be shortened and the transfer count d may be increased. On the other hand, if a speed V of a subject on the image plane is slower, the first non-exposure time T1d may be lengthened in a range in which the above formula is satisfied and the transfer count d may be decreased.
If the speed V of the subject on the image plane increases, it is necessary to shorten the first non-exposure time T1d. However, as described above, there is a limit to shortening the first non-exposure time T1d. The first exposure time T1e and the first non-exposure time T1d have the relationship (T1e+T1d)/T1e = 2^n, where n is the number of ND stages. Therefore, if the number of ND stages is 2, when the first exposure time T1e is set to be shortest, the first non-exposure time T1d is also set to be shortest. When the number of ND stages is larger, the first non-exposure time T1d increases with respect to the first exposure time T1e. On the other hand, when the number of ND stages is smaller, it is possible to shorten the first non-exposure time T1d with respect to the first exposure time T1e. Therefore, if the movement speed of the subject is higher than a predetermined value, the number of ND stages is reduced and the transfer count d is increased, so that it is possible to shorten the first non-exposure time T1d.
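Combining the setting condition V×T1d&lt;Const. with the relationship (T1e+T1d)/T1e = 2^n gives one way to search for the largest usable number of ND stages. The sketch below is an illustration under stated assumptions: the search range, the names, and the idea of evaluating T1d at the maximum transfer count are not from the text.

```python
def max_nd_stages(v, const, tc, d_max):
    """Largest ND stage count n whose shortest achievable T1d still
    satisfies v * T1d < const.

    With d uniform transfers, one cycle lasts Tc/d, and from
    (T1e + T1d) / T1e = 2**n it follows that
        T1d = (Tc / d) * (1 - 2**-n),
    so the maximum transfer count d_max gives the shortest T1d.
    """
    for n in range(4, 0, -1):            # try 4 stages down to 1
        t1d = (tc / d_max) * (1 - 2 ** -n)
        if v * t1d < const:
            return n
    return 0                             # no ND division satisfies it

# A fast subject (small Const.) forces the stage count down to 1.
n = max_nd_stages(v=100.0, const=0.5, tc=1/15, d_max=8)
```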
When the number of ND stages is changed, since the intensity of light to be captured changes greatly, it is necessary to limit the light intensity by, for example, shortening the imaging time or lowering the ISO sensitivity. In addition, a photographer can select in advance either a subject speed priority mode, in which a moving subject is imaged with high quality even if the number of ND stages becomes small, or an ND stage number priority mode, in which the set number of ND stages has priority regardless of the quality of a moving subject. If the subject speed priority mode is selected, when the speed of the subject is high, the imaging apparatus 10 reduces the number of ND stages, increases the transfer count d, and shortens the first non-exposure time T1d. On the other hand, if the ND stage number priority mode is selected, even when the speed of the subject is high, the number of ND stages is not decreased to shorten the first non-exposure time T1d.
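The two priority modes reduce to a simple branch; the mode names, and the specific adjustment of one fewer stage with a doubled transfer count, are hypothetical choices for illustration:

```python
def plan_exposure(subject_fast, mode, n_set, d_set):
    """Adjust the ND stage count and transfer count per the selected mode.

    In 'subject_speed' mode a fast subject lowers the ND stage count and
    raises the transfer count d to shorten T1d; in 'nd_stages' mode the
    set stage count is kept regardless of subject speed.
    """
    if mode == "subject_speed" and subject_fast:
        return max(n_set - 1, 0), d_set * 2
    return n_set, d_set

# Fast subject: trade one ND stage for a shorter non-exposure time.
print(plan_exposure(True, "subject_speed", 2, 8))   # (1, 16)
```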
When the number of ND stages is changed from 2 to 1 while the transfer count d remains at 8, the first non-exposure time T1d is shortened. When the first non-exposure time T1d is shortened, capturing of a high-quality picture can be realized. However, as described above, the first non-exposure time T1d is preferably set to be as long as possible in a range in which no non-exposure time appears in the image. Thus, as shown
As described above, according to the present example, it is possible to provide an imaging apparatus that can capture a high-quality picture by determining a non-exposure time according to a speed of a subject even if the subject moves during a separate exposure.
The second invention can be realized in processes in which a program that executes one or more functions of the above example is supplied to a system or a device through a network or a storage medium, and one or more processors in a computer of the system or the device read and execute the program. In addition, the second invention can be realized by a circuit (for example, an ASIC) that implements one or more functions.
In recent years, image elements have become highly functional, and the number of pixels and the frame rate have been improved. There is a known method in which vector data (an optical flow) indicating the movement of an object between a plurality of images is obtained using the image signal. For example, the optical flow is an index that indicates the amount and direction of hand shake in capturing of a moving image, and is used to calculate a segmentation amount and direction in electronic vibration prevention, in which an image is segmented from a larger captured image. In addition, the optical flow is an index for estimating the movement direction and speed of a moving subject, and is used for tracking auto focus and the like.
The optical flow is acquired by comparing images of preceding and following frames in the moving image and calculating the movement amounts and directions of an object between them. In this case, when the movement of a subject or hand shake is fast and an object moves greatly during the exposure time for one frame, the image of the object obtained as a result of the exposure becomes blurred in the movement direction. When blurred object images are compared, since the outline of the object image is not clear, there are problems in that it is not possible to accurately compare positions and it is not possible to precisely obtain an optical flow that indicates the movement amount and movement direction of the object between images.
In the imaging apparatus in Japanese Patent Laid-Open No. 2010-206522, in order to address the above problem, it is proposed that the shutter speed per frame when a moving image is captured be set higher according to the speed of the object to be captured and the exposure time be shortened, so that an image in which the outline of the object is sharp is obtained. In the imaging apparatus in Japanese Patent Laid-Open No. 2010-157893, it is proposed that charges converted by a photoelectric conversion unit be transferred to an accumulation unit a plurality of times and the charges transferred the plurality of times be accumulated collectively, so that conditions such as the exposure time and the exposure amount can be changed quickly and freely. In addition, using this, short accumulation periods are distributed uniformly within one frame period and the charges transferred a plurality of times are accumulated collectively, so that a moving image can be obtained.
In the imaging apparatus in Japanese Patent Laid-Open No. 2010-206522, it is possible to obtain a sharp image of an object that moves fast by increasing the shutter speed in one frame during moving image capture and shortening the exposure time. However, increasing the shutter speed in moving image capturing has the following disadvantage. Generally, it is known that quality greatly deteriorates when a reproduced moving image gives an impression of choppiness with frame advance. In order to avoid such an impression of choppiness, it is necessary to set an accumulation time close to one frame period in the capturing sequence. That is, when the frame rate is 30 fps, a relatively long exposure time such as 1/30 sec or 1/60 sec is appropriate. Accordingly, obtaining a sharp image by increasing the shutter speed and shortening the exposure time in one frame, in contrast to the relatively long exposure time set close to one frame period, has the problem that an impression of choppiness is likely to be experienced in the moving image.
In addition, in the imaging apparatus in Japanese Patent Laid-Open No. 2010-157893, when a short exposure and a transfer to an accumulation unit are repeated a plurality of times in one frame period, it is possible to perform imaging for a relatively long effective exposure time while the light intensity is reduced. In addition, since the image obtained in this way is an overlapping image formed by performing an exposure and a transfer to the accumulation unit a plurality of times, there is less impression of choppiness when it is viewed as a moving image than with the imaging apparatus in Japanese Patent Laid-Open No. 2010-206522. However, in the imaging apparatus in Japanese Patent Laid-Open No. 2010-157893, since obtaining an optical flow is not assumed, the multiple division accumulation method is not suitable for precisely obtaining the optical flow. In addition, if the imaging apparatus in Japanese Patent Laid-Open No. 2010-157893 is operated to acquire an optical flow, when images are compared between frames, the outlines within each image overlap each other. Accordingly, there are problems in that it is difficult to accurately select outlines that are common between frames and it is not possible to perform precise detection.
A third invention provides an imaging apparatus, and a control method thereof, that can prevent a user from experiencing an impression of choppiness and can obtain an optical flow with high accuracy.
187 indicates a digital signal processing unit configured to perform various types of correction on digital image data output from the image element 184 and then compress image data. 189 indicates a timing generation unit configured to output various timing signals to the image element 184 and the digital signal processing unit 187. 178 indicates a system control CPU configured to control various types of computing and the entire digital still motion camera. The timing generation unit 189 and the system control CPU 178 correspond to a “control device” in the scope of the claims.
190 indicates an image memory configured to temporarily store image data. 191 indicates a display interface unit configured to display a captured image. 153 indicates a display unit such as a liquid crystal display. 193 indicates a removable recording medium such as a semiconductor memory for recording image data, additional data, and the like. 192 indicates a recording interface unit configured to perform recording or reading in or from the recording medium 193. 196 indicates an external interface unit configured to perform communication with the external computer 197 and the like. 195 indicates a printer such as a small ink jet printer. 194 indicates a print interface unit configured to output a captured image to the printer 195 for printing. 199 indicates a computer network such as the Internet. 198 indicates a wireless interface unit configured to perform communication via the network 199. 179 indicates a switch input unit that includes the switch ST 154, the switch MV 155, and a plurality of switches for switching between various modes.
In the circuit diagram in
In addition, the first transfer transistor 501A is controlled by a transfer pulse φTX1A. The second transfer transistor 502A is controlled by a transfer pulse φTX2A. In addition, the reset transistor 504 is controlled by a reset pulse φRES and the select transistor 506 is controlled by a select pulse φSEL. In addition, the third transfer transistor 503 is controlled by a transfer pulse φTX3. Here, control pulses are transmitted from a vertical scanning circuit (not shown). In addition, 520 and 521 are power lines and 523 is a signal output line.
Operations of the image element will be described below in detail with reference to
Here, the image element 184 of the present example includes multiple rows of pixel columns in the vertical direction.
In
First, at the time t1, in the timing generation unit 189, the vertical synchronization signal φV reaches a high level and at the same time, a horizontal synchronization signal φH reaches a high level. In synchronization with the time t1 at which the vertical synchronization signal φV and the horizontal synchronization signal φH become a high level, the reset pulse φRES(1) in the first row reaches a low level. Then, the reset transistor 504 in the first row is turned off, and a reset state of the floating diffusion region 508 is released. At the same time, when a select pulse φSEL(1) in the first row reaches a high level, the select transistor 506 in the first row is turned on, and an image signal in the first row can be read. In addition, an output corresponding to a change in potential of the floating diffusion region 508 is read out to the signal output line 523 through the amplifying transistor 505 and the select transistor 506. A signal read out to the signal output line 523 is supplied to a readout circuit (not shown) and is output to the outside as an image signal in the first row (moving image).
Next, at a time t2, when the transfer pulse φTX2(1) in the first row reaches a high level, the second transfer transistor 502A in the first row is turned on. In this case, since reset pulses φRES(1) in all rows have already become a high level and the reset transistor 504 is turned on, the floating diffusion region 508 in the first row and the first signal holding unit 507A are reset. Here, the select pulse φSEL(1) in the first row at the time t2 reaches a low level.
Next, at a time t3, the transfer pulse φTX3(1) in the first row reaches a low level. Then, the third transfer transistor 503 is turned off, the resetting of the photodiode 500 in the first row is released, and accumulation of signal charges of moving images in the photodiode 500 starts. In addition, at a time t4, the transfer pulse φTX1(1) in the first row reaches a high level. Then, the first transfer transistor 501A is turned on, and the signal charge accumulated in the photodiode 500 is transferred to the signal holding unit 507A that holds charges of moving images in the first row. In addition, at a time t5, the transfer pulse φTX1(1) in the first row reaches a low level. Then, the first transfer transistor 501A is turned off, and the transfer of the signal charge accumulated in the photodiode 500 to the signal holding unit 507A ends.
Here, the time t3 to the time t5 corresponds to one accumulation time of 1/480 sec of a moving image in an imaging period and is shown as an accumulation time 602-1 with an area of lines rising diagonally upward. When this accumulation operation is performed discretely 4 times, it is shown as four accumulation times 602-1, 602-2, 602-3, and 602-4 with an area of lines rising diagonally upward. Then, when the signal charges obtained in these four accumulation times 602-1, 602-2, 602-3, and 602-4 are added, a signal amount equivalent to the signal charge obtained for one general accumulation time (1/480 sec × 4 times = 1/120 sec) is obtained. Here, since the control operations in the three accumulation times 602-2, 602-3, and 602-4 following the first accumulation time 602-1 are the same as in the first accumulation time 602-1, description thereof will be omitted here.
Next, at the time t6, the vertical synchronization signal φV reaches a high level at the timing generation unit 189 and the horizontal synchronization signal φH reaches a high level at the same time, and the next imaging period starts. Then, the signal charge of the moving image accumulated and added over the four accumulation times 602-1, 602-2, 602-3, and 602-4 is output as an image signal (moving image) to the outside after the time t6. Here, the timing chart of the second row is executed in synchronization with the horizontal synchronization signal φH immediately after the time t1. That is, the timing charts of all rows are sequentially started between the time t1 and the time t6. For example, the row whose timing chart is started by the horizontal synchronization signal φH at a time t0 is set as the m-th row. In this case, the switch signals are represented as φSEL(m), φRES(m), φTX3(m), φTX1A(m), φTX1B(m), φTX2A(m), and φTX2B(m).
According to the timing charts described above, the moving image can be obtained with an exposure amount equivalent to one exposure of 1/120 sec by repeating the accumulation of a signal charge from a 1/480 sec exposure 4 times in an imaging period of 1/30 sec. Here, the operation of obtaining an image signal by performing exposure and accumulation a plurality of times during one imaging period corresponds to "an operation of generating a first or second image signal by transferring signal charges n times from a photoelectric conversion unit to a signal holding unit in a first or second imaging period" in the scope of the claims. Here, n is a natural number of 2 or more.
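The accumulation arithmetic above can be sketched numerically as follows. This is a hypothetical Python model for illustration only (the actual summation happens as charge on the sensor), and the photon rate is an arbitrary assumed value.

```python
def accumulate_divided_exposures(photon_rate, n=4, sub_exposure=1 / 480):
    """Sum the signal collected over n short sub-exposures (in electrons)."""
    return sum(photon_rate * sub_exposure for _ in range(n))

# Four 1/480 sec sub-exposures collect the same signal as one 1/120 sec exposure.
rate = 48000.0  # hypothetical photon rate, electrons per second
divided = accumulate_divided_exposures(rate, n=4, sub_exposure=1 / 480)
single = rate * (1 / 120)
assert abs(divided - single) < 1e-6
```

Because each sub-exposure collects one quarter of the signal, the sum matches a single 1/120 sec exposure while the four accumulations remain spread across the 1/30 sec frame.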
Since the moving image obtained in this case is configured by obtaining one image signal by adding signal charges from short accumulation times set at substantially equal intervals in an imaging period of 1/30 sec, it is possible to obtain a high-quality moving image with no impression of choppiness with frame advance. Here, in the above example, the number of accumulations and additions (the number of separate exposures) of signal charges within a general exposure interval (fps value) is 4. However, the present invention is not limited thereto, and the number may be, for example, 8, 16, 32, or 64.
In recent years, image elements have become highly functional, and improvements in the number of pixels and the frame rate have been attempted. There is a known method in which vector data (an optical flow) indicating a movement of an object between a plurality of images is obtained using the image signal. In the present example, the optical flow is acquired from through images. The system control CPU 178 in
In
First, as shown in
Specifically, in the image 64, feature points such as edges and corners in the image 64 are extracted while the area 65 is gradually shifted as indicated by the arrow 66 in a predetermined range with the area 65 corresponding to the region of interest 63 as the center. In addition, feature values are computed from the surrounding area to perform matching between the two images 61 and 64. Feature points such as edges and corners are extracted by computing luminance gradient values of the luminance data in the horizontal direction and the vertical direction and extracting a part in which the gradient value is a certain value or more in each of the directions.
As a result, it can be seen that, in the image 68, the region of interest 63 has moved as indicated by the vector 69. Then, the above operation is performed on a plurality of regions of interest set in the image 61. In this case, a plurality of movement vectors are detected in the image 68. Then, vector selection is performed focusing on the subject 62. For example, an estimated value may be obtained using random sample consensus (RANSAC), and one representative movement vector can be determined. Here, since RANSAC is a known technology, details thereof will be omitted here. At this time, in an imaging method in the related art, when the outline of the image between frames to be compared becomes unclear due to a movement of an object, hand shake, or the like, there is a problem in that a precise optical flow is not obtained.
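As a rough sketch of how RANSAC can select one representative vector from the detected movement vectors, the following assumes a pure-translation motion model; the function name, threshold, and sample data are illustrative assumptions, not details from the patent.

```python
import random

def ransac_translation(vectors, threshold=2.0, iterations=100, seed=0):
    """Select the translation supported by the most motion vectors (inliers),
    then refine it as the average of those inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        cand = rng.choice(vectors)  # hypothesis: one sampled vector is the true motion
        inliers = [v for v in vectors
                   if (v[0] - cand[0]) ** 2 + (v[1] - cand[1]) ** 2 <= threshold ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    return (sum(v[0] for v in best_inliers) / n,
            sum(v[1] for v in best_inliers) / n)

# Eight consistent vectors and two outliers: the consensus translation is (5, 0).
flows = [(5, 0)] * 8 + [(20, 7), (-3, 9)]
print(ransac_translation(flows))  # -> (5.0, 0.0)
```

The outliers (for example, vectors from a moving background) gather too few inliers and are discarded, leaving the motion of the subject of interest.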
<Problems when an Optical Flow is Acquired by Extracting Feature Points>
Hereinafter, problems when an optical flow is acquired will be described using a schematic diagram of an image obtained when a scene in which a subject passes by in the horizontal left direction of a screen is captured by different exposure methods.
Here, a case in which feature values are computed from the outlines of both the images of the subject S in the first frame and the second frame which is the next frame shown in
In
In this case, when the luminance value of a pixel in the x-th column on the line α is set as I(x), a gradient value K(x) is represented by the following Formula (1).
K(x)=dI(x)/dx (1)
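On discrete pixel data, Formula (1) can be approximated with a finite difference. The following sketch (function names and the threshold are illustrative assumptions) flags columns whose gradient magnitude reaches a threshold as candidate outline positions, as described for the feature extraction above.

```python
def luminance_gradient(I):
    """Discrete approximation of Formula (1), K(x) = dI(x)/dx,
    as a forward difference between adjacent pixels on a line."""
    return [I[x + 1] - I[x] for x in range(len(I) - 1)]

def outline_columns(I, threshold):
    """Columns where |K(x)| is the threshold or more: candidate outlines."""
    return [x for x, k in enumerate(luminance_gradient(I)) if abs(k) >= threshold]

# A sharp step from dark (10) to bright (200) yields one outline at column 2.
line = [10, 10, 10, 200, 200, 200]
print(outline_columns(line, threshold=100))  # -> [2]
```

A blurred outline spreads the same luminance step over many columns, so no single column reaches the threshold, which is exactly the detection problem discussed next.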
In
When the subject S moves at a substantially uniform speed, a part in which the outline is blurred because the subject S in
With reference to the luminance value in
In this case, the sharp outline part of the subject S in
A sharp outline part in
As described above, since exposure, transfer, and accumulation are performed only for the first short time ( 1/480 sec) in each frame, during the remaining time of 1/30 sec for one frame after the exposure, transfer, and accumulation are completed, no image is captured. A time at which capturing of the next image starts is a starting point of the second frame, and the subject continues to move in an advancing direction during that time. As shown in
On the other hand, in the above example in which the outline is blurred in
As described above, as shown in the example in
As described above, in the first and second comparative examples, it is not possible to achieve both precise calculation of the optical flow and capturing a high-quality moving image with no impression of choppiness.
On the other hand, in the present invention, an optical flow is estimated using a method of acquiring a moving image by performing exposure, transfer, and accumulation a plurality of times in a divided manner within the 1/30 sec of one frame, thereby addressing the problem. This is described below.
In this case, in
In addition,
In
Therefore, if the outline in the second frame corresponding to the outline of interest in the first frame can be accurately selected, since it is possible to precisely obtain positions of the outlines, it is possible to precisely obtain an optical flow which is a movement vector thereof. In
Here, an operation of obtaining an image in which a plurality of sharp outlines overlap each other corresponds to “an operation of generating a first or second image signal by transferring signal charges n times from a photoelectric conversion unit to a signal holding unit in a first or second imaging period” in the scope of the claims.
In addition, in the present example, as described above, exposure is performed a plurality of times, although each exposure is short, during an imaging period of 1/30 sec. Therefore, as shown, the interval between the end of the rising part indicating the outlines A1 to A4 in the graph of the first frame indicated by a solid line and the end of the rising part indicating the outlines B1 to B4 in the graph of the second frame indicated by a dashed line is controlled so that it becomes smaller. Therefore, for a user who views the moving image, the outline of the subject S in the first frame and the outline of the subject S in the second frame appear substantially continuous, and a high-quality moving image with no impression of choppiness is obtained.
However, in the above method, like A1 to A4 indicating the outlines of the subject S in the first frame and B1 to B4 indicating the outlines of the subject S in the second frame, edges with the same shape are arranged at uniform intervals. In this case, there is a risk of an identical edge other than the one corresponding to the edge in the first frame being erroneously selected from the second frame. That is, in the case of the present example, as in Japanese Patent Laid-Open No. 2010-157893, outlines with the same shape are arranged at the same intervals; therefore, compared to detecting feature points within limited blocks, a corresponding part may be erroneously recognized.
Specifically, in
In order to address the above problem, as will be described below, the present invention performs a two-stage narrowing down procedure in estimation of the optical flow.
That is, after the images of the first frame and the next second frame are obtained, as a first procedure, low pass filter processing is performed on the image luminance values, in which the luminance value of each pixel is averaged with the luminance values of a predetermined number of adjacent pixels in a predetermined direction (such as the horizontal direction or the vertical direction) and a high frequency component is thereby removed.
For example, an example in which luminance values in the horizontal direction are averaged will be described as follows.
When a luminance value of a pixel in the x-th column of original data is set as I1(x), the luminance value I2(x) obtained after low pass filter processing is performed is calculated by performing a process of averaging a predetermined number of preceding and following pixels. For example, in the case of averaging three preceding and three following pixels, the luminance value I2(x) is an average of luminances I1(x) in a total of 7 pixels including three pixels in front of a pixel of interest, three pixels behind the pixel of interest and the one pixel of interest itself, and is represented by the following Formula (2).
I2(x)=(I1(x−3)+I1(x−2)+I1(x−1)+I1(x)+I1(x+1)+I1(x+2)+I1(x+3))/(3+3+1) (2)
Here, while an average of a total of 7 pixel signals including three preceding and three following pixel signals has been described in the present example, the present invention is not limited to this number of pixels. As long as a method of performing low pass filter processing using an average of a plurality of adjacent pixel signals is used, the present invention can be applied to a case using any number of pixel signals.
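Formula (2) amounts to a 7-tap moving average; a minimal sketch follows. Border handling is an assumption here (indices beyond the image edge are clamped to the nearest valid pixel), since the text does not specify how the ends of a line are treated.

```python
def low_pass_7tap(I1):
    """Average each pixel with its 3 preceding and 3 following pixels
    (Formula (2)); out-of-range indices are clamped to the border."""
    n = len(I1)
    return [sum(I1[min(max(x + d, 0), n - 1)] for d in range(-3, 4)) / 7
            for x in range(n)]

# A single bright pixel is spread across its 7-pixel neighborhood,
# blurring a sharp feature into a gentle slope.
print(low_pass_7tap([0, 0, 0, 7, 0, 0, 0]))  # -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

Applying this filter widens every outline, which is what merges the four separate outlines of one frame into the single broad luminance ramp used in the next stage.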
The operation corresponds to “operation of generating a third or fourth image signal by averaging luminance values of pixel parts with luminance values of a predetermined number of adjacent other pixel parts with respect to a generated first or second image signal” in the scope of the claims. In addition, a device that can remove a high frequency component with a low pass filter corresponds to “first or second averaging device” in the scope of the claims.
Parts in which the outline is blurred by low pass filter processing in
In this case, as shown in
The center position (first center) G1 in the first frame and the center position (second center) G2 in the second frame are obtained, and an approximate optical flow FG1G2 is then obtained by a block matching method. The approximate optical flow FG1G2 indicates a movement direction and a movement amount of the corresponding center position in the first frame and the second frame. The approximate optical flow FG1G2 is an optical flow obtained when the image outline is unclear due to low pass filtering as in the case of
However, to summarize the present invention, a precise outline position is obtained by combining this rough movement vector with the plurality of sharp outline parts obtained by performing exposure a plurality of times in the above one imaging period, and performing block matching once again.
That is, after the approximate optical flow FG1G2 is obtained, as the next procedure, the procedure returns to data of the luminance I1(x) before the low pass filter in
Here, a gradient value of the luminance shown in
That is, as described above, in the procedure of the present invention, the approximate optical flow FG1G2, which is a rough movement vector, is obtained in advance from the data of the luminance I2(x) to which the low pass filter is applied. Therefore, it can be inferred that the edge in the second frame corresponding to the edge A1 lies near a point offset from the edge A1 by the approximate optical flow FG1G2. Therefore, block matching is started using the point offset from the edge A1 by the approximate optical flow FG1G2 as a starting point, and the outline at a position closest to the starting point among the outlines whose feature points match is determined as the corresponding edge, and thus it is possible to improve detection accuracy.
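The selection rule described here, picking the candidate nearest the starting point given by the approximate flow, can be sketched as follows; the function name and sample values are illustrative assumptions.

```python
def select_final_flow(candidates, approx_flow):
    """Among optical-flow candidates whose feature points match, pick the one
    closest to the approximate optical flow (the rough vector FG1G2)."""
    return min(candidates,
               key=lambda v: (v[0] - approx_flow[0]) ** 2
                           + (v[1] - approx_flow[1]) ** 2)

# Candidates repeat at the divided-exposure spacing; the approximate flow
# disambiguates which repetition is the true correspondence.
cands = [(4.0, 0.0), (8.0, 0.0), (12.0, 0.0)]
print(select_final_flow(cands, approx_flow=(9.0, 0.0)))  # -> (8.0, 0.0)
```

Even a coarse approximate flow suffices, as long as its error is smaller than half the spacing between the repeated outline candidates.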
When the flow starts, first, in Step S001, exposure, transfer, and accumulation are performed a plurality of times during one imaging period and thereby a moving image is acquired. That is, a luminance I1(x) which is a first image signal is acquired in a first imaging period (the first frame) and a luminance I1(x) which is a second image signal is acquired in a second imaging period (the second frame). Next, in Step S101, the low pass filter processing according to averaging with a predetermined number of adjacent pixel signals is performed on the first and second image signals obtained in the previous step and third and fourth image signals (data of the luminance I2(x)) are generated.
Then, in Step S102, the third and fourth image signals generated in the previous step are compared and an approximate optical flow FG1G2, which is a rough optical flow, is calculated. In addition, in Step S201, the first and second image signals I1(x) obtained in Step S001 are compared and the above optical flow candidates are selected. Here, regarding the first and second image signals, since a plurality of identical outlines are arranged at equal intervals, when feature points are compared, there are a plurality of optical flow candidates whose features match.
Finally, in Step S202, from among the plurality of optical flow candidates selected in the previous step, an optical flow that is closest to the approximate optical flow FG1G2 obtained in Step S102 is selected as a final optical flow.
Thus, it is possible to estimate the final optical flow with high accuracy and the flow of the optical flow estimation procedure ends.
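Steps S101 to S202 can be summarized in the following sketch. Here `low_pass`, `rough_flow`, and `candidate_flows` are hypothetical callables standing in for the processing blocks described above, and Step S001 (image acquisition) is assumed to have already produced `frame1` and `frame2`.

```python
def estimate_optical_flow(frame1, frame2, low_pass, rough_flow, candidate_flows):
    """Two-stage narrowing down of the optical flow (Steps S101-S202)."""
    # S101: low pass filter the first and second image signals
    f3, f4 = low_pass(frame1), low_pass(frame2)
    # S102: approximate optical flow from the filtered (third/fourth) signals
    approx = rough_flow(f3, f4)
    # S201: candidate flows from the unfiltered signals; several may match
    cands = candidate_flows(frame1, frame2)
    # S202: the final flow is the candidate closest to the approximate flow
    return min(cands, key=lambda v: (v[0] - approx[0]) ** 2
                                  + (v[1] - approx[1]) ** 2)
```

Structuring the procedure this way keeps the expensive fine matching (S201, S202) anchored by the cheap coarse estimate (S102), so the repeated outlines produced by the divided exposures cannot be confused with one another.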
Here, the above optical flow estimation procedure is performed by the “optical flow candidate calculation device, approximate optical flow calculation device, and optical flow estimation device” in the scope of the claims.
In addition, an operation of selecting an optical flow candidate corresponds to “operation of calculating a plurality of optical flow candidates which are vectors indicating a movement direction and amount of a subject during first and second imaging periods” in the scope of the claims. In addition, an operation of obtaining an approximate optical flow FG1G2 corresponds to “operation of calculating an approximate optical flow which is a vector indicating an approximate movement direction and amount of a subject during first and second imaging periods” in the scope of the claims. In addition, an operation of obtaining a real optical flow corresponds to “operation of estimating one optical flow candidate that is closest to an approximate optical flow among a plurality of optical flow candidates as a final optical flow” in the scope of the claims.
As described above, in the present invention, as the first procedure, a movement vector with low accuracy is obtained from low-pass-filtered image data. Then, as the second procedure, a precise movement vector is narrowed down from the original image data based on that movement vector. The present invention performs such two-stage narrowing down. According to this procedure, it is possible to accurately select the corresponding edge when the first frame transitions to the second frame from among the plurality of edges that appear in the image due to the separate exposures, and thus, even when a separate exposure is used, it is possible to precisely estimate the real optical flow FA1B1.
Accuracy of selection of a corresponding outline between the preceding and following frames can be improved using the following method.
In
Here, as shown in
The images obtained in the present example are shown in
In this case, the outline parts of the subject S in
In addition,
In
In this case, since a part having a luminance gradient value that is a certain value or greater is a very narrow area, it is possible to precisely obtain positions of four outlines that are arranged at nonuniform intervals in the frames in the x axis direction. Here, unlike the case in
In addition, since exposure is performed a plurality of times at nonuniform intervals, in the graph of the gradient value in
As described above, when a plurality of exposure timings are set at nonuniform time intervals in one imaging period, it is possible to implement a configuration that can acquire a high-quality moving image while a precise optical flow is calculated. In addition, the plurality of exposure timings set at nonuniform time intervals correspond to "in each of first and second imaging periods, n times of signal charge transfer are performed such that time intervals between n transfer timings at which transfer starts are different from each other" in the scope of the claims.
It is desirable to set the number of times of exposure, transfer, and accumulation so that a time interval between signal charge transfer timings, that is, a time interval between a plurality of exposure timings in
Therefore, as an example, when the exposure time for one frame is 1/30 sec in moving image capturing, an exposure divided into at least 4 parts within one frame is performed and the time intervals between the four exposures are set to 1/120 sec or shorter. In addition, when the exposure time for one frame is 1/60 sec, the exposure is divided into at least two parts and the time interval between the two exposures is set to 1/120 sec or shorter.
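The arithmetic above reduces to choosing the smallest division count whose even spacing over one frame does not exceed 1/120 sec. The following is a sketch under the assumption of evenly spaced exposures; the function name is illustrative.

```python
import math

def min_divided_exposures(frame_time, max_interval=1 / 120):
    """Smallest number of evenly spaced exposures within one frame such that
    the interval between consecutive exposures is max_interval or shorter."""
    return max(2, math.ceil(frame_time / max_interval))

print(min_divided_exposures(1 / 30))  # -> 4 (a 1/30 sec frame needs 4 exposures)
print(min_divided_exposures(1 / 60))  # -> 2 (a 1/60 sec frame needs 2)
```

The lower bound of 2 reflects that a single exposure per frame cannot form the overlapping outlines used for the optical flow estimation.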
According to the above configuration, it is possible to implement a configuration that can optimize the number of times of exposure and acquire a high-quality moving image with no impression of choppiness.
Here, an operation of setting the number of times of exposure so that the time interval becomes 1/120 sec or shorter corresponds to “operation of increasing and decreasing a value of n so that time intervals between n transfer timings at which n times of signal charge transfer start are 1/120 sec or shorter” in the scope of the claims.
In the above imaging apparatus, the system control CPU (control device) 178 in
In addition, the system control CPU (control device) 178 in
In addition, the system control CPU (control device) 178 in
In addition, the system control CPU (control device) 178 in
In the above imaging apparatus, functions performed by the system control CPU (control device) 178 in
The present invention can be realized in processes in which a program that executes one or more functions of the above embodiment is supplied to a system or a device through a network or a storage medium, and one or more processors in a computer of the system or the device read and execute the program. In addition, the present invention can be realized by a circuit (for example, an ASIC) that implements one or more functions.
While preferable examples of the first to third inventions have been described above, the first to third inventions are not limited to the above examples, and various modifications and alternations can be made within the scope of the spirit of the invention.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Applications No. 2017-140851 filed on Jul. 20, 2017, No. 2017-160110 filed on Aug. 23, 2017, and No. 2018-033447 filed on Feb. 27, 2018, which are hereby incorporated by reference herein in their entirety.
Number | Date | Country | Kind |
---|---|---|---|
2017-140851 | Jul 2017 | JP | national |
2017-160110 | Aug 2017 | JP | national |
2018-033447 | Feb 2018 | JP | national |