The present invention relates to a technique for correcting an error in focus detection in an image capturing apparatus.
It is common for an image capturing apparatus equipped with an automatic focus adjustment apparatus capable of automatically focusing on an object to have a function of correcting, by means of a calibration apparatus, any error in a defocus amount calculated by the automatic focus adjustment apparatus.
Japanese Patent Laid-Open No. 2005-109621 discloses an image capturing apparatus with a calibration mode for acquiring a correction value for an automatic focus adjustment apparatus. Images and defocus amounts are acquired at different focusing lens positions, and a user selects an image with a perfect focus from among the acquired images. Then, the defocus amount acquired at the same time as the selected image is stored as a correction value.
Meanwhile, an image capturing apparatus has been offered that divides an exit pupil of a photographing lens into a plurality of partial pupil areas, and hence can simultaneously shoot a plurality of parallax images corresponding to the partial pupil areas. U.S. Pat. No. 4,410,804 discloses an image capturing apparatus that includes a two-dimensional image sensor in which a photoelectric converter divided into a plurality of parts is disposed in correspondence with one microlens. Each photoelectric converter is configured in such a manner that its divided parts receive light from different partial pupil areas of an exit pupil of a photographing lens via one microlens. A plurality of parallax images corresponding to the partial pupil areas can be generated from image signals generated by photoelectrically converting object light received by the divided parts of each photoelectric converter.
Japanese Patent Laid-Open No. 2010-197551 offers an image capturing apparatus that performs bracket shooting while moving a focusing lens so as to expand a range in which a plurality of parallax images can be generated. The plurality of shot parallax images are equivalent to light field (LF) data, which is information on the spatial distribution and angular distribution of light intensity. Stanford Tech. Report CTSR 2005-02, 1 (2005) discloses a refocusing technique for changing an in-focus position of a shot image, after shooting, by synthesizing an image on a virtual image-forming plane that is different from an imaging plane using acquired LF data.
The conventional technique disclosed in Japanese Patent Laid-Open No. 2005-109621 mentioned earlier needs to repeat a shooting operation while moving a lens so as to shoot a plurality of images on which a user can check the changed focus; therefore, shooting takes time.
The present invention has been made in view of the foregoing problem, and makes it possible to easily acquire a plurality of images with different focuses in calibrating focus detection.
According to a first aspect of the present invention, there is provided an image capturing apparatus, comprising: an image capturing unit that photoelectrically converts an object image; a focus detection unit that detects a focus position of an imaging optical system based on a phase-difference detection method using a pair of image signals formed by light that has passed through different pupil areas of the imaging optical system; an imaging control unit that controls the image capturing unit to make the image capturing unit acquire pieces of light field data in one-to-one correspondence with a plurality of focus positions of the imaging optical system that are obtained by shifting the detected focus position of the imaging optical system in increments of a predetermined amount; and a generation unit that generates a refocused image corresponding to a focus position between the plurality of focus positions of the imaging optical system using the pieces of light field data, the generation depending on a direction of phase-difference detection by the focus detection unit and on a direction of parallax in the pieces of light field data.
According to a second aspect of the present invention, there is provided a method of controlling an image capturing apparatus including an image capturing unit that photoelectrically converts an object image, the method comprising: detecting a focus position of an imaging optical system based on a phase-difference detection method using a pair of image signals formed by light that has passed through different pupil areas of the imaging optical system; controlling the image capturing unit to make the image capturing unit acquire pieces of light field data in one-to-one correspondence with a plurality of focus positions of the imaging optical system that are obtained by shifting the detected focus position of the imaging optical system in increments of a predetermined amount; and generating a refocused image corresponding to a focus position between the plurality of focus positions of the imaging optical system using the pieces of light field data, the generating depending on a direction of phase-difference detection in the detecting and on a direction of parallax in the pieces of light field data.
According to a third aspect of the present invention, there is provided an image capturing apparatus, comprising: an image capturing unit capable of acquiring light field data; a generation unit that generates a refocused image by applying refocusing processing to the light field data acquired by the image capturing unit; an image processing unit that applies image processing to at least one of a shot image to which the refocusing processing based on the light field data has not been applied and the refocused image; a display unit that displays the shot image and the refocused image; and a control unit that, in a predetermined mode for causing the display unit to display a plurality of images including the shot image and the refocused image and allowing one of the plurality of images to be selected, controls the image processing unit to execute the image processing to reduce a difference between resolutions of the shot image and the refocused image displayed on the display unit.
According to a fourth aspect of the present invention, there is provided a method of controlling an image capturing apparatus including an image capturing unit capable of acquiring light field data, the method comprising: generating a refocused image by applying refocusing processing to the light field data acquired by the image capturing unit; applying image processing to at least one of a shot image to which the refocusing processing based on the light field data has not been applied and the refocused image; displaying the shot image and the refocused image on a display unit; and in a predetermined mode for causing the display unit to display a plurality of images including the shot image and the refocused image and allowing one of the plurality of images to be selected, controlling the applying of the image processing to reduce a difference between resolutions of the shot image and the refocused image displayed on the display unit.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The following describes embodiments of the present invention in detail with reference to the attached drawings.
An imaging optical system 10 is housed in the interchangeable lens 1. The imaging optical system 10 is composed of a plurality of lens units and a diaphragm. Focusing can be performed by moving a focusing lens unit (hereinafter, simply referred to as a focusing lens) 10a, which is one of the plurality of lens units, along an optical axis.
A lens driving unit 11 includes an actuator that moves a zoom lens and the focusing lens 10a, a driving circuit for the actuator, and a transmission mechanism that transmits a driving force of the actuator to each lens. A lens state detection unit 12 detects the positions of the zoom lens and the focusing lens 10a, that is, a zoom position and a focus position.
A lens control unit 13 is constituted by, for example, a CPU, and controls the operations of the interchangeable lens 1 in accordance with instructions from a later-described camera control unit 40. The lens control unit 13 is connected to the camera control unit 40 via a communication terminal included among electric contacts 15 in such a manner that they can communicate with each other. Power is supplied from the camera 2 to the interchangeable lens 1 via a power source terminal included among the electric contacts 15. A lens storage unit 14 is constituted by, for example, a ROM, and stores various types of information, such as data used in control performed by the lens control unit 13, identification information of the interchangeable lens 1, and optical information of the imaging optical system 10.
In the camera 2, during optical viewfinder observation in which a user observes an object through the optical viewfinder, a main mirror 20 constituted by a half-silvered mirror is placed at a down position on an imaging optical path and reflects light from the imaging optical system 10 toward a focusing screen 30 as shown in
A sub mirror 21 pivots together with the main mirror 20, and directs light transmitted through the main mirror 20 placed at the down position toward an autofocus (AF) sensor unit 22. When the main mirror 20 pivots to be placed at the up position, the sub mirror 21 also withdraws from the imaging optical path.
The AF sensor unit 22 detects a focus state of the imaging optical system 10 (focus detection) in a plurality of focus detection areas set within an imaging range of the camera 2 based on the phase-difference detection method using incident light from an object that has passed through the imaging optical system 10 and been reflected by the sub mirror 21. The AF sensor unit 22 includes a secondary image-forming lens that forms a pair of images (object images) using light from each focus detection area, and an area sensor (a CCD or CMOS sensor) having a pair of light-receiving element columns that photoelectrically converts the pair of object images. Each pair of light-receiving element columns in the area sensor outputs a pair of image signals, that is, photoelectrically converted signals corresponding to the luminance distributions of the pair of object images, to the camera control unit 40. A plurality of pairs of light-receiving element columns corresponding to the plurality of focus detection areas are arranged two-dimensionally in the area sensor.
The camera control unit 40 calculates a phase difference between a pair of image signals, and calculates a focus state (a defocus amount) of the imaging optical system 10 from the phase difference. Furthermore, based on the detected focus state of the imaging optical system 10, the camera control unit 40 calculates an in-focus position to which the focusing lens 10a is to be moved to achieve an in-focus state with the imaging optical system 10. Automatic focus adjustment that uses such a focus detection method is called phase-difference autofocus (AF).
Then, the camera control unit 40 transmits a focus instruction to the lens control unit 13 to move the focusing lens 10a to the in-focus position calculated based on the phase-difference detection method. The lens control unit 13 moves the focusing lens 10a to the in-focus position via the lens driving unit 11 in accordance with the received focus instruction. As a result, the imaging optical system 10 achieves the in-focus state.
As described above, phase-difference AF is performed by detecting a focus state based on the phase-difference detection method, calculating an in-focus position based on the focus state, and moving the focusing lens 10a to the in-focus position. The camera control unit 40 functions as focus control means.
The camera control unit 40 also functions as reliability degree determination means that determines whether the reliability of phase-difference AF with respect to an object targeted for phase-difference AF is high or low by calculating a reliability degree of phase-difference AF with respect to the object (an image signal reliability degree) using information related to a pair of image signals from the AF sensor unit 22. In the present embodiment, a reliability degree of phase-difference AF with respect to an object is basically a reliability degree of a defocus amount (a focus state) acquired using a pair of image signals from the AF sensor unit 22 that receives light from the object. Note that this reliability degree can be ultimately considered as a reliability degree of an in-focus position based on a phase difference, because the in-focus position based on the phase difference is calculated from a defocus amount. In the following description, a reliability degree of phase-difference AF with respect to an object may simply be referred to as a reliability degree.
For example, a select level (S level) value SL disclosed in Japanese Patent Laid-Open No. 2007-52072 is used as a reliability degree. Serving as information related to a pair of image signals, the S level value SL depends on such parameters as a degree of match U, the number of edges (a correlated change amount ΔV), sharpness SH, and a contrast ratio PBD of the pair of image signals, and is given by the following expression. The higher the reliability, the smaller the S level value SL.
SL = U / (ΔV × SH × PBD)
Note that the information related to the pair of image signals may not necessarily be information related to both of the pair of image signals, and may be information related to one of the pair of image signals. Furthermore, the degree of match U, correlated change amount ΔV, sharpness SH, and contrast ratio PBD may be rephrased as information acquired from the pair of image signals. The information acquired from the pair of image signals is not limited to these, and may include, for example, a charge accumulation period T.
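A minimal sketch of this reliability computation follows; the parameter values and the acceptance threshold are hypothetical, and the degree of match, correlated change amount, sharpness, and contrast ratio are assumed to have been precomputed from a pair of image signals.

```python
def s_level(U: float, delta_V: float, SH: float, PBD: float) -> float:
    """S level value SL = U / (ΔV × SH × PBD); the higher the reliability,
    the smaller the value."""
    return U / (delta_V * SH * PBD)

# Hypothetical parameter values for one pair of image signals.
sl = s_level(U=0.12, delta_V=8.0, SH=1.5, PBD=2.0)
SL_THRESHOLD = 0.05              # assumed acceptance threshold
is_reliable = sl < SL_THRESHOLD  # phase-difference AF judged reliable
```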
As the light-receiving elements are arranged two-dimensionally in the area sensor of the AF sensor unit 22, focus detection can be performed by detecting the luminance distribution of an object in both the horizontal direction and the vertical direction in the same field-of-view area (e.g., a central area) within the imaging range. This will be described later in detail.
The shutter 23 closes during optical viewfinder observation, and opens during live-view observation and shooting of moving images to allow the image sensor 24 to photoelectrically convert object images formed by the imaging optical system 10 (generate live-view images and shot moving images). It also controls exposure of the image sensor 24 by opening and closing at a set shutter speed during shooting of a still image.
The image sensor 24 is composed of a CMOS or CCD image sensor and its peripheral circuit, and outputs analog imaging signals by photoelectrically converting object images formed by the imaging optical system 10. In the image sensor 24, a plurality of pixels are disposed that have a pupil division function and serve as both imaging pixels and focus detection pixels. This will be described later in detail.
The focusing screen 30 is placed at a primary image-forming plane of the imaging optical system 10, which is equivalent to the position of the image sensor 24. During optical viewfinder observation, object images (viewfinder images) are formed on the focusing screen 30. A pentaprism 31 converts object images formed on the focusing screen 30 into erect images. An eyepiece lens 32 allows the user to observe the erect images. The focusing screen 30, the pentaprism 31, and the eyepiece lens 32 constitute an optical viewfinder.
An AE sensor 33 receives light from the focusing screen 30 via the pentaprism 31, and measures the luminances of object images formed on the focusing screen 30. The AE sensor 33 has a plurality of photodiodes, and can measure the luminances in each of a plurality of photometric areas set by dividing the imaging range of the camera 2. It not only measures the luminances of object images, but also has an object detection function of determining an object state by measuring the shapes and colors of object images.
The camera control unit 40 is constituted by a microcomputer including an MPU and the like, and controls the operations of the entire camera system 100 including the camera 2 and the interchangeable lens 1. The camera control unit 40 functions not only as the focus control means as mentioned earlier, but also as later-described AF calibration means (referred to as an AF microadjustment function in the present embodiment). It also functions as imaging control means that controls imaging operations, the reliability degree determination means, and a counter for counting the number of times shooting has been performed in focus bracket shooting.
A digital signal processing unit 41 converts analog imaging signals from the image sensor 24 into digital imaging signals, and generates image signals (image data) by applying various types of processing to the digital imaging signals. The image sensor 24 and the digital signal processing unit 41 constitute an imaging system.
The digital signal processing unit 41 detects a focus state based on the phase-difference detection method using the plurality of pixels with the pupil division function arranged in the image sensor 24. The digital signal processing unit 41 also calculates, based on the detected focus state of the imaging optical system 10, a position to which the focusing lens 10a is to be moved to achieve an in-focus state; this second focus detection is hereinafter referred to as imaging plane phase-difference autofocus (AF).
The digital signal processing unit 41 includes a refocused image generation unit 44 that changes an in-focus position of a shot image after shooting. Refocusing processing executed to generate a refocused image will be described later in detail. A camera storage unit 42 stores various types of data used in the operations of the camera control unit 40 and the digital signal processing unit 41. The camera storage unit 42 also stores images that have been generated to be recorded. The backside monitor 43 is constituted by a display element, such as a liquid crystal panel, and displays live-view images, images to be recorded, and various types of information.
[Layout of Focus Detection Areas]
In upright lines, a correlation direction coincides with the up-down direction (the direction of short edges of the image sensor 24), and a focus state of the imaging optical system 10 is detected from the luminance distribution in the up-down direction. On the other hand, in crosswise lines, a correlation direction coincides with the crosswise direction (the direction of long edges of the image sensor 24), and a focus state of the imaging optical system 10 is detected from the luminance distribution in the crosswise direction.
The two upright focus detection lines L1 and L2 that are disposed next to each other are misaligned by a minute amount in the up-down direction. This amount of misalignment is half of the pitch of pixels arrayed in the correlation direction. Variation in focus detection is alleviated by calculating a detection result in consideration of the two focus detection results obtained from the focus detection lines L1 and L2 that are misaligned by half of the pixel pitch. Similarly to the focus detection lines L1 and L2, the two crosswise focus detection lines L3 and L4 that are disposed next to each other are misaligned by half of the pixel pitch to alleviate variation in focus detection.
The central area of the focus detection range 51 also has one crosswise focus detection line L19 which has a large base-line length and in which focus detection is performed using a light beam corresponding to f/2.8. In the focus detection line L19, as its base-line length is larger than those of the focus detection lines L3 and L4, images move by a large amount on a light-receiving plane of the area sensor included in the AF sensor unit 22. Therefore, in this line, focus detection can be achieved with higher precision than in the focus detection lines L3 and L4.
Note that as the focus detection line L19 uses a light beam corresponding to f/2.8, it is effective only when a high-brightness interchangeable lens having the maximum aperture of f/2.8 or less is attached. Furthermore, this line is inferior to the focus detection lines L3 and L4 in the ability to detect large defocus because images move by a large amount therein. As described above, when a high-brightness interchangeable lens having the maximum aperture of f/2.8 or less and requiring high detection precision is attached, the focus detection line L19 corresponding to f/2.8 (hereinafter referred to as an f/2.8 line) can achieve high-precision detection as required.
In the AF sensor unit 22, as upright and crosswise focus detection lines are disposed to cross one another in the central area of the focus detection range 51, the luminance distribution can be detected in both the up-down direction and the crosswise direction. As the luminance distribution is acquired in both the up-down direction and the crosswise direction, there is no need to forcibly perform focus detection in a direction that does not exhibit a significant luminance distribution among the up-down and crosswise directions; thus, variation in detection can be alleviated. As a result, focus detection can be performed with respect to a wider variety of objects, and the detection precision can be increased. As described above, in the central area of the focus detection range 51, three focus detection results can be obtained from the upright lines (L1, L2), crosswise lines (L3, L4), and f/2.8 line (L19).
A description is now given of focus detection lines that are disposed in upper and lower parts of the focus detection range 51. In the upper part of the focus detection range 51, crosswise lines L5 and L11 are disposed in which a focus state of the imaging optical system 10 is detected from the crosswise luminance distributions of object images located in the upper part of the focus detection range 51. In the lower part of the focus detection range 51, crosswise lines L6 and L12 are disposed in which a focus state of the imaging optical system 10 is detected from the crosswise luminance distributions of object images located in the lower part of the focus detection range 51. Furthermore, a crosswise f/2.8 line L20 is disposed at the same position as the crosswise line L5, and a crosswise f/2.8 line L21 is disposed at the same position as the crosswise line L6. Accordingly, when a high-brightness interchangeable lens having the maximum aperture of f/2.8 or less and requiring high detection precision is attached, focus detection can be achieved with high precision.
Focus detection lines L7 and L9 are disposed next to each other in the up-down direction in contact with the left edges of the focus detection lines L3, L4, L5, L6, L11, and L12, and focus detection lines L8 and L10 are disposed next to each other in the up-down direction in contact with the right edges of the focus detection lines L3, L4, L5, L6, L11, and L12. The focus detection lines L7, L9, L8, and L10 are upright lines in which a focus state of the imaging optical system 10 is detected from the luminance distribution in the up-down direction.
Focus detection lines L13 and L15 are disposed next to each other in the up-down direction on the outer side (to the left) of the focus detection lines L7 and L9, and focus detection lines L14 and L16 are disposed next to each other in the up-down direction on the outer side (to the right) of the focus detection lines L8 and L10. The focus detection lines L13, L14, L15, and L16 are also upright lines in which a focus state of the imaging optical system 10 is detected from the luminance distribution in the up-down direction.
Focus detection lines L17 and L18 are disposed as outermost lines in the crosswise direction. The centers of the focus detection lines L17 and L18 in the up-down direction are on the optical axis, and in these lines, a focus state of the imaging optical system 10 is detected from the up-down luminance distributions of object images located at the opposite ends of the focus detection range 51 in the crosswise direction.
[Image Sensor]
Image signals and focus detection signals can be acquired by arranging pixels as described below.
In the image sensor according to the present embodiment, a plurality of pixels are arrayed, and each pixel includes the first photoelectric converter 201 that receives a light beam passing through the first partial pupil area 501 of the imaging optical system 10, and the second photoelectric converter 202 that receives a light beam passing through the second partial pupil area 502 of the imaging optical system 10, which differs from the first partial pupil area. One pixel composed of the combination of the first photoelectric converter 201 and the second photoelectric converter 202 functions as an imaging pixel that receives light beams passing through the pupil area composed of the combination of the first partial pupil area 501 and the second partial pupil area 502 of the imaging optical system 10. If necessary, the imaging pixels and the first and second photoelectric converters may be provided as separate pixel components, in which case first focus detection pixels corresponding to the first photoelectric converters and second focus detection pixels corresponding to the second photoelectric converters may be disposed in parts of the array of the imaging pixels.
In the present embodiment, focus detection is performed using first focus detection signals generated from a collection of received light signals of the first photoelectric converters 201 of the pixels in the image sensor, and second focus detection signals generated from a collection of received light signals of the second photoelectric converters 202 of the pixels. Furthermore, an imaging signal (image signal) corresponding to a resolution of the number of effective pixels N is generated by summing a signal of the first photoelectric converter 201 and a signal of the second photoelectric converter 202 for each pixel in the image sensor.
[Relationship between Defocus Amount and Image Shift Amount]
A description is now given of a relationship between a defocus amount and an image shift amount based on the first and second focus detection signals acquired by the image sensor 24.
Provided that a distance from an image-forming position of an object to the imaging plane 800 is defined as a magnitude |d|, a defocus amount d has a minus sign (d<0) in a front-focus state where the image-forming position of the object is closer to the object than the imaging plane 800 is, and has a plus sign (d>0) in a rear-focus state where the image-forming position of the object is farther from the object than the imaging plane 800 is. The defocus amount d is zero in an in-focus state where the image-forming position of the object is on the imaging plane 800 (an in-focus position).
In a front-focus state (d<0), among light beams from the object 802, light beams that have passed through the first partial pupil area 501 (or second partial pupil area 502) are collected and then spread to a width Γ1 (or Γ2) around the position of the center of mass of the light beams G1 (or G2), thereby forming a blurred image on the imaging plane 800. The first photoelectric converters 201 (or second photoelectric converters 202) composing pixels arrayed in the image sensor receive the blurred image, and generate first focus detection signals (or second focus detection signals). Accordingly, the first focus detection signals (or second focus detection signals) are recorded as an image of the object 802 that has been blurred across the width Γ1 (or Γ2) at the position of the center of mass G1 (or G2) on the imaging plane 800. The blur width Γ1 (or Γ2) of the object image increases roughly in proportion to an increase in the magnitude |d| of the defocus amount d. Similarly, a magnitude |p| of an image shift amount p between the object images of the first and second focus detection signals (that is, the difference G1 − G2 between the positions of the centers of mass of the light beams) increases roughly in proportion to an increase in the magnitude |d| of the defocus amount d. The same goes for the rear-focus state (d>0), although in this case the direction of image shift between the object images of the first and second focus detection signals is reversed from the front-focus state.
Therefore, in the present embodiment, the magnitude of the image shift amount between the object images of the first and second focus detection signals increases with an increase in the magnitude of the defocus amount, whether the defocus amount is based on the first and second focus detection signals or on the imaging signal generated by summing them. In the second focus detection (imaging plane phase-difference AF), the first and second focus detection signals are relatively shifted to calculate correlation amounts indicating the degrees of match between the signals (first evaluation values), and the image shift amount is detected as the shift amount that yields high correlation (a high degree of match between the signals). As the magnitude of the image shift amount between the object images of the first and second focus detection signals increases with an increase in the magnitude of the defocus amount, the image shift amount is converted into the defocus amount in performing focus detection.
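The shift-and-correlate detection just described can be sketched as follows; this is a simplified illustration using a sum-of-absolute-differences correlation amount, and the conversion coefficient mentioned in the closing comment is an assumption, not a value given in the text.

```python
import numpy as np

def detect_image_shift(sig_a: np.ndarray, sig_b: np.ndarray,
                       max_shift: int) -> int:
    """Relatively shift a pair of focus detection signals and return the
    shift that minimizes a sum-of-absolute-differences correlation amount
    (smaller = higher degree of match). Sub-pixel interpolation of the
    minimum is omitted for brevity."""
    n = len(sig_a)
    best_shift, best_corr = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        corr = np.abs(sig_a[max_shift + s : n - max_shift + s]
                      - sig_b[max_shift : n - max_shift]).sum()
        if corr < best_corr:
            best_shift, best_corr = s, corr
    return best_shift

# The detected image shift p is then converted into a defocus amount
# d = K * p, where K is a conversion coefficient (assumed known) determined
# by the base-line length of the pupil areas.
```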
[Refocusing Processing]
A description is now given of refocusing processing that uses the aforementioned light field (LF) data acquired by the image sensor 24, and a refocusable range of the refocusing processing.
The first focus detection signal Ai and the second focus detection signal Bi have light intensity distribution information and incident angle information. A refocus signal at a virtual image-forming plane 810 can be generated by translating the first focus detection signal Ai to the virtual image-forming plane 810 in accordance with the angle θa, translating the second focus detection signal Bi to the virtual image-forming plane 810 in accordance with the angle θb, and then summing the translated signals. Translation of the first focus detection signal Ai to the virtual image-forming plane 810 in accordance with the angle θa corresponds to a +0.5 pixel shift in the column direction. Translation of the second focus detection signal Bi to the virtual image-forming plane 810 in accordance with the angle θb corresponds to a −0.5 pixel shift in the column direction. Therefore, the refocus signal on the virtual image-forming plane 810 can be generated by shifting the first focus detection signal Ai and the second focus detection signal Bi relatively by +1 pixel, and summing Ai and the corresponding signal Bi+1. Similarly, by shifting the first focus detection signal Ai and the second focus detection signal Bi by an amount corresponding to an integer and summing the shifted signals, a shift summation signal (refocus signal) can be generated on any virtual image-forming plane based on the shift amount corresponding to the integer. By generating an image using the generated shift summation signal (refocus signal), a refocused image can be generated on a virtual image-forming plane.
In the present embodiment, the refocused image generation unit 44 applies second filter processing and second shift processing to the first and second focus detection signals and sums the resultant signals to generate a shift summation signal. By generating an image using the generated shift summation signal (refocus signal), a refocused image can be generated on a virtual image-forming plane. As each of the imaging pixels arrayed in the image sensor 24 according to the present embodiment is divided into two in the x-direction and one in the y-direction, a shift summation signal is generated only in the x-direction (horizontal direction or crosswise direction).
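A minimal sketch of this shift summation follows, assuming one row of first and second focus detection signals as NumPy arrays; the filter processing mentioned above is omitted, and the wrap-around of np.roll at the row edges is ignored for brevity.

```python
import numpy as np

def refocus_signal(A: np.ndarray, B: np.ndarray, shift: int) -> np.ndarray:
    """Shift summation for one pixel row: relatively shift the second focus
    detection signal B against the first signal A by an integer number of
    pixels and sum, yielding a refocus signal on the virtual image-forming
    plane corresponding to that shift (shift = 0 reproduces the ordinary
    imaging signal A + B)."""
    return A + np.roll(B, shift)

# Because each pixel in this embodiment is divided into two only in the
# x-direction, the shift is applied along rows (the horizontal direction).
```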
[Refocusable Range]
As a refocusable range is limited, a range in which the refocused image generation unit 44 can generate a refocused image on a virtual image-forming plane is limited.
Provided that the permissible circle of confusion is δ and the f-number of the image-forming optical system is F, the depth of field at the f-number F is ±Fδ. The effective f-number F01 (or F02) of the partial pupil area 501 (or 502) in the horizontal direction, which has become narrower due to the NH×NV (2×1) pupil division, is F01 = NH·F, and is hence darker. The effective depth of field for each first focus detection signal (or second focus detection signal) is multiplied by NH and thus becomes ±NH·Fδ, and accordingly, the in-focus range expands NH-fold. Within the range of the effective depth of field ±NH·Fδ, an in-focus object image is acquired for each first focus detection signal (or second focus detection signal). Therefore, the in-focus position can be readjusted (refocused) after shooting by executing refocusing processing for translating first focus detection signals (or second focus detection signals) in accordance with the principal ray angle θa (or θb) shown in
|d| ≤ NH·Fδ (Expression 1)
The permissible circle of confusion δ is defined as, for example, δ = 2ΔX (the reciprocal of the Nyquist frequency 1/(2ΔX) for a pixel period ΔX). If necessary, the reciprocal of the Nyquist frequency 1/(2ΔXAF) for the period ΔXAF of the first focus detection signals (or second focus detection signals) after pixel summation processing (ΔXAF = 6ΔX when six pixels are summed) may be used instead, giving δ = 2ΔXAF. In generating a refocused image on a virtual image-forming plane using a shift summation signal (refocus signal), the range in which a refocused image satisfying the permissible circle of confusion δ can be generated is roughly limited to the range of Expression 1.
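As a numerical illustration of Expression 1 (the pixel period and f-number below are hypothetical, not values from the embodiment):

```python
# Hypothetical values: 4 µm pixel period, f/2.8 lens, 2x1 pupil division.
delta_x = 0.004   # pixel period ΔX in mm
N_H = 2           # number of pupil divisions in the horizontal direction
F = 2.8           # f-number of the image-forming optical system

delta = 2 * delta_x       # permissible circle of confusion δ = 2ΔX
d_max = N_H * F * delta   # Expression 1: refocusable range |d| ≤ NH·F·δ
print(f"refocusable defocus range: ±{d_max:.3f} mm")  # ±0.045 mm
```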
[Autofocus (AF) Microadjustment Function]
A description is now given of an autofocus (AF) calibration (hereinafter, CAL) mode of the camera 2 according to the present embodiment.
If the user selects the CAL mode, step S100 is executed. In step S100, the camera control unit 40 determines whether a first switch (SW1) has been turned ON by an operation of depressing a non-illustrated release switch by half. If the first switch has not been turned ON, a stand-by state follows; if the first switch has been turned ON, step S200 is executed.
In step S200, the AF sensor unit 22 performs phase-difference AF. The details will be described later. Upon completion of phase-difference AF in step S200, step S300 is executed. In step S300, object information evaluation values are calculated. Object information includes, for example, AF reliability evaluation values. The AF reliability evaluation values are calculated based on signals corresponding to light received by the area sensor provided inside the AF sensor unit 22. For example, the aforementioned S level values SL are used as the AF reliability evaluation values.
The AF sensor unit 22 may suffer a decrease in the focus detection precision when, for example, an object is dark or the contrast is low. In the case of an object that reduces the focus detection precision, the AF reliability evaluation values calculated as the object information evaluation values are small. The object information evaluation values are not limited to the AF reliability evaluation values, and may be calculated in accordance with spatial frequency information of an object or a magnitude of object edge information (e.g., an integrated value of differences between neighboring pixel values). The object information for calculating the object information evaluation values is not limited to being detected by the area sensor provided inside the AF sensor unit 22, and may be detected by the object detection function of the AE sensor 33 provided in the optical viewfinder. The object information may also be detected by the image sensor 24. Upon completion of the calculation of the object information evaluation values, step S400 is executed.
In step S400, whether CAL can be performed is determined based on the object information evaluation values calculated in step S300. For example, if the AF reliability evaluation values calculated as the object information evaluation values are large, it is determined that CAL can be performed, and step S700 is executed; if the AF reliability evaluation values are small, it is determined that CAL cannot be performed, and step S500 is executed. Note that there are a plurality of AF reliability evaluation values, as they are calculated from different perspectives. As stated earlier, different perspectives include the luminance of an object, the contrast of an object, and so forth. In view of this, the determination may be made based on whether all AF reliability evaluation values satisfy a predetermined condition, or based on a value associated with a certain, preset perspective. In step S500, the user is notified, using the backside monitor 43, of the fact that an object targeted for focus detection is not suitable for CAL. Upon completion of the notification, step S600 is executed.
In step S600, the user determines whether to end CAL. The backside monitor 43 displays a screen that allows the user to choose whether to perform CAL again, and the user determines whether to perform CAL again by operating a non-illustrated console button. If the user determines to perform CAL again, step S100 is executed again; if the user determines to end CAL, the CAL mode ends.
In step S700, focus bracket shooting is performed, that is, a plurality of images are shot while shifting the focus by moving the focusing lens 10a in increments of a predetermined amount. The details will be described later. Upon completion of the focus bracket shooting, step S800 is executed. In step S800, the user selects, from among the plurality of images, an image that the user thinks has the best focus. The details will be described later.
In step S900, a correction value is stored. The correction value is determined based on a defocus amount associated with the image selected by the user, or on a defocus amount calculated in correspondence with a lens position. The determined correction value is stored to the camera storage unit 42. Furthermore, the user is notified of the stored correction value.
In step S201, AF line flags are reset. The AF line flags are provided in correspondence with upright lines and crosswise lines included among the focus detection lines (L1 to L21) disposed inside the AF sensor unit 22. The AF line flag corresponding to upright lines is AFV, and the AF line flag corresponding to crosswise lines is AFH.
In step S202, the camera control unit 40 designates one focus detection line from among the 21 focus detection lines disposed inside the imaging range in accordance with the user's selection operation or a predetermined algorithm.
In step S203, the camera control unit 40 determines whether the focus detection line designated in step S202 is one of the focus detection lines L1 to L4 and L19 included in the central area of the focus detection range. If the designated focus detection line is not one of the focus detection lines included in the central area, step S204 is executed; if the designated focus detection line is one of the focus detection lines included in the central area, step S206 is executed.
In step S204, the camera control unit 40 imports a pair of image signals from a pair of light-receiving element columns that is included in the area sensor of the AF sensor unit 22 and corresponds to the designated focus detection line. Then, it calculates a phase difference between the pair of image signals, and calculates a defocus amount from the calculated phase difference. It also determines the designated focus detection line as a specific focus detection line. Thereafter, step S205 is executed.
In step S205, the camera control unit 40 calculates, from the pair of image signals that was imported in step S204 in correspondence with the focus detection line, a degree of match U, correlated change amount ΔV, sharpness SH, and contrast ratio PBD of the pair of image signals. It also calculates the aforementioned S level value SL using the values of these four parameters. Upon completion of the calculation of the S level value SL, step S209 is executed.
In step S206, the camera control unit 40 imports pairs of image signals from pairs of light-receiving element columns that are included in the area sensor and correspond to the focus detection lines included in the central area of the focus detection range. Then, it calculates phase differences between the pairs of image signals, and calculates defocus amounts from the calculated phase differences. Thereafter, step S207 is executed.
In step S207, the camera control unit 40 calculates S level values SL from the pairs of image signals that were imported in step S206 in correspondence with the focus detection lines, similarly to step S205. Upon completion of the calculation of the S level values SL, step S208 is executed.
In step S208, the camera control unit 40 selects a focus detection line corresponding to the smallest S level value SL (i.e., a focus detection line with high reliability) from among the focus detection lines included in the central area of the imaging range, and determines the selected focus detection line as a specific focus detection line. Thereafter, step S209 is executed.
In step S209, whether the specific focus detection line determined in step S208 or S204 is a crosswise line is determined. If the specific focus detection line is a crosswise line, step S210 is executed; if the specific focus detection line is not a crosswise line (is an upright line), step S211 is executed. In the present embodiment, two types of AF line flags, that is, the AF line flag corresponding to upright lines and the AF line flag corresponding to crosswise lines, are provided because each focus detection line in the AF sensor unit 22 is either an upright line or a crosswise line. However, the layout of the focus detection lines is not limited in this way, and a criterion for selecting the specific focus detection line and the number and types of the AF line flags may vary depending on the layout of the focus detection lines.
In step S210, the AF line flag AFH corresponding to crosswise lines is set to 1. Upon completion of the setting, step S212 is executed. In step S211, the AF line flag AFV corresponding to upright lines is set to 1. Upon completion of the setting, step S212 is executed.
In step S212, the camera control unit 40 calculates a moving amount (including a moving direction) of the focusing lens 10a necessary for achieving an in-focus state from the defocus amount in the specific focus detection line. Specifically, the number of driving pulses of the actuator in the lens driving unit 11 for moving the focusing lens 10a is calculated. Calculation of the moving amount of the focusing lens 10a is equivalent to calculation of an in-focus position based on the phase-difference detection method. If a correction amount is currently designated by the AF microadjustment function, the moving amount of the focusing lens 10a calculated in step S212 is corrected by adding (or subtracting) the correction amount designated by the AF microadjustment function to (or from) the moving amount. If no correction amount has been set, the correction amount is zero, and thus the moving amount of the focusing lens 10a (the in-focus position based on the phase difference) is not corrected.
In step S213, the camera control unit 40 transmits a focus instruction to the lens control unit 13 so as to move the focusing lens 10a by the corrected moving amount. Accordingly, the lens control unit 13 moves the focusing lens 10a to the corrected in-focus position based on the phase difference via the lens driving unit 11. Alternatively, in step S212, the focusing lens 10a may be moved by the pre-correction moving amount (i.e., to the pre-correction in-focus position based on the phase difference), and then, in step S213, the focusing lens 10a may be moved again by a moving amount equivalent to the correction amount designated by the AF microadjustment function; in this case as well, the focusing lens 10a is ultimately moved to the corrected in-focus position based on the phase difference.
The focusing lens 10a may be moved by other autofocus means (e.g., imaging plane phase-difference AF), rather than by phase-difference AF using the AF sensor unit 22 as described above. Upon completion of the foregoing phase-difference AF operations, the present sub flow returns to step S300 of the main flow in the CAL mode shown in
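A minimal sketch of the drive-amount correction performed in steps S212 and S213 follows; the pulse conversion factor and the correction value used in the example are hypothetical.

```python
def corrected_drive_pulses(defocus_mm: float, pulses_per_mm: float,
                           microadjustment_mm: float = 0.0) -> int:
    """Convert a defocus amount into actuator driving pulses, applying the
    correction amount designated by the AF microadjustment function (zero
    when no correction value has been set)."""
    return round((defocus_mm + microadjustment_mm) * pulses_per_mm)

# e.g., 0.12 mm of defocus with a stored correction of -0.02 mm
pulses = corrected_drive_pulses(0.12, pulses_per_mm=500.0,
                                microadjustment_mm=-0.02)
```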
In step S701, whether the AF line flag AFH is 1 is determined. If the AF line flag AFH is 1 (i.e., if the specific focus detection line is a crosswise line), step S704 is executed. If the AF line flag AFH is 0 (i.e., the specific focus detection line is an upright line), step S702 is executed.
In steps S702 and S703, a lens driving amount w and the number m of images to be shot in focus bracket shooting for a case in which the AFH is 0 (i.e., for a case in which the specific focus detection line is an upright line) are determined.
In step S703, the number m of images to be shot in focus bracket shooting is determined. When the AF line flag AFH is 0, the number Vm of images to be shot when an upright line is selected is substituted for the number m of images to be shot. In the present embodiment, the number m of images to be shot is Vm=9. Once the number m of images to be shot has been determined, step S706 is executed.
In steps S704 and S705, a lens driving amount w and the number m of images to be shot in focus bracket shooting for a case in which the AFH is 1 (i.e., for a case in which the specific focus detection line is a crosswise line) are determined.
In step S704, the lens driving amount w is determined.
In the present embodiment, as shift summation is performed in the horizontal direction in generating refocused images, refocused images can be generated in line with a later-described image selection flowchart of
In step S705, the number m of images to be shot in focus bracket shooting is determined. When the AF line flag AFH is 1, the number Hm of images to be shot when a crosswise line is selected is substituted for the number m of images to be shot. In the present embodiment, the number m of images to be shot is Hm=3. Similarly to step S704, refocused images can be generated by executing refocusing processing using a defocus amount equivalent to ±(m−1)/2·w. That is, as two images (B0a and B0b) can be generated from one shot image (B0), the number Hm of images to be shot when a crosswise line is selected can be one-third of the number Vm (=9) of images to be shot when an upright line is selected. In other words, the number Vm (=9) of images to be shot when an upright line is selected, and the number Hm (=3) of images to be shot when a crosswise line is selected, satisfy the relationship Vm>Hm, meaning that the number m of images to be shot can be smaller when a crosswise line is selected than when an upright line is selected. Once the number m of images to be shot has been determined, step S706 is executed.
Although the number m of images to be shot is set by the camera in the present embodiment, it may be freely set by the user. Making the number m changeable according to the level of the user can provide a system suitable for the user. Furthermore, although the number m of images to be shot and the lens driving amount w are variables in the present embodiment, one or both of them may be a fixed value in the camera 2 or the interchangeable lens 1.
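The selection of the bracket parameters in steps S701 to S705 can be sketched as follows; the driving amounts W_V and W_H are hypothetical placeholders for the values the embodiment determines in steps S702 and S704.

```python
# Hypothetical driving amounts; the 3x ratio assumes that refocusing fills
# in two extra focus positions around each crosswise-line shot.
W_V = 1.0  # lens driving amount w when an upright line is selected
W_H = 3.0  # lens driving amount w when a crosswise line is selected

def bracket_parameters(afh: int) -> tuple[float, int]:
    """Return (lens driving amount w, number m of images to be shot).
    When AFH = 1 (crosswise line), each shot also yields two refocused
    images (B0a, B0b), so Hm = 3 shots cover what Vm = 9 shots cover."""
    return (W_H, 3) if afh == 1 else (W_V, 9)
```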
In step S706, the camera control unit 40 resets the counter (sets a counted value n to 0). Upon completion of the resetting of the counter, step S707 is executed. In step S707, AF information is detected from the AF sensor unit 22. In this detection, the specific focus detection line determined in step S204 or S208 of the sub flow of phase-difference AF shown in
In step S708, the mirrors are flipped up. Once the main mirror 20 and the sub mirror 21 have withdrawn to the up position, step S709 is executed. In step S709, the focusing lens 10a is driven.
In step S710, a still image is shot. The shot image is stored to the camera storage unit 42 in association with the AF information acquired in step S707 and a lens position. Upon completion of the recording of the shot image, step S711 is executed. In step S711, the mirrors are flipped down. Accordingly, the main mirror 20 and the sub mirror 21 move to the down position. Once the mirrors have been flipped down, step S712 is executed.
In step S712, the value counted by the counter included in the camera control unit 40 is incremented by one (n=n+1). Upon completion of the increment, step S713 is executed. In step S713, whether the value counted by the counter included in the camera control unit 40 has reached the number m of images to be shot (n=m−1) is determined. If the counted value has not reached the number m of images to be shot, step S707 is executed again; if the counted value has reached the number m of images to be shot, the present sub flow of the focus bracket shooting operations is completed. Upon completion of the foregoing focus bracket shooting operations, the present sub flow returns to step S800 of the main flow in the CAL mode shown in
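Steps S706 to S713 amount to the following loop; this is a simplified sketch in which `camera` stands for a hypothetical hardware interface, and error handling is omitted.

```python
def focus_bracket_shooting(m: int, w: float, camera) -> None:
    """Shoot m still images while shifting the focusing lens in increments
    of the driving amount w, storing each image with the AF information and
    lens position used later to compute a correction value."""
    n = 0                                    # S706: reset the counter
    while n < m:                             # S713: compare counter with m
        af_info = camera.detect_af_info()    # S707: defocus on specific line
        camera.mirror_up()                   # S708
        camera.drive_focusing_lens(w)        # S709: shift the focus by w
        image = camera.shoot_still()         # S710
        camera.store(image, af_info, camera.lens_position())
        camera.mirror_down()                 # S711
        n += 1                               # S712: increment the counter
```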
In step S801, whether the AF line flag AFH corresponding to crosswise lines is 1 is determined. If the AF line flag AFH corresponding to crosswise lines is 1 (i.e., if the specific focus detection line is a crosswise line and the AF information is detected in crosswise focus detection lines in CAL), step S802 is executed. If the AF line flag AFH corresponding to crosswise lines is 0 (i.e., if the specific focus detection line is an upright line and the AF information is detected in upright focus detection lines in CAL), step S803 is executed.
In step S802, the refocused image generation unit 44 generates refocused images by performing shift summation in the horizontal direction. In the present embodiment, as each photoelectric converter of the image sensor 24 is divided into two in the x-direction and one in the y-direction, shift summation is performed only in the x-direction (horizontal direction or crosswise direction) in generating refocused images. Therefore, in the present sub flow, generation of refocused images is permitted only when a crosswise line, in which a focus state is detected from the luminance distribution in the x-direction (horizontal direction or crosswise direction), is selected (i.e., only when the AF line flag AFH corresponding to crosswise lines is 1). Upon completion of the generation of the refocused images, step S803 is executed.
In step S803, the backside monitor 43 presents images (display images) to the user. In this step, images that were actually shot in focus bracket shooting (i.e., shot images to which refocusing processing has not been applied) are presented together with the refocused images if the refocused images were generated in step S802. These images may be displayed one by one, or may be displayed altogether next to one another. Upon completion of the presentation of the images, step S804 is executed.
In step S804, whether the user has decided on an image is determined. The user selects and decides on an image that has the best focus from among the presented images. A standby state lasts until this selection. Once the user has decided on the image, step S805 is executed.
In step S805, a correction value is calculated based on the image selected in step S804. A defocus amount corresponding to the AF information acquired in step S707 of the flow of focus bracket shooting is used in calculating the correction value.
Although step S802 according to the present embodiment has been described based on an example in which each photoelectric converter of the image sensor 24 is divided into two in the x-direction and one in the y-direction, the present invention is not limited in this way. For example, when each photoelectric converter of the image sensor is divided into one in the x-direction and two in the y-direction, shift summation is performed only in the y-direction (vertical direction or up-down direction) in generating refocused images. In this case, during image selection in the CAL mode, generation of refocused images is permitted only when an upright line, in which a focus state is detected from the luminance distribution in the y-direction (vertical direction or up-down direction), is selected.
The present embodiment differs from the first embodiment in that, after generating a plurality of images with different focuses using a refocusing technique similarly to the first embodiment, the resolving power is changed by applying image processing to a shot image or an image generated by the refocused image generation unit. For simplicity, it will be assumed that the AF microadjustment function is implemented only with respect to crosswise lines (in the direction in which a shift summation signal is generated in refocusing processing). Note that the present embodiment is also applicable to an image capturing apparatus that implements the AF microadjustment function with respect to both upright lines and crosswise lines as in the first embodiment.
In some cases, the resolving power for the original image cannot be reproduced by an image generated by the refocusing technique, that is, the resolving power for the image generated by the refocusing technique is inferior to the resolving power for the original image. In view of this, the present embodiment will focus on an image capturing apparatus that can reduce a difference between the resolving power for a shot image and the resolving power for an image generated based on the shot image using the refocusing technique.
Although an overall configuration is substantially similar to that of the first embodiment, the digital signal processing unit 41 according to the present embodiment has a function of applying image processing to a shot image or an image generated by the refocused image generation unit 44. The image processing mentioned here denotes image processing for changing the resolving power, such as edge enhancement processing and low-pass processing.
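As an illustrative sketch of such resolving-power adjustment (the Gaussian kernel widths and the enhancement amount are hypothetical choices, not values prescribed by the embodiment; a floating-point image is assumed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reduce_resolution(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Low-pass processing: soften a shot image toward the resolving power
    of a refocused image."""
    return gaussian_filter(image, sigma=sigma)

def enhance_edges(image: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Edge enhancement (unsharp masking): sharpen a refocused image toward
    the resolving power of a shot image."""
    return image + amount * (image - gaussian_filter(image, sigma=1.0))
```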
The above-described sections [Image Sensor], [Relationship between Defocus Amount and Image Shift Amount], [Refocusing Processing], and [Refocusable Range] according to the first embodiment apply to the present embodiment as well, and thus a description thereof will be omitted.
[Resolving Powers for Refocused Images]
In some cases, the resolving powers for refocused images generated by the refocused image generation unit 44 based on a shift summation method are lower than the resolving power for an image actually shot.
As described above, in some cases, the resolving powers for refocused images acquired by refocusing processing based on the shift summation method are lower than the resolving power for a shot image. Therefore, even if the refocused images have captured a perfect focus position of an object, a user who compares the shot image with the refocused images may not feel that the refocused images have a perfect focus.
[Autofocus (AF) Microadjustment Function]
In the present embodiment also, the AF microadjustment function can be implemented in line with the flowchart of
Focus bracket shooting can also be performed in line with the flowchart of
In step S1801, the refocused image generation unit 44 generates refocused images by performing shift summation. Upon completion of the generation of the refocused images, step S1802 is executed. In step S1802, resolution reduction image processing is applied to images that were actually shot in focus bracket shooting.
In step S1803, the backside monitor 43 displays images to the user. The images shot in focus bracket shooting are displayed together with the refocused images generated in the aforementioned step S1801. These images may be displayed one by one, or may be displayed altogether next to one another. Upon completion of the display of the images, step S1804 is executed. In step S1804, whether the user has decided on an image is determined. The user selects and decides on an image that the user thinks has the best focus from among the displayed images. A standby state lasts until this selection. Once the user has decided on the image, step S1805 is executed.
In step S1805, a correction value is calculated based on the image selected in step S1804. A defocus amount corresponding to the AF information acquired in step S707 of the flow of focus bracket shooting is used in calculating the correction value.
A third embodiment of the present invention will be described. Note that a description of configurations that are the same as those of the second embodiment will be omitted, and only the differences from the second embodiment will be described.
In step S1812, resolution enhancement image processing is applied to the refocused images generated from images that were actually shot in focus bracket shooting.
In step S1813, the backside monitor 43 displays images to the user in a manner similar to step S1803 of the second embodiment. In step S1814, whether the user has decided on an image is determined. The user selects and decides on an image that the user thinks has the best focus from among the displayed images. A standby state lasts until this selection. Once the user has decided on the image, step S1815 is executed. In step S1815, a correction value is calculated based on the image selected in step S1814, in a manner similar to step S1805 of the second embodiment. Upon completion of the foregoing image selection operations, the present sub flow returns to step S900 of the main flow in the CAL mode shown in
Although preferred embodiments of the present invention have been described thus far, the present invention is not limited to these embodiments, and can be modified and changed in various manners within the scope of the principles of the present invention.
For example, the above-described second embodiment has introduced an example in which resolution reduction processing is applied to shot images, and the above-described third embodiment has introduced an example in which resolution enhancement processing is applied to refocused images. However, the present invention is not limited in this way, and image processing may be applied to both shot images and refocused images. In this case, for example, resolution reduction processing of a certain level may be applied to shot images, and resolution enhancement processing that compensates for the insufficiency caused by such resolution reduction processing may be applied to refocused images. In the present invention, it is sufficient to reduce a resolution difference between a group of shot images and a group of refocused images, bringing the groups to the same resolution level, by applying image processing to at least one of the groups.
For example, although focus detection is performed using the AF sensor unit 22 that is dedicated to focus detection in the above-described embodiments, focus detection may be performed using the image sensor that has a plurality of photoelectric converters per unit pixel as shown in
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application Nos. 2016-080472, filed Apr. 13, 2016, and 2016-085428, filed Apr. 21, 2016, which are hereby incorporated by reference herein in their entirety.