The present invention relates to a technique of reducing the influence of an eclipse of a light beam arising from the structures of an imaging optical system and image capturing apparatus in focus detection.
Some image capturing apparatuses such as a digital camera have a function of automatically adjusting an optical system including an imaging lens and focus lens in accordance with an imaging scene. Automatic adjustment functions are, for example, an autofocus function of detecting and controlling an in-focus position, and an exposure control function of performing photometry for an object to adjust the exposure to a correct one. By measuring the state of an object image, imaging settings suited to an imaging scene can be automatically selected, reducing the burden of settings on the user.
Focus detection methods adopted in the autofocus function are classified into an active method and passive method. In the active method, the distance to an object is measured using, for example, an ultrasonic sensor or infrared sensor, and the in-focus position is calculated in accordance with the distance and the optical characteristics of the optical system. The passive method includes a contrast detection method of detecting an in-focus position by actually driving the focus lens, and a phase difference detection method of detecting the phase difference between two pupil-divided optical images. Most image capturing apparatuses such as a digital single-lens reflex camera employ the latter method. A defocus amount representing the phase difference between two optical images is calculated, and the optical system is controlled to eliminate the defocus amount, thereby focusing on an object.
When performing the passive focus detection, two optical images to be compared need to have the same shape with merely a shift in the horizontal or vertical direction. However, a so-called "eclipse" may occur in which part of a light beam used in focus detection is cut off by the lens or aperture of the optical system, the structure of the image capturing apparatus, or the like. If a light beam used in focus detection is eclipsed, the shape or luminance of an optical image used in focus detection changes. For this reason, the phase difference between pupil-divided optical images or the contrast may fail to be detected, or the detection precision may decrease.
To prevent generation of an eclipse, some image capturing apparatuses which execute the passive focus detection function restrict the aperture ratio of the imaging optical system or the focus detectable region. However, the following problems arise when the focus detection unit is designed not to eclipse a light beam used in focus detection. For example, when a line CCD used in focus detection is arranged so that a light beam which reaches the line CCD is not cut off by the openings of the apertures of various imaging lenses mounted on the image capturing apparatus, it is necessary to decrease the image height of the focus detectable region or reduce the line CCD scale. That is, the design not to eclipse a light beam used in focus detection narrows the focus detection range, decreases the focus detection precision due to shortage of the base-line length, or decreases the precision of focus detection for a low-luminance object.
However, some image capturing apparatuses improve the focus detection precision or widen the focus detection range by permitting an eclipse of a light beam used in focus detection and concretely grasping the eclipse generation conditions and the degree of eclipse. For example, there is an image capturing apparatus using a method of grasping a focus detection region where an eclipse occurs in accordance with the settings of the imaging optical system, and inhibiting focus detection in this focus detection region. Also, there is an image capturing apparatus using a method of mathematizing in advance attenuation of the light quantity of an optical image upon an eclipse, and correcting the output.
Japanese Patent Laid-Open No. 63-204236 discloses a technique of numerically expressing the amount of eclipse (eclipse amount) of a light beam. In this technique, the reliability of a defocus amount calculated by focus detection is determined based on whether the eclipse amount exceeds a predetermined threshold.
In general, the optical system such as the imaging lens has so-called chromatic aberration in which the refractive index changes for each light wavelength contained in a light beam. More specifically, a light beam having passed through the lens is split into light components of respective wavelengths. An optical image used in focus detection has different optical paths extending to the imaging surface of the optical image for the respective wavelengths. Thus, the presence/absence of an eclipse, the eclipse amount, and the like differ between the respective wavelengths. However, Japanese Patent Laid-Open No. 63-204236 does not consider the chromatic aberration of the imaging optical system, so an error may occur depending on the spectral intensity in focus detection.
Chromatic aberration can be reduced using a plurality of optical components. However, this is not practical considering the difficulty of completely eliminating chromatic aberration, an increase in space for arranging the optical components, the manufacturing cost of the optical components, and the like.
The present invention has been made to solve the conventional problems. The present invention improves the precision of focus detection by taking account of an eclipse arising from chromatic aberration.
According to one aspect of the present invention, there is provided an image capturing apparatus including detection means for performing passive focus detection using outputs from a pair of light receiving element arrays which receive optical images generated by a pair of light beams having passed through different regions of an exit pupil of an imaging optical system, comprising: obtaining means for obtaining a ratio of light quantities of predetermined wavelengths contained in a light beam having passed through a focus detection region where focus detection is to be performed by the detection means; determination means for determining whether the light beam is eclipsed due to the imaging optical system; and correction means for, when the determination means determines that the light beam is eclipsed, correcting outputs respectively from the pair of light receiving element arrays using new correction coefficients obtained by calculating, in accordance with the ratio, eclipse correction coefficients determined in advance for the respective predetermined wavelengths, wherein when the determination means determines that the light beam is eclipsed, the detection means performs the focus detection using the outputs from the pair of light receiving element arrays that have been corrected by the correction means.
According to another aspect of the present invention, there is provided a method of controlling an image capturing apparatus including detection means for performing passive focus detection using outputs from a pair of light receiving element arrays which receive optical images generated by a pair of light beams having passed through different regions of an exit pupil of an imaging optical system, comprising: an obtaining step of obtaining a ratio of light quantities of predetermined wavelengths contained in a light beam having passed through a focus detection region where focus detection is to be performed by the detection means; a determination step of determining whether the light beam is eclipsed due to the imaging optical system; and a correction step of correcting outputs respectively from the pair of light receiving element arrays using new correction coefficients obtained by calculating, in accordance with the ratio, eclipse correction coefficients determined in advance for the respective predetermined wavelengths when the light beam is determined in the determination step to be eclipsed, wherein when the light beam is determined in the determination step to be eclipsed, the detection means performs the focus detection using the outputs from the pair of light receiving element arrays that have been corrected in the correction step.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings. In the following embodiment, the present invention is applied to a digital camera having the phase difference detection type focus detection function as an example of an image capturing apparatus. However, the present invention is applicable to an arbitrary device having the passive focus detection function.
(Arrangement of Digital Camera 100)
An imaging optical system 101 is a lens unit including an imaging lens and focus lens. The lens unit is centered on an optical axis L of the imaging optical system 101 shown in
A main mirror 102 and sub-mirror 103 are interposed between the imaging optical system 101 and the image sensor unit 104 on the optical axis, and are retracted from the optical path of an imaging light beam by a well-known quick return mechanism during imaging. The main mirror 102 is a half mirror, and splits the imaging light beam into reflected light which is guided to a viewfinder optical system above the main mirror 102, and transmitted light which impinges on the sub-mirror 103.
Light reflected by the main mirror 102 forms an image on the matt surface of a focusing plate 105 having a matt surface and a Fresnel surface. The image is guided to the observer's eye via a pentaprism 106 and eyepiece lens unit 107. Part of light diffused by the focusing plate 105 passes through a photometry lens 110 and reaches a photometry sensor 111. The photometry sensor 111 is formed from a plurality of pixels, and R, G, and B color filters are arranged on the respective pixels so that the spectral intensity of an object can be detected. In the embodiment, the photometry sensor 111 includes R, G, and B color filters. However, the practice of the present invention is not limited to this embodiment as long as the photometry sensor 111 includes color filters having predetermined wavelengths as center wavelengths of transmitted light.
The optical path of transmitted light having passed through the main mirror 102 is deflected downward by the sub-mirror 103, and guided to a focus detection unit 120. More specifically, part of a light beam having passed through the imaging optical system 101 is reflected by the main mirror 102 and reaches the photometry sensor 111, and the remaining light beam passes through the main mirror 102 and reaches the focus detection unit 120. The focus detection unit 120 performs focus detection by the phase difference detection method. Note that spectral reflectance information of the main mirror 102 may be stored in advance in a nonvolatile memory (not shown). In this case, the spectral intensity of a light beam which reaches the focus detection unit 120 can be detected using an output from the photometry sensor 111.
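As an illustrative sketch (not part of the embodiment), the spectral intensity reaching the focus detection unit 120 could be estimated from the photometry sensor 111 outputs together with the stored spectral reflectance of the main mirror 102: the photometry sensor sees the reflected component, the focus detection unit receives the transmitted component. The reflectance values and function name below are hypothetical assumptions.

```python
# Hypothetical per-wavelength reflectance of the main mirror 102, as might be
# stored in the nonvolatile memory (values are illustrative only).
MIRROR_REFLECTANCE = {"R": 0.45, "G": 0.50, "B": 0.55}

def intensity_at_focus_unit(photometry_outputs):
    """Estimate the spectral intensity reaching the focus detection unit 120.

    photometry_outputs: {"R": ..., "G": ..., "B": ...} readings of the
    reflected light measured by the photometry sensor 111. Each reading is
    rescaled by transmittance/reflectance = (1 - r) / r of the half mirror.
    """
    return {w: v * (1.0 - MIRROR_REFLECTANCE[w]) / MIRROR_REFLECTANCE[w]
            for w, v in photometry_outputs.items()}

print(intensity_at_focus_unit({"R": 45.0, "G": 50.0, "B": 55.0}))
```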
(Arrangement of Focus Detection Unit 120)
An example of the internal arrangement of the focus detection unit 120 will be explained in detail with reference to the accompanying drawings.
A field mask 201 is a mask for preventing entrance of disturbance light into light receiving element arrays 214 (to be described later) which perform focus detection. The field mask 201 is arranged near a position optically equivalent via the sub-mirror 103 to the imaging surface of the image sensor unit 104 serving as the expected imaging surface of the imaging optical system 101. In the embodiment, the field mask 201 has three cross-shaped openings 202 as shown in
In the description of the embodiment, the three cross-shaped openings 202 are identified by assigning a to the cross-shaped opening positioned at the center, b to the cross-shaped opening positioned on the right, and c to the cross-shaped opening positioned on the left in the arrangement of
A field lens 203 includes field lenses 203a, 203b, and 203c having different optical characteristics. The respective lenses have optical axes different from each other.
Light beams having passed through the field lenses 203a, 203b, and 203c pass through corresponding openings of an aperture 204, and then reach a secondary imaging lens unit 205. Note that an infrared cut filter is arranged in front of the aperture 204 to remove, from the light beam, a component of an infrared wavelength unnecessary for focus detection, but is not illustrated for simplicity.
As shown in
The secondary imaging lens unit 205 forms again, on a light receiving unit 206 arranged behind, images having passed through the cross-shaped openings 202 out of an optical image which is formed on the field mask 201 corresponding to the expected imaging surface via the sub-mirror 103 by the imaging optical system 101. The secondary imaging lens unit 205 includes prisms 212 as shown in
The light receiving unit 206 includes a pair of light receiving element arrays 214 which are arranged in each of the vertical and horizontal directions for each of the cross-shaped openings 202a, 202b, and 202c, as shown in
The distance between optical images formed on the light receiving element arrays paired in the vertical or horizontal direction changes depending on the focus state of an optical image formed on the expected imaging surface of the imaging optical system 101. A defocus amount representing the focus state is calculated based on the difference (change amount) between the distance between optical images that is obtained by calculating the correlation between the light quantity distributions of optical images output from the paired light receiving element arrays 214, and a predetermined distance between in-focus optical images. More specifically, the relationship between the defocus amount and the distance change amount is approximated in advance using a polynomial for the distance change amount. The defocus amount is calculated using the change amount of the distance between optical images that is obtained by the focus detection unit 120. A focus position where an object is in focus can be obtained from the calculated defocus amount. The focus lens is controlled by a focus lens driving unit (not shown), thereby focusing on the object.
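The conversion described above, from the change amount of the distance between optical images to a defocus amount, might be sketched as follows. The polynomial coefficients and the function name are purely illustrative assumptions, not values from the embodiment.

```python
# Illustrative sketch: approximating the defocus amount from the change in
# the separation of the two optical images on the paired light receiving
# element arrays 214, using a polynomial calibrated in advance.
def defocus_from_separation(separation, in_focus_separation,
                            coeffs=(0.0, 1.52, 0.004)):
    """Return an approximate defocus amount for a measured separation.

    separation:          distance between optical images measured by the
                         focus detection unit 120 (e.g. in pixels)
    in_focus_separation: predetermined distance between in-focus images
    coeffs:              hypothetical polynomial coefficients (c0, c1, c2, ...)
    """
    delta = separation - in_focus_separation  # change amount of the distance
    # defocus = c0 + c1*delta + c2*delta**2 + ...
    return sum(c * delta ** i for i, c in enumerate(coeffs))

print(defocus_from_separation(102.0, 100.0))  # separation 2 px beyond in-focus
```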
Note that the light receiving element arrays paired in one direction are suited to focus detection of an object image having the contrast component in this direction. By arranging the light receiving element arrays 214 in the vertical and horizontal directions as in the embodiment, so-called cross-shaped focus detection can be executed regardless of the direction of the contrast component of an object image.
As shown in
The embodiment assumes that the photometry sensor 111 performs photometry for 15 photometry regions obtained by dividing a photometry range 219 into three in the vertical direction and five in the horizontal direction. The photometry range 219 and the focus detection regions 220 of the focus detection unit 120 have a positional relationship as shown in
(Principle of Eclipse in Focus Detection)
The relationship between a light beam which reaches the light receiving unit 206 of the focus detection unit 120 and a light beam which passes through the imaging optical system 101 when performing focus detection in the focus detection region 220a will be explained with reference to
As shown in
As described above, light beams passing through the point of intersection between the field mask 201 and the optical axis L are determined by the aperture diameter of the imaging lens aperture 304. Of these light beams, only light beams passing through a region obtained by back-projecting the openings 211a-3 and 211a-4 of the aperture 204 on the surface of the imaging lens aperture 304 via the field lens 203 reach the light receiving unit 206 of the focus detection unit 120.
More specifically, when viewed from the point of intersection between the field mask 201 and the optical axis L, the openings 211a-3 and 211a-4 back-projected on the surface of the imaging lens aperture 304 form back-projected images 801a-3 and 801a-4, as shown in
Also, light beams passing through the point of intersection between the field mask 201 and the optical axis L are free from an eclipse caused by vignetting by the front frame member 305 and back frame member 306. As shown in
In contrast, when focus detection is performed in the focus detection region 220c, the presence/absence of an eclipse of a light beam which reaches the light receiving unit 206 of the focus detection unit 120 changes. This will be described with reference to the top view of
As shown in
The generated eclipse results in a difference between outputs from the light receiving element arrays 214c-3 and 214c-4 corresponding to the focus detection region 220c. For example, for an object image exhibiting a uniform luminance, the respective light receiving element arrays exhibit outputs as shown in
Assume that an output from each light receiving element array 214 shown in
(Difference in Eclipse Depending on Wavelength Difference)
An object image is generally colored and contains light components of various wavelengths. In practice, a light beam having passed through the imaging optical system 101 and field lens 203 is split into light components of respective wavelengths, as described above. Splitting of a light beam passing through the end point on the side far from the optical axis L on the back-projected image 217 as in
Which of light beams of the red and blue wavelengths is eclipsed out of a pair of light beams used in focus detection is determined based on the image height, the position of a member which causes an eclipse, and the opening size. As shown in
In some cases, a light receiving element array suffering an eclipse changes depending on splitting of a light beam used in focus detection. The focus detection precision decreases if processing of correcting a luminance decrease caused by an eclipse is simply applied to an output from one of paired light receiving element arrays as shown in
(Circuit Arrangement of Digital Camera 100)
A central processing circuit 1401 is a 1-chip microcomputer including a CPU, RAM, ROM, ADC (A/D converter), and input/output ports. The ROM of the central processing circuit 1401 is a nonvolatile memory. The ROM stores control programs for the digital camera 100 including the program of object focusing processing (to be described later), and parameter information about settings of the digital camera 100 and the like. In the embodiment, the ROM stores even information about the state of the imaging optical system 101 for determining whether a light beam used in focus detection is eclipsed.
A shutter control circuit 1402 controls traveling of the front and rear curtains of a shutter (not shown) based on information input via a data bus DBUS while receiving a control signal CSHT from the central processing circuit 1401. More specifically, the central processing circuit 1401 receives a signal SW2, corresponding to an imaging instruction of the release button, from SWS, which outputs a switching signal when the user interface of the digital camera 100 is operated. Then, the central processing circuit 1401 outputs a control signal to drive the shutter.
An aperture control circuit 1403 controls driving of the imaging lens aperture 304 by controlling an aperture driving mechanism (not shown) based on information input via the DBUS while receiving a control signal CAPR from the central processing circuit 1401.
A light projecting circuit 1404 projects auxiliary light for focus detection. The LED of the light projecting circuit 1404 emits light in accordance with a control signal ACT and sync clock CK from the central processing circuit 1401.
A lens communication circuit 1405 serially communicates with a lens control circuit 1406 based on information input via the DBUS while receiving a control signal CLCOM from the central processing circuit 1401. The lens communication circuit 1405 outputs lens driving data DCL for the lens of the imaging optical system 101 to the lens control circuit 1406 in synchronism with a clock signal LCK. In addition, the lens communication circuit 1405 receives lens information DLC representing the lens state. The lens driving data DCL contains the body type of the digital camera 100 on which the imaging optical system 101 is mounted, the type of the focus detection unit 120, and the lens driving amount.
The lens control circuit 1406 changes the focus state of an object image by moving a predetermined lens of the imaging optical system 101 using a lens driving unit 1502. The lens control circuit 1406 has an internal arrangement as shown in
A CPU 1503 is an arithmetic unit which controls the operation of the lens control circuit 1406. The CPU 1503 outputs, to the lens driving unit 1502, a control signal corresponding to lens driving amount information out of input lens driving data to change the position of a predetermined lens of the imaging optical system 101. When a focus adjustment lens (not shown) is moving, the CPU 1503 outputs a signal BSY to the lens communication circuit 1405. When the lens communication circuit 1405 receives this signal, serial communication between the lens communication circuit 1405 and the lens control circuit 1406 is not executed.
A memory 1501 is a nonvolatile memory. The memory 1501 stores, for example, the type of the imaging optical system 101, the positions of the range ring and zoom ring, a coefficient representing the extension amount of the focus adjustment lens with respect to the defocus amount, and exit pupil information corresponding to the focal length of the imaging lens. The exit pupil information is information about the position of a member which restricts the effective f-number for a light beam passing through the imaging optical system 101, or the diameter of the member, such as the imaging lens aperture 304, front frame member 305, or back frame member 306. Information stored in the memory 1501 is read out by the CPU 1503, subjected to predetermined arithmetic processing, and transmitted as the lens information DLC to the central processing circuit 1401 via the lens communication circuit 1405.
When the imaging optical system 101 is an optical system having a plurality of focal lengths, such as a so-called zoom lens, focal length information is the representative value of each range obtained by dividing a continuously changing focal length into a plurality of ranges. In general, range ring position information is not directly used in focusing calculation, and thus its precision need not be so high, unlike other pieces of information.
Upon receiving a control signal CSPC from the central processing circuit 1401, a photometry circuit 1407 outputs an output SSPC of each photometry region of the photometry sensor 111 to the central processing circuit 1401. The output SSPC of each photometry region is A/D-converted by the ADC of the central processing circuit 1401, and used as data for controlling the shutter control circuit 1402 and aperture control circuit 1403. The central processing circuit 1401 detects the ratio of predetermined wavelength light components contained in a light beam passing through the focus detection region by using outputs from the respective photometry regions.
A sensor driving circuit 1408 is connected to the light receiving element arrays 214 of the light receiving unit 206 of the above-described focus detection unit 120. The sensor driving circuit 1408 drives a light receiving element array 214 corresponding to a selected focus detection region 220, and outputs an obtained image signal SSNS to the central processing circuit 1401. More specifically, the sensor driving circuit 1408 receives control signals STR and CK from the central processing circuit 1401. Based on the signals, the sensor driving circuit 1408 transmits control signals φ1, φ2, CL, and SH to the light receiving element array 214 corresponding to the selected focus detection region 220, thereby controlling driving.
(Object Focusing Processing)
Object focusing processing in the digital camera 100 having the above arrangement according to the embodiment will be explained in detail with reference to the flowchart of
In step S1601, the CPU determines a focus detection region 220, where focus detection is to be performed, out of the predetermined focus detection regions 220 falling within the imaging angle of view. The focus detection region 220 is determined based on a user instruction, a principal object detection algorithm in the digital camera 100, or the like.
In step S1602, the CPU controls the sensor driving circuit 1408 to expose the light receiving element arrays 214 corresponding to the focus detection region 220, determined in step S1601, where focus detection is to be performed. The exposure time of the light receiving element arrays 214 is determined so as not to saturate the light quantity in each light receiving element. After the end of exposing the light receiving element arrays 214, the CPU receives the image signals SSNS of the light receiving element arrays 214 from the sensor driving circuit 1408.
In step S1603, the CPU controls the photometry circuit 1407 so that the photometry sensor 111 performs photometry in a photometry region corresponding to the focus detection region 220 where focus detection is to be performed that has been determined in step S1601. Then, the CPU obtains, from the photometry circuit 1407, the output value of the photometry sensor 111 for the photometry region corresponding to the focus detection region 220 where focus detection is to be performed. Further, the CPU obtains the ratio of predetermined wavelength light components contained in a light beam used in focus detection in the photometry region. Note that exposure of the photometry sensor 111 may be executed at a timing synchronized with the focus detection operation in step S1602. Of outputs from the photometry sensor 111 immediately before the focus detection operation, an output from the photometry sensor 111 for the photometry region corresponding to the focus detection region 220 where focus detection is to be performed may be used in the following processing.
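Step S1603 can be sketched as follows, assuming the predetermined wavelengths are the R, G, and B center wavelengths of the photometry sensor's color filters; the function name and readings below are hypothetical.

```python
# Illustrative sketch of step S1603: deriving the ratio of predetermined
# wavelength components contained in the light beam passing through the
# selected focus detection region 220, from the photometry sensor 111
# outputs of the corresponding photometry region.
def wavelength_ratios(r_out, g_out, b_out):
    """Return the ratios (Tr, Tg, Tb) of R, G, B components, summing to 1."""
    total = r_out + g_out + b_out
    if total == 0:
        return (0.0, 0.0, 0.0)  # no light: ratios are undefined
    return (r_out / total, g_out / total, b_out / total)

print(wavelength_ratios(30.0, 50.0, 20.0))
```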
In step S1604, the CPU determines whether a light beam used in focus detection in the vertical or horizontal direction is eclipsed in the focus detection region 220 where focus detection is to be performed. As described above, whether the light beam used in focus detection is eclipsed changes depending on the spectral intensity of the light beam, the arrangements and structures of the imaging optical system 101 and focus detection unit 120, and the like. In the embodiment, the CPU makes the determination using a table representing the presence/absence of generation of an eclipse in at least either the vertical or horizontal direction for information about the lens type and focal length that can be obtained from the imaging optical system 101, and for the focus detection region 220.
For example, as shown in
More specifically, in step S1604, the CPU determines whether the table representing the presence/absence of generation of an eclipse contains a combination (focus detection condition) of the lens type and focal length obtained from the mounted imaging optical system 101, and the focus detection region 220 where focus detection is to be performed. If the table representing the presence/absence of generation of an eclipse contains the focus detection condition, the CPU determines that the light beam used in focus detection is eclipsed, and the process shifts to step S1605. If the table representing the presence/absence of generation of an eclipse does not contain the focus detection condition, the CPU determines that the light beam used in focus detection is not eclipsed, and the process shifts to step S1606.
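The table lookup of step S1604 might be sketched as follows; the table entries, lens names, and correction equation identifiers are hypothetical placeholders, not contents of the embodiment's table.

```python
# Illustrative sketch of step S1604: a table keyed by the focus detection
# condition (lens type, focal length, focus detection region) that records
# the eclipse direction and the correction equation to use. Entries are
# hypothetical examples.
ECLIPSE_TABLE = {
    ("lens 3", "focal length 3-3", "220b"): ("vertical", "C"),
    ("lens 1", "focal length 1-2", "220c"): ("horizontal", "A"),
}

def check_eclipse(lens_type, focal_length, region):
    """Return (eclipsed?, eclipse direction, correction equation id)."""
    entry = ECLIPSE_TABLE.get((lens_type, focal_length, region))
    if entry is None:
        # Condition not in the table: no eclipse, proceed to step S1606.
        return (False, None, None)
    # Condition found: the light beam is eclipsed, correct it in step S1605.
    return (True, entry[0], entry[1])

print(check_eclipse("lens 3", "focal length 3-3", "220b"))
```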
In step S1605, the CPU corrects outputs from the light receiving element arrays 214 corresponding to the focus detection region 220, where focus detection is to be performed, by using a correction equation which corresponds to the focus detection condition and is obtained from the table representing the presence/absence of generation of an eclipse.
Letting IN(g) be a pixel value input from a pixel g of the light receiving element array 214 and OUT(g) be a pixel value output after correction, the basic form of the correction equation is given by
OUT(g) = IN(g) × (K1(g)×T1 + K2(g)×T2 + … + Kn(g)×Tn)
The correction equation determines an eclipse correction coefficient Ki(g) for each pixel and for each of the i (= 1, 2, ..., n) wavelengths. The correction coefficients are weighted and summed by the ratio Ti at which the light beam contains the light component of the i-th wavelength, thereby obtaining a new correction coefficient for correcting the pixel value. The pixel value is multiplied by the thus-obtained new correction coefficient, yielding an output in which the influence of an eclipse is corrected in consideration of the spectral intensity. That is, the correction equation for each focus detection condition in the table representing the presence/absence of generation of an eclipse determines, for each light receiving element array 214 corresponding to the focus detection region, an eclipse correction coefficient which changes between the respective predetermined wavelengths. Note that the eclipse correction coefficient which changes between respective wavelengths is not limited to a numerical value, and may be a polynomial using, for example, the pixel position as a variable.
For example, when the mounted imaging optical system 101 is “lens 3”, the currently set focal length is “focal length 3-3”, and the “focus detection region 220b” is selected, the direction in which an eclipse occurs is the vertical direction, and the correction equation used in focus detection is “correction equation C”. Correction equation C need only determine correction coefficients for the respective wavelengths at which the photometry sensor 111 detects the spectral intensity, as shown in
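A minimal sketch of the correction of step S1605, implementing OUT(g) = IN(g) × (K1(g)×T1 + … + Kn(g)×Tn) for two wavelengths; all coefficient and pixel values below are illustrative assumptions.

```python
# Illustrative sketch of step S1605: per-pixel eclipse correction
# coefficients Ki(g), determined in advance for each predetermined
# wavelength, are blended by the measured spectral ratios Ti to form a new
# correction coefficient, which multiplies each pixel value IN(g).
def correct_output(pixels, coeffs_per_wavelength, ratios):
    """pixels[g]: IN(g); coeffs_per_wavelength[i][g]: Ki(g); ratios[i]: Ti."""
    corrected = []
    for g, value in enumerate(pixels):
        # New coefficient: sum of Ki(g) * Ti over the wavelengths i.
        new_k = sum(k[g] * t for k, t in zip(coeffs_per_wavelength, ratios))
        corrected.append(value * new_k)  # OUT(g) = IN(g) * new coefficient
    return corrected

# Hypothetical example: two wavelengths (e.g. red and blue), three pixels,
# each wavelength contributing half of the light quantity.
K = [[1.0, 1.1, 1.2],   # K1(g): correction for the red wavelength
     [1.0, 1.0, 1.4]]   # K2(g): correction for the blue wavelength
print(correct_output([100.0, 100.0, 100.0], K, [0.5, 0.5]))
```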
In step S1606, the CPU detects the phase difference by calculating the correlations between outputs from light receiving element arrays paired in the vertical direction and those paired in the horizontal direction out of the light receiving element arrays 214 corresponding to the focus detection region 220 where focus detection is to be performed. Then, the CPU calculates the defocus amount including the defocus direction from the phase difference. Note that calculation of the defocus amount can use a well-known method as disclosed in Japanese Patent Publication No. 5-88445. If an output from the light receiving element array 214 has been corrected in step S1605, the CPU only calculates the defocus amount for the corrected output from the light receiving element array in this step.
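The correlation calculation of step S1606 is well known; as a rough sketch (not the specific method of Japanese Patent Publication No. 5-88445), the phase difference could be estimated by a shift search minimizing the mean absolute difference between the paired array outputs. The names and data below are hypothetical, and a real implementation would interpolate to sub-pixel precision.

```python
# Illustrative sketch: estimating the integer phase difference between the
# outputs of a pair of light receiving element arrays 214 by evaluating the
# correlation (here, mean absolute difference) over candidate shifts.
def phase_difference(a, b, max_shift=3):
    best_shift, best_score = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare overlapping samples of a[i] and b[i + s].
        pairs = [(a[i], b[i + s]) for i in range(len(a))
                 if 0 <= i + s < len(b)]
        score = sum(abs(x - y) for x, y in pairs) / len(pairs)
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift

a = [0, 1, 5, 9, 5, 1, 0, 0]
b = [0, 0, 0, 1, 5, 9, 5, 1]   # same profile shifted by two elements
print(phase_difference(a, b))
```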
In step S1607, the CPU determines, based on the defocus amount calculated in step S1606, whether the object is in focus in the current focus state. If the CPU determines that the object is in focus in the current focus state, the object focusing processing ends; if the CPU determines that the object is out of focus in the current focus state, the process shifts to step S1608. In step S1608, the CPU moves a predetermined lens of the imaging optical system 101 in accordance with the defocus amount, and the process returns to step S1602.
After the object comes into focus, the digital camera 100 can capture an image. When the user presses the release button fully, the digital camera 100 executes imaging processing. If the digital camera 100 waits for a predetermined time without receiving an imaging instruction from the user after the object comes into focus, the CPU may execute object focusing processing again because the focus detection state may change.
In the embodiment, the spectral intensity of a light beam used in focus detection is measured using the photometry sensor 111. However, the practice of the present invention is not limited to this. For example, the spectral intensity may be measured on the light receiving element array 214. The focus detection method is not limited to the method described in the embodiment, and the present invention can be practiced even using another passive focus detection method. For example, the present invention is effective when phase difference-based focus detection is performed by a pair of pixels arranged on an expected imaging surface 210 of an imaging lens, as disclosed in Japanese Patent Laid-Open No. 2000-156823, using light beams passing through different portions of the imaging lens.
In the above-described embodiment, information representing the presence/absence of generation of an eclipse in correspondence with each focus detection condition is stored in the storage area on the body side of the digital camera. However, the information may be stored in the storage area of a lens barrel. With this setting, the present invention can cope even with a lens barrel developed after the manufacture of the digital camera body.
In the above-described embodiment, the table representing the presence/absence of generation of an eclipse contains information about all focus detection regions where an eclipse occurs. However, for focus detection regions which exist at positions symmetrical about the optical axis of the optical system, the table suffices to contain only information about one focus detection region. Also, the method of determining the presence/absence of generation of an eclipse is not limited to a method which looks up the table. The presence/absence of generation of an eclipse may be determined every time using information such as the exit pupil and the positions and diameters of the front and back frame members which are obtained from the lens barrel.
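The every-time determination mentioned above can be illustrated with a simplified one-dimensional (meridional) model in which each frame member's opening is projected onto the exit pupil plane as seen from an off-axis focus detection point. The geometry and all parameter names below are assumptions for illustration, not the specification's method.

```python
def eclipse_occurs(h, pupil_dist, beam_radius, frames):
    """Simplified 1-D vignetting check for an off-axis focus detection point.

    h           : image height of the focus detection point (sensor at z = 0)
    pupil_dist  : distance from the sensor to the exit pupil plane
    beam_radius : radius of the focus-detection light beam on the pupil plane
    frames      : iterable of (z, r) pairs -- the distance and opening radius
                  of the front/back frame members obtained from the lens barrel

    Each frame opening is projected onto the pupil plane along rays from the
    off-axis point; if the beam extends beyond any projected opening, the
    frame member clips (eclipses) the beam.
    """
    for z, r in frames:
        scale = pupil_dist / z
        projected_center = h * (1.0 - scale)  # opening center seen off-axis
        projected_radius = r * scale          # opening radius on pupil plane
        if projected_radius - abs(projected_center) < beam_radius:
            return True   # beam is partly cut off by this frame member
    return False
```

On the optical axis (h = 0) the projected opening stays centered, so a beam that clears the frame there can still be eclipsed at an off-axis point, where the projected opening shifts, which matches the motivation for the symmetry argument above.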
The above-described embodiment has explained a method of correcting the influence of an eclipse on a light receiving element array corresponding to a focus detection region where focus detection is to be performed. However, whether the focus detection region is usable in focus detection may be determined based on the degree of eclipse. More specifically, when it is determined that the influence of an eclipse is significant in an output from the light receiving element array and even correction cannot improve the focus detection precision, the focus detection region can be excluded from the focus detection target, avoiding poor-precision focus detection.
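The exclusion decision described above amounts to a threshold test per focus detection region; a minimal sketch follows. The eclipse-ratio representation and the threshold value are hypothetical.

```python
def usable_regions(regions, eclipse_ratio, max_ratio=0.3):
    """Keep only focus detection regions whose estimated eclipse is small
    enough that correction can still yield reliable focus detection.

    eclipse_ratio : mapping from region id to the fraction of the
                    focus-detection beam that is cut off (0.0 = no eclipse)
    max_ratio     : hypothetical threshold above which a region is excluded
    """
    return [r for r in regions if eclipse_ratio.get(r, 0.0) <= max_ratio]
```

Regions above the threshold are simply dropped from the focus detection targets, avoiding poor-precision results rather than attempting a correction that cannot succeed.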
As described above, the image capturing apparatus according to the embodiment can correct an eclipse with high precision by accurately calculating the eclipse amount of a light beam used in focus detection by the imaging optical system, without the influence of the object color or the light source in the imaging environment. The image capturing apparatus can therefore perform high-precision focus detection.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2011-076393, filed Mar. 30, 2011, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind
---|---|---|---
2011-076393 | Mar 2011 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2012/056950 | 3/13/2012 | WO | 00 | 9/18/2013