The present invention relates to a focus detection apparatus and an image capturing apparatus.
Conventionally, an image capturing apparatus of a phase difference detection type is known. In the phase difference detection method, light from a subject is divided into two images using two convex lenses arranged side by side, and a focus state is detected from the phase difference between a pair of image signals corresponding to the respective images.
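For illustration only, the following is a minimal sketch, not part of the disclosed apparatus, of how the phase difference between a pair of one-dimensional image signals can be estimated; the function name and the use of a mean-absolute-difference criterion are assumptions made for this example.

```python
import numpy as np

def image_shift(a: np.ndarray, b: np.ndarray, max_shift: int) -> int:
    """Return the relative shift (in pixels) that best aligns a pair of
    image signals, found by minimizing the mean absolute difference."""
    n = len(a)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        i0, i1 = max(0, -s), n - max(0, s)  # overlapping sample range
        err = np.abs(a[i0:i1] - b[i0 + s:i1 + s]).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Two hypothetical image signals: b is a displaced copy of a, as would be
# produced by a defocused subject; the recovered shift indicates the focus state.
a = np.sin(np.linspace(0.0, 6.0, 64))
b = np.roll(a, 3)
print(image_shift(a, b, max_shift=8))  # -> 3
```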
Japanese Patent Laid-Open No. 10-104502 discloses a configuration in which an area sensor capable of detecting correlation in a longitudinal direction and an area sensor capable of detecting correlation in a lateral direction are arranged on one chip and the readout direction is changed for each area sensor.
However, in the conventional technique disclosed in the above-mentioned Japanese Patent Laid-Open No. 10-104502, since the readout direction is changed for each image forming area, the circuit scale between the area sensors necessarily becomes large. As a result, the sensor layout is restricted, and the size of the image capturing apparatus itself, as well as the size of the sensor, increases.
The present invention has been made in consideration of the above situation, and shortens the time required for focus detection processing while suppressing the circuit scale of the area sensor used as the focus detection sensor.
According to the present invention, provided is a focus detection apparatus comprising: an image sensor having a pixel region with a plurality of pixels, which includes a pair of first light receiving areas that receive a pair of light beams which have undergone pupil division in a first direction, and a pair of second light receiving areas that receive a pair of light beams which have undergone pupil division in a second direction different from the first direction, and a scanning unit that selects rows of the pixel region from which signals are read out, wherein the first light receiving areas and the second light receiving areas are arranged so as not to be simultaneously included in the rows selected by the scanning unit.
Further, according to the present invention, provided is a focus detection apparatus comprising: a dividing unit that divides light beams incoming via an imaging optical system into a plurality of different directions; an image sensor having a pixel region with a plurality of pixels, which includes a plurality of pairs of light receiving areas that receive the light beams divided by the dividing unit, and a scanning unit that selects rows of the pixel region from which signals are read out; and a focus detection unit that detects a focus state based on phase differences between a plurality of pairs of signals read out from the plurality of pairs of light receiving areas, respectively, wherein the dividing unit divides the light beams so that pairs of light receiving areas whose detection directions of the phase difference are different are not simultaneously included in the rows selected by the scanning unit.
Furthermore, according to the present invention, provided is a focus detection apparatus comprising: an image sensor having a pixel region with a plurality of pixels, which includes a pair of first light receiving areas that receive a pair of light beams which have undergone pupil division in a first direction, and a pair of second light receiving areas that receive a pair of light beams which have undergone pupil division in a second direction different from the first direction, a first light-shielded region and a second light-shielded region provided along the periphery of the pixel region in the first direction and in the second direction, respectively, and a scanning unit that selects rows of the pixel region from which signals are read out, wherein the scanning unit selects the rows so as to read out signals from the first light-shielded region in order to correct a pair of first signals read out from the pair of first light receiving areas, and to read out signals from the second light-shielded region in order to correct a pair of second signals read out from the pair of second light receiving areas.
Further, according to the present invention, provided is an image capturing apparatus comprising: an image sensing device that performs photoelectric conversion on light beams incoming via an imaging optical system and outputs an image signal; the focus detection apparatus comprising an image sensor having a pixel region with a plurality of pixels, which includes a pair of first light receiving areas that receive a pair of light beams which have undergone pupil division in a first direction, and a pair of second light receiving areas that receive a pair of light beams which have undergone pupil division in a second direction different from the first direction, and a scanning unit that selects rows of the pixel region from which signals are read out; and a controller that controls the imaging optical system based on a focus state detected by the focus detection apparatus, wherein the first light receiving areas and the second light receiving areas are arranged so as not to be simultaneously included in the rows selected by the scanning unit.
Further, according to the present invention, provided is an image capturing apparatus comprising: an image sensor that performs photoelectric conversion on light beams incoming via an imaging optical system and outputs an image signal; the focus detection apparatus comprising a dividing unit that divides light beams incoming via the imaging optical system into a plurality of different directions, an image sensor having a pixel region with a plurality of pixels, which includes a plurality of pairs of light receiving areas that receive the light beams divided by the dividing unit, and a scanning unit that selects rows of the pixel region from which signals are read out, and a focus detection unit that detects a focus state based on phase differences between a plurality of pairs of signals read out from the plurality of pairs of light receiving areas, respectively; and a controller that controls the imaging optical system based on a focus state detected by the focus detection apparatus, wherein the dividing unit divides the light beams so that pairs of light receiving areas whose detection directions of the phase difference are different are not simultaneously included in the rows selected by the scanning unit.
Further, according to the present invention, provided is an image capturing apparatus comprising: an image sensor that performs photoelectric conversion on light beams incoming via an imaging optical system and outputs an image signal; the focus detection apparatus comprising an image sensor having a pixel region with a plurality of pixels, which includes a pair of first light receiving areas that receive a pair of light beams which have undergone pupil division in a first direction, and a pair of second light receiving areas that receive a pair of light beams which have undergone pupil division in a second direction different from the first direction, a first light-shielded region and a second light-shielded region provided along the periphery of the pixel region in the first direction and in the second direction, respectively, and a scanning unit that selects rows of the pixel region from which signals are read out; and a controller that controls the imaging optical system based on a focus state detected by the focus detection apparatus, wherein the scanning unit selects the rows so as to read out signals from the first light-shielded region in order to correct a pair of first signals read out from the pair of first light receiving areas, and to read out signals from the second light-shielded region in order to correct a pair of second signals read out from the pair of second light receiving areas.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the description, serve to explain the principles of the invention.
Exemplary embodiments of the present invention will be described in detail in accordance with the accompanying drawings. The dimensions, shapes, and relative positions of the constituent parts shown in the embodiments should be changed as appropriate depending on various conditions and on the structure of the apparatus to which the invention is applied, and the invention is not limited to the embodiments described herein.
The CPU 102 performs control of the camera main body 101. The memory 103 is a memory, such as a RAM and a ROM, connected to the CPU 102, and stores programs executed by the CPU 102 and various data.
When image shooting is not performed, the half mirror 106 reflects a part of the light incident from the lens unit 118 and forms an image on the focusing plate 107. The pentaprism 109 reflects the light passing through the focusing plate 107 toward the photometric sensor 108 and the optical viewfinder 110. The photometric sensor 108 includes an image sensor such as a CCD or CMOS sensor, and performs photometric calculation and subject recognition processing such as face detection computation, tracking computation, and light source detection.
The half mirror 106 transmits a part of the light, and the transmitted light is bent downward by the sub mirror 111 arranged behind the half mirror 106, passes through the field mask 112, the infrared cut filter 113, the field lens 114, the diaphragm 115, and the secondary imaging lens 116, and forms an image on the focus detection sensor 117. Based on the image signal obtained by photoelectric conversion of this image, the focus state of the lens unit 118 is detected.
On the other hand, at the time of image shooting, the half mirror 106 and the sub mirror 111 are retracted from the optical path, and the light entering from the lens unit 118 reaches the imaging unit 104 as a subject image via the shutter 105. The shutter 105 can be opened and closed: it is closed when image shooting is not performed so as to shield the imaging unit 104 from light, and is opened at the time of image shooting to pass light toward the imaging unit 104. The imaging unit 104 includes an image sensor, such as a CCD or CMOS sensor, together with an infrared cut filter, a low-pass filter, and the like, and outputs an image signal corresponding to the amount of incident light.
The LPU 119 performs control to move the lens group 120 in the lens unit 118. For example, upon receiving a defocus amount from the CPU 102, the LPU 119 moves the lens group 120, based on the defocus amount, to a position where it is in focus (hereinafter referred to as the “in-focus position”).
Likewise, light beams 202a and 202b (incident light) from the object OBJ pass through pupil areas 302a and 302b, which are located farther from the optical axis of the lens group 120 than the pupil areas 301a and 301b, and form images on the focal plane P (primary imaging plane) located in the vicinity of the field mask 112. The light beams 202a and 202b are split in the left and right direction (second direction) by the secondary imaging lenses 402a and 402b, and form images again in the imaging areas 502a and 502b of the focus detection sensor 117. The two left and right object images are used in a correlation calculation to obtain a defocus amount.
The imaging areas 502a and 502b correspond to the light beams 202a and 202b, which have a long base line length and therefore provide high focus detection accuracy. On the other hand, the imaging areas 501a and 501b correspond to the light beams 201a and 201b, which provide a wide range in which the defocus amount can be detected.
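As a rough numerical illustration of this trade-off (all numbers below are assumed for the example, not taken from the description): a longer base line converts a given defocus into a larger image shift, so a fixed readout error costs less accuracy, while a fixed correlation search window covers a smaller defocus range.

```python
MAX_SEARCH_SHIFT_PIX = 40  # assumed correlation search window (pixels)

def accuracy_and_range(shift_pix_per_mm: float) -> tuple[float, float]:
    """shift_pix_per_mm, the image shift per millimeter of defocus, grows
    with the base line length."""
    accuracy_mm = 0.5 / shift_pix_per_mm           # half-pixel read error
    detectable_range_mm = MAX_SEARCH_SHIFT_PIX / shift_pix_per_mm
    return accuracy_mm, detectable_range_mm

print(accuracy_and_range(5.0))   # short base line: coarser, but wide range
print(accuracy_and_range(20.0))  # long base line: finer, but narrow range
```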
Next, the operation of the focus detection sensor 117 will be described with reference to a timing chart.
At time t0, the control pulse ϕS0 is made high, and the horizontal selection switch MOS transistors 6 of the first and second rows are turned on to select the pixels 30-1j and 30-2j of the first and second rows. Next, at time t1, the control pulse ϕR0 is made low, whereby the resetting of the corresponding FD portions 21 is stopped, the FD portions 21 are brought into a floating state, and the gate-source paths of the corresponding source follower amplifier MOS transistors 5 are opened. Thereafter, during the period from time t2 to time t3, the control pulse ϕTN is made high, and the dark voltages of the FD portions 21 are output to the dark output capacitors 10 through the source follower operation.
Next, in order to output charges generated by photoelectric conversion in the pixels 30-1j in the first row, at time t4, the control pulse ϕTXo0 of the first row is made high to make the corresponding transfer switch MOS transistors 3 conductive, and then, during the period from time t5 to time t6, the control pulse ϕPGo0 is made low. At this time, it is preferable to set the voltage relationship such that the potential well spreading under the photogate 2 is raised so that the photocarriers are completely transferred to the FD portions 21. Therefore, as long as complete transfer is possible, the control pulse ϕTX may be a fixed potential rather than a pulse.
The electric charges from the photoelectric conversion portions 1 are transferred to the FD portions 21 during the period from time t4 to time t7, so that the potentials of the FD portions 21 change in accordance with the amount of light. At this time, since the source follower amplifier MOS transistors 5 are in a floating state, the control pulse ϕTS is made high during the period from time t8 to time t9, and the potentials of the FD portions 21 are read out to the bright output capacitors 11. At this point, since the dark outputs and the bright outputs of the pixels 30-1j in the first row are stored in the capacitors 10 and 11, respectively, if differential outputs are taken by the differential amplifiers 12 during the period from time t9 to time t11, it is possible to obtain signals with a good S/N ratio in which random noise and fixed pattern noise are reduced. The differential outputs are converted into digital data by the column AD circuits 13, and the converted digital data is output to the CPU 102 at a pulse timing controlled by the DFE circuit 14.
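The benefit of the differential output can be sketched with a toy example (values assumed, not sensor data): offsets that appear identically in the dark and bright samples cancel in the difference, leaving only the photo-generated signal.

```python
import numpy as np

# Per-column offsets appear in both the dark samples (capacitors 10) and the
# bright samples (capacitors 11), so the differential output removes them.
offsets = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 12.5])
photo_signal = np.array([100.0, 102.0, 98.0, 101.0, 99.0, 100.5])

dark_output = offsets
bright_output = offsets + photo_signal
print(bright_output - dark_output)  # -> photo_signal; the offsets cancel
```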
During the period from time t8 to time t9, the bright outputs are output to the bright output capacitors 11. Then, the control pulse ϕR0 is made high from time t10 to time t11 to render the corresponding reset MOS transistors 4 conductive and reset the FD portions 21 to the power supply VDD. When the output of the digital data of the first row is completed, the readout operation of the second row is performed. To read out the second row, the control pulse ϕTXe0 and the control pulse ϕPGe0 are controlled in the same way, and photo charges are stored in the capacitors 10 and 11 by making the control pulses ϕTN and ϕTS high, respectively, thereby obtaining the dark outputs and the bright outputs. Through the above-described driving, the pixels 30-1j and 30-2j in the first and second rows can be read out independently.
Thereafter, by reading the (2n+1)-th rows and the (2n+2)-th rows (n=1, 2, . . . ) in the same manner as above under the control of the vertical scanning circuit 15, it is possible to output signals independently from all the pixels. That is, in the case of n=1, the control pulse ϕS1 is made high, then ϕR1 is made low, thereafter the control pulses ϕTN and ϕTXo1 are made high, the control pulse ϕPGo1 is made low, and the control pulse ϕTS is made high, thereby reading out the pixel signals of the pixels 30-3j in the third row. Subsequently, the control pulses ϕTXe1 and ϕPGe1 and the control pulse ϕTS are applied in the same manner as above to read out the pixel signals of the pixels 30-4j in the fourth row.
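For illustration only, the pulse sequence for one row pair can be summarized as an ordered list of events (the pulse names follow the text; the list representation is an assumption of this sketch).

```python
def pulse_sequence(n: int) -> list[tuple[str, str]]:
    """Events for reading the (2n+1)-th and (2n+2)-th rows, in order."""
    return [
        (f"phiS{n}", "high"),    # select the row pair
        (f"phiR{n}", "low"),     # stop resetting; FD portions float
        ("phiTN", "high"),       # store the odd row's dark output
        (f"phiTXo{n}", "high"),  # transfer the odd row's photo charge
        (f"phiPGo{n}", "low"),
        ("phiTS", "high"),       # store the odd row's bright output
        (f"phiR{n}", "high"),    # reset the FD portions (time t10 to t11)
        (f"phiR{n}", "low"),     # float again for the even row
        ("phiTN", "high"),       # then the even row, in the same way
        (f"phiTXe{n}", "high"),
        (f"phiPGe{n}", "low"),
        ("phiTS", "high"),
    ]

print(pulse_sequence(1))  # sequence for the third and fourth rows
```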
It should be noted that the vertical scanning circuit 15 is configured to be able to select an arbitrary row in accordance with an instruction from the CPU 102.
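One way to picture this capability is the following sketch (the helper name and row spans are assumptions for illustration): the CPU passes the scanning circuit only the rows that cover the imaging areas it currently needs, and every other row is skipped.

```python
def rows_to_read(valid_area_spans: list[tuple[int, int]]) -> list[int]:
    """Each span is a (first_row, last_row) pair for a currently valid
    imaging area; only these rows are selected, so invalid areas cost no
    readout time."""
    rows: set[int] = set()
    for first, last in valid_area_spans:
        rows.update(range(first, last + 1))
    return sorted(rows)

# e.g. reading only the rows covering a single pair of imaging areas
print(rows_to_read([(0, 19), (60, 79)]))
```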
By arranging the vertical scanning circuit 15 in this manner, imaging areas having different correlation directions are not simultaneously included in the rows selected for readout.
Here, it is assumed that a dark lens, namely, a lens whose full-open F number is large, is attached as the lens unit 118. In this case, the light beams 202a and 202b cannot pass through the pupil areas 302a and 302b, and the imaging areas 502a and 502b cannot be used for focus detection.
On the other hand, the light beams 201a and 201b can still pass through the pupil areas 301a and 301b, so the defocus amount can be detected using the imaging areas 501a and 501b, and the readout can be limited to the rows that include those areas.
As described above, the vertical scanning circuit 15 of the first embodiment is arranged so as to reduce the area in which imaging areas having different correlation directions are simultaneously selected. Furthermore, in a case where there are a plurality of pairs of imaging areas having different base line lengths in the effective pixel area, the vertical scanning circuit 15 is arranged so that pairs of imaging areas having a short base line length are included in the same rows. By configuring the focus detection sensor 117 in this manner, the time required for the AF calculation can be shortened.
Next, the CPU 102 controls the focus detection sensor 117 to perform a phase difference AF (auto focus) process (step S103). The CPU 102 transmits a lens drive amount based on the defocus amount calculated in the AF process to the LPU 119, and the LPU 119 moves the lens group 120 to the in-focus position based on the received lens drive amount. The process in step S103 will be described later in further detail.
Next, the CPU 102 receives an on/off notification indicating whether or not the shutter switch (not shown) has been fully pressed by the user (this full press is hereinafter referred to as “SW2”). If SW2 is off (NO in step S104), the CPU 102 returns the process to step S201. On the other hand, if SW2 is on (YES in step S104), the CPU 102 performs the main shooting (step S105), and then ends the present process.
In step S202, the CPU 102 determines whether the imaging areas 502a and 502b are valid or invalid. The CPU 102 communicates with the LPU 119 of the lens unit 118 attached to the image capturing apparatus 100 to determine whether or not light beams can pass through the pupil areas 302a and 302b based on the full-open F number or pupil information of the lens. If the light beams can pass through the pupil areas 302a and 302b, it is determined that the imaging areas 502a and 502b are valid, and the process proceeds to step S203. On the other hand, if the light beams cannot pass through the pupil areas 302a and 302b, it is determined that the imaging areas 502a and 502b are invalid and the process proceeds to step S204.
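A sketch of this decision is shown below; the threshold is an assumed stand-in for the full-open F number or pupil information obtained from the LPU 119, not a value from the description.

```python
OUTER_PUPIL_F_NUMBER_LIMIT = 2.8  # assumed: darker lenses vignette 302a/302b

def outer_imaging_areas_valid(full_open_f_number: float) -> bool:
    """True if light beams can pass through the pupil areas 302a and 302b,
    i.e., the imaging areas 502a and 502b are valid."""
    return full_open_f_number <= OUTER_PUPIL_F_NUMBER_LIMIT

# Valid -> Readout 1 (step S203); invalid -> Readout 2 (step S204).
print("Readout 1" if outer_imaging_areas_valid(1.8) else "Readout 2")
```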
In step S203, since the imaging areas 502a and 502b are determined to be valid in step S202, the CPU 102 instructs the focus detection sensor 117 to output signals from all pixels including pixels in the OB pixel area, and a readout operation (Readout 1) is performed.
On the other hand, in step S204, since the imaging areas 502a and 502b are determined to be invalid in step S202, the CPU 102 instructs the focus detection sensor 117 to limit the output to signals from the imaging areas 501a and 501b, and a readout operation (Readout 2) is performed. The limited readout is performed as described above.
In step S205, the CPU 102 calculates a defocus amount from the pixel signals of each imaging area obtained in step S203 or step S204. Here, image signals are obtained from the pixel outputs on the same rows of each pair of imaging areas. Then, a focus state (defocus amount) of the imaging lens is detected from the phase difference between the image signals. The defocus amounts calculated for the respective rows are averaged, weighted-averaged, or the like, and the obtained value is taken as the final result for each pair of imaging areas. Further, in a case where defocus amounts are obtained for both the imaging areas 501a and 501b and the imaging areas 502a and 502b by Readout 1, one of the defocus amounts is selected. Although there is no particular limitation on the selection method, the defocus amount considered to be more reliable, for example, the one whose pair of image signals shows higher waveform correlation or higher contrast, may be selected. Alternatively, the two defocus amounts may be averaged or weighted-averaged.
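The per-row aggregation can be sketched as follows (the contrast-based weighting is one assumed example of a reliability measure, as permitted by the description above):

```python
import numpy as np

def combine_defocus(per_row_defocus_um: np.ndarray,
                    reliability: np.ndarray) -> float:
    """Weighted average of per-row defocus estimates, weighted by a
    reliability measure such as image-signal contrast."""
    weights = reliability / reliability.sum()
    return float(np.dot(per_row_defocus_um, weights))

rows_um = np.array([42.0, 40.0, 45.0])     # defocus estimate for each row
contrast = np.array([0.9, 0.8, 0.3])       # higher contrast -> more weight
print(combine_defocus(rows_um, contrast))  # -> 41.65
```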
In step S206, if the defocus amount calculated in step S205 is within a desired range, for example, within ¼Fδ (F: aperture value of the lens, δ: constant (20 μm)), the CPU 102 determines that the image is in focus. Specifically, if the aperture value F of the lens is 2.0, in the case where the defocus amount is 10 μm or less, it is determined that the image is in focus and the AF process is ended.
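For reference, the stated criterion can be written out directly; this is a minimal sketch of the ¼Fδ check only.

```python
DELTA_UM = 20.0  # the constant delta given above

def in_focus(defocus_um: float, f_number: float) -> bool:
    """In-focus if |defocus| is within (1/4) * F * delta."""
    return abs(defocus_um) <= 0.25 * f_number * DELTA_UM

print(in_focus(10.0, 2.0))  # True: at the 10 um threshold, AF ends
print(in_focus(12.0, 2.0))  # False: a lens drive is performed (step S207)
```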
On the other hand, if the defocus amount calculated in step S205 is larger than ¼Fδ, the CPU 102 transmits the lens drive amount corresponding to the defocus amount obtained in step S205 to the lens unit 118 in step S207. Then, the CPU 102 returns the process to step S201 and repeats the above-described operation until it is determined that the in-focus state is achieved.
According to the first embodiment as described above, the vertical scanning circuit is disposed in such a direction that imaging areas having different correlation directions in the focus detection sensor are not simultaneously selected and read out. Furthermore, based on the information from the lens, it is determined whether each pair of imaging areas in the focus detection sensor is valid or invalid. Then, by having the vertical scanning circuit limit the selected readout rows to the valid imaging areas and outputting signals only from the pixels in the selected rows, the time required for the AF control can be shortened.
Next, a second embodiment of the present invention will be described in detail with reference to the accompanying drawings.
The focus detection sensor 117 according to the first embodiment has two pairs of imaging areas, and the AF area 701 is located at the center of the screen of the viewfinder 700. By contrast, the focus detection sensor 217 according to the second embodiment has six pairs of imaging areas, and the AF area is enlarged to the left and right regions of the screen in addition to the central portion.
At the central portion of the focus detection sensor 217, two pairs of imaging areas, i.e., imaging areas 801a and 801b with the correlation direction in the horizontal direction and imaging areas 802a and 802b with the correlation direction in the vertical direction, are arranged. The base line lengths of the imaging areas 801a and 801b and the imaging areas 802a and 802b have the same relationship as in the focus detection sensor 117, and the base line length of the imaging areas 802a and 802b is longer.
In the left and right portions of the focus detection sensor 217, two pairs of imaging areas are arranged similarly to the central portion. In the right portion of the focus detection sensor 217, imaging areas 803a and 803b having the correlation direction in the vertical direction and imaging areas 804a and 804b having the correlation direction in the horizontal direction are arranged. Further, in the left portion of the focus detection sensor 217, imaging areas 805a and 805b having the correlation direction in the vertical direction and imaging areas 806a and 806b having the correlation direction in the horizontal direction are arranged.
The vertical scanning circuit 15 is disposed on the lower side with respect to the effective pixel regions 801, 803, and 805, and scans in the direction of an arrow (lateral direction). The column AD circuits 13 are arranged in the vertical direction (the direction orthogonal to the arrow).
Here, the pixel signals of the imaging areas are corrected using VOB shading (in the row direction), which is obtained by mapping the respective pixel signals of the VOB 807, VOB 808, and VOB 809, and HOB shading (in the column direction), which is obtained by mapping the pixel signals of the HOB 810. Based on the shading of the VOB 807, the pixel signals of the imaging areas 805a and 805b are corrected. Similarly, the pixel signals of the imaging areas 801a and 801b are corrected based on the shading of the VOB 808. Likewise, the pixel signals of the imaging areas 803a and 803b are corrected on the basis of the shading of the VOB 809, and, based on the shading of the HOB 810, the pixel signals of the imaging areas 806a and 806b, the imaging areas 802a and 802b, and the imaging areas 804a and 804b are corrected.
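A sketch of these corrections is shown below (array shapes and function names are assumptions for illustration): a VOB strip yields a per-column dark profile applied in the row direction, and the HOB strip yields a per-row profile applied in the column direction.

```python
import numpy as np

def correct_with_vob(area: np.ndarray, vob: np.ndarray) -> np.ndarray:
    """area: (rows, cols) pixel signals; vob: (ob_rows, cols) shielded
    pixels sharing the area's columns."""
    return area - vob.mean(axis=0)[np.newaxis, :]  # row-direction shading

def correct_with_hob(area: np.ndarray, hob: np.ndarray) -> np.ndarray:
    """hob: (rows, ob_cols) shielded pixels sharing the area's rows."""
    return area - hob.mean(axis=1)[:, np.newaxis]  # column-direction shading

# e.g. correcting a uniform 4x6 area with a 2-row VOB strip
area = np.full((4, 6), 100.0)
vob = np.tile(np.array([2.0, 3.0, 1.0, 2.5, 2.0, 3.5]), (2, 1))
print(correct_with_vob(area, vob))
```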
By disposing the vertical scanning circuit 15 in this manner, a plurality of AF areas are not simultaneously selected and read out.
A user can arbitrarily select any one of the AF areas 901 to 903 as the AF target by operating an AF selection switch (not shown) of the image capturing apparatus 100.
In step S301, the CPU 102 executes the accumulation operation of the focus detection sensor 217 with the accumulation time determined based on the photometric value, including the object luminance information, obtained in step S102.
In steps S302 and S303, the CPU 102 determines which of the AF areas 901 to 903 has been selected by the AF selection switch (not shown). In step S302, the CPU 102 receives the state of the AF selection switch operated by the user, and determines whether or not the AF area 901 is selected. If the AF area 901 is selected, the process proceeds to step S304. On the other hand, if an area other than the AF area 901 is selected, the process proceeds to step S303.
In step S303, the CPU 102 receives the state of the AF selection switch operated by the user, and determines whether or not the AF area 902 is selected. If the AF area 902 is selected, the process proceeds to step S305. On the other hand, if the AF area 902 is not selected, the process proceeds to step S306.
In step S304, the CPU 102 instructs the focus detection sensor 217 to limit the output to signals from the imaging areas 805a and 805b, the imaging areas 806a and 806b, and the VOB 807, and a readout operation (Readout a) is performed.
In step S305, the CPU 102 instructs the focus detection sensor 217 to limit the output to signals from the imaging areas 801a and 801b, the imaging areas 802a and 802b, and the VOB 808, and a readout operation (Readout b) is performed.
In step S306, the CPU 102 instructs the focus detection sensor 217 to limit the output to signals from the imaging areas 803a and 803b, the imaging areas 804a and 804b, and the VOB 809, and a readout operation (Readout c) is performed.
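The dispatch in steps S302 to S306 can be summarized as a table (the data structure is an assumption of this sketch; the area and VOB pairings follow the steps above).

```python
READOUT_PLAN = {
    "AF area 901": (["805a", "805b", "806a", "806b"], "VOB 807"),  # Readout a
    "AF area 902": (["801a", "801b", "802a", "802b"], "VOB 808"),  # Readout b
    "AF area 903": (["803a", "803b", "804a", "804b"], "VOB 809"),  # Readout c
}

areas, ob_strip = READOUT_PLAN["AF area 902"]
print(areas, ob_strip)  # the imaging areas and OB rows read out for Readout b
```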
In step S307, the CPU 102 calculates a defocus amount from the pixel signals of the respective imaging areas obtained in one of steps S304 to S306. Then, one of the defocus amounts obtained for the respective imaging areas is selected. Although there is no particular limitation on the selection method, the defocus amount considered to be more reliable, for example, the one whose pair of image signals shows higher waveform correlation or higher contrast, may be selected. Alternatively, the two defocus amounts may be averaged or weighted-averaged.
In step S308, if the defocus amount calculated in step S307 is within a desired range, for example, within ¼Fδ (F: aperture value of the lens, δ: constant (20 μm)), the CPU 102 determines that the image is in focus. Specifically, if the aperture value F of the lens is 2.0, in the case where the defocus amount is 10 μm or less, it is determined that the image is in focus and the AF process is ended.
On the other hand, if the defocus amount calculated in step S307 is larger than ¼Fδ, the CPU 102 transmits the lens drive amount corresponding to the defocus amount obtained in step S307 to the lens unit 118 in step S309. Then, the CPU 102 returns the process to step S301 and repeats the above-described operation until it is determined that the in-focus state is achieved.
According to the second embodiment as described above, the vertical scanning circuit is disposed in such a direction that a plurality of AF areas are not simultaneously selected and read out. Furthermore, based on the selection information of the AF area, the validity of each of the plurality of imaging areas in the focus detection sensor is determined. Then, by having the vertical scanning circuit limit the readout to the valid imaging areas and outputting signals only from the pixels in those areas, the time required for the AF control can be shortened.
In the second embodiment as well, the vertical scanning circuit is arranged in such a direction that imaging areas having different correlation directions in the focus detection sensor are not simultaneously selected and read out. Therefore, after limiting the AF area, the imaging areas to be read out may be further limited based on the lens information, as described in the first embodiment.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2018-058657, filed on Mar. 26, 2018 which is hereby incorporated by reference herein in its entirety.