The present disclosure relates to an image pickup apparatus capable of autofocus (AF).
Japanese Patent Laid-Open No. 2010-263568 discloses, as an image pickup apparatus configured to perform focus detection by a phase difference detection method using an image sensor such as a CMOS sensor, an image pickup apparatus configured to perform focus detection in a plurality of focus detection directions different from one another. In this image pickup apparatus, one defocus amount to be used for AF is obtained by comparing defocus amounts obtained in the plurality of focus detection directions. In addition, Japanese Patent Laid-Open No. 2016-090798 discloses a method of performing AF by using focus sensitivity corresponding to the image height in a case where different defocus amounts occur depending on the image height.
In a case where one defocus amount to be used for AF is determined by, for example, comparing defocus amounts obtained in a plurality of focus detection directions, as in the image pickup apparatus of Japanese Patent Laid-Open No. 2010-263568, a correct comparison cannot be performed when there is a defocus amount difference corresponding to the image height as in Japanese Patent Laid-Open No. 2016-090798. As a result, one accurate defocus amount is not obtained and AF cannot be performed with high accuracy.
An image pickup apparatus according to one aspect of the present disclosure includes a focus detector configured to perform focus detection in an imaging area to acquire a plurality of first defocus amounts different from one another in at least one of an image height, a focus detection direction, and an angle of an object; and a processor configured to perform focus control. The processor is configured to acquire first information indicating a relation between a defocus amount at a specific position in the imaging area and a defocus amount corresponding to the at least one in an area other than the specific position, acquire a plurality of second defocus amounts by using the plurality of first defocus amounts and the first information, and acquire a third defocus amount to be used for the focus control from among the plurality of second defocus amounts.
Further features of the disclosure will become apparent from the following description of embodiments with reference to the attached drawings.
In the following, the term “unit” may refer to a software context, a hardware context, or a combination of software and hardware contexts. In the software context, the term “unit” refers to a functionality, an application, a software module, a function, a routine, a set of instructions, or a program that can be executed by a programmable processor such as a microprocessor, a central processing unit (CPU), or a specially designed programmable device or controller. A memory contains instructions or programs that, when executed by the CPU, cause the CPU to perform operations corresponding to units or functions. In the hardware context, the term “unit” refers to a hardware element, a circuit, an assembly, a physical structure, a system, a module, or a subsystem. Depending on the specific embodiment, the term “unit” may include mechanical, optical, or electrical components, or any combination of them. The term “unit” may include active (e.g., transistors) or passive (e.g., capacitor) components. The term “unit” may include semiconductor devices having a substrate and other layers of materials having various concentrations of conductivity. It may include a CPU or a programmable processor that can execute a program stored in a memory to perform specified functions. The term “unit” may include logic elements (e.g., AND, OR) implemented by transistor circuits or any other switching circuits. In the combination of software and hardware contexts, the term “unit” or “circuit” refers to any combination of the software and hardware contexts as described above. In addition, the term “element,” “assembly,” “component,” or “device” may also refer to “circuit” with or without integration with packaging materials.
An example of the present disclosure will be described below with reference to the accompanying drawings.
The lens unit 100 includes a first lens group 101, an aperture stop 102, a second lens group 103, a focus lens group (hereinafter simply referred to as focus lens) 104 as a focus element, which constitute an imaging optical system, and a drive-control system. The imaging optical system acquires light from an object and forms an object image.
The first lens group 101 is disposed closest to the object and held to be movable in an optical axis direction in which an optical axis OA extends. The aperture stop 102 performs light quantity adjustment by changing its opening diameter and functions as a shutter for exposure time adjustment during still image pickup. The aperture stop 102 and the second lens group 103 are integrally movable in the optical axis direction and perform zooming by moving in coordination with the first lens group 101. The focus lens 104 is movable in the optical axis direction to perform focusing. Focus control (AF) is performed by controlling the position of the focus lens 104 in accordance with a focus detection result to be described later.
The drive-control system includes a zoom actuator 111, an aperture actuator 112, a focus actuator 113, a zoom drive circuit 114, an aperture drive circuit 115, a focus drive circuit 116, a lens MPU 117, and a lens memory 118. During zooming, the zoom drive circuit 114 drives the zoom actuator 111 to move the first lens group 101 and the second lens group 103 in the optical axis direction. The aperture drive circuit 115 drives the aperture actuator 112 to operate the aperture stop 102, thereby performing aperture operation and shutter operation.
During focusing, the focus drive circuit 116 drives the focus actuator 113 to move the focus lens 104 in the optical axis direction. The focus drive circuit 116 functions as a position detector configured to detect the current position (hereinafter referred to as focus position) of the focus lens 104 through the focus actuator 113.
The lens MPU 117 is a computer configured to execute calculation and processing related to the lens unit 100 and controls the zoom drive circuit 114, the aperture drive circuit 115, and the focus drive circuit 116. The lens MPU 117 is connected to a camera MPU 125 in the camera body 120 to perform communication therebetween through communication terminals of the mount M and communicates commands and data. For example, the lens MPU 117 notifies the camera MPU 125 of lens information in accordance with a request from the camera MPU 125. The lens information includes information such as the focus position, the position in the optical axis direction and the diameter of an exit pupil of the imaging optical system, and the position in the optical axis direction and the diameter of a lens area that restricts a light beam in the exit pupil.
The lens MPU 117 controls the zoom drive circuit 114, the aperture drive circuit 115, and the focus drive circuit 116 in accordance with a request from the camera MPU 125. The lens memory 118 stores optical information necessary for AF. The camera MPU 125 controls operation of the lens unit 100 by executing computer programs stored in a built-in nonvolatile memory and the lens memory 118.
The camera body 120 includes an optical lowpass filter 121, an image sensor 122, an image processing circuit 124, and the drive-control system. The optical lowpass filter 121 is provided to reduce false color and moire.
The image sensor 122 is constituted by a CMOS sensor and its peripheral circuits and photoelectrically converts the object image (optical image) formed through the imaging optical system and outputs an image pickup signal and a pair of focus detection signals (two-image signal). A plurality of image pickup pixels, m pixels in the lateral direction and n pixels in the longitudinal direction (m and n are integers equal to or larger than two), are disposed in the image sensor 122. Each image pickup pixel includes a pair of focus detection pixels as described later and has a pupil dividing function capable of focus detection by a phase difference detection method.
The drive-control system includes an image-sensor drive circuit 123, the image processing circuit 124, the camera MPU 125, a display unit 126, an operation switch (SW) 127, a memory 128, a phase-difference AF unit 129, a flicker detector 130, an AE unit 131, and a white-balance (WB) adjusting unit 132. The image-sensor drive circuit 123 controls electric charge accumulation and signal readout in the image sensor 122, A/D converts the image pickup signal and the pair of focus detection signals output from the image sensor 122, and outputs the converted signals to the image processing circuit 124 and the camera MPU 125. The image processing circuit 124 generates image data by performing image processing such as γ conversion, color interpolation processing, and compression encoding processing on the digital image pickup signal from the image-sensor drive circuit 123.
The camera MPU 125 as a control unit is a computer configured to execute calculation and processing related to the camera body 120 and controls the image-sensor drive circuit 123, the image processing circuit 124, the display unit 126, the phase-difference AF unit 129, the flicker detector 130, the AE unit 131, and the WB adjusting unit 132. The camera MPU 125 is connected to the lens MPU 117 to perform communication therebetween through the communication terminals of the mount M and communicates commands and data with the lens MPU 117. For example, the camera MPU 125 requests the lens information and the optical information from the lens MPU 117 and also requests driving of the lenses 101 and 104 and the aperture stop 102. The camera MPU 125 receives the lens information and the optical information transmitted from the lens MPU 117.
The camera MPU 125 includes a ROM 125a storing various computer programs, a RAM 125b storing variables, and an EEPROM 125c storing various parameters. The camera MPU 125 executes various kinds of processing including AF processing to be described later in accordance with the computer programs stored in the ROM 125a. The camera MPU 125 generates two-image data from the pair of digital focus detection signals from the image-sensor drive circuit 123 and outputs the generated two-image data to the phase-difference AF unit 129.
The display unit 126 is constituted by an LCD or the like and displays information on an image pickup mode, a preview image before image pickup, a confirmation image after image pickup, a focus state, and the like. The operation SW 127 includes a power switch, a release (image pickup instruction) switch, a zoom switch, an image pickup mode selection switch, and the like. The memory 128 is a flash memory detachably attached to the camera body 120 and records record images obtained by image pickup.
The phase-difference AF unit 129 as a focus detector performs focus detection by using the two-image data generated by the camera MPU 125. The image sensor 122 photoelectrically converts a pair of optical images formed by light beams having passed through a pair of pupil areas different from each other in the exit pupil of the imaging optical system and outputs the pair of focus detection signals. The phase-difference AF unit 129 performs correlation calculation on the two-image data generated from the pair of focus detection signals by the camera MPU 125 to calculate an image shift amount that is the phase difference therebetween, and calculates (acquires) a defocus amount as focal point information from the image shift amount. The camera MPU 125 calculates a drive amount of the focus lens 104 based on the defocus amount calculated by the phase-difference AF unit 129 and transmits a focus control command including the drive amount to the lens MPU 117.
In this manner, on-sensor phase difference AF using output from the image sensor 122 is performed without using an AF sensor dedicated for focus detection in the present example. The phase-difference AF unit 129 in the present example includes an acquisition unit 129a configured to acquire the two-image data and a calculation unit 129b configured to calculate the defocus amount. At least one of the acquisition unit 129a and the calculation unit 129b may be provided in the camera MPU 125.
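As an illustrative sketch only (not the apparatus's actual implementation), the correlation calculation that yields the image shift amount can be modeled as a search for the shift minimizing a sum of absolute differences (SAD) between the two-image data. The function name, the SAD criterion, and the search range below are assumptions.

```python
def image_shift_amount(signal_a, signal_b, max_shift=10):
    """Estimate the image shift amount (phase difference, in pixels)
    between a pair of focus detection signals by minimizing the mean
    sum of absolute differences (SAD) over candidate shifts.
    Illustrative model only; not the disclosed implementation."""
    n = len(signal_a)
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(n, n - s)  # overlap of a[i] and b[i + s]
        if hi <= lo:
            continue
        sad = sum(abs(signal_a[i] - signal_b[i + s]) for i in range(lo, hi))
        sad /= (hi - lo)  # normalize by overlap length
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift
```

In practice, sub-pixel interpolation of the correlation minimum would be applied before converting the shift to a defocus amount.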
The flicker detector 130 detects flicker from flicker detection image data obtained from the image processing circuit 124. The camera MPU 125 performs control to adjust the exposure amount so that influence of the detected flicker decreases.
The AE unit 131 performs exposure control (AE) by performing exposure metering using AE image data obtained from the image processing circuit 124. Specifically, the AE unit 131 acquires luminance information on the AE image data and calculates image pickup conditions including aperture value, shutter speed (shutter time), and ISO sensitivity based on the difference between an exposure amount obtained from the luminance information and an exposure amount set in advance. Then, the aperture value, the shutter speed, and the ISO sensitivity are controlled to the calculated values, thereby performing AE.
The WB adjusting unit 132 calculates WB of WB adjustment image data obtained from the image processing circuit 124 and adjusts weights of RGB colors in accordance with the difference between the calculated WB and predetermined appropriate WB, thereby performing WB adjustment.
The camera MPU 125 may perform processing of detecting an object such as a human face in image data obtained from the image processing circuit 124. The camera MPU 125 can select, in accordance with the position and size of the detected object, an image height range for performing phase difference AF, AE, and WB adjustment.
The photoelectrical conversion units 301 and 302 may be p-i-n structure photodiodes in which an intrinsic layer is sandwiched between a p-type layer and an n-type layer or may be p-n junction photodiodes in which the intrinsic layer is omitted. A color filter 306 is formed between the micro lens 305 and the photoelectrical conversion units 301 and 302. The spectral transmittance of the color filter may be varied for each focus detection pixel, or the color filter may be omitted.
Two light beams incident on the pixel 200Ga through a pair of pupil areas are condensed through the micro lens 305, spectrally separated through the color filter 306, and then received by the photoelectrical conversion units 301 and 302. In each photoelectrical conversion unit, electron-hole pairs are generated in accordance with the quantity of the received light and separated by a depletion layer, and then the electrons as negative electric charge are accumulated in the n-type layer. The holes are discharged out of the image sensor 122 through the p-type layer connected to a non-illustrated constant-voltage source. The electrons accumulated in the n-type layer of each photoelectrical conversion unit are forwarded to a capacitance unit (FD) through a transfer gate and converted into a voltage signal.
As illustrated in
As illustrated in
In the present example, the first and second focus detection pixels are provided at each image pickup pixel of the image sensor 122, but two image pickup pixels may be used as the first and second focus detection pixels, or the first and second focus detection pixels may be provided at some image pickup pixels.
In
In the short-focus state, light beams having passed through the first pupil area 501 and the second pupil area 502, respectively, among light beams from the object 802 are condensed once and then broadened to widths Γ1 and Γ2 centered at barycenter positions G1 and G2 of the light beams, and form blurred optical images on the imaging plane 800. These blurred images are received by the first focus detection pixel 201 and the second focus detection pixel 202 of each image pickup pixel on the imaging plane 800, and accordingly, a first focus detection signal and a second focus detection signal as the pair of focus detection signals are generated. The first and second focus detection signals are recorded as blurred images in which the object 802 is broadened to the blurring widths Γ1 and Γ2, respectively, at the barycenter positions G1 and G2 on the imaging plane 800. The blurring widths Γ1 and Γ2 increase substantially in proportion to the magnitude |d| of the defocus amount d. Similarly, the magnitude |p| of an image shift amount p between the first and second focus detection signals (=difference G1−G2 between the barycenter positions of the light beams) increases substantially in proportion to the magnitude |d| of the defocus amount d. The same applies in the over-focus state (d>0), although the image shift direction of the first and second focus detection signals is opposite to that in the short-focus state.
In the present example, the difference between the barycenters of the incident angle distributions in the first pupil area 501 and the second pupil area 502 is referred to as a base length. The relation between the defocus amount d and the image shift amount p at the imaging plane 800 is substantially determined by the ratio of the base length to the sensor pupil distance. The magnitude of the image shift amount between the first and second focus detection signals increases as the magnitude of the defocus amount d increases, and thus, with this relation, the phase-difference AF unit 129 converts the image shift amount into the defocus amount by using a conversion coefficient calculated based on the base length.
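The conversion from image shift amount to defocus amount can be sketched as follows. The similar-triangles approximation p ≈ d × (base length / pupil distance), and therefore a conversion coefficient equal to the inverse ratio, is an assumption made for illustration, as is the function name.

```python
def defocus_from_shift(image_shift, base_length, pupil_distance):
    """Convert an image shift amount into a defocus amount using a
    conversion coefficient derived from the base length and the sensor
    pupil distance. Assumes the similar-triangles approximation
    p ~ d * (base_length / pupil_distance); illustrative sketch only."""
    k = pupil_distance / base_length  # conversion coefficient
    return k * image_shift
```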
In the following description, calculation of the defocus amount by using the pair of focus detection signals from a focus detection pixel with pupil division in the horizontal direction (lateral direction) like the pixel 200Ga is referred to as horizontal focus detection (first detection). In addition, calculation of the defocus amount by using the pair of focus detection signals from a focus detection pixel with pupil division in the vertical direction (longitudinal direction) like the pixel 200Gb is referred to as vertical focus detection (second detection).
Focus sensitivity is an indicator of the relation between a unit movement amount of the focus lens 104 and a change amount of the image position that is the imaging position of an optical image, and in the present example, indicates the ratio of the change amount of the image position to the unit movement amount of the focus lens 104. For example, the focus sensitivity is one in a case where the change amount of the image position is 1 mm when the focus lens 104 is moved by the unit movement amount of 1 mm, and the focus sensitivity is two in a case where the change amount of the image position is 2 mm. However, the focus sensitivity may be another value, for example, the reciprocal of the above-described ratio.
The focus sensitivity may be the change amount of the image position for a particular unit amount in a case where the focus lens 104 is moved based on a monotonically increasing function. For example, in a case where the focus lens 104 is driven along the shape of a cam provided at a cam ring in cooperation with rotation of the cam ring, the focus sensitivity may be the change amount of the image position for a unit rotation angle of the cam ring, which is equivalent to the unit movement amount of the focus lens.
Typically, a drive amount X of the focus lens 104 is calculated by Expression (1) below.

X = d/S  (1)

where d represents the defocus amount and S represents the focus sensitivity.
A pulse number P of focus drive pulses supplied to the focus actuator 113 by the focus drive circuit 116 in a case where the focus actuator 113 is a pulse-driven actuator such as a stepping motor is obtained by Expression (2) below.

P = X/m = d/(S × m)  (2)

In Expression (2), m represents the movement amount of the focus lens 104 per focus drive pulse.
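Expressions (1) and (2) can be sketched together as follows. The helper name is hypothetical, and rounding P to whole pulses is an assumption, since a pulse-driven actuator can only be driven by integer pulse counts.

```python
def focus_drive(defocus, sensitivity, move_per_pulse):
    """Compute the focus lens drive amount X = d / S (Expression (1))
    and the focus drive pulse number P = X / m (Expression (2)).
    Rounding to whole pulses is an assumption for illustration."""
    x = defocus / sensitivity      # Expression (1): X = d / S
    p = round(x / move_per_pulse)  # Expression (2): P = X / m
    return x, p
```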
The phase-difference AF unit 129, the camera MPU 125, and the lens MPU 117 perform focus detection by the phase difference detection method as described below. The camera MPU 125 acquires the current position of the focus lens 104, the focus sensitivity S, and the movement amount m of the focus lens 104 per focus drive pulse from the lens MPU 117 in advance.
As described above, the phase-difference AF unit 129 calculates the image shift amount of the pair of focus detection signals (two-image data) acquired from the image sensor 122 and calculates (detects) the defocus amount from the image shift amount. The camera MPU 125 calculates the pulse number P of focus drive pulses by using Expression (2) described above and transmits a focus drive command including the pulse number P to the lens MPU 117.
The lens MPU 117 controls the focus drive circuit 116 so that focus drive pulses in the received pulse number P are supplied to the focus actuator 113. Accordingly, the focus lens 104 moves by the drive amount X (=d/S) and the in-focus state of the imaging optical system is obtained.
However, the lens unit 100 of the present example includes an imaging optical system with a defocus amount d(h) that changes in accordance with an image height h. Thus, the camera MPU 125 calculates the drive amount X of the focus lens 104 by using the defocus amount d(h) corresponding to the image height h and the focus sensitivity S(h) corresponding to the image height h. Specifically, the drive amount X of the focus lens 104 is calculated by Expression (3) below, and the pulse number P of focus drive pulses is calculated by Expression (4) below.

X = d(h)/S(h)  (3)

P = d(h)/(S(h) × m)  (4)
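Since the disclosure allows the per-image-height focus sensitivity to be supplied as table data or as function coefficients, one way to evaluate S(h) between table entries is linear interpolation. The (image_height, value) pair format and the function names below are assumptions.

```python
import bisect

def interp_table(table, h):
    """Linearly interpolate a per-image-height table, e.g. the focus
    sensitivity S(h) or the defocus amount ratio. `table` is a sorted
    list of (image_height, value) pairs; this data format is an
    assumption made for illustration."""
    heights = [t[0] for t in table]
    i = bisect.bisect_left(heights, h)
    if i == 0:
        return table[0][1]   # clamp below the first entry
    if i == len(table):
        return table[-1][1]  # clamp above the last entry
    (h0, v0), (h1, v1) = table[i - 1], table[i]
    return v0 + (v1 - v0) * (h - h0) / (h1 - h0)

def drive_amount_at_height(defocus_h, sensitivity_table, h):
    """Expression (3): X = d(h) / S(h), with S(h) from the table."""
    return defocus_h / interp_table(sensitivity_table, h)
```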
In a case where the horizontal focus detection is performed at the image height other than the center in the imaging area, the defocus amount d(h) is calculated by using the horizontal defocus amount ratio corresponding to the image height in
As understood from these diagrams, the defocus amount ratio of the imaging optical system increases as the position moves from the center toward the periphery. Moreover, the defocus amount ratio is different between focus detection directions (the lateral direction and the longitudinal direction).
Irrespective of whether the horizontal focus detection or the vertical focus detection is performed, the defocus amount ratio for the same image height may be switched in accordance with the angle (edge angle) of an object detected in image data obtained from the image processing circuit 124. With this scheme, defocus amount difference corresponding to the image height can be corrected with higher accuracy.
In
The pixel coordinates (x, y) are orthogonal coordinates, where the horizontal right direction and the vertical upward direction are defined as positive directions, respectively. H(x,y) represents the horizontal contrast intensity at a specific frequency at the coordinates (x, y) and is given by Expression (6) below. P(α,β) represents the luminance value at pixel coordinates (α, β).
Similarly, V(x, y) represents the vertical contrast intensity at the coordinates (x, y) and is given by Expression (7) below.
In this example, the demodulation filter used in calculating the contrast intensities H(x, y) and V(x, y) is (1, 0, −1). However, the filter may be replaced with any filter with which frequency components of the object can be detected.
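A minimal sketch of the contrast intensity calculation with the (1, 0, −1) demodulation filter follows. Because Expressions (6) and (7) are not reproduced here, taking a single filter tap rather than a windowed sum is an assumption, and deriving an edge angle via atan2 is only one possible definition.

```python
import math

def contrast_intensities(P, x, y):
    """Horizontal and vertical contrast intensities at pixel (x, y)
    using the (1, 0, -1) demodulation filter. P is a 2-D luminance
    array indexed as P[y][x]; single-tap form is an assumption."""
    h = abs(P[y][x - 1] - P[y][x + 1])  # horizontal (1, 0, -1)
    v = abs(P[y - 1][x] - P[y + 1][x])  # vertical (1, 0, -1)
    return h, v

def edge_angle_deg(h, v):
    """One possible definition of the object (edge) angle from the two
    contrast intensities; purely illustrative. A vertical luminance
    edge (strong horizontal contrast) maps to 90 degrees."""
    return math.degrees(math.atan2(h, v))
```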
The case of θ=90° corresponds to the defocus amount ratio in the horizontal focus detection illustrated in
As described above, the defocus amount ratio indicates the relation between the defocus amount at the center as a specific position on the imaging area and the defocus amount corresponding to at least one of the image height, the focus detection direction, and the object angle in a peripheral area other than the specific position. The specific position on the imaging area does not necessarily need to be the center but only needs to be a position that can be easily used as a reference of the defocus amount ratio.
Problem in Determination of One Focus Detection Result from Among a Plurality of Focus Detection Results
In the present example, the horizontal focus detection and the vertical focus detection are performed and a focus detection result used to drive the focus lens 104 is the result of one of the horizontal focus detection and the vertical focus detection. Furthermore, in a case where a plurality of defocus amounts at different image heights are obtained, the defocus amount obtained at a particular image height is used to drive the focus lens 104 in AF.
As described above with reference to
In
In this manner, by determining the use defocus amount using a histogram, effects such as preventing near-far conflicts and reducing the influence of variance among the defocus amounts of the individual focus detection areas on the definitive defocus amount can be expected.
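The histogram-based determination of the use defocus amount can be sketched as follows. The binning rule, the bin width, and averaging within the most populated bin are assumptions; the disclosure does not specify these details.

```python
from collections import Counter

def use_defocus_from_histogram(defocus_amounts, bin_width=0.1):
    """Determine the use defocus amount as the mean of the most
    populated histogram bin of per-area defocus amounts, which
    suppresses near-far conflicts and per-area variance.
    Binning details are illustrative assumptions."""
    bins = Counter(round(d / bin_width) for d in defocus_amounts)
    mode_bin = bins.most_common(1)[0][0]
    members = [d for d in defocus_amounts
               if round(d / bin_width) == mode_bin]
    return sum(members) / len(members)
```

With three areas clustered near 0.5 mm and one far outlier at 2.0 mm, the outlier is ignored and the cluster determines the result.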
However, in a case where such a histogram is used, the above-described defocus amount ratio corresponding to the image height becomes a problem.
For example, the focus detection area (1, 4) illustrated in
The defocus amount at each image height is calculated as a value multiplied by the defocus amount ratio. As a result, the original histogram illustrated in
Thus, in the present example, the use defocus amount that is appropriate for AF is determined from among a plurality of defocus amounts with different defocus amount ratios.
When image pickup preparation is instructed by a user operation on the release switch of the operation switch 127, the camera MPU 125 starts AF at step S100.
Subsequently at step S101, the camera MPU 125 acquires, from the lens MPU 117, information (first information) of defocus amount ratios different from one another in at least one of the image height, the focus detection direction, and the object angle, which is stored in the lens memory 118. The camera MPU 125 also acquires information (second information) of the focus sensitivity for each image height, which is stored in the lens memory 118. In this case, the information on defocus amount ratio and focus sensitivity may be acquired in a table data format, or information on coefficients of functions for calculating the defocus amount ratio and the focus sensitivity may be acquired. In other words, information on the defocus amount ratio and the focus sensitivity may be acquired in any manner.
Information on the defocus amount ratio only in some quadrants as illustrated in
Subsequently at step S102, the camera MPU 125 causes the phase-difference AF unit 129 to acquire defocus amounts (first defocus amounts) in one or a plurality of focus detection areas.
Subsequently at step S103, the camera MPU 125 determines whether a plurality of defocus amounts different from one another in at least one of the image height, the focus detection direction, and the object angle are to be used in the current AF, and performs processing at step S104 if so. Otherwise, the camera MPU 125 performs processing at step S105. The camera MPU 125 proceeds to step S104 in a case where the use defocus amount is determined by using the above-described defocus amount histogram, the average value of a plurality of defocus amounts is determined as the use defocus amount, or a defocus amount value at the closest distance or at the infinity distance is determined as the use defocus amount.
At step S104, the camera MPU 125 normalizes the plurality of defocus amounts acquired at step S102. Specifically, the plurality of defocus amounts acquired at step S102 are divided by the corresponding defocus amount ratio of the image height, the focus detection direction, or the object angle. Accordingly, each defocus amount becomes a defocus amount (second defocus amount) normalized to a value corresponding to a defocus amount at the center where the defocus amount ratio is one, and the plurality of normalized defocus amounts can be compared irrespective of the image height, the focus detection direction, and the object angle.
As a result, a histogram illustrated in
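The normalization at step S104 can be sketched as follows; the function name is hypothetical, and the division by the per-area defocus amount ratio is as described above.

```python
def normalize_defocus(first_defocus, defocus_ratios):
    """Step S104: divide each first defocus amount by its defocus
    amount ratio (per image height, focus detection direction, or
    object angle) so that every value is referenced to the center of
    the imaging area, where the ratio is one."""
    return [d / r for d, r in zip(first_defocus, defocus_ratios)]
```

After this step the second defocus amounts are directly comparable, and one of them can be selected as the use defocus amount.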
Subsequently at step S105, the camera MPU 125 calculates (acquires) a drive amount of the focus lens 104 by using the use defocus amount determined at step S104 or one defocus amount acquired at step S102. Specifically, the pulse number P of focus drive pulses is calculated by using Expressions (3) and (4) described above. In a case where step S104 has been passed, the defocus amount d(h) and the focus sensitivity S(h) at the center (h=0) may be used in Expressions (3) and (4). In a case where step S104 has not been passed, the defocus amount d(h) and the focus sensitivity S(h) in Expressions (3) and (4) may be values corresponding to the image height at which the defocus amount is acquired.
Subsequently at step S106, the camera MPU 125 transmits a focus drive command including the drive amount calculated at step S105 to the lens MPU 117. The lens MPU 117 controls the focus drive circuit 116 based on the received focus drive command, thereby driving the focus lens 104.
Thereafter, the camera MPU 125 ends the present processing at step S107.
Through the above-described AF processing, one accurate defocus amount to be used for AF can be obtained from among a plurality of defocus amounts different from one another in at least one of the image height, the focus detection direction, and the object angle (for example, the image height and the focus detection direction, or the image height and the object angle).
Although the present example is described above for a case where focus control to move a focus lens as the focus element in the optical axis direction is performed, focus control to move an image sensor as the focus element in the optical axis direction may be performed.
While the disclosure has been described with reference to embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. According to the present disclosure, it is possible to obtain one accurate defocus amount from among a plurality of defocus amounts different from one another in at least one of an image height, a focus detection direction, and an object angle.
This application claims priority to Japanese Patent Application No. 2023-205176, which was filed on Dec. 5, 2023, and which is hereby incorporated by reference herein in its entirety.