The present invention relates to an image sensor in which pixel units each having a plurality of photoelectric conversion units are two-dimensionally arranged, and an image capturing apparatus equipped with the image sensor.
As one of the focus detection methods performed in an image capturing apparatus, a so-called on-imaging plane phase difference method is known, in which a pair of pupil division signals is obtained using focus detection pixels formed in an image sensor and phase difference focus detection is performed using the pair of pupil division signals.
As an example of such an on-imaging plane phase difference method, an image capturing apparatus using a two-dimensional image sensor in which one microlens and a plurality of divided photoelectric conversion units are formed for each pixel is disclosed in Japanese Patent Laid-Open No. 58-24105. The plurality of photoelectric conversion units are configured to receive light transmitted through different regions of the exit pupil of an imaging lens via one microlens to realize pupil division. By calculating the image shift amount using the phase difference signals, which are the signals of the respective photoelectric conversion units, phase difference focus detection can be performed. Further, an image can be acquired from an image signal obtained by adding the signals from the individual photoelectric conversion units for each pixel.
In such an image sensor, in a configuration in which a plurality of photoelectric conversion units are arranged in the horizontal direction within a pixel and thus the pupil division direction is the horizontal direction, in a case where a subject has horizontal stripes, for example, parallax is less likely to appear, which may cause a decrease in focus detection accuracy.
To address this problem, Japanese Patent Laid-Open No. 2011-53519 discloses a technique for improving focus detection accuracy by arranging the pairs of photoelectric conversion units under the respective microlenses of the focus detection pixels in two directions, thereby providing two pupil division directions.
On the other hand, in the technical field of CMOS image sensors, development is in progress on backside illumination technology, in which light is received on the side opposite to the side on which the pixel circuit is formed, and on technology that laminates semiconductor substrates to form a laminated structure in a backside illumination type CMOS image sensor. Japanese Patent Laid-Open No. 2021-68758 discloses an example in which capacitors for accumulating pixel signals are provided, using this laminated structure, on a semiconductor substrate different from the semiconductor substrate on which the pixel circuit is formed, thereby providing a global shutter function.
Further, Japanese Patent Laid-Open No. 2021-68758 discloses phase difference detection pixels whose division directions are vertical and phase difference detection pixels whose division directions are horizontal.
However, Japanese Patent Laid-Open No. 2021-68758 does not disclose arranging both the phase difference detection pixels whose division direction is vertical and the phase difference detection pixels whose division direction is horizontal in the same solid-state imaging device. With a single division direction, focus detection accuracy will be degraded for an object whose luminance fluctuates only in the direction perpendicular to that division direction. Further, even if there are two pupil division directions as in Japanese Patent Laid-Open No. 2011-53519, in the pixel configuration of Japanese Patent Laid-Open No. 2021-68758, the isolation region in each pixel is large, so a sufficient light receiving area cannot be secured, and if the object is dark, the accuracy of focus detection may decrease.
The present invention has been made in consideration of the above situation, and further enhances focus detection accuracy in different pupil division directions.
According to the present invention, provided is an image sensor including a plurality of pixels, wherein each pixel comprises: a microlens; a plurality of photoelectric conversion units that convert incident light into charge and accumulate the charge; a plurality of holding units that hold signals corresponding to the charge accumulated in the plurality of photoelectric conversion units; a controller that controls timings of accumulating the charge converted by the plurality of photoelectric conversion units and timings of causing the plurality of holding units to hold the signals corresponding to the charge accumulated in the plurality of photoelectric conversion units; and an output unit that outputs the signals held in the plurality of holding units in units of one row, wherein the plurality of pixels include a plurality of first pixels having the plurality of photoelectric conversion units arranged in a first direction and a plurality of second pixels having the plurality of photoelectric conversion units arranged in a second direction which is perpendicular to the first direction.
According to the present invention, provided is an image capturing apparatus comprising: the image sensor including a plurality of pixels, wherein each pixel comprises: a microlens; a plurality of photoelectric conversion units that convert incident light into charge and accumulate the charge; a plurality of holding units that hold signals corresponding to the charge accumulated in the plurality of photoelectric conversion units; a controller that controls timings of accumulating the charge converted by the plurality of photoelectric conversion units and timings of causing the plurality of holding units to hold the signals corresponding to the charge accumulated in the plurality of photoelectric conversion units; and an output unit that outputs the signals held in the plurality of holding units in units of one row; and a focus detection unit that performs phase difference focus detection based on the signals output from the plurality of holding units, wherein the plurality of pixels include a plurality of first pixels having the plurality of photoelectric conversion units arranged in a first direction and a plurality of second pixels having the plurality of photoelectric conversion units arranged in a second direction which is perpendicular to the first direction.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the description, serve to explain the principles of the invention.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention, and limitation is not made to an invention that requires a combination of all features described in the embodiments. Two or more of the multiple features described in the embodiments may be combined as appropriate. Furthermore, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
A plurality of pixels 105 are arranged in a matrix in the pixel array portion 101. By inputting the output of the vertical selection circuit 102 to the pixels 105 via pixel actuation wirings 107, the pixel signals of the pixels 105 in the row selected by the vertical selection circuit 102 are read out to the column circuit 103 via output signal lines 106. One output signal line 106 may be provided for each pixel column or for a plurality of pixel columns, or a plurality of output signal lines may be provided for each pixel column. The column circuit 103 receives the signals read out in parallel via the plurality of output signal lines 106, performs processing such as signal amplification, noise reduction, and A/D conversion, and holds the processed signals. The horizontal selection circuit 104 sequentially, randomly, or simultaneously selects the signals held in the column circuit 103, so that the selected signals are output to the outside of the image sensor 100 via a horizontal output line and an output unit (both not shown).
By sequentially performing the operation of outputting the pixel signals of the row selected by the vertical selection circuit 102 to the outside of the image sensor 100 while changing the row selected by the vertical selection circuit 102, a two-dimensional image signal or focus detection signals can be read out.
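For illustration only, the row-by-row readout flow described above may be modeled by the following sketch written in Python; the array contents, the rounding used in place of A/D conversion, and the function name are assumptions introduced here and are not part of the present disclosure.

    import numpy as np

    def read_out_frame(pixel_array: np.ndarray) -> np.ndarray:
        """Model of the readout flow, one row at a time (hypothetical values)."""
        rows, cols = pixel_array.shape
        frame = np.empty((rows, cols))
        for row in range(rows):                   # row selected by the vertical selection circuit 102
            column_samples = pixel_array[row, :]  # read out in parallel via the output signal lines 106
            digitized = np.round(column_samples)  # column circuit 103: amplification, noise reduction, A/D conversion
            frame[row, :] = digitized             # output via the horizontal selection circuit 104
        return frame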
[Circuit Configuration of Pixel]
Of the pixels 105 included in the range of 2×2 pixels, reference numeral 301 indicates the portion included in the PDIC-side pixel region 203 and reference numeral 302 indicates the portion included in the MEMIC-side pixel region 204. Moreover, each set of a PDIC-side pixel 311 and a MEMIC-side pixel 321, a PDIC-side pixel 312 and a MEMIC-side pixel 322, a PDIC-side pixel 313 and a MEMIC-side pixel 323, and a PDIC-side pixel 314 and a MEMIC-side pixel 324 constitutes one pixel 105.
Next, the configuration of the pixel 105 will be described by taking the PDIC-side pixel 311 and the MEMIC-side pixel 321 as representative examples. The PDIC-side pixels 312 to 314 and the MEMIC-side pixels 322 to 324 also have the same circuit configuration as that of the PDIC-side pixel 311 and the MEMIC-side pixel 321.
The PDIC-side pixel 311 has a photodiode (PDA) 331 and a photodiode (PDB) 332, which are two photoelectric conversion units. The signal charge photoelectrically converted by the PDA 331 according to the amount of incident light and accumulated is transferred via a transfer transistor (TXA) 333 to a charge-voltage converter (FD) 335 for conversion into voltage. Further, the signal charge photoelectrically converted and accumulated by the PDB 332 is transferred to the FD 335 via the transfer transistor (TXB) 334. When the reset transistor (RES) 336 is turned on, the FD 335 is reset to the voltage of a constant voltage source VDD. Also, by turning on the RES 336 and the TXA 333 and TXB 334 at the same time, the PDA 331 and PDB 332 can be reset.
When a selection switch (SEL) 337 that selects the PDIC-side pixel 311 is turned ON, an amplification transistor (SF) 338 converts the signal charge accumulated in the FD 335 into voltage, and outputs the converted signal voltage from the PDIC-side pixel 311 to the MEMIC-side pixel 321. The gates of the TXA 333, TXB 334, RES 336 and SEL 337 are connected to corresponding pixel actuation wirings 107 and controlled by the vertical selection circuit 102.
In the following description of this embodiment, the signal charge accumulated in the photoelectric conversion unit is assumed to be electrons, and the photoelectric conversion unit is formed of an N-type semiconductor and separated by a P-type semiconductor. Alternatively, the signal charge to be accumulated may be holes, and the photoelectric conversion unit may be formed of a P-type semiconductor and separated by an N-type semiconductor.
In the MEMIC-side pixel 321, a constant current source (CS) 361 supplies a constant current when outputting a signal from the SF 338. The MEMIC-side pixel 321 also has a signal holding capacitor (MEMN) 341, a signal holding capacitor (MEMA) 342, and a signal holding capacitor (MEMB) 343 for holding the output signal voltages of the PDIC-side pixel 311. The MEMIC-side pixel 321 is further provided with a selection switch (GSN) 344, a selection switch (GSA) 345, and a selection switch (GSB) 346 for selecting the MEMN 341, MEMA 342, and MEMB 343, respectively.
Then, when a signal holding capacitor selection transistor (MSELN) 350 is turned on, the voltage signal of the MEMN 341 is output to an output signal line 353 via an amplification transistor (MSFN) 347. Similarly, when a signal holding capacitor selection transistor (MSELA) 351 is turned on, the voltage signal of the MEMA 342 is output to an output signal line 354 via an amplification transistor (MSFA) 348. Further, when a signal holding capacitor selection transistor (MSELB) 352 is turned on, the voltage signal of the MEMB 343 is output to an output signal line 355 via an amplification transistor (MSFB) 349. Note that the output signal lines 353 to 355 correspond to the output signal line 106 shown in
Further, outside the MEMIC-side pixel 321, a reset transistor (MRES) 362 for resetting the MEMN 341, MEMA 342, and MEMB 343 of the MEMIC-side pixels 321 to 324 is provided.
[Readout Operation of Pixel Signal]
The readout operation of pixel signals in this embodiment includes a signal readout operation from the PDIC-side pixels to the MEMIC-side pixels performed simultaneously in the entire pixel array portion 101, and a signal readout operation from the MEMIC-side pixels in the selected row to the column circuit 103 which is sequentially performed while changing the row selected by the vertical selection circuit 102.
Signal Readout Operation from PDIC-Side Pixels to MEMIC-Side Pixels
The reset operation of the PDA 331 and PDB 332 is performed by sequentially resetting the PDIC-side pixel 311 (t401 to t402), the PDIC-side pixel 312 (t403 to t404), the PDIC-side pixel 313 (t405 to t406), and the PDIC-side pixel 314 (t407 to t408), so that the time differences between the PDIC-side pixel 311, PDIC-side pixel 312, PDIC-side pixel 313, and PDIC-side pixel 314 in the reset operation are the same as those in the signal readout operation from the PDIC 201 side to the MEMIC 202 side, which will be described later. In the reset operation of each PDIC-side pixel, the RES 336, TXA 333, and TXB 334 are turned on and off at the same time to discharge the charges in the PDA 331 and PDB 332.
The period from the completion of the reset operation of the PDA 331 and PDB 332 to the start of the readout operation from the PDIC-side pixels to the MEMIC-side pixels is the exposure period.
The signal readout period from the PDIC-side pixels to the MEMIC-side pixels consists of a signal readout period of the PDIC-side pixel 311 (t411 to t421), a signal readout period of the PDIC-side pixel 312 (t431 to t441), a signal readout period of the PDIC-side pixel 313 (t451 to t461), and a signal readout period of the PDIC-side pixel 314 (t471 to t481), which are successively performed. Since the signal readout operation is the same for each PDIC-side pixel, the operation during the signal readout period of the PDIC-side pixel 311 will be described below.
First, the SEL 337 is turned on (t411), the SF 338 and CS 361 are connected to operate the SF 338 as a source follower, so that the node of the HB 205 becomes a voltage corresponding to the voltage of the FD 335. Subsequently, the RES 336 and GSN 344 are turned on to reset the FD 335, and the MEMN 341 and HB 205 are connected (t412). After that, the RES 336 is turned off (t413), and the GSN 344 is turned off (t414) after the potential of the MEMN 341 is settled. Thereby, the MEMN 341 holds a voltage (FD reset voltage) corresponding to the voltage of the FD 335 before the charges accumulated in the PDA 331 and PDB 332 are transferred.
Subsequently, the TXA 333 and GSA 345 are turned on, and when the charge accumulated in the PDA 331 during the accumulation period is transferred to the FD 335 (t415), the voltage across the FD 335 drops by an amount corresponding to the transferred charge. After that, the TXA 333 is turned off (t416), and the GSA 345 is turned off (t417) after the potential of the MEMA 342 is settled. As a result, the MEMA 342 holds a voltage lower than the reset voltage of the FD 335 by the amount corresponding to the charge accumulated in the PDA 331 during the accumulation period.
Subsequently, the TXB 334 and GSB 346 are turned on, and when the charge accumulated in the PDB 332 during the accumulation period is transferred to the FD 335 (t418), the voltage across the FD 335 drops by an amount corresponding to the transferred charge. After that, the TXB 334 is turned off (t419), and the GSB 346 is turned off (t420) after the potential of the MEMB 343 is settled. As a result, the MEMB 343 holds a voltage lower than the reset voltage of the FD 335 by the amount corresponding to the charge accumulated in the PDA 331 and PDB 332 during the accumulation period.
After that, by turning off the SEL 337 (t421), the readout operation of the PDIC-side pixel 311 is completed.
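As an illustration of the sampling sequence described above, the following sketch (hypothetical names and values, written in Python) computes the levels held in the MEMN 341, MEMA 342, and MEMB 343 for one PDIC-side pixel; the conversion gain parameter is an assumption introduced here.

    def sample_to_memory(fd_reset_voltage: float,
                         pda_charge: float,
                         pdb_charge: float,
                         conversion_gain: float = 1.0) -> dict:
        """Levels held in the signal holding capacitors of the MEMIC-side pixel."""
        mem_n = fd_reset_voltage                                 # t412 to t414: FD reset level
        mem_a = fd_reset_voltage - conversion_gain * pda_charge  # t415 to t417: after transfer from the PDA 331
        mem_b = mem_a - conversion_gain * pdb_charge             # t418 to t420: after additional transfer from the PDB 332
        return {"MEMN": mem_n, "MEMA": mem_a, "MEMB": mem_b}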
Signal Readout Operation from MEMIC-Side Pixels to Column Circuit
Signal readout from the MEMIC-side pixels is performed sequentially in units of one row.
First, the MSELN 350, MSELA 351 and MSELB 352 are turned on (t501). The output signal lines 353, 354, and 355 are connected to column constant current sources (not shown). Therefore, the MSFN 347, MSFA 348, and MSFB 349 operate as source followers, and the output signal lines 353, 354, and 355 come to have voltages corresponding to the voltages of the MEMN 341, MEMA 342, and MEMB 343, respectively. After the output signal lines 353, 354 and 355 are settled, the voltages of the output signal lines 353, 354 and 355 are respectively AD-converted by the column circuit 103. The signal corresponding to the voltage of the output signal line 353 at this time is expressed as N1, the signal corresponding to the voltage of the output signal line 354 is expressed as A1, and the signal corresponding to the voltage of the output signal line 355 is expressed as B1.
After AD conversion is completed, the MSELN 350, MSELA 351 and MSELB 352 are turned off and the MRES 362, GSN 344, GSA 345 and GSB 346 are turned on to reset the MEMN 341, MEMA 342 and MEMB 343 (t502). Then, after the output signal lines 353, 354, and 355 are settled to voltages corresponding to the respective reset levels of the MEMN 341, MEMA 342, and MEMB 343, the voltages of the output signal lines 353, 354, and 355 are AD-converted by the column circuit 103. The signal corresponding to the voltage of the output signal line 353 at this time is expressed as N2, the signal corresponding to the voltage of the output signal line 354 is expressed as A2, and the signal corresponding to the voltage of the output signal line 355 is expressed as B2. From the signals obtained in this way, by calculating
(A1−A2)−(N1−N2)
it is possible to obtain the amount of charge accumulated in the PDA 331 during the accumulation period. Further, by calculating
(B1−B2)−(A1−A2)
it is possible to obtain the amount of charge accumulated in the PDB 332 during the accumulation period.
The reason for subtracting the voltages corresponding to the reset levels of the MEMN 341, MEMA 342, and MEMB 343, as shown by (N1−N2), (A1−A2), and (B1−B2), is to cancel variations in the threshold values of the MSFN 347, MSFA 348 and MSFB 349 and to cancel the voltage drop corresponding to the length of the output signal line from the pixel to the column circuit. In addition, by holding the signals corresponding to N2, A2, and B2 as adjustment values for one frame outside the image sensor 100, the operation from t502 to t503 can be omitted. Further, in the present embodiment, for signal readout of one pixel, AD conversion is performed twice for each of the output signal lines 353, 354, and 355. However, the difference between the voltage immediately before t502 and the voltage immediately before t503 may be AD-converted instead.
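The arithmetic described above may be illustrated by the following sketch in Python; the variable names follow the notation N1, A1, B1, N2, A2, and B2 used above, while the function name and the final addition to obtain an image signal (the addition of the signals of the photoelectric conversion units described in the background) are merely examples.

    def decode_pixel(n1, a1, b1, n2, a2, b2):
        """Recover per-photodiode signals from the held levels and the reset levels."""
        signal_pda = (a1 - a2) - (n1 - n2)       # charge accumulated in the PDA 331
        signal_pdb = (b1 - b2) - (a1 - a2)       # charge accumulated in the PDB 332
        image_signal = signal_pda + signal_pdb   # image signal obtained by adding both signals
        return signal_pda, signal_pdb, image_signal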
As described above with reference to
[Structure of Light Receiving Portion]
Basic Structure of Light Receiving Portion
Next, the basic configuration of the PDIC-side pixels in this embodiment will be described with reference to
In
The PDA 331 includes a storage region 711, a sensitivity region 713, and an N-type connection region 715, and the PDB 332 includes a storage region 712, a sensitivity region 714, and an N-type connection region 716. These storage regions 711 and 712, sensitivity regions 713 and 714, and N-type connection regions 715 and 716 are made of N-type semiconductors. The sensitivity regions 713 and 714 are larger in area than the storage regions 711 and 712. Further, as will be described in detail below with reference to
As shown in
As shown in
In
Also, as shown in
Horizontal Division Layout and Vertical Division Layout
Since the storage regions and the sensitivity regions are arranged at different depths, the layout direction of the sensitivity regions 713 and 714 of the PDA 331 and the PDB 332 can be changed while keeping the positions of the readout transistors of the PDIC-side pixel unchanged. Alternatively, the layout direction of the sensitivity regions of the PDA 331 and PDB 332 may be changed together with the positions of the readout transistors of the PDIC-side pixel.
[Arrangement of Horizontally Divided Pixels, Vertically Divided Pixels, and Color Filters]
The horizontally divided pixel 311 with a color filter 806 having R (red) spectral sensitivity is arranged on the upper left, the horizontally divided pixel 312 with the color filter 806 having G (green) spectral sensitivity is arranged on the upper right, the vertically divided pixel 313 with the color filter 806 having G (green) spectral sensitivity is arranged on the lower left, and the horizontally divided pixel 314 with the color filter 806 having B (blue) spectral sensitivity is arranged on the lower right, so that the arrangement of the color filters 806 is a Bayer arrangement.
The arrangement of 2×2 pixels shown in
Further, since horizontal phase difference signals are obtained for each of the pixels with R, G, and B color filters, horizontal phase difference signals can be obtained regardless of the color of the object. In addition, the phase difference signals in the vertical direction are obtained from the pixels with a G color filter, which has the highest transmittance among the R, G, and B color filters, so the accuracy of the obtained phase difference signals is higher compared to a case where the phase difference signals are obtained from the pixels with an R color filter or a B color filter.
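For illustration only, the 2×2 repeating unit described above may be expressed by the following sketch in Python; the tuple representation and the function name are assumptions introduced here.

    # Color filter and pupil division direction of the 2x2 repeating unit.
    UNIT = [[("R", "horizontal"), ("G", "horizontal")],
            [("G", "vertical"),   ("B", "horizontal")]]

    def pixel_attributes(row: int, col: int):
        """Return (color, division direction) of the pixel at (row, col)."""
        return UNIT[row % 2][col % 2]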
[Structure of MEMIC-Side Pixel]
Next, the structure of the MEMIC-side pixel will be described.
[Overall Configuration]
The imaging lens unit 5 forms an optical image of a subject on the image sensor 100. Although it is represented by one lens in the figure, the imaging lens unit 5 may include a plurality of lenses including a focus lens, a zoom lens, and so on, and a diaphragm, and may be detachable from the main body of the image capturing apparatus or may be integrally configured with the main body.
The image sensor 100 has the configuration as described in the above embodiment, converts the light incident through the imaging lens unit 5 into electric signals and outputs them. Signals are read out from each pixel of the image sensor 100 so that pupil division signals that can be used in phase difference focus detection and an image signal that is a signal of each pixel can be acquired.
The signal processing unit 7 performs predetermined signal processing such as correction processing on the signals output from the image sensor 100, and outputs the pupil division signals used for focus detection and the image signal used for recording.
The overall control/arithmetic unit 2 comprehensively actuates and controls the entire image capturing apparatus. In addition, the overall control/arithmetic unit 2 performs calculations for focus detection using the pupil division signals processed by the signal processing unit 7, performs arithmetic processing for exposure control, and performs predetermined signal processing, such as development for generating images for recording/playback and compression, on the image signal.
The lens actuation unit 6 actuates the imaging lens unit 5, and performs focus control, zoom control, aperture control, and the like on the imaging lens unit 5 according to control signals from the overall control/arithmetic unit 2.
The instruction unit 3 receives inputs such as shooting execution instructions, actuation mode settings for the image capturing apparatus, and other various settings and selections that are input from outside by the operation of the user, for example, and sends them to the overall control/arithmetic unit 2.
The timing generation unit 4 generates a timing signal for actuating the image sensor 100 and the signal processing unit 7 according to a control signal from the overall control/arithmetic unit 2.
The display unit 8 displays a preview image, a playback image, and information such as the actuation mode settings of the image capturing apparatus.
The recording unit 9 is provided with a recording medium (not shown), and records an image signal for recording. Examples of the recording medium include semiconductor memories such as flash memory. The recording medium may be detachable from the recording unit 9 or may be built-in.
[Calculation of Defocus Amount]
Next, a calculation method for calculating a defocus amount from the pupil division signals in the overall control/arithmetic unit 2 will be described with reference to
The pupil plane and the light receiving surface (second surface) of the image sensor 100 have a substantially conjugate relationship via the ML 701. Therefore, the luminous flux that has passed through a partial pupil region 1701 is mostly received in the sensitivity region 713 (PDA). Further, the luminous flux that has passed through a partial pupil region 1702 is mostly received in the sensitivity region 714 (PDB). Signal charges photoelectrically converted near the boundary between the sensitivity regions 713 and 714 are stochastically transported to the storage region 711 or the storage region 712. Accordingly, at the boundary between the partial pupil region 1701 and the partial pupil region 1702, the signal gradually switches as the x coordinate increases, and the x-direction dependency of the pupil intensity distribution has a shape as illustrated in
Next, with reference to
Hereinafter, the first pupil intensity distribution 1801 and the second pupil intensity distribution 1802 are called the “sensor entrance pupil” of the image sensor 100, and the distance Ds is called the “sensor pupil distance” of the image sensor 100. It should be noted that it is not necessary to configure all pixels to have a single entrance pupil distance. For example, the pixels located at up to 80% of image height may have substantially the same entrance pupil distance, or the pixels in different rows or in different detection areas may be configured to have different entrance pupil distances.
For a defocus amount d, the magnitude of the distance from the imaging position of the subject to the imaging plane is given by |d|, the front focused state in which the in-focus position of the subject is on the subject side with respect to the imaging plane is expressed by negative (d<0), and the rear focused state in which the in-focus position of the subject is on the opposite side of the subject with respect to the imaging plane is expressed by positive (d>0). The in-focus state in which the in-focus position of the subject is on the imaging plane is expressed as d=0.
In the front focused state (d<0), among the luminous fluxes from the subject on the object plane 2002, the luminous flux that has passed through the partial pupil region 1701 (1702) converges once and then spreads to have a radius Γ1 (Γ2) about a position G1 (G2), which is the center of gravity of the luminous flux, and forms a blurred image on the imaging plane 1700. The blurred image is received by the sensitivity region 713 (PDA 331) and the sensitivity region 714 (PDB 332), and parallax images are generated. Therefore, the generated parallax images are blurred images of the subject, in which the image of the subject on the object plane 2002 is spread to have the radius Γ1 (Γ2) about the position G1 (G2) of the center of gravity.
The radius Γ1 (Γ2) of blur of the subject image generally increases proportionally as the magnitude |d| of the defocus amount d increases. Similarly, the magnitude |p| of an image shift amount p (=G2−G1) between the subject images of the parallax images also increases approximately proportionally as the magnitude |d| of the defocus amount d increases. The same relationship holds in the rear focused state (d>0), although the image shift direction of the subject images between the parallax images is opposite to that in the front focused state. In the in-focus state (d=0), the positions of the centers of gravity of the subject images in the parallax images are the same (p=0), and no image shift occurs.
Therefore, with regard to the two phase difference signals obtained by using the signals from the sensitivity region 713 (PDA 331) and the sensitivity region 714 (PDB 332), as the magnitude of the defocus amount of the parallax images increases, the magnitude of the image shift amount between the two phase difference signals in the x direction increases. Based on this relationship, phase difference focus detection is performed by calculating the image shift amount between the parallax images in the x direction by a correlation operation and converting the calculated image shift amount into the defocus amount.
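As an illustrative sketch of this processing (written in Python), the image shift amount may be estimated by a correlation operation and converted into the defocus amount using a conversion coefficient; the use of the sum of absolute differences, the search range, and the coefficient K are assumptions introduced here, since the specific correlation operation and conversion coefficient are not specified in this description.

    import numpy as np

    def estimate_defocus(signal_a: np.ndarray, signal_b: np.ndarray,
                         k_coefficient: float, max_shift: int = 10) -> float:
        """Estimate the image shift amount p by correlation and convert it into a defocus amount d."""
        best_shift, best_score = 0, float("inf")
        for shift in range(-max_shift, max_shift + 1):
            score = np.sum(np.abs(signal_a - np.roll(signal_b, shift)))  # sum of absolute differences
            if score < best_score:
                best_score, best_shift = score, shift
        return k_coefficient * best_shift  # defocus amount d corresponding to the image shift amount p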
[Vertically Divided Pixels and Horizontally Divided Pixels in Calculating Defocus Amount]
In the calculation of the defocus amount described above, it is necessary to calculate an image shift amount. To calculate the image shift amount, two phase difference signals (the signal obtained from the PDA 331 and the signal obtained from the PDB 332) should be compared. Therefore, in order to calculate the image shift amount in the vertical direction, it is necessary to compare two phase difference signals obtained from different rows. In an image sensor that does not have a global shutter function, however, different rows hold phase difference signals corresponding to charges accumulated during accumulation periods at different timings, and there is a possibility that the accuracy of the phase difference detection in the vertical direction may be lower than that of the phase difference detection in the horizontal direction.
On the other hand, in the present embodiment, by sequentially controlling the accumulation time and signal readout of the PDIC-side pixels in units of four pixels over the entire pixel array portion 101, an operation close to that of a global shutter can be achieved. In addition, since the timing of the accumulation period in the vertically divided pixels 313 is the same over the entire pixel array portion 101, it is possible to suppress deterioration in accuracy of phase difference detection in the vertical direction due to the time difference in the timing of the accumulation period.
Next, a second embodiment of the present invention will be described.
In the first embodiment described above, the case where the number of pupil divisions is 2 has been described. However, the present invention is not limited to this, and a configuration in which the number of pupil divisions is greater than two may be employed. In the second embodiment, differences from the first embodiment will be described for the case where the number of pupil divisions is four.
Further, although the HB 205, which is shared by 2×2 pixels in the first embodiment, is arranged in each pixel in the second embodiment, it may instead be shared by 2×2 pixels as in the first embodiment.
As described above, according to the second embodiment, regardless of the number of the plurality of photoelectric conversion units formed in each pixel, it is possible to suppress deterioration in accuracy of phase difference detection in the vertical direction due to the time difference in the timing of the accumulation period.
Next, a third embodiment of the present invention will be described.
In the first embodiment described above, the case where the repetition pattern of the color filters corresponds to that of the vertically divided pixels and horizontally divided pixels has been described. However, the present invention is not limited to this, and the arrangement of the color filters may be repeated with a period that is a multiple of the repetition pattern of the vertically divided pixels and horizontally divided pixels. As the third embodiment, a case where the repetition pattern of the color filters is set to twice the repetition pattern of the vertically divided pixels and horizontally divided pixels will be described.
As described above, according to the third embodiment, the same effects as those of the first embodiment can be obtained.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2022-184971, filed Nov. 18, 2022 which is hereby incorporated by reference herein in its entirety.