The present invention relates to a solid-state image sensor and camera.
Japanese Patent Laid-Open No. 2001-250931 discloses, as a solid-state image sensor having a focus detection function based on a phase-difference detection method, a solid-state image sensor with a configuration in which N neighboring pixels form one group and one microlens is located on the N pixels which belong to the same group.
In the solid-state image sensor described in Japanese Patent Laid-Open No. 2001-250931, the transistors included in the pixels are located on the light-receiving face side, so the area of the light-receiving unit is limited by the transistors and wiring patterns. For this reason, as the pixel size is reduced, it becomes difficult to obtain sufficient sensitivity.
The present invention provides a technique advantageous in improvement of sensitivity in a solid-state image sensor having a configuration in which a plurality of pixels are assigned to each microlens.
One of the aspects of the present invention provides a solid-state image sensor, which includes a semiconductor substrate having a first face and a second face opposite to the first face, the sensor comprising: a plurality of pixel groups each including a plurality of pixels, each pixel having a photoelectric converter formed in the semiconductor substrate and a wiring pattern which constitutes a part of a circuit in the pixel, the photoelectric converter including a region whose majority carriers are the same as the charges to be accumulated in the photoelectric converter as a signal; and a plurality of microlenses which are located so that one microlens is arranged for each pixel group, wherein the wiring patterns are located at a side of the first face of the semiconductor substrate, the plurality of microlenses are located at a side of the second face of the semiconductor substrate, and light-incidence faces of the regions of the photoelectric converters of each pixel group are arranged along the second face such that the light-incidence faces are apart from each other in a direction along the second face.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
A solid-state image sensor according to the first embodiment of the present invention will be described below with reference to
With the configuration in which one microlens 30 is assigned to one pixel group 50 including a plurality of pixels, light passing through different regions of the pupil of an image sensing lens, which forms an image of an object on the image sensing plane of the solid-state image sensor 1, can be detected by the plurality of pixels of each pixel group 50. For the sake of descriptive convenience, assume that the different regions of the pupil of the image sensing lens are defined as first and second regions, and that the plurality of pixels in each pixel group 50 include first and second pixels. A first image is obtained by detecting, with the first pixels of the plurality of pixel groups 50, light which passes through the first region, and a second image is obtained by detecting, with the second pixels of the plurality of pixel groups 50, light which passes through the second region. From the deviation between the first and second images, the deviation amount (that is, the defocus amount) between the image formed by the image sensing lens and the image sensing plane of the solid-state image sensor 1, or the distance to the object, can be detected. Such a method is called a phase-difference detection method. In the example of
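To make the phase-difference principle concrete, the following Python sketch estimates the relative shift between the first and second images by searching for the displacement that minimizes their mean absolute difference. It is a minimal illustration, not the circuitry or processing of the embodiments; the one-dimensional image arrays, the search range, and the conversion factor from shift to defocus amount are assumptions introduced only for this example.

```python
import numpy as np

def image_shift(first_image: np.ndarray, second_image: np.ndarray, max_shift: int = 8) -> int:
    """Estimate the displacement (in pixels) between the images formed by the first and
    second pixels by minimizing the mean absolute difference over candidate shifts."""
    n = len(first_image)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)  # overlapping index range for this shift
        cost = np.abs(first_image[lo - s:hi - s] - second_image[lo:hi]).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

if __name__ == "__main__":
    # Synthetic one-dimensional images: the second image is the first shifted by 3 pixels.
    base = np.zeros(64)
    base[20:40] = 1.0
    shift = image_shift(base, np.roll(base, 3))
    # The detected shift would then be converted to a defocus amount (or object distance)
    # using a factor determined by the optical system; 1.0 below is only a placeholder.
    print("shift =", shift, "defocus =", shift * 1.0)
```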
Note that one of the two diffusion regions (source and drain) of the transfer transistor 203 is shared with the photoelectric converter PD, and the other is shared with the floating diffusion 204. The gate electrode of the transfer transistor 203 forms a channel through which the charges accumulated in the photoelectric converter PD are transferred to the floating diffusion 204.
The photoelectric converter PD includes a p+-type region 304 located on the first face side (wiring layer side) of the n+-type region 303, and a p+-type region 305 located on the second face side (light-receiving face side) of the n-type region 301, and is thus configured as an embedded photodiode. The p+-type region 305 on the light-receiving face side is formed over the entire region of the pixel array. A gate electrode 307 is the gate electrode of the transfer transistor 203, which transfers charges from the n+-type region 303, serving as the charge accumulation region of the photoelectric converter PD, to the floating diffusion 204. The gate electrode 307 is located on the first face via a gate insulating film (not shown). The floating diffusion (FD) 204 is an n-type region.
The n+-type region 303 serving as the charge accumulation region of the photoelectric converter PD is completely depleted by a reset operation, and then accumulates electrons generated by photoelectric conversion according to incident light. For this reason, the area of the photoelectric converter PD can be made as large as possible, extending close to the neighboring photoelectric converter PD, within a range in which isolation from the floating diffusion 204 and from the photoelectric converter PD of the neighboring pixel is maintained. In
According to the first embodiment, the wiring patterns 60 are located on the side of the first face 11 of the semiconductor substrate 10, and the microlenses 30 are located on the side of the second face 12 of the semiconductor substrate 10. Hence, light is not intercepted by the wiring patterns 60, and a large light-receivable region can be assured. Furthermore, according to the first embodiment, even when each pixel group 50 includes circuit elements such as transistors, those circuit elements are located on the side of the first face 11 of the semiconductor substrate 10, so light is not intercepted by the circuit elements either, and a large light-receivable region can be assured.
A solid-state image sensor according to the second embodiment of the present invention will be described below with reference to
In the example shown in
In the example of
In this configuration, if the wiring patterns of the pixels were located between the microlenses 30 and the semiconductor substrate 10, each second photoelectric converter could be shaded by the wiring patterns required to read out a signal from the first photoelectric converter surrounded by the second photoelectric converters. On the other hand, with the configuration in which the wiring patterns are located on the side of the first face 11 of the semiconductor substrate 10 and the microlenses 30 are located on the side of the second face 12 of the semiconductor substrate 10, as in the present invention, neither the semiconductor substrate 10 nor the photoelectric converters are shaded by the wiring patterns. Hence, a large light-receiving region (a region that can receive light) can be assured, thus improving the sensitivity.
A solid-state image sensor according to the third embodiment of the present invention will be described below with reference to
For example, in the configuration example shown in
The translational symmetry layout is advantageous in eliminating pixel-to-pixel characteristic variations when misalignment occurs between a mask and a pattern already formed on the semiconductor substrate. For example, a case will be examined below in which misalignment has occurred between an active region and a polysilicon patterning mask, and the gate electrode 307 of the transfer transistor shifts to the right in
In general, in order to form the n+-type region 303 and the p+-type region 304 so as to obtain the required transfer characteristics, impurity ions are implanted at an angle inclined from the direction normal to the semiconductor substrate. When all the pixels have a common charge transfer direction, the impurity implantation required to form each of the n+-type region 303 and the p+-type region 304 need only be performed once. Since the impurity can therefore be implanted under the same condition for all the pixels, transfer characteristic variations between pixels are small. By contrast, when the pixels do not have a common charge transfer direction, a plurality of impurity implantations required to form the n+-type region 303 and the p+-type region 304 have to be performed while changing the angle. Hence, manufacturing variations in dose and implantation angle cannot be avoided, causing transfer characteristic variations among the pixels. Such variations may lower the focus detection precision when each photoelectric converter is used as a focus detection unit as in the present specification. Therefore, in order to suppress transfer characteristic variations, all the pixels desirably have a common charge transfer direction.
A solid-state image sensor according to the fourth embodiment of the present invention will be described below with reference to
In the configuration of
In the configuration of
The technical idea behind the arrangement of the p+-type region 901 can be explained more generally as follows. Let C1 be the maximum value of the p-type impurity concentration (the impurity concentration of the second conductivity type) in a region R1 between the n-type regions 301 (semiconductor regions of a first conductivity type), the region R1 corresponding to the minimum distance between the n-type regions 301 of the photoelectric converters PD of pixels which belong to a single pixel group 50. Also, let C2 be the maximum value of the p-type impurity concentration (the impurity concentration of the second conductivity type) in a region R2 between the n-type regions 301, the region R2 corresponding to the minimum distance between the n-type regions 301 of the photoelectric converters PD of pixels which belong to different pixel groups 50. Here, the n-type region 301 is a semiconductor region which can accumulate charges as a signal. The region R1 is the p-well 302, and the region R2 includes the p+-type region 901 formed by implanting a p-type impurity into the p-well 302. Therefore, the maximum value C1 of the p-type impurity concentration (the impurity concentration of the second conductivity type) in the region R1 is smaller than the maximum value C2 of the p-type impurity concentration (the impurity concentration of the second conductivity type) in the region R2.
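Stated compactly (a restatement of the condition above, where N(x) simply denotes the impurity concentration of the second conductivity type at a position x, a symbol introduced only for this illustration):

```latex
C_1 = \max_{x \in R_1} N(x), \qquad
C_2 = \max_{x \in R_2} N(x), \qquad
C_1 < C_2 .
```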
A solid-state image sensor according to the fifth embodiment of the present invention will be described below with reference to
In the circuit shown in
The charges accumulated in the photoelectric converter PD are converted into a voltage by the floating diffusion, and the voltage is supplied to the input of the amplifier transistor. When the capacitance of the floating diffusion is small, a small amount of charge can be converted into a large signal voltage. For this reason, the signal voltage is less susceptible to noise superposed by the read circuit downstream of the floating diffusion, thus improving the S/N.
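As a numerical illustration of this point, the following Python sketch converts an accumulated charge into a floating-diffusion voltage as V = Q / C_FD and compares the resulting signal-to-noise ratio against a fixed amount of downstream read noise. The charge, capacitance, and noise values are assumptions chosen only for illustration, not figures from the embodiments.

```python
# Illustrative sketch: a smaller floating-diffusion capacitance converts the same charge
# into a larger voltage, so fixed noise added by the read circuit after the floating
# diffusion degrades the signal less (higher S/N). All values below are hypothetical.

E = 1.602e-19  # elementary charge [C]

def signal_voltage(n_electrons: int, c_fd_farads: float) -> float:
    """Voltage appearing on the floating diffusion for a given accumulated charge."""
    return n_electrons * E / c_fd_farads

def snr_after_readout(n_electrons: int, c_fd_farads: float, read_noise_volts: float) -> float:
    """Signal-to-noise ratio against noise superposed by the read circuit after the FD."""
    return signal_voltage(n_electrons, c_fd_farads) / read_noise_volts

if __name__ == "__main__":
    electrons = 1000             # accumulated signal charge (assumed)
    read_noise = 100e-6          # 100 uV of downstream read noise (assumed)
    for c_fd in (2e-15, 1e-15):  # 2 fF versus 1 fF floating-diffusion capacitance
        v = signal_voltage(electrons, c_fd)
        snr = snr_after_readout(electrons, c_fd, read_noise)
        print(f"C_FD = {c_fd * 1e15:.1f} fF -> V_sig = {v * 1e3:.1f} mV, S/N = {snr:.0f}")
```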
A solid-state image sensor according to the sixth embodiment of the present invention will be described below with reference to
As the seventh embodiment, a solid-state image sensor provided with functions other than the image sensing and focus detection functions will be described below. In a configuration in which one microlens is formed for one pixel group including a plurality of pixels, a dynamic range expansion function can be added. The ratio between the charges accumulated in the photoelectric converters and the final output voltage of the solid-state image sensor is called a charge conversion coefficient. When the accumulated charges are equal to each other, the output voltage becomes larger as the charge conversion coefficient increases. In this case, the charge conversion coefficient of at least one pixel of the plurality of pixels formed under one microlens is designed to be smaller than those of the other pixels. The pixel having the small charge conversion coefficient then generates a low output voltage even when it receives a charge of the same magnitude. Therefore, when the output voltage range is fixed, a charge larger than the saturation charge amount of a pixel having a large charge conversion coefficient can be read out from the pixel having the small charge conversion coefficient. By contrast, in a low-luminance region, a pixel with a large charge conversion coefficient, which can obtain a large output voltage even from a small signal charge, is advantageous in terms of S/N. Hence, the outputs of the pixels having the large charge conversion coefficient are used in the low-luminance region, and the output of the pixel having the small charge conversion coefficient is used in a high-luminance region where the outputs of those pixels are saturated. A plurality of pixel outputs having different charge conversion coefficients are combined in this manner, thus expanding the dynamic range; a sketch of such a combination is given below.
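The following Python sketch illustrates such a combination under an assumed saturation voltage and assumed conversion gains; the function and numerical values are hypothetical and serve only to show how the low-gain output takes over once the high-gain output saturates.

```python
# Illustrative sketch of the dynamic-range expansion described above. One pixel under the
# microlens is assumed to have a small charge conversion coefficient (low gain) and the
# others a large one; the gain and saturation values are hypothetical.

V_SAT = 1.0  # usable output-voltage range of the sensor [V] (assumed)

def combined_charge_estimate(charge_electrons: float,
                             gain_high: float = 200e-6,  # volts per electron (assumed)
                             gain_low: float = 50e-6) -> float:
    """Return a charge estimate that uses the high-gain pixel while it is unsaturated and
    falls back to the low-gain pixel in the high-luminance (saturated) region."""
    v_high = min(charge_electrons * gain_high, V_SAT)  # high-gain output, clipped at saturation
    v_low = min(charge_electrons * gain_low, V_SAT)    # low-gain output, clipped at saturation
    if v_high < V_SAT:
        # Low luminance: the large conversion coefficient yields a larger voltage for the
        # same charge, so downstream noise degrades the estimate less (better S/N).
        return v_high / gain_high
    # High luminance: the high-gain pixel is saturated, so use the low-gain pixel, which
    # can represent charges beyond the high-gain pixel's saturation charge amount.
    return v_low / gain_low

if __name__ == "__main__":
    for electrons in (1_000, 4_000, 15_000):
        print(electrons, "->", round(combined_charge_estimate(electrons)))
```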
In addition, some pixels may have a global electronic shutter function, for example as strobe light control pixels. Such a function can be implemented by adding active elements, including memories, to the elements which configure those pixels.
As an application example of the solid-state image sensor according to the above embodiments, a camera which incorporates the solid-state image sensor will be exemplified below. The concept of the camera includes not only apparatuses whose primary purpose is image capturing, but also apparatuses which include an image capturing function as an auxiliary function (for example, a personal computer or a mobile terminal). The camera includes the solid-state image sensor according to the present invention exemplified in the embodiments, and a processing unit which processes a signal output from the solid-state image sensor. The processing unit can include, for example, an A/D converter and a processor which processes digital data output from the A/D converter.
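As a minimal structural sketch of this partitioning into an image sensor and a processing unit (the class and method names below are hypothetical and do not correspond to any actual driver or library API):

```python
# Hypothetical sketch of the described camera signal chain: the solid-state image sensor
# outputs an analog signal, which the processing unit digitizes (A/D converter) and then
# processes (processor). Names and parameters are illustrative assumptions only.

from typing import List

class ADConverter:
    def __init__(self, bits: int = 12, full_scale_volts: float = 1.0):
        self.levels = 2 ** bits - 1
        self.full_scale = full_scale_volts

    def convert(self, analog_signal: List[float]) -> List[int]:
        """Quantize analog pixel voltages into digital codes."""
        return [min(self.levels, max(0, round(v / self.full_scale * self.levels)))
                for v in analog_signal]

class Processor:
    def process(self, digital_data: List[int]) -> List[int]:
        """Placeholder for processing of the digital data output from the A/D converter."""
        return digital_data

class ProcessingUnit:
    """Processing unit comprising an A/D converter and a processor, as described above."""
    def __init__(self) -> None:
        self.adc = ADConverter()
        self.processor = Processor()

    def handle(self, analog_signal: List[float]) -> List[int]:
        return self.processor.process(self.adc.convert(analog_signal))

if __name__ == "__main__":
    print(ProcessingUnit().handle([0.0, 0.25, 0.5, 1.0]))
```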
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2011-219562, filed Oct. 3, 2011, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind |
---|---|---|---|
2011-219562 | Oct 2011 | JP | national |
This is a continuation of U.S. patent application Ser. No. 14/982,494, filed Dec. 29, 2015, which is a continuation of U.S. patent application Ser. No. 13/627,507, filed Sep. 26, 2012, and which issued as U.S. Pat. No. 9,300,884.
Number | Date | Country |
---|---|---|
2001-124984 | May 2001 | JP |
2001-250931 | Sep 2001 | JP |
2004-186311 | Jul 2004 | JP |
2006-086226 | Mar 2006 | JP |
2008-270298 | Nov 2008 | JP |
2009-295937 | Dec 2009 | JP |
2010-154493 | Jul 2010 | JP |
2010-161200 | Jul 2010 | JP |
2011-029337 | Feb 2011 | JP |
2011-054911 | Mar 2011 | JP |
2011-082253 | Apr 2011 | JP |
2011-129785 | Jun 2011 | JP |
2011-138927 | Jul 2011 | JP |
2011-142330 | Jul 2011 | JP |
2011-176715 | Sep 2011 | JP |
2010056285 | May 2010 | WO |
2011074234 | Jun 2011 | WO |
2012026292 | Mar 2012 | WO |
Number | Date | Country | |
---|---|---|---|
20170338261 A1 | Nov 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14982494 | Dec 2015 | US |
Child | 15668820 | US | |
Parent | 13627507 | Sep 2012 | US |
Child | 14982494 | US |