The present disclosure relates to a solid-state imaging device and an electronic device including the solid-state imaging device.
For example, according to PTL 1, two pixels having different areas, i.e., a large pixel and a small pixel, are arranged in a unit pixel, and a light-reducing part is provided on the pixel with the smaller area so that the two pixels have different sensitivities. In this way, the amount of charge stored at a charge storage unit of the photoelectric conversion element of the small-area pixel is increased beyond what the area ratio alone would provide, and the dynamic range is expanded.
In this example, the transfer electrode positions (detection node electrode positions) of the large-area and small-area pixels are located at the edge of the unit pixel or at the edge of the photoelectric conversion area, such that the photoelectrically converted charge is transferred toward the edges during charge detection. The electrode positions are each at least 10% of the pixel size apart from the optical center.
In recent years, there has been a demand for in-vehicle cameras having a resolution high enough to recognize numerical values on distant signs about 200 m ahead and a frame rate of at least 60 fps. For this reason, the horizontal blanking period (readout time) must be shortened even as the number of pixels increases, and above all, the signal charge transfer time of the pixels must be shortened.
[PTL 1]
In view of the foregoing, when the transfer electrode is provided at the edge of the photoelectric conversion area, it takes time to transfer the generated charge, and the charge cannot be transferred within the desired time. The average transfer time is worst in a region with no potential gradient and is expressed as the square of the transfer distance divided by the diffusion coefficient D. When the potential is deepened to increase the amount of saturated charge, a potential pocket is created in the potential gradient of the transfer path and the charge is more likely to be trapped. Depending on the height of the pocket and the temperature, it also takes time for the charge to escape from it, and it is therefore disadvantageous to provide the transfer electrode at the edge in view of maximizing saturation and transfer performance.
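As a rough worked illustration of the scaling stated above, take a purely diffusive region with no potential gradient; the travel distance and diffusion coefficient below are assumed example values, not figures from the disclosure.

```latex
% Diffusion-limited average transfer time, t ~ L^2 / D (as stated above).
% Example numbers are assumptions: L = 1.5 um of lateral travel, D = 10 cm^2/s.
\[
t_{\mathrm{diff}} \approx \frac{L^{2}}{D}
  = \frac{(1.5\times 10^{-4}\,\mathrm{cm})^{2}}{10\,\mathrm{cm^{2}/s}}
  \approx 2.3\,\mathrm{ns},
\qquad
t_{\mathrm{diff}}(2L) \approx 4\,t_{\mathrm{diff}}(L).
\]
```

Because the time grows with the square of the distance, roughly halving the longest travel path (for example, by detecting charge near the light receiving center rather than at the pixel edge) can reduce the diffusion-limited contribution by about a factor of four.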
In the structure including the large and small pixels, the structure for creating a potential gradient toward the transfer gate (the shape of the photoelectric conversion area) is not symmetrical between the large and small pixels. The resulting asymmetry in charge transfer causes transfer defects and transfer time delays, and the sensitivity ratio and sensitivity shading between the large and small pixels prevent the correlation with the light quantity and wavelength from being constant. Since the outputs of the large and small pixels are finally synthesized by multiplying by a sensitivity ratio gain, the output linearity with respect to the light quantity must be constant.
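The sketch below illustrates why constant linearity matters for the synthesis described above; it is a minimal illustration only, and the threshold, gain value, and function name are assumptions rather than values from the disclosure.

```python
# Illustrative sketch of combining large-pixel and small-pixel outputs into one
# HDR value by multiplying the small-pixel signal by a sensitivity-ratio gain.
# The threshold and gain numbers are placeholders, not values from the disclosure.

LARGE_SATURATION = 4000.0   # large-pixel output level treated as near saturation (DN)
SENSITIVITY_RATIO = 16.0    # assumed large/small sensitivity ratio used as synthesis gain


def synthesize_hdr(large_signal: float, small_signal: float) -> float:
    """Return a single linear output from the large/small pixel pair.

    Below saturation the large-pixel output is used directly; above it the
    small-pixel signal is scaled by the sensitivity ratio. If either pixel's
    response is not linear in light quantity, the two branches no longer meet
    at the switch point and a visible step (linearity error) appears.
    """
    if large_signal < LARGE_SATURATION:
        return large_signal
    return small_signal * SENSITIVITY_RATIO


# A scene level that saturates the large pixel falls back to the gained-up
# small-pixel signal, which should continue the same straight line.
print(synthesize_hdr(1200.0, 75.0))   # -> 1200.0 (large pixel used)
print(synthesize_hdr(4095.0, 300.0))  # -> 4800.0 (small pixel x gain)
```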
With the foregoing in view, it is an object of the present disclosure to provide a solid-state imaging device and an electronic device that allow high saturation and maximum transfer performance to be achieved.
A solid-state imaging device according to an aspect of the present disclosure includes a plurality of unit pixels arranged in a two-dimensional array, the plurality of unit pixels each includes a photoelectric conversion unit that photoelectrically converts incident light and a wiring layer stacked on a surface opposite to a light-incident side surface of the photoelectric conversion unit and having a detection node that detects charge stored at the photoelectric conversion unit, and in at least some of the plurality of unit pixels, a center of the detection node is substantially coincident with a light receiving center of the photoelectric conversion unit.
An electronic device according to another aspect of the present disclosure includes a solid-state imaging device, the solid-state imaging device includes a plurality of unit pixels arranged in a two-dimensional array, the plurality of unit pixels each includes a photoelectric conversion unit that photoelectrically converts incident light and a wiring layer stacked on a surface opposite to a light-incident side surface of the photoelectric conversion unit and having a detection node that detects charge stored at the photoelectric conversion unit, and in at least some of the plurality of unit pixels, a center of the detection node is substantially coincident with a light receiving center of the photoelectric conversion unit.
Embodiments of the present disclosure will be described with reference to the drawings. In the drawings to be referred to in the following description, the same or similar portions will be denoted by the same or similar reference characters and their description will not be repeated. It should be noted however that the drawings are schematic, and the relationships between thicknesses and two-dimensional sizes and the ratios of thicknesses of devices or members may not be true to reality. Therefore, specific thicknesses and dimensions should be determined in consideration of the following description. In addition, it is understood that some portions have different dimensional relationships and ratios among drawings.
Herein, a “first conductivity type” refers to one of p-type and n-type, and a “second conductivity type” refers to one of p-type and n-type that is different from the “first conductivity type”. The semiconductor regions with “+” and “−” suffixed to “n” and “p” indicate that the semiconductor regions have relatively higher and lower impurity densities than semiconductor regions without “+” and “−”. However, it does not necessarily mean that semiconductor regions with the same character “n” have exactly the same impurity density.
In addition, the directions defined such as upward and downward in the following description are merely definitions provided for the sake of brevity and are not intended to limit technical ideas in the present disclosure. For example, it should be understood that when an object is rotated by 90 degrees and observed, the up-down direction is interpreted as the left-right direction, and when an object is rotated by 180 degrees and observed, the up and down positions are reversed. The advantageous effects described herein are merely exemplary and are not restrictive, and other advantageous effects may be produced.
(Overall Configuration of Solid-state Imaging Device)
A solid-state imaging device 1 according to a first embodiment of the present disclosure will be described.
The solid-state imaging device 1 in
As shown in
The pixel region 3 includes a plurality of unit pixels 9 arranged regularly in a two-dimensional array on the substrate 2. The unit pixel 9 includes a large-area pixel 91 and a small-area pixel 92 shown in
The vertical driving circuit 4, which may include a shift register, selects a desired pixel driving wiring 10, supplies a pulse for driving the unit pixels 9 to the selected pixel driving wiring 10, and drives the unit pixels 9 on a row basis. More specifically, the vertical driving circuit 4 selectively scans the unit pixels 9 in the pixel region 3 sequentially in the vertical direction on a row basis, and supplies pixel signals based on signal charge generated according to the quantities of received light in the photoelectric conversion units of the unit pixels 9 to the column signal processing circuits 5 through vertical signal lines 11.
For example, the column signal processing circuit 5 is provided for each column of unit pixels 9 and performs signal processing such as noise removal on signals output from a row of unit pixels 9 on a pixel column basis. For example, the column signal processing circuit 5 performs signal processing such as correlated double sampling (CDS) for removing pixel-specific fixed pattern noise and analog-digital (AD) conversion.
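A minimal sketch of the CDS operation mentioned above follows; the array size and noise figures are arbitrary assumptions used only to show how the per-pixel offset cancels, not parameters from the disclosure.

```python
# Illustrative correlated double sampling (CDS): the reset level sampled from each
# pixel is subtracted from the signal level, cancelling pixel-specific offsets
# (fixed pattern noise) before AD conversion. All values below are made up.
import numpy as np

rng = np.random.default_rng(0)

fixed_pattern = rng.normal(100.0, 5.0, size=(4, 4))   # per-pixel offset (arbitrary units)
photo_signal = np.full((4, 4), 250.0)                 # ideal photo-generated signal

reset_sample = fixed_pattern                          # sample taken right after reset
signal_sample = fixed_pattern + photo_signal          # sample taken after charge transfer

cds_output = signal_sample - reset_sample             # offset cancels, signal remains
print(np.allclose(cds_output, photo_signal))          # True
```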
The horizontal driving circuit 6, which may include a shift register, sequentially outputs horizontal scanning pulses to select each of the column signal processing circuits 5 in order, and causes each of the column signal processing circuits 5 to output a pixel signal having been subjected to signal processing to the horizontal signal line 12.
The output circuit 7 performs signal processing on the pixel signals sequentially supplied from the column signal processing circuits 5 through the horizontal signal line 12, and outputs resultant pixel signals. Examples of the signal processing include buffering, black level adjustment, column variation correction, and various digital signal processing.
The control circuit 8 generates a clock signal and a control signal that serve as references for the operation of the vertical driving circuit 4, the column signal processing circuits 5, and the horizontal driving circuit 6 on the basis of a vertical synchronization signal, a horizontal synchronization signal, and a master clock signal. The control circuit 8 also outputs the generated clock signal and control signal to the vertical driving circuit 4, the column signal processing circuits 5, and the horizontal driving circuit 6.
In
(Equivalent Circuit of Unit Pixel)
The unit pixel 9 includes a photodiode (SP1) 91a for the large-area pixel 91, a photodiode (SP2) 92a for the small-area pixel 92, a transfer transistor (TGL) 93a, conversion efficiency adjustment transistors (FDG and FCG) 93b and 93c, a reset transistor (RST) 93d, an amplification transistor (AMP) 93e, a selection transistor (SEL) 93f, and a charge storage capacitor unit 93g. The transfer transistor (TGL) 93a, the conversion efficiency adjustment transistors (FDG and FCG) 93b and 93c, the reset transistor (RST) 93d, the amplification transistor (AMP) 93e, and the selection transistor (SEL) 93f are pixel transistors and may be MOS transistors.
The photodiode 91a for the large-area pixel 91 constitutes a photoelectric conversion unit that performs photoelectric conversion on incident light. The photodiode 91a has its anode grounded. The photodiode 91a has its cathode connected to the source of the transfer transistor 93a.
The transfer transistor 93a has its drain connected to the charge storage unit 93h which is made of a floating diffusion region. The transfer transistor 93a transfers charge from the photodiode 91a to the charge storage unit 93h in response to a transfer signal applied to the gate.
The charge storage unit 93h stores the charge transferred from the photodiode 91a through the transfer transistor 93a. The potential of the charge storage unit 93h is modulated according to the amount of charge stored at the charge storage unit 93h. The source of the conversion efficiency adjustment transistor 93b is connected to the charge storage unit 93h. The conversion efficiency adjustment transistor 93b has its drain connected to the sources of the conversion efficiency adjustment transistor 93c and the reset transistor 93d. The conversion efficiency adjustment transistor 93b adjusts the charge conversion efficiency in response to a conversion efficiency adjustment signal applied to the gate.
Meanwhile, the photodiode 92a for the small-area pixel 92 constitutes a photoelectric conversion unit that photoelectrically converts incident light. The photodiode 92a has its anode grounded. The photodiode 92a has its cathode connected to the charge storage capacitor unit 93g. A power supply potential (FC-VDD) is applied to the charge storage capacitor unit 93g. The drain of the conversion efficiency adjustment transistor 93c is connected to the cathode of the photodiode 92a and the charge storage capacitor unit 93g.
When the conversion efficiency adjustment transistors 93b and 93c are off, the charge storage capacitor unit 93g stores charge generated by the photodiode 92a. In response to a conversion efficiency adjustment signal applied to the gates of the conversion efficiency adjustment transistors 93b and 93c, the charge generated by the photodiode 92a and the charge stored at the charge storage capacitor unit 93g are transferred to the charge storage unit 93h.
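The sketch below illustrates, under assumed capacitance values that are not taken from the disclosure, why switching additional storage capacitance onto the detection node changes the charge-to-voltage conversion efficiency.

```python
# Conversion efficiency (conversion gain) of a floating-diffusion readout is
# roughly q / C_total, so connecting the in-pixel storage capacitance through
# the adjustment transistors lowers the gain and lets more charge be handled
# before the output swings out of range. Capacitance values are placeholders.

Q_E = 1.602e-19          # elementary charge [C]
C_FD = 1.0e-15           # assumed floating-diffusion capacitance [F] (1 fF)
C_FC = 8.0e-15           # assumed in-pixel storage capacitance [F] (8 fF)


def conversion_gain_uV_per_e(c_total: float) -> float:
    """Return conversion gain in microvolts per electron for a total capacitance."""
    return Q_E / c_total * 1e6


print(conversion_gain_uV_per_e(C_FD))         # high-gain mode: ~160 uV/e-
print(conversion_gain_uV_per_e(C_FD + C_FC))  # low-gain mode:  ~18 uV/e-
```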
A power supply potential (VDD) is applied to the drain of the reset transistor 93d. The reset transistor 93d initializes (resets) the charge stored at the charge storage capacitor unit 93g and the charge stored at the charge storage unit 93h in response to a reset signal applied to the gate.
The charge storage unit 93h and the drain of the transfer transistor 93a are connected with the gate of the amplification transistor 93e. The amplification transistor 93e has its drain connected with the source of the selection transistor 93f. The power supply potential (VDD) is applied to the source of the amplification transistor 93e. The amplification transistor 93e amplifies the potential of the charge storage unit 93h.
The selection transistor 93f has its drain connected to the vertical signal line 11. The selection transistor 93f selects a unit pixel 9 in response to a selection signal. When the unit pixel 9 is selected, a pixel signal corresponding to the potential amplified by the amplification transistor 93e is output through the vertical signal line 11.
(Arrangement of Pixel Transistors)
The transfer transistor (TGL) 93a, the conversion efficiency adjustment transistors (FDG and FCG) 93b and 93c, and the reset transistor (RST) 93d are provided in the wiring 21. The amplification transistor (AMP) 93e and the selection transistor (SEL) 93f are provided in the wiring 22. The wiring 21 and the amplification transistor (AMP) 93e are connected for example by a bonding wire. The wiring 22 and the wiring 23 are electrically disconnected.
(Sectional Structure of Unit Pixel)
As shown in
The substrate 2 may be a semiconductor substrate made of silicon (Si). The photodiode 91a is formed by a pn junction between an n-type semiconductor region 91a1 and a p-type semiconductor region 91a2 formed on the front surface side of the substrate 2. In the photodiode 91a, signal charge corresponding to the quantity of light incident through an n-type semiconductor region 2a is generated, and the generated signal charge is stored at the n-type semiconductor region 91a1. The electrons attributable to dark current generated at the interface of the substrate 2 are absorbed by the holes, which are the majority carriers of a p-type semiconductor region 2b formed in the depth-wise direction from the backside surface of the substrate 2 and of a p-type semiconductor region 2c formed on the front surface, so that the dark current is reduced.
The large-area pixel 91 is electrically isolated by the RDTI 31 formed in the p-type semiconductor region 2b. As shown in
The color filter 41 is formed corresponding to the wavelength of light desired to be received by each unit pixel 9. The color filter 41 transmits light of an arbitrary wavelength, and lets the transmitted light enter the photodiode 91a in the substrate 2.
The wiring layer 43 is formed on the front surface side of the substrate 2 and includes pixel transistors (among which only the transfer transistor 93a, the conversion efficiency adjustment transistor 93b, and the reset transistor 93d are shown in
In the solid-state imaging device 1 having the above configuration, light is incident from the backside surface side of the substrate 2, the incident light is transmitted through the on-chip lens 42 and the color filter 41, and the transmitted light is photoelectrically converted by the photodiode 91a, so that signal charge is generated. Then, the generated signal charge is output as a pixel signal on the vertical signal line 11 shown in
According to the first embodiment, the charge storage capacitor unit 93g is not a storage layer inside the substrate 2 but is placed in the wiring layer 43. A high-density p-type region is implanted at the boundary between the laminated layers to isolate them. In this way, the photoelectric conversion area can be made larger than in a planar layout arrangement.
According to the first embodiment, the light receiving center of the large-area pixel 91 is the center of the area surrounded by the RDTI 31. The detection node center refers to the center of the gate electrode of the transfer transistor 93a. The detection node detects charge stored at the photodiode 91a.
In this example, the position of the light receiving center and the position of the center of the detection node are substantially coincident. Here, the wording "substantially coincident" covers the case in which the normal passing through the center of the light-receiving surface of the large-area pixel 91 and the normal passing through the center of the detection node are perfectly coincident, as well as other cases in which these lines can be considered substantially coincident. There may be discrepancies that do not affect the accuracy of uniformity. For example, a discrepancy within 10% of the pixel size can be regarded as substantial coincidence: if the pixel size is 3 μm and the detection node center is within a distance of 0.3 μm from the light receiving center, the state may be regarded as substantial coincidence.
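A minimal sketch of this 10% criterion follows; the function name, coordinates, and tolerance handling are illustrative assumptions only.

```python
# Illustrative check of "substantial coincidence": the detection node center is
# treated as substantially coincident with the light receiving center when their
# lateral offset is within 10% of the pixel size, as in the 3 um / 0.3 um example.
import math


def is_substantially_coincident(light_center, node_center, pixel_size_um, tolerance=0.10):
    """Return True if the two centers are within tolerance * pixel_size of each other."""
    offset = math.dist(light_center, node_center)
    return offset <= tolerance * pixel_size_um


print(is_substantially_coincident((1.5, 1.5), (1.7, 1.6), pixel_size_um=3.0))  # True  (~0.22 um offset)
print(is_substantially_coincident((1.5, 1.5), (2.0, 1.9), pixel_size_um=3.0))  # False (~0.64 um offset)
```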
Note that in order to provide an FD (floating diffusion) region and the pixel transistors adjacent to the transfer gate electrode of the transfer transistor 93a provided at the center, a high-density p-type semiconductor region 2c must be provided to isolate the n-type semiconductor region 2a of the underlying photoelectric conversion area from the n-type semiconductor region 2d of the FD diffusion layer. It is essential to place the FD diffusion layer near the center regardless of the presence or absence of the FC capacitance.
As described above, according to the first embodiment, the moment the transfer transistor 93a serving as the detection node is turned on, the charge generated by photoelectric conversion at the photodiode 91a is subjected to an electric field corresponding to the power supply voltage in the vicinity of the transfer transistor 93a. Since the gate electrode of the transfer transistor 93a is at the same position as the light receiving center of the photodiode 91a, the charge can be transferred efficiently in the shortest possible time.
According to the first embodiment, the potential is deepest at the center of the photoelectric conversion area, i.e., directly below the gate electrode of the transfer transistor 93a. The charge needs only move substantially in the vertical direction from the deepest point and does not have to move horizontally, which makes it difficult for pockets to form in the potential gradient.
Therefore, according to the first embodiment, high saturation and maximum transfer performance can be achieved by matching the center of light reception and the center of transfer, and sensitivity shading can be suppressed, coloration can be reduced, and the SN ratio can be improved in the structure including large-area and small-area pixels.
Next, a second embodiment will be described. The second embodiment is a modification of the first embodiment.
According to the second embodiment, a planar type transfer transistor 93a1 is used.
(Sectional Structure of Unit Pixel)
As in the foregoing, according to the second embodiment, the center of the gate electrode of the transfer transistor 93a1 is also made coincident with the light receiving center of the photodiode 91a, so that the transfer time can be shortened.
Next, a third embodiment will be described. The third embodiment is a modification of the first embodiment.
(Sectional Structure of Unit Pixel)
According to the third embodiment, the detection node center is at the center of the gate electrode of the vertical transfer transistor 93a2. In this example, the position of the light receiving center and the position of the detection node center are even more closely coincident than in the first embodiment.
As described above, according to the third embodiment, since the center of the gate electrode of the transfer transistor 93a2 more closely coincides with the light receiving center of the photodiode 91a, the transfer in the depth-wise direction is further facilitated and the transfer time can be shortened.
Next, a fourth embodiment will be described. The fourth embodiment is a modification of the first embodiment.
(Sectional Structure of Unit Pixel)
As shown in
The photodiode 92a includes a pn junction between an n-type semiconductor region 92a1 and a p-type semiconductor region 92a2 formed on the front surface side of the substrate 2. In the photodiode 92a, signal charge corresponding to the quantity of incident light through an n-type semiconductor region 2e is generated, and the generated signal charge is stored at the n-type semiconductor region 92a1. The electrons attributable to dark current generated at the interface of the substrate 2 are absorbed by the holes that are the majority carriers of a p-type semiconductor region 2f formed in the depth-wise direction from the backside surface of the substrate 2 and a p-type semiconductor region 2g formed on the front surface, so that the dark current is reduced.
The small-area pixel 92 is electrically isolated by an RDTI 31 formed in the p-type semiconductor region 2f. As shown in
The on-chip lens 62 collects emitted light and lets the collected light efficiently enter the photodiode 92a in the substrate 2 through the color filter 61.
The wiring layer 43 is formed on the front surface side of the substrate 2 and includes pixel transistors (among which only the conversion efficiency adjustment transistor 93b and the amplification transistor 93e are shown in
According to the fourth embodiment, a metal 51 connected to the photodiode 92a serves as the detection node center and is arranged in the wiring layer 43. In this case, the detection node is a direct-connection type that makes direct contact with the diffusion layer. Thus, a polysilicon (POLY) gate electrode does not have to be used.
As in the foregoing, according to the fourth embodiment, the detection node center is coincident with the light receiving center of the photodiode 92a, so that the transfer time can be shortened.
Next, a fifth embodiment will be described. The fifth embodiment is a modification of the first embodiment.
(Equivalent Circuit of Unit Pixel)
The transfer transistor 93i has its drain connected to the charge storage unit 93j which is made of a floating diffusion region. The transfer transistor 93i transfers charge from the photodiode 92a to the charge storage unit 93j in response to a transfer signal applied to the gate.
(Arrangement of Pixel Transistors)
The transfer transistor (TGL) 93a, the conversion efficiency adjustment transistors (FDG and FCG) 93b and 93c, the reset transistor (RST) 93d, and the transfer transistor (TGS) 93i are provided in the wiring 21. The amplification transistor (AMP) 93e and the selection transistor (SEL) 93f are provided in the wiring 22. The wiring 21 and the amplification transistor (AMP) 93e are connected by a bonding wire. The amplification transistor (AMP) 93e is also provided in the wiring 24.
(Sectional Structure of Unit Pixel)
As in the foregoing, according to the fifth embodiment, the center of the gate electrode of the transfer transistor 93i is coincident with the light receiving center of the photodiode 92a, so that the transfer time can be shortened.
Next, a sixth embodiment will be described. The sixth embodiment is a modification of the fifth embodiment.
In the solid-state imaging device 1E according to the sixth embodiment, the transfer transistor 93i1 is a vertical transistor with a vertical gate (VG). The detection node center is at the center of the gate electrode of the transfer transistor 93i1, which is a vertical transistor. In this case, the position of the light receiving center and the position of the detection node center are even more closely coincident than in the fifth embodiment.
As in the foregoing, according to the sixth embodiment, since the center of the gate electrode of the transfer transistor 93i1 more closely coincides with the light receiving center of the photodiode 92a, the transfer in the depth-wise direction is further facilitated, so that the transfer time can be shortened.
Next, a seventh embodiment will be described. The seventh embodiment is a modification of the first embodiment.
(Sectional Structure of Unit Pixel)
Next, an eighth embodiment will be described. The eighth embodiment is a modification of the seventh embodiment.
(Sectional Structure of Unit Pixel)
As in the foregoing, according to the eighth embodiment, the charge storage capacitor unit 93g serving as an intra-pixel capacitor is the MIM capacitor 71, and by varying the kind of insulating film, the capacitance value can be easily increased.
Next, a ninth embodiment will be described. The ninth embodiment is a modification of the first embodiment.
According to the ninth embodiment, the large-area pixel 91 includes an n-type semiconductor region 81 and a p-type semiconductor region 82 provided to form a pn junction with the n-type semiconductor region 81. The small-area pixel 92 includes an n-type semiconductor region 84 and a p-type semiconductor region 85 provided to form a pn junction with the n-type semiconductor region 84.
The depth position 86 of the pn junction of the small-area pixel 92 is positioned closer to the side of the wiring layer 43 than the depth position 83 of the pn junction of the large-area pixel 91. The depth position 86 of the pn junction of the small-area pixel 92 is positioned closer to the light incident side than the depth end of the RDTI 31.
The depth position of the RDTI 31 is not particularly limited. The position may be changed depending on the thickness of the silicon, or the DTI may be an FDTI etched from the front surface side or a penetrating DTI. For any DTI, the depth position 86 of the pn junction that forms the small-area pixel 92 needs only be closer to the wiring layer side than the depth position 83 of the pn junction of the large-area pixel 91 and closer to the light-incident side than the depth end of the RDTI 31.
As in the foregoing, according to the ninth embodiment, for the large-area pixel 91, the p-type semiconductor region 82 can be used to pin defect levels that occur at the silicon interface at the backside surface, so that dark current can be reduced. In addition to the dark current reduction, in the small-area pixel 92, even if high-energy implantation to deepen the n-type semiconductor region 84 is not possible because of a finer resist shape and depletion cannot be carried out, charge outflow to the adjacent large-area pixel 91 can be prevented by surrounding at least the neutral region with the RDTI 31.
Next, a tenth embodiment will be described.
The color filter 41 for the large-area pixel 91R is formed corresponding to the wavelength of red light desired to be received. The color filter 41 for the large-area pixel 91R transmits light in the red light wavelength, and lets the transmitted light enter the photodiode 91a. The color filters 41 for the large-area pixels 91Gr and 91Gb transmit light in the green light wavelength, and let the transmitted light enter the photodiode 91a. The color filter 41 for the large-area pixel 91B transmits light in the blue light wavelength, and lets the transmitted light enter the photodiode 91a.
Meanwhile, the color filter 61 for the small-area pixel 92R transmits light in the red light wavelength, and lets the transmitted light enter the photodiode 92a. The color filters 61 for the small-area pixels 92Gr and 92Gb transmit light in the green light wavelength, and let the transmitted light enter the photodiode 92a. The color filter 61 for the small-area pixel 92B transmits light in the blue light wavelength, and lets the transmitted light enter the photodiode 92a.
The color filter 41 for the large-area pixel 91C is formed corresponding to the wavelength of light desired to be received such as near-transparent light. The color filter 61 for the small-area pixel 92C is formed corresponding to the wavelength of light desired to be received such as near-transparent light.
The color filter 41 for the large-area pixel 91Y is formed corresponding to the wavelength of yellow light desired to be received. The color filter 41 for the large-area pixel 91Y transmits light in the wavelength of yellow light desired to be received, and lets the transmitted light enter the photodiode 91a.
The color filter 41 for the large-area pixel 91Cy is formed corresponding to the wavelength of cyan light desired to be received. The color filter 41 for the large-area pixel 91Cy transmits light in the wavelength of cyan light, and lets the transmitted light enter the photodiode 91a.
Meanwhile, the color filter 61 for the small-area pixel 92Y is formed corresponding to the wavelength of yellow light desired to be received. The color filter 61 for the small-area pixel 92Y transmits light in the wavelength of yellow light, and lets the transmitted light enter the photodiode 92a.
The color filter 61 for the small-area pixel 92Cy is formed corresponding to the wavelength of cyan light desired to be received. The color filter 61 for the small-area pixel 92Cy transmits light in the wavelength of cyan light, and lets the transmitted light enter the photodiode 92a.
The color filter 61 for the small-area pixel 92BLK transmits light in the wavelength of black light and lets the transmitted light enter the photodiode 92a.
The color filter 61 for the small-area pixel 92IR transmits light in the wavelength of infrared light and lets the light enter the photodiode 92a. The color filter 61 for the small-area pixel 92IR is formed corresponding to the wavelength of infrared light desired to be received.
The color filter 61 for the small-area pixel 92P polarizes light desired to be received and lets the light enter the photodiode 92a.
The color filter 41 for the large-area pixel 91IR is formed corresponding to the wavelength of infrared light desired to be received. The color filter 41 for large-area pixel 91IR transmits light in the infrared wavelength, and lets the transmitted light enter the photodiode 91a.
Note that the colors of the color filters 41 and 61 and the kinds of colors are not particularly limited. Color combinations between the large-area pixels 91 and the small-area pixels 92 are not limited either. The IR or polarization function of the small-area pixel 92 needs only be present in a part of the array arrangement.
As in the foregoing, the present disclosure has been described with reference to the first to tenth embodiments, but the description and drawings that form a part of the present disclosure should not be construed as limiting. It is to be understood that various alternative embodiments, examples, and operational techniques will be apparent to those skilled in the art from the gist of the technical content disclosed in the first to tenth embodiments. The features disclosed in the first to tenth embodiments may be combined as appropriate so that no contradictions arise. For example, features disclosed in different embodiments may be combined, and features according to different modifications of the same embodiment may be combined.
<Exemplary Application to Electronic Device>
Next, an electronic device according to an eleventh embodiment of the present disclosure will be described.
The electronic device 100 according to the eleventh embodiment includes a solid-state imaging device 101, an optical lens 102, a shutter device 103, a driving circuit 104, and a signal processing circuit 105. According to the eleventh embodiment, the solid-state imaging device 1 according to the first embodiment of the present disclosure is used as the solid-state imaging device 101 for the electronic device 100 (such as a camera).
The optical lens 102 forms an image based on image light (incident light 106) from an object on the imaging surface of the solid-state imaging device 101. In this way, signal charge is stored for a fixed time period in the solid-state imaging device 101. The shutter device 103 controls the light irradiation period and light-shielding period to the solid-state imaging device 101. The driving circuit 104 supplies drive signals that control the transfer operation of the solid-state imaging device 101 and the shutter operation of the shutter device 103. The drive signal (timing signal) supplied by the driving circuit 104 controls the signal transfer of the solid-state imaging device 101. The signal processing circuit 105 performs various kinds of signal processing on signals (pixel signals) output from the solid-state imaging device 101. A video signal having been subjected to signal processing is stored at a storage medium such as a memory or output to a monitor.
In this way, the electronic device 100 according to the eleventh embodiment allows optical color mixing to be reduced in the solid-state imaging device 101, so that the image quality of video signals can be improved.
Note that the electronic device 100 for which the solid-state imaging devices 1, 1A, 1B, 1C, 1D, 1E, 1F, 1G, or 1H can be used is not limited to a camera, and the solid-state imaging device can also be used for any of other electronic devices. For example, the solid-state imaging device may be used for an imaging device such as a camera module for a mobile device such as a mobile phone.
Also according to the eleventh embodiment, any of the solid-state imaging devices 1, 1A, 1B, 1C, 1D, 1E, 1F, 1G, and 1H according to the first to tenth embodiments is used as the solid-state imaging device 101 for an electronic device, but other configurations may be used.
The present disclosure can also be configured as follows.
(1)
A solid-state imaging device comprising a plurality of unit pixels arranged in a two-dimensional array, the plurality of unit pixels each comprising:
a photoelectric conversion unit that photoelectrically converts incident light; and
a wiring layer stacked on a surface opposite to a light-incident side surface of the photoelectric conversion unit and having a detection node that detects charge stored at the photoelectric conversion unit,
wherein
in at least some of the plurality of unit pixels,
a center of the detection node is substantially coincident with a light receiving center of the photoelectric conversion unit.
(2)
The solid-state imaging device according to (1), wherein the plurality of unit pixels comprises a large-area pixel and a small-area pixel, and
in both or one of the large-area pixel and the small-area pixel, the center of the detection node is substantially coincident with the light receiving center of the photoelectric conversion unit.
(3)
The solid-state imaging device according to (1) or (2), wherein the detection node is a planar type node.
(4)
The solid-state imaging device according to (1) or (2), wherein the detection node is a vertical transistor.
(5)
The solid-state imaging device according to (1) or (2), wherein the detection node is a directly connecting type node.
(6)
The solid-state imaging device according to (1) or (2), wherein the wiring layer has a charge storage unit that stores charge generated by the photoelectric conversion unit.
(7)
The solid-state imaging device according to (1) or (2), wherein the wiring layer has a pixel transistor that performs signal processing on charge output from the photoelectric conversion unit.
(8)
The solid-state imaging device according to (1) or (2), wherein the wiring layer has an intra-pixel capacitor.
(9)
The solid-state imaging device according to (8), wherein the intra-pixel capacitor is a metal-insulator-metal (MIM) capacitor.
(10)
The solid-state imaging device according to (2), wherein the photoelectric conversion unit has a first electrode region of a first conductivity type, and a second electrode region of a second conductivity type provided to form a pn junction with the first electrode region, and
the depth position of the pn junction of the small-area pixel is located closer to the wiring layer side than the depth position of the pn junction of the large-area pixel.
(11)
The solid-state imaging device according to (10), further comprising an inter-pixel light-shielding part that insulates and light-shields between the small-area pixel and the large-area pixel, wherein
the depth position of the pn junction of the small-area pixel is located closer to the wiring layer side than the depth position of the pn junction of the large-area pixel and closer to the light-incident side than the depth end of the inter-pixel light-shielding part.
(12)
The solid-state imaging device according to (1), wherein at least some of the plurality of unit pixels comprise a color filter corresponding to a different light wavelength and provided on the light-incident side of the photoelectric conversion unit.
(13)
The solid-state imaging device according to (1), wherein the center of the detection node includes a transfer gate electrode for transferring charge stored at the photoelectric conversion unit.
(14)
The solid-state imaging device according to (1), wherein the center of the detection node includes metal.
(15)
An electronic device comprising a solid-state imaging device, the solid-state imaging device including a plurality of unit pixels arranged in a two-dimensional array,
the plurality of unit pixels each including:
a photoelectric conversion unit that photoelectrically converts incident light; and
a wiring layer stacked on a surface opposite to a light-incident side surface of the photoelectric conversion unit and having a detection node that detects charge stored at the photoelectric conversion unit,
wherein
in at least some of the plurality of unit pixels,
a center of the detection node is substantially coincident with a light receiving center of the photoelectric conversion unit.
Number | Date | Country | Kind
---|---|---|---
2020-138945 | Aug 2020 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/025185 | 7/2/2021 | WO |