The technology according to the present disclosure (hereinafter also referred to as “the present technology”) relates to a solid-state imaging device and an electronic apparatus.
There has been a known solid-state imaging device in which a first film having a high refractive index and a second film having a low refractive index are disposed in this order on a surface of a semiconductor substrate on which photoelectric conversion elements are formed (see Patent Document 1).
In the conventional solid-state imaging device, however, it is necessary to thicken the first film and/or the second film to suppress reflection on the surface of the semiconductor substrate, and color mixing (crosstalk) increases accordingly.
Therefore, the present technology mainly aims to provide a solid-state imaging device capable of suppressing reflection on a surface of a semiconductor substrate and color mixing.
The present technology provides a solid-state imaging device that includes:
The thickness of the semiconductor layer may be ½ or less of the total thickness of the first and second transparent dielectric layers.
The thickness of the semiconductor layer may be 2 nm or greater but not greater than 10 nm.
The thickness of the first transparent dielectric layer may be 5 nm or greater but not greater than 20 nm.
The thickness of the second transparent dielectric layer may be 15 nm or greater but not greater than 60 nm.
The total thickness of the first transparent dielectric layer, the semiconductor layer, and the second transparent dielectric layer may be 20 nm or greater but not greater than 80 nm.
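For reference, the dimensional conditions listed above can be collected into a single validity check. The sketch below is purely illustrative; the function name, argument order, and the example thickness split are assumptions for illustration, not taken from the source.

```python
def stack_within_ranges(t_d1_nm: float, t_semi_nm: float, t_d2_nm: float) -> bool:
    """Check a candidate stack against the thickness ranges stated above (all in nm).

    t_d1_nm:   first transparent dielectric layer
    t_semi_nm: semiconductor layer
    t_d2_nm:   second transparent dielectric layer
    """
    total = t_d1_nm + t_semi_nm + t_d2_nm
    return (
        2 <= t_semi_nm <= 10                       # semiconductor layer: 2-10 nm
        and 5 <= t_d1_nm <= 20                     # first dielectric: 5-20 nm
        and 15 <= t_d2_nm <= 60                    # second dielectric: 15-60 nm
        and 20 <= total <= 80                      # total: 20-80 nm
        and t_semi_nm <= (t_d1_nm + t_d2_nm) / 2   # semiconductor <= half of dielectrics
    )
```

For instance, a 15 nm first dielectric, 5 nm semiconductor, and 40 nm second dielectric (an assumed split consistent with the 60 nm total used later in the description) satisfies every condition, whereas a 12 nm semiconductor layer does not.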
The semiconductor layer may include p-Si or a-Si.
The second transparent dielectric layer may include one of SiO2 and a transparent dielectric material having a higher refractive index than SiO2.
The second transparent dielectric layer may include a transparent dielectric material having a refractive index of 1.7 or higher.
The second transparent dielectric layer may include Nb2O5, Ta2O5, TiO2, HfO2, or ZrO2.
The first transparent dielectric layer may include a multilayer film in which a plurality of films is stacked.
The plurality of films may include an Al2O3 film and a Ta2O5 film in this order from the side of the semiconductor substrate.
A negative bias may be applied to the semiconductor layer.
The solid-state imaging device may further include a light-blocking film in contact with the semiconductor layer on the opposite side from the side of the first transparent dielectric layer, and a negative bias may be applied to the light-blocking film.
A trench may be formed in the surface of the semiconductor substrate on the light incident side, and part of the first transparent dielectric layer, part of the semiconductor layer, and part of the second transparent dielectric layer may be disposed in the trench.
A negative bias may be applied to the semiconductor layer.
The plurality of layers may include a color filter layer in which a plurality of color filters is arranged in an in-plane direction on the opposite side of the second transparent dielectric layer from the side of the semiconductor layer, and the second transparent dielectric layer may have a plurality of regions that correspond to the plurality of color filters and have different thicknesses.
Among the plurality of regions, the region corresponding to the color filter having a longer transmission wavelength may be thicker.
The plurality of layers may include a microlens layer on the opposite side of the second transparent dielectric layer from the side of the semiconductor layer.
The present technology also provides an electronic apparatus including the solid-state imaging device.
In the description below, preferred embodiments of the present technology will be explained in detail with reference to the accompanying drawings. Note that, in the present specification and the drawings, components having substantially the same functional configurations are denoted by the same reference signs, and repetitive explanation is not made. The embodiments described below provide representative embodiments of the present technology, and the scope of the present technology is not to be narrowly interpreted according to those embodiments. In the present specification, even in a case where a solid-state imaging device and an electronic apparatus according to the present technology each exert a plurality of effects, each method for manufacturing the solid-state imaging device and the electronic apparatus according to the present technology is only required to exert at least one of the effects. The effects described in the present specification are merely examples and are not restrictive, and other effects may be achieved.
Further, the explanation will be made in the following order.
Conventionally, an image sensor (a solid-state imaging device) has a problem of color mixing (crosstalk) in which light leaks into adjacent pixels through a transparent dielectric film under color filters. For example, in a solid-state imaging device 1 of a comparative example illustrated in
In the solid-state imaging device 1 of the comparative example, 5.9% of light IL1 (RGB average light) that enters through the color filter layer CFL and the transparent dielectric film TDF is reflected by the surface of the semiconductor substrate SS1, turning into reflected light RL and returning to the side of the color filter CF. About 94% of the light enters the photoelectric conversion elements PCE as light IL2. As described above, in the solid-state imaging device 1, about 6% of incident light is lost due to reflection on the surface of the semiconductor substrate SS1. Furthermore, because the transparent dielectric film TDF is thick, the amount of crosstalk light CL (leakage light) that causes color mixing is large. The high reflectance on the surface of the semiconductor substrate SS1 also leads to a decrease in efficiency (a decrease in sensitivity).
That is, in the solid-state imaging device 1 of the comparative example, there is room for improvement in suppressing reflection on the surface of the semiconductor substrate and color mixing.
Therefore, as a result of intensive studies, the inventor has developed a solid-state imaging device according to the present technology as a solid-state imaging device capable of suppressing reflection on the surface of the semiconductor substrate and color mixing.
In the description below, an embodiment of the present technology will be explained in detail through some examples.
As an example, as illustrated in
As an example, the pixel substrate 100 includes a plurality of pixels arranged two-dimensionally (arranged in a matrix, for example). Each pixel includes a photoelectric conversion element 100a1. The photoelectric conversion element 100a1 is a photodiode (PD), for example. More specifically, the photoelectric conversion element is a PN photodiode, a PIN photodiode, a single-photon avalanche diode (SPAD), an avalanche photodiode (APD), or the like, for example.
As an example, the pixel substrate 100 includes a first semiconductor substrate 100a and a first wiring layer 100b stacked on each other. In each pixel, as an example, the surface of the first semiconductor substrate 100a on the side opposite to the side of the first wiring layer 100b is the surface on the light incident side.
As an example, the plurality of pixels described above, a control circuit (analog circuit) that controls each pixel, and an A/D conversion circuit (analog circuit) are formed on the first semiconductor substrate 100a.
The control circuit includes circuit elements such as transistors, for example. Specifically, as an example, the control circuit includes a plurality of pixel transistors (so-called MOS transistors). The plurality of pixel transistors can include the three transistors of a transfer transistor, a reset transistor, and an amplification transistor, for example. Further, a selection transistor may be added, so that the pixel transistors include four transistors. Since the equivalent circuit of a unit pixel is similar to an ordinary one, a detailed explanation thereof is omitted here. A pixel can be formed as one unit pixel. Alternatively, a pixel may have a shared pixel structure, which is a structure in which a plurality of photodiodes shares the floating diffusion that forms part of the transfer transistors, and the transistors other than the transfer transistors.
The A/D conversion circuit converts an analog signal generated in each pixel of the pixel substrate 100 into a digital signal.
The first semiconductor substrate 100a is a Si substrate, a Ge substrate, a GaAs substrate, an InGaAs substrate, or the like, for example. The first wiring layer 100b includes an insulating layer and an internal wiring line (intra-layer wiring line) provided in the insulating layer. The first wiring layer 100b may be a single-layer wiring layer in which the internal wiring line is provided in a single layer in the insulating layer, or may be a multilayer wiring layer in which the internal wiring line is provided in multiple layers in the insulating layer. The insulating layer is formed with a silicon oxide film, a silicon nitride film, or the like, for example. The internal wiring line is formed with copper (Cu), aluminum (Al), tungsten (W), or the like, for example.
The processing substrate 200 includes a second semiconductor substrate 200a and a second wiring layer 200b stacked on each other. The second wiring layer 200b is bonded so as to face the first wiring layer 100b. The processing substrate 200 includes a logic circuit and a memory circuit, as an example. Note that the processing substrate 200 may include an AI circuit, an interface circuit, and the like, for example, in addition to the logic circuit and the memory circuit. Note that the interface circuit is a circuit that inputs and outputs signals. The AI circuit is a circuit that has a learning function with artificial intelligence (AI).
The logic circuit processes a digital signal generated by the A/D conversion circuit described above. The memory circuit temporarily stores and holds the digital signal generated by the A/D conversion circuit described above and/or the digital signal processed by the logic circuit.
The second semiconductor substrate 200a is a Si substrate, a Ge substrate, a GaAs substrate, an InGaAs substrate, or the like, for example. The second wiring layer 200b includes an insulating layer and an internal wiring line (intra-layer wiring line) provided in the insulating layer. The second wiring layer 200b may be a single-layer wiring layer in which the internal wiring line is provided in a single layer in the insulating layer, or may be a multilayer wiring layer in which the internal wiring line is provided in multiple layers in the insulating layer. The insulating layer is formed with a silicon oxide film, a silicon nitride film, or the like, for example. The internal wiring line is formed with copper (Cu), aluminum (Al), tungsten (W), or the like, for example.
The plurality of layers including the semiconductor layer 400 includes, in addition to the semiconductor layer 400, first and second transparent dielectric layers 300 and 500, a color filter layer 600, a microlens layer 700, and a protective film 800, as an example.
The first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 are stacked in this order from the side of the first semiconductor substrate 100a (the lower side). The color filter layer 600 is disposed on the opposite side (upper side) of the second transparent dielectric layer 500 from the side of the semiconductor layer 400. The microlens layer 700 is disposed on the opposite side (upper side) of the color filter layer 600 from the side of the second transparent dielectric layer 500.
The color filter layer 600 includes a plurality of color filters 600a corresponding to the plurality of pixels. Each color filter 600a is disposed at a position above the photoelectric conversion element 100a1 of the corresponding pixel via the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500. Note that a film that has an appropriate thickness and a lower refractive index than that of the second transparent dielectric layer 500, such as a SiO2 film, for example, may be disposed between the second transparent dielectric layer 500 and the color filter layer 600.
As an example, each color filter 600a is a color filter corresponding to one of the colors (wavelengths) of red (R), green (G), and blue (B), and transmits light of the corresponding color (wavelength). Each color filter 600a is a so-called on-chip color filter. The thickness of each color filter 600a is 500 nm, for example.
The microlens layer 700 includes a plurality of microlenses 700a corresponding to the plurality of pixels. Each microlens 700a is a so-called on-chip microlens, and condenses incident light from the outside onto the photoelectric conversion element 100a1 of the corresponding pixel. The thickness of the microlens layer 700 is 1000 nm, for example.
The protective film 800 is a low-temperature oxide film called low temperature oxide (LTO), as an example, and is formed on the microlens layer 700. The thickness of the protective film 800 is 110 nm, for example.
On the second transparent dielectric layer 500, an inter-pixel light-blocking film 550 that reduces light leakage (color mixing) between adjacent pixels is provided. The inter-pixel light-blocking film 550 is formed in a lattice-like shape along the boundary lines between the pixels in a planar view, for example.
The inter-pixel light-blocking film 550 is formed with a material that blocks light. As for the material that forms the inter-pixel light-blocking film 550, a material that has high light-blocking properties and is suitable for fine processing is preferable so that processing can be performed with high accuracy by etching, for example. Examples of such materials include metals such as aluminum (Al), tungsten (W), and copper (Cu), for example.
As an example, the first transparent dielectric layer 300 is formed with a multilayer film in which a plurality of (two, for example) films is stacked. Here, the multilayer film that forms the first transparent dielectric layer 300 has a two-layer structure in which an intermediate refractive index film 300a and a high refractive index film 300b are stacked in this order from the side of the first semiconductor substrate 100a. The intermediate refractive index film 300a is formed with Al2O3, for example. The high refractive index film 300b is formed with Ta2O5, for example. The thickness (total thickness) of the first transparent dielectric layer 300 is preferably 5 nm or greater but not greater than 20 nm, and more preferably 10 nm or greater but not greater than 17 nm, as an example. Here, the thickness of the first transparent dielectric layer 300 is set to 15 nm, for example. Note that the first transparent dielectric layer 300 may be a single-layer film formed with a low refractive index film (its refractive index being lower than 1.5), an intermediate refractive index film (its refractive index being 1.5 or higher but lower than 1.7), or a high refractive index film (its refractive index being 1.7 or higher), or may be a multilayer film in which three or more films including at least one of a low refractive index film, an intermediate refractive index film, and a high refractive index film are stacked.
The semiconductor layer 400 is formed with bulk Si, polysilicon (p-Si), or amorphous silicon (a-Si), as an example. The semiconductor layer 400 may be a single-layer film, or may be a multilayer film in which a plurality of semiconductor films is stacked.
As an example, the second transparent dielectric layer 500 is preferably formed with a transparent dielectric material (an intermediately refractive transparent dielectric material or a high refractive transparent dielectric material) having a higher refractive index than that of SiO2. The second transparent dielectric layer 500 is more preferably formed with a transparent dielectric material having a refractive index of 1.7 or higher. Specifically, the second transparent dielectric layer 500 may include Nb2O5, Ta2O5, TiO2, HfO2, or ZrO2, for example. Note that the second transparent dielectric layer 500 may be a multilayer film in which a plurality of films including at least one of an intermediate refractive index film and a high refractive index film is stacked.
The multilayer film including the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 functions as a reflection suppression film that suppresses reflection of incident light on the surface of the first semiconductor substrate 100a. The total thickness of the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 is preferably 20 nm or greater but not greater than 80 nm, more preferably 35 nm or greater but not greater than 75 nm, and even more preferably 50 nm or greater but not greater than 65 nm. Here, the total thickness of the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 is set to 60 nm, for example. Further, as an example, the thickness of the semiconductor layer 400 is preferably ½ or less of the total thickness of the first and second transparent dielectric layers 300 and 500. This is because the silicon (Si) that forms the semiconductor layer 400 is an absorptive material for RGB visible light, and therefore, if the silicon is too thick, sensitivity might be significantly degraded.
In view of the above, even when the total thickness of the reflection suppression film between the first semiconductor substrate 100a on which the photoelectric conversion elements 100a1 are formed and the color filter layer 600 is half or less of the total thickness (150 nm, for example) of the reflection suppression film of the solid-state imaging device 1, the solid-state imaging device 10 can suppress reflection on the surface of the first semiconductor substrate 100a. This is because the solid-state imaging device 10 has a three-layer structure in which the semiconductor layer 400 is sandwiched between the first and second transparent dielectric layers 300 and 500 on the surface of the first semiconductor substrate 100a on the light incident side. Since the semiconductor layer 400 is disposed between the first and second transparent dielectric layers 300 and 500, part of the incident light is absorbed by the semiconductor layer 400 and lost. Even so, reflection on the semiconductor substrate surface can be sufficiently suppressed despite the reduced total thickness. As a result, color mixing can be suppressed, and sensitivity can be increased.
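The reflection-suppressing effect of such a thin three-layer stack can be checked with a standard transfer-matrix (characteristic-matrix) calculation for thin-film optics. The sketch below is illustrative only: the refractive indices, the wavelength, and the 15/5/40 nm thickness split are assumed typical values rather than values taken from the source, and absorption in the thin Si layer is ignored for simplicity.

```python
import numpy as np

def layer_matrix(n, d_nm, lam_nm):
    """Characteristic matrix of one homogeneous film at normal incidence."""
    delta = 2 * np.pi * n * d_nm / lam_nm  # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, n_in, n_sub, lam_nm):
    """Reflectance of a stack; `layers` is a list of (index, thickness_nm), top first."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam_nm)
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Assumed indices: color-filter resin above (~1.5), Si substrate below (~4.1 at 550 nm),
# Ta2O5 ~2.1, Al2O3 ~1.77. Stack listed from the light-incident side downward.
n_filter, n_si = 1.5, 4.1
stack = [(2.1, 40),              # second transparent dielectric layer 500
         (4.1, 5),               # semiconductor layer 400 (thin Si)
         (2.1, 10), (1.77, 5)]   # first transparent dielectric layer 300 (Ta2O5 / Al2O3)

R_coated = reflectance(stack, n_filter, n_si, lam_nm=550)
R_bare = reflectance([], n_filter, n_si, lam_nm=550)
print(f"bare Si: {R_bare:.3f}, 60 nm stack: {R_coated:.3f}")
```

Under these assumed values, the 60 nm three-layer stack brings the reflectance well below that of a bare resin/Si interface, which is consistent with the qualitative argument above.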
A negative bias is preferably applied to the semiconductor layer 400. As a result, dark current can be reduced, and sensitivity can be increased.
In the description below, an operation of the solid-state imaging device 10 is explained. Light (image light) from the object enters the photoelectric conversion element 100a1 of each pixel through the microlens layer 700, the color filter layer 600, the second transparent dielectric layer 500, the semiconductor layer 400, and the first transparent dielectric layer 300 in this order. At this point of time, the photoelectric conversion elements 100a1 perform photoelectric conversion. The electrical signals (analog signals) photoelectrically converted by the photoelectric conversion elements 100a1 are transmitted to the A/D conversion circuit, are converted into digital signals, are temporarily stored and held in the memory circuit, and are sequentially transmitted to the logic circuit. The logic circuit processes the transmitted digital signals. Note that the digital signals can also be temporarily stored and held in the memory circuit during and/or after the processing in the logic circuit.
In the description below, a method for manufacturing the solid-state imaging device 10 is explained with reference to a flowchart in
In the first step S1, the pixel substrate 100 and the processing substrate 200 are prepared (see
In the next step S2, the pixel substrate 100 and the processing substrate 200 are joined (see
In the next step S3, the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 are stacked (see
In the next step S4, the inter-pixel light-blocking film 550 is formed (see
In the next step S5, the color filter layer 600 is formed (see
In the next step S6, the microlens layer 700 is formed (see
In the final step S7, the protective film 800 is formed (see
The solid-state imaging device 10 according to Example 1 of an embodiment of the present technology includes the first semiconductor substrate 100a on which the photoelectric conversion elements 100a1 are formed, and a plurality of layers including the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 in this order from the side of the first semiconductor substrate 100a.
In this case, there is no need to increase the total thickness of the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 to suppress reflection on the surface of the first semiconductor substrate 100a, and color mixing (crosstalk) can be reduced. Conversely, even if the total thickness is reduced, reflection on the surface of the first semiconductor substrate 100a can be suppressed.
As a result, the solid-state imaging device 10 according to Example 1 can provide a solid-state imaging device capable of suppressing reflection on the surface (the surface on the light incident side) of the first semiconductor substrate 100a and color mixing.
The thickness of the semiconductor layer 400 is preferably ½ or less of the total thickness of the first and second transparent dielectric layers 300 and 500. Thus, it is possible to suppress reflection on the surface of the first semiconductor substrate 100a, while suppressing absorption of light in the semiconductor layer 400.
The solid-state imaging device 20 has a configuration similar to that of the solid-state imaging device 10 according to Example 1, except that the inter-pixel light-blocking film 550 is in contact with the semiconductor layer 400 on the opposite side (upper side) from the side of the first transparent dielectric layer 300.
In the solid-state imaging device 20, the second transparent dielectric layer 500 is partitioned for each pixel by the inter-pixel light-blocking film 550.
In the solid-state imaging device 20, a negative bias is preferably applied to the inter-pixel light-blocking film 550. As a result, a negative bias can be efficiently applied from the inter-pixel light-blocking film 550 to each photoelectric conversion element 100a1 via the semiconductor layer 400. Thus, dark current can be reduced, and sensitivity can be increased.
The solid-state imaging device 20 operates in a manner similar to the solid-state imaging device 10 according to Example 1.
In the description below, a method for manufacturing the solid-state imaging device 20 is explained with reference to a flowchart in
In the first step S11, the pixel substrate 100 and the processing substrate 200 are prepared (see
In the next step S12, the pixel substrate 100 and the processing substrate 200 are joined (see
In the next step S13, the first transparent dielectric layer 300 and the semiconductor layer 400 are stacked (see
In the next step S14, the inter-pixel light-blocking film 550 is formed (see
In the next step S15, the second transparent dielectric layer 500 is formed (see
In the next step S16, the second transparent dielectric layer 500 is partially removed (see
In the next step S17, the color filter layer 600 is formed (see
In the next step S18, the microlens layer 700 is formed (see
In the final step S19, the protective film 800 is formed (see
The solid-state imaging device 30 has a configuration similar to that of the solid-state imaging device 20 according to Example 2, except that the inter-pixel light-blocking film 550 is covered with the second transparent dielectric layer 500 on the opposite side from the side of the semiconductor layer 400.
In the solid-state imaging device 30, a negative bias is preferably applied to the inter-pixel light-blocking film 550. As a result, a negative bias can be efficiently applied from the inter-pixel light-blocking film 550 to each photoelectric conversion element 100a1 via the semiconductor layer 400. Thus, dark current can be reduced, and sensitivity can be increased.
The solid-state imaging device 30 operates in a manner similar to the solid-state imaging device 10 according to Example 1.
In the description below, a method for manufacturing the solid-state imaging device 30 is explained with reference to a flowchart in
In the first step S21, the pixel substrate 100 and the processing substrate 200 are prepared (see
In the next step S22, the pixel substrate 100 and the processing substrate 200 are joined (see
In the next step S23, the first transparent dielectric layer 300 and the semiconductor layer 400 are stacked (see
In the next step S24, the inter-pixel light-blocking film 550 is formed (see
In the next step S25, the second transparent dielectric layer 500 is formed (see
In the next step S26, the color filter layer 600 is formed (see
In the next step S27, the microlens layer 700 is formed (see
In the final step S28, the protective film 800 is formed (see
The solid-state imaging device 40 has a configuration similar to that of the solid-state imaging device 10 according to Example 1, except that a trench TR is provided on the surface (the surface on the light incident side) of the first semiconductor substrate 100a, and part of the first transparent dielectric layer 300, part of the semiconductor layer 400, and part of the second transparent dielectric layer 500 are disposed in the trench TR.
The trench TR is formed in a lattice-like shape in a planar view so as to partition the photoelectric conversion elements 100a1 of the respective pixels. The trench TR may be deep or shallow, and the depth can be changed as appropriate.
In the solid-state imaging device 40, a negative bias is preferably applied to the semiconductor layer 400. As a result, a negative bias can be efficiently applied from the semiconductor layer 400 to each photoelectric conversion element 100a1 via the trench TR. Thus, dark current can be reduced, and sensitivity can be increased.
The solid-state imaging device 40 operates in a manner similar to the solid-state imaging device 10 according to Example 1.
In the description below, a method for manufacturing the solid-state imaging device 40 is explained with reference to a flowchart in
In the first step S31, the pixel substrate 100 and the processing substrate 200 are prepared (see
In the next step S32, the pixel substrate 100 and the processing substrate 200 are joined (see
In the next step S33, the trench TR is formed (see
In the next step S34, the first transparent dielectric layer 300, the semiconductor layer 400, and the second transparent dielectric layer 500 are stacked (see
In the next step S35, the inter-pixel light-blocking film 550 is formed (see
In the next step S36, the color filter layer 600 is formed (see
In the next step S37, the microlens layer 700 is formed (see
In the final step S38, the protective film 800 is formed (see
In the solid-state imaging device 50, in the second transparent dielectric layer 500, a plurality of regions (regions 500a1, 500a2, and 500a3, for example) corresponding to a plurality of color filters 600a (color filters 600a1, 600a2, and 600a3, for example) have different thicknesses.
As an example, the color filter 600a1 is a color filter that transmits red light, the color filter 600a2 is a color filter that transmits green light, and the color filter 600a3 is a color filter that transmits blue light.
Meanwhile, the relationship between the film thickness of the second transparent dielectric layer 500 and the reflectance of light on the surface of the first semiconductor substrate 100a varies depending on the wavelength (color) of the light.
Therefore, the thickness of the region of the second transparent dielectric layer 500 corresponding to each color filter 600a is desirably adjusted and optimized so that the reflectance of the light transmitted through the color filter 600a on the surface of the first semiconductor substrate 100a can be reduced as much as possible.
As an example, among the plurality of regions 500a1, 500a2, and 500a3 of the second transparent dielectric layer 500, a region is preferably thicker as the transmission wavelength of the corresponding color filter 600a is longer.
Specifically, the thickness of the region 500a1 corresponding to the color filter 600a1 having the transmission wavelength of red (R) may be about 50 nm, the thickness of the region 500a2 corresponding to the color filter 600a2 having the transmission wavelength of green (G) may be about 38 nm, and the thickness of the region 500a3 corresponding to the color filter 600a3 having the transmission wavelength of blue (B) may be about 25 nm.
Note that the thickness of the second transparent dielectric layer 500 can also be optimized in a similar manner for colors (wavelengths) other than RGB, such as infrared (IR), for example.
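The trend described above (thicker regions for longer transmission wavelengths) follows the quarter-wave intuition for an antireflection film: a single film of index n is most effective near a thickness of λ/(4n). The sketch below uses assumed values (a Ta2O5-like index of 2.1 and representative RGB wavelengths) that are not from the source; because the real stack contains additional layers, only the monotonic trend carries over, not the absolute thicknesses.

```python
# Quarter-wave rule of thumb for a single antireflection film of index n:
# the optimal optical thickness is about lambda / (4 * n).
def quarter_wave_nm(lam_nm: float, n: float = 2.1) -> float:
    return lam_nm / (4 * n)

# Representative RGB wavelengths (assumed): longer wavelength -> thicker film.
for color, lam in [("R", 620), ("G", 530), ("B", 450)]:
    print(f"{color}: {quarter_wave_nm(lam):.1f} nm")
```

The same rule of thumb extends to other wavelengths such as infrared, which is why the per-color optimization mentioned above generalizes beyond RGB.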
The solid-state imaging device 10-1 has a configuration similar to that of the solid-state imaging device 10 according to Example 1, except that the second transparent dielectric layer 500 is formed with a low refractive transparent dielectric material.
The low refractive transparent dielectric material as the second transparent dielectric layer 500 is SiO2 or a transparent dielectric material having a lower refractive index than that of SiO2, for example.
In the solid-state imaging device 10-1, the semiconductor layer 400 is provided between the first and second transparent dielectric layers 300 and 500. Accordingly, reflection on the surface of the first semiconductor substrate 100a can be suppressed, even if the thickness of the low refractive transparent dielectric material as the second transparent dielectric layer 500 is made smaller than that of the solid-state imaging device 1 (see
The configurations of the solid-state imaging devices of the respective Examples and modifications described above can be changed as appropriate.
The reflection suppression film may have another thin film that is separate from the semiconductor layer 400 and the first and second transparent dielectric layers 300 and 500.
The first transparent dielectric layers 300 and the semiconductor layers 400 may be alternately stacked, as long as the total thickness is within a predetermined range (7 nm to 30 nm, for example).
The solid-state imaging devices according to the above respective Examples and modifications are of a back-illuminated type, but may instead be of a surface-illuminated type in which the first wiring layer 100b is provided on the light incident surface side of the first semiconductor substrate 100a. In the case of a surface-illuminated structure, the first wiring layer 100b is provided on one side of the first semiconductor substrate 100a, and the color filter layer 600 and the microlens layer 700 are provided on that same side, with the first wiring layer 100b interposed in between.
A solid-state imaging device may not include at least one of the color filter layer 600, the microlens layer 700, and the protective film 800, for example. In a case where the solid-state imaging device is used to generate a black-and-white image, for example, the color filter layer 600 may not be provided. In a case where the solid-state imaging device is used for sensing such as distance measurement, for example, at least one of the color filter layer 600 and the microlens layer 700 may not be provided.
In each of the above embodiments and modifications, the first wiring layer 100b of the pixel substrate 100 and the second wiring layer 200b of the processing substrate 200 are electrically connected by metal joining, for example. However, in addition to or instead of this, the first wiring layer 100b and the second wiring layer 200b may be electrically connected by a through silicon via (TSV).
In each of the above Examples and modifications, a solid-state imaging device having a two-layer structure in which the pixel substrate 100 and the processing substrate 200 are stacked has been described. However, the present technology can also be applied to a stacked solid-state imaging device with three or more layers in which the pixel substrate 100 and a plurality of processing substrates 200 are stacked.
In each of the above Examples and modifications, a stacked solid-state imaging device in which the pixel substrate 100 and the processing substrate 200 are stacked has been described. However, the present technology can also be applied to a non-stacked solid-state imaging device in which a pixel unit corresponding to a pixel substrate and a processing unit corresponding to a processing substrate are arranged adjacent to each other on the same substrate.
For example, the configurations of the solid-state imaging devices of the above-described embodiments and modifications may be combined with each other within a range that is not technically contradictory.
The numerical values, materials, shapes, dimensions, and the like used in the description of the above respective Examples and modifications are merely examples, and do not limit the present technology.
The respective Examples and modifications described above can be used in various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays, as described below, for example. That is, as illustrated in
Specifically, in the field of viewing, a solid-state imaging device according to the present technology can be used for a device for capturing an image to be viewed, such as a digital camera, a smartphone, and a mobile phone with a camera function, for example.
In the field of transportation, a solid-state imaging device according to the present technology can be used for a device for transportation, such as a vehicle-mounted sensor that captures images of the front, the rear, the surroundings, the interior, and the like of an automobile, a monitoring camera that monitors traveling vehicles and roads, or a distance measurement sensor that measures the distance between vehicles, for safe driving such as automatic stop, recognition of the driver's state, and the like, for example.
In the field of household electric appliances, to capture an image of a user's gesture and operate a device in accordance with the gesture, for example, a solid-state imaging device according to the present technology can be used for a device that is used in household electric appliances such as a TV receiver, a refrigerator, and an air conditioner.
In the field of medical care and healthcare, for example, a solid-state imaging device according to the present technology can be used for a device that is used for medical care and healthcare, such as an endoscope or a device that performs angiography by receiving infrared light.
In the field of security, for example, a solid-state imaging device according to the present technology can be used for a device that is used for security, such as a monitoring camera for crime prevention or a camera for person authentication.
In the field of beauty care, for example, a solid-state imaging device according to the present technology can be used for a device that is used for beauty care, such as a skin measuring instrument for capturing an image of the skin or a microscope for capturing an image of the scalp.
In the field of sports, for example, a solid-state imaging device according to the present technology can be used for a device for sports, such as an action camera or a wearable camera for use in sports and the like.
In the field of agriculture, for example, a solid-state imaging device according to the present technology can be used for a device that is used for agriculture, such as a camera for monitoring a condition of fields and crops.
Next, examples of use of a solid-state imaging device according to the present technology (a solid-state imaging device according to each of the Examples and modifications, for example) are specifically described. The solid-state imaging device according to each of the Examples and modifications described above can be applied, as a solid-state imaging device 501, to an electronic apparatus of any type that has an imaging function, such as the camera system of a digital still camera or a video camera, or a mobile phone having an imaging function.
The optical system 502 guides image light (incident light) from an object to the pixel region of the solid-state imaging device 501, and may include a plurality of optical lenses. The shutter device 503 controls the light irradiation period and the light shielding period for the solid-state imaging device 501. The drive unit 504 controls the transfer operation of the solid-state imaging device 501 and the shutter operation of the shutter device 503. The signal processing unit 505 performs various types of signal processing on a signal output from the solid-state imaging device 501. A video signal Dout after the signal processing is stored in a storage medium such as a memory, or is output to a monitor or the like.
A solid-state imaging device according to the present technology (a solid-state imaging device according to each Example, for example) can also be applied to other electronic apparatuses that detect light, such as a time-of-flight (TOF) sensor. In a case where the solid-state imaging device is applied to a TOF sensor, for example, it can be applied to a distance image sensor using the direct TOF measurement method or to a distance image sensor using the indirect TOF measurement method. In a distance image sensor using the direct TOF measurement method, the arrival timing of photons is obtained directly in the time domain in each pixel. Therefore, a light pulse having a short pulse width is transmitted, and an electrical pulse is generated by a receiver that responds at high speed. The present disclosure can be applied to such a receiver. Meanwhile, in the indirect TOF method, the flight time of light is measured with a semiconductor element structure in which the detection and accumulation amounts of carriers generated by light change depending on the arrival timing of the light. The present disclosure can also be applied to such a semiconductor structure. In the case of application to a TOF sensor, a color filter layer and a microlens layer as illustrated in
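The indirect TOF measurement mentioned above can be sketched with a generic four-phase demodulation model. This is a common textbook scheme, not the specific pixel structure of the present technology; the sample names, the ideal noise-free simulation, and the 20 MHz modulation frequency are all assumptions for illustration.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def indirect_tof_distance(q0, q90, q180, q270, f_mod_hz):
    """Distance from four correlation samples taken at 0/90/180/270 degree
    demodulation offsets (a common 4-phase indirect ToF scheme)."""
    phase = math.atan2(q90 - q270, q0 - q180)  # phase delay of the return light
    if phase < 0:
        phase += 2 * math.pi                   # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * f_mod_hz)

def simulate_samples(distance_m, f_mod_hz, amplitude=1.0, offset=2.0):
    """Ideal (noise-free) correlation samples for a target at distance_m."""
    phase = 4 * math.pi * f_mod_hz * distance_m / C
    return [amplitude * math.cos(phase - k * math.pi / 2) + offset
            for k in range(4)]

q = simulate_samples(3.0, 20e6)  # hypothetical target at 3 m, 20 MHz modulation
print(indirect_tof_distance(*q, 20e6))
```

Note that the unambiguous range of such a scheme is C / (2 f_mod), about 7.5 m at 20 MHz; longer distances alias back into that interval.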
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be implemented as a device mounted on a mobile object of any kind, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility, an airplane, a drone, a ship, a robot, or the like.
A vehicle control system 12000 includes a plurality of electronic control units connected to each other via a communication network 12001. In the example illustrated in
The driving system control unit 12010 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. For example, the driving system control unit 12010 functions as a control device for: a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor; a driving force transmitting mechanism for transmitting the driving force to the wheels; a steering mechanism for adjusting the steering angle of the vehicle; a braking device for generating the braking force of the vehicle; and the like.
The body system control unit 12020 controls the operation of various kinds of devices provided to a vehicle body in accordance with various kinds of programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like. In this case, radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 12020. The body system control unit 12020 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle.
The outside-vehicle information detecting unit 12030 detects information about the outside of the vehicle including the vehicle control system 12000. For example, an imaging section 12031 is connected to the outside-vehicle information detecting unit 12030. The outside-vehicle information detecting unit 12030 causes the imaging section 12031 to capture an image of the outside of the vehicle, and receives the captured image. On the basis of the received image, the outside-vehicle information detecting unit 12030 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, or a character on a road surface, or processing of detecting the distance to such an object.
The imaging section 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging section 12031 can output the electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging section 12031 may be visible light, or may be invisible light such as infrared rays.
The in-vehicle information detecting unit 12040 detects information about the inside of the vehicle. A driver state detecting section 12041 that detects the state of a driver, for example, is connected to the in-vehicle information detecting unit 12040. The driver state detecting section 12041 includes a camera that images the driver, for example. On the basis of detection information input from the driver state detecting section 12041, the in-vehicle information detecting unit 12040 may calculate a degree of fatigue of the driver or a degree of concentration of the driver, or may determine whether the driver is dozing off.
The microcomputer 12051 can calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of information about the inside or outside of the vehicle obtained by the outside-vehicle information detecting unit 12030 or the in-vehicle information detecting unit 12040, and output a control command to the driving system control unit 12010.
For example, the microcomputer 12051 can perform cooperative control intended to implement the functions of an advanced driver assistance system (ADAS), including collision avoidance or shock mitigation for the vehicle, following driving based on the following distance, vehicle speed maintaining driving, a warning of vehicle collision, a warning of lane deviation, and the like.
The microcomputer 12051 can also perform cooperative control intended for automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of information about the outside or inside of the vehicle obtained by the outside-vehicle information detecting unit 12030 or the in-vehicle information detecting unit 12040.
Further, the microcomputer 12051 can output a control command to the body system control unit 12020 on the basis of information about the outside of the vehicle obtained by the outside-vehicle information detecting unit 12030. For example, the microcomputer 12051 can perform cooperative control intended to prevent glare, such as switching the headlamp from a high beam to a low beam in accordance with the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detecting unit 12030.
The sound/image output section 12052 transmits an output signal of at least one of a sound or an image to an output device capable of visually or audibly conveying information to an occupant of the vehicle or to the outside of the vehicle. In the example in
In
The imaging sections 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose, the sideview mirrors, the rear bumper, the back door, and an upper portion of the windshield in the interior of the vehicle 12100, for example. The imaging section 12101 provided at the front nose and the imaging section 12105 provided at the upper portion of the windshield in the vehicle interior mainly obtain images of the area in front of the vehicle 12100. The imaging sections 12102 and 12103 provided at the sideview mirrors mainly obtain images of the areas to the sides of the vehicle 12100. The imaging section 12104 provided at the rear bumper or the back door mainly obtains images of the area behind the vehicle 12100. The forward images obtained by the imaging sections 12101 and 12105 are mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, or the like.
Note that
At least one of the imaging sections 12101 to 12104 may have a function of obtaining distance information. For example, at least one of the imaging sections 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
For example, on the basis of the distance information obtained from the imaging sections 12101 to 12104, the microcomputer 12051 can determine the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change in that distance (the relative speed with respect to the vehicle 12100). On that basis, it can extract, as a preceding vehicle, the nearest three-dimensional object that is present on the traveling path of the vehicle 12100 and is traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (0 km/h or higher, for example). Further, the microcomputer 12051 can set in advance a following distance to be maintained from the preceding vehicle, and perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. Thus, it is possible to perform cooperative control intended for automated driving in which the vehicle travels autonomously without depending on the driver's operation.
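The preceding-vehicle extraction described above can be sketched as follows. The object representation (`distance_m`, `relative_speed_kmh`, `in_path`) is a hypothetical simplification of the data the microcomputer 12051 would derive from the distance information, not an actual in-vehicle data structure.

```python
def relative_speed_kmh(dist_prev_m, dist_now_m, dt_s):
    """Relative speed from the change in measured distance over dt_s seconds;
    positive means the object is pulling away from the ego vehicle."""
    return (dist_now_m - dist_prev_m) / dt_s * 3.6

def pick_preceding_vehicle(objects, own_speed_kmh, min_speed_kmh=0.0):
    """Return the nearest object on the traveling path moving in roughly the
    same direction at min_speed_kmh or higher, or None if there is none.
    Each object is a dict with 'distance_m', 'relative_speed_kmh', 'in_path'."""
    candidates = [
        o for o in objects
        if o["in_path"] and own_speed_kmh + o["relative_speed_kmh"] >= min_speed_kmh
    ]
    return min(candidates, key=lambda o: o["distance_m"], default=None)

objects = [
    {"distance_m": 30.0, "relative_speed_kmh": -5.0, "in_path": True},
    {"distance_m": 15.0, "relative_speed_kmh": 2.0, "in_path": True},
    {"distance_m": 10.0, "relative_speed_kmh": 0.0, "in_path": False},  # off-path
]
print(pick_preceding_vehicle(objects, own_speed_kmh=60.0))
```

The following-distance control itself would then compare the chosen object's distance against the preset target and issue brake or acceleration commands accordingly.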
For example, on the basis of the distance information obtained from the imaging sections 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data into data of two-wheeled vehicles, standard-sized vehicles, large-sized vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the classified data, and use the extracted data for automatic avoidance of obstacles. For example, the microcomputer 12051 classifies obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can recognize visually and obstacles that are difficult for the driver to recognize visually. The microcomputer 12051 then determines a collision risk indicating the risk of collision with each obstacle. In a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer 12051 outputs a warning to the driver via the audio speaker 12061 or the display section 12062, and performs forced deceleration or avoidance steering via the driving system control unit 12010. Thus, the microcomputer 12051 can assist in driving to avoid a collision.
At least one of the imaging sections 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not there is a pedestrian in the images captured by the imaging sections 12101 to 12104. Such pedestrian recognition is performed through, for example, a process of extracting feature points in the images captured by the imaging sections 12101 to 12104 as infrared cameras, and a process of determining whether or not an object is a pedestrian by performing pattern matching on the series of feature points indicating the contour of the object. When the microcomputer 12051 determines that there is a pedestrian in the images captured by the imaging sections 12101 to 12104 and thus recognizes the pedestrian, the sound/image output section 12052 controls the display section 12062 so that a square contour line for emphasis is displayed superimposed on the recognized pedestrian. The sound/image output section 12052 may also control the display section 12062 so that an icon or the like representing the pedestrian is displayed at a desired position.
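The two-stage flow described above (feature-point extraction, then pattern matching on the contour) can be sketched in a deliberately toy form. The binary-image representation, the boundary-pixel "feature points", and the overlap-based matching score are illustrative stand-ins, far simpler than any real in-vehicle recognition algorithm.

```python
def extract_contour_points(image):
    """Boundary pixels of a binary image (list of rows of 0/1), used here as a
    crude stand-in for feature-point extraction."""
    h, w = len(image), len(image[0])
    points = []
    for y in range(h):
        for x in range(w):
            if image[y][x] and any(
                not (0 <= y + dy < h and 0 <= x + dx < w and image[y + dy][x + dx])
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                points.append((y, x))
    return points

def match_score(candidate_points, template_points):
    """Fraction of template contour points also present in the candidate:
    a toy pattern-matching criterion in [0, 1]."""
    if not template_points:
        return 0.0
    return len(set(candidate_points) & set(template_points)) / len(template_points)

template = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]   # hypothetical stored pattern
candidate = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]  # region cut from a captured image
score = match_score(extract_contour_points(candidate),
                    extract_contour_points(template))
print(score >= 0.8)
```

A production system would align and scale the candidate region before matching and would use far richer features; this sketch only shows the extract-then-match structure of the decision.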
An example of a vehicle control system to which the technology according to the present disclosure (the present technology) can be applied has been described above. The technology according to the present disclosure can be applied to the imaging section 12031 and the like, for example, among the components described above. Specifically, a solid-state imaging device 111 of the present disclosure can be applied to the imaging section 12031. By applying the technology according to the present disclosure to the imaging section 12031, it is possible to increase yield and reduce manufacturing costs.
The present technology can be applied to various products. For example, the technology according to the present disclosure (the present technology) may be applied to an endoscopic surgery system.
The endoscope 11100 includes a lens barrel 11101 that has a region of a predetermined length from the tip thereof and is to be inserted into a body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101. In the example in the drawing, the endoscope 11100 designed as a so-called rigid scope having the rigid lens barrel 11101 is illustrated, but the endoscope 11100 may be designed as a so-called flexible scope having a flexible lens barrel.
The lens barrel 11101 has, at the tip thereof, an opening in which an objective lens is fitted. A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101, and is applied to the observation target in the body cavity of the patient 11132 via the objective lens. Note that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
An optical system and an imaging element are provided in the camera head 11102 so that reflected light (observation light) from the observation target is collected on the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element to generate an electrical signal corresponding to the observation light, that is, an image signal corresponding to the observation image. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
The CCU 11201 includes a central processing unit (CPU), a graphics processing unit (GPU), and the like, and integrally controls the operation of the endoscope 11100 and a display device 11202. Further, the CCU 11201 receives the image signal from the camera head 11102, and performs various kinds of image processing, such as a development process (demosaicing), on the image signal for displaying an image based on the image signal.
Under the control of the CCU 11201, the display device 11202 displays the image based on the image signal on which the image processing has been performed by the CCU 11201.
The light source device 11203 is formed with a light source such as a light emitting diode (LED), for example, and supplies irradiation light at a time of capturing an image of the surgical site or the like, to the endoscope 11100.
An input device 11204 is an input interface for the endoscopic surgery system 11000. The user can input various kinds of information and instructions to the endoscopic surgery system 11000 through the input device 11204. For example, the user inputs an instruction or the like to change a condition (the type of irradiation light, the magnification, the focal distance, or the like) for imaging by the endoscope 11100.
A treatment tool control device 11205 controls driving of the energy device 11112 for cautery or incision of a tissue, sealing of a blood vessel or the like. A pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 via the pneumoperitoneum tube 11111 to inflate the body cavity for the purpose of securing a field of view for the endoscope 11100 and securing work space for the surgeon. A recorder 11207 is a device capable of recording various kinds of information relating to surgery. A printer 11208 is a device capable of printing various kinds of information relating to surgery in various forms such as text, images, and graphs.
Note that the light source device 11203, which supplies the endoscope 11100 with irradiation light at the time of imaging the surgical site, can include an LED, a laser light source, or a white light source formed with a combination thereof, for example. In a case where a white light source is formed with a combination of red, green, and blue (RGB) laser light sources, the output intensity and the output timing can be controlled with a high degree of accuracy for each color (each wavelength), and thus the white balance of a captured image can be adjusted by the light source device 11203. Furthermore, in this case, it is also possible to capture images corresponding to R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in turn, and controlling driving of the imaging element of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained even if color filters are not provided in the imaging element.
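The frame-sequential color capture described above can be sketched as the combination of three monochrome captures, one per illumination color. The nested-list image representation and the sample pixel values are assumptions for illustration.

```python
def combine_frame_sequential(r_frame, g_frame, b_frame):
    """Merge three monochrome captures taken under R, G, and B laser
    illumination into one RGB image (frame-sequential color). Each frame is a
    list of rows of scalar intensities; the result holds (r, g, b) tuples."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(r_frame, g_frame, b_frame)
    ]

# One-row toy frames captured in three successive illumination periods.
rgb = combine_frame_sequential([[10, 20]], [[30, 40]], [[50, 60]])
print(rgb)
```

Because the three captures are separated in time, a real system must also keep the scene motion between illumination periods small or compensate for it.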
Furthermore, driving of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling driving of the imaging element of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner, and combining the images, a high-dynamic-range image free of blocked-up shadows and blown-out highlights can be generated.
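The time-division high-dynamic-range combination described above can be sketched per pixel as follows. The saturation threshold, the 8:1 exposure ratio, and the simple replace-when-saturated rule are illustrative assumptions; practical pipelines blend the exposures more smoothly.

```python
def merge_hdr(long_px, short_px, exposure_ratio, saturation=250):
    """Combine one pixel from a long exposure with the co-located pixel from a
    short exposure. Saturated long-exposure pixels are replaced by the scaled
    short-exposure value, recovering highlights while keeping low-noise
    shadows from the long exposure."""
    if long_px >= saturation:
        return short_px * exposure_ratio
    return float(long_px)

frame_long = [12, 80, 255, 255]   # highlights clipped in the long exposure
frame_short = [1, 5, 40, 60]      # same scene at 1/8 the exposure
hdr = [merge_hdr(l, s, 8.0) for l, s in zip(frame_long, frame_short)]
print(hdr)
```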
Furthermore, the light source device 11203 may be designed to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed: by taking advantage of the wavelength dependence of light absorption in body tissue and emitting light in a band narrower than that of the irradiation light used in normal observation (in other words, white light), an image of predetermined tissue, such as a blood vessel in a mucosal surface layer, is captured with high contrast. Alternatively, in special light observation, fluorescence observation for obtaining an image from fluorescent light generated by irradiation with excitation light may be performed. In fluorescence observation, the body tissue can be irradiated with excitation light so that fluorescence from the body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into the body tissue and the body tissue irradiated with excitation light corresponding to the fluorescence wavelength of the reagent, to obtain a fluorescent image. The light source device 11203 can be designed to be able to supply narrow-band light and/or excitation light suitable for such special light observation.
The camera head 11102 includes a lens unit 11401, an imaging section 11402, a drive section 11403, a communication section 11404, and a camera head control section 11405. The CCU 11201 includes a communication section 11411, an image processing section 11412, and a control section 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
The lens unit 11401 is an optical system provided at a connecting portion with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102, and enters the lens unit 11401. The lens unit 11401 is formed with a combination of a plurality of lenses including a zoom lens and a focusing lens.
The imaging section 11402 is formed with an imaging element. The number of imaging elements constituting the imaging section 11402 may be one (a so-called single-plate type) or two or more (a so-called multiplate type). In a case where the imaging section 11402 is of the multiplate type, image signals corresponding to R, G, and B are generated by the respective imaging elements, for example, and these image signals may be combined to obtain a color image. Alternatively, the imaging section 11402 may include a pair of imaging elements for acquiring right-eye and left-eye image signals to cope with three-dimensional (3D) display. With 3D display, the surgeon 11131 can grasp the depth of the living tissue in the surgical site more accurately. Note that, in a case where the imaging section 11402 is of the multiplate type, a plurality of lens units 11401 may be provided so as to correspond to the respective imaging elements.
Further, the imaging section 11402 is not necessarily provided in the camera head 11102. For example, the imaging section 11402 may be provided immediately behind the objective lens in the lens barrel 11101.
The drive section 11403 is formed with an actuator, and, under the control of the camera head control section 11405, moves the zoom lens and the focusing lens of the lens unit 11401 by a predetermined distance along the optical axis. As a result, the magnification and the focal point of an image captured by the imaging section 11402 can be adjusted as appropriate.
The communication section 11404 is formed with a communication device for transmitting and receiving various kinds of information to and from the CCU 11201. The communication section 11404 transmits an image signal acquired from the imaging section 11402 as RAW data to the CCU 11201 through the transmission cable 11400.
Further, the communication section 11404 receives a control signal for controlling driving of the camera head 11102 from the CCU 11201, and supplies the control signal to the camera head control section 11405. The control signal includes information regarding the imaging conditions, such as information specifying the frame rate of the captured images, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focal point of captured images, for example.
Note that the imaging conditions such as the frame rate, the exposure value, the magnification, and the focal point may be specified as appropriate by the user, or may be automatically set by the control section 11413 of the CCU 11201 on the basis of the acquired image signal. In the latter case, a so-called auto exposure (AE) function, auto focus (AF) function, and auto white balance (AWB) function are installed in the endoscope 11100.
The camera head control section 11405 controls driving of the camera head 11102 on the basis of a control signal received from the CCU 11201 through the communication section 11404.
The communication section 11411 is formed with a communication device for transmitting and receiving various kinds of information to and from the camera head 11102. The communication section 11411 receives an image signal transmitted from the camera head 11102 through the transmission cable 11400.
Also, the communication section 11411 transmits, to the camera head 11102, a control signal for controlling driving of the camera head 11102. The image signal and the control signal can be transmitted by electrical communication, optical communication, or the like.
The image processing section 11412 performs various kinds of image processing on the image signal that is RAW data transmitted from the camera head 11102.
The control section 11413 performs various kinds of control related to imaging of the surgical site or the like by the endoscope 11100, and display of a captured image obtained by the imaging of the surgical site or the like. For example, the control section 11413 generates the control signal for controlling driving of the camera head 11102.
Further, on the basis of an image signal on which image processing has been performed by the image processing section 11412, the control section 11413 controls the display device 11202 to display a captured image showing the surgical site or the like. At this time, the control section 11413 may recognize various objects in the captured image, using various image recognition technologies. For example, the control section 11413 can recognize a surgical tool such as forceps, a particular living body region, bleeding, mist at a time of use of the energy device 11112, and the like, by detecting the shapes, colors, and the like of the edges of objects included in the captured image. When causing the display device 11202 to display the captured image, the control section 11413 may superimpose various kinds of surgery support information on the image of the surgical site, using the results of the recognition. As the surgery support information is displayed superimposed on the image and presented to the surgeon 11131, the burden on the surgeon 11131 can be reduced, and the surgeon 11131 can proceed with the surgery with confidence.
The transmission cable 11400 that connects the camera head 11102 and the CCU 11201 to each other is an electrical signal cable compatible with electrical signal communication, an optical fiber compatible with optical communication, or a composite cable of these cables.
Here, in the example illustrated in the drawing, communication is performed by wire using the transmission cable 11400. However, communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the endoscope 11100, (the imaging section 11402 of) the camera head 11102, and the like among the components described above. Specifically, the solid-state imaging device 111 of the present disclosure can be applied to the imaging section 11402. By applying the technology according to the present disclosure to the endoscope 11100, (the imaging section 11402 of) the camera head 11102, and the like, it is possible to increase yield and reduce manufacturing costs.
Here, the endoscopic surgery system has been described as an example, but the technology according to the present disclosure may also be applied to other systems, such as a microscopic surgery system, for example.
Furthermore, the present technology can also have the following configurations.
(1) A solid-state imaging device including:
(2) The solid-state imaging device according to (1), in which a thickness of the semiconductor layer is ½ or less of a total thickness of the first and second transparent dielectric layers.
(3) The solid-state imaging device according to (1) or (2), in which the semiconductor layer has a thickness of at least 2 nm but not greater than 10 nm.
(4) The solid-state imaging device according to any one of (1) to (3), in which the first transparent dielectric layer has a thickness of at least 5 nm but not greater than 20 nm.
(5) The solid-state imaging device according to any one of (1) to (4), in which the second transparent dielectric layer has a thickness of at least 15 nm but not greater than 60 nm.
(6) The solid-state imaging device according to any one of (1) to (5), in which a total thickness of the first transparent dielectric layer, the semiconductor layer, and the second transparent dielectric layer is at least 20 nm but not greater than 80 nm.
(7) The solid-state imaging device according to any one of (1) to (6), in which the semiconductor layer is formed with one of p-Si or a-Si.
(8) The solid-state imaging device according to any one of (1) to (7), in which the second transparent dielectric layer is formed with one of SiO2 or a transparent dielectric material having a higher refractive index than SiO2.
(9) The solid-state imaging device according to any one of (1) to (8), in which the second transparent dielectric layer is formed with a transparent dielectric material having a refractive index not lower than 1.7.
(10) The solid-state imaging device according to any one of (1) to (9), in which the second transparent dielectric layer is formed with one of Nb2O5, Ta2O5, TiO2, HfO2, or ZrO2.
(11) The solid-state imaging device according to any one of (1) to (10), in which the first transparent dielectric layer is formed with a multilayer film in which a plurality of films is stacked.
(12) The solid-state imaging device according to (11), in which the plurality of films includes an Al2O3 film and a Ta2O5 film in order from the side of the semiconductor substrate.
(13) The solid-state imaging device according to any one of (1) to (12), in which a negative bias is applied to the semiconductor layer.
(14) The solid-state imaging device according to any one of (1) to (13), further including a light-blocking film in contact with the semiconductor layer on a side opposite from the first transparent dielectric layer, in which a negative bias is applied to the light-blocking film.
(15) The solid-state imaging device according to any one of (1) to (14), in which a trench is formed in a surface of the semiconductor substrate on a light incident side, and part of the first transparent dielectric layer, part of the semiconductor layer, and part of the second transparent dielectric layer are disposed in the trench.
(16) The solid-state imaging device according to (15), in which a negative bias is applied to the semiconductor layer.
(17) The solid-state imaging device according to any one of (1) to (16), in which the plurality of layers includes a color filter layer in which a plurality of color filters is arranged in an in-plane direction, on a side of the second transparent dielectric layer opposite from the semiconductor layer, and the second transparent dielectric layer has a plurality of regions that correspond to the plurality of color filters and have different thicknesses.
(18) The solid-state imaging device according to (17), in which, among the plurality of regions, the region corresponding to the color filter having a longer transmission wavelength is thicker.
(19) The solid-state imaging device according to any one of (1) to (18), in which the plurality of layers includes a microlens layer on a side of the second transparent dielectric layer opposite from the semiconductor layer.
(20) An electronic apparatus including the solid-state imaging device according to any one of (1) to (19).
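The thickness conditions recited in configurations (2) to (6) above can be summarized as a single consistency check. The following is an illustrative sketch only, not part of the disclosure; the function name, parameter names, and the assumption that all thicknesses are given in nanometers are hypothetical choices made for illustration.

```python
# Illustrative check of the thickness ranges recited in configurations
# (2) to (6). All thicknesses are assumed to be in nanometers.

def stack_satisfies_ranges(t_first: float, t_semi: float, t_second: float) -> bool:
    """Return True if the first transparent dielectric layer, semiconductor
    layer, and second transparent dielectric layer thicknesses fall within
    the ranges recited in configurations (2) to (6)."""
    total_dielectric = t_first + t_second
    return (
        t_semi <= total_dielectric / 2                # (2): semiconductor <= 1/2 of dielectric total
        and 2 <= t_semi <= 10                         # (3): semiconductor layer 2 nm to 10 nm
        and 5 <= t_first <= 20                        # (4): first dielectric 5 nm to 20 nm
        and 15 <= t_second <= 60                      # (5): second dielectric 15 nm to 60 nm
        and 20 <= t_first + t_semi + t_second <= 80   # (6): total 20 nm to 80 nm
    )

# A stack that satisfies all of (2) to (6)
print(stack_satisfies_ranges(10, 5, 30))
# A stack whose semiconductor layer exceeds the 10 nm limit of (3)
print(stack_satisfies_ranges(10, 12, 30))
```

Note that the conditions are cumulative in the same way the configurations are: each of (3) to (6) further narrows the combination of thicknesses permitted by the preceding configurations.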
| Number | Date | Country | Kind |
|---|---|---|---|
| 2022-037203 | Mar 2022 | JP | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/JP2023/001635 | 1/20/2023 | WO | |