The present disclosure relates to a light detection device and an electronic device, and particularly relates to a light detection device and an electronic device capable of reducing reflection of incident light depending on an image height position.
The refractive index of a silicon substrate used as a semiconductor substrate in a CMOS image sensor is high, and the difference from the refractive index of a color filter layer formed on an incident surface side of the silicon substrate is large. As such, when the color filter layer is formed directly on the silicon substrate, the difference in the refractive indices results in a large amount of the incident light being reflected. This reflection causes problems such as a drop in the quantum efficiency Qe and the occurrence of flaring.
In response to such problems, for example, PTL 1 discloses a technique for reducing the reflection of incident light by forming a moth-eye structure as an anti-reflection structure between the color filter layer and the silicon substrate.
Incidentally, because the incident angle of incident light varies depending on the image height position, the reflection characteristics of the incident light also vary depending on the image height position. However, a pixel structure optimized according to the image height position has not been disclosed.
Having been conceived in view of such circumstances, the present disclosure makes it possible to reduce reflection of incident light depending on the image height position.
A light detection device according to a first aspect of the present disclosure includes:
An electronic device according to a second aspect of the present disclosure includes a light detection device,
In the first and second aspects of the present disclosure, a pixel array unit in which a plurality of pixels is arranged in a two-dimensional array is provided. Each pixel includes: a refractive index variation layer having at least two regions in the same layer, the two regions being a first region containing a first matter and a second region containing a second matter; and a photoelectric conversion unit that photoelectrically converts light incident through the refractive index variation layer. An effective refractive index of the refractive index variation layer is configured to differ depending on an image height position of the pixel.
The light detection device and the electronic device may be independent devices, or may be modules incorporated into a different device.
Modes for embodying the technique of the present disclosure (hereinafter referred to as “embodiments”) will be described below with reference to the accompanying drawings. The descriptions will be given in the following order.
In the drawings referred to in the following descriptions, the same or similar parts will be denoted by the same or similar reference signs, and redundant descriptions will be omitted. The drawings are schematic, and the relationships between thicknesses and planar dimensions, the ratios of thicknesses of layers, and the like are different from the actual ratios, thicknesses, and the like. The drawings may also include parts having dimensional relationships and ratios different from those in other drawings.
In addition, it is to be understood that definitions of directions such as up-down in the following descriptions are merely definitions provided for the sake of brevity and are not intended to limit the technical spirit of the present disclosure. For example, when an object is observed after being rotated by 90 degrees, “up-down” is transformed into “left-right”, and when an object is observed after being rotated by 180 degrees, “up-down” is inverted.
First, a first embodiment of a pixel according to the present disclosure will be described with reference to
Each pixel 10 in
In the cross-sectional view in
An anti-reflection film 22 constituted by a plurality of films is formed on the rear surface of the semiconductor substrate 20, which is the upper side in
A plan view of (part of) the refractive index variation layer 34 is provided below the cross-sectional view of the pixel 10 in the position near the center of the image height and the cross-sectional view of the pixel 10 in the high image height position.
As illustrated in the plan view, the refractive index variation layer 34 is configured by combining the region of the titanium oxide film 32 and the region of the silicon oxide film 33. The titanium oxide film 32 constitutes the main region of the refractive index variation layer 34, and a plurality of the silicon oxide films 33 formed in circular shapes are disposed in a partial region of the titanium oxide film 32 at predetermined intervals.
Of the three layers constituting the anti-reflection film 22, the refractive index of the aluminum oxide film 31 is, for example, about 1.64; the refractive index of the titanium oxide film 32 is, for example, about 2.67; and the refractive index of the silicon oxide film 33 is, for example, about 1.46. The refractive index of silicon (Si), which constitutes the semiconductor substrate 20, is, for example, about 4.16.
When the structure in which the silicon oxide film 33 is embedded in the titanium oxide film 32 is formed to be sufficiently smaller than the wavelength of incident light, the effective refractive index of the refractive index variation layer 34 is determined as the average of the refractive indices of the titanium oxide film 32 and the silicon oxide film 33, weighted by their area ratio. Because the titanium oxide film 32 constitutes the main region of the refractive index variation layer 34, the effective refractive index of the refractive index variation layer 34 is greater than the refractive indices of both the lower-layer aluminum oxide film 31 and the upper-layer silicon oxide film 33. Instead of the titanium oxide film 32, tantalum oxide (Ta2O5) or the like, for example, may be used as an intermediate layer having a higher refractive index than the upper and lower layers. The refractive index of a tantalum oxide (Ta2O5) film is, for example, about 2.35. In the present embodiment, the refractive index variation layer 34 is formed in the high refractive index layer of the three layers stacked with refractive indices of “low-high-low”, but the refractive index variation layer 34 may instead be formed in a low refractive index layer. In other words, the refractive index variation layer 34 may be formed by embedding a high refractive index layer in a part of a low refractive index layer of the anti-reflection film 22, which is constituted by a plurality of films having different refractive indices. An air layer (air gap) may also be used as the high refractive index layer or the low refractive index layer.
The pattern size of the silicon oxide film 33 formed having circular shapes is different between the pixels 10 in the position near the center of the image height and the pixels 10 in the high image height position. Specifically, if the pattern of the silicon oxide film 33 in the refractive index variation layer 34 at the pixel 10 in the position near the center of the image height is circles each having a diameter DA1, the pattern of the silicon oxide film 33 in the refractive index variation layer 34 at the pixel 10 in the high image height position is circles each having a diameter DA2 smaller than the diameter DA1 (DA1>DA2).
As such, the effective refractive index of the refractive index variation layer 34 is higher in the pixels 10 in the high image height position, where the ratio of the titanium oxide film 32 having a high refractive index is higher, than in the pixels 10 in the position near the center of the image height.
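As a rough numerical illustration of this area-weighted averaging, consider the following Python sketch. The refractive indices are the approximate values given above; the pitch and hole diameters (stand-ins for PT1, DA1, and DA2) are illustrative assumptions, as the disclosure does not specify dimensions.

```python
import math

# Refractive indices given in the text (approximate values).
N_TIO2 = 2.67  # titanium oxide film 32 (main region)
N_SIO2 = 1.46  # silicon oxide film 33 (circular holes)

def effective_index(hole_diameter_nm: float, pitch_nm: float) -> float:
    """Area-weighted effective refractive index of the refractive index
    variation layer: circular SiO2 holes of the given diameter on a square
    grid with the given pitch, embedded in TiO2. Valid only when the
    pattern is sufficiently smaller than the wavelength of incident light
    (effective-medium regime)."""
    hole_fraction = math.pi * (hole_diameter_nm / 2.0) ** 2 / pitch_nm ** 2
    return hole_fraction * N_SIO2 + (1.0 - hole_fraction) * N_TIO2

# Illustrative dimensions (not from the disclosure): same pitch PT1,
# larger holes DA1 near the image height center, smaller holes DA2 at
# the high image height position.
PT1, DA1, DA2 = 200.0, 120.0, 80.0  # nm
n_center = effective_index(DA1, PT1)  # ~2.33
n_edge = effective_index(DA2, PT1)    # ~2.52
print(f"center: n_eff = {n_center:.2f}, edge: n_eff = {n_edge:.2f}")
```

With these assumed dimensions, the smaller holes at the high image height position yield a higher effective refractive index, consistent with the behavior described above.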
A pitch (interval) PT1 of the silicon oxide film 33 formed as circles is the same for the pixels 10 in the position near the center of the image height and the pixels 10 in the high image height position. However, as will be described later, the pitch of the silicon oxide film 33 may be different between the pixels 10 in the position near the center of the image height and the pixels 10 in the high image height position. The pitch PT1 of the silicon oxide film 33 is formed to be a pitch smaller than the wavelength of the incident light that passes through the refractive index variation layer 34 and is incident on the photodiodes 11.
An inter-pixel light shielding film 23 is formed in the pixel boundary part on the upper side of the anti-reflection film 22 in the cross-sectional view. The inter-pixel light shielding film 23 is formed as a lattice when seen in plan view. The inter-pixel light shielding film 23 may be formed of any material that shields light, but a material that has strong light shielding properties and can be processed accurately through microfabrication such as etching, for example, is desirable. The inter-pixel light shielding film 23 can be formed of a metal film such as tungsten (W), aluminum (Al), copper (Cu), titanium (Ti), molybdenum (Mo), nickel (Ni), or the like, for example. The inter-pixel light shielding film 23 may also be formed of an oxide film or a resin film having a low refractive index, an air layer, or the like.
A color filter layer 24 that transmits light of a corresponding one of the R (red), G (green), and B (blue) colors (wavelengths) is formed on a pixel-by-pixel basis in the regions on the anti-reflection film 22 aside from the inter-pixel light shielding film 23. The color filter layer 24 is formed by spin-coating a photosensitive resin containing a colorant such as a pigment or a dye, for example. The R, G, and B color filter layers 24 are provided as a Bayer array, for example, but may be provided using another arrangement method. In the example in
An on-chip lens 25 is formed for each pixel above the color filter layer 24. The on-chip lens 25 focuses the light entering the pixel 10 onto the photodiode 11 in the semiconductor substrate 20. The on-chip lens 25 is formed of, for example, a resin material such as a styrene resin, an acrylic resin, a styrene-acrylic copolymer resin, or a siloxane resin.
Each pixel 10 in
Because the refractive index of the semiconductor substrate 20 on which the photodiode 11 is formed is high (the refractive index of the silicon is, for example, about 4.16), when the color filter layer 24 is formed directly on the semiconductor substrate 20, the difference in refractive indices between the semiconductor substrate 20 and the color filter layer 24 is large, and a large amount of incident light is reflected due to the refractive index difference. This reflection causes problems such as a drop in the quantum efficiency Qe and the occurrence of flaring.
Accordingly, in the pixel 10, an anti-reflection film 22 is formed between the color filter layer 24 and the semiconductor substrate 20 in order to reduce the reflection of incident light at the interface of the semiconductor substrate 20. The anti-reflection film 22 is configured by layering the silicon oxide film 33, the titanium oxide film 32, and the aluminum oxide film 31 in that order from the upper layer on the color filter layer 24 side. The refractive indices of the silicon oxide film 33, the titanium oxide film 32, and the aluminum oxide film 31 are “low-high-low.”
The refractive index variation layer 34 is configured by embedding the upper-layer silicon oxide film 33 in a part of a planar region of the titanium oxide film 32 in the intermediate layer, which has a high refractive index. The effective refractive index of the refractive index variation layer 34 is configured to be different depending on the pixel position in the pixel array unit 50.
Specifically, in the refractive index variation layer 34, the area ratio between the titanium oxide film 32 and the silicon oxide film 33 in the pixel 10 in the position near the center of the image height is different from the area ratio between the titanium oxide film 32 and the silicon oxide film 33 in the pixel 10 in the high image height position. The ratio of the titanium oxide film 32 having a higher refractive index is higher in the high image height position than in the position near the center of the image height. Conversely, the ratio of the silicon oxide film 33 having a lower refractive index is lower in the high image height position than in the position near the center of the image height.
An incident angle (CRA) of the incident light relative to the semiconductor substrate 20 is small in the position near the center of the image height, which is near the center of the optical axis, but increases as the image height increases, and then decreases after peaking at a predetermined high image height position. When the incident angle increases and the incident light becomes oblique, the reflection characteristics of the incident light shift toward the short-wavelength side, resulting in what is known as a “blue shift”. In the refractive index variation layer 34, the ratio of the titanium oxide film 32 is higher at the high image height position than at the position near the center of the image height, and the effective refractive index is therefore higher at the high image height position than at the position near the center of the image height, which cancels out the blue shift.
In the simulation, the inventors assumed a layered structure including the semiconductor substrate (silicon layer) 20, the aluminum oxide film 31, the refractive index variation layer 34 constituted by the titanium oxide film 32 and the silicon oxide film 33, the silicon oxide film 33, and the color filter layer (STSR) 24, as illustrated in the layer cross-sectional view on the left, and calculated the light reflection characteristics (reflectance). The color filter layer 24 was assumed to transmit incident light of the G wavelength, and the refractive index of the refractive index variation layer 34 was assumed to have no wavelength dependence for the sake of simplicity.
A reflection characteristic graph 71 represents a case where the refractive index of the refractive index variation layer 34 (the average refractive index of the titanium oxide film 32 and the silicon oxide film 33) is 2.4 and the incident angle is 0 degrees, or in other words, indicates a relationship between the wavelength and the reflectance of incident light for the pixel 10 in the image height center. The reflection characteristic graph 71 is adjusted such that the reflectance is low around 530 to 550 nm, which corresponds to the G wavelength.
A reflection characteristic graph 72 represents a case where the refractive index of the refractive index variation layer 34 is likewise 2.4 but the incident angle is 36 degrees, or in other words, indicates a relationship between the wavelength and the reflectance of incident light for the pixel 10 in the high image height position. In the reflection characteristic graph 72, the reflectance has a local minimum at a wavelength of about 490 nm. Accordingly, when the pixel position changes from the image height center to the high image height position, the reflection characteristics shift in the short-wavelength direction, from the reflection characteristic graph 71 to the reflection characteristic graph 72, i.e., a blue shift occurs.
Next, assume that the refractive index of the refractive index variation layer 34 is changed to 2.6 by changing the area ratio between the titanium oxide film 32 and the silicon oxide film 33. A reflection characteristic graph 73 represents a case where the refractive index of the refractive index variation layer 34 is 2.6 and the incident angle is 36 degrees, or in other words, indicates a relationship between the wavelength and the reflectance of incident light for the pixel 10 in the high image height position. In the reflection characteristic graph 73, the reflectance has a local minimum at a wavelength of about 520 nm. In other words, increasing the refractive index of the refractive index variation layer 34 from 2.4 to 2.6 cancels out the blue shift.
For comparative purposes, a reflection characteristic graph 74 represents a case where the refractive index of the refractive index variation layer 34 is 2.6 and the incident angle is 0 degrees, or in other words, indicates a relationship between the wavelength and the reflectance of incident light for the pixel 10 in the image height center.
On the basis of these simulation results, it can be seen that varying the refractive index of the refractive index variation layer 34 between the image height center and the high image height position makes it possible to handle differences in the incident angle depending on the image height position, and reduce the reflection of incident light.
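The blue shift and its compensation can be reproduced qualitatively with a standard thin-film transfer-matrix (characteristic-matrix) calculation. The following Python sketch models the simulated stack described above; the film thicknesses, the color filter index, and the convention that the 36-degree angle is measured in the incident (color filter) medium are illustrative assumptions, so the exact wavelengths of the reflectance minima will differ from the graphs.

```python
import numpy as np

def reflectance(n_layers, d_layers, n_in, n_sub, wavelength_nm, theta0_rad, pol="s"):
    """Reflectance of a thin-film stack via the characteristic-matrix
    (transfer-matrix) method; layers are listed from the incidence side."""
    kx = n_in * np.sin(theta0_rad)  # transverse component conserved by Snell's law
    def eta(n):  # tilted optical admittance of a medium with index n
        cos_t = np.sqrt(1 - (kx / n) ** 2 + 0j)
        return n * cos_t if pol == "s" else n / cos_t
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        cos_t = np.sqrt(1 - (kx / n) ** 2 + 0j)
        delta = 2 * np.pi * n * d * cos_t / wavelength_nm  # phase thickness
        e = eta(n)
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / e],
                          [1j * e * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, eta(n_sub)])
    r = (eta(n_in) * B - C) / (eta(n_in) * B + C)
    return abs(r) ** 2

# Stack from the color filter side: SiO2 (n=1.46) / refractive index
# variation layer (n_var) / Al2O3 (n=1.64), on a Si substrate (n=4.16).
# Assumed values: color filter index 1.6; thicknesses 60/55/40 nm.
N_CF, N_SI = 1.6, 4.16
wl = np.arange(450, 651, 5)
for n_var, angle, label in [(2.4, 0, "graph 71"), (2.4, 36, "graph 72"),
                            (2.6, 36, "graph 73"), (2.6, 0, "graph 74")]:
    R = [reflectance([1.46, n_var, 1.64], [60, 55, 40], N_CF, N_SI,
                     w, np.radians(angle)) for w in wl]
    print(f"{label}: n_var={n_var}, {angle:2d} deg -> "
          f"reflectance minimum near {wl[int(np.argmin(R))]} nm")
```

Under these assumptions, tilting the incidence reduces the phase thickness of each layer and moves the reflectance minimum toward shorter wavelengths, while raising n_var moves it back toward longer wavelengths, which is the compensation mechanism the simulation demonstrates.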
A method for designing the refractive index variation layer 34 will be described with reference to
First, an optimal effective refractive index is calculated according to the incident angle of the incident light. When the pattern shape of the silicon oxide film 33 is circular and the pitch PT1 of the circular pattern is set to a predetermined pitch smaller than the wavelength of the incident light, the diameter of the circular pattern (a hole diameter) is set corresponding to the calculated effective refractive index. As such, the relationship between the incident angle and the diameter of the silicon oxide film 33 having the circular pattern (the hole diameter), illustrated on the left side in
The relationship between the image height position and the incident angle of the light rays in the pixel array unit 50 can also be calculated. Accordingly, the relationship between the image height position and the diameter of the circular pattern of the silicon oxide film 33 (the hole diameter), illustrated on the right side in
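As a sketch of this design flow, the steps described above (incident angle, then optimal effective refractive index, then hole diameter at a fixed sub-wavelength pitch) can be chained as follows; the linear mapping from incident angle to target index and all dimensions are illustrative assumptions.

```python
import math

N_TIO2, N_SIO2 = 2.67, 1.46  # indices of the films given in the text

def target_index(theta_deg: float) -> float:
    """Illustrative stand-in for the optimal effective refractive index
    as a function of incident angle (2.4 at 0 deg, 2.6 at 36 deg,
    linearly interpolated in between)."""
    return 2.4 + 0.2 * theta_deg / 36.0

def hole_diameter_for_index(n_target: float, pitch_nm: float) -> float:
    """Invert the area-weighted average: solve for the SiO2 hole area
    fraction, then for the circular hole diameter at the given pitch."""
    f_hole = (N_TIO2 - n_target) / (N_TIO2 - N_SIO2)
    return 2.0 * pitch_nm * math.sqrt(f_hole / math.pi)

PT1 = 200.0  # nm, assumed fixed sub-wavelength pitch
for theta in (0, 12, 24, 36):  # incident angles from image height center outward
    d = hole_diameter_for_index(target_index(theta), PT1)
    print(f"incident angle {theta:2d} deg -> hole diameter {d:5.1f} nm")
```

Combining this with the known relationship between image height position and incident angle then yields the hole diameter directly as a function of image height, which is the relationship described above.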
In the refractive index variation layer 34 of the pixel 10 illustrated in
However, the pattern shape of the silicon oxide film 33 is not limited to circles, and may be other shapes instead. For example, the pattern shape may be quadrangles as illustrated in A of
In addition, the arrangement pattern of the silicon oxide film 33 is not limited to the example in
For example, the pattern may be an arrangement pattern in which the silicon oxide film 33 having a predetermined shape is arranged in a matrix, as illustrated in D of
The planar shape and arrangement pattern of the silicon oxide film 33 are not particularly limited, and any shape and arrangement can be used. As such, the refractive index can be set to a desired value, and a planar shape and arrangement pattern that are easy to manufacture (process) can be selected. This improves the degree of freedom in setting the refractive index and makes the manufacturing process easier.
In addition, the pattern shape of the silicon oxide film 33 formed in the titanium oxide film 32 need not be the same pattern shape in all regions of the pixel array unit 50, and different pattern shapes may be formed depending on the image height position. For example, as illustrated in
In the arrangement of the silicon oxide film 33 in
Furthermore, the pattern shape of the silicon oxide film 33 formed in the refractive index variation layer 34 may be formed such that the cross-section is angled, as indicated in the cross-sectional views in A to C of
A in
B in
C in
As illustrated in the cross-sectional views in A to C of
Furthermore, the planar pattern shape of the silicon oxide film 33 may be formed to differ depending on the depth position in the refractive index variation layer 34.
Like
In the first embodiment described above, the ratio of the silicon oxide film 33 disposed in the titanium oxide film 32 is changed according to the image height position such that the effective refractive index of the refractive index variation layer 34 is optimized according to the incident angle of the incident light. More specifically, the diameter DA of the silicon oxide film 33 having the circular pattern is set to the diameter DA1 in the pixel 10 at the position near the center of the image height, and to the diameter DA2, which is smaller than the diameter DA1 (DA1>DA2), in the pixel 10 at the high image height position.
On the other hand, in the refractive index variation layer 34 in the first embodiment described above, there is no difference in refractive index according to the color (transmission wavelength) of the color filter layer 24.
In contrast, in the second embodiment, the effective refractive index of the refractive index variation layer 34 is adjusted to be optimal according not only to the incident angle of the incident light, but also to the wavelength of the incident light received by each pixel 10, i.e., the color of the color filter layer 24.
Specifically, the wavelengths of the incident light incident on the pixel 10 where the R color filter layer 24 is formed (also called an “R pixel” hereinafter as appropriate), the pixel 10 where the G color filter layer 24 is formed (also called a “G pixel” hereinafter as appropriate), and the pixel 10 where the B color filter layer 24 is formed (also called a “B pixel” hereinafter as appropriate) are in a magnitude relationship of B pixel<G pixel<R pixel. As the wavelength of the incident light becomes shorter, it is necessary to reduce the refractive index, and it is therefore necessary to increase the ratio of the silicon oxide film 33, which has a lower refractive index, in the refractive index variation layer 34.
Accordingly, in the second embodiment illustrated in
In other words, the diameter DA and the pitch PT of the circular pattern of the silicon oxide film 33 in the refractive index variation layer 34 in the position near the center of the image height are the diameter DA1 and the pitch PT1 in the R pixel, but are the diameter DA1 and the pitch PT2 (PT2<PT1) in the G pixel.
The diameter DA and the pitch PT of the circular pattern of the silicon oxide film 33 in the refractive index variation layer 34 in the high image height position are the diameter DA2 and the pitch PT1 in the R pixel, but are the diameter DA2 and the pitch PT2 (PT2<PT1) in the G pixel.
In other words, for the R pixel, the diameter DA1 and pitch PT1 in the position near the center of the image height and the diameter DA2 and pitch PT1 in the high image height position are the same as in the first embodiment; in the second embodiment, however, the pitch PT of the circular pattern of the silicon oxide film 33 in the G pixel is changed from the pitch PT1 of the first embodiment to the pitch PT2.
Although not illustrated, in the rows or columns in which the B pixels and the R pixels are arranged in an alternating manner, the pitch PT1 of the B pixels is changed to a pitch PT3 that is smaller than the pitch PT2 of the G pixels (PT3<PT2<PT1).
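A brief sketch of this per-color pitch adjustment, under the same illustrative dimensions as before: at a fixed hole diameter, tightening the pitch raises the area fraction of the low-index silicon oxide film 33 and thus lowers the effective refractive index, giving the ordering required for the R, G, and B wavelengths. The pitch values are assumptions satisfying PT3<PT2<PT1.

```python
import math

N_TIO2, N_SIO2 = 2.67, 1.46

def effective_index(diameter_nm: float, pitch_nm: float) -> float:
    """Area-weighted effective index: circular SiO2 holes in TiO2."""
    f = math.pi * (diameter_nm / 2.0) ** 2 / pitch_nm ** 2
    return f * N_SIO2 + (1.0 - f) * N_TIO2

DA1 = 120.0  # nm, assumed hole diameter near the image height center
# Assumed per-color pitches with PT3 < PT2 < PT1, as described above.
for color, pitch in [("R (PT1)", 200.0), ("G (PT2)", 180.0), ("B (PT3)", 160.0)]:
    print(f"{color}: n_eff = {effective_index(DA1, pitch):.2f}")
```

With these assumed values, the effective index decreases from the R pixel to the B pixel, matching the requirement that shorter wavelengths see a lower refractive index.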
As described above, each pixel 10 in the second embodiment has the refractive index variation layer 34 in which the effective refractive index is optimized according to the incident angle and wavelength of the incident light. The refractive index variation layer 34 is configured by combining the region of the titanium oxide film 32 and the region of the silicon oxide film 33. This makes it possible to reduce the reflection of the incident light in accordance with differences in the incident angle depending on the image height position, as well as differences in the wavelength.
Like
In the first embodiment illustrated in
In contrast, in the third embodiment, the refractive index variation layer 34 is formed by embedding the titanium oxide film 32 and the aluminum oxide film 31, which are the intermediate and lower layers of the three layers constituting the anti-reflection film 22, respectively, in a region within the semiconductor substrate 20 in which the photodiode 11 is formed (called a “PD formation region” hereinafter). In other words, the refractive index variation layer 34 is configured by combining (i) the PD formation region and (ii) the region of the aluminum oxide film 31 and the titanium oxide film 32.
Although the plan view of the refractive index variation layer 34 is omitted, the pattern shapes of the aluminum oxide film 31 and the titanium oxide film 32 embedded in the PD formation region are a circular pattern (circles), as in the first embodiment. As described as a variation on the first embodiment, the pattern shapes of the aluminum oxide film 31 and the titanium oxide film 32 embedded in the PD formation region may be shapes other than a circular pattern.
For the diameter DA of the circular pattern of the aluminum oxide film 31 and the titanium oxide film 32 in the R pixel, the magnitude relationship between the position near the center of the image height and the high image height position is the same as in the first embodiment. In other words, the diameter is assumed to be the diameter DA1 in the position near the center of the image height, and the diameter DA2, which is smaller than the diameter DA1 (DA1>DA2), in the high image height position.
As described above, the refractive index of silicon (Si), which constitutes the semiconductor substrate 20, is, for example, about 4.16; the refractive index of the aluminum oxide film 31 is, for example, about 1.64; and the refractive index of the titanium oxide film 32 is, for example, about 2.67. As such, the higher the ratio of the PD formation region (silicon) is, the greater the effective refractive index of the refractive index variation layer 34 becomes.
Accordingly, for the R pixel, the effective refractive index of the refractive index variation layer 34 is higher in the high image height position than in the position near the center of the image height.
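For the third embodiment, the same area-ratio reasoning applies per depth slice of the PD formation region, with silicon as the high-index matrix. A minimal sketch, assuming the same illustrative dimensions as before:

```python
import math

N_SI, N_AL2O3, N_TIO2 = 4.16, 1.64, 2.67  # indices given in the text

def effective_index(pillar_diameter_nm, pitch_nm, n_pillar):
    """Area-weighted index of one depth slice: circular pillars of the
    embedded film in a silicon (PD formation region) matrix."""
    f = math.pi * (pillar_diameter_nm / 2.0) ** 2 / pitch_nm ** 2
    return f * n_pillar + (1.0 - f) * N_SI

PT1, DA1, DA2 = 200.0, 120.0, 80.0  # nm, illustrative assumptions
for label, d in [("near center (DA1)", DA1), ("high image height (DA2)", DA2)]:
    print(f"{label}: n_eff = {effective_index(d, PT1, N_TIO2):.2f} (TiO2 slice), "
          f"{effective_index(d, PT1, N_AL2O3):.2f} (Al2O3 slice)")
```

The smaller diameter DA2 leaves a larger silicon fraction, so the effective index comes out higher at the high image height position in both slices.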
In addition, in the third embodiment, the effective refractive index of the refractive index variation layer 34 is optimally adjusted according to the wavelength of the incident light, as in the second embodiment.
Specifically, the pitch PT of the circular pattern of the aluminum oxide film 31 and the titanium oxide film 32 is the pitch PT1 in the R pixel, but is the pitch PT2, which is smaller than the pitch PT1 (PT2<PT1), in the G pixel. In the G pixel, the diameter DA of the circular pattern of the aluminum oxide film 31 and the titanium oxide film 32 is the diameter DA1 in the position near the center of the image height, and the diameter DA2 in the high image height position.
Accordingly, the circular pattern of the aluminum oxide film 31 and the titanium oxide film 32 is formed such that the effective refractive index of the refractive index variation layer 34 is lower for the G pixels than for the R pixels, as in the second embodiment.
As described above, each pixel 10 in the third embodiment has the refractive index variation layer 34 in which the effective refractive index is optimized according to the incident angle and wavelength of the incident light. The refractive index variation layer 34 is configured by combining (i) the PD formation region, (ii) the aluminum oxide film 31 region, and (iii) the titanium oxide film 32 region. This makes it possible to reduce the reflection of the incident light in accordance with differences in the incident angle depending on the image height position, as well as differences in the wavelength.
Note that in the third embodiment, the refractive index variation layer 34 may be configured to have no difference according to the color (transmission wavelength) of the color filter layer 24, as in the first embodiment.
In addition, depending on the diameters DA1 and DA2, the material embedded in the PD formation region of the refractive index variation layer 34 may be only the aluminum oxide film 31, rather than two layers, i.e., the aluminum oxide film 31 and the titanium oxide film 32.
The fourth embodiment differs from the foregoing first embodiment in that the refractive index variation layer 34 is formed in a layer of an anti-reflection film 90 formed on the uppermost surface of the on-chip lens 25, rather than in a layer of the anti-reflection film 22 on the semiconductor substrate 20.
The anti-reflection film 90 formed on the upper surface of the on-chip lens 25 is constituted by layering a first film 91 and a second film 92. Like the anti-reflection film 22, a tantalum oxide film (Ta2O5), an aluminum oxide film (Al2O3), a titanium oxide film (TiO2), or the like, for example, can be used for the first film 91 and the second film 92. A silicon oxide film, a silicon nitride film, or a silicon oxynitride film, or a resin material such as a styrene resin, an acrylic resin, a styrene-acrylic copolymer resin, a siloxane resin, or the like, may also be used for the first film 91 and the second film 92. One of the first film 91 and the second film 92 may be formed from the same material as the on-chip lens 25.
The upper-layer second film 92 is embedded in a partial region of the lower-layer first film 91. The lower-layer first film 91 is constituted, for example, by a material having a higher refractive index than the upper-layer second film 92. In other words, the refractive index variation layer 34 is configured by combining a region of the first film 91 having a high refractive index and a region of the second film 92 having a lower refractive index. The first film 91 constitutes the main region of the refractive index variation layer 34, and a plurality of the second films 92 formed in a predetermined pattern shape are disposed in a partial region of the first film 91 at predetermined intervals.
The ratio of the first film 91 and the second film 92 in the refractive index variation layer 34 is different between the pixel 10 in the position near the center of the image height and the pixel 10 in the high image height position. In other words, the effective refractive index of the refractive index variation layer 34 is adjusted to an optimal value according to the image height position, and the density of the first film 91 in the pixel 10 at the high image height position is higher than in the pixel 10 at the position near the center of the image height.
Furthermore, like the second embodiment, in the fourth embodiment, the effective refractive index of the refractive index variation layer 34 may be adjusted so as to be optimized according not only to the incident angle of the incident light, but also the wavelength of the incident light received by each pixel 10.
In the fourth embodiment, by forming the refractive index variation layer 34 on the uppermost surface of the on-chip lens 25, each of the three layers constituting the anti-reflection film 22, specifically, the aluminum oxide film 31 in the lowermost layer, the titanium oxide film 32 in the intermediate layer, and the silicon oxide film 33 in the uppermost layer, is formed on the entire region of the pixel array unit 50 at a uniform film thickness.
The configuration of the pixel 10 aside from the above-described points is the same as in the first embodiment and will therefore not be described.
As described above, each pixel 10 in the fourth embodiment has the refractive index variation layer 34, in which the effective refractive index is optimized according to the incident angle of the incident light, formed on the uppermost surface of the on-chip lens 25. The refractive index variation layer 34 is configured by combining a region of the first film 91 and a region of the second film 92 having different refractive indices. This makes it possible to reduce the reflection of the incident light in accordance with differences in the incident angle depending on the image height position. In addition, when the effective refractive index of the refractive index variation layer 34 is adjusted to be optimal in accordance with the wavelength of the incident light received by each pixel 10 as well, the reflection of the incident light can be reduced also in accordance with differences in the wavelength.
The fifth embodiment differs from the foregoing first embodiment in that the refractive index variation layer 34 is formed in a layer of an anti-reflection film 100 formed above the color filter layer 24, rather than in a layer of the anti-reflection film 22 on the semiconductor substrate 20.
The anti-reflection film 100 formed on an upper surface of the color filter layer 24 is constituted by a combination of a region of a first film 101 and a region of a second film 102. The first film 101 is constituted, for example, by a material having a higher refractive index than the second film 102. The first film 101 is constituted, for example, by a titanium oxide film, a tantalum oxide film, or the like, as in the first embodiment, and the second film 102 is constituted, for example, by a silicon oxide film, a silicon nitride film, a silicon oxynitride film, or the like.
In other words, the refractive index variation layer 34 is configured by combining a region of the first film 101 having a high refractive index and a region of the second film 102 having a lower refractive index. The first film 101 constitutes the main region of the refractive index variation layer 34, and a plurality of the second films 102 formed in a predetermined pattern shape are disposed in a partial region of the first film 101 at predetermined intervals.
The ratio of the first film 101 and the second film 102 in the refractive index variation layer 34 is different between the pixel 10 in the position near the center of the image height and the pixel 10 in the high image height position. The effective refractive index of the refractive index variation layer 34 is adjusted to an optimal value according to the image height position, and the density of the first film 101 in the pixel 10 at the high image height position is higher than in the pixel 10 at the position near the center of the image height.
Furthermore, like the second embodiment, in the fifth embodiment, the effective refractive index of the refractive index variation layer 34 may be adjusted so as to be optimized according not only to the incident angle of the incident light, but also the wavelength of the incident light received by each pixel 10.
In the fifth embodiment, by forming the refractive index variation layer 34 on the upper surface of the color filter layer 24, the three layers constituting the anti-reflection film 22 are formed on the entire region of the pixel array unit 50 at a uniform film thickness. In addition, the on-chip lens 25 formed above the color filter layer 24 in the first embodiment is omitted.
The configuration of the pixel 10 aside from the above-described points is the same as in the first embodiment and will therefore not be described.
As described above, each pixel 10 in the fifth embodiment has the refractive index variation layer 34, in which the effective refractive index is optimized according to the incident angle of the incident light, formed on the upper surface of the color filter layer 24. The refractive index variation layer 34 is configured by combining a region of the first film 101 and a region of the second film 102 having different refractive indices. This makes it possible to reduce the reflection of the incident light in accordance with differences in the incident angle depending on the image height position. In addition, when the effective refractive index of the refractive index variation layer 34 is adjusted to be optimal in accordance with the wavelength of the incident light received by each pixel 10 as well, the reflection of the incident light can be reduced also in accordance with differences in the wavelength.
The on-chip lens 25 omitted in the fifth embodiment and illustrated in
Each of the foregoing embodiments described an example in which the on-chip lens 25 is formed in units of pixels, and the effective refractive index of the refractive index variation layer 34 is varied depending on the incident angle of the incident light, which changes depending on the image height position in the pixel array unit 50. In other words, examples were described in which the effective refractive index of the refractive index variation layer 34 is caused to correspond to a different incident angle for the center area of the image height and the high image height position.
Incidentally, some solid-state image capturing devices have a structure in which one on-chip lens is provided for a plurality of adjacent pixels.
For example, there is a structure in which the pixel 10 has a rectangular pixel shape, and one on-chip lens 121 is provided for two pixels 10 adjacent in the row direction, as indicated by A in
In addition, for example, there is a structure in which the pixel 10 has a square pixel shape, and one on-chip lens 121 is provided for a total of four pixels 10 in a 2×2 arrangement, with two pixels arranged in each of the row direction and the column direction, as indicated by B in
For the color filter layer 24, a color filter layer 24 of the same color is provided for the plurality of pixels over which one on-chip lens 121 is provided. In the pixel structure indicated in A of
In such a pixel structure, when signals from a plurality of pixels under the one on-chip lens 121 are read out at the same time for all the pixels, those signals can be used as a pixel signal of a single pixel having a large pixel size. On the other hand, when signals from a plurality of pixels under the one on-chip lens 121 are read out individually, those signals can be used as phase difference signals.
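As a toy illustration of these two readout modes for the 1×2 arrangement, in which one on-chip lens 121 covers a left and a right pixel (the signal values are hypothetical, not from the disclosure):

```python
# Hypothetical outputs of the two pixels under one on-chip lens 121.
left_signal, right_signal = 480, 520

# Read out at the same time: the pair acts as one large pixel.
large_pixel_signal = left_signal + right_signal

# Read out individually: the left/right imbalance carries the phase
# difference information used for focus detection.
phase_difference_signal = left_signal - right_signal
print(large_pixel_signal, phase_difference_signal)
```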
In addition, such a pixel structure has a feature in that the incident angle of the incident light is different for each pixel under the one on-chip lens 121 at the high image height position. For example, with the pixel structure in which one on-chip lens 121 is provided for the two rectangular pixels indicated in A of
The pixel 10 in each of the foregoing embodiments includes the refractive index variation layer 34 having at least two regions in the same layer: a first region containing a first matter, and a second region containing a second matter having a refractive index different from that of the first matter. The effective refractive index of the refractive index variation layer 34 is configured to differ depending on the image height position of the pixel 10. Specifically, the area ratio between the first region and the second region in the refractive index variation layer 34 is adjusted according to the incident angle of the incident light, which differs depending on the image height position.
In the foregoing first embodiment, for example, the first region is a region of the titanium oxide film 32, in which the first matter is titanium oxide, and the second region is a region of the silicon oxide film 33, in which the second matter is silicon oxide. The refractive index variation layer 34 may also be configured using air as one of the first matter and the second matter, such that the first region or the second region is an air layer, for example by forming an air layer region and an oxide film region in the same layer.
In addition, for example, as in the third embodiment illustrated in
Providing the pixel 10 with the refractive index variation layer 34 makes it possible to reduce the reflection of incident light depending on the image height position. Reducing the reflection of the incident light makes it possible to increase the amount of transmitted light, which in turn makes it possible to increase the quantum efficiency Qe and suppress the occurrence of flaring.
The foregoing embodiments have described an example in which the photodiode 11, serving as a photoelectric conversion unit, is configured in the semiconductor substrate 20 formed of silicon (Si).
However, the material of the semiconductor substrate 20 is not limited to silicon. For example, the material of the semiconductor substrate 20 may be germanium (Ge), SiGe, a III-V compound semiconductor such as GaAs, InGaAs, InGaAsP, InAs, InSb, or InAsSb, or a compound semiconductor having a chalcopyrite structure, and the photodiode 11 may be configured therein.
A solid-state image capturing device 200 in
Each of the pixels 202 arranged in a two-dimensional array in the pixel array unit 203 has any one of the configurations of the first to fifth embodiments of the pixel 10 described above. In other words, the pixel 202 includes the refractive index variation layer 34 in which the effective refractive index is varied at least according to the image height position, and has a pixel structure in which reflection of incident light is reduced according to the image height position.
The control circuit 208 receives an input clock, data instructing an operation mode, and the like, and outputs data such as internal information of the solid-state image capturing device 200. In other words, the control circuit 208 generates clock signals and control signals serving as references for the operations of the vertical drive circuit 204, the column signal processing circuits 205, and the horizontal drive circuit 206 based on vertical synchronization signals, horizontal synchronization signals, and a master clock. The control circuit 208 then outputs the generated clock signals and control signals to the vertical drive circuit 204, the column signal processing circuits 205, the horizontal drive circuit 206, and the like.
The vertical drive circuit 204, which is constituted by, for example, a shift register, drives the pixels 202 in units of rows by selecting a predetermined pixel driving wire 210 and supplying a pulse for driving the pixels 202 to the selected pixel driving wire 210. That is, the vertical drive circuit 204 sequentially performs selection scanning on the pixels 202 in the pixel array unit 203 in the vertical direction in units of rows, and supplies a pixel signal based on signal charges generated in accordance with the amount of light received in the photoelectric conversion unit of each of the pixels 202 to the column signal processing circuits 205 through vertical signal lines 209.
One column signal processing circuit 205 is provided for each column of the pixels 202, and performs signal processing such as noise cancellation on a signal output from the pixels 202 corresponding to one row for each column. For example, the column signal processing circuit 205 performs signal processing such as correlated double sampling (CDS) and AD conversion for removing pixel-specific fixed pattern noise.
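As a minimal illustration of what CDS accomplishes (all values hypothetical): each pixel's reset level is sampled and subtracted from its signal level, cancelling the pixel-specific offset that would otherwise appear as fixed pattern noise.

```python
# Hypothetical samples for four pixels in one row (arbitrary ADC units).
reset_levels = [1002, 998, 1005, 1001]    # sampled just after pixel reset
signal_levels = [1202, 1158, 1305, 1221]  # sampled after charge transfer

# Correlated double sampling: per-pixel offsets cancel in the difference.
cds_outputs = [sig - rst for sig, rst in zip(signal_levels, reset_levels)]
print(cds_outputs)  # -> [200, 160, 300, 220]
```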
The horizontal drive circuit 206, which is constituted by, for example, a shift register, sequentially outputs a horizontal scanning pulse to select each of the column signal processing circuits 205 in order, and outputs a pixel signal to a horizontal signal line 211 from each of the column signal processing circuits 205.
The output circuit 207 performs predetermined signal processing on the signals sequentially supplied from the respective column signal processing circuits 205 through the horizontal signal line 211, and outputs the resulting signals. For example, the output circuit 207 may perform only buffering in some cases, or may perform black level adjustment, column variation compensation, and various kinds of digital signal processing in other cases. An input/output terminal 213 exchanges signals with the exterior.
The solid-state image capturing device 200 configured as described above is a CMOS image sensor, known as a column AD type, in which the column signal processing circuits 205 that perform CDS processing and AD conversion processing are provided on a column-by-column basis. In addition, the solid-state image capturing device 200 includes the pixel 10 configured as described above as the pixels 202 of the pixel array unit 203.
By using the pixel 10 configured as described above as the pixels 202 of the pixel array unit 203, the solid-state image capturing device 200 can reduce reflection of incident light in each pixel 202, and can therefore generate a high-quality captured image.
The technique of the present disclosure (the present technique) is not limited to being applied in a solid-state image capturing device. In other words, the present technique can be applied in electronic devices in general in which a solid-state image capturing device is used in an image capturing unit (a photoelectric conversion unit), including image capturing devices such as digital still cameras and video cameras, mobile terminal devices having image capturing functions, copiers which use solid-state image capturing devices as image reading units, and the like. The solid-state image capturing device may be formed as a single chip, or may be formed as a module having an image capturing function, in which an image capturing unit and a signal processing unit or an optical system are packaged together.
An image capturing device 300 in
The optical unit 301 captures incident light (image light) from a subject and forms an image on an image forming surface of the solid-state image capturing device 302. The solid-state image capturing device 302 converts the amount of incident light formed as an image on the image forming surface by the optical unit 301 into an electrical signal for each pixel, and outputs the electrical signal as a pixel signal. The solid-state image capturing device 302 can have the configuration of the solid-state image capturing device 200 in
The display unit 305 includes, for example, a panel-type display device such as a liquid crystal panel or an organic EL (Electro Luminescence) panel, and displays moving images or still images captured by the solid-state image capturing device 302. The recording unit 306 records the moving images or the still images captured by the solid-state image capturing device 302 in a recording medium such as a hard disk or a semiconductor memory.
The operation unit 307 issues operation commands for various functions of the image capturing device 300 in response to operations by a user. The power source unit 308 appropriately supplies various types of power serving as operation power for the DSP circuit 303, the frame memory 304, the display unit 305, the recording unit 306, and the operation unit 307, to those units.
As described above, by using the solid-state image capturing device 302 having the pixel 10 configured as described above as the pixels that receive incident light from the subject, i.e., the pixel structure including the refractive index variation layer 34 for which the effective refractive index differs according to differences in the incident angle depending on the image height position, reflection of the incident light can be reduced, and a drop in image quality can be suppressed, for example. In addition, increasing the quantum efficiency Qe and suppressing the occurrence of flaring makes it possible to improve the S/N ratio and achieve a high dynamic range. Accordingly, a high captured image quality can be achieved in images captured by the image capturing device 300 employed in a video camera, a digital still camera, and furthermore, in a camera module for a mobile device, including a mobile phone.
An image sensor including the solid-state image capturing device 200 described above can be used in various cases for sensing light such as visible light, infrared light, ultraviolet light, and X-rays, for example, as will be described below.
Although the foregoing examples described the technique of the present disclosure being applied in a solid-state image capturing device that outputs an image signal, the technique of the present disclosure can be applied not only in such solid-state image capturing devices, but also in any light detection device that includes pixels which receive incident light and photoelectrically convert the light. For example, the technique of the present disclosure can also be applied in a light receiving device (a range sensor) of a rangefinding system that receives infrared light emitted as active light and measures a distance to a subject by a direct ToF method or an indirect ToF method. The technique is also not limited to a CMOS-type solid-state image capturing device, and can also be applied in a Charge Coupled Device (CCD)-type solid-state image capturing device.
The technique according to the present disclosure (the present technique) can be applied in various products. For example, the technique according to the present disclosure may be applied in an endoscopic surgery system.
The endoscope 11100 includes a lens barrel 11101, of which a region having a predetermined length from a tip thereof is inserted into a body cavity of the patient 11132, and a camera head 11102 connected to a base end of the lens barrel 11101. In the example illustrated here, the endoscope 11100 is configured as what is known as a rigid endoscope having a rigid lens barrel 11101, but the endoscope 11100 may be configured as what is known as a flexible endoscope having a flexible lens barrel.
The tip of the lens barrel 11101 is provided with an opening into which an objective lens is fitted. A light source device 11203 is connected to the endoscope 11100, light generated by the light source device 11203 is guided to the tip of the lens barrel 11101 by a light guide provided extending along the inside of the lens barrel 11101, and the light is emitted toward an observation target in the body cavity of the patient 11132 through the objective lens. The endoscope 11100 may be a direct-view endoscope, an oblique-view endoscope, or a side-view endoscope.
An optical system and an image sensor are provided inside of the camera head 11102, and reflected light (observation light) from the observation target is focused on the image sensor by the optical system. The observation light is photoelectrically converted by the image sensor, and an electrical signal corresponding to the observation light, i.e., an image signal corresponding to an observation image, is generated. The image signal is transmitted to a camera control unit (CCU) 11201 as RAW data.
The CCU 11201 is constituted by a central processing unit (CPU), a graphics processing unit (GPU), and the like, and comprehensively controls the operations of the endoscope 11100 and a display device 11202. In addition, the CCU 11201 receives an image signal from the camera head 11102 and performs various types of image processing on the image signal to display an image based on the image signal, such as, for example, development processing (demosaicing) and the like.
The display device 11202 displays the image based on the image signal subjected to the image processing by the CCU 11201 under the control of the CCU 11201.
The light source device 11203 is constituted by, for example, a light source such as a Light Emitting Diode (LED), and supplies the endoscope 11100 with emitted light when capturing an image of a surgical site or the like.
An input device 11204 is an input interface for the endoscopic surgery system 11000. The user can input various types of information or instructions to the endoscopic surgery system 11000 through the input device 11204. For example, the user inputs an instruction to change image capturing conditions (the type of emitted light, the magnification, the focal length, or the like) of the endoscope 11100.
A treatment tool control device 11205 controls driving of the energized treatment tool 11112 for the cauterization or incision of tissues, sealing blood vessels, or the like. The pneumoperitoneum device 11206 sends a gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 in order to inflate the body cavity and secure a field of view for the endoscope 11100 and a working space of the operator. A recorder 11207 is a device capable of recording various types of information pertaining to the surgery. A printer 11208 is a device capable of printing various types of information pertaining to the surgery in various formats such as text, images, and graphs.
The light source device 11203 that supplies the endoscope 11100 with the emitted light for capturing images of the surgical site can be constituted by, for example, an LED, a laser light source, or a white light source constituted by a combination thereof. When a white light source is constituted by a combination of RGB laser light sources, the output intensity and the output timing of each color (each wavelength) can be controlled with high accuracy, and thus the light source device 11203 can adjust the white balance of the captured image. In this case, by irradiating the observation target with laser light from each of the RGB laser light sources in time-division and controlling the driving of the image sensor in the camera head 11102 in synchronization with that irradiation timing, images corresponding to each of the RGB colors can be captured in time-division as well. According to this method, color images can be obtained even without providing the image sensor with a color filter.
In addition, the driving of the light source device 11203 may be controlled to change the intensity of the output light at predetermined intervals. By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing at which the intensity of the light is changed to obtain images in a time-division manner, and then compositing those images, a high dynamic range image without blocked-up shadows or blown-out highlights can be generated.
The light source device 11203 may be configured to be capable of supplying light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, what is known as narrow band light observation (narrow band imaging) is performed, in which a predetermined tissue such as a blood vessel in a mucous membrane surface layer is imaged with high contrast by emitting light in a band narrower than that of the light emitted during normal observation (i.e., white light), utilizing the wavelength dependence of light absorption in body tissue. Alternatively, in special light observation, fluorescence observation may be performed to obtain an image from fluorescence generated by emitting excitation light. In fluorescence observation, body tissue is irradiated with excitation light and the fluorescence from the body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) is injected into the body tissue and the tissue is irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescent image. The light source device 11203 can be configured to supply narrow band light and/or excitation light corresponding to such special light observation.
The camera head 11102 includes a lens unit 11401, an image capturing unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicatively connected to each other by a transmission cable 11400.
The lens unit 11401 is an optical system provided in a connection part for connection to the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and is incident on the lens unit 11401. The lens unit 11401 is configured as a combination of a plurality of lenses including a zoom lens and a focus lens.
The image capturing unit 11402 is constituted by an image sensor. The image sensor constituting the image capturing unit 11402 may be a single element (what is known as a single plate type) or a plurality of elements (what is known as a multi-plate type). When the image capturing unit 11402 is configured as a multi-plate type, for example, image signals corresponding to each color of R, G, and B may be generated by the image sensors, and a color image may be obtained by compositing the image signals. Alternatively, the image capturing unit 11402 may be configured to include a pair of image sensors for obtaining image signals for the right eye and the left eye, respectively, so as to implement a three-dimensional (3D) display. Implementing a 3D display enables the operator 11131 to ascertain the depth of biological tissues in the surgical site more accurately. When the image capturing unit 11402 is configured as a multi-plate type, a plurality of lens units 11401 may be provided so as to correspond to the respective image sensors.
The image capturing unit 11402 does not necessarily have to be provided in the camera head 11102. For example, the image capturing unit 11402 may be provided immediately after the objective lens inside of the lens barrel 11101.
The drive unit 11403 is constituted by an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. This allows the magnification and focus of the image captured by the image capturing unit 11402 to be adjusted appropriately.
The communication unit 11404 is constituted by a communication device for exchanging various types of information with the CCU 11201. The communication unit 11404 transmits the image signal obtained from the image capturing unit 11402 as RAW data to the CCU 11201 over the transmission cable 11400.
The communication unit 11404 receives a control signal for controlling the driving of the camera head 11102 from the CCU 11201 and supplies that control signal to the camera head control unit 11405. The control signal includes, for example, information regarding image capturing conditions, such as information indicating a designation of a framerate of a captured image, information indicating a designation of an exposure value when an image is captured, and/or information indicating a designation of the magnification and the focus of the captured image.
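For illustration, the designations carried by such a control signal might be modeled as follows. The field names and units are hypothetical stand-ins, not the actual signal format exchanged between the CCU 11201 and the camera head 11102.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptureControlSignal:
    """Hypothetical payload of a control signal for the camera head; a None
    field means that condition is not designated by this signal."""
    framerate_fps: Optional[float] = None   # designated framerate of the captured image
    exposure_value: Optional[float] = None  # designated exposure value at capture time
    magnification: Optional[float] = None   # designated zoom magnification
    focus_position: Optional[float] = None  # designated focus-lens position
```

A signal that designates only the framerate would then leave the other fields as None, matching the "and/or" character of the conditions listed above.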
The image capturing conditions such as the framerate, the exposure value, the magnification, and the focus may be designated by the user as appropriate, or may be automatically set by the control unit 11413 of the CCU 11201 based on the obtained image signal. In the latter case, what are known as an auto exposure (AE) function, an auto focus (AF) function, and an auto white balance (AWB) function are provided in the endoscope 11100.
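As an illustration of the automatic setting mentioned above, the following is a naive single step of an AE loop that nudges the exposure value toward a target mean brightness computed from the obtained image signal; the target level and step gain are assumptions made for this sketch.

```python
import numpy as np

def auto_exposure_step(frame, exposure_value, target_mean=118.0, gain=0.05):
    """One iteration of a naive auto-exposure (AE) loop (illustrative only):
    raise the exposure value when the frame is darker than the target mean
    brightness, and lower it when the frame is brighter."""
    error = (target_mean - float(frame.mean())) / target_mean
    return exposure_value * (1.0 + gain * error)
```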
The camera head control unit 11405 controls the driving of the camera head 11102 based on a control signal from the CCU 11201 received via the communication unit 11404.
The communication unit 11411 is constituted by a communication device that exchanges various kinds of information with the camera head 11102. The communication unit 11411 receives an image signal transmitted over the transmission cable 11400 from the camera head 11102.
The communication unit 11411 also transmits control signals for controlling the driving of the camera head 11102 to the camera head 11102. The image signals and the control signals can be transmitted through electric communication, optical communication, or the like.
The image processing unit 11412 performs various types of image processing on the image signal that is the RAW data transmitted from the camera head 11102.
The control unit 11413 performs various types of control pertaining to capturing images of a surgical site by the endoscope 11100, displaying captured images obtained by capturing images of a surgical site, or the like. For example, the control unit 11413 generates control signals for controlling the driving of the camera head 11102.
In addition, the control unit 11413 causes the display device 11202 to display a captured image showing a surgical site or the like based on an image signal subjected to the image processing by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, the control unit 11413 can recognize a surgical instrument such as forceps, a specific biological site, bleeding, mist or the like produced when using the energized treatment tool 11112, or the like by detecting a shape, a color, or the like of an edge of an object present in the captured image. When causing the display device 11202 to display a captured image, the control unit 11413 may superimpose various types of surgery support information on an image of the surgical site for display using a result of the recognition. Displaying the surgery support information in a superimposed manner and presenting that information to the operator 11131 makes it possible to lighten the burden on the operator 11131 and enable the operator 11131 to proceed with the surgery with confidence.
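One way to picture the shape-based recognition and superimposed display described above is the following sketch, which flags long, thin, high-contrast objects as instrument candidates and draws an emphasizing frame. OpenCV is used here purely as an illustrative stand-in, and the thresholds are assumptions, not the recognition technique actually used by the control unit 11413.

```python
import cv2

def overlay_support_info(image_bgr):
    """Flag long, thin, high-contrast objects as instrument candidates and
    superimpose an emphasizing frame and label (illustrative only)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)  # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = image_bgr.copy()
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(1, min(w, h))
        if cv2.contourArea(c) > 500 and aspect > 4:  # long and thin
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(out, "instrument?", (x, max(0, y - 5)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return out
```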
The transmission cable 11400 that connects the camera head 11102 and the CCU 11201 is an electrical signal cable that supports communication of electrical signals, an optical fiber that supports optical communication, or a composite cable thereof.
Although wired communication is performed using the transmission cable 11400 in the example illustrated here, the camera head 11102 and the CCU 11201 may communicate wirelessly.
An example of an endoscopic surgery system to which the technique according to the present disclosure can be applied has been described thus far. The technique according to the present disclosure may be applied to the image capturing unit 11402 of the camera head 11102 among the configurations described above. Specifically, the solid-state image capturing device 200 described in the foregoing embodiments can be applied as the image capturing unit 11402.
Here, although an endoscopic surgery system has been described as an example, the technique according to the present disclosure may be applied to other systems, such as a microscopic surgery system, for example.
The technique according to the present disclosure (the present technique) can be applied in various products. For example, the technique according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric automobile, a hybrid electric automobile, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, or the like.
A vehicle control system 12000 includes a plurality of electronic control units connected over a communication network 12001. In the example illustrated here, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, a microcomputer 12051, and a sound/image output unit 12052.
The drive system control unit 12010 controls the operations of devices related to the drive system of the vehicle according to various types of programs. For example, the drive system control unit 12010 functions as a control device for: a driving force generation device for generating driving force for the vehicle, such as an internal combustion engine or a driving motor; a driving force transmission mechanism for transmitting the driving force to the wheels; a steering mechanism for adjusting the turning angle of the vehicle; a braking device that generates braking force for the vehicle; and the like.
The body system control unit 12020 controls the operations of various devices mounted in the vehicle body in accordance with various programs. For example, the body system control unit 12020 functions as control devices for a keyless entry system, a smart key system, power window devices, or various lamps such as headlights, backup lights, brake lights, turn signals, fog lights, and the like. In this case, radio waves emitted from a portable device that substitutes for a key or signals from various switches can be input to the body system control unit 12020. The body system control unit 12020 receives the input of the radio waves or signals and controls door lock devices, power window devices, the lamps, and the like of the vehicle.
The vehicle exterior information detection unit 12030 detects information on the exterior of the vehicle in which the vehicle control system 12000 is installed. For example, an image capturing unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the image capturing unit 12031 to capture an image of the exterior of the vehicle, and receives the captured image. The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for people, cars, obstacles, signs, letters on the road, and the like based on the received image.
The image capturing unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the intensity of the received light. The image capturing unit 12031 can also output the electrical signal as an image or as distance measurement information. In addition, the light received by the image capturing unit 12031 may be visible light or non-visible light such as infrared light.
The vehicle interior information detection unit 12040 detects information on the interior of the vehicle. For example, a driver state detection unit 12041 that detects a state of a driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the vehicle interior information detection unit 12040 may calculate the level of the driver's fatigue or concentration, or may determine whether the driver is dozing, based on detection information input from the driver state detection unit 12041.
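One common proxy for the level of fatigue or dozing mentioned above is the fraction of recent frames in which the driver's eyes are mostly closed (a PERCLOS-style measure). The sketch below assumes an upstream face/eye detector that outputs an eye-openness value per frame; the window length and thresholds are illustrative, not the method actually used by the vehicle interior information detection unit 12040.

```python
from collections import deque

class DrowsinessEstimator:
    """PERCLOS-style drowsiness estimate over a sliding window of frames.
    The eye-openness input (0.0 = closed, 1.0 = open) is assumed to come
    from an upstream eye detector; all thresholds are illustrative."""

    def __init__(self, window_frames=900, closed_threshold=0.2):
        self.closed_threshold = closed_threshold
        self.history = deque(maxlen=window_frames)  # e.g., 30 s at 30 fps

    def update(self, eye_openness):
        """Record one frame and return the fraction of recent frames in
        which the eyes were mostly closed."""
        self.history.append(eye_openness < self.closed_threshold)
        return sum(self.history) / len(self.history)

    def is_dozing(self, perclos):
        return perclos > 0.3  # illustrative alarm threshold
```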
For example, the microcomputer 12051 can calculate control target values for a driving force generation device, a steering mechanism, or a braking device based on information on the inside and outside of the vehicle obtained by the vehicle exterior information detection unit 12030 and the vehicle interior information detection unit 12040, and output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform coordinated control for the purpose of implementing functions of an Advanced Driver Assistance System (ADAS) including vehicle collision avoidance, impact mitigation, following traveling based on an inter-vehicle distance, constant vehicle speed driving, vehicle collision warnings, and lane departure warnings.
In addition, the microcomputer 12051 can perform coordinated control for the purpose of automated driving or the like in which autonomous travel is performed without requiring operations of the driver, by controlling the driving force generation device, the steering mechanism, the braking device, or the like based on information about the surroundings of the vehicle, the information being obtained by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
In addition, the microcomputer 12051 can output control commands to the body system control unit 12020 based on the information on the exterior of the vehicle obtained by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform coordinated control for the purpose of suppressing glare, such as switching from high beams to low beams by controlling the headlights according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.
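Reduced to its essentials, the glare-suppression decision described above is a rule over the detected vehicles. The following sketch assumes each detection is a dict carrying a 'kind' and a 'distance_m' field, and the distance threshold is an assumption made for the example.

```python
def select_beam(detected_vehicles):
    """Return 'low' when a preceding or oncoming vehicle is detected within
    an illustrative glare distance, and 'high' otherwise."""
    GLARE_DISTANCE_M = 400.0  # assumed threshold for this sketch
    for v in detected_vehicles:
        if v["kind"] in ("preceding", "oncoming") and v["distance_m"] < GLARE_DISTANCE_M:
            return "low"
    return "high"
```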
The sound/image output unit 12052 transmits an output signal of at least one of sound and an image to an output device capable of visually or audibly providing information to an occupant or to the exterior of the vehicle. In the example illustrated here, an audio speaker 12061 and a display unit 12062 are provided as such output devices.
In the example illustrated here, the vehicle 12100 includes image capturing units 12101, 12102, 12103, 12104, and 12105 as the image capturing unit 12031.
The image capturing units 12101, 12102, 12103, 12104, and 12105 are provided at the positions of the front nose, the side-view mirrors, the rear bumper, the trunk door, an upper part of the windshield within the vehicle cabin, and the like of the vehicle 12100, for example. The image capturing unit 12101 provided on the front nose and the image capturing unit 12105 provided in an upper part of the windshield within the vehicle cabin mainly obtain images from in front of the vehicle 12100. The image capturing units 12102 and 12103 provided in the side-view mirrors mainly obtain images from the sides of the vehicle 12100. The image capturing unit 12104 provided on the rear bumper or the trunk door mainly obtains images of an area behind the vehicle 12100. The images obtained by the image capturing units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic signals, traffic signs, lanes, and the like.
At least one of the image capturing units 12101 to 12104 may have a function for obtaining distance information. For example, at least one of the image capturing units 12101 to 12104 may be a stereo camera constituted by a plurality of image sensors, or may be an image sensor that has pixels for phase difference detection.
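For the stereo camera case, the distance to a matched point follows directly from its disparity: Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two image sensors, and d is the disparity in pixels. A minimal sketch follows; the numbers in the usage comment are illustrative.

```python
def stereo_distance_m(disparity_px, focal_length_px, baseline_m):
    """Distance from stereo disparity: Z = f * B / d (f in pixels, B in
    meters, d in pixels). Raises on a non-positive disparity."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed values: f = 1200 px, B = 0.3 m, d = 12 px -> 30 m.
# print(stereo_distance_m(12, 1200, 0.3))
```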
For example, based on the distance information obtained from the image capturing units 12101 to 12104, the microcomputer 12051 can obtain the distance to each three-dimensional object in the image capturing ranges 12111 to 12114 of those units and the temporal change in that distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the traveling path of the vehicle 12100 and that is traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, at least 0 km/h). The microcomputer 12051 can also set, in advance, an inter-vehicle distance to be maintained from the preceding vehicle, and perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. It is therefore possible to perform coordinated control for the purpose of, for example, automated driving in which the vehicle travels autonomously without requiring the driver to perform operations.
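The extraction criteria just described can be sketched as a filter followed by a nearest-object selection. The object fields ('distance_m', 'relative_speed_kmh', 'on_path', 'heading_delta_deg') and the heading threshold are assumptions for this example, not the actual data structures of the microcomputer 12051.

```python
def extract_preceding_vehicle(objects, ego_speed_kmh):
    """Pick the closest object on the traveling path that moves in
    substantially the same direction as the ego vehicle at an absolute
    speed of at least 0 km/h (illustrative field names and thresholds)."""
    candidates = [
        o for o in objects
        if o["on_path"]
        and abs(o["heading_delta_deg"]) < 15.0              # same direction
        and ego_speed_kmh + o["relative_speed_kmh"] >= 0.0  # absolute speed >= 0
    ]
    return min(candidates, key=lambda o: o["distance_m"], default=None)
```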
For example, based on the distance information obtained from the image capturing units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data regarding three-dimensional objects into two-wheeled vehicles, normal vehicles, large vehicles, pedestrians, and other three-dimensional objects such as electrical poles, extract that data, and use it to automatically avoid obstacles. For example, the microcomputer 12051 classifies obstacles around the vehicle 12100 into obstacles visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of risk of a collision with each obstacle, and in a situation where the collision risk is at least a set value and there is thus a possibility of a collision, can output an alarm to the driver through the audio speaker 12061 or the display unit 12062, or perform forced deceleration or avoidance steering through the drive system control unit 12010, thereby providing driving assistance for collision avoidance.
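One simple way to realize the collision risk described above is time-to-collision (TTC = distance / closing speed), graded against set values. TTC as the risk measure and the thresholds below are assumptions for this sketch, not the actual criterion of the microcomputer 12051.

```python
def collision_response(distance_m, closing_speed_mps, risk_threshold_s=2.0):
    """Grade collision risk by time-to-collision and choose a response:
    no action, an alarm to the driver, or forced deceleration via the
    drive system (illustrative thresholds)."""
    if closing_speed_mps <= 0:
        return "no_action"            # the gap is opening
    ttc_s = distance_m / closing_speed_mps
    if ttc_s < risk_threshold_s / 2:
        return "forced_deceleration"  # through the drive system control unit
    if ttc_s < risk_threshold_s:
        return "driver_alarm"         # through the audio speaker / display unit
    return "no_action"
```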
At least one of the image capturing units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether there is a pedestrian in the images captured by the image capturing units 12101 to 12104. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the image capturing units 12101 to 12104 serving as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the image capturing units 12101 to 12104 and recognizes the pedestrian, the sound/image output unit 12052 controls the display unit 12062 such that a quadrangular frame for emphasis is superimposed on the recognized pedestrian in the displayed image. The sound/image output unit 12052 may also control the display unit 12062 such that an icon or the like indicating a pedestrian is displayed at a desired position.
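The two-procedure recognition described above (feature extraction followed by pattern matching against a pedestrian outline) can be pictured with a classical detector. OpenCV's HOG-based person detector is used below purely as an illustrative stand-in for the on-board algorithm.

```python
import cv2

def detect_and_mark_pedestrians(image_bgr):
    """Detect pedestrians and superimpose an emphasizing quadrangular frame
    on each detection (illustrative only)."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _weights = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    out = image_bgr.copy()
    for (x, y, w, h) in rects:
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return out
```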
An example of the vehicle control system to which the technique according to the present disclosure can be applied has been described thus far. The technique according to the present disclosure may be applied to the image capturing unit 12031 and the like among the above-described configurations. Specifically, the solid-state image capturing device 200 described in the foregoing embodiments can be applied as the image capturing unit 12031.
The embodiments of the present disclosure are not limited to the above-described embodiments, and various modifications can be made without departing from the essential spirit of the technique of the present disclosure.
The advantageous effects described in the present specification are merely exemplary and are not intended to be limiting, and advantageous effects other than those described in the present specification may be achieved.
The technique of the present disclosure can be configured as follows.
(1)
A light detection device including:
The light detection device according to (1),
The light detection device according to (2),
The light detection device according to (2) or (3),
The light detection device according to any one of (1) to (4),
The light detection device according to any one of (1) to (5),
The light detection device according to (6),
The light detection device according to (6) or (7),
The light detection device according to any one of (1) to (8), further including:
The light detection device according to any one of (1) to (8),
The light detection device according to any one of (1) to (8),
The light detection device according to any one of (1) to (11),
The light detection device according to any one of (1) to (12),
The light detection device according to any one of (1) to (13),
An electronic device including a light detection device,
Number | Date | Country | Kind
---|---|---|---
2021-212518 | Dec 2021 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/046031 | 12/14/2022 | WO |