The present disclosure relates to a solid-state imaging device, an imaging device, and an electronic apparatus, and particularly relates to a solid-state imaging device, an imaging device, and an electronic apparatus in which generation of flare and coloring caused by the flare can be suppressed with a simple configuration.
In recent years, a chip scale package (CSP) structure has been proposed as a simple packaging method for an imaging element, and imaging elements having this CSP structure are being mass-produced.
However, flare (ghost) light, which is not generated in package structures other than the CSP structure, is generated in the above-described CSP structure. The flare is generated when light reflected at an upper surface of the imaging element is entirely reflected at sealing glass (a protective base) and is incident again.
Considering this, there is a proposed technology in which flare is suppressed by forming a wavelength control film on sealing glass (see Patent Documents 1 and 2).
However, the wavelength control films disclosed in Patent Documents 1 and 2 have wavelength dependency, and there is a possibility that coloring caused by the flare cannot be suppressed.
Furthermore, since the wavelength control film is formed by using a laminated film such as TiO/SiO, a large number of man-hours is required to form the wavelength control film.
The present disclosure is made in view of such situations and is particularly directed to suppressing generation of flare and coloring caused by the flare with a simple configuration.
A solid-state imaging device, an imaging device, and an electronic apparatus according to an aspect of the present disclosure are a solid-state imaging device, an imaging device, and an electronic apparatus in which a high refractive index layer having a refractive index higher than a refractive index of any one of a transparent protective substrate and a surface layer of an imaging surface of a solid-state imaging element is formed in a prior stage of the solid-state imaging element in a light incident direction.
In one aspect of the present disclosure, a high refractive index layer having a refractive index higher than a refractive index of any one of a transparent protective substrate and a surface layer of an imaging surface of a solid-state imaging element is formed in a prior stage of the solid-state imaging element in a light incident direction.
In the following, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the present specification and the drawings, a constituent element having substantially the same functional configuration will be denoted by the same reference sign, and duplicate description will be omitted.
Modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described below. Note that the description will be provided in the following order.
8. Exemplary Application to Electronic Apparatus
<Exemplary Configuration of Imaging Device of Present Disclosure>
Referring to
An imaging device 1 in
The solid-state imaging element 11 is an image sensor such as a complementary metal oxide semiconductor (CMOS) image sensor, a charge coupled device (CCD) image sensor, or the like, and is fixed in a state of being electrically connected on the circuit board 17. As described later with reference to
The high refractive index layer 12 is provided on an upper surface portion of the solid-state imaging element 11 in
The high refractive index layer 12 includes, for example, a silicon compound such as a silicon nitride film, silicon carbide, or the like, a metal oxide such as a titanium oxide, a tantalum oxide, a niobium oxide, a hafnium oxide, an indium oxide, a tin oxide, or the like, a complex oxide of these, or an organic substance such as an acrylic resin, siloxane, or the like.
The high refractive index layer 12 and the protective substrate 14 are bonded by the bonding resin 13 having the same refractive index as that of the protective substrate 14.
That is, the solid-state imaging element 11, the high refractive index layer 12, and the protective substrate 14 are laminated and formed as an integrated configuration by being mutually pasted by the transparent bonding resin 13, and are connected to the circuit board 17. Note that the solid-state imaging element 11, the high refractive index layer 12, and the protective substrate 14 surrounded by an alternate long and short dash line in the drawing are mutually pasted by the bonding resin 13 and integrally formed into a so-called chip scale package (CSP); therefore, this configuration will also be simply referred to as an integrated portion 10.
The spacer 20 is formed on the circuit board 17 in a manner surrounding an entire portion where the solid-state imaging element 11, the high refractive index layer 12, and the protective substrate 14 are integrally formed. Furthermore, the actuator 18 is provided on the spacer 20. The actuator 18 is formed in a cylinder shape and has the lens group 16 built inside thereof, the lens group 16 is formed by laminating a plurality of lenses inside the cylinder, and the actuator 18 drives the lens group 16 in an up-down direction of
With such a configuration, since the actuator 18 moves the lens group 16 in the up-down direction (a front-back direction with respect to an optical axis) in
<Schematic External View>
Next, a configuration of the integrated portion 10 will be described with reference to
The integrated portion 10 illustrated in
A plurality of solder balls 11f corresponding to backside electrodes to provide electrical connection with the circuit board 17 of
The upper substrate 11b has an upper surface on which an interlayer insulation film 11c and an on-chip lens (microlens) 11d including a color filter of red (R), green (G), or blue (B) are formed. Furthermore, the upper substrate 11b is connected by a cavityless structure via the flattening film 11e and the interlayer insulation film 11c, in which the flattening film 11e is provided in order to protect and flatten the on-chip lens 11d.
For example, as illustrated in
Alternatively, as illustrated in
As described above, since the logic circuit 23, or both the control circuit 22 and the logic circuit 23, is formed and laminated on the lower substrate 11a that is different from the upper substrate 11b of the pixel region 21, the imaging device 1 can be made smaller than in a case of arranging the pixel region 21, the control circuit 22, and the logic circuit 23 in a plane direction on one semiconductor substrate.
In the following, a description will be provided while referring to, as a pixel sensor substrate 11b, the upper substrate 11b having at least the pixel region 21 formed thereon, and referring to, as a logic substrate 11a, the lower substrate 11a having at least the logic circuit 23 formed thereon.
<Exemplary Configuration of Lamination Substrate>
The solid-state imaging element 11 includes a pixel array unit 33 in which pixels 32 are arrayed in a two-dimensional array, a vertical drive circuit 34, a column signal processing circuit 35, a horizontal drive circuit 36, an output circuit 37, a control circuit 38, and an input/output terminal 39.
A pixel 32 includes a photodiode as a photo-electric conversion element and a plurality of pixel transistors. An exemplary circuit configuration of the pixel 32 will be described later with reference to
Furthermore, the pixel 32 can also have a pixel sharing structure. The pixel sharing structure includes a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion (floating diffusion region), and one each of the other shared pixel transistors. That is, in shared pixels, the photodiodes and the transfer transistors constituting a plurality of unit pixels are formed in a manner sharing each of the other pixel transistors.
The control circuit 38 receives an input clock and data that command an operation mode and the like, and also outputs data such as internal information of the solid-state imaging element 11. That is, the control circuit 38 generates a clock signal and a control signal serving as a reference for operation of the vertical drive circuit 34, the column signal processing circuit 35, the horizontal drive circuit 36, and the like on the basis of a vertical synchronization signal, a horizontal synchronization signal, and a master clock. Then, the control circuit 38 outputs the generated clock signal and control signal to the vertical drive circuit 34, the column signal processing circuit 35, the horizontal drive circuit 36, and the like.
The vertical drive circuit 34 includes, for example, a shift register and selects a predetermined pixel drive wire 40, supplies pulses to the selected pixel drive wire 40 in order to drive the pixels 32, and drives the pixels 32 in a row unit. That is, the vertical drive circuit 34 selectively and sequentially scans, in a vertical direction, each of the pixels 32 in a row unit of the pixel array unit 33, and supplies, through a vertical signal line 41, the column signal processing circuit 35 with a pixel signal based on signal electric charge generated in a photo-electric conversion unit in each of the pixels 32 in accordance with a received light amount.
The column signal processing circuit 35 is arranged per column of the pixels 32 and performs, per pixel column, signal processing such as noise removal on a signal output from each of the pixels 32 included in one row. The column signal processing circuit 35 performs, for example, signal processing such as correlated double sampling (CDS) to remove fixed-pattern noise peculiar to a pixel, AD conversion, and the like.
The horizontal drive circuit 36 includes, for example, a shift register, sequentially selects each of column signal processing circuits 35 by sequentially outputting a horizontal scanning pulse, and causes each of the column signal processing circuits 35 to output a pixel signal to a horizontal signal line 42.
The output circuit 37 performs signal processing for each of the signals sequentially supplied from each of the column signal processing circuits 35 through the horizontal signal line 42, and outputs the processed signal. There is a case where the output circuit 37 performs, for example, only buffering, or there is a case where the output circuit 37 performs black level adjustment, correction of column variation, various kinds of digital signal processing, and the like. The input/output terminal 39 exchanges signals with the outside.
The solid-state imaging element 11 thus configured is a CMOS image sensor called a column AD system in which the column signal processing circuit 35 that performs CDS processing and AD conversion processing is arranged per pixel column.
<Exemplary Circuit Configuration of Pixel>
The pixel 32 illustrated in
The pixel 32 includes a photodiode 51 as a photo-electric conversion element (a photo-electric conversion region), a first transfer transistor 52, a memory unit (MEM) 53, a second transfer transistor 54, a floating diffusion region (FD) 55, a reset transistor 56, an amplification transistor 57, a selection transistor 58, and a discharge transistor 59.
The photodiode 51 is a photo-electric conversion unit that generates and accumulates electric charge (signal electric charge) corresponding to a received light amount. The photodiode 51 has an anode terminal grounded and further has a cathode terminal connected to the memory unit 53 via the first transfer transistor 52. Furthermore, the cathode terminal of the photodiode 51 is connected also to the discharge transistor 59 that is provided to discharge unnecessary electric charge.
When the first transfer transistor 52 is turned on by a transfer signal TRX, the first transfer transistor 52 reads electric charge generated at the photodiode 51 and transfers the electric charge to the memory unit 53. The memory unit 53 is an electric charge holding unit that temporarily holds the electric charge until the electric charge is transferred to the FD 55.
When the second transfer transistor 54 is turned on by a transfer signal TRG, the second transfer transistor 54 reads the electric charge held in the memory unit 53 and transfers the electric charge to the FD 55.
The FD 55 is an electric charge holding unit that holds the electric charge in order to read, as a signal, the electric charge read from the memory unit 53. When the reset transistor 56 is turned on by a reset signal RST, the reset transistor 56 resets potential of the FD 55 by discharging, to a constant voltage source VDD, the electric charge accumulated in the FD 55.
The amplification transistor 57 outputs a pixel signal corresponding to the potential of the FD 55. That is, the amplification transistor 57 constitutes a source follower circuit with a load MOS 60 as a constant current source, and a pixel signal indicating a level corresponding to the electric charge accumulated in the FD 55 is output to the column signal processing circuit 35 (
The selection transistor 58 is turned on when a pixel 32 is selected by a selection signal SEL, and outputs a pixel signal of the pixel 32 to the column signal processing circuit 35 via the vertical signal line 41.
When the discharge transistor 59 is turned on by a discharge signal OFG, the discharge transistor 59 discharges, to the constant voltage source VDD, unnecessary electric charge accumulated in the photodiode 51.
The transfer signals TRX and TRG, the reset signal RST, the discharge signal OFG, and the selection signal SEL are supplied from the vertical drive circuit 34 via the pixel drive wire 40.
Operation of the pixel 32 will be briefly described.
First, before exposure is started, the discharge transistor 59 is turned on by supplying a discharge signal OFG having a high level, and the electric charge accumulated in the photodiode 51 is discharged to the constant voltage source VDD, whereby the photodiodes 51 in all of the pixels are reset.
When the discharge transistor 59 is turned off by a discharge signal OFG having a low level after resetting the photodiodes 51, exposure is started in all of the pixels of the pixel array unit 33.
When a predetermined certain exposure time elapses, the first transfer transistors 52 are turned on by the transfer signal TRX in all of the pixels of the pixel array unit 33, and the electric charge accumulated in the photodiodes 51 is transferred to the memory unit 53.
After the first transfer transistor 52 is turned off, the electric charge held in the memory unit 53 of each pixel 32 is sequentially read out to the column signal processing circuit 35 in a row unit. In the reading operation, the second transfer transistor 54 of each of the pixels 32 located in a row to be read is turned on by the transfer signal TRG, and the electric charge held in the memory unit 53 is transferred to the FD 55. Then, when the selection transistor 58 is turned on by the selection signal SEL, a signal indicating a level corresponding to the electric charge accumulated in the FD 55 is output from the amplification transistor 57 to the column signal processing circuit 35 via the selection transistor 58.
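The drive sequence described above can be sketched as a simple behavioral model. The class and function names below are illustrative only; a real sensor moves photo-generated charge through transistors, not values through Python objects, and the light level used here is an arbitrary assumption.

```python
# Behavioral sketch of the global-shutter sequence: OFG reset -> exposure ->
# simultaneous TRX transfer (PD -> MEM) in all pixels -> per-row TRG transfer
# (MEM -> FD) -> SEL readout. Names mirror the signals in the text.

class Pixel:
    def __init__(self):
        self.pd = 0.0   # photodiode 51: accumulated signal charge
        self.mem = 0.0  # memory unit (MEM) 53: temporary charge holding
        self.fd = 0.0   # floating diffusion (FD) 55

    def ofg(self):            # discharge signal OFG: reset the photodiode
        self.pd = 0.0

    def expose(self, light):  # accumulate charge during the exposure period
        self.pd += light

    def trx(self):            # transfer signal TRX: PD -> MEM
        self.mem, self.pd = self.pd, 0.0

    def trg(self):            # transfer signal TRG: MEM -> FD
        self.fd, self.mem = self.mem, 0.0

    def sel(self):            # selection signal SEL: read the FD level
        return self.fd

def read_frame(rows):
    for row in rows:          # global reset (OFG in all pixels)
        for px in row:
            px.ofg()
    for row in rows:          # simultaneous exposure in all pixels
        for px in row:
            px.expose(1.0)    # same illustrative light level everywhere
    for row in rows:          # simultaneous TRX: charge captured at one instant
        for px in row:
            px.trx()
    frame = []
    for row in rows:          # readout is sequential per row (TRG, then SEL),
        out_row = []          # but the charge was already held in MEM
        for px in row:
            px.trg()
            out_row.append(px.sel())
        frame.append(out_row)
    return frame
```

Because TRX fires in all pixels at once, every row samples the same exposure period even though the rows are read out one after another.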
As described above, in the pixel 32 having the pixel circuit of
Note that the circuit configuration of each pixel 32 is not limited to the configuration illustrated in
<Exemplary Basic Structure of Solid-State Imaging Device>
Next, a detailed structure of the solid-state imaging element 11 will be described with reference to
In the logic substrate 11a, a multilayer wiring layer 82 is formed on an upper side (the pixel sensor substrate 11b side) of a semiconductor substrate 81 (hereinafter referred to as a silicon substrate 81) including, for example, silicon (Si). The multilayer wiring layer 82 constitutes the control circuit 22 and the logic circuit 23 of
The multilayer wiring layer 82 includes: a plurality of wiring layers 83 including an uppermost wiring layer 83a closest to the pixel sensor substrate 11b, a middle wiring layer 83b, a lowermost wiring layer 83c closest to the silicon substrate 81, and the like; and an interlayer insulation film 84 formed between the respective wiring layers 83.
The plurality of wiring layers 83 is formed by using, for example, copper (Cu), aluminum (Al), tungsten (W), and the like, and the interlayer insulation film 84 is formed by using, for example, a silicon oxide film, a silicon nitride film, or the like. As for each of the plurality of wiring layers 83 and the interlayer insulation film 84, all of level layers may be formed by using the same material, or two or more materials may be suitably used depending on each level layer.
A silicon through hole 85 penetrating the silicon substrate 81 is formed at a predetermined position of the silicon substrate 81, and a through silicon via (TSV) 88 is formed by embedding a connection conductor 87 in an inner wall of the silicon through hole 85 via an insulation film 86. The insulation film 86 can be formed by using, for example, a SiO2 film, a SiN film, or the like.
Note that, in the through silicon via 88 illustrated in
Furthermore, a solder mask (solder resist) 91 is formed on the lower surface side of the silicon substrate 81 in a manner covering the rewiring 90 and the insulation film 86 excluding a region where the solder balls 11f are formed.
On the other hand, in the pixel sensor substrate 11b, a multilayer wiring layer 102 is formed on a lower side (the logic substrate 11a side) of a semiconductor substrate 101 (hereinafter referred to as a silicon substrate 101) formed by using silicon (Si). This multilayer wiring layer 102 constitutes the pixel circuit of the pixel region 21 in
The multilayer wiring layer 102 includes: a plurality of wiring layers 103 including an uppermost wiring layer 103a closest to the silicon substrate 101, a middle wiring layer 103b, a lowermost wiring layer 103c closest to the logic substrate 11a, and the like; and an interlayer insulation film 104 formed between the respective wiring layers 103.
As materials for the plurality of wiring layers 103 and the interlayer insulation film 104, the same kinds of materials as those of the wiring layers 83 and the interlayer insulation film 84 described above can be adopted. Furthermore, similarly to the wiring layers 83 and the interlayer insulation film 84 described above, one material, or two or more materials, may be suitably used for the plurality of wiring layers 103 and the interlayer insulation film 104.
Note that, in the example of
Inside the silicon substrate 101, a photodiode 51 formed by a PN junction is formed for each pixel 32.
Furthermore, although not illustrated, the plurality of pixel transistors such as the first transfer transistor 52, the second transfer transistor 54, and the like, the memory unit (MEM) 53, and the like are formed on the multilayer wiring layer 102 and the silicon substrate 101.
The through silicon via 108 connected to the wiring layer 103a of the pixel sensor substrate 11b, and the through chip via 105 connected to the wiring layer 83a of the logic substrate 11a are formed at predetermined positions of the silicon substrate 101 where the on-chip lens 11d is not formed.
The through chip via 105 and the through silicon via 108 are connected by connection wiring 106 formed on an upper surface of the silicon substrate 101. Furthermore, an insulation film 107 is formed between the silicon substrate 101 and each of the through silicon via 108 and the through chip via 105. Moreover, the on-chip lens 11d is formed on the upper surface of the silicon substrate 101 via the flattening film (insulation film) 11c.
As described above, the solid-state imaging element 11 illustrated in
Furthermore, in the solid-state imaging element 11 of the imaging device 1, the wiring layer 103 of the pixel sensor substrate 11b and the wiring layer 83 of the logic substrate 11a are connected by the two through electrodes of the through silicon via 108 and the through chip via 105, and the wiring layer 83 of the logic substrate 11a and the solder ball (backside electrode) 11f are connected by the through silicon via 88 and the rewiring 90. Therefore, the plane area of the imaging device 1 can be reduced to a minimum.
Moreover, a size in a height direction can also be reduced by forming a cavityless structure between the high refractive index layer 12 and the protective substrate 14 on the solid-state imaging element 11 and pasting the high refractive index layer 12 and the protective substrate 14 to each other by the bonding resin 13.
Accordingly, with the imaging device 1 illustrated in
<Principle of Generation of Flare>
Here, the principle of generation of flare will be described. As illustrated in
Among the rays of the reflected light L11 to L14, the rays of the reflected light L11 and L12 each having a reflection angle smaller than a critical angle indicated by a dotted line pass through the bonding resin 13 and the protective substrate 14.
However, among the rays of the reflected light L11 to L14, the rays of the reflected light L13 and L14 each having a reflection angle larger than the critical angle indicated by the dotted line are entirely reflected again at positions P11 and P12 in a boundary between the protective substrate 14 and an air layer 151. In this case, the entirely-reflected rays of the reflected light L13 and L14 are incident again on the solid-state imaging element 11 by passing through the protective substrate 14 and the bonding resin 13 having the same refractive index. The rays of the reflected light L13 and L14 that have been incident again cause the flare.
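The critical angle at the glass/air boundary follows from Snell's law, sin(theta_c) = n_air/n_glass. The index values below are common textbook figures assumed for illustration, not values taken from this disclosure.

```python
import math

# Critical angle at the protective-substrate (sealing glass) / air boundary.
# n_glass = 1.5 is an assumed, typical value for glass.
n_air = 1.0
n_glass = 1.5

theta_c = math.degrees(math.asin(n_air / n_glass))
print(f"critical angle ~ {theta_c:.1f} deg")  # about 41.8 deg for n = 1.5

# Rays leaving the imaging surface at angles steeper than theta_c (like the
# reflected light L13 and L14 above) are entirely reflected back toward the
# solid-state imaging element and can cause flare.
```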
<Principle of Suppressing Generation of Flare>
On the other hand, the integrated portion 10 in the imaging device 1 of
In this case, when the reflected light L111 passes through the flattening film 11e, the reflected light L111 is partly reflected as reflected light r1 as indicated by an arrow at a position P111 in a boundary between the flattening film 11e and the high refractive index layer 12. Therefore, reflected light L111-1 having a light amount that has been reduced by an amount of the reflected light r1 from the reflected light L111 passes through the high refractive index layer 12.
Furthermore, when the reflected light L111-1 passes through the high refractive index layer 12, the reflected light L111-1 is partly reflected as reflected light r2 as indicated by an arrow at a position P112 in a boundary between the high refractive index layer 12 and the bonding resin 13. Therefore, reflected light L111-2 having a light amount that has been reduced by an amount of the reflected light r2 from the reflected light L111-1 passes through the bonding resin 13 and the protective substrate 14. Moreover, when the reflected light L111-2 passes through the bonding resin 13 and the protective substrate 14, the reflected light L111-2 is entirely reflected as indicated by an arrow at a position P121 in a boundary between the protective substrate 14 and the air layer 151, and again passes through the protective substrate 14 and the bonding resin 13 as reflected light L111-3.
Then, when the reflected light L111-3 passes through the protective substrate 14 and the bonding resin 13, the reflected light L111-3 is partly reflected as reflected light r3 as indicated by an arrow at a position P113 in a boundary between the bonding resin 13 and the high refractive index layer 12. Therefore, reflected light L111-4 having a light amount that has been reduced by an amount of the reflected light r3 from the reflected light L111-2 passes through the high refractive index layer 12.
Moreover, when the reflected light L111-4 passes through the high refractive index layer 12, the reflected light L111-4 is partly reflected as reflected light r4 as indicated by an arrow at a position P114 in a boundary between the high refractive index layer 12 and the flattening film 11e. Therefore, reflected light L111-5 having a light amount that has been reduced by an amount of the reflected light r4 from the reflected light L111-4 is incident again on the on-chip lens 11d.
That is, the reflected light L111 is partly reflected as the rays of the reflected light r1 to r4 at the respective positions P111 to P114 on interfaces with the high refractive index layer 12 (an interface between the high refractive index layer 12 and the flattening film 11e, and an interface between the bonding resin 13 and the high refractive index layer 12). Therefore, the amount of the reflected light L111 is gradually reduced, and the reflected light is incident again on the on-chip lens 11d finally as the reflected light L111-5.
As a result, as described with reference to
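The attenuation by the four partial reflections r1 to r4 can be roughly estimated with the Fresnel reflectance at normal incidence, R = ((n1 - n2)/(n1 + n2))^2. All index values below are illustrative assumptions (the disclosure does not specify them), and the actual flare path is oblique rather than normal, so this is only an order-of-magnitude sketch.

```python
# Rough normal-incidence estimate of the flare light surviving the four
# partial reflections r1..r4 at the interfaces P111..P114 of the high
# refractive index layer 12. Index values are assumed for illustration.

def fresnel_r(n1, n2):
    # Fresnel power reflectance at normal incidence
    return ((n1 - n2) / (n1 + n2)) ** 2

n_flat = 1.5   # flattening film 11e (assumed)
n_high = 2.0   # high refractive index layer 12, e.g. silicon nitride (assumed)
n_resin = 1.5  # bonding resin 13 / protective substrate 14 (same index)

# Down-and-up path of L111: film -> high (r1), high -> resin (r2),
# then after the entire reflection: resin -> high (r3), high -> film (r4).
interfaces = [(n_flat, n_high), (n_high, n_resin),
              (n_resin, n_high), (n_high, n_flat)]
surviving = 1.0
for n1, n2 in interfaces:
    surviving *= 1.0 - fresnel_r(n1, n2)
print(f"fraction of L111 re-entering the sensor: {surviving:.3f}")
```

Each index step of 1.5 to 2.0 reflects about 2% of the light, so the flare component is trimmed at every crossing rather than reaching the on-chip lens 11d at full strength.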
Furthermore, since the high refractive index layer 12 has a single-layer structure, the configuration is simpler and fewer man-hours are required for manufacturing than, for example, in a case of a wavelength control film or the like formed by laminating layers; therefore, it is possible to reduce a manufacturing cost.
A description has been provided above for an example in which the high refractive index layer 12 is formed between the flattening film 11e of the solid-state imaging element 11 and the bonding resin 13. However, the high refractive index layer 12 may also be formed at a different position as long as the high refractive index layer 12 can form, between the protective substrate 14 and the flattening film 11e, interfaces with other layers so as to gradually reduce the amount of reflected light. For example, the high refractive index layer 12 may be formed between the bonding resin 13 and the protective substrate 14.
Here, consideration is given with reference to
In this case, when the reflected light L131 passes through the flattening film 11e and the bonding resin 13, the reflected light L131 is partly reflected as reflected light r11 as indicated by an arrow at a position P131 in a boundary between the bonding resin 13 and the high refractive index layer 12. Therefore, reflected light L131-1 having a light amount that has been reduced by an amount of the reflected light r11 from the reflected light L131 passes through the high refractive index layer 12.
Furthermore, when the reflected light L131-1 passes through the high refractive index layer 12, the reflected light L131-1 is partly reflected as reflected light r12 as indicated by an arrow at a position P132 in a boundary between the high refractive index layer 12 and the protective substrate 14. Therefore, reflected light L131-2 having a light amount that has been reduced by an amount of the reflected light r12 from the reflected light L131-1 passes through the protective substrate 14.
Moreover, when the reflected light L131-2 passes through the protective substrate 14, the reflected light L131-2 is entirely reflected as indicated by an arrow at a position P141 in a boundary between the protective substrate 14 and the air layer 151, and again passes through the protective substrate 14 as reflected light L131-3.
Furthermore, when the reflected light L131-3 passes through the protective substrate 14, the reflected light L131-3 is partly reflected as indicated by an arrow as reflected light r13 at a position P133 in the boundary between the protective substrate 14 and the high refractive index layer 12. Therefore, reflected light L131-4 having a light amount that has been reduced by an amount of the reflected light r13 from the reflected light L131-3 passes through the high refractive index layer 12.
Moreover, when the reflected light L131-4 passes through the high refractive index layer 12, the reflected light L131-4 is partly reflected as reflected light r14 as indicated by an arrow at a position P134 in the boundary between the high refractive index layer 12 and the bonding resin 13. Therefore, reflected light L131-5 having a light amount that has been reduced by an amount of the reflected light r14 from the reflected light L131-4 passes through the bonding resin 13 and the flattening film 11e and is incident again on the on-chip lens 11d.
That is, since the reflected light L131 is sequentially and partly reflected as the rays of the reflected light r11 to r14 at the respective positions P131 to P134, the amount of the reflected light L131 is gradually reduced and incident again on the on-chip lens 11d as the reflected light L131-5.
As a result, the reflected light L131-5 becomes light having a light amount sufficiently reduced from that of the reflected light L131, and then is incident again on the solid-state imaging element 11 in the integrated portion 10 of
Furthermore, since the high refractive index layer 12 has a single-layer structure, the configuration is simpler and fewer man-hours are required for manufacturing than, for example, in a case of a wavelength control film or the like formed by laminating layers; therefore, it is possible to reduce a manufacturing cost.
A description has been provided above for an example in which a high refractive index layer 12 is formed between a bonding resin 13 and a protective substrate 14, but a plurality of high refractive index layers 12 may be formed.
That is, the high refractive index layer 12-1 is formed between a flattening film 11e of a solid-state imaging element 11 and a bonding resin 13, and the high refractive index layer 12-2 is formed between the bonding resin 13 and the protective substrate 14 in the integrated portion 10 of
Here, consideration is given for reflected light L151 having a reflection angle larger than a critical angle out of reflected light of incident light L101 reflected at a position P101 of an on-chip lens 11d in the integrated portion 10 of
In this case, when the reflected light L151 passes through the flattening film 11e, the reflected light L151 is partly reflected as reflected light r21 as indicated by an arrow at a position P151 in a boundary between the flattening film 11e and the high refractive index layer 12-1. Therefore, reflected light L151-1 having a light amount that has been reduced by an amount of the reflected light r21 from the reflected light L151 passes through the high refractive index layer 12-1.
Furthermore, when the reflected light L151-1 passes through the high refractive index layer 12-1, the reflected light L151-1 is partly reflected as reflected light r22 as indicated by an arrow at a position P152 in a boundary between the high refractive index layer 12-1 and the bonding resin 13. Therefore, reflected light L151-2 having a light amount that has been reduced by an amount of the reflected light r22 from the reflected light L151-1 passes through the bonding resin 13.
Moreover, when the reflected light L151-2 passes through the bonding resin 13, the reflected light L151-2 is partly reflected as reflected light r23 as indicated by an arrow at a position P153 in a boundary between the bonding resin 13 and the high refractive index layer 12-2. Therefore, reflected light L151-3 having a light amount that has been reduced by an amount of the reflected light r23 from the reflected light L151-2 passes through the high refractive index layer 12-2.
Furthermore, when the reflected light L151-3 passes through the high refractive index layer 12-2, the reflected light L151-3 is partly reflected as reflected light r24 as indicated by an arrow at a position P154 in a boundary between the high refractive index layer 12-2 and the protective substrate 14. Therefore, reflected light L151-4 having a light amount that has been reduced by an amount of the reflected light r24 from the reflected light L151-3 passes through the protective substrate 14.
Moreover, when the reflected light L151-4 passes through the protective substrate 14, the reflected light L151-4 is entirely reflected as indicated by an arrow at a position P161 in a boundary between the protective substrate 14 and an air layer 151, and again passes through the protective substrate 14 as reflected light L151-5.
Furthermore, when the reflected light L151-5 passes through the protective substrate 14, the reflected light L151-5 is partly reflected as reflected light r25 as indicated by an arrow at a position P155 in the boundary between the protective substrate 14 and the high refractive index layer 12-2. Therefore, reflected light L151-6 having a light amount that has been reduced by an amount of the reflected light r25 from the reflected light L151-5 passes through the high refractive index layer 12-2.
Moreover, when the reflected light L151-6 passes through the high refractive index layer 12-2, the reflected light L151-6 is partly reflected as reflected light r26 as indicated by an arrow at a position P156 in the boundary between the high refractive index layer 12-2 and the bonding resin 13. Therefore, reflected light L151-7 having a light amount that has been reduced by an amount of the reflected light r26 from the reflected light L151-6 passes through the bonding resin 13.
Furthermore, when the reflected light L151-7 passes through the bonding resin 13, the reflected light L151-7 is partly reflected as reflected light r27 as indicated by an arrow at a position P157 in the boundary between the bonding resin 13 and the high refractive index layer 12-1. Therefore, reflected light L151-8 having a light amount that has been reduced by an amount of the reflected light r27 from the reflected light L151-7 passes through the high refractive index layer 12-1.
Moreover, when the reflected light L151-8 passes through the high refractive index layer 12-1, the reflected light L151-8 is partly reflected as reflected light r28 as indicated by an arrow at a position P158 in the boundary between the high refractive index layer 12-1 and the flattening film 11e of the solid-state imaging element 11. Reflected light L151-9 having a light amount that has been reduced by an amount of the reflected light r28 from the reflected light L151-8 is incident again on the on-chip lens 11d.
That is, since the reflected light L151 is sequentially and partly reflected as the rays of the reflected light r21 to r28 at the respective positions P151 to P158, the amount of the reflected light L151 is gradually reduced and finally is incident again on the on-chip lens 11d as the reflected light L151-9.
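The cumulative attenuation described above can be approximated with the normal-incidence Fresnel reflectance at each boundary. The sketch below estimates the fraction of the reflected light L151 that finally returns to the on-chip lens 11d as the reflected light L151-9; the refractive index values are assumed for illustration only, as the disclosure does not specify them.

```python
# Sketch of the boundary-by-boundary attenuation described above, using the
# normal-incidence Fresnel reflectance R = ((n1 - n2) / (n1 + n2))**2.
# All refractive index values here are assumed for illustration.

def fresnel_reflectance(n1: float, n2: float) -> float:
    """Fraction of light power reflected at a boundary (normal incidence)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def transmitted_fraction(indices: list) -> float:
    """Fraction of light that survives one pass through successive boundaries."""
    remaining = 1.0
    for n1, n2 in zip(indices, indices[1:]):
        remaining *= 1.0 - fresnel_reflectance(n1, n2)
    return remaining

# Assumed layer stack for the outward pass (positions P151 to P154):
# flattening film 11e -> high refractive index layer 12-1 -> bonding resin 13
# -> high refractive index layer 12-2 -> protective substrate 14.
outward = [1.5, 1.9, 1.5, 1.9, 1.5]
# The return pass (positions P155 to P158) crosses the same boundaries in reverse.
round_trip = transmitted_fraction(outward) * transmitted_fraction(outward[::-1])
print(f"fraction re-incident on the on-chip lens: {round_trip:.3f}")
```

Each boundary with an index contrast removes part of the reflected light, so the re-incident fraction falls below what a stack with no internal index contrast would pass.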
As a result, as described with reference to
Note that the high refractive index layers 12-1 and 12-2 constitute a two-layer configuration in the integrated portion 10 of
Furthermore, since each of the high refractive index layers 12-1 and 12-2 has a single-layer structure, the configuration is simpler and fewer man-hours are required for manufacturing than in a case of a wavelength control film or the like formed by laminating layers, and therefore, it is possible to reduce a manufacturing cost.
Note that the description has been provided for the case where the high refractive index layer 12 has the two-layer structure, but a multi-layer structure having a larger number of layers may also be adopted. By forming a larger number of the high refractive index layers 12, the amount of reflected light that is incident again on the solid-state imaging element 11 can be further reduced, and it is possible to further suppress the generation of flare and the coloring caused by the flare.
A description has been provided above for an example of having a two-layer structure of high refractive index layers 12-1 and 12-2, but a bonding resin 13 may be made to have a high refractive index.
Here, consideration is given with reference to
In this case, when the reflected light L171 passes through the flattening film 11e, the reflected light L171 is partly reflected as reflected light r31 as indicated by an arrow at a position P171 in a boundary between the flattening film 11e and the highly-refractive bonding resin 12′. Therefore, reflected light L171-1 having a light amount that has been reduced by an amount of the reflected light r31 from the reflected light L171 passes through the highly-refractive bonding resin 12′.
Furthermore, when the reflected light L171-1 passes through the highly-refractive bonding resin 12′, the reflected light L171-1 is partly reflected as reflected light r32 as indicated by an arrow at a position P172 in a boundary between the highly-refractive bonding resin 12′ and the protective substrate 14. Therefore, reflected light L171-2 having a light amount that has been reduced by an amount of the reflected light r32 from the reflected light L171-1 passes through the protective substrate 14.
Moreover, when the reflected light L171-2 passes through the protective substrate 14, the reflected light L171-2 is entirely reflected as indicated by an arrow at a position P181 in a boundary between the protective substrate 14 and an air layer 151, and again passes through the protective substrate 14 as reflected light L171-3.
Furthermore, when the reflected light L171-3 passes through the protective substrate 14, the reflected light L171-3 is partly reflected as reflected light r33 as indicated by an arrow at a position P173 in the boundary between the protective substrate 14 and the highly-refractive bonding resin 12′. Therefore, reflected light L171-4 having a light amount that has been reduced by an amount of the reflected light r33 from the reflected light L171-3 passes through the highly-refractive bonding resin 12′.
Moreover, when the reflected light L171-4 passes through the highly-refractive bonding resin 12′, the reflected light L171-4 is partly reflected as reflected light r34 as indicated by an arrow at a position P174 in the boundary between the highly-refractive bonding resin 12′ and the flattening film 11e. Therefore, reflected light L171-5 having a light amount that has been reduced by an amount of the reflected light r34 from the reflected light L171-4 passes through the flattening film 11e and is incident again on the on-chip lens 11d.
That is, the reflected light L171 is sequentially and partly reflected as the rays of the reflected light r31 to r34 at the respective positions P171 to P174. Therefore, the amount of the reflected light L171 is gradually reduced, and the reflected light is incident again on the on-chip lens 11d as the reflected light L171-5.
As a result, in the integrated portion 10 of
Furthermore, since the highly-refractive bonding resin 12′ has a single-layer structure, the configuration is simpler and fewer man-hours are required for manufacturing than, for example, in a case of a wavelength control film or the like formed by laminating layers, and therefore, it is possible to reduce a manufacturing cost.
Moreover, since the flattening film 11e of the solid-state imaging element 11 and the protective substrate 14 only need to be pasted to each other by the highly-refractive bonding resin 12′, the man-hours can be further reduced.
Note that, in the integrated portion 10 of
A description has been provided above for an example in which a highly-refractive bonding resin 12′ is provided instead of a high refractive index layer 12 and a bonding resin 13, but a protective substrate 14 may be made to have a high refractive index.
Here, consideration is given with reference to
In this case, when the reflected light L191 passes through a flattening film 11e and the bonding resin 13, the reflected light L191 is partly reflected as reflected light r41 as indicated by an arrow at a position P191 in a boundary between the bonding resin 13 and the high refractive protective substrate 12″. Therefore, reflected light L191-1 having a light amount that has been reduced by an amount of the reflected light r41 from the reflected light L191 passes through the high refractive protective substrate 12″.
Furthermore, when the reflected light L191-1 passes through the high refractive protective substrate 12″, the reflected light L191-1 is entirely reflected as indicated by an arrow at a position P201 in a boundary between the high refractive protective substrate 12″ and an air layer 151, and then again passes through the high refractive protective substrate 12″ as reflected light L191-2.
Furthermore, when the reflected light L191-2 passes through the high refractive protective substrate 12″, the reflected light L191-2 is partly reflected as reflected light r42 as indicated by an arrow at a position P192 in the boundary between the high refractive protective substrate 12″ and the bonding resin 13. Therefore, reflected light L191-3 having a light amount that has been reduced by an amount of the reflected light r42 from the reflected light L191-2 passes through the bonding resin 13 and the flattening film 11e and is incident again on the on-chip lens 11d.
That is, since the reflected light L191 is sequentially and partly reflected as the rays of the reflected light r41 and r42 at the respective positions P191 and P192, the amount of the reflected light L191 is gradually reduced, and the reflected light is finally incident again on the on-chip lens 11d as the reflected light L191-3.
As a result, in the integrated portion 10 of
Note that, in the integrated portion 10 of
A description has been provided above for an example in which a protective substrate 14 is made to have a high refractive index, but a high refractive index layer 12 may be formed on an upper surface of the protective substrate 14.
Here, consideration is given with reference to
In this case, when the reflected light L211 passes through a flattening film 11e, a bonding resin 13, and the protective substrate 14, the reflected light L211 is partly reflected as reflected light r51 as indicated by an arrow at a position P211 in a boundary between the protective substrate 14 and the high refractive index layer 12. Therefore, reflected light L211-1 having a light amount that has been reduced by an amount of the reflected light r51 from the reflected light L211 passes through the high refractive index layer 12.
Furthermore, the reflected light L211-1 passes through the high refractive index layer 12, is entirely reflected as indicated by an arrow at a position P221 in a boundary between the high refractive index layer 12 and an air layer 151, and again passes through the high refractive index layer 12 as reflected light L211-2.
Furthermore, when the reflected light L211-2 passes through the high refractive index layer 12, the reflected light L211-2 is partly reflected as reflected light r52 as indicated by an arrow at a position P212 in the boundary between the high refractive index layer 12 and the protective substrate 14. Therefore, reflected light L211-3 having a light amount that has been reduced by an amount of the reflected light r52 from the reflected light L211-2 passes through the protective substrate 14, the bonding resin 13, and the flattening film 11e and is incident again on the on-chip lens 11d.
That is, since the reflected light L211 is sequentially and partly reflected as the rays of the reflected light r51 and r52 at the respective positions P211 and P212, the amount of the reflected light L211 is gradually reduced, and the reflected light is finally incident again on the on-chip lens 11d as the reflected light L211-3.
As a result, in the integrated portion 10 of
Furthermore, since the high refractive index layer 12 has a single-layer structure in the integrated portion 10 of
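The configurations described so far can be compared roughly with the same normal-incidence Fresnel model: the more internal boundaries with an index contrast the totally reflected light must cross, the less of it returns to the on-chip lens 11d. The refractive index values below are assumed for illustration only.

```python
# A rough comparison, under assumed refractive indices, of how much totally
# reflected light returns to the on-chip lens in each configuration discussed
# above. All index values are illustrative, not taken from the disclosure.

def re_incident_fraction(stack: list) -> float:
    """Round-trip transmitted fraction through the stack's internal boundaries."""
    r = lambda n1, n2: ((n1 - n2) / (n1 + n2)) ** 2
    one_way = 1.0
    for n1, n2 in zip(stack, stack[1:]):
        one_way *= 1.0 - r(n1, n2)
    return one_way ** 2  # outward pass and return pass cross the same boundaries

stacks = {
    # flattening film / layer 12-1 / bonding resin / layer 12-2 / substrate
    "two high refractive index layers": [1.5, 1.9, 1.5, 1.9, 1.5],
    # flattening film / highly-refractive bonding resin 12' / substrate
    "highly-refractive bonding resin": [1.5, 1.9, 1.5],
    # flattening film / bonding resin / high refractive protective substrate 12''
    "high refractive protective substrate": [1.5, 1.5, 1.9],
}
for name, stack in stacks.items():
    print(f"{name}: {re_incident_fraction(stack):.3f}")
```

Under these assumed values, the two-layer configuration attenuates the re-incident light the most, which is consistent with the observation that a larger number of high refractive index layers 12 further suppresses flare.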
In the above, a lens group 16 is provided in a prior stage of an integrated portion 10 in a light incident direction, but a part of a final stage of the lens group 16 may be pasted onto the integrated portion 10.
That is,
In the integrated portion 10 of
Furthermore, since a high refractive index layer 12 has a single-layer structure in the integrated portion 10 of
Moreover, the weight of the lens group 16 can be reduced by pasting the part of the lens group 16 onto the integrated portion 10, and therefore, autofocusing can be sped up due to reduction in a load of an actuator 18.
Note that the example in which the lens 171 is pasted to the integrated portion 10 of
An imaging device 1 in
The optical system includes one or a plurality of lenses, guides light (incident light) from a subject to the solid-state imaging element, and forms an image on a light receiving surface of the solid-state imaging element.
The shutter device is arranged between the optical system and the solid-state imaging element, and controls a light exposure period and a light shielding period for the solid-state imaging element in accordance with control of the drive circuit.
The solid-state imaging element includes a package including the above-described solid-state imaging element. The solid-state imaging element accumulates signal electric charge for a certain period in accordance with light of an image formed on the light receiving surface via the optical system and the shutter device. The signal electric charge accumulated in the solid-state imaging element is transferred in accordance with a drive signal (timing signal) supplied from the drive circuit.
The drive circuit outputs a drive signal that controls transfer operation of the solid-state imaging element and shutter operation of the shutter device, and drives the solid-state imaging element and the shutter device.
The signal processing circuit applies various kinds of signal processing to the signal electric charge output from the solid-state imaging element. An image (image data) obtained through the signal processing by the signal processing circuit is supplied to and displayed on the monitor, or supplied to and stored (recorded) in the memory.
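The signal flow described above (drive circuit controlling exposure and transfer, followed by signal processing) can be sketched as follows. All class and method names here are hypothetical; the disclosure describes the functional blocks, not an API.

```python
# A minimal sketch of the imaging device signal flow: the drive circuit
# controls charge accumulation (exposure) and transfer of the solid-state
# imaging element, and the result is passed through signal processing.
# The gain value and the linear charge model are assumptions.

class SolidStateImagingElement:
    def __init__(self):
        self.charge = 0

    def accumulate(self, light: int, period: int):
        """Accumulate signal charge for a certain period (toy linear model)."""
        self.charge += light * period

    def transfer(self) -> int:
        """Transfer accumulated charge in response to a drive (timing) signal."""
        charge, self.charge = self.charge, 0
        return charge

class DriveCircuit:
    def drive(self, element: SolidStateImagingElement,
              shutter_open_period: int, incident_light: int) -> int:
        # The drive circuit controls both the exposure and the transfer.
        element.accumulate(incident_light, shutter_open_period)
        return element.transfer()

def signal_processing(raw: int) -> int:
    """Stand-in for the various kinds of signal processing (here, a 2x gain)."""
    return raw * 2

element = SolidStateImagingElement()
image = signal_processing(
    DriveCircuit().drive(element, shutter_open_period=3, incident_light=10))
print(image)  # 60: charge of 10 * 3 = 30, doubled by the processing stage
```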
In the imaging device having such a configuration, it is possible to suppress flare caused by internal irregular reflection and coloring caused by the flare while achieving reduction in a size and a height of the device configuration by applying the imaging device 1 including the integrated portion 10 of one of
For example, the above-described imaging device 1 can be used in various cases of sensing light such as visible light, infrared light, ultraviolet light, X-rays, and the like as described below.
The technology according to the present disclosure (the present technology) is applicable to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
The endoscope 0 includes a lens barrel 1 and a camera head 2 connected to a proximal end of the lens barrel 1, and a predetermined length from a distal end of the lens barrel 1 is to be inserted into a body cavity of the patient 2. In the illustrated example, the endoscope 0 is provided as a so-called rigid endoscope including the rigid lens barrel 1, but the endoscope 0 may also be provided as a so-called flexible endoscope including a flexible lens barrel.
The distal end of the lens barrel 1 is provided with an open portion into which an objective lens is fitted. A light source device 3 is connected to the endoscope 0, and light generated by the light source device 3 is guided to the distal end of the lens barrel by a light guide provided in a manner extending inside the lens barrel 1, and the light is emitted to an observation target inside the body cavity of the patient 2 via the objective lens. Note that the endoscope 0 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
An optical system and an imaging element are provided inside the camera head 2, and reflected light (observation light) from the observation target is condensed onto the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element, and an electric signal corresponding to the observation light, that is, an image signal corresponding to an observation image is generated. The image signal is transmitted to a camera control unit (CCU) 1 as RAW data.
The CCU 1 includes a central processing unit (CPU), a graphics processing unit (GPU), and the like, and integrally controls operation of the endoscope 0 and a display device 2. Moreover, the CCU 1 receives an image signal from the camera head 2 and applies, to the image signal, various kinds of image processing, such as development processing (demosaic processing) and the like, in order to display an image based on the image signal.
The display device 2 displays the image based on the image signal applied with the image processing by the CCU 1 under the control of the CCU 1.
The light source device 3 includes a light source such as a light emitting diode (LED) or the like, and supplies the endoscope 0 with irradiation light at the time of capturing an image of a surgical site or the like.
An input device 4 is an input interface for the endoscopic surgery system 0. A user can input various kinds of information and can input a command to the endoscopic surgery system 0 via the input device 4. For example, the user inputs a command to change imaging conditions by the endoscope 0 (a kind of irradiation light, a magnification, a focal length, and the like).
A treatment tool control device 5 controls drive of the energy treatment tool 2 for ablation of tissue, incision, sealing of a blood vessel, and the like. A pneumoperitoneum device 6 feeds a gas into the body cavity via the pneumoperitoneum tube 1 in order to inflate the body cavity of the patient 2 for the purpose of securing a field of view by the endoscope 0 and securing a work space for an operator. A recorder 7 is a device that can record various kinds of information related to surgery. A printer 8 is a device that can print various kinds of information related to surgery in various formats such as text, an image, a graph, and the like.
Note that the light source device 3 that supplies the endoscope 0 with the irradiation light at the time of capturing an image of a surgical site can include, for example, an LED, a laser light source, or a white light source including a combination thereof. In a case where the white light source includes a combination with RGB laser light sources, it is possible to control output intensity and output timing of respective colors (respective wavelengths) with high accuracy, and therefore, white balance of a captured image can be adjusted in the light source device 3. Furthermore, in this case, images corresponding to the respective RGB can also be captured in a time sharing manner by: irradiating an observation target with rays of laser light from the respective RGB laser light sources in a time sharing manner; and controlling drive of the imaging element of the camera head 2 in synchronization with irradiation timing thereof. According to this method, a color image can be obtained without providing a color filter in the imaging element.
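The time-sharing capture described above can be sketched as a loop that fires the R, G, and B lasers in sequence, captures one monochrome frame per flash in synchronization, and combines the frames into color pixels. The function names and the toy scene model are assumptions for illustration.

```python
# Sketch of time-shared RGB capture without a color filter: the imaging
# element (monochrome) captures one frame per laser color, in sync with
# the light source, and the frames are combined into a color image.

def capture_frame(scene: dict, color: str) -> list:
    """Monochrome frame captured while only the given laser is lit."""
    return scene[color]

def time_shared_color_capture(scene: dict) -> list:
    """Fire R, G, B in sequence, in sync with the sensor, then combine."""
    frames = {c: capture_frame(scene, c) for c in ("R", "G", "B")}
    # Zip the three monochrome frames into per-pixel RGB triples.
    return list(zip(frames["R"], frames["G"], frames["B"]))

# Toy 3-pixel scene: intensities seen under each laser color.
scene = {"R": [200, 10, 30], "G": [40, 180, 60], "B": [20, 30, 220]}
print(time_shared_color_capture(scene))
# [(200, 40, 20), (10, 180, 30), (30, 60, 220)]
```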
Furthermore, drive of the light source device 3 may be controlled so as to change, at predetermined time intervals, the intensity of the light to be output. Since images are acquired in the time sharing manner by controlling the drive of the imaging element of the camera head 2 in synchronization with the timing of changing the intensity of the light and then the images are synthesized, it is possible to generate an image of a so-called high dynamic range without underexposure and overexposure.
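One simple way to synthesize such a high-dynamic-range image from frames captured at alternating light intensities is a per-pixel fusion rule: use the high-intensity frame where it did not saturate, and otherwise recover the highlight from the low-intensity frame. The fusion rule, gain, and threshold below are assumptions for illustration, not taken from the disclosure.

```python
# Sketch of HDR synthesis from two frames captured at alternating light
# source intensities. Thresholds and the fusion rule are illustrative.

def fuse_hdr(low_frame: list, high_frame: list, gain: float,
             saturation: int = 255) -> list:
    """Per-pixel fusion: use the high-intensity frame unless it saturated,
    in which case fall back to the low-intensity frame scaled by the gain."""
    fused = []
    for lo, hi in zip(low_frame, high_frame):
        if hi < saturation:
            fused.append(float(hi))        # well-exposed: keep high frame
        else:
            fused.append(lo * gain)        # saturated: recover from low frame
    return fused

low = [10, 40, 80]       # captured at low illumination intensity
high = [40, 160, 255]    # captured at 4x intensity; last pixel saturated
print(fuse_hdr(low, high, gain=4.0))  # [40.0, 160.0, 320.0]
```

The fused value 320.0 exceeds the sensor's saturation level, which is exactly the extended dynamic range without underexposure and overexposure that the synthesis provides.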
Furthermore, the light source device 3 may be capable of supplying light of a predetermined wavelength band suitable for special light observation. In the special light observation, for example, a so-called narrow band imaging is performed, in which an image of predetermined tissue such as a blood vessel of a mucosal surface layer or the like is captured with high contrast by emitting light of a narrower band than that of irradiation light at the time of normal observation (that is, white light) while utilizing wavelength dependency of light absorption in a body tissue. Alternatively, in the special light observation, fluorescence observation in which an image is obtained by fluorescence generated by emitting excitation light may be performed. In the fluorescence observation, it is possible to: perform observation on fluorescence from a body tissue by irradiating the body tissue with the excitation light (auto-fluorescence observation); or obtain a fluorescence image by locally injecting a reagent such as indocyanine green (ICG) or the like into the body tissue and also irradiating the body tissue with excitation light corresponding to a fluorescence wavelength of the reagent. The light source device 3 may be capable of supplying the narrow band light and/or the excitation light suitable for such special light observation.
The camera head 2 includes a lens unit 1, an imaging unit 2, a drive unit 3, a communication unit 4, and a camera head control unit 5. The CCU 1 includes a communication unit 1, an image processing unit 2, and a control unit 3. The camera head 2 and the CCU 1 are connected in a manner communicable with each other by a transmission cable 0.
The lens unit 1 is an optical system provided at a connecting portion with the lens barrel 1. The observation light taken in from the distal end of the lens barrel 1 is guided to the camera head 2 and incident on the lens unit 1. The lens unit 1 is formed by combining a plurality of lenses including a zoom lens and a focus lens.
The imaging unit 2 includes an imaging element. The number of imaging elements constituting the imaging unit 2 may be one (a so-called single-plate type) or plural (a so-called multi-plate type). In a case where the imaging unit 2 has a multi-plate type configuration, for example, image signals corresponding to the respective RGB may be generated by the respective imaging elements, and a color image may be obtained by synthesizing these image signals. Alternatively, the imaging unit 2 may include a pair of imaging elements in order to acquire respective image signals for a right eye and a left eye, in which the image signals are adaptable to three-dimensional (3D) display. The operator 1 can more accurately grasp the depth of a living tissue in a surgical site by performing the 3D display. Note that, in a case where the imaging unit 2 has the multi-plate type configuration, a plurality of systems of lens units 1 can also be provided in a manner corresponding to the respective imaging elements.
Furthermore, the imaging unit 2 is not necessarily provided in the camera head 2. For example, the imaging unit 2 may be provided immediately behind the objective lens inside the lens barrel 1.
The drive unit 3 includes an actuator, and moves a zoom lens and a focus lens of the lens unit 1 by a predetermined distance along an optical axis under control of the camera head control unit 5. Therefore, a magnification and a focal point of an image captured by the imaging unit 2 can be adjusted as appropriate.
The communication unit 4 includes a communication device in order to exchange various kinds of information with the CCU 1. The communication unit 4 transmits, as RAW data, an image signal obtained from the imaging unit 2 to the CCU 1 via the transmission cable 0.
Furthermore, the communication unit 4 receives, from the CCU 1, a control signal in order to control the drive of the camera head 2, and supplies the control signal to the camera head control unit 5. The control signal includes information associated with imaging conditions including, for example, information indicating designation of a frame rate of a captured image, information indicating designation of an exposure value at the time of imaging, information indicating designation of a magnification and a focal point of a captured image, and/or the like.
Note that the above-described imaging conditions such as the frame rate, the exposure value, the magnification, the focal point, and the like may be designated as appropriate by a user, or may be automatically set by the control unit 3 of the CCU 1 on the basis of an acquired image signal. In the latter case, a so-called auto exposure (AE) function, an auto focus (AF) function, and an auto white balance (AWB) function are mounted on the endoscope 0.
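The auto exposure (AE) behavior described above, in which the control unit sets the exposure value on the basis of an acquired image signal, can be sketched as a simple feedback step. The proportional update rule and the target brightness are assumptions for illustration.

```python
# Sketch of one AE control step: inspect the acquired image signal's mean
# brightness and scale the designated exposure value toward a target.
# The update rule and target value are illustrative assumptions.

def auto_exposure_step(pixels: list, exposure: float,
                       target_mean: float = 128.0) -> float:
    """Return the next exposure value so mean brightness approaches the target."""
    mean = sum(pixels) / len(pixels)
    if mean == 0:
        return exposure * 2.0   # fully dark frame: open up aggressively
    return exposure * target_mean / mean

exposure = 1.0
dark_frame = [32, 64, 96]       # mean 64: underexposed
exposure = auto_exposure_step(dark_frame, exposure)
print(exposure)  # 2.0: mean 64 -> double the exposure toward target 128
```

The resulting exposure value would be carried in the next control signal sent to the camera head, alongside the frame rate and other imaging conditions.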
The camera head control unit 5 controls the drive of the camera head 2 on the basis of a control signal from the CCU 1 received via the communication unit 4.

The communication unit 1 includes a communication device to exchange various kinds of information with the camera head 2. The communication unit 1 receives an image signal transmitted from the camera head 2 via the transmission cable 0.
Furthermore, the communication unit 1 transmits a control signal in order to control the drive of the camera head 2 to the camera head 2. The image signal and the control signal can be transmitted by electric communication, optical communication, and the like.
The image processing unit 2 applies various kinds of image processing to the image signal that is RAW data transmitted from the camera head 2.
The control unit 3 performs various kinds of control associated with capturing an image of a surgical site or the like by the endoscope 0 and display of a captured image obtained by capturing the image of the surgical site or the like. For example, the control unit 3 generates a control signal in order to control the drive of the camera head 2.
Furthermore, the control unit 3 causes, on the basis of an image signal applied with the image processing by the image processing unit 2, the display device 2 to display a captured image in which a surgical site or the like is included. At this time, the control unit 3 may recognize various objects inside the captured image by using various image recognition technologies. For example, the control unit 3 can recognize a surgical tool such as forceps, a specific living body part, bleeding, a mist at the time of using the energy treatment tool 2, and the like by detecting an edge shape, a color, and the like of an object included in the captured image. When the control unit 3 causes the display device 2 to display the captured image, the control unit 3 may cause the display device 2 to display various kinds of surgical assistance information in a manner superimposed on the image of the surgical site by using the recognition results. Since the surgical assistance information is displayed in the superimposed manner and presented to the operator 1, it is possible to reduce a burden on the operator 1, and the operator 1 can perform the surgery reliably.
The transmission cable 0 that connects the camera head 2 and the CCU 1 is an electric signal cable adaptable to electric signal communication, an optical fiber adaptable to optical communication, or a composite cable thereof.
Here, in the illustrated example, communication is performed by wire using the transmission cable 0, but the communication between the camera head 2 and the CCU 1 may also be performed wirelessly.
In the above, the description has been provided for the example of the endoscopic surgery system to which the technology according to the present disclosure can be applied. The technology according to the present disclosure can be applied to, for example, the endoscope 0, the camera head 2 (imaging unit 2), the CCU 1 (image processing unit 2), and the like among the configurations described above. Specifically, for example, an imaging device 1 of
Note that the endoscopic surgery system has been described as an example here, but the technology according to the present disclosure may also be applied to, for example, a microscopic surgery system and the like.
The technology according to the present disclosure (the present technology) is applicable to various products. For example, the technology according to the present disclosure may be implemented as a device mounted on any kind of mobile objects such as a car, an electric car, a hybrid electric car, a motorcycle, a bicycle, a personal mobility, an airplane, a drone, a ship, a robot, and the like.
A vehicle control system 0 includes a plurality of electronic control units connected via a communication network 1. In the example illustrated in
The drive system control unit 0 controls operation of devices associated with a drive system of a vehicle in accordance with various programs. For example, the drive system control unit 0 functions as a control device for: a drive force generating device to generate drive force of a vehicle, such as an internal combustion engine or a drive motor; a drive force transmission mechanism to transmit the drive force to wheels; a steering mechanism that adjusts a steering angle of a vehicle; a brake device that generates braking force of a vehicle; and the like.
The body system control unit 0 controls operation of various devices equipped on a vehicle body in accordance with various programs. For example, the body system control unit 0 functions as a control device for: a keyless entry system; a smart key system; a power window device; or various kinds of lamps such as a headlamp, a back lamp, a brake lamp, a turn indicator, a fog lamp, and the like. In this case, radio waves transmitted from a portable device substituted for a key, or signals of various switches, can be received by the body system control unit 0. The body system control unit 0 accepts these radio waves or signals and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
The vehicle exterior information detection unit 0 detects information associated with the outside of the vehicle having the vehicle control system 0 mounted thereon. For example, the vehicle exterior information detection unit 0 has an imaging unit 1 connected thereto. The vehicle exterior information detection unit 0 causes the imaging unit 1 to capture an image of the outside of the vehicle, and receives the captured image. The vehicle exterior information detection unit 0 may perform, on the basis of the received image, object detection processing or distance detection processing for a person, a vehicle, an obstacle, a sign, characters on a road surface, and the like.
The imaging unit 1 is an optical sensor that receives light and outputs an electric signal corresponding to a received amount of the light. The imaging unit 1 can output an electric signal as an image and can also output an electric signal as ranging information. Furthermore, the light received by the imaging unit 1 may be visible light or may be invisible light such as infrared rays or the like.
The vehicle interior information detection unit 0 detects information associated with the inside of the vehicle. For example, the vehicle interior information detection unit 0 is connected to a vehicle operator state detecting unit 1 that detects a state of a vehicle operator. The vehicle operator state detecting unit 1 includes, for example, a camera that captures images of the vehicle operator, and the vehicle interior information detection unit 0 may evaluate a degree of fatigue or a degree of concentration of the vehicle operator on the basis of the detection information received from the vehicle operator state detecting unit 1, or may discriminate whether or not the vehicle operator is dozing off.
The microcomputer 1 calculates a control target value for the drive force generating device, the steering mechanism, or the brake device on the basis of information associated with the inside or the outside of the vehicle acquired by the vehicle exterior information detection unit 0 or the vehicle interior information detection unit 0, and can output a control command to the drive system control unit 0. For example, the microcomputer 1 can perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS) including: collision avoidance or impact mitigation of a vehicle; adaptive cruise based on an inter-vehicle distance; speed maintaining cruise; vehicle collision warning or vehicle lane departure warning; and the like.
Furthermore, the microcomputer 1 controls the drive force generating device, the steering mechanism, the brake device, or the like on the basis of information associated with a periphery of the vehicle acquired by the vehicle exterior information detection unit 0 or the vehicle interior information detection unit 0, thereby achieving cooperative control intended to perform automated cruise and the like in which autonomous travel is performed without depending on operation by a vehicle operator.
Furthermore, the microcomputer 1 can output a control command to the body system control unit 0 on the basis of the vehicle exterior information acquired by the vehicle exterior information detection unit 0. For example, the microcomputer 1 controls a headlamp in accordance with a position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 0, and can perform cooperative control intended for anti-glare operation, such as switching from a high beam to a low beam.
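The headlamp control above reduces to a simple decision rule. As a minimal sketch (the function name and string labels are illustrative assumptions):

```python
def select_beam(vehicle_detected_ahead: bool) -> str:
    """Anti-glare cooperative control: switch from high beam to low beam
    while a preceding or oncoming vehicle is detected ahead."""
    return "low_beam" if vehicle_detected_ahead else "high_beam"
```

In practice the detection flag would come from the vehicle exterior information detection unit, and hysteresis would typically be added so the beam does not flicker at the detection boundary.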
The audio image output unit 2 transmits at least one of an audio output signal or an image output signal to an output device capable of visually or audibly notifying a passenger of a vehicle or the outside of the vehicle of information. In the example of
In
For example, the imaging units 1, 2, 3, 4, and 5 are provided at positions such as a front nose, a side mirror, a rear bumper, a back door, an upper portion of a front windshield inside a vehicle interior of the vehicle 0, and the like. The imaging unit 1 provided at the front nose and the imaging unit 5 provided at the upper portion of the front windshield inside the vehicle interior mainly acquire images in front of the vehicle 0. The imaging units 2 and 3 provided at the side mirrors mainly acquire images of lateral sides of the vehicle 0. The imaging unit 4 provided at the rear bumper or the back door mainly acquires an image behind the vehicle 0. The front images acquired by the imaging units 1 and 5 are mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, or the like.
Note that
At least one of the imaging units 1 to 4 may have a function of acquiring distance information. For example, at least one of the imaging units 1 to 4 may be a stereo camera including a plurality of imaging elements, or may be an imaging element including pixels for detecting a phase difference.
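For a stereo camera, the distance information follows from triangulation: depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the two imaging elements, and d the disparity in pixels. A minimal sketch (the numeric values in the example are illustrative assumptions, not taken from this disclosure):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate depth (meters) from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 700 px focal length, 0.12 m baseline, and 8 px disparity, the object is at 10.5 m. Phase-difference pixels yield distance by an analogous relation between the phase offset and the defocus amount.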
For example, on the basis of the distance information acquired from the imaging units 1 to 4, the microcomputer 1 obtains a distance to each three-dimensional object within the imaging ranges 1 to 4 and a temporal change of that distance (a relative speed with respect to the vehicle 0). As a result, the microcomputer 1 can extract, as a preceding vehicle, the closest three-dimensional object that exists on the advancing route of the vehicle 0 and travels at a predetermined speed (e.g., 0 km/h or more) in substantially the same direction as the vehicle 0. Moreover, the microcomputer 1 can preliminarily set an inter-vehicle distance to be secured in front of the vehicle with respect to a preceding vehicle, and can perform automatic brake control (including adaptive cruise stop control), automatic acceleration control (including adaptive cruise start control), and the like. Thus, it is possible to perform cooperative control intended for automated cruising or the like in which the vehicle travels autonomously without depending on operation by the vehicle operator.
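The preceding-vehicle extraction described above can be sketched as a filter-and-select step over tracked objects. This is only an illustrative sketch under stated assumptions: the `Object3D` fields, the heading tolerance, and the default thresholds are hypothetical, not taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Object3D:
    distance_m: float    # distance obtained from the imaging units
    speed_mps: float     # estimated speed along the road
    heading_deg: float   # travel direction relative to the host vehicle
    on_route: bool       # lies on the advancing route of the host vehicle

def select_preceding_vehicle(objects, min_speed_mps=0.0, max_heading_dev_deg=10.0):
    """Pick the closest on-route object moving at or above a predetermined
    speed in substantially the same direction as the host vehicle."""
    candidates = [o for o in objects
                  if o.on_route
                  and o.speed_mps >= min_speed_mps
                  and abs(o.heading_deg) <= max_heading_dev_deg]
    return min(candidates, key=lambda o: o.distance_m, default=None)
```

The relative speed mentioned in the text would be obtained separately as the temporal change of `distance_m` between sampling periods, and the inter-vehicle-distance controller would then act on the selected object.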
For example, on the basis of the distance information obtained from the imaging units 1 to 4, the microcomputer 1 extracts three-dimensional object data while categorizing the three-dimensional objects into a two-wheeled vehicle, a regular vehicle, a large vehicle, a pedestrian, or another three-dimensional object such as a telephone pole, and can use the extracted data to automatically avoid obstacles. For example, the microcomputer 1 distinguishes whether an obstacle in the periphery of the vehicle 0 is visible to the vehicle driver of the vehicle 0 or difficult for the vehicle driver to see. Then, the microcomputer 1 determines a collision risk indicating a risk level of collision with each obstacle, and when the collision risk is equal to or higher than a setting value and there is a possibility of collision, the microcomputer 1 can provide operational assistance for collision avoidance by outputting an alarm to the vehicle driver via the audio speaker 1 and the display unit 2, or by performing forced deceleration or avoidance steering via the drive system control unit 0.
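A common way to realize the collision-risk threshold described above is a time-to-collision (TTC) test. This is a hedged sketch: the TTC formulation, the 2-second setting value, and the action labels are assumptions for illustration, not details of this disclosure.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until collision if the closing speed stays constant."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing in: no collision course
    return distance_m / closing_speed_mps

def assistance_actions(distance_m, closing_speed_mps, ttc_threshold_s=2.0):
    """When the risk reaches the setting value, alarm the driver and
    request forced deceleration or avoidance steering."""
    if time_to_collision(distance_m, closing_speed_mps) <= ttc_threshold_s:
        return ["audio_alarm", "display_warning", "forced_deceleration_or_steering"]
    return []
```

For example, an obstacle 10 m ahead closing at 10 m/s (TTC = 1 s) triggers all three actions under the assumed threshold, while the same obstacle at 100 m (TTC = 10 s) triggers none.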
At least one of the imaging units 1 to 4 may be an infrared camera that detects infrared rays. For example, the microcomputer 1 can recognize a pedestrian by determining whether or not a pedestrian is included in the captured images of the imaging units 1 to 4. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the captured images of the imaging units 1 to 4 provided as the infrared cameras, and a procedure of discriminating whether or not an object is a pedestrian by applying pattern matching processing to the series of feature points indicating the outline of the object. When the microcomputer 1 determines that a pedestrian is included in the captured images of the imaging units 1 to 4 and recognizes the pedestrian, the audio image output unit 2 controls the display unit 2 such that the display unit 2 displays a rectangular contour line superimposed over the recognized pedestrian for emphasis. Furthermore, the audio image output unit 2 may also control the display unit 2 such that the display unit 2 displays an icon or the like indicating the pedestrian at a desired position.
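The two-step pipeline above (feature-point extraction, then an outline test, then a superimposed rectangle) can be sketched on a toy binary infrared grid. Everything here is an illustrative assumption: real systems use learned detectors, not the crude aspect-ratio test shown.

```python
def extract_feature_points(ir_grid):
    """Toy feature extraction: coordinates of hot pixels in a binary
    infrared intensity grid (rows of 0/1 values)."""
    return [(r, c) for r, row in enumerate(ir_grid)
                   for c, v in enumerate(row) if v]

def bounding_box(points):
    """Rectangular contour line to superimpose over the recognized region."""
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    return (min(rows), min(cols), max(rows), max(cols))

def looks_like_pedestrian(points, min_aspect=1.5):
    """Crude outline test standing in for pattern matching: a standing
    pedestrian silhouette is taller than it is wide."""
    r0, c0, r1, c1 = bounding_box(points)
    height, width = r1 - r0 + 1, c1 - c0 + 1
    return height / width >= min_aspect
```

A 4-row, 1-column hot region has aspect ratio 4.0 and passes the test; the resulting bounding box is what the display unit would draw over the pedestrian for emphasis.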
The exemplary vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to, for example, the imaging unit 1 among the configurations described above. Specifically, for example, an imaging device 1 of
Note that the present disclosure can also adopt the following configurations.
<1> A solid-state imaging device in which
<2> The solid-state imaging device recited in <1>, in which
<3> The solid-state imaging device recited in <2>, in which
<4> The solid-state imaging device recited in <2>, in which
<5> The solid-state imaging device recited in <2>, in which
<6> The solid-state imaging device recited in <5>, in which
<7> The solid-state imaging device recited in <2>, in which
<8> The solid-state imaging device recited in any one of <1> to <7>, in which
<9> The solid-state imaging device recited in any one of <1> to <8>, in which the high refractive index layer is formed in a prior stage of the transparent protective substrate, and a bonding resin layer is formed between the transparent protective substrate and the solid-state imaging element.
<10> The solid-state imaging device recited in any one of <1> to <9>, in which a lens group including a plurality of lenses that adjusts a focal point is formed by pasting the lens group, as a final stage, to a most prior stage in the light incident direction.
<11> The solid-state imaging device recited in <10>, in which the lens group in the final stage includes a concave lens, a convex lens, or a combination of the concave lens and the convex lens.
<12> The solid-state imaging device recited in any one of <1> to <11>, in which
<13> The solid-state imaging device recited in any one of <1> to <12>, in which
<14> An imaging device in which
<15> An electronic apparatus in which
Number | Date | Country | Kind |
---|---|---|---|
2018-155518 | Aug 2018 | JP | national |
This application is a continuation application of U.S. patent application Ser. No. 17/250,626, filed on Feb. 12, 2021, which is a U.S. National Phase of International Patent Application No. PCT/JP2019/031339, filed on Aug. 8, 2019, which claims priority benefit of Japanese Patent Application No. JP 2018-155518 filed in the Japan Patent Office on Aug. 22, 2018. Each of the above-referenced applications is hereby incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
Parent | 17250626 | Feb 2021 | US |
Child | 18348496 | US |