The present technology relates to an image pickup element and an electronic device and, more particularly, relates to an image pickup element and an electronic device that suppress a difference in sensitivity of light receiving units.
Conventionally, there has been proposed an image pickup element that detects light of a predetermined narrow wavelength band (narrow band) (hereinafter, also referred to as narrow band light) by using a plasmon filter (see, for example, Patent Document 1).
Here, Patent Document 1 describes a pixel arrangement in which pixels in a same color are arranged in units of blocks of two rows and two columns. In a case where the pixels in a same color are aligned in a vertical direction or a horizontal direction in this manner, a difference in sensitivity is likely to occur between pixels in the same color, compared to a case where pixels in different colors are aligned.
The present technology has been made in view of such a situation and has an object to suppress the difference in sensitivity of the light receiving units such as pixels.
An image pickup element according to a first aspect of the present technology includes a pixel array in which at least a first light receiving unit configured to receive light in a predetermined color and a second light receiving unit configured to receive light having a wavelength band having a band width narrower than a band width of a wavelength band of the predetermined color are arranged, in which, in a case where the pixel array is divided into four areas by a vertical line and a horizontal line, a position of the second light receiving unit in a first block, in which one or more of each of the first light receiving unit and the second light receiving unit are arranged, differs in each of the areas.
The position of the second light receiving unit in the first block in each of the areas may be symmetrical with respect to an intersection of the vertical line and the horizontal line.
The position of the second light receiving unit in the first block in each of the areas may be symmetrical with respect to the vertical line or the horizontal line.
A combination of colors received by a third light receiving unit which is closer to an intersection of the vertical line and the horizontal line among upper and lower light receiving units adjacent to the second light receiving unit, and received by a fourth light receiving unit which is closer to the intersection among right and left light receiving units adjacent to the second light receiving unit, may correspond to each other between each of the second light receiving units.
The position of the second light receiving unit in the first block may be set on the basis of sensitivity of the first light receiving unit in the first block.
An intersection of the vertical line and the horizontal line may correspond to a center of the pixel array.
The intersection of the vertical line and the horizontal line may be on an optical axis of an optical system that leads light to the pixel array.
The first light receiving unit and the second light receiving unit may be pixels, respectively.
The first light receiving unit and the second light receiving unit may be light receiving areas of a pixel, respectively.
A second optical filter used in the second light receiving unit may be an optical filter that has a transmission band having a band width narrower than a band width of a first optical filter used in the first light receiving unit.
The second optical filter may be a plasmon filter.
The second optical filter may be a Fabry-Perot interference filter.
A second block including a fifth light receiving unit that receives red light, a third block including a sixth light receiving unit that receives green light, a fourth block including a seventh light receiving unit that receives green light, and a fifth block including an eighth light receiving unit that receives blue light may be arranged in the pixel array, the first light receiving unit may be one of the fifth light receiving unit to the eighth light receiving unit, and the first block may be one of the second block to the fifth block.
The colors of the second block to the fifth block in the pixel array may be arranged according to an arrangement of colors of Bayer array.
In the second block to the fifth block, the light receiving units may be arranged in two rows and two columns.
An electronic device according to a second aspect of the present technology includes an image pickup element, and a signal processor configured to process a signal output from the image pickup element, in which the image pickup element includes a pixel array in which at least a first light receiving unit configured to receive light in a predetermined color and a second light receiving unit configured to receive light having a wavelength band having a band width narrower than a band width of a wavelength band of the predetermined color are arranged, and, in a case where the pixel array is divided into four areas by a vertical line and a horizontal line, a position of the second light receiving unit in a first block, in which one or more of each of the first light receiving unit and the second light receiving unit are arranged, differs in each of the areas.
According to the first or second aspect of the present technology, light in the predetermined color is received by the first light receiving unit, and the second light receiving unit receives light having a wavelength band narrower than the wavelength band of the predetermined color.
According to the first or second aspect of the present technology, a difference in sensitivity of the light receiving units can be suppressed.
Note that the effects described here are not necessarily limited, and any of the effects described in the present disclosure may be obtained.
Hereinafter, a mode for carrying out the invention (hereinafter, referred to as “an embodiment”) will be described in detail with reference to the drawings. Note that the description will be given in the following order.
1. First embodiment
2. Second embodiment
3. Modification examples
4. Application examples
First, a first embodiment of the present technology will be described with reference to
<Configuration Example of Image Pickup Apparatus>
An image pickup apparatus 10 of
The image pickup apparatus 10 includes an optical system 11, an image pickup element 12, a memory 13, a signal processor 14, an output unit 15, and a control unit 16.
The optical system 11 includes, for example, a zoom lens, a focus lens, a diaphragm, and the like, which are not illustrated, and causes light from outside to enter the image pickup element 12. Furthermore, various filters such as a polarization filter are provided in the optical system 11 according to need.
The image pickup element 12 is, for example, a complementary metal oxide semiconductor (CMOS) image sensor. The image pickup element 12 receives incident light from the optical system 11, performs photoelectric conversion, and outputs image data corresponding to the incident light.
The memory 13 temporarily stores the image data output from the image pickup element 12.
The signal processor 14 performs signal processing (for example, processing such as noise removal, white balance adjustment, and the like) using the image data stored in the memory 13 and supplies the processed signal to the output unit 15.
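For illustration only, the kind of processing performed by the signal processor 14 can be sketched in Python as follows; the gray-world gain estimation used here is merely one simple example of white balance adjustment, and the function name and values are hypothetical, not part of the present technology.

```python
import numpy as np

def gray_world_white_balance(rgb: np.ndarray) -> np.ndarray:
    """Illustrative white balance: scale each channel so that the channel
    averages match the overall gray average; rgb has shape (H, W, 3) in [0, 1]."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0)

# Example: a greenish image is pulled back toward neutral gray.
image = np.dstack([np.full((4, 4), 0.4), np.full((4, 4), 0.6), np.full((4, 4), 0.4)])
balanced = gray_world_white_balance(image)
print(balanced[0, 0])  # roughly [0.47, 0.47, 0.47]
```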
The output unit 15 outputs the image data received from the signal processor 14. For example, the output unit 15 has a display (not illustrated) including liquid crystal or the like and displays a spectrum (image) corresponding to the image data from the signal processor 14, which is a so-called through image. For example, the output unit 15 includes a driver (not illustrated) for driving a recording medium such as a semiconductor memory, a magnetic disk, or an optical disk, and records the image data from the signal processor 14 to the recording medium. For example, the output unit 15 functions as a communication interface for performing communication with an unillustrated external device, and transmits the image data from the signal processor 14 to the external device wirelessly or by wire.
The control unit 16 controls each unit of the image pickup apparatus 10 according to user's operation or the like.
<Configuration Example of Circuit in Image Pickup Element>
The image pickup element 12 includes a pixel array 31, a row scanning circuit 32, a phase locked loop (PLL) 33, a digital analog converter (DAC) 34, a column analog digital converter (ADC) circuit 35, a column scanning circuit 36, and a sense amplifier 37.
In the pixel array 31, a plurality of pixels 51 are two-dimensionally arranged.
The pixel 51 is disposed at a point where the horizontal signal line H connected to the row scanning circuit 32 and the vertical signal line V connected to the column ADC circuit 35 intersect each other, and includes a photodiode 61 which performs photoelectric conversion, and several types of transistors for reading stored signals. In other words, the pixel 51 includes the photodiode 61, a transfer transistor 62, a floating diffusion 63, an amplification transistor 64, a selection transistor 65, and a reset transistor 66 as illustrated in an enlarged view on the right side in
The electric charge accumulated in the photodiode 61 is transferred to the floating diffusion 63 via the transfer transistor 62. The floating diffusion 63 is connected to a gate of the amplification transistor 64. In a case where the pixel 51 is a target of signal readout, the selection transistor 65 is turned on by the row scanning circuit 32 via the horizontal signal line H, and the signal of the selected pixel 51 is read out to the vertical signal line V, by source-follower driving of the amplification transistor 64, as a pixel signal corresponding to the amount of charge accumulated in the photodiode 61. Furthermore, the pixel 51 is reset by turning on the reset transistor 66.
The row scanning circuit 32 sequentially outputs drive signals for driving (for example, transferring, selecting, resetting, and the like) the pixel 51 of the pixel array 31 in each row.
On the basis of a clock signal supplied from outside, the PLL 33 generates and outputs a clock signal of a predetermined frequency needed for driving each part of the image pickup element 12.
The DAC 34 generates and outputs a ramp signal whose voltage drops from a predetermined value with a constant slope and then returns to the predetermined value (that is, a substantially sawtooth shape).
The column ADC circuit 35 includes comparators 71 and counters 72, the number of which corresponds to the number of columns of the pixels 51 in the pixel array 31, extracts a signal level from the pixel signal output from the pixels 51 by performing correlated double sampling (CDS), and outputs pixel data. In other words, the comparator 71 compares the ramp signal supplied from the DAC 34 with the pixel signal (a luminance value) output from the pixel 51, and supplies a comparison result signal, which is obtained as a result, to the counter 72. Then, the counter 72 counts a counter clock signal of a predetermined frequency according to the comparison result signal output from the comparator 71, so that the pixel signal is A/D converted.
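For illustration only, the operation of the comparator 71 and the counter 72 can be sketched as a simple single-slope conversion model as follows; the ramp step size and the voltage values are hypothetical numbers chosen for the example, not parameters of the image pickup element 12.

```python
RAMP_STEP = 0.001  # volts per counter clock; hypothetical value

def single_slope_adc(voltage: float, v_start: float = 1.0) -> int:
    """Count counter-clock cycles until the falling ramp crosses the input."""
    count = 0
    while v_start - count * RAMP_STEP > voltage:
        count += 1
    return count

def cds_sample(reset_level: float, signal_level: float) -> int:
    """Correlated double sampling: digitize the reset level and the signal
    level and take the difference, cancelling the pixel's offset component."""
    return single_slope_adc(signal_level) - single_slope_adc(reset_level)

# Hypothetical pixel whose signal swings 0.25 V below its reset level.
print(cds_sample(reset_level=0.9, signal_level=0.65))  # ~250 counts
```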
The column scanning circuit 36 sequentially supplies signals, which cause pixel data to be output, to the counter 72 of the column ADC circuit 35 at a predetermined timing.
The sense amplifier 37 amplifies the pixel data supplied from the column ADC circuit 35 and outputs the data to outside of the image pickup element 12.
<First Embodiment of Image Pickup Element>
In each pixel 51, an on-chip microlens 101, an interlayer film 102, a narrow band filter layer 103, an interlayer film 104, a photoelectric conversion element layer 105, and a signal wiring layer 106 are laminated in order from the top. In other words, the image pickup element 12 is a back-illuminated CMOS image sensor in which the photoelectric conversion element layer 105 is disposed closer to the light incident side than the signal wiring layer 106.
The on-chip microlens 101 is an optical element for collecting light to the photoelectric conversion element layer 105 of each pixel 51.
The interlayer film 102 and the interlayer film 104 include a dielectric such as SiO2. As will be described later, dielectric constants of the interlayer film 102 and the interlayer film 104 are preferably made as low as possible.
In the narrow band filter layer 103, each pixel 51 is provided with a narrow band filter NB, which is an optical filter that transmits narrow band light of a predetermined narrow wavelength band (narrow band). For the narrow band filter NB, a plasmon filter using surface plasmons, which is a type of metal thin film filter using a metal thin film of aluminum or the like, is used, for example. Furthermore, the transmission band of the narrow band filter NB is set for each pixel 51. The type (band number) of the transmission band of the narrow band filter NB is arbitrary, and is set to 4 or more, for example.
Here, the narrow band represents, for example, a wavelength band which is narrower than a transmission band of a conventional color filter of red (R), green (G), and blue (B), or yellow (Y), magenta (M), and cyan (C) based on three primary colors or color-matching function. Furthermore, in the following, a pixel that receives the narrow band light transmitted through the narrow band filter NB will be referred to as a multispectral pixel or an MS pixel.
The photoelectric conversion element layer 105 includes, for example, the photodiode 61 and the like illustrated in
In the signal wiring layer 106, wires and the like for reading electric charges accumulated in the photoelectric conversion element layer 105 are provided.
<About Plasmon Filter>
Next, with reference to
The plasmon filter 121A includes a plasmon resonator in which holes 132A are arranged in a honeycomb form in a thin metal film (hereinafter, referred to as a conductor thin film) 131A.
Each hole 132A penetrates the conductor thin film 131A and functions as a waveguide. A waveguide generally has a cutoff frequency and a cutoff wavelength determined by its shape, such as a side length or a diameter, and has a property that light having a frequency equal to or lower than the cutoff frequency (a wavelength equal to or longer than the cutoff wavelength) is not transmitted. The cutoff wavelength of the hole 132A mainly depends on the opening diameter D1 and, as the opening diameter D1 becomes smaller, the cutoff wavelength becomes shorter. Here, the opening diameter D1 is set to a value smaller than the wavelength of light to be transmitted.
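For illustration only, the dependence of the cutoff wavelength on the opening diameter can be sketched with the textbook formula for the dominant TE11 mode of a circular waveguide in a perfect conductor; a hole in a real metal film behaves differently, so only the trend (a smaller opening diameter gives a shorter cutoff wavelength) should be read from this sketch.

```python
import math

def cutoff_wavelength_nm(opening_diameter_nm: float) -> float:
    """Cutoff wavelength of the dominant TE11 mode of an ideal circular
    waveguide in a perfect conductor: lambda_c = pi * D / 1.841."""
    return math.pi * opening_diameter_nm / 1.841

# A 150 nm opening cuts off wavelengths above roughly 256 nm in this
# idealized model; a hole in a real metal film behaves differently, but
# the trend (smaller diameter -> shorter cutoff wavelength) is the same.
print(round(cutoff_wavelength_nm(150.0)))
```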
On the other hand, in a case where light enters the conductor thin film 131A in which the holes 132A are periodically formed at intervals equal to or shorter than the wavelength of the light, a phenomenon occurs in which light having a wavelength longer than the cutoff wavelength of the holes 132A is transmitted. This phenomenon is called an abnormal transmission phenomenon of plasmons. This phenomenon is caused by excitation of surface plasmons at the boundary between the conductor thin film 131A and the interlayer film 102 in the upper layer.
Here, with reference to
ωsp=ωp/√(1+εd) (1)
ωp represents a plasma frequency of the conductor thin film 131A, and εd represents a dielectric constant of the dielectric constituting the interlayer film 102.
From equation (1), the surface plasma frequency ωsp increases as the plasma frequency ωp increases. Furthermore, the surface plasma frequency ωsp increases as the dielectric constant εd decreases.
The line L1 indicates a light dispersion relation (light line) and is expressed by the following equation (2).
ω=c·k/√εd (2)
c represents the speed of light, and k represents an angular wave vector.
The line L2 indicates a dispersion relation of the surface plasmons and is expressed by the following equation (3).
k=(ω/c)·√(εm·εd/(εm+εd)) (3)
εm represents a dielectric constant of the conductor thin film 131A.
In a range where the angular wave vector k is small, the dispersion relation of the surface plasmons indicated by the line L2 gradually approaches the light line indicated by the line L1 and, as the angular wave vector k increases, it gradually approaches the surface plasma frequency ωsp.
Then, in a case where the following equation (4) is satisfied, an abnormal transmission phenomenon of plasmon occurs.
Re[kSPP]=|(2π/λ)·sinθ±i·Gx±j·Gy| (i and j are integers) (4)
λ represents a wavelength of incident light. θ represents an entering angle of incident light. Gx and Gy are expressed by the following equation (5).
|Gx|=|Gy|=2π/a0 (5)
a0 represents a lattice constant of the hole array structure including the holes 132A of the conductor thin film 131A.
The left side of the equation (4) indicates the angular wave vector of the surface plasmons, and the right side indicates the angular wave vector determined by the hole array period of the conductor thin film 131A. Therefore, in a case where the angular wave vector of the surface plasmons and the angular wave vector determined by the hole array period of the conductor thin film 131A become equal, the abnormal transmission phenomenon of the plasmon occurs. Then, the value of λ at this time is the resonance wavelength of the plasmon (the transmission wavelength of the plasmon filter 121A).
Note that the angular wave vector of the surface plasmons on the left side of the equation (4) is determined by the dielectric constant εm of the conductor thin film 131A and the dielectric constant εd of the interlayer film 102. On the other hand, the angular wave vector determined by the hole array period on the right side is determined by the light incident angle θ and the pitch (hole pitch) P1 between adjacent holes 132A of the conductor thin film 131A. Therefore, the resonance wavelength and the resonance frequency of the plasmon are determined by the dielectric constant εm of the conductor thin film 131A, the dielectric constant εd of the interlayer film 102, the incident angle θ of light, and the hole pitch P1. In a case where the entering angle of light is 0°, the resonance wavelength and the resonance frequency of the plasmon are determined by the dielectric constant εm of the conductor thin film 131A, the dielectric constant εd of the interlayer film 102, and the hole pitch P1.
Therefore, the transmission band (the plasmon resonance wavelength) of the plasmon filter 121A varies according to the material and thickness of the conductor thin film 131A, the material and thickness of the interlayer film 102, the pattern of the hole array (for example, the opening diameter D1 and the hole pitch P1 of the holes 132A), and the like. In particular, in a case where the materials and thicknesses of the conductor thin film 131A and the interlayer film 102 have been determined, the transmission band of the plasmon filter 121A varies according to the pattern of the hole array, particularly the hole pitch P1. In other words, as the hole pitch P1 becomes narrower, the transmission band of the plasmon filter 121A shifts to the shorter wavelength side and, as the hole pitch P1 becomes wider, the transmission band of the plasmon filter 121A shifts to the longer wavelength side.
In a case where the hole pitch P1 is set to 250 nm, the plasmon filter 121A mainly transmits light in a blue wavelength band. In a case where the hole pitch P1 is set to 325 nm, the plasmon filter 121A mainly transmits light in a green wavelength band. In a case where the hole pitch P1 is set to 500 nm, the plasmon filter 121A mainly transmits light in a red wavelength band. However, in a case where the hole pitch P1 is set to 500 nm, the plasmon filter 121A also transmits a large amount of light in a band having a wavelength shorter than that of red, due to a waveguide mode described later.
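For illustration only, the dependence of the transmission band on the hole pitch can be sketched with the widely used approximation for the resonance wavelength of a square hole array at normal incidence. The dielectric constants below are hypothetical single numbers (εm of aluminum actually varies strongly with wavelength), so the absolute wavelengths will not match the bands quoted above; only the monotonic shift with the hole pitch P1 should be read from this sketch.

```python
import math

def resonance_wavelength_nm(pitch_nm: float, eps_m: float, eps_d: float,
                            i: int = 1, j: int = 0) -> float:
    """Approximate (i, j)-order resonance wavelength of a square hole array
    at normal incidence: lambda = P / sqrt(i^2 + j^2) * sqrt(em*ed/(em+ed))."""
    return pitch_nm / math.hypot(i, j) * math.sqrt(eps_m * eps_d / (eps_m + eps_d))

# Hypothetical constants (eps_d ~ 2.1 for SiO2; a single eps_m stands in for
# the strongly wavelength-dependent value of aluminum). The band shifts to
# longer wavelengths as the pitch widens, as described above.
for pitch in (250.0, 325.0, 500.0):
    print(pitch, round(resonance_wavelength_nm(pitch, eps_m=-30.0, eps_d=2.1)))
```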
Note that the transmittance of the plasmon filter 121A is mainly determined by the opening diameter D1 of the hole 132A. As the opening diameter D1 increases, the transmittance increases, but color mixing is more likely to occur. In general, it is desirable to set the opening diameter D1 to 50% to 60% of the hole pitch P1.
Furthermore, as described above, each hole 132A of the plasmon filter 121A serves as a waveguide. Therefore, depending on the pattern of the hole array of the plasmon filter 121A, in the spectral characteristics, not only the wavelength component transmitted by the surface plasmon resonance (the wavelength component in the plasmon mode) but also the wavelength component transmitted through the holes 132A serving as waveguides (the wavelength component in a waveguide mode) may increase in some cases.
As described above, the cutoff wavelength mainly depends on the opening diameter D1 of the hole 132A and, in a case where the opening diameter D1 is smaller, the cutoff wavelength becomes shorter. Then, as the difference between the cutoff wavelength and the peak wavelength in the plasmon mode is increased, the wavelength resolution characteristic of the plasmon filter 121A is improved.
Furthermore, as described above, as the plasma frequency ωp of the conductor thin film 131A increases, the surface plasma frequency ωsp of the conductor thin film 131A increases. In addition, as the dielectric constant εd of the interlayer film 102 decreases, the surface plasma frequency ωsp increases. Then, as the surface plasma frequency ωsp increases, the resonance frequency of the plasmon can be set higher and the transmission band (plasmon resonance wavelength) of the plasmon filter 121A can be set to a shorter wavelength band.
Therefore, in a case where a metal having a larger plasma frequency ωp is used for the conductor thin film 131A, the transmission band of the plasmon filter 121A can be set to a shorter wavelength band. For example, aluminum, silver, gold, and the like are preferable. However, in a case where the transmission band is set to a long wavelength band such as infrared light, copper or the like can also be used.
Furthermore, in a case where a dielectric having a smaller dielectric constant εd is used for the interlayer film 102, the transmission band of the plasmon filter 121A can be set to a shorter wavelength band. For example, SiO2, Low-K, or the like is preferable.
Furthermore,
The propagation distance ΛSPP(λ) in the depth direction of the surface plasmons is expressed by the following equation (6).
kSPP represents an absorption coefficient of a material through which surface plasmons propagate. εm(λ) represents a dielectric constant of the conductor thin film 131A with respect to the light having the wavelength λ. εd(λ) represents a dielectric constant of the interlayer film 102 with respect to the light having the wavelength λ.
Therefore, as illustrated in
Furthermore, the surface plasmons for light with a wavelength of 400 nm propagate from the surface of the conductor thin film 131A including aluminum to a depth of about 10 nm. Therefore, in a case where the thickness of the conductor thin film 131A is set to 10 nm or more, the surface plasmons at the interface between the interlayer film 102 and the conductor thin film 131A are prevented from being affected by the interlayer film 104.
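For illustration only, the penetration of the surface plasmons into the metal can be estimated from the standard interface relations; the optical constants below are rough, hypothetical values for aluminum at a wavelength of 400 nm and for SiO2, not measured parameters of the present technology.

```python
import cmath
import math

def penetration_depth_metal_nm(wavelength_nm: float, eps_m: complex, eps_d: float) -> float:
    """Field 1/e penetration depth of the surface plasmon into the metal:
    delta = 1 / Im(k_z), with k_z = k0 * eps_m / sqrt(eps_m + eps_d)."""
    k0 = 2 * math.pi / wavelength_nm
    kz = k0 * eps_m / cmath.sqrt(eps_m + eps_d)
    return 1.0 / abs(kz.imag)

# Rough, hypothetical constants for aluminum at 400 nm and SiO2:
# the result is on the order of 10 nm, consistent with the text above.
print(round(penetration_depth_metal_nm(400.0, complex(-23.0, 4.8), 2.1)))  # ~13
```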
<Another Example of Plasmon Filter>
Next, another example of the plasmon filter will be described with reference to
The plasmon filter 121B in A of
Furthermore, in the plasmon resonator, not all of the holes have to penetrate the conductor thin film, and the plasmon resonator can function as a filter even in a case where some of the holes are formed as non-through holes that do not penetrate the conductor thin film.
For example, B of
Furthermore, a plasmon resonator of a single layer is basically used as the plasmon filter; however, for example, a two-layer plasmon resonator may be used.
For example, a plasmon filter 121D illustrated in
In addition, the distance D2 between the plasmon filter 121D-1 and the plasmon filter 121D-2 is preferably about ¼ the peak wavelength of the transmission band. Furthermore, in consideration of flexibility of design, the distance D2 is more preferably ½ or less of the peak wavelength of the transmission band.
Here, in addition to arranging the holes in the same pattern in the plasmon filter 121D-1 and the plasmon filter 121D-2 as in the plasmon filter 121D, for example, the holes may be arranged in patterns similar to each other in the two-layer plasmon resonator structure. Furthermore, in the two-layer plasmon resonator structure, holes and dots may be arranged in a pattern in which a hole array structure and a dot array structure (described later) are reversed. Furthermore, although the plasmon filter 121D has a two-layer structure, a structure of three or more layers can also be used.
Furthermore, the configuration example of the plasmon filter using the plasmon resonator of the hole array structure has been described above; however, a plasmon resonator of the dot array structure may be employed as the plasmon filter.
With reference to
A plasmon filter 121A′ in A of
The plasmon filter 121A′ absorbs light of a predetermined wavelength band and is therefore used as a complementary color filter. The wavelength band (hereinafter, referred to as an absorption band) of light absorbed by the plasmon filter 121A′ varies depending on the pitch P3 between adjacent dots 133A (hereinafter, referred to as a dot pitch) and the like. Furthermore, the diameter D3 of the dot 133A is adjusted in accordance with the dot pitch P3.
A plasmon filter 121B′ in B of
An absorption band of the plasmon filter 121B′ varies depending on the dot pitch P4 between adjacent dots 133B and the like. Furthermore, the diameter D4 of the dot 133B is adjusted in accordance with the dot pitch P4.
As illustrated in this diagram, as the dot pitch P3 becomes narrower, the absorption band of the plasmon filter 121A′ shifts to the shorter wavelength side and, as the dot pitch P3 becomes wider, the absorption band of the plasmon filter 121A′ shifts to the longer wavelength side.
Here, in any of the plasmon filters of the hole array structure and the dot array structure, the transmission band or the absorption band can be adjusted simply by adjusting the pitch in the planar direction of the holes or dots. Therefore, for example, by merely adjusting the pitch of the holes or dots in the lithography process, the transmission band or the absorption band can be individually set for each pixel, and the filter can be multicolored in fewer steps.
Furthermore, the thickness of the plasmon filter is about 100 to 500 nm, which is substantially similar to that of an organic material type color filter, so that the plasmon filter has preferable process compatibility.
Furthermore, for the narrow band filter NB, a plasmon filter 151 using guided mode resonance (GMR) illustrated in
In the plasmon filter 151, a conductor layer 161, a SiO2 film 162, a SiN film 163, and a SiO2 substrate 164 are laminated in order from the top. For example, the conductor layer 161 is included in the narrow band filter layer 103 in
In the conductor layer 161, for example, rectangular conductor thin films 161A including aluminum are arranged at a predetermined pitch P5 so that longitudinal sides of the conductor thin films 161A are adjacent to each other. Then, the transmission band of the plasmon filter 151 varies depending on the pitch P5 or the like.
The plasmon filter 151 using GMR also has preferable compatibility with an organic material type color filter, similarly to the above described plasmon filters of the hole array structure and the dot array structure.
<Second Embodiment of Image Pickup Element>
Next, with reference to
The image pickup element 12B is different from the image pickup element 12A in that a color filter layer 107 is laminated between the on-chip microlens 101 and the interlayer film 102.
In the narrow band filter layer 103 of the image pickup element 12B, the narrow band filter NB is provided only in some of the pixels 51, not all the pixels 51. The type (band number) of the transmission band of the narrow band filter NB is arbitrary, and may be set to one or more, for example.
A color filter is provided in each pixel 51 in the color filter layer 107. For example, in the pixel 51 that does not include the narrow band filter NB, any one of a general red filter R, green filter G, and blue filter B (not illustrated) is provided. With this configuration, for example, an R pixel provided with the red filter R, a G pixel provided with the green filter G, a B pixel provided with the blue filter B, and an MS pixel provided with the narrow band filter NB are arranged in the pixel array 31.
Furthermore, in the pixel 51 provided with the narrow band filter NB, the transmission filter P is provided in the color filter layer 107. As will be described later, the transmission filter P includes an optical filter (a low-pass filter, a high-pass filter, or a band-pass filter) that transmits light in a wavelength band including the transmission band of the narrow band filter NB of the same pixel 51.
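For illustration only, the design rule that the transmission band of the transmission filter P must include the transmission band of the narrow band filter NB of the same pixel 51 can be expressed as a simple containment check; the band edges below are hypothetical numbers chosen for the example.

```python
def covers(pass_band: tuple[float, float], target_band: tuple[float, float]) -> bool:
    """True if pass_band (nm) fully contains target_band (nm)."""
    return pass_band[0] <= target_band[0] and target_band[1] <= pass_band[1]

# Hypothetical bands: a band-pass transmission filter P passing 430-570 nm
# fully contains a narrow band filter NB transmitting 520-540 nm.
print(covers((430.0, 570.0), (520.0, 540.0)))  # True
```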
Note that the color filter provided in the color filter layer 107 may be any one of an organic material type and an inorganic material type.
For example, as the organic material type color filter, there are a dyed coloring system using a synthetic resin or a natural protein, and a pigment-containing system using a pigment or a dye.
For an inorganic material type color filter, for example, materials such as TiO2, ZnS, SiN, MgF2, SiO2, and Low-k are used. Furthermore, methods such as vapor deposition, sputtering, and chemical vapor deposition (CVD) film formation are used for forming the inorganic material type color filter, for example.
Furthermore, as described above with reference to
Here, occurrence of flare is suppressed by the transmission filter P provided in the color filter layer 107. This point will be described with reference to
In this example, the image pickup element 12A is provided in a semiconductor chip 203. Specifically, the semiconductor chip 203 is mounted on a substrate 213, and the periphery thereof is covered with a seal glass 211 and a resin 212. Then, the light transmitted through the lens 201, the IR cut filter 202, and the seal glass 211 provided in the optical system 11 in
Here, in a case where the narrow band filter NB of the narrow band filter layer 103 of the image pickup element 12A includes a plasmon filter, a conductor thin film made of metal is formed in the plasmon filter. Since this conductor thin film has a high reflectance, light of wavelengths other than the transmission band is easily reflected. Then, a part of the light reflected by the conductor thin film is reflected by the seal glass 211, the IR cut filter 202, or the lens 201, for example, as illustrated in
In order to prevent this reflected light, it is conceivable to use, for example, an antireflection film including a metal different from that of the conductor thin film or a material having a high dielectric constant. However, the plasmon filter uses surface plasmon resonance and, in a case where such an antireflection film touches the surface of the conductor thin film, the characteristics of the plasmon filter may deteriorate, or it may become difficult to obtain desired characteristics.
On the other hand,
The example of
As described above, in the image pickup element 12B, the transmission filter P is provided above the narrow band filter NB (on the light entering side). Therefore, the light entering the image pickup element 12B enters the narrow band filter NB after light outside a predetermined wavelength band is blocked by the transmission filter P, so that the amount of light entering the narrow band filter NB is suppressed. As a result, the amount of light reflected by the conductor thin film of the narrow band filter NB (plasmon filter) is also reduced, and flare is reduced.
The line L41 in
The line L51 in
The line L61 in
Here, in a case where the transmission band of the red filter R, the green filter G, or the blue filter B includes the transmission band of the lower-layer narrow band filter NB, these filters may be used for the transmission filter P.
Furthermore, in the example of
Furthermore, the color combination of the color filter of the color filter layer 107 is not limited to the above described example, and modification can be made according to need.
In addition, in a case where it is not necessary to take measures against the above described flare, for example, the transmission filter P may not be provided on the upper layer of the narrow band filter NB or a dummy filter that transmits light of all wavelengths may be provided.
<Third Embodiment of Image Pickup Element>
Next, with reference to
The image pickup element 12C is different from the image pickup element 12A in that a filter layer 108 is provided as a substitute for the narrow band filter layer 103. In addition, the image pickup element 12C is different from the image pickup element 12B in
With this configuration, in a case where the R pixel, the G pixel, the B pixel, and the MS pixel are arranged in the pixel array 31 of the image pickup element 12C, the color filter layer 107 can be omitted.
Here, in a case where an organic material type color filter is used, for example, the narrow band filter NB is formed first, a high-temperature final heat treatment such as a sintering process is performed, and then the color filter is formed, in order to prevent damage of the color filter due to heat and the like. On the other hand, in a case of using an inorganic material type color filter, there is basically no need to restrict the formation order as described above.
Furthermore, in a case of taking measures against flare like the image pickup element 12B, as in the case of the image pickup element 12B in
Next, a second embodiment of the present technology will be described with reference to
<First Embodiment of Pixel Array>
Furthermore, the intersection of the vertical line Lv and the horizontal line Lh coincides with a center of the pixel array 31A.
In the pixel array 31A, the R pixel, the Gr pixel, the Gb pixel, and the B pixel are arranged in units of blocks of two rows and two columns (hereinafter referred to as pixel blocks). Furthermore, the arrangement of colors in units of pixel blocks is made according to the arrangement of the colors of the Bayer array. In this case, the G pixel is divided into a Gr pixel arranged in the same row as the R pixel and a Gb pixel arranged in the same row as the B pixel.
Hereinafter, the arrangement of the pixels 51 is referred to as a Quadra array. Furthermore, in the following, a pixel block including R pixels (pixels R1 to R4) will be referred to as an R block. A pixel block including Gr pixels (pixels Gr1 to Gr4) is referred to as a Gr block. A pixel block including Gb pixels (pixels Gb1 to Gb4) is referred to as a Gb block. A pixel block including B pixels (pixels B1 to B4) is referred to as a B block.
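For illustration only, the Quadra array described above can be constructed programmatically as follows; the repetition counts are arbitrary example values.

```python
import numpy as np

BAYER = np.array([["R", "Gr"],
                  ["Gb", "B"]])  # color arrangement in units of pixel blocks

def quadra_array(units_v: int, units_h: int) -> np.ndarray:
    """Tile the Bayer color pattern in units of pixel blocks, then expand
    every block into a 2x2 group of pixels of the same color (Quadra array)."""
    block_colors = np.tile(BAYER, (units_v, units_h))  # one entry per block
    return block_colors.repeat(2, axis=0).repeat(2, axis=1)

print(quadra_array(1, 1))
# [['R' 'R' 'Gr' 'Gr']
#  ['R' 'R' 'Gr' 'Gr']
#  ['Gb' 'Gb' 'B' 'B']
#  ['Gb' 'Gb' 'B' 'B']]
```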
In the Quadra array, the sensitivities of the pixels 51 in the same pixel block are different. This will be described with reference to
As illustrated in the diagram, light enters each pixel 51 of the pixel array 31 not only in a direction perpendicular to the pixel array 31 but also from oblique directions. Furthermore, the incident direction of light varies depending on the position of the pixel 51. For example, a large amount of light from the obliquely lower left direction enters the pixel 51 in the upper right corner of the area A1. On the other hand, a large amount of light from the obliquely upper right direction enters the pixel 51 at the lower left corner of the area A3. Such light in an oblique direction (oblique light) becomes a cause of color mixing and causes a difference in sensitivity between the pixels 51 in the same pixel block.
The upper side of
A large amount of oblique light from the obliquely upper left direction is incident on the pixels 51 in the lower right corner, particularly in the area A4. At this time, in the pixel Gb1 surrounded by a circle in the drawing, not only the light directly entering the pixel Gb1 but also light from the upper pixel R3 and light from the left pixel B2, which are adjacent on the incident side of the oblique light, are likely to enter. Here, since the contact area between the pixel Gr4 and the pixel Gb1, which are adjacent in the oblique direction on the incident side of the oblique light, is very small, almost no light enters from the pixel Gr4 to the pixel Gb1.
In a similar manner, in the pixel Gb4 surrounded by a circle in the drawing, not only light directly entering the pixel Gb4 but also light from the upper pixel Gb2 and light from the left pixel Gb3, which are adjacent on the incident side of the oblique light, are likely to enter. Here, since the contact area between the pixel Gb1 and the pixel Gb4, which are adjacent in the oblique direction on the incident side of the oblique light, is very small, almost no light enters from the pixel Gb1 to the pixel Gb4.
The lower left side of
On each pixel 51, an on-chip microlens 311, a filter layer 312, an interlayer film 313, and a photoelectric conversion element layer 314 are stacked in order from the top. Furthermore, in the pixels Gb1, Gb2, and Gb4, a green filter G is provided in the filter layer 312 and, in the pixel R3, a red filter R is provided in the filter layer 312.
Here, since the transmission bands of the red filter R and the green filter G differ, the light passing through the on-chip microlens 311 of the pixel R3 and entering the boundary between the red filter R of the pixel R3 and the green filter G of the pixel Gb1 is blocked by the green filter G and hardly enters the pixel Gb1. In other words, in a case where the adjacent pixels 51 are in different colors, color mixing is less likely to occur.
On the other hand, light passing through the on-chip microlens 311 of the pixel Gb2 and entering the boundary between the green filter G of the pixel Gb2 and the green filter G of the pixel Gb4 mostly enters the pixel Gb4 without being blocked, since both color filters have the same transmission band. In other words, in a case where the adjacent pixels 51 are in a same color, color mixing is likely to occur.
Because of this difference in how easily color mixing occurs, a difference in sensitivity is generated between the pixels 51 in the same pixel block.
Specifically, as illustrated in
The pixel Gb2 is adjacent to the pixel Gb1 in the same color and the pixel R4 in a different color on the incident side of the oblique light. The pixel Gb3 is adjacent to the pixel Gb1 in the same color and the pixel B4 in a different color on the incident side of the oblique light. Here, the red filter R generally has a higher transmittance than the blue filter B. Therefore, the pixel Gb3 adjacent to the pixel B4 has lower sensitivity and is darker (has a smaller pixel value) than the pixel Gb2 adjacent to the pixel R4.
Since the pixel Gb4 is adjacent to the pixel Gb2 and the pixel Gb3 in the same color on the incident side of the oblique light, color mixing is most likely to occur. Therefore, the pixel Gb4 has the highest sensitivity and becomes brighter (has a larger pixel value).
Therefore, the sensitivity of each pixel 51 in the Gb block of the area A4 is basically in an order of pixel Gb4>pixel Gb2>pixel Gb3>pixel Gb1. For a similar reason, the sensitivity of each pixel 51 in the Gr block of the area A4 is basically in an order of pixel Gr4>pixel Gr3>pixel Gr2>pixel Gr1.
Basically, the sensitivity of each pixel 51 in the R block of the area A4 is in an order of pixel R4>pixel R3>pixel R2>pixel R1. Note that the pixel R2 and the pixel R3 have the same color combination of the adjacent pixels 51 on the incident side of the oblique light, but their sensitivity slightly differs because of the difference between the pixel Gb4 adjacent to the pixel R2 and the pixel Gr4 adjacent to the pixel R3.
Basically, the sensitivity of each pixel 51 in the B block of the area A4 is in an order of pixel B4>pixel B2>pixel B3>pixel B1. Note that the pixel B2 and the pixel B3 have the same color combination of the adjacent pixels 51 on the incidence side of the oblique light, but their sensitivity differs because of the difference between the pixel Gr4 adjacent to the pixel B2 and the pixel Gb4 adjacent to the pixel B3.
Although detailed description will be omitted, the sensitivity of each pixel 51 in the R block of the area A1 is basically in an order of pixel R2>pixel R1>pixel R4>pixel R3. Basically, the sensitivity of each pixel 51 in the Gr block of the area A1 is in an order of pixel Gr2>pixel Gr1>pixel Gr4>pixel Gr3. Basically, the sensitivity of each pixel 51 in the Gb block of the area A1 is in an order of pixel Gb2>pixel Gb4>pixel Gb1>pixel Gb3. Basically, the sensitivity of each pixel 51 in the B block of the area A1 is in the order of pixel B2>pixel B4>pixel B1>pixel B3.
Basically, the sensitivity of each pixel 51 in the R block of the area A2 is in an order of pixel R1>pixel R2>pixel R3>pixel R4. Basically, the sensitivity of each pixel 51 in the Gr block of the area A2 is in an order of pixel Gr1>pixel Gr2>pixel Gr3>pixel Gr4. Basically, the sensitivity of each pixel 51 in the Gb block of the area A2 is in an order of pixel Gb1>pixel Gb3>pixel Gb2>pixel Gb4. Basically, the sensitivity of each pixel 51 in the B block of the area A2 is in an order of pixel B1>pixel B3>pixel B2>pixel B4.
Basically, the sensitivity of each pixel 51 in the R block of the area A3 is in an order of pixel R3>pixel R4>pixel R1>pixel R2. Basically, the sensitivity of each pixel 51 in the Gr block of the area A3 is in an order of pixel Gr3>pixel Gr4>pixel Gr1>pixel Gr2. Basically, the sensitivity of each pixel 51 in the Gb block of the area A3 is in an order of pixel Gb3>pixel Gb1>pixel Gb4>pixel Gb2. Basically, the sensitivity of each pixel 51 in the B block of the area A3 is in an order of pixel B3>pixel B1>pixel B4>pixel B2.
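For illustration only, the sensitivity order of the Gb block in the area A4 described above can be reproduced with a toy model in which each pixel receives leakage from its two incident-side neighbors, weighted by how much light the neighbor's color filter passes. The leak weights below are hypothetical values chosen only so that a same-color neighbor mixes the most and a red neighbor mixes more than a blue one.

```python
# Hypothetical leak weights: a same-color (green) neighbor mixes the most,
# and a red neighbor mixes more than a blue one, because the red filter R
# generally has a higher transmittance than the blue filter B.
LEAK = {"G": 1.0, "R": 0.5, "B": 0.3}

# One 4x4 unit of the Quadra array, with Gr and Gb folded into "G".
UNIT = [list("RRGG"), list("RRGG"), list("GGBB"), list("GGBB")]

def mixing_score(row: int, col: int) -> float:
    """Color mixing into pixel (row, col) in the area A4, where the oblique
    light arrives from the upper left: the incident-side neighbors are the
    pixel above and the pixel to the left (the unit repeats, hence % 4)."""
    upper = UNIT[(row - 1) % 4][col]
    left = UNIT[row][(col - 1) % 4]
    return LEAK[upper] + LEAK[left]

# Gb block: Gb1=(2,0), Gb2=(2,1), Gb3=(3,0), Gb4=(3,1).
for name, (r, c) in [("Gb1", (2, 0)), ("Gb2", (2, 1)),
                     ("Gb3", (3, 0)), ("Gb4", (3, 1))]:
    print(name, mixing_score(r, c))
# Scores rank Gb4 > Gb2 > Gb3 > Gb1, matching the order described above.
```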
As described above, for each of the R pixel, the Gr pixel, the Gb pixel, and the B pixel, there are four types of pixels with different sensitivities in each pixel block. The difference in sensitivity between the pixels 51 in the same color may cause degradation of image quality or the like, for example.
Furthermore, even in a case of a pixel block in a same color, the distribution of the sensitivity of each pixel 51 is different from the area A1 to the area A4. For example, in the areas A1 to A4, the positions of the pixels 51 having the highest sensitivity in each pixel block are different.
<Second Embodiment of Pixel Array>
In the pixel array 31B, multispectral pixels are arranged in addition to the R pixel, the G pixel, and the B pixel as in the above described image pickup element 12C in
Here, the numbers in the circles in the drawing indicate the sensitivity order of each pixel 51 in the Gb block in the pixel array 31A of the Quadra array before being replaced with the multispectral pixels MS. For example, the pixel 51 indicated by the number 1 represents the pixel 51 having the highest sensitivity in the Gb block.
For example, in the Gb block of the area A1, the pixel Gb1 having a third highest sensitivity is replaced with a multispectral pixel MS. As a result, there remain the pixel Gb2 having a highest sensitivity, the pixel Gb4 having a second highest sensitivity, and the pixel Gb3 having a lowest sensitivity.
In the Gb block of the area A2, the pixel Gb1 having a highest sensitivity is replaced with a multispectral pixel MS. As a result, there remain the pixel Gb3 having a second highest sensitivity, the pixel Gb2 having a third highest sensitivity, and the pixel Gb4 having a lowest sensitivity.
In the Gb block of the area A3, the pixel Gb1 having a second highest sensitivity is replaced with a multispectral pixel MS. As a result, there remain the pixel Gb3 having a highest sensitivity, the pixel Gb4 having a third highest sensitivity, and the pixel Gb2 having a lowest sensitivity.
In the Gb block of the area A4, the pixel Gb1 having a lowest sensitivity is replaced with a multispectral pixel MS. As a result, there remain the pixel Gb4 having a highest sensitivity, the pixel Gb3 having a second highest sensitivity, and the pixel Gb2 having a third highest sensitivity.
As a result, in the pixel array 31B, the number of kinds of sensitivity of the Gb pixels remains at four, unchanged as compared with the pixel array 31A in
<Third Embodiment of Pixel Array>
As in the pixel array 31B in
On the other hand, in the pixel array 31C, unlike the pixel array 31B, the positions of the multispectral pixels MS in the pixel block are different in the areas A1 to A4. More specifically, in the pixel array 31C, the pixel 51 having a lowest sensitivity in the Gr block of the pixel array 31A in
For example, in the area A1, the pixels Gr3 and Gb3 are replaced with multispectral pixels MS. In the area A2, the pixels Gr4 and Gb4 are replaced with multispectral pixels MS. In the area A3, the pixels Gr2 and Gb2 are replaced with multispectral pixels MS. In the area A4, the pixels Gr1 and Gb1 are replaced with multispectral pixels MS.
As a result, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to a center (the optical axis of the lens 301) of the pixel array 31C. In a similar manner, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31C.
In addition, the combination of colors received by the pixels 51 adjacent to each multispectral pixel MS on the incident side of the oblique light coincides, being red (R) and blue (B). In other words, for each of the multispectral pixels MS, the color combination of the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31C) among the adjacent upper and lower pixels 51, and the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31C) among the adjacent left and right pixels 51, coincides, being red (R) and blue (B).
For example, in the area A1, the pixel R1 and the pixel B4, or the pixel B1 and the pixel R4 are adjacent to the lower side and the left side closer to the optical axis of the lens 301 of the multispectral pixel MS. In the area A2, the pixel R2 and the pixel B3, or the pixel B2 and the pixel R3 are adjacent to the lower side and the right side closer to the optical axis of the lens 301 of the multispectral pixel MS. In the area A3, the pixel R4 and the pixel B1, or the pixel B4 and the pixel R1 are adjacent to the upper side and the right side closer to the optical axis of the lens 301 of the multispectral pixel MS. In the area A4, the pixel R3 and the pixel B2, or the pixel B3 and the pixel R2 are adjacent to the upper side and the left side closer to the optical axis of the lens 301 of the multispectral pixel MS.
As a result, in the pixel array 31C, as compared with the pixel array 31A in
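For illustration only, and assuming (consistent with the oblique-light description above) that the area A1 is the upper right area, A2 the upper left, A3 the lower left, and A4 the lower right, the point-symmetric placement of the multispectral pixels MS can be expressed as mirror operations on the in-block position:

```python
def ms_position(area: str, base_a4: tuple[int, int]) -> tuple[int, int]:
    """In-block (row, col) position (each 0 or 1) of the multispectral pixel
    MS in each area, obtained by mirroring the area-A4 position across the
    horizontal line Lh for the upper areas and across the vertical line Lv
    for the left areas."""
    row, col = base_a4
    if area in ("A1", "A2"):  # upper areas: mirror across Lh
        row = 1 - row
    if area in ("A2", "A3"):  # left areas: mirror across Lv
        col = 1 - col
    return (row, col)

# With base position (0, 0) (the pixels Gr1 and Gb1 in the area A4, as in
# the pixel array 31C):
for area in ("A1", "A2", "A3", "A4"):
    print(area, ms_position(area, (0, 0)))
# A1 (1, 0), A2 (1, 1), A3 (0, 1), A4 (0, 0): point-symmetric about the
# intersection of the vertical line Lv and the horizontal line Lh.
```

Applying the same mirroring to the area-A4 base positions used in the fourth to seventh embodiments below reproduces the placements described there as well.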
<Fourth Embodiment of Pixel Array>
As in the pixel array 31B of
On the other hand, the position of the multispectral pixel MS of the pixel array 31D is different from that of the pixel array 31B. More specifically, in the pixel array 31D, the pixel 51 having a second lowest sensitivity in the Gr block of the pixel array 31A in
More specifically, in the area A1, the pixels Gr4 and Gb1 are replaced with the multispectral pixels MS. In the area A2, the pixels Gr3 and Gb2 are replaced with multispectral pixels MS. In the area A3, the pixels Gr1 and Gb4 are replaced with multispectral pixels MS. In the area A4, the pixels Gr2 and Gb3 are replaced with multispectral pixels MS.
With this configuration, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31D. In a similar manner, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31D.
Furthermore, the combination of colors of adjacent pixels on the incidence side of the oblique light of each multispectral pixel MS coincides as being green (Gb or Gr) and blue (B). In other words, among the multispectral pixels MS, the color combination of the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31D) among the adjacent upper and lower pixels 51, and the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31D) among the adjacent left and right pixels 51 coincides as being green (Gb or Gr) and blue (B).
As a result, in the pixel array 31D, the sensitivity of the Gr pixel is reduced to three kinds as compared with the pixel array 31A of
<Fifth Embodiment of Pixel Array>
As in the pixel array 31B of
On the other hand, in the pixel array 31E, the position of the multispectral pixel MS is different from that of the pixel array 31B. More specifically, in the pixel array 31E, the pixel 51 having a second highest sensitivity in the Gr block of the pixel array 31A of
In more detail, in the area A1, the pixels Gr1 and Gb4 are replaced with multispectral pixels MS. In the area A2, the pixels Gr2 and Gb3 are replaced with multispectral pixels MS. In the area A3, the pixels Gr4 and Gb1 are replaced with multispectral pixels MS. In the area A4, the pixels Gr3 and Gb2 are replaced with multispectral pixels MS.
As a result, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to a center (the optical axis of the lens 301) of the pixel array 31E. In a similar manner, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31E.
Furthermore, the color combination of adjacent pixels on the incidence side of the oblique light of each multispectral pixel MS coincides as being green (Gb or Gr) and red (R). In other words, among the multispectral pixels MS, the color combination of the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31E) among the adjacent upper and lower pixels 51, and the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31E) among the adjacent left and right pixels 51 coincides as being green (Gb or Gr) and red (R).
As a result, in the pixel array 31E, as compared with the pixel array 31A in
<Sixth Embodiment of Pixel Array>
As in the pixel array 31B in
On the other hand, in the pixel array 31F, the position of the multispectral pixel MS is different from that of the pixel array 31B. More specifically, in the pixel array 31F, the pixel 51 having a highest sensitivity in the Gr block of the pixel array 31A in
In more detail, in the area A1, the pixels Gr2 and Gb2 are replaced with multispectral pixels MS. In the area A2, the pixels Gr1 and Gb1 are replaced with multispectral pixels MS. In the area A3, the pixels Gr3 and Gb3 are replaced with multispectral pixels MS. In the area A4, the pixels Gr4 and Gb4 are replaced with multispectral pixels MS.
As a result, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31F. In a similar manner, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31F.
Furthermore, the color combination of adjacent pixels on the incidence side of the oblique light of each multispectral pixel MS coincides as being green (Gb or Gr). In other words, among the multispectral pixels MS, the combination of colors of the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31F) among the adjacent upper and lower pixels 51, and the pixel 51 which is closer to the optical axis of the lens 301 (the center of the pixel array 31F) among the adjacent left and right pixels 51 coincides as being green (Gb or Gr).
As a result, in the pixel array 31F, as compared with the pixel array 31A in
<Seventh Embodiment of Pixel Array>
In the pixel array 31G, a single multispectral pixel MS is arranged in each pixel block. More specifically, in the pixel array 31G, the pixel 51 having a lowest sensitivity of each pixel block of the pixel array 31A in
In more detail, in the area A1, the pixel R3, the pixel Gr3, the pixel Gb3, and the pixel B3 are replaced with multispectral pixels MS. In the area A2, the pixel R4, the pixel Gr4, the pixel Gb4, and the pixel B4 are replaced with multispectral pixels MS. In the area A3, the pixel R2, the pixel Gr2, the pixel Gb2, and the pixel B2 are replaced with multispectral pixels MS. In the area A4, the pixel R1, the pixel Gr1, the pixel Gb1, and the pixel B1 are replaced with multispectral pixels MS.
As a result, the position of the multispectral pixel MS in the R block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31G. The position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to the center of the pixel array 31G (the optical axis of the lens 301). The position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31G. The position of the multispectral pixel MS in the B block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31G.
As a result, as illustrated in
<Eighth Embodiment of Pixel Array>
In the pixel array 31H, R pixels, Gr pixels, Gb pixels, and B pixels are arranged in units of blocks of one row and two columns, and the color arrangement in units of pixel blocks is made according to the color arrangement of the Bayer array.
However, in each Gr block, one pixel 51 is replaced with a multispectral pixel MS. More specifically, in the area A1 and the area A4, the pixel Gr1 on the left side in the Gr block is replaced with a multispectral pixel MS. In the area A2 and the area A3, the pixel Gr2 on the right side in the Gr block is replaced with a multispectral pixel MS.
In a similar manner, one pixel 51 in each Gb block is replaced with a multispectral pixel MS. More specifically, in the area A1 and the area A4, the pixel Gb1 on the left side in the Gb block is replaced with the multispectral pixel MS. In the areas A2 and A3, the right pixel Gb2 in the Gb block is replaced with the multispectral pixel MS.
With this configuration, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to a center (the optical axis of the lens 301) of the pixel array 31H. Furthermore, the position of the multispectral pixel MS in the Gr block in each of the areas A1 to A4 is symmetrical with respect to the vertical line Lv.
In a similar manner, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the center (the optical axis of the lens 301) of the pixel array 31H. Furthermore, the position of the multispectral pixel MS in the Gb block in each of the areas A1 to A4 is symmetrical with respect to the vertical line Lv.
As a result, in the pixel array 31H, the sensitivity difference of the Gr pixel and the sensitivity difference of the Gb pixel are suppressed.
<Ninth Embodiment of Pixel Array>
A color arrangement of the pixel array 31I is basically made according to the Bayer array. However, in each pixel 51, two photodiodes (not illustrated) are arranged side by side in a horizontal direction so that two light receiving areas are aligned in the horizontal direction. With this configuration, a block (hereinafter, referred to as a light receiving area block) including light receiving areas in one row and two columns is formed in each pixel 51 (divided pixel). Then, for example, detection of a phase difference or the like is performed on the basis of a difference between the light receiving amounts of the two light receiving areas in the same pixel 51.
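For illustration only, phase difference detection from the divided pixels can be sketched as finding the relative shift between line profiles collected from the left and right light receiving areas; the correlation search below is a hypothetical, minimal implementation, not the method of the present technology.

```python
import numpy as np

def phase_shift(left: np.ndarray, right: np.ndarray, max_shift: int = 8) -> int:
    """Return the shift (in samples) of the left-side profile relative to the
    right-side profile that maximizes their correlation."""
    shifts = range(-max_shift, max_shift + 1)
    scores = [float(np.dot(np.roll(left, s), right)) for s in shifts]
    return list(shifts)[int(np.argmax(scores))]

# Hypothetical line profiles: the right profile is the left one shifted by 3,
# as would happen for a defocused subject.
rng = np.random.default_rng(0)
left = rng.random(64)
right = np.roll(left, 3)
print(phase_shift(left, right))  # 3
```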
Note that the light receiving areas indicated by R1 and R2 in the drawing receive red light, the light receiving areas indicated by Gr1, Gr2, Gb1, and Gb2 receive green light, and the light receiving areas indicated by B1 and B2 receive blue light.
Therefore, light receiving areas of the same color are arranged in each of the R pixel and the B pixel. On the other hand, in the Gr pixel and the Gb pixel, a multispectral light receiving area MS that receives narrow band light is arranged on one of the right and left sides.
More specifically, within the areas A1 and A4, the multispectral light receiving area MS is arranged on the left side in each of the Gr pixel and the Gb pixel. In the areas A2 and A3, the multispectral light receiving area MS is arranged on the right side in each of the Gr pixel and the Gb pixel.
As a result, the position of the multispectral light receiving area MS within the Gr pixel (within the light receiving area block) in each of the areas A1 to A4 is symmetrical with respect to the center of the pixel array 31I. Furthermore, the position of the multispectral light receiving area MS within the Gr pixel (within the light receiving area block) in each of the areas A1 to A4 is symmetrical with respect to the vertical line Lv.
In a similar manner, the position of the multispectral light receiving area MS within the Gb pixel (within the light receiving area block) in each of the areas A1 to A4 is symmetrical with respect to the center of the pixel array 31I. Furthermore, the position of the multispectral light receiving area MS within the Gb pixel (within the light receiving area block) in each of the areas A1 to A4 is symmetrical with respect to the vertical line Lv.
As a result, in the pixel array 31I, the sensitivity difference of the light receiving area Gr and the sensitivity difference of the light receiving area Gb are suppressed.
Here, the second embodiment can be applied to any of the cases where the color filter layer 107 and the narrow band filter layer 103 are laminated as in the image pickup element 12B of
In the following, a modified example of the above described embodiment of the present technology will be described.
The example of the pixel arrangement of the pixel array 31, which has been described above with reference to
Here, as in the pixel array 31H of
Furthermore, the intersection of the vertical line Lv and the horizontal line Lh dividing the pixel array 31 does not necessarily coincide with the center of the pixel array 31. For example, in a case where the optical axis of the optical system 11 does not coincide with the center of the pixel array 31, the intersection of the vertical line Lv and the horizontal line Lh is made to coincide with the optical axis of the optical system 11 rather than with the center of the pixel array 31. In this case, for example, the multispectral light receiving units are arranged so that their positions in the block are symmetrical with respect to the intersection (the optical axis of the optical system 11) of the vertical line Lv and the horizontal line Lh.
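This point-symmetric arrangement can be illustrated with the following sketch, which mirrors a base in-block position according to the quadrant in which a block lies relative to the intersection of the vertical line Lv and the horizontal line Lh. The function and its parameters are hypothetical names introduced only for explanation.

```python
def ms_position_in_block(base_row, base_col, block_rows, block_cols,
                         block_cx, block_cy, axis_x, axis_y):
    """In-block position of the multispectral light receiving unit.

    (base_row, base_col) is the position used in the upper-left area;
    the position is mirrored horizontally/vertically for blocks lying
    on the other side of the vertical line Lv (x = axis_x) or the
    horizontal line Lh (y = axis_y), yielding point symmetry about
    the intersection (the optical axis).
    """
    row, col = base_row, base_col
    if block_cx > axis_x:                 # block lies to the right of Lv
        col = (block_cols - 1) - col
    if block_cy > axis_y:                 # block lies below Lh
        row = (block_rows - 1) - row
    return row, col

# 2 x 2 block, base position at the upper-left, intersection at (0, 0):
for cx, cy in [(-1, -1), (1, -1), (-1, 1), (1, 1)]:
    print((cx, cy), ms_position_in_block(0, 0, 2, 2, cx, cy, 0, 0))
# (0, 0), (0, 1), (1, 0), (1, 1): point-symmetric about the intersection.
```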
Furthermore, the present technology can be applied not only to the above described back-illuminated CMOS image sensor but also to other image pickup elements using a plasmon filter. For example, the present technology can be applied to a front-illuminated CMOS image sensor, a charge coupled device (CCD) image sensor, an image sensor having a photoconductor structure including an organic photoelectric conversion film, a quantum dot structure, or the like.
Furthermore, the present technology can be applied to, for example, a laminated solid-state image pickup apparatus illustrated in
Furthermore, the present technology can also be applied to an image pickup element using, as a narrow band filter, a metal thin film filter other than a plasmon filter, or the like. Furthermore, examples of such a narrow band filter include an optical filter to which a photonic crystal using a semiconductor material is applied, a Fabry-Perot interference filter, and the like.
Next, application examples of the present technology will be described.
For example, as illustrated in
A more specific application example will be described below.
For example, by adjusting the transmission band of the narrow band filter NB of each pixel 51 of the image pickup apparatus 10 of
For example, in a case of detecting myoglobin, which indicates a flavor component of tuna, beef, and the like, a peak wavelength of the detection band is in a range of 580 to 630 nm and a half value width is in a range of 30 to 50 nm. In a case of detecting oleic acid, which indicates freshness of tuna, beef, and the like, the peak wavelength of the detection band is 980 nm and the half value width is in a range of 50 to 100 nm. In a case of detecting chlorophyll, which indicates freshness of leafy vegetables such as Japanese mustard spinach, the peak wavelength of the detection band is in a range of 650 to 700 nm and the half value width is in a range of 50 to 100 nm.
For example, in a case of detecting a fruit optical path length, which indicates a sugar content of Raiden, a kind of melon, the peak wavelength of the detection band is 880 nm and the half value width is in a range of 20 to 30 nm. In a case of detecting sucrose, which indicates the sugar content of Raiden, the peak wavelength of the detection band is 910 nm and the half value width is in a range of 40 to 50 nm. In a case of detecting sucrose, which indicates a sugar content of Raiden Red, another kind of melon, the peak wavelength of the detection band is 915 nm and the half value width is in a range of 40 to 50 nm. In a case of detecting a moisture content, which indicates the sugar content of Raiden Red, the peak wavelength of the detection band is 955 nm and the half value width is in a range of 20 to 30 nm.
In a case of detecting sucrose, which indicates a sugar content of an apple, the peak wavelength of the detection band is 912 nm and the half value width is in a range of 40 to 50 nm. In a case of detecting moisture, which indicates a moisture content of an orange, the peak wavelength of the detection band is 844 nm and the half value width is 30 nm. In a case of detecting sucrose, which indicates a sugar content of the orange, the peak wavelength of the detection band is 914 nm and the half value width is in a range of 40 to 50 nm.
For example, in a case of detecting polyethylene terephthalate (PET), the peak wavelength of the detection band is 1669 nm and the half value width is in a range of 30 to 50 nm. In a case of detecting polystyrene (PS), the peak wavelength of the detection band is 1688 nm and the half value width is in a range of 30 to 50 nm. In a case of detecting polyethylene (PE), the peak wavelength of the detection band is 1735 nm and the half value width is in a range of 30 to 50 nm. In a case of detecting polyvinyl chloride (PVC), the peak wavelength of the detection band is in a range of 1716 to 1726 nm and the half value width is in a range of 30 to 50 nm. In a case of detecting polypropylene (PP), the peak wavelength of the detection band is in a range of 1716 to 1735 nm and the half value width is in a range of 30 to 50 nm.
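For reference, the detection bands listed above can be organized as a simple lookup, as in the following sketch. The table transcribes only a subset of the values, and the data structure and the matching function are illustrative assumptions rather than part of the described technology.

```python
# Substance -> (allowable peak wavelength range [nm], half value width range [nm]).
DETECTION_BANDS = {
    "myoglobin":       ((580, 630), (30, 50)),
    "oleic acid":      ((980, 980), (50, 100)),
    "chlorophyll":     ((650, 700), (50, 100)),
    "sucrose (apple)": ((912, 912), (40, 50)),
    "PET":             ((1669, 1669), (30, 50)),
    "PP":              ((1716, 1735), (30, 50)),
}

def filter_matches(substance, peak_nm, half_width_nm):
    """Check whether a narrow band filter (peak wavelength and half
    value width) satisfies the detection band of the substance."""
    (p_lo, p_hi), (w_lo, w_hi) = DETECTION_BANDS[substance]
    return p_lo <= peak_nm <= p_hi and w_lo <= half_width_nm <= w_hi

print(filter_matches("myoglobin", 600, 40))  # True
print(filter_matches("PET", 1700, 40))       # False
```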
Furthermore, for example, the present technology can be applied to freshness management of cut flowers.
Furthermore, for example, the present technology can be applied to inspection of a foreign substance mixed in food. For example, the present technology can be applied to detection of a foreign substance such as skin, shell, stone, leaves, branches, wood chips, and the like mixed in nuts or fruits such as almonds, blueberries, walnuts, and the like. Furthermore, for example, the present technology can be applied to detection of a foreign substance such as a plastic piece mixed in processed food, beverage, and the like.
Furthermore, for example, the present technology can be applied to detection of a normalized difference vegetation index (NDVI), which is an indicator of vegetation.
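NDVI itself is a standard index computed from a red band image and a near-infrared band image as NDVI = (NIR - Red) / (NIR + Red). The following is a minimal sketch of that computation; the function name and array interface are assumptions for illustration.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in the range [-1, 1].

    Dense, healthy vegetation reflects strongly in the near infrared
    and absorbs red light, so it yields values close to 1.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    return np.where(denom > 0, (nir - red) / denom, 0.0)  # guard dark pixels

print(ndvi(np.array([[10.0]]), np.array([[90.0]])))  # [[0.8]]
```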
Furthermore, for example, the present technology can be applied to human detection on the basis of one or both of a spectroscopic shape near the wavelength of 580 nm derived from human hemoglobin and a spectroscopic shape near the wavelength of 960 nm derived from melanin pigment contained in human skin.
Furthermore, for example, the present technology can be applied to living body detection (biometric authentication), user interfaces, and prevention of forgery and monitoring of signatures and the like.
<Example of Application to Endoscopic Surgery System>
Furthermore, for example, the technology (the present technology) according to the present disclosure may be applied to an endoscopic surgical system.
The endoscope 11100 includes a lens barrel 11101, of which a region of a predetermined length from the front end is inserted into a body cavity of the patient 11132, and a camera head 11102 connected to a base end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101; however, the endoscope 11100 may be configured as a so-called flexible scope having a flexible lens barrel.
At the front end of the lens barrel 11101, an opening into which an objective lens is fitted is provided. A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the front end of the lens barrel by a light guide extending inside the lens barrel 11101 and is emitted toward an observation target in the body cavity of the patient 11132 via the objective lens. Note that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
An optical system and an image pickup element are provided inside the camera head 11102, and reflected light (observation light) from the observation target is converged on the image pickup element by the optical system. The observation light is photoelectrically converted by the image pickup element, and an electric signal corresponding to the observation light, that is, an image signal corresponding to an observation image is generated. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
The CCU 11201 is configured by a central processing unit (CPU), a graphics processing unit (GPU), and the like, and controls operation of the endoscope 11100 and a display device 11202 in an integrated manner. Furthermore, the CCU 11201 receives an image signal from the camera head 11102 and performs, on the image signal, various types of image processing for displaying an image based on the image signal, such as development processing (demosaic processing).
Under the control of the CCU 11201, the display device 11202 displays an image based on the image signal subjected to the image processing by the CCU 11201.
The light source device 11203 includes a light source such as a light emitting diode (LED), for example, and supplies irradiation light for photographing a surgery part or the like to the endoscope 11100.
The input device 11204 is an input interface to the endoscopic surgical system 11000. The user can input various kinds of information and an instruction to the endoscopic surgical system 11000 via the input device 11204. For example, the user inputs an instruction or the like to change a condition of imaging by the endoscope 11100 (type of irradiation light, magnification, focal length, and the like).
A treatment tool control device 11205 controls driving of the energy treatment tool 11112 for cauterizing tissue, making an incision, sealing a blood vessel, or the like. A pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate the body cavity for the purposes of securing a field of view of the endoscope 11100 and securing a working space for the surgeon. A recorder 11207 is a device capable of recording various kinds of information regarding surgery. A printer 11208 is a device capable of printing various kinds of information regarding surgery in various forms such as text, an image, and a graph.
Here, it is to be noted that the light source device 11203 for supplying irradiation light for photographing a surgical area to the endoscope 11100 can be constituted by, for example, a white light source constituted by an LED, a laser light source, or a combination thereof. In a case where a white light source is constituted by a combination of RGB laser light sources, since the output intensity and the output timing of each color (each wavelength) can be controlled with high precision, the white balance of the captured image can be adjusted by the light source device 11203. Furthermore, in this case, by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the image pickup element of the camera head 11102 in synchronization with the irradiation timing, images corresponding to R, G, and B can be captured in a time-division manner. According to this method, a color image can be obtained without providing a color filter in the image pickup element.
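Such time-division RGB capture can be sketched as follows. Here, `set_laser` and `capture_frame` are hypothetical stand-ins for the control paths of the light source device 11203 and the camera head 11102, which are not specified at this level in the present document.

```python
import numpy as np

def capture_color_image(set_laser, capture_frame):
    """Time-division RGB capture: irradiate with one laser at a time,
    capture a monochrome frame in synchronization, and stack the three
    frames into an H x W x 3 color image without any color filter."""
    planes = []
    for color in ("R", "G", "B"):
        set_laser(color)                # irradiate with a single wavelength
        planes.append(capture_frame())  # monochrome frame under that light
    return np.stack(planes, axis=-1)

# Toy stand-ins for the light source device and the camera head:
state = {"laser": None}
brightness = {"R": 200.0, "G": 120.0, "B": 60.0}
image = capture_color_image(
    lambda c: state.update(laser=c),
    lambda: np.full((2, 2), brightness[state["laser"]]),
)
print(image.shape)  # (2, 2, 3)
```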
Furthermore, driving of the light source device 11203 may be controlled so as to change the intensity of light to be output at predetermined time intervals. By controlling the driving of the image pickup element of the camera head 11102 in synchronization with the timing of the change of the light intensity to acquire images in a time-division manner and synthesizing those images, a so-called high dynamic range image free from blocked-up shadows and blown-out highlights can be generated.
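A minimal sketch of such synthesis is shown below, assuming two frames captured under a low and a high light intensity. The hat-shaped weighting and the 8-bit range are illustrative choices, not the method of the light source device itself.

```python
import numpy as np

def merge_hdr(frames, relative_exposures):
    """Synthesize frames captured under alternating light intensities.

    Each frame is normalized by its relative exposure; pixels near the
    saturation or black level get small weights, so blown-out highlights
    and blocked-up shadows are filled in from the other frames."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for frame, exposure in zip(frames, relative_exposures):
        f = frame.astype(np.float64)
        w = 1.0 - 2.0 * np.abs(f / 255.0 - 0.5)  # trust mid-tones most
        acc += w * f / exposure
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

low = np.array([[10.0, 250.0]])   # dark frame: shadows crushed
high = np.array([[40.0, 255.0]])  # bright frame: highlights clipped
print(merge_hdr([low, high], [1.0, 4.0]))  # ~[[10.0, 250.0]] in linear units
```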
Furthermore, the light source device 11203 may be configured to be capable of supplying light of a predetermined wavelength band corresponding to special light observation. In the special light observation, for example, by using the wavelength dependency of light absorption in a body tissue, so-called narrow band light observation (narrow band imaging) is performed, in which a predetermined tissue such as a blood vessel of a mucosal surface layer is photographed with high contrast by emitting light in a band narrower than that of the irradiation light (that is, white light) used in ordinary observation. Alternatively, in the special light observation, fluorescence observation for obtaining an image by fluorescence generated by irradiation with excitation light may be performed. In the fluorescence observation, it is possible to irradiate a body tissue with excitation light to observe fluorescence from the body tissue (autofluorescence observation), or to inject a reagent such as indocyanine green (ICG) into a body tissue and irradiate the body tissue with excitation light corresponding to a fluorescence wavelength of the reagent to obtain a fluorescent image, for example. The light source device 11203 can be configured to be capable of supplying narrow band light and/or excitation light corresponding to such special light observation.
The camera head 11102 includes a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 includes a communication unit 11411, an image processor 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
The lens unit 11401 is an optical system provided at a connection unit with the lens barrel 11101. The observation light taken in from the front end of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is formed by combining a plurality of lenses including a zoom lens and a focus lens.
The imaging unit 11402 may include one image pickup element (a so-called single-plate type) or a plurality of image pickup elements (a so-called multi-plate type). In a case where the imaging unit 11402 is configured as a multi-plate type, for example, image signals corresponding to R, G, and B may be generated by the respective image pickup elements and combined to obtain a color image. Alternatively, the imaging unit 11402 may include a pair of image pickup elements for acquiring right-eye and left-eye image signals for three-dimensional (3D) display. The 3D display enables the surgeon 11131 to more accurately recognize the depth of living tissue in the surgery area. Note that, in a case where the imaging unit 11402 is configured as a multi-plate type, a plurality of lens units 11401 can be provided corresponding to the respective image pickup elements.
In addition, the imaging unit 11402 is not necessarily provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101 immediately behind the objective lens.
The drive unit 11403 is constituted by an actuator and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, magnification and focus of the image captured by the imaging unit 11402 can be appropriately adjusted.
The communication unit 11404 includes a communication device for transmitting and receiving various types of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
In addition, the communication unit 11404 receives a control signal for controlling driving of the camera head 11102 from the CCU 11201, and supplies the control signal to the camera head control unit 11405. The control signal includes, for example, information associated with imaging conditions such as information designating a frame rate of the captured image, information designating an exposure value at the time of image capturing, and/or information specifying magnification and focus of the captured image.
Note that the imaging conditions such as the above frame rate, exposure value, magnification, focus, and the like may be appropriately designated by a user or automatically set by the control unit 11413 of the CCU 11201 on the basis of the acquired image signal. In the latter case, so-called auto exposure (AE) function, auto focus (AF) function, and auto white balance (AWB) function are installed in the endoscope 11100.
The camera head control unit 11405 controls driving of the camera head 11102 on the basis of a control signal from the CCU 11201 received via the communication unit 11404.
The communication unit 11411 includes a communication device for transmitting and receiving various types of information to and from the camera head 11102. The communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
Furthermore, the communication unit 11411 transmits a control signal for controlling the driving of the camera head 11102 to the camera head 11102. The image signal and the control signal can be transmitted by electric communication, optical communication, or the like.
The image processor 11412 performs various types of image processing on the image signal which is RAW data transmitted from the camera head 11102.
The control unit 11413 performs various types of control related to imaging of a surgery area or the like by the endoscope 11100 and displaying of captured images obtained by imaging the surgery area or the like. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.
In addition, the control unit 11413 causes the display device 11202 to display the captured image including the surgery area or the like on the basis of the image signal subjected to the image processing by the image processor 11412. In doing so, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shape of an edge, the color, and the like of an object included in the captured image, the control unit 11413 can recognize a surgical tool such as forceps, a specific body part, bleeding, mist in a case of using the energy treatment tool 11112, and the like. In a case where the display device 11202 is caused to display the captured image, the control unit 11413 may use the recognition result to superimpose and display various kinds of surgical operation support information on the image of the surgery area. Superimposing the surgical operation support information and presenting it to the surgeon 11131 may reduce a burden on the surgeon 11131 and help the surgeon 11131 proceed with the surgery reliably.
The transmission cable 11400 that connects the camera head 11102 and the CCU 11201 is an electric signal cable compatible with electric signal communication, an optical fiber compatible with optical communication, or a composite cable thereof.
Here, in the illustrated example, communication is performed by wire using the transmission cable 11400; however, communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
The above description has explained an example of an endoscopic surgical system to which the technology according to the present disclosure can be applied. Among the above described configurations, the technology according to the present disclosure can be applied to, for example, the imaging unit 11402 of the camera head 11102. More specifically, for example, the image pickup element 12 of
Note that, although an endoscopic surgical system has been described as an example here, the technology according to the present disclosure may be applied to, for example, a microscopic surgery system or the like.
<Example of Application to Moving Body>
Furthermore, for example, the technology according to the present disclosure may be realized as a device mounted in any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a vessel, or a robot.
A vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example illustrated in
The driving system control unit 12010 controls operation of devices related to a driving system of the vehicle according to various programs. For example, the driving system control unit 12010 functions as a control device of a driving force generation device, such as an engine or a drive motor, for generating a driving force of the vehicle, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting a steering angle of the vehicle, a braking device for generating a braking force of the vehicle, and the like.
The body system control unit 12020 controls operation of various devices mounted in the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device of a keyless entry system, a smart key system, a power window device, or various lamps such as a head lamp, a back lamp, a brake lamp, a blinker, and a fog lamp. In this case, radio waves transmitted from a mobile device that substitutes for a key, or signals of various switches, can be input to the body system control unit 12020. The body system control unit 12020 receives input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
The vehicle surroundings information detection unit 12030 detects information regarding the outside of the vehicle in which the vehicle control system 12000 is mounted. For example, the imaging unit 12031 is connected to the vehicle surroundings information detection unit 12030. The vehicle surroundings information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. On the basis of the received image, the vehicle surroundings information detection unit 12030 may perform processing of detecting an object such as a person, a vehicle, an obstacle, a sign, or a character on a road surface, or processing of detecting a distance thereto.
The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of the received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. Furthermore, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
The vehicle interior information detection unit 12040 detects information regarding the inside of the vehicle. For example, a driver state detection unit 12041 that detects a state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and on the basis of detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate a degree of fatigue or a degree of concentration of the driver, or may determine whether or not the driver is dozing off.
The microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device on the basis of the information regarding the inside and outside of the vehicle obtained by the vehicle surroundings information detection unit 12030 or the vehicle interior information detection unit 12040, and output a control command to the driving system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aiming to realize functions of an advanced driver assistance system (ADAS) including collision avoidance or impact mitigation of the vehicle, follow-up traveling based on an inter-vehicle distance, traveling while maintaining vehicle speed, vehicle collision warning, vehicle lane departure warning, and the like.
Furthermore, the microcomputer 12051 can perform cooperative control aiming at automatic driving in which the vehicle travels autonomously without depending on the driver's operation, or the like, by controlling the driving force generation device, the steering mechanism, the braking device, or the like on the basis of information regarding the surroundings of the vehicle obtained by the vehicle surroundings information detection unit 12030 or the vehicle interior information detection unit 12040.
Furthermore, the microcomputer 12051 can output a control command to the body system control unit 12020 on the basis of the information regarding the outside of the vehicle obtained by the vehicle surroundings information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aiming at anti-glare, such as switching from high beam to low beam, by controlling the head lamp according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle surroundings information detection unit 12030.
The sound/image output unit 12052 transmits an output signal of at least one of a sound or an image to an output device capable of visually or aurally notifying a passenger of the vehicle or the outside of the vehicle of information. In the example of
The imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as a front nose, side mirrors, a rear bumper, a back door, and an upper portion of a windshield inside the vehicle 12100, for example. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper portion of the windshield inside the vehicle mainly obtain images in front of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly obtain images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly obtains an image behind the vehicle 12100. The imaging unit 12105 provided on the upper portion of the windshield inside the vehicle is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic signal, a traffic sign, a lane, and the like.
Note that coverages 12111 to 12114 indicate imaging coverages of the imaging units 12101 to 12104, respectively.
At least one of the imaging units 12101 to 12104 may have a function for obtaining distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of image pickup elements or may be an image pickup element having a pixel for detecting a phase difference.
For example, on the basis of the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can obtain a distance to each three-dimensional object within the coverages 12111 to 12114 and a temporal change of the distance (a relative velocity with respect to the vehicle 12100), thereby extracting, as a preceding vehicle, the three-dimensional object that is closest on the traveling path of the vehicle 12100 and that travels at a predetermined speed (for example, 0 km/h or higher) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set in advance an inter-vehicle distance to be maintained from the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this manner, cooperative control aiming at automatic driving in which the vehicle travels autonomously without depending on the driver's operation, or the like, can be performed.
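As an illustration of this extraction logic only, the following sketch selects the closest object traveling in substantially the same direction on the own lane. The tuple layout, thresholds, and function name are assumptions introduced for explanation.

```python
def extract_preceding_vehicle(objects, min_speed_kmh=0.0,
                              lane_half_width_m=1.8, heading_tol_deg=20.0):
    """Pick, as the preceding vehicle, the closest three-dimensional
    object that lies on the own traveling path and moves in almost the
    same direction at or above a predetermined speed.

    Each object is (distance_m, lateral_offset_m, relative_heading_deg,
    speed_kmh), derived from the distances and their temporal change."""
    candidates = [
        o for o in objects
        if abs(o[1]) <= lane_half_width_m   # on the own lane
        and abs(o[2]) <= heading_tol_deg    # almost the same direction
        and o[3] >= min_speed_kmh           # predetermined speed or more
    ]
    return min(candidates, key=lambda o: o[0], default=None)

objs = [(45.0, 0.4, 3.0, 60.0), (28.0, 0.2, -2.0, 55.0), (15.0, 3.5, 1.0, 50.0)]
print(extract_preceding_vehicle(objs))  # (28.0, 0.2, -2.0, 55.0)
```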
For example, on the basis of the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can extract three-dimensional object data regarding three-dimensional objects while classifying them into motorcycles, ordinary vehicles, large-sized vehicles, pedestrians, and other three-dimensional objects such as utility poles, and use the extracted data for automatic avoidance of obstacles. For example, the microcomputer 12051 classifies obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are hardly visible to the driver. Then, the microcomputer 12051 determines a collision risk indicating a degree of risk of collision with each obstacle and, in a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, can perform driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the driving system control unit 12010.
At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the captured images of the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the captured images of the imaging units 12101 to 12104 as infrared cameras and a procedure of performing pattern matching processing on a series of feature points indicating an outline of an object to determine whether or not the object is a pedestrian. In a case where the microcomputer 12051 determines that a pedestrian exists in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the sound/image output unit 12052 controls the display unit 12062 to superimpose and display a rectangular outline for emphasis on the recognized pedestrian. Furthermore, the sound/image output unit 12052 may control the display unit 12062 to display an icon or the like indicating the pedestrian at a desired position.
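The superimposed rectangular outline can be sketched as follows. Drawing directly into a NumPy image is an illustrative simplification, and the detection boxes are assumed to come from the feature point extraction and pattern matching stages described above.

```python
import numpy as np

def emphasize_pedestrians(image, boxes):
    """Superimpose a bright rectangular outline on each recognized
    pedestrian; `boxes` is a list of (x, y, width, height)."""
    out = image.copy()
    for x, y, w, h in boxes:
        out[y, x:x + w] = 255           # top edge
        out[y + h - 1, x:x + w] = 255   # bottom edge
        out[y:y + h, x] = 255           # left edge
        out[y:y + h, x + w - 1] = 255   # right edge
    return out

frame = np.zeros((8, 8), dtype=np.uint8)
print(emphasize_pedestrians(frame, [(2, 1, 4, 5)]))
```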
In the above, an example of a vehicle control system to which the technology related to the present disclosure can be applied has been explained. The technology related to the present disclosure can be applied to, for example, the imaging unit 12031 in the above described configuration. More specifically, for example, the image pickup apparatus 10 of
Note that the embodiment according to the present technology is not limited to the above described embodiment and various changes can be made within the scope of the present technology.
<Example of Configuration Combinations>
Furthermore, for example, the present technology may have the following configurations.
(1)
An image pickup element including:
a pixel array in which at least a first light receiving unit configured to receive light in a predetermined color and a second light receiving unit configured to receive light having a wavelength band having a band width narrower than a band width of a wavelength band of the predetermined color are arranged,
in which, in a case where the pixel array is divided into four areas by a vertical line and a horizontal line, a position of the second light receiving unit in a first block, in which one or more of each of the first light receiving unit and the second light receiving unit are arranged, differs in each of the areas.
(2)
The image pickup element according to (1), in which the position of the second light receiving unit in the first block in each of the areas is symmetrical with respect to an intersection of the vertical line and the horizontal line.
(3)
The image pickup element according to (1), in which the position of the second light receiving unit in the first block in each of the areas is symmetrical with respect to the vertical line or the horizontal line.
(4)
The image pickup element according to any one of (1) to (3), in which a combination of colors received by a third light receiving unit which is closer to an intersection of the vertical line and the horizontal line among upper and lower light receiving units adjacent to the second light receiving unit, and received by a fourth light receiving unit which is closer to the intersection among right and left light receiving units adjacent to the second light receiving unit, correspond to each other between each of the second light receiving units.
(5)
The image pickup element according to any one of (1) to (4), in which the position of the second light receiving unit in the first block is set on the basis of sensitivity of the first light receiving unit in the first block.
(6)
The image pickup element according to any one of (1) to (5), in which an intersection of the vertical line and the horizontal line corresponds to a center of the pixel array.
(7)
The image pickup element according to any one of (1) to (6), in which an intersection of the vertical line and the horizontal line is on an optical axis of an optical system that leads light to the pixel array.
(8)
The image pickup element according to any one of (1) to (7), in which the first light receiving unit and the second light receiving unit are pixels, respectively.
(9)
The image pickup element according to any one of (1) to (7), in which the first light receiving unit and the second light receiving unit are light receiving areas of a pixel, respectively.
(10)
The image pickup element according to any one of (1) to (9), in which a second optical filter used in the second light receiving unit is an optical filter that has a transmission band having a band width narrower than a band width of a first optical filter used in the first light receiving unit.
(11)
The image pickup element according to (10), in which the second optical filter is a plasmon filter.
(12)
The image pickup element according to (10), in which the second optical filter is a Fabry-Perot interference filter.
(13)
The image pickup element according to any one of (1) to (12), in which
a second block including a fifth light receiving unit that receives red light, a third block including a sixth light receiving unit that receives green light, a fourth block including a seventh light receiving unit that receives green light, and a fifth block including an eighth light receiving unit that receives blue light are arranged in the pixel array,
the first light receiving unit is one of the fifth light receiving unit to the eighth light receiving unit, and
the first block is one of the second block to the fifth block.
(14)
The image pickup element according to (13), in which the colors of the second block to the fifth block in the pixel array are arranged according to an arrangement of colors of Bayer array.
(15)
The image pickup element according to (13) or (14), in which, in the second block to the fifth block, the light receiving units are arranged in two rows and two columns.
(16)
An electronic device including:
an image pickup element; and
a signal processor configured to process a signal output from the image pickup element,
in which the image pickup element includes a pixel array in which at least a first light receiving unit configured to receive light in a predetermined color and a second light receiving unit configured to receive light having a wavelength band having a band width narrower than a band width of a wavelength band of the predetermined color are arranged, and
in a case where the pixel array is divided into four areas by a vertical line and a horizontal line, a position of the second light receiving unit in a first block, in which one or more of each of the first light receiving unit and the second light receiving unit are arranged, differs in each of the areas.
Number | Date | Country | Kind |
2016-241255 | Dec 2016 | JP | national |
Filing Document | Filing Date | Country | Kind |
PCT/JP2017/044628 | 12/12/2017 | WO | 00 |