The present disclosure relates to an electronic device.
Recent electronic devices such as smartphones, mobile phones, and personal computers (PCs) are equipped with cameras, so that video calls and moving image capturing can be performed easily. Meanwhile, an imaging unit that captures an image may include, in addition to normal pixels that output imaging information, special purpose pixels such as polarization pixels and pixels having complementary color filters. The polarization pixels are used, for example, to correct flare, and the pixels having complementary color filters are used for color correction.
However, when a large number of special purpose pixels are arranged, the number of normal pixels decreases, and the resolution of the image captured by the imaging unit may decrease.
In an aspect of the present disclosure, an electronic device capable of suppressing a decrease in resolution of a captured image while increasing types of information obtained by an imaging unit is provided.
In order to solve the above problem, the present disclosure provides an electronic device including an imaging unit that includes a plurality of pixel groups each including two adjacent pixels, in which
at least one first pixel group of the plurality of pixel groups includes
a first pixel that photoelectrically converts a part of incident light condensed through a first lens, and
a second pixel different from the first pixel that photoelectrically converts a part of the incident light condensed through the first lens, and
at least one second pixel group different from the first pixel group among the plurality of pixel groups includes
a third pixel that photoelectrically converts incident light condensed through a second lens, and
a fourth pixel that is different from the third pixel and photoelectrically converts incident light condensed through a third lens different from the second lens.
The imaging unit may include a plurality of pixel regions in which the pixel groups are arranged in a two-by-two matrix, and
the plurality of pixel regions may include
a first pixel region that is the pixel region in which four of the first pixel groups are arranged, and
a second pixel region that is the pixel region in which three of the first pixel groups and one of the second pixel groups are arranged.
In the first pixel region, one of a red filter, a green filter, and a blue filter may be arranged corresponding to the first pixel group that receives red light, green light, and blue light.
In the second pixel region, at least two of the red filter, the green filter, and the blue filter may be arranged corresponding to the first pixel group that receives at least two colors among red light, green light, and blue light, and
at least one of the two pixels of the second pixel group may include one of a cyan filter, a magenta filter, and a yellow filter.
At least one of the two pixels of the second pixel group may be a pixel having a blue wavelength region.
A signal processing unit that performs color correction of an output signal output by at least one of the pixels of the first pixel group on the basis of an output signal of at least one of the two pixels of the second pixel group may be further included.
At least one pixel of the second pixel group may have a polarization element.
The third pixel and the fourth pixel may include the polarization element, and the polarization element included in the third pixel and the polarization element included in the fourth pixel may have different polarization orientations.
A correction unit that corrects an output signal of a pixel of the first pixel group by using polarization information based on an output signal of the pixel having the polarization element may be further included.
The incident light may be incident on the first pixel and the second pixel via a display unit, and
the correction unit may remove a polarization component captured when at least one of reflected light or diffracted light generated when passing through the display unit is incident on the first pixel and the second pixel.
The correction unit may perform, on digital pixel data obtained by photoelectric conversion by the first pixel and the second pixel and digitization, subtraction processing of a correction amount based on polarization information data obtained by digitizing a polarization component photoelectrically converted by the pixel having the polarization element, to correct the digital pixel data.
A drive unit that reads charges a plurality of times from each pixel of the plurality of pixel groups in one imaging frame, and
an analog-to-digital conversion unit that performs analog-to-digital conversion in parallel on each of a plurality of pixel signals based on a plurality of times of charge reading
may be further included.
The drive unit may read a common black level corresponding to the third pixel and the fourth pixel.
The plurality of pixels including the two adjacent pixels may have a square shape.
Phase difference detection may be possible on the basis of output signals of two pixels of the first pixel group.
The signal processing unit may perform white balance processing after performing color correction on the output signal.
An interpolation unit that interpolates the output signal of the pixel having the polarization element from an output of a peripheral pixel of the pixel may be further included.
The first to third lenses may be on-chip lenses that condense incident light onto a photoelectric conversion unit of a corresponding pixel.
A display unit may be further included, and the incident light may be incident on the plurality of pixel groups via the display unit.
Hereinafter, an embodiment of an electronic device will be described with reference to the drawings. Although main components of the electronic device will be mainly described below, the electronic device may have components and functions that are not illustrated or described. The following description does not exclude components and functions that are not illustrated or described.
As illustrated in the drawings, the display unit 2 includes a display panel 4, a circularly polarizing plate 5, a touch panel 6, and a cover glass 7.
The circularly polarizing plate 5 is provided to reduce glare and enhance visibility of the display screen 1a even in a bright environment. A touch sensor is incorporated in the touch panel 6. There are various types of touch sensors such as a capacitive type and a resistive film type, but any type may be used. Furthermore, the touch panel 6 and the display panel 4 may be integrated. The cover glass 7 is provided to protect the display panel 4 and the like.
The camera module 3 includes an imaging unit 8 and an optical system 9. The optical system 9 is arranged on a light incident surface side of the imaging unit 8, that is, on a side close to the display unit 2, and condenses light passing through the display unit 2 on the imaging unit 8. The optical system 9 usually includes a plurality of lenses.
The imaging unit 8 includes a plurality of photoelectric conversion units. The photoelectric conversion unit photoelectrically converts light incident through the display unit 2. The photoelectric conversion unit may be a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor. Furthermore, the photoelectric conversion unit may be a photodiode or an organic photoelectric conversion film.
Here, an example of a pixel array and an on-chip lens array in the imaging unit 8 will be described with reference to the drawings.
As illustrated in the drawings, the imaging unit 8 includes a plurality of pixel groups, each including two adjacent pixels, arranged in a matrix form.
Reference numeral R denotes a pixel that receives red light, reference numeral G denotes a pixel that receives green light, reference numeral B denotes a pixel that receives blue light, reference numeral C denotes a pixel that receives cyan light, reference numeral Y denotes a pixel that receives yellow light, and reference numeral M denotes a pixel that receives magenta light. The same applies to other drawings.
The imaging unit 8 includes first pixel regions 8a and second pixel regions 8b, 8c, and 8d.
In the first pixel region 8a, pixels are arranged in a form in which each pixel of a normal Bayer array is replaced with two pixels 80 and 82 arranged side by side in a row. That is, each of the R, G, and B positions of the Bayer array is formed by a pair of pixels 80 and 82.
On the other hand, in the second pixel regions 8b, 8c, and 8d, pixels are arranged in a form in which each of R and G in the Bayer array is replaced with two pixels 80 and 82, and pixels are arranged in a form in which B in the Bayer array is replaced with two pixels 80a and 82a. For example, the combination of the two pixels 80a and 82a is a combination of B and C in the second pixel region 8b, a combination of B and Y in the second pixel region 8c, and a combination of B and M in the second pixel region 8d.
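As an illustrative aid, the following Python sketch builds the color layout of one first pixel region 8a and one second pixel region 8b as described above. The function name and the textual color codes are this sketch's own; it simply encodes the replacement rules stated in the preceding paragraphs.

```python
# Sketch of the pixel layouts described above: each Bayer cell becomes a
# pixel group of two horizontally adjacent pixels, and in a second pixel
# region the B cell's pair is replaced with, e.g., B and C.

def pixel_region(bayer=("R", "G", "G", "B"), special_pair=None):
    """Return a 2x4 grid of color codes for one pixel region (2x2 groups)."""
    rows = []
    for r in range(2):               # two rows of pixel groups
        row = []
        for c in range(2):           # two columns of pixel groups
            cell = bayer[2 * r + c]
            pair = (cell, cell)      # first pixel group: same color twice
            if cell == "B" and special_pair is not None:
                pair = special_pair  # second pixel group, e.g. ("B", "C")
            row.extend(pair)
        rows.append(row)
    return rows

region_8a = pixel_region()                         # four first pixel groups
region_8b = pixel_region(special_pair=("B", "C"))  # three first groups + one second group
for row in region_8a + region_8b:
    print(" ".join(row))
```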
Further, as illustrated in the drawings, one on-chip lens 22 is arranged for the two pixels 80 and 82 of each first pixel group.
On the other hand, as illustrated in the drawings, an on-chip lens 22a is arranged for each of the pixels 80a and 82a of the second pixel group.
In the first pixel region 8a, pixels in a B array acquire only color information of blue, whereas in the second pixel region 8b, the pixels in the B array can further acquire color information of cyan in addition to the color information of blue. Similarly, the pixels in the B array in the second pixel region 8c can further acquire color information of yellow in addition to the color information of blue. Similarly, the pixels in the B array in the second pixel region 8d can further acquire color information of magenta in addition to the color information of blue.
The color information of cyan, yellow, and magenta acquired by the pixels 80a and 82a in the second pixel regions 8b, 8c, and 8d can be used for color correction. In other words, the pixels 80a and 82a in the second pixel regions 8b, 8c, and 8d are special purpose pixels arranged for color correction. Here, the special purpose pixel according to the present embodiment means a pixel used for correction processing such as color correction and polarization correction. These special purpose pixels can also be used for applications other than normal imaging.
The on-chip lenses 22a of the pixels 80a and 82a in the second pixel regions 8b, 8c, and 8d are elliptical, and the amount of light received by each of these pixels is half the combined amount received by a pair of pixels 80 and 82 of the same color. The light reception distribution and the amount of light, that is, the sensitivity and the like, can be corrected by signal processing.
On the other hand, the pixels 80a and 82a can obtain color information of two different systems, and are effectively used for color correction. In this manner, in the second pixel regions 8b, 8c, and 8d, the types of information to be obtained can be increased without reducing the resolution. Note that details of color correction processing will be described later.
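The sensitivity correction mentioned above can be as simple as a calibrated gain. The following is a minimal sketch; the gain of 2.0 (the idealized factor for a half-area pixel) and the 10-bit white level are assumptions, and in practice the gain would come from a calibration of the light reception distribution.

```python
import numpy as np

def correct_special_pixel_sensitivity(signal, gain=2.0, white_level=1023):
    """Scale the output of a half-area special purpose pixel onto the
    scale of a full pixel pair, clipping at the assumed white level."""
    return np.clip(signal * gain, 0, white_level)

# Example: a cyan pixel reading of 180 LSB scaled to the common range.
print(correct_special_pixel_sensitivity(np.array([180.0])))  # -> [360.]
```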
In the present embodiment, the pixels of the B array in the Bayer array are formed by the two pixels 80a and 82a, but the present invention is not limited thereto. For example, as illustrated in the drawings, the two pixels 80a and 82a may be arranged at other positions of the array, as in the second pixel regions 8e, 8f, and 8g described later.
Note that, in the present embodiment, the pixel array is formed by the Bayer array, but the present invention is not limited thereto. For example, an interline array, a checkered array, a stripe array, or other arrays may be used. That is, the ratio of the number of pixels 80a and 82a to the number of pixels 80 and 82, the type of received light color, and the arrangement location are arbitrary.
On a second surface 11b side of the substrate 11, a light shielding layer 15 is arranged in the vicinity of a boundary of pixels via a flattening layer 14, and an underlying insulating layer 16 is arranged around the light shielding layer 15. A flattening layer 20 is arranged on the underlying insulating layer 16. A color filter layer 21 is arranged on the flattening layer 20. The color filter layer 21 includes filter layers of three colors of RGB. Note that, in the present embodiment, the color filter layers 21 of the pixels 80 and 82 include filter layers of the three colors of RGB, but are not limited thereto. For example, filter layers of cyan, magenta, and yellow, which are their complementary colors, may be included. Alternatively, a filter layer that transmits light other than visible light, such as infrared light, may be included, a filter layer having multispectral characteristics may be included, or a colorless filter layer such as white may be included. By transmitting light other than visible light, such as infrared light, sensing information such as depth information can be detected. The on-chip lens 22 is arranged on the color filter layer 21.
As can be seen from these, in the pixels 80 and 82 and the pixels 80a and 82a, the shapes of the on-chip lenses 22 and 22a and the combination of the color filter layers 21 are different, but the components of the flattening layers 20 and below have equivalent structures. Therefore, reading of data from the pixels 80 and 82 and reading of data from the pixels 80a and 82a can be performed equally. Thus, as will be described in detail later, the types of information to be obtained can be increased by the output signals of the pixels 80a and 82a, and a decrease in the frame rate can be prevented.
Here, a system configuration example of the electronic device 1 and a data reading method will be described with reference to the drawings.
In the imaging unit 8, pixel drive lines are wired along a row direction for each pixel row and, for example, two vertical signal lines 310 and 320 are wired along a column direction for each pixel column with respect to the pixel array in the matrix form. The pixel drive line transmits a drive signal for driving when a signal is read from the pixels 80, 82, 80a, and 82a. One end of the pixel drive line is connected to an output terminal corresponding to each row of the vertical drive unit 130.
The vertical drive unit 130 includes a shift register, an address decoder, and the like, and drives all the pixels 80, 82, 80a, and 82a of the imaging unit 8 at the same time, in units of rows, or the like. That is, the vertical drive unit 130 forms a drive unit that drives each of the pixels 80, 82, 80a, and 82a of the imaging unit 8 together with a system control unit 190 that controls the vertical drive unit 130. The vertical drive unit 130 generally has a configuration including two scanning systems of a read scanning system and a sweep scanning system. The read scanning system selectively scans each of the pixels 80, 82, 80a, and 82a sequentially in units of rows. Signals read from each of the pixels 80, 82, 80a, and 82a are analog signals. The sweep scanning system performs sweep scanning on a read row, on which read scanning is performed by the read scanning system, prior to the read scanning by a time corresponding to a shutter speed.
By the sweep scanning by the sweep scanning system, unnecessary charges are swept out from each of the photoelectric conversion units of the pixels 80, 82, 80a, and 82a of the read row, and thereby the photoelectric conversion units are reset. Then, by sweeping out (resetting) unnecessary charges by the sweep scanning system, what is called an electronic shutter operation is performed. Here, the electronic shutter operation refers to an operation of discharging photocharges of the photoelectric conversion unit and newly starting exposure (starting accumulation of photocharges).
The signal read by the read operation by the read scanning system corresponds to the amount of light received after the immediately preceding read operation or electronic shutter operation. Then, a period from read timing by the immediately preceding read operation or sweep timing by the electronic shutter operation to the read timing by the current read operation is an exposure period of photocharges in the unit pixel.
Pixel signals output from each of the pixels 80, 82, 80a, and 82a of a pixel row selected by the vertical drive unit 130 are input to the AD conversion units 140 and 150 through the two vertical signal lines 310 and 320. Here, the vertical signal line 310 of one system includes a signal line group (first signal line group) that transmits the pixel signal output from each of the pixels 80, 82, 80a, and 82a of the selected row in a first direction (one side in a pixel column direction/upward direction of the drawing) for each pixel column. The vertical signal line 320 of the other system includes a signal line group (second signal line group) that transmits the pixel signal output from each of the pixels 80, 82, 80a, and 82a of the selected row in a second direction (the other side in the pixel column direction/downward direction in the drawing) opposite to the first direction.
Each of the AD conversion units 140 and 150 includes a set (AD converter group) of AD converters 141 and 151 provided for each pixel column, is provided across the imaging unit 8 in the pixel column direction, and performs AD conversion on the pixel signals transmitted by the vertical signal lines 310 and 320 of the two systems. That is, the AD conversion unit 140 includes a set of AD converters 141 that perform AD conversion on the pixel signals transmitted and input in the first direction by the vertical signal line 310 for each pixel column. The AD conversion unit 150 includes a set of AD converters 151 that perform AD conversion on the pixel signals transmitted and input in the second direction by the vertical signal line 320 for each pixel column.
That is, the AD converter 141 of one system is connected to one end of the vertical signal line 310. Then, the pixel signal output from each of the pixels 80, 82, 80a, and 82a is transmitted in the first direction (upward direction of the drawing) by the vertical signal line 310 and input to the AD converter 141. Furthermore, the AD converter 151 of the other system is connected to one end of the vertical signal line 320. Then, the pixel signal output from each of the pixels 80, 82, 80a, and 82a is transmitted in the second direction (downward of the drawing) by the vertical signal line 320 and input to the AD converter 151.
The pixel data (digital data) after the AD conversion in the AD conversion units 140 and 150 is supplied to the memory unit 180 via the column processing units 160 and 170. The memory unit 180 temporarily stores the pixel data that has passed through the column processing unit 160 and the pixel data that has passed through the column processing unit 170. Furthermore, the memory unit 180 also performs processing of adding the pixel data that has passed through the column processing unit 160 and the pixel data that has passed through the column processing unit 170.
Furthermore, in a case where the black level signal of each of the pixels 80, 82, 80a, and 82a is acquired, the black level serving as the reference point may be read in common for each pair of adjacent pixels (80, 82) and (80a, 82a). Making the black level reading common increases the reading speed, that is, the frame rate. That is, after the common black level serving as the reference point is read, the normal signal level of each pixel can be read individually.
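A minimal sketch of this shared black level read follows. The two callbacks stand in for the actual row drive and column readout, and the fixed levels in the example are placeholders.

```python
def read_pixel_pair(read_black, read_signal):
    """Correlated read of one pixel pair with a shared black level.

    `read_black` returns one black (reset) level for the pair, and
    `read_signal(i)` returns the signal level of pixel i of the pair.
    Reading the black level once instead of twice saves one read per
    pair, which is where the frame rate gain comes from.
    """
    black = read_black()                 # one common reference read
    return read_signal(0) - black, read_signal(1) - black

# Toy example: common black level 64 LSB, signal levels 300 and 420 LSB.
print(read_pixel_pair(lambda: 64, lambda i: (300, 420)[i]))  # -> (236, 356)
```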
The system control unit 190 includes a timing generator that generates various timing signals and the like, and performs drive control of the vertical drive unit 130, the AD conversion units 140 and 150, the column processing units 160 and 170, and the like on the basis of various timings generated by the timing generator.
The pixel data read from the memory unit 180 is subjected to predetermined signal processing in the signal processing unit 510 and then output to the display panel 4 via the interface 520. In the signal processing unit 510, for example, processing of obtaining a sum or an average of pixel data in one imaging frame is performed. Details of the signal processing unit 510 will be described later.
In the electronic device 1 according to the present embodiment, under control of the system control unit 190, the vertical drive unit 130 performs, for example, charge reading drive twice from the photoelectric conversion unit 800a in one imaging frame. By performing the reading twice at a reading speed faster than that of single charge reading, storing the results in the memory unit 180, and performing addition processing, a charge amount corresponding to the number of times of reading can be obtained from the photoelectric conversion unit 800a.
The electronic device 1 according to the present embodiment employs a configuration (two-parallel configuration) in which two systems of AD conversion units 140 and 150 are provided in parallel for two pixel signals based on two times of charge reading. Since the two AD conversion units are provided in parallel for the two pixel signals read out in time series from each of the respective pixels 80, 82, 80a, and 82a, the two pixel signals read out in time series can be AD-converted in parallel by the two AD conversion units 140 and 150. In other words, since the AD conversion units 140 and 150 are provided in two systems in parallel, the second charge reading and the AD conversion of the pixel signal based on the second charge reading can be performed in parallel during the AD conversion of the image signal based on the first charge reading. Thus, the image data can be read from the photoelectric conversion unit 800a at a higher speed.
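The following sketch models this two-parallel configuration: the two charge reads happen in sequence, while their AD conversions overlap in the two converter systems. The thread pool merely stands in for the two hardware AD conversion units, and the identity converter in the example is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def capture_frame(read_charge, ad_convert, n_reads=2):
    """Read charge `n_reads` times and AD-convert the reads in parallel,
    then add the digitized values, as done in the memory unit 180."""
    with ThreadPoolExecutor(max_workers=2) as pool:  # two AD "systems"
        # Charge reads are issued one after another; each conversion is
        # handed to a free converter so it overlaps the next read.
        futures = [pool.submit(ad_convert, read_charge()) for _ in range(n_reads)]
        return sum(f.result() for f in futures)

# Toy example: each read yields 100 units and AD conversion is identity.
print(capture_frame(lambda: 100, lambda v: v))  # -> 200
```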
Here, an example of color correction processing of the signal processing unit 510 will be described in detail with reference to the drawings.
First, correction processing of generating corrected output signals BS3 and BS4 of the B (blue) pixel using an output signal CS1 of the C (cyan) pixel will be described. As described above, an output signal RS1 of the R (red) pixel, an output signal GS1 of the G (green) pixel, and an output signal BS1 of the B (blue) pixel are stored in the first region (180a) of the memory unit 180. On the other hand, the output signal CS1 of the C (cyan) pixel, an output signal YS1 of the Y (yellow) pixel, and an output signal MS1 of the M (magenta) pixel are stored in the second region (180b) of the memory unit 180.
As illustrated in the drawings, the spectral characteristics of the C (cyan) pixel correspond to a combination of the spectral characteristics of the B (blue) pixel and the G (green) pixel.
Accordingly, in the second pixel region 8b, the signal processing unit 510 calculates a blue output signal BS2 based on the output signal CS1 of the C (cyan) pixel by, for example, Expression (1).
BS2=k1×CS1−k2×GS1 (1)
Here, k1 and k2 are coefficients for adjusting the signal intensity.
Then, the signal processing unit 510 calculates a corrected output signal BS3 of the B (blue) pixel by, for example, Expression (2).
Here, k3 is a coefficient for adjusting the signal intensity.
Similarly, in the second pixel region 8e, the signal processing unit 510 calculates a corrected output signal BS4 of the B (blue) pixel by, for example, Expression (3).
BS4=k1×CS1−k2×GS1+k4×BS1 (3)
Here, k4 is a coefficient for adjusting the signal intensity. In this manner, the signal processing unit 510 can obtain the output signals BS3 and BS4 of the B (blue) pixel corrected using the output signal CS1 of the C (cyan) pixel and the output signal GS1 of the G (green) pixel.
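As an illustration, a direct transcription of Expressions (1) and (3) follows (Expression (2) is not reproduced in the text above, so it is omitted here). All coefficient values in the example calls are placeholders; in practice k1 to k4 are tuned to the sensor's spectral characteristics.

```python
def corrected_blue(cs1, gs1, bs1=None, k1=1.0, k2=1.0, k4=1.0):
    """Blue output corrected using a cyan pixel.

    BS2 = k1*CS1 - k2*GS1 (Expression (1)) extracts the blue component
    contained in cyan. When a direct blue output BS1 is also available,
    it is blended in: BS4 = k1*CS1 - k2*GS1 + k4*BS1 (Expression (3)).
    """
    bs2 = k1 * cs1 - k2 * gs1          # Expression (1)
    if bs1 is None:
        return bs2
    return bs2 + k4 * bs1              # Expression (3)

print(corrected_blue(cs1=500.0, gs1=280.0))             # blue from cyan alone
print(corrected_blue(cs1=500.0, gs1=280.0, bs1=210.0))  # blended with BS1
```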
Next, correction processing of generating corrected output signals RS3 and RS4 of the R (red) pixel using the output signal YS1 of the Y (yellow) pixel will be described.
As illustrated in the drawings, the spectral characteristics of the Y (yellow) pixel correspond to a combination of the spectral characteristics of the R (red) pixel and the G (green) pixel.
Accordingly, in the second pixel region 8c, the signal processing unit 510 calculates a red output signal RS2 based on the output signal YS1 of the Y (yellow) pixel by, for example, Expression (4).
RS2=k5×YS1−k6×GS1 (4)
Here, k5 and k6 are coefficients for adjusting the signal intensity.
Then, the signal processing unit 510 calculates a corrected output signal RS3 of the R (red) pixel by, for example, Expression (5).
Here, k7 is a coefficient for adjusting the signal intensity.
Similarly, in the second pixel region 8f, the signal processing unit 510 calculates a corrected output signal RS4 of the R (red) pixel by, for example, Expression (6).
RS4=k5×YS1−k6×GS1+k8×RS1 (6)
Here, k8 is a coefficient for adjusting the signal intensity. In this manner, the signal processing unit 510 can obtain the output signals RS3 and RS4 of the R (red) pixel corrected using the output signal YS1 of the Y (yellow) pixel and the output signal GS1 of the G (green) pixel.
Next, correction processing of generating corrected output signals BS6 and BS7 of the B (blue) pixel using the output signal MS1 of the M (magenta) pixel will be described.
As illustrated in the drawings, the spectral characteristics of the M (magenta) pixel correspond to a combination of the spectral characteristics of the R (red) pixel and the B (blue) pixel.
Accordingly, in the second pixel region 8d, the signal processing unit 510 calculates a blue output signal BS5 based on the output signal MS1 of the M (magenta) pixel by, for example, Expression (7).
BS5=k9×MS1−k10×RS1 (7)
Here, k9 and k10 are coefficients for adjusting the signal intensity.
Then, the signal processing unit 510 calculates a corrected output signal BS6 of the B (blue) pixel by, for example, Expression (8).
Here, k11 is a coefficient for adjusting the signal intensity.
Similarly, in the second pixel region 8g, the signal processing unit 510 calculates a corrected output signal BS7 of the B (blue) pixel by, for example, Expression (9).
BS7=k9×MS1−k10×RS1+k12×BS1 (9)
Here, k12 is a coefficient for adjusting the signal intensity. In this manner, the signal processing unit 510 can obtain the output signals BS6 and BS7 of the B (blue) pixel corrected using the output signal MS1 of the M (magenta) pixel and the output signal RS1 of the R (red) pixel.
Next, correction processing of generating corrected output signals RS6 and RS7 of the R (red) pixel using the output signal MS1 of the M (magenta) pixel will be described.
In the second pixel region 8d, the signal processing unit 510 also calculates a red output signal RS5 based on the output signal MS1 of the M (magenta) pixel by, for example, Expression (10).
RS5=k13×MS1−k14×BS1 (10)
Here, k13 and k14 are coefficients for adjusting the signal intensity.
Then, the signal processing unit 510 calculates a corrected output signal RS6 of the R (red) pixel by, for example, Expression (11).
Here, k16 is a coefficient for adjusting the signal intensity.
Similarly, in the second pixel region 8g, the signal processing unit 510 calculates a corrected output signal RS7 of the R (red) pixel by, for example, Expression (12).
RS7=k13×MS1−k14×BS1+k17×RS1 (12)
Here, k17 is a coefficient for adjusting the signal intensity. In this manner, the signal processing unit 510 can obtain the output signals RS6 and RS7 of the R (red) pixel corrected using the output signal MS1 of the M (magenta) pixel and the output signal BS1 of the B (blue) pixel.
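All of the above corrections share one form: a primary color is recovered by subtracting the other primary contained in a complementary color signal, optionally blended with a directly measured primary. A table-driven sketch of that common form follows; the coefficient and signal values are placeholders.

```python
# Target/complementary pairs and the primary to subtract, per the text:
# cyan = blue + green, yellow = red + green, magenta = red + blue.
CORRECTIONS = {
    ("B", "C"): ("CS1", "GS1"),   # Expressions (1)/(3)
    ("R", "Y"): ("YS1", "GS1"),   # Expressions (4)/(6)
    ("B", "M"): ("MS1", "RS1"),   # Expressions (7)/(9)
    ("R", "M"): ("MS1", "BS1"),   # Expressions (10)/(12)
}

def complementary_correction(signals, target, comp, ka=1.0, kb=1.0, kc=1.0):
    """Correct primary `target` (R or B) using complementary pixel `comp`."""
    comp_key, sub_key = CORRECTIONS[(target, comp)]
    corrected = ka * signals[comp_key] - kb * signals[sub_key]
    direct = signals.get(target + "S1")       # e.g. BS1, if that pixel exists
    if direct is not None:
        corrected += kc * direct              # blended form, e.g. Expression (6)
    return corrected

signals = {"RS1": 400.0, "GS1": 280.0, "BS1": 210.0,
           "CS1": 500.0, "YS1": 650.0, "MS1": 590.0}
print(complementary_correction(signals, "R", "Y"))   # red corrected via yellow
print(complementary_correction(signals, "B", "M"))   # blue corrected via magenta
```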
Furthermore, the signal processing unit 510 performs various types of processing such as white balance adjustment, gamma correction, and contour emphasizing, and outputs a color image. In this manner, since the white balance adjustment is performed after the color correction is performed on the basis of the output signal of each of the pixels 80a and 82a, a captured image with a more natural color tone can be obtained.
As described above, according to the present embodiment, the imaging unit 8 includes a plurality of pixel groups each including two adjacent pixels, in which the first pixel groups 80 and 82, each covered by one on-chip lens 22, and the second pixel groups 80a and 82a, in which each pixel has its own on-chip lens 22a, are arranged. Thus, the first pixel groups 80 and 82 can detect a phase difference and function as normal imaging pixels, and the pixels of the second pixel groups 80a and 82a can function as special purpose pixels each capable of acquiring independent imaging information. Furthermore, the area of each pixel capable of functioning as a special purpose pixel is half that of a pixel group 80 and 82 capable of functioning as normal imaging pixels, so arranging the special purpose pixels does not hinder the arrangement of the first pixel groups 80 and 82 used for normal imaging.
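The phase difference detection mentioned above compares the signals of the two pixels under one on-chip lens. A toy one-dimensional sketch follows; the sum-of-absolute-differences search and the signal values are illustrative assumptions, and windowing and sub-pixel refinement are omitted.

```python
import numpy as np

def phase_difference(left, right, max_shift=4):
    """Estimate the shift between the left/right pixel signals of the
    first pixel groups by minimizing the sum of absolute differences."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean(np.abs(np.roll(left, s) - right))
        if err < best_err:
            best, best_err = s, err
    return best

left = np.array([0.0, 1, 4, 9, 4, 1, 0, 0, 0, 0])
right = np.roll(left, 2)              # defocus modeled as a 2-pixel shift
print(phase_difference(left, right))  # -> 2
```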
In the second pixel regions 8b to 8k, which are pixel regions in which three first pixel groups 80 and 82 and one second pixel group 80a and 82a are arranged, at least two of a red filter, a green filter, and a blue filter are arranged corresponding to the first pixel groups 80 and 82 that receive at least two colors of red light, green light, and blue light, and one of a cyan filter, a magenta filter, and a yellow filter is arranged in at least one of the two pixels 80a and 82a of the second pixel group. Thus, the output signal corresponding to any one of the R (red) pixel, the G (green) pixel, and the B (blue) pixel can be subjected to color correction using the output signal corresponding to any one of the C (cyan) pixel, the M (magenta) pixel, and the Y (yellow) pixel. In particular, by performing color correction using the output signal corresponding to the C (cyan) pixel or the M (magenta) pixel, it is possible to increase blue information without reducing resolution. In this manner, it is possible to suppress a decrease in resolution of the captured image while increasing the types of information obtained by the imaging unit 8.
An electronic device 1 according to a second embodiment is different from the electronic device 1 according to the first embodiment in that the two pixels 80b and 82b in the second pixel region are formed by pixels having a polarization element. Differences from the electronic device 1 according to the first embodiment will be described below.
Here, an example of a pixel array and an on-chip lens array in the imaging unit 8 according to the second embodiment will be described with reference to the drawings.
As illustrated in the drawings, in the second embodiment, second pixel regions 8h to 8k are arranged in which the two pixels 80b and 82b having polarization elements are arranged in place of the pixels 80a and 82a.
As illustrated in the drawings, the pixels 80b and 82b each include a polarization element 9b on the light incident side of the photoelectric conversion unit.
In this manner, each polarization element 9b has a structure in which a plurality of line portions 9d extending in one direction is arranged to be spaced apart in a direction intersecting the one direction. There is a plurality of types of polarization elements 9b having different extending directions of the line portion 9d.
The line portion 9d has a stacked structure in which a light reflecting layer 9f, an insulating layer 9g, and a light absorbing layer 9h are stacked. The light reflecting layer 9f includes, for example, a metal material such as aluminum. The insulating layer 9g includes, for example, SiO2 or the like. The light absorbing layer 9h includes, for example, a metal material such as tungsten.
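Polarization pixels with differently oriented line portions sample the incident light behind differently oriented wire-grid polarizers, from which linear polarization information can be estimated. The sketch below assumes the common 0/45/90/135 degree orientation set, which the text does not specify, and computes the Stokes parameters together with the degree and angle of linear polarization.

```python
import numpy as np

def linear_polarization_info(i0, i45, i90, i135):
    """Estimate linear polarization from four polarizer orientations
    (an assumed 0/45/90/135 degree set)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity (Stokes s0)
    s1 = i0 - i90                             # Stokes s1
    s2 = i45 - i135                           # Stokes s2
    dolp = np.hypot(s1, s2) / max(s0, 1e-9)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)           # angle of linear polarization [rad]
    return s0, dolp, aolp

# Example: light partially polarized along the 0 degree direction.
print(linear_polarization_info(120.0, 80.0, 40.0, 80.0))
```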
Next, a characteristic operation of the electronic device 1 according to the present embodiment will be described.
Part of the external light incident on the display unit 2 may be repeatedly reflected in the display unit 2 and appear as flare, and external light may also be diffracted by a wiring pattern or the like in the display unit 2, so that diffracted light is incident on the imaging unit 8. In this manner, at least one of the flare component or the diffracted light component may be captured in the captured image.
The optical system 9 includes one or more lenses 9a and an infrared ray (IR) cut-off filter 9b. The IR cut-off filter 9b may be omitted. As described above, the imaging unit 8 includes the plurality of non-polarization pixels 80 and 82 and the plurality of polarization pixels 80b and 82b.
The output values of the plurality of polarization pixels 80b and 82b and the output values of the plurality of non-polarization pixels 80 and 82 are converted by the analog-to-digital conversion units 140 and 150 (not illustrated). Polarization information data obtained by digitizing the output values of the plurality of polarization pixels 80b and 82b is stored in the second region 180b of the memory unit 180, and digital pixel data obtained by digitizing the output values of the plurality of non-polarization pixels 80 and 82 is stored in the first region 180a.
The clamp unit 32 performs processing of defining a black level, and subtracts black level data from each piece of the digital pixel data stored in the first region 180a of the memory unit 180. The color output unit 33 outputs the digital pixel data after the black level adjustment, and the flare extraction unit 35 extracts a correction amount of at least one of the flare component or the diffracted light component on the basis of the polarization information data.
The flare correction signal generation unit 36 corrects the digital pixel data by performing subtraction processing of the correction amount extracted by the flare extraction unit 35 on the digital pixel data output from the color output unit 33. Output data of the flare correction signal generation unit 36 is digital pixel data from which at least one of the flare component or the diffracted light component has been removed. In this manner, the flare correction signal generation unit 36 functions as a correction unit that corrects a captured image photoelectrically converted by the plurality of non-polarization pixels 80 and 82 on the basis of the polarization information.
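A minimal sketch of this subtraction-based correction follows. The scalar coefficient converting polarization information into a correction amount, and the assumption that the polarization data has already been brought onto the pixel grid, are this sketch's own simplifications of the processing of the flare extraction unit 35.

```python
import numpy as np

def remove_flare(digital_pixels, polarization_data, k_flare=1.0):
    """Subtract a flare/diffracted-light correction amount, derived from
    the digitized polarization pixel outputs, from the digital pixel data."""
    corrected = digital_pixels - k_flare * polarization_data
    return np.clip(corrected, 0, None)   # keep pixel values non-negative

frame = np.array([[800.0, 820.0], [790.0, 805.0]])   # digital pixel data
flare = np.array([[60.0, 58.0], [61.0, 59.0]])       # correction amounts
print(remove_flare(frame, flare))
```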
The digital pixel data at pixel positions of the polarization pixels 80b and 82b has a low signal level because of passing through the polarization element 9b. Therefore, the defect correction unit 37 regards the polarization pixels 80b and 82b as defects and performs predetermined defect correction processing. The defect correction processing in this case may be processing of performing interpolation using digital pixel data of surrounding pixel positions.
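The defect correction can be as simple as replacing each polarization pixel position with an average of its neighbors. The following is a minimal sketch; a real pipeline would interpolate per color plane and treat edges and diagonal neighbors more carefully.

```python
import numpy as np

def interpolate_polarization_positions(image, mask):
    """Replace pixels flagged in `mask` with the mean of their four
    horizontal/vertical neighbors (edge values are replicated)."""
    out = image.astype(float).copy()
    padded = np.pad(out, 1, mode="edge")
    neighbor_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
    out[mask] = neighbor_sum[mask] / 4.0
    return out

img = np.array([[10.0, 10.0, 10.0],
                [10.0,  2.0, 10.0],     # dark polarization pixel position
                [10.0, 10.0, 10.0]])
print(interpolate_polarization_positions(img, img < 5.0))  # center -> 10.0
```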
The linear matrix unit 38 performs matrix operation on color information such as RGB to perform more correct color reproduction. The linear matrix unit 38 is also referred to as a color matrix portion.
The gamma correction unit 39 performs gamma correction so as to enable display with excellent visibility in accordance with display characteristics of the display unit 2. For example, the gamma correction unit 39 converts 10 bits into 8 bits while changing the gradient.
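The 10-bit to 8-bit conversion with a changed gradient is typically realized as a lookup table. A sketch follows; the 2.2 display gamma is an assumed value, since the text states only that the gradient is changed during the bit-depth conversion.

```python
import numpy as np

def gamma_lut_10_to_8(gamma=1 / 2.2):
    """Build a 1024-entry lookup table mapping 10-bit input to 8-bit
    output with a gamma curve (steeper gradient in the dark values)."""
    x = np.arange(1024) / 1023.0
    return np.round(255.0 * np.power(x, gamma)).astype(np.uint8)

lut = gamma_lut_10_to_8()
print(lut[0], lut[512], lut[1023])   # -> 0, 186, 255 for gamma 1/2.2
```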
The luminance chroma signal generation unit 40 generates a luminance chroma signal to be displayed on the display unit 2 on the basis of output data of the gamma correction unit 39.
The focus adjustment unit 41 performs autofocus processing on the basis of the luminance chroma signal after the defect correction processing is performed. The exposure adjustment unit 42 performs exposure adjustment on the basis of the luminance chroma signal after the defect correction processing is performed. When the exposure adjustment is performed, an upper limit clip may be provided so that the pixel values of the non-polarization pixels 80 and 82 are not saturated. Furthermore, in a case where the pixel value of a non-polarization pixel 80 or 82 is saturated even after the exposure adjustment, the saturated pixel value may be estimated on the basis of the pixel values of the polarization pixels 80b and 82b around that non-polarization pixel.
The noise reduction unit 43 performs processing of reducing noise included in the luminance chroma signal. The edge emphasizing unit 44 performs processing of emphasizing an edge of the subject image on the basis of the luminance chroma signal. The noise reduction processing by the noise reduction unit 43 and the edge emphasizing processing by the edge emphasizing unit 44 may be performed only in a case where a predetermined condition is satisfied. The predetermined condition is, for example, a case where the correction amount of the flare component or the diffracted light component extracted by the flare extraction unit 35 exceeds a predetermined threshold. The more the flare component or the diffracted light component included in the captured image, the more noise or blurring of the edge occurs in the image when the flare component and the diffracted light component are removed. Therefore, by performing the noise reduction processing and the edge emphasizing processing only in a case where the correction amount exceeds the threshold, the frequency of performing the noise reduction processing and the edge emphasizing processing can be reduced.
The signal processing of at least a part of the defect correction unit 37, the linear matrix unit 38, the gamma correction unit 39, the luminance chroma signal generation unit 40, the focus adjustment unit 41, the exposure adjustment unit 42, the noise reduction unit 43, and the edge emphasizing unit 44 described above may be performed within the signal processing unit 510.
Next, the flare extraction unit 35 determines whether or not flare or diffraction has occurred on the basis of the polarization information data stored in the memory unit 180 (step S4). Here, for example, if the polarization information data exceeds a predetermined threshold, it is determined that flare or diffraction has occurred. If it is determined that flare or diffraction has occurred, the flare extraction unit 35 extracts the correction amount of the flare component or the diffracted light component on the basis of the polarization information data (step S5). The flare correction signal generation unit 36 subtracts the correction amount from the digital pixel data stored in the memory unit 180 to generate digital pixel data from which the flare component and the diffracted light component have been removed (step S6).
Next, various types of signal processing are performed on the digital pixel data corrected in step S6 or the digital pixel data determined to have no flare or diffraction in step S4 (step S7). More specifically, in step S7, the signal processing of the defect correction unit 37, the linear matrix unit 38, the gamma correction unit 39, the luminance chroma signal generation unit 40, the focus adjustment unit 41, the exposure adjustment unit 42, the noise reduction unit 43, and the edge emphasizing unit 44 described above is performed.
The digital pixel data subjected to the signal processing in step S7 may be output from the output unit 45 and stored in a memory that is not illustrated, or may be displayed on the display unit 2 as a live image (step S8).
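Steps S4 to S8 can be summarized as the small driver below. The threshold test standing in for the flare determination, the scalar extraction coefficient, and the clipping stub standing in for the full signal processing chain of step S7 are all placeholders.

```python
def process_frame(pixel_data, polarization_data, threshold=50.0, k_flare=1.0):
    """Minimal driver for steps S4 to S8 described above."""
    if max(polarization_data) > threshold:                      # step S4: flare/diffraction?
        correction = [k_flare * p for p in polarization_data]   # step S5: extract amount
        pixel_data = [max(v - c, 0.0)
                      for v, c in zip(pixel_data, correction)]  # step S6: subtract
    processed = [min(v, 1023.0) for v in pixel_data]            # step S7: processing (stub)
    return processed                                            # step S8: output/display

print(process_frame([800.0, 900.0], [60.0, 55.0]))  # -> [740.0, 845.0]
```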
As described above, in the second pixel regions 8h to 8k, which are pixel regions in which the three first pixel groups and the one second pixel group described above are arranged, the red filter, the green filter, and the blue filter are arranged corresponding to the first pixel groups that receive red light, green light, and blue light, and the pixels 80b and 82b having the polarization elements are arranged as at least one of the two pixels of the second pixel group. The outputs at the positions of the pixels 80b and 82b can be restored to those of normal pixels by interpolation using digital pixel data of surrounding pixel positions. This makes it possible to increase the polarization information without reducing the resolution.
In this manner, in the second embodiment, the camera module 3 is arranged on the opposite side of the display surface of the display unit 2, and the polarization information of the light passing through the display unit 2 is acquired by the plurality of polarization pixels 80b and 82b. A part of the light passing through the display unit 2 is repeatedly reflected in the display unit 2 and then incident on the plurality of non-polarization pixels 80 and 82 in the camera module 3. According to the present embodiment, by acquiring the above-described polarization information, it is possible to generate a captured image in a state where the flare component and the diffracted light component included in light incident on the plurality of non-polarization pixels 80 and 82 after repeated reflection in the display unit 2 are simply and reliably removed.
Various specific application examples can be considered for the electronic device 1 having the configuration described in the first and second embodiments. For example, the electronic device 1 can be applied to a capsule endoscope 50 as described below.
Furthermore, in the housing 51, a central processing unit (CPU) 56 and a coil (magnetic force/current conversion coil) 57 are provided. The CPU 56 controls image capturing by the camera 52 and data accumulation operation in the memory 53, and controls data transmission from the memory 53 to a data reception device (not illustrated) outside the housing 51 by the wireless transmitter 55. The coil 57 supplies power to the camera 52, the memory 53, the wireless transmitter 55, the antenna 54, and a light source 52b described later.
Moreover, the housing 51 is provided with a magnetic reed switch 58 for detecting that the capsule endoscope 50 is set in the data reception device. The CPU 56 supplies power from the coil 57 to the wireless transmitter 55 at the time when the reed switch 58 detects the setting in the data reception device and data transmission becomes possible.
The camera 52 includes, for example, an imaging element 52a including an objective optical system 9 for capturing an image in a body cavity, and a plurality of light sources 52b for illuminating the body cavity. Specifically, the camera 52 includes, as the imaging element 52a, for example, a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor, and includes, as the light source 52b, for example, a light emitting diode (LED).
The display unit 2 in the electronic device 1 according to the first and second embodiments is a concept that also includes a light emitter such as the light source 52b described above.
In this manner, in the third embodiment, the electronic device 1 according to the first and second embodiments can be used for various applications, and the utility value can be increased.
Note that the present technology can have configurations as follows.
(1) An electronic device including an imaging unit that includes a plurality of pixel groups each including two adjacent pixels, in which
at least one first pixel group of the plurality of pixel groups includes
a first pixel that photoelectrically converts a part of incident light condensed through a first lens, and
a second pixel different from the first pixel that photoelectrically converts a part of the incident light condensed through the first lens, and
at least one second pixel group different from the first pixel group among the plurality of pixel groups includes
a third pixel that photoelectrically converts incident light condensed through a second lens, and
a fourth pixel that is different from the third pixel and photoelectrically converts incident light condensed through a third lens different from the second lens.
(2) The electronic device according to (1), in which
the imaging unit includes a plurality of pixel regions in which the pixel groups are arranged in a two-by-two matrix, and
the plurality of pixel regions includes
a first pixel region that is the pixel region in which four of the first pixel groups are arranged, and
a second pixel region that is the pixel region in which three of the first pixel groups and one of the second pixel groups are arranged.
(3) The electronic device according to (2), in which in the first pixel region, one of a red filter, a green filter, and a blue filter is arranged corresponding to the first pixel group that receives red light, green light, and blue light.
(4) The electronic device according to (3), in which in the second pixel region, at least two of the red filter, the green filter, and the blue filter are arranged corresponding to the first pixel group that receives at least two colors among red light, green light, and blue light, and at least one of the two pixels of the second pixel group includes one of a cyan filter, a magenta filter, and a yellow filter.
(5) The electronic device according to (4), in which at least one of the two pixels of the second pixel group is a pixel having a blue wavelength region.
(6) The electronic device according to (4), further including a signal processing unit that performs color correction of an output signal output by at least one of the pixels of the first pixel group on the basis of an output signal of at least one of the two pixels of the second pixel group.
(7) The electronic device according to (2), in which at least one pixel of the second pixel group has a polarization element.
(8) The electronic device according to (7), in which the third pixel and the fourth pixel include the polarization element, and the polarization element included in the third pixel and the polarization element included in the fourth pixel have different polarization orientations.
(9) The electronic device according to (7), further including a correction unit that corrects an output signal of a pixel of the first pixel group by using polarization information based on an output signal of the pixel having the polarization element.
(10) The electronic device according to (9), in which the incident light is incident on the first pixel and the second pixel via a display unit, and the correction unit removes a polarization component captured when at least one of reflected light or diffracted light generated when passing through the display unit is incident on the first pixel and the second pixel.
(11) The electronic device according to (10), in which the correction unit performs, on digital pixel data obtained by photoelectric conversion by the first pixel and the second pixel and digitization, subtraction processing of a correction amount based on polarization information data obtained by digitizing a polarization component photoelectrically converted by the pixel having the polarization element, to correct the digital pixel data.
(12) The electronic device according to any one of (1) to (11), further including:
a drive unit that reads charges a plurality of times from each pixel of the plurality of pixel groups in one imaging frame; and
an analog-to-digital conversion unit that performs analog-to-digital conversion in parallel on each of a plurality of pixel signals based on a plurality of times of charge reading.
(13) The electronic device according to (12), in which the drive unit reads a common black level corresponding to the third pixel and the fourth pixel.
(14) The electronic device according to any one of (1) to (13), in which the plurality of pixels including the two adjacent pixels has a square shape.
(15) The electronic device according to any one of (1) to (14), in which phase difference detection is possible on the basis of output signals of two pixels of the first pixel group.
(16) The electronic device according to (6), in which the signal processing unit performs white balance processing after performing color correction on the output signal.
(17) The electronic device according to (7), further including an interpolation unit that interpolates the output signal of the pixel having the polarization element from an output of a peripheral pixel of the pixel.
(18) The electronic device according to any one of (1) to (17), in which the first to third lenses are on-chip lenses that condense incident light onto a photoelectric conversion unit of a corresponding pixel.
(19) The electronic device according to any one of (1) to (18), further including a display unit, in which
the incident light is incident on the plurality of pixel groups via the display unit.
Aspects of the present disclosure are not limited to the above-described individual embodiments, but include various modifications that can be conceived by those skilled in the art, and the effects of the present disclosure are not limited to the above-described contents. That is, various additions, modifications, and partial deletions can be made without departing from the conceptual idea and spirit of the present disclosure derived from the contents defined in the claims and equivalents thereof.