This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2013-62644, filed on Mar. 25, 2013, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to solid state imaging devices.
Recently, CMOS image sensors have been actively developed. In particular, as semiconductor devices are miniaturized (with reduced design rules), the pixel pitch is approaching the 1.0 μm level. At such a pixel size, the wave characteristics of incident light become significant, and the amount of incident light falls off more steeply than the pixel area shrinks. Therefore, a new means for improving the signal-to-noise ratio of solid state imaging devices is needed.
A CMOS image sensor of the aforementioned type generally includes color filters arranged in a Bayer array, in which a 2×2 pixel block includes one red (R) pixel, one blue (B) pixel, and two green (G) pixels arranged diagonally. The reason why each pixel block includes two G pixels is that the human eye is highly sensitive to green. The G pixels are used to obtain luminance (brightness) information.
Various techniques have been proposed to improve image quality through the arrangement of color filters. For example, a technique is known in which a green pixel is placed at the center of a pixel block and white pixels, used to obtain luminance signals, are placed above, below, and to the left and right of the green pixel to secure a sufficient amount of signal charge for the luminance signals. However, no effective method of processing the white pixel data has been disclosed for this case, and due to insufficient color information, false colors may be produced when a subject with a high spatial frequency is photographed.
A solid state imaging device according to an embodiment includes: a pixel array including a plurality of pixel blocks arranged in a matrix form on a first surface of a semiconductor substrate, each pixel block having a first pixel, a second pixel, and a third pixel, each of which has a photoelectric conversion element for converting light to a signal charge, the first pixel having a first filter with a higher transmittance for light in a first wavelength range within the visible light wavelength range than for light in the other wavelength ranges within the visible light wavelength range, the second pixel having a second filter with a higher transmittance for light in a second wavelength range, the color of which is complementary to the color of the light in the first wavelength range, than for light in the other wavelength ranges within the visible light wavelength range, and the third pixel having a third filter transmitting light in a wavelength range including the first wavelength range and the second wavelength range; a readout circuit reading the signal charges photoelectrically converted by the first to the third pixels of the pixel blocks; and a signal processing circuit processing signals based on the signal charges read by the readout circuit.
Embodiments will now be explained with reference to the accompanying drawings.
The pixel array 1 includes a plurality of pixel blocks arranged in a matrix form on a first surface of a semiconductor substrate. Each pixel block includes first to third pixels, each of which includes a photoelectric conversion element for converting light to a signal charge. The first pixel has a first filter with a higher transmittance for light in a first wavelength range within the visible light wavelength range than for light in the other wavelength ranges within the visible light wavelength range. The second pixel has a second filter with a higher transmittance for light in a second wavelength range, whose color is complementary to the color of the light in the first wavelength range, than for light in the other wavelength ranges within the visible light wavelength range. Alternatively, the second pixel may have a second filter with a higher transmittance for light in a second wavelength range that includes a wavelength range whose color is complementary to the color of the first wavelength range, than for light in the other wavelength ranges within the visible light wavelength range. Still alternatively, the second pixel may have a second filter with a higher transmittance for light in a second wavelength range that includes a peak wavelength whose color is complementary to the color of the peak wavelength of the first wavelength range, than for light in the other wavelength ranges within the visible light wavelength range. The third pixel has a third filter that transmits light in a wavelength range including the first wavelength range and the second wavelength range.
The readout circuit reads signal charges that are photoelectrically converted by the first to the third pixels of the pixel blocks.
The signal processing circuit processes signals based on signal charges read by the readout circuit.
The pixels in the pixel array 1 are divided into a plurality of pixel blocks, each consisting of several adjacent pixels. For example,
Each W pixel has a transparent filter that transmits incident light in the visible light wavelength range (for example, 400 nm to 650 nm) and guides the transmitted visible light to the corresponding photoelectric conversion element. The transparent filter is formed of a material transparent to visible light, so the W pixel has a high sensitivity over the entire visible light wavelength range.
The G pixel has a color filter with a high transmittance for light in the green visible light wavelength range. The Mg pixel has a color filter with a high transmittance for light in the red and the blue visible light wavelength ranges. The B pixel has a color filter with a high transmittance for light in the blue visible light wavelength range.
The W pixels are provided to obtain luminance information, since they transmit light over the entire visible light wavelength range. The G pixel can also be used to obtain luminance information. Accordingly, the W pixels and the G pixel are arranged on different diagonal lines in the pixel block 10a shown in
As shown in
In order to cope with this, an infrared cut-off filter that blocks light having a wavelength of 650 nm or more is provided, for example, between the solid-state imaging element and the subject, or between the solid-state imaging element and a lens, so that only visible light is incident on the solid-state imaging element.
The magenta color filter transmits light rays in the wavelength range including red, and light rays in the wavelength range including blue. The Mg pixel may have a photoelectric conversion element for photoelectrically converting light rays in the wavelength range including red, and a photoelectric conversion element for photoelectrically converting light in the wavelength range including blue.
The magenta color filter transmits at least red light and blue light. The Mg pixel may have a photoelectric conversion element for photoelectrically converting red light, and a photoelectric conversion element for photoelectrically converting blue light.
The two photoelectric conversion elements of the Mg pixel can be stacked in a direction perpendicular to the first surface of the semiconductor substrate.
As shown in
Furthermore, as shown in
On the other hand, as shown in
The depletion layers in the G pixel and the W pixel preferably have a large volume in order to efficiently absorb light collected by the microlenses 30. Each Mg pixel includes two types of depletion layers for photoelectric conversion, the Mg depletion layer 22 and the R depletion layer 23, as shown in
The accumulated charge moves to the diffusion layer 25 when the transfer gate 263 is turned on. The potential of the diffusion layer 25 is reset in advance so as to be lower than the potential of the reading depletion layer 21. Accordingly, the signal charge is completely transferred to the diffusion layer 25 via the transfer gate 263. Thereafter, the potential of the diffusion layer 25 is read, whereby the signal charge detected by the Mg depletion layer 22 can be read as a voltage. The charge-to-voltage conversion gain at this time is determined by the sum of the capacitance components connected to the diffusion layer 25.
The R depletion layer 23 detects red light rays in the wavelength range of about 600-700 nm that have not been absorbed by the Mg depletion layer 22. The electrons accumulated in the R depletion layer 23 are transferred to the diffusion layer 25 when the transfer gate 262 is turned on.
The diffusion layer 25 used to read the R depletion layer 23 in the first embodiment is the same as that used to read the Mg depletion layer 22. Accordingly, the R color signals and the Mg color signals are alternately transferred, read, and reset at different times. The diffusion layer is not shared in the W pixel and the G pixel.
The signal value W of the W pixel cannot be used directly as RGB values, which are the commonly used video signal values. Therefore, color separation should be performed by converting the white data value W of the W pixel into three-color RGB data.
Signal values of Mg and R obtained from the Mg depletion layer 22 and the R depletion layer 23 are output from the Mg pixel 102. The B signal is first calculated from these signal values:
B = Mg − a × R   (1)
where a denotes the ratio of the sensitivity of the Mg depletion layer 22 to red light relative to its sensitivity to magenta light. The value of a is greater than 0 and less than 1 (for example, 0.24), and can be uniquely determined after manufacturing.
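By way of illustration only, expression (1) can be applied per pixel as in the following Python sketch; the coefficient value and the signal values here are hypothetical examples, not values from the embodiment:

```python
# Illustrative sketch of expression (1); all values are hypothetical.
a = 0.24            # example sensitivity ratio of the Mg depletion layer (0 < a < 1)
Mg_signal = 150.0   # hypothetical signal value from the Mg depletion layer 22
R_signal = 90.0     # hypothetical signal value from the R depletion layer 23

B_signal = Mg_signal - a * R_signal   # expression (1): B = Mg - a * R
print(B_signal)                       # 128.4 for these example values
```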
When RGB signals are generated from complementary color filter signals, the signal-to-noise ratio generally degrades in the above subtraction process. Assuming that expression (1) holds for the average values of the B, Mg, and R signals over a plurality of measurements, the dispersions ΔB, ΔMg, and ΔR of the B, Mg, and R signals can be expressed as follows:
ΔB² = ΔMg² + (a × ΔR)²   (2)
The signal-to-noise ratio degrades because the average of the signal value B is obtained by a subtraction, whereas its dispersion ΔB is given by the root of a sum of squares.
However, for ordinary imaging elements, a luminance signal Y is often calculated from the RGB signals using the following conversion expression, and the signal-to-noise ratio is discussed in terms of Y:
Y = 0.299R + 0.587G + 0.114B   (3)
As can be understood from expression (3), the contribution of the B signal to the Y signal is 11.4%, which is less than one fifth of the contribution of the G signal. Therefore, even when the B signal is generated by subtracting the R signal from the Mg signal according to expression (1), the degradation in the signal-to-noise ratio of the luminance signal Y is small.
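The smallness of this degradation can be checked numerically. The following sketch combines expressions (2) and (3), assuming, for illustration only, unit and mutually independent noise in each raw channel:

```python
import math

# Assume unit, independent noise in each raw channel and a = 0.24.
a = 0.24
dMg = dR = dG = 1.0
dB = math.sqrt(dMg**2 + (a * dR)**2)   # expression (2): about 1.028

# Propagate through Y = 0.299R + 0.587G + 0.114B (expression (3)),
# again assuming the channel noises are independent:
dY = math.sqrt((0.299 * dR)**2 + (0.587 * dG)**2 + (0.114 * dB)**2)
print(dB, dY)   # dB is only ~3% above dMg, and its 0.114 weight keeps dY small
```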
Subsequently, color separation is performed by converting the W signal into RGB signals. As shown in
Rw ← W × K1   (4)
Gw ← W × K2   (5)
Bw ← W × K3   (6)
Here, the B signal has already been calculated by expression (1), and W represents the signal value of the W pixel. K1, K2, and K3 each represent a color ratio obtained from the RGB pixels around the target W pixel, and can be expressed by the following expressions (7) to (9):
K1 = Raverage / (Raverage + Gaverage + Baverage)   (7)
K2 = Gaverage / (Raverage + Gaverage + Baverage)   (8)
K3 = Baverage / (Raverage + Gaverage + Baverage)   (9)
where Raverage, Gaverage, and Baverage represent the averages of the R, G, and B color data values obtained from a plurality of pixels around the target W pixel. For example, Raverage represents the average color data value of the two R pixels present in a pixel block, Gaverage the average color data value of the four G pixels, and Baverage the average color data value of the two B pixels. Thus, the color proportions K1, K2, K3 of the RGB pixels in the pixel block 10b shown in
In the color separation, the range of calculation may extend over a plurality of rows. Therefore, for example, the color data values of two rows are temporarily stored in a line memory and read out when the final row of the pixel block is read, so that expressions (4) to (6) can be computed.
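As a minimal sketch of this buffering scheme (the data layout and function names are assumptions made here for illustration, not taken from the embodiment):

```python
from collections import deque

# Hold the color data of two rows until the final row of a 3-row pixel
# block arrives, then process the whole block with expressions (4) to (6).
line_memory: deque = deque(maxlen=2)

def on_row_read(row_values: list, process_block) -> None:
    if len(line_memory) == 2:                 # third (final) row of the block
        process_block(list(line_memory) + [row_values])
        line_memory.clear()
    else:                                     # first or second row: buffer it
        line_memory.append(row_values)
```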
If, for example, the color data values in a pixel block are W = 200 and (Raverage, Gaverage, Baverage) = (80, 100, 70), then (Rw, Gw, Bw) = (64, 80, 56) can be obtained from expressions (4) to (9).
Thus, when the white data value W is converted into the color data values Rw, Gw, Bw, their ratio to the average color data values Raverage, Gaverage, Baverage is (64 + 80 + 56)/(80 + 100 + 70) = 4/5. Therefore, the final color data values Rw, Gw, Bw can be obtained by multiplying the right-hand sides of expressions (4) to (6) by the reciprocal of this value, 5/4, as a constant.
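The following sketch reproduces this worked example using expressions (4) to (9) and the 5/4 normalization constant; the variable names are illustrative:

```python
W = 200.0                                  # white data value of the target W pixel
R_avg, G_avg, B_avg = 80.0, 100.0, 70.0    # averages from the surrounding pixels

total = R_avg + G_avg + B_avg                              # 250
K1, K2, K3 = R_avg / total, G_avg / total, B_avg / total   # expressions (7)-(9)

Rw, Gw, Bw = W * K1, W * K2, W * K3        # expressions (4)-(6): (64, 80, 56)

scale = 5.0 / 4.0                          # reciprocal of (64+80+56)/(80+100+70)
Rw, Gw, Bw = Rw * scale, Gw * scale, Bw * scale
print(Rw, Gw, Bw)                          # final values: 80.0 100.0 70.0
```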
The color conversion data values Rw, Gw, Bw are obtained by multiplication and division of the white data value W, which inherently has a high signal-to-noise ratio, and color data values whose signal-to-noise ratios have been improved by the averaging. As a result, the signal-to-noise ratios of the generated color data values are higher than those of the respective R, G, B data values.
The numbers of rows and columns of the pixel block used for the color separation are not limited to 3×3. The capacity of the line memory used for the color separation depends on the number of rows of the pixel block: as the number of rows increases, so does the required capacity. Therefore, it is not preferable to increase the number of rows of the pixel block excessively.
After the color separation is completed, an average value R′ of all the R signals and the Rw signals in the pixel block is calculated as shown in
Thus, for every pixel, the final color data values R′, G′, B′ are determined by averaging the three-color RGB data values and the color separation data values Rw, Gw, Bw over the 3×3 pixels centered on the target pixel.
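A minimal sketch of this averaging step is given below; the data layout (one full-resolution plane per color, already holding both the original and the color-separated values) is an assumption made for illustration:

```python
import numpy as np

def final_color(plane: np.ndarray, row: int, col: int) -> float:
    """Average one color plane over the 3x3 block centered on the target
    pixel, yielding the final color data value R', G', or B'."""
    window = plane[row - 1:row + 2, col - 1:col + 2]   # 3x3 neighborhood
    return float(window.mean())
```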
By repeating the above process, the three color data values R′, G′, B′ are generated for all pixel positions. The color data value G′ is obtained by color interpolation based on 3/2 times as many pixels as the G data value in the Bayer array, and the color data values R′ and B′ are obtained based on 3 times as many pixels as the R and B data values in the Bayer array. As a result, the signal-to-noise ratio is improved to about 2 times that of conventional devices.
Furthermore, as can be understood from
The color separation and the color data value determination described above are performed by the signal processing circuit 6 shown in
The aforementioned color separation is made possible when the solid state imaging device includes the pixel block 10a having W pixels, G pixels, and Mg pixels as shown in
As described above, the first embodiment can provide a solid state imaging device that has a high signal-to-noise ratio for a low luminance subject, is superior in color reproducibility, and does not cause degraded resolution or false colors for a subject with a high spatial frequency.
Although the pixel blocks of the first embodiment include W pixels, G pixels, and Mg pixels, the colors of the pixels are not limited to these.
For example, a pixel block may have W pixels, first pixels, and second pixels. Each second pixel may include a first photoelectric conversion element for photoelectrically converting light rays in a wavelength range included in a wavelength range transmitted by a filter of the second pixel, and a second photoelectric conversion element for photoelectrically converting light rays in a further wavelength range included in the wavelength range transmitted by the filter of the second pixel.
If the wavelength range of the light rays transmitted by the filter of the second pixel includes the wavelength of a first primary color and the wavelength of a second primary color, the first photoelectric conversion element may photoelectrically convert light rays in a wavelength range including the wavelength of the first primary color. The second photoelectric conversion element may photoelectrically convert light rays in a wavelength range including the wavelength of the second primary color.
A solid state imaging device according to the second embodiment will be described with reference to
The differences between the layout of the second embodiment and the layout of the first embodiment shown in
With such a structure, the sum of the number of the R diffusion layers 252 and the number of the B diffusion layers 253 equals the number of Mg pixels. Accordingly, the two depletion layers in each Mg pixel do not increase the effective number of pixels, which facilitates the signal processing after the pixel signals are read.
As in the first embodiment, the solid state imaging device provided according to the second embodiment has a high signal-to-noise ratio with respect to a low luminance subject, is superior in color reproducibility, and does not degrade in resolution or generate false colors for a subject having a high spatial frequency.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.