Embodiments of the present invention will be explained below with reference to the accompanying drawings. The present invention is not limited to the embodiments.
A solid-state imaging device 100 shown in
The imaging region 101 includes a plurality of pixels, each converting incident light into an electric signal by photoelectric conversion. The pixels are arranged in a two-dimensional matrix. The load transistor 102, which is connected between a substrate potential (not shown) and the imaging region 101, functions as a current source that supplies a constant current to the pixels in the imaging region 101. The V selection circuit 104 selects specific pixels in the imaging region 101 and transmits the electric signals generated in the selected pixels to the CDS circuit 103. The CDS circuit 103 is a circuit that eliminates amplifier noise and reset noise from the electric signals obtained from the imaging region 101. The H selection circuit 105 outputs the signals from the CDS circuit 103 in time series. The AGC circuit 106 controls the amplitude of each signal from the CDS circuit 103. The ADC 107 converts an analog signal from the CDS circuit 103 into a digital signal. The digital amplifier 108 amplifies the digital signal and outputs the amplified digital signal. The TG 109 is a circuit that controls the timings of operations performed by the load transistor 102, the CDS circuit 103, the V selection circuit 104, the H selection circuit 105, the AGC 106, the ADC 107, and the digital amplifier 108. Furthermore, the DSP 110 performs digital processing such as interpolation, decision, color processing, and color-signal extraction. Namely, the DSP 110 generates RGB signals or YUV signals from the digital signals. The DSP 110 can be provided separately from the solid-state imaging device 100 or included in the chip of the solid-state imaging device 100. Moreover, the TG 109, the AGC 106, the ADC 107, the digital amplifier 108, and the like can be formed on a chip different from that of the solid-state imaging device 100. Further, a signal processing circuit, which is not shown in
With reference to
The four-pixel block shown in
In the imaging region 101 where the CFA is thus arranged, the pixels are arranged in order of W, R, W, R . . . in the nth row, in order of B, W, B, W . . . in the (n+1)th row, and in order of W, R, W, R . . . in the (n+2)th row. The solid-state imaging device 100 outputs electric signals for the respective pixels according to the array.
If the signals obtained from the pixels W are extracted from this array, the signals are arranged in order of SW, *, SW, * . . . in the nth row, in order of *, SW, *, SW . . . in the (n+1)th row, in order of SW, *, SW, * . . . in the (n+2)th row, and in order of *, SW, *, SW . . . in the (n+3)th row. If the signals obtained from the pixels R are extracted from this array, the signals are arranged in order of *, SR, *, SR . . . in the nth row, in order of *, *, *, * . . . in the (n+1)th row, in order of *, SR, *, SR . . . in the (n+2)th row, and in order of *, *, *, * . . . in the (n+3)th row. If the signals obtained from the pixels B are extracted from this array, the signals are arranged in order of *, *, *, * . . . in the nth row, in order of SB, *, SB, * . . . in the (n+1)th row, in order of *, *, *, * . . . in the (n+2)th row, and in order of SB, *, SB, * . . . in the (n+3)th row. Symbols SW, SR, and SB denote the electric signals (voltages or currents) obtained from the first pixel W, the second pixel R, and the third pixel B, respectively. Namely, SW denotes the intensity of the visible light, and SR and SB denote the intensities of the lights of the respective colors. The symbol “*” indicates that the intensity of the visible light or the intensity of the light of the corresponding color is unknown at that position. For example, if attention is paid to the signals obtained from the first pixels W, the intensities of the visible light are unknown at the positions at which the second pixels R and the third pixels B are arranged. Therefore, “*” is shown at the positions at which the second pixels R and the third pixels B are arranged. If attention is paid to the signals obtained from the second pixels R, the intensity of the red light is unknown at the positions at which the first pixels W and the third pixels B are arranged. Therefore, “*” is shown at the positions at which the first pixels W and the third pixels B are arranged. If attention is paid to the signals obtained from the third pixels B, the intensity of the blue light is unknown at the positions at which the first pixels W and the second pixels R are arranged. Therefore, “*” is shown at the positions at which the first pixels W and the second pixels R are arranged.
The intensity of the visible light or the intensity of the light of each color denoted by “*” is calculated by interpolation by the DSP 110 serving as an arithmetic unit. As an interpolation method, the DSP 110 may calculate an arithmetic mean among the signals from the pixels of the same color in the CFA adjacent to the pixel of interest vertically, horizontally, and/or diagonally. For example, the intensity of the visible light at the second pixel R is calculated by averaging the signals SW from the four first pixels W adjacent to the second pixel R vertically and horizontally. The average of the four signals SW can be regarded as the intensity of the visible light at the second pixel R. The intensity of the visible light at the third pixel B can be similarly calculated. The intensity of the blue light at the second pixel R is calculated by averaging the signals SB from the four third pixels B adjacent to the second pixel R diagonally. Likewise, the intensity of the red light at the third pixel B is calculated by averaging the signals SR from the four second pixels R adjacent to the third pixel B diagonally. Furthermore, the intensity of the red light at the first pixel W is calculated by averaging the signals SR from the two second pixels R adjacent to the first pixel W vertically or horizontally. The intensity of the blue light at the first pixel W is calculated by averaging the signals SB from the two third pixels B adjacent to the first pixel W vertically or horizontally. By performing such interpolation, three types of color information, i.e., SW, SR, and SB, can be obtained at all pixels in the imaging region. This interpolation is performed using the signals from the nine-pixel block in three rows by three columns. However, the number of pixels used for the interpolation can be arbitrarily set according to the interpolation capability. Moreover, although the interpolation described here uses the arithmetic mean, a method other than the arithmetic mean can be used.
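The interpolation described above can be illustrated by the following minimal Python sketch. It is only a rough illustration under assumed conventions that are not part of the embodiment: a two-dimensional array raw of raw pixel outputs, with W at positions where the sum of the row and column indices is even, R on even rows at odd columns, and B on odd rows at even columns; the function names are likewise assumptions.

import numpy as np

def color_of(r, c):
    # CFA of the first embodiment: W, R, W, R . . . on even rows and
    # B, W, B, W . . . on odd rows
    if (r + c) % 2 == 0:
        return "W"
    return "R" if r % 2 == 0 else "B"

def interpolate_wrb(raw):
    h, w = raw.shape
    SW = np.zeros((h, w)); SR = np.zeros((h, w)); SB = np.zeros((h, w))
    cross = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # vertical/horizontal neighbours
    diag = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # diagonal neighbours

    def mean_of(r, c, offsets, wanted):
        # arithmetic mean of the neighbours of (r, c) that carry the wanted color
        vals = [raw[r + dr, c + dc] for dr, dc in offsets
                if 0 <= r + dr < h and 0 <= c + dc < w
                and color_of(r + dr, c + dc) == wanted]
        return sum(vals) / len(vals) if vals else 0.0

    for r in range(h):
        for c in range(w):
            color = color_of(r, c)
            if color == "W":
                SW[r, c] = raw[r, c]
                SR[r, c] = mean_of(r, c, cross, "R")  # two adjacent R pixels
                SB[r, c] = mean_of(r, c, cross, "B")  # two adjacent B pixels
            elif color == "R":
                SR[r, c] = raw[r, c]
                SW[r, c] = mean_of(r, c, cross, "W")  # four adjacent W pixels
                SB[r, c] = mean_of(r, c, diag, "B")   # four diagonal B pixels
            else:
                SB[r, c] = raw[r, c]
                SW[r, c] = mean_of(r, c, cross, "W")  # four adjacent W pixels
                SR[r, c] = mean_of(r, c, diag, "R")   # four diagonal R pixels
    return SW, SR, SB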
At this stage, the three types of color information of the signals SW, SR, and SB at the respective pixels are obtained. The signals SR and SB obtained herein are the electric signals for red (R) and blue (B) among three primary colors of light (red (R), green (G), and blue (B)). However, a signal SG for green (G) is still unknown. The DSP 110 calculates the signal SG as a fourth signal using the first electric signal SW, the second electric signal SR, and the third electric signal SB at the respective pixels.
In the graph shown in
The spectral sensitivity characteristic results from the fact that the transmittances of the first to third filters are lower than that of the colorless filter. If proportional constants decided by the transmittances of the first to third filters and the spectral sensitivity characteristics of the respective pixels are Kr, Kb, and Kg, respectively, the following equation 2 is established.
SW=Kr×SR+Kb×SB+Kg×SG (Equation 2)
In the specific examples shown in
SG=(SW−Kr×SR−Kb×SB)/Kg=0.82×SW−0.84×SR−SB (Equation 3)
The DSP 110 calculates the signal SG by substituting the signals SW, SR, and SB obtained from the digital amplifier 108 into the equation 3.
In the first embodiment, the DSP 110 multiplies the second electric signal SR and the third electric signal SB by the coefficients Kr and Kb obtained in view of the transmittance of the first filter and that of the second filter, respectively. The DSP 110 subtracts the results of these multiplications (Kr×SR and Kb×SB) from the first electric signal SW. The DSP 110 then divides the result of the subtraction (SW−Kr×SR−Kb×SB) by the coefficient Kg obtained in view of the transmittance of the third filter. The DSP 110 thereby calculates the fourth electric signal SG. As shown in the equation 3, the fourth electric signal SG is obtained by subtracting the component of the red signal SR and that of the blue signal SB from the visible light signal SW. This method will be referred to as “subtraction method” hereinafter. By executing this subtraction method, a signal SG1 shown in
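As a minimal sketch (assuming that the coefficients Kr, Kb, and Kg are supplied from the filter characteristics; the function name is not part of the embodiment), the subtraction method of the equation 3 can be written as follows.

def green_by_subtraction(SW, SR, SB, Kr, Kb, Kg):
    # Equation 3: SG = (SW - Kr*SR - Kb*SB) / Kg
    return (SW - Kr * SR - Kb * SB) / Kg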
Alternatively, the DSP 110 may calculate the fourth electric signal SG using a division method instead of the subtraction method. For example, the DSP 110 can calculate the signal SG using the following equation 4.
SG=SW×(SW×Kg/(Kr×SR+Kb×SB)−1) (Equation 4)
A method using the equation 4 will be referred to as “division method” hereinafter. With the division method, the DSP 110 multiplies the second electric signal SR and the third electric signal SB by the coefficient Kr obtained in view of the spectral transmittance of the first filter and the coefficient Kb obtained in view of the spectral transmittance of the second filter, respectively. Furthermore, the DSP 110 adds up the multiplication results, divides the product of the first electric signal SW and the coefficient Kg by the result of this addition (Kr×SR+Kb×SB), and subtracts 1 from the quotient. Then, the DSP 110 multiplies the result of this subtraction by the first electric signal SW. As a result, the signal SG is calculated.
By executing this division method, the signal SG2 shown in
The coefficients Kr, Kg, and Kb can all be set to 1 to simplify the calculation. If so, the equation 3 is simplified to the equation 5, and the equation 4 is simplified to the equation 6. The DSP 110 can calculate the signal SG using the equation 5 or 6. The equations 5 and 6 are relatively simple although they are less accurate than the equations 3 and 4.
SG=SW−SR−SB (Equation 5)
SG=SW×(SW/(SR+SB)−1) (Equation 6)
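For comparison, a minimal sketch of the division method of the equation 4 and of the simplified forms of the equations 5 and 6 is given below; again, the coefficient values are assumptions supplied from the filter characteristics, and the function names are not part of the embodiment.

def green_by_division(SW, SR, SB, Kr, Kb, Kg):
    # Equation 4: SG = SW * (SW*Kg / (Kr*SR + Kb*SB) - 1)
    return SW * (SW * Kg / (Kr * SR + Kb * SB) - 1.0)

def green_simplified(SW, SR, SB, use_division=False):
    # Equations 5 and 6: all coefficients set to 1
    if use_division:
        return SW * (SW / (SR + SB) - 1.0)  # Equation 6
    return SW - SR - SB                     # Equation 5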
According to a graph shown in
However, as indicated by the output VW, if the illuminance exceeds I4, the output VW from the pixel W is saturated. As indicated by the output VB, if the illuminance is lower than I3, the output VB is lower than the noise level. Namely, the solid-state imaging device 100 according to the first embodiment can calculate the signal SG using one of the equations 3 to 6 in the illuminance range of I3 to I4. However, if the illuminance is lower than I3 or exceeds I4, the solid-state imaging device 100 cannot calculate the signal SG only by using one of the equations 3 to 6. On the other hand, the conventional solid-state imaging device using the pixels R, G, and B can detect the signals SR, SG, and SB in the illuminance range of I3 to I5 as can be understood from the outputs VR, VG, and VB shown in
In the first embodiment, therefore, the DSP 110 performs calculation shown in
The output VW of the signal SW is compared with a preset saturation value VW1 (S30). The saturation value VW1 is a preset output of the pixel W at the illuminance I4. If the output VW is equal to or lower than the saturation value VW1, it is clear that the outputs VW, VR, and VB are all equal to or lower than their saturation levels. Namely, it is known that the illuminance of the incident light is equal to or lower than I4. If the output VW is equal to or lower than the saturation value VW1, the output VR of the signal SR is then compared with a noise level VR1 (S40). The noise level VR1 is a preset output of the pixel R at an illuminance I7. If the output VR is equal to or higher than the noise level VR1, the output VB of the signal SB is then compared with a noise level VB1 (S50). The noise level VB1 is a preset output of the pixel B at the illuminance I3. If the output VB is equal to or higher than the noise level VB1 at the step S50, it is clear that the outputs VW, VR, and VB are all equal to or higher than their noise levels. Namely, it is clear that the illuminance of the incident light is equal to or higher than I3. In this case, the illuminance of the incident light is within the range of I3 to I4. The DSP 110 can, therefore, calculate the signal SG using one of the equations 3 to 6. The signals SR, SG, and SB can thereby be generated (S60). The signals SR, SG, and SB are then output from the solid-state imaging device 100 (S120).
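The branch structure of the steps S30 to S60 (together with the steps S70, S100, and S110 described below) can be summarized by the following sketch. The threshold arguments VW1, VR1, and VB1 correspond to the preset saturation value and noise levels, and the returned strings are merely labels for the subsequent processing; none of the names are part of the embodiment.

def select_branch(VW, VR, VB, VW1, VR1, VB1):
    # S30: the W output exceeds its saturation value -> illuminance above I4
    if VW > VW1:
        return "estimate VW by equation 7, then first color processing"  # S70-S100
    # S40/S50: the R or B output falls below its noise level -> illuminance below I3
    if VR < VR1 or VB < VB1:
        return "second color processing"                                 # S110
    # otherwise the illuminance lies within the range I3 to I4
    return "calculate SG by one of equations 3 to 6"                     # S60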
If the output VW exceeds the saturation value VW1 at the step S30, it is known that the output VW exceeds the saturation level. Namely, it is clear that the illuminance of the incident light exceeds I4. In this case, the DSP 110 calculates the output VW by the following equation 7 (S70).
VW=K1×VR+K2×VB (Equation 7)
In the equation 7, the coefficients K1 and K2 are constants set in view of the relation among the outputs VW, VR, and VB when white light, for example, is used as the incident light in the illuminance range of I3 to I4. As shown in the equation 7, the output VW can be expressed as a linear function of the outputs VR and VB. By calculating the equation 7, the output VW corresponding to the illuminance of the light incident on the pixel W in a saturation state can be obtained. In
The output VW obtained by the equation 7 is compared with the saturation value VW1 (S80). If the output VW is saturated due to high illuminance of the green light, the output VW cannot be accurately calculated by the equation 7. At the step S80, therefore, the DSP 110 determines whether the cause of the saturation of the output VW is the high illuminance of the green light. If the output VW obtained by the equation 7 is equal to or higher than the saturation value VW1, the output VW is saturated by the visible light, i.e., by the red light or the blue light (which may include the green light). In this case, the DSP 110 performs a first color processing (S100). If the output VW obtained by the equation 7 is lower than the saturation value VW1, the output VW is saturated by the green light rather than by the red light or the blue light. In this case, the saturation value VW1 is considered to be closer to the actual VW than the output VW obtained by the equation 7. The DSP 110, therefore, substitutes the saturation value VW1 for the output VW (S90), and then performs the first color processing.
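A sketch of the steps S70 to S90, assuming that K1, K2, and the saturation value VW1 are given and that the function name is merely illustrative, is as follows.

def estimate_vw_when_saturated(VR, VB, K1, K2, VW1):
    # Equation 7: VW = K1*VR + K2*VB (used while the W pixel is saturated)
    VW_est = K1 * VR + K2 * VB
    # S80/S90: if the estimate falls below the saturation value, the saturation is
    # attributed to green light and the saturation value itself is used instead
    return VW_est if VW_est >= VW1 else VW1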
In the first color processing (S100), the color information on a replacement-target pixel is replaced by color information based on the nearest unsaturated pixel. For example, the DSP 110 substitutes the color information (outputs VWp, VRp, and VBp) on a pixel, in which the output VW is not saturated and which is laterally adjacent to the replacement-target pixel in which the output VW is saturated, into the following equation 8. The calculated VW′, VR′, and VB′ are set as the color information on the replacement-target pixel. In this case, the DSP 110 uses, as the color information (VWp, VRp, and VBp), the color information on a pixel read just before the replacement-target pixel in a certain row in the imaging region 101.
VW′=(VW/VWp)×VWp
VR′=(VW/VWp)×VRp
VB′=(VW/VWp)×VBp (Equation 8)
At the step S60, the DSP 110 substitutes the color information (VW′, VR′, and VB′) on the replacement-target pixel after the replacement into the equation 3 as the signals SW, SR, and SB, respectively, thereby calculating the signal SG. The calculated signal SG corresponds to the fourth electric signal VG′ of the replacement-target pixel. As a result, the color information (VR′, VG′, VB′) on the replacement-target pixel is obtained.
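A sketch of this replacement processing, assuming that VW is the output (or the estimate from the equation 7) of the replacement-target pixel, that (VWp, VRp, VBp) is the color information on the unsaturated reference pixel, and that Kr, Kb, and Kg are the filter coefficients, is as follows.

def first_color_processing(VW, VWp, VRp, VBp, Kr, Kb, Kg):
    # Equation 8: scale the reference pixel's color information to the target's VW
    scale = VW / VWp
    VW_new = scale * VWp   # equals VW by construction
    VR_new = scale * VRp
    VB_new = scale * VBp
    # Step S60: substitute (VW', VR', VB') into equation 3 to obtain VG'
    VG_new = (VW_new - Kr * VR_new - Kb * VB_new) / Kg
    return VR_new, VG_new, VB_new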
Alternatively, the DSP 110 can replace the color information on the replacement-target pixel by color information based on a pixel vertically adjacent to the replacement-target pixel. In this case, the DSP 110 stores, in a memory (not shown), the row read just before the certain row in the imaging region 101, and replaces the color information on the replacement-target pixel by color information based on the pixel vertically adjacent (in the preceding or following row) to the replacement-target pixel.
More specifically, the DSP 110 performs an operation for adjusting the illuminance of the color information on the pixels adjacent to and surrounding the replacement-target pixel to that of the replacement-target pixel. The DSP 110 then sets the operation result as the color information on the replacement-target pixel. For example, the color information on the replacement-target pixel is (VW′, VR′, VB′), and the color information on the visible light, the red light, and the blue light of a pixel adjacent to or surrounding the replacement-target pixel is (VWp, VRp, VBp). In this case, the DSP 110 performs the color replacement processing by calculating the equations 8 and 3 (or 4). If the output VW is saturated in a plurality of laterally continuous pixels, the color replacement processing is first performed on the VW-saturated pixel at which saturation occurs first when the signals are read in time series. On the subsequent VW-saturated pixels, the color replacement processing is performed using the pixel signal that has just been subjected to the color replacement processing. The same applies if the output VW is saturated in a plurality of vertically adjacent pixels.
In another alternative, the DSP 110 can calculate the signal SG while making the color information on each pixel achromatic. For example, at high illuminance exceeding I4 or at low illuminance lower than I3, the ability of human vision to recognize color information is reduced. Due to this, it often suffices to provide only luminance information on an achromatic color. In this case, the color information on the achromatic color can be calculated. Calculating the color information on the achromatic color is rather preferable because the burden on the DSP 110 can be reduced. More specifically, the DSP 110 first makes a calculation as expressed by the following equation 9.
VW′=(VW/W0)×W0
VR′=(VW/W0)×R0
VB′=(VW/W0)×B0 (Equation 9)
In the equation 9, (W0, R0, B0) indicate constants preset according to the spectral transmittances of the colorless filter 10, the first filter 20, and the second filter 30, respectively. In the equation 9, the luminance of the achromatic signal is adapted to the output VW. It is to be noted that an achromatic signal (R0, G0, B0) is a signal that satisfies R0:G0:B0=1:1:1.
At the step S60, (VW′, VR′, VB′) are substituted into the equation 3 (or 4) as (SW, SR, SB), thereby obtaining the fourth electric signal VG′ of the replacement-target pixel as the signal SG. The (VR′, VG′, VB′) thus obtained are set as the outputs of the replacement-target pixel. Since the calculation of the equation 9 is simpler than that of the equation 8, the calculation for the achromatic color imposes little load on the DSP 110. After performing the first color processing, the DSP 110 generates the signals SR, SG, and SB as shown in the step S60.
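A sketch of the achromatic processing, assuming the preset constants (W0, R0, B0) and the filter coefficients Kr, Kb, and Kg, is as follows; the function name is merely illustrative.

def achromatic_processing(VW, W0, R0, B0, Kr, Kb, Kg):
    # Equation 9: scale the preset achromatic constants to the output VW
    scale = VW / W0
    VW_new = scale * W0    # equals VW by construction
    VR_new = scale * R0
    VB_new = scale * B0
    # Step S60: equation 3 then yields VG' for the replacement-target pixel
    VG_new = (VW_new - Kr * VR_new - Kb * VB_new) / Kg
    return VR_new, VG_new, VB_new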
Alternatively, since the achromatic color satisfies R0:G0:B0=1:1:1, it is clear that the outputs satisfy VR′:VG′:VB′=1:1:1. Accordingly, the DSP 110 may calculate only one of VR′=(VW/W0)×R0 and VB′=(VW/W0)×B0 in the equation 9. For example, the DSP 110 can calculate VR′=(VW/W0)×R0 and apply the calculation result to VB′ and VG′. It is thereby unnecessary to make a calculation based on the equation 3 (or 4). That is, the signals (SR, SG, SB) can be output at the step S120 without executing the step S60 shown in
On the other hand, if the output VR is lower than the noise level VR1 at the step S40 or the output VB is lower than the noise level VB1 at the step S50, the DSP 110 performs a second color processing (S110). As the second color processing, the DSP 110 can perform either the replacement processing using the equation 8 or the achromatic color processing using the equation 9. However, the second color processing may be different from the first color processing. For example, the first color processing may be the replacement processing using the equation 8 whereas the second color processing may be the achromatic color processing using the equation 9 or vice versa.
For example, at an illuminance of I1 to I3, the output VW from the first pixel W is equal to or higher than the noise level, but the output VR from the second pixel R is lower than the noise level due to the low illuminance. In this case, as the second color processing, the DSP 110 calculates the equation 8 by using the outputs VW, VR, and VB from a pixel which is near the second pixel R and in which the outputs VR and VB are equal to or higher than their noise levels. At this time, the output VW from the second pixel R itself is used. Thus, the outputs (VW′, VR′, VB′) for the second pixel R can be obtained.
Alternatively, the DSP 110 can perform the second color processing by converting the color information on the pixel into achromatic color information. For example, by applying a new first electric signal SW (output VW) for the second pixel R obtained by the interpolation processing to the equation 9, the outputs (VW′, VR′, VB′) are obtained.
After performing the second color processing, the DSP 110 generates the signals SR, SG, and SB as shown in the step S60. By performing the second color processing, the signals SR, SG, and SB can be obtained even if the illuminance of the incident light is I3 or lower.
As stated above, the conventional device can detect the incident light in the illuminance range of I3 to I5. The solid-state imaging device 100 according to the first embodiment can detect the incident light in the illuminance range of I1 to I6 by performing the interpolation processing (S10), the processing for calculating the output VW (S70), and the first and second color processings (S100 and S110). Namely, according to the first embodiment, the incident light at low illuminance equal to or lower than I3 can be detected.
As stated so far, the solid-state imaging device 100 according to the first embodiment incorporates the colorless pixel W including the colorless filter that can use 95% or more of the incident light energy, and dispenses with the pixel G including a monochromatic color filter low in utilization efficiency of the incident light energy. By incorporating the pixel W high in utilization efficiency of the incident light energy in place of the pixel G low in utilization efficiency thereof, the solid-state imaging device 100 according to the first embodiment can acquire three types of color information (R, G, and B) while improving the utilization efficiency of the incident light energy. That is, incident light at low illuminance can be detected at high sensitivity. This can further contribute to miniaturizing the entire device.
In the first embodiment, the pixel G has been dispensed with. However, the pixel B can be dispensed with in place of the pixel G. This is because the utilization efficiency of the incident light energy of the pixel B is almost equal (about 80%) to that of the pixel G.
The same signal processing as that according to the first embodiment can be applied even to the instance of so-called YUV outputs in place of RGB outputs. In a second embodiment, the first embodiment is applied to the YUV outputs. A solid-state imaging device according to the second embodiment can be identical in configuration to that according to the first embodiment.
As shown in
The signals SY, SU, and SV are generated at the step S130 as follows.
The luminance signal SY is expressed by the following equation 10.
SY=0.30×SR+0.59×SG+0.11×SB (Equation 10)
Since the CFA in the second embodiment does not include the pixels G, the DSP 110 calculates the signals SY, SU, and SV from the signals SW, SR, and SB for the respective pixels obtained by the interpolation processing. To do so, the DSP 110 can obtain the signal SY as expressed by the following equation 11 using the equations 3 and 10.
SY=0.30×SR+0.58×(0.82×SW−0.84×SR−SB)+0.11×SB=0.48×SW−0.19×SR−0.47×SB (Equation 11)
Furthermore, the color-difference signals SU and SV can be expressed by equations 12 and 13, respectively.
SU=0.492×(SB−SY)=0.492×(0.53×SB+0.19×SR−0.48×SW)=0.26×SB+0.09×SR−0.24×SW (Equation 12)
SV=0.877×(SR−SY)=0.877×(0.52×SR+0.47×SB−0.48×SW)=0.46×SR+0.41×SB−0.42×SW (Equation 13)
To perform the achromatic color processing, the signals SU and SV can be set to zero, that is, (SU, SV)=(0, 0). The second embodiment exhibits the same advantage as that of the first embodiment in that incident light at lower illuminance than in the conventional technique can be detected.
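A sketch of the YUV generation of the second embodiment, using the example coefficients of the equation 11 and the defining forms of the equations 12 and 13, is as follows; the coefficient values are only the examples given above, and the achromatic processing simply forces SU and SV to zero.

def wrb_to_yuv(SW, SR, SB, achromatic=False):
    # Equation 11 (example coefficients): SY from the interpolated SW, SR, SB
    SY = 0.48 * SW - 0.19 * SR - 0.47 * SB
    if achromatic:
        return SY, 0.0, 0.0        # (SU, SV) = (0, 0)
    SU = 0.492 * (SB - SY)         # Equation 12
    SV = 0.877 * (SR - SY)         # Equation 13
    return SY, SU, SV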
In a third embodiment, either the signal SW or a signal after interpolation is used as it is as the luminance signal SY. The other constituent elements and processings according to the third embodiment can be identical to those according to the second embodiment.
The luminance signal SY and the color-difference signals SU and SV can be expressed by the following equations 14 to 16, respectively.
SY=SW (Equation 14)
SU=0.492×(SB−aSY) (Equation 15)
SV=0.877×(SR−bSY) (Equation 16)
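A sketch of the third embodiment, in which the signal SW itself serves as the luminance signal, is as follows; the coefficients a and b are assumed to be given, as they are not specified here.

def wrb_to_yuv_direct(SW, SR, SB, a, b):
    SY = SW                        # Equation 14: SW is used directly as SY
    SU = 0.492 * (SB - a * SY)     # Equation 15
    SV = 0.877 * (SR - b * SY)     # Equation 16
    return SY, SU, SV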
According to the third embodiment, the processing for generating the signals SY, SU, and SV (S130) is simplified. Therefore, the load on the DSP 110 can be reduced. Moreover, since the signal SW is directly used as the signal SY, the S/N ratio of the luminance signal SY is improved.
Modifications of a pixel arrangement in the imaging region 101 will next be described.
In
The modification shown in
In
In a solid-state imaging device using the pixel arrangement shown in any one of
An IR-cut filter can be provided in each of the pixels R and B other than the pixel W. By doing so, the pixels R and B can detect more accurate signals without receiving near-infrared light. Moreover, since the pixel W detects the near-infrared light, an image can be picked up at higher sensitivity.
In the embodiments, the CFA is constituted by the combination of the pixels W, R, and B. However, as the pixels R and B, any two desired colors can be selected from among the three primary colors of light, i.e., R, G, and B.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.