1. Field of the Invention
The subject matter disclosed generally relates to the field of semiconductor image sensors.
2. Background Information
Photographic equipment such as digital cameras and digital camcorders may contain electronic image sensors that capture light for processing into a still or video image, respectively. There are two primary types of electronic image sensors, charge coupled devices (CCDs) and complementary metal oxide semiconductor (CMOS) sensors. CCD image sensors have relatively high signal-to-noise ratios (SNR) that provide quality images. Additionally, CCDs can be fabricated to have pixel arrays that are relatively small while still meeting most camera and video resolution requirements. A pixel is the smallest discrete element of an image. For these reasons, CCDs are used in most commercially available cameras and camcorders.
CMOS sensors are faster and consume less power than CCD devices. Additionally, CMOS fabrication processes are used to make many types of integrated circuits. Consequently, there is a greater abundance of manufacturing capacity for CMOS sensors than CCD sensors.
Both CCD and CMOS image sensors may generate inaccurate image data because of dark current in the pixel and a variation of the dark current from pixel to pixel. Each pixel of an image sensor pixel array provides an output voltage that varies as a function of the light incident on the pixel. Unfortunately, dark currents add to the output voltages. The dark current and its variation from pixel to pixel degrade the picture provided by the image system.
Dark current varies from one pixel to another, making it impossible to use a single dark current estimate for the entire pixel array to subtract from all of the output voltage values. Dark current also tends to have an exponential dependence on temperature, such that its magnitude doubles for every 8 degrees Kelvin of temperature increase. Consequently, a single dark frame captured at a different temperature than the light frame (the actual picture frame) does not accurately represent the dark signals embedded in the light frame and requires some sort of scaling. Furthermore, it is common to have temperature variations across the substrate, leading to different average dark currents at different locations of the pixel array. Additionally, if the ambient temperature changes, different locations of the pixel array may experience different changes in temperature, leading to different ratios of change in dark current at different locations. All of these factors make it difficult to accurately estimate dark signals across the pixel array using a one-time captured dark frame that is scaled and subtracted from the light frames.
There have been efforts to compensate for the dark currents of the pixel array. U.S. Pat. No. 5,926,214 issued to Denyer et al. discloses a dark current compensation scheme where a dark frame is taken contemporaneously with a light frame. Both the dark frame and the light frame are captured at the same temperature. The dark frame is subtracted from the light frame to substantially remove dark signals from the output of the sensor. Unfortunately, such dark frame approaches require the closing of a camera shutter or some other means to prevent light from impinging upon the sensor. Closing the camera shutter contemporaneously with each picture increases the shot-to-shot delay of the camera.
U.S. Pat. No. 6,101,287 issued to Corum et al. and U.S. Pat. No. 5,278,658 issued to Takase each disclose a scheme where a single dark frame is captured and certain “dark” pixels of the array are used to generate a dark signal reference for scaling the dark frame. The dark pixels are optically shielded from the ambient light. The dark frame is scaled by the ratio of the average dark-pixel signal in the light frame to that in the dark frame. Unfortunately, some light still strikes the dark pixels, so the dark pixel outputs are not truly representative of the dark signals. Additionally, the dark signals of interior pixels scale differently than those of the peripheral dark pixels.
U.S. Pat. No. 6,144,408 issued to MacLean discloses a scheme that does not rely on peripheral dark pixels and can generate multiple scaling factors for the interior of the pixel array. In this scheme, certain pixels where the dark currents are found to be strongest are selected and organized into groups. An average Laplacian is calculated for each group in the dark frame. The Laplacian essentially measures the amount of “spiking” of the voltage value at a pixel relative to its neighbors. Subsequently, the average Laplacian is calculated for each group again in the light frame. The ratio of the average Laplacian in the light frame to that in the dark frame provides the scaling factors applied across the entire pixel array from those groups of selected pixels. This technique is relatively inadequate because picture details in the vicinity of those selected pixels interfere with the accuracy of the scaling factor calculation.
There is therefore a need to use a single dark frame to estimate and subtract dark signals for an entire pixel array despite changes in ambient temperature and variations of temperature across the pixel array.
An image sensor with a temperature sensor that can sense the temperature of a pixel array.
Disclosed is an image sensor that has a temperature sensor. The temperature sensor senses the temperature of a pixel array of the image sensor. The sensed temperature is used to scale a dark image frame generated by the pixel array. The scaled dark image frame is subtracted from a light image frame generated by the pixel array. The scaled dark image frame compensates for temperature variations in the pixel array. The scaled dark image frame may be generated by multiplying the dark frame by a scale factor(s). The scale factor may be computed from an equation or determined from a look-up table. The equation or look-up table may compensate for thermal gradients across the pixel array.
The entire image sensor is preferably constructed with CMOS fabrication processes and circuits. Although a CMOS image sensor is described, it is to be understood that the temperature compensation schemes disclosed herein may be implemented in other types of image sensors such as CCD sensors.
Referring to the drawings more particularly by reference numbers, the figures show an image sensor 10 that includes a pixel array 12 containing a plurality of individual pixels 14.
The pixel array 12 is coupled to a light reader circuit 16 and a row decoder 18. The row decoder 18 can select an individual row of the pixel array 12. The light reader 16 can then read specific discrete columns within the selected row. Together, the row decoder 18 and light reader 16 allow for the reading of an individual pixel 14 in the array 12.
The light reader 16 may be coupled to an analog to digital converter 20 (ADC). The ADC 20 generates a digital bit string that corresponds to the amplitude of the signal provided by the light reader 16 and the selected pixels. The digital data may be stored in an external memory 22.
The digital bit strings can be transferred to an external processor 24 and memory 22 by a memory controller 26. The processor 24 may be coupled to another external memory device 28. The external memory 28 may store dark frame information that is used by the processor 24. The processor 24 may operate in accordance with firmware embedded in a non-volatile memory device 30 such as a read only memory (“ROM”). By way of example, the sensor 10, processor 24, and memory 28 and 30 may be integrated into photographic instruments such as a digital camera, a digital camcorder, or a cellular phone unit that contains a camera.
The image sensor 10, processor 24, and memory 28 and 30 may be configured, structured and operated in the same manner as, or similar to, the corresponding image sensors and image sensor systems disclosed in application Ser. Nos. 10/183,218, 10/236,515 and 10/383,450, which are hereby incorporated by reference.
The image sensor 10 may include a temperature sensor 52 that senses the temperature of the pixel array 12 and provides the sensed temperature to a dark frame estimator 54. The dark frame estimator 54 may be coupled to a combiner 56 that is also connected to the pixel array 12. The combiner 56 may be an adder that subtracts a scaled dark image frame created by the estimator 54 from a light image frame generated by the pixel array. The dark frame estimator 54 and/or combiner 56 may be located on the sensor 10, in the processor 24, or both. If located in the external processor 24, the functions of the estimator 54 and combiner 56 may be performed by instructions executed by the processor 24. If on the sensor 10, the estimator 54 and combiner 56 may be dedicated circuits.
The dark frame estimator 54 may scale the dark image frame by multiplying the dark frame by a scale factor. The dark image frame is generated when no light impinges the pixel array 12. The dark image frame is composed of individual output voltage values for each pixel 14 of the array 12. The exposure time and the temperature of the pixel array 12 for the dark image frame may be stored in memory 22, 28.
The scale factor for the scaled dark image frame may be computed in accordance with the following equation:
S=exp[(T−T0)/A]*(t/t0)  (1)

where:

S=the scale factor;

A=a temperature constant;

T=the temperature provided by the temperature sensor at the time of the light image frame exposure;

T0=the temperature provided by the temperature sensor at the time of the dark image frame exposure;

t=the exposure time of the light image frame; and

t0=the exposure time of the dark image frame.
Alternatively, the scale factor may be determined from a look-up table that provides a different scale factor for each sensed temperature.
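To make the scaling step concrete, the following is a minimal sketch of how equation (1) and the look-up table alternative might be evaluated in software. The function name, the example temperature constant, and the table granularity are illustrative assumptions rather than part of the disclosed design; they simply mirror the quantities S, A, T, T0, t and t0 defined above.

```python
import math

def scale_factor(T, T0, t, t0, A):
    """Equation (1): S = exp[(T - T0)/A] * (t/t0)."""
    return math.exp((T - T0) / A) * (t / t0)

# Illustrative value only: if the dark current doubles roughly every
# 8 degrees, as noted in the background, the temperature constant
# would be on the order of A = 8/ln(2) ≈ 11.5.
A_EXAMPLE = 8.0 / math.log(2.0)

# Look-up table alternative: one pre-computed factor per sensed
# temperature (an assumed 1-degree grid around the calibration
# temperature; exposure times taken as equal for simplicity).
T0_CAL = 300.0
lookup = {T: math.exp((T - T0_CAL) / A_EXAMPLE) for T in range(290, 311)}

print(scale_factor(T=305.0, T0=300.0, t=0.02, t0=0.02, A=A_EXAMPLE))
print(lookup[305])
```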
The estimator 54 may utilize an equation(s) or a look-up table that compensates for a temperature gradient across the pixel array 12. The output values of each pixel 14 in the pixel array 12 may be separately scaled with a factor calculated by the equation(s) or determined from the look-up table. By way of example, the scale factor for a particular pixel may be computed with the following equations:
K(x,y)=a+b*x+c*y+d*x²+e*y²+f*x*y  (2)

T′−T′0=K(x,y)*(T−T0)  (3)

S=exp[(T′−T′0)/A]*(t/t0)

or, equivalently,

S=exp[K(x,y)*(T−T0)/A]*(t/t0)  (4)

where x and y are the coordinates of the pixel within the array 12, K(x,y) is a position dependent correction factor applied to the measured temperature difference, and a, b, c, d, e and f are the coefficients of the polynomial. The coefficients of the polynomial equation may be empirically determined for each particular image sensor layout. The computations of equations (1), (2), (3) and (4) can be performed by instructions executed by the processor 24 and provided by firmware embedded in the non-volatile memory 30.
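As a rough illustration of how the gradient compensation of equations (2) through (4) could be applied, the sketch below builds a per-pixel scale factor map. The coefficient values, array dimensions and function name are hypothetical placeholders; in practice the coefficients a through f would be empirically determined for the particular sensor layout, as noted above.

```python
import numpy as np

# Hypothetical polynomial coefficients (a .. f) of equation (2);
# real values would be calibrated for the specific sensor layout.
a, b, c, d, e, f = 1.0, 1e-4, 1e-4, 2e-7, 2e-7, 1e-7

def scale_map(width, height, T, T0, t, t0, A):
    """Per-pixel scale factors following equations (2) and (4)."""
    x = np.arange(width)[np.newaxis, :]   # column coordinate
    y = np.arange(height)[:, np.newaxis]  # row coordinate
    K = a + b*x + c*y + d*x**2 + e*y**2 + f*x*y      # equation (2)
    return np.exp(K * (T - T0) / A) * (t / t0)        # equation (4)

S = scale_map(width=640, height=480, T=305.0, T0=300.0,
              t=0.02, t0=0.02, A=11.5)
print(S.shape, S[0, 0], S[479, 639])
```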
In operation, the image sensor 10 may be calibrated by initially generating a dark image frame with no light impinging on the pixel array 12. The pixel array 12 may be darkened by closing the shutter or otherwise preventing light from impinging the array 12. The output voltage values of each pixel 14 are stored in memory 22, 28. The temperature and time exposure values for the dark frame may also be stored in memory 22, 28. The dark frame calibration may be performed each time the camera is powered on. Alternatively, the dark image frame calibration may be performed during the manufacturing process.
When a picture is taken, the pixel array 12 generates a light image frame. The temperature sensor 52 provides a temperature value to the estimator 54. The estimator 54 either computes a scale factor(s) from the equation(s) or determines it from the look-up table, using the sensed temperature of the array 12. The estimator 54 then multiplies the scale factor with each output voltage value of the dark image frame stored in memory 22, 28 to create a scaled dark image frame.
The combiner 56 subtracts the scaled dark image frame from the light image frame to compensate for dark signal variations in the pixel array 12. Providing a temperature sensor 52 to directly sense the temperature of the pixel array 12 more accurately compensates for temperature variations without relying on contemporaneous mechanical shutters or shielded pixels to generate the scaled dark image.
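The calibration and correction steps described above can be summarized in software form. The sketch below is a simplified model under assumed conventions (frames held as floating-point numpy arrays, a single global scale factor per equation (1)); the record layout and function names are illustrative and are not the circuit implementation of the estimator 54 and combiner 56.

```python
import math
import numpy as np

class DarkFrameCalibration:
    """Stored dark frame plus the temperature and exposure time at capture."""
    def __init__(self, dark_frame, T0, t0):
        self.dark_frame = np.asarray(dark_frame, dtype=float)
        self.T0 = T0  # temperature at the dark frame exposure
        self.t0 = t0  # exposure time of the dark frame

def correct_light_frame(light_frame, cal, T, t, A):
    """Scale the stored dark frame per equation (1) and subtract it."""
    S = math.exp((T - cal.T0) / A) * (t / cal.t0)
    corrected = np.asarray(light_frame, dtype=float) - S * cal.dark_frame
    # Clamp negative residuals; an illustrative choice, not specified above.
    return np.clip(corrected, 0.0, None)

# Illustrative usage with tiny synthetic frames.
cal = DarkFrameCalibration(dark_frame=np.full((4, 4), 0.01), T0=300.0, t0=0.02)
light = np.full((4, 4), 0.50) + 0.012  # signal plus a larger dark offset at 305 K
print(correct_light_frame(light, cal, T=305.0, t=0.02, A=11.5))
```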
The temperature sensor 52 may include bipolar junction transistors (BJTs) Q1 and Q2 that have different emitter widths defined by the ratio N. An amplifier 60 senses a differential voltage between Q1 and Q2 across a resistor R1. The voltage V1 across the resistor R1 varies with temperature as defined by the equation:
V1=K*T*ln(N)  (5)

where K is Boltzmann's constant divided by the electron charge, T is the absolute temperature, and N is the emitter width ratio of Q1 and Q2.
The temperature sensor 52 may include an analog to digital converter 62 that converts the voltage V2 across resistor R2 into a digital bit string. The voltage V2 across the resistor R2 is defined by the following temperature dependent equation:
V2=K*T*ln(N)*(R1/R2)*(W2/W1)  (6)
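Because V1 and V2 are linear in absolute temperature, the digital reading from the converter 62 can be mapped back to a temperature by inverting the expression above. The sketch below assumes illustrative component values (N, R1, R2, the width ratio W2/W1) and a hypothetical 10-bit converter with a 1.2 V reference; none of these numbers come from the disclosure.

```python
import math

# Boltzmann constant over electron charge (thermal-voltage slope), in V/K.
K_OVER_Q = 1.380649e-23 / 1.602176634e-19  # ≈ 86.17 µV/K

# Illustrative circuit parameters (assumed, not from the disclosure).
N = 8.0              # emitter width ratio of Q1 and Q2
R1, R2 = 10e3, 5e3   # resistor values, ohms
W_RATIO = 1.0        # width ratio W2/W1

def v2_from_temperature(T_kelvin):
    """Forward model of equation (6): V2 = K*T*ln(N)*(R1/R2)*(W2/W1)."""
    return K_OVER_Q * T_kelvin * math.log(N) * (R1 / R2) * W_RATIO

def temperature_from_code(code, vref=1.2, bits=10):
    """Invert the linear relation to recover T from an ADC code."""
    v2 = code * vref / (2 ** bits - 1)
    return v2 / (K_OVER_Q * math.log(N) * (R1 / R2) * W_RATIO)

T = 305.0
code = round(v2_from_temperature(T) * (2 ** 10 - 1) / 1.2)
print(code, temperature_from_code(code))
```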
It is the intention of the inventor that only claims which contain the term “means” shall be construed under 35 U.S.C. §112, sixth paragraph.
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art.
This application claims priority under 35 U.S.C. §119(e) to provisional application No. 60/455,631, filed on Mar. 18, 2003.
Number | Name | Date | Kind |
---|---|---|---|
4425501 | Stauffer | Jan 1984 | A |
4473836 | Chamberlain | Sep 1984 | A |
4614966 | Yunoki et al. | Sep 1986 | A |
4647975 | Alston et al. | Mar 1987 | A |
4703442 | Levine | Oct 1987 | A |
4704633 | Matsumoto | Nov 1987 | A |
4760453 | Hieda | Jul 1988 | A |
4858013 | Matsuda | Aug 1989 | A |
4974093 | Murayama et al. | Nov 1990 | A |
5043821 | Suga et al. | Aug 1991 | A |
5047861 | Houchin et al. | Sep 1991 | A |
5113246 | Ninomiya et al. | May 1992 | A |
5138458 | Nagasaki et al. | Aug 1992 | A |
5159457 | Kawabata | Oct 1992 | A |
5162914 | Takahashi et al. | Nov 1992 | A |
5235197 | Chamberlain et al. | Aug 1993 | A |
5278658 | Takase | Jan 1994 | A |
5309243 | Tsai | May 1994 | A |
5420635 | Konishi et al. | May 1995 | A |
5434620 | Higuchi et al. | Jul 1995 | A |
5436662 | Nagasaki et al. | Jul 1995 | A |
5455621 | Morimura | Oct 1995 | A |
5461425 | Fowler et al. | Oct 1995 | A |
5471515 | Fossum et al. | Nov 1995 | A |
5508740 | Miyaguchi et al. | Apr 1996 | A |
5587738 | Shinohara | Dec 1996 | A |
5638118 | Takahashi et al. | Jun 1997 | A |
5665959 | Fossum et al. | Sep 1997 | A |
5675381 | Hieda et al. | Oct 1997 | A |
5729288 | Saito | Mar 1998 | A |
5737016 | Ohzu et al. | Apr 1998 | A |
5801773 | Ikeda | Sep 1998 | A |
5841126 | Fossum et al. | Nov 1998 | A |
5861620 | Takahashi et al. | Jan 1999 | A |
5880460 | Merrill | Mar 1999 | A |
5883830 | Hirt et al. | Mar 1999 | A |
5886659 | Pain et al. | Mar 1999 | A |
5892541 | Merrill | Apr 1999 | A |
5909026 | Zhou et al. | Jun 1999 | A |
5926214 | Denyer et al. | Jul 1999 | A |
5929908 | Takahashi et al. | Jul 1999 | A |
5953061 | Biegelsen et al. | Sep 1999 | A |
5962844 | Merrill et al. | Oct 1999 | A |
5990506 | Fossum et al. | Nov 1999 | A |
6005619 | Fossum | Dec 1999 | A |
6008486 | Stam et al. | Dec 1999 | A |
6021172 | Fossum et al. | Feb 2000 | A |
6040858 | Ikeda | Mar 2000 | A |
6049357 | Shinohara | Apr 2000 | A |
6101287 | Corum et al. | Aug 2000 | A |
6115065 | Yadid-Pecht et al. | Sep 2000 | A |
6115066 | Gowda et al. | Sep 2000 | A |
6144408 | MacLean | Nov 2000 | A |
6204881 | Ikeda et al. | Mar 2001 | B1 |
6246436 | Lin et al. | Jun 2001 | B1 |
6249647 | Cazier et al. | Jun 2001 | B1 |
6300978 | Matsunaga et al. | Oct 2001 | B1 |
6317154 | Beiley | Nov 2001 | B1 |
6369737 | Yang et al. | Apr 2002 | B1 |
6418245 | Udagawa | Jul 2002 | B1 |
6493030 | Kozlowski et al. | Dec 2002 | B1 |
6532040 | Kozlowski et al. | Mar 2003 | B1 |
6538593 | Yang et al. | Mar 2003 | B1 |
6630955 | Takada | Oct 2003 | B1 |
6747696 | Nakata et al. | Jun 2004 | B1 |
6798456 | Sato | Sep 2004 | B1 |
20010040631 | Ewedemi et al. | Nov 2001 | A1 |
20030128285 | Itoh | Jul 2003 | A1 |
20030214590 | Matherson | Nov 2003 | A1 |
20040051797 | Kelly et al. | Mar 2004 | A1 |
20040085072 | Kanou et al. | May 2004 | A1 |
Number | Date | Country | |
---|---|---|---|
20040183928 A1 | Sep 2004 | US |
Number | Date | Country | |
---|---|---|---|
60455631 | Mar 2003 | US |