The field of the invention relates to imaging systems and, in particular, to a CMOS imaging system, including an imager and control circuitry fabricated on a single chip, that requires low power and provides a high quality image.
Traditionally, surveillance systems use off-the-shelf imagers for acquiring a video image. These imagers typically are not small and require an external power supply. Also, these systems generally do not provide clear images if the captured video has a dark foreground with a bright background, or vice versa. When such a video image is viewed on a monitor, little information can be extracted from it.
Furthermore, there are various types of imagers in use today, including charge-coupled device (CCD) imagers and complementary metal oxide semiconductor (CMOS) imagers. These imaging systems comprise an array of pixels, each of which contains a light-sensitive sensor element such as a photodiode.
CMOS imagers typically utilize an array of active pixel sensors and a row of correlated double-sampling circuits or amplifiers to sample and hold the output of a given row of pixel imagers of the array. The term active pixel sensor (APS) refers to an electronic image sensor in which active devices, such as transistors, are associated with each pixel. APS devices are typically fabricated using CMOS technology.
In a CMOS imaging system, each photodiode accumulates a charge, and therefore a voltage, during the optical integration period in accordance with the light intensity reaching the photodiode. As charge accumulates, the photodetector begins to fill. In a CMOS system, a voltage temporarily stored on the capacitance of a back-biased photodiode falls in accordance with the negative charge generated by photoelectrons. The cumulative amount of charge on the photodiode at the end of the integration period is the pixel value for that pixel position. If, however, a photodetector becomes full before the end of the integration period, then no additional charge can be accumulated from any further photons that strike it. Thus, for example, a very bright light applied to a photodetector can fill the photodetector before the end of the integration period, causing it to saturate and lose information.
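For illustration only, the saturation behavior described above may be sketched as follows; the full-well capacity, units and function name are hypothetical assumptions rather than values from the patent:

```python
# Minimal sketch of photodiode charge integration with saturation; the full-well
# capacity and the unit system are illustrative assumptions, not patent values.
def integrate_pixel(light_intensity, integration_time, full_well=1000.0):
    """Charge accumulates in proportion to the light intensity until the
    photodetector is full; beyond that point additional photons add nothing."""
    charge = light_intensity * integration_time
    return min(charge, full_well)   # the pixel value for this pixel position

print(integrate_pixel(light_intensity=5.0, integration_time=100.0))   # 500.0, unsaturated
print(integrate_pixel(light_intensity=50.0, integration_time=100.0))  # 1000.0, saturated
```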
In CCD imaging systems, the amount of charge that may be integrated in a pixel cell is limited by the depth of the depletion well under the photogate. The depletion well is formed by applying a potential to the photogate that repels majority carriers from the semiconductor substrate beneath the photogate. Again, as the photogate is exposed to photons and photoelectrons are generated, the depth of the well beneath the photogate decreases. As with CMOS photodiodes, if a CCD photogate is subject to bright illumination it may saturate, resulting in the loss of information about relatively bright objects in the image.
U.S. Pat. No. 6,040,570 issued Mar. 21, 2000 to Levine, et al. discloses a method of operating an APS imager to avoid the saturation problem described above. According to this method, the bias potential for the imager is applied in two steps. A first potential is applied before the start of the integration period when the pixels are reset and charge is accumulated for a first subinterval of the integration period. During this first subinterval, bright areas of the image may saturate the photodetectors in parts of the imager. In a second subinterval of the integration period, the bias voltage applied to the photodiode or to the photogate is changed to increase the charge capacity of the pixels. Pixels that previously had been saturated accumulate more charge during this second subinterval, providing a charge differential relative to other pixels that had saturated during the first subinterval. The accumulated charge on each pixel at the end of the integration period is provided as the image signal for that pixel. Thus, the dynamic range of each pixel, and therefore the complete imager, is extended to provide more information per integration period.
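The effect of this two-step bias may be illustrated with a simple dual-slope sketch; the capacities, subinterval durations and names below are assumed for illustration and are not the values used in the '570 patent:

```python
# Sketch of extended dynamic range via a two-subinterval integration.
# capacity_1 is the charge capacity under the first bias potential; changing the
# bias in the second subinterval increases the capacity to capacity_2.
def xdr_pixel_value(light_intensity, t1=0.8, t2=0.2,
                    capacity_1=600.0, capacity_2=1000.0):
    # First subinterval: bright pixels may saturate at capacity_1.
    charge = min(light_intensity * t1, capacity_1)
    # Second subinterval: the added capacity lets previously saturated pixels
    # keep accumulating, producing a compressed (dual-slope) response.
    charge = min(charge + light_intensity * t2, capacity_2)
    return charge

# Dim pixels respond linearly; bright pixels are compressed rather than simply
# clipped, so a charge differential between bright pixels is preserved.
for intensity in (100, 500, 1000, 2000):
    print(intensity, xdr_pixel_value(intensity))
```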
Furthermore, U.S. Pat. No. 5,949,918 issued Sep. 7, 1999 to McCaffrey discloses a method of performing image enhancement using an APS imager, a video processor and a dual-ported memory. The video processor performs a histogramming operation to create a look-up-table based on a cumulative distribution function (CDF) for the image. This look-up-table requantizes the pixel values to increase differences between closely spaced pixel values in bright and/or dark objects in the image. As the image data is received by the video processor, it is processed through the look-up-table to increase the amount of data visible on the video display, regardless of the intensity of the background or foreground of the image.
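A minimal sketch of this kind of CDF-based requantization (essentially histogram equalization) is shown below; the bit depth, image size and function names are illustrative assumptions rather than details from the '918 patent:

```python
import numpy as np

def build_cdf_lut(image, levels=256):
    """Build a look-up-table from the cumulative distribution function (CDF)
    of the image's pixel values."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalize the CDF to [0, 1]
    return np.round(cdf * (levels - 1)).astype(np.uint8)

def apply_lut(image, lut):
    return lut[image]                                # requantize every pixel through the LUT

# Example: a mostly dark image gains contrast after requantization.
rng = np.random.default_rng(0)
dark_image = rng.integers(0, 40, size=(480, 640), dtype=np.uint8)
enhanced = apply_lut(dark_image, build_cdf_lut(dark_image))
```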
It would be preferable to use both of these processes in a single-chip CMOS imager to provide a low-cost, low-power imager.
The present invention is a CMOS imaging device implemented on a single integrated circuit. The device includes an APS imager that contains an array of extended dynamic range (XDR) pixels that provide a signal representing a scene. The device further includes an image processor that calculates a controllable function of the image, and uses this function both to adjust the extended dynamic range of the imager and to requantize the signals received from the imager according to the controllable function.
According to one aspect of the invention, the image processor includes a histogramming function that controls the bias potentials applied to the imager to implement the extended dynamic range feature.
According to another aspect of the invention, the imaging device includes a memory for storing the controllable function and the processed video signal. The memory stores a full frame of the image signal and provides the image frame as two sequential fields.
According to yet another aspect of the invention, the imaging device includes circuitry that converts the video images provided by the imager into a standard format.
According to another aspect of the invention, the imaging device includes a power monitoring circuit that triggers the imaging system synchronously with the line current.
The invention is best understood from the following detailed description when read in connection with the accompanying drawing. It is emphasized that, according to common practice, the various features of the drawing are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity. Included in the drawing are the following Figures:
Referring now to
Input voltage is applied to a 3.3 volt regulator 150 and then fed through a charge pump 160 that provides the operating voltage for the ASIC 120 and other circuitry. In the exemplary embodiment of the invention, the charge pump 160 increases the 3.3 volts provided by the regulator 150 to provide a 5 volt signal to the APS imager 110. This increased supply voltage for the APS imager 110 allows it to produce video signals having wider dynamic range because more voltage levels are available for the extended dynamic range circuitry. The 3.3 volt regulator 150 also supplies a signal to watchdog circuit 170, which provides a start-up signal to ASIC 120. Watchdog circuit 170, which supplies a clean start-up pulse and allows for almost immediate response by ASIC 120, triggers ASIC 120 as necessary. This allows a scene to be captured in a very short amount of time after the initial trigger. In the exemplary embodiment of the invention, the watchdog circuit 170 is responsive to the alternating current (AC) line voltage to provide triggering pulses at a rate of 60 Hz. As described below, these pulses are converted into 30 Hz pulses by the ASIC 120 to extract progressive scan image data from the APS imager 110. The 60 Hz pulses are used to indicate when each of the field images should be provided from the stored frame image.
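For illustration, the relationship between the 60 Hz trigger pulses and the 30 Hz frame capture may be sketched as follows; the function and event names are hypothetical and do not describe the ASIC's actual logic:

```python
# Sketch: the watchdog provides line-synchronous pulses at 60 Hz; a progressive-scan
# frame is captured on every other pulse (30 Hz) and one field of the stored frame
# is output on every pulse (60 Hz).
def schedule_from_pulses(num_pulses):
    events = []
    for pulse in range(num_pulses):
        if pulse % 2 == 0:
            events.append((pulse, "capture new progressive-scan frame"))
        field = "even" if pulse % 2 == 0 else "odd"
        events.append((pulse, f"output {field} field from stored frame"))
    return events

for event in schedule_from_pulses(4):
    print(event)
```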
Pixel reset circuitry 180 is used to apply bias potentials to each pixel element of APS imager 110, as required, to operate the sensor in extended dynamic range mode. The pixel reset circuitry 180 is controlled by ASIC 120, responsive to signals generated by a histogramming function.
Dual-ported static random access memory (SRAM) 130 and a video digital to analog converter (DAC) 140 are coupled to ASIC 120. The SRAM 130 is dual-ported so that it may store frame data transmitted from ASIC 120, as well as a look-up-table (LUT) required for pixel processing and, at the same time, provide stored image data to the video DAC 140.
The ASIC 120 selects only the even lines of the stored progressive scan image, adds horizontal and vertical synchronization signals and provides the composite signal to the DAC 140 to produce an even image field. In the same way, the ASIC 120 processes the odd lines of the stored frame and provides these to the DAC 140 to produce an odd image field. As the odd image field is being provided to the DAC, the ASIC 120 stores the next progressive scan frame into the SRAM 130. In the exemplary embodiment of the invention, the DAC 140 provides a monochrome analog video signal that conforms to an industry standard format (e.g. RS-170) for display and/or recording on industry standard equipment.
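A sketch of this field separation is shown below; the frame dimensions and the assignment of line 0 to the even field are illustrative assumptions:

```python
import numpy as np

def frame_to_fields(frame):
    """Split a stored progressive-scan frame into even and odd fields, in the
    manner the ASIC reads lines out of the SRAM."""
    even_field = frame[0::2, :]   # lines 0, 2, 4, ... of the stored frame
    odd_field = frame[1::2, :]    # lines 1, 3, 5, ...
    return even_field, odd_field

frame = (np.arange(480 * 640) % 256).astype(np.uint8).reshape(480, 640)
even_field, odd_field = frame_to_fields(frame)
# Horizontal and vertical synchronization signals would be added to each field
# before it is provided to the DAC.
```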
The ASIC 120 includes the circuitry that controls the APS imager 110, memory 130 and DAC 140, as well as the circuitry that processes the pixel data collected by the APS imager 110. As shown in
Output control block 218 adds the horizontal and vertical synchronization signals to the interlaced video signal read out from memory 130 and transmits the composite signal to the video DAC 140. This results in a composite video output that is compliant with the RS-170 standard.
Memory control and histogram block 216 may, for example, perform video processing as described in U.S. Pat. No. 5,949,918, issued Sep. 7, 1999 to McCaffrey. A pseudo random sampling of the video data is performed to generate a histogram of the luminance levels. The histogram is transformed into a cumulative distribution function (CDF) and is stored in memory 130. A look-up-table (LUT) 220 is created based on the CDF and stored in memory 130 as well. Each unit of pixel data is processed by ASIC 120 through LUT 220 to increase the viewable data in each frame.
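A sketch of the pseudo-random sampling step is shown below; the sample count, random generator and function name are illustrative assumptions:

```python
import numpy as np

def sampled_histogram(image, num_samples=4096, levels=256, seed=1):
    """Build a luminance histogram from a pseudo-random subset of pixels,
    rather than from every pixel, to reduce the work per frame."""
    rng = np.random.default_rng(seed)
    flat = image.reshape(-1)
    samples = flat[rng.integers(0, flat.size, size=num_samples)]
    hist, _ = np.histogram(samples, bins=levels, range=(0, levels))
    return hist
```

The histogram produced this way can then be turned into a CDF and a LUT as in the earlier sketch.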
As described in the referenced patent, the LUT 220 translates the pixel values returned from the imager into output pixel values that are stored in the memory 130. The LUT 220 requantizes the pixel values to differentiate between closely spaced values. If, for example, the CDF generated by the histogram function of a first image indicates that the image includes i) only relatively dark image data, ii) only relatively bright image data or iii) a mixture of dark image data and bright image data with negligible data having pixel values between the dark image data and the bright image data, then the ASIC 120 will generate a LUT that translates some of the dark and/or bright pixel values into brighter and/or darker values, respectively, to provide more contrast in the areas of the image that do not exhibit significant variation. This translation is based on the relative values of the pixels. Thus, brighter pixels in the image remain bright and darker pixels remain dark.
In the exemplary embodiment of the invention, memory control and histogram circuitry 216 generates a CDF and a LUT for each received image. The LUT, however, is not used on the image from which it was generated, but rather on the next subsequent image. It is contemplated, however, that other schemes may be used. For example, the histogram function may generate a LUT only for every Nth image, where N is an integer, for example 10. Alternatively, the histogram function may use one frame period for analysis and another frame period to generate the LUT. In this alternative embodiment, the LUT would not be used for the next image in the sequence, but for the second image occurring after the image used to generate the LUT.
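For illustration, the LUT scheduling described above may be sketched as follows, using LUT build and apply routines such as those sketched earlier; the function names and the rebuild_every parameter are assumptions:

```python
# Sketch of frame-to-frame LUT scheduling: the LUT built from frame k is applied
# to a later frame, and may optionally be rebuilt only every Nth frame.
def process_stream(frames, build_lut, apply_lut, rebuild_every=1):
    current_lut = None
    for k, frame in enumerate(frames):
        if current_lut is not None:
            yield apply_lut(frame, current_lut)   # use a previously built LUT
        else:
            yield frame                           # no LUT exists yet for the first frame
        if k % rebuild_every == 0:
            current_lut = build_lut(frame)        # this LUT takes effect on a later frame
```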
In an exemplary embodiment of the invention, the memory control and histogram circuitry 216 interacts with the pixel reset circuitry 180 to ensure that the processed image data exhibits good dynamic range with minimal quantization distortion. This interaction is described below with reference to
As shown in
Adding another reset potential (P3), as shown in
The subject invention combines a manipulation of the reset levels with the histogramming circuitry to obtain images having increased contrast from the imager 110. In the exemplary embodiment of the invention, the individual reset levels and timing are fixed and the ASIC signals the pixel reset circuitry to apply a particular reset level using a two-bit value. The timing of the application of the reset level may be predetermined or may be adjusted as a part of the process described below with reference to
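A sketch of how a two-bit code might select among fixed reset levels and sequences is shown below; the specific potentials, timing fractions and names are hypothetical and are not taken from the patent:

```python
# Hypothetical mapping of a two-bit code to fixed reset potentials (volts) and
# reset sequences ordered from lowest to highest dynamic range.
RESET_LEVELS = {0b00: 1.0, 0b01: 1.4, 0b10: 1.8, 0b11: 2.2}   # illustrative values

# Each sequence lists (fraction of the integration period, two-bit level code).
RESET_SEQUENCES = [
    [(0.0, 0b00)],                               # single level: lowest dynamic range
    [(0.0, 0b00), (0.8, 0b01)],                  # P1 then P2
    [(0.0, 0b00), (0.7, 0b01), (0.9, 0b10)],     # P1, P2, then P3: highest range here
]

def reset_events(sequence_index, integration_time):
    """Return (time, two-bit code, potential) triples that the ASIC would signal
    to the pixel reset circuitry during the integration period."""
    return [(frac * integration_time, code, RESET_LEVELS[code])
            for frac, code in RESET_SEQUENCES[sequence_index]]
```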
In the first step of this flow-chart, step 510, the ASIC 120 receives an image from the imager array 110 and generates a histogram. At step 512, the process determines if the image includes a bright region with low dynamic range. This determination may be made, for example, if the histogram for the image has a significant number of pixels (e.g. more than 100) that are at or near (e.g. within 10% of) the maximum brightness level for the imager.
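The step 512 test may be sketched as follows, using the example thresholds given above; the function and parameter names are illustrative assumptions:

```python
def has_bright_low_dynamic_range_region(hist, levels=256,
                                        near_max_fraction=0.10, min_pixels=100):
    """Step 512 test, as sketched here: report True when a significant number of
    pixels (more than min_pixels) are at or near (within near_max_fraction of)
    the imager's maximum brightness level."""
    first_near_max_bin = int(round((1.0 - near_max_fraction) * (levels - 1)))
    near_max_count = sum(hist[first_near_max_bin:])
    return near_max_count > min_pixels
```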
If such a region does not exist, then the imager may benefit from using a reset sequence that has a lower dynamic range and, thus, greater quantization resolution for each image level. In this instance, step 520 determines if the reset sequence currently in use is the first sequence, that is to say, the sequence corresponding to the lowest dynamic range. If it is, then no further improvement is possible and control transfers to step 526, the end of the process. If the current sequence is not the first sequence then step 522 is executed which determines whether the sequence was previously changed and if so, whether there was an improvement in the image. Improvement in the image may be measured, for example, by comparing the highest level in the histogram for the current image to the corresponding level from the immediately previous image. If the current image has brighter objects then changing the reset sequence improved the image. If, at step 522, there was a previous sequence change but no improvement in the image then control transfers to step 526. Otherwise, step 524 is executed which changes the reset sequence to the one corresponding to the next lower dynamic range and then transfers control to step 526.
If, at step 512, a relatively large bright region does exist, then the imager may benefit from using a reset sequence that has a higher dynamic range. In this instance, step 514 determines if the reset sequence currently in use is the last sequence, that is to say, the sequence corresponding to the highest dynamic range. If it is, then no further improvement is possible and control transfers to step 526. If the current sequence is not the last sequence then step 516 is executed which determines whether the sequence was previously changed and if so, whether there was an improvement in the image. Improvement in the image may be measured, for example, by comparing the number of pixels at the brightest level in the histogram for the current image to the corresponding number of pixels from the immediately previous image. If the current image has fewer pixels at this level than the previous image then changing the reset sequence improved the image. If, at step 516, there was a previous sequence change but no improvement in the image then control transfers to step 526. Otherwise, step 518 is executed which changes the reset sequence to the one corresponding to the next higher dynamic range and then transfers control to step 526.
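The decision logic of steps 512 through 526 may be summarized in the following sketch; the state representation and names are illustrative assumptions, while the improvement tests follow the description above:

```python
def adjust_reset_sequence(state, bright_region, improvement, num_sequences=3):
    """One pass through steps 512-526.

    state: dict with 'index' (current sequence, 0 = lowest dynamic range,
           num_sequences - 1 = highest) and 'changed_last_time' (bool).
    bright_region: result of the step 512 test on the current histogram.
    improvement: True if the previous sequence change improved the image
                 (fewer pixels at the brightest level when stepping up in range,
                 a brighter highest level when stepping down), per steps 516/522.
    """
    last_index = num_sequences - 1
    if bright_region:
        # Steps 514-518: try a sequence with a higher dynamic range.
        if state["index"] == last_index:
            state["changed_last_time"] = False            # step 526: no change possible
        elif state["changed_last_time"] and not improvement:
            state["changed_last_time"] = False            # step 526: stop, no benefit seen
        else:
            state["index"] += 1                           # step 518: next higher range
            state["changed_last_time"] = True
    else:
        # Steps 520-524: try a lower dynamic range for finer quantization resolution.
        if state["index"] == 0:
            state["changed_last_time"] = False            # step 526: already at lowest range
        elif state["changed_last_time"] and not improvement:
            state["changed_last_time"] = False            # step 526: stop, no benefit seen
        else:
            state["index"] -= 1                           # step 524: next lower range
            state["changed_last_time"] = True
    return state
```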
At the same time that the ASIC 120 is adjusting the reset sequence, it is also performing the histogramming operations. Thus, both the overall contrast of the image and the quantization resolution are iteratively increased until a best possible value is reached. Because the camera is continually monitoring image quality and adjusting the XDR parameters and the histogram LUT, the camera continually adjusts to ambient lighting conditions.
While the system is described in terms of an adaptive method for adjusting the dynamic range of the video signal, it is contemplated that it may be practiced as a programmable system. In a surveillance application, for example, respectively different reset sequences and LUTs can be determined based on camera position in a fixed scan path, time of day and even day of year. These parameters may be programmed into the ASIC 120 or may be externally provided to the ASIC 120, for example by a single-bit I2C bus. Thus, the system may be programmed according to predetermined criteria to produce optimum images.
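For illustration, such a programmable configuration might be sketched as follows; the preset keys, parameter names and values are entirely hypothetical:

```python
# Hypothetical preset table: XDR reset sequence and LUT selection keyed by camera
# position in the scan path and a time-of-day band.
PRESETS = {
    ("position_1", "day"):   {"reset_sequence": 2, "lut_id": "outdoor_bright"},
    ("position_1", "night"): {"reset_sequence": 0, "lut_id": "low_light"},
    ("position_2", "day"):   {"reset_sequence": 1, "lut_id": "mixed_scene"},
}

def select_preset(position, time_band):
    """Return the pre-programmed parameters for this view; in practice such values
    could be stored in the ASIC or provided externally, for example over I2C."""
    return PRESETS.get((position, time_band), {"reset_sequence": 1, "lut_id": "default"})
```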
Although the invention has been described in terms of one or more exemplary embodiments, it is contemplated that it may be practiced as outlined above within the scope of the attached claims.
This application claims the benefit of U.S. Provisional Application No. 60/314,820, filed Aug. 24, 2001, the contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4484223 | Tsunekawa | Nov 1984 | A |
5642163 | Watari et al. | Jun 1997 | A |
5848019 | Matthews et al. | Dec 1998 | A |
5909145 | Zimmerman | Jun 1999 | A |
5920345 | Sauer | Jul 1999 | A |
5929908 | Takahashi et al. | Jul 1999 | A |
5949918 | McCaffrey | Sep 1999 | A |
5969758 | Sauer et al. | Oct 1999 | A |
6002123 | Suzuki | Dec 1999 | A |
6040570 | Levine et al. | Mar 2000 | A |
6101294 | McCaffrey et al. | Aug 2000 | A |
6236394 | Ikeda | May 2001 | B1 |
6259478 | Hori | Jul 2001 | B1 |
6555805 | Afghahi | Apr 2003 | B2 |
Number | Date | Country |
---|---|---|
20030038887 A1 | Feb 2003 | US |

Number | Date | Country |
---|---|---|
60314820 | Aug 2001 | US |