Real-time exposure control for automatic light control

Information

  • Patent Grant
  • Patent Number
    8,149,326
  • Date Filed
    Tuesday, September 15, 2009
  • Date Issued
    Tuesday, April 3, 2012
Abstract
An imager and a method for real-time, non-destructive monitoring of light incident on imager pixels during their exposure to light. Real-time or present pixel signals, which are indicative of present illumination on the pixels, are compared to a reference signal during the exposure. Adjustments, if necessary, are made to programmable parameters such as gain and/or exposure time to automatically control the imager's exposure to the light. In a preferred exemplary embodiment, only a selected number of pixels are monitored for exposure control as opposed to monitoring the entire pixel array.
Description
FIELD OF THE INVENTION

The invention relates generally to imaging devices and more particularly to real-time exposure control for automatic light control in an imaging device.


BACKGROUND

A CMOS imager circuit includes a focal plane array of pixel cells, each one of the cells including a photosensor, for example, a photogate, photoconductor or a photodiode overlying a substrate for accumulating photo-generated charge in the underlying portion of the substrate. Each pixel cell has a readout circuit that includes at least an output field effect transistor formed in the substrate and a charge storage region formed on the substrate connected to the gate of an output transistor. The charge storage region may be constructed as a floating diffusion region. Each pixel may include at least one electronic device such as a transistor for transferring charge from the photosensor to the storage region and one device, also typically a transistor, for resetting the storage region to a predetermined charge level prior to charge transference.


In a CMOS imager, the active elements of a pixel cell perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) resetting the storage region to a known state; (4) selection of a pixel for readout; and (5) output and amplification of a signal representing pixel charge. The charge at the storage region is typically converted to a pixel output voltage by the capacitance of the storage region and a source follower output transistor.


CMOS imagers of the type discussed above are generally known as discussed, for example, in U.S. Pat. Nos. 6,140,630, 6,376,868, 6,310,366, 6,326,652, 6,204,524 and 6,333,205, assigned to Micron Technology, Inc., which are hereby incorporated by reference in their entirety.



FIG. 1 illustrates a block diagram for a CMOS imager 10. The imager 10 includes a pixel array 20. The pixel array 20 comprises a plurality of pixels arranged in a predetermined number of columns and rows. The pixels of each row in array 20 are all turned on at the same time by a row select line and the pixels of each column are selectively output by a column select line. A plurality of row and column lines are provided for the entire array 20.


The row lines are selectively activated by the row driver 32 in response to row address decoder 30 and the column select lines are selectively activated by the column driver 36 in response to column address decoder 34. Thus, a row and column address is provided for each pixel. The CMOS imager 10 is operated by the control circuit 40, which controls address decoders 30, 34 for selecting the appropriate row and column lines for pixel readout, and row and column driver circuitry 32, 36, which apply driving voltage to the drive transistors of the selected row and column lines.


Each column contains sampling capacitors and switches 38 associated with the column driver 36, which read a pixel reset signal Vrst and a pixel image signal Vsig for selected pixels. A differential signal (Vrst−Vsig) is produced by differential amplifier 40 for each pixel and is digitized by analog-to-digital converter 45 (ADC). The analog-to-digital converter 45 supplies the digitized pixel signals to an image processor 50, which forms a digital image output.
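
The following is an illustrative software sketch of this readout chain; it is not the patented circuitry, and the function name, full-scale voltage and 10-bit resolution are assumptions for illustration only.

    def digitize_pixel(v_rst, v_sig, v_full_scale=1.0, bits=10):
        # Differential amplifier stage: form Vrst - Vsig for one pixel.
        diff = v_rst - v_sig
        # ADC stage: map the differential signal onto the digital code range.
        code = round(diff / v_full_scale * (2 ** bits - 1))
        # Clamp to the converter's output range.
        return max(0, min(code, 2 ** bits - 1))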


Lighting can affect image exposure. Light conditions may change spatially and over time. Thus, automatic light control is required to ensure that the best image is obtained by controlling the image sensor's exposure to the light. In some imager applications, there is a need to use the illumination during the actual exposure of an image (i.e., "present illumination") to control the exposure (i.e., perform exposure control). That is, there is a need to use present illumination because using the previous picture's illumination may not be sufficient for the intended application.


One exemplary application that would benefit from using present illumination in exposure control is the imager in a swallowable pill application, such as the one described in U.S. application Ser. No. 10/143,578, now U.S. Pat. No. 7,187,049 issued Mar. 6, 2007, the disclosure of which is incorporated herein by reference. Due to the nature of the imager in a pill application, automatic light control using present illumination is required. A proposed solution would be to light the application's light source (e.g., light emitting diodes) prior to the actual exposure periods. This technique, however, creates an undesirable high waste of energy and power by having the light source on longer than the exposure period.


Accordingly, there is a desire and need for automatic light control during an exposure period that uses present illumination, yet does not unnecessarily waste energy or power in the process.


SUMMARY

The invention provides automatic light control during an exposure period using present illumination.


Various exemplary embodiments of the invention provide an imager and a method for real-time, non-destructive monitoring of light incident on imager pixels during their exposure to light. Real-time or present pixel signals, which are indicative of present illumination on the pixels, are compared to a reference signal during the exposure. Adjustments, if necessary, are made to programmable parameters such as gain and/or exposure time to automatically control the imager's exposure to the light. In a preferred exemplary embodiment, only a selected number of pixels are monitored for exposure control as opposed to monitoring the entire pixel array.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other advantages and features of the invention will become more apparent from the detailed description of exemplary embodiments provided below with reference to the accompanying drawings in which:



FIG. 1 illustrates a block diagram for a CMOS imager;



FIG. 2 illustrates a block diagram of an exemplary imager light control function constructed in accordance with an embodiment of the invention;



FIG. 3 illustrates in flowchart form an exemplary method of performing automatic light control in accordance with an embodiment of the invention;



FIG. 4 illustrates a graph of gain settings and pixel output levels versus time according to an operation of the invention;



FIG. 5 illustrates in schematic form an exemplary embodiment of a voltage reference generator according to the invention;



FIG. 6 illustrates another exemplary imager constructed in accordance with another embodiment of the invention;



FIG. 7 shows a processor system incorporating at least one imaging device constructed in accordance with an embodiment of the invention; and



FIG. 8 shows another exemplary system incorporating at least one imaging device constructed in accordance with another embodiment of the invention.





DETAILED DESCRIPTION

Referring to the figures, where like reference numbers designate like elements, FIG. 2 shows a portion of an exemplary imager 110 containing a light control function constructed in accordance with an embodiment of the invention. The imager 110 includes a pixel array 120 containing a plurality of pixels 122 organized in rows ROW1, . . . , ROWN and columns COLUMN1, . . . , COLUMN256. A plurality of row lines RL and column lines CL are provided for the entire array 120. Pixels 122 in the same row (e.g., ROW1) are connected to row selection circuitry 132 by respective row lines RL. Pixels 122 in the same column COLUMN1, . . . , COLUMN256 are connected by a respective column line CL to dedicated sampling capacitors and switches 140_1, . . . , 140_256 (collectively "sampling capacitors and switches 140") for that column.


The imager 110 also includes a plurality of comparators 160_1, . . . , 160_64 (collectively "comparators 160"). In the illustrated embodiment, there are sixty-four comparators 160, one for every four columns of the pixel array 120. As is discussed below in more detail, the invention is not limited to a specific number of comparators 160. For the illustrated embodiment, the inventors have determined that sixty-four comparators 160, connected to sixty-four different columns, is desirable. In the illustrated embodiment, the first comparator 160_1 is connected to the column line CL of the first column COLUMN1, the second comparator 160_2 is connected to the column line CL of the fifth column, etc. The last comparator 160_64 is connected to the column line CL of the 253rd column COLUMN253.
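
As a simple illustration of the column selection just described (one comparator for every fourth column, starting at COLUMN1 and ending at COLUMN253), the following sketch enumerates the monitored column indices; it is an aid to the reader, not part of the claimed circuit.

    monitored_columns = list(range(1, 254, 4))   # 1, 5, 9, ..., 253
    assert len(monitored_columns) == 64          # one column per comparator
    assert monitored_columns[-1] == 253          # last comparator on COLUMN253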


In operation, the rows are selected in sequence. A "scan" as used herein is a sequence of consecutive row selections. When a predefined row (e.g., ROW1) in the array 120 is selected, the comparators 160 are connected to the set of pixels 122 in the dedicated columns, e.g., COLUMN1, COLUMN5, . . . , COLUMN253. The comparators 160 receive pixel signals from their respective column lines CL. The pixel signals, as is discussed below in more detail, are used to determine the present illumination of the pixels 122 of the respective columns.


The comparators 160 are also connected to a voltage reference generator 170 that outputs a reference voltage Vref to be compared against the pixel signals. As is discussed below, the voltage reference generator 170 is controllable to output different reference voltages Vref when desired. Each comparator 160 outputs a first logic value (e.g., logical "1") when its respective pixel signal exceeds the reference voltage Vref and a second, different logic value (e.g., logical "0") when its respective pixel signal has not exceeded the reference voltage Vref.


A bit collection circuit 162 is used to collect the results of the comparators 160 and to output the results to a counter 164. The counter 164 counts the number of pixels that have exceeded the reference voltage Vref in a single scan (e.g., the predetermined number of consecutively selected rows). The output of the counter 164 is provided to a digital logic block 166, where it is compared to a predetermined number of pixels. Depending upon the comparison, as is explained below in more detail with respect to FIG. 3, the digital logic block 166 may output an analog gain value ANALOG GAIN and/or an illumination stop signal ILLUMINATION STOP. The analog gain value ANALOG GAIN is used during pixel readout to ensure that the appropriate signal strength is used during the readout process. The illumination stop signal ILLUMINATION STOP is used to end the exposure period for all of the pixels 122 in the array 120 (i.e., stop the exposure of light onto the pixels 122).
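
A minimal software sketch of this comparator/counter/digital-logic chain is given below, assuming the analog pixel levels are available as numbers; the function and variable names (monitor_scan, v_ref, required_pixels) are illustrative assumptions rather than elements of the patent.

    def monitor_scan(column_samples, v_ref, required_pixels):
        # Comparators 160: one bit per monitored column, set when the pixel
        # signal exceeds the reference voltage Vref.
        comparator_bits = [1 if v > v_ref else 0 for v in column_samples]
        # Bit collection circuit 162 and counter 164: tally the bits for one
        # scan of the predefined pixels.
        count = sum(comparator_bits)
        # Digital logic block 166: compare the tally to the required number
        # of pixels Pr and report whether the target has been reached.
        return count, count >= required_pixels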


Although not shown, the imager 110 also includes a differential amplifier (e.g., amplifier 40 of FIG. 1), an analog-to-digital converter (e.g., ADC 45 of FIG. 1) and an image processor (e.g., processor 50 of FIG. 1). As described above with reference to FIG. 1, the sample and hold circuit samples and holds the pixel reset signal Vrst and pixel image signal Vsig for selected pixels. The differential amplifier produces a differential signal (Vrst−Vsig) for each pixel, which is digitized by the analog-to-digital converter. The digitized pixel signals are received by the image processor and output as a digital image.


The illustrated imager 110 also performs automatic light control according to an embodiment of the invention. FIG. 3 illustrates an exemplary method 200 of performing automatic light control in accordance with an embodiment of the invention. The method 200 has some desirable requirements that enable it to achieve automatic light control in a quick, efficient, real-time and non-destructive manner.


For example, method 200 uses a measurement time that is part of and no greater than the total exposure time. Keeping the measurement time within the boundaries of the total exposure helps conserve power. Another desirable requirement is that the measurements taken during execution of method 200 are performed on a subset of pixels, rather than the entire array. The measurements are non-destructive, which means that the pixels are not reset during the exposure.


The method 200 seeks to obtain a predefined number of pixels having a predefined signal level (discussed in more detail below). To ensure a fast light control process, method 200 uses analog pixel signals rather than analog-to-digital converted pixel signals. The method 200 will not include pixels having "white spots" (i.e., pixels with defects or extremely high dark current) in its final light control determination. The method 200 uses programmable (i.e., adjustable) parameters such as, e.g., the analog gain required during pixel readout, the required number of pixels at the predefined signal level ("Pr"), the preferred exposure time ("t1") and the maximum exposure time ("tm"). "Exposure time" is the time the light source is illuminated.
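
For readers who find it helpful, the programmable parameters listed above can be gathered as in the following sketch; the field names and example values are assumptions, since the patent leaves these parameters application specific.

    from dataclasses import dataclass

    @dataclass
    class ExposureControlParams:
        analog_gain: float = 1.0              # readout gain applied during pixel readout
        required_pixels: int = 64             # Pr: pixels that must reach the reference level
        preferred_exposure_s: float = 0.010   # t1: preferred exposure time
        max_exposure_s: float = 0.030         # tm: maximum exposure time (limits blur)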


As will become apparent, method 200 continuously scans the predefined pixels during the exposure period. Decisions on the readout gain and exposure time settings are made based on the time intervals when the required number of pixels Pr reach the reference level Vref (if they reach the level at all). Adjustments to certain parameters, including the reference level Vref, maximum exposure time tm and gain, may be made during the exposure period.


Before execution of method 200 begins, the required number of pixels at the predefined signal level Pr must be set. In addition, the preferred exposure time t1 and maximum exposure time tm must also be set. The values for the required number of pixels Pr, preferred exposure time t1 and maximum exposure time tm are application specific and the invention is not to be limited to any specific values for these parameters. The maximum exposure time tm limits the exposure duration to prevent blurring of the images. As will become apparent, the timing values used to determine changes in the reference level (Vref) and gain are determined based on the preferred exposure time t1.


Execution of method 200 begins by setting the reference level Vref (step 202). In a desired embodiment, Vref is set to Vfs/Gmax, where Vfs is the full scale signal and Gmax is the maximum gain that can be used. An exemplary value for Vfs is 1V and an exemplary value for Gmax is 4. Once the reference level Vref is set, the exposure is started and the current exposure time t is set to 0 (step 204). It should be noted that how the exposure is started is application specific. For example, in a swallowable pill application, or any application with its own light source, the exposure is started by turning on the light source. For other applications where the light is continuous, the exposure period is the integration period. As such, the start of the exposure period is the start of the integration period (which could be activated by a shutter or some other method known in the art).
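
Using the exemplary values given above, the initial reference level of step 202 works out as in this small sketch (illustrative only):

    V_FS = 1.0                      # full scale signal Vfs, volts
    G_MAX = 4.0                     # maximum usable gain Gmax
    v_ref_initial = V_FS / G_MAX    # Vref = Vfs / Gmax = 0.25 V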


All of the predefined pixels are scanned (step 206) during an exposure (or integration period). The pixel signals Vsig from all the predefined scanned pixels are sent via a respective column line to a respective comparator. Once all of the pixels are scanned, the present time t is compared to the maximum exposure time tm (step 208). If the present time t is greater than the maximum exposure time tm, the method continues at step 218 where the gain is set to maximum gain Gmax. The exposure is stopped (i.e., the digital block 166 of FIG. 2 outputs the illumination stop signal ILLUMINATION STOP to turn off the illumination devices or to end the integration period, depending upon the application) (step 220) and the method 200 ends.


The new gain setting is reflected as line ‘a’ in FIG. 4, which is a graph of gain settings and pixel output level versus time. In FIG. 4, solid lines 402, 404 and 406 reflect respective gain limits for the various pixel output versus time combinations. Specifically, line 402 reflects the gain limit set to the minimum gain Gmin, line 404 represents the gain limit G2 (a gain halfway between the maximum and minimum) and line 406 reflects the gain limit set to the maximum gain Gmax.


Referring again to FIG. 3, if at step 208 the present time t is not greater than the maximum exposure time tm, the method continues at step 210 where, for each predefined pixel, each comparator determines if the pixel signal Vsig is greater than the reference level Vref. If a required number of pixels Pr of the predefined number of pixels do not have a pixel signal Vsig that is greater than the reference level Vref (step 210), the method 200 continues at step 206 where all of the predefined pixels are scanned once again.


If the required number of pixels Pr of the predefined number of pixels have a pixel signal Vsig that is greater than the reference level Vref (step 210), the method 200 continues at step 212 to determine the appropriate light control action.


If the present time t is less than t1/Gmax, the readout gain is set to the minimum gain Gmin and the reference level Vref is set to Vfs (step 214). The new gain setting is reflected as line ‘b’ in FIG. 4. The exposure is allowed to continue. As such, the method 200 continues at step 222 where all of the predefined pixels are scanned again. At step 224 it is determined, for each predefined pixel, if the pixel signal Vsig is greater than the new reference level Vref. If a required number of pixels Pr of the predefined number of pixels do not have a pixel signal Vsig that is greater than the reference level Vref (step 224), the method 200 continues at step 226 to determine if the present time t is greater than the maximum exposure time tm.


If it is determined that the present time t is not greater than the maximum exposure time tm, the method 200 continues at step 222. If it is determined that the present time t is greater than the maximum exposure time tm (step 226) or that the required number of pixels Pr have a pixel signal Vsig that is greater than the reference level Vref (step 224), the exposure is stopped (step 220) and the method 200 terminates.


If at step 212 it is determined that t1/Gmax < t < t1·G2/Gmax, the readout gain is set to G2 (i.e., the gain halfway between the maximum and minimum gains), the reference level Vref is set to Vfs/G2 (step 216), and the exposure is allowed to continue. As such, the method 200 continues at step 222 where all of the predefined pixels are scanned again (as discussed above). The new gain setting is reflected as line ‘c’ in FIG. 4.


If at step 212 it is determined that t > t1·G2/Gmax, the readout gain is set to the maximum gain Gmax (step 218) and the exposure is stopped (step 220). The new gain setting is reflected as line ‘d’ in FIG. 4.
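
Steps 212 through 220 can be summarized in software form as follows. This is a hedged sketch of the decision logic only; the function name and return convention are assumptions, and the actual implementation is the analog and digital circuitry described above.

    def select_gain_and_vref(t, t1, g_min, g2, g_max, v_fs):
        # Called once the required number of pixels Pr has reached Vref at
        # elapsed exposure time t (step 212).
        if t < t1 / g_max:
            # Step 214: bright scene; minimum gain, raise the reference to the
            # full scale signal and let the exposure continue.
            return g_min, v_fs, False
        elif t < t1 * g2 / g_max:
            # Step 216: intermediate case; gain G2, reference Vfs/G2, and the
            # exposure continues.
            return g2, v_fs / g2, False
        else:
            # Steps 218 and 220: maximum gain and stop the exposure (no new
            # reference level is needed).
            return g_max, None, True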


Thus, the illumination on the pixels is monitored in real-time, with adjustments to exposure time duration and readout gain made if necessary. Present illumination on the pixels is determined in a non-destructive manner. That is, the signal level of the pixels is not altered or affected in any manner, so the eventual digital image reflects the image captured by the pixels. The method 200 conserves power by only utilizing the light source during the exposure period (as opposed to illuminating the light source prior to and longer than the exposure period).


In method 200, the rows are scanned sequentially, but the invention is not so limited. The columns are checked in parallel by comparing the pixel signals to the reference level in the comparators 160 (FIG. 2).


For CMOS image sensors, the pixel is typically reset before the exposure. As such, the pixel signal output level Vout begins at the reset voltage Vrst. When exposed to light, the pixel output signal level (in absolute voltage) gradually drops toward a ground potential during the integration/exposure period. Thus, the pixel signal Vsig is usually defined as Vsig=Vrst−Vout. The threshold level Vth is usually defined as Vth=Vrst−Vpix-th, where Vpix-th is the pixel threshold referred to ground.


The reference voltage presented to the comparators is the voltage (referred to ground) that represents the pixel output voltage Vpix-th (referred to ground) at the desired signal level Vth (referred to the reset level). Vsig is Vrst minus the pixel output level at any time, thus Vth=Vrst−Vpix-th. During processing, the reference level Vref is Vfs/gain, ideally referenced against Vrst. Vrst, however, is not available during the exposure. As such, an average reset level Vrst,mean is used during the exposure period. Vrst,mean is the average reset level from a set of dark (i.e., light shielded) pixels outside or at the border of the image area. The pixel signal level is given as the difference between the pixel reset level and the instantaneous pixel output voltage, and will thus be a positive voltage increasing from 0 during exposure.
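
A small numeric sketch of these definitions, using assumed example voltages, may make the ground-referred comparison easier to follow; comparing Vout against Vref referred to ground is equivalent to comparing Vsig against Vfs/gain.

    v_rst_mean = 2.0   # average reset level from dark pixels, volts (assumed)
    v_out = 1.4        # instantaneous pixel output during exposure (assumed)
    v_fs = 1.0         # full scale signal
    gain = 2.0         # current readout gain setting

    v_sig = v_rst_mean - v_out        # 0.6 V; grows from 0 during exposure
    v_ref = v_rst_mean - v_fs / gain  # 1.5 V; ground-referred comparator level
    exceeded = v_out < v_ref          # True here, i.e. Vsig > Vfs/gain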


During method 200, the results of the first scan of the predetermined pixels (which in the illustrated embodiment is 640 pixels) are used as a check for "white spots." These pixels are not allowed to contribute to the light control determinations affecting gain and exposure time settings. The method 200 may be modified to scan additional pixels to compensate for the "white spot" pixels. In addition, method 200 may include the option to check for a predetermined number of saturated pixels after each scanned line, or at other intervals based on selected rows, to terminate the scan before it completes. This option increases the exposure time resolution.


The supply voltage in the exposure period may be different from the supply voltage during pixel readout. This means that the pixel reset level may not be correct during exposure. The voltage reference generator 170 according to the invention (FIG. 5) compensates for this. The generator 170 includes several sample and hold switches 502, 504, 506, 512, 514, 516, capacitors 508, 518, 524, 528, 534, 544, three amplifiers 510, 520, 526 and additional switches S1, S2.


In the illustrated generator 170, a mean reset value Vrst,mean from a set of dummy pixels is sampled and stored on capacitor 508 just before the light is illuminated (or the integration period begins). A low droop rate is required, as the reset level Vrst,mean must remain valid throughout the complete light control method 200. To reduce leakage, the sampled value Vrst,mean is buffered in amplifier 510 and fed back to the switch 506 terminal and to the first sampling capacitor 508. The full scale level Vfs is sampled from a supply voltage source in an identical manner and a switched capacitor circuit (i.e., capacitors 524, 534, 544 and switches S1, S2) generates the reference Vref sent to the comparators. That is, Vref=Vrst,mean−Vfs/x, where x=gain.


In the illustrated embodiment of the generator 170, the generation of the reference Vref is done by subtracting the predefined fraction of the full scale signal Vfs from the average reset level Vrst,mean. It should be noted that the generation of the reference Vref may be based on addition or multiplication of currents, and the invention is not to be limited to the subtraction technique illustrated in FIG. 5. Vfs is divided by the readout gains 1, 2, or 4 according to the position of the switches S1, S2. The value is buffered by the third amplifier 526, which has its reference terminal connected to the Vrst,mean signal. Then, Vfs/x, where x=gain, becomes relative to Vrst,mean and the output becomes Vrst,mean−Vfs/x relative to ground, which is desirable.
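
Behaviorally, the reference path of FIG. 5 can be sketched as below. This models only the arithmetic (averaging the dark-pixel reset samples and forming Vrst,mean−Vfs/x), not the switched-capacitor circuit itself, and the function names are assumptions.

    def mean_reset_level(dark_reset_samples):
        # Vrst,mean: average reset level sampled from light-shielded pixels
        # just before the exposure begins.
        return sum(dark_reset_samples) / len(dark_reset_samples)

    def reference_voltage(v_rst_mean, v_fs, gain):
        # Vref = Vrst,mean - Vfs/x, with x in {1, 2, 4} selected by S1, S2.
        assert gain in (1, 2, 4)
        return v_rst_mean - v_fs / gain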



FIG. 6 illustrates another exemplary imager 610 constructed in accordance with another exemplary embodiment of the invention. The illustrated imager 610 compensates for comparator input offsets, which may be present in the imager 110 illustrated in FIG. 2. The illustrated imager 610 uses half the number of comparators 660_1, 660_2, . . . , 660_32 that are used in the FIG. 2 imager 110. The illustrated imager 610 compares columns in two consecutive phases. In phase one, the outputs from the first half of the columns (e.g., column 1, column 9, . . . , column 249) are input into the comparators 660_1, 660_2, . . . , 660_32 via input switches 661_1a, 661_2a, . . . , 661_32a and tested against the reference level Vref via input switches 661_1b, 661_2b, . . . , 661_32b. The results are output from the comparators 660_1, 660_2, . . . , 660_32 to the bit collection unit 662 via switches 663_1, 663_2, . . . , 663_32. In the second phase, the outputs from the second half of the columns (e.g., column 5, column 13, . . . , column 253) are input into the comparators 660_1, 660_2, . . . , 660_32 via input switches 661_1b, 661_2b, . . . , 661_32b and tested against the reference level Vref via input switches 661_1a, 661_2a, . . . , 661_32a. The results are output from an inverted output of the comparators 660_1, 660_2, . . . , 660_32 to the bit collection unit 662 via switches 663_1, 663_2, . . . , 663_32.


By swapping the input and output terminals of the comparators 660_1, 660_2, . . . , 660_32 between the two phases, potential offsets are substantially removed from the light control process. This improves the accuracy of the light control process of the invention.
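
The offset-reduction idea can be illustrated with the following toy model, in which each shared comparator has a fixed input-referred offset; because the offset shifts the effective threshold in opposite directions in the two phases, its net effect on the collected count is largely cancelled. The model is an illustration only and is not the circuit of FIG. 6.

    def comparator(plus, minus, offset):
        # Ideal comparator with a fixed offset referred to its '+' input.
        return (plus + offset) > minus

    def two_phase_count(phase1_cols, phase2_cols, v_ref, offsets):
        count = 0
        for v1, v2, off in zip(phase1_cols, phase2_cols, offsets):
            count += comparator(v1, v_ref, off)       # phase 1: normal output
            count += not comparator(v_ref, v2, off)   # phase 2: inverted output
        return count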



FIG. 7 shows system 700, a typical processor system modified to include an imaging device 708 constructed in accordance with an embodiment of the invention (i.e., imager 110 of FIG. 2, imager 610 of FIG. 6). The processor-based system 700 is exemplary of a system having digital circuits that could include image sensor devices. Without being limiting, such a system could include a computer system, camera system, scanner, machine vision, vehicle navigation, video phone, surveillance system, auto focus system, star tracker system, motion detection system, image stabilization system, and data compression system.


System 700, for example a camera system, generally comprises a central processing unit (CPU) 702, such as a microprocessor, that communicates with an input/output (I/O) device 706 over a bus 704. Imaging device 708 also communicates with the CPU 702 over the bus 704. The processor-based system 700 also includes random access memory (RAM) 710, and can include removable memory 715, such as flash memory, which also communicate with the CPU 702 over the bus 704. The imaging device 708 may be combined with a processor, such as a CPU, digital signal processor, or microprocessor, with or without memory storage on a single integrated circuit or on a different chip than the processor.



FIG. 8 shows another exemplary system 800 having a device 810 incorporating an imager chip 812 constructed in accordance with an embodiment of the invention (i.e., imager 110 of FIG. 2, imager 610 of FIG. 6). The imager chip 812 can include a photosensor array 814, photosensor interface 815, memory circuit 816, and a controller 820 integrated on the same silicon chip. The photosensor interface 815 can be controlled by the controller 820 for addressing the array 814. The system 800 is constructed and operated as described in U.S. Pat. No. 7,187,049.


The memory circuit 816 can communicate with the other operational circuits of the device 810, including, but not limited to, the controller 820 (e.g., an 8051 controller), a serializer module 824, extended special function registers (SFRs) 822, and an RF (radio frequency) transmitter 828. The memory circuit 816 is capable of storing operational information for the photosensor array 814 and all other circuitry incorporated into the device 810. Further, the memory circuit 816 is capable of storing images received by the photosensor array 814. The controller 820 operates as the "brain" of the device 810 using programming and/or data stored in the memory circuit 816, and/or in an internal ROM. The controller 820 can utilize the stored programs and/or data in controlling the acquiring of images, the storing of images, and the communication of images to an external system for viewing.


The CMOS photosensor array 814 can download captured images, like a camera. However, the CMOS photosensor array 814 of the invention can also download programming and/or operational information as data-input 834, such as software, programming, or other useful data. A user can select the data desired to be downloaded by utilizing a program command system 830, which can contain a collection of programs, instructions, software, or other data that can be utilized by the device 810. The program command system 830, which can be a standard computer, communicates to a photo-data generator 832, which can be any device capable of outputting light signals, for instance, a computer monitor (CRT) connected to a computer, or an LED unit. Preferably, the photo-data generator 832 can output light at various wavelengths (colors) and intensities, and in various patterns.


The photo-data generator 832 generates light 836, which is input to photosensor array 814 during a period when it is not acquiring images. This period can be controlled and designated by the controller 820. The light 836 can be varied in any means known in the art so that it corresponds to the data desired to be downloaded into the device 810. As an example, the light can be varied in color, where different colors or color patterns can be read by the photosensor array 814, stored in the memory circuit 816, and interpreted by the controller 820 of the device 810, via communication with the photosensor array 814, as different digital information (i.e., "1s" and "0s"). In this way, the memory circuit 816, and the device 810 in general, can be programmed by a user with the input of light 836 to the photosensor array 814.
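
As a conceptual example of this light-based programming, detected colors might be mapped to bits as in the sketch below; the two-color encoding is purely an assumption, since the patent does not specify a particular encoding scheme.

    COLOR_TO_BIT = {"red": 1, "blue": 0}   # hypothetical encoding

    def decode_light_sequence(color_samples):
        # Translate a sequence of colors read by the photosensor array into bits.
        return "".join(str(COLOR_TO_BIT[c]) for c in color_samples)

    # Example: decode_light_sequence(["red", "blue", "red"]) -> "101"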


The device 810 functions as an imager camera. The camera function of the device 810 is like that of any other CMOS imager camera, acquiring still frames or constant motion video. If necessary, the LED(s) 818 can function as light strobes during camera use, and be synchronized with the image acquisition by the photosensor array 814. Light 836 from the LED 818 can illuminate a subject 838 within an image area to be captured. The reflected light 836 from the illuminated subject 838 can be acquired by the photosensor array 814. The images acquired by the photosensor array 814 are communicated to and translated by the serializer module 824 into a format for image output.


The memory circuit 816 can store programming and/or data so that the controller 820 can use the input programs and/or data acquired during the data input operation to direct the operation of the photosensor array 814, the serializer module 824, and the extended SFRs 822 (all of which can be in communication with the memory circuit 816 and controller 820) for image capture, storage, processing, and output.


At a desired time, or on an ongoing basis, the stored images can be translated into an RF data output 840 generated by an RF transmitter 828 in communication with the serializer module 824 under control of the controller 820. The images, as RF data output 840, are transmitted to an RF data receiver 842. The RF data receiver 842 is in communication with the program command system 830 so that a user can receive the images acquired by the photosensor array 814 for viewing, for example on the same computer monitor (i.e., photo-data generator 832) that could be used to initially program the device 810. In one desired embodiment, the device 810 is incorporated into a swallowable pill as described in U.S. Pat. No. 7,187,049.


The processes and devices described above illustrate preferred methods and typical devices of many that could be used and produced. The above description and drawings illustrate embodiments, which achieve the objects, features, and advantages of the present invention. However, it is not intended that the present invention be strictly limited to the above-described and illustrated embodiments. Any modification, though presently unforeseeable, of the present invention that comes within the spirit and scope of the following claims should be considered part of the present invention.

Claims
  • 1. A method of automatic exposure control for an imager comprising an array of pixels, the method comprising the acts of: setting a reference pixel signal level; starting an exposure period; obtaining pixel signals from a subset of pixels in the array; comparing a value of each pixel signal to the reference pixel signal level; determining if a predetermined number of pixels have a pixel signal value greater than the reference pixel signal level; comparing a current exposure time to a maximum exposure time; and based on the exposure time comparison and a result of the determining, adjusting at least one exposure control parameter of the imager, wherein the reference pixel signal level is initially set to be equal to a value of a full scale signal divided by a maximum gain value.
  • 2. The method of claim 1, further comprising the act of ending the exposure period if the current exposure time is greater than or equal to the maximum exposure time.
  • 3. The method of claim 1, further comprising the act of ending the exposure period if the predetermined number of pixels have a pixel signal value greater than the reference pixel signal level.
  • 4. The method of claim 1, further comprising the act of repeating the obtaining act through the adjusting act if the current exposure time is less than the maximum exposure time.
  • 5. The method of claim 1, wherein the at least one exposure control parameter is a gain value.
  • 6. The method of claim 1, wherein the at least one exposure control parameter is the maximum exposure time.
  • 7. The method of claim 1, wherein the at least one exposure control parameter is the reference pixel signal level.
  • 8. The method of claim 1, wherein the adjusting act comprises setting a readout gain to a maximum gain and the method further comprising ending the exposure period.
  • 9. The method of claim 1, wherein the act of starting the exposure period comprises illuminating an illumination device.
  • 10. The method of claim 1, wherein the act of starting the exposure period comprises beginning an integration period.
  • 11. The method of claim 1, wherein the act of obtaining comprises the acts of: scanning rows of pixels in the array; and inputting pixel signals from a set of columns of the array.
  • 12. The method of claim 1, wherein the reference pixel signal level is a level based on a full scale voltage level and a mean of a reset level.
  • 13. A method of automatic exposure control for an imager comprising an array of pixels, the method comprising the acts of: setting a reference pixel signal level; starting an exposure period; obtaining pixel signals from a subset of pixels in the array; comparing a value of each pixel signal to the reference pixel signal level, determining if a predetermined number of pixels have a pixel signal value greater than the reference pixel signal level; comparing a current exposure time to a maximum exposure time; and based on the exposure time comparison and a result of the determining, adjusting at least one exposure control parameter of the imager, wherein, if it is determined that the predetermined number of pixels have a pixel signal value greater than the reference signal value level and the current exposure time is less than the maximum exposure time, the adjusting act comprises setting a readout gain of the imager to a minimum gain value and setting the reference pixel signal value to a value of a full scale signal.
  • 14. A method of automatic exposure control for an imager comprising an array of pixels, the method comprising the acts of: setting a reference pixel signal level; starting an exposure period; obtaining pixel signals from a subset of pixels in the array; comparing a value of each pixel signal to the reference pixel signal level; determining if a predetermined number of pixels have a pixel signal value greater than the reference pixel signal level; comparing a current exposure time to a maximum exposure time; and based on the exposure time comparison and a result of the determining, adjusting at least one exposure control parameter of the imager, wherein, if it is determined that a predetermined number of pixels have a pixel signal value greater than the reference pixel signal level and the current exposure time is within a predetermined time interval, the adjusting act comprises setting a readout gain to a gain value between a maximum gain value and a minimum gain value and setting the reference pixel signal level to a value of a full scale signal divided by the readout gain value set to be between the maximum gain and the minimum gain value.
  • 15. The method of claim 14, wherein the predetermined time interval spans a first time based on a preferred exposure time and a maximum gain value and a second time based on the preferred exposure time and the gain between the maximum and minimum gains.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 10/846,513, filed on May 17, 2004, now U.S. Pat. No. 7,605,852, issued on Oct. 20, 2009, the subject matter of which is incorporated in its entirety by reference herein.

US Referenced Citations (140)
Number Name Date Kind
2240312 Mintum Feb 1940 A
3683389 Hollis Aug 1972 A
3723644 Haskell et al. Mar 1973 A
3971362 Pope et al. Jul 1976 A
3984628 Sharp Oct 1976 A
4273431 Farmer et al. Jun 1981 A
4278077 Mizumoto Jul 1981 A
4310228 Terada et al. Jan 1982 A
4428005 Kubo Jan 1984 A
4471228 Nishizawa Sep 1984 A
4532918 Wheeler Aug 1985 A
4539603 Takeuchi et al. Sep 1985 A
4631582 Nagasaki et al. Dec 1986 A
4646724 Sato et al. Mar 1987 A
4689621 Kleinberg Aug 1987 A
4741327 Yabe May 1988 A
4786982 Wakahara et al. Nov 1988 A
4841291 Swix et al. Jun 1989 A
4844076 Lesho et al. Jul 1989 A
4854328 Pollack Aug 1989 A
4868645 Kobayashi Sep 1989 A
5187572 Nakamura et al. Feb 1993 A
5202961 Mills et al. Apr 1993 A
5279607 Schentag et al. Jan 1994 A
5351161 MacKay et al. Sep 1994 A
5355450 Garmon et al. Oct 1994 A
5374953 Sasaki et al. Dec 1994 A
5408263 Kikuchi et al. Apr 1995 A
5423312 Siegmund et al. Jun 1995 A
5486861 Miyamoto et al. Jan 1996 A
5519828 Rayner May 1996 A
5572252 Naka et al. Nov 1996 A
5585840 Watanabe et al. Dec 1996 A
5596366 Takashima et al. Jan 1997 A
5604531 Iddan et al. Feb 1997 A
5643175 Adair Jul 1997 A
5678568 Uchikubo et al. Oct 1997 A
5730702 Tanaka et al. Mar 1998 A
5738110 Beal et al. Apr 1998 A
5749830 Kaneko et al. May 1998 A
5809161 Auty et al. Sep 1998 A
5819736 Avny et al. Oct 1998 A
5833603 Kovacs et al. Nov 1998 A
5853005 Scanlon Dec 1998 A
5873830 Hossack et al. Feb 1999 A
5875280 Takaiwa et al. Feb 1999 A
6053873 Govari et al. Apr 2000 A
6074349 Crowley Jun 2000 A
6140630 Rhodes Oct 2000 A
6165128 Cespedes et al. Dec 2000 A
6181830 Sato Jan 2001 B1
6204524 Rhodes Mar 2001 B1
6219091 Yamanaka et al. Apr 2001 B1
6240312 Alfano et al. May 2001 B1
6254531 Higuchi et al. Jul 2001 B1
6310366 Rhodes et al. Oct 2001 B1
6310642 Adair et al. Oct 2001 B1
6326652 Rhodes Dec 2001 B1
6328212 Metlitasky et al. Dec 2001 B1
6333205 Rhodes Dec 2001 B1
6351606 Yamazaki Feb 2002 B1
6364829 Fulghum Apr 2002 B1
6376868 Rhodes Apr 2002 B1
6402689 Scarantino et al. Jun 2002 B1
6428469 Iddan et al. Aug 2002 B1
6462770 Cline et al. Oct 2002 B1
6584348 Glukhovsky Jun 2003 B2
6607301 Glukhovsky et al. Aug 2003 B1
6635834 Wenner Oct 2003 B1
6636263 Oda Oct 2003 B2
6667765 Tanaka Dec 2003 B1
6690412 Higo Feb 2004 B1
6709387 Glukhovsky et al. Mar 2004 B1
6764440 Iddan et al. Jul 2004 B2
6831689 Yadid-Pecht Dec 2004 B2
6865718 Montalcini Mar 2005 B2
6900790 Doi et al. May 2005 B1
6934573 Glukhovsky et al. Aug 2005 B1
6939292 Mizuno et al. Sep 2005 B2
6947788 Gilboa et al. Sep 2005 B2
6984205 Gazdzinski Jan 2006 B2
7009634 Iddan et al. Mar 2006 B2
7022067 Glukhovsky et al. Apr 2006 B2
7053954 Canini May 2006 B1
7099056 Kindt Aug 2006 B1
7104952 Iddan et al. Sep 2006 B2
7112752 Wenner Sep 2006 B1
7118529 Glukhovsky et al. Oct 2006 B2
7214182 Shimizu et al. May 2007 B2
7228166 Kawasaki et al. Jun 2007 B1
7245320 Kaplinsky et al. Jul 2007 B2
7295226 Meron et al. Nov 2007 B1
7316647 Kimoto et al. Jan 2008 B2
7355625 Mochida et al. Apr 2008 B1
7419468 Shimizu et al. Sep 2008 B2
7511133 Baraldi et al. Mar 2009 B2
7511733 Takizawa Mar 2009 B2
20010017649 Yaron Aug 2001 A1
20010051766 Gazdzinski Dec 2001 A1
20020042562 Meron et al. Apr 2002 A1
20020093484 Skala et al. Jul 2002 A1
20020103425 Mault Aug 2002 A1
20020132226 Nair et al. Sep 2002 A1
20020158976 Vni et al. Oct 2002 A1
20020171669 Meron et al. Nov 2002 A1
20020198439 Mizuno Dec 2002 A1
20030040685 Lewkowicz et al. Feb 2003 A1
20030043263 Glukhovsky et al. Mar 2003 A1
20030077223 Glukhovsky et al. Apr 2003 A1
20030114742 Lewkowicz et al. Jun 2003 A1
20030117491 Avni et al. Jun 2003 A1
20030174208 Glukhovsky et al. Sep 2003 A1
20030195415 Iddan Oct 2003 A1
20030197795 Schinner Oct 2003 A1
20030210334 Sarwari Nov 2003 A1
20030210439 Sarwari Nov 2003 A1
20030211405 Venkataraman Nov 2003 A1
20040073087 Glukhovsky et al. Apr 2004 A1
20040087832 Glukhovsky et al. May 2004 A1
20040111011 Uchiyama et al. Jun 2004 A1
20040115877 Iddan et al. Jun 2004 A1
20040180391 Gratzi et al. Sep 2004 A1
20040210105 Hale et al. Oct 2004 A1
20040254455 Iddan Dec 2004 A1
20050110881 Glukhovsky et al. May 2005 A1
20050148816 Glukhovsky et al. Jul 2005 A1
20050171418 Lin Aug 2005 A1
20050183733 Kawano et al. Aug 2005 A1
20050288594 Lewkowicz et al. Dec 2005 A1
20060082648 Iddan et al. Apr 2006 A1
20060155174 Glukhovsky et al. Jul 2006 A1
20060158512 Iddan et al. Jul 2006 A1
20060164511 Krupnik Jul 2006 A1
20060184039 Avni et al. Aug 2006 A1
20060217593 Gilad et al. Sep 2006 A1
20070106111 Horn et al. May 2007 A1
20070225560 Avni et al. Sep 2007 A1
20080103363 Levy et al. May 2008 A1
20080269664 Trovato et al. Oct 2008 A1
20100110168 Avni et al. May 2010 A1
Foreign Referenced Citations (47)
Number Date Country
34 40 177 May 1986 DE
2 374 149 Oct 2002 GB
47-004376 Feb 1972 JP
47-041473 Dec 1972 JP
55-121779 Sep 1980 JP
57-45833 Mar 1982 JP
64-086933 Mar 1989 JP
H03-289779 Dec 1991 JP
41-09927 Apr 1992 JP
H04-109927 Apr 1992 JP
H04-180736 Jun 1992 JP
50-15515 Jan 1993 JP
07-275200 Oct 1995 JP
08-313823 Nov 1996 JP
H10-112118 Apr 1998 JP
11-111795 Apr 1999 JP
11-290269 Oct 1999 JP
11-305144 Nov 1999 JP
2001-025004 Jan 2001 JP
2001-112740 Apr 2001 JP
2001-203910 Jul 2001 JP
2001-224553 Aug 2001 JP
2003-093328 Apr 2003 JP
2005-156215 Jun 2005 JP
2005-156216 Jun 2005 JP
WO 9221307 Dec 1992 WO
WO 9811816 Mar 1998 WO
WO 9930610 Jun 1999 WO
WO 0074391 Dec 2000 WO
WO 0076391 Dec 2000 WO
WO 0108548 Feb 2001 WO
WO 0135813 May 2001 WO
WO 0150941 Jul 2001 WO
WO 0165995 Sep 2001 WO
WO 0187377 Nov 2001 WO
WO 02073507 Sep 2002 WO
WO 02080376 Oct 2002 WO
WO 02082979 Oct 2002 WO
WO 02095351 Nov 2002 WO
WO 02102224 Dec 2002 WO
WO 03009739 Feb 2003 WO
WO 03011103 Feb 2003 WO
WO 03094723 Nov 2003 WO
WO 2004004540 Jan 2004 WO
WO 2004045395 Jun 2004 WO
WO 2004082472 Sep 2004 WO
WO 2004082472 Sep 2004 WO
Related Publications (1)
Number Date Country
20100073512 A1 Mar 2010 US
Divisions (1)
Number Date Country
Parent 10846513 May 2004 US
Child 12560135 US