Dynamic range compression method

Information

  • Patent Grant
  • 7336309
  • Patent Number
    7,336,309
  • Date Filed
    Tuesday, July 3, 2001
  • Date Issued
    Tuesday, February 26, 2008
Abstract
A method for compressing the dynamic range of an image sensor (202) including a multiplicity of pixels. The method includes the steps of exposing each of the pixels to light and producing an associated photocurrent per pixel, representative of the light exposure. Then, on a per-pixel basis, controlling exposure time of each of the pixels on the basis of a monotonically rising convex function of the associated photocurrent of each of the pixels.
Description
FIELD OF THE INVENTION

The present invention relates to methods and apparatus for controlling pixels in image sensors generally and per pixel manipulation in particular.


BACKGROUND OF THE INVENTION


FIG. 1 depicts a prior art charge integration pixel 10 located at the i-th row and the j-th column of a vision sensor (not shown). Pixel 10 comprises a photodetector represented by current source Iph(i,j), a switch 12, a capacitor 14 and a readout circuit 16. Switch 12 may be any type of sample-and-hold device, such as a semiconductor switch. Capacitor 14 functions as a storage device or, specifically, as an integration capacitor.


Pixel 10 generates photocurrent Iph(i,j), which is generally proportional to the intensity of the light impinging on the photodetector. When closed, switch 12 conducts photocurrent Iph(i,j) and charges the capacitor 14. In some embodiments, prior to charge integration, capacitor 14 is completely discharged. In this instance, the initial voltage Vinitial across capacitor 14 is 0, and the voltage rises linearly with the accumulation of charge.


If photocurrent Iph(i,j) is time invariant during charge integration, and if switch 12 is closed for an exposure time t=tE(i,j), then the accumulated charge Qa may be calculated as per equation (1).

Qa(i,j)=Iph(i,j)·tE(i,j)  (1)


The accumulated voltage Vc(i,j) across capacitor 14 is:











Vc(i,j)=Iph(i,j)·tE(i,j)/CI  (2)

where CI is the capacitance of capacitor 14.
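As a minimal numerical illustration of equations (1) and (2), the Python sketch below computes the accumulated voltage for a single pixel; the function name and the sample values are illustrative only.

```python
def accumulated_voltage(i_ph: float, t_e: float, c_i: float) -> float:
    """Return Vc = Iph * tE / CI for a time-invariant photocurrent (equations (1) and (2))."""
    q_a = i_ph * t_e   # accumulated charge Qa, equation (1)
    return q_a / c_i   # voltage across the integration capacitor, equation (2)

# Example: a 1 pA photocurrent integrated for 10 ms on a 100 fF capacitor.
print(accumulated_voltage(1e-12, 10e-3, 100e-15))  # approximately 0.1 (volts)
```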


Proof for the following is described in full in the attached Appendix. Herein are some of the equations necessary for understanding the present invention. For ease of understanding, the equation numbers in the specification correspond to those in the Appendix.


The ratio between the saturation voltage VcSat and the cutoff voltage VcCO is defined in equation (5) as the electrical signal dynamic range DRS.










DRS=VcSat/VcCO  (5)

From the Appendix it can be seen that for conventional sensors with global exposure time setting, the captured image dynamic range DRL can be defined as in equation (12),










DRL=VcSat/VcCO  (12)


For prior art image sensors with globally set exposure time, the captured image dynamic range DRL is equal to the electrical signal dynamic range DRS and is exposure time invariant.


Some image sensors have individually per-pixel-controlled electronic shutters, such as those described in U.S. patent applications Ser. No. 09/426,452 “Image Sensor's Unit Cell with Individually Controllable Electronic Shutter Circuit” and Ser. No. 09/516,168 “Image Sensor Architecture for Per-Pixel Charge Integration Control”. For those image sensors, the captured image dynamic range can be shown by equation (17):

DRL=DRS·DRT  (17)


where the shutter, or exposure time, dynamic range DRT is the ratio of the maximum to the minimum exposure time tE.










DRT=tEmax/tEmin  (18)


One result from (12) and (17) is that, for image sensors with per-pixel-controlled electronic shutters, the captured scene dynamic range DRL may be at most the exposure time dynamic range DRT times better than the prior art image sensor's dynamic range. For instance, if the electrical signal dynamic range DRS is 1,000:1 and the exposure time dynamic range DRT is the same, then the captured scene dynamic range DRL can be 1,000,000:1, or about 120 dB. Thus there is still an incentive to improve the electrical signal dynamic range, since it directly affects the results for image sensors with per-pixel-controlled electronic shutters.
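Restating that arithmetic explicitly, the short sketch below checks equation (17) with the values quoted above; the variable names are illustrative.

```python
import math

dr_s = 1000.0            # electrical signal dynamic range, 1,000:1
dr_t = 1000.0            # exposure time dynamic range, tEmax / tEmin
dr_l = dr_s * dr_t       # equation (17)
print(f"{dr_l:,.0f}:1")              # 1,000,000:1
print(20 * math.log10(dr_l), "dB")   # 120.0 dB
```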


In image sensors with per-pixel exposure time control, since the dynamic range is time dependent and can be significantly improved by efficient management of the exposure time tE, there is a need to develop a method and apparatus for defining a generally optimal exposure time per cell, or pixel.


SUMMARY

There is therefore provided, in accordance with an embodiment of the present invention, a method for compressing the dynamic range of an image sensor including a multiplicity of pixels. The method includes the steps of exposing each of the pixels to light and producing an associated photocurrent per pixel, representative of the light exposure. Then, on a per-pixel basis, controlling exposure time of each of the pixels on the basis of a monotonically rising convex function of the associated photocurrent of each pixel.


The method may further include calculating the exposure time of each pixel on the basis of the monotonically rising convex function and/or calculating the exposure time of each pixel on the basis of the associated photocurrent. The method may also include calculating a first derivative of the monotonically rising convex function based upon a desired sensitivity of the exposure time of each pixel.


The step of controlling may include storing the calculated exposure times per pixel in a memory, accessing the stored exposure times from the memory, and programming the pixels according to the accessed exposure times.


The method may further include accumulating charge generally linearly representative of the photocurrent and calculating the photocurrent as a product of the accumulated charge.


There is therefore provided, in accordance with an embodiment of the present invention, an exposure controller for use in an image sensor including a multiplicity of pixels. The controller may include a memory, a per-pixel parameter table and a processor. The memory may store calculated exposure time values. The per-pixel parameter table may contain local parameters of the pixels. The processor may combine, based on a set of convergence criteria, the stored exposure time values and the parameters in order to determine, on a per-pixel basis, an exposure time of the pixels.


The set of convergence criteria may be based on a set of algorithms capable of deriving an appropriate exposure time for every pixel based on the monotonically rising convex function. The set of algorithms may be capable of deriving a fine tuned exposure time of the pixels.


The exposure controller may be a chip-set external to the image sensor, or alternatively, at least partially integrated on the same substrate as the image sensor.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings, in which:



FIG. 1 depicts a prior-art charge integration pixel located at the i-th row and a j-th column of a vision sensor;



FIG. 2 is a graphic illustration (prior art) of voltage Vc as a function of Iph and exposure time tE;



FIG. 3 depicts the dynamic range compression function Vc=f(Iph);


FIGS. 4A/B detail one embodiment of an initial setup algorithm;



FIG. 5 details one embodiment of a fine-tune algorithm to be used in conjunction with the algorithm of FIGS. 4A/B;



FIG. 6 is a block diagram representing the present invention as constructed and operated on an image sensor that facilitates individually controlled pixels.





DETAILED DESCRIPTION

The present invention is a method for controlling dynamic range compression via usage of a monotonically rising convex function. The present specification outlines a set of methods, analytical and tabular functions, and algorithms for capturing images. The methods and algorithms described herein may be used in conjunction with the image sensors described in U.S. patent applications Ser. No. 09/426,452 "Image Sensor's Unit Cell with Individually Controllable Electronic Shutter Circuit" and Ser. No. 09/516,168 "Image Sensor Architecture for Per-Pixel Charge Integration Control", or with other image sensors capable of per-pixel electronic shutter control. The present invention may be applied to dynamic range compression generally.


In one embodiment of the present invention each pixel's exposure is controlled on the basis of its photocurrent, and is done in a way that allows implementation of a desirable dynamic range compression function.


Another embodiment of the present invention is a per-pixel exposure control hardware/firmware/software apparatus and an exposure time processor usable to derive the per-pixel electronic shutter value.


The present invention defines a class of dynamic range compression functions selectable from a generally infinite set of functions. The class of functions exclusively used for compression is mathematically defined as monotonically rising convex functions, or functions which fulfill the following conditions: for Iph1>Iph2, f(Iph1)>f(Iph2), and f′(Iph1)<f′(Iph2), where f′ is the first derivative of the function f with respect to its argument Iph. Thus the derivative function is a monotonically falling function.
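The following sketch is an illustrative numerical check of those two conditions for a candidate compression function; the sample function and current range are hypothetical, not taken from the patent.

```python
import numpy as np

def is_monotonically_rising_convex(f, iph_co, iph_sat, samples=1000):
    iph = np.linspace(iph_co, iph_sat, samples)
    values = f(iph)
    deriv = np.gradient(values, iph)
    rising = np.all(np.diff(values) > 0)             # f(Iph1) > f(Iph2) for Iph1 > Iph2
    falling_derivative = np.all(np.diff(deriv) < 0)  # f'(Iph1) < f'(Iph2) for Iph1 > Iph2
    return bool(rising and falling_derivative)

# Example: a logarithmic response of the kind f(Iph) = c1 + c2*log10(Iph).
print(is_monotonically_rising_convex(lambda x: 0.1 + 0.2 * np.log10(x), 1e-14, 1e-11))  # True
```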


Out of an infinite range of possibilities, only the logarithmic function of the kind f(Iph)=c1+c2·log10 (Iph) is known to be used in prior art image sensors. This function is implemented in logarithmic sensors; thus the sensors are capable of a logarithmic response but are incapable of individual per-pixel electronic shutter control.



FIG. 3 depicts three different functions for varying exposure times tE, including a representation of a dynamic range compression function Vc=f(Iph) useful in the shaping of the response in a pixel or unit cell. Typically, the function may be a monotonically rising convex function of Iph.


A linear representation 30 illustrates the accumulated voltage Vc when the exposure time tE=tEMIN. A linear representation 34 illustrates the accumulated voltage Vc when the exposure time tE=tEMAX. Finally, a representation 32 illustrates the accumulated voltage Vc according to the monotonically rising convex function. For representation 32, when photocurrents Iph1>Iph2 lie in the range from the cutoff photocurrent IphCO to the saturation photocurrent IphSat:

f(Iph1)>f(Iph2), and  (23)













∂f(Iph)/∂Iph|Iph=Iph1 < ∂f(Iph)/∂Iph|Iph=Iph2  (24)


The monotonically rising characteristics of the function ensure that original scene features are preserved.


The monotonically falling characteristics of the first derivative result in larger accumulated voltage Vc increments for photocurrents Iph closer to the cutoff photocurrent IphCO, which equates to weaker light intensity signals. Conversely, the voltage Vc increments are smaller for photocurrents Iph closer to saturation photocurrent IphSat, which equates to stronger signals. This results in generating greater signal swings for the weaker signals around the level of cutoff photocurrent IphCO and smaller swings for the stronger signals around the saturation photocurrent IphSat.


The first derivative of the function thus gives the change in the accumulated voltage Vc resulting from a change in photocurrent Iph, and may be defined as a sensitivity function S(Iph), where,










S(Iph)=∂f(Iph)/∂Iph  (25)


For some applications it is desirable that the sensitivity be high at or close to the cutoff photocurrent IphCO. This is advantageous even allowing for the tradeoff of less sensitivity closer to the saturation photocurrent IphSat where stronger light-intensity swings are anticipated.


Please note that although the sensitivity itself is compression function dependent, the maximum captured scene dynamic range DRL is not, as has been demonstrated in formulas (17) and (18).


Another feature of the monotonically rising convex function is that there is a one-to-one correspondence between the accumulated voltage Vc values and the photocurrent Iph values. The monotonically rising convex function is invertible only in the range between the cutoff and saturation photocurrents IphCO and IphSat. This means that there exists an inverse function f−1 for which,

Iph=f−1(Vc), in the range [VcCO, VcSat]  (26)


It is important to note that this feature enables the calculation of photocurrent Iph on the basis of a measurable voltage Vc (FIG. 1). This, in turn, allows for determining the exposure time tE for every pixel, as will be described hereinbelow.
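A minimal sketch of that calculation follows, under the assumption that matching the compression function means choosing tE so that the accumulated voltage Iph·tE/CI lands on f(Iph); the patent does not give these helpers by name.

```python
def photocurrent_from_voltage(v_c: float, t_e: float, c_i: float) -> float:
    """Invert equation (2): Iph = Vc * CI / tE, valid for Vc in [VcCO, VcSat]."""
    return v_c * c_i / t_e

def exposure_for_compression(i_ph: float, f, c_i: float) -> float:
    """Choose tE so that the accumulated voltage Iph * tE / CI equals f(Iph)."""
    return f(i_ph) * c_i / i_ph
```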


The dynamic range compression function can be selected to meet the required performance or the observer's taste. Included within the scope of this invention are other applicable functions of the monotonically rising, convex type, which can be presented either in an analytical or in a tabular form. While the prior art has emphasized the use of logarithmic functions, it is asserted that other monotonically rising convex functions are applicable as well.


The logarithmic function, such as that followed by the response of the human eye's retina, is well known and widely used. For the logarithmic function, which spans the entire dynamic range, it can be demonstrated that,










Vc(Iph)=f(Iph)=[VcSat·log10(Iph/IphCO)+VcCO·log10(IphSat/Iph)]/log10(IphSat/IphCO), and  (27)

S(Iph)=(VcSat−VcCO)/[ln(10)·log10(IphSat/IphCO)]·(1/Iph)  (28)


Thus, for a logarithmic dynamic range compression function, the sensitivity S is inversely proportional to photocurrent Iph or inversely proportional to the light intensity.
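The sketch below implements the logarithmic curve of equation (27) and its sensitivity, equation (28), as reconstructed above; all parameter values are hypothetical.

```python
import math

def log_compression(i_ph, v_sat, v_co, i_sat, i_co):
    span = math.log10(i_sat / i_co)
    return (v_sat * math.log10(i_ph / i_co) + v_co * math.log10(i_sat / i_ph)) / span

def log_sensitivity(i_ph, v_sat, v_co, i_sat, i_co):
    return (v_sat - v_co) / (math.log(10) * math.log10(i_sat / i_co)) / i_ph

# The curve meets VcCO at the cutoff current and VcSat at saturation,
# and the sensitivity falls as 1/Iph in between.
print(log_compression(1e-14, 1.0, 0.1, 1e-11, 1e-14))  # ~0.1  (VcCO)
print(log_compression(1e-11, 1.0, 0.1, 1e-11, 1e-14))  # ~1.0  (VcSat)
```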


One method for shaping the dynamic range to meet the required compression function is described now. The present invention describes a method and apparatus for setting the exposure time tE for each pixel to accomplish a high dynamic range which complies with the selected dynamic range compression function on one hand and with the captured scene on the other.


The exposure time setup may be divided into two parts, an initial setup and fine tuning, illustrated in FIGS. 4A/B and 5 respectively.


Reference is now made to FIGS. 4A/B, a block diagram illustrating an initial setup process 40. Process 40 comprises three phases: a charge integration phase (steps 42-56), an identification of always-off pixels phase (steps 58-80) and an identification of always-on pixels phase (steps 82-102).


Charge integration phase: Process 40 commences (step 42) with selection of the pixel at location (i=0, j=0) (step 44). The exposure time tE of the pixel at (i=0, j=0) is set to tEMIN (step 46). Via steps 48-54, process 40 increments from pixel to pixel along both the V and the H axes, setting the exposure time tE of each pixel to tEMIN. The loop of steps 46-54 is completed when the exposure time tE of the last pixel at location (i=H−1, j=V−1) (steps 48 and 54, respectively) is set to tEMIN. In step 56 an image is captured and there is charge integration of all the pixels.


It is noted that, similarly, process 40 could commence with initializing all the pixels to tEmax rather than to tEmin.


Identification of the “Always off” Pixels (always saturated): This phase commences with selection of the pixel at location (i=0, j=0) (step 58). In step 60, the voltage Vc of the pixel is read. If the voltage Vc accumulated (during the exposure time tEmin) is greater than or equal to the saturation value VcSat (step 62), the pixel's exposure setting is left unchanged at tEmin.


It is noted that the assumption is that the voltage Vc(i,j) across the integration capacitor is directly measurable. Alternatively, even without this assumption, an output voltage VOut(i,j), the video output signal, may represent Vc(i,j). There are also VOut values that correspond to VcCO and VcSat.


For pixels whose voltage Vc(i,j) is in the linear range, i.e. VcCO≦Vc(i,j)≦VcSat (step 64), the photocurrent Iph is calculated (step 66), and the exposure time tE is adjusted (step 68) to match the dynamic range compression function f(Iph). Pixels that fall below the cutoff (step 64) are assigned the maximal exposure time tEmax (step 70).


Via steps 72-78, process 40 increments from pixel to pixel along both the V and the H axes, repeating steps 60-70, as appropriate, until the last pixel at location (i=H−1, j=V−1) (steps 72 and 78, respectively) is assigned an exposure time. In step 80 an image is captured and there is charge integration of all the pixels.


Identification of the “Always On” Pixels: This phase commences with selection of the pixel at location (i=0, j=0) (step 82). Only pixels that were at cutoff in the previous cycle are checked, i.e. those pixels whose exposure time was set to tEmax in step 70. In an alternative embodiment, all the pixels may be rechecked.


In step 84, the voltage Vc of the pixel is read. If the measured Vc(i,j) is below the cutoff (Vc(i,j)<VcCO) (step 88), the exposure time is left at the maximum, i.e. tEmax. If the measured value is in the linear range (step 88), the photocurrent Iph is calculated (step 90) and the estimated exposure time tE(i,j) is adjusted (step 92) to match the dynamic range compression function f(Iph).


Via steps 94-100, process 40 increments from pixel to pixel along both the V and the H axes, repeating steps 84-92, as appropriate, until the last pixel at location (i=H−1, j=V−1) (steps 94 and 100, respectively) is assigned an exposure time. Process 40 ends at step 102.
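An array-level sketch of this initial setup follows. Two stated assumptions: the capture() callback stands in for an actual charge integration and readout, and the exposure adjustment takes "matching the compression function" to mean choosing tE so that Iph·tE/CI equals f(Iph).

```python
import numpy as np

def initial_setup(capture, f, c_i, t_min, t_max, v_co, v_sat, shape):
    t_e = np.full(shape, t_min)                      # steps 42-54: all pixels start at tEmin
    v_c = capture(t_e)                               # step 56: first charge integration

    linear = (v_c >= v_co) & (v_c <= v_sat)          # steps 64-68: linear-range pixels
    i_ph = np.where(linear, v_c * c_i / t_e, np.nan)
    t_e = np.where(linear, f(i_ph) * c_i / i_ph, t_e)
    below = v_c < v_co                               # step 70: cutoff pixels get tEmax
    t_e = np.where(below, t_max, t_e)                # saturated pixels stay at tEmin

    v_c = capture(t_e)                               # step 80: second charge integration
    recheck = below & (v_c >= v_co) & (v_c <= v_sat) # "always on" phase, steps 84-92
    i_ph = np.where(recheck, v_c * c_i / t_e, i_ph)
    t_e = np.where(recheck, f(i_ph) * c_i / i_ph, t_e)
    return np.clip(t_e, t_min, t_max)
```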


Reference is now made to FIG. 5, a block diagram illustrating a fine tuning process 110. During process 110 corrections are made to those pixels that fall within the linear operation range. The corrections are performed cyclically for all the pixels, such that the match-up to the dynamic range compression function is improved. The corrections may be aborted if a desirable convergence is accomplished or based upon other criteria or considerations.


In theory, an image sensor behaves linearly and the integration capacitor's voltage rises linearly with time for a constant photocurrent Iph between the cutoff and saturation photocurrent IphCO and IphSat, respectively. In this case, the photocurrent Iph derived during the initial setup process 40 may be generally precise. Also, the exposure time tE derived to match the desired dynamic range compression function is generally accurate.


In practice, the linearity assumption may be correct to the first order approximation. Also, the integration capacitor value CI may vary somewhat with voltage VC and with pixel location.


The method described in FIG. 5 is adapted to account for inaccuracies and to finely tune the exposure time tE to compensate for non-linearity and for skews. The method is based on the accurate tracking by the voltage Vc of the dynamic range compression function.


The method corrects the exposure time tE values until the disparity between the measured integration capacitor's voltage VC and the dynamic range compression function value reaches a desirable accuracy.


It is noted that when the f(Iph) function value is equal to the measured voltage Vc(i,j), the current exposure time tE(i,j) and the newly calculated exposure time tEnew are the same. The method also takes into account that, during the process of tuning the exposure time tE(i,j), the accumulated voltage Vc(i,j) may exceed the “linear” range, and corrects the exposure time tE accordingly.


Process 110 starts (step 112) with the assumption that the exposure time for each pixel has been initially set as described in process 40. Then, an image capture takes place, e.g. charge integration (step 114). In step 116 the process commences with the pixel at location (i=0, j=0) and the voltage Vc is read (step 118). If Vc(i,j) falls into the linear range (step 120), an exposure time tE correction takes place (steps 122-124). First, the photocurrent Iph is calculated on the basis of the measured Vc(i,j) and the integration capacitor capacitance CI (step 122). A new exposure time tEnew is then calculated to adjust for the measured integration capacitor voltage Vc (step 124).


The new exposure time tEnew is checked to see if it falls in the exposure time range [tEmin, tEmax] (step 126). If the newly calculated exposure time is below the minimum (step 128), it is set to tEmin (step 130). If it is above the maximum, it is set to tEmax (step 132). If the exposure time tEnew is in the exposure time range [tEmin, tEmax], and if the difference between the exposure time tEnew and the current value tE is greater than εtE (step 134), the current exposure time tE value is replaced with the newly calculated value tEnew. εtE is an empirically set programmable value.


After the entire array is scanned (steps 138-142 are used for incrementing through the pixels, and steps 118 to 138 are repeated for all pixels), a decision is made (step 142) whether to continue or discontinue (step 144) the fine-tune process. This decision may be based upon reaching a desirable convergence criterion, on average or in the worst case. Also, the decision may be based upon the observer's subjective impression.
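A per-pixel sketch of one fine-tuning pass follows. The correction tEnew=tE·f(Iph)/Vc is an assumption consistent with the note above that tEnew equals tE exactly when f(Iph) equals the measured Vc(i,j).

```python
def fine_tune_pixel(v_c, t_e, f, c_i, v_co, v_sat, t_min, t_max, eps):
    if not (v_co <= v_c <= v_sat):          # step 120: only linear-range pixels are corrected
        return t_e
    i_ph = v_c * c_i / t_e                  # step 122: recover the photocurrent from Vc and CI
    t_new = t_e * f(i_ph) / v_c             # step 124: rescale so that Vc tracks f(Iph)
    t_new = min(max(t_new, t_min), t_max)   # steps 126-132: clamp to [tEmin, tEmax]
    return t_new if abs(t_new - t_e) > eps else t_e   # step 134: ignore changes below epsilon
```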


The exposure time fine-tuning method 110 is one representation of many potential variations, all of which are encompassed by the principles of the present invention. Alternatives or variations to the present invention are as follows:


Per-pixel parametric tables: The capacitance CI, saturation voltage VcSat, cutoff voltage VcCO, and other parameters which might vary with the pixel location, may be determined on a per-pixel basis and stored in the image sensor's pixel-associated memory tables (not shown). The methods described hereinabove and similar methods may be modified accordingly.


The photocurrent Iph and the exposure time tE may be calculated using local parameters rather than global constants. That is, capacitance CI, saturation voltage VcSat, cutoff voltage VcCO may be replaced by CI(i,j), VcSat(i,j), and VcCO(i,j) respectively. This may result in a faster convergence.


Accounting for noise: Due to noise, the exposure time tE may fluctuate. In order to reduce noise-dependent fluctuations, the evaluation of the new exposure time tEnew may be modified as follows:










tEnew=[(N(i,j)−1)+f(Iph)/Vc(i,j)]/N(i,j)·tE(i,j)  (29)

where N(i,j) is the number of image acquisition iterations per pixel in the (i,j) location, for which the integration capacitor's voltage Vc is in the linear region.
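A one-line sketch of this running-average update, as reconstructed in equation (29): with N(i,j)=1 it reduces to the plain correction, and for large N(i,j) each new measurement moves the exposure time only slightly.

```python
def averaged_exposure(t_e, v_c, i_ph, f, n):
    """Noise-damped update of equation (29): tEnew = [(N - 1) + f(Iph)/Vc] / N * tE."""
    return ((n - 1) + f(i_ph) / v_c) / n * t_e
```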


Convergence Criteria: Convergence criteria based upon the standard deviation may be used:










σtEnew(i,j)={(N(i,j)−1)·[σtE(i,j)]^2+[tEnew(i,j)−tE(i,j)]^2}^(1/2)/N(i,j)  (30)

where σtE(i,j) is the standard deviation of tE(i,j) after the (N−1)th iteration, and σtEnew(i,j) is the new standard deviation calculated after the current iteration. Further exposure time iterations can be stopped if the worst-case σtE(i,j) for any i and j is below a certain σtE value. That is,

σtE(i,j)<σtE, for any 0≦i≦H−1, and any 0≦j≦V−1  (31)


Alternatively, the average exposure time deviation being below a certain σtE value can serve as the fine-tune termination criterion, that is,














[Σ(j=0..V−1) Σ(i=0..H−1) σtE(i,j)]/(H·V)<σtE  (32)

The two convergence criteria may be combined. Also, the convergence criteria may be combined with the observer's subjective judgement as to when the result accomplished is satisfactory.
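A short sketch of the two termination tests of equations (31) and (32); the threshold name sigma_limit is illustrative.

```python
import numpy as np

def should_stop(sigma_te: np.ndarray, sigma_limit: float, mode: str = "worst") -> bool:
    if mode == "worst":
        return bool(np.max(sigma_te) < sigma_limit)   # equation (31): every pixel below the limit
    return bool(np.mean(sigma_te) < sigma_limit)      # equation (32): array average below the limit
```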


Reference is now made to FIG. 6, a block diagram of a pixel electronic shutter control system 200 and associated image sensors based on the considerations and the methods presented in FIGS. 4 and 5. System 200 facilitates individually controlling each pixel.


System 200 may comprise an image sensor 202, an analog-to-digital (A-to-D) converter 204, an exposure controller 206, an exposure memory array 208 and an optional pixel parameter memory array 209.


A-to-D converter 204 may comprise an analog input that is connectable to the video output of image sensor 202 for receipt of an analog video signal VOut. Converter 204 samples the video output VOut in sync with the pixel output rate of the video, and converts the analog video VOut into digital video DVOut. The digital video DVOut may be output from converter 204 as n-bit values, where n may be equal to or greater than q, the number of bits of the digitally stored exposure time tE(i,j) values.


The exposure controller 206 receives digital video DVOut from the converter 204 and calculates the new set of exposure time values tE(i,j). An exposure controller program 214 may be stored in an internal or external program memory 210.


Controller 206 may calculate the photocurrent Iph on the basis of the digital video DVOut. Digital video DVOut corresponds to voltage Vc, which in turn corresponds to the integrated charge Qa stored in the capacitor.


Controller 206 may also calculate the initial and the finely-tuned exposure time value tE(i,j) to match the selected dynamic range compression function f(Iph), for each pixel individually. These calculations may be based on processes 40 and 110. Alternatively, the dynamic range compression function f(Iph) may be selected from a dynamic range compression function menu, such as a parameter table 212.


Controller 206 may then store the calculated q-bits wide tE(i,j) values in exposure time memory 208. The storage may be done at pixel video rate.


Memory 208 may then load the new exposure time tE into the image sensor 202. Controller 206 may control the loading.


The loading may be performed after the completion of the next-to-be-used exposure time values calculation and their storage in the exposure memory 208. Alternatively, the tE(i,j) calculations, their storage in the memory 208, and their loading to image sensor 202 may be performed concurrently. This is especially useful for real-time video image sensors that have to keep up with the video frame rate.


Controller 206 may decide whether to continue image acquisitions and subsequent exposure time calculation iterations or to stop.
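The sketch below strings these steps together as one controller iteration; the sensor, converter, and memory objects are hypothetical stand-ins for image sensor 202, A-to-D converter 204 and exposure memory 208, and the per-pixel correction repeats the step of FIGS. 4A/B and 5 in vectorized form.

```python
import numpy as np

def controller_iteration(sensor, adc, exposure_memory, f, c_i, v_co, v_sat, t_min, t_max):
    t_e = exposure_memory.load()                 # current per-pixel tE(i,j) values
    v_c = adc.digitize(sensor.capture(t_e))      # digitized video DVOut, standing in for Vc(i,j)
    linear = (v_c >= v_co) & (v_c <= v_sat)      # only linear-range pixels are corrected
    i_ph = v_c * c_i / t_e                       # controller 206: photocurrent from DVOut
    t_new = np.where(linear, np.clip(f(i_ph) * c_i / i_ph, t_min, t_max), t_e)
    exposure_memory.store(t_new)                 # stored, then loaded back into image sensor 202
    return t_new
```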


Memory 208 may be an H×V memory array, which stores the q-bit tE(i,j) values.


The array may be addressed using the column and row (i, j) indexes. During the charge integration process, the image sensor may be programmed by loading, in one batch for the entire image sensor array, the k-th exposure time bit, which is then subject to integration of length TI. It is noted that the integration time can be of any length.


Concurrently with the integration for the k-th bit, the image sensor is programmed for the i-th bit integration. After the integration that corresponds to the k-th bit is completed, the integration for the i-th bit can start. It is noted that the programming/integration of all the q bit-planes can be done in any sequence, and the plane bits may be loaded line after line, p bits at a time.
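As a small illustration of the bit-plane organization (the scheduling of each plane's integration length TI is not shown, and the helper name is illustrative):

```python
import numpy as np

def exposure_bit_planes(t_e_codes: np.ndarray, q: int) -> list:
    """Return q boolean H x V planes; plane k holds bit k of each stored tE(i,j) code."""
    return [((t_e_codes >> k) & 1).astype(bool) for k in range(q)]

planes = exposure_bit_planes(np.array([[5, 3], [2, 7]], dtype=np.uint8), q=3)
print(planes[0])  # least significant bit plane: [[ True  True] [False  True]]
```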


It is noted that each of the functions described can be integrated on the same die with the image sensor or be a part of a separate chip.


The per-pixel exposure time processor may be implemented either externally to the image sensor in the form of a chip-set, or partially or fully integrated on the same substrate with the pixels.


The algorithm for the calculation of the new exposure value tEnew(i,j) may be based on a per-pixel parameter table and on the formulation in equation (29). It is noted that an additional memory array may be required to store the N(i,j) values.


The algorithm for the calculation of the standard deviation in exposure time σtE(i,j), formulated in equation (30), also requires array storage for N(i,j).


An image sensor with per-pixel electronic shutter control enables individual control of each pixel's exposure time and thus allows implementation not only of dynamic range compression but of many other useful effects as well. Thus, an image sensor, along with the apparatus described herein, may select from a variety of functions given in an analytical or a tabular form to achieve a variety of effects.


It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined by the claims that follow:

Claims
  • 1. A method for compressing the dynamic range of an image sensor comprising a multiplicity of pixels, the method comprising the steps of: exposing each of said pixels to light and producing an associated photocurrent per pixel, representative of said light exposure; on a per-pixel basis, controlling exposure time of each of said pixels on the basis of a monotonically rising convex function of said associated photocurrent of each said pixel.
  • 2. The method according to claim 1, further comprising the step of storing said determined exposure time of each said pixel as a q-bit value in memory.
  • 3. The method according to claim 2, wherein said step of controlling comprises the steps of: accessing said stored exposure times from a memory; and programming said pixels according to said accessed exposure times.
  • 4. The method according to claim 1, further comprising calculating a first derivative of said monotonically rising convex function based upon a desired sensitivity of said exposure time of each said pixel.
  • 5. The method according to claim 1, further comprising: accumulating charge generally linearly representative of said photocurrent; and calculating said photocurrent as a product of said accumulated charge.
  • 6. The method according to claim 1, wherein said step of controlling is performed by circuitry outside said multiplicity of pixels.
  • 7. An exposure controller for use in an image sensor comprising a multiplicity of pixels, the controller comprising: a memory for storing calculated exposure time values; a per-pixel parameter table containing local parameters of said pixels; and a processor for combining based on a set of convergence criteria, said stored exposure time values and said parameters in order to determine, on a per-pixel basis, an exposure time of said pixels, wherein: the processor is configured to: calculate a photocurrent from an accumulated voltage; and determine an exposure time, based on a monotonically rising convex function of said calculated photocurrent; and wherein said monotonically rising convex function fulfills the following conditions: for Iph1>Iph2, f(Iph1)>f(Iph2) and f′(Iph1)<f′(Iph2), f represents the monotonically rising convex function; f′ represents the first derivative of the function f; and Iph1 and Iph2 are arguments to function f and its first derivative f′.
  • 8. The exposure controller of claim 7, wherein said set of convergence criteria is based on a set of algorithms capable of deriving an appropriate exposure time for every pixel based on a monotonically rising convex function.
  • 9. The exposure controller of claim 8, wherein said set of algorithms is capable of deriving a fine tuned exposure time of said pixels.
  • 10. The exposure controller of claim 7, wherein said exposure controller is a chip-set external to said image sensor.
  • 11. The exposure controller of claim 7, wherein said exposure controller is at least partially integrated on the same substrate as said image sensor.
  • 12. The exposure controller of claim 7, wherein said memory and said processor are disposed outside said plurality of pixels.
  • 13. A method for determining an exposure time of each of a multiplicity of pixels belonging to an image sensor, the method comprising: initially setting an exposure time of each pixel to a minimum value tEMIN; performing a first charge integration of each pixel over said exposure time to obtain a first charge voltage Vc, and: if said first charge voltage Vc is less than a cutoff voltage VCO, adjusting the exposure time of said each pixel to a maximum value tEMAX; wherein tEMAX>tEMIN; if said first charge voltage Vc is between said cutoff voltage VCO and a saturation voltage VSat, adjusting the exposure time of said each pixel based on a monotonically rising convex function of a photocurrent of that pixel; and if said first charge voltage Vc is greater than the saturation voltage VSat, leaving the exposure time of said each pixel at tEMIN; and performing a second charge integration of each pixel whose exposure time was set to tEMAX to obtain a second charge voltage, and: if said second charge voltage is greater than said cutoff voltage VCO, adjusting the exposure time of said each pixel whose exposure time was set to tEMAX, based on a monotonically rising convex function of a photocurrent of that pixel; wherein said monotonically rising convex function fulfills the following conditions: for Iph1>Iph2, f(Iph1)>f(Iph2) and f′(Iph1)<f′(Iph2), where: f represents the monotonically rising convex function; f′ represents the first derivative of the function f; and Iph1 and Iph2 are arguments to function f and its first derivative f′.
  • 14. The method according to claim 13, further comprising: iteratively adjusting the exposure time of each pixel until a predetermined convergence criteria is met.
  • 15. A method for compressing the dynamic range of an image sensor comprising a multiplicity of pixels arranged in a pixel array, the method comprising the steps of: for each pixel: exposing the pixel to light and producing an associated photocurrent representative of said light exposure; determining, with a processor, an exposure time of the pixel on the basis of a monotonically rising convex function of said associated photocurrent; storing said determined exposure time as a q-bit value in memory located outside the pixel array; accessing the stored exposure time from said memory; and programming the pixel to accumulate charge for the accessed exposure time; wherein said monotonically rising convex function fulfills the following conditions: for Iph1>Iph2, f(Iph1)>f(Iph2) and f′(Iph1)<f′(Iph2), where: f represents the monotonically rising convex function; f′ represents the first derivative of the function f; and Iph1 and Iph2 are arguments to function f and its first derivative f′.
  • 16. A pixel electronic control shutter system comprising: an image sensor comprising a pixel array having multiplicity of pixels, the image sensor producing at least one analog output voltage in response to light received by a pixel; an analog-to-digital converter receiving the at least one analog output voltage and outputting a corresponding digitized voltage; and an exposure controller receiving said digitized voltage from the analog-to-digital converter, the exposure controller comprising: a memory for storing calculated exposure time values; a per-pixel parameter table containing local parameters of said pixels; and a processor for combining based on a set of convergence criteria, said stored exposure time values and said parameters in order to determine, on a per-pixel basis, an exposure time of said pixels, wherein: the processor is configured to: calculate a photocurrent from said digitized voltage; and determine an exposure time, based on a monotonically rising function of said calculated photocurrent; and wherein said monotonically rising convex function fulfills the following conditions: for Iph1>Iph2, f(Iph1)>f(Iph2) and f′(Iph1)<f′(Iph2), where: f represents the monotonically rising convex function; f′ represents the first derivative of the function f; and Iph1 and Iph2 are arguments to function f and its first derivative f′.
  • 17. The pixel electronic control shutter system according to claim 16, wherein: said memory and said processor are disposed outside said plurality of pixels; and said determined exposure time is stored as a q-bit value in said memory.
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/IL01/00612 7/3/2001 WO 00 1/6/2003
Publishing Document Publishing Date Country Kind
WO02/03675 1/10/2002 WO A
US Referenced Citations (93)
Number Name Date Kind
3911467 Levine et al. Oct 1975 A
4212034 Kokie et al. Jul 1980 A
4471228 Nishizawa et al. Sep 1984 A
4472638 Nishizawa et al. Sep 1984 A
4583002 Kondo et al. Apr 1986 A
4635126 Kinoshita Jan 1987 A
4706123 Chautemps Nov 1987 A
4758734 Uchida et al. Jul 1988 A
4761689 Takatsu et al. Aug 1988 A
4779004 Tew et al. Oct 1988 A
4839735 Kyomasu et al. Jun 1989 A
4870266 Ishizaki et al. Sep 1989 A
4935636 Gural Jun 1990 A
4942473 Zeevi et al. Jul 1990 A
4974093 Murayama et al. Nov 1990 A
4984002 Kokubo Jan 1991 A
4996600 Nishida et al. Feb 1991 A
5049752 Kalaf et al. Sep 1991 A
5060245 Nelson Oct 1991 A
5218462 Kitamura et al. Jun 1993 A
5220170 Cox et al. Jun 1993 A
5227313 Gluck et al. Jul 1993 A
5262871 Wilder et al. Nov 1993 A
5264944 Takemura Nov 1993 A
5278660 Sugiki Jan 1994 A
5291294 Hirota Mar 1994 A
5323186 Chow et al. Jun 1994 A
5351309 Lee et al. Sep 1994 A
5354980 Rappoport et al. Oct 1994 A
5369047 Hynecek Nov 1994 A
5396091 Kobayashi et al. Mar 1995 A
5452004 Roberts Sep 1995 A
5461491 Degi et al. Oct 1995 A
5463421 Deguchi et al. Oct 1995 A
5471515 Fossum et al. Nov 1995 A
5481301 Cazaux et al. Jan 1996 A
5485004 Suzuki et al. Jan 1996 A
5521366 Wang et al. May 1996 A
5539461 Andoh et al. Jul 1996 A
5541402 Ackland et al. Jul 1996 A
5541654 Roberts Jul 1996 A
5546127 Yamashita et al. Aug 1996 A
5563405 Woolaway, II et al. Oct 1996 A
5576762 Udagawa Nov 1996 A
5576763 Ackland et al. Nov 1996 A
5592219 Nakagawa et al. Jan 1997 A
5604534 Hedges et al. Feb 1997 A
5619262 Uno et al. Apr 1997 A
5638120 Mochizuki et al. Jun 1997 A
5638123 Yamaguchi Jun 1997 A
5650352 Kamasz et al. Jul 1997 A
5666567 Kusaka Sep 1997 A
5694495 Hara et al. Dec 1997 A
5712682 Hannah Jan 1998 A
5721425 Merrill Feb 1998 A
5739562 Ackland et al. Apr 1998 A
5742659 Atac et al. Apr 1998 A
5812191 Orava et al. Sep 1998 A
5835141 Ackland et al. Nov 1998 A
5841126 Fossum et al. Nov 1998 A
5841159 Lee et al. Nov 1998 A
5854498 Merrill Dec 1998 A
5856667 Spirig et al. Jan 1999 A
5867215 Kaplan Feb 1999 A
5877808 Iizuka Mar 1999 A
5881159 Aceti et al. Mar 1999 A
5887204 Iwasaki Mar 1999 A
5892541 Merrill Apr 1999 A
5896172 Korthout et al. Apr 1999 A
5949483 Fossum et al. Sep 1999 A
5955753 Takahashi Sep 1999 A
5969759 Morimoto Oct 1999 A
5973311 Sauer et al. Oct 1999 A
6078037 Booth, Jr. Jun 2000 A
6091449 Matsunaga et al. Jul 2000 A
6115065 Yadid-Pecht et al. Sep 2000 A
6122008 Komobuchi et al. Sep 2000 A
6137533 Azim Oct 2000 A
6141049 Harada Oct 2000 A
6166367 Cho Dec 2000 A
6243134 Beiley Jun 2001 B1
6252217 Pyyhtia et al. Jun 2001 B1
6300977 Waechter et al. Oct 2001 B1
6452633 Merrill et al. Sep 2002 B1
6529241 Clark Mar 2003 B1
6606121 Bohm et al. Aug 2003 B1
6657664 Ueno Dec 2003 B2
6801258 Pain et al. Oct 2004 B1
6956605 Hashimoto Oct 2005 B1
20020101528 Lee et al. Aug 2002 A1
20020179820 Stark Dec 2002 A1
20020186312 Stark Dec 2002 A1
20030201379 Stark Oct 2003 A1
Foreign Referenced Citations (10)
Number Date Country
0757476 May 1997 EP
2181010 Apr 1987 GB
05145959 Jun 1993 JP
05340810 Dec 1993 JP
06339085 Dec 1994 JP
09275525 Oct 1997 JP
WO 9717800 May 1997 WO
WO 9728641 Aug 1997 WO
WO-9933259 Jul 1999 WO
WO-0110117 Feb 2001 WO
Related Publications (1)
Number Date Country
20040036797 A1 Feb 2004 US