This relates generally to imaging devices, and more particularly, to imaging devices having high dynamic range imaging pixels.
Image sensors are commonly used in electronic devices such as cellular telephones, cameras, and computers to capture images. In a typical arrangement, an image sensor includes an array of image pixels arranged in pixel rows and pixel columns. Circuitry may be coupled to each pixel column for reading out image signals from the image pixels. Typical image pixels contain a photodiode for generating charge in response to incident light. Image pixels may also include a charge storage region for storing charge that is generated in the photodiode. Image sensors can operate using a global shutter, rolling shutter, per-pixel controlled, or per-pixel-group controlled scheme.
Some conventional image sensors may be able to operate in a high dynamic range (HDR) mode. HDR operation may be accomplished in image sensors by assigning alternate rows of pixels different integration times. However, conventional HDR image sensors may sometimes experience lower than desired resolution, lower than desired sensitivity, higher than desired noise levels, and lower than desired quantum efficiency.
It would therefore be desirable to be able to provide improved high dynamic range operation in image sensors.
Embodiments of the present invention relate to image sensors. It will be recognized by one skilled in the art that the present exemplary embodiments may be practiced without some or all of the specific details described herein. In other instances, well-known operations have not been described in detail in order not to unnecessarily obscure the present embodiments.
Electronic devices such as digital cameras, computers, cellular telephones, and other electronic devices may include image sensors that gather incoming light to capture an image. The image sensors may include arrays of pixels. The pixels in the image sensors may include photosensitive elements such as photodiodes that convert the incoming light into image signals. Image sensors may have any number of pixels (e.g., hundreds or thousands or more). A typical image sensor may, for example, have hundreds of thousands or millions of pixels (e.g., megapixels). Image sensors may include control circuitry such as circuitry for operating the pixels and readout circuitry for reading out image signals corresponding to the electric charge generated by the photosensitive elements.
As shown in
Each image sensor in camera module 12 may be identical or there may be different types of image sensors in a given image sensor array integrated circuit. During image capture operations, each lens may focus light onto an associated image sensor 14. Image sensor 14 may include photosensitive elements (i.e., pixels) that convert the light into digital data. Image sensors may have any number of pixels (e.g., hundreds, thousands, millions, or more). A typical image sensor may, for example, have millions of pixels (e.g., megapixels). As examples, image sensor 14 may include bias circuitry (e.g., source follower load circuits), sample and hold circuitry, correlated double sampling (CDS) circuitry, amplifier circuitry, analog-to-digital converter circuitry, data output circuitry, memory (e.g., buffer circuitry), address circuitry, etc.
Still and video image data from camera sensor 14 may be provided to image processing and data formatting circuitry 16 via path 28. Path 28 may be a connection through a serializer/deserializer (SERDES), which is used for high-speed communication and may be especially useful in automotive systems. Image processing and data formatting circuitry 16 may be used to perform image processing functions such as data formatting, adjusting white balance and exposure, implementing video image stabilization, face detection, etc. Image processing and data formatting circuitry 16 may also be used to compress raw camera image files if desired (e.g., to Joint Photographic Experts Group or JPEG format). In a typical arrangement, which is sometimes referred to as a system on chip (SOC) arrangement, camera sensor 14 and image processing and data formatting circuitry 16 are implemented on a common semiconductor substrate (e.g., a common silicon image sensor integrated circuit die). If desired, camera sensor 14 and image processing circuitry 16 may be formed on separate semiconductor substrates. For example, camera sensor 14 and image processing circuitry 16 may be formed on separate substrates that have been stacked.
Imaging system 10 (e.g., image processing and data formatting circuitry 16) may convey acquired image data to host subsystem 20 over path 18. Path 18 may also be a connection through SERDES. Host subsystem 20 may include processing software for detecting objects in images, detecting motion of objects between image frames, determining distances to objects in images, and filtering or otherwise processing images provided by imaging system 10.
If desired, system 100 may provide a user with numerous high-level functions. In a computer or advanced cellular telephone, for example, a user may be provided with the ability to run user applications. To implement these functions, host subsystem 20 of system 100 may have input-output devices 22 such as keypads, input-output ports, joysticks, and displays and storage and processing circuitry 24. Storage and processing circuitry 24 may include volatile and nonvolatile memory (e.g., random-access memory, flash memory, hard drives, solid-state drives, etc.). Storage and processing circuitry 24 may also include microprocessors, microcontrollers, digital signal processors, application specific integrated circuits, etc.
An example of an arrangement for camera module 12 of
Column control and readout circuitry 42 may include column circuitry such as column amplifiers for amplifying signals read out from array 32, sample and hold circuitry for sampling and storing signals read out from array 32, analog-to-digital converter circuits for converting read out analog signals to corresponding digital signals, and column memory for storing the read out signals and any other desired data. Column control and readout circuitry 42 may output digital pixel values to control and processing circuitry 44 over line 26.
Array 32 may have any number of rows and columns. In general, the size of array 32 and the number of rows and columns in array 32 will depend on the particular implementation of image sensor 14. While rows and columns are generally described herein as being horizontal and vertical, respectively, rows and columns may refer to any grid-like structure (e.g., features described herein as rows may be arranged vertically and features described herein as columns may be arranged horizontally).
Pixel array 32 may be provided with a color filter array having multiple color filter elements which allows a single image sensor to sample light of different colors. As an example, image sensor pixels such as the image pixels in array 32 may be provided with a color filter array which allows a single image sensor to sample red, green, and blue (RGB) light using corresponding red, green, and blue image sensor pixels arranged in a Bayer mosaic pattern. The Bayer mosaic pattern consists of a repeating unit cell of two-by-two image pixels, with two green image pixels diagonally opposite one another and adjacent to a red image pixel diagonally opposite to a blue image pixel. In another suitable example, the green pixels in a Bayer pattern are replaced by broadband image pixels having broadband color filter elements (e.g., clear color filter elements, yellow color filter elements, etc.). These examples are merely illustrative and, in general, color filter elements of any desired color and in any desired pattern may be formed over any desired number of image pixels 34.
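As an illustrative sketch (not part of the sensor itself), the repeating two-by-two Bayer unit cell described above can be tiled over a pixel array in a few lines of Python; the cell layout and array size here are assumptions chosen purely for illustration:

```python
# Hypothetical sketch of the repeating two-by-two Bayer unit cell: two green
# pixels diagonally opposite one another, with a red pixel diagonally
# opposite a blue pixel.
BAYER_CELL = [["R", "G"],
              ["G", "B"]]

def bayer_mosaic(rows, cols):
    """Tile the two-by-two unit cell over a rows-by-cols pixel array."""
    return [[BAYER_CELL[r % 2][c % 2] for c in range(cols)]
            for r in range(rows)]

mosaic = bayer_mosaic(4, 4)
# Greens fall on one diagonal of every 2x2 block; red and blue on the other.
```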
If desired, array 32 may be part of a stacked-die arrangement in which pixels 34 of array 32 are split between two or more stacked substrates. In such an arrangement, each of the pixels 34 in the array 32 may be split between the two dies at any desired node within the pixel. As an example, a node such as the floating diffusion node may be formed across two dies. Pixel circuitry that includes the photodiode and the circuitry coupled between the photodiode and the desired node (such as the floating diffusion node, in the present example) may be formed on a first die, and the remaining pixel circuitry may be formed on a second die. The desired node may be formed on (i.e., as a part of) a coupling structure (such as a conductive pad, a micro-pad, a conductive interconnect structure, or a conductive via) that connects the two dies. Before the two dies are bonded, the coupling structure may have a first portion on the first die and may have a second portion on the second die. The first die and the second die may be bonded to each other such that the first portion of the coupling structure and the second portion of the coupling structure are bonded together and are electrically coupled. If desired, the first and second portions of the coupling structure may be compression bonded to each other. However, this is merely illustrative. If desired, the first and second portions of the coupling structures formed on the respective first and second dies may be bonded together using any metal-to-metal bonding technique, such as soldering or welding.
As mentioned above, the desired node in the pixel circuit that is split across the two dies may be a floating diffusion node. Alternatively, the desired node in the pixel circuit that is split across the two dies may be the node between a floating diffusion region and the gate of a source follower transistor (i.e., the floating diffusion node may be formed on the first die on which the photodiode is formed, while the coupling structure may connect the floating diffusion node to the source follower transistor on the second die), the node between a floating diffusion region and a source-drain node of a transfer transistor (i.e., the floating diffusion node may be formed on the second die on which the photodiode is not located), the node between a source-drain node of a source follower transistor and a row select transistor, or any other desired node of the pixel circuit.
In general, array 32, row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be split between two or more stacked substrates. In one example, array 32 may be formed in a first substrate and row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be formed in a second substrate. In another example, array 32 may be split between first and second substrates (using one of the pixel splitting schemes described above) and row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be formed in a third substrate.
It may be desirable to increase the dynamic range of imaging pixels within image sensor 14. To increase the dynamic range of an imaging pixel, the imaging pixel may include an overflow path. A schematic diagram of an imaging pixel with an overflow path is shown in
The overflow path in the imaging pixel may be controlled by a transistor that sets a dynamic potential barrier for charge to overflow from the photodiode. When the transistor is deasserted (e.g., the signal provided to the transistor gate is low), the capacity of the photodiode may be large (e.g., a large amount of charge needs to accumulate before charge overflows from the photodiode to the overflow node). The signal provided to the transistor gate may be raised to an intermediate level to lower the capacity of the photodiode and allow charge to overflow from the photodiode to the overflow node at a lower level. Multiple overflow nodes may optionally be arranged in series such that overflow charge cascades through the multiple overflow nodes.
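The overflow behavior described above can be illustrated with a toy numerical model. This is an illustrative sketch, not actual device physics; the flux and barrier values are assumed:

```python
def integrate(flux_e_per_step, steps, barrier_e):
    """Toy model of photodiode overflow.

    flux_e_per_step: photocurrent in electrons per time step (assumed value)
    barrier_e: photodiode capacity set by the intermediate TXOF level (assumed)
    Returns (photodiode_charge, overflow_charge) in electrons.
    """
    pd, overflow = 0, 0
    for _ in range(steps):
        pd += flux_e_per_step
        if pd > barrier_e:              # charge above the barrier spills over
            overflow += pd - barrier_e
            pd = barrier_e
    return pd, overflow

# High light: the photodiode clamps at the barrier and excess charge overflows.
pd_hi, of_hi = integrate(flux_e_per_step=500, steps=10, barrier_e=2000)
# Low light: nothing exceeds the barrier; all charge stays in the photodiode.
pd_lo, of_lo = integrate(flux_e_per_step=50, steps=10, barrier_e=2000)
```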
In high light conditions, charge will overflow from the photodiode to the overflow node during overflow integration time 208. The overflow charge is then sampled from the overflow node at the conclusion of overflow integration time 208 and/or just prior to photodiode sampling at the end of integration time 210. Meanwhile, if light conditions are low, no charge will overflow during overflow integration time 208. However, charge is allowed to accumulate throughout integration time 210 (both when the photodiode has a reduced capacity and when the photodiode has a full capacity). This long integration time enables the pixel to obtain useful signal even at very low light levels. In this way, the imaging pixel has an increased dynamic range. A more detailed timing diagram is shown and discussed in connection with
An overflow scheme of the type shown in
Transistor 108 (sometimes referred to as threshold transistor 108) is coupled between photodiode 102 and floating diffusion region 124. Floating diffusion region 124 may be a doped semiconductor region (e.g., a region in a silicon substrate that is doped by ion implantation, impurity diffusion, or other doping process). Gain select transistor 110 has a first terminal coupled to floating diffusion region 124 and a second terminal coupled to storage capacitor 112. Storage capacitor 112 may be coupled between gain select transistor 110 and a bias voltage supply terminal 126 that provides bias voltage VXX. In other words, capacitor 112 has a first plate coupled to gain select transistor 110 (and reset transistor 114) and a second plate coupled to bias voltage supply terminal 126.
Source follower transistor 118 has a gate terminal coupled to floating diffusion region 124. Source follower transistor 118 also has a first source-drain terminal coupled to voltage supply 116. Voltage supply 116 may provide a power supply voltage VAAPIX. In this application, each transistor is illustrated as having three terminals: a source, a drain, and a gate. The source and drain terminals of each transistor may be changed depending on how the transistors are biased and the type of transistor used. For the sake of simplicity, the source and drain terminals are referred to herein as source-drain terminals or simply terminals. A second source-drain terminal of source follower transistor 118 is coupled to output terminal 122 (pixout) through row select transistor 120. The source follower transistor, row select transistor, and output terminal may sometimes collectively be referred to as a readout circuit or as readout circuitry. Reset transistor 114 may be coupled between capacitor 112 and voltage supply 116.
A gate terminal of transistor 108 (sometimes referred to as transfer transistor 108 or threshold transistor 108) receives control signal TXOF. A gate terminal of transistor 114 (sometimes referred to as reset transistor 114) receives control signal RST. A gate terminal of transistor 120 (sometimes referred to as row select transistor 120) receives control signal RS. A gate terminal of transistor 110 (sometimes referred to as gain transistor 110, conversion gain transistor 110, gain select transistor 110, conversion gain select transistor 110, etc.) receives control signal DCG. Control signals TXOF, RST, RS, and DCG may be provided by row control circuitry (e.g., row control circuitry 40 in
Similar to as discussed in connection with
Gain select transistor 110 and dual conversion gain capacitor 112 may be used by pixel 34 to implement a dual conversion gain mode. In particular, pixel 34 may be operable in a high conversion gain mode and in a low conversion gain mode. If gain select transistor 110 is disabled, pixel 34 will be placed in a high conversion gain mode. If gain select transistor 110 is enabled, pixel 34 will be placed in a low conversion gain mode. When gain select transistor 110 is turned on, the dual conversion gain capacitor 112 may be switched into use to provide floating diffusion region 124 with additional capacitance. This results in lower conversion gain for pixel 34. When gain select transistor 110 is turned off, the additional loading of the capacitor is removed and the pixel reverts to a relatively higher pixel conversion gain configuration.
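The effect of switching capacitor 112 into use follows from the relation conversion gain = q/C at the floating diffusion. A brief sketch, using assumed illustrative capacitance values (not taken from the text):

```python
Q_E = 1.602e-19  # electron charge, in coulombs

def conversion_gain_uv_per_e(c_fd_farads):
    """Conversion gain of a floating diffusion, q / C, in microvolts per electron."""
    return Q_E / c_fd_farads * 1e6

# Assumed illustrative capacitances:
C_FD = 1.6e-15   # floating diffusion alone, ~1.6 fF
C_DCG = 8.0e-15  # additional dual-conversion-gain capacitance when switched in

hcg = conversion_gain_uv_per_e(C_FD)          # gain select off: high conversion gain
lcg = conversion_gain_uv_per_e(C_FD + C_DCG)  # gain select on: lower conversion gain
```

With these assumed values, adding the capacitor drops the gain from roughly 100 µV/e⁻ to roughly 17 µV/e⁻, matching the qualitative behavior described above.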
The imaging pixel of
Imaging pixel 34 also includes a gain select transistor 110 coupled between floating diffusion region 124 and capacitor 112. However, in
Similar to as discussed in connection with
Imaging pixel 34 also includes a transistor coupled between photodiode 102 and floating diffusion region 124. However, in
Similar to as discussed in connection with
An imaging pixel with more than one photodiode may also use an overflow scheme of the type described herein.
Similar to as discussed in connection with
In double sampling, a reset value and a signal value are obtained during readout. The reset value may then be subtracted from the signal value during subsequent processing to help correct for noise. The double sampling may be correlated double sampling (in which the reset value is sampled before the signal value) or uncorrelated double sampling (in which the reset value is sampled after the signal value is sampled, sometimes referred to as simply double sampling).
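The subtraction underlying both forms of double sampling can be sketched as follows (the offset and signal values are assumed, purely for illustration):

```python
def double_sample(reset_value, signal_value):
    """Subtract the sampled reset level from the signal level to cancel
    offsets common to both samples."""
    return signal_value - reset_value

# Correlated double sampling: reset sampled first, then signal.
# Uncorrelated double sampling: signal sampled first, then a fresh reset.
# The arithmetic is identical; only the sampling order (and hence how much
# of the noise is actually cancelled) differs.
offset = 150               # assumed offset common to both samples, in ADC counts
signal = 1000 + offset     # true signal plus the shared offset
reset = offset
net = double_sample(reset, signal)   # recovers the 1000-count signal
```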
After the E2 sample readout at t3, the reset transistor may be asserted by pulsing control signal RST at t4. This may reset the overflow node (e.g., the floating diffusion region 124 and/or capacitor 112). The E2 reset level (RE2) is then sampled (e.g., by asserting the row select transistor). The RE2 sample may be subtracted from the SE2 sample to determine the amount of overflow charge at the overflow nodes. Because the sample level is obtained before the reset level, the E2 sampling is an example of uncorrelated double sampling (not correlated double sampling). The E2 sample may therefore be referred to as an uncorrelated double sample. There is therefore more noise than if correlated double sampling were performed. However, since the overflow charge is generated during relatively high light exposure conditions, the noise may not significantly impact the image data (e.g., the signal-to-noise ratio will remain sufficiently high).
Also at t4, the TXOF control signal is lowered. This increases the capacity of photodiode 102 (e.g., a larger amount of charge can accumulate in PD 102 without overflowing). Charge continues to accumulate in the photodiode even when the overflow node values are being sampled at t3 and t4. The photodiode integration time 210 concludes at t5 when the reset transistor is asserted to reset the floating diffusion region. The E1 reset level (RE1) is then obtained. The E1 readout may refer to readout of the non-overflow charge (that is stored at photodiode 102 at the end of integration time 210). At t6, TXOF is asserted to transfer charge from the photodiode to the floating diffusion region. This E1 sample level (SE1) is then read out (by asserting the row select transistor). The RE1 sample may be subtracted from the SE1 sample to determine the amount of charge present in the photodiode at the end of the integration period. Because the sample level is obtained after the reset level, the E1 sampling is an example of correlated double sampling (and may be referred to as a correlated double sample).
In the overflow operation of
The ratio of the lengths of time of integration times 210 and 208 may be any desired ratio (e.g., 2:1, 3:1, more than 1:1, more than 2:1, more than or equal to 2:1, more than 3:1, more than 5:1, more than 10:1, more than 20:1, less than 1:1, less than 2:1, less than 3:1, less than 5:1, less than 10:1, less than 20:1, between 1.5:1 and 3.5:1, between 1:1 and 10:1, between 2:1 and 4:1, between 2:1 and 3:1, etc.). The length of time of integration time 208 may be greater than one microsecond, greater than three microseconds, greater than five microseconds, greater than ten microseconds, greater than fifty microseconds, less than one microsecond, less than three microseconds, less than five microseconds, less than ten microseconds, less than fifty microseconds, between five and twenty microseconds, etc. The length of time of integration time 210 may be greater than one microsecond, greater than three microseconds, greater than five microseconds, greater than ten microseconds, greater than fifty microseconds, greater than one hundred microseconds, less than one microsecond, less than three microseconds, less than five microseconds, less than ten microseconds, less than fifty microseconds, less than one hundred microseconds, between five and twenty microseconds, between five and fifty microseconds, etc. The integration times may be selected to be sufficiently long to detect flickering light-emitting diodes in the captured scene.
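As a sketch of the flicker consideration in the last sentence: to guarantee that a pulse-width-modulated LED emits at least once during an exposure, the integration time may be chosen to span at least one full LED period. The LED frequency and exposure ratio below are assumed illustrative values, not values from the text:

```python
def min_integration_for_flicker_s(led_freq_hz):
    """Minimum integration time that spans one full period of a pulsed
    (PWM) LED, guaranteeing the LED is captured at least once."""
    return 1.0 / led_freq_hz

def exposure_ratio(t_long_s, t_short_s):
    """Ratio between photodiode integration time 210 and overflow
    integration time 208."""
    return t_long_s / t_short_s

# Assumed values: a 90 Hz automotive LED and an illustrative 3:1 ratio.
t_long = min_integration_for_flicker_s(90)   # ~11.1 ms
t_short = t_long / 3
ratio = exposure_ratio(t_long, t_short)
```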
At t5, the photodiode capacity is increased. The photodiode charge may be read out after the conclusion of photodiode integration time 210. Overflow charge may optionally be sampled at the end of integration time 210 in addition to throughout overflow integration time 208.
The times at which the overflow charge is read out and then reset (e.g., t2, t3, t4, and t5 in
Buffer 140 may be incorporated into each imaging pixel in the array of imaging pixels or may be incorporated at the periphery of the array of imaging pixels. In some cases, buffer 140 may be shared between multiple pixels. Buffer 140 may be a storage capacitor, storage diode, storage gate, a digital accumulator, or any other desired component. In general, the buffer may sum the samples from the overflow nodes in the digital or analog domain and may be located at any desired location within the image sensor.
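One way the digital-accumulator form of buffer 140 might operate can be sketched as follows (the sample values are assumed ADC counts, purely for illustration):

```python
class OverflowAccumulator:
    """Sketch of buffer 140 as a digital accumulator: each uncorrelated
    double sample of the overflow node (sample minus reset) is summed, so
    the total overflow charge survives the periodic overflow-node resets."""

    def __init__(self):
        self.total = 0

    def add_sample(self, se2, re2):
        self.total += se2 - re2   # double-sampled overflow contribution
        return self.total

buf = OverflowAccumulator()
# Assumed ADC counts from three overflow samplings during the integration time:
for se2, re2 in [(640, 100), (580, 95), (610, 102)]:
    buf.add_sample(se2, re2)
# buf.total now holds the summed overflow signal
```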
In one illustrative example, control circuitry such as column control and readout circuitry 42 and/or control and processing circuitry 44 in
This example of the buffer being included in control circuitry such as column control and readout circuitry 42 and/or control and processing circuitry 44 is merely illustrative. Another possible arrangement is for each imaging pixel to have an in-pixel buffer.
In addition, the imaging pixel of
Similar to as discussed in connection with
The example of
Initially, at t1, reset transistor 114, threshold transistor 108, and gain select transistor 110 may all be asserted. This resets the charge at floating diffusion region 124, photodiode 102, and capacitor 112. The bias voltage VBIAS provided to terminal 126 (sometimes referred to as VXX) may be high during the reset period then dropped low after t1. Keeping VXX low during the integration time may minimize dark current in the imaging pixel. After the reset period, the TXOF control signal for transistor 108 is set to an intermediate value. This sets a potential barrier for charge accumulating in the photodiode. Once the accumulating charge exceeds the potential barrier, the charge overflows to an overflow node (e.g., floating diffusion region 124 and/or storage capacitor 112). The DCG control signal may be held at an intermediate level to allow overflow charge to be distributed between the floating diffusion region and storage capacitor.
Throughout the integration period, the charge at the overflow node may be sampled and then reset. The readout may begin with an E2 sample (SE2) obtained at t2 (e.g., by asserting row select transistor 120). The E2 readout may refer to readout of the overflow charge (that is stored at floating diffusion region 124 and/or storage capacitor 112). The E2 readout may include readout of a sample level and a reset level for a double sampling.
After the E2 sample is obtained at t2, the overflow nodes may be reset at t3. The reset transistor may be asserted to reset the charge at the storage capacitor and floating diffusion region. Then the E2 reset level (RE2) is sampled (e.g., by asserting the row select transistor) at t4. The RE2 sample may be subtracted from the SE2 sample to determine the amount of overflow charge at the overflow nodes. Because the sample level is obtained before the reset level, the E2 sampling is an uncorrelated double sample. There is therefore more noise than if correlated double sampling was performed. However, since the overflow charge is generated during relatively high light exposure conditions, the noise may not significantly impact the image data (e.g., the signal-to-noise ratio will remain sufficiently high).
This process of obtaining an uncorrelated double sample of the charge at the overflow nodes is repeated during integration period 208. In
Throughout overflow integration time 208 and subsequent to overflow integration time 208, charge accumulates in photodiode 102 in a photodiode integration period 210. After the sampling at t6, TXOF may be lowered, increasing the capacity of the photodiode. At t7, t8, and t9, an optional final uncorrelated double sampling of the overflow nodes may be performed (to detect any additional charge that overflowed from the photodiode between t6 and t7 despite the increased capacity of the photodiode during this time period). The sample level is obtained, the overflow nodes are reset, and the reset level is obtained, similar to the earlier overflow uncorrelated double samplings. This charge may also be added to buffer 140 that includes the summed overflow charge from overflow integration time 208. During the final overflow sampling starting at t7, gain select transistor 110 is asserted (e.g., DCG is high). This readout is therefore a low conversion gain readout.
At t10, the gain select transistor is deasserted for a high conversion gain readout of the charge in the photodiode. The floating diffusion region is reset at t10 then the E1 reset level is sampled at t11. The transfer transistor is then asserted to transfer charge from the photodiode to the floating diffusion region and the sample level is obtained at t12. This E1 sample level may be subtracted from the E1 reset level to obtain a high conversion gain correlated double sampling E1 result. At t13, the gain select transistor is asserted and the sample level is then sampled again at t14 for a low conversion gain E1 result.
The total overflow signal (e.g., from the buffer) and the signal from the photodiode (e.g., the E1 readout) may be combined (linearized) into a single representative pixel output signal.
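A minimal sketch of such a linearization, assuming the two readout paths have been gain-matched by a calibration factor (all values below are illustrative assumptions):

```python
def linearize(e1_counts, overflow_counts, gain_ratio=1.0):
    """Combine the photodiode (E1) readout with the summed overflow signal
    into one linear pixel value. gain_ratio is an assumed calibration
    factor matching the conversion gains of the two readout paths."""
    return e1_counts + gain_ratio * overflow_counts

# Low light: no overflow occurred, so the output is just the photodiode signal.
low = linearize(e1_counts=800, overflow_counts=0)
# High light: the photodiode clipped at the barrier; adding the summed
# overflow signal restores a linear response.
high = linearize(e1_counts=2000, overflow_counts=1533, gain_ratio=1.0)
```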
In the example of
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art. The foregoing embodiments may be implemented individually or in any combination.