This disclosure relates generally to optics, and in particular but not exclusively, relates to high dynamic range image sensors.
High dynamic range (“HDR”) image sensors are useful for many applications. In general, ordinary image sensors, including for example charge coupled device (“CCD”) and complementary metal oxide semiconductor (“CMOS”) image sensors, have a dynamic range of approximately 70 dB. In comparison, the human eye has a dynamic range of up to approximately 100 dB. There are a variety of situations in which an image sensor having an increased dynamic range is beneficial. For example, image sensors having a dynamic range of more than 100 dB are needed in the automotive industry in order to handle different driving conditions, such as driving from a dark tunnel into bright sunlight. Indeed, many applications may require image sensors with a dynamic range of at least 90 dB to accommodate a wide range of lighting situations, varying from low light conditions to bright light conditions.
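For context, and not by way of limitation, dynamic range is conventionally expressed in decibels as 20·log10 of the ratio between the largest signal a sensor can capture without saturating and the smallest signal it can distinguish from noise; a signal ratio of roughly 3,000:1, for example, corresponds to about 20·log10(3000) ≈ 70 dB.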
One known approach for implementing HDR image sensors is to use a combination of photodiodes in each pixel. One of the photodiodes can be used to sense bright light conditions while another photodiode can be used to sense low light conditions. In this approach, the photodiode used to sense bright light is typically smaller (having a smaller light exposure area) than the photodiode used to sense low light conditions. However, this approach requires an asymmetric layout that tends to increase costs. In addition to increasing cost, asymmetric fabrication of the photodiodes in each pixel introduces optical asymmetry that may cause image light ray angle separation. Image light ray angle separation can cause asymmetric blooming, crosstalk, and other undesirable effects, especially when the image light is angled relative to the face of the image sensor.
Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In one example, pixel array 102 is a two-dimensional (2D) array of imaging sensors or pixels 110 (e.g., pixels P1, P2 . . . , Pn). In one example, each pixel 110 is a CMOS imaging pixel including at least a large sub-pixel and a small sub-pixel. The large sub-pixels and the small sub-pixels in the pixel array may receive separate shutter signals. As illustrated, each pixel 110 is arranged into a row (e.g., rows R1 to Ry) and a column (e.g., column C1 to Cx) to acquire image data of a person, place, object, etc., which can then be used to render an image of the person, place, object, etc.
In one example, after each pixel 110 has acquired its image data or image charge, the image data is read out by readout circuitry 104 through readout columns 112 and then transferred to function logic 106. In various examples, readout circuitry 104 may include amplification circuitry, analog-to-digital (ADC) conversion circuitry, or otherwise. Function logic 106 may simply store the image data or even manipulate the image data by applying post image effects (e.g., crop, rotate, remove red eye, adjust brightness, adjust contrast, or otherwise). In one example, readout circuitry 104 may read out a row of image data at a time along readout column lines (illustrated) or may read out the image data using a variety of other techniques (not illustrated), such as a serial read out or a full parallel read out of all pixels simultaneously. The image charge generated by the large sub-pixel and the small sub-pixel may be read out separately during different time periods.
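Purely as an illustration of the readout data flow described above, the following sketch models a row-at-a-time readout in which the small sub-pixel and large sub-pixel charges of each pixel are read during separate periods and then handed to function logic. The class, field, and function names (Pixel, read_out, function_logic) are hypothetical and are offered only as a behavioral sketch, not as the readout circuitry itself.

```python
# Hypothetical behavioral model of the pixel-array readout described above.
# It sketches the data flow only, not the actual readout circuitry.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Pixel:
    small_charge: float  # charge accumulated by the small sub-pixel (bright light)
    large_charge: float  # charge accumulated by the large sub-pixel (low light)


def read_out(array: List[List[Pixel]],
             function_logic: Callable[[List[float]], List[float]]) -> dict:
    """Read the array one row at a time, small and large sub-pixel charges in
    separate time periods, then pass each row of data to the function logic."""
    small_frame, large_frame = [], []
    for row in array:
        # First readout period for this row: small sub-pixel charges.
        small_frame.append(function_logic([p.small_charge for p in row]))
        # Second readout period for this row: large sub-pixel charges.
        large_frame.append(function_logic([p.large_charge for p in row]))
    return {"small": small_frame, "large": large_frame}


if __name__ == "__main__":
    demo = [[Pixel(small_charge=0.2, large_charge=0.9) for _ in range(4)]
            for _ in range(3)]
    frames = read_out(demo, function_logic=lambda row: row)  # identity "post-processing"
    print(frames["small"][0], frames["large"][0])
```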
Image light incident on pixel 210 will generate image charge in each of the photodiodes PDA through PDω. First image charge is generated in first photodiode 235 PDA. When transfer transistor 233 T1A receives a first transfer signal TXS 231 at its transfer gate, the first image charge is transferred to shared floating diffusion region 229. Photodiodes PDB through PDω in large sub-pixel 285 will also generate image charge in response to incident image light. Collectively, the image charge generated by the photodiodes in large sub-pixel 285 will be referred to as “distributed image charge,” as it is distributed among the photodiodes, at least initially. When transfer transistors T1B-T1ω receive second transfer signal TXL 241 at their transfer gates, the distributed image charge from each photodiode in the plurality of photodiodes in large sub-pixel 285 is transferred to shared floating diffusion region 229.
The first image charge that accumulates in first photodiode PDA is switched through transfer transistor T1A 233 into shared floating diffusion region 229 in response to a control signal TXS being received on a first transfer gate of transfer transistor T1A 233. The distributed image charge that accumulates in the plurality of photodiodes PDB through PDω is switched through a second transfer transistor (which may include transfer gates of transfer transistors T1B-T1ω coupled together) into shared floating diffusion region 229 in response to control signal TXL being received on the second transfer gate of the second transfer transistor. It is understood that shared floating diffusion region 229 may be a physical combination of the drains of transfer transistors T1A-T1ω.
As shown in the example, pixel 210 also includes an amplifier transistor T3 224 that has a gate terminal coupled to shared floating diffusion region 229. Thus, in the illustrated example, the image charge from small sub-pixel 275 and the image charge from large sub-pixel 285 are separately switched to shared floating diffusion region 229, so that both sub-pixels share the same amplifier transistor T3 224. In one example, amplifier transistor T3 224 is coupled in a source follower configuration as shown, which therefore amplifies an input signal at the gate terminal of amplifier transistor T3 224 to an output signal at the source terminal of amplifier transistor T3 224. As shown, row select transistor T4 226 is coupled to the source terminal of amplifier transistor T3 224 to selectively switch the output of amplifier transistor T3 224 to readout column 212 in response to a control signal SEL. As shown in the example, pixel 210 also includes reset transistor T2 222 coupled to shared floating diffusion region 229, which may be used to reset charge accumulated in pixel 210 in response to a reset signal RST. In one example, the charge accumulated in shared floating diffusion region 229 can be reset during an initialization period of pixel 210, or, for example, each time after charge information has been read out from pixel 210 and prior to accumulating charge in small sub-pixel 275 and large sub-pixel 285 for the acquisition of a new HDR image in accordance with the embodiments of the disclosure.
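As one way to visualize the control sequence just described, the sketch below steps a simple software model of pixel 210 through integration, reset, charge transfer, and readout. The PixelModel class, its method names, the signal ordering, and the numeric values are assumptions made for illustration; they are not taken from the disclosure and model charge bookkeeping only, not device physics.

```python
# Hypothetical behavioral model of pixel 210: one small photodiode (PDA),
# several substantially identical photodiodes (PDB..PDw) forming the large
# sub-pixel, a shared floating diffusion, a source-follower amplifier, and
# reset / row-select switches.

class PixelModel:
    def __init__(self, n_large_photodiodes=3, sf_gain=0.85):
        self.pd_small = 0.0                          # charge in PDA
        self.pd_large = [0.0] * n_large_photodiodes  # charge in PDB..PDw
        self.fd = 0.0                                # shared floating diffusion 229
        self.sf_gain = sf_gain                       # assumed source-follower gain

    def reset(self):                                 # RST asserted on T2
        self.fd = 0.0

    def integrate(self, flux, exposure):
        # All photodiodes are exposed to the image light in overlapping periods.
        self.pd_small += flux * exposure
        self.pd_large = [q + flux * exposure for q in self.pd_large]

    def transfer_small(self):                        # TXS asserted on T1A
        self.fd += self.pd_small
        self.pd_small = 0.0

    def transfer_large(self):                        # TXL asserted on T1B..T1w
        self.fd += sum(self.pd_large)
        self.pd_large = [0.0] * len(self.pd_large)

    def read(self, select=True):                     # SEL gates the column output
        return self.sf_gain * self.fd if select else None


# One possible readout order: small sub-pixel first, then large sub-pixel.
px = PixelModel()
px.integrate(flux=1.0, exposure=0.01)
px.reset();  px.transfer_small();  bright_signal = px.read()
px.reset();  px.transfer_large();  low_light_signal = px.read()
print(bright_signal, low_light_signal)
```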
In one embodiment, each photodiode PDB through PDω is substantially identical to the first photodiode PDA 235. For example, each photodiode PDB through PDω may have the same charge capacity and other electrical characteristics as PDA. This may reduce or eliminate the need to compensate for physical differences that impact the electrical function of the photodiodes. For example, some HDR pixel configurations include a single physically larger photodiode as a large sub-pixel. However, these singular physically larger photodiodes serving as a large sub-pixel often suffer from higher lag, which can negatively influence the image charge transferred and the timing of the transfer. Furthermore, a singular physically larger photodiode as the large sub-pixel also introduces optical asymmetry that can lead to undesirable artifacts. In contrast, making the photodiodes in large sub-pixel 285 substantially identical to first photodiode PDA 235 allows image charge to transfer out of each photodiode PDB through PDω with essentially the same electrical characteristics as PDA 235, while still leveraging the increased semiconductor area for capturing image light by utilizing multiple photodiodes PDB through PDω. These shared electrical characteristics may reduce lag time in the transfer of image charge from the large sub-pixel. The optical artifacts (e.g., crosstalk, ray angle separation) associated with the singular physically large photodiode are also mitigated because each photodiode in the plurality of photodiodes PDB through PDω is substantially identical.
In the disclosed embodiments, it is appreciated that the first photodiode PDA and the plurality of photodiodes PDB-PDω are included in an HDR image sensor that is capable of capturing an HDR image in a single frame. In other words, the first photodiode PDA and the plurality of photodiodes PDB-PDω are able to accumulate image charge in overlapping time periods. First photodiode PDA may be designed to capture bright light image data while the plurality of photodiodes PDB-PDω are designed to capture low light image data. The first image charge from first photodiode PDA is read out separately from the distributed image charge from the plurality of photodiodes PDB-PDω to generate a bright light signal. The distributed image charge from the plurality of photodiodes PDB-PDω is read out separately from the first image charge to generate a low light signal. The bright light signal and the low light signal can be utilized by HDR algorithms to generate an HDR image.
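As one example of how such an HDR algorithm might combine the two signals, the sketch below performs a simple linear merge: the low light signal from the large sub-pixel is used wherever it is not clipped, and otherwise the bright light signal from the small sub-pixel is rescaled onto the same scale using an assumed sensitivity ratio. The function name, the sensitivity ratio, and the saturation threshold are illustrative assumptions and do not represent the claimed method.

```python
# Illustrative linear HDR merge of the two read-out signals.
# `sensitivity_ratio` (how much more signal the large sub-pixel collects per
# unit of light than the small sub-pixel) and `saturation_level` are assumed
# calibration values, not values taken from the disclosure.

def merge_hdr(low_light_signal, bright_light_signal,
              sensitivity_ratio=8.0, saturation_level=0.95):
    """Return one HDR value per pixel from the two sub-pixel signals."""
    hdr = []
    for low, bright in zip(low_light_signal, bright_light_signal):
        if low < saturation_level:
            # Large sub-pixel is not saturated: use its low light signal directly.
            hdr.append(low)
        else:
            # Large sub-pixel clipped: rescale the small sub-pixel's bright
            # light signal onto the same scale.
            hdr.append(bright * sensitivity_ratio)
    return hdr


print(merge_hdr([0.10, 0.99], [0.02, 0.40]))  # -> [0.1, 3.2]
```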
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.