This relates to solid-state image sensors and, more specifically, to image sensors having pixel arrays with non-uniform pixel sizes.
Typical image sensors sense light by converting impinging photons into electrons or holes that are integrated (collected) in sensor pixels. After completion of an integration cycle, the collected charge is converted into a voltage, which is supplied to the output terminals of the sensor. In CMOS image sensors, the charge-to-voltage conversion is accomplished directly in the pixels themselves, and the analog pixel voltage is transferred to the output terminals through various pixel addressing and scanning schemes. The analog signal can also be converted on-chip to a digital equivalent before reaching the chip output. Each pixel incorporates a buffer amplifier, typically a Source Follower (SF), which drives the sense lines that are connected to the pixels by suitable addressing transistors.
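The charge-to-voltage conversion described above can be sketched numerically. The floating-diffusion capacitance value below is an assumption chosen purely for illustration; it is not taken from this disclosure.

```python
# Illustrative conversion-gain arithmetic (the 2 fF capacitance is assumed):
# in a CMOS pixel, the sense-node capacitance converts collected charge
# directly to a voltage, V = Q / C, which the source follower then buffers.

Q_E = 1.602176634e-19   # elementary charge in coulombs
C_FD = 2e-15            # assumed floating-diffusion capacitance, 2 fF

conversion_gain_uv_per_e = Q_E / C_FD * 1e6   # microvolts per electron
print(round(conversion_gain_uv_per_e, 1))     # ~80.1 uV per electron

# Under this assumption, 1000 collected electrons produce roughly 80 mV at
# the sense node before the source-follower gain (typically slightly below
# unity) is applied.
```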
After charge-to-voltage conversion is completed and the resulting signal transferred out from the pixels, the pixels are reset in order to be ready for accumulation of new charge. In pixels that use a Floating Diffusion (FD) as the charge detection node, the reset is accomplished by turning on a reset transistor that conductively connects the FD node to a voltage reference, which is typically the pixel drain node. This step removes collected charge; however, it also generates kTC-reset noise, as is well known in the art. This kTC-reset noise is removed from the signal using a Correlated Double Sampling (CDS) signal processing technique in order to achieve the desired low noise performance. CMOS image sensors that utilize a CDS technique usually include three transistors (3T) or four transistors (4T) in the pixel, one of which serves as the charge transferring (Tx) transistor. It is possible to share some of the pixel circuit transistors among several photodiodes, which also reduces the pixel size. An example of a 4T pixel circuit with pinned photodiode can be found in U.S. Pat. No. 5,625,210 to Lee, incorporated herein by reference.
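A minimal numeric sketch of the CDS principle: the same kTC noise offset appears in both the reset sample and the signal sample, so subtracting the two cancels it. The 2 fF capacitance and Gaussian noise model are assumptions for illustration, not parameters from this disclosure.

```python
# Hypothetical sketch of correlated double sampling (CDS) removing kTC noise.

import random

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    """RMS kTC noise charge, expressed in electrons, for a capacitance in farads."""
    q = 1.602176634e-19  # elementary charge, coulombs
    return (K_BOLTZMANN * temperature_k * capacitance_f) ** 0.5 / q

def read_pixel_cds(signal_electrons, fd_capacitance_f=2e-15):
    """Simulate a 4T pixel read: sample the reset level, transfer charge, sample again."""
    noise = random.gauss(0.0, ktc_noise_electrons(fd_capacitance_f))
    reset_sample = noise                       # reset level carries the kTC offset
    signal_sample = noise + signal_electrons   # same offset plus the photo-charge
    return signal_sample - reset_sample        # CDS cancels the common offset

print(read_pixel_cds(1000.0))  # ~1000.0 electrons: the reset noise is cancelled
```

Because both samples are taken before the next reset, the offset is correlated between them, which is what makes the subtraction effective.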
The surface of epitaxial layer 114 is covered by an oxide layer 109 that isolates the doped poly-silicon charge transfer gate Tx 110 from the substrate. The PD is formed by an n-type doped layer 108 and a p+ type doped potential pinning layer 107.
The FD diode 104 that senses charge transferred from the PD is connected to the pixel source follower SF transistor (not shown). The FD, SF, and the remaining pixel circuit components are all built in the p-type doped well 103 that diverts the photon-generated charge into the photodiode potential well located in layer 108. The pixels are isolated from each other by p+ type doped regions 105 and 106, which may extend all the way to the p+ type doped layer 102, and by the shallow p+ type doped implanted regions 115 that are typically aligned directly above regions 105 and 106 and implanted through the same mask. The whole pixel is covered by several inter-level (IL) oxide layers 112 (only one is shown in
Pixels such as Pixel 1 and Pixel 2 of
Typically, image sensors sense color by including various color filters and microlenses on the back of the substrate to make the pixels sensitive to predetermined bands of the electromagnetic spectrum. A typical color filter and microlens arrangement is shown in
While this concept works reasonably well, it also has several problems. For example, the color filters typically have different absorption coefficients, which results in uneven pixel saturation and thus a sacrifice of some pixel dynamic range. This is typically corrected by adjusting the filter thicknesses.
The Bayer color filter scheme also sacrifices approximately ⅔ of the photons that fall on the sensor, which results in poor low light level sensitivity. This has been recently countered by eliminating one of the green filters. For example, green filter 204 may be replaced with a clear layer to improve low light sensitivity. However, now that the clear pixel collects photons of all colors, it saturates at much lower light intensities than the rest of the pixels in the sensor. For normal light intensities, the information from this pixel is often discarded, which affects the sensor resolution.
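The photon budget described above can be illustrated with a simple model. The pass fractions below are idealized assumptions (each color filter passing one third of the incident light, a clear pixel passing all of it), not measured filter data.

```python
# Hypothetical illustration: why a Bayer mosaic discards roughly 2/3 of the
# incident photons, and why swapping one green filter for a clear one trades
# resolution for low-light sensitivity.

BAYER = ["R", "G", "G", "B"]   # classic 2x2 Bayer tile
RGBC  = ["R", "G", "C", "B"]   # one green pixel replaced by a clear pixel

PASS_FRACTION = {"R": 1/3, "G": 1/3, "B": 1/3, "C": 1.0}  # idealized filters

def tile_throughput(tile):
    """Average fraction of incident photons reaching a photodiode in the tile."""
    return sum(PASS_FRACTION[f] for f in tile) / len(tile)

print(tile_throughput(BAYER))  # ~0.333: about 2/3 of the photons are lost
print(tile_throughput(RGBC))   # ~0.5: better low-light sensitivity

# Under these assumptions the clear pixel collects ~3x the charge of its
# color neighbors per unit area, so it saturates at ~1/3 the light intensity
# unless its collection area is reduced accordingly.
```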
It would therefore be desirable to be able to provide image pixel arrays with improved dynamic range, color response, and sensitivity that saturate uniformly.
Electronic devices such as digital cameras, computers, cellular telephones, and other electronic devices include image sensors that gather incoming light to capture an image. The image sensors may include arrays of image sensor pixels (sometimes referred to as pixels or image pixels). The image pixels in the image sensors may include photosensitive elements such as photodiodes that convert the incoming light into electric charge. The electric charge may be stored and converted into image signals. Image sensors may have any number of pixels (e.g., hundreds or thousands or more). A typical image sensor may, for example, have hundreds of thousands or millions of pixels (e.g., megapixels). Image sensors may include control circuitry such as circuitry for operating the image pixels and readout circuitry for reading out image signals corresponding to the electric charge generated by the photosensitive elements.
Image sensor pixels in an image sensor pixel array may have non-uniform sizes. For example, image sensor pixels may be designed to have different sizes and thus different sensitivities. Sensitivities may, if desired, be adjusted to match a particular color filter scheme. A simplified cross-sectional side view of a portion of an image pixel array having pixels of different sizes is shown in
The surface of epitaxial layer 314 may be covered by an oxide layer such as oxide layer 309. Oxide layer 309 may be used to isolate a doped poly-silicon charge transfer (Tx) gate such as charge transfer gate 310 from substrate 301. The PD is formed by n-type doped layer 308 and p+ type doped potential pinning layer 307, which may help reduce the dark current generated by interface states (similarly to p+ type doped layer 302). Each pixel includes a floating diffusion (FD) such as n+ type doped floating diffusion 304.
Each FD diode 304 is connected to a pixel source follower (SF) transistor and a reset transistor (not shown), and each FD is configured to sense charge transferred from the PD. The FD, SF, and the remaining pixel circuit components that are formed in the top region of the substrate are now separated from the silicon bulk by a bottom p+ type doped layer (BTP) 303. This is substantially different from the arrangement of
As shown in
In this type of arrangement, the pixel charge storage regions may be built with identical sizes while the pixel charge-generating regions may have different sizes, thereby resulting in pixels that have equal charge storage capacity but different sensitivities.
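The behavior described above can be sketched with a simple charge-integration model. The full-well capacity, quantum efficiency, and intensity figures below are hypothetical values chosen for illustration.

```python
# Sketch of equal charge storage capacity with unequal sensitivity: every
# pixel shares one full-well limit, but pixels with larger charge-generating
# (collection) regions integrate charge faster.

FULL_WELL_E = 10_000  # identical storage capacity for every pixel, in electrons

def collected_charge(intensity, exposure_s, collection_area_um2, qe=0.5):
    """Charge integrated by a pixel, clipped at the shared full-well capacity.

    intensity is in photons / um^2 / s; the collection area sets sensitivity.
    """
    electrons = intensity * exposure_s * collection_area_um2 * qe
    return min(electrons, FULL_WELL_E)

# A pixel with a 2x larger collection region is 2x more sensitive...
print(collected_charge(1000.0, 1.0, 2.0))  # 1000.0 electrons
print(collected_charge(1000.0, 1.0, 4.0))  # 2000.0 electrons

# ...yet under strong light both clip at the same full-well level:
print(collected_charge(1e6, 1.0, 2.0))  # 10000
print(collected_charge(1e6, 1.0, 4.0))  # 10000
```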
In addition to improving the pixel well capacity, BTP layer 303 may also allow more flexibility in the design of transfer gate 310. For example, a stronger body effect may help prevent charge transfer transistor punch-through, which in turn allows the gate length of transfer gate 310 to be shorter (if desired). BTP layer 303 may be located very close to the silicon surface, thereby minimizing the silicon volume in which stray carriers can be generated by longer wavelength light that has not been completely absorbed in the underlying silicon bulk. This effect can be minimized by optimizing the thickness of epitaxial layer 314 in comparison to the thickness of the remaining silicon above BTP layer 303. This is particularly advantageous for pixels that are designed with additional charge carrier storage sites (not shown) and that operate in global shutter mode.
The whole pixel surface may be covered by several inter-level (IL) oxide layers 312 (only one is shown here) that are used for the pixel metal wiring and interconnect isolation. The pixel active circuit components are connected to the wiring by metal vias 313 (sometimes referred to as metal plugs) deposited through contact via holes 311.
The example of
There are now several ways to arrange the color filters and microlenses on the back of substrate 301 (e.g., back surface 301B of substrate 301).
If desired, clear pixels such as pixels 402 may include filters that pass two or more colors of light (e.g., two or more colors of light selected from the group that includes red light, blue light, and green light). These filters may sometimes be referred to as “broadband” or “complementary” filter elements. For example, yellow color filter elements that are configured to pass red and green light and clear color filter elements that are configured to pass red, green, and blue light may both be referred to as broadband filters or broadband color filter elements. Similarly, image pixels that include a broadband filter (e.g., a yellow or clear color filter) and that are therefore sensitive to two or more colors of light (e.g., two or more colors of light selected from the group that includes red light, blue light, and green light) may sometimes be referred to as broadband pixels or broadband image pixels. In contrast, the term “colored” pixel may be used to refer to image pixels that are primarily sensitive to one color of light (e.g., red light, blue light, green light, or light of any other suitable color).
The sizes of regions 400, 402, 403, and 404 may be adjusted to balance the sensitivities of these pixels in accordance with their filter in-band and out-of-band absorption characteristics. For example, the sizes of pixels in pixel array 401 may be adjusted such that pixel charge saturation for a given light intensity and color temperature occurs at the same level for all pixels. This improves sensor resolution, dynamic range, and sensitivity.
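One way to sketch this balancing numerically: scale each pixel's collection area inversely with its per-area sensitivity so all pixels reach full well at the same intensity. The 3:1 clear-to-color sensitivity ratio and the base area below are hypothetical, not values from this disclosure.

```python
# Hypothetical area-balancing sketch: assumed per-unit-area sensitivities,
# relative to a color pixel (a clear pixel is taken as ~3x more sensitive).

RELATIVE_SENSITIVITY = {"clear": 3.0, "red": 1.0, "green": 1.0, "blue": 1.0}

def balanced_areas(base_area_um2=4.0):
    """Scale each pixel's charge-collection area inversely with its per-area
    sensitivity so that saturation occurs at one common light intensity."""
    return {name: base_area_um2 / s for name, s in RELATIVE_SENSITIVITY.items()}

print(balanced_areas())
# Under these assumptions, clear pixels shrink to 1/3 of the color-pixel
# area, so all four pixel types reach full well together at a given light
# intensity and color temperature.
```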
In the type of arrangement of
In the configurations of
In order to balance pixel saturation levels in pixel array 401, pixels with color filters such as pixels 702 and 703 may have photodiodes with smaller storage areas, while broadband pixels (e.g., clear pixels 701) may have photodiodes with larger storage areas. For example, color pixels 702 and 703 may correspond to Pixel 2 of
Processor system 500, which may be a digital still or video camera system, may include a lens such as lens 596 for focusing an image onto a pixel array such as pixel array 401 when shutter release button 597 is pressed. Processor system 500 may include a central processing unit such as central processing unit (CPU) 595. CPU 595 may be a microprocessor that controls camera functions and one or more image flow functions and communicates with one or more input/output (I/O) devices 591 over a bus such as bus 593. Imaging device 801 may also communicate with CPU 595 over bus 593. System 500 may include random access memory (RAM) 592 and removable memory 594. Removable memory 594 may include flash memory that communicates with CPU 595 over bus 593. Imaging device 801 may be combined with CPU 595, with or without memory storage, on a single integrated circuit or on a different chip. Although bus 593 is illustrated as a single bus, it may be one or more buses or bridges or other communication paths used to interconnect the system components.
Various embodiments have been described illustrating image pixel arrays with non-uniform pixel sizes. This is accomplished by incorporating a special p+ type doped BTP layer under the whole pixel array and by providing the BTP layer with openings to allow photo-generated carriers to flow from the silicon bulk to the PD regions. The presence of the BTP layer allows flexibility in the placement of the deep pixel separation implants. For example, the pixel separation implants can be placed at varying distances from each other to individually adjust pixel charge collection volume and thereby adjust the pixel sensitivity. If desired, the storage area of the pixels may remain uniform throughout the array.
Image pixel arrays having pixels of different sizes may adjust pixel sensitivity according to the color filter layout of the pixel array. For example, broadband pixels that are sensitive to a larger band of wavelengths of light may be made smaller than color pixels that are sensitive to a smaller band of wavelengths of light. In another suitable arrangement, broadband pixels may be made larger than color pixels.
If desired, a pixel array may be designed such that the storage area of pixels is varied while the charge-generating volume remains uniform across the array. For example, broadband pixels that are sensitive to a larger band of wavelengths of light may have a larger charge storage area than color pixels that are sensitive to a smaller band of wavelengths of light.
The foregoing embodiments are merely illustrative of the principles of this invention and are not limiting; persons skilled in the art can make modifications and variations in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments of the invention disclosed, which remain within the scope and spirit of the invention as defined by the appended claims, and that the invention can be practiced in other embodiments.
This application claims the benefit of provisional patent application No. 61/869,444, filed Aug. 23, 2013, which is hereby incorporated by reference herein in its entirety.