1. Technical Field
This disclosure relates generally to digital image sensors and, more specifically, to image sensors having good sensitivity in the near-infrared (NIR) spectral region.
2. Description of Related Art
Solid-state image sensors work by converting incident photons into electron-hole pairs. An image sensor typically includes a two-dimensional array of light-sensing elements called “pixels,” and either the electron or the hole is collected by the sensor and converted into an output signal for each pixel or group of pixels. The depth at which the photon conversion occurs depends on the absorption coefficient of the detector material. The absorption coefficient varies by material and, for a given material, decreases at longer wavelengths. As the absorption coefficient decreases, light penetrates more deeply into the detector material.
Photodetectors based on silicon are typically sensitive to light in the 350-1100 nm wavelength range, where short-wavelength light is detected near the silicon surface and long-wavelength light can pass through thicker silicon without generating an electron-hole pair. For example, at 850 nm (a wavelength in the NIR spectral region), the absorption depth for a silicon-based photodetector is around 12 μm.
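The wavelength dependence described above follows the Beer-Lambert relation, in which the absorption depth is the reciprocal of the absorption coefficient. The following is a minimal illustrative sketch; the absorption coefficient used is an assumed value consistent with the approximately 12 μm absorption depth at 850 nm noted above, not a measured figure.

```python
import math

def fraction_absorbed(alpha_per_um: float, thickness_um: float) -> float:
    """Beer-Lambert law: fraction of incident light absorbed within a
    silicon layer of the given thickness (absorption depth = 1/alpha)."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

# Assumed illustrative value: an absorption depth of ~12 um at 850 nm,
# as noted above, corresponds to alpha of roughly 1/12 per um.
alpha_850nm = 1.0 / 12.0

for thickness_um in (3.0, 6.0, 12.0):
    absorbed = fraction_absorbed(alpha_850nm, thickness_um)
    print(f"{thickness_um:5.1f} um of silicon absorbs "
          f"{absorbed:.0%} of 850 nm light")
```

Even at one full absorption depth (12 μm in this sketch), only about 63% of the incident 850 nm photons have been absorbed, which illustrates why a shallow collection region is inefficient for NIR light.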
Silicon-based image sensor designs include Charge-Coupled Devices (CCDs), Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, and the like. CMOS image sensors offer advantages such as lower power consumption and built-in analog-to-digital converters that provide digital output pixel values.
A basic CMOS image sensor pixel consists of a photodetector such as a photodiode, readout transistors, a floating diffusion (FD), and an output node. A key aspect of pixel operation in a CMOS image sensor is the photodiode pinning voltage Vpin. Typically, a photodiode is designed so that it becomes depleted throughout its thickness at a predetermined voltage; this voltage at which the photodiode becomes fully depleted is known as its pinning voltage. To achieve full depletion, the photodiode is sandwiched between a shallow, highly doped region of the opposite doping type and an epitaxial region that is also of the opposite doping type. The pinning voltage can be increased by increasing the doping concentration in the photodiode, by making the photodiode thicker, or by decreasing the dopant concentration of the shallow surface implant. Increasing Vpin typically increases the charge that can be collected by the photodiode, and thus increases the dynamic range of the pixel.
However, the floating diffusion must hold all of the charge collected by the photodiode, and so the floating diffusion charge capacity should be slightly larger than the capacity of the photodiode. The charge capacity of the floating diffusion is proportional to its maximum voltage swing. The voltage on the floating diffusion must remain slightly higher than the minimum photodiode potential, so the maximum floating diffusion voltage swing is roughly given by its reset voltage minus the photodiode Vpin. Therefore, there is a maximum Vpin above which the pixel signal no longer increases, but lag problems continue to increase. Additionally, increasing Vpin typically leads to an increase in dark current as well as hot defective pixels.
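The trade-off described above can be made concrete with a simple charge-budget calculation, sketched below. The capacitance and voltage figures are illustrative assumptions, not values taken from any particular device; the swing relation itself (reset voltage minus Vpin) follows the approximation stated above.

```python
Q_E = 1.602e-19  # elementary charge, in coulombs

def fd_capacity_electrons(v_reset: float, v_pin: float, c_fd: float) -> float:
    """Floating-diffusion charge capacity in electrons: the usable FD
    voltage swing is roughly the reset voltage minus the photodiode
    pinning voltage, and capacity is capacitance * swing / q."""
    swing = max(v_reset - v_pin, 0.0)
    return c_fd * swing / Q_E

v_reset = 2.8    # assumed FD reset voltage, in volts
c_fd = 1.5e-15   # assumed FD capacitance, 1.5 fF

# Raising Vpin shrinks the usable FD swing, capping the signal the pixel
# can read out even if the photodiode itself could collect more charge.
for v_pin in (0.8, 1.2, 1.6, 2.0):
    capacity = fd_capacity_electrons(v_reset, v_pin, c_fd)
    print(f"Vpin = {v_pin:.1f} V -> FD capacity ~ {capacity:,.0f} e-")
```

With these assumed values, raising Vpin from 0.8 V to 2.0 V cuts the readable capacity by more than half, consistent with the existence of a maximum useful Vpin noted above.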
To capture deeply penetrating NIR light, CMOS sensors need deep charge collection regions. However, because CMOS sensors are generally fabricated using typical CMOS processing steps, the photodiode regions tend to be shallow. This means that photogenerated carriers must travel a long distance to reach the photodiode, which may result in increased pixel cross-talk and decreased quantum efficiency (QE) and modulation transfer function (MTF). To improve MTF, the photodiode may be implanted deeper. However, as explained above, increasing the photodiode thickness raises Vpin too high, preventing the photodiode from functioning correctly.
Accordingly, there is a need for an image sensor and photodiode wherein Vpin is kept as low as possible while still meeting dynamic range needs.
The present disclosure is directed to a photodiode, image sensing device, and electronic apparatus. In one aspect of the present disclosure, an image sensing device comprises a photodiode region of a first dopant type, the photodiode region including a shallow photodiode region and a first deep photodiode region, wherein a length of the shallow photodiode region is larger than a length of the first deep photodiode region; a depleting region of a second dopant type, the depleting region including a shallow depleting region and a deep depleting region, wherein the deep depleting region surrounds the first deep photodiode region on at least two opposite sides; and an epitaxial layer, wherein the second dopant type is of opposite dopant type to the first dopant type.
In another aspect of the present disclosure, the photodiode region further includes a second deep photodiode region, wherein a length of the second deep photodiode region is larger than the length of the first deep photodiode region.
In another aspect of the present disclosure, a method of manufacturing an image sensing device comprises forming a photodiode region of a first dopant type, the photodiode region including a shallow photodiode region and a first deep photodiode region, wherein a length of the shallow photodiode region is larger than a length of the first deep photodiode region; forming a depleting region of a second dopant type, the depleting region including a shallow depleting region and a deep depleting region, wherein the deep depleting region surrounds the first deep photodiode region on at least two opposite sides; and forming an epitaxial layer, wherein the second dopant type is of opposite dopant type to the first dopant type.
The present disclosure may be better understood upon consideration of the detailed description below and the accompanying figures.
[Image Sensing Device—General Configuration]
The image sensor includes various driving sections preferably arranged around the periphery of the pixel array unit 11. These driving sections control the operations of the image sensor, and may be collectively referred to as a “control section” when differentiation between individual components thereof is not necessary. The operations of the pixels 110 are controlled by a vertical driving unit 12, which is configured to apply signals to control lines 16 that are connected to respective rows of pixels. The vertical driving unit 12 may include address decoders, shift registers, and the like, as is familiar in the art, for generating control pulses. An operation of reading out signals from the pixels 110 is performed via a column processing unit 13, which is connected to respective columns of pixels via column readout lines 17. A horizontal driving unit 14 controls the readout operations of the column processing unit 13, and may include shift registers and the like, as is familiar in the art. A system control unit 15 is provided to control the vertical driving unit 12, the column processing unit 13, and the horizontal driving unit 14 by, for example, generating various clocks and control pulses. The signals read out by the column processing unit 13 are output via a horizontal output line 19 to a signal processing unit 18, which is configured to receive signals, perform various signal processing functions, and output image data.
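The row/column organization described above can be summarized with a short behavioral sketch: the vertical driving unit selects one row at a time, and the column processing unit converts every column of the selected row in parallel. The function names below are hypothetical and purely illustrative; they do not correspond to any actual hardware interface.

```python
from typing import List

def read_out_frame(pixel_array: List[List[float]]) -> List[List[int]]:
    """Schematic row-sequential readout: select each row in turn
    (vertical driving unit 12), then convert all columns of that row
    (column processing unit 13), assembling the output frame."""
    frame: List[List[int]] = []
    for row_signals in pixel_array:      # one control line 16 per row
        # Each column readout line 17 feeds its own A/D converter, so
        # the columns of a row are converted in parallel in hardware.
        frame.append([column_adc(v) for v in row_signals])
    return frame

def column_adc(analog_value: float) -> int:
    """Placeholder for the per-column conversion (see the single-slope
    A/D sketch later in this section)."""
    return round(analog_value * 1023)
```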
The general configuration described above is merely an example, and it will be understood that alternative configurations could also be implemented. For example, the signal processing unit 18 and/or a storage unit (not illustrated) could be configured in a column-parallel manner similarly to the column processing unit 13, such that the pixel signals from respective columns undergo signal processing in parallel. As another example, the signal processing unit 18 and/or a storage unit (not illustrated) can be included in the same integrated circuit as the pixel array unit 11, or may be provided in a separate circuit not integrated with the pixel array unit 11.
The A/D converter 23 preferably includes a comparator 31, a counter 32, a switch 33, and a memory (latch) 34. Comparator 31 has one input connected to a respective column readout line 17, and the other input connected to a reference voltage Vref generated by a reference signal generation section 20. Although reference signal generation section 20 is illustrated as being separate from system control unit 15, reference signal generation section 20 may be integrated with system control unit 15. The output of comparator 31 is connected to counter 32, whose output is in turn connected via switch 33 to memory 34. During a readout operation, the reference signal generation section 20 causes the voltage Vref, beginning at an initial time t0, to take the form of a ramp voltage which changes magnitude approximately linearly with time at a set rate. Counter 32 starts counting at time t0, and when the voltage Vref becomes equal to the potential carried on column readout line 17 (for example, at a time t1), comparator 31 inverts its output, causing counter 32 to stop counting. The count of counter 32 therefore corresponds to the amount of time between time t0 (when Vref starts to change magnitude) and time t1 (when Vref becomes equal to the potential of column readout line 17). Because the rate of change of Vref is known, the time between t0 and t1 corresponds to the magnitude of the potential of column readout line 17. Thus, the analog potential of column readout line 17 is converted into a digital value, which counter 32 outputs via switch 33 to memory 34, where the digital value is held until horizontal driving unit 14 causes the memory to output the value via horizontal output line 19. A correlated double sampling (CDS) technique may be employed, in which a reset level is subtracted from a pixel signal, so as to cancel out any variations between reset levels across pixels and time.
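The single-slope conversion and CDS operation described above can be summarized in a short behavioral model. The sketch below is a simplified illustration under assumed parameters (ramp step size, count depth, signal polarity); it does not describe any particular circuit.

```python
def single_slope_adc(v_column: float,
                     ramp_volts_per_count: float = 0.001,
                     max_counts: int = 4096) -> int:
    """Return the counter value at which the ramp voltage Vref, rising
    linearly from 0 V at time t0, first reaches the column potential."""
    for count in range(max_counts):
        v_ref = count * ramp_volts_per_count  # Vref rises linearly in time
        if v_ref >= v_column:                 # comparator output inverts
            return count                      # counter stops at time t1
    return max_counts - 1                     # ramp ended first (clipped)

def cds_sample(v_reset_level: float, v_signal_level: float) -> int:
    """Correlated double sampling: convert both levels with the same
    ramp and subtract, cancelling per-pixel reset-level variation."""
    return single_slope_adc(v_signal_level) - single_slope_adc(v_reset_level)

# Example: with a 0.30 V reset level and a 1.10 V signal level, the CDS
# output corresponds to the 0.80 V difference (800 counts at 1 mV/count).
print(cds_sample(0.30, 1.10))
```

Because both the reset level and the signal level are converted with the same ramp, any per-pixel offset in the reset level cancels in the subtraction, which is the essence of the CDS technique.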
System control unit 15 may preferably generate clock signals and control signals for controlling various other sections, such as a clock signal CK and control signals CS1, CS2, and CS3, based on a master clock signal MCK input into system control unit 15. Master clock signal MCK may be, for example, an input from a circuit other than the integrated circuit in which pixel array unit 11 is included; for example, from a processor of a device in which the image sensor is installed.
Control lines 16 and column readout lines 17 are preferably formed in multiple wiring layers that are laminated on top of one another with inter-layer insulating films therebetween. The wiring layers are preferably formed on top of the pixels, on a front-face side of the semiconductor substrate for front-side illuminated pixels, and on a back-face side of the semiconductor substrate for back-side illuminated pixels.
[Pixel Circuit—General Configuration and Operation]
The particular CMOS image sensor pixel 110a, illustrated in
In the exemplary 5T configuration illustrated here, a photodiode-only reset operation may be accomplished simply by setting a global shutter signal AB high and thereby causing a global shutter transistor M5 to be turned on. This is in contrast to the photodiode-and-floating-diffusion reset operation described above, which requires turning on both the reset transistor M1 and the transfer transistor M2 together. The photodiode-only reset operation allows for the photodiode PD to be affirmatively reset without affecting a charge held on the floating diffusion FD.
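The distinction between these two reset operations may be illustrated schematically as control-signal sequences, as in the sketch below. RST, TG, and AB are used here as hypothetical labels for the gate signals of the reset transistor M1, the transfer transistor M2, and the global shutter transistor M5, respectively; the timing shown is arbitrary and purely illustrative.

```python
# Schematic control-signal sequences for the two reset operations
# described above. "1" = gate driven high (transistor on), "0" = low.
# Signal labels RST/TG/AB are hypothetical; timing is illustrative only.

# Photodiode-and-floating-diffusion reset: the reset transistor M1 (RST)
# and the transfer transistor M2 (TG) are pulsed together, clearing both
# the photodiode PD and the floating diffusion FD.
pd_and_fd_reset = {
    "RST": [0, 1, 1, 0],
    "TG":  [0, 1, 1, 0],
    "AB":  [0, 0, 0, 0],
}

# Photodiode-only reset: only the global shutter signal AB (M5) is
# pulsed, so the photodiode PD is reset while the charge held on the
# floating diffusion FD is left undisturbed.
pd_only_reset = {
    "RST": [0, 0, 0, 0],
    "TG":  [0, 0, 0, 0],
    "AB":  [0, 1, 1, 0],
}
```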
[Photodiode Structure—Comparative Example]
As noted above, the pixels 110 of the present disclosure include, regardless of their general configuration, a photodiode that is configured to convert incident light into electrical signals. Various advantages of aspects of the present disclosure are related to the structure of the photodiode. In order to aid an understanding of these advantages, a comparative example will first be considered in which the photodiode has a different structure from that of aspects of the present disclosure.
[Photodiode Structure—Exemplary Embodiments]
In accordance with the principles of the present disclosure, an active pixel CMOS image sensor implements a photodiode exhibiting improved long wavelength performance. Exemplary photodiodes include a shallow photodiode region and a deep photodiode region. All regions of the exemplary photodiode are fully depleted of carriers during a photodiode reset operation; that is, no neutral regions remain.
The exemplary photodiodes preferably include a shallow wide photodiode region of medium dose, and a deep narrow stripe photodiode region of lower dose, the photodiode regions having a first (that is, p or n) dopant type. Exemplary photodiodes also include a shallow high dose depleting region of opposite dopant type that depletes the top of the photodiode, and a deep low dose depleting region of opposite dopant type that depletes the side of the stripe. The deep photodiode region is connected to the shallow photodiode region in order to facilitate the collection of deeply generated carriers.
The deep depleting region is formed at approximately the same depth as the deep photodiode region, and surrounds the deep photodiode region on at least two sides. The deep depleting region is preferably formed at a higher dose than the deep photodiode region in order to facilitate depletion of the deep photodiode region. The deep depleting region is also placed deep enough below the shallow photodiode region to prevent the deep depleting region from undesirably compensating the shallow photodiode region and thereby degrading its charge collection efficiency.
As seen in
In photodiode 500, photodiode implants 502, 503 are formed of a first dopant type, and depleting implants 501, 505 are formed of a second dopant type. In order to provide a p-n junction, the first and second dopant types are of opposite dopant type to one another.
As seen in
In photodiode 600, photodiode implants 602-604 are formed of a first dopant type, and depleting implants 601, 605 are formed of a second dopant type. In order to provide a p-n junction, the first and second dopant types are of opposite dopant type to one another.
While photodiode 600 may require more process complexity during formation thereof to achieve a sufficiently low pinning voltage, the wide area of the photodiode regions provides an improvement in QE and MTF.
In
While
Although the photodiode regions in the above preferred photodiodes are referred to as “implants,” this disclosure is not limited to regions formed by an implantation method. In various aspects of the present disclosure, the photodiode regions may be formed by epitaxial growth, ion implantation, dopant diffusion, or any other known method of forming a semiconductor p-n junction.
The deep photodiodes illustrated in
[Photodiode Manufacturing Process]
Preferably, the deep photodiode and depleting implants are performed early in the manufacturing process; that is, prior to the formation of the gate oxide. In this manner, the additional thermal budget helps deepen the photodiode implant, further improving MTF performance. Additionally, in this manner, implant defects can be better prevented and/or annealed out. Alternatively, either or both of the deep photodiode and deep depleting implants may be performed after gate formation. In this manner, sharper p-n junctions may be realized; however, the implants may not diffuse as deeply, thereby producing less of a NIR improvement.
At step S901, STI isolation regions are formed. At step S902, a p-well implant for photodiode isolation and/or well formation in the periphery is formed or provided. At step S903, an n-well implant (for example, for n-well formation in the periphery) is formed. At step S904, a deep p-type implant is formed (for example, a deep depleting implant). At step S905, a deep n-type implant is formed (for example, a deep photodiode implant or first and second deep photodiode implants). At step S906, a gate stack is formed (for example, including a gate oxide layer). At step S907, shallow p- and n-type implants are formed (for example, a shallow photodiode implant and a shallow depleting implant). At step S908, NFET lightly doped drain (LDD) implants are formed (for example, corresponding to NMOS transistors of transistors M1-M5). At step S909, PFET LDD implants are formed (for example, corresponding to PMOS transistors of transistors M1-M5). At step S910, spacer elements are formed. At step S911, n-type source/drain implants (SDN) and p-type source/drain implants (SDP) are formed (for example, corresponding to transistors M1-M5).
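The preferred ordering discussed above, in which the deep implants (steps S904 and S905) precede gate-stack formation (step S906) so as to benefit from the additional thermal budget, can be expressed as a simple check over the step sequence. The following is a schematic sketch only; the step descriptions paraphrase the flow above.

```python
# The process flow of steps S901-S911, paraphrased, with a check that
# the deep implants (S904, S905) precede gate-stack formation (S906),
# as preferred above for thermal-budget reasons.
PROCESS_FLOW = [
    ("S901", "STI isolation regions"),
    ("S902", "p-well implant (isolation / periphery wells)"),
    ("S903", "n-well implant (periphery wells)"),
    ("S904", "deep p-type (deep depleting) implant"),
    ("S905", "deep n-type (deep photodiode) implant"),
    ("S906", "gate stack (including gate oxide)"),
    ("S907", "shallow p- and n-type implants"),
    ("S908", "NFET LDD implants"),
    ("S909", "PFET LDD implants"),
    ("S910", "spacer elements"),
    ("S911", "SDN and SDP source/drain implants"),
]

step_order = [step for step, _ in PROCESS_FLOW]
for deep_implant in ("S904", "S905"):
    # Deep implants come before the gate stack, so subsequent thermal
    # steps diffuse them deeper and help anneal out implant defects.
    assert step_order.index(deep_implant) < step_order.index("S906")
```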
[Electronic Apparatus]
An electronic apparatus may be configured to include the image sensor 10 described above. Examples of such electronic apparatus include digital cameras (including both cameras configured to take still images and those configured to take moving images), cellular phones, smartphones, tablet devices, personal digital assistants (PDAs), laptop computers, desktop computers, webcams, telescopes, sensors for scientific experiments, and any other electronic apparatus for which it may be advantageous to detect light and/or capture images.
An exemplary electronic apparatus in the form of a digital camera is shown in
Moreover, a digital signal processing section (DSP) 1002 may be provided to perform signal processing on signals received from the image sensor 1010 (for example, to receive signals from image sensor 1010 and output data); a storage section 1003 may be provided to store data generated by the image sensor 1010; a control section 1004 may be provided to control operations of the image sensor 1010; a power supply section 1005 may be provided to supply power to the image sensor 1010; and an output unit 1006 may be provided to output captured image data. Individual sections may be integrated with one or more other sections, or each individual section may be a separate integrated circuit. Individual sections may be connected to one another via a bus 1009, which may include a wired or wireless connection. Control section 1004 may include a processor that executes instructions stored on a non-transitory computer-readable medium, for example a memory included in storage section 1003. Output unit 1006 may be an interface for facilitating transmission of the stored data to external devices and/or for displaying the stored data as an image on a display device, which display device may be provided separate from or integral with the camera 1000.
Image sensor 1010 itself may include various sections therein for performing signal processing of the pixel signals generated by the pixel array, and/or signal processing sections may be provided in the electronic apparatus separate from image sensor 1010. Preferably, image sensor 1010 itself performs at least some signal processing functions, in particular A/D conversion and CDS noise cancellation. The electronic apparatus may also preferably perform some signal processing functions, for example converting the raw data from the image sensor 1010 into an image/video storage format (e.g., MPEG-4 or any known format), via the processor and/or via a dedicated signal processing section such as a video encoder/decoder unit.
In general, computing systems and/or devices, such as some of the above-described electronic apparatus, may employ any of a number of computer operating systems, including, but by no means limited to, versions and/or varieties of the Microsoft Windows® operating system, the Unix operating system (e.g., the Solaris® operating system distributed by Oracle Corporation of Redwood Shores, Calif.), the AIX UNIX operating system distributed by International Business Machines of Armonk, N.Y., the Linux operating system, the Mac OS X and iOS operating systems distributed by Apple Inc. of Cupertino, Calif., the BlackBerry OS distributed by Research In Motion of Waterloo, Canada, and the Android operating system developed by the Open Handset Alliance.
Computing devices generally include computer-executable instructions, where the instructions may be executable by one or more computing devices such as those listed above. Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java™, C, C++, C#, Objective-C, Visual Basic, JavaScript, Perl, etc. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer-readable media.
A computer-readable medium (also referred to as a processor-readable medium) includes any non-transitory (e.g., tangible) medium that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer). Such a medium may take many forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include, for example, dynamic random access memory (DRAM), which typically constitutes a main memory. Such instructions may be transmitted by one or more transmission media, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to a processor of a computer. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.