This relates generally to imaging systems and, more particularly, to imaging systems with phase detection capabilities.
Modern electronic devices such as cellular telephones, cameras, and computers often use digital image sensors. Image sensors (sometimes referred to as imagers) may be formed from a two-dimensional array of image sensing pixels. Each pixel receives incident photons (light) and converts the photons into electrical signals. Image sensors are sometimes designed to provide images to electronic devices using a Joint Photographic Experts Group (JPEG) format.
Some applications such as automatic focusing and three-dimensional (3D) imaging may require electronic devices to provide stereo and/or depth sensing capabilities. For example, to bring an object of interest into focus for an image capture, an electronic device may need to identify the distance between the electronic device and the object of interest. To identify distances, conventional electronic devices use complex arrangements. Some arrangements require the use of multiple image sensors and camera lenses that capture images from various viewpoints. Other arrangements require the addition of lenticular arrays that focus incident light on sub-regions of a two-dimensional pixel array. Due to the addition of components such as additional image sensors or complex lens arrays, these arrangements lead to reduced spatial resolution, increased cost, and increased complexity.
It would therefore be desirable to be able to provide improved imaging systems with depth sensing capabilities.
Embodiments of the present invention relate to image sensors with depth sensing capabilities. An electronic device with a digital camera module is shown in
Still and video image data from image sensor 14 may be provided to image processing and data formatting circuitry 16 via path 26. Image processing and data formatting circuitry 16 may be used to perform image processing functions such as automatic focusing functions, depth sensing, data formatting, adjusting white balance and exposure, implementing video image stabilization, face detection, etc. For example, during automatic focusing operations, image processing and data formatting circuitry 16 may process data gathered by phase detection pixels in image sensor 14 to determine the magnitude and direction of lens movement (e.g., movement of lens 28) needed to bring an object of interest into focus.
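Purely as an illustrative software sketch (and not a description of circuitry 16 itself), the automatic focusing behavior described above can be outlined as a simple control loop. The sensor and lens objects, their methods, and the gain value below are hypothetical names introduced only for this example and are not part of any actual camera module interface.

    GAIN = 0.5  # arbitrary scaling from phase difference to lens travel (assumed value)

    def autofocus(sensor, lens, tolerance=0.01, max_steps=20):
        # Illustrative loop: read a phase difference from phase detection
        # pixels, then move the lens until the difference is small.
        # 'sensor.read_phase_difference()' and 'lens.move()' are assumed
        # methods used only for this sketch.
        for _ in range(max_steps):
            diff = sensor.read_phase_difference()
            if abs(diff) < tolerance:
                break  # object of interest is in focus
            # The sign of the phase difference gives the direction of lens
            # movement; its magnitude scales how far the lens is moved.
            lens.move(direction=1 if diff > 0 else -1, distance=abs(diff) * GAIN)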
Image processing and data formatting circuitry 16 may also be used to compress raw camera image files if desired (e.g., to Joint Photographic Experts Group or JPEG format). In a typical arrangement, which is sometimes referred to as a system on chip (SOC) arrangement, camera sensor 14 and image processing and data formatting circuitry 16 are implemented on a common integrated circuit. The use of a single integrated circuit to implement camera sensor 14 and image processing and data formatting circuitry 16 can help to reduce costs. This is, however, merely illustrative. If desired, camera sensor 14 and image processing and data formatting circuitry 16 may be implemented using separate integrated circuits. If desired, camera sensor 14 and image processing circuitry 16 may be formed on separate semiconductor substrates. For example, camera sensor 14 and image processing circuitry 16 may be formed on separate substrates that have been stacked.
Camera module 12 may convey acquired image data to host subsystems 20 over path 18 (e.g., image processing and data formatting circuitry 16 may convey image data to subsystems 20). Electronic device 10 typically provides a user with numerous high-level functions. In a computer or advanced cellular telephone, for example, a user may be provided with the ability to run user applications. To implement these functions, host subsystem 20 of electronic device 10 may include storage and processing circuitry 24 and input-output devices 22 such as keypads, input-output ports, joysticks, and displays. Storage and processing circuitry 24 may include volatile and nonvolatile memory (e.g., random-access memory, flash memory, hard drives, solid state drives, etc.). Storage and processing circuitry 24 may also include microprocessors, microcontrollers, digital signal processors, application specific integrated circuits, or other processing circuits.
It may be desirable to provide image sensors with depth sensing capabilities (e.g., to use in automatic focusing applications, 3D imaging applications such as machine vision applications, etc.). To provide depth sensing capabilities, image sensor 14 may include phase detection pixel groups such as phase detection pixel group 100 shown in
Color filters such as color filter elements 104 may be interposed between microlens 102 and substrate 108. Color filter elements 104 may filter incident light by only allowing predetermined wavelengths to pass through color filter elements 104 (e.g., color filter 104 may only be transparent to the wavelengths corresponding to a green color, a red color, a blue color, a yellow color, a cyan color, a magenta color, visible light, infrared light, etc.). Color filter 104 may be a broadband color filter. Examples of broadband color filters include yellow color filters (e.g., yellow color filter material that passes red and green light) and clear color filters (e.g., transparent material that passes red, blue, and green light). In general, broadband filter elements may pass two or more colors of light. Photodiodes PD1 and PD2 may serve to absorb incident light focused by microlens 102 and produce pixel signals that correspond to the amount of incident light absorbed.
Photodiodes PD1 and PD2 may each cover approximately half of the substrate area under microlens 102 (as an example). By only covering half of the substrate area, each photosensitive region may be provided with an asymmetric angular response (e.g., photodiode PD1 may produce different image signals based on the angle at which incident light reaches pixel pair 100). The angle at which incident light reaches pixel pair 100 relative to a normal axis 116 (i.e., the angle at which incident light strikes microlens 102 relative to the optical axis 116 of lens 102) may be herein referred to as the incident angle or angle of incidence.
An image sensor can be formed using front side illumination imager arrangements (e.g., when circuitry such as metal interconnect circuitry is interposed between the microlens and photosensitive regions) or back side illumination imager arrangements (e.g., when photosensitive regions are interposed between the microlens and the metal interconnect circuitry). The example of
In the example of
In the example of
The positions of photodiodes PD1 and PD2 may sometimes be referred to as asymmetric positions because the center of each photosensitive area 110 is offset from (i.e., not aligned with) optical axis 116 of microlens 102. Due to the asymmetric formation of individual photodiodes PD1 and PD2 in substrate 108, each photosensitive area 110 may have an asymmetric angular response (e.g., the signal output produced by each photodiode 110 in response to incident light with a given intensity may vary based on an angle of incidence). In the diagram of
Line 160 may represent the output image signal for photodiode PD2 whereas line 162 may represent the output image signal for photodiode PD1. For negative angles of incidence, the output image signal for photodiode PD2 may increase (e.g., because incident light is focused onto photodiode PD2) and the output image signal for photodiode PD1 may decrease (e.g., because incident light is focused away from photodiode PD1). For positive angles of incidence, the output image signal for photodiode PD2 may be relatively small and the output image signal for photodiode PD1 may be relatively large.
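As a loose numerical illustration of the curves described above (the response shapes below are assumptions chosen for demonstration, not measured data), the complementary angular responses of lines 160 and 162 might be modeled as follows.

    import numpy as np

    # Illustrative model of the asymmetric angular responses of a phase
    # detection pixel pair; the smooth, complementary curves are assumptions.
    angles = np.linspace(-30.0, 30.0, 121)           # angle of incidence (degrees)

    # PD2 (line 160) responds strongly to negative angles of incidence,
    # while PD1 (line 162) responds strongly to positive angles.
    pd2_signal = 1.0 / (1.0 + np.exp(angles / 5.0))  # normalized output of PD2
    pd1_signal = 1.0 - pd2_signal                    # complementary output of PD1

    # At normal incidence (0 degrees) the two photodiodes respond equally.
    assert abs(pd1_signal[60] - pd2_signal[60]) < 1e-9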
The size and location of photodiodes PD1 and PD2 of pixel pair 100 of
Output signals from pixel pairs such as pixel pair 100 may be used to adjust the optics (e.g., one or more lenses such as lenses 28 of
For example, by creating pairs of pixels that are sensitive to light from one side of the lens or the other, a phase difference can be determined. This phase difference may be used to determine both how far and in which direction the image sensor optics should be adjusted to bring the object of interest into focus.
When an object is in focus, light from both sides of the image sensor optics converges to create a focused image. When an object is out of focus, the images projected by two sides of the optics do not overlap because they are out of phase with one another. By creating pairs of pixels where each pixel is sensitive to light from one side of the lens or the other, a phase difference can be determined. This phase difference can be used to determine the direction and magnitude of optics movement needed to bring the images into phase and thereby focus the object of interest. Pixel groups that are used to determine phase difference information such as pixel pair 100 are sometimes referred to herein as phase detection pixels or depth-sensing pixels.
A phase difference signal may be calculated by comparing the output pixel signal of PD1 with that of PD2. For example, a phase difference signal for pixel pair 100 may be determined by subtracting the pixel signal output of PD1 from the pixel signal output of PD2 (e.g., by subtracting line 162 from line 160). For an object at a distance that is less than the focused object distance, the phase difference signal may be negative. For an object at a distance that is greater than the focused object distance, the phase difference signal may be positive. This information may be used to automatically adjust the image sensor optics to bring the object of interest into focus (e.g., by bringing the pixel signals into phase with one another).
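The comparison above may be summarized, purely for illustration, by the following sketch; the signal values and tolerance are made-up numbers and the function names are not part of any actual sensor interface.

    def phase_difference(pd1_signal, pd2_signal):
        # Phase difference signal for one pixel pair: the PD1 output
        # (line 162) subtracted from the PD2 output (line 160).
        return pd2_signal - pd1_signal

    def interpret(diff, tolerance=0.01):
        # A negative difference suggests the object is closer than the
        # focused object distance; a positive difference suggests it is
        # farther away. The tolerance value is an arbitrary example.
        if abs(diff) < tolerance:
            return "in focus"
        return "closer than focus distance" if diff < 0 else "farther than focus distance"

    # Example with made-up pixel outputs:
    print(interpret(phase_difference(pd1_signal=0.62, pd2_signal=0.38)))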
As previously mentioned, the example in
Image sensors can operate using a global shutter or a rolling shutter scheme. In a global shutter, every pixel in the image sensor may simultaneously capture an image, whereas in a rolling shutter each row of pixels may sequentially capture an image. In order to implement global shutter, a pixel may include a storage node for storing charge from the photodiode.
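As a loose software analogy for the two schemes (actual shutter control is implemented in the sensor's readout circuitry; the row count and function names here are hypothetical), the difference in timing can be sketched as follows.

    NUM_ROWS = 4  # illustrative row count

    def global_shutter_capture(expose_row, read_row):
        # All rows integrate light over the same interval; each pixel's
        # charge is then held in its storage node until its row is read.
        for row in range(NUM_ROWS):
            expose_row(row)   # conceptually simultaneous exposure
        for row in range(NUM_ROWS):
            read_row(row)     # sequential readout from the storage nodes

    def rolling_shutter_capture(expose_row, read_row):
        # Each row is exposed and read in turn, so different rows sample
        # the scene at slightly different times.
        for row in range(NUM_ROWS):
            expose_row(row)
            read_row(row)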
A storage region 54 may also be provided for one or more of pixels 44. Storage region 54 may be a doped-semiconductor region. For example, storage region 54 may be a doped-silicon region. Each storage region 54 may be capable of storing charge from one or more photodiodes. However, the storage region (sometimes referred to as a storage node) may also collect charge that was not generated in a photodiode. This is generally undesirable, as any additional charge collected by the storage node adds noise to the photodiode charge stored there. To reduce noise, it is preferable to limit the exposure of each storage region 54 to incident light. In order to reduce how much incident light reaches storage node 54, a shielding layer 56 may be provided above the storage node. Shielding layer 56 may be formed from a material that is opaque to visible light (e.g., tungsten, metal oxide, a composite material, etc.). It may also be desirable for shielding layer 56 to be formed from an absorptive material that absorbs incident light rather than reflecting it. If shielding layer 56 is reflective, incident light may reflect off of the sidewalls of the shielding layer and end up reaching storage node 54. By absorbing incident light, shielding layer 56 better reduces noise in the image sensor.
Shielding layer 56 may also ensure that the underlying photodiodes have an asymmetric response to incident light. In other words, shielding layer 56 may cover approximately half of two neighboring photodiodes to make a phase detection pixel pair 100. Shielding layer 56 does not have to cover half of the underlying photodiodes, and can cover any portion of the underlying photodiodes (e.g., 0.3×, 0.4×, 0.6×, between 0.3× and 0.6×, less than 0.3×, more than 0.3×, etc.). Storage node 54 may be used to store charge for any desired photodiodes. In an illustrative embodiment, each storage region 54 shown in
In
Forming shielding layer 56 as backside trench isolation may allow for better protection of storage region 54 from noise. The backside trench isolation has a depth 60. In other words, the trench for the backside trench isolation extends from a back surface of substrate 48 towards a front surface of substrate 48 by a distance 60. This enhanced depth provides greater shielding of storage region 54, and may also improve the asymmetric response of the pixels in phase detection pixel group 100 to incident light. The depth, width, and height of shielding layer 56 may be chosen to minimize the noise to which charge storage region 54 is exposed while optimizing the phase detection performance of the phase detection pixels.
If desired, an absorptive layer 62 may be provided in addition to shielding layer 56, as shown in
In
Shielding layer 56 does not have to be the only component that contributes to the asymmetric response of the pixels in each phase detection pixel group 100. As shown in
Storage regions 54 as described in connection with
In various embodiments of the invention, an image sensor may include a pixel array. The pixel array may include a first global shutter phase detection pixel with a first photodiode that is covered by a first microlens, a second global shutter phase detection pixel with a second photodiode that is covered by a second microlens, a shielding layer that covers at least a portion of the first photodiode and at least a portion of the second photodiode, and a charge storage region formed underneath the shielding layer. The shielding layer may shield the charge storage region from incident light.
The shielding layer may be opaque to visible light. The shielding layer may be formed from an absorptive material. The shielding layer may include tungsten. The shielding layer may include backside trench isolation. The shielding layer may be coated with an anti-reflective coating. The shielding layer may be coated with an absorptive coating. The shielding layer may cover approximately half of the first photodiode and approximately half of the second photodiode.
In various embodiments, an image sensor may include a silicon substrate with first and second opposing sides, a first photodiode for a first phase detection pixel formed in the silicon substrate, a second photodiode for a second phase detection pixel formed in the silicon substrate, a trench formed in the silicon substrate that extends from the first side of the silicon substrate towards the second side of the silicon substrate, a shielding layer formed in the trench, and a global shutter storage region formed in the silicon substrate. The shielding layer may overlap the global shutter storage region.
The shielding layer may cover at least a portion of the first photodiode and at least a portion of the second photodiode. The shielding layer may cover approximately half of the first photodiode and approximately half of the second photodiode. The shielding layer may be formed from an absorptive material that absorbs incident light. The image sensor may also include an anti-reflective coating on the shielding layer and an absorptive coating on the shielding layer. The image sensor may also include a first microlens formed over the first photodiode and a second microlens formed over the second photodiode. The first and second microlenses may be formed on the first side of the silicon substrate. The global shutter storage region may be formed in the second side of the silicon substrate.
In various embodiments, an image sensor may include a substrate with a front side and an opposing back side, a first photodiode for a first phase detection pixel formed in the substrate, a second photodiode for a second phase detection pixel formed in the substrate, a first microlens on the back side of the substrate that covers the first photodiode, a second microlens on the back side of the substrate that covers the second photodiode, back side trench isolation formed in the substrate that extends from the back side of the substrate towards the front side of the substrate, and a storage region formed in the substrate. The back side trench isolation may overlap the first and second photodiodes, and the back side trench isolation may shield the storage region from incident light. The back side trench isolation may overlap approximately half of the first photodiode and approximately half of the second photodiode. The back side trench isolation may include tungsten and an anti-reflective coating. The first and second phase detection pixels may be configured to operate in a global shutter mode, and the storage region may be a global shutter storage region.
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention.