The invention relates generally to the field of image sensors, and more particularly to a stacked image sensor construction.
A typical image sensor has an image sensing portion that includes a photosensitive area or charge collection area for collecting a charge in response to incident light and a transfer gate for transferring charge from the photosensitive area to a transfer mechanism. Usually, the sensing portion is fabricated within the same material layer and with similar processes as the control circuitry for the image sensor. In an effort to increase the number of pixels provided in an image sensor, pixel size has been decreasing. However, as the pixel size shrinks, the illuminated area of the photodetector is also typically reduced, in turn decreasing the captured signal level and degrading performance.
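The scaling trade-off described above can be illustrated with a simple area calculation (an illustrative sketch only; the pixel pitch and fill-factor values are hypothetical, not taken from any particular sensor):

```python
def relative_signal(pixel_pitch_um, fill_factor):
    """Collected signal is roughly proportional to the illuminated
    photodetector area: pitch^2 * fill_factor (arbitrary units)."""
    return pixel_pitch_um ** 2 * fill_factor

# Shrinking the pitch from 2.0 um to 1.4 um at the same fill factor
# cuts the captured signal roughly in half.
big = relative_signal(2.0, 0.5)
small = relative_signal(1.4, 0.5)
print(small / big)  # ~0.49
```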
Stacked image sensor structures are known that consist of a sensor-only wafer over one (or more) circuit wafers. A stacked structure such as this requires electrical interconnects between the wafers to be able to operate the sensor as well as read out the collected charges. The interconnects require the use of areas on the sensor and circuit wafers that could otherwise be used for charge collection and storage or support circuitry. Some interconnects also have the potential to cause noise in adjacent pixels.
Thus, a need exists for an improved stacked image sensor structure.
An image sensor and associated image capture device and method include a sensing wafer with a plurality of charge storage elements. A floating diffusion is associated with the plurality of charge storage elements, and a charge is transferred among the charge storage elements to the floating diffusion. The floating diffusion is electrically connected to support circuitry of a circuit wafer.
The present invention thus provides the advantage of an improved image sensor structure.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” “leading,” “trailing,” etc., is used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
Turning now to
The amount of light reaching the sensor 20 is regulated by an iris block 14 that varies the aperture and the neutral density (ND) filter block 13 that includes one or more ND filters interposed in the optical path. Also regulating the overall light level is the time that the shutter block 18 is open. The exposure controller block 40 responds to the amount of light available in the scene as metered by the brightness sensor block 16 and controls all three of these regulating functions.
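The three regulating functions act multiplicatively on the light reaching the sensor 20, which can be sketched as follows (a behavioral model only; the f-numbers and ND density are illustrative assumptions):

```python
def relative_exposure(f_number, nd_density, shutter_s, scene_lum=1.0):
    """Exposure at the sensor scales with aperture area (1/N^2), ND
    filter transmission (10^-density), and shutter-open time."""
    aperture_factor = 1.0 / f_number ** 2
    nd_transmission = 10.0 ** (-nd_density)
    return scene_lum * aperture_factor * nd_transmission * shutter_s

# Stopping down from f/2.8 to f/4 roughly halves the light; adding an
# ND 0.3 filter (about one stop) roughly halves it again.
base = relative_exposure(2.8, 0.0, 1 / 60)
print(relative_exposure(4.0, 0.0, 1 / 60) / base)  # ~0.49
print(relative_exposure(2.8, 0.3, 1 / 60) / base)  # ~0.50
```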
This description of a particular camera configuration will be familiar to one skilled in the art, and it will be apparent to such a skilled person that many variations and additional features are possible. For example, an autofocus system can be added, or the lens can be detachable and interchangeable. It will be understood that the present disclosure applies to various types of digital cameras where similar functionality is provided by alternative components. For example, the digital camera can be a relatively simple point-and-shoot digital camera, where the shutter 18 is a relatively simple movable blade shutter, or the like, instead of the more complicated focal plane arrangement. Aspects of the present invention can also be practiced on imaging components included in non-camera devices such as mobile phones and automotive vehicles.
An analog signal from the image sensor 20 is processed by an analog signal processor 22 and applied to an analog to digital (A/D) converter 24. A timing generator 26 produces various clocking signals to select rows and pixels and synchronizes the operation of the analog signal processor 22 and the A/D converter 24. The image sensor stage 28 includes the image sensor 20, the analog signal processor 22, the A/D converter 24, and the timing generator 26. The components of the image sensor stage 28 can be separately fabricated integrated circuits, or they could be fabricated as a single integrated circuit as is commonly done with CMOS image sensors. The resulting stream of digital pixel values from the A/D converter 24 is stored in a memory 32 associated with the digital signal processor (DSP) 36.
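The analog signal chain can be modeled behaviorally as gain followed by quantization (a hypothetical sketch; the analog gain, full-scale voltage, and 10-bit depth are illustrative assumptions, not parameters of the disclosed sensor):

```python
def quantize(voltage, full_scale=1.0, bits=10):
    """Model the A/D converter: map an analog voltage onto a digital
    code, clipping at the rails."""
    code = int(voltage / full_scale * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))

def pixel_pipeline(sensor_voltage, analog_gain=2.0):
    """The analog signal processor applies gain; the A/D converter
    then produces the digital pixel value stored in memory."""
    return quantize(sensor_voltage * analog_gain)

print(pixel_pipeline(0.25))  # -> 511 (half of full scale after gain)
```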
The digital signal processor 36 is one of three processors or controllers in the illustrated embodiment, in addition to a system controller 50 and an exposure controller 40. Although this partitioning of camera functional control among multiple controllers and processors is typical, these controllers or processors can be combined in various ways without affecting the functional operation of the camera or the application of the present invention. These controllers or processors can comprise one or more digital signal processor devices, microcontrollers, programmable logic devices, or other digital logic circuits. Although a combination of such controllers or processors has been described, it should be apparent that one controller or processor can be designated to perform all of the needed functions. All of these variations can perform the same function and fall within the scope of this invention, and the term “processing stage” will be used as needed to encompass all of this functionality within one phrase, for example, as in processing stage 38 in
In the illustrated embodiment, the DSP 36 manipulates the digital image data in its memory 32 according to a software program permanently stored in program memory 54 and copied to the memory 32 for execution during image capture. The DSP 36 executes the software necessary for practicing image processing. The memory 32 can include any type of random access memory, such as SDRAM. A bus 30 comprising a pathway for address and data signals connects the DSP 36 to its related memory 32, A/D converter 24 and other related devices.
The system controller 50 controls the overall operation of the camera based on a software program stored in the program memory 54, which can include Flash EEPROM or other nonvolatile memory. This memory can also be used to store image sensor calibration data, user setting selections and other data which must be preserved when the camera is turned off. The system controller 50 controls the sequence of image capture by directing the exposure controller 40 to operate the lens 12, ND filter 13, iris 14, and shutter 18 as previously described, directing the timing generator 26 to operate the image sensor 20 and associated elements, and directing the DSP 36 to process the captured image data. After an image is captured and processed, the final image file stored in memory 32 is transferred to a host computer via an interface 57, stored on a removable memory card 64 or other storage device, and displayed for the user on an image display 88.
A bus 52 includes a pathway for address, data and control signals, and connects the system controller 50 to the DSP 36, program memory 54, system memory 56, host interface 57, memory card interface 60 and other related devices. The host interface 57 provides a high speed connection to a personal computer (PC) or other host computer for transfer of image data for display, storage, manipulation or printing. This interface can be an IEEE 1394 or USB 2.0 serial interface or any other suitable digital interface. The memory card 64 is typically a CompactFlash (CF) card inserted into a socket 62 and connected to the system controller 50 via a memory card interface 60. Other types of storage that can be utilized include, for example, PC Cards, MultiMediaCards (MMC), or Secure Digital (SD) cards.
Processed images are copied to a display buffer in the system memory 56 and continuously read out via a video encoder 80 to produce a video signal. This signal is output directly from the camera for display on an external monitor, or processed by the display controller 82 and presented on an image display 88. This display is typically an active matrix color liquid crystal display (LCD), although other types of displays are used as well.
The user interface, including all or any combination of viewfinder display 70, exposure display 72, status display 76 and image display 88, and user inputs 74, is controlled by a combination of software programs executed on the exposure controller 40 and the system controller 50. User inputs 74 typically include some combination of buttons, rocker switches, joysticks, rotary dials or touchscreens. The exposure controller 40 operates light metering, exposure mode, autofocus and other exposure functions. The system controller 50 manages the graphical user interface (GUI) presented on one or more of the displays, for example, on the image display 88. The GUI typically includes menus for making various option selections and review modes for examining captured images.
The exposure controller 40 accepts user inputs selecting exposure mode, lens aperture, exposure time (shutter speed), and exposure index or ISO speed rating and directs the lens and shutter accordingly for subsequent captures. The brightness sensor 16 is employed to measure the brightness of the scene and provide an exposure meter function for the user to refer to when manually setting the ISO speed rating, aperture and shutter speed. In this case, as the user changes one or more settings, the light meter indicator presented on viewfinder display 70 tells the user to what degree the image will be over or underexposed. In an automatic exposure mode, the user changes one setting and the exposure controller 40 automatically alters another setting to maintain correct exposure.
For example, for a given ISO speed rating when the user reduces the lens aperture, the exposure controller 40 automatically increases the exposure time to maintain the same overall exposure.
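This reciprocity between aperture and exposure time can be expressed through the exposure value, EV = log2(N²/t): two settings with equal EV give equal overall exposure. A sketch of the compensation (illustrative values only):

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t); equal EV means equal overall exposure."""
    return math.log2(f_number ** 2 / shutter_s)

def compensate_shutter(old_f, old_t, new_f):
    """When the aperture is stopped down, lengthen the exposure time
    so the exposure value (and overall exposure) is unchanged."""
    return old_t * (new_f / old_f) ** 2

# Going from f/2.8 to f/4 at 1/125 s requires roughly 1/60 s.
t = compensate_shutter(2.8, 1 / 125, 4.0)
print(exposure_value(2.8, 1 / 125) - exposure_value(4.0, t))  # ~0.0
```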
Embodiments of the disclosed image sensor 20 incorporate a charge transfer element into the sensor wafer that enables charge to be moved across several pixels to an interconnect node 116 that need not be immediately adjacent to the charge collection point. This reduces the number of interconnects 116 required, preserving wafer area for other uses.
In another embodiment in accordance with the invention, the charge storage elements 130 are CCDs, with one or more of the CCDs configured as light-sensitive CCDs. The light-sensitive CCDs collect charge in response to received light.
A CCD array is typically composed of an array of closely spaced gates 134 that are used to effect transfer of charge in the CCD. The illustrated embodiment uses micro gaps within a single polycrystalline silicon (polysilicon) level to allow multiple adjacent gates on the same poly level. A charge is transferred among the charge storage elements 130 to the floating diffusion 132, which is electrically connected to support circuitry 140 of the circuit wafer 112. In some embodiments, the support circuitry 140 includes a floating diffusion 142 that is electrically connected to the floating diffusion 132 on the sensing wafer 110 by the interconnects 116. In addition to the floating diffusion 142, the support circuitry can include, for example, a reset transistor connected to the floating diffusion 142, with the reset transistor including a reset gate 144, a VDD voltage supply 146, and a source follower transistor coupled to VDD and having input and output terminals 148, 150. In other embodiments, an input to the source follower transistor is coupled to the floating diffusion 132 on the sensing wafer 110, allowing an amplified signal to be transferred to the circuit wafer 112.
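The transfer-and-readout sequence described above can be sketched in software (a behavioral model only; the conversion gain and source-follower gain are hypothetical values, not device parameters):

```python
def read_out_row(charges_e, conv_gain_uv_per_e=50.0, sf_gain=0.5):
    """Shift charge packets gate by gate toward the floating diffusion
    (bucket-brigade style), convert each packet to a voltage at the
    node (V = Q * conversion gain), buffer it through the source
    follower, and reset the node before the next transfer."""
    samples_uv = []
    row = list(charges_e)
    while row:
        packet = row.pop()  # packet nearest the floating diffusion
        samples_uv.append(packet * conv_gain_uv_per_e * sf_gain)
        # (reset gate clears the floating diffusion here)
    return samples_uv

# Packets in electrons; the packet nearest the node reads out first.
print(read_out_row([1000, 2000, 3000]))  # -> [75000.0, 50000.0, 25000.0]
```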
In the illustrated embodiments, the charge storage elements 130 are arranged in an array of rows and columns.
In yet another embodiment in accordance with the invention, the charge storage elements 130 are arranged in an array of rows and columns and the floating diffusions 132 are all disposed in one column at the edge or border of the array or within the array. Each row of charge storage elements has a floating diffusion associated with the entire row. Charge is transferred among the charge storage elements in each row to the floating diffusion associated with the entire row.
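The row-wise organization can be sketched as follows (an illustrative model; the frame dimensions are hypothetical). Because each row shares one floating diffusion, the number of wafer-to-wafer interconnects drops from one per pixel to one per row:

```python
def read_out_array(frame_e):
    """Each row has one floating diffusion at the border column; charge
    is shifted across the row's storage elements toward it, so each
    row's packets reach the node nearest-first."""
    readout = []
    for row in frame_e:
        row = list(row)
        order = []
        while row:
            order.append(row.pop())  # element nearest the diffusion exits first
        readout.append(order)
    return readout

def interconnects_needed(rows, cols, shared_per_row=True):
    """One interconnect per row instead of one per pixel."""
    return rows if shared_per_row else rows * cols

print(read_out_array([[1, 2, 3], [4, 5, 6]]))  # -> [[3, 2, 1], [6, 5, 4]]
print(interconnects_needed(480, 640))          # -> 480, not 307200
```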
In some embodiments, such as the embodiment illustrated in
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention. Additionally, even though specific embodiments of the invention have been described herein, it should be noted that the application is not limited to these embodiments. In particular, any features described with respect to one embodiment may also be used in other embodiments, where compatible. And the features of the different embodiments may be exchanged, where compatible.
This application claims the benefit of U.S. Provisional Application No. 61/122,860 filed on Dec. 16, 2008, which is incorporated herein by reference.