The present invention relates generally to methods and devices for image sensing, and particularly to sensing motion using film-based image sensors.
In film-based image sensors, a silicon-based switching array is overlaid with a photosensitive film such as a film containing a dispersion of quantum dots. Films of this sort are referred to as “quantum films.” The switching array, which can be similar to those used in complementary metal-oxide-semiconductor (CMOS) image sensors that are known in the art, is coupled by suitable electrodes to the film in order to read out the photocharge that accumulates in each pixel of the film due to incident light.
U.S. Pat. No. 7,923,801, whose disclosure is incorporated herein by reference, describes materials, systems and methods for optoelectronic devices based on such quantum films.
Embodiments of the present invention that are described hereinbelow provide enhanced image sensor designs and methods for operating such image sensors with improved performance.
There is therefore provided, in accordance with an embodiment of the invention, imaging apparatus, including a photosensitive medium configured to convert incident photons into charge carriers and a common electrode, which is at least partially transparent, overlying the photosensitive medium and configured to apply a bias potential to the photosensitive medium. An array of pixel circuits is formed on a semiconductor substrate. Each pixel circuit defines a respective pixel and is configured to collect the charge carriers from the photosensitive medium while the common electrode applies the bias potential and to output a signal responsively to the collected charge carriers. Control circuitry is configured to read out the signal from the pixel circuits in each of a periodic sequence of readout frames and to drive the common electrode to apply the bias potential to the photosensitive medium during each of a plurality of distinct shutter periods within at least one of the readout frames.
In a disclosed embodiment, the photosensitive medium includes a quantum film.
In one embodiment, the plurality of the distinct shutter periods includes at least a first shutter period and a second shutter period of equal, respective durations. Alternatively, the first shutter period and second shutter period have different, respective durations.
In some embodiments, the photosensitive medium includes a first photosensitive layer, which is configured to convert the incident photons in a first wavelength band into the charge carriers, and a second photosensitive layer, which is configured to convert the incident photons in a second wavelength band, different from the first wavelength band, into the charge carriers. The control circuitry is configured to drive the common electrode to apply the bias potential only to the first photosensitive layer during a first shutter period and to apply the bias potential only to the second photosensitive layer during a different, second shutter period among the plurality of distinct shutter periods within the at least one of the readout frames.
In a disclosed embodiment, the first wavelength band is a visible wavelength band, while the second wavelength band is an infrared wavelength band.
The first and second photosensitive layers may both be overlaid on a common set of the pixel circuits, which collect the charge carriers in response to the photons that are incident during both of the first and second shutter periods. Alternatively, the first and second photosensitive layers are overlaid on different, respective first and second sets of the pixel circuits.
In a disclosed embodiment, the control circuitry is configured to synchronize the shutter periods with a pulsed illumination source, which illuminates a scene while an image of the scene is captured by the apparatus.
In some embodiments, the control circuitry is configured to process the signal in the at least one of the readout frames so as to identify, responsively to the plurality of the distinct shutter periods, a moving object in an image captured by the apparatus. In one embodiment, the control circuitry is configured to estimate a velocity of the moving object responsively to a distance between different locations of the moving object that are detected respectively during the distinct shutter periods.
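By way of illustration, the velocity estimate described above can be sketched as follows. This is a minimal sketch, not taken from the embodiments themselves: the function name and coordinate convention are assumptions, and the positions are taken to be pixel coordinates of the object as detected during each of two distinct shutter periods.

```python
def estimate_velocity(pos_a, pos_b, shutter_interval_s):
    """Estimate the 2-D velocity (pixels per second) of a moving object
    detected at pos_a during a first shutter period and at pos_b during
    a second shutter period, separated by shutter_interval_s seconds."""
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    return (dx / shutter_interval_s, dy / shutter_interval_s)
```

For example, an object detected at (100, 40) and then at (110, 40), with shutter periods 5 ms apart, would yield an estimated velocity of 2000 pixels per second along the x-axis.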
There is also provided, in accordance with an embodiment of the invention, a method for imaging, which includes overlaying a common electrode, which is at least partially transparent, on a photosensitive medium configured to convert incident photons into charge carriers. An array of pixel circuits, each defining a respective pixel, is coupled to collect the charge carriers from the photosensitive medium while the common electrode applies a bias potential to the photosensitive medium and to output a signal responsively to the collected charge carriers. The signal is read out from the pixel circuits in each of a periodic sequence of readout frames. The common electrode is driven to apply the bias potential to the photosensitive medium during each of a plurality of distinct shutter periods within at least one of the readout frames.
There is additionally provided, in accordance with an embodiment of the invention, imaging apparatus, including a photosensitive medium configured to convert incident photons into charge carriers. Pixel circuitry is coupled to the photosensitive medium and configured to create one or more imprints of an object in an image that is formed on the photosensitive medium, wherein each of the imprints persists over one or more image frames.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
The image sensors described herein may be used within any suitable imaging device, such as a camera, spectrometer, light sensor, or the like.
The camera module may further comprise one or more optional filters, such as a filter 106, which may be placed along the optical path. Filter 106 may reflect or otherwise block certain wavelengths of light, and may substantially prevent, based on the effectiveness of the filter, these wavelengths of light from reaching image sensor 102. As an example, when an image sensor is configured to measure visible light, filter 106 may comprise an infrared cutoff filter. While shown in
Image sensor 200 may further comprise row circuitry 204 and column circuitry 206, which collectively may be used to convey various signals (e.g., bias voltages, reset signals) to individual pixels as well as to read out signals from individual pixels. For example, row circuitry 204 may be configured to simultaneously control multiple pixels in a given row, while column circuitry 206 may convey pixel electrical signals to other circuitry for processing. Accordingly, image sensor 200 may comprise control circuitry 208, which may control the row circuitry 204 and column circuitry 206, as well as perform input/output operations (e.g., parallel or serial IO operations) for image sensor 200.
In particular, in the embodiments that are described hereinbelow, control circuitry 208 reads out the signals from the pixel circuits in pixels 212 in each of a periodic sequence of readout frames, while driving array 202 to apply a global shutter to the pixels during each of a plurality of distinct shutter periods within one or more of the readout frames. The control circuitry may include a combination of analog circuits (e.g., circuits to provide bias and reference levels) and digital circuits (e.g., image enhancement circuitry, line buffers to temporarily store lines of pixel values, register banks that control global device operation and/or frame format).
Additionally or alternatively, control circuitry 208 may be configured to perform higher-level image processing functions on the image data output by pixel array 202. For this purpose, in some embodiments, control circuitry 208 comprises a programmable processor, such as a microprocessor or digital signal processor, which can be programmed in software to perform image processing functions. For example, such a processor can be programmed to detect motion in image frames, as described hereinbelow. Alternatively, such processing functions can be performed by a separate computer or other image processor (not shown in the figures), which receives image data from image sensor 200.
Photosensitive material layer 304 may be configured to absorb photons and generate one or more electron-hole pairs in response to photon absorption. In some instances, photosensitive material layer 304 may include one or more films formed from quantum dots, such as those described in the above-mentioned U.S. Pat. No. 7,923,801. The materials of photosensitive material layer 304 may be tuned to change the absorption profile of photosensitive material layer 304, whereby the image sensor may be configured to absorb light of certain wavelengths (or range of wavelengths) as desired. It should be appreciated that while discussed and typically shown as a single layer, photosensitive material layer 304 may be made from a plurality of sub-layers. For example, the photosensitive material layer may comprise a plurality of distinct sub-layers of different photosensitive material layers.
Additionally or alternatively, photosensitive material layer 304 may include one or more sub-layers that perform additional functions, such as providing chemical stability, adhesion or other interface properties between photosensitive material layer 304 and pixel circuitry layer 302, or facilitating charge transfer across photosensitive material layer 304. It should be appreciated that sub-layers of photosensitive material layer 304 may optionally be patterned such that different portions of the pixel circuitry may interface with different materials of the photosensitive material layer 304. For the purposes of discussion in this application, photosensitive material layer 304 will be discussed as a single layer, although it should be appreciated that a single layer or a plurality of different sub-layers may be selected based on the desired makeup and performance of the image sensor.
To the extent that the image sensors described here comprise a plurality of pixels, in some instances a portion of photosensitive material layer 304 may laterally span multiple pixels of the image sensor. Additionally or alternatively, photosensitive material layer 304 may be patterned such that different segments of photosensitive material layer 304 may overlie different pixels (such as an embodiment in which each pixel has its own individual segment of photosensitive material layer 304). As mentioned above, photosensitive material layer 304 may be in a different plane from pixel circuitry layer 302, such as above or below the readout circuitry relative to light incident thereon. That is, the light may contact photosensitive material layer 304 without passing through a plane (generally parallel to a surface of the photosensitive material layer) in which the readout circuitry resides.
In some instances, it may be desirable for photosensitive material layer 304 to comprise one or more direct bandgap semiconductor materials while pixel circuitry layer 302 comprises an indirect bandgap semiconductor. Examples of direct bandgap materials include indium arsenide and gallium arsenide, among others. The bandgap of a material is direct if the minimum of the conduction band and the maximum of the valence band occur at the same crystal momentum. Otherwise, the bandgap is an indirect bandgap. In embodiments in which pixel circuitry layer 302 includes an indirect bandgap semiconductor and photosensitive material layer 304 includes a direct bandgap semiconductor, photosensitive material layer 304 may promote light absorption and/or reduce pixel-to-pixel cross-talk, while pixel circuitry layer 302 may facilitate storage of charge while reducing residual charge trapping.
Pixel 300 typically comprises at least two electrodes for applying a bias to at least a portion of photosensitive material layer 304. In some instances, these electrodes may comprise laterally-spaced electrodes on a common side of the photosensitive material layer 304. In other variations, two electrodes are on opposite sides of the photosensitive material layer 304. In these variations, a top electrode 306 is overlaid on photosensitive material layer 304. The pixel circuits in pixel circuitry layer 302 collect the charge carriers from photosensitive material layer 304 while top electrode 306 applies an appropriate bias potential across layer 304. The pixel circuits output a signal corresponding to the charge carriers collected in each image readout frame.
In embodiments that include top electrode 306, the image sensor is positioned within an imaging device such that oncoming light passes through top electrode 306 before reaching photosensitive material layer 304. Accordingly, it may be desirable for top electrode 306 to be formed from a conductive material that is at least partially transparent to the wavelengths of light that the image sensor is configured to detect. For example, top electrode 306 may comprise a transparent conductive oxide. In some instances, electrode 306 is configured as a common electrode, which spans multiple pixels of an image sensor. Additionally or alternatively, electrode 306 optionally may be patterned into individual electrodes such that different pixels have different top electrodes. For example, there may be a single top electrode that addresses every pixel of the image sensor, one top electrode per pixel, or a plurality of top electrodes wherein at least one top electrode addresses multiple pixels.
The bias potential applied to top electrode 306 may be switched on and off at specified times during each readout frame to define a shutter period, during which the pixels integrate photocharge. In some embodiments, control circuitry (such as control circuitry 208) drives top electrode 306 to apply the bias potential to photosensitive material 304 during multiple distinct shutter periods within one or more readout frames. These embodiments enable the control circuitry to acquire multiple time slices within each such frame, as described further hereinbelow.
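The bias switching described above can be sketched, purely for illustration, as a per-millisecond schedule of the common-electrode potential over one readout frame. The voltage levels and window placements below are assumptions made for the sketch, not values specified in the embodiments.

```python
def shutter_bias_schedule(frame_ms, shutter_windows, v_on=-3.0, v_off=0.0):
    """Sample the common-electrode bias once per millisecond over one
    readout frame.  The bias is switched to v_on during each
    (start_ms, end_ms) shutter window, and held at v_off otherwise,
    so the pixels integrate photocharge only during the windows."""
    samples = []
    for t in range(int(frame_ms)):
        on = any(start <= t < end for start, end in shutter_windows)
        samples.append(v_on if on else v_off)
    return samples
```

For instance, two 5 ms shutter windows within a 33 ms frame produce a schedule in which exactly ten of the thirty-three samples are at the ON voltage.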
In some instances, pixel 300 may further comprise one or more filters 308 overlying photosensitive material layer 304. In some instances, one or more filters may be common to the pixel array, which may be equivalent to moving filter 106 of
Additionally, in some variations, pixel 300 may comprise a microlens overlying at least a portion of the pixel. The microlens may aid in focusing light onto photosensitive material layer 304.
As shown in
To facilitate the collection and transfer of charge within the pixel, one or more transistors, diodes, and photodiodes may be formed in or on a semiconductor substrate layer 312, for example, and are suitably connected with portions of metal stack 314 to create a light-sensitive pixel and a circuit for collecting and reading out charge from the pixel. Pixel circuitry layer 302 may facilitate maintaining stored charges, such as those collected from the photosensitive layer. For example, semiconductor substrate layer 312 may comprise a sense node 318, which may be used to temporarily store charges collected from the photosensitive layer. Metal stack 314 may comprise first interconnect circuitry that provides a path from pixel electrode 316 to sense node 318. While metal stack 314 is shown in
To reach second photosensitive layer 304b, at least a portion of second electrode 322 may pass through a portion of first photosensitive layer 304a and insulating layer 324. This portion of second electrode 322 may be insulated from first photosensitive layer 304a. A first bias may be applied to first photosensitive layer 304a via first electrode 316 and the common electrodes, and a second bias may be applied to second photosensitive layer 304b via second electrode 322 and the common electrodes. While shown in
Each photosensitive layer may be connected to the pixel circuitry in such a way that the photosensitive layers may be independently biased, read out, and/or reset. Having different photosensitive layers may allow the pixel to independently read out different wavelengths (or wavelength bands) and/or read out information with different levels of sensitivity. For example, first photosensitive layer 304a may be connected to a first sense node 318 while second photosensitive layer 304b may be connected to a second sense node 320, which in some instances may be separately read out to provide separate electrical signals representative of the light collected by the first and second photosensitive layers respectively.
Turning to
Sense node 402 may further be connected to an input of a source follower switch 406, which may be used to measure changes in sense node 402. Source follower switch 406 may have its drain connected to a voltage source VSUPPLY and its source connected to a common node with the drain of a select switch 408 (controlled by a select signal SELECT). The source of select switch 408 is in turn connected to an output bus COLUMN. When select switch 408 is turned on, changes in sense node 402 detected by follower switch 406 will be passed via select switch 408 to the bus for further processing.
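The readout path just described can be modeled numerically as a sketch. The gain and threshold values below are illustrative assumptions, not parameters of the embodiments: the source follower buffers the sense-node voltage with sub-unity gain and a threshold drop, and the select switch gates the result onto the column bus.

```python
def pixel_output(sense_node_v, select_on, v_threshold=0.7, gain=0.85):
    """Toy model of the source-follower/select readout path: returns
    the voltage passed to the COLUMN bus, or None when the row's
    select switch is off and the pixel is disconnected from the bus."""
    if not select_on:
        return None  # select switch open: no contribution to the bus
    return gain * (sense_node_v - v_threshold)
```

With these assumed values, a sense node at 2.7 V and the select switch on would place 1.7 V on the column bus; with the select switch off, the pixel contributes nothing.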
The image sensors described here may be configured to read out images using rolling shutter or global shutter techniques. For example, to perform a rolling shutter readout using the pixel circuitry of
Similarly, the pixel circuitry of
The tracking of objects in space and time is of interest in a number of applications. For example, user interfaces benefit from the capability to recognize certain gestures. Examples include a left-right swipe, which could signal turning forward to the next page of a book; a right-left swipe, which could signify turning back; and up-to-down and down-to-up swipes, signaling scrolling directions.
In applications such as these, it is important to ascertain that such gestures are being implemented and to distinguish their directions. Such gesture recognition is of interest on multiple timescales. One unit of time common to most image sensors and cameras is the frame time, i.e., the time it takes to read out one image or frame.
In some cases, a gesture may be substantially completed within a given frame time (such as within a 1/15, 1/30, or 1/60 second frame duration). In other cases, the gesture may be completed over longer time periods, in which case acquisition and recognition can occur on a multi-frame timescale. In some applications, information on both of these timescales may be of interest: For example, fine-grained information may be obtained on the within-frame timescale, while coarser information may be obtained on the multi-frame timescale.
Implementations of gesture recognition can take advantage of the acquisition of multiple independent frames, each of which is acquired, saved in a frame memory (on or off a given integrated circuit), and processed. In the present embodiments, it is of interest to capture the information related to a gesture or other motion within a single image or frame. In this case, the image data acquired within this single frame is processed and stored for the purpose of identifying moving objects and analyzing the distances and velocity by which they have moved.
The embodiments that are described hereinbelow enable capture of this sort of information using the global shutter (GS) functionality of film-based image sensors. In particular, image sensors based on quantum films are capable of global shutter operation without additional transistors or storage nodes. Photon collection of the pixels can be turned on and off by changing the bias across the film, as explained above, and in particular can be turned on during multiple distinct shutter periods within each of the readout frames.
Furthermore, although two shutter periods 506 of 5 ms duration are shown in each blanking period 504 in
The film bias, marked at the right side of the figure, corresponds to the potential applied by the common electrode across the photosensitive medium, such as a quantum film, in the pixels of an image sensing array. Referring to
Readout period 502 represents the time required for all the rows of the image sensor to be read out, typically in rolling order, as depicted by the slanted line in
Blanking period 504 is the time in each frame 500 after all the rows have been read out and before the next frame readout begins. As explained above, the film bias is switched to the ON voltage during one or more variable shutter periods 506 during blanking period 504 so that the pixels in the array collect the photocharge generated by the photosensitive medium. Shutter periods 506 are also referred to as the integration times. Following the shutter periods, the film bias is set back to the OFF voltage, so that photocharge generation is stopped, and the pixels in all rows can then be read out as the next frame or image.
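As a simple numerical sketch of this timing budget (the function name and parameterization are assumptions made for illustration), the blanking interval available for shutter periods is the frame duration minus the readout period:

```python
def blanking_budget_ms(frame_rate_hz, readout_ms, shutter_durations_ms):
    """Return the idle time (ms) remaining in the blanking period of
    each frame after the requested shutter periods, raising an error
    if the shutter periods do not fit within the blanking interval."""
    frame_ms = 1000.0 / frame_rate_hz
    blanking_ms = frame_ms - readout_ms
    used_ms = sum(shutter_durations_ms)
    if used_ms > blanking_ms:
        raise ValueError("shutter periods exceed the blanking interval")
    return blanking_ms - used_ms
```

At 30 frames per second with a 20 ms readout period, for example, two 5 ms shutter periods would leave roughly 3.3 ms of idle time in each blanking period.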
As illustrated by
A similar approach can be applied in other motion-sensing applications, such as detecting and analyzing rapid hand gestures. As another example of the possible use of this sort of multi-exposure scheme, in a structured light application, the illuminator can be moved during each frame, and the spots captured in multiple shutter periods can then be combined to create a more accurate depth map in cases in which the number of spots created by the illuminator is limited.
Varied shutter periods of this sort can be used for slower-moving objects, for which the difference in exposures does not dramatically change the perceived shape of the object, and the illumination does not change dramatically in the timescales of the multiple exposures. Additionally or alternatively, the control circuitry can use the varying shutter periods in extracting directional information with respect to moving objects. When the signal timing shown in
In some embodiments, the techniques described above for creating multiple shutter periods during a given frame can be used in synchronization with a pulsed illumination source, which illuminates a scene while an image of the scene is captured by the image sensor. The illumination source, such as an LED or laser, is typically pulsed during the shutter periods of the image sensor. The illuminator power can be varied, so that the image sensor is exposed to a different intensity level in each shutter period, and the difference in image brightness can then be used in determining both the magnitude and the direction of the velocity of motion. In this case, the shutter periods can be identical and short in order to prevent motion smear and object distortion.
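One way the varied illuminator power described above can be exploited is to recover the time order of the exposures from their relative brightness. The sketch below assumes, for illustration only, that illuminator power increases monotonically from one shutter period to the next and that the object has already been detected in each exposure; the names and detection format are assumptions.

```python
def motion_from_coded_exposures(detections, shutter_interval_s):
    """Given detections [(x, y, brightness), ...] of the same object
    captured during shutter periods of monotonically increasing
    illuminator power, sort by brightness to recover the time order,
    then return the velocity vector (pixels per second) from the
    earliest to the latest detection."""
    ordered = sorted(detections, key=lambda d: d[2])
    (x0, y0, _), (x1, y1, _) = ordered[0], ordered[-1]
    elapsed = (len(ordered) - 1) * shutter_interval_s
    return ((x1 - x0) / elapsed, (y1 - y0) / elapsed)
```

Note that brightness encodes direction here: the brighter detection is known to be the later one, so both the magnitude and the sign of the velocity follow from a single frame.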
Alternatively, other sorts of synchronized pulse patterns, with various durations and power levels, may be used. For example, it is possible to simultaneously modulate the power and pulse durations in order to reduce illuminator power. An optimal tradeoff between the pulse duration and power modulation can be determined based on the object distance and acceptable distortion.
In some embodiments, as illustrated in
With this approach, motion and velocity vectors can be estimated by staggering the exposures of the photosensitive layers in the different wavelength bands. Specifically, although the bias controls for the different layers are separated, the same pixel readout timing is maintained for both layers. Thus, the shutter periods can be staggered between the layers, making it possible to determine the motion and velocity vectors in a single frame readout of the multi-wavelength-band information.
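A single-frame estimate from the staggered dual-band exposures can be sketched as follows. The two masks stand in for the segmented object in the visible-band and infrared-band images read out in the same frame; centroid-based tracking and the function names are assumptions made for illustration.

```python
def band_centroid(mask):
    """Centroid (x, y) of nonzero pixels in a 2-D list-of-lists mask."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n)

def staggered_band_velocity(vis_mask, ir_mask, stagger_s):
    """Velocity (pixels per second) from the centroid shift between the
    visible-band and infrared-band captures, whose shutter periods are
    staggered by stagger_s seconds within the same readout frame."""
    (x0, y0) = band_centroid(vis_mask)
    (x1, y1) = band_centroid(ir_mask)
    return ((x1 - x0) / stagger_s, (y1 - y0) / stagger_s)
```

Because both layers share the same pixel readout timing, the two centroids come from a single frame readout, as described above.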
Traces 1402 and 1404 in
In other embodiments, an image sensor can be designed such that application of a bright light temporarily imprints the sensor with an image. The imprint can persist for a controllable amount of time, ranging from zero to ten frame lengths. The image sensor can be designed so that the imprint occurs only when the photosensitive material, such as a quantum film, in the pixels is biased in a particular direction and magnitude, so as to drive charge into a region of the material where it becomes trapped. Because the charge only becomes trapped when the photosensitive material is biased in a certain way, by carefully controlling the bias timing of the device, an image can be temporarily imprinted in the sensor at a particular location, corresponding to the coincidence of a bright light illuminating pixels that are biased to drive charge toward the direction where they will become trapped.
In
In these embodiments, once charge is trapped, it can stay trapped for a period of time ranging from milliseconds to several seconds. The trapped charge can alter the field applied across the photosensitive medium, such that where charge is trapped, the global shutter capability of the pixel in question is temporarily disabled. When the pixels are subsequently illuminated with ambient light, they will produce a signal only where charge is trapped. In the other parts of the image sensor where no charge (or less charge) was trapped, the global shutter is still enabled and the device does not record any signal, even though there is light incident upon it. The result of this behavior is to create one or more “imprints” of a bright object on the image sensor. The imprints are spaced apart by a number of pixels equal to the product of the frame duration and the object velocity (in pixels per unit time), and grow dimmer as they get farther from the original object. By using single-frame image processing to locate all the imprints in a single frame, it is possible to determine the velocity of motion from the spacing of the imprints, as well as the direction of motion, from the direction of increasing imprint intensity.
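The single-frame imprint analysis described above can be sketched along one axis. The one-dimensional treatment and the function name are simplifying assumptions: the faintest imprint is the oldest, so sorting by intensity recovers the time order, and the mean spacing divided by the frame duration gives a signed velocity whose sign indicates the direction of motion.

```python
def velocity_from_imprints(imprints, frame_duration_s):
    """Estimate signed 1-D velocity (pixels per second) from imprints
    [(x, intensity), ...] of a bright moving object located within a
    single frame.  Imprints grow dimmer with age, so intensity order
    gives time order; consecutive imprints are spaced one frame
    duration apart in time."""
    ordered = sorted(imprints, key=lambda p: p[1])  # faintest (oldest) first
    spacings = [b[0] - a[0] for a, b in zip(ordered, ordered[1:])]
    return sum(spacings) / len(spacings) / frame_duration_s
```

For example, three imprints at x = 10, 20, 30 pixels with increasing intensity, captured at a 1/30 s frame duration, imply motion at 300 pixels per second in the +x direction.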
In some embodiments, the image sensor can be designed so that the imprint is created only by parts of the scene that are much brighter than the background. This feature can be implemented because the creation of the imprint by trapping charge against the hole-blocking layer of the pixels can occur only when the photosensitive medium is biased to drive holes in that direction. The bias can be chosen so that holes are driven toward the hole-blocking layer only when a light is sufficiently bright to drive the sense node voltage very low during the main integration period. Increasing the duration of the main integration period causes the sense node voltage to go lower for a given light intensity, thus making it easier for light of a given intensity to cause holes to be driven toward the hole-blocking layer and create an imprint. Alternatively, the main integration time can be decreased so that only very bright lights drive the sense node voltage low enough for holes to be driven toward the hole-blocking layer and create an imprint. The main integration time can thus be used to adjust how bright a light must be before it creates an imprint. In embodiments in which there are many bright objects in a scene, this tuning can be used so that only the brightest moving object creates an imprint, while the static background, which is slightly less bright, does not create an imprint.
These techniques can be used to acquire gesture information that spans multiple frames. For example, if the object is more slowly-moving, its image may traverse the imaged scene over multiple tenths of a second.
In some cases, it can be desirable for objects, especially those lying in a high range of intensities (e.g., very bright objects, such as those actively illuminated to the point of saturation), to provide signals within the acquired image frame that indicate their locations over multiple frame intervals. In some embodiments of this sort, an optically-sensitive layer acquires an image of a bright object during a first period, and as a result rapidly integrates down the voltage on a pixel electrode. During a second period, the optically-sensitive layer acquires a large signal selectively only in cases in which the pixel electrode is now lower than a certain bias level. As a result, the region that was illuminated brightly in the first period provides an imprint, during an ensuing frame period, of the illumination position during the first period. The amplitude of the imprint may be controlled, for example, by providing a reset having a specific timing relative to the shutter transition.
In this embodiment, the magnitude of the imprint signal can also be tuned by adjusting imprint integration time 1606 (as opposed to the main integration time, as described above). The addition of second reset 1604 after the rolling readout and the main integration time controls how much signal is collected in the pixels in which charge is trapped. The magnitude of the imprint is determined by the amount of charge trapped in the imprinted pixels, the intensity of ambient light incident on the imprint-affected pixels after the imprint is created, and the imprint integration time.
The ability to detect the imprint can be increased by moving second reset 1604 closer in time to rolling readout period 1602, such that imprint integration time 1606 increases. This approach can be advantageous in scenes in which the moving object to be detected is of similar brightness to a static background, since it enhances detection of the imprint against the background.
Alternatively, the imprint can be decreased in magnitude by moving second reset 1604 closer in time to the main integration period. This approach can be advantageous in scenes in which the moving object is much brighter than the static background so that it is easy to pick the imprint out, and there is a desire to limit the amount of time the imprint endures. By reducing the imprint integration time, the oldest imprints, which are also the faintest, can be reduced in intensity such that they are not detectable. In such embodiments, imprint integration time 1606 can be used to effectively control the number of imprints that appear, for example in a range between zero and ten imprints.
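The effect of the imprint integration time on the number of visible imprints can be sketched with a simple decay model. The geometric per-frame decay and the particular values are assumptions made for illustration: shortening the imprint integration time lowers the initial imprint signal, so fewer past-frame imprints remain above the detection threshold.

```python
def detectable_imprints(initial_signal, decay_per_frame, threshold,
                        max_frames=10):
    """Number of past-frame imprints that remain above the detection
    threshold, assuming the imprint signal decays by a fixed factor
    each frame.  A lower initial_signal (shorter imprint integration
    time) yields fewer detectable imprints, down to zero."""
    count, s = 0, initial_signal
    while count < max_frames and s >= threshold:
        count += 1
        s *= decay_per_frame
    return count
```

Under this model, halving the signal each frame with a detection threshold of 0.2 yields three detectable imprints from an initial signal of 1.0, one from 0.3, and none from 0.1, illustrating how the imprint integration time can sweep the imprint count through the zero-to-ten range described above.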
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 62/411,515, filed Oct. 21, 2016, which is incorporated herein by reference.
Filing Document: PCT/US2017/057766; Filing Date: 10/22/2017; Country: WO; Kind: 00