This disclosure relates generally to imaging systems. For example, several embodiments of the present technology relate to imaging systems that include hybrid image sensors and that can (a) dynamically or periodically adjust contrast thresholds used by event vision sensors for detecting event data, (b) use the event data to perform event-guided deblur and/or rolling-shutter-distortion correction on CMOS image sensor (CIS) data captured by corresponding CIS pixels, and/or (c) use the event data and/or the CIS data to perform video frame interpolation.
Image sensors have become ubiquitous and are now widely used in digital cameras, cellular phones, security cameras, as well as medical, automobile, and other applications. As image sensors are integrated into a broader range of electronic devices, it is desirable to enhance their functionality and performance metrics (e.g., resolution, power consumption, dynamic range) in as many ways as possible through both device architecture design and image acquisition processing.
A typical image sensor operates in response to image light from an external scene being incident upon the image sensor. The image sensor includes an array of pixels having photosensitive elements (e.g., photodiodes) that absorb a portion of the incident image light and generate image charge upon absorption of the image light. The image charge photogenerated by the pixels may be measured as analog output image signals on column bitlines that vary as a function of the incident image light. In other words, the amount of image charge generated is proportional to the intensity of the image light, which is read out as analog image signals from the column bitlines and converted to digital values to provide information that is representative of the external scene.
Non-limiting and non-exhaustive embodiments of the present technology are described below with reference to the following figures, in which like or similar reference numbers are used to refer to like or similar components throughout unless otherwise specified.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to aid in understanding of various aspects of the present technology. In addition, common but well-understood elements or methods that are useful or necessary in a commercially feasible embodiment are often not depicted in the figures, or described in detail below, to avoid unnecessarily obscuring the description of various aspects of the present technology.
The present disclosure relates generally to imaging systems with adjustable contrast thresholds. For example, several embodiments disclosed herein relate to imaging systems that generate event-based vision sensor (EVS) data based at least in part on an adjustable contrast threshold. In turn, the imaging systems can utilize the EVS data to deblur CMOS image sensor (CIS) data, correct the CIS data for rolling shutter distortion, and/or generate additional video/image frames via video frame interpolation. The imaging systems can include EVS sensors and separate active (CIS) sensors. In other embodiments, the imaging systems include hybrid image sensors that include both EVS sensing and active sensing components. The EVS sensors/EVS sensing components can include EVS pixels configured to generate EVS data, and the CIS sensors/CIS sensor components can include CIS pixels configured to generate CIS data.
In the following description, specific details are set forth to provide a thorough understanding of aspects of the present technology. One skilled in the relevant art will recognize, however, that the systems, devices, and techniques described herein can be practiced without one or more of the specific details set forth herein, or with other methods, components, materials, etc.
Reference throughout this specification to an “example” or an “embodiment” means that a particular feature, structure, or characteristic described in connection with the example or embodiment is included in at least one example or embodiment of the present technology. Thus, appearances of the phrases “for example,” “as an example,” or “an embodiment” herein do not necessarily all refer to the same example or embodiment and are not necessarily limited to the specific example or embodiment discussed. Furthermore, features, structures, or characteristics of the present technology described herein may be combined in any suitable manner to provide further examples or embodiments of the present technology.
Spatially relative terms (e.g., “beneath,” “below,” “over,” “under,” “above,” “upper,” “top,” “bottom,” “left,” “right,” “center,” “middle,” and the like) may be used herein for ease of description to describe one element's or feature's relationship relative to one or more other elements or features as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of a device or system in use or operation, in addition to the orientation depicted in the figures. For example, if a device or system illustrated in the figures is rotated, turned, or flipped about a horizontal axis, elements or features described as “below” or “beneath” or “under” one or more other elements or features may then be oriented “above” the one or more other elements or features. Thus, the exemplary terms “below” and “under” are non-limiting and can encompass both an orientation of above and below. The device or system may additionally, or alternatively, be otherwise oriented (e.g., rotated ninety degrees about a vertical axis, or at other orientations) than illustrated in the figures, and the spatially relative descriptors used herein are interpreted accordingly. In addition, it will also be understood that when an element is referred to as being “between” two other elements, it can be the only element between the two other elements, or one or more intervening elements may also be present.
Throughout this specification, several terms of art are used. These terms are to take on their ordinary meaning in the art from which they come, unless specifically defined herein or the context of their use would clearly suggest otherwise. It should be noted that element names and symbols may be used interchangeably throughout this document (e.g., Si vs. silicon); however, both have identical meaning.
An active pixel sensor employs an array of pixels that are used to capture intensity images/video of an external scene. More specifically, the pixels are used to obtain CIS information (e.g., intensity information) corresponding to light from the external scene that is incident on the pixels. CIS information obtained during an integration period is read out at the end of the integration period and used to generate a corresponding intensity image of the external scene.
The pixels of an active pixel sensor typically have an integration time that is globally defined. Thus, pixels in an array of an active pixel sensor typically have an identical integration time, and each pixel in the array is typically converted into a digital signal regardless of its content (e.g., regardless of whether there has been a change in an external scene that was captured by a pixel since the last time the pixel was read out). As such, a relatively large amount of memory and power can be required to operate an active pixel sensor at high frame rates. Thus, due in part to memory and power constraints, it is difficult to use an active pixel sensor on its own to obtain intensity images/video of an external scene at ultra-high frame rates.
Moreover, when motion or other changes occur in an external scene during an integration period, motion artifacts can be observed as blurring in the resulting intensity image of the external scene. Blurring can be especially prominent in low light conditions in which longer exposure times are used. As such, active pixel image sensors on their own are not well suited to obtaining sharp intensity images/video of highly dynamic scenes.
In comparison, event vision sensors (e.g., event driven sensors or dynamic vision sensors) employ EVS pixels that are usable to obtain non-CIS information (e.g., contrast information, intensity changes, event data) corresponding to light from an external scene that is incident on those EVS pixels. Event vision sensors read out an EVS pixel and/or convert a corresponding pixel signal into a digital signal only when the EVS pixel detects a change (e.g., an event) in the external scene. In other words, EVS pixels of an event vision sensor that do not detect a change in the external scene are not read out and/or pixel signals corresponding to such EVS pixels are not converted into digital signals (thereby saving power). Thus, each EVS pixel of an event vision sensor can be independent from other EVS pixels of the event vision sensor, and only EVS pixels that detect a change in the external scene need be read out and/or have their corresponding pixel signals converted into digital signals. As a result, unlike active pixel sensors with synchronous integration times, event vision sensors do not suffer from limited dynamic ranges and are able to accurately capture high-speed motion. Thus, event vision sensors are often more robust than active pixel sensors in low lighting conditions and/or in highly dynamic scenes because they are not affected by under/over exposure or motion blur associated with a synchronous shutter. Stated another way, event vision sensors can be used to provide ultra-high frame rates and to accurately capture high-speed motions.
Hybrid image sensors employ an array of pixels that includes a combination of (i) active (CIS) pixels usable to obtain CIS information corresponding to light from an external scene and (ii) EVS pixels usable to obtain non-CIS information corresponding to light from the external scene. Such hybrid image sensors are therefore able to simultaneously capture (a) intensity images/video of an external scene and (b) events occurring within the external scene. Event data captured by the EVS pixels can be used to resolve/mitigate (i) the low frame-rate intensity image problem discussed above and (ii) the blurry effect inherent in intensity images captured using CIS pixels in the presence of motion. For example, using an event-based double integral (EDI) model, high frame-rate intensity images/video of an external scene can be reconstructed from a single (e.g., blurry) intensity image and its event sequence.
A description of the EDI model is provided here for the sake of clarity and understanding. Instantaneous intensity (or irradiance/flux) at a pixel (x, y) at any time t, related to the rate of photon arrival at that pixel (x, y), is known as a latent image Lxy(t). The latent image Lxy(t) is not directly output from a hybrid image sensor corresponding to the pixel (x, y). Instead, the hybrid image sensor outputs (i) an intensity image (e.g., a blurry image) that represents a combination of multiple latent images (including the latent image Lxy(t)) captured by one or more CIS pixels over an exposure period, and (ii) a sequence of events captured during the exposure period that record changes in intensity between the latent images. A description of how events can be detected is provided below, followed by a description of how detected events may be used to obtain the latent image Lxy(t) for a pixel. Because each pixel of an image sensor can be treated separately, the subscripts x, y are omitted from the equations and variables that follow for readability. One should appreciate, however, that the latent image Lxy(t) for a pixel of a pixel array at a given time represents a portion of a latent image LF(t) for the full pixel array at that given time. Thus, the latent image LF(t) for the full pixel array at the given time can be obtained by determining the latent image Lxy(t) of every pixel in the array at that given time.
The photosensor 101 is configured to photogenerate image charge (photocurrent) in response to incident light. Photocurrent photogenerated by the photosensor 101 at time t is directly proportional to the latent image L(t) (e.g., irradiance of the incident light) at time t, as indicated by Equation 1 below:
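A representative form, with I_{ph}(t) denoting the photocurrent (a symbol introduced here for illustration), is:

    I_{ph}(t) \propto L(t)    (Equation 1, representative form)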
As discussed above, the latent image L(t) denotes the instantaneous intensity at the EVS pixel 100 at time t, related to the rate of photon arrival at the EVS pixel 100.
Photocurrent photogenerated by the photosensor 101 is fed into the logarithmic amplifier 102. In turn, the logarithmic amplifier 102 transduces (a) the photocurrent that is linearly proportional to the latent image L(t) into (b) a voltage VFE that is logarithmically dependent on the latent image L(t), as indicated by Equation 2 below:
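A representative form, with a an illustrative gain constant of the logarithmic amplifier 102, is:

    V_{FE}(t) = a \cdot \ln\big( L(t) \big)    (Equation 2, representative form)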
Temporal contrast (also referred to herein as “linear contrast”) is defined as a change in light contrast on the EVS pixel 100 (e.g., on the photosensor 101) relative to a reference time t0, and is provided by Equation 3 below:
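A representative form, with C_{TC}(t) denoting the temporal contrast (a symbol introduced here for illustration), is:

    C_{TC}(t) = \frac{L(t)}{L(t_0)}    (Equation 3, representative form)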
The difference detector 104 of the EVS pixel 100 is used to monitor temporal contrast of light incident on the photosensor 101. More specifically, when reset, the difference detector 104 samples the voltage VFE at a reference time t0 and thereafter generates an output VO shown by Equation 4 below:
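A representative form, with A an illustrative gain of the difference detector 104, is:

    V_O(t) = A \cdot \big( V_{FE}(t) - V_{FE}(t_0) \big)    (Equation 4, representative form)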
The output VO of the difference detector 104 tracks a change of the voltage VFE over time relative to the voltage VFE at the reference time t0. As shown by Equation 5 below, as the voltage VFE changes over time, the corresponding change in the output VO of the difference detector 104 is proportional to the log of the temporal contrast:
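Combining the representative forms of Equations 2 and 4:

    V_O(t) = A \cdot a \cdot \big( \ln L(t) - \ln L(t_0) \big) = A \cdot a \cdot \ln\frac{L(t)}{L(t_0)}    (Equation 5, representative form)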
The output VO of the difference detector 104 is fed into the up/down comparators 105. In turn, the up/down comparators 105 compare the output VO to corresponding threshold voltages V+TH and V−TH that are given by Equation 6 below, in which Clog−TH is a contrast threshold parameter determining whether an event should be recorded. As shown by Equation 7 below, the up comparator detects an event when the output VO of the difference detector 104 exceeds the threshold voltage V+TH. As shown by Equation 8 below, the down comparator detects an event when the output VO of the difference detector 104 is less than the threshold voltage V−TH.
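Representative forms, consistent with the foregoing (the gain factors A and a are carried over from the representative forms of Equations 2 and 4), are:

    V^{+}_{TH} = +A \cdot a \cdot C_{log-TH}, \qquad V^{-}_{TH} = -A \cdot a \cdot C_{log-TH}    (Equation 6, representative form)

    V_O(t_i) > V^{+}_{TH}    (Equation 7, representative form; "up"/positive event)

    V_O(t_i) < V^{-}_{TH}    (Equation 8, representative form; "down"/negative event)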
Using the log ratio rule, Equations 7 and 8 provide Equations 9 and 10, respectively, that specify when an event is detected by the EVS pixel 100:
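In representative form:

    \ln L(t_i) - \ln L(t_{i-1}) \ge C_{log-TH}    (Equation 9, representative form; "up"/positive event)

    \ln L(t_i) - \ln L(t_{i-1}) \le -C_{log-TH}    (Equation 10, representative form; "down"/negative event)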
Time ti in the above equations corresponds to each time an event is detected, and time ti−1 denotes the timestamp of the previous event. When an event is triggered in an EVS pixel, L(ti−1) is updated to a new intensity level (e.g., by resetting the difference detector 104 such that the difference detector 104 newly samples the voltage VFE). Detection of an event at time ti therefore indicates that a change in log intensity exceeding the contrast threshold parameter Clog−TH has occurred relative to the previous event detected at time ti−1. In other words, each detected event indicates intensity changes between latent images (a current latent image L(ti) and a previous latent image L(ti−1)). Therefore, Equations 9 and 10 above provide Equation 11 below in which c is equivalent to Clog−TH and pi is event polarity:
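In representative form:

    \ln L(t_i) - \ln L(t_{i-1}) = p_i \cdot c    (Equation 11, representative form)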
Event polarity pi at each time ti is given by Equation 12 below. A polarity pi of +1 denotes an increase in irradiance of light incident on the photosensor 101, and a polarity pi of −1 denotes a decrease in irradiance of light incident on the photosensor 101.
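A representative form is:

    p_i = +1 \text{ if } \ln L(t_i) - \ln L(t_{i-1}) \ge c, \qquad p_i = -1 \text{ if } \ln L(t_i) - \ln L(t_{i-1}) \le -c    (Equation 12, representative form)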
Events detected at each time ti can be modeled using a unit impulse (Dirac delta function δ) multiplied by a corresponding polarity pi. Detected events can be defined as a function of continuous time. For example,
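in representative form, the sequence of events detected by the EVS pixel 100 can be written as the time continuous signal

    e(t) = \sum_{i} p_i \cdot \delta(t - t_i)    (Equation 13, representative form)

where \delta is the Dirac delta function and t_i are the event timestamps.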
Because each event detected by the EVS pixel indicates a change between latent images captured at different times, a proportional change in intensity on pixels of the bottom two rows of the CIS pixel array over the exposure period for these pixel rows can be provided by the sum (or combination) of events detected by the corresponding EVS pixel between time ts and time t, as shown in Equation 14 below:
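In representative form, with E(t) denoting the accumulated event polarities (a symbol introduced here for illustration):

    E(t) = \int_{t_s}^{t} e(\tau) \, d\tau = \sum_{i \in [s,t]} p_i    (Equation 14, representative form)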
Plot 214 of
Using Equations 11 and 12 above, one can determine ln[L(ti)] when ln[L(ti−1)] is known. Therefore, given a sequence of events specified by time continuous signal e(t), and assuming that c in Equation 11 above remains constant, one can (in the linear domain) determine a latent image L(t) at any given time t for CIS pixels of the bottom two rows of the array by incrementing over all events from a starting latent image L(s) at time ts to time t, as shown by Equation 15 below:
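A representative form is:

    L(t) = L(s) \cdot \exp\big( c \cdot E(t) \big) = L(s) \cdot \exp\Big( c \sum_{i \in [s,t]} p_i \Big)    (Equation 15, representative form)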
As discussed above with reference to Equation 1, photocurrent photogenerated by the photosensor 101 is proportional to the latent image L(t) (or irradiance) at time t. Thus, the integral of the latent image L(t) over an exposure period extending between time ts and time ts+T corresponds to “charge” and can be related to frame information captured by CIS pixels of a frame-based image sensor. Moreover, as discussed above, a blurry intensity image captured by a frame-based image sensor can be regarded as the integral of a sequence of latent images over—or a combination of multiple latent images captured by the frame-based image sensor within—an exposure period extending between time ts and time ts+T during which events are accumulated. Therefore, a blurry frame B captured by a frame-based image sensor can, using Equation 15 above, be expressed by Equation 16 below:
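A representative form, with the blurry frame B expressed as the average of the latent images over an exposure period of duration T, is:

    B = \frac{1}{T} \int_{t_s}^{t_s+T} L(t) \, dt = \frac{L(s)}{T} \int_{t_s}^{t_s+T} \exp\big( c \cdot E(t) \big) \, dt    (Equation 16, representative form)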
Plot 216 of
Equation 16 above is known as the EDI model and provides a relation between a blurry frame B captured by a frame-based image sensor and a latent image L(s) at a CIS pixel at time ts (corresponding to the start of the frame/exposure period for that CIS pixel). This relation can be rearranged to find the latent image L(s), as shown by Equation 17 below:
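In representative form:

    L(s) = \frac{B \cdot T}{\int_{t_s}^{t_s+T} \exp\big( c \cdot E(t) \big) \, dt}    (Equation 17, representative form)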
The latent image L(s) in Equation 17 above takes the interpretation of a deblurred frame based on (a) the blurry frame B captured by a frame-based image sensor and (b) events detected by an EVS pixel across a corresponding exposure period. In other words, because events detected by the EVS pixel during an exposure period indicate changes between latent images captured by one or more active (CIS) pixels during the exposure period, the detected events can be used to perform event-guided deblur.
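To make the event-guided deblur relation concrete, the following is a minimal off-line sketch in Python/NumPy of the discretized computation (the function name, the (timestamp, row, column, polarity) event format, and the uniform time sampling are illustrative assumptions rather than the on-chip implementation described later):

import numpy as np

def edi_deblur(blurry, events, t_start, t_end, c, n_steps=256):
    # Discrete sketch of Equation 17: recover the latent image L(s) at the start of
    # the exposure period from a blurry frame B and the events detected during it.
    #   blurry : (H, W) array, blurry frame B (average intensity over the exposure)
    #   events : iterable of (t, row, col, polarity) with t in [t_start, t_end]
    #   c      : contrast threshold (assumed constant here)
    H, W = blurry.shape
    T = t_end - t_start
    times = np.linspace(t_start, t_end, n_steps)      # sample times within the exposure
    E = np.zeros((n_steps, H, W))                     # accumulated event polarities E(t) per pixel
    for (t, row, col, p) in events:
        E[times >= t, row, col] += p                  # each event contributes from its timestamp onward
    denom = np.trapz(np.exp(c * E), times, axis=0)    # integral of exp(c * E(t)) over the exposure
    return blurry * T / denom                         # L(s) = B * T / integral (Equation 17)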
As discussed above, a rolling shutter can be used to capture and read out CIS information. For example, the exposure period for the top two rows of the CIS pixel array shown in the plot 210 of
Events detected by the EVS pixel corresponding to the bottom two rows of the pixel array can be used to correct such distortion. More specifically, events detected by the EVS pixel between time t0 and time ts (shown in plot 218 of
The plot 218 of
As discussed above, Equations 11 and 12 can be used to determine ln[L(ti)] when ln[L(ti−1)] is known. Therefore, given the sequence of events specified by the time continuous signal e(t) between time t0 and time ts, and assuming that c in Equation 11 above remains constant, one can (in the linear domain) determine a latent image L(s) for a CIS pixel at time ts by (i) integrating/incrementing over all events detected by a corresponding EVS pixel from time t0 to time ts and (ii) multiplying a starting latent image L(0) by the exponential of the accumulated events, as shown by Equation 19 below:
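A representative form is:

    L(s) = L(0) \cdot \exp\Big( c \int_{t_0}^{t_s} e(t) \, dt \Big) = L(0) \cdot \exp\Big( c \sum_{i \in [0,s]} p_i \Big)    (Equation 19, representative form)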
Therefore, given Equations 17 and 19 above, one can solve for the latent image L(0) of a CIS pixel and obtain it using Equation 20 below:
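In representative form:

    L(0) = \frac{B \cdot T}{\exp\big( c \sum_{i \in [0,s]} p_i \big) \cdot \int_{t_s}^{t_s+T} \exp\big( c \cdot E(t) \big) \, dt}    (Equation 20, representative form)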
L(0) in Equation 20 above takes the interpretation of a deblurred, rolling-shutter-distortion-corrected latent image for a given CIS pixel. Thus, the latent frame LF(0) for the entire pixel array can be obtained from the latent images L(0) of the individual CIS pixels in the array.
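A corresponding sketch for one rolling-shutter row, under the same illustrative assumptions as the edi_deblur() sketch above, first accumulates the events detected between the reference time t0 and the row's exposure start and then applies Equation 20:

import numpy as np

def deblur_and_rs_correct_row(blurry_row, events_row, t0, t_start, t_end, c, n_steps=256):
    # Sketch of Equation 20 for a single CIS row read out with a rolling shutter:
    # refer the deblurred row back to the common reference time t0.
    #   events_row     : iterable of (t, col, polarity) for the EVS pixel(s) covering this row
    #   t0             : reference time (start of exposure of the first row)
    #   t_start, t_end : exposure start/end times of this row
    W = blurry_row.shape[0]
    T = t_end - t_start
    pre = np.zeros(W)                                   # events accumulated from t0 to t_start
    times = np.linspace(t_start, t_end, n_steps)
    E = np.zeros((n_steps, W))                          # events accumulated during the exposure
    for (t, col, p) in events_row:
        if t0 <= t < t_start:
            pre[col] += p
        elif t_start <= t <= t_end:
            E[times >= t, col] += p
    denom = np.trapz(np.exp(c * E), times, axis=0)
    return blurry_row * T / (np.exp(c * pre) * denom)   # Equation 20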
As discussed above, Equation 15 can be used to determine a latent image L(t) at any given time t for CIS pixels by incrementing over all events corresponding to those CIS pixels from a starting latent image L(s) at time ts to time t. Using Equation 15 and Equation 19 above, the latent image L(t) at any given time t can be expressed in terms of a starting latent image L(0) corresponding to time t0 (corresponding to the start of the exposure period for the top two pixel rows of the CIS pixel array shown in
Assuming the contrast threshold c in the above equations remains constant, Equation 15 and/or Equation 21 above can be used for video frame interpolation. Video frame interpolation (VFI) is a technique that involves generating additional (e.g., otherwise non-existent) frames of video/image data between consecutive video/image frames. For example, referring again to the plot 210 of
Video frame interpolation is often used to make video playback smoother and more fluid, such as by making videos appear to have a higher refresh rate. Video frame interpolation is also commonly used in various video processing applications, such as video compression and video restoration. Additionally, or alternatively, video frame interpolation can be used to achieve slow motion video.
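As an illustration of event-guided video frame interpolation under Equation 15, the following Python sketch rolls a deblurred key frame L(s) forward to arbitrary intermediate times using accumulated event polarities (the function name and event format are illustrative assumptions):

import numpy as np

def interpolate_frames(latent_start, events, frame_times, c):
    # Sketch of Equation 15: L(t) = L(s) * exp(c * sum of event polarities in [s, t]).
    #   latent_start : (H, W) deblurred key frame L(s) at the exposure start time ts
    #   events       : iterable of (t, row, col, polarity) with t >= ts
    #   frame_times  : times t at which interpolated frames are desired
    H, W = latent_start.shape
    E = np.zeros((H, W))                 # running sum of event polarities per pixel
    events = sorted(events)
    frames, i = [], 0
    for t_out in sorted(frame_times):
        while i < len(events) and events[i][0] <= t_out:
            _, row, col, p = events[i]
            E[row, col] += p
            i += 1
        frames.append(latent_start * np.exp(c * E))
    return frames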
In some instances, it may be desirable to (e.g., dynamically, periodically) adjust the contrast threshold used to determine whether an event should be recorded. For example, it may be desirable to adjust the contrast threshold over time (a) to account for temperature drift of an image sensor and/or a corresponding imaging system, (b) to balance noise and/or saturation, and/or (c) to regulate data rate and/or power. For instance, it may be desirable to adjust the contrast threshold based on light levels in an external scene (e.g., as part of auto-exposure functions) and/or based on signals received from the external scene. As a specific example, EVS pixels of an image sensor may be exposed to a flicker signal (e.g., a ripple voltage from pure sunlight). Continuing with this example, it may be desirable to adjust the contrast threshold (e.g., above the ripple voltage), such as to control the rate at which the EVS pixels detect events due to the flicker signal. As still another example, it may be desirable to adjust the contrast threshold to control an event rate of the EVS sensor (e.g., to control the data rate and/or power consumption of the image sensor). The contrast threshold can be adjusted at any time, such as before, during, and/or after an exposure period for CIS pixels of an image sensor.
When the contrast threshold is dynamically or periodically adjusted, the EDI equations discussed above are modified slightly to account for the temporal change of the contrast threshold. For example, a latent image L(t) at any given time t can be determined for CIS pixels by integrating over all events from a starting latent image L(s) at time ts to time t, with each event scaled by the contrast threshold in effect when that event was detected, as shown by Equation 22 below:
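A representative form, with c_i denoting the contrast threshold in effect when event i is detected, is:

    L(t) = L(s) \cdot \exp\Big( \sum_{i \in [s,t]} c_i \cdot p_i \Big)    (Equation 22, representative form)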
Similarly, a blurry frame B captured by a frame-based image sensor can, using Equation 22 above, be expressed by Equation 23 below:
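In representative form:

    B = \frac{L(s)}{T} \int_{t_s}^{t_s+T} \exp\Big( \sum_{i \in [s,t]} c_i \cdot p_i \Big) \, dt    (Equation 23, representative form)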
Equation 23 above provides a relation between a blurry frame B captured by a frame-based image sensor and a latent image L(s) at a CIS pixel at time ts (corresponding to a start of the frame/exposure period for that CIS pixel). Therefore, this relation can be rearranged to find the latent image L(s), as shown by Equation 24 below:
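In representative form:

    L(s) = \frac{B \cdot T}{\int_{t_s}^{t_s+T} \exp\big( \sum_{i \in [s,t]} c_i \cdot p_i \big) \, dt}    (Equation 24, representative form)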
The latent image L(s) in Equation 24 above takes the interpretation of a deblurred frame based on (a) the blurry frame B captured by a frame-based image sensor, (b) events detected by an EVS pixel across a corresponding exposure period, and (c) dynamic adjustments of the contrast threshold (if any) across the corresponding exposure period. In other words, because events detected by the EVS pixel during an exposure period indicate changes between latent images captured by one or more active (CIS) pixels during the exposure period, the detected events can be used to perform event-guided deblur.
As discussed above, events detected by EVS pixels between time t0 and time ts (shown in plot 218 of
Therefore, given Equations 24 and 25 above, one can solve for the latent image L(0) of a CIS pixel and obtain it using Equation 26 below:
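In representative form:

    L(0) = \frac{B \cdot T}{\exp\big( \sum_{i \in [0,s]} c_i \cdot p_i \big) \cdot \int_{t_s}^{t_s+T} \exp\big( \sum_{i \in [s,t]} c_i \cdot p_i \big) \, dt}    (Equation 26, representative form)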
L(0) in Equation 26 above takes the interpretation of a deblurred, rolling-shutter-distortion-corrected latent image for a given CIS pixel. Thus, the latent frame LF(0) for the entire pixel array can be obtained from the latent images L(0) of the individual CIS pixels in the array.
As discussed above, Equation 22 can be used to determine a latent image L(t) at any given time t for CIS pixels by incrementing over all events corresponding to those CIS pixels from a starting latent image L(s) at time ts to time t while accounting for the temporal change of the contrast threshold. Using Equation 22 and Equation 25 above, for embodiments in which the contrast threshold is dynamically or periodically adjusted, the latent image L(t) at any given time t can be expressed in terms of a starting latent image L(0) corresponding to time t0 (corresponding to the start of the exposure period for the top two pixel rows of the CIS pixel array shown in
As discussed in greater detail below, when the contrast threshold is dynamically or periodically adjusted, Equation 22 and/or Equation 27 can be used to perform event-guided video frame interpolation. For example, several embodiments of the present technology are directed to imaging systems with hybrid image sensors that capture CIS data and corresponding EVS data. Continuing with this example, the hybrid image sensors can be configured to accumulate EVS data and output the accumulated EVS data to downstream components (e.g., application processors) of the imaging systems. In turn, the accumulated EVS data can be used to generate one or more interpolated video frames based on the CIS data.
In at least some of these embodiments, the imaging systems can further utilize the EVS data (e.g., raw EVS data or accumulated EVS data) to deblur and/or correct for rolling-shutter distortion in the CIS data. Such deblurring and/or rolling-shutter-distortion correction can be performed on-chip (e.g., on the hybrid image sensors) such that the hybrid image sensors are configured to output deblurred and/or rolling-shutter-distortion-corrected CIS data to the downstream application processors or image signal processors of the imaging systems. On-chip deblurring and/or on-chip rolling-shutter-distortion correction can avoid many of the drawbacks discussed in detail below with reference to off-chip deblurring and/or off-chip rolling-shutter-distortion-correction techniques. In other embodiments of the present technology, deblurring and/or rolling-shutter-distortion correction can be performed off-chip (e.g., off of the hybrid image sensors, such as after the hybrid image sensors output raw CIS data, raw EVS data, and/or accumulated EVS data).
As discussed above, many event-guided-deblur solutions use an active image sensor to capture CIS data and a separate event vision sensor to capture EVS data. Such dual-sensor configurations, however, have several shortcomings, such as parallax errors introduced because the sensors are not collocated, complexities in spatial and temporal synchronization of the CIS data and the EVS data, and added costs (e.g., due to the need for two pairs of lenses, packages, etc.). Although these shortcomings can be overcome by using hybrid image sensors, all existing event-guided-deblur solutions known to the inventors of the present disclosure output CIS data and EVS data to application processors external to the hybrid image sensors, for the application processors to perform event-guided deblur and rolling-shutter-distortion correction off-chip. Such off-chip, event-guided-deblur solutions and rolling-shutter-correction solutions suffer from several additional shortcomings, many of which are discussed below with reference to
One drawback of off-chip, event-guided-deblur solutions is the need for a relatively large amount of memory. For example, to perform event-guided deblur, the application processor 323 uses a first buffer 324, a second buffer 325, a third buffer 326, and a fourth buffer 327. More specifically, the first buffer 324 is used to store two frames of CIS image data. One of the frames is used to sync the CIS data 321 with the EVS data 322 at a CIS/EVS sync block 328 of the application processor 323, and the other of the frames is deblurred by the application processor 323 using the EVS data 322 at a deblur block 329 of the application processor 323. In one example, the first buffer 324 can be approximately 44 MB in size for a 12-megapixel image sensor. The second buffer 325 is used to store the EVS data 322 prior to decoding, and the third buffer 326 is used to store the EVS data 322 after decoding. In one example, the second buffer 325 can be configured to store approximately 50 ms of the EVS data 322 and can therefore be approximately 90 MB in size. As such, depending on the decoding, the third buffer 326 can be approximately 100-400 MB in size. The fourth buffer 327 can be configured to store a deblurred image frame and can therefore be approximately 22 MB in size for a 12-megapixel image sensor. As such, continuing with the above examples, the application processor 323 can require approximately 550 MB of memory to perform event-guided deblur of the CIS data 321 using the EVS data 322.
Other drawbacks of off-chip, event-guided deblur solutions include that such solutions (i) suffer from relatively large latency and (ii) cannot support real-time video. For example, as shown in
Still other drawbacks of off-chip, event-guided deblur solutions include (i) the requirement for relatively high input/output (IO) bandwidth between the image sensor(s) and an external application processor, and (ii) consumption of a relatively large amount of power. For example, as shown in
One other drawback of off-chip, event-guided deblur solutions is the complexity of the interface between the image sensor(s) and downstream components of an imaging system. For example, rather than outputting deblurred image frames, the image sensor(s) (not shown) of the imaging system 320 of
To address at least some of these concerns, several embodiments of the present technology described herein are generally directed to hybrid image sensors with on-chip image deblur and rolling shutter distortion correction capabilities. For example, several embodiments of the present technology described in detail below are directed to image sensors with the following on-chip capabilities: (a) synchronization of CIS data captured using active (CIS) pixels with EVS data captured using EVS pixels, (b) image deblur, (c) rolling shutter distortion correction, and/or (d) dynamic contrast threshold calibration. In some embodiments, the on-chip image deblur can include on-chip, event-guided deblur. In these and other embodiments, the on-chip rolling shutter distortion correction can include on-chip, event-guided rolling shutter distortion correction.
Embodiments of the present technology that include on-chip image deblur and/or on-chip rolling-shutter-distortion correction are expected to offer several advantages. For example, in comparison to the off-chip, event-guided deblur solutions discussed above, the present technology is expected to reduce or minimize (a) an amount of memory required to perform image deblur and rolling shutter distortion correction; (b) latency associated with performing image deblur and rolling shutter distortion correction; (c) required IO bandwidth/throughput; and/or (d) an amount of power required to perform image deblur and rolling shutter distortion correction. As such, the present technology is also expected to support real-time video in addition to the processing of still images. Moreover, because image deblur and rolling shutter distortion correction are performed on-chip (e.g., entirely or partially internal to the image sensor, and/or without first outputting or needing to first output raw CIS data and/or raw EVS data from the image sensor), image sensors configured in accordance with various embodiments of the present technology are able to output deblurred, rolling-shutter-distortion-corrected image frames (e.g., in addition to or in lieu of raw CIS data and/or raw EVS data), meaning that the interface between such image sensors and downstream components of corresponding imaging systems can be simplified in comparison to the off-chip, event-guided deblur solutions discussed above.
In some embodiments, the pixel array 438 is a two-dimensional (2D) array including a plurality of pixel cells (also referred to as “pixels”) that each includes at least one photosensor (e.g., at least one photodiode) exposed to incident light. As shown in the illustrated embodiment, the pixels are arranged into rows and columns. Some of the pixels can be configured as CMOS image sensor (CIS) pixels that are configured to acquire image data of a person, place, object, etc., which can then be used to render images and/or video of a person, place, object, etc. For example, each CIS pixel is configured to photogenerate image charge in response to the incident light. After each CIS pixel has acquired its image charge, the corresponding analog image charge data can be read out by the image readout circuitry 446 in the third die 436 through the column bit lines. In some embodiments, the image charge from each row of the pixel array 438 may be read out in parallel through column bit lines by the image readout circuitry 446. As discussed in greater detail below, others of the pixels of the pixel array 438 can be configured as event vision sensor (EVS) pixels.
The image readout circuitry 446 in the third die 436 can include amplifiers, analog to digital converter (ADC) circuitry, associated analog support circuitry, associated digital support circuitry, etc., for normal image readout and processing. In some embodiments, the image readout circuitry 446 may also include event driven readout circuitry, which will be described in greater detail below. In operation, the photogenerated analog image charge signals are read out from the pixel cells of pixel array 438, amplified, and converted to digital values in the image readout circuitry 446. In some embodiments, image readout circuitry 446 may read out a row of image data at a time. In other examples, the image readout circuitry 446 may read out the image data using a variety of other techniques (not illustrated), such as a serial readout or a full parallel readout of all pixels simultaneously. The image data may be stored or even manipulated by applying post image effects (e.g., crop, rotate, remove red eye, adjust brightness, adjust contrast, and the like).
In the illustrated embodiment, the second die 434 (also referred to herein as the “middle die”) includes an event driven sensing array 442 that is coupled to at least some of the pixels (e.g., EVS pixels) of the pixel array 438 in the first die 432. In some embodiments, the event driven sensing array 442 is coupled to the pixels of the pixel array 438 through hybrid bonds between the first die 432 and the second die 434. The event driven sensing array 442 can include an array of event driven circuits. In some embodiments, each one of the event driven circuits in the event driven sensing array 442 is coupled to at least one of the plurality of pixels of the pixel array 438 through hybrid bonds between the first die 432 and the second die 434 to asynchronously detect events that occur in light that is incident upon the pixel array 438 in accordance with the teachings of the present disclosure.
In some embodiments, corresponding event detection signals are generated by the event driven circuits (e.g., that are similar to the event sensing front-end circuit illustrated in
The portion of the pixel array 438 shown in
Referring again to
In some embodiments, row/column control circuitry corresponding to the CIS pixels 435 can be allocated on a same die as—or a different die from—the die (e.g., the third die 436) on which the ADC 451 is allocated. In these and other embodiments, row/column control circuitry corresponding to the EVS pixels 437 can be allocated on a same die as—or a different die from—the die (e.g., the third die 436) on which the scan readout circuitry 453 is allocated. In these and still other embodiments, the ADC 451 and/or the row/column control circuitry corresponding to the CIS pixels 435 can be allocated on a same die as—or a different die from—the die (e.g., the third die 436) on which the scan readout circuitry 453 and/or the row/column control circuitry corresponding to the EVS pixel 437 is/are allocated.
In the illustrated embodiment, the EVS pixel 437 is dedicated to capturing non-CIS (EVS) information while the CIS pixels 435 are dedicated to capturing CIS information. In other embodiments, the EVS pixel 437 and/or one or more of the CIS pixels 435 can be switched between being configured to capture CIS information and non-CIS information. This can enable the stacked system 430 to operate in a CIS-only mode in which all of the pixels 435 and the pixel 437 are used to capture CIS information, an EVS-only mode in which all of the pixels 435 and the pixel 437 are used to capture non-CIS (EVS) information, and/or a hybrid CIS and EVS mode in which a first subset of the pixels 435, 437 are used to capture CIS information and a second subset of the pixels 435, 437 are used to capture non-CIS (EVS) information.
In some embodiments, the event driven circuit 400 on the second die 434 occupies the same footprint as the 4×4 pixel cluster on the first die 432. In other embodiments, the event driven circuit 400 can occupy a different footprint from the 4×4 pixel cluster. Additionally, or alternatively, although the ratio of CIS pixels to EVS pixels is 15:1 in the 4×4 pixel cluster, other ratios of CIS pixels to EVS pixels (e.g., 14:2, 12:4, 8:8, 4:12, 2:14, 1:15) are possible and fall within the scope of the present technology. Moreover, although the EVS pixel 457 of
As discussed above, event data captured using EVS pixels can be used to perform event-guided deblur and/or rolling shutter distortion correction of CIS (frame) information captured using CIS pixels. To this end,
As shown, the image sensor 530 includes a CIS pixel array 538 (e.g., similar to the pixel array 438 of
The image sensor 530 further includes a common control block 568 for synchronizing operation of the pixel array 538 with operation of the event driven sensing array 542. More specifically, although CIS pixels of the pixel array 538 and EVS pixels of the event driven sensing array 542 include their own row/column control circuitry and are independently read through their own readout circuitry, the common control block 568 synchronizes operation (e.g., reset, exposure start times, exposure end times) of the CIS pixels, the EVS pixels, the row/column control circuits, and/or the readout circuits. This synchronization is described in greater detail below with reference to
In some embodiments, the image sensor 530 can include a first multiplexer 565, a second multiplexer 566, and/or a third multiplexer 567. As shown, the first multiplexer 565, the second multiplexer 566, and the third multiplexer 567 can be controlled using a deblur enable signal deblurEN. When the deblur enable signal deblurEN is un-asserted (e.g., is in a low or ‘0’ state), the first multiplexer 565 and the third multiplexer 567 can stream CIS data (e.g., raw intensity image frames, blurry intensity image frames) to an image signal processor 552 of the image sensor 530, such as in lieu of streaming the raw CIS data to a deblur and rolling-shutter-distortion correction circuit 570 (“the deblur circuit 570” or “the rolling shutter distortion correction circuit 570”) of the image sensor 530. In turn, the image signal processor 552 can provide the CIS data to a synchronous communications interface 555a (e.g., a MIPI interface/transmitter), such as for output from the image sensor 530.
Additionally, or alternatively, when the deblur enable signal deblurEN is un-asserted (e.g., is in a low or ‘0’ state), the second multiplexer 566 can stream EVS data to the column-scan readout circuitry 553, such as in lieu of streaming the EVS data to the deblur circuit 570. In turn, the column-scan readout circuitry 553 can provide the EVS data to an event signal processor 554 of the image sensor 530, and the event signal processor 554 can provide the EVS data to a synchronous communications interface 555b (e.g., a MIPI interface/transmitter), such as for output from the image sensor 530.
In the illustrated embodiment, the synchronous communications interface 555a and the synchronous communications interface 555b can be independent physical interfaces. Alternatively, the synchronous communications interface 555a and the synchronous communications interface 555b can be merged. For example, CIS data and EVS data can be output from the image sensor 530 via a shared synchronous communications interface 555 (e.g., a shared MIPI interface, a shared virtual channel, embedded line).
Referring again to the first multiplexer 565, the second multiplexer 566, and the third multiplexer 567, when the deblur enable signal deblurEN is asserted (e.g., is in a high or ‘1’ state), the first multiplexer 565 is enabled to stream CIS information read from CIS pixels of the pixel array 538 into the deblur circuit 570, and the second multiplexer 566 is enabled to stream EVS information read from EVS pixels of the event driven sensing array 542 into the deblur circuit 570. For example, when the deblur enable signal deblurEN is asserted, EVS information read from EVS pixels of the event driven sensing array 542 can be constantly streamed into the deblur circuit 570 via the second multiplexer 566 (e.g., while CIS pixels of the pixel array 538 integrate photogenerated charge over an exposure period). Additionally, or alternatively, when CIS information is read out from CIS pixels of the pixel array 538 after the exposure period, digitized CIS information can be streamed into the deblur circuit 570 via the first multiplexer 565.
In turn, the deblur circuit 570 (a) can compute a fused image/video stream from the CIS data and the EVS data received via the first multiplexer 565 and the second multiplexer 566, respectively, and (b) can output the fused image/video stream into the third multiplexer 567 for streaming to the image signal processor 552. The fused image/video stream may then be provided from the image signal processor 552 to the synchronous communications interface 555a for output from the image sensor 530. The fusion computations performed by the deblur circuit 570 can be targeted at deblurring the CIS frame information captured by CIS pixels of the pixel array 538, correcting for rolling shutter artifacts, and/or creating interpolated video frames. On-chip deblurring of CIS frame information and rolling shutter correction of CIS frame information are discussed in greater detail below with reference to
In embodiments in which the image sensor 530 is a stacked system, components of the deblur circuit 570 can be positioned on one or more of the dies (e.g., a top die, a middle die, or a bottom die) of the stacked system. As a specific example, the image sensor 530 can be generally similar to the stacked system 430 of
In some embodiments, the first multiplexer 565, the second multiplexer 566, and/or the third multiplexer 567 shown in
As shown in
In the illustrated embodiment, the EDI computation block 671 includes a plurality of EDI components. More specifically, the EDI computation block 671 includes a product computation block 675, a first integration computation block 673, a first integration buffer 674, an exponential computation block 676, a second integration computation block 677, and a second integration buffer 678. As events detected by EVS pixels during an exposure period are read out from those EVS pixels into the deblur circuit 670, the product computation block 675 is configured to multiply the polarity p of the events by the contrast threshold c, which is equivalent to Clog−TH (described above with reference to Equations 6-12, 15-17, and 19-26). As discussed above, it may be desirable to dynamically or periodically adjust the contrast threshold c. Thus, the contrast threshold c can vary over time. For example, it may be desirable to adjust the contrast threshold over time (a) to account for temperature drift of the hybrid image sensor and/or a corresponding imaging system, (b) to balance noise and/or saturation, and/or (c) to regulate data rate and/or power of the hybrid image sensor. For instance, it may be desirable to adjust the contrast threshold c based on light levels in an external scene (e.g., as part of auto-exposure functions) and/or based on signals detected in the external scene. As a specific example, EVS pixels of an image sensor may be exposed to a flicker signal (e.g., a ripple voltage from pure sunlight). Continuing with this example, it may be desirable to adjust the contrast threshold c (e.g., above the ripple voltage), such as to control the rate at which the EVS pixels detect events due to the flicker signal. As still another example, it may be desirable to adjust the contrast threshold c to control the rate at which the EVS pixels of the hybrid image sensor detect events, for example, to control the data rate and/or power consumption of the hybrid image sensor.
Referring again to
In some embodiments, to set or adjust the contrast threshold c that is used by the EVS pixels to detect events and/or that is used by the product computation block 675 (
In some embodiments, the contrast threshold c can be set or adjusted at any time, such as on-the-fly. For example, the contrast threshold c can be set or adjusted continuously or at any time before, during, and/or after exposure periods for corresponding CIS pixels. In other embodiments, the contrast threshold c can be set or adjusted at preset times and/or intervals. For example, the contrast threshold c can be set or adjusted at starts of exposure periods for corresponding CIS pixels, at ends of exposure periods for corresponding CIS pixels, at frame interpolation timing points, and/or periodically (e.g., after a preset amount of time has elapsed since a last time the contrast threshold c was set or adjusted, after a preset number of events have occurred since a last time the contrast threshold c was set or adjusted, etc.).
In some embodiments, a contrast threshold c can be applied globally. For example, a contrast threshold c can be used for every EVS pixel across the hybrid image sensor. Thus, when the contrast threshold c is set or adjusted, the contrast threshold c can be set or adjusted for every EVS pixel. In other embodiments, a contrast threshold c can be applied locally. For example, the hybrid image sensor can maintain and use a plurality of contrast thresholds c, with each contrast threshold c corresponding to (i) a single EVS pixel or (ii) a group of EVS pixels representing less than all EVS pixels across the hybrid image sensor. Thus, when a contrast threshold c is set or adjusted for one or more EVS pixels, other contrast threshold(s) c used for other EVS pixels of the hybrid image sensor can (a) remain unchanged, (b) be set or adjusted independently from the setting or adjusting of the contrast threshold c, or (c) be set or adjusted based at least in part on the setting or adjusting of the contrast threshold c.
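As a simple illustration of event-rate-based regulation (the function name, rate targets, and bounds are illustrative assumptions rather than documented sensor registers), a controller might raise or lower a global or per-region contrast threshold as follows:

def adjust_contrast_threshold(c, event_rate, target_rate, c_min=0.05, c_max=1.0, step=1.1):
    # Raise c when the observed event rate exceeds the target (e.g., to reduce data
    # rate/power or to sit above a flicker ripple); lower c when the rate falls well
    # below the target (to regain sensitivity). Can be applied globally or per EVS-pixel region.
    if event_rate > target_rate:
        c = min(c * step, c_max)
    elif event_rate < 0.5 * target_rate:
        c = max(c / step, c_min)
    return c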
Although shown as part of the various auxiliary circuits 456 on the third die 436 of the stacked system 430 in
Referring again to
For example, the first integration computation block 673 can, for each EVS pixel, integrate (e.g., continuously or in accordance with the clock signal EVS_CLK that is asserted when events are detected) the output of the product computation block 675 over time. More specifically, the first integration computation block 673 can integrate each of the outputs of the product computation block 675 from time ts (corresponding to the start of the current exposure period) to time t, ending at time ts+T (corresponding to the end of the current exposure period). The first integration buffer 674 can track/store the results of the integration performed by the first integration computation block 673, which are each equivalent to Σ_{i∈[s,t]} c_i·p_i at the end of the exposure period. Computations performed by the product computation block 675, the first integration computation block 673, and the first integration buffer 674 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent.
The exponential computation block 676, the second integration computation block 677, and the second integration buffer 678 can be configured to compute the second, outer integral (e.g., ∫_{t_s}^{t_s+T} exp(Σ_{i∈[s,t]} c_i·p_i) dt) of the EDI model described above. For example, for each EVS pixel, the exponential computation block 676 can determine the exponential of the output of the first integration buffer 674, resulting in exp(Σ_{i∈[s,t]} c_i·p_i) at the output of the exponential computation block 676.
As shown in
The second integration computation block 677 of the EDI computation block 671 can, for each EVS pixel, continuously integrate the output of the exponential computation block 676 over time. More specifically, the second integration computation block 677 can integrate each of the outputs of the exponential computation block 676 from time ts (corresponding to the start of the current exposure period) to time t, ending at time ts+T (corresponding to the end of the current exposure period). The second integration buffer 678 can track/store the results of this time continuous integration, which are each equivalent to ∫_{t_s}^{t_s+T} exp(Σ_{i∈[s,t]} c_i·p_i) dt at the end of the exposure period.
Each of the computations performed by the exponential computation block 676, the second integration computation block 677, and the second integration buffer 678 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent. In addition, although operation of the product computation block 675, the first integration computation block 673, and/or the first integration buffer 674 can be clocked by the control signal EVS_CLK, the exponential computation block 676, the second integration computation block 677, and/or the second integration buffer 678 can be enabled to continuously perform their respective operations over time. As a specific example, in some embodiments, operation of the second integration computation block 677 is neither clocked by the control signal EVS_CLK nor triggered by events. Rather, the second integration computation block 677 is configured to continuously integrate the outputs of the exponential computation block 676 over time, at least between time ts and time ts+T corresponding to the start and stop times, respectively, of a corresponding exposure period/EVS accumulation period. In these embodiments, operation of the exponential computation block 676 can be clocked by the control signal EVS_CLK or enabled to continuously perform its operations over time.
Furthermore, because events detected at each EVS pixel are accumulated by the EDI computation block 671, raw EVS data input into the EDI computation block 671 of the deblur circuit 670 can be discarded once events of the raw EVS data have been accumulated by the EDI computation block 671. As a result, the second integration buffer 678 need only store/maintain the accumulated results of the integration computation block 677, meaning that the second integration buffer 678 can have a relatively small buffer size in comparison to buffers utilized in off-chip, event-guided deblur solutions. In addition, because the raw EVS data can be discarded rather than output from an image sensor corresponding to the deblur circuit 670, IO throughput and power consumption can be reduced in comparison to off-chip, event-guided deblur solutions in which the raw EVS data is output from the image sensor to an external application processor. In other embodiments of the present technology, all or a subset of the raw EVS data can be stored and/or output from the image sensor after events in the raw EVS data are accumulated.
After the exposure period ends, CIS data can be read out from CIS pixels of the image sensor and streamed into the latent frame computation block 672 of the deblur circuit 670. At this point, the latent frame computation block 672 can deblur the CIS data by combining/fusing the CIS data with the accumulated EVS data stored in the second integration buffer 678 of the EDI computation block 671. More specifically, the latent frame computation block 672 can compute a latent image frame L(s) corresponding to time ts (the start of the exposure period) by performing the operation specified in Equation 24 above for each EVS pixel. The final, deblurred image data (e.g., the latent image frame L(s)) can be output from the latent frame computation block 672 to the application processor and/or an image signal processor of a corresponding imaging system. Because the CIS data can be read directly into the latent frame computation block 672 after the exposure period and because the accumulated EVS data from the second integration buffer 678 is readily available and already aligned at this time (as discussed in greater detail below), no CIS frame buffer is required to perform on-chip deblur using the deblur circuit 670. Therefore, the deblur circuit 670 and/or the corresponding image sensor can lack a CIS frame buffer in some embodiments. In other embodiments, the deblur circuit 670 and/or the corresponding image sensor can include a CIS frame buffer, such as in embodiments in which raw CIS data can be output in addition to fused image/video data.
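For reference, the data flow through the EDI computation block and the latent frame computation block can be summarized with the following behavioral Python sketch (class and method names are illustrative assumptions; floating-point widths and clocking are not modeled):

import math

class EdiAccumulator:
    # Behavioral sketch of the EDI computation path: the product block and first
    # integrator accumulate c_i * p_i per EVS pixel on each event; the exponential
    # block and second integrator accumulate exp(sum) continuously over time.
    def __init__(self, n_evs_pixels):
        self.log_sum = [0.0] * n_evs_pixels   # first integration buffer: sum of c_i * p_i
        self.edi_sum = [0.0] * n_evs_pixels   # second integration buffer: integral of exp(.) dt
        self.last_t = None

    def on_event(self, pixel, polarity, c):
        self.log_sum[pixel] += c * polarity   # product block + first integration block

    def on_tick(self, t):
        if self.last_t is not None:
            dt = t - self.last_t
            for i, s in enumerate(self.log_sum):
                self.edi_sum[i] += math.exp(s) * dt   # exponential + second integration blocks
        self.last_t = t

    def latent_values(self, blurry_values, exposure_T):
        # Latent frame computation block: L(s) = B * T / (second integral), per Equation 24.
        return [b * exposure_T / d for b, d in zip(blurry_values, self.edi_sum)]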
As discussed above, the output of the exponential computation block 676 (or the first integration buffer 674) can be streamed to a downstream application processor for the application processor to perform video frame interpolation. In addition, after (i) the exposure period ends, (ii) the CIS data is read out from CIS pixels of the image sensor, and/or (iii) the latent frame computation block 672 computes the latent image frame L(s) corresponding to time ts, the latent image frame L(s) computed by the latent frame computation block 672 can be output to a CIS key frame buffer, such as of the downstream application processor. Additionally, or alternatively, raw CIS data can be read out from the CIS pixels of the image sensor and provided to the downstream application processor. In turn, the downstream application processor can (using the output of the exponential computation block 676 or the first integration buffer 674, the latent image frame L(s) and/or the raw CIS data, and Equation 22 above) perform video frame interpolation to compute one or more latent image frames L(t) corresponding to one or more times t between time ts and time ts+T (the end of the exposure period). The one or more latent image frames L(t) can represent one or more additional, interpolated image frames that, for example, can be used to increase the frame rate of the imaging system and/or be used to produce slow motion video.
In addition, three additional latent frames LF(t1), LF(t2), and LF(ts+T) can be generated using video frame interpolation. For example, a downstream application processor (a) can receive accumulated EVS data from the deblur circuit during the exposure periods illustrated in
As shown in
As such, in embodiments in which a rolling shutter is used, at least some deblur circuits configured in accordance with various embodiments of the present technology can additionally include rolling shutter distortion correction components that are usable to correct for rolling-shutter distortion that may be present in computed latent image frames LF(s) and LF(t). Two such deblur circuits are described in greater detail below with reference to
In addition, three additional latent frames LF(t1), LF(t2), and LF(t3) can be generated using video frame interpolation. For example, a downstream application processor (a) can receive accumulated EVS data from the deblur circuit before and during the exposure periods illustrated in
In the illustrated embodiment, the computation block 971a includes EDI components. The EDI components can be generally similar to the EDI components of the computation block 671 of the deblur circuit 670 of
As events detected by EVS pixels during an exposure period are read out from those EVS pixels into the deblur circuit 970, the product computation block 975 is configured to multiply the polarity p of the events by the contrast threshold c, which is equivalent to Clog−TH (described above with reference to Equations 6-12, 15-17, and 19-26). The contrast threshold c can be set, adjusted, or maintained in accordance with the discussion above (e.g., with reference to
For example, the first integration computation block 973 can, for each EVS pixel, integrate (e.g., continuously or in accordance with the clock signal EVS_CLK that is asserted when events are detected) the output of the product computation block 975 over time. More specifically, the first integration computation block 973 can integrate each of the outputs of the product computation block 975 from time ts (corresponding to the start of the current exposure period) to time t, ending at time ts+T (corresponding to the end of the current exposure period). The first integration buffer 974 can track/store the results of the integration performed by the first integration computation block 973. Computations performed by the product computation block 975, the first integration computation block 973, and the first integration buffer 974 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent.
The exponential computation block 976, the second integration computation block 977, and the second integration buffer 978 can be configured to compute the second, outer integral of the EDI model described above. For example, for each EVS pixel, the exponential computation block 976 can determine the exponential of the output of the first integration buffer 974, resulting in exp(Σi∈[s,t] ci·pi) at the output of the exponential computation block 976.
As shown in
The second integration computation block 977 of the deblur circuit 970a can, for each EVS pixel, continuously integrate the output of the exponential computation block 976 over time. More specifically, the second integration computation block 977 can integrate each of the outputs of the exponential computation block 976 from time ts (corresponding to the start of an exposure period for CIS pixels corresponding to a respective EVS pixel) to time t, ending at time ts+T (corresponding to the end of the exposure period for the CIS pixels corresponding to the respective EVS pixel). The second integration buffer 978 can track/store the results of this time continuous integration, which are each equivalent to ∫s^(s+T) exp(Σi∈[s,t] ci·pi) dt at the end of each corresponding exposure period.
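As a rough illustration of the accumulation pipeline formed by the product computation block 975, the first integration computation block 973 and first integration buffer 974, the exponential computation block 976, and the second integration computation block 977 and second integration buffer 978, the following sketch models a single EVS pixel in software; the class and method names are hypothetical and the sketch is not intended to describe the hardware implementation.

    # Hypothetical per-EVS-pixel accumulator mirroring the blocks described
    # above. dt is the elapsed time since the previous update of the outer
    # integral; names are illustrative only.
    import math

    class EdiAccumulator:
        def __init__(self, contrast_threshold: float):
            self.c = contrast_threshold
            self.inner_sum = 0.0       # contents of the first integration buffer
            self.outer_integral = 0.0  # contents of the second integration buffer

        def on_event(self, polarity: int) -> None:
            # Product block: polarity (+1/-1) times the contrast threshold,
            # accumulated into the inner sum when EVS_CLK fires.
            self.inner_sum += self.c * polarity

        def on_tick(self, dt: float) -> None:
            # Continuous outer integration of exp(inner_sum) over time,
            # between ts and ts+T of the corresponding exposure period.
            self.outer_integral += math.exp(self.inner_sum) * dt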
Each of the computations performed by the exponential computation block 976, the second integration computation block 977, and the second integration buffer 978 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent. In addition, although operation of the product computation block 975, the first integration computation block 973, and/or the first integration buffer 974 can be clocked by the control signal EVS_CLK, the exponential computation block 976, the second integration computation block 977, and/or the second integration buffer 978 can be enabled to continuously perform their respective operations over time. As a specific example, in some embodiments, operation of the second integration computation block 977 is neither clocked by the control signal EVS_CLK nor triggered by events. Rather, the second integration computation block 977 is configured to continuously integrate the outputs of the exponential computation block 976 over time, at least between time ts and time ts+T corresponding to the start and stop times, respectively, of a corresponding exposure period. In these embodiments, operation of the exponential computation block 976 can be clocked by the control signal EVS_CLK or enabled to continuously perform its operations over time.
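The following sketch illustrates, under assumed encoding conventions, how a non-negative value could be packed into and unpacked from a compact floating point representation with a 9-bit mantissa and a 4-bit exponent; it is provided only to illustrate the general idea of such a representation and is not intended to describe the exact hardware format.

    # Hypothetical sketch of a 9-bit-mantissa / 4-bit-exponent representation.
    import math

    def to_custom_float(x: float) -> tuple[int, int]:
        if x == 0.0:
            return 0, 0
        exp = max(0, min(15, int(math.floor(math.log2(x))) + 1))   # 4-bit exponent
        mant = min(511, int(round(x / (2.0 ** exp) * 511)))        # 9-bit mantissa
        return mant, exp

    def from_custom_float(mant: int, exp: int) -> float:
        # Reconstruct an approximation of the original value.
        return (mant / 511.0) * (2.0 ** exp)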
Because events detected at each EVS pixel during a corresponding exposure period are accumulated by the EDI components of the computation block 971a, raw EVS data input into the computation block 971a of the deblur circuit 970a during a corresponding exposure period can be discarded once events of the raw EVS data have been accumulated by the EDI components of the computation block 971a. As a result, in some embodiments, the second integration buffer 978 stores/maintains only the accumulated results of the integration computation block 977, meaning that the second integration buffer 978 can have a relatively small buffer size in comparison to buffers utilized in off-chip, event-guided deblur solutions. In addition, because the raw EVS data can be discarded rather than output from an image sensor corresponding to the deblur circuit 970a, IO throughput and power consumption can be reduced in comparison to off-chip, event-guided deblur solutions in which the raw EVS data is output from the image sensor to an external application processor. In other embodiments of the present technology, all or a subset of the raw EVS data can be stored and/or output from the image sensor after events in the raw EVS data are accumulated.
After the exposure period ends, CIS data can be read out from CIS pixels of the image sensor and streamed into the latent frame computation block 972 of the deblur circuit 970a. At this point, the latent frame computation block 972 can deblur the CIS data by combining/fusing the CIS data with the accumulated EVS data stored in the second integration buffer 978 of the computation block 971a. More specifically, for one or more of the CIS pixels, the latent frame computation block 972 can compute one or more latent image frames L(s), each corresponding to a time ts (representing a start of a corresponding exposure period), by performing the operation specified in Equation 24 above using CIS data captured by the one or more CIS pixels and corresponding EVS data accumulated in the second integration buffer 978. Because CIS data can be read directly into the latent frame computation block 972 at or after the end of an exposure period and because EVS data accumulated in the second integration buffer 978 is readily available and already aligned with the CIS data at this time (as discussed in greater detail below), no CIS frame buffer is required to perform on-chip deblur using the deblur circuit 970a. Therefore, the deblur circuit 970a and/or the corresponding image sensor can lack a CIS frame buffer in some embodiments. In other embodiments, the deblur circuit 970a and/or the corresponding image sensor can include a CIS frame buffer, such as in embodiments in which raw CIS data can be output in addition to fused image/video data.
As discussed above with reference to
As shown in
As events detected by EVS pixels during an exposure period are read out from those EVS pixels into the deblur circuit 970, the product computation block 981 is configured to multiply the polarity p of the events by the contrast threshold c, which is equivalent to Clog−TH (described above with reference to Equations 6-12, 15-17, and 19-26). The contrast threshold c can be set, adjusted, or maintained in accordance with the discussion above (e.g., with reference to
For example, the integration computation block 979 can, for each EVS pixel, integrate (e.g., continuously or in accordance with the clock signal EVS_CLK that is asserted when events are detected) the output of the product computation block 981 over time. More specifically, the integration computation block 979 can integrate each of the outputs of the product computation block 981 from time t0 (corresponding to a start of a first exposure period for a given image frame) to time ts (representing a start of another exposure period for CIS pixels corresponding to the EVS pixel, for the given image frame). The integration buffer 980 can track/store the results of the integration performed by the integration computation block 979. Computations performed by the product computation block 981, the integration computation block 979, and the integration buffer 980 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent.
For example, consider the last two CIS pixel rows shown in the plot 810 of
Integration results of the integration computation block 979 that are stored in the integration buffer 980 can be output to the exponential computation block 982. The exponential computation block 982 can determine the exponential of the output of the integration buffer 980, resulting in exp(Σi∈[0,s] ci·pi) at the output of the exponential computation block 982. The exponential of the output of the integration buffer 980 determined by the exponential computation block 982 can be output (i) to the latent frame computation block 972 of the deblur circuit 970a and (ii) to a downstream application processor, such as an off-chip application processor of an imaging system that includes a hybrid image sensor incorporating the deblur circuit 970a. As discussed in greater detail below, the application processor can be configured to use the output of the exponential computation block 982 to perform video frame interpolation. Because the exponential at the output of the exponential computation block 982 includes accumulated EVS data, corresponding raw EVS data read out of the EVS pixels of a corresponding hybrid image sensor can be discarded in some embodiments without reading the corresponding raw EVS data out of the hybrid image sensor and/or to the application processor. Alternatively, the corresponding raw EVS data can be read out of the hybrid image sensor and/or to the application processor, such as in addition to the accumulated EVS data and/or the output of the exponential computation block 982.
In some embodiments, computations performed by the exponential computation block 982 can be performed in floating point representation, such as 9-bit mantissa and 4-bit exponent. Additionally, or alternatively, although operation of the product computation block 981, the integration computation block 979, and/or the integration buffer 980 can be clocked by the control signal EVS_CLK or another control signal, the exponential computation block 982 can be enabled to continuously perform its respective operations over time. Alternatively, operation of the exponential computation block 982 can also be clocked by the control signal EVS_CLK.
As discussed above, after an exposure period for CIS pixels ends, CIS data captured by the CIS pixels can be read out into the latent frame computation block 972 of the deblur circuit 970a. Per the discussion above, the latent frame computation block 972 can deblur the CIS data by combining/fusing the CIS data with accumulated EVS data stored in the second integration buffer 978 of the computation block 971a. In addition, the latent frame computation block 972 can correct the CIS data for rolling shutter distortion using the corresponding output from the exponential computation block 982 of the computation block 971a. More specifically, the latent frame computation block 972 can compute a latent image frame L(s) corresponding to time ts (the start of the exposure period) by performing the operation specified in Equation 24 above for each EVS pixel. Furthermore, the latent frame computation block 972 can, for each pixel, compute a latent image frame L(0) corresponding to time t0 (the start of the corresponding image frame, such as the start of the first exposure period for the corresponding image frame) using (i) the latent image frame L(s), (ii) the corresponding output from the exponential computation block 982, and/or (iii) the operation specified in Equation 26 above. The latent image frame L(0) can correspond to deblurred, rolling-shutter-distortion-corrected CIS data, and can be output from the latent frame computation block 972 to the application processor and/or an image signal processor of a corresponding imaging system.
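As an illustration only, the following sketch expresses the rolling-shutter-distortion correction for a single pixel, assuming that the latent value at the start of an exposure period relates to the frame-start latent value through the exponential of the events accumulated between time t0 and time ts; the exact operation is the one specified by Equation 26 above, and the function and parameter names are hypothetical.

    # Hypothetical sketch of per-pixel rolling-shutter-distortion correction
    # (not the verbatim Equation 26). exp_0_to_s is the output of the
    # exponential computation block 982, i.e., exp(Σ over [t0, ts] of ci·pi).
    def correct_rolling_shutter(L_s: float, exp_0_to_s: float) -> float:
        # Since L(s) = L(0) * exp(Σ over [t0, ts] of ci·pi), the frame-start
        # latent value follows as L(0) = L(s) / exp_0_to_s.
        return L_s / exp_0_to_s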
The deblur circuit 970a of
For example, a maximum framerate usable by a corresponding image sensor can be increased by starting a first exposure period for a second frame before an end of a last exposure period for a first frame. Such an arrangement, however, requires tracking both (a) events corresponding to a first image frame and (b) events corresponding to a consecutive image frame. Implementing a ping pong buffer into the rolling shutter distortion correction components of a deblur circuit can enable such functionality.
For example,
As shown in
The computation block 971b of the deblur circuit 970b further includes rolling shutter distortion correction components. In contrast with the rolling shutter distortion correction components of the computation block 971a of the deblur circuit 970a of
As shown, the rolling shutter distortion components of the computation block 971b also include an exponential computation block 982a (also referred to herein as a "second exponential computation block") and an exponential computation block 982b (also referred to herein as a "third exponential computation block"). In other embodiments, the rolling shutter distortion components can include a single (e.g., only one) instance of an exponential computation block 982. In such embodiments, the exponential computation block 982 can be positioned downstream of the multiplexer 984, such as between the multiplexer 984 and the latent frame computation block 972. Continuing with this example, the exponential computation block 982 can be configured to perform operations on integration results output from the integration buffer 980a or the integration buffer 980b via the multiplexer 984.
The product computation block 981a and the product computation block 981b can be generally similar to the product computation block 981 of the deblur circuit 970a of
In the illustrated embodiment, the product computation block 981a, the integration computation block 979a, the integration buffer 980a, and the exponential computation block 982a (collectively referred to herein as a “first set of rolling shutter distortion correction (RSDC) components”) can correspond to different frames from the product computation block 981b, the integration computation block 979b, the integration buffer 980b, and the exponential computation block 982b (collectively referred to herein as a “second set of RSDC components”). For example, the first set of RSDC components can be used to accumulate events detected during a first frame, and the second set of RSDC components can be used to accumulate events detected during a second, consecutive (or immediately adjacent) frame. Thereafter, the first set of RSDC components can be used to accumulate events detected during a third frame; the second set of RSDC components can be used to accumulate events detected during a fourth frame; and so on. Therefore, continuing with the above example, events detected by EVS pixels between time t0 and a start time ts of a corresponding exposure period for a first frame can be accumulated using the first set of RSDC components. In addition, events detected by EVS pixels between time t0 and a start time ts of a corresponding exposure period for a second frame can be accumulated using the second set of RSDC components.
Routing of events detected by EVS pixels to the appropriate set of RSDC components can be handled via the routing switch 983. More specifically, the routing switch 983 is configured to receive a control signal ping_pong. In some embodiments, the control signal ping_pong can be provided and/or controlled by a common control block of an image sensor corresponding to the deblur circuit 970b, such as the common control block 568 of the image sensor 530 of
The control signal ping_pong can be used to control into which product computation block (the product computation block 981a or the product computation block 981b) an event detected by an EVS pixel is routed via the routing switch 983. For example, between time t0 and time ts for a first frame, the control signal ping_pong can be transitioned or held in a first state (e.g., an asserted state, a high state, a “1” state). As such, events detected by EVS pixels between time t0 and a start time ts of a corresponding exposure period for the first frame can be routed to the product computation block 981a via the routing switch 983. Thereafter, between time t0 and time ts for a second frame, the control signal ping_pong can be transitioned or held in a second state (e.g., a de-asserted state, a low state, a “0” state). As a result, events detected by EVS pixels between time t0 and a start time ts of an exposure period of the second frame can be routed to the product computation block 981b via the routing switch 983. Events detected by an EVS pixel during an exposure period for CIS pixels corresponding to that EVS pixel can be routed to product computation block 975 of the EDI components of the computation block 971b, consistent with the description of the EDI components of the computation block 971a of the deblur circuit 970a above with reference to
Referring now to the multiplexer 984, the control signal ping_pong can be used to control which one of the inputs into the multiplexer 984 (e.g., which of the outputs of the exponential computation block 982a and the exponential computation block 982b) is output from the multiplexer 984 (e.g., to the latent frame computation block 972 and/or a downstream application processor). In the illustrated embodiment, when the control signal ping_pong is transitioned or held in the first state, the output of the exponential computation block 982b can be routed, via the multiplexer 984, to (i) a downstream application processor and (ii) the latent frame computation block 972. On the other hand, when the control signal ping_pong is transitioned or held in the second state, the output of the exponential computation block 982a can be routed, via the multiplexer 984, to (i) the downstream application processor and (ii) the latent frame computation block 972. Thus, as detected events are routed to the product computation block 981a via the routing switch 983, the output of the exponential computation block 982b can be passed to the downstream application processor and the latent frame computation block 972 via the multiplexer 984. In addition, as detected events are routed to the product computation block 981b via the routing switch 983, the output of the exponential computation block 982a can be passed to the downstream application processor and the latent frame computation block 972 via the multiplexer 984. In this manner, the ping pong buffer enables rolling shutter distortion correction of CIS data corresponding to two different frames that at least partially overlap in time.
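The following minimal sketch models the ping pong behavior described above in software, with hypothetical accumulator objects standing in for the two sets of RSDC components; it is illustrative only and does not describe the hardware implementation.

    # Hypothetical sketch of the ping pong routing: events are routed into one
    # set of RSDC components while the other set's exponential output is
    # selected by the multiplexer, and the roles flip at frame boundaries.
    class PingPongRsdc:
        def __init__(self, accum_a, accum_b):
            self.accums = {True: accum_a, False: accum_b}
            self.ping_pong = True  # True: route events to set A, select set B's output

        def route_event(self, polarity: int) -> None:
            # Routing switch 983: send the event to the currently active set.
            self.accums[self.ping_pong].on_event(polarity)

        def mux_output(self) -> float:
            # Multiplexer 984: pass the inactive set's exponential output
            # downstream (to the latent frame computation block / processor).
            return self.accums[not self.ping_pong].exponential()

        def toggle(self) -> None:
            # Flip at frame boundaries, e.g., under control of the common
            # control block that drives the ping_pong signal.
            self.ping_pong = not self.ping_pong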
The method 1000 begins at block 1001 by aligning CIS pixel data with corresponding EVS pixel data. In some embodiments, aligning CIS pixel data with corresponding EVS pixel data can be performed at least in part using a common control block of a corresponding image sensor. For example, the common control block can synchronize operations of row/column control circuitry and/or a deblur block of the image sensor, such as by using one or more control signals.
Aligning the CIS pixel data with the corresponding EVS pixel data at block 1001 can include aligning/synchronizing the timings of exposure period(s) of one or more rows of CIS pixels with event accumulation period(s) of one or more corresponding EVS pixels. In some embodiments, aligning the exposure period(s) with an event accumulation period can include aligning the exposure period(s) with one another and/or with the event accumulation period such that the exposure period(s) and the event accumulation period have a same start time ts and/or a same end time ts+T. For example, prior to the start of the exposure period(s) and the event accumulation period, CIS pixels of one or more CIS pixel rows can be reset at a same time as (a) one another and/or (b) one or more EVS pixels of one or more EVS pixel rows that correspond to the one or more CIS pixel rows. As a result, the exposure period(s) for the CIS pixels and the event accumulation period for the EVS pixel(s) can start at the same time as one another. In addition, assuming that the exposure period(s) and the event accumulation period have a same duration, aligning the start times of the exposure period(s) and the event accumulation period with one another can also align their stop times.
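As a simplified illustration of this alignment, the following sketch assumes a hypothetical row controller interface; it shows an EVS pixel row and its corresponding CIS pixel rows being reset together so that the event accumulation period and the exposure periods begin at the same time (and, with equal durations, end at the same time).

    # Hypothetical sketch of row-level alignment. The controller interface and
    # method names are assumptions made for illustration only.
    def align_rows(controller, evs_row: int, cis_rows: list[int]) -> None:
        controller.reset_evs_row(evs_row)        # reset the EVS pixels
        for row in cis_rows:
            controller.reset_cis_row(row)        # reset the corresponding CIS pixels
        controller.start_accumulation(evs_row)   # event accumulation period begins
        controller.start_exposure(cis_rows)      # exposure periods begin at the same time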
Furthermore, as discussed above with reference to
In addition, accumulated EVS pixel data can be output to and stored in one or more EVS frame buffers (e.g., of a downstream application processor) during and/or after integration periods for CIS pixels used to capture CIS pixel data. For example, results of the integration of the products stored in the first integration buffer and/or exponentials output from an exponential computation block can be provided to one or more EVS frame buffers. Thus, to ensure that accumulated EVS pixel data stored in the EVS frame buffer(s) corresponds to only events that relate to a given image frame, corresponding portions of the EVS frame buffer(s) can be reset before the aligned start time of the event accumulation period and the exposure period(s). Corresponding portions of CIS key frame buffer(s) (e.g., of a downstream application processor) may also be reset before the aligned start time of the event accumulation period and the exposure period(s). In some embodiments, the corresponding portions of the EVS frame buffer(s) and/or the CIS key frame buffer(s) can be reset at a same time as the EVS pixels and/or the corresponding CIS pixels.
For the sake of clarity and understanding of the alignment conducted at block 1001 of the method 1000, consider
CIS data captured by CIS pixels of the CIS pixel rows 4N, 4N−1, 4N−2, and 4N−3 is frame-based and is synchronously read out after each corresponding exposure period ends. In many active pixel sensors, CIS data is read out row-by-row. In such configurations, different exposure period start and stop times are often used for the different rows. For example, in many active pixel sensors, CIS pixels of CIS pixel row 4N often will have a first exposure period that starts and stops at different times from a second exposure period used for CIS pixels of CIS pixel row 4N−1. This can be problematic for event-guided deblur when the CIS pixel row 4N and the CIS pixel row 4N−1 correspond to a same EVS pixel row because the misalignment between the first exposure period and the second exposure period means that the start and/or stop times for an event accumulation period used for the corresponding EVS pixel row will be different from the start and/or stop times of the first exposure period and/or the second exposure period. As a result, EVS data captured by an EVS pixel of the EVS pixel row will be misaligned from CIS data captured by CIS pixels of CIS pixel row 4N and/or CIS pixel row 4N−1. Such misalignment can affect the accuracy and/or efficacy of event-guided deblur operations performed on the CIS data and/or may require additional memory/processing to align the CIS data with the EVS data post data capture and/or readout.
To address this concern, at block 1001 of the method 1000, the exposure periods of CIS pixel rows corresponding to a same EVS pixel row can be aligned with one another and with an event accumulation period of the EVS pixel row. For example, as shown in
Continuing with the above example, alignment between the exposure periods 1197 and the event accumulation period 1198 can be achieved by resetting the EVS pixels of the EVS pixel row N and the CIS pixels of the CIS pixel rows 4N, 4N−1, 4N−2, and 4N−3 at a same time and/or before the start time t0. For example, referring to
In addition, to ensure that deblur computations performed by a deblur circuit of a corresponding image sensor correspond to only the exposure periods 1197 of the CIS pixel rows 4N, 4N−1, 4N−2, and 4N−3 and the event accumulation period 1198 of the EVS pixel row N, a first integration buffer and/or a second integration buffer of the deblur circuit can be reset before the start time t0, such as (i) within the time period 1193 of
Referring again to
In some embodiments, EVS pixels can be selectively enabled to capture EVS data (e.g., selectively enabled to detect events) during a corresponding event accumulation period. For example, referring to
At block 1003, the method 1000 continues by reading out events detected by the EVS pixels of the event driven sensing array. Block 1003 can be performed while performing block 1002. For example, when an EVS pixel detects an event, the event can be read out from the EVS pixel. When an event detected by an EVS pixel is read out from the EVS pixel, the EVS pixel can be reset and thereby enabled to detect subsequent events.
A single EVS pixel may detect hundreds of events over a single event accumulation period. In some embodiments, each of these events can be read out of the EVS pixel and/or provided to a deblur circuit of the corresponding image sensor for accumulation. Therefore, for a single CIS frame, a relatively large amount of EVS data can be provided to the deblur circuit for accumulation and subsequent use in event-guided deblur of CIS data corresponding to the CIS frame.
Such a large amount of EVS data can, in some cases, complicate and/or slow down deblur computations performed by the deblur circuit of the corresponding image sensor, which may not be appropriate or acceptable for certain applications. Therefore, in some embodiments, EVS data can be read out of EVS pixels using a row-by-row scan readout. More specifically, the image sensor can scan/step through the event driven sensing array row-by-row and spend a uniform amount of time reading out each EVS pixel row. In this manner, the scan readout can limit a number of EVS readouts per CIS frame, which can simplify and/or speed up deblur computations performed by the deblur circuit.
For the sake of clarity and example, consider
To read out EVS data captured by the EVS pixels 00-57 of the event driven sensing array 1342, the image sensor can scan cyclically, row-by-row, through the rows Row_0-Row_5 of the event driven sensing array 1342, spending a same amount of time at each row to read out events from EVS pixels of that row. For example, although none of the EVS pixels 00-07 of the row Row_0 have detected events, the image sensor can spend a fixed/preset amount of time (e.g., 50 ns) at Row_0 before moving on to row Row_1. Then, at row Row_1, the image sensor can spend the same fixed amount of time (e.g., 50 ns) reading out events detected by the EVS pixels 10-17. In the illustrated example, EVS pixels 12, 13, and 15 in row Row_1 have detected events. Therefore, during the fixed/preset amount of time allocated for row Row_1, the image sensor can (i) read out the events detected by EVS pixels 12, 13, and 15, and/or (ii) reset EVS pixels 12, 13, and 15 such that they are enabled to detect subsequent events. At the end of the fixed/preset amount of time allocated for row Row_1, the image sensor can move on to row Row_2 of the event driven sensing array 1342 to read out events (if any) detected by EVS pixels 20-27 of row Row_2. The plot 1305 of
The row-by-row scan readout scheme described above therefore limits the total number of EVS readouts per CIS frame and keeps the time required to scan every row in the event driven sensing array constant through each scan cycle. For example, given (i) an event driven sensing array having 2,000 rows of EVS pixels and (ii) a 50 ns preset amount of time to read out each row of EVS pixels in the event driven sensing array, the scan readout can take 100 μs (2,000 EVS pixel rows × 50 ns) to scan one time through the entire event driven sensing array. Thus, assuming a CIS exposure period is 32 ms in duration, the scan readout can limit the number of EVS readouts per EVS pixel to 320 (32 ms divided by 100 μs) for each CIS readout. Stated another way, each CIS frame readout can correspond to a maximum number of 320 EVS readouts per EVS pixel. This can limit the amount of EVS data fed to the deblur circuit, which can simplify computations performed by the deblur circuit and/or speed up availability of final results of such computations.
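For illustration, the following sketch models the fixed-dwell, row-by-row scan readout in software, including an optional variation (discussed in the next paragraph) in which rows without detected events are skipped; the event driven sensing array interface and its method names are hypothetical.

    # Hypothetical sketch of one cycle of the cyclic row-by-row scan readout.
    # Each visited row is held for a fixed dwell time (e.g., 50 ns in the
    # example above) so the total scan time per cycle stays constant.
    import time

    def scan_readout(array, num_rows: int, dwell_s: float, skip_empty: bool = False):
        events = []
        for row in range(num_rows):
            if skip_empty and not array.row_has_events(row):
                continue  # variation: skip rows with no detected events
            deadline = time.monotonic() + dwell_s
            while time.monotonic() < deadline and array.row_has_events(row):
                events.append(array.read_next_event(row))  # readout also resets the pixel
            remaining = deadline - time.monotonic()
            if remaining > 0:
                time.sleep(remaining)  # hold until the deadline so each row takes equal time
        return events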
In the examples of the row-by-row scan readout described above, the image sensor spends a fixed/preset amount of time at each EVS pixel row reading out detected events (if any). As discussed above, this can keep the total time required to scan once through the entire event driven sensing array unchanged for each cycle of the scan readout. In other embodiments, the image sensor can skip EVS pixel rows in which no events have been detected. For example, referring again to
Referring again to
As discussed above, accumulating EVS data can include, for each EVS pixel, (i) multiplying a polarity of each event read out from the EVS pixel by a corresponding contrast threshold, (ii) performing, over a corresponding EVS accumulation period, a first integration of the products of the events by the contrast thresholds for the EVS pixel, and (iii) maintaining or storing the results of the first integration for the EVS pixel in a first integration buffer (e.g., of a deblur circuit of a hybrid image sensor). In addition, accumulating EVS data can include, for each EVS pixel, (a) determining exponentials of the results of the first integration for the EVS pixel output from the first integration buffer, (b) performing, over the corresponding EVS accumulation period, a second integration of the exponentials for the EVS pixel, and (c) maintaining or storing the results of the second integration for the EVS pixel in a second integration buffer (e.g., of a deblur circuit of a hybrid image sensor).
In some embodiments, such as at block 1004, block 1005, or block 1007 of the method 1000, the results of the first integration (e.g., stored in the first integration buffer) and/or the exponentials of the results of the first integration (e.g., output from an exponential computation block of a deblur circuit of a hybrid image sensor) can be output to a downstream application processor, such as to an EVS frame buffer of the application processor and/or for use in video frame interpolation computations. For example, the results of the first integration and/or the exponentials of the results of the first integration can be output to the downstream application processor and/or loaded into the EVS frame buffer at a timing corresponding to when the results of the first integration are stored in the first integration buffer and/or when the exponentials of the results of the first integration are computed. As another example, the results of the first integration and/or the exponentials of the results of the first integration can be output to the downstream application processor and/or loaded into the EVS frame buffer at specified timings, such as at ends of accumulation periods, at interpolation frame timing points, at ends of exposure periods, at starts of exposure periods, etc. The results of the first integration and/or the exponentials of the results of the first integration can be output to the downstream application processor and/or loaded into the EVS frame buffer row by row. Additionally, or alternatively, loading of (a) the results of the first integration (e.g., from the first integration buffer) and/or (b) the exponentials of the results of the first integration (e.g., from the exponential computation block) into the application processor and/or a corresponding EVS frame buffer can be gated or otherwise controlled via a trigger and a corresponding switch, as discussed in greater detail below.
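As a simplified illustration of such gating, the following sketch assumes a hypothetical loader object whose switch is enabled and disabled by a trigger, so that accumulated EVS data reaches the EVS frame buffer only during the intended window (e.g., row by row, between a start time and an interpolation frame timing point); the names are assumptions for illustration only.

    # Hypothetical sketch of trigger-gated loading of accumulated EVS data
    # into an EVS frame buffer of a downstream application processor.
    class GatedEvsFrameBufferLoader:
        def __init__(self):
            self.enabled = False
            self.frame_buffer = {}

        def trigger(self, enable: bool) -> None:
            # Fired, e.g., at the start of the window and again at the
            # interpolation frame timing point to close the switch.
            self.enabled = enable

        def stream(self, row: int, accumulated_row_data) -> None:
            # Row-by-row load; samples outside the window are not stored.
            if self.enabled:
                self.frame_buffer[row] = accumulated_row_data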
As discussed above, for each EVS pixel, EVS data is accumulated over a corresponding event accumulation period. For example, for each EVS pixel, the first integration buffer and the corresponding portion of the second integration buffer can be reset prior to a start of an event accumulation period of the EVS pixel to reset (i) results of a first integration stored in the first integration buffer and (ii) results of a second integration stored in the corresponding portion of the second integration buffer. Event accumulation can then be enabled at the start of the event accumulation period and thereafter disabled at the end of the event accumulation period such that results of the second integration stored in the corresponding portion of the second integration buffer at the end of the event accumulation period correspond to only events detected by the corresponding EVS pixel during the event accumulation period. In some embodiments, once event data has been accumulated, the corresponding raw event data can be discarded.
Referring to
At block 1005, the method 1000 continues by reading out CIS data at the end of the corresponding exposure period(s). In some embodiments, reading out the CIS data can include reading out the CIS data into the deblur circuit, such as into a latent frame computation block of the deblur circuit. In these and other embodiments, reading out the CIS data can include reading out the CIS data from CIS pixels in rows or groups of rows. For example, in embodiments in which multiple CIS pixel rows correspond to a same EVS pixel row, CIS data captured by CIS pixels of the multiple CIS pixel rows can be read out together/at the same time at or after the end of the corresponding exposure period. In some embodiments, reading out the CIS data can include skipping readout of one or more rows and/or columns of CIS pixels (e.g., to reduce resolution of the CIS data, to match a resolution of EVS data captured by EVS pixels of the event driven sensing array, and/or to reduce a mismatch between resolution of CIS data captured by CIS pixels and resolution of the EVS data captured by the EVS pixels of the event driven sensing array). In these and other embodiments, reading out the CIS data can include binning one or more rows and/or columns of CIS pixels (e.g., to reduce resolution of the CIS data, to match a resolution of EVS data captured by EVS pixels of the event driven sensing array, and/or to reduce a mismatch between resolution of CIS data captured by CIS pixels and resolution of the EVS data captured by the EVS pixels of the event driven sensing array). Additional details on (a) skipping readout of one or more rows and/or columns of CIS pixels and/or (b) binning one or more rows and/or columns of CIS pixels during readout are provided in the cofiled, copending, and coassigned application titled "METHODS FOR OPERATING HYBRID IMAGE SENSORS HAVING DIFFERENT CIS-TO-EVS RESOLUTIONS," which has been incorporated by reference herein in its entirety above.
Referring again to
At block 1006, the method 1000 continues by deblurring the CIS data (read out from CIS pixels at block 1005) using accumulated EVS data generated and stored in a corresponding portion of the second integration buffer at block 1004. In some embodiments, deblurring the CIS data can include combining the CIS data with the accumulated EVS data to compute one or more latent image frames, such as the latent image frame L(s) corresponding to the start time ts of the exposure periods of the CIS pixels. In some embodiments, combining the CIS data with the accumulated EVS data can include interpolating the EVS data to generate additional EVS data corresponding to additional rows and/or columns of EVS pixels (e.g., to increase resolution of the EVS data, to match a resolution of CIS data captured by CIS pixels of the CIS pixel array, and/or to reduce a mismatch between resolution of CIS data captured by CIS pixels and resolution of the EVS data captured by the EVS pixels of the event driven sensing array). Additional details on interpolating EVS data corresponding to additional rows and/or columns of EVS pixels are provided in the cofiled, copending, and coassigned application titled "METHODS FOR OPERATING HYBRID IMAGE SENSORS HAVING DIFFERENT CIS-TO-EVS RESOLUTIONS," which has been incorporated by reference herein in its entirety above.
At block 1007, the method 1000 continues by outputting deblurred image data. Outputting the deblurred image data can include outputting the deblurred image data from the image sensor to a downstream application processor. As a specific example, outputting the deblurred image data can include outputting the deblurred image data to a CIS key frame buffer and/or a video frame interpolation computation block of the downstream application processor. Additionally, or alternatively, outputting the deblurred image data can include outputting the deblurred image data to an image signal processor, such as for previewing the deblurred image data. In these and other embodiments, outputting the deblurred image data can include outputting the deblurred image data (e.g., the latent image frame L(s)) computed at block 1006, such as in addition to or in lieu of outputting raw CIS data that is read out from the CIS pixels at block 1005 and/or raw EVS data that is generated and read out from EVS pixels at block 1003.
The timing diagram 1195 of
Referring again to block 1005 of the method 1000, after reading out the CIS pixel data at block 1005 at the ends of the exposure periods, the method 1000 can additionally proceed to blocks 1008-1012 to collect additional EVS pixel data outside of the CIS exposure periods. More specifically, at block 1008, the method 1000 can continue by resetting EVS pixels and/or corresponding EVS integration buffers (e.g., of EDI components of a deblur circuit) at ends of corresponding CIS integration periods. As a specific example, referring again to
At blocks 1009-1011, the method 1000 of
Referring to EVS row N shown in
Accumulated EVS pixel data can, at block 1011 or block 1012 of the method 1000, be output to a downstream application processor, such as to an EVS frame buffer of an application processor. For example, the results of the first integration and/or the exponentials of the results of the first integration that are computed during the accumulation period 1199 of
Referring again to
At block 1012, the method 1000 continues by outputting accumulated EVS pixel data and deblurred image data to a video frame interpolation block. For example, during the accumulation period 1198 illustrated in
At block 1013, the method 1000 continues by computing one or more interpolated image/video frames. In some embodiments, computing an interpolated image/video frame can include computing an interpolated image/video frame for a given time t, such as by incrementing over all events starting from the latent image frame L(s) at time ts (corresponding to the start of each CIS integration period) to the given time t. More specifically, computing an interpolated image/video frame can include computing an interpolated image/video frame for a given time t using (a) the latent image frame L(s) output at block 1007, (b) all or a subset of the accumulated EVS pixel data output at block 1012, and (c) Equation 22 above.
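The following sketch illustrates block 1013 for a set of interpolation times, assuming each interpolated frame is obtained by scaling the per-pixel L(s) values by the exponential of the event sums accumulated from time ts to the corresponding time t (consistent with the relationship sketched earlier for a single pixel); the names and data layouts are hypothetical.

    # Hypothetical sketch of computing several interpolated frames.
    # L_s_frame: 2D list of per-pixel L(s) values.
    # event_sums_at_times: one 2D list per interpolation time, each holding
    # the per-pixel accumulated sum of ci·pi over [ts, t].
    import math

    def interpolate_frames(L_s_frame, event_sums_at_times):
        frames = []
        for sums in event_sums_at_times:
            frame = [[L_s_frame[r][c] * math.exp(sums[r][c])
                      for c in range(len(L_s_frame[0]))]
                     for r in range(len(L_s_frame))]
            frames.append(frame)  # one interpolated frame per requested time t
        return frames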
As a specific example, the method 1000 can compute a first interpolated image frame corresponding to time t12 shown in
Although the blocks 1001-1013 of the method 1000 are described and illustrated in a particular order, the method 1000 of
As a specific example, block 1008 of the method 1000 can be omitted in some embodiments. For example, for each EVS pixel row, the EVS pixels can continuously be enabled to detect events occurring in the external scene between time ts (representing a start time of an integration period for corresponding CIS pixels) and an interpolation frame timing point corresponding to an end of the associated accumulation period. Referring to EVS row N of
As another example, although interpolated image frames are described above at block 1013 as being latent image frames that each correspond to times that occur after CIS integration periods for corresponding CIS pixels have ended, the method 1000 is not so limited. For example, at block 1013, the method 1000 can compute one or more latent image frames that correspond to one or more times that occur while the CIS integration periods for corresponding CIS pixels are ongoing. In such embodiments, EVS pixel data accumulated at times occurring outside of the integration periods can go unused in the video frame interpolation calculations. Stated another way, video frame interpolation calculations that are used to generate one or more latent image frames corresponding to times that occur during the integration periods for corresponding CIS pixels can be based on (i) the latent image frame L(s) and (ii) all or a subset of the EVS pixel data accumulated during the integration periods, such as without considering EVS pixel data that is captured and accumulated after the CIS integration periods have ended. Specific examples of this are described below with reference to
As still another example, the method 1000 of
As shown, the method 1400 includes aligning CIS pixel data and EVS pixel data at a row-by-row level (block 1401); capturing CIS pixel data during corresponding exposure/integration periods and capturing EVS pixel data during corresponding accumulation periods aligned with the exposure/integration periods (block 1402); reading out the EVS pixel data from the EVS pixels (block 1403); accumulating the EVS pixel data, such as (i) using a deblur circuit and/or (ii) in one or more EVS integration buffers (e.g., of a hybrid image sensor) and/or an EVS frame buffer (e.g., of a downstream application processor) (block 1404); reading out the CIS pixel data at ends of the integration periods (block 1405); deblurring the CIS pixel data using the accumulated EVS pixel data to generate a latent image frame L(s) (block 1406); and outputting the latent image frame L(s) to (e.g., a CIS key frame buffer) of an application processor and/or to an image signal processor (block 1407). Blocks 1401-1407 can be identical or at least generally similar to blocks 1001-1007 of the method 1000 of
Referring again to block 1404, the method 1400 can proceed from block 1404 to block 1408 to output accumulated EVS pixel data. For example, during an accumulation period for EVS pixels, EVS pixel data can be captured by the EVS pixels, read out, and accumulated at blocks 1402-1404 of the method 1400 of
After (i) loading accumulated EVS pixel data corresponding to the interpolation frame time into the EVS frame buffer of the downstream application processor at block 1408 and (ii) accumulating event data at blocks 1402-1404 through an end of an accumulation period that aligns with an exposure period for corresponding CIS pixels, the method 1400 (at block 1408) can output the accumulated EVS pixel data from the EVS frame buffer to a video frame interpolation computation block (e.g., of an application processor). For example, accumulated EVS pixel data stored in the EVS frame buffer and corresponding to the interpolation frame time can be output to the video frame interpolation computation block at a timing corresponding to when deblurred image data is (a) output from a deblur circuit to a CIS key frame buffer (e.g., of the application processor) and/or (b) provided to the video frame interpolation computation block.
As a specific example, consider
At block 1408, EVS pixel data captured by EVS pixels of EVS row N and accumulated between time t0 and time t5 can be output and stored in an EVS frame buffer (e.g., of the application processor), such as (a) by streaming the accumulated EVS pixel data to the EVS frame buffer between time t0 and time t5 or (b) reading out the accumulated EVS pixel data (e.g., row by row) to the EVS frame buffer at or after time t5. As discussed above, in some embodiments, a trigger and corresponding switch can be used to control when accumulated EVS pixel data is stored to the EVS frame buffer. For example, a trigger can be fired to selectively enable a switch at time t0 such that accumulated EVS pixel data can be streamed from the first integration buffer or the exponential computation block into the EVS frame buffer. Then, at time t5 (corresponding to the interpolation frame timing point for EVS pixel row N in
As shown in
Referring back to
Although the blocks 1401-1409 of the method 1400 are described and illustrated in a particular order, the method 1400 of
As a specific example, blocks 1408 and 1409 can be repeated to generate more than one interpolated video/image frame. For example,
As another example, the method 1400 of
The method 1600 begins at block 1601 by resetting EVS pixels and corresponding portions of one or more EVS integration buffer(s), such as one or more EDI integration buffers and/or one or more RSDC integration buffers. In some embodiments, resetting the EVS pixels and the corresponding portions of the EVS integration buffer(s) can include resetting the EVS pixels and the corresponding portions of the EVS integration buffer(s) at a same time and/or within a time period that occurs before a start of an event accumulation period that precedes an integration period of corresponding CIS pixels. In these and other embodiments, corresponding portions of one or more EVS frame buffer(s) and/or of one or more CIS key frame buffer(s) (e.g., of a downstream application processor) can also be reset at block 1601, such as at a same timing as the EVS pixels and/or the corresponding portions of the EVS integration buffer(s). The resetting can be generally similar to the resetting performed at block 1001 of the method 1000 described above with reference to
Referring again to
Reading out the EVS data from the EVS pixels at block 1603 of the method 1600 can be performed subsequent to and/or while performing block 1602. For example, when an EVS pixel detects an event, the event can be read out from the EVS pixel. When the event is read out from the EVS pixel, the EVS pixel can be reset and thereby enabled to detect subsequent events. Reading out the EVS data can include reading out the EVS data to RSDC components of a computation block of a deblur circuit. Additionally, or alternatively, reading out the EVS data can include reading out the EVS data to EDI components of a computation block of a deblur circuit. In these and other embodiments, reading out the EVS data can include reading out the EVS data using a scan readout technique, such as the scan readout technique described in greater detail above with reference to
Accumulation performed at block 1604 of the method 1600 can include, for each EVS pixel, accumulating EVS pixel data read out from the EVS pixel during the corresponding accumulation period. The corresponding accumulation period (e.g., the accumulation period 1796 of
At block 1605, the method 1600 can continue by outputting accumulated EVS pixel data. For example, during an accumulation period for EVS pixels, EVS pixel data can be captured by the EVS pixels, read out, and accumulated at blocks 1602-1604 of the method 1600 of
Additionally, or alternatively, outputting the accumulated EVS pixel data at block 1605 can include outputting all or a subset of the accumulated EVS pixel data to a latent frame computation block of a deblur circuit, such as to perform rolling-shutter-distortion correction of corresponding CIS pixel data. In these and other embodiments, outputting the accumulated EVS pixel data can include outputting the accumulated EVS pixel data from one or more EVS frame buffers to a video frame interpolation block (e.g., of an application processor), such as at a timing corresponding to when CIS key frames are output to the video frame interpolation block. In some embodiments, the accumulated EVS pixel data can be output at block 1605 from the EDI integration buffer(s), the exponential computation block, the RSDC buffer(s), and/or the EVS frame buffers as part of a row-by-row readout of the accumulated EVS pixel data that occurs before a start of CIS integration (as shown by arrow 1705 in
As a specific example, referring again to
At blocks 1606-1613, the method 1600 continues by aligning CIS pixel data and EVS pixel data at a row-by-row level (block 1606); capturing CIS pixel data during corresponding exposure/integration periods and capturing EVS pixel data during corresponding accumulation periods aligned with the exposure/integration periods (block 1607); reading out the EVS pixel data from the EVS pixels (block 1608); accumulating the EVS pixel data, such as (i) using a deblur circuit and/or (ii) in one or more EVS integration buffers (e.g., of a hybrid image sensor) and/or an EVS frame buffer (e.g., of a downstream application processor) (block 1609); reading out the CIS pixel data at ends of the integration periods (block 1610); deblurring the CIS pixel data using the accumulated EVS pixel data to generate a latent image frame L(s) (block 1611); correcting the CIS pixel data for rolling-shutter distortion using all or a subset of the EVS pixel data accumulated between a start of the image frame and starts of corresponding integration periods to generate a latent image frame L(0) (block 1612); and outputting the latent image frame L(s) and/or the latent image frame L(0) to (e.g., a CIS key frame buffer) of an application processor and/or to an image signal processor (block 1613). Blocks 1606-1611 can be identical or at least generally similar to blocks 1001-1006 of the method 1000 of
Correcting the CIS data for rolling shutter distortion at block 1612 of the method 1600 can include correcting the CIS data for rolling shutter distortion using EVS data accumulated in corresponding portions of RSDC integration buffers at block 1604. More specifically, accumulated EVS data stored in an RSDC integration buffer of a deblur circuit between a start time t0 of the image frame and a start of a corresponding exposure period for the CIS pixels that are aligned with integration periods of the image frame, can be fed to an exponential computation block of the deblur circuit. In turn, the exponentials generated by the exponential computation block can be output to a latent frame computation block of the deblur circuit. Thereafter, the exponentials can be used to correct the CIS data (e.g., the CIS data captured by the CIS pixels at block 1607 and/or the deblurred CIS data from block 1611) for rolling shutter distortion, such as using Equation 26 above to determine a corresponding latent image frame L(0).
Outputting the deblurred and/or rolling-shutter-distortion-corrected image data at block 1613 of the method 1600 can include outputting one or more latent image frames (e.g., a latent image frame L(s) and/or a latent image frame L(0)) computed at block 1611 and/or block 1612, such as in addition to or in lieu of outputting raw CIS data that is read out from the CIS pixels at block 1610 and/or raw EVS data that is generated and read out from EVS pixels at blocks 1607 and 1608. Outputting the deblurred and/or rolling-shutter-distortion-corrected image data can include outputting the deblurred image data from an image sensor to a downstream application processor. As a specific example, outputting the deblurred and/or rolling-shutter-distortion-corrected image data can include outputting the deblurred and/or rolling-shutter-distortion-corrected image data to a CIS key frame buffer and/or a video frame interpolation computation block of the downstream application processor. In some embodiments, a trigger and a corresponding switch can be used to gate or control loading of the deblurred and/or rolling-shutter-distortion-corrected image data into the CIS key frame buffer. Additionally, or alternatively, outputting the deblurred and/or rolling-shutter-distortion-corrected image data can include outputting the deblurred and/or rolling-shutter-distortion-corrected image data to an image signal processor, such as for previewing the deblurred and/or rolling-shutter-distortion-corrected image data.
Referring again to block 1609, the method 1600 can proceed from block 1609 to block 1614 to output accumulated EVS pixel data. For example, during an accumulation period for EVS pixels, EVS pixel data can be captured by the EVS pixels, read out, and accumulated at blocks 1607-1609 of the method 1600 of
After (i) loading accumulated EVS pixel data corresponding to the interpolation frame time into the EVS frame buffer of the downstream application processor at block 1614 and (ii) accumulating event data at blocks 1607-1609 through an end of an accumulation period that aligns with an exposure period for corresponding CIS pixels, the method 1600 (at block 1614) can output the accumulated EVS pixel data from the EVS frame buffer to a video frame interpolation computation block (e.g., of an application processor). For example, accumulated EVS pixel data stored in the EVS frame buffer and corresponding to the interpolation frame time can be output to the video frame interpolation computation block at a timing corresponding to when deblurred image data is (a) output from a deblur circuit to a CIS key frame buffer (e.g., of the application processor) and/or (b) provided to the video frame interpolation computation block.
As a specific example, referring to
At block 1614, EVS pixel data captured by EVS pixels of EVS row N and accumulated between time t0 and time t5 can be output and stored in an EVS frame buffer (e.g., of the application processor), such as (a) by streaming the accumulated EVS pixel data to the EVS frame buffer between time t0 and time t5 or (b) reading out the accumulated EVS pixel data (e.g., row by row) to the EVS frame buffer at or after time t5. As discussed above, in some embodiments, a trigger and corresponding switch can be used to control when accumulated EVS pixel data is stored to the EVS frame buffer. For example, a trigger can be fired to selectively enable a switch at time t0 (corresponding to a start of the exposure period 1797 for CIS pixels corresponding to the EVS pixels of EVS row N) such that accumulated EVS pixel data can be streamed from the first integration buffer or the exponential computation block into the EVS frame buffer. Then, at time t5 (corresponding to the interpolation frame timing point for EVS pixel row N in
As shown in
Referring back to
Although blocks 1601-1615 of the method 1600 are described and illustrated in a particular order, the method 1600 of
As a specific example, blocks 1605, 1614, and/or 1615 can be repeated to generate more than one interpolated video/image frame. For example,
As another specific example, resetting the EVS pixels, corresponding portions of EDI integration buffers, corresponding portions of RSDC integration buffers, and/or corresponding portions of EVS frame buffers can be omitted from block 1606 of the method 1600 in some embodiments. For example, for each EVS pixel row, the EVS pixels can continuously be enabled to detect events occurring in the external scene between the start of the corresponding accumulation period preceding the integration period (e.g., the integration period 1797 of
As still another example, although interpolated image frames are described above with reference to
As yet another example, the method 1600 of
The video frame interpolation computation block 1845 can include a CIS/EVS synchronization block 1845a, an event-guided deblur block 1845b, and/or a video frame interpolation block 1845c. The CIS/EVS synchronization block 1845a can be configured to synchronize accumulated EVS data output from the second frame buffer 1844 with CIS key frames output from the first frame buffer 1843. After the accumulated EVS data is synchronized with the CIS key frames, the event-guided deblur block 1845b can be configured to deblur the CIS key frames using all or a subset of the accumulated EVS data. Furthermore, the video frame interpolation block 1845c can be configured to compute one or more interpolated video/image frames based at least in part on the accumulated EVS data and the CIS data of the key frames. As discussed above, the interpolated video/image frames can include one or more latent images calculated based on, for example, a latent image L(s), a latent image L(0), and/or accumulated EVS data.
In at least some embodiments in which the accumulated EVS data is synchronized with the CIS key frames upstream (e.g., by resetting EVS pixels and corresponding CIS pixels at a same time), at least a portion of the CIS/EVS synchronization performed by the CIS/EVS synchronization block 1845a can be skipped or omitted. Additionally, or alternatively, in at least some embodiments in which the CIS key frames are deblurred upstream (e.g., by a hybrid image sensor coupled to the system processor 1830) using accumulated EVS data, at least a portion of the deblur process performed by the event-guided deblur block 1845b can be skipped or omitted.
Although not shown in
The third frame buffer 1846 is configured to store N frames of CIS data output from the video frame interpolation computation block 1845, where N represents the interpolation ratio. Thus, when N is four, the third frame buffer 1846 can be configured to store four frames of CIS data output from the video frame interpolation computation block 1845. In some embodiments, the four frames of CIS data can include at least one CIS key frame (e.g., a latent image L(s) and/or a latent image L(0)) and up to three interpolated video/image frames (e.g., three latent images L(t), such as corresponding to three different times). In other embodiments, the four frames of CIS data can include up to four interpolated video/image frames (e.g., four latent images L(t), such as corresponding to four different times).
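As an illustration only, the following sketch models the third frame buffer 1846 as a cyclic buffer whose depth equals the interpolation ratio N; the class and method names are hypothetical.

    # Hypothetical sketch of an N-deep cyclic frame buffer (e.g., N = 4 can
    # hold one key frame plus up to three interpolated frames, or up to four
    # interpolated frames).
    from collections import deque

    class InterpolationFrameBuffer:
        def __init__(self, interpolation_ratio: int):
            self.frames = deque(maxlen=interpolation_ratio)

        def push(self, frame) -> None:
            self.frames.append(frame)  # oldest frame is evicted when the buffer is full

        def drain(self):
            out = list(self.frames)
            self.frames.clear()
            return out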
As shown in
In the illustrated embodiment, the system processor 1830 further includes a first trigger check block 1842a and a second trigger check block 1842b that are each responsive to a trigger 1841. In some embodiments, the trigger 1841 can be used to control when CIS data 1821 and EVS data 1871 are provided to and/or loaded in the first frame buffer 1843 and the second frame buffer 1844, respectively. For example, a preprocessor (not shown) can perform data analysis, such as on EVS data captured by EVS pixels of an upstream hybrid image sensor (not shown), to identify when motion has occurred within an external scene. In response to identifying motion in the external scene, the trigger 1841 can enable the first trigger check block 1842a (e.g., a first switch or a first multiplexer) and the second trigger check block 1842b (e.g., a second switch or a second multiplexer) to pass CIS data 1821 to the first frame buffer 1843 and EVS data 1822 to the second frame buffer 1844, respectively.
As another example, the trigger 1841 can be fired or activated at specified timings. For example, the trigger 1841 can be fired to selectively enable the first trigger check block 1842a at a timing corresponding to when a corresponding deblur circuit outputs deblurred image data (e.g., a latent image frame L(s)) and/or deblurred and rolling-shutter-distortion-corrected image data (e.g., a latent image frame L(0)) to the system processor 1830 for storage in the first frame buffer 1843. Additionally, or alternatively, the trigger 1841 can be fired to selectively enable the second trigger check block 1842b at timings corresponding to when accumulated EVS pixel data used in video frame interpolation computations should be loaded into the second frame buffer 1844, such as at starts of exposure periods, ends of exposure periods, starts of integration periods, ends of integration periods, ends of reset periods, interpolation frame timings, etc. In other words, the trigger 1841 and the corresponding first trigger check block 1842a and second trigger check block 1842b can be used to selectively enable loading of CIS data and EVS data into the first frame buffer 1843 and the second frame buffer 1844, respectively, at given times and/or for given periods of time.
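The gating behavior of the trigger and the two trigger check blocks could be modeled, very roughly, as in the sketch below. The class and function names, and the simple "fire on motion" policy, are placeholders rather than the actual preprocessor or switch logic of any embodiment.

```python
class TriggerCheckBlock:
    """Illustrative switch/multiplexer that passes data only while enabled by a trigger."""

    def __init__(self):
        self.enabled = False

    def pass_data(self, data, frame_buffer):
        # Data presented while the block is disabled is simply not loaded.
        if self.enabled:
            frame_buffer.append(data)


def fire_trigger(trigger_check_blocks):
    """Hypothetical trigger: enables both check blocks, e.g., when motion is identified."""
    for block in trigger_check_blocks:
        block.enabled = True


cis_check, evs_check = TriggerCheckBlock(), TriggerCheckBlock()
cis_frame_buffer, evs_frame_buffer = [], []

cis_check.pass_data("CIS frame (ignored, trigger not yet fired)", cis_frame_buffer)
fire_trigger([cis_check, evs_check])
cis_check.pass_data("CIS key frame", cis_frame_buffer)
evs_check.pass_data("accumulated EVS data", evs_frame_buffer)
print(cis_frame_buffer, evs_frame_buffer)
```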
As shown in
As shown, CIS data 2021 can be input into the VFI pipeline 2050 via a multiplexer 2042 and stored in a first frame buffer 2053 (e.g., a cyclic buffer). As discussed in greater detail below, the multiplexer 2042 (or switch) is controllable using a trigger 2041 of the VFI pipeline 2050. Additionally, or alternatively, the CIS data 2021 can be provided to a preview ISP 2062 for previewing the CIS data 2021. In some embodiments, the CIS data 2021 includes raw CIS data. In other embodiments, the CIS data 2021 includes deblurred and/or rolling-shutter-distortion-corrected CIS data, such as one or more latent image frames (e.g., a latent image frame L(s) and/or a latent image frame L(0)). For example, an upstream deblur circuit (not shown), such as of an upstream hybrid image sensor (not shown) coupled to the VFI pipeline 2050, can be configured to deblur CIS data and/or correct CIS data for rolling shutter distortion and thereafter output the CIS data 2021 to the VFI pipeline 2050.
EVS data 2022 can be input into the VFI pipeline 2050 and stored to a second frame buffer 2054 (e.g., a cyclic buffer). The EVS data 2022 can include raw EVS data. Additionally, or alternatively, the EVS data 2022 can include accumulated EVS data accumulated by (e.g., a deblur circuit of) an upstream hybrid image sensor (not shown).
EVS data 2022 stored to the second frame buffer 2054 can be output to a pre-processor block 2055 configured to pre-process events in the EVS data 2022 for further processing in the VFI pipeline 2050. As shown, the pre-processor block 2055 includes an activity monitor block 2055a, a decoder block 2055b, and a denoiser block 2055c. The denoiser block 2055c can be configured to denoise the EVS data 2022, and the decoder block 2055b can be configured to decode the EVS data 2022 for interpretation by the activity monitor block 2055a and/or other components of the VFI pipeline 2050. In embodiments in which events in the EVS data 2022 are not encoded, the decoder block 2055b can be omitted. The activity monitor block 2055a is configured to analyze the EVS data to identify motion in an external scene. When motion is identified by the activity monitor block 2055a in the EVS data 2022, the pre-processor block 2055 can activate the trigger 2041 to enable the multiplexer 2042 to allow CIS data 2021 to pass to the first frame buffer 2053. The CIS data 2021 and the EVS data 2022 can be buffered in the first frame buffer 2053 and a third frame buffer 2056, respectively, for synchronization.
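The pre-processing order described above (denoising, decoding, activity monitoring, and trigger activation) could be sketched as follows. The trivial event-count activity test and all names are assumptions made for this example only and do not reflect the actual activity monitor logic.

```python
def denoise(events):
    # Placeholder denoiser: drop events flagged as noise.
    return [event for event in events if not event.get("noise", False)]

def decode(events):
    # Placeholder decoder: pass through already-decoded events unchanged.
    return events

def activity_monitor(events, activity_threshold):
    # Very rough motion test: enough events in the window suggests scene motion.
    return len(events) >= activity_threshold

def preprocess_and_maybe_trigger(raw_events, fire_trigger, activity_threshold=3):
    events = decode(denoise(raw_events))
    if activity_monitor(events, activity_threshold):
        fire_trigger()  # enables the multiplexer so CIS data reaches the first frame buffer
    return events

trigger_state = {"fired": False}
preprocess_and_maybe_trigger(
    [{"x": 1, "y": 2, "p": +1} for _ in range(5)],
    fire_trigger=lambda: trigger_state.update(fired=True),
)
print(trigger_state)
```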
In some embodiments, the trigger 2041 can be automatically triggered when motion is identified in the external scene. In these and other embodiments, the trigger 2041 can be manually triggered, such as in response to motion being identified in the external scene and/or independent of motion identified in the external scene. In these and still other embodiments, the trigger 2041 can be triggered based on a timer (e.g., after a preset duration has elapsed), such as in response to motion being identified in the external scene and/or independent of motion identified in the external scene.
As another example, the trigger 2041 can be fired or activated at specified timings. For example, the trigger 2041 can be fired to selectively enable the multiplexer 2042 at a timing corresponding to when a corresponding deblur circuit outputs deblurred image data (e.g., a latent image frame L(s)) and/or deblurred and rolling-shutter-distortion-corrected image data (e.g., a latent image frame L(0)) to the processor(s) 2057 for storage in the first frame buffer 2053. Additionally, or alternatively, the pre-processor block 2055 can be used to gate or control when EVS data and/or accumulated EVS data stored in the second frame buffer 2054 is loaded into the third frame buffer 2056, such as at starts of exposure periods, ends of exposure periods, starts of integration periods, ends of integration periods, ends of reset periods, interpolation frame timings, etc. In other words, the trigger 2041, the multiplexer 2042, and/or the pre-processor block 2055 can be used to selectively enable loading of CIS data and EVS data into the first frame buffer 2053 and the third frame buffer 2056, respectively, at given times and/or for given periods of time.
CIS data 2021 stored in the first frame buffer 2053 and EVS data 2022 pre-processed by the pre-processor block 2055 and stored to the third frame buffer 2056 can be output to one or more processors 2057 (e.g., one or more CPUs, GPUs, NPUs, and/or DSPs) of the VFI pipeline 2050. As shown, the processor(s) 2057 include an EVS/CIS synchronization block 2057a, a contrast threshold calibration block 2057b or circuit, a deblurring and/or rolling-shutter-distortion-correction block 2057c, and/or a video frame interpolation block 2057d. The EVS/CIS synchronization block 2057a can be configured to synchronize CIS data 2021 output from the first frame buffer 2053 with corresponding EVS data 2022 output from the third frame buffer 2056. The contrast threshold calibration block 2057b is configured to dynamically or periodically set or adjust the contrast threshold used by EVS pixels and deblur circuits in accordance with the discussion above. The deblurring and/or rolling-shutter-distortion-correction block 2057c is configured to use the EVS data 2022 to deblur the CIS data 2021 and/or correct the CIS data 2021 for rolling-shutter distortion, such as to generate a latent image frame L(s) and/or a latent image frame L(0). The video frame interpolation block 2057d is configured to interpolate one or more additional video/image frames using the deblurred and/or rolling-shutter-distortion-corrected CIS data 2021 and all or a subset of the EVS data 2022. As discussed above, the interpolated video/image frames can be used to generate slow-motion videos.
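One common way to realize the deblur step attributed to block 2057c is the well-known event-based double-integral relation, in which a blurred exposure B averaged over [0, T] and the cumulative event sums E(0, t) recover the latent frame as L(0) ≈ B / mean_t[exp(c · E(0, t))]. Whether blocks 2057b-2057d use exactly this formulation is not stated above; the discretization and the names in the sketch below are assumptions for illustration only.

```python
import numpy as np

def event_guided_deblur(blurred_frame, event_sums_over_time, contrast_threshold):
    """Sketch of event-guided deblurring of a blurred CIS frame.

    blurred_frame:        2-D array, the blurred CIS exposure averaged over [0, T]
    event_sums_over_time: 3-D array (T_steps, H, W) of cumulative signed event sums E(0, t)
    contrast_threshold:   contrast threshold c shared with the EVS pixels
    Returns an estimate of the latent image L(0) at the start of exposure.
    """
    # Discrete approximation of (1/T) * integral over the exposure of exp(c * E(0, t)) dt.
    mean_exposure_gain = np.mean(np.exp(contrast_threshold * event_sums_over_time), axis=0)
    return blurred_frame / mean_exposure_gain

# Example with a 2x2 frame and three time steps of cumulative event sums.
blurred = np.full((2, 2), 120.0)
cumulative_events = np.stack([np.zeros((2, 2)), np.ones((2, 2)), 2 * np.ones((2, 2))])
print(event_guided_deblur(blurred, cumulative_events, contrast_threshold=0.2))
```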
Interpolated video/image frames can be output from the processor(s) 2057 (e.g., from the video frame interpolation block 2057d) to a fourth frame buffer 2058. In some embodiments, the fourth frame buffer 2058 can be a ping-pong buffer that enables reading out one interpolated video/image frame to ISP components 2052 while another video/image frame is being interpolated. The ISP components 2052 can be configured to output interpolated video/image frames to an MPEG encoder, which in turn can be configured to provide encoded, interpolated video/image frames to memory for storage.
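A ping-pong (double) buffer of the kind mentioned above can be sketched roughly as follows: one slot receives the frame currently being interpolated while the other slot is read out (e.g., by downstream ISP components). The class below is a minimal illustration, not a description of the fourth frame buffer 2058 itself.

```python
class PingPongBuffer:
    """Illustrative two-slot buffer: write into one slot while reading the other."""

    def __init__(self):
        self.slots = [None, None]
        self.write_index = 0  # slot receiving the frame currently being interpolated

    def write(self, frame):
        self.slots[self.write_index] = frame

    def swap(self):
        # Swap roles once the interpolated frame is complete.
        self.write_index ^= 1

    def read(self):
        # Read the slot not currently being written (e.g., for readout to the ISP).
        return self.slots[self.write_index ^ 1]

buffer = PingPongBuffer()
buffer.write("interpolated frame L(t1)")
buffer.swap()
buffer.write("interpolated frame L(t2)")  # written while L(t1) is available for readout
print(buffer.read())
```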
The VFI pipeline 2050 can include or be embodied by various components of an imaging system. In some embodiments, the VFI pipeline 2050 can be embodied by an image sensor, such as a hybrid image sensor that includes a deblur circuit. In such embodiments, the image sensor can include on-chip deblur capabilities, on-chip rolling-shutter-distortion-correction capabilities, on-chip contrast threshold calibration capabilities, and/or on-chip video frame interpolation capabilities. In other embodiments, the VFI pipeline 2050 can be embodied by an off-chip processor, such as an application processor positioned downstream from one or more image sensors. In such embodiments, the imaging system can include off-chip deblur capabilities, off-chip rolling-shutter-distortion-correction capabilities, off-chip contrast threshold calibration capabilities, and/or off-chip video frame interpolation capabilities. In still other embodiments, the VFI pipeline 2050 can be embodied in part by one or more image sensors (e.g., a hybrid image sensor) and in part by an off-chip processor (e.g., an application processor downstream from the hybrid image sensor). In such embodiments, all or a subset of the deblur, rolling-shutter-distortion-correction, contrast threshold calibration, and/or video frame interpolation processes can be performed on-chip, and the remaining processes (if any) can be performed off-chip.
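To make the partitioning options above concrete, a hypothetical configuration assigning each processing stage to on-chip or off-chip execution might look like the sketch below; the stage names and the particular split shown are examples of one possible mapping, not a required one.

```python
from enum import Enum

class Location(Enum):
    ON_CHIP = "image sensor"            # e.g., a hybrid image sensor with a deblur circuit
    OFF_CHIP = "application processor"  # e.g., a downstream off-chip processor

# One of many possible partitions of the VFI pipeline stages (illustrative only).
example_partition = {
    "contrast_threshold_calibration": Location.ON_CHIP,
    "event_guided_deblur": Location.ON_CHIP,
    "rolling_shutter_distortion_correction": Location.ON_CHIP,
    "video_frame_interpolation": Location.OFF_CHIP,
}

for stage, location in example_partition.items():
    print(f"{stage}: {location.value}")
```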
In comparison to the imaging system 320 of
The above detailed descriptions of embodiments of the technology are not intended to be exhaustive or to limit the technology to the precise form disclosed above. Although specific embodiments of, and examples for, the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology as those skilled in the relevant art will recognize. For example, although steps are presented in a given order above, alternative embodiments may perform steps in a different order. Furthermore, the various embodiments described herein may also be combined to provide further embodiments.
From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. To the extent any material incorporated herein by reference conflicts with the present disclosure, the present disclosure controls. Where context permits, singular or plural terms may also include the plural or singular term, respectively. In addition, unless the word “or” is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. Furthermore, as used herein, the phrase “and/or” as in “A and/or B” refers to A alone, B alone, and both A and B. Additionally, the terms “comprising,” “including,” “having,” and “with” are used throughout to mean including at least the recited feature(s) such that any greater number of the same features and/or additional types of other features are not precluded. Moreover, as used herein, the phrases “based on,” “depends on,” “as a result of,” and “in response to” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on” or the phrase “based at least partially on.” Also, the terms “connect” and “couple” are used interchangeably herein and refer to both direct and indirect connections or couplings. For example, where the context permits, element A “connected” or “coupled” to element B can refer (i) to A directly “connected” or directly “coupled” to B and/or (ii) to A indirectly “connected” or indirectly “coupled” to B.
From the foregoing, it will also be appreciated that various modifications may be made without deviating from the disclosure or the technology. For example, one of ordinary skill in the art will understand that various components of the technology can be further divided into subcomponents, or that various components and functions of the technology may be combined and integrated. In addition, certain aspects of the technology described in the context of particular embodiments may also be combined or eliminated in other embodiments. Furthermore, although advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
The present application claims the benefit of U.S. Provisional Patent Application No. 63/597,638, filed Nov. 9, 2023, which is incorporated by reference herein in its entirety. This application contains subject matter related to cofiled, copending, and coassigned U.S. patent application Ser. No. 18/938,208, filed Nov. 5, 2024, and titled “HYBRID IMAGE SENSORS WITH ON-CHIP IMAGE DEBLUR,” which is incorporated herein by reference in its entirety. This application contains subject matter related to cofiled, copending, and coassigned U.S. patent application Ser. No. 18/938,184, filed Nov. 5, 2024, and titled “HYBRID IMAGE SENSORS WITH ON-CHIP IMAGE DEBLUR AND ROLLING SHUTTER DISTORTION CORRECTION,” which is incorporated herein by reference in its entirety. This application contains subject matter related to cofiled, copending, and coassigned U.S. patent application Ser. No. 18/938,125, filed Nov. 5, 2024, and titled “METHODS FOR OPERATING HYBRID IMAGE SENSORS HAVING DIFFERENT CIS-TO-EVS RESOLUTIONS,” which is incorporated herein by reference in its entirety. This application contains subject matter related to cofiled, copending, and coassigned U.S. patent application Ser. No. 18/938,080, filed Nov. 5, 2024, and titled “HYBRID IMAGE SENSORS WITH VIDEO FRAME INTERPOLATION,” which is incorporated herein by reference in its entirety.