The disclosure relates generally to image sensors, and more specifically to pixel cell structure including interfacing circuitries for determining light intensity for image generation.
A typical image sensor includes a photodiode to measure the intensity of incident light by converting photons into charge (e.g., electrons or holes), and a charge storage device to store the charge. To reduce image distortion, a global shutter operation can be performed in which each photodiode of the array of photodiodes senses the incident light simultaneously in a global exposure period to generate charge. The charge generated by the array of photodiodes can then be quantized by an analog-to-digital converter (ADC) into digital values to generate the image. One important performance metric of an image sensor is global shutter efficiency (GSE), which measures how much of the charge stored in the charge storage device is contributed by parasitic light received outside the global exposure period.
The present disclosure relates to image sensors. More specifically, and without limitation, this disclosure relates to a pixel cell. This disclosure also relates to operating the circuitries of pixel cells to generate a digital representation of the intensity of incident light.
In one example, an apparatus is provided. The apparatus comprises: a photodiode, a charge sensing unit including a first charge storage device and a second charge storage device, an analog-to-digital converter (ADC), and a controller. The controller is configured to, within an exposure period: enable the photodiode to, in response to incident light, accumulate residual charge and transfer overflow charge to the first charge storage device and the second charge storage device when the photodiode saturates. The controller is also configured to, after the exposure period ends: disconnect the second charge storage device from the first charge storage device, transfer the residual charge to the first charge storage device to cause the charge sensing unit to generate a first voltage, control the ADC to quantize the first voltage to generate a first digital value to measure the residual charge, connect the first charge storage device with the second charge storage device to cause the charge sensing unit to generate a second voltage, control the ADC to quantize the second voltage to generate a second digital value to measure the overflow charge, and generate a digital representation of an intensity of the incident light based on the first digital value, the second digital value, and whether the photodiode saturates.
In some aspects, the apparatus further comprises: a first switch coupled between the photodiode and the first charge storage device, the first switch controllable by the controller to transfer the residual charge or the overflow charge from the photodiode; and a second switch controllable by the controller to connect the first charge storage device and the second charge storage device in parallel, or to disconnect the second charge storage device from the first charge storage device.
In some aspects, the controller is configured to, during the exposure period: control the second switch to connect the first charge storage device and the second charge storage device in parallel; and control the first switch to enable the photodiode to transfer the overflow charge to the first charge storage device and the second charge storage device connected in parallel if the photodiode saturates.
In some aspects, the controller is configured to: control the second switch to disconnect the second charge storage device from the first charge storage device and from the photodiode to enable the first charge storage device to store a first portion of the overflow charge and the second charge storage device to store a second portion of the overflow charge; and control the first switch to transfer the residual charge to the first charge storage device to cause the charge sensing unit to generate the first voltage.
In some aspects, the first voltage is based on a quantity of the residual charge and a quantity of the first portion of the overflow charge and a capacitance of the first charge storage device.
In some aspects, the second voltage is based on the quantity of the residual charge, the quantity of the overflow charge, and a total capacitance of the first charge storage device and the second charge storage device.
In some aspects, the controller is configured to empty the first charge storage device prior to transferring the residual charge to the first charge storage device. The first voltage is based on a quantity of the residual charge and a capacitance of the first charge storage device.
In some aspects, the second voltage is based on the quantity of the residual charge, a quantity of the second portion of the overflow charge, and a total capacitance of the first charge storage device and the second charge storage device.
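As a purely illustrative sketch of the voltage relations in the aspects above (all capacitance and charge values below are hypothetical, in arbitrary units):

```python
# Hypothetical capacitances of the first and second charge storage devices.
C1, C2 = 1.0, 3.0
q_residual = 0.5            # residual charge transferred from the photodiode
q_ovf1, q_ovf2 = 0.2, 0.6   # portions of the overflow charge on C1 and C2

# Case 1: the first charge storage device is not emptied, so the first
# voltage reflects the residual charge plus the first overflow portion.
v1 = (q_residual + q_ovf1) / C1
v2 = (q_residual + q_ovf1 + q_ovf2) / (C1 + C2)

# Case 2: the first charge storage device is emptied before the residual
# charge transfer, discarding the first overflow portion.
v1_reset = q_residual / C1
v2_reset = (q_residual + q_ovf2) / (C1 + C2)
```

In both cases the second voltage is produced after the two devices are reconnected, so it is referenced to their total capacitance.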
In some aspects, the controller is configured to: generate a first digital representation of the residual charge and a second digital representation of the overflow charge based on the first digital value, the second digital value, and capacitances of the first charge storage device and of the second charge storage device; and based on whether the photodiode saturates, generate the digital representation of the intensity of the incident light based on the first digital representation of the residual charge or the second digital representation of the overflow charge.
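A minimal sketch of this selection logic (the function name and values are hypothetical; it assumes the first charge storage device is emptied before the residual-charge transfer, and that overflow charge divides between the parallel devices in proportion to capacitance):

```python
def measure_intensity(d1, d2, saturated, C1=1.0, C2=3.0):
    """Pick the residual- or overflow-based measurement for one pixel cell.

    d1, d2: digital values from the first and second quantizations, taken
    here as the quantized first and second voltages (hypothetical units).
    """
    q_residual = d1 * C1                      # digital representation of residual charge
    q_ovf2 = d2 * (C1 + C2) - q_residual      # second portion of the overflow charge
    # Parallel devices held at a common voltage split charge by capacitance,
    # so the total overflow can be estimated from the second portion alone.
    q_overflow = q_ovf2 * (C1 + C2) / C2
    # A saturated photodiode caps the residual charge, so the overflow
    # measurement represents the intensity for bright scenes instead.
    return q_overflow if saturated else q_residual
```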
In some aspects, the apparatus further comprises a memory and a counter. The ADC further comprises a comparator configured to: compare the first voltage against a first ramping voltage to output a first decision, obtain the first digital value from the counter based on the first decision, compare the second voltage against a second ramping voltage to output a second decision, obtain the second digital value from the counter based on the second decision, and store both the first digital value and the second digital value in the memory.
In some aspects, the controller is configured to reset the comparator and the first and second charge storage devices simultaneously prior to comparing the first voltage against the first ramping voltage.
In some aspects, the controller is configured to reset the comparator, and then reset the first and second charge storage devices when the comparator is out of reset, prior to comparing the second voltage against the second ramping voltage.
In some aspects, the first ramping voltage and the second ramping voltage have opposite ramping directions.
In some aspects, a polarity of comparison of the comparator is inverted in the generation of the second digital value with respect to the generation of the first digital value.
In some aspects, the apparatus further comprises an input multiplexor controllable to swap inputs to a first input terminal and a second input terminal of the comparator between the generation of the first digital value and the generation of the second digital value to invert the polarity of the comparison.
In some aspects, the apparatus further comprises an output multiplexor controllable to invert an output of the comparator between the generation of the first digital value and the generation of the second digital value to invert the polarity of the comparison.
In one example, a method is provided. The method comprises: within an exposure period, enabling a photodiode to, in response to incident light, accumulate residual charge, and to transfer overflow charge to a first charge storage device and a second charge storage device when the photodiode saturates; disconnecting the second charge storage device from the first charge storage device; enabling the photodiode to transfer the residual charge to the first charge storage device to cause a charge sensing unit including the first charge storage device and the second charge storage device to output a first voltage; quantizing the first voltage to generate a first digital value to measure the residual charge; connecting the second charge storage device with the first charge storage device to cause the charge sensing unit to output a second voltage; quantizing the second voltage to generate a second digital value to measure the overflow charge; and generating a digital representation of the incident light intensity based on the first digital value and the second digital value.
In some aspects, the method further comprises: connecting the first charge storage device and the second charge storage device in parallel to receive the overflow charge from the photodiode; and disconnecting the first charge storage device from the second charge storage device such that the first charge storage device stores a first portion of the overflow charge and the second charge storage device stores a second portion of the overflow charge.
In some aspects, the residual charge combines with the first portion of the overflow charge to generate the first voltage.
In some aspects, the method further comprises emptying the first charge storage device prior to transferring the residual charge to the first charge storage device. The residual charge is stored at the emptied first charge storage device to generate the first voltage.
Illustrative examples are described with reference to the following figures.
The figures depict examples of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative examples of the structures and methods illustrated may be employed without departing from the principles, or benefits touted, of this disclosure.
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain inventive examples. However, it will be apparent that various examples may be practiced without these specific details. The figures and description are not intended to be restrictive.
A typical image sensor includes an array of pixel cells. Each pixel cell includes a photodiode to measure the intensity of incident light by converting photons into charge (e.g., electrons or holes). The charge generated by the photodiode can be temporarily stored in a charge storage device, such as a floating drain node, within the pixel cell. The charge stored at each pixel cell can be quantized by an analog-to-digital converter (ADC) into digital values. The ADC can quantize the charge by, for example, using a comparator to compare a voltage representing the charge with one or more quantization levels, and a digital value can be generated based on the comparison result. The digital values from the pixel cells can then be stored in a memory to generate an image.
Due to power and chip area limitations, the ADC and the memory are typically shared by at least some of the pixel cells, instead of providing a dedicated ADC and memory to each pixel cell. To accommodate this sharing, a rolling shutter operation can be performed. For example, the array of pixel cells can be divided into multiple groups (e.g., rows or columns of pixel cells), with the pixel cells of each group sharing an ADC and a memory. In the rolling shutter operation, each pixel cell within the group takes a turn to be exposed to incident light to generate charge, followed by accessing the ADC to quantize the charge into a digital value, and storing the digital value into the memory. As the rolling shutter operation exposes different pixel cells to incident light at different times, an image generated from the rolling shutter operation can experience distortion, especially for images of a moving object and/or images captured when the image sensor is moving. The potential distortion makes the rolling shutter operation unsuitable for augmented reality/mixed reality/virtual reality (AR/MR/VR) applications, wearable applications, etc., in which the image sensor can be part of a headset and can be in motion when capturing images.
To reduce image distortion, a global shutter operation can be performed in which each pixel cell of the array of pixel cells is exposed to incident light to generate charge simultaneously within a global shutter period (or a global integration period). Each pixel cell can also include a charge storage device to temporarily store charge generated by the photodiode within the global exposure period. The charge stored in the charge storage device can be quantized to generate a digital value for the each pixel cell. The digital values of the pixel cells can represent a distribution of intensities of the incident light received by the pixel cells within the global shutter period.
One important performance metric for a global shutter operation is global shutter efficiency (GSE), which measures how much of the charge stored in the charge storage device and quantized by the ADC is contributed by parasitic light that is not the object of the intensity measurement. One source of parasitic light can be non-uniform exposure periods caused by, for example, non-uniform exposure start times and/or end times. For example, in a multi-stage readout and quantization scheme, a first stage readout and quantization operation may be performed when the photodiode is still transferring charge to the charge storage device, followed by a second stage readout and quantization operation after the charge transfer stops. As a result, the exposure periods for the two stages of readout and quantization operations may have different end times. The non-uniform exposure periods can lead to different pixel cells generating pixel data based on light detected within different time periods rather than within the same global exposure period. This can introduce motion blur when imaging a bright, fast-moving object, similar to a rolling shutter operation.
This disclosure relates to an image sensor that can provide an improved global shutter operation by addressing some of the issues discussed above. The image sensor includes a pixel cell array to measure the intensity of incident light within a global exposure period. Each pixel cell includes a photodiode and a charge sensing unit comprising a buffer, a first charge storage device, and a second charge storage device. The first charge storage device can be a floating drain, whereas the second charge storage device can be a capacitor (e.g., a metal-oxide-semiconductor (MOS) capacitor, a metal capacitor, etc.). The first charge storage device and the second charge storage device can be connected in parallel to receive charge from the photodiode, or can be disconnected such that only the first charge storage device receives charge from the photodiode. The charge sensing unit can output a voltage based on the charge accumulated at the one or more connected charge storage devices. The image sensor further includes one or more ADCs and a controller. The controller can enable the photodiode of each pixel cell to, within the global exposure period, generate charge in response to incident light. The photodiode can accumulate at least part of the charge as residual charge and transfer the remaining charge as overflow charge to the charge sensing unit after the photodiode saturates. The overflow charge can be accumulated by the parallel combination of the first charge storage device and the second charge storage device.
After the global exposure period ends, the controller can disconnect the photodiode from the charge sensing unit to stop the transfer of charge to the charge sensing unit, control the one or more ADCs to perform a first quantization operation and a second quantization operation on the output voltage of the charge sensing unit to generate, respectively, a first digital value and a second digital value, and output one of the first digital value or the second digital value for each pixel cell based on whether the photodiode of that pixel cell saturates.
Specifically, after the global exposure period ends, the controller can first cause the photodiode of each pixel cell to transfer the residual charge to the first charge storage device, which causes the charge sensing unit to output a first voltage. In some examples, the first voltage can represent a quantity of the residual charge as well as a part of the overflow charge. In other examples, the first charge storage device can be reset prior to the transfer of the residual charge, and the first voltage can represent the quantity of the residual charge alone. The global exposure period ends after the transfer of the residual charge to the first charge storage device at each pixel cell ends. After the transfer of the residual charge (as well as the global exposure period) ends, the controller can control the one or more ADCs to perform a first quantization operation of the first voltage to measure the residual charge generated at each pixel cell during the global exposure period.
After the first quantization operation, the controller can then connect the first charge storage device with the second charge storage device, which causes the charge sensing unit to output a second voltage. The second voltage can represent the quantities of the residual charge and a part of the overflow charge (if the first charge storage device is reset prior to the transfer of the residual charge) or the entirety of the overflow charge. The controller can control the one or more ADCs to perform a second quantization operation of the second voltage to measure the overflow charge generated at each pixel cell during the global exposure period.
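The two-stage readout sequence described above can be sketched behaviorally (not at circuit level; the class name, capacitances, and full-well capacity are hypothetical, and the first charge storage device is not reset before the residual-charge transfer):

```python
class PixelCellModel:
    """Behavioral sketch of one pixel cell's exposure and two-stage readout."""

    def __init__(self, C1=1.0, C2=3.0, full_well=1.0):
        self.C1, self.C2, self.full_well = C1, C2, full_well
        self.q_pd = 0.0  # residual charge held in the photodiode
        self.q1 = 0.0    # charge on the first storage device (floating drain)
        self.q2 = 0.0    # charge on the second storage device (capacitor)

    def expose(self, q_generated):
        # The photodiode accumulates residual charge up to its full-well
        # capacity; the remainder overflows to C1 and C2 in parallel.
        self.q_pd = min(q_generated, self.full_well)
        overflow = q_generated - self.q_pd
        # Parallel devices at a common voltage share charge by capacitance.
        self.q1 = overflow * self.C1 / (self.C1 + self.C2)
        self.q2 = overflow * self.C2 / (self.C1 + self.C2)

    def readout(self):
        # After the exposure period: C2 is disconnected, then the residual
        # charge is transferred to C1, producing the first voltage.
        self.q1 += self.q_pd
        v1 = self.q1 / self.C1
        # C1 and C2 are reconnected: charge redistributes over both devices,
        # producing the second voltage for the second quantization.
        v2 = (self.q1 + self.q2) / (self.C1 + self.C2)
        saturated = self.q_pd >= self.full_well
        return v1, v2, saturated
```

For a dim scene the overflow terms are zero and the first voltage alone tracks the intensity; for a bright scene the saturation flag steers the measurement to the second voltage.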
In some examples, the one or more ADCs may include a comparator at each pixel cell, together with a counter and a memory. The counter can update a count value periodically. In both the first and second quantization operations, the comparator can compare the first voltage or the second voltage against a ramping threshold voltage to generate decisions. A decision can indicate that a threshold voltage matching the first voltage (or the second voltage) is found. The decisions can control the time when the memory stores the count value of the counter corresponding to the matching threshold voltage in each quantization operation as the first digital value (for the first quantization operation) or as the second digital value (for the second quantization operation). The decision generated from comparing the first voltage against the ramping threshold voltage can also indicate whether the photodiode saturates, based on which the controller can decide whether to store the second digital value from the second quantization operation (of the overflow charge) into the memory to represent the intensity of light. Various noise and offset compensation techniques, such as correlated double sampling, can be employed to mitigate the effect of reset noise and comparator offset on the quantization operations. In some examples, a first ramping threshold voltage used for the first quantization operation can have an opposite ramping direction from a second ramping threshold voltage used for the second quantization operation.
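The ramp-based quantization above can be sketched as follows (the function name and parameters are hypothetical; a real implementation latches a free-running counter value on the comparator decision rather than looping in software):

```python
def ramp_adc(v_in, v_start, v_step, n_steps):
    """Single-slope quantization sketch: a counter advances in lockstep with
    a ramping threshold; the comparator decision latches the count."""
    for count in range(n_steps):
        threshold = v_start + count * v_step
        # The comparator flips when the ramp crosses the input voltage; a
        # negative v_step models the opposite ramping direction (inverted
        # comparison polarity) used for the second quantization operation.
        crossed = threshold >= v_in if v_step > 0 else threshold <= v_in
        if crossed:
            return count   # count value stored in memory at the decision
    return n_steps - 1     # no decision: input outside the ramp's range

# First quantization with an upward ramp, second with a downward ramp.
d1 = ramp_adc(0.42, v_start=0.0, v_step=0.01, n_steps=128)
d2 = ramp_adc(0.42, v_start=1.27, v_step=-0.01, n_steps=128)
```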
With examples of the present disclosure, light intensity measurement at each pixel cell can be based on charge generated by the photodiode of the pixel cell within a global exposure period. Moreover, as the photodiode stops transferring charge to the charge storage devices after the global exposure period ends, each of the subsequent readout and quantization operations can be based on charge generated within the same global exposure period. As a result, the image sensor not only can support a global shutter operation but also can provide improved global shutter efficiency. Further, various techniques employed in the multi-stage readout and quantization operations can further extend the dynamic range of the image sensor. All these can improve the performance of the image sensor as well as the applications that rely on the image sensor outputs.
The disclosed techniques may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some examples, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Near-eye display 100 includes a frame 105 and a display 110. Frame 105 is coupled to one or more optical elements. Display 110 is configured for the user to see content presented by near-eye display 100. In some examples, display 110 comprises a waveguide display assembly for directing light from one or more images to an eye of the user.
Near-eye display 100 further includes image sensors 120a, 120b, 120c, and 120d. Each of image sensors 120a, 120b, 120c, and 120d may include a pixel array configured to generate image data representing different fields of views along different directions. For example, sensors 120a and 120b may be configured to provide image data representing two fields of view towards a direction A along the Z axis, whereas sensor 120c may be configured to provide image data representing a field of view towards a direction B along the X axis, and sensor 120d may be configured to provide image data representing a field of view towards a direction C along the X axis.
In some examples, sensors 120a-120d can be configured as input devices to control or influence the display content of the near-eye display 100, to provide an interactive VR/AR/MR experience to a user who wears near-eye display 100. For example, sensors 120a-120d can generate physical image data of a physical environment in which the user is located. The physical image data can be provided to a location tracking system to track a location and/or a path of movement of the user in the physical environment. A system can then update the image data provided to display 110 based on, for example, the location and orientation of the user, to provide the interactive experience. In some examples, the location tracking system may operate a SLAM algorithm to track a set of objects in the physical environment and within a field of view of the user as the user moves within the physical environment. The location tracking system can construct and update a map of the physical environment based on the set of objects, and track the location of the user within the map. By providing image data corresponding to multiple fields of view, sensors 120a-120d can provide the location tracking system a more holistic view of the physical environment, which can lead to more objects being included in the construction and updating of the map. With such an arrangement, the accuracy and robustness of tracking a location of the user within the physical environment can be improved.
In some examples, near-eye display 100 may further include one or more active illuminators 130 to project light into the physical environment. The light projected can be associated with different frequency spectrums (e.g., visible light, infra-red light, ultra-violet light, etc.), and can serve various purposes. For example, illuminator 130 may project light in a dark environment (or in an environment with low intensity of infra-red light, ultra-violet light, etc.) to assist sensors 120a-120d in capturing images of different objects within the dark environment to, for example, enable location tracking of the user. Illuminator 130 may project certain markers onto the objects within the environment, to assist the location tracking system in identifying the objects for map construction/updating.
In some examples, illuminator 130 may also enable stereoscopic imaging. For example, one or more of sensors 120a or 120b can include both a first pixel array for visible light sensing and a second pixel array for infra-red (IR) light sensing. The first pixel array can be overlaid with a color filter (e.g., a Bayer filter), with each pixel of the first pixel array being configured to measure intensity of light associated with a particular color (e.g., one of red, green or blue colors). The second pixel array (for IR light sensing) can also be overlaid with a filter that allows only IR light through, with each pixel of the second pixel array being configured to measure intensity of IR light. The pixel arrays can generate an RGB image and an IR image of an object, with each pixel of the IR image being mapped to each pixel of the RGB image. Illuminator 130 may project a set of IR markers on the object, the images of which can be captured by the IR pixel array. Based on a distribution of the IR markers of the object as shown in the image, the system can estimate a distance of different parts of the object from the IR pixel array, and generate a stereoscopic image of the object based on the distances. Based on the stereoscopic image of the object, the system can determine, for example, a relative position of the object with respect to the user, and can update the image data provided to display 110 based on the relative position information to provide the interactive experience.
As discussed above, near-eye display 100 may be operated in environments associated with a very wide range of light intensities. For example, near-eye display 100 may be operated in an indoor environment or in an outdoor environment, and/or at different times of the day. Near-eye display 100 may also operate with or without active illuminator 130 being turned on. As a result, image sensors 120a-120d may need to have a wide dynamic range to be able to operate properly (e.g., to generate an output that correlates with the intensity of incident light) across a very wide range of light intensities associated with different operating environments for near-eye display 100.
As discussed above, to avoid damaging the eyeballs of the user, illuminators 140a, 140b, 140c, 140d, 140e, and 140f are typically configured to output light of very low intensity. In a case where image sensors 150a and 150b comprise the same sensor devices as image sensors 120a-120d of
Moreover, the image sensors 120a-120d may need to be able to generate an output at a high speed to track the movements of the eyeballs. For example, a user's eyeball can perform a very rapid movement (e.g., a saccade movement) in which there can be a quick jump from one eyeball position to another. To track the rapid movement of the user's eyeball, image sensors 120a-120d need to generate images of the eyeball at high speed. For example, the rate at which the image sensors generate an image frame (the frame rate) needs to at least match the speed of movement of the eyeball. The high frame rate requires short total exposure time for all of the pixel cells involved in generating the image frame, as well as high speed for converting the sensor outputs into digital values for image generation. Moreover, as discussed above, the image sensors also need to be able to operate at an environment with low light intensity.
Waveguide display assembly 210 is configured to direct image light to an eyebox located at exit pupil 230 and to eyeball 220. Waveguide display assembly 210 may be composed of one or more materials (e.g., plastic, glass, etc.) with one or more refractive indices. In some examples, near-eye display 100 includes one or more optical elements between waveguide display assembly 210 and eyeball 220.
In some examples, waveguide display assembly 210 includes a stack of one or more waveguide displays including, but not restricted to, a stacked waveguide display, a varifocal waveguide display, etc. The stacked waveguide display is a polychromatic display (e.g., a red-green-blue (RGB) display) created by stacking waveguide displays whose respective monochromatic sources are of different colors. The stacked waveguide display is also a polychromatic display that can be projected on multiple planes (e.g., multi-planar colored display). In some configurations, the stacked waveguide display is a monochromatic display that can be projected on multiple planes (e.g., multi-planar monochromatic display). The varifocal waveguide display is a display that can adjust a focal position of image light emitted from the waveguide display. In alternate examples, waveguide display assembly 210 may include the stacked waveguide display and the varifocal waveguide display.
Waveguide display 300 includes a source assembly 310, an output waveguide 320, and a controller 330. For purposes of illustration,
Source assembly 310 generates image light 355. Source assembly 310 generates and outputs image light 355 to a coupling element 350 located on a first side 370-1 of output waveguide 320. Output waveguide 320 is an optical waveguide that outputs expanded image light 340 to an eyeball 220 of a user. Output waveguide 320 receives image light 355 at one or more coupling elements 350 located on the first side 370-1 and guides received input image light 355 to a directing element 360. In some examples, coupling element 350 couples the image light 355 from source assembly 310 into output waveguide 320. Coupling element 350 may be, e.g., a diffraction grating, a holographic grating, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors.
Directing element 360 redirects the received input image light 355 to decoupling element 365 such that the received input image light 355 is decoupled out of output waveguide 320 via decoupling element 365. Directing element 360 is part of, or affixed to, first side 370-1 of output waveguide 320. Decoupling element 365 is part of, or affixed to, second side 370-2 of output waveguide 320, such that directing element 360 is opposed to the decoupling element 365. Directing element 360 and/or decoupling element 365 may be, e.g., a diffraction grating, a holographic grating, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors.
Second side 370-2 represents a plane along an x-dimension and a y-dimension. Output waveguide 320 may be composed of one or more materials that facilitate total internal reflection of image light 355. Output waveguide 320 may be composed of, e.g., silicon, plastic, glass, and/or polymers. Output waveguide 320 has a relatively small form factor. For example, output waveguide 320 may be approximately 50 mm wide along the x-dimension, 30 mm long along the y-dimension, and 0.5-1 mm thick along the z-dimension.
Controller 330 controls scanning operations of source assembly 310. The controller 330 determines scanning instructions for the source assembly 310. In some examples, the output waveguide 320 outputs expanded image light 340 to the user's eyeball 220 with a large field of view (FOV). For example, the expanded image light 340 is provided to the user's eyeball 220 with a diagonal FOV (in x and y) of 60 degrees or greater and/or 150 degrees or less. The output waveguide 320 is configured to provide an eyebox with a length of 20 mm or greater and/or equal to or less than 50 mm; and/or a width of 10 mm or greater and/or equal to or less than 50 mm.
Moreover, controller 330 also controls image light 355 generated by source assembly 310, based on image data provided by image sensor 370. Image sensor 370 may be located on first side 370-1 and may include, for example, image sensors 120a-120d of
After receiving instructions from the remote console, mechanical shutter 404 can open and expose the set of pixel cells 402 in an exposure period. During the exposure period, image sensor 370 can obtain samples of light incident on the set of pixel cells 402, and generate image data based on an intensity distribution of the incident light samples detected by the set of pixel cells 402. Image sensor 370 can then provide the image data to the remote console, which determines the display content and provides the display content information to controller 330. Controller 330 can then determine image light 355 based on the display content information.
Source assembly 310 generates image light 355 in accordance with instructions from the controller 330. Source assembly 310 includes a source 410 and an optics system 415. Source 410 is a light source that generates coherent or partially coherent light. Source 410 may be, e.g., a laser diode, a vertical cavity surface emitting laser, and/or a light emitting diode.
Optics system 415 includes one or more optical components that condition the light from source 410. Conditioning light from source 410 may include, e.g., expanding, collimating, and/or adjusting orientation in accordance with instructions from controller 330. The one or more optical components may include one or more lenses, liquid lenses, mirrors, apertures, and/or gratings. In some examples, optics system 415 includes a liquid lens with a plurality of electrodes that allows scanning of a beam of light with a threshold value of scanning angle to shift the beam of light to a region outside the liquid lens. Light emitted from the optics system 415 (and also source assembly 310) is referred to as image light 355.
Output waveguide 320 receives image light 355. Coupling element 350 couples image light 355 from source assembly 310 into output waveguide 320. In examples where coupling element 350 is a diffraction grating, a pitch of the diffraction grating is chosen such that total internal reflection occurs in output waveguide 320, and image light 355 propagates internally in output waveguide 320 (e.g., by total internal reflection), toward decoupling element 365.
Directing element 360 redirects image light 355 toward decoupling element 365 for decoupling from output waveguide 320. In examples where directing element 360 is a diffraction grating, the pitch of the diffraction grating is chosen to cause incident image light 355 to exit output waveguide 320 at angle(s) of inclination relative to a surface of decoupling element 365.
In some examples, directing element 360 and/or decoupling element 365 are structurally similar. Expanded image light 340 exiting output waveguide 320 is expanded along one or more dimensions (e.g., may be elongated along x-dimension). In some examples, waveguide display 300 includes a plurality of source assemblies 310 and a plurality of output waveguides 320. Each of source assemblies 310 emits a monochromatic image light of a specific band of wavelength corresponding to a primary color (e.g., red, green, or blue). Each of output waveguides 320 may be stacked together with a distance of separation to output an expanded image light 340 that is multi-colored.
Near-eye display 100 is a display that presents media to a user. Examples of media presented by the near-eye display 100 include one or more images, video, and/or audio. In some examples, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from near-eye display 100 and/or control circuitries 510 and presents audio data based on the audio information to a user. In some examples, near-eye display 100 may also act as an AR eyewear glass. In some examples, near-eye display 100 augments views of a physical, real-world environment, with computer-generated elements (e.g., images, video, sound, etc.).
Near-eye display 100 includes waveguide display assembly 210, one or more position sensors 525, and/or an inertial measurement unit (IMU) 530. Waveguide display assembly 210 includes source assembly 310, output waveguide 320, and controller 330.
IMU 530 is an electronic device that generates fast calibration data indicating an estimated position of near-eye display 100 relative to an initial position of near-eye display 100 based on measurement signals received from one or more of position sensors 525.
Imaging device 535 may generate image data for various applications. For example, imaging device 535 may generate image data to provide slow calibration data in accordance with calibration parameters received from control circuitries 510. Imaging device 535 may include, for example, image sensors 120a-120d of
The input/output interface 540 is a device that allows a user to send action requests to the control circuitries 510. An action request is a request to perform a particular action. For example, an action request may be to start or end an application or to perform a particular action within the application.
Control circuitries 510 provide media to near-eye display 100 for presentation to the user in accordance with information received from one or more of: imaging device 535, near-eye display 100, and input/output interface 540. In some examples, control circuitries 510 can be housed within system 500 configured as a head-mounted device. In some examples, control circuitries 510 can be a standalone console device communicatively coupled with other components of system 500. In the example shown in
The application store 545 stores one or more applications for execution by the control circuitries 510. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.
Tracking module 550 calibrates system 500 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the near-eye display 100.
Tracking module 550 tracks movements of near-eye display 100 using slow calibration information from the imaging device 535. Tracking module 550 also determines positions of a reference point of near-eye display 100 using position information from the fast calibration information.
Engine 555 executes applications within system 500 and receives position information, acceleration information, velocity information, and/or predicted future positions of near-eye display 100 from tracking module 550. In some examples, information received by engine 555 may be used for producing a signal (e.g., display instructions) to waveguide display assembly 210 that determines a type of content presented to the user. For example, to provide an interactive experience, engine 555 may determine the content to be presented to the user based on a location of the user (e.g., provided by tracking module 550), or a gaze point of the user (e.g., based on image data provided by imaging device 535), a distance between an object and user (e.g., based on image data provided by imaging device 535).
Before a global exposure period starts, electronic shutter switch 603 of each pixel cell can be enabled to steer any charge generated by photodiode 602 to charge sink 610. To start the global exposure period, electronic shutter switch 603 can be disabled. Photodiode 602 of each pixel cell can generate and accumulate charge. Towards the end of the exposure period, transfer switch 604 of each pixel cell can be enabled to transfer the charge stored in photodiode 602 to charge storage device 608a to develop a voltage. The global exposure period ends when transfer switch 604 is disabled to stop the transfer, while electronic shutter switch 603 can also be enabled to remove new charge generated by photodiode 602. To support a global shutter operation, the global exposure period can start and end at the same time for each pixel cell. An array of voltages, including v00, v01, . . . vji, can be obtained at the end of the global exposure period. The array of voltages can be quantized by an A/D converter (which can be external or internal to the pixel cells) into digital values. The digital values can be further processed to generate an image 612. Another global exposure period can start when electronic shutter switch 603 is disabled again to generate a new image.
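The global shutter sequence above can be sketched numerically. This is a minimal illustration, not the disclosed circuit: the function name, the photocurrent model, and the capacitance value are all assumptions chosen for clarity.

```python
# Hypothetical sketch of the global shutter sequence: charge generated
# before the exposure is steered to the charge sink (discarded), charge
# accumulated during the exposure is transferred to the charge storage
# device, where it develops a voltage v = q / C.

def run_global_shutter(photocurrents, t_exposure):
    """Simulate one global exposure for an array of pixel cells.

    Every pixel shares the same exposure window, so the scene is sampled
    coherently across the array (the defining property of a global shutter).
    """
    C = 10.0e-15  # storage capacitance in farads (illustrative value)
    voltages = []
    for i_ph in photocurrents:
        q = i_ph * t_exposure   # charge accumulated during the exposure only
        v = q / C               # voltage developed on the charge storage device
        voltages.append(v)
    return voltages             # quantized by the A/D converter into an image

voltages = run_global_shutter([1e-12, 2e-12], t_exposure=1e-3)
```

Since every pixel integrates over the identical window, a pixel receiving twice the photocurrent develops twice the voltage, independent of readout order.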
Reference is now made to
The definitions of low light intensity range 706 and medium light intensity range 708, as well as thresholds 702 and 704, can be based on the storage capacities of photodiode 602 and charge storage device 608a. For example, low light intensity range 706 can be defined such that the total quantity of charge stored in photodiode 602, at the end of the exposure period, is below or equal to the storage capacity of the photodiode, and threshold 702 can be based on the storage capacity of photodiode 602. As to be described below, threshold 702 can be set based on a scaled storage capacity of photodiode 602 to account for potential capacity variation of the photodiode. Such arrangements can ensure that, when the quantity of charge stored in photodiode 602 is measured for intensity determination, the photodiode does not saturate, and the measured quantity relates to the incident light intensity. Moreover, medium light intensity range 708 can be defined such that the total quantity of charge stored in charge storage device 608a, at the end of the exposure period, is below or equal to the storage capacity of the measurement capacitor, and threshold 704 can be based on the storage capacity of charge storage device 608a. Typically, threshold 704 is also set based on a scaled storage capacity of charge storage device 608a to ensure that, when the quantity of charge stored in charge storage device 608a is measured for intensity determination, the measurement capacitor does not saturate, and the measured quantity also relates to the incident light intensity. As to be described below, thresholds 702 and 704 can be used to detect whether photodiode 602 and charge storage device 608a saturate, which can determine the intensity range of the incident light and the measurement result to be output.
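The threshold-based classification described above can be sketched as follows. The capacities and the 0.9 scale factor are illustrative assumptions; the disclosure only specifies that the thresholds are scaled versions of the storage capacities.

```python
# Illustrative sketch of threshold-based intensity-range classification.
# Capacities and scale factors are assumptions, not values from the source.

PD_CAPACITY = 1000        # photodiode storage capacity (arbitrary charge units)
CS_CAPACITY = 8000        # charge storage device capacity

THRESHOLD_702 = int(0.9 * PD_CAPACITY)   # scaled to tolerate capacity variation
THRESHOLD_704 = int(0.9 * CS_CAPACITY)

def classify_intensity(residual_charge, overflow_charge):
    """Map measured charge quantities to the low/medium/high intensity ranges."""
    if residual_charge < THRESHOLD_702 and overflow_charge == 0:
        return "low"       # photodiode did not saturate
    if overflow_charge < THRESHOLD_704:
        return "medium"    # photodiode saturated, charge storage device did not
    return "high"          # charge storage device also saturated
```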
In addition, in a case where the incident light intensity is within high light intensity range 710, the total overflow charge accumulated at charge storage device 608a may exceed threshold 704 before the exposure period ends. As additional charge is accumulated, charge storage device 608a may reach full capacity before the end of the exposure period, and charge leakage may occur. To avoid measurement error caused due to charge storage device 608a reaching full capacity, a time-to-saturation measurement can be performed to measure the time duration it takes for the total overflow charge accumulated at charge storage device 608a to reach threshold 704. A rate of charge accumulation at charge storage device 608a can be determined based on a ratio between threshold 704 and the time-to-saturation, and a hypothetical quantity of charge (Q3) that could have been accumulated at charge storage device 608a at the end of the exposure period (if the capacitor had limitless capacity) can be determined by extrapolation according to the rate of charge accumulation. The hypothetical quantity of charge (Q3) can provide a reasonably accurate representation of the incident light intensity within high light intensity range 710.
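The time-to-saturation extrapolation above reduces to one line of arithmetic: the accumulation rate is threshold 704 divided by the measured time-to-saturation, and Q3 is that rate multiplied by the full exposure period. A sketch with illustrative numbers:

```python
# Sketch of the time-to-saturation (TTS) extrapolation described above.
# Units and values are illustrative (charge units, seconds).

def extrapolate_charge(threshold_704, t_saturation, t_exposure):
    """Estimate the hypothetical charge Q3 that would have accumulated by the
    end of the exposure period if the capacitor had limitless capacity."""
    rate = threshold_704 / t_saturation   # charge accumulation rate
    return rate * t_exposure              # extrapolate to the full exposure

# Threshold reached halfway through the exposure -> Q3 is twice the threshold.
q3 = extrapolate_charge(threshold_704=7200, t_saturation=0.5e-3, t_exposure=1e-3)
```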
Reference is now made to
To measure high light intensity range 710 and medium light intensity range 708, transfer switch 604 can be biased by transfer signal 802 in a partially turned-on state. For example, the gate voltage of transfer switch 604 can be set based on a voltage developed at photodiode 602 corresponding to the charge storage capacity of the photodiode. With such arrangements, only overflow charge (e.g., charge generated by the photodiode after the photodiode saturates) will transfer through transfer switch 604 to reach charge storage device 608a, to measure time-to-saturation (for high light intensity range 710) and the quantity of charge stored in charge storage device 608a (for medium light intensity range 708). For measurement of medium and high light intensity ranges, the capacitance of charge storage device 608a can also be maximized to increase threshold 704.
Moreover, to measure low light intensity range 706, transfer switch 604 can be controlled in a fully turned-on state to transfer the charge stored in photodiode 602 to charge storage device 608a. Moreover, the capacitance of charge storage device 608a can be reduced. The reduction in the capacitance of charge storage device 608a can increase the charge-to-voltage conversion ratio at charge storage device 608a, such that a higher voltage can be developed for a certain quantity of stored charge. The higher charge-to-voltage conversion ratio can reduce the effect of measurement errors (e.g., quantization error, comparator offset, etc.) introduced by subsequent quantization operation on the accuracy of low light intensity determination. The measurement error can set a limit on a minimum voltage difference that can be detected and/or differentiated by the quantization operation. By increasing the charge-to-voltage conversion ratio, the quantity of charge corresponding to the minimum voltage difference can be reduced, which in turn reduces the lower limit of a measurable light intensity by pixel cell 601 and extends the dynamic range.
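The benefit of reducing the capacitance can be made concrete with v = q / C: for a fixed minimum resolvable voltage difference, a smaller capacitance means fewer electrons are needed to produce it. The capacitance and voltage values below are assumptions for illustration only.

```python
# Numeric sketch of why a lower storage capacitance improves low-light
# sensitivity: the charge-to-voltage conversion ratio is 1 / C.

Q_E = 1.6e-19  # elementary charge in coulombs

def min_detectable_electrons(v_min, capacitance):
    """Smallest charge (in electrons) producing the minimum voltage
    difference v_min that the quantization operation can resolve."""
    return v_min * capacitance / Q_E

# Full capacitance (low gain) vs. reduced capacitance (high gain):
low_gain = min_detectable_electrons(v_min=1e-3, capacitance=10e-15)
high_gain = min_detectable_electrons(v_min=1e-3, capacitance=2e-15)
# Fewer electrons clear the same voltage step at high gain, lowering the
# minimum measurable light intensity and extending the dynamic range.
```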
The charge (residual charge and/or overflow charge) accumulated at charge storage device 608a can develop an analog voltage 806, which can be quantized by an analog-to-digital converter (ADC) 808 to generate a digital value to represent the intensity of incident light during the exposure period. As shown in
Comparator 810 can compare analog voltage 806 against the threshold provided by threshold generator 809, and generate a decision 826 based on the comparison result. For example, comparator 810 can generate a logical one for decision 826 if analog voltage 806 equals or exceeds the threshold generated by threshold generator 809. Comparator 810 can also generate a logical zero for decision 826 if the analog voltage falls below the threshold. Decision 826 can control the count values of counter 811 to be stored in memory 812 as a result of the quantization operation.
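The comparator-and-counter quantization just described can be modeled as a ramp-compare ADC: a threshold steps upward with the counter, and the count at the moment the comparator decision flips is the digital output. This is a behavioral sketch with made-up step values, not the circuit itself.

```python
# Behavioral sketch of ramp-compare quantization by comparator 810 and
# counter 811: the count value when the decision trips is latched to memory.

def ramp_adc(v_analog, v_ramp_start, v_step, n_steps):
    """Return the count latched when the ramping threshold crosses v_analog."""
    for count in range(n_steps):
        v_threshold = v_ramp_start + count * v_step
        if v_threshold >= v_analog:   # decision 826 becomes logical one
            return count              # counter value stored in memory 812
    return n_steps - 1                # ramp ended without tripping (clamped)

code = ramp_adc(v_analog=0.515, v_ramp_start=0.0, v_step=0.01, n_steps=256)
```

Any input within one step of the tripping threshold maps to the same code, which is the quantization error discussed next.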
As discussed above, ADC 808 can introduce quantization errors when there is a mismatch between a quantity of charge represented by the quantity level output by ADC 808 (e.g., represented by the total number of quantization steps) and the actual input quantity of charge that is mapped to the quantity level by ADC 808. The quantization error can be reduced by using a smaller quantization step size. In the example of
Although quantization error can be reduced by using smaller quantization step sizes, area and performance speed may limit how far the quantization step can be reduced. With a smaller quantization step size, the total number of quantization steps needed to represent a particular range of charge quantities (and light intensity) may increase. A larger number of data bits may be needed to represent the increased number of quantization steps (e.g., 8 bits to represent 255 steps, 7 bits to represent 127 steps, etc.). The larger number of data bits may require additional buses to be added to pixel output buses 816, which may not be feasible if pixel cell 601 is used on a head-mounted device or other wearable devices with very limited spaces. Moreover, with a larger number of quantization steps, ADC 808 may need to cycle through more quantization steps before finding the quantity level that matches (within one quantization step), which leads to increased processing power consumption and time, and a reduced rate of generating image data. The reduced rate may not be acceptable for some applications that require a high frame rate (e.g., an application that tracks the movement of the eyeball).
One way to reduce quantization error is by employing a non-uniform quantization scheme, in which the quantization steps are not uniform across the input range.
One advantage of employing a non-uniform quantization scheme is that the quantization steps for quantizing low input charge quantities can be reduced, which in turn reduces the quantization errors for quantizing the low input charge quantities, and the minimum input charge quantities that can be differentiated by ADC 808 can be reduced. Therefore, the reduced quantization errors can push down the lower limit of the measurable light intensity of the image sensor, and the dynamic range can be increased. Moreover, although the quantization errors are increased for the high input charge quantities, the quantization errors may remain small compared with the high input charge quantities. Therefore, the overall quantization errors introduced to the measurement of the charge can be reduced. On the other hand, the total number of quantization steps covering the entire range of input charge quantities may remain the same (or even be reduced), and the aforementioned potential problems associated with increasing the number of quantization steps (e.g., increase in area, reduction in processing speed, etc.) can be avoided.
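One way to realize a non-uniform scheme is with exponentially spaced thresholds: dense near zero, sparse near full scale. The exponential profile below is an assumption for illustration; the disclosure does not commit to a specific step distribution.

```python
# Sketch of non-uniform quantization: small steps at low charge quantities
# (small error where it matters most), growing steps at high quantities.
# The exponential step profile is an illustrative assumption.

def build_nonuniform_thresholds(n_steps, v_full_scale):
    """Exponentially spaced thresholds, dense near zero, sparse near full scale."""
    return [v_full_scale * (2 ** i - 1) / (2 ** n_steps - 1)
            for i in range(1, n_steps + 1)]

def quantize(v, thresholds):
    """Return the code of the first threshold at or above the input."""
    for code, t in enumerate(thresholds):
        if v <= t:
            return code
    return len(thresholds) - 1   # clamp inputs above full scale

thresholds = build_nonuniform_thresholds(n_steps=8, v_full_scale=1.0)
# The first step is far smaller than the last, so low inputs are resolved
# much more finely than with uniform steps over the same range, while the
# total number of codes stays the same.
```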
Reference is now made to
Pixel cell 1100 further includes a buffer 1102 and an example of pixel ADC 808. For example, transistors M3 and M4 can form a source follower to buffer an analog voltage developed at the PIXEL_OUT node, which represents a quantity of charge stored at charge storage device 608a. Further, the CC cap, comparator 1104, transistor switch M5, NOR gate 1112, together with memory 812, can be part of pixel ADC 808 to generate a digital output representing the analog voltage at the OF node. As described above, the quantization can be based on a comparison result (VOUT), generated by comparator 1104, between the analog voltage developed at the PIXEL_OUT node and VREF. Here, the CC cap is configured as a sampling capacitor to generate a COMP_IN voltage (at one input of comparator 1104) which tracks the output of buffer 1102 (and PIXEL_OUT), and provides the COMP_IN voltage to comparator 1104 to compare against VREF. VREF can be a static voltage for time-to-saturation measurement (for high light intensity range) or a ramping voltage for quantization of an analog voltage (for low and medium light intensity ranges). The count values (labelled “Cnt” in
Pixel cell 1100 can include features that can further improve the accuracy of the incident light intensity determination. For example, the combination of the CC capacitor, transistor M5, as well as transistor M2, can be operated to perform a correlated double sampling operation to compensate for measurement errors (e.g., comparator offset) introduced by comparator 1104, as well as other error signals such as, for example, reset noise introduced to charge storage device 608a by transistor M2.
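The correlated double sampling idea is that the reset level (carrying reset noise and comparator offset) is sampled first and subtracted from the signal sample, so errors common to both samples cancel. A conceptual sketch with made-up voltages, not a model of the actual CC capacitor circuit:

```python
# Conceptual sketch of correlated double sampling (CDS): errors present in
# both the reset sample and the signal sample cancel in the difference.
# All voltage values are illustrative.

def cds_measure(v_signal_true, v_reset_noise, v_comp_offset):
    """Both samples see the same noise and offset, so subtraction removes them."""
    sample_reset = v_reset_noise + v_comp_offset                    # first sample
    sample_signal = v_signal_true + v_reset_noise + v_comp_offset   # second sample
    return sample_signal - sample_reset                             # errors cancel

v = cds_measure(v_signal_true=0.25, v_reset_noise=0.02, v_comp_offset=-0.01)
```

The recovered value is the true signal regardless of the noise and offset values, as long as they are common to both samples.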
In addition, pixel cell 1100 further includes a controller 1110, which can include digital logic circuits and can be part of or external to ADC 808. Controller 1110 can generate a sequence of control signals, such as AB, TG, RST, COMP_RST, etc., to operate pixel cell 1100 to perform a three-phase measurement operation corresponding to the three light intensity ranges of
Reference is now made to
Vcc(T1)=(Vref_high+Vcomp_offset)−(Vpixel_out_rst+VσKTC) (Equation 1)
At time T1, the RST signal, the AB signal, and the COMP_RST signal are released, which starts an exposure period (labelled Texposure) in which photodiode PD can accumulate and transfer charge. Exposure period Texposure can end at time T2. Between times T1 and T3, TG signal can set transfer switch M1 in a partially turned-on state to allow PD to accumulate residual charge before photodiode PD saturates. If the light intensity in the medium or high intensity ranges of
Vcomp_in(Tx)=Vpixel_out_sig1−Vpixel_out_rst+Vref_high+Vcomp_offset (Equation 2)
In Equation 2, the difference between Vpixel_out_sig1 and Vpixel_out_rst represents the quantity of overflow charge stored in charge storage device 608a. The comparator offset in the COMP_IN voltage can also cancel out the comparator offset introduced by comparator 1104 when performing the comparison.
Between times T1 and T3, two phases of measurement of the COMP_IN voltage can be performed, including a time-to-saturation (TTS) measurement phase for high light intensity range 710 and an FD ADC phase for measurement of overflow charge for medium light intensity range 708. Between times T1 and T2 (Texposure), the TTS measurement can be performed by comparator 1104 comparing the COMP_IN voltage with a static Vref_low representing a saturation level of charge storage device 608a. When the PIXEL_OUT voltage reaches the static VREF, the output of comparator 1104 (VOUT) can trip, and a count value from counter 811 at the time when VOUT trips can be stored into memory 812. At time T2, controller 1110 can determine the state of VOUT of comparator 1104 at the end of the TTS phase, and can assert the FLAG_1 signal if VOUT is asserted. The assertion of the FLAG_1 signal can indicate that charge storage device 608a saturates and can prevent subsequent measurement phases (FD ADC and PD ADC) from overwriting the count value stored in memory 812. The count value from TTS can then be provided to represent the intensity of light received by the photodiode PD during the exposure period.
Between times T2 and T3 (labelled TFDADC), the FD ADC operation can be performed by comparing COMP_IN voltage with a ramping VREF voltage that ramps from Vref_low to Vref_high, which represents the saturation level of photodiode PD (e.g., threshold 702), as described in
Between times T3 and T4 (labelled TPDADC-transfer) can be the second reset phase, in which both RST and COMP_RST signals are asserted to reset charge storage device 608a (comprising the parallel combination of CFD capacitor and CEXT capacitor) and comparator 1104 to prepare for the subsequent PD ADC operation. The VCC voltage can be set according to Equation 1.
After RST and COMP_RST are released, LG is turned off to disconnect CEXT from CFD to increase the charge-to-voltage conversion rate for the PD ADC operation. TG is set at a level to fully turn on the M1 transfer switch to transfer the residual charge stored in the photodiode PD to CFD. The residual charge develops a new PIXEL_OUT voltage, Vpixel_out_sig2. The CC capacitor can AC couple the new PIXEL_OUT voltage Vpixel_out_sig2 into the COMP_IN voltage by adding the VCC voltage. Between times T3 and T4, the photodiode PD remains capable of generating additional charge in addition to the charge generated between times T1 and T3, and transferring the additional charge to charge storage device 608a. Vpixel_out_sig2 also represents the additional charge transferred between times T3 and T4. At time T4, the COMP_IN voltage can be as follows:
Vcomp_in(T4)=Vpixel_out_sig2−Vpixel_out_rst+Vref_high+Vcomp_offset (Equation 3)
In Equation 3, the difference between Vpixel_out_sig2 and Vpixel_out_rst represents the quantity of charge transferred by the photodiode to charge storage device 608a between times T3 and T4. The comparator offset in the COMP_IN voltage can also cancel out the comparator offset introduced by comparator 1104 when performing the comparison.
At time T4, the AB signal is asserted to prevent the photodiode PD from accumulating and transferring additional charge. Moreover, VREF can be set to a static level Vref_low margin. Comparator 1104 can compare the COMP_IN voltage with Vref_low margin to determine whether the photodiode PD saturates. Vref_low margin is slightly higher than Vref_low, which represents the saturation level of photodiode PD (e.g., threshold 702), to prevent false tripping of comparator 1104 when the quantity of residual charge is close to but does not exceed the saturation level. Controller 1110 can determine the state of VOUT of comparator 1104 and can assert FLAG_2 if VOUT is asserted to indicate that photodiode PD saturates. If FLAG_2 is asserted, memory 812 can be locked to preserve the count value stored in memory 812 (from FD ADC) and prevent memory 812 from being overwritten by the subsequent PD ADC operation.
Between times T4 and T5, controller 1110 can perform the PD ADC operation by comparing the COMP_IN voltage with a VREF ramp that starts from Vref_low margin to Vref_high. In PD ADC phase, Vref_high can represent the minimum detectable quantity of residual charge stored in photodiode PD, whereas Vref_low margin can represent the saturation threshold of photodiode PD with margin to account for dark current, as described above. If neither FLAG_1 nor FLAG_2 is asserted prior to PD ADC, the count value obtained when comparator 1104 trips during PD ADC can be stored into memory 812, and the count value from PD ADC can be provided to represent the intensity of light.
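The FLAG_1/FLAG_2 locking behavior across the three phases amounts to a simple priority selection. The sketch below captures that logic; the function name and signature are illustrative, not from the disclosure.

```python
# Sketch of the output-selection logic across the three measurement phases:
# FLAG_1 (charge storage device saturated) locks the TTS count, FLAG_2
# (photodiode saturated) locks the FD ADC count, otherwise the PD ADC count
# represents the intensity.

def select_output(flag_1, flag_2, tts_count, fd_count, pd_count):
    """Return the count value representing the incident light intensity."""
    if flag_1:           # charge storage device saturated: use TTS result
        return tts_count
    if flag_2:           # photodiode saturated: use FD ADC (overflow) result
        return fd_count
    return pd_count      # neither saturated: use PD ADC (residual) result
```

Because later phases are prevented from overwriting memory 812 once a flag is asserted, the single memory word naturally holds the value this function would select.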
An image sensor can include a plurality of pixel cells 1100, with each pixel cell operated using the sequence of control signals as described in
As shown in Equation 4, the GSE of pixel cell 1100 operated according to
Reference is now made to
At time T1, the AB, COMP_RST, and the RST signals are released, which starts an exposure period (labelled Texposure) in which photodiode PD can accumulate and transfer charge. TG signal can set transfer switch M1 in a partially turned-on state to allow PD to transfer overflow charge to charge storage device 608a. LG signal can remain asserted to operate in low gain mode, in which both CFD capacitor and CEXT capacitor are connected in parallel to form charge storage device 608a to store the overflow charge. The overflow charge develops a new PIXEL_OUT voltage, Vpixel_out_sig1. The CC capacitor can AC couple the PIXEL_OUT voltage to become the COMP_IN voltage. The COMP_IN voltage between times T1 and T2 can be set based on Equation 2 above.
Between times T1 and T2, a time-to-saturation (TTS) measurement can be performed by comparator 1104 comparing COMP_IN voltage with a static Vref_low to generate VOUT, as in
Following the TTS measurement, between times T2 and T5, the PD ADC operation can be performed to measure the residual charge stored in the photodiode PD. The LG signal is de-asserted to disconnect CEXT from CFD to increase charge-to-voltage conversion ratio, as described above. The overflow charge (if any) is divided between CFD and CEXT based on a ratio of capacitances between CFD and CEXT such that CFD stores a first portion of the overflow charge and CEXT stores a second portion of the overflow charge. Vpixel_out_sig1 can correspond to the first portion of the overflow charge stored in CFD.
To prepare for the PD ADC operation, between times T2 and T3, COMP_RST signal is asserted again to reset comparator 1104. The resetting of comparator 1104 can set a new VCC voltage across the CC capacitor based on a difference between Vpixel_out_sig1 and the output of comparator 1104 in the reset state, as follows:
Vcc(T3)=(Vref_high+Vcomp_offset)−(Vpixel_out_sig1(T3)+VσKTC) (Equation 5)
Between times T3 and T4, COMP_RST signal is released so that comparator 1104 exits the reset state. Moreover, the TG signal can set transfer switch M1 in a fully turned-on state to transfer the residual charge to CFD. The residual charge can be added to the overflow charge in CFD, which changes the PIXEL_OUT voltage to Vpixel_out_sig2. The new PIXEL_OUT voltage can be AC coupled into a new COMP_IN voltage at time T4, as follows:
Vcomp_in(T4)=Vpixel_out_sig2−Vpixel_out_sig1(T3)+Vref_high+Vcomp_offset (Equation 6)
In Equation 6, the difference between Vpixel_out_sig2 and Vpixel_out_sig1(T3) represents the quantity of residual charge transferred by the photodiode to charge storage device 608a between times T3 and T4.
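The step from Equation 5 to Equation 6 can be checked numerically: COMP_IN at T4 is the new PIXEL_OUT voltage plus the VCC voltage stored at reset, and the Vpixel_out_sig1(T3) and reset-noise terms cancel as claimed. All voltage values below are made up for the check.

```python
# Numeric check (illustrative voltages) that AC coupling through the CC
# capacitor yields Equation 6 from Equation 5.

v_ref_high = 1.2
v_comp_offset = 0.01
v_ktc = 0.002            # reset noise present on PIXEL_OUT at the T3 sampling
v_sig1_t3 = 0.80         # PIXEL_OUT before the residual charge transfer
v_sig2 = 0.65            # PIXEL_OUT after the residual charge transfer

# Equation 5: VCC stored across the CC capacitor at comparator reset (T3).
v_cc_t3 = (v_ref_high + v_comp_offset) - (v_sig1_t3 + v_ktc)

# COMP_IN at T4: the new PIXEL_OUT (still carrying the same KTC noise)
# plus the stored VCC voltage.
v_comp_in_t4 = (v_sig2 + v_ktc) + v_cc_t3

# Equation 6: the KTC noise and Vpixel_out_sig1(T3) contributions cancel.
expected = v_sig2 - v_sig1_t3 + v_ref_high + v_comp_offset
```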
In
The GSE in Equation 7 can be much higher than the GSE in Equation 4, at least because the duration of time between times T4 and T2 in
Between times T4 and T5, controller 1110 can perform the PD ADC operation by comparing the COMP_IN voltage with a VREF ramp that starts from Vref_high to Vref_low margin. In PD ADC phase, Vref_high can represent the minimum detectable quantity of residual charge stored in photodiode PD, whereas Vref_low margin can represent the saturation threshold of photodiode PD with margin to account for dark current, as described above. If FLAG_1 is asserted prior to PD ADC, the count value obtained when comparator 1104 trips during PD ADC can be stored into memory 812, and the count value from the PD ADC operation can be provided to represent the intensity of light.
Moreover, towards time T5, controller 1110 can also check whether the COMP_IN voltage falls below Vref_low margin, which can indicate whether the photodiode PD saturates. If the COMP_IN voltage falls below Vref_low margin, which indicates the photodiode PD saturates, controller 1110 can de-assert FLAG_2 to allow the subsequent FD ADC operation to overwrite the count value stored in memory 812 (if FLAG_1 is also de-asserted). If the COMP_IN voltage stays above Vref_low margin, controller 1110 can assert FLAG_2 to lock the count value stored by the PD ADC operation.
Between times T5 and T8, an FD ADC operation can be performed to measure the overflow charge transferred by the photodiode PD within the exposure period Texposure. As photodiode PD remains disconnected from CFD and CEXT, no additional charge is transferred to CFD and CEXT, and the total charge stored in CFD and CEXT is mostly generated in the exposure period Texposure, together with additional charge generated by the photodiode between times T3 and T4. With such arrangement, the GSE of pixel
At time T5, the LG signal is asserted to connect CFD with CEXT, which allows the second portion of the overflow charge stored in CEXT to combine with the first portion of the overflow charge and the residual charge stored in CFD, and a new PIXEL_OUT voltage Vpixel_out_sig3 can develop at the parallel combination of CFD and CEXT. Vpixel_out_sig3 can represent a total quantity of residual charge and overflow charge generated by the photodiode PD between times T1 and T2, plus charge generated between times T2 and T4 due to parasitic light. Controller 1110 can perform the FD ADC operation to quantize Vpixel_out_sig3 and, if FLAG_2 indicates that photodiode saturates, provide the quantization result of Vpixel_out_sig3 to represent the intensity of light within the exposure period Texposure.
Between times T5 and T7, a double sampling technique can be performed to mitigate the effect of reset noise and comparator offset on the FD ADC operation. Specifically, between times T5 and T6, comparator 1104 can be reset as part of the first sampling operation. The positive terminal of comparator 1104 is connected to the lower end of VREF, Vref_low. The VCC voltage can include components of reset noise and comparator offset as described above. The VCC voltage can be as follows:
Vcc(T5)=(Vref_low+Vcomp_offset)−(Vpixel_out_sig3+VσKTC1) (Equation 8)
Between times T6 and T7, both CFD and CEXT can be reset, while comparator 1104 exits the reset state, as part of a second sampling operation. As a result of the resetting, PIXEL_OUT can be reset to a reset voltage Vpixel_out_rst. Moreover, a second reset noise charge is also introduced into charge storage device 608a, which can be represented by VσKTC2. The second reset noise charge typically tracks the first reset noise charge. At time T6, as the result of the second sampling operation, Vpixel_out can be as follows:
Vpixel_out(T6)=Vpixel_out_rst+VσKTC2 (Equation 9)
At time T7, COMP_RST is released, and comparator 1104 exits the reset state. Via AC-coupling, the COMP_IN voltage can track Vpixel_out(T6) in addition to Vcc(T5) as follows:
Vcomp_in(T7)=(Vref_low+Vcomp_offset)+(Vpixel_out_rst−Vpixel_out_sig3)+(VσKTC2−VσKTC1) (Equation 10)
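By way of a numerical illustration, the compensation described by Equations 8-10 can be checked with a short script. All voltage values below are assumptions chosen for the sketch and are not from the disclosure:

```python
# Illustrative voltages (in volts); values are assumptions for this sketch.
vref_low = 0.2
v_comp_offset = 0.005        # comparator offset
v_pixel_out_sig3 = 0.9       # PIXEL_OUT with combined signal charge
v_pixel_out_rst = 1.5        # PIXEL_OUT after reset
v_ktc1 = 0.0020              # first reset noise sample
v_ktc2 = 0.0021              # second reset noise sample (tracks the first)

# Equation 8: VCC stored on the AC capacitor at time T5.
v_cc_t5 = (vref_low + v_comp_offset) - (v_pixel_out_sig3 + v_ktc1)

# Equation 9: PIXEL_OUT as a result of the second sampling operation.
v_pixel_out_t6 = v_pixel_out_rst + v_ktc2

# Equation 10: via AC coupling, COMP_IN tracks Vpixel_out(T6) plus Vcc(T5).
v_comp_in_t7 = v_pixel_out_t6 + v_cc_t5
# The comparator offset rides on Vref_low, and only the difference of the
# two (correlated) reset noise samples remains in the comparison.
```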
The noise/offset compensation scheme as described above in Equations 8-10 can be used for PD ADC to mitigate the effect of leakage on the sampled noise/offset information in the VCC voltage. PD ADC can be more susceptible to leakage than FD ADC as it takes place after FD ADC, which allows more time for leakage to impact the VCC voltage and the sampled noise/offset information represented by the VCC voltage.
Following the second sampling operation, the COMP_IN voltage can be quantized by comparing against a VREF ramp between times T7 and T8. The VREF ramp can start from Vref_low, which can represent a minimum quantity of overflow charge detectable in charge storage device 608a including CEXT and CFD, and Vref_high, which can represent the quantity of overflow charge when charge storage device 608a saturates. A count value from counter 811 when VOUT trips can be stored into memory 812 to represent the intensity of light received in the exposure period. After time T8, the digital value stored in memory 812 can be read out to represent the intensity of light received by the photodiode PD within the exposure period Texposure.
As shown in Equation 8, the polarity of comparison in the PD ADC operation, where Vcomp_in represents Vpixel_out_sig2−Vpixel_out_sig1(T3), is opposite to the polarity of comparison in the FD ADC operation, where Vcomp_in represents Vpixel_out_rst−Vpixel_out_sig3. In PD ADC, the VOUT of comparator 1104 of
The first voltage V1 and the second voltage V2 can be expressed as follows:
V1=(Qres+Qov×CFD/(CFD+CEXT))/CFD (Equation 11)
V2=(Qres+Qov)/(CFD+CEXT) (Equation 12)
In Equation 11, the first voltage can be based on a quantity of the residual charge Qres as well as a quantity of a first portion of the overflow charge Qov stored in CFD (based on a ratio between CFD and the total capacitance CFD+CEXT) prior to CFD being disconnected from CEXT. In Equation 12, the second voltage can be based on the residual charge Qres (which remains in CFD) and the total overflow charge Qov, which are redistributed within CFD+CEXT. In the scheme of
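Under the relations described for Equations 11 and 12, a post-processor could recover Qres and Qov from the two quantization outputs. The following sketch assumes idealized (noise-free) voltages and example capacitance values; the function and variable names are illustrative and not part of the disclosure:

```python
def recover_charges_no_reset(v1, v2, c_fd, c_ext):
    """Invert the relations described for Equations 11 and 12 (CFD is not
    reset before the residual-charge transfer):
        V1 = (Qres + Qov * CFD / (CFD + CEXT)) / CFD
        V2 = (Qres + Qov) / (CFD + CEXT)
    Charges are in coulombs, capacitances in farads, voltages in volts."""
    c_total = c_fd + c_ext
    # Subtracting V1*CFD from V2*(CFD+CEXT) leaves Qov * CEXT / (CFD+CEXT).
    q_ov = (v2 * c_total - v1 * c_fd) * c_total / c_ext
    q_res = v2 * c_total - q_ov
    return q_res, q_ov
```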
To account for the opposite polarity of comparisons between the PD ADC operation and the FD ADC operation, pixel cell 1100 can be modified to reverse the polarity of comparison of comparator 1104 between the PD ADC and FD ADC operations.
In some examples, to reverse the polarity of comparison, the positive and negative inputs to comparator 1104 can also be swapped.
The operation in
The first voltage V1 and the second voltage V2 in this scheme can be expressed as follows:
V1=Qres/CFD (Equation 13)
V2=(Qres+Qov×CEXT/(CFD+CEXT))/(CFD+CEXT) (Equation 14)
In Equation 13, the first voltage can be based on a quantity of the residual charge Qres stored in CFD, as the first portion of the overflow charge is removed from CFD prior to the transfer of the residual charge. In Equation 14, the second voltage can be based on the residual charge Qres (which remains in CFD) and a second portion of the overflow charge Qov, which CEXT receives prior to being disconnected from CFD; both are redistributed within CFD+CEXT. The second portion can be based on a ratio between CEXT and the total capacitance CFD+CEXT. To obtain Qres and Qov, which can be used to measure the intensity of light, a post-processor can obtain the quantization outputs of PD ADC (V1) and FD ADC (V2) and compute Qres and Qov based on Equations 13 and 14 above. Based on whether the photodiode saturates, the post-processor can output a digital representation of Qres or a digital representation of Qov.
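The computation of Qres and Qov from the PD ADC output V1 and the FD ADC output V2, per Equations 13 and 14, can be sketched as follows. This is a minimal illustration with assumed capacitance values; the names are not from the disclosure:

```python
def recover_charges(v1, v2, c_fd, c_ext):
    """Invert Equations 13 and 14 to recover the residual charge Qres and
    the overflow charge Qov:
        Equation 13: V1 = Qres / CFD
        Equation 14: V2 = (Qres + Qov * CEXT / (CFD + CEXT)) / (CFD + CEXT)
    Charges are in coulombs, capacitances in farads, voltages in volts."""
    c_total = c_fd + c_ext
    q_res = v1 * c_fd                                 # from Equation 13
    q_ov = (v2 * c_total - q_res) * c_total / c_ext   # from Equation 14
    return q_res, q_ov
```

The scaling by c_total / c_ext reflects that only the second portion of the overflow charge (the fraction CEXT/(CFD+CEXT)) contributes to V2.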
Between times T3 and T4, the TG signal can set the M1 switch at the fully-on state to transfer the residual charge from the photodiode PD to CFD, which is emptied of charge prior to the transfer by the assertion of the RST signal between times T2 and T3. A PD ADC operation is performed to quantize the residual charge between times T4 and T5 as described above.
Between times T5 and T8, the LG signal can be asserted to reconnect the CFD with CEXT. The residual charge stored in CFD can combine with the second portion of the overflow charge in CEXT. The new PIXEL_OUT voltage Vpixel_out_sig3 at the parallel combination of CFD and CEXT can represent a total quantity of residual charge and the second portion of the overflow charge. The digital value obtained from the quantization of Vpixel_out_sig3 by the FD ADC operation can be scaled based on the ratio of capacitances between CFD and CEXT to obtain the full overflow charge.
Method 1800 starts with step 1802, in which within an exposure period (e.g., Texposure) a controller can enable a photodiode to, in response to incident light, accumulate residual charge, and to transfer overflow charge to a first charge storage device and a second charge storage device of a charge sensing unit when the photodiode saturates. The controller can de-assert the shutter switch to allow the photodiode to accumulate residual charge. The controller can also bias the transfer switch by a bias voltage to set a residual charge capacity of the photodiode. The residual charge capacity can correspond to, for example, threshold 702 of the low light intensity range 706 of
In step 1804, the controller can disconnect the second charge storage device from the first charge storage device. The disconnection can be based on disabling the capacitor switch. After the disconnection, the first charge storage device can store a first portion of the overflow charge received from the photodiode during the exposure period, whereas the second charge storage device can store a second portion of the overflow charge.
In step 1806, the controller can enable the photodiode to transfer the residual charge to the first charge storage device to cause the charge sensing unit to output a first voltage. The controller can fully turn on the transfer switch to remove the residual charge from the photodiode. In some examples, the controller can reset the first charge storage device (without resetting the second charge storage device), and the first voltage can be based on a quantity of the residual charge and the capacitance of the first charge storage device. In some examples, the controller does not reset the first charge storage device, and the first voltage can be based on a quantity of the residual charge, a quantity of the first portion of the overflow charge, and the capacitance of the first charge storage device. The buffer of the charge sensing unit can buffer the first voltage to increase its driving strength.
In step 1808, the controller can quantize the first voltage to generate a first digital value to measure the residual charge. The quantization can be part of the PD ADC operation, performed by comparing, using comparator 1104, the first voltage against a first voltage ramp. The first voltage ramp can start from Vref_high and end at Vref_low margin. In the PD ADC phase, Vref_high can represent the minimum detectable quantity of residual charge stored in photodiode PD, whereas Vref_low margin can represent the saturation threshold of photodiode PD with a margin to account for dark current. The comparator can be coupled with a latch signal to memory 812. When the comparator trips, memory 812 can latch in a count value from counter 811 representing the first digital value. The first digital value can be used to measure the quantity of the residual charge based on Equations 11-14 above. In some examples, the ADC can include an AC capacitor coupled between the charge sensing unit and the comparator to sample the reset noise and the comparator offset information. The sampling can be based on resetting the comparator and the charge storage devices simultaneously to set a VCC voltage across the AC capacitor, and adjusting the first voltage based on the VCC voltage to compensate for the effect of the reset noise and the comparator offset on the quantization operation based on Equations 5 and 6 above.
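The single-slope quantization described in step 1808 (and, with the opposite ramp direction, in step 1812) can be sketched as follows. The function is an idealized behavioral model, not the circuit of comparator 1104, and its name and parameters are illustrative:

```python
def ramp_quantize(v_in, v_start, v_stop, steps):
    """Behavioral model of single-slope (ramp) quantization: a counter
    advances in lockstep with the VREF ramp, and the count is latched
    (returned) when the comparator trips."""
    for count in range(steps):
        v_ref = v_start + (v_stop - v_start) * count / (steps - 1)
        # The comparator trips when the ramp crosses the input voltage,
        # in whichever direction the ramp is running.
        if (v_stop > v_start and v_ref >= v_in) or \
           (v_stop < v_start and v_ref <= v_in):
            return count  # memory latches this counter value
    return steps - 1      # ramp ended without tripping (saturation)
```

For the PD ADC phase the ramp would run downward from Vref_high toward Vref_low margin; for the FD ADC phase it would run upward from Vref_low toward Vref_high.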
In step 1810, the controller can connect the second charge storage device with the first charge storage device (e.g., by enabling the capacitor switch) to cause the charge sensing unit to output a second voltage. Specifically, via the connection, the charge stored in the first charge storage device (the residual charge, as well as a first portion of the overflow charge if the first charge storage device is not reset) and the charge stored in the second charge storage device (the second portion of the overflow charge) can redistribute between the charge storage devices, and a second voltage may be generated at the parallel combination of the charge storage devices.
In step 1812, the controller can quantize the second voltage to generate a second digital value to measure the overflow charge. Specifically, the quantization can be part of the FD ADC operation, performed by comparing, using comparator 1104, the second voltage against a second voltage ramp. The second voltage ramp can start from Vref_low and end at Vref_high. In the FD ADC phase, Vref_high can represent the saturation level of photodiode PD, whereas Vref_low can represent the minimum detectable quantity of overflow charge, and the polarity of comparison in the FD ADC phase can be opposite from that of the PD ADC phase. This can be due to different noise/offset compensation schemes used between the FD ADC and PD ADC phases. For example, a different noise/offset compensation scheme as described in Equations 8-10, in which the comparator is reset first and the charge storage devices are then reset while the comparator is out of the reset state, can be used to mitigate the effect of leakage, and the polarity of the comparison can be swapped as a result. Example circuits described in
In step 1814, the controller can generate a digital representation of the incident light intensity based on the first digital value and the second digital value. As described above, the controller can generate a measurement of the residual charge and a measurement of the overflow charge based on the first digital value (V1) and the second digital value (V2) and based on Equations 11-14 above. Based on whether the photodiode saturates (e.g., based on FLAG_2 signal), the controller can output the measurement of the residual charge or the measurement of the overflow charge to represent the incident light intensity.
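Putting steps 1804-1814 together, an idealized numerical model of method 1800 might look as follows. Charges are in coulombs and capacitances in farads; quantization is idealized, so the returned voltage stands in for the digital value, and all names and values are illustrative rather than part of the disclosure:

```python
def measure_intensity(q_res, q_ov, c_fd, c_ext, saturation_threshold):
    """Idealized model of method 1800 for the scheme of Equations 13-14."""
    c_total = c_fd + c_ext
    # Step 1804: disconnect CEXT; each device holds its portion of overflow.
    q_first = q_ov * c_fd / c_total    # first portion, on CFD
    q_second = q_ov * c_ext / c_total  # second portion, on CEXT
    # Steps 1806/1808: reset CFD (removing the first portion), transfer the
    # residual charge, and quantize the first voltage (Equation 13).
    v1 = q_res / c_fd
    # Steps 1810/1812: reconnect CEXT; charge redistributes over the parallel
    # combination, and the second voltage is quantized (Equation 14).
    v2 = (q_res + q_second) / c_total
    # Step 1814: select the output based on whether the photodiode saturated
    # (scaling of V2 to recover the full overflow charge is omitted here).
    saturated = q_res >= saturation_threshold
    return v2 if saturated else v1
```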
Some portions of this description describe the examples of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, and/or hardware.
Steps, operations, or processes described may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some examples, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
Examples of the disclosure may also relate to an apparatus for performing the operations described. The apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Examples of the disclosure may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any example of a computer program product or other data combination described herein.
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the examples is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
This patent application is a continuation of U.S. Non-Provisional application Ser. No. 16/454,787, filed Jun. 27, 2019, entitled “GLOBAL SHUTTER IMAGE SENSOR,” which claims priority to U.S. Provisional Patent Application Ser. No. 62/691,223, filed Jun. 28, 2018, entitled “DIGITAL PIXEL SENSOR WITH ENHANCED SHUTTER EFFICIENCY,” both of which are incorporated by reference in their entirety for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
4596977 | Bauman et al. | Jun 1986 | A |
5053771 | McDermott | Oct 1991 | A |
5844512 | Gorin et al. | Dec 1998 | A |
5963369 | Steinthal et al. | Oct 1999 | A |
6181822 | Miller | Jan 2001 | B1 |
6384905 | Barrows | May 2002 | B1 |
6522395 | Bamji et al. | Feb 2003 | B1 |
6529241 | Clark | Mar 2003 | B1 |
6864817 | Salvi et al. | Mar 2005 | B1 |
6963369 | Olding | Nov 2005 | B1 |
7326903 | Ackland | Feb 2008 | B2 |
7362365 | Reyneri et al. | Apr 2008 | B1 |
7659772 | Nomura et al. | Feb 2010 | B2 |
7659925 | Krymski | Feb 2010 | B2 |
7719589 | Turchetta et al. | May 2010 | B2 |
7880779 | Storm | Feb 2011 | B2 |
7956914 | Xu | Jun 2011 | B2 |
8134623 | Purcell et al. | Mar 2012 | B2 |
8144227 | Kobayashi | Mar 2012 | B2 |
8369458 | Wong et al. | Feb 2013 | B2 |
8426793 | Barrows | Apr 2013 | B1 |
8754798 | Lin | Jun 2014 | B2 |
8773562 | Fan | Jul 2014 | B1 |
8779346 | Fowler et al. | Jul 2014 | B2 |
8946610 | Iwabuchi et al. | Feb 2015 | B2 |
9001251 | Smith et al. | Apr 2015 | B2 |
9094629 | Ishibashi | Jul 2015 | B2 |
9185273 | Beck et al. | Nov 2015 | B2 |
9274151 | Lee et al. | Mar 2016 | B2 |
9282264 | Park et al. | Mar 2016 | B2 |
9332200 | Hseih et al. | May 2016 | B1 |
9343497 | Cho | May 2016 | B2 |
9363454 | Ito et al. | Jun 2016 | B2 |
9478579 | Dai et al. | Oct 2016 | B2 |
9497396 | Choi | Nov 2016 | B2 |
9531990 | Wilkins et al. | Dec 2016 | B1 |
9800260 | Banerjee | Oct 2017 | B1 |
9819885 | Furukawa et al. | Nov 2017 | B2 |
9832370 | Cho et al. | Nov 2017 | B2 |
9909922 | Schweickert et al. | Mar 2018 | B2 |
9948316 | Yun et al. | Apr 2018 | B1 |
9955091 | Dai et al. | Apr 2018 | B1 |
9967496 | Ayers et al. | May 2018 | B2 |
10003759 | Fan | Jun 2018 | B2 |
10015416 | Borthakur et al. | Jul 2018 | B2 |
10090342 | Gambino et al. | Oct 2018 | B1 |
10096631 | Ishizu | Oct 2018 | B2 |
10154221 | Ogino et al. | Dec 2018 | B2 |
10157951 | Kim et al. | Dec 2018 | B2 |
10321081 | Watanabe et al. | Jun 2019 | B2 |
10345447 | Hicks | Jul 2019 | B1 |
10419701 | Liu | Sep 2019 | B2 |
10574925 | Otaka | Feb 2020 | B2 |
10594974 | Ivarsson et al. | Mar 2020 | B2 |
10598546 | Liu | Mar 2020 | B2 |
10608101 | Liu | Mar 2020 | B2 |
10686996 | Liu | Jun 2020 | B2 |
10726627 | Liu | Jul 2020 | B2 |
10750097 | Liu | Aug 2020 | B2 |
10764526 | Liu et al. | Sep 2020 | B1 |
10804926 | Gao et al. | Oct 2020 | B2 |
10812742 | Chen et al. | Oct 2020 | B2 |
10825854 | Liu | Nov 2020 | B2 |
10834344 | Chen et al. | Nov 2020 | B2 |
10897586 | Liu et al. | Jan 2021 | B2 |
10903260 | Chen et al. | Jan 2021 | B2 |
10917589 | Liu | Feb 2021 | B2 |
10951849 | Liu | Mar 2021 | B2 |
10969273 | Berkovich et al. | Apr 2021 | B2 |
11004881 | Liu et al. | May 2021 | B2 |
11057581 | Liu | Jul 2021 | B2 |
11089210 | Berkovich et al. | Aug 2021 | B2 |
20020067303 | Lee et al. | Jun 2002 | A1 |
20020113886 | Hynecek | Aug 2002 | A1 |
20020118289 | Choi | Aug 2002 | A1 |
20030001080 | Kummaraguntla et al. | Jan 2003 | A1 |
20030020100 | Guidash | Jan 2003 | A1 |
20030049925 | Layman et al. | Mar 2003 | A1 |
20040095495 | Inokuma et al. | May 2004 | A1 |
20040118994 | Mizuno | Jun 2004 | A1 |
20040251483 | Ko et al. | Dec 2004 | A1 |
20050046715 | Lim et al. | Mar 2005 | A1 |
20050057389 | Krymski | Mar 2005 | A1 |
20050104983 | Raynor | May 2005 | A1 |
20050206414 | Cottin et al. | Sep 2005 | A1 |
20050237380 | Kakii et al. | Oct 2005 | A1 |
20050280727 | Sato et al. | Dec 2005 | A1 |
20060023109 | Mabuchi et al. | Feb 2006 | A1 |
20060146159 | Farrier | Jul 2006 | A1 |
20060157759 | Okita et al. | Jul 2006 | A1 |
20060158541 | Ichikawa | Jul 2006 | A1 |
20070013983 | Kitamura et al. | Jan 2007 | A1 |
20070076109 | Krymski | Apr 2007 | A1 |
20070076481 | Tennant | Apr 2007 | A1 |
20070092244 | Pertsel et al. | Apr 2007 | A1 |
20070102740 | Ellis-Monaghan et al. | May 2007 | A1 |
20070131991 | Sugawa | Jun 2007 | A1 |
20070208526 | Staudt et al. | Sep 2007 | A1 |
20070222881 | Mentzer | Sep 2007 | A1 |
20080001065 | Ackland | Jan 2008 | A1 |
20080007731 | Botchway et al. | Jan 2008 | A1 |
20080042888 | Danesh | Feb 2008 | A1 |
20080068478 | Watanabe | Mar 2008 | A1 |
20080088014 | Adkisson et al. | Apr 2008 | A1 |
20080191791 | Nomura et al. | Aug 2008 | A1 |
20080226170 | Sonoda | Sep 2008 | A1 |
20080226183 | Lei et al. | Sep 2008 | A1 |
20080266434 | Sugawa et al. | Oct 2008 | A1 |
20090002528 | Manabe et al. | Jan 2009 | A1 |
20090033588 | Kajita et al. | Feb 2009 | A1 |
20090040364 | Rubner | Feb 2009 | A1 |
20090091645 | Trimeche et al. | Apr 2009 | A1 |
20090128640 | Yumiki | May 2009 | A1 |
20090140305 | Sugawa | Jun 2009 | A1 |
20090219266 | Lim et al. | Sep 2009 | A1 |
20090224139 | Buettgen et al. | Sep 2009 | A1 |
20090237536 | Purcell et al. | Sep 2009 | A1 |
20090244346 | Funaki | Oct 2009 | A1 |
20090245637 | Barman et al. | Oct 2009 | A1 |
20090261235 | Lahav et al. | Oct 2009 | A1 |
20090321615 | Sugiyama et al. | Dec 2009 | A1 |
20100013969 | Ui | Jan 2010 | A1 |
20100140732 | Eminoglu et al. | Jun 2010 | A1 |
20100194956 | Yuan et al. | Aug 2010 | A1 |
20100232227 | Lee | Sep 2010 | A1 |
20100276572 | Iwabuchi et al. | Nov 2010 | A1 |
20110049589 | Chuang et al. | Mar 2011 | A1 |
20110122304 | Sedelnikov | May 2011 | A1 |
20110149116 | Kim | Jun 2011 | A1 |
20110155892 | Neter et al. | Jun 2011 | A1 |
20110254986 | Nishimura et al. | Oct 2011 | A1 |
20120016817 | Smith et al. | Jan 2012 | A1 |
20120039548 | Wang et al. | Feb 2012 | A1 |
20120068051 | Ahn et al. | Mar 2012 | A1 |
20120092677 | Suehira et al. | Apr 2012 | A1 |
20120105475 | Tseng | May 2012 | A1 |
20120105668 | Velarde et al. | May 2012 | A1 |
20120113119 | Massie | May 2012 | A1 |
20120127284 | Bar-Zeev et al. | May 2012 | A1 |
20120133807 | Wu et al. | May 2012 | A1 |
20120138775 | Cheon et al. | Jun 2012 | A1 |
20120153123 | Mao et al. | Jun 2012 | A1 |
20120188420 | Black et al. | Jul 2012 | A1 |
20120200499 | Osterhout et al. | Aug 2012 | A1 |
20120205520 | Hsieh et al. | Aug 2012 | A1 |
20120212465 | White et al. | Aug 2012 | A1 |
20120241591 | Wan et al. | Sep 2012 | A1 |
20120262616 | Sa et al. | Oct 2012 | A1 |
20120267511 | Kozlowski | Oct 2012 | A1 |
20120273654 | Hynecek et al. | Nov 2012 | A1 |
20120305751 | Kusuda | Dec 2012 | A1 |
20130020466 | Ayers et al. | Jan 2013 | A1 |
20130056809 | Mao et al. | Mar 2013 | A1 |
20130057742 | Nakamura et al. | Mar 2013 | A1 |
20130068929 | Solhusvik et al. | Mar 2013 | A1 |
20130069787 | Petrou | Mar 2013 | A1 |
20130082313 | Manabe | Apr 2013 | A1 |
20130113969 | Manabe et al. | May 2013 | A1 |
20130126710 | Kondo | May 2013 | A1 |
20130141619 | Lim et al. | Jun 2013 | A1 |
20130187027 | Qiao et al. | Jul 2013 | A1 |
20130207219 | Ahn | Aug 2013 | A1 |
20130214127 | Ohya et al. | Aug 2013 | A1 |
20130214371 | Asatsuma et al. | Aug 2013 | A1 |
20130218728 | Hashop et al. | Aug 2013 | A1 |
20130221194 | Manabe | Aug 2013 | A1 |
20130229543 | Hashimoto et al. | Sep 2013 | A1 |
20130229560 | Kondo | Sep 2013 | A1 |
20130234029 | Bikumandla | Sep 2013 | A1 |
20130293752 | Peng et al. | Nov 2013 | A1 |
20130299674 | Fowler et al. | Nov 2013 | A1 |
20140021574 | Egawa | Jan 2014 | A1 |
20140042299 | Wan et al. | Feb 2014 | A1 |
20140042582 | Kondo | Feb 2014 | A1 |
20140070974 | Park et al. | Mar 2014 | A1 |
20140078336 | Beck et al. | Mar 2014 | A1 |
20140085523 | Hynecek | Mar 2014 | A1 |
20140176770 | Kondo | Jun 2014 | A1 |
20140211052 | Choi | Jul 2014 | A1 |
20140232890 | Yoo et al. | Aug 2014 | A1 |
20140247382 | Moldovan et al. | Sep 2014 | A1 |
20140306276 | Yamaguchi | Oct 2014 | A1 |
20140368687 | Yu et al. | Dec 2014 | A1 |
20150083895 | Hashimoto et al. | Mar 2015 | A1 |
20150085134 | Novotny et al. | Mar 2015 | A1 |
20150090863 | Mansoorian et al. | Apr 2015 | A1 |
20150172574 | Honda et al. | Jun 2015 | A1 |
20150179696 | Kurokawa et al. | Jun 2015 | A1 |
20150189209 | Yang et al. | Jul 2015 | A1 |
20150201142 | Smith et al. | Jul 2015 | A1 |
20150208009 | Oh et al. | Jul 2015 | A1 |
20150229859 | Guidash et al. | Aug 2015 | A1 |
20150237274 | Yang et al. | Aug 2015 | A1 |
20150279884 | Kusumoto | Oct 2015 | A1 |
20150287766 | Kim et al. | Oct 2015 | A1 |
20150309311 | Cho | Oct 2015 | A1 |
20150309316 | Osterhout et al. | Oct 2015 | A1 |
20150312461 | Kim et al. | Oct 2015 | A1 |
20150312502 | Borremans | Oct 2015 | A1 |
20150312557 | Kim | Oct 2015 | A1 |
20150350582 | Korobov et al. | Dec 2015 | A1 |
20150358569 | Egawa | Dec 2015 | A1 |
20150358571 | Dominguez Castro et al. | Dec 2015 | A1 |
20150358593 | Sato | Dec 2015 | A1 |
20150381907 | Boettiger et al. | Dec 2015 | A1 |
20150381911 | Shen et al. | Dec 2015 | A1 |
20160011422 | Thurber et al. | Jan 2016 | A1 |
20160018645 | Haddick et al. | Jan 2016 | A1 |
20160021302 | Cho et al. | Jan 2016 | A1 |
20160028974 | Guidash et al. | Jan 2016 | A1 |
20160028980 | Kameyama et al. | Jan 2016 | A1 |
20160037111 | Dai et al. | Feb 2016 | A1 |
20160078614 | Ryu et al. | Mar 2016 | A1 |
20160088253 | Tezuka | Mar 2016 | A1 |
20160100113 | Oh et al. | Apr 2016 | A1 |
20160100115 | Kusano | Apr 2016 | A1 |
20160111457 | Sekine | Apr 2016 | A1 |
20160112626 | Shimada | Apr 2016 | A1 |
20160118992 | Milkov | Apr 2016 | A1 |
20160165160 | Hseih et al. | Jun 2016 | A1 |
20160197117 | Nakata et al. | Jul 2016 | A1 |
20160204150 | Oh et al. | Jul 2016 | A1 |
20160210785 | Balachandreswaran et al. | Jul 2016 | A1 |
20160240570 | Barna et al. | Aug 2016 | A1 |
20160249004 | Saeki et al. | Aug 2016 | A1 |
20160255293 | Gessei | Sep 2016 | A1 |
20160277010 | Park et al. | Sep 2016 | A1 |
20160307945 | Madurawe | Oct 2016 | A1 |
20160337605 | Ito | Nov 2016 | A1 |
20160353045 | Kawahito et al. | Dec 2016 | A1 |
20160360127 | Dierickx et al. | Dec 2016 | A1 |
20170013215 | McCarten | Jan 2017 | A1 |
20170039906 | Jepsen | Feb 2017 | A1 |
20170041571 | Tyrrell et al. | Feb 2017 | A1 |
20170053962 | Oh et al. | Feb 2017 | A1 |
20170059399 | Suh et al. | Mar 2017 | A1 |
20170062501 | Velichko et al. | Mar 2017 | A1 |
20170069363 | Baker | Mar 2017 | A1 |
20170070691 | Nishikido | Mar 2017 | A1 |
20170099422 | Goma et al. | Apr 2017 | A1 |
20170099446 | Cremers et al. | Apr 2017 | A1 |
20170104021 | Park et al. | Apr 2017 | A1 |
20170104946 | Hong | Apr 2017 | A1 |
20170111600 | Wang et al. | Apr 2017 | A1 |
20170141147 | Raynor | May 2017 | A1 |
20170154909 | Ishizu | Jun 2017 | A1 |
20170170223 | Hynecek et al. | Jun 2017 | A1 |
20170195602 | Iwabuchi | Jul 2017 | A1 |
20170201693 | Sugizaki et al. | Jul 2017 | A1 |
20170207268 | Kurokawa | Jul 2017 | A1 |
20170228345 | Gupta et al. | Aug 2017 | A1 |
20170270664 | Hoogi et al. | Sep 2017 | A1 |
20170272667 | Hynecek | Sep 2017 | A1 |
20170272768 | Tall et al. | Sep 2017 | A1 |
20170280031 | Price et al. | Sep 2017 | A1 |
20170293799 | Skogo et al. | Oct 2017 | A1 |
20170310910 | Smith et al. | Oct 2017 | A1 |
20170324917 | Mlinar et al. | Nov 2017 | A1 |
20170338262 | Hirata | Nov 2017 | A1 |
20170339327 | Koshkin et al. | Nov 2017 | A1 |
20170346579 | Barghi | Nov 2017 | A1 |
20170350755 | Geurts | Dec 2017 | A1 |
20170359497 | Mandelli et al. | Dec 2017 | A1 |
20170366766 | Geurts et al. | Dec 2017 | A1 |
20180019269 | Klipstein | Jan 2018 | A1 |
20180077368 | Suzuki | Mar 2018 | A1 |
20180115725 | Zhang et al. | Apr 2018 | A1 |
20180136471 | Miller et al. | May 2018 | A1 |
20180143701 | Suh et al. | May 2018 | A1 |
20180152650 | Sakakibara et al. | May 2018 | A1 |
20180167575 | Watanabe et al. | Jun 2018 | A1 |
20180175083 | Takahashi | Jun 2018 | A1 |
20180176545 | Aflaki Beni | Jun 2018 | A1 |
20180204867 | Kim et al. | Jul 2018 | A1 |
20180220093 | Murao et al. | Aug 2018 | A1 |
20180224658 | Teller | Aug 2018 | A1 |
20180227516 | Mo et al. | Aug 2018 | A1 |
20180241953 | Johnson | Aug 2018 | A1 |
20180270436 | Ivarsson et al. | Sep 2018 | A1 |
20180276841 | Krishnaswamy et al. | Sep 2018 | A1 |
20180376046 | Liu | Dec 2018 | A1 |
20180376090 | Liu | Dec 2018 | A1 |
20190035154 | Liu | Jan 2019 | A1 |
20190046044 | Tzvieli et al. | Feb 2019 | A1 |
20190052788 | Liu | Feb 2019 | A1 |
20190056264 | Liu | Feb 2019 | A1 |
20190057995 | Liu | Feb 2019 | A1 |
20190058058 | Liu | Feb 2019 | A1 |
20190098232 | Mori et al. | Mar 2019 | A1 |
20190104263 | Ochiai et al. | Apr 2019 | A1 |
20190104265 | Totsuka et al. | Apr 2019 | A1 |
20190110039 | Linde et al. | Apr 2019 | A1 |
20190123088 | Kwon | Apr 2019 | A1 |
20190141270 | Otaka et al. | May 2019 | A1 |
20190149751 | Wise | May 2019 | A1 |
20190157330 | Sato et al. | May 2019 | A1 |
20190172227 | Kasahara | Jun 2019 | A1 |
20190172868 | Chen et al. | Jun 2019 | A1 |
20190191116 | Madurawe | Jun 2019 | A1 |
20190246036 | Wu et al. | Aug 2019 | A1 |
20190253650 | Kim | Aug 2019 | A1 |
20190327439 | Chen et al. | Oct 2019 | A1 |
20190331914 | Lee et al. | Oct 2019 | A1 |
20190335151 | Rivard et al. | Oct 2019 | A1 |
20190348460 | Chen et al. | Nov 2019 | A1 |
20190355782 | Do et al. | Nov 2019 | A1 |
20190363118 | Berkovich et al. | Nov 2019 | A1 |
20190371845 | Chen et al. | Dec 2019 | A1 |
20190379827 | Berkovich | Dec 2019 | A1 |
20190379846 | Chen et al. | Dec 2019 | A1 |
20200007800 | Berkovich et al. | Jan 2020 | A1 |
20200053299 | Zhang et al. | Feb 2020 | A1 |
20200059589 | Liu et al. | Feb 2020 | A1 |
20200068189 | Chen et al. | Feb 2020 | A1 |
20200186731 | Chen et al. | Jun 2020 | A1 |
20200195875 | Berkovich et al. | Jun 2020 | A1 |
20200217714 | Liu | Jul 2020 | A1 |
20200228745 | Otaka | Jul 2020 | A1 |
20200374475 | Fukuoka et al. | Nov 2020 | A1 |
20210026796 | Graif et al. | Jan 2021 | A1 |
20210099659 | Miyauchi et al. | Apr 2021 | A1 |
20210185264 | Wong et al. | Jun 2021 | A1 |
20210227159 | Sambonsugi | Jul 2021 | A1 |
20210368124 | Berkovich et al. | Nov 2021 | A1 |
Number | Date | Country |
---|---|---|
1490878 | Apr 2004 | CN |
1728397 | Feb 2006 | CN |
1812506 | Aug 2006 | CN |
103207716 | Jul 2013 | CN |
104125418 | Oct 2014 | CN |
104204904 | Dec 2014 | CN |
104469195 | Mar 2015 | CN |
104704812 | Jun 2015 | CN |
104733485 | Jun 2015 | CN |
104754255 | Jul 2015 | CN |
105144699 | Dec 2015 | CN |
105706439 | Jun 2016 | CN |
106255978 | Dec 2016 | CN |
106791504 | May 2017 | CN |
109298528 | Feb 2019 | CN |
202016105510 | Oct 2016 | DE |
0675345 | Oct 1995 | EP |
1681856 | Jul 2006 | EP |
1732134 | Dec 2006 | EP |
1746820 | Jan 2007 | EP |
1788802 | May 2007 | EP |
2037505 | Mar 2009 | EP |
2063630 | May 2009 | EP |
2538664 | Dec 2012 | EP |
2804074 | Nov 2014 | EP |
2833619 | Feb 2015 | EP |
3032822 | Jun 2016 | EP |
3229457 | Oct 2017 | EP |
3258683 | Dec 2017 | EP |
3425352 | Jan 2019 | EP |
3439039 | Feb 2019 | EP |
3744085 | Dec 2020 | EP |
H08195906 | Jul 1996 | JP |
2002199292 | Jul 2002 | JP |
2003319262 | Nov 2003 | JP |
2005328493 | Nov 2005 | JP |
2006197382 | Jul 2006 | JP |
2006203736 | Aug 2006 | JP |
2007074447 | Mar 2007 | JP |
2011216966 | Oct 2011 | JP |
2012054495 | Mar 2012 | JP |
2012054876 | Mar 2012 | JP |
2012095349 | May 2012 | JP |
2013009087 | Jan 2013 | JP |
2013172203 | Sep 2013 | JP |
2014107596 | Jun 2014 | JP |
2014165733 | Sep 2014 | JP |
2014236183 | Dec 2014 | JP |
2015065524 | Apr 2015 | JP |
2015126043 | Jul 2015 | JP |
2015211259 | Nov 2015 | JP |
2016092661 | May 2016 | JP |
2017509251 | Mar 2017 | JP |
100574959 | Apr 2006 | KR |
20080019652 | Mar 2008 | KR |
20090023549 | Mar 2009 | KR |
20110050351 | May 2011 | KR |
20110134941 | Dec 2011 | KR |
20120058337 | Jun 2012 | KR |
20120117953 | Oct 2012 | KR |
20150095841 | Aug 2015 | KR |
20160008267 | Jan 2016 | KR |
20160008287 | Jan 2016 | KR |
201448184 | Dec 2014 | TW |
201719874 | Jun 2017 | TW |
201728161 | Aug 2017 | TW |
I624694 | May 2018 | TW |
2006124592 | Nov 2006 | WO |
2006129762 | Dec 2006 | WO |
2010117462 | Oct 2010 | WO |
2013099723 | Jul 2013 | WO |
2014055391 | Apr 2014 | WO |
2015135836 | Sep 2015 | WO |
2015182390 | Dec 2015 | WO |
2016095057 | Jun 2016 | WO |
2016194653 | Dec 2016 | WO |
2017003477 | Jan 2017 | WO |
2017013806 | Jan 2017 | WO |
2017047010 | Mar 2017 | WO |
2017058488 | Apr 2017 | WO |
2017069706 | Apr 2017 | WO |
2017169446 | Oct 2017 | WO |
2017169882 | Oct 2017 | WO |
2019018084 | Jan 2019 | WO |
2019111528 | Jun 2019 | WO |
2019145578 | Aug 2019 | WO |
2019168929 | Sep 2019 | WO |
Entry |
---|
U.S. Appl. No. 15/668,241, Advisory Action, dated Oct. 23, 2019, 5 pages. |
U.S. Appl. No. 15/668,241, Final Office Action, dated Jun. 17, 2019, 19 pages. |
U.S. Appl. No. 15/668,241, Non-Final Office Action, dated Dec. 21, 2018, 3 pages. |
U.S. Appl. No. 15/668,241, Notice of Allowance, dated Jun. 29, 2020, 8 pages. |
U.S. Appl. No. 15/668,241, Notice of Allowance, dated Mar. 5, 2020, 8 pages. |
U.S. Appl. No. 15/668,241, “Supplemental Notice of Allowability”, dated Apr. 29, 2020, 5 pages. |
U.S. Appl. No. 15/719,345, Final Office Action, dated Apr. 29, 2020, 14 pages. |
U.S. Appl. No. 15/719,345, Non-Final Office Action, dated Nov. 25, 2019, 14 pages. |
U.S. Appl. No. 15/719,345, Notice of Allowance, dated Aug. 12, 2020, 11 pages. |
U.S. Appl. No. 15/719,345, Notice of Allowance, dated Sep. 3, 2020, 12 pages. |
U.S. Appl. No. 15/801,216, Advisory Action, dated Apr. 7, 2020, 3 pages. |
U.S. Appl. No. 15/801,216, Final Office Action, dated Dec. 26, 2019, 5 pages. |
U.S. Appl. No. 15/801,216, Non-Final Office Action, dated Jun. 27, 2019, 13 pages. |
U.S. Appl. No. 15/801,216, Notice of Allowance, dated Jun. 23, 2020, 5 pages. |
U.S. Appl. No. 15/847,517, Non-Final Office Action, dated Nov. 23, 2018, 21 pages. |
U.S. Appl. No. 15/847,517, Notice of Allowance, dated May 1, 2019, 11 pages. |
U.S. Appl. No. 15/861,588, Non-Final Office Action, dated Jul. 10, 2019, 11 pages. |
U.S. Appl. No. 15/861,588, Notice of Allowance, dated Nov. 26, 2019, 9 pages. |
U.S. Appl. No. 15/876,061, “Corrected Notice of Allowability”, dated Apr. 28, 2020, 3 pages. |
U.S. Appl. No. 15/876,061, Non-Final Office Action, dated Sep. 18, 2019, 23 pages. |
U.S. Appl. No. 15/876,061, “Notice of Allowability”, dated May 6, 2020, 2 pages. |
U.S. Appl. No. 15/876,061, Notice of Allowance, dated Feb. 4, 2020, 13 pages. |
U.S. Appl. No. 15/927,896, Non-Final Office Action, dated May 1, 2019, 10 pages. |
U.S. Appl. No. 15/983,379, Notice of Allowance, dated Oct. 18, 2019, 9 pages. |
U.S. Appl. No. 15/983,391, Non-Final Office Action, dated Aug. 29, 2019, 12 pages. |
U.S. Appl. No. 15/983,391, Notice of Allowance, dated Apr. 8, 2020, 8 pages. |
U.S. Appl. No. 16/177,971, Final Office Action, dated Feb. 27, 2020, 9 pages. |
U.S. Appl. No. 16/177,971, Non-Final Office Action, dated Sep. 25, 2019, 9 pages. |
U.S. Appl. No. 16/177,971, Notice of Allowance, dated Apr. 24, 2020, 6 pages. |
U.S. Appl. No. 16/210,748, Final Office Action, dated Jul. 7, 2020, 11 pages. |
U.S. Appl. No. 16/210,748, Non-Final Office Action, dated Jan. 31, 2020, 11 pages. |
U.S. Appl. No. 16/249,420, Non-Final Office Action, dated Jul. 22, 2020, 9 pages. |
U.S. Appl. No. 16/249,420, Notice of Allowance, dated Nov. 18, 2020, 8 pages. |
U.S. Appl. No. 16/286,355, Non-Final Office Action, dated Oct. 1, 2019, 6 pages. |
U.S. Appl. No. 16/286,355, Notice of Allowance, dated Feb. 12, 2020, 7 pages. |
U.S. Appl. No. 16/286,355, Notice of Allowance, dated Jun. 4, 2020, 7 pages. |
U.S. Appl. No. 16/369,763, Non-Final Office Action, dated Jul. 22, 2020, 15 pages. |
U.S. Appl. No. 16/382,015, Notice of Allowance, dated Jun. 11, 2020, 11 pages. |
U.S. Appl. No. 16/384,720, Non-Final Office Action, dated May 1, 2020, 6 pages. |
U.S. Appl. No. 16/384,720, Notice of Allowance, dated Aug. 26, 2020, 8 pages. |
U.S. Appl. No. 16/431,693, Non-Final Office Action, dated Jan. 30, 2020, 6 pages. |
U.S. Appl. No. 16/431,693, Notice of Allowance, dated Jun. 24, 2020, 7 pages. |
U.S. Appl. No. 16/435,449, Notice of Allowance, dated Sep. 16, 2020, 7 pages. |
U.S. Appl. No. 16/435,449, Notice of Allowance, dated Jul. 27, 2020, 8 pages. |
U.S. Appl. No. 16/436,049, Non-Final Office Action, dated Jun. 30, 2020, 11 pages. |
U.S. Appl. No. 16/436,049, Non-Final Office Action, dated Mar. 4, 2020, 9 pages. |
U.S. Appl. No. 16/436,137, Non-Final Office Action, dated Dec. 4, 2020, 12 pages. |
U.S. Appl. No. 16/454,787, Notice of Allowance, dated Apr. 22, 2020, 10 pages. |
U.S. Appl. No. 16/454,787, Notice of Allowance, dated Jul. 9, 2020, 9 pages. |
U.S. Appl. No. 16/454,787, Notice of Allowance, dated Sep. 9, 2020, 9 pages. |
U.S. Appl. No. 16/566,583, “Corrected Notice of Allowability”, dated Dec. 11, 2020, 2 pages. |
U.S. Appl. No. 16/566,583, Final Office Action, dated Apr. 15, 2020, 24 pages. |
U.S. Appl. No. 16/566,583, Non-Final Office Action, dated Oct. 1, 2019, 10 pages. |
U.S. Appl. No. 16/566,583, Non-Final Office Action, dated Jul. 27, 2020, 11 pages. |
U.S. Appl. No. 16/566,583, Notice of Allowance, dated Nov. 3, 2020, 11 pages. |
U.S. Appl. No. 16/707,988, Non-Final Office Action, dated Sep. 22, 2020, 15 pages. |
Cho et al., “A Low Power Dual CDS for a Column-Parallel CMOS Image Sensor”, Journal of Semiconductor Technology and Science, vol. 12, No. 4, Dec. 30, 2012, pp. 388-396. |
Application No. EP18179838.0, Extended European Search Report, dated May 24, 2019, 17 pages. |
Application No. EP18179838.0, Partial European Search Report, dated Dec. 5, 2018, 14 pages. |
Application No. EP18179846.3, Extended European Search Report, dated Dec. 7, 2018, 10 pages. |
Application No. EP18179851.3, Extended European Search Report, dated Dec. 7, 2018, 8 pages. |
Application No. EP18188684.7, Extended European Search Report, dated Jan. 16, 2019, 10 pages. |
Application No. EP18188684.7, Office Action, dated Nov. 26, 2019, 9 pages. |
Application No. EP18188962.7, Extended European Search Report, dated Oct. 23, 2018, 8 pages. |
Application No. EP18188962.7, Office Action, dated Aug. 28, 2019, 6 pages. |
Application No. EP18188968.4, Extended European Search Report, dated Oct. 23, 2018, 8 pages. |
Application No. EP18188968.4, Office Action, dated Aug. 14, 2019, 5 pages. |
Application No. EP18189100.3, Extended European Search Report, dated Oct. 9, 2018, 8 pages. |
Kavusi et al., “Quantitative Study of High-Dynamic-Range Image Sensor Architectures”, Proceedings of Society of Photo-Optical Instrumentation Engineers—The International Society for Optical Engineering, vol. 5301, Jun. 2004, pp. 264-275. |
Application No. PCT/US2018/039350, International Preliminary Report on Patentability, dated Jan. 9, 2020, 10 pages. |
Application No. PCT/US2018/039350, International Search Report and Written Opinion, dated Nov. 15, 2018, 13 pages. |
Application No. PCT/US2018/039352, International Search Report and Written Opinion, dated Oct. 26, 2018, 10 pages. |
Application No. PCT/US2018/039431, International Search Report and Written Opinion, dated Nov. 7, 2018, 14 pages. |
Application No. PCT/US2018/045661, International Search Report and Written Opinion, dated Nov. 30, 2018, 11 pages. |
Application No. PCT/US2018/045666, International Preliminary Report on Patentability, dated Feb. 27, 2020, 11 pages. |
Application No. PCT/US2018/045666, International Search Report and Written Opinion, dated Dec. 3, 2018, 13 pages. |
Application No. PCT/US2018/045673, International Search Report and Written Opinion, dated Dec. 4, 2018, 13 pages. |
Application No. PCT/US2018/046131, International Search Report and Written Opinion, dated Dec. 3, 2018, 10 pages. |
Application No. PCT/US2018/064181, International Preliminary Report on Patentability, dated Jun. 18, 2020, 9 pages. |
Application No. PCT/US2018/064181, International Search Report and Written Opinion, dated Mar. 29, 2019, 12 pages. |
Application No. PCT/US2019/014044, International Search Report and Written Opinion, dated May 8, 2019, 11 pages. |
Application No. PCT/US2019/019756, International Search Report and Written Opinion, dated Jun. 13, 2019, 11 pages. |
Application No. PCT/US2019/025170, International Search Report and Written Opinion, dated Jul. 9, 2019, 11 pages. |
Application No. PCT/US2019/027727, International Search Report and Written Opinion, dated Jun. 27, 2019, 11 pages. |
Application No. PCT/US2019/027729, International Search Report and Written Opinion, dated Jun. 27, 2019, 10 pages. |
Application No. PCT/US2019/031521, International Search Report and Written Opinion, dated Jul. 11, 2019, 11 pages. |
Application No. PCT/US2019/035724, International Search Report and Written Opinion, dated Sep. 10, 2019, 12 pages. |
Application No. PCT/US2019/036484, International Search Report and Written Opinion, dated Sep. 19, 2019, 10 pages. |
Application No. PCT/US2019/036492, International Search Report and Written Opinion, dated Sep. 25, 2019, 9 pages. |
Application No. PCT/US2019/036536, International Search Report and Written Opinion, dated Sep. 26, 2019, 14 pages. |
Application No. PCT/US2019/036575, International Search Report and Written Opinion, dated Sep. 30, 2019, 16 pages. |
Application No. PCT/US2019/039410, International Search Report and Written Opinion, dated Sep. 30, 2019, 11 pages. |
Application No. PCT/US2019/039758, International Search Report and Written Opinion, dated Oct. 11, 2019, 13 pages. |
Application No. PCT/US2019/047156, International Search Report and Written Opinion, dated Oct. 23, 2019, 9 pages. |
Application No. PCT/US2019/048241, International Search Report and Written Opinion, dated Jan. 28, 2020, 16 pages. |
Application No. PCT/US2019/049756, International Search Report and Written Opinion, dated Dec. 16, 2019, 8 pages. |
Application No. PCT/US2019/059754, International Search Report and Written Opinion, dated Mar. 24, 2020, 15 pages. |
Application No. PCT/US2019/065430, International Search Report and Written Opinion, dated Mar. 6, 2020, 15 pages. |
Snoeij, “A Low Power Column-Parallel 12-Bit ADC for CMOS Imagers”, Institute of Electrical and Electronics Engineers Workshop on Charge-Coupled Devices and Advanced Image Sensors, Jun. 2005, pp. 169-172. |
Tanner et al., “Low-Power Digital Image Sensor for Still Picture Image Acquisition”, Visual Communications and Image Processing, vol. 4306, Jan. 22, 2001, 8 pages. |
Xu et al., “A New Digital-Pixel Architecture for CMOS Image Sensor with Pixel-Level ADC and Pulse Width Modulation using a 0.18 Mu M CMOS Technology”, Institute of Electrical and Electronics Engineers Conference on Electron Devices and Solid-State Circuits, Dec. 16-18, 2003, pp. 265-268. |
U.S. Appl. No. 16/435,451, “Final Office Action”, dated Jul. 12, 2021, 13 pages. |
U.S. Appl. No. 16/435,451, “Non-Final Office Action”, dated Feb. 1, 2021, 14 pages. |
U.S. Appl. No. 16/436,049, “Notice of Allowance”, dated Oct. 21, 2020, 8 pages. |
U.S. Appl. No. 16/566,583, “Corrected Notice of Allowability”, dated Feb. 3, 2021, 2 pages. |
U.S. Appl. No. 16/707,988, “Corrected Notice of Allowability”, dated Jul. 26, 2021, 2 pages. |
U.S. Appl. No. 16/707,988, “Notice of Allowance”, dated May 5, 2021, 13 pages. |
U.S. Appl. No. 16/820,594, “Non-Final Office Action”, dated Jul. 2, 2021, 8 pages. |
U.S. Appl. No. 16/896,130, “Non-Final Office Action”, dated Mar. 15, 2021, 16 pages. |
U.S. Appl. No. 16/896,130, “Notice of Allowance”, dated Jul. 13, 2021, 8 pages. |
U.S. Appl. No. 16/899,908, “Notice of Allowance”, dated Sep. 17, 2021, 11 pages. |
U.S. Appl. No. 17/072,840, “Non-Final Office Action”, dated Jun. 8, 2021, 7 pages. |
EP19737299.8, “Office Action”, dated Jul. 7, 2021, 5 pages. |
Communication Pursuant Article 94(3) dated Dec. 23, 2021 for European Application No. 19744961.4, filed Jun. 28, 2019, 8 pages. |
Final Office Action dated Dec. 3, 2021 for U.S. Appl. No. 17/072,840, filed Oct. 16, 2020, 23 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/014904, dated Aug. 5, 2019, 7 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/019765, dated Jun. 14, 2019, 9 Pages. |
Non-Final Office Action dated Apr. 21, 2021 for U.S. Appl. No. 16/453,538, filed Jun. 26, 2019, 16 Pages. |
Non-Final Office Action dated Apr. 27, 2021 for U.S. Appl. No. 16/829,249, filed Mar. 25, 2020, 9 Pages. |
Notice of Allowance dated Jan. 7, 2022 for U.S. Appl. No. 16/899,908, filed Jun. 12, 2020, 10 pages. |
Notice of Allowance dated Nov. 17, 2021 for U.S. Appl. No. 16/899,908, filed Jun. 12, 2020, 7 Pages. |
Notice of Allowance dated Dec. 21, 2021 for U.S. Appl. No. 16/550,851, filed Aug. 26, 2019, 10 pages. |
Notice of Allowance dated Jan. 22, 2021 for U.S. Appl. No. 16/369,763, filed Mar. 29, 2019, 8 Pages. |
Notice of Allowance dated Nov. 22, 2021 for U.S. Appl. No. 16/820,594, filed Mar. 16, 2020, 18 pages. |
Notice of Allowance dated Nov. 22, 2021 for U.S. Appl. No. 16/820,594, filed Mar. 16, 2020, 8 pages. |
Notice of Allowance dated Oct. 25, 2021 for U.S. Appl. No. 16/435,451, filed Jun. 7, 2019, 8 Pages. |
Notice of Allowance dated Oct. 26, 2021 for U.S. Appl. No. 16/896,130, filed Jun. 8, 2020, 8 Pages. |
Office Action dated Sep. 30, 2021 for Taiwan Application No. 107124385, 17 Pages. |
Snoeij M.F., et al., “A Low-Power Column-Parallel 12-bit ADC for CMOS Imagers,” XP007908033, Jun. 1, 2005, pp. 169-172. |
Advisory Action dated Oct. 8, 2020 for U.S. Appl. No. 16/210,748, filed Dec. 5, 2018, 4 Pages. |
Amir M.F., et al., “3-D Stacked Image Sensor With Deep Neural Network Computation,” IEEE Sensors Journal, IEEE Service Center, New York, NY, US, May 15, 2018, vol. 18 (10), pp. 4187-4199, XP011681876. |
Chuxi L., et al., “A Memristor-Based Processing-in-Memory Architecture for Deep Convolutional Neural Networks Approximate Computation,” Journal of Computer Research and Development, Jun. 30, 2017, vol. 54 (6), pp. 1367-1380. |
Communication Pursuant Article 94(3) dated Jan. 5, 2022 for European Application No. 19740456.9, filed Jun. 27, 2019, 12 pages. |
Corrected Notice of Allowability dated Apr. 9, 2021 for U.S. Appl. No. 16/255,528, filed Jan. 23, 2019, 5 Pages. |
Extended European Search Report for European Application No. 19743908.6, dated Sep. 30, 2020, 9 Pages. |
Final Office Action dated Oct. 18, 2021 for U.S. Appl. No. 16/716,050, filed Dec. 16, 2019, 18 Pages. |
Final Office Action dated Oct. 21, 2021 for U.S. Appl. No. 16/421,441, filed May 23, 2019, 23 Pages. |
Final Office Action dated Jan. 27, 2021 for U.S. Appl. No. 16/255,528, filed Jan. 23, 2019, 31 Pages. |
Final Office Action dated Jul. 28, 2021 for U.S. Appl. No. 17/083,920, filed Oct. 29, 2020, 19 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/034007, dated Oct. 28, 2019, 18 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/066805, dated Mar. 6, 2020, 9 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2019/066831, dated Feb. 27, 2020, 11 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/044807, dated Sep. 30, 2020, 12 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/058097, dated Feb. 12, 2021, 9 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/059636, dated Feb. 11, 2021, 18 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/031201, dated Aug. 2, 2021, 13 Pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/033321, dated Sep. 6, 2021, 11 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/041775, dated Nov. 29, 2021, 14 pages. |
Millet L., et al., “A 5500-Frames/s 85-GOPS/W 3-D Stacked BSI Vision Chip Based on Parallel In-Focal-Plane Acquisition and Processing,” IEEE Journal of Solid-State Circuits, USA, Apr. 1, 2019, vol. 54 (4), pp. 1096-1105, XP011716786. |
Non-Final Office Action dated Jan. 1, 2021 for U.S. Appl. No. 16/715,792, filed Dec. 16, 2019, 15 Pages. |
Non-Final Office Action dated Sep. 2, 2021 for U.S. Appl. No. 16/910,844, filed Jun. 24, 2020, 7 Pages. |
Non-Final Office Action dated May 7, 2021 for U.S. Appl. No. 16/421,441, filed May 23, 2019, 17 Pages. |
Non-Final Office Action dated Jul. 10, 2020 for U.S. Appl. No. 16/255,528, filed Jan. 23, 2019, 27 Pages. |
Non-Final Office Action dated May 14, 2021 for U.S. Appl. No. 16/716,050, filed Dec. 16, 2019, 16 Pages. |
Non-Final Office Action dated Apr. 21, 2021 for U.S. Appl. No. 17/083,920, filed Oct. 29, 2020, 17 Pages. |
Non-Final Office Action dated Oct. 21, 2021 for U.S. Appl. No. 17/083,920, filed Oct. 29, 2020, 19 Pages. |
Non-Final Office Action dated Jul. 25, 2019 for U.S. Appl. No. 15/909,162, filed Mar. 1, 2018, 20 Pages. |
Notice of Allowance dated Apr. 1, 2021 for U.S. Appl. No. 16/255,528, filed Jan. 23, 2019, 7 Pages. |
Notice of Allowance dated Nov. 2, 2021 for U.S. Appl. No. 16/453,538, filed Jun. 26, 2019, 8 Pages. |
Notice of Allowance dated Dec. 8, 2021 for U.S. Appl. No. 16/829,249, filed Mar. 25, 2020, 6 pages. |
Notice of Allowance dated Oct. 14, 2020 for U.S. Appl. No. 16/384,720, filed Apr. 15, 2019, 8 Pages. |
Notice of Allowance dated Oct. 15, 2020 for U.S. Appl. No. 16/544,136, filed Aug. 19, 2019, 11 Pages. |
Notice of Allowance dated Apr. 16, 2021 for U.S. Appl. No. 16/715,792, filed Dec. 16, 2019, 10 Pages. |
Notice of Allowance dated Mar. 18, 2020 for U.S. Appl. No. 15/909,162, filed Mar. 1, 2018, 9 Pages. |
Notice of Allowance dated Dec. 22, 2021 for U.S. Appl. No. 16/910,844, filed Jun. 24, 2020, 7 pages. |
Notice of Allowance dated Nov. 24, 2021 for U.S. Appl. No. 16/910,844, filed Jun. 24, 2020, 8 pages. |
Notice of Allowance dated Aug. 25, 2021 for U.S. Appl. No. 16/715,792, filed Dec. 16, 2019, 9 Pages. |
Notice of Allowance dated Aug. 30, 2021 for U.S. Appl. No. 16/829,249, filed Mar. 25, 2020, 8 pages. |
Notice of Reason for Rejection dated Nov. 16, 2021 for Japanese Application No. 2019-571699, filed Jun. 25, 2018, 13 pages. |
Office Action dated Jul. 3, 2020 for Chinese Application No. 201810821296, filed Jul. 24, 2018, 17 Pages. |
Office Action dated Jul. 7, 2021 for European Application No. 19723902.3, filed Apr. 1, 2019, 3 Pages. |
Office Action dated Mar. 9, 2021 for Chinese Application No. 201810821296, filed Jul. 24, 2018, 10 Pages. |
Office Action dated Dec. 14, 2021 for Japanese Application No. 2019571598, filed Jun. 26, 2018, 12 pages. |
Office Action dated Jun. 28, 2020 for Chinese Application No. 201810821296, filed Jul. 24, 2018, 2 Pages. |
Partial International Search Report and Provisional Opinion for International Application No. PCT/US2021/041775, dated Oct. 8, 2021, 12 pages. |
Restriction Requirement dated Feb. 2, 2021 for U.S. Appl. No. 16/716,050, filed Dec. 16, 2019, 7 Pages. |
Sebastian A., et al., “Memory Devices and Applications for In-memory Computing,” Nature Nanotechnology, Nature Publication Group, Inc, London, Mar. 30, 2020, vol. 15 (7), pp. 529-544, XP037194929. |
Shi C., et al., “A 1000fps Vision Chip Based on a Dynamically Reconfigurable Hybrid Architecture Comprising a PE Array and Self-Organizing Map Neural Network,” International Solid-State Circuits Conference, Session 7, Image Sensors, Feb. 10, 2014, pp. 128-130, XP055826878. |
International Search Report and Written Opinion for International Application No. PCT/US2021/057966, dated Feb. 22, 2022, 15 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/054327, dated Feb. 14, 2022, 8 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/065174 dated Mar. 28, 2022, 10 pages. |
Non-Final Office Action dated Mar. 2, 2022 for U.S. Appl. No. 17/127,670, filed Dec. 18, 2020, 18 pages. |
Non-Final Office Action dated Apr. 13, 2022 for U.S. Appl. No. 16/820,594, filed Mar. 16, 2020, 7 pages. |
Non-Final Office Action dated Mar. 28, 2022 for U.S. Appl. No. 17/072,840, filed Oct. 16, 2020, 8 Pages. |
Notice of Allowance dated Mar. 2, 2022 for U.S. Appl. No. 16/453,538, filed Jun. 26, 2019, 8 pages. |
Notice of Allowance dated Mar. 7, 2022 for U.S. Appl. No. 16/421,441, filed May 23, 2019, 18 pages. |
Notice of Allowance dated Mar. 11, 2022 for U.S. Appl. No. 16/716,050, filed Dec. 16, 2019, 13 pages. |
Notice of Allowance dated Apr. 19, 2022 for U.S. Appl. No. 16/899,908, filed Jun. 12, 2020, 10 pages. |
Notice of Allowance dated Apr. 27, 2022 for U.S. Appl. No. 16/896,130, filed Jun. 8, 2020, 8 pages. |
Notice of Allowance dated Apr. 28, 2022 for U.S. Appl. No. 16/435,451, filed Jun. 7, 2019, 9 pages. |
Notification of the First Office Action dated Oct. 28, 2021 for Chinese Application No. 2019800218483, filed Jan. 24, 2019, 17 pages. |
Office Action dated Mar. 15, 2022 for Japanese Patent Application No. 2020505830, filed on Aug. 9, 2018, 12 pages. |
Office Action dated Mar. 17, 2022 for Taiwan Application No. 20180124384, 26 pages. |
Office Action dated Mar. 29, 2022 for Japanese Patent Application No. 2020520431, filed on Jun. 25, 2018, 10 pages. |
Notice of Allowance dated Jul. 5, 2022 for U.S. Appl. No. 16/899,908, filed Jun. 12, 2020, 10 pages. |
Notice of Allowance dated Aug. 10, 2022 for U.S. Appl. No. 16/896,130, filed Jun. 8, 2020, 8 pages. |
Notice of Allowance dated Aug. 22, 2022 for U.S. Appl. No. 16/435,451, filed Jun. 7, 2019, 8 pages. |
Notice of Allowance dated Aug. 23, 2022 for U.S. Appl. No. 16/896,130, filed Jun. 8, 2020, 2 pages. |
Office Action for European Application No. 18179851.3, dated May 19, 2022, 7 pages. |
Office Action dated Jul. 5, 2022 for Korean Application No. 10-2020-7002533, filed Jun. 25, 2018, 13 pages. |
Office Action dated Jul. 12, 2022 for Japanese Application No. 2019-571699, filed Jun. 25, 2018, 5 pages. |
Office Action dated May 18, 2022 for Taiwan Application No. 108122878, 24 pages. |
Office Action dated Jul. 19, 2022 for Japanese Application No. 2019571598, filed Jun. 26, 2018, 10 pages. |
Final Office Action dated Dec. 2, 2022 for U.S. Appl. No. 17/072,840, filed Oct. 16, 2020, 9 pages. |
Notice of Allowance dated Dec. 6, 2022 for U.S. Appl. No. 16/896,130, filed Jun. 8, 2020, 8 pages. |
Notice of Allowance dated Dec. 9, 2022 for U.S. Appl. No. 16/435,451, filed Jun. 7, 2019, 8 pages. |
Notice of Allowance dated Dec. 13, 2022 for U.S. Appl. No. 16/820,594, filed Mar. 16, 2020, 5 pages. |
Notice of Allowance dated Nov. 21, 2022 for U.S. Appl. No. 17/242,152, filed Apr. 27, 2021, 10 pages. |
Notice of Allowance dated Oct. 21, 2022 for U.S. Appl. No. 16/899,908, filed Jun. 12, 2020, 10 pages. |
Office Action dated Nov. 2, 2022 for Taiwan Application No. 107128759, filed Aug. 17, 2018, 16 pages. |
Office Action dated Dec. 1, 2022 for Korean Application No. 10-2020-7002306, filed Jun. 25, 2018, 13 pages. |
Office Action dated Nov. 1, 2022 for Japanese Patent Application No. 2020-520431, filed on Jun. 25, 2018, 11 pages. |
Office Action dated Nov. 15, 2022 for Taiwan Application No. 108120143, filed Jun. 11, 2019, 8 pages. |
Office Action dated Sep. 26, 2022 for Korean Patent Application No. 10-2020-7002496, filed on Jun. 26, 2018, 17 pages. |
Office Action dated Sep. 29, 2022 for Taiwan Application No. 108122878, filed Jun. 28, 2019, 9 pages. |
Number | Date | Country | |
---|---|---|---|
20210243390 A1 | Aug 2021 | US |
Number | Date | Country | |
---|---|---|---|
62691223 | Jun 2018 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16454787 | Jun 2019 | US |
Child | 17150925 | US |