The present disclosure relates to in-pixel image processing. More particularly, the present disclosure relates to utilizing programmable analog computing elements for in-pixel image processing for multi-frame imaging.
Three-dimensional (3D) cameras, four-dimensional (4D) cameras, and related high performance multi-frame imaging systems are capable of providing more than just two-dimensional images of a scene. Multi-frame imaging systems can provide, for example, distance measurements, motion measurements, and/or photonic measurements for physical objects in a scene. An example of a multi-frame camera system that generates lighting-invariant depth maps for in-motion applications and attenuating environments is disclosed in U.S. Pat. No. 10,873,738 (Retterath).
On-chip or in-sensor image processing has been used to: 1) increase the performance of image processing by adding computing parallelism, 2) reduce the amount of information sent from a sensor, and/or 3) reduce the power consumption for image processing.
One of the earliest on-chip image processing systems was the SCAMP chip. (Piotr Dudek, "A General-Purpose Processor-per-Pixel Analog SIMD Vision Chip", IEEE Transactions on Circuits and Systems-I: Regular Papers, Vol. 52, No. 1, January 2005). The most current version of the SCAMP chip is the SCAMP-5 chip, which features a high-speed analog VLSI image acquisition and low-level image processing system. The architecture of the SCAMP-5 chip is based on a dynamically reconfigurable SIMD processor array that features a massively parallel architecture enabling the computation of programmable mask-based image processing in each pixel. (Wong, "The SCAMP-5 Vision Chip is a Focal-Plane Sensor-Processor (FPSP) developed at the University of Manchester" (Carey et al., 2013a), 6 pages). The chip can capture raw images at up to 10,000 fps and runs low-level image processing at a frame rate of 2,000-5,000 fps.
Various examples of on-chip processing systems for high performance imaging systems are described in U.S. Pat. Nos. 8,102,426, 8,629,387, 9,094,628, and 10,218,913, U.S. Publ. Appl. No. US2019/0056498A1, and Martel et al. ("Parallel HDR Tone Mapping and Auto-Focus on a Cellular Processor Array Vision Chip," 2016 IEEE International Symposium on Circuits and Systems, May 2016, 4 pages).
In view of limitations in the art, it is desirable to have a sensing and computing system for multi-frame imaging that performs in-pixel computing for high parallelism, reduced information flow, and reduced power consumption.
Neighbor-in-space image processing, which relies on convolution of information from neighboring pixels within an image, has led to advances in signal processing, artificial intelligence, and machine learning. Neighbor-in-time image processing for single-frame images, which relies on recursive computing to identify commonalities and differences between successive images in an imaging sequence, has led to advances in object tracking, visual odometry, and Structure from Motion. Neighbor-in-time image processing for multi-frame images has led to advances in HDR (high dynamic range) sensing, XDR (extended dynamic range) sensing, and 3D imaging.
In contrast to conventional neighbor-in-time and neighbor-in-space processing, which is performed off-sensor and uses significant computational and power resources, various embodiments as disclosed provide for in-pixel embedded analog image processing whereby computation performed within an image pixel takes advantage of high parallelism, because each pixel has its own processor, and of locality of data, because all data is located within a pixel or within a neighboring pixel. Embodiments of in-pixel embedded analog image processing also provide reduced power consumption because fewer transistors are energized for math, logic, and register transfer operations with analog computing than for the equivalent operations in a digital processing environment.
In embodiments, an in-pixel analog image processing device comprises an array of analog in-pixel processing elements. Each in-pixel processing element includes a photodetector, photodetector capture circuitry, analog circuitry configured to process both neighbor-in-space and neighbor-in-time functions for analog data representing an electrical current from the photodetector capture circuitry, and a set of north-east-west-south (NEWS) registers, each register interconnected between a unique pair of neighboring in-pixel processing elements to transfer analog data between the pair of neighboring in-pixel processing elements.
In embodiments, a sub-frame imaging pixel is implemented in a four-substrate hardware configuration whereby information flows from a photodetector substrate to a photodetector control (PDC) substrate to an analog pixel processing (APP) substrate to a digital memory substrate. In various embodiments, circuitry within the PDC substrate can be controlled by instruction bits from a PDC instruction word and circuitry within the APP substrate can be controlled by instruction bits from an APP instruction word. In various embodiments, circuitry within the APP substrate performs neighbor-in-time processing on sub-frames, performs neighbor-in-time processing on frames within a stream of frames, and performs neighbor-in-space processing by utilizing analog North-East-West-South (NEWS) connection registers for transfer of information to/from neighboring pixels. In embodiments, the pitch of the four-substrate, sub-frame imaging pixels ranges from 1.5 μm to 40 μm.
In embodiments, a sub-frame imaging pixel is implemented in a single-substrate hardware configuration whereby information flows from a photodetector to PDC circuitry to APP circuitry to digital memory. In various embodiments, the PDC circuitry is controlled by instruction bits from a PDC instruction word and the APP circuitry is controlled by instruction bits from an APP instruction word. In various embodiments, APP circuitry performs neighbor-in-time processing on sub-frames, performs neighbor-in-time processing on frames within a stream of frames, and performs neighbor-in-space processing by utilizing analog NEWS registers for transfer of information to/from neighboring pixels. In embodiments, the pitch of the single-substrate, sub-frame imaging pixels ranges from 1.5 μm to 40 μm.
In embodiments, a sub-frame imaging pixel is implemented in a two-substrate hardware configuration whereby information flows from a first photodetector substrate to a second substrate that includes PDC circuitry, APP circuitry, and digital memory. In various embodiments, the PDC circuitry is controlled by instruction bits from a PDC instruction word and the APP circuitry is controlled by instruction bits from an APP instruction word. In various embodiments, APP circuitry performs neighbor-in-time processing on sub-frames, performs neighbor-in-time processing on frames within a stream of frames, and performs neighbor-in-space processing by utilizing analog NEWS registers for transfer of information to/from neighboring pixels. In embodiments, the first photodetector substrate contains a plurality of bottom-side bonding pads for each photodetector and the second substrate contains a plurality of top-side bonding pads for photodetector input. During substrate integration, top-side bonding pads and bottom-side bonding pads are aligned without the use of an interconnect layer and are bonded directly to one another. In embodiments, the pitch of the two-substrate, sub-frame imaging pixels ranges from 1.5 μm to 40 μm.
In embodiments, a sub-frame imaging pixel is implemented in a two-substrate hardware configuration, the two substrates having non-aligned bonding pads due to pixel pitch differences or other layout differences, whereby information flows from a first photodetector substrate to a second substrate that includes PDC circuitry, APP circuitry, and digital memory. In various embodiments, the PDC circuitry is controlled by instruction bits from a PDC instruction word and the APP circuitry is controlled by instruction bits from an APP instruction word. In various embodiments, APP circuitry performs neighbor-in-time processing on sub-frames, performs neighbor-in-time processing on frames within a stream of frames, and performs neighbor-in-space processing by utilizing analog NEWS registers for transfer of information to/from neighboring pixels. In embodiments, the first photodetector substrate contains a plurality of bottom-side bonding pads for each photodetector and the second substrate contains a plurality of top-side bonding pads for photodetector input. During substrate integration, an interposer or other electrical connection component is used to align top-side bonding pads and bottom-side bonding pads. In embodiments, the pixel pitch of the photodetector substrate of the two-substrate, sub-frame imaging pixels ranges from 1.5 μm to 40 μm.
This disclosure claims priority to U.S. Provisional Application No. 63/027,227, the contents of which are hereby incorporated by reference in their entirety.
For purposes of describing the various embodiments, the following terminology and references may be used with respect to analog sub-frame pixel processing in accordance with one or more embodiments as described.
“CPU” means central processing unit.
“GPU” means graphics processing unit.
“APU” means associative processing unit.
“VPU” means vision processing unit.
“QNN” and “Quantized Neural Network” refer to a hardware and software architecture that utilizes highly-parallelized computing with very limited instruction types.
“Module” refers to a software component that performs a particular function. A module, as defined herein, may execute on various hardware components.
"Component" refers to a hardware construct that may execute software contained within a module. A component may include a CPU, GPU, VPU, NNE (neural network engine), or other digital computing capability. A component may contain all digital electronics, all analog electronics, mixed signal electronics, all optical computing elements, or mixed signal and optical computing elements.
In mission-critical applications like ADAS (Advanced Driver Assist Systems) and autonomous vehicle systems, the computer vision stack is defined as the software modules that convert raw sensor input into actionable descriptions of objects located within a sensor's field of view.
A neighbor-in-time processing module 104 accepts sensor information from a single-frame sensor or a multi-frame sensor. Some techniques for single-frame and multi-frame processing that are performed by neighbor-in-time processing are disclosed in U.S. Pat. No. 9,866,816 (Retterath), which is hereby incorporated by reference. Neighbor-in-time processing includes, but is not limited to, HDR (high dynamic range) imaging, XDR (extended dynamic range) imaging, lighting-invariant imaging, radiance determination, and image time stamping for downstream object tracking and feature vector clustering.
A signal processing module 106 performs convolutional functions like image filtering, noise reduction, sharpening, and contrast control.
A segmentation module 108 performs mostly convolutional functions that segment objects within the image. Common segmentation algorithms are instance segmentation, semantic segmentation, and panoptic segmentation. The output of a segmentation module is a bit-level mask set that defines the separate regions of interest within an image.
An object tracking module 110 identifies common objects within successive images.
A feature vector creation module 112 produces a smaller-data-size descriptor of all objects identified by a segmentation module 108. Inputs to a feature vector creation module 112 include a pixel-level image mask and the imaged pixels that represent the object. The imaged pixels and the associated object mask may contain 10,000+ or 100,000+ pieces of information that describe an object. The conversion of the object descriptor information to a feature vector allows smaller sets of data to be passed to a decision-making module 102. Techniques for producing feature vectors in a vision stack are disclosed in PCT Appl. No. PCT/US20/24200, which is hereby incorporated by reference.
Vision stacks similar to
Convolution in image processing and neural network processing is a mathematical operation whereby a convolutional mask is applied to each pixel in an image. Typical convolutional mask sizes are 3×3, 5×5, and 7×7. The mathematical equation for a 3×3 convolution for a pixel (i, j) is:
$$I_{\text{conv}} = \sum_{x=-1}^{1}\sum_{y=-1}^{1} I(i+x,\,j+y)\cdot M(x,y) \qquad \text{(Eq. 1)}$$
where $I_{\text{conv}}$ is the intensity result of the convolutional mask operation, $I(i+x, j+y)$ is the intensity of the pixel at offset $(x, y)$ from pixel $(i, j)$, and $M(x, y)$ is the corresponding convolutional mask coefficient.
For Eq. 1 there are nine multiply-accumulate (MAC) operations performed on each image pixel. The use of larger convolutional masks will typically provide better information for vision stack functions. However, larger convolutional masks, when applied to entire images, increase the computational needs for a vision stack. Table 1 shows the number of MACs required per pixel for several convolutional mask sizes.
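As a concrete illustration of Eq. 1 and of the MAC counts in Table 1, the following Python sketch applies a 3×3 mask to one pixel and tallies MACs per pixel for several mask sizes (function and variable names are illustrative only, not part of the disclosed hardware):

```python
import numpy as np

def convolve_pixel(image: np.ndarray, mask: np.ndarray, i: int, j: int) -> float:
    """Eq. 1: apply a 3x3 convolutional mask centered on pixel (i, j)."""
    acc = 0.0
    for x in range(-1, 2):
        for y in range(-1, 2):
            acc += image[i + x, j + y] * mask[x + 1, y + 1]  # one MAC
    return acc

# An n x n mask costs n*n MACs per pixel (cf. Table 1).
for n in (3, 5, 7):
    print(f"{n}x{n} mask: {n * n} MACs per pixel")
```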
The challenge for image processing and neural network processing functions within vision stacks is to select convolutional mask sizes that maximize the quality of the information while minimizing the number of MACs.
Because MACs account for a high percentage of the operations in neural-network-based image processing, MAC performance metrics for various analog and digital architectures are a good indicator of overall neural network performance. Table 2 below illustrates the approximate number of MACs required for a typical DNN implementation of the signal processing, segmentation, object tracking, and feature vector creation modules of the vision stack described above.
Various digital hardware architectures are used today for data center, domain controller, and edge processing. Table 3 below shows a performance analysis comparing in-pixel analog processing in accordance with various embodiments of the present disclosure against such digital hardware architectures: a general-purpose device like a CPU, a general-purpose graphics device like a GPU, and a best-in-class NNE (neural network engine) like the Tesla FSD. In various embodiments, the NitAPP/QNN (Neighbor-in-time Analog Pixel Processing/Quantized Neural Network) exhibits favorable performance metrics in Table 3, which shows the throughput of the four architectures and the corresponding number of images per second that each can process.
General-purpose digital CPUs/GPUs and digital NNEs: 1) store information in digital form, 2) perform math operations using digital ALUs (Arithmetic Logic Units), 3) expend energy by using an instruction sequencer, and 4) expend energy to fetch information from memory and store results in memory. The number of picoJoules (pJ) per MAC for digital architectures is determined by summing the electrical current utilized by all of the transistors that are switched and the electrical current conducted by all of the transistors that are required to conduct during the performance of a MAC. For digital hardware architectures, each MAC requires the switching and/or conducting of current for thousands of transistors. In contrast, in embodiments of the present disclosure, a NitAPP/QNN: 1) stores information in analog form, 2) requires no transistors to implement an analog ALU, 3) requires no transistors to perform instruction sequencing, and 4) does not require any off-pixel memory transactions. In embodiments, a MAC is performed with a NitAPP/QNN by switching as few as ten transistors. The switching of as few as ten transistors, versus thousands of transistors for digital architectures, allows the NitAPP/QNN to consume far less power per neural network image processed. Table 4 below illustrates the energy per MAC and the number of MACs per Watt for three digital hardware architectures versus the NitAPP/QNN in accordance with various embodiments of the present disclosure.
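The two Table 4 metrics are reciprocals of one another. As a worked example (the 1 pJ figure below is purely illustrative, not a measured value for any architecture):

$$\text{MACs per Watt} = \frac{1\ \text{W}}{E_{\text{MAC}}}, \qquad E_{\text{MAC}} = 1\ \text{pJ} \;\Rightarrow\; \frac{1\ \text{J/s}}{10^{-12}\ \text{J/MAC}} = 10^{12}\ \text{MACs per second per Watt}$$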
In embodiments, an in-pixel analog processor architecture can utilize panoptic segmentation to realize capabilities from instance and semantic segmentation that provide system-level advantages over off-sensor digital processing hardware architectures.
In embodiments, photodetector control circuitry operates by utilizing a process called integration. During a photodetector integration time, current that is produced by a photodetector is gated to a storage element like a charge capacitor. The collected charge is a function of the duration of the integration and the amplitude of the photodetector current. Most digital cameras utilize the process of photodetector integration to produce intensity values for the camera's image pixels.
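Under the idealized assumption of a constant photodetector current over the integration window, the collected charge and the resulting voltage on a storage capacitor of capacitance C follow directly:

$$Q = \int_{0}^{t_{\text{int}}} I_{\text{pd}}(t)\,dt \;\approx\; I_{\text{pd}}\cdot t_{\text{int}}, \qquad V = \frac{Q}{C}$$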
Event cameras contain pixels that independently respond to changes in brightness as they occur. Each pixel stores a reference brightness level and continuously compares it to the current level of brightness. If the difference in brightness exceeds a preset threshold, that pixel resets its reference level and generates an event: a discrete packet of information containing the pixel address and timestamp. Events may also contain the polarity (increase or decrease) of a brightness change, or an instantaneous measurement of the current level of illumination. Thus, event cameras output an asynchronous stream of events triggered by changes in scene illumination.
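A minimal Python sketch of this per-pixel event logic, assuming a log-intensity comparison against the stored reference (all names and the packet structure are illustrative):

```python
import math

def event_pixel_update(illum: float, ref: float, threshold: float):
    """One update of an event pixel: emit a polarity event and reset the
    reference if the log-brightness change exceeds the preset threshold.
    Assumes illum and ref are positive illumination levels."""
    delta = math.log(illum) - math.log(ref)
    if abs(delta) > threshold:
        event = {"polarity": +1 if delta > 0 else -1}  # plus address/timestamp
        return event, illum   # reference resets to the current level
    return None, ref          # no event; reference unchanged
```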
In embodiments, all sub-frame circuits within PDC circuitry utilize integration circuitry. In other embodiments, all sub-frame circuits within PDC circuitry utilize event circuitry. In still other embodiments, sub-frame circuits within PDC circuitry utilize a combination of integration circuitry and event circuitry.
In embodiments, sub-frame information is produced as charge collection at three floating diffusion storage elements, labeled FD0, FD1, and FD2. Charge is collected at FD0 when the photodetector is conducting current and the transfer signal TX_0 is activated. Charge is collected at FD1 when the photodetector is conducting current and the transfer signal TX_1 is activated. Charge is collected at FD2 when the photodetector is conducting current and the transfer signal TX_2 is activated. FD0, FD1, and FD2 are utilized in circuitry for integration pixels. FD3, on the other hand, is used as part of an event pixel. When TX_3 is activated, the log I circuit monitors the change (direction and amplitude) in the photodetector current level. Any change, either positive (increase in current) or negative (decrease in current), is stored at FD3.
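A behavioral sketch of this transfer gating, with unit timesteps, ideal switches, and the log I event path reduced to a signed current change (all names are illustrative):

```python
def pdc_step(i_pd: float, prev_i_pd: float, tx: list, fd: list) -> None:
    """One timestep of photodetector control: TX_0..TX_2 gate charge into
    integration nodes FD0..FD2; TX_3 stores the signed change in
    photodetector current at event node FD3."""
    for n in range(3):
        if tx[n]:
            fd[n] += i_pd            # charge += current x (unit timestep)
    if tx[3]:
        fd[3] = i_pd - prev_i_pd     # positive or negative change stored at FD3
```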
In embodiments, a four-sub-frame photodetector control circuit may utilize 0, 1, 2, or 3 integration circuits and, correspondingly, 3, 2, 1, or 0 event circuits. In embodiments, an N-sub-frame photodetector control circuit may utilize from 0 to N integration circuits and, correspondingly, from N to 0 event circuits.
A functional block diagram of embodiments of an analog sub-frame processing element for NitAPP (Neighbor-in-time Analog Pixel Processing) and neighbor-in-space computation using QNN is shown in
In embodiments, NEWS registers, which signify North East West South operations, allow processing elements to pass information to neighboring processors. The N register of a processing element is the same physical register as the S register of the pixel processor to the north. N register mnemonics are Rd_N for a read operation and Wrt_N for a write operation. The E register of a processing element is the same physical register as the W register of the pixel processor to the east. E register mnemonics are Rd_E for a read operation and Wrt_E for a write operation. The W register of a processing element is the same physical register as the E register of the pixel processor to the west. W register mnemonics are Rd_W for a read operation and Wrt_W for a write operation. The S register of a processing element is the same physical register as the N register of the pixel processor to the south. S register mnemonics are Rd_S for a read operation and Wrt_S for a write operation.
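This shared-register relationship can be modeled by placing one storage cell on each edge between adjacent pixels, so a Wrt_N by one pixel is read by its northern neighbor's Rd_S. A minimal sketch under those assumptions (illustrative class and method names):

```python
import numpy as np

class NewsArray:
    """NEWS registers for a rows x cols pixel array: one shared cell per
    edge, so each register is physically shared by two neighbors."""
    def __init__(self, rows: int, cols: int):
        self.v = np.zeros((rows + 1, cols))  # vertical edges: N/S registers
        self.h = np.zeros((rows, cols + 1))  # horizontal edges: E/W registers

    # Pixel (i, j)'s N register is the same cell as pixel (i-1, j)'s S register.
    def wrt_n(self, i, j, val): self.v[i, j] = val
    def rd_n(self, i, j):       return self.v[i, j]
    def wrt_s(self, i, j, val): self.v[i + 1, j] = val
    def rd_s(self, i, j):       return self.v[i + 1, j]
    # Pixel (i, j)'s E register is the same cell as pixel (i, j+1)'s W register.
    def wrt_e(self, i, j, val): self.h[i, j + 1] = val
    def rd_e(self, i, j):       return self.h[i, j + 1]
    def wrt_w(self, i, j, val): self.h[i, j] = val
    def rd_w(self, i, j):       return self.h[i, j]
```

For example, after pixel (i, j) executes Wrt_N, the pixel at (i-1, j) reads the same physical cell with Rd_S.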
SRAM 228 is used to communicate with off-device digital processing elements. One to four SRAM 228 elements are utilized per pixel, each element consisting of eight to sixteen bits. CPUs, GPUs, and other digital communication processors read information in digital format from, or write information in digital format to, the addressable digital memory elements via an SRAM 228 digital port. In embodiments, the digital memory connection to the digital element may be SRAM, DRAM, DDR, etc.
In embodiments, an SRAM 228 input read functional block allows a digital-to-analog (D/A) converted value to be enabled onto the analog bus. A result register 230 is used to store analog values that will be transferred to digital memory. An analog-to-digital (A/D) circuit converts an analog value contained in the result register 230 to a multi-bit digital value that is written to a selected SRAM 228 location.
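A sketch of the quantization implied by these A/D and D/A paths; the full-scale current and bit depth here are illustrative set points, not values specified by the disclosure:

```python
def adc(current_nA: float, full_scale_nA: float = 100.0, bits: int = 8) -> int:
    """Convert an analog register current into a multi-bit digital word."""
    code = round((current_nA / full_scale_nA) * (2**bits - 1))
    return max(0, min(2**bits - 1, code))   # clamp to the valid code range

def dac(code: int, full_scale_nA: float = 100.0, bits: int = 8) -> float:
    """Convert an SRAM word back into an analog bus current."""
    return (code / (2**bits - 1)) * full_scale_nA
```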
In embodiments, PDC input read 232 enables an analog value from a sub-frame storage element in the PDC circuitry onto the analog bus. PDC circuitry and analog computing circuitry are controlled by separate instruction bits. In embodiments, a four sub-frame PDC circuit is controlled by as few as six instruction bits.
Table 5 below illustrates the analog pixel processing (APP) instruction bit names and descriptions for the 46-bit APP instruction bus that controls all processing elements within an array of sub-frame pixels.
In embodiments, the functionality provided by PDC (photodetector control) circuitry is controlled through PDC instruction bits and the functionality provided by APP (analog pixel processing) circuitry is controlled through APP instruction bits.
PDC_Sel(1:0) 252 are bits from the APP instruction bus and select which analog memory element from PDC circuitry, FD0, FD1, FD2 or FD3, is enabled onto the analog bus. The PDC_Rd 254 signal determines the time during which the selected FD value from the PDC circuitry is enabled onto the analog bus. In accordance with
In embodiments, switched current (SI) circuitry is used to convey basic functionality. In practice, more complex circuitry is used in order to reduce processing errors, to increase accuracy, and to reduce power dissipation.
S2I registers have the ability to store positive and negative current values. The design of S2I registers yields a built-in negation of current levels. In embodiments, if a sourcing element sources a positive current to an analog bus, any register that writes the analog value must sink that same amount of current. Therefore, a positive current value on an analog bus is stored into a receiving register as a negative current value. In embodiments, because of this built-in negation, micro-code instructions generated for eventual reduction to APP instructions are written in the form (−Ax)→Bx. The microcode instruction directs the APP element to move the negated contents of Ax to Bx.
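The built-in negation can be captured in a toy register-transfer model in which every bus transfer stores the negated source value (illustrative names throughout):

```python
class S2IRegisterFile:
    """Toy model of S2I transfers: the source drives the analog bus and
    the writing register sinks the same current, so every transfer
    stores the negation of the sourced value."""
    def __init__(self):
        self.reg = {}

    def transfer(self, src: str, dst: str):
        self.reg[dst] = -self.reg.get(src, 0.0)   # microcode: (-src) -> dst

rf = S2IRegisterFile()
rf.reg["A0"] = 5.0
rf.transfer("A0", "B0")      # (-A0) -> B0
assert rf.reg["B0"] == -5.0
```

Because negation is involutive, two back-to-back transfers restore the original sign, which lets microcode sequences recover a non-negated copy when one is needed.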
In order to translate software algorithms that are created by humans in human-readable form into operations that are performed by APP circuitry, it is important to understand the relationship between micro-code, mnemonics, and APP instruction bits. Micro-code is a software construct whereby logic and math operations are expressed in human-readable form. In embodiments, some examples of APP micro-code instructions are shown in Table 6 below.
Mnemonics describe functions that are executed with APP circuitry during the execution of an APP instruction. In embodiments, mnemonics include descriptors to write values to or read values from select registers. In embodiments, an APP with four register banks of eight registers each that includes NEWS registers, PDC circuitry and an SRAM interface will include the mnemonics shown in Table 7 below.
Register transfer, logic and math operations are performed by way of enabling selected analog values to an APP analog bus while selectively writing a resulting analog bus value to registers or other storage elements.
A Robinson compass mask is a convolution-based algorithm used for edge detection in imagery. It has eight major compass orientations, each of which extracts edges with respect to its direction. Combined use of compass masks of different directions detects edges oriented at different angles. A Robinson compass mask is defined by taking a single mask and rotating it to form the eight orientations. As part of the algorithm, pixel-level computations are performed by applying the 3×3 convolutional masks from Table 7.1 below to each image pixel in an image.
One of the advantages of using a Robinson compass mask for edge detection is that only four of the masks need to be computed, because the results of the four non-computed masks can be obtained by negating the results of the computed masks. The final value of a pixel-level algorithm is a mask computation that yields the highest absolute value.
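A reference Python sketch of this pixel-level computation; the base mask shown is one common Robinson choice, and Table 7.1's exact coefficients and orientation naming may differ:

```python
import numpy as np

# One common Robinson base mask; rotating its eight outer coefficients
# one step around the center yields the successive compass orientations.
BASE = np.array([[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]])

def rotate45(m: np.ndarray) -> np.ndarray:
    """Rotate the eight outer coefficients one step around the center."""
    ring = [m[0, 0], m[0, 1], m[0, 2], m[1, 2],
            m[2, 2], m[2, 1], m[2, 0], m[1, 0]]
    ring = ring[-1:] + ring[:-1]
    out = m.copy()
    (out[0, 0], out[0, 1], out[0, 2], out[1, 2],
     out[2, 2], out[2, 1], out[2, 0], out[1, 0]) = ring
    return out

def robinson_pixel(patch: np.ndarray) -> float:
    """Edge response for one 3x3 patch: evaluate four masks; the other
    four are their negations, so max(|response|) over four suffices."""
    responses, m = [], BASE
    for _ in range(4):
        responses.append(abs(float((patch * m).sum())))
        m = rotate45(m)
    return max(responses)
```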
Table 8 below illustrates microcode instructions and associated NitAPP/QNN mnemonics for a Robinson compass mask algorithm.
Design criteria such as crosstalk, APP instruction bus frequency, APP instruction settling time, and semiconductor process geometry are important considerations when fabricating analog computing circuitry. Analog storage elements like analog registers are susceptible to noise from sources like parasitic capacitance, thermal variations, and fabrication process variation. In order to understand the effects of noise on the results of APP computing circuitry, a hardware simulator is used to inject selected amounts of noise in the APP computing process and analyze the results. A hardware simulator also allows a user to define the analog set points for A/D conversion, D/A conversion, and the maximum current-carrying capacity of analog registers.
A NitAPP/QNN simulator executes the Table 8 mnemonics for a Robinson compass mask and produces an ideal filter image 312 that shows the edge detection results. For subsequent executions in the simulator, a random amount of noise is introduced into the current level for every write operation. The introduced noise has a Gaussian distribution with an amplitude of 5 nA, 6 nA, 7 nA, 8 nA, 9 nA, 10 nA, 12 nA, 14 nA, 16 nA, 18 nA, 20 nA, 22 nA, 24 nA, 26 nA, 28 nA, 30 nA, 35 nA, 40 nA, 45 nA, 50 nA, 55 nA, and 60 nA for outputs shown in
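The per-write noise injection can be modeled by adding a zero-mean Gaussian sample on every register write; interpreting the stated amplitude as the distribution's standard deviation is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_write(value_nA: float, sigma_nA: float) -> float:
    """One analog register write with injected Gaussian noise."""
    return value_nA + rng.normal(0.0, sigma_nA)

# Sweep a few of the noise amplitudes used in the simulator runs.
for sigma in (5, 10, 20, 40, 60):
    samples = [noisy_write(100.0, sigma) for _ in range(1000)]
    print(f"sigma={sigma} nA: mean={np.mean(samples):.1f} nA")
```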
Artificial Intelligence (AI) hardware, software and imaging contained within a single module is referred to as AIoC (AI on a Chip) or AI SoC (System on Chip).
Persons of ordinary skill in the relevant arts will recognize that embodiments may comprise fewer features than illustrated in any individual embodiment described above. The embodiments described herein are not meant to be an exhaustive presentation of the ways in which the various features of the embodiments may be combined. Accordingly, the embodiments are not mutually exclusive combinations of features; rather, embodiments can comprise a combination of different individual features selected from different individual embodiments, as understood by persons of ordinary skill in the art. Moreover, elements described with respect to one embodiment can be implemented in other embodiments even when not described in such embodiments unless otherwise noted. Although a dependent claim may refer in the claims to a specific combination with one or more other claims, other embodiments can also include a combination of the dependent claim with the subject matter of each other dependent claim or a combination of one or more features with other dependent or independent claims. Such combinations are proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended also to include features of a claim in any other independent claim even if this claim is not directly made dependent to the independent claim.
Any incorporation by reference of documents above is limited such that no subject matter is incorporated that is contrary to the explicit disclosure herein. Any incorporation by reference of documents above is further limited such that no claims included in the documents are incorporated by reference herein. Any incorporation by reference of documents above is yet further limited such that any definitions provided in the documents are not incorporated by reference herein unless expressly included herein.
For purposes of interpreting the claims, it is expressly intended that the provisions of Section 112, sixth paragraph of 35 U.S.C. are not to be invoked unless the specific terms “means for” or “step for” are recited in a claim.
Number | Name | Date | Kind |
---|---|---|---|
6765566 | Tsao | Jul 2004 | B1 |
7634061 | Tumer et al. | Dec 2009 | B1 |
8102426 | Yahav et al. | Jan 2012 | B2 |
8150902 | Bates | Apr 2012 | B2 |
8543254 | Schut et al. | Sep 2013 | B1 |
8629387 | Pflibsen et al. | Jan 2014 | B2 |
9094628 | Williams | Jul 2015 | B2 |
9185363 | Murillo Amaya et al. | Nov 2015 | B2 |
9189689 | Chandraker et al. | Nov 2015 | B2 |
9280711 | Stein | Mar 2016 | B2 |
9294754 | Billerbeck et al. | Mar 2016 | B2 |
9325920 | Van Nieuwenhove et al. | Apr 2016 | B2 |
9367922 | Chandraker et al. | Jun 2016 | B2 |
9513768 | Zhao et al. | Dec 2016 | B2 |
9514373 | Jeromin et al. | Dec 2016 | B2 |
9524434 | Gee et al. | Dec 2016 | B2 |
9607377 | Lovberg et al. | Mar 2017 | B2 |
9651388 | Chapman et al. | May 2017 | B1 |
9671243 | Stein | Jun 2017 | B2 |
9671328 | Retterath et al. | Jun 2017 | B2 |
9734414 | Samarasekera et al. | Aug 2017 | B2 |
9778352 | Mizutani | Oct 2017 | B2 |
9797734 | Mizutani et al. | Oct 2017 | B2 |
9811731 | Lee et al. | Nov 2017 | B2 |
9824586 | Sato et al. | Nov 2017 | B2 |
9836657 | Hilldore et al. | Dec 2017 | B2 |
9842254 | Brailovskiy et al. | Dec 2017 | B1 |
9846040 | Hallberg | Dec 2017 | B2 |
9866816 | Retterath | Jan 2018 | B2 |
9870513 | Thiel et al. | Jan 2018 | B2 |
9870624 | Narang et al. | Jan 2018 | B1 |
9902401 | Stein et al. | Feb 2018 | B2 |
9905024 | Shin et al. | Feb 2018 | B2 |
9928605 | Bleiweiss et al. | Mar 2018 | B2 |
9934690 | Kuroda | Apr 2018 | B2 |
9940539 | Han et al. | Apr 2018 | B2 |
9943022 | Alam | Apr 2018 | B1 |
9946260 | Shashua et al. | Apr 2018 | B2 |
9953227 | Utagawa et al. | Apr 2018 | B2 |
9959595 | Livyatan et al. | May 2018 | B2 |
9971953 | Li et al. | May 2018 | B2 |
9981659 | Urano et al. | May 2018 | B2 |
9984468 | Kasahara | May 2018 | B2 |
9992468 | Osanai et al. | Jun 2018 | B2 |
9996941 | Roumeliotis et al. | Jun 2018 | B2 |
10012504 | Roumeliotis et al. | Jul 2018 | B2 |
10012517 | Protter et al. | Jul 2018 | B2 |
10019014 | Prasad et al. | Jul 2018 | B2 |
10019635 | Kido et al. | Jul 2018 | B2 |
10025984 | Rajkumar et al. | Jul 2018 | B2 |
10037712 | Dayal | Jul 2018 | B2 |
10046770 | Sabri | Aug 2018 | B1 |
10049307 | Pankanti et al. | Aug 2018 | B2 |
10054517 | Liu et al. | Aug 2018 | B2 |
10055854 | Wan et al. | Aug 2018 | B2 |
10062010 | Kutliroff | Aug 2018 | B2 |
10071306 | Vandonkelaar | Sep 2018 | B2 |
10073531 | Hesch et al. | Sep 2018 | B2 |
10215856 | Xu | Feb 2019 | B1 |
10218913 | Somasundaram et al. | Feb 2019 | B2 |
10302768 | Godbaz et al. | May 2019 | B2 |
10382742 | Retterath | Aug 2019 | B2 |
10397552 | Van Nieuwenhove et al. | Aug 2019 | B2 |
20110007160 | Okumura | Jan 2011 | A1 |
20110285866 | Bhrugumalla | Nov 2011 | A1 |
20130215290 | Solhusvik | Aug 2013 | A1 |
20140126769 | Reitmayr et al. | May 2014 | A1 |
20140218480 | Knighton et al. | Aug 2014 | A1 |
20140347448 | Hegemann et al. | Nov 2014 | A1 |
20160189372 | Lovberg et al. | Jun 2016 | A1 |
20160255289 | Johnson et al. | Sep 2016 | A1 |
20170230638 | Rhoads et al. | Aug 2017 | A1 |
20170236037 | Wajs et al. | Aug 2017 | A1 |
20170323429 | Godbaz et al. | Nov 2017 | A1 |
20180031681 | Yoon et al. | Feb 2018 | A1 |
20180063508 | Trail et al. | Mar 2018 | A1 |
20180113200 | Steinberg et al. | Apr 2018 | A1 |
20180176514 | Kirmani et al. | Jun 2018 | A1 |
20180188059 | Wheeler et al. | Jul 2018 | A1 |
20180330526 | Corcoran | Nov 2018 | A1 |
20180358393 | Sato | Dec 2018 | A1 |
20190033448 | Molnar et al. | Jan 2019 | A1 |
20190056498 | Sonn et al. | Feb 2019 | A1 |
20190230297 | Knorr et al. | Jul 2019 | A1 |
20190286153 | Rankawat et al. | Sep 2019 | A1 |
20200057146 | Steinkogler et al. | Feb 2020 | A1 |
20200278194 | Kawahito | Sep 2020 | A1 |
Number | Date | Country |
---|---|---|
102018107801 | Oct 2018 | DE |
10-2016-0135482 | Nov 2016 | KR |
WO 2018127789 | Jul 2018 | WO |
WO 2020198134 | Oct 2020 | WO |
Entry |
---|
Krestinskaya, O. and Pappachen James, A., “Real-time Analog Pixel-to-pixel Dynamic Frame Differencing with Memristive Sensing Circuits”, arXiv e-prints, arXiv:1808.06780v1 [cs.ET], Aug. 21, 2018, doi: 10.48550/arXiv.1808.06780. (Year: 2018). |
P. Dudek and P. J. Hicks, “A general-purpose processor-per-pixel analog SIMD vision chip,” in IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, No. 1, pp. 13-20, Jan. 2005, doi: 10.1109/TCSI.2004.840093. (Year: 2005). |
Abuelsamid, “Bosch Launches Silicon Carbide Chips to Make Evs More Efficient,” Forbes, Oct. 13, 2019, (accessed at https://www.forbes.com/sites/samabuelsamid/2019/10/13/bosch-launches-silicon-carbide-chips-to-make-evs-more-efficient/amp/), 7 pages. |
Barati et al., “Hot Carrier-Enhanced Interlayer Electron-Hole Pair Multiplication in 2D Semiconductor Heterostructure Photocells,” Nature Nanotechnology, Oct. 9, 2017, 7 pages. |
Becker et al., “Smartphone Video Guidance Sensor for Small Satellites,” NASA Marshall Space Flight Center, 27th Annual AIAA/USU Conference on Small Satellites, Aug. 2013, 8 pages. |
Bie et al., "A MoTe2-Based Light-Emitting Diode and Photodetector for Silicon Photonic Integrated Circuits," Nature Nanotechnology, Oct. 23, 2017, 8 pages. |
Dionne et al., "Silicon-Based Plasmonics for On-Chip Photonics," IEEE Journal, Jan.-Feb. 2010, 13 pages. |
Dudek et al., "A General-Purpose CMOS Vision Chip with a Processor-Per-Pixel SIMD Array," Computer Science, 2001, 4 pages. |
Dudek, “Adaptive Sensing and Image Processing with a General-Purpose Pixel-Parallel Sensor/Processor Array Integrated Circuit,” University of Manchester, 2006, 6 pages. |
Evans, “Cascading Individual Analog Counters,” Radiant Technologies, Inc., Oct. 2016, 7 pages. |
Foix et al., “Exploitation of Time-of-Flight (ToF) Camera,” IRI Technical Report, 2007, 22 pages. |
Hall et al., “Guide for Pavement Friction,” NCHRP, National Academies of Sciences, Engineering, and Medicine, 2009, 257 pages. |
Martel et al., “Parallel HDR Tone Mapping and Auto-Focus on a Cellular Processor Array Vision Chip,” 2016 IEEE International Symposium on Circuits and Systems, May 2016, 4 pages. |
Panina et al., “Compact CMOS Analog Counter for SPAD Pixel Arrays,” IEEE, Apr. 2014, 5 pages. |
Peizerat et al., “An Analog Counter Architecture for Pixel-Level ADC,” CEA/LETI-MINATEC, 2009, 3 pages. |
Pinson et al., “Orbital Express Advanced Video Guidance Sensor: Ground Testing, Flight Results and Comparisons,” NASA Marshall Space Flight Center, American Institute of Aeronautics and Astronautics, Aug. 2008, 12 pages. |
Sun et al., “Single-Chip Microprocessor that Communicates Directly Using Light,” Nature, Dec. 23, 2015, 29 pages. |
Tang et al., “2D Materials for Silicon Photonics,” Nature Nanotechnology, Oct. 23, 2017, 2 pages. |
Torrens, “4QD-TEC: Electronics Circuits Reference Archive Analogue Pulse Counter,” [undated], 2 pages. |
University of Bonn., “Teaching Cars to Drive with Foresight: Self-Learning Process,” Science Daily, Oct. 2019, 4 pages. |
Vijayaraghavan et al., “Design for MOSIS Educational Program (Research),” EE Department, University of Tennessee, [undated], 8 pages. |
Wong et al., “Analog Vision—Neural Network Inference Acceleration Using Analog SIMD Computation in the Focal Plane,” Imperial College London, Diplomarbeit, 2018, 112 pages. |
Wong, "The SCAMP-5 Vision Chip is a Focal-Plane Sensor-Processor (FPSP) developed at the University of Manchester" (Carey et al., 2013a), 6 pages. |
Number | Date | Country | |
---|---|---|---|
20210366952 A1 | Nov 2021 | US |
Number | Date | Country | |
---|---|---|---|
63027227 | May 2020 | US |