As is known in the art, pixel circuits used in photonic detection systems look for active transient signals indicating optical returns from pulse-illuminated scenes of interest. The pulsed illumination propagates from the optical source, reflects off objects in the active imaging system field of view, and returns towards the active imaging system. Pixels in the active imaging system convert this input energy, generally photo-current from a photo-detector device, into a voltage signal that is compared to a threshold for detecting the presence and timing of an active optical return. The timing information from this active system is used to calculate range to an object in the field of view of the active imaging system.
Embodiments of the disclosure provide a photodetection system having a programmable pixel test injection circuit. In example embodiments, a system can include a photo-detector, an amplifier, a differential voltage discriminator, a charge injection circuit, and a selection control circuit. The photo-detector converts incident photon energy striking the photo-detector into current flow that is proportional to the number of photons striking the photo-detector. A common photo-detector device is a photo-diode which creates reverse-bias current flow proportional to photons of particular wavelengths striking the device and has an intrinsic capacitance between anode and cathode nodes. In embodiments, the amplifier comprises a transimpedance amplifier that converts photo-current into a corresponding voltage output. In some embodiments, the circuit is implemented as a single-ended circuit with implicit reference to ground or another reference voltage.
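The transimpedance conversion described above can be illustrated with a minimal behavioral sketch. The feedback resistance and current values below are illustrative assumptions, not parameters from the disclosure:

```python
# Ideal transimpedance amplifier (TIA) behavior: photo-current in,
# voltage out, Vout = Iin * Rf. The 50 kOhm feedback value is
# illustrative only.

def tia_output_voltage(photo_current_a: float, feedback_ohms: float = 50e3) -> float:
    """Convert a photo-current (amperes) to a TIA output voltage (volts)."""
    return photo_current_a * feedback_ohms

# 2 uA of photo-current through a 50 kOhm feedback resistance -> 100 mV
v_out = tia_output_voltage(2e-6)
```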
In other embodiments, the circuit is implemented as a differential circuit which converts the photo-current into a differential output voltage. The output is coupled to the input of the differential voltage discriminator either through AC coupling capacitors, for example, or with a direct connection to the input. In embodiments, the differential voltage discriminator detects when the positive input exceeds the negative input and produces a digital output pulse corresponding in time and duration to the positive input exceedance. In single-ended pixel systems, the threshold may be directly applied to the negative input of the differential voltage discriminator. In differential pixel systems, the threshold is injected into the differential signal path.
In embodiments, the charge injection circuit is constructed from an injection capacitor, switches, and rate-limited switch drivers and control logic. The switches serve to transition the voltage on the injection capacitor from ground to a reference voltage, injecting charge proportional to the change in voltage and the injection capacitor size. The charge injection circuit may provide a moment of charge transfer onto the photo-detector node which is equivalent to a current pulse. This current pulse may mimic an active photo-current return pulse from the photo-detector and allow testing the active imaging functions of the pixel.
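The charge-transfer relationship above can be made explicit: switching the injection capacitor between two reference voltages transfers Q = C·ΔV, which over the switching time looks like a current pulse of average amplitude I = Q/Δt. The capacitance, voltage step, and switching time below are illustrative assumptions:

```python
# Charge injected by stepping the injection capacitor between two
# reference voltages, and the equivalent average current pulse.
# Component values are illustrative, not taken from the disclosure.

def injected_charge_c(cap_farads: float, delta_v: float) -> float:
    """Q = C * dV, the charge moved onto the photo-detector node."""
    return cap_farads * delta_v

def equivalent_current_a(cap_farads: float, delta_v: float, switch_time_s: float) -> float:
    """Average current of the injection pulse: I = Q / dt."""
    return injected_charge_c(cap_farads, delta_v) / switch_time_s

# A 2 fF injection capacitor stepped by 1 V over 2 ns mimics a 1 uA return pulse.
i_test = equivalent_current_a(2e-15, 1.0, 2e-9)
```

Raising the programmable reference voltage increases ΔV and hence the injected charge, which is how the variable return energies described below can be mimicked.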
The injection capacitor may be kept small to limit the performance impact on the active pixel circuit. In one embodiment, the injection capacitor comprises a parasitic metal-metal capacitor. Other embodiments may use capacitor devices and be limited to a minimum capacitance according to device minimum sizes. Other embodiments may be constructed using devices or materials having capacitive properties between two conductive nodes.
In embodiments, the reference voltages used for switching the voltage on the injection capacitor may comprise any practical voltage. One embodiment uses ground for one reference and a programmable reference voltage for the other reference. The programmable reference voltage allows a variable charge injection which mimics various return energies and allows the test feature to evaluate the sensitivity of each pixel in-situ.
In example embodiments, the rate-limited switch drivers control the rate of the charge injection, and thus, the width of the current injection pulse. It may be desirable to test the active pixel with a current injection pulse similar to the width of the optical pulses used in the active imaging system. One embodiment includes controllable rate-limited switch drivers to allow for programmable switch transition rates, and therefore, programmable current injection pulse widths. Another embodiment drives the switches with logic devices allowing the injection current pulse width to be limited by the logic drive speed and switch on-resistance.
The control logic may comprise a logical decoder to allow for control of the pixel injection in a number of different ways. One embodiment uses row and column select controls to enable a system allowing individual pixels to be selected active for the test current injection. This arrangement also allows multiple pixels on a row or multiple pixels on a column to be selected, as well as all pixels on a row, all pixels on a column, multiple rows or columns of pixels, or all pixels to be globally selected. Another embodiment comprises a 1-dimensional pixel array with a single pixel-select input to enable the pixel and allow single pixel, multiple pixel, or all pixel selection as desired. Another embodiment may remove selection options entirely to enable driving all pixels with the test signal at the same time. Selection functions may be implemented external to the pixel, and the step signal may be filtered before delivery to the pixel.
In embodiments, photodetector systems are configured for automotive or other safety-sensitive applications and meet safety standards, such as ISO 26262, which includes specification of an Automotive Safety Integrity Level (ASIL). In order to meet the high fault detectability standards required, validation of the active operation of all pixels in the active imaging system may be required. The active testing of pixels may enable safety reporting to detect and report on unsafe active imager status that may develop over the system lifetime or that may be related to other system failures.
In one aspect, a detector system comprises: a photodetector; an amplifier having an input to receive an output from the photodetector; a discriminator to receive an output from the amplifier and generate an active output signal when the output from the amplifier is greater than a threshold; and an injection circuit coupled to the input of the amplifier, wherein the injection circuit is configured to selectively inject a test pulse that mimics a pulse from the photodetector for verifying operation of the detector system.
A detector system can further include one or more of the following features: the injection circuit comprises an injection capacitor coupled to the input of the amplifier, the amplifier comprises a transimpedance amplifier, the discriminator comprises a voltage discriminator, the injection circuit includes first and second switches and an injection capacitor having a terminal coupled between the first and second switches, the first and second switches transition a voltage on the injection capacitor from a first reference voltage to a second reference voltage injecting charge proportional to a change in voltage and size of the injection capacitor, a selection control circuit coupled to the injection circuit to control generation of the test pulse and state of the first and second switches, the selection control circuit controls the first and second switches with rate-limited signals to control a width of the injection pulse, the selection control circuit controls generation of the injection pulse based on row and column of the photodetector within an array, the amplifier and the discriminator receive differential signals so that the discriminator detects when a positive input to the discriminator exceeds a negative input to the discriminator, a safety module to detect a fault by monitoring an output of the discriminator, and/or the safety module is configured to generate an alert after detection of the fault.
In another aspect, a method comprises: amplifying, with an amplifier, an output from a photodetector; generating, by a discriminator, an active output signal when the output from the amplifier is greater than a threshold; and selectively injecting, by an injection circuit, a test pulse that mimics a pulse from the photodetector for verifying operation of the detector system.
A method can further include one or more of the following features: the injection circuit comprises an injection capacitor coupled to the input of the amplifier, the amplifier comprises a transimpedance amplifier, the discriminator comprises a voltage discriminator, the injection circuit includes first and second switches and an injection capacitor having a terminal coupled between the first and second switches, the first and second switches transition a voltage on the injection capacitor from a first reference voltage to a second reference voltage injecting charge proportional to a change in voltage and size of the injection capacitor, a selection control circuit coupled to the injection circuit to control generation of the test pulse and state of the first and second switches, the selection control circuit controls the first and second switches with rate-limited signals to control a width of the injection pulse, the selection control circuit controls generation of the injection pulse based on row and column of the photodetector within an array, the amplifier and the discriminator receive differential signals so that the discriminator detects when a positive input to the discriminator exceeds a negative input to the discriminator, a safety module to detect a fault by monitoring an output of the discriminator, and/or the safety module is configured to generate an alert after detection of the fault.
In a further aspect, a detector system comprises: a means for detecting photons; an amplifier having an input to receive an output from the means for detecting photons; a discriminator means for receiving an output from the amplifier and generating an active output signal when the output from the amplifier is greater than a threshold; and an injection circuit means for selectively injecting a test pulse that mimics a pulse from the means for detecting photons for verifying operation of the detector system.
The foregoing features of this disclosure, as well as the disclosure itself, may be more fully understood from the following description of the drawings in which:
Prior to describing example embodiments of the disclosure, some information is provided. Laser ranging systems can include laser radar (ladar), light-detection and ranging (lidar), and rangefinding systems, which are generic terms for the same class of instrument that uses light to measure the distance to objects in a scene. This concept is similar to radar, except optical signals are used instead of radio waves. Similar to radar, a laser ranging and imaging system emits an optical signal, e.g. a pulse or continuous optical signal, toward a particular location and measures return reflections to extract range information.
Laser ranging systems generally work by emitting a laser pulse and recording the time it takes for the laser pulse to travel to a target, reflect, and return to a photoreceiver. The laser ranging instrument records the time of the outgoing pulse—either from a trigger or from calculations that use measurements of the scatter from the outgoing laser light—and then records the time that a laser pulse returns. The difference between these two times is the time of flight to and from the target. Using the speed of light, the round-trip time of the pulses is used to calculate the distance to the target.
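The round-trip calculation described above can be written out as a short sketch, using the standard value for the speed of light and dividing by two because the pulse travels to the target and back:

```python
# Time-of-flight ranging: range = c * t / 2, where t is the measured
# round-trip time between the outgoing pulse and its return.

C_M_PER_S = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof_m(round_trip_s: float) -> float:
    """Distance to the target from a round-trip time of flight."""
    return C_M_PER_S * round_trip_s / 2.0

# A 1 microsecond round trip corresponds to a target roughly 150 m away.
r = range_from_tof_m(1e-6)
```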
Lidar systems may scan the beam across a target area to measure the distance to multiple points across the field of view, producing a full three-dimensional range profile of the surroundings. More advanced flash lidar cameras, for example, contain an array of detector elements, each able to record the time of flight to objects in their field of view.
When using light pulses to create images, the emitted pulse may intercept multiple objects, at different orientations, as the pulse traverses a 3D volume of space. The reflected laser-pulse waveform contains a temporal and amplitude imprint of the scene. By sampling the light reflections, a record of the interactions of the emitted pulse with the intercepted objects of the scene is extracted, allowing an accurate multi-dimensional image to be created. To simplify signal processing and reduce data storage, laser ranging and imaging can be dedicated to discrete-return systems, which record only the time of flight (TOF) of the first, or a few, individual target returns to obtain angle-angle-range images. In a discrete-return system, each recorded return corresponds, in principle, to an individual laser reflection (i.e., a reflection from one particular reflecting surface, for example, a vehicle, a person, a tree, pole or building). By recording just a few individual ranges, discrete-return systems simplify signal processing and reduce data storage, but they do so at the expense of lost target and scene reflectivity data. Because laser-pulse energy has significant associated costs and drives system size and weight, recording the TOF and pulse amplitude of more than one laser pulse return per transmitted pulse, to obtain angle-angle-range-intensity images, increases the amount of captured information per unit of pulse energy. All other things equal, capturing the full pulse return waveform offers significant advantages, such that the maximum data is extracted from the investment in average laser power. In full-waveform systems, each backscattered laser pulse received by the system is digitized at a high sampling rate (e.g., 500 MHz to 1.5 GHz). This process generates digitized waveforms (amplitude versus time) that may be processed to achieve higher-fidelity 3D images.
Of the various laser ranging instruments available, those with single-element photoreceivers generally obtain range data along a single range vector, at a fixed pointing angle. This type of instrument—which is, for example, commonly used by golfers and hunters—either obtains the range (R) to one or more targets along a single pointing angle or obtains the range and reflected pulse intensity (I) of one or more objects along a single pointing angle, resulting in the collection of pulse range-intensity data, (R,I)i, where i indicates the number of pulse returns captured for each outgoing laser pulse.
More generally, laser ranging instruments can collect ranging data over a portion of the solid angles of a sphere, defined by two angular coordinates (e.g., azimuth and elevation), which can be calibrated to three-dimensional (3D) rectilinear Cartesian coordinate grids; these systems are generally referred to as 3D lidar and ladar instruments. The terms “lidar” and “ladar” are often used synonymously and, for the purposes of this discussion, the terms “3D lidar,” “scanned lidar,” or “lidar” are used to refer to these systems without loss of generality. 3D lidar instruments obtain three-dimensional (e.g., angle, angle, range) data sets. Conceptually, this would be equivalent to using a rangefinder and scanning it across a scene, capturing the range of objects in the scene to create a multi-dimensional image. When only the range is captured from the return laser pulses, these instruments obtain a 3D data set (e.g., angle, angle, range)n, where the index n is used to reflect that a series of range-resolved laser pulse returns can be collected, not just the first reflection.
Some 3D lidar instruments are also capable of collecting the intensity of the reflected pulse returns generated by the objects located at the resolved (angle, angle, range) coordinates in the scene. When both the range and intensity are recorded, a multi-dimensional data set [e.g., angle, angle, (range-intensity)n] is obtained. This is analogous to a video camera in which, for each instantaneous field of view (FOV), each effective camera pixel captures both the color and intensity of the scene observed through the lens. However, 3D lidar systems instead capture the range to the object and the reflected pulse intensity.
Lidar systems can include different types of lasers operating at different wavelengths, including those that are not visible (e.g., those operating at a wavelength of 840 nm or 905 nm), those in the near-infrared (e.g., those operating at a wavelength of 1064 nm or 1550 nm), and those in the thermal infrared, including wavelengths known as the “eyesafe” spectral region (i.e., generally those beyond about 1400 nm), where ocular damage is less likely to occur. Lidar transmitters are generally invisible to the human eye. However, when the wavelength of the laser is close to the range of sensitivity of the human eye—roughly 350 nm to 730 nm—the energy of the laser pulse and/or the average power of the laser must be lowered substantially relative to a laser operating at a wavelength to which the human eye is not sensitive. Thus, a laser operating at, for example, 1550 nm, can—without causing ocular damage—generally have 200 times to 1 million times more laser pulse energy than a laser operating at 840 nm or 905 nm.
One challenge for a lidar system is detecting poorly reflective objects at long distance, which requires transmitting a laser pulse with enough energy that the return signal—reflected from the distant target—is of sufficient magnitude to be detected. To determine the minimum required laser transmission power, several factors must be considered. For instance, the magnitude of the pulse returns scattering from the diffuse objects in a scene falls off with their range, and the intensity of the return pulses generally scales with distance according to 1/R^4 for small objects and 1/R^2 for larger objects; yet, for highly-specularly reflecting objects (i.e., those objects that are not diffusively-scattering objects), the collimated laser beams can be directly reflected back, largely unattenuated. This means that—if the laser pulse is transmitted, then reflected from a target 1 meter away—it is possible that the full energy (J) from the laser pulse will be reflected into the photoreceiver; but—if the laser pulse is transmitted, then reflected from a target 333 meters away—it is possible that the return will have a pulse with energy approximately 10^12 times weaker than the transmitted energy.
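The range scaling above can be sketched as a simple relative-intensity model. This is illustrative only: it ignores atmospheric attenuation, receiver aperture, and target geometry, so it does not by itself reproduce the exact attenuation figure cited above:

```python
# Relative return intensity versus range, normalized to a 1 m reference.
# Diffuse returns fall off roughly as 1/R^2 for large objects and 1/R^4
# for small objects; specular returns can come back largely unattenuated.

def relative_return_intensity(range_m: float, exponent: int = 2) -> float:
    """Return intensity at range_m relative to the return at 1 m."""
    return 1.0 / (range_m ** exponent)

# A small object at 100 m returns on the order of 1e-8 of the 1 m signal
# under the 1/R^4 model; a large object at 10 m returns 1/100 of it.
small_far = relative_return_intensity(100.0, exponent=4)
large_near = relative_return_intensity(10.0)
```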
In many lidar systems, highly sensitive photoreceivers are used to increase the system sensitivity to reduce the amount of laser pulse energy that is needed to reach poorly reflective targets at the longest distances required, and to maintain eyesafe operation. Some variants of these detectors include those that incorporate photodiodes, and/or offer gain, such as avalanche photodiodes (APDs) or single-photon avalanche detectors (SPADs). These variants can be configured as single-element detectors, segmented detectors, linear detector arrays, or area detector arrays. Using highly sensitive detectors such as APDs or SPADs reduces the amount of laser pulse energy required for long-distance ranging to poorly reflective targets. The technological challenge of these photodetectors is that they must also be able to accommodate the incredibly large dynamic range of signal amplitudes.
As dictated by the properties of the optics, the focus of a laser return changes as a function of range; as a result, near objects are often out of focus. Furthermore, also as dictated by the properties of the optics, the location and size of the “blur”—i.e., the spatial extent of the optical signal—changes as a function of range, much like in a standard camera. These challenges are commonly addressed by using large detectors, segmented detectors, or multi-element detectors to capture all of the light or just a portion of the light over the full-distance range of objects. It is generally advisable to design the optics such that reflections from close objects are blurred, so that a portion of the optical energy does not reach the detector or is spread between multiple detectors. This design strategy reduces the dynamic range requirements of the detector and protects the detector from damage.
Acquisition of the lidar imagery can include, for example, a 3D lidar system embedded in the front of a car, where the 3D lidar system includes a laser transmitter with any necessary optics, a single-element photoreceiver with any necessary dedicated or shared optics, and an optical scanner used to scan (“paint”) the laser over the scene. Generating a full-frame 3D lidar range image—where the field of view is 20 degrees by 60 degrees and the angular resolution is 0.1 degrees (10 samples per degree)—requires emitting 120,000 pulses (20 × 10 × 60 × 10 = 120,000). When update rates of 30 frames per second are required, such as is required for automotive lidar, roughly 3.6 million pulses per second must be generated and their returns captured.
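The pulse-budget arithmetic above can be made explicit:

```python
# Pulse budget for a scanned lidar: one pulse per angular sample.

def pulses_per_frame(fov_el_deg: float, fov_az_deg: float, samples_per_deg: float) -> int:
    """Number of pulses needed to sample the full field of view once."""
    return int(fov_el_deg * samples_per_deg * fov_az_deg * samples_per_deg)

def pulses_per_second(frame_pulses: int, frame_rate_hz: float) -> float:
    """Sustained pulse rate at a given frame update rate."""
    return frame_pulses * frame_rate_hz

frame = pulses_per_frame(20, 60, 10)   # 20 x 60 degrees at 10 samples/degree
rate = pulses_per_second(frame, 30)    # 30 frames per second
```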
There are many ways to combine and configure the elements of the lidar system—including considerations for the laser pulse energy, beam divergence, detector array size and array format (single element, linear, 2D array), and scanner to obtain a 3D image. If higher power lasers are deployed, pixelated detector arrays can be used, in which case the divergence of the laser would be mapped to a wider field of view relative to that of the detector array, and the laser pulse energy would need to be increased to match the proportionally larger field of view. For example—compared to the 3D lidar above—to obtain same-resolution 3D lidar images 30 times per second, a 120,000-element detector array (e.g., 200×600 elements) could be used with a laser that has pulse energy that is 120,000 times greater. The advantage of this “flash lidar” system is that it does not require an optical scanner; the disadvantages are that the larger laser results in a larger, heavier system that consumes more power, and that it is possible that the required higher pulse energy of the laser will be capable of causing ocular damage. The maximum average laser power and maximum pulse energy are limited by the requirement for the system to be eyesafe.
As noted above, while many lidar systems operate by recording only the laser time of flight and using that data to obtain the distance to the first (closest) target return, some lidar systems are capable of capturing both the range and intensity of one or multiple target returns created from each laser pulse. For example, for a lidar system that is capable of recording multiple laser pulse returns, the system can detect and record the range and intensity of multiple returns from a single transmitted pulse. In such a multi-pulse lidar system, the range and intensity of a return pulse from a closer-by object can be recorded, as well as the range and intensity of later reflection(s) of that pulse—one(s) that moved past the closer-by object and later reflected off of more-distant object(s). Similarly, if glint from the sun reflecting from dust in the air or another laser pulse is detected and mistakenly recorded, a multi-pulse lidar system allows for the return from the actual targets in the field of view to still be obtained.
The amplitude of the pulse return is primarily dependent on the specular and diffuse reflectivity of the target, the size of the target, and the orientation of the target. Laser returns from close, highly-reflective objects are many orders of magnitude greater in intensity than the intensity of returns from distant targets. Many lidar systems require highly sensitive photodetectors, for example avalanche photodiodes (APDs), which along with their CMOS amplification circuits allow low reflectivity targets to be detected, provided the photoreceiver components are optimized for high conversion gain. Largely because of their high sensitivity, these detectors may be damaged by very intense laser pulse returns.
However, capturing the intensity of pulses over the large dynamic range associated with laser ranging may be challenging because the signals are too large to capture directly. One can infer the intensity by using a recording of a bit-modulated output obtained using serial-bit encoding obtained from one or more voltage threshold levels. This technique is often referred to as time-over-threshold (TOT) recording or, when multiple thresholds are used, multiple time-over-threshold (MTOT) recording.
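A minimal sketch of the time-over-threshold idea: the duration that a sampled return waveform stays above a voltage threshold serves as a proxy for pulse intensity. The sample values and threshold below are illustrative assumptions:

```python
# Time-over-threshold (TOT) recording: a stronger return pulse stays
# above the threshold longer, so TOT encodes intensity without
# digitizing the full amplitude.

def time_over_threshold(samples, threshold, sample_period_s):
    """Total time the sampled waveform exceeds the threshold."""
    return sum(1 for s in samples if s > threshold) * sample_period_s

# Illustrative waveforms sampled at 1 GHz (1 ns per sample).
weak = [0.0, 0.2, 0.6, 0.2, 0.0]
strong = [0.0, 0.5, 0.9, 1.0, 0.9, 0.5, 0.0]
tot_weak = time_over_threshold(weak, 0.4, 1e-9)     # 1 ns above threshold
tot_strong = time_over_threshold(strong, 0.4, 1e-9) # 5 ns above threshold
```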
The detector array 102, which can comprise a sensor chip assembly (SCA) 105 having an array of pixels, is coupled to a readout module 104, such as a readout integrated circuit (ROIC). Although the SCA 105 is shown as a ROIC and detector array, in another embodiment they may comprise one piece of material, for example a monolithic silicon detector. In addition, the readout module 104 may comprise a silicon circuit and the detector module 102 may comprise a different material, such as, but not limited to, GaAs, InGaAs, InGaAsP, and/or other detector materials.
In embodiments, the detector array 102 can comprise a single pixel, or pixels in one dimension (1D) or two dimensions (2D). An interface module 106 can output the information from the readout module 104. A safety module 108 can analyze operation of the detector system 100 and may generate alerts upon detecting one or more faults. In embodiments, the safety module 108 can include active pixel test injection functionality. In embodiments, the safety module 108 can provide Automotive Safety Integrity Level (ASIL) related functionality, as described more fully below. The detector system 100 can include a regulator 110 to provide one or more regulated voltages for the system.
In the illustrated embodiment of
The voltage discriminator 208 detects when the amplifier 206 output exceeds a threshold voltage Vthresh. When the threshold voltage Vthresh is exceeded, the voltage discriminator 208 produces a digital output pulse corresponding in time and duration to the threshold voltage Vthresh exceedance. In single-ended pixel systems, the threshold is directly applied to the negative input of the voltage discriminator 208. In embodiments, the amplifier 206 output may be coupled to the input of the voltage discriminator 208 through an AC coupling capacitor 212.
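The discriminator behavior can be modeled at the sample level: the digital output is high exactly while the input exceeds Vthresh, so the output pulse width tracks the duration of the exceedance. This is a behavioral sketch with illustrative values, not circuit detail from the disclosure:

```python
# Behavioral model of a voltage discriminator: output is 1 while the
# amplifier output exceeds the threshold, 0 otherwise.

def discriminator_output(amp_samples, v_thresh):
    """Digital output stream for a sampled amplifier output."""
    return [1 if v > v_thresh else 0 for v in amp_samples]

# A return pulse exceeding a 0.5 V threshold for two samples produces a
# two-sample-wide digital pulse.
pulse = discriminator_output([0.0, 0.3, 0.7, 0.8, 0.4, 0.1], 0.5)
```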
In the example embodiment of
The slope 230 of the step_lim signal and the step_limB signal can be controlled to shape the current pulse Itest from the injection capacitor 211. In the illustrated embodiment, the injection capacitor 211 is coupled between the first and second switches 224, 226. The first switch 224 is coupled to an injection voltage signal Vinj and the second switch 226 is coupled to a voltage reference, such as ground. The injection voltage Vinj level may also define the characteristics of the injection pulse Itest. In embodiments, respective buffer elements 217, 218 can define characteristics, such as ramp slope, of the switch control signals step_lim, step_limB to shape the pulse. For example, the drive strength, impedance, capacitance, fabrication technology and the like, can be used to control the switch signals, and therefore, the shape of the injection pulse Itest.
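The effect of the step_lim ramp slope on the injected current follows from i(t) = C·dv/dt for the injection capacitor: a gentler ramp gives a smaller, wider current pulse carrying the same total charge. The component values below are illustrative assumptions, not taken from the disclosure:

```python
# Instantaneous injection current while the switch node ramps linearly:
# i = C * dv/dt. Slowing the ramp trades amplitude for width at
# constant injected charge.

def injection_current_a(cap_farads: float, ramp_v_per_s: float) -> float:
    """Current through the injection capacitor during a linear ramp."""
    return cap_farads * ramp_v_per_s

fast = injection_current_a(2e-15, 1.0 / 1e-9)   # 1 V in 1 ns  -> 2 uA pulse
slow = injection_current_a(2e-15, 1.0 / 10e-9)  # 1 V in 10 ns -> 0.2 uA pulse
```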
As described above, the injection pulse Itest is amplified by the amplifier 206 and, if above a voltage threshold Vthresh, the voltage discriminator 208 generates at its output 210 a disc out pulse signal corresponding to the injection pulse Itest.
The differential voltage discriminator 308 detects when the positive input exceeds the negative input and produces a digital output pulse corresponding in time and duration to the positive input exceedance. In the illustrated embodiment, a threshold generator 319 can be coupled to the negative input of the voltage discriminator 308. An alternative embodiment injects a differential threshold through coupling capacitors to both positive and negative inputs of the voltage discriminator 308.
In embodiments, the system can confirm that the test pulse Itest is seen at the output of the voltage discriminator 208, 308. For example, the safety module 108 of
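The safety check described above can be sketched as follows: inject a test pulse into a pixel and flag a fault if no discriminator output is observed. The function names and callback structure here are assumptions for illustration, not the disclosure's implementation:

```python
# Hedged sketch of active pixel test injection for fault detection:
# a pixel passes if its discriminator responds to an injected test pulse.

def check_pixel(inject_fn, read_disc_fn) -> bool:
    """Inject a test pulse, then return True if the discriminator fired."""
    inject_fn()
    return bool(read_disc_fn())

def scan_array(pixels):
    """pixels: iterable of (pixel_id, inject_fn, read_disc_fn).
    Returns the ids of pixels that failed to respond."""
    return [pid for pid, inj, rd in pixels if not check_pixel(inj, rd)]

# Two healthy pixels and one unresponsive pixel -> one reported fault.
pixels = [
    ("p0", lambda: None, lambda: 1),
    ("p1", lambda: None, lambda: 1),
    ("p2", lambda: None, lambda: 0),  # no discriminator response: fault
]
faults = scan_array(pixels)
```

A safety module could run such a scan periodically and raise an alert when the fault list is non-empty.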
It is understood that control logic can comprise a decoder allowing for control of the pixel injection in a number of different ways. In some embodiments, row and column select controls enable individual pixel selection for active test current injection. Multiple pixels on a row or multiple pixels on a column can be selected, as well as all pixels on a row, all pixels on a column, multiple rows or columns of pixels, or all pixels globally.
It is understood that selecting pixels for pulse injection can be implemented in a wide variety of configurations in hardware, software, and combinations thereof, to meet the needs of a particular application. It is further understood that one pixel, one row of pixels, one column of pixels, or any practical subset of pixels can be selected for active pixel test injection.
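The row/column selection decode above can be modeled simply: a pixel at (r, c) receives the test injection only when both its row-select and column-select lines are asserted. This is an assumption-level model for illustration, not circuit detail from the disclosure:

```python
# Row/column decode for test injection: a pixel is enabled when both
# its row and its column are in the selected sets, which supports
# single-pixel, row, column, multi-row/column, and global selection.

def pixel_selected(row: int, col: int, row_sel: set, col_sel: set) -> bool:
    """True when both the row and column select lines are asserted."""
    return row in row_sel and col in col_sel

# Selecting row 1 and columns {0, 2} in a 2 x 3 array enables exactly
# pixels (1, 0) and (1, 2).
enabled = [(r, c) for r in range(2) for c in range(3)
           if pixel_selected(r, c, {1}, {0, 2})]
```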
Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., RAM/ROM, CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer.
Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array), a general-purpose graphics processing unit (GPGPU), and/or an ASIC (application-specific integrated circuit)).
Having described exemplary embodiments of the disclosure, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. The embodiments contained herein should not be limited to disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. Other embodiments not specifically described herein are also within the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5917320 | Scheller et al. | Jun 1999 | A |
6091239 | Vig et al. | Jul 2000 | A |
6297627 | Towne et al. | Oct 2001 | B1 |
6693419 | Stauth et al. | Feb 2004 | B2 |
6760145 | Taylor et al. | Jul 2004 | B1 |
6778728 | Taylor et al. | Aug 2004 | B2 |
6894823 | Taylor et al. | May 2005 | B2 |
6989921 | Bernstein et al. | Jan 2006 | B2 |
7015780 | Bernstein et al. | Mar 2006 | B2 |
7160753 | Williams, Jr. | Jan 2007 | B2 |
7253614 | Forrest et al. | Aug 2007 | B2 |
7321649 | Lee | Jan 2008 | B2 |
7432537 | Huntington | Oct 2008 | B1 |
7504053 | Alekel | Mar 2009 | B1 |
7605623 | Yun et al. | Oct 2009 | B2 |
7724050 | Lee | May 2010 | B2 |
7764719 | Munroe et al. | Jul 2010 | B2 |
7782911 | Munroe et al. | Aug 2010 | B2 |
7787262 | Mangtani et al. | Aug 2010 | B2 |
7852549 | Alekel et al. | Dec 2010 | B2 |
7885298 | Munroe | Feb 2011 | B2 |
7990194 | Shim | Aug 2011 | B2 |
7994421 | Williams et al. | Aug 2011 | B2 |
8207484 | Williams | Jun 2012 | B1 |
8319307 | Williams | Nov 2012 | B1 |
8570372 | Russell | Oct 2013 | B2 |
8597544 | Alekel | Dec 2013 | B2 |
8630036 | Munroe | Jan 2014 | B2 |
8630320 | Munroe et al. | Jan 2014 | B2 |
8729890 | Donovan et al. | May 2014 | B2 |
8730564 | Alekel | May 2014 | B2 |
8743453 | Alekel et al. | Jun 2014 | B2 |
8760499 | Russell | Jun 2014 | B2 |
8766682 | Williams | Jul 2014 | B2 |
8853639 | Williams, Jr. | Oct 2014 | B2 |
8917128 | Baek et al. | Dec 2014 | B1 |
9121762 | Williams et al. | Sep 2015 | B2 |
9164826 | Fernandez | Oct 2015 | B2 |
9197233 | Gaalema et al. | Nov 2015 | B2 |
9269845 | Williams et al. | Feb 2016 | B2 |
9329057 | Foletto et al. | May 2016 | B2 |
9368933 | Nijjar et al. | Jun 2016 | B1 |
9397469 | Nijjar et al. | Jul 2016 | B1 |
9447299 | Schut et al. | Sep 2016 | B2 |
9451554 | Singh et al. | Sep 2016 | B1 |
9466745 | Williams et al. | Oct 2016 | B2 |
9520871 | Eagen et al. | Dec 2016 | B2 |
9553216 | Williams et al. | Jan 2017 | B2 |
9591238 | Lee et al. | Mar 2017 | B2 |
9621041 | Sun et al. | Apr 2017 | B2 |
9693035 | Williams et al. | Jun 2017 | B2 |
9759602 | Williams | Sep 2017 | B2 |
9804264 | Villeneuve et al. | Oct 2017 | B2 |
9810775 | Welford et al. | Nov 2017 | B1 |
9810777 | Williams et al. | Nov 2017 | B2 |
9810786 | Welford et al. | Nov 2017 | B1 |
9812838 | Villeneuve et al. | Nov 2017 | B2 |
9823353 | Eichenholz et al. | Nov 2017 | B2 |
9835490 | Williams et al. | Dec 2017 | B2 |
9841495 | Campbell et al. | Dec 2017 | B2 |
9843157 | Williams | Dec 2017 | B2 |
9847441 | Huntington | Dec 2017 | B2 |
9857468 | Eichenholz et al. | Jan 2018 | B1 |
9869754 | Campbell et al. | Jan 2018 | B1 |
9874635 | Eichenholz et al. | Jan 2018 | B1 |
9897687 | Campbell et al. | Feb 2018 | B1 |
9905992 | Welford et al. | Feb 2018 | B1 |
9910088 | Milano et al. | Mar 2018 | B2 |
9923331 | Williams | Mar 2018 | B2 |
9941433 | Williams et al. | Apr 2018 | B2 |
9958545 | Eichenholz et al. | May 2018 | B2 |
9989629 | LaChapelle | Jun 2018 | B1 |
9995622 | Williams | Jun 2018 | B2 |
10003168 | Villeneuve | Jun 2018 | B1 |
10007001 | LaChapelle et al. | Jun 2018 | B1 |
10012732 | Eichenholz et al. | Jul 2018 | B2 |
10056909 | Qi et al. | Aug 2018 | B1 |
10061019 | Campbell et al. | Aug 2018 | B1 |
10073136 | Milano et al. | Sep 2018 | B2 |
10088559 | Weed et al. | Oct 2018 | B1 |
10094925 | LaChapelle | Oct 2018 | B1 |
10110128 | Raval et al. | Oct 2018 | B2 |
10114111 | Russell et al. | Oct 2018 | B2 |
10121813 | Eichenholz et al. | Nov 2018 | B2 |
10139478 | Gaalema et al. | Nov 2018 | B2 |
10156461 | Snyder et al. | Dec 2018 | B2 |
10169678 | Sachdeva et al. | Jan 2019 | B1 |
10169680 | Sachdeva et al. | Jan 2019 | B1 |
10175345 | Rhee et al. | Jan 2019 | B2 |
10175697 | Sachdeva et al. | Jan 2019 | B1 |
10191155 | Curatu | Jan 2019 | B2 |
10209359 | Russell et al. | Feb 2019 | B2 |
10211592 | Villeneuve et al. | Feb 2019 | B1 |
10211593 | Lingvay et al. | Feb 2019 | B1 |
10217889 | Dhulla et al. | Feb 2019 | B2 |
10218144 | Munroe et al. | Feb 2019 | B2 |
10241198 | LaChapelle et al. | Mar 2019 | B2 |
10254388 | LaChapelle et al. | Apr 2019 | B2 |
10254762 | McWhirter et al. | Apr 2019 | B2 |
10267898 | Campbell et al. | Apr 2019 | B2 |
10267899 | Weed et al. | Apr 2019 | B2 |
10267918 | LaChapelle et al. | Apr 2019 | B2 |
10275689 | Sachdeva et al. | Apr 2019 | B1 |
10291125 | Raval et al. | May 2019 | B2 |
10295668 | LaChapelle et al. | May 2019 | B2 |
10310058 | Campbell et al. | Jun 2019 | B1 |
10324170 | Engberg, Jr. et al. | Jun 2019 | B1 |
10324185 | McWhirter et al. | Jun 2019 | B2 |
10338199 | McWhirter et al. | Jul 2019 | B1 |
10338223 | Englard et al. | Jul 2019 | B1 |
10340651 | Drummer et al. | Jul 2019 | B1 |
10345437 | Russell et al. | Jul 2019 | B1 |
10345447 | Hicks | Jul 2019 | B1 |
10348051 | Shah et al. | Jul 2019 | B1 |
10386489 | Albelo et al. | Aug 2019 | B2 |
10394243 | Ramezani et al. | Aug 2019 | B1 |
10401480 | Gaalema et al. | Sep 2019 | B1 |
10401481 | Campbell et al. | Sep 2019 | B2 |
10418776 | Welford et al. | Sep 2019 | B2 |
10445599 | Hicks | Oct 2019 | B1 |
10451716 | Hughes et al. | Oct 2019 | B2 |
10473788 | Englard et al. | Nov 2019 | B2 |
10481181 | Bussing et al. | Nov 2019 | B2 |
10481605 | Maila et al. | Nov 2019 | B1 |
10488458 | Milano et al. | Nov 2019 | B2 |
10488496 | Campbell et al. | Nov 2019 | B2 |
10491885 | Hicks | Nov 2019 | B1 |
10498384 | Briano | Dec 2019 | B2 |
10502831 | Eichenholz | Dec 2019 | B2 |
10503172 | Englard et al. | Dec 2019 | B2 |
10509127 | Englard et al. | Dec 2019 | B2 |
10514462 | Englard et al. | Dec 2019 | B2 |
10520602 | Villeneuve et al. | Dec 2019 | B2 |
10523884 | Lee | Dec 2019 | B2 |
10535191 | Sachdeva et al. | Jan 2020 | B2 |
10539665 | Danziger et al. | Jan 2020 | B1 |
10545240 | Campbell et al. | Jan 2020 | B2 |
10551485 | Maheshwari et al. | Feb 2020 | B1 |
10551501 | LaChapelle | Feb 2020 | B1 |
10557939 | Campbell et al. | Feb 2020 | B2 |
10557940 | Eichenholz et al. | Feb 2020 | B2 |
10571567 | Campbell et al. | Feb 2020 | B2 |
10571570 | Paulsen et al. | Feb 2020 | B1 |
10578720 | Hughes et al. | Mar 2020 | B2 |
10591600 | Villeneuve et al. | Mar 2020 | B2 |
10591601 | Hicks et al. | Mar 2020 | B2 |
10606270 | Englard et al. | Mar 2020 | B2 |
10613158 | Cook et al. | Apr 2020 | B2 |
10627495 | Gaalema et al. | Apr 2020 | B2 |
10627512 | Hicks | Apr 2020 | B1 |
10627516 | Eichenholz | Apr 2020 | B2 |
10627521 | Englard et al. | Apr 2020 | B2 |
10634735 | Kravljaca et al. | Apr 2020 | B2 |
10636285 | Haas et al. | Apr 2020 | B2 |
10641874 | Campbell et al. | May 2020 | B2 |
10663564 | LaChapelle | May 2020 | B2 |
10663585 | McWhirter | May 2020 | B2 |
10677897 | LaChapelle et al. | Jun 2020 | B2 |
10677900 | Russell et al. | Jun 2020 | B2 |
10684360 | Campbell | Jun 2020 | B2 |
10908190 | Bussing et al. | Feb 2021 | B2 |
10948537 | Forrest et al. | Mar 2021 | B2 |
11029176 | Geiger et al. | Jun 2021 | B2 |
11115244 | Briano et al. | Sep 2021 | B2 |
11177814 | Kim et al. | Nov 2021 | B2 |
11313899 | Milano et al. | Apr 2022 | B2 |
11409000 | Behzadi et al. | Aug 2022 | B1 |
11451234 | Austin et al. | Sep 2022 | B1 |
20030112913 | Balasubramanian | Jun 2003 | A1 |
20040169753 | Gulbransen | Sep 2004 | A1 |
20070257193 | Macciocchi | Nov 2007 | A1 |
20110270543 | Schmidt | Nov 2011 | A1 |
20130169329 | Searles | Jul 2013 | A1 |
20130176061 | Haerle et al. | Jul 2013 | A1 |
20140094993 | Johnson | Apr 2014 | A1 |
20160013796 | Choi | Jan 2016 | A1 |
20160054434 | Williams | Feb 2016 | A1 |
20170250694 | Im et al. | Aug 2017 | A1 |
20180054206 | Im et al. | Feb 2018 | A1 |
20180068699 | Choi et al. | Mar 2018 | A1 |
20180069367 | Villeneuve et al. | Mar 2018 | A1 |
20180191356 | Kesarwani | Jul 2018 | A1 |
20180191979 | Mu | Jul 2018 | A1 |
20180284239 | LaChapelle et al. | Oct 2018 | A1 |
20180284240 | LaChapelle et al. | Oct 2018 | A1 |
20180284275 | LaChapelle | Oct 2018 | A1 |
20180284280 | Eichenholz et al. | Oct 2018 | A1 |
20190033460 | Lipson | Jan 2019 | A1 |
20190310368 | LaChapelle | Oct 2019 | A1 |
20210124050 | Puglia et al. | Apr 2021 | A1 |
20210132229 | Milkov | May 2021 | A1 |
20220236376 | Li et al. | Jul 2022 | A1 |
20220294172 | Taylor | Sep 2022 | A1 |
Number | Date | Country |
---|---|---|
201422772 | Jun 2014 | TW |
Entry |
---|
U.S. Appl. No. 17/645,118, filed Dec. 20, 2021, Babushkin, et al. |
U.S. Appl. No. 17/657,140, filed Mar. 20, 2022, Myers. |
U.S. Appl. No. 17/659,033, filed Apr. 13, 2022, Cadugan et al. |
U.S. Appl. No. 17/659,035, filed Apr. 13, 2022, Cadugan et al. |
U.S. Appl. No. 17/660,221, filed Apr. 22, 2022, Filippini et al. |
U.S. Appl. No. 17/663,896, filed May 18, 2022, Cadugan et al. |
U.S. Appl. No. 17/805,070, filed Jun. 2, 2022, Myers et al. |
U.S. Appl. No. 17/809,990, filed Jun. 30, 2022, Quirk et al. |
U.S. Notice of Allowance dated Oct. 24, 2022 for U.S. Appl. No. 17/660,221; 9 pages. |
U.S. Appl. No. 17/566,763, filed Dec. 31, 2021, Huntington et al. |
U.S. Appl. No. 17/648,702, filed Jan. 24, 2022, Lee et al. |
U.S. Appl. No. 17/651,250, filed Feb. 16, 2022, Marshall. |
U.S. Appl. No. 17/653,881, filed Mar. 8, 2022, Keuleyan et al. |
U.S. Appl. No. 17/656,977, filed Mar. 29, 2022, Myers et al. |
U.S. Appl. No. 17/656,978, filed Mar. 29, 2022, Myers et al. |
U.S. Appl. No. 17/656,981, filed Mar. 29, 2022, Myers et al. |
U.S. Appl. No. 17/197,314, filed Mar. 10, 2021, Taylor et al. |
U.S. Appl. No. 17/197,328, filed Mar. 30, 2021, Taylor et al. |
U.S. Appl. No. 17/230,253, filed Apr. 14, 2021, Judkins, III et al. |
U.S. Appl. No. 17/230,276, filed Apr. 14, 2021, Cadugan. |
U.S. Appl. No. 17/230,277, filed Apr. 14, 2021, Judkins, III et al. |
U.S. Appl. No. 17/352,829, filed Jun. 21, 2021, Huntington et al. |
U.S. Appl. No. 17/352,937, filed Jun. 21, 2021, Cadugan et al. |
U.S. Appl. No. 17/376,607, filed Jul. 15, 2021, Stewart et al. |
U.S. Appl. No. 17/402,065, filed Aug. 13, 2021, Lee et al. |
U.S. Notice of Allowance dated Jan. 25, 2023 for U.S. Appl. No. 17/660,221; 7 pages. |
Number | Date | Country | |
---|---|---|---|
20230051974 A1 | Feb 2023 | US |