As is known in the art, photodetector arrays can be used to detect photons in a wide range of applications. Conventional detector array accuracy may be impacted by temperature changes. Some known detector arrays include a discrete temperature sensor placed on the same substrate as the detector. Other known arrays use the temperature of the read out integrated circuit (ROIC). These techniques may provide less than optimal temperature-sensing accuracy.
Embodiments of the disclosure provide methods and apparatus for sensing the temperature of a photodetector or photodetector array. In example embodiments, a photodetector comprises an array of photodiodes. The resulting temperature indication can then be used by circuitry integral to, or in addition to, the circuitry used to determine the photonic response.
In embodiments, a detector array uses its aggregate dark current, which can be DC coupled to eliminate photonic transient events and which has a known temperature coefficient. The aggregate dark current can be sensed as the average output current of the photodetector bias power supply.
In some embodiments, one or more photodiodes can be used for sensing temperature information. The photodiodes can comprise vertical junctions (front- or backside-illuminated, for example), can be either forward or reverse biased, and/or can be covered with an opaque material to remove undesired photonic response. The forward or reverse voltage and/or reverse current can be used as an indicator of temperature(s). Circuits can be implemented in a silicon-based (Group IV) photodetector or array and/or in III-V materials, e.g., InGaAs, InP, etc.
In some embodiments, a photolithographic process can be used during fabrication of a photodiode or photodiode array for creating one or more lateral junctions with a selected structure and composition that can be forward or reverse biased. The forward or reverse voltage and/or reverse current can be used as an indicator of temperature. The obtained temperature can be used in example embodiments to enhance accuracy of a detector array.
In embodiments, the collected temperature information can be processed in a readout integrated circuit (ROIC). In some embodiments, analog temperature signal information can be converted into a digital indication of temperature (analog-to-digital conversion). The digital temperature information can be linearized, such as with a lookup table or arithmetic post-processing. In some embodiments, digital and/or analog temperature information can be transmitted as a buffered output (e.g., DAC, push-pull output, etc.). In embodiments, the temperature information can be compared to one or more threshold values.
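As a minimal sketch of the lookup-table linearization step described above, the following assumes a hypothetical calibration table mapping raw ADC codes from the temperature-sense channel to degrees Celsius; the code values, temperatures, and function name are illustrative only, not part of any disclosed embodiment.

```python
import bisect

# Hypothetical calibration table (assumed values): raw ADC codes from the
# temperature-sense channel versus temperature in degrees Celsius.
ADC_CODES = [100, 400, 800, 1300, 1900, 2600]   # must be monotonic
TEMPS_C = [-40.0, -10.0, 25.0, 60.0, 95.0, 125.0]

def linearize(adc_code: int) -> float:
    """Piecewise-linear interpolation of the lookup table, one way to
    linearize the digitized temperature signal."""
    if adc_code <= ADC_CODES[0]:
        return TEMPS_C[0]
    if adc_code >= ADC_CODES[-1]:
        return TEMPS_C[-1]
    i = bisect.bisect_right(ADC_CODES, adc_code)
    frac = (adc_code - ADC_CODES[i - 1]) / (ADC_CODES[i] - ADC_CODES[i - 1])
    return TEMPS_C[i - 1] + frac * (TEMPS_C[i] - TEMPS_C[i - 1])

print(linearize(1000))  # 39.0 with the assumed table
```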
In some embodiments, the photodetector bias voltage can be adjusted as a function of temperature. The temperature information can be used as a feedback signal in a temperature control loop for controlling a temperature-stabilizing mechanism, such as a thermoelectric cooler (TEC). The temperature information can also be compared against one or more threshold values indicating the temperature limits beyond which the detector may be out of specification or have reduced performance, indicating a potential functional safety fault, such as in an ISO 26262-compliant automotive application.
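To make the control-loop idea concrete, here is a minimal sketch of one loop iteration, assuming a simple proportional-integral (PI) law; the gains, limits, and names are placeholders rather than a disclosed implementation.

```python
def tec_control_step(temp_c, setpoint_c, state, kp=0.8, ki=0.05, dt=0.1,
                     limit_low_c=-40.0, limit_high_c=105.0):
    """One iteration of a PI temperature loop: the sensed array temperature
    is the feedback signal, the return value is a normalized TEC drive
    command, and a fault flag is raised when a threshold is crossed."""
    error = setpoint_c - temp_c
    state["integral"] += error * dt
    drive = kp * error + ki * state["integral"]
    drive = max(-1.0, min(1.0, drive))      # clamp to the actuator range
    fault = not (limit_low_c <= temp_c <= limit_high_c)
    return drive, fault

state = {"integral": 0.0}
drive, fault = tec_control_step(temp_c=31.5, setpoint_c=25.0, state=state)
```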
In one aspect, a method comprises: reverse biasing a first pixel in a pixel array; forward biasing a second pixel in the pixel array; measuring a voltage for the second pixel; and determining a temperature of the pixel array from the measured voltage for the second pixel.
A method can further include one or more of the following features: the first and second pixels are coupled to a reverse bias voltage source, the second pixel, and not the first pixel, is coupled to a forward bias voltage source that has a higher voltage than the reverse bias voltage source, determining the voltage for the second pixel includes measuring a voltage at a point between the forward bias voltage source and the second pixel, the second pixel comprises a photodiode and measuring the voltage comprises measuring the voltage at an anode of the photodiode, the first and second pixels are contained in a pixel layer and an oxide layer is above the pixel layer, a material is above the second pixel to block photons from the second pixel, the material comprises metal, the material blocks a selected frequency bandwidth, and/or the material is wider than the second pixel.
In another aspect, a system comprises: a first pixel in a pixel array, wherein the first pixel is reverse-biased; a second pixel in the pixel array, wherein the second pixel is forward-biased; a first circuit to measure a voltage for the second pixel; and a second circuit to determine a temperature of the pixel array from the measured voltage for the second pixel.
A system can further include one or more of the following features: the first and second pixels are coupled to a reverse bias voltage source, the second pixel, and not the first pixel, is coupled to a forward bias voltage source that has a higher voltage than the reverse bias voltage source, determining the voltage for the second pixel includes measuring a voltage at a point between the forward bias voltage source and the second pixel, the second pixel comprises a photodiode and measuring the voltage comprises measuring the voltage at an anode of the photodiode, the first and second pixels are contained in a pixel layer and an oxide layer is above the pixel layer, a material is above the second pixel to block photons from the second pixel, the material comprises metal, the material blocks a selected frequency bandwidth, and/or the material is wider than the second pixel.
The foregoing features of this disclosure, as well as the disclosure itself, may be more fully understood from the following description of the drawings.
Prior to describing example embodiments of the disclosure, some information is provided. Laser ranging systems can include laser radar (ladar), light detection and ranging (lidar), and rangefinding systems, which are generic terms for the same class of instrument that uses light to measure the distance to objects in a scene. This concept is similar to radar, except optical signals are used instead of radio waves. Similar to radar, a laser ranging and imaging system emits an optical signal, e.g., a pulse or continuous optical signal, toward a particular location and measures the return echoes to extract the range.
Laser ranging systems generally work by emitting a laser pulse and recording the time it takes for the laser pulse to travel to a target, reflect, and return to a photoreceiver. The laser ranging instrument records the time of the outgoing pulse—either from a trigger or from calculations that use measurements of the scatter from the outgoing laser light—and then records the time that a laser pulse returns. The difference between these two times is the time of flight to and from the target. Using the speed of light, the round-trip time of the pulses is used to calculate the distance to the target.
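The range calculation itself reduces to one line. A minimal sketch (the function name is introduced here for illustration): the pulse travels out and back, so the one-way distance is half of the speed of light times the round-trip time.

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_s: float) -> float:
    """One-way distance from round-trip time of flight: d = c * t / 2."""
    return C * round_trip_s / 2.0

print(range_from_tof(667e-9))  # ~100 m for a ~667 ns round trip
```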
Lidar systems may scan the beam across a target area to measure the distance to multiple points across the field of view, producing a full three-dimensional range profile of the surroundings. More advanced flash lidar cameras, for example, contain an array of detector elements, each able to record the time of flight to objects in their field of view.
When using light pulses to create images, the emitted pulse may intercept multiple objects, at different orientations, as the pulse traverses a 3D volume of space. The echoed laser-pulse waveform contains a temporal and amplitude imprint of the scene. By sampling the light echoes, a record of the interactions of the emitted pulse with the intercepted objects of the scene is extracted, allowing an accurate multi-dimensional image to be created. To simplify signal processing and reduce data storage, laser ranging and imaging can be dedicated to discrete-return systems, which record only the time of flight (TOF) of the first, or a few, individual target returns to obtain angle-angle-range images. In a discrete-return system, each recorded return corresponds, in principle, to an individual laser reflection (i.e., an echo from one particular reflecting surface, for example, a vehicle, a person, a tree, a pole, or a building). By recording just a few individual ranges, discrete-return systems simplify signal processing and reduce data storage, but they do so at the expense of lost target and scene reflectivity data. Because laser-pulse energy has significant associated costs and drives system size and weight, recording the TOF and pulse amplitude of more than one laser pulse return per transmitted pulse, to obtain angle-angle-range-intensity images, increases the amount of captured information per unit of pulse energy. All other things being equal, capturing the full pulse return waveform offers significant advantages, in that the maximum data is extracted from the investment in average laser power. In full-waveform systems, each backscattered laser pulse received by the system is digitized at a high sampling rate (e.g., 500 MHz to 1.5 GHz). This process generates digitized waveforms (amplitude versus time) that may be processed to achieve higher-fidelity 3D images.
Of the various laser ranging instruments available, those with single-element photoreceivers generally obtain range data along a single range vector, at a fixed pointing angle. This type of instrument—which is, for example, commonly used by golfers and hunters—either obtains the range (R) to one or more targets along a single pointing angle or obtains the range and reflected pulse intensity (I) of one or more objects along a single pointing angle, resulting in the collection of pulse range-intensity data, (R, I)_i, where i indicates the number of pulse returns captured for each outgoing laser pulse.
More generally, laser ranging instruments can collect ranging data over a portion of the solid angles of a sphere, defined by two angular coordinates (e.g., azimuth and elevation), which can be calibrated to three-dimensional (3D) rectilinear Cartesian coordinate grids; these systems are generally referred to as 3D lidar and ladar instruments. The terms "lidar" and "ladar" are often used synonymously and, for the purposes of this discussion, the terms "3D lidar," "scanned lidar," or "lidar" are used to refer to these systems without loss of generality. 3D lidar instruments obtain three-dimensional (e.g., angle, angle, range) data sets. Conceptually, this is equivalent to using a rangefinder and scanning it across a scene, capturing the range of objects in the scene to create a multi-dimensional image. When only the range is captured from the return laser pulses, these instruments obtain a 3D data set (e.g., angle, angle, range)_n, where the index n reflects that a series of range-resolved laser pulse returns can be collected, not just the first reflection.
Some 3D lidar instruments are also capable of collecting the intensity of the reflected pulse returns generated by the objects located at the resolved (angle, angle, range) coordinates in the scene. When both the range and intensity are recorded, a multi-dimensional data set [e.g., angle, angle, (range-intensity)_n] is obtained. This is analogous to a video camera in which, for each instantaneous field of view (FOV), each effective camera pixel captures both the color and intensity of the scene observed through the lens. However, 3D lidar systems instead capture the range to the object and the reflected pulse intensity.
Lidar systems can include different types of lasers, including those operating at wavelengths that are not visible (e.g., 840 nm or 905 nm), wavelengths in the near-infrared (e.g., 1064 nm or 1550 nm), and wavelengths in the thermal infrared, including those in the so-called "eyesafe" spectral region (i.e., generally wavelengths beyond about 1300 nm), where ocular damage is less likely to occur. Lidar transmitters are generally invisible to the human eye. However, when the wavelength of the laser is close to the range of sensitivity of the human eye—roughly 350 nm to 730 nm—the energy of the laser pulse and/or the average power of the laser must be kept much lower than for a laser operating at a wavelength to which the human eye is not sensitive. Thus, a laser operating at, for example, 1550 nm can—without causing ocular damage—generally have 200 times to 1 million times more laser pulse energy than a laser operating at 840 nm or 905 nm.
One challenge for a lidar system is detecting poorly reflective objects at long distance, which requires transmitting a laser pulse with enough energy that the return signal—reflected from the distant target—is of sufficient magnitude to be detected. To determine the minimum required laser transmission power, several factors must be considered. For instance, the magnitude of the pulse returns scattered from diffuse objects in a scene depends on their range: the intensity of the return pulses generally scales with distance according to 1/R^4 for small objects and 1/R^2 for larger objects. For highly specularly reflecting objects (i.e., objects that are not diffusely scattering), however, the collimated laser beam can be directly reflected back, largely unattenuated. This means that, if the laser pulse is transmitted and then reflected from a target 1 meter away, it is possible that the full energy (J) of the laser pulse will be reflected into the photoreceiver; but, if the laser pulse is transmitted and then reflected from a target 333 meters away, the return pulse may have energy approximately 10^12 times weaker than the transmitted energy.
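As a rough worked check of the small-object term alone (offered only for illustration, where E_rx is a label introduced here for received pulse energy; the ~10^12 figure above would also fold in factors such as target reflectivity, receiver aperture, and atmospheric loss, which are not modeled in this scaling):

$$\frac{E_{\mathrm{rx}}(1\,\mathrm{m})}{E_{\mathrm{rx}}(333\,\mathrm{m})} \;\approx\; \left(\frac{333\,\mathrm{m}}{1\,\mathrm{m}}\right)^{4} \;\approx\; 1.2 \times 10^{10}$$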
In many lidar systems, highly sensitive photoreceivers are used to increase system sensitivity, reducing the amount of laser pulse energy needed to reach poorly reflective targets at the longest required distances while maintaining eyesafe operation. Some variants of these detectors incorporate photodiodes and/or offer gain, such as avalanche photodiodes (APDs) or single-photon avalanche detectors (SPADs). These variants can be configured as single-element detectors, segmented detectors, linear detector arrays, or area detector arrays. Using highly sensitive detectors such as APDs or SPADs reduces the amount of laser pulse energy required for long-distance ranging to poorly reflective targets. The technological challenge for these photodetectors is that they must also accommodate the very large dynamic range of signal amplitudes.
As dictated by the properties of the optics, the focus of a laser return changes as a function of range; as a result, near objects are often out of focus. Furthermore, the location and size of the "blur"—i.e., the spatial extent of the optical signal—also change as a function of range, much as in a standard camera. These challenges are commonly addressed by using large detectors, segmented detectors, or multi-element detectors to capture all of the light, or just a portion of the light, over the full distance range of objects. It is generally advisable to design the optics such that reflections from close objects are blurred, so that a portion of the optical energy does not reach the detector or is spread between multiple detectors. This design strategy reduces the dynamic-range requirements of the detector and prevents detector damage.
Acquisition of the lidar imagery can include, for example, a 3D lidar system embedded in the front of a car, where the 3D lidar system includes a laser transmitter with any necessary optics, a single-element photoreceiver with any necessary dedicated or shared optics, and an optical scanner used to scan ("paint") the laser over the scene. Generating a full-frame 3D lidar range image—where the field of view is 20 degrees by 60 degrees and the angular resolution is 0.1 degrees (10 samples per degree)—requires emitting 120,000 pulses (20 × 10 × 60 × 10 = 120,000). When update rates of 30 frames per second are required, such as for automotive lidar, roughly 3.6 million pulses per second must be generated and their returns captured.
There are many ways to combine and configure the elements of the lidar system to obtain a 3D image, including considerations for the laser pulse energy, beam divergence, detector array size and array format (single element, linear, 2D array), and scanner. If higher-power lasers are deployed, pixelated detector arrays can be used, in which case the divergence of the laser would be mapped to a wider field of view relative to that of the detector array, and the laser pulse energy would need to be increased to match the proportionally larger field of view. For example—compared to the 3D lidar above—to obtain same-resolution 3D lidar images 30 times per second, a 120,000-element detector array (e.g., 200×600 elements) could be used with a laser that has 120,000 times greater pulse energy. The advantage of this "flash lidar" system is that it does not require an optical scanner; the disadvantages are that the larger laser results in a larger, heavier system that consumes more power, and that the required higher pulse energy may be capable of causing ocular damage. The maximum average laser power and maximum pulse energy are limited by the requirement for the system to be eyesafe.
As noted above, while many lidar systems operate by recording only the laser time of flight and using that data to obtain the distance to the first (closest) target return, some lidar systems are capable of capturing both the range and intensity of one or multiple target returns created from each laser pulse. For example, for a lidar system that is capable of recording multiple laser pulse returns, the system can detect and record the range and intensity of multiple returns from a single transmitted pulse. In such a multi-pulse lidar system, the range and intensity of a return pulse from a nearby object can be recorded, as well as the range and intensity of later reflections of that pulse—those that moved past the nearby object and reflected off more-distant objects. Similarly, if glint from the sun reflecting from dust in the air or another laser pulse is detected and mistakenly recorded, a multi-pulse lidar system allows the returns from the actual targets in the field of view to still be obtained.
The amplitude of the pulse return is primarily dependent on the specular and diffuse reflectivity of the target, the size of the target, and the orientation of the target. Laser returns from close, highly reflective objects are many orders of magnitude greater in intensity than returns from distant targets. Many lidar systems require highly sensitive photodetectors, for example avalanche photodiodes (APDs), which, along with their CMOS amplification circuits, allow poorly reflective targets to be detected, provided the photoreceiver components are optimized for high conversion gain. Largely because of their high sensitivity, these detectors may be damaged by very intense laser pulse returns.
However, capturing the intensity of pulses over the large dynamic range associated with laser ranging may be challenging because the signals are too large to capture directly. One can instead infer the intensity from a recording of a bit-modulated output obtained using serial-bit encoding from one or more voltage threshold levels. This technique is often referred to as time-over-threshold (TOT) recording or, when multiple thresholds are used, multiple time-over-threshold (MTOT) recording.
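A minimal sketch of single-threshold TOT extraction from a digitized return, assuming one pulse per record; the sample values, threshold, and names are illustrative. For a known pulse shape, the width above threshold maps monotonically to amplitude, which is how intensity can be inferred without capturing the full signal directly.

```python
def time_over_threshold(samples, threshold, dt):
    """Return (start_time, width) of the interval the sampled return spends
    above the voltage threshold, or None if it never crosses."""
    above = [i for i, v in enumerate(samples) if v > threshold]
    if not above:
        return None
    first, last = above[0], above[-1]   # single-pulse assumption
    return first * dt, (last - first + 1) * dt

wave = [0.0, 0.1, 0.6, 0.9, 0.7, 0.2, 0.0]   # digitized return, volts
print(time_over_threshold(wave, 0.5, 1e-9))  # (2e-09, 3e-09) at 1 GS/s
```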
In embodiments, the detector array 102 can comprise pixels in one dimension (1D), two dimensions (2D), and/or three dimensions (3D). An interface module 108 can output the information from the readout module. The detector system 100 can comprise an integrated circuit (IC) package having external I/O including array outputs and temperature alert signals. The detector system 100 can include a regulator 110 to provide one or more regulated voltages for the system.
In the illustrated embodiment, a series of photodetectors 202a,b,c, such as photodiodes, are coupled to a common bias voltage 204 at one terminal and respective amplifiers 206a,b,c at the other terminal. It is understood that the common bias voltage 204 can be coupled to any practical number of photodiodes. The amplifiers 206 provide a respective output signal for each of the photodiodes 202a,b,c. In embodiments, the amplifiers 206 can be coupled to ground via an optional sense resistor 208, which may comprise a precision temperature stabilized resistor.
A voltage SA on the sense resistor 208 can be provided as an input to a low-pass filter 210, the output SB of which can be an input to a low-bandwidth voltage measurement device 212.
The filtered signal SB represents the dark current for the detector array. As used herein, dark current refers to a current generated by the array when not receiving target photonic energy. The array dark current can be used to calibrate and obtain temperature information for the array photodetectors.
In example arrays, dark current can change as a function of e^(0.05·T), where T is the temperature in Kelvin, near ambient room temperature. This translates to roughly a 5% change per degree. By analyzing the dark current over time, temperature information can be obtained for the array. This information can be translated, by use of a lookup table or a more complex mathematical algorithm, to determine the implied temperature.
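A minimal sketch of that inversion, assuming a hypothetical calibration anchor (I_REF measured at T_REF) and taking the ~5%-per-kelvin coefficient from the text; all values and names are illustrative.

```python
import math

I_REF = 2.0e-9   # amps of aggregate dark current at T_REF (assumed calibration)
T_REF = 295.0    # kelvin (assumed calibration)
ALPHA = 0.05     # per kelvin, from the ~5%/degree figure above

def temp_from_dark_current(i_dark: float) -> float:
    """Invert I(T) = I_REF * exp(ALPHA * (T - T_REF)) to recover T."""
    return T_REF + math.log(i_dark / I_REF) / ALPHA

print(temp_from_dark_current(3.3e-9))  # ~305 K for a ~65% rise in dark current
```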
An example circuit 300 implementation includes first and second pixels having respective first and second photodiodes 302, 304 biased in a reverse direction by a bias voltage source 306. As will be appreciated by one skilled in the art, reverse biasing is 'normal' for a photodetector or avalanche photodiode. A third pixel has a forward-biased photodiode 308 coupled between the reverse bias voltage source 306 and a forward bias voltage source 310, which has a higher voltage than the reverse bias voltage source 306.
In embodiments, a voltage Vc at the anode of the third photodiode 308 can be measured to determine the current, potentially through the use of a known sense resistor, so that the resultant voltage drop below the voltage source 310 corresponds to temperature.
By eliminating the photonic response of the third pixel 308, which is proximate the first and second pixels 302, 304, any change in the measured voltage Vc, and hence in the current, is due to a change in temperature. Thus, the measurement can be used to determine a temperature of the array.
In the illustrated embodiment, first and second PN junctions 500, 502 are reverse biased by a common bias voltage 504. The first and second PN junctions are separated by an N-type material 506. Intrinsic material 507, for example, can be provided between the N-type materials of the first and second PN junctions 500, 502. A path from a first contact 508 to a second contact 510 goes through the reverse-biased first PN junction. A forward-biased junction 511 is provided at the interface of the N-type material 506 and the P-type material 513 of the first PN junction. By measuring the current flowing between the first and second contacts 508, 510, one can extract the temperature of the array using the temperature dependence of the first junction 500. Extracting temperature by sensing the change in the forward-biased junction voltage drop is a well-known technique for using a diode as a temperature sensor.
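A minimal sketch of that well-known technique, assuming a linearized model around a hypothetical calibration point; a coefficient on the order of −2 mV/K is typical for silicon junctions at constant bias current, but the actual value must be characterized for the junction in question.

```python
V_REF = 0.65     # volts: forward drop at T_REF (assumed calibration)
T_REF = 298.0    # kelvin (assumed calibration)
K_V = -2.0e-3    # volts per kelvin (typical order for silicon; assumed here)

def temp_from_forward_voltage(v_f: float) -> float:
    """Linearized diode-as-thermometer model: T = T_REF + (V_f - V_REF) / K_V."""
    return T_REF + (v_f - V_REF) / K_V

print(temp_from_forward_voltage(0.61))  # ~318 K when the drop falls by 40 mV
```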
In embodiments, a current is measured that corresponds to temperature. In embodiments, a measured current is converted to a voltage, for example for comparison against one or more thresholds. In some embodiments, lookup tables can be used to convert current and/or voltage to temperature for a given array or array element.
It is understood that the PN junctions can form an array of pixels, as described above.
It should be noted that when a forward-biased junction is used to detect temperature, it is the change in the voltage drop across the junction that is significant. When a reverse-biased junction is used, it is the current that is significant.
It is understood that a variety of thresholds and Boolean logic configurations can be used to process the amplifier and/or comparator outputs to meet the needs of a particular application.
In the illustrated embodiment, the TEC unit 806 has a hot side 808 and a cold side 810 separated by a layer 812 of NP junctions disposed on an interconnect layer 814. First and second electrical connections 816, 818 can be coupled to the controller 804. One or more measured currents from the TEC unit 806 can be provided as feedback signals to the amplifier 802 for controlling the temperature of the TEC unit.
Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., RAM/ROM, including FLASH memory, or EEPROM, CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer.
Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special-purpose logic circuitry (e.g., an FPGA (field-programmable gate array), a general-purpose graphics processing unit (GPGPU), and/or an ASIC (application-specific integrated circuit)).
Having described exemplary embodiments of the disclosure, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. The embodiments contained herein should not be limited to the disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. Other embodiments not specifically described herein are also within the scope of the following claims.