The inventors describe improved ladar systems and methods, including improved bistatic ladar systems. As used herein, the term “ladar” refers to and encompasses any of laser radar, laser detection and ranging, and light detection and ranging (“lidar”). With a bistatic ladar system, the system's ladar transmitter and ladar receiver are not commonly bore-sited. Accordingly, with a bistatic ladar system, there is an offset of the ladar transmitter relative to the ladar receiver such that the ladar transmitter and the ladar receiver are non-coaxial in terms of their gaze.
Vehicles may employ ladar systems to support autonomous operation and/or driver-assistance features, such as advanced driver-assistance systems (ADAS), to detect obstacles that may be nearby or in the vehicle's path. Such vehicles may be airborne vehicles, ground vehicles, or sea-based vehicles. Effective ladar systems for vehicles rely on rapid detection of, characterization of, and response to dynamic obstacles. Toward this end, reduced latency is beneficial for ladar systems because low latency generally translates to improved safety for ladar-equipped vehicles.
For example, at highway speeds, the closing speed between two approaching vehicles can exceed 200 km/hour, which corresponds to more than 50 m/sec. A 10 Hz scanning ladar system with a requirement for 10 detections requires 1 second just to collect sufficient data to confidently declare the presence of an unsafe approaching vehicle. This means that by the time the ladar system is able to declare the presence of the unsafe approaching vehicle, that unsafe vehicle is over 150 feet closer.
The inventors disclose ladar systems and methods that use any of (1) pulse duration agility to mitigate background noise, (2) polarization diversity to minimize sidelobe leakage from the returns produced by retroreflective objects, and/or (3) a cross-receiver with linear arrays to isolate interference. One or more of these improvements can be incorporated into bistatic ladar systems to improve the performance of such bistatic ladar systems. A receiver in a bistatic ladar system allows for a (nearly) arbitrarily large receive aperture and longer dwell on receive, while still retaining a small transmit aperture. This enables a practitioner to simultaneously enjoy highly agile transmit while still being capable of low light detection. A small transmit aperture implies small mass, and hence low momentum, for the scanning components, which ultimately enables high-speed, agile gaze scanning. Furthermore, a large receive aperture allows for low light detection by increasing the ratio of collected flux to noise equivalent power (NEP). However, the increased collection size (aperture) and collection time (dwell) result in increased sensitivity to interference (be it from retroreflectors and/or background light) relative to systems where the transmitter and receiver are coaxial. Through the techniques described herein, sensitivity to interference can be mitigated to provide a bistatic ladar system with performance benefits similar to a coaxial system.
The techniques disclosed herein can be advantageously applied to bistatic ladar systems where the ladar receivers employ either imaging optics or non-imaging optics. With imaging optics, the ladar system employs lenses which focus light that is incident on the receiver onto an imaging focal plane akin to a camera. With non-imaging optics (e.g., a compound parabolic light collector or a fiber taper), light is collected but no image is formed. The inventors believe that the techniques described herein can be particularly advantageous when used with non-imaging ladar receivers. Imaging receivers typically provide better mitigation of background clutter and other interference, but are intrinsically costlier than non-imaging receivers, largely due to the multichannel readout integrated circuit (ROIC) they require. The technical innovations described herein relating to agile pulse duration, polarization diversity, and/or a cross-receiver can close the gap in cost and performance for non-imaging receivers vis-à-vis imaging receivers. Furthermore, these innovations can be mutually reinforcing when combined together, which can lead to even greater performance improvements.
In an example embodiment, the inventors disclose a ladar system that employs a variable duration pulse on transmit. With this example embodiment, pulse duration is adapted based on measured background light. The system senses the background light level from prior ladar returns and adjusts the pulse duration for new ladar pulse shots accordingly.
In another example embodiment, the inventors disclose a ladar system that employs polarization diversity on receive. With this example embodiment, dual channel receivers can be used to measure incident polarization for each ladar return. The system can then use a known polarization on transmit for the ladar pulses to separate returns corresponding to retroreflectors from returns corresponding to standard objects (e.g., Lambertian objects). Retroreflective returns can be classified as specular or quasi-specular in nature, while non-retroreflective returns can be classified as Lambertian in nature.
In another example embodiment, the inventors disclose a ladar system where two one-dimensional (1D) receive arrays are arranged at different angles (e.g., in a cross pattern) in order to create a two-dimensional isolation of desired signal returns from interference. By employing 1D arrays, the system can greatly reduce cost and complexity.
Further still, the inventors disclose example embodiments where the ladar system employs two or more of any combination of variable pulse duration on transmit, polarization diversity on receive, and cross-receivers to isolate interference.
Further still, the inventors disclose herein a variety of additional technical innovations, including but not limited to techniques for frequency domain shuttering, polarization-based point cloud augmentation, polarization-based rich data sets, and others.
These and other features and advantages of the present invention will be described hereinafter to those having ordinary skill in the art.
Discussed below are example embodiments of ladar systems that employ (1) adaptive pulse duration, where the duration of the ladar pulses is adjusted as a function of measured background light, (2) polarization diversity in the ladar receiver to improve detection capabilities, and (3) a cross-receiver to isolate interference. For ease of discussion, these features will be separately discussed with reference to example embodiments. However, it should be understood that a practitioner may choose to incorporate any or all of these technical innovations into the same ladar system if desired.
Ladar transmitter 102 can be configured to transmit ladar pulses 110 toward a range point 112 to support range detection with respect to such range point 112. The range point 112 will reflect the incident pulse 110 to produce a ladar pulse return 114 that is received by ladar receiver 104.
The ladar transmitter 102 can be a scanning ladar transmitter if desired by a practitioner. As examples, the ladar transmitter 102 can employ technology as described in U.S. Pat. No. 10,078,133, the entire disclosure of which is incorporated herein by reference. This incorporated patent describes, inter alia, ladar transmitters that can employ compressive sensing to intelligently select and target range points for interrogation via ladar pulses 110.
The ladar receiver 104 can include one or more photo-receivers to convert incident light (which may include ladar pulse returns 114) into a signal 120 that is indicative of the ladar pulse returns 114 so that range information (and intensity information) for the range point 112 can be determined. Examples of commercially available photo-receivers that can be used in the ladar receiver 104 include diode detectors, thermopile detectors, linear avalanche detectors, and single photon avalanche detectors (SPADs). As examples, the ladar receiver 104 can employ technology as described in U.S. Pat. Nos. 9,933,513 and 10,185,028, the entire disclosures of each of which are incorporated by reference. These incorporated patents describe, inter alia, ladar receivers that can selectively activate regions of a photodetector array for improved noise reduction when sensing ladar pulse returns 114.
Also, in an example embodiment, the ladar receiver 104 can be shot noise limited (e.g., with Johnson noise and dark current exceeded by shot noise levels). There are five types of noise/interference sources in a ladar system: (i) Johnson noise (electronic noise from amplification and conductance), (ii) dark current (spontaneous current flow in the detector in the absence of light), (iii) background light, which appears as shot noise, (iv) ladar pulse return shot noise, and (v) interference from other ladars or retroreflectors leaking in through sidelobes. Example embodiments discussed herein are focused on reducing (iii) and (v) (background light and interference). Both of these types of noise differ from the desired returns in terms of angle of arrival and/or polarization. Detection of ladar returns 114 may occur above the initial background clutter and shot noise background levels as discussed below, or the returns may be below these levels; but we will assume that the noise sources (i), (ii), and (iv) discussed above have been adequately and prudently suppressed to below the ladar returns using existing knowledge in the art. Further, in an example embodiment, the ladar receiver 104 can have a receiver aperture whose diffraction limit is less than the beam-waist of the ladar transmitter 102 (e.g., the aperture is at least 20× larger than the ladar transmitter 102's spot size, i.e., the effective aperture which determines the transmitter's beam-waist).
The example system 100 of
Control circuit 108 can include circuitry for processing the background light signal 116 and computing an adjusted pulse duration for use by the ladar transmitter 102 with respect to a new ladar pulse 110 to be transmitted toward a range point 112. The adjusted pulse duration to be used for the new ladar pulse 110 can be communicated to the ladar transmitter 102 via control signal 118. The control circuit 108 may include a processor to facilitate the signal processing and computational tasks that are to be performed by control circuit 108 for adjusting pulse duration as a function of detected background light. While any suitable processor capable of performing the operations described herein can be employed by control circuit 108, in an example embodiment, a field programmable gate array (FPGA) can be used due to its ability to perform tasks at low latency via the massive parallelization that is available through its programmable logic gates. The control circuit 108 may also be configured to process the signal 120 from the ladar receiver 104 to facilitate the extraction of range, intensity, and/or other information from signal 120. Control circuit 108 may include an analog-to-digital converter (ADC) for facilitating this extraction.
At step 120, the control circuit 108 measures a background light level based on signal 116 from sensor 106. This can be accomplished in any of a number of ways. Typically, the camera will have a shutter which controls exposure time, where the exposure time can be labeled as time T. Also, the camera will have an effective aperture (which can be labeled as A) and an optical bandwidth (which can be labeled as b); and the laser will have a bandwidth B. The passages below will describe an example embodiment for the case where the camera is a visible light camera without polarization; but the concepts can be readily extended to other types of cameras.
Example Process Flow for Step 120 Using a Passive Camera:
At step 122, the control circuit 108 computes a pulse duration for a new ladar pulse shot as a function of the measured background light level. The relationship between pulse duration and potential interference from background light results from the amount of time that the ladar receiver 104 will need to collect light in order to detect a ladar pulse return. Longer time spent collecting light for the ladar receiver 104 typically means that the light sensed by the ladar receiver 104 will suffer from more background clutter. Accordingly, at step 122, the control circuit 108 can operate to reduce pulse duration in situations where a bright background is detected by sensor 106 and increase pulse duration in situations where a dark background is detected by sensor 106. In an example where the pulse energy is kept relatively fixed from shot-to-shot, narrowing the pulse duration will translate to a higher amplitude pulse while lengthening the pulse duration will translate to a lower amplitude pulse. The higher amplitude/shorter duration pulse will allow for easier detection of the ladar pulse return 114 over the bright background light ambience. Considered in isolation, this argues for making the pulse duration as short as possible to suppress background light. But the use of the bistatic ladar receiver complicates this arrangement: the lower amplitude/longer duration pulse admits more background light, yet it allows for less Johnson and dark current noise. This is because a detector that handles shorter pulses must support a wider bandwidth and hence a relaxed (lower) feedback resistance, in turn adding more Johnson and dark current noise. More precisely, for fixed energy per pulse in the laser, increasing pulse duration increases background noise with square root proportionality, for a net square-root-dependent loss in SNR from background light. However, increasing pulse duration also lowers the Johnson noise bandwidth, which leads to a square root noise decline, with another square root or more of noise reduction from the accompanying rise in feedback resistance. The result is SNR parity at worst, and an SNR gain in general, with respect to Johnson noise, with the opposite holding for background noise.
Thus to optimize bistatic detection, background light levels are assessed, and the ladar pulse (in the transmitter 102) and subsequent matched filter (in the receiver 104) are both tuned accordingly. Too short a pulse swamps the system in Johnson noise, and too long a pulse swamps the system in background clutter. The trade that is optimum places the background clutter below, or at least near, the Johnson noise, since otherwise less bandwidth would yield further SNR gain.
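This trade can be summarized in symbols (a simplified sketch, assuming fixed pulse energy $E_{pp}$, a matched receiver bandwidth of roughly $\Delta f \approx 1/(2D)$ for pulse duration D, and a feedback resistance $\Omega$ that is allowed to grow as the bandwidth shrinks):

$\sigma_{bg} \propto \sqrt{D}, \qquad \sigma_{Johnson} \propto \sqrt{4kT\,\Delta f/\Omega} \propto 1/\sqrt{D\,\Omega(D)}, \qquad \Delta f \approx 1/(2D)$

Since $\Omega(D)$ itself increases with D, the Johnson term falls at least as fast as the background term rises, and the optimum duration sits where the two contributions cross.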
At step 124, the control circuit 108 provides control signal 118 to the ladar transmitter 102, where this control signal includes a specification of the computed pulse duration from step 122. The computed pulse duration may be provided to the ladar transmitter along with a number of other parameters used to control the scanning and firing of ladar pulses 110.
The bistatic architecture of
Source:
The laser 202 can be in optical communication, by either free space or fiber coupling, with a 2D scanner 200. The scanner 200 can employ an optical phased array, spatial light modulator, digital mirror device, galvanometer, or MEMS device. The ladar transmitter 102 defined by the scanner-laser configuration can be flash or divergence limited. Many vendors provide such systems, examples of which include MTI, Fraunhofer, BNS, and Meadowlark. The scanner 200 has an interface with the outside environment, where the interface may include an antireflection-coated "first lens", possibly bandpass filtering (to avoid amplified spontaneous emission (ASE) and other leakage), and (if desired) a telescope. An example of a vendor of such systems is Edmund Optics. The ladar transmitter 102 can also include a pulse generator for the laser 202 that is responsive to a control signal from the control circuit 108 to control the pulse duration for the ladar pulses. Such a pulse generator can adjust pulse duration commensurate with an inter-pulse period between ladar pulses for the system 100.
Pulse Launch and Return (Variable Pulse Length, Short Range Coaxial Bistatics):
The photon packet 110 then arrives at a target (e.g., range point 112) in the environmental scene through free space propagation, and returns via free space propagation to the photo-receiver 208 of the ladar receiver 104. The front end of ladar receiver 104 may have receive optics 210. The receive optics 210 may include a concentrator such as a compound parabolic concentrator (available, for example, from Edmund Optics) or may simply comprise a staring photodetector equipped with antireflective and bandpass coated material (available, for example, from Edmund Optics or Thorlabs), in which case cold stops 212 may be employed to limit background light in the receiver 104 from outside the field of view. The receive optics 210 may also include a bandpass optical filter that limits the light received by the photo-receiver 208. As an example, such a bandpass optical filter may have an acceptance wavelength width of 10 nm-100 nm.
The target 112 also injects passively derived incoherent light into cameras 206a and 206b. In this embodiment, the system employs two cameras 206a and 206b, so as to remove parallax and induce redundancy. This is preferred to direct optical co-bore siting with beam splitters and common optics, as disclosed in U.S. patent application Ser. No. 16/106,374, filed Aug. 21, 2018, and entitled “Ladar Receiver with Co-Bore Sited Camera” (the entire disclosure of which is incorporated herein by reference), when using 900 nm lasers, since the wavelength offset from visible is low. In an example embodiment, visible band color cameras can be used for cameras 206a, 206b. However, it should be understood that infrared, grey scale, and/or hyperspectral cameras could also be used for cameras 206a, 206b.
A drawback of the bistatic architecture is that objects very close to the ladar system 100 may not be visible to the ladar receiver 104. This is shown in
As an alternative to the conventional blind spot solution, a practitioner may wish to use the light returning onto the ladar transmitter 102 to remove this blind spot. As shown by
Detection and Processing (Pulse Duration Control Using Background Light, Shuttering):
The photons, born in the laser 202, and journeying into the environment and back, end their brief lives at the photo-receiver 208, whereupon they give birth to electrons. Signal 120 produced by the ladar receiver 104 can be representative of the incident light on the photo-receiver 208, including any ladar pulse returns from the target 112. The goal in a non-imaging receiver is to drive down the background light as low as possible. This can be accomplished by a number of technologies such as large area avalanche photodiode (APD) modules (available, for example, from Hamamatsu), silicon photomultipliers and single photon avalanche diodes (SPADs) (available, for example, from SensL, Ketek, First Sensor, Excelitas), or arrayed PINs (available, for example, from GPD).
The control circuit can interact with the cameras 206a, 206b and the ladar transmitter 102 to provide adaptive pulse duration. As noted above, such adaptive pulse duration can help improve, in real time, the SNR of the ladar system 100 by trading Johnson noise (which drops as the pulse widens) and background noise (which grows as the pulse widens).
Adaptive pulse duration can also help improve range resolution of the ladar system 100. In the absence of noise, the better the range resolution, the better the ladar system can measure the 3D position of each object, which in turn improves velocity estimations, improves shape and feature extraction, and reduces ambiguity between objects. For a given system bandwidth, the longer the pulse 110, the lower the range uncertainty (or the higher the range resolution) in a linearly dependent fashion. Ladar return pulses can be digitized, and then interpolation can be performed between successive digital samples. This allows for improved estimation of the range of the return. For example, if the time sequence of samples is {1,2,3,3,2,1}, it is safe to say the range peak is close to, or at, the time sample exactly between the two 3's in the set. In contrast, if the time sequence of samples is {1,2,3,4,3}, it is safe to say the peak is at the time sample corresponding to the value 4 in the set. When one applies this method, we find that the interpolation accuracy is, in theory, invariant to pulse length, but does depend on bandwidth, provided there is only one return from each laser shot fired. If there are multiple objects at a given azimuth and elevation there can be multiple returns. If these returns are very close together, relative to the ladar pulse length, it can be difficult to separate these returns. This scenario can be referred to as a "discrete clutter" problem. Hence, a benefit of improved range resolution is increased robustness against nearby clutter. Such discrete clutter mitigation can be particularly helpful in ADAS because the road surface is a ubiquitous form of clutter that can contaminate the return from vehicles. Improved pulse duration for optimal balancing can be achieved by first measuring, and updating in real time, the background clutter level.
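As a concrete illustration of the interpolation just described, the following sketch (a hypothetical helper, not code from the incorporated patents) applies three-point parabolic interpolation to digitized return samples; it reproduces both examples above, placing the peak of {1,2,3,3,2,1} exactly between the two 3's and the peak of {1,2,3,4,3} at the sample holding the 4:

    import numpy as np

    def interpolate_peak(samples):
        """Estimate the sub-sample peak location via 3-point parabolic
        interpolation. Returns the peak position in fractional sample
        units. Assumes a single, roughly symmetric return pulse; with
        multiple closely spaced returns (discrete clutter) this estimate
        degrades."""
        y = np.asarray(samples, dtype=float)
        i = int(np.argmax(y))
        if i == 0 or i == len(y) - 1:
            return float(i)  # peak at an edge; no neighbors to interpolate with
        denom = y[i - 1] - 2.0 * y[i] + y[i + 1]
        if denom == 0.0:
            return float(i)
        # Vertex of the parabola through the three samples around the maximum
        return i + 0.5 * (y[i - 1] - y[i + 1]) / denom

    print(interpolate_peak([1, 2, 3, 3, 2, 1]))  # 2.5 (between the two 3's)
    print(interpolate_peak([1, 2, 3, 4, 3]))     # 3.0 (at the value 4)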
In an example embodiment, the control circuit 108 can adjust the pulse duration so that a measured SNR of a detected ladar return is maximized for the measured background clutter levels. Furthermore, the control circuit 108 may maintain a plurality of solar thermal mass models for use when adjusting the pulse duration. A thermal emissivity measurement can be used in concert with a reflectivity model to generate a reflectivity map, which can then be used by the ladar system. The reason this is true is that conservation of energy dictates that all the solar dosage across the globe must match the earth's radiation into space, assuming constant global temperature, which is a good approximation for our purposes. Thus emissivity, which is what a passive thermal camera models, can be translated into the ladar system's sensed background clutter levels for evaluation, and this information can be used to adjust the solar thermal mass models or other background light models in the SNR calculations when adjusting pulse duration.
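This emissivity-to-reflectivity translation is consistent with Kirchhoff's law of thermal radiation: for an opaque surface in thermal equilibrium, spectral reflectivity and emissivity are complementary,

$\rho(\lambda) \approx 1 - \varepsilon(\lambda),$

so a passive thermal emissivity map can be inverted (approximately, and assuming opaque, non-transmissive surfaces) into the reflectivity map used to predict background clutter levels.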
Steps 300-304 of
Steps 306-322 can be performed pre-shot to adaptively control the pulse duration for the ladar pulse shot as a function of measured background light.
At step 306, the control circuit 108 measures the intensity in the optical return from the passive visible light cameras 206a/206b at the azimuth and elevation associated with the range point corresponding to each laser pixel (the range point from the current element in the shot list). Note that this passive measurement senses the intensity of background light that the activated detector will sense from the laser pulse return. The ability of a passive measurement to determine background light is predicated on known physics of background solar radiation, and relies on the temporal invariance of such light, so that the camera can collect over a long dwell time and "learn" what background light will manifest itself on the activated detector if and when such detector 208 is used. As part of step 306, the system can analyze the full waveform sensed by the photo-receiver 208 as digitized by an ADC. The digitized waveform can be fed into a processor (e.g., a system on a chip (SoC), such as an FPGA). The ladar receiver 104 may include an embedded amplifier and low pass filter to facilitate these operations. The SoC analyzes the distribution of background light energy as a function of incidence angle onto the clutter ground plane, and then computes, for each shot list range point, the anticipated range point background light clutter level. This process can be done either (i) in advance of each range point on the shot list, or (ii) when that range point gets to the top of the list and is selected at step 300. A practitioner may optionally employ logic in the ladar system to determine which path is chosen based on the scenario at hand. For example, if the range point is near the horizon, and the vehicle path for the ladar system is unobstructed to the horizon, approach (i) identified above is sensible and warranted, since it allows a long camera collection time and therefore the highest fidelity in capturing background light levels that will interfere with the return pulse from the laser. At the other extreme, if the subject vehicle is poised to enter a tunnel, then approach (ii) identified above is warranted until said vehicle egresses the tunnel and the horizon is once again available. Approach (ii) can also be warranted in a city or urban environment generally, where the background light level for every point in the shot list is likely to vary quickly from street to street as the user's car navigates about the city landscape. If there are multiple ladar receivers 104 staring at different angles, this step can be helpful to account for solar anisotropy as apportioned against the non-imaging photodetector array.
Cameras 206a/206b can be used to not only align color to the ladar pulses to help facilitate compressive sensing, but can also be used to assist in the pulse duration adjustment via links 116a and 116b. This can be accomplished via intensity measurement in RGB channels when looking skyward from a color camera, or by directly measuring earth emissivity with passive infrared imaging. Mass models can then be used to translate emissivity to reflectivity.
Also, one can emulate (to an imperfect degree) a passive thermal camera using a history of data collected by the ladar system itself. This can be done by aggregating shot returns as follows:
Example Process Flow for Determining Background Light Levels Using an Aggregated History of Ladar Return Data (as an Alternative to Cameras):
At step 308, the control circuit 108 applies the measured intensity in the optical return from step 306 to a running average of intensity across the range point field of view. This operation yields an updated running average of intensity across the range point field of view.
At step 310, the control circuit 108 further adjusts the updated running average from step 308 to account for the shift in wavelength from visible light to the laser wavelength. An example for step 310 would be as follows. Suppose the camera measures a red object with an intensity of 10 Watts per square meter. We now want to know how much background light this translates to at our subject laser wavelength. This can be done as follows, assuming that the camera 206a/206b collects all the energy in the 620-720 nm band on the red color channel. To simplify the computations, we treat the energy of background sunlight as constant across the color bands in the camera.
Example Process Flow for Translating from Visible Light to Laser Light when Computing Background Light Levels:
At this point we have accounted for the translation from camera background light to ladar wavelength background light. It remains to account for the effect of the photo receiver 208 on the background light, specifically in the wavelength dependence of this receiver. This is accomplished in step 312. At step 312, the control circuit 108 further adjusts the adjusted running average from step 310 to the acceptance band in wavelength of the photo-receiver 208. This can be accomplished by taking the result in step 5) above, at each wavelength (measured in nanometers (nm)) and multiplying by the response of that wavelength in the photo receiver 208, and then summing across nm, to obtain a quantity expressed in W/square-meter.
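For illustration, the translation of steps 310-312 can be sketched in software as follows (a minimal sketch using the numbers from the example above: 10 W/m² in a 620-720 nm red band assumed spectrally flat; the function name, the 895-905 nm acceptance band, and the 0.8 response value are hypothetical placeholders, not values from the patent):

    import numpy as np

    def red_band_to_receiver_band(red_intensity_w_m2,
                                  red_band_nm=(620.0, 720.0),
                                  receiver_band_nm=(895.0, 905.0),
                                  receiver_response=None):
        """Translate a measured red-channel background intensity (W/m^2)
        into the photo-receiver's acceptance band, treating the solar
        spectral density as flat across both bands (the simplification
        noted in the text). Band edges and the default response value are
        hypothetical placeholders."""
        # Step 310: spectral density in W/m^2/nm, assuming a flat red band
        density = red_intensity_w_m2 / (red_band_nm[1] - red_band_nm[0])
        wavelengths = np.arange(receiver_band_nm[0], receiver_band_nm[1] + 1.0)
        if receiver_response is None:
            receiver_response = np.full(wavelengths.shape, 0.8)  # flat 0.8 response
        # Step 312: multiply per nm by the receiver response, then sum across nm
        return float(np.sum(density * receiver_response))

    # 10 W/m^2 of red light -> 0.1 W/m^2/nm; 0.1 * 0.8 * ~11 nm is about 0.88 W/m^2
    print(red_band_to_receiver_band(10.0))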
At step 314, the control circuit 108 converts the adjusted running average from step 312 into its background shot noise equivalent. This step is performed because, for ladar detection, it is not the light energy (in the classical physics regime) from background light per se that is the cause of interference, but rather the quantum fluctuation of said light. If background sunlight were exactly constant, then the background light (which is the composite of all sunlight reflected from everything in the environment) would be exactly constant for a given environment. In that case, we would simply subtract off this constant (which would then be essentially a DC bias) and eliminate the noise. But, in reality there is fluctuation; and that fluctuation is what we call background shot noise. This quantity can be computed from the classical energy by using the fact that the quantum fluctuation has a variance equal to the classical energy value.
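In symbols, this follows standard Poisson photon statistics: for a mean background photon count $\bar{N}$ collected during the pulse window, $\mathrm{Var}(N) = \bar{N}$ and $\sigma_{shot} = \sqrt{\bar{N}}$, so the background shot noise equivalent is the square root of the mean background energy expressed in photon counts.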
Thus far, we have not accounted for polarization. Background light has polarization structure generally. If polarization lenses are used in the ladar receiver 104, then it is desirable to accurately account for the reduction, or quenching, of background light on each polarized receiver. At step 316, the control circuit 108 can thus decrement the result from step 314 to account for polarity quenching if the ladar receiver 104 employs a polarization lens. For example, as derived from infrared detection textbooks, the polarity quenching from a pure Lambertian target will be 3 dB or 50%.
At step 318, the control circuit 108 computes the theoretical signal-to-noise ratio (SNR) for the upcoming range point shot to be detected as a laser return pulse, including all five interference terms. Note that at this point the pulse is a conceptual entity; the shot has not yet been taken. As a result, we can employ a mathematical expression which is a function of the pulse duration D of the pulse which has yet to be formed (see Dereniak, E. L., Boreman, G. D., "Infrared Detectors and Systems", 1st Edition, Wiley, 1996; the entire disclosure of which is incorporated herein by reference). We denote Epp as the energy per pulse, p(t) as the received pulse, k as Boltzmann's constant, Ω as the effective transimpedance resistance, Δf as the bandwidth of the receiver and the pulse, B as the optical bandwidth, P as the average Planck spectral density across the optical bandwidth, and A as the effective pupil diameter, where h, c, λc, μ, q, idark, clutter, and Selfshot are respectively Planck's constant, the speed of light, the optical carrier wavelength, the computed net background light effective reflectivity, an electron's universal invariant charge, the dark current, the clutter background, and the target shot noise, and L is the aggregated loss factor:
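One representative form of such an expression (a sketch consistent with the Dereniak and Boreman treatment, assuming a responsivity of $q\lambda_c/(hc)$, a detector temperature T, and $\Delta f \approx 1/(2D)$; the exact form and weighting of terms may differ in a given implementation) is:

$\mathrm{SNR}(D) \approx \dfrac{\left(\frac{q\lambda_c}{hc}\cdot\frac{L\,E_{pp}}{D}\right)^{2}}{\frac{4kT\,\Delta f}{\Omega} \;+\; 2q\left(\frac{q\lambda_c}{hc}\right)\mu P B A L\,\Delta f \;+\; 2q\,i_{dark}\,\Delta f \;+\; \mathrm{clutter} \;+\; \mathrm{Selfshot}}$

Here the first two denominator terms (Johnson noise and background shot noise) carry the duration trade discussed above, and the last three terms are the ones ignored in the simplified embodiment below.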
In a simplified example embodiment, which is probably sufficient in most cases, the last three terms in the denominator can be ignored. The control circuit 108 can then find the pulse duration (as a function of the available pulse energy for the shot) that maximizes the SNR. Such a process (finding the value of one quantity that maximizes another quantity) is referred to as an argmax operation.
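The argmax operation can be sketched as follows (a minimal grid-search illustration working in the detected-energy domain; the coefficient values and scalings are placeholder assumptions to be calibrated per system, not the patent's calibrated model):

    import numpy as np

    def best_pulse_duration(epp, johnson_coeff, bg_coeff,
                            durations=np.logspace(-9, -6, 400)):
        """Grid-search argmax of the simplified two-term SNR(D).

        The matched filter collects the full pulse energy epp regardless
        of duration; the Johnson noise variance falls as 1/D^2 (narrower
        bandwidth plus the larger feedback resistance a longer pulse
        permits); and the background shot noise variance grows linearly
        with D (more background light inside the collection window)."""
        d = np.asarray(durations)
        johnson_var = johnson_coeff / d**2   # Johnson noise term
        background_var = bg_coeff * d        # background shot noise term
        snr = epp**2 / (johnson_var + background_var)
        i = int(np.argmax(snr))
        return float(d[i]), float(snr[i])

    # A brighter background (larger bg_coeff) pushes the optimum shorter:
    print(best_pulse_duration(1.0, 1e-18, 1e2))   # optimum near ~270 ns
    print(best_pulse_duration(1.0, 1e-18, 1e5))   # optimum near ~27 ns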
At step 320, the control circuit 108 adjusts the computed SNR, reducing the duration (as needed) to account for the possibility of clutter discretes. The effect of such terms is accounted for directly in the denominator term "clutter" in the SNR formula above taken from Dereniak and Boreman. For instance, if the range point is grazing the road surface, it is desirable to reduce the pulse duration so that the system avoids comingling road clutter with the return pulse from a car. In contrast, a range point fired above the horizon does not need to have a shortened pulse because no clutter discretes are anticipated, since no nearby clutter is deemed likely to occur in the sky.
SNR is linearly proportional to range resolution. Thus, if we double the SNR, then range measurement uncertainty drops by 50%. This allows us to adjust SNR to obtain a desired range accuracy. For example, if our intent is to extract velocity from a moving vehicle, we require high range measurement accuracy, so that we can determine motion from small range offsets frame to frame. At step 322, the control circuit 108 further adjusts the computed SNR based on desired range measurement accuracy. This can be done in a manner similar to that in step 318. Specifically, we can pretend ahead of time that the upcoming range point shot has been taken, and then find the resulting range resolution as a function of this shot's pulse duration. We then find the pulse duration (again using argmax procedures) that achieves our predetermined, desired range resolution.
The example of
At this point, step 324, the desired pulse duration has been determined and is available. Now the system can use communication link 118 to pass the defined pulse duration from control circuit 108 to the ladar transmitter 102, and the transmitter can then be instructed to take the shot.
Steps 306-322 as described above can be performed as pre-shot operations during daytime use of the system. For nighttime use, a practitioner may choose to perform steps 318-322 with corrections as needed for any estimated thermal sky noise, measured earth emissivity, or headlights (or other unnatural lighting conditions) if these are deemed to exceed the thermal and dark current noise.
Thereafter, at step 326, the control circuit 108 compares the pulse return to a threshold, and, if this threshold is exceeded, the pulse position (range) location and intensity can then be passed to a user (which may take the form of a consuming application) via link 250. The information passed to the user can include fused RGB shot returns (with range, azimuth, elevation, and intensity added). Furthermore, this information can include, as desired, radiometry and metrology associated with background light. If thermal and dark current noise is below the background shot noise, it should be noted that the pulse duration SNR/resolution trade has no dependency on the thermal and dark current terms. Accordingly, this can make it beneficial to build the ladar receiver 104 using photodetectors with built-in gain, such as APDs.
In another example embodiment, the control circuit 108 can be configured to provide control instructions to the ladar receiver 104 that are effective to provide a frequency domain “shuttering effect” on the photo-receiver 208 of ladar receiver 104 thereby effectively admitting light across a reduced bandwidth into the photo-receiver detection chain. Since a longer pulse has a lower bandwidth, this process reduces the background light interfering with the target return by removing the incoming background light not required to capture the target return. This shuttering can be adjusted as a function of the measured background light. By employing such shuttering, the system 100 can achieve an effect similar to adaptively modifying the optical circuit in the photodetector (feedback resistance change) to account for changes in pulse duration. This shuttering effect is useful to increase pulse energy when the peak power limit has been met and higher SNR is required. Since the feedback resistance is not changed, shuttering is less effective than true optical photodiode circuit tuning, but it is far easier to implement since it can be accomplished in software. An example shuttering process can be summarized as follows:
Example Process Flow for Shuttering to Adaptively Reduce Bandwidth:
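One way to sketch such a flow in software is shown below (a hypothetical illustration using a simple FIR low-pass filter whose cutoff tracks the pulse bandwidth; the SciPy-based design choices are assumptions, not the patent's specific shutter implementation):

    import numpy as np
    from scipy.signal import firwin, lfilter

    def frequency_domain_shutter(waveform, sample_rate_hz, pulse_duration_s):
        """Digitally 'shutter' the receiver by low-pass filtering the
        digitized waveform down to roughly the transmitted pulse bandwidth.

        A longer pulse has a lower bandwidth, so a longer duration yields
        a tighter cutoff, excluding incoming background light that is not
        required to capture the target return."""
        cutoff_hz = 1.0 / (2.0 * pulse_duration_s)  # approximate pulse bandwidth
        cutoff_hz = min(cutoff_hz, 0.95 * sample_rate_hz / 2.0)  # stay below Nyquist
        taps = firwin(numtaps=63, cutoff=cutoff_hz, fs=sample_rate_hz)
        return lfilter(taps, [1.0], waveform)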
The ladar system may also be designed in any of a number of ways so that polarization properties can be advantageously leveraged with a ladar receiver capable of discriminating polarity. There can be 4 basic options in this regard:
Pre-Digital Ladar Polarization Architecture:
The system 100 may include multiple receive channels (e.g., channels 402 and 404). Channel 402 can correspond to a first polarization (e.g., horizontal (H) polarization), and channel 404 can correspond to a second polarization (e.g., vertical (V) polarization). Channel 402 can include photo-receiver 412, and channel 404 can include photo-receiver 414. Accordingly,
The photo-receivers 412 and 414 can each be attached optically to a compound parabolic concentrator (CPC) whose boundaries are shown in
To begin, the return from range point target 112 can arrive at lenses 416 and 418 via optical paths 422 and 424. The lenses 416 and 418 are not drawn to scale for visual simplicity. Furthermore,
In addition, passive light from the range point target 112 can be collected after free space propagation onto the camera 406. The camera 406 can be equipped with polarization sensitive imaging sensors, and the camera 406 can produce H pixels 472 and V pixels 474 that get sent to the processor 454 of control circuit 108. Processor 454 may take the form of any suitable processor for performing the operations described herein (e.g., the processor 454 can be an SoC device such as an FPGA). Only one camera 406 is shown in
Post-Digital Ladar Polarization Architecture:
To recap, channels 402 and 404 and camera 406 can produce four data streams: H and V polarization from the target return (see 442 and 444), each of which has an intensity measurement, and azimuth elevation and range measurement, as well as camera-derived color intensity, RGB, also in polarization space, via the H pixels 472 and the V pixels 474. These data streams provide a rich data set of camera and ladar registered pairs.
The ladar data dimension count via 442 and 444 is invariant per polarization, remaining at 4 for H and 4 for V, for a net of 8 (where the data dimensions include (1) range from the H ladar return, (2) intensity from the H ladar return, (3) cross range (azimuth) from the H ladar return, (4) height (elevation) from the H ladar return, (5) range from the V ladar return, (6) intensity from the V ladar return, (7) cross range (azimuth) from the V ladar return, and (8) height (elevation) from the V ladar return). It should be noted that the range and intensity may vary, and if interpolation is used the measured arrival angle may vary as well. As described in U.S. Pat. App. Pub. 2019/0086514 (the entire disclosure of which is incorporated herein by reference), there are two kinds of azimuth (Az) and elevation (El) quantities, those of the transmitter 102, determined by each entry in the range point shot list, and those of the receiver 104. The receiver Az and El are identical to the transmit Az and El unless we interpolate. For example, suppose we find in a frame, at a given range, two returns spaced 0.1 degrees apart, at 0 degrees and 0.1 degrees in azimuth, each with measured intensity 5, i.e., we obtain {5,5}. In this situation, we may safely surmise the actual target is situated at 0.05 degrees. However, if we found three returns at 0, 0.1, 0.2 degrees with intensity values {2,8,2}, we could safely surmise that the correct angle in azimuth for the receiver is 0.1 degrees. Note that we are using symmetry here, as we did in range interpolation discussed earlier, which is legitimate since the spatial point spread function of a laser pulse is circularly symmetric.
The camera channel can have multiple dimensions as well, since two angles are needed in 3D space to define polarization. In other words, H and V separation gives you two angles for each x, y data point. This implies that some practitioners might desire more than two channels in
With the example of
Furthermore, while the example of
To recap, the channels in
The control circuit 108 of
To support these operations, control circuit 108 can include an ADC (not shown) that digitizes the ladar return polarization signals 442 and 444. The control circuit 108 can also include channel mixing logic 450 that computes a weighted sum 466 of the digitized H ladar return polarization signal and the digitized V ladar return polarization signal. The weighted sum 466 can be compared to a threshold by threshold comparison logic 452 to determine whether a retroreflector is present in the frame. The threshold can be determined based on how stable the laser polarization is, and how accurately the retroreflector preserves polarization, accounting for any atmospheric artifacts. This may be achieved using numerical modeling, or simply historical behavior from actual known retroreflectors. Processor 454 can then process the result of this threshold comparison which indicates whether the retroreflector is present. If the predetermined threshold is exceeded, then target presence is declared. If the predetermined threshold is not exceeded, then no target is declared in the shot return. In the latter case, no output is passed through communication link 250.
In a related development, the inventors note that polarization measurements can be used to precisely pinpoint (in daytime, of course) the sun's position—even in dark, cloudy conditions or fog conditions. See Hegedus, R. et al., “Could Vikings have navigated under foggy and cloudy conditions by skylight polarization? On the atmospheric optical prerequisites of polarimetric Viking navigation under foggy and cloudy skies”, Proceedings of the Royal Society A, Volume 463, Issue 2080, Jan. 16, 2007, the entire disclosure of which is incorporated herein by reference. Today we have many other techniques for finding where the sun is positioned relative to ourselves, (e.g., a GPS unit), and access to the internet where many web sites (e.g. Google Earth and Heavens-Above) indicate relative solar movement relative to latitude and longitude.
Details of a method for finding the polarization angle anticipated from a given angle are found in
In terms of a process flow, the operation of the system of
Example Process Flow A for Down Selecting which Range Points to Interrogate in the Ladar Point Cloud Based on Analysis of the Passive Polarization Camera
The benefit of the above three stage process is that it reduces the processing burden by limiting retroreflective detection and mitigation to only those shots where it is needed.
At step 500 of
The benefit of using polar coordinates is that the retroreflective analysis in
At step 502, the processor segments the returns x(H), x(V) into regions using ladar-based edge detection and/or camera-based image segmentation. A region that is to be interrogated for retro reflection can be denoted by the label A. This segmentation can employ machine-learning techniques, such as mixture model decomposition. As an example, the segmentation can use k-means clustering, whereby the segmentation is performed by clustering x(H), x(V) spatially into contiguous voxels (blobs), each denoted by A, where in each blob the polarization angle ϕ is constant.
At step 504, the processor checks whether r exceeds a threshold for multiple returns in a given blob A. If so, the processor computes the standard deviation (std) of ϕ. This process can also employ machine-learning techniques, such as an analysis of variance process. For example, the analysis of variance process can be useful for texture analysis in an image. Smooth surfaces, like painted dry wall, tend to have RGB values that are very stable (low std) whereas rougher surfaces such as wood, carpets, and plants tend to have fluctuating, unstable RGB values (high std). To use an example, suppose the returns {1,2,3,4,5,6} have polarization angles, in degrees, of {12,13,17,1,1.2,1} respectively. Then, the machine learning logic would identify two blobs, A(1), A(2) as follows:
A(1)=returns {1,2,3} with mean angle 14 and standard deviation 2.6; and
A(2)=returns{4,5,6} with mean 1 and std 0.1.
At step 506, the processor checks whether the std(ϕ) computed at step 504 is less than a threshold. If the std(ϕ) is less than the threshold, the processor declares that retro reflectivity exists for the subject object. If the std(ϕ) is greater than the threshold, the processor declares that non-retro reflectivity exists for the subject object. If the threshold was chosen as 0.2, for example, we would declare A(2) to be a retro reflector and A(1) as not a retro reflector (i.e., Lambertian). The threshold can be chosen based on multiple considerations. For example, a first consideration can be that the threshold should be consistent with known polarization std values from Lambertian targets. It is known that such non-retro reflective targets have a polarization std of about 10 degrees. So, it is desirable for the threshold not to be anywhere near that big. Next, a second consideration can be that we will want to use an std that is large enough to capture variation in the camera accuracy (digital quantization will give its own std, for example), and finally we will want a threshold that addresses std values for actual observed highway and road signage. The field of polarization imagery is sufficiently nascent that, at present, there is limited empirical data to formulate a good numerical threshold value including this latter consideration, but nonetheless we present a methodology herein to accomplish the pursuit of such a value, which will generally be in the range of 10s of percent.
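The classification of steps 502-506 can be sketched as follows (hypothetical helper names; the 0.2-degree threshold is the example value from above, and the sample standard deviation is used to match the worked A(1)/A(2) example):

    import numpy as np

    def classify_blobs(blobs, std_threshold_deg=0.2):
        """Classify each blob (per-return polarization angles, in degrees)
        as retroreflective or Lambertian from the stability (std) of the
        polarization angle across the blob (steps 504-506)."""
        results = {}
        for name, angles in blobs.items():
            a = np.asarray(angles, dtype=float)
            std = a.std(ddof=1)  # sample std, matching the worked example
            label = "retroreflector" if std < std_threshold_deg else "Lambertian"
            results[name] = (label, float(a.mean()), float(std))
        return results

    # The example from the text: A(1) = returns {1,2,3}, A(2) = returns {4,5,6}
    print(classify_blobs({"A(1)": [12, 13, 17], "A(2)": [1, 1.2, 1]}))
    # A(1): Lambertian (std ~2.6); A(2): retroreflector (std ~0.1)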
At step 508, for an object declared at step 506 to be a retroreflector, the processor computes an orthogonal projection of the retroreflective basis vector into higher dimensional polarization space. The basis vector can be expressed (for isomorphism) as the angle ϕ, and the orthogonal basis is simply the angle ϕ rotated by 90 degrees.
Then, at step 510, the processor can compute the weights 456 and 458 to be used for the H and V channel outputs respectively based on the results from step 508. For example, let us treat 442 as x(H) and 444 as x(V). Then suppose the system finds that in an analysis of ladar returns in a neighborhood of a range interval, across an Az, El cluster, A(i) has a polarization angle of 36 degrees, with an std of 0.1 degrees. Then, we declare the presence of a retro reflector. It should be noted that the system may also have extracted this information (36-degree mean, 0.1 degree std) from analysis of camera data or any combination thereof. We then deem that the matched filter weights 456, 458 must suppress this signal. The system can do this by setting the weights 456, 458 to sin(54 degrees) and cos(54 degrees) respectively.
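The orthogonal projection of steps 508-510 can be sketched as follows (a hypothetical helper that assumes the polarization angle ϕ is measured from the vertical channel axis, so a retro return appears as (sin ϕ, cos ϕ) across (H, V); under that convention the weight magnitudes reproduce the sin(54 degrees)/cos(54 degrees) example for ϕ = 36 degrees, with a sign flip on one channel to effect the null):

    import numpy as np

    def retro_nulling_weights(phi_deg):
        """Weights for the H and V channels that project the measurement
        onto the basis orthogonal to a retro return at polarization angle
        phi. Convention (an assumption): phi is measured from the vertical
        axis, so the retro return appears as (sin phi, cos phi) across
        (H, V); the orthogonal direction is then (cos phi, -sin phi)."""
        phi = np.radians(phi_deg)
        return np.cos(phi), -np.sin(phi)

    def mixed_output(x_h, x_v, phi_deg):
        """Weighted sum (channel mixing logic 450) with the retro
        direction suppressed; feed the result to threshold comparison."""
        w_h, w_v = retro_nulling_weights(phi_deg)
        return w_h * np.asarray(x_h) + w_v * np.asarray(x_v)

    # For the 36-degree retro of the example, the weight magnitudes equal
    # sin(54 degrees) and cos(54 degrees), as stated above:
    print(retro_nulling_weights(36.0))  # (0.809..., -0.587...)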
Once the weights are computed by processor 454, using inputs from 472, x(H), and 474, x(V), as well as detected returns from threshold comparison logic 452, these weights (denoted by 456, 458) are communicated to the multiplier logic in processing logic 450 as shown by
The discussion thus far assumes that the goal is to suppress the retro returns. This is the case when there is a concern that a retroreflector will bleed into other pixels in the receiver 104. At other times, it may actually be desirable to find retroreflectors. For example, if the ladar system is pointing at the road, a retro reflector is quite likely to be a license plate or a tail light or headlight. These are useful features to exploit to improve on-road sensing. As such, a variant of the procedure discussed above is also desirable. An example embodiment of such a variant is as follows, which assumes a polarization stable ladar source with two polarization receivers:
Example Process Flow B for Object Fingerprinting Based on Polarization (Ladar Polarization Only):
While the example of
Another longer range example is shown in
We now present another example of how a non-polarization camera can nonetheless be useful for cueing a polarization ladar receiver, using color cueing. In the example discussed below, we use red as a cue that a highly reflective object is available for ladar interrogation.
Example Process Flow C for Down Selecting which Range Points to Interrogate in the Ladar Point Cloud Based on Color Analysis from a Passive Non-Polarization Camera
Passive Polarization for Point Cloud Augmentation:
MIT researchers have developed methods for creating 3D images from passive polarization cameras in indoor, controlled light, settings, an example of which is described by Kadambi, A. et al., “Polarized 3D: High Quality Depth Sensing with Polarization Cues”, 2015 IEEE International Conference on Computer Vision (ICCV), Dec. 7-13, 2015, the entire disclosure of which is incorporated herein by reference. The inventors herein are unaware of this work being applied in an outdoor context or in conjunction with ladar.
The two cameras 914, which are embedded inside chassis 906, have lens pupils 918 which can be situated symmetrically about the laser pupil 916 (for sending ladar pulses and receiving ladar pulse returns). By triangulating, we can then identify at each range point sample the angle of the cameras 914 that corresponds to each range return from the laser. We can also have a full co-bore sited laser 904 (inside chassis 906 like the camera(s), but shown here in a blow-up expanded view for clarity) and camera 914, at the expense of more complicated optics. We assume that the camera alignment is sufficient to register camera 914 to laser 904, but cannot provide reliable depth information at range. It should be understood that the dual cameras 914 are only required to resolve parallax, i.e., to align laser and camera, and can be replaced with a truly co-bore sited single camera if the practitioner desires.
The ladar transmit 938 and receive path for a point 922 on object 936 is shown as 940, 942. We use "?" to indicate unknown values and "!" to indicate measured values. The distance from the ladar system to said object point can be computed by the embedded processor situated inside the ladar transceiver 906 (e.g., see processor 454 in
910 is the resulting range from the laser shot. Our specific goal is to find (i) the slope of the surface at 922, and subsequently (ii) the slope of other points on the object, such as 920, and finally (iii) the 3D position of all points on the object. We assume here that the material is polarization altering, but preserving some structure, i.e. specular or retro thereby being governed by Brewster angle effects.
At 968, the angle from the wedge formed from this object to the sun, on one side, and object to the ladar is computed (e.g., see the angle formed by 916-922-902 in
Image 976 in
The resulting polarization augmented point cloud can be transmitted to a machine learning perception system, and treated as if it derived from a denser point cloud using more ladar shots. In so doing the augmentation is not apparent to the end user, but truly behaves as if a more powerful ladar system, capable of generating a denser frame, were used at the onset.
In so doing, the augmented point cloud can be used not only to reduce ladar cost for a given performance level, but can also be configured to reduce latency for time critical operations, since passive collection does not require time to position, shoot, and collect laser pulse return light. In this case the augmented point cloud can be used to improve classification accuracy, thereby reducing latency for high confidence motion planning decision-making.
To begin we consider the ladar system, which includes two receivers 782, 780, and a transmitter 778. The transmitter 778 includes a laser source, and may also include an optical collimating lens, either fixed or scanning, with the lens either fiber-coupled or free space-coupled to the laser source. The transmitter 778 is shown in more detail at the top
The top of
These are blow-up views of the receivers 786, 784 in the bottom of
To trace the data flow in a cross receiver, first, we note that the solid paths from the target are only two-fold, one travels to the azimuth receiver 766, arriving at cell 760, and one travels to the elevation receiver 720, arriving at cell 724.
Next we note that the dotted paths from the retro reflector are also two-fold, one travels to the azimuth receiver 766, arriving at cell 772, and one travels to the elevation receiver 720, arriving at cell 724. The third dotted line 788 hits neither receiver nor the retroreflector. The reason that there are paths from the laser source to the retro reflector and from the laser source to nowhere in particular (788) (in fact there are many more, but
This is desirable because the asymmetric shape allows for more light collection while keeping the aspect ratio of the overall array fairly symmetric, which helps in optics design. 1D photodetector arrays are common in bar code readers and industrial inspection, where the second dimension is obtained from object motion. However, the use of a 1D photodetector array can be problematic in a ladar system because of the fact that retro reflectors can cause confusion, as we show in
The dotted lines forming the boundaries of the cross receivers 766 and 720 can be, respectively, anamorphic and spherical lenses. Spherical lenses are preferred because they provide more light collection, but they are not always available, depending on the aspect ratio and scan angle desired for the receiver. Anamorphic lenses are available from many vendors including Edmund Optics, and spherical lenses (or aspherical lenses, which are similar and have symmetric profiles) are ubiquitous amongst optical merchants.
The receive optics for the ladar receiver have the property that they focus incoming light onto a cell of each of the receivers 720 and 766, as shown respectively as 774, 776. At any point in time most (if not all) the cells will be splattered with numerous spots; so actually instead of just two spots 774,776 as shown by
Each cell in each 1D array can be provisioned with a photodetector circuit that converts light to current and amplifies this current. The 1D array can also include a MUX circuit, in communication with the transmitter 778. This MUX circuit can be provisioned so that the direction 704 that the laser is fired is shared with said MUX circuit, and the MUX circuit can be provisioned with a look up table (LUT) that directs it to select which cell to access from each 1D array based on the laser direction or targeted range point.
For elevation, we see that there is a read 726 of a cell selected by the elevation 1D array MUX 736. Other cell currents (e.g., 732, 734) are ignored. For ease of illustration we do not show the pathways to these non-accessed MUX lines, but the associated cells are shown notionally by inactive cells 730 and 728.
The MUX circuit is then in electrical communication with a detection circuit 740. Detection circuit 740 includes a comparator within 736 that compares the current output 726 from the MUX with a threshold (after any desired waveform conditioning such as matched filtering and possible channel combining as described in
This process is repeated for the Azimuthal 1D array 766, where MUX circuit 756 selects mux line 758 which accesses cell 760 which contains amplified current from target return 764. The MUX circuit 756 rejects other MUX lines (e.g., see 755 and 752) which correspond to cells, nominally, 768, 762. Note also that the MUX circuit also rejects MUX line 750, which corresponds to cell 772 which contains the return from the retro reflector. MUX circuit 756 can also include a comparator that performs the threshold comparison as discussed above in connection with MUX circuit 736. The threshold comparison results in a binary output that declares a target return presence (solid line) while effectively rejecting the retro reflector presence (dotted line) due to the non-selection of the mux line from the cell that received the retro reflector return.
Finally, the decisions resulting from 736 and 756 (each a binary state, e.g., 0 for no; 1 for yes) are fed into a binary adder 748. The presence of a ladar pulse return from the target is declared if both inputs to the adder 748 yield a detection (in this example, if 742 equals two). If so, this target return presence detection indicator is passed to the user or consuming application along with ancillary information, range, intensity, RGB, polarization etc., when available.
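The detection logic just described can be sketched end to end as follows (hypothetical LUT and threshold values; the structure mirrors the MUX selection, per-array threshold comparison, and binary adder steps above):

    def cross_receiver_detect(az_cells, el_cells, shot_az_deg, shot_el_deg,
                              az_lut, el_lut, az_threshold, el_threshold):
        """Declare a target return only when BOTH 1D arrays detect energy
        in the cell that the fired shot direction maps to via the LUT.

        az_cells/el_cells hold the amplified currents read from each 1D
        array; az_lut/el_lut map a fired angle to the expected cell index.
        A retro return landing in a different azimuth cell (e.g., cell
        772) is rejected because its MUX line is never selected."""
        az_idx = az_lut[shot_az_deg]              # MUX selection, azimuth array
        el_idx = el_lut[shot_el_deg]              # MUX selection, elevation array
        az_hit = 1 if az_cells.get(az_idx, 0.0) > az_threshold else 0
        el_hit = 1 if el_cells.get(el_idx, 0.0) > el_threshold else 0
        return (az_hit + el_hit) == 2             # binary adder must equal two

    # Target energy lands in az cell 3 and el cell 5; a retro leaks into az cell 7.
    az_cells = {3: 0.9, 7: 5.0}
    el_cells = {5: 0.8}
    print(cross_receiver_detect(az_cells, el_cells, 10.0, -2.0,
                                {10.0: 3}, {-2.0: 5}, 0.5, 0.5))  # True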
In this example shown by
Note that the target spot sizes 774, 776 are generally much smaller than the cell area (e.g., 728,730, etc.) of the photodetector cells of the 1D arrays. In an example embodiment, in order to obtain improved target angular accuracy, the lens can be constructed so that there is optical gain variation in each cell of the 1D array, so as to make the measurement in the cells sensitive to angle changes. We can construct an optical gain gradient (convex or Gaussian or other shape) on the lens so that there is a change in current produced based on where the spot lies on a cell. This is shown in the enlarged cell 790 of
1. The optical gain variation on each cell is measured, either at the factory or from dynamic updates based on measurements against fiducials.
2. Based on the previously measured data from 1 above, the controller generates a lookup table (LUT) so that the optical gain is stored for each azimuth and elevation angle of an incident light ray impinging on the cells accessed on both 1D arrays.
3. A shot is fired, and if/when a detection occurs, the LUT from 2 above is employed to ascertain the azimuth and elevation of the detected pulse.
4. The actual centerline azimuth and elevation where the laser was pointed when the shot took place is recorded.
5. The difference is computed between the measured azimuth and elevation obtained from 3 above and the actual centerline from 4 above.
6. The full-width-half-maximum (FWHM) point spread of the ladar beam is accessed from memory (having been premeasured).
7. If the difference from 5 above exceeds the point spread from 6 above, the detection is deemed invalid (it is deemed either retroreflection sidelobe leakage or laser interference).
8. If the difference from 5 above is less than or equal to the point spread from 6 above, the detection is deemed valid.
9. If the detection is deemed valid, the measured azimuth and elevation are used in lieu of the range point shot centerline, thereby furnishing an enhanced-resolution pulse return.
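The following Python sketch illustrates steps 5 through 9 above under simplifying assumptions: the per-cell optical gain LUT of steps 1-3 is abstracted into an already-measured azimuth/elevation pair, and the per-axis differences of step 5 are collapsed into a single Euclidean angle offset. All names are hypothetical.

```python
import math
from typing import Optional, Tuple

def refine_angle(measured_az: float, measured_el: float,
                 shot_az: float, shot_el: float,
                 beam_fwhm: float) -> Optional[Tuple[float, float]]:
    """Validate a detection against the premeasured FWHM point spread.

    Returns the enhanced-resolution measured angles if valid (steps 8-9),
    or None if the detection is deemed retroreflection sidelobe leakage
    or laser interference (step 7)."""
    # Step 5: difference between the measured angles and the shot centerline.
    diff = math.hypot(measured_az - shot_az, measured_el - shot_el)
    # Steps 6-7: compare against the premeasured beam point spread.
    if diff > beam_fwhm:
        return None
    # Step 9: use the measured angles in lieu of the shot centerline.
    return measured_az, measured_el

# Example: a return 0.02 degrees off the shot centerline, with a 0.1 degree
# FWHM beam, is accepted and reported at the refined angle.
print(refine_angle(10.02, -2.00, 10.00, -2.00, beam_fwhm=0.1))
```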
In another embodiment, the cross-receiver performance can be improved if multiple cells are configured to be read for a single range point. In such an embodiment, each of the 1D arrays in the cross-receiver can be configured so that, in addition to accessing the row and column entries corresponding to the range point fired, one or both adjacent cells are accessed as well. This allows for improved angle accuracy (as described in the above-referenced and incorporated US Pat. App. Pub. 2019/0086514; see also the above-referenced and incorporated U.S. Pat. No. 9,933,513). This also allows multiple shots to be fired at different angles and collected simultaneously, increasing the shot rate of the system.
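A sketch of the adjacent-cell read, assuming cells are indexed linearly along each 1D array (names and the clamping behavior are illustrative):

```python
from typing import List

def cells_to_read(center: int, n_cells: int, neighbors: int = 1) -> List[int]:
    """Return the LUT-selected cell plus up to `neighbors` adjacent cells
    on each side, clamped to the bounds of the 1D array."""
    lo = max(0, center - neighbors)
    hi = min(n_cells - 1, center + neighbors)
    return list(range(lo, hi + 1))

# Example: reading cell 143 and both of its neighbors on a 400-cell array.
print(cells_to_read(143, n_cells=400))  # [142, 143, 144]
```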
In an example embodiment, one of the receivers (say, for example, the horizontal receiver) has high optical gain and the other (say, for example, the vertical receiver) has low optical gain. This can be desirable because the low gain receiver will be far less expensive, nearly cutting the receiver cost in half. Such a cost savings is viable because only one receiver needs high gain for detection. Once detection has been accomplished, a lower gain (and therefore lower SNR) complementary cross-receiver is often sufficient, since accepting or rejecting a target can generally be done at a much lower SNR than the initial detection per se. If different optical gains are employed, different thresholds will be required to balance the outputs before applying the detection logic. For example, if the horizontal gain is double the vertical gain, we must halve the vertical channel's detection threshold.
In one such embodiment, the low optical gain 1D array component of the cross-receiver can be safely ignored below a predetermined (low) SNR. This reduces the false acceptance of targets when returns are weak and it is thus unknown whether the low gain channel contains pure noise or a small amount of retroreflector return. Ignoring the low gain channel can, for example, be achieved without adding circuit complexity by setting the detection threshold for that 1D array to a radiometrically unapproachably large number.
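The following sketch, under the assumption that detection thresholds scale linearly with optical gain, shows both the threshold balancing described above and the radiometrically unapproachable threshold used to ignore the low gain channel at low SNR. The linear-scaling assumption and all names are illustrative.

```python
def balanced_thresholds(base_threshold: float, gain_high: float,
                        gain_low: float, snr_estimate: float,
                        snr_floor: float):
    """Scale each channel's threshold by its optical gain; below a
    predetermined SNR floor, set the low gain threshold to an unreachably
    large value so that channel is effectively ignored."""
    t_high = base_threshold * gain_high
    t_low = base_threshold * gain_low   # e.g., halved when gain_low = gain_high / 2
    if snr_estimate < snr_floor:
        t_low = float("inf")            # radiometrically unapproachable threshold
    return t_high, t_low

# Example: horizontal gain double the vertical gain, with adequate SNR.
print(balanced_thresholds(1.0, gain_high=2.0, gain_low=1.0,
                          snr_estimate=12.0, snr_floor=3.0))  # (2.0, 1.0)
```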
The photodetector current is then amplified again after being split into a high gain and a low gain pair of parallel paths, shown as 707, 711, which are then summed by adder 743 (while still in the analog RF domain 717) after (1) one channel is reversed in polarity by polarity reverser 745 and (2) the other channel is delayed by delay element 715. It is immaterial which channel is polarity-reversed and which channel is delayed; any permutation will do.
Next, the system migrates from the RF domain to the digital domain via low pass filtering (LPF) and an analog-to-digital converter (ADC), shown as 721. As shown, the digital boundary 719 resides within 721. Our goal is to obtain more bits of accuracy than the ADC natively allows, using what is essentially a variant of compressive sensing as described in the above-referenced and incorporated U.S. Pat. No. 10,078,133.
At the output of 721, we obtain a pulse that looks like the plot shown in 737. The high gain channel is shown by 739, which is clipped because the ADC dynamic range has been exceeded, but the missing “tip” of the pulse is still “alive” in the reversed low gain channel 741. Note that the tip 741 appears shrunken relative to its base, which is the result of the top and bottom of the pulse being digitized at different effective quantization levels. Therefore, we can view this scheme as a non-uniform quantization process, similar to a log amplifier, but operating at much higher bandwidth because the complexity is placed in the analog domain rather than the digital domain.
The system next recombines the negative and positive pulses, renormalizing them to a common scale, in a digital reconstruction stage 725 to produce the final output 727. Plot 735 shows an example of the output 727. The steps involved are summarized in 723. First, the system replaces signal values at the ADC output that lie below zero with their sign inverse, overriding any positive quantity in the process (steps 1, 2). Next, the logic performs an optional whitening operation to erase spectral shaping from the feedback circuit in the 1D receiver. Then, the logic applies a matched filter, matched to the true pulse that would be obtained if the ADC were perfect, i.e., had no quantization noise. By adjusting how many bits are allocated to the negative portion of the pulse, the gain differential between the high gain and low gain channels can be used to create an effective nonlinear (pseudo-logarithmic) quantization digital converter. The pulse in plot 735 is shown as a superposition of 100 random trials. The tip 733 is now apparent (the clipping from the high gain channel 739 shown in plot 737 is no longer present). The variation in the tip 733, as opposed to the lack of variation in the skirts 731, arises because the low gain channel has less effective resolution. The noise has larger variance 729, so the lowest-amplitude portion of the pulse has low SNR, the midrange skirt has higher SNR, and the tip again has lower SNR.
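To make the full chain concrete, here is a minimal numpy simulation of the split/reverse/delay/sum path (707, 711, 745, 715, 743), the clipping ADC (721), and the reconstruction stage (725). The gains, delay, pulse shape, and ADC depth are assumed for illustration, and the optional whitening step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 256, 128
gain_high, gain_low = 1.0, 0.25                       # assumed 4x gain differential

t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 100) / 6.0) ** 2)         # idealized return pulse shape
x = 3.0 * pulse + 0.01 * rng.standard_normal(n)       # return exceeds ADC full scale

hi = gain_high * x                                    # high gain path (707)
lo = np.roll(-gain_low * x, delay)                    # low gain path (711), polarity
                                                      # reversed (745) and delayed (715)
combined = hi + lo                                    # analog adder (743)

# Clipping ADC (721): 8-bit quantization, normalized so full scale is 1.0.
adc = np.clip(np.round(combined * 127), -128, 127) / 127

# Reconstruction (723/725): sign-invert the negative samples, undo the delay,
# renormalize by the gain differential, and splice the recovered tip over the
# clipped samples of the high gain pulse. The tip is recovered at coarser
# effective quantization, matching the larger variance seen in plot 735.
tip = np.where(adc < 0, -adc, 0.0)
tip = np.roll(tip, -delay) * (gain_high / gain_low)
recon = np.where(adc >= 1.0, tip, np.maximum(adc, 0.0))

# Matched filter against the ideal (quantization-free) pulse shape.
output = np.convolve(recon, pulse[::-1], mode="same")
print(float(recon.max()))   # ~3.0: the clipped tip has been recovered
```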
While the invention has been described above in relation to its example embodiments, various modifications may be made thereto that still fall within the invention's scope. Such modifications to the invention will be recognizable upon review of the teachings herein.
This patent application claims priority to U.S. provisional patent application Ser. No. 62/837,767, filed Apr. 24, 2019, and entitled “Agile Bistatic Ladar System and Method”, the entire disclosure of which is incorporated herein by reference. This patent application is also related to (1) U.S. patent application Ser. No. 16/407,544, filed this same day, and entitled “Ladar System and Method with Adaptive Pulse Duration”, (2) U.S. patent application Ser. No. 16/407,558, filed this same day, and entitled “Agile Bistatic Ladar System and Method”, (3) U.S. patent application Ser. No. 16/407,570, filed this same day, and entitled “Ladar System and Method with Frequency Domain Shuttering”, (4) U.S. patent application Ser. No. 16/407,598, filed this same day, and entitled “Ladar System and Method with Multi-Dimensional Point Cloud Data Including Polarization-Specific Data”, (5) U.S. patent application Ser. No. 16/407,615, filed this same day, and entitled “Ladar System and Method with Polarized Polarization Camera for Augmenting Point Cloud Data”, and (6) U.S. patent application Ser. No. 16/407,626, filed this same day, and entitled “Ladar System and Method with Cross-Receiver”, the entire disclosures of each of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
3443502 | Harvey | May 1969 | A |
4555627 | McRae, Jr. | Nov 1985 | A |
4579430 | Bille | Apr 1986 | A |
5221928 | Dahl | Jun 1993 | A |
5231401 | Kaman | Jul 1993 | A |
5231480 | Ulich | Jul 1993 | A |
5528354 | Uwira | Jun 1996 | A |
5552893 | Akasu | Sep 1996 | A |
5625644 | Myers | Apr 1997 | A |
5638164 | Landau | Jun 1997 | A |
5808775 | Inagaki et al. | Sep 1998 | A |
5815250 | Thomson et al. | Sep 1998 | A |
5831719 | Berg et al. | Nov 1998 | A |
5880836 | Lonnqvist | Mar 1999 | A |
6031601 | McCusker et al. | Feb 2000 | A |
6091335 | Breda et al. | Jul 2000 | A |
6205275 | Melville | Mar 2001 | B1 |
6245590 | Wine et al. | Jun 2001 | B1 |
6288816 | Melville et al. | Sep 2001 | B1 |
6847462 | Kacyra et al. | Jan 2005 | B1 |
6926227 | Young et al. | Aug 2005 | B1 |
6967617 | McMillan et al. | Nov 2005 | B1 |
7038608 | Gilbert | May 2006 | B1 |
7072039 | Dobbs et al. | Jul 2006 | B2 |
7206063 | Anderson et al. | Apr 2007 | B2 |
7236235 | Dimsdale | Jun 2007 | B2 |
7436494 | Kennedy et al. | Oct 2008 | B1 |
7701558 | Walsh et al. | Apr 2010 | B2 |
7800736 | Pack et al. | Sep 2010 | B2 |
7894044 | Sullivan | Feb 2011 | B1 |
7944548 | Eaton | May 2011 | B2 |
8072663 | O'Neill et al. | Dec 2011 | B2 |
8081301 | Stann et al. | Dec 2011 | B2 |
8120754 | Kaehler | Feb 2012 | B2 |
8228579 | Sourani | Jul 2012 | B2 |
8427657 | Milanović | Apr 2013 | B2 |
8635091 | Amigo et al. | Jan 2014 | B2 |
8681319 | Tanaka et al. | Mar 2014 | B2 |
8797550 | Hays et al. | Aug 2014 | B2 |
8810796 | Hays et al. | Aug 2014 | B2 |
8866322 | Tchoryk, Jr. et al. | Oct 2014 | B2 |
8892569 | Bowman et al. | Nov 2014 | B2 |
8896818 | Walsh et al. | Nov 2014 | B2 |
9069061 | Harwit | Jun 2015 | B1 |
9085354 | Peeters et al. | Jul 2015 | B1 |
9086488 | Tchoryk, Jr. et al. | Jul 2015 | B2 |
9116243 | Brown | Aug 2015 | B1 |
9128190 | Ulrich et al. | Sep 2015 | B1 |
9261881 | Ferguson et al. | Feb 2016 | B1 |
9278689 | Delp | Mar 2016 | B1 |
9285477 | Smith et al. | Mar 2016 | B1 |
9305219 | Ramalingam et al. | Apr 2016 | B2 |
9315178 | Ferguson et al. | Apr 2016 | B1 |
9336455 | Withers et al. | May 2016 | B1 |
9360554 | Retterath et al. | Jun 2016 | B2 |
9383753 | Templeton et al. | Jul 2016 | B1 |
9423484 | Aycock et al. | Aug 2016 | B2 |
9437053 | Jenkins et al. | Sep 2016 | B2 |
9516244 | Borowski | Dec 2016 | B2 |
9575184 | Gilliland et al. | Feb 2017 | B2 |
9581967 | Krause | Feb 2017 | B1 |
9841495 | Campbell et al. | Dec 2017 | B2 |
9885778 | Dussan | Feb 2018 | B2 |
9897687 | Campbell et al. | Feb 2018 | B1 |
9897689 | Dussan | Feb 2018 | B2 |
9933513 | Dussan et al. | Apr 2018 | B2 |
9958545 | Eichenholz et al. | May 2018 | B2 |
10007001 | LaChapelle et al. | Jun 2018 | B1 |
10042043 | Dussan | Aug 2018 | B2 |
10042159 | Dussan et al. | Aug 2018 | B2 |
10073166 | Dussan | Sep 2018 | B2 |
10078133 | Dussan | Sep 2018 | B2 |
10088558 | Dussan | Oct 2018 | B2 |
10108867 | Vallespi-Gonzalez et al. | Oct 2018 | B1 |
10185028 | Dussan et al. | Jan 2019 | B2 |
10209349 | Dussan et al. | Feb 2019 | B2 |
10215848 | Dussan | Feb 2019 | B2 |
10282591 | Lindner et al. | May 2019 | B2 |
20020060784 | Pack | May 2002 | A1 |
20020176067 | Charbon | Nov 2002 | A1 |
20030122687 | Trajkovic et al. | Jul 2003 | A1 |
20030151542 | Steinlechner et al. | Aug 2003 | A1 |
20040130702 | Jupp et al. | Jul 2004 | A1 |
20050057654 | Byren | Mar 2005 | A1 |
20050216237 | Adachi et al. | Sep 2005 | A1 |
20060007362 | Lee et al. | Jan 2006 | A1 |
20060054782 | Olsen et al. | Mar 2006 | A1 |
20060176468 | Anderson et al. | Aug 2006 | A1 |
20060197936 | Liebman et al. | Sep 2006 | A1 |
20060227315 | Beller | Oct 2006 | A1 |
20060265147 | Yamaguchi et al. | Nov 2006 | A1 |
20080136626 | Hudson et al. | Jun 2008 | A1 |
20080159591 | Ruedin | Jul 2008 | A1 |
20090059201 | Willner et al. | Mar 2009 | A1 |
20090128864 | Inage | May 2009 | A1 |
20090242468 | Corben et al. | Oct 2009 | A1 |
20090292468 | Wu et al. | Nov 2009 | A1 |
20100027602 | Abshire et al. | Feb 2010 | A1 |
20100053715 | O'Neill et al. | Mar 2010 | A1 |
20100165322 | Kane et al. | Jul 2010 | A1 |
20100204964 | Pack et al. | Aug 2010 | A1 |
20110019188 | Ray et al. | Jan 2011 | A1 |
20110043785 | Cates et al. | Feb 2011 | A1 |
20110066262 | Kelly et al. | Mar 2011 | A1 |
20110085155 | Stann et al. | Apr 2011 | A1 |
20110097014 | Lin | Apr 2011 | A1 |
20110146908 | Kobayashi et al. | Jun 2011 | A1 |
20110149268 | Marchant et al. | Jun 2011 | A1 |
20110149360 | Sourani | Jun 2011 | A1 |
20110153367 | Amigo et al. | Jun 2011 | A1 |
20110224840 | Vanek | Sep 2011 | A1 |
20110260036 | Baraniuk et al. | Oct 2011 | A1 |
20110282622 | Canter | Nov 2011 | A1 |
20110317147 | Campbell et al. | Dec 2011 | A1 |
20120038817 | McMackin et al. | Feb 2012 | A1 |
20120044093 | Pala | Feb 2012 | A1 |
20120044476 | Earhart et al. | Feb 2012 | A1 |
20120050750 | Hays et al. | Mar 2012 | A1 |
20120075432 | Bilbrey et al. | Mar 2012 | A1 |
20120169053 | Tchoryk, Jr. et al. | Jul 2012 | A1 |
20120236379 | da Silva et al. | Sep 2012 | A1 |
20120249996 | Tanaka et al. | Oct 2012 | A1 |
20120257186 | Rieger et al. | Oct 2012 | A1 |
20120274937 | Hays et al. | Nov 2012 | A1 |
20130314694 | Tchoryk, Jr. et al. | Nov 2013 | A1 |
20140021354 | Gagnon et al. | Jan 2014 | A1 |
20140022539 | France | Jan 2014 | A1 |
20140078514 | Zhu | Mar 2014 | A1 |
20140152974 | Ko | Jun 2014 | A1 |
20140211194 | Pacala et al. | Jul 2014 | A1 |
20140291491 | Shpunt et al. | Oct 2014 | A1 |
20140300732 | Friend et al. | Oct 2014 | A1 |
20140350836 | Stettner et al. | Nov 2014 | A1 |
20140368651 | Irschara et al. | Dec 2014 | A1 |
20150065803 | Douglas et al. | Mar 2015 | A1 |
20150081211 | Zeng et al. | Mar 2015 | A1 |
20150153452 | Yamamoto et al. | Jun 2015 | A1 |
20150269439 | Versace et al. | Sep 2015 | A1 |
20150304634 | Karvounis | Oct 2015 | A1 |
20150308896 | Darty | Oct 2015 | A1 |
20150331113 | Stettner et al. | Nov 2015 | A1 |
20150369920 | Setono et al. | Dec 2015 | A1 |
20150378011 | Owechko | Dec 2015 | A1 |
20150378187 | Heck et al. | Dec 2015 | A1 |
20160003946 | Gilliland et al. | Jan 2016 | A1 |
20160005229 | Lee et al. | Jan 2016 | A1 |
20160041266 | Smits | Feb 2016 | A1 |
20160047895 | Dussan | Feb 2016 | A1 |
20160047896 | Dussan | Feb 2016 | A1 |
20160047897 | Dussan | Feb 2016 | A1 |
20160047898 | Dussan | Feb 2016 | A1 |
20160047899 | Dussan | Feb 2016 | A1 |
20160047900 | Dussan | Feb 2016 | A1 |
20160047903 | Dussan | Feb 2016 | A1 |
20160146595 | Boufounos et al. | May 2016 | A1 |
20160157828 | Sumi et al. | Jun 2016 | A1 |
20160274589 | Templeton et al. | Sep 2016 | A1 |
20160293647 | Lin et al. | Oct 2016 | A1 |
20160320486 | Murai et al. | Nov 2016 | A1 |
20160379094 | Mittal et al. | Dec 2016 | A1 |
20170158239 | Dhome et al. | Jun 2017 | A1 |
20170199280 | Nazemi et al. | Jul 2017 | A1 |
20170205873 | Shpunt et al. | Jul 2017 | A1 |
20170211932 | Zadravec et al. | Jul 2017 | A1 |
20170219695 | Hall et al. | Aug 2017 | A1 |
20170234973 | Axelsson | Aug 2017 | A1 |
20170242102 | Dussan et al. | Aug 2017 | A1 |
20170242103 | Dussan | Aug 2017 | A1 |
20170242104 | Dussan | Aug 2017 | A1 |
20170242105 | Dussan et al. | Aug 2017 | A1 |
20170242106 | Dussan et al. | Aug 2017 | A1 |
20170242107 | Dussan et al. | Aug 2017 | A1 |
20170242108 | Dussan et al. | Aug 2017 | A1 |
20170242109 | Dussan et al. | Aug 2017 | A1 |
20170263048 | Glaser et al. | Sep 2017 | A1 |
20170269197 | Hall et al. | Sep 2017 | A1 |
20170269198 | Hall et al. | Sep 2017 | A1 |
20170269209 | Hall et al. | Sep 2017 | A1 |
20170269215 | Hall et al. | Sep 2017 | A1 |
20170307876 | Dussan et al. | Oct 2017 | A1 |
20180011174 | Miles | Jan 2018 | A1 |
20180031703 | Ngai et al. | Feb 2018 | A1 |
20180075309 | Sathyanarayana et al. | Mar 2018 | A1 |
20180081034 | Guo | Mar 2018 | A1 |
20180100928 | Keilaf et al. | Apr 2018 | A1 |
20180120436 | Smits | May 2018 | A1 |
20180137675 | Kwant et al. | May 2018 | A1 |
20180143300 | Dussan | May 2018 | A1 |
20180143324 | Keilaf et al. | May 2018 | A1 |
20180156895 | Hinderling | Jun 2018 | A1 |
20180188355 | Bao et al. | Jul 2018 | A1 |
20180224533 | Dussan et al. | Aug 2018 | A1 |
20180238998 | Dussan et al. | Aug 2018 | A1 |
20180239000 | Dussan et al. | Aug 2018 | A1 |
20180239001 | Dussan et al. | Aug 2018 | A1 |
20180239004 | Dussan et al. | Aug 2018 | A1 |
20180239005 | Dussan et al. | Aug 2018 | A1 |
20180268246 | Kondo | Sep 2018 | A1 |
20180284234 | Curatu | Oct 2018 | A1 |
20180284278 | Russell et al. | Oct 2018 | A1 |
20180284279 | Campbell et al. | Oct 2018 | A1 |
20180299534 | LaChapelle et al. | Oct 2018 | A1 |
20180306905 | Kapusta et al. | Oct 2018 | A1 |
20180306927 | Slutsky et al. | Oct 2018 | A1 |
20180341103 | Dussan et al. | Nov 2018 | A1 |
20190025407 | Dussan | Jan 2019 | A1 |
20190041521 | Kalscheur et al. | Feb 2019 | A1 |
20190086514 | Dussan et al. | Mar 2019 | A1 |
20190086550 | Dussan et al. | Mar 2019 | A1 |
20190113603 | Wuthishuwong et al. | Apr 2019 | A1 |
20190116355 | Schmidt | Apr 2019 | A1 |
20190154439 | Binder | May 2019 | A1 |
20190180502 | Englard et al. | Jun 2019 | A1 |
Number | Date | Country |
---|---|---|
103885065 | Jun 2014 | CN |
2004034084 | Apr 2004 | WO |
2006076474 | Jul 2006 | WO |
2008008970 | Jan 2008 | WO |
2016025908 | Feb 2016 | WO |
2017143183 | Aug 2017 | WO |
2017143217 | Aug 2017 | WO |
2018152201 | Aug 2018 | WO |
2019010425 | Jan 2019 | WO |
Entry |
---|
“Compressed Sensing,” Wikipedia, 2019, downloaded Jun. 22, 2019 from https://en.wikipedia.org/wiki/Compressed_sensing, 16 pgs. |
“Entrance Pupil,” Wikipedia, 2016, downloaded Jun. 22, 2019 from https://en.wikipedia.org/wiki/Entrance_pupil, 2 pgs. |
Analog Devices, “Data Sheet AD9680”, 98 pages, 2014-2015. |
Chen et al., “Estimating Depth from RGB and Sparse Sensing”, European Conference on Computer Vision, Springer, 2018, pp. 176-192. |
Donoho, “Compressed Sensing”, IEEE Transactions on Information Theory, Apr. 2006, vol. 52, No. 4, 18 pgs. |
Howland et al., “Compressive Sensing LIDAR for 3D Imaging”, Optical Society of America, May 1-6, 2011, 2 pages. |
Kessler, “An afocal beam relay for laser XY scanning systems”, Proc. of SPIE vol. 8215, 9 pages, 2012. |
Kim et al., “Investigation on the occurrence of mutual interference between pulsed terrestrial LIDAR scanners”, 2015 IEEE Intelligent Vehicles Symposium (IV), Jun. 28-Jul. 1, 2015, COEX, Seoul, Korea, pp. 437-442. |
Maxim Integrated Products, Inc., Tutorial 800, “Design a Low-Jitter Clock for High Speed Data Converters”, 8 pages, Jul. 17, 2002. |
Meinhardt-Llopis et al., “Horn-Schunk Optical Flow with a Multi-Scale Strategy”, Image Processing Online, Jul. 19, 2013, 22 pages. |
Moss et al., “Low-cost compact MEMS scanning LADAR system for robotic applications”, Proc. of SPIE, 2012, vol. 8379, 837903-1 to 837903-9. |
Office Action for U.S. Appl. No. 16/356,046 dated Jun. 3, 2019. |
Office Action for U.S. Appl. No. 16/356,089 dated Jun. 12, 2019. |
Office Action for U.S. Appl. No. 16/407,544 dated Jul. 25, 2019. |
Office Action for U.S. Appl. No. 16/407,570 dated Jul. 31, 2019. |
Office Action for U.S. Appl. No. 16/407,615 dated Jul. 23, 2019. |
Redmayne et al., “Understanding the Effect of Clock Jitter on High Speed ADCs”, Design Note 1013, Linear Technology, 4 pages, 2006. |
Rehn, “Optical properties of elliptical reflectors”, Opt. Eng. 43(7), pp. 1480-1488, Jul. 2004. |
Sharafutdinova et al., “Improved field scanner incorporating parabolic optics. Part 1: Simulation”, Applied Optics, vol. 48, No. 22, p. 4389-4396, Aug. 2009. |
U.S. Appl. No. 16/106,350, filed Aug. 21, 2018. |
U.S. Appl. No. 16/106,406, filed Aug. 21, 2018. |
Number | Date | Country | |
---|---|---|---|
62837767 | Apr 2019 | US |