Optical detection of range, often referenced by the mnemonic LIDAR for light detection and ranging, is used for a variety of applications, from altimetry, to imaging, to collision avoidance. LIDAR provides finer scale range resolution with smaller beam sizes than conventional microwave ranging systems, such as radio wave detection and ranging (RADAR). Optical detection of range can be accomplished with several different techniques, including direct ranging based on round trip travel time of an optical pulse to a target, and chirped detection based on a frequency difference between a transmitted chirped optical signal and a returned signal scattered from a target.
To achieve acceptable range accuracy and detection sensitivity, direct long range LIDAR systems use short pulse lasers with low pulse repetition rate and extremely high pulse peak power. The high pulse power can lead to rapid degradation of optical components. Chirped LIDAR systems use long optical pulses with relatively low peak optical power. In this configuration, the range accuracy depends on the chirp bandwidth rather than the pulse duration, and therefore excellent range accuracy can still be obtained.
Useful optical chirp bandwidths have been achieved using wideband radio frequency (RF) electrical signals to modulate an optical carrier. Recent advances in chirped LIDAR include using the same modulated optical carrier as a reference signal that is combined with the returned signal at an optical detector to produce in the resulting electrical signal a relatively low beat frequency that is proportional to the difference in frequencies between the reference and returned optical signals. This kind of beat frequency detection of frequency differences at a detector is called heterodyne detection. It has several advantages known in the art, such as the advantage of using readily and inexpensively available RF components. Recent work described in U.S. Pat. No. 7,742,152 shows a novel simpler arrangement of optical components that uses, as the reference optical signal, an optical signal split from the transmitted optical signal. This arrangement is called homodyne detection in that patent.
The current inventors have recognized circumstances and applications in which motion of an object, to which range is being detected using an optical chirp, noticeably affects such applications due to Doppler frequency shifts. Techniques are provided for detecting the Doppler effect and compensating for the Doppler effect in such optical chirp range measurements.
In a first set of embodiments, a method implemented on a processor includes obtaining a first set of one or more ranges based on corresponding frequency differences in a return optical signal compared to a first chirped transmitted optical signal. The first chirped transmitted optical signal includes an up chirp that increases its frequency with time. The method further includes obtaining a second set of one or more ranges based on corresponding frequency differences in a return optical signal compared to a second chirped transmitted optical signal. The second chirped transmitted optical signal includes a down chirp that decreases its frequency with time. The method still further includes determining a matrix of values for a cost function, one value for the cost function for each pair of ranges, in which each pair of ranges includes one range in the first set and one range in the second set. Even further, the method includes determining a matched pair of ranges which includes one range in the first set and a corresponding one range in the second set, where the correspondence is based on the matrix of values. Still further, the method includes determining a Doppler effect on range based on combining the matched pair of ranges. Still further, the method includes operating a device based on the Doppler effect.
In a second set of embodiments, an apparatus includes a laser source configured to provide a first optical signal consisting of an up chirp in a first optical frequency band and a simultaneous down chirp in a second optical frequency band that does not overlap the first optical frequency band. The apparatus includes a first splitter configured to receive the first signal and produce a transmitted signal and a reference signal. The apparatus also includes an optical coupler configured to direct the transmitted signal outside the apparatus and to receive any return signal backscattered from any object illuminated by the transmitted signal. The apparatus also includes a frequency shifter configured to shift the transmitted signal or the return signal by a known frequency shift relative to the reference signal. Still further, the apparatus includes an optical detector disposed to receive the reference signal and the return signal after the known frequency shift is applied. In addition, the apparatus still further includes a processor configured to perform the steps of receiving an electrical signal from the optical detector. The processor is further configured to support determination of a Doppler effect due to motion of any object illuminated by the transmitted signal by determining a first set of zero or more beat frequencies in a first frequency band and a second set of zero or more beat frequencies in a second non-overlapping frequency band of the electrical signal. The first frequency band and the second non-overlapping frequency band are determined based on the known frequency shift.
In some embodiments of the second set, the laser source is made up of a laser, a radio frequency waveform generator, and a modulator. The laser is configured to provide a light beam with carrier frequency ƒ0. The radio frequency waveform generator is configured to generate a first chirp in a radio frequency band extending between ƒa and ƒb, wherein ƒb>ƒa>0. The modulator is configured to produce, based on the first chirp, the first optical signal in which the first optical frequency band is in a first sideband of the carrier frequency and the second optical frequency band is in a second sideband that does not overlap the first sideband.
In a third set of embodiments, an apparatus includes a laser source configured to provide a first optical signal consisting of an up chirp in a first optical frequency band and a simultaneous down chirp in a second optical frequency band that does not overlap the first optical frequency band. The apparatus includes a first splitter configured to receive the first signal and produce a transmitted signal and a reference signal. The apparatus also includes an optical coupler configured to direct the transmitted signal outside the apparatus and to receive any return signal backscattered from any object illuminated by the transmitted signal. Still further, the apparatus includes a second splitter configured to produce two copies of the reference signal and two copies of the return signal. Even further, the apparatus includes two optical filters. A first optical filter is configured to pass the first optical frequency band and block the second optical frequency band. A second optical filter is configured to pass the second optical frequency band and block the first optical frequency band. The apparatus yet further includes two optical detectors. A first optical detector is disposed to receive one copy of the reference signal and one copy of the return signal after passing through the first optical filter. A second optical detector is disposed to receive a different copy of the reference signal and a different copy of the return signal after passing through the second optical filter. In addition, the apparatus still further includes a processor configured to perform the steps of receiving a first electrical signal from the first optical detector and a second electrical signal from the second optical detector. The processor is further configured to support determination of a Doppler effect due to motion of any object illuminated by the transmitted signal by determining a first set of zero or more beat frequencies in the first electrical signal, and determining a second set of zero or more beat frequencies in the second electrical signal.
In other embodiments, a system or apparatus or computer-readable medium is configured to perform one or more steps of the above methods.
Still other aspects, features, and advantages are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. Other embodiments are also capable of other and different features and advantages, and their several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements and in which:
A method and apparatus and system and computer-readable medium are described for Doppler correction of optical chirped range detection. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Notwithstanding that the numerical ranges and parameters setting forth the broad scope are approximations, the numerical values set forth in specific non-limiting examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in their respective testing measurements at the time of this writing. Furthermore, unless otherwise clear from the context, a numerical value presented herein has an implied precision given by the least significant digit. Thus a value 1.1 implies a value from 1.05 to 1.15. The term “about” is used to indicate a broader range centered on the given value, and unless otherwise clear from the context implies a broader range around the least significant digit, such as “about 1.1” implies a range from 1.0 to 1.2. If the least significant digit is unclear, then the term “about” implies a factor of two, e.g., “about X” implies a value in the range from 0.5X to 2X, for example, about 100 implies a value in a range from 50 to 200. Moreover, all ranges disclosed herein are to be understood to encompass any and all sub-ranges subsumed therein. For example, a range of “less than 10” can include any and all sub-ranges between (and including) the minimum value of zero and the maximum value of 10, that is, any and all sub-ranges having a minimum value of equal to or greater than zero and a maximum value of equal to or less than 10, e.g., 1 to 4.
Some embodiments of the invention are described below in the context of a linear frequency modulated optical signal but chirps need not be linear and can vary frequency according to any time varying rate of change. Embodiments are described in the context of a single optical beam and its return on a single detector or pair of detectors, which can then be scanned using any known scanning means, such as linear stepping or rotating optical components or with arrays of transmitters and detectors or pairs of detectors.
The returned signal is depicted in graph 130 which has a horizontal axis 112 that indicates time and a vertical axis 124 that indicates frequency as in graph 120. The chirp 126 of graph 120 is also plotted as a dotted line on graph 130. A first returned signal is given by trace 136a, which is just the transmitted reference signal diminished in intensity (not shown) and delayed by Δt. When the returned signal is received from an external object after covering a distance of 2R, where R is the range to the target, the delayed start time Δt of the returned signal is given by 2R/c, where c is the speed of light in the medium (approximately 3×10⁸ meters per second, m/s). Over this time, the frequency has changed by an amount that depends on the range, called ƒR, and given by the frequency rate of change multiplied by the delay time. This is given by Equation 1a.
ƒR = (ƒ2−ƒ1)/τ · 2R/c = 2BR/(cτ)    (1a)
The value of ƒR is measured by the frequency difference between the transmitted signal 126 and returned signal 136a in a time domain mixing operation referred to as de-chirping. So the range R is given by Equation 1b.
R = ƒR·cτ/(2B)    (1b)
Of course, if the returned signal arrives after the pulse is completely transmitted, that is, if 2R/c is greater than τ, then Equations 1a and 1b are not valid. In this case, the reference signal is delayed a known or fixed amount to ensure the returned signal overlaps the reference signal. The fixed or known delay time of the reference signal is multiplied by the speed of light, c, to give an additional range that is added to the range computed from Equation 1b. While the absolute range may be off due to uncertainty of the speed of light in the medium, this is a near-constant error and the relative ranges based on the frequency difference are still very precise.
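For purposes of illustration only, the following Python sketch evaluates Equations 1a and 1b numerically; the bandwidth, duration, and range values are hypothetical and are not drawn from any particular embodiment.

    # Illustrative numerical sketch of Equations 1a and 1b (hypothetical values).
    c = 3.0e8        # speed of light in the medium, m/s (approximate)
    B = 1.0e9        # chirp bandwidth f2 - f1, Hz (assumed)
    tau = 1.0e-3     # chirp duration, s (assumed)
    R_true = 150.0   # range to the target, m (assumed)

    f_R = (B / tau) * (2.0 * R_true / c)   # Equation 1a: beat frequency from the round-trip delay
    R_est = f_R * c * tau / (2.0 * B)      # Equation 1b: range recovered from the beat frequency
    print(f_R, R_est)                      # 1.0e6 Hz and 150.0 m

The sketch simply confirms that the range recovered from the beat frequency reproduces the assumed range.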
In some circumstances, a spot illuminated by the transmitted light beam encounters two or more different scatterers at different ranges, such as a front and a back of a semitransparent object, or the closer and farther portions of an object at varying distances from the LIDAR, or two separate objects within the illuminated spot. In such circumstances, a second diminished intensity and differently delayed signal will also be received, indicated on graph 130 by trace 136b. This will have a different measured value of ƒR that gives a different range using Equation 1b. In some circumstances, multiple returned signals are received.
Graph 140 depicts the difference frequency ƒR between a first returned signal 136a and the reference chirp 126. The horizontal axis 112 indicates time, as in the other aligned graphs.
A common method for de-chirping is to direct both the reference optical signal and the returned optical signal to the same optical detector. The electrical output of the detector is dominated by a beat frequency that is equal to, or otherwise depends on, the difference in the frequencies of the two signals converging on the detector. A Fourier transform of this electrical output signal will yield a peak at the beat frequency. This beat frequency is in the radio frequency (RF) range of Megahertz (MHz, 1 MHz = 10⁶ Hertz = 10⁶ cycles per second) rather than in the optical frequency range of Terahertz (THz, 1 THz = 10¹² Hertz). Such signals are readily processed by common and inexpensive RF components, such as a Fast Fourier Transform (FFT) algorithm running on a microprocessor or a specially built FFT or other digital signal processing (DSP) integrated circuit. In other embodiments, the return signal is mixed with a continuous wave (CW) tone acting as the local oscillator (versus a chirp as the local oscillator). This leads to a detected signal which is itself a chirp (or whatever waveform was transmitted). In this case the detected signal would undergo matched filtering in the digital domain as described in Kachelmyer 1990. The disadvantage is that the digitizer bandwidth requirement is generally higher. The positive aspects of coherent detection are otherwise retained.
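As a simplified, non-limiting illustration of this de-chirping step, the following Python sketch synthesizes a noisy beat tone, applies an FFT, and reports the dominant peak; the sampling rate, record length, and beat frequency are hypothetical values, not parameters of any described embodiment.

    import numpy as np

    fs = 100e6                          # digitizer sampling rate, Hz (assumed)
    n = 4096                            # samples analyzed per measurement (assumed)
    t = np.arange(n) / fs
    f_beat = 22e6                       # assumed beat frequency, Hz
    x = np.cos(2 * np.pi * f_beat * t) + 0.1 * np.random.randn(n)   # simple model of detector output

    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))   # windowed FFT of the electrical signal
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    print(freqs[np.argmax(spectrum)])   # dominant beat frequency, close to 22 MHz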
If the object detected (the source) is moving at velocity vs and the LIDAR system (the observer) is moving at velocity vo on the vector connecting the two, then the returned signal may be Doppler shifted and the detected beat frequency ƒR is also shifted, which can lead to errors in the detected range. In many circumstances, a shape of the object being detected is identified based on the relative location of multiple returns. Thus, if the individual ranges are in error because of Doppler shifts, the inferred shape of the object may be in error and the ability to identify the object may be compromised.
The observed frequency ƒ′ of the return differs from the correct frequency ƒ of the return by the Doppler effect and is approximated by Equation 2a.
ƒ′ = ƒ (c+vo)/(c+vs)    (2a)
where c is the speed of light in the medium. Note that the two frequencies are the same if the observer and source are moving at the same speed in the same direction on the vector between the two. The difference between the two frequencies, Δƒ=ƒ′−ƒ, is the Doppler shift, D, which constitutes an error in the range measurement, and is given by Equation 2b.
Δƒ = D = ƒ (vo−vs)/(c+vs)    (2b)
Note that the magnitude of the error increases with the frequency ƒ of the signal. Note also that for a stationary LIDAR system (vo=0), for a target moving at 10 meters per second (vs=10), and visible light of frequency about 500 THz, then the size of the error is on the order of 16 MHz, which is 75% of the size of an ƒR of about 22 MHz in the illustrated example.
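For purposes of illustration, the arithmetic of Equations 2a and 2b for the example above can be checked with a short Python sketch; the velocity values simply repeat the example and follow the sign convention of the equations.

    c = 3.0e8       # speed of light, m/s
    f = 500e12      # optical frequency, Hz (visible light, as in the example above)
    v_o = 0.0       # observer (LIDAR) velocity, m/s
    v_s = 10.0      # source (target) velocity, m/s

    f_obs = f * (c + v_o) / (c + v_s)   # Equation 2a
    delta_f = f_obs - f                 # Equation 2b: Doppler shift
    print(delta_f)                      # about -1.67e7 Hz, i.e., a magnitude near 16 MHz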
In order to depict how the chirped detection approach is implemented, some generic and specific hardware approaches are described.
The reference beam is delayed in a reference path 220 sufficiently to arrive at the detector array 230 with the scattered light. In some embodiments, the splitter 216 is upstream of the modulator 214, and the reference beam 207 is unmodulated. In some embodiments, the reference signal is independently generated using a new laser (not shown) and separately modulated using a separate modulator (not shown) in the reference path 220 and the RF waveform from generator 215. In some embodiments, still other arrangements of the reference path are used, as described below.
In various embodiments, multiple portions of the target scatter a respective returned light signal 291 back to the detector array 230 for each scanned beam, resulting in a point cloud based on the multiple ranges of the respective multiple portions of the target illuminated by multiple beams and multiple returns. The detector array is a single or balanced pair optical detector or a 1D or 2D array of such optical detectors arranged in a plane roughly perpendicular to returned beams 291 from the target. The phase or amplitude of the interference pattern, or some combination, is recorded by acquisition system 240 for each detector at multiple times during the pulse duration τ. The number of temporal samples per pulse duration affects the down-range extent. The number is often a practical consideration chosen based on pulse repetition rate and available camera frame rate. The frame rate is the sampling bandwidth, often called “digitizer frequency.” Basically, if X number of detector array frames are collected during a pulse with resolution bins of Y range width, then an X*Y range extent can be observed. The acquired data is made available to a processing system 250, such as the computer system described below.
A Doppler compensation module 270 determines the size of the Doppler shift and the corrected range based thereon. In some embodiments, the Doppler compensation module 270 controls the RF waveform generator 215.
For example, in some embodiments, the laser used was actively linearized with the modulation applied to the current driving the laser. Experiments were also performed with electro-optic modulators providing the modulation. The system is configured to produce a chirp of bandwidth B and duration τ, suitable for the down-range resolution desired, as described in more detail below for various embodiments. For example, in some illustrated embodiments, a value for B of about 90 GHz and τ of about 200 milliseconds (ms, 1 ms = 10⁻³ seconds) were chosen to work within the confines of the relatively low detector array frame rate in the experiments performed. These choices were made to observe a reasonably large range window of about 30 cm, which is often important in determining a shape of an object and identification of the object. This technique will work for chirp bandwidths from 10 MHz to 5 THz. However, for the 3D imaging applications, typical ranges are chirp bandwidths from about 300 MHz to about 20 GHz, chirp durations from about 250 nanoseconds (ns, 1 ns = 10⁻⁹ seconds) to about 1 millisecond (ms, 1 ms = 10⁻³ seconds), ranges to targets from about 0 meters to about 20 km, spot sizes at target from about 3 millimeters (mm, 1 mm = 10⁻³ meters) to about 1 meter (m), depth resolutions at target from about 7.5 mm to about 0.5 m. It is noted that the range window can be made to extend to several kilometers under these conditions and that the Doppler resolution can also be quite high (depending on the duration of the chirp). Although processes, equipment, and data structures are depicted in a particular arrangement for purposes of illustration, in other embodiments one or more of them, or portions thereof, are arranged in a different manner.
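The depth resolutions quoted above are consistent with the standard chirped-ranging relation between range resolution and chirp bandwidth, ΔR = c/(2B); a quick Python check for the two bandwidth extremes quoted above:

    c = 3.0e8   # m/s
    for B in (300e6, 20e9):          # chirp bandwidth extremes quoted above, Hz
        print(B, c / (2.0 * B))      # 0.5 m at 300 MHz, 7.5 mm at 20 GHz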
The other part of the beam, beam 307a, is used to generate a local oscillator (LO) for coherent detection. An acoustic speaker produces an acoustic signal with frequency ƒm to drive an acousto-optic modulator (AOM) 370 to shift the optical frequency by ƒm in beam 307b, which serves as an intermediate frequency (IF) for heterodyne detection. Optical coupler 322 directs beam 307b onto one photodiode of the balanced photodetector 330.
A return optical signal 391 is also directed by optical coupler 322 onto the other photodiode of the balanced photodetector 330. The balanced photodiode 330 rejects the direct detection component. The output electrical signal is amplified in operational amplifier 344a and the IF signal is selected by a bandpass filter 341 and detected by a Schottky diode 342 which recovers the baseband waveform. The resulting electrical signal is directed through low pass filter 343 and operational amplifier 344b.
A de-chirping mixer 360 compares this detected signal with the original chirp waveform output by power splitter 351 and operational amplifier 352b to produce an electrical signal with the beat frequency that depends on the frequency difference between the RF reference waveform and the detected waveform. Another operational amplifier 344c and an FFT process 345 are used to find the beat frequency. Processor 346 is programmed to do data analysis. Coherent detection systems like 300a significantly improve receiver signal to noise ratio (SNR) compared to direct detection of pulse travel time, but at the cost of greatly increased system complexity. The electrical components from operational amplifier 344a and de-chirping mixer 360 through processor 346 constitute a signal processing component 340.
According to the illustrated embodiment, the light beam emitted from optical coupler 320 impinges on one or more objects 390 with a finite beam size that illuminates an illuminated portion 392 of the one or more objects. Backscattered light from an illuminated portion is returned through the telescope to be directed by optical coupler 322 onto the optical detector, such as one photodiode of a balanced photodetector 330. The one or more objects are moving relative to the system 300a with object velocity components 394, which can differ for returns from different parts of the illuminated portion. Based on this motion, the frequency of a signal detected by system 300a differs from the frequency based solely on the range. The processor 346 includes a Doppler compensation module 380, as described below, to detect the Doppler effect, and to correct the resulting range for the Doppler effect.
In this system, both the optical signal and the local oscillator LO are driven by the same waveform generator 350 and amplified in operational amplifier 352. The beam output by the modulator 310 is split by beam splitter 302 to a beam part 305 and a beam part 307c. The beam part 305, with most of the beam energy, e.g., 90% or more, is transmitted through the optical coupler 320 to illuminate the illuminated portion 392 of the object 390 moving with velocity component 394, as described above. The beam part 307c is delayed a desired amount in delay 308 to produce the reference signal 307d. In some embodiments, there is no delay and delay 308 is omitted. The reference signal 307d and the return signal 309 from the telescope or other optical coupler 320 are directed to the photodetector 330 by optical couplers 322. In some embodiments, described in more detail below, a frequency shifter 318 is added to the optical path of the transmitted signal 305 before transmission through the coupler 320. In other embodiments described below, the frequency shifter 318 is disposed in the optical path of the return signal 309 after passing through the coupler 320.
The de-chirping process is accomplished within the balanced photodiode 330 and therefore eliminates the need for a de-chirping mixer and the associated RF processing. Because the original chirp optical waveform, which is carried by the LO, beats with its delayed version at the photodiode as indicated, target distance can be directly obtained by a frequency analysis in an FFT component 345 of the photocurrent signal output by operational amplifier 344. Processor 362 is programmed to do data analysis. The processor 362 includes a Doppler compensation module 380, as described below, to detect the Doppler effect, and to correct the resulting range for the Doppler effect. The electrical components from operational amplifier 344 through processor 362 constitute a signal processing component 360. Considering that shot noise is the dominant noise with coherent detection, SNR at the beat frequency is reduced compared to SNR of direct detection and SNR of the system 300a.
The first observed peak 414a is due to a first scatterer moving toward the LIDAR system at a speed sufficient to introduce a shift toward higher frequencies, called a blue shift. The range effect 415 of the blue shift is depicted. Since the chirp is an up chirp, the increase in frequency is associated with an increased range, so the actual position 413a is to the left of, closer than, the inferred range.
In contrast, the second observed peak 414b is due to a second scatterer moving away from the LIDAR system at a speed sufficient to introduce a shift toward lower frequencies, called a red shift. The range effect 416 of the red shift is depicted. Since the chirp is an up chirp, the decrease in frequency is associated with a decreased range, and the actual position 413b is to the right of the inferred range. The object is actually at range 413b, farther than the range inferred from the red shifted return. But neither the red shift range effect 416 nor the actual range 413b is known to the LIDAR system data processing component.
One approach to determine the size of the Doppler effect on range is to determine range once with an up chirp and again using a down chirp, in which the transmitted optical signal decreases in frequency with time. With a down chirp transmitted signal, the later arrivals have lower frequencies rather than higher frequencies and each red shift, or blue shift, would have the opposite effect on range than it does with an up chirp transmitted signal. The two oppositely affected ranges can be combined, e.g., by averaging, to determine a range that has reduced or eliminated Doppler effect. The difference between the two ranges can indicate the size of the Doppler effect (e.g., the range effect can be determined to be half the difference in the two ranges).
As stated above, the first observed peak 424a is due to the first scatterer moving toward the LIDAR system at a speed sufficient to introduce a shift toward higher frequencies, called a blue shift. The range effect 425 of the blue shift is different from that for the up chirp; the range effect goes in the opposite direction. Since the chirp is a down chirp, the increase in frequency is associated with a decreased range, and the actual position 413a, the same range as in graph 410, is to the right of the inferred range. As also stated above, the second observed peak 424b is due to a second scatterer moving away from the LIDAR system at a speed sufficient to introduce a shift toward lower frequencies, called a red shift. The range effect 426 of the red shift is toward longer ranges. The object is actually at range 413b, to the left of, closer than, the inferred range.
As stated above, the two oppositely affected ranges can be combined, e.g., by averaging, to determine a range that has reduced or eliminated Doppler effect. As can be seen, a combination of range 414a with 424a, such as an average, would give a value very close to the actual range 413a. Similarly, a combination of range 414b with 424b, such as an average, would give a value very close to the actual range 413b. The magnitude of the range effect 415 for object A (equal to the magnitude of the range effect 425 for object A) can be determined to be half the difference of the two ranges, e.g., ½*(range of 414a − range of 424a). Similarly, the magnitude of the range effect 416 for object B (equal to the magnitude of the range effect 426 for object B) can be determined to be half the difference of the two ranges, e.g., ½*(range of 424b − range of 414b). In some embodiments, the up-chirp/down-chirp returns are combined in a way that incorporates the range extent of the up-chirp/down-chirp peaks detected. Basically, the distributions of the detected peaks on the up-chirp and down-chirp returns are combined (using any of many different statistical options) in an effort to reduce the variance of the Doppler estimate.
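For purposes of illustration, a minimal Python sketch of this combination is given below; the two range values are hypothetical stand-ins for a matched up-chirp/down-chirp pair such as 414a and 424a.

    def combine_pair(r_up, r_down):
        """Return (Doppler-corrected range, magnitude of the Doppler range effect)."""
        corrected = 0.5 * (r_up + r_down)     # averaging reduces or removes the Doppler effect
        effect = 0.5 * abs(r_up - r_down)     # half the difference is the size of the range effect
        return corrected, effect

    print(combine_pair(10.4, 9.6))            # (10.0, 0.4) for the assumed pair, in meters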
This correction depends on being able to pair a peak in the up chirp returns with a corresponding peak in the down chirp returns. In some circumstances this is obvious. However, the inventors have noted that, because of spurious peaks such as peak 414c, it is not always clear how to automatically determine which peaks to pair, e.g., which of the peaks in the up chirp returns should be paired with which of the peaks in the down chirp returns.
As indicated above, if sequential up-chirp and down-chirp range measurements can be successfully paired, the range and Doppler of the target can be correctly inferred by averaging the paired measurements. However, the sequential up/down approach leads to errors when the Doppler shift changes between measurements, or when a scanned beam (e.g., a LIDAR scanner) translates to a new location between measurements, which could lead to a change in the objects being measured, a change in the actual range to the same objects, or some combination. As explained in more detail below, a cost function is used to generate a cost matrix that is then used to determine a desirable pairing of ranges from up and down chirps.
In some embodiments, the LIDAR system is changed to produce simultaneous up and down chirps. This approach eliminates variability introduced by object speed differences, by LIDAR position changes relative to the object (which actually do change the range), by transient scatterers in the beam, or by some combination of these. The approach then guarantees that the Doppler shifts and ranges measured on the up and down chirps are indeed identical and can be most usefully combined. The Doppler scheme guarantees parallel capture of asymmetrically shifted return pairs in frequency space for a high probability of correct compensation.
In some embodiments, two different laser sources are used to produce the two different optical frequencies in each beam at each time. However, in some embodiments, a single optical carrier is modulated by a single RF chirp to produce symmetrical sidebands that serve as the simultaneous up and down chirps. In some of these embodiments, a double sideband Mach-Zehnder modulator is used that, in general, does not leave much energy in the carrier frequency; instead, almost all of the energy goes into the sidebands.
When producing an optical chirp using an RF down chirp varying from ƒb to ƒa<ƒb, the bandwidth B=(ƒb−ƒa). The upper sideband varies from ƒ0+ƒa+B=ƒ0+ƒb to ƒ0+ƒa, as indicated by the left pointing arrow on frequency 536a, producing a signal in band 538a. The lower sideband simultaneously varies from ƒ0−ƒa−B=ƒ0−ƒb to ƒ0−ƒa, as indicated by the right pointing arrow on frequency 536b, producing a signal in band 538b. In other embodiments, an RF up chirp is used to modulate the optical carrier, and the frequencies 536a and 536b move through the bands 538a and 538b, respectively, in the opposite directions, e.g., from left to right in band 538a and right to left in band 538b. The returns from the up-chirp and the down chirp are distinguished using different methods in different embodiments. In some preferred embodiments the separation is performed by adding a frequency shift to remove the symmetry of the upper and lower sidebands, as described below. In other embodiments, in which the sidebands are widely enough separated to be optically filtered, the signals from each are split. One signal from each of the reference and return is passed through a low pass filter starting at ƒpl to filter out the carrier ƒ0 and the high band 538a to obtain the low frequency band 538b. Similarly, one signal from each of the reference and return is passed through a high pass filter starting at ƒph to filter out the carrier ƒ0 and the low band 538b to obtain the high frequency band 538a. The two bands are processed as described above to produce the up-chirp ranges and the down-chirp ranges. After pairing the ranges from the up chirp and down chirp, the Doppler effect and the corrected ranges are determined.
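A short Python sketch of the band bookkeeping described above follows; the carrier and RF chirp frequencies, and the filter cutoff placements, are hypothetical values chosen only to satisfy the ordering described in the text.

    f0 = 200e12              # optical carrier, Hz (assumed)
    fa, fb = 1.0e9, 2.0e9    # RF chirp limits, Hz, with fa < fb (assumed)

    band_538a = (f0 + fa, f0 + fb)   # upper sideband band
    band_538b = (f0 - fb, f0 - fa)   # lower sideband band

    f_ph = f0 + 0.5 * fa   # high-pass cutoff: above the carrier, below band 538a
    f_pl = f0 - 0.5 * fa   # low-pass cutoff: below the carrier, above band 538b
    print(band_538a, band_538b, f_pl, f_ph)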
As a result of sideband symmetry, the bandwidth of the two optical chirps will be the same if the same order sideband is used. In other embodiments, other sidebands are used, e.g., two second order sidebands are used, or a first order sideband and a non-overlapping second order sideband are used, or some other combination.
When selecting the transmit (TX) and local oscillator (LO) chirp waveforms, it is advantageous to ensure that the frequency shifted bands of the system take maximum advantage of available digitizer bandwidth. In general this is accomplished by shifting either the up chirp or the down chirp to have a range frequency beat close to zero.
For example, in another embodiment, the transmitted (TX) signal and the reference (LO) signal are generated independently using upper and lower sidebands on two different modulators on the carrier frequency.
In this case, to get an up-chirp beat frequency near zero, it makes sense to select the shift frequency Δƒs=ƒo such that the up chirp is aligned with the transmit. The down chirps will be separated by 2*ƒo.
Any frequency shifter 318 known in the art may be used. For example, in some embodiments an acousto-optic modulator (AOM) is used; and, in other embodiments, serrodyne based phase shifting is used with a phase modulator.
For example, in some embodiments, the laser 601 produces an optical carrier at frequency ƒ0, the waveform generator 650 produces an RF down chirp from ƒb to ƒa, where ƒa<ƒb, with power augmented by operational amplifier 652, to drive modulator 610 to produce the optical carrier frequency ƒ0 and the optical sidebands 538a and 538b described above.
In either type of embodiment, the optical signal entering beam splitter 602 includes the simultaneous up chirp and down chirp. This beam is split into two beams of identical waveforms but different power, with most of the power, e.g., 90% or more, passing as beam 605 through a scanning optical coupler 320 to impinge on an object 390. The remaining power passes as beam 607a to a delay 308 to add any desired delay or phase or other properties to produce reference signal 607b. In some embodiments, delay 308 is omitted and reference signal 607b is identical to signal 607a.
Optical coupler 624 is configured to split the reference signal into two reference beams, and to split the returned signal received from the object 390 through scanning optical coupler 320 into two returned signals. The beams can be split into equal or unequal power beams. One reference signal and one returned signal pass through low pass filter 626 to remove the optical carrier and high frequency band. The low passed reference and returned signals are directed onto photodetector 330 to produce a beat electrical signal that is amplified in operational amplifier 344 and transformed in FFT component 345 and fed to data analysis module 662. The other reference signal and the other returned signal pass through high pass filter 628 to remove the optical carrier and low frequency band. The high passed reference and returned signals are directed onto photodetector 630 to produce a beat electrical signal that is amplified in operational amplifier 644a and transformed in FFT component 645 and fed to data analysis module 662. The Doppler compensation module 680 matches pairs of the ranges in the up chirp and down chirp paths of system 600, determines the Doppler effects, or corrects the ranges based on the corresponding Doppler effects, or both. In some embodiments, the module 680 also operates a device based on the Doppler effect or Doppler corrected range or some combination. For example, a display device 1214 is operated to produce an image or point cloud of ranges to objects or speed of objects or some combination. The components 344, 345, 644, 645, 662 and 680 constitute a data processing system 660. In many embodiments, the separation of the up and down chirps in the sidebands is less than 1 GHz and not large enough to be cleanly separated using existing optical filters. In such embodiments, the frequency shifting approach described above is used instead to separate the up chirp and down chirp returns.
As stated above, the third observed peak 414c is an example extraneous peak due to system error that can occur in an actual system in a natural setting and is not associated with a range to any actual object of interest. This can lead to problems in determining which up-chirp range and down-chirp range to pair when determining the Doppler effect or corrected range or both. That is, a complication arises as FM chirp waveform (FMCW) systems resolve returns from all scatterers along a given line of sight. In this scenario, any Doppler processing algorithm must effectively deal with the requirement that multiple returns on the up-chirp and down-chirp range profile be paired correctly for compensation. In the case of a single range return, one peak on the “up” and one peak on the “down” would be isolated and correctly paired. The confounding situation arises when multiple peaks exist, from multiple scatterers, on the “up” and “down” side. Which peaks should be paired with which to get the desired compensation effect has to be determined, and preferably is determined using a method that can be implemented to proceed automatically, e.g., on a processor or integrated circuit.
Here is demonstrated an approach to automatically pair up-chirp ranges and down-chirp ranges for calculating Doppler effects and Doppler corrected ranges. This approach uses a bi-partite graph matching formulation to achieve correct up/down return pairings with high probability.
For each possible pair, a cost is determined. A cost is a measure of dissimilarity between the two ranges in the pair. Any method may be used to evaluate the dissimilarity. For example, in some embodiments, the cost for each pair is a difference in detected ranges for the two ranges. In some embodiments, the cost is a difference in detected peak heights associated with the two ranges. In some embodiments, the cost is a difference in peak widths associated with the two ranges. In some embodiments, the cost is a scalar valued function of two or more of individual dissimilarities, such as range difference, peak height difference, and peak width difference, among others. In some embodiments, the cost is a vector function with two or more elements selected from range difference, peak height difference, and peak width difference, among others. The cost is represented by the symbol Cnm, where m indicates an index for a down-chirp range and n represents an index for an up-chirp range.
The set (Sup) of range returns Ri up from the up-chirp range profiles and the set (Sdown) of range returns Rj down from the down-chirp range profiles are determined using Equation 1b and frequencies of peaks selected via a standard thresholding and peak fitting procedure (e.g., peak finding based on height and width of the peak) of the FFT spectrum of the electrical output of the photodetectors.
Sup=[R1up,R2up, . . . RNup] (3a)
and
Sdown=[R1down,R2down, . . . RMdown] (3b)
with N “up” peaks and M “down” peaks. The two sets of range profiles then are used to define a cost matrix C where each matrix element is a function of the pair Cnm=F(Rn,up, Rm,down). A good cost function for static scenes (imager not moving relative to scene) is simply the magnitude of the Doppler effect for the pairing, F(Rn,up, Rm,down) = |Rn,up − Rm,down|/2, where |.| is the absolute value operation. The general approach allows for flexible definition of cost functions. For example, the cost could include other parameters such as the intensity of the set of peaks, the uncompensated range, external information such as imager motion in the imaging space (for mobile scanning), or combinations thereof.
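For purposes of illustration, the static-scene cost matrix described above can be formed as in the following Python sketch; the range values are hypothetical.

    import numpy as np

    S_up = np.array([10.4, 25.1, 3.3])    # hypothetical up-chirp ranges, m (N = 3)
    S_down = np.array([9.6, 24.7])        # hypothetical down-chirp ranges, m (M = 2)

    # C[n, m] = |R_n,up - R_m,down| / 2, the magnitude of the Doppler range effect implied by the pairing.
    C = np.abs(S_up[:, None] - S_down[None, :]) / 2.0
    print(C)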
Once the cost matrix C is defined, the approach proceeds with a bipartite graph matching procedure. This procedure often restricts the pairings so that a given up-chirp range can only be paired with a single down-chirp range, and vice versa. Other constraints can be imposed, such as that lines connecting matched pairs do not cross. In some embodiments, the matching process proceeds in a greedy manner whereby the algorithm always chooses the lowest cost pairing, minimum(Cnm), from the set until no further pairings exist. Alternatively, the Hungarian bi-partite graph matching algorithm (e.g., see Munkres, 1957) can be used to generate the optimal “lowest cost” set of pairings averaged over all pairs. In other embodiments, other graph matching methods are used. It has been found that, with observed real world scenarios, the greedy approach is both faster and sufficient for the intended purposes. This method was chosen because it limits a peak on either side to be paired with a maximum of one peak on the other side. This is the most “physical” interpretation of the pairing procedure, as a peak on one side being paired with two on the other side would imply a non-physical or confounding scenario. Other matching procedures were also considered but did not perform as well in the experimental embodiments. For example, a “range ordered” approach sorted the up and down returns according to range and sequentially paired them (from closest to furthest). Extra peaks on either side when one side “ran out of peaks” were discarded. This approach failed in the common scenario of close peaks with slight Doppler shifts. Similarly, an “amplitude ordered” approach sorted the peaks according to amplitude and paired them in descending order of amplitude. This method did not work well because speckle effects in coherent detection cause large variance in the amplitude of detected peaks, which led to many incorrect pairings. The cost matrix approach seems to be the most general way of considering the options and minimizing globally across the set of options.
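A simplified Python sketch of the greedy pairing described above is given below, with an optimal assignment computed by the Hungarian-style solver in SciPy for comparison (the NumPy and SciPy libraries are assumed to be available); the cost matrix is the hypothetical one from the previous sketch.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def greedy_match(C):
        """Repeatedly take the lowest-cost unused (up, down) pair until none remain."""
        C = np.array(C, dtype=float)
        pairs = []
        for _ in range(min(C.shape)):
            n, m = np.unravel_index(np.argmin(C), C.shape)
            pairs.append((int(n), int(m)))
            C[n, :] = np.inf   # an up-chirp peak pairs with at most one down-chirp peak
            C[:, m] = np.inf   # and vice versa
        return pairs

    C = np.array([[0.4, 7.15], [7.75, 0.2], [3.15, 10.7]])   # from the previous sketch
    print(greedy_match(C))                                   # [(1, 1), (0, 0)]
    row, col = linear_sum_assignment(C)                      # optimal pairing for comparison
    print(list(zip(row.tolist(), col.tolist())))             # [(0, 0), (1, 1)]

In this small example the greedy and optimal pairings agree; as noted above, the greedy approach was found to be faster and sufficient in practice.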
In step 801, a transceiver, e.g., a LIDAR system, is configured to transmit up and down chirped optical signals. A portion (e.g., 1% to 10%) of each of the up chirp and down chirp signals is also directed to a reference optical path. The transceiver is also configured to receive a backscattered optical signal from any external object illuminated by the transmitted signals. In some embodiments, the up chirp and down chirp signals are transmitted sequentially; in other embodiments, the up chirp and down chirp signals are transmitted simultaneously. In some embodiments, step 801 includes configuring other optical components in hardware to provide the functions of one or more of the following steps as well, as illustrated, for example, in the embodiments described above.
In some embodiments, the up chirp optical signal is symmetric with the down chirp, that is, they have the same bandwidth, B, and they have the same duration, τ. But a range can be determined for any bandwidth and duration that spans the returns of interest. Thus, in some embodiments, the up chirp optical signals and down chirp optical signals have different bandwidths, Bu and Bd, respectively, or different durations, τu and τd, respectively, or some combination. In some embodiments, it is convenient to produce slightly different values for B and τ, e.g., by using one first order sideband and one second order sideband of an optical carrier, rather than the positive and negative first order sidebands depicted above.
In step 802 the transmitted signal is directed to a spot in a scene where there might be, or might not be, an object or a part of an object.
In step 803, the returned up chirp return signal is separated from the down chirp return signal. For example, the up chirp and down chirp are in different optical frequency bands and the separation is accomplished by a return path that includes splitting the return signal into two copies and passing each through a different optical filter. One filter (e.g., optical filter 626) passes the frequencies of the up chirp while blocking the frequencies of the down chirp; and, the other filter (e.g., optical filter 628) passes the frequencies of the down chirp while blocking the frequencies of the up chirp. When the up chirp and down chirp are transmitted sequentially, the separation is done by processing the signals in different time windows rather than by passing through an optical filter, and both chirps can use the same frequency band.
In step 805, the separated up chirp return is combined with the up chirp reference signal at a first detector to determine zero or more frequency differences (or resulting beat frequencies) in the up chirp return signal. For example, the electrical signal from a detector is operated on by a FFT module in hardware or software on a programmable processor. When the up chirp signal and down chirp signal are transmitted simultaneously, then the up-chirp and down chirp portions of the return signal are separated using one or more of the methods and systems described above, e.g., a frequency shift of transmitted or return signal relative to the reference signal as in
In step 807, one or more up-chirp ranges, R1up, . . . RNup, are determined based on the one or more up chirp frequency differences (beat frequencies), e.g., using Equation 1b or equivalent in processor 662. In addition, one or more down-chirp ranges, R1down, . . . RMdown, are determined based on the one or more down chirp frequency differences (beat frequencies), e.g., using Equation 1b or equivalent in processor 662. Returns that have no up chirp beat frequencies or have no down chirp beat frequencies are discarded during step 807. As a result of step 807, the sets Sup and Sdown are generated, as expressed in Equations 3a and 3b for each transmitted signal illuminating some spot in a scene.
In step 811, values for elements of a cost matrix are determined, using any of the measures of dissimilarity identified above, such as differences in range, differences in peak height, differences in peak width, or differences in any other beat frequency peak characteristic, or some scalar or vector combination. In some embodiments, the cost for each pair of ranges is a scalar weighted function of several of these measures, e.g., giving highest weight (e.g., about 60%) to differences in range, a moderate weight (e.g., about 30%) to differences in peak height, and a low weight (e.g., about 10%) to differences in peak width. In some embodiments, a bias term is added to each element of the cost matrix, which accounts for the motion of the imaging system and the direction in which it may be pointed. This is advantageous in embodiments for mobile imaging where a non-zero, signed Doppler value may be expected in the case that the imaging system is targeting a stationary object but itself is in motion. In other embodiments, the object is moving, or the object and imaging system are both moving, and it is advantageous to add bias terms based on the relative motion and tracking of the object. The magnitude of the correction depends only on the component of the velocity along the line connecting the imaging system and the target, called the radial direction. Thus, in some embodiments, the bias depends on the direction of beam pointing relative to the motion of the sensor. Note that the imaging system could itself be used to estimate relative velocities. The cost matrix includes a cost for every pair of ranges in the two sets, one up-chirp range and one down-chirp range.
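For purposes of illustration only, one possible form of such a weighted cost element is sketched below in Python; the peak records, the handling of the bias term, and the exact weights (here the illustrative 60%/30%/10% split mentioned above) are assumptions for the sketch, not a definitive implementation of any embodiment.

    def cost_element(up_peak, down_peak, expected_doppler_range=0.0):
        """Weighted dissimilarity between one up-chirp peak and one down-chirp peak.

        expected_doppler_range is a hypothetical bias term: the Doppler range effect
        expected from known sensor motion along the beam (radial) direction, which is
        assumed here to pull the up-chirp and down-chirp ranges apart by twice that amount.
        """
        d_range = abs((up_peak["range"] - down_peak["range"]) - 2.0 * expected_doppler_range)
        d_height = abs(up_peak["height"] - down_peak["height"])
        d_width = abs(up_peak["width"] - down_peak["width"])
        return 0.6 * d_range + 0.3 * d_height + 0.1 * d_width

    up_peak = {"range": 10.4, "height": 1.0, "width": 0.30}    # hypothetical up-chirp peak
    down_peak = {"range": 9.6, "height": 0.9, "width": 0.25}   # hypothetical down-chirp peak
    print(cost_element(up_peak, down_peak, expected_doppler_range=0.4))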
In step 813, at least one up-chirp range is matched to at least one down-chirp range based on the cost matrix, for example using bi-partite graph matching algorithms. It was found that greedy matching provides the advantages of being simpler to implement, faster to operate, and sufficient to make good automatic matches for Doppler correction purposes. This approach has generated imagery with better Doppler compensation than naïve approaches such as sequential pairing of brightest peaks on the up and down sides or always pairing of the closest peaks in range and proceeding outwards, as shown, for example, in the results described below.
In step 815, at least one matched pair for the current illuminated spot is used to determine a Doppler effect at the spot for at least one object or portion of an object illuminated in the spot by differencing the ranges in the pair. In some embodiments, the Doppler corrected range is determined based on combining the two ranges in the pair, e.g., by averaging. In some embodiments, a weighted average is used if one of the up-chirp range or down-chirp range is considered more reliable. One range may be considered more reliable, for example, because it is based on a broader bandwidth or longer duration. In some embodiments, a measure of reliability is used to weight the range, giving more weight to the more reliable range. If several matched ranges are available for the current illuminated spot, all matched pairs are used to find multiple ranges or multiple Doppler effects for the spot.
In step 821 it is determined whether there is another spot to illuminate in a scene of interest, e.g., by scanning the scanning optical coupler 320 to view a new spot in the scene of interest. If so, control passes back to step 802 and following steps to illuminate the next spot and process any returns. If not, then there are no further spots to illuminate before the results are used, and control passes to step 823.
In step 823, a device is operated based on the Doppler effect or the corrected ranges. In some embodiments, this involves presenting on a display device an image that indicates a Doppler corrected position of any object at a plurality of spots illuminated by the transmitted optical signal. In some embodiments, this involves communicating, to the device, data that identifies at least one object based on a point cloud of Doppler corrected positions at a plurality of spots illuminated by the transmitted optical signal. In some embodiments, this involves presenting on a display device an image that indicates a size of the Doppler effect at a plurality of spots illuminated by the transmitted optical signal, whereby moving objects are distinguished from stationary objects and absent objects. In some embodiments, this involves moving a vehicle to avoid a collision with an object, wherein a closing speed between the vehicle and the object is determined based on a size of the Doppler effect at a plurality of spots illuminated by the transmitted optical signal. In some embodiments, this involves identifying the vehicle or identifying the object on the collision course based on a point cloud of Doppler corrected positions at a plurality of spots illuminated by the transmitted optical signal. Filtering the point cloud data based on Doppler has the effect of identifying and removing vegetation that may be moving in the breeze. Hard targets, man-made targets, or dense targets are then better revealed by the filtering process. This can be advantageous in defense and surveillance scenarios. In the vehicle scenario, the Doppler can be used to segment targets (i.e., road surface versus moving vehicle).
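As a non-limiting illustration of this kind of Doppler-based filtering or segmentation, the following Python sketch partitions a small point cloud by a per-point Doppler speed; the array layout, point values, and threshold are hypothetical.

    import numpy as np

    # Each row: x, y, z in meters, then the Doppler-derived radial speed in m/s (assumed layout).
    points = np.array([
        [1.0, 0.2, 0.1, 0.0],
        [5.2, 1.1, 0.3, 4.5],
        [2.4, 0.8, 1.5, 0.6],
    ])

    speed_threshold = 1.0                          # m/s, hypothetical
    moving = np.abs(points[:, 3]) > speed_threshold
    static_points = points[~moving]                # e.g., road surface, hard targets
    moving_points = points[moving]                 # e.g., vehicles, swaying vegetation
    print(len(static_points), len(moving_points))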
In these example embodiments, the LIDAR system used components illustrated above to produce simultaneous up and down chirp transmitted signals. This system is commercially available as the HRS-3D from BLACKMORE SENSORS AND ANALYTICS, INC.™ of Bozeman, Montana. In these example embodiments, greedy matching was used. The first match is the one with the lowest cost. That pair is then removed and the pair with the next lowest cost is used.
Another test was run with a fast frame rate, narrow field of view imager that produces 10,000 data-point point clouds per frame at a 10 Hz frame rate. A test person ran back and forth in the field of view of the sensor. Each image of the person is cut from a time series of several hundred 3D imaging frames (cut from the same 3D orientation perspective).
Thus the system not only correctly places all pixels on a person's form with corrected ranges, but also ascertains the movement of the figure based on the Doppler effect.
In another embodiment, the system is used to correct for swaying vegetation. A collimated laser beam is scanned through overhead trees. The probability of having multiple returns with Doppler values present is very high, especially with a slight breeze or wind present. Such vegetation provides dense scatterers separated closely in range along a given line of sight. Wind itself can cause Doppler shifts in the vegetation, thus making the imaging of the vegetation even more challenging.
A sequence of binary digits constitutes digital data that is used to represent a number or code for a character. A bus 1210 includes many parallel conductors of information so that information is transferred quickly among devices coupled to the bus 1210. One or more processors 1202 for processing information are coupled with the bus 1210. A processor 1202 performs a set of operations on information. The set of operations includes bringing information in from the bus 1210 and placing information on the bus 1210. The set of operations also typically includes comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication. A sequence of operations to be executed by the processor 1202 constitutes computer instructions.
Computer system 1200 also includes a memory 1204 coupled to bus 1210. The memory 1204, such as a random access memory (RAM) or other dynamic storage device, stores information including computer instructions. Dynamic memory allows information stored therein to be changed by the computer system 1200. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses. The memory 1204 is also used by the processor 1202 to store temporary values during execution of computer instructions. The computer system 1200 also includes a read only memory (ROM) 1206 or other static storage device coupled to the bus 1210 for storing static information, including instructions, that is not changed by the computer system 1200. Also coupled to bus 1210 is a non-volatile (persistent) storage device 1208, such as a magnetic disk or optical disk, for storing information, including instructions, that persists even when the computer system 1200 is turned off or otherwise loses power.
Information, including instructions, is provided to the bus 1210 for use by the processor from an external input device 1212, such as a keyboard containing alphanumeric keys operated by a human user, or a sensor. A sensor detects conditions in its vicinity and transforms those detections into signals compatible with the signals used to represent information in computer system 1200. Other external devices coupled to bus 1210, used primarily for interacting with humans, include a display device 1214, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), for presenting images, and a pointing device 1216, such as a mouse or a trackball or cursor direction keys, for controlling a position of a small cursor image presented on the display 1214 and issuing commands associated with graphical elements presented on the display 1214.
In the illustrated embodiment, special purpose hardware, such as an application specific integrated circuit (ASIC) 1220, is coupled to bus 1210. The special purpose hardware is configured to perform operations not performed by processor 1202 quickly enough for special purposes. Examples of application specific ICs include graphics accelerator cards for generating images for display 1214, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware.
Computer system 1200 also includes one or more instances of a communications interface 1270 coupled to bus 1210. Communication interface 1270 provides a two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general, the coupling is with a network link 1278 that is connected to a local network 1280 to which a variety of external devices with their own processors are connected. For example, communication interface 1270 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer. In some embodiments, communications interface 1270 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line. In some embodiments, a communication interface 1270 is a cable modem that converts signals on bus 1210 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable. As another example, communications interface 1270 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented. Carrier waves, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves, travel through space without wires or cables. Signals include man-made variations in amplitude, frequency, phase, polarization or other physical properties of carrier waves. For wireless links, the communications interface 1270 sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data.
The term computer-readable medium is used herein to refer to any medium that participates in providing information to processor 1202, including instructions for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as storage device 1208. Volatile media include, for example, dynamic memory 1204. Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. The term computer-readable storage medium is used herein to refer to any medium that participates in providing information to processor 1202, except for transmission media.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, a compact disk ROM (CD-ROM), a digital video disk (DVD) or any other optical medium, punch cards, paper tape, or any other physical medium with patterns of holes, a RAM, a programmable ROM (PROM), an erasable PROM (EPROM), a FLASH-EPROM, or any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read. The term non-transitory computer-readable storage medium is used herein to refer to any medium that participates in providing information to processor 1202, except for carrier waves and other signals.
Logic encoded in one or more tangible media includes one or both of processor instructions on a computer-readable storage medium and special purpose hardware, such as ASIC 1220.
Network link 1278 typically provides information communication through one or more networks to other devices that use or process the information. For example, network link 1278 may provide a connection through local network 1280 to a host computer 1282 or to equipment 1284 operated by an Internet Service Provider (ISP). ISP equipment 1284 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 1290. A computer called a server 1292 connected to the Internet provides a service in response to information received over the Internet. For example, server 1292 provides information representing video data for presentation at display 1214.
The invention is related to the use of computer system 1200 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 1200 in response to processor 1202 executing one or more sequences of one or more instructions contained in memory 1204. Such instructions, also called software and program code, may be read into memory 1204 from another computer-readable medium such as storage device 1208. Execution of the sequences of instructions contained in memory 1204 causes processor 1202 to perform the method steps described herein. In alternative embodiments, hardware, such as application specific integrated circuit 1220, may be used in place of or in combination with software to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
The signals transmitted over network link 1278 and other networks through communications interface 1270 carry information to and from computer system 1200. Computer system 1200 can send and receive information, including program code, through the networks 1280, 1290 among others, through network link 1278 and communications interface 1270. In an example using the Internet 1290, a server 1292 transmits program code for a particular application, requested by a message sent from computer system 1200, through Internet 1290, ISP equipment 1284, local network 1280 and communications interface 1270. The received code may be executed by processor 1202 as it is received, or may be stored in storage device 1208 or other non-volatile storage for later execution, or both. In this manner, computer system 1200 may obtain application program code in the form of a signal on a carrier wave.
Various forms of computer readable media may be involved in carrying one or more sequences of instructions or data or both to processor 1202 for execution. For example, instructions and data may initially be carried on a magnetic disk of a remote computer such as host 1282. The remote computer loads the instructions and data into its dynamic memory and sends the instructions and data over a telephone line using a modem. A modem local to the computer system 1200 receives the instructions and data on a telephone line and uses an infra-red transmitter to convert the instructions and data to a signal on an infra-red carrier wave serving as the network link 1278. An infrared detector serving as communications interface 1270 receives the instructions and data carried in the infrared signal and places information representing the instructions and data onto bus 1210. Bus 1210 carries the information to memory 1204 from which processor 1202 retrieves and executes the instructions using some of the data sent with the instructions. The instructions and data received in memory 1204 may optionally be stored on storage device 1208, either before or after execution by the processor 1202.
In one embodiment, the chip set 1300 includes a communication mechanism such as a bus 1301 for passing information among the components of the chip set 1300. A processor 1303 has connectivity to the bus 1301 to execute instructions and process information stored in, for example, a memory 1305. The processor 1303 may include one or more processing cores, with each core configured to perform independently. A multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores. Alternatively or in addition, the processor 1303 may include one or more microprocessors configured in tandem via the bus 1301 to enable independent execution of instructions, pipelining, and multithreading. The processor 1303 may also be accompanied by one or more specialized components to perform certain processing functions and tasks, such as one or more digital signal processors (DSP) 1307, or one or more application-specific integrated circuits (ASIC) 1309. A DSP 1307 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 1303. Similarly, an ASIC 1309 can be configured to perform specialized functions not easily performed by a general purpose processor. Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips.
The processor 1303 and accompanying components have connectivity to the memory 1305 via the bus 1301. The memory 1305 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform one or more steps of a method described herein. The memory 1305 also stores the data associated with or generated by the execution of one or more steps of the methods described herein.
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Throughout this specification and the claims, unless the context requires otherwise, the word “comprise” and its variations, such as “comprises” and “comprising,” will be understood to imply the inclusion of a stated item, element or step or group of items, elements or steps but not the exclusion of any other item, element or step or group of items, elements or steps. Furthermore, the indefinite article “a” or “an” is meant to indicate one or more of the item, element or step modified by the article. As used herein, unless otherwise clear from the context, a value is “about” another value if it is within a factor of two (twice or half) of the other value. While example ranges are given, unless otherwise clear from the context, any contained ranges are also intended in various embodiments. Thus, a range from 0 to 10 includes the range 1 to 4 in some embodiments.
This application claims benefit of Patent Cooperation Treaty (PCT) Appln. PCT/US2017/62703, filed Nov. 21, 2017, which claims priority to Provisional Appln. 62/428,109, filed Nov. 30, 2016, the entire contents of which are hereby incorporated by reference as if fully set forth herein, under 35 U.S.C. § 119(e).
This invention was made with government support under contract W9132V-14-C-0002 awarded by the Department of the Army. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2017/062703 | 11/21/2017 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2018/160240 | 9/7/2018 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4099249 | Casasent | Jul 1978 | A |
4620192 | Collins | Oct 1986 | A |
4648276 | Klepper et al. | Mar 1987 | A |
4804893 | Melocik | Feb 1989 | A |
5075864 | Sakai | Dec 1991 | A |
5216534 | Boardman et al. | Jun 1993 | A |
5223986 | Mayerjak et al. | Jun 1993 | A |
5227910 | Khattak | Jul 1993 | A |
5231401 | Kaman et al. | Jul 1993 | A |
5687017 | Katoh et al. | Nov 1997 | A |
5781156 | Krasner | Jul 1998 | A |
5828585 | Welk et al. | Oct 1998 | A |
5947903 | Ohtsuki et al. | Sep 1999 | A |
5999302 | Sweeney et al. | Dec 1999 | A |
6029496 | Kreft | Feb 2000 | A |
6211888 | Ohtsuki et al. | Apr 2001 | B1 |
6671595 | Lu et al. | Dec 2003 | B2 |
6753950 | Morcom | Jun 2004 | B2 |
6871148 | Morgen et al. | Mar 2005 | B2 |
6931055 | Underbrink et al. | Aug 2005 | B1 |
7122691 | Oshima et al. | Oct 2006 | B2 |
7152490 | Freund et al. | Dec 2006 | B1 |
7486802 | Hougen | Feb 2009 | B2 |
7511824 | Sebastian et al. | Mar 2009 | B2 |
7639347 | Eaton | Dec 2009 | B2 |
7742152 | Hui et al. | Jun 2010 | B2 |
7917039 | Delfyett | Mar 2011 | B1 |
8135513 | Bauer et al. | Mar 2012 | B2 |
8531650 | Feldkhun et al. | Sep 2013 | B2 |
8751155 | Lee | Jun 2014 | B2 |
8805197 | Delfyett | Aug 2014 | B2 |
8818609 | Boyko et al. | Aug 2014 | B1 |
8831780 | Zelivinski et al. | Sep 2014 | B2 |
8954252 | Urmson et al. | Feb 2015 | B1 |
9041915 | Earhart et al. | May 2015 | B2 |
9046909 | Leibowitz et al. | Jun 2015 | B2 |
9086273 | Gruver et al. | Jul 2015 | B1 |
9097800 | Zhu | Aug 2015 | B1 |
9348137 | Plotkin et al. | May 2016 | B2 |
9383753 | Templeton et al. | Jul 2016 | B1 |
9607220 | Smith et al. | Mar 2017 | B1 |
9618742 | Droz et al. | Apr 2017 | B1 |
9753462 | Gilliland et al. | Sep 2017 | B2 |
10036812 | Crouch et al. | Jul 2018 | B2 |
10231705 | Lee | Mar 2019 | B2 |
10345434 | Hinderling et al. | Jul 2019 | B2 |
10422649 | Engelman et al. | Sep 2019 | B2 |
10485508 | Miyaji et al. | Nov 2019 | B2 |
10520602 | Villeneuve et al. | Dec 2019 | B2 |
10534084 | Crouch et al. | Jan 2020 | B2 |
10568258 | Wahlgren | Feb 2020 | B2 |
10571567 | Campbell et al. | Feb 2020 | B2 |
11002856 | Heidrich | May 2021 | B2 |
11041954 | Crouch et al. | Jun 2021 | B2 |
11249192 | Crouch et al. | Feb 2022 | B2 |
11402506 | Ohtomo et al. | Aug 2022 | B2 |
11441899 | Pivac et al. | Sep 2022 | B2 |
20020071109 | Allen et al. | Jun 2002 | A1 |
20020140924 | Wangler et al. | Oct 2002 | A1 |
20030117312 | Nakanishi | Jun 2003 | A1 |
20040034304 | Sumi | Feb 2004 | A1 |
20040109155 | Deines | Jun 2004 | A1 |
20040158155 | Njemanze | Aug 2004 | A1 |
20040222366 | Frick | Nov 2004 | A1 |
20050149240 | Tseng et al. | Jul 2005 | A1 |
20060132752 | Kane | Jun 2006 | A1 |
20060239312 | Kewitsch et al. | Oct 2006 | A1 |
20070005212 | Xu et al. | Jan 2007 | A1 |
20080018881 | Hui et al. | Jan 2008 | A1 |
20080024756 | Rogers | Jan 2008 | A1 |
20080040029 | Breed | Feb 2008 | A1 |
20080100822 | Munro | May 2008 | A1 |
20090002679 | Ruff et al. | Jan 2009 | A1 |
20090009842 | Destain et al. | Jan 2009 | A1 |
20090030605 | Breed | Jan 2009 | A1 |
20100094499 | Anderson | Apr 2010 | A1 |
20100183309 | Etemad et al. | Jul 2010 | A1 |
20100188504 | Dimsdale et al. | Jul 2010 | A1 |
20100312432 | Hamada et al. | Dec 2010 | A1 |
20110007299 | Moench | Jan 2011 | A1 |
20110015526 | Tamura | Jan 2011 | A1 |
20110026007 | Gammenthaler | Feb 2011 | A1 |
20110026008 | Gammenthaler | Feb 2011 | A1 |
20110205523 | Rezk | Aug 2011 | A1 |
20110292371 | Chang | Dec 2011 | A1 |
20120038902 | Dotson | Feb 2012 | A1 |
20120127252 | Lim et al. | May 2012 | A1 |
20120229627 | Wang | Sep 2012 | A1 |
20120274922 | Hodge | Nov 2012 | A1 |
20120281907 | Samples et al. | Nov 2012 | A1 |
20120306383 | Munro | Dec 2012 | A1 |
20130120989 | Sun et al. | May 2013 | A1 |
20130268163 | Comfort et al. | Oct 2013 | A1 |
20130325244 | Wang et al. | Dec 2013 | A1 |
20140036252 | Amzajerdian | Feb 2014 | A1 |
20140064607 | Grossmann et al. | Mar 2014 | A1 |
20150005993 | Breuing | Jan 2015 | A1 |
20150046119 | Sandhawalia et al. | Feb 2015 | A1 |
20150130607 | MacArthur | May 2015 | A1 |
20150160332 | Sebastian et al. | Jun 2015 | A1 |
20150177379 | Smith et al. | Jun 2015 | A1 |
20150185244 | Inoue et al. | Jul 2015 | A1 |
20150260836 | Hayakawa | Sep 2015 | A1 |
20150267433 | Leonessa et al. | Sep 2015 | A1 |
20150269438 | Samarasekera et al. | Sep 2015 | A1 |
20150270838 | Chan et al. | Sep 2015 | A1 |
20150282707 | Tanabe et al. | Oct 2015 | A1 |
20150323660 | Hampikian | Nov 2015 | A1 |
20150331103 | Jensen | Nov 2015 | A1 |
20150331111 | Newman et al. | Nov 2015 | A1 |
20160078303 | Samarasekera et al. | Mar 2016 | A1 |
20160084946 | Turbide | Mar 2016 | A1 |
20160091599 | Jenkins | Mar 2016 | A1 |
20160123720 | Thorpe | May 2016 | A1 |
20160125739 | Stewart et al. | May 2016 | A1 |
20160377721 | Lardin | Jun 2016 | A1 |
20160216366 | Phillips et al. | Jul 2016 | A1 |
20160245903 | Kalscheur et al. | Aug 2016 | A1 |
20160245919 | Kalscheur et al. | Aug 2016 | A1 |
20160260324 | Tummala et al. | Sep 2016 | A1 |
20160266243 | Marron | Sep 2016 | A1 |
20160274589 | Templeton et al. | Sep 2016 | A1 |
20160350926 | Flint et al. | Dec 2016 | A1 |
20160377724 | Crouch et al. | Dec 2016 | A1 |
20170160541 | Carothers et al. | Jun 2017 | A1 |
20170248691 | McPhee et al. | Aug 2017 | A1 |
20170299697 | Swanson | Oct 2017 | A1 |
20170329014 | Moon et al. | Nov 2017 | A1 |
20170329332 | Pilarski et al. | Nov 2017 | A1 |
20170343652 | De Mersseman et al. | Nov 2017 | A1 |
20170350964 | Kaneda | Dec 2017 | A1 |
20170350979 | Uyeno | Dec 2017 | A1 |
20170356983 | Jeong | Dec 2017 | A1 |
20180003805 | Popovich et al. | Jan 2018 | A1 |
20180136000 | Rasmusson et al. | May 2018 | A1 |
20180188355 | Bao et al. | Jul 2018 | A1 |
20180224547 | Crouch et al. | Aug 2018 | A1 |
20180267556 | Templeton et al. | Sep 2018 | A1 |
20180276986 | Delp | Sep 2018 | A1 |
20180284286 | Eichenholz et al. | Oct 2018 | A1 |
20180299534 | Lachapelle et al. | Oct 2018 | A1 |
20180307913 | Finn et al. | Oct 2018 | A1 |
20190064831 | Gali et al. | Feb 2019 | A1 |
20190086514 | Dussan et al. | Mar 2019 | A1 |
20190107606 | Russell et al. | Apr 2019 | A1 |
20190154439 | Binder | May 2019 | A1 |
20190154832 | Maleki et al. | May 2019 | A1 |
20190154835 | Maleki et al. | May 2019 | A1 |
20190258251 | Ditty et al. | Aug 2019 | A1 |
20190317219 | Smith et al. | Oct 2019 | A1 |
20190318206 | Smith et al. | Oct 2019 | A1 |
20190346856 | Berkemeier et al. | Nov 2019 | A1 |
20190361119 | Kim et al. | Nov 2019 | A1 |
20200025879 | Pacala et al. | Jan 2020 | A1 |
20200049819 | Cho et al. | Feb 2020 | A1 |
20210089047 | Smith et al. | Mar 2021 | A1 |
20210165102 | Crouch et al. | Jun 2021 | A1 |
20210325664 | Adams et al. | Oct 2021 | A1 |
20230025474 | Baker et al. | Jan 2023 | A1 |
Number | Date | Country |
---|---|---|
101346773 | Jan 2009 | CN |
102150007 | Aug 2011 | CN |
103227559 | Jul 2013 | CN |
104793619 | Jul 2015 | CN |
104956400 | Sep 2015 | CN |
105425245 | Mar 2016 | CN |
105629258 | Jun 2016 | CN |
105652282 | Jun 2016 | CN |
107015238 | Aug 2017 | CN |
107193011 | Sep 2017 | CN |
207318710 | May 2018 | CN |
10 2007 001 103 | Jul 2008 | DE |
10 2017 200 692 | Aug 2018 | DE |
1 298 453 | Apr 2003 | EP |
3 330 766 | Jun 2018 | EP |
2 349 231 | Oct 2000 | GB |
63-071674 | Apr 1988 | JP |
S63-071674 | Apr 1988 | JP |
H06-148556 | May 1994 | JP |
09-257415 | Oct 1997 | JP |
H09-257415 | Oct 1997 | JP |
2765767 | Jun 1998 | JP |
H11-153664 | Jun 1999 | JP |
2000-338244 | Dec 2000 | JP |
2002-249058 | Sep 2002 | JP |
3422720 | Jun 2003 | JP |
2003-185738 | Jul 2003 | JP |
2006-148556 | Jun 2006 | JP |
2006-226931 | Aug 2006 | JP |
2007-155467 | Jun 2007 | JP |
2007-214564 | Aug 2007 | JP |
2007-214694 | Aug 2007 | JP |
2009-257415 | Nov 2009 | JP |
2009-291294 | Dec 2009 | JP |
2011-044750 | Mar 2011 | JP |
2011-107165 | Jun 2011 | JP |
2011-203122 | Oct 2011 | JP |
2012-502301 | Jan 2012 | JP |
2012-103118 | May 2012 | JP |
2012-154863 | Aug 2012 | JP |
2012-196436 | Oct 2012 | JP |
2015-125062 | Jul 2015 | JP |
2015-172510 | Oct 2015 | JP |
2015-212942 | Nov 2015 | JP |
2018-173346 | Nov 2018 | JP |
2018-204970 | Dec 2018 | JP |
2018-0058068 | May 2018 | KR |
2018-0126927 | Nov 2018 | KR |
201516612 | May 2015 | TW |
201818183 | May 2018 | TW |
201832039 | Sep 2018 | TW |
201833706 | Sep 2018 | TW |
202008702 | Feb 2020 | TW |
WO-2007124063 | Nov 2007 | WO |
WO-2010127151 | Nov 2010 | WO |
WO-2011102130 | Aug 2011 | WO |
WO-2014132020 | Sep 2014 | WO |
WO-2015037173 | Mar 2015 | WO |
WO-2016134321 | Aug 2016 | WO |
WO-2016164435 | Oct 2016 | WO |
WO-2017018065 | Feb 2017 | WO |
WO-2018066069 | Apr 2018 | WO |
WO-2018067158 | Apr 2018 | WO |
WO-2018102188 | Jun 2018 | WO |
WO-2018102190 | Jun 2018 | WO |
WO-2018107237 | Jun 2018 | WO |
WO-2018125438 | Jul 2018 | WO |
WO-2018144853 | Aug 2018 | WO |
WO-2018160240 | Sep 2018 | WO |
WO-2019014177 | Jan 2019 | WO |
WO-2020062301 | Apr 2020 | WO |
Entry |
---|
International Search Report in International Application Serial No. PCT/US2017/062703 dated Aug. 27, 2018. |
Foreign Action other than Search Report on KR 10-2019-7018575 dated Jun. 23, 2020. |
Foreign Search Report on EP 17876731.5 dated Jun. 17, 2020. |
Japanese Office Action JP 2019527155 dated Dec. 1, 2020. |
Decision of Rejection on JP Appl. Ser. No. 2020-559530 dated Aug. 31, 2021 (13 pages). |
First Office Action on CN Appl. Ser. No. 201780081215.2 dated Mar. 3, 2021 (14 pages). |
First Office Action on CN Appl. Ser. No. 201980033898.3 dated Apr. 20, 2021 (14 pages). |
International Preliminary Report and Written Opinion on Patentability on Appl. Ser. No. PCT/US2018/041388 dated Jan. 23, 2020 (12 pages). |
International Preliminary Report and Written Opinion on Patentability on Appl. Ser. No. PCT/US2019/028532 dated Oct. 27, 2020 (11 pages). |
International Preliminary Report and Written Opinion on Patentability on Appl. Ser. No. PCT/US2019/068351 dated Jul. 15, 2021 (8 pages). |
International Search Report and Written Opinion on Appl. Ser. No. PCT/US2021/032515 dated Aug. 3, 2021 (18 pages). |
Notice of Allowance on KR Appl. Ser. No. 10-2019-7019062 dated Feb. 10, 2021 (4 Pages). |
Notice of Allowance on KR Appl. Ser. No. 10-2019-7019076 dated Feb. 15, 2021 (4 pages). |
Notice of Allowance on KR Appl. Ser. No. 10-2019-7019078 dated Feb. 15, 2021 (4 pages). |
Notice of Preliminary Rejection on KR Appl. Ser. No. 10-2021-7014545 dated Aug. 19, 2021 (17 pages). |
Notice of Preliminary Rejection on KR Appl. Ser. No. 10-2021-7014560 dated Aug. 19, 2021 (5 pages). |
Notice of Preliminary Rejection on KR Appl. Ser. No. 10-2021-7019744 dated Aug. 19, 2021 (15 pages). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2019-527156 dated Dec. 1, 2020 (12 pages). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2020-559530 dated Apr. 20, 2021 (11 pages). |
Office Action on JP Appl. Ser. No. 2019-527224 dated Dec. 1, 2020 (6 pages). |
Office Action on JP Appl. Ser. No. 2019-538482 dated Feb. 2, 2021 (6 pages). |
Office Action on KR Appl. Ser. No. 10-2019-7019062 dated Oct. 5, 2020 (6 pages). |
Office Action on KR Appl. Ser. No. 10-2019-7019076 dated Jun. 9, 2020 (18 pages). |
Office Action on KR Appl. Ser. No. 10-2019-7019078 dated Jun. 9, 2020 (14 pages). |
Office Action on KR Appl. Ser. No. 10-2019-7022921 dated Aug. 26, 2020 (6 pages). |
Second Office Action for KR Appl. Ser. No. 10-2021-7020076 dated Jun. 30, 2021 (5 pages). |
Second Office Action on CN Appl. Ser. No. 201780081968.3 dated May 12, 2021 (7 pages). |
Supplementary European Search Report on EP Appl. Ser. No. 18748729.3 dated Nov. 20, 2020 (8 pages). |
Supplementary European Search Report on EP Appl. Ser. No. 18831205.2 dated Feb. 12, 2021 (7 pages). |
Decision of Rejection on JP Appl. Ser. No. 2019-527155 dated Jun. 8, 2021 (8 pages). |
El Gayar, N. (Ed.) et al., “Multiple Classifier Systems”, 9th International Workshop, MCS 2010, Cairo, Egypt, Apr. 7-9, 2010, 337 pages. |
Office Action on JP App. Ser. No. 2019-527155 dated Dec. 1, 2020 (10 pages). |
Miyasaka T., et al., “Moving Object Tracking and Identification in Traveling Environment Using High Resolution Laser Radar”, Graphic Information Industrial, vol. 43, No. 2, pp. 61-69, Feb. 1, 2011. |
Anonymous: “Occlusion | Shadows and Occlusion | Peachpit”, Jul. 3, 2006 (Jul. 3, 2006), XP055697780, Retrieved from the Internet: URL:https://www.peachpit.com/articles/article.aspx?p=486505&seqNum=7 [retrieved on May 25, 2020]. |
Chao-Hung Lin et al: “Eigen-feature analysis of weighted covariance matrices for LiDAR point cloud classification”, ISPRS Journal of Photogrammetry and Remote Sensing., vol. 94, Aug. 1, 2014 (Aug. 1, 2014), pp. 70-79, XP055452341, Amsterdam, NL ISSN: 0924-2716, DOI: 10.1016/j.isprsjprs.2014.04.016. |
Farhad Samadzadegan et al: “A Multiple Classifier System for Classification of LIDAR Remote Sensing Data Using Multi-class SVM”, Apr. 7, 2010 (Apr. 7, 2010), Multiple Classifier Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 254-263, XP019139303, ISBN: 978-3-642-12126-5. |
Hong Cheng: “Autonomous Intelligent Vehicles” In: “Autonomous Intelligent Vehicles”, Jan. 1, 2011 (Jan. 1, 2011), Springer London, London, XP055699929, ISBN: 978-1-4471-2280-7. |
Johnson A E et al: “Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, USA, vol. 21, No. 5, May 1, 1999 (May 1, 1999), pp. 433-448, XP000833582, ISSN: 0162-8828, DOI: 10.1109/34.765655. |
Weinmann Martin et al: “Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers”, ISPRS Journal of Photogrammetry and Remote Sensing, Amsterdam [U.A.]: Elsevier, Amsterdam, NL, vol. 105, Feb. 27, 2015 (Feb. 27, 2015), pp. 286-304, XP029575087, ISSN: 0924-2716, DOI: 10.1016/J.ISPRSJPRS.2015.01.016. |
“Fundamentals of Direct Digital Synthesis,” Analog Devices, MT-085 Tutorial Rev. D, Oct. 2008, pp. 1-9. |
Adany et al., “Chirped Lidar Using Simplified Homodyne Detection,” Jour. Lightwave Tech., Aug. 2009; vol. 27, Issue 26, pp. 1-7. |
Aull et al., “Geiger-Mode avalanche photodiodes for three-dimensional imaging,” Lincoln Lab. J., Jan. 1, 2002, vol. 13, pp. 335-350. |
Bashkansky et al., “RF phase-coded random-modulation LIDAR,” Optics Communications, Feb. 15, 2004, vol. 231, pp. 93-98. |
Beck et al., “Synthetic-aperture imaging laser radar: laboratory demonstration and signal processing,” Appl. Opt., Dec. 10, 2005, vol. 44, pp. 7621-7629. |
Berkovic, G. and Shafir, E., “Optical methods for distance and displacement measurements”, Adv. Opt. Photon., Dec. 2012, vol. 4, Issue 4, pp. 441-471. |
Besl, P.J. and N.D. McKay, “Method for registration of 3-D shapes”, Feb. 1992, vol. 1611, No. 2, pp. 586-606. |
Campbell et al., “Super-resolution technique for CW lidar using Fourier transform reordering and Richardson-Lucy deconvolution.” Opt Lett. Dec. 15, 2014, vol. 39, No. 24, pp. 6981-6984. |
Cao et al., “Lidar Signal Depolarization by Solid Targets and its Application to Terrain Mapping and 3D Imagery,” Defence R&D, Contract Report DRDC Valcartier CR 2011-236, Mar. 2011, pp. 1-74, URL:http://publications.gc.ca/collections/collection_2016/rddc-drdc/D68-3-236-2011-eng.pdf. |
Contu, F., “The Do’s and Don’ts of High Speed Serial Design in FPGA’s”, Xilinx All Programmable, Copyright 2013, High Speed Digital Design & Validation Seminars 2013, pp. 1-61. |
Crouch et al., “Three dimensional digital holographic aperture synthesis”, Sep. 7, 2015, Optics Express, vol. 23, No. 18, pp. 23811-23816. |
Crouch, S. and Barber, Z. W., “Laboratory demonstrations of interferometric and spotlight synthetic aperture ladar techniques,” Opt. Express, Oct. 22, 2012, vol. 20, No. 22, pp. 24237-24246. |
Dapore et al., “Phase noise analysis of two wavelength coherent imaging system,” Dec. 16, 2013, Opt. Express, vol. 21, No. 25, pp. 30642-30652. |
Duncan et al., “Holographic aperture ladar”, Applied Optics, Feb. 19, 2009, vol. 48, Issue 6, pp. 1-10. |
Duncan, B.D. and Dierking, M. P., “Holographic aperture ladar: erratum,” Feb. 1, 2013, Appl. Opt. 52, No. 4, pp. 706-708. |
Fehr et al., “Compact Covariance descriptors in 3D point clouds for object recognition,” presented at the Robotics and Automation (ICRA), May 14, 2012, IEEE International Conference, pp. 1793-1798. |
Foucras et al., “Detailed Analysis of the Impact of the Code Doppler on the Acquisition Performance of New GNSS Signals,” ION ITM, International Technical Meeting of The Institute of Navigation, San Diego, California, Jan. 27, 2014, pp. 1-13. |
Google Patents Machine Translation of German Patent Pub. No. DE102007001103A1 to Bauer. |
Haralick et al., “Image Analysis Using Mathematical Morphology,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Jul. 1987, v. PAMI-9, pp. 532-550. |
International Preliminary Report on Patentability issued on PCT/US2018/041388 dated Jan. 23, 2020, 11 pages. |
International Search Report and Written Opinion for PCT/US2018/44007, dated Oct. 25, 2018, 17 pages. |
International Search Report and Written Opinion on PCT/US2017/062708, dated Mar. 16, 2018, 14 pages. |
International Search Report and Written Opinion on PCT/US2017/062714, dated Aug. 23, 2018 , 13 pages. |
International Search Report and Written Opinion on PCT/US2017/062721, dated Feb. 6, 2018, 12 pages. |
International Search Report and Written Opinion on PCT/US2018/016632, dated Apr. 24, 2018, 6 pages. |
International Search Report and Written Opinion on PCT/US2018/041388, dated Sep. 20, 2018, 13 pages. |
International Search Report and Written Opinion on PCT/US2019/28532, dated Aug. 16, 2019, 16 pages. |
Johnson, A., “Spin-Images: A Representation for 3-D Surface Matching,” doctoral dissertation, tech. report CMU-RI-TR-97-47, Robotics Institute, Carnegie Mellon University, Aug. 1997, pp. 1-288. |
Kachelmyer, “Range-Doppler Imaging with a Laser Radar,” The Lincoln Laboratory Journal, 1990, vol. 3, No. 1, pp. 87-118. |
Klasing et al., “Comparison of Surface Normal Estimation Methods for Range Sensing Applications,” in Proceedings of the 2009 IEEE International Conference on Robotics and Automation May 12, 2009, pp. 1977-1982. |
Krause et al., “Motion compensated frequency modulated continuous wave 3D coherent imaging ladar with scannerless architecture”, Appl. Opt., Dec. 20, 2012, vol. 51, No. 36, pp. 8745-8761. |
Le, Trung-Thanh, “Arbitrary Power Splitting Couplers Based on 3x3 Multimode Interference Structures for All-Optical Computing”, IACSIT International Journal of Engineering and Technology, Oct. 2011, vol. 3, No. 5, pp. 565-569. |
Lu et al., “Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features,” Sensors 2014, Dec. 15, 2014, pp. 24156-24173 URL:www.mdpi.com/1424-8220/14/12/24156/pdf. |
Marron et al., “Three-dimensional Lensless Imaging Using Laser Frequency Diversity”, Appl. Opt., vol. 31, Jan. 10, 1992, pp. 255-262. |
Monreal et al., “Detection of Three Dimensional Objects Based on Phase Encoded Range Images,” Sixth International Conference on Correlation Optics, Jun. 4, 2004, vol. 5477, pp. 269-280. |
Munkres, J., “Algorithms for the Assignment and Transportation Problems”, Journal of the Society for Industrial and Applied Mathematics, Mar. 1957, vol. 5, No. 1, pp. 32-38. |
O’Donnell, R.M., “Radar Systems Engineering Lecture 11 Waveforms and Pulse Compression,” IEEE New Hampshire Section, Jan. 1, 2010, pp. 1-58. |
OIF (Optical Internetworking Forum), “Implementation Agreement for Integrated Dual Polarization Micro-Intradyne Coherent Receivers,” R. Griggs, Ed., IA# OIF-DPC-MRX-01.0, published by Optical Internetworking Forum available at domain oiforum at category com, Mar. 31, 2015, pp. 1-32. |
Optoplex Corporation. “90 degree Optical Hybrid”. Nov. 9, 2016, 2 pages. |
Rabb et al., “Multi-transmitter aperture synthesis”, Opt. Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24937-24945. |
Roos et al., “Ultrabroadband optical chirp linearization for precision metrology applications”, Opt. Lett., vol. 34, No. 23, Dec. 1, 2009, pp. 3692-3694. |
Salehian et al., “Recursive Estimation of the Stein Center of SPD Matrices and Its Applications,” in 2013 IEEE International Conference on Computer Vision (ICCV), Dec. 1, 2013, pp. 1793-1800. |
Satyan et al., “Precise control of broadband frequency chirps using optoelectronic feedback”, Opt. Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15991-15999. |
Stafford et al., “Holographic aperture ladar with range compression,” Journal of the Optical Society of America A, May 1, 2017, vol. 34, No. 5, pp. A1-A9. |
Tippie et al., “High-resolution synthetic-aperture digital holography with digital phase and pupil correction”, Optics Express, Jun. 20, 2011, vol. 19, No. 13, pp. 12027-12038. |
Wikipedia, Digital-to-analog converter, https://en.wikipedia.org/wiki/Digital-to-analog_converter, 7 pages (as of Apr. 15, 2017). |
Wikipedia, Field-programmable gate array, https://en.wikipedia.org/wiki/Field-programmable_gate_array, 13 pages (as of Apr. 15, 2017). |
Wikipedia, In-phase and quadrature components, https://en.wikipedia.org/wiki/In-phase_and_quadrature_components (as of Jan. 26, 2018 20:41 GMT), 3 pages. |
Wikipedia, Phase-shift keying, https://en.wikipedia.org/wiki/Phase-shift_keying#Binary_phase-shift_keying.28BPSK.29 (as of Oct. 23, 2016), 9 pages. |
Ye, J., “Least Squares Linear Discriminant Analysis,” Proceedings of the 24th International Conference on Machine Learning, pp. 1087-1093 (as of Nov. 27, 2016). |
Extended European Search Report issued on EP 17898933.1 dated May 12, 2020, (7 pages). |
Foreign Search Report on EP Appl. Ser. No. 17876081.5 dated Jun. 3, 2020 (9 pages). |
Foreign Search Report on EP Appl. Ser. No. 17888807.9 dated Jun. 3, 2020 (9 pages). |
Mackinnon et al: “Adaptive laser range scanning”, American Control Conference, Piscataway, NJ, 2008, pp. 3857-3862. |
International Search Report and Written Opinion issued on PCT/US2019/068351 dated Apr. 9, 2020 pp. 1-14. |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2021-538998 dated Nov. 30, 2021 (20 pages). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2021-165072 dated Nov. 30, 2021 (9 pages). |
Supplementary European Search Report on EP Appl. Ser. No. 19791789.1 dated Dec. 9, 2021 (4 pages). |
Lu et al., “Recognizing objects in 3D point clouds with multi-scale features”, Sensors 2014, 14, 24156-24173; doi: 10.3390/s141224156 (Year: 2014). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2021-165072 dated Apr. 19, 2022 (10 pages). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2021-538998 dated Apr. 26, 2022 (11 pages). |
Examination Report on EP Appl. Ser. No. 17898933.1 dated May 25, 2022 (5 pages). |
Notice of Reasons for Refusal on JP Appl. Ser. No. 2021-118743 dated Jun. 7, 2022 (9 pages). |
Office Action on EP Appl. Ser. No. 19791789.1 dated Dec. 21, 2021 (12 pages). |
Farhad Samadzadegan et al., “A Multiple Classifier System for Classification of LIDAR Remote Sensing Data Using Multi-class SVM”, International Workshop on Multiple Classifier Systems, MCS 2010, Lecture Notes in Computer Science, 2010, vol. 5997, pp. 254-263. |
Notice of Reasons for Rejection issued in connection with JP Appl. Ser. No. JP 2021-126516 dated Jun. 21, 2022 (16 pages). |
Chinese Office Action issued in related CN Appl. Ser. No. 201780081804.0 dated Dec. 1, 2022 (20 pages). |
Chester, David B., “A Parameterized Simulation of Doppler Lidar”, All Graduate Theses and Dissertations, Dec. 2017, Issue 6794, <URL: https://digitalcommons.usu.edu/etd/6794> *pp. 13-14, 27-28, 45*. |
Notice of Reasons of Rejection issued in connection with JP Appl. Ser. No. 2022-000212 dated Feb. 7, 2023. |
Korean Office Action issued in connection with KR Appl. Ser. No. 10-2021-7023519 dated Feb. 13, 2023. |
Notice of Reasons for Rejection issued in connection with JP Appl. Ser. No. JP 2021-126516 dated Dec. 13, 2022 (with English translation, 14 pages). |
Office Action issued in connection with Japanese Appl. No. 2022-569030 dated Aug. 22, 2023. |
Number | Date | Country | |
---|---|---|---|
20190310372 A1 | Oct 2019 | US |
Number | Date | Country | |
---|---|---|---|
62428109 | Nov 2016 | US |