LiDAR homodyne transceiver using pulse-position modulation

Information

  • Patent Grant
  • Patent Number: 11,313,969
  • Date Filed: October 28, 2019
  • Date Issued: April 26, 2022
Abstract
A LiDAR system includes an optical source for generating a continuous wave (CW) optical signal. A control processor generates a pulse-position modulation (PPM) signal, and an amplitude modulation (AM) modulator generates a pulse-position amplitude-modulated optical signal, which is transmitted through a transmit optical element into a region. A receive optical element receives reflected versions of the pulse-position amplitude-modulated optical signal reflected from at least one target object in the region. An optical detector generates a first baseband signal. A signal processor receives the first baseband signal and processes the first baseband signal to generate an indication related to a target object in the region.
Description
BACKGROUND
1. Technical Field

The present disclosure is related to LiDAR systems and, in particular, to a homodyne LiDAR system and method with pulse-code modulation (PCM) transmission, which can be used in an automotive or other motor vehicle application.


2. Discussion of Related Art

LiDAR is commonly referred to as an acronym for light detection and ranging, in the sense that LiDAR is commonly considered an optical analog to radar. In general, there are two types of LiDAR systems, namely, incoherent LiDAR and coherent LiDAR. Incoherent LiDAR, also commonly referred to as direct detection or direct energy detection LiDAR, primarily uses an amplitude measurement in light returns, while coherent LiDAR is better suited for phase-sensitive measurements or other more sophisticated transmitter waveform modulation techniques. Coherent systems generally use optical heterodyne or homodyne detection, which, being more sensitive than direct detection, allows them to operate at a much lower power and provide greater measurement accuracy and resolution.


SUMMARY

According to a first aspect, a LiDAR system is provided. The LiDAR system includes an optical source for generating a continuous wave (CW) optical signal; a control processor for generating a pulse-position modulation (PPM) signal; an amplitude modulation (AM) modulator for receiving the CW optical signal and the PPM signal and generating therefrom a pulse-position amplitude-modulated optical signal; a transmitter for transmitting the pulse-position amplitude-modulated optical signal through a transmit optical element into a region; a receive optical element for receiving reflected versions of the pulse-position amplitude-modulated optical signal reflected from at least one target object in the region; a first optical detector for receiving the CW optical signal from the optical source and a received version of the reflected versions of the pulse-position amplitude-modulated optical signal, and generating therefrom a first baseband signal; and a signal processor for receiving the first baseband signal and processing the first baseband signal to generate an indication related to the object.


In some exemplary embodiments, the LiDAR system is a homodyne LiDAR system. In other exemplary embodiments, the LiDAR system is a heterodyne LiDAR system.


In some exemplary embodiments, the first optical detector comprises a first mixer for generating the first baseband signal.


In some exemplary embodiments, the system further comprises a second optical detector for receiving the CW optical signal from the optical source and a received version of the reflected versions of the pulse-position amplitude-modulated optical signal, and generating therefrom a second baseband signal. In some exemplary embodiments, the second optical detector comprises a second mixer for generating the second baseband signal. In some exemplary embodiments, the first and second baseband signals are in quadrature. In some exemplary embodiments, the first optical detector generates an in-phase-channel voltage signal, and the second optical detector generates a quadrature-channel voltage signal. In some exemplary embodiments, at least one of the first and second optical detectors comprises a phase shifter for introducing a phase difference between the first and second baseband signals.


In some exemplary embodiments, the LiDAR system further comprises a first low-pass filter for filtering the in-phase-channel voltage signal to generate a filtered in-phase-channel voltage signal and a second low-pass filter for filtering the quadrature-channel voltage signal to generate a filtered quadrature-channel voltage signal. In some exemplary embodiments, the LiDAR system further comprises a first analog-to-digital converter (ADC) for converting the in-phase-channel voltage signal to a digital in-phase-channel voltage signal and a second ADC for converting the quadrature-channel voltage signal to a digital quadrature-channel voltage signal.


In some exemplary embodiments, the signal processor receives the first baseband signal and the second baseband signal and processes the first and second baseband signals to generate the indication related to the object. In some exemplary embodiments, the processor, in processing the first and second baseband signals to generate the indication related to the object, performs Doppler processing. In some exemplary embodiments, the processor, in processing the first and second baseband signals to generate the indication related to the object, performs correlation processing.


According to another aspect, a LiDAR method is provided. The LiDAR method includes: generating a continuous wave (CW) optical signal; generating a pulse-position modulation (PPM) signal; generating a pulse-position amplitude-modulated optical signal from the CW optical signal and the PPM signal; transmitting the pulse-position amplitude-modulated optical signal through a transmit optical element into a region; receiving reflected versions of the pulse-position amplitude-modulated optical signal reflected from at least one object in the region; mixing the CW optical signal from the optical source and the reflected versions of the pulse-position amplitude-modulated optical signal to generate therefrom a first baseband signal; and processing the first baseband signal to generate an indication related to the object.


In some exemplary embodiments, the LiDAR method is a homodyne LiDAR method. In other exemplary embodiments, the LiDAR method is a heterodyne LiDAR method.


In some exemplary embodiments, the LiDAR method further comprises mixing the CW optical signal from the optical source and the reflected versions of the pulse-position amplitude-modulated optical signal to generate therefrom a second baseband signal and processing the first and second baseband signals to generate the indication related to the object. In some exemplary embodiments, the first and second baseband signals are in quadrature. In some exemplary embodiments, the LiDAR method further comprises performing optical detection to generate an in-phase-channel voltage signal from the first baseband signal and a quadrature-channel voltage signal from the second baseband signal. In some exemplary embodiments, the LiDAR method further comprises performing phase shifting to introduce a phase difference between the first and second baseband signals.


In some exemplary embodiments, the LiDAR method further comprises low-pass filtering the in-phase-channel voltage signal to generate a filtered in-phase-channel voltage signal and low-pass filtering the quadrature-channel voltage signal to generate a filtered quadrature-channel voltage signal. In some exemplary embodiments, the LiDAR method further comprises converting the in-phase-channel voltage signal to a digital in-phase-channel voltage signal and converting the quadrature-channel voltage signal to a digital quadrature-channel voltage signal.


In some exemplary embodiments, processing the first and second baseband signals to generate the indication related to the object comprises performing Doppler processing on the digital in-phase-channel voltage signal and the digital quadrature-channel voltage signal.


In some exemplary embodiments, processing the first and second baseband signals to generate the indication related to the object comprises performing correlation processing on the digital in-phase-channel voltage signal and the digital quadrature-channel voltage signal.





BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure is further described in the detailed description which follows, in reference to the noted plurality of drawings by way of non-limiting examples of embodiments of the present disclosure, in which like reference numerals represent similar parts throughout the several views of the drawings.



FIG. 1A includes a schematic time waveform diagram illustrating a linear frequency modulation (LFM) pulse compression transmit waveform.



FIG. 1B includes a schematic time waveform diagram illustrating a binary phase shift keying (BPSK) pulse compression transmit waveform.



FIG. 2A includes a schematic waveform diagram illustrating pulse position modulation (PPM) encoding, which can be applied to a transmit waveform, according to some exemplary embodiments.



FIG. 2B includes a schematic waveform diagram illustrating non-coherent pulse compression (NCPC) according to some exemplary embodiments, using a length-13 Barker Code encoding on the transmit waveform.



FIG. 3 includes two schematic waveform diagrams illustrating a transmit waveform, according to some exemplary embodiments.



FIG. 4 includes a schematic functional block diagram of a pulse-code modulation (PCM) homodyne LiDAR transceiver, according to some exemplary embodiments.



FIG. 5A includes schematic waveform diagrams of recovered I-channel and Q-channel waveforms in the absence of motion, i.e., the stationary or static case, according to some exemplary embodiments.



FIG. 5B includes schematic waveform diagrams of recovered I-channel and Q-channel waveforms in the presence of motion, i.e., the presence of Doppler, according to some exemplary embodiments.



FIG. 6A includes schematic waveform diagrams illustrating data sampling of recovered I-channel and Q-channel waveforms in the absence of motion, i.e., the stationary or static case, according to some exemplary embodiments.



FIG. 6B includes schematic waveform diagrams illustrating data sampling of recovered I-channel and Q-channel waveforms in the presence of motion, i.e., the presence of Doppler, according to some exemplary embodiments.



FIG. 7 includes a schematic waveform diagram illustrating data acquisition for sliding correlator implementation, according to some exemplary embodiments.



FIG. 8 includes a schematic diagram illustrating the mathematical structure of the sliding correlator, according to some exemplary embodiments.



FIG. 9A includes a schematic diagram illustrating a return signal in either the I-channel or Q-channel, within range bin 17, according to some exemplary embodiments.



FIG. 9B includes a schematic waveform diagram illustrating the sliding correlator code signal, where the state of interrogation is in range bin 10, according to some exemplary embodiments.



FIG. 10 includes a schematic illustration of the sliding correlator output for range bin 17, as referenced in connection with FIGS. 9A and 9B, according to some exemplary embodiments.



FIGS. 11A and 11B provide an illustration of the reduced sidelobe levels resulting from increasing the code length, according to some exemplary embodiments.



FIGS. 12A and 12B include schematic diagrams illustrating I-channel and Q-channel signal samples, respectively, of a moving target object at high signal-to-noise ratio (SNR), according to some exemplary embodiments.



FIGS. 13A and 13B include schematic diagrams illustrating complex Fast Fourier Transform (FFT) of I-channel and Q-channel data plotted with linear amplitude and logarithmic amplitude, respectively, according to some exemplary embodiments.



FIGS. 14A and 14B include schematic diagrams illustrating complex FFT of smoothed I-channel and Q-channel data plotted with linear amplitude and logarithmic amplitude, respectively, according to some exemplary embodiments.



FIG. 15 includes a schematic perspective view of an automobile, equipped with one or more LiDAR systems equipped with the LiDAR transceiver described herein in detail, according to some exemplary embodiments.



FIG. 16 includes a schematic top view of automobile equipped with two LiDAR systems, according to some exemplary embodiments.





DETAILED DESCRIPTION

According to the present disclosure, a PCM LiDAR transceiver can utilize a variety of codes and code lengths to address the operational environment. The technique of non-coherent pulse compression (NCPC) is expanded to application in a coherent, i.e., homodyne, LiDAR architecture. According to the present disclosure, the PCM homodyne LiDAR transceiver is described in conjunction with data acquisition for range and Doppler measurement techniques. Signal processing gain is achieved via code length and correlation receiver techniques. Also, PCM LiDAR using NCPC requires only amplitude modulation (AM) pulse modulation for implementation and enables the signal processing gain benefits realized by the embodiments of the present disclosure. Furthermore, a significant advantage of the PCM homodyne LiDAR system architecture and the use of NCPC according to the present disclosure is the reduction of the laser transmitter spectral quality required for coherent pulse compression using direct FM or PM modulation.


Unlike a direct detection LiDAR transceiver, a homodyne LiDAR transceiver according to the present disclosure utilizes frequency translation or mixing as the first stage of the receiver to transfer the return signal from a bandpass signal at carrier frequency f0 to a baseband signal for signal processing and measurement data extraction. It should be noted that NCPC is also applicable to direct detection LiDAR, but without the ability to determine Doppler frequency. To a moderate extent, the homodyne LiDAR transceiver of the disclosure increases the complexity of the system architecture; however, the receiver detection sensitivity is significantly improved, which reduces the required transmit power, extends the operational range and increases range measurement accuracy.


According to exemplary embodiments, some transmit modulation waveforms introduce an additional level of complexity because they require extension of the laser coherence time, also referred to as coherence length, which is the time over which a propagating wave (especially a laser or maser beam) may be considered coherent; that is, it is the time interval within which its phase is, on average, predictable. Specifically, the linear frequency-modulated continuous-wave (FMCW) waveform requires a highly linear change in frequency versus time as well as precise frequency deviation to ensure range measurement accuracy and resolution. In addition, laser frequency-modulation noise and modulation bandwidth limitations further reduce the available signal-to-noise ratio and thereby degrade range measurement performance. An example of the level of complexity required to achieve linear frequency modulation and reduce laser phase noise to acceptable levels is the utilization of an electro-optical phase-locked loop (PLL). However, due to cost, complexity and operational environmental conditions, implementation of an electro-optical PLL is not compatible with automotive equipment requirements. It is noted that the PCM homodyne LiDAR of the present disclosure can be limited by the laser coherence time or length.


Non-coherent pulse compression (NCPC) waveforms for direct detection LiDAR systems offer the opportunity to achieve range measurement capability comparable to linear FMCW, without requiring the spectral quality and FM modulation parameters of linear FMCW coherent LiDAR transceivers. According to the present disclosure, NCPC is implemented within coherent LiDAR transceivers, in particular, homodyne LiDAR transceivers. It is noted that the NCPC technique is also applicable to heterodyne LiDAR transceivers, although the present disclosure emphasizes application of the technique to homodyne LiDAR transceivers. The PCM homodyne LiDAR transceiver of the disclosure is described herein in conjunction with data acquisition for range and Doppler measurement. Signal processing gain is achieved via code length and correlation receiver techniques.


Pulse compression is a term which describes frequency modulation (FM) or phase modulation (PM) within the transmit pulse of radar systems for the purpose of increasing the transmit signal spectrum, thereby improving the range measurement resolution. In addition, pulse compression allows a wider pulse to be utilized for the purpose of increasing the average transmit power while maintaining range measurement resolution. The two most common techniques for pulse compression implementation are linear frequency modulation (LFM) and coded binary phase shift keying (BPSK).


Pulse compression is a signal processing technique commonly used in radar, sonar and echography to increase the range resolution as well as the signal-to-noise ratio. Pulse compression is achieved by transmitting a pulse within which a parameter of the transmitted signal, i.e., amplitude, frequency or phase, is subject to intra-pulse modulation; upon receive correlation, a narrower pulse is produced. A quantitative measure of pulse compression implementation is the pulse compression ratio (PCR), which is defined as the increase in range resolution over the un-modulated pulse and is often expressed as the time-bandwidth product. The PCR may be mathematically defined by the equation:

PCR=τw·Btx

where, τw is the modulated pulse width,


and, Btx is the spectral width of the modulated pulse.


For the linear FM waveform, the pulse compression ratio may be written:

PCR_LFM = ΔT·ΔF

where, ΔT is the pulse width,


and, ΔF is the frequency deviation.
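
As a worked illustration of these definitions, the pulse compression ratio follows directly from the pulse width and modulation bandwidth. The sketch below uses hypothetical values that are not parameters of the disclosed transceiver:

```python
# Hypothetical illustration of the pulse compression ratio (PCR) definitions above.
tau_w = 1.0e-6         # modulated pulse width (s), example value only
B_tx = 50.0e6          # spectral width of the modulated pulse (Hz), example value only
pcr = tau_w * B_tx     # PCR = tau_w * B_tx (time-bandwidth product)
print(f"PCR = {pcr:.0f}")                    # 50

# Linear FM case: PCR_LFM = deltaT * deltaF
delta_T = 1.0e-6       # pulse width (s)
delta_F = 50.0e6       # frequency deviation (Hz)
print(f"PCR_LFM = {delta_T * delta_F:.0f}")  # 50
```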



FIGS. 1A and 1B are schematic time waveform diagrams illustrating LFM (FIG. 1A) and BPSK (FIG. 1B) pulse compression transmit waveforms. Referring to FIG. 1A, in LFM, the frequency of the transmit waveform varies along a linear ramp from a first frequency f1 to a second frequency f2 over the pulse period τw. In the BPSK diagram of FIG. 1B, a phase shift of either 0° or 180° is imposed on the transmit waveform, depending on the desired encoding, for example a length-13 Barker code.


Although the term “coherent” is not typically specifically employed in the description of pulse compression, it is clear that a coherent signal must be utilized to perform the de-chirp function in the case of the LFM waveform, and phase demodulation in the case of the BPSK waveform. The term “non-coherent” as used herein means that a coherent de-chirp or demodulation signal is not required to perform the pulse compression function. Pulse compression is achieved via correlation/convolution within the receiver using a stored replica of the modulation code. Pulse compression, as performed within the receiver, may be implemented using analog or digital methods. In either case, a “sliding” correlator is used to perform a range bin search of the return signal. The range bin search includes multiplication of the received signal by discrete or continuous time increments of the modulation code, followed by integration, i.e., summation, of the multiplied signal components.



FIG. 2A is a schematic waveform diagram illustrating pulse position modulation (PPM) encoding, which can be applied to a transmit waveform. FIG. 2B is a schematic waveform diagram illustrating NCPC according to the present disclosure, using a length-13 Barker Code encoding on the transmit waveform. To describe and illustrate the NCPC technique of the disclosure and the details of coherent and non-coherent pulse compression, reference is made to FIGS. 2A and 2B, wherein the length-13 Barker code is implemented in a pulse position modulation (PPM) format.


According to the present disclosure, the NCPC technique is implemented in a different format than “coherent” pulse compression and utilizes amplitude modulation (AM) pulse modulation as opposed to frequency modulation (FM) or phase modulation (PM). Notwithstanding the implementation method, the NCPC technique achieves performance advantages similar to those of the classical method and eliminates the complexities related to modulation and spectral quality within the transmitter and, significantly, coherent architectures within the receiver. FIG. 3 includes two schematic waveform diagrams illustrating the transmit waveform according to the exemplary embodiments. The top curve of FIG. 3 illustrates the transmit (Tx) modulation code used to amplitude modulate the transmit signal to encode it with the desired coding, e.g., the length-13 Barker code. The lower curve illustrates the amplitude-modulated transmit (Tx) signal, which has been amplitude modulated according to the Tx modulation code of the upper curve. It is noted that the PPM increases the code length by a factor of two. The normal Barker Code of length 13 employs two-state phase modulation (BPSK) of 0° or 180°. According to the present disclosure, PPM is carried out using the length-13 Barker Code as an example, with PPM for each of the Barker Code states being as illustrated in FIGS. 1B, 2A and 2B. The left position is defined as a binary “1”, and the right position is defined as a binary “0”.
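
A minimal sketch of this PPM encoding follows, assuming the standard length-13 Barker sequence and mapping each “+1” state to a pulse in the left chip position and each “−1” state to a pulse in the right chip position; the exact mapping convention is an assumption for illustration:

```python
import numpy as np

# Standard length-13 Barker sequence (bipolar states).
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

def ppm_encode(code):
    """Map each bipolar code state to a two-chip pulse-position pair.

    Assumed convention: '+1' -> pulse in the left chip position [1, 0],
    '-1' -> pulse in the right chip position [0, 1]. The result is a
    unipolar, OOK-style chip sequence twice the code length.
    """
    chips = []
    for state in code:
        chips.extend([1, 0] if state > 0 else [0, 1])
    return np.array(chips)

tx_chips = ppm_encode(barker13)
print(len(tx_chips))   # 26 chips: PPM doubles the code length
```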



FIG. 4 includes a schematic functional block diagram of a PCM homodyne LiDAR transceiver 100, according to some exemplary embodiments. Referring to FIG. 4, LiDAR transceiver 100 includes a receive optical element or receive optics 102 at which optical energy, including optical returns from one or more target objects 104, is received from region 105 being observed by the LiDAR system using transceiver 100. Receive optics 102 can include, for example, one or more lenses and/or other optical elements used in such transceivers 100. The optical energy is received from receive optics 102 via optical conductor or line, e.g., optical cable, 128 at in-phase channel (I-Ch) detector 106 and quadrature channel (Q-Ch) detector 108. Under control of digital signal processor (DSP) and control system 110, via control line 120, continuous-wave (CW) laser 112 generates an optical carrier signal at nominal frequency f0 and applies the optical carrier signal along optical conductor or line, e.g., optical cable, 122 to optical signal splitter 114. Optical signal splitter 114 applies one of its split signal outputs to amplitude modulator 116 along optical conductor or line, e.g., optical cable, 124, and applies another split signal output to I-Ch detector 106 and Q-Ch detector 108 along optical conductor or line, e.g., optical cable, 126.


In the in-phase channel, I-Ch detector 106 performs optical detection with an optical detector and homodyne conversion by mixing, with a mixer, the received optical signal on optical line 128 with the CW optical signal from optical splitter 114 on line 126 at frequency f0 to generate an in-phase voltage signal, VI-Ch, and outputs signal VI-Ch to in-phase channel low-noise amplifier (LNA) 130. LNA 130 amplifies the signal and applies the amplified signal to low-pass filter (LPF) 132, which filters the amplified signal using a low-pass cut-off frequency fLPF = 1/τb, where τb is defined as the bit chip time, i.e., the pulse width. The resulting filtered signal is digitized by analog-to-digital converter (ADC) 134 and applied to DSP and control system 110, which processes the received signal according to the present disclosure. LPF 132 reduces the noise bandwidth but allows the signal energy to pass, thereby improving signal-to-noise ratio (SNR).


Similarly, in the quadrature channel, Q-Ch detector 108 performs optical detection with an optical detector and homodyne conversion by mixing, with a mixer, the received optical signal on optical line 128 with the CW optical signal from optical splitter 114 on line 126 at frequency f0 to generate a quadrature voltage signal, VQ-Ch, and outputs signal VQ-Ch to quadrature channel low-noise amplifier (LNA) 136. LNA 136 amplifies the signal and applies the amplified signal to low-pass filter (LPF) 138, which filters the amplified signal using a low-pass cut-off frequency fLPF = 1/τb. The resulting filtered signal is digitized by analog-to-digital converter (ADC) 140 and applied to DSP and control system 110, which processes the received signal according to the present disclosure. One or both of optical detectors 106, 108 includes an optical phase shifter which phase shifts one or both of its input optical signals to provide the phase difference necessary to develop the in-phase and quadrature channel signals VI-Ch and VQ-Ch. DSP and control system 110 also provides an input/output interface 162 for interfacing with external elements, such as control systems, processing systems, user input/output systems, and other such systems.


The optical signal used to illuminate target objects, such as target object 104, in region 105 being observed by the LiDAR system using LiDAR transceiver 100, is transmitted into region 105 via transmit optical element or transmit optics 160, which can include one or more lenses and/or other optical elements used in such transceivers 100. The optical signal being transmitted is amplitude modulated by amplitude modulator 116, which applies a pulse-position amplitude modulation to the optical signal received on optical line 124 from optical signal splitter 114, under the control of one or more control signals generated by DSP and control module 110. According to the present disclosure, in some exemplary embodiments, pulse-position modulation is used to encode the transmitted signal with a code, which in some exemplary embodiments can be a 13-bit pseudo-Barker code, as illustrated in FIG. 4. As referred to herein, according to the present disclosure, a “pseudo” Barker Code is a derivative of the normal Barker Code, in which a single bit of the normal Barker Code is replaced with a binary pulse-position-modulated signal. FIGS. 1B and 2B illustrate the signal relationships.


According to exemplary embodiments, the quadrature detection precedes analog-to-digital conversion. The quadrature detector recovers the pulse modulation envelope associated with the low-frequency pulse modulation. The data samples are subsequently processed via spectral analysis or other means for each range bin data set. The spectral analysis approach reduces the detection bandwidth and effectively integrates the energy of the range bin sample set.



FIG. 4 illustrates the system block diagram of a pulse code modulation (PCM) homodyne LiDAR transceiver with NCPC capability, according to the present disclosure. The PCM transmit waveform is implemented with AM pulse modulation of the CW laser signal. The PCM waveform is synthesized within the DSP and control system 110, i.e., the digital signal processor (DSP), and applied to the AM modulator, i.e., amplitude modulator 116. For illustrative purposes, a length-13 pseudo-Barker code is shown. A longer code length provides increased processing gain and lower sidelobes. In this illustrative exemplary embodiment, each burst of the transmit signal comprises 26 discrete time segments with binary transmit power levels of “1”, which represents transmit power on, or “0”, which represents no transmit power. The modulation format is similar to that associated with On-Off Keying, or OOK. It is noted that transmit Tx pulses are unipolar, and correlation is bipolar. Tx modulation code generation may be accomplished using standard techniques, e.g., a shift register, an arbitrary waveform generator (AWG), or direct digital synthesis (DDS).


Continuing to refer to FIG. 4, the PCM transmit signal is focused on a spatial region of interest 105 using transmit optics 160; and a similar spatial region of interest 105 is achieved using receive optics 102. Analogous to radar parlance, transmit optics 160 and receive optics 102 perform the respective radar antenna functions. The transmit PCM signal is incident on a target object 104 at range R; the incident signal is scattered in accordance with the physical attributes and unique geometry of target object 104 at the operating wavelength; part of the scattered signal is reflected toward receive optics 102, where the signal is incident on diodes of I-Ch and Q-Ch detectors 106 and 108, respectively.


Continuing to refer to FIG. 4, the received signal is homodyned, or mixed, with the CW laser signal from optical signal splitter 114, which acts as a local oscillator. Upon mixing, phase coherence between the PCM transmit signal and the local oscillator engenders amplitude and phase demodulation of the PCM code at the outputs of I-Ch detector 106 and Q-Ch detector 108 in accordance with the detector output equations for the I-channel and Q-channel voltage pulses:







VI-Ch = α·cos[2π(2R/λ0)]

and

VQ-Ch = α·sin[2π(2R/λ0)]

where, R is the range to the object, λ0 is the operating wavelength


and α is an attenuation factor due to transmission and reflection losses.


It is recognized that:

R=R0+v·t

where R0 is the static range and v is the normal component of closing velocity. Upon substitution and arrangement of the constituent terms, one may write:







VI-Ch = α·cos(2π·fd·t + ϕ0)

and

VQ-Ch = α·sin(2π·fd·t + ϕ0)

where, fd = 2·v/λ0 (Doppler)

and, ϕ0 = 2π(2·R0/λ0) (static two-way transmission phase)


The I-Ch and Q-Ch voltages represent a pulsed signal with amplitude proportional to object range and frequency in accordance with the normal component of the object's Doppler velocity. Subsequent analog signal processing encompasses amplification (130, 136) and low-pass filtering (132, 138) in accordance with the pulse width (τb) and the resolution and sampling rate of ADCs 134, 140.
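
As a numeric sketch of the relationships above, using the nominal 1 μm operating wavelength of the later examples (the range and velocity values here are assumptions for illustration only):

```python
import numpy as np

lam0 = 1.0e-6      # operating wavelength (m), as in the later examples
R0 = 127.5         # static range (m), assumed for illustration
v = 1.0            # normal component of closing velocity (m/s), assumed

f_d = 2.0 * v / lam0                        # Doppler frequency: 2 MHz per 1 m/s at 1 um
phi0 = 2.0 * np.pi * (2.0 * R0 / lam0)      # static two-way transmission phase (rad)

alpha = 1.0                                 # attenuation factor (normalized here)
t = np.linspace(0.0, 1.0e-6, 101)           # observation time (s)
V_I = alpha * np.cos(2.0 * np.pi * f_d * t + phi0)   # in-phase channel voltage
V_Q = alpha * np.sin(2.0 * np.pi * f_d * t + phi0)   # quadrature channel voltage

print(f"f_d = {f_d / 1e6:.2f} MHz")          # 2.00 MHz
```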



FIG. 5A includes schematic waveform diagrams of recovered I-channel and Q-channel waveforms in the absence of motion, i.e., the stationary or static case, according to some exemplary embodiments. FIG. 5B includes schematic waveform diagrams of recovered I-channel and Q-channel waveforms in the presence of motion, i.e., the presence of Doppler, according to some exemplary embodiments. In the static environment, e.g., no relative motion between LiDAR sensor and object(s), illustrated in FIG. 5A, there is no Doppler component and the I-Ch and Q-Ch outputs are constant amplitude pulses. In contrast, as illustrated in FIG. 5B, when relative motion is present between the LiDAR system and objects, the I-Ch and Q-Ch outputs incur AM modulation in accordance with the Doppler frequency envelope, illustrated in dashed lines in FIG. 5B. That is, Doppler frequency is contained within the I/Q samples of the recovered PCM envelope. It is noted from FIG. 5B that the PPM effectively “staggers” the Doppler samples which spreads the spectrum and degrades resolution. Under Doppler processing according to the present disclosure, the homodyne processing or “mixing” applies amplitude modulation to the I-Ch and Q-Ch signals for moving target objects. A complex FFT may be executed on each sequential range bin data set. This requires FFT execution at each range bin. An alternate approach uses range detection and a threshold test or threshold detection. Threshold detection is a technique used in radar signal detection to determine if a signal is present within the combined signal and noise at the receiver output, usually after a single pulse or multiple pulses have been acquired from a specific spatial location such as a range bin at a specific bearing angle. Threshold detection is typically performed prior to FFT execution, thereby executing FFT only on range bins with known detected signals, which reduces the processing load by reducing the number of FFT algorithm executions.
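
A minimal sketch of this detect-then-Doppler flow is shown below; it assumes the range-bin magnitudes have already been formed by the correlator, and the threshold and data are placeholders rather than the disclosed implementation:

```python
import numpy as np

def bins_above_threshold(range_bin_mag, threshold):
    """Return indices of range bins whose magnitude exceeds the threshold.

    Only these bins are passed to the complex FFT, reducing the number of
    FFT executions per acquisition.
    """
    return np.flatnonzero(range_bin_mag > threshold)

def doppler_spectrum(i_samples, q_samples):
    """Complex FFT of the I/Q sample set for one detected range bin."""
    return np.fft.fft(np.asarray(i_samples) + 1j * np.asarray(q_samples))

# Illustrative use: random magnitudes standing in for correlator outputs.
rng = np.random.default_rng(0)
mags = rng.random(52)
for rb in bins_above_threshold(mags, threshold=0.95):
    # spectrum = doppler_spectrum(i_data[rb], q_data[rb])  # per-bin I/Q data assumed
    print(f"range bin {rb}: run Doppler FFT")
```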



FIG. 6A includes schematic waveform diagrams illustrating data sampling of recovered I-channel and Q-channel waveforms in the absence of motion, i.e., the stationary or static case, according to some exemplary embodiments. FIG. 6B includes schematic waveform diagrams illustrating data sampling of recovered I-channel and Q-channel waveforms in the presence of motion, i.e., the presence of Doppler, according to some exemplary embodiments. Data acquisition encompasses ADC 134, 140 sampling of the I-Ch and Q-Ch signals, respectively, at a rate consistent with the pulse width, i.e., chip width (τb), of the modulation code. According to the exemplary embodiments, the sampling frequency fs of ADCs 134, 140 is given by fs = 1/τb, as illustrated in FIGS. 6A and 6B. It will be noted that 26 samples are used to acquire the data set for a single range bin. A single range bin width, also referred to as the range measurement resolution, is defined by the equation:







δR = c·τb/2

where c is the speed of light, and τb is the pulse width.
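
Using the 50 ns chip width from the operational example of Table 1 below, this expression reproduces the 7.5 m range bin width (a small numeric check, not new disclosure):

```python
c = 299_792_458.0    # speed of light (m/s)
tau_b = 50.0e-9      # transmit chip width (s), Table 1
delta_R = c * tau_b / 2.0
print(f"range resolution = {delta_R:.2f} m")   # ~7.5 m, as in Table 1
```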



FIG. 7 includes a schematic waveform diagram illustrating data acquisition for sliding correlator implementation, according to some exemplary embodiments. Continuing to describe in detail the data acquisition process and subsequent signal processing according to the present disclosure, reference is made to FIG. 7, in which a transmit Tx code is indicated in the top trace and an illustrative demodulated I-Ch or Q-Ch signal return Rx from range bin 17 is illustrated in the lower trace. The ADC sample points are indicated by the arrows below the return signal. The 52-point data set, acquired from the ADC sampling following each transmission of the coded Tx waveform, is used to interrogate each previously defined range bin via correlation with a stored replica of the transmit code. The range bins are interrogated in increments of 26-point data sets, consistent with code length correlation. The correlator is also referred to as a “sliding” correlator because, following acquisition of the 52-point data set, the magnitude sum of the respective I-Ch and Q-Ch values is multiplied by the stored replica values at each range bin and summed. The result is then compared to a predetermined threshold to determine if an object is present in the interrogated range bin. The sliding correlator may be implemented numerically within DSP and control system 110, i.e., the digital signal processor, and is similar in mathematical structure to a finite impulse response (FIR) filter. FIG. 8 includes a schematic diagram illustrating the mathematical structure of the sliding correlator, according to some exemplary embodiments. Referring to FIG. 8, the fixed amplitude of the return signal illustrated in FIG. 7 indicates that the object is stationary in range bin 17.


With reference to FIGS. 7 and 8, data is acquired at each range bin and correlated with the receive code on a range-bin-by-range-bin basis. The receive code may dwell at a range bin for increased processing gain. Data acquired from each range bin is incrementally shifted and multiplied by the stored code coefficients (cn). The process effectively integrates the received energy. Magnitudes of the I-Ch and Q-Ch signals are used due to Doppler modulation.
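
A minimal sketch of the sliding correlator described above is shown below. The bipolar coefficient mapping (chip “1” → +1, chip “0” → −1) and the synthetic, noise-free return are assumptions for illustration, not the disclosed implementation:

```python
import numpy as np

def sliding_correlator(rx, code_coeff):
    """Correlate an acquired data set against the stored code replica.

    rx         : |I| + |Q| magnitude samples (e.g., a 52-point data set)
    code_coeff : stored replica coefficients c_n (bipolar, e.g., 26 values)
    Returns one correlation value per interrogated range bin.
    """
    n_bins = len(rx) - len(code_coeff) + 1
    return np.array([np.dot(rx[k:k + len(code_coeff)], code_coeff)
                     for k in range(n_bins)])

# Length-13 Barker code, PPM-encoded to 26 unipolar chips (as described above).
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]
tx_chips = np.array([c for s in barker13 for c in ([1, 0] if s > 0 else [0, 1])], float)
code_coeff = 2.0 * tx_chips - 1.0          # assumed bipolar replica mapping

rx = np.zeros(52)
rx[17:17 + 26] = tx_chips                  # stationary, noise-free return in range bin 17
out = sliding_correlator(rx, code_coeff)
print(int(np.argmax(out)), out.max())      # peak at range bin 17, value 13
# Processing gain of the 13-pulse code: 10*log10(13) ~ 11.1 dB, as stated below.
```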


An exemplary illustration of the structural and operational parameters of one approach to range processing for the PCM homodyne LiDAR transceiver, according to the present disclosure, is now described. Table 1 summarizes the conditions and parameters for the range processing operational exemplary illustration. In the exemplary illustration, a length-13 pseudo-Barker transmit waveform is utilized, as described in detail above.









TABLE 1
Operational Example Parametric Values

PARAMETER                       SYMBOL   VALUE        UNIT     NOTE/COMMENT
Maximum range                   Rmax     390          meter
Transmit chip width             τb       50.0·10−9    second
Range resolution                δR       7.5          meter    δR = c·τb/2
Data set length                 Ndata    52                    Increase for longer range detection
Receive range bin data          rn                             See FIG. IV-4-A
Correlation code coefficients   cn                             See FIG. IV-4-B

FIG. 9A includes a schematic diagram illustrating a return signal in either the I-channel or Q-channel, within range bin 17, according to some exemplary embodiments. FIG. 9B includes a schematic waveform diagram illustrating the sliding correlator code signal, where the state of interrogation is in range bin 10, according to some exemplary embodiments. Referring to Table 1 and FIGS. 9A and 9B, the sliding correlator requires a code transmission burst for each range bin; therefore, there are 52 transmission code bursts, and the maximum range of detection is Rmax = Ndata·δR, in this case, 390 meters. It should also be noted that a range cell dwell mode may be implemented via use of a stationary code at a single range bin. In addition to dwell mode, additional processing gain is available via use of a stationary code at a single range bin for several transmission bursts.
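
The maximum detection range quoted above follows directly from the Table 1 values (a small numeric check):

```python
N_data = 52       # number of range bins interrogated (data set length), Table 1
delta_R = 7.5     # range resolution (m), Table 1
print(f"R_max = {N_data * delta_R:.0f} m")   # 390 m
```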


As noted above, FIG. 9A illustrates a return signal in either the I-channel or Q-channel within range bin 17, while the sliding correlator code is illustrated in FIG. 9B, where the state of interrogation is at range bin 10. The sliding correlator continues incrementally by range bin until all range bins have been interrogated. FIG. 10 includes a schematic illustration of the sliding correlator output for range bin 17. As noted with reference to FIGS. 9A, 9B and 10, upon interrogation of range bin 17, the output of the sliding correlator illustrated in FIG. 10 indicates a peak representing the sum of the magnitudes of the I-channel and Q-channel signals. The processing gain of the sliding correlator may be ascertained from the resulting peak, which represents the aggregate detected signal level from each pulse of the code:

PGdB = 10·Log(n) = 11.1 dB, where n = 13 is the code length.



FIG. 10 also indicates a performance limitation, as disclosed by the adjacent sidelobe levels. The sidelobe levels can be significantly reduced by increasing the code length and by optimization of the correlator code using a mismatched filter. FIGS. 11A and 11B provide an illustration of the reduced sidelobe levels resulting from increasing the code length. FIG. 11A includes a schematic diagram of length-31 maximum length sequence (MLS) pulse-position modulation (PPM) code correlation, according to some exemplary embodiments. FIG. 11B includes a schematic diagram of ideal length-31 MLS code correlation, according to some exemplary embodiments. Referring to FIGS. 10, 11A and 11B, a length-31 MLS code was utilized and modified in accordance with the PPM criterion as previously described. The sliding correlator outputs for the PPM code and the ideal code are illustrated in FIGS. 11A and 11B, respectively. In addition to the sidelobe level differentiation, it should be noted that the processing gain is 3 dB higher in the case of the ideal code; this is a direct result of the reduced duty cycle (50%) associated with the PPM format.


With regard to correlation techniques discussed herein, the continuous and discrete state evaluations of the correlation function are mathematically defined by the correlation integral as set forth below. Specifically, the continuous state correlation equation (integral) is given by:








f(t) = x(t)*y(t) = ∫_a^b x(λ)·y(λ + t) dλ;
and the discrete state correlation equation (summation) is given by:







f_n = (x*y)_n = Σ_{i=−∞}^{+∞} x̄_i · y_{n−i}.
The ‘i’ index of ‘y’ produces a displacement of one increment in each sequential term of the summation. The bar over the ‘x’ term indicates the complex conjugate.
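
A small numeric sketch of the discrete correlation sum above, written exactly as the summation and checked against an equivalent numpy evaluation; the sequences are arbitrary and for illustration only:

```python
import numpy as np

# Arbitrary complex sequences for illustration only.
x = np.array([1 + 1j, 2 - 1j, 0.5 + 0.5j])
y = np.array([0.5 + 0j, 1 + 1j, -1 + 2j, 2 - 1j])

# Direct evaluation of f_n = sum_i conj(x_i) * y_(n-i), as written above.
f = np.zeros(len(x) + len(y) - 1, dtype=complex)
for n in range(len(f)):
    for i in range(len(x)):
        if 0 <= n - i < len(y):
            f[n] += np.conj(x[i]) * y[n - i]

# The same result via numpy: convolving conj(x) with y evaluates this sum.
assert np.allclose(f, np.convolve(np.conj(x), y))
print(f)
```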


Doppler signal processing according to the present disclosure will now be described in detail. Unlike the direct detection NCPC LiDAR transceiver, which is not capable of Doppler detection, the NCPC homodyne LiDAR transceiver of the present disclosure is capable of Doppler detection. FIGS. 12A and 12B include schematic diagrams illustrating I-channel and Q-channel signal samples, respectively, of a moving target object 104 at high signal-to-noise ratio (SNR), according to some exemplary embodiments. FIGS. 13A and 13B include schematic diagrams illustrating complex Fast Fourier Transform (FFT) of I-channel and Q-channel data plotted with linear amplitude and logarithmic amplitude, respectively, according to some exemplary embodiments. FIGS. 14A and 14B include schematic diagrams illustrating complex FFT of smoothed I-channel and Q-channel data plotted with linear amplitude and logarithmic amplitude, respectively, according to some exemplary embodiments. FIGS. 13A and 13B illustrate spectral analysis of the I-channel and Q-channel data sets, and FIGS. 14A and 14B illustrate spectral analysis of the I-channel and Q-channel data sets with data smoothing. The Doppler detection process of the present disclosure is described with reference to FIGS. 12A, 12B, 13A, 13B, 14A and 14B. Referring to the I-channel and Q-Channel signals of FIGS. 12A and 12B, where the stationary and moving object graphics are illustrated, Doppler processing initially encompasses range bin object detection, followed by Doppler processing using spectral analysis of the sampled range bin data. It is noted that the number of points in the data set and the code length for a single range bin determine the Doppler frequency measurement resolution.


The diagrams of FIGS. 12A and 12B illustrate samples of the I-channel and Q-channel signals for a moving object at a specific range bin using a pseudo-MLS code of 31 bits, modified in accordance with the PPM criterion described in detail above. With respect to the diagrams of FIGS. 13A and 13B, a single sample has been acquired for each bit of the code, and therefore the sample rate is fs = 1/τb. The results of a complex spectral analysis, i.e., Fourier Transform, of the I-channel and Q-channel data set are graphically illustrated in FIGS. 13A and 13B, where the peak signal is located in frequency bin 2. It is noted that the Doppler analysis is specific to a single range bin data set. A peak value search is sufficient for the object velocity measurement associated with the range bin data set.


It is noted that several side-lobes of the spectral analysis are significant in value when compared to the peak value. The high side-lobe levels are the direct result of code position sampling with no signal content. Side-lobe level reduction may be achieved via implementation of a data smoothing approach as demonstrated in the diagrams of FIGS. 14A and 14B, where a three point moving average has been applied to the data set prior to execution of the spectral analysis.
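
A minimal sketch of this Doppler processing chain for one range bin's data set is shown below: a three-point moving average applied to the complex I/Q samples, a complex FFT, and a peak search. The synthetic tone and code length are placeholders chosen so that the peak falls in frequency bin 2, as in the example that follows:

```python
import numpy as np

def doppler_estimate(i_samples, q_samples, f_s, lam0):
    """Estimate Doppler frequency and velocity for one range bin data set."""
    z = np.asarray(i_samples) + 1j * np.asarray(q_samples)
    kernel = np.ones(3) / 3.0                       # three-point moving average
    z_smooth = np.convolve(z, kernel, mode="same")  # smooths empty PPM chip positions
    spectrum = np.fft.fft(z_smooth)
    n = len(z_smooth)
    k_peak = int(np.argmax(np.abs(spectrum[: n // 2])))   # positive-frequency peak search
    f_d = k_peak * f_s / n                                # Doppler frequency (Hz)
    return f_d, f_d * lam0 / 2.0                          # velocity from f_d = 2*v/lam0

# Placeholder data: 62 samples of a tone centered in frequency bin 2 (illustrative only).
fs, lam0, Ndata = 100e6, 1.0e-6, 62
t = np.arange(Ndata) / fs
fd_true = 2 * fs / Ndata
i_data = np.cos(2 * np.pi * fd_true * t)
q_data = np.sin(2 * np.pi * fd_true * t)
fd_est, v_est = doppler_estimate(i_data, q_data, fs, lam0)
print(f"f_d ~ {fd_est / 1e6:.2f} MHz, v ~ {v_est:.2f} m/s")   # ~3.23 MHz, ~1.61 m/s
```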


Doppler processing will now be further described by means of description of a Doppler processing example. Table 2 lists parameter values for the Doppler processing example. If the bit width (τb) is 10 nsec, the sample rate is 100 MSPS and the code length is Tcode=Ndata·τb; or 0.62 μsec. Note that there are approximately two cycles of the Doppler signal contained within the code length. Table 2 includes a parametric summary of the Doppler frequency/velocity calculation.









TABLE 2
Doppler Example Parametric Values

PARAMETER                SYMBOL   VALUE        UNIT      NOTE/COMMENT
Code length - bits       Ndata    62           bits      Pseudo-MLS, length 31
Transmit chip width      τb       10.0·10−9    second
Code length - time       Tcode    0.62·10−6    second    Tcode = Ndata·τb
Sample rate              fs       100          MSPS      fs = 1/τb
Doppler resolution       δfD      1.61         MHz       δfD = fs/Ndata
Operating wavelength     λo       1.0          μm
Doppler frequency rate   fDrate   2.0          MHz/m/s   fD = 2·ν/λo @ ν = 1.0 m/s
Max Doppler frequency    fDmax    50           m/s       fDmax = fs/fDrate
Signal processing gain   PGdB     14.9         dB        PGdB = 10·Log(Ndata/2)

From the parametric data of Table 2 and the spectral analysis of FIGS. 13A, 13B, 14A, and 14B, where the peak is located in frequency bin 2, the Doppler frequency is 3.22 MHz (2·δfD), or 1.61 m/s. A longer code at the same bit width (τb), provides greater Doppler frequency measurement resolution, greater signal processing gain and lower sideband levels upon execution of the correlation function. However, the code length is restricted by the requirement that the object remain within the range resolution cell for the entire data acquisition interval; otherwise the processing gain is reduced. It is noted also that additional processing gain is achieved for moving objects due to the noise reduction bandwidth of the spectral processing of the Fourier Transform.
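
The bin-2 result can be reproduced directly from the Table 2 parameters (a small numeric check of the values quoted above):

```python
fs = 100e6        # sample rate (Hz), Table 2
Ndata = 62        # code length in bits/samples, Table 2
lam0 = 1.0e-6     # operating wavelength (m), Table 2

delta_fD = fs / Ndata          # Doppler resolution: ~1.61 MHz
f_D = 2 * delta_fD             # peak in frequency bin 2: ~3.22 MHz (2 * delta_fD)
v = f_D * lam0 / 2.0           # ~1.61 m/s
print(f"delta_fD = {delta_fD / 1e6:.3f} MHz, f_D = {f_D / 1e6:.3f} MHz, v = {v:.2f} m/s")
```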


Table 3 represents a summary of parametric values for the PCM homodyne transceiver of the present disclosure consistent with a typical road vehicle application.









TABLE 3
Road Vehicle Application Parametric Values

PARAMETER                           SYMBOL   VALUE         UNIT      NOTE/COMMENT
Code length - bits                  Ndata    1024          bits
Transmit chip width                 τb       10.0·10−9     second
Code length - time                  Tcode    10.24·10−6    second    Tcode = Ndata·τb
Sample rate                         fs       100           MSPS      fs = 1/τb; one sample per range bin
Range resolution                    δR       1.5           meter     δR = c·τb/2
Unambiguous range                   Runamb   1536          meter     Runamb = c·Tcode/2
Detection range                     Rdet     TBD           meter
Operating wavelength                λo       1.0           μm
Doppler frequency rate              fDrate   2.0           MHz/m/s   fD = 2·ν/λo @ ν = 1.0 m/s
Max Doppler frequency               fDmax    50            m/s       fDmax = fs/fDrate
Doppler resolution                  δfD      97.6          kHz       δfD = fs/Ndata
Data acquisition time               Tacq     20.1·10−3     second    Tacq = 2·(Ndata)²·τb = 2·Ndata·Tcode
Signal processing gain              PGdB     27.1          dB        PGdB = 10·Log(Ndata/2)
Signal processing gain (Doppler)    PGdBD    30.1          dB        PGdBD = 10·Log(Ndata·τb·fs)

FIG. 15 includes a schematic perspective view of an automobile 500, equipped with one or more LiDAR systems 300, equipped with the LiDAR transceiver 100 described herein in detail, according to some exemplary embodiments. Referring to FIG. 15, it should be noted that, although only a single scanning LiDAR system 300 is illustrated, it will be understood that multiple LiDAR systems 300 according to the exemplary embodiments can be used in automobile 500. Also, for simplicity of illustration, scanning LiDAR system 300 is illustrated as being mounted on or in the front section of automobile 500. It will also be understood that one or more scanning LiDAR systems 300 can be mounted at various locations on automobile 500.



FIG. 16 includes a schematic top view of automobile 500 equipped with two LiDAR systems 300, according to some exemplary embodiments. In the particular embodiments illustrated in FIG. 16, a first LiDAR system 300 is connected via a bus 560, which in some embodiments can be a standard automotive controller area network (CAN) bus, to a first CAN bus electronic control unit (ECU) 558A. Detections generated by the LiDAR processing described herein in detail in LiDAR system 300 can be reported to ECU 558A, which processes the detections and can provide detection alerts via CAN bus 560. Similarly, in some exemplary embodiments, a second LiDAR scanning system 300 is connected via CAN bus 560 to a second CAN bus electronic control unit (ECU) 558B. Detections generated by the LiDAR processing described herein in detail in LiDAR system 300 can be reported to ECU 558B, which processes the detections and can provide detection alerts via CAN bus 560. It should be noted that this configuration is exemplary only, and that many other automobile LiDAR configurations within automobile 500 can be implemented. For example, a single ECU can be used instead of multiple ECUs. Also, the separate ECUs can be omitted altogether.


It is noted that the present disclosure describes one or more LiDAR systems installed in an automobile. It will be understood that the embodiments of LiDAR systems of the disclosure are applicable to any kind of vehicle, e.g., bus, train, etc. Also, the scanning LiDAR systems of the present disclosure need not be associated with any kind of vehicle.


Whereas many alterations and modifications of the disclosure will become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Further, the subject matter has been described with reference to particular embodiments, but variations within the spirit and scope of the disclosure will occur to those skilled in the art. It is noted that the foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present disclosure.


While the present inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.

Claims
  • 1. A LiDAR system, comprising: an optical source for generating a continuous wave (CW) optical signal; a control processor for generating a pulse-position modulation (PPM) signal; an amplitude modulation (AM) modulator for receiving the CW optical signal and the PPM signal and generating therefrom a pulse-position amplitude-modulated optical signal; a transmitter for transmitting the pulse-position amplitude-modulated optical signal through a transmit optical element into a region; a receive optical element for receiving reflected versions of the pulse-position amplitude-modulated optical signal reflected from at least one target object in the region; a first optical detector for receiving the CW optical signal from the optical source and a received version of the reflected versions of the pulse-position amplitude-modulated optical signal, and generating therefrom a first baseband signal; a second optical detector for receiving the CW optical signal from the optical source and a received version of the reflected versions of the pulse-position amplitude-modulated optical signal, and generating therefrom a second baseband signal, wherein the second optical detector comprises a second mixer for generating the second baseband signal, wherein the first optical detector generates an in-phase-channel voltage signal, and the second optical detector generates a quadrature-channel voltage signal, and wherein the first and second baseband signals are in quadrature; and a signal processor for receiving the first and second baseband signals and processing the first and second baseband signals to generate an indication related to the object.
  • 2. The LiDAR system of claim 1, wherein the LiDAR system is a homodyne LiDAR system.
  • 3. The LiDAR system of claim 1, wherein the LiDAR system is a heterodyne LiDAR system.
  • 4. The LiDAR system of claim 1, wherein the first optical detector comprises a first mixer for generating the first baseband signal.
  • 5. The LiDAR system of claim 1, wherein at least one of the first and second optical detectors comprises a phase shifter for introducing a phase difference between the first and second baseband signals.
  • 6. The LiDAR system of claim 5, further comprising: a first low-pass filter for filtering the in-phase-channel voltage signal to generate a filtered in-phase-channel voltage signal; and a second low-pass filter for filtering the quadrature-channel voltage signal to generate a filtered quadrature-channel voltage signal.
  • 7. The LiDAR system of claim 6, further comprising: a first analog-to-digital converter (ADC) for converting the in-phase-channel voltage signal to a digital in-phase-channel voltage signal; and a second ADC for converting the quadrature-channel voltage signal to a digital quadrature-channel voltage signal.
  • 8. The LiDAR system of claim 7, wherein the signal processor receives the first baseband signal and the second baseband signal and processes the first and second baseband signals to generate the indication related to the object.
  • 9. The LiDAR system of claim 8, wherein the processor, in processing the first and second baseband signals to generate the indication related to the object, performs Doppler processing.
  • 10. The LiDAR system of claim 8, wherein the processor, in processing the first and second baseband signals to generate the indication related to the object, performs correlation processing.
  • 11. A LiDAR method, comprising: generating a continuous wave (CW) optical signal; generating a pulse-position modulation (PPM) signal; generating a pulse-position amplitude-modulated optical signal from the CW optical signal and the PPM signal; transmitting the pulse-position amplitude-modulated optical signal through a transmit optical element into a region; receiving reflected versions of the pulse-position amplitude-modulated optical signal reflected from at least one object in the region; mixing the CW optical signal from the optical source and the reflected versions of the pulse-position amplitude-modulated optical signal to generate therefrom a first baseband signal; mixing the CW optical signal from the optical source and the reflected versions of the pulse-position amplitude-modulated optical signal to generate therefrom a second baseband signal; performing phase shifting to introduce a phase difference between the first and second baseband signals, wherein the first and second baseband signals are in quadrature; performing optical detection to generate an in-phase-channel voltage signal from the first baseband signal and a quadrature-channel voltage signal from the second baseband signal; and processing the first and second baseband signals to generate an indication related to the object.
  • 12. The LiDAR method of claim 11, wherein the LiDAR method is a homodyne LiDAR method.
  • 13. The LiDAR method of claim 11, wherein the LiDAR method is a heterodyne LiDAR method.
  • 14. The LiDAR method of claim 11, further comprising performing phase shifting to introduce a phase difference between the first and second baseband signals.
  • 15. The LiDAR method of claim 14, further comprising: low-pass filtering the in-phase-channel voltage signal to generate a filtered in-phase-channel voltage signal; and low-pass filtering the quadrature-channel voltage signal to generate a filtered quadrature-channel voltage signal.
  • 16. The LiDAR method of claim 15, further comprising: converting the in-phase-channel voltage signal to a digital in-phase-channel voltage signal; and converting the quadrature-channel voltage signal to a digital quadrature-channel voltage signal.
  • 17. The LiDAR method of claim 16, wherein processing the first and second baseband signals to generate the indication related to the object comprises performing Doppler processing on the digital in-phase-channel voltage signal and the digital quadrature-channel voltage signal.
  • 18. The LiDAR method of claim 16, wherein processing the first and second baseband signals to generate the indication related to the object comprises performing correlation processing on the digital in-phase-channel voltage signal and the digital quadrature-channel voltage signal.
20160200161 Van Den Bossche et al. Jul 2016 A1
20160245902 Watnik et al. Aug 2016 A1
20160280229 Kasahara Sep 2016 A1
20160291160 Zweigle et al. Oct 2016 A1
20160357187 Ansari Dec 2016 A1
20160363669 Liu Dec 2016 A1
20160380488 Widmer et al. Dec 2016 A1
20170023678 Pink et al. Jan 2017 A1
20170090013 Paradie et al. Mar 2017 A1
20170102457 Li Apr 2017 A1
20170199273 Morikawa et al. Jul 2017 A1
20170219696 Hayakawa et al. Aug 2017 A1
20170269215 Hall et al. Sep 2017 A1
20170270381 Itoh et al. Sep 2017 A1
20170285346 Pan Oct 2017 A1
20170307736 Donovan Oct 2017 A1
20170307737 Ishikawa et al. Oct 2017 A1
20170329010 Warke Nov 2017 A1
20170329011 Warke Nov 2017 A1
20180052378 Shin et al. Feb 2018 A1
20180113193 Huemer Apr 2018 A1
20180128903 Chang May 2018 A1
20180143309 Pennecot et al. May 2018 A1
20180180718 Lin Jun 2018 A1
20180224529 Wolf et al. Aug 2018 A1
20180241477 Turbide et al. Aug 2018 A1
20180284237 Campbell Oct 2018 A1
20180284282 Hong et al. Oct 2018 A1
20180306913 Bartels Oct 2018 A1
20180341009 Niclass et al. Nov 2018 A1
20180364334 Xiang et al. Dec 2018 A1
20180372870 Puglia Dec 2018 A1
20190101644 DeMersseman et al. Apr 2019 A1
20190129009 Eichenholz et al. May 2019 A1
20190139951 T'ng et al. May 2019 A1
20190146060 Qiu et al. May 2019 A1
20190195990 Shand Jun 2019 A1
20190235064 Droz et al. Aug 2019 A1
20200081129 de Mersseman Mar 2020 A1
20200088847 DeMersseman et al. Mar 2020 A1
20200341120 Ahn Oct 2020 A1
20200341121 Ahn Oct 2020 A1
Foreign Referenced Citations (23)
Number Date Country
509180 Jan 2016 AT
19757840 Sep 1999 DE
102004033944 Feb 2006 DE
102006031114 Jul 2008 DE
102008045387 Mar 2010 DE
102014218957 Mar 2016 DE
102015217908 Mar 2017 DE
0112188 Jun 1987 EP
0578129 Jan 1994 EP
2696166 Dec 2014 EP
2824418 Jan 2015 EP
3203259 Aug 2017 EP
3457080 Mar 2019 EP
3578129 Dec 2019 EP
3147685 Jan 2020 EP
1994019705 Sep 1994 WO
2008008970 Jan 2008 WO
2015014556 Feb 2015 WO
2016072483 May 2016 WO
2016097409 Jun 2016 WO
2016204139 Dec 2016 WO
2019050643 Mar 2019 WO
2019099166 May 2019 WO
Non-Patent Literature Citations (43)
Entry
International Search Report and Written Opinion for International Application No. PCT/US2020/064474, dated Apr. 1, 2021.
International Search Report and Written Opinion for International Application No. PCT/US2018/057676, dated Jan. 23, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2018/052849, dated May 6, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/046800, dated Nov. 25, 2019.
Kasturi et al., UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability; SPIE Defense and Commercial Sensing Conference 2016, Baltimore, MD; 10 pages, 2016.
Internet URL: https://www.continental-automotive.com/en-gl/Passenger-Cars/Chassis-Safety/Advanced-Driver-Assistance-Systems/Cameras [retrieved on Dec. 20, 2018].
Internet URL: https://www.continental-automotive.com/en-gl/Passenger-Cars/Chassis-Safety/Advanced-Driver-Assistance-Systems/Cameras/Multi-Function-Camera-with-Lidar [retrieved on Dec. 20, 2018].
Hi-Res 3d Flash LIDAR will Supplement Continental's Existing Portfolio for Automated Driving [online], Press Release, March 3, 2016, [retrieved on Dec. 20, 2018]. Retrieved from the Internet URL: https://www.continental-corporation.com/en/press/press-releases/hi-res-3d-flash-lidar-will-supplement-continental-s-existing-portfolio-for-automated-driving-15758.
A milestone for laser sensors in self-driving cars [online], Trade Press, Jul. 11, 2016, [retrieved on Dec. 19, 2018]. Retrieved from the Internet URL: https://www.osram.com/os/press/press-releases/a_milestone_for_lasersensors_in_self-driving_carsjsp.
Hewlett-Packard Application Note 77-4, Swept-Frequency Group Delay Measurements, Hewlett-Packard Co., 7 pages, September 1968.
Kravitz et al., High-Resolution Low-Sidelobe Laser Ranging Based on Incoherent Pulse Compression, IEEE Photonics Technology Letters, vol. 24, No. 23, pp. 2119-2121, 2012.
Journet et al., A Low-Cost Laser Range Finder Based on an FMCW-like Method, IEEE Transactions on Instrumentation and Measurement, vol. 49, No. 4, pp. 840-843, 2000.
Campbell et al., Advanced Sine Wave Modulation of Continuous Wave Laser System for Atmospheric CO2 Differential Absorption Measurements; NASA Langley Research Center, 32 pages, 2018.
Levanon et al., Non-coherent Pulse Compression-Aperiodic and Periodic Waveforms; The Institution of Engineering and Technology, 9 pages, 2015.
Peer et al., Compression Waveforms for Non-Coherent Radar, Tel Aviv University, 6 pages, 2018.
Li, Time-of-Flight Camera-An Introduction, Technical White Paper, SLOA190B, Texas Instruments, 10 pages, 2014.
Pierrottet et al., Linear FMCW Laser Radar for Precision Range and Vector Velocity Measurements, Coherent Applications, Inc., NASA Langley Research Center, 9 pages, 2018.
Kahn, Modulation and Detection Techniques for Optical Communication Systems, Stanford University, Department of Electrical Engineering, 3 pages, 2006.
Niclass et al., Development of Automotive LIDAR, Electronics and Communications in Japan, vol. 98, No. 5, 6 pages, 2015.
Su et al., 2-D FFT and Time-Frequency Analysis Techniques for Multi-Target Recognition of FMCW Radar Signal, Proceedings of the Asia-Pacific Microwave Conference 2011, pp. 1390-1393.
Wojtkiewicz et al., Two-Dimensional Signal Processing in FMCW Radars, Instytut Podstaw Elektroniki Politechnika Warszawska, Warszawa, 6 pages, 2018.
Winkler, Range Doppler Detection for Automotive FMCW Radars, Proceedings of the 4th European Radar Conference, Munich Germany, 4 pages, 2007.
Li et al., Investigation of Beam Steering Performances in Rotation Risley-Prism Scanner, Optics Express, vol. 24, No. 12, 11 pages, 2016.
THORLABS Application Note, Risley Prism Scanner, 33 pages, 2018.
Simpson et al., Intensity-Modulated, Stepped Frequency CW Lidar for Distributed Aerosol and Hard Target Measurements, Applied Optics, vol. 44, No. 33, pp. 7210-7217, 2005.
Skolnik, Introduction to Radar Systems, 3rd Edition, McGraw-Hill, New York, NY 2001, pp. 45-48.
Wang et al., Range-Doppler image processing in linear FMCW Radar and FPGA Based Real-Time Implementation, Journal of Communication and Computer, vol. 6, No. 4, 2009.
International Search Report and Written Opinion for International Application No. PCT/US2018/057727 dated Jan. 28, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2018/052837 dated Jan. 24, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2017/033263 dated Aug. 29, 2017.
International Search Report and Written Opinion for International Application No. PCT/US2018/048869 dated Nov. 8, 2018.
International Search Report and Written Opinion for International Application No. PCT/US2018/051281 dated Nov. 22, 2018.
International Search Report and Written Opinion for International Application No. PCT/US2018/054992 dated Dec. 11, 2018.
International Search Report and Written Opinion for International Application No. PCT/US2018/049038 dated Dec. 12, 2018.
International Search Report and Written Opinion for International Application No. PCT/US2017/033265 dated Sep. 1, 2017.
International Search Report and Written Opinion for International Application No. PCT/US2017/033271 dated Sep. 1, 2017.
Invitation to Pay Additional Fees for International Application No. PCT/US2018/052849 dated Mar. 8, 2019.
http://www.advancedscientificconcepts.com/products/overview.html.
Roncat, Andreas, The Geometry of Airborne Laser Scanning in a Kinematical Framework, Oct. 19, 2016, www.researchgate.net/profile/Andreas_Roncat/publication/310843362_The_Geometry_of Airborne_LaserScanningin_a_Kinematical_Frameworldinks/5839add708ae3a74b49ea03b1The-Geometry-of-Airbome-Laser-Scanning-in-a-Kinematical-Framework.pdf.
Church et al., “Evaluation of a steerable 3D laser scanner using a double Risley prism pair,” SPIE Paper.
Luhmann, “A historical review on panorama photogrammetry,” http://www.researchgate.net/publication/228766550.
International Search Report and Written Opinion for International Application No. PCT/US2020/039760, dated Sep. 18, 2020.
Communication from EP Application No. 18773034.6 dated Sep. 13, 2021.
Related Publications (1)
Number Date Country
20210124050 A1 Apr 2021 US