The present disclosure relates generally to object detection in a scene using frequency modulated continuous wave (FMCW) techniques, and more particularly, to FMCW based devices and methods for coherent range estimation of an object in the scene.
Several application areas such as autonomous vehicles, industrial robotics, navigation, aerospace, meteorological element measurement, and atmospheric environment monitoring require precise detection of objects for performing critical functions. Reliable object detection is usually performed using remote sensing techniques such as Light Detection and Ranging (LIDAR), which uses light in the form of a pulsed laser to measure ranges (variable distances) to objects in a scene. Several types of lidars are available today for range estimation of objects in a scene of interest. Conventional pulsed lidars transmit a short pulse of light towards an object, and the distance to the object is calculated from the time that elapses until the reflected light returns to the lidar system. Since pulsed lidar is incoherent, measuring only the intensity of light at the detector, it is susceptible to errors due to ambient light or interference from other lidars.
An alternative approach to distance estimation via laser light is frequency-modulated continuous-wave (FMCW) lidar. Unlike pulsed lidar, FMCW lidar has constant illumination power, which makes it compatible with integrated photonics. FMCW lidar is a coherent ranging technology that measures distance by mixing a local copy of the transmitted laser beam with the light reflected back to the receiver. The beat frequency of the resulting interference signal is proportional to the reflector range, so distance measurement becomes a frequency estimation problem.
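For purposes of illustration only (such code does not form part of the disclosed embodiments), the proportionality between beat frequency and range may be sketched as follows; the chirp rate and target distance below are assumed example values:

```python
# Illustrative sketch: the FMCW beat frequency is f = gamma * tau = gamma * 2d/c,
# so distance measurement reduces to estimating f. All values are assumed examples.

C = 299_792_458.0  # speed of light in m/s

def beat_frequency(distance_m, chirp_rate_hz_per_s):
    """Beat frequency produced by a reflector at the given distance."""
    tau = 2.0 * distance_m / C  # round-trip delay of the reflected light
    return chirp_rate_hz_per_s * tau

def range_from_beat(beat_hz, chirp_rate_hz_per_s):
    """Invert the beat frequency back to a distance estimate."""
    return C * beat_hz / (2.0 * chirp_rate_hz_per_s)

gamma = 1.0e12  # assumed chirp rate of 1 THz/s
d = 150.0       # assumed target distance in metres
f_beat = beat_frequency(d, gamma)
print(f_beat)                        # roughly 1 MHz beat for a 150 m target
print(range_from_beat(f_beat, gamma))
```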
Despite such advantages, FMCW lidar's accuracy is often limited by phase noise of the laser source. As with any oscillator, a laser cannot produce a single pure frequency of light. Instead, thermal changes, mechanical vibration, and even the quantum nature of photons cause randomness in the phase of oscillation, which corresponds to deviations in the emitted frequency. When a noisy laser is used for FMCW lidar, the laser phase noise likewise causes the interference signal to deviate from the true beat frequency. These deviations increase as a function of the range to the object, so that conventionally there is a maximum range (the coherence range) that can be measured without excessive error. This understanding is based on classical frequency estimation techniques, which assume a sinusoid in additive noise and for which the maximum likelihood frequency estimate is the peak of the periodogram. However, peak-finding methods perform poorly when phase noise distributes the signal power over a range of frequencies.
Conventional techniques perform frequency estimation in the frequency domain. However, this approach has a drawback: phase noise statistics are not well understood in the frequency domain. Attempts to decipher phase noise statistics have led to increased complexity of the measurement system and/or require additional hardware and computational resources. Accordingly, better approaches for estimating the frequency of beat signals are needed.
It is an object of some embodiments to provide improved techniques for frequency estimation for determining range of objects in a scene of interest. In this regard, it is also an object of some embodiments to provide systems and methods for performing frequency estimation in time domain in which phase noise statistics are modeled explicitly and are better described and understood. Some example embodiments aim to perform frequency estimation in the time domain and take advantage of the known phase noise statistics. Some example embodiments are specifically directed towards providing frequency estimation techniques for scenarios where the maximum distance to be measured is greater than the coherence length of the light source.
Some example embodiments are based on the realization that for coherent ranging techniques, the object distance measurement task becomes a frequency estimation problem since the distance to an object in a scene is a function of the beat frequency of the interference signal resulting from a mixing of a local copy of the transmitted beam with the light reflected back to the receiver. In coherent ranging techniques such as FMCW, depth measurement is affected by correlated noise. Accordingly, some example embodiments are based on a realization that for FMCW lidar, the laser phase noise causes the interference signal to deviate from the true beat frequency and as such the beat frequency estimation is prone to error if the phase noise statistics are not taken into consideration. In this regard, some example embodiments are based on the understanding that the statistics of the phase noise can be modeled explicitly in the time domain since they are not well understood in the frequency domain.
Some example embodiments also realize that frequency estimation in time domain may be performed by a two-stage process comprising phase unwrapping and linear regression. In this regard, some example embodiments take into consideration prior information about the linear nature of the underlying unwrapped phase. Some example embodiments realize that FMCW depth measurement is affected by correlated noise and therefore incorporate phase error estimation into the alternating optimization. Towards this end, example embodiments approach frequency estimation from the perspective of phase unwrapping, explicitly accounting for correlations in the phase noise.
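As a minimal illustration of the two-stage idea only (naive unwrapping followed by regression, without the correlated-noise handling of the disclosed embodiments), the beat frequency of a clean tone can be recovered as follows; the sample rate and tone frequency are assumed example values:

```python
import numpy as np

def estimate_frequency(wrapped_phase, fs):
    """Naive two-stage estimate: unwrap the phase, then fit a line whose
    slope gives the angular frequency."""
    x = np.unwrap(wrapped_phase)       # stage 1: phase unwrapping
    t = np.arange(len(x)) / fs
    slope, _ = np.polyfit(t, x, 1)     # stage 2: linear regression
    return slope / (2.0 * np.pi)       # convert rad/s to Hz

fs, f0 = 1000.0, 37.0                  # assumed sample rate and beat frequency
t = np.arange(256) / fs
wrapped = np.angle(np.exp(1j * (2 * np.pi * f0 * t + 0.4)))  # wrapped phase
print(estimate_frequency(wrapped, fs))
```

With noiseless data the naive procedure is exact; the disclosed embodiments address the case where correlated phase noise makes this naive unwrapping unreliable.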
It is also an object of some example embodiments to determine the maximum likelihood unwrapping sequence of wrapped phases using linear minimum mean squared error estimation of the phase error. Some example embodiments recognize that the phase error statistics are easier to derive than the resulting power spectral density (PSD) of the noisy sinusoidal measurement. Therefore, example embodiments of the present disclosure are capable of accommodating non-white laser frequency noise distributions into the frequency estimation task.
Various example embodiments consider laser-based sources for frequency estimation as part of the bigger range estimation problem. In this area, FMCW lidars are promising when it comes to applications such as autonomous navigation because they are more robust than pulsed lidar to ambient light or interference from other lidar systems. However, as discussed above, the phase noise in the FMCW lidar causes the instantaneous frequency to deviate from the desired frequency modulation and decreases the temporal coherence in the interfered light beams. Some example embodiments are based on the recognition that the loss of temporal coherence becomes more severe as the distance to the target increases. The coherence range—determined by the amount of phase noise—is considered to be the distance beyond which ranging cannot be reliably performed. Some example embodiments also realize that in order to reach long distances (e.g., needed for navigation), a hardware solution is to use lasers with low linewidth (i.e., low phase noise). However, such low-linewidth lasers tend to be expensive and increase the hardware complexity of the overall system.
Several example embodiments realize that one attempt to increase range may be by accounting for the effect of phase noise. For example, it is a realization of some example embodiments that a Lorentzian distribution may be fitted to the power spectral density (PSD) of the measured interference signal, since assuming a white frequency noise laser model, the PSD is asymptotically Lorentzian as a function of range. Some other example embodiments also realize that to address the problem of phase noise affecting lidar measurements, one solution is to perform frequency-domain estimation by fitting a Lorentzian curve to the power spectral density of the measurement. However, the Lorentzian curve is only an approximation to the interference PSD, and fitting under an assumption of Gaussian statistics is not optimal.
Some other example embodiments also realize that to address the problem of phase noise affecting lidar measurements, another solution is to use additional reference arms and fixed depths to measure properties of phase impairments (including phase noise and nonlinearities). Particularly, correction of impairments (so standard frequency-domain peak-finding can be used) may be achieved, instead of including impairment modeling in the estimation process. However, such solutions require additional hardware and calibration.
In view of the aforementioned realizations, various example embodiments are based on the understanding that attempts to correct for phase errors or fit the PSD with a curve that makes a simplistic assumption about the distribution lead to suboptimal or infeasible solutions. Some example embodiments also recognize that such approaches cannot achieve acceptable levels of precision, especially under high signal to noise ratio conditions. Towards these ends, several example embodiments propose the use of the exact phase noise statistics in the time domain to achieve better performance, especially at high SNR. Additionally, the time-domain method provided by some example embodiments uses the statistics for realistic laser models, instead of the naïve white frequency noise model.
As such, one primary problem for some example embodiments is how to best account for the effects of phase noise in the ranging process. In this regard, some example embodiments realize that the phase noise statistics are easiest to describe in the time domain (i.e., directly in the phase of the interference signal) rather than in the frequency domain after a Fourier transform. Specifically, some example embodiments describe the interference signal phase error in closed form for both white and colored frequency noise, as compared to the PSD which is only known in closed form for white frequency noise. It is also a realization of some example embodiments that the phase error is a stationary Gaussian process, so it is completely described by its autocorrelation function. Some example embodiments are based on the understanding that using a quadrature demodulator, it is possible to extract the interference signal phase to perform phase-based frequency estimation. FMCW lidar systems conventionally use a single balanced detector, whereas a quadrature demodulator contains two balanced detectors for direct in-phase and quadrature measurements.
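Under the standard white-frequency-noise laser model (a common modeling assumption, not a statement of the disclosed embodiments), the source phase noise is a Wiener process, and the phase change Δϕn(t)=ϕn(t)−ϕn(t−τ) is then a stationary Gaussian process whose variance grows with the delay τ as 2πΔντ, where Δν is the laser linewidth. The following sketch verifies this numerically; all parameter values are assumed:

```python
import numpy as np

def phase_change(linewidth_hz, tau_s, fs, n, rng):
    """Simulate eta(t) = phi_n(t) - phi_n(t - tau) for Wiener-process phase
    noise (white frequency noise model)."""
    dt = 1.0 / fs
    lag = int(round(tau_s * fs))
    # Independent Gaussian increments give a Wiener-process phase trajectory.
    steps = rng.normal(0.0, np.sqrt(2.0 * np.pi * linewidth_hz * dt), size=n + lag)
    phi = np.cumsum(steps)
    return phi[lag:] - phi[:-lag]

rng = np.random.default_rng(0)
eta = phase_change(linewidth_hz=1.0e5, tau_s=1.0e-6, fs=1.0e8, n=200_000, rng=rng)
print(eta.var())                      # empirical variance of the phase change
print(2.0 * np.pi * 1.0e5 * 1.0e-6)  # theoretical value 2*pi*linewidth*tau
```

Because the variance (and, more generally, the autocorrelation) of the phase change is available in closed form in the time domain, it can be used directly in the estimation, which is the advantage exploited by the disclosed embodiments.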
Some example embodiments provide solutions for frequency estimation via phase unwrapping. Some example embodiments exploit the advantage of the phase error correlations for unwrapping the measured phase, and then again use the phase error correlations for linear regression of the unwrapped phase. In this regard, some example embodiments provide an iterative algorithm based on the above principles and that can be initialized by—and improve upon—less exact methods (e.g., Lorentzian fitting), showing better accuracy at high SNR.
In order to achieve the aforementioned objectives and advantages, some example embodiments provide systems, methods, and programs for coherent range estimation of an object in the scene.
For example, some example embodiments provide an FMCW device comprising an emitter configured to transmit at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. A receiver receives a reflection of the transmitted wave from the scene, and a mixer interferes a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal. A pair of analog to digital converters generate a sequence of samples of the beat signal with wrapped phases in a time domain. A processor is configured to estimate a frequency of the beat signal in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal. The estimated frequency of the beat signal may be utilized by the processor to estimate a distance to an object in the scene.
Some example embodiments also provide a frequency estimation method utilizing an FMCW device, the method comprising transmitting at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. The method further comprises receiving a reflection of the transmitted wave from the scene, interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal and generating a sequence of samples of the beat signal with wrapped phases in a time domain. The method also comprises estimating a frequency of the beat signal in the time domain in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal.
Some example embodiments also provide a non-transitory computer readable medium having stored thereon instructions executable by a computer for performing a frequency estimation method utilizing a FMCW device. The frequency estimation method comprising transmitting at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. The method further comprises receiving a reflection of the transmitted wave from the scene, interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal and generating a sequence of samples of the beat signal with wrapped phases in a time domain. The method also comprises estimating a frequency of the beat signal in the time domain in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal.
According to some example embodiments, the generated beat signal is distorted due to the non-linearity of the linear modulation caused by the impairments. In some example embodiments, a current iteration of the iterative estimation includes determining, for each sample of the sequence of samples of the beat signal, a current phase error and a phase unwrapping number fitting a previous frequency of the beat signal and a previous phase offset of the beat signal determined during a previous iteration. The current phase error for a current sample in the sequence of the beat signal is correlated with a previous phase error for a previous sample in the sequence of the beat signal by predetermined phase noise statistics. The current iteration of the iterative estimation also includes updating a current frequency of the beat signal and a current phase offset of the beat signal for the current iteration based on the determined current phase error and the determined phase unwrapping number.
According to some example embodiments, the iterative estimation of the frequency is based on alternating optimization that includes a Viterbi algorithm determining the current phase error and the phase unwrapping number probabilistically to maximize a likelihood of their fitting in the entire sequence of samples of the beat signal. For each current sample of the sequence of samples of the beat signal, the Viterbi algorithm uses the previous phase errors and the previous phase unwrapping number determined for the previous samples for causal estimation of a current phase error and a current phase unwrapping number. The causal estimation of the current phase error and the current phase unwrapping number are determined via a linear minimum mean squared error estimation.
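The causal linear minimum mean squared error step may be illustrated in isolation (this sketch is not the disclosed implementation): given the autocorrelation of a stationary phase error process, the causal predictor weights follow from the normal equations. The AR(1)-shaped autocorrelation below is an assumed example, not the disclosed statistics:

```python
import numpy as np

def lmmse_predictor(acf, p):
    """Solve the normal equations R w = r for the weights of a causal
    one-step linear minimum mean squared error predictor of order p."""
    R = np.array([[acf[abs(i - j)] for j in range(p)] for i in range(p)])
    r = np.asarray(acf[1:p + 1])
    return np.linalg.solve(R, r)

# Assumed example: AR(1)-shaped autocorrelation acf[k] = rho**k.
rho = 0.9
acf = rho ** np.arange(16)
w = lmmse_predictor(acf, p=4)
print(np.round(w, 6))  # for an AR(1) process only the first tap is non-zero
```

For an AR(1)-shaped autocorrelation the solution places all weight on the immediately preceding sample, matching the intuition that each phase error is best predicted from its nearest neighbor; richer autocorrelations (e.g., from colored frequency noise) would spread the weights across several past samples.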
The presently disclosed embodiments will be further explained with reference to the following drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
While the above-identified drawings set forth presently disclosed embodiments, other embodiments are also contemplated, as noted in the discussion. This disclosure presents illustrative embodiments by way of representation and not limitation. Numerous other modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of the presently disclosed embodiments.
The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.
Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.
Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
There has been a widescale adoption of imaging techniques in several application areas owing to the need to understand the environment around a subject for making informed decisions. For example, in remote sensing applications, it is often desired that the physical characteristics of an area be described with as much clarity as possible. In other applications such as autonomous and semi-autonomous vehicles, making time-bound decisions is critical for the operation of such vehicles. For all such applications it is essential that depth estimation of objects in the scene of interest be performed accurately. Precise depth estimation is achievable by taking into consideration errors that occur in taking measurements of the scene. In most such scenarios the illumination source often introduces such errors in the measurements.
Lidar is an increasingly popular sensing modality for ranging applications varying from autonomous driving to industrial robotics and lunar navigation. Conventional pulsed lidar transmits a short pulse of light toward an object, and the distance to the object is calculated from the time that elapses until the reflected light returns to the lidar system. Since pulsed lidar is incoherent, measuring only the intensity of light at the detector, it is susceptible to errors due to ambient light or interference from other lidars. An alternative approach to distance estimation via laser light is frequency-modulated continuous-wave (FMCW) lidar. Unlike pulsed lidar, FMCW lidar has constant illumination power, which makes it compatible with integrated photonics. FMCW lidar is a coherent ranging technology that measures distance by mixing a local copy of the transmitted laser beam with the light reflected back to the receiver. The beat frequency of the resulting interference signal is proportional to the reflector range, so distance measurement becomes a frequency estimation problem.
However, as with any oscillator, a laser cannot produce a single pure frequency of light. Instead, thermal changes, mechanical vibration, and even the quantum nature of photons cause randomness in the phase of oscillation, which corresponds to deviations in the emitted frequency. When a laser is used for FMCW lidar, the laser phase noise likewise causes the interference signal to deviate from the true beat frequency. Under the usual assumption of a sinusoid in additive noise, there is a maximum range that can be measured without excessive error. The maximum likelihood estimate of the frequency of a sinusoid in additive noise is the peak of the periodogram, but peak-finding methods perform poorly when phase noise distributes the signal power over a range of frequencies.
Even when the distance to an object is greater than the coherence length of the laser, an FMCW lidar measurement will still contain information about the distance. However, updated techniques are needed that take the statistics of the phase noise into account when performing frequency estimation. In general, the statistics of the phase noise can be modeled explicitly in the time domain. However, frequency estimation is usually performed in the frequency domain, and the phase noise statistics are less well understood in the frequency domain. Instead, when the maximum distance to be measured is greater than the coherence length of the laser, it is useful to perform frequency estimation in the time domain and take advantage of the known phase noise statistics.
Some solutions aim to perform frequency estimation in the time domain by phase unwrapping and linear regression. Many of these algorithms perform a naïve phase unwrapping procedure that does not take the linear nature of the underlying signal into account. More sophisticated methods use the prior information that the underlying unwrapped phase is linear in an algorithm that alternates between phase unwrapping using estimates of the frequency and phase offset, and updated estimation of the frequency and phase offset given the estimated unwrapping. However, none of these methods consider correlated noise settings. Since FMCW depth measurement is affected by correlated noise, example embodiments disclosed herein incorporate phase error estimation into the alternating optimization.
Given a candidate frequency, the proposed solutions approximately recover the maximum likelihood unwrapping sequence using a Viterbi algorithm and the phase noise statistics. The algorithm then alternates between unwrapping and frequency estimate refinement until convergence. The proposed solutions consistently achieve superior performance at long range or with large-linewidth lasers when the signal-to-noise ratio is sufficiently high. These and several other advantages emerge from the disclosed systems, methods and programs provided by the example embodiments.
A Frequency Modulated Continuous Wave lidar or FMCW lidar system is a special type of lidar system that measures both distance and velocity of moving objects. This is achieved by continuously varying the frequency of the transmitted signal by a modulating signal at a known rate over a fixed time period. This modulation is often used for a very precise distance measurement at close range by phase comparison of the two echo signal frequencies.
A transmission wave for transmission into a scene is generated by one or more lasers of a lidar and fired towards the scene through suitable transmitters of the FMCW lidar system. A copy 10 of the transmitted wave is also stored locally for further processing. The transmitted wave is reflected from the scene and a reflected wave 20 is collected by suitable receivers of the FMCW lidar system. The copy 10 of the transmitted wave and the reflected wave 20 are fed into a mixer for signal mixing 30 which outputs measurements pertaining to the resultant interference signal. As a part of the mixing process, the optical signals may be converted into analog electronic signals which are then digitized to produce samples of the interference signal.
Let T be the sweep duration, ϕ0 the initial phase, ω0 the initial angular frequency, γ the chirp rate, and ϕn(t) the source phase noise. Within one chirp period, the transmitted wave copy 10 channel has normalized electric field given by:

ELO(t)=exp(j(ϕ0+ω0t+πγt²+ϕn(t)))
The received signal from the reflected wave 20 has an electric field ERX(t)=√R ELO(t−τ), which has been scaled by the target reflectivity R and delayed by τ=2d/c, where d is the target distance and c is the speed of light. The electric fields of the transmitted wave copy 10 (ELO) and the reflected wave 20 (ERX) are summed at a coherent receiver for signal mixing 30, and the lowpass filtered intensity is measured by a detector, with measurement window Tmeas ensuring overlap of the mixed chirps.
The in-phase interference measurements 40 and quadrature measurements 50 are obtained as output of the signal mixing process 30. In some example embodiments, FMCW receivers may use a single balanced detector to capture the in-phase interference measurements 40, and the quadrature measurements 50 are computed from the in-phase interference measurements 40 via a Hilbert transform. In some example embodiments, a quadrature demodulator with two pairs of balanced detectors may capture both the in-phase measurements 40 and the quadrature measurements 50. The in-phase measurements 40 are given by:

iI(t)=√R cos(2πγτt+ω0τ−πγτ²+Δϕn(t))+wI(t)
where Δϕn(t)=ϕn(t)−ϕn(t−τ) is the phase change (also referred to as phase noise) and wI is approximately additive white Gaussian noise (AWGN) with variance σw²/2.
The quadrature measurements 50 are given by:

iQ(t)=√R sin(2πγτt+ω0τ−πγτ²+Δϕn(t))+wQ(t)
where wI and wQ are independent and identically distributed.
The observed interference signal is expressed as a complex valued sinusoid of the in-phase measurements 40 and the quadrature measurements 50:

r(t)=iI(t)+jiQ(t)=a exp(j(2πft+θ+η(t)))+w(t)

where a=√R is the signal amplitude, f=γτ is the beat frequency, θ=ω0τ−πγτ² is the phase offset, η(t)=Δϕn(t) is a stationary random process denoting the phase change, and w(t)=wI(t)+jwQ(t) is circularly symmetric complex AWGN with autocorrelation Rw(t)=σw²δ(t). The signal-to-noise-ratio (SNR) due to AWGN is a²/σw². Samples at times tn, n=0, 1, …, N−1 are given as:

rn=a exp(j(2πftn+θ+ηn))+wn
The argument of rn is extracted as atan2(iQ(tn), iI(tn)):

∠rn=2πftn+θ+ηn+ϵn
where ϵn describes the effective phase error due to the AWGN and is known as additive observation phase noise (AOPN). The unwrapped phase is expressed in two different ways, as:

xn=ftn+(θ+ξn)/(2π)
where ξn=ηn+ϵn is the total phase error, or equivalently as

xn=yn+un

where yn=∠rn/(2π) is the extracted wrapped phase (modulo 1), and un∈ℤ is the unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.
The resultant expression expresses the interference signal in terms of its amplitude, the beat frequency, the phase offset, a stationary random process denoting the phase change, and circularly symmetric complex AWGN with autocorrelation. Thereafter, the unwrapped phase is defined as a function of the beat frequency, the phase offset, and the total phase error. The unwrapped phase is a sum of extracted wrapped phase and an unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.
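The measurement model recapped above may be reproduced numerically for illustration (not as part of the disclosed embodiments): the sketch below synthesizes samples rn with a stand-in correlated phase-change process and additive complex noise, then extracts the wrapped phase yn with atan2. All parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs = 1024, 1.0e6                  # number of samples and sample rate (assumed)
a, f, theta = 1.0, 5.0e4, 0.3        # amplitude, beat frequency, offset (assumed)
t = np.arange(n) / fs
eta = np.cumsum(rng.normal(0.0, 2e-3, n))  # stand-in correlated phase change
w = rng.normal(0.0, 0.05, n) + 1j * rng.normal(0.0, 0.05, n)  # complex AWGN
r = a * np.exp(1j * (2.0 * np.pi * f * t + theta + eta)) + w  # samples r_n
y = (np.arctan2(r.imag, r.real) / (2.0 * np.pi)) % 1.0  # wrapped phase, cycles
print(y[:5])
```

The wrapped phase y is the only phase observable at the receiver; recovering the integer cycle counts un from it is the phase unwrapping problem addressed next.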
Depth estimation 70 requires estimating the frequency of the beat signal from the wrapped phase vector using the known phase change statistics 60. The final depth estimate then rescales the frequency estimate by the chirp rate γ and the speed of light c as:

d̂=cf̂/(2γ)
However, directly maximizing the likelihood of the wrapped phase p(y|f, θ) is difficult because the positions of the discontinuities in the wrapped phase y are unknown. If the number of unwrappings u is recovered, yielding the complete data x=y+u, then maximizing the likelihood of the unwrapped phase p(x|f, θ) is straightforward because x is an affine function in correlated Gaussian noise. In this regard, the goal becomes determining the maximum likelihood estimate of the sequence of integer-valued unwrappings u, given the observed data y and initial estimates of the frequency f̂ and phase offset θ̂. The Viterbi algorithm estimates the maximum likelihood sequence by assigning a length to transitions between possible states (the unwrapping values un) and then recursively finding the sequence of states with the shortest path length; the most likely sequence is the one with the shortest path. Based on the unwrapped phase estimate x̂=y+û computed from Viterbi phase unwrapping, the estimates of frequency f̂ and phase offset θ̂ may be refined by fitting an affine function.
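A simplified sketch of Viterbi phase unwrapping follows, for illustration only. It penalizes changes in the residual between consecutive samples as a crude stand-in for the full correlated-noise path metric; the state set, noise level, and smoothness weight are assumptions, not the disclosed values:

```python
import numpy as np

def viterbi_unwrap(y, f_hat, theta_hat, t, offsets=(-1, 0, 1), sigma=0.05, smooth=4.0):
    """Recover integer unwrapping numbers u so that x = y + u lies close to the
    line f_hat * t + theta_hat (phases in cycles). Transitions are scored by the
    change in residual, a simple proxy for correlated phase error."""
    base = f_hat * t + theta_hat
    u0 = np.round(base - y)                      # nominal unwrapping per sample
    K = len(offsets)
    # residual of each candidate unwrapping at each sample, shape (n, K)
    res = (y[:, None] + u0[:, None] + np.array(offsets)[None, :]) - base[:, None]
    n = len(y)
    cost = res[0] ** 2 / sigma ** 2
    back = np.zeros((n, K), dtype=int)
    for i in range(1, n):
        # path length: accumulated cost + transition penalty + state cost
        trans = cost[:, None] + smooth * (res[i][None, :] - res[i - 1][:, None]) ** 2 / sigma ** 2
        back[i] = np.argmin(trans, axis=0)
        cost = trans[back[i], np.arange(K)] + res[i] ** 2 / sigma ** 2
    k = int(np.argmin(cost))                     # state ending the shortest path
    u = np.empty(n)
    for i in range(n - 1, -1, -1):               # backtrack the best sequence
        u[i] = u0[i] + offsets[k]
        k = back[i, k]
    return u

t = np.arange(200) / 200.0
true_f, true_th = 3.2, 0.13                      # assumed ground truth (cycles)
y = (true_f * t + true_th) % 1.0                 # wrapped phase measurements
u = viterbi_unwrap(y, f_hat=3.1, theta_hat=0.1, t=t)  # deliberately rough init
print(np.allclose(y + u, true_f * t + true_th))
```

Even with deliberately inexact initial estimates, the shortest-path sequence recovers the true integer unwrappings, after which an affine fit to y + u refines the frequency and offset.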
The workflow 100A initializes the estimation procedure at a discrete set of M initial frequencies {f0(0), …, fM−1(0)} and phase offsets {θ0(0), …, θM−1(0)}. Starting with an initial pair fm(0) and θm(0), the workflow iteratively performs Viterbi phase unwrapping and refinement of the estimates of frequency f̂ and phase offset θ̂ as long as the path length LV of the unwrapping sequence decreases. The final estimates of frequency f̂ and phase offset θ̂ are then the frequency/offset pair that produced the lowest path length LV over all M grid points.
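The structure of this multi-start alternating loop may be sketched with simple stand-ins (rounding-based unwrapping instead of the Viterbi step, and a squared-residual proxy for the path length LV); the grid values and ground truth below are assumed for illustration:

```python
import numpy as np

def refine(y, t, f0, th0, iters=10):
    """Alternate a stand-in unwrapping step with linear regression."""
    f, th = f0, th0
    for _ in range(iters):
        u = np.round(f * t + th - y)   # stand-in for Viterbi unwrapping
        x = y + u                      # unwrapped phase in cycles
        f, th = np.polyfit(t, x, 1)    # regression refines frequency and offset
    u = np.round(f * t + th - y)
    cost = float(np.sum((y + u - (f * t + th)) ** 2))  # path-length proxy
    return f, th, cost

def multi_start(y, t, f_grid):
    """Run the refinement from each grid point; keep the lowest-cost result."""
    return min((refine(y, t, f0, 0.0) for f0 in f_grid), key=lambda r: r[2])

t = np.arange(200) / 200.0
true_f, true_th = 5.3, 0.2             # assumed ground truth
y = (true_f * t + true_th) % 1.0
f_hat, th_hat, _ = multi_start(y, t, f_grid=[4.0, 5.0, 6.0])
print(f_hat, th_hat)
```

The grid of starting frequencies plays the same role as initialization by a less exact method (e.g., Lorentzian fitting): each start is refined independently, and the pair with the lowest path length wins.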
Having obtained the frequency of the beat signal, the distance estimate to an object from which the reflected wave 20 was reflected may be determined as an output of the depth estimation 70. In this way, the exemplary workflow 100A, by taking into consideration the correlated noise, provides accurate estimates of the beat frequency even when the phase error is significant.
The exemplary workflow 100A may be compiled as a machine executable process or method and may be executed with an FMCW device or lidar system. Details of the components and working of such an FMCW lidar system are described next with reference to
In some example embodiments, the emitter 102 comprises a suitable light source, such as a laser, and frequency modulation electronics. In some embodiments, the laser may be a tunable laser, and the frequency is modulated by controlling the laser oscillation directly. In other example embodiments, the laser may be a fixed-frequency laser, and the frequency may be modulated externally, e.g., using an electro-optical modulator. In some example embodiments, the emitter 102 transmits a wave of radiation (such as the laser beam) to a scene. Prior to transmission, the wave of radiation is linearly modulated in the frequency domain. The linear modulation is affected by impairments, arising for example from source phase noise, that cause non-linearity of the wave of radiation in the frequency domain.
In some example embodiments, the transmitter 104 comprises optics that steer and focus the laser beam on the object 106 to be measured. Focus may be achieved via a lens or combination of lenses. Beam-steering technologies utilized by some example embodiments include, without limitation, mechanically-scanned mirrors, optical phased arrays, microelectromechanical systems (MEMS) mirrors, and liquid-crystal metasurfaces.
The receiver 108 comprises focusing optics to collect light reflected from the object. The receiver may comprise a lens or lenses to focus the reflected light and a free-space-to-fiber coupler to couple the received light into a fiber optic cable.
The mixer 110 optically combines a copy of the light from the emitter 102 (the local oscillator) with light captured by the receiver to generate a beat signal. The beat signal is distorted due to the non-linearity of the linear modulation caused by the impairments due to laser phase noise. In some embodiments, the mixing is performed with a beam-splitter. In other embodiments, mixing is performed by a fiber-optic coupler. A detector of the mixer 110 then converts the optical signal to an analog electronic signal (e.g., a voltage or a current). The detector may be a single photodiode or a combination of photodiodes, such as in a balanced detector. The detector may also include amplification circuitry to increase the level of the electrical signal.
The ADC 112 converts the analog electrical signal to a digital signal at predetermined sampled times. The ADC 112 comprises sampling and quantization electronics that output a digital signal. In some embodiments, the ADC 112 has uniformly-spaced samples in time. In other embodiments, the ADC 112 uses an external reference such as a k-clock to determine sample times to correspond with a linear frequency sweep of the laser.
The processor 114 uses the digitized samples of the mixed signal to estimate the distance to the object. In some embodiments, the processor 114 is an on-board device such as a field-programmable gate array (FPGA). In other embodiments, the processor 114 is an external computer. The memory 116, amongst other things, contains calibration data and parameters needed for the processor to make an accurate distance estimate 118.
The FMCW lidar system 200 comprises an emitter 202, a transmitter 204, a receiver 208, a mixer 210, a pair of analog to digital converters (ADC) 212A and 212B, and a processor 214. The emitter 202 comprises a tunable laser 222 and its modulation controller 220. The tunable laser 222 is fiber-coupled and connected to a splitter 224 that creates two copies of the laser beam. The first copy is sent to the transmitter 204, in which a collimator 226 focuses the light from the optical fiber into a bundle of parallel rays. Beam-steering optics, such as a pair of galvo mirrors 228, are used to direct the transmit beam towards the object 206.
The wavelength of the tunable laser 222 is swept by the modulation controller 220 so that the optical frequency of the laser beam is a linear function of time (i.e., linearly modulated in the frequency domain). As illustrated in
Assuming T to be the sweep duration of the tunable laser 222, the sweep is a linear chirp between the start frequency fmin and the end frequency fmax. Phase noise φn causes the emitted light frequency to randomly deviate from the desired linear sweep. Subsequently the initial phase φ0, the initial angular frequency ω0, and the chirp rate γ are defined. Within one chirp period, the local oscillator (LO) channel has normalized electric field given by:
The received signal electric field ERX(t)=√{square root over (R)}ELO(t−τ) for the received light (RX), has been scaled by the object reflectivity R and delayed by τ=2d/c, where d is the distance from the transmitter 204 to the object 206, and c is the speed of light. For a static lidar system 200 and object 206, the delay τ is constant. Still, when ELO and ERX are optically combined in the mixer 210, the resultant interference signal's frequency differs from the expected beat frequency fbeat due to the phase noise. Depth estimation only uses the interference signal within the measurement window Tmeas ensuring overlap of the mixed chirps.
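The relations above reduce ranging to a frequency measurement: since τ=2d/c and fbeat=γτ, the distance follows as d=c·fbeat/(2γ). A minimal numeric sketch of this conversion (helper names are illustrative, not from the disclosure):

```python
# Beat-frequency-to-distance conversion for a linear FMCW chirp.
C = 299_792_458.0  # speed of light, m/s

def chirp_rate(f_min, f_max, sweep_duration):
    """Chirp rate gamma in Hz/s for a linear sweep from f_min to f_max."""
    return (f_max - f_min) / sweep_duration

def distance_from_beat(f_beat, gamma):
    """d = c * f_beat / (2 * gamma), from tau = 2d/c and f_beat = gamma * tau."""
    return C * f_beat / (2.0 * gamma)
```

For example, a 1 GHz sweep over 1 ms gives γ = 10¹² Hz/s, so a 10 m target produces a beat frequency of roughly 67 kHz.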
Referring to
where Δϕn(t)=φn(t)−φn(t−τ) is the “phase change,” i.e., the phase error in an interferometric measurement due to phase noise in an oscillator such as a laser. Balanced detector 230A filters out the DC terms and common-mode noise by taking the difference between the two photocurrents:
Considering only the time duration within the rect function, the in-phase interference measurement is
where wI is approximated as additive white Gaussian noise (AWGN) with variance σw2/2.
In the Q-branch, the LO signal is first phase-shifted by π/2 radians (90°) to get
Then the LO and RX optical signals are combined to get the photocurrent at each photodiode of the balanced detector 230B:
Balanced detector 230B filters out the DC terms and common-mode noise by taking the difference between the two photocurrents
The quadrature measurement takes the form:
where wQ and wI are independent and identically distributed.
In some example embodiments, the mixer 210 may use a single coupler instead of the optical hybrid to mix the LO and RX electric fields, and a single balanced detector to capture the in-phase measurement. Then the quadrature component is approximated by the Hilbert transform H of the in-phase measurement: {circumflex over (ι)}Q(t)=H{iI(t)}.
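As a sketch of this single-detector variant, `scipy.signal.hilbert` returns the analytic signal iI+jH{iI}, whose imaginary part is the Hilbert transform and thus the approximated quadrature channel:

```python
import numpy as np
from scipy.signal import hilbert

def quadrature_from_inphase(i_I):
    """Approximate the quadrature component from the in-phase measurement.

    scipy.signal.hilbert returns the analytic signal i_I + j*H{i_I};
    its imaginary part approximates the quadrature channel when only a
    single balanced detector is available.
    """
    return np.imag(hilbert(i_I))
```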
Each of the ADCs (212A or 212B) comprises circuits that low-pass-filter, sample, and quantize the electrical signal to produce the in-phase (I) and quadrature (Q) digital measurements. The processor 214 extracts the phase from the I/Q components to perform depth estimation.
In some example embodiments, the FMCW lidar system and the FMCW device may be the same. In some alternate embodiments, the lidar system may be external to the FMCW device, wherein the FMCW device may control one or more operations of the lidar system. Irrespective of its realization, the FMCW lidar system 200 may execute the workflow 100A as a process or method.
The method 250 comprises splitting 3 the incident light into a local beam and a transmission beam. The splitting 3 may be performed by the emitter 202 of the FMCW lidar system 200. The transmission beam is transmitted 5 towards a scene of interest by the transmitter 204 of the FMCW lidar system 200. A reflection of the transmission beam from the scene is received 7 by the receiver 208 of the FMCW lidar system as a reflected beam. One or more objects may reflect the transmission beam in the scene towards the FMCW lidar system. The mixer 210 interferes 9 the received beam reflected from the scene with the local beam to produce an interference pattern. In this regard, the mixer 210 may perform the interference according to principles of the signal mixing 30 process of the workflow 100A.
One or more analog to digital converters (ADC) 212A and 212B sample 11 the interference pattern at linearly spaced frequencies to generate samples of the interference signals in the manner described with reference to the workflow 100A of
The scene may contain one or more objects and upon irradiation, the one or more objects may transmit back a reflection of the transmitted wave of radiation. The reflection of the transmitted wave may be received 303 from the scene by one or more receivers. The method 300 further comprises interfering 305 the copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal. The interference may be performed in the manner described previously with respect to
The wrapped phase 604 is extracted from the measured signal 602. However, the wrapped phase may be insufficient for estimating the frequency in the time domain due to missing information about where and how many wrappings occur. Predicting the location and magnitude of the phase wrapping is further complicated by the deviations due to phase noise. To perform robust frequency estimation, the frequency estimation algorithm 612 thus jointly unwraps the phase and estimates the frequency in an alternating manner. In addition to the wrapped phase 604, the estimation algorithm 612 requires knowledge of the phase noise statistics 606 and an initial frequency estimate 610. Phase noise statistics 606 are known from prior calibration or manufacturer specification of properties such as the linewidth. The initial estimate of frequency 610 is obtained by computing the power spectral density of the measured signal 602 via a Fourier transform.
Given these inputs, the frequency estimation algorithm 612 proceeds in two alternating steps: phase unwrapping 614, which predicts the location and magnitude of the phase wrappings, and linear regression 616, which updates the estimate of the frequency. After a sufficient number of iterations, the latest result of linear regression is considered the final frequency estimate 618.
For simplicity, additional notation is introduced to expand the sinusoid 602 definition to
where a=√{square root over (R)} is the signal amplitude, f=γτ is the beat frequency, θ=ω0τ−γτ2/2 is the phase offset, η(t)=Δϕn(t) is a stationary random process denoting the phase change, and w(t)=wI(t)+jwQ(t) is circularly symmetric complex AWGN with autocorrelation Rw(t)=σw2δ(t). The signal-to-noise-ratio (SNR) due to AWGN is a2/σw2.
Samples at times tn, n=0, 1, . . . , N−1 are given as
The principal argument of rn is extracted as atan2(iQ(tn), iI(tn)) and yields
where ϵn describes the effective phase error due to the AWGN and is known as additive observation phase noise (AOPN). The unwrapped phase is further defined as:
where ξn=ηn+ϵn is the total phase error, yn=∠rn/(2π) is the extracted wrapped phase (modulo 1), and un∈ℤ is the unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.
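The extraction of the wrapped phase and its relation to the unwrapped phase can be sketched directly from these definitions (helper names are illustrative):

```python
import numpy as np

def wrapped_phase(i_I, i_Q):
    """Extract the wrapped phase y_n = angle(r_n) / (2*pi), modulo 1.

    r_n = i_I + j*i_Q; np.arctan2 returns the principal argument in
    (-pi, pi], so y_n lies in (-0.5, 0.5].
    """
    return np.arctan2(i_Q, i_I) / (2.0 * np.pi)

def unwrap_with(u, y):
    """Unwrapped phase x_n = y_n + u_n for integer cycle counts u_n."""
    return y + u
```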
The two equivalent definitions of the unwrapped phase highlight the reason for the two-step estimation process. In the first definition, the unwrapped phase xn=ftn+θ+ξn is a linear function ftn+θ corrupted by additive noise ξn. Estimating the slope f from samples xn of a noisy linear function can be accomplished with many existing techniques, including ordinary least squares (OLS) and generalized least squares (GLS) if the noise is Gaussian. However, the samples xn are not available. Instead, only the wrapped phase yn is known. Thus, the problem becomes how to determine the unwrapping numbers un, which is a hard problem in the presence of noise. Still, preliminary estimates of frequency f and phase offset θ can guide the unwrapping, with the remaining uncertainty being attributed to the phase error ξn. Hence, estimation cycles through the two definitions of the unwrapped phase xn to recover the true frequency f. Details of the phase unwrapping step 614 of the frequency estimation algorithm 612 are described next with reference to
The goal is to find the approximate maximum likelihood estimate given by:
assuming the unwrapping process is causal, and using a finite memory of length C. Phase unwrapping proceeds sample by sample by first estimating the likelihood of a particular unwrapping number un at sample n. Then the Viterbi algorithm is used to determine the most likely sequence of unwrapping numbers. The Viterbi algorithm estimates the maximum likelihood sequence by assigning a length (the negative log-likelihood) to transitions between possible unwrapping values un and then recursively finding the sequence of states with the shortest path length. The number of possible unwrapping values at each time step is large, so per-survivor processing is used to keep only a fixed number K of survivor paths.
The inputs to the phase unwrapping step 614 are the wrapped phase vector y, initial frequency estimate {circumflex over (f)} and phase offset estimate {circumflex over (θ)}, the phase noise statistics, time samples t=[t0, t1, . . . , tN-1]T, and the number of survivors K. The phase noise statistics are contained in two components: the cross-covariance vector pC and the auto-covariance matrix QC. The cross-covariance vector pC has elements:
and auto-covariance matrix QC has elements:
where i, j∈(0, . . . , C−1). Both components are completely determined by the autocorrelation functions of the laser source phase noise and the additive noise of the receiver. Since both f and the phase noise statistics depend on τ in the FMCW lidar setting, the covariance matrix QC and correlation vector pC can be computed given {circumflex over (f)}.
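As a sketch of how these statistics might be assembled from a stationary phase-error autocorrelation function, assuming uniformly spaced samples and the standard LMMSE lag convention (predicting sample n from the C preceding samples; the disclosure's exact element formulas may differ):

```python
import numpy as np

def lmmse_stats(r_xi, C, dt=1.0):
    """Build the auto-covariance matrix Q_C and cross-covariance vector p_C
    from a stationary phase-error autocorrelation function r_xi(lag).

    Assumed structure (one-step LMMSE prediction from the C most recent
    samples, uniformly spaced by dt):
      Q_C[i, j] = r_xi((i - j) * dt)    for i, j in 0..C-1
      p_C[i]    = r_xi((C - i) * dt)    lag from sample n back to n-C+i
    """
    idx = np.arange(C)
    Q = r_xi((idx[:, None] - idx[None, :]) * dt)
    p = r_xi((C - idx) * dt)
    return Q, p
```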
Phase unwrapping proceeds as follows. For each sample time tn, there are K existing possible paths for the unwrapped phase x1:n-1(1), . . . , x1:n-1(K), which are determined from the known wrapped phase y and the candidate unwrapping numbers û1:n-1(1), . . . , û1:n-1(K). Notation zn-C:n-1 indicates elements [zn-C, . . . , zn-1]T, which are a subset of the vector z. Each existing survivor path k has a length Ln-1(k). For any sample n=C+1, . . . , N, and for each survivor k=1, . . . , K, the C previous unwrapping estimates in the path are
The line given by initial frequency estimate {circumflex over (f)} and phase offset estimate {circumflex over (θ)} is {circumflex over (f)}t+{circumflex over (θ)}. The deviation from this line is predicted by estimating the phase error at sample n using the previous unwrapping estimates.
In some embodiments, the phase error estimation is the Linear Minimum Mean-Squared Error (LMMSE) estimate, given as
The LMMSE phase error estimate has prediction error variance σe2. In other embodiments, the phase error can be predicted more quickly but with less accuracy via nearest-neighbor estimation:
The predicted unwrapped phase is then {tilde over (x)}n(k)={circumflex over (f)}tn+{circumflex over (θ)}+ξn(k).
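The two phase-error predictors can be sketched as follows, using the standard LMMSE form p_C^T Q_C^{-1} ξ for the linear estimator (the disclosure's exact weighting may differ; helper names are illustrative):

```python
import numpy as np

def predict_phase_error_lmmse(xi_prev, Q_C, p_C):
    """LMMSE prediction of the phase error from the C previous estimates:
    xi_hat = p_C^T Q_C^{-1} xi_prev (standard linear predictor form)."""
    w = np.linalg.solve(Q_C, p_C)   # solve rather than explicit inverse
    return float(w @ xi_prev)

def predict_phase_error_nn(xi_prev):
    """Nearest-neighbor prediction: reuse the most recent phase error."""
    return float(xi_prev[-1])
```

For a Markov (exponential-autocorrelation) phase error, the LMMSE weights collapse onto the most recent sample, which is why the nearest-neighbor shortcut is a reasonable fast approximation.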
Given the predicted phase error, the likelihood for each value of un(k) is:
The prediction error variance for the LMMSE estimator is:
The K values of un(k) closest to yn−{circumflex over (f)}tn−{circumflex over (θ)}−ξn(k), denoted un(k,l), l=1, . . . , K, have the largest likelihood and are considered as candidates for the path continuation. The total unwrapping path length (negative log-likelihood) for each candidate is
The same process is repeated for all K survivors, resulting in candidate path lengths {{tilde over (L)}n(1,1), . . . , {tilde over (L)}n(k,l), . . . , {tilde over (L)}n(K,K)}. Each of the K existing survivors has K possible unwrapping values un(k,l), for a maximum of K2 total possible paths. Of these K2 possible paths, only the K most likely (i.e., shortest) paths survive. The K new survivors extend some or all of the previous survivors, and the unwrapping history is updated to û1:n(1), . . . , û1:n(K). For samples n=1, . . . , C, the same procedure holds, except only the previous n−1 samples are used for phase error estimation.
After the final sample n=N, the most likely unwrapping sequence û1:N is the survivor k with the shortest path length LN=LN(k). The maximum likelihood unwrapped phase is {circumflex over (x)}=y+û1:N.
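A deliberately simplified sketch of the per-survivor Viterbi recursion follows. It substitutes nearest-neighbor phase-error prediction for the LMMSE predictor, searches only a small window of unwrapping values around the most likely one, and uses a Gaussian negative-log-likelihood branch metric; the disclosure's full procedure is richer than this.

```python
import numpy as np

def viterbi_unwrap(y, t, f, theta, sigma_e2, K=4):
    """Per-survivor Viterbi phase unwrapping (simplified sketch).

    Uses nearest-neighbor phase-error prediction and a Gaussian
    negative-log-likelihood branch metric, keeping K survivor paths.
    Returns the estimated unwrapping sequence and its path length.
    """
    survivors = [(0.0, [], 0.0)]  # (path length, u history, last phase error)
    for n in range(len(y)):
        line = f * t[n] + theta
        candidates = []
        for length, u_hist, xi_prev in survivors:
            base = int(round(line + xi_prev - y[n]))  # most likely unwrapping
            for du in (-1, 0, 1):
                u = base + du
                resid = (y[n] + u) - (line + xi_prev)
                branch = resid ** 2 / (2.0 * sigma_e2)  # negative log-likelihood
                xi_new = (y[n] + u) - line              # realized phase error
                candidates.append((length + branch, u_hist + [u], xi_new))
        candidates.sort(key=lambda c: c[0])
        survivors = candidates[:K]                      # K shortest paths survive
    best_len, u_hat, _ = survivors[0]
    return np.array(u_hat), best_len
```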
Having estimated the maximum likelihood unwrapped phase, the remaining goal is to refine the frequency.
Let 1N=[1, . . . , 1]T represent a length-N vector of 1s, t=[t1, . . . , tN]T represent a length-N vector of time samples, A=[t, 1N] represent a matrix concatenating the vectors t and 1N, β=[f, θ] represent a vector of parameters to estimate, and QN represent the phase error auto-covariance matrix for the length-N sequence. In some embodiments, the matrix inverse QN−1 may be directly computed numerically. In other embodiments, the matrix inverse QN−1 may be approximated by first approximating the Toeplitz matrix QN as a circulant matrix and then inverting the circulant approximation via the fast Fourier transform algorithm. Linear regression is used to estimate the slope (frequency f) and the intercept (phase offset θ) of the line. In some embodiments, linear regression may be performed via generalized least squares estimation (GLS):
The advantage associated with using GLS is that it provides a more accurate update of the frequency and the phase offset of the beat signal. In other embodiments, for faster updating, linear regression may be performed via ordinary least squares, ignoring the phase error statistics to avoid inversion of QN, i.e.,
The estimates {circumflex over (f)}, {circumflex over (θ)} are then used in the subsequent iteration for phase unwrapping.
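The refinement step can be sketched as follows: the GLS estimator solves β=(A^T Q_N^{-1} A)^{-1} A^T Q_N^{-1} x using linear solves rather than an explicit inverse, while the OLS shortcut ignores Q_N entirely (helper names are illustrative):

```python
import numpy as np

def refine_gls(x_hat, t, Q_N):
    """Generalized least squares fit of x_hat ~ f*t + theta with
    phase-error covariance Q_N."""
    A = np.column_stack([t, np.ones_like(t)])  # design matrix [t, 1_N]
    Qi_A = np.linalg.solve(Q_N, A)
    Qi_x = np.linalg.solve(Q_N, x_hat)
    beta = np.linalg.solve(A.T @ Qi_A, A.T @ Qi_x)
    return beta[0], beta[1]   # (f, theta)

def refine_ols(x_hat, t):
    """Ordinary least squares: ignore the phase-error covariance."""
    f, theta = np.polyfit(t, x_hat, 1)
    return f, theta
```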
The frequency estimation algorithm 612 of
Some other key aspects of the frequency estimation algorithm 612 of
Initialization: In some embodiments, the iterative algorithm 612 may be performed for a single pair of initial frequency estimate {circumflex over (f)}0 and initial phase offset estimate {circumflex over (θ)}0. In some embodiments, the initial frequency estimate {circumflex over (f)}0 is the frequency corresponding to the peak of the periodogram of the interference signal. In other embodiments, the initial frequency estimate {circumflex over (f)}0 is the center frequency of a Lorentzian function fit to the power spectral density of the interference signal. In some embodiments, the algorithm may be performed with M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} and M initial phase offset estimates {{circumflex over (θ)}01, . . . , {circumflex over (θ)}0M}. Then the final frequency estimate {circumflex over (f)} is the one with the shortest unwrapping path over all initializations. In some embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} may be linearly spaced frequencies on a grid. In other embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} may be random and uniformly distributed frequencies. In other embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} include the results of alternative frequency estimation techniques, such as the peak of the periodogram of the interference signal or the center frequency of a Lorentzian function fit to the power spectral density of the interference signal.
Noise Statistics: In some embodiments, the laser phase noise ϕn may be dominated by spontaneous emission, so the frequency noise ωn=dϕn/dt is assumed to be white and Gaussian. The frequency noise may have a constant PSD Sω(ω)=Δω, where Δν=Δω/(2π) is the full-width at half-maximum laser linewidth in Hz, and is assumed to be known. In such scenarios, the phase noise is a Wiener process. For a fixed delay τ, the resulting phase change Δϕn(t; τ) is a zero-mean, stationary Gaussian process with triangular autocorrelation function
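A standard closed form consistent with this description (assumed here, since the equation itself is not reproduced above) is R(s)=2πΔν·(τ−|s|) for |s|≤τ and 0 otherwise, so the phase-change variance is 2πΔντ:

```python
import numpy as np

def phase_change_autocorr(s, tau, linewidth_hz):
    """Triangular autocorrelation of the phase change for Wiener phase
    noise (standard result for a laser of FWHM linewidth dv, in Hz):
        R(s) = 2*pi*dv * (tau - |s|)   for |s| <= tau, else 0.
    """
    s = np.asarray(s, dtype=float)
    return 2.0 * np.pi * linewidth_hz * np.clip(tau - np.abs(s), 0.0, None)
```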
In other embodiments, the laser phase noise ϕn may be affected by the wavelength sweep of the laser emitter. As a result, the frequency noise may be colored, not white. In some embodiments, the frequency noise may have a first-order low-pass transfer function with Power Spectral Density given by:
where αω is a scaling parameter and ω1 is the cutoff frequency. Then the phase change PSD is
In some embodiments, the frequency noise may have a second-order low-pass transfer function with PSD
where αω is a scaling parameter, ω2 is the cutoff frequency, and q is the quality factor. In other embodiments, the frequency noise has multiple sources, including white noise and flicker (1/f) noise.
Given the measured signal magnitudes |rn|, the scaled AOPN term 2π ϵn has a zero-mean von Mises distribution with a complicated dependence on a, |rn|, and σw2. The AOPN variance can be approximated as
The vehicle 802 may be a manually driven vehicle, a semi-autonomous vehicle or a fully autonomous self-driving vehicle and may be configured for communication with the base station 806. In this regard, the vehicle 802 may be equipped with suitable components to execute remote sensing, data communication, and data processing. Towards this end, it may be contemplated that the vehicle may be equipped with an onboard FMCW lidar system such as the ones described earlier. An emitter of the lidar system transmits a wave or beam of radiation (shown as solid line) towards the obstacles 808 and receives a reflection (shown as dotted line) of the transmitted wave/beam from the scene ahead. Since the vehicle 802 may be moving and/or due to errors in the source laser, the depth estimation of the obstacles 808 may not be correctly inferable using conventional techniques.
The vehicle 802 may invoke the FMCW lidar system to perform accurate depth estimation for the obstacles 808. The FMCW lidar system may fully or partially be onboard the vehicle 802. In embodiments where the FMCW lidar system is partially onboard the vehicle 802, the computation steps leading to frequency estimation may be performed via edge computing on the base station 806. In some example embodiments, the vehicle 802 may determine through the FMCW lidar system, the correct range of the obstacles 808 and thereby determine a geo-location of the obstacles 808. Accordingly, the onboard controller of the vehicle 802 may reroute the vehicle 802 to avoid collision with the obstacles 808. In some example embodiments, the vehicle 802 may additionally or alternately convey at least the geo-location of the obstacles 808 to the base station 806 for updating a map of the area in which the road link 804 exists.
Although the example use case is described with reference to an on-road vehicle, example embodiments described herein may be applicable to any type of vehicle such as an aerial vehicle, an aquatic vehicle, a spacecraft, and the like. Other example uses of some embodiments include industrial robotics, computer vision systems and autonomous driving.
The computer 911 can include a power source 954; depending upon the application, the power source 954 may optionally be located outside of the computer 911. Linked through bus 956 can be a user input interface 957 adapted to connect to a display device 948, wherein the display device 948 can include a computer monitor, camera, television, projector, or mobile device, among others. A network interface controller (NIC) 934 is adapted to connect through the bus 956 to a network 936, wherein image data or other data, among other things, can be rendered on a third-party display device, third-party imaging device, and/or third-party printing device outside of the computer 911.
Still referring to
The above description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements. Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particular described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.
Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks. Various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Embodiments of the present disclosure may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments. Further, use of ordinal terms such as “first,” “second,” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure.