FREQUENCY ESTIMATION SYSTEMS AND METHODS FOR COHERENT RANGE ESTIMATION

Information

  • Patent Application
  • Publication Number
    20240337735
  • Date Filed
    March 28, 2023
  • Date Published
    October 10, 2024
Abstract
A distance estimation method comprises transmitting a wave of radiation modulated in frequency domain by an emitter to a scene, receiving a reflection of the transmitted wave from the scene, and interfering a copy of the transmitted wave with the received reflection to generate a sequence of samples of the beat signal with wrapped phases in a time domain. The method also comprises estimating a frequency of the beat signal in the time domain in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of the emitter and a linear regression fitting the frequency of the beat signal into the unwrapped phases of the beat signal.
Description
TECHNICAL FIELD

The present disclosure relates generally to object detection in a scene using frequency modulated continuous wave (FMCW) techniques, and more particularly, to FMCW based devices and methods for coherent range estimation of an object in the scene.


BACKGROUND

Several application areas, such as autonomous vehicles, industrial robotics, navigation, aerospace, meteorological measurement, and atmospheric environment monitoring, require precise detection of objects for performing critical functions. Reliable object detection is usually performed using remote sensing techniques such as Light Detection and Ranging (LIDAR), which uses light in the form of a pulsed laser to measure ranges (variable distances) to objects in a scene. Several types of lidar are available today for range estimation of objects in a scene of interest. Conventional pulsed lidars transmit a short pulse of light towards an object, and the distance to the object is calculated from the time that elapses until the reflected light returns to the lidar system. Since pulsed lidar is incoherent, measuring only the intensity of light at the detector, it is susceptible to errors due to ambient light or interference from other lidars.


An alternative approach to distance estimation via laser light is frequency-modulated continuous-wave (FMCW) lidar. Unlike pulsed lidar, FMCW lidar has constant illumination power, which makes it compatible with integrated photonics. FMCW lidar is a coherent ranging technology that measures distance by mixing a local copy of the transmitted laser beam with the light reflected back to the receiver. The beat frequency of the resulting interference signal is proportional to the reflector range, so distance measurement becomes a frequency estimation problem.


Despite such advantages, FMCW lidar's accuracy is often limited by the laser source phase noise. As with any oscillator, a laser cannot produce a single pure frequency of light. Instead, thermal changes, mechanical vibration, and even the quantum nature of photons cause randomness in the phase of oscillation, which corresponds to deviations in the emitted frequency. When a noisy laser is used for FMCW lidar, the laser phase noise likewise causes the interference signal to deviate from the true beat frequency. These deviations increase as a function of the range to the object, so that conventionally there is a maximum range (the coherence range) that can be measured without excessive error. This understanding is based on classical frequency estimation techniques, which assume a sinusoid in additive noise and for which the maximum likelihood estimate of the frequency is the peak of the periodogram. However, peak-finding methods perform poorly when phase noise distributes the signal power over a range of frequencies.


Conventional techniques perform frequency estimation in the frequency domain. However, this approach has a drawback: phase noise statistics are not well understood in the frequency domain. Attempts to characterize the phase noise statistics have increased the complexity of the measurement system and/or required more hardware and computational resources. Accordingly, better approaches for estimating the frequency of beat signals are needed.


SUMMARY

It is an object of some embodiments to provide improved techniques for frequency estimation for determining the range of objects in a scene of interest. In this regard, it is also an object of some embodiments to provide systems and methods for performing frequency estimation in the time domain, in which the phase noise statistics are modeled explicitly and are better described and understood. Some example embodiments aim to perform frequency estimation in the time domain and take advantage of the known phase noise statistics. Some example embodiments are specifically directed towards providing frequency estimation techniques for scenarios where the maximum distance to be measured is greater than the coherence length of the light source.


Some example embodiments are based on the realization that for coherent ranging techniques, the object distance measurement task becomes a frequency estimation problem since the distance to an object in a scene is a function of the beat frequency of the interference signal resulting from a mixing of a local copy of the transmitted beam with the light reflected back to the receiver. In coherent ranging techniques such as FMCW, depth measurement is affected by correlated noise. Accordingly, some example embodiments are based on a realization that for FMCW lidar, the laser phase noise causes the interference signal to deviate from the true beat frequency and as such the beat frequency estimation is prone to error if the phase noise statistics are not taken into consideration. In this regard, some example embodiments are based on the understanding that the statistics of the phase noise can be modeled explicitly in the time domain since they are not well understood in the frequency domain.


Some example embodiments also realize that frequency estimation in the time domain may be performed by a two-stage process comprising phase unwrapping and linear regression. In this regard, some example embodiments take into consideration prior information about the linear nature of the underlying unwrapped phase. Some example embodiments realize that FMCW depth measurement is affected by correlated noise and therefore incorporate phase error estimation into the alternating optimization. Towards this end, example embodiments approach frequency estimation from the perspective of phase unwrapping, explicitly accounting for correlations in the phase noise.


It is also an object of some example embodiments to determine the maximum likelihood unwrapping sequence of wrapped phases using linear minimum mean squared error estimation of the phase error. Some example embodiments recognize that the phase error statistics are easier to derive than the resulting power spectral density (PSD) of the noisy sinusoidal measurement. Therefore, example embodiments of the present disclosure are capable of accommodating non-white laser frequency noise distributions into the frequency estimation task.


Various example embodiments consider laser-based sources for frequency estimation as part of the bigger range estimation problem. In this area, FMCW lidars are promising when it comes to applications such as autonomous navigation because they are more robust than pulsed lidar to ambient light or interference from other lidar systems. However, as discussed above, the phase noise in the FMCW lidar causes the instantaneous frequency to deviate from the desired frequency modulation and decreases the temporal coherence in the interfered light beams. Some example embodiments are based on the recognition that the loss of temporal coherence becomes more severe as the distance to the target increases. The coherence range—determined by the amount of phase noise—is considered to be the distance beyond which ranging cannot be reliably performed. Some example embodiments also realize that in order to reach long distances (e.g., needed for navigation), a hardware solution is to use lasers with low linewidth (i.e., low phase noise). However, such low-linewidth lasers tend to be expensive and increase the hardware complexity of the overall system.


Several example embodiments realize that one attempt to increase range may be by accounting for the effect of phase noise. For example, it is a realization of some example embodiments that a Lorentzian distribution may be fitted to the power spectral density (PSD) of the measured interference signal, since, assuming a white frequency noise laser model, the PSD is asymptotically Lorentzian as a function of range. Some other example embodiments also realize that to address the problem of phase noise affecting lidar measurements, one solution is to perform frequency-domain estimation by fitting a Lorentzian curve to the power spectral density of the measurement. However, the Lorentzian curve is an approximation to the interference PSD, and the fitting under an assumption of Gaussian statistics is not optimal.


Some other example embodiments also realize that to address the problem of phase noise affecting lidar measurements, another solution is to use additional reference arms and fixed depths to measure properties of phase impairments (including phase noise and nonlinearities). Particularly, correction of impairments (so standard frequency-domain peak-finding can be used) may be achieved, instead of including impairment modeling in the estimation process. However, such solutions require additional hardware and calibration.


In view of the aforementioned realizations, various example embodiments are based on the understanding that attempts to correct for phase errors, or to fit the PSD with a curve that makes a simplistic assumption about the distribution, lead to suboptimal or infeasible solutions. Some example embodiments also recognize that such approaches cannot achieve acceptable levels of precision, especially under high signal-to-noise ratio (SNR) conditions. Towards these ends, several example embodiments propose the use of the exact phase noise statistics in the time domain to achieve better performance, especially at high SNR. Additionally, the time-domain method provided by some example embodiments uses the statistics for realistic laser models, instead of the naïve white frequency noise model.


As such, one primary problem for some example embodiments is how to best account for the effects of phase noise in the ranging process. In this regard, some example embodiments realize that the phase noise statistics are easiest to describe in the time domain (i.e., directly in the phase of the interference signal) rather than in the frequency domain after a Fourier transform. Specifically, some example embodiments describe the interference signal phase error in closed form for both white and coloured frequency noise, as compared to the PSD which is only known in closed form for white frequency noise. It is also a realization of some example embodiments that the phase error is a stationary Gaussian process, so it is completely described by its autocorrelation function. Some example embodiments are based on the understanding that using a quadrature demodulator, it is possible to extract the interference signal phase to perform phase-based frequency estimation. FMCW lidar systems conventionally use a single balanced detector, whereas a quadrature demodulator contains two balanced detectors for direct in-phase and quadrature measurements.


Some example embodiments provide solutions for frequency estimation via phase unwrapping. Some example embodiments exploit the phase error correlations for unwrapping the measured phase, and then again use the phase error correlations for linear regression of the unwrapped phase. In this regard, some example embodiments provide an iterative algorithm based on the above principles that can be initialized by—and improve upon—less exact methods (e.g., Lorentzian fitting), showing better accuracy at high SNR.


In order to achieve the aforementioned objectives and advantages, some example embodiments provide systems, methods, and programs for coherent range estimation of an object in the scene.


For example, some example embodiments provide an FMCW device comprising an emitter configured to transmit at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. A receiver receives a reflection of the transmitted wave from the scene, and a mixer interferes a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal. A pair of analog to digital converters generate a sequence of samples of the beat signal with wrapped phases in a time domain. A processor is configured to estimate a frequency of the beat signal in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal. The estimated frequency of the beat signal may be utilized by the circuitry to estimate a distance to an object in the scene.


Some example embodiments also provide a frequency estimation method utilizing an FMCW device, the method comprising transmitting at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. The method further comprises receiving a reflection of the transmitted wave from the scene, interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal and generating a sequence of samples of the beat signal with wrapped phases in a time domain. The method also comprises estimating a frequency of the beat signal in the time domain in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal.


Some example embodiments also provide a non-transitory computer readable medium having stored thereon instructions executable by a computer for performing a frequency estimation method utilizing an FMCW device. The frequency estimation method comprises transmitting at least one wave of radiation to a scene, wherein the transmitted wave is linearly modulated in a frequency domain subject to impairments causing a non-linearity of the transmitted wave in the frequency domain. The method further comprises receiving a reflection of the transmitted wave from the scene, interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal, and generating a sequence of samples of the beat signal with wrapped phases in a time domain. The method also comprises estimating a frequency of the beat signal in the time domain in an iterative manner until a termination condition is met. The iterative estimation of the frequency of the beat signal is based on phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of the unwrapped phase of the beat signal and a linear regression fitting the frequency of the beat signal into the unwrapped phase of the beat signal.


According to some example embodiments, the generated beat signal is distorted due to the non-linearity of the linear modulation caused by the impairments. In some example embodiments, a current iteration of the iterative estimation includes determining, for each sample of the sequence of samples of the beat signal, a current phase error and a phase unwrapping number fitting a previous frequency of the beat signal and a previous phase offset of the beat signal determined during a previous iteration. The current phase error for a current sample in the sequence of the beat signal is correlated with a previous phase error for a previous sample in the sequence of the beat signal by predetermined phase noise statistics. The current iteration of the iterative estimation also includes updating a current frequency of the beat signal and a current phase offset of the beat signal for the current iteration based on the determined current phase error and the determined phase unwrapping number.


According to some example embodiments, the iterative estimation of the frequency is based on alternating optimization that includes a Viterbi algorithm determining the current phase error and the phase unwrapping number probabilistically to maximize a likelihood of their fitting in the entire sequence of samples of the beat signal. For each current sample of the sequence of samples of the beat signal, the Viterbi algorithm uses the previous phase errors and the previous phase unwrapping numbers determined for the previous samples for causal estimation of a current phase error and a current phase unwrapping number. The causal estimation of the current phase error and the current phase unwrapping number is determined via a linear minimum mean squared error estimation.





BRIEF DESCRIPTION OF THE DRAWINGS

The presently disclosed embodiments will be further explained with reference to the following drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.



FIG. 1A illustrates a workflow for depth estimation in a scene using principles of a frequency modulation continuous wave device, according to some example embodiments;



FIG. 1B shows a schematic of an FMCW lidar system for producing an estimate of the distance to an object, according to some example embodiments;



FIG. 2A illustrates a detailed schematic of an FMCW lidar system, according to some embodiments;



FIG. 2B illustrates a swept-frequency signal emitted from a tunable laser and its delayed copy captured at a receiver of an FMCW lidar system, according to some example embodiments;



FIG. 2C illustrates an exemplar method for distance estimation using an FMCW lidar system, according to some example embodiments;



FIG. 3 illustrates an iterative frequency estimation method utilizing principles of a FMCW lidar system, according to some example embodiments;



FIG. 4 illustrates a method describing one iteration of the iterative frequency estimation method of FIG. 3, according to some example embodiments;



FIG. 5 illustrates an algorithm of Viterbi phase unwrapping for frequency estimation, according to some example embodiments;



FIG. 6 illustrates schematics of a process of frequency estimation based on the signal phase, according to some example embodiments;



FIGS. 7A and 7B jointly illustrate a detailed schematic of the phase unwrapping step of the frequency estimation algorithm, according to some example embodiments;



FIG. 7C illustrates the process of frequency refinement via linear regression in the frequency estimation algorithm, according to some example embodiments;



FIG. 8 illustrates a scenario depicting an example use case of an FMCW lidar system, according to some example embodiments; and



FIG. 9 illustrates a block diagram of a system for implementing an FMCW lidar system, according to some example embodiments.





While the above-identified drawings set forth presently disclosed embodiments, other embodiments are also contemplated, as noted in the discussion. This disclosure presents illustrative embodiments by way of representation and not limitation. Numerous other modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of the presently disclosed embodiments.


DETAILED DESCRIPTION

The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.


Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.


Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.


Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.


There has been a widescale adoption of imaging techniques in several application areas owing to the need to understand the environment around a subject in order to take informed decisions. For example, in remote sensing applications, it is often desired that the physical characteristics of an area be described in as much clarity as possible. In other applications such as autonomous and semi-autonomous vehicles, taking time-bound decisions is critical for the operation of such vehicles. For all such applications it is essential that depth estimation of objects in the scene of interest be performed accurately. Precise depth estimation is achievable by taking into consideration the errors that occur when measuring the scene. In most such scenarios it is the illumination source that introduces such errors into the measurements.


Lidar is an increasingly popular sensing modality for ranging applications varying from autonomous driving to industrial robotics and lunar navigation. Conventional pulsed lidar transmits a short pulse of light toward an object, and the distance to the object is calculated from the time that elapses until the reflected light returns to the lidar system. Since pulsed lidar is incoherent, measuring only the intensity of light at the detector, it is susceptible to errors due to ambient light or interference from other lidars. An alternative approach to distance estimation via laser light is frequency-modulated continuous-wave (FMCW) lidar. Unlike pulsed lidar, FMCW lidar has constant illumination power, which makes it compatible with integrated photonics. FMCW lidar is a coherent ranging technology that measures distance by mixing a local copy of the transmitted laser beam with the light reflected back to the receiver. The beat frequency of the resulting interference signal is proportional to the reflector range, so distance measurement becomes a frequency estimation problem.


However, as with any oscillator, a laser cannot produce a single pure frequency of light. Instead, thermal changes, mechanical vibration, and even the quantum nature of photons cause randomness in the phase of oscillation, which corresponds to deviations in the emitted frequency. When a laser is used for FMCW lidar, the laser phase noise likewise causes the interference signal to deviate from the true beat frequency. Under the common assumption of a sinusoid in additive noise, there is a maximum range that can be measured without excessive error. The maximum likelihood estimate of the frequency of a sinusoid in additive noise is the peak of the periodogram, but peak-finding methods perform poorly when phase noise distributes the signal power over a range of frequencies.


Even when the distance to an object is greater than the coherence length of the laser, an FMCW lidar measurement will still contain information about the distance. However, updated techniques are needed that take the statistics of the phase noise into account when performing frequency estimation. In general, the statistics of the phase noise can be modeled explicitly in the time domain. However, frequency estimation is usually performed in the frequency domain, and the phase noise statistics are less well understood in the frequency domain. Instead, when the maximum distance to be measured is greater than the coherence length of the laser, it is useful to perform frequency estimation in the time domain and take advantage of the known phase noise statistics.


Some solutions aim to perform frequency estimation in the time domain by phase unwrapping and linear regression. Many of these algorithms perform a naïve phase unwrapping procedure that does not take the linear nature of the underlying signal into account. More sophisticated methods use the prior information that the underlying unwrapped phase is linear in an algorithm that alternates between phase unwrapping using estimates of the frequency and phase offset, and updated estimation of the frequency and phase offset given the estimated unwrapping. However, none of these methods consider correlated noise settings. Since FMCW depth measurement is affected by correlated noise, example embodiments disclosed herein incorporate phase error estimation into the alternating optimization.


Given a candidate frequency, the proposed solutions approximately recover the maximum likelihood unwrapping sequence using a Viterbi algorithm and the phase noise statistics. The algorithm then alternates between unwrapping and frequency estimate refinement until convergence. The proposed solutions consistently achieve superior performance at long range or with large-linewidth lasers when the signal-to-noise ratio is sufficiently high. These and several other advantages emerge from the disclosed systems, methods and programs provided by the example embodiments.


A Frequency Modulated Continuous Wave lidar or FMCW lidar system is a special type of lidar system that measures both distance and velocity of moving objects. This is achieved by continuously varying the frequency of the transmitted signal by a modulating signal at a known rate over a fixed time period. This modulation is often used for a very precise distance measurement at close range by phase comparison of the two echo signal frequencies. FIG. 1A illustrates a workflow 100A for depth estimation in a scene using principles of a frequency modulation continuous wave (FMCW) device, according to some example embodiments.


A transmission wave for transmission into a scene is generated by one or more lasers of a lidar and fired towards the scene through suitable transmitters of the FMCW lidar system. A copy 10 of the transmitted wave is also stored locally for further processing. The transmitted wave is reflected from the scene and a reflected wave 20 is collected by suitable receivers of the FMCW lidar system. The copy 10 of the transmitted wave and the reflected wave 20 are fed into a mixer for signal mixing 30 which outputs measurements pertaining to the resultant interference signal. As a part of the mixing process, the optical signals may be converted into analog electronic signals which are then digitized to produce samples of the interference signal.


Let T be the sweep duration, ϕ0 the initial phase, ω0 the initial angular frequency, γ the chirp rate, and ϕn the source phase noise. Within one chirp period, the transmitted wave copy 10 channel has normalized electric field given by:











E_{LO}(t) = \mathrm{rect}\left(\frac{t - T/2}{T}\right) \cos\left(\phi_0 + \omega_0 t + \frac{\gamma t^2}{2} + \phi_n(t)\right)   (1)







The received signal from the reflected wave 20 has an electric field ERX(t)=√R ELO(t−τ), which has been scaled by the target reflectivity R and delayed by τ=2d/c, where d is the target distance and c is the speed of light. The electric fields of the transmitted wave copy 10 (ELO) and the reflected wave 20 (ERX) are summed at a coherent receiver for signal mixing 30, and the lowpass-filtered intensity is measured by a detector, with a measurement window Tmeas ensuring overlap of the mixed chirps.


The in-phase interference measurements 40 and quadrature measurements 50 are obtained as output of the signal mixing process 30. In some example embodiments, FMCW receivers may use a single balanced detector to capture the in-phase interference measurements 40, and the quadrature measurements 50 are computed from the in-phase interference measurements 40 via a Hilbert transform. In some example embodiments, a quadrature demodulator with two pairs of balanced detectors may capture both the in-phase measurements 40 and the quadrature measurements 50. The in-phase measurements 40 are given by:












i_I(t) = \sqrt{R} \cos\left[\omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t)\right] + w_I(t),   (2)







where Δϕn(t)=ϕn(t)−ϕn(t−τ) is the phase change (also referred to as phase noise) and wI is approximately additive white Gaussian noise (AWGN) with variance σw²/2.


The quadrature measurements 50 are given by:












i_Q(t) = \sqrt{R} \sin\left[\omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t)\right] + w_Q(t),   (3)







where wI and wQ are independent and identically distributed.


The observed interference signal is expressed as a complex valued sinusoid of the in-phase measurements 40 and the quadrature measurements 50.











r(t) = i_I(t) + j\, i_Q(t) = a \exp\left\{ j 2\pi \left[ f t + \theta + \eta(t) \right] \right\} + w(t),   (4)







where a=√R is the signal amplitude, f=γτ is the beat frequency, θ=ω0τ−γτ²/2 is the phase offset, η(t)=Δϕn(t) is a stationary random process denoting the phase change, and w(t)=wI(t)+jwQ(t) is circularly symmetric complex AWGN with autocorrelation Rw(t)=σw²δ(t). The signal-to-noise ratio (SNR) due to AWGN is a²/σw². Samples at times tn, n=0, 1, . . . , N−1 are given as:










r_n = a \exp\left\{ j 2\pi \left[ f t_n + \theta + \eta_n \right] \right\} + w_n.   (5)







The argument of rn is extracted as atan2(iQ(tn), iI(tn)):














\angle r_n = 2\pi \left[ f t_n + \theta + \eta_n + \epsilon_n \right] \pmod{2\pi},   (6)







where ϵn describes the effective phase error due to the AWGN and is known as additive observation phase noise (AOPN). The unwrapped phase is expressed in two different ways, as:











x_n = f t_n + \theta + \xi_n,   (7)







where ξnnn is the total phase error, or equivalently as











x_n = y_n + u_n,   (8)







where yn=∠rn/(2π) is the extracted wrapped phase (modulo 1), and un∈ℤ is the unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.


The resultant expression gives the interference signal in terms of its amplitude, the beat frequency, the phase offset, a stationary random process denoting the phase change, and circularly symmetric complex AWGN. Thereafter, the unwrapped phase is defined as a function of the beat frequency, the phase offset, and the total phase error. The unwrapped phase is a sum of the extracted wrapped phase and an unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.
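For illustration, the following sketch (a minimal NumPy simulation, not the disclosed device; the sample rate, beat frequency, linewidth, and delay values are arbitrary assumptions, and a white-frequency-noise laser model is assumed) generates samples r_n of the interference signal and extracts the wrapped phase y_n according to the model above.

import numpy as np

# Assumed example parameters (not taken from the disclosure).
N = 1024                # number of samples
fs = 1.0e6              # sample rate [Hz]
t = np.arange(N) / fs   # sample times t_n
f_beat = 37.0e3         # true beat frequency f = gamma * tau [Hz]
theta = 0.13            # phase offset [cycles]
snr_db = 20.0           # SNR due to AWGN, a^2 / sigma_w^2, in dB
linewidth = 50.0e3      # assumed laser linewidth [Hz]
tau = 1.0e-6            # assumed round-trip delay [s]

rng = np.random.default_rng(0)

# White-frequency-noise laser model: the source phase noise phi_n is a Wiener process
# whose per-sample increments have variance 2*pi*linewidth/fs [rad^2].
dphi = rng.normal(0.0, np.sqrt(2.0 * np.pi * linewidth / fs), N)
phi_n = np.cumsum(dphi)                                   # source phase noise [rad]
delay = max(1, int(round(tau * fs)))
# Phase change eta_n = (phi_n(t_n) - phi_n(t_n - tau)) / (2*pi), in cycles.
eta = (phi_n - np.concatenate([np.zeros(delay), phi_n[:-delay]])) / (2.0 * np.pi)

a = 1.0
sigma_w = a / 10.0 ** (snr_db / 20.0)
w = rng.normal(0.0, sigma_w / np.sqrt(2), N) + 1j * rng.normal(0.0, sigma_w / np.sqrt(2), N)

# Eq. (5): complex-valued samples of the interference signal.
r = a * np.exp(1j * 2.0 * np.pi * (f_beat * t + theta + eta)) + w

# Eqs. (6) and (8): wrapped phase y_n = angle(r_n) / (2*pi), in cycles (modulo 1).
y = np.angle(r) / (2.0 * np.pi)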


Depth estimation 70 requires estimating the frequency of the beat signal from the wrapped phase vector using the known phase change statistics 60. The final depth estimate then rescales the frequency estimate by the chirp rate γ and the speed of light c as

\hat{d} = \frac{c \hat{f}}{2\gamma}.





However, directly maximizing the likelihood of the wrapped phase p(y|f, θ) is difficult because the positions of the discontinuities in the wrapped phase y are unknown. If the number of unwrappings u is recovered, yielding the complete data x=y+u, then maximizing the likelihood of the unwrapped phase p(x|f, θ) is straightforward because x is an affine function in correlated Gaussian noise. In this regard, the goal becomes determining the maximum likelihood estimate of the sequence of integer-valued unwrappings u, given the observed data y and initial estimates of the frequency f̂ and phase offset θ̂. The Viterbi algorithm estimates the maximum likelihood sequence by assigning a length to transitions between possible states (the unwrapping values un) and then recursively finding the sequence of states with the shortest path length. The most likely sequence has the shortest path. Based on the unwrapped phase estimate x̂=y+û computed from Viterbi phase unwrapping, the estimates of frequency f̂ and phase offset θ̂ may be refined by fitting an affine function.


The workflow 100A initializes the estimation procedure at a discrete set of M initial frequencies {f_0^{(0)}, . . . , f_{M−1}^{(0)}} and phase offsets {θ_0^{(0)}, . . . , θ_{M−1}^{(0)}}. Starting with an initial pair f_m^{(0)} and θ_m^{(0)}, the workflow iteratively performs Viterbi phase unwrapping and refinement of the estimates of frequency f̂ and phase offset θ̂ as long as the path length L_V of the unwrapping sequence decreases. Then the final estimates of frequency f̂ and phase offset θ̂ are the frequency/offset pair that produced the lowest path length L_V over all M grid points.
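This initialize-then-alternate structure can be organized as in the following sketch (hypothetical helper names; the unwrapping step here is a simplified greedy unwrap relative to the current line, standing in for the Viterbi procedure described later, and the path length L_V is replaced by a residual-energy criterion for brevity).

import numpy as np

def unwrap_given_line(y, t, f_hat, theta_hat):
    """Greedy stand-in for Viterbi unwrapping: choose the integer u_n that brings
    y_n + u_n closest to the current line f_hat * t_n + theta_hat (phase in cycles)."""
    u = np.round(f_hat * t + theta_hat - y)
    return y + u

def refine_line(x, t):
    """Refine frequency/offset by ordinary least squares on the unwrapped phase.
    (The disclosure uses regression with the phase error covariance; OLS is a
    simplification for illustration.)"""
    A = np.vstack([t, np.ones_like(t)]).T
    f_hat, theta_hat = np.linalg.lstsq(A, x, rcond=None)[0]
    return f_hat, theta_hat

def estimate_frequency(y, t, init_freqs, init_offsets, max_iters=20):
    """Alternate unwrapping and line refinement from each grid point; keep the best."""
    best_cost, best_f, best_theta = np.inf, None, None
    for f0, th0 in zip(init_freqs, init_offsets):
        f_hat, theta_hat = f0, th0
        cost_prev = np.inf
        for _ in range(max_iters):
            x = unwrap_given_line(y, t, f_hat, theta_hat)
            f_hat, theta_hat = refine_line(x, t)
            cost = np.sum((x - (f_hat * t + theta_hat)) ** 2)  # stand-in for path length L_V
            if cost >= cost_prev:      # stop once the criterion no longer decreases
                break
            cost_prev = cost
        if cost_prev < best_cost:
            best_cost, best_f, best_theta = cost_prev, f_hat, theta_hat
    return best_f, best_theta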


Having obtained the frequency of the beat signal, the distance estimate to an object from which the reflected wave 20 was reflected may be determined as an output of the depth estimation 70. In this way, the exemplar workflow 100A, by taking into consideration the correlated noise, provides accurate estimates of the beat frequency even when phase error is significant.


The exemplar workflow 100A may be compiled as a machine executable process or method and may be executed with an FMCW device or lidar system. Details of the components and working of such an FMCW lidar system are described next with reference to FIG. 1B which illustrates a schematic of an FMCW lidar system 100B that produces an estimate 118 of the distance to an object 106. The FMCW lidar system 100B comprises an emitter 102, a transmitter 104, a receiver 108, a mixer 110, an analog-to-digital converter (ADC) 112, a processor 114, and a memory 116.


In some example embodiments, the emitter 102 comprises a suitable light source, such as a laser, and frequency modulation electronics. In some embodiments, the laser may be a tunable laser, and the frequency is modulated by controlling the laser oscillation directly. In some example embodiments, the laser may be a fixed-frequency laser, and the frequency may be modulated externally, e.g., using an electro-optical modulator. In some example embodiments, the emitter 102 transmits a wave of radiation (such as the laser beam) to a scene. Prior to transmission, the wave of radiation is linearly modulated in the frequency domain. The linear modulation is affected by impairments, arising for example due to source phase noise, causing non-linearity of the wave of radiation in the frequency domain.


In some example embodiments, the transmitter 104 comprises optics that steer and focus the laser beam on the object 106 to be measured. Focus may be achieved via a lens or a combination of lenses. Beam-steering technologies utilized by some example embodiments include, without limitation, mechanically scanned mirrors, optical phased arrays, microelectromechanical systems (MEMS) mirrors, and liquid crystal metasurfaces.


The receiver 108 comprises focusing optics to collect light reflected from the object. The receiver may comprise a lens or lenses to focus the reflected light and a free-space-to-fiber coupler to couple the received light into a fiber optic cable.


The mixer 110 optically combines a copy of the light from the emitter 102 (the local oscillator) with light captured by the receiver to generate a beat signal. The beat signal is distorted due to the non-linearity of the linear modulation caused by the impairments due to laser phase noise. In some embodiments, the mixing is performed with a beam-splitter. In other embodiments, mixing is performed by a fiber-optic coupler. A detector of the mixer 110 then converts the optical signal to an analog electronic signal (e.g., a voltage or a current). The detector may be a single photodiode or a combination of photodiodes, such as in a balanced detector. The detector may also include amplification circuitry to increase the level of the electrical signal.


The ADC 112 converts the analog electrical signal to a digital signal at predetermined sampled times. The ADC 112 comprises sampling and quantization electronics that output a digital signal. In some embodiments, the ADC 112 has uniformly-spaced samples in time. In other embodiments, the ADC 112 uses an external reference such as a k-clock to determine sample times to correspond with a linear frequency sweep of the laser.


The processor 114 uses the digitized samples of the mixed signal to estimate the distance to the object. In some embodiments, the processor 114 is an on-board device such as a field-programmable gate array (FPGA). In other embodiments, the processor 114 is an external computer. The memory 116 amongst other things contains calibration data and parameters needed for the processor to make an accurate distance estimate 118.



FIG. 2A illustrates a detailed schematic of an FMCW lidar system 200, according to some embodiments. In some example embodiments, the FMCW lidar system 200 may be embodied as a device. In some example embodiments the FMCW lidar system 200 may be a distributed system comprising electrical devices and optical devices. For example, the FMCW lidar system 200 may be realized using a controller and an opto-electronic device. The controller may comprise the data processing components of the FMCW lidar system 200 such as the processor and its associated circuitry. The opto-electronic components may comprise other electrical and optical components of the FMCW lidar system 200.


The FMCW lidar system 200 comprises an emitter 202, a transmitter 204, a receiver 208, a mixer 210, a pair of analog to digital converters (ADC) 212A and 212B, and a processor 214. The emitter 202 comprises a tunable laser 222 and its modulation controller 220. The tunable laser 222 is fiber-coupled and connected to a splitter 224 that creates two copies of the laser beam. The first copy is sent to the transmitter 204, in which a collimator 226 focuses the light from the optical fiber into a bundle of parallel rays. Beam-steering optics, such as a pair of galvo mirrors 228, are used to direct the transmit beam towards the object 206.


The wavelength of the tunable laser 222 is swept by a modulation controller 220 so that the optical frequency of the laser beam is a linear function of time (i.e. linearly modulated in frequency domain). As illustrated in FIG. 2B, a swept-frequency signal is emitted from the tunable laser 222 and its delayed copy is captured at the receiver 208 of the FMCW lidar system 200, according to some example embodiments. Light reflected from the object 206 is captured at the receiver 208, which focuses light from free space into an optical fiber.


Assuming T to be the sweep duration of the tunable laser 222, the sweep is a linear chirp between the start frequency fmin and the end frequency fmax. Phase noise φn causes the emitted light frequency to randomly deviate from the desired linear sweep. The initial phase φ0, the initial angular frequency ω0, and the chirp rate γ are defined accordingly. Within one chirp period, the local oscillator (LO) channel has normalized electric field given by:








E_{LO}(t) = \mathrm{rect}\left(\frac{t - T/2}{T}\right) \cos\left(\phi_0 + \omega_0 t + \frac{\gamma t^2}{2} + \phi_n(t)\right).






The received signal electric field ERX(t)=√R ELO(t−τ) for the received light (RX) has been scaled by the object reflectivity R and delayed by τ=2d/c, where d is the distance from the transmitter 204 to the object 206, and c is the speed of light. For a static lidar system 200 and object 206, the delay τ is constant. Still, when ELO and ERX are optically combined in the mixer 210, the resultant interference signal's frequency differs from the expected beat frequency fbeat due to the phase noise. Depth estimation only uses the interference signal within the measurement window Tmeas, ensuring overlap of the mixed chirps.


Referring to FIG. 2A, the mixer 210 comprises a 90-degree optical hybrid 232 and a pair of balanced detectors 230A and 230B. The mixer 210 receives, through one input port, the received light (RX) and, through another input port, the local oscillator (LO), which is the second copy of the laser beam. The LO and RX fields are combined in the optical hybrid 232 to produce two pairs of mixed outputs. Each of the outputs is connected to a balanced detector (230A or 230B), which uses a pair of photodiodes and an amplifier to convert the optical signals into electrical signals. In this regard, the mixer 210 has one branch for the in-phase component (I) and one branch for the quadrature component (Q). In the I-branch, the LO and RX optical signals are combined to get the photocurrent at each photodiode of the balanced detector 230A as:









i_{I+}(t) = \left\langle \left| E_{LO}(t) + E_{RX}(t) \right|^2 \right\rangle_t = \mathrm{rect}\left(\frac{t - T_{meas}/2}{T_{meas}}\right) \left[ \frac{1 + R}{2} + \sqrt{R} \cos\left( \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right) \right],

and

i_{I-}(t) = \left\langle \left| E_{LO}(t) - E_{RX}(t) \right|^2 \right\rangle_t = \mathrm{rect}\left(\frac{t - T_{meas}/2}{T_{meas}}\right) \left[ \frac{1 + R}{2} - \sqrt{R} \cos\left( \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right) \right],




where Δϕn(t)=φn(t)−φn(t−τ) is the “phase change,” i.e., the phase error in an interferometric measurement due to phase noise in an oscillator such as a laser. Balanced detector 230A filters out the DC terms and common-mode noise by taking the difference between the two photocurrents:








i_I(t) = \frac{i_{I+}(t) - i_{I-}(t)}{2}.





Considering only the time duration within the rect function, the in-phase interference measurement is









i_I(t) = \sqrt{R} \cos\left[ \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right] + w_I(t),




where wI is approximated as additive white Gaussian noise (AWGN) with variance σw²/2.


In the Q-branch, the LO signal is first phase-shifted by π/2 radians (90°) to get








E_{LO,S}(t) = \mathrm{rect}\left(\frac{t - T/2}{T}\right) \sin\left(\phi_0 + \omega_0 t + \frac{\gamma t^2}{2} + \phi_n(t)\right).






Then the LO and RX optical signals are combined to get the photocurrent at each photodiode of the balanced detector 230B:











i_{Q+}(t) = \left\langle \left| E_{LO,S}(t) + E_{RX}(t) \right|^2 \right\rangle_t = \mathrm{rect}\left(\frac{t - T_{meas}/2}{T_{meas}}\right) \left[ \frac{1 + R}{2} + \sqrt{R} \sin\left( \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right) \right],

and

i_{Q-}(t) = \left\langle \left| E_{LO,S}(t) - E_{RX}(t) \right|^2 \right\rangle_t = \mathrm{rect}\left(\frac{t - T_{meas}/2}{T_{meas}}\right) \left[ \frac{1 + R}{2} - \sqrt{R} \sin\left( \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right) \right].








Balanced detector 230B filters out the DC terms and common-mode noise by taking the difference between the two photocurrents








i_Q(t) = \frac{i_{Q+}(t) - i_{Q-}(t)}{2}.





The quadrature measurement takes the form:









i_Q(t) = \sqrt{R} \sin\left[ \omega_0 \tau - \frac{\gamma \tau^2}{2} + \gamma \tau t + \Delta\phi_n(t) \right] + w_Q(t),




where wQ and wI are independent and identically distributed.


In some example embodiments, the mixer 210 may use a single coupler instead of the optical hybrid to mix the LO and RX electric fields and a single balanced detector to capture the in-phase measurement. Then the quadrature component is approximated by the Hilbert transform of the in-phase measurement: îQ(t)=ℋ(iI(t)).
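In software, this approximation might be realized as in the following sketch (assuming SciPy is available; the example signal is arbitrary): the analytic signal of the sampled in-phase measurement is formed and its imaginary part is taken as the quadrature estimate.

import numpy as np
from scipy.signal import hilbert

def quadrature_from_inphase(i_I):
    """Approximate the quadrature component as the Hilbert transform of the
    in-phase samples: i_Q_hat(t) = H{i_I}(t)."""
    analytic = hilbert(i_I)          # analytic signal: i_I + j * H{i_I}
    return np.imag(analytic)

# Example: a sampled cosine yields (approximately) the corresponding sine.
t = np.arange(1024) / 1024.0
i_I = np.cos(2 * np.pi * 40 * t)
i_Q_hat = quadrature_from_inphase(i_I)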


Each of the ADCs (212A or 212B) comprises circuits that low-pass-filter, sample, and quantize the electrical signal to produce the in-phase (I) and quadrature (Q) digital measurements. The processor 214 extracts the phase from the I/Q components to perform depth estimation.


In some example embodiments, the FMCW lidar system and the FMCW device may be the same. In some alternate embodiments, the lidar system may be external to the FMCW device, wherein the FMCW device may control one or more operations of the lidar. Irrespective of its realization, the FMCW lidar system 200 may execute the workflow 100A as a process or method. FIG. 2C illustrates one such exemplar method 250 for distance estimation using an FMCW lidar system such as the FMCW lidar system 200, according to some example embodiments.


The method 250 comprises splitting 3 the incident light into a local beam and a transmission beam. The splitting 3 may be performed by the emitter 202 of the FMCW lidar system 200. The transmission beam is transmitted 5 towards a scene of interest by the transmitter 204 of the FMCW lidar system 200. A reflection of the transmission beam from the scene is received 7 by the receiver 208 of the FMCW lidar system as a reflected beam. One or more objects may reflect the transmission beam in the scene towards the FMCW lidar system. The mixer 210 interferes 9 the received beam reflected from the scene with the local beam to produce an interference pattern. In this regard, the mixer 210 may perform the interference according to principles of the signal mixing 30 process of the workflow 100A.


One or more analog to digital converters (ADC) 112 sample 11 the interference pattern at linearly spaced frequencies to generate samples of the interference signals in the manner described with reference to the workflow 100A of FIG. 1A. One or more processors 114 may interface with a memory 116 storing executable programs, data, and instructions to determine 13 a distance to an object in the scene through joint phase unwrapping and linear regression of the phase extracted from the interference samples. The phase unwrapping and refinement through linear regression may be performed in the manner described with reference to the workflow 100A. The distance is determined by rescaling the estimated beat frequency by the chirp rate and the speed of light. The one or more processors 114 may control an interface 120 such as an output device to output 15 the determined distance of the object in any suitable manner.
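For the final rescaling step, a minimal sketch follows (the chirp rate and beat frequency values below are arbitrary examples, not values from the disclosure).

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def distance_from_beat_frequency(f_hat_hz, chirp_rate_hz_per_s):
    """d_hat = c * f_hat / (2 * gamma): rescale the estimated beat frequency
    by the chirp rate and the speed of light."""
    return C_LIGHT * f_hat_hz / (2.0 * chirp_rate_hz_per_s)

# Example: a 100 THz/s chirp and a 66.7 MHz beat frequency give roughly 100 m.
d_hat = distance_from_beat_frequency(66.7e6, 100e12)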



FIG. 3 illustrates an iterative frequency estimation method 300 utilizing components of an FMCW device, according to some example embodiments. The frequency estimation method 300 may be compiled and executed as a control process for the FMCW lidar system. In some example embodiments, the method 300 may comprise actions performed by one or more components of the FMCW lidar system 200. The method 300 comprises transmitting 301 at least one wave of radiation to a scene. The transmission 301 may be preceded by preprocessing such as generating and steering the at least one wave of radiation, which may be performed by any suitable irradiation source such as a laser, a UV lamp, or an IR lamp. Also, the preprocessing may comprise obtaining a local copy of the transmitted wave. The transmission 301 may comprise utilizing suitable optics and associated electronics to direct the at least one wave of radiation towards the scene.


The scene may contain one or more objects and, upon irradiation, the one or more objects may reflect back the transmitted wave of radiation. The reflection of the transmitted wave may be received 303 from the scene by one or more receivers. The method 300 further comprises interfering 305 the copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal. The interference may be performed in the manner described previously with respect to FIG. 1A and FIG. 2A. The interference results in a beat signal which is generated 307 as a sequence of samples with wrapped phases in the time domain. Thereafter, executing phase unwrapping followed by linear regression in an iterative manner, the frequency of the beat signal in the time domain is estimated 309 iteratively until a termination condition is met.



FIG. 4 illustrates a method 400 describing a current iteration of the iterative frequency estimation method 300 of FIG. 3, according to some example embodiments. For a current iteration, the method 400 is initialized with the estimates of frequency and phase offset from the iteration immediately preceding the current iteration. For the first iteration, the method 400 is initialized with initial estimates of the frequency f̂ and phase offset θ̂. Thus, for the first iteration, the initial estimates of the frequency and phase offset are obtained 401 as the previous frequency and previous phase offset of the beat signal. The method 400 further comprises determining 403 a current phase error and a phase unwrapping number (the number of unwrappings u) fitting the previous frequency and previous phase offset of the beat signal. The phase error and phase unwrapping number are determined in accordance with the fundamentals of the workflow 100A illustrated in FIG. 1A. The maximum likelihood unwrapping sequence of the wrapped phases is subsequently determined using linear minimum mean squared error estimation of the phase error. Thereafter, using a Viterbi phase unwrapping algorithm, a length is assigned to transitions between possible states (the unwrapping values un). The method 400 proceeds to updating 405 a current estimate of the frequency and a current phase offset of the beat signal for the current iteration.



FIG. 5 illustrates an algorithm 500 of Viterbi phase unwrapping for frequency estimation, according to some example embodiments. The algorithm 500 takes as input the wrapped phase y and a discrete set of M initial frequencies {f_0^{(0)}, . . . , f_{M−1}^{(0)}} and phase offsets {θ_0^{(0)}, . . . , θ_{M−1}^{(0)}}. Starting with an initial pair f_m^{(0)} and θ_m^{(0)}, where m=0, 1, . . . , M−1, the algorithm 500 proceeds to iteratively perform phase unwrapping and refinement of the estimates of frequency f̂ and phase offset θ̂ as long as the path length L_V of the unwrapping sequence decreases. Then the final estimates of frequency f̂ and phase offset θ̂ are the frequency/offset pair that produced the lowest path length L_V over all M grid points.



FIG. 6 illustrates schematics of a process of frequency estimation based on the signal phase, according to some example embodiments. The measured in-phase and quadrature components are combined to get a single complex-valued sinusoid 602 r(t)=iI(t)+jiQ(t).


The wrapped phase 604 is extracted from the measured signal 602. However, the wrapped phase may be insufficient for estimating the frequency in the time domain due to missing information about where and how many wrappings occur. Predicting the location and magnitude of the phase wrapping is further complicated by the deviations due to phase noise. To perform robust frequency estimation, the frequency estimation algorithm 612 thus jointly unwraps the phase and estimates the frequency in an alternating manner. In addition to the wrapped phase 604, the estimation algorithm 612 requires knowledge of the phase noise statistics 606 and an initial frequency estimate 610. Phase noise statistics 606 are known from prior calibration or manufacturer specification of properties such as the linewidth. The initial estimate of frequency 610 is obtained by computing the power spectral density of the measured signal 602 via a Fourier transform.
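One standard way to obtain such an initial estimate is sketched below with NumPy (shown only for illustration; the disclosure merely requires that some initial estimate be supplied): the location of the periodogram peak is used as the starting frequency.

import numpy as np

def initial_frequency_estimate(r, fs):
    """Initial beat-frequency estimate: the location of the peak of the power
    spectral density of the complex measurement r(t), computed via an FFT."""
    N = len(r)
    spectrum = np.fft.fft(r)
    psd = np.abs(spectrum) ** 2
    freqs = np.fft.fftfreq(N, d=1.0 / fs)
    return freqs[np.argmax(psd)]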


Given these inputs, the frequency estimation algorithm 612 proceeds in two alternating steps: phase unwrapping 614, which predicts the location and magnitude of the phase wrappings, and linear regression 616, which updates the estimate of the frequency. After a sufficient number of iterations, the latest result of linear regression is considered the final frequency estimate 618.


For simplicity, additional notation is introduced to expand the sinusoid 602 definition to








r(t) = a \exp\left( j 2\pi \left[ f t + \theta + \eta(t) \right] \right) + w(t),




where a=√R is the signal amplitude, f=γτ is the beat frequency, θ=ω0τ−γτ²/2 is the phase offset, η(t)=Δϕn(t) is a stationary random process denoting the phase change, and w(t)=wI(t)+jwQ(t) is circularly symmetric complex AWGN with autocorrelation Rw(t)=σw²δ(t). The signal-to-noise ratio (SNR) due to AWGN is a²/σw².


Samples at times tn, n=0, 1, . . . , N−1 are given as







r_n = a \exp\left\{ j 2\pi \left[ f t_n + \theta + \eta_n \right] \right\} + w_n.






The principal argument of rn is extracted as atan2(iQ(tn), iI(tn)) and yields











\angle r_n = 2\pi \left[ f t_n + \theta + \eta_n + \epsilon_n \right] \pmod{2\pi},




where ϵn describes the effective phase error due to the AWGN and is known as additive observation phase noise (AOPN). The unwrapped phase is further defined as:










x_n = f t_n + \theta + \xi_n = y_n + u_n,







where ξnnn is the total phase error, yn=∠rn/(2π) is the extracted wrapped phase (modulo 1), and uncustom-character is the unknown integer number of cycles that must be added to the wrapped phase to perform phase unwrapping.


The two equivalent definitions of the unwrapped phase highlight the reason for the two-step estimation process. In the first definition, the unwrapped phase xn=ftn+θ+ξn is a linear function ftn+θ corrupted by additive noise ξn. Estimating the slope f from samples xn of a noisy linear function can be done with many existing techniques, including ordinary least squares (OLS) and generalized least squares (GLS) if the noise is Gaussian. However, the samples xn are not available. Instead, only the wrapped phase yn is known. Thus, the problem becomes how to determine the unwrapping numbers un, which is a hard problem in the presence of noise. Still, preliminary estimates of frequency f and phase offset θ can guide the unwrapping, with the remaining uncertainty being attributed to the phase error ξn. Hence, estimation cycles through the two definitions of the unwrapped phase xn to recover the true frequency f. Details of the phase unwrapping step 614 of the frequency estimation algorithm 612 are described next with reference to FIGS. 7A and 7B. Particularly, FIG. 7A illustrates the computation of path lengths of the unwrapping sequences while FIG. 7B illustrates the shortest path estimation, according to some example embodiments.
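For the regression half of this cycle, the following sketch shows a generalized least squares fit of the line f t_n + θ to an unwrapped phase vector when the covariance of the total phase error is available (the covariance matrix Sigma is assumed to be supplied; OLS corresponds to Sigma being a scaled identity).

import numpy as np

def gls_line_fit(x, t, Sigma):
    """Generalized least squares fit of x_n = f * t_n + theta + xi_n, where the
    total phase error xi has known covariance Sigma. Returns (f_hat, theta_hat)."""
    A = np.vstack([t, np.ones_like(t)]).T        # design matrix for [f, theta]
    Sigma_inv = np.linalg.inv(Sigma)
    normal = A.T @ Sigma_inv @ A                 # normal equations with noise weighting
    rhs = A.T @ Sigma_inv @ x
    f_hat, theta_hat = np.linalg.solve(normal, rhs)
    return f_hat, theta_hat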


The goal is to find the approximate maximum likelihood estimate given by:








$$\hat{u} = \arg\max_{u}\, p\big(y \mid u, \hat{f}, \hat{\theta}\big) \approx \arg\max_{u} \prod_{n=0}^{N-1} p\big(y_n \mid u_n, y_{n-1:n-C}, \hat{u}_{n-1:n-C}, \hat{f}, \hat{\theta}\big),$$




assuming the unwrapping process is causal and using a finite memory of length C. Phase unwrapping proceeds sample by sample by first estimating the likelihood of a particular unwrapping number un at sample n. Then the Viterbi algorithm is used to determine the most likely sequence of unwrapping numbers. The Viterbi algorithm estimates the maximum likelihood sequence by assigning a length (the negative log-likelihood) to transitions between possible unwrapping values un and then recursively finding the sequence of states with the shortest path length. The number of possible unwrapping values at each time step is large, so per-survivor processing is used to keep only a fixed number K of survivor paths.


The inputs to the phase unwrapping step 614 are the wrapped phase vector y, initial frequency estimate {circumflex over (f)} and phase offset estimate {circumflex over (θ)}, the phase noise statistics, time samples t=[t0, t1, . . . , tN-1]T, and the number of survivors K. The phase noise statistics are contained in two components: the cross-covariance vector pC and the auto-covariance matrix QC. The cross-covariance vector pC has elements:









$$[p_C]_i = \mathbb{E}\left[\eta_n\,\eta_{n-C+i}\right] + \mathbb{E}\left[\epsilon_n\,\epsilon_{n-C+i}\right],$$




and auto-covariance matrix QC has elements:









$$[Q_C]_{i,j} = \mathbb{E}\left[\eta_{n-C+i}\,\eta_{n-C+j}\right] + \mathbb{E}\left[\epsilon_{n-C+i}\,\epsilon_{n-C+j}\right],$$




where i, j∈{0, . . . , C−1}. Both components are completely determined by the autocorrelation functions of the laser source phase noise and the additive noise of the receiver. Since both f and the phase noise statistics depend on τ in the FMCW lidar setting, the covariance matrix QC and correlation vector pC can be computed given {circumflex over (f)}.
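As a non-limiting illustration, the sketch below assembles pC and QC from a user-supplied autocorrelation function of the total phase error (laser phase change plus AOPN); the callable acf and the uniform sample spacing dt are assumptions made only for this example.

```python
import numpy as np

def phase_error_covariances(acf, C, dt):
    """Assemble p_C and Q_C for a memory of C samples spaced dt seconds apart,
    given acf(lag_seconds) = E[eta_0 * eta_lag] + E[eps_0 * eps_lag]."""
    # [p_C]_i correlates sample n with sample n-C+i, i.e. a lag of (C-i)*dt
    p_C = np.array([acf((C - i) * dt) for i in range(C)])
    # [Q_C]_{i,j} correlates samples n-C+i and n-C+j, i.e. a lag of |i-j|*dt
    Q_C = np.array([[acf(abs(i - j) * dt) for j in range(C)] for i in range(C)])
    return p_C, Q_C
```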


Phase unwrapping proceeds as follows. For each sample time tn, there are K existing possible paths for the unwrapped phase x1:n-1(1), . . . , x1:n-1(K), which are determined from the known wrapped phase y and the candidate unwrapping numbers û1:n-1(1), . . . , û1:n-1(K). Notation zn-C:n-1 indicates elements [zn-C, . . . , zn-1]T, which are a subset of the vector z. Each existing survivor path k has a length Ln-1(k). For any sample n=C+1, . . . , N, and for each survivor k=1, . . . , K, the C previous unwrapping estimates in the path are







$$\hat{u}_{n-C:n-1}^{(k)} = \left[\hat{u}_{n-C}^{(k)}, \ldots, \hat{u}_{n-1}^{(k)}\right].$$





The line given by initial frequency estimate {circumflex over (f)} and phase offset estimate {circumflex over (θ)} is {circumflex over (f)}t+{circumflex over (θ)}. The deviation from this line is predicted by estimating the phase error at sample n using the previous unwrapping estimates.


In some embodiments, the phase error estimation is the Linear Minimum Mean-Squared Error (LMMSE) estimate, given as








$$\hat{\xi}_n^{(k)} = p_C^{T} Q_C^{-1}\left(y_{n-C:n-1} + \hat{u}_{n-C:n-1}^{(k)} - \hat{f}\, t_{n-C:n-1} - \hat{\theta}\right).$$






The LMMSE phase error estimate has prediction error variance σe2. In other embodiments, the phase error can be predicted more quickly but with less accuracy via nearest-neighbor estimation:








$$\hat{\xi}_n^{(k)} = y_{n-1} + \hat{u}_{n-1}^{(k)} - \hat{f}\, t_{n-1} - \hat{\theta}.$$






The predicted unwrapped phase is then {tilde over (x)}n(k)={circumflex over (f)}tn+{circumflex over (θ)}+{circumflex over (ξ)}n(k).
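A minimal sketch of both phase error predictors is given below, following the notation above; the argument names (arrays holding the C most recent wrapped phases, unwrapping numbers, and sample times) are assumptions made for illustration.

```python
import numpy as np

def lmmse_phase_error(p_C, Q_C, y_prev, u_prev, t_prev, f_hat, theta_hat):
    """LMMSE prediction of the phase error at sample n from the C previous
    samples: p_C^T Q_C^{-1} (y + u - f*t - theta)."""
    residual = y_prev + u_prev - f_hat * t_prev - theta_hat
    return p_C @ np.linalg.solve(Q_C, residual)

def nearest_neighbor_phase_error(y_prev, u_prev, t_prev, f_hat, theta_hat):
    """Faster but less accurate prediction using only the most recent sample."""
    return y_prev[-1] + u_prev[-1] - f_hat * t_prev[-1] - theta_hat
```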


Given the predicted phase error, the likelihood for each value of un(k) is:







$$p\big(y_n \mid u_n^{(k)}, y_{n-C:n-1}, \hat{u}_{n-C:n-1}^{(k)}, \hat{f}, \hat{\theta}\big) \approx \frac{1}{\sqrt{2\pi\sigma_e^2}}\,\exp\left\{-\frac{\big(y_n + u_n^{(k)} - \hat{f} t_n - \hat{\theta} - \hat{\xi}_n^{(k)}\big)^2}{2\sigma_e^2}\right\}.$$






The prediction error variance for the LMMSE estimator is:







$$\sigma_e^2 = \sigma_\eta^2 + \sigma_\epsilon^2 - p_C^{T} Q_C^{-1} p_C.$$







The K values of un(k) closest to {circumflex over (f)}tn+{circumflex over (θ)}+{circumflex over (ξ)}n(k)−yn, denoted ũn(k,ℓ), ℓ=1, . . . , K, have the largest likelihood and are considered as candidates for the path continuation. The total unwrapping path length (negative log-likelihood) for each candidate ũn(k,ℓ) is








$$\tilde{L}_n(k,\ell) = L_{n-1}^{(k)} + \frac{\big(y_n + \tilde{u}_n(k,\ell) - \hat{f} t_n - \hat{\theta} - \hat{\xi}_n^{(k)}\big)^2}{\sigma_e^2} + \log(\sigma_e^2).$$






The same process is repeated for all K survivors, resulting in candidate path lengths {{tilde over (L)}n(1,1), . . . , {tilde over (L)}n(k,ℓ), . . . , {tilde over (L)}n(K,K)}. Each of the K existing survivors has K possible unwrapping values ũn(k,ℓ), for a maximum of K² total possible paths. Of these K² possible paths, only the K most likely (i.e., shortest) paths survive. The K new survivors extend some or all of the previous survivors, and the unwrapping history is updated to û1:n(1), . . . , û1:n(K). For samples n=1, . . . , C, the same procedure holds, except only the previous n−1 samples are used for phase error estimation.


After the final sample n=N, the most likely unwrapping sequence û1:N is the survivor k with the shortest path length LN=LN(k). The maximum likelihood unwrapped phase is {circumflex over (x)}=y+û1:N.
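The sketch below outlines one per-survivor Viterbi step under the notation above. It assumes a hypothetical helper predict_phase_error (for example, the LMMSE predictor sketched earlier) and approximates the K most likely unwrapping values by K consecutive integers around the predicted center; initialization of the first C samples is omitted for brevity.

```python
import numpy as np

def viterbi_unwrap_step(n, y, t, f_hat, theta_hat, survivors, sigma_e2,
                        predict_phase_error, K):
    """One per-survivor step of the phase unwrapping.
    survivors is a list of (path_length, unwrapping_history) pairs."""
    candidates = []
    for path_length, u_hist in survivors:
        # Predict the phase error for this survivor from its previous samples
        xi_hat = predict_phase_error(y, u_hist, t, f_hat, theta_hat, n)
        # Unwrapping numbers are most likely near this center value
        center = f_hat * t[n] + theta_hat + xi_hat - y[n]
        # K consecutive integers around the center serve as candidates
        for u_n in np.round(center) + np.arange(K) - K // 2:
            resid = y[n] + u_n - f_hat * t[n] - theta_hat - xi_hat
            length = path_length + resid ** 2 / sigma_e2 + np.log(sigma_e2)
            candidates.append((length, u_hist + [int(u_n)]))
    # Of the up-to-K^2 candidate paths, keep only the K shortest
    candidates.sort(key=lambda c: c[0])
    return candidates[:K]
```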


Having estimated the maximum likelihood unwrapped phase, the remaining goal is to refine the frequency. FIG. 7C illustrates the process of frequency refinement via linear regression 616 in the frequency estimation algorithm 612, according to some example embodiments. Given {circumflex over (x)}=y+û1:N and the phase noise statistics, the aim is to estimate the slope f of the line that best explains {circumflex over (x)}.


Let 1N=[1, . . . , 1]T represent a length-N vector of 1s, t=[t1, . . . , tN]T represent a length-N vector of time samples, A=[t, 1N] represent a matrix concatenating the vectors t and 1N, β=[f, θ] represent a vector of parameters to estimate, and QN represent the phase error auto-covariance matrix for the length-N sequence. In some embodiments, the matrix inverse QN−1 may be directly computed numerically. In other embodiments, the matrix inverse QN−1 may be approximated by first approximating the Toeplitz matrix QN as a circulant matrix and then inverting the circulant approximation via the fast Fourier transform algorithm. Linear regression is used to estimate the slope (frequency f) and the intercept (phase offset θ) of the line. In some embodiments, linear regression may be performed via generalized least squares estimation (GLS):











$$\hat{\beta}_{\mathrm{GLS}} = [\hat{f}, \hat{\theta}] = \arg\min_{\beta}\,(\hat{x} - A\beta)^{T} Q_N^{-1} (\hat{x} - A\beta) = \big(A^{T} Q_N^{-1} A\big)^{-1} A^{T} Q_N^{-1}\, \hat{x}.$$








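One possible realization of the circulant approximation mentioned above is sketched below: the Toeplitz matrix QN, described by its first column of autocovariances (a hypothetical input acov), is replaced by a circulant matrix whose inverse is applied with FFTs, avoiding an explicit matrix inversion inside the GLS update.

```python
import numpy as np

def apply_circulant_inverse(acov, v):
    """Approximately apply Q_N^{-1} to a vector v, where Q_N is Toeplitz with
    first column acov (autocovariance at lags 0..N-1), by replacing Q_N with
    a circulant matrix that is diagonalized by the FFT."""
    # First column of the circulant approximation: average mirrored lags
    c = np.concatenate([acov[:1], (acov[1:] + acov[:0:-1]) / 2.0])
    eigvals = np.fft.fft(c)                 # eigenvalues of the circulant matrix
    return np.real(np.fft.ifft(np.fft.fft(v) / eigvals))
```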
The advantage of using GLS is that it provides a more accurate update of the frequency and the phase offset of the beat signal. In other embodiments, for faster updating, linear regression may be performed via ordinary least squares, ignoring the phase error statistics to avoid inversion of QN, i.e.,











$$\hat{\beta}_{\mathrm{OLS}} = [\hat{f}, \hat{\theta}] = \arg\min_{\beta}\,(\hat{x} - A\beta)^{T} (\hat{x} - A\beta) = \big(A^{T} A\big)^{-1} A^{T} \hat{x}.$$









The estimates {circumflex over (f)}, {circumflex over (θ)} are then used in the subsequent iteration for phase unwrapping.
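For concreteness, both regressions are sketched below; x_hat is the unwrapped phase, t the sample times, and Q_N the phase-error auto-covariance matrix, as defined above.

```python
import numpy as np

def fit_line_gls(x_hat, t, Q_N):
    """Generalized least squares: beta = (A^T Q^{-1} A)^{-1} A^T Q^{-1} x."""
    A = np.column_stack([t, np.ones_like(t)])   # columns [t, 1] for slope and intercept
    Qinv_A = np.linalg.solve(Q_N, A)
    Qinv_x = np.linalg.solve(Q_N, x_hat)
    f_hat, theta_hat = np.linalg.solve(A.T @ Qinv_A, A.T @ Qinv_x)
    return f_hat, theta_hat

def fit_line_ols(x_hat, t):
    """Ordinary least squares: beta = (A^T A)^{-1} A^T x."""
    A = np.column_stack([t, np.ones_like(t)])
    f_hat, theta_hat = np.linalg.lstsq(A, x_hat, rcond=None)[0]
    return f_hat, theta_hat
```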


The frequency estimation algorithm 612 of FIG. 6 continues to alternate between Viterbi phase unwrapping 614 and linear regression 616 until a stopping criterion or termination condition is met. In some embodiments, the stopping criterion may be that the path length increases from iteration b−1 to b, i.e., L(b)>L(b−1). In other embodiments, the stopping criterion may be that b=B for some maximum number of iterations B. The final frequency estimate 618 is the value of f used in the final iteration.


Some other key aspects of the frequency estimation algorithm 612 of FIG. 6 are described below.


Initialization: In some embodiments, the iterative algorithm 612 may be performed for a single pair of initial frequency estimate {circumflex over (f)}0 and initial phase offset estimate {circumflex over (θ)}0. In some embodiments, initial frequency estimate {circumflex over (f)}0 is the frequency corresponding to the peak of the periodogram of the interference signal. In other embodiments, initial frequency estimate {circumflex over (f)}0 is the frequency corresponding to center frequency of a Lorentzian function fit to the power spectral density of the interference signal. In some embodiments, the algorithm may be performed with M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} and M initial phase offset estimates {{circumflex over (θ)}01, . . . , {circumflex over (θ)}0M}. Then the final frequency estimate {circumflex over (f)} is the one with the shortest unwrapping path over all initializations. In some embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} may be linearly spaced frequencies on a grid. In other embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} may be random and uniformly distributed frequencies. In other embodiments, the M initial frequency estimates {{circumflex over (f)}01, . . . , {circumflex over (f)}0M} include the results of alternative frequency estimation techniques, including the peak of the periodogram of the interference signal or the center frequency of a Lorentzian function fit to the power spectral density of the interference signal.
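A sketch of the multi-initialization variant is shown below; run_estimator is a hypothetical wrapper around the iterative algorithm that returns a frequency estimate together with its final unwrapping path length.

```python
import numpy as np

def estimate_with_multiple_starts(y, t, noise_stats, f_grid, theta0=0.0):
    """Run the iterative estimator from M initial frequencies (e.g. a linear
    grid) and keep the result with the shortest unwrapping path."""
    best_f, best_length = None, np.inf
    for f0 in f_grid:
        f_hat, path_length = run_estimator(y, t, noise_stats, f0, theta0)
        if path_length < best_length:
            best_f, best_length = f_hat, path_length
    return best_f
```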


Noise Statistics: In some embodiments, the laser phase noise ϕn may be dominated by spontaneous emission, so the frequency noise ωn=dϕn/dt is assumed to be white and Gaussian. The frequency noise may have a constant PSD Sω(ω)=Δω, where Δν=Δω/(2π) is the full-width at half-maximum laser linewidth in Hz, and is assumed to be known. In such scenarios, the phase noise is a Wiener process. For a fixed delay τ, the resulting phase change Δϕn(t; τ) is a zero-mean, stationary Gaussian process with triangular autocorrelation function








$$R_{\Delta\phi_n}(t;\tau) = \Delta\omega\,(\tau - |t|)\,\mathrm{rect}\!\left(\frac{t}{2\tau}\right).$$






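As an illustration of this model, the snippet below evaluates the triangular autocorrelation on an array of lags for a given linewidth Δν and delay τ; the argument names are assumptions made only for this example.

```python
import numpy as np

def wiener_phase_change_autocorrelation(linewidth_hz, tau, lags_s):
    """Triangular autocorrelation of the phase change for Wiener-process laser
    phase noise: R(t; tau) = dw * (tau - |t|) for |t| < tau, else 0,
    with dw = 2*pi*linewidth."""
    dw = 2.0 * np.pi * linewidth_hz
    lags = np.asarray(lags_s, dtype=float)
    return np.where(np.abs(lags) < tau, dw * (tau - np.abs(lags)), 0.0)
```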
In other embodiments, the laser phase noise ϕn may be affected by the wavelength sweep of the laser emitter. As a result, the frequency noise may be colored, not white. In some embodiments, the frequency noise may have a first-order low-pass transfer function with Power Spectral Density given by:










$$S_{\Delta\phi_n}(\omega;\tau) = \alpha_\omega\,\tau^2\,\mathrm{sinc}^2\!\left(\frac{\omega\tau}{2}\right)\frac{1}{1 + \left(\omega/\omega_1\right)^2},$$




where αω is a scaling parameter and ω1 is the cutoff frequency. Then the phase change autocorrelation function is








$$R_{\Delta\phi_n}(t;\tau) = \frac{\alpha_\omega}{\omega_1}\begin{cases}(\tau - |t|)\,\omega_1 - \exp(-\omega_1 |t|) + \exp(-\omega_1\tau)\cosh(\omega_1 |t|), & 0 \le |t| < \tau,\\[4pt]\exp(-\omega_1 |t|)\left[\cosh(\omega_1\tau) - 1\right], & |t| \ge \tau.\end{cases}$$







In some embodiments, the frequency noise may have a second-order low-pass transfer function, with phase change PSD










$$S_{\Delta\phi_n}(\omega;\tau) = \alpha_\omega\,\tau^2\,\mathrm{sinc}^2\!\left(\frac{\omega\tau}{2}\right)\frac{\omega_2^4}{\omega_2^4 + \omega^2\omega_2^2\left(\frac{1}{q^2} - 2\right) + \omega^4},$$




where αω is a scaling parameter, ω2 is the cutoff frequency, and q is the quality factor. In other embodiments, the frequency noise has multiple sources, including white noise and flicker (1/f) noise.


Additive Observation Phase Noise

Given the measured signal magnitudes |rn|, the scaled AOPN term 2π ϵn has a zero-mean von Mises distribution with a complicated dependence on a, |rn|, and σw2. The AOPN variance can be approximated as







$$\sigma_\epsilon^2 \approx \min\left\{\frac{\sigma_w^2}{2(2\pi)^2 N}\sum_{n=0}^{N-1}\frac{1}{|r_n|^2},\ \frac{1}{12}\right\}.$$







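A direct transcription of this approximation is sketched below, assuming complex beat-signal samples r and an additive-noise variance sigma_w2 (hypothetical names); the cap of 1/12 is the variance of a phase uniformly distributed over one cycle.

```python
import numpy as np

def aopn_variance(r, sigma_w2):
    """Approximate AOPN variance (in cycles^2), capped at 1/12."""
    per_sample = sigma_w2 / (2.0 * (2.0 * np.pi) ** 2 * np.abs(r) ** 2)
    return min(np.mean(per_sample), 1.0 / 12.0)
```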
FIG. 8 illustrates a scenario depicting an example use case of an FMCW lidar system, according to some example embodiments. A vehicle 802 moving on a road link 804 may be connected to a base station 806. The road link may have one or more obstacles 808, and the goal may be to traverse the road link 804 without colliding with the obstacles 808. Towards this end, the depth/range of the obstacles 808 must be estimated correctly so that a route for the vehicle 802 can be planned to avoid the obstacles 808.


The vehicle 802 may be a manually driven vehicle, a semi-autonomous vehicle, or a fully autonomous self-driving vehicle and may be configured for communication with the base station 806. In this regard, the vehicle 802 may be equipped with suitable components to execute remote sensing, data communication, and data processing. Towards this end, the vehicle may be equipped with an onboard FMCW lidar system such as the ones described earlier. An emitter of the lidar system transmits a wave or beam of radiation (shown as a solid line) towards the obstacles 808 and receives a reflection (shown as a dotted line) of the transmitted wave/beam from the scene ahead. Because the vehicle 802 may be moving and because of errors in the source laser, the depth of the obstacles 808 may not be correctly estimated using conventional techniques.


The vehicle 802 may invoke the FMCW lidar system to perform accurate depth estimation for the obstacles 808. The FMCW lidar system may be fully or partially onboard the vehicle 802. In embodiments where the FMCW lidar system is partially onboard the vehicle 802, the computation steps leading to frequency estimation may be performed via edge computing on the base station 806. In some example embodiments, the vehicle 802 may determine, through the FMCW lidar system, the correct range of the obstacles 808 and thereby determine a geo-location of the obstacles 808. Accordingly, the onboard controller of the vehicle 802 may reroute the vehicle 802 to avoid collision with the obstacles 808. In some example embodiments, the vehicle 802 may additionally or alternately convey at least the geo-location of the obstacles 808 to the base station 806 for updating a map of the area in which the road link 804 exists.


Although the example use case is described with reference to an on-road vehicle, example embodiments described herein may be applicable to any type of vehicle such as an aerial vehicle, an aquatic vehicle, a spacecraft, and the like. Other example uses of some embodiments include industrial robotics, computer vision systems and autonomous driving.



FIG. 9 illustrates a block diagram of a system for implementing an FMCW lidar system, according to some example embodiments. The controller 911 includes a processor 940, a computer readable memory 912, storage 958, and a user interface 949 with optional display 952 and keyboard 951, which are connected through a bus 956. For example, the user interface 949, in communication with the processor 940 and the computer readable memory 912, acquires and stores the image data in the computer readable memory 912 upon receiving an input from a surface, such as the keyboard 951, of the user interface 949 by a user.


The computer 911 can include a power source 954; depending upon the application, the power source 954 may optionally be located outside of the computer 911. Linked through the bus 956 can be a user input interface 957 adapted to connect to a display device 948, wherein the display device 948 can include a computer monitor, camera, television, projector, or mobile device, among others. A network interface controller (NIC) 934 is adapted to connect through the bus 956 to a network 936, wherein image data or other data, among other things, can be rendered on a third-party display device, third-party imaging device, and/or third-party printing device outside of the computer 911.


Still referring to FIG. 9, the image data or other electronic data, among other things, may be transmitted over a communication channel of the network 936, and/or stored within the storage system 958 for storage and/or further processing. Further, the time series data or other data may be received wirelessly or hard wired from a receiver 946 (or external receiver 938) or transmitted via a transmitter 947 (or external transmitter 939) wirelessly or hard wired; the receiver 946 and transmitter 947 are both connected through the bus 956. The computer 911 may be connected via an input interface 908 to external sensing devices 944 and external input/output devices 941. For example, the external sensing devices 944 may include sensors gathering data before, during, and after the collection of the time-series data of the machine. The computer 911 may be connected to other external computers 942. An output interface 909 may be used to output the processed data from the processor 940. It is noted that a user interface 949, in communication with the processor 940 and the non-transitory computer readable storage medium 912, acquires and stores the region data in the non-transitory computer readable storage medium 912 upon receiving an input from a surface of the user interface 949 by a user.


The above description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the foregoing description of the exemplary embodiments provides those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.


Specific details are given in the above description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements. Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.


Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks. Various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.


Embodiments of the present disclosure may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments. Further, use of ordinal terms such as “first” and “second” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure.

Claims
  • 1. A frequency modulation continuous wave (FMCW) device, comprising: an emitter configured to transmit at least one wave of radiation to a scene, wherein the transmitted wave is modulated in frequency domain using linear modulation that is subject to impairments causing a non-linearity of the transmitted wave in the frequency domain;a receiver configured to receive a reflection of the transmitted wave from the scene;a mixer operatively connected to the emitter and the receiver and configured to interfere a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal;an analog-to-digital converter (ADC) operatively connected to the mixer and configured to generate a sequence of samples of the beat signal with wrapped phases in time domain; anda processor configured to estimate, iteratively until a termination condition is met, a frequency of the beat signal in the time domain based on 1) phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of the emitter and 2) a linear regression fitting the frequency of the beat signal into unwrapped phases of the beat signal.
  • 2. The FMCW device of claim 1, wherein to perform a current iteration of the estimation, the processor is configured to determine, for each sample of the sequence of samples of the beat signal, a current phase error and a phase unwrapping number fitting a previous frequency of the beat signal and a previous phase offset of the beat signal determined during a previous iteration, wherein the current phase error for a current sample in the sequence of the beat signal is correlated with a previous phase error for a previous sample in the sequence of the beat signal by predetermined phase noise statistics; andupdate a current frequency of the beat signal and a current phase offset of the beat signal for the current iteration based on the determined current phase error and the determined phase unwrapping number.
  • 3. The FMCW device of claim 2, wherein the processor is configured to estimate iteratively, the frequency of the beat signal in the time domain, using an alternative optimization that includes a Viterbi algorithm determining the current phase error and the phase unwrapping number probabilistically to maximize a likelihood of their fitting in the entire sequence of samples of the beat signal.
  • 4. The FMCW device of claim 3, wherein the alternative optimization updates the frequency of the beat signal and the phase offset of the beat signal using a generalized least squares (GLS) regression.
  • 5. The FMCW device of claim 3, wherein the alternative optimization updates the frequency of the beat signal and the phase offset of the beat signal using least squares regression.
  • 6. The FMCW device of claim 3, wherein the termination condition compares a likelihood given by the Viterbi algorithm with a threshold.
  • 7. The FMCW device of claim 3, wherein, for each current sample of the sequence of samples of the beat signal, the Viterbi algorithm uses the previous phase errors and the previous phase unwrapping number determined for the previous samples for causal estimation of a current phase error and a current phase unwrapping number.
  • 8. The FMCW device of claim 7, wherein the causal estimation of the current phase error and the current phase unwrapping number are determined via a linear minimum mean squared error estimation.
  • 9. The FMCW device of claim 1, wherein the phase noise statistics include the autocorrelation function of the emitter phase noise and the autocorrelation function of the receiver additive noise.
  • 10. The FMCW device of claim 1, wherein the termination condition is a number of iterations.
  • 11. A Lidar including the FMCW device of claim 1.
  • 12. The FMCW device of claim 1, wherein the processor is further configured to estimate a distance to an object in the scene, based on the estimated frequency of the beat signal.
  • 13. A Lidar including the FMCW device of claim 12, wherein the emitter includes a laser source with a coherence length shorter than the distance to the object.
  • 14. A frequency estimation method utilizing a frequency modulation continuous wave (FMCW) device, comprising: transmitting, at least one wave of radiation to a scene, wherein the transmitted wave is modulated by an emitter in a frequency domain using linear modulation that is subject to impairments causing a non-linearity of the transmitted wave in the frequency domain;receiving a reflection of the transmitted wave from the scene;interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal;generating a sequence of samples of the beat signal with wrapped phases in time domain; andestimating, iteratively until a termination condition is met, a frequency of the beat signal in the time domain based on 1) phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of the emitter and 2) a linear regression fitting the frequency of the beat signal into unwrapped phases of the beat signal.
  • 15. The frequency estimation method of claim 14, wherein a current iteration of the frequency estimation comprises: determining, for each sample of the sequence of samples of the beat signal, a current phase error and a phase unwrapping number fitting a previous frequency of the beat signal and a previous phase offset of the beat signal determined during a previous iteration, wherein the current phase error for a current sample in the sequence of the beat signal is correlated with a previous phase error for a previous sample in the sequence of the beat signal by predetermined phase noise statistics; andupdating a current frequency of the beat signal and a current phase offset of the beat signal for the current iteration, based on the determined current phase error and the determined phase unwrapping number.
  • 16. The frequency estimation method of claim 15, wherein the iterative estimation of the frequency of the beat signal in the time domain is based on an alternative optimization that includes a Viterbi algorithm determining the current phase error and the phase unwrapping number probabilistically to maximize a likelihood of their fitting in the entire sequence of samples of the beat signal.
  • 17. The frequency estimation method of claim 16, wherein, for each current sample of the sequence of samples of the beat signal, the Viterbi algorithm uses the previous phase errors and the previous phase unwrapping number determined for the previous samples for causal estimation of a current phase error and a current phase unwrapping number.
  • 18. The frequency estimation method of claim 17, wherein the alternative optimization updates the frequency of the beat signal and the phase offset of the beat signal using a generalized least squares (GLS) regression.
  • 19. The frequency estimation method of claim 15, wherein the alternative optimization updates the frequency of the beat signal and the phase offset of the beat signal using least squares regression.
  • 20. A non-transitory computer-readable medium having stored thereon instructions executable by a computer for controlling a frequency modulation continuous wave (FMCW) device to perform a frequency estimation method, the frequency estimation method comprising: transmitting at least one wave of radiation to a scene, wherein the transmitted wave is modulated by an emitter in frequency domain using linear modulation that is subject to impairments causing a non-linearity of the transmitted wave in the frequency domain;receiving a reflection of the transmitted wave from the scene;interfering a copy of the transmitted wave with the received reflection of the transmitted wave to generate a beat signal;generating a sequence of samples of the beat signal with wrapped phases in a time domain; andestimating, iteratively until a termination condition is met, a frequency of the beat signal in the time domain based on 1) phase unwrapping of the samples of the beat signal subject to correlated phase error derived from phase noise statistics of the emitter and 2) a linear regression fitting the frequency of the beat signal into the unwrapped phases of the beat signal.