The present invention relates to LiDAR (Light Detection And Ranging) technology, and in particular to a LiDAR apparatus and process for making simultaneous measurements of distance and velocity.
Access to reliable and accurate navigation, guidance and situational awareness data is highly sought after to ensure mission success in a wide range of applications. Industries that rely heavily on navigation technology include commercial aviation, rail transport, space systems, trucking, ride sharing, mining, urban transport providers, active target weaponry, and automotive, to name a few.
Position is the most sought-after attribute, with velocity and acceleration typically being either derived from position, or used to infer position, via simple Newtonian mechanics. All metrics are quoted relative to some global reference frame, which may be a known starting position of a vehicle or other mobile asset. Knowledge of the attitude of an accelerometer can be used to transform acceleration measurements into an inertial reference frame. Position is the integral of velocity over time, and velocity is the integral of acceleration over time. Navigation is a seven-degree-of-freedom problem. That is, the position of any asset can be completely resolved by having access to temporal (time), translational (X, Y, and Z) and rotational (yaw, pitch, and roll) information, and coupling this information with Newtonian mechanics.
Sensors to measure these six spatial degrees of freedom and one temporal degree of freedom are not new, and have existed in various forms for nearly 100 years. An inertial measurement unit operates by measuring linear acceleration using one or more accelerometers, and rotational rate(s) using one or more gyroscopes. Technologies used include fiber optic gyroscopes for measuring rotations (yaw, pitch, and roll), and accelerometers for measuring translations (X, Y, and Z). Position is typically of very high importance, and any error in the measurement of rotational rate or linear acceleration compounds rapidly in the computed position. This is because linear acceleration must be integrated twice to determine position, and rotational rate must be integrated once to determine attitude, so even a small constant sensor bias grows into a large position error over time.
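The effect of double integration can be illustrated with a short numeric sketch. The bias value below is a hypothetical example chosen only for illustration, not a figure for any particular IMU; the simulated drift is compared against the closed-form error 0.5·b·t² for a constant bias b.

```python
# Sketch: position error produced by a constant accelerometer bias.
# The bias value below is a hypothetical example for illustration.

def position_drift_from_bias(bias, duration, dt=1.0):
    """Double-integrate a constant acceleration bias (simple Euler scheme)."""
    velocity = 0.0
    position = 0.0
    for _ in range(int(duration / dt)):
        velocity += bias * dt      # first integration: bias -> velocity error
        position += velocity * dt  # second integration: velocity -> position error
    return position

bias = 1e-3        # m/s^2, hypothetical accelerometer bias
duration = 3600.0  # one hour, in seconds

drift = position_drift_from_bias(bias, duration)
analytic = 0.5 * bias * duration**2  # closed-form: 0.5 * b * t^2

print(f"simulated drift: {drift:.1f} m, analytic: {analytic:.1f} m")
```

Even this millinewton-scale bias integrates to kilometres of position error over an hour, which is why the auxiliary corrections described below are needed.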
Any unmeasured drift in the rotation of the sensor will result in an incorrect attitude determination being reported, which when combined with linear acceleration measurement errors will result in an incorrect position being reported. As a result, it is not uncommon for a state-of-the-art Inertial Measurement Unit (IMU) to report an integrated position drift (i.e., error) of up to 100 km over the course of an hour.
To correct for this erroneous position information, an IMU can be paired with auxiliary data streams to form an Inertial Navigation System (INS). Examples include wheel speed sensors and gear selector status in vehicles to provide a dead reckoning capability, or position derived from Simultaneous Localization And Mapping (SLAM) algorithms. Another common source of this second measurement of position is a Global Navigation Satellite System (GNSS). Examples of GNSS that are commonly used include the Global Positioning System (GPS), the GLObal NAvigation Satellite System (GLONASS), Galileo, and the BeiDou Navigation Satellite System (BDS).
There are several operational environments where GNSS signals are either unreliable or not available. In these environments, it can be difficult to obtain external measurements of the position of the sensor, often resulting in large positional drifts that have negative effects on navigation. Likewise, wheel speed sensors can produce erroneous data if traction is lost in challenging terrain, or if tracked vehicles are used.
Light Detection And Ranging (LiDAR) is a sensor technology that remotely interrogates a target of interest using a laser signal. Coherent LiDAR is capable of measuring the relative radial velocity of a target due to the Doppler effect. For a round trip to the target and back, the Doppler shift f due to a relative velocity v relates to the carrier wavelength λ by the equation:

f = 2v/λ
The shorter the wavelength, the larger the Doppler shift. Relative velocity can thus be measured with high precision at optical wavelengths. For example, a near-infrared laser operating at a wavelength of 1064 nm will experience a 47 MHz Doppler shift at a relative velocity of 25 m/s:

f = 2 × 25/(1064 × 10⁻⁹) ≈ 47 MHz
As such, the absolute optical frequency of the Doppler shifted light will be:

fshifted = f0 + f
where f0 is the optical carrier frequency, which is related to optical wavelength λ and the speed of light, c, by the equation:

f0 = c/λ
The optical carrier frequency of a 1064 nm laser is approximately 282 THz, which is too high to measure directly using current electronics. It is thus necessary to measure changes in optical frequency using an interferometer. Interfering the Doppler shifted light with an unshifted reference (referred to as a local oscillator) at a photodetector produces an electronic signal at their difference frequency:

fbeat = fshifted − f0 = f
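The relations above can be checked with a short numeric sketch, assuming the round-trip Doppler relation f = 2v/λ stated earlier:

```python
# Worked example of the relations above, using a 1064 nm laser and a
# relative radial velocity of 25 m/s.

C = 299_792_458.0  # speed of light, m/s

wavelength = 1064e-9  # m
velocity = 25.0       # m/s

carrier = C / wavelength               # f0 = c / lambda, ~282 THz
doppler = 2.0 * velocity / wavelength  # f = 2v / lambda (round trip), ~47 MHz
shifted = carrier + doppler            # absolute frequency of the returned light
beat = shifted - carrier               # interferometric beat equals the Doppler shift

print(f"carrier: {carrier/1e12:.1f} THz, Doppler shift: {doppler/1e6:.1f} MHz")
```

The beat frequency handed to the electronics is thus tens of MHz, well within the reach of ordinary photodetectors and digitisers, even though the optical carrier itself is hundreds of THz.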
It is possible to measure the relative rates of all rotational (yaw, pitch, and roll) and translational (X, Y, and Z) axes using a coherent LiDAR sensor if the light is aimed at the target in the correct orientation. Only the Doppler contribution of relative radial velocity can be measured, so an orientation which puts the outgoing light as close to parallel to the motion as possible is desirable. A sensor that is capable of measuring this effect has previously been disclosed in U.S. Pat. No. 9,007,569 B2, entitled “Coherent doppler lidar for measuring altitude, ground velocity, and air velocity of aircraft and spaceborne vehicles”. This patent relies on a well-known technique called Frequency Modulated Continuous Wave (FMCW) LiDAR, which relies on frequency modulation of the laser to introduce an artificial heterodyne beat-note. The range and velocity can be inferred by measuring this beat-note frequency.
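Recovering the full translational velocity from several line-of-sight measurements amounts to solving a small linear system. The beam directions below are hypothetical, chosen only to be linearly independent; any three non-coplanar beams would serve.

```python
# Sketch: recovering a 3D velocity vector from three line-of-sight
# (radial) velocity measurements. Beam directions are hypothetical.
import numpy as np

# Unit vectors for three beam directions (rows), chosen to span 3D space.
beams = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.6, 0.0, 0.8],
])

true_velocity = np.array([25.0, -3.0, 0.5])  # m/s, example values

# Each channel measures only the radial projection v_r = u . v
radial = beams @ true_velocity

# With three linearly independent beams, the full vector is recovered
# by solving the linear system.
recovered = np.linalg.solve(beams, radial)

print(recovered)
```

This is why the orientation of the channels matters: if the beam directions were coplanar, the system would be singular and one velocity component would be unobservable.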
Certain applications require very precise velocity measurements that have an error far less than 1 cm/s. These include, but are not limited to, vehicle position tracking for hours at a time, airborne gravimetry, and satellite docking procedures. Since FMCW relies on modulation of the laser frequency, any noise introduced from the modulation will couple into the velocity measurement and reduce the precision.
Since only relative radial velocity can be measured, the configuration and number of optical channels need to be carefully planned. For example, the motion of a train is usually constrained to forward and backward directions, and consequently a single optical channel facing along either of these directions may suffice. However, as the train moves along the railway track, it typically pitches up and down due to topography and track defects, and this movement couples into the velocity measurements. For example, in the case of a sensor that is facing backwards, a positive pitch (up) event will result in a smaller than true forward velocity measurement, whilst a negative pitch (down) event will result in a larger than true forward velocity measurement. A simple way to overcome this is to have two sensors: one facing forward, the other facing backward. Depending on the exact configuration, the effect of pitch on the velocity measurement can be removed by simply taking the average of the two measurements. However, this necessitates a second sensor.
Other situations with more degrees of freedom may necessitate more sensors, sometimes more than 10, depending on the exact performance requirements. To achieve this many channels with other technologies such as Frequency Modulated Continuous Wave LiDAR, the hardware is usually duplicated as many times as required. However, this is costly, and results in a sensor that is unnecessarily large and complex. Time division multiplexing may also be used, but this results in a reduction in the measurement bandwidth, and a consequent degradation in the quality of the data produced.
It is desired to overcome or alleviate one or more difficulties of the prior art, or to at least provide a useful alternative.
In accordance with some embodiments of the present invention, there is provided a LiDAR apparatus, including:
In some embodiments, the pseudo-random bit sequences have low cross-correlation. In some embodiments, the optical signals have respective different delays such that the modulations do not overlap in time. In some embodiments, each modulated optical signal is modulated by the same pseudo-random bit sequence.
Also described herein is a LiDAR apparatus, including:
In some embodiments, the distances to the surface(s) are unconstrained, and the modulation components are further configured to output, from each of the output ports, and prior to outputting the modulated optical signals, a corresponding range-finding optical signal modulated by a corresponding pseudo-random bit sequence; and the digital signal processing component is further configured to, for each of the transmitted optical signals:
In some embodiments, the respective optical transmitters are arranged to transmit the respective modulated optical signals in different directions to enable navigation, telemetry, and positioning of a vehicle to which the apparatus is mounted.
In some embodiments, each optical transmitter and corresponding optical receiver constitute a corresponding optical transceiver. The optical transceivers may be, for example, beam expanders, telescopes, and/or off-axis reflectors.
In some embodiments, the different delays result from respective different optical path lengths between the output ports and the optical transmitters. In other embodiments, the different delays result from respective different electrical path lengths between a pseudo-random bit sequence generator and respective optical modulators of the modulation and delay components. In yet further embodiments, the different delays result from starting the pseudo-random bit sequence generator at different times, or from using a different pseudo-random bit sequence code for each delay.
In accordance with some embodiments of the present invention, there is provided a LiDAR process executed by a signal processing component of a LiDAR apparatus, including:
In some embodiments, the distances to the surface(s) are unconstrained, and the process includes calculating the different delays from respective measurements of the distances of the surface(s) from the LiDAR apparatus.
In some embodiments, each measurement of distance of the corresponding surface from the LiDAR apparatus is calculated by:
In some embodiments, the process includes controlling respective optical modulators to modulate the optical signals with the respective different delays.
In accordance with some embodiments of the present invention, there is provided at least one computer-readable storage medium having stored thereon processor-executable instructions that, when executed by at least one processor of a LiDAR apparatus, cause the at least one processor to execute any one of the above processes.
In accordance with some embodiments of the present invention, there is provided at least one non-volatile storage medium having stored thereon FPGA configuration data that, when used to configure an FPGA, causes the FPGA to execute any one of the above processes.
In accordance with some embodiments of the present invention, there is provided at least one non-volatile storage medium having stored thereon processor-executable instructions and FPGA configuration data that, when respectively executed by at least one processor of a LiDAR apparatus and used to configure an FPGA, causes the at least one processor and FPGA to execute any one of the above processes.
Some embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
Embodiments of the present invention include LiDAR (Light Detection And Ranging) apparatuses and processes in which one or more pseudo-random bit sequences (“PRBS”) are modulated onto an optical signal generated by a laser, and split into multiple channels to simultaneously measure the distance and relative radial velocity for each channel relative to some surface(s). The channels may be, for example, directed into free space through respective telescopes, and directed at the same surface or different surfaces which scatter or reflect light back to the telescopes, each with potentially different relative radial velocities and distances. Light scattering back towards the sensor is received and then interfered at a receiver. Depending on the optical configuration, the receiver could be, for example, a 90-degree optical hybrid receiver with two balanced photodetectors as shown in
This technology is referred to herein as “code-division multiple-access laser Doppler velocimetry” (or “CDMA LDV”) since it uses spread-spectrum code-division multiple access signal processing (similar to, for example, the Global Positioning System) to support multiplexed (i.e., more than one) measurements based on respective different delays of the sequences. The term “Laser Doppler Velocimetry” broadly describes the field of using a laser to measure velocity based on the Doppler effect, in which changes in relative radial velocity result in measurable changes in optical frequency.
A key advantage of the described invention is that it enables the separate and simultaneous measurement of multiple distinct line-of-sight velocities and distances without duplicating, for each additional channel, the hardware required for a single measurement channel; overall hardware complexity is lowered by relying instead on the efficient utilization of digital signal processing resources. For example,
Embodiments of the present invention thus enable a reduction of the overall complexity and number of hardware components by using spread-spectrum signal processing techniques which do not sacrifice measurement quality, and which provide considerable advantages when scaling the number of measurements that are required for any given application.
The LiDAR apparatuses described herein are configured for use in one of two types of application, depending on whether the distances (i.e., ranges) and radial velocities to be measured are constrained to be within a known range of values, or are unconstrained (i.e., are entirely unknown and can take any practical value).
For measurement channels that are unconstrained (i.e., when the distance between the optical output and the reflecting/scattering surface is unknown and may change appreciably over time), the relative distance between the optical output and the surface is first estimated and then used to calculate the corresponding delay of the channel. In this way, the unconstrained channel can be treated as a constrained channel. In some cases, to accurately measure distance, it is necessary to correct for disturbances in optical phase and frequency, including Doppler and other sources of phase noise. In the described embodiments, this is achieved using the process described in International Patent Application No. PCT/AU2020/051427, entitled “A LiDAR apparatus and process” (“the frequency compensation patent application”), the entirety of which is incorporated herein by reference. This process does require the duplication of receiver hardware for each measurement channel, but offers excellent measurement precision, accuracy, and dynamic range. Moreover, a key advantage of the frequency compensation described in the frequency compensation patent application is that it compensates the effects of Doppler shifting, enabling range to be calculated using a single template, and effectively collapsing a computationally intensive 2D search space into a single correlation calculation. The frequency compensation process also circumvents the need to measure and correct for a frequency shift on the received signal which, for example, could be accomplished by demodulating the input signal with a reference local oscillator prior to matched-template filtering.
In
The modulated light from each channel is transmitted via a corresponding optical transmitter to illuminate at least part of a remote surface or object that scatters and/or reflects a portion of the modulated light back towards a corresponding optical receiver of the LDV. In the described embodiments, and as shown in
A small portion of the scattered light (an ‘echo’) is captured using the beam expander (106) and coherently interfered with a local oscillator (130). In the described embodiments, the incoming light is separated from the outgoing light (128) using a fibre-optic circulator (108). In some embodiments, a fibre-optic polarization beam splitter is used in place of the fibre-optic circulator (122), or as is the case in the embodiment shown in
Since only relative radial velocity can be measured, the configuration and number of optical channels needs to be carefully planned. For a mobile asset such as a train, motion is usually constrained to the forward/backward direction, and a single channel facing along this direction may suffice. However, as the train moves along the railway track it typically pitches up and down due to topography and track defects, and this movement couples into the forward velocity measurement: a positive pitch (up) event will result in a smaller than true velocity measurement, whilst a negative pitch (down) event will result in a larger than true velocity measurement. A simple way to overcome this is to have two sensors, one facing forward and the other facing backward; depending on the exact configuration, the effect of pitch on the velocity measurement can then be removed by simply taking the average of the two measurements. However, this necessitates a second sensor, and other situations with more degrees of freedom may necessitate more sensors, sometimes more than ten, depending on the exact performance requirements. To address this difficulty, the inventors have developed a process whereby multiple simultaneous measurements can be made with no sacrifice in measurement quality or data throughput. This is enabled by using Code Division Multiple Access (CDMA), which uses different digital signals to separate out each measurement channel. The light exiting the laser has some amplitude A and angular frequency ω = 2πfoptical, where foptical is the optical frequency of the light, so a phasor describing the electric field as a function of time can be written as Ae^(iωt).
The digital signal that is applied in phase to the outgoing light could, for example, in one embodiment be a Pseudo-Random Bit Sequence (PRBS) such as a maximal length sequence. An Nth order maximal length sequence will have a length of L = 2^N − 1. If the PRBS is modulated at a chip rate f_chip (the frequency at which one symbol, or chip, of the PRBS is modulated), and the resultant signal from the photodetectors is digitised using an Analog-to-Digital Converter (ADC) with a sampling frequency f_adc, then the oversample ratio r_oversample is defined as the ratio of the ADC sample rate to the chip rate, such that r_oversample = f_adc/f_chip. The length of the digital sequence in terms of digital ADC samples can then be described as L_actual = (2^N − 1) × r_oversample.
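A maximal length sequence of this kind can be generated with a linear-feedback shift register. The sketch below uses a 5-bit register with feedback taps [5, 3] (corresponding to the primitive polynomial x⁵ + x³ + 1) as one illustrative maximal-length configuration; the sample rates are likewise example values.

```python
# Sketch: generating an Nth order maximal length sequence (m-sequence)
# with a Fibonacci linear-feedback shift register. Tap choice [5, 3] is
# one known maximal-length configuration for N = 5.

def mls(order, taps):
    """Return one period (2**order - 1 chips) of an m-sequence of 0s and 1s."""
    state = [1] * order  # any non-zero seed works
    seq = []
    for _ in range(2**order - 1):
        seq.append(state[-1])            # output chip
        feedback = 0
        for t in taps:                   # XOR of the tapped stages
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]  # shift
    return seq

code = mls(5, [5, 3])

f_adc, f_chip = 52e6, 13e6          # example rates, Hz
r_oversample = f_adc / f_chip       # oversample ratio
l_actual = len(code) * r_oversample  # (2**N - 1) * r_oversample

print(len(code), r_oversample, l_actual)
```

An m-sequence of order 5 has 31 chips, of which 16 are ones and 15 are zeros; this near-perfect balance underlies the correlation properties exploited below.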
To separate out each of the individual channels from the superposition incident signal, the delay of each signal relative to the local template must be known. Furthermore, to successfully recreate the velocity vector map relative to the reference frame (which, for example, might be the ground), there must be some knowledge of which delay belongs to which measurement channel. Say, for example, that a three-channel system is employed with respective channels for the X, Y and Z translation axes. If the sensor is moving down (negative Z), but the Z measurement channel is incorrectly interpreted as X, then the resultant velocity vector from which the corrected position is derived will be purely forwards. This is obviously erroneous, and will result in a large position uncertainty. To overcome this, the inventors have constrained the range of possible delays for each channel to a specific subset of the overall delay space of length L_actual = (2^N − 1) × r_oversample. A simple implementation is to divide the overall delay space L_actual by the number of simultaneous measurement channels, and constrain each measurement to occupy only its own subset of unique delays. For example, if the delay space has a length of 30, and 3 channels are required, then channel one can take delays 1-10, channel two delays 11-20, and channel three delays 21-30. The means by which the offset is applied to each measurement channel can take many forms, including different optical fibre lengths (as shown in
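The delay-space partitioning described above can be sketched as follows; the function name is illustrative, and the 30-delay, three-channel example mirrors the one in the text.

```python
# Sketch: dividing the overall delay space among measurement channels so
# that each channel owns a unique, contiguous subset of delays.

def partition_delay_space(total_delays, num_channels):
    """Return (first, last) delay pairs, 1-based, one per channel."""
    size = total_delays // num_channels
    return [(ch * size + 1, (ch + 1) * size) for ch in range(num_channels)]

# Example from the text: a delay space of length 30 split across 3 channels.
print(partition_delay_space(30, 3))  # → [(1, 10), (11, 20), (21, 30)]
```

Because each channel can only ever report a delay inside its own subset, a detected correlation peak unambiguously identifies which measurement channel produced it.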
Another method to separate out the individual channels from the superposition incident signal is to apply a digital signal that has low cross-correlation properties. Examples of such signals include, but are not limited to, maximal length sequences, Walsh-Hadamard codes, Gold codes, and Kasami codes. However, these codes tend to have poorer auto-correlation properties, which further emphasises the need to know the delay of each code relative to the local template. For low cross-correlation codes, the relative delay between each channel is no longer as important. The code applied to Channel 1 will not result in a large peak when demodulating using the code applied to Channel 2. This frees up the entire delay space for use.
A pseudo-random bit sequence of type [0,1] is modulated onto the phase of the light such that the electric field becomes:

A e^(i(ωt + βc[nTs]))

where β is the modulation depth (0 to π), and c[nTs] is the discrete-time form of the pseudo-random bit sequence of type [0,1]. It is important to note that full modulation depth (i.e., when β = π) looks like an inversion of the amplitude. This is because of the half-period shift identity sin(θ + π) = −sin(θ).
The light exits each of the N channels (for example, three channels), and is reflected by the target. The complex signal output from the complex coupler can be described by:

s[nTs] = Σi Ai e^(i(ωt + θi[nTs] + βc[(n−Ki)Ts]))

with amplitudes Ai, i = 1, 2, 3, angular frequency ω = 2πf, time-varying phase θi[nTs], and c[(n−Ki)Ts] being the known digital signal encoded in phase with modulation depth β at time delay Ki. Depending on the pseudo-random bit sequence that has been modulated onto the phase of the light, demodulating at the desired delay Ki will result in suppression of the other signals.
For example, maximal length codes are a family of pseudo-random bit sequences that have length L = 2^N − 1, where N is the size of the shift register. A 10 bit shift register will generate a code that is 1023 elements long. The auto-correlation of any maximal length sequence (mapped to values [−1,1]) is two valued:

R[k] = L for k = 0 (mod L), and R[k] = −1 otherwise.
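This two-valued auto-correlation can be verified numerically. The sketch below maps one period of an m-sequence to values [−1, 1] and computes its periodic (circular) auto-correlation, again using the illustrative 5-bit register with taps [5, 3].

```python
# Sketch: the periodic auto-correlation of a [-1, 1]-mapped m-sequence is
# two valued: L at zero lag, -1 at every other lag.

def mls(order, taps):
    """One period of an m-sequence from a Fibonacci LFSR (0s and 1s)."""
    state, seq = [1] * order, []
    for _ in range(2**order - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

code = [1 - 2 * c for c in mls(5, [5, 3])]  # map 0 -> +1, 1 -> -1
L = len(code)

def circ_autocorr(a, lag):
    return sum(a[n] * a[(n + lag) % L] for n in range(L))

print(circ_autocorr(code, 0))                         # zero lag: L = 31
print({circ_autocorr(code, k) for k in range(1, L)})  # all other lags: -1
```

The sharp single peak at zero lag is what makes the delay of each channel's code detectable against the flat −1 floor.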
The received signal can then be demodulated for each channel using the prior known delay. If full modulation depth (i.e., when β = π) is used, then the phase modulation is equivalent to an amplitude code with values [1,−1]. It can be said that for full modulation depth, a [0,1] code applied in phase is converted to a [−1,1] code applied in amplitude. Applying this conversion, the resultant signal is:

s[nTs] = Σi Ai c′[(n−Ki)Ts] e^(i(ωt + θi[nTs]))

where c′ is the [−1,1] form of the code.
Since the code now looks like an amplitude inversion, demodulation can be applied by multiplying the received signal with the same digital signal with polarity [−1,1]. Applying the correct delay for Channel 1, this results in:

c′[(n−K1)Ts] s[nTs] = A1 c′[(n−K1)Ts] c′[(n−K1)Ts] e^(i(ωt + θ1[nTs])) + Σi≠1 Ai c′[(n−K1)Ts] c′[(n−Ki)Ts] e^(i(ωt + θi[nTs]))
Since multiplying a [−1,1] code with itself undoes any modulation (−1 × −1 = 1, 1 × 1 = 1), this simplifies to:

c′[(n−K1)Ts] s[nTs] = A1 e^(i(ωt + θ1[nTs])) + Σi≠1 Ai c′[(n−Mi)Ts] e^(i(ωt + θi[nTs]))
If the digital signal is a maximal-length sequence, then the multiplication of the digital signal with a time-delayed version of itself produces the same digital signal with a fixed sample delay, M, relative to the original digital signal. The resultant delay of the maximal-length sequence is deterministic and can simply be compensated for.
The signal for Channel 1 is now a clean sinusoid plus whatever phase noise is present in the measurement, as denoted by the phase term θ1[nTs]. The other Channels are no longer sinusoids: their spectra have been spread by the digital signal. In the frequency domain, the original frequency content has been spread out (i.e., suppressed) over a much larger frequency range. It is this suppression that allows for isolation of individual measurement channels.
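The despreading described above can be demonstrated end to end. In the sketch below, two channels share the same m-sequence at different delays; the code, delays, and beat-tone frequencies are all hypothetical illustration values. Multiplying the superposition by Channel 1's code at the correct delay recovers a clean Channel 1 tone while spreading (suppressing) Channel 2.

```python
# Sketch: separating one channel from a two-channel superposition by
# despreading with the shared code at the correct delay. Code, delays and
# tone frequencies are hypothetical illustration values.
import numpy as np

def mls(order, taps):
    """One period of an m-sequence from a Fibonacci LFSR (0s and 1s)."""
    state, seq = [1] * order, []
    for _ in range(2**order - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

chips = np.array([1 - 2 * c for c in mls(5, [5, 3])], dtype=float)  # [-1, 1]
reps = 16
n_total = len(chips) * reps                     # 496 samples
c1 = np.tile(chips, reps)                       # channel 1 code
c2 = np.roll(c1, 7)                             # same code, different delay

n = np.arange(n_total)
tone1 = np.exp(2j * np.pi * 37 * n / n_total)   # channel 1 beat tone
tone2 = np.exp(2j * np.pi * 100 * n / n_total)  # channel 2 beat tone

received = c1 * tone1 + c2 * tone2              # superposition at the detector

despread = c1 * received                        # demodulate with channel 1 code

proj1 = abs(np.vdot(tone1, despread)) / n_total  # channel 1 recovered (~1)
proj2 = abs(np.vdot(tone2, despread)) / n_total  # channel 2 suppressed (small)
print(proj1, proj2)
```

Channel 1's projection comes out near unity, while the residual Channel 2 energy at its own tone is reduced to roughly the code length's reciprocal, illustrating the spreading suppression described above.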
A coarse frequency estimation is calculated by taking a Fourier transform of the data to convert it from the time domain to the frequency domain. In an embodiment (such as the first embodiment described above) where no forced frequency offset is introduced, a complex measurement of the interfering light fields is required to disambiguate between approaching (relative motion towards) and receding (relative motion away) relative velocities. This technique resolves the direction ambiguity by measuring orthogonal projections of the received light with respect to the local oscillator, allowing the absolute relative frequency of the electronic signal to be determined by calculating a cross-spectrum or complex fast Fourier transform (FFT). The in-phase information and the quadrature information are orthogonal, and are sufficient to completely resolve the time-varying phase and frequency of the signal. In-phase and quadrature data fed to an FFT yield a spectrum free of the mirror-image ambiguity of a real-valued signal, so the direction of the frequency shift can be inferred from which side of the spectrum the peak is situated. If a forced frequency offset is introduced (such as in the eighth embodiment described above), then a complex measurement of the interfering fields is not required. This is because the relative radial velocity shift can be disambiguated by looking at the movement about the forced frequency offset. For example, if a forced offset of 80 MHz is introduced and a negative 15 MHz Doppler shift is detected, then the resultant frequency shift will be 80 MHz − 15 MHz = 65 MHz. Conversely, if a positive 15 MHz Doppler shift is detected, the resultant frequency shift will be 80 MHz + 15 MHz = 95 MHz. One such embodiment uses the Fast Fourier Transform (FFT) to exploit the computational efficiency it offers. For an input sequence of length L sampled at a rate Fs, it gives a frequency estimation with a centre bin resolution of Fs/L.
For example, a 1024 point FFT implemented on data that has been sampled at a rate of fs = 52 MHz will have a bin resolution of:

52 MHz/1024 ≈ 50.781 kHz
Therefore, the identified peak frequency will have an error range of approximately ±25.39 kHz (half the bin width).
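The bin-resolution arithmetic for this example is a one-liner:

```python
# Sketch: FFT bin resolution and the corresponding half-bin error bound
# for the 1024-point, 52 MHz example above.

fs = 52e6    # sampling rate, Hz
n_fft = 1024  # FFT length

bin_resolution = fs / n_fft          # ~50.781 kHz per bin
half_bin_error = bin_resolution / 2  # ~±25.39 kHz peak-frequency error bound

print(bin_resolution, half_bin_error)
```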
The error of the measured frequency can be reduced by any one of several methods. One such method is to apply interpolation. This can include, but is not limited to, the quadratic method, barycentric method, Quinn's first estimator, Quinn's second estimator, or Jain's method.
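Of these, the quadratic (parabolic) method is the simplest: fit a parabola through the log-magnitudes of the peak bin and its two neighbours. The sketch below applies it to a Hann-windowed tone deliberately placed 0.3 bins off-centre; the window choice, FFT length, and frequencies are illustrative only.

```python
# Sketch: refining an FFT peak-frequency estimate with quadratic
# (parabolic) interpolation on log-magnitudes. A Hann window keeps the
# interpolation bias small; all numbers are illustrative.
import numpy as np

n_fft = 1024
true_bin = 200.3  # tone placed 0.3 bins off-centre
n = np.arange(n_fft)
x = np.cos(2 * np.pi * true_bin * n / n_fft)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n_fft)))
k = int(np.argmax(spectrum))  # coarse estimate: nearest bin

# Parabolic fit through the log-magnitudes of the three bins around the peak.
a, b, c = np.log(spectrum[k - 1 : k + 2])
delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional-bin correction

refined_bin = k + delta
print(refined_bin)
```

The raw peak bin is off by 0.3 bins, while the interpolated estimate lands within a few hundredths of a bin, well inside the half-bin bound discussed above.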
Another technique to further reduce the error in the measured frequency is to use a Phase Locked Loop (PLL). Phase locked loops generate an output signal that is proportional to the phase of the input signal relative to some locally generated reference. In one embodiment, the PLL is based on a Lock-In Amplifier, which extracts the phase and amplitude of a signal from a known carrier frequency in the presence of noise. In some embodiments, this involves demodulation at the known carrier frequency (which can be aided by coarse frequency estimation), with the second harmonic of the term being filtered out using a digitally implemented low-pass filter. For example, in some embodiments the filter is a cascaded integrator comb filter that decimates the demodulated input data by an integer multiple of the code length in samples. In some embodiments, the signal is simultaneously demodulated with sine and cosine, resulting in a full dual-quadrature readout. The amplitude of the signal can be calculated by taking the sum of squares of the output, whilst the relative phase can be calculated using an arctangent function. Applying a phase unwrapping function to the discrete phase measurements allows for phase tracking to occur over multiple fringes. The relative frequency offset of the signal to the demodulation frequency can be calculated by taking the derivative of the phase data. One example of this is to simply take the difference between subsequent estimations of the phase, and to apply a scaling factor proportional to the output sampling rate of the instrument. An advantage of using a phase locked loop to track the phase and frequency of the signal is that it forms a narrow-bandwidth filter around the demodulated carrier which is always centred around the input carrier frequency because of feedback control.
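A minimal digital lock-in readout along these lines can be sketched as follows: demodulation at the coarse carrier estimate, a decimating boxcar average standing in for the cascaded integrator comb filter, phase unwrapping, and a phase derivative to read out the frequency offset. The carrier, offset, and filter length are illustrative values only.

```python
# Sketch: lock-in style readout of a small frequency offset. The carrier,
# offset, and filter length are illustrative values only.
import numpy as np

f_carrier = 0.125  # coarse (known) carrier estimate, cycles/sample
f_true = 0.126     # actual signal frequency, cycles/sample
n = np.arange(64 * 100)

signal = np.cos(2 * np.pi * f_true * n + 0.7)  # real input with some phase

# Dual-quadrature demodulation at the coarse carrier frequency.
mixed = signal * np.exp(-2j * np.pi * f_carrier * n)

# Decimating boxcar (a crude stand-in for a cascaded integrator comb
# filter): average blocks of 64 samples to reject the sum-frequency term.
blocks = mixed.reshape(-1, 64).mean(axis=1)

phase = np.unwrap(np.angle(blocks))  # phase tracking over multiple fringes

# Frequency offset = phase slope, rescaled by the decimated sample spacing.
f_offset = np.mean(np.diff(phase)) / (2 * np.pi * 64)

print(f_offset)
```

The recovered offset matches the true 0.001 cycles/sample difference between signal and demodulation frequency, illustrating how the phase derivative yields a frequency estimate far finer than the coarse FFT bin.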
There are some situations where the range to the target reference surface for each measurement channel is time-varying. Such situations include, but are not limited to, spacecraft landings on interplanetary bodies, low-flying aircraft, and extremely bumpy roads. These can cause the relative delay of the modulated digital signal to change with respect to the local template. As such, the delay at which demodulation is applied may need to change to maximise the signal quality. The inventors refer to this mode of operation as “unconstrained mode”.
To identify the correct demodulation delay, a range measurement is taken to measure the round-trip time-of-flight. The Doppler shifting of the optical signal frequency poses a challenge because matched-template filtering is used to extract range information. As matched-template filtering relies on a correlation between the received signal and a local template, it is important to define the template as accurately as possible, which requires taking the Doppler shifting into account. This can be addressed by correlating the received signal with a range of different templates for respective different radial velocities. This technique works well in a post-processing or ‘offline’ context when it is acceptable to compute a series of correlations over an extended period of time. However, LiDAR for navigation requires a very high throughput of measurement data to improve the margin of safety and reduce the extent to which drift can accumulate.
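The core matched-template operation can be sketched in isolation. The example below ignores Doppler entirely (handling it is the subject of the frequency compensation process) and simply recovers the delay of an attenuated echo by circular correlation against the local template; the code, delay, and attenuation are illustrative values.

```python
# Sketch: estimating round-trip delay by correlating a received code
# against the local template. Doppler is ignored here; the code, delay,
# and attenuation are illustrative values.
import numpy as np

def mls(order, taps):
    """One period of an m-sequence from a Fibonacci LFSR (0s and 1s)."""
    state, seq = [1] * order, []
    for _ in range(2**order - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

template = np.array([1 - 2 * c for c in mls(5, [5, 3])], dtype=float)

true_delay = 11                             # samples, illustrative
echo = 0.2 * np.roll(template, true_delay)  # attenuated, delayed return

# Circular matched-template correlation computed via the FFT.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(template))).real

estimated_delay = int(np.argmax(corr))
print(estimated_delay)  # recovers the round-trip delay in samples
```

The single correlation peak at the true delay is what the unconstrained mode searches for; once found, that delay seeds the per-channel demodulation described earlier.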
This difficulty is overcome in the described embodiments using the process described in the frequency compensation patent application.
In the described embodiments, the LiDAR processes are implemented in the form of configuration data of a field-programmable gate array (FPGA) 2002 stored on a non-volatile storage medium 2004 such as a solid-state memory drive (SSD) or hard disk drive (HDD) of a signal processing component 2000 of the corresponding LiDAR apparatus, as shown in
The signal processing component 2000 also includes random access memory (RAM) 2006, at least one FPGA (or processor, as the case may be) 2008, and external interfaces 2010, 2012, 2014, all interconnected by at least one bus 2016. The external interfaces may include a network interface connector (NIC) 2012 to connect the LiDAR apparatus to a communications network and may include universal serial bus (USB) interfaces 2010, at least one of which may be connected to a keyboard 2018 and a pointing device such as a mouse 2019, and a display adapter 2014, which may be connected to a display device such as a panel display 2022. The signal processing component 2000 also includes an operating system 2024 such as Linux or Microsoft Windows.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention.
The three-channel phase-encoded code division multiple access laser Doppler velocimeter shown in
The digital signal processing (LiDAR process) of
A peak search was performed on the frequency domain data, with the identified peak frequency used as the demodulation frequency in a lock-in amplifier. For example, the peak frequency for Channel 2 was identified as −23.888074 MHz, compared to the true frequency of Channel 2 of −23.865 MHz, a difference of 23.1 kHz. The lock-in amplifier was constructed from a second order cascaded-integrator comb filter, with a filter length of 490 samples. The unwrapped phase output of this measurement is shown in
Foreign priority application: No. 2021901996, filed June 2021, Australia (AU), national.
PCT filing: PCT/AU2022/050684, filed 30 June 2022 (WO).