MICROSECOND TIME OF FLIGHT (MTOF) SENSOR

Information

  • Patent Application Publication Number
    20250076474
  • Date Filed
    April 30, 2024
  • Date Published
    March 06, 2025
  • Inventors
    • BENCH; Jared (Herndon, VA, US)
  • Original Assignees
Abstract
An example μTOF is a flexible, small, sensor unit that uses modulated light to measure distance. The architecture of the sensor allows for many use cases. Use cases include the classic single emitter, single detector topology, but also include capability for operability as a full multi-input, multi-output (MIMO) system. In a MIMO configuration, the emitters and detectors can be arranged in a configuration similar to an RF antenna array or any number of other configurations from a single emitter/detector pair to vast dispersions of emitters and detectors. By coding the signal output by each emitter with a unique pseudo-noise (PN) or similar sequence, reflected signals received at the detector can be separated from each other, providing path distances between each emitter-detector pair. Given the robustness and noise immunity of PN sequences, this approach works well even with signal levels well below the noise floor. Using the measured path distances from each sensor to each emitter, the locations of objects in the scene can be extracted by triangulation.
Description
FIELD

The technology herein relates to a new type of sensor that is uniquely designed for detecting fast moving objects in space. Still more particularly, the technology herein relates to a Microsecond Time-of-Flight (μTOF) sensor that uses sensor technology employed by modern cell phones and GPS to pull weak signals from multiple sources in a noisy environment.


BACKGROUND & SUMMARY
Lidar Concepts and Typical Lidar Approaches

LIDAR (Light Detection and Ranging) or TOF (time of flight) sensing is a 3D scanning technique used to measure the distance to an object. LIDAR works by illuminating an object with a narrow column of light and then measuring the reflection in a particular way. The light scattered by the object is detected by a photodetector, which can be a single pixel or an array. Distance can be calculated by measuring the time of flight for the light to travel to the object and back.


Typically, LIDAR are characterized as either pulsed or continuous wave (CW).


Pulsed LIDAR is a somewhat brute force approach to distance measurement. The illumination consists of a single, high-amplitude, short pulse of light. After emitting a pulse, the detector circuit is monitored for reception of the reflected pulse. This technique has the advantage of being relatively simple. However, because the detector is looking for a short pulse of light, performance can degrade significantly in the presence of other LIDARs or in noisy environments. In addition, as the reflectivity and distance of the object varies, pulse amplitudes can vary wildly. Due to finite pulse rise times, this can create errors in the time of flight measurement, known as walk error. Such errors can be reduced via a variety of techniques, such as specialized signal conditioning circuitry or by measuring and compensating for the return pulse amplitude.


Continuous wave (CW) TOF sensors use an indirect distance measurement approach. Instead of transmitting a single pulse of light, CW LIDARs transmit a continuous train of pulses. The emitted waveform can take many shapes, including sinusoidal or square waves. By measuring the phase difference between the emitted and received signal, distances can be calculated. This approach has better mutual interference and noise performance because the detection space can be narrowed to a single frequency and observed over many pulses. However, the repeating nature of the signal creates distance ambiguities in measurements that are dependent on modulation frequency. Under high ambient conditions, measurements also begin to show contrast decay, limiting sensitivity and accuracy.


For both pulsed LIDAR and CWTOF approaches, it is necessary to consider how the system will be extended beyond a single pixel. For LIDAR, this is typically done by physically scanning a single emitter/detector pair over a field of view or by using an array of multiple emitters and/or detectors. Scanning makes use of mechanically rotating mirrors, MEMS actuators, or other laser steering technologies, and tends to be too slow to track fast moving objects. CWTOF lends itself better to use with a single emitter and an array of detectors, due to the ability to easily integrate RF mixing technology onto a typical CMOS or CCD imager chip and integrate returns over time. Because of this, detector arrays for CWTOF tend to scale better in cost, size, and power than their pulsed LIDAR counterparts. However, sample rates are still limited by array readout and integration times. Such systems are also very challenged in high ambient light conditions such as direct sunlight.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an example non-limiting μTOF system block diagram.



FIG. 2 shows an example non-limiting simple μTOF sensing scheme with two emitters, one sensor and one object.



FIG. 3 shows example non-limiting creation and resolution of ambiguities for two objects.



FIG. 4 shows example non-limiting transmitted and received signals with detection peak for 0 dB signal-to-noise ratio.



FIG. 5 shows example non-limiting transmitted and received signals with detection peak for −20 dB signal-to-noise ratio.



FIG. 6 shows example non-limiting transmitted and received signals showing a zoomed detection peak.



FIG. 7 shows an example non-limiting received signal and detection for two closely spaced simultaneous signals.



FIG. 8 shows an example non-limiting received signal and detection for two closely spaced simultaneous returns with a 150 MHz modulation frequency.



FIG. 9 shows example non-limiting background removal for two interfering peaks.



FIG. 10 shows example non-limiting μTOF development kit on right and actual sensor on left.



FIG. 11 shows example non-limiting raw signal from μTOF on left, processed signal on right showing an object at approximately 11 meters range, along with 1 to 2 smaller returns.



FIG. 12 shows an example embodiment of a sensor input signal processing circuit.





DETAILED DESCRIPTION OF EXAMPLE NON-LIMITING EMBODIMENTS

An example TOF sensor system includes a flexible, small, sensor unit that uses modulated light to measure distance. The architecture of the sensor allows for many use cases. Use cases include the classic single emitter, single detector topology, but also include capability for operability as a full multi-input, multi-output (MIMO) system. In a MIMO configuration, the emitters and detectors can be arranged in a configuration similar to an RF antenna array or any number of other configurations from a single emitter/detector pair to vast dispersions of emitters and detectors.


By coding the signal output by each emitter with a unique pseudo-noise (PN) or similar sequence, reflected signals received at the detector can be separated from each other, providing path distances between each emitter-detector pair. A pseudo-noise (PN) code or pseudo-random noise (PRN) code generally speaking has a spectrum similar to random noise but is deterministically generated. See e.g., Kurpis et al, IEEE Std. 100: The New IEEE Standard Dictionary of Electrical and Electronic Terms (1993). Such codes have been used for example in direct sequence spread spectrum transmission and reception. See e.g., Chawla et al, Parallel Acquisition of PN Sequences in DS/SS Systems, IEEE Transactions on Communications Vol. 42, No. 5 (May 1994). Given the robustness and noise immunity of common PN sequences, this approach works well even with signal levels well below the noise floor. Using the measured path distances from each sensor to each emitter, the locations of objects in the scene can be extracted by triangulation.
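The separability described above rests on the sharp autocorrelation of PN sequences. As an illustrative sketch (not the patent's implementation; the Python/NumPy form and the tap choice are assumptions here), the following generates a 127-chip maximal-length sequence with a linear feedback shift register and checks its two-valued circular autocorrelation:

```python
import numpy as np

def lfsr_msequence(taps, nbits):
    """Generate one period of a maximal-length PN sequence (m-sequence)
    from a Fibonacci LFSR. taps are 1-indexed feedback positions."""
    state = [1] * nbits          # any nonzero seed works
    seq = []
    for _ in range(2**nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

# Taps [7, 6] correspond to a primitive polynomial, giving a 127-chip code
chips = lfsr_msequence([7, 6], 7)
bipolar = 2 * chips - 1          # map {0, 1} -> {-1, +1}

# Circular autocorrelation of an m-sequence is two-valued:
# 127 at zero lag and exactly -1 at every other lag
acorr = np.array([np.dot(bipolar, np.roll(bipolar, k)) for k in range(127)])
```

This two-valued autocorrelation is what lets a correlator collapse a full code period of received energy into a single narrow peak, which underlies the sub-noise-floor detection described here.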


The system can work with N emitters; the more emitters one has, the better the spatial resolution. Two sensors are the minimum for determining a location (X, Y) in two dimensions, and three sensors are the minimum for determining a location (X, Y, Z) in 3D space. With 1,000 emitters, one could likely determine where something is to within several wavelengths of light, yielding tremendous accuracy. This is not possible with a normal LIDAR, since the multiple beams would interfere with each other. It is the ability to process PN codes that allows massive numbers of emitters to be used.


The PN codes allow the system to discriminate the signal not only from other emitters, but also from background noise and clutter. This is extremely powerful because LIDARs are limited by eye safety concerns, which severely restrict their range. With this approach, range can be increased by a factor of perhaps 100 or more, depending on how long a PN sequence is emitted and how capable the processing is. The result is a substantial advance in LIDAR capability: consider how autonomous vehicles would change if objects could be accurately detected at 2 km rather than the 200 meters of today's best LIDARs.


Example μTOF System Architecture

Instead of a single pulse or sinusoidal wave as described in the approaches above, a μTOF emitter outputs an arbitrary waveform that is unique. This vastly improves noise rejection due to mutual interference and other sources as compared with the pulsed approach. It also improves upon the noise rejection characteristics of the CW approach and greatly extends the unambiguous range. In addition, this allows a single pixel photodiode or similar light sensor to simultaneously receive reflected signals from multiple emitters.


Depending on arbitrary waveform design, significant processing SNR (signal to noise ratio) gains can be achieved, clearly extracting signals that are 20 dB below the noise floor or more.


A μTOF system can be scaled up to an arbitrarily large number of sensors and emitters if modulated signals are designed to have sufficient orthogonality. In addition, the emitters and detectors can be physically separated from each other along different axes. This separation creates a geometric diversity that can be used to triangulate the locations of individual objects in the scene.


The conventional GPS system works on somewhat similar principles. A GPS receiver simultaneously receives a unique RF waveform from each of several satellites in view. By correlating against these waveforms, a time of flight is computed to each satellite, and knowing the location of each satellite allows the receiver to triangulate its own position. GPS, however, operates at L1 (1575.42 MHz) and L2 (1227.60 MHz) radio frequencies, not at light frequencies. Similarly to a GPS system, a μTOF system could consist of any number of transmitting and receiving nodes to form a mesh grid.


The components of a μTOF system are shown in FIG. 1 and are made up of the following in one embodiment:


Emitter(s) 10(1), 10(2), . . . , 10(m). Each emitter is capable of transmitting an arbitrary waveform at an appropriate power level and modulation frequency.


Sensor(s) 12(1), 12(2), . . . , 12(n). Each sensor contains a photo sensitive element with appropriate sensitivity and frequency response to detect signals reflected from the scene.


Waveform generator 14. A circuit or a processor that generates arbitrary waveforms and controls modulation of emitters.


Signal processor system 16. A processor responsible for digitizing and processing the received signals observed at each sensor. The sample rate, resolution, and bandwidth are appropriate to capture any signals of interest.


Timing and control system 18. A circuit that contains logic to supervise timing and control of emitters, sensors, signal processor, waveform generator, and other subsystems.


Assuming the received signal can be sampled at sufficient speed, the scan rate for this architecture is limited only by the modulation frequency, the length of the arbitrary waveform, and how long it takes to process information from the scene.


The simplest example of a MIMO μTOF system consists of two emitters 10(1), 10(2) and a single sensor 12(1). Positioning these three elements in a single line enables 2D sensing along a single plane. FIG. 2 illustrates how these sensors might be positioned relative to each other. A single object O is shown, along with curves representing the measured path length for each sensor-emitter pair (i.e., emitter 10(1) to sensor 12(1), and emitter 10(2) to sensor 12(1)). The signals emitted by the two emitters 10(1), 10(2) are coded in such a way that sensor 12(1) can tell them apart. The location of the object O is determined by finding the intersection of the two paths.
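The path-intersection idea of FIG. 2 can be sketched numerically. Each measured path length constrains the object to an ellipse with foci at one emitter and the sensor, so the object lies at the intersection of two ellipses. The sketch below (coordinates, units, and the Newton solver are illustrative assumptions, not the patent's method) recovers an object position from the two path lengths:

```python
import numpy as np

# Assumed 2D geometry: two emitters and one sensor on a line (cf. FIG. 2)
E1, E2, S = np.array([-1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 0.0])

def path_lengths(obj):
    """Emitter -> object -> sensor path length for each emitter."""
    return np.array([np.linalg.norm(obj - E1) + np.linalg.norm(obj - S),
                     np.linalg.norm(obj - E2) + np.linalg.norm(obj - S)])

def locate(L, guess, iters=50, h=1e-6):
    """Newton iteration on the path-length residuals using a
    finite-difference Jacobian. The mirror solution below the
    emitter line is excluded by the initial guess (cf. FIG. 3)."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        r = path_lengths(x) - L
        J = np.empty((2, 2))
        for j in range(2):
            d = np.zeros(2); d[j] = h
            J[:, j] = (path_lengths(x + d) - path_lengths(x - d)) / (2 * h)
        x = x - np.linalg.solve(J, r)
    return x

true_obj = np.array([0.5, 2.0])
L = path_lengths(true_obj)           # what the sensor would measure
est = locate(L, guess=[0.0, 1.0])    # triangulated position
```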


For a slightly more complex system with two objects in the scene, ambiguities can occur, as shown in FIG. 3. In the left side of this figure, the two received object ranges create four possible intersection points, only two of which are valid. One way the ambiguity can be resolved is by adding another sensor-emitter pair such as shown in the right side of FIG. 3. Other techniques also exist for resolving these ambiguities.




Additionally, the sensing space can be extended from 2D to 3D by adding sensors and/or emitters along the other dimension. As more sensors and emitters are added, the system gains more ambiguity resolving capability in the presence of increasingly complex scenes.


Signal Processing and Distance Calculation

The arbitrary waveform and direct signal sampling provide significant processing signal gains. Noise immunity, sensitivity, maximum sensing distance, and power requirements all improve as the waveform increases in length. FIGS. 4 & 5 show how a received signal can be processed to find the signal of interest, even in the presence of noise. FIG. 4 and FIG. 5 correspond to SNR levels of 0 dB and −20 dB, respectively, for a 255-chip PN sequence. The location of the peak in the Signal Detection section corresponds to the time of flight for a reflected signal in the scene. In both cases, a clear peak is detected with very little noise or side lobes present.
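The sub-noise-floor detection shown in FIGS. 4-5 can be reproduced in a small simulation (an illustrative sketch only; the 32 code repetitions, random seed, and noise model are assumptions, not the patent's parameters). A 127-chip m-sequence is buried in noise at −20 dB SNR and recovered by circular cross-correlation:

```python
import numpy as np
from scipy.signal import max_len_seq

chips = 2.0 * max_len_seq(7)[0] - 1.0    # 127-chip m-sequence, mapped to +/-1
reps = 32                                # coherent repetitions (~36 dB processing gain)
template = np.tile(chips, reps)

rng = np.random.default_rng(0)
delay = 300                              # true round-trip delay, in samples
sigma = 10.0                             # unit-power chips + sigma = 10 noise -> -20 dB SNR
received = np.roll(template, delay) + sigma * rng.standard_normal(template.size)

# The template is periodic, so one code period of lags suffices;
# the correlation peak appears at delay mod 127
xcorr = np.array([np.dot(received, np.roll(template, k)) for k in range(chips.size)])
peak_lag = int(np.argmax(xcorr))
```

The expected peak value is reps × 127 = 4064 against a correlation noise floor with a standard deviation of roughly 640, which is why a clean peak emerges even though the code is invisible in the raw time series.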


In a cluttered environment, multiple peaks will exist. In one embodiment, these individual peaks are matched between sensor-emitter pairs to properly triangulate locations of objects in the scene. Many such algorithms already exist and have been used in the RF world to extract angle of arrival and time of arrival information from an antenna array.


Additionally, by limiting the geometry to sensing along a single line and focusing on a small number of high-speed objects, the matching algorithms are significantly simplified. Static background returns can also be removed from consideration.


One technical aspect of one embodiment of the system is to characterize the returning waveform from a pulse train. There are many approaches for accomplishing this. One example method, simulated below, is to sample the wave with a high-speed ADC (analog to digital converter) to measure it directly. Many other approaches exist as well, including interferometry, under-sampling with statistical analysis, phase-shifting of the PN sequence, and so forth.


Example Simulation

A Matlab simulation of the μTOF sensor has been performed to verify some of the concepts discussed above. The following assumptions were made for system parameters:

    • PN Code Chip Frequency: 62.5 MHz
    • PN Code Length: 127 chips
    • ADC Sample Rate: 500 MSa/s
    • PN Code Repetitions: 4
    • ADC Effective Sample Rate: 2 GSa/s
    • Received Signal Level Peak: +/−50 mV
    • ADC Resolution: 12 bits


These assumptions give a pixel measurement rate of about 500 kHz if measurements are processed on a rolling basis, and an unambiguous range of 307 m. Zooming in on the cross-correlation peak, FIG. 6 shows that it is about 3 ns wide at 95% of the peak amplitude. This corresponds to about 45 cm and limits the ability to resolve the peaks of multiple simultaneous returns.
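The rate and range figures quoted above follow directly from the listed parameters; a quick check (the small difference from the stated ~307 m comes down to rounding of the code length or the speed of light):

```python
C = 299_792_458.0            # speed of light, m/s

chip_rate = 62.5e6           # PN code chip frequency, Hz
code_len = 127               # chips per PN code

code_duration = code_len / chip_rate         # 2.032 microseconds per repetition
unambiguous_range = C * code_duration / 2    # ~305 m (round trip halved)
measurement_rate = chip_rate / code_len      # ~492 kHz on a rolling basis
```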


For example, assuming a second return located 2 m from the first, FIG. 7 shows that the peaks tend to constructively interfere with each other in a way that makes it difficult to distinguish two discrete peaks. This problem is greatly reduced with a higher chip rate, processing of additional samples, waveform pattern matching and other approaches. FIG. 8 shows the same double return scenario with a chip rate of 150 MHz. At the higher modulation frequency, the peaks are separated and easy to resolve from each other. Additionally, if the second object corresponds to a static background object, a background subtraction can be performed to remove the second peak, as shown in FIG. 9.
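The FIG. 7/8 behavior follows from the range spanned by one chip, which sets how closely two correlation peaks can sit before they merge. A quick check of the two chip rates discussed (illustrative arithmetic only):

```python
C = 299_792_458.0  # speed of light, m/s

def chip_range_resolution(chip_rate_hz):
    """Round-trip range spanned by one chip: c * T_chip / 2."""
    return C / (2.0 * chip_rate_hz)

sep = 2.0                                  # the 2 m return spacing of FIG. 7
res_62 = chip_range_resolution(62.5e6)     # ~2.4 m per chip: the peaks merge
res_150 = chip_range_resolution(150e6)     # ~1.0 m per chip: the peaks resolve
```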


Example μTOF Components

To test these sensing concepts, a modular prototyping backplane (development kit shown in FIG. 10) was built with the ability to swap out different hardware modules. Components include:


TOF Controller 50: Consists of a backplane with connectors and interfaces to mount and control submodules. Includes a high-speed ADC, capable of 500 MSa/s sampling at 12-bit resolution, and a processor system on chip (SoC).


Optical Sensor 10 (see left-hand side of FIG. 10): In one embodiment, a 27 mm×27 mm module that contains a single avalanche photodiode and analog front-end circuitry. Includes multiple gain stages, which are adjustable and allow for dynamically changing amplification levels. There are three stages of amplification. The avalanche photodiode 102 (see FIG. 12) itself produces gains up to 200× by adjusting the bias voltage, using 4096 steps of adjustment. The next gain stage is a transimpedance amplifier 104 with a gain of 4.7 kV/A. This transforms the current output of the photodiode into a voltage. See e.g., Lau et al, Excess Noise Measurement in Avalanche Photodiodes Using a Transimpedance Amplifier Front End, Meas. Sci. Technol. 17, 1941-1946 (2006).


A programmable gain amplifier 106 allows further gains ranging from 0.9 V/V to 87 V/V. The FIG. 12 circuitry yields a total dynamic range from 4.2 kV/A to 81,800 kV/A in nearly continuously variable gain steps. The analog bandwidth of one embodiment of the circuitry is about 520 MHz, and is capable of capturing fast moving light pulses as they arrive at the detector. In one embodiment, the sensor also holds an M12 lens mount, allowing the flexibility of low-cost COTS M12 lenses.
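The dynamic-range figures quoted for the FIG. 12 chain multiply through the three stages. A quick check (assuming a minimum avalanche multiplication of 1×, which the text implies but does not state):

```python
# Three-stage analog gain chain from the text:
# APD multiplication -> transimpedance amplifier -> programmable gain amplifier
apd_gain = (1.0, 200.0)      # avalanche multiplication range (minimum assumed)
tia_gain = 4.7e3             # transimpedance gain, V/A
pga_gain = (0.9, 87.0)       # programmable voltage gain range, V/V

min_gain = apd_gain[0] * tia_gain * pga_gain[0]   # ~4.2 kV/A
max_gain = apd_gain[1] * tia_gain * pga_gain[1]   # ~81,800 kV/A
```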


Avalanche Photodiode Bias Circuit 108: In one embodiment, the bias circuit is a 29 mm×29 mm module that steps up voltage at the 24V input to a high voltage that is used to bias the photodiode. Accurate current monitoring circuitry is integrated into the board along with over-current and shutdown protection.


Emitter 10: Contains a series of LEDs or lasers that are modulated at a high rate with a PN sequence. The circuits were designed to be flexible to allow adjustment of peak power and modulation frequency during the prototyping process. The current emitter allows modulation up to 62.5 MHz.


Example Performance

Initial example results are shown in FIG. 11. The PN codes have been successfully emitted, returned and processed at frequencies up to 62.5 MHz. The raw signal is shown on the left and is typical of a PN sequence. The processed signal on the right clearly shows a peak at the distance of the test object, about 11 meters away. Currently, the development focus is on collecting and analyzing data from high-speed projectiles in both laboratory and test range scenarios. In addition, work on signal processing algorithms and optimal emitter/receiver geometries is ongoing and continues to be refined based on feedback from testing.


All patents and publications cited herein are expressly incorporated by reference.


While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims
  • 1. A distance measuring method comprising: (a) emitting a first optical signal comprising many first pulses each coded with a first PN encoding; (b) emitting a second optical signal comprising many second pulses each coded with a second PN encoding; (c) receiving the first and second optical signals reflected from at least one object; (d) at least one processor or processing circuit using the first PN encoding and the second PN encoding to discriminate between first pulses of the received first optical signal and second pulses of the received second optical signal; and (e) based on the discriminated first and second pulses, the at least one processor or processing circuit performing a time of flight calculation(s) to determine a range(s) of the at least one object.
  • 2. The method of claim 1 wherein using further comprises using the first PN encoding and the second PN encoding to discriminate between first and/or second pulses and noise.
  • 3. The method of claim 1 wherein the first optical signal comprises a first waveform and the second optical signal comprises a second waveform different from the first waveform.
  • 4. The method of claim 1 wherein performing comprises determining location of the object by finding intersection of a first path of the received first reflected optical signal and a second path of the received second reflected optical signal.
  • 5. The method of claim 1 wherein performing comprises matching peaks in received signals between sensor-emitter pairs to triangulate location(s) of the at least one object.
  • 6. The method of claim 1 wherein using comprises sampling a wave of the first and/or second optical signal with a high-speed analog to digital converter.
  • 7. The method of claim 1 wherein the at least one object is moving, and the performing is repeated to track position(s) of the at least one moving object.
  • 8. The method of claim 1 wherein receiving comprises capturing fast moving light pulses as they arrive at a detector with an analog bandwidth on the order of 500 MHz.
  • 9. The method of claim 1 wherein the first and second optical signals each comprise modulated light.
  • 10. The method of claim 1 wherein codes comprise pseudo random noise codes.
  • 11. A distance measuring system comprising: a receiver configured to receive a first pulse train of many pulses each coded with a PN encoding, and a second pulse train of many pulses each coded with a PN encoding; and at least one processor or processing circuit connected to the receiver, the at least one processor or processing circuit performing operations comprising: using the PN encodings to discriminate between the received first pulse train and the received second pulse train, and based on the discriminated first and second pulse trains, performing a time of flight calculation(s) to determine a location characteristic of at least one object.
  • 12. The distance measuring system of claim 11 wherein using further comprises using the first PN encoding and the second PN encoding to discriminate between first and/or second trains and noise.
  • 13. The distance measuring system of claim 11 wherein the first pulse train comprises a first waveform and the second pulse train comprises a second waveform different from the first waveform.
  • 14. The distance measuring system of claim 11 wherein performing comprises determining location of the object by finding intersection of a first path of the received first pulse train and a second path of the received second pulse train.
  • 15. The distance measuring system of claim 11 wherein performing comprises matching peaks in received signals between sensor-emitter pairs to triangulate location(s) of the at least one object.
  • 16. The distance measuring system of claim 11 wherein using comprises sampling a wave of the first and/or second received pulse trains with a high-speed analog to digital converter.
  • 17. The distance measuring system of claim 11 wherein the at least one object is moving, and the performing is repeated to track position(s) of the at least one moving object.
  • 18. The distance measuring system of claim 11 wherein receiving comprises capturing fast moving light pulses as they arrive at a detector with an analog bandwidth on the order of 500 MHz.
  • 19. The distance measuring system of claim 11 wherein the first and second received pulse trains each comprise modulated light.
  • 20. The distance measuring system of claim 11 wherein codes comprise pseudo random noise codes.
  • 21. A signal processing method comprising: receiving a pulse train of many optical pulses each coded with a PN encoding; using at least one processor or processing circuit to discriminate, based on the PN encoding, between the received pulse train and at least one other received signal and/or noise; and based on the discriminated received pulse train, performing a time of flight calculation(s) to determine a location characteristic of at least one object capable of optically-reflecting at least a portion of the many optical pulses.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 16/927,466, filed Jul. 13, 2020, now U.S. Pat. No. ______, which claims priority from U.S. Provisional Patent Application Nos. 62/873,721, filed Jul. 12, 2019, and 63/030,009, filed May 26, 2020, all of which are incorporated herein in their entirety by reference.

Provisional Applications (2)
Number Date Country
62873721 Jul 2019 US
63030009 May 2020 US
Continuations (1)
Number Date Country
Parent 16927466 Jul 2020 US
Child 18651654 US