This document relates generally to Laser Imaging Detection and Ranging (LIDAR) systems.
A LIDAR system can be used for machine vision, and also for vehicle navigation. LIDAR systems may include a transmit channel that can include a laser source to transmit a laser signal, and a receive channel that can include a photo-detector to detect a reflected laser signal. For applications such as vehicle navigation it is desirable for the LIDAR system to detect objects at distance, but a LIDAR system can become more susceptible to noise as the imaging distance is increased. Typically, the power of the transmit channel is increased to improve the detection distance, but this increase in power consumption can be undesirable.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
where R can represent a distance from the LIDAR system to the illuminated object, Δt can represent a round trip travel time, and c can represent a speed of light.
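A minimal sketch of the standard round-trip time-of-flight computation, R = c·Δt/2, using the definitions above follows; the one-microsecond example value is an illustrative assumption.

```python
# Minimal sketch of the standard LIDAR time-of-flight range computation,
# R = c * dt / 2, where dt is the measured round-trip travel time.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(dt_seconds: float) -> float:
    """Return the distance R (meters) to the illuminated object."""
    # The factor of 1/2 accounts for the out-and-back path of the pulse.
    return C * dt_seconds / 2.0

print(range_from_round_trip(1e-6))  # ~149.9 m for a 1 microsecond round trip
```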
An example of a tradeoff is shown in the top left frame of
Proceeding to the next frame shown to the right on
Within the receive chain, a filter and estimator circuit can be included. The filter circuit can be configured to filter the received signal using one or more time domain coefficients and/or frequency domain coefficients applied to a mathematical operation. Also within the receiver chain is a depth map circuit that can generate a depth map based on spatial coordinates and depth values. The depth map can be applied to a central processor along with an RGB video stream, for example, and the processor can generate an image using these inputs.
Noise in a LIDAR system can have various sources and some LIDAR system noise sources can be frequency dependent. For example, a transimpedance amplifier can be coupled to a photodetector to amplify an electrical signal based on the optical return LIDAR signal detected by and transduced into an electrical signal by the photodetector. Such a transimpedance amplifier can have a noise characteristic that increases with frequency, such that the transimpedance amplifier can be noisier at higher frequencies.
To accommodate such a transimpedance amplifier noise characteristic, a wider LIDAR transmit pulse can be used, having more energy concentrated at lower frequencies than a narrower LIDAR transmit pulse. But accurately determining a time-of-flight of the return LIDAR signal can be best accomplished using a narrower LIDAR transmit pulse, which has more energy concentrated at higher frequencies, at which the transimpedance amplifier can be noisy. For example, multiple narrow pulses can be combined close to one another to form a pulse train, e.g., where the time delay between the pulses in the pulse train is equal to or less than the time-of-flight for a pulse reflected by an object at the farthest distance measured. For interference mitigation purposes, the narrow pulses, e.g., forming a pulse train, can be encoded as a sequence.
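As a rough illustration of the pulse-train idea (not the specific encoding of any referenced implementation), the sketch below places narrow pulses at a fixed inter-pulse delay, gated by a binary code; the sample rate, pulse width, code, and delay are all assumptions.

```python
import numpy as np

# Hypothetical sketch of a coded pulse train: narrow pulses placed at a
# fixed inter-pulse delay, gated by a binary code. The code, pulse
# width, and timing values below are assumptions for illustration only.
fs = 1e9                      # sample rate, 1 GS/s (assumed)
narrow = np.ones(4)           # 4-sample (4 ns) narrow pulse (assumed width)
code = [1, 1, 1, 0, 1]        # illustrative binary sequence
inter_pulse_delay = 100       # samples between pulse starts (assumed)

train = np.zeros(len(code) * inter_pulse_delay)
for k, bit in enumerate(code):
    if bit:
        start = k * inter_pulse_delay
        train[start:start + len(narrow)] = narrow  # place a narrow pulse
```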
Additionally, more distant objects are more easily detected using higher energy LIDAR transmit pulses, for example, wider or higher amplitude LIDAR transmit pulses. In a LIDAR system, range accuracy can be more important for closer objects than for distant objects.
For distant objects, the wide pulse portion of the hybrid pulse of this disclosure can maximize detection range because high frequency noise can be filtered out without removing the low-frequency content contained in the wide pulse signal. For objects at closer range and with adequate signal-to-noise ratio (SNR), the narrow pulse portion of the hybrid pulse of this disclosure can be detectable above the noise floor, which can yield good range resolution and precision.
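The hybrid pulse described above can be pictured as a narrow pulse superimposed on a wide pulse. Below is a minimal sketch; all widths and amplitudes are chosen arbitrarily for illustration.

```python
import numpy as np

# Sketch of a hybrid transmit pulse: a wide, low-frequency pulse with a
# narrow, high-frequency pulse superimposed. Widths and amplitudes are
# illustrative assumptions.
n = 256
t = np.arange(n)
wide = np.where((t >= 64) & (t < 192), 0.5, 0.0)     # wide pulse: low-frequency energy
narrow = np.where((t >= 124) & (t < 132), 0.5, 0.0)  # narrow pulse: fine timing
hybrid = wide + narrow

# A low-pass filtered receive path keeps the wide-pulse content for
# long-range detection; at close range with adequate SNR, the narrow
# peak remains visible for precise time-of-flight estimation.
```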
Additional information regarding pulse shape optimization can be found in U.S. patent application Ser. No. 16/270,055, titled “OPTICAL CODING IN A LIDAR SYSTEM” to Ronald A. Kapusta et al., filed on Feb. 7, 2019, the entire contents of which are incorporated herein by reference.
The Rx Chain of a LIDAR system may detect the reflected illumination using thresholding. Received energy that exceeds a specified energy threshold is deemed to be energy from the Tx Chain reflected by an object. Noise in the received signal is a concern. False alarms occur when there is in reality no object, but the received energy signal exceeds the specified energy threshold due to noise. Misdetection by the Rx Chain occurs when an object is present, but the received energy signal does not satisfy the specified energy threshold due to noise.
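A minimal sketch of this thresholding behavior, with assumed signal, noise, and threshold levels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of threshold detection in the Rx chain. The signal level,
# noise level, and threshold are illustrative assumptions.
noise = rng.normal(0.0, 0.1, size=1000)   # receiver noise floor
received = noise.copy()
received[400] += 1.0                      # true reflection at sample 400

threshold = 0.5
detections = np.flatnonzero(received > threshold)
# A false alarm occurs if noise alone exceeds the threshold; a
# misdetection occurs if the echo plus noise falls below it.
print(detections)  # ideally just [400]
```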
The occupation probability map can use multi-spot probability processing and denoising, such as described above with respect to
The LIDAR image can be improved by using the LIDAR system sensors in combination with one or more additional sensors. For example, the LIDAR image can be improved by fusing information from the LIDAR sensor and information from a high resolution RGB sensor, such as a high resolution RGB video camera, e.g., high frame rate video camera. Examples of sensor fusion algorithms that can be used to perform the fusion of the LIDAR sensor information and the RGB sensor information include, but are not limited to, neural networks, machine learning, bilateral filtering, e.g., guided bilateral filtering, guided filters, Belief Propagation algorithms, and interpolation.
Other sensors that can be used, in addition to or instead of RGB sensors, can include radar and/or infrared (IR) sensors.
As mentioned above with respect to
Two additional multi-spot processing techniques are described, which can be considered complementary. Belief Propagation (BP) on the occupation grid can be an alternative to the multi-spot denoising technique described above with respect to
An “occupancy grid” can include a number of points that can be voxels. Each point or voxel i on the grid refers to a point in three-dimensional space (see
An example of a computation of probability of occupancy as the characteristic is shown on
In some example implementations, scene constraints can be introduced by specifying that nearby points or nearby voxels in space are correlated. Whether points or voxels are “nearby” can be defined as a specified distance from each other, such as immediately adjacent or within a number of points or voxels away. For example, a function can be introduced that, for any two neighboring points $i$ and $j$ in the occupancy grid, is large (e.g., greater than ½) for the case $s_i = s_j$ and small (e.g., less than ½) otherwise. The ratio between the “large” and “small” values can be a measure of how much nearby points can be biased to be similar. This function can be referred to as the scene potential function.
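A minimal sketch of such a pairwise potential, with the “large” and “small” values (0.8 and 0.2) chosen arbitrarily; their ratio controls how strongly neighbors are biased toward agreement.

```python
# Minimal sketch of the scene potential function described above: for
# neighboring grid points i and j with occupation states s_i, s_j in
# {0, 1}, the potential is "large" when the states agree and "small"
# otherwise. The values 0.8 / 0.2 are illustrative assumptions.
def scene_potential(s_i: int, s_j: int,
                    large: float = 0.8, small: float = 0.2) -> float:
    return large if s_i == s_j else small
```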
An example of a “best” scene can be one that maximizes the product of the most likely occupation states conditioned on the data (e.g., probability data) and the product over neighbors of the grid of the scene potential. The mathematical cost criterion is shown at the end of
As a surrogate to solving the optimization problem of
The final beliefs or adjusted characteristics can be combined with the matched filter output. For example, all final beliefs can be thresholded with some required occupation probability to declare a detection at each point and then the maximum of the matched filter output within each detection window can be used as an estimate of range.
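The sketch below illustrates this combination under assumed array contents: beliefs are thresholded to form a detection window, and the range estimate is taken at the matched filter peak within that window.

```python
import numpy as np

# Sketch of combining final beliefs with matched filter output:
# threshold the belief to declare a detection, then report range at the
# matched filter maximum inside the detection window. The array
# contents and the 0.9 threshold are illustrative assumptions.
beliefs = np.array([0.1, 0.95, 0.97, 0.92, 0.2])  # per-range-bin occupation beliefs
mf_out = np.array([0.0, 1.2, 3.4, 2.1, 0.3])      # matched filter output per bin

occupied = beliefs > 0.9                          # detection window
if occupied.any():
    window = np.flatnonzero(occupied)
    est_bin = window[np.argmax(mf_out[window])]   # range estimate = MF peak in window
    print("estimated range bin:", est_bin)
```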
In some example implementations, the computations can be simplified without significantly affecting performance by discarding all cases where the message is known to be very strong in either direction. For example, message passing for regions of the grid where the messages are all reinforcing the belief that the point is occupied or that the point is unoccupied may not be needed. Rather, message passing may be used only for regions of the grid where there is ambiguity in the occupation state.
Median filtering is a technique used in image processing that applies a kernel, e.g., a 3×3 kernel, and within each kernel, replaces the pixel at the center of the kernel with the median value of the pixels in that kernel. In accordance with this disclosure, a signal processing chain can apply a median filter on the occupation probability data. To do so, the signal processing chain first calculates an occupation probability for each direction.
For example, the signal processing chain can take the final beliefs at the output of the Belief Propagation, determine which distance to report in each direction, and use the belief at that distance as the occupation probability for that direction. Alternatively, the signal processing chain can use the input data probabilities (before running BP), choose a distance to report (including possibly no distance), and use the occupation probability at that distance in that direction as the occupation probability for that direction. Once the signal processing chain has determined a probability per “spot”, the signal processing chain can apply a median filter. The median filtering can be effective in reducing or eliminating false alarms.
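A minimal sketch of the per-direction median filter, assuming the spots form a 2D grid of directions (azimuth × elevation) and using SciPy's median_filter; the data are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

# Sketch of median filtering per-direction occupation probabilities on
# an assumed 2D grid of directions. The 3x3 kernel follows the
# image-processing convention described above; the data are random
# values for illustration.
rng = np.random.default_rng(1)
prob = rng.uniform(0.0, 0.2, size=(16, 16))  # mostly empty scene
prob[8, 8] = 0.99                            # isolated spike: likely a false alarm

filtered = median_filter(prob, size=3)       # the isolated spike is suppressed
print(prob[8, 8], filtered[8, 8])
```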
Different signal processing techniques can be applied to different regions of the 3D space. For example, the 3D space may be divided into subsets of points or voxels. For some subsets, belief propagation can be applied to the voxels of one or more of the subsets to determine occupancy of the voxels using information regarding the characteristics from nearby voxels. For other subsets, comparison to a threshold characteristic may be used to determine occupancy. One approach to dividing a 3D space into subsets is to divide the 3D space by distance away from the LIDAR sensor or LIDAR system. Voxels less than a specified distance from the LIDAR system may be deemed close enough to not need more complicated signal processing such as Belief Propagation. For example, voxels that are close (e.g., 0 to 50 meters) may be processed with thresholding while voxels far away (e.g., 100 to 150 meters) may be processed using multi-spot processing and denoising.
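A minimal sketch of this distance-based routing, with the 50 m and 100 m boundaries taken from the example above and all data and thresholds assumed:

```python
import numpy as np

# Sketch of splitting the 3D grid by distance from the sensor: close
# voxels use simple thresholding, distant voxels are flagged for
# multi-spot processing. The data and threshold are illustrative
# assumptions.
rng = np.random.default_rng(2)
distance_m = rng.uniform(0, 150, size=1000)    # per-voxel distance
characteristic = rng.uniform(0, 1, size=1000)  # e.g., occupation probability

close = distance_m < 50
occupied_close = close & (characteristic > 0.5)  # plain thresholding
needs_bp = distance_m >= 100                     # route to Belief Propagation
```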
Another approach to denoising the receive chain of a LIDAR system is eliminating peaks in the matched filter output that do not conform to specific physical constraints. If the peaks correspond to real objects, the behavior of the peaks across outputs will obey the same constraints to which actual objects are subject. Tracking of the peaks between outputs (e.g., between frames of the scene or field of view) is used to improve and refine detection of real objects. It should be noted that this is different from using a complicated detection scheme to identify an object and then track that identified object in the field of view. Here, suspected or candidate objects are tracked in an environment that may be noisy to identify the candidate objects as real objects or noise.
The detection method detects objects in the field of view by applying physical constraints to detected objects to discern actual objects from noise. For example, if the detected object is a car, the detected movement of the car will have a substantially constant velocity practical for cars, or no velocity for stationary objects on the road. While it is true that a car moving on the road might have some acceleration that would violate the velocity constraint, it should be noted that the LIDAR receiver is acquiring images at some reasonable frame rate, such as 10 frames per second. For an object accelerating from 0 to 60 mph in 2 seconds (which is slightly faster than what cars can do today), the object will move approximately ⅕ of a voxel each frame in the far-field regime. This amount is so small that the acceleration can effectively be treated as noise.
In general, the detection technique includes analysis of consecutive frames of the field of view, or portion of the field of view, to determine if a candidate object is present or if the candidate object is really noise and not an object. The analysis can be performed by processing circuitry (e.g., one or more digital signal processors or DSPs) that processes the data received from one or more LIDAR receive chains.
A real object that is not noise would move around in a more confined trajectory than noisy points. The detection technique uses basic behavior constraints from the physics of a moving vehicle to determine whether a sequence of signals is an object or noise. The description that follows begins with a mathematical problem that can be adapted to the LIDAR application of identifying objects.
The mathematical problem includes a process that can be referred to as a “Firefly Process” to describe how real objects can be discerned from noise in an image. Assume there is a rectangular three-dimensional (3D) space with dimensions $L = (L_x, L_y, L_z)$, where $L_x$ corresponds to the width, $L_y$ to the height, and $L_z$ to the depth. In this space, assume there are moving fireflies that flash light and that also move with constant velocity.
At each time frame $i$, all the flashes that occur in the 3D scene are recorded. This results in a list of flash positions. However, there are two types of noise that prevent immediately detecting where the fireflies are: 1) the positions of the firefly flashes have noise due to the recording system, and 2) there are ambient flashes which do not correspond to any fireflies. Noise in the firefly position can be modeled as Gaussian noise with mean zero and variance $\sigma^2$ in each dimension. Hence, the firefly noise is independent of position.
The ambient flashes that are not fireflies can be modeled as a Poisson distribution with parameter λ given over volume (or any other process which creates some number of points independently and uniformly in space and some density parameter λ). In practice, these represent noise points where the matched filter output registered a signal that could be confused with a true signal from a moving object. The distribution of the noise points is modeled as Poisson random variables because that makes the noise points and the ambient flashes independent and uniform in space. The goal is to be able to determine which flashes are due to fireflies and which are due to ambient noise. In particular, after analysis of frames 1, . . . , n, the goal is to detect which flashes in the most recent frame are from fireflies. It is assumed that there could be any number of fireflies (including zero) in the scene.
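A minimal simulation of one frame of this process, with the number of fireflies, noise level, and ambient rate all chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of one frame of the Firefly Process: flashes from fireflies
# moving at constant velocity, corrupted by Gaussian position noise,
# plus Poisson-distributed ambient flashes uniform in the volume. All
# numeric parameters are illustrative assumptions.
L = np.array([100.0, 100.0, 100.0])    # box dimensions (Lx, Ly, Lz)
sigma = 0.5                            # position noise std per dimension
lam = 20                               # expected ambient flashes per frame

pos0 = rng.uniform(0, 1, (3, 3)) * L   # 3 fireflies, initial positions
vel = rng.normal(0, 1.0, (3, 3))       # constant velocities

def frame(t: float) -> np.ndarray:
    true_pos = pos0 + vel * t
    noisy = true_pos + rng.normal(0, sigma, true_pos.shape)
    n_ambient = rng.poisson(lam)
    ambient = rng.uniform(0, 1, (n_ambient, 3)) * L
    return np.vstack([noisy, ambient])  # observer sees both, unlabeled

flashes = frame(t=1.0)
```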
For the LIDAR problem, the fireflies of the firefly process correspond to moving objects on the road. The noisy flashes in a frame represent noise points or voxels where the signal processing of the receive channel registered a signal that could be confused with a true signal from a moving object. Noise in the position of the object is a result of the processing required to transform the LIDAR measurement into a representative center point of the object.
In the object detection technique, it is first considered how to test whether a sequence of points is from an object or from noise, before presenting the analysis of how to infer where the objects are located in the whole scene.
Suppose first that there is only one point appearing in each frame and that we know that either all points are from objects or all points are from ambient noise. The points are denoted as $y_1, y_2, \ldots, y_n$, which respectively occur in frames at times $t_1, t_2, \ldots, t_n$. It can be inferred whether this trail corresponds to an object or to noise by using Neyman-Pearson hypothesis testing. Unfortunately, if the trail of points is from an object, the actual positions and velocities are not known, so the probability of observing the trail cannot be computed directly. Because of this, the Generalized Likelihood Ratio Test is used, in which the initial position and velocity of the trail are taken to be the values that lead to the largest likelihood.
Assuming the trail of points is from an object, the probability of the object position is maximized by using the estimates for velocity and initial position given by minimizing ordinary least squares. Let $\hat{\beta}_1$ represent our estimate for velocity and let $\hat{\beta}_0$ represent our initial position, arbitrarily chosen to occur at time 0. This results in the estimate for the position of the object at frame $i$ as $\hat{\beta}_0 + \hat{\beta}_1 t_i$.
The minimization over these parameters is given by
where n is a number of consecutive frames held in a buffer.
Assuming $n \ge 2$, the least squares optimization can be computed. This computation gives
The expression above can be simplified. If the variables $t_i$ are adjusted by subtracting a constant, taking $\tilde{t}_i = t_i - c$ so that $\sum_{i=1}^{n} \tilde{t}_i = 0$, the expressions above simplify to
The above equations state that, in view of the distribution of the empirical data, the best estimate of velocity is the covariance of $y_i$ and $\tilde{t}_i$ divided by the variance of $\tilde{t}_i$, and the initial position is the expected value of $y_i$. For the rest of the analysis, $t_i$ will be used to mean $\tilde{t}_i$, since the shift is not consequential.
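A minimal numerical sketch of these closed-form estimates, using an assumed 1D trail of an object moving at roughly 2 units per frame:

```python
import numpy as np

# Sketch of the closed-form least squares estimates described above:
# after centering the frame times, velocity is cov(y, t) / var(t) and
# the initial position is the mean of y. The sample trail below is an
# illustrative assumption.
t = np.array([0.0, 1.0, 2.0, 3.0])
t = t - t.mean()                        # center so sum(t) = 0
y = np.array([10.1, 11.9, 14.2, 15.8])  # noisy observed positions

beta1 = np.sum(y * t) / np.sum(t * t)   # velocity estimate
beta0 = y.mean()                        # position estimate at centered time 0
print(beta1, beta0)                     # ~1.94 and 13.0
```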
In vector notation,
Then the probability is
If all the points are from ambient noise, the probability of each point occurring at a position in space is uniform and each frame is independent from another. The sequence of points occurs with uniform probability $p$ under the ambient noise hypothesis. Therefore, in order to perform a generalized hypothesis test, it is only necessary to determine $L(X, Y)$, where
Comparing $L(X, Y)$ to a threshold is equivalent to comparing $|X\hat{\beta} - Y|^2$ to a threshold. Thus, to distinguish between an object and noise, it is only necessary to compare $|X\hat{\beta} - Y|^2$ to some threshold.
The comparison to the threshold can be used to control the probability that independently and uniformly generated noise contains a sequence of points that would pass being interpreted as an object. The threshold is denoted as η. A sequence of noise points Y will pass as an object if
$|X\hat{\beta} - Y|^2 \le \eta$.
By choosing a suitable value of η, the rates of false detections and true acceptances can be tailored for the implementation.
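A minimal sketch of this residual test in one dimension, where the design matrix $X$ stacks a column of ones with the centered times; the trail data and the value of $\eta$ are illustrative assumptions:

```python
import numpy as np

# Sketch of the residual test described above: fit the constant-velocity
# model, then compare |X @ beta_hat - Y|^2 to the threshold eta.
t = np.array([0.0, 1.0, 2.0, 3.0]); t -= t.mean()
Y = np.array([10.1, 11.9, 14.2, 15.8])      # observed 1D positions
X = np.column_stack([np.ones_like(t), t])   # design matrix [1, t]

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residual = np.sum((X @ beta_hat - Y) ** 2)

eta = 0.5                                   # assumed threshold
is_object = residual <= eta                 # pass: consistent with constant velocity
print(residual, is_object)
```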
The proposed methods for object detection follow the idea of the Firefly Process while minimizing the computation time. The basic steps of the method are:
A cluster is composed of candidate voxels, i.e., voxels that are candidates for being occupied by an object. For example, candidate voxels may be identified as voxels in a frame for which the matched filter output exceeds a specified threshold. Other characteristics can be used (e.g., probability data, likelihood ratio, etc.). A lenient threshold is set for inclusion of points as candidate points or voxels.
While the objects being detected are small, the objects may still have a height and width of a few voxels, and a depth of several voxels. Size and position constraints may be used to identify candidate voxels. For each frame, candidate voxels can be grouped together into candidate clusters based on one or both of size and proximity to each other. Alternatively, candidate clusters can be identified by applying a clustering algorithm (e.g., DBSCAN) to the candidate voxels.
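As one possible illustration of the clustering step, the sketch below applies scikit-learn's DBSCAN to assumed voxel coordinates; the eps and min_samples values are arbitrary:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sketch of grouping candidate voxels into candidate clusters with
# DBSCAN. The voxel coordinates, eps, and min_samples values are
# illustrative assumptions.
rng = np.random.default_rng(4)
blob = rng.normal([20, 5, 80], 0.8, size=(12, 3))  # a small, tight object
stray = rng.uniform(0, 100, size=(6, 3))           # scattered noise voxels
candidates = np.vstack([blob, stray])

labels = DBSCAN(eps=2.0, min_samples=4).fit_predict(candidates)
# Label -1 marks voxels DBSCAN considers noise; other labels form clusters.
print(labels)
```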
Candidate clusters in three or more consecutive frames are then analyzed to find sequences of three clusters that satisfy the behavior constraints. The justification for using three frames as a starting point is that three is the fewest number of frames required to distinguish an object from noise, and it also requires the smallest amount of memory for storing and processing. Because it is assumed that the object is not moving very far, not all sequences of clusters in the three consecutive frames need to be searched; only the sequences wherein the clusters are spatially close to one another.
For the case of three-point or three-voxel clusters, the behavior constraints are:
As mentioned previously herein, the signal received at the LIDAR receiver after a laser pulse is sent out is first processed by a matched filter. To discriminate an object from noise, the matched filter output is processed into probabilities based on statistics collected about its distribution. The probabilities reflect the probability that a certain coordinate (e.g., a voxel) in point space is occupied by an object.
The next step in the method is to search the frames for frame sequences containing the same cluster (cluster triples) because presence of the cluster in three consecutive frames means that the cluster likely corresponds to an object.
For each cluster in frame i, the corresponding clusters in frame i+1 are found that meet the velocity constraint. A cluster metric is used to test for similarities between clusters (based on factors like size, widths, and distance). The top matches (e.g., 5 matches) in frame i+1 that meet the velocity constraint and have the most similarity to the cluster in frame i are called candidates or candidate clusters. This is repeated to also find for each cluster in frame i+1 its possible candidates in frame i+2 to apply the acceleration constraint.
When frame i+2 is received, the cluster triples (A, B, C) are searched for, where A is a cluster in frame i, B is a cluster in frame i+1 which is a candidate of A, and C is a cluster in frame i+2 which is a candidate of B. The identified cluster triples (A, B, C) are tested to see if the cluster triples meet the acceleration constraint. If the cluster triple meets the constraint, the cluster triple is stored in memory and labeled or otherwise identified in memory as being a cluster triple.
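A minimal sketch of the triple search, reducing each cluster to a center point; the frame period, velocity bound, and acceleration tolerance are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

# Sketch of the cluster-triple search. Clusters are reduced to center
# points; FRAME_DT, V_MAX, and A_TOL are illustrative assumptions.
FRAME_DT = 0.1   # seconds between frames (assumed 10 fps)
V_MAX = 40.0     # max plausible speed, m/s (assumed)
A_TOL = 5.0      # max change in per-frame displacement, m (assumed)

def velocity_ok(a, b):
    return np.linalg.norm(b - a) / FRAME_DT <= V_MAX

def acceleration_ok(a, b, c):
    # Per-frame displacement should be nearly constant for a
    # near-constant-velocity object.
    return np.linalg.norm((c - b) - (b - a)) <= A_TOL

def find_triples(frame_i, frame_j, frame_k):
    triples = []
    for a in frame_i:
        for b in frame_j:
            if not velocity_ok(a, b):
                continue
            for c in frame_k:
                if velocity_ok(b, c) and acceleration_ok(a, b, c):
                    triples.append((a, b, c))
    return triples

# Usage: each frame is a list of cluster center points.
f0 = [np.array([10.0, 0.0, 50.0])]
f1 = [np.array([12.0, 0.0, 50.0])]
f2 = [np.array([14.0, 0.0, 50.0])]
print(len(find_triples(f0, f1, f2)))  # 1: a consistent constant-velocity track
```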
At this point, all the cluster information in frame i is cleared. The only clusters which will remain are those stored in memory as cluster triples.
If too many false alarms or misdetections occur in the cluster triples, the concept can be extended to longer cluster sequences (e.g., a sequence of four clusters, or a sequence of five clusters). There may be multiple ways to implement longer sequences, but a preferred approach would be one that does not require storing more data.
At this point in the analysis, there are cluster triples stored in memory that meet the acceleration constraint. Suppose the latest collection of cluster triples are from frames i, i+1, and i+2. To extend the concept to a sequence of four clusters, for every cluster triple stored, it can be checked to see if there is another cluster in a fourth frame (i+3) which is along its trajectory.
One approach to accomplish this is to apply the Firefly Process estimation directly and add a new cluster to the cluster triple if the sequence of four clusters creates a vector $Y$ not so far from the span of $X$ (e.g., $|X\hat{\beta} - Y|^2$ is small).
A second approach is to see if the new cluster in frame i+3 will satisfy the acceleration constraint with the clusters of the triple formed with frames i+2 and i+1. Thus, the acceleration constraint is applied over a three-frame window that moves through successive frames. This can be extended to see if the acceleration constraint is satisfied over the next three frames i+4, i+3, and i+2, and so on as desired. In the general case, the acceleration constraint is applied over frames i+k, i+(k−1), i+(k−2). This second approach involves less calculation than the first approach and yields satisfactory results in practice.
The velocity constraint can also be applied to a two-frame window that moves through successive frames. For example, after frame i and frame i+1 are tested using the velocity constraint, frames i+1 and i+2 are tested using the velocity constraint.
In
The object detection method described herein improves LIDAR depth images by providing a way to process small objects that are far away. The depth at which a LIDAR system can detect an object is thereby improved.
A first Aspect (Aspect 1) can include subject matter (such as a Laser Imaging Detection and Ranging (LIDAR) system) comprising signal processing circuitry and a memory coupled to the signal processing circuitry and configured to store LIDAR measurement data obtained by the LIDAR system representative of a three-dimensional (3D) space in a field of view of the LIDAR system. The signal processing circuitry is configured to convert the LIDAR measurement data to a voxel characteristic of voxels of the 3D space, continue to process and adjust the voxel characteristics of all voxels in the 3D space, and generate an indication of presence of an object in the field of view according to the adjusted voxel characteristics.
In Aspect 2, the subject matter of Aspect 1 optionally includes signal processing circuitry configured to convert the LIDAR measurement data to probability data as the voxel characteristic for the voxels, the probability data representing a probability that the object occupies the voxels; adjust the probability data of a first voxel using probability data of the other voxels within a specified distance of the first voxel; and generate the indication of presence of the object in the field of view according to the adjusted probability data of the voxels in the 3D space.
In Aspect 3, the subject matter of Aspect 2 optionally includes signal processing circuitry configured to recalculate the probability data of the first voxel using the probability data of the other voxels multiple times; compare the recalculated probability data of the first voxel and the other voxels to one or more specified probability thresholds; and identify the voxels of the 3D space occupied by the object using results of the comparison of the recalculated probability data.
In Aspect 4, the subject matter of one or any combination of Aspects 1-3 optionally includes signal processing circuitry configured to convert the LIDAR measurement data to a likelihood ratio as the voxel characteristic for the voxels, wherein the likelihood ratio is a ratio including a probability that a voxel is occupied by the object and a probability that the voxel is not occupied by the object; adjust the likelihood ratio of the first voxel using the likelihood ratios of the other voxels within the specified distance of the first voxel; continue to adjust the likelihood ratios of all voxels in the 3D space; and generate the indication of presence of an object in the field of view according to the adjusted likelihood ratios.
In Aspect 5, the subject matter of Aspect 4 optionally includes signal processing circuitry configured to compare the likelihood ratios of the first voxel and the other voxels to one or more threshold likelihood ratios; and generate the indication of presence of an object in the field of view according to the comparisons of the likelihood ratios.
In Aspect 6, the subject matter of one or any combination of Aspects 1-5 optionally includes signal processing circuitry configured to determine, for each voxel of the 3D space, a predicted value of the voxel characteristic of other voxels within a specified distance thereof; adjust the voxel characteristic of individual voxels of the 3D space using predicted values of the voxel characteristic; and generate the indication of presence of an object in the field of view according to the adjusted voxel characteristics.
In Aspect 7, the subject matter of Aspect 6 optionally includes signal processing circuitry configured to repeat the determining of the predicted values of the voxel characteristic and the adjusting the voxel characteristic of individual voxels of the 3D space using the predicted values multiple times.
In Aspect 8, the subject matter of Aspect 7 optionally includes signal processing circuitry configured to apply median filtering to the adjusted voxel characteristics of the voxels of the 3D space, and generate the indication of presence of an object in the field of view according to the adjusted and filtered voxel characteristics.
In Aspect 9, the subject matter of one or any combination of Aspects 6-8 optionally includes signal processing circuitry configured to divide the voxels of the 3D space into subsets of voxels including a first subset of voxels and a second subset of voxels. For voxels included in the first subset of voxels, the signal processing circuitry is configured to determine, for each voxel of the first subset of voxels, the predicted value of the voxel characteristic of other voxels within a specified distance thereof; adjust the voxel characteristic of individual voxels of the first subset of voxels using the predicted values of the voxel characteristic; and generate the indication of presence of the object in the voxels in the first subset of voxels using the adjusted voxel characteristics. For voxels included in the second subset of voxels, the signal processing circuitry is configured to compare the voxel characteristics to a threshold voxel characteristic value; and generate the indication of presence of the object in the voxels of the second subset of voxels using the comparisons to the threshold voxel characteristic value.

In Aspect 10, the subject matter of one or any combination of Aspects 1-9 optionally includes a LIDAR sensor configured to obtain the LIDAR measurement data. The LIDAR sensor optionally includes a LIDAR signal transmit chain configured to transmit light pulses into the field of view; and a LIDAR signal receive chain including a photo-detector configured to detect light energy reflected by the object in the field of view in response to the transmit light pulses and determine the LIDAR measurement data using the detected light energy.
Aspect 11 includes subject matter (such as a LIDAR system), or can optionally be combined with one or any combination of Aspects 1-10 to include such subject matter, comprising a memory configured to store frames of LIDAR measurement data obtained by the LIDAR system, wherein a frame is representative of a sample of a three-dimensional (3D) space in a field of view of the LIDAR system and multiple frames represent multiple samples of the 3D space in time; and signal processing circuitry operatively coupled to the memory. The signal processing circuitry is configured to convert the LIDAR measurement data to a voxel characteristic for the voxels; identify voxels of the 3D space that are candidate voxels for being occupied by an object using the voxel characteristic; identify clusters of the candidate voxels as candidate clusters; and identify voxels corresponding to an object by applying one or more behavior constraints to the candidate clusters over multiple frames.
In Aspect 12, the subject matter of Aspect 11 optionally includes signal processing circuitry configured to identify a candidate cluster in a first frame corresponding to a first sample of the 3D space; identify the candidate cluster in a second frame corresponding to a second sample of the 3D space consecutive to the first sample; and apply a velocity constraint to the candidate cluster over the first and second frames to identify whether the voxels of the candidate cluster correspond to the object.
In Aspect 13, the subject matter of Aspect 12 optionally includes signal processing circuitry configured to identify the candidate cluster in a third frame corresponding to a third sample of the 3D space consecutive to the second sample; apply a first test of the velocity constraint to the candidate cluster over the first and second frames; apply a second test of the velocity constraint to the candidate cluster over the second and third frames; and identify that the voxels of the candidate cluster correspond to the object when the candidate cluster satisfies the first and second applied tests of the velocity constraint.
In Aspect 14, the subject matter of one or any combination of Aspects 11-13 optionally includes signal processing circuitry configured to identify a candidate cluster in a first frame corresponding to a first sample of the 3D space; identify the candidate cluster in a second frame corresponding to a second sample of the 3D space consecutive to the first sample; identify the candidate cluster in a third frame corresponding to a third sample of the 3D space consecutive to the second sample; and apply an acceleration constraint to the candidate cluster over the first, second, and third frames to identify whether the voxels of the candidate cluster correspond to the object.
In Aspect 15, the subject matter of Aspect 14 optionally includes signal processing circuit configured to identify the candidate cluster in a fourth frame corresponding to a fourth sample of the 3D space consecutive to the third sample; apply a first test of the acceleration constraint to the candidate cluster over the first, second, and third frames; apply a second test of the acceleration constraint to the candidate cluster over the second, third, and fourth frames; and identify that the voxels of the candidate cluster correspond to the object when the candidate cluster satisfies the first and second applied tests of the acceleration constraint.
In Aspect 16, the subject matter of one or any combination of Aspects 11-15 optionally includes signal processing circuitry configured to identify a candidate cluster in N frames, wherein N is an integer greater than or equal to two; and apply a least squares constraint to the candidate cluster over the N frames to identify whether the voxels of the cluster correspond to the object.
In Aspect 17, the subject matter of one or any combination of Aspects 11-16 optionally includes signal processing circuitry configured to convert the LIDAR measurement data to probability data as the voxel characteristic for the voxels, the probability data representing a probability that an object occupies the voxels; and identify voxels that satisfy a probability threshold as the candidate voxels.
In Aspect 18, the subject matter of one or any combination of Aspects 11-17 optionally includes signal processing circuitry configured to identify a cluster as a candidate cluster using one or more of a number of candidate voxels in the cluster and position of the cluster in the frame.
In Aspect 19, the subject matter of one or any combination of Aspects 11-18 optionally includes signal processing circuitry configured to identify a candidate cluster in a first frame corresponding to a first sample of the 3D space; identify the candidate cluster in a second frame corresponding to a second sample of the 3D space consecutive to the first sample; and apply a size constraint to the candidate cluster over the first and second frames to identify whether the voxels of the candidate cluster correspond to the object.
In Aspect 20, the subject matter of one or any combination of Aspects 11-19 optionally includes signal processing circuitry configured to identify a candidate cluster in a first frame corresponding to a first sample of the 3D space; identify the candidate cluster in a second frame corresponding to a second sample of the 3D space consecutive to the first sample; and apply a shape constraint to the candidate cluster over the first and second frames to identify whether the voxels of the candidate cluster correspond to the object.
Aspect 21 can include subject matter (such as a LIDAR system), or can optionally be combined with one or any combination of Aspects 1-20 to include such subject matter, comprising a LIDAR signal transmit chain including a laser diode and circuitry configured to drive the laser diode to transmit a LIDAR pulse; a receive signal chain including a photo-detector configured to detect reflected LIDAR energy; a memory to store a time series of samples of the reflected LIDAR energy received at the receive signal chain; and an estimator circuit configured to estimate a distance of an object according to the time series of samples of LIDAR energy using a detection threshold, wherein the detection threshold varies with time over the time series of samples of LIDAR energy.
In Aspect 22, the subject matter of Aspect 21 optionally includes an estimator circuit configured to decrease the detection threshold with time over the time series of the samples of the LIDAR energy.
In Aspect 23, the subject matter of one or both of Aspects 21 and 22 optionally includes an estimator circuit configured to decrease the detection threshold according to a piece-wise constant function over the time series of samples of LIDAR energy.
These non-limiting Aspects can be combined in any permutation or combination. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Method examples described herein can be machine or computer-implemented at least in part.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
This application claims the benefit of priority to U.S. Provisional Application Ser. No. 62/802,691, filed Feb. 7, 2019, U.S. Provisional Application Ser. No. 62/824,666, filed Mar. 27, 2019, and U.S. Provisional Application Ser. No. 62/923,827, filed Oct. 21, 2019, which are hereby incorporated by reference in their entirety.