Safe autonomy in vehicles, whether airborne, ground-based, or sea-based, relies on rapid, precise characterization of, and rapid response to, dynamic obstacles. Ladar systems are commonly used for detecting such obstacles. As used herein, the term “ladar” refers to and encompasses any of laser radar, laser detection and ranging, and light detection and ranging (“lidar”).
However, artifacts and noise may be present in the ladar return data and ladar images used for object detection with autonomous vehicles. Such artifacts and noise can hamper the analysis operations that are performed on the ladar return data. For example, machine learning is often used to train an image classifier from a set of ladar images so that the trained image classifier can accurately detect and classify different types of objects in images. However, the presence of artifacts and noise in the ladar images can corrupt the training process and/or the classification process, which can lead to a risk of misclassification during vehicle operation.
The inventors believe that some of the artifacts and noise present in ladar return data arise from non-uniform illumination of the field of view by the ladar system. More particularly, the ladar system may illuminate some portions of the field of view with ladar pulses more heavily than other portions of the field of view.
Accordingly, in an example embodiment, the inventors disclose a ladar system that adapts shot energy for the ladar transmitter as a function of prior ladar return data so that the ladar system can achieve a more uniform (or smoother) illumination of nearby parts of the field of view. Toward this end, the ladar transmitter may adjust its shot energy on a shot-by-shot basis for interrogating range points that are near each other in the field of view. It should be understood that the goal of increasing the uniformity or smoothness of illumination by the ladar transmitter over a region of nearby portions of the field of view does not require the ladar transmitter to produce equal illumination for each range point in that region. Instead, it should be understood that increased uniformity or smoothness is a soft criterion that relates to a gradient of intensities that is sufficiently mild so as to not unduly erode the performance of object classification algorithms (many of which may be powered by machine learning techniques, edge detection processes, bounding boxes, and the like).
Furthermore, a number of factors can make the goal of increasing the uniformity of illumination technically challenging. First, the propagation characteristics of the environment between the ladar transmitter and the targeted range point may be variable. For example, the amount of atmospheric attenuation often varies, which can yield fluctuations in the energy levels of the ladar return data. Second, the energy discharged by the ladar transmitter can vary from shot to shot. This can be especially true for lasers which have adjustable pulse energy, such as many fiber lasers. Third, the angular sensitivity of the ladar receiver may vary. This can be most notable in bistatic operations where the ladar system scans on transmit but not on receive. In such a case, the ladar return data detected by the ladar receiver may exhibit angular variation, whether the receiver is a focal plane array or a non-imaging system.
Accordingly, for example embodiments, the inventors believe that ladar system adaptation should be based on the observed behavior of the ladar returns. Because current lasers operate at fast re-fire rates (e.g., 100,000 to 3,000,000 shots per second is typical), using ladar return data to adapt the ladar system with low latency is a computationally challenging task. For example, a ladar system might scan a 300 m swath, in which case the last ladar return arrives at the ladar receiver around 2 microseconds (us) after ladar pulse launch from the ladar transmitter. If there are 300,000 ladar pulse shots per second, this leaves only about 1.3 us to detect ladar pulse returns, estimate the shot energy desired for the next ladar pulse shot (to more uniformly illuminate a region of the field of view), and then prime the laser pump for that next ladar pulse shot (if it is desired for the ladar system to have the ability to adapt shot energy on a shot-by-shot basis). Given that many pulsed lasers have bandwidths close to 1 GHz, this is a daunting computational task as well as a daunting data management task.
In order to provide a solution to this problem in the art, the inventors disclose example embodiments where the ladar system stores data about a plurality of ladar returns from prior ladar pulse shots in a spatial index that associates ladar return data with corresponding locations in a coordinate space to which the ladar return data pertain. This spatial index can then be accessed by a processor to retrieve ladar return data for locations in the coordinate space that are near a range point to be targeted by the ladar system with a new ladar pulse shot. This nearby prior ladar return data can then be analyzed by the ladar system to help define a shot energy for use by the ladar system with respect to the new ladar pulse shot. Accordingly, the shot energy for ladar pulse shots can be adaptively controlled to achieve a desired level of illumination for the range points within a defined vicinity of each other in the coordinate space (e.g., a more uniform level of illumination).
The spatial index of prior ladar return data can take the form of a tree structure that pre-indexes the prior ladar return data, where the tree structure has a root node, a plurality of branches (with associated branch nodes), and a plurality of leaf nodes, wherein the leaf nodes associate the ladar return data with corresponding locations in the coordinate space. Through such a tree structure, rapid lookups to find the prior ladar return data for locations within a defined vicinity of the new ladar pulse shot can be performed. As an example, the tree structure can be a quad tree index, where 4 leaf nodes are linked to a common first level branch, 4 first level branches are linked to a common second level branch, and so on until the root node is reached, and where the branch topology is selected to reflect spatial proximity. The computational complexity of performing data retrieval from an example quad tree index is 2p log4(m), where m is the number of rows and columns in the coordinate space grid, and where p is the number of prior ladar returns to retrieve and inspect. In contrast, without pre-indexing, the complexity, including memory fetches, of retrieving p prior ladar returns would be approximately pm^2. For a ¼ Mega-pixel ladar frame, the cost savings (for a given p) of using a quad tree index to pre-index the prior ladar return data is on the order of 55 times. The savings can be even more dramatic, since the dominating component of the computational complexity for quad tree indexes is p rather than m, and p varies with the search radius R used in an example embodiment to govern which locations in the coordinate space are deemed to be within the defined vicinity of the new ladar pulse shot.
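By way of illustration and not limitation, the following Python sketch shows one plausible realization of such a quad tree index, with an insert operation and a radius search that prunes any subtree whose bounding box lies farther than R from the query point; the class and member names here are hypothetical and are not drawn from any particular embodiment.

from dataclasses import dataclass

@dataclass
class Return:
    x: float            # azimuth pixel coordinate
    y: float            # elevation pixel coordinate
    intensity: float    # measured return intensity
    rng: float          # measured range to the target

class QuadTree:
    """Node covering the bounding box [x0, x1) x [y0, y1)."""
    CAPACITY = 1        # returns held before a leaf splits

    def __init__(self, x0, y0, x1, y1, depth=0, max_depth=8):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.depth, self.max_depth = depth, max_depth
        self.children = None     # four sub-quadrants once split
        self.returns = []        # leaf payload

    def insert(self, r):
        if self.children is None:
            if len(self.returns) < self.CAPACITY or self.depth == self.max_depth:
                self.returns.append(r)
                return
            self._split()
        self._child_for(r.x, r.y).insert(r)

    def _split(self):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        self.children = [
            QuadTree(self.x0, self.y0, mx, my, self.depth + 1, self.max_depth),
            QuadTree(mx, self.y0, self.x1, my, self.depth + 1, self.max_depth),
            QuadTree(self.x0, my, mx, self.y1, self.depth + 1, self.max_depth),
            QuadTree(mx, my, self.x1, self.y1, self.depth + 1, self.max_depth),
        ]
        for r in self.returns:
            self._child_for(r.x, r.y).insert(r)
        self.returns = []

    def _child_for(self, x, y):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        return self.children[(1 if x >= mx else 0) + (2 if y >= my else 0)]

    def query_radius(self, x, y, R, out=None):
        """Collect all returns within R of (x, y), pruning any subtree whose
        bounding box lies entirely farther away than R."""
        if out is None:
            out = []
        dx = max(self.x0 - x, 0.0, x - self.x1)   # point-to-box distance
        dy = max(self.y0 - y, 0.0, y - self.y1)
        if dx * dx + dy * dy > R * R:
            return out                            # prune this subtree
        for r in self.returns:
            if (r.x - x) ** 2 + (r.y - y) ** 2 <= R * R:
                out.append(r)
        if self.children is not None:
            for child in self.children:
                child.query_radius(x, y, R, out)
        return out

Note that the bounding box test in query_radius is what yields the logarithmic search behavior discussed above: entire quadrants are discarded without visiting their descendants.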
While an example embodiment uses the spatial index of prior ladar return data to adaptively control shot energy, it should be understood that in other example embodiments parameters of the ladar system other than shot energy can be adaptively controlled based on the spatial index. For example, the detector and/or comparison thresholds used by the ladar receiver can be adaptively controlled using the spatial index. As another example, shot selection itself can be adaptively controlled using the spatial index. For example, with respect to adaptive shot selection, the techniques used for the adaptive modification of the spatial index can be used to preemptively reduce the number of required shots in the shot selection stage. As an example, the system can use super-resolution to interpolate by dropping random shots. As another example, the system can use optical flow techniques in combination with spatial indexes to adapt shot selection.
These and other features and advantages of the present invention will be described hereinafter to those having ordinary skill in the art.
At step 150, the processor determines whether a new ladar pulse shot is to be taken. As explained in the above-referenced and incorporated patent applications, the processor can identify new ladar pulse shots based on a shot list that includes an ordered listing of ladar pulse shots. Each ladar pulse shot can be identified by the coordinates of the range point to be targeted by that ladar pulse shot. For example, these coordinates can be identified by x,y values in terms of elevation and azimuth in a field of view coordinate space for the ladar transmitter 102. Rectangular to polar conversion can be applied if necessary to translate coordinates from one system to another, folding in an additional parameter, namely range. Such range point coordinates can be referred to as pixel locations for the ladar system 100. The range point coordinates for the new ladar pulse shot are then identified by the processor at step 152.
Next, at step 154, the processor searches a spatial index 160 to find prior ladar return data for range points that are near the target location identified at step 152. The spatial index 160 associates return data from prior ladar pulse shots with the locations of the range points targeted by those prior ladar pulse shots. Accordingly, step 154 can define a vicinity around the identified range point location from step 152 to establish the zone of range point locations that will qualify as being “nearby” the targeted range point. As an example, the nearby vicinity can be defined in terms of a radius around the targeted range point location (e.g., where such a radius value can be expressed as a count of pixels or some other suitable unit). Prior ladar return data that is associated with a location within such a defined vicinity is then located within the spatial index 160 as a result of step 154.
At step 156, the processor processes and analyzes the nearby prior ladar return data from step 154. The analysis that is performed at step 156 can vary based on the type of control that a practitioner wants to employ over the ladar system. For example, if a practitioner wants to increase the uniformity of illumination of nearby range points by the ladar transmitter 102, step 156 can include an analysis where the intensity of nearby ladar returns can be analyzed. As another example, if a practitioner wants to exercise adaptive control over shot selection, an absence or sparseness of returns analyzed at step 156 can be construed by the processor as an indication that the region has nothing of interest, in which case shot selection can be adjusted to adopt a sparser sampling (e.g., if the ladar system is pointing into the open sky, it may be desirable to only sparsely sample such empty space). Similarly, if the analysis of the returns reveals a relatively uniform intensity, this may be indicative of an amorphous heterogeneous background (e.g., road, dirt, grass, etc.), which the processor may construe as indicating a relaxing of that region's scan priority in favor of regions with more dynamic intensity. The returns can also be analyzed for changes in texture, using techniques such as Markovian field parameter estimation. This allows the user to segment the image based not on edges or range but rather on spatial stochastic properties of surfaces, enabling for example a lidar-only characterization of grass versus asphalt.
At step 158, the processor applies the analysis from step 156 to define a value for a parameter used by the ladar system 100 with respect to the new ladar pulse shot. The nature of step 158 can also vary based on the type of control that a practitioner wants to employ over the ladar system. For example, if a practitioner wants to increase the uniformity of illumination of nearby range points by the ladar transmitter 102, step 158 can define a shot energy for the new ladar pulse shot so that the ladar pulse shot would illuminate the targeted range point with an energy amount that is derived from the intensities of the prior ladar returns from nearby range points. As another example, step 158 can involve defining values used by the ladar receiver 104 with respect to detection/comparison thresholds. As yet another example, step 158 can involve tagging the new ladar pulse shot as a shot not to be taken or to be deferred if the prior ladar pulse return data for nearby range points indicates there may be an occlusion or other factor that would make the return data unreliable. Additional details about such examples are discussed below with reference to example embodiments.
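By way of a non-limiting example, the Python sketch below strings steps 150 through 158 together as a per-shot control loop, using the hypothetical QuadTree sketched above; the shot list entries, transmitter, and receiver are assumed duck-typed interfaces, and the constant-model energy rule is merely one simple stand-in for the analysis of step 156.

def interpolate_energy(neighbors, set_point=1.0, default=0.5):
    """Hypothetical constant-model stand-in for step 156: scale a nominal
    energy by the ratio of the intensity set point to the mean intensity
    observed over the nearby prior returns."""
    if not neighbors:
        return default                      # no nearby history: use default
    mean_intensity = sum(r.intensity for r in neighbors) / len(neighbors)
    return default * set_point / max(mean_intensity, 1e-6)

def process_shot_list(shot_list, index, transmitter, receiver, R=3.0):
    for shot in shot_list:                          # step 150: next shot?
        x, y = shot.azimuth, shot.elevation         # step 152: target coords
        neighbors = index.query_radius(x, y, R)     # step 154: index lookup
        energy = interpolate_energy(neighbors)      # steps 156/158: set energy
        transmitter.fire(x, y, energy)              # take the shot
        ret = receiver.detect()                     # process the return
        if ret is not None:
            index.insert(ret)                       # keep the index current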
Through the hierarchical spatial organization of the tree structure 200, rapid lookups of the prior ladar return data for locations within a defined vicinity of a targeted range point can be performed.
In an example embodiment, the root node 202/212 can address the entire corpus of prior ladar return data indexed by the tree structure 200. This corpus can correspond to a single frame of prior ladar return data, multiple frames of prior ladar return data over a defined time period, or other groupings of prior ladar return data. It should be understood that the tree structure 200 provides a scheme for indexing cells. The content in the leaf node cells can be whatever a practitioner decides. If multiple frames reside in a single tree structure 200, then it may be desirable for the leaf nodes to provide an age index, e.g., how many frames ago the return was shot. This could be generalized to an absolute time stamp, if desired. If the age is above a specified user threshold, leaf nodes are then deemed “stale” and recursively deactivated. In example embodiments where the ladar system employs compressive sensing/scanning, it should be understood that a frame can be a fluid frame given that the same shots will presumably not be collected from frame-to-frame. This stands in contrast to non-agile ladar systems where a frame is a fixed set of range points, fired repeatedly and invariantly. With an agile ladar system that employs compressive sensing/scanning, a frame can be viewed as a collection of range point shots that loosely encompasses a field of view, and a subsequent frame revisits said view, albeit in a modified manner (e.g., different range points being targeted with shots).
By definition, each potential leaf node can be associated with a status identifier that identifies whether that leaf node is considered “active” or not. Deactivating leaf nodes in general may also lead to deactivating the branches and branch nodes leading to those deactivated leaf nodes. Indeed, this is the power of a quad tree index when used with an agile ladar system that employs compressive sensing/scanning, because a vast majority of leaf nodes are never searched since they are not active. The processor tracks which leaf nodes should be defined as active based on the freshness or staleness of their respective ladar return data. For instance, the system can time tag returns so that a leaf node can be denoted as “stale” if all of its time stamps are old, and “fresh” if it contains fresh ladar return data. The processor can then track which leaf nodes to search across based on the freshness or staleness of their respective ladar return data by inspecting the associated (currently) active leaf nodes. If a leaf node is inactive, one can simply write over the stale ladar return data still stored by that leaf node the next time that the ladar system receives return data for the range point corresponding to that leaf node.
A practitioner may employ a constraint on how long the tree structure 200 holds prior ladar return data; a practitioner may not want “stale” ladar return data to influence the adaptive control of the ladar system on the assumption that conditions may have changed since the time that “stale” ladar return data was captured relative to the current time. The precise time durations or similar measures employed by the ladar system 100 to effectively flush itself of “stale” ladar return data can vary based on the needs and desires of a practitioner. For example, in some example embodiments, it may be desirable to flush the tree structure of prior ladar return data after each ladar frame is generated. In other example embodiments, it may be desirable to flush the tree structure of prior ladar return data after a defined sequence of ladar frames has been generated (e.g., holding the ladar return data for a given frame for a duration of, say, 3 ladar frames such that the returns for the oldest of the 3 ladar frames are dropped when a new ladar frame is started). In other example embodiments, it may be desirable to manage the ladar return data on a shot-by-shot basis where ladar return data is held in the tree structure for X number of shots. It is also possible to hold the tree structure 200 for an amount of time which is data dependent. For example, a practitioner may elect to reset the tree structure 200 whenever the entire set of possible branches has been populated, i.e., when the number of leaf nodes is 4^n, where n is the number of branching levels (depth) in the tree. Alternatively, the tree structure 200 may be reset when the stochastic structure of the leaf contents shifts, as may be measured using principal component analysis.
Further still, any of a number of mechanisms can be employed for implementing control over the freshness of data in the tree structure. For example, the mean time between updates can be a criterion for deactivation, or the intensity variance, which if too large may indicate that the returns are simply receiver noise. A complete purge can be performed each frame or arbitrarily; but if we are indexing the previous n frames, a partial purging can be done each frame to remove all old (stale) leaf nodes (where “old” here means exceeding some time threshold). Otherwise, if we do not remove or deactivate “stale” leaf nodes, they would be included in subsequent searches. A practitioner may decide to deactivate, but not purge from memory, leaf nodes holding previously stored ladar return data. In so doing, the practitioner can streamline search (deactivated nodes are removed from the tree, reducing search complexity) while simultaneously retaining “stale” data that may resurface as useful later on. Suppose, for example, that a ladar-equipped car drives into a tunnel. The environment suddenly changes dramatically; what was sky is suddenly filled with returns from the tunnel ceiling, etc. This sudden change in the environment may quickly result in useful data being deemed “stale”. When the vehicle exits the tunnel, the environment will revert back to what it was before. It is far better, then, to “resurrect” the old “stale” data and update it than to start from scratch. For example, the road surface, the trees (or lack thereof), and the weather conditions that affect return intensity will likely be highly correlated with conditions extant before entering the tunnel.
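For purposes of illustration, a minimal Python sketch of such freshness management follows, assuming each leaf stores time-tagged returns; the age threshold and the decision to deactivate rather than purge are policy choices of the practitioner, as discussed above.

import time

class Leaf:
    """Sketch of a time-tagged leaf; deactivation drops it from searches
    while retaining its contents for possible later resurrection."""
    def __init__(self):
        self.returns = []          # list of (timestamp, return) pairs
        self.active = True

    def add(self, ret, now=None):
        now = time.time() if now is None else now
        self.returns.append((now, ret))
        self.active = True         # fresh data reactivates the leaf

    def refresh(self, max_age_s, now=None):
        now = time.time() if now is None else now
        # A leaf whose time stamps are all old (or which is empty) is
        # deemed stale: excluded from searches, but not purged from memory.
        if all(now - t > max_age_s for t, _ in self.returns):
            self.active = False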
The power of quad tree indexing with respect to adaptive ladar system control becomes even stronger when the ladar return data is sparse, which is the general case for an intelligent and agile dynamic range point ladar system that employs compressive sensing/scanning.
But before we begin, note that in a “pure play” quad tree index, such as is defined here in an example embodiment, the branch nodes only provide (i) pointers to their (first generation) children (where such children may be leaf or branch nodes), and (ii) the bounding box corners that define the contour which encompasses all leaf nodes that are descendants of the subject branch node. The role of the bounding box coordinates is that they allow the quad tree index to be searched to determine whether a given location is within a distance R of any leaf in the tree, quickly eliminating the need for any further testing below a branch node whose bounding box lies farther away than R. This is why the logarithm term appears in the complexity search model. By design, the areas defined by all branch nodes in the tree are always such that their union is the entire ladar field of view.
However, it should be understood that a practitioner may choose to store additional information in branch nodes if desired. For example, a practitioner may find it desirable to store data in branch nodes that aggregates all of the return data for active leaf nodes under that branch node. In this way, if the search for nearby returns would encompass all of the active leaf nodes under a given branch node, then the lookup process would need to access only the subject branch node to obtain aggregated return data (rather than requiring additional traversal deeper into the tree structure down to the active leaf nodes under that branch node). In terms of choosing what information (if any) of the shot returns to store in branches close to the root, the tradeoff surrounds issues such as pre-processing and memory costs versus recurring execution time per query.
We are now poised to discuss an example embodiment for using quad trees to search for prior ladar return data near a targeted range point.
Now the goal in a quad tree is to have (up to) 4 children for each branch node, where the children can themselves be either branch nodes or leaf nodes. Recall we define the bounding box where all the branch node children “live” as well. The boxes can be rectangles so that we keep the search geometry simple.
The root of the tree is 212, which encompasses all data that has been collected in a frame or set of frames. The root is the “foundation” of the tree and does not per se possess any information; it is just an initialization step. The first set of nodes, 204, below the root are all branch nodes, labeled A, B, C, and D.
In our case, all leaf nodes are at the second layer in the tree, 206, with entries, 216, that consist of x,y,range triplets (for training nodes), and after the search the current node includes the training set pointers as well. Whether or not to delete these after interpolation is for the user to decide.
At step 302, the processor selects the maximal search radius, R, that is used to define the vicinity of range points that are considered “nearby” the targeted range point identified at step 300. This value for R can be expressed as a count of pixels or cells around the targeted range point (e.g., 3 pixels/cells). However, it should be understood that other units could be chosen.
Further still, it should be understood that for some embodiments a practitioner may choose to employ a fixed value of R for all ladar pulse shots; while for other embodiments, a practitioner may choose to adjust the value for R in certain circumstances. The size of R can vary as a function of the types of objects, the density of objects, and/or the likelihood of the presence of objects that may be known to exist in the field of view. For example, it may be desirable to controllably adjust the search radius on a shot-by-shot basis. As another example, the search radius can be controlled based on the type of object that is expected to be targeted by the new ladar pulse shot. Thus, if the system expects the new ladar pulse shot to target a road surface, the ladar system can define the search radius accordingly. Given that the road surface is expected to exhibit a relatively smooth return over large distances, the ladar system may choose to set R to a relatively large value. But, if the returns indicate that an object may be present where the new ladar pulse is targeted, the system may want to reduce the size of R to account for the smaller area that the object might encompass. The choice of R presents a tradeoff involving speed, where a smaller R will produce faster results. There is another (largely) unrelated tradeoff, namely the rate of change anticipated for the intensity of the scene. The sky will be unlikely to change much in intensity, while a road with a reflective sign post will vary quite a bit. Since the system can adjust based on the energy expended, the system can largely minimize any artifacts from intensity fluctuation in the image. After selecting R, the search begins at the tree root (step 304).
At step 306, the processor tests to see whether the new range point is within a distance R of the children of the root node (first generation descendants which, by design, are branch nodes). Then, for all k such children the search continues as follows. The search drops, at step 308, to the second level of child nodes, which may be either leaf or branch nodes. Of these nodes, the search examines which, if any, are within radius R of the targeted range point, and labels these as leaf or branch nodes (n nodes in total, of which m are leaf nodes) (step 310). Next, the processor tests whether m>0 (step 312); and if so adds these leaf nodes to the list of training cells (step 314). At step 316, the processor sets k′ equal to the number of branch nodes among the n that are no more than R from the new range point. Next, at step 318, the processor moves downward in the tree index by a level and repeats steps 310, 312, 314, and 316 until the stack is empty.
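By way of illustration and not limitation, the following Python sketch mirrors the stack-driven search of steps 306 through 318; it assumes nodes expose children, returns, and a hypothetical box_within(x, y, R) bounding-box test of the kind described above.

def find_training_cells(root, x, y, R):
    """Gather nearby prior returns (training cells) within R of (x, y)."""
    training = []                           # leaf returns found (steps 312/314)
    stack = [c for c in (root.children or [])
             if c.box_within(x, y, R)]      # step 306: qualifying root children
    while stack:                            # step 318: repeat until stack empty
        node = stack.pop()
        if node.children is None:           # a leaf node (the m of step 310)
            training.extend(r for r in node.returns
                            if (r.x - x) ** 2 + (r.y - y) ** 2 <= R * R)
        else:                               # a branch node (the k' of step 316)
            stack.extend(c for c in node.children if c.box_within(x, y, R))
    return training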
The process flow then proceeds to step 308. At step 308, the processor moves down a level in the spatial index to the next generation of nodes (one generation removed from the root's children). Then, these nodes are inspected (step 310).
The process flow then proceeds to step 316. At step 316, the processor continues the search (step 318) down all k′ branch nodes found at step 310 until all active leaf nodes have been accessed.
At step 320, the processor performs interpolation on the prior ladar return data in the active leaf nodes identified at steps 310 and 318 to compute a shot energy for the new ladar pulse shot that will target the subject range point.
At step 322, the ladar transmitter 102 transmits the new ladar pulse shot toward the targeted range point, where the ladar transmitter 102 controls the laser source so that the new ladar pulse shot has the shot energy computed at step 320. The laser source of the ladar system can have adjustable control settings for charge time and average power, and these parameters can be controlled to define the shot energy.
At step 324, the ladar receiver 104 then receives and processes a ladar pulse reflection corresponding to the ladar pulse shot fired at step 322. As part of this operation, the ladar system 100 is able to extract data from the reflection such as the intensity of the reflection and the range to the targeted range point. The ladar system can also, if desired, extract data from the reflection such as the shape of the pulse return, the average noise level, and (when available) polarization. This data can then serve as the ladar pulse return data for the range point targeted by step 322. Techniques for receiving and processing ladar pulse reflections are described in the above-referenced and incorporated patent applications.
At step 326, the processor updates the spatial index based on the new ladar pulse return data obtained at step 324. As part of this step, the processor can populate a leaf node linked to the subject range point with the new ladar pulse return data.
After step 326, the process flow would then return to step 300 to repeat itself with respect to the next shot on the shot list.
At step 400, a processor selects a given frame, which may take the form of a defined frame (e.g., a foveation frame) as described in the above-referenced and incorporated patent applications. The processor can also select a value for the search radius R at step 400. Once the frame has been selected, the system can determine the ladar pulse shots that are planned to be taken for the various range points within the selected frame.
At step 402, the processor finds all of the active leaf node neighbors that are within R of the targeted range point location for each shot of the selected frame. The count of these active leaf node neighbors, which is a function of returns from lidar shots fired, can be denoted as p. Let us denote by p{max} the largest possible number of training samples available. We can find the value of p{max} in advance by simply counting how many of the range points in the shot list are within a distance R. This is clearly a maximum for p, since training sets are defined by active leaf nodes, and active leaf nodes can only appear when there is a shot list associated with said nodes. We note that when the training set is sparse, that is, when few possible leaf nodes are available within a fixed distance R, a speedup is available in exchange for memory (and we will revisit this speedup opportunity shortly).
The maximum number of degrees of freedom (DOFs) is 6. The degrees of freedom in this context define the number of variables in the model. Since we are proposing here a second order polynomial, these terms are, consecutively: {constant, Az, El, Az×El, Az^2, El^2}. In other words, we have the intensity vs. shot energy model:
shot energy = c0 + c1*Az + c2*El + c3*Az*El + c4*Az^2 + c5*El^2   (1)
The idea is that we use past data to find the c0 . . . c5 terms, and then for any new position we plug in azimuth (Az) and elevation (El) and we get the shot energy. The constant is the “floor” of the intensity. For a road surface, for example, we would expect very little azimuth (Az) and elevation (El) dependence, so the constant term would be large and the other terms small. If we are looking toward the ground, we would expect the returns to change quickly with azimuth (at least at the road boundary), so that the c1 term is big.
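For purposes of illustration, evaluating the model of equation (1) once the six coefficients have been fit is only a handful of multiplies and adds, as the short Python sketch below shows; the coefficient vector c is assumed to have been computed from the nearby training data.

def shot_energy(c, az, el):
    # Equation (1): c0 + c1*Az + c2*El + c3*Az*El + c4*Az^2 + c5*El^2
    return (c[0] + c[1] * az + c[2] * el
            + c[3] * az * el + c[4] * az ** 2 + c[5] * el ** 2)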
This use of the model in (1), with training as outlined below, effectively provides a proportional feedback loop, where the input is scaled based on the previous measurement (direct Markov process) or measurements (Markovian state space process). However, instead of simply feeding back to the same pixel (which may not be fired again for some time), the input (training set) is fed to all neighboring pixels about to fire (which may not have been fired for some time). Due to the multiplicity of neighbors feeding back to any pixel (and spatial variance), we can interpolate the inputs (the optimal shot energies), regardless of whether or when these pixels fired. In this manner we have an asynchronous quasi-Markovian, or discrete event system, model, which is well-suited to a dynamic, agile ladar system.
At step 402, the processor can choose the form of the interpolation model so that the DOFs are less than or equal to the number of potential training leaf nodes, p, bounded by p{max}. So, for example, if the value of p is 6 or greater, the system can use the full set of the parameter list above; but if p<6, the system will choose a subset of the above parameter list so there are not more parameters than samples. For example, if p=1, we have only one model parameter and p{max} possible ways in which this single return/active leaf can arise; and for p=2 we have 2 parameters and (p{max} choose 2) potential active leaf pairs, etc.

In total there will be 2^p{max} − 1 candidate models. For sufficiently small p{max} we can choose these models in advance to reduce downstream computation and latency. This is especially attractive when we have a background frame and a pop-up region of interest. In such a case the background pattern serves as a training set for many shots, allowing us to amortize the precomputation cost across many shots. To give a numerical example, let p{max}=3 for some range point Q in the list. This means that there are at most 3 training samples within range R of Q. Denote the three training sample points as R,S,T. Then the possible training sets are reduced to {R},{S},{T},{RS},{RT},{ST},{RST}, a total of 7 = 2^3 − 1 possibilities. For training sets with a single entry we use a constant model, c0, so in (1) shot energy ∝ intensity in {R} or {S} or {T} respectively. For two entries in the training set we pick a constant plus azimuth model, yielding c1, in which case we fit a line to {RS},{RT},{ST}, and for {RST} a quadratic. Denote by θ_apply the location of the range point in azimuth for the upcoming shot, and in the two entry case denote the training entries as {a,b} and the training positions in azimuth as θ_a, θ_b; then we have:

shot energy(θ_apply) = a + [(b − a)/(θ_b − θ_a)]·(θ_apply − θ_a)
For the constant and linear examples thus far, there is a savings of three divisions and three multiplies using pre-indexing with the tree structure 200 in forming the c1·θ_apply term. For the three sample case we have training entries {a,b,c} at azimuth positions θ_a, θ_b, θ_c, and we can proceed (using Vandermonde matrix properties) as follows. Set:

shot energy(θ_apply) = a·[(θ_apply − θ_b)(θ_apply − θ_c)/((θ_a − θ_b)(θ_a − θ_c))] + b·[(θ_apply − θ_a)(θ_apply − θ_c)/((θ_b − θ_a)(θ_b − θ_c))] + c·[(θ_apply − θ_a)(θ_apply − θ_b)/((θ_c − θ_a)(θ_c − θ_b))]
Notice that all the terms in brackets can be precomputed in advance. The energy evaluation now only takes three multiplies, versus about 16 multiplies/divides. In terms of multiplies and divides, the net savings with precomputing is then 11 versus 22 multiplies if done directly.
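For purposes of illustration, the precomputation for the three-sample case can be sketched in Python as follows, with the bracketed Lagrange-style weights formed a priori from the known azimuth positions so that only three multiplies (and two adds) remain at shot time; all names are hypothetical.

def precompute_weights(theta_a, theta_b, theta_c, theta_apply):
    """Form the bracketed terms in advance; valid when the three azimuth
    positions are distinct. theta_apply is known a priori from the shot list."""
    wa = ((theta_apply - theta_b) * (theta_apply - theta_c)
          / ((theta_a - theta_b) * (theta_a - theta_c)))
    wb = ((theta_apply - theta_a) * (theta_apply - theta_c)
          / ((theta_b - theta_a) * (theta_b - theta_c)))
    wc = ((theta_apply - theta_a) * (theta_apply - theta_b)
          / ((theta_c - theta_a) * (theta_c - theta_b)))
    return wa, wb, wc

def energy_at_shot_time(weights, a, b, c):
    """Three multiplies (and two adds) per shot, as noted above."""
    wa, wb, wc = weights
    return wa * a + wb * b + wc * c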
To take a specific case of how the above processing might be employed, let us assume we choose to reset the quad tree at each frame, and we have a fixed distance R that we scan for neighbors to build our set point. At the beginning of a frame there will be no prior shots, so we must use an initial condition for the first shot (or update from the last frame before resetting the leaf contents to zero). Assuming that we scan in a roughly left-right, up-down fashion, we will see that after a few initial conditions we have enough neighboring cells to obtain interpolation from the current frame. To begin we will have a low model order, starting with just a constant, then adding azimuth, then elevation, then the cross term, and then finally the quadratic terms. By precomputing the Cholesky factor we can keep the interpolation latency low enough to have updates within a frame. Of course, this works best if the range point list is configured so that there tend to be numerous points within a region of radius R and these points are collected quasi-monotonically in time. This will be the case for a raster scan, foveated patterns, and most uniform random patterns.
For each model, the processor sets up and solves the regression problem for the coefficients in equation (1) above. We denote the right-hand side row vector by b^t and the left-hand row vector by c^t, where t denotes transpose. The solution for the regression equation Ac ≈ b in Euclidean norm is:
c = (AA^t)^(−1) A b.   (2)
Here b is the vector of prior data, c is the vector of coefficients in (2), and A is the model matrix. (The practitioner will notice that A would be Vandermonde if the sole cross term Az*El vanished.)
For the case of (2) with only one parameter other than the constant term (which we denote by θ0), we can express this as:

c = (AA^t)^(−1) A b, with A = [1 1 … 1; θ_1 θ_2 … θ_p]   (3)

where θ_1 … θ_p are the azimuth positions of the p training samples and b holds their measured intensities.
Note that the matrix A is independent of the intensity measured on prior returns and depends only on spatial structure of range points. This is the feature that allows us to precompute this factor, as demonstrated in the “toy” R,S,T example discussed above.
At step 404, the processor finds the Cholesky factor of the interpolation map for each of the 2^p models. For example, at step 404, the processor finds the Cholesky factor of AA^t:

AA^t = L L^t

where L is the lower triangular Cholesky factor.
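By way of a non-limiting sketch, the a priori factorization of step 404 and the per-shot solve can be expressed with NumPy/SciPy as follows, under the assumption that p ≥ 6 so that AA^t is invertible for the full six DOF model.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def build_A(az, el):
    """6 x p model matrix with rows {1, Az, El, Az*El, Az^2, El^2}."""
    az = np.asarray(az, dtype=float)
    el = np.asarray(el, dtype=float)
    return np.vstack([np.ones_like(az), az, el, az * el, az ** 2, el ** 2])

def precompute(az, el):
    """A priori step 404: A depends only on range point geometry, so the
    Cholesky factor of AA^t can be formed before any returns arrive."""
    A = build_A(az, el)
    return A, cho_factor(A @ A.T)

def solve_coeffs(A, factor, b):
    """Per-shot solve of equation (2), c = (AA^t)^(-1) A b, via two
    triangular back solves of cost on the order of p^2 operations."""
    return cho_solve(factor, A @ np.asarray(b, dtype=float))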
The system has now exhausted all that it can do a priori. Once the ladar system begins taking shots, two new pieces of the puzzle will emerge. The first is that the system will know whether a pulse reflection is detected on each shot. If yes, then k, the number of model nodes that are active (initially set to zero), will be updated for each of the possible nodes to which the shot is a child. Second, the system will get an intensity return if a pulse reflection is detected. The range points with returns for these p shots define which of the 2^p models is active and therefore which model to use for interpolation.
The system can now begin to execute the shot list. At step 406, the processor (at time T) interpolates the desired shot energy using intensity data from the k active leaf node neighbors and the pre-computed Cholesky factors. With our running example from above, step 406 can compute the a posteriori optimal shot energy as:
energy = (0.05 + 0.1*0.1)/2 = 0.03
In terms of the overall system parameters we have:
a posteriori optimal energy = shot energy out × intensity set point / return intensity.
This energy allotment scheme can be seen to be valid/efficacious in practice as follows: when the anticipated return intensity is low we need more energy. The intensity set point is the dimensionless scaling for the energy output from prior collections which adjusts for return intensity. This is akin to a traditional proportional feedback loop, where the input is scaled based on the previous measurement. However, instead of simply feeding back to the same pixel (which may not be fired again for some time), this input is fed to all neighboring pixels about to fire (which may not have been fired for some time). Due to the multiplicity of neighbors feeding back to any pixel (and spatial variance), we interpolate the inputs (the optimal shot energies).
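For purposes of illustration, this feedback rule reduces to a one-line computation; the guard against a near-zero return intensity is an added assumption to keep the sketch well-defined.

def a_posteriori_energy(shot_energy_out, intensity_set_point, return_intensity):
    # Low observed return intensity => more energy on the next shot.
    return shot_energy_out * intensity_set_point / max(return_intensity, 1e-9)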
The computational expediency of this solution is predicated on the relatively few comparisons needed to search the pre-indexed tree structure.
The linear equations in equation (3) are solvable using the Cholesky scheme, with computational complexity on the order of p^3 operations.
Using the fact that A is independent of the ladar returns, we can precompute A, and indeed (AA^t)^(−1)A, for every scheduled shot in the frame, for every model order. We can further reduce complexity by expressing AA^t as LL^t, where L is the lower triangular Cholesky factor. This allows us to solve equations involving (AA^t)^(−1) = L^(−t)L^(−1) by double back-solving at a cost of only p^2 operations. If the DOFs and intensity samples are equal in quantity, then (AA^t)^(−1)A is square and can be expressed as a lower times an upper triangular matrix, again resulting in p^2 operations. This speedup allows for microsecond-scale update rates in firmware or hardware implementations. As an example of the speedup, one might have a 3 us update rate ladar system (a shot rate of roughly 330,000 shots per second) with a 300 m range extent, leading to 2 us for the last pulse return and giving only about 1 us before the next shot is taken on average. With 6 shots used for interpolation of a six DOF model, the complexity is about 36 operations, which can easily be met with a 100 Mflop device. With a quad tree index, the firmware or hardware is needed more for data transfer than raw operation counts. Without the quad tree index, a single sort across a 1 M voxel addressable point cloud would involve 1,000 comparisons per 3 us, thereby stressing the limits of most portable low cost general purpose computers, and this is without the frequently needed further complexity of retaining data over multiple frames. With a quad tree index and R=16, each search requires only 16 comparisons per 3 us.
At step 408, the ladar system transmits the ladar pulse shot toward the targeted range point (e.g., AAD) with the shot energy computed at step 406. If the ladar system detects a return from this ladar pulse shot, the ladar system updates the value for k and also updates the interpolation model (step 410). Specifically, the size of c grows and A gains an added column. As is the case whenever a return is obtained, the tree is updated as well. So long as there are additional shots in the shot list for the frame, the process flow can return to step 406 and repeat for the next shot on the shot list. However, once all of the shots for the subject frame have been completed (step 412), the process flow can return to step 400 to select the next frame.
Furthermore, as noted above, the spatial index can be expanded to dimensions greater than 2 if desired by a practitioner, in which case additional search features such as range could be included in the decision-making about which range points are deemed “nearby” a subject range point. Likewise, an extension from a quadratic form of interpolation to a multinomial (higher dimensional) interpolation can also be employed if desired.
Furthermore, the ladar system can adaptively control aspects of its operations other than shot energy based on the spatial index of prior ladar return data. For example, it can adaptively mitigate various forms of interference using the spatial index. Examples of interference mitigation can include adaptively adjusting control settings of the ladar receiver 104 (e.g., controlling receiver sensitivity via thresholds and the like) and detecting/avoiding occlusions or other non-targeting conditions via adaptive shot selection. A reason to adapt based on interference is that the laser can vary in efficiency with temperature. Furthermore, fiber lasers have hysteresis, meaning that the efficiency, specifically maximum pulse energy available, depends on shot demands in the recent past.
As noted above, receiver noise can be adaptively mitigated using the spatial index. Examples of ladar receiver control settings that a practitioner may choose to adaptively control using the techniques described herein may include time-to-digital conversion (TDC) thresholds and spacings (if the ladar receiver 104 employs TDC to obtain range information from ladar pulse returns) and analog-to-digital conversion (ADC) thresholds (if the ladar receiver 104 employs ADC to obtain range information from ladar pulse returns). Examples of a suitable ladar receiver 104 are described in the above-referenced and incorporated patent applications. To process a signal that represents incident light on a photodetector array in the ladar receiver 104, receiver circuitry will attempt to detect the presence of ladar pulse reflections in the received signal. To support these operations, ADC and/or TDC circuitry can be employed. Tunable parameters for such a ladar receiver are the thresholds used by the TDC and/or ADC circuitry to detect the presence of ladar pulse reflections. In example embodiments, the ladar system can adaptively control these thresholds based on the spatial index of prior ladar return data so that the ladar receiver 104 can more effectively detect new ladar pulse returns.
Additional potential sources of interference are other fielded and environmentally present ladar systems. The spatial index can be used to detect such interference (which will tend to impact multiple beams due to Lambertian scattering of the other-vehicle-borne ladar). Identifying the presence of such other ladar systems, and using interpolation as discussed above to locate such interference, can assist in interference mitigation. In addition to the fact that one can desensitize the ladar system through threshold changes, interference can be reduced by shifting the transmit beam direction to retain target illumination but reduce exposure to interference. Such an approach is particularly effective when the shift is sub-diffraction limited, i.e., a micro-offset.
After all of the nearby active leaf nodes have been identified and accessed after step 318, the process flow proceeds to step 520. At step 520, the processor analyzes the nearby prior ladar return data to compute a value for a relevant parameter of the ladar receiver 104 (e.g., a detection threshold).
At step 522, the processor adjusts the relevant control settings for the ladar receiver 104 based on the parameter value computed at step 520. For example, if a register in the ladar receiver 104 holds a threshold value to use for comparison operations by the TDC and/or ADC circuitry, the processor can write the adaptively computed threshold value from step 520 into this register. In doing so, the ladar receiver 104 is then ready to receive and process a new ladar shot return.
At step 524, the ladar transmitter 102 transmits a new ladar pulse shot toward the targeted range point. At step 526, the ladar pulse reflection from the targeted range point is received by the ladar receiver 104. The ladar receiver 104 then processes the received reflection using the adjustably controlled parameter value from step 522 to obtain ladar return data for the current ladar pulse shot. As noted above, this can involve the ladar receiver using a threshold value from step 522 that is stored in a register of the ladar receiver 104 to influence the operation of TDC and/or ADC circuitry.
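By way of illustration and not limitation, steps 520 and 522 might be sketched in Python as follows; the noise_level field on prior returns, the k-sigma rule, and the register interface are all assumptions made for this example rather than features of any particular receiver.

import statistics

def adapt_threshold(neighbors, k_sigma=4.0, floor=0.01):
    """Step 520 (sketch): set the detection threshold a few standard
    deviations above the mean noise level seen in nearby prior returns."""
    noise = [getattr(r, "noise_level", None) for r in neighbors]
    noise = [n for n in noise if n is not None]
    if not noise:
        return floor
    return max(floor, statistics.mean(noise) + k_sigma * statistics.pstdev(noise))

def apply_to_receiver(receiver, threshold):
    # Step 522 (sketch): program the TDC/ADC comparison register.
    receiver.write_register("DET_THRESH", threshold)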
Thereafter, as noted above, the processor can update the spatial index based on the newly obtained ladar return data and proceed to the next shot.
The spatial index can be used to control any aspect of the receiver that may be spatially dependent. This can include maximum depth of range (once the ground is hit, no pulse returns from behind (under) the ground can be obtained), maximum signal return (which can be used to adjust dynamic range in the digital receiver), and atmospheric attenuation (rain and fog can be angle-dependent).
Accordingly, it should be understood that multiple techniques can be used for searching the spatial index to find nearby prior ladar return data.
Similarly, if the ladar system is able to learn that there are objects in the field of view that may become damaged if illuminated with ladar pulses (as determined from the prior ladar return data), the ladar system can adaptively adjust its shot schedule to shoot around such objects. Thus, if there is a nearby smart phone with a camera that could be damaged if irradiated with a ladar pulse shot, the ladar system can detect the presence of such a smart phone in the prior ladar return data using image classification techniques, and then make a decision to adaptively adjust the ladar pulse shot schedule to avoid irradiating the camera's detected location if necessary.
In another example where the ladar system is deployed in an autonomous vehicle used in outside environments (such as an automobile), it may be desirable to include wiper blades or the like to periodically clean the outside surface of the ladar system's optical system. In such instances, it is desirable to not illuminate the spatial regions in the field of view that would be blocked by the wiper blades when the wiper blades are in use. Thus, if the ladar system hits the wiper blade with a ladar pulse shot, the return data from that shot is expected to exhibit a large intensity magnitude (due to limited beam spreading and therefore limited energy dissipation) and an extremely short range. Thus, the prior return data in the spatial index can be processed to detect whether a wiper blade is present in a particular region of the field of view, and the ladar system can adjust the shot schedule to avoid hitting the wiper blades, while minimizing beam offset. More perception/cognition-based adaptive shot selection schemes can be envisioned. For example, in heavy rain, the video can detect spray from tires, or wheel well spoilers on large trucks, and schedule shots so as to minimize spray from said objects. Note that techniques for shot timing for the avoidance of wiper blade retrodirection can also be used to avoid retrodirective returns from debris on the first surface of the transmit lens. Furthermore, the shot scheduler can be adjusted to fire immediately after the blade has crossed an area to be interrogated, since the lens is cleanest at the exact moment after the wiper has passed.
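For purposes of illustration, a simple Python sketch of such wiper-aware shot gating follows; the intensity and range thresholds are hypothetical values, and the defer/fire decision is one simple stand-in for the richer shot scheduling discussed above.

def is_wiper_return(ret, min_intensity=0.9, max_range_m=0.5):
    """Heuristic: a very bright return at extremely short range is likely
    the wiper blade (or first-surface debris), not a real target."""
    return ret.intensity >= min_intensity and ret.rng <= max_range_m

def gate_shot(shot, index, R=2.0):
    """Defer a shot if nearby prior returns suggest a wiper occlusion."""
    neighbors = index.query_radius(shot.azimuth, shot.elevation, R)
    return "defer" if any(is_wiper_return(r) for r in neighbors) else "fire"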
In this example embodiment, the processor analyzes the nearby prior ladar return data at step 620 to determine whether an occlusion or other non-targeting condition is present at the targeted range point.
If step 620 results in a conclusion by the processor that an occlusion or other non-targeting condition is present, the process flow returns to step 300 and a new ladar pulse shot is selected from the shot list. As part of this, the ladar shot that was avoided because of this branching may be returned to a later spot on the shot list. However, a practitioner may choose to employ other adjustments to the shot schedule. For example, the avoided ladar pulse shot could be dropped altogether. As another example, the avoided ladar pulse shot could be replaced with a different ladar pulse shot that targets a nearby location. If step 620 results in a conclusion by the processor that an occlusion or other non-targeting condition is not present, then the process flow proceeds to step 622.
At step 622, the ladar transmitter 102 transmits a new ladar pulse shot toward the targeted range point. At step 324, the ladar pulse reflection from the targeted range point is received by the ladar receiver 104. The ladar receiver 104 then processes the received reflection to obtain ladar return data for the current ladar pulse shot. If desired by a practitioner, the ladar receiver 104 could also be adaptively controlled based on the content of the spatial index as discussed above.
Scanning block 708 can include the scanning mirrors and will direct the fired laser to the targeted range point. The scanning block 708 balances two conditions for pulse firing. It chooses the time to fire when the mirrors are in the correct azimuth and elevation position to hit the targeted range point. It also ensures, through information it collects from the spatial index 160, that the shot energy level is correct. However, these two criteria can be in conflict, since shot energy varies with the time between shots. The process of sorting out these criteria can be controlled by a shot scheduler (which can be referred to as a mangler). Since all shot energy levels impact the mangler, tight communication between the scan mirrors and the tree structure is desirable as shown by the lines connecting the scanning block 708 and the spatial index 160. The ladar pulse 758 will then propagate in the environment toward the targeted range point. A reflection 760 of that ladar pulse 758 can then be received by the receiver block 710 via lens 712. A photodetector array and associated circuitry in the receiver block 710 will then convert the reflection 760 into a signal 762 that is representative of the ladar pulse reflection 760. Signal processing circuitry 714 then processes signal 762 to extract the ladar pulse return data (e.g., range 764, intensity, etc.). This ladar pulse return data then gets added to the ladar point cloud 716 and spatial index 160. Examples of suitable embodiments for scanning block 708, receiver block 710, and signal processing circuitry 714 are disclosed in the above-referenced and incorporated patent applications.
In step 1, the ladar system monitors the ladar return data in the spatial index for indications that a camera-bearing device, such as a smart phone, is present in a region of the field of view.
Thus, at step 2, the ladar system can use the object monitoring from step 1 to adaptively adjust shot energy in a manner that reduces the risk of damaging a smart phone within the target region. For example, if step 1 detects a cell phone in the region around range point ABC, the ladar system can select a charge time set point value for the ladar pulse targeting ABC as a function of the detected range.
Subsequently, at step 3, if the ladar return data indicates that the camera persists in the field of view, the ladar system can then adjust its shot list to exclude illumination of the camera.
In another example embodiment, interpolation techniques and the like can be used to fill data gaps in a sparse array of ladar return data. For example, as described in the above-referenced and incorporated patent applications, the ladar system 100 can employ compressive sensing to dynamically scan the field of view and intelligently target only a small subset of range points in the field of view. In this fashion, the ladar system can focus its targeting on the areas of interest in the field of view that are believed to be the most salient for safety and accurate object detection while avoiding overly shooting into empty space. With compressive sensing, the ladar system will produce sparse arrays of ladar return data for each ladar frame. This permits the ladar system 100 to exhibit a much higher frame rate while not losing any meaningful detection coverage. However, many downstream applications that process ladar data to perform image classification and object tracking may be designed with a standardized interface with ladar systems. With such standardized interfaces, it may be desirable, for software interoperability for instance, for the ladar frames to exhibit some fixed size or minimum size. Accordingly, to make a dynamic ladar system that employs compressive sensing compatible with such downstream image processing applications, a practitioner may want to synthetically fill the sparse ladar frames produced by the ladar system 100 with information derived from the sparse array of ladar return data.
At step 1304, a processor synthetically fills a ladar frame based on the data in the sparse array. In doing so, the processor is able to fill the empty gaps (the non-targeted range points) in the field of view between the various range points that had been targeted with ladar pulse shots. Interpolation techniques such as those discussed above can be used to fill in these gaps between ladar pulse return data from the sparse array. The gap filling can be done using any data extrapolation method, examples of which are available in MatLab and other software packages, such as polynomial interpolation or iterative projection onto convex sets. The spatial index of prior ladar return data can be leveraged at step 1304 to accelerate the calculations that will drive step 1304. For example, the processor can use the ladar return data from the spatial index for a plurality of range points that are near a blank range point to be synthetically filled to facilitate an interpolation or other filling computation that allows the processor to compute hypothetical synthetic return data for the non-targeted range point. This process can be iteratively repeated for different non-targeted range points (or non-targeted regions in the coordinate space) until the ladar frame is deemed full (step 1306). Once full, the synthetically-filled ladar frame can be output at step 1308 for consumption by a downstream image processing application such as an image classification or object detection application.
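By way of a non-limiting example, the inverse-distance fill below sketches step 1304 in Python over a 2-D frame with None at non-targeted pixels; library routines such as scipy.interpolate.griddata could serve as an alternative interpolator.

def fill_frame(frame, index, R=4.0):
    """Step 1304 (sketch): frame is a 2-D list with None at non-targeted
    pixels; fill each gap by inverse-distance weighting of nearby returns."""
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value is not None:
                continue                      # already a real return
            neighbors = index.query_radius(x, y, R)
            if not neighbors:
                continue                      # no support: leave the gap
            wsum = vsum = 0.0
            for r in neighbors:
                w = 1.0 / (1e-6 + (r.x - x) ** 2 + (r.y - y) ** 2)
                wsum += w
                vsum += w * r.intensity
            row[x] = vsum / wsum              # synthetic intensity value
    return frame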
It should be noted that because step 1304 does not require any additional scanning by the ladar system's mirrors or any transmit/receive propagation delays, it is expected that step 1304 can be performed with low latency relative to the time that would be needed to physically scan and interrogate the non-targeted range points via ladar pulses, particularly if sufficiently powerful compute resources such as parallel processing logic are used. In this fashion, it is expected that the frame rate of the ladar system can remain high even though full-size frames are output.
In another example embodiment, the synthetic ladar frame can be generated in coordination with a rolling shutter technique.
With such a “rolling” ladar frame, there is a benefit in that the ladar system gets near every point in the image roughly twice as fast as it would by scanning each row in succession. Therefore, anything that enters the frame which exhibits a size of three or more horizontal lines/rows can be detected twice as fast. New ladar frames can be generated each time the shot list has been scanned in a given direction (see steps 1402 and 1404). Synthetic filling techniques as described at step 1304 above can then be applied to each such rolling frame.
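For purposes of illustration, the rolling shutter traversal might be sketched in Python as follows; the per-row fire_shots method and the emit_frame callback are hypothetical interfaces assumed for this example.

def rolling_shutter(rows, emit_frame, n_traversals):
    """Fire the shot list row by row, alternating scan direction, and emit
    one synthetic frame per traversal (cf. steps 1402-1406)."""
    direction = 1                              # +1: top-down; -1: bottom-up
    for _ in range(n_traversals):
        ordered = rows if direction == 1 else list(reversed(rows))
        for row in ordered:
            row.fire_shots()                   # hypothetical per-row firing
        emit_frame(direction)                  # new frame after each pass
        direction = -direction                 # reverse for the next pass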
Furthermore, if these synthetic frame techniques are combined with fast velocity estimation as described in U.S. patent application Ser. No. 16/106,406, entitled “Low Latency Intra-Frame Motion Estimation Based on Clusters of Ladar Pulses”, filed Aug. 21, 2018, the entire disclosure of which is incorporated herein by reference, the ladar system can obtain an interpolated dense image of a static background scene and target motion at the same time. Velocity estimations of a moving object in the image can be obtained using the current position of the object in the current synthetic frame and the prior position of the object in the previous synthetic frame, assuming that the radius parameter R discussed earlier is less than the width and height of the moving object. For example, the current synthetic frame can be based on an ascending traversal of the shot rows while the prior synthetic frame can be based on the immediately preceding descending traversal of the shot rows. Assuming that the moving object encompasses three or more horizontal lines/rows, this permits the ladar system to estimate velocity based on the object's relative positions in the synthetic frames. Given that the synthetic frames are generated faster than a conventional full ladar frame, this improves the speed with which the ladar system can detect and track object velocity. Also, if desired, a practitioner need not reverse the scan direction for each traversal of the shot list at step 1406.
In another example embodiment, the spatial index-based interpolation techniques described herein can be used in combination with optical flow techniques for target tracking.
However, the use of optical flow in ladar systems is complicated by at least two factors: (1) with ladar systems, only a small fraction of the scene is usually illuminated/interrogated (e.g., even with raster scan approaches, beam steps may be 1 degree with a beam divergence on the order of 0.1 degree), in which case much of the information needed for full-scale optical flow is not available and must be estimated; and (2) unlike video systems (which operate in angle/angle (steradian) space and therefore deliver angular pixel sizes that are independent of range), ladar systems work with voxels (volume pixels) that exhibit a size which grows with depth. Because the sizes of voxels grow with depth, the optical flow calculation for ladar systems needs to account for aligning data at different distances.
Furthermore, while the depth information that is available from a ladar system helps with object tracking, it can be a complicating factor for arresting self-motion of the ladar system. However, the ability to use the spatial index as discussed above to interpolate returns not only helps solve complicating factor (1) discussed above, but it also allows the system to generate fixed-size optical flow voxels, which helps solve complicating factor (2) discussed above.
The spatial index-based interpolation can generate fixed-size voxels as follows. If the object (e.g., a car) is at close range, then the shots themselves are fine-grained, and no interpolation is needed. At longer range, where the actual data voxels are spaced wider apart, the spatial index-based interpolation techniques can be used to obtain a finely-spaced sampling. For example, suppose that at 10 m the shot grid delivers samples spaced a foot apart. At 100 m, the same shot list produces voxels 10 feet apart. But, given this widely-spaced data, one can interpolate using the spatial index (e.g., see equation (3) above) to effectively resample at one-foot spacing.
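A non-limiting sketch of this fixed-spacing resampling is shown below, reusing the inverse-distance scheme from the earlier sketch; the function name, region-of-interest arguments, and one-foot constant are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

FOOT_M = 0.3048  # one foot, in meters

def resample_fixed_voxels(points_xyz, intensities, roi_min, roi_max,
                          spacing_m=FOOT_M, k=4):
    """Resample widely-spaced long-range returns onto a fixed grid so that
    optical flow operates on fixed-size voxels regardless of depth.

    points_xyz: (N, 3) measured return positions in meters.
    roi_min/roi_max: (3,) corners of the region of interest to resample."""
    intensities = np.asarray(intensities, dtype=float)
    tree = cKDTree(points_xyz)
    axes = [np.arange(lo, hi, spacing_m) for lo, hi in zip(roi_min, roi_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    dists, idx = tree.query(grid, k=k)
    w = 1.0 / np.maximum(dists, 1e-9)
    w /= w.sum(axis=1, keepdims=True)
    return grid, (w * intensities[idx]).sum(axis=1)  # fixed-spacing samples
```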
At step 1602, the system determines the motion from Frame K to previous frames using knowledge of the ladar system's velocity. Step 1602 can be performed using techniques such as arrested synthetic aperture imaging (SAI) or image stacking. For example, in principle, when a person is a passenger in a car driving past a wheat field, that person will notice that the wheat directly to the person's side appears to move much faster than the wheat far ahead, if in both cases a reference point (e.g., a wheat stalk) is picked at the same radial distance. Arrested SAI uses this fact, combined with knowledge of the car motion, to align (stack) images frame to frame. By determining the motion between frames, the system can register the current frame to previous frames for static objects depicted in those frames (e.g., see 1502 and 1504 in FIG. 15).
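A translation-only sketch of this registration follows; a full implementation would also compensate the platform's rotation, and the function name and arguments are assumptions for illustration.

```python
import numpy as np

def arrest_self_motion(points_xyz, ego_velocity_mps, dt_s):
    """Register the current frame's static points to the previous frame
    (step 1602) by removing the ladar platform's own displacement, so
    that static scene content stacks frame to frame."""
    return np.asarray(points_xyz) - np.asarray(ego_velocity_mps) * dt_s
```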
Then, at step 1606, the system can determine whether an optical flow line maps the current shot position for the ladar system to a position in the previous frame for which there is no ladar return. If the shot position maps to an optical flow line for which there is no prior return data, the system can leverage the spatial index of prior return data to interpolate the return data for the current shot position using the spatial index interpolation techniques described above. There are several techniques for choosing which points to use as a basis for interpolation. One example is to use the points from the prior frame that are nearest the mapped position. Another technique is to use edge detection to find the boundary of an object, and then store the associated voxels which form that boundary edge from the prior frame as the basis for interpolation in the current frame. By doing so for the optical flow lines found at step 1604, the system is able to build a static scene intensity (clutter) map for the static aspects of the scene.
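A minimal sketch of this backfill step is given below, again leaning on a k-d tree over the prior frame's returns; the names and the inverse-distance weighting are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def backfill_along_flow(prior_pts, prior_intensities, mapped_pos, k=4):
    """When an optical flow line maps the current shot to a prior-frame
    position with no recorded return (step 1606), interpolate one from
    the prior returns surrounding that position."""
    prior_intensities = np.asarray(prior_intensities, dtype=float)
    tree = cKDTree(prior_pts)                  # spatial index of the prior frame
    dists, idx = tree.query(np.asarray(mapped_pos), k=k)
    w = 1.0 / np.maximum(dists, 1e-9)
    return float(np.sum(w * prior_intensities[idx]) / np.sum(w))
```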
This leaves the objects in the scene that are moving. For these objects, the ladar system can include a camera that produces camera images of the scene (e.g., camera data such as video RGB data). Moving objects with defined characteristics can then be detected within the camera images. For example, moving objects with high reflectivity (e.g., the taillights and license plate shown in FIG. 15) can be readily detected within the camera images, and at step 1608 the system can track these detections across camera frames to predict the future positions of the moving objects.
Next, at step 1610, the system can schedule ladar shots that target the predicted future positions for those moving objects as determined at step 1608. Accordingly, with reference to the example of FIG. 15, the ladar system can interrogate the moving vehicle at the positions where its high-reflectivity features are predicted to appear rather than where they were last observed.
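A non-limiting sketch of steps 1608 and 1610 together is shown below, assuming the high-reflectivity features have already been detected and associated between two camera frames, and using a simple constant-velocity prediction.

```python
import numpy as np

def predict_and_schedule(feat_prev, feat_curr, dt_s, lead_s):
    """Track detected features (e.g., taillights) across two camera frames,
    predict where they will be lead_s seconds ahead (step 1608), and return
    those positions as shot targets for the scheduler (step 1610).

    feat_prev/feat_curr: (N, 2) feature positions, same object order."""
    velocity = (np.asarray(feat_curr) - np.asarray(feat_prev)) / dt_s
    return np.asarray(feat_curr) + velocity * lead_s  # predicted shot targets
```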
While the invention has been described above in relation to its example embodiments, various modifications may be made thereto that still fall within the invention's scope. Such modifications to the invention will be recognizable upon review of the teachings herein.
This patent application claims priority to U.S. provisional patent application Ser. No. 62/750,540, filed Oct. 25, 2018, and entitled “Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data”, the entire disclosure of which is incorporated herein by reference. This patent application also claims priority to U.S. provisional patent application Ser. No. 62/805,781, filed Feb. 14, 2019, and entitled “Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data”, the entire disclosure of which is incorporated herein by reference. This patent application is also related to (1) U.S. patent application Ser. No. 16/356,046, filed this same day, and entitled “Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data”, now U.S. Pat. No. 10,656,252, (2) U.S. patent application Ser. No. 16/356,061, filed this same day, and entitled “Adaptive Control of Ladar Shot Energy Using Spatial Index of Prior Ladar Return Data”, now U.S. Pat. No. 11,327,177, (3) U.S. patent application Ser. No. 16/356,089, filed this same day, and entitled “Adaptive Control of Ladar Shot Selection Using Spatial Index of Prior Ladar Return Data”, now U.S. Pat. No. 10,598,788, (4) U.S. patent application Ser. No. 16/356,101, filed this same day, and entitled “Adaptive Control of Ladar System Camera Using Spatial Index of Prior Ladar Return Data”, now U.S. Pat. No. 10,656,277, and (5) U.S. patent application Ser. No. 16/356,116, filed this same day, and entitled “System and Method for Synthetically Filling Ladar Frames Based on Prior Ladar Return Data”, now U.S. Pat. No. 10,670,718, the entire disclosures of each of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4579430 | Bille | Apr 1986 | A |
5552893 | Akasu | Sep 1996 | A |
5625644 | Myers | Apr 1997 | A |
5638164 | Landau | Jun 1997 | A |
5808775 | Inagaki et al. | Sep 1998 | A |
5815250 | Thomson et al. | Sep 1998 | A |
5831719 | Berg et al. | Nov 1998 | A |
6031601 | McCusker et al. | Feb 2000 | A |
6205275 | Melville | Mar 2001 | B1 |
6245590 | Wine et al. | Jun 2001 | B1 |
6288816 | Melville et al. | Sep 2001 | B1 |
6847462 | Kacyra et al. | Jan 2005 | B1 |
6926227 | Young et al. | Aug 2005 | B1 |
7038608 | Gilbert | May 2006 | B1 |
7206063 | Anderson et al. | Apr 2007 | B2 |
7236235 | Dimsdale | Jun 2007 | B2 |
7436494 | Kennedy et al. | Oct 2008 | B1 |
7602477 | Nakamura | Oct 2009 | B2 |
7701558 | Walsh et al. | Apr 2010 | B2 |
7800736 | Pack et al. | Sep 2010 | B2 |
7894044 | Sullivan | Feb 2011 | B1 |
7944548 | Eaton | May 2011 | B2 |
8072663 | O'Neill et al. | Dec 2011 | B2 |
8081301 | Stann et al. | Dec 2011 | B2 |
8120754 | Kaehler | Feb 2012 | B2 |
8228579 | Sourani | Jul 2012 | B2 |
8427657 | Milanović | Apr 2013 | B2 |
8635091 | Amigo et al. | Jan 2014 | B2 |
8681319 | Tanaka et al. | Mar 2014 | B2 |
8892569 | Bowman et al. | Nov 2014 | B2 |
8896818 | Walsh et al. | Nov 2014 | B2 |
9069061 | Harwit | Jun 2015 | B1 |
9085354 | Peeters et al. | Jul 2015 | B1 |
9128190 | Ulrich et al. | Sep 2015 | B1 |
9261881 | Ferguson et al. | Feb 2016 | B1 |
9278689 | Delp | Mar 2016 | B1 |
9285477 | Smith et al. | Mar 2016 | B1 |
9305219 | Ramalingam et al. | Apr 2016 | B2 |
9315178 | Ferguson et al. | Apr 2016 | B1 |
9336455 | Withers et al. | May 2016 | B1 |
9360554 | Retterath et al. | Jun 2016 | B2 |
9383753 | Templeton et al. | Jul 2016 | B1 |
9437053 | Jenkins et al. | Sep 2016 | B2 |
9516244 | Borowski | Dec 2016 | B2 |
9575184 | Gilliland et al. | Feb 2017 | B2 |
9581967 | Krause | Feb 2017 | B1 |
9841495 | Campbell et al. | Dec 2017 | B2 |
9885778 | Dussan | Feb 2018 | B2 |
9897687 | Campbell et al. | Feb 2018 | B1 |
9897689 | Dussan | Feb 2018 | B2 |
9933513 | Dussan et al. | Apr 2018 | B2 |
9958545 | Eichenholz et al. | May 2018 | B2 |
10007001 | LaChapelle et al. | Jun 2018 | B1 |
10042043 | Dussan | Aug 2018 | B2 |
10042159 | Dussan et al. | Aug 2018 | B2 |
10073166 | Dussan | Sep 2018 | B2 |
10078133 | Dussan | Sep 2018 | B2 |
10088558 | Dussan | Oct 2018 | B2 |
10108867 | Vallespi-Gonzalez et al. | Oct 2018 | B1 |
10185028 | Dussan et al. | Jan 2019 | B2 |
10209349 | Dussan et al. | Feb 2019 | B2 |
10215848 | Dussan | Feb 2019 | B2 |
10282591 | Lindner et al. | May 2019 | B2 |
10598788 | Dussan et al. | Mar 2020 | B1 |
10656252 | Dussan et al. | May 2020 | B1 |
10656277 | Dussan et al. | May 2020 | B1 |
10670718 | Dussan et al. | Jun 2020 | B1 |
11327177 | Dussan et al. | May 2022 | B2 |
20020176067 | Charbon | Nov 2002 | A1 |
20030122687 | Trajkovic et al. | Jul 2003 | A1 |
20030151542 | Steinlechner et al. | Aug 2003 | A1 |
20030154060 | Damron | Aug 2003 | A1 |
20050057654 | Byren | Mar 2005 | A1 |
20050216237 | Adachi et al. | Sep 2005 | A1 |
20060007362 | Lee et al. | Jan 2006 | A1 |
20060176468 | Anderson et al. | Aug 2006 | A1 |
20060197936 | Liebman et al. | Sep 2006 | A1 |
20060227315 | Beller | Oct 2006 | A1 |
20060265147 | Yamaguchi et al. | Nov 2006 | A1 |
20080136626 | Hudson et al. | Jun 2008 | A1 |
20080159591 | Ruedin | Jul 2008 | A1 |
20090059201 | Willner et al. | Mar 2009 | A1 |
20090119044 | Levesque | May 2009 | A1 |
20090128864 | Inage | May 2009 | A1 |
20090242468 | Corben et al. | Oct 2009 | A1 |
20090292468 | Wu et al. | Nov 2009 | A1 |
20100027602 | Abshire et al. | Feb 2010 | A1 |
20100053715 | O'Neill et al. | Mar 2010 | A1 |
20100165322 | Kane et al. | Jul 2010 | A1 |
20100204964 | Pack et al. | Aug 2010 | A1 |
20110066262 | Kelly et al. | Mar 2011 | A1 |
20110085155 | Stann et al. | Apr 2011 | A1 |
20110097014 | Lin | Apr 2011 | A1 |
20110146908 | Kobayashi | Jun 2011 | A1 |
20110149268 | Marchant et al. | Jun 2011 | A1 |
20110149360 | Sourani | Jun 2011 | A1 |
20110153367 | Amigo et al. | Jun 2011 | A1 |
20110260036 | Baraniuk et al. | Oct 2011 | A1 |
20110282622 | Canter | Nov 2011 | A1 |
20110317147 | Campbell et al. | Dec 2011 | A1 |
20120038817 | McMackin et al. | Feb 2012 | A1 |
20120044093 | Pala | Feb 2012 | A1 |
20120044476 | Earhart et al. | Feb 2012 | A1 |
20120236379 | da Silva et al. | Sep 2012 | A1 |
20120249996 | Tanaka et al. | Oct 2012 | A1 |
20120257186 | Rieger et al. | Oct 2012 | A1 |
20130206967 | Shpunt et al. | Aug 2013 | A1 |
20140021354 | Gagnon et al. | Jan 2014 | A1 |
20140078514 | Zhu | Mar 2014 | A1 |
20140211194 | Pacala et al. | Jul 2014 | A1 |
20140291491 | Shpunt et al. | Oct 2014 | A1 |
20140300732 | Friend et al. | Oct 2014 | A1 |
20140350836 | Stettner et al. | Nov 2014 | A1 |
20150081211 | Zeng et al. | Mar 2015 | A1 |
20150153452 | Yamamoto et al. | Jun 2015 | A1 |
20150269439 | Versace et al. | Sep 2015 | A1 |
20150304634 | Karvounis | Oct 2015 | A1 |
20150331113 | Stettner et al. | Nov 2015 | A1 |
20150369920 | Setono et al. | Dec 2015 | A1 |
20150378011 | Owechko | Dec 2015 | A1 |
20150378187 | Heck et al. | Dec 2015 | A1 |
20160003946 | Gilliland et al. | Jan 2016 | A1 |
20160005229 | Lee et al. | Jan 2016 | A1 |
20160041266 | Smits | Feb 2016 | A1 |
20160047895 | Dussan | Feb 2016 | A1 |
20160047896 | Dussan | Feb 2016 | A1 |
20160047897 | Dussan | Feb 2016 | A1 |
20160047898 | Dussan | Feb 2016 | A1 |
20160047899 | Dussan | Feb 2016 | A1 |
20160047900 | Dussan | Feb 2016 | A1 |
20160047903 | Dussan | Feb 2016 | A1 |
20160146595 | Boufounos et al. | May 2016 | A1 |
20160157828 | Sumi et al. | Jun 2016 | A1 |
20160274589 | Templeton et al. | Sep 2016 | A1 |
20160293647 | Lin et al. | Oct 2016 | A1 |
20160320486 | Murai et al. | Nov 2016 | A1 |
20160370462 | Yang | Dec 2016 | A1 |
20160379094 | Mittal et al. | Dec 2016 | A1 |
20170158239 | Dhome et al. | Jun 2017 | A1 |
20170176990 | Keller | Jun 2017 | A1 |
20170199280 | Nazemi et al. | Jul 2017 | A1 |
20170205873 | Shpunt et al. | Jul 2017 | A1 |
20170211932 | Zadravec et al. | Jul 2017 | A1 |
20170219695 | Hall et al. | Aug 2017 | A1 |
20170234973 | Axelsson | Aug 2017 | A1 |
20170242102 | Dussan et al. | Aug 2017 | A1 |
20170242103 | Dussan | Aug 2017 | A1 |
20170242104 | Dussan | Aug 2017 | A1 |
20170242105 | Dussan et al. | Aug 2017 | A1 |
20170242106 | Dussan et al. | Aug 2017 | A1 |
20170242107 | Dussan et al. | Aug 2017 | A1 |
20170242108 | Dussan et al. | Aug 2017 | A1 |
20170242109 | Dussan et al. | Aug 2017 | A1 |
20170263048 | Glaser et al. | Sep 2017 | A1 |
20170269197 | Hall et al. | Sep 2017 | A1 |
20170269198 | Hall et al. | Sep 2017 | A1 |
20170269209 | Hall et al. | Sep 2017 | A1 |
20170269215 | Hall et al. | Sep 2017 | A1 |
20170307876 | Dussan et al. | Oct 2017 | A1 |
20180031703 | Ngai et al. | Feb 2018 | A1 |
20180045816 | Jarosinski | Feb 2018 | A1 |
20180059248 | O'Keeffe | Mar 2018 | A1 |
20180075309 | Sathyanarayana et al. | Mar 2018 | A1 |
20180081034 | Guo | Mar 2018 | A1 |
20180120436 | Smits | May 2018 | A1 |
20180137675 | Kwant | May 2018 | A1 |
20180143300 | Dussan | May 2018 | A1 |
20180143324 | Keilaf et al. | May 2018 | A1 |
20180188355 | Bao et al. | Jul 2018 | A1 |
20180224533 | Dussan et al. | Aug 2018 | A1 |
20180238998 | Dussan et al. | Aug 2018 | A1 |
20180239000 | Dussan et al. | Aug 2018 | A1 |
20180239001 | Dussan et al. | Aug 2018 | A1 |
20180239004 | Dussan et al. | Aug 2018 | A1 |
20180239005 | Dussan et al. | Aug 2018 | A1 |
20180284234 | Curatu | Oct 2018 | A1 |
20180284278 | Russell et al. | Oct 2018 | A1 |
20180284279 | Campbell et al. | Oct 2018 | A1 |
20180299534 | LaChapelle et al. | Oct 2018 | A1 |
20180306905 | Kapusta et al. | Oct 2018 | A1 |
20180306926 | LaChapelle | Oct 2018 | A1 |
20180306927 | Slutsky et al. | Oct 2018 | A1 |
20180341103 | Dussan et al. | Nov 2018 | A1 |
20180348361 | Turbide | Dec 2018 | A1 |
20190025407 | Dussan | Jan 2019 | A1 |
20190041521 | Kalscheur et al. | Feb 2019 | A1 |
20190086514 | Dussan et al. | Mar 2019 | A1 |
20190086550 | Dussan et al. | Mar 2019 | A1 |
20190113603 | Wuthishuwong et al. | Apr 2019 | A1 |
20190212450 | Steinberg et al. | Jul 2019 | A1 |
20200044336 | Dani | Feb 2020 | A1 |
20200132818 | Dussan et al. | Apr 2020 | A1 |
Number | Date | Country |
---|---|---|
103885065 | Jun 2014 | CN |
2004034084 | Apr 2004 | WO |
2006076474 | Jul 2006 | WO |
2008008970 | Jan 2008 | WO |
2016025908 | Feb 2016 | WO |
2017143183 | Aug 2017 | WO |
2017143217 | Aug 2017 | WO |
2018152201 | Aug 2018 | WO |
2019010425 | Jan 2019 | WO |
Entry |
---|
Office Action for U.S. Appl. No. 16/356,101 dated Jun. 12, 2019. |
Office Action for U.S. Appl. No. 16/356,116 dated Jul. 25, 2019. |
Schubert et al., “How to Build and Customize a High-Resolution 3D Laserscanner Using Off-The-Shelf Components”, preprint for Towards Autonomous Robotic Systems, 2016. |
Notice of Allowance for U.S. Appl. No. 16/356,116 dated Mar. 25, 2020. |
Prosecution history for U.S. Appl. No. 16/356,046, filed Mar. 18, 2019, now U.S. Pat. No. 10,656,252 issued May 19, 2020. |
“Compressed Sensing,” Wikipedia, 2019, downloaded Jun. 22, 2019 from https://en.wikipedia.org/wiki/Compressed_sensing, 16 pgs. |
“Entrance Pupil,” Wikipedia, 2016, downloaded Jun. 22, 2019 from https://en.wikipedia.org/wiki/Entrance_pupil, 2 pgs. |
Analog Devices, “Data Sheet AD9680”, 98 pages, 2014-2015. |
Chen et al., “Estimating Depth from RGB and Sparse Sensing”, European Conference on Computer Vision, Springer, 2018, pp. 176-192. |
Donoho, “Compressed Sensing”, IEEE Transactions on Information Theory, Apr. 2006, vol. 52, No. 4, 18 pgs. |
Howland et al., “Compressive Sensing LIDAR for 3D Imaging”, Optical Society of America, May 1-6, 2011, 2 pages. |
Kessler, “An afocal beam relay for laser XY scanning systems”, Proc. of SPIE vol. 8215, 9 pages, 2012. |
Kim et al., “Investigation on the occurrence of mutual interference between pulsed terrestrial LIDAR scanners”, 2015 IEEE Intelligent Vehicles Symposium (IV), Jun. 28-Jul. 1, 2015, COEX, Seoul, Korea, pp. 437-442. |
Maxim Integrated Products, Inc., Tutorial 800, “Design A Low-Jitter Clock for High Speed Data Converters”, 8 pages, Jul. 17, 2002. |
Meinhardt-Llopis et al., “Horn-Schunck Optical Flow with a Multi-Scale Strategy”, Image Processing On Line, Jul. 19, 2013, 22 pages. |
Moss et al., “Low-cost compact MEMS scanning LADAR system for robotic applications”, Proc. of SPIE, 2012, vol. 8379, 837903-1 to 837903-9. |
Office Action for U.S. Appl. No. 16/356,046 dated Jun. 3, 2019. |
Office Action for U.S. Appl. No. 16/356,089 dated Jun. 12, 2019. |
Redmayne et al., “Understanding the Effect of Clock Jitter on High Speed ADCs”, Design Note 1013, Linear Technology, 4 pages, 2006. |
Rehn, “Optical properties of elliptical reflectors”, Opt. Eng. 43(7), pp. 1480-1488, Jul. 2004. |
Sharafutdinova et al., “Improved field scanner incorporating parabolic optics. Part 1: Simulation”, Applied Optics, vol. 48, No. 22, p. 4389-4396, Aug. 2009. |
U.S. Appl. No. 16/106,350, filed Aug. 21, 2018. |
U.S. Appl. No. 16/106,406, filed Aug. 21, 2018. |
Prosecution history for U.S. Appl. No. 16/356,089, filed Mar. 18, 2019, now U.S. Pat. No. 10,598,788 issued Mar. 24, 2020. |
Prosecution history for U.S. Appl. No. 16/356,101, filed Mar. 18, 2019, now U.S. Pat. No. 10,656,277 issued May 19, 2020. |
Prosecution history for U.S. Appl. No. 16/356,116, filed Mar. 18, 2019, now U.S. Pat. No. 10,670,718 issued Jun. 2, 2020. |
Prosecution history for U.S. Appl. No. 16/356,061, filed Mar. 18, 2019, now U.S. Pat. No. 11,327,177 issued May 10, 2022. |
Number | Date | Country | |
---|---|---|---|
20200225324 A1 | Jul 2020 | US |
Number | Date | Country | |
---|---|---|---|
62805781 | Feb 2019 | US | |
62750540 | Oct 2018 | US |