The present technology is generally directed to correcting motion-based inaccuracy in point clouds, for example, point clouds generated by one or more emitter/detector sensors (e.g., laser sensors) carried by a scanning platform.
The surrounding environment of a scanning platform can typically be scanned or otherwise detected using one or more emitter/detector sensors. Emitter/detector sensors, such as LiDAR sensors, typically transmit a pulsed signal (e.g., a laser signal) outward, detect reflections of the pulsed signal, and identify three-dimensional information (e.g., laser scanning points) in the environment to facilitate object detection and/or recognition. Typical emitter/detector sensors can provide three-dimensional geometry information (e.g., scanning points represented in a three-dimensional coordinate system associated with the sensor or the scanning platform) accumulated over short periods of time. The information obtained regarding the positions of objects can facilitate the process of detecting pedestrians, vehicles, and/or other objects in the environment, thereby providing a basis for target tracking, obstacle avoidance, route planning, and/or other applications in automated or assisted navigation operations. However, inaccuracies exist at least partly due to the accumulation of scanning points over time, which can affect various higher-level applications. Accordingly, there remains a need for improved sensing techniques and devices.
The following summary is provided for the convenience of the reader and identifies several representative embodiments of the disclosed technology.
In some embodiments, a computer-implemented method for adjusting point clouds generated using at least a scanning unit carried by a scanning platform includes obtaining a base point cloud comprising a plurality of scanning points that are produced by the scanning unit during a period of time, wherein each of the scanning points indicates a position of at least a portion of a target object, and wherein the target object is associated with a motion model. The method can further include determining one or more adjusting factors applicable to the scanning points based, at least in part, on the motion model, and transforming at least one subset of the scanning points based, at least in part, on the one or more adjusting factors to generate an adjusted point cloud of the target object.
In some embodiments, the positions indicated by at least two of the scanning points correspond to different timepoints. In some embodiments, the positions indicated by at least two of the scanning points correspond to different portions of the target object. In some embodiments, the scanning points are represented within a three-dimensional reference system associated with the scanning unit or the scanning platform. In some embodiments, the motion model includes a translational motion component and/or a rotational motion component. In some embodiments, the translational motion component includes a constant translational speed factor. In some embodiments, the motion model includes a rotational motion component. In some embodiments, the rotational motion component includes a constant rotational speed factor.
In some embodiments, determining the one or more adjusting factors comprises assessing a point cloud measurement based, at least in part, on a volume relating to the scanning points. In some embodiments, assessing the point cloud measurement includes applying the motion model to the scanning points and searching for a minimized quantity of volume pixels (voxels) occupied by the scanning points at a target timepoint in accordance with the applying of the motion model. In some embodiments, assessing the point cloud measurement includes applying the motion model to the scanning points and searching for a minimized volume enclosed by the scanning points at a target timepoint in accordance with the applying of the motion model.
In some embodiments, the target timepoint corresponds to the end of the period of time. In some embodiments, the one or more adjusting factors include at least one of a translational velocity or a rotational speed. In some embodiments, transforming the at least one subset of the scanning points comprises relocating each scanning point of the subset based, at least in part, on the one or more adjusting factors. In some embodiments, relocating each scanning point is based, at least in part, on movements associated with the scanning point between a timepoint when the scanning point was produced and a subsequent target timepoint. In some embodiments, a relative distance between the target object and the scanning platform changes during the period of time.
In some embodiments, the scanning platform includes at least one of an unmanned aerial vehicle (UAV), a manned aircraft, an autonomous car, a self-balancing vehicle, a robot, a smart wearable device, a virtual reality (VR) head-mounted display, or an augmented reality (AR) head-mounted display. In some embodiments, the method further includes locating the target object based, at least in part, on the adjusted point cloud.
Any of the foregoing methods can be implemented via a non-transitory computer-readable medium storing computer-executable instructions that, when executed, cause one or more processors associated with a scanning platform to perform corresponding actions, or via a vehicle including a programmed controller that at least partially controls one or more motions of the vehicle and that includes one or more processors configured to perform corresponding actions.
When scanning an object using emitter/detector sensor(s) (e.g., a LiDAR sensor), relative movement between the scanned object and the sensor(s) (e.g., carried by a mobile scanning platform) can cause inaccuracies (e.g., smearing or blurring) in a three-dimensional (3D) point cloud that includes scanning points accumulated over a period of time. Because inaccurate scanning points do not reflect the true positions of the object (or portions thereof), an object reconstructed from the 3D point cloud can be inaccurate, thereby affecting higher-level applications such as object tracking, obstacle avoidance, and the like. The presently disclosed technology can use a conventional 3D point cloud as input, analyze the scanning results of the moving object, and correct the motion-based inaccuracy in accordance with one or more motion models. Corrected or adjusted point clouds generated based on the presently disclosed technology can facilitate efficient and accurate object detection or recognition, thus providing a reliable basis for various applications in automated or assisted navigation processes.
Several details describing structures and/or processes that are well-known and often associated with scanning platforms (e.g., UAVs or other types of movable platforms) and corresponding systems and subsystems, but that may unnecessarily obscure some significant aspects of the presently disclosed technology, are not set forth in the following description for purposes of clarity. Moreover, although the following disclosure sets forth several embodiments of different aspects of the presently disclosed technology, several other embodiments can have different configurations or different components than those described herein. Accordingly, the presently disclosed technology may have other embodiments with additional elements and/or without several of the elements described below.
Many embodiments of the technology described below may take the form of computer- or controller-executable instructions, including routines executed by a programmable computer or controller. The programmable computer or controller may or may not reside on a corresponding scanning platform. For example, the programmable computer or controller can be an onboard computer of the scanning platform, or a separate but dedicated computer associated with the scanning platform, or part of a network- or cloud-based computing service. Those skilled in the relevant art will appreciate that the technology can be practiced on computer or controller systems other than those shown and described below. The technology can be embodied in a special-purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions described below. Accordingly, the terms “computer” and “controller” as generally used herein refer to any data processor and can include Internet appliances and handheld devices (including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like). Information handled by these computers and controllers can be presented on any suitable display medium, including an LCD (liquid crystal display). Instructions for performing computer- or controller-executable tasks can be stored in or on any suitable computer-readable medium, including hardware, firmware, or a combination of hardware and firmware. Instructions can be contained in any suitable memory device, including, for example, a flash drive, USB (universal serial bus) device, and/or other suitable medium. In particular embodiments, the instructions are accordingly non-transitory.
A typical 3D point cloud can include scanning points accumulated over one or more periods of time (e.g., one frame or a few consecutive frames produced by the sensor(s)). When a target object being scanned moves relative to the sensor(s) during the time period(s), certain portions of the point cloud can indicate false positions of the object in space (with respect to a current timepoint), creating smearing, blurring, or dragging “shadow” effects. The length and shape of the shadow can depend on the nature of the target object's relative movement. For example, translational motion of the target object can contribute to a flat, rectangular shadow, while rotational motion can leave an arc-like trajectory.
Because the motion of an object in space can be arbitrary, the mathematical description of motion-based inaccuracies can be complicated. Conventional point cloud processing systems typically do not account for or correct this phenomenon, and simply treat the false portions of the point cloud as if the purported portions of the object existed in physical space. However, this false sense of object shape and/or position can contribute to erroneous estimates or overly conservative decision-making in target tracking, obstacle avoidance, path planning, and/or other applications.
The base point cloud 225, therefore, includes information that reflects the trajectory of the target object 205 (or a portion thereof) as it moves during the time period Δt. The length and shape of the base point cloud 225 depend, at least in part, on the actual movement of the target object 205 during the scanning time period. The presently disclosed technology can generate a corrected or adjusted point cloud 235 by relocating at least a subset of the scanning points 215a-215e, so that the corrected or adjusted point cloud 235 more accurately reflects the shape and/or distance of at least some portion of the target object 205 (e.g., the front of a car).
Illustratively, a base point cloud that includes scanning points accumulated by the sensor(s) during a period of time Δt can be expressed as the set:

P_{\Delta t} = \bigcup_{p :\; t_p \in [0,\, \Delta t]} \{\, p \,\}
where each scanning point p∈PΔt is associated with a scanning timepoint tp when the scanning point was collected or produced.
If the target object travels at a velocity vt at timepoint t, a corrected or adjusted point cloud after scanning-point relocation can include the following set:
P'_{\Delta t} = \bigcup_{p \in P_{\Delta t}} \left\{\, p + \int_{t_p}^{\Delta t} v_t \, dt \,\right\} \qquad (1)
where the integral operation calculates the amount of additional displacement that each scanning point in the base point cloud should have incurred after the scanning timepoint tp. If both translational motion vt and rotational motion rt of the object occur, the set of points included in a corrected or adjusted point cloud can be expressed as:
P'_{\Delta t} = \bigcup_{p \in P_{\Delta t}} \left\{\, c_{t_p} + \left( \int_{t_p}^{\Delta t} r_t \, dt \right) * \left( p_{t_p} - c_{t_p} \right) + \int_{t_p}^{\Delta t} v_t \, dt \,\right\} \qquad (2)
where ct corresponds to a point (e.g., a centroid of the object) about which the object rotates at timepoint t, pt corresponds to the position of point p at timepoint t, and the mathematical operator * stands for a corresponding rotation transformation.
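To make the relocation of formulas (1) and (2) concrete, the following Python sketch relocates each scanning point to a target timepoint under simplifying assumptions: a constant translational velocity, a constant rotational speed about a single vertical axis, and a fixed rotation center. The function and variable names (relocate_points, r_z, centroid, and so on) are illustrative and not taken from the source.

```python
import numpy as np

def relocate_points(points, timestamps, v, r_z, centroid, t_end):
    """Relocate scanning points to their estimated positions at t_end.

    points     : (N, 3) array of scanning points in the sensor/platform frame
    timestamps : (N,) array of per-point scanning timepoints t_p
    v          : assumed constant translational velocity, 3-vector (m/s)
    r_z        : assumed constant rotational speed about the z-axis (rad/s)
    centroid   : assumed fixed rotation center (3-vector)
    t_end      : target timepoint (e.g., the end of the accumulation period)
    """
    points = np.asarray(points, dtype=float)
    v = np.asarray(v, dtype=float)
    centroid = np.asarray(centroid, dtype=float)
    adjusted = np.empty_like(points)
    for i, (p, t_p) in enumerate(zip(points, timestamps)):
        dt = t_end - t_p                    # time elapsed after the point was produced
        theta = r_z * dt                    # accumulated rotation over [t_p, t_end]
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])   # rotation about the vertical axis
        # rotational displacement about the centroid, then translational displacement
        adjusted[i] = centroid + rot @ (p - centroid) + v * dt
    return adjusted
```

Setting r_z to zero reduces the sketch to the purely translational relocation of formula (1).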
For a target object that starts at a velocity v and accelerates at a constant rate a, the displacement S accrued over the period Δt is:

S = \int_{t=0}^{t=\Delta t} (v + a t) \, dt = v \, \Delta t + \tfrac{1}{2} a \, \Delta t^{2} \qquad (3)
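As a brief numerical illustration of formula (3) (the figures here are assumed values, not taken from the source): an object moving at v = 10 m/s that accelerates at a = 2 m/s² over a Δt = 0.1 s scanning frame is displaced by

S = 10 \times 0.1 + \tfrac{1}{2} \times 2 \times 0.1^{2} = 1.01 \ \text{m},

so scanning points collected early in the frame can trail the object's end-of-frame position by roughly a meter.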
A similar analysis can be applied to a target object that decelerates.
At block 505, the method includes obtaining a base point cloud of scanning points regarding an object (e.g., a vehicle, pedestrian, aircraft, etc.). As discussed above, the scanning points are collected during a period of time, and individual scanning points can indicate the positions of different portions of the object at different timepoints. Illustratively, the base point cloud can include scanning points of a single frame produced by an emitter/detector sensor (e.g., a LiDAR sensor). Individual scanning points within a frame may not be generated simultaneously. For example, in some embodiments, although sensor data (e.g., scanning points) are collected continuously, frames of scanning points are generated or transmitted in accordance with some discrete time intervals. In other words, a frame may correspond to a set of sensor data (e.g., scanning points) accumulated over a certain duration of time (e.g., 0.1 second). In some embodiments, the base point cloud can also include scanning points of multiple, consecutive frames produced by one or more sensors.
At block 510, the method includes determining a motion model for the object's movement during the period of time. The motion model can include a translational motion component, a rotational motion component, an oscillatory motion component, and/or other motion components. Illustratively, formula (2) as discussed above can be selected as a motion model associated with the object during the period of time.
At block 515, the method includes assessing a point cloud measurement to determine estimated motion model factors applicable to the scanning points. Motion-based inaccuracy can cause a false, enlarged size of the base point cloud. Therefore, the method can include assessing volume or size related point cloud measurements in order to determine estimated motion model factors. Illustratively, the method can search for a minimized number of volume pixels or voxels (e.g., 0.001-cubic-meter cubes that evenly divide up the three-dimensional space surrounding the scanning platform), with each voxel containing at least one scanning point. Put another way, the smallest number of voxels that describes the point cloud can correspond to the closest approximation of the object in a stationary position. Mathematically, the minimization function can be expressed as:
\arg\min_{v,\, r} \; GIRD\!\left( \bigcup_{p \in P_{\Delta t}} \left\{\, c_{t_p} + \left( \int_{t_p}^{\Delta t} r \, dt \right) * \left( p_{t_p} - c_{t_p} \right) + \int_{t_p}^{\Delta t} v \, dt \,\right\} \right) + E_s(v, r) \qquad (4)
where v and r stand for the constant translational velocity and rotational speed of the object (which are the motion model factors to be estimated), function GIRD(P) calculates the quantity of voxels occupied by P, and function Es(v, r) can correspond to an a priori term based on observations of the translational velocity and rotational speed, which can take the following form:
E_s(v, r) = \left| v - v' \right|^{2} + \left| r - r' \right|^{2} \qquad (5)
where v′, r′ can correspond to observations obtained from a different method or sensor (e.g., by aligning point clouds of the object corresponding to different times, or by using other sensor(s) such as laser tachometers or millimeter-wave radars). In some embodiments, formula (4) does not require the term Es(v, r). In these embodiments, the minimization search can be computationally more expensive (e.g., taking longer to converge).
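The following sketch illustrates the voxel-counting measure and the objective of formulas (4) and (5), using 0.001-cubic-meter voxels (0.1 m edges) as mentioned in the text and reusing the hypothetical relocate_points function from the earlier sketch; the names occupied_voxel_count and objective are illustrative assumptions rather than names from the source.

```python
import numpy as np

VOXEL_EDGE = 0.1  # meters; a 0.001-cubic-meter cube, as mentioned in the text, has 0.1 m edges

def occupied_voxel_count(points, voxel_edge=VOXEL_EDGE):
    """Number of voxels occupied by at least one point (the role played by GIRD(P))."""
    indices = np.floor(np.asarray(points, dtype=float) / voxel_edge).astype(np.int64)
    return len({tuple(idx) for idx in indices})

def objective(points, timestamps, t_end, centroid, v, r_z, v_obs=None, r_obs=None):
    """Quantity to be reduced in formula (4): voxel count of the relocated cloud,
    optionally plus the a priori term E_s(v, r) of formula (5)."""
    adjusted = relocate_points(points, timestamps, v, r_z, centroid, t_end)
    cost = float(occupied_voxel_count(adjusted))
    if v_obs is not None and r_obs is not None:  # E_s term, if observations are available
        cost += float(np.sum((np.asarray(v) - np.asarray(v_obs)) ** 2) + (r_z - r_obs) ** 2)
    return cost
```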
The voxel quantity minimization process based on formula (4) can include determining multiple rotation centers ct of the object. In some embodiments, for computational efficiency and expediency, the method includes a two-step approximation in which a translational transformation is followed by a rotational transformation. Illustratively, in these embodiments, the point cloud measurement assessment can be performed in accordance with the following formula:
\arg\min_{v,\, r} \; GIRD\!\left( \bigcup_{p \in P_{\Delta t}} \left\{\, C + \left( \int_{t_p}^{\Delta t} r \, dt \right) * \left( p + \int_{t_p}^{\Delta t} v \, dt - C \right) \right\} \right) + E_s(v, r) \qquad (6)
where C can correspond to a centroid point of an intermediate point cloud (e.g., \bigcup_{p \in P_{\Delta t}} \{\, p + \int_{t_p}^{\Delta t} v \, dt \,\}) that results from applying only the translational transformation to the base point cloud.
In some embodiments, the method can include assessing a volume enclosed by the point cloud. Similar to the voxel quantity assessment, this approach can use a motion model based formula to calculate a measure (e.g., a volume enclosed by the outer surface of the point cloud, such as a mesh connecting all outer surface scanning points). Unlike the voxel quantity assessment, which seeks to minimize a “skeleton” volume, the enclosed-volume assessment evaluates the overall size of the point cloud. In some embodiments, the method can include assessing multiple measurements of the point cloud (e.g., both voxel quantity and enclosed-volume measurements), and calculating a weighted average of the estimated motion model factors that resulted from the multiple assessments. In some embodiments, the motion model(s) can include factor(s) of non-constant form. For example, the translational velocity and/or rotational speed of the object to be estimated can be defined as function(s) of time t.
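One way to compute such an enclosed-volume measure, sketched below, is to approximate the outer surface with the convex hull of the point cloud; the convex hull is an assumed surface construction used here for illustration, not necessarily the specific mesh the text contemplates.

```python
import numpy as np
from scipy.spatial import ConvexHull

def enclosed_volume(points):
    """Volume enclosed by the convex hull of the point cloud, used here as a
    stand-in for the 'volume enclosed by the outer surface' measure."""
    return ConvexHull(np.asarray(points, dtype=float)).volume
```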
In accordance with some embodiments, searching for a minimization of point cloud measurement (e.g., voxel quantity or enclosed-volume) can include finding a global or local minimized value of the measurement, or (e.g., for reasons of computational efficiency, constraints, and/or economy) simply finding a reduced (but not necessarily minimized) value of the measurement.
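As an illustration of a simple (non-exhaustive) search that finds a reduced rather than strictly minimized value, the sketch below evaluates the objective on a coarse grid of candidate planar velocities and rotational speeds and keeps the lowest-cost combination; it reuses the hypothetical objective function from the earlier sketch, and the search ranges and step counts are arbitrary assumptions.

```python
import numpy as np

def coarse_search(points, timestamps, t_end, centroid,
                  speed_limit=15.0, rot_limit=1.0, steps=7):
    """Evaluate the objective on a coarse grid of planar velocities and
    rotational speeds, and keep the combination with the lowest cost."""
    best_factors, best_cost = None, np.inf
    for vx in np.linspace(-speed_limit, speed_limit, steps):
        for vy in np.linspace(-speed_limit, speed_limit, steps):
            for r_z in np.linspace(-rot_limit, rot_limit, steps):
                cost = objective(points, timestamps, t_end, centroid,
                                 v=(vx, vy, 0.0), r_z=r_z)
                if cost < best_cost:
                    best_factors, best_cost = ((vx, vy, 0.0), r_z), cost
    return best_factors, best_cost
```

A practical system might instead use a multi-resolution search or a gradient-free optimizer over the same objective.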
At block 520, the method includes transforming the scanning points in the base point cloud in accordance with the estimated motion model factors to form a corrected or adjusted point cloud. Illustratively, the method can include relocating at least a subset of scanning points initially included in the base point cloud based, at least in part, on the estimated translational velocity v and/or estimated rotational speed r, in accordance with an applicable motion model (e.g., formula (1) or formula (2)). The relocation moves each scanning point in the subset from a position at timepoint tp when the scanning point was collected or produced to an estimated position of the scanning point at the end of the time period Δt. The method can label or otherwise use the transformed scanning points in combination with any scanning points collected or produced at the end of the time period to form the corrected or adjusted point cloud for the object.
At block 525, the method includes taking one or more further actions based on the corrected or adjusted point cloud. Illustratively, the controller can determine the centroid, contour, shape, and/or can otherwise recognize the object based on the corrected point cloud, which can be more accurate than using the base point cloud. The controller can also determine distances between the sensor (or the scanning platform) and various portions of the object, based on the corrected point cloud, and thereby facilitate obstacle avoidance, target tracking, route planning, and/or other automated/assisted navigation applications.
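As a brief illustration of such downstream uses, the corrected cloud can directly yield, for example, an object centroid and a nearest-point distance; the helper names below are hypothetical and the platform is assumed to sit at the sensor origin.

```python
import numpy as np

def object_centroid(adjusted_points):
    """Centroid of the corrected point cloud, e.g., as a target-tracking cue."""
    return np.mean(np.asarray(adjusted_points, dtype=float), axis=0)

def nearest_distance(adjusted_points, platform_position=(0.0, 0.0, 0.0)):
    """Distance from the scanning platform to the closest point of the corrected cloud."""
    pts = np.asarray(adjusted_points, dtype=float)
    return float(np.min(np.linalg.norm(pts - np.asarray(platform_position), axis=1)))
```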
The processor(s) 705 may include central processing units (CPUs) to control the overall operation of, for example, the host computer. In certain embodiments, the processor(s) 705 accomplish this by executing software or firmware stored in memory 710. The processor(s) 705 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
The memory 710 can be or include the main memory of the computer system. The memory 710 represents any suitable form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. In use, the memory 710 may contain, among other things, a set of machine instructions which, when executed by processor 705, causes the processor 705 to perform operations to implement embodiments of the presently disclosed technology.
Also connected to the processor(s) 705 through the interconnect 725 is an (optional) network adapter 715. The network adapter 715 provides the computer system 700 with the ability to communicate with remote devices, such as storage clients and/or other storage servers, and may be, for example, an Ethernet adapter or Fibre Channel adapter.
The techniques described herein can be implemented by, for example, programmable circuitry (e.g., one or more microprocessors) programmed with software and/or firmware, or entirely in special-purpose hardwired circuitry, or in a combination of such forms. Special-purpose hardwired circuitry may be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.
Software or firmware for use in implementing the techniques introduced here may be stored on a machine-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable storage medium,” as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine may be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible storage medium includes recordable/non-recordable media (e.g., read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; etc.), etc.
The term “logic,” as used herein, can include, for example, programmable circuitry programmed with specific software and/or firmware, special-purpose hardwired circuitry, or a combination thereof.
Some embodiments of the disclosure have other aspects, elements, features, and/or steps in addition to or in place of what is described above. These potential additions and replacements are described throughout the rest of the specification. Reference in this specification to “various embodiments,” “certain embodiments,” or “some embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. These embodiments, even alternative embodiments (e.g., those referenced as “other embodiments”), are not mutually exclusive of other embodiments. Moreover, various features are described that may be exhibited by some embodiments and not by others. Similarly, various requirements are described that may be requirements for some embodiments but not for other embodiments. For example, some embodiments account for translational motion only, others for rotational motion only, and still others account for both. As another example, some embodiments seek minimization of voxel quantity, others seek minimization of enclosed volume, and still others use both techniques.
To the extent any materials incorporated by reference herein conflict with the present disclosure, the present disclosure controls.
The present application is a continuation of U.S. patent application Ser. No. 16/145,173, filed Sep. 28, 2018, which is a continuation of U.S. patent application Ser. No. 15/729,533, filed Oct. 10, 2017, which is a continuation of International Patent Application No. PCT/CN17/95300, filed Jul. 31, 2017, all of which are incorporated herein by reference.