The subject technology provides methods for aligning Light Detection and Ranging (LIDAR) sensors under evaluation with a reference LIDAR sensor on the same vehicle when there is limited overlap between the scans of the reference LIDAR sensor and the scans of the LIDAR sensors under evaluation of the same scene.
An autonomous vehicle is a motorized vehicle that can navigate without a human driver. An exemplary autonomous vehicle includes a plurality of sensor systems, such as, but not limited to, a camera sensor system, a LIDAR sensor system, and a radar sensor system, wherein the autonomous vehicle operates based upon sensor signals output by the sensor systems. Specifically, the sensor signals are provided to an internal computing system in communication with the plurality of sensor systems, wherein a processor executes instructions based upon the sensor signals to control a mechanical system of the autonomous vehicle, such as a vehicle propulsion system, a braking system, or a steering system. In some applications, these systems utilize a perception system (or perception stack) that implements various computer vision techniques to reason about the surrounding environment.
Disclosed are systems, apparatuses, methods, computer-readable media, and circuits for determining that an evaluation LIDAR on a vehicle is misaligned with respect to a reference LIDAR on the vehicle. In one aspect, a method may include receiving data from successive scans performed by the reference LIDAR and an evaluation LIDAR while the vehicle is in motion; receiving vehicle odometry from accelerometers on the vehicle while performing the scans by the reference LIDAR; determining an odometry correction from a process of aligning point clouds received as a result of the scans by the reference LIDAR, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry has been taken into account is reflective of the odometry correction; creating a map from the process of aligning the point clouds received as the result of the scans by the reference LIDAR; and determining an alignment error for the evaluation LIDAR from a process of aligning point clouds received as a result of the scans by the evaluation LIDAR to the map, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry and the odometry correction have been taken into account is reflective of the alignment error of the evaluation LIDAR with respect to the reference LIDAR. For example, a processor receives data from successive scans performed by the reference LIDAR and an evaluation LIDAR while the vehicle is in motion; receives vehicle odometry from accelerometers on the vehicle while the scans by the reference LIDAR are performed; determines an odometry correction from a process of aligning point clouds received as a result of the scans by the reference LIDAR, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry has been taken into account is reflective of the odometry correction; creates a map from the process of aligning the point clouds received as the result of the scans by the reference LIDAR; and determines an alignment error for the evaluation LIDAR from a process of aligning point clouds received as a result of the scans by the evaluation LIDAR to the map, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry and the odometry correction have been taken into account is reflective of the alignment error of the evaluation LIDAR with respect to the reference LIDAR.
In another aspect, a computing system is provided that includes a storage (e.g., a memory configured to store data, such as virtual content data, one or more images, etc.) and one or more processors (e.g., implemented in circuitry) coupled to the memory and configured to execute instructions and, in conjunction with various components (e.g., a network interface, a display, an output device, etc.), cause the processor to receive data from successive scans performed by the reference LIDAR and an evaluation LIDAR while the vehicle is in motion; receive vehicle odometry from accelerometers on the vehicle while performing the scans by the reference LIDAR; determine an odometry correction from a process of aligning point clouds received as a result of the scans by the reference LIDAR, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry has been taken into account is reflective of the odometry correction; create a map from the process of aligning the point clouds received as the result of the scans by the reference LIDAR; and determine an alignment error for the evaluation LIDAR from a process of aligning point clouds received as a result of the scans by the evaluation LIDAR to the map, wherein the amount of translation and rotation required to align the point clouds after the recorded odometry and the odometry correction have been taken into account is reflective of the alignment error of the evaluation LIDAR with respect to the reference LIDAR.
Additional embodiments and features are set forth in part in the description that follows, and will become apparent to those skilled in the art upon examination of the specification or may be learned by the practice of the disclosed subject matter. A further understanding of the nature and advantages of the disclosure may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure.
The disclosed technology addresses a need to check relative extrinsic alignments of multiple LIDAR sensors under evaluation to a reference LIDAR sensor on a vehicle. The alignments can be done by projecting all the LIDAR scans into the same coordinate frame, which requires knowledge of the positions of the LIDAR sensors relative to each other.
The LIDAR data is captured over a relatively short travel distance in a constrained environment. Only a small portion of the LIDAR scans overlaps, because the multiple LIDAR sensors and the reference LIDAR sensor have different views of the same scene as the vehicle moves relative to the target. As such, the point clouds from scanning the scene by the multiple LIDAR sensors have limited or partial overlap with those from the reference LIDAR sensor.
The disclosure provides a solution that builds a map of an entire scene by using a reference LIDAR sensor. When the vehicle drives in a scene or a space, the reference LIDAR sensor on the vehicle may incrementally take scans, and the scans are aligned by using an iterative closest points (ICP) algorithm. The data from the multiple LIDAR sensors can be fed into the ICP algorithm, which can be used to generate a transformation that minimizes the distance between the LIDAR scans from each of the multiple LIDAR sensors.
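As a non-limiting illustration, a minimal point-to-point ICP of the kind referenced above might be sketched as follows in Python (using NumPy and SciPy); the function names and scan format are assumptions for illustration rather than the disclosed implementation:

```python
# A minimal point-to-point ICP sketch; not the disclosed implementation.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """SVD-based least-squares rigid transform mapping src (N,3) onto dst (N,3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Align source (N,3) to target (M,3); returns a 4x4 transform."""
    tree = cKDTree(target)
    T = np.eye(4)
    pts = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(pts)            # nearest-neighbor correspondences
        R, t = best_rigid_transform(pts, target[idx])
        pts = pts @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T                           # accumulate the transformation
        err = dist.mean()
        if abs(prev_err - err) < tol:          # converged to a local minimum
            break
        prev_err = err
    return T
```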
The disclosure provides a method that creates a map of the scene by accumulating scans from the reference LIDAR sensor and aligning scans from other LIDAR sensors with the map. The map creates larger point clouds for alignments of the LIDAR sensors under evaluation to the reference LIDAR sensor on the vehicle. The map increases the amount of data in a consistent coordinate frame and thus improves accuracy.
The disclosure also provides a method that determines odometry correction terms between successive scans from the reference LIDAR sensor and applies the odometry correction terms to the successive scans from the other LIDAR sensors under evaluation. An estimate of the vehicle odometry is obtained from accelerometers and gyroscopes on the vehicle. However, the estimate of the vehicle odometry is inaccurate, which would cause inaccuracy in building the map. A small correction may be made for each scan. The disclosure provides a method that extracts those corrections for the scans from the reference LIDAR sensor and treats those corrections as a corrective term for the odometry of the vehicle as a whole. When aligning evaluation LIDAR sensors against that map, the odometry correction is applied to each scan. Then, the corrected scans from the evaluation LIDARs are aligned against the map. The odometry correction term helps provide more robust alignments than would be obtained without the odometry corrections.
The method includes collecting a series of samples from the reference LIDAR sensor, which may be spaced based on a travel distance. Then, each of the samples from the reference LIDAR is aligned to the prior samples (in time/sequence) from the reference LIDAR, which is the process that builds the map. Once all samples from the reference LIDAR are aligned in sequence, the samples form the map. The odometry correction or correction term may be saved for each of the samples. The odometry correction can be applied before aligning individual scans from evaluation LIDAR sensors on the vehicle against the map. Since the odometry correction has been applied, any additional error will most likely be due to an incorrect alignment of the evaluation LIDAR with respect to the reference LIDAR.
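As a non-limiting illustration, the map-building loop described above might be sketched as follows, reusing the icp() helper from the earlier sketch; the scan format and the convention that each odometry pose maps a scan into the map frame are assumptions for illustration:

```python
# A sketch of the map-building loop; conventions are illustrative assumptions.
import numpy as np

def apply_T(T, pts):
    """Apply a 4x4 rigid transform to an (N,3) point cloud."""
    return pts @ T[:3, :3].T + T[:3, 3]

def build_map(ref_scans, odom_poses):
    """ref_scans: list of (N,3) reference LIDAR scans, one per ~1 m of travel.
    odom_poses: list of 4x4 vehicle poses estimated from accelerometers/gyroscopes."""
    map_pts = ref_scans[0].copy()             # first scan seeds the map frame
    corrections = []                          # per-scan odometry correction terms
    for scan, odom in zip(ref_scans[1:], odom_poses[1:]):
        predicted = apply_T(odom, scan)       # project scan using raw odometry
        residual = icp(predicted, map_pts)    # small transform still required
        corrections.append(residual)          # reflective of the odometry error
        map_pts = np.vstack([map_pts, apply_T(residual, predicted)])
    return map_pts, corrections
```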
Individual scans can be taken from LIDAR sensors under evaluation, which are referred to as evaluation LIDARs. Instead of aligning the scans against individual scans from the reference LIDAR sensor, the scans from the evaluation LIDAR sensors are aligned against the map created from the reference LIDAR sensor.
The present technology provides a benefit of being able to determine likely misalignments of LIDAR sensors on the same vehicle without needing external equipment. The present technology is able to identify LIDAR misalignment errors by comparing data among the LIDARs on the same vehicle.
A further benefit is that the present method does not require a specialized calibration scene. The present technology can be used while a vehicle with multiple LIDARs is moving in any area, including a city street. The fact that the present disclosure makes reference to a particular testing environment that may or may not have LIDAR targets does not negate the capability of the ideas described herein to be used in other environments or without particular targets.
The autonomous vehicle 102 can navigate about roadways without a human driver based upon sensor signals output by sensor systems 104-106 of the autonomous vehicle 102. The autonomous vehicle 102 includes a plurality of sensor systems 104-106 (a first sensor system 104 through an Nth sensor system 106). The sensor systems 104-106 are of different types and are arranged about the autonomous vehicle 102. For example, the first sensor system 104 may be a camera sensor system and the Nth sensor system 106 may be a LIDAR sensor system. Other exemplary sensor systems include radar sensor systems, global positioning system (GPS) sensor systems, inertial measurement units (IMU), infrared sensor systems, laser sensor systems, sonar sensor systems, and the like.
The autonomous vehicle 102 further includes several mechanical systems that are used to effectuate appropriate motion of the autonomous vehicle 102. For instance, the mechanical systems can include but are not limited to, a vehicle propulsion system 130, a braking system 132, and a steering system 134. The vehicle propulsion system 130 may include an electric motor, an internal combustion engine, or both. The braking system 132 can include an engine brake, brake pads, actuators, and/or any other suitable componentry that is configured to assist in decelerating the autonomous vehicle 102. The steering system 134 includes suitable componentry that is configured to control the direction of movement of the autonomous vehicle 102 during navigation.
The autonomous vehicle 102 further includes a safety system 136 that can include various lights and signal indicators, parking brake, airbags, etc. The autonomous vehicle 102 further includes a cabin system 138 that can include cabin temperature control systems, in-cabin entertainment systems, etc.
The autonomous vehicle 102 additionally comprises an internal computing system 110 that is in communication with the sensor systems 104-106 and the mechanical systems 130, 132, 134. The internal computing system includes at least one processor and at least one memory having computer-executable instructions that are executed by the processor. The computer-executable instructions can make up one or more services responsible for controlling the autonomous vehicle 102, communicating with remote computing system 150, receiving inputs from passengers or human co-pilots, logging metrics regarding data collected by sensor systems 104-106 and human co-pilots, etc.
The internal computing system 110 can include a control service 112 that is configured to control operation of the vehicle propulsion system 130, the braking system 132, the steering system 134, the safety system 136, and the cabin system 138. The control service 112 receives sensor signals from the sensor systems 104-106 and communicates with other services of the internal computing system 110 to effectuate operation of the autonomous vehicle 102. In some embodiments, control service 112 may carry out operations in concert with one or more other systems of autonomous vehicle 102.
The internal computing system 110 can also include a constraint service 114 to facilitate safe propulsion of the autonomous vehicle 102. The constraint service 114 includes instructions for activating a constraint based on a rule-based restriction upon operation of the autonomous vehicle 102. For example, the constraint may be a restriction upon navigation that is activated in accordance with protocols configured to avoid occupying the same space as other objects, abide by traffic laws, circumvent avoidance areas, etc. In some embodiments, the constraint service can be part of the control service 112.
The internal computing system 110 can also include a communication service 116. The communication service can include both software and hardware elements for transmitting and receiving signals from/to the remote computing system 150. The communication service 116 is configured to transmit information wirelessly over a network, for example, through an antenna array that provides personal cellular (long-term evolution (LTE), 3G, 5G, etc.) communication.
In some embodiments, one or more services of the internal computing system 110 are configured to send and receive communications to and from the remote computing system 150 for such reasons as reporting data for training and evaluating machine learning algorithms, requesting assistance from the remote computing system 150 or a human operator via the remote computing system 150, software service updates, ridesharing pickup and drop-off instructions, etc.
The internal computing system 110 can also include a latency service 118. The latency service 118 can utilize timestamps on communications to and from the remote computing system 150 to determine if a communication has been received from the remote computing system 150 in time to be useful. For example, when a service of the internal computing system 110 requests feedback from remote computing system 150 on a time-sensitive process, the latency service 118 can determine if a response was timely received from remote computing system 150 as information can quickly become too stale to be actionable. When the latency service 118 determines that a response has not been received within a threshold, the latency service 118 can enable other systems of autonomous vehicle 102 or a passenger to make necessary decisions or to provide the needed feedback.
The internal computing system 110 can also include a user interface service 120 that can communicate with cabin system 138 in order to provide information to or receive information from a human co-pilot or human passenger. In some embodiments, a human co-pilot or human passenger may be required to evaluate and override a constraint from constraint service 114, or the human co-pilot or human passenger may wish to provide an instruction to the autonomous vehicle 102 regarding destinations, requested routes, or other requested operations.
As described above, the remote computing system 150 is configured to send/receive signals to/from the autonomous vehicle 102 regarding reporting data for training and evaluating machine learning algorithms, requesting assistance from the remote computing system 150 or a human operator via the remote computing system 150, software service updates, ridesharing pickup and drop-off instructions, etc.
The remote computing system 150 includes an analysis service 152 that is configured to receive data from autonomous vehicle 102 and analyze the data to train or evaluate machine learning algorithms for operating the autonomous vehicle 102. The analysis service 152 can also perform analysis pertaining to data associated with one or more errors or constraints reported by autonomous vehicle 102.
The remote computing system 150 can also include a user interface service 154 configured to present metrics, video, pictures, and sounds reported from the autonomous vehicle 102 to an operator of remote computing system 150. User interface service 154 can further receive input instructions from an operator that can be sent to the autonomous vehicle 102.
The remote computing system 150 can also include an instruction service 156 for sending instructions regarding the operation of the autonomous vehicle 102. For example, in response to an output of the analysis service 152 or user interface service 154, the instruction service 156 can prepare instructions for one or more services of the autonomous vehicle 102 or a co-pilot or passenger of the autonomous vehicle 102.
The remote computing system 150 can also include a rideshare service 158 configured to interact with ridesharing applications 170 operating on (potential) passenger computing devices. The rideshare service 158 can receive requests to be picked up or dropped off from passenger ridesharing app 170 and can dispatch autonomous vehicle 102 for the trip. The rideshare service 158 can also act as an intermediary between the ridesharing app 170 and the autonomous vehicle 102, wherein a passenger might provide instructions to the autonomous vehicle 102 to go around an obstacle, change routes, honk the horn, etc.
The remote computing system 150 can also include a LIDAR alignment evaluation service 160 configured to build a map from a reference LIDAR sensor and align LIDARs under evaluation with the reference LIDAR on the same vehicle. The LIDAR alignment evaluation service 160 is also configured to use an iterative closest point (ICP) algorithm to determine an odometry correction term from the reference LIDAR sensor, and to receive the data from scans from each of the LIDAR sensors and run the data through the ICP algorithm. The odometry correction term may be represented by six degrees of freedom, including translation coordinates x, y, and z and rotation angles roll, pitch, and yaw.
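As a non-limiting illustration, the six-degrees-of-freedom representation mentioned above might be converted to and from a 4x4 homogeneous transform as sketched below (Python with NumPy and SciPy; the Euler-angle convention is an assumption for illustration):

```python
# A sketch of the six-degrees-of-freedom representation: (x, y, z, roll,
# pitch, yaw) packed into / recovered from a 4x4 rigid transform.
import numpy as np
from scipy.spatial.transform import Rotation

def dof6_to_matrix(x, y, z, roll, pitch, yaw):
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def matrix_to_dof6(T):
    roll, pitch, yaw = Rotation.from_matrix(T[:3, :3]).as_euler("xyz")
    x, y, z = T[:3, 3]
    return x, y, z, roll, pitch, yaw
```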
The ICP algorithm works in the following way. The ICP algorithm may tweak x, y, and z slightly in sequence, and then tweak roll, pitch, and yaw separately. Then, the ICP algorithm may use the gradient information from the tweaking to alter the estimated transformation until the transformation reaches a local minimum.
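A minimal sketch of this tweak-based refinement, assuming a finite-difference gradient over the six parameters and reusing the dof6_to_matrix() helper from the previous sketch (step sizes and the stopping criterion are illustrative assumptions):

```python
# A sketch of the "tweak" procedure: perturb each of the six parameters,
# form a finite-difference gradient, and step toward a local minimum.
import numpy as np
from scipy.spatial import cKDTree

def alignment_cost(params, source, target_tree):
    T = dof6_to_matrix(*params)                # from the previous sketch
    moved = source @ T[:3, :3].T + T[:3, 3]
    dist, _ = target_tree.query(moved)
    return dist.mean()

def refine_pose(source, target, params=None, lr=0.1, eps=1e-4, iters=100):
    params = np.zeros(6) if params is None else np.asarray(params, float).copy()
    tree = cKDTree(target)
    for _ in range(iters):
        base = alignment_cost(params, source, tree)
        grad = np.zeros(6)
        for i in range(6):                     # tweak x, y, z, then roll/pitch/yaw
            nudged = params.copy(); nudged[i] += eps
            grad[i] = (alignment_cost(nudged, source, tree) - base) / eps
        params = params - lr * grad            # alter the estimated transformation
        if np.linalg.norm(grad) < 1e-6:        # stopped at a local minimum
            break
    return params
```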
The LIDAR alignment evaluation service 160 is also configured to check the alignment against the reference LIDAR sensor, and validate the calibration before the AV drives on the road. The LIDAR data and odometry data are collected by the internal computing system 110 on the vehicle and evaluated on the internal computing system 110 on the vehicle.
Vehicle 102 moves through scene 206, as illustrated by arrow 208. Structural information in scene 206 may be used to collect LIDAR data, such as ground planes, sidewalls, and ceilings, among others. The distance from the ground, sidewalls, and ceilings can be detected by the LIDAR sensors.
The front LIDAR sensors 202 and 204A-B face forward and may get a scan of the ground in front of the vehicle, including the ceiling and sidewalls. The side LIDAR sensors 204C-D are primarily for detecting obstacles at short range. The side LIDAR sensors 204C-D are aimed down toward the ground and may get a very high amount of visibility on the ground next to the vehicle.
According to some examples, method 300 includes performing an initial reference LIDAR scan at step 302. For example, the LIDAR 202 or reference LIDAR illustrated in
According to some examples, method 300 includes performing successive reference LIDAR scans at step 304. For example, the LIDAR 202 or reference LIDAR illustrated in
As an example, one scan or sample is taken from the reference LIDAR sensor 202 on the vehicle at a fixed travel distance, e.g., every one meter of the travel distance. Then, the consecutive samples from the reference LIDAR sensor 202 are used to build a map. For instance, a number of scans, e.g., 25 consecutive scans, are taken from roof left LIDAR sensor 202 to build the map.
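As a non-limiting illustration, distance-triggered sampling of this kind might be sketched as follows; the pose stream and the capture_scan() callback are hypothetical interfaces assumed for illustration:

```python
# A sketch of distance-triggered sampling: one reference scan per fixed
# travel distance (1 m here); capture_scan() is a hypothetical callback.
import numpy as np

def collect_reference_scans(pose_stream, capture_scan, spacing_m=1.0, count=25):
    scans, last_xyz = [], None
    for pose in pose_stream:                  # 4x4 vehicle poses over time
        xyz = pose[:3, 3]
        if last_xyz is None or np.linalg.norm(xyz - last_xyz) >= spacing_m:
            scans.append(capture_scan())      # one sample per meter of travel
            last_xyz = xyz
        if len(scans) >= count:               # e.g., 25 consecutive scans
            break
    return scans
```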
According to some examples, method 300 may include associating odometry data reflecting the segment of movement of the vehicle between the successive reference LIDAR scans, the odometry data for the movement between the LIDAR scans being associated with the respective LIDAR scans occurring after the segment of movement of the vehicle at step 306. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include aligning respective point clouds received as a result of the successive reference LIDAR scans of the reference LIDAR with a point cloud received from the first reference LIDAR scan from the reference LIDAR at step 308. For example, the LIDAR alignment evaluation service 160 illustrated in
In another example of the aligning respective point clouds at step 308, method 300 may include inputting (a) the point cloud received from the first reference LIDAR scan, (b) a first of the respective point clouds received as a result of the first of the successive reference LIDAR scans, and (c) the associated odometry data reflecting the segment of movement of the vehicle between the first LIDAR scan and the first of the successive reference LIDAR scans into an iterative closest point algorithm. For example, the LIDAR 202 illustrated in
Further, method 300 may include determining an odometry error that reflects the amount of translation and rotation that is needed to align the point cloud received from the first reference LIDAR scan and the first of the respective point clouds received as a result of the first of the successive reference LIDAR scans when taking into account the odometry data for the segment of movement of the vehicle between the first LIDAR scan and the first of the successive reference LIDAR scans. For example, the LIDAR alignment evaluation service 160 illustrated in
Further, method 300 may include inputting (iteratively) sequential point clouds from the successive reference LIDAR scans into the iterative closest point algorithm to align the sequential point clouds to the first point cloud from the first reference LIDAR scan. For example, the LIDAR alignment evaluation service 160, illustrated in
According to some examples, method 300 may include iteratively determining an odometry error that reflects the amount of translation and rotation that is needed to align the sequential point clouds to the first point cloud from the first reference LIDAR scan when taking into account the known odometry data for the segment of movement of the vehicle between a previous reference LIDAR scan of the sequential reference LIDAR scans and the next sequential reference LIDAR scan of the sequential reference LIDAR scans at step 310. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include averaging the collective odometry errors to arrive at an odometry correction at step 312. For example, the LIDAR alignment evaluation service 160 illustrated in
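A minimal sketch of this averaging step, assuming the per-scan residual transforms recorded during map building and reusing the matrix_to_dof6()/dof6_to_matrix() helpers from the earlier sketch (simple averaging of the Euler angles assumes the per-scan errors are small):

```python
# A sketch of step 312: average per-scan odometry errors, expressed in six
# degrees of freedom, into a single odometry correction term.
import numpy as np

def average_odometry_correction(residual_transforms):
    errors = np.array([matrix_to_dof6(T) for T in residual_transforms])
    mean_dof6 = errors.mean(axis=0)           # mean x, y, z, roll, pitch, yaw
    return dof6_to_matrix(*mean_dof6)
```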
According to some examples, method 300 may include creating a map from the process of aligning the point clouds received as the result of the scans by the reference LIDAR at step 313. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include performing successive LIDAR scans by the evaluation LIDAR at step 314. The successive scans by the evaluation LIDAR can occur during the performing the successive reference LIDAR scans (the occur at step 304). For example, the LIDAR (e.g. 204A-D) or evaluation LIDAR illustrated in
As an example, one scan or sample is taken from each of the LIDAR sensors 204A-D on the vehicle simultaneously with the reference LIDAR sensor 202 at a fixed travel distance, e.g., every one meter of the travel distance. The samples from the evaluation LIDAR sensors 204A-D are used for the evaluation of the alignments to the reference LIDAR sensor 202.
For instance, each of the 25 scans from the side left LIDAR sensor 204C is aligned to the map to get an estimate of misalignment from the reference LIDAR sensor 202. This evaluation process can be repeated for each of the other LIDAR sensors 204A, 204B, and 204D. Now, 25 individual estimates of the six degrees of freedom transformation can be obtained for each of the other LIDAR sensors.
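As a non-limiting illustration, the per-sensor evaluation described above might be sketched as follows, reusing the icp(), apply_T(), and matrix_to_dof6() helpers from the earlier sketches; the pose conventions are assumptions for illustration:

```python
# A sketch of evaluating one LIDAR against the map: apply raw odometry plus
# the odometry correction to each scan, align to the map with ICP, and keep
# the residual six-DOF estimate for each scan.
import numpy as np

def evaluate_lidar(eval_scans, odom_poses, odom_correction, map_pts):
    estimates = []                            # e.g., 25 six-DOF misalignment estimates
    for scan, odom in zip(eval_scans, odom_poses):
        predicted = apply_T(odom_correction @ odom, scan)
        residual = icp(predicted, map_pts)    # remaining error ~ extrinsic misalignment
        estimates.append(matrix_to_dof6(residual))
    return np.array(estimates).mean(axis=0)   # averaged alignment correction
```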
According to some examples, method 300 may include associating the odometry data reflecting the segment of movement of the vehicle between the successive LIDAR scans, the odometry data for the movement between the LIDAR scans being associated with the respective LIDAR scans occurring after the segment of movement of the vehicle at step 316. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include aligning respective point clouds received as a result of the successive LIDAR scans of the evaluation LIDAR and a portion of the map at step 318. For example, the LIDAR alignment evaluation service 160 illustrated in
In another example of the aligning respective point clouds at step 318, the method may include inputting (a) a first of the respective point clouds received as a result of the first of the successive LIDAR scans from the evaluation LIDAR, (b) the associated odometry data reflecting the segment of movement of the vehicle between the LIDAR scans, and (c) the odometry correction into the iterative closest point algorithm. For example, the evaluation LIDAR (e.g., 204A-D) illustrated in
Method 300 may also include applying the odometry correction to the successive LIDAR scans from the evaluation LIDAR to align to the first LIDAR scan from the evaluation LIDAR.
Further, method 300 may include determining an alignment error that reflects the amount of translation and rotation that is needed to align the respective point cloud received as a result of the successive LIDAR scans with the odometry correction from the evaluation LIDAR and a portion of the map when taking into account the known odometry data for the segment of movement of the vehicle and the odometry correction. For example, the LIDAR alignment evaluation service 160 illustrated in
Further, method 300 may include inputting sequential point clouds from the successive evaluation LIDAR scans with odometry corrections into the iterative closest point algorithm to align the sequential point clouds to the portion of the map. For example, the evaluation LIDAR (e.g., 204A-D) illustrated in
According to some examples, method 300 may include iteratively determining the alignment error that reflects the amount of translation and rotation that is needed to align the sequential point clouds to the map when taking into account the known odometry data for the segment of movement of the vehicle and the odometry correction at step 320. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include averaging the collective alignment errors to arrive at an alignment correction for the evaluation LIDAR, which reflects a degree of misalignment of the evaluation LIDAR to the reference LIDAR on the same vehicle, at step 322. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include repeating the aligning of respective point clouds received as a result of the successive LIDAR scans of the evaluation LIDAR and a portion of the map, and the averaging of the collective alignment errors to arrive at an alignment correction, for each of the evaluation LIDARs on the vehicle at step 324. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 300 may include determining that the alignment error is greater than a threshold, thus indicating that remediation should be performed, at step 326. This step applies when the alignment error exceeds the threshold, i.e., when the alignment tolerance is not met. For example, the LIDAR alignment evaluation service 160 illustrated in
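A minimal sketch of such a threshold check, assuming the per-scan six-degrees-of-freedom estimates from the evaluation step and the half-degree criterion described among the variations later in this section (the share-of-samples cutoff is an illustrative assumption):

```python
# A sketch of the remediation check: flag the evaluation LIDAR when a large
# share of its per-scan estimates exceeds half a degree of rotational error.
import numpy as np

def needs_remediation(estimates_dof6, angle_thresh_deg=0.5, share=0.5):
    """estimates_dof6: (K, 6) array of per-scan x, y, z, roll, pitch, yaw (radians)."""
    rot = np.abs(np.degrees(estimates_dof6[:, 3:]))   # roll, pitch, yaw per scan
    exceed = (rot.max(axis=1) > angle_thresh_deg).mean()
    return exceed > share
```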
According to some examples, method 300 may include applying a processing adjustment to account for the degree of misalignment when ingesting point cloud data from the evaluation LIDAR at step 328. For example, the internal computing system 110 illustrated in
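As a non-limiting illustration, such a processing adjustment might be sketched as compensating incoming evaluation LIDAR points with the inverse of the estimated misalignment (the transform convention is an assumption for illustration):

```python
# A sketch of step 328: undo the estimated misalignment when ingesting
# point cloud data from the evaluation LIDAR.
import numpy as np

def ingest_points(raw_points, misalignment_T):
    correction = np.linalg.inv(misalignment_T)
    return raw_points @ correction[:3, :3].T + correction[:3, 3]
```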
According to some examples, method 300 may include realigning the evaluation LIDAR to account for the degree of misalignment. For example, a technician may realign the evaluation LIDAR to account for the degree of misalignment.
According to some examples, method 300 may be repeated with a different LIDAR designated as the reference LIDAR.
According to some examples, method 400 may include performing scans simultaneously by the reference LIDAR and evaluation LIDARs while the vehicle is in motion at step 410. For example, the reference LIDAR 202 and evaluation LIDARs (e.g., 204A-D) illustrated in
According to some examples, the method may include collecting vehicle odometry while performing the scans by the reference LIDAR at step 420. For example, the sensor system illustrated in
According to some examples, the method may include determining an odometry correction or odometry correction term from a process of aligning point clouds received as a result of the scans by the reference LIDAR at step 430. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, method 400 may include creating a map from the process of aligning the point clouds received as the result of the scans by the reference LIDAR at step 440. For example, the LIDAR alignment evaluation service 160 illustrated in
For each of the other LIDAR sensors, there may be relative misalignments to the map. The relative misalignment can be estimated by taking the individual samples (e.g., 25 samples) from the evaluation sensors and obtaining 25 different six-degrees-of-freedom estimates of the relative misalignment between each other LIDAR sensor and the reference LIDAR sensor on the vehicle.
According to some examples, the method may include determining an alignment error for the evaluation LIDAR from a process of aligning point clouds received as a result of the scan by the evaluation LIDAR to the map at step 460. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, the method may be repeated with a different LIDAR designated as the reference LIDAR at step 470. For example, the LIDAR alignment evaluation service 160 illustrated in
According to some examples, the method may include configuring a computing system on the vehicle to apply a processing adjustment to account for the alignment error when ingesting point cloud data from the evaluation LIDAR having the alignment error at step 480. For example, the internal computing system 110 on the vehicle illustrated in
The following examples are for illustration purposes only. It will be apparent to those skilled in the art that many modifications, both to materials and methods, may be practiced without departing from the scope of the disclosure.
One of the five LIDAR sensors (e.g., roof left sensor 202) is designated as the reference LIDAR sensor. The other four sensors 204A-D are considered evaluation LIDAR sensors, which are aligned with the reference LIDAR sensor 202. The point clouds in
As illustrated in
The odometry corrections may vary.
An average for each roll, pitch, and yaw in
In some aspects, the misalignment may be corrected by a combination of hardware corrections and software corrections.
In some embodiments, computing system 1000 is a distributed system in which the functions described in this disclosure can be distributed within a data center, multiple data centers, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.
Example system 1000 includes at least one processing unit (CPU or processor) 1010 and connection 1005 that couples various system components including system memory 1015, such as read-only memory (ROM) 1020 and random-access memory (RAM) 1025 to processor 1010. Computing system 1000 can include a cache of high-speed memory 1012 connected directly with, in close proximity to, or integrated as part of processor 1010.
Processor 1010 can include any general purpose processor and a hardware service or software service, such as services 1032, 1034, and 1036 stored in storage device 1030, configured to control processor 1010 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 1010 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 1000 includes an input device 1045, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1000 can also include output device 1035, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1000. Computing system 1000 can include communications interface 1040, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1030 can be a non-volatile memory device and can be a hard disk or other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read-only memory (ROM), and/or some combination of these devices.
The storage device 1030 can include software services, servers, services, etc., that, when the code that defines such software is executed by the processor 1010, cause the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1010, connection 1005, output device 1035, etc., to carry out the function.
For clarity of explanation, in some instances, the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program or a collection of programs that carry out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium.
In some embodiments, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The executable computer instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid-state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smartphones, small form factor personal computers, personal digital assistants, and so on. The functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
In some variations, the iterative closest point algorithm is a generalized iterative closest point algorithm.
In some variations, the odometry error is broken into an amount of error in 6-degrees-of-freedom reflecting error along the x, y, and z axes and rotation in pitch, roll, and yaw.
In some variations, the alignment error is broken into an amount of error in 6-degrees-of-freedom reflecting error along the x, y, and z axes and rotation in pitch, roll, and yaw.
In some variations, the vehicle can have multiple evaluation LIDARs, the method comprising repeating the aligning of respective point clouds received as a result of the successive LIDAR scans of the evaluation LIDAR and a portion of the map, and the averaging of the collective alignment errors to arrive at an alignment correction, for each of the evaluation LIDARs on the vehicle.
In some variations, repeating the aligning respective point clouds received as a result of the successive LIDAR scans of the evaluation LIDAR and a portion of the map can include performing the repeated aligning steps in series. The portion of the map includes data from the reference LIDAR.
In some variations, repeating the aligning respective point clouds received as a result of the successive LIDAR scans of the evaluation LIDAR and a portion of the map can include performing the repeated aligning steps concurrently.
In some variations, when each of the multiple evaluation LIDARs has a statistically significant alignment error, the reference LIDAR itself is likely misaligned.
In some variations, the method may include determining that the alignment error is greater than a threshold, thus indicating that remediation should be performed.
In some variations, the alignment error is considered greater than the threshold when a statistically significant number of samples from the evaluation LIDAR show greater than half of a degree of alignment error.
In some variations, the method may include realigning the evaluation LIDAR to account for the degree of misalignment.
In some variations, the method may include applying a processing adjustment to account for the degree of misalignment when ingesting point cloud data from the evaluation LIDAR.
In some variations, the reference LIDAR can be any LIDAR on the vehicle.
In some variations, a different LIDAR can be designated as the reference LIDAR.
In some variations, the point clouds received from the reference LIDAR have limited overlap with the point clouds received from the evaluation LIDAR.
In some variations, the iterative closest point algorithm outputs the odometry error for each LIDAR scan of the reference LIDAR in 6-degrees-of-freedom (x, y, z, pitch, roll, and yaw).
In some variations, the iterative closest point algorithm outputs the alignment error for each LIDAR scan of the evaluation LIDAR in 6-degrees-of-freedom (x, y, z, pitch, roll, and yaw).
In some variations, the odometry error is a reflection of the vehicle's actual movement that is not represented in the odometry data for the movement of the vehicle.
In some variations, the odometry correction is a compensation for the odometry error that is applied to the point clouds received from the scans from the evaluation LIDAR when aligning the point clouds received from the scans of the evaluation LIDAR to the map.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.