This document generally relates to remote sensing and perception systems and methods for autonomous driving.
Light detection and ranging (LiDAR) is a remote sensing and perception technology that can determine ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. One significant issue with LiDAR is the difficulty of reconstructing point cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from a LiDAR system are partially reflected off of rain droplets, which adds noise to the data.
Disclosed are devices, systems, and methods for processing LiDAR data.
In an aspect, a remote sensing method may include obtaining data points that are spatially distributed and have respective intensity values by performing a remote detection and ranging operation, determining a spatial autocorrelation of a set of data points, out of the data points, based on a difference in distances between data points in the set of data points, determining an intensity weight multiplier based on a reference intensity value of the data points and an average intensity value of the data points, determining a quality score of the set of data points by applying the intensity weight multiplier to the spatial autocorrelation of the set of data points, and identifying, based on the quality score, whether the set of data points includes one or more data points that are created by a noise source.
In another aspect, a remote sensing method may include generating a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells, determining an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points, determining a spatial autocorrelation of data points for each grid cell based on a difference in distances between data points in each grid cell, determining a quality score of the data points in each grid cell by applying the intensity weight multiplier to the spatial autocorrelation of the data points in each grid cell, and identifying, based on the quality score, whether the data points in each grid cell include one or more data points that are created by a noise source.
In another aspect, a remote sensing and perception system may include a first data processing unit configured to: generate a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells; determine an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points; and determine a quality score of the data points in each grid cell; and a second processing unit in communication with the first data processing unit and including a plurality of arithmetic-logic units configured to perform computations in parallel, each of the plurality of arithmetic-logic units configured to determine a spatial autocorrelation of data points in each corresponding grid cell based on a difference in distances between data points in each grid cell, wherein the quality score of the data points in each grid cell is determined by: applying the intensity weight multiplier to the spatial autocorrelation of the data points in each grid cell; and identifying, based on the quality score, whether the data points in each grid cell include one or more data points that are created by a noise source.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description, and the claims.
Section headings are used in the present document only for ease of understanding and do not limit scope of the embodiments to the section in which they are described.
To automatically detect whether the LiDAR pointcloud quality is negatively affected by rain, fog, or other noise sources (such as EMI), a pointcloud quality metric based on the LiDAR ranging and intensity characteristics has been developed. The proposed metric relies only on the current LiDAR data frame's geometric features and does not require any perception or training modules, and thus it can be easily deployed to various LiDAR systems. The proposed metric is tested with real road data and is proven capable of distinguishing the ‘low quality’ data frames with noise anomalies from the normal ones for both 905 nm and 1550 nm LiDARs.
In some implementations, the in-vehicle control computer 150 may include at least one processor 170 (which can include at least one microprocessor) that executes processing instructions stored in a non-transitory computer readable medium, such as a memory 175. The in-vehicle control computer 150 may also represent a plurality of computing devices that may serve to control individual components or subsystems of the autonomous vehicle 105 in a distributed fashion. In some embodiments, the memory 175 may contain processing instructions (e.g., program logic) executable by the processor 170 to perform various methods and/or functions of the autonomous vehicle 105 as explained in this patent document. For instance, the processor 170 executes the operations associated with the plurality of vehicle subsystems 140 for ensuring safe operation of the autonomous vehicle by training using simulations of different scenarios.
In some implementations, the memory 175 may contain additional instructions as well, including instructions to transmit data to, receive data from, interact with, or control one or more of a vehicle drive subsystem 142, a vehicle sensor subsystem 144, and a vehicle control subsystem 146.
The autonomous vehicle 105 may include various vehicle subsystems that facilitate the operation of vehicle 105. The vehicle subsystems may include the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and/or the vehicle control subsystem 146. The components or devices of the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and the vehicle control subsystem 146 are shown as examples in
The vehicle sensor subsystem 144 may include a number of sensors configured to sense information about an environment in which the autonomous vehicle 105 is operating or a condition of the autonomous vehicle 105. The vehicle sensor subsystem 144 may include one or more cameras or image capture devices, one or more temperature sensors, an inertial measurement unit (IMU), a localization system such as a Global Positioning System (GPS), a laser range finder/LiDAR unit, a RADAR unit, an ultrasonic sensor, and/or a wireless communication unit (e.g., a cellular communication transceiver). The vehicle sensor subsystem 144 may also include sensors configured to monitor internal systems of the autonomous vehicle 105 (e.g., an O2 monitor, a fuel gauge, an engine oil temperature, etc.). In some implementations, the autonomous vehicle 105 may further include a synchronization unit that synchronizes multiple heterogeneous sensors.
The IMU may include any combination of sensors (e.g., accelerometers and gyroscopes) configured to sense position and orientation changes of the autonomous vehicle 105 based on inertial acceleration. The localization system may be any sensor configured to estimate a geographic location of the autonomous vehicle 105. For this purpose, the localization system may include a receiver/transmitter operable to provide information regarding the position of the autonomous vehicle 105 with respect to the Earth. The RADAR unit may represent a system that utilizes radio signals to sense objects within the environment in which the autonomous vehicle 105 is operating. In some embodiments, in addition to sensing the objects, the RADAR unit may additionally be configured to sense the speed and the heading of the objects proximate to the autonomous vehicle 105. The laser range finder or LiDAR unit may be any sensor configured to sense objects in the environment in which the autonomous vehicle 105 is located using lasers. The LiDAR unit may be a spinning LiDAR unit or a solid-state LiDAR unit. The cameras may include one or more cameras configured to capture a plurality of images of the environment of the autonomous vehicle 105. The cameras may be still image cameras or motion video cameras.
The vehicle control subsystem 146 may be configured to control operation of the autonomous vehicle 105 and its components. Accordingly, the vehicle control subsystem 146 may include various elements such as a throttle and gear, a brake unit, a navigation unit, a steering system and/or an autonomous control unit. The throttle may be configured to control, for instance, the operating speed of the engine and, in turn, control the speed of the autonomous vehicle 105. The gear may be configured to control the gear selection of the transmission. The brake unit can include any combination of mechanisms configured to decelerate the autonomous vehicle 105. The brake unit can use friction to slow the wheels in a standard manner. The brake unit may include an anti-lock brake system (ABS) that can prevent the brakes from locking up when the brakes are applied. The navigation unit may be any system configured to determine a driving path or route for the autonomous vehicle 105. The navigation unit may additionally be configured to update the driving path dynamically while the autonomous vehicle 105 is in operation. In some embodiments, the navigation unit may be configured to incorporate data from the localization system and one or more predetermined maps so as to determine the driving path for the autonomous vehicle 105. The steering system may represent any combination of mechanisms that may be operable to adjust the heading of vehicle 105 in an autonomous mode or in a driver-controlled mode.
The traction control system (TCS) may represent a control system configured to prevent the autonomous vehicle 105 from swerving or losing control while on the road. For example, the TCS may obtain signals from the IMU and the engine torque value to determine whether it should intervene and send instructions to one or more brakes on the autonomous vehicle 105 to mitigate swerving of the autonomous vehicle 105. The TCS is an active vehicle safety feature designed to help vehicles make effective use of the traction available on the road, for example, when accelerating on low-friction road surfaces. When a vehicle without TCS attempts to accelerate on a slippery surface like ice, snow, or loose gravel, the wheels can slip and cause a dangerous driving situation. The TCS may also be referred to as an electronic stability control (ESC) system.
In some embodiments of the disclosed technology, a remote sensing and perception system may include a LiDAR 145 and a remote sensing and perception system 180.
Certain weather conditions, such as rain or fog, and other factors, such as electromagnetic interference (EMI) or hardware failure, can cause the LiDAR to generate pointclouds with a large amount of false positive points. Such a degradation or failure in the LiDAR perception performance may lead to a critical system failure, or the perception system may be deemed to be outside the operational design domain (ODD). In addition, to achieve a higher level of autonomy (e.g., L3+), the automated driving system (ADS) is required to determine whether the system and its components (e.g., LiDARs) are operating within the ODD and/or in a healthy state. Therefore, in some implementations, the system keeps monitoring the LiDAR performance and determines whether an incoming LiDAR pointcloud is in a usable state.
In some implementations, a remote sensing and perception system mainly focuses on the performance degradation of the LiDAR in certain weather conditions such as rain or fog. Some quantification methods can characterize LiDAR performance degradation based on, e.g., signal attenuation, visibility range, point density, and target reflectance in rain or fog, and such quantification methods may be verified by simulations or controlled environment tests. In some implementations, statistical learning approaches and anomaly detection methods can be used to quantify the performance degradation in adverse weather conditions such as rain or fog. However, a LiDAR performance degradation and a critical perception system failure are not necessarily related to each other, even though they can occur simultaneously. In one example, even when a LiDAR has a reduced visibility range in rain, the perception system may remain functional. In another example, even if the LiDAR is operating at its full performance on a sunny day, the LiDAR may generate a large amount of false positive points due to a hardware failure and may cause a perception system failure. In some implementations, a deep-learning-based approach may be used to classify and detect LiDAR pointcloud anomalies. However, the deep-learning-based approach requires a large amount of annotated LiDAR data frames to train the software, and the data collection, annotation, and training pipeline must be repeated for different LiDAR properties, such as spinning vs. solid state, 905 nm vs. 1550 nm, and a change to the mounting locations. In addition, the real-time computational cost is high and may not be desirable given the limited onboard computational resources.
The disclosed technology can address the issues discussed above by using a new performance metric to detect an anomaly in a LiDAR pointcloud that is caused by rain, fog, or other noise sources such as EMI or hardware failure. The performance metric is focused on the characteristics of the LiDAR pointcloud impacted by the above noise sources, whose data points are sparsely and randomly distributed in space and have abnormal intensities (e.g., reflectance). Instead of, or in addition to, the above-discussed metrics such as visibility and point density degradation, the disclosed technology can be implemented in some embodiments to consider (1) the abnormal range and intensity distribution of the LiDAR pointcloud, and/or (2) a high false positive rate, which may lead to a critical perception system failure. The metric based on some embodiments of the disclosed technology only requires basic LiDAR performance statistics, and it does not need a priori rain data or training. The metric based on some embodiments of the disclosed technology can be tested with real road data collected from various LiDAR types and wavelengths with a combination of one or more of clear weather, rain, fog, EMI, or hardware component failure. The metric based on some embodiments of the disclosed technology can distinguish low-quality data frames from normal data frames for both 905 nm and 1550 nm LiDARs.
LiDAR pointcloud impacted by weather conditions or hardware component failures typically have the following two characteristics:
(a) Randomly and sparsely distributed detections in the 3D space (e.g., detected points or small point clusters that are randomly and sparsely distributed in the scene). EMI and hardware failure may affect the LiDAR's signal processing module and generate random and sparse false positives. In weather conditions such as rain or fog, this is mainly caused by reflections from water droplets, reflections of scattered laser signals, and reduced pointcloud density due to signal attenuation.
(b) Abnormal intensity values. In weather conditions such as rain or fog, the intensity values are typically lower than normal due to signal attenuation. EMI and hardware failure may lead to either low or excessively high intensity values. In some implementations, the term “intensity” that is used in relation to a remote detection and ranging operation such as LiDAR can indicate a reflectivity (e.g., the ability of a surface to reflect the laser pulse that LiDAR sends out) or a strength of a receiving signal of LiDAR. In some implementations, the intensity refers to the strength of signals or light rays received by a ranging sensor during its ranging operation, e.g., the number of transmitted signals or light rays (e.g., laser pulses) that are reflected by a target object. The intensity is dependent on a reflectivity of a surface of the target object, a range or distance between the sensor and the target object, and environment conditions.
In some embodiments of the disclosed technology, a remote sensing method includes a performance metric formulation with two components: a measure of the random and sparse spatial distribution of the LiDAR detections, and a determination of abnormal intensity values.
In some embodiments of the disclosed technology, the remote sensing method includes a spatial autocorrelation as a measure of a spatial distribution of the LiDAR detections. The underlying idea is that if a segment of LiDAR pointcloud data is generated by a laser that detects an actual object, the distance values in the data segment tend to be clustered. On the other hand, if a LiDAR data segment contains an excessive number of noise points, the distance values in the data segment are more likely dispersed. Referring to
The spatial autocorrelation of a set of LiDAR points is defined as follows. Given a set of LiDAR points
P = {p_i = (r_i, θ_i, ϕ_i, γ_i) | i = 1, . . . , N},   (Eq. 1)
where r_i, θ_i, ϕ_i and γ_i represent the distance, azimuth, elevation, and intensity of the i-th LiDAR point, respectively,
the spatial autocorrelation of the distance values is defined as
I = (N/W) · [Σ_i Σ_j w_ij (r_i − r̄)(r_j − r̄)] / [Σ_i (r_i − r̄)²],   (Eq. 2)
where
r̄ = (1/N) Σ_i r_i
is the average distance of all distance values in the data segment.
In Eq. 2, w_ij is a pre-defined weight value. For example, w_ij can be set as the inverse angular distance between points p_i and p_j for i ≠ j and 0 for i = j:
w_ij = 1/Δ_ij for i ≠ j, and w_ij = 0 for i = j,   (Eq. 3)
where Δ_ij is the angular distance between points p_i and p_j.
In another example, w_ij can be set as 1 for i ≠ j and 0 for i = j:
w_ij = 1 for i ≠ j, and w_ij = 0 for i = j.   (Eq. 4)
In some implementations, the sum of all weights can be expressed as:
W = Σ_i Σ_j w_ij.
The spatial autocorrelation is valued between [−1, 1], where a high value means that the data points are clustered, and a low value means that the data points are dispersed. It should be noted that, by the definition above, an isolated point is also considered perfectly dispersed, since it is most likely to be treated as a noise point in perception algorithms.
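To make the formulation above concrete, the following is a minimal sketch in Python/NumPy of the weighted spatial autocorrelation of Eq. 2, using the inverse-angular-distance weights of Eq. 3. The function and variable names are illustrative and not part of the disclosed system.

```python
import numpy as np

def spatial_autocorrelation(r, theta, phi):
    """Weighted spatial autocorrelation of distance values (Eq. 2), with
    w_ij set to the inverse angular distance between points i and j for
    i != j and 0 for i = j (Eq. 3)."""
    r = np.asarray(r, dtype=float)
    theta = np.asarray(theta, dtype=float)
    phi = np.asarray(phi, dtype=float)
    n = r.size
    if n <= 1:
        # An isolated point is treated as perfectly dispersed (score -1),
        # since it is most likely a noise point.
        return -1.0

    # Pairwise angular separation between beam directions.
    ang = np.hypot(theta[:, None] - theta[None, :],
                   phi[:, None] - phi[None, :])
    with np.errstate(divide="ignore"):
        w = np.where(ang > 0.0, 1.0 / ang, 0.0)  # zero weight on the diagonal

    d = r - r.mean()                 # deviations from the average distance
    denom = np.sum(d * d)
    if denom == 0.0:
        return 1.0                   # all ranges identical: fully clustered
    num = d @ w @ d                  # sum_i sum_j w_ij (r_i - r_bar)(r_j - r_bar)
    return float(n / w.sum() * num / denom)
```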
The main difference between the spatial autocorrelation and the statistical variance is that the statistical variance only considers the absolute distance between each individual point and the mean, while the spatial autocorrelation considers the difference between neighboring points. For example, consider the two point distributions shown below. Both case 1 and case 2 have the same range variance, yet the spatial autocorrelation of case 2 is negative while the spatial autocorrelation of case 1 is positive, indicating that the pointcloud in case 2 is more dispersed. In some implementations, multiple vehicles/objects in the LiDAR field of view can typically generate a pointcloud distribution similar to case 1, and noise/false positives may result in a pointcloud distribution close to case 2. In addition, an isolated point has minimum variance by definition, while it has the lowest spatial autocorrelation score following the formulation above. Therefore, spatial autocorrelation is a more suitable point range distribution measure for this application.
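As a purely hypothetical numerical illustration of this point (the values below are invented and are not test data from this document), two range profiles with identical variance score very differently under the sketch above:

```python
import numpy as np

theta = np.linspace(0.0, 0.9, 10)   # ten beams at evenly spaced azimuths
phi = np.zeros(10)                  # same elevation for all beams

# Case 1: ranges form two spatially contiguous clusters (e.g., two objects).
case1 = np.array([5.0] * 5 + [15.0] * 5)
# Case 2: the same range values interleaved, as scattered noise would appear.
case2 = np.array([5.0, 15.0] * 5)

print(np.var(case1) == np.var(case2))              # True: equal variance
print(spatial_autocorrelation(case1, theta, phi))  # positive: clustered
print(spatial_autocorrelation(case2, theta, phi))  # negative: dispersed
```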
Some LiDARs may occasionally generate clustered false positive detections in heavy rain or dense fog, which can be hard to capture by spatial autocorrelation.
On the other hand, as shown in the pointcloud examples above, the perception system based on some embodiments of the disclosed technology may capture LiDAR data frames having low data quality because such frames tend to have abnormal intensity distributions. In most cases, the LiDAR data may have intensities that are lower than normal, and in some other cases, the LiDAR data may have extremely high or oversaturated intensities. Therefore, the disclosed technology can be implemented in some embodiments to take the intensity into consideration, in addition to the spatial autocorrelation, by applying an intensity weight multiplier to the spatial autocorrelation (e.g., by multiplying the spatial autocorrelation by the intensity weight multiplier). The intensity weight multiplier can be formulated from any intensity statistical measure such as the mean, the standard deviation, or even higher-order statistical moments. In some embodiments discussed in this patent document, the formulation of the intensity multiplier is based on the intensity mean.
Let γ_ref be a reference intensity value associated with particular LiDAR models. The reference value indicates a normal LiDAR intensity during a normal operation (e.g., clear weather, no hardware issues). The reference is user defined and can be obtained through a statistical analysis of LiDAR data, since the LiDAR intensity during normal operation is typically consistent with small fluctuations. Let γ̄ = (1/N) Σ_i γ_i be the average intensity of the data segment. The intensity weight multiplier can then be defined as:
K_γ = max(1, k · γ_ref / γ̄),   (Eq. 5)
where k is a constant scale factor. The above definition of K_γ addresses low intensity values since they are most common in rain or fog conditions. A low average intensity leads to a high weight multiplier. A high average intensity does not contribute to the weight multiplier, because a high average intensity can also be a result of retro-reflective targets occurring at a close range, which is irrelevant to the LiDAR data quality and is very commonly seen as vehicles pass road signs from time to time.
In some embodiments of the disclosed technology, the LiDAR data quality metric is formulated as the product of the intensity weight multiplier and the spatial autocorrelation, i.e., Q = K_γ · I. In one example, the statistics of the LiDAR quality metric may be obtained from test data and, in view of the system performance and product requirements, used to determine a threshold value for deciding whether LiDAR data is “low-quality.”
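A corresponding sketch of the intensity weight multiplier of Eq. 5 and the combined quality metric follows, reusing the autocorrelation function above; the guard for a zero average intensity is an added assumption for robustness rather than part of the disclosed formulation:

```python
import numpy as np

def intensity_weight(gamma, gamma_ref, k=1.0):
    """Intensity weight multiplier K_gamma of Eq. 5: high when the average
    intensity is abnormally low, floored at 1 otherwise."""
    gamma_bar = float(np.mean(gamma))
    if gamma_bar <= 0.0:
        return float("inf")  # degenerate all-dark frame (added guard)
    return max(1.0, k * gamma_ref / gamma_bar)

def quality_metric(r, theta, phi, gamma, gamma_ref, k=1.0):
    """Quality metric Q = K_gamma * I: the intensity weight multiplier
    applied to the spatial autocorrelation of the distance values."""
    return intensity_weight(gamma, gamma_ref, k) * \
        spatial_autocorrelation(r, theta, phi)
```

With this formulation, a frame whose points are both dispersed (negative autocorrelation) and dim (high multiplier) receives a strongly negative score, which matches the behavior described for the EMI and rain scenarios below.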
In order to obtain the spatial autocorrelation of LiDAR points, the disclosed technology can be implemented in some embodiments to perform the calculation on LiDAR points in a small local area instead of on all LiDAR points across the entire FOV at once, since typical objects and other physical features do not occupy the entire LiDAR FOV and the LiDAR points are bound to appear scattered when viewed from a global FOV perspective.
Furthermore, such a calculation on LiDAR points in a small local area can reduce the computational cost, as the spatial autocorrelation is of O(N²) with N being the number of points considered in the calculation. Therefore, a LiDAR signal processing method based on some embodiments of the disclosed technology may include creating a LiDAR image grid and calculating the spatial autocorrelation grid by grid. For each LiDAR data frame, the LiDAR signal processing method may include projecting the LiDAR data onto an azimuth-elevation image and dividing the image into grid cells. In some embodiments, each projected point contains range and intensity information.
In some embodiments, for each grid cell, the LiDAR signal processing method may include calculating the weighted spatial autocorrelation of all the distance values of the points in that grid cell. In some embodiments, the autocorrelation formula requires at least two points in the grid cell. In some embodiments, the unweighted spatial autocorrelation value of the grid cell can be defined as −1 if there is only one point in the grid cell, i.e., if the isolated point is considered a “perfectly scattered” point. The overall performance score of the LiDAR data frame is the sum of the weighted spatial autocorrelation over all grid cells averaged by the number of grid cells, as sketched below.
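The grid-based procedure can be sketched as follows, again in Python/NumPy and reusing the functions above; the bin-edge parameters and the choice to skip empty grid cells are illustrative assumptions rather than requirements of the disclosed method:

```python
import numpy as np

def frame_quality_score(points, az_edges, el_edges, gamma_ref, k=1.0):
    """Project points onto an azimuth-elevation grid, compute the spatial
    autocorrelation per grid cell, and average the intensity-weighted
    scores over all occupied cells.
    points: (N, 4) array with columns (range, azimuth, elevation, intensity).
    """
    r, theta, phi, gamma = points.T
    k_gamma = intensity_weight(gamma, gamma_ref, k)   # one multiplier per frame

    az_idx = np.digitize(theta, az_edges)             # azimuth cell per point
    el_idx = np.digitize(phi, el_edges)               # elevation cell per point

    cell_scores = []
    for a in np.unique(az_idx):
        for e in np.unique(el_idx):
            in_cell = (az_idx == a) & (el_idx == e)
            if not in_cell.any():
                continue                              # empty cell: skipped here
            cell_scores.append(
                spatial_autocorrelation(r[in_cell], theta[in_cell], phi[in_cell])
            )
    # Overall frame score: weighted autocorrelation averaged over the cells.
    return k_gamma * float(np.mean(cell_scores))
```

Because each grid cell is scored independently, the loop body is the natural unit for the parallel implementation discussed later in this document.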
In the scenarios below, test results of the proposed metric with various LiDARs, using data collected from uncontrolled road environments, are provided. The LiDAR and metric parameters are listed in the table below.
This scenario is a 1 min trip segment where the LiDAR pointcloud generated by LiDAR 1 is impacted by EMI for about 10 s, generating a large amount of low-intensity false positives. Referring to
To demonstrate how the spatial autocorrelation and the intensity weight multiplier each contribute to the overall metric, the orange curve shows the spatial autocorrelation over time without the multiplication of the intensity weight, and the blue curve shows the overall quality metric score. Both curves show significant dips for about 10 s, which corresponds to the duration of the EMI effect. In this scenario, the spatial autocorrelation can clearly capture the false positives, and the intensity weight magnifies the gap between the normal and the low-quality data frames, since most of the noise points have low intensity values. It should be noted that even for normal data frames, the intensity weight scales the spatial autocorrelation, since there are always points with intensity values below the reference intensity.
While the EMI-affected pointcloud leads to a peak in the variance, there are other peaks when the pointcloud is normal, making the variance score unable to distinguish the pointcloud frames with anomalies.
A 1 hr trip segment is analyzed for the rain performance of LiDAR 1, where the weather is clear for the first half hour, followed by heavy to moderate rain for the next 20 min; the rain stops in the last 10 min, but the road surface remains extremely wet.
The pointcloud density and the overall intensity drop significantly during rain. In addition, as mentioned above, there are clustered false positive points close to the LiDAR whose intensity values are low.
Similarly, we show the spatial autocorrelation over time both with and without the multiplication of the intensity weight.
The spatial autocorrelation itself does not distinguish the rain data frames well from the normal data frames. With the help of the intensity weight multiplier, the rain data frames are well distinguished from the normal data frames. A detailed study of how the quality metric performs on the LiDAR's rain data is presented below.
Referring to
Referring to
This scenario is a 3 min trip segment obtained by LiDAR 2. The LiDAR starts to generate a large amount of false positive points with abnormal intensities close to 2:00 (2 minutes into the trip segment) and recovers after about 30 sec. As shown in
Due to the sparse noise distribution pattern, both the weighted spatial autocorrelation and the unweighted spatial autocorrelation show significantly low scores as the false positive detections start to occur. In some implementations, as the overall LiDAR intensity increases during the failure time, the intensity weight multiplier decreases, which leads to an increase in the quality metric. However, the quality metric remains low enough to distinguish the affected LiDAR frames from normal ones.
In this scenario, the performance of LiDAR 2 over the same 1 hr rain trip segment as in Scenario 2 is investigated. Due to the 905 nm laser used by LiDAR 2, it has much higher false positive rates from water splash and rain droplets compared to LiDAR 1 in Scenario 2.
Due to the sparse distribution pattern of the water splash and rain droplet points, both the weighted spatial autocorrelation and the unweighted spatial autocorrelation have significantly lower scores during the rain. The quality metric score jitters during the last portion of the trip. When checking the corresponding camera data, it is found that the weather clears up during this portion of the trip while the road surface is still wet. As other vehicles occasionally pass by and produce water splash from time to time, the quality metric score captures the resulting noise, which explains the jitter.
This scenario is a 1 hr trip segment obtained using two LiDAR models (LiDAR 1 and LiDAR 2), where the fog is most dense in the first quarter of the trip and gradually clears up throughout the rest of the trip. The LiDAR performance in fog is in general very similar to the LiDAR performance in rain, except that there are no false positive detections from water splash when other vehicles drive over wet road surfaces.
Referring to
As discussed above, the disclosed technology can be implemented in some embodiments to detect LiDAR noise anomalies typically caused by rain, fog, or other factors such as hardware failure, by providing a pointcloud quality metric. The pointcloud quality metric is based on the LiDAR pointcloud ranging and intensity characteristics and does not require any data annotation or learning techniques. The metric based on some embodiments of the disclosed technology has been tested on road data and proven to be effective for different 905 nm and 1550 nm LiDAR systems.
The computational cost of the spatial autocorrelation is high due to the large number of points a LiDAR can produce. Since the spatial autocorrelation of each grid cell is independent of the other grid cells, it can be computed in parallel. As shown in
In some embodiments of the disclosed technology, a remote sensing and perception system 2110 includes a first data processing unit 2112 and a second data processing unit 2114. The remote sensing and perception system 2110 is in communication with a sensor 2120 to obtain data points that are spatially distributed and respectively have intensity values. In some implementations, the sensor 2120 is configured to perform a remote detection and ranging operation.
In some embodiments of the disclosed technology, the first data processing unit 2112 includes a controller, a memory, and an arithmetic-logic unit. In one example, the controller is configured to control the flow of data between the memory and the arithmetic-logic unit, and between the first data processing unit 2112 and the sensor 2120 and between the first data processing unit 2112 and the second data processing unit 2114.
In some embodiments of the disclosed technology, the second processing unit 2114 includes a plurality of arithmetic-logic units Arithmetic-Logic Unit 1, Arithmetic-Logic Unit 2, Arithmetic-Logic Unit 3, . . . , and Arithmetic-Logic Unit N (N is a positive integer) configured to perform computations in parallel.
In some embodiments of the disclosed technology, the first data processing unit 2112 may generate a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells.
In some embodiments of the disclosed technology, the first data processing unit 2112 may determine an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points.
In some embodiments of the disclosed technology, each of the plurality of arithmetic-logic units in the second processing unit 2114 may determine a spatial autocorrelation of data points for each corresponding grid cell based on a difference in distances between neighboring data points in each grid cell.
In some embodiments of the disclosed technology, the first data processing unit 2112 may determine a quality score of the data points for each grid cell by applying the intensity weight multiplier to the spatial autocorrelation of the neighboring data points in each grid cell to determine whether the neighboring data points in each grid cell include one or more abnormal data points that are created by a noise source.
In some implementations, a remote sensing and perception system 2220 may be in communication with a sensor 2210 and may obtain data points that are spatially distributed and respectively have intensity values, by performing a remote detection and ranging operation; determine a spatial autocorrelation of the data points based on a difference in distances between neighboring data points of the data points; determine an intensity weight multiplier based on a reference intensity value of the data points and an average intensity value of the data points; and determine a quality score of the data points by applying the intensity weight multiplier to the spatial autocorrelation of the data points to determine whether the data points include one or more abnormal data points that are created by a noise source.
In some implementations, a remote sensing and perception system 2220 may be in communication with an autonomous driving controller 2230 to provide the quality score to the autonomous driving controller 2230 configured to control an autonomous vehicle by considering, based on the quality score, data points other than the one or more abnormal data points.
In some embodiments of the disclosed technology, a remote sensing and perception method 2300 may include, at 2310, obtaining data points that are spatially distributed and have respective intensity values by performing a remote detection and ranging operation, at 2320, determining a spatial autocorrelation of a set of data points, out of the data points, based on a difference in distances between data points in the set of data points, at 2330, determining an intensity weight multiplier based on a reference intensity value of the data points and an average intensity value of the data points, at 2340, determining a quality score of the set of data points by applying the intensity weight multiplier to the spatial autocorrelation of the set of data points, and at 2350, identifying, based on the quality score, whether the set of data points includes one or more data points that are created by a noise source.
In some embodiments of the disclosed technology, a remote sensing and perception method 2400 may include, at 2410, generating a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells, at 2420, determining an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points, at 2430, determining a spatial autocorrelation of data points for each grid cell based on a difference in distances between data points in each grid cell, at 2440, determining a quality score of the data points in each grid cell by applying the intensity weight multiplier to the spatial autocorrelation of the data points in each grid cell, and at 2450, identifying, based on the quality score, whether the data points in each grid cell include one or more data points that are created by a noise source.
LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Adverse weather conditions such as rain, fog, and dust, as well as occasional LiDAR hardware faults, may cause the LiDAR to output pointclouds with abnormal features such as a high random noise rate and low intensity. To address these issues, the disclosed technology can be implemented in some embodiments to detect whether a LiDAR is generating anomalous pointcloud output by analyzing the pointcloud characteristics. Some embodiments of the disclosed technology were studied with extensive real public road data collected by LiDARs with different scanning mechanisms and laser spectrums, and are proven able to effectively handle various known and unknown anomaly sources. Some embodiments of the disclosed technology rely on pure mathematical analysis and do not require any labeling or training as learning-based methods do; therefore, the proposed method is scalable and can be quickly deployed either online, to improve autonomy safety by monitoring anomalies in the LiDAR data, or offline, to perform in-depth studies of the LiDAR behavior over large amounts of data.
LiDAR (Light Detection and Ranging) sensors have caught growing attention from the automotive and autonomous driving industry thanks to their capability of continuously generating high-definition and accurately-ranged images (pointcloud) of the surroundings, regardless of the ambient illuminance conditions. One particular challenge of using LiDARs for perception in autonomous driving is the performance degradation in adverse weather conditions such as rain, fog, dust, etc., where the LiDAR's laser signal may be scattered and/or attenuated, leading to reduced laser power and signal-noise ratio (SNR), which may cause the pointcloud output to have random noise and lower intensity. Not only can adverse environmental conditions produce pointcloud with a high amount of random noise and abnormal intensity values, but defective LiDAR hardware components or unknown random factors may also lead to anomalous pointcloud output. For example, a LiDAR with defective electromagnetic shielding may produce highly noisy pointcloud output when strong signal interference sources such as cellular towers are nearby. To address these issues, the disclosed technology can be implemented to provide a method to characterize the aforementioned LiDAR pointcloud anomalies, which can benefit the autonomous driving system (ADS) safety as well as the ADS development cycle. In terms of increasing the level of automation and the ADS safety, a higher level ADS (level 3+) needs to detect whether the system is within its operation domain and behave accordingly, according to the Society of Automotive Engineers (SAE). The ADS operation domain is typically bounded by environmental conditions and system component health, and it is essential that ADS sensors such as LiDARs are able to determine their status and data quality. As for the application in ADS development, the data frames with anomalous LiDAR pointcloud are typically associated with edge cases and long-tail scenarios, which require extra attention yet have a relatively low rate of occurrence in the vast amount of data generated by the autonomous driving fleet. Having those cases picked out effectively and efficiently helps to save the time and effort required for ADS development.
While studies on general LiDAR pointcloud anomalies are limited, the topic of LiDAR performance under adverse weather conditions has been studied extensively. Many of the studies focus on the performance degradation of the LiDAR in rain/fog and have developed various quantification methods for aspects such as signal attenuation, visibility range, point density, and target reflectance. In some implementations, statistical learning methods can be used to classify whether a LiDAR is working in adverse weather based on performance degradation metrics. These methods can be verified through simulation or testing in controlled environments, which may not well resemble realistic road conditions. For example, many controlled environments that emulate rain include several static test targets (e.g., vehicles, pedestrians, etc.). Such environments cannot produce the water splashes generated from the rolling wheels of other vehicles on the road, which are typically seen and picked up by the LiDARs in realistic operations. In addition, it should be noted that many of the commonly studied LiDAR performance degradation aspects do not always lead to safety-critical component or system failures. For example, a LiDAR typically has a reduced visibility range in rain, which only reduces the perception system's capability and does not necessarily kill all the perception functions; on the other hand, even if the LiDAR is operating at its full capability on a sunny day, it may generate a large amount of false positive points due to hardware failure, which is likely to be recognized as objects by the perception system and cause the vehicle to perform a hard brake. In some implementations, a deep-learning-based approach can be used to classify and detect LiDAR pointcloud anomalies. However, there are two major drawbacks to applying deep-learning-based approaches in practical R&D and implementation. First, they require a large amount of annotated LiDAR data frames to train the software; moreover, the data collection, annotation, and training pipeline must be repeated for different LiDAR properties, such as spinning vs. solid state, 905 nm vs. 1550 nm, or even a change to the mounting locations, thus lengthening the R&D cycle. Second, the real-time computational cost is high and may not be desirable given the limited onboard computational resources.
The disclosed technology can be implemented in some embodiments to provide a novel quality metric to quantitatively characterize the general noise-related anomalies in LiDAR pointcloud. To capture the spatially scattered nature of LiDAR noise points, a spatial autocorrelation, which is widely used in statistical studies, can be used to quantify how ‘dispersed’ the points are in a frame of LiDAR pointcloud. A factor related to the intensity of the pointcloud is also included in the quality metric to better separate the cases where the LiDAR is in heavy rain or dense fog.
The disclosed technology can be implemented in some embodiments to provide a general quality metric that is able to capture noise-related anomalies in LiDAR pointcloud regardless of the cause of the anomaly. It is particularly useful in identifying new pointcloud issues with unknown causes or very little prior experience, during both early-stage system validation and large-scale operation.
The disclosed technology can be implemented in some embodiments to provide an approach that does not require a priori data collection, labeling and training, thereby reducing the time and resource consumption for practical implementation.
The quality metric implemented based on some embodiments is verified with over 10,000 miles of public road data collected by LiDARs with various laser spectrums, scanning mechanisms, and mounting locations. The results show that it is able to identify pointcloud affected not only by adverse weather conditions, but also by uncommon noise sources such as signal interference, multi-path reflection, etc.
As will be discussed below, the disclosed technology can be implemented in some embodiments to provide LiDAR pointcloud quality metric.
In this section, some typical scenarios and characteristics of anomalous LiDAR pointcloud are discussed, based on which we formulate the pointcloud quality metric. An implementation method utilizing a LiDAR image grid and GPU (graphics processing unit) acceleration is also presented.
LiDAR pointcloud impacted by adverse weather or hardware component failures may exhibit the following typical characteristics: (a) randomly and sparsely distributed detections in the 3D space; and (b) abnormal intensity values.
Examples of typical anomalous LiDAR pointcloud collected during public road testing are shown in
The pointcloud quality metric based on some embodiments includes two factors to address the two major characteristics of anomalous LiDAR pointcloud shown in
The disclosed technology can be implemented in some embodiments to employ the concept of spatial autocorrelation as a measure of the LiDAR points' level of spatial dispersion. In statistics, spatial autocorrelation is used to describe the overall spatial clustering of a group of data by calculating each data point's correlation with other nearby data points. A low spatial autocorrelation means that the group of data is dispersed, while a high spatial autocorrelation means that the data group is clustered. For example, the black and white squares in the checkerboard shown in
The underlying idea of using spatial autocorrelation to characterize the LiDAR pointcloud's spatial dispersion/clustering is that if a segment of LiDAR pointcloud data is generated by lasers detecting an actual object, the distance values in the data segment tend to be clustered since common road objects such as cars and pedestrians typically have large and continuous reflection surfaces. On the other hand, if a LiDAR data segment contains an excessive number of noise points, the distance values in the data segment are more likely dispersed. An illustration of the idea is shown in
The spatial autocorrelation of a set of LiDAR points is defined as in Eq. 1 above, and the spatial autocorrelation of the distance values is defined as in Eq. 2 above. In one example, the weight w_ij describing the correlation of one data point to all other data points in the set is defined as in Eq. 4. Alternatively, w_ij can also be defined based on the inverse angular distance between points i and j, so that the correlation between closer points has a higher weight (see Eq. 3 above).
The spatial autocorrelation is valued between [−1, 1], where a value of −1 indicates that the set of points is fully dispersed in the 3-dimensional physical space and a value of 1 means that the points are well clustered. It should be noted that, by the definition above, a set with one isolated point, i.e., N = 1, is also considered fully dispersed and has a spatial autocorrelation value of −1. In some implementations, this is a reasonable convention since an isolated point is most likely to be treated as a noise point in perception algorithms.
The main difference between the spatial autocorrelation and the statistical variance is that the statistical variance only considers the absolute difference between each individual point and the average; thus, it depicts how the data is distributed in the sample space. The spatial autocorrelation, on the other hand, considers the relation between each individual point and the other points. Sets of data points that have the same statistical variance do not necessarily have the same spatial autocorrelation. As shown by the two pointcloud examples in
LiDARs with specific laser wavelengths may generate clustered instead of scattered noise points in heavy rain or dense fog.
Since this particular type of LiDAR noise is typically clustered, it can be hard to characterize using spatial autocorrelation alone, as will be shown in the test results in Section 3. However, in some implementations, this noise type only occurs when there is a dense layer of laser-absorbing/deflecting matter such as heavy rain, dense fog, or intense smog, and the points almost always have extremely low intensity values since they are generated from partial reflections of the laser pulse passing through the matter. Therefore, in addition to the spatial autocorrelation, low intensity values can also be taken into consideration by applying an intensity weight multiplier to the spatial autocorrelation. The intensity weight multiplier can be formulated from any intensity statistical measure such as the mean, the standard deviation, or any other metric that can distinguish the abnormally low intensity values. The disclosed technology can be implemented in some embodiments to provide one formulation of the intensity multiplier based on the average intensity.
Let γ_ref be a reference intensity value which indicates a nominal LiDAR intensity during normal operation (clear weather, no hardware issues). The reference is a user-defined value which is typically associated with specific LiDAR models from different manufacturers. The reference value can be obtained through statistical analysis of LiDAR data, since the LiDAR intensity during normal operation is typically consistent with small fluctuations. Let γ̄ be the average intensity of the data segment; the intensity weight multiplier K_γ can then be defined as in Eq. 5 above, where a low average intensity leads to a high weight multiplier. A high average intensity, on the other hand, can also be a result of retro-reflective targets (e.g., road signs) occurring at a close range.
These cases with high average intensities are irrelevant to the LiDAR data quality and are very commonly seen as vehicles can pass road signs from time to time. Therefore, it is possible to disregard the high average intensity in the definition of the multiplier. Overall, the LiDAR data quality metric is formulated as the product of the intensity weight multiplier and the spatial autocorrelation, i.e., Q = K_γ · I.
It makes practical sense to calculate the spatial autocorrelation of the LiDAR points in a small local area instead of calculating it for all LiDAR points across the entire FOV at once, since typical objects and other physical features do not occupy the entire LiDAR FOV and the LiDAR points are bound to appear scattered when viewed from a global FOV perspective. Furthermore, calculating in a small local area reduces the computational cost, as the spatial autocorrelation is of O(N²) with N being the size of the pointcloud under consideration. Therefore, in implementation, we first create a LiDAR image grid and calculate the spatial autocorrelation grid by grid.
For each LiDAR data frame, we project all the LiDAR points onto an azimuth-elevation image, with each point containing its range and intensity information. The image is then divided into grid cells in both the azimuth and elevation directions. An example of such an image grid is shown in
The overall quality score of the LiDAR data frame is the intensity-weighted spatial autocorrelation summed over all grid cells and averaged by the number of grid cells:
Q = (1/(V·H)) Σ_i Σ_j K_γ · I_ij,   (Eq. 6)
where i and j denote the indices of the grid cells, and V and H denote the number of grid cells in the elevation and azimuth directions, respectively.
In some embodiments, the time complexity to calculate the spatial autocorrelation is O(N²), where N is the number of points a LiDAR produces in one frame. Therefore, the computational cost of the weighted spatial autocorrelation can be high, since modern automotive LiDARs can generate up to 100,000 points in a single frame. Applying the implementation based on the LiDAR pointcloud image grid shown above, the computation can be done in parallel for each grid cell, since the spatial autocorrelation of each grid cell is independent of the other grid cells. As GPUs become a more and more viable resource on automotive platforms, in this section we propose a GPU-accelerated parallel computation implementation of the weighted spatial autocorrelation, sketched below.
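One possible realization of this parallel scheme is a PyTorch sketch, assuming the per-cell point lists have been padded to a common length M and marked with a boolean mask; the tensor layout and padding scheme are illustrative assumptions, not the disclosed implementation:

```python
import torch

def batched_autocorrelation(r, theta, phi, mask):
    """Per-cell spatial autocorrelation computed for all cells at once.
    r, theta, phi: (C, M) tensors of padded per-cell values; mask: (C, M)
    bool tensor, True for real points. Returns a (C,) tensor; cells with
    fewer than two points score -1 (treated as perfectly dispersed)."""
    m = mask.to(r.dtype)
    n = m.sum(dim=1)                                    # points per cell
    r_bar = (r * m).sum(dim=1) / n.clamp(min=1.0)
    d = (r - r_bar[:, None]) * m                        # centered, padding -> 0

    # Pairwise inverse angular distance weights within each cell (Eq. 3).
    ang = torch.sqrt((theta[:, :, None] - theta[:, None, :]) ** 2 +
                     (phi[:, :, None] - phi[:, None, :]) ** 2)
    w = torch.where(ang > 0, 1.0 / ang, torch.zeros_like(ang))
    w = w * m[:, :, None] * m[:, None, :]               # drop padded pairs

    num = torch.einsum("ci,cij,cj->c", d, w, d)         # sum_ij w_ij d_i d_j
    denom = (d * d).sum(dim=1)
    # Cells whose ranges are all identical score 0 here; a production
    # version may special-case them as fully clustered.
    score = n * num / (w.sum(dim=(1, 2)) * denom).clamp(min=1e-12)
    return torch.where(n >= 2, score, torch.full_like(score, -1.0))
```

Because each grid cell reads only its own padded slice, the computation parallelizes over the cell dimension C, which is exactly the independence property noted above.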
Test data are collected with two different LiDAR models, which have different specifications in almost all aspects, from the scanning mechanism to the laser spectrum. Table 2 lists some of the key parameters of the two LiDAR models.
Several Navistar International LT625 trucks equipped with both LiDAR models are used for data collection on public roads. Each truck is also equipped with multiple cameras oriented in various directions. The cameras are synchronized with the LiDARs, and the camera images are recorded in addition to the LiDAR data as reference. We have accumulated a total of over 230 unit-hours and 10,000 unit-miles of road data with a combination of conditions covering different aspects, including various times of day such as daytime, nighttime, dusk, and dawn, various weather conditions such as clear day, rainy, and foggy, and various surroundings such as highway, local road, test track, and parking lot. Both LiDARs output pointcloud at a 10 Hz rate, leading to a total amount of over 828 k frames of pointcloud data. We calculate the pointcloud quality metric once every second, i.e., once every 10 frames. Since the scenarios that produce noisy or anomalous LiDAR pointcloud, such as rain and fog, typically last for some time in a continuous manner, we are still able to capture the anomalous LiDAR pointcloud without losing much information while reducing the effort to go through the test dataset. A summary of the dataset is given in Table 3.
In the examples below, a few typical scenarios are selected and analyzed in detail to discuss the performance of the method based on some embodiments, while providing an overview of the method's performance over the entire test data set.
In this scenario, one unit of LiDAR 2 with defective EMI shielding passes a cellular signal tower, generating a large amount of low-intensity noise points. As shown in
For comparison,
In this scenario, we investigate a trip segment where both LiDAR models are exposed to heavy rain.
For the LiDAR data frames with low quality metric score outputs, we define a true positive result when there are notable noise/anomalous points in the pointcloud, and a false positive result when no notable noise/anomalous points are found. In this section, we pick the true and false positive cases by finding pointcloud frames whose quality metric score is less than −0.4. It should be noted that this quality metric score threshold is merely a bar to filter out the frames of interest from the large amount of test data, and is not meant to be a threshold for a real application.
Specifically,
The disclosed technology can be implemented in some embodiments to provide a novel approach to characterize the noise and anomalies in the LiDAR pointcloud, which can be caused by adverse environment conditions such as rain, fog, and dust, or by LiDAR internal component failures. To capture the anomalous pointcloud frames, the disclosed technology can be implemented in some embodiments to provide a quality metric score based only on the LiDAR pointcloud characteristics, e.g., the spatial distribution of the points and the intensity values, which does not require any data annotation or training. The method implemented based on some embodiments has been verified with numerous test data collected from public roads with various LiDAR physical modalities, and the results prove that the proposed quality metric score can effectively capture anomalous LiDAR pointcloud caused by a variety of sources. In addition, the disclosed technology can be implemented in some embodiments to monitor the operation condition of an autonomous driving system in real time. Furthermore, the disclosed technology can be implemented in some embodiments to efficiently select the data collected in rain/fog from an enormous amount of test data for further analysis.
Therefore, various implementations of features of the disclosed technology can be made based on the above disclosure, including the examples listed below.
Example 1. A remote sensing method, comprising: obtaining data points that are spatially distributed and have respective intensity values by performing a remote detection and ranging operation; determining a spatial autocorrelation of a set of data points, out of the data points, based on a difference in distances between data points in the set of data points; determining an intensity weight multiplier based on a reference intensity value of the data points and an average intensity value of the data points; determining a quality score of the set of data points by applying the intensity weight multiplier to the spatial autocorrelation of the set of data points; and identifying, based on the quality score, whether the set of data points includes one or more data points that are created by a noise source. In some implementations, the remote detection and ranging operation may include any operations associated with LiDAR. In some implementations, the intensity value may include the reflectivity (e.g., the ability of a surface to reflect the laser pulse that LiDAR sends out) or the strength of a receiving signal of LiDAR. Accordingly, intensity values of data points can indicate the reflectivity of objects corresponding to the data points or the strength of a receiving signal of LiDAR from the objects corresponding to the data points. In some implementations, the intensity refers to the strength of signals or light rays received by a ranging sensor during its ranging operation, e.g., the number of transmitted signals or light rays (e.g., laser pulses) that are reflected by a target object. The intensity is dependent on a reflectivity of a surface of the target object, a range or distance between the sensor and the target object, and environment conditions.
Example 2. The method of example 1, wherein the reference intensity value of data points is determined based on an intensity during a normal operation of the remote detection and ranging operation.
Example 3. The method of example 1, wherein the one or more data points created by the noise source include at least one of: randomly and sparsely distributed data points; or data points with intensity values lower than an average intensity by a predetermined value.
Example 4. The method of example 1, wherein it is determined that the data points include one or more data points created by the noise source, upon a determination that the quality score of the set of data points is lower than a reference quality score.
Example 5. The method of example 4, wherein the reference quality score is statistically determined.
Example 6. The method of example 1, wherein the applying the intensity weight multiplier to the spatial autocorrelation of the data points includes multiplying the spatial autocorrelation by the intensity weight multiplier.
Example 7. The method of example 1, wherein the intensity weight multiplier is a mean intensity or a standard deviation of the data points.
Example 8. The method of example 1, further comprising: generating a remote sensing image grid by dividing a spatial distribution of the data points into a plurality of grid cells; determining the spatial autocorrelation for each grid cell; and determining the quality score for each grid cell.
Example 9. The method of example 8, wherein the generating the remote sensing image grid includes projecting the data points onto an azimuth-elevation image and dividing the azimuth-elevation image by a grid size.
Example 10. The method of example 8, wherein the determining the spatial autocorrelation for each grid cell includes calculating a weighted spatial autocorrelation of all distance values of the data points in each grid cell.
Example 11. The method of example 1, wherein the remote detection and ranging operation includes a light detection and ranging (LiDAR) operation.
Example 12. The method of example 1, further comprising: providing the quality score to an autonomous driving controller to control an autonomous vehicle by considering, based on the quality score, data points other than the one or more data points that are created by the noise source.
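One simple way to act on Example 12 is to suppress points in low-scoring cells before handing the cloud to the controller. The helper below reuses to_azimuth_elevation from the gridding sketch above; the hard per-cell cutoff is a simplifying assumption, since the document only requires that noise-attributed points be set aside based on the quality score.

```python
import numpy as np

def points_for_controller(points, cell_scores, grid_size, reference_score):
    """Keep only points whose grid cell scored at or above the reference,
    so the controller considers data points other than those attributed
    to the noise source (a simplified interpretation)."""
    cells = np.floor(to_azimuth_elevation(points) / grid_size).astype(int)
    keep = np.array([cell_scores.get(tuple(c), -np.inf) >= reference_score
                     for c in cells])
    return points[keep]
```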
Example 13. A remote sensing method, comprising: generating a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells; determining an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points; determining a spatial autocorrelation of data points for each grid cell based on a difference in distances between data points in each grid cell; determining a quality score of the data points in each grid cell by applying the intensity weight multiplier to the spatial autocorrelation of the data points in each grid cell; and identifying, based on the quality score, whether the data points in each grid cell include one or more data points that are created by a noise source.
Example 14. The method of example 13, wherein the spatial distribution of data points is an azimuth-elevation image of the data points.
Example 15. The method of example 13, wherein the reference intensity value of the plurality of data points is determined based on an intensity measured during a normal operation of the remote detection and ranging operation.
Example 16. The method of example 13, wherein the one or more data points created by the noise source include at least one of: randomly and sparsely distributed data points; or data points with intensity values lower than an average intensity by a predetermined value.
Example 17. The method of example 13, wherein it is determined that the data points include one or more data points created by the noise source, upon a determination that a sum of quality scores of the data points in the plurality of grid cells is lower than a reference quality score.
Example 18. The method of example 13, wherein the applying the intensity weight multiplier to the spatial autocorrelation of the data points includes multiplying the spatial autocorrelation by the intensity weight multiplier.
Example 19. The method of example 13, further comprising: providing the quality score to an autonomous driving controller to control an autonomous vehicle by considering, based on the quality score, data points other than the one or more data points created by the noise source.
Example 20. A remote sensing and perception system, comprising: a first data processing unit configured to: generate a remote sensing image grid that includes a plurality of grid cells obtained by performing a remote detection and ranging operation, by dividing a spatial distribution of data points into the plurality of grid cells; determine an intensity weight multiplier based on a reference intensity value of the plurality of data points and an average intensity value of the plurality of data points; and determine a quality score of the data points in each grid cell; and a second processing unit in communication with the first data processing unit and including a plurality of arithmetic-logic units configured to perform computations in parallel, each of the plurality of arithmetic-logic units configured to determine a spatial autocorrelation of data points in each corresponding grid cell based on a difference in distances between data points in each grid cell, wherein the quality score of the data points in each grid cell is determined by: applying the intensity weight multiplier to the spatial autocorrelation of the data points in each grid cell; and identifying, based on the quality score, whether the data points in each grid cell include one or more data points that are created by a noise source.
Example 21. The system of example 20, wherein the spatial distribution of data points is an azimuth-elevation image of the data points.
Example 22. The system of example 20, wherein the reference intensity value of the plurality of data points is determined based on an intensity measured during a normal operation of the remote detection and ranging operation.
Example 23. The system of example 20, wherein the one or more data points created by the noise source include at least one of: randomly and sparsely distributed data points; or data points with intensity values lower than an average intensity by a predetermined value.
Example 24. The system of example 20, wherein it is determined that the data points include one or more data points that are created by the noise source, upon a determination that a sum of quality scores of the data points in the plurality of grid cells is lower than a reference quality score.
Example 25. The system of example 20, wherein the applying the intensity weight multiplier to the spatial autocorrelation of the data points includes multiplying the spatial autocorrelation by the intensity weight multiplier.
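Because each grid cell's spatial autocorrelation depends only on the points in that cell, the per-cell work assigned to the second processing unit of Example 20 parallelizes with no coordination. The sketch below stands in for the claimed arithmetic-logic units with a process pool, reusing spatial_autocorrelation from the sketch after Example 1; the pool is an illustrative software analogue, not the claimed hardware.

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_cell_autocorrelations(cell_point_sets):
    """cell_point_sets: list of (cell_index, points_in_cell) pairs, e.g.
    assembled from the gridding sketch above. Each cell's statistic is
    independent of every other cell's, so a plain parallel map suffices."""
    cells, point_sets = zip(*cell_point_sets)
    with ProcessPoolExecutor() as pool:  # software stand-in for the parallel ALUs
        autocorrs = list(pool.map(spatial_autocorrelation, point_sets))
    return dict(zip(cells, autocorrs))
```

The first data processing unit would then scale each returned autocorrelation by the intensity weight multiplier to obtain the per-cell quality scores, matching the division of labor recited in Example 20.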
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
This patent document claims priority to and benefits of U.S. Provisional Application No. 63/515,947, filed on Jul. 27, 2023. The entire contents of the aforementioned patent application are incorporated by reference as part of the disclosure of this document.