The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2023-050569, filed Mar. 27, 2023, the contents of which application are incorporated herein by reference in their entirety.
The present disclosure relates to a technique for dynamically adjusting parameters of a recognition algorithm of an autonomous driving vehicle.
Documents indicative of the state of the art in the technical field of the present disclosure include JP2021-528628A and JP2021-006797A. For example, JP2021-528628A discloses a recognition system for an autonomous driving vehicle that identifies, based on the outputs of a plurality of types of sensors, a sensor that has failed in recognition and lowers the reliability of the output of that sensor, thereby improving the overall recognition performance.
The recognition algorithm of the autonomous driving vehicle includes at least one parameter. The setting of parameters affects the recognition performance of the recognition algorithm. However, the recognition performance of the recognition algorithm depends on a recognition environment in which the autonomous driving vehicle is placed, and the recognition environment changes depending on sunlight conditions and weather conditions. To accommodate such changes in the recognition environment, the parameters need to be dynamically adjusted.
The present disclosure has been made in view of the above problem. An object of the present disclosure is to enable dynamic adjustment of parameters of a recognition algorithm of an autonomous driving vehicle to an optimal value according to a recognition environment.
The present disclosure provides an apparatus for achieving the above object. The apparatus of the present disclosure is an external environment recognition apparatus for an autonomous driving vehicle. The apparatus of the present disclosure includes an external sensor, at least one processor, and at least one memory communicatively coupled to the at least one processor and storing a plurality of executable instructions. The plurality of instructions is configured to cause the at least one processor to execute the following processes. The first process is to estimate the self-location of the ego-vehicle. The second process is to acquire registration information of a known stationary object associated with the self-location. The third process is to acquire, by the external sensor, sensor information corresponding to the known stationary object whose registration information is acquired by the second process. The fourth process is to adjust, based on a deviation between the registration information and the sensor information on the same stationary object, a value of a specific parameter related to the sensor information among parameters of a recognition algorithm.
The present disclosure also provides a method for achieving the above object. The method of the present disclosure is a method for adjusting parameters of a recognition algorithm of an autonomous driving vehicle by an on-board computer. The method of the present disclosure includes the following steps. The first step is a step of estimating the self-location of the ego-vehicle. The second step is a step of acquiring registration information of a known stationary object associated with the self-location. The third step is a step of acquiring, by an external sensor mounted on the ego-vehicle, sensor information corresponding to the known stationary object whose registration information is acquired in the second step. The fourth step is a step of adjusting, based on a deviation between the registration information and the sensor information on the same stationary object, a value of a specific parameter related to the sensor information among the parameters of the recognition algorithm.
Furthermore, the present disclosure provides a program for achieving the above object. The program of the present disclosure is a program executable by an on-board computer of an autonomous driving vehicle. The program of the present disclosure is configured to cause the on-board computer to execute the first to fourth processes described above. The program of the present disclosure may be recorded in a non-transitory computer-readable storage medium.
According to the technique of the present disclosure, the sensor information obtained by the external sensor can be evaluated with reference to the registration information of the known stationary object. Based on the evaluation, the parameter values of the recognition algorithm can be dynamically adjusted to optimal values corresponding to the recognition environment.
The vehicle 2 is equipped with an autonomous driving system 10. The autonomous driving system 10 is a system that performs three operations necessary for driving a vehicle, that is, recognition, determination, and operation, on behalf of a driver. The external environment recognition apparatus according to the embodiment of the present disclosure is an apparatus constituting a part of the autonomous driving system 10, and is configured to perform recognition among the three operations.
Sensor information obtained by an external sensor mounted on the vehicle 2 is used for recognition by the external environment recognition apparatus. The external sensor includes a LiDAR 12 that scans ahead of the vehicle 2 and a camera 16 that captures images ahead of the vehicle 2. The scanning range 14 of the LiDAR 12 and the imaging range 18 of the camera 16 at least partially overlap. The LiDAR 12 and the camera 16 are connected to the autonomous driving system 10 via an in-vehicle network. Vehicle state sensors, including an IMU and a GPS receiver, are also connected to the in-vehicle network.
The autonomous driving system 10 is a computer including a processor 20 and a memory 22 communicatively coupled to the processor 20. The number of processors 20 and the number of memories 22 constituting the autonomous driving system 10 may be plural. The memory 22 is a computer-readable recording medium. The memory 22 stores a stationary object database (hereinafter, referred to as a stationary object DB) 26. The stationary object DB 26 is a database in which information on stationary objects on the map is registered. The stationary object mentioned here means an object that is constantly stationary, and an object that is temporarily present, such as a parked vehicle, is not included in the stationary object. Examples of such stationary objects include road signs, traffic lights, buildings, utility poles, and posts.
The memory 22 stores at least one program executable by the processor 20. The program comprises a plurality of instructions 24. The instructions 24 stored in the memory 22 include instructions for causing the processor 20 to operate as an external environment recognition apparatus. The processor 20 functions as a recognition unit 202, a self-location estimation unit 204, a registration information acquisition unit 206, a parameter determination unit 208, and a parameter adjustment unit 210 by executing these instructions. The functions of the respective units will be described below.
The recognition unit 202 recognizes a target existing around the vehicle 2 from the sensor information of the external sensor, and outputs information regarding the target as recognition information. The target to be recognized includes a moving object such as a preceding vehicle or a pedestrian and a stationary object 6 whose position is fixed on the map. A recognition algorithm obtained by machine learning is used for the recognition process by the recognition unit 202. The recognition algorithm can also be referred to as a recognition model. By processing the sensor information with the recognition algorithm, recognition information regarding the target included in the sensor information is obtained.
The recognition algorithm has a plurality of adjustable parameters. The optimal values of the parameters under a standard recognition environment are set in advance. However, the recognition environment in which the vehicle 2 is placed changes. Therefore, if the values of the parameters are fixed, the recognition performance of the recognition algorithm may be degraded by a change in the recognition environment. The self-location estimation unit 204, the registration information acquisition unit 206, the parameter determination unit 208, and the parameter adjustment unit 210 included in the processor 20 as the external environment recognition apparatus are functions for dynamically adjusting the parameters in accordance with a change in the recognition environment.
The self-location estimation unit 204 estimates the self-location of the vehicle 2. The self-location of the vehicle 2 can be estimated by using, for example, coordinates of the vehicle 2 acquired by the GPS receiver, an attitude of the vehicle 2 acquired by the IMU, sensor information acquired by the LiDAR 12 or the camera 16, and map information. Any self-location estimation method including a known method can be applied to the self-location estimation unit 204.
The registration information acquisition unit 206 acquires, from the stationary object DB 26, registration information of a known stationary object associated with the self-location.
The registration information includes first type information, which is information of the stationary object 6 itself, and second type information, which is information of the vehicle 2 with respect to the stationary object 6. The first type information includes distance-independent information that does not depend on the distance between the stationary object 6 and the vehicle 2 and distance-dependent information that does. Examples of the distance-independent information include the position, material, color, and volume of the stationary object 6. Examples of the distance-dependent information include, when the external sensor is the LiDAR 12, the number of constituent points of the point cloud, the intensity, the distance accuracy of each point, the gas likelihood of each point, and the number of constituent points of the noise point cloud, and, when the external sensor is the camera 16, pixel values and hues. The second type information is, for example, the position of the vehicle 2 and the position of the external sensor at which the point cloud of the stationary object 6 can be acquired, and the traveling speed of the vehicle 2.
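The registration information could be held in a record structure along the following lines. This is a minimal sketch in Python; all field names, and the idea of keying distance-dependent items by range, are illustrative assumptions, not the actual schema of the stationary object DB 26.

```python
from dataclasses import dataclass, field

@dataclass
class StationaryObjectRecord:
    """Hypothetical record in the stationary object DB 26 (all field names
    are illustrative, not the actual schema)."""
    # First type, distance-independent information of the stationary object itself
    position: tuple                # (x, y, z) position on the map
    material: str
    color: str
    volume: float
    # First type, distance-dependent information (LiDAR example), keyed by range
    num_points: dict = field(default_factory=dict)        # range -> point count
    intensity: dict = field(default_factory=dict)         # range -> intensity
    distance_accuracy: dict = field(default_factory=dict) # range -> per-point accuracy
    gas_likelihood: dict = field(default_factory=dict)    # range -> mean gas likelihood
    num_noise_points: dict = field(default_factory=dict)  # range -> noise point count
    # Second type, information of the vehicle 2 with respect to the object
    observable_poses: list = field(default_factory=list)  # vehicle/sensor poses where visible
    travel_speed: float = 0.0                             # traveling speed when observed
```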
Each item of the registration information is either a theoretically derived value or a statistic calculated from past sensor information. Of the registration information, the distance-independent information is not affected by changes in the recognition environment such as weather, whereas at least part of the distance-dependent information is. For example, when information acquired in rainy weather is compared with information acquired in fine weather, the number of constituent points of the point cloud decreases, the distance accuracy of each point deteriorates, and the number of constituent points of the noise point cloud increases. Depending on the recognition environment, therefore, a deviation occurs between the current sensor information and the registration information.
Each parameter of the recognition algorithm is optimized based on the registration information. Therefore, in a recognition environment in which a deviation occurs between the current sensor information of the external sensor and the registration information, it can be determined that the parameter related to the sensor information in which the deviation occurs is inappropriate. The parameter determination unit 208 acquires the current sensor information of the external sensor corresponding to the stationary object whose registration information is acquired from the stationary object DB 26 by the registration information acquisition unit 206, and makes this determination.
When the parameter determination unit 208 determines that the value of the specific parameter is inappropriate, the parameter adjustment unit 210 adjusts the value of the specific parameter to an appropriate value. The adjustment method varies depending on the type of the specific parameter. The details of the adjustment method by the parameter adjustment unit 210 will be described later with specific examples.
Next, a method of adjusting the parameters of the recognition algorithm by the processor 20 as the external environment recognition apparatus will be described.
First, the flowchart A will be described. In step S11, sensor information is acquired from various sensors connected to the autonomous driving system 10. Next, in step S12, the self-location of the vehicle 2 is estimated using at least a part of the sensor information acquired in step S11 and the map information. Next, in step S13, sensor information of the external sensor corresponding to the stationary object is acquired from the sensor information acquired in step S11.
Next, in step S14, the sensor information acquired in step S13 is associated with the self-location estimated in step S12, and the associated information is registered in the stationary object DB 26. Each time the vehicle 2 passes near a stationary object and the stationary object is detected by the external sensor, information on the stationary object is accumulated in the stationary object DB 26. In step S15, it is determined whether the amount of information accumulated in the stationary object DB 26 is equal to or greater than a predetermined value.
If the amount of information has reached the predetermined value, the procedure proceeds to step S16. In step S16, a statistic of the stationary object is calculated from the information accumulated in the stationary object DB 26. Then, the statistic is registered in the stationary object DB 26 as the registration information of the stationary object. For a stationary object whose amount of information has not reached the predetermined value, the processing from step S11 to step S15 is repeated until the amount of information becomes equal to or greater than the predetermined value.
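Expressed as code, flow A might look like the following sketch. The callables read_sensors, estimate_self_location, and extract_object_info, the db interface, and the min_samples value are hypothetical stand-ins for the processing of steps S11 to S16 described above, not the claimed implementation.

```python
def flowchart_a(read_sensors, estimate_self_location, extract_object_info, db,
                min_samples=100):
    """Sketch of registration flow A (steps S11-S16); every callable and the
    db interface are hypothetical stand-ins."""
    sensor_info = read_sensors()                                  # S11
    pose = estimate_self_location(sensor_info)                    # S12 (also uses map info)
    for obj_id, obs in extract_object_info(sensor_info).items():  # S13
        db.append(obj_id, pose, obs)                              # S14: associate and register
        if db.sample_count(obj_id) >= min_samples:                # S15: predetermined value
            db.register(obj_id, db.compute_statistics(obj_id))    # S16: statistic -> registration info
```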
Next, the flowchart B will be described. In step S21, sensor information is acquired from various sensors connected to the autonomous driving system 10. Next, in step S22, the self-location of the vehicle 2 is estimated using at least a part of the sensor information acquired in step S21 and the map information. In step S23, the registration information of the stationary object corresponding to the self-location estimated in step S22 is acquired from the stationary object DB 26. In step S24, sensor information of the external sensor corresponding to the stationary object whose registration information is acquired in step S23 is extracted from the sensor information acquired in step S21.
In step S25, the registration information of the stationary object acquired from the stationary object DB 26 in step S23 is compared with the sensor information of the external sensor corresponding to the stationary object acquired in step S24. Then, it is determined whether or not there is a deviation between the registration information and the sensor information. When there is no deviation between the registration information and the sensor information, it can be determined that the parameters of the recognition algorithm are appropriate. In this case, the procedure is terminated without adjusting the parameters. On the other hand, when there is a deviation between the registration information and the sensor information, the procedure proceeds to step S26. In step S26, the value of a specific parameter related to the sensor information among the parameters of the recognition algorithm is adjusted.
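Flow B can be sketched in the same style; has_deviation and adjust_parameter stand in for the determination of step S25 and the adjustment of step S26, which are detailed per parameter type in the specific examples below. All names are hypothetical.

```python
def flowchart_b(read_sensors, estimate_self_location, extract_object_info, db,
                has_deviation, adjust_parameter):
    """Sketch of adjustment flow B (steps S21-S26); all names are hypothetical."""
    sensor_info = read_sensors()                                       # S21
    pose = estimate_self_location(sensor_info)                         # S22
    observed = extract_object_info(sensor_info)
    for obj_id, registered in db.lookup(pose):                         # S23: registration info near pose
        current = observed.get(obj_id)                                 # S24: matching sensor info
        if current is not None and has_deviation(registered, current): # S25
            adjust_parameter(registered, current)                      # S26
```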
The criterion and method for determining the presence or absence of the deviation differ depending on the contents of the registration information and sensor information to be compared. A specific example of a method for adjusting the parameters of the recognition algorithm will be described below.
The first specific example is an example of a method of adjusting a parameter of a recognition algorithm for recognizing a target from the point cloud information of the LiDAR 12. According to the recognition algorithm, clustering processing is performed on the point cloud obtained by the LiDAR 12, and each cluster of the point cloud obtained by the clustering processing is recognized as a target. In one example of the clustering process, first, each constituent point of the point cloud is regarded as a cluster consisting of one point, and the inter-cluster distance is calculated by the shortest distance method. Then, clusters having an inter-cluster distance equal to or less than a threshold value are connected as the same cluster, and the inter-cluster distance is calculated again for the connected clusters. Such processing is performed repeatedly and ends when the number of clusters no longer changes, and each finally remaining cluster is output as a target.
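The clustering process described above can be sketched as follows. This is a straightforward single-linkage implementation for illustration only, assuming points are given as 3D coordinates; it is not the claimed implementation.

```python
import numpy as np

def cluster_point_cloud(points, dist_threshold):
    """Treat each point as a one-point cluster, repeatedly connect clusters
    whose shortest inter-point (single-linkage) distance is at or below the
    threshold, and stop when the number of clusters no longer changes."""
    clusters = [[np.asarray(p)] for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(np.linalg.norm(a - b)
                        for a in clusters[i] for b in clusters[j])
                if d <= dist_threshold:
                    clusters[i].extend(clusters.pop(j))  # connect as the same cluster
                    merged = True
                    break
            if merged:
                break
    return clusters  # each remaining cluster is output as a target
```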
In the first specific example, the threshold value of the distance at which clusters are connected as the same cluster is used as a parameter of the recognition algorithm. This distance threshold is determined in advance according to the resolution of the LiDAR 12. In the first specific example, the following values a and b are stored in the stationary object DB 26 as the registration information of the stationary object. These are items of sensor information related to the distance threshold, which is a parameter of the recognition algorithm.
The volume v_db may be measured from the real object. Further, when the sets of x-, y-, and z-coordinates of the constituent points of the point cloud of the stationary object are denoted by X, Y, and Z, and max(X) and min(X) are functions that return the maximum and minimum values of the set X, respectively, the volume v_db can also be calculated by the following Equation (1).
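Equation (1) is not reproduced in this text. From the definitions above, a bounding-box form consistent with the description would be the following reconstruction (an assumption, not the original equation):

```latex
% Plausible reconstruction of Equation (1): axis-aligned bounding-box volume
v_{db} = \bigl(\max(X)-\min(X)\bigr)\bigl(\max(Y)-\min(Y)\bigr)\bigl(\max(Z)-\min(Z)\bigr) \tag{1}
```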
By using the above values registered in the stationary object DB 26, the point cloud density d_db of the stationary object when the distance from the vehicle 2 to the stationary object is r can be calculated by the following Equation (2).
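Equation (2) is likewise not reproduced. Assuming the registered values include the number of constituent points n_db observed at a reference distance r_db, and noting that the number of LiDAR returns from an object falls roughly with the square of the range, one plausible form would be:

```latex
% Plausible reconstruction of Equation (2); n_db and r_db are assumed registered values
d_{db}(r) = \frac{n_{db}}{v_{db}}\left(\frac{r_{db}}{r}\right)^{2} \tag{2}
```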
Here, a case where the output of the LiDAR 12 is degraded by noise such as raindrops will be considered.
In the first specific example, parameter adjustment according to the recognition environment is performed on the distance threshold, which is a parameter of the recognition algorithm, based on a determination using the following Equation (3). In Equation (3), d is the current point cloud density of the stationary object 6 obtained from the output of the LiDAR 12, and Threshold_d is a determination value for determining whether the parameter is appropriate or inappropriate.
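Equation (3) itself is not reproduced. A form consistent with the surrounding description, in which the parameter is judged inappropriate when the registered-to-current density ratio exceeds the determination value, would be:

```latex
% Plausible reconstruction of Equation (3)
\frac{d_{db}}{d} > \mathrm{Threshold}_{d} \tag{3}
```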
When the relationship of Equation (3) is satisfied, it is determined that the value of the parameter is inappropriate for the current recognition environment. In this case, the ratio d_db/d between the point cloud density d_db obtained from the registration information of the stationary object DB 26 and the current point cloud density d of the stationary object 6 is calculated as a coefficient. Then, the distance threshold, which is a parameter of the recognition algorithm, is multiplied by the coefficient d_db/d. As a result of such parameter adjustment, non-detection of the target in the rain can be suppressed.
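The resulting adjustment is then a simple scaling, sketched below with hypothetical names:

```python
def adjust_cluster_threshold(dist_threshold, d_db, d, threshold_d):
    """If the registered-to-current point cloud density ratio exceeds the
    determination value (Equation (3)), multiply the clustering distance
    threshold by that ratio so that sparser, rain-degraded point clouds
    are still connected into the correct clusters."""
    if d_db / d > threshold_d:
        return dist_threshold * (d_db / d)  # coefficient d_db / d
    return dist_threshold
```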
The second specific example is also an example of a method of adjusting a parameter of a recognition algorithm for recognizing a target from the point cloud information of the LiDAR 12. Laser light emitted from the LiDAR 12 may be reflected by water vapor in the air or by exhaust gas from a preceding vehicle, which may cause erroneous detection. Therefore, a filter for removing such noise is provided in the recognition algorithm. In the second specific example, the threshold value of the filter is used as a parameter of the recognition algorithm, and the following values a and b are stored in the stationary object DB 26 as the registration information of the stationary object. These are items of sensor information related to the threshold of the filter, which is a parameter of the recognition algorithm.
According to the recognition algorithm, a filtering process is performed on the point cloud obtained by the LiDAR 12. In one example of the filtering process, when the relationship represented by the following Equation (4) is satisfied for a certain cluster, the cluster is determined to be a gas. In Equation (4), n is the number of constituent points of the point cloud constituting the cluster, v_i is a gas likelihood indicating how likely the i-th constituent point is to be a gas, V is the mean value of the gas likelihoods over the entire cluster, and Threshold_v is a filter threshold for determining whether the target cluster is a gas. The threshold value of the filter, which is a parameter of the recognition algorithm, is adjusted in advance using a travel log.
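Equation (4) is not reproduced here. From the definitions of n, v_i, V, and Threshold_v, a consistent form would be:

```latex
% Plausible reconstruction of Equation (4): mean gas likelihood over the cluster
V = \frac{1}{n}\sum_{i=1}^{n} v_{i} > \mathrm{Threshold}_{v} \tag{4}
```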
Here, a case where the gas likelihoods of the point cloud obtained from the output of the LiDAR 12 deviate from their original values will be considered.
In the second specific example, the difference between the mean value V_db of the gas likelihoods of the constituent points of the point cloud of the stationary object 6 registered in the stationary object DB 26 and the mean value V of the gas likelihoods of the constituent points of the current point cloud of the stationary object 6 obtained from the output of the LiDAR 12 is calculated. Then, the threshold value of the filter of the recognition algorithm is corrected using this difference as a correction value.
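As a sketch, with hypothetical names, the correction could look like this:

```python
def correct_filter_threshold(threshold_v, v_db_mean, v_current_mean):
    """Shift the gas filter threshold by the deviation between the registered
    mean gas likelihood of the stationary object (V_db) and the currently
    observed one (V), so a systematic offset in gas likelihoods does not
    cause clusters to be erroneously removed as gas."""
    correction = v_current_mean - v_db_mean  # difference used as correction value
    return threshold_v + correction
```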
The third specific example is also an example of a method of adjusting a parameter of a recognition algorithm for recognizing a target from the point cloud information of the LiDAR 12. In the recognition algorithm, a future behavior is predicted for each moving object by using a behavior prediction model (the behavior prediction model itself is known at the time of filing of the present application). The prediction result of the behavior prediction model is represented by a probability density distribution of the future position of a reference point of the moving object. In the third specific example, the variance of the probability density distribution is used as a parameter of the recognition algorithm, and the following values a, b, and c are stored in the stationary object DB 26 as the registration information of the stationary object. These are items of sensor information related to the variance of the probability density distribution, which is a parameter of the recognition algorithm.
Here, a case where a point cloud is generated between the moving object and the vehicle 2 due to raindrops will be considered. Such a point cloud is referred to as a noise point cloud. When the noise point cloud is close to the moving object, the clustering process of the recognition algorithm determines that the noise point cloud and the moving object belong to the same cluster, and as a result, the recognized shape of the moving object changes compared with the normal state. The recognition algorithm estimates the velocity of a cluster from its positions in the previous and current frames. Therefore, if the recognized shape changes and the center position shifts, the speed of the moving object is erroneously estimated. Accordingly, when a noise point cloud is generated near the moving object, the reliability of the speed estimation result of the recognition algorithm is reduced.
As described above, in the recognition algorithm, the future position of the moving object is represented by a probability density distribution. Since the probability density distribution represents a region in which the moving object is estimated to be likely to travel, when the reliability of the speed estimation is low, the probability density distribution needs to be widened compared with the normal state in order to avoid contact between the moving object and the vehicle 2. The variance of the probability density distribution is the parameter that adjusts the spread of the distribution.
In the determination described below, d(C_t, C_{t+dt}) is a function for calculating the three-dimensional distance between the cluster center position C_t at time t and the center position C_{t+dt} at time t+dt, that is, the amount of change over time in the cluster center position between the previous and subsequent frames. The speed of the stationary object 6 in a normal state should be 0 km/h, but when the speed is erroneously estimated due to the noise point cloud, the time variation of the cluster center position between the previous and subsequent frames increases. The standard deviation of the points constituting the cluster also becomes larger than in the normal state.
In the third specific example, parameter adjustment according to the recognition environment is performed on the variance of the probability density distribution, which is a parameter of the recognition algorithm, based on a determination using the following Equations (5) and (6). Note that Threshold_dd in Equation (5) and Threshold_s in Equation (6) are determination thresholds for determining whether the parameter is appropriate or inappropriate.
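Equations (5) and (6) are not reproduced. Given the definition of d(C_t, C_{t+dt}) and denoting the standard deviation of the points constituting the cluster by s (an assumed symbol), consistent forms would be:

```latex
% Plausible reconstructions of Equations (5) and (6);
% s is the standard deviation of the cluster's constituent points (assumed notation)
d(C_{t}, C_{t+dt}) > \mathrm{Threshold}_{dd} \tag{5}
s > \mathrm{Threshold}_{s} \tag{6}
```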
When both the relationships of Equations (5) and (6) are satisfied, it is determined that the value of the parameter is inappropriate for the current recognition environment. In that case, when the future position of the target is estimated by the recognition algorithm, a probability density distribution whose variance is widened according to the speed error between the previous and subsequent frames is used. By performing such parameter adjustment under a recognition environment in which a noise point cloud is generated, contact between the moving object and the vehicle 2 can be avoided even when the reliability of the speed estimation is reduced.
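A sketch of this adjustment, with hypothetical names, might be as follows; the proportional widening factor is an illustrative assumption, since the text only states that the variance is widened according to the speed error.

```python
def adjust_prediction_variance(base_variance, center_shift, threshold_dd,
                               cluster_std, threshold_s):
    """If both determinations hold (Equations (5) and (6)), widen the variance
    of the predicted-position probability density in proportion to the speed
    error implied by the frame-to-frame shift of the cluster center."""
    if center_shift > threshold_dd and cluster_std > threshold_s:
        return base_variance * (center_shift / threshold_dd)  # widen the distribution
    return base_variance
```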
The fourth specific example is an example of a method of adjusting a parameter of a recognition algorithm for recognizing a target from the visual information of the camera 16. In the fourth specific example, a correction value for the hue of the HSV image obtained from the camera 16 is used as a parameter of the recognition algorithm, and the following values are stored in the stationary object DB 26 as the registration information of the stationary object. These are items of sensor information related to the hue correction value, which is a parameter of the recognition algorithm.
Methods of detecting an object from the visual information of the camera 16 include template matching, key point matching, and machine-learning implementations of these. Common to all of these detection methods is that detection can be performed with high accuracy when the image given as prior knowledge and the current image are close to each other. Conversely, when, for example, the color balance of the image changes due to an abnormality of the camera 16 and a large deviation arises between the prior-knowledge image and the current image, the object detection performance can be expected to deteriorate.
As a specific process, first, a process of converting the image of the camera 16 at a certain time into the HSV format and adding a predetermined value to the hue of the HSV image is added to the recognition algorithm. Next, the position of the stationary object 6 is acquired from the stationary object DB 26 based on the self-location of the vehicle 2, and the pixels of the portion corresponding to the stationary object 6 are cut out from the image of the camera 16. Then, when the hue of the cut-out image deviates from the hue registered in the stationary object DB 26 by a predetermined amount or more, the difference in hue is calculated, and the correction value set in the recognition algorithm is corrected by this difference. By such processing, a decrease in the object detection performance due to a change in the color balance of the image of the camera 16 can be suppressed.
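A minimal sketch of this hue correction is shown below, assuming OpenCV-style HSV conversion (OpenCV stores hue in channel 0 with a 0-179 range) and a hypothetical bounding box for the cut-out region; the deviation limit is illustrative, not the claimed predetermined value.

```python
import cv2
import numpy as np

HUE_DEVIATION_LIMIT = 10  # illustrative stand-in for the predetermined deviation check

def correct_hue_offset(bgr_image, bbox, registered_hue, hue_correction):
    """Cut out the pixels corresponding to the known stationary object, compare
    their mean hue with the hue registered in the stationary object DB, and
    update the hue correction value of the recognition algorithm by the
    difference when the deviation is large enough."""
    x, y, w, h = bbox                                    # region of the stationary object
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    patch_hue = float(np.mean(hsv[y:y + h, x:x + w, 0]))
    deviation = registered_hue - patch_hue
    if abs(deviation) >= HUE_DEVIATION_LIMIT:
        hue_correction += deviation                      # correct by the hue difference
    return hue_correction
```

In practice, restricting the comparison to a known stationary object keeps the correction from chasing scene-dependent color changes, which is the point of anchoring it to the stationary object DB 26.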