The present invention relates to an external environment recognition device mounted on a vehicle.
In recent years, a driving assistance device and an automatic driving device of a vehicle have been developed. In the driving assistance device or the automatic driving device, it is important to estimate a position of a target.
PTL 1 describes a technique of detecting a target from two points by using a camera or a sensor mounted on a host vehicle, obtaining two error distributions, and comparing standard errors E1 and E2.
The technique described in PTL 1 determines, from a magnitude relationship between the standard errors E1 and E2, whether to select a sampling point of one of the two error distributions or a sampling point of an overlapping region of the two error distributions, and estimates the target by using the selected sampling point.
However, PTL 1 does not consider a case where there is an error between a sensor point group detected by using the camera or the sensor and a map point group, and in such a case it is difficult for the technique of PTL 1 to accurately estimate a relative relationship between a map feature and a target detected by the sensor.
When it is difficult to accurately estimate the relative relationship between the map feature and the target detected by the sensor, it is difficult to perform driving assistance and automatic driving of the vehicle with high accuracy.
An object of the present invention is to provide an external environment recognition device and an external environment recognition method capable of accurately estimating a relative relationship between a map feature and a target detected by a sensor.
In order to achieve the above object, the present invention is configured as follows.
An external environment recognition device includes a self-position estimation unit that estimates a self-position which is a position of a host vehicle on a map stored in a map database based on external environment information acquired by an external environment sensor mounted on the host vehicle, a target recognition unit that recognizes targets around the host vehicle based on the external environment information, a map information acquisition unit that acquires map information, which includes a map point group which is a set of feature points on the map and feature information including information on a position and a type of a feature, a sensor point group acquisition unit that acquires a sensor point group around the targets recognized by the target recognition unit from the target recognition unit, and a point group matching unit that estimates positions of the targets on the map by matching between the sensor point group acquired by the sensor point group acquisition unit and the map point group.
In addition, an external environment recognition method includes estimating a self-position which is a position of a host vehicle on a map stored in a map database based on external environment information acquired by an external environment sensor mounted on the host vehicle, recognizing targets around the host vehicle based on the external environment information, acquiring map information, which includes a map point group which is a set of feature points on the map and feature information including information on a position and a type of a feature, acquiring a sensor point group around the recognized targets, and estimating a position of the target on the map by matching between the acquired sensor point group and the map point group.
According to the present invention, it is possible to provide the external environment recognition device and the external environment recognition method capable of accurately estimating the relative relationship between the map feature and the target detected by the sensor.
In the present invention, a position of a sensor target (a target detected by a sensor) on a map is estimated by extracting a point group around the sensor target and performing point group matching between the map point group and the extracted sensor point group, and a relative relationship between a map feature and the sensor target is thereby accurately estimated.
Embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The self-position estimation unit 3 estimates a self-position, which is a position of a host vehicle 10 on a map, based on external environment information acquired by an external environment sensor 2 mounted on the host vehicle 10.
The target recognition unit 4 recognizes targets around the host vehicle 10 based on external environment information detected by the external environment sensor 2. The external environment sensor 2 is a sensor such as a camera or a radar, and detects external environment information of the host vehicle 10.
The map information acquisition unit 5 acquires map information, which includes a map point group, which is a set of feature points on a map, and feature information including information on a position and a type of a feature. The map information may be stored in a storage unit mounted on the host vehicle 10 or transmitted from an outside.
The sensor point group acquisition unit 6 acquires sensor point groups, which are a plurality of positions around the targets recognized by the target recognition unit 4, from the external environment information detected by the external environment sensor 2.
The point group matching unit 7 estimates positions of the targets on the map recognized by the target recognition unit 4 by matching between the sensor point group acquired by the sensor point group acquisition unit 6 and the map point group acquired by the map information acquisition unit 5. That is, the point group matching unit 7 determines whether or not the point group matching has succeeded, and outputs the positions of the targets on the map estimated by the matching between the sensor point group and the map point group in a case where the point group matching has succeeded. In a case where the point group matching fails, the point group matching unit 7 outputs the positions of the targets on the map calculated by using the self-position and a recognition result of the targets recognized by the target recognition unit 4.
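As an illustrative sketch of the matching performed by the point group matching unit 7, a simple nearest-neighbor alignment can estimate the offset between the sensor point group and the map point group, and the same offset can then place a target position on the map. A translation-only, ICP-style variant is chosen here for brevity; the specification does not fix a particular matching algorithm, and all function names and values below are assumptions for illustration.

```python
import numpy as np

def match_point_groups(sensor_pts, map_pts, iters=20):
    """Estimate a 2-D translation aligning the sensor point group to the
    map point group by iterated nearest-neighbor mean offsets
    (a simplified, translation-only ICP; illustrative only)."""
    offset = np.zeros(2)
    for _ in range(iters):
        shifted = sensor_pts + offset
        # distance from every shifted sensor point to every map point
        d = np.linalg.norm(shifted[:, None, :] - map_pts[None, :, :], axis=2)
        nearest = map_pts[d.argmin(axis=1)]          # nearest map point per sensor point
        offset += (nearest - shifted).mean(axis=0)   # move by the mean residual
    return offset

# toy data: sensor points are the map points displaced by (0.3, -0.2)
map_pts = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [0.0, 2.0]])
sensor_pts = map_pts - np.array([0.3, -0.2])
offset = match_point_groups(sensor_pts, map_pts)
# the recovered offset also corrects a target position onto the map
target_on_map = np.array([5.0, 2.0]) + offset
```

On this toy data the recovered offset is (0.3, -0.2); a real implementation would also estimate rotation and apply the success determination described later in this specification.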
The target selection unit 8 selects a target that satisfies a predetermined condition among targets recognized by target recognition unit 4 based on the self-position of the host vehicle 10 estimated by the self-position estimation unit 3, the recognition result of the targets recognized by target recognition unit 4, and the feature information acquired by the map information acquisition unit 5.
A positional relationship between the host vehicle 10 and a different vehicle 11 and a positional relationship between the host vehicle 10, the different vehicle 11, and a road will be described. With respect to an actual positional relationship (true positional relationship) with a target in an external environment of the host vehicle 10, a positional relationship between a self-generated map by odometry (a method for estimating a current position from a rotation angle of a wheel of the vehicle) and the target in the external environment detected by using the external environment sensor 2 may be inaccurate.
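The odometry mentioned above (estimating a current position from a rotation angle of a wheel of the vehicle) can be sketched as dead reckoning; the function and its parameters are illustrative assumptions, not part of the specification.

```python
import math

def dead_reckon(x, y, heading, wheel_radius, wheel_revs, yaw_rate, dt):
    """Advance a pose estimate from wheel rotation: distance travelled is
    derived from the wheel's rotation angle, and heading is integrated
    from an assumed yaw-rate measurement (illustrative sketch)."""
    distance = 2.0 * math.pi * wheel_radius * wheel_revs  # arc length rolled
    heading += yaw_rate * dt
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# straight driving: two revolutions of a 0.3 m radius wheel, no yaw
x, y, h = dead_reckon(0.0, 0.0, 0.0, 0.3, 2.0, 0.0, 0.1)
```

Errors in the assumed wheel radius or yaw rate accumulate over distance, which is exactly how the scale error and angle error discussed below arise.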
Note that, the different vehicle 11 is also included in the target.
A scale error will be described. In the true position map 12T1, which represents the actual positional relationship, the different vehicle 11 and the retreat region 17 have an accurate relative relationship. On the map generated by odometry, however, the scale error causes the relative relationship between the different vehicle 11 and the retreat region 17 to deviate from the true positional relationship.
When the relative relationship between the different vehicle 11 and the retreat region 17 is inaccurate due to the scale error, it is difficult to perform driving assistance or automatic driving of the host vehicle 10 with high accuracy.
An angle error will be described. In the true position map 12T2, the different vehicle 11 advances in a direction parallel to the dividing line 14. On the map generated by odometry, however, the angle error causes the relative relationship between the different vehicle 11 and the dividing line 14 to deviate from the true positional relationship.
When the relative relationship between the different vehicle 11 and the dividing line 14 is inaccurate due to the angle error, it is difficult to perform driving assistance or automatic driving of the host vehicle 10 with high accuracy, similarly to the scale error.
In the first embodiment of the present invention, the positions of the targets detected by the external environment sensor 2 on the map are estimated by extracting the point groups around the targets detected by the external environment sensor 2 and performing the point group matching between the extracted point groups around the targets and the map point groups acquired from the map information. Thus, the scale error and the angle error are corrected to accurately estimate the relative relationship between the host vehicle 10, the different vehicle 11, and the feature on the map.
A sensor point group 15G, which includes a plurality of points 15, is extracted around the target detected by the external environment sensor 2.
The extraction processing of the plurality of points 15 in the sensor point group 15G is executed by the external environment recognition device 1. That is, the target recognition unit 4 recognizes the targets from the external environment information detected by the external environment sensor 2. In addition, the self-position estimation unit 3 estimates the map information acquired by the map information acquisition unit 5 and the self-position, which is the position of the host vehicle 10 detected by the external environment sensor 2. The target selection unit 8 selects the target from the map information acquired by the map information acquisition unit 5, the self-position estimated by the self-position estimation unit 3, and the targets recognized by the target recognition unit 4. The sensor point group acquisition unit 6 extracts the sensor point group 15G based on the targets recognized by the target recognition unit 4 and the target selected by the target selection unit 8.
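The extraction of the sensor point group 15G around a recognized target can be sketched as a radius filter over the raw sensor points; the radius value and all names are illustrative assumptions.

```python
import numpy as np

def extract_sensor_point_group(scan_points, target_position, radius=5.0):
    """Keep only scan points within `radius` of the recognized target,
    forming the sensor point group used for matching
    (the radius is an illustrative parameter)."""
    d = np.linalg.norm(scan_points - target_position, axis=1)
    return scan_points[d <= radius]

# toy scan: three points near the target at (3, 0), one far away
scan = np.array([[0.0, 0.0], [4.0, 1.0], [20.0, 5.0], [3.0, -2.0]])
group = extract_sensor_point_group(scan, np.array([3.0, 0.0]), radius=5.0)
```

Restricting matching to points near the target keeps the alignment local, so a global scale or angle error of the self-generated map does not dominate the result.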
Subsequently, the point group matching unit 7 performs matching processing between the sensor points 15 and a map point group 16 based on the map information acquired by the map information acquisition unit 5 and the sensor point group 15G extracted by the sensor point group acquisition unit 6.
The matching processing is performed so as to overlap the sensor points 15 and the map points 16, whereby the relative relationship between a different vehicle position 11P3 on the map and the retreat region 17 is made accurate.
For the angle error as well, the matching processing is similarly performed, and the relative relationship between the different vehicle 11 and the dividing line 14 is made accurate.
The determination of whether or not the matching processing has succeeded in the point group matching unit 7 will be described.
In a case where the number of points included in the map is less than or equal to a predetermined threshold value, it is determined that the matching has failed, and the matching processing is not executed in the first place.
In a case where the number of points included in the map exceeds the predetermined threshold value, it is determined that the matching has failed when an average value of distances between the sensor points 15 and the map points 16 corresponding to each other is more than or equal to a predetermined threshold value. In addition, it is also determined that the matching has failed in a case where the number of corresponding points between the sensor points 15 and the map points 16 is less than or equal to a predetermined threshold value.
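The failure conditions above can be sketched as three checks; the threshold values below are illustrative placeholders, since the specification does not state concrete numbers.

```python
def matching_succeeded(n_map_points, n_correspondences, mean_residual,
                       min_map_points=10, min_matches=10, max_mean_dist=0.5):
    """Apply the three failure checks: too few map points (matching not
    executed at all), corresponding points too far apart on average, or
    too few corresponding point pairs. Thresholds are illustrative."""
    if n_map_points <= min_map_points:
        return False            # map too sparse: no matching from the beginning
    if mean_residual >= max_mean_dist:
        return False            # average corresponding-point distance too large
    if n_correspondences <= min_matches:
        return False            # too few corresponding point pairs
    return True

assert matching_succeeded(100, 40, 0.2)
assert not matching_succeeded(5, 40, 0.2)     # map too sparse
assert not matching_succeeded(100, 40, 0.9)   # residual too large
assert not matching_succeeded(100, 4, 0.2)    # too few correspondences
```

On failure, the device falls back to the target position computed from the self-position and the recognition result, as described above.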
In a case where the matching processing has succeeded, the point group matching unit 7 outputs a position and a posture of a sensor target on the map estimated by the matching processing. The host vehicle 10 performs driving assistance and automatic driving processing based on the position and posture of the sensor target output from the point group matching unit 7.
In a case where the matching processing has failed, the point group matching unit 7 can output the position and posture of the sensor target on the map (a provisional position of the target selection unit 8) estimated from the self-position estimation result and the sensing result.
Next, an operation of the target selection unit 8 will be described.
A combination of the sensor target and the map feature whose relative relationship is important is set in advance.
The target selection unit 8 calculates a provisional position 11P4 of a sensor target from a self-position estimation result 10P3 and a sensing result 13R1, and selects a sensor target whose distance from a corresponding map feature (a retreat region in this example) is less than or equal to a predetermined threshold value.
The target selection unit 8 may be configured to be able to change the threshold value of target selection (the distance between the feature and the sensor target) in accordance with a size, a speed, weather, and brightness of the sensor target such that the target can be appropriately selected.
For example, as the size of the sensor target is larger, the threshold value is set to be larger. As the speed of the sensor target is higher, the threshold value is set to be larger (an arrival time of the sensor target at the map feature can be used instead of the distance).
In addition, as the weather is worse, the threshold value can be set to be larger. As the brightness is darker, the threshold value can be set to be larger.
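The threshold adaptation described above can be sketched as a multiplicative rule; the coefficients and the linear form are illustrative assumptions, not values from the specification.

```python
def selection_threshold(base, target_size, target_speed, bad_weather, darkness):
    """Scale the target-selection distance threshold: larger targets,
    higher speeds, worse weather, and darker scenes all enlarge it.
    An arrival time at the map feature could replace the speed term.
    All coefficients are illustrative placeholders."""
    t = base
    t *= 1.0 + 0.1 * target_size      # bigger target -> larger threshold
    t *= 1.0 + 0.05 * target_speed    # faster target -> larger threshold
    t *= 1.0 + 0.5 * bad_weather      # bad_weather in [0, 1]
    t *= 1.0 + 0.5 * darkness         # darkness in [0, 1]
    return t

# a large, fast target yields a larger threshold than a small, slow one
t_large_fast = selection_threshold(10.0, 4.0, 20.0, 0.0, 0.0)
t_small_slow = selection_threshold(10.0, 1.0, 5.0, 0.0, 0.0)
```

A wider threshold under poor sensing conditions keeps relevant targets from being discarded merely because their provisional positions are less certain.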
An external environment recognition method according to the first embodiment of the present invention will be described.
The self-position, which is the position of the host vehicle 10 on the map stored in the map database 9, is estimated based on the external environment information acquired by the external environment sensor 2 mounted on the host vehicle 10; the targets around the host vehicle 10 are recognized based on the external environment information; the map information, which includes the map point group that is the set of feature points on the map and the feature information including the information on the position and type of the feature, is acquired; the sensor point group around the recognized targets is acquired; and the positions of the targets on the map are estimated by matching the acquired sensor point group and the map point group.
As described above, according to the first embodiment of the present invention, since the position of the sensor target on the map is estimated by extracting the point group around the sensor target and performing the point group matching between the map point group and the extracted sensor point group, it is possible to provide the external environment recognition device and the external environment recognition method capable of accurately estimating the relative relationship between the map feature and the target detected by the sensor.
Next, a second embodiment of the present invention will be described.
The speed prediction unit 20 predicts a position and a speed of a different vehicle 11 ahead on the map estimated by the point group matching unit 7; the prediction is used for an adaptive cruise control (ACC) function to appropriately control a speed of the host vehicle 10.
When a speed of the host vehicle 10P3 is controlled based on the speed prediction of the speed prediction unit 20, the adaptive cruise control function can be executed with high accuracy.
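A minimal sketch of how the predicted position and speed of the preceding vehicle could feed an ACC speed command follows; the control law, gains, and time step are illustrative assumptions, not the claimed configuration.

```python
def acc_speed_command(ego_speed, lead_speed, gap,
                      desired_time_gap=2.0, k_gap=0.3, k_speed=0.8, dt=0.1):
    """Simple gap-and-speed ACC law: track the predicted lead-vehicle speed
    while regulating the gap toward desired_time_gap seconds of headway.
    Gains and the time gap are illustrative placeholders."""
    desired_gap = desired_time_gap * ego_speed
    accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - ego_speed)
    return ego_speed + accel * dt  # commanded speed after one control step

# lead vehicle faster and far ahead -> ego speeds up
v_up = acc_speed_command(20.0, 25.0, 60.0)
# lead vehicle slower and close -> ego slows down
v_down = acc_speed_command(20.0, 15.0, 30.0)
```

Because the lead-vehicle position used here comes from point group matching rather than raw sensing, the gap term is less affected by the scale and angle errors of the self-generated map.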
As described above, according to the second embodiment of the present invention, effects similar to the effects of the first embodiment can be obtained, and the adaptive cruise control function can be executed with high accuracy.
Next, a third embodiment of the present invention will be described.
The intention prediction unit 24 is used in an automatic driving device, and predicts an intention of a pedestrian based on a relative relationship between a pedestrian position on the map estimated by a point group matching unit 7 and a crosswalk on the map.
As a result, it is possible to accurately predict the intention of the pedestrian and appropriately plan an action of a host vehicle 10.
Similarly, for a stopped vehicle that is a sensor target and a time-limited parking zone that is a map feature, the intention prediction unit 24 predicts an intention of the stopped vehicle or the like, and the automatic driving device can perform control based on the result. For example, the intention prediction unit 24 predicts whether the stopped vehicle is a parked vehicle or a vehicle that is temporarily stopped at a traffic light or the like.
In a case where the different vehicle 11P5 is positioned on the crosswalk 23, the external environment sensor 2 may not be able to detect the crosswalk 23. In this case, it is necessary to acquire the information of the map database 9 and determine whether or not the position at which the vehicle is positioned corresponds to the crosswalk 23.
The intention prediction unit 24 can accurately predict the intention of the pedestrian 22 based on the relative relationship between the position of the pedestrian 22 on the map estimated by the point group matching unit 7 and the crosswalk 23 on the map.
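The intention prediction from the map-relative relationship between the pedestrian 22 and the crosswalk 23 can be sketched as a distance-and-heading heuristic; the rule, the distance value, and all names are illustrative assumptions.

```python
import math

def predict_crossing_intention(ped_pos, ped_velocity, crosswalk_center,
                               near_dist=3.0):
    """Classify a pedestrian's intention from the map-relative relationship:
    near the crosswalk and moving toward it -> likely to cross.
    The rule and threshold are an illustrative heuristic."""
    dx = crosswalk_center[0] - ped_pos[0]
    dy = crosswalk_center[1] - ped_pos[1]
    dist = math.hypot(dx, dy)
    # positive dot product: velocity points toward the crosswalk
    approaching = (dx * ped_velocity[0] + dy * ped_velocity[1]) > 0.0
    return "crossing" if dist <= near_dist and approaching else "not_crossing"

walking_toward = predict_crossing_intention((0.0, 0.0), (1.0, 0.0), (2.0, 0.0))
walking_away = predict_crossing_intention((0.0, 0.0), (-1.0, 0.0), (2.0, 0.0))
```

The same pattern extends to the stopped-vehicle case above, with the time-limited parking zone in place of the crosswalk.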
As described above, according to the third embodiment of the present invention, it is possible to obtain effects similar to the effects of the first embodiment, and it is possible to appropriately control the host vehicle 10 by automatic driving by predicting the intention of the pedestrian or the like.
Next, a fourth embodiment of the present invention will be described.
In the fourth embodiment, the external environment recognition device 1 includes a self-map generation unit 26 that generates a self-map and stores it in the map database 9. The external environment sensor 2 has a range in which sensing is highly accurate, and the self-map is generated from targets detected within this range.
Since a retreat region 17A is within the highly accurate range as seen from the host vehicle 10N at a subsequent moment, the self-map generation unit 26 generates the self-map from an odometry estimated position 28 and the target information from the target recognition unit 4, and stores the self-map in the map database 9.
In a state where a time has further elapsed, the self-map generation unit 26 similarly continues to generate and store the self-map.
In this manner, a relatively highly accurate map is generated and stored in the map database 9.
In the example described above, although the relatively highly accurate map is generated and stored in the map database 9, it is also possible to store a point group only around a map feature whose relative positional relationship with the sensor target is important. For example, it is also possible to store the point group only around the retreat region. By doing this, a map capacity stored in the map database 9 can be reduced.
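Storing the point group only around important map features can be sketched as radius-based pruning of the stored map point group; the radius and all names are illustrative assumptions.

```python
import numpy as np

def prune_map_point_group(points, important_features, keep_radius=10.0):
    """Keep only points near features whose relative relationship with
    sensor targets matters (e.g. a retreat region), reducing the map
    capacity stored in the database. The radius is illustrative."""
    keep = np.zeros(len(points), dtype=bool)
    for feature in important_features:
        keep |= np.linalg.norm(points - feature, axis=1) <= keep_radius
    return points[keep]

# four stored points; only those within 10 m of the feature survive
points = np.array([[0.0, 0.0], [5.0, 0.0], [100.0, 0.0], [200.0, 50.0]])
pruned = prune_map_point_group(points, [np.array([0.0, 0.0])], keep_radius=10.0)
```

Pruning trades map completeness for storage: matching remains possible exactly where the relative relationship must be accurate.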
As described above, according to the fourth embodiment of the present invention, it is possible to obtain effects similar to the effects of the first embodiment, generate a relatively highly accurate map, and further reduce the map capacity stored in the map database 9.
Next, a fifth embodiment of the present invention will be described.
In the fifth embodiment, the external environment recognition device includes a trajectory prediction unit 30 that predicts a trajectory of an oncoming vehicle based on the position of the oncoming vehicle on the map estimated by the point group matching unit 7.
Since the trajectory prediction unit 30 can accurately predict the trajectory of the oncoming vehicle, an action of the host vehicle can be appropriately planned.
As described above, according to the fifth embodiment of the present invention, since effects similar to the effects of the first embodiment can be obtained and the trajectory of the oncoming vehicle can be accurately predicted, the action of the host vehicle can be appropriately planned.
Note that, although the target selection unit 8 is a constituent element of the external environment recognition device 1 in the first to fifth embodiments described above, the target selection unit 8 can be omitted, and an example in which the target selection unit 8 is omitted is also included in the embodiments of the present invention.
In the example in which the target selection unit 8 is omitted, the self-position estimated by the self-position estimation unit 3 is output to the point group matching unit 7, and the target information recognized by the target recognition unit 4 is output only to the sensor point group acquisition unit 6. Then, the point group matching unit 7 performs point group matching based on the self-position estimated by the self-position estimation unit 3, the sensor point group acquired by the sensor point group acquisition unit 6, and the map information from the map information acquisition unit 5.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2022/019420 | 4/28/2022 | WO |