This application claims priority to Japanese Patent Application No. 2021-110032 filed on Jul. 1, 2021, the contents of which are incorporated herein by reference.
The present invention relates to a driving assistance device.
Currently, various driving assistance systems have been developed and commercialized. Among these, in order to realize applications such as a lane keeping system and a lane change system, it is necessary to detect and track a lane marking around a vehicle with high accuracy by using a sensor mounted on the vehicle.
When tracking a lane marking, a Kalman filter is generally used. In this method, the object position at the next time can be predicted from the object positions up to the previous time. However, if the white line being tracked cannot be correctly associated with the obtained observation (a detected lane marking position), the accuracy of the prediction decreases. The position, the shape, and the type of a lane marking are used for the association between lane markings. In the existing method, one lane marking type is generally assigned to an observed lane marking, but in the case of a sensor having a wide observation range, a plurality of lane marking types may be included in a single observation. Thus, in a place where the lane marking type changes frequently, such as a merging lane of an expressway or near an intersection, it is difficult to correctly determine the lane marking type, and tracking of the lane marking may fail.
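For illustration only, the prediction and update described above can be sketched as a minimal Kalman filter, assuming a hypothetical two-dimensional state of lateral offset and heading of the lane marking relative to the vehicle; this is a sketch under these assumptions, not the filter design of the embodiment.

```python
import numpy as np

class LaneMarkingKalman:
    """Tracks one lane marking as [lateral offset (m), heading (rad)]."""

    def __init__(self, offset, heading):
        self.x = np.array([offset, heading], dtype=float)  # state estimate
        self.P = np.eye(2)                                 # state covariance
        self.Q = np.diag([0.05, 0.01])                     # process noise (assumed)
        self.R = np.diag([0.10, 0.05])                     # observation noise (assumed)

    def predict(self, speed, dt):
        # Small-angle motion model: the offset drifts by speed * dt * heading.
        F = np.array([[1.0, speed * dt],
                      [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x  # prediction used to associate the next observation

    def update(self, z):
        # Standard Kalman update with a direct observation of both state elements.
        H = np.eye(2)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x
```

If the predicted state and the observation cannot be matched, the update step is skipped and the prediction error grows, which is the failure mode the embodiment addresses.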
Background art of the present technical field includes the following prior art. PTL 1 (JP 2010-244456 A) discloses a boundary recognition device. In this device, a boundary line candidate extraction unit extracts boundary line candidates of a lane on a road from an image acquired by an in-vehicle camera by using known image processing such as pattern matching or the Hough transform. One or more types of boundary line feature calculation units calculate a certainty factor of the boundary line likelihood of each extracted boundary line candidate as a likelihood. A boundary line feature integration unit multiplies and integrates the calculated likelihoods to output a likelihood indicating the boundary line likelihood, and a boundary line selection unit selects the boundary line candidate having the maximum likelihood as a boundary line on the road. The boundary line feature calculation unit calculates the likelihood of a boundary line candidate by using a luminance variance, an internal edge amount, and the like, and changes and outputs the likelihood by using a feature amount detected by a road surface feature extraction unit or the like.
Systems that utilize surrounding lane marking information, such as lane keeping systems and lane change systems, require stable lane marking information over a long period of time. Since the observation result of a single frame may include instantaneous non-detection or erroneous detection, obtaining stable lane marking information requires associating the recognition results of successive frames and integrating the observation results over several frames. In a case where the technique disclosed in PTL 1 is applied, only one lane marking type can be assigned to an observation result. Therefore, even in a case where the lane marking type changes frequently, the one lane marking type included most in the observation results is assigned, and association of lane markings may fail. Since an output lane marking position depends on the lane marking type, it is desirable to assign a lane marking type to each region instead of assigning one lane marking type to the entire observation result.
Therefore, an object of the present invention is to provide a driving assistance device that tracks a lane marking with high accuracy by using a sensor attached to a vehicle.
A representative example of the invention disclosed in the present application is as follows. That is, a driving assistance device that recognizes a lane marking around an own vehicle with at least one sensor includes a lane marking detection unit that detects lane marking information from data acquired by the sensor; a lane marking type candidate estimation unit that recognizes a lane marking type from the detected lane marking information and calculates a likelihood of each lane marking type; a lane marking type branch point estimation unit that detects a lane marking type branch point that is a point at which a likelihood of the lane marking type changes with respect to an advancing direction of the own vehicle, and distinguishes regions before and after the lane marking type branch point as a first region and a second region; and a lane marking information integration unit that, in a case where a lane marking of the first region and a lane marking of the second region have the same lane marking type, integrates the lane marking of the first region and the lane marking of the second region into a continuously recognizable lane marking.
According to one aspect of the present invention, a lane marking can be tracked with high accuracy by using a sensor attached to a vehicle. Problems, configurations, and effects other than those described above will be clarified by the following description of embodiments.
As illustrated in the figure, data acquired by a sensor 110 attached to the own vehicle and vehicle information 120 acquired from the own vehicle via a network such as a CAN are input to the driving assistance device 100. The driving assistance device 100 sequentially acquires data at a predetermined frame rate from the sensor 110 (for example, a camera or a LiDAR) mounted on the own vehicle. The vehicle information 120 is information regarding the movement of the own vehicle obtained from the own vehicle, and information equivalent thereto, such as the vehicle speed, the steering wheel angle, the vehicle turning radius, the state of the accelerator or the brake, and the state of the shift lever. Similarly to the information obtained from the sensor 110, the vehicle information 120 is sequentially acquired at a predetermined frame rate.
The lane marking detection unit 200 detects a lane marking from the data obtained from the sensor 110. A lane marking is detected by using a known method; for example, in a case where a lane marking is extracted from an image, a line segment detector (LSD), the Hough transform, or the like can be used. The lane marking detection unit 200 outputs position information of the detected lane marking to the lane marking type candidate estimation unit 210.
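As a hedged sketch of one of the known methods named above, the probabilistic Hough transform provided by OpenCV could extract line-segment candidates as follows; the edge and Hough thresholds are illustrative assumptions, not values from the embodiment.

```python
import cv2
import numpy as np

def detect_lane_candidates(bgr_image):
    """Return lane-marking candidate segments as [x1, y1, x2, y2] lists."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge map of the road-surface image
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=20, maxLineGap=10)
    return [] if segments is None else [s[0].tolist() for s in segments]
```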
The lane marking type candidate estimation unit 210 estimates the lane marking type to which each lane marking detected by the lane marking detection unit 200 is likely to belong. For this estimation, a known method may be used, for example, pattern matching using a pattern of each lane marking type, or a lane marking model in which the periodicity of each lane marking type is modeled. The lane marking type candidate estimation unit 210 outputs the lane marking type determination result to the lane marking type integration unit 220.
The lane marking type integration unit 220 integrates information regarding the type of each lane marking estimated by the lane marking type candidate estimation unit 210 for each lane marking. As a result of the integration, a lane marking type having a certain likelihood or more is assigned as the type of lane marking. The lane marking type integration unit 220 outputs the assigned lane marking type to the lane marking type branch point estimation unit 230.
In a case where a plurality of lane marking types are assigned to one integrated lane marking as a result of the integration in the lane marking type integration unit 220, the lane marking type branch point estimation unit 230 detects a point at which the lane marking type changes. Such a change point may be detected by using the ratio of the lane marking types included in the integrated lane marking. In a case where a change point is found, the lane marking is divided into regions before and after the change point, and a lane marking type is assigned to each of the regions after division. The lane marking type branch point estimation unit 230 outputs the lane marking type assigned to each of the regions after division to the lane marking integration necessity determination unit 240.
The lane marking integration necessity determination unit 240 determines whether it is necessary to integrate the divided lane markings. A determination result is used by the lane marking information integration unit 260. Note that the lane marking integration necessity determination unit 240 is effective in a case where a road structure is complicated and there are many oblique white lines, and thus need not be provided in a case where the road structure is simple.
The lane marking reliability estimation unit 250 adds a reliability to each detected lane marking. For example, the reliability of a lane marking detected by a camera is generally decreased as the lane marking becomes farther from the vehicle, and the reliability is increased in a case where a lane marking is detected at the same place by a plurality of sensors 110. The lane marking reliability estimation unit 250 outputs the estimated reliability to the lane marking information integration unit 260.
The lane marking information integration unit 260 associates a lane marking being tracked with a newly observed lane marking, and integrates information regarding the associated lane markings to determine whether the lane marking is an output target lane marking. For example, the lane markings are associated with each other by using shapes, positions, types, and the like of the lane markings. Unnecessary lane markings may be excluded from output targets with reference to a lane width specified in the Road Structure Ordinance. The lane marking information integration unit 260 outputs information regarding the lane marking determined to be an output target to the external system 300.
The detection of a lane marking using each functional block described above may be performed separately for lane markings on the left side and the right side of the vehicle. Note that the consistency of the lane markings detected on both sides of the vehicle may be determined with reference to a rule defined in the Road Structure Ordinance.
The external system 300 is, for example, a system that recognizes a lane in which the own vehicle can travel by using lane marking information, such as a lane keeping system or a lane change system, or an application that senses a surrounding environment of a route on which the own vehicle has traveled and generates a map.
Next, processing executed by the driving assistance device 100 in a case where at least one of a camera or a LiDAR is used as the sensor 110 will be described with reference to a flowchart.
First, in S201, the lane marking detection unit 200 extracts a line segment that is a lane marking candidate from acquired data. Lane marking candidates include not only general lane markings including a white line and an orange line but also lane markings including Botts' Dots, cat's eyes, and the like.
Next, in S202, the lane marking detection unit 200 performs filtering to remove noise from the lane marking candidates extracted in S201. This is because, in a case where line segments are extracted by using a method such as an LSD, not only cracks of the road surface and shadows of utility poles or buildings but also road markings are extracted, as illustrated in the figure.
Next, in S203, the lane marking detection unit 200 groups the line segments on the basis of the result in S202. That is, in a case where one lane marking is output as a plurality of line segments, for example because the line is a broken line or the white line is worn, it is determined whether the plurality of line segments belong to the same group, and the line segments are grouped. In the grouping, line segments within predetermined thresholds of the position and the angle of a lane marking candidate relative to the own vehicle are treated as candidates partitioning the same lane. For example, the position of a lane marking candidate may be calculated from the position in the vehicle lateral direction among the parameters obtained by approximating each detected lane marking candidate with a straight line, and the angle of the lane marking may be calculated from the inclination with respect to the vehicle advancing direction. In the present embodiment, line segments whose difference in position viewed in the vehicle lateral direction is 0.5 m or less and whose angle error is 10° or less are associated with each other.
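A minimal sketch of this grouping rule follows, assuming each candidate has already been reduced to a (lateral offset, angle) pair; the greedy single-pass clustering is an assumption, and only the 0.5 m and 10° thresholds come from the embodiment.

```python
def same_group(cand_a, cand_b, max_offset_m=0.5, max_angle_deg=10.0):
    """cand = (lateral_offset_m, angle_deg); thresholds follow the embodiment."""
    return (abs(cand_a[0] - cand_b[0]) <= max_offset_m
            and abs(cand_a[1] - cand_b[1]) <= max_angle_deg)

def group_segments(candidates):
    """Greedy single-pass grouping of (offset, angle) candidates."""
    groups = []
    for cand in candidates:
        for group in groups:
            if same_group(group[0], cand):  # compare with the group representative
                group.append(cand)
                break
        else:
            groups.append([cand])
    return groups
```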
Next, in S204, the lane marking detection unit 200 extracts a lane marking from the lane marking candidates. Here, the lane marking is a line having a constant width.
Note that, in a case where a lane marking type has a short length in the advancing direction of the vehicle, such as Botts' Dots or cat's eyes, the lane marking may not be detected when the sampling interval described above is long. Nevertheless, the method of the present example can also be applied to Botts' Dots and cat's eyes as long as they are detected by the camera or the LiDAR.
Next, in S205, the lane marking detection unit 200 calculates the lane marking positions to be used in the following processing on the basis of the lane markings extracted in S204. Among the intersection points of each sampling line, set at regular intervals, and the lane marking, the intersection point on the side close to the vehicle is set as a lane marking position. As a result, the detected lane marking is expressed as a point sequence. In the present example, the lane marking position is expressed by a point sequence sampled at regular intervals, but the lane marking position may instead be handled as a line.
Next, the lane marking type candidate estimation unit 210 will be described with reference to the flowchart.
Next, in S212, the lane marking type candidate estimation unit 210 collates the graph shape obtained by plotting the luminance of the lane marking of the road surface for each sampling interval extracted in S211 with a model of the lane marking shape obtained by modeling the luminance in advance, by using a known method such as template matching, to obtain the matching degree. The model of the lane marking shape is a model in which the vertical axis represents luminance and the horizontal axis represents the lateral position from the vehicle, as illustrated in the figure.
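The collation in S212 could look like the following sketch, where the per-type luminance templates and the correlation-based matching degree are illustrative assumptions rather than the models of the embodiment.

```python
import numpy as np

# Illustrative luminance templates (bright paint ~0.9, dark asphalt ~0.1).
TEMPLATES = {
    "solid":  np.array([0.9] * 20),                   # continuously bright
    "broken": np.array(([0.9] * 5 + [0.1] * 5) * 2),  # periodic paint/gap pattern
}

def type_likelihoods(profile):
    """Map a sampled luminance profile to normalized per-type likelihoods."""
    p = (profile - profile.mean()) / (profile.std() + 1e-9)
    scores = {}
    for name, tpl in TEMPLATES.items():
        t = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
        n = min(len(p), len(t))
        corr = float(np.dot(p[:n], t[:n]) / n)  # correlation in [-1, 1]
        scores[name] = (corr + 1.0) / 2.0       # matching degree in [0, 1]
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}
```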
Next, in S213, the lane marking type candidate estimation unit 210 registers the likelihood of each lane marking type estimated in S212 for each lane marking in a list, as illustrated in the figure.
Next, the lane marking type integration unit 220 will be described with reference to the flowchart.
Next, in S222, the lane marking type integration unit 220 associates the lane markings acquired by the respective sensors 110. Lane markings within a certain range are associated by using the distance between approximation parameters calculated from the point sequences indicating the lane marking shapes. As the parameter approximating a point sequence, the approximation parameters of a straight line, a quadratic curve, and a circle are obtained, the distances between each set of approximation parameters and the point sequence are calculated, and the parameter having the smallest distance is set as the most suitable approximation parameter. Since the method of calculating a distance between approximation parameters and the method of calculating a distance between a point sequence and an approximation parameter are known, the description thereof will be omitted. The lane markings being tracked at this time are simultaneously associated by the same method. In the present example, the association is performed by using the approximation parameters, but the association method is not limited thereto.
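As a sketch of obtaining the most suitable approximation parameter, the three model fits and the residual comparison might be written as follows; the least-squares circle fit (Kasa method) is an assumed concrete choice, since the text leaves the known methods unspecified.

```python
import numpy as np

def best_approximation(xs, ys):
    """Fit a line, a quadratic, and a circle; return the model with the
    smallest mean distance between the fit and the point sequence."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    fits = {}
    # Polynomial fits: straight line (degree 1) and quadratic curve (degree 2).
    for name, deg in (("line", 1), ("quadratic", 2)):
        coeffs = np.polyfit(xs, ys, deg)
        residual = float(np.mean(np.abs(np.polyval(coeffs, xs) - ys)))
        fits[name] = (residual, coeffs)
    # Algebraic circle fit (Kasa): solve x^2 + y^2 + D*x + E*y + F = 0.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = float(np.sqrt(max(cx * cx + cy * cy - F, 0.0)))
    residual = float(np.mean(np.abs(np.hypot(xs - cx, ys - cy) - r)))
    fits["circle"] = (residual, (cx, cy, r))
    best = min(fits, key=lambda k: fits[k][0])
    return best, fits[best][1]
```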
Next, in S223, the lane marking type integration unit 220 integrates the likelihoods of the types of the lane markings associated in S222. The likelihoods are integrated by using the following formula.

$$P(T_i \mid S_{fc}, S_l) = \frac{P(S_{fc} \mid T_i)\,P(S_l \mid T_i)\,P(T_i)}{\sum_j P(S_{fc} \mid T_j)\,P(S_l \mid T_j)\,P(T_j)} \tag{1}$$
In Formula (1), Ti represents the event that lane marking type i is correct, Sfc represents the lane marking type determination result of the front camera, Sl represents the lane marking type determination result of the LiDAR, P(Sfc|Ti) represents the likelihood that the result recognized by the front camera corresponds to each lane marking type, P(Sl|Ti) represents the likelihood that the result recognized by the LiDAR corresponds to each lane marking type, P(Ti) represents the prior probability, and P(Ti|Sfc, Sl) represents the posterior probability. Here, the prior probability is the likelihood registered in the associated lane marking being tracked. In the present example, the likelihood integration method using Bayesian estimation has been described, but an average of the likelihoods of the lane marking types estimated by the respective sensors 110 may be used, or a weighted average may be obtained by adding a weight for each sensor type.
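A minimal sketch of Formula (1) with two sensors follows, using per-type likelihood dictionaries; the numeric values are illustrative only.

```python
def fuse_likelihoods(prior, cam_likelihood, lidar_likelihood):
    """Naive-Bayes fusion per Formula (1): posterior ~ prior * cam * lidar."""
    posterior = {t: prior[t] * cam_likelihood[t] * lidar_likelihood[t]
                 for t in prior}
    norm = sum(posterior.values())
    return {t: p / norm for t, p in posterior.items()}

# Usage: the fused posterior becomes the prior of the tracked lane marking
# for the next frame.
prior = {"solid": 0.5, "broken": 0.5}
cam   = {"solid": 0.7, "broken": 0.3}
lidar = {"solid": 0.6, "broken": 0.4}
print(fuse_likelihoods(prior, cam, lidar))  # {'solid': ~0.78, 'broken': ~0.22}
```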
Next, in S224, the lane marking type integration unit 220 determines the lane marking type on the basis of the likelihoods integrated in S223. In the present example, a lane marking type having a likelihood of 0.4 or more is assigned. In a case where the likelihoods of a plurality of lane marking types are high, a plurality of lane marking types are assigned. The threshold of 0.4 used in the present example may be freely changed.
Next, in S225, the lane marking type integration unit 220 registers the lane marking information in a list, as illustrated in the figure.
Next, the lane marking type branch point estimation unit 230 will be described with reference to the flowchart.
Next, in S232, the lane marking type branch point estimation unit 230 estimates a lane marking branch point from the branch point candidates set in S231. In this estimation, the lane marking is divided at the position of each set branch point candidate, and a likelihood of each lane marking type is calculated for each of the regions after division (S212). For example, in a case where the region is divided at the middle point among the points P illustrated in the figure, the likelihood of each lane marking type is calculated for the region before and for the region after that point.
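A hedged sketch of S232: each candidate point splits the point sequence, the per-region likelihoods are recomputed as in S212, and a split yielding two different, confidently estimated types is kept as the branch point. Here `likelihoods_fn` is a hypothetical stand-in for the S212 processing, and the 0.4 threshold follows the assignment rule of S224.

```python
def find_branch_point(points, candidates, likelihoods_fn, threshold=0.4):
    """Return the index of the first split whose two regions have different,
    confidently estimated lane marking types, or None if no branch exists."""
    for idx in candidates:
        front, rear = points[:idx], points[idx:]
        type_f, lik_f = max(likelihoods_fn(front).items(), key=lambda kv: kv[1])
        type_r, lik_r = max(likelihoods_fn(rear).items(), key=lambda kv: kv[1])
        if type_f != type_r and lik_f >= threshold and lik_r >= threshold:
            return idx  # the lane marking type changes at this point
    return None
```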
In a case where there is a lane marking of which a branch point has been detected in the processing so far (yes in S233), the processing proceeds to the process in S234, and in a case where there is no branch point (no in S233), the processing of the lane marking type branch point estimation unit 230 is ended.
In S234, the lane marking type branch point estimation unit 230 divides the region on the basis of the detected branch point, as illustrated in the figure.
In S235, the lane marking type branch point estimation unit 230 re-registers the lane marking information for each of the regions after division in the list. In S235, not only the lane marking type but also the point sequence representing the lane marking belonging to each region is divided. In addition, branch point information (a position and which lane marking is divided into which lane marking) is also registered.
Next, the lane marking integration necessity determination unit 240 will be described with reference to the flowchart.
Next, in S242, the lane marking integration necessity determination unit 240 calculates the distance between the point sequence positions belonging to the respective regions by using the branch point information read in S241. This is intended to keep distant lane markings divided, such as in a case where point sequences before and after an intersection are acquired. In the present example, division is performed in a case where the distance between the point sequences is 3 m or more, with reference to the threshold of a broken line. Otherwise, the processing proceeds to S243.
Next, in S243, the lane marking integration necessity determination unit 240 calculates an approximation parameter by using the point sequence of each of the regions after division. Since the approximation parameter has already been described, the description thereof will be omitted. The angle between the point sequences is calculated by using the obtained approximation parameters. In a case where the angle is equal to or more than a predetermined threshold, it is considered that the number of lanes is increasing, as at a lane increase portion, and thus division is performed.
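The distance and angle checks of S242 and S243 can be sketched as follows; the 3 m gap threshold follows the text, while the angle threshold is an assumed placeholder for the unspecified predetermined threshold.

```python
import math

def needs_division(end_of_first, start_of_second,
                   angle_first_deg, angle_second_deg,
                   max_gap_m=3.0, max_angle_deg=5.0):
    """True if the two regions around a branch point should stay divided."""
    # S242: regions far apart (e.g., before/after an intersection) stay divided.
    if math.dist(end_of_first, start_of_second) >= max_gap_m:
        return True
    # S243: a large angle between the fitted regions (e.g., a lane increase).
    return abs(angle_first_deg - angle_second_deg) >= max_angle_deg
```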
Next, in S244, the lane marking integration necessity determination unit 240 registers information regarding whether to integrate or divide the lane markings in the vicinity of the branch point. The registered information is used in the subsequent processing.
Next, the lane marking reliability estimation unit 250 will be described with reference to the flowchart.
Next, in S252, the lane marking reliability estimation unit 250 associates the lane markings by using the coordinate-converted lane markings. The lane markings are associated in units of divided lane markings. Also here, the lane markings are similarly associated with lane markings tracked in the past. Since the method of associating the lane markings has already been described, the description thereof will be omitted.
Next, in S253, the lane marking reliability estimation unit 250 estimates the reliability of each lane marking. In an overlapping region of the observation ranges of the sensors 110, the reliability is estimated on the basis of whether or not the same lane marking can be detected by each sensor 110. For example, in a case where the camera and the LiDAR have an overlapping region in front of the own vehicle, the reliability of a lane marking detected by only one sensor 110 is set to 0.5, and the reliability of a lane marking detected by both sensors 110 is set to 1.0. Instead of setting the reliability according to the number of sensors 110 that can detect the lane marking, an existence probability of the lane marking may be calculated by Bayesian estimation from the number of times of tracking or from whether tracking can be performed, and the calculated existence probability may be used as the reliability.
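A minimal sketch of the overlap-region reliability rule, using the example values 0.5 and 1.0 from the text; the sensor names are assumptions.

```python
def estimate_reliability(detected_by):
    """detected_by: set of sensor names that detected the same lane marking."""
    overlap_sensors = {"camera", "lidar"}  # sensors sharing the overlap region
    hits = len(set(detected_by) & overlap_sensors)
    if hits >= 2:
        return 1.0  # confirmed by both sensors
    return 0.5 if hits == 1 else 0.0

print(estimate_reliability({"camera"}))           # 0.5
print(estimate_reliability({"camera", "lidar"}))  # 1.0
```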
Next, the lane marking information integration unit 260 will be described with reference to the flowchart.
Next, in S262, the lane marking information integration unit 260 associates the lane markings. Here, association using the approximation parameters and association using the lane marking type are performed. Since the association using approximation parameters has already been described, the description thereof will be omitted. In the association using the lane marking type, two processes are executed. One is validity determination using the lane marking type, as illustrated in the figure.
Next, in S263, the lane marking information integration unit 260 integrates the information regarding the lane marking regions associated in S262. Here, since the lane marking information is assumed to match, the point sequence parameters expressing the position of the lane marking are integrated. As described above, by managing the point sequence representing a lane marking for each lane marking type, the lane marking is stably tracked. Although a point sequence is managed separately for each of the lane marking regions, the connection relationship between the lane marking regions is stored at the lane marking type branch point, and thus lane marking tracking is not hindered.
As described above, after the series of processes is executed, the lane marking information is output to an external application that uses the lane marking information.
In Example 2, a scene understanding unit 270 is added to the driving assistance device 100. With the addition of the scene understanding unit 270, processing of the lane marking type integration unit 220 and the lane marking reliability estimation unit 250 is changed.
In order to accurately recognize a lane marking, it is necessary to prioritize the sensor information acquired by each sensor 110 by recognizing the environment around the vehicle, such as a scene in which the lighting environment greatly changes (for example, backlight or a tunnel entrance/exit), and the weather, such as rain and fog. From the viewpoint of the lane marking type, a dotted line is likely to appear on a downhill curve, and a zebra zone or a lane marking having a complicated shape is likely to appear near an intersection. The scene understanding unit 270 of Example 2 provides a function of recognizing these external factors.
First, the scene understanding unit 270 will be described. The scene understanding unit 270 roughly recognizes two kinds of scenes. One is a scene in which the recognition performance of the sensor 110 significantly deteriorates. This is, for example, a scene in which an illumination condition greatly changes, such as backlight or a tunnel entrance/exit, a scene in which the sensing range is interrupted by water, dirt, or the like adhering to the windshield or the lens of a camera, or a weather factor such as rain, fog, or snow. The other is a scene in which a specific lane marking type is likely to appear, such as a curve on a downhill or a road marking. Therefore, the map information 400 is input to the scene understanding unit 270.
Next, the processing of the scene understanding unit 270 will be described with reference to the flowchart.
Next, in S272, the scene understanding unit 270 recognizes a scene in which the recognition performance of the sensor 110 significantly deteriorates. Since such a scene depends on the sensors 110 mounted on the own vehicle, the scenes in which the performance deteriorates are set in advance for each configuration of the sensor 110. For a set scene, for example, backlight or a tunnel entrance/exit is determined by detecting a scene in which the luminance in the image greatly changes, and matter adhering to a lens or the windshield is determined from an object recognized at the same place for a certain period of time. A scene may also be determined by machine learning in which the target scenes are learned in advance.
Next, in S273, in a case where a scene in which the recognition performance significantly deteriorates is recognized in S272, the scene understanding unit 270 estimates the sensing region of the influenced sensor 110 in S274. The estimation of the sensing region of the influenced sensor 110 is performed by the following procedure. First, a performance deterioration factor of the sensor 110 (for example, backlight) is recognized in S272 from a scene as illustrated in the figure.
Next, in S274, the scene understanding unit 270 estimates information regarding the disturbance, the sensor 110 influenced by the disturbance, and the lane marking influenced by the disturbance, and in S275, registers the estimated information in the list illustrated in the figure.
Next, in S276, the scene understanding unit 270 refers to the map information 400 to recognize a road situation in which a specific lane marking type is likely to appear. In an environment in which the appearance frequency of lane marking types is biased, such as a curve on a downhill or the vicinity of a junction of an expressway, a lane marking likelihood initial value indicating the appearance probability of the lane marking type that is likely to appear in the surrounding situation is set, as illustrated in the figure.
Next, in a case where a specific road situation is recognized in S276 (yes in S277), the scene understanding unit 270 registers the road situation in the memory in S278.
Next, the changed processing of the lane marking type integration unit 220 will be described. In the changed processing, the likelihood of the lane marking type determined for each sensor 110 is weighted by using the reliability estimated for each sensor 110. In the formula used for this weighting, Si is a sensor type, T0 is the probability of being a correct answer, T1 is the probability of being an incorrect answer, and R is the reliability.
Next, the changed processing of the lane marking reliability estimation unit 250 will be described with reference to the drawings.
In Example 3, an example in which the map information 400 is combined will be described with reference to the drawings.
Next, the lane marking type candidate estimation unit 210 will be described with reference to the flowchart.
Next, in S222′, the lane marking type candidate estimation unit 210 reads surrounding information of the own vehicle from the map information 400. The available surrounding information depends on the map information 400 to be used. In the present example, on the assumption that a high-precision map is used, the lane marking type, the lane marking position, the number of lanes, the road width, and the lane connection information are read. The map information 400 is read within a predetermined distance from the own vehicle.
Next, in S223′, the lane marking type candidate estimation unit 210 converts the information read in S222′ into a coordinate system centered on the own vehicle. The coordinate conversion is performed by a known method such as Hubeny's formula by using the latitude and the longitude acquired from the GNSS information.
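As an illustrative sketch of this conversion, Hubeny's formula on the GRS80 ellipsoid turns a latitude/longitude difference into local metric offsets; rotating these offsets into the vehicle-centered frame by the own-vehicle heading is omitted here.

```python
import math

A = 6378137.0          # GRS80 semi-major axis [m]
E2 = 0.00669438002290  # GRS80 first eccentricity squared

def hubeny_offsets(lat_ref_deg, lon_ref_deg, lat_deg, lon_deg):
    """Return (eastward, northward) offsets in meters from the reference point."""
    lat_m = math.radians((lat_ref_deg + lat_deg) / 2.0)  # mean latitude
    w = math.sqrt(1.0 - E2 * math.sin(lat_m) ** 2)
    m = A * (1.0 - E2) / w ** 3                          # meridian radius of curvature
    n = A / w                                            # prime-vertical radius of curvature
    dy = m * math.radians(lat_deg - lat_ref_deg)         # northward offset [m]
    dx = n * math.cos(lat_m) * math.radians(lon_deg - lon_ref_deg)  # eastward offset [m]
    return dx, dy
```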
Next, in S224′, the lane marking type candidate estimation unit 210 associates the map information 400 of which the coordinates have been converted in S223′ with the lane marking detected by the own vehicle. Since the lane marking position information obtained from the map information 400 is a point sequence, this association is performed in the same manner as the association between lane markings. Even in a case where the lane marking information obtained from the map information 400 is expressed by a function instead of a point sequence, point sequence information can be obtained through sampling at predetermined intervals.
Next, in S225′, the lane marking type candidate estimation unit 210 determines the lane marking type. The lane marking type may be determined by assigning the lane marking type in the map information 400 associated with the lane marking information detected by the own vehicle. In a case where lane marking information acquired from the map information 400 is not associated with a lane marking detected by the own vehicle, it is determined that the lane marking is erroneously detected, and the detected lane marking information is deleted. This deletion assumes that the map information 400 is up to date. Therefore, in a case where there is an intention to update the map, the lane marking type may instead be estimated by the method of Example 1 with the lane marking type obtained from the map information 400 as an initial value. In a case where a map in which the lane marking type or the like is not registered, such as a navigation map, is used, the initial value at the time of lane marking type determination is changed by using information from the map information 400, such as the presence of an approaching intersection or curve.
The lane marking type branch point estimation unit 230 uses the result of the association between the lane marking information read from the map information 400 by the lane marking type candidate estimation unit 210 and the detected lane markings to set a branch point of the lane marking type for each lane marking for which the corresponding lane marking has been detected. In a case where the branch point of the lane marking type is not registered in the map information 400, the branch point is estimated according to the method of Example 1.
The lane marking information integration unit 260 integrates the lane markings by using the number of lanes, the road width, and the lane marking type registered in the map information 400. The lane marking information integration process is basically the same as that illustrated in the figure.
The lane marking information estimated through the series of processes is output to the external system 300.
In Example 4, an example in which the lane marking information estimated by using the driving assistance device 100 is registered as the map information 400 will be described. The map information 400 is registered in a storage device in a server or in the driving assistance device. In a case where the lane marking information is registered in the map information 400, the external system 300 illustrated in the figure serves as the registration destination of the map information 400.
A case is assumed in which the lane marking information is registered in the map information 400 for a reason such as generation of a map on which the lane markings, obstacle positions, and the like of the surrounding environment used for automated driving are written, or a change in the road due to construction. The map may be generated or updated at a timing at which the system side recognizes that there is no surrounding map, or when the user changes the mode to a map generation/update mode. The map information 400 is recorded after being converted from the coordinate system centered on the own vehicle to a coordinate system that can be expressed by absolute values by using information from the GNSS or the like. As a result, the map can also be shared between vehicles. The information to be registered in the map is the lane marking position, the lane marking type, the lane marking branch point, and the time. Since the lane marking branch point is registered, it can be used for estimating the position in the vehicle front-rear direction, which is difficult in self-position estimation using the map information 400, and for switching between processes when the own vehicle enters an intersection.
As described above, the driving assistance device 100 according to the example of the present invention includes the lane marking detection unit 200 that detects lane marking information from data acquired by the sensor 110, the lane marking type candidate estimation unit 210 that recognizes the type of the lane marking from the detected lane marking information and calculates a likelihood of each type, the lane marking type branch point estimation unit 230 that detects a lane marking type branch point that is a point at which the likelihood of the lane marking type changes with respect to an advancing direction of an own vehicle, and distinguishes regions before and after the lane marking type branch point as a first region and a second region, and the lane marking information integration unit 260 that integrates a lane marking of the first region and a lane marking of the second region into a continuously recognizable lane marking. Therefore, lane markings to be tracked can be continuously recognized and thus the lane markings can be tracked with high accuracy.
The driving assistance system includes the lane marking integration necessity determination unit 240 that determines necessity of integration of the lane marking of the first region and the lane marking of the second region. In a case where the lane marking integration necessity determination unit 240 determines that integration is necessary, the lane marking information integration unit 260 integrates the lane marking of the first region and the lane marking of the second region into a continuously recognizable lane marking. Therefore, even in a case where a road structure is complicated and there are many white lines in an oblique direction, the lane markings can be integrated.
The driving assistance system includes the scene understanding unit 270 that estimates a situation around the vehicle from the data acquired by the sensor 110, and the lane marking reliability estimation unit 250 that estimates a reliability of the lane marking type determined for each sensor 110 on the basis of an output of the scene understanding unit 270 and determines a reliability of the lane marking for each sensor 110. The lane marking type candidate estimation unit 210 refers to the estimated reliability and determines lane marking information of a lane marking type candidate for each sensor 110 on the basis of an edge of the lane marking by using at least one of a luminance and a reflection intensity acquired by the sensor 110. Therefore, even if the information acquired by the sensor 110 is influenced by the disturbance, a lane marking can be accurately tracked.
The lane marking type candidate estimation unit 210 calculates a likelihood of each lane marking type by comparing luminance information of a lane marking sampled at regular intervals in the advancing direction of the own vehicle with a lane marking shape model obtained by modeling luminance information, so that the lane marking can be tracked by the same logic regardless of the type (a camera, a LiDAR, or the like) of the sensor 110.
The lane marking information integration unit 260 associates the lane markings by using the lane marking type and integrates the lane marking information, so that the lane marking can be tracked with higher accuracy than when only the position is considered, and erroneous detection of the lane marking due to erroneous association can be reduced.
The lane marking type branch point estimation unit 230 divides the lane marking into the regions before and after the lane marking type branch point and manages the lane marking information for each lane marking type, so that the type of the lane marking can be accurately determined before and after the branch point.
In a case where a plurality of lane marking types are assigned, the lane marking type branch point estimation unit 230 detects the lane marking branch point on the basis of a ratio of the likelihood of the lane marking type, so that complicated processing becomes unnecessary and the type of the lane marking can be quickly determined with a low load.
Since the reliability of a lane marking is set on the basis of the disturbance factor, the sensor type, and the lane marking type, even if the appearance of the lane marking changes due to a disturbance, the lane marking can be accurately recognized regardless of the influence of the disturbance by setting weights finely according to the disturbance factor recognized by the scene understanding unit 270.
The lane marking information integration unit 260 integrates the lane markings of the type of which a likelihood is calculated for each sensor 110 by using the likelihood of the lane marking type that is likely to appear in the situation around the vehicle estimated by the scene understanding unit 270, so that the lane marking type can be quickly and accurately recognized.
The lane marking type candidate estimation unit 210 recognizes the type of the lane marking from the detected lane marking information and the map information 400, and the lane marking type branch point estimation unit 230 detects the lane marking type branch point that is a point at which the likelihood of the lane marking type changes in the traveling direction of the own vehicle by using the map information 400, so that the lane marking type and the change point can be quickly and accurately recognized.
Since the lane marking information integration unit 260 integrates the lane marking of the first region and the lane marking of the second region by using the number of lanes and the road width acquired from the map information 400, it is possible to prevent erroneous recognition that a structure such as a guardrail is a lane marking.
The lane marking information integration unit 260 integrates the lane markings of the type of which the likelihood is calculated for each sensor 110 by using the likelihood of the lane marking type that is likely to appear at the position estimated from the map information 400, so that the lane marking type can be recognized quickly and accurately.
Since the lane marking information integration unit 260 registers the information regarding the position, the type, and the branch point of the integrated lane marking in the map information, the latest information can be reflected on the map to update the map information.
Note that the present invention is not limited to the above-described examples, and includes various modifications and equivalent configurations within the concept of the appended claims. For example, the above-described examples have been described in detail for easy understanding of the present invention, and the present invention is not necessarily limited to those having all the described configurations. A part of the configuration of one example may be replaced with the configuration of another example. The configuration of another example may be added to the configuration of a certain example. A part of the configuration of each example may be added, deleted, or replaced with another configuration.
Some or all of the above-described configurations, functions, processing units, processing means, and the like may be realized by hardware, for example, by being designed as an integrated circuit, or may be realized by software by a processor interpreting and executing a program for realizing each function.
Information such as a program, a table, and a file for realizing each function can be stored in a storage device such as a memory, a hard disk, or a solid state drive (SSD), or a recording medium such as an IC card, an SD card, or a DVD.
Control lines and information lines that are considered to be necessary for description are illustrated, and not all control lines and information lines necessary for implementation are illustrated. In practice, it may be considered that almost all the configurations are connected to each other.
Number | Date | Country | Kind
---|---|---|---
2021-110032 | Jul 2021 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/004528 | 2/4/2022 | WO |