The disclosure relates to methods and systems for the detection of obstacles. The disclosure relates in particular to methods and systems for detecting static obstacles in the environment of vehicles.
Various methods and systems for the detection of obstacles (i.e. generally of objects) in the environment of vehicles are known in the prior art. The environment of a vehicle is detected by means of various sensors, and based on the data supplied by the sensor system it is determined whether there are any obstacles in the environment of the vehicle and, if applicable, where they are located. The sensor technology used for this purpose typically includes sensors that are present in the vehicle, for example ultrasonic sensors (e.g. PDC and/or parking aid), one or more cameras, radar (e.g. speed control with a distance-keeping function) and the like. Typically, a vehicle contains different sensors that are optimized for specific tasks, for example with regard to detection range, dynamic aspects, accuracy requirements and the like.
The detection of obstacles in the vehicle environment is used for different driver assistance systems, for example for collision avoidance (e.g. Brake Assist, Lateral Collision Avoidance), lane change assistant, steering assistant and the like.
For the detection of static obstacles in the environment of the vehicle, fusion algorithms are required for the input data of the different sensors. In order to compensate for sensor errors, such as false positive detections (e.g. so-called ghost targets) or false negative detections (e.g. undetected obstacles) and occlusions (e.g. caused by moving vehicles or limitations of the sensor's field of view), tracking of sensor detections of static obstacles is necessary.
Different models are used to map the immediate environment around the vehicle. A method known in the prior art for detecting static obstacles is the Occupancy Grid Fusion (OGF). In OGF, the vehicle environment is divided into rectangular cells. For each cell, a probability of occupancy with respect to static obstacles is calculated during fusion. The size of the cells determines the accuracy of the environmental representation.
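By way of illustration of the prior-art OGF principle described above (not of the claimed method), the per-cell occupancy probability is typically maintained as a Bayesian log-odds update. The following is a minimal sketch; the measurement probability of 0.7 is an illustrative assumption, not a value from this disclosure:

```python
import math

def logodds(p):
    """Convert an occupancy probability to log-odds form."""
    return math.log(p / (1.0 - p))

def update_cell(l_prior, p_meas):
    """Bayesian log-odds update of one grid cell with a new
    inverse-sensor-model probability p_meas."""
    return l_prior + logodds(p_meas)

def occupancy(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Two consecutive "occupied" measurements (p = 0.7) on an
# initially unknown cell (p = 0.5, i.e. log-odds 0):
l = 0.0
for _ in range(2):
    l = update_cell(l, 0.7)
print(round(occupancy(l), 3))  # → 0.845
```

Repeating this update for every cell in every fusion cycle is what makes a fine-grained grid expensive, which motivates the segment-based approach below.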
S. Thrun and A. Bücken, “Integrating grid-based and topological maps for mobile robot navigation,” in Proceedings of the Thirteenth National Conference on Artificial Intelligence, Volume 2, Portland, Oreg., 1996, describe research in the field of mobile robot navigation and essentially two main paradigms for mapping indoor environments: grid-based and topological. While grid-based methods generate accurate metric maps, their complexity often makes efficient planning and problem solving in large indoor spaces impossible. Topological maps, on the other hand, may be used much more efficiently, but accurate and consistent topological maps are difficult to learn in large environments. Thrun and Bücken describe an approach that integrates both paradigms. Grid-based maps are learned with artificial neural networks and Bayesian integration. Topological maps are generated as a further superordinate level on the grid-based maps by dividing the latter into coherent regions. The integrated approaches described are not easily applicable to scenarios whose parameters deviate from the indoor environments described.
With regard to application in the vehicle, OGF-based methods have at least the following disadvantages. A representation with high accuracy requires a correspondingly large number of comparatively small cells and thus entails a high computational effort and places high demands on the available storage capacity. For this reason, efficient detection of static obstacles by means of OGF is often imprecise, since, due to the nature of the method, an increase in efficiency may practically only be achieved by using larger cells, at the expense of accuracy.
As in the present case of an obstacle detection application in vehicles, many applications require a more accurate representation of the immediate environment, whereas a less accurate representation is sufficient at medium to greater distances. These requirements are typical for the concrete application described here and are reflected in the available sensor technology. Typically, the accuracy of the sensor technology used decreases with increasing distance, so that sufficient and/or desired accuracy is available in the close range, but not at greater distances. These properties cannot be mapped with an OGF because the cells are stationary: a cell may represent a location that lies in the close range at one point in time but in the far range at another point in time.
Embodiments of the methods and systems disclosed here will partially or fully remedy one or more of the aforementioned disadvantages and enable one or more of the following advantages.
Presently disclosed methods and systems enable an improved detection of obstacles and/or objects in the environment of vehicles. In particular, the disclosed methods and systems enable a simultaneous improvement in efficiency and accuracy of the detection of obstacles and/or objects in the environment of vehicles. Presently disclosed methods and systems further enable a differentiated observation of objects depending on the distance to the vehicle, so that closer objects may be detected more precisely and more distant objects with sufficient accuracy and high efficiency. Presently disclosed methods and systems further enable an efficient detection of all objects based on a relative position of the objects to the vehicle, so that objects of primary importance (e.g. objects in front of the vehicle) may be detected precisely and efficiently and objects of secondary importance (e.g. lateral objects or objects in the rear of the vehicle) may be detected with sufficient precision and in a resource-saving manner.
It is an object of the present disclosure to provide methods and systems for the detection of obstacles in the environment of vehicles, which avoid one or more of the above-mentioned disadvantages and realize one or more of the above-mentioned advantages. It is further an object of the present disclosure to provide vehicles with such systems that avoid one or more of the above mentioned disadvantages and realize one or more of the above mentioned advantages.
This object is solved by the respective subject matter of the independent claims. Advantageous implementations are indicated in the subclaims.
According to embodiments of the present disclosure, in a first aspect a method for detecting one or more objects in an environment of a vehicle is provided, the environment being bounded by a perimeter. The method comprises segmenting the environment into a plurality of segments such that each segment of the plurality of segments is at least partially bounded by the perimeter of the environment, detecting one or more detection points based on the one or more objects in the environment of the vehicle, combining the one or more detection points into one or more clusters based on a spatial proximity of the one or more detection points, and assigning a state to each of the segments of the plurality of segments. The step of assigning a state to each of the segments of the plurality of segments is based on the one or more detected detection points and/or (i.e., additionally or alternatively) on the one or more combined clusters.
Preferably, in a second aspect according to the previous aspect 1, the environment includes an origin, the origin optionally coinciding with a position of the vehicle, in particular a position of the centre of a rear axle of the vehicle.
Preferably, in a third aspect according to the previous aspect 2, each segment of a first subset of the plurality of segments is defined in terms of a respective angular aperture originating from the origin, the first subset comprising one, more, or all segments of the plurality of segments.
Preferably, in a fourth aspect according to the previous aspect 3, the segments of the first subset comprise at least two different angular apertures, wherein in particular: segments extending substantially laterally of the vehicle comprise a larger angular aperture than segments extending substantially in a longitudinal direction of the vehicle; or segments extending substantially laterally of the vehicle comprise a smaller angular aperture than segments extending substantially in a longitudinal direction of the vehicle.
Preferably, in a fifth aspect according to one of aspects 3 or 4, the segments of the first subset comprise an angular aperture originating from the origin substantially in the direction of travel of the vehicle.
Preferably, in a sixth aspect according to one of the preceding aspects 1 to 5 and aspect 3, each segment of a second subset of the plurality of segments is defined in terms of a cartesian subsection, wherein the second subset, possibly based on the first subset, comprises one, more, or all segments of the plurality of segments.
Preferably, in a seventh aspect according to the previous aspect 6, the segments of the second subset comprise at least two different extensions in one dimension.
Preferably, in an eighth aspect according to one of the two preceding aspects 6 and 7, the segments of the second subset comprise a first extension substantially transverse to a direction of travel of the vehicle which is greater than a second extension substantially in a direction of travel of the vehicle.
Preferably, in a ninth aspect according to the previous aspects 3 and 6, the segments of the first subset are defined on one side of the origin and the segments of the second subset are defined on an opposite side of the origin. In particular, the segments of the first subset are defined starting from the origin in the direction of travel of the vehicle.
Preferably, in a tenth aspect according to one of the previous aspects 1 to 9, the combining of the one or more detection points into one or more clusters is based on the application of a Kalman filter.
Preferably, in an eleventh aspect according to the previous aspect, the one or more clusters are treated as one or more detection points.
Preferably, in a twelfth aspect according to any of the preceding aspects, the state of a segment of the plurality of segments indicates an at least partial overlap of an object with the respective segment, wherein preferably the state includes at least one discrete value or one probability value.
Preferably, in a thirteenth aspect according to any of the previous aspects, the vehicle includes a sensor system configured to detect the objects in the form of detection points.
Preferably, in a fourteenth aspect according to the previous aspect, the sensor system comprises at least a first sensor and a second sensor, wherein the first and second sensors are configured to detect objects, optionally wherein the first and second sensors are different from each other and/or wherein the first and second sensors are selected from the group comprising ultrasonic-based sensors, optical sensors, radar-based sensors, lidar-based sensors.
Preferably, in a fifteenth aspect according to the previous aspect, detecting the one or more detection points further includes detecting the one or more detection points by means of the sensor system.
Preferably, in a sixteenth aspect according to any of the previous aspects, the environment essentially comprises one of the following forms: square, rectangle, circle, ellipse, polygon, trapezoid, parallelogram.
According to embodiments of the present disclosure, in a seventeenth aspect a system for detecting one or more objects in an environment of a vehicle is provided. The system comprises a control unit and a sensor system, wherein the control unit is configured to execute the method according to any of the preceding aspects.
According to embodiments of the present disclosure, in an eighteenth aspect a vehicle is provided, comprising the system according to the previous aspect.
Embodiments of the disclosure are shown in the figures and are described in more detail below.
In the following, unless otherwise stated, the same reference numerals are used for identical elements and elements having the same effect.
Typically, an environment 80 is considered whose extent in the longitudinal direction, i.e. along a direction of travel of the vehicle 100, is greater than in the direction transverse thereto. Furthermore, the environment in front of the vehicle 100 in the direction of travel may have a greater extent than behind the vehicle 100. Preferably, the environment 80 has a speed-dependent extent, so that a sufficient foresight of at least two seconds, preferably at least three seconds, is made possible.
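The speed-dependent extent may be sketched as follows; the function name, the three-second foresight default and the fixed lower bound of 20 m are illustrative assumptions, not values prescribed by this disclosure:

```python
def forward_extent_m(speed_mps, foresight_s=3.0, minimum_m=20.0):
    """Speed-dependent forward extent of the monitored environment 80:
    at least `foresight_s` seconds of travel at the current speed,
    but never below a fixed minimum (illustrative values)."""
    return max(speed_mps * foresight_s, minimum_m)

# At roughly 50 km/h (13.9 m/s), three seconds of foresight
# correspond to about 41.7 m of forward extent:
print(round(forward_extent_m(13.9), 1))  # → 41.7
```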
As exemplified in
Furthermore, a precise determination of an effective size of an object 50 or conclusions about its shape, as shown in
As already described, a smaller cell size requires correspondingly more resources for the detection and/or processing of object data, so that higher accuracy is typically associated with disadvantages in terms of efficiency and/or resource requirements.
The segment-based representation may consist of cartesian or polar or mixed segments.
When different components and/or concepts are spatially related to the vehicle 100, this is done relative to a longitudinal axis 83 of the vehicle 100 extending along and/or parallel to an assumed direction of forward travel. In
Starting from the origin 84 of the coordinate grid, the environment 80 is divided and/or segmented into polar segments 220 in the direction of travel (to the right in
In addition, the environment 80, starting from the origin 84 of the coordinate grid against the direction of travel (in
A segmentation of the environment 80 by different segments 220, 230 (e.g. polar and cartesian) may allow an adaptation to different detection modalities depending on the specific application. For example, the detection of objects 50 in the environment 80 of the vehicle 100 in the direction of travel may have a greater accuracy and range than the detection of objects 50 in the environment 80 of the vehicle 100 against the direction of travel (e.g. behind the vehicle) or to the side of the vehicle 100.
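A minimal sketch of such a mixed segmentation follows: a detection point ahead of the origin 84 is assigned to a polar (angular) segment 220, a point behind it to a cartesian strip 230. The bin counts, the strip length, and the restriction of the polar region to the frontal half-plane are illustrative assumptions:

```python
import math

def segment_id(x, y, n_polar=16, strip_len=5.0, n_strips=4):
    """Map a point (x, y) in vehicle coordinates (x forward from the
    origin, e.g. the rear-axle centre) to a segment identifier.
    Ahead of the origin (x >= 0): polar segments, i.e. angular bins
    over [-90 deg, +90 deg]. Behind the origin (x < 0): cartesian
    strips of fixed length transverse to the direction of travel.
    All bin counts and sizes are illustrative assumptions."""
    if x >= 0.0:
        ang = math.atan2(y, x)            # -pi/2 .. +pi/2 ahead of the vehicle
        width = math.pi / n_polar
        k = min(int((ang + math.pi / 2) / width), n_polar - 1)
        return ("polar", k)
    strip = min(int(-x / strip_len), n_strips - 1)
    return ("cartesian", strip)

print(segment_id(10.0, 0.0))   # → ('polar', 8), straight ahead
print(segment_id(-7.0, 2.0))   # → ('cartesian', 1), behind the vehicle
```

Because a point directly ahead falls into a narrow angular bin while a point far behind falls into a coarse strip, the accuracy of the representation follows the accuracy of the sensor technology, as described above.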
Methods according to the present disclosure make it possible to represent obstacles by a continuous quantity, namely as a distance within a segment relative to an origin. In addition to the distance, the angle of a detected obstacle may be detected and taken into account. In particular, this enables improved accuracy of obstacle detection compared to known methods. In addition, methods according to the present disclosure allow the fusion of different detections of an obstacle (by one or more sensors). An association and/or grouping of the detections may be based on the properties of the individual detections (variance and/or uncertainty). This also improves the precision of the detection compared to known methods.
Known methods may involve a comparatively trivial combination of several detection points, for example by means of a polyline. However, such a combination is fundamentally different from the combination and/or fusion of individual detections described in the present disclosure. A combination, for example using a polyline, corresponds to an abstract representation of an obstacle and/or a detection of a shape or even an outline. Methods according to the present disclosure make it possible to combine and/or merge different detections of the exact same feature or element of a coherent obstacle. In particular, this enables an even more precise determination of the existence and/or position of individual components of an obstacle.
In general, a segment 220, 230 may contain no object 50, one object 50, or several objects 50. In
Based on the sensor system of the vehicle 100, i.e. based on the signals of one or more sensors, none, one, or more detection points 54, 56 are detected in a segment. When several sensors are used, they typically have different fields of view and/or detection ranges that allow reliable and/or more reliable detection of objects 50. Objects 50 that may not be detected by one sensor, or only with difficulty (e.g. owing to a limited detection range, the type of detection and/or interference), may often be reliably detected by another sensor. During the detection, detection points are registered which may be localized in the coordinate system.
The sensor system of the vehicle 100 preferably includes one or more sensors selected from the group including ultrasonic sensors, lidar sensors, optical sensors and radar-based sensors.
Cyclically, i.e. in each time step, obstacle points that are close to each other may be associated with one another and fused with respect to their properties (e.g. position, probability of existence, height, etc.). The result of this fusion is stored in the described representation and tracked and/or traced over time by means of the vehicle movement (cf. “tracking” in the sense of following, tracing). The results of fusion and tracking serve as further obstacle points in the following time steps, in addition to new sensor measurements.
Tracking and/or tracing describes a continuation of the already detected objects 50 and/or the detection points 54, 56 based on a change of position of the vehicle. Here, a relative movement of the vehicle (e.g. based on dead reckoning and/or odometry sensor technology, or GPS coordinates) is mapped accordingly in the representation.
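The continuation of a tracked detection point under ego-motion may be sketched as a two-dimensional rigid transform; the function name and the (dx, dy, dyaw) odometry interface are illustrative assumptions:

```python
import math

def track_point(px, py, dx, dy, dyaw):
    """Carry a tracked detection point from the previous vehicle frame
    into the current one, given the ego-motion (dx, dy, dyaw) from
    dead reckoning / odometry: remove the translation, then rotate by
    the negative yaw change (2-D rigid-transform sketch)."""
    qx, qy = px - dx, py - dy
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    return (c * qx - s * qy, s * qx + c * qy)

# The vehicle drove 1 m straight ahead: a point previously 5 m
# ahead is now 4 m ahead.
print(track_point(5.0, 0.0, 1.0, 0.0, 0.0))  # → (4.0, 0.0)
```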
An essential advantage of the methods according to the present embodiment is that a respective state of a segment is related and/or tracked not with reference to fixed sector segments but with reference to the detected obstacles themselves. Furthermore, flexible states such as probabilities or classification types may be tracked as information. Known methods typically only consider discrete states (e.g. occupied or not occupied), which comprise only an abstract reference and do not represent any properties of the detected obstacles.
Starting with the nearest object 54-1, a cluster of objects is created by grouping all objects within the two-dimensional positional uncertainty of object 54-1. The cluster with the objects 54-1, 54-2 and 54-3 is created. No further objects may be assigned to objects 54-4 and 54-5; for this reason, each of them forms its own cluster. Within a cluster, the position is fused, for example, using Kalman filters, and the probability of existence using Bayes' rule or Dempster-Shafer theory.
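The grouping and fusion just described may be sketched as follows. This is a simplified illustration: a fixed radius stands in for the two-dimensional positional uncertainty, the position fusion is a plain mean (a stand-in for a Kalman update with equal variances), and the existence probabilities are combined multiplicatively in odds form (Bayes); all numeric values are illustrative:

```python
def cluster(points, radius):
    """Greedy proximity clustering: starting from the first point,
    collect all points within `radius`; repeat with the remainder.
    Each point is (x, y, p_exist)."""
    remaining = list(points)
    clusters = []
    while remaining:
        seed = remaining.pop(0)
        members, keep = [seed], []
        for p in remaining:
            if (p[0] - seed[0]) ** 2 + (p[1] - seed[1]) ** 2 <= radius ** 2:
                members.append(p)
            else:
                keep.append(p)
        remaining = keep
        clusters.append(members)
    return clusters

def fuse(members):
    """Fuse a cluster: mean position (stand-in for a Kalman update)
    and Bayes-combined existence probability in odds form."""
    n = len(members)
    x = sum(p[0] for p in members) / n
    y = sum(p[1] for p in members) / n
    odds = 1.0
    for p in members:
        odds *= p[2] / (1.0 - p[2])
    return (x, y, odds / (1.0 + odds))

# Two nearby detections and one distant one yield two clusters;
# agreeing detections raise the fused existence probability.
pts = [(1.0, 0.0, 0.6), (1.2, 0.1, 0.6), (8.0, 0.0, 0.7)]
for c in cluster(pts, radius=0.5):
    print(fuse(c))
```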
In step 502 the environment 80 is divided and/or segmented into a plurality of segments such that each segment 220, 230 of the plurality of segments is at least partially bounded by the perimeter 82 of the environment 80. This means (cf.
In step 504 one or more detection points 54, 56 are detected based on the one or more objects 50 in the environment 80 of the vehicle 100. Based on the sensor system of the vehicle 100, detection points of the object(s) are detected as points (e.g. coordinates, position information), preferably relative to the vehicle 100 or in another suitable reference frame. The detection points 54, 56 detected in this way thus mark positions in the environment 80 of the vehicle 100 at which an object 50 and/or a partial area of the object has been detected. As may be seen in
Optionally, in step 506, one or more detection points 54, 56 are combined into clusters based on a spatial proximity of the points to each other. As described with respect to
In step 508, each of the segments 220, 230 of the plurality of segments is assigned a state based on the one or more detection points 54, 56 and/or the detected clusters. If no clusters have been formed, step 508 is based on the detected detection points 54, 56. Optionally, step 508 may be based additionally or alternatively on the detected clusters, with the aim of enabling the highest possible detection accuracy and providing segments with a state accordingly. In particular, the state indicates a relation of the segment with one or more obstacles. According to the embodiments of the present disclosure, the state may take a discrete value (e.g., “occupied” or “unoccupied”, and/or suitable representations such as “0” or “1”) or a floating value (e.g., values expressing a probability of occupancy, such as “30%” or “80%”, and/or suitable representations such as “0.3” or “0.8”; or other suitable values, e.g., discrete levels of occupancy, such as “strong”, “medium”, “weak”).
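Step 508 may be sketched as follows for the probabilistic variant of the state: each segment receives the highest existence probability of any fused cluster falling into it, or 0.0 if it is empty. The function names, the max-rule, and the two-segment example are illustrative assumptions:

```python
def assign_states(segments, clusters, segment_of):
    """Assign each segment a probabilistic occupancy state from the
    fused clusters (x, y, p_exist): the state of a segment is the
    highest existence probability of any cluster falling into it,
    0.0 if no cluster does. `segment_of` maps a position to a
    segment key (illustrative sketch)."""
    states = {s: 0.0 for s in segments}
    for (x, y, p) in clusters:
        s = segment_of(x, y)
        if s in states:
            states[s] = max(states[s], p)
    return states

# Toy segmentation: everything ahead of the origin is "front",
# everything behind it "rear".
segs = ["front", "rear"]
seg_of = lambda x, y: "front" if x >= 0 else "rear"
print(assign_states(segs, [(5.0, 0.0, 0.8), (-3.0, 1.0, 0.4)], seg_of))
# → {'front': 0.8, 'rear': 0.4}
```

A discrete state ("occupied"/"unoccupied") would follow from thresholding these values.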
Where a vehicle is referred to in the present case, it is preferably a multi-track motor vehicle (car, truck, van). This results in several advantages explicitly described in this document as well as several other advantages that will be apparent to the person skilled in the art.
Although the invention has been illustrated and explained in detail by preferred embodiments, the invention is not restricted by the disclosed examples, and other variations may be derived by the person skilled in the art without leaving the scope of protection of the invention. It is therefore clear that a wide range of variations is possible. It is also clear that the embodiments described are merely examples, which are not in any way to be understood as limiting the scope of protection, the possible applications or the configuration of the invention. Rather, the preceding description and the description of the figures enable the person skilled in the art to implement the exemplary embodiments concretely, wherein the person skilled in the art, being aware of the disclosed inventive concept, may make various changes, for example with regard to the function or the arrangement of individual elements mentioned in an exemplary embodiment, without leaving the scope of protection defined by the claims and their legal equivalents.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 10 2018 115 895.5 | Jun 2018 | DE | national |

| Filing Document | Filing Date | Country | Kind |
| --- | --- | --- | --- |
| PCT/DE2019/100558 | 6/18/2019 | WO | 00 |