The invention relates to a method and to a device for determining whether a vehicle can pass through an object by means of a (spatially resolving) 3-D camera.
Objects extending over a roadway, such as bridges, tunnels or overhead road signs, are recognized as obstacles in particular by radar sensors, without the radar data allowing a reliable estimation as to whether a vehicle can pass through said obstacle. Even when using mono cameras this estimation is often difficult to make.
DE 10234645 B4 shows a vertical stereo camera arrangement for a vehicle, by means of which a clearance height of a bridge can be estimated with sufficient accuracy from the position of the horizontal lower edge of the tunnel entry and the position of a horizontal edge between the tunnel front side and the roadway level. It is further stated that a combination of a horizontal and a vertical stereo arrangement enables all horizontal and vertical infrastructural components of road traffic to be captured and measured.
DE 10 2004 015 749 A1 also shows a device for determining the possibility for a vehicle to pass through. In front of an obstacle the clearance width and/or the clearance height are measured by means of a sensor unit. In addition it is proposed to monitor the course of the roadway ahead by means of a sensor of the sensor unit so as to be able to determine a height difference between an ascending roadway and the position of a beam spanning the roadway, if necessary.
DE 10 2009 040 170 A1 proposes to use a sensor unit with e.g. a stereo camera in order to determine a maximum clearance height and/or a minimum ground clearance and to drive a running gear actuator unit such that the maximum clearance height is not exceeded by the total height of the vehicle and the minimum ground clearance is adhered to, as long as this is possible for the region of the roadway to be passed through.
One approach to object recognition in stereo image data processing is shown by U. Franke et al. in 6D-Vision: Fusion of Stereo and Motion for Robust Environment Perception in Proceedings of DAGM-Symposium 2005, pp. 216-223. Here, the positions and velocities of many pixels are estimated simultaneously in three dimensions.
It is apparent that the methods and devices of the state of the art have disadvantages and can, under certain circumstances, give inaccurate estimates, e.g. when the measured width and height of the entry region alone are not sufficient to decide whether an obstacle can be passed through.
In view of the above, it is an object of at least one embodiment of the present invention to overcome said disadvantages and to give a more reliable estimation as to whether and how a subject vehicle can pass through an object.
The above object can be achieved by an embodiment of a method according to the invention, in which a 3-D camera records at least one image of the surroundings of the vehicle, preferably in a (potential) direction of travel. At least one trajectory is ascertained, on which the vehicle is likely to move. Said trajectory can be ascertained using image data from the 3-D camera, but it could also be ascertained in a different manner; in any case positional characteristics of the ascertained trajectory are available to the method, which enables a comparison to be made with the image data from the 3-D camera.
From the image data of the 3-D camera it is determined whether an object located above the trajectory is recognized and whether said object has one or more connections to the ground.
Objects located within or close to the vehicle trajectory can also be identified as potential obstacles, and it can be determined whether said objects form a connection above the trajectory.
Objects “hanging” above the roadway and having no connection to the ground in the region of the trajectory can also be determined.
For a corresponding object the dimensions and shape of the area between said object and the roadway (hereinafter also referred to as entry area) which is to be passed through according to the trajectory are determined from the image data. A determination of the shape can be based on object, clearance, image and/or disparity data from the 3-D camera and can use said data as a starting point for determining the precise dimensions of the area.
However, the method is not restricted to objects having a closed entry area within the image range. If, for example, an object is recognized which hangs above the roadway and which does not have a connection to the ground within the image range due to the pillars of the bridge being outside the field of view of the 3-D camera, only the (partial) area between the hanging object and the roadway which is shown in the image is measured and its shape determined. The same procedure can be applied if, for example, only the lateral bridge pillars of a very high bridge are shown in the image.
A more precise determination of the dimensions and the shape of the entry area can be advantageously achieved by sensor fusion, i.e. fusing the data from the 3-D camera with the data from other sensors, such as ultrasound, lidar, radar, etc.
By comparing the dimensions and the shape of the entry area with those of the vehicle, it is determined whether and how the vehicle can pass through the object. This means that it is also possible to ascertain a precise passage trajectory or a passage corridor along which the vehicle will not collide with the object.
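As a purely illustrative sketch of this comparison (not part of the claimed method), the following function checks at which lateral offsets a rectangular vehicle cross-section fits under a sampled clearance-height profile of the entry area; the function name, the sampling scheme and the safety margin are assumptions made for this example.

```python
import numpy as np

def passable_offsets(entry_heights, lateral_coords, veh_width, veh_height, margin=0.1):
    """Lateral offsets at which a rectangular vehicle cross-section fits
    under the measured clearance-height profile of the entry area.
    The rectangular vehicle model and the 0.1 m safety margin are
    illustrative assumptions."""
    h = np.asarray(entry_heights, dtype=float)
    x = np.asarray(lateral_coords, dtype=float)
    offsets = []
    for centre in x:
        # lateral span occupied by the vehicle when centred at this offset
        span = (x >= centre - veh_width / 2) & (x <= centre + veh_width / 2)
        # the vehicle fits if every clearance sample over its span
        # exceeds the vehicle height plus the margin
        if span.any() and h[span].min() >= veh_height + margin:
            offsets.append(float(centre))
    return offsets
```

For an arched entry profile this yields only a narrow central corridor of feasible lateral positions, even when the apex clearance exceeds the vehicle height, which is exactly the kind of case a pure maximum-clearance-height check would miss.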
This information can preferably be communicated to the driver. The driver can also be assisted in steering into the passage, or the vehicle could be steered into it automatically, if passing through said object is possible.
If a passage is not possible, a warning can be issued to the driver or an intervention can even be made in the vehicle control system.
Due to the precise determination of the dimensions and in particular the shape of a passage area, the invention also enables the recognition that a vehicle does not fit through a passage even though the total height of the vehicle is below the maximum clearance height of the object. This is the case, for example, when a truck having a rectangular cross-section attempts to drive through a tunnel with a rounded profile whose maximum clearance height is sufficient but whose lateral clearance heights are too low.
In an advantageous embodiment, in addition to the dimensions and shape of the two-dimensional entry area, the dimensions and shape of the three-dimensional passage space between the object and the roadway surface through which the vehicle is to pass are also determined or estimated from the image data. This can be done by means of the image data from the 3-D camera. If parts of the passage space which can be passed through are hidden in the image data, the shape and dimensions of the actual passage space can be estimated from the data available.
The determination of the area or the space to be passed through can preferably be made using a depth map, in particular a disparity map, from the image data provided by the 3-D camera.
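For a calibrated stereo camera the standard relation Z = f·B/d links disparity d (in pixels), focal length f (in pixels) and baseline B (in metres) to metric depth Z. A minimal sketch of this conversion, with illustrative parameter values, could look as follows:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map to a metric depth map via Z = f*B/d.
    Pixels with zero or invalid (non-positive) disparity are mapped to
    infinity. Parameter values used with this sketch are assumptions."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```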
From the depth map or disparity map, edges of the entry area and/or the space which can be passed through can be advantageously determined.
As an alternative, the edges of the entry area can be determined from the image data by means of an edge detection algorithm, for example by means of a Canny or Sobel operator.
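A minimal sketch of the Sobel variant mentioned above, written directly in numpy rather than with an image-processing library; the threshold value is an illustrative assumption:

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img, thresh=1.0):
    """Binary edge map from the gradient magnitude of the Sobel operator,
    computed over the valid (border-free) interior of the image."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * SOBEL_X).sum()
            gy[i, j] = (patch * SOBEL_Y).sum()
    return np.hypot(gx, gy) > thresh
```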
According to an advantageous further development of the invention the determination of the dimensions and shape of the entry area or the space which can be passed through, made using the depth map, can be combined with those from edge detection by means of an intensity or color analysis of the pixels.
The dimensions and shape of the area or the space to be passed through can be preferably determined via a sequence of multiple images provided by the 3-D camera. Thus, for example, the spatial shape of a tunnel which has been partially hidden at the beginning can be updated and completed as to its dimensions when advancing into the tunnel.
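One conceivable way to accumulate such a profile over an image sequence is sketched below: each frame contributes a clearance-height profile in which laterally hidden bins are marked as NaN, and where a bin has been measured in several frames the more restrictive (smaller) clearance is kept. The representation as per-bin clearance heights is an assumption made for this example.

```python
import numpy as np

def update_profile(profile_so_far, new_measurement):
    """Merge a newly measured clearance-height profile into the estimate
    accumulated over the image sequence. NaN marks bins that were hidden
    in a frame; np.fmin ignores NaNs, so hidden bins are filled in as
    they become visible and measured bins keep the smaller clearance."""
    a = np.asarray(profile_so_far, dtype=float)
    b = np.asarray(new_measurement, dtype=float)
    return np.fmin(a, b)
```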
The dimensions and shape of the area or the space to be passed through can also be determined by exploiting the motion of the vehicle itself, taking into account in particular the resulting motion of the 3-D camera.
For this purpose, a 3-D scene reconstruction can be made from the image data of the 3-D camera, for example using the optical flow method.
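To illustrate the underlying geometry with a deliberately simplified case (pure forward translation of the camera along its optical axis, a static scene point, and a known travelled distance T): a point observed at image radius r1 from the focus of expansion moves to r2 > r1 after the camera advances by T, giving its depth at the first frame as Z = T·r2/(r2 − r1). This restricted motion model is an assumption of the sketch, not of the invention.

```python
import numpy as np

def depth_from_radial_flow(r1, r2, forward_motion_m):
    """Depth (at the first frame) of a static point from its radial
    optical flow, assuming the camera translates purely along its
    optical axis by forward_motion_m metres: Z = T * r2 / (r2 - r1)."""
    r1 = np.asarray(r1, dtype=float)
    r2 = np.asarray(r2, dtype=float)
    return forward_motion_m * r2 / (r2 - r1)
```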
Advantageously a determination of the spatial points of the objects closest to the vehicle or the 3-D camera is made in different height segments, assuming that the spatial points are vertically above one another (e.g. in the case of rectangular segments of entry areas).
Here, the measured distances in the longitudinal and transverse directions of the vehicle or the trajectory can be preferably weighted differently.
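The two steps above can be sketched together as follows: the 3-D points are binned into height segments, and within each segment the point with the smallest anisotropically weighted distance to the vehicle is selected. The weight values and the coordinate convention are illustrative assumptions.

```python
import numpy as np

def closest_points_per_height_segment(points, seg_edges, w_long=1.0, w_lat=2.0):
    """For each height segment, return the index of the 3-D point closest
    to the vehicle under a weighted distance that penalises lateral
    offsets more strongly than longitudinal ones (weights are
    illustrative assumptions).
    points: (N, 3) array of (x_longitudinal, y_lateral, z_height)."""
    pts = np.asarray(points, dtype=float)
    dist = np.hypot(w_long * pts[:, 0], w_lat * pts[:, 1])
    result = {}
    for lo, hi in zip(seg_edges[:-1], seg_edges[1:]):
        in_seg = (pts[:, 2] >= lo) & (pts[:, 2] < hi)
        if in_seg.any():
            # global index of the minimum weighted distance in this segment
            idx = np.flatnonzero(in_seg)[np.argmin(dist[in_seg])]
            result[(lo, hi)] = int(idx)
    return result
```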
The 3-D camera is preferably a stereo camera or a photonic mixing device camera (PMD sensor).
The invention further comprises a device for determining whether a vehicle can pass through an object. The device comprises a 3-D camera for recording at least one image of the surroundings ahead of the vehicle. Moreover, the device comprises means for ascertaining at least one trajectory on which the vehicle is likely to move. In addition means are provided for determining from the image data of the 3-D camera whether an object located above the trajectory is recognized and whether said object has one or more connections to the ground. Finally, means are provided for determining whether a passage through the object is possible by determining from the image data the dimensions and shape of the area or space between the object and the roadway.
The invention will now be explained with reference to figures and exemplary embodiments, in which
In
In addition to the maximum height of the vehicle, it also depends on the shape (height profile) of the vehicle, the width of the vehicle and the lateral position of the vehicle inside the tunnel.
The arc-shaped entry area of the tunnel is defined by the left (a) and right (b) boundaries of the tunnel entry, both extending vertically, and by the upper boundary (c) of the tunnel entry, which is curved.
The interior space of the tunnel, i.e. the space which can be passed through, is defined by the left (d) and right (f) boundaries of the inside of the tunnel, which could be referred to as tunnel walls, and by the tunnel ceiling (e) (or the upper boundary of the inside of the tunnel). The roadway (g) describes a curve inside the tunnel. The interior space of the tunnel is therefore curved accordingly in the longitudinal direction. The shape of the space (the tunnel) to be passed through is predetermined by the interior space of the tunnel, the roadway surface (g) acting as a bottom boundary.
Edges of the entry area extend between the boundaries of the tunnel entry (a, b, c) and the boundaries of the inside of the tunnel (d, f, e). The bottom edge of the entry area extends where the roadway surface (g) is intersected by the area defined by the edges of the tunnel entry described above.
The image data of a 3-D camera can be used to determine the shape and dimensions of both the (entry) area and the space to be passed through.
Functions such as the following can be realized using these determinations:
In
Even though this determination is in principle also possible with the state of the art and is usually sufficient, there may be critical situations: e.g. a local elevation of the ground on the roadway under the bridge, caused for example by an object on the roadway (not illustrated in
Number | Date | Country | Kind |
---|---|---|---|
10 2011 113 077 | Sep 2011 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/DE2012/100203 | 7/6/2012 | WO | 00 | 3/6/2014 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2013/034138 | 3/14/2013 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5530420 | Tsuchiya | Jun 1996 | A |
5710553 | Soares | Jan 1998 | A |
6677986 | Poechmueller | Jan 2004 | B1 |
7259660 | Ewerhart et al. | Aug 2007 | B2 |
7289018 | Ewerhart et al. | Oct 2007 | B2 |
8352112 | Mudalige | Jan 2013 | B2 |
20050012603 | Ewerhart | Jan 2005 | A1 |
20050143887 | Kinoshita | Jun 2005 | A1 |
20060013438 | Kubota | Jan 2006 | A1 |
20060245653 | Camus | Nov 2006 | A1 |
20060287826 | Shimizu | Dec 2006 | A1 |
20080049150 | Herbin et al. | Feb 2008 | A1 |
20080049975 | Stiegler | Feb 2008 | A1 |
20090121852 | Breuer et al. | May 2009 | A1 |
20090169052 | Seki | Jul 2009 | A1 |
20100098297 | Zhang | Apr 2010 | A1 |
20100315505 | Michalke et al. | Dec 2010 | A1 |
Number | Date | Country |
---|---|---|
197 43 580 | Apr 1999 | DE |
102 34 645 | Feb 2004 | DE |
102004015749 | Dec 2004 | DE |
102004010752 | Sep 2005 | DE |
102006053289 | May 2008 | DE |
102009040170 | Apr 2010 | DE |
102009050492 | Dec 2010 | DE |
102009028644 | Feb 2011 | DE |
102010013647 | Feb 2011 | DE |
102011106173 | Feb 2012 | DE |
1 209 485 | May 2002 | EP |
1 892 688 | Feb 2008 | EP |
10-062162 | Mar 1998 | JP |
11-139225 | May 1999 | JP |
2010-282615 | Dec 2010 | JP |
Entry |
---|
International Search Report of the International Searching Authority for International Application PCT/DE2012/100203, mailed Oct. 2, 2012, 2 pages, European Patent Office, HV Rijswijk, Netherlands. |
PCT International Preliminary Report on Patentability including English Translation of PCT Written Opinion of the International Searching Authority for International Application PCT/DE2012/100203, issued Mar. 12, 2014, 7 pages, International Bureau of WIPO, Geneva, Switzerland. |
German Search Report for German Application No. 10 2011 113 077.6, dated Jun. 11, 2012, 5 pages, Muenchen, Germany, with English translation, 5 pages. |
Uwe Franke et al., “6D-Vision: Fusion of Stereo and Motion for Robust Environment Perception”, in Proceedings of DAGM-Symposium, 2005, DaimlerChrysler AG, Stuttgart, Germany, pp. 216 to 223. |
Number | Date | Country | Kind
---|---|---|---|
20140218481 | Aug 2014 | US | A1