The present invention relates to a method for representing a vehicle's surrounding environment on a human-machine interface in a driver assistance system. The present invention also relates to a corresponding computer program product and to a corresponding driver assistance system.
Driver assistance systems are auxiliary devices in a vehicle whose purpose is to assist the driver in maneuvering the vehicle. Driver assistance systems typically encompass different subsystems, such as a park assist system, a navigation system or a blind spot warning system, that employ a sensor system to monitor the vehicle's surrounding environment. Such sensor systems may include optical sensors, ultrasound sensors, radar sensors or LIDAR sensors, for example, which, individually or in combination, generate or process data relating to the vehicle's surrounding environment.
European Patent Application EP 2 058 762 A2 discusses a method for directly representing distances of objects from a vehicle in a bird's-eye view. First, an image acquisition unit detects objects located in the surroundings of the vehicle. If an object is detected that is expected to collide with the vehicle at a specific height above the ground, a virtual plane of projection is then created at the height of the collision point. In the course of the image processing, the pixels of the image recorded by the image acquisition unit are projected onto this plane, and an image is thus generated from the bird's-eye perspective. In addition, pixels between the vehicle and the collision point are projected onto a plane at the level of the roadway, while more remote pixels are projected onto the virtual plane of projection.
German Patent Application DE 10 2010 010 912 A1 relates to a driver assistance device having an optical representation of captured objects. The driver assistance device includes a sensor device for recording objects in the surroundings of the vehicle, as well as a display device. The sensor device can measure ground coordinates of the object as 2D or 3D positional data, which the display device then uses in the perspective representation to position a symbol that symbolizes the object in the plan view. A symbol of the captured object is thus placed in the perspective representation. A plurality of recorded images of the area around the vehicle are needed to be able to represent the entire surroundings of the virtual vehicle. These recorded images are preprocessed, and the information contained therein is used to produce a bird's-eye perspective representation.
German Patent Application DE 10 2005 026 458 A1 relates to a driver assistance system for a vehicle. The driver assistance system includes an evaluation unit which, by analyzing sensor signals, determines distance data to objects captured at close range to the vehicle. The distance data thus determined are represented as an object contour on an optical display in relation to a schematic plan view of the particular vehicle.
Today's multi-camera systems used in automobiles compute a combined view from the images of a plurality of cameras installed in the vehicle. Through the use of a virtual camera, different views can be rendered from the images of the plurality of cameras installed in the vehicle. This allows the driver to see the entire immediate surroundings of the vehicle simply by glancing at the head-up display. Thus, using such a system, the driver can gain an overview of blind spots.
The method provided by the present invention makes it possible to detect raised objects in the vicinity, i.e., in the surroundings of the vehicle, in the course of image processing and/or on the basis of other sensor systems, for example laser, radar, lidar, ultrasound or stereo-camera systems, to name just a few. Instead of projecting these detected objects into the plane, the approach according to the present invention provides that, upon detection of a raised object, the coordinates of the object's base point be determined. In a subsequent method step, the plane of projection is raised in front of the object, so that raised objects in the surroundings of the vehicle are no longer projected as cast shadows onto the plane, but rather are discernible as raised objects in the various selectable views of the virtual camera. This procedure leads to an improved representation of the surroundings and to a better rendering of the ambient situation for the driver, within which he/she is able to orient himself/herself more easily, since the image has a more natural appearance and a more intuitive effect.
If the height of the object can additionally be determined, then this is likewise taken into account in the modification of the plane of projection, making it possible to further improve the view.
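By way of a non-limiting illustration only, the following minimal sketch (in Python, with purely hypothetical names such as adapted_projection_profile, base_distance and object_height, none of which are taken from the present application) indicates how, in a simplified two-dimensional cross-section, a plane of projection could be kept in the roadway plane up to the base point of a detected raised object and raised vertically from that point on, optionally capped at the determined object height.

```python
# Minimal sketch of an adapted plane of projection in a 2D cross-section.
# All names (adapted_projection_profile, base_distance, object_height) are
# illustrative assumptions and are not taken from the application text.

def adapted_projection_profile(s, base_distance, object_height=None):
    """Map a ground distance s (measured from the vehicle outward, in meters)
    to a point (x, z) on the projection surface.

    Up to the base point of the detected raised object the surface lies in
    the roadway plane (z = 0); beyond the base point it is raised vertically,
    optionally capped at the determined object height."""
    if s <= base_distance:
        # Horizontal region: project onto the roadway plane.
        return (s, 0.0)
    # Vertical region: the plane of projection is raised in front of the object.
    height = s - base_distance
    if object_height is not None:
        height = min(height, object_height)
    return (base_distance, height)


if __name__ == "__main__":
    # Example: a raised object whose base point lies 2.5 m ahead, 1.2 m tall.
    for s in (0.5, 1.5, 2.5, 3.0, 4.5):
        print(s, adapted_projection_profile(s, base_distance=2.5, object_height=1.2))
```

In an actual surround-view system, such a profile would be evaluated per viewing direction when building the projection surface; the sketch merely illustrates the piecewise geometry described above.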
The advantages of the present approach are evident above all in that raised objects are represented more naturally and intuitively by raising the plane of projection. As a result, the representation no longer has an artificial character, and the driver is provided with a more natural representation of his/her immediate surroundings.
The present invention is explained below in greater detail with reference to the drawings.
From the representation in accordance with
A bird's-eye perspective 20 is shown exemplarily in the representation in accordance with
From camera images 12, 14, 16, 18, a curved, all-around display 22 in a three-dimensional view may also be created that images the surroundings of vehicle 30. Instead of projecting camera images 12, 14, 16 and 18 onto a plane, the individual images may be projected onto a cylindrically curved surface, making it possible to significantly improve the representation, particularly with regard to reproducing areas that are located further away.
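As a purely illustrative sketch of such a cylindrically curved projection surface (again in Python, with hypothetical names and a simplified ray model that are not taken from the present application), a viewing ray emanating from a camera may be intersected with a vertical cylinder centered on the vehicle instead of with the flat roadway plane; pixels whose rays reach the cylinder are then mapped onto its curved wall.

```python
import math

# Minimal sketch: intersect a camera viewing ray with a vertical cylinder of
# a given radius centered on the vehicle instead of with the flat roadway
# plane. The ray model and all names are illustrative assumptions.

def intersect_ray_with_cylinder(origin, direction, radius):
    """Return the point where the ray origin + t * direction (t > 0) leaves
    the cylinder x^2 + y^2 = radius^2, or None if no such point exists."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dy * dy
    if a == 0.0:
        return None  # Ray is purely vertical and never reaches the cylinder wall.
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b + math.sqrt(disc)) / (2.0 * a)  # Larger root: exit point, looking outward.
    if t <= 0.0:
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)


if __name__ == "__main__":
    # Camera mounted 1.5 m above ground, looking forward and slightly downward.
    print(intersect_ray_with_cylinder((0.0, 0.0, 1.5), (1.0, 0.0, -0.1), radius=8.0))
```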
A plane of projection will become apparent from the representation in accordance with
As the representation in accordance with
It is readily apparent from the ray path plotted in
In accordance with the illustration in
An adapted plane of projection, which extends in front of the vehicle, is readily apparent from the representation of
As
To complete this description, it is noted that, in the representation in accordance with
The transition point where horizontal region 36 of plane of projection 46 transitions into vertical region 38 essentially coincides with base point 48 of the at least one detected, raised object 34.
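One possible formalization of this geometry, stated here only as an illustrative assumption in a two-dimensional cross-section, describes plane of projection 46 as a piecewise curve parameterized by the ground distance $s$ from vehicle 30, with $d_{48}$ denoting the distance of base point 48:

$$
P(s) =
\begin{cases}
(s,\ 0), & 0 \le s \le d_{48} \quad \text{(horizontal region 36)}\\
\bigl(d_{48},\ s - d_{48}\bigr), & s > d_{48} \quad \text{(vertical region 38)}
\end{cases}
$$

The transition between the two cases occurs exactly at $s = d_{48}$, i.e., at base point 48 of the detected, raised object 34.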
Another adaptation of the plane of projection, in particular in consideration of the size of the at least one detected, raised object, is inferable from the representation in accordance with
As
If, as indicated in
As is indicated in connection with
In addition, in the representation in accordance with
With regard to the sensor systems used in the present context for implementing the method according to the present invention, most notably ultrasound sensors, radar sensors, laser sensors, stereo cameras, as well as structure-from-motion systems, a mono camera and the like are suited. Using such sensor systems that capture the surrounding environment of vehicle 30, it is possible to record the measured values of raised object 34. The recorded measured values may be converted into a position relative to any desired coordinate system, for example.
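As a minimal, purely illustrative sketch of such a conversion (in Python, with hypothetical names such as sensor_to_vehicle_frame; the sensor model and the example mounting values are assumptions, not taken from the present application), a range/bearing measurement from a distance sensor with a known mounting pose on the vehicle may be transformed into Cartesian coordinates of a vehicle-fixed coordinate system:

```python
import math

# Minimal sketch of converting a recorded measured value into a position in a
# chosen coordinate system: a range/bearing reading from a distance sensor
# whose mounting pose on the vehicle is known. All names, the sensor model and
# the example values are illustrative assumptions.

def sensor_to_vehicle_frame(range_m, bearing_rad, mount_x, mount_y, mount_yaw):
    """Convert a polar measurement in the sensor frame into Cartesian
    coordinates in a vehicle-fixed frame (x forward, y to the left,
    origin at the vehicle reference point)."""
    # Measurement expressed in the sensor's own frame.
    xs = range_m * math.cos(bearing_rad)
    ys = range_m * math.sin(bearing_rad)
    # Rigid-body transform: rotate by the mounting yaw, then translate.
    xv = mount_x + xs * math.cos(mount_yaw) - ys * math.sin(mount_yaw)
    yv = mount_y + xs * math.sin(mount_yaw) + ys * math.cos(mount_yaw)
    return xv, yv


if __name__ == "__main__":
    # Sensor at the front-right corner of the vehicle, rotated 30 degrees
    # outward, reporting an object 1.8 m away straight ahead of the sensor.
    print(sensor_to_vehicle_frame(1.8, 0.0, 3.6, -0.8, math.radians(-30.0)))
```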
In connection with
The present invention is not limited to the exemplary embodiments described here and the aspects emphasized therein. Rather, within the subject matter indicated by the pending claims, a multiplicity of modifications are possible that reside within the scope of expert activity.
Number | Date | Country | Kind
---|---|---|---
10 2011 084 554.2 | Oct 2011 | DE | national
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/EP2012/068794 | 9/24/2012 | WO | 00 | 4/8/2014