The present invention relates to a method for maneuvering a vehicle, in particular for maneuvering a vehicle in a parking space. The present invention also relates to a maneuvering assistance system.
“Parking assistants” for vehicles such as passenger cars are available. These parking assistants are usually provided by maneuvering assistance systems and by methods for maneuvering vehicles.
More cost-effective maneuvering assistance systems based on reverse travel cameras offer the opportunity of monitoring the region behind the vehicle on a monitor when driving in reverse. However, areas that are not covered by the reverse travel camera, for instance the regions to the side of the vehicle, cannot be displayed. In maneuvering assistance systems of this type, which are based on reverse travel cameras, the boundary lines or other structures restricting or characterizing the parking space are no longer detected, in particular at the end of the parking maneuver, and thus are no longer displayed on the monitor.
In addition, what are referred to as surround-view systems are also available. Such surround-view systems are typically based on multiple cameras, for instance three to six, and offer an excellent all-around view that can be displayed on a monitor of a maneuvering assistance system. As a result, such maneuvering assistance systems allow a precise alignment of a vehicle along parking lines or other structures restricting a parking space. However, the higher cost of the multiple cameras is a disadvantage of such surround-view systems.
It is an object of the present invention to provide a method for maneuvering a vehicle, and to provide a maneuvering assistance system that allows a precise alignment of a vehicle along structures bounding a parking space and that can be made available in a cost-effective manner.
According to the present invention, an example method for maneuvering a vehicle, in particular for maneuvering a vehicle into a parking space, is provided, which includes the following steps:

a) sensing a first region by a first image at a first instant;

b) detecting at least one first element within the first region in the first image, and/or storing the first image;

c) sensing a second region by a second image at a second instant, the second region lying at least partially outside the first region;

d) calculating the position of the first element detected at the first instant, and/or of the first image, in relation to the second instant; and

e) inserting the first image and/or the first element as a virtual element into the second image at the calculated position and displaying it.
The vehicle may be any vehicle, in particular any road vehicle. For example, the vehicle is a passenger car, a truck, or a bus.
The regions, such as the first region and the second region, which are sensed by images at different instants, are outer regions, that is to say, regions that lie outside the vehicle. Preferably, these are horizontal or three-dimensional regions. The individual regions are the regions that are sensed, or are able to be sensed, by an image recording system, e.g., a camera, on or inside the vehicle. For example, the first region and the second region may each be the rear region of the vehicle that is sensed by a reverse travel camera at the respective instant.
In a step a), the rear region of a vehicle, able to be sensed by a reverse travel camera, is thus recorded as the first region at a first instant with the aid of a first image. Using suitable algorithms, e.g., an algorithm for line detection, a first element within this first region is detected in the first image in step b). As an alternative or in addition, the first image, or the image information of the first image, is stored or buffer-stored in step b). At a second instant, a second region, such as the region that is able to be sensed by a reverse travel camera of the vehicle at this instant, is sensed by a second image in step c). At the second instant, the vehicle preferably is no longer at the same location as at the first instant. In other words, the vehicle has moved between the first instant and the second instant, for instance has backed up. The second region is therefore not identical to the first region, which means that the second region lies at least sectionally or partially outside the first region. For example, the first region and the second region may overlap each other. Furthermore, the first region and the second region may abut each other.
In step d), the position of the detected first element at the second instant is calculated with the aid of suitable algorithms. Since the detected first element is located in the part of the first region that does not overlap the second region, and thus lies outside the second region at the second instant, the first element can no longer be detected by an image recording system, such as a reverse travel camera, at the second instant. The position of the first element at the second instant is therefore calculated. As an alternative or in addition, the position of the first image in relation to the second instant is calculated in step d). In step e), the first image and/or the first element is inserted as a virtual element, e.g., as a line drawing, into the second image at this calculated position and displayed.
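Assuming a planar ground model and known ego-motion, the bookkeeping behind steps a) through e) can be sketched as follows. All names (`ElementStore`, `to_world`, `to_vehicle`) and the point-based element representation are illustrative assumptions, not a concrete implementation of the invention:

```python
import math

def to_world(pose, px, py):
    """Transform a point from the vehicle frame at `pose` into world
    coordinates. pose = (x, y, yaw), with yaw in radians."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * px - s * py, y + s * px + c * py)

def to_vehicle(pose, wx, wy):
    """Transform a world point into the vehicle frame at `pose`."""
    x, y, yaw = pose
    dx, dy = wx - x, wy - y
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * dx + s * dy, -s * dx + c * dy)

class ElementStore:
    """Keeps elements detected in step b) in world coordinates so that
    their positions can be recomputed for later instants (step d))."""
    def __init__(self):
        self.elements = []  # world-frame points of detected structures

    def add(self, pose, points_vehicle_frame):
        # Step b): store detections, anchored to the pose at detection time.
        self.elements += [to_world(pose, px, py)
                          for px, py in points_vehicle_frame]

    def project(self, pose):
        """Step d)/e): positions of all stored elements relative to the
        current pose, ready to be drawn as virtual elements."""
        return [to_vehicle(pose, wx, wy) for wx, wy in self.elements]

# First instant: vehicle at the origin detects a parking-line point
# 2 m behind the rear axle, 0.5 m to the side.
store = ElementStore()
store.add((0.0, 0.0, 0.0), [(-2.0, 0.5)])

# Second instant: the vehicle has backed up 1.5 m without turning;
# the stored point is recomputed relative to the new pose.
virtual = store.project((-1.5, 0.0, 0.0))
print(virtual)
```

In a real system the projected points would then be rasterized into the current camera image rather than printed.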
The particular images, e.g., the first image and the second image, preferably are displayed on a screen or a monitor in the vehicle at the particular instant. The displayed images preferably include more than just the region sensed at that instant. For example, the position of the first element outside the second region is also displayed in the second image, as a virtual first element. In addition, for example, the vehicle, or at least the current position of the vehicle, is shown as a further virtual element in the images, such as the first image and the second image.
The intervals between the instants, e.g., between the first instant and the second instant, may be of any suitable length. For example, these time intervals may lie in the second or millisecond range.
With the aid of the method of the present invention for maneuvering a vehicle, a region sensed by a camera, as well as the elements detected in this region, can be continually projected into the region outside the currently sensed camera region as a function of the vehicle movement. This gives the driver the opportunity to use the static structures in the image for orientation, for instance.
For example, a current camera image may be augmented by virtual supplemental lines, the positions of which have been calculated from previously detected visible lines. The calculation, or the rendering, preferably takes place on a 3D processor (GPU) of a head unit of the vehicle or of the maneuvering assistance system.
It is furthermore preferred that at least one second element within the second region is detected in the second image in a further step f). As an alternative or in addition, the second image, or the image information of the second image, is stored or buffer-stored in step f). Preferably, in a step g), a third region is then sensed by a third image at a third instant, the third region lying at least partially outside the second region and preferably also partially outside the first region. The third instant preferably follows the first and the second instant. In a following step h), the position of the second element detected in the second image at the second instant preferably is calculated in relation to the third instant; this position lies outside the third region. As an alternative or in addition, the position of the second image in relation to the third instant is calculated in step h). Moreover, in a next step, the second image and/or the second element preferably is inserted into the third image as a virtual second element at the position calculated in step h) and displayed there.
It is furthermore preferred that the individual steps are repeated at predefined time intervals. It would moreover be possible to repeat the individual steps whenever the vehicle has traveled a predefined distance. By repeating the individual method steps, abutting or partially overlapping further regions are able to be sensed with the aid of further images at successive points in time. Moreover, additional elements within these further regions are detectable in the further images, and the particular positions of these further elements can be calculated in relation to the following point in time, inserted into the current image, and displayed there as virtual further elements.
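A simple trigger for repeating the method steps, either after a predefined time interval or after a predefined distance traveled, could look as follows; the thresholds (100 ms, 5 cm) and the function name are illustrative assumptions:

```python
def should_update(t_now, t_last, dist_since_last,
                  max_dt=0.1, max_dist=0.05):
    """Repeat the sensing/projection steps either after a fixed time
    interval (here 100 ms) or after a predefined distance traveled
    (here 5 cm), whichever condition is met first."""
    return (t_now - t_last) >= max_dt or dist_since_last >= max_dist

print(should_update(1.05, 1.0, 0.0))   # only 50 ms elapsed, no distance
print(should_update(1.05, 1.0, 0.06))  # 6 cm traveled triggers an update
```

Coupling the repetition to distance as well as time keeps the virtual elements aligned even during slow maneuvering, when little time-based change occurs between frames.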
When the image is output on a monitor of a maneuvering assistance system in the vehicle, for instance, the viewer of the individual current image is given the impression that the vehicle is virtually sliding or moving over the regions sensed at earlier instants.
The method for maneuvering a vehicle preferably is based only on a reverse travel camera and/or a forward travel camera (front camera). The lateral regions next to the vehicle thus cannot be sensed by cameras. When viewing the current image on a monitor of a maneuvering assistance system, however, this method makes it possible to continue displaying elements from regions that are no longer able to be sensed.
The elements, such as the first element and/or the second element and/or a third element and/or further elements, preferably are what are known as static structures, for instance structures bounding or characterizing a parking space. Such static structures may, for example, be lines marked on the ground that restrict a parking space. Moreover, the characterizing structures may be static structures within the parking space, e.g., manhole covers or drains. In particular, these elements may also be portions of larger or longer structures that are sensed completely or sectionally by the image recording system, such as a camera, at the particular instant. Furthermore, the static structures may involve curbstone edges, parked vehicles, bollards, guard rails, walls, or other structures bounding a parking space.
It is moreover preferred that the part of the first region sensed by the first image that does not overlap the second region is displayed in the second image, outside the second region. It is thus preferably provided not only to project detected elements into the next region, but to project the complete image information of previously sensed camera images into the particular current image. The projected image portions preferably are characterized as virtual structures. This makes it possible to infer from the current image that a particular region of the image does not constitute “live” information. It is possible, for instance, to display such image portions in the style of comic art (3D art map), as a line drawing, ghost image, or vector field.
The calculation of the position of the first element detected in the first image at the first instant preferably takes place in relation to the second instant, based on a movement compensation. That is to say, the position is calculated while taking into account the movement of the vehicle that has taken place between the particular instants, e.g., between the first instant and the second instant. The calculation of the position in particular is based on a translation of the vehicle. A translation is a movement in which all points of a rigid body, in this case the vehicle, undergo the same displacement. Both the path covered, i.e., the distance, and the direction (e.g., when cornering) are sensed. Moreover, the calculation of the position preferably is based on the yaw angle of a camera disposed in or on the vehicle, or on the yaw angle of the vehicle. The yaw angle is the angle of a rotary or angular motion of the camera or the vehicle about its vertical axis. Taking the yaw angle into account therefore makes it possible, in particular, to consider the change in direction executed by the vehicle between the respective instants in the movement compensation.
In addition, the calculation of the position preferably is also based on a pitch angle and/or a roll angle of the camera or the vehicle. The pitch angle is the angle of a rotary or angular motion of the camera or the vehicle about its transverse axis. The roll angle is the angle of a rotary or angular motion of the camera or the vehicle about its longitudinal axis. This makes it possible to consider a change in the inclination of the vehicle or the camera relative to the road surface in the movement compensation.
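Restricted to the ground plane (translation plus yaw, with pitch and roll neglected), the movement compensation amounts to a single rigid-body transform. The following sketch illustrates this under that assumption; the function name and its interface are hypothetical:

```python
import math

def compensate_motion(px, py, ds, dyaw, heading=0.0):
    """Position of a static point, given as (px, py) in the old vehicle
    frame, expressed in the new vehicle frame after the vehicle has moved
    a distance `ds` along `heading` (rad, measured in the old frame) and
    rotated by the yaw change `dyaw` (rad).

    Pitch and roll are neglected here; they would additionally tilt the
    ground plane and can be included as extra rotations about the
    transverse and longitudinal axes."""
    # Shift the point into the new origin ...
    tx = px - ds * math.cos(heading)
    ty = py - ds * math.sin(heading)
    # ... then rotate it by the negative yaw change.
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    return (c * tx - s * ty, s * tx + c * ty)

# Backing up 1 m in a straight line: a line point 2 m behind the rear
# axle is afterwards only 1 m behind it.
print(compensate_motion(-2.0, 0.0, ds=-1.0, dyaw=0.0))  # -> (-1.0, 0.0)
```

The same transform, applied per frame with odometry increments, keeps previously detected elements registered to the current camera image while cornering as well as while driving straight.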
It is furthermore provided that further image information, e.g., images sensed and recorded at an earlier instant, is taken into account and used. Such further image information can be inserted into the current image and displayed. For example, this may also be what is known as external image information. External image information may be provided on storage media or by online map services, for example.
According to the present invention, a maneuvering assistance system for a vehicle, in particular for parking, is furthermore provided, the maneuvering assistance system being based on a previously described method for maneuvering a vehicle. The maneuvering assistance system has a first image sensor, disposed on or inside the vehicle and in the rear region of the vehicle, for sensing the first region by means of a first image. For example, the first image sensor is a first camera, in particular a reverse travel camera. Therefore, it is preferably provided that the first image sensor is generally directed toward the rear.
Moreover, the maneuvering assistance system has a second image sensor, disposed on or inside the vehicle in the front region of the vehicle, for sensing a further region by means of a further image. The second image sensor, for example, is a forward travel camera. Therefore, it is preferably provided that the second image sensor in principle is directed toward the front.
By providing a second image sensor, such as a second camera directed toward the front, the maneuvering assistance system is able to provide assistance not only for reverse travel of a vehicle, but for forward travel as well. For example, a camera facing forward makes it possible to display a parking maneuver on a screen when driving forward, too, because all described features of the method for maneuvering a vehicle are also provided when using a camera directed toward the front.
Furthermore, it is preferably provided that the maneuvering assistance system includes no more than one or two image sensors, in particular cameras.
The maneuvering assistance system furthermore preferably includes sensors for sensing the vehicle's own movement, in particular its translation. The vehicle's own motion preferably is able to be ascertained with the aid of sensors and/or using odometry, an inertial sensor system, the steering angle, or directly from the image.
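As one possible way of combining odometry and steering angle, the ego-motion can be propagated with a simple kinematic bicycle model. The function below is an illustrative sketch; the wheelbase value and the interface are assumptions and not prescribed by the invention:

```python
import math

def odometry_step(x, y, yaw, v, steering_angle, dt, wheelbase=2.7):
    """Propagate the vehicle pose with a kinematic bicycle model:
    `v` is the speed from wheel odometry (negative when reversing),
    `steering_angle` the front-wheel angle in radians, and the yaw
    rate follows as v * tan(steering_angle) / wheelbase."""
    yaw_rate = v * math.tan(steering_angle) / wheelbase
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += yaw_rate * dt
    return x, y, yaw

# Reversing at 0.5 m/s with the wheels straight for one second
# moves the pose 0.5 m backward without any heading change.
print(odometry_step(0.0, 0.0, 0.0, v=-0.5, steering_angle=0.0, dt=1.0))
```

The resulting pose increments are exactly the translation and yaw change needed by the movement compensation described above.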
The present invention is explained below on the basis of preferred exemplary embodiments with reference to the figures.
Number | Date | Country | Kind |
---|---|---|---|
10 2013 217 699.6 | Sep 2013 | DE | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2014/066954 | 8/7/2014 | WO | 00 |