The invention concerns a headlight system for vehicles, preferably for motor vehicles according to the preamble of Claim 1.
Headlight systems for vehicles are known in which the headlights, with their corresponding illumination area, follow a change in direction (for example, when driving around a curve). This co-steering can be achieved by a mechanical coupling (cable-pull system) between the headlights and the steering, or other components that pivot when traveling around a curve.
Such a system only functions when the driver steers into a curve or turns out of his lane in another direction. In such systems the co-steering of the headlight is coupled to the steering via geometric steering parameters and acts at the same time as the steering process. None of these systems has true predictive behavior.
Systems are also conceivable that are supported by navigation systems with GPS.
The determined data are compared with data of street maps that are stored in present-day navigation systems. Current position data can also be evaluated via a GPS satellite system. Coupling purely mechanical information of the moving vehicle with stored map data, as well as with current position data determined via GPS satellite systems, makes it possible to co-steer the headlights according to the desired direction change, simultaneously or in anticipation, during turning or traveling around a curve. Current map material is necessary, however. Difficulties occur abroad, since map material is often lacking or small streets are not marked. On many trips in the immediate vicinity, for example on the way to work or shopping and the like, no navigation system is used; the system then does not know what the destination is. Anticipatory driving is therefore not possible with these systems.
Such headlight systems are best suited for simultaneous pivoting of the headlights while traveling around the curve. Simpler and older systems often react with a time offset, so that pivoting of the headlights only occurs once steering into the curve or turning has already been initiated.
Moreover, other systems are known in which the roadway and therefore the travel direction are detected via a sensor. Examples of such systems are adaptive cruise control systems. Studies on optical lane-holding systems are also known.
The purpose of these systems in the case of adaptive cruise control systems is to maintain a minimum distance to a vehicle traveling in front, or in the case of lane-holding systems to warn the driver before he leaves his lane.
The underlying task of the invention is to design the generic headlight system so that even before the direction change or direction alteration pivoting of the headlights can occur accordingly.
This task is solved in the generic headlight system according to the invention with the characterizing features of Claim 1.
In the headlight system according to the invention a prediction of the road layout is achieved without using GPS data or map material. The imaging sensor acquires images/data of the surroundings. From these data the image processing device determines the street layout in front of the vehicle and, by means of additional vehicle-specific information, for example the speed of the vehicle, the presumed further travel motion of the vehicle. The light beams can be adjusted accordingly. For example, before a change in travel direction occurs or is initiated, the illumination area in front of the vehicle is adjusted according to this change in travel direction. For this purpose a direction/speed factor is determined: the system records the area in front of the vehicle via an optical system and thereby identifies whether a direction change is to be expected or is not imminent. These environment-specific images/data are linked to vehicle data, like vehicle speed, set travel direction indicators and possibly steering values. From the environmental images/data and these vehicle data, signals are generated with which an anticipatory direction change of the emitted light beams of the headlight is possible. Predictive assumptions are additionally made; for example, a speed reduction before a street intersection recognized by the image processing unit indicates a desire of the driver to turn, to get his bearings, or to inspect a hazard site. Broader illumination is then necessary. Additional information, like setting of the travel direction indicator, recognition of steering movements or recognition of only one possible turning direction, among other things, refines the assumptions concerning the driver's wish.
By means of optically recorded environmental data the headlight system can identify objects on the edge of the roadway or roadway markings. These can be prominent objects that the system establishes as statically defined objects. From a number of detected images of the street layout situated in front of the vehicle, as well as its immediate edge surroundings, changes of these static objects can be tracked. The system therefore recognizes whether a curve, turn or intersection is to be expected during continued travel. In particular, by linking these environmental data with the vehicle data, an anticipatory direction change and therefore a corresponding headlight control is reliably guaranteed.
The position and coordinate change of the static objects (prominent objects) on the edges of the roadway can be determined during travel of the vehicle, so that the system can establish differences in their evaluation and define vectors from these differences as motion or speed vectors. Should the expected road layout veer to one side, the system will identify in all prominent objects, for example on the roadway edge, a continuous lateral change and therefore establish a motion vector. The size and length of this vector also change with the travel speed of the vehicle, so that the speed parameters during expected travel direction changes, especially strong ones like intersections or turns, can also be determined.
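The derivation of such a motion vector from prominent roadside objects can be sketched roughly as follows; the function names and coordinate convention are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch (assumed names): the motion/speed vector is derived from
# the frame-to-frame displacement of prominent objects on the roadway edge.

def motion_vector(prev_points, curr_points):
    """Average displacement (dx, dy) of matched prominent objects between
    two consecutive camera images, in image coordinates."""
    n = len(prev_points)
    dx = sum(c[0] - p[0] for p, c in zip(prev_points, curr_points)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_points, curr_points)) / n
    return dx, dy

# A continuous lateral shift of all prominent objects indicates that the
# road layout veers to one side:
prev = [(100, 50), (220, 48), (340, 52)]
curr = [(106, 50), (226, 49), (346, 52)]
dx, dy = motion_vector(prev, curr)  # dx > 0: layout drifting to one side
```

The vector's length grows with travel speed, matching the observation above that its size depends on the vehicle's velocity.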
Through such an anticipatory system that operates independently of stored data, such as street maps or current satellite data, a continuous illumination adjustment during travel is possible. This continuous adjustment advantageously changes the illumination area of the headlights in stepless fashion according to the determined direction changes and/or the speed of the vehicle.
Naturally the mentioned stored data can be used as a support; however, they are not absolutely essential for the headlight system according to the invention.
During slow travel the illumination range can therefore advantageously be reduced and the illumination of the side areas in front of the vehicle increased. During fast travel the illumination in the front lateral area can be reduced, whereas the illumination can be aligned farther ahead in the travel direction. During fast travel through a curve, illumination far forward is advantageous, in which a lateral widening of the illumination area occurs according to the curvature of the change in travel direction.
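The speed-dependent trade-off described above, a short range with wide lateral illumination when slow and a long range with a narrow lateral area when fast, could for example be realized by a simple interpolation. All numeric values here are illustrative assumptions.

```python
# Illustrative interpolation between the behaviors described above: at slow
# travel a short illumination range with wide lateral spread, at fast travel
# a long range with a narrow lateral area. All numbers are assumptions.

def illumination(speed_kmh, min_range=25.0, max_range=120.0,
                 max_spread=40.0, min_spread=10.0, v_max=130.0):
    """Return (range_m, lateral_spread_deg) for a given speed."""
    t = min(max(speed_kmh / v_max, 0.0), 1.0)                # normalized 0..1
    range_m = min_range + t * (max_range - min_range)        # farther when fast
    spread_deg = max_spread - t * (max_spread - min_spread)  # narrower when fast
    return range_m, spread_deg

r_slow, s_slow = illumination(20.0)    # short range, wide side illumination
r_fast, s_fast = illumination(130.0)   # long range, reduced side illumination
```

A linear interpolation is only one possibility; the characteristic maps mentioned later in the description could hold any empirically tuned curve instead.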
Even a standing vehicle is recognized as such; the illumination area then corresponds instead to a surroundings and position illumination.
The illumination can be adjusted steplessly or in finely graded steps.
For the possible situations of the forward and lateral illumination areas, characteristic maps or corresponding calculation formulas can be entered in the control system. All additional parameters or experience values can be stored in such controls and systems as binary code in memories, so that a corresponding illumination can be determined at any time.
Another variant consists of the fact that such a system can be programmed quasi-intelligently, in which the system is capable of identifying the driving behavior of a driver, and the characteristics entered in the memory can be made available for calculation of correspondingly adapted illumination areas. Such an individual adaptation can occur continuously, which has the major advantage that corresponding illuminations can be adapted depending on the traffic situation or the current capabilities of the driver.
Additional features of the invention are apparent from the additional claims, the description and the drawings.
The invention is further explained by means of practical examples depicted in the drawings. In the drawings
The optical recording area 6 of the camera 5, as shown in
In an enlarged recording area 6 an area of the street 2 extending farther forward, as well as the immediate street surroundings, are recorded. In this case not only the right roadway edge 22 and the middle roadway marking 24 are recorded, but also the left roadway edge 23, as well as prominent contours or geometries 25, 26 that are situated in the immediate recording area of the right or left street edge 22, 23. The camera 5 is advantageously designed so that it can record edge structures of the street that are larger than about 40 cm.
The image in
A camera image 20 with a street 2 and prominent objects 25, 26 in the edge area of the street is imaged in
In this way the direction of movement 27 and therefore the covered path 27 [sic] is recorded via the image processing unit 10. If now, deviating from the depiction in
The frequency of the camera images 20 recorded for evaluation is also of great significance, since at high travel speeds the distances of the prominent objects 25, 26 are correspondingly larger with the same image recording sequence. Conclusions concerning the travel speed can therefore also be drawn from the objects 25, 26.
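The relation noted above between image frequency, object displacement and travel speed allows a rough back-of-the-envelope estimate; the 30 Hz rate matches the image sequence mentioned in the description, while the metric per-frame displacement is an assumed, idealized input.

```python
# Rough estimate of the relation described above: at a fixed image rate the
# frame-to-frame displacement of a static roadside object grows with travel
# speed. The metric displacement here is an assumed, idealized input.

FRAME_RATE_HZ = 30.0  # assumed camera image sequence of 30 images per second

def speed_from_displacement(displacement_m):
    """Estimate vehicle speed in m/s from the apparent per-frame displacement,
    in metres, of a static prominent object (ground geometry ignored)."""
    return displacement_m * FRAME_RATE_HZ

v = speed_from_displacement(0.5)   # 0.5 m per frame -> 15.0 m/s
v_kmh = v * 3.6                    # 54.0 km/h
```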
During image processing, identical or very similar features of prominent objects 25, 26 are sought from one image to the next. The changes from image to image are generally small geometric changes which, for example in a camera image sequence of 30 images per second and with integration of all the smaller changes, lead to a larger change, a direction vector 30 of the actual movement of vehicle 1. This larger change describes the instantaneous motion or speed vector 30.
During a lateral change in prominent objects 25, 26 the image processing unit 10 will identify a direction change and therefore a direction vector 30. This direction vector 30 is the integration of geometric changes during a lateral shift from one image to the next recorded image.
During normal curved travel these lateral changes can be evaluated as slight changes. If an abrupt change in direction occurs, for example, a turn into another street, the direction vector 30 will experience an extremely large change to the side because of the lateral geometric changes of the prominent objects 25, 26.
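The distinction drawn above between normal curved travel and an abrupt turn can be sketched as an integration of per-frame lateral shifts; all numbers are made up for illustration.

```python
# Illustrative integration of per-frame lateral shifts into the direction
# vector 30: gentle curves accumulate small shifts, while an abrupt turn
# produces a much larger lateral component.

def integrate_direction(per_frame_shifts):
    """Sum small frame-to-frame lateral shifts into one direction value."""
    return sum(per_frame_shifts)

normal_curve = [0.2] * 30         # gentle, continuous lateral change
turn = [0.2] * 15 + [3.0] * 15    # abrupt change, e.g. a turn into a street

d_curve = integrate_direction(normal_curve)
d_turn = integrate_direction(turn)   # far larger lateral component
```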
It is therefore possible by means of changes of prominent objects 25, 26 situated essentially in front of vehicle 1 to recognize a change in direction and to determine this change in direction, as the calculation result of the image processing unit 10, from a motion vector/speed vector 30 with a corresponding variable direction. This occurs in the described manner for an area (street layout 21) situated mostly in front of vehicle 1. According to the travel speed of vehicle 1 it is naturally necessary that the area situated in front of the vehicle be detected farther out front at higher speed. At lower speeds a correspondingly smaller area is detected. A representation of the motion vector 30, as established in this example during curved travel, is imaged in
A simplified representation of the headlight system is shown in
The image recording unit 11 and the evaluation unit 12 form the image processing unit 10. Upon a planned change in direction, for example, a corresponding signal is sent by the image processing unit 10 to a light control device 13. This signal is evaluated by the light control device 13, which sends a corresponding control signal to the headlight 14. Adjustment motors are driven with the control signal. They pivot the headlight 14 in the direction of the travel direction change, or align it so that an illumination range adjusted to the present speed and corresponding to the roadway layout 21 (desired change in travel direction) is set.
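The signal path from the image processing unit through the light control device 13 to the adjustment motors of headlight 14 might be sketched as follows; the class names, the pivot limit and the clamping behavior are assumptions for illustration only.

```python
# Hypothetical sketch of the signal path: image processing unit -> light
# control device 13 -> adjustment motors of headlight 14.

class Headlight:
    def __init__(self):
        self.angle_deg = 0.0

    def pivot_to(self, angle_deg):
        # In the real system the adjustment motors act here.
        self.angle_deg = angle_deg

class LightControlDevice:
    PIVOT_LIMIT_DEG = 15.0  # assumed mechanical limit

    def __init__(self, headlight):
        self.headlight = headlight

    def on_direction_signal(self, direction_deg):
        # Evaluate the signal from the image processing unit and convert it
        # into a control signal for the adjustment motors, within limits.
        pivot = max(-self.PIVOT_LIMIT_DEG,
                    min(self.PIVOT_LIMIT_DEG, direction_deg))
        self.headlight.pivot_to(pivot)

lamp = Headlight()
LightControlDevice(lamp).on_direction_signal(8.0)  # pivots the headlight
```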
To explain the complex relations of image recording unit 11 and image evaluation unit 12 a flow chart of the entire image processing unit 10 is shown in
The image in front of vehicle 1 is recorded by camera 5. Naturally in addition to optical camera systems, other systems can be used. For example, ultrasonic systems, radar systems or laser systems can be involved here. All these systems are suitable for recording measurement data for processing or for furnishing the appropriate data necessary for this. The data so recorded are environment-specific data, since they make assertions concerning the environment outside the vehicle.
In the next step, for example, edges 31 are detected from these data/images, or other prominent objects 25, 26 are identified. This procedure is repeated from image to image, in which the image data so processed are combined in a subsequent step into an object hypothesis 32. In this object hypothesis 32 an attempt is made to combine individual prominent points or edges with similar features from several consecutive images into an object. If an object hypothesis, for example, edge 22 was confirmed along the vehicle (
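The object hypothesis step 32, re-finding prominent points with similar features over several consecutive images, can be sketched as a simple consistency check; the tolerance and minimum frame count are assumed values, not from the patent.

```python
# Illustrative consistency check for the object hypothesis step 32: a
# prominent point is accepted as an object only if it is re-found in several
# consecutive images with a small step-to-step displacement.

def confirm_hypothesis(positions, tolerance=10.0, min_frames=3):
    """Return True if a point track over consecutive images is consistent
    enough to be combined into one object."""
    if len(positions) < min_frames:
        return False
    return all(abs(b - a) < tolerance
               for a, b in zip(positions, positions[1:]))

edge_track = [102.0, 104.5, 107.0, 109.0]   # roadway edge re-found each frame
noise_track = [102.0, 250.0]                # inconsistent, rejected
confirmed = confirm_hypothesis(edge_track)  # True
rejected = confirm_hypothesis(noise_track)  # False
```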
These possibilities are identified and combined accordingly in the object classification step 33. This processing step leads to a number of object classes 34 to 38, which are then further processed into corresponding combined information and data.
For example, there is an object class 34 that records environmental conditions, like fog, rain, day or night. Another object class 35 mostly processes the street layout 21, as well as the corresponding edge conditions, for example edge structure, whereas another object class 36 identifies all data and information concerning other traffic participants (vehicles traveling in front or oncoming vehicles). Obstacles on the roadway are recognized in object class 37. These can be, for example, lost objects or stopped vehicles/objects. In another object class 38 it is possible to identify hazardous objects on the edge of the roadway. Such hazardous objects can be, for example, vehicles or persons approaching from the side. All these object classes 34 to 38 are generated with ordinary image processing algorithms and the generally expected logical relations. Blocks 39 to 43 describe functions for the system according to the invention that can be derived from object classes 34 to 38. From the derived functions 39 to 43, as well as object classes 34, 36, the illumination areas 44 to 47 can be determined via a characteristic map according to the invention and combined into an illumination area 48. This then advantageously gives the illumination area marked 16 in
These functional units 39 to 43 are capable of evaluating the information and data summarized in object classes 34 to 38 in detail and therefore according to the desired reaction. For example, traffic participants in front of the actual vehicle 1 can be identified in function 39, special situations, like intersections, in function 40, and a driver wish (selected travel direction indicator, speed) can be further specified in function 41. The function units 39 to 41 are determined, among other things, from object class 35. Object class 36 is assigned to the function units 41 and 42. In function 42, for example, hazard situations before an impending collision are recognized. Function 42 is accordingly linked not only to object class 36 but also to object classes 37 and 38. In function 43 driving dynamic data, i.e., vehicle-specific data like speed and steering angle, are recorded and advantageously improved with additional data from the vehicle. All these functions 39 to 43, which are available in advantageous processing for the system according to the invention, are now advantageously assigned to light areas 44 to 47 via a set of characteristics.
For example, an area restriction 44 with reference to glare toward oncoming vehicles is checked. The area restriction 44 is determined from object class 36. An additional area 45, which is linked to functions 39, 41, 43, permits additional light areas caused by typical edge conditions to be expected on the travel path. This can involve, for example, a travel path as shown in
It is ensured in the base illumination area 46 that all data determined and evaluated thus far correspond to a legal minimum illumination area; illumination areas falling below it therefore cannot be set, for example overlapping areas 15, 16 in
Another additional area 47, which is linked to functions 40 and 42, permits special additional areas based on special situations, like the function 42 described previously. In this case identified hazard situations, like possible collisions, are considered. This can mean, for example, that objects lying on the roadway or objects coming onto the roadway from the side, for example, animals or pedestrians, are deliberately illuminated in order to warn the driver accordingly, even if these objects lie outside of the illumination area determined based on the traffic situation. The data of the image processing unit 10 processed concerning objects 34 to 38, functions 39 to 43 and areas 44 to 47 are combined in the unit 48 so that a desired illumination area is determined from all conditions that apply for the instantaneous situation. In this way the environment-specific and vehicle-specific data are linked to each other so that the necessary illumination area is determined, depending on the traffic situation and/or the environmental situation.
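The combination in unit 48 of the base area 46, the additional areas 45 and 47 and the restriction 44 could be modeled, for example, with angular intervals; this interval arithmetic and the priority given to the legal base area are assumptions based on the description, not the patent's actual method.

```python
# Assumed interval model of the combination in unit 48: each light area is an
# angular interval (left_deg, right_deg). Additional areas 45/47 widen the
# result, restriction 44 trims it, and the legal base area 46 is always kept.

def combine_areas(base, additions, restriction):
    left, right = base
    for a_left, a_right in additions:      # widen by the additional areas
        left, right = min(left, a_left), max(right, a_right)
    # area restriction 44, e.g. glare toward oncoming traffic, trims the result
    left, right = max(left, restriction[0]), min(right, restriction[1])
    # the legal minimum base area 46 may never be undercut
    return min(left, base[0]), max(right, base[1])

# Widen to the right for an expected turn, but capped at 18 degrees:
area = combine_areas(base=(-10.0, 10.0),
                     additions=[(-10.0, 25.0)],
                     restriction=(-30.0, 18.0))
```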
In order to avoid irritation or incorrect interpretations by other traffic participants (for example, confusion with turn signals), changes should only occur slowly. The signals are therefore filtered or integrated over a certain time (function 49). The illumination area can thus follow in a way that is not perceptible to the driver or other traffic participants. Old and new values are integrated according to the time behavior so that a clear signal image is generated.
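The filtering/integration of function 49 behaves like a first-order low-pass filter; the smoothing factor and update rate below are illustrative assumptions.

```python
# First-order low-pass as an illustration of the filtering/integration in
# function 49: the set illumination angle approaches a new target slowly, so
# changes remain unnoticeable to the driver and other traffic participants.

def smooth(old, new, alpha=0.1):
    """Blend the old value toward the new target; small alpha = slow change."""
    return old + alpha * (new - old)

angle = 0.0
for _ in range(30):              # roughly one second at 30 updates per second
    angle = smooth(angle, 10.0)  # approach a new 10-degree target gradually
# angle is now close to, but not yet at, the 10-degree target
```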
In a subsequent step of conversion 50 the signals are processed so that any light fields 57 or optical elements or actuators 58 (
The method of controlling lamps 57, optical elements or actuators 58 is shown in detail in
In
The use of power drivers 56 is necessary, since the data made available by the image processing unit 10 do not permit direct control of motors, actuators 58 or lamps 57. Higher currents and voltages are required for this than are ordinarily available in electronic evaluation and processing units. As is apparent in
A similar principle to that described in
With reference to the actuators 58,
Upon identification of a change in travel direction, after recording of images and evaluation of the corresponding data in the image processing unit 10, a corresponding signal is sent to the light control device 13. This light control device 13 controls a servomotor depicted as in
The illumination areas preferably conform to the current ECE regulations.
The evaluation of sensor signals is known, for example of images by image processing or so-called machine vision. The layout and evaluation of RADAR or LIDAR sensors is also known. Standard software with which software code for image evaluation/image processing can be generated automatically is also known.
The headlight system has at least one sensor, for example a CMOS camera chip with optics, and at least one evaluation unit (computer), preferably a digital signal processor on which the data evaluation, preferably image evaluation, occurs. From the determined data (street layout, etc.) the ideal illumination area, preferably according to ECE guidelines, is calculated and set via the actuators. In addition, the outer circuitry (voltage processing/voltage supply, power drivers for the actuators, etc.) is provided as shown schematically in
Number | Date | Country | Kind
---|---|---|---
10 2006 050 236.1 | Oct 2006 | DE | national