The disclosure relates to a head-up display system 100 and a method involving the head-up display system 100 for identifying and displaying information about a part of a road that is not visible to a driver.
Current navigation systems which provide their directions in visible form usually display the information via a separate panel or use a head-up display. The information presented on the head-up display is usually rather limited and consists of simple icons, concise textual information and/or arrows to provide navigational information to the driver. When a driver views navigational information on a separate panel of a navigation system, his attention is drawn away from the situation on the road ahead of him at least for a short moment, which may lead to dangerous situations. Therefore, it would be desirable that the driver is informed about potentially dangerous situations and/or the exact upcoming road course in an improved way.
Document DE102004048347A1 discloses a driving aid device for a motor vehicle. The device comprises a navigation device, an imaging sensor, an image reproduction device and a controller, whereby at least the navigation device, the imaging sensor and the image reproduction device are connected to the controller. The controller processes the images recorded by the imaging sensor for recognition of the carriageway and the road trajectory and determines at least the subsequent road trajectory lying outside the field of view of the imaging sensor from the road map data provided for the navigation device in order to generate a prediction of the road trajectory in the form of a positionally-accurate display on the image reproduction device by integration of both forms of information. Said display may be accurately overlaid on the view of the traffic and driving situation visible to the driver in a positionally and perspectively accurate manner.
Document US2016003636A1 discloses a system which includes a lane marking manager determining a first boundary line, a second boundary line, and a centerline of a current lane of travel. The system also includes a confidence level determiner assigning a first confidence level to the first boundary line, a second confidence level to the second boundary line, and a third confidence level to the centerline. Further, the system includes a user interface outputting representations of the first boundary line, the second boundary line, and the centerline based, at least in part, on the first confidence level, the second confidence level, and the third confidence level.
Document US2018089899A1 discloses an AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system uses a variety of techniques to enhance the rendering capabilities of the system. The AR system obtains pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and uses this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Therefore, it is an object of the present disclosure to provide an improved system overcoming the drawbacks of the prior art.
Disclosed is a Head-Up display system 100 for a vehicle comprising:
a projector 104 and a transparent plane 105 in the field of view of a driver, configured to project information regarding the course of a road onto the transparent plane 105; and a processor 103 configured to:
analyze an image of a road ahead of a vehicle, the image provided by a camera 101, and determine the road course based on the input of the camera 101;
analyze navigational information regarding the position of the vehicle on a map comprising the road, the navigational information provided by a navigation system 102, and determine the road course based on the input of the navigation system 102;
match the road course determined by the input of the camera 101 and the road course determined by the input of the navigation system 102;
determine the part of the road course determined by the input of the navigation system 102 not captured by the camera 101;
calculate graphical information 306 on the part of the road course ahead not captured by the camera 101; and
project the calculated graphical information 306 regarding the part of the road course ahead not captured by the camera 101 via the projector 104 starting from the end of the road course ahead not captured by the camera 101, thereby providing graphical information 306 regarding the non-visible part of the road ahead onto the transparent plane 105.
The road course determined on the input of the camera 101 and the road course determined on the input of the navigation system 102 may comprise information regarding the roadsides of the road, the road lane 304 used by the vehicle, and/or the medial strip 303 of a road.
The projected graphical information 306 may be in the form of continuous or non-continuous lines indicating the roadsides of the road or a strip indicating the road or a road lane 304 of the road.
The projected graphical information 306 may be in a color different from the colors visible in the field of view of the driver.
The processor 103 may be further configured to determine that the part of the road ahead not captured by the camera 101 contains a cause of danger and provide an alarm to the driver.
The cause of danger may be a sharp or abrupt turning, a traffic light, and/or a narrowing of the road.
The alarm may be a visible, haptic or acoustic alarm.
The alarm may be indicated by a predetermined color and/or by a flashing of the projected graphical information 306.
The system 100 may further comprise the camera 101 configured to capture an image of a road ahead of a vehicle.
The system 100 may further comprise the navigation system 102 configured to determine the position of a vehicle on a map comprising road lanes 304.
Disclosed is also a computer implemented method for providing graphical information 306 on the non-visible part of a road ahead in the field of view of the driver with the head-up display system 100 described above comprising:
analyzing an image of a road ahead of a vehicle, the image provided by a camera 101, and determining the road course based on the input of the camera 101;
analyzing navigational information regarding the position of the vehicle on a map comprising the road, the navigational information provided by a navigation system 102, and determining the road course based on the input of the navigation system 102;
matching the road course determined by the input of the camera 101 and the road course determined by the input of the navigation system 102;
determining the part of the road course determined by the input of the navigation system 102 not captured by the camera 101;
calculating graphical information 306 regarding the part of the course of the road ahead not captured by the camera 101 and starting from the end of the road ahead not captured by the camera 101; and
projecting the calculated graphical information 306 on the part of the course of the road ahead not captured by the camera 101 via the projector 104 onto the transparent plane 105 starting from the end of the road ahead not captured by the camera 101.
Disclosed is also a data carrier comprising instructions for a processing system 100, which when executed by the processing system 100, cause the processing system 100 to perform the computer implemented method described above.
Disclosed is also a processing system 100 comprising the data carrier described above.
The processing system 100 may be an Application-Specific Integrated Circuit, ASIC, a field-programmable gate array, FPGA, or a general purpose computer.
Disclosed is also a vehicle comprising the head-up display system 100 or the processing system 100 described above.
Disclosed is a system 100 for a vehicle as illustrated in
The system 100 can be considered to be a head-up display system 100 or a system 100 that is integrated into a head-up display system 100. The system 100, by means of a processor 103, analyses an input received from a camera 101. The input is at least one image captured by the camera 101. The camera 101 is capable of capturing an image of the field of view that is visible in the direction in which the vehicle is driving, i.e., usually the forward direction of the car, but it is also possible that the camera 101 captures at least one image in the direction reverse to the forward direction of the car. The input can consist of a single image or a (successive) series of images. Capturing a series of images allows the calculated graphical information 306 to be continuously updated. After analyzing the at least one image of a road ahead of the vehicle, the processor 103 identifies the road course visible to the camera 101 and thus the part of the road visible to the driver.
Object recognition analysis can be performed on the images received from the camera 101 to determine the presence and course of a road. For example, the processor 103 may be configured to detect the road by: a color transition between the road and its surroundings, guide posts on the left and/or right side of the road, lane lines, and/or medial lines.
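The color-transition criterion mentioned above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the function name `find_road_edges`, the use of a single grayscale scanline, and the gradient threshold are all assumptions made for the example.

```python
import numpy as np

def find_road_edges(scanline, threshold=40):
    """Locate candidate left/right road edges on one horizontal image
    scanline by finding the strongest brightness transitions — a
    hypothetical stand-in for the color-transition detection above."""
    gradient = np.diff(scanline.astype(np.int32))
    candidates = np.where(np.abs(gradient) >= threshold)[0]
    if candidates.size < 2:
        return None  # no clear road boundary on this line
    return int(candidates[0]), int(candidates[-1])

# A synthetic scanline: bright verge, dark asphalt, bright verge.
row = np.array([200] * 10 + [90] * 20 + [200] * 10, dtype=np.uint8)
edges = find_road_edges(row)
```

Repeating this over many scanlines yields two point chains that approximate the left and right roadsides in the image.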
The camera 101 may be replaced or supplemented by a laser detection and ranging, LIDAR, and/or radio detection and ranging, RADAR, system to provide information on the distance between the vehicle and the road ahead, i.e., the course of the road in three-dimensional space. The camera 101 may also be a stereo camera for determining a three-dimensional image of the road ahead of the vehicle to provide such distance information. Based on the knowledge of the distance of the vehicle to the road, a three-dimensional representation of the road course can be determined.
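Where a stereo camera is used, the depth of a road point can be recovered from the disparity between the two images via the classic relation Z = f·B/d. The focal length, baseline, and disparity values below are illustrative, not taken from the disclosure.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo relation Z = f * B / d: a point whose image positions
    differ by `disparity_px` pixels between the two cameras lies at
    depth Z metres in front of the rig."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline = 0.2 m, disparity = 4 px
z = depth_from_disparity(800.0, 0.2, 4.0)
```

Applying this to matched road-edge points gives the three-dimensional road course mentioned above; nearer points show larger disparity and hence smaller depth.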
Preferably concurrently, the system 100 is configured to also receive navigational information regarding the position of the vehicle on a map. The map, which is stored in an electronic memory and comprises at least positional data in two-dimensional or three-dimensional form on the course of roads or tracks, provides information on the road course as stored in the navigation system 102; thus it can be identified on which road the vehicle is driving and which course this road has.
The system 100 is further configured to match the road course determined from the input of the camera 101 with the road course determined from the input of the navigation system 102. Matching visible objects with positional data is a known technique in the field of augmented reality, and any suitable algorithm may be used for achieving this task. For example, for this task both inputs are transformed into the same spatial reference system, which can be the spatial reference system of the analyzed image, the spatial reference system provided by the navigation system 102, or a third reference system. The navigational input is either already provided in the form of three-dimensional spatial data or transformed into this form by the system 100.
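One simple way to bring both inputs into a common spatial reference system is to express the map waypoints from the navigation input in the vehicle (camera) frame using the vehicle's position and heading. The sketch below is a two-dimensional rigid transform under assumed conventions (x forward, y left); all names are illustrative.

```python
import math

def world_to_vehicle(waypoints, vehicle_xy, heading_rad):
    """Rigid 2-D transform: express map waypoints (world frame) in the
    vehicle frame, with x pointing forward and y to the left.
    `heading_rad` is the vehicle's heading in world coordinates."""
    vx, vy = vehicle_xy
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    out = []
    for wx, wy in waypoints:
        dx, dy = wx - vx, wy - vy
        # rotate by -heading so the vehicle's forward axis becomes +x
        out.append((dx * cos_h + dy * sin_h, -dx * sin_h + dy * cos_h))
    return out

# Vehicle at the origin heading along world +y: a map point 10 m
# "north" of it should appear 10 m straight ahead in the vehicle frame.
pts = world_to_vehicle([(0.0, 10.0)], (0.0, 0.0), math.pi / 2)
```

Once camera-derived and map-derived road points live in the same frame, the matching step reduces to aligning two nearby polylines.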
The system 100 is further configured to determine the part of the road course determined from the input of the navigation system 102 that is not captured by the camera 101. This is the part of the road that is not visible to the camera 101 or the driver. It may not be visible because it is occluded by objects like trees, hills, mountains, buildings or tunnels which are positioned at or in front of an upcoming curve. In addition, the part of the road may not be visible because the vehicle is approaching a hilltop.
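Assuming the system knows how far along the route the camera can see (e.g., from the image analysis or the stereo depth), separating the non-captured part can be sketched as walking the navigation polyline and splitting it at that arc length. The helper below is a sketch under that assumption; its names are hypothetical.

```python
import math

def split_route(waypoints, visible_length_m):
    """Split a route polyline into the part within the camera's view
    (the first `visible_length_m` metres of arc length) and the
    remaining, non-captured part to be rendered on the display."""
    travelled = 0.0
    for i in range(1, len(waypoints)):
        (x0, y0), (x1, y1) = waypoints[i - 1], waypoints[i]
        travelled += math.hypot(x1 - x0, y1 - y0)
        if travelled > visible_length_m:
            return waypoints[:i], waypoints[i - 1:]
    return waypoints, []  # whole route is visible

route = [(0, 0), (0, 50), (0, 100), (20, 120)]
visible, hidden = split_route(route, 60.0)
```

Note that the split waypoint is shared by both parts, so the rendered overlay can connect seamlessly to the visible road.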
The system 100 is further configured to calculate graphical information 306 which can be projected onto the transparent plane 105 and which represents the part of the road course determined from the input of the navigation system 102 that is not captured by the camera 101 (see
The graphical information 306 on the non-visible part of the road course may seamlessly or almost seamlessly connect the visible road with a representation of the non-visible road ahead. The system 100 can also be configured such that only a limited part of the non-visible part of the road is displayed, i.e., a part corresponding to a real-world road length of less than 10 km, 5 km, 3 km, 2 km, 1 km, 500 m, or 200 m.
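Limiting the displayed part to a predetermined real-world length can be sketched as clipping the non-visible polyline once its cumulative arc length exceeds the limit; the function name and the 500 m figure below are illustrative choices.

```python
import math

def clip_polyline(points, max_length_m):
    """Keep only the leading part of a polyline whose cumulative arc
    length stays within `max_length_m`, truncating the final segment
    exactly at the limit."""
    if not points:
        return []
    kept = [points[0]]
    remaining = max_length_m
    for i in range(1, len(points)):
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg <= remaining:
            kept.append(points[i])
            remaining -= seg
        else:
            t = remaining / seg  # fraction of the last segment to keep
            kept.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            break
    return kept

clipped = clip_polyline([(0.0, 0.0), (0.0, 300.0), (0.0, 800.0)], 500.0)
```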
The determined road course from the input of the camera 101 and the road course from the navigation system 102 can comprise information on the roadsides of the road, the road lane 304 used by the vehicle, and/or the medial strip 303 of a road. Thus, the representation of the non-visible road ahead can also include this information.
Accordingly, the projected graphical information 306 can be in the form of continuous or non-continuous lines indicating the roadsides of the road or a strip indicating the road or a road lane 304 of the road.
The projected graphical information 306 can be in a color different from the colors visible in the field of view of the driver. In this way, it is easier for the driver to identify the non-visible part of the road against the surroundings. However, it is also contemplated that the graphical information 306 is provided in the same or almost the same color and/or texture of the road to avoid a distraction of the driver from the road by the overlaid graphical information 306.
The processor 103 can be further configured to determine that the part of the road ahead not captured by the camera 101 contains a cause of danger and provide an alarm to the driver.
The cause of danger can be a sharp or abrupt turning, a traffic light, and/or a narrowing of the road. The system 100 may also identify a cause of danger from the data input provided by the navigation system 102. For example, the system 100 may be configured to determine a sharp or abrupt turning or a narrowing when the angle of the turning falls below a predetermined value like 100°, 90°, 80° or less, or when the width of the non-visible road falls below a predetermined value like 90%, 80%, 70% or less of the width of the visible road.
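The threshold tests just described can be sketched as follows; `turn_angle_deg` computes the interior angle at a waypoint from the two adjacent route segments, and the default limits are the example values given above. All function names are hypothetical.

```python
import math

def turn_angle_deg(p0, p1, p2):
    """Interior angle (degrees) at waypoint p1 between segments
    p0->p1 and p1->p2; 180 means the road continues straight,
    small values indicate a sharp turning."""
    a = (p0[0] - p1[0], p0[1] - p1[1])
    b = (p2[0] - p1[0], p2[1] - p1[1])
    dot = a[0] * b[0] + a[1] * b[1]
    norm = math.hypot(*a) * math.hypot(*b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_danger(angle_deg, hidden_width_m, visible_width_m,
              angle_limit=90.0, width_ratio_limit=0.8):
    """Flag a cause of danger when the turning is sharper than the
    angle limit or the non-visible road narrows below the ratio."""
    return (angle_deg < angle_limit
            or (hidden_width_m / visible_width_m) < width_ratio_limit)

# A hairpin: the road comes in along +y and leaves back along -y,
# slightly offset -- the interior angle is well below 90 degrees.
angle = turn_angle_deg((0, 0), (0, 100), (5, 0))
```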
The alarm can be a visible, haptic or acoustic alarm. In particular, the alarm can be indicated by the color and/or by a flashing of the projected graphical information 306. For example, the projected graphical information 306 may usually be in a default color, like yellow or blue, and switch to an alarm color like red and additionally or alternatively start flashing.
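The color switch and flashing can be modelled as a small per-frame styling rule. The default and alarm colors and the 2 Hz flash rate below are illustrative choices, not mandated by the disclosure.

```python
def overlay_style(danger, time_s, default_color="yellow",
                  alarm_color="red", flash_hz=2.0):
    """Return (color, visible) for the projected graphical information:
    the default color normally, and a flashing alarm color when a
    cause of danger was determined on the non-visible road ahead."""
    if not danger:
        return default_color, True
    # square-wave flashing: visible during the first half of each period
    visible = (time_s * flash_hz) % 1.0 < 0.5
    return alarm_color, visible

style = overlay_style(False, 0.0)
```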
The system 100 may further comprise the camera 101 and/or any other form of an image capturing device like a LIDAR or RADAR configured to capture an image of a road ahead of a vehicle.
The system 100 may further comprise the navigation system 102 configured to determine the position of a vehicle on a map comprising road lanes 304.
Disclosed is also a method (illustrated in
Disclosed is also a data carrier comprising instructions for a processing system 100, which when executed by the processing system 100, cause the processing system 100 to perform the method described above.
The system 100 may be implemented on a processing system which may comprise the data carrier described above.
The processing system 100 is not particularly limited and can be an Application-Specific Integrated Circuit, ASIC, a field-programmable gate array, FPGA, or a general purpose computer.
Disclosed is also a vehicle comprising the system 100 described above or the processing system 100 described above.
Thus, a head-up display system and a method involving the head-up display system is described for identifying and displaying information about a part of a road that is not visible to a driver wherein the head-up display system 100 for a vehicle comprises:
a projector 104 and a transparent plane 105 in the field of view of a driver, configured to project information regarding the course of a road onto the transparent plane 105; and a processor 103 configured to:
analyze an image of a road ahead of a vehicle, the image provided by a camera 101, and determine the road course based on the input of the camera 101;
analyze navigational information regarding the position of the vehicle on a map comprising the road, the navigational information provided by a navigation system 102, and determine the road course based on the input of the navigation system 102;
match the road course determined by the input of the camera 101 and the road course determined by the input of the navigation system 102;
determine the part of the road course determined by the input of the navigation system 102 not captured by the camera 101;
calculate graphical information 306 on the part of the road course ahead not captured by the camera 101; and project the calculated graphical information 306 regarding the part of the road course ahead not captured by the camera 101 via the projector 104 starting from the end of the road course ahead not captured by the camera 101, thereby providing graphical information 306 regarding the non-visible part of the road ahead onto the transparent plane 105.
100 Head-up display system
101 Camera
102 Navigation system
103 Processor
104 Projector
105 Transparent plane
301 Left roadside
302 Right roadside
303 Medial strip
304 Road lane
305 Obstacle
306 Graphical information
S1-S6 Method Steps
This application is the U.S. national phase of PCT Application No. PCT/EP2019/051229 filed on Jan. 18, 2019, the disclosure of which is incorporated in its entirety by reference herein.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2019/051229 | 1/18/2019 | WO | 00 |