The present invention relates to a head-up display (HUD) of a motor vehicle.
A head-up display emits light that reflects from the front windshield to be seen by the driver. The light appears to come from a virtual image located in front of the driver and beyond the windshield. Head-up displays of this type are currently commercially available.
Conventional head-up displays create the virtual image by first using a display, or picture generation unit, to create an image. The light from that image is reflected by one or more mirrors, directed up to the windshield, and then reflected from the windshield toward the driver. The mirrors are designed and positioned relative to the display so that the light seen by the driver, reflected from the windshield, appears to come from a virtual image that is outside of the vehicle. The mirrors and display are typically contained in a package that occupies a volume beneath the top surface of the dashboard.
A HUD in a vehicle helps the driver keep their eyes on the road, and many car manufacturers now offer a HUD system in their vehicles. The HUD system projects graphics in front of the driver. The distance from the driver's eyes to the graphics, which is referred to as the virtual image distance (VID), is 2 m or more (for an augmented reality HUD, the VID is typically at least 7 m). Therefore, unlike typical in-vehicle displays, HUD graphics are beyond the reach of the driver's fingers, which makes the traditional way of interacting with graphics through touch impossible. That is, for a HUD application, the touch area cannot be in or on the virtual image display area, because the virtual image is outside of the windshield and beyond the driver's arm's length.
The invention may provide a method for defining a touch control area for a HUD application.
The invention comprises, in one form thereof, a head-up display system for a motor vehicle including a light field emitter emitting a light field that is reflected off a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield. The virtual image includes a plurality of graphical elements. A hand sensor detects a position of a hand of the human driver in space. An electronic processor is communicatively coupled to the light field emitter and to the hand sensor. The electronic processor receives a signal from the hand sensor indicative of the position of the hand of the human driver in space, and determines which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
The invention comprises, in another form thereof, a head-up display method for a motor vehicle, including emitting a light field that is reflected off a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield. The virtual image includes a plurality of graphical elements. A position of a hand of the human driver in space is detected. It is determined which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
The invention comprises, in yet another form thereof, a head-up display system for a motor vehicle including a light field emitter emitting a light field that is reflected off a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield. The virtual image includes a plurality of graphical elements. A hand sensor detects a position of a hand of the human driver in space. An eye sensor detects a position of an eye of the human driver in space. An electronic processor is communicatively coupled to the light field emitter, the hand sensor, and the eye sensor. The electronic processor receives a first signal from the hand sensor indicative of the position of the hand of the human driver in space. The electronic processor receives a second signal from the eye sensor indicative of the position of the eye of the human driver in space. The electronic processor determines which one of the graphical elements in the virtual image is aligned with the detected position of the eye of the human driver and the detected position of the hand of the human driver in space.
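By way of a non-limiting illustration, the alignment determination described above can be reduced to a simple angular test: the eye location and the detected hand position define a ray, and the processor selects the graphical element whose eye-relative direction is closest to that ray. The following is a minimal sketch of such a test, assuming a shared 3D vehicle coordinate frame; all function and variable names are illustrative assumptions, not part of the invention.

```python
import numpy as np

def select_touched_element(eye_pos, hand_pos, elements, max_angle_deg=2.0):
    """Pick the graphical element most nearly collinear with the ray from
    the driver's eye through the detected hand position.

    eye_pos, hand_pos : (3,) arrays, 3D positions in the vehicle frame.
    elements          : dict mapping element id -> (3,) array giving that
                        element's 3D location in the virtual image.
    """
    ray = hand_pos - eye_pos
    ray = ray / np.linalg.norm(ray)                  # unit vector eye -> hand

    best_id, best_angle = None, float("inf")
    for elem_id, elem_pos in elements.items():
        to_elem = elem_pos - eye_pos
        to_elem = to_elem / np.linalg.norm(to_elem)  # unit vector eye -> element
        # angle between the eye->hand ray and the eye->element ray
        cos_a = np.clip(np.dot(ray, to_elem), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))
        if angle < best_angle:
            best_id, best_angle = elem_id, angle

    # report a touch only when the alignment is within a small tolerance
    return best_id if best_angle <= max_angle_deg else None
```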
An advantage of the invention is that it makes the touch control experience possible for HUD applications.
The above-mentioned and other features and objects of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings, wherein:
The embodiments hereinafter disclosed are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings.
With the possible touch area defined, one (typically the vehicle manufacturer) can decide the actual touch area in 3D space. The touch area does not necessarily have to cover the entire possible touch area. For instance, the touch area can be just a small area that the user can easily reach, such as somewhere over the steering wheel. This is illustrated in
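As a rough sketch of how such a touch area might be derived, the full possible touch area can be computed by intersecting the rays from the eye box through the virtual image corners with a convenient plane in the cabin, and then clipped to a smaller rectangle within easy reach. The coordinate convention (z pointing forward from the driver) and all names here are assumptions for illustration only.

```python
import numpy as np

def possible_touch_area(eye_box_center, image_corners, plane_depth):
    """Intersect the rays from the eye box through each virtual-image corner
    with a plane at forward distance `plane_depth`; the resulting corners
    bound the full possible touch area on that plane."""
    corners = []
    for corner in image_corners:
        ray = corner - eye_box_center
        t = (plane_depth - eye_box_center[2]) / ray[2]   # scale ray to the plane
        corners.append(eye_box_center + t * ray)
    return np.array(corners)

def clip_to_reach(area_corners, reach_min, reach_max):
    """Shrink the possible touch area to a smaller, easily reachable box,
    e.g., a region just above the steering wheel."""
    return np.clip(area_corners, reach_min, reach_max)
```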
As disclosed hereinabove, the eye location plays a crucial role in the touch experience for a HUD. Thus, eye tracking capability in the car to determine the eye location (preferably its 3D location in space) is needed to provide the best, unrestricted touch experience to the user for a HUD.
In the cases where the eye location information is needed for the HUD touch experience, because the eye position changes unpredictably over time, the decision about which of the graphic elements the user intended to touch needs to be calculated in real time based on the locations of the touchable graphic elements, the touched point(s) within the defined touch area, and the eye position. The best and most precise results can be obtained by determining all of those values in 3D (x, y, z) space. Since the VID, the virtual image display area, and the touch area are fixed/pre-defined and can be made available to the car system, it is possible to obtain 3D values for the touched point and the graphic elements. A driver monitoring system may determine the eye location in three-dimensional space. Whether 3D values or 2D values of the eye location are called for may depend on the use case. For instance, if the touch experience of the HUD is limited to the determination of gestures (e.g., swipe, pinch, zoom, etc.) only, then only 2D locations of the touch point(s) may be called for.
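For that relaxed 2D case, the decision logic does not need the eye position at all. A sketch of a gesture classifier operating only on 2D touch points within the defined touch area might look as follows; the threshold and names are illustrative assumptions.

```python
def classify_gesture(points_2d, min_travel=0.05):
    """Classify a gesture from a time-ordered list of (x, y) touch points
    within the touch area; no depth or eye-location data is required."""
    (x0, y0), (x1, y1) = points_2d[0], points_2d[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_travel:   # barely moved: treat as a tap
        return "tap"
    if abs(dx) >= abs(dy):                   # dominant horizontal motion
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"
```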
As mentioned above, depending on the HUD touch use cases, not all inputs shown in
Disclosed herein is a method of enabling a touch experience for a HUD and its applications wherein the graphic area (e.g., the virtual image in the case of a HUD) is not reachable by the user. The present invention may provide a method of defining a touchable area for a HUD application based on the locations of the eye box, the virtual image, and the windshield. The present invention may also provide a method of using a touch sensor that can enable touch within the defined touchable area for a HUD application (in which the touch area is likely to be in the air), such as (but not limited to) a light-based touch sensor system. The present invention may further provide a method of using information about (1) the touchable graphic elements' locations, (2) the touched point within the defined touch area, and (3) the eye position to make the decision on touch, which enables the most precise touch experience without any touch use case limitation. However, the requirement of 3D values for these inputs can be relaxed depending on the touch use cases, which may limit the touch precision, the supported touch use cases, and the touch user experience.
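As one hypothetical realization of the light-based touch sensing mentioned above, two orthogonal light-sensor strips bounding the in-air touch area could report which beams a finger interrupts; since the touch plane's pose is fixed at installation, those beam indices map directly to a 3D touched point. The sketch below assumes such a strip arrangement and is not the only possible sensor configuration.

```python
def touch_point_3d(beam_index_x, beam_index_y, beam_spacing, plane_origin):
    """Map beam-interruption indices from two orthogonal light-sensor strips
    to a 3D touched point. `plane_origin` is the 3D location of the touch
    plane's corner, fixed when the sensor is installed."""
    x = plane_origin[0] + beam_index_x * beam_spacing
    y = plane_origin[1] + beam_index_y * beam_spacing
    z = plane_origin[2]          # the in-air touch plane sits at a fixed depth
    return (x, y, z)
```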
Next, in step 1020, a position of a hand of the human driver in space is detected. For example, hand sensor 924 (e.g., a light sensor strip) may detect the position of the hand of driver 914 in three-dimensional space.
In a final step 1030, it is determined which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space. For example, simple geometric calculations can be used to decide which element(s) and/or point(s) in virtual image 710 are touched when the 3D location information of (1) the touchable graphic elements 7181, 7182, . . . , 718N, (2) the touched point(s) 730, and (3) the eye position 714 is available. That is, it can be determined which one of the graphical elements 7181, 7182, . . . , 718N in the virtual image 710 is aligned with the eye location of the human driver, as ascertained by driver monitoring system 944, and the position of the hand of the human driver in space, as ascertained by hand sensor 924.
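One way to carry out these geometric calculations is sketched below, under the assumption that the virtual image lies on a plane at the fixed VID and that each graphic element occupies a rectangle on that plane; the names are illustrative, not limiting.

```python
import numpy as np

def image_plane_hit(eye_pos, touched_point, image_plane_z):
    """Extend the ray from the eye position (e.g., 714) through the touched
    point (e.g., 730) until it reaches the virtual image plane at forward
    distance `image_plane_z`, and return the 3D hit point."""
    ray = touched_point - eye_pos
    t = (image_plane_z - eye_pos[2]) / ray[2]   # scale ray to the image plane
    return eye_pos + t * ray

def touched_element(hit_point, element_rects):
    """element_rects: id -> (xmin, xmax, ymin, ymax) bounds of each graphic
    element (e.g., 7181 . . . 718N) on the virtual image plane."""
    x, y = hit_point[0], hit_point[1]
    for elem_id, (xmin, xmax, ymin, ymax) in element_rects.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return elem_id
    return None                                 # no element under the ray
```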
While this invention has been described as having an exemplary design, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.
This application claims benefit of U.S. Provisional Application No. 63/125,251, filed on Dec. 14, 2020, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.