The present disclosure relates to a Heads-up Display (HUD) system, also referred to as a Head Mounted Display (HMD) system, and methods of using the same, which include a rear-looking camera that provides a rear-view image integrated with vehicle navigation, presented to an operator on a heads-up display viewable while the operator faces the forward vehicle direction.
In avionics, the benefits of a HUD in an airplane cockpit have been well explored; see "Heads-up display for pilots", U.S. Pat. No. 3,337,845 by Gerald E. Hart, granted Aug. 22, 1967.
In previously filed U.S. patent application Ser. No. 13/897,025, filed May 17, 2013, titled "Augmented Reality Motorcycle Helmet" and published as U.S. 2013/0305437 (which claims the benefit of U.S. Provisional Patent Application Ser. No. 61/649,242), a display was projected onto the inner surface of a motorcycle helmet visor.
The HUD system described herein focuses, in one aspect, on improved safety via enhanced situational awareness. Advantageously, the HUD system directly enhances vehicle operator safety by providing increased situational awareness combined with decreased reaction time.
The HUD system may be part of a digitally-enhanced helmet in one embodiment. Other embodiments of the HUD system include, but are not limited to, a windshield of a motorized or human-powered vehicle for ground or water transportation.
Additionally, this HUD design incorporates: (1) turn-by-turn direction elements for forward travel; (2) vehicle telemetry and status information; and (3) both of the foregoing combined with a rearward view of the scene behind and to the sides of the operator on the display.
Additionally, this HUD design incorporates: (1) music; (2) telephony; and (3) "walky-talky" auditory functionality, through (1.a) internal storage or (1.b) connection to a paired smartphone device via Bluetooth or other radio, or via USB or other wired connection; (2.a) connection to a paired smartphone device; and (3.a) radio communication via Bluetooth or other radio to another device.
Additionally, this HUD design improves user safety by utilizing a display combined with focusing lenses collimated so that the display appears to be at an optical distance of infinity, which reduces user delay by eliminating the need for the user to re-focus the eye from the road surface ahead ("visual accommodation").
In another aspect, there is provided an optical stack of display, lenses, and a partially reflective prism or holographic waveguide in a helmet, which presents imagery focused at infinity, thereby negating the need for an operator's eye to change focal accommodation from road to display and thus decreasing reaction time.
Additionally, the HUD display may be semi-transmissive (or "see-through") so that the display imagery and information does not completely occlude the operator's vision in the image frustum occupied by the display.
Additionally, the HUD design digitally processes the super-wide camera imagery to provide the operator with more accurate perceived distances of objects in the view.
Additionally, the HUD design presents audio information to the operator in the form of digitally generated voice or as sounds that function as "earcons" corresponding to alerts.
Additionally, the HUD design presents haptic information to the operator in the form of a buzzer or pressure that functions as an alert.
The present disclosure will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying drawing figures in which like references designate like elements, and in which:
A HUD system is described for displaying information to the user that optionally incorporates several visual elements according to user control, including a super-wide-angle rear-facing camera view; a map view in place of the rear camera view; the camera view plus turn-by-turn travel guides; the camera view plus vehicle and/or helmet telemetry; or the camera view plus turn-by-turn travel guides and telemetry. Advantageously, the HUD system directly enhances vehicle operator safety by providing increased situational awareness combined with decreased reaction time.
This HUD system is preferably used with a helmet, such as a motorcycle helmet, that functions with visor open or closed, as it incorporates a separate micro-display and optical stack with a partially silvered prism or holographic waveguide to position a small see-through display in the operator's field of view, as described herein. Other embodiments of the HUD include, but are not limited to, a windshield of a motorized or human-powered vehicle for ground or water transportation.
As also described herein, the HUD system also incorporates a digital processor to de-warp super wide-angle camera imagery, with the benefit of providing the operator coherent image distance judgments from center (directly behind) to edge (left or right side) vectors, including blind spot areas normally invisible to an operator of a vehicle equipped with standard rear and side mirrors.
Additional image processing can also be included to enhance imagery to compensate for fog or low light, and also to increase the saliency of certain image components, such as yellow traffic lines, lane markers, or other relevant objects.
Rear-view camera imagery is also preferably blended digitally with navigation information (e.g., turn-by-turn directions) and/or vehicle telemetry (e.g., speed, tachometer, check engine, etc.) by a processor provided with such information by radio or other means, for display on the heads-up display, as described herein. Additionally, navigation, telemetry, and other information may be presented aurally to the operator.
The HUD system display is preferably focused at ocular infinity. The benefit is that visual accommodation is negated, resulting in a comprehension improvement on the part of the operator on the order of hundreds of milliseconds. In human vision, objects approximately eighteen feet or farther away do not require the eye to adjust focus; the eye's focusing is relaxed. In an ordinary vehicle, display and control elements are much closer than eighteen feet, and muscles in the eye must pull on the lens of the eye and distort it to bring such objects into focus. This is called "visual accommodation", and takes on the order of hundreds of milliseconds. The benefit of a display focused at infinity is that no visual accommodation is needed to look at the display, and again none is needed to look back to the road; comprehension and situational awareness are accomplished much faster, resulting in increased safety for the operator.
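As a rough numerical check of these figures, accommodative demand can be expressed in diopters as the reciprocal of viewing distance in meters; the 0.7 m instrument-panel distance below is an illustrative assumption, not a figure from this disclosure:

```latex
A = \frac{1}{d} \qquad \text{(accommodative demand in diopters, $d$ in meters)}

A_{\text{road}}  = \frac{1}{5.5\ \text{m}} \approx 0.18\ \text{D} \quad \text{(18 ft; effectively relaxed)}
A_{\text{panel}} = \frac{1}{0.7\ \text{m}} \approx 1.4\ \text{D}  \quad \text{(assumed instrument-panel distance)}
A_{\infty}       = 0\ \text{D}                                    \quad \text{(collimated HUD; no refocus needed)}
```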
The display is shown as display 130 in the accompanying drawing figures.
The HUD system may accomplish a digital transformation of the rear-facing camera's imagery to dewarp the image so that equal angles of view map into equal linear distances in the display; e.g., the usual and traditional "fish-eye" view of a 180 or 210 degree lens is transformed so that items and angles near the center are similar in size and displacement to items and angles near the edges, particularly the left and right edges. This effect is shown in the accompanying drawing figures.
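The following is a minimal sketch of such an equal-angle dewarp, assuming an equidistant ("f-theta") fisheye model and OpenCV's remap; the lens parameters, image sizes, and function names are illustrative assumptions, not taken from this disclosure. Because each output column corresponds to a fixed angular step, a true rearward angle maps linearly to a column position, which also makes the angle reticle described below straightforward to draw.

```python
# Minimal equal-angle dewarp sketch (assumed equidistant fisheye model).
import numpy as np
import cv2

def build_dewarp_maps(src_w, src_h, dst_w, dst_h,
                      h_fov=np.radians(180.0), v_fov=np.radians(60.0)):
    """Map each output pixel (uniform in angle) back to a fisheye pixel."""
    cx, cy = src_w / 2.0, src_h / 2.0
    f = (src_w / 2.0) / (h_fov / 2.0)   # equidistant model: r = f * theta

    # Output columns/rows sample azimuth/elevation at equal angular steps,
    # so equal angles occupy equal linear distances on the display.
    az = np.linspace(-h_fov / 2, h_fov / 2, dst_w)
    el = np.linspace(-v_fov / 2, v_fov / 2, dst_h)
    az, el = np.meshgrid(az, el)

    # Ray direction for each output pixel (camera looks along +z).
    x = np.sin(az) * np.cos(el)
    y = np.sin(el)
    z = np.cos(az) * np.cos(el)

    theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle off the optical axis
    phi = np.arctan2(y, x)                     # azimuth in the image plane
    r = f * theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return map_x, map_y

# Usage: precompute once, then remap every frame.
# A true angle "a" lands at column (a + h_fov/2) / h_fov * dst_w.
# map_x, map_y = build_dewarp_maps(1280, 960, 800, 240)
# panorama = cv2.remap(fisheye_frame, map_x, map_y, cv2.INTER_LINEAR)
```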
The effect described above may be accomplished by direct digital image processing in the camera sensor itself, with the result subsequently displayed to the user.
Alternatively, the effect may be accomplished by digital image processing in an onboard digital processor in the helmet, and then displayed to the user.
The effect may optionally be overlaid with a graphical indication of the true angles relative to the camera mounted in the helmet. For example, a reticle may be overlaid indicating where true angles such as 45, 90, and 120 degrees have been mapped into the dewarped image. This can aid the user in understanding where rearward objects are relative to their head, body, and vehicle.
The various configurations of the display may be optionally enabled or defeated by the user.
The desired configuration may be accomplished by an external application communicating with the helmet's processor via wireless communication.
In a helmet embodiment, the display configuration may be accomplished by an external application communicating with the helmet's processor via wired communication.
The display configuration may be accomplished by voice command processed by a processor internal to the helmet.
In the preferred embodiment of the system, the rear-facing camera 450 collects a video stream of extreme wide-angle imagery from the rear of the helmet (or vehicle), which is processed, preferably by a specialized dewarp engine 460 (or dedicated processor as described herein), to "de-warp" the imagery so that objects in the center rear and at the extreme left and right, at equal distances from the camera 450, are presented with the same visual area and thus the same perceived distance from the operator, as opposed to the conventional "fish-eye" view, in which objects at the same distance appear much larger in the center than at the edges of the camera's field of view. This de-warping may be produced within a single frame time by a dedicated processor used as a dewarp engine 460, such as the GeoSemiconductor GW3200, and this is the preferred embodiment. However, the dewarping may also be accomplished by a more general-purpose processor or SOC, albeit at greater expense and/or time delay (the latter may exceed one frame time; such delay decreases operator situational awareness and increases reaction time to events). Likewise, the dewarping may be accomplished by the central SOC 410, albeit again at a time delay of more than one frame time.
In the preferred embodiment of the system 400, graphical representations composed by the SOC 410 are merged with camera imagery and then presented to the operator. This may be accomplished by specialized video blending circuitry 470, which lightens the computational load on the SOC 410 and is preferably accomplished in less than one frame time. The merging may also be accomplished by the SOC 410 itself, by reading in the video imagery from the dewarp engine 460, composing the graphical representation merged with the video in an on-chip buffer, and then writing it out to the camera display 480. However, this may require a more expensive SOC 410 and/or a time delay greater than one frame time, and thus is not the preferred embodiment. One implementation that accomplishes the preferred embodiment is to use, as the video blender 470 and the display 480, a Kopin A230 display that incorporates video blending circuitry. In one implementation, the video from the GeoSemiconductor GW3200 dewarp engine is output in RGB565 format (5 bits per pixel for red, 6 bits per pixel for green, 5 bits per pixel for blue), and the SOC 410 outputs its graphical imagery as RGBA4444 (4 bits each for red, green, and blue, and 4 bits for a video alpha channel), which is combined by the Kopin display controller into a combined video stream that is rendered to the operator.
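In the preferred embodiment the blend is performed in the display controller hardware; the following software sketch only illustrates the per-pixel arithmetic, assuming the common bit layouts for RGB565 and RGBA4444 (the actual hardware packing may differ):

```python
# Software sketch of alpha-compositing RGBA4444 graphics over RGB565 video.
import numpy as np

def unpack_rgb565(v):
    """v: uint16 array -> float RGB in [0, 1] (assumed 5-6-5 packing)."""
    r = ((v >> 11) & 0x1F) / 31.0
    g = ((v >> 5) & 0x3F) / 63.0
    b = (v & 0x1F) / 31.0
    return np.stack([r, g, b], axis=-1)

def unpack_rgba4444(v):
    """v: uint16 array -> float RGB in [0, 1] plus alpha (assumed R-G-B-A nibbles)."""
    r = ((v >> 12) & 0xF) / 15.0
    g = ((v >> 8) & 0xF) / 15.0
    b = ((v >> 4) & 0xF) / 15.0
    a = (v & 0xF) / 15.0
    return np.stack([r, g, b], axis=-1), a[..., None]

def blend(video565, overlay4444):
    """Composite SOC graphics over dewarped camera video, per pixel."""
    video = unpack_rgb565(video565)
    gfx, alpha = unpack_rgba4444(overlay4444)
    return alpha * gfx + (1.0 - alpha) * video   # standard "over" operator
```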
The HUD system can also incorporate additional digital image processing and effects to enhance, correct, subsample, and display the camera imagery.
For example, the image processor may be able to detect the horizon and adjust the imagery to keep the horizon within a preferred region and orientation of the image displayed to the user.
The image processor may be able to auto-correct for environmental illumination levels to aid the user in low-light conditions, by adjusting brightness, gamma, and contrast.
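A minimal sketch of such a correction, assuming 8-bit imagery and a precomputed lookup table; the gamma, gain, and offset values are illustrative assumptions:

```python
# Low-light correction sketch: lift shadows via gamma, then gain + offset.
import numpy as np
import cv2

def low_light_lut(gamma=0.6, gain=1.2, offset=10):
    """Precompute a 256-entry lookup table (gamma < 1 brightens shadows)."""
    x = np.arange(256) / 255.0
    y = np.clip(gain * (x ** gamma) * 255.0 + offset, 0, 255)
    return y.astype(np.uint8)

# frame = cv2.LUT(frame, low_light_lut())   # applied once per frame
```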
The image processor may be able to edge-enhance the imagery for low contrast conditions such as fog, drizzle, or rain, especially combined with low light levels. It will be apparent to one skilled in the art that digital convolutions such as Laplacian kernels may be readily applied to the imagery to accomplish such enhancement.
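For example, one common form of such a convolution adds a negative Laplacian to the identity kernel, boosting edges while leaving flat regions unchanged; the kernel weights below are illustrative:

```python
# Edge-enhancement sketch for fog / low-contrast conditions.
import numpy as np
import cv2

# Unsharp-style kernel: identity minus the Laplacian, so edges are boosted
# while uniform regions pass through unchanged.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

def enhance_edges(frame):
    return cv2.filter2D(frame, -1, SHARPEN)
```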
The image processor may be able to detect road markers such as lane lines, and enhance their appearance to increase salience to the user.
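A minimal sketch of one way to do this, using a Canny edge detector followed by a probabilistic Hough transform and redrawing detected segments in a saturated color; all thresholds are illustrative assumptions:

```python
# Lane-marker detection and salience-boosting sketch.
import numpy as np
import cv2

def highlight_lanes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Redraw detected markers in a saturated color to raise salience.
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 255), 2)
    return frame
```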
The HUD system incorporates additional digital image processing and effects to detect image elements and present audio indicators to the user corresponding to salient properties of said image elements.
For example, a "blob" may be detected by image processing or by radar/lidar, and its trajectory mapped into a spatialized audio "earcon" that informs the user of the blob's location and movement relative to the helmet. It will be apparent to one skilled in the art that several such objects may be detected and presented to the user simultaneously.
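A minimal sketch of such a spatialized earcon, using simple equal-power stereo panning as a stand-in for true HRTF-based spatialization; the tone parameters and function names are illustrative assumptions:

```python
# Spatialized earcon sketch: a short tone panned by the blob's bearing.
import numpy as np

def earcon(azimuth_deg, freq=880.0, dur=0.15, sr=44100):
    """azimuth_deg: blob bearing, -90 (left) .. +90 (right), 0 = dead astern."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    tone = 0.3 * np.sin(2 * np.pi * freq * t)
    # Equal-power pan law: map azimuth to a pan angle in [0, pi/2].
    pan = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2)
    left, right = np.cos(pan) * tone, np.sin(pan) * tone
    return np.stack([left, right], axis=-1)   # (samples, 2) stereo buffer
```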
The blob may be visually enhanced to increase its salience to the user.
A blob moving into an important and salient location relative to the user (e.g., a blind spot) may be presented to the user via a haptic interface.
The haptic effector may be an integral part of the user's helmet, suit, jacket, boots, or other clothing.
The coupling with the haptic interface may be accomplished wirelessly or via a wired connection.
In one embodiment of the HUD system, the camera view incorporates indicators in the left or right corner informing the user of an upcoming turn, as shown in the accompanying drawing figures.
The indicators change color, hue, and/or brightness in a manner that indicates how soon the turn should occur. As the rider approaches the turn, the HUD UI may display several dots or pixel maps that illuminate in a sliding fashion across the top of the HUD display in the direction of the turn. If it is a left turn, they slide left; if it is a right turn, they slide right. As the turn approaches, the animation increases in speed until it is solid-on when the driver is upon the turn. This feature essentially operates as a visual proximity sensor. When paired with voice direction, this creates a very clear instruction to the operator to execute the subsequent navigation.
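A minimal sketch of the sliding-indicator timing just described, where the sweep rate rises as the distance to the turn falls and the dots go solid at the turn itself; the distances and rates are illustrative assumptions:

```python
# Turn-indicator animation sketch: sweep speed scales with proximity.
def indicator_state(dist_m, n_dots=8, t=0.0, start_m=400.0, max_hz=8.0):
    """Return the indices of the dots to light at time t, given distance to turn."""
    if dist_m <= 0.0:
        return list(range(n_dots))                 # solid-on: at the turn
    closeness = max(0.0, 1.0 - dist_m / start_m)   # 0 far away .. 1 at the turn
    rate = 1.0 + closeness * (max_hz - 1.0)        # sweeps per second speeds up
    pos = int(t * rate * n_dots) % n_dots          # current dot in the sweep
    # For a left turn, present the dots mirrored so the sweep slides left.
    return [pos]
```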
The indicator may also inform the user of an approaching curve that requires slowing down, which may be indicated by salient variations in hue, lightness, brightness, boldness, and/or blinking.
In some embodiments, textual information is displayed between the left and right turn indicator regions; e.g., “Right turn in 0.5 miles”.
Navigation information and/or warnings may be presented aurally as tones or voice.
As mentioned before, the display and communication configuration may be selected, defeated, and/or combined under user control. E.g., the user may select rear view display only, rear view display plus voice directions, voice only, etc., in all relevant combinations.
The personalized configuration may be accomplished via an app on an external device.
The configuration may be communicated wirelessly or through a wired connection.
In a helmet embodiment, voice command from the user may be processed by the processor integrated within the helmet.
In an embodiment where a map view or turn-by-turn navigation directions are selected for display, the view may be provided by an external device (such as a smartphone) connected to a digital network in real time (e.g., Google Maps).
In an embodiment where a map view or turn-by-turn navigation directions are selected for display, the view may be provided by an external device (such as a smartphone) with a local store of map information, to be used when a digital wireless cellular connection is not available.
The map or turn-by-turn navigation view may also be provided from local digital storage (such as a memory module within a helmet) as a backup to the map or turn-by-turn navigation information retrieved from the external device, for use when a digital wireless cellular connection is not available.
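A minimal sketch of this fallback chain (live network route, then the external device's local map store, then the helmet's own memory module); all object and method names here are hypothetical placeholders, not an actual API:

```python
# Navigation-data fallback sketch; names are hypothetical placeholders.
def get_route(dest, phone_link, helmet_store):
    try:
        return phone_link.fetch_live_route(dest)   # live online map service
    except ConnectionError:
        pass                                       # no cellular connection
    route = phone_link.cached_route(dest)          # phone's offline map store
    if route is None:
        route = helmet_store.lookup_route(dest)    # helmet-local backup store
    return route
```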
The map or navigation view described may be controlled and initialized by an app on an external device (such as a smartphone) via wired or wireless connections.
The present disclosure also relates to additional presentation aspects: in addition to the video imagery, additional graphical presentations overlaid on the video that correspond to vehicle telemetry information, such as but not limited to speed, tachometer, temperature, check engine, and fuel supply.
The present disclosure also relates to the presentation, in addition to the video imagery and graphical imagery, of audio alerts (tones and voice) that correspond to and augment the visual presentation.
The present disclosure also relates to the presentation, in addition to the video imagery and graphical imagery, of audio such as music stored both internally and on an external device, and to the provision of two-way radio communication to accomplish telephony and "walky-talky" conversation.
The present disclosure also relates to the presentation, in addition to the video imagery, graphical imagery, and audio, of haptic stimulation (e.g., buzzer, tactile pressure, etc.) that corresponds to and augments the other alerts.
Although the embodiments have been particularly described with reference to embodiments thereof, it should be readily apparent to those of ordinary skill in the art that various changes, modifications, and substitutions may be made in the form and details thereof, without departing from the spirit and scope thereof. Accordingly, it will be appreciated that in numerous instances some features will be employed without a corresponding use of other features. Further, those skilled in the art will understand that variations can be made in the number and arrangement of components illustrated in the above figures.
Number | Date | Country | Kind
---|---|---|---
PCT/US2015/056460 | Oct 2015 | US | national
This application is a continuation of U.S. application Ser. No. 14/519,091, entitled “Methods and Apparatus for Integrated Forward Display of Rear-View Image and Navigation Information to Provide Enhanced Situational Awareness”, filed Oct. 20, 2014, which is hereby incorporated herein by reference in its entirety.
 | Number | Date | Country
---|---|---|---
Parent | 14519091 | Oct 2014 | US
Child | 14940006 | | US