This disclosure is related to head-up displays (HUD).
A HUD is a display that presents data in a partially transparent manner and at a position allowing a user to see it without having to look away from his/her usual viewpoint (e.g., substantially forward). Although developed for military use, HUDs are now used in commercial aircraft, automobiles, computer gaming, and other applications.
Within vehicles, HUDs may be used to project virtual images or vehicle parameter data in front of the vehicle windshield or surface so that the image is in or immediately adjacent to the operator's line of sight. Vehicle HUD systems can project data based on information received from operating components (e.g., sensors) internal to the vehicle to, for example, notify users of lane markings, identify proximity of another vehicle, or provide nearby landmark information.
HUDs may also receive and project information from information systems external to the vehicle, such as a navigational system on a smartphone. Navigational information presented by the HUD may include, for example, projecting distance to a next turn and current speed of the vehicle as compared to a speed limit, including an alert if the speed limit is exceeded. External system information advising what lane to be in for an upcoming maneuver or warning the user of potential traffic delays can also be presented on the HUD.
HUDs may also be employed in augmented reality displays or enhanced vision systems that identify, index, overlay or otherwise reference objects and road features including infrastructure. Such advanced systems require precision alignment of HUD images relative to the observers within the vehicle and objects within their field of view. Moreover, such HUD systems may employ the windshield to provide HUD combiner functionality over a wide field of view. Such large format reflective displays present challenges to the designer with respect to image location and distortion.
In one exemplary embodiment, a HUD image acquisition and correction apparatus for a vehicle includes a HUD patch defined upon a reflective surface of a windshield of the vehicle, a virtual image generator for projecting images within the HUD patch, and a virtual image sensor assembly located within a HUD eyebox region of the vehicle and having a field of view including the HUD patch. The apparatus further includes a controller configured to control projection of a predetermined test image within the HUD patch, receive from the virtual image sensor assembly a HUD patch image, determine a compensation function based upon the predetermined test image and the HUD patch image, and provide to the virtual image generator the compensation function for application to raw images prior to projection.
In addition to one or more of the features described herein, the compensation function may include a distortion compensation function.
In addition to one or more of the features described herein, the compensation function may include an alignment compensation function.
In addition to one or more of the features described herein, the compensation function may include a color compensation function.
In addition to one or more of the features described herein, the apparatus may include a fixture for the virtual image sensor assembly, the fixture locating the virtual image sensor assembly within the HUD eyebox region of the vehicle.
In addition to one or more of the features described herein, the fixture may be fixedly attached to a static vehicle structure.
In addition to one or more of the features described herein, the fixture may be fixedly attached to a seat back.
In addition to one or more of the features described herein, the fixture may be fixedly attached to a robot assembly.
In addition to one or more of the features described herein, the virtual image sensor assembly may include at least one camera.
In addition to one or more of the features described herein, the virtual image sensor assembly may include a plurality of individually, positionally adjustable cameras.
In addition to one or more of the features described herein, the apparatus may include an alignment system for locating the virtual image sensor assembly within the HUD eyebox region.
In addition to one or more of the features described herein, the alignment system for locating the virtual image sensor assembly within the HUD eyebox region may include a laser alignment system.
In addition to one or more of the features described herein, the alignment system for locating the virtual image sensor assembly within the HUD eyebox region may include a camera.
In addition to one or more of the features described herein, the camera may include the virtual image sensor assembly.
In addition to one or more of the features described herein, the apparatus may include seat positioning motors, the controller configured to control the seat positioning motors to move the fixture and virtual image sensor assembly into a final desired position within the HUD eyebox region.
In another exemplary embodiment, a HUD image acquisition and correction apparatus for a vehicle includes a HUD patch defined upon a reflective surface of a windshield of the vehicle, a virtual image generator for projecting images within the HUD patch, and a virtual image sensor assembly located within a HUD eyebox region of the vehicle and having a field of view including the HUD patch, the virtual image sensor assembly including a plurality of cameras. The apparatus further includes a controller configured to control projection of a predetermined test image within the HUD patch, receive from the virtual image sensor assembly a HUD patch image including the predetermined test image reflected off the reflective surface of the windshield, the HUD patch image providing information corresponding to distortion effects of the reflective surface within the HUD patch to the controller, determine a distortion compensation function based upon the predetermined test image and the HUD patch image, the distortion compensation function effective to counteract the distortion effects of the reflective surface within the HUD patch, and provide to the virtual image generator the distortion compensation function for application to raw images prior to projection.
In addition to one or more of the features described herein, the apparatus may include a fixture for the virtual image sensor assembly, the fixture locating the virtual image sensor assembly within the HUD eyebox region of the vehicle.
In addition to one or more of the features described herein, the apparatus may include the fixture fixedly attached to one of a static vehicle structure, a seat back, and a robot assembly.
In yet another exemplary embodiment, a HUD image acquisition and correction method for a vehicle includes projecting a predetermined test image within a HUD patch defined upon a reflective surface of a windshield of the vehicle, receiving, from a virtual image sensor assembly located within a HUD eyebox region of the vehicle and having a field of view including the HUD patch, a HUD patch image including the predetermined test image reflected off the reflective surface of the windshield, determining a compensation function based upon the predetermined test image and the HUD patch image, and providing to a virtual image generator the compensation function for application to raw images prior to projection.
In addition to one or more of the features described herein, the compensation function may include at least one of a distortion compensation function, an alignment compensation function, and a color compensation function.
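By way of non-limiting illustration only, the following Python sketch outlines the acquisition-and-correction flow summarized above, reduced for clarity to a single x/y offset. The callables `project`, `capture`, and `store`, and all other names, are hypothetical placeholders rather than an actual HUD or vehicle interface; a production system would derive full distortion, alignment, and/or color compensation functions instead.

```python
# Minimal sketch (illustrative only): project a predetermined test image,
# capture the reflected HUD patch image from the eyebox, derive a
# compensation, and hand it back to the generator for use on raw images.
import numpy as np

def calibrate_hud(project, capture, store, test_image):
    project(test_image)                    # display test image in the HUD patch
    patch_image = capture()                # image as seen from the eyebox
    # Toy "compensation": offset between the brightest pixel of the expected
    # and captured images; a real system would build full compensation maps.
    expected = np.unravel_index(np.argmax(test_image), test_image.shape)
    observed = np.unravel_index(np.argmax(patch_image), patch_image.shape)
    compensation = (expected[0] - observed[0], expected[1] - observed[1])
    store(compensation)                    # e.g. calibration held in PGU memory
    return compensation

if __name__ == "__main__":
    test = np.zeros((8, 8)); test[2, 3] = 1.0
    seen = np.zeros((8, 8)); seen[3, 5] = 1.0   # simulated, shifted reflection
    print(calibrate_hud(lambda img: None, lambda: seen, lambda c: None, test))
```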
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
Other features, advantages, and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. As used herein, control module, module, control, controller, control unit, processor and similar terms mean any one or various combinations of one or more of Application Specific Integrated Circuit(s) (ASIC), electronic circuit(s), central processing unit(s) (preferably microprocessor(s)) and associated memory and storage (read only memory (ROM), random access memory (RAM), electrically programmable read only memory (EPROM), hard drive, etc.) or microcontrollers executing one or more software or firmware programs or routines, combinational logic circuit(s), input/output circuitry and devices (I/O) and appropriate signal conditioning and buffer circuitry, high speed clock, analog to digital (A/D) and digital to analog (D/A) circuitry and other components to provide the described functionality. A controller may include a variety of communication interfaces including point-to-point or discrete lines and wired or wireless interfaces to networks including wide and local area networks, on vehicle networks (e.g. Controller Area Network (CAN), Local Interconnect Network (LIN)) and in-plant and service-related networks. Controller functions as set forth in this disclosure may be performed in a distributed control architecture among several networked controllers. Software, firmware, programs, instructions, routines, code, algorithms and similar terms mean any controller executable instruction sets including calibrations, data structures, and look-up tables. A controller may have a set of control routines executed to provide described functions. Routines are executed, such as by a central processing unit, and are operable to monitor inputs from sensing devices and other networked controllers and execute control and diagnostic routines to control operation of actuators. Routines may be executed at regular intervals during ongoing engine and vehicle operation. Alternatively, routines may be executed in response to occurrence of an event, software calls, or on demand via user interface inputs or requests.
The PGU 104 may include the image light source 106 and a display and lens assembly 120. The image light source 106 generates a virtual image light beam 121 including graphic images that are projected onto a display of the display and lens assembly 120. The virtual image light beam 121 is then directed at a series of one or more fold mirrors 122. The one or more fold mirrors 122 may be used for packaging considerations. The virtual image light beam 121 is reflected at the mirror 117 and then may be reflected through light and glare traps 124 to the windshield 116. The virtual image light beam 121 is displayed on the windshield 116, which serves as the HUD combiner. The light and glare traps 124 may filter and thus prevent, for example, sunlight (or ambient light) from being reflected from the windshield 116 towards the mirror 117, and may minimize the effects of glare.
The HUD system 101 may further include and/or be connected to a manual controller 136 including switches (buttons, paddles, sliders, rotaries, joysticks or the like) 138. The HUD system 101 may also include and/or be connected to a display, seat motors, or seat switches (not separately illustrated). The display may be, for example, a touchscreen, such as an infotainment display (265,
Certain HUD applications require precise alignment of the virtual images produced by the HUD. Placement of simple information upon the windshield, such as a conventional engine gauge display, is not positionally critical. However, augmented reality systems intended to improve driver or occupant situational awareness by identifying, overlaying, or otherwise enhancing visibility of objects or features within a road scene require virtual image placement that takes into consideration the observer's eye position, the scene object position, and the vehicle windshield position. In order to enable robust virtual image placement fidelity in such systems, the virtual image position must be calibrated relative to the vehicle reference frame. In addition to positional precision, geometric accuracy and undistorted images as perceived by the observer are desirable.
In accordance with an exemplary embodiment,
Assembly of virtual image generator 105 into the vehicle may be accomplished by the installation of an entire dash assembly into which the virtual image generator 105 has been assembled as part of a subassembly process or build-up of the dash assembly 209. Alternatively, a smaller subassembly including an instrument cluster pod may contain the virtual image generator 105 and may be assembled to the dash assembly already installed within the vehicle. Alternatively, the virtual image generator 105 may be assembled into the instrument cluster, dash assembly or upper dash pad as a separate assembly component. Virtual image generator 105 is adapted and controlled to project virtual image light beam 121 toward windshield 116 within the HUD patch 111.
In accordance with the present disclosure, it may be desirable to align images and/or color channels displayed within the HUD patch by virtual image generator 105 and to characterize or instrument the reflective surface 118 of the windshield 116 at the HUD patch 111. Such adjustments may be made manually or autonomously. For example, the user may adjust a projected image through manual controller 136. Alternatively, such adjustment may be effected in an automated process including HUD patch image sensing. Adjustments of the virtual image may be implemented in various ways depending upon the particular hardware configuration of the HUD system 101 and, more particularly, of the virtual image generator 105. Optical effects may also be imparted by the reflective surface 118 of the windshield 116 within the HUD patch 111 upon the virtual images projected by the virtual image generator 105. Thus, curvature, waves, dimples and other imperfections in the windshield may become apparent to the observer 112 in the reflected images. With respect to such distortion effects imparted by the reflective surface 118 of the windshield 116, adjustments to the projected virtual image may be made to counteract those effects. Such adjustments may be made by well-known image distortion or warping engines implemented within the PGU 104, the controller 102, or an off-vehicle processor. Preferably, such corrections are implemented in non-volatile memory associated with the PGU 104 as a calibration thereto and applied to raw images provided by controller 102 for projection and display. Alternatively, such corrections may be implemented in non-volatile memory associated with the controller 102 as a calibration thereto which applies corrections to raw images prior to provision to the PGU 104 for projection and display. Such corrections or compensations may generally be referred to as compensation functions. Counteracting such distortions may require characterizing, instrumenting or otherwise mapping the reflective surface 118 at the HUD patch. Therefore, for virtual image alignment and distortion correction, it may be desirable to sense the virtual image within the HUD patch. More particularly, such sensing of the virtual image within the HUD patch may be accomplished substantially from the perspective of an observer 112. It is envisioned that such alignment and distortion corrections are infrequently required, for example during initial vehicle assembly, windshield replacement, or removal and replacement of the virtual image generator 105.
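As one non-limiting illustration of such a pre-projection compensation, the sketch below applies a stored pixel displacement map to a raw image before it would be handed to the PGU. The displacement map shown is a made-up, smoothly varying correction standing in for whatever function the calibration actually produces; all names are assumptions for illustration.

```python
# Illustrative warp step: sample the raw image at displaced coordinates so
# that, after reflection off the windshield, the virtual image appears
# undistorted. dx/dy would normally be loaded from non-volatile memory.
import numpy as np

def apply_compensation(raw_image, dx, dy):
    h, w = raw_image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(yy + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xx + dx), 0, w - 1).astype(int)
    return raw_image[src_y, src_x]         # nearest-neighbour resampling

if __name__ == "__main__":
    raw = np.zeros((240, 320)); raw[110:130, 150:170] = 1.0   # raw image
    yy, xx = np.mgrid[0:240, 0:320]
    # Hypothetical correction growing toward the edges of the HUD patch.
    dx, dy = 0.02 * (xx - 160), 0.02 * (yy - 120)
    pre_warped = apply_compensation(raw, dx, dy)
```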
Virtual image sensing from the perspective of an observer 112 may be accomplished generally within a HUD eyebox region 260 associated with the HUD system 101. In accordance with the present disclosure, a virtual image sensor assembly 206 is located within the HUD eyebox region 260.
In the figures, driver seat 222 is illustrated and includes adjustable seat bottom 213, adjustable seatback 264, and adjustable headrest 266. The seat is movably fastened to seat rails 262 and, within limits, may raise, lower, tilt, and move fore/aft. The seatback 264 is hinged toward its bottom at the rear of the seat bottom 213 for pivotal movement as represented by arrow 268. Headrest 266 is secured to the seatback 264 via posts 225 disposed within post guides at the top of seatback 264. Headrest 266 is adjustable up and down and may articulate or rotate, also within limits. The seatback 264 may define a seatback axis 270. The headrest 266 may define a horizontal axis 272 and a vertical axis 274. The horizontal axis 272 may be substantially perpendicular to the seatback axis 270. The seatback 264 and the headrest 266 may be arranged with one another such that the seatback axis 270 and the vertical axis 274 share a same axis. For example, when the seatback 264 is oriented substantially upright and the headrest 266 is oriented substantially upright, the seatback axis 270 and the vertical axis 274 share a same axis as shown in
A location of the center 275 of the eyebox 260 may be based on the seatback axis 270, the horizontal axis 272, and/or the vertical axis 274. The center 275 may be spaced equidistantly from each side of the eyebox 260 when the eyebox region is shaped as a cube. The center 275 may be spaced from the seatback axis 270 and/or the vertical axis 274 by a distance represented by a dimension 276. The distance represented by the dimension 276 may be selected to orient the eyebox 260 at a location based on a representative location of an observer's eyes. In one example, the distance represented by dimension 276 may be between approximately 20 and 100 millimeters. In another example, a location of the eyebox 260 may also be based on a distance represented by dimension 278. The distance represented by dimension 278 may be between approximately 0 and 5 millimeters.
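Purely as a worked example under assumed axes (x forward, z up, origin at the intersection of the headrest axes, all values in millimeters), the center 275 might be located as follows; the specific offsets are illustrative values within the ranges given above and are not taken from the figures.

```python
# Worked example only: locate the eyebox center from the headrest/seatback
# axes using assumed offsets within the ranges described above.
import numpy as np

axes_origin = np.array([0.0, 0.0, 0.0])  # intersection of axes 270/272/274 (assumed)
dimension_276 = 60.0                     # forward spacing from the vertical axis, mm
dimension_278 = 3.0                      # spacing along the vertical axis, mm (assumed direction)

center_275 = axes_origin + np.array([dimension_276, 0.0, dimension_278])
print(center_275)                        # -> [60.  0.  3.]
```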
A virtual image sensor assembly 206 may include one or more sensors to simulate an observer's view of the HUD patch 111. Virtual image sensor assembly 206 therefore has a field of view which includes the HUD patch 111. In one example, the virtual image sensor assembly 206 may include a camera array having multiple cameras. An exemplary virtual image sensor assembly 206 is illustrated in
In accordance with certain embodiments and with particular reference to
An apparatus for virtual image acquisition and correction may be utilized, as mentioned, during initial vehicle assembly, windshield replacement, or removal and replacement of the virtual image generator 105, for example. Thus, to ensure repeatable results, a consistent setup is desirable. Because the present disclosure locates the virtual image sensor assembly 206 within the eyebox 260, a preferred location is defined substantially with respect to a vehicle reference. Features within the vehicle providing reference for an eyebox location may include, for example, one or more of A-pillars, B-pillars, the door opening frame, the roof, or other static, structural features. One exemplary feature may include the vehicle seat tracks 262.
In operation, the virtual image sensor assembly 206 may provide positional reference data as set forth herein. A controller may access data, for example reference locations for preselected vehicle features, and compare the reference locations to the positional reference data provided by the virtual image sensor assembly 206. The user may be prompted to positionally adjust the virtual image sensor assembly 206 when the virtual image sensor assembly 206 is adjustably configured, for example affixed to a headrest fixture 221 or affixed to robot assembly 231, as set forth herein. If affixed to the headrest fixture 221, the user may control actuation of the seat positioning motors to bring the virtual image sensor assembly 206 into a final desired position based upon the positional reference data provided by the virtual image sensor assembly 206. Alternatively, or additionally, a laser alignment system may be employed to locate the virtual image sensor assembly 206 at its desired position. Similarly, if affixed to the robot assembly 231, the user may control actuation of the robot to bring the virtual image sensor assembly 206 into a final desired position based upon the positional reference data provided by the virtual image sensor assembly 206, with the robot assembly 231 advantageously adjustable with six degrees of freedom; here again, a laser alignment system may alternatively or additionally be employed. In accordance with another embodiment in which the virtual image sensor assembly 206 has limited adjustability once fixtured, for example when attached via fixture 223 to the vehicle seat tracks 262 or other vehicle structure, fine adjustments to the exemplary virtual image sensor assembly 206 may be effected at each individual camera 300, such as by servo mechanisms as described herein. Alternatively, or additionally, the virtual image sensor assembly 206 may be fixtured with an intervening adjustment mechanism to allow for final positioning of the virtual image sensor assembly 206 based upon one or more of the positional reference data provided by the virtual image sensor assembly 206, a laser alignment system, or another positional feedback tool apparent to one having ordinary skill in the art. When properly configured with controllably adjustable mechanizations, for example robot assembly 231, seat positioning motors, or servo motors, any of the above adjustments may be automated through well-known feedback control.
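A minimal sketch of such feedback adjustment is given below, assuming hypothetical `measure` and `move` callables standing in for the positional data reported by the virtual image sensor assembly and for seat-motor, servo, or robot actuation; it simply drives the measured location toward a stored reference and is not an actual control implementation.

```python
# Illustrative positioning loop: compare measured feature locations with
# stored reference locations and nudge the fixture until the error is small.
import numpy as np

def position_sensor_assembly(measure, move, reference, tol=1.0, max_steps=50):
    for _ in range(max_steps):
        error = reference - measure()      # positional error, e.g. millimeters
        if np.linalg.norm(error) < tol:
            return True                    # final desired position reached
        move(0.5 * error)                  # proportional correction step
    return False

if __name__ == "__main__":
    state = np.array([10.0, -5.0, 2.0])    # simulated fixture position

    def move(step):                        # stand-in for motor/robot actuation
        state[:] = state + step

    ok = position_sensor_assembly(lambda: state.copy(), move,
                                  reference=np.zeros(3))
    print(ok, state)
```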
With reference to
Preferably, alignment of the virtual image may be accomplished subsequent to determination and integration of the distortion compensation function into the virtual image generator 105. The field of view of the virtual image sensor assembly 206 may also include preselected vehicle components or other permanent or temporary fiducials providing positional reference as described herein. An exemplary virtual image alignment process may include scanning the field of view of the virtual image sensor assembly 206 and identifying such fiducials. A predetermined test image is projected by virtual image generator 105 and may include one or more identification features. The controller 215 may access data, for example predefined positional relationship data relating the fiducial(s) to the preferred location and alignment of the predetermined test image. The controller may compare the test image location and alignment to the preferred location and alignment and, based upon the differences, effect alignment of the predetermined test image with the preferred location and alignment. Image alignment adjustments are thus determined. Preferably, such image alignment adjustments may be implemented in non-volatile memory associated with the PGU 104 of the virtual image generator 105 as a calibration thereto and applied to raw images provided by controller 102 for projection and display. Such corrections or compensations may generally be referred to as alignment compensation functions. Alternatively, such image alignment adjustments may be implemented in non-volatile memory associated with the controller 102 as a calibration thereto which applies corrections to raw images prior to provision to the PGU 104 of the virtual image generator 105 for projection and display. Thus, once determined, the image alignment adjustments are provided by controller 215 to virtual image generator 105 and integrated therein as part of the virtual image pre-projection processing. Image alignment adjustments may be implemented in various ways depending upon the particular hardware configuration of the HUD system 101 and, more particularly, of the virtual image generator 105. By way of non-limiting examples, adjustment of the virtual image may be effected by rotation of an interior mirror in the case of a DLP HUD system, application of a lens function to a phase hologram in the case of a holographic HUD system, or, in the case of an LCD HUD system, image translation on an LCD display having reserve pixels or x,y translation of the entire LCD display.
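One non-limiting way to realize the comparison step is sketched below: given coordinates of identification features detected in the captured image and their preferred coordinates derived from the fiducials, a least-squares x/y shift and rotation are estimated as an alignment compensation. Feature detection itself is omitted, and all names and data are illustrative assumptions.

```python
# Illustrative alignment estimate: best-fit 2-D shift and rotation mapping
# detected test-image feature points onto their preferred locations.
import numpy as np

def estimate_alignment(detected, preferred):
    """detected, preferred: (N, 2) arrays of corresponding point coordinates."""
    shift = preferred.mean(axis=0) - detected.mean(axis=0)
    d0 = detected - detected.mean(axis=0)
    p0 = preferred - preferred.mean(axis=0)
    # Optimal least-squares rotation angle for 2-D point sets.
    cross = d0[:, 0] * p0[:, 1] - d0[:, 1] * p0[:, 0]
    dot = (d0 * p0).sum(axis=1)
    angle = float(np.arctan2(cross.sum(), dot.sum()))
    return shift, angle                    # translation (x, y) and rotation (rad)

if __name__ == "__main__":
    preferred = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
    theta = np.deg2rad(2.0)                # simulated small misalignment
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    detected = preferred @ rot.T + np.array([1.5, -0.8])
    print(estimate_alignment(detected, preferred))
```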
Color alignment of the virtual image displayed within the HUD patch may also be adjusted in similar fashion. For example, a predetermined test image, such as a matrix of regularly spaced dots in a two-dimensional Cartesian coordinate system as described herein, may be projected by the virtual image generator 105. The virtual image sensor assembly 206 captures the reflected image within the HUD patch 111, including the individual color channels, and provides the data to controller 215. Coordinates of the dots for each color channel may be extracted and misalignments determined through comparisons. Adjustments to one or more color channels to bring them into alignment are then determined. Preferably, such color channel adjustments may be implemented in non-volatile memory associated with the PGU 104 of the virtual image generator 105 as a calibration thereto and applied to raw images provided by controller 102 for projection and display. Such corrections or compensations may generally be referred to as color compensation functions. Alternatively, such color channel adjustments may be implemented in non-volatile memory associated with the controller 102 as a calibration thereto which applies corrections to raw images prior to provision to the PGU 104 of the virtual image generator 105 for projection and display. Thus, once determined, the color channel adjustments are provided by controller 215 to virtual image generator 105 and integrated therein as part of the virtual image pre-projection processing to effect color alignment compensation.
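A simplified sketch of such a per-channel comparison follows. It reduces the dot-matrix extraction to one intensity-weighted centroid per color channel and reports each channel's offset relative to green, whereas an actual system would locate and compare every dot of the test matrix; the array shapes and names are assumptions for illustration.

```python
# Illustrative color-alignment estimate: centroid of each RGB channel of the
# captured HUD patch image, and per-channel offsets relative to green.
import numpy as np

def channel_offsets(patch_image_rgb, reference_channel=1):
    """patch_image_rgb: (H, W, 3) float array; returns per-channel (dy, dx)."""
    h, w, _ = patch_image_rgb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    centroids = []
    for c in range(3):
        chan = patch_image_rgb[:, :, c]
        total = chan.sum() if chan.sum() > 0 else 1.0
        centroids.append(np.array([(yy * chan).sum() / total,
                                   (xx * chan).sum() / total]))
    ref = centroids[reference_channel]
    return [ref - c for c in centroids]    # shift to apply to each channel

if __name__ == "__main__":
    img = np.zeros((100, 100, 3))
    img[50, 50, 1] = 1.0                   # green dot at nominal position
    img[52, 49, 0] = 1.0                   # red dot slightly misaligned
    img[50, 50, 2] = 1.0                   # blue dot aligned
    print(channel_offsets(img))            # red channel needs a (-2, +1) shift
```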
Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship can be a direct relationship where no other intervening elements are present between the first and second elements, but can also be an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements.
It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure.