The present disclosure relates to vehicle displays and systems, such as but not necessarily limited to dynamically enhancing operation of a display to support the presentation of video images in a manner operable to counteract external factors influencing symbology of human visibility.
Electronic displays may be included in vehicles, automobiles, etc. to facilitate presenting video images and/or other types of images, media, etc. to occupants. Such displays may include a screen or other type of human-machine interface (HMI) through which the video images may be presented. An ability to maintain symbology of human visibility for the video images may be related to capabilities of the display to present the video images in a manner matching or closely aligned with an intended visibility of the video images when received at the display within a video signal. In other words, an ability of the display to present the video images through the screen as depicted or originally intended within the video signal may reflect capabilities of the display to maintain symbology of human visibility. Due to displays, particularly those included within vehicles, operating in environments where various external influences may induce a wide variety of circumstances capable of disrupting the symbology of human visibility, such as from ambient light, forward looking light, temperature, etc., it may be desirable to dynamically adjust presentation of the video images according to such external influences.
One aspect of the present disclosure relates to a display system configured for dynamically adjusting an image signal to compensate for external factors. The display may include a backlight source or source driver(s), which may be global and/or addressable (i.e., local dimming or full array local dimming, or edge backlight), a transmissive display, and/or an emissive display (organic light-emitting diode (OLED) or microLED). The display may receive a video signal from an external source or generate its own video from an embedded source. The system may periodically receive input of ambient light conditions from an ambient light sensor and may apply a corresponding transformation to a gamma transfer function, other type of transfer function, or other characteristic of the display, such as to thereby improve the perceived ambient contrast ratio. The system may optionally receive information about the video content and adapt the transformation accordingly.
One aspect of the present disclosure relates to a display system including a display subsystem, an ambient light sensor, a host microcontroller, and/or additional peripheral circuitry, e.g., memory, processors, etc. The display system may utilize a communication link between one or more source drivers of the display subsystem and the host microcontroller, which may rely upon communication speeds capable of updating the image contents once per frame or at other desirable intervals. Alternatively, slower update rates may be considered to match the physiological response of the human eye to light stimulus. When employed in automotive applications, for example, the display system may employ a frame rate of or about 60 Hz, and in gaming applications, for example, a frame rate of or about 240 Hz or more may be employed. The light sensor input may be filtered to accommodate pupillary responses to changes in ambient brightness (e.g., the pupil may contract faster than it dilates). The source drivers may allow for dynamic adjustments to gamma transfer functions or other transfer functions. In the case of gamma transfer functions, the gamma function may refer to a transformation of the per-pixel luminance over a given brightness scale from 0-100%; however, mathematical relationships other than the gamma transfer function may also be utilized.
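As a rough illustration of the light sensor filtering noted above, the following Python sketch applies a single-pole low-pass filter whose time constant depends on whether the ambient light is increasing or decreasing, so that increases are tracked more quickly than decreases; the function name and time constants are illustrative assumptions rather than values prescribed by this disclosure.

```python
# Minimal sketch of asymmetric low-pass filtering of an ambient light sensor
# reading. The time constants are illustrative assumptions: the pupil is
# assumed to contract (brightening) faster than it dilates (dimming), so
# increases in ambient light are tracked more quickly than decreases.

def filter_ambient_lux(previous_lux: float, measured_lux: float,
                       dt_s: float,
                       tau_brighten_s: float = 0.3,
                       tau_dim_s: float = 2.0) -> float:
    """Single-pole low-pass filter with a direction-dependent time constant."""
    tau = tau_brighten_s if measured_lux > previous_lux else tau_dim_s
    alpha = dt_s / (tau + dt_s)          # smoothing factor for this update step
    return previous_lux + alpha * (measured_lux - previous_lux)

# Example: stepping a 60 Hz update loop (dt ~= 16.7 ms) toward a sudden
# increase in ambient light from 100 lux to 10,000 lux.
lux = 100.0
for _ in range(10):
    lux = filter_ambient_lux(lux, 10_000.0, dt_s=1 / 60)
print(lux)
```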
A gamma value may be derived from a relational transformation between filtered light input and desired gamma output. From this relational transformation, a mathematical function or a lookup table may be generated, however, non-gamma based transformations may also be used. The relational transformation may also accept secondary inputs, such as a histogram of the source video image or a coefficient value which numerically describes the image contents. For transmissive displays, a relationship may be created between applied voltage and a given pixel's transmission rate, and for an emissive display, a relationship may be created between applied current and subpixel luminance. The mathematical relationship or lookup table described herein may be transformed based on the relationship between transmission and voltage and/or the relationship between current and luminance and the desired gamma exponent. This may be used to determine the new drive voltages (for transmissive displays) or new current adjustments (emissive displays) to achieve the desired gamma curve. The microcontroller may communicate to the display source driver the new gamma setpoint voltages, and thereby transform the output image of the display. Because the transformation may occur at a last step of the video path, it may be compliant with functional safety systems which do a checksum verification between the source and target, which may differ from digital processes whereby a lossy algorithm may be used.
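A minimal sketch of the relational transformation between filtered light input and desired gamma output described above, expressed as an interpolated lookup table with an optional secondary content coefficient, is shown below; the breakpoints and the weighting of the coefficient are assumptions chosen only to illustrate the approach.

```python
import numpy as np

# Illustrative lookup table implementing the relational transformation between
# filtered light input and desired gamma output. The breakpoints and the
# weighting of the content coefficient are assumptions, not disclosed values.
AMBIENT_LUX_BREAKPOINTS = np.array([10.0, 1_000.0, 10_000.0, 100_000.0])
GAMMA_BREAKPOINTS = np.array([2.2, 2.0, 1.8, 1.5])   # lower gamma lifts dark content

def desired_gamma(filtered_lux: float, content_coefficient: float = 0.0) -> float:
    """Look up the desired gamma for the filtered ambient light level.

    content_coefficient is an optional secondary input (e.g., derived from a
    histogram of the source image); here it simply biases the gamma downward
    for predominantly dark content.
    """
    gamma = float(np.interp(filtered_lux, AMBIENT_LUX_BREAKPOINTS, GAMMA_BREAKPOINTS))
    return max(1.0, gamma - 0.2 * content_coefficient)

# Example: bright daylight with mostly dark video content.
print(desired_gamma(50_000.0, content_coefficient=1.0))
```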
One aspect of the present disclosure relates to a method for dynamically enhancing video images presented through an electronic vehicle display. The method may include measuring external factors influencing symbology of human visibility for the video images to be presented through the display, receiving imaging properties for the video images, identifying a plurality of nominal setpoints for one or more source drivers of the display, and processing the external factors, the imaging properties, and the nominal setpoints according to an image enhancement process. The image enhancement process may include adjusting one or more of the nominal setpoints to create a plurality of enhanced setpoints operable for maintaining symbology of the video images when influenced by the external factors.
The method may include identifying the nominal setpoints from a nominal video transfer function of the display, optionally with the nominal video transfer function cross-referencing each of the nominal setpoints relative to display current or voltages for the display and gray shade values included as at least part of the imaging properties for the video images.
The method may include specifying the enhanced setpoints within an enhanced video transfer function operable for use with the display, optionally with the enhanced video transfer function cross-referencing each of the enhanced setpoints relative to the display current or voltages and the gray shade values.
The method may include generating a setpoint message for use in conveying the enhanced video transfer function to the source drivers, optionally with the setpoint message delineating the enhanced setpoints to be used when presenting the video images through the display.
The method may include generating a display luminance control message for use in conveying display luminance control values to the display, optionally with the display luminance control values dynamically varying a luminance of the display as a function of the external factors.
The method may include delineating the enhanced setpoints within a lookup table configured for cross-referencing each of the enhanced setpoints relative to the display current or voltages and the gray shade values.
The method may include the source drivers interpolating the display current or voltages for the gray shade values beyond the gray shade values specified with the enhanced setpoints.
The method may include selecting the enhanced setpoints to adjust gamma characteristics for the display relative to the nominal setpoints.
The method may include selecting the enhanced setpoints based at least in part on a histogram of the video images.
The method may include the gamma characteristics of the display being defined according to a gamma transfer function represented as:

T/TMax = (GS/GSMax)^γ

where T is the transmission, TMax is the maximum display transmission, GS is the video gray shade number, GSMax is the maximum video gray shade number, and γ is the display gamma value.
The method may include presenting the video images through the display according to the enhanced setpoints.
The method may include presenting the video images without remapping of the gray shade values for the video images.
The method may include deriving the external factors from one or more of an ambient light sensor configured for sensing an ambient light visible to an occupant of a vehicle having the display, a forward looking light sensor configured for sensing a forward looking light visible to the occupant while viewing an area over top of the display, a biometric sensor configured for sensing pupillary responses and/or a pupil diameter of the occupant while viewing the display, and a camera configured for recording video images of the vehicle, the display, and/or an ambient environment inside and/or outside of the vehicle.
The method may include the display configured as a transmissive display.
The method may include the display configured as an emissive display.
One aspect of the present disclosure relates to a display system for a vehicle. The display system may include a display operable within the vehicle for presenting video images received in an input video signal to an occupant of the vehicle, an external factors controller configured for determining external factors influencing symbology of human visibility for the video images to be presented through the display, and an image enhancement controller configured for receiving imaging properties for the video images, processing the external factors and the imaging properties according to an image enhancement process, optionally with the image enhancement process creating a plurality of enhanced setpoints operable for maintaining symbology of the video images against the external factors, and directing the display to present the video images according to the enhanced setpoints.
The display system may include the image enhancement controller configured for selecting the enhanced setpoints according to a gamma transfer function or another transfer function, optionally with each of the transfer functions specifying the enhanced setpoints according to display current or voltages for the display and gray shade values for the video images.
The display system may include the display configured for presenting the video images according to the enhanced setpoints without remapping of gray shade values specified for the video images within the input video signal.
One aspect of the present disclosure relates to a controller for dynamically enhancing video images presented through an electronic display of a vehicle. The controller may be configured for measuring an ambient light influencing symbology of human visibility for the video images to be presented through the display, generating a plurality of enhanced setpoints operable for maintaining symbology of the video images against the ambient light, and directing the display to present the video images according to the enhanced setpoints such that the symbology of the video images is maintained against the ambient light without the display remapping gray shade values specified for the video images.
The controller may be configured for directing the display to present the video images according to luminance control values, optionally with the luminance control values dynamically varying a luminance of the display as a function of the ambient light.
The above features and advantages along with other features and advantages of the present teachings are readily apparent from the following detailed description of the modes for carrying out the present teachings when taken in connection with the accompanying drawings. It should be understood that even though the following Figures and embodiments may be separately described, single features thereof may be combined to additional embodiments.
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate implementations of the disclosure and together with the description, serve to explain the principles of the disclosure.
As required, detailed embodiments of the present disclosure are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present disclosure.
The ECU 24 may be in electrical communication with the forward looking light sensor 28, the ambient light sensor 30, the display 12, and/or other componentry or circuitry to facilitate the dynamic image enhancement contemplated herein. The ECU 24 may receive a forward luminance value 38 from the forward looking light sensor 28 that may be proportional to an intensity of the forward looking light 34 sensed thereby. The ECU 24 may also receive an ambient luminance value 40 from the ambient light sensor 30 that may be proportional to an intensity of the ambient light 36 sensed thereby. A biometric sensor (not shown) may be configured for providing a biometric signal to the ECU 24, such as in response to sensing pupillary responses and/or a pupil diameter of the occupant while viewing the display 12. A camera (not shown) may be configured for providing a recorded signal to the ECU 24, such as in response to recording video images of the vehicle, the display 12, and/or an ambient environment inside and/or outside of the vehicle. The forward looking light sensor 28, the ambient light sensor 30, the biometric sensor, the camera, and/or additional sensors or diagnostics tools included onboard the vehicle, or in communication therewith, such as via wireless signaling, may be utilized in accordance with the present disclosure to facilitate assessing external factors influencing symbology of human visibility for the video images presented through the display 12. The ECU 24 may be configured to direct or otherwise control the display 12 utilizing messages, signals, information, etc. transferred via a communication medium 48.
A video input 58 may be configured for receiving an input video signal 60, such as one including video images, media, etc. from an external source and/or an entity onboard the vehicle and/or in communication therewith. The video input 58, for example, may be utilized in cooperation with the additional subsystems onboard the vehicle to facilitate presenting video images through the display 12 and otherwise communicating information with vehicle occupants 20 through one or more of the displays 12. A signal processor 64 may be configured to operate independently of the ECU 24 for purposes of recovering the video images or other media from the input video signal 60 and generating a video output signal 66 for presentation through the display 12. The signal processor 64 may be configured as a video converter operable for converting the video images to those suitable for presentation through the display 12, which may vary depending on the type and capabilities of the display 12. The signal processor 64 is shown to be independent of the ECU 24 for non-limiting purposes in order to highlight one advantageous aspect of the present disclosure whereby presentation of the video images may be made without having to remap gray shade values or to otherwise perform complex or dedicated conversion processes on the video images. The present disclosure, optionally instead, may be operable to essentially pass the video images through to the display 12 without making adjustments to the gray shade values or other image content manipulations to account for the external influences affecting symbology of human visibility.
The ECU 24 may be configured to operate according to a plurality of instructions stored on a non-transitory computer-readable storage medium, which, when executed with one or more processors, may be operable for facilitating the operation contemplated herein. The ECU 24, for example, may be configured for communicating with the forward looking light sensor 28 to receive the forward luminance value 38 and the ambient light sensor 30 to receive the ambient luminance value 40. The ECU 24 may include a dynamic image enhancement block 72, an image analysis block 74, a background (or ambient) luminance determination block 76, a forward luminance determination block 78, and an automatic luminance determination block 80. These blocks, the operations associated therewith, and/or the other elements shown may be included as part of the ECU 24; however, the present disclosure fully contemplates one or more of the blocks and/or the operations associated therewith being performed or generated from or based on sources offboard the ECU 24, such as with the corresponding information, data, outputs, etc. being generated or provided from a source, host, or other feature and thereafter communicated to the ECU 24. The capability to perform some of the illustrated functions outside of the ECU 24, such as with a graphics processing unit (GPU) or host, may be beneficial in ameliorating the complexities of the ECU 24 and/or to permit some of the described operations to be performed in software, with algorithms, etc. offboard the ECU 24. A display luminance control value 82 may be generated by the automatic luminance determination block 80 of the ECU 24 and transferred to the display 12. The blocks 72 to 80 may be implemented in hardware and/or software executing on the hardware. The input video signal 60 may be received by the image analysis block 74. The forward luminance value 38 may be received by the forward luminance determination block 78. The ambient luminance value 40 may be received by the background luminance determination block 76.
The ambient illumination shining on/reflected from the display 12 may be measured by the ambient light sensor 30 and the illumination Lux level may be communicated to the background luminance determination block 76. The background luminance determination block 76 may determine background luminance (LBG) information of the ambient light 36 observed on the front face of the display 12. The background luminance LBG information may be relayed to the automatic luminance determination block 80. The automatic luminance determination block 80 may use the background luminance LBG information received from the background luminance determination block 76 to determine the desired display luminance control value 82. The display luminance control value 82 may be used by the display 12 to control the luminance level of the display 12. Display luminance (LDisplay) information (e.g., the display luminance control value 82 internal to the ECU 24) may be transferred to the dynamic image enhancement block 72. In parallel to the ambient (background) luminance processing, the forward looking light sensor 28 may be configured to measure the forward luminance value 38 seen by the driver when looking through the windshield. In addition to controlling the display 12 luminance as a function of the ambient luminance value 40, display 12 visibility performance may be improved by the utilization of the forward looking light sensor 28 to compensate for conditions of transient adaptation or eye adaptation mismatch. When the driver 20 looks at a bright scene, such as a sunrise or sunset, the display 12 luminance may be increased since the pupils of the driver 20 may be constricted and thus may require more display 12 luminance for image visibility. Under the sunrise/sunset scenarios, the ambient light sensor 30 may be shaded and, without the use of the forward looking light sensor 28, may cause the display 12 luminance to be decreased. The implementation of the forward looking light sensor 28 may thus allow the display 12 luminance to be properly controlled, increasing the display 12 luminance where appropriate.
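One possible way the automatic luminance determination block 80 might combine the background luminance LBG and forward looking luminance information is sketched below; the "use the brighter adaptation level" rule and the logarithmic mapping to nits are assumptions made for illustration, not a prescribed implementation.

```python
import math

# Hedged sketch of an automatic luminance determination step that combines
# background (ambient) luminance and forward looking luminance into a display
# luminance control value. The blending rule and the nit range are assumptions.

def display_luminance_control(l_bg_lux: float, l_fl_lux: float,
                              min_nits: float = 100.0,
                              max_nits: float = 1000.0) -> float:
    # Use the brighter of the two measurements so a shaded ambient sensor does
    # not dim the display while the driver is adapted to a bright forward scene.
    adaptation_lux = max(l_bg_lux, l_fl_lux, 1.0)
    # Simple logarithmic mapping from adaptation level (~1..100,000 lux) to nits.
    span = min(math.log10(adaptation_lux) / 5.0, 1.0)
    return min_nits + (max_nits - min_nits) * span

# Example: shaded ambient sensor (200 lux) but a bright sunset ahead (40,000 lux).
print(display_luminance_control(200.0, 40_000.0))   # roughly 930 nits
```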
An illuminance Lux level of the forward looking light sensor 28 may be conveyed to the forward luminance determination block 78. The forward luminance determination block 78 may determine forward looking luminance (LFL) information for the forward looking light. The forward looking luminance LFL information may be transmitted to the automatic luminance determination block 80. The automatic luminance determination block 80 may use the information in conjunction with the background luminance level LBG information to determine an appropriate display luminance control value 82. One non-limiting aspect of the present disclosure contemplates the dynamic image enhancement being configured for selecting or otherwise creating a plurality of enhanced setpoints 84 operable for maintaining symbology of the video images against the external factors, i.e., against the ambient and forward looking light 34, 36. The enhanced setpoints 84 may be selected relative to nominal or typical setpoints 84 of the display 12. The display 12, for example, may be manufactured with a plurality of nominal setpoints and/or a nominal transfer function, such as those typically used to operate a source driver 86 thereon. In the event the display 12 lacks nominal setpoints, the enhanced setpoints 84 may be determined anew or set each time the display 12 is in use. The source driver 86 may be configured as a source driver integrated circuit (SDIC) or other logically executing feature included within software and/or hardware capable of varying or changing setpoints according to a setpoint message or signal received from the dynamic image enhancement block.
The enhanced setpoints 84, accordingly, may be generated according to an enhanced video transfer function and relative to the nominal setpoints and/or other design or predefined characteristics of the display 12 and/or the source driver 86. The source driver 86, for example, may be configured to apply voltages and/or currents globally and/or to particular pixels, optionally on a row-by-row and column-by-column basis, to facilitate selectively setting and varying the per-pixel coloring used to present the video images or other media, i.e., setting the voltages and/or currents needed for each pixel.
The enhanced setpoints 84 contemplated herein may be calculated by the dynamic image enhancement block 72, and optionally adapted according to the type or configuration of the display 12, to facilitate counteracting external factors influencing symbology of human visibility for the video image to be presented. The enhanced setpoint 84 values may be those associated with controllable parameters of the source driver 86 capable of setting one or more transfer functions for the display 12, i.e., controllable parameters of the display 12 operable for controlling the presentation of video images. The transfer functions may correspond with lookup tables, mathematical representations, logic, or other dynamic features capable of selectively controlling voltages and/or current utilized to operate the display 12. While other transfer functions are contemplated and may be similarly employed, one non-limiting aspect of the present disclosure contemplates the enhanced setpoints 84 being selectable to correspondingly define a gamma function or gamma response for the display 12, optionally on a frame-by-frame basis. Instead of trying to maintain the desired display 12 gamma at a fixed value, e.g., 2.2, the present disclosure relates to dynamically modifying a transmission versus gray shade function within the display 12 by sending the desired gamma voltage setpoint 84 information to the display 12 via a communication interface, such as I2C or SPI.
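As an illustration of conveying such setpoint information over I2C, the sketch below packs a set of gamma voltage setpoints into a register write using the smbus2 library; the device address, register, payload format, and voltage scaling are hypothetical, since an actual source driver defines its own register map.

```python
from smbus2 import SMBus, i2c_msg

# Sketch of packing enhanced gamma voltage setpoints into a setpoint message
# for a source driver reachable over I2C. The address, register, payload
# layout, and scaling below are illustrative assumptions, not a real register map.

SOURCE_DRIVER_I2C_ADDR = 0x4C      # hypothetical 7-bit device address
GAMMA_SETPOINT_REGISTER = 0x20     # hypothetical base register for setpoints

def pack_setpoint_message(voltages, v_ref: float = 5.0) -> bytes:
    """Encode each setpoint voltage as a 10-bit code split across two bytes."""
    payload = bytearray([GAMMA_SETPOINT_REGISTER])
    for v in voltages:
        code = max(0, min(1023, round(v / v_ref * 1023)))
        payload += bytes([(code >> 8) & 0x03, code & 0xFF])
    return bytes(payload)

def send_setpoints(bus: SMBus, voltages) -> None:
    """Transmit the setpoint message as a single I2C write transaction."""
    bus.i2c_rdwr(i2c_msg.write(SOURCE_DRIVER_I2C_ADDR, pack_setpoint_message(voltages)))

# Example (requires an I2C adapter, e.g., /dev/i2c-1 on an embedded host):
# with SMBus(1) as bus:
#     send_setpoints(bus, [0.6, 1.1, 1.7, 2.3, 2.8, 3.3, 3.8, 4.2])
```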
The desired transfer function may be determined as a function of the light sensor inputs 38, 40 and the display 12 operational luminance. If available, the dynamic image enhancement block 72 may also receive information from graphics generation units providing video histogram information or from general screen types, such as cluster, map, etc. The histogram or screen type information may be generated with the ECU 24 and/or from an external source or host and used to determine where a majority of the image gray shades reside, which may then be used in accordance with the present disclosure to optimize the image transfer information sent to the display 12 via gamma voltage setpoints. The process to determine the gamma voltage setpoints for the display 12 may start with measuring transmission versus voltage for a transmissive TFT LCD display 12.
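A minimal sketch of deriving such histogram information is shown below, reducing a frame to a single "dark content" fraction that could serve as the secondary content input mentioned earlier; the threshold and the single-coefficient summary are assumptions for illustration.

```python
import numpy as np

# Sketch of reducing a frame to a single "dark content" coefficient indicating
# where the majority of the image gray shades reside. The threshold and the
# single-coefficient summary are assumptions; a real implementation might use
# the full histogram or screen-type metadata instead.

def dark_content_coefficient(frame_gray: np.ndarray,
                             dark_threshold: int = 64,
                             gs_max: int = 255) -> float:
    """Return the fraction of pixels at or below the dark threshold (0..1)."""
    hist, _ = np.histogram(frame_gray, bins=gs_max + 1, range=(0, gs_max + 1))
    return float(hist[: dark_threshold + 1].sum() / hist.sum())

# Example: a synthetic, mostly dark frame.
frame = np.clip(np.random.normal(40, 20, size=(480, 800)), 0, 255).astype(np.uint8)
print(dark_content_coefficient(frame))   # close to 1.0 for predominantly dark frames
```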
The alternating voltages that may be required to maintain zero net DC voltage across the liquid crystal are not elucidated, as one skilled in the art would appreciate such values to be common in the LCD industry, i.e., most TFT LCD source drivers 86 employ gamma voltage setpoints that establish the gamma transfer function. In some older drivers, these voltages may be set with resistive divider strings or other methods such as digital to analog converters. In newer source drivers 86, these voltages may be set via a communication bus with the driver 86, such as I2C or SPI. The number of voltage and/or current points may vary, such as 8, 16, or an even higher number of setpoints 84, for better control over the transfer function. These voltage and/or current setpoints 84 may be selected in accordance with the enhancement processes described herein and associated with discrete gray shade values, with the voltages therebetween being interpolated by the driver 86.
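The interpolation performed by the driver 86 between discrete setpoints might resemble the sketch below; the setpoint gray shades and voltages are illustrative assumptions only.

```python
import numpy as np

# Sketch of the interpolation a source driver might perform between discrete
# gray shade setpoints: with, e.g., 8 voltage setpoints tied to fixed gray
# shades, intermediate gray shades receive linearly interpolated drive
# voltages. The setpoint positions and voltages are illustrative assumptions.

SETPOINT_GRAY_SHADES = np.array([0, 36, 72, 108, 144, 180, 216, 255])
SETPOINT_VOLTAGES    = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.6, 4.2])

def drive_voltage(gray_shade: int) -> float:
    """Interpolate the drive voltage for any gray shade 0..255."""
    return float(np.interp(gray_shade, SETPOINT_GRAY_SHADES, SETPOINT_VOLTAGES))

print(drive_voltage(90))   # falls between the 72 and 108 gray shade setpoints
```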
The LCD transfer function, as shown in the accompanying figures, may be represented by the following equation (Equation 1):

T = TMax × (GS/GSMax)^γ

where T=transmission, TMax=maximum display transmission, GS=video gray shade number, GSMax=maximum video gray shade number, and γ=display gamma value.

Equation 1 may also be normalized in terms of a percentage basis as shown in the following equation:

T/TMax (%) = 100% × (GS/GSMax)^γ
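A short numeric check of the normalized form, assuming an 8-bit gray shade range (GSMax = 255) and a nominal display gamma of 2.2, is given below.

```python
# Worked check of the normalized gamma transfer function above, assuming an
# 8-bit gray shade range (GSMax = 255) and a nominal display gamma of 2.2.
GS_MAX, GAMMA = 255, 2.2

for gs in (16, 64, 128, 255):
    t_percent = 100.0 * (gs / GS_MAX) ** GAMMA
    print(f"gray shade {gs:3d} -> {t_percent:6.2f} % of maximum transmission")
# gray shade 16 -> ~0.23 %, 64 -> ~4.8 %, 128 -> ~22 %, 255 -> 100 %
```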
One non-limiting aspect of the present disclosure contemplates utilizing the VLCD versus transmission function 110, diagrammed in the accompanying figures, when determining the enhanced setpoints 84.
One non-limiting aspect of the present disclosure contemplates utilizing the display luminance control value 82 in cooperation with the enhanced setpoints 84 to increase or decrease the luminance level of a display 12. This capability may be helpful in addressing the visibility of the upper gray shades in a video image. The lower gray shades, however, may nonetheless remain largely less visible due to the reflected luminance of the display 12 overwhelming and washing out the lower gray shade luminance levels. Because increasing the display 12 luminance may be insufficient to adequately maintain symbology of human visibility for lower gray shade video content, one non-limiting aspect of the present disclosure contemplates correspondingly selecting the enhanced setpoints 84 on a frame-by-frame basis, and thereby the associated gamma function or other transfer function utilized by the source driver 86. The enhanced setpoints 84 may be used in this manner to improve visibility for lower gray shade video content, optionally to the extent of maintaining symbology of human visibility therefor within a predefined range or threshold such that the display 12 presents the video images in a manner matching or closely aligned with an intended visibility of the video images when received at the display 12 within the input video signal.
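The washout effect described above can be illustrated with a small calculation: the perceived luminance of a gray shade is its emitted luminance plus the luminance reflected off the display face, so a lower gamma lifts low gray shades further above that reflected floor. The display luminance, reflected luminance, and gamma values in this sketch are assumptions chosen only to show the effect.

```python
# Illustrative calculation of why lower gray shades wash out: the perceived
# luminance of a gray shade is its emitted luminance plus the luminance
# reflected off the display face. The display luminance, reflected luminance,
# and gamma values are assumptions chosen only to show the effect.

DISPLAY_NITS = 800.0      # display luminance after the luminance control step
REFLECTED_NITS = 150.0    # ambient light reflected from the display face
GS, GS_MAX = 32, 255      # a low gray shade of interest

def perceived_contrast(gamma: float) -> float:
    """Contrast of the gray shade against the reflected 'black' level."""
    emitted = DISPLAY_NITS * (GS / GS_MAX) ** gamma
    return (emitted + REFLECTED_NITS) / REFLECTED_NITS

print(perceived_contrast(2.2))   # ~1.06: barely distinguishable from the reflection floor
print(perceived_contrast(1.6))   # ~1.19: visibly lifted with a lower (enhanced) gamma
```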
Block 146 relates to a characteristics process whereby the ECU 24 may determine operating characteristics and/or capabilities for the display 12. The characteristics process, for example, may include identifying nominal setpoints or other setpoint parameters for the display 12, e.g., the quantity, type, or other variables for the setpoints that may be usable with one or more transfer functions included with the display 12 or the source driver 86. The characteristics process may include determining the transmission versus VLCD relationship described above.
The enhanced setpoints 84 may be used to define VLCD on axis 130 relative to gray shade values on the horizontal axis 134. The gray shade values may correspond with gray shade values specified within the input video signal 60 for the video images to be presented. One non-limiting aspect of the present disclosure contemplates the ECU 24 being configured to selectively vary the setpoint message transmitted to the display 12 on a frame-by-frame basis to maintain symbology of human visibility for the video images associated therewith. This may include the ECU 24, or more specifically the dynamic image enhancement block 72, selectively varying the enhanced setpoints 84 used for each frame so as to correspondingly manipulate presentation of the video images. The setpoints 84, for example, may be varied so as to address the above-described complications with respect to maintaining symbology for lower gray shade content, e.g., the enhanced setpoints 84 may be selected to correspondingly influence presentation of the lower gray shade content such that the lower gray shade content may maintain visibility when presented with the higher gray shade content. The source driver 86 may utilize the corresponding enhanced setpoints 84, i.e., the corresponding transmission versus gray shade values, when presenting the associated frame of the video images.
The capability of the present disclosure to selectively generate enhanced setpoints 84 or to otherwise determine setpoints 84, optionally on a frame-by-frame basis, which may be achieved by deviating from or adjusting nominal setpoints 84 of the source driver 86, may be beneficial in maximizing symbology of human visibility for the video images without having to manipulate or otherwise transform the underlying content, e.g., without having to remap gray shade values or otherwise engage in costly and expensive manipulation of the content comprising the video images. While the present disclosure fully contemplates the ECU 24 including capabilities for image enhancement without adjusting the setpoints, such as within software, and/or without re-mapping gray shade values or otherwise directly manipulating the content of the video images, such as with the video converter 100, the capability to support symbology without having to reprogram or reconfigure the content may be advantageous in enhancing electronic displays 12 without the burdensome and expensive task of configuring new signal processors or other graphical elements of the display 12. The capabilities described herein, for example, may be particularly beneficial with general-purpose vehicle interface processors (VIPs) commonly employed within automotive products to facilitate operation of electronic displays 12. Rather than having to manipulate or otherwise reprogram such VIPs, the present disclosure may be operable with setpoint 84 controls typically included therewith such that the present disclosure may maintain symbology through manipulation of the setpoints 84 without having to correspondingly program or redesign other aspects of the VIPs.
The setpoints may be determined by starting with the desired transmission function 148, i.e., the desired transmission versus gray shade relationship, and mapping the desired transmission at each setpoint gray shade through the measured transmission versus voltage relationship of the display 12 to obtain the corresponding drive voltages.
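A sketch of this final mapping step under assumed characterization data follows; the measured transmission-versus-voltage samples and the choice of numerical inversion by interpolation are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

# Sketch of the final step described above: starting from the desired
# transmission at each gray shade setpoint and mapping it through a measured
# transmission-versus-voltage characteristic to obtain the gamma voltage
# setpoints. The measured samples below are assumptions for illustration.

# Assumed characterization of a (normally black) transmissive panel:
V_MEASURED = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])           # volts
T_MEASURED = np.array([0.001, 0.01, 0.05, 0.15, 0.35, 0.65, 0.90, 1.00])  # T/TMax

def gamma_voltage_setpoints(gamma: float, gray_setpoints, gs_max: int = 255):
    """Compute drive voltages for each gray shade setpoint at the given gamma."""
    gs = np.asarray(gray_setpoints, dtype=float)
    desired_t = (gs / gs_max) ** gamma                    # desired transmission function
    return np.interp(desired_t, T_MEASURED, V_MEASURED)   # invert T(V) numerically

print(gamma_voltage_setpoints(1.8, [0, 36, 72, 108, 144, 180, 216, 255]))
```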
The terms “comprising”, “including”, and “having” are inclusive and therefore specify the presence of stated features, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, or components. Orders of steps, processes, and operations may be altered when possible, and additional or alternative steps may be employed. As used in this specification, the term “or” includes any one and all combinations of the associated listed items. The term “any of” is understood to include any possible combination of referenced items, including “any one of” the referenced items. “A”, “an”, “the”, “at least one”, and “one or more” are used interchangeably to indicate that at least one of the items is present. A plurality of such items may be present unless the context clearly indicates otherwise. All numerical values of parameters (e.g., of quantities or conditions), unless otherwise indicated expressly or clearly in view of the context, including the appended claims, are to be understood as being modified in all instances by the term “about” whether or not “about” actually appears before the numerical value. A component that is “configured to” perform a specified function is capable of performing the specified function without alteration, rather than merely having potential to perform the specified function after further modification. In other words, the described hardware, when expressly configured to perform the specified function, is specifically selected, created, implemented, utilized, programmed, and/or designed for the purpose of performing the specified function.
While various embodiments have been described, the description is intended to be exemplary, rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the embodiments. Any feature of any embodiment may be used in combination with or substituted for any other feature or element in any other embodiment unless specifically restricted. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims. Although several modes for carrying out the many aspects of the present teachings have been described in detail, those familiar with the art to which these teachings relate will recognize various alternative aspects for practicing the present teachings that are within the scope of the appended claims. It is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative and exemplary of the entire range of alternative embodiments that an ordinarily skilled artisan would recognize as implied by, structurally and/or functionally equivalent to, or otherwise rendered obvious based upon the included content, and not as limited solely to those explicitly depicted and/or described embodiments.
This application claims the benefit of U.S. Provisional Application No. 63/503,447, filed May 19, 2023, the disclosure of which is hereby incorporated by reference in its entirety herein.