The present disclosure relates generally to liquid crystal display controls and more particularly to controls for pulse width modulated digital drive displays.
Some electronic displays, such as liquid crystal (LC) displays (including liquid crystal on silicon (LCoS) displays), may be driven by applying a pulse width modulation (PWM) signal that drives each pixel of the display to an active or inactive state (binary digital drive). In such a display, a PWM signal consisting of a sequence of voltage pulses (each defined by a voltage level and a duration or width) controls light emitted from the corresponding pixel of the display.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. Some non-limiting examples are illustrated in the figures of the accompanying drawings in which:
Different types or modes of PWM may be utilized in controlling a display. Each of these modes may have particular advantages, or disadvantages, depending on the application for which the display is being utilized, as well as on the specific characteristics of the display. Modes of modulation utilized in PWM include leading edge aligned modulation, trailing edge aligned modulation, and dual-edge modulation (also called center aligned modulation), as will be appreciated by those skilled in the art. In head-mounted display applications, such as in augmented reality (AR) and virtual reality (VR) headsets, visual artifacts from head motion or eye motion can be reduced by using center aligned modulation so that all pixel grayscale values appear at the same time/location. However, leading edge aligned modulation may be preferred over center aligned modulation to eliminate the presence of dark banding artifacts. These dark banding artifacts occur due to physical characteristics of an LCOS display that result in nonlinearities in the operation of adjacent pixels with dissimilar activation times (a.k.a. rising edges), as will be understood by those skilled in the art. While leading edge aligned PWM can be beneficial for this reason in AR and VR headset applications, temporal effects caused by the head movements of a wearer may result in unwanted artifacts being observed by the wearer when pixels of the LCOS display are controlled through leading edge aligned PWM; hence the need for a solution that provides the benefits of both modes without the artifacts.
Examples are described herein that provide techniques for modulating a center aligned PWM signal, in which a dithering bit associated with the signal is encoded separately, enabling the PWM signal to be positioned according to a base grayscale level: the leading edge of a pulse of the PWM signal is shifted for centering, and only the trailing edge of the pulse is extended for dither. Thus, in some examples, a center aligned PWM signal applied to each pixel is dithered only on the trailing edge of the PWM signal. Although the PWM signal is modulated on both the leading edge and the trailing edge of the pulse, the dithering is performed only on the trailing edge of the PWM signal.
Some examples described herein may attempt to address the technical problem of a darkening effect that can occur for PWM modes that utilize modulation that dithers the leading edge of a PWM signal. By instead dithering only the trailing edge of a center aligned PWM signal, this darkening effect may be reduced in some examples. Techniques described herein can be applied to various display types including, but not limited to, liquid crystal on silicon (LCoS) displays used in augmented reality displays, mixed reality displays, virtual reality displays, heads up displays, and projectors.
In color sequential modes, drive signals can include various voltage changes: the voltage levels for the active and inactive states can be unique per color, the voltage level can change somewhat during a pulse or between pulses, and the polarity of the voltage can toggle during the pulse. However, examples described herein simplify the drive by limiting the drive signal state to either the active state or inactive state.
The terms “dither” or “dithering” as used herein refer generally to spatial dithering, and more specifically to ordered dithering in which small groups or arrays of pixels or sub-pixels (e.g., a 4×4 matrix of pixels) of a display are controlled through applied PWM signals to provide varying shades of color to simulate an overall desired color. In such an ordered dithering approach, none of the pixels in the matrix actually displays the desired color, but the human eye averages the colors of the pixels in the matrix and perceives the desired color. The color of each pixel in the array is determined through a threshold map or matrix; a commonly used threshold matrix is the Bayer matrix. The Bayer matrix is applied to the original color values of the array of pixels, thereby setting a new value for the color of each pixel in the array based on a distance or magnitude between the original color of the pixel and the closest color in the reduced palette of colors available for the display being controlled, as will be described in more detail below.
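As an illustration of the ordered dithering just described, the following sketch (illustrative Python, not part of the disclosed hardware; all names are hypothetical) thresholds an 8-bit grayscale patch against a tiled 4×4 Bayer matrix to produce binary on/off pixel states whose local average approximates the original level:

```python
# Standard 4x4 Bayer ordered-dither threshold pattern.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(image):
    """Threshold each 8-bit pixel against the tiled 4x4 Bayer matrix.

    Returns a binary (0/1) image; the eye averages the on/off pixels in
    each 4x4 region and perceives the intermediate grayscale level.
    """
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, value in enumerate(row):
            # Center each threshold within its 1/16 step.
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
            out_row.append(1 if value / 255.0 > threshold else 0)
        out.append(out_row)
    return out

# A uniform mid-gray patch dithers to half the pixels ON.
patch = [[128] * 8 for _ in range(8)]
binary = ordered_dither(patch)
on_fraction = sum(map(sum, binary)) / 64.0
```

A uniform input of 128 (just above mid-gray) exceeds exactly 8 of the 16 centered thresholds, so half the pixels turn on.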
As described above, electronic displays such as LC displays (including LCOS displays) may be driven by applying a PWM signal to each pixel of the display. In such a display, the voltage pulses forming the PWM signal control light emitted from the corresponding pixel of the display. Each pulse is defined by a level (e.g., a voltage level) and a duration (also referred to as a width). Each PWM signal is applied across a pixel electrode and a common electrode of the corresponding pixel to thereby control the optical properties of an LC material between the pixel and common electrodes. The common electrode is common to all the pixels of the display, whereas each pixel has a separate pixel electrode. In the present description, these electrodes specific to each pixel may be referred to collectively as “pixel electrodes” or simply as “electrodes” of the pixel.
Modulating a duration of the pulses of the PWM signal applied across the pixel electrodes of each pixel changes an effective value over time of the voltage applied across the LC material between these electrodes. Through such PWM signals applied to pixels of an LCoS display, grayscale image generation is achieved through the pixels of the display. This operation of an LCoS display through the application of a PWM signal to control an intensity from each pixel is a form of digital operation of the LCOS display, because the PWM signal is a digital signal having one of two voltage levels. Grayscale operation of the display, meaning varying an intensity of light from each pixel of the LCOS display between a maximum value and a minimum value, is achieved through application of PWM signals to the pixels. The PWM signal applied to each pixel corresponds to writing a fast series of 1's and 0's to each pixel, which causes the LC material of the pixel to be alternately driven between a fully ON and a fully OFF state. The fully OFF state of the liquid crystal may be referred to as an “extinction” state, meaning that the LC material returns to its untwisted or relaxed state when no voltage is applied across the material. As the LC material transitions between the fully ON and fully OFF states, a fraction of the light is permitted to pass in a non-linear correlation to the percentage of change between the two states. These changes in state of the LC material of the pixel between fully ON and OFF happen much faster than the human eye can detect and, as a result, the eye responds to the average of the varying intensity profile created by these ON and OFF drive states as the equivalent grayscale intensity for the pixel. In the following description, changes to the state of the LC material of a pixel may simply be referred to as changes to the state of the pixel.
In applying the PWM signal to drive each pixel, drive circuitry for the display applies a periodic digital voltage signal to the pixel electrode of the pixel, with an active period or duration of the periodic digital voltage signal being a function of the desired grayscale level or intensity for the pixel. Further, the drive circuitry of the LCOS display alternately supplies one of two values or levels of a reference voltage to the common electrode of the pixels. In some color sequential displays, there is a unique pair of reference voltages for each color provided by the pixels of the display. The reference voltage provided by the drive circuitry on the common electrode of each pixel is typically referred to as a common voltage signal Vcom. The LC material responds to the absolute value of the magnitude of the voltage differential between the pixel electrode and the common electrode, in either the positive or negative polarity. A small delta voltage is the inactive state and a large delta voltage is the active state. Typically, during operation of the LCOS display changes to the common voltage signal Vcom are synchronous with changes in the PWM signal supplied to the pixel electrode of each pixel, enabling pixels to maintain the same state with opposite delta voltage polarity, and may occur multiple times during a modulation profile of the pixel. The modulation profile of a pixel will be described in more detail below. Typically, the drive circuitry drives or toggles both the PWM signal and the common voltage signal Vcom to an opposite or complementary state or level when no change in the state of the pixel is desired. This toggling of the PWM and Vcom signals causes an inversion event of the LC material of the target pixel and has no effect on the light modulation behavior of the LC material of the pixel. 
These inversion operations, which are typically referred to as “pixel inversion,” maintain a time-average zero bias across the LC material of each of the pixels, which is necessary to ensure proper long term operation of the LCOS display, as will be understood by those skilled in the art. In the following description and diagrams, the modulation profile of the PWM signal on the pixel electrode of a pixel will ignore these inversion events, which can be applied as an independent overlay on the modulation profile, in order to simplify representation of the modulation profile.
Left aligned asymmetric mode, right aligned asymmetric mode, and center aligned symmetric mode are three PWM modes of operation. PWM is a form of signal modulation where, in this context, the durations or widths of pulses of the PWM signal correspond to specific grayscale values that determine a brightness or intensity of a pixel. In left aligned asymmetric PWM mode, the leading edge of a voltage pulse is fixed, and the trailing edge is modulated or changed in accordance with the grayscale value of the pixel. In right aligned asymmetric PWM mode, the trailing edge of a voltage pulse is fixed, and the leading edge is modulated or changed in accordance with the grayscale value of the pixel. In center aligned symmetric PWM mode, both the leading and trailing edges of a voltage pulse are modulated or changed in accordance with the grayscale value of the pixel.
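The three alignment modes described above can be sketched as follows (illustrative Python; the window size, integer time-slot granularity, and function name are assumptions for illustration, not part of the disclosure):

```python
def pulse_edges(width, window=16, mode="center"):
    """Return (leading_edge, trailing_edge) slot positions for a pulse.

    In left aligned mode the leading edge is fixed and the trailing edge
    is modulated; in right aligned mode the trailing edge is fixed and
    the leading edge is modulated; in center aligned mode both edges are
    modulated symmetrically about the middle of the window.
    """
    if mode == "left":
        return 0, width
    if mode == "right":
        return window - width, window
    if mode == "center":
        lead = (window - width) // 2
        return lead, lead + width
    raise ValueError(mode)
```

For a width-6 pulse in a 16-slot window, the three modes place the same pulse at the start, the end, or the center of the window.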
The number of modulation steps that can be supported is dependent on application specific constraints of resolution, frame rate, clock speeds, bus widths, etc. The following examples use a limit of 3-bit encoding for the base grayscale value, supporting 7 or 8 non-zero grayscale values with 8 or 9 modulation events, to simplify the diagrams. Practical systems will implement higher numbers of modulations using more bits for encoding.
In many display systems, hardware limitations, as stated earlier, constrain the number of modulation changing events in a given portion of the frame time, such as over the illumination window 102. A modulation changing event is a change in the characteristics of the PWM signal that results in a change in perceived illumination: for example, a difference in pulse width. In some cases, the full number of desired grayscale levels in the data cannot each be matched with a corresponding modulation event. For example, 8 bits of data per color would provide 255 grayscale levels, but only 31 modulation segments may be available in the given portion of the frame time (e.g., during the illumination window 102). In many systems the display response to each PWM signal pulse is non-linear, and the desired gamma representation of the input data is non-linear; in such systems, the target spacing between modulation events is not linear. For example, the hardware constraints may allow 32 evenly spaced modulation events, but only 16 modulation events spaced non-linearly to match the target gamma behavior. As a result, a subset of the total grayscale levels is selected as the native levels that correspond to achievable modulation pulse widths for pulses of the PWM signal. For example, 31 achievable unique widths or native levels for pulses of the PWM signal would represent a 5-bit native profile for the PWM signal. The remaining grayscale levels for the data are achieved using dithering between the native levels. In some examples, this dithering can be spatial dithering, in which adjacent pixels are driven at selected native levels and combine to approximate the target grayscale level (e.g., ordered matrix dithering). In some examples, temporal dithering may be utilized, in which sequential fields or frames of the native levels average to the target grayscale level. A combination of spatial and temporal dithering may also be used in some examples.
In some instances, the fringe-field effects of spatial dithering methods that involve modulating a leading edge of a PWM signal are associated with a darkening effect that persists through a whole color sub-frame. This darkening effect may last for more than one frame if the LC material does not go all the way to extinction between sub-frames.
Due to these fringe-field effects, conventional dithering approaches have not used dithering schemes that include right aligned modulations or center aligned modulations, because both such approaches may include leading edge modulation, which causes the darkening effect. As a result, prior dithering approaches have only used trailing edge modulation for driving amplitude mode LCOS displays.
However, center aligned PWM modulation—which typically includes leading edge and trailing edge modulation, as in the even grayscale values of
Examples of PWM techniques are described herein with reference to a system in which a PWM voltage signal is applied to one or more pixels of a display and drives the illumination output of each pixel. In some examples, a pixel includes an individual reflective element (e.g., a metallic mirrored element), a common electrode (which, for example, is shared with other pixels), and a liquid crystal material that is positioned therebetween.
A digital pulse width modulation scheme is utilized in a display system, in accordance with examples described herein, as the driving sequence for representing different gray levels. A center aligned PWM method is utilized to drive pixels of the display. The center aligned modulation technique involves changing or modulating both the leading and trailing edges of a pulse.
In some examples, the center aligned PWM technique employs binary drive sequences that use n bits (referred to as select bits) to select the base modulation pulse width and 1 additional bit (referred to as the dither bit) to select the next higher pulse width resulting from the dither matrix operations on the lower order grayscale bits representing the grayscale levels between the native levels. These n+1 bit sequences are used to modulate each pixel of the display: e.g., a four (4) bit base modulation (also referred to as a native modulation) has 15 or 16 active modulation segments corresponding to 4-bit codes in which the select bits are not all zero. The dither bit is provided separate from the select bits, such that the modulation system can treat the two conditions differently: i.e., the case of select=x (wherein, e.g., 0 < x < 2^4) with the dither bit asserted (e.g., dither bit=1) can be different from the case of select=x+1 with the dither bit not asserted (e.g., dither bit=0).
The trailing dither methodology depends on this separation of the dither bit from the base select bits, instead of just adding the dither bit to the base select value (limiting the maximum value to not exceed 2^n − 1) to get a resulting n-bit modulation select code.
In some examples, the value of n denoting the number of select bits can range from 3 to 8 bits, and can vary between colors. The n select bits with the 1 dither select bit added to them represent a binary encoding of a grayscale level. The encoded grayscale level corresponds to a desired PWM pulse width for the modulation utilized to drive pixels of the display by the display drive circuitry, such that the pixels output the corresponding grayscale illumination intensity.
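The separate encoding of select bits and dither bit can be sketched as follows (illustrative Python; the bit widths, the fixed threshold standing in for a per-pixel dither matrix value, and the function name are all assumptions for illustration):

```python
def encode_pixel(target, n=4, low_bits=4, threshold=0.5):
    """Split a target grayscale value into (select, dither).

    `target` carries n high-order select bits (the base native level)
    plus `low_bits` low-order bits encoding the level between adjacent
    native levels. The dither bit is kept separate from the select bits
    rather than being added into an n-bit code; `threshold` stands in
    for the per-pixel dither matrix value.
    """
    select = target >> low_bits                       # base native level
    fraction = (target & ((1 << low_bits) - 1)) / (1 << low_bits)
    dither = 1 if fraction > threshold else 0         # separate dither bit
    return select, dither
```

For example, a target of 0b01011010 (select field 5, fractional part 10/16) yields select=5 with the dither bit asserted at a 0.5 threshold, while a fractional part of 2/16 leaves the dither bit clear at the same select value.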
In some examples, when the select bits are at the maximum binary value (e.g., “1111”), then the dither bit is ignored, resulting in saturation for grayscale values above the maximum native grayscale. In some examples, an extra modulation segment provides an additional available pulse width (e.g. for n=4, a 16th available pulse width), selected by the maximum select value and the dither bit asserted, which eliminates the saturation problem when the dither bit is added to the maximum native grayscale value. It will be appreciated by one of ordinary skill in the art that the number of select bits utilized to modulate the pixels of the display may vary according to the native modulation capability of the display or modulation sequence.
It is also possible to implement modulation sets whose size does not correspond to a power of 2 matching n select bits (a.k.a. fractional bit depth). For example, the hardware may limit the number of modulation events to 24, which is halfway between 4 and 5 select bits. If the source image data is 8 bits per color, it can be attenuated by a scale of 184/255 or 191/255, so that a 5-bit base select code only uses values from 0 to 23. In this case, the dither bit would be a sixth select bit representing the result of the dither matrix operation on the remaining 3 bits of the scaled 8-bit value.
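The fractional-bit-depth case above (24 modulation events, 191/255 attenuation) can be sketched as follows (illustrative Python; the fixed threshold standing in for the dither matrix result and the function name are assumptions):

```python
def scale_to_fractional_depth(value8, scale_num=191, threshold=0.5):
    """Map an 8-bit source value into (select in 0..23, dither bit).

    The 8-bit value is attenuated to 0..191 so that the 5-bit base
    select code only uses values 0..23; the remaining 3 low-order bits
    of the scaled value drive the dither bit decision.
    """
    scaled = (value8 * scale_num) // 255    # attenuate to 0..191
    select = scaled >> 3                    # 5-bit base select, 0..23
    fraction = (scaled & 0x7) / 8.0         # low 3 bits feed the dither
    dither = 1 if fraction > threshold else 0
    return select, dither
```

Full-scale input (255) lands on select 23 with the dither bit asserted, and the base select never exceeds 23 for any 8-bit input.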
Various examples described herein for center aligned PWM with trailing edge dithering are independent and distinct from spatial dithering, and may be compatible with one or more forms of spatial dithering, such as threshold matrix dithering or error diffusion dithering. Temporal dithering—in which different dither levels are applied in successive frames or color sub-frames—may be overlaid or combined with spatial dithering in a display system, and the dither bit supplied to the modulation system in examples described herein would be the resulting bit from the combined dither methods.
It will be appreciated that references to a bit, such as the dither bit, having a “value” may refer to either of two binary values of the bit. References to a bit being “asserted” or “set” may refer to a designated value of the bit that has a specific effect within the encoding/decoding scheme; the choice of designated or asserted value in defining the scheme is arbitrary and may be either 1 or 0, high or low, and so on in various examples. Unless otherwise indicated, examples described herein use a bit value of 1 to indicate an asserted bit, and a value of 0 to indicate a non-asserted bit. References to a bit value being “inverted” relative to a first value (e.g., 0) refer to the other or opposite binary value from the first value (e.g., 1).
Example schemes are described herein for encoding or otherwise generating binary encodings in which the dither bit for each pixel is not added to the base grayscale level for the pixel encoded in the select bits, but is instead kept separate and distinct from the select bits. In other words, there exists at least one grayscale value x encoded by the n select bits such that inverting the value of the dither bit (e.g., from 0 to 1 or from 1 to 0) has an effect different from setting the grayscale value encoded by the select bits to (x−1) or (x+1). This does not preclude the possibility that a scheme according to examples described herein may encode some grayscale values that are incremented or decremented by the inversion of the dither bit value; however, the example schemes described herein contain at least one case in which a grayscale value x is not simply incremented or decremented by the inversion of the dither bit value. This independence or distinctness of the dither bit from the select bits differs from conventional approaches to PWM, in which all bits used to encode a pulse are used to determine the width of the pulse. In contrast, some examples described herein may use the dither bit to select other characteristics of the pulse, such as a temporal location of the leading edge of the pulse within the illumination window 102. Examples of schemes for encoding and decoding binary encodings are described below with reference to the system of
In some examples, the leading edges (also called rising edges herein, meaning the rise of the LC response not the rise of a voltage) of voltage pulses sent to adjacent pixels are temporally aligned, thereby potentially preventing or mitigating the dark banding artifact visible in an image presented by a display when the leading edge of a voltage pulse sent to one pixel does not align temporally with a leading edge of a voltage pulse sent to an adjacent pixel. In some examples, a pulse having a width determined by a select value (encoded by the n select bits) of x+1, and having its dither bit asserted, may be shifted earlier in time to align its leading edge with other pixels that have the same select value but the opposite dither bit state (i.e., a non-asserted dither value). Similarly, a pulse having a width corresponding to select value x with the dither bit asserted may process the asserted dither bit to extend the width of the pulse to the next available width, thereby keeping the leading edge of the pulse aligned with other pixels at the same select value but without the dither bit asserted. By grouping adjacent pixels together at a shared temporal location for their leading edges, regardless of whether the pixels have identical or off-by-one select values, such example encoding/decoding schemes can prevent or mitigate the aforementioned dark banding artifact caused by many adjacent pixels having non-aligned leading edges.
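The trailing-edge dither rule described above can be sketched as follows (illustrative Python, assuming evenly spaced modulation events for simplicity; real systems space events non-linearly to match gamma, and the window size and function name are assumptions). An asserted dither bit extends only the trailing edge to the next available width, so the leading edge stays aligned with neighboring pixels at the same select value:

```python
def decode_pulse(select, dither, window=16):
    """Return (leading_edge, trailing_edge) for a center aligned pulse.

    The pulse is centered using the select value only; the dither bit
    extends only the trailing edge, leaving the leading edge unchanged.
    """
    if select == 0 and dither == 0:
        return None                        # pixel stays off
    lead = (window - select) // 2          # center by base select value
    trail = lead + select
    if dither:
        trail += 1                         # extend trailing edge only
    return lead, trail
```

Note that a dither-extended select-4 pulse and a plain select-5 pulse have the same width but different leading edge positions, which is exactly the distinction that keeping the dither bit separate makes possible.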
In an image with a slowly transitioning grayscale gradient or shading, there will be a smooth transition from one grayscale level to the next as the image progresses through different levels of dithering between adjacent select values (also called base levels, native base levels, base grayscale levels, or native base grayscale levels). Additionally, when the gradient transitions through one native base level to the next, if the pulse is extended on the trailing edge, the dithering of the pixels in the image will present visually as a smooth transition between the two base levels. However, in this proposed approach to center aligned PWM dithering, when the step from one base level to the next base level of the center aligned modulation requires the leading edge to move earlier in time, the temporal mismatch between the leading edges of adjacent pixels on the borders of two adjacent pixel regions may produce a narrow darkening effect (on the order of 1 pixel wide) at the boundary of these two regions. This fringe-field effect is typically less pronounced with higher native bit depth modulations and/or faster color sub frame (CSF) rates, due to the reduced time between modulation events, which shortens the duration of the modulation segment, thereby limiting the fringe-field effect. However, examples described herein may use contour blending techniques that may diminish this effect.
In contrast to
Finally, as an exception to the pattern, select value 7, dither bit not asserted 428 is identical to select value 7 from
In
Thus,
Similarly,
It will be appreciated that, in some examples, the saturation issue exhibited by the examples of
The display 806 includes a modulation control system 804 and a communication link 808. The modulation control system 804 is configured to receive and decode binary encodings sent to the display 806, and is described below with reference to
The computing system 802 includes a binary encoding system 810 for generating binary encodings for controlling pixels of the display 806, such as the encodings described above with reference to
The communication link 808 is used to transmit the binary encodings to the display 806. The communication link 808 can be implemented by any suitable wired or wireless communication technology, such as a data bus or a high speed digital link (such as USB-C, HDMI, MIPI, etc.).
In some examples, the display 806 is configured to receive binary encodings from the computing system 802, decode the binary encodings to generate pulse signal patterns for individual pixels (e.g., as shown in
In some examples, the computing system 802 and/or display 806, including the modulation control system 804, are each implemented by a machine 1500 as described in
In some examples, the data formatting component 920 may be configured to receive a binary encoding (including n select bits and a dither bit) from the computing system 802 and separate the select bits from the dither bit, as needed, to generate an index to a Look-Up-Table (LUT) for each pixel. Example LUTs are described below with reference to
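The LUT indexing step can be sketched as follows (illustrative Python; the packing layout with the dither bit as the most significant bit, and the function names, are assumptions for illustration, not the disclosed format):

```python
def lut_index(select, dither, n=4):
    """Pack n select bits and the separate dither bit into one LUT index.

    With the dither bit kept distinct, the LUT can hold 2^(n+1) entries,
    so select=x with dither asserted can map to a different pulse than
    select=x+1 with dither clear.
    """
    assert 0 <= select < (1 << n) and dither in (0, 1)
    return (dither << n) | select

def unpack(index, n=4):
    """Recover (select, dither) from a packed LUT index."""
    return index & ((1 << n) - 1), index >> n
```

Packing and unpacking are inverses, so the modulation control logic can recover both fields from a single per-pixel index.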
In some examples, the data formatting component 920 may be configured to receive, from the computing system 802, the binary encoding data (including n select bits and a dither bit) to be loaded in the pixel select memory 910, the definition of the LUT functions to be loaded in the modulation function memory 914, and the command list to be loaded in the sequence control memory 902, for every frame.
The sequence control memory 902 contains a list of operations to be performed during each frame (or each sub-frame, such as a color sub-frame). Each operation has an associated time (also called a temporal location or temporal position within the frame time or the sub-frame time, such as during the illumination window 102) at which the operation is to be performed. In some examples, the sequence control memory 902 includes at least two operations, for example, at least one operation that initiates a modulation event, and at least one operation that initiates a voltage update event. A sequence timer and control 904 processes the list of operations in the sequence control memory 902, such that each operation is initiated at the indicated time in each frame (or sub-frame).
When a modulation event is initiated, a modulation event controller 906 controls the performance of that modulation event. When a voltage update event is initiated, a voltage update controller 908 is activated to control the performance of the voltage update event.
The modulation event controller 906 receives, from the sequence timer and control 904, a modulation index value (or simply a modulation start signal). The modulation event controller 906 increments the select value for the matching modulation index to indicate which LUT function is to be used for that modulation. Next, the modulation event controller 906 obtains or accesses a LUT function corresponding to the modulation index value from a modulation function memory 914 and sends the LUT function to LUT logic of the backplane interface control 912 to be used for controlling all pixels. The modulation event controller 906 also steps through all index values across all of the pixels in the pixel select memory 910 to send their select values to the LUT logic of the backplane interface control 912. The backplane interface control 912 performs both processing of the LUT function and control of the pixels (by generating the backplane data stream 916).
Examples of LUTs used to implement the encoding schemes of
The pulse signal patterns shown in
In some examples, in addition to modulation events, the pulse signal patterns may also be translated into inversion events for inverting the polarity of the LC field. To invert the LC field, both the pixel electrode and the common electrode voltages are inverted, without changing any pulse widths. In some such examples, the modulation event controller 906 may instead be a plane-load controller, and both modulation events and inversion events may be jointly referred to as “plane load events”.
When formulating the desired modulation pulse widths and locations (for centering) in the illumination window 102, the times for each modulation event of the base grayscale levels (for the native bit depth) are first selected to meet the desired gamma according to the response of the LC. These are entered into a modulation function and timing table for the base levels with no dither bit asserted. The corresponding PWM pulses for the base levels with the dither bit asserted are simply added to the table by extending each native pulse to the next modulation event in time. As described above, the pulse width of the dither extended select value x pulse and the non-extended pulse of the select value x+1 pulse should be the same width; when these pulses are not coincident in time, the corresponding leading and trailing segments are “paired”. This puts additional constraints on the modulation event timing, and times must be adjusted to match segment durations for segments that are thus paired. If there is a significant difference in these paired segment durations, it may result in a non-monotonic grayscale response, which may be undesirable. There are multiple options for placement of the centered pulses, with pros and cons: centering a low grayscale pulse more perfectly increases the number of paired segments, while reducing the number of paired segments leaves the pulses less well centered. These considerations are taken into account when selecting between an encoding/decoding scheme with a lower number of segments (e.g., first example modulation scheme 400 or second example modulation scheme 500) and an encoding/decoding scheme with a larger number of segments but less centered pulses (e.g., third example modulation scheme 600 or fourth example modulation scheme 700).
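The table-building step above can be sketched as follows (illustrative Python; the example edge positions and event times are hypothetical, not taken from the disclosed schemes). Each dither-asserted entry simply extends the native pulse's trailing edge to the next modulation event in time:

```python
def build_timing_table(base_edges, event_times):
    """Build {(select, dither): (lead, trail)} from base pulse edges.

    base_edges: {select: (lead, trail)} for the native (dither=0) pulses.
    event_times: sorted list of available modulation event times.
    The dither=1 entry keeps the leading edge and extends the trailing
    edge to the next available modulation event.
    """
    table = {}
    for select, (lead, trail) in base_edges.items():
        table[(select, 0)] = (lead, trail)
        later = [t for t in event_times if t > trail]
        table[(select, 1)] = (lead, later[0] if later else trail)
    return table

# Hypothetical example: two base pulses and six event times.
example = build_timing_table({1: (7, 9), 2: (6, 10)}, [5, 6, 7, 9, 10, 11])
```

In the example, the dither-extended select-1 pulse ends at the next event time (10), matching the width of the base select-2 pulse while keeping its own leading edge, which is the pairing constraint described above.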
Although the example method 1100 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the method 1100. In other examples, different components of an example device or system that implements the method 1100 may perform functions at substantially the same time or in a specific sequence.
At operation 1102, a desired grayscale value for a pixel is obtained by the binary encoding system 810. At operation 1104, the desired grayscale value is processed by the binary encoding system 810 to generate the binary encoding, comprising a plurality of select bits encoding a select value and a dither bit encoding a dither value. At operation 1106, the binary encoding is provided to a display for pulse width modulation of the pixel. At operation 1108, the display (e.g., the modulation control system 804 of the display 806) receives the binary encoding and decodes the binary encoding to generate a pulse having a width and a leading edge at a temporal position according to the scheme. At operation 1110, the display (e.g. the modulation control system 804 of the display 806) controls the pixel using the pulse.
To further reduce the residual artifacts at the boundary between some grayscale values from rising edge fringe field differences, a contour blending (CB) technique can be added to some examples described above. A scale factor may be used to multiply grayscale values when processing image data, thereby scaling the grayscale values according to the scale factor. Ordinarily, the scale factor is a fixed value used for all pixels (referred to herein as a Normal Scale Factor). However, contour blending is achieved by making slight changes to the scale factor over multiple frames, which results in moving the residual artifact (or any periodic artifact in gradients) to different locations in the image, thus spreading the effect and dimming its appearance in any one location. By changing the scale factor, the grayscale values corresponding to a pulse width and leading edge position encoded in the select bits and dither bit of the binary encoding may be scaled by a different amount in different frames. The contour blending approach changes the scale factor for each frame, resulting in a slightly altered grayscale value at each pixel, with the exact result depending on the rounding method used when determining the scaled grayscale value. The number of frames for spreading or dimming is adjustable for different examples (e.g., over 2, 3, 4 or 5 frames, etc.). With a relatively high frame rate, more spreading may be tolerated without introducing a flickering effect. For example, at a 120 Hz frame rate, 4-frame contour blending may provide 25% blending without flicker.
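The core of contour blending, scaling the same pixel value by a slightly different scale factor in each frame of the sequence, can be sketched as follows. The normal scale factor of 255 and the per-frame offsets are assumptions for illustration.

```python
# Minimal contour blending sketch: the same floating-point pixel value is
# scaled by a slightly different scale factor in each frame of the blending
# sequence.  The normal scale factor and offsets are illustrative assumptions.

NORMAL_SCALE = 255

def blended_values(pixel_float, offsets=(1, 0, -1, -2)):
    """Return the scaled grayscale value of one pixel for each frame in the
    blending sequence, truncating when converting back to integer."""
    return [int(pixel_float * (NORMAL_SCALE + d)) for d in offsets]
```

For a mid-gray pixel value of 0.5, the four frames display slightly different grayscale values, which moves any periodic gradient artifact to different image locations frame by frame.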
In some examples, the contour blending may occur over multiple sub-frames instead of multiple frames. (For the purposes of this disclosure, the term “frame” refers to a video frame, i.e., a pixel image formed on a display within an interval of time, and may also refer to a sub-frame unless otherwise specified.)
The amount of adjustment to the scale factor is also adjustable, but typically only the smallest adjustable increment of grayscale value (also called a least significant bit (LSB) of grayscale value) needs to be adjusted per step. For example, if 3 frames of blending are used, the adjustments to the scale would be (+1, 0, −1) LSB for the three frames, resulting in a near-zero net change to each displayed pixel. Similarly, for 4 frames of blending, adjustments of +1, 0, −1, −2 LSB could be used, resulting in a net −0.5 LSB average change in grayscale value over the four frames. Special cases for the maximum or minimum grayscale values may be implemented in some examples to keep maximum values at the maximum and minimum values at the minimum, without wrapping in the number space (as described below). To minimize flicker, complementary opposite adjustments should be paired together in adjacent frames (i.e., in a 5-frame blend of +2 to −2, the +2 and −2 should be in adjacent frames and the +1 and −1 should be in adjacent frames). This keeps intensity changes at the ½ frame rate; the averages for each adjacent pair of frames will be zero grayscale adjustment, except for a minor difference due to gamma. However, the artifact being blended will not be symmetric, so some examples may keep the total number of frames to the minimum needed for the desired blending without adding flicker. In addition, blending over a larger number of frames may result in additional lower-rate flicker due to the non-linear gamma impact on average grayscale values, a further reason to use just the minimum number of frames for blending.
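One way to order the adjustments so that complementary values land in adjacent frames is sketched below; `paired_blend_order` is a hypothetical helper for illustration, not a function from the disclosure.

```python
def paired_blend_order(max_offset):
    """Order blend adjustments so each +k is immediately followed by -k,
    keeping the residual intensity changes at half the frame rate.
    E.g., a 5-frame blend of +2 to -2 yields [2, -2, 1, -1, 0]."""
    order = []
    for k in range(max_offset, 0, -1):
        order.extend([k, -k])   # complementary pair in adjacent frames
    order.append(0)
    return order
```

Each adjacent (+k, −k) pair averages to zero grayscale adjustment, apart from the minor difference introduced by gamma, so the sequence as a whole is balanced.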
In a second sub-step of operation 1104, after the scaled grayscale values are determined for each frame in the sequence, each scaled grayscale value is mapped to a binary encoding (e.g., select bit values and a dither bit value) corresponding to the scaled grayscale value. The binary encodings thus generated are then sent to the display, decoded, and used to control the pixel during each of the corresponding frames during operations 1106, 1108, and 1110 of method 1100.
It will be appreciated that, in conventional image processing, pixel data is represented in floating point format during operations and then converted or scaled to an integer representation with a bit-depth matching the capability of the display (or the current mode of the display). A scaling multiplication is typically performed on each color of every pixel as part of that final data formatting. In some examples, the contour blending techniques described herein simply replace the scale factor used in the conventional scaling multiplication operation with an altered scale factor for each frame. In some cases, an input image may already have the pixels in a fixed bit-depth integer representation; in such cases, a scaling division can be applied to convert to floating point format before the data processing alterations are made. For example, when the floating point data range is from 0.0 to 1.0, and the data bit-depth of the display is 8 bits per primary color, the normal scale value would be 255. A four-frame blend could use the scale values 256, 255, 254, 253 over four adjacent frames, thereby achieving an average scale value of 254.5, which is within one unit of the target scale value of 255. In keeping with the balanced pair issue described above, the temporal order of the four scale values assigned to the four frames could be: 255 (i.e., average scale value +0.5), 254 (i.e., average scale value −0.5), 256 (i.e., average scale value +1.5), 253 (i.e., average scale value −1.5). Other possible orders consistent with the techniques described above are (253, 254, 255, 256), (253, 256, 254, 255), (253, 256, 255, 254), etc. However, it may be preferable to avoid orders in which two positive offsets are adjacent to each other (e.g., 255 followed by 256, both above the average), as this may increase the frame-to-frame intensity delta and result in noticeable flicker.
Thus, orders without adjacent positive offsets may be preferable: e.g., (255, 254, 256, 253), or, among the three additional example orders above, (253, 256, 254, 255).
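The flicker heuristic above (avoid two consecutive frames whose scale values both sit above the sequence average) can be checked mechanically. The helper below is an illustrative assumption, not part of the disclosure:

```python
def has_adjacent_positive_offsets(order, target_avg):
    """Return True if two consecutive scale values both sit above the
    sequence average (e.g., 255 followed by 256 around an average of 254.5),
    an arrangement that may produce noticeable flicker."""
    offsets = [v - target_avg for v in order]
    return any(a > 0 and b > 0 for a, b in zip(offsets, offsets[1:]))

# The preferred orders avoid adjacent positive offsets:
assert not has_adjacent_positive_offsets([255, 254, 256, 253], 254.5)
assert not has_adjacent_positive_offsets([253, 256, 254, 255], 254.5)
# (253, 254, 255, 256) places 255 and 256 adjacent, both above the average:
assert has_adjacent_positive_offsets([253, 254, 255, 256], 254.5)
```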
However, the resulting integer generated by the scaling operation depends on the rounding method used when converting from an integer to a floating point value and back to an integer value (source imagery often starts as 8 bits per color). In some examples, the truncation method of rounding is used; scale adjustments below the normal scale factor (e.g., −1 or −2) result in the expected/desired change to the data, but adjustments above the normal scale factor get lost in the rounding. To compensate, the scale adjustments may be applied in floating point format, and positive adjustments to the scale factor may include an additional component slightly below 1.0 to bias the rounding so as to make the desired change to the data (e.g., instead of +1, use +1.995). The contour blending rounding table below provides examples of adjustments that can be used for biasing the rounding operation in some examples.
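The truncation issue and the 1.995 bias can be illustrated numerically; the sketch below assumes truncation-based rounding as described, with an 8-bit source value of 200 chosen as an arbitrary example.

```python
# Demonstration of truncation rounding losing a +1 scale adjustment, and the
# +1.995 bias recovering it.  The source value 200 is an arbitrary example.

NORMAL_SCALE = 255

def scaled_data(source_int, scale_adjust):
    """Convert an 8-bit source value to float, rescale with an adjusted
    scale factor, and truncate back to an integer."""
    source_float = source_int / NORMAL_SCALE
    return int(source_float * (NORMAL_SCALE + scale_adjust))  # truncates

# A +1 adjustment is lost to truncation (200/255 * 256 = 200.78 -> 200),
# but +1.995 biases the result upward as intended:
assert scaled_data(200, 1) == 200       # increment lost in rounding
assert scaled_data(200, 1.995) == 201   # increment achieved
assert scaled_data(200, -1) == 199      # negative adjustments work directly
```

This is why, in the rounding table, the “1” Scale Adjustment column duplicates the “0” column while the “1.995” column yields the incremented values.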
In the contour blending rounding table, the Source Integer indicates the input value of the integer (e.g., a desired grayscale value) between 0 and 255; the Source Float indicates the corresponding floating point value between 0.0 and 1.0; the various Scale Adjustment values indicate adjustments to the select value in each frame of a multi-frame contour blending frame sequence; and the Scaled Data values indicate the output of the scaling operation performed on the Source Integer as modified by the Scale Adjustment after truncation-based rounding. In some examples, only four of the Scale Adjustment columns are used: for example, the “1.995” Scale Adjustment column may be used in place of the “1” Scale Adjustment column: due to the operation of rounding via truncation used in some systems, a value higher than +1 and approaching +2 may be used to ensure that the value is incremented instead of truncated downward to duplicate the output of the “0” Scale Adjustment column (as may be seen by the equal values of the Scaled Data in the “0” and “1” Scale Adjustment columns). It will be appreciated that some examples may use a value other than 1.995, such as a value higher than 1 but lower than 2; however, 1.995 is provided as an example because it approximates the level of precision needed for the scaled data (i.e., 0.995 is approximately 254/255).
In some examples, when processing grayscale values near the maximum grayscale value, positive scaling results in saturation because the grayscale value cannot go above the maximum grayscale value; thus, the negative adjusted scaling would be one-sided, without compensation from corresponding positive scaling. This may result in flicker and/or an undesired attenuation in maximum illumination. To address this issue, some examples may detect when the input (e.g., the Source Integer or desired grayscale value) is at or near the maximum (e.g., when the desired grayscale value is within the largest positive adjustment of the maximum grayscale value, for desired adjustments of more than +1). If such a case is detected for some set of pixels, contour blending may not be performed for these pixels in some such examples. It will be appreciated that the visual artifacts that contour blending is intended to address usually do not affect pixels at or near the maximum grayscale value, so contour blending may not be needed for those pixels in any event.
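Detecting and suppressing blending near the maximum might be sketched as follows; the threshold logic and offset set are assumptions for illustration.

```python
# Hypothetical sketch: suppress contour blending for pixels whose value is
# close enough to the maximum that a positive adjustment would saturate.

MAX_GRAY = 255

def blend_offsets_for(gray, offsets=(1, 0, -1, -2)):
    """Return the per-frame blend offsets for a pixel, or all zeros (no
    blending) when a positive adjustment would push the value past the
    maximum, avoiding one-sided (flickering or dimming) adjustments."""
    if gray + max(offsets) > MAX_GRAY:
        return tuple(0 for _ in offsets)   # no blending at or near maximum
    return offsets
```

Mid-range pixels keep the full blending sequence, while saturated pixels, where the artifact rarely appears anyway, are left unblended.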
Such selective omission of contour blending for pixels having a desired grayscale value near the maximum grayscale value, in combination with trailing dither techniques described above, may exhibit further beneficial effects in some examples. The supplemental modulation segment added at the end of the illumination window 102 (e.g., segment #8 608 or 706 in
The machine 1500 may include processors 1504, memory 1506, and input/output (I/O) components 1508, which may be configured to communicate with each other via a bus 1510. In an example, the processors 1504 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1512 and a processor 1514 that execute the instructions 1502. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although
The memory 1506 includes a main memory 1516, a static memory 1518, and a storage unit 1520, all accessible to the processors 1504 via the bus 1510. The main memory 1516, the static memory 1518, and the storage unit 1520 store the instructions 1502 embodying any one or more of the methodologies or functions described herein. The instructions 1502 may also reside, completely or partially, within the main memory 1516, within the static memory 1518, within machine-readable medium 1522 within the storage unit 1520, within at least one of the processors 1504 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1500.
The I/O components 1508 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 1508 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1508 may include many other components that are not shown in
Communication may be implemented using a wide variety of technologies. The I/O components 1508 further include communication components 1528 operable to couple the machine 1500 to a network 1530 or devices 1532 via respective couplings or connections. For example, the communication components 1528 may include a network interface component or another suitable device to interface with the network 1530. In further examples, the communication components 1528 may include wired communication components, wireless communication components, cellular communication components, satellite communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, Zigbee, Ant+, and other communication components to provide communication via other modalities. The devices 1532 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
The various memories (e.g., main memory 1516, static memory 1518, and memory of the processors 1504) and storage unit 1520 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 1502), when executed by processors 1504, cause various operations to implement the disclosed examples.
The instructions 1502 may be transmitted or received over the network 1530, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components 1528) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 1502 may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices 1532.
As described above, examples described herein may address one or more technical problems associated with pulse width modulation of pixels. By providing a dither bit independent from the grayscale value encoded by the select bits of a binary encoding, the trailing edges of pulses may be dithered to maintain leading edges in synchrony across adjacent pixels, thereby potentially diminishing visual artifacts created by fringe-field effects. In addition, contour blending may be used in some examples to further reduce the residual artifacts from leading edge fringe-field differences while reducing the likelihood and/or degree of flickering. In some examples, full display illumination may be maintained by using a trailing edge dithering scheme that adds an additional possible time segment to pulses encoded within an illumination window in combination with contour blending at or near maximum grayscale values.
Example 1 is a method for controlling a pixel of a display using pulse width modulation (PWM), comprising: obtaining a desired grayscale value for the pixel; processing the desired grayscale value to generate a binary encoding, comprising: a plurality of select bits encoding a select value; and a dither bit encoding a dither value, the binary encoding being encoded according to a scheme in which: a first select value and a first dither value indicate a center-aligned pulse of a first width having a leading edge at a first temporal position; a second select value and the first dither value indicate a center-aligned pulse of a second width having a leading edge at a second temporal position; and the first select value and a second dither value indicate a center-aligned pulse of the second width having a leading edge at the first temporal position; and providing the binary encoding to the display for pulse width modulation of the pixel.
In Example 2, the subject matter of Example 1 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode 2^n different pulse widths using the n select bits.
In Example 3, the subject matter of Examples 1-2 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode (2^n−1) different pulse widths using the n select bits.
In Example 4, the subject matter of Examples 1-3 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode fewer than (2^n−1) different pulse widths using the n select bits.
In Example 5, the subject matter of Examples 1-4 includes, wherein: during a current illumination window, a second pixel adjacent to the pixel is modulated by a second binary encoding encoding a pulse having a leading edge at a second pixel pulse temporal position; and the select value and the dither value of the binary encoding indicate a center-aligned pulse having a leading edge at the second pixel pulse temporal position.
In Example 6, the subject matter of Example 5 includes, wherein: the dither value of the binary encoding indicates the second pixel pulse temporal position.
In Example 7, the subject matter of Examples 1-6 includes, decoding, at the display, the binary encoding to generate a pulse having: a width; and a leading edge at a temporal position according to the scheme; and controlling the pixel using the pulse.
In Example 8, the subject matter of Examples 1-7 includes, wherein: the desired grayscale value is intended to apply to the pixel over a plurality of frames; and the scheme applies contour blending to the pixel by: generating a plurality of scaled grayscale values corresponding to the plurality of frames, one or more of the scaled grayscale values being adjusted from the desired grayscale value by a respective scale adjustment such that an average of the plurality of scaled grayscale values is within one unit of grayscale value of the desired grayscale value; and generating a plurality of binary encodings corresponding to the plurality of scaled grayscale values.
In Example 9, the subject matter of Example 8 includes, wherein: the plurality of scaled grayscale values includes one or more pairs of scaled grayscale values in which: one scaled grayscale value of the pair is above the desired grayscale value by a first amount; the other scaled grayscale value of the pair is below the desired grayscale value by the first amount; and each pair of the one or more pairs corresponds to a pair of adjacent frames of the plurality of frames.
In Example 10, the subject matter of Examples 8-9 includes, wherein: the plurality of scaled grayscale values, ordered by scaled grayscale value, each differ from the adjacent scaled grayscale values by at most one unit of grayscale value.
Example 11 is a system comprising: a display; and one or more processors configured to execute instructions that configure the system to control a pixel of the display using pulse width modulation (PWM) by performing operations comprising: obtaining a desired grayscale value for the pixel; processing the desired grayscale value to generate a binary encoding, comprising: a plurality of select bits encoding a select value; and a dither bit encoding a dither value, the binary encoding being encoded according to a scheme in which: a first select value and a first dither value indicate a center-aligned pulse of a first width having a leading edge at a first temporal position; a second select value and the first dither value indicate a center-aligned pulse of a second width having a leading edge at a second temporal position; and the first select value and a second dither value indicate a center-aligned pulse of the second width having a leading edge at the first temporal position; and providing the binary encoding to the display for pulse width modulation of the pixel.
In Example 12, the subject matter of Example 11 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode 2^n different pulse widths using the n select bits.
In Example 13, the subject matter of Examples 11-12 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode (2^n−1) different pulse widths using the n select bits.
In Example 14, the subject matter of Examples 11-13 includes, wherein: the plurality of select bits consists of n select bits; and the scheme is configured to encode fewer than (2^n−1) different pulse widths using the n select bits.
In Example 15, the subject matter of Examples 11-14 includes, wherein: during a current illumination window, a second pixel adjacent to the pixel is modulated by a second binary encoding encoding a pulse having a leading edge at a second pixel pulse temporal position; and the select value and the dither value of the binary encoding indicate a center-aligned pulse having a leading edge at the second pixel pulse temporal position.
In Example 16, the subject matter of Example 15 includes, wherein: the dither value of the binary encoding indicates the second pixel pulse temporal position.
In Example 17, the subject matter of Examples 11-16 includes, wherein the operations further comprise: decoding, at the display, the binary encoding to generate a pulse having: a width; and a leading edge at a temporal position according to the scheme; and controlling the pixel using the pulse.
In Example 18, the subject matter of Examples 11-17 includes, wherein: the desired grayscale value is intended to apply to the pixel over a plurality of frames; and the scheme applies contour blending to the pixel by: generating a plurality of scaled grayscale values corresponding to the plurality of frames, one or more of the scaled grayscale values being adjusted from the desired grayscale value by a respective scale adjustment such that an average of the plurality of scaled grayscale values is within one unit of grayscale value of the desired grayscale value; and generating a plurality of binary encodings corresponding to the plurality of scaled grayscale values.
In Example 19, the subject matter of Example 18 includes, wherein: the plurality of scaled grayscale values includes one or more pairs of scaled grayscale values in which: one scaled grayscale value of the pair is above the desired grayscale value by a first amount; the other scaled grayscale value of the pair is below the desired grayscale value by the first amount; and each pair of the one or more pairs corresponds to a pair of adjacent frames of the plurality of frames.
Example 20 is a non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that, when executed by one or more processors of a system, cause the system to control a pixel of a display using pulse width modulation (PWM) by performing operations comprising: obtaining a desired grayscale value for the pixel; processing the desired grayscale value to generate a binary encoding, comprising: a plurality of select bits encoding a select value; and a dither bit encoding a dither value, the binary encoding being encoded according to a scheme in which: a first select value and a first dither value indicate a center-aligned pulse of a first width having a leading edge at a first temporal position; a second select value and the first dither value indicate a center-aligned pulse of a second width having a leading edge at a second temporal position; and the first select value and a second dither value indicate a center-aligned pulse of the second width having a leading edge at the first temporal position; and providing the binary encoding to the display for pulse width modulation of the pixel.
Example 21 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-20.
Example 22 is an apparatus comprising means to implement any of Examples 1-20.
Example 23 is a system to implement any of Examples 1-20.
Example 24 is a method to implement any of Examples 1-20.
It will be appreciated that the various aspects of the methods described above may be combined in various combinations or sub-combinations.
“Component” refers, for example, to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processors. 
Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. 
In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Programming Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations.
“Computer-readable storage medium” refers, for example, to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure.
“Machine storage medium” refers, for example, to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory
(EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; USB flash drives; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.”
“Non-transitory computer-readable storage medium” refers, for example, to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine.
“Signal medium” refers, for example, to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
This patent application claims the benefit of U.S. Provisional Patent Application No. 63/603,546, filed Nov. 28, 2023, which is incorporated by reference herein in its entirety.