The present disclosure relates generally to display systems and devices and, more specifically, to displaying images having dynamic display areas with arbitrary borders.
Electronic devices often use electronic displays to provide visual representations of information by displaying one or more images. Such electronic devices may include computers, mobile phones, portable media devices, tablets, televisions, virtual-reality headsets, vehicle dashboards, and so forth. To display an image, an electronic display may control light emission from display pixels based on image data, which indicates target characteristics of the image. For example, the image data may indicate target luminance of specific color components, such as a green, a blue, and/or a red component, at various pixels in the image.
The electronic display may enable perception of various colors in the image by blending (e.g., averaging) the color components. For example, blending the green component, the blue component, and the red component at various luminance levels may enable perception of a range of colors from black to white. To facilitate controlling luminance of the color components, each display pixel in the electronic display may include one or more sub-pixels, which each controls luminance of one color component. For example, a display pixel may include a red sub-pixel, a blue sub-pixel, and/or a green sub-pixel. To enhance image quality around the edges of an electronic display—particularly along rounded edges of an electronic display—image processing circuitry may apply a gain value set (e.g., a gain value to be applied to each sub-pixel color type) for each pixel in a particular display region of a frame of the image data so that the pixels illuminate and facilitate displaying the image as desired. The gain value set may prevent or reduce aliasing along the rounded border. Often, the gain value set may be predetermined or known a priori. For example, the gain value set may be determined during manufacturing of the display with the rounded border display region. As such, the gain value set may be static.
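By way of a hypothetical illustration (a minimal Python sketch, not the disclosure's implementation), such a static gain value set could be derived from the fraction of each pixel's area that falls inside a rounded corner, with the fractional coverage stored as the gain for each sub-pixel color type; the function names and the supersampling heuristic are assumptions for illustration only.

```python
import math

def corner_coverage(x, y, cx, cy, radius, samples=4):
    """Approximate the fraction of the unit pixel at (x, y) that lies inside
    a rounded corner of the given radius centered at (cx, cy).
    (Supersampling heuristic; an illustrative assumption only.)"""
    inside = 0
    for i in range(samples):
        for j in range(samples):
            px = x + (i + 0.5) / samples
            py = y + (j + 0.5) / samples
            if math.hypot(px - cx, py - cy) <= radius:
                inside += 1
    return inside / (samples * samples)

def static_corner_gains(width, height, radius):
    """Build a per-pixel gain value set (red, green, blue) for a display area
    whose top-left corner is rounded.  Pixels straddling the rounded border
    receive a fractional gain that anti-aliases the edge; all others keep 1.0."""
    gains = {}
    for y in range(height):
        for x in range(width):
            if x < radius and y < radius:
                g = corner_coverage(x, y, radius, radius, radius)
            else:
                g = 1.0
            gains[(x, y)] = (g, g, g)  # one gain per sub-pixel color type
    return gains
```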
To display images with dynamic display areas having arbitrary borders that differ from image frame to image frame, image processing circuitry may apply a dynamic gain value set to prevent or reduce aliasing along the arbitrary borders of the dynamic display area. Indeed, there may be many use cases where images may have elements with arbitrary borders in relation to other elements. By way of example, some user interface elements may expand, shrink, separate, or move dynamically over a series of image frames. To ensure that the borders of these elements appear crisp and clean, a dynamic gain value set may be applied to regions of image data that include the borders. The dynamic gain value set may be associated with a dynamic gain value map that may change from image frame to image frame based on the position of the borders of the dynamic display area.
In some cases, the dynamic gain value set of the dynamic gain map may be applied in addition to or independent of a static gain value set associated with a static gain map for static arbitrary borders (e.g., fixed borders of an electronic display). These may also be referred to as a primary gain map (e.g., static gain map) and secondary gain map (e.g., dynamic gain map) of gain value sets that are applied for arbitrary border gain (ABG) correction to pixels displaying image data in various display regions. The ABG correction may prevent or reduce image artifacts along a border of an arbitrary shape (e.g., a rounded border, an angled border), such that the gain values applied to the respective pixels provide an anti-aliasing effect along the border. For example, a group of pixels may form a display region displaying at least some image data. In some cases, the display region may encompass a portion of the display with non-rectilinear borders (e.g., may have rounded edges).
The primary gain map (e.g., static gain map) may include gain value sets of gain values to apply to the pixels (e.g., sub-pixels of the pixels) where borders in the image data do not change between frames of image data. By way of example, the primary gain map (e.g., static gain map) may provide a gain value set to adjust the borders of the electronic display. The secondary gain map (e.g., a dynamic gain map) may include gain value sets of gain values to apply to the pixels of changing display regions, where the borders change between frames of image data. The changes may include changes to the width and/or height of the display region, the presence of the display region (e.g., present on a subsequent frame and not on a previous frame), the position of the display region on the display (e.g., along an x-axis direction and/or a y-axis direction), and the like. The gain value sets of the secondary gain map may be dynamic and change with each frame of image data, for example, based on the changes to the dynamic display regions. As will be discussed in detail herein, the systems and methods described herein may facilitate providing crisp edges along the rounded borders of the display regions. In other cases, there may be a single gain map that includes both the static gain value set and dynamic gain value sets. Additionally or alternatively, there may be multiple different dynamic gain maps with gain value sets corresponding to different image elements with different dynamic borders.
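By way of a hypothetical illustration, the minimal Python sketch below shows how a static (primary) and a per-frame dynamic (secondary) gain map could both be applied to a frame; the dictionary representation, the function name, and the simple multiplicative combination are assumptions for illustration, not the disclosure's implementation (a programmed priority between the maps is discussed later).

```python
def apply_gain_maps(pixels, primary_map, secondary_map):
    """Apply a static (primary) gain map and a per-frame dynamic (secondary)
    gain map to a frame.  `pixels` maps (x, y) -> (r, g, b) values; each gain
    map maps (x, y) -> (gain_r, gain_g, gain_b).  Positions absent from a map
    are treated as unity gain (no adjustment)."""
    unity = (1.0, 1.0, 1.0)
    out = {}
    for pos, (r, g, b) in pixels.items():
        pr, pg, pb = primary_map.get(pos, unity)
        sr, sg, sb = secondary_map.get(pos, unity)
        out[pos] = (r * pr * sr, g * pg * sg, b * pb * sb)
    return out
```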
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment”, “an embodiment”, or “some embodiments” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Use of the term “approximately” or “near” should be understood to mean including close to a target (e.g., design, value, amount), such as within a margin of any suitable or contemplatable error (e.g., within 0.1% of a target, within 1% of a target, within 5% of a target, within 10% of a target, within 25% of a target, and so on). As used herein, an “active region” refers to a portion of a frame of image data that undergoes processing. As such, when applying an arbitrary border gain (ABG) to a frame of image data, the portion of the frame that utilizes the arbitrary border gain technique may be the active region. As will be discussed herein, a display region may be included in the active region. The ABG techniques applied to the active region may facilitate displaying image data in the display region without fringing or other image artifacts along borders of elements of the display region.
As previously mentioned, electronic devices may include displays, which present visual representations of information, for example, as images in one or more image frames. To display an image, an electronic display may control light emission from its display pixels based on image data, which indicates target characteristics of the image. For example, the image data may indicate target luminance (e.g., brightness) of specific color components in a portion (e.g., image pixel) of the image, which when integrated by the human eye may result in perception of a range of different colors. Generally, each display pixel in the electronic display may correspond with an image pixel in an image to be displayed. In other words, a display pixel and an image pixel may correspond to a pixel position. To facilitate displaying the image, a display pixel may include one or more sub-pixels, which each controls luminance of one color component at the pixel position. For example, the display pixel may include a red sub-pixel that controls luminance of a red component, a green sub-pixel that controls luminance of a green component, and/or a blue sub-pixel that controls luminance of a blue component.
Moreover, in some instances, display regions that include the sub-pixels may vary in one or more characteristics, such as shape and/or size. For example, a first display region may have an element with four straight borders connected at approximately ninety-degree corners. On the other hand, a second display region may have elements with non-rectilinear borders. For example, the second display region may have four straight borders connected with four rounded (e.g., curved) borders. As previously mentioned, a gain map may include gain value sets for sub-pixels of pixels of image data. By way of example, arbitrary border gain correction may involve gain value sets that are applied at the sub-pixels to reduce or eliminate image artifacts that may otherwise result at the borders of the various shaped display regions.
To compensate for static borders of arbitrary shape, a gain map may be static, such that the same gain value sets applied at the active region are applied at the respective sub-pixels. By way of example, the static gain map may include gain values to correct for image artifacts otherwise occurring at a known or predetermined non-rectilinear border of the display. However, the display may include multiple display regions and the display regions may vary between the frames of image data, such that a static gain value set may not correct for image artifacts at the borders of the varying display regions. Accordingly, the present disclosure provides techniques for improving perceived image quality of an electronic display, for example, by processing image data using dynamic gain value sets based on dynamic display regions for frames of image data.
Turning first to
By way of example, the electronic device 10 may represent a block diagram of the notebook computer depicted in
In the electronic device 10 of
In certain embodiments, the display 18 may be a liquid crystal display (LCD), which may display images generated on the electronic device 10. In some embodiments, the display 18 may include a touch screen, which may facilitate user interaction with a user interface of the electronic device 10. Furthermore, it should be appreciated that, in some embodiments, the display 18 may include one or more light-emitting diode (LED) displays, organic light-emitting diode (OLED) displays, active-matrix organic light-emitting diode (AMOLED) displays, or some combination of these and/or other display technologies. The displays may include display regions that are dynamic or static between displaying frames of image data.
The input structures 22 of the electronic device 10 may enable a user to interact with the electronic device 10 (e.g., pressing a button to increase or decrease a volume level). The I/O interface 24 may enable the electronic device 10 to interface with various other electronic devices, as may the network interface 26. The network interface 26 may include, for example, one or more interfaces for a personal area network (PAN), such as a BLUETOOTH® network, for a local area network (LAN) or wireless local area network (WLAN), such as an 802.11x WI-FI® network, and/or for a wide area network (WAN), such as a 3rd generation (3G) cellular network, universal mobile telecommunication system (UMTS), 4th generation (4G) cellular network, long term evolution (LTE®) cellular network, long term evolution license assisted access (LTE-LAA) cellular network, 5th generation (5G) cellular network, and/or New Radio (NR) cellular network. In some embodiments, the electronic device 10 may communicate over the aforementioned wireless networks (e.g., WI-FI®, WIMAX®, mobile WIMAX®, 4G, LTE®, 5G, and so forth) using the transceiver 30. The transceiver 30 may include circuitry useful in both wirelessly receiving the reception signals at the receiver and wirelessly transmitting the transmission signals from the transmitter (e.g., data signals, wireless data signals, wireless carrier signals, radio frequency signals). As further illustrated, the electronic device 10 may include the power source 28. The power source 28 may include any suitable source of power, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter.
In certain embodiments, the electronic device 10 may take the form of a computer, a portable electronic device, a wearable electronic device, or other type of electronic device. Such computers may be generally portable (such as laptop, notebook, and tablet computers), or generally used in one place (such as desktop computers, workstations, and/or servers). In certain embodiments, the electronic device 10 in the form of a computer may be a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro® available from Apple Inc. of Cupertino, California. By way of example, the electronic device 10, taking the form of a notebook computer 10A, is illustrated in
The input structures 22, in combination with the display 18, may allow a user to control the handheld device 10B. For example, the input structures 22 may activate or deactivate the handheld device 10B, navigate a user interface to a home screen or a user-configurable application screen, and/or activate a voice-recognition feature of the handheld device 10B. Other input structures 22 may provide volume control, or may toggle between vibrate and ring modes. The input structures 22 may also include a microphone that may obtain a user's voice for various voice-related features, and a speaker that may enable audio playback and/or certain phone capabilities. The input structures 22 may also include a headphone input that may provide a connection to external speakers and/or headphones.
Turning to
Similarly,
With the foregoing in mind,
As previously discussed, gain value sets may be applied to pixels along the rounded borders of the display regions 50. That is, active regions may include the display regions 50 for applying arbitrary border gain. As previously mentioned, an active region includes a portion of a frame of image data that undergoes processing, effectively a boundary within the frame. By way of example, processing may include applying gain value sets to a pixel for arbitrary border gain correction. Data for a pixel located outside the active region may be copied from the input to the output (e.g., no additional gain is applied prior to driving the pixel). Generally, the display regions 50 may have dimensions corresponding to approximately the size of the display 18 or portions of the display 18. In some instances, such as for rounded borders, the active region to which the arbitrary border gain is applied may encompass a larger portion of the display than the portion containing the rounded border of the display region 50. For example, the active region may be defined as a rectangle in an x-y coordinate system that encompasses the rounded border of the display region 50, such that the arbitrary border mask may be applied around the rounded borders.
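As a minimal sketch of the active-region behavior described above (illustrative names and data structures only, assuming pixels and gains are keyed by (x, y) position):

```python
def process_active_region(frame, active_rect, gain_map):
    """Copy pixels outside the rectangular active region unchanged from input
    to output, and apply arbitrary border gains only to pixels inside it.
    `active_rect` is (x0, y0, width, height) in frame coordinates."""
    x0, y0, width, height = active_rect
    out = dict(frame)  # pixels outside the active region: input copied to output
    for (x, y), (r, g, b) in frame.items():
        if x0 <= x < x0 + width and y0 <= y < y0 + height:
            gr, gg, gb = gain_map.get((x, y), (1.0, 1.0, 1.0))
            out[(x, y)] = (r * gr, g * gg, b * gb)
    return out
```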
Specifically, an image data source may generate image data corresponding to a rectangular image. A display pipeline could adjust the rectangular image frame of image data for display on the non-rectilinear display region 50, for example, by applying a black mask at pixels outside the display region 50. However, in some instances, applying a black mask may result in perceivable visual artifacts, such as color fringing along the border of the display region 50 and/or aliasing along the rounded border of the display region 50.
As such, gain value sets (e.g., for an arbitrary gain) of a gain map may be applied to the pixels of one or more display regions 50 along the rounded border for arbitrary border gain correction. Generally, and as previously mentioned, the active region includes a portion of a frame of image data that undergoes processing. The pixels located outside of the active region may output image data that is the same or approximately the same as input image data, such that the image data has not gone through processing related to the arbitrary border gain correction.
As will be discussed herein, and in some embodiments, two independently coded maps, such as a primary gain map (e.g., static gain map) and/or a secondary gain map (e.g., dynamic gain map), may provide the gain value sets for a frame. The primary gain map may take any suitable shape in relation to the electronic display 18. For example, when the electronic display 18 includes rounded edges, the primary gain map may include edge gains along the border of the display region 50A and/or along the border of the display 18, where the edge gains are statically configured and may be applied to the entire display region 50A. Thus, the gain value sets applied to respective pixels for the primary gain map are static for each pixel between frames of image data. On the other hand, the secondary gain map may include gain value sets that change between frames of image data to correct for dynamic borders. The gain value sets of the secondary gain map may be dynamically configured and/or reconfigured on a per-frame basis; for example, the map may be enabled or disabled (e.g., such that the display region 50 may appear or disappear between frames), and/or the position and/or the size of the map may change (e.g., corresponding to changing display regions 50 between frames).
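A minimal sketch of such per-frame reconfiguration is shown below; the configuration fields mirror the enable/disable, position, and size behaviors described above, but the class, the field names, and the animation table are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SecondaryMapConfig:
    """Hypothetical per-frame configuration for a dynamic (secondary) gain map."""
    enabled: bool   # the map (and its display region) may appear or disappear
    x_offset: int   # position of the mapped region within the frame
    y_offset: int
    width: int      # size of the mapped region
    height: int

def configure_for_frame(frame_index, animation):
    """Return the secondary-map configuration for a given frame of a
    hypothetical animation in which a region grows over several frames."""
    step = animation.get(frame_index)
    if step is None:
        return SecondaryMapConfig(False, 0, 0, 0, 0)
    return SecondaryMapConfig(True, *step)

# Example: a region that first appears on frame X+1 and widens on later frames.
animation = {1: (40, 0, 80, 24), 2: (30, 0, 100, 22), 3: (20, 0, 120, 20)}
config = configure_for_frame(2, animation)  # enabled, offset (30, 0), 100 x 22
```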
By applying such gain values of the gain value sets, the pixels adjacent the rounded border of a display region 50 may be dimmed or otherwise adjusted (e.g., changed in luminance) to reduce likelihood of producing perceivable aliasing along the rounded border when the image is displayed. In additional or alternative embodiments, the display regions 50 may include rectangular borders. As will be discussed herein, in addition to gain values derived from the gain maps, separate fixed gain values along the rectangular edges of a display region 50 for the primary map and the rectangular edges of the secondary map may be specified through sets of registers with gains that are independent for each sub-pixel color and/or per rectangular edge of the display region 50.
In the depicted embodiment, the first display region 50A may include the largest portion of the display 18, but the first display region 50A may take any other suitable shape. Indeed, in other examples, the first display region 50A may occupy only part of the electronic display and may not overlap with other display regions 50 (e.g., those associated with dynamic gain maps). By way of example, the first display region 50A may have static borders that are fixed (e.g., based on the physical edges of the electronic display 18) and do not change from frame to frame. The other display regions 50 (e.g., 50B, 50C, 50D) may encompass areas of the display 18 different from the first display region 50A, and these areas may or may not overlap with the first display region 50A. By way of example, these other display regions 50 (e.g., 50B, 50C, 50D) may have dynamic borders that change from frame to frame. As such, these changing display regions 50 (e.g., 50B, 50C, 50D) may be referred to as dynamic display regions 50 that may change in size, width, length, position, and so forth. The dynamic display regions 50 may, additionally or alternatively, appear or disappear from one frame to another (e.g., the presence of different display regions 50 may vary depending on the frame). Moreover, the dynamic borders of the dynamic display regions 50 may have shapes that change from frame to frame.
By way of example, animation that may call for precise arbitrary borders within the electronic display 18 may use a dynamic display region 50 that may have borders that grow or shrink. By using a dynamic display region 50 to apply a border gain to the changing borders in the animation, the borders may be precise and clean, avoiding image artifacts (e.g., color fringing) that might otherwise appear.
As will be discussed in more detail with respect to
To illustrate,
As shown, a first frame 55A (Frame X) includes the first display region 50A. As previously mentioned, the first display region 50A may include image data that does not change between frames and as such, application of the primary gain map 52 (indicated by the dashed line box) may provide the gain values to be applied to pixels in the first display region 50A. The primary gain map 52 may take any suitable shape (e.g., may be rectilinear, may be rectangular, may take an arbitrary shape), and may encompass all or part of the image frame to fully enclose the static borders of the electronic display. On the other hand, a second frame 55B (Frame X+1), a third frame 55C (Frame X+2), a fourth frame 55D (Frame X+3), and a fifth frame 55E (Frame X+4) include changing display regions 50, such as the second display region 50B and the third display region 50C, between the successive frames 55. These frames 55B-E also include a portion of the display 18 with image data that remains the same or substantially the same between the successive frames. As such, frames 55B-E also include the first display region 50A in addition to the changing display regions 50B, 50C. The second display region 50B and the third display region 50C change between the frames 55 and, as such, are dynamic. Thus, application of the secondary gain map 54 (indicated by the dotted line box) may provide the gain values to be applied to pixels along the borders of these dynamic display regions 50B, 50C. The secondary gain map 54 may be smaller than or the same size as the primary gain map 52. In some cases, there may be multiple secondary gain maps 54 for different dynamic display regions 50 (e.g., one for 50B, one for 50C, one for 50D; one for 50B and 50C, one for 50D).
As shown, the second display region 50B, disposed at the top portion of the display 18 in the second frame 55B through the fifth frame 55E, may include a region having rounded edges that becomes more rectangular as the frames progress. For example, the second display region 50B first appears on the second frame 55B and then increases in width (e.g., along an x-axis in an x-y coordinate system) and/or decreases in height (e.g., along a y-axis in the x-y coordinate system) from the second frame 55B through the fifth frame 55E, as the frames 55 progress. Similarly, the third display region 50C first appears on the second frame 55B, increases in height along the y-axis, and moves along the display 18 to become more centered on the display 18. As such, the secondary gain map 54 may apply to the dynamic display regions 50B, 50C that have varying display region characteristics as the frames 55 progress, where the characteristics include size (e.g., width and/or height of the display region 50), presence of the display region 50 (e.g., present on a subsequent frame and not on a previous frame), position of the display region 50 on the display 18 (e.g., moving in a negative x-axis direction and/or a positive y-axis direction), and the like. The secondary gain map 54 may update with each frame 55 based at least in part on the changes to the dynamic display regions 50B, 50C. That is, the gain value sets of the secondary gain map 54 may update for each frame 55 to continue providing smooth edges at the borders of the dynamic display regions 50B, 50C. The gain value sets of the secondary gain map 54 may be programmed by processing circuitry of the electronic device (e.g., a GPU, the display pipeline, an application processor) or based on metadata of a frame of image data, at any suitable rate (e.g., on a frame-by-frame basis).
In the depicted embodiment, the display pixels 66 are organized in rows and columns. For example, a first display pixel row includes a first display pixel 66A, a second display pixel 66B, a third display pixel 66C, and so on. Additionally, a third display pixel row includes a fourth display pixel 66D, a fifth display pixel 66E, a sixth display pixel 66F, and so on. As described above, a display pixel 66 may include one or more sub-pixels, which each control luminance of a corresponding color component. In the depicted embodiment, the display pixels 66 include red sub-pixels 68, green sub-pixels 70, and blue sub-pixels 72. Additionally, in the particular depicted embodiment, display pixels 66 fully contained in the non-rectilinear display region each include two sub-pixels, for example, a green sub-pixel 70 and alternatingly a red sub-pixel 68 or a blue sub-pixel 72 (e.g., in a red, green, blue (RGB) display).
Some display pixels 66 along a rounded border 130 may include fewer sub-pixels in the non-rectilinear second display region 50B. In the depicted embodiment, such display pixels 66 may each include one sub-pixel, for example, alternatingly a red sub-pixel 68 or a blue sub-pixel 72. For example, due to the top-left rounded border of the second display region 50B of the display 18, the first display pixel 66A may include only a blue sub-pixel 72, the second display pixel 66B may include only a red sub-pixel 68, the third display pixel 66C may include only a blue sub-pixel 72, and so forth.
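As a minimal sketch of applying a gain value set only to the sub-pixels actually present at a given display pixel (the layout rule below mirrors the example above but is an illustrative assumption, as are the function names):

```python
def present_subpixels(x, y, full_pixel):
    """Return the sub-pixels present at a display pixel in a layout where an
    interior pixel carries a green sub-pixel plus alternately a red or a blue
    sub-pixel, and a pixel clipped by the rounded border carries only the red
    or blue sub-pixel.  (Assumed layout rule, for illustration only.)"""
    red_or_blue = "red" if (x + y) % 2 else "blue"
    return ("green", red_or_blue) if full_pixel else (red_or_blue,)

def gains_for_pixel(x, y, full_pixel, gain_set):
    """Keep only the gain values whose sub-pixels exist at this pixel."""
    gains = dict(zip(("red", "green", "blue"), gain_set))
    present = present_subpixels(x, y, full_pixel)
    return {subpixel: gains[subpixel] for subpixel in present}
```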
In any case, each display pixel 66 may correspond with a pixel position and, thus, an image pixel received from the image data source 38. With regard to the depicted embodiment, each display pixel 66 may correspond with an image pixel
To display an image frame, luminance of each display pixel 66 may be controlled based at least in part on image pixel image data corresponding with an image pixel at its pixel position. However, in some instances, the shape of the image frame may differ from the shape of the display region 50 of the electronic display 18. For example, as previously mentioned, the image frame may be rectangular while the display region 50 for the image frame is non-rectilinear with a rounded border. Moreover, the border gains may change with each frame 55 due to the dynamic nature of the second display region 50B. In such instances, one or more image pixels may correspond to pixel positions outside the display region 50 (e.g., along and/or outside the rounded border). For example, the first image pixel in the rectangular image frame may correspond to a pixel position 73, which is outside the non-rectilinear second display region 50B. In other words, a display pixel 66 may not be implemented in the electronic display 18 at a pixel position corresponding with an image pixel.
Thus, to facilitate displaying an image frame on the second display region 50B with a different shape, the image frame could be adjusted before display, for example, by applying a black mask. However, as described above, display pixels 66 may rely on color blending to enable perception of a range of different colors. In other words, simply disregarding image pixels corresponding to pixel positions outside the display region may, in some instances, result in perceivable aliasing (e.g., a stair-step pattern) at a display pixel 66 along a rounded border 130 since neighboring display pixels 66 that the display pixel 66 would otherwise be blended with are not present. Moreover, perceivable color fringing may occur at a display pixel 66 along a straight border 64 since neighboring display pixels 66 that the display pixel 66 would otherwise be blended with are not present. To improve image quality, as described above, image pixel image data may be processed based on gain values associated with a corresponding pixel position.
The gain values may be provided by the primary gain map 52 and/or the secondary gain map 54 for static or dynamic display regions 50 in frames 55, respectively. Moreover, the primary gain map 52 and/or the secondary gain map 54 may be in an uncompressed format that explicitly associates (e.g., maps) each pixel position to a gain value set. Accordingly, one or more gain values associated with each pixel position and, thus, each sub-pixel position at the pixel positions, may be included in the uncompressed gain maps.
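A minimal sketch of such an uncompressed representation, with one explicitly stored gain value set per pixel position (the container layout and names are illustrative assumptions):

```python
def build_uncompressed_gain_map(width, height, default=(1.0, 1.0, 1.0)):
    """An uncompressed gain map: one (red, green, blue) gain value set stored
    explicitly for every pixel position, indexed as gain_map[y][x]."""
    return [[list(default) for _ in range(width)] for _ in range(height)]

def gain_at(gain_map, x, y):
    """Look up the gain value set for the sub-pixels at pixel position (x, y)."""
    return tuple(gain_map[y][x])
```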
As depicted, the portion 34 of the electronic device 10 may also include an image data source 38, a display driver 40, a controller 42, and external memory 44. In some embodiments, the controller 42 may control operation of the display pipeline 36, the image data source 38, and/or the display driver 40. To facilitate controlling operation, the controller 42 may include a controller processor 51 and controller memory 53. In some embodiments, the controller processor 51 may execute instructions stored in the controller memory 53. Thus, in some embodiments, the controller processor 51 may be integrated with the processor(s) 12, the image processing circuitry, the timing controller in the display 18, and/or be a separate processing module. Additionally, in some embodiments, the controller memory 53 may be included in the local memory 14, the main memory storage 16, the external memory 44, an internal memory 46 of the display pipeline 36, and/or a separate tangible, non-transitory, computer readable medium.
In the depicted embodiment, the display pipeline 36 is communicatively coupled to the image data source 38. In this manner, the display pipeline 36 may receive image data of an image to be displayed on the display 18 from the image data source 38, for example, in a source (e.g., red, green, blue (RGB)) format and/or as a rectangular image. In some embodiments, the image data source 38 may be integrated with the processor(s) 12, the image processing circuitry, or both.
As described above, the display pipeline 36 may process the image data received from the image data source 38. To process the image data, the display pipeline 36 may include one or more applicable image data processing blocks 37. For example, in the depicted embodiment, the image data processing blocks 37 include a sub-pixel layout resampler (SPLR) block 56, which provides display pixel image data (e.g., image data in display format) by filtering (e.g., interpolating or sub-sampling) image pixel image data (e.g., image data in source format). In some embodiments, the image data processing blocks 37 may additionally or alternatively include an ambient adaptive pixel (AAP) block, a dynamic pixel backlight (DPB) block, a white point correction (WPC) block, a sub-pixel layout compensation (SPLC) block, a burn-in compensation (BIC) block, a panel response correction (PRC) block, a dithering block, a sub-pixel uniformity compensation (SPUC) block, a content frame dependent duration (CDFD) block, an ambient light sensing (ALS) block, or the like.
As will be described in more detail below, the display pipeline 36 may process the image data received from the image data source 38 based at least in part on data stored in the external memory 44 and/or the internal memory 46. Moreover, the display pipeline 36 may access the primary gain map 52 and/or the secondary gain map 54 stored in the external memory 44 and/or the internal memory 46. The primary gain map 52 and/or the secondary gain map 54 may be stored in a compressed format (e.g., a compressed version) within respective memories (e.g., random access memories (RAMs)). As will be discussed with respect to
Generally, storing data in the external memory 44 versus the internal memory 46 may present various implementation-associated cost and/or processing efficiency tradeoffs. For example, due to physical sizing constraints, increasing storage capacity of the external memory 44 may be more cost-efficient than increasing storage capacity of the internal memory 46. As such, storage capacity of the external memory 44 may be larger than storage capacity of the internal memory 46.
Additionally, access to the external memory 44 and the internal memory 46 may differ. For example, the internal memory 46 may be dedicated for use by the display pipeline 36. In other words, data stored in the internal memory 46 may be more readily accessible by the display pipeline 36, for example, with reduced latency, which may facilitate improving processing efficiency of the display pipeline 36. Comparatively, the display pipeline 36 may access the external memory 44 via a direct memory access (DMA) channel 58 since the external memory 44 is external to the display pipeline 36. However, to provide data access in this manner, the direct memory access channel 58 may be implemented with increased bandwidth, which increases implementation-associated cost. Moreover, when the external memory 44 is shared with other components, data access latency and, thus, processing efficiency of the display pipeline 36 may be affected.
After processing, the display pipeline 36 may output processed image data, such as display pixel image data and/or the gain value sets, to the display driver 40 for implementation (e.g., driving the pixels with the image data and gain values). Based at least in part on the processed image data and the gain value sets from the gain maps, the display driver 40 may apply analog electrical signals to the display pixels of the electronic display 18 to display images in one or more image frames 55. In this manner, the display pipeline 36 may operate to facilitate providing visual representations of information on the electronic display 18 while also preventing or reducing image artifacts in various display regions.
The process 80 includes the processor(s) 12 (or, e.g., the display pipeline 36) receiving (process block 82) image pixel image data, for example, from the image data source 38 of
The processor(s) 12 may process (process block 84) the image pixel image data to determine display pixel image data, which is the image data to be displayed on the display 18 in one or more display regions 50 and may indicate target luminance of color components at display pixels of the electronic display 18. Specifically, to determine the display pixel image data, the processor(s) 12 may convert image data from a source format to a display format. In some embodiments, the processor(s) 12 may determine the display format based at least in part on layout of sub-pixels in the electronic display 18.
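By way of a hypothetical illustration, the sketch below converts full-RGB source pixels to a sub-sampled display format in which every pixel keeps its green component and alternately keeps red or blue; the layout rule is an assumption, and a real sub-pixel layout resampler would filter or interpolate neighboring pixels rather than simply dropping components.

```python
def to_display_format(rgb_rows):
    """Convert rows of (r, g, b) source pixels to a sub-sampled display format
    (assumed layout: green everywhere, red and blue on alternating pixels)."""
    display_rows = []
    for y, row in enumerate(rgb_rows):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            if (x + y) % 2:
                out_row.append({"green": g, "red": r})
            else:
                out_row.append({"green": g, "blue": b})
        display_rows.append(out_row)
    return display_rows
```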
Moreover, processing the image pixel image data may include applying the gain values of the gain value sets of the primary gain map 52 and/or the secondary gain map 54 at respective pixels corresponding to the image data, as will be discussed with respect to
The primary gain map 52—corresponding to a static display region with borders that remain fixed—may be unchanged from frame to frame and the same gain value sets may be stored in the RAMs 102 for the primary gain map 52. By contrast, secondary gain map(s) 54—corresponding to one or more dynamic display region(s) with borders that may be changing—may change from frame to frame and different gain value sets may be stored in the RAMs 102 for the secondary gain map(s) 54 at different times. For example, the secondary gain map(s) 54 may change when certain animation is appearing from frame to frame. The secondary gain map 54 may be updated, for example, based on the image data (e.g., based on whether there is a border within the image data that is below or above a threshold value, such as gray level 0 (G0)), based on metadata associated with the image data, or by direct adjustment of the secondary gain map 54 in memory by image processing circuitry (e.g., GPU, display pipeline, an application processor). For instance, certain animation sequences may have a particular sequence of secondary gain map 54 gain value sets. Updates to the secondary gain map 54 gain value sets may change from frame to frame to correspond to changes in the image data being processed for display on the display 18. By way of example, the border of a dark region to enable an under-display sensor or a dialog box that appears onscreen may grow or shrink over the course of several frames. Yet both the static and changing borders may be crisp and precise using the primary gain map 52 and the secondary gain map(s) 54.
Although the borders of a dynamic display region 50 (e.g., 50B, 50C, or 50D of
The gain maps (e.g., the primary gain map 52 and/or the secondary gain map 54) may be decompressed (block 104), as shown, to obtain the gain value sets from the gain maps. The gain maps may provide the gain for each sub-pixel of the pixel at a particular pixel position. The gain value set may include three gain values for the sub-pixel positions (block 106), such as a red gain, a green gain, and a blue gain for the red sub-pixel, the green sub-pixel, and the blue sub-pixel, respectively.
Generally, a run map may include the size of a current run of either coded rows or uncoded rows. A coded row may refer to a row of gains in a gain map that includes a (redGain, greenGain, blueGain) triple of gains for a red sub-pixel, a green sub-pixel, and a blue sub-pixel (or a (redGain, greenGain) pair in the case of certain high-resolution and/or high dynamic range (HDR) panels with sub-sampled pixels). The gains may have any suitable bit depth. Using gains provided as values between 0 and 1 at 8-bit depth by way of example, coded rows may represent rows in which not all of the pixels are gained to 1. An uncoded row may refer to a row of (redGain, greenGain, blueGain) triples (or (redGain, greenGain) pairs in the case of certain high-resolution panels with sub-sampled pixels) having gain values that are all equal to 1. A certain gain triple may specify that the pixel corresponding to the gain shall not be modified, while other gain values may modify their corresponding pixels. All three segments of compressed data (e.g., the run map, a position map, and the gain map) may start at a byte boundary; that is, each segment may be byte aligned at the end of the segment. In one example, in decompressed form, each gain map may provide the gain value sets: three gains for each input pixel of the image impacted by the map, corresponding to the red, green, and blue components, respectively. The runs may alternate between coded and uncoded rows, and whether the first row is coded may be specified by a programmable start-run register bit. Any suitable bit depth (e.g., 6 bits, 7 bits, 8 bits, 9 bits, 10 bits, 11 bits, 12 bits, 13 bits, 14 bits, 15 bits, 16 bits) may be used for the gain values in the gain maps.
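A minimal sketch of decompressing such a run-length-coded gain map is shown below, assuming the run map has already been parsed into a list of run lengths and the coded rows into explicit gain triples; the names and container types are illustrative assumptions.

```python
def decompress_gain_map(run_lengths, coded_rows, row_width, first_run_coded):
    """Rebuild per-row gain triples from alternating runs of coded and uncoded
    rows.  Coded rows carry explicit (redGain, greenGain, blueGain) triples;
    uncoded rows expand to all-unity gains (no adjustment)."""
    unity_row = [(1.0, 1.0, 1.0)] * row_width
    coded_iter = iter(coded_rows)
    rows, coded = [], first_run_coded
    for run in run_lengths:
        for _ in range(run):
            rows.append(next(coded_iter) if coded else list(unity_row))
        coded = not coded  # runs alternate between coded and uncoded rows
    return rows

# Example: first run of 2 uncoded (all-unity) rows, then 1 coded row.
gains = decompress_gain_map(
    run_lengths=[2, 1],
    coded_rows=[[(0.0, 0.0, 0.0), (0.5, 0.6, 0.5), (1.0, 1.0, 1.0)]],
    row_width=3,
    first_run_coded=False,
)
```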
Additionally, programmable registers may specify an offset position and size of the area of pixels impacted by the primary gain map 52 and/or the secondary gain map 54, respectively. The offset position may be relative to the start of a display region 50 (e.g., in an active region) and the size of the area of pixels impacted by the gain map may include pixels disposed approximately entirely (e.g., completely) within that display region 50 or around that display region 50.
In addition to the gains derived from each stored compressed map, separate fixed gains along the rectilinear or straight edges of the display region 50 for the primary gain map 52 and/or the edges of the specified region for the dynamic, secondary gain map 54 may be specified through sets of registers, where the gains are independent per sub-pixel color (e.g., for red, green, and/or blue) and/or per edge of the display region 50. In the case of high-resolution panels with sub-sampled pixels, the pixels along the left and right edges will have either a red or a blue color component. Consequently, for these pixels, the other gain value may be disregarded. For each edge, a start and end pixel position may also be specified. These positions are relative to the start coordinate of an area that encompasses the display region 50 for the primary gain map 52 and the start coordinate of the specified region for the dynamic, secondary gain map 54, and may be included within the dimensions of the display region 50 for the primary gain map 52 and the region for the dynamic, secondary gain map 54, respectively. The edge gains, where specified or preset, may override the map gains or be applied in conjunction with the map gains. These settings may be preset by programming through a register.
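As a minimal sketch of such per-edge, per-color fixed gains (the register contents are represented here as a hypothetical data class; names and fields are assumptions):

```python
from dataclasses import dataclass

@dataclass
class EdgeGainRegisters:
    """Hypothetical register contents for one straight edge of a region:
    independent fixed gains per sub-pixel color, applied only between the
    start and end pixel positions relative to the region's start coordinate."""
    red: float
    green: float
    blue: float
    start: int
    end: int

def edge_gain_for(position_along_edge, registers):
    """Return the fixed (red, green, blue) edge gains if the position falls
    within the programmed span; otherwise return unity (no adjustment)."""
    if registers.start <= position_along_edge <= registers.end:
        return (registers.red, registers.green, registers.blue)
    return (1.0, 1.0, 1.0)
```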
The edge gains and decompressed map gains may be combined. In some embodiments, for any given pixel position, only one of the primary or dynamic, secondary gain maps may hold a gain value that is not equal to 1, with the exception that both maps may contain a gain value of 0. As such, a priority may be programmed to prioritize a gain from the primary gain map 52 or the dynamic, secondary gain map 54. For example, the priority may be used when the decompressed map gain or its corresponding edge gain for any component is a non-zero gain in the gain map with the selected priority. This may ensure that proper anti-aliasing is applied when the borders of a dynamic display region and the borders of a static region are near to one another or overlap (e.g., when a dynamic display region moves near or expands to reach the outer edge of the electronic display, such as a rounded edge of the electronic display).
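A minimal sketch of one way such a programmed priority and edge-gain combination could be evaluated per pixel is shown below; the selection and combination rules are illustrative assumptions rather than the disclosure's exact logic.

```python
UNITY = (1.0, 1.0, 1.0)

def select_map_gain(primary_gain, secondary_gain, primary_has_priority):
    """Choose between the primary and secondary gain sets for one pixel.
    Normally at most one map differs from unity, so that map's gains apply;
    where both differ (e.g., both are zero, or a dynamic border overlaps the
    static border), the programmed priority decides."""
    primary_active = primary_gain != UNITY
    secondary_active = secondary_gain != UNITY
    if primary_active and secondary_active:
        return primary_gain if primary_has_priority else secondary_gain
    return secondary_gain if secondary_active else primary_gain

def combine_with_edge_gain(map_gain, edge_gain, edge_overrides=True):
    """Combine the selected map gain with a fixed edge gain: the edge gain may
    override the map gain or be applied in conjunction with (multiplied into) it."""
    if edge_overrides:
        return edge_gain
    return tuple(m * e for m, e in zip(map_gain, edge_gain))
```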
The processing circuitry may apply (process block 156) a static, primary gain map 52 to pixels in a static display region 50 (e.g., the static display region 50A). The processing circuitry may apply static gains to rounded borders for display regions that remain constant between frames 55. By way of example, such static display regions 50 may include rounded border edges of the display 18. The processing circuitry may also apply (process block 158) a dynamic, secondary gain map 54 to the dynamic display regions 50. Specifically, and as previously discussed, the processing circuitry may decompress stored compressed gain maps from RAMs, and then derive the gains at each pixel position for the rounded borders. The processing circuitry may store different sets of gain values into the secondary gain map(s) 54 for different image frames. As such, the secondary gain map(s) 54 may apply gains to changing borders in the dynamic display regions 50. While the flowchart of
The processing circuitry may display (process block 160) the image data on the display. By applying the appropriate gain values, the processing circuitry may remove or reduce any image artifacts or aliasing along the borders of the display regions 50 at each frame. In this manner, the display 18 may provide a seamless viewing experience using the gain map techniques described herein.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ,” it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
This application claims priority to U.S. Provisional Application No. 63/404,091, filed Sep. 6, 2022, entitled “DYNAMIC ARBITRARY BORDER GAIN,” the disclosure of which is incorporated by reference in its entirety for all purposes.