This disclosure relates generally to the field of display panels, specifically to subpixel rendering for display panels.
Some display panels may include multiple display regions with different pixel layouts. One example is a display panel adapted for installation of under-display (or under-screen) optical elements, such as cameras, proximity sensors, and other optical sensors. Mobile device manufacturers seek to maximize the available display area by eliminating non-display elements from the surface of devices. Elements including but not limited to cameras and proximity sensors require dedicated space outside of the display area, which limits the available display area. One option is to place optical elements such as cameras or other optical sensors underneath the display panel. In one example, a front-facing camera or other optical element may be placed underneath the display surface, enabling photos to be taken in a “selfie” mode. In some embodiments, pixels above an under-display optical element may be spaced more widely than pixels in other areas of the display panel to allow sufficient light to pass through the pixels and reach the under-display optical element. These regions with widely-spaced pixels may be referred to as low pixel density regions, or regions with low pixels-per-inch (PPI).
This summary is provided to introduce in a simplified form a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
In one or more embodiments, a display driver is provided. The display driver includes an image processing circuit and a driver circuit. The image processing circuit is configured to receive input image data corresponding to an input image. The image processing circuit is further configured to generate first subpixel rendered data from a first part of the input image data for a first display region of a display panel using a first setting and generate second subpixel rendered data from a second part of the input image data for a second display region of the display panel using a second setting different from the first setting. The first setting is for a first pixel layout of the first display region and the second setting is for a second pixel layout of the second display region. The first pixel layout is different than the second pixel layout. The driver circuit is configured to update the first display region of the display panel based at least in part on the first subpixel rendered data and update the second display region of the display panel based at least in part on the second subpixel rendered data.
In one or more embodiments, a display device is provided. The display device includes a display panel and a display driver. The display panel includes a first display region with a first pixel layout and a second display region with a second pixel layout different than the first pixel layout. The display driver is configured to receive input image data corresponding to an input image to be displayed on a display panel. The display driver is further configured to generate first subpixel rendered data from a first part of the input image data for the first display region using a first setting for the first pixel layout of the first display region and generate second subpixel rendered data from a second part of the input image data for the second display region using a second setting for the second pixel layout of the second display region. The second setting is different from the first setting. The display driver is further configured to update the first display region of the display panel based at least in part on the first subpixel rendered data and update the second display region of the display panel based at least in part on the second subpixel rendered data.
In one or more embodiments, a method for driving a display panel is provided. The method includes receiving input image data corresponding to an input image. The method further includes generating first subpixel rendered data from a first part of the input image data for a first display region of a display panel using a first setting and generating second subpixel rendered data from a second part of the input image data for a second display region of the display panel using a second setting different from the first setting. The first setting is for a first pixel layout of the first display region and the second setting is for a second pixel layout of the second display region. The first pixel layout is different than the second pixel layout. The method further includes: updating the first display region of the display panel based at least in part on the first subpixel rendered data; and updating the second display region of the display panel based at least in part on the second subpixel rendered data.
Other aspects of the embodiments will be apparent from the following description and the appended claims.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are shown in the appended drawings. It is to be noted, however, that the appended drawings illustrate only exemplary embodiments, and are therefore not to be considered limiting of inventive scope, as the disclosure may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized in other embodiments without specific recitation. Suffixes may be attached to reference numerals for distinguishing identical elements from each other. The drawings referred to herein should not be understood as being drawn to scale unless specifically noted. Also, the drawings are often simplified and details or components omitted for clarity of presentation and explanation. The drawings and discussion serve to explain principles discussed below, where like designations denote like elements.
The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses of the disclosure. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding background, summary, or the following detailed description.
A display panel may include two or more display regions with different pixel layouts (or geometries). The pixel layout difference may include the difference in the pixel density (which may be measured as pixel-per-inch (PPI)) and/or the difference in the spacing between pixels. The pixel layout difference may additionally or instead include a difference in one or more of the size, configuration, arrangement, and number of subpixels in each pixel.
In one example implementation, a display panel may include a low pixel density region under which an under-display optical element (e.g., a camera, a proximity sensor or other optical sensors) is disposed. The low pixel density region may have a lower pixel density than the pixel density of the rest of the active region of the display panel, which may be referred to as nominal pixel density region. The low pixel density region may be configured to allow sufficient external light to reach the under-display optical element. In one implementation, an under-display camera is disposed underneath the low pixel density region and configured to capture an image through the low pixel density region.
In some embodiments, driving or updating a display panel based on input image data may involve applying subpixel rendering to input image data. Subpixel rendering is a technique to increase the apparent resolution of a display device by rendering subpixels (e.g., red (R) subpixels, green (G) subpixels, and blue (B) subpixels) based on the physical pixel layout. Subpixel rendering may determine or calculate graylevels of respective subpixels based on input image data and the physical pixel layout.
One issue is that subpixel rendering may cause image artifacts, distortion, and/or color shift in embodiments where a display panel includes two or more display regions with different pixel layouts. The present disclosure provides various techniques for mitigating the image artifacts, distortion, and/or color shift potentially caused by subpixel rendering in the image displayed on a display panel that includes display regions with different pixel layouts.
In one or more embodiments, a display driver includes an image processing circuit and a driver circuit. The image processing circuit is configured to receive input image data corresponding to an input image. The image processing circuit is further configured to generate first subpixel rendered data from a first part of the input image data for a first display region of a display panel using a first setting, and generate second subpixel rendered data from a second part of the input image data for a second display region of the display panel using a second setting. The first setting is for a first pixel layout of the first display region, and the second setting is for a second pixel layout of the second display region. The first pixel layout is different than the second pixel layout, and the first setting is different from the second setting. The driver circuit is configured to update the first display region of the display panel based at least in part on the first subpixel rendered data, and update the second display region of the display panel based at least in part on the second subpixel rendered data. Using the first setting and the second setting for the first pixel layout and the second pixel layout, respectively, may effectively mitigate distortion and/or color shift potentially caused by the subpixel rendering. In the following, a description is given of detailed embodiments of the present disclosure.
The display driver 110 is configured to drive or update the display panel 120 based on image data 112 received from a source 130. The image data 112 corresponds to an input image to be displayed on the display panel 120. The image data 112 may include pixel data for respective pixels of the display image. Pixel data for each pixel may include graylevels of respective colors (e.g., red (R), green (G), and blue (B)) of the pixel. In embodiments where the image data 112 is in an RGB format, the pixel data for each pixel includes graylevels for red, green, and blue (which may be hereinafter referred to as R graylevel, G graylevel, and B graylevel, respectively). The source 130 may be a processor (e.g., an application processor or a central processing unit (CPU)), an external controller, a host, or another device configured to provide the image data 112.
The display panel 120 includes a plurality of display regions with different pixel layouts. In the shown embodiment, the display panel 120 includes a first display region 122 with a first pixel layout and a second display region 124 with a second pixel layout that is different from the first pixel layout. The first pixel layout and the second pixel layout may be different in the pixel density (e.g., as measured by pixel-per-inch (PPI)). In some embodiments, the pixel density of the second display region 124 is lower than the pixel density of the first display region 122 and one or more under-display optical elements (e.g., a camera, a proximity sensor or other optical sensors) are disposed underneath the second display region 124. The low pixel density of the second display region 124 may allow sufficient light to pass through the second display region 124 and reach the under-display optical elements. The first pixel layout and the second pixel layout may be additionally or instead different in the size, configuration, arrangement and/or number of subpixels in each pixel. In other embodiments, the display panel 120 may further include one or more display regions with pixel layouts different from the first pixel layout and the second pixel layout.
In one or more embodiments, the display driver 110 includes an image processing circuit 140, a driver circuit 150, and a register circuit 160. The image processing circuit 140 is configured to apply image processing to image data 112 received from the source 130 to generate voltage data that specifies voltage levels of data voltages with which respective subpixels of the display panel 120 are to be updated. As discussed later in detail, the image processing includes subpixel rendering. The image processing may further include color adjustment, scaling, overshoot/undershoot driving, gamma transformation, and other image processes. The driver circuit 150 is configured to generate the data voltages based on the voltage data received from the image processing circuit 140 and update the respective subpixels of the display panel 120 with the generated data voltages. The register circuit 160 is configured to store settings of the image processing performed by the image processing circuit 140.
The image processing circuit 140 includes a subpixel rendering (SPR) circuit 142. The image processing circuit 140 is configured to provide input image data to the SPR circuit 142, where the input image data is based on the image data 112 received from the source 130. The input image data may be the image data 112 as is or image data generated by applying desired image processing (e.g., color adjustment, scaling, and other image processing) to the image data 112. The SPR circuit 142 is configured to apply subpixel rendering to the input image data.
In one or more embodiments, the SPR circuit 142 is configured to perform the subpixel rendering for the first display region 122 and the second display region 124 with different settings. The register circuit 160 is configured to store a first setting 162 for the first pixel layout of the first display region 122 and a second setting 164 for the second pixel layout of the second display region 124. The first setting 162 may specify a particular set of one or more operations (e.g., that include one or more algorithms and/or computations) of the subpixel rendering to be performed for the first display region 122 and the second setting 164 may specify a particular set of one or more operations (e.g., that include one or more algorithms and/or computations) of the subpixel rendering to be performed for the second display region 124. Details of the first setting 162 and the second setting 164 will be described later. The first setting 162 is different from the second setting 164 as the second pixel layout of the second display region 124 is different from the first pixel layout of the first display region 122.
The SPR circuit 142 is configured to generate first subpixel rendered data by applying subpixel rendering to a first part of the input image data for the first display region 122 using the first setting 162 and generate second subpixel rendered data from a second part of the input image data for the second display region 124 of the display panel using the second setting 164. The image processing circuit 140 is further configured to generate first voltage data for the first display region 122 based on the first subpixel rendered data and generate second voltage data for the second display region 124 based on the second subpixel rendered data. The driver circuit 150 is configured to update the subpixels of the first display region 122 based at least in part on the first voltage data for the first display region 122 and update the subpixels of the second display region 124 based at least in part on the second voltage data for the second display region 124. As the first voltage data for the first display region 122 is based on the first subpixel rendered data, the driver circuit 150 is configured to update the first display region 122 of the display panel 120 based at least in part on the first subpixel rendered data. Correspondingly, as the second voltage data for the second display region 124 is based on the second subpixel rendered data, the driver circuit 150 is configured to update the second display region 124 of the display panel 120 based at least in part on the second subpixel rendered data. Using the first setting 162 and the second setting 164 for the first display region 122 and the second display region 124, respectively, enables the SPR circuit 142 to achieve improved subpixel rendering for the first display region 122 and the second display region 124, effectively mitigating distortion and/or color shift potentially caused by the subpixel rendering.
The input image data 210 is input from a host device 205 to an SPR circuit 220. The host device 205 may be one embodiment of the source 130 of
The low pixel density region SPR circuit 222 may receive input image data 210 and, based on the setting 231, may apply image processing to generate low pixel density region output 223. The image processing performed in the low pixel density region SPR circuit 222 may include subpixel rendering for the low pixel density region 271. The setting 231 may specify particular algorithms or image computations to be performed in the low pixel density region SPR circuit 222. The low pixel density region output 223 may contain information to drive subpixels of the low pixel density region 271 with the received input image data 210. The low pixel density region SPR circuit 222 may apply a decimation or averaging algorithm to map the larger number of received pixels in input image data 210 into the smaller number of pixels in the low pixel density region 271 of the display panel 270. The low pixel density region output 223 may include subpixel rendered data for the low pixel density region 271.
The nominal pixel density region SPR circuit 224 may receive the input image data 210 and, based on the setting 232, apply image processing to generate nominal pixel density region output 225. The image processing performed in the nominal pixel density region SPR circuit 224 may include subpixel rendering for the nominal pixel density region. The setting 232 may specify particular algorithms or image computations to be performed in the nominal pixel density region SPR circuit 224. The nominal pixel density region output 225 may contain information to drive subpixels with the received input image data 210. The nominal pixel density region SPR circuit 224 may apply any desired image processing algorithms to the input image data 210 to generate the desired image response in areas of the nominal pixel density, those areas outside the low pixel density region 271 of the display panel 270. The nominal pixel density region output 225 may include subpixel rendered data for the nominal pixel density region.
A combiner circuit 280 takes as input the low pixel density region output 223, the nominal pixel density region output 225, and the location setting 233. For pixel locations with the location setting 233 set to a value indicating a pixel location in the low pixel density region 271, the combiner circuit 280 may output the low pixel density region output 223 to a driver 290. For pixel locations with the location setting 233 set to a value indicating a pixel location in the nominal pixel density region, the combiner circuit 280 may output the nominal pixel density region output 225 to the driver 290. For pixel locations with the location setting 233 set to a value indicating a pixel location at the boundary between the low pixel density region 271 and the nominal pixel density region, the combiner circuit 280 may apply specialized image processing to reduce visible artifacts in the boundary between the low pixel density region 271 and the nominal pixel density region.
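For purposes of illustration only, the following Python sketch shows one way the per-pixel selection performed by the combiner circuit 280 could be modeled in software. The location codes, array shapes, and the simple blend used for boundary pixels are assumptions introduced for this sketch, not a description of the actual circuit.

```python
import numpy as np

# Illustrative codes for the location setting 233 (assumed encoding).
NOMINAL, LOW, BOUNDARY = 0, 1, 2

def combine(low_out, nominal_out, location_setting, boundary_weight=0.5):
    """Select, per pixel, which SPR output is forwarded to the driver 290.

    low_out, nominal_out: H x W x C arrays of subpixel rendered graylevels.
    location_setting:     H x W array of location codes (see above).
    boundary_weight:      example blend factor for boundary pixels (assumption).
    """
    out = np.where((location_setting == LOW)[..., None], low_out, nominal_out)
    # One possible "specialized image processing" for boundary pixels:
    # a weighted blend of the two outputs to soften the region transition.
    blend = boundary_weight * low_out + (1.0 - boundary_weight) * nominal_out
    return np.where((location_setting == BOUNDARY)[..., None], blend, out)
```

In practice, the boundary handling could instead apply the boundary compensation coefficients described later in this disclosure; the blend above is only one example of reducing visible artifacts at the region boundary.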
In the low pixel density region 320, individual pixels are spaced further apart than in the nominal pixel density region 310. Pixels 321 and 322 are separated in the horizontal direction by a distance three times the distance between pixels 311 and 312. This specific example should not be considered limiting; other embodiments may use other distances between pixels. Pixels in the low pixel density region 320 may be separated by a distance which is greater than or less than the separation distance shown in
Pixel 324 in the low pixel density region 320 is shown alongside one embodiment in which pixels at the nominal pixel density are processed to generate a single pixel in the low pixel density region 320. These six pixels (323a, 323b, 323c, 323d, 323e, 323f) represent input image information which is processed in the low pixel density region SPR circuit 222 to generate information to drive subpixels with the desired image data for pixel 324. These six pixels may be present in the input image information but may not be physically present in the display panel 270; they are shown here to demonstrate concepts of the display system. In some embodiments, the low pixel density region SPR circuit 222 may perform a decimation of pixels at nominal pixel density to transform the six pixels of information at nominal pixel density into the subpixel information to drive the input image data 210 onto the single low pixel density pixel 324. In other embodiments, the low pixel density region SPR circuit 222 may perform an averaging operation on data at nominal pixel density to transform the six pixels of information into the subpixel information to drive the input image data 210 onto the single low pixel density pixel 324. In other embodiments, the low pixel density region SPR circuit 222 may utilize other signal processing algorithms to transform the six pixels of information at the nominal pixel density into the subpixel information for pixel 324.
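As an illustration of the decimation and averaging options described above, the sketch below maps a 2×3 block of nominal-density input pixels (such as pixels 323a-323f) to a single low pixel density pixel such as pixel 324. The block shape, array layout, and function names are assumptions made for this example.

```python
import numpy as np

def decimate_block(block):
    """Decimation: keep a single representative input pixel (here, the
    top-left one) from the block of nominal-density input pixels."""
    return block[0, 0]

def average_block(block):
    """Averaging: use the per-channel mean of the nominal-density input pixels."""
    return block.mean(axis=(0, 1))

# Six input pixels (2 rows x 3 columns), each holding R, G, and B graylevels.
block_323 = np.array([[[200, 10, 30], [190, 12, 28], [210,  9, 31]],
                      [[205, 11, 29], [195, 13, 27], [208, 10, 30]]], dtype=float)

pixel_324_by_decimation = decimate_block(block_323)  # [200., 10., 30.]
pixel_324_by_averaging  = average_block(block_323)   # per-channel mean of all six
```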
Pixel 326 represents another embodiment of the relationship between the density of nominal density pixels and the low pixel density region 320 pixels. Pixel 326 overlaps with 8 nominal density pixels, shown as input pixels 325a, 325b, 325c, 325d, 325e, 325f, 325g and 325h. In this and other embodiments, the low pixel density region SPR circuit 222 may transform the 8 nominal density pixels into the single pixel 326. This transformation may include a decimation computation, an averaging operation or other algorithm to represent the 8 nominal density pixels 325a, 325b, 325c, 325d, 325e, 325f, 325g and 325h by a single low density pixel 326.
Other embodiments of the display system may include pixels of different shapes than those shown here, including but not limited to rectangles, squares, hexagons, or other regular polygons. The transformation of multiple pixels at the nominal density into a lower density in the low pixel density region 320 may involve computations over a wide range of input image pixels. Computations may involve more pixels or fewer pixels than those shown here. Multiple pixels at the nominal density may overlap with single pixels in the low pixel density region 320 in different patterns than shown in these examples while still practicing the disclosed display system.
Pixels 313a, 313b, 313c, 313d, 313e, 313f, 321 and 322 exist on a boundary between the low pixel density region 320 and the nominal pixel density region 310. Additional image processing may be applied to these boundary pixels. In some embodiments, pixel information for boundary pixels may be averaged with adjacent pixels to smooth discontinuities. In other embodiments, pixel information for boundary pixels may be filtered with a window function. The combiner circuit 280 may adjust luminance values for boundary pixels based on the location setting 233.
The display panel 420 includes a first display region 422 with a first pixel layout and a second display region 424 with a second pixel layout that is different from the first pixel layout. In the shown embodiment, the pixel density of the second display region 424 is lower than the pixel density of the first display region 422. In some embodiments, one or more under-display optical elements (not shown) may be disposed underneath the second display region 424 while the second display region 424 is configured to allow sufficient light to pass through the second display region 424 and reach the under-display optical elements. Examples of the under-display optical element include cameras, proximity sensors, and other optical sensors.
In one or more embodiments, the display driver 410 includes an interface (I/F) circuit 435, an image processing circuit 440, a driver circuit 450, a register circuit 460, and a region definition decoder 470. The image processing circuit 440, the driver circuit 450, and the register circuit 460 may be embodiments of the image processing circuit 140, the driver circuit 150, and the register circuit 160 of
The interface circuit 435 is configured to receive the image data 412 from the source 430 and forward the image data 412 to the image processing circuit 440. The interface circuit 435 may be further configured to receive a setting update 414 from the source 430 and update settings stored in the register circuit 460 as indicated by the setting update 414.
The image processing circuit 440 is configured to apply desired image processing to the image data 412 received from the source 430 to generate voltage data 416 that specifies voltage levels of data voltages with which respective subpixels of the display panel 420 are to be updated. In one or more embodiments, the image processing performed by the image processing circuit 440 includes subpixel rendering. The image processing may further include color adjustment, scaling, overshoot/undershoot driving, gamma transformation, and other image processes.
The driver circuit 450 is configured to update the respective subpixels of the display panel 420 based on the voltage data 416 received from the image processing circuit 440. In one implementation, the driver circuit 450 may be configured to generate and provide data voltages to the respective subpixels of the display panel 420 such that the data voltages have voltage levels as specified by the voltage data 416.
The register circuit 460 is configured to store settings used in the image processing to be performed by the image processing circuit 440. In the shown embodiment, the settings stored in the register circuit 460 include a first setting 462, a second setting 464, and a display region definition 466. The first setting 462 may specify a particular algorithm and/or computation of the subpixel rendering to be performed for the first display region 422 and the second setting 464 may specify a particular algorithm and/or computation of the subpixel rendering to be performed for the second display region 424. The display region definition 466 includes information that defines the first display region 422 and the second display region 424. The display region definition 466 may indicate the shape, location, dimensions (e.g., the width and height) and/or other spatial information of the second display region 424.
The register circuit 460 may be further configured to store boundary compensation coefficients 468 used in subpixel rendering for subpixels at the boundary between the first display region 422 and the second display region 424. In one or more embodiments, a selected one of the boundary compensation coefficients 468 may be applied in subpixel rendering for each subpixel located at the boundary between the first display region 422 and the second display region 424 to mitigate an image artifact at the boundary. Details of the use of the boundary compensation coefficients 468 in the subpixel rendering will be given later.
The region definition decoder 470 is configured to decode the display region definition 466 to generate a region indication signal 472. The region indication signal 472 indicates whether the subpixel of interest in the image processing performed by the image processing circuit 440 is located in the first display region 422 or the second display region 424. The region indication signal 472 may be one embodiment of the location setting 233 described in relation to
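For illustration, the sketch below models how a display region definition could be decoded into a per-pixel region indication. The rectangular definition and the integer codes returned are assumptions of this sketch; the display region definition 466 may describe the second display region 424 by shape, location, dimensions, or other spatial information.

```python
from dataclasses import dataclass

@dataclass
class RegionDefinition:
    # Assumed rectangular description of the second display region 424:
    # top-left corner plus width and height, in pixel coordinates.
    x: int
    y: int
    width: int
    height: int

def region_indication(defn: RegionDefinition, row: int, col: int) -> int:
    """Return 2 if the subpixel of interest falls inside the second display
    region, otherwise 1 -- one possible encoding of the region indication
    signal 472."""
    inside = (defn.x <= col < defn.x + defn.width and
              defn.y <= row < defn.y + defn.height)
    return 2 if inside else 1

# Example: a 100 x 80 pixel second display region whose top-left corner is at
# column 400, row 0 (values chosen arbitrarily for the example).
definition_466 = RegionDefinition(x=400, y=0, width=100, height=80)
assert region_indication(definition_466, row=10, col=450) == 2
assert region_indication(definition_466, row=10, col=10) == 1
```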
In one or more embodiments, the image processing circuit 440 includes an SPR circuit 442 and a gamma circuit 444. The image processing circuit 440 is configured to provide input image data to the SPR circuit 442, where the input image data is based on the image data 412 received from the source 430. The input image data may be the image data 412 as is or image data generated by applying desired image processing (e.g., color adjustment, scaling, and other image processing) to the image data 412. The SPR circuit 442 is configured to apply subpixel rendering to the input image data.
In the shown embodiment, the SPR circuit 442 includes a first display region SPR circuit 445, a second display region SPR circuit 446, and a combiner circuit 447.
The first display region SPR circuit 445 is configured to receive a first part of the input image data for the first display region 422 and apply, based on the first setting 462, subpixel rendering to the first part of the input image data to generate first subpixel rendered data 448. The first subpixel rendered data 448 may include graylevels of the subpixels in the first display region 422.
The second display region SPR circuit 446 is configured to receive a second part of the input image data for the second display region 424 and apply, based on the second setting 464, subpixel rendering to the second part of the input image data to generate second subpixel rendered data 449. The second subpixel rendered data 449 may include graylevels of the subpixels in the second display region 424.
The combiner circuit 447 is configured to generate resulting subpixel rendered data 415 by combining the first subpixel rendered data 448 and the second subpixel rendered data 449. The combiner circuit 447 may be configured to output, based on the region indication signal 472, the first subpixel rendered data 448 as the resulting subpixel rendered data 415 for the subpixels in the first display region 422 and output the second subpixel rendered data 449 as the resulting subpixel rendered data 415 for the subpixels in the second display region 424. The combiner circuit 447 may be further configured to apply a selected one of the boundary compensation coefficients 468 to the graylevel indicated by the first subpixel rendered data 448 or the second subpixel rendered data 449 for each subpixel at the boundary between the first display region 422 and the second display region 424 in generating the resulting subpixel rendered data 415. The selection of the boundary compensation coefficient 468 for each subpixel at the boundary may be based on the location of each subpixel. As discussed later in detail, the application of the boundary compensation coefficients 468 may mitigate an image artifact that may occur at the boundary between the first display region 422 and the second display region 424.
The gamma circuit 444 is configured to apply gamma transformation to the resulting subpixel rendered data 415 to generate the voltage data 416. In embodiments where the first display region 422 and the second display region 424 are different in the pixel density, the gamma transformation may be performed with different “gamma curves” between the first display region 422 and the second display region 424. The “gamma curve” referred to herein is the correlation between the graylevels indicated by the resulting subpixel rendered data 415 and the voltage levels indicated by the voltage data 416. In one embodiment, the gamma curves for the first display region 422 and the second display region 424 are determined depending on the ratio of the pixel density of the second display region 424 to the pixel density of the first display region 422. For example, in embodiments where the pixel density of the second display region 424 is X times the pixel density of the first display region 422, where X is a number between zero and one, non-inclusive, the gamma curves for the first display region 422 and the second display region 424 are determined such that the luminance of subpixels of the second display region 424 is 1/X times the luminance of subpixels of the first display region 422 for a fixed graylevel and a fixed color. The gamma curves thus determined reduce or eliminate the difference in the brightness between the images displayed in the first display region 422 and the second display region 424.
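The luminance relationship described above can be made concrete with a short sketch. The gamma value of 2.2 and the normalization are assumptions for illustration; an actual gamma circuit would map graylevels to panel-specific voltage levels through calibrated gamma curves.

```python
def relative_luminances(graylevel, density_ratio_x, gamma=2.2, max_graylevel=255):
    """Relative luminances produced in the two regions for the same graylevel.

    density_ratio_x: pixel density of the second display region divided by that
    of the first display region (0 < X < 1). Subpixels of the sparser second
    region are driven 1/X times brighter so that both regions appear equally
    bright per unit area.
    """
    lum_first = (graylevel / max_graylevel) ** gamma
    lum_second = lum_first / density_ratio_x
    return lum_first, lum_second

# Example: the second region has one fourth the pixel density of the first.
l1, l2 = relative_luminances(graylevel=128, density_ratio_x=0.25)
# l2 == 4 * l1, compensating for the 4x sparser pixels in the second region.
```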
In the following, a detailed description is given of example subpixel rendering performed by the SPR circuit 442, according to one or more embodiments.
The second display region 424 includes pixels 600C. In the shown embodiment, the pixels 600C are configured identically to the pixels 600A, each of which includes one R subpixel 602R, two G subpixels 602G, and one B subpixel 602B. In the shown embodiment, the pixels 600A and 600B are disposed adjacent to one another in the first display region 422 while the pixels 600C are spaced from one another in the second display region 424. Accordingly, the pixel density of the second display region 424 is lower than the pixel density of the first display region 422. In the shown embodiment, the pixel density of the second display region 424 is one fourth of the pixel density of the first display region 422.
In the subpixel rendering, the graylevel of each R subpixel of the display panel 420 is determined based on R graylevels of one or more neighboring input pixels. Correspondingly, the graylevel of each G subpixel of the display panel 420 is determined based on the G graylevels of one or more neighboring input pixels, and the graylevel of each B subpixel is determined based on B graylevels of one or more neighboring input pixels. In one or more embodiments, the determination of the graylevel of each R subpixel involves defining an R reference region for each R subpixel of the display panel 420, the positions of the respective R reference regions mapping to the positions of the corresponding R subpixels, and determining the graylevel of each R subpixel based at least in part on the R graylevels of the input pixels that are at least partially overlapped by the R reference region. In the following, a detailed description is first given of example determination (or calculation) of the graylevels of the R subpixels of the display panel 420 in the subpixel rendering.
In one or more embodiments, the R reference regions for the R subpixels in the first display region 422 are defined differently from the R reference regions for the R subpixels in the second display region 424. In one implementation, the definition of the R reference regions for the R subpixels in the first display region 422 is indicated by the first setting 462 (shown in
In one implementation, the shape of the R reference regions for the first display region 422 is different from the shape of the R reference regions for the second display region 424. In the embodiment shown in
In one embodiment, the graylevel of the R subpixel 1002 is calculated based at least in part on the R graylevels of input pixels that are at least partially overlapped by the R reference region 1004. In the shown embodiment, the graylevel of the R subpixel 1002 is calculated based at least in part on the R graylevels of input pixels P00, P01, P10, and P11 that are partially overlapped by the R reference region 1004. The calculation of the graylevel of the R subpixel 1002 may be further based on fractions of overlaps of the R reference region 1004 over the input pixels P00, P01, P10, and P11.
In some embodiments, the graylevel of the R subpixel 1002 may be calculated as the γ-th root of a weighted sum of the γ-th powers of the R graylevels of input pixels P00, P01, P10, and P11. In one implementation, the graylevel of the R subpixel 1002 may be calculated in accordance with the following formula (1):

Rspr_1002=(w00·Rin_00^γ+w01·Rin_01^γ+w10·Rin_10^γ+w11·Rin_11^γ)^(1/γ),  (1)
where Rspr_1002 is the graylevel of the R subpixel 1002, Rin_ij is the R graylevel of the input pixel Pij, wij is the weight assigned to the input pixel Pij, and γ is the gamma value of the display system 400. The gamma value γ may be 2.2, which is one of standard gamma values for display systems. In one implementation, the weight wij is the ratio of the portion of the input pixel Pij overlapped by the R reference region 1004 to the total area of the R reference region 1004. The weights w00, w01, w10, and w11 assigned to the input pixels P00, P01, P10 and P11 are determined based on fractions of overlaps of the R reference region 1004 over the input pixels P00, P01, P10, and P11, respectively. In one implementation, the weights w00, w01, w10, and w11 are determined as the ratios of the areas of overlapped portions of the input pixels P00, P01, P10, and P11 to the total area of the R reference region 1004, respectively, the overlapped portions of the input pixels P00, P01, P10, and P11 being overlapped by the R reference region 1004.
In embodiments where the areas of the overlapped portions of the input pixels P00, P01, P10, and P11 overlapped by the R reference region 1004 are equal to one another, the graylevel of the R subpixel 1002 may be calculated as the γ-th root of the average of the γ-th powers of the R graylevels of the input pixels P00, P01, P10, and P11. In the embodiment shown in
Rspr_1002=(0.25Rin_00^γ+0.25Rin_01^γ+0.25Rin_10^γ+0.25Rin_11^γ)^(1/γ).  (2)
The graylevels of other R subpixels in the first display region 422 may be calculated similarly to the R subpixel 1002.
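For illustration, the weighted calculation of formulas (1) and (2) can be written as the short function below; the function name and the flat lists of graylevels and weights are assumptions made for this sketch.

```python
def spr_graylevel(input_graylevels, weights, gamma=2.2):
    """Formula (1): the gamma-th root of the weighted sum of the gamma-th
    powers of the R graylevels of the overlapped input pixels. The weights
    are the overlap fractions of the reference region over those pixels."""
    total = sum(w * (g ** gamma) for g, w in zip(input_graylevels, weights))
    return total ** (1.0 / gamma)

# Formula (2): the R reference region 1004 overlaps input pixels P00, P01,
# P10, and P11 equally, so each weight is 0.25 (graylevels chosen arbitrarily).
r_1002 = spr_graylevel([100, 120, 110, 130], [0.25, 0.25, 0.25, 0.25])
```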
In one embodiment, the graylevel of the R subpixel 1102 is calculated based at least in part on the R graylevels of input pixels that are at least partially overlapped by the R reference region 1104. In the shown embodiment, the graylevel of the R subpixel 1102 is calculated based at least in part on the R graylevels of input pixels P00, P01, P02, P03, P10, P11, P12, and P13. The calculation of the graylevel of the R subpixel 1102 may be further based on fractions of overlaps of the R reference region 1104 over the input pixels P00, P01, P02, P03, P10, P11, P12, and P13.
In some embodiments, the graylevel of the R subpixel 1102 may be calculated as the γ-th root of a weighted sum of the γ-th powers of the R graylevels of input pixels P00, P01, P02, P03, P10, P11, P12, and P13. In one implementation, the graylevel of the R subpixel 1102 may be calculated in accordance with the following formula (3):
Rspr_1102=(w00·Rin_00^γ+w01·Rin_01^γ+w02·Rin_02^γ+w03·Rin_03^γ+w10·Rin_10^γ+w11·Rin_11^γ+w12·Rin_12^γ+w13·Rin_13^γ)^(1/γ),  (3)
where Rspr_1102 is the graylevel of the R subpixel 1102, Rin_ij is the R graylevel of the input pixel Pij, wij is the weight assigned to the input pixel Pij, and γ is the gamma value of the display system 400. In one implementation, the weight wij is the ratio of the portion of the input pixel Pij overlapped by the R reference region 1104 to the total area of the R reference region 1104. The weights w00, w01, w02, w03, w10, w11, w12, and w13 assigned to the input pixels P00, P01, P02, P03, P10, P11, P12, and P13 are determined based on fractions of overlaps of the R reference region 1104 over the input pixels P00, P01, P02, P03, P10, P11, P12, and P13, respectively. In one implementation, the weights w00, w01, w02, w03, w10, w11, w12, and w13 are determined as the ratios of the areas of overlapped portions of the input pixels P00, P01, P02, P03, P10, P11, P12, and P13 to the total area of the R reference region 1104, respectively, the overlapped portions of the input pixels P00, P01, P02, P03, P10, P11, P12, and P13 being overlapped by the R reference region 1104.
In embodiments where the areas of the portions of the input pixels P00, P01, P02, P03, P10, P11, P12, and P13 overlapped by the R reference region 1104 are equal to one another, the graylevel of the R subpixel 1102 may be calculated as the γ-th root of the average of the γ-th powers of the R graylevels of the input pixels P00, P01, P02, P03, P10, P11, P12, and P13. In the embodiment shown in
Rspr_1102=(0.125Rin_00^γ+0.125Rin_01^γ+0.125Rin_02^γ+0.125Rin_03^γ+0.125Rin_10^γ+0.125Rin_11^γ+0.125Rin_12^γ+0.125Rin_13^γ)^(1/γ).  (4)
The graylevels of other R subpixels in the second display region 424 may be calculated similarly to the R subpixel 1102.
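The weights w_ij are simply overlap fractions. The sketch below computes them for an axis-aligned rectangular reference region placed over a regular grid of input pixels; the rectangle representation and unit pixel grid are assumptions made for this example.

```python
def overlap_weights(ref_rect, pixel_rects):
    """Weights w_ij: area of each input pixel covered by the reference region,
    divided by the total area of the reference region."""
    rx0, ry0, rx1, ry1 = ref_rect
    ref_area = (rx1 - rx0) * (ry1 - ry0)
    weights = []
    for (px0, py0, px1, py1) in pixel_rects:
        overlap_x = max(0.0, min(rx1, px1) - max(rx0, px0))
        overlap_y = max(0.0, min(ry1, py1) - max(ry0, py0))
        weights.append(overlap_x * overlap_y / ref_area)
    return weights

# Example: an R reference region that exactly covers a 4 x 2 block of unit
# input pixels (as for the R subpixel 1102) yields eight equal weights of 0.125.
ref_1104 = (0.0, 0.0, 4.0, 2.0)
pixels = [(c, r, c + 1, r + 1) for r in range(2) for c in range(4)]
assert overlap_weights(ref_1104, pixels) == [0.125] * 8
```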
In embodiments where the shape of the R reference regions is different between the first display region 422 and the second display region 424 (for example as shown in
One approach to mitigate the image artifact may be to modify the shapes of R reference regions defined for the boundary R subpixels, which are positioned at the boundary between the first display region 422 and the second display region 424, such that the R reference regions defined for the boundary R subpixels do not overlap any other R reference regions. This approach may however complicate the shapes of R reference regions defined for the boundary R subpixels, undesirably increasing the calculation amount needed for the subpixel rendering.
In one or more embodiments, the image artifact at the boundary between the first display region 422 and the second display region 424 is mitigated by applying boundary compensation coefficients to the graylevels of at least some of the boundary R subpixels. In some embodiments, boundary compensation coefficients may be applied to the graylevels of the boundary R subpixels in the second display region 424. In other embodiments, boundary compensation coefficients may be applied to the graylevels of the boundary R subpixels in both the first display region 422 and the second display region 424. In still other embodiments, boundary compensation coefficients may be applied to the graylevels of the boundary R subpixels in the first display region 422. The boundary compensation coefficients may be empirically predetermined and stored in the register circuit 460 as the boundary compensation coefficients 468 shown in
In one implementation, the graylevels of the boundary R subpixels in the second display region 424 may be determined by first determining base graylevels of the boundary R subpixels as the γ-th roots of weighted sums of the γ-th powers of the R graylevels of the corresponding input pixels as described above (e.g., in accordance with the above-described formula (3) or (4)) and then determining the final graylevels of the boundary R subpixels by applying boundary compensation coefficients to the base graylevels. In some embodiments, the second display region SPR circuit 446 (shown in
For the boundary R subpixel 1202 in the second display region 424 shown in
Rbase_1202=(0.125Rin_00^γ+0.125Rin_01^γ+0.125Rin_02^γ+0.125Rin_03^γ+0.125Rin_10^γ+0.125Rin_11^γ+0.125Rin_12^γ+0.125Rin_13^γ)^(1/γ),  (5)
where Rbase_1202 is the base graylevel of the boundary R subpixel 1202. The final graylevel of the boundary R subpixel 1202 may be determined by applying a boundary compensation coefficient determined for the boundary R subpixel 1202. In some embodiments, the final graylevel of the boundary R subpixel 1202 is determined by multiplying the base graylevel Rbase_1202 of the boundary R subpixel 1202 by the boundary compensation coefficient determined for the boundary R subpixel 1202. In such embodiments, the final graylevel Rspr_1202 of the boundary R subpixel 1202 is determined as:
Rspr_1202=ηR·Rbase_1202, (6)
where ηR is the boundary compensation coefficient determined for the boundary R subpixel 1202. In one implementation, the boundary compensation coefficient ηR for the boundary R subpixel 1202 may be empirically determined and stored in the register circuit 460 as part of the boundary compensation coefficients 468. The graylevels of other boundary R subpixels in the first display region 422 and/or the second display region 424 may be calculated similarly to the boundary R subpixel 1202.
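A sketch of the two-step boundary handling of formulas (5) and (6) is given below: a base graylevel is computed with the ordinary gamma-weighted calculation and then scaled by a boundary compensation coefficient. The dictionary keyed by subpixel color and position class stands in for the boundary compensation coefficients 468; its keys and values are assumptions of this sketch.

```python
def boundary_graylevel(input_graylevels, weights, eta, gamma=2.2):
    """Formulas (5) and (6): base graylevel from the gamma-weighted sum of
    the overlapped input pixels, multiplied by the boundary compensation
    coefficient eta selected for this boundary subpixel."""
    base = sum(w * (g ** gamma) for g, w in zip(input_graylevels, weights)) ** (1.0 / gamma)
    return eta * base

# Empirically predetermined coefficients selected by the position of the
# boundary subpixel (keys and values are illustrative only).
boundary_coefficients_468 = {("R", "corner"): 0.92, ("R", "edge"): 0.95}

eta_R = boundary_coefficients_468[("R", "edge")]
r_1202 = boundary_graylevel([100, 120, 110, 130, 105, 115, 125, 135],
                            [0.125] * 8, eta_R)
```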
The shapes of overlaps of the R reference regions defined for the boundary R subpixels in the second display region 424 over the R reference regions defined for the boundary R subpixels in the first display region 422 may vary depending on the positions of the boundary R subpixels. Referring to
In one or more embodiments, the boundary compensation coefficients are determined in relation to the shapes of the overlaps to mitigate an image artifact between the first display region 422 and the second display region 424. More specifically, in some embodiments, the boundary compensation coefficient applied to the base graylevel of a boundary R subpixel is determined based on the position of the boundary R subpixel. The boundary compensation coefficient applied to the base graylevel of a boundary R subpixel may be selected from the boundary compensation coefficients 468 stored in the register circuit 460 (shown in
While
In some embodiments, the graylevel of the R subpixel 1402 may be calculated as the γ-th root of a weighted sum of the γ-th powers of the R graylevels of input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32. In one implementation, the graylevel of the R subpixel 1402 may be calculated in accordance with the following formula (7):
Rspr_1402=(w01·Rin_01^γ+w02·Rin_02^γ+w10·Rin_10^γ+w11·Rin_11^γ+w12·Rin_12^γ+w13·Rin_13^γ+w20·Rin_20^γ+w21·Rin_21^γ+w22·Rin_22^γ+w23·Rin_23^γ+w31·Rin_31^γ+w32·Rin_32^γ)^(1/γ),  (7)
where Rspr_1402 is the graylevel of the R subpixel 1402, Rin_ij is the R graylevel of the input pixel Pij, wij is the weight assigned to the input pixel Pij, and γ is the gamma value of the display system 400. In one implementation, the weight wij is the ratio of the portion of the input pixel Pij overlapped by the R reference region 1404 to the total area of the R reference region 1404. The weights w01, w02, w10, w11, w12, w13, w20, w21, w22, w23, w31, and w32 assigned to the input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32 are determined based on fractions of overlaps of the R reference region 1404 over the input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32 respectively. In one implementation, the weights w01, w02, w10, w11, w12, w13, w20, w21, w22, w23, w31, and w32 are determined as the ratios of the areas of overlapped portions of the input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32 to the total area of the R reference region 1404, the overlapped portions of the input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32 being overlapped by the R reference region 1404.
In the embodiment shown in
Rspr_1402=(0.0625Rin_01^γ+0.0625Rin_02^γ+0.0625Rin_10^γ+0.125Rin_11^γ+0.125Rin_12^γ+0.0625Rin_13^γ+0.0625Rin_20^γ+0.125Rin_21^γ+0.125Rin_22^γ+0.0625Rin_23^γ+0.0625Rin_31^γ+0.0625Rin_32^γ)^(1/γ).  (8)
The graylevels of other R subpixels in the second display region 424 may be calculated similarly to the R subpixel 1402.
In some embodiments, the graylevel of the R subpixel 1502 may be calculated as the γ-th root of a weighted sum of the γ-th powers of the R graylevels of input pixels P01, P02, P10, P11, P12, P13, P20, P21, P22, P23, P31, and P32. In one implementation, the graylevel of the R subpixel 1502 may be calculated in accordance with the following formula (9):
Rspr_1502=(w01·Rin_01^γ+w02·Rin_02^γ+w10·Rin_10^γ+w11·Rin_11^γ+w12·Rin_12^γ+w13·Rin_13^γ+w20·Rin_20^γ+w21·Rin_21^γ+w22·Rin_22^γ+w23·Rin_23^γ+w31·Rin_31^γ+w32·Rin_32^γ)^(1/γ),  (9)
where Rspr_1502 is the graylevel of the R subpixel 1502, Rin_ij is the R graylevel of the input pixel Pij, wij is the weight assigned to the input pixel Pij, and γ is the gamma value of the display system 400. In one implementation, the weight wij is the ratio of the portion of the input pixel Pij overlapped by the R reference region 1504 to the total area of the R reference region 1504.
In the embodiment shown in
Rspr_1502=(0.03125Rin_01^γ+0.03125Rin_02^γ+0.09375Rin_10^γ+0.125Rin_11^γ+0.125Rin_12^γ+0.09375Rin_13^γ+0.09375Rin_20^γ+0.125Rin_21^γ+0.125Rin_22^γ+0.09375Rin_23^γ+0.03125Rin_31^γ+0.03125Rin_32^γ)^(1/γ).  (10)
The graylevels of other R subpixels in the second display region 424 may be calculated similarly to the R subpixel 1502.
The determination of the graylevels of the B subpixels of the display panel 420 involves defining a B reference region for each B subpixel of the display panel 420 and determining the graylevel of each B subpixel based at least in part on B graylevels of input pixels of the input image, the input pixels being at least partially overlapped by the B reference region. The B reference regions are defined such that the positions of respective B reference regions map to the positions of the corresponding B subpixels of the display panel 420. In one implementation, the B reference regions may be defined such that the geometric center of each B reference region is positioned on the corresponding B subpixel of the display panel 420. The shape of the B reference regions for the first display region 422 is different from the shape of the B reference regions for the second display region 424. In the embodiment shown in
The graylevel of each B subpixel of the display panel 420 may be determined based at least in part on the B graylevels of the input pixels that are at least partially overlapped by the B reference region defined for each B subpixel of the display panel 420. The graylevels of the B subpixels in the first display region 422 may be calculated in a similar manner to the R subpixels in the first display region 422 (e.g., in accordance with the formula (1) or (2)) while the graylevels of the B subpixels in the second display region 424 may be calculated in a similar manner to the R subpixels in the second display region 424 (e.g., in accordance with the formula (3) or (4)). In some embodiments, the graylevels of the B subpixels in the first display region 422 may be determined by the first display region SPR circuit 445 (shown in
Further, graylevels of boundary B subpixels may be calculated in a similar manner to boundary R subpixels (e.g., in accordance with the formula (5) or (6)), where a boundary B subpixel is such a B subpixel that the B reference region defined for the B subpixel in one of the first display region 422 and the second display region 424 overlaps one or more other B reference regions defined for one or more B subpixels in the other of the first display region 422 and the second display region 424.
The determination of the graylevels of the G subpixels of the display panel 420 involves defining a G reference region for each G subpixel of the display panel 420 and determining the graylevel of each G subpixel based at least in part on the G graylevel(s) of one or more input pixels of the input image, the one or more input pixels being at least partially overlapped by the G reference region. The G reference regions are defined such that the positions of respective G reference regions map to the positions of the corresponding G subpixels of the display panel 420. In one implementation, the G reference regions may be defined such that the geometric center of each G reference region is positioned on the corresponding G subpixel of the display panel 420. The shape of the G reference regions for the first display region 422 is different from the shape of the G reference regions for the second display region 424.
In the embodiment shown in
Further, the G reference region of each G subpixel in the second display region 424 is defined in a rectangular shape to overlap five input pixels. The graylevel of each G subpixel in the second display region 424 may be determined based at least in part on the G graylevels of the five input pixels that are at least partially overlapped by the G reference region defined for each G subpixel in the second display region 424. The graylevels of the G subpixels in the second display region 424 may be calculated in a similar manner to the R subpixels in the second display region 424 (e.g., in accordance with the formula (3) or (4)).
In some embodiments, the graylevel of the G subpixel 1802 may be calculated as the γ-th root of a weighted sum of the γ-th powers of the G graylevels of input pixels P00, P01, P02, P03, and P04. In one implementation, the graylevel of the G subpixel 1802 may be calculated in accordance with the following formula (11):
Gspr_1802=(w00·Gin_00^γ+w01·Gin_01^γ+w02·Gin_02^γ+w03·Gin_03^γ+w04·Gin_04^γ)^(1/γ),  (11)
where Gspr_1802 is the graylevel of the G subpixel 1802, Gin_ij is the G graylevel of the input pixel Pij, wij is the weight assigned to the input pixel Pij, and γ is the gamma value of the display system 400. In one implementation, the weight wij is the ratio of the portion of the input pixel Pij overlapped by the G reference region 1804 to the total area of the G reference region 1804. The weights w00, w01, w02, w03, and w04 assigned to the input pixels P00, P01, P02, P03, and P04 are determined based on fractions of overlaps of the G reference region 1804 over the input pixels P00, P01, P02, P03, and P04, respectively. In one implementation, the weights w00, w01, w02, w03, and w04 are determined as the ratios of the areas of overlapped portions of the input pixels P00, P01, P02, P03, and P04 to the total area of the G reference region 1804, the overlapped portions of the input pixels P00, P01, P02, P03, and P04 being overlapped by the G reference region 1804.
In the embodiment shown in
Gspr_1802=(0.125Gin_00^γ+0.25Gin_01^γ+0.25Gin_02^γ+0.25Gin_03^γ+0.125Gin_04^γ)^(1/γ).  (12)
The graylevels of other G subpixels in the second display region 424 may be calculated similarly to the G subpixel 1802.
A G reference region defined for a G subpixel in the second display region 424 may overlap one or more G reference regions defined for one or more G subpixels in the first display region 422. If a G reference region defined for a G subpixel in the second display region 424 overlaps one or more other G reference regions defined for one or more G subpixels in the first display region 422, such a G subpixel may be hereinafter referred to as a boundary G subpixel.
To mitigate the image artifact at the boundary between the first display region 422 and the second display region 424, in one or more embodiments, boundary compensation coefficients are applied to the graylevels of at least some of the boundary G subpixels in the second display region 424. The boundary compensation coefficients may be empirically predetermined and stored in the register circuit 460 as part of the boundary compensation coefficients 468 as shown in
In one implementation, the graylevels of the boundary G subpixels in the second display region 424 may be determined by first determining base graylevels of boundary G subpixels as the γ-th roots of weighted sums of the γ-th powers of the G graylevels of the corresponding input pixels as described above (e.g., in accordance with the above-described formula (11) or (12)) and determining the final graylevels of the boundary G subpixels by applying boundary compensation coefficients to the base graylevels. In one implementation, the second display region SPR circuit 446 (shown in
For the boundary G subpixel 1902 in the second display region 424 shown in
Gbase_1902=(0.125Gin_00^γ+0.25Gin_01^γ+0.25Gin_02^γ+0.25Gin_03^γ+0.125Gin_04^γ)^(1/γ),  (13)
where Gbase_1902 is the base graylevel of the boundary G subpixel 1902. The final graylevel of the boundary G subpixel 1902 may be determined by applying a boundary compensation coefficient determined for the boundary G subpixel 1902. In some embodiments, the final graylevel of the boundary G subpixel 1902 is determined by multiplying the base graylevel Gbase_1902 of the boundary G subpixel 1902 by the boundary compensation coefficient determined for the boundary G subpixel 1902. In such embodiments, the final graylevel Gspr_1902 of the boundary G subpixel 1902 is determined as:
Gspr_1902=ηG·Gbase_1902, (14)
where ηG is the boundary compensation coefficient determined for the boundary G subpixel 1902. In one implementation, the boundary compensation coefficient ηG for the boundary G subpixel 1902 may be empirically determined and stored in the register circuit 460 as part of the boundary compensation coefficients 468. The graylevels of other boundary G subpixels in the second display region 424 may be calculated similarly to the boundary G subpixel 1902.
Method 2000 of
The method 2000 includes receiving input image data (e.g., the image data 112 of
The method 2000 further includes generating second subpixel rendered data (e.g., the nominal pixel density region output 225 of
The method 2000 further includes updating the first display region of the display panel based at least in part on the first subpixel rendered data at step 2008. The method 2000 further includes updating the second display region of the display panel based at least in part on the second subpixel rendered data at step 2010.
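For illustration, the steps of method 2000 can be strung together as in the sketch below. The splitting of the input image data by region and the driver callback are placeholders introduced for this example and are not actual interfaces of the display driver.

```python
import numpy as np

def method_2000(input_image_data, region_map, spr_first, spr_second, drive_region):
    """One possible software model of method 2000."""
    # Receive input image data corresponding to an input image.
    first_mask = region_map == 1        # pixels of the first display region
    second_mask = region_map == 2       # pixels of the second display region
    # Generate first subpixel rendered data using the first setting
    # (the setting is assumed to be applied inside spr_first).
    first_spr = spr_first(input_image_data, first_mask)
    # Generate second subpixel rendered data using the second setting (step 2006).
    second_spr = spr_second(input_image_data, second_mask)
    # Update the first display region (step 2008) and the second display
    # region (step 2010) based on the respective subpixel rendered data.
    drive_region("first", first_spr)
    drive_region("second", second_spr)

# Minimal usage with stand-in callables:
image = np.zeros((8, 8, 3))
regions = np.ones((8, 8), dtype=int)
regions[2:6, 2:6] = 2
method_2000(image, regions,
            spr_first=lambda data, mask: data[mask],
            spr_second=lambda data, mask: data[mask],
            drive_region=lambda name, data: None)
```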
While many embodiments have been described, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope. Accordingly, the scope of the invention should be limited only by the attached claims.
This application claims benefit under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 63/248,893, filed on Sep. 27, 2021. U.S. Provisional Patent Application Ser. No. 63/248,893 is incorporated herein by reference in its entirety.