This application claims priority of Japanese Patent Application No. 2014-023874, filed on Feb. 10, 2014, the disclosure of which is incorporated herein by reference.
The present invention relates to a panel display device, a display panel driver and a method of driving a display panel, and more particularly, to an apparatus and method for correcting image data in a panel display device.
Auto contrast optimization (ACO) is one of the widely-used techniques for improving the display quality of panel display devices such as liquid crystal display devices. For example, contrast enhancement of a dark image under a situation in which the brightness of the backlight is desired to be reduced effectively suppresses deterioration of the image quality while reducing the power consumption of the liquid crystal display device. In one approach, the contrast enhancement may be achieved by performing a correction calculation on image data (which indicate the grayscale level of each subpixel of each pixel). Japanese Patent Gazette No. 4,198,720 B2 discloses a technique for achieving such a contrast enhancement, for example.
An auto contrast enhancement is most typically achieved by analyzing image data of the entire image and performing a common correction calculation for all the pixels in the image on the basis of the analysis; however, according to the inventors' study, such auto contrast enhancement may cause a problem that, when a strong contrast enhancement is performed, the number of representable grayscale levels is reduced in dark and/or bright regions of images. A strong contrast enhancement potentially causes so-called “blocked up shadows” (that is, a phenomenon in which an image element originally to be displayed with a grayscale representation is undesirably displayed as a black region with a substantially-constant grayscale level) in a dark region of an image, and also potentially causes so-called “clipped whites” in a bright region of an image.
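As a rough numerical illustration of why a strong global enhancement crushes dark grayscale levels, the sketch below applies one gamma value, chosen from the whole-image APL, to every pixel. The function names and the APL-to-gamma mapping are assumptions for illustration only, not taken from any cited document:

```python
def global_contrast_enhance(pixels, bits=8):
    """Toy global auto-contrast: choose one gamma from the whole-image APL
    and apply it to every pixel (the mapping below is an assumed example)."""
    max_level = (1 << bits) - 1
    # Normalized average picture level of the entire image.
    apl = sum(pixels) / len(pixels) / max_level
    # Darker images get a larger gamma (stronger enhancement); assumed mapping.
    gamma = 1.0 + (0.5 - min(apl, 0.5))
    return [round(max_level * (p / max_level) ** gamma) for p in pixels]
```

Applying this to a uniformly dark image pushes the low levels toward a nearly constant black, which is the "blocked up shadows" phenomenon described above.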
One known approach to addressing such a problem is local contrast correction. For example, Japanese Patent Application Publication No. 2001-245154 A discloses a local contrast correction. In the technique disclosed in this patent document, a small difference in contrast between individual regions of the original image is maintained while the maximum difference in contrast between the individual regions is restricted.
One known technique for local contrast correction is to perform contrast correction at respective positions of the image in response to the difference between the original image and an image obtained by applying low-pass filtering to the image data. Such technology is disclosed, for example, in Japanese Patent Application Publications Nos. 2008-263475 A, H07-170428 A and 2008-511048 A. The technique using low-pass filtering, however, causes a problem of an increased circuit size, since this technique requires a memory for storing the image obtained by the low-pass filtering.
Another known technique for local contrast correction is to perform a contrast correction of each area defined in the image of interest on the basis of the image characteristics of each area. Such technology is disclosed, for example, in Japanese Patent Application Publications Nos. 2001-113754 A and 2010-278937 A. In the techniques disclosed in these patent documents, a contrast correction suitable for each area is achieved by setting the input-output relation of input image data and corrected image data (image data obtained by performing contrast correction on the input image data) for pixels of each area on the basis of the image characteristics of each area.
The technique which performs a contrast correction of each area defined in the image on the basis of the image characteristics of each area may undesirably cause discontinuities in the displayed image at boundaries between adjacent areas. Such discontinuities in the displayed image may be undesirably observed as block noise.
In the technique disclosed in Japanese Patent Application Publication No. 2010-278937 A, the input-output relation of input image data and corrected image data is continuously modified to resolve such discontinuities in the displayed image (refer to
For example, let us consider the case where input image data of an image including a first region of a constant color with a luminance value of 200 and a second region of a constant color with a luminance value of 20 are provided, areas arrayed in two rows and two columns are defined in the image, and the APLs of the areas are calculated as 155, 110, 110 and 20, respectively, as illustrated in
When a gamma value γA is determined for position A in the area with an APL of 150 and a gamma value γB is determined for position B in an area with an APL of 110, the technique in which the input-output relation between the input image data and the corrected image data is continuously modified determines the gamma value so that it is continuously modified between positions A and B. The continuous modification of the gamma value, however, results in the finally-obtained grayscale levels of the respective colors indicated in the corrected image data being different even when the input image data indicate constant grayscale levels of the respective colors. This is undesirably observed as a halo effect.
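The halo mechanism described above can be sketched numerically: when the gamma value is linearly interpolated between two positions, the corrected level varies with position even though the input level is constant. The gamma values below are assumed for illustration:

```python
def corrected_level(level, gamma, max_level=255):
    # Gamma correction of a single grayscale level.
    return max_level * (level / max_level) ** gamma

def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return a + (b - a) * t

# Hypothetical gamma values determined at positions A and B (assumed numbers).
gamma_a, gamma_b = 1.0, 1.4

# Correct the same constant input level 200 at positions between A and B.
# Although the input is constant, the corrected levels differ with position,
# which is perceived as a halo in a flat-colored region.
outputs = [corrected_level(200, lerp(gamma_a, gamma_b, t)) for t in (0.0, 0.5, 1.0)]
```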
As thus discussed, there is a need for a technique which effectively reduces discontinuities in the displayed image at the boundaries of areas in a contrast correction based on the image characteristics of the respective areas defined in the image, while suppressing occurrence of a halo effect.
Disclosed herein are display devices, display panel drivers and a method for driving a display panel. In one example, a display device is provided that includes a display panel and a driver. The display panel includes a display region, wherein a plurality of areas are defined in the display region. The driver is configured to drive each pixel in the display region in response to input image data. The driver is additionally configured to (1) generate APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data; (2) calculate area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data; (3) calculate second APL data for each pixel depending on a position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located, and generate pixel-specific characterization data including the second APL data for each pixel; (4) generate output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel; and (5) drive each pixel in response to the output image data associated with each pixel. The APL-calculating filtering process for a target pixel of the pixels in the display region includes setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
In another example, a display panel driver for driving each pixel in a display region of a display panel in response to input image data is provided. A plurality of areas are defined in the display region. The driver includes an area characterization data calculation section, a pixel-specific characterization data calculation section, correction circuitry, and drive circuitry. The area characterization data calculation section is operable to generate APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data, and to calculate area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data. The pixel-specific characterization data calculation section is operable to calculate second APL data for each pixel depending on the position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located to generate pixel-specific characterization data including the second APL data for each pixel. The correction circuitry is operable to generate output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel. The drive circuitry is operable to drive each pixel in response to the output image data associated with each pixel.
The APL-calculating filtering process for a target pixel of the pixels in the display region includes setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
In another example, a display panel drive method for driving each pixel in a display region of a display panel in response to input image data is provided. The display panel drive method includes generating APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data; calculating area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data; calculating second APL data for each pixel depending on the position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located to generate pixel-specific characterization data including the second APL data for each pixel; generating output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel; and driving each pixel in response to the output image data associated with each pixel. The APL-calculating filtering process for a target pixel of the pixels in the display region includes setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
The above and other advantages and features of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
The invention will now be described herein with reference to illustrative embodiments. Those skilled in the art will recognize that many alternative embodiments can be accomplished using the teachings of the present invention and that the invention is not limited to the embodiments illustrated for explanatory purposes.
Therefore, an objective of the present invention is to provide a technique which effectively reduces discontinuities in the displayed image at the boundaries of areas in a contrast correction based on the image characteristics of the respective areas defined in the image, while suppressing occurrence of a halo effect.
Other objectives and new features of the present invention will be understood from the disclosure in the Specification and attached drawings.
In an aspect of the present invention, a display device includes: a display panel including a display region; and a driver driving each pixel in the display region in response to input image data. A plurality of areas are defined in the display region. The driver is configured: to generate APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data and to calculate area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data. The driver is further configured to calculate second APL data for each pixel depending on the position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located, and to generate pixel-specific characterization data including the second APL data for each pixel. The driver is further configured to generate output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel and to drive each pixel in response to the output image data associated with each pixel. The APL-calculating filtering process for a target pixel of the pixels in the display region involves setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
In a preferred embodiment, the driver is configured to generate square-mean-calculation image data corresponding to a square-mean-calculation luminance image by performing a square-mean-calculating filtering process on the input image data. In this case, the area characterization data include square-mean data indicating a mean of squares of luminance values of pixels in each of the areas in the square-mean-calculation luminance image, and the pixel-specific characterization data include variance data which depend on the position of each pixel and the square-mean data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located. The driver is configured to determine a gamma value of a gamma curve for each pixel based on the second APL data of the pixel-specific characterization data associated with each pixel, and to perform an operation for modifying a shape of the gamma curve for each pixel, based on the variance data of the pixel-specific characterization data associated with each pixel. The square-mean-calculating filtering process for the target pixel involves setting a luminance value of the target pixel in the square-mean-calculation luminance image to a specific square-mean-calculation alternative luminance value in response to differences of the luminance value of the target pixel from those of pixels near the target pixel in the luminance image corresponding to the input image data.
In another aspect of the present invention, a display panel driver is provided for driving each pixel in a display region of a display panel in response to input image data. A plurality of areas are defined in the display region. The driver includes: an area characterization data calculation section which generates APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data, and calculates area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data; a pixel-specific characterization data calculation section which calculates second APL data for each pixel depending on the position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located to generate pixel-specific characterization data including the second APL data for each pixel; correction circuitry which generates output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel; and drive circuitry which drives each pixel in response to the output image data associated with each pixel. The APL-calculating filtering process for a target pixel of the pixels in the display region involves setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
In a preferred embodiment, the area characterization data calculation section generates square-mean-calculation image data corresponding to a square-mean-calculation luminance image by performing a square-mean-calculating filtering process on the input image data. The area characterization data include square-mean data indicating a mean of squares of luminance values of pixels in each of the areas in the square-mean-calculation luminance image, and the pixel-specific characterization data include variance data which depend on the position of each pixel and the square-mean data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located. The correction circuitry determines a gamma value of a gamma curve for each pixel based on the second APL data of the pixel-specific characterization data associated with each pixel, and performs an operation for modifying a shape of the gamma curve for each pixel, based on the variance data of the pixel-specific characterization data associated with each pixel. The square-mean-calculating filtering process for the target pixel involves setting a luminance value of the target pixel in the square-mean-calculation luminance image to a specific square-mean-calculation alternative luminance value in response to differences of the luminance value of the target pixel from those of pixels near the target pixel in the luminance image corresponding to the input image data.
In another aspect of the present invention, a display panel drive method is provided for driving each pixel in a display region of a display panel in response to input image data. The method includes: generating APL-calculation image data corresponding to an APL-calculation luminance image by performing an APL-calculating filtering process on the input image data; calculating area characterization data including first APL data indicating an average picture level of each of the areas in the APL-calculation luminance image for each of the areas, from the APL-calculation image data; calculating second APL data for each pixel depending on the position of each pixel and the first APL data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located to generate pixel-specific characterization data including the second APL data for each pixel; generating output image data associated with each pixel by performing a correction calculation based on the second APL data of the pixel-specific characterization data associated with each pixel; and driving each pixel in response to the output image data associated with each pixel. The APL-calculating filtering process for a target pixel of the pixels in the display region involves setting a luminance value of the target pixel in the APL-calculation luminance image to a specific APL-calculation alternative luminance value in response to differences of a luminance value of the target pixel from those of pixels near the target pixel in a luminance image corresponding to the input image data.
In one preferred embodiment, the drive method further includes: generating square-mean-calculation image data corresponding to a square-mean-calculation luminance image by performing a square-mean-calculating filtering process on the input image data. In this case, the area characterization data include square-mean data indicating a mean of squares of luminance values of pixels in each of the areas in the square-mean-calculation luminance image, and the pixel-specific characterization data include variance data which depend on the position of each pixel and the square-mean data of the area characterization data associated with the area in which each pixel is located and with areas adjacent to the area in which each pixel is located. In the step of generating the output image data, a gamma value of a gamma curve for each pixel is determined on the basis of the second APL data of the pixel-specific characterization data associated with each pixel, and the shape of the gamma curve for each pixel is modified on the basis of the variance data of the pixel-specific characterization data associated with each pixel. The square-mean-calculating filtering process for the target pixel involves setting a luminance value of the target pixel in the square-mean-calculation luminance image to a specific square-mean-calculation alternative luminance value in response to differences of the luminance value of the target pixel from those of pixels near the target pixel in the luminance image corresponding to the input image data.
The present invention effectively reduces discontinuities in the displayed image at the boundaries of areas in a contrast correction based on the image characteristics of the respective areas defined in the image, while suppressing occurrence of a halo effect.
The LCD panel 2 includes a display region 5 and a gate line drive circuit 6 (also referred to as gate-in-panel (GIP) circuit). Disposed in the display region 5 are a plurality of gate lines 7 (also referred to as scan lines or address lines), a plurality of data lines 8 (also referred to as signal lines or source lines) and a plurality of pixels 9. In the present embodiment, the number of the gate lines 7 is v, the number of the data lines 8 is 3h and the pixels 9 are arrayed in v rows and h columns, where v and h are integers equal to or more than two. In the following, the horizontal direction of the display region 5 (that is, the direction in which the gate lines 7 are extended) may be referred to as X-axis direction and the vertical direction of the display region 5 (that is, the direction in which the data lines 8 are extended) may be referred to as Y-axis direction.
In the present embodiment, each pixel 9 includes three subpixels: an R subpixel 11R, a G subpixel 11G and a B subpixel 11B, where the R subpixel 11R is a subpixel corresponding to a red color (that is, a subpixel displaying a red color), the G subpixel 11G is a subpixel corresponding to a green color (that is, a subpixel displaying a green color) and the B subpixel 11B is a subpixel corresponding to a blue color (that is, a subpixel displaying a blue color). Note that the R subpixel 11R, G subpixel 11G and B subpixel 11B may be collectively referred to as subpixel 11 if not distinguished from each other. In the present embodiment, subpixels 11 are arrayed in v rows and 3h columns on the LCD panel 2. Each subpixel 11 is connected with one corresponding gate line 7 and one corresponding data line 8. In driving respective subpixels 11 of the LCD panel 2, gate lines 7 are sequentially selected and desired drive voltages are written into the subpixels 11 connected with a selected gate line 7 via the data lines 8. This allows setting the respective subpixels 11 to desired grayscale levels to thereby display a desired image in the display region 5 of the LCD panel 2.
Referring back to
The interface circuit 21 receives the input image data DIN and synchronization data DSYNC from the processor 4 and forwards the input image data DIN to the approximate gamma correction circuit 22 and the synchronization data DSYNC to the timing control circuit 27.
The approximate gamma correction circuit 22 performs a correction calculation (or gamma correction) on the input image data DIN in accordance with a gamma curve specified by correction point data set CP_selk received from the correction point data calculation circuit 29, to thereby generate output image data DOUT. In the following, data indicating the grayscale level of an R subpixel 11R of the output image data DOUT may be referred to as output image data DOUTR. Correspondingly, data indicating the grayscale level of a G subpixel 11G of the output image data DOUT may be referred to as output image data DOUTG and data indicating the grayscale level of a B subpixel 11B of the output image data DOUT may be referred to as output image data DOUTB.
The number of bits of the output image data DOUT is larger than that of the input image data DIN. This effectively avoids losing information of the grayscale levels of pixels in the correction calculation. In the present embodiment, in which the input image data DIN represent the grayscale level of each subpixel 11 of each pixel 9 with eight bits, the output image data DOUT may be, for example, generated as data that represent the grayscale level of each subpixel 11 of each pixel 9 with 10 bits.
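The embodiment states only that the output image data have more bits than the input image data; one common way to widen 8-bit levels to 10 bits without losing grayscale information is bit replication. The mapping below is therefore an assumption, sketched for illustration:

```python
def expand_8_to_10(level8):
    """Widen an 8-bit grayscale level to 10 bits by bit replication:
    maps 0 -> 0 and 255 -> 1023, spreading intermediate levels evenly."""
    return (level8 << 2) | (level8 >> 6)
```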
Although a gamma correction is most typically achieved with an LUT (lookup table), the gamma correction performed by the approximate gamma correction circuit 22 in the present embodiment is achieved with an arithmetic expression, without using an LUT. The exclusion of an LUT from the approximate gamma correction circuit 22 effectively allows reducing the circuit size of the approximate gamma correction circuit 22 and also reducing the power consumption necessary for switching the gamma value. It should be noted however that the approximate gamma correction circuit 22 uses an approximate expression, not the exact expression, for achieving the gamma correction in the present embodiment. The approximate gamma correction circuit 22 determines coefficients of the approximate expression used for the gamma correction in accordance with a desired gamma curve to achieve a gamma correction with a desired gamma value. A gamma correction with the exact expression requires a calculation of an exponential function and this undesirably increases the circuit size. In the present embodiment, in contrast, the gamma correction is achieved with an approximate expression which does not include an exponential function to thereby reduce the circuit size.
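A minimal illustration of replacing the exact power function with multiply-add arithmetic is sketched below. The actual circuit determines the coefficients from the correction point data CP0 to CP5 described later; the blend coefficients here are assumptions chosen only to show that no pow()/exponential evaluation is needed:

```python
def approx_gamma(level, gamma, max_level=255.0):
    """Approximate level**gamma on [0, max_level] by blending the identity
    term and the square term (coefficient mapping assumed for illustration)."""
    t = level / max_level
    # Blend weight derived from the desired gamma, for gamma roughly in [1, 2].
    w = min(max(gamma - 1.0, 0.0), 1.0)
    # Multiply-add only -- no pow()/exp() evaluation.
    return max_level * ((1.0 - w) * t + w * t * t)
```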
The shapes of the gamma curves used in the gamma correction performed by the approximate gamma correction circuit 22 are specified by correction point data sets CP_selR, CP_selG or CP_selB. To perform gamma corrections with different gamma values for the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9, different correction point data sets are respectively prepared for the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9 in the present embodiment. The correction point data set CP_selR is used for a gamma correction of input image data DINR associated with an R subpixel 11R. Correspondingly, the correction point data set CP_selG is used for a gamma correction of input image data DING associated with a G subpixel 11G and the correction point data set CP_selB is used for a gamma correction of input image data DINB associated with a B subpixel 11B.
As illustrated in
The coefficients of the arithmetic expression used for the gamma correction by the approximate gamma correction unit 22R are determined on the basis of the correction point data CP0 to CP5 of the correction point data set CP_selR. Correspondingly, the coefficients of the arithmetic expressions used for the gamma corrections by the approximate gamma correction units 22G and 22B are determined on the basis of the correction point data CP0 to CP5 of the correction point data set CP_selG and CP_selB, respectively.
The approximate gamma correction units 22R, 22G and 22B have the same function, except that the input image data and the correction point data sets fed thereto are different.
Referring back to
The timing control circuit 27 performs timing control of the entire drive IC 3 in response to the synchronization data DSYNC. In detail, the timing control circuit 27 generates the latch signal SSTB in response to the synchronization data DSYNC and feeds the generated latch signal SSTB to the latch circuit 24. The latch signal SSTB is a control signal instructing the latch circuit 24 to latch the color-reduced data DOUT
The characterization data calculation circuit 28 and the correction point data calculation circuit 29 constitute circuitry which generates the correction point data sets CP_selR, CP_selG and CP_selB in response to the input image data DIN and feeds the generated correction point data sets CP_selR, CP_selG and CP_selB to the approximate gamma correction circuit 22.
In detail, the characterization data calculation circuit 28 includes an area characterization data calculation section 28a and a pixel-specific characterization data calculation section 28b. The area characterization data calculation section 28a calculates area characterization data DCHR
The display region 5 of the LCD panel 2 is divided into a plurality of areas. In the example illustrated in
The area characterization data DCHR
It should be noted that the area characterization data DCHR
Referring back to
The correction point data calculation circuit 29 generates the correction point data sets CP_selR, CP_selG and CP_selB in response to the pixel-specific characterization data DCHR
The rate-of-change filter 30 calculates the luminance value of each pixel 9 by performing a color transformation (such as an RGB-YUV transformation or an RGB-YCbCr transformation) on the input image data DIN (which describe the grayscale levels of the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9), and generates APL-calculation image data DFILTER
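This excerpt does not spell out the exact neighborhood or replacement condition of the rate-of-change filter; the sketch below assumes a one-dimensional neighborhood, a fixed difference threshold and a fixed alternative luminance value, purely to illustrate how a pixel whose luminance differs strongly from its neighbors is replaced before the APL is computed:

```python
def apl_calculating_filter(lum, threshold=32, alt_value=0):
    """Sketch of the APL-calculating filtering process: replace the target
    pixel's luminance with an alternative value when it differs strongly
    from every nearby pixel (threshold, neighborhood and alt_value assumed)."""
    out = list(lum)
    for i, y in enumerate(lum):
        neighbors = [lum[j] for j in (i - 1, i + 1) if 0 <= j < len(lum)]
        # Replace only isolated fine details that would otherwise skew the APL.
        if neighbors and all(abs(y - n) > threshold for n in neighbors):
            out[i] = alt_value
    return out
```

Flat regions pass through unchanged, so the per-area APL is dominated by large uniform regions rather than isolated fine details.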
The APL calculation circuit 31 calculates the APL of each area, which may be referred to as APL(N, M), from the APL-calculation image data DFILTER
The rate-of-change filter 32, on the other hand, calculates the luminance value of each pixel 9 by performing a color transformation on the input image data DIN, and generates square-mean-calculation image data DFILTER
The square-mean data calculation circuit 33 calculates square-mean data <Y2>(N, M) which indicate the mean of squares of the luminance values of pixels 9 in each area, from the square-mean calculation image data DFILTER
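The APL and the square-mean data together determine the luminance variance of an area through the identity var(Y) = <Y^2> - <Y>^2, which is presumably why both feature quantities are kept. A minimal sketch:

```python
def area_statistics(luma):
    """Per-area feature quantities: the APL (mean luminance), the mean of
    squares, and the variance that follows from them."""
    n = len(luma)
    apl = sum(luma) / n                      # first APL data
    mean_sq = sum(y * y for y in luma) / n   # square-mean data <Y^2>
    variance = mean_sq - apl * apl           # var(Y) = <Y^2> - <Y>^2
    return apl, mean_sq, variance
```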
In the following, in order to distinguish the filtering processes performed by the rate-of-change filters 30 and 32, the filtering process performed by the rate-of-change filter 30 is referred to as the APL-calculating filtering process (first filtering process), and the filtering process performed by the rate-of-change filter 32 is referred to as the square-mean-calculating filtering process (second filtering process). As is discussed later, the APL-calculating filtering process and the square-mean-calculating filtering process performed by the rate-of-change filters 30 and 32 are significant for suppressing discontinuities in the displayed image at the borders between the areas while also suppressing occurrence of a halo effect.
According to these definitions, the APL calculation circuit 31 calculates the APL of each of the areas in an image obtained by applying the APL-calculating filtering process to a luminance image associated with input image data DIN (the image thus obtained may be referred to as “APL-calculation luminance image”, hereinafter). The APL calculated for an area A(N, M) may be denoted by APL(N, M), hereinafter. The APL of each area in an APL-calculation luminance image associated with APL-calculation image data DFILTER
The square-mean data calculation circuit 33 calculates the mean of squares of the luminance values of pixels 9 in each area of an image obtained by performing a square-mean-calculating filtering process on a luminance image associated with input image data DIN (the image thus obtained may be referred to as “square-mean calculation luminance image”, hereinafter). The mean of squares of the luminance values of pixels 9 calculated for the area A(N, M) may be denoted by <Y2>(N, M), hereinafter.
In the present embodiment, the APL of each area of an APL-calculation luminance image and the mean of squares of the luminance values of pixels 9 in each area of a square-mean calculation luminance image are used as feature quantities indicated by area characterization data DCHR
The characterization data calculation result memory 34 sequentially receives and stores the APL data and square-mean data of the area characterization data DCHR
The area characterization data memory 35 sequentially receives the area characterization data DCHR
Filtered characterization data DCHR
In the present embodiment, the area characterization data DCHR
The filtered characterization data memory 37 stores therein the filtered characterization data DCHR
The pixel-specific characterization data calculation circuit 38 calculates pixel-specific characterization data DCHR
Pixel-specific characterization data DCHR
The correction point data set storage register 41 stores therein a plurality of correction point data sets CP#1 to CP#m. The correction point data sets CP#1 to CP#m are used as seed data for determining the above-described correction point data sets CP_LR, CP_LG and CP_LB. Each of the correction point data sets CP#1 to CP#m includes correction point data CP0 to CP5 defined as illustrated in
The interpolation/selection circuit 42 determines gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB on the basis of the APL data APL_PIXEL(y, x) of the pixel-specific characterization data DCHR
In one embodiment, the interpolation/selection circuit 42 may select one of the correction point data sets CP#1 to CP#m on the basis of the gamma value γ_PIXELk and determine the correction point data set CP_Lk as the selected one of the correction point data sets CP#1 to CP#m. Alternatively, the interpolation/selection circuit 42 may determine the correction point data set CP_Lk by selecting two of correction point data sets CP#1 to CP#m on the basis of the gamma value γ_PIXELk and applying a linear interpolation to the selected two correction point data sets. Details of the determination of the correction point data sets CP_LR, CP_LG and CP_LB are described later. The correction point data sets CP_LR, CP_LG and CP_LB determined by the interpolation/selection circuit 42 are forwarded to the correction point data adjustment circuit 43.
The correction point data adjustment circuit 43 modifies the correction point data sets CP_LR, CP_LG and CP_LB on the basis of the variance data σ2_PIXEL(y, x) included in the pixel-specific characterization data DCHR
Next, an overview of the operation of the liquid crystal display device 1 in the present embodiment, particularly the correction calculation for contrast correction, is given below.
Overall, the correction calculation in the present embodiment includes a first phase in which the shape of the gamma curve used for the contrast correction is determined for each subpixel 11 of each pixel 9 (steps S10 to S16) and a second phase in which a correction calculation is performed on input image data DIN associated with each subpixel 11 of each pixel 9 in accordance with the determined gamma curve (step S17). As the shape of a gamma curve used for contrast correction is specified by a correction point data set CP_selk in the present embodiment, the first phase involves determining a correction point data set CP_selk for each subpixel 11 of each pixel 9 and the second phase involves performing a correction calculation on input image data DIN associated with each subpixel 11 in accordance with the determined correction point data set CP_selk.
Overall, the determination of the shape of the gamma curve in the first phase is achieved as follows: Note that details of the calculation at each step in the first phase are described later.
At step S10, APL-calculation image data DFILTER
At step S11, area characterization data DCHR
At step S12, filtered characterization data DCHR
Furthermore, at step S13, pixel-specific characterization data DCHR
At step S14, the gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB of gamma curves used for correction calculation of each pixel 9 are calculated from APL data APL_PIXEL(y, x) of pixel-specific characterization data DCHR
At step S16, the correction point data sets CP_LR, CP_LG and CP_LB selected for each pixel 9 are modified in response to variance data σ2_PIXEL(y, x) of pixel-specific characterization data DCHR
The correction point data sets CP_selR, CP_selG and CP_selB are forwarded to the approximate gamma correction circuit 22. At step S17, the approximate gamma correction circuit 22 performs a correction calculation on input image data DIN associated with each pixel 9 in accordance with the gamma curves specified by the correction point data sets CP_selR, CP_selG and CP_selB determined for each pixel 9.
At the above-described processes at steps S11 to S16, a correction calculation for input image data DIN associated with each pixel 9 located in a certain area is basically achieved by determining pixel-specific characterization data DCHR
In such a case, as discussed in the above with reference to
The APL-calculating filtering process and square-mean-calculating filtering process performed at step S10 are directed to address the problem of the halo effect.
The APL-calculating filtering process in the present embodiment includes a calculation to set the luminance value of a pixel 9 of interest (which may be referred to as “target pixel”, hereinafter) to a specific luminance value (hereinafter, referred to as “APL-calculation alternative luminance value”) in response to the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image (that is, the luminance image associated with the input image data DIN). When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are small, the luminance value of the target pixel of the APL-calculation luminance image (luminance image obtained by the APL-calculating filtering processes) is set to the APL-calculation alternative luminance value. Note that the APL-calculation alternative luminance value is a fixed value. When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are large, on the other hand, the luminance value of the target pixel of the APL-calculation luminance image is set to be equal to the luminance value of the target pixel of the original image. When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are medium, the luminance value of the target pixel of the APL-calculation luminance image is determined as a weighted average of the luminance value of the target pixel of the original image and the APL-calculation alternative luminance value.
According to such calculation, the APL of an area mainly consisting of a region in which the changes in the luminance value are small is calculated as the APL-calculation alternative luminance value or a value close to the APL-calculation alternative luminance value. As a result, when two areas each of which mainly consists of a region in which the changes in the luminance value are small are adjacent, the APLs of the adjacent two areas are calculated as close values and therefore the gamma values of the gamma curves are calculated as almost the same value with respect to the adjacent two areas at step S14. This results in that gamma curves with similar shapes are determined for the pixels 9 in the adjacent two areas, effectively suppressing occurrence of a halo effect. It should be noted here that, although the luminance values of pixels 9 remain unchanged in the APL-calculating filtering process for a region in which the changes in the luminance value are large, the halo effect is not remarkable in such a case. Furthermore, discontinuities in an image finally displayed in the display region 5 are reduced, because an intermediate calculation of the calculations performed for a region in which the changes in the luminance value are large and for a region in which the changes in the luminance value are small is performed for a region in which the changes in the luminance value are medium.
The APL-calculation alternative luminance value is preferably determined as the average value of the allowed maximum value and allowed minimum value of the luminance value of the luminance image associated with the input image data DIN (that is, the luminance image obtained by performing a color transformation on the input image data DIN). Note that the allowed maximum value and allowed minimum value of the luminance value of the luminance image associated with the input image data DIN are determined by the number of bits of data representing the luminance value of each pixel of the luminance image. When the number of bits of data representing the luminance value of each pixel of the luminance image of the input image data DIN is eight, the allowed minimum value is 0 and the allowed maximum value is 255; in this case, the APL-calculation alternative luminance value is preferably determined as 128. It should be noted however that the APL-calculation alternative luminance value may be determined as any value ranging from the allowed minimum value to the allowed maximum value.
Similarly, the square-mean-calculating filtering process in the present embodiment includes a calculation to set the luminance value of the target pixel to a specific luminance value (hereinafter, referred to as “square-mean-calculation alternative luminance value”) in response to the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image (that is, the luminance image associated with the input image data DIN). Note that the square-mean-calculation alternative luminance value is a fixed value. When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are small, the luminance value of the target pixel of the square-mean calculation luminance image is set to the square-mean calculation alternative luminance value. When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are large, on the other hand, the luminance value of the target pixel of the square-mean calculation luminance image is set to be equal to the luminance value of the target pixel of the original image. When the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image are medium, the luminance value of the target pixel of the square-mean calculation luminance image is determined as a weighted average of the luminance value of the target pixel of the original image and the square-mean-calculation alternative luminance value.
According to such calculation, the mean of squares of the luminance values indicated by the square-mean data associated with an area mainly consisting of a region in which the changes in the luminance value are small is calculated as the square-mean-calculation alternative luminance value or a value close to the square-mean-calculation alternative luminance value. As a result, when two areas each of which mainly consists of a region in which the changes in the luminance value are small are adjacent to each other, the square means of the luminance values are calculated as close values for the adjacent two areas and therefore the shapes of the gamma curves are modified to almost the same degree with respect to the adjacent two areas at step S16. This results in that gamma curves with similar shapes are determined for the pixels 9 in the adjacent two areas, effectively suppressing occurrence of a halo effect. It should be noted here that, although the luminance values of pixels 9 remain unchanged in the square-mean-calculating filtering process for a region in which the changes in the luminance value are large, the halo effect is not remarkable in such a case. Furthermore, discontinuities in an image finally displayed in the display region 5 are reduced, because an intermediate calculation of the calculations performed for a region in which the changes in the luminance value are large and performed for a region in which the changes in the luminance value are small is performed for a region in which the changes in the luminance value are medium.
When the APL-calculating filtering process and the square-mean calculating filtering process are not performed, as illustrated in the upper row of
When the APL-calculating filtering process and the square-mean calculating filtering process are performed, on the other hand, as illustrated in the lower row of
In the following, a detailed description is given of the calculations performed at the respective steps illustrated in
As described above, at step S10, the APL-calculating filtering process and the square-mean-calculating filtering process are performed on input image data DIN to calculate APL-calculation image data (image data of an APL-calculation luminance image) and square-mean-calculation image data (image data of a square-mean-calculation luminance image).
In the APL-calculating filtering process in the present embodiment, the luminance value YjAPL of pixel #j (that is, the target pixel) in the APL-calculation luminance image is calculated in accordance with the following expression (1):
YjAPL=(1−α)·YAPL+α·Yj, (1)
where Yj is the luminance value of pixel #j in the luminance image corresponding to the input image data DIN, YAPL is the APL-calculation alternative luminance value, and α is a coefficient of change which depends on the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel.
The above-described expression (1) means that the luminance value YjAPL of pixel #j in the APL-calculation luminance image is calculated as a weighted average of the APL-calculation alternative luminance value and the luminance value of pixel #j in the luminance image corresponding to the input image data DIN, and the weights given to the APL-calculation alternative luminance value and the luminance value of pixel #j in the luminance image corresponding to the input image data DIN depend on the coefficient of change α in the calculation of the weighted average. The luminance value YjAPL of pixel #j in the APL-calculation luminance image is equal to the APL-calculation alternative luminance value YAPL when the coefficient of change α is zero, and is equal to the luminance value Yj of pixel #j in the original image when α is one.
Correspondingly, the luminance value Yj<Y2> of pixel #j (that is, the target pixel) in the square-mean-calculation luminance image is calculated in accordance with the following expression (2):
Yj<Y2>=(1−α)·Y<Y2>+α·Yj, (2)
where Y<Y2> is the square-mean-calculation alternative luminance value, Yj is the luminance value of pixel #j in the luminance image corresponding to the input image data DIN, and α is the coefficient of change described above.
The above-described expression (2) means that the luminance value Yj<Y2> of pixel #j in the square-mean-calculation luminance image is calculated as a weighted average of the square-mean-calculation alternative luminance value and the luminance value of pixel #j in the luminance image corresponding to the input image data DIN, and the weights given to the square-mean-calculation alternative luminance value and the luminance value of pixel #j in the luminance image corresponding to the input image data DIN depend on the coefficient of change α in the calculation of the weighted average. The luminance value Yj<Y2> of pixel #j in the square-mean-calculation luminance image is equal to the square-mean-calculation alternative luminance value Y<Y2> when the coefficient of change α is zero, and is equal to the luminance value Yj of pixel #j in the original image when α is one.
In the example illustrated in
α=|YSUM|/K (for |YSUM|<K), and
α=1 (for |YSUM|≧K), (3)
where K is a predetermined coefficient (fixed value).
Let us consider the case when pixels #1 to #3 are arrayed in the X-axis direction (that is, the sub-pixels 11 of pixels #1 to #3 are connected with the same gate line 7) and pixel #3 is selected as the target pixel, where pixel #2 is the pixel adjacent on the left of pixel #3 and pixel #1 is the pixel adjacent on the left of pixel #2. The coefficient of change α is calculated from the convolution sum YSUM of the respective elements of a 1×3 filter matrix and the luminance values of pixels #1 to #3. The values of the respective elements of the filter matrix are defined as illustrated in
In Example 1 in which the luminance values of pixels #1, #2 and #3 in the original image are 100, 101 and 102, respectively, the convolution sum YSUM is calculated as zero and the coefficient of change α is also calculated as zero. In Example 2 in which the luminance values of pixels #1, #2 and #3 in the original image are 100, 101 and 104, respectively, on the other hand, the convolution sum YSUM is calculated as −2 (that is, the absolute value |YSUM| of the convolution sum YSUM is calculated as 2) and the coefficient of change α is calculated as 0.5.
In the configuration in which the coefficient of change α is calculated from the convolution sum YSUM of the respective elements of a filter matrix and the luminance values of three pixels 9 which include the target pixel and are arrayed in the X-axis direction in the original image, the coefficient of change α can be calculated without using input image data DIN associated with pixels connected with the gate lines 7 adjacent to the gate line 7 connected with the target pixel. This preferably reduces the size of the circuit used for the calculation of the coefficient of change α.
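As a rough sketch, expressions (1) and (3) can be combined as below. The 1×3 filter matrix (−1, 2, −1) and the coefficient K=4 are assumptions reverse-engineered from the worked Examples 1 and 2 above; the actual values are defined in a figure not reproduced here.

```python
# Sketch of the APL-calculating filtering process of expressions (1) and (3).
# FILTER_MATRIX and K are assumed values consistent with Examples 1 and 2,
# not values stated in the text.

FILTER_MATRIX = (-1, 2, -1)  # assumed second-difference filter matrix
K = 4.0                      # assumed predetermined coefficient (fixed value)

def coefficient_of_change(y1, y2, y3):
    """Expression (3): alpha from the convolution sum Y_SUM of the filter
    matrix and the luminance values of pixels #1 to #3 (#3 is the target)."""
    y_sum = (FILTER_MATRIX[0] * y1
             + FILTER_MATRIX[1] * y2
             + FILTER_MATRIX[2] * y3)
    return min(abs(y_sum) / K, 1.0)

def filtered_luminance(y1, y2, y3, y_alt):
    """Expression (1)/(2): weighted average of the target pixel's original
    luminance y3 and the alternative luminance value y_alt."""
    alpha = coefficient_of_change(y1, y2, y3)
    return (1.0 - alpha) * y_alt + alpha * y3
```

With α = 0 (flat region) the target pixel takes the alternative value; with α = 1 (strong local change) it keeps its original luminance, matching the three cases described in the text.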
Various matrices may be used as the filter matrix for the calculation of the coefficient of change α.
At step S11, area characterization data DCHR
More specifically, in the present embodiment, APL data of area characterization data DCHR
where Data_count is the number of pixels 9 located in the area A (N, M), YjAPL is the luminance value of each pixel 9 in the APL-calculation luminance image and Σ represents the sum with respect to area A(N, M).
On the other hand, square-mean data of area characterization data DCHR
where Data_count is the number of pixels 9 located in the area A(N, M), Yj<Y2> is the luminance value of each pixel 9 in the square-mean-calculation luminance image and Σ represents the sum with respect to area A(N, M).
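The per-area calculations at step S11, together with the variance of expression (A) used later at step S12, can be sketched as follows. This is a minimal illustration in which the filtered luminance values of one area are passed as flat lists:

```python
# Sketch of the per-area calculations at step S11: APL(N, M) is the mean of
# the APL-calculation luminance values in area A(N, M), and <Y2>(N, M) is
# the mean of squares of the square-mean-calculation luminance values.
# area_variance implements expression (A), introduced at step S12.

def area_apl(luma_apl_values):
    """Mean of the APL-calculation luminance values of one area."""
    return sum(luma_apl_values) / len(luma_apl_values)

def area_square_mean(luma_sq_values):
    """Mean of squares of the square-mean-calculation luminance values."""
    return sum(y * y for y in luma_sq_values) / len(luma_sq_values)

def area_variance(apl, square_mean):
    """Expression (A): sigma^2(i, j) = <Y2>(i, j) - {APL(i, j)}^2."""
    return square_mean - apl * apl
```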
At step S12, filtered characterization data DCHR
As understood from
Referring to
The area characterization data DCHR
APL_FILTER(0,0)=APL(0,0), (6a)
σ2_FILTER(0,0)=σ2(0,0), (6b)
APL_FILTER(0,Mmax)=APL(0,Mmax−1), (6c)
σ2_FILTER(0,Mmax)=σ2(0,Mmax−1), (6d)
APL_FILTER(Nmax,0)=APL(Nmax−1,0), (6e)
σ2_FILTER(Nmax,0)=σ2(Nmax−1,0), (6f)
APL_FILTER(Nmax,Mmax)=APL(Nmax−1,Mmax−1), and (6g)
σ2_FILTER(Nmax,Mmax)=σ2(Nmax−1,Mmax−1), (6h)
where APL_FILTER(i, j) is the value of APL data associated with the vertex VTX(i, j) and σ2_FILTER(i, j) is the value of variance data associated with the vertex VTX(i, j). As described above, APL(i, j) is the APL of the area A(i, j) and σ2(i, j) is the variance of the luminance values of the pixels 9 in the area A(i, j) and is obtained by the following expression (A):
σ2(i,j)=<Y2>(i,j)−{APL(i,j)}2. (A)
The vertices positioned on the four sides of the display region 5 (other than the vertices at the four corners) each belong to adjacent two areas. APL data and variance data of the filtered characterization data associated with these vertices are calculated in accordance with the following expressions:
APL_FILTER(0,M)={APL(0,M−1)+APL(0,M)}/2, (7a)
σ2_FILTER(0,M)={σ2(0,M−1)+σ2(0,M)}/2, (7b)
APL_FILTER(N,0)={APL(N−1,0)+APL(N,0)}/2, (7c)
σ2_FILTER(N,0)={σ2(N−1,0)+σ2(N,0)}/2, (7d)
APL_FILTER(Nmax,M)={APL(Nmax−1,M−1)+APL(Nmax−1,M)}/2, (7e)
σ2_FILTER(Nmax,M)={σ2(Nmax−1,M−1)+σ2(Nmax−1,M)}/2, (7f)
APL_FILTER(N,Mmax)={APL(N−1,Mmax−1)+APL(N,Mmax−1)}/2, and (7g)
σ2_FILTER(N,Mmax)={σ2(N−1,Mmax−1)+σ2(N,Mmax−1)}/2, (7h)
where M is an integer from one to Mmax−1 and N is an integer from one to Nmax−1. Note that σ2(i, j) is given by the above-described expression (A).
The vertices which are located neither at the four corners of the display region 5 nor on the four sides (that is, the vertices located at intermediate positions) each belong to adjacent four areas arrayed in two rows and two columns. APL data of filtered characterization data DCHR
APL_FILTER(N,M)={APL(N−1,M−1)+APL(N−1,M)+APL(N,M−1)+APL(N,M)}/4, and (8a)
σ2_FILTER(N,M)={σ2(N−1,M−1)+σ2(N−1,M)+σ2(N,M−1)+σ2(N,M)}/4. (8b)
Note that σ2(i, j) is given by the above-described expression (A).
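A compact way to cover the corner, edge and interior cases of expressions (6a) to (8b) at once is to clamp the indices of the four candidate areas surrounding a vertex to the area grid and average the distinct areas that remain. The sketch below assumes the per-area APLs are held in an Nmax×Mmax list of lists; variance data are handled identically:

```python
# Sketch of step S12: the filtered APL data attached to vertex VTX(n, m)
# average the APLs of the areas sharing that vertex -- one area at a corner,
# two on an edge, four in the interior.

def vertex_apl(apl, n, m):
    """apl: Nmax x Mmax grid of per-area APLs; returns APL_FILTER(n, m)."""
    nmax, mmax = len(apl), len(apl[0])
    # Clamp the indices of the areas above/below and left/right of the
    # vertex to the grid; the sets keep only the distinct adjacent areas.
    rows = {max(0, min(n - 1, nmax - 1)), max(0, min(n, nmax - 1))}
    cols = {max(0, min(m - 1, mmax - 1)), max(0, min(m, mmax - 1))}
    cells = [apl[r][c] for r in rows for c in cols]
    return sum(cells) / len(cells)
```

At a corner the clamped sets collapse to a single area (expressions (6a)-(6h)), on an edge to two areas (expressions (7a)-(7h)), and in the interior to four areas (expressions (8a)-(8b)).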
At step S13, pixel-specific characterization data DCHR
In
s=x−(Xarea×M), and (9a)
t=y−(Yarea×N), (9b)
where x is the position represented in units of pixels in the display region 5 in the X-axis direction, Xarea is the number of pixels arrayed in the X-axis direction in each area, y is the position represented in units of pixels in the display region 5 in the Y-axis direction, and Yarea is the number of pixels arrayed in the Y-axis direction in each area. As described above, when the display region 5 of the LCD panel 2 includes 1920×1080 pixels and is divided into areas arrayed in six rows and six columns, Xarea (the number of pixels arrayed in the X-axis direction in each area) is 320 (=1920/6) and Yarea (the number of pixels arrayed in the Y-axis direction in each area) is 180 (=1080/6).
The pixel-specific characterization data DCHR
where APL_PIXEL(y, x) is the value of APL data calculated for a pixel 9 positioned at an X-axis direction position x and a Y-axis direction position y in the display region 5 and σ2_PIXEL(y, x) is the value of variance data calculated for the pixel 9.
The above-described processes at steps S12 and S13 would be understood as a whole as processing to calculate pixel-specific characterization data DCHR
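Expression (10) itself is not reproduced in this text. The sketch below therefore assumes a plain bilinear interpolation of the filtered characterization data of the four vertices surrounding the pixel, weighted by the in-area offsets s and t of expressions (9a) and (9b); it should be read as an illustration of the combined processing of steps S12 and S13, not as the exact patented formula:

```python
# Assumed bilinear interpolation of vertex data to obtain per-pixel data.
# v00/v01 are the filtered APLs of the top-left/top-right vertices of the
# area containing the pixel, v10/v11 the bottom-left/bottom-right ones.

def pixel_apl(v00, v01, v10, v11, s, t, xarea, yarea):
    """(s, t): pixel offset inside the area per expressions (9a)/(9b);
    (xarea, yarea): area size in pixels. Returns APL_PIXEL(y, x)."""
    fx = s / xarea  # horizontal weight, 0 at the left vertex column
    fy = t / yarea  # vertical weight, 0 at the top vertex row
    top = (1.0 - fx) * v00 + fx * v01
    bottom = (1.0 - fx) * v10 + fx * v11
    return (1.0 - fy) * top + fy * bottom
```

Because adjacent areas share vertices, the interpolated values vary continuously across area borders, which is the stated purpose of steps S12 and S13.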
At step S14, the gamma values to be used for the gamma correction of input image data DIN associated with each pixel 9 are calculated from the APL data of the pixel-specific characterization data DCHR
γ_PIXELR=γ_STDR+APL_PIXEL(y,x)·ηR, (11a)
where γ_PIXELR is the gamma value to be used for the gamma correction of the input image data DIN associated with the R subpixel 11R of the certain pixel 9, γ_STDR is a given reference gamma value and ηR is a given positive proportionality constant. It should be noted that, in accordance with expression (11a), the gamma value γ_PIXELR increases as APL_PIXEL(y, x) increases.
Correspondingly, the gamma values to be used for the gamma corrections of input image data DIN associated with the G subpixel 11G and B subpixel 11B of the certain pixel 9 positioned at the X-axis direction position x and the Y-axis direction position y in the display region 5 are respectively calculated in accordance with the following expressions:
γ_PIXELG=γ_STDG+APL_PIXEL(y,x)·ηG, and (11b)
γ_PIXELB=γ_STDB+APL_PIXEL(y,x)·ηB, (11c)
where γ_PIXELG and γ_PIXELB are the gamma values to be respectively used for the gamma corrections of the input image data DIN associated with the G subpixel 11G and B subpixel 11B of the certain pixel 9, γ_STDG and γ_STDB are given reference gamma values and ηG and ηB are given proportionality constants. γ_STDR, γ_STDG and γ_STDB may be equal to each other, or different, and ηR, ηG and ηB may be equal to each other, or different. It should be noted that the gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB are calculated for each pixel 9.
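Expressions (11a) to (11c) amount to an affine mapping from the pixel-specific APL to a gamma value. The reference gamma values and proportionality constants below are placeholders for illustration, not values given in the text:

```python
# Sketch of step S14: gamma_PIXELk = gamma_STDk + APL_PIXEL(y, x) * eta_k.
# GAMMA_STD and ETA are placeholder constants (assumptions).

GAMMA_STD = {"R": 2.2, "G": 2.2, "B": 2.2}  # placeholder reference gammas
ETA = {"R": 0.001, "G": 0.001, "B": 0.001}  # placeholder constants eta_k

def pixel_gamma(apl_pixel, k):
    """Per-pixel gamma value for color component k ('R', 'G' or 'B')."""
    return GAMMA_STD[k] + apl_pixel * ETA[k]
```

A brighter surrounding (larger APL_PIXEL) thus yields a larger gamma value, darkening mid-tones, which is the behavior expression (11a) describes.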
At step S15, correction point data sets CP_LR, CP_LG and CP_LB are selected or determined on the basis of the calculated gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB, respectively. It should be noted that the correction point data sets CP_LR, CP_LG and CP_LB are seed data used for calculating the correction point data sets CP_selR, CP_selG and CP_selB, which are finally fed to the approximate gamma correction circuit 22. The correction point data sets CP_LR, CP_LG and CP_LB are determined for each pixel 9.
In one embodiment, the correction point data sets CP_LR, CP_LG and CP_LB are determined as follows: A plurality of correction point data sets CP#1 to CP#m are stored in the correction point data set storage register 41 of the correction point data calculation circuit 29 and the correction point data sets CP_LR, CP_LG and CP_LB are each selected from among the correction point data sets CP#1 to CP#m. As described above, the correction point data sets CP#1 to CP#m correspond to different gamma values γ and each of the correction point data sets CP#1 to CP#m includes correction point data CP0 to CP5.
The correction point data CP0 to CP5 of a correction point data set CP#j corresponding to a certain gamma value γ are determined as follows:
where DINMAX is the allowed maximum value of the input image data DIN and depends on the number of bits of the input image data DINR, DING and DINB. Similarly, DOUTMAX is the allowed maximum value of the output image data DOUT and depends on the number of bits of the output image data DOUTR, DOUTG and DOUTB. K is a constant given by the following expression:
K=(DINMAX+1)/2. (13a)
In the above, the function Gamma [x], which is a function corresponding to the strict expression of the gamma correction, is defined by the following expression:
Gamma[x]=DOUTMAX·(x/DINMAX)γ (13b)
In the present embodiment, the correction point data sets CP#1 to CP#m are determined so that the gamma value γ recited in expression (13b) to which a correction point data set CP#j corresponds increases as j increases. In other words, it holds:
γ1<γ2< . . . <γm−1<γm, (14)
where γj is the gamma value corresponding to the correction point data set CP#j.
In one embodiment, the correction point data set CP_LR is selected from the correction point data sets CP#1 to CP#m on the basis of the gamma value γ_PIXELR. The correction point data set CP_LR is determined as a correction point data set CP#j with a larger value of j as the gamma value γ_PIXELR increases. Correspondingly, the correction point data sets CP_LG and CP_LB are selected from the correction point data sets CP#1 to CP#m on the basis of the gamma values γ_PIXELG and γ_PIXELB, respectively.
In an alternative embodiment, the correction point data sets CP_LR, CP_LG and CP_LB may be determined as follows: The correction point data sets CP#1 to CP#m are stored in the correction point data set storage register 41 of the correction point data calculation circuit 29. The number of the correction point data sets CP#1 to CP#m stored in the correction point data set storage register 41 is 2P−(Q−1), where P is the number of bits used to describe APL_PIXEL(y, x) and Q is a predetermined integer equal to or more than two and less than P. This implies that m=2P−(Q−1). The correction point data sets CP#1 to CP#m to be stored in the correction point data set storage register 41 may be fed from the processor 4 to the drive IC 3 as initial settings.
Furthermore, two correction point data sets CP#q and CP#(q+1) are selected on the basis of the gamma value γ_PIXELk (k is any one of “R”, “G” and “B”) from among the correction point data sets CP#1 to CP#m stored in the correction point data set storage register 41 for determining the correction point data set CP_Lk, where q is an integer from one to m−1. The two correction point data sets CP#q and CP#(q+1) are selected to satisfy the following expression (15):
γq<γ_PIXELk<γq+1. (15)
The correction point data CP0 to CP5 of the correction point data set CP_Lk are respectively calculated with an interpolation of correction point data CP0 to CP5 of the selected two correction point data sets CP#q and CP#(q+1).
More specifically, the correction point data CP0 to CP5 of the correction point data set CP_Lk (where k is any of “R”, “G” and “B”) are calculated from the correction point data CP0 to CP5 of the selected two correction point data sets CP#q and CP#(q+1) in accordance with the following expressions:
CPα_Lk=CPα(#q)+{(CPα(#(q+1))−CPα(#q))/2^Q}×APL_PIXEL[Q−1:0], (16)
where α is an integer from zero to five, CPα_Lk is the correction point data CPα of the correction point data set CP_Lk, CPα(#q) is the correction point data CPα of the selected correction point data set CP#q, CPα(#(q+1)) is the correction point data CPα of the selected correction point data set CP#(q+1), and APL_PIXEL[Q−1:0] is the lowest Q bits of APL_PIXEL(y, x).
At step S16, the correction point data set CP_Lk (where k is any of “R”, “G” and “B”) determined at step S15 are modified on the basis of variance data σ2_PIXEL(y, x) included in the pixel-specific characterization data DCHR
Since the correction point data CP1 and CP4 of the correction point data set CP_Lk largely influence the contrast, the correction point data CP1 and CP4 of the correction point data set CP_Lk are adjusted on the basis of the variance data σ2_PIXEL(y, x) in the present embodiment. The correction point data CP1 of the correction point data set CP_Lk is modified so that the correction point data CP1 of the correction point data set CP_selk, which is finally fed to the approximate gamma correction circuit 22, is decreased as the value of the variance data σ2_PIXEL(y, x) decreases. The correction point data CP4 of the correction point data set CP_Lk is, on the other hand, modified so that the correction point data CP4 of the correction point data set CP_selk, which is finally fed to the approximate gamma correction circuit 22, is increased as the value of the variance data σ2_PIXEL(y, x) decreases. Such modification results in that the correction calculation in the approximate gamma correction circuit 22 is performed to enhance the contrast, when the contrast of the image corresponding to the input image data DIN is small. It should be noted that the correction point data CP0, CP2, CP3 and CP5 of the correction point data set CP_Lk are not modified in the present embodiment. In other words, the values of the correction point data CP0, CP2, CP3 and CP5 of the correction point data set CP_selk are equal to the correction point data CP0, CP2, CP3 and CP5 of the correction point data set CP_Lk, respectively.
In one embodiment, the correction point data CP1 and CP4 of the correction point data set CP_selk are calculated in accordance with the following expressions:
CP1_selR=CP1_LR−(DINMAX−σ2_PIXEL(y,x))·ξR, (17a)
CP1_selG=CP1_LG−(DINMAX−σ2_PIXEL(y,x))·ξG, (17b)
CP1_selB=CP1_LB−(DINMAX−σ2_PIXEL(y,x))·ξB, (17c)
CP4_selR=CP4_LR+(DINMAX−σ2_PIXEL(y,x))·ξR, (18a)
CP4_selG=CP4_LG+(DINMAX−σ2_PIXEL(y,x))·ξG, (18b)
and
CP4_selB=CP4_LB+(DINMAX−σ2_PIXEL(y,x))·ξB, (18c)
where DINMAX is the allowed maximum value of the input image data DIN as described above, and ξR, ξG, and ξB are given proportionality constants; the proportionality constants ξR, ξG, and ξB may be equal to each other, or different. Note that CP1_selk and CP4_selk are correction point data CP1 and CP4 of the correction point data set CP_selk, and CP1_Lk and CP4_Lk are correction point data CP1 and CP4 of the correction point data set CP_Lk.
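Expressions (17a) to (18c) can be sketched as below; only CP1 and CP4 are touched, and ξ is a placeholder proportionality constant:

```python
# Sketch of step S16: the smaller the variance (the flatter the area), the
# further CP1 is pulled down and CP4 pushed up, steepening the gamma curve
# and enhancing contrast. xi is a placeholder constant (assumption).

def adjust_correction_points(cp_l, variance, din_max, xi):
    """cp_l: list [CP0..CP5] of the set CP_Lk; returns CP_selk."""
    delta = (din_max - variance) * xi
    cp_sel = list(cp_l)          # CP0, CP2, CP3 and CP5 are unchanged
    cp_sel[1] = cp_l[1] - delta  # CP1_selk, expressions (17a)-(17c)
    cp_sel[4] = cp_l[4] + delta  # CP4_selk, expressions (18a)-(18c)
    return cp_sel
```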
At step S17, a correction calculation is performed on input image data DINR, DING and DINB associated with each pixel 9 on the basis of the correction point data sets CP_selR, CP_selG and CP_selB calculated at step S16 for each pixel 9, respectively, to thereby generate the output image data DOUTR, DOUTG and DOUTB. This correction is performed by the approximate gamma correction units 22R, 22G and 22B.
In the correction calculation at step S17, the output image data DOUTk are calculated from the input image data DINk in accordance with the following expressions.
(1) For the case when DINk<DINCenter and CP1>CP0
It should be noted that the fact that the value of the correction point data CP1 is larger than that of the correction point data CP0 implies that the gamma value γ used for the gamma correction is smaller than one.
(2) For the case when DINk<DINCenter and CP1≦CP0
It should be noted that the fact that the value of the correction point data CP1 is equal to or less than that of the correction point data CP0 implies that the gamma value γ used for the gamma correction is equal to or larger than one.
(3) For the case when DINk>DINCenter
In the above, the center data value DINCenter is a value defined by the following expression:
DINCenter=DINMAX/2, (20)
where DINMAX is the allowed maximum value and K is the parameter given by the above-described expression (13a). Furthermore, DINS, PDINS, and NDINS recited in expressions (19a) to (19c) are values defined as follows:
DINS is a value which depends on the input image data DINk; DINS is given by the following expressions (21a) and (21b):
DINS=DINk (for DINk<DINCenter), (21a)
DINS=DINk+1−K (for DINk>DINCenter). (21b)
PDINS is defined by the following expression (22a) with a parameter R defined by expression (22b):
PDINS=(K−R)·R, (22a)
R=K^(1/2)·DINS^(1/2). (22b)
As understood from expressions (21a), (21b) and (22b), the parameter R is proportional to the square root of the input image data DINk, and therefore PDINS is a value calculated by an expression including a term proportional to the square root of DINk and a term proportional to DINk (the first power of DINk).
NDINS is given by the following expression (23):
NDINS=(K−DINS)·DINS. (23)
As understood from expressions (21a), (21b) and (23), NDINS is a value calculated by an expression including a term proportional to DINk and a term proportional to the square of DINk.
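The intermediate values defined in expressions (20) to (23) can be computed as follows. This is an illustrative sketch only: expression (13a), which gives the parameter K, is not reproduced in this section, so K is taken as an input here, and the behavior at DINk equal to DINCenter (covered by neither (21a) nor (21b) as written) follows branch (21b) in this sketch.

```python
import math

# Sketch of expressions (20)-(23): intermediate values DINS, PDINS and NDINS
# used by the correction calculation, for one input code value DINk.

def intermediate_values(din, din_max, k):
    """Return (DINS, PDINS, NDINS) for input image data DINk.

    din     -- input image data DINk
    din_max -- DINMAX, the allowed maximum value of the input image data
    k       -- parameter K given by expression (13a) (not shown here)
    """
    din_center = din_max / 2                 # expression (20)
    if din < din_center:
        dins = din                           # expression (21a)
    else:
        dins = din + 1 - k                   # expression (21b)
    r = math.sqrt(k) * math.sqrt(dins)       # expression (22b)
    pdins = (k - r) * r                      # expression (22a)
    ndins = (k - dins) * dins                # expression (23)
    return dins, pdins, ndins
```

For example, with din_max = 255 and k = 128, an input of 50 falls under (21a), so DINS = 50, R = sqrt(128·50) = 80, PDINS = (128 − 80)·80 = 3840, and NDINS = (128 − 50)·50 = 3900.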
The output image data DOUTR, DOUTG and DOUTB, which are calculated by the approximate gamma correction circuit 22 with the above-described series of expressions, are forwarded to the color reduction circuit 23. The color reduction circuit 23 performs a color reduction on the output image data DOUTR, DOUTG and DOUTB to generate the color-reduced image data DOUT.
As described above, occurrence of a halo effect is suppressed in the present embodiment, by performing an APL-calculating filtering process which involves setting the luminance value of the target pixel to a specific APL-calculation alternative luminance value in response to the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image. In detail, APL data of area characterization data associated with each area are calculated from an APL-calculation luminance image obtained by the APL-calculating filtering process. APL data of pixel-specific characterization data associated with a certain pixel 9 located in a certain area are calculated on the basis of the APL data of the area characterization data associated with the certain area, the APL data of the area characterization data associated with areas adjacent to the certain area, and the position of the certain pixel 9 in the area. The luminance values of pixels in an area in which changes in the luminance value are small are set to the APL-calculation alternative luminance value in the APL-calculation luminance image obtained by the APL-calculating filtering process, and accordingly APL data of area characterization data associated with adjacent two areas each of which includes a region in which changes in the luminance value are small are determined as close values. As a result, APL data of pixel-specific characterization data associated with the pixels 9 located in the adjacent two areas are also determined as close values. By determining the shape of the gamma curve (in the present embodiment, the gamma value) on the basis of the thus-determined APL data of the pixel-specific characterization data associated with each pixel 9, the shapes of the gamma curves are determined as similar for the pixels 9 located in the two areas, and this effectively suppresses occurrence of a halo effect.
In addition, occurrence of a halo effect is suppressed in the present embodiment by performing a square-mean-calculating filtering process which involves setting the luminance value of the target pixel to a specific square-mean-calculation alternative luminance value in response to the differences of the luminance value of the target pixel from those of the pixels 9 near the target pixel in the original image. In detail, square-mean data of area characterization data associated with each area are calculated from a square-mean-calculation luminance image obtained by the square-mean-calculating filtering process. Variance data of pixel-specific characterization data associated with a certain pixel 9 located in a certain area are calculated on the basis of the APL data and square-mean data of the area characterization data associated with the certain area, the APL data and square-mean data of the area characterization data associated with areas adjacent to the certain area, and the position of the certain pixel 9 in the area. The luminance values of pixels in an area in which changes in the luminance value are small are set to the square-mean-calculation alternative luminance value in the square-mean-calculation luminance image obtained by the square-mean-calculating filtering process, and accordingly variance data of area characterization data associated with adjacent two areas each of which includes a region in which changes in the luminance value are small are determined as close values. By determining the shape of the gamma curve (in the present embodiment, the gamma value) on the basis of the thus-determined variance data of the pixel-specific characterization data associated with each pixel 9, the shapes of the gamma curves are determined as similar for the pixels 9 located in the two areas, and this effectively suppresses occurrence of a halo effect.
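The filtering processes described in the two paragraphs above can be sketched as follows. The section does not specify the neighborhood, the comparison rule, or the alternative luminance value, so a 3×3 neighborhood, a fixed difference threshold, and a caller-supplied alternative value are assumed here purely for illustration; the same skeleton serves both the APL-calculating and the square-mean-calculating filtering processes, differing only in the alternative value substituted.

```python
# Sketch of the filtering process: if the luminance differences between the
# target pixel and all of its neighbors are small (a flat region), the target
# pixel's luminance is replaced by the alternative luminance value; otherwise
# the original luminance is kept. Neighborhood and threshold are assumptions.

def alternative_luminance_filter(lum, y, x, alt_value, threshold):
    """Return the filtered luminance of the target pixel (y, x).

    lum       -- 2-D list of luminance values of the original image
    alt_value -- the alternative luminance value to substitute in flat regions
    threshold -- maximum difference still regarded as "small"
    """
    h, w = len(lum), len(lum[0])
    center = lum[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                if abs(lum[ny][nx] - center) > threshold:
                    return center        # not a flat region: keep the original
    return alt_value                     # flat region: substitute
```

Because every pixel of a flat region maps to the same alternative value, two adjacent areas that each contain such a region yield close APL (or square-mean) data, which is what keeps the gamma curves of their pixels similar.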
Although the above-described embodiments recite that the gamma curves associated with each pixel 9 are modified on the basis of the variance data of the pixel-specific characterization data associated with each pixel 9 (that is, the correction point data CP1 and CP4 of the correction point data set CP_selk are determined by modifying the correction point data CP1 and CP4 of the correction point data set CP_Lk on the basis of the variance data of the pixel-specific characterization data associated with each pixel 9), the modification of the gamma curves based on the variance data of the pixel-specific characterization data associated with each pixel 9 may be omitted. In other words, step S16 may be omitted and the correction point data set CP_Lk determined at step S15 may be used as the correction point data set CP_selk without modification.
In this case, processes related to square-mean data and variance data may be omitted. That is, the square-mean-calculating filtering process at step S10 and the calculation of variance data of the area characterization data DCHR may be omitted.
Although the above-described embodiments recite that gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB are individually calculated for the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9 and the correction calculation is performed depending on the calculated gamma values γ_PIXELR, γ_PIXELG and γ_PIXELB, a common gamma value γ_PIXEL may be calculated for the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9 to perform the same correction calculation.
In this case, for each pixel 9, a gamma value γ_PIXEL common to the R subpixel 11R, G subpixel 11G and B subpixel 11B is calculated from the APL data APL_PIXEL(y, x) associated with each pixel 9 in accordance with the following expression:
γ_PIXEL=γ_STD+APL_PIXEL(y,x)·η, (11a′)
where γ_STD is a given reference gamma value and η is a given positive proportionality constant. Furthermore, a common correction point data set CP_L is determined from the gamma value γ_PIXEL. The determination of the correction point data set CP_L from the gamma value γ_PIXEL is achieved in the same way as the above-described determination of the correction point data set CP_Lk (k is any of “R”, “G” and “B”) from the gamma value γ_PIXELk. Furthermore, the correction point data set CP_L is modified on the basis of the variance data σ2_PIXEL(y, x) associated with each pixel 9 to calculate a common correction point data set CP_sel. The correction point data set CP_sel is calculated in the same way as the correction point data set CP_selk (k is any of “R”, “G” and “B”), which is calculated by modifying the correction point data set CP_Lk on the basis of the variance data σ2_PIXEL(y, x) associated with each pixel 9. For the input image data DIN associated with any of the R subpixel 11R, G subpixel 11G and B subpixel 11B of each pixel 9, the output image data DOUT are calculated by performing a correction calculation based on the common correction point data set CP_sel.
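Expression (11a′) can be written directly as a small function. The default values of γ_STD and η below are illustrative assumptions only; the section states merely that they are a given reference gamma value and a given positive proportionality constant.

```python
# Sketch of expression (11a'): a single gamma value shared by the R, G and B
# subpixels of a pixel, computed from the APL data of that pixel.

def common_gamma(apl_pixel, gamma_std=2.2, eta=0.004):
    """gamma_PIXEL = gamma_STD + APL_PIXEL(y, x) * eta   -- expression (11a')

    apl_pixel -- APL data APL_PIXEL(y, x) associated with the pixel
    gamma_std -- reference gamma value gamma_STD (illustrative default)
    eta       -- positive proportionality constant (illustrative default)
    """
    return gamma_std + apl_pixel * eta
```

A brighter pixel neighborhood (larger APL_PIXEL) thus yields a larger gamma value, from which the common correction point data set CP_L is then determined in the same way as CP_Lk.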
It should also be noted that, although the above-described embodiments recite the liquid crystal display device 1 including the LCD panel 2, the present invention is applicable to various panel display devices including different display panels (for example, a display device including an OLED (organic light emitting diode) display panel).
It would be apparent that the present invention is not limited to the above-described embodiments, which may be modified and changed without departing from the scope of the invention.
Number | Date | Country | Kind
---|---|---|---
2014-023874 | Feb 2014 | JP | national