This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2016-087977, filed on Apr. 26, 2016, the entire contents of which are incorporated herein by reference.
The present invention relates to a display device, and the embodiments of the invention disclosed in the present specification relate to display devices such as organic electroluminescence displays.
Low power consumption is regarded as one challenge for display devices such as organic electroluminescence displays. The simplest method for reducing power consumption in such display devices is to reduce the luminance (quantity of luminescence) of each pixel. This is because the power consumption of display devices such as organic electroluminescence displays is substantially determined by the sum of the luminance of all pixels. It therefore becomes possible to reduce the power consumption of such display devices by reducing the luminance of each pixel as described above.
However, when reducing power consumption by this method, because the screen darkens, the user is given the impression that the image quality has deteriorated. Therefore, various techniques have been proposed for reducing power consumption without giving the user such an impression.
The display device according to one embodiment of the present invention is a display device having a division circuit that divides an input image including a plurality of pixels into a plurality of regions based on the feature quantities of the plurality of pixels, a luminance reduction rate calculation circuit that calculates the reduction rate of luminance of each region based on the surface area of each of the plurality of regions, and an image generation circuit that generates output images by correcting the luminance of each of the plurality of pixels based on the reduction rates calculated by the luminance reduction rate calculation circuit.
Hereinafter, the driving method of the display device according to the present invention will be described in detail while referencing the drawings. Further, the driving method of the display device according to the present invention is not limited to the embodiments below, and may be implemented in many different ways. For convenience of explanation, the dimensions of the drawings are different from the actual dimensions, and parts of the structure may be omitted from the drawings.
The simplest method for reducing power consumption of display devices such as organic electroluminescence displays is to reduce the luminance (quantity of luminescence) of each pixel, which makes the screen darker. One conceivable method for saving power without simply making the screen darker is to determine the quantity of luminance to be reduced on a pixel-by-pixel basis in proportion to the feature quantity of each pixel (hue, saturation, brightness). For example, the greater the brightness of a pixel, the more its luminance will be reduced.
Even if the quantity of luminance to be reduced is decided in this way, the screen inevitably becomes dark as it does when the luminance of every pixel is reduced. Under such conditions, it is conceivable that increasing contrast by selectively brightening regions with small surface areas appearing in the output images may be a method for not giving the viewer the impression, as much as possible, that the image quality has deteriorated.
The display device 1 is an organic electroluminescence display using an active matrix drive system, and carries out display operations of the output images by controlling the light emission of the organic electroluminescence elements in accordance with the output images. Further, the display device 1 may be a top emission type organic electroluminescence display, or a bottom emission type organic electroluminescence display.
As is shown in
The frame buffer B1 is a storage circuit configured to store an input image for one frame. RGB data input images are first stored in frame buffer B1 (Step S1 in
The image pre-processing circuit 10 is a function part performing the predetermined pre-processing of the input image stored in the frame buffer B1 (Step S2 in
The image pre-processing circuit 10 is configured to extract the input image from the frame buffer B1 in blocks of 9 pixels (3 vertical pixels×3 horizontal pixels) at a time and perform pre-processing, providing the image after pre-processing (or the input image when pre-processing is not performed) sequentially row by row from the top (in the order of row 1, row 2 . . . row N as illustrated in
The line buffer B2 is a storage circuit configured to store up to two rows of data input in sequential order from the image pre-processing circuit 10. When one new row of data is supplied from the image pre-processing circuit 10, the line buffer B2 cancels the data supplied two times previously. As a result, the newly supplied data and the second row of data supplied one time previously are stored in the line buffer B2. The stored content of the line buffer B2 is reset when the processing of the new frame begins.
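The behavior of the line buffer B2 described above can be sketched as follows; the class and method names are illustrative, not part of the specification:

```python
from collections import deque


class LineBuffer:
    # Sketch of the two-row line buffer B2: it holds the two most recently
    # supplied rows, so supplying a new row evicts the row supplied two
    # times previously, and reset() clears the stored content when the
    # processing of a new frame begins.
    def __init__(self):
        self.rows = deque(maxlen=2)

    def supply(self, row):
        self.rows.append(row)  # oldest of the two rows is dropped automatically

    def reset(self):
        self.rows.clear()
```

With `maxlen=2`, the deque itself enforces the eviction rule, so no explicit cancellation step is needed.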
The edge detection and labeling circuit 11 is a division circuit dividing the input image into a plurality of regions based on the feature quantity of each pixel. Specifically, the edge detection and labeling circuit 11 is configured to assign a label indicating the affiliated region by performing the edge detection and labeling process in order from the left side (in order of a first row of pixels, a second row of pixels . . . an M-th row of pixels shown in
In order to perform the edge detection and labeling process, the edge detection and labeling circuit 11 references three pixels A to C shown in
The edge detection and labeling process firstly determines whether or not pixel A and pixel B have the same feature quantity (Step S21). The feature quantity indicates one, or a combination of two or more, of hue, saturation, and brightness calculated from the luminance of each color of each pixel. The "same feature quantity" includes feature quantities within the range of a predetermined value, not just feature quantities that are exactly the same.
Hereinafter, the specific determination method in Step S21 will be described with four examples. In the description below, the feature quantity of pixel A will be represented as t, and the feature quantity of pixel B will be represented as a.
The first example is a method using addition threshold value c. The addition threshold value c is preferably a numerical value of 1 or more, for example. When using this method, the edge detection and labeling circuit 11 determines whether or not a−c<t<a+c is satisfied in Step S21, and when positive determination results are given, it is determined that pixel B and pixel A have the same feature quantity.
The second example is a method using the integration threshold value r. The integration threshold value r is, for example, preferably a numerical value larger than 0.0 and smaller than 1.0 (0.0&lt;r&lt;1.0). When using this method, the edge detection and labeling circuit 11 determines whether or not a×r&lt;t&lt;a/r is satisfied in Step S21, and when positive determination results are given, it is determined that pixel B and pixel A have the same feature quantity.
The third example is a method using the addition threshold value decision function f(t).
When using the method according to the third example, the edge detection and labeling circuit 11 determines whether or not a−f(t)<t<a+f(t) is satisfied in Step S21, and when positive determination results are given, it is determined that pixel B and pixel A have the same feature quantity. When the exponential function shown in
The fourth example is a method using an integrated threshold determination function g(t). This function g(t) may also be used as the same exponential function as function f(t), or as a linear function, curve function, a logarithmic function, or the like. When using this method, the edge detection and labeling circuit 11 determines whether or not a/g(t)<t<a×g(t) is satisfied in Step S21, and when positive determination results are given, it is determined that pixel B and pixel A have the same feature quantity.
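Assuming the feature quantities are scalar values, the four determination methods above can be sketched as simple predicates; the threshold values c and r and the functions f and g are parameters that the designer supplies:

```python
def same_by_addition(t, a, c):
    # First example: addition threshold value c, preferably 1 or more.
    return a - c < t < a + c


def same_by_integration(t, a, r):
    # Second example: integration threshold value r with 0.0 < r < 1.0,
    # so the lower bound a*r lies below the upper bound a/r.
    return a * r < t < a / r


def same_by_addition_function(t, a, f):
    # Third example: addition threshold decision function f(t),
    # e.g. an exponential, linear, curve, or logarithmic function.
    return a - f(t) < t < a + f(t)


def same_by_integration_function(t, a, g):
    # Fourth example: integrated threshold determination function g(t),
    # assumed here to return values larger than 1.
    return a / g(t) < t < a * g(t)
```

In each case a positive result means pixel A and pixel B are treated as having the same feature quantity in Step S21 (or pixel A and pixel C in Step S23).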
Refer to
When it is determined that pixel C and pixel A have the same feature quantity in Step S23, the edge detection and labeling circuit 11 decides to assign the same label for pixel A and pixel C (label assigned to pixel C by the edge detection and labeling process in which pixel C is the target pixel) (Step S24). On the other hand, if it is determined that pixel C and pixel A do not have the same feature quantity, the edge detection and label circuit 11 assigns a new label (a label not yet assigned to a pixel in the same frame) for pixel A (Step S25).
The edge detection and labeling process ends here and Step S7 shown in
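The per-pixel procedure of Steps S21 to S25 can be sketched as a single raster scan; the representation of the image as a 2-D list of scalar feature quantities and the `same` comparison predicate are simplifying assumptions:

```python
def label_image(img, same):
    # Raster-scan labeling sketch: for each target pixel A, compare with the
    # left neighbor B and the upper neighbor C, as in Steps S21 to S25.
    # `same(t, a)` is one of the feature-quantity comparison predicates.
    h, w = len(img), len(img[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            t = img[y][x]
            if x > 0 and same(t, img[y][x - 1]):      # Step S21: A vs. B
                labels[y][x] = labels[y][x - 1]       # inherit B's label
            elif y > 0 and same(t, img[y - 1][x]):    # Step S23: A vs. C
                labels[y][x] = labels[y - 1][x]       # Step S24: inherit C's label
            else:
                labels[y][x] = next_label             # Step S25: assign a new label
                next_label += 1
    return labels
```

Because only the left and upper neighbors are seen, a U-shaped region can receive two labels during this pass; that flaw is what the labeling correction process described below repairs.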
Refer to
The labeling correction circuit 12 is provided to compensate for flaws in the previously described edge detection and labeling process. That is to say, it is possible that different labels will be assigned by the previously described edge detection and labeling process to two regions adjacent to each other having the same feature quantity (see the examples described below). Conversely, when there is a region in which the feature quantity in the input image gradually changes, the feature quantities of two pixels (especially two pixels in separate locations) located inside one region identified by the same label may be completely different. The labeling correction circuit 12 compensates for flaws in the edge detection and labeling process such as these, and performs a labeling correction process with the purpose of assigning an appropriate label to each pixel. This is described in specific terms below.
In the individual loops in Step S31, the labeling correction circuit 12 first calculates the feature quantity of the focus region (Step S32). It is preferable that the average values of a feature quantity (average hue, average saturation, and average brightness) of a pixel in the region are used as the feature quantity of the region.
Next, regarding each pixel inside the focus region, the labeling correction circuit 12 determines whether or not the feature quantities of the target pixel and the target region are the same (Step S34). This determination is preferably made by the same process as Steps S21 and S23 shown in
When it is determined that the feature quantity is not the same in Step S34, the labeling correction circuit 12 assigns a new label different from the target region to each pixel located inside a part of the target region including the target pixels (Step S35). An area configured by pixels having the same feature quantity as that of the target pixels (namely, pixels determined to be the same in Step S21) is preferably a specific area of this part. In this way, the target region is divided into two new regions.
When the target region is divided in Step S35, the labeling correction circuit 12 once exits the loop process of Step S31 and starts the same loop process again from the beginning. As a result, all areas including the areas newly generated by division are again subjected to loop processing. When loop processing is repeated, the processing in Steps S32 to S34 is preferably omitted in the regions already subjected to that processing.
When the loop processing in Step S31 is completed, next, the labeling correction circuit 12 re-calculates the feature quantity of each region (Steps S36, S37). Then, every combination of adjacent regions is extracted, and the processing in Step S39 is executed for each combination (Step S38).
In Step S39, the labeling correction circuit 12 determines whether or not the feature quantities of the two regions in the target combination are the same (Step S39). This determination is also preferably carried out by the same processing as in Steps S21 and S23 shown in
When it is determined that the feature quantities of the two regions are the same in Step S39, the labeling correction circuit 12 executes a process for changing the labels of the pixels in one target region to the labels of the pixels in the other target region (Step S40). In this way, the two regions in the target combination are unified. When it is determined that the feature quantities of the two regions are not the same in Step S39, processing is shifted to the next combination without performing special processing. When processing of all combinations is completed, the labeling correction process by the labeling correction circuit 12 is completed.
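The region-unification half of the labeling correction (Steps S36 to S40) can be sketched as follows. The scalar feature quantities, the 4-neighborhood adjacency, and the full-image relabeling pass are simplifying assumptions:

```python
def merge_adjacent_regions(img, labels, same):
    # Sketch of Steps S36-S40: compute each region's average feature
    # quantity (Steps S36, S37), then unify adjacent regions whose
    # averages compare as "the same" (Steps S38-S40).
    h, w = len(img), len(img[0])
    sums, counts = {}, {}
    for y in range(h):
        for x in range(w):
            l = labels[y][x]
            sums[l] = sums.get(l, 0) + img[y][x]
            counts[l] = counts.get(l, 0) + 1
    avg = {l: sums[l] / counts[l] for l in sums}
    # Scan every horizontally or vertically adjacent pixel pair.
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    a, b = labels[y][x], labels[ny][nx]
                    if a != b and same(avg[a], avg[b]):
                        # Step S40: relabel one region with the other's label.
                        old, new = max(a, b), min(a, b)
                        for yy in range(h):
                            for xx in range(w):
                                if labels[yy][xx] == old:
                                    labels[yy][xx] = new
                        sums[new] = sums[a] + sums[b]
                        counts[new] = counts[a] + counts[b]
                        avg[new] = sums[new] / counts[new]
    return labels
```

The splitting half of the correction (Steps S31 to S35) would run before this, dividing regions whose interior pixels differ from the region average.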
Refer to
Subsequently, the region-specific luminance reduction rate calculation circuit 13 tentatively decides the maximum value of the reduction rate to be applied to each pixel (maximum reduction rate Max) and the minimum value of the reduction rate to be applied to each pixel (minimum reduction rate Min) (Step S52). The values tentatively decided upon are also preferably stored in advance in a memory of the display device 1 not shown in the drawings.
Next, the region-specific luminance reduction rate calculation circuit 13 sets the reduction rate curve (Step S53). The reduction rate curve is for calculating the reduction rate of each region from the maximum reduction rate Max and the minimum reduction rate Min, and it is configured by a curve (including linear portions) formed on a coordinate plane having a pre-determined horizontal axis and a pre-determined vertical axis.
An example of the reduction rate curve is shown in
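A minimal sketch of the reduction rate curve follows, assuming the horizontal axis is a region's surface area (pixel count) and the vertical axis the reduction rate, so that regions with greater surface area receive reduction rates closer to Max. The linear shape is one assumed form; the curve may equally be exponential or logarithmic:

```python
def reduction_rate(area, min_area, max_area, rate_min, rate_max):
    # Linear reduction rate curve sketch: interpolates between the
    # minimum reduction rate Min (smallest region) and the maximum
    # reduction rate Max (largest region).
    if max_area == min_area:
        return rate_max  # degenerate case: all regions the same size
    frac = (area - min_area) / (max_area - min_area)
    return rate_min + frac * (rate_max - rate_min)
```

Given Min=0.1 and Max=0.4, a region halfway between the smallest and largest surface areas would receive a reduction rate of 0.25.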
Refer to
Next, the region-specific luminance reduction rate calculation circuit 13 temporarily reduces the luminance of each pixel based on the calculated reduction rate of each region, and calculates the total reduction quantity D2 by subtracting the total luminance of all pixels after reduction from the total luminance of all pixels before reduction (Step S55). Then, the region-specific luminance reduction rate calculation circuit 13 determines whether or not the calculated total reduction quantity D2 and the total reduction quantity D1 calculated in Step S51 match (Step S56). Here, the word "match" does not necessarily mean a perfect match. For example, when the total reduction quantity D2 is within a pre-determined range with the total reduction quantity D1 at the center, the determination result of Step S56 may be a "match."
When it is determined that the calculated total reduction quantity D2 and the total reduction quantity D1 do not match in Step S56, the region-specific luminance reduction rate calculation circuit 13 changes at least one of the maximum reduction rate Max and the minimum reduction rate Min in the range satisfying the predetermined search conditions (Step S57). Here, the predetermined search conditions are, for example with C as the constant, Max−Min=C or Tar−Min=C. The region-specific luminance reduction rate calculation circuit 13 returns to Step S53 and re-executes the process after this change.
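The search of Steps S53 to S57 can be sketched as the loop below, using the example search condition Max−Min=C (the gap between the two rates is held constant while both shift). The linear curve, the fixed step size, and the tolerance band interpreting "match" are assumptions:

```python
def search_rates(areas, luminances, d1, rate_min, rate_max,
                 step=0.01, tol=0.05):
    # Sketch of Steps S53-S57: shift Max and Min together (keeping
    # Max - Min = C constant) until the total reduction quantity D2
    # matches the target D1 within a tolerance band.
    min_a, max_a = min(areas), max(areas)

    def rate(area):
        # Assumed linear reduction rate curve over surface area.
        if max_a == min_a:
            return rate_max
        frac = (area - min_a) / (max_a - min_a)
        return rate_min + frac * (rate_max - rate_min)

    while True:
        # Step S55: total quantity removed with the current rates.
        d2 = sum(lum * rate(a) for a, lum in zip(areas, luminances))
        if abs(d2 - d1) <= tol * d1:          # Step S56: D1 and D2 "match"
            return rate_min, rate_max
        if d2 < d1:                           # Step S57: shift both rates up
            rate_min, rate_max = rate_min + step, rate_max + step
        else:                                 # or shift both rates down
            rate_min, rate_max = rate_min - step, rate_max - step
```

Here each region's luminance reduction is taken as its total luminance times its rate; with a loose tolerance and a small step the loop converges, though a production implementation would bound the iteration count.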
Refer to
When it is determined that the total reduction quantity D1 and the total reduction quantity D2 match in Step S56, the region-specific luminance reduction rate calculation circuit 13 obtains the reduction rate of each pixel based on the newest reduction rate of each region calculated in Step S54 and stores it in the luminance reduction rate data buffer B4 shown in
Refer to
Described more specifically, the pixel light emission amount calculation circuit 14 may calculate the luminance of each pixel in the output image by multiplying the luminance of each pixel stored in the frame buffer B1 by the corresponding reduction rate. When the multiplication result is not an integer, an integer is preferably obtained by a predetermined rounding process, such as rounding to the nearest integer, truncating the fractional part, rounding up, or the like, and set as the luminance of the output image.
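The per-pixel correction can be sketched as below. Note an assumption about semantics: since elsewhere in the description a greater reduction rate means more luminance is removed, the sketch treats the rate as the fraction of luminance to remove, so the retained luminance is the stored luminance times (1 − rate), rounded to an integer:

```python
def output_luminance(pixels, rates):
    # Per-pixel correction sketch for the pixel light emission amount
    # calculation circuit 14: `pixels` holds the stored luminances from
    # frame buffer B1, `rates` the per-pixel reduction rates from buffer B4.
    # Rounding to the nearest integer is one of the rounding processes
    # named in the text.
    return [round(lum * (1.0 - rate)) for lum, rate in zip(pixels, rates)]
```

For example, a pixel of luminance 200 in a region with reduction rate 0.4 is output at luminance 120, while a pixel of luminance 100 with rate 0.1 is output at 90.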
As described above, in the display device 1 according to the present embodiment, an input image is divided into a plurality of regions based on the feature quantity of each of the plurality of pixels, and the reduction rate of luminance is calculated for each region based on the surface area of that region. A greater reduction rate is thereby assigned to regions with a greater surface area, and it becomes possible to make regions with a smaller surface area selectively brighter. Therefore, it is possible to minimize the impression the viewer may have that the image quality has deteriorated because the image has become dark due to the reduced luminance.
Below, examples of the present invention will be described while referencing
The numerical values mentioned in regions A to F show the RGB data of the pixels in those regions. For example, the pixels in region C are configured by RGB data (0, 214, 251), in which the luminance of red (R) is 0, the luminance of green (G) is 214, and the luminance of blue (B) is 251. This RGB data more or less shows aqua, in which the luminance of each pixel is 465 (=0+214+251). Similarly, the pixels in region A are configured by RGB data (255, 255, 255) more or less showing white (the luminance of each pixel is 765), the pixels in region B are configured by RGB data (3, 3, 228) more or less showing blue (the luminance of each pixel is 234), the pixels in region D are configured by RGB data (255, 242, 0) more or less showing yellow (the luminance of each pixel is 497), the pixels in region E are configured by RGB data (230, 2, 218) more or less showing pink (the luminance of each pixel is 450), and the pixels in region F are configured by RGB data (9, 253, 2) more or less showing green (the luminance of each pixel is 264).
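The per-pixel luminance figures above are simply the sums of the RGB components, which can be checked directly:

```python
# Luminance of a pixel in each region = R + G + B, using the RGB data above.
regions = {
    "A": (255, 255, 255),  # white
    "B": (3, 3, 228),      # blue
    "C": (0, 214, 251),    # aqua
    "D": (255, 242, 0),    # yellow
    "E": (230, 2, 218),    # pink
    "F": (9, 253, 2),      # green
}
luminance = {name: sum(rgb) for name, rgb in regions.items()}
```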
Below, in order to keep the description brief, the feature quantity of each pixel will be described as different from each other in regions A to F (that is to say, it is not determined that the feature quantities are the same in Steps S21 and S23 in
As is shown in
The label map 101 has such results because when the edge detection and labeling circuit 11 assigns labels to pixels P1 to P3 shown in
It is not preferable for the number of labels to be greater than the number of regions in this way, and this is corrected by the labeling correction circuit 12 shown in
In the present example, based on the results of the labeling correction process by the labeling correction circuit 12, the reduction rates of regions A to F are calculated as 0.4, 0.3, 0.3, 0.1, 0.2, and 0.2, respectively. From these results, it is understood that the smaller the surface area of a region, the smaller the calculated reduction rate.
In this way, in the output image 103 according to the present example, the greater the surface area of a region, the greater the quantity by which the luminance of each pixel is reduced, and regions with smaller surface areas remain selectively bright. Accordingly, as described above, it is possible to minimize the impression the viewer may have that the image quality has deteriorated because the image has become dark due to the reduced luminance.
Although the preferable embodiments of the present invention have been described above, the present invention is not at all limited to these embodiments. Naturally, the present invention may be implemented in various ways without deviating from the gist of the invention.
For example, in the embodiments above, although the pixels referenced during the edge detection and labeling process shown in
Number | Date | Country | Kind |
---|---|---|---|
2016-087977 | Apr 2016 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
20080198263 | Kiuchi et al. | Aug 2008 | A1 |
20100315444 | Mori et al. | Dec 2010 | A1 |
20160042701 | Furumoto | Feb 2016 | A1 |
20160127655 | Li | May 2016 | A1 |
Number | Date | Country |
---|---|---|
2007-148064 | Jun 2007 | JP |
2007-298693 | Nov 2007 | JP |
2011-002520 | Jan 2011 | JP |
2006049058 | May 2006 | WO |
Number | Date | Country
---|---|---
20170309251 A1 | Oct 2017 | US |