1. Field of the Invention
The present invention relates to image processing apparatuses and methods in which luminance values of input images are converted for output.
2. Description of the Related Art
In recent years, large screen flat display devices such as plasma displays, liquid crystal displays, rear projection displays, and the like have become widespread.
Generally, the range of reproducible luminance levels (dynamic range) in these display devices is limited by the characteristics of the individual device, and it is therefore common to carry out processing that outputs images in which contrast is emphasized within the limited dynamic range.
Histogram flattening is known as a typical processing technique. Specific description is given of basic processing techniques in histogram flattening using the following diagrams.
Furthermore, H(x) shown on the vertical axis is a luminance histogram indicating the number of pixels appearing at the input luminance level x, and C(x), shown as a dotted line, is the cumulative luminance histogram up to the input luminance level x. It should be noted that the relationship between the luminance histogram and the cumulative luminance histogram can be expressed by formula (1).
Here, the smallest output luminance level is given as x′min and the largest output luminance level is given as x′max. And the vertical axis of the cumulative luminance histogram C(x) is normalized so that C(xmin)=x′min and C(xmax)=x′max.
When the number of luminance levels is given as L, the relationship between C(x) and C′(x) is shown by formula (2).
Histogram flattening refers to a process by which input luminance levels are converted using the histogram flattening function C′(x) that is calculated as shown above, and after this processing it is possible to obtain an output image in which the frequency distribution of luminance levels has become uniform.
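The process described above can be sketched as follows. This is an illustrative simplification, not the claimed apparatus: the function and variable names are hypothetical, 8-bit luminance is assumed, and the normalization endpoints are taken from the ends of the cumulative histogram.

```python
# Sketch of histogram flattening per formulas (1) and (2); assumes
# integer luminance levels in [0, levels-1]. Names are illustrative.

def flatten_histogram(pixels, levels=256, out_min=0, out_max=255):
    """Map input luminance levels through the normalized cumulative
    histogram C'(x)."""
    # Formula (1): luminance histogram H(x), then cumulative C(x).
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cum, total = [], 0
    for h in hist:
        total += h
        cum.append(total)
    # Normalize C(x) so its ends map to out_min and out_max
    # (a simplification of C(xmin)=x'min, C(xmax)=x'max).
    c_min, c_max = cum[0], cum[-1]
    span = (c_max - c_min) or 1
    mapping = [out_min + (c - c_min) * (out_max - out_min) // span
               for c in cum]
    return [mapping[p] for p in pixels]
```

With a uniform input the mapping degenerates to the identity, which matches the intuition that an already-flat histogram needs no correction.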
Generally, since each luminance level is used uniformly in histogram flattening, images can be converted into ones having rich tone expression in which contrast is emphasized as a whole. On the other hand, contrast is excessively emphasized when there is a large disparity in the frequency distribution of the input luminance levels, which may result in the output of an unnatural image.
For this reason, in picture quality correction circuits, such as that disclosed in Japanese Patent Application Laid-Open No. 2001-125535 for example, picture quality deterioration due to excessive emphasis of contrast is suppressed by carrying out a picture quality correction process in which a limit is set on the number of appearances for the respective input luminance levels and distribution of extreme characteristic points is suppressed.
However, with conventional histogram flattening, the histogram flattening has been carried out based on a histogram of the image of the entire screen, and therefore there has been a problem of tone expression deteriorating in some areas. For example, when the histogram of an entire screen favors bright areas, partial regions having a low luminance level within the screen end up being converted to a very low luminance level, thus resulting in a problem that partial darkish regions become all black.
Also, conversely, when the histogram of an entire screen favors dark areas, partial regions having a high luminance level within the screen end up being converted to a very high luminance level, thus resulting in a problem that partially bright regions become all white.
An embodiment of the present invention is provided to enable adjustment of an image's luminance values so as to reduce clipped shadows and clipped highlights that occur in some areas when contrast has been emphasized within a limited dynamic range.
According to one aspect of the present invention, there is provided an image processing apparatus in which luminance values of an input image are converted for output, comprising: a first image information extracting unit configured to extract first image information from a first image region including a target pixel; a first conversion characteristics calculating unit configured to calculate a first conversion characteristic from the first image information; a second image information extracting unit configured to extract second image information from a second image region including the first image region; a second conversion characteristics calculating unit configured to calculate a second conversion characteristic from the second image information; a weighted coefficient calculating unit configured to calculate a weighted coefficient; and a third conversion characteristics calculating unit configured to calculate a third conversion characteristic for converting a luminance value of the target pixel, using the first conversion characteristic, the second conversion characteristic, and the weighted coefficient, wherein the luminance value of the target pixel is converted and outputted based on the third conversion characteristic.
According to another aspect of the present invention, there is provided a method in which luminance values of an input image are converted for output, comprising: extracting first image information from a first image region including a target pixel; calculating a first conversion characteristic from the first image information; extracting second image information from a second image region including the first image region; calculating a second conversion characteristic from the second image information; and calculating a third conversion characteristic for converting a luminance value of the target pixel, using the first conversion characteristic, the second conversion characteristic, and a weighted coefficient, wherein the luminance value of the target pixel is converted and outputted based on the third conversion characteristic.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, detailed description of preferred embodiments for executing the present invention is given with reference to the accompanying drawings.
[First Embodiment]
Next, the memory unit 320 corresponds to a frame memory, which is for receiving signals that are outputted from the image input unit 310 and outputting with a delay of at least one frame. For example, this includes an SDRAM (synchronous dynamic random access memory) and an interface for a memory controller thereof. The signals that are outputted from the image input unit 310 are inputted to the memory unit 320 and also inputted to the luminance value adjusting unit 300.
Next, the image output unit 330 converts the output video signals, which have undergone luminance adjustment in the luminance value adjusting unit 300, into signals suitable for an image display device and outputs these. Image display devices include for example plasma displays, liquid crystal displays, rear projection displays, and the like.
Here, description is given of a detailed configuration of the luminance value adjusting unit 300 and a luminance value adjusting process in which luminance values of signals inputted from the image input unit 310 are adjusted.
In the first embodiment, when a target pixel is in the position of the reference point 401, the region 402 of a fixed size centering on the reference point 401 is designated as a first image region and the region 400 of the entire screen is designated as a second image region. Then, a luminance histogram (called a first luminance histogram) is obtained from the first image region and a luminance histogram (called a second luminance histogram) is obtained from the second image region. Further still, a conversion function is calculated based on a luminance histogram (called a third luminance histogram) in which the first and second luminance histograms are combined, and a luminance value of the target pixel is adjusted according to the calculated conversion function.
It should be noted that the "target pixel" is the reference point based on which the first image region is set and is also the pixel of interest whose luminance value is to be adjusted. Furthermore, by handling all the pixels in the input signals in succession as the target pixel, luminance value adjustments are carried out for the entire screen.
In the luminance value adjusting unit 300 shown in
Numeral 301 indicates a first image information extracting unit, which receives video signals from the memory unit 320, determines a position of the target pixel, then extracts video signals of the first image region based on position information of the target pixel and converts these to luminance values. Numeral 303 indicates a first luminance histogram calculation unit, which receives luminance values from the first image information extracting unit 301 and calculates a luminance histogram (the first luminance histogram) of the first image region each time the position of the target pixel changes. After calculation, the first luminance histogram for each position of the target pixel is stored in the conversion characteristics calculating unit, which is described later.
As described above, in the first embodiment, the region 402 of a fixed size centering on the target pixel is set as the first image region and therefore the position of the first image region changes each time the position of the target pixel changes. For this reason, it is necessary to calculate the first luminance histogram each time the position of the target pixel changes.
On the other hand, the second image region is the region 400 of the entire screen and therefore the second image region is constant regardless of the position of the target pixel. That is, the second luminance histogram may be calculated for each frame (each time the display screen switches).
Furthermore, by providing the memory unit 320, the first image information extracting unit 301 and the second image information extracting unit 302 can process the same video signals at different timings. Consequently, the second luminance histogram, which requires a large amount of processing before the luminance histogram is obtained since the area of its region is large, can be calculated in advance and stored in the conversion characteristics calculating unit, which is to be described later. On the other hand, the first luminance histogram, which requires little processing since the area of its region is small, can be calculated and stored successively using the video signals from the memory unit 320.
When obtaining luminance values from the input image signals, the first image information extracting unit 301 and the second image information extracting unit 302 operate based on formula (3) for example. Note that the input video signals (RGB values) are set to Rin, Gin, and Bin, and the luminance values are set to Yin.
Yin=0.299 Rin+0.587 Gin+0.114 Bin (3)
Here, description is given concerning a luminance histogram calculating technique in the first luminance histogram calculation unit 303 and the second luminance histogram calculation unit 304.
First, the luminance histogram can be obtained by using a counter to count the number of pixels appearing in the input luminance values for each set of input luminance values. Here, specific description is given using
It should be noted that the above-mentioned number of luminance levels is not limited to 16 and may be any number. For example,
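The luminance conversion of formula (3) and the counter-based histogram calculation described above can be sketched as follows. This is an illustrative example only: the function names are hypothetical, and the binning of 8-bit luminance values into 16 levels follows the example in the text.

```python
# Sketch of formula (3) and a counter-based luminance histogram with
# 16 luminance levels; names and binning details are illustrative.

def rgb_to_luma(r, g, b):
    # Formula (3): Yin = 0.299*Rin + 0.587*Gin + 0.114*Bin
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_histogram(rgb_pixels, bins=16, max_level=255):
    """Count the number of pixels falling into each luminance range."""
    hist = [0] * bins
    for r, g, b in rgb_pixels:
        y = rgb_to_luma(r, g, b)
        # Map the range 0..max_level onto bin indices 0..bins-1.
        idx = min(int(y * bins / (max_level + 1)), bins - 1)
        hist[idx] += 1
    return hist
```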
Returning to
In
In the combining process, a third luminance histogram H(x) is obtained by adding the first luminance histogram H1(x) and the second luminance histogram H2(x) based on formula (4) for each luminance level. Here, w1 and w2 are weighted coefficients.
H(x)=w1·H1(x)+w2·H2(x) (4)
When adding luminance histograms without using weighted coefficients, the second luminance histogram accounts for a greater proportion of the third luminance histogram than the first luminance histogram does. This is because, compared to the first luminance histogram, the second luminance histogram is calculated from luminance values of a region having a large image region size (a large number of pixels). Accordingly, in the first embodiment, a weighted coefficient calculating unit 305 sets weighted coefficients depending on the region size (number of pixels) of each of the first image region and the second image region.
For example, when a Full HD (1,920×1,080 pixels) display device is used, the first image region size is set to 16×16 pixels and the second image region size is set to 1,920×1,080 pixels. In this case, it is preferable to set w1=1,024 and w2=1, approximately, as the weighted coefficients. Also, in consideration of the circuit scale, these may be set as w1=1 and w2=1/1,024.
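The weighted combination of formula (4) might be sketched as follows, using the w1 = 1, w2 = 1/1024 weights suggested above for a 16×16 local region and a Full HD global region; the function name is illustrative.

```python
# Sketch of formula (4): H(x) = w1*H1(x) + w2*H2(x), combining the
# first (local) and second (global) luminance histograms with weights
# that compensate for the difference in region size.

def combine_histograms(h1, h2, w1, w2):
    # Evaluated per luminance level x.
    return [w1 * a + w2 * b for a, b in zip(h1, h2)]

# Weights from the Full HD example in the text: the global histogram
# is scaled down so the small local region is not swamped.
w1, w2 = 1.0, 1.0 / 1024
```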
As mentioned earlier, the histogram flattening function obtained here is calculated based on a luminance histogram including both luminance information of the entire screen and luminance information of areas around the target pixel.
For this reason, compared to when converting a luminance value of the target pixel using a conventional histogram flattening function obtained from only luminance information of the entire screen, the probability is higher of being able to convert the luminance levels of regions in the vicinity of the target pixel to broader luminance levels. That is, the conventional problems of darkish regions becoming all black and light regions becoming all white are reduced.
On the other hand, when the histogram flattening function is used as it is to carry out adjustment of luminance values, excessive luminance extension can be applied due to conversion characteristics of the histogram flattening function, and unnatural images may be outputted. For this reason, the conversion characteristics calculating unit 306 carries out a limiting process on the conversion intensity of the histogram flattening function to suppress excessive luminance extension.
Next, description is given using a specific example concerning a limiting process of the histogram flattening function in the conversion characteristics calculating unit 306.
In order to suppress excessive luminance extension, the conversion characteristics of the histogram flattening function should be kept close to those when there is no conversion. Accordingly, in the limiting process of the conversion characteristics calculating unit 306, processing is carried out so that, while the conversion characteristics of the histogram flattening function are maintained as much as possible, they are kept within a constant proportion of the conversion characteristics when there is no conversion. Specifically, as shown in formula (5), a difference value between the histogram flattening function and the no-conversion function is obtained and 40% of the difference value is added to the no-conversion function. In this way, the conversion function shown by (C) in
F(x)=x+0.4(C′(x)−x) (5)
It should be noted that the coefficient 0.4 in the second term on the right-hand side of formula (5) corresponds to 40% of the difference value (C′(x)−x).
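The limiting process of formula (5) might be sketched as follows, taking the flattening function as a lookup table indexed by luminance level; the function name and the table representation are illustrative assumptions.

```python
# Sketch of formula (5): F(x) = x + 0.4*(C'(x) - x). The flattening
# function is pulled back toward the identity (no-conversion) function,
# retaining only a fixed proportion of the difference.

def limit_conversion(c_prime, strength=0.4):
    """Apply the limiting process per luminance level x."""
    return [x + strength * (c - x) for x, c in enumerate(c_prime)]
```

With strength = 0, the output is the identity (no conversion); with strength = 1, the flattening function is used unchanged.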
Here, returning to
The converted luminance value is inversely converted to RGB values based on formula (6) for example and outputted to the image output unit 330. Here, the input video signals (RGB values) of the target pixel are set to Rin, Gin, and Bin, and the input luminance value is set to Yin. Furthermore, the output luminance value after conversion of the target pixel is set to Yout and the output video signals (RGB values) are set to Rout, Gout, and Bout.
Rout=Rin+Yout−Yin
Gout=Gin+Yout−Yin
Bout=Bin+Yout−Yin (6)
It should be noted that the formula used in inversely converting to RGB values is not limited to the above-mentioned formula (6). For example, an inverse conversion formula may be used after the luminance values Y and color components Cb and Cr are separated based on formula (7) in the first image information extracting unit 301 and the second image information extracting unit 302.
Yin=0.299 Rin+0.587 Gin+0.114 Bin
Cb=−0.169 Rin−0.331 Gin+0.500 Bin
Cr=0.500 Rin−0.419 Gin−0.081 Bin (7)
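The inverse conversion of formula (6) might be sketched as follows. This is an illustrative example only; the clamping of the result to the valid 8-bit range is an assumption not stated in the text.

```python
# Sketch of formula (6): the luminance delta (Yout - Yin) is added back
# onto each RGB channel. Clamping to [0, 255] is assumed here.

def apply_luma_delta(r_in, g_in, b_in, y_in, y_out):
    delta = y_out - y_in

    def clamp(v):
        return max(0, min(255, v))

    return clamp(r_in + delta), clamp(g_in + delta), clamp(b_in + delta)
```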
Next, description is given using
First, at step S1301, the position coordinates (p, q) of the video signal outputted from the image input unit 310 are set to (0, 0) and the procedure proceeds to step S1302. At step S1302, the video signal outputted from the image input unit 310 is written to the memory unit 320, and the video signal is converted to a luminance value, which is used in the calculation of the second luminance histogram. This process is carried out by the second image information extracting unit 302 and the second luminance histogram calculation unit 304.
At steps S1303 to S1306, a determination is carried out as to whether or not the calculation process at step S1302 has been completed for all of the video signals for a single screen. Here, if the processing of step S1302 has been completed for all video signals (YES at step S1305), the procedure proceeds to step S1307. If the processing of step S1302 has not been completed for all video signals (NO at step S1305), the position coordinates (p, q) of the video signals are updated at step S1306 and the process returns to step S1302.
Here, specific description is given concerning the updating of (p, q). First, at step S1303, a determination is carried out as to whether or not the position coordinates of the video signal are at a right edge of the input image (that is, p=h), and if not at the right edge (NO at step S1303), then the process proceeds to step S1304 in which the position coordinates of the video signal are moved adjacently right (that is, p=p+1). Furthermore, at step S1303, if the position coordinates of the video signal are at the right edge of the image (that is, p=h), then the procedure proceeds to the next step, step S1305. At step S1305, a determination is carried out as to whether or not the position coordinates of the video signal are at a final line of the image (that is, q=v), and if not at the final line of the image, then the process proceeds to step S1306 in which the position coordinates of the video signal are moved to the left edge one line downward (that is, p=0, q=q+1). Furthermore, if the position coordinates of the video signal at step S1305 are at the final line, then the procedure proceeds to the next step, step S1307 and the second luminance histogram that has been calculated is stored in the conversion characteristics calculating unit 306.
With the above-mentioned process, the video signals for a single screen are written to the memory unit 320, and the calculation of the second luminance histogram, which is a luminance histogram of the entire screen, and the storage thereof to the conversion characteristics calculating unit 306 are completed.
Next, at step S1308, the video signal outputted from the memory unit 320 is received and position coordinates (r, s) of the target pixel are set to (0, 0). Next, at step S1309, the first luminance histogram is calculated based on the position coordinates (r, s) of the target pixel. This process is carried out by the first image information extracting unit 301 and the first luminance histogram calculation unit 303. Next, at step S1310, the first luminance histogram that has been calculated is stored in the conversion characteristics calculating unit 306.
Next, at step S1311, a conversion function for converting the luminance value of the target pixel is calculated using the second luminance histogram stored at step S1307, the first luminance histogram stored at step S1310, and the weighted coefficients. Then, at step S1312, the luminance value of the target pixel at the position coordinates (r, s) is converted using the conversion function calculated at step S1311 and a new luminance value is obtained. This process is carried out by the conversion characteristics calculating unit 306, the conversion processing unit 308, and the weighted coefficient calculating unit 305. The obtained luminance value is then finally inversely converted to RGB values and outputted to the image output unit 330.
Next, at steps S1313 to S1316, a determination is carried out as to whether or not the processes of steps S1309 to S1312 have been completed for all of the video signals for a single screen; if they have been completed, the processing in the luminance value adjusting unit 300 is finished, and if not, the position coordinates (r, s) of the target pixel are updated.
The above was one specific example of a flowchart showing a process of the luminance value adjusting unit 300. According to an embodiment, a processing flow for a single screen is shown and described with reference to
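The two-pass flow of steps S1301 to S1316 might be sketched in a greatly simplified form as follows. This is an illustrative condensation, not the claimed apparatus: it operates on a 2D array of luminance values rather than RGB video signals, the region-size weighting is folded into a single ratio, and all names are hypothetical.

```python
# Sketch of the per-frame flow: pass 1 builds the global (second)
# histogram for the whole frame; pass 2 builds a local (first)
# histogram per target pixel, combines the two (formula (4)), and
# converts that pixel's luminance via the flattening function.

def process_frame(frame, local=3, levels=256):
    h, w = len(frame), len(frame[0])

    def histogram(values):
        hist = [0] * levels
        for v in values:
            hist[v] += 1
        return hist

    def flatten(hist):
        # Normalized cumulative histogram as a lookup table.
        cum, total = [], 0
        for c in hist:
            total += c
            cum.append(total)
        span = (cum[-1] - cum[0]) or 1
        return [(c - cum[0]) * (levels - 1) // span for c in cum]

    # Pass 1 (cf. S1301-S1307): global histogram of the entire frame.
    global_hist = histogram([p for row in frame for p in row])

    out = [[0] * w for _ in range(h)]
    half = local // 2
    # Pass 2 (cf. S1308-S1316): per-target-pixel local histogram,
    # weighted combination, and conversion.
    for s in range(h):
        for r in range(w):
            window = [frame[y][x]
                      for y in range(max(0, s - half), min(h, s + half + 1))
                      for x in range(max(0, r - half), min(w, r + half + 1))]
            local_hist = histogram(window)
            w1 = (h * w) / len(window)  # compensate for region size
            combined = [int(w1 * a + b)
                        for a, b in zip(local_hist, global_hist)]
            out[s][r] = flatten(combined)[frame[s][r]]
    return out
```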
With the first embodiment, the histogram flattening function is calculated based on a luminance histogram including both luminance information of the entire screen and luminance information of areas around the target pixel. This increases the probability that the luminance levels of regions in the vicinity of the target pixel will be converted to broader luminance levels and makes it possible to reduce the problems of darkish regions becoming all black and light regions becoming all white.
Furthermore, with the first embodiment, excessive luminance extension can be suppressed and the probability of unnatural images being outputted is reduced.
Further still, with the first embodiment, the memory unit 320 is provided and the same video signals are inputted at different timings into the first image information extracting unit 301 and the second image information extracting unit 302. In this way, the second luminance histogram, which requires a large amount of processing before the luminance histogram is obtained since the area of its region is large, is calculated in advance and stored in the conversion characteristics calculating unit 306. On the other hand, the first luminance histogram, which requires little processing since the area of its region is small, can be calculated and stored successively using the video signals that are outputted from the memory unit 320. This makes it possible to shorten the processing time required until calculation of the conversion function.
[Second Embodiment]
Next, detailed description is given concerning a second embodiment of the present invention with reference to the accompanying drawings. In the first embodiment, a combination of the first luminance histogram and the second luminance histogram was carried out. In the second embodiment, a first conversion function and a second conversion function are obtained from the first luminance histogram and the second luminance histogram respectively, and these two conversion functions are combined.
After calculating the first luminance histogram, the first conversion function calculating unit 1401 obtains a first cumulative luminance histogram C1(x) based on formula (8). Further still, a histogram flattening function C′1(x) is obtained based on formula (9). Then, after calculating the second luminance histogram, the second conversion function calculating unit 1402 similarly obtains a second cumulative luminance histogram C2(x) based on formula (8). Further still, a histogram flattening function C′2(x) is obtained based on formula (9).
Next, with the third conversion function calculating unit 1403, a process of combining the histogram flattening functions C′1(x) and C′2(x) is performed giving consideration to the weighted coefficients w1 and w2. Specifically, this is based on formula (10).
C′1+2(x)=w1·C′1(x)+w2·C′2(x) (w1+w2=1) (10)
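The combination of formula (10) might be sketched as follows, treating each flattening function as a lookup table over luminance levels; the function name is illustrative, and the constraint w1 + w2 = 1 is enforced by deriving w2 from w1.

```python
# Sketch of formula (10): C'1+2(x) = w1*C'1(x) + w2*C'2(x), with
# normalized weights (w1 + w2 = 1), combining the local and global
# histogram flattening functions.

def combine_functions(c1, c2, w1):
    w2 = 1.0 - w1
    return [w1 * a + w2 * b for a, b in zip(c1, c2)]
```

Because the inputs are already normalized conversion functions rather than raw histograms, no region-size compensation of the kind used in the first embodiment is needed here.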
The histogram flattening functions shown in
A limiting process is carried out on the combination result shown in
F(x)=x+0.4(C′1+2(x)−x) (11)
Incidentally, the limiting process of the above-mentioned (11) is not limited to being applied only to the combination result shown in
With the second embodiment, normalized functions are combined together and therefore, compared to a case of combining luminance histograms together as in the first embodiment, there is an advantage that only a small amount of data needs to be handled. Furthermore, conversion characteristics preferred for each region can be calculated.
[Third Embodiment]
Next, detailed description is given concerning a third embodiment of the present invention with reference to the accompanying drawings. In the third embodiment, the value of the weighted coefficients and the image region size and moreover the proportion of the difference value in the limiting process are changed in response to position information of the target pixel.
Generally in videos, the content being expressed tends to be positioned in the vicinity of the center of the screen and the gaze of viewers is commonly directed towards the center of the screen. As such, by carrying out tone correction with greater emphasis on areas in the vicinity of the center of the screen and setting the emphasis to be reduced in extent toward peripheral areas of the screen, it is possible to achieve luminance value adjustments giving consideration to the tendency of videos and the gaze of viewers.
Also, the size of the first image region is set for each area in the first image information extracting unit 1901. That is, the size of the first image region can be changed in response to position information of the target pixel. For example, when a Full HD (1,920×1,080 pixels) display device is used, the first image region size in the area 2001 is set to 4×4 pixels, and set to 16×16 pixels in the area 2002, while being set to 32×32 pixels in the area 2003. With these settings, luminance value adjustments can be achieved in which areas of the vicinity of the center of the screen are more emphasized.
Furthermore, the proportion of the difference value in the limiting process of the conversion characteristics calculating unit 306 (the coefficient in the second term on the right-hand side of formula (5)) is similarly changed for each position of the target pixel.
With the third embodiment, as described above, by adjusting the value of the weighted coefficients and the image region size and moreover the proportion of the difference value in the limiting process for each position of the target pixel, it is possible to achieve luminance value adjustments giving consideration to positions within the screen. For example, it is possible to achieve luminance value adjustments giving consideration to the tendency for content being mainly expressed in the video to be positioned in the vicinity of the center of the screen as well as the gaze of viewers.
[Fourth Embodiment]
Next, detailed description is given concerning a fourth embodiment of the present invention with reference to the accompanying drawings. In the fourth embodiment, the value of the weighted coefficients and the proportion of the difference value in the limiting process are changed based on a shape of the first luminance histogram and the second luminance histogram.
Here, measuring distribution shapes refers to performing calculations based on the luminance histograms in regard to such factors as an average luminance value, largest and smallest luminance values, and a largest and smallest number of pixels in each luminance level for example. A weighted coefficient is set in the weighted coefficient calculating unit 2103 for each measurement result of distribution shapes and the weighted coefficient is selected and outputted based on the measurement result.
Furthermore, the proportion of the difference value in the limiting process of the conversion characteristics calculating unit 306 (the coefficient in the second term on the right-hand side of formula (5)) is similarly changed depending on the distribution shape of the luminance histogram.
Further still, in addition to the distribution shape of the luminance histogram, it is also possible to extract color information such as memory colors of skin and sky colors and the like from image information of partial areas and use this in adjustments of conversion characteristics.
With the fourth embodiment, by adjusting the value of the weighted coefficients based on the distribution shape and color information of the luminance histograms as well as the proportion of the difference value in the limiting process, it is possible to achieve luminance value adjustments giving consideration to luminance information and color information for each video pattern.
[Fifth Embodiment]
Next, detailed description is given concerning a fifth embodiment of the present invention with reference to the accompanying drawings. In the fifth embodiment, conversion characteristics of a current frame are calculated based on the conversion characteristics of the immediately preceding frame.
When carrying out luminance value adjustments for continuous video signals such as for moving pictures, screen flickering and the like may be caused when a large difference occurs in the correction amounts between frames.
In the fifth embodiment, in order to reduce this phenomenon, the frame characteristics calculating unit 2201 performs control so that no large difference is created in the correction amounts between the second conversion function of the preceding frame and the second conversion function of the current frame. That is, the frame characteristics calculating unit 2201 carries out processing so that, while the conversion characteristics of the current frame are maintained as much as possible, they are kept within a constant proportion of the conversion characteristics of the preceding frame.
Specifically, a difference value between the second histogram flattening function of the preceding frame and the second histogram flattening function of the current frame is obtained, and 60% of the difference value is added to the second histogram flattening function of the preceding frame to calculate a new conversion function. A formula for this calculation is shown in formula (12). Here, the second histogram flattening function of the preceding frame is given as C′2old(x) and that of the current frame is given as C′2new(x).
C′2(x)=C′2old(x)+0.6(C′2new(x)−C′2old(x)) (12)
The newly obtained second conversion function is outputted to the third conversion function calculating unit 1403 and in the third conversion function calculating unit 1403 a process of combining this with the first conversion function is carried out as described in the second embodiment.
With the above-described process, adjustments are performed so that no large difference is created in correction amounts between frames, which makes it possible to reduce screen flickering.
On the other hand, if the above-described process is carried out when there is a scene change in the inputted video, then video signals requiring a rapid change are made to change gently, which may give the viewer a sense of unnaturalness. Here, a scene change refers to the image changing over a large region of the screen, which typically includes switching between scenes or panning.
In order to reduce problems relating to such scene changes, the frame characteristics calculating unit 2201 applies the above formula (12) only when a scene change is not detected.
As a technique for detecting a scene change, a scene change may be determined when a difference value between the conversion characteristics of the current frame and the conversion characteristics of the preceding single frame has exceeded a specified numerical value (a threshold value) for example.
Specifically, the absolute value of the difference in the second term on the right-hand side of formula (12) is used as the difference value, and the threshold value is set with respect to this difference value.
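The inter-frame smoothing of formula (12) together with the scene-change guard described above might be sketched as follows. This is an illustrative example only: the function name, the threshold value, and the use of the maximum per-level difference as the detection criterion are assumptions.

```python
# Sketch of formula (12) with a scene-change guard: the conversion
# function moves only 60% of the way from the preceding frame's
# function toward the current frame's, unless the per-level difference
# exceeds a threshold (treated as a scene change), in which case the
# current frame's function is adopted as-is.

def smooth_over_frames(c_old, c_new, rate=0.6, threshold=64):
    diff = [abs(n - o) for n, o in zip(c_new, c_old)]
    if max(diff) > threshold:  # scene change: allow a rapid change
        return list(c_new)
    # Formula (12): C'2(x) = C'2old(x) + rate*(C'2new(x) - C'2old(x))
    return [o + rate * (n - o) for n, o in zip(c_new, c_old)]
```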
With the fifth embodiment, screen flickering resulting from large differences in correction amounts between frames can be reduced. Moreover, slow changing of video signals at times of scene changes can be avoided, which lessens any sense of unnaturalness felt by the viewer.
It should be noted that in the above-described example, a comparison was carried out between the current frame and the preceding single frame using the histogram flattening functions, but it is also possible to carry out comparisons between frames using the luminance histograms.
Furthermore, in the foregoing embodiments, the coefficients of the second term on the right side in formula (5), formula (11), and formula (12) are not limited to 0.4 and 0.6, and may be set to any suitable value. Additionally, various modifications may be carried out without departing from the purport of the present invention.
Furthermore, a system or a device may be provided with a recording medium on which is recorded program code of software for achieving a function of the embodiments, and a computer (CPU or MPU) of this system or device may read out and execute the program code stored on the recording medium. It is evident that an object of the present invention may be achieved in this manner.
In this case, the actual program code that is read out from the recording medium achieves the functionality of the above-described embodiments, such that the recording medium on which the program code is stored constitutes the present invention.
Furthermore, it is evident that the functionality of the foregoing embodiments may be achieved not only by a computer executing the read-out program code, but also in the following cases. Namely, this includes a case in which an OS (operating system) or the like running on the computer carries out a part or all of the actual processing according to instructions of the program code, such that the functionality of the foregoing embodiments is achieved by that processing.
Further still, it is possible for the program code read out from the recording medium to be written onto a memory provided in an extension board inserted into the computer or an extension unit connected to the computer. It is evident that this may subsequently also include having a CPU or the like provided in the extension board or extension unit carry out a part or all of the actual processing according to instructions of the program code such that the functionality of the foregoing embodiments is achieved by the processing thereof.
Furthermore, the program may enable the functionality of the foregoing embodiments to be executed by a computer, and the form of the program may include forms such as object code, a program executed by an interpreter, or script data or the like provided in an OS.
Recording media for providing the program include, for example, a RAM, an NV-RAM, a floppy (registered trademark) disk, an optical disk, a magneto-optical disk, a CD-ROM, an MO, a CD-R, and a CD-RW. Further still, any medium that can store the above-described program may be used, including DVDs (DVD-ROM, DVD-RAM, DVD-RW, and DVD+RW), magnetic tape, a nonvolatile memory card, a ROM, and the like. Alternatively, the program can be supplied by being downloaded from another computer, database, or the like (not shown) connected to the Internet, a business network, a local area network, or the like.
As described above, a display device of the present invention enables adjustment of an image's luminance values while reducing clipped shadows and clipped highlights that occur in some areas when contrast has been emphasized within a limited dynamic range.
The present invention was described using preferred embodiments, but the present invention is not limited to the foregoing embodiments and various modifications are possible within the scope of the claims.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of
Japanese Patent Application No. 2006-117187, filed Apr. 20, 2006, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind |
---|---|---|---|
2006-117187 | Apr 2006 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5450502 | Eschbach et al. | Sep 1995 | A |
5581370 | Fuss et al. | Dec 1996 | A |
5724456 | Boyack et al. | Mar 1998 | A |
5751844 | Bolin et al. | May 1998 | A |
5862254 | Kim et al. | Jan 1999 | A |
5963665 | Kim et al. | Oct 1999 | A |
6266441 | Hashimoto et al. | Jul 2001 | B1 |
6438165 | Normile | Aug 2002 | B2 |
6463173 | Tretter | Oct 2002 | B1 |
6665450 | Cornog et al. | Dec 2003 | B1 |
6807298 | Park et al. | Oct 2004 | B1 |
6865295 | Trajkovic | Mar 2005 | B2 |
6950114 | Honda et al. | Sep 2005 | B2 |
6985623 | Prakash et al. | Jan 2006 | B2 |
7003153 | Kerofsky | Feb 2006 | B1 |
7012625 | Kobayashi et al. | Mar 2006 | B1 |
7058220 | Obrador | Jun 2006 | B2 |
7158674 | Suh | Jan 2007 | B2 |
7245764 | Nishizawa | Jul 2007 | B2 |
7310589 | Li | Dec 2007 | B2 |
7474846 | Subbotin | Jan 2009 | B2 |
7676111 | Yamauchi | Mar 2010 | B2 |
7738698 | Altunbasak et al. | Jun 2010 | B2 |
7760961 | Moldvai | Jul 2010 | B2 |
8254677 | Abe et al. | Aug 2012 | B2 |
20020102021 | Pass et al. | Aug 2002 | A1 |
20020136454 | Park et al. | Sep 2002 | A1 |
20030081856 | Sartor et al. | May 2003 | A1 |
20040170316 | Saquib et al. | Sep 2004 | A1 |
20040213478 | Chesnokov | Oct 2004 | A1 |
20050031201 | Goh | Feb 2005 | A1 |
20050088534 | Shen et al. | Apr 2005 | A1 |
20060182360 | Lee et al. | Aug 2006 | A1 |
20060192693 | Yamauchi | Aug 2006 | A1 |
20070001997 | Kim et al. | Jan 2007 | A1 |
20070053587 | Ali | Mar 2007 | A1 |
20070183682 | Weiss | Aug 2007 | A1 |
20070269132 | Duan et al. | Nov 2007 | A1 |
20070286480 | Mizuno et al. | Dec 2007 | A1 |
20070297689 | Neal | Dec 2007 | A1 |
20100157078 | Atanassov et al. | Jun 2010 | A1 |
Number | Date | Country |
---|---|---|
10-187949 | Jul 1998 | JP |
2001-125535 | May 2001 | JP |
2002-281312 | Sep 2002 | JP |
Number | Date | Country | |
---|---|---|---|
20070286480 A1 | Dec 2007 | US |