The present application claims priority from Japanese Patent Application No. JP 2008-270512 filed in the Japanese Patent Office on Oct. 21, 2008, the entire content of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus, an image processing method, and a program. Particularly, the present invention relates to an image processing apparatus, an image processing method, and a program that enable an improvement in perceived quality of a gradation-converted image.
2. Description of the Related Art
For example, in a case where an image of a large number of bits, such as an image in which each of RGB (Red, Green, and Blue) values is 8 bits, is to be displayed on a display of a small number of bits, such as an LCD (Liquid Crystal Display) capable of displaying an image in which each of RGB values is 6 bits, it is necessary to perform gradation conversion for converting the gradation level of the image.
An example of a method for performing the gradation conversion is FRC (Frame Rate Control).
In the FRC, the frame rate of images to be displayed on a display is adjusted to match the display rate of the display, the display rate being, for example, four times the frame rate, and then the images are displayed on the display.
That is, for example, assume that 8-bit images are to be displayed on a 6-bit LCD. When the focus is put on a pixel in a frame of the 8-bit images, the frame is called a target frame and the pixel is called a target pixel.
Furthermore, assume that the pixel value of the target pixel is 127. Also, assume that the frame rate (or the field rate) of the 8-bit images is 60 Hz and that the display rate of the 6-bit LCD is four times the frame rate of the 8-bit images, that is, 240 Hz.
In the FRC, the frame rate of the images is multiplied by four so that it matches the display rate of the display, and then the rate-controlled images are displayed.
The FRC is described below.
A value 127 (=01111111b), which is an 8-bit pixel value of the target pixel, can be expressed by the expression 127=(124+128+128+128)/4=31+32+32+32.
On the right side of the expression 127=(124+128+128+128)/4, 124 is expressed by 01111100b in a binary number, whereas 128 is expressed by 10000000b in a binary number, and thus both of the values are 8-bit values. Here, b represents that the preceding value is a binary number.
When the frame rate of images is multiplied by n so that the frame rate matches the display rate of a display, n is called “control multiple”.
When 124 (=01111100b) is divided by 4 (=2²) as a control multiple, 31 (=011111b) is obtained. When 128 (=10000000b) is divided by 4 as a control multiple, 32 (=100000b) is obtained. Both of the values are 6-bit values.
A value 127, which is an 8-bit pixel value of the target pixel, can be displayed on a 6-bit display by converting it into four 6-bit pixel values 31, 32, 32, and 32 in accordance with the expression 127=(124+128+128+128)/4=31+32+32+32.
In the FRC, a target frame is converted into frames the number of which is equal to a control multiple, that is, into four frames in this case. Now, assume that the four frames are called first, second, third, and fourth frames in display time series. In this case, the pixel values of pixels at the position of the target pixel in the first to fourth frames correspond to the 6-bit pixel values 31, 32, 32, and 32 in the FRC.
In the FRC, the first to fourth frames are displayed on the display at a display rate four times the original frame rate. In this case, at the position of the target pixel, the 6-bit pixel values 31, 32, 32, and 32 are integrated in a time direction in human vision, so that the pixel value looks like 127.
As described above, in the FRC, 127 as an 8-bit pixel value is expressed in a pseudo manner with use of a visual integration effect in which integration in a time direction is performed in human vision.
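As a minimal sketch of this decomposition (the function name and the ordering of the sub-frame values are assumptions; a real FRC implementation also orders the sub-frame values carefully to avoid visible patterns):

```python
def frc_split(value_8bit, control_multiple=4):
    """Split an 8-bit pixel value into `control_multiple` 6-bit values
    whose sum equals the original value (for a control multiple of 4),
    mimicking the decomposition 127 = 31 + 32 + 32 + 32 used by FRC."""
    base = value_8bit >> 2        # truncate to 6 bits: 127 -> 31
    remainder = value_8bit & 0b11 # the truncated lower bits: 0..3
    # `remainder` of the sub-frames carry base + 1; the rest carry base.
    return [base + 1] * remainder + [base] * (control_multiple - remainder)

values = frc_split(127)           # [32, 32, 32, 31]
assert sum(values) == 127         # integrates to 127 in human vision
```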
Another example of the method for performing the gradation conversion is an error diffusion method (e.g., see “Yoku wakaru dijitaru gazou shori” by Hitoshi KIYA, Sixth edition, CQ Publishing, Co. Ltd., January 2000, pp. 196-213).
In the error diffusion method, two-dimensional ΔΣ modulation is performed, whereby error diffusion of shaping a quantization error as noise to a high range of spatial frequencies is performed. The noise is added to a pixel value and then the pixel value is quantized into a desired number of bits.
In the error diffusion method, a pixel value is quantized after noise has been added thereto, as described above. Therefore, in a quantized (gradation-converted) image, pixel values that would become constant if only their lower bits were truncated appear as if PWM (Pulse Width Modulation) had been performed on them. As a result, the gradation of a gradation-converted image looks like it changes smoothly, due to a visual integration effect in which integration in space directions is performed in human vision. That is, a gradation level equivalent to that of an original image (e.g., 256 (=2⁸) gradations when the original image is an 8-bit image as described above) can be expressed in a pseudo manner.
Also, in the error diffusion method, noise (quantization error) after noise shaping is added to a pixel value in consideration that the sensitivity of human vision is low in a high range of spatial frequencies. Accordingly, the level of noise noticeable in a gradation-converted image can be decreased.
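The related-art error diffusion can be sketched as follows, using the classic Floyd & Steinberg kernel; the function name and the 8-bit-to-6-bit quantization rule (division by 4 with rounding) are assumptions:

```python
import numpy as np

def error_diffusion_8_to_6(image_8bit):
    """Sketch of the related-art error diffusion: quantize 8-bit values to
    6 bits with the classic Floyd & Steinberg kernel, diffusing each
    quantization error to raster-future neighbors so that the error is
    shaped toward high spatial frequencies."""
    img = image_8bit.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            u = img[y, x]
            q = int(np.clip(round(u / 4.0), 0, 63))  # 6-bit quantized value
            out[y, x] = q
            err = u - 4.0 * q   # quantization error, in 8-bit units
            # Floyd & Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```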
In the FRC, a high gradation level is realized in a pseudo manner by using a visual integration effect in a time direction, as described above. However, when changes of pixel values at the same position are significant due to a motion in images, so-called flicker may become noticeable.
On the other hand, in the error diffusion method, noise (quantization error) after noise shaping is added to a pixel value, but the quantization error on which noise shaping is performed is a quantization error of (a pixel value of) a pixel that is spatially approximate to a target pixel. For this reason, in the error diffusion method, when an original image is a still image, quantization errors of pixels at the same position in the respective frames have the same value, and the same noise (noise after noise shaping) is added to the pixel values of the pixels at the same position in the respective frames.
When the target image is a still image, the pixel values of pixels at the same position in sequential frames of the gradation-converted image are therefore the same.
Note that, in the FRC, the pixel values of pixels at the same position in a plurality of sequential frames of a gradation-converted image may be different from each other, even when the image is a still image.
In the error diffusion method, unlike in the FRC, flicker does not occur. However, in the error diffusion method, noise is added to the target image, and thus the added noise may be noticeable when the amplitude of one gradation step is large in the gradation-converted image (e.g., in a 6-bit image obtained through gradation conversion performed on an 8-bit image) and the (quantized) pixel values change at low frequencies.
Accordingly, it is desirable to improve a perceived quality of a gradation-converted image.
According to an embodiment of the present invention, there is provided an image processing apparatus including gradation converting means for simultaneously performing space-direction ΔΣ modulation and time-direction ΔΣ modulation on an image, thereby converting a gradation level of the image. Also, there is provided a program causing a computer to function as the image processing apparatus.
According to an embodiment of the present invention, there is provided an image processing method for an image processing apparatus that performs gradation conversion on an image. The image processing method includes the step of simultaneously performing space-direction ΔΣ modulation and time-direction ΔΣ modulation on an image, thereby converting a gradation level of the image.
In the foregoing image processing apparatus, image processing method, and program, space-direction ΔΣ modulation and time-direction ΔΣ modulation are simultaneously performed on an image, whereby a gradation level of the image is converted.
The image processing apparatus may be an independent apparatus or may be an internal block constituting an apparatus.
The program can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.
According to the above-described embodiments of the present invention, a perceived quality of a gradation-converted image can be improved.
Exemplary Configuration of an Image Processing Apparatus According to an Embodiment of the Present Invention
The image processing apparatus includes a gradation converting unit 11 and a display 12.
The gradation converting unit 11 is supplied with, as a target image, an image in which each of RGB components is 8 bits. The gradation converting unit 11 converts the gradation level of the 8-bit image (target image) supplied thereto by simultaneously performing ΔΣ modulation in space directions (hereinafter referred to as space-direction ΔΣ modulation) and ΔΣ modulation in a time direction (hereinafter referred to as time-direction ΔΣ modulation) on the 8-bit image.
That is, the gradation converting unit 11 converts the 8-bit image supplied thereto into a 6-bit image (an image in which each of RGB components is 6 bits), for example.
Then, the gradation converting unit 11 supplies the (gradation-converted) 6-bit image obtained through gradation conversion to the display 12.
The display 12 is a 6-bit LCD capable of displaying a 6-bit image and displays the 6-bit image supplied from the gradation converting unit 11.
In the gradation converting unit 11, gradation conversion of the 8-bit image is performed independently for each of RGB components.
The gradation converting unit 11 includes a calculating unit 31, a quantizing unit 32, a calculating unit 33, and a three-dimensional filtering unit 34.
The calculating unit 31 is supplied with pixel values IN(x, y) of pixels in a target image in a raster scanning order. Furthermore, the calculating unit 31 is supplied with outputs of the three-dimensional filtering unit 34.
The calculating unit 31 adds the pixel value IN(x, y) of the target image and the output of the three-dimensional filtering unit 34 and supplies (outputs) a sum value U(x, y) obtained thereby to the quantizing unit 32 and the calculating unit 33.
Here, IN(x, y) represents the pixel value of the pixel (x, y) x-th from the left and y-th from the top. U(x, y) represents a sum value of the pixel value IN(x, y) and the output of the three-dimensional filtering unit 34.
The quantizing unit 32 quantizes the sum value U(x, y), which is the output of the calculating unit 31, into the number of bits of an image that can be displayed on the display 12, that is, into 6 bits, and outputs a 6-bit quantized value obtained thereby as a pixel value OUT(x, y) of a gradation-converted image.
The pixel value OUT(x, y) output from the quantizing unit 32 is supplied to the display 12 and also to the calculating unit 33.
The calculating unit 33 calculates the difference U(x, y)−OUT(x, y) between the sum value U(x, y) output from the calculating unit 31 and the quantized value OUT(x, y) output from the quantizing unit 32, thereby obtaining the quantization error Q(x, y) included in the quantized value OUT(x, y), and outputs the quantization error Q(x, y).
The quantization error Q(x, y) output from the calculating unit 33 is supplied to the three-dimensional filtering unit 34.
The three-dimensional filtering unit 34 performs filtering in space and time directions on the quantization error Q(x, y) supplied from the calculating unit 33 and supplies (outputs) a filtering result to the calculating unit 31.
Specifically, the three-dimensional filtering unit 34 includes a space-direction filter 41, a time-direction filter 43, and calculating units 42, 44, and 45.
The space-direction filter 41 is supplied with the quantization error Q(x, y) from the calculating unit 33. The space-direction filter 41 is a two-dimensional filter, such as a two-dimensional FIR (Finite Impulse Response) filter. The space-direction filter 41 performs filtering in space directions (hereinafter referred to as space-direction filtering) on the quantization error Q(x, y) supplied from the calculating unit 33 and supplies (outputs) a filtering result to the calculating unit 42.
That is, assume that the calculating unit 31 is supplied with the pixel value IN(x, y) and that the pixel (x, y) as a target of gradation conversion is called a target pixel. Then, the space-direction filter 41 performs filtering by using quantization errors of pixels that are spatially approximate to the target pixel.
The calculating unit 42 multiplies the output of the space-direction filter 41 by a predetermined weight 1−k and supplies a product obtained thereby to the calculating unit 45.
Here, k is a real number in the range from 0 to 1. The value k may be a fixed value of 0.5, for example, or may be a variable value that varies in accordance with a user operation.
The time-direction filter 43 is supplied with the quantization error Q(x, y) from the calculating unit 33. The time-direction filter 43 is a one-dimensional filter, such as a one-dimensional FIR filter. The time-direction filter 43 performs filtering in a time direction (hereinafter referred to as time-direction filtering) on the quantization error Q(x, y) supplied from the calculating unit 33 and supplies (outputs) a filtering result to the calculating unit 44.
That is, the time-direction filter 43 performs filtering by using quantization errors of pixels that are temporally approximate to the target pixel.
The calculating unit 44 multiplies the output of the time-direction filter 43 by a predetermined weight k and supplies a product obtained thereby to the calculating unit 45.
The calculating unit 45 adds the product supplied from the calculating unit 42 and the product supplied from the calculating unit 44 and supplies a sum value obtained thereby, as a filtering result of the three-dimensional filtering unit 34, to the calculating unit 31.
When the transfer function of the space-direction filter 41 is represented by G and the transfer function of the time-direction filter 43 is represented by F, the pixel value OUT(x, y) of a gradation-converted image is expressed by expression (1).
OUT(x, y)=IN(x, y)−(1−k)(1−G)Q(x, y)−k(1−F)Q(x, y) (1)
In expression (1), the quantization error Q(x, y) is modulated with each of −(1−G) and −(1−F). The modulation with −(1−G) corresponds to noise shaping based on space-direction ΔΣ modulation performed on the quantization error Q(x, y), and the modulation with −(1−F) corresponds to noise shaping based on time-direction ΔΣ modulation performed on the quantization error Q(x, y).
Also, in expression (1), the value obtained by modulating the quantization error Q(x, y) with −(1−G) and weighting the result with the weight 1−k, and the value obtained by modulating the quantization error Q(x, y) with −(1−F) and weighting the result with the weight k, are added to the pixel value IN(x, y).
Therefore, in the gradation converting unit 11, space-direction ΔΣ modulation and time-direction ΔΣ modulation are simultaneously performed on the target image, with their respective effects balanced by the weights 1−k and k.
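The loop defined by expression (1) can be sketched in Python as follows. This is a sketch only: the space-direction filter G here is a hypothetical 2-tap causal kernel (the embodiment uses the 12-tap SBM filter described later), the time-direction filter F is a bare one-frame delay, and all function and variable names are illustrative.

```python
import numpy as np

def gradation_convert_frame(frame_in, prev_err, k=0.5):
    """One frame of simultaneous space- and time-direction delta-sigma
    modulation, following expression (1).

    frame_in : 8-bit frame as a 2-D float array (the IN(x, y) values)
    prev_err : quantization errors Q of the preceding frame, so that the
               time filter F acts as a bare one-frame delay (F = Z^-1)
    k        : weight of the time-direction term, 0 <= k <= 1
    """
    h, w = frame_in.shape
    out = np.zeros((h, w), dtype=np.uint8)
    err = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Hypothetical 2-tap causal space filter G: left and upper errors.
            g = 0.0
            if x > 0:
                g += 0.5 * err[y, x - 1]
            if y > 0:
                g += 0.5 * err[y - 1, x]
            f = prev_err[y, x]                         # time filter output
            u = frame_in[y, x] + (1 - k) * g + k * f   # U(x, y)
            q = int(np.clip(round(u / 4.0), 0, 63))    # 6-bit OUT(x, y)
            out[y, x] = q
            err[y, x] = u - 4.0 * q                    # Q(x, y) = U - OUT
    return out, err
```

For a video, the err array returned for one frame is passed in as prev_err for the next frame; this corresponds to the frame-memory behavior of the time-direction filter 43 described below.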
In step S11, the calculating unit 31 waits for a pixel value of a pixel in the target image to be supplied thereto, receives the pixel value while regarding the pixel having that pixel value as a target pixel, and adds the output of the three-dimensional filtering unit 34 to the pixel value.
Specifically, in step S11, the calculating unit 31 adds the pixel value and a value obtained through the preceding filtering performed by the three-dimensional filtering unit 34 in step S14 described below (an output of the three-dimensional filtering unit 34), and outputs a sum value obtained thereby to the quantizing unit 32 and the calculating unit 33. Then, the process proceeds to step S12.
In step S12, the quantizing unit 32 quantizes the sum value as the output of the calculating unit 31, and outputs a quantized value including a quantization error, the quantized value serving as a result of ΔΣ modulation (a result of gradation conversion based on ΔΣ modulation). Then, the process proceeds to step S13.
In step S13, the calculating unit 33 calculates a difference between the sum value as the output of the calculating unit 31 and the output of the quantizing unit 32 (the quantized value of the sum value as the output of the calculating unit 31), thereby obtaining a quantization error of the quantization performed by the quantizing unit 32. Furthermore, the calculating unit 33 supplies the quantization error to the three-dimensional filtering unit 34, and the process proceeds from step S13 to step S14.
In step S14, the three-dimensional filtering unit 34 performs space-direction filtering and time-direction filtering on the quantization error supplied from the calculating unit 33.
Specifically, in the three-dimensional filtering unit 34, the quantization error supplied from the calculating unit 33 is supplied to the space-direction filter 41 and the time-direction filter 43.
The space-direction filter 41 performs space-direction filtering on the quantization error supplied from the calculating unit 33 and supplies a filtering result to the calculating unit 42. The calculating unit 42 multiplies the output of the space-direction filter 41 by a predetermined weight 1−k and supplies a product obtained thereby to the calculating unit 45.
On the other hand, the time-direction filter 43 performs time-direction filtering on the quantization error supplied from the calculating unit 33 and supplies a filtering result to the calculating unit 44. The calculating unit 44 multiplies the output of the time-direction filter 43 by a predetermined weight k and supplies a product obtained thereby to the calculating unit 45.
The calculating unit 45 adds the product supplied from the calculating unit 42 and the product supplied from the calculating unit 44 and supplies a sum value as a filtering result of the three-dimensional filtering unit 34 to the calculating unit 31.
Then, when a pixel value of a pixel next to the target pixel in the raster scanning order is supplied to the calculating unit 31, the process returns from step S14 to step S11.
In step S11, the calculating unit 31 regards the pixel next to the target pixel as a new target pixel, and adds the pixel value of the new target pixel and the filtering result supplied from the three-dimensional filtering unit 34 in the preceding step S14. Thereafter, the same process is repeated.
As described above, in the gradation converting unit 11, the three-dimensional filtering unit 34 performs space-direction filtering and time-direction filtering on the quantization error supplied from the calculating unit 33. Accordingly, space-direction ΔΣ modulation and time-direction ΔΣ modulation are simultaneously performed on the target image.
That is, in the gradation converting unit 11, an effect of space-direction ΔΣ modulation (an effect of noise shaping) occurs in accordance with the weight 1−k, and an effect of time-direction ΔΣ modulation occurs in accordance with the weight k, whereby the quantization error is diffused in both space and time directions.
As a result of the quantization error diffusion in both space and time directions, the gradation of an image on which gradation conversion has been performed by the gradation converting unit 11 looks like it smoothly changes due to integration effects in space and time directions in human vision.
Furthermore, since the quantization error is diffused not only in space directions but also in a time direction, noticeable noise (quantization error) in a gradation-converted image can be suppressed compared to the error diffusion method according to a related art in which only space-direction ΔΣ modulation is performed, so that a perceived quality of the gradation-converted image can be improved.
Also, since the quantization error is diffused not only in a time direction but also in space directions, flicker in an image, like flicker occurring in the FRC, which would occur if only time-direction ΔΣ modulation is performed, can be suppressed. Accordingly, a perceived quality of the gradation-converted image can be improved.
In a case where the time-direction filter 43 performs time-direction filtering of multiplying a quantization error of a pixel in the preceding frame by a filter coefficient of 1.0 and outputting a product as a filtering result, the time-direction filter 43 can be constituted by a single frame memory.
The time-direction filtering of multiplying a quantization error of a pixel in the preceding frame by a filter coefficient of 1.0 and outputting a product as a filtering result can be performed also by a FIFO (First In First Out) memory or the like that is capable of storing quantization errors of one frame, as well as by a single frame memory.
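Such a one-frame delay can be sketched as a FIFO; the class name is illustrative, and quantization errors are assumed to arrive in the same raster order in every frame:

```python
from collections import deque

class FrameDelay:
    """One-frame delay (F = Z^-1) for time-direction filtering: a FIFO
    holding exactly one frame's worth of quantization errors."""
    def __init__(self, pixels_per_frame):
        # Pre-filled with zeros, so the first frame sees no past error.
        self.fifo = deque([0.0] * pixels_per_frame)

    def __call__(self, q):
        self.fifo.append(q)         # store Q(x, y) of the current pixel
        return self.fifo.popleft()  # emit Q of the co-located pixel one frame earlier
```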
In the gradation converting unit 11 in FIG. 6, the time-direction filter 43 is constituted by such a frame memory and performs, as time-direction filtering, a delay corresponding to the time of one frame.
When the transfer function F of the time-direction filter 43, which causes the delay corresponding to the time of one frame as time-direction filtering, is represented by Z⁻¹, the following expressions (2) and (3) are established in the gradation converting unit 11 in FIG. 6.
Q(x, y)=U(x, y)−OUT(x, y) (2)
U(x, y)=IN(x, y)+(1−k)GQ(x, y)+kZ⁻¹Q(x, y) (3)
Substituting expression (3) into expression (2) and eliminating U(x, y) yields expression (4).
OUT(x, y)=IN(x, y)−(1−k)(1−G)Q(x, y)−k(1−Z⁻¹)Q(x, y) (4)
Expression (4) is equal to expression (1) except that the transfer function F is replaced by Z⁻¹.
According to expression (4), the gradation converting unit 11 performs space-direction ΔΣ modulation in which noise shaping with −(1−G) is performed on the quantization error Q(x, y). Furthermore, the gradation converting unit 11 performs, together with the space-direction ΔΣ modulation, time-direction ΔΣ modulation in which noise shaping with −(1−Z⁻¹) is performed on the quantization error Q(x, y), that is, time-direction ΔΣ modulation of diffusing, to the target pixel, the quantization error of the pixel at the same position as the target pixel in the preceding frame.
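As a quick numeric check (a sketch assuming the pure time-direction case, k=1), expression (4) applied to a still pixel of value 127 reproduces the 8-bit value by temporal averaging, much as the FRC does:

```python
# Pure time-direction delta-sigma modulation (k = 1, F = Z^-1) on a still
# pixel of constant value 127, quantized from 8 bits to 6 bits.
q = 0.0                   # Q of the preceding frame (frame memory, initially 0)
outs = []
for _ in range(8):        # eight display frames
    u = 127 + q           # U = IN + Z^-1 Q   (the k = 1 case of expression (3))
    out = round(u / 4.0)  # 6-bit quantized value OUT
    q = u - 4.0 * out     # Q = U - OUT, in 8-bit units
    outs.append(out)
print(outs)               # [32, 32, 31, 32, 32, 32, 31, 32]
# sum(outs) / len(outs) * 4 == 127: the quantization error is pushed to
# high temporal frequencies, where visual sensitivity is low.
```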
In the three-dimensional filtering unit 34, the space-direction filter 41 performs space-direction filtering by using quantization errors of pixels that are spatially approximate to the target pixel and on which gradation conversion has already been performed.
The time-direction filter 43 performs time-direction filtering by using the quantization error of the pixel at the same position as that of the target pixel in the preceding frame.
Next, the space-direction ΔΣ modulation and the time-direction ΔΣ modulation that are simultaneously performed by the gradation converting unit 11 are described individually, beginning with a space-direction ΔΣ modulator that performs only space-direction ΔΣ modulation.
In the space-direction ΔΣ modulator, the calculating unit 31 adds an 8-bit pixel value IN(x, y) of a pixel (x, y) in a target image and an output of the space-direction filter 41, and supplies a sum value obtained thereby to the quantizing unit 32 and the calculating unit 33.
The quantizing unit 32 quantizes the sum value supplied from the calculating unit 31 into 6 bits, and outputs a 6-bit quantized value obtained thereby as a pixel value OUT(x, y) of the pixel (x, y) in a gradation-converted image.
The pixel value OUT(x, y) output from the quantizing unit 32 is also supplied to the calculating unit 33.
The calculating unit 33 subtracts the pixel value OUT(x, y) supplied from the quantizing unit 32 from the sum value supplied from the calculating unit 31, that is, subtracts the output from the quantizing unit 32 from the input to the quantizing unit 32, thereby obtaining a quantization error Q(x, y) generated from the quantization performed by the quantizing unit 32, and supplies the quantization error Q(x, y) to the space-direction filter 41.
The space-direction filter 41 filters the quantization error Q(x, y) supplied from the calculating unit 33 and outputs a filtering result to the calculating unit 31.
The calculating unit 31 adds the filtering result of the quantization error Q(x, y) output from the space-direction filter 41 and the pixel value IN(x, y) in the above-described manner.
According to the space-direction ΔΣ modulator, a pixel value OUT(x, y) expressed by OUT(x, y)=IN(x, y)−(1−G)Q(x, y) is output; that is, noise shaping with −(1−G) is performed on the quantization error Q(x, y), and the quantization error is diffused only in space directions.
In the space-direction ΔΣ modulator, the filtering result of the space-direction filter 41 is added to the pixel value IN(x, y) as it is.
In the gradation converting unit 11, on the other hand, the filtering result of the space-direction filter 41 is multiplied by the weight 1−k before being added to the pixel value IN(x, y), as described above.
In the space-direction ΔΣ modulation performed by the gradation converting unit 11, the effect of noise shaping with −(1−G) therefore occurs in accordance with the weight 1−k.
Therefore, in the space-direction ΔΣ modulation performed by the gradation converting unit 11, the quantization error diffused in space directions is scaled by the weight 1−k relative to that in the space-direction ΔΣ modulator.
Next, a time-direction ΔΣ modulator that performs only time-direction ΔΣ modulation is described.
In the time-direction ΔΣ modulator, the calculating unit 31 adds an 8-bit pixel value IN(x, y) of a pixel (x, y) in a target image and an output of the time-direction filter 43, and supplies a sum value obtained thereby to the quantizing unit 32 and the calculating unit 33.
The quantizing unit 32 quantizes the sum value supplied from the calculating unit 31 into 6 bits, and outputs a 6-bit quantized value obtained thereby as a pixel value OUT(x, y) of the pixel (x, y) in a gradation-converted image.
The pixel value OUT(x, y) output from the quantizing unit 32 is also supplied to the calculating unit 33.
The calculating unit 33 subtracts the pixel value OUT(x, y) supplied from the quantizing unit 32 from the sum value supplied from the calculating unit 31, that is, subtracts the output from the quantizing unit 32 from the input to the quantizing unit 32, thereby obtaining a quantization error Q(x, y) generated from the quantization performed by the quantizing unit 32, and supplies the quantization error Q(x, y) to the time-direction filter 43.
The time-direction filter 43 filters the quantization error Q(x, y) supplied from the calculating unit 33 and outputs a filtering result to the calculating unit 31.
The calculating unit 31 adds the filtering result of the quantization error Q(x, y) output from the time-direction filter 43 and the pixel value IN(x, y) in the above-described manner.
According to the time-direction ΔΣ modulator, a pixel value OUT(x, y) expressed by OUT(x, y)=IN(x, y)−(1−F)Q(x, y) is output; that is, noise shaping with −(1−F) is performed on the quantization error Q(x, y), and the quantization error is diffused only in a time direction.
In the time-direction ΔΣ modulator, the filtering result of the time-direction filter 43 is added to the pixel value IN(x, y) as it is.
In the gradation converting unit 11, on the other hand, the filtering result of the time-direction filter 43 is multiplied by the weight k before being added to the pixel value IN(x, y), as described above.
In the time-direction ΔΣ modulation performed by the gradation converting unit 11, the effect of noise shaping with −(1−F) therefore occurs in accordance with the weight k.
Therefore, in the time-direction ΔΣ modulation performed by the gradation converting unit 11, the quantization error diffused in the time direction is scaled by the weight k relative to that in the time-direction ΔΣ modulator.
When the weight k is larger than 0 and smaller than 1, space-direction ΔΣ modulation and time-direction ΔΣ modulation are thus performed simultaneously in the gradation converting unit 11.
Therefore, noticeable noise (quantization error) in a gradation-converted image can be suppressed compared to the error diffusion method according to a related art in which only space-direction ΔΣ modulation is performed, so that a perceived quality of the gradation-converted image can be improved.
Furthermore, flicker in an image, like flicker occurring in the FRC, which would occur due to time-direction ΔΣ modulation, can be suppressed, so that a perceived quality of the gradation-converted image can be improved.
Next, another exemplary configuration of the gradation converting unit 11 is described.
The gradation converting unit 11 according to this configuration is common to the above-described gradation converting unit 11 in that it includes the calculating unit 31, the quantizing unit 32, the calculating unit 33, and the three-dimensional filtering unit 34.
However, the gradation converting unit 11 according to this configuration is different in that it further includes an image analyzing unit 51 and a setting unit 52, and in that the weight k used by the three-dimensional filtering unit 34 is set by the setting unit 52.
The image analyzing unit 51 is supplied with a target image. The image analyzing unit 51 analyzes a target frame of the target image, thereby detecting a motion in the target frame, and supplies motion information indicating the motion to the setting unit 52.
Here, the image analyzing unit 51 obtains, as the motion information, the sum of absolute differences between the pixel values of pixels at the same positions in the target frame and the preceding frame.
The setting unit 52 sets a weight k on the basis of a result of analysis of the target frame performed by the image analyzing unit 51, that is, on the basis of the motion information supplied from the image analyzing unit 51, and supplies the weight k to the calculating units 42 and 44.
Here, the setting unit 52 sets the weight k to a smaller value as the motion information becomes larger, that is, as the motion in the target frame becomes larger.
In the gradation converting unit 11 according to this configuration, the calculating unit 42 multiplies the output of the space-direction filter 41 by the weight 1−k set by the setting unit 52, and the calculating unit 44 multiplies the output of the time-direction filter 43 by the weight k set by the setting unit 52.
Therefore, in the gradation converting unit 11 according to this configuration, the effect of time-direction ΔΣ modulation becomes smaller, and the effect of space-direction ΔΣ modulation becomes larger, as the motion in the target frame becomes larger.
If an effect of time-direction ΔΣ modulation is large when the motion in the target frame is large, a quantization error of a pixel having a weak correlation with a target pixel is diffused to the target pixel, which may cause a negative influence on a gradation-converted image.
In such a case where the motion in the target frame is large, a small value is set as the weight k so that the effect of time-direction ΔΣ modulation becomes small. Accordingly, a negative influence on a gradation-converted image, caused by diffusion of a quantization error of a pixel having a weak correlation with a target pixel to the target pixel, can be prevented.
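The behavior of the image analyzing unit 51 and the setting unit 52 can be sketched as follows. The mapping from motion information to the weight k, and the constants k_max and sad_scale, are assumptions; the embodiment specifies only that k becomes smaller as the motion becomes larger.

```python
import numpy as np

def set_weight_k(target_frame, prev_frame, k_max=0.5, sad_scale=8.0):
    """Sketch of the image analyzing unit 51 (motion detection) and the
    setting unit 52 (weight setting).

    Motion information is the sum of absolute differences (SAD) between
    pixel values at the same positions in the target frame and the
    preceding frame; the weight k decreases monotonically as it grows."""
    sad = np.abs(target_frame.astype(np.float64)
                 - prev_frame.astype(np.float64)).sum()
    mean_abs_diff = sad / target_frame.size        # normalize by pixel count
    k = k_max / (1.0 + mean_abs_diff / sad_scale)  # k_max for a still frame
    return k                                       # -> 0 as motion grows
```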
As the space-direction filter 41, a noise shaping filter used in the error diffusion method according to the related art can be adopted.
Examples of the noise shaping filter used in the error diffusion method according to the related art include a Jarvis, Judice & Ninke filter (hereinafter referred to as Jarvis filter) and a Floyd & Steinberg filter (hereinafter referred to as Floyd filter).
The amplitude characteristics of noise shaping using the Jarvis filter and the Floyd filter can be examined as functions of spatial frequency, together with the spatial frequency characteristic (contrast sensitivity) of human vision.
Here, the unit of the spatial frequency is cpd (cycles/degree), which indicates the number of stripes that are seen in the range of a unit angle of view (one degree in the angle of view). For example, 10 cpd means that ten pairs of a white line and a black line are seen in the range of one degree in the angle of view, and 20 cpd means that twenty pairs of a white line and a black line are seen in the range of one degree in the angle of view.
The gradation-converted image that is generated by the gradation converting unit 11 is eventually displayed on the display 12.
If the spatial frequency corresponding to the resolution of the display 12 is very high, e.g., about 120 cpd, noise (quantization error) is sufficiently modulated to a high range of the frequency band where the sensitivity of human vision is low by either the Jarvis filter or the Floyd filter.
The maximum spatial frequency of the image displayed on the display 12 depends on the resolution of the display 12 and the distance between the display 12 and a viewer who views the image displayed on the display 12 (hereinafter referred to as viewing distance).
Here, assume that the length in the vertical direction of the display 12 is H inches. In this case, about 2.5 H to 3.0 H is adopted as the viewing distance to obtain the maximum spatial frequency of the image displayed on the display 12.
In this case, for example, when the display 12 has a 40-inch display screen, having 1920 horizontal×1080 vertical pixels, for displaying a so-called full HD (High Definition) image, the maximum spatial frequency of the image displayed on the display 12 is about 30 cpd.
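The figure of about 30 cpd can be checked with a short calculation; the 16:9 aspect ratio, the 3H viewing distance, and the Nyquist limit of two pixels per cycle are the assumptions behind it.

```python
import math

# Maximum spatial frequency of a 40-inch, 1920x1080 display viewed at 3H.
diag = 40.0                            # diagonal, inches
height = diag * 9 / math.hypot(16, 9)  # ~19.6 in for a 16:9 panel
distance = 3.0 * height                # viewing distance of 3H
pixel_pitch = height / 1080            # inches per pixel (vertical)
deg_per_pixel = math.degrees(math.atan(pixel_pitch / distance))
pixels_per_degree = 1.0 / deg_per_pixel  # ~57 pixels per degree
max_cpd = pixels_per_degree / 2          # Nyquist: 2 pixels per cycle
print(round(max_cpd, 1))                 # ~28 cpd, i.e., about 30 cpd
```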
However, when the maximum spatial frequency of the image displayed on the display 12 is about 30 cpd, neither the Jarvis filter nor the Floyd filter modulates noise to a sufficiently high range; much of the noise remains in frequencies where the sensitivity of human vision is still high.
Therefore, when the Jarvis filter or the Floyd filter is used, noise may be noticeable in a gradation-converted image, so that the perceived image quality thereof may be degraded.
In order to suppress degradation of the perceived image quality due to noticeable noise in the gradation-converted image, noise shaping whose amplitude characteristic has a gain that rises more steeply toward the high range (hereinafter referred to as degradation suppressing noise shaping) can be performed.
Here, a noise shaping filter used for ΔΣ modulation to realize the degradation suppressing noise shaping is also called an SBM (Super Bit Mapping) filter.
In the amplitude characteristic of the degradation suppressing noise shaping, the characteristic curve in the midrange and above has the shape of the visual characteristic curve (contrast sensitivity curve) turned upside down (or a shape similar to it). Hereinafter, such a characteristic is called a reverse characteristic.
Furthermore, in the amplitude characteristic of the degradation suppressing noise shaping, the gain increases in a high range more steeply compared to that in the amplitude characteristic of noise shaping using the Jarvis filter or the Floyd filter.
Accordingly, in the degradation suppressing noise shaping, noise (quantization error) is modulated to a higher range where visual sensitivity is lower, compared to the noise shaping using the Jarvis filter or the Floyd filter.
By adopting the SBM filter as the space-direction filter 41, degradation of the perceived image quality due to noticeable noise in the gradation-converted image can be suppressed.
In the amplitude characteristic of noise shaping using the SBM filter adopted here, the gain is not kept at 0 in the low range and midrange but is allowed to become negative there.
Allowing a negative gain in the low range or midrange affects the number of taps the SBM filter requires, as follows.
That is, in order to realize, as the amplitude characteristic of noise shaping using the SBM filter, an amplitude characteristic in which the gain is 0 in the low range and midrange and increases steeply only in the high range, the SBM filter has to be a two-dimensional filter having many taps.
On the other hand, in order to realize an amplitude characteristic in which the gain is allowed to be negative in the low range or midrange, the SBM filter can be constituted by a two-dimensional filter having a small number of taps, for example, a 12-tap two-dimensional filter that performs filtering by using quantization errors of the twelve pixels, on which gradation conversion has already been performed in the raster scanning order, among the 5 horizontal×5 vertical pixels with the target pixel at the center.
Adopting such an SBM filter as the space-direction filter 41 enables the gradation converting unit 11 to be miniaturized.
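The raster-causal 12-tap support of such an SBM filter can be sketched as follows. The coefficient values are illustrative placeholders only, normalized to sum to 1 (the actual coefficients of the embodiment are given in the drawings); only the tap layout follows the description above.

```python
import numpy as np

# Raster-causal 12-tap support of the SBM filter: among the 5x5 block
# centered on the target pixel 'o', filtering uses the quantization errors
# of the 12 pixels already gradation-converted in raster order ('x'):
#
#   x x x x x
#   x x x x x
#   x x o . .
#   . . . . .
#   . . . . .
#
SBM_TAPS = [  # (dy, dx, coefficient) relative to the target pixel
    (-2, -2, 0.02), (-2, -1, 0.03), (-2, 0, 0.04), (-2, 1, 0.03), (-2, 2, 0.02),
    (-1, -2, 0.04), (-1, -1, 0.10), (-1, 0, 0.18), (-1, 1, 0.10), (-1, 2, 0.04),
    ( 0, -2, 0.10), ( 0, -1, 0.30),
]  # placeholder values, summing to 1.0

def sbm_filter_output(err, y, x):
    """Space-direction filtering of quantization errors at target pixel (x, y)."""
    acc = 0.0
    for dy, dx, g in SBM_TAPS:
        yy, xx = y + dy, x + dx
        if 0 <= yy < err.shape[0] and 0 <= xx < err.shape[1]:
            acc += g * err[yy, xx]
    return acc
```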
Specifically, first to third examples of filter coefficients g(i, j) of the 12-tap SBM filter, and the amplitude characteristic of noise shaping realized by each example, are given in the drawings.
Here, the SBM filter is a two-dimensional FIR filter. The filter coefficient g(i, j) is the filter coefficient by which the quantization error of the pixel i-th from the left and j-th from the top is multiplied, among the 12 pixels, on which gradation conversion has already been performed in the raster scanning order, of the 5 horizontal×5 vertical pixels with the target pixel at the center.
In the amplitude characteristic of noise shaping realized by each example of the filter coefficients, the gain is negative in part of the low range or midrange and increases steeply in the high range.
The filter coefficients of the 12-tap SBM filter illustrated in the drawings realize the degradation suppressing noise shaping with a small number of taps.
Additionally, a simulation was performed by using SBM filters having the filter coefficients illustrated in the drawings to evaluate the perceived quality of the gradation-converted image.
Descriptions have been given above about a case where an embodiment of the present invention is applied to the image processing apparatus in which the gradation converting unit 11 performs gradation conversion for converting an 8-bit image into a 6-bit image to be displayed on the display 12. However, an embodiment of the present invention can also be applied to other types of gradation conversion.
For example, in a case of performing color space conversion of converting an image in which each of YUV components is 8 bits into an image having each of RGB components as a pixel value and then displaying the image that has been obtained through the color space conversion and that has RGB components as a pixel value on an 8-bit LCD, an image in which each of RGB components exceeds the original 8 bits, e.g., expanded to 16 bits, may be obtained through the color space conversion. In this case, it is necessary to perform gradation conversion on the image in which each of RGB components has been expanded to 16 bits in order to obtain an 8-bit image that can be displayed on the 8-bit LCD. The embodiment of the present invention can also be applied to such gradation conversion.
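The bit-depth expansion caused by such a color space conversion can be illustrated with the standard BT.601 YCbCr-to-RGB matrix; this is a sketch, the patent does not fix a particular conversion, and the 8.8 fixed-point output format is an assumption.

```python
import numpy as np

# BT.601 full-range YCbCr -> RGB as one possible color space conversion.
# The matrix coefficients are fractional, so 8-bit YUV inputs yield
# fractional RGB values; keeping 8 fractional bits produces a 16-bit
# fixed-point image, which must then be gradation-converted back to
# 8 bits for display on an 8-bit LCD.
M = np.array([[1.0,  0.0,       1.402],
              [1.0, -0.344136, -0.714136],
              [1.0,  1.772,     0.0]])

def yuv8_to_rgb16(y, cb, cr):
    rgb = M @ np.array([y, cb - 128.0, cr - 128.0])  # float RGB, ~0..255
    rgb = np.clip(rgb, 0.0, 255.0)
    return (rgb * 256.0).astype(np.uint16)  # 16-bit fixed point (8.8 format)

print(yuv8_to_rgb16(127, 100, 140))  # RGB values carrying fractional precision
```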
The above-described series of processes can be performed by either hardware or software. When the series of processes is performed by software, a program constituting the software is installed on a general-purpose computer or the like.
The program can be recorded in advance in a hard disk 105 or a ROM (Read Only Memory) 103 serving as a recording medium mounted in the computer.
Alternatively, the program can be stored (recorded) temporarily or permanently in a removable recording medium 111, such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a magnetic disk, or a semiconductor memory. The removable recording medium 111 can be provided as so-called package software.
The program can be installed to the computer via the above-described removable recording medium 111. Also, the program can be transferred to the computer from a download site via an artificial satellite for digital satellite broadcast in a wireless manner, or can be transferred to the computer via a network such as a LAN (Local Area Network) or the Internet in a wired manner. The computer can receive the program transferred in that manner by using a communication unit 108 and can install the program to the hard disk 105 mounted therein.
The computer includes a CPU (Central Processing Unit) 102. An input/output interface 110 is connected to the CPU 102 via a bus 101. When a command is input to the CPU 102 by a user operation of an input unit 107 including a keyboard, a mouse, and a microphone via the input/output interface 110, the CPU 102 executes the program stored in the ROM 103 in response to the command. Alternatively, the CPU 102 loads, to a RAM (Random Access Memory) 104, the program stored in the hard disk 105, the program transferred via a satellite or a network, received by the communication unit 108, and installed to the hard disk 105, or the program read from the removable recording medium 111 loaded into a drive 109 and installed to the hard disk 105, and executes the program. Accordingly, the CPU 102 performs the process in accordance with the above-described flowchart or the process performed by the above-described configurations illustrated in the block diagrams. Then, the CPU 102 allows an output unit 106 including an LCD (Liquid Crystal Display) and a speaker to output, allows the communication unit 108 to transmit, or allows the hard disk 105 to record a processing result via the input/output interface 110 as necessary.
In this specification, the process steps describing the program that allows the computer to execute various processes are not necessarily performed in time series in the order described in the flowchart, but may be performed in parallel or individually (e.g., by parallel processing or object-based processing).
The program may be processed by a single computer or may be processed in a distributed manner by a plurality of computers. Furthermore, the program may be executed by being transferred to a remote computer.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.