The present application claims priority from Japanese Patent Application No. JP 2008-247291 filed in the Japanese Patent Office on Sep. 26, 2008, the entire content of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a gradation conversion device, a gradation conversion method, and a program, and specifically to a gradation conversion device, a gradation conversion method, and a program that enable, for example, downsizing and cost reduction of the device.
2. Background Art
For example, in order to display an image of N-bit pixel values (hereinafter, also referred to as “N-bit image”) by a display device for displaying an image of M (smaller than N)-bit pixel values, it is necessary to convert the N-bit image into an M-bit image, that is, perform gradation conversion of converting the gradation of the image.
As a method of gradation conversion (gradation conversion method) of an N-bit image into an M-bit image, for example, there is a method of dropping the least significant (N−M) bits of the N-bit pixel values and using the remaining bits as M-bit pixel values.
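For illustration only (this sketch and the names in it are not part of the described method), the truncation-based gradation conversion can be expressed as follows for N = 8 and M = 4:

    # Illustrative sketch: truncation-based gradation conversion, 8 bits -> 4 bits.
    def truncate_gradation(pixel_value_8bit: int) -> int:
        """Drop the least significant (N - M) = 4 bits and keep the upper 4 bits."""
        return pixel_value_8bit >> 4

    # A smooth ramp such as 100..200 collapses onto a few output levels,
    # which is the cause of the banding discussed below.
    print([truncate_gradation(v) for v in (100, 128, 200)])  # -> [6, 8, 12]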
Referring to
That is,
In the image in
The pixel value at the left end is 100 and pixel values become larger toward the right. Further, the pixel value at the right end is 200.
That is,
8 bits can represent 256 (=2^8) levels, but 4 bits can represent only 16 (=2^4) levels. Accordingly, in gradation conversion that drops the last 4 bits of the 8-bit image, banding is produced, in which the steps between levels are visible as bands.
As gradation conversion methods that prevent the production of banding and give the image after gradation conversion a pseudo representation of the gray scale of the image before gradation conversion, that is, methods by which, for example, a 16-level image obtained by gradation conversion of a 256-level image as described above visually appears to a human viewer to have 256 levels, there are the random dither method, the ordered dither method, and the error diffusion method.
That is,
In
To the calculation part 11, for example, pixel values IN(x,y) of the respective pixels (x,y) of an 8-bit image as a target image of gradation conversion (an image before gradation conversion) are supplied in the sequence of raster scan. Note that the pixel (x,y) denotes the pixel located x-th from the left and y-th from the top.
Further, to the calculation part 11, random noise from the random noise output part 12 that generates and outputs random noise is supplied.
The calculation part 11 adds the pixel values IN(x,y) and the random noise from the random noise output part 12, and supplies the resulting additional values to the quantization part 13.
The quantization part 13 quantizes the additional values from the calculation part 11 into 4 bits, for example, and outputs the resulting 4-bit quantized values as pixel values OUT(x,y) of the pixels (x,y) of the image after gradation conversion.
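The random dither method described above may be sketched as follows; this is an illustrative Python sketch only, and the noise amplitude and the quantization step are assumptions rather than values given in the description:

    import numpy as np

    def random_dither(image_8bit: np.ndarray, rng=None) -> np.ndarray:
        """Random dither sketch: add random noise to each 8-bit pixel value
        (calculation part 11 / random noise output part 12) and quantize the
        sum to 4 bits (quantization part 13)."""
        rng = rng or np.random.default_rng(0)
        noise = rng.uniform(-8.0, 8.0, size=image_8bit.shape)  # noise amplitude is an assumption
        added = image_8bit.astype(np.float64) + noise          # calculation part 11
        return np.clip(np.round(added / 17.0), 0, 15).astype(np.uint8)  # quantization part 13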
In the random dither method, the configuration of the gradation conversion device is simpler; however, as shown in
That is,
In
To the calculation part 21, for example, pixel values IN(x,y) of the respective pixels (x,y) of an 8-bit image as a target image of gradation conversion are supplied in the sequence of raster scan.
Further, to the calculation part 21, a dither matrix is supplied.
The calculation part 21 adds the pixel values IN(x,y) and values of the dither matrix corresponding to the positions (x,y) of the pixels (x,y) having the pixel values IN(x,y), and supplies the resulting additional values to the quantization part 22.
The quantization part 22 quantizes the additional values from the calculation part 21 into 4 bits, for example, and outputs the resulting 4-bit quantized values as pixel values OUT(x,y) of the pixels (x,y) of the image after gradation conversion.
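Similarly, the ordered dither method may be sketched as follows; the 4×4 Bayer matrix and its scaling are illustrative assumptions, since the description does not specify which dither matrix is supplied:

    import numpy as np

    # A 4x4 Bayer matrix is used here purely as an illustrative dither matrix.
    BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]], dtype=np.float64)

    def ordered_dither(image_8bit: np.ndarray) -> np.ndarray:
        """Ordered dither sketch: add the dither-matrix value selected by the
        pixel position (x,y) (calculation part 21), then quantize the sum to
        4 bits (quantization part 22)."""
        h, w = image_8bit.shape
        ys, xs = np.indices((h, w))
        # Scale the matrix to roughly one 4-bit quantization step and centre it on zero.
        dither = (BAYER_4X4[ys % 4, xs % 4] + 0.5) * (17.0 / 16.0) - 8.5
        added = image_8bit.astype(np.float64) + dither          # calculation part 21
        return np.clip(np.round(added / 17.0), 0, 15).astype(np.uint8)  # quantization part 22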
In the ordered dither method, the image quality of the image after gradation conversion can be improved compared to that in the random dither method; however, as shown in
That is,
In
To the calculation part 31, for example, pixel values IN(x,y) of the respective pixels (x,y) of an 8-bit image as a target image of gradation conversion are supplied in the sequence of raster scan.
Further, to the calculation part 31, output of the two-dimensional filter 34 is supplied.
The calculation part 31 adds the pixel values IN(x,y) and the output of the two-dimensional filter 34, and supplies the resulting additional values to the quantization part 32 and the calculation part 33.
The quantization part 32 quantizes the additional values from the calculation part 31 into 4 bits, for example, and outputs the resulting 4-bit quantized values as pixel values OUT(x,y) of the pixels (x,y) of the image after gradation conversion.
Further, the pixel values OUT(x,y) output by the quantization part 32 are also supplied to the calculation part 33.
The calculation part 33 obtains quantization errors −Q(x,y) produced by the quantization in the quantization part 32 by subtracting the pixel values OUT(x,y) from the quantization part 32 from the additional values from the calculation part 31, that is, by subtracting the output of the quantization part 32 from its input, and supplies them to the two-dimensional filter 34.
The two-dimensional filter 34 filters signals two-dimensionally; it filters the quantization errors −Q(x,y) from the calculation part 33 and outputs the filtering results to the calculation part 31.
In the calculation part 31, the filtering results of the quantization errors −Q(x,y) output by the two-dimensional filter 34 and the pixel values IN(x,y) are added in the above described manner.
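The error feedback loop described above may be sketched as follows; this is an illustrative Python sketch in which Floyd-Steinberg weights stand in for the two-dimensional filter 34 (one of the candidate filters discussed later), and the quantization step is an assumption:

    import numpy as np

    def two_dim_delta_sigma(image_8bit: np.ndarray) -> np.ndarray:
        """Two-dimensional delta-sigma modulation (error diffusion) sketch, 8 -> 4 bits.
        Floyd-Steinberg weights stand in for the two-dimensional filter 34; the row
        err[y + 1] of diffused errors for the next horizontal line plays the role
        of the line memory discussed below."""
        h, w = image_8bit.shape
        err = np.zeros((h + 1, w + 2))            # diffused quantization errors
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):                        # raster-scan order
            for x in range(w):
                u = image_8bit[y, x] + err[y, x + 1]          # calculation part 31
                q = int(np.clip(round(u / 17.0), 0, 15))      # quantization part 32
                e = u - q * 17.0                              # calculation part 33: error -Q(x,y)
                out[y, x] = q
                err[y,     x + 2] += e * 7 / 16               # right neighbour
                err[y + 1, x    ] += e * 3 / 16               # below left
                err[y + 1, x + 1] += e * 5 / 16               # directly below
                err[y + 1, x + 2] += e * 1 / 16               # below right
        return out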
In the gradation conversion device in
According to the two-dimensional ΔΣ modulator, the quantization errors −Q(x,y) are diffused (noise-shaped) in an area at higher spatial frequencies with respect to both of the horizontal direction (x-direction) and the vertical direction (y-direction). As a result, as shown in
Note that, regarding a method of performing gradation conversion into a good quality image by the two-dimensional ΔΣ modulator, details thereof are disclosed in Japanese Patent No. 3959698, for example.
As described above, according to the two-dimensional ΔΣ modulator, gradation conversion into a good quality image can be performed.
However, the two-dimensional ΔΣ modulator has the two-dimensional filter 34 as shown in
That is, when attention is directed to a certain pixel (x,y) as a pixel of interest, the two-dimensional filter 34 filters the quantization error −Q(x,y) of the pixel of interest (x,y) using the quantization errors that have already been obtained for the plural pixels located near the pixel of interest (x,y) on the same horizontal line (the y-th line) as the pixel of interest (x,y) and for the plural pixels located near the pixel of interest (x,y) on the horizontal lines above it (e.g., the (y−1)-th line, the (y−2)-th line, and so on).
Therefore, the two-dimensional filter 34 has to hold not only the quantization errors of the pixels on the same y-th line as the pixel of interest (x,y) but also the quantization errors of the pixels on the other horizontal lines, and for that purpose a line memory for plural horizontal lines is necessary.
As described above, in the two-dimensional filter 34, the line memories for plural horizontal lines are necessary, and the gradation conversion device in
Thus, it is desirable that gradation conversion providing a high-quality image can be performed without using a line memory, and thereby, for example, downsizing and cost reduction of the device can be realized.
An embodiment of the invention is directed to a gradation conversion device that converts a gradation of an image and includes dither means for dithering the image by adding random noise to pixel values forming the image and one-dimensional ΔΣ modulation means for performing one-dimensional ΔΣ modulation on the dithered image, or to a program allowing a computer to function as such a gradation conversion device.
Another embodiment of the invention is directed to a gradation conversion method of a gradation conversion device that converts a gradation of an image, including the steps of allowing the gradation conversion device to dither the image by adding random noise to pixel values forming the image, and allowing the gradation conversion device to perform one-dimensional ΔΣ modulation on the dithered image.
In the above described embodiments of the invention, the image is dithered by adding random noise to pixel values forming the image, and one-dimensional ΔΣ modulation is performed on the dithered image.
The gradation conversion device may be an independent device or an internal block forming one apparatus.
Further, the program may be provided by transmission via a transmission medium or being recorded in a recording medium.
According to the embodiments of the invention, gradation conversion can be performed. Especially, gradation conversion providing a high quality image can be performed without using a line memory.
In
The tuner 41 receives broadcast signals of digital broadcasting, for example, and demodulates the broadcast signals into a transport stream and supplies it to the demultiplexer 42.
The demultiplexer 42 separates a necessary TS (Transport Stream) packet from the transport stream from the tuner 41 and supplies it to the decoder 43.
The decoder 43 decodes MPEG (Moving Picture Experts Group)-encoded data contained in the TS packet from the demultiplexer 42, and thereby obtains an 8-bit image (data), for example, and supplies it to the noise reduction unit 44.
The noise reduction unit 44 performs noise reduction processing on an 8-bit image from the decoder 43 and supplies a resulting 12-bit image, for example, to the gradation conversion unit 45.
That is, according to the noise reduction processing by the noise reduction unit 44, the 8-bit image is extended to the 12-bit image.
The gradation conversion unit 45 performs gradation conversion of converting the 12-bit image supplied from the noise reduction unit 44 into an image in a bit number that can be displayed by the display unit 47.
That is, the gradation conversion unit 45 acquires necessary information on the bit number of the image that can be displayed by the display unit 47 etc. from the display control unit 46.
If the bit number of the image that can be displayed by the display unit 47 is 8 bits, for example, the gradation conversion unit 45 performs gradation conversion of converting the 12-bit image supplied from the noise reduction unit 44 into an 8-bit image and supplies it to the display control unit 46.
The display control unit 46 controls the display unit 47 and allows the display unit 47 to display the image from the gradation conversion unit 45.
The display unit 47 includes an LCD (Liquid Crystal Display), organic EL (organic Electro Luminescence), or the like, for example, and displays the image under the control of the display control unit 46.
In
That is, to the dither addition part 51, the image from the noise reduction unit 44 (
The dither addition part 51 performs dithering on the target image by adding random noise to pixel values IN(x,y) forming the target image from the noise reduction unit 44, and supplies it to the one-dimensional ΔΣ modulation part 52.
The one-dimensional ΔΣ modulation part 52 performs one-dimensional ΔΣ modulation on the dithered target image from the dither addition part 51, and supplies a resulting image having pixel values OUT(x,y) as an image after gradation conversion to the display control unit 46 (
From the noise reduction unit 44 (
Next, referring to
In the gradation conversion processing, the dither addition part 51 waits for the supply of the pixel values IN(x,y) of the pixels (x,y) of the target image from the noise reduction unit 44 (
At step S12, the one-dimensional ΔΣ modulation part 52 performs one-dimensional ΔΣ modulation on the dithered pixel values from the dither addition part 51 and supplies resulting pixel values OUT(x,y) as pixel values of the image after gradation conversion to the display control unit 46.
At step S13, the gradation conversion unit 45 determines whether or not there are pixel values IN(x,y) supplied from the noise reduction unit 44. If the unit determines that there are, the process returns to step S11 and the same processing is repeated.
Further, at step S13, if the unit determines that there are no more pixel values IN(x,y) supplied from the noise reduction unit 44, the gradation conversion processing ends.
That is,
8 bits can represent 256 levels while 4 bits can represent only 16 levels. However, in the 4-bit image after gradation conversion by the gradation conversion unit 45, coarse and dense areas are produced in which pixels having a certain quantization value Q and pixels having the quantization value (Q+1) one larger than Q (or the quantization value (Q−1) one smaller than Q) are distributed with varying density, that is, areas with a larger ratio of pixels having the quantization value Q and areas with a larger ratio of pixels having the quantization value (Q+1). Because of the integration effect of the visual sense of human, the pixel values of these coarse and dense areas appear to change smoothly.
As a result, although 4 bits can represent only 16 levels, in the 4-bit image after gradation conversion by the gradation conversion unit 45, pseudo representation of 256 levels can be realized as if the image were the 8-bit target image before gradation conversion.
Next,
In
To the calculation part 61, the pixel values IN(x,y) of the target image from the noise reduction unit 44 (
The calculation part 61 adds the output of the HPF 62 to the pixel values IN(x,y) of the target image, and supplies the resulting additional values as dithered pixel values F(x,y) to the one-dimensional ΔΣ modulation part 52.
The HPF 62 filters the random noise output by the random noise output part 63 based on a filter coefficient set by the coefficient setting part 64, and supplies the high-frequency component of the random noise obtained as a result of filtering to the calculation part 61.
The random noise output part 63 generates random noise according to a Gaussian distribution or the like, for example, and outputs it to the HPF 62.
The coefficient setting part 64 determines the filter coefficient of the HPF 62 based on the spatial frequency characteristic of the visual sense of human and the resolution of the display unit 47 (
That is, the coefficient setting part 64 stores the spatial frequency characteristic of the visual sense of human. Further, the coefficient setting part 64 acquires the resolution of the display unit 47 from the display control unit 46 (
Note that the coefficient setting part 64 adjusts the filter coefficient of the HPF 62 in response to the operation of a user or the like. Thereby, the user can adjust the image quality of the image after gradation conversion in the gradation conversion unit 45 to desired image quality.
In the dither addition part 51 having the above described configuration, the coefficient setting part 64 determines the filter coefficient of the HPF 62 from the spatial frequency characteristic of the visual sense of human and the resolution of the display unit 47, and sets it in the HPF 62.
Then, the HPF 62 performs a product-sum operation or the like between the filter coefficient set by the coefficient setting part 64 and the random noise output by the random noise output part 63, and thereby filters the random noise and supplies the high-frequency component of the random noise to the calculation part 61.
The calculation part 61 adds the 12-bit pixel values IN(x,y) of the target image from the noise reduction unit 44 (
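A sketch of the processing of the dither addition part 51 for one horizontal line is given below; the 3-tap high-pass coefficients and the noise amplitude are placeholders, not the values determined by the coefficient setting part 64:

    import numpy as np

    def dither_add(line_12bit: np.ndarray, rng=None) -> np.ndarray:
        """Dither addition part 51 sketch for one horizontal line of 12-bit pixel
        values IN(x,y).  Gaussian random noise (random noise output part 63) is
        high-pass filtered (HPF 62) and added to the pixel values (calculation
        part 61), giving the dithered values F(x,y)."""
        rng = rng or np.random.default_rng(0)
        hpf_coeffs = np.array([-0.25, 0.5, -0.25])             # zero gain at DC -> high-pass
        noise = rng.normal(0.0, 4.0, size=line_12bit.shape)    # random noise output part 63
        hf_noise = np.convolve(noise, hpf_coeffs, mode="same") # HPF 62: high-frequency component
        return line_12bit.astype(np.float64) + hf_noise        # calculation part 61: F(x,y)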
Next, a method of determining the filter coefficient of the HPF 62 based on the spatial frequency characteristic of the visual sense of human and resolution of the display unit performed in the coefficient setting part 64 will be explained referring to
In
As shown in
Here, cycles/degree expresses the number of stripes seen within a unit angle of viewing angle. For example, 10 cycles/degree means that 10 pairs of white lines and black lines are seen within a viewing angle of one degree, and 20 cycles/degree means that 20 pairs of white lines and black lines are seen within a viewing angle of one degree.
Since the image after gradation conversion by the gradation conversion unit 45 is finally displayed on the display unit 47 (
Accordingly, the coefficient setting part 64 (
That is, the highest spatial frequency of the image displayed on the display unit 47, in units of cycles/degree, can be obtained from the distance from a viewer to the display unit 47 when the displayed image is viewed (hereinafter, also referred to as the "viewing distance").
If the (longitudinal) length in the vertical direction of the display unit 47 is expressed by H inches, a viewing distance of about 2.5H to 3.0H is employed, for example.
For instance, when the display unit 47 has a 40-inch display screen with 1920×1080 lateral and longitudinal pixels for displaying a so-called full-HD (High Definition) image, the highest spatial frequency of the image displayed on the display unit 47 is about 30 cycles/degree.
Here, the highest spatial frequency of the image displayed on the display unit 47 is determined by the resolution of the display unit 47, and is hereinafter also referred to as the "spatial frequency corresponding to resolution" as appropriate.
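For illustration, the spatial frequency corresponding to resolution can be computed as follows; the function and variable names are illustrative, and the viewing distance of 3.0H follows the range mentioned above:

    import math

    def cycles_per_degree(screen_height_inch: float, vertical_pixels: int,
                          viewing_distance_inch: float) -> float:
        """Highest spatial frequency (cycles/degree) displayable in the vertical
        direction: one cycle needs a pair of lines, i.e. two pixels."""
        pixel_pitch = screen_height_inch / vertical_pixels
        length_per_degree = viewing_distance_inch * math.tan(math.radians(1.0))
        return (length_per_degree / pixel_pitch) / 2.0

    # 40-inch 16:9 full-HD panel: height H = 40 * 9 / sqrt(16^2 + 9^2), about 19.6 inches.
    # With a viewing distance of 3.0H this gives roughly 28-30 cycles/degree,
    # consistent with the value of about 30 cycles/degree stated above.
    H = 40 * 9 / math.hypot(16, 9)
    print(cycles_per_degree(H, 1080, 3.0 * H))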
That is,
Here,
The coefficient setting part 64 determines the filter coefficient of the HPF 62 based on the spatial frequency characteristic of the visual sense of human in
That is,
The amplitude characteristic in
Therefore, in the HPF 62 (
As a result, in the calculation part 61 (
Note that the amplitude characteristic of the HPF 62 at the high frequencies does not necessarily have to completely match the characteristic opposite to the visual sense of human. That is, it is enough that the amplitude characteristic of the HPF 62 at the high frequencies be similar to the characteristic opposite to the visual sense of human.
Further, as the filter that filters the random noise output by the random noise output part 63 (hereinafter, also referred to as “noise filter”), in place of the HPF 62, a filter having a whole amplitude characteristic that is inverse of the spatial frequency characteristic of the visual sense of human in
That is, according to the spatial frequency characteristic of the visual sense of human in
Note that, when the bandpass filter is employed as the noise filter, the number of taps of the noise filter becomes larger, and the device increases in size and cost.
Further, according to a simulation performed by the inventors of the invention, even when the above described bandpass filter is employed as the noise filter, compared to the case of employing the HPF 62, no significant improvement is recognized in the image quality of the image after gradation conversion.
Furthermore, when the above described bandpass filter is employed as the noise filter, not only the high-frequency components but also the low-frequency components are added to the pixel values IN(x,y) of the target image. As a result, in some cases, in the coarse and dense areas described in
Therefore, in view of the size and cost of the device and also in view of the image quality of the image after gradation conversion, it is desirable that the HPF 62 having the amplitude characteristic at high frequencies of the characteristic opposite to the visual sense of human as shown in
Next,
In the drawing, the same signs are assigned to the parts corresponding to those in the gradation conversion devices as the two-dimensional ΔΣ modulator in
In
To the calculation part 31, the pixel values F(x,y) of the dithered target image are supplied from the dither addition part 51 (
The calculation part 31 adds the pixel values F(x,y) from the dither addition part 51 and the output of the one-dimensional filter 71, and supplies the resulting additional values to the quantization part 32 and the calculation part 33.
The quantization part 32 quantizes the additional values from the calculation part 31 into 8 bits as the bit number of the image to be displayed on the display unit 47 (
Here, the one-dimensional ΔΣ modulation part 52 acquires the bit number of the image to be displayed by the display unit 47 from the display control unit 46 and controls the quantization part 32 to perform quantization into the quantization values in the bit number.
The calculation part 33 obtains quantization errors −Q(x,y) produced by the quantization in the quantization part 32 by subtracting the pixel values OUT(x,y) from the quantization part 32 from the additional values from the calculation part 31, that is, by subtracting the output of the quantization part 32 from its input, and supplies them to the one-dimensional filter 71.
The one-dimensional filter 71 filters signals one-dimensionally; it filters the quantization errors −Q(x,y) from the calculation part 33 and outputs the filtering results to the calculation part 31.
Here, in the calculation part 31, the filtering results of the quantization errors −Q(x,y) output by the one-dimensional filter 71 and the pixel values F(x,y) are added in the above described manner.
The coefficient setting part 72 determines the filter coefficient of the one-dimensional filter 71 based on the spatial frequency characteristic of the visual sense of human and the resolution of the display unit 47 (
That is, the coefficient setting part 72 stores the spatial frequency characteristic of the visual sense of human. Further, the coefficient setting part 72 acquires the resolution of the display unit 47 from the display control unit 46 (
Note that the coefficient setting part 72 adjusts the filter coefficient of the one-dimensional filter 71 in response to user operation or the like. Thereby, the user can adjust the image quality of the image after gradation conversion in the gradation conversion unit 45 to desired image quality.
In the one-dimensional ΔΣ modulation part 52 having the above described configuration, the coefficient setting part 72 determines the filter coefficient of the one-dimensional filter 71 from the spatial frequency characteristic of the visual sense of human and the resolution of the display unit 47 and sets it in the one-dimensional filter 71.
Then, the one-dimensional filter 71 performs a product-sum operation or the like between the filter coefficient set by the coefficient setting part 72 and the quantization errors −Q(x,y) output by the calculation part 33, and thereby filters the quantization errors −Q(x,y) and supplies the high-frequency components of the quantization errors −Q(x,y) to the calculation part 31.
The calculation part 31 adds the pixel values F(x,y) from the dither addition part 51 and the output of the one-dimensional filter 71, and supplies the resulting additional values to the quantization part 32 and the calculation part 33.
The quantization part 32 quantizes the additional values from the calculation part 31 into 8 bits as the bit number of the image to be displayed on the display unit 47 (
The calculation part 33 obtains the quantization errors −Q(x,y) produced by the quantization in the quantization part 32 by subtracting the pixel values OUT(x,y) output by the quantization part 32 from the additional values from the calculation part 31, and supplies them to the one-dimensional filter 71.
The one-dimensional filter 71 filters the quantization errors −Q(x,y) from the calculation part 33 and outputs the filtering results to the calculation part 31. In the calculation part 31, the filtering results of the quantization errors −Q(x,y) output by the one-dimensional filter 71 and the pixel values F(x,y) are added in the above described manner.
In the one-dimensional ΔΣ modulation part 52, the quantization errors −Q(x,y) are fed back to the input side (calculation part 31) via the one-dimensional filter 71, and thereby, the one-dimensional ΔΣ modulation is performed. Therefore, in the one-dimensional ΔΣ modulation part 52, the one-dimensional ΔΣ modulation is performed on the pixel values F(x,y) from the dither addition part 51, and the pixel values OUT(x,y) are output as results of the one-dimensional ΔΣ modulation.
In the one-dimensional ΔΣ modulation part 52 in
That is, in the calculation part 31, the filtering results of the one-dimensional filter 71 using the quantization errors respectively corresponding to pixel values F(x−1,y), F(x−2,y), F(x−3,y), F(x−4,y), F(x−5,y) of five pixels, for example, which have been processed immediately before the pixel values F(x,y), are added to the pixel values F(x,y).
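A sketch of this one-dimensional ΔΣ modulation for one horizontal line is given below; the five filter coefficients and the quantization step are placeholders, since the actual coefficients are determined by the coefficient setting part 72 described later:

    import numpy as np

    def one_dim_delta_sigma(dithered_line, coeffs=(0.5, 0.25, 0.125, 0.0625, 0.0625)):
        """One-dimensional delta-sigma modulation part 52 sketch for one horizontal
        line of dithered 12-bit values F(x,y), quantized to 8-bit values OUT(x,y).
        The five coefficients stand in for a(1)..a(5) of the one-dimensional
        filter 71; the quantization step of 16 (about 4095/255) is an assumption."""
        step = 16.0
        errors = [0.0] * len(coeffs)      # stored values of the delay parts 81_1..81_5
        out = np.zeros(len(dithered_line), dtype=np.uint8)
        for x, f in enumerate(dithered_line):
            feedback = sum(a * e for a, e in zip(coeffs, errors))  # one-dimensional filter 71
            u = f + feedback                                       # calculation part 31
            q = int(np.clip(round(u / step), 0, 255))              # quantization part 32
            errors = [u - q * step] + errors[:-1]                  # calculation part 33, then shift
            out[x] = q
        return out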
Next,
In
That is, to the delay part 81_i (i = 1, 2, 3, 4, 5), the stored value of the delay part 81_(i−1) immediately upstream is input. The delay part 81_i temporarily stores the input, delays it by a time corresponding to one pixel, and outputs it to the delay part 81_(i+1) immediately downstream and to the multiplication part 82_i.
To the delay part 81_1 at the most upstream position, the quantization errors −Q(x,y) from the calculation part 33 are supplied.
Further, the delay part 81_5 at the most downstream position outputs the delayed input only to the multiplication part 82_5.
The multiplication part 82_i multiplies the output of the delay part 81_i by a filter coefficient a(i) and supplies the resulting multiplication value to the addition part 83.
The addition part 83 adds the multiplication values from the respective multiplication parts 82_1 to 82_5, and supplies the resulting additional value, as the result of filtering of the quantization errors −Q(x,y), to the calculation part 31.
As described above, it is necessary for the one-dimensional filter 71 to have the delay parts 81_i that store the quantization errors of only some (five, in this example) immediately preceding pixels, and no line memory for plural horizontal lines is necessary.
Therefore, according to the one-dimensional ΔΣ modulation part 52 including such a one-dimensional filter 71, compared to the two-dimensional ΔΣ modulation part in
Next, referring to
Now, if the additional values output by the calculation part 31 of the one-dimensional ΔΣ modulation part 52 are expressed by U(x,y), equations (1) and (2) below hold.
−Q(x,y)=U(x,y)−OUT(x,y) (1)
U(x,y)=F(x,y)+K×(−Q(x,y)) (2)
By substituting equation (2) into equation (1) and eliminating U(x,y), equation (3) is obtained.
OUT(x,y)=F(x,y)+(1−K)×Q(x,y) (3)
Here, in equation (3), K represents a transfer function of the one-dimensional filter 71.
In ΔΣ modulation, noise shaping is performed that, so to speak, pushes the quantization errors toward the high frequencies. In equation (3), the quantization errors Q(x,y) are modulated by (1−K), and this modulation is the noise shaping.
Therefore, the amplitude characteristic of the noise shaping in the ΔΣ modulation of the one-dimensional ΔΣ modulation part 52 is determined by the property of the one-dimensional filter 71, that is, by the filter coefficient of the one-dimensional filter 71.
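For illustration, if the one-dimensional filter 71 is written as the five-tap FIR filter with the coefficients a(1) to a(5) described above, K in equation (3) can be expressed in the z-domain (this notation is used here only to make the noise shaping characteristic explicit and is not part of the original equations):

K(z)=a(1)×z^(−1)+a(2)×z^(−2)+a(3)×z^(−3)+a(4)×z^(−4)+a(5)×z^(−5)

OUT(z)=F(z)+(1−K(z))×Q(z)

That is, the quantization errors Q are shaped by the noise transfer function (1−K(z)), whose magnitude at each horizontal spatial frequency is set by the filter coefficients a(1) to a(5).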
Here, as described in
On the other hand, since the image after gradation conversion by the gradation conversion unit 45 is finally displayed on the display unit 47 (
Accordingly, the coefficient setting part 72 (
That is,
Here,
The coefficient setting part 72 determines the filter coefficient of the one-dimensional filter 71 based on the spatial frequency characteristic of the visual sense of human in
That is,
The amplitude characteristic in
Therefore, according to the noise shaping having the amplitude characteristic in
As a result, in the image after gradation conversion by the gradation conversion unit 45, visual recognition of noise can be prevented and visual image quality can be improved.
Note that, the amplitude characteristic of the noise shaping at high frequencies does not necessarily completely match the characteristic opposite to the visual sense of human as is the case of the HPF 62 (
Further, the whole amplitude characteristic of the noise shaping at high frequencies may be the characteristic opposite to the spatial frequency characteristic of the visual sense of human in
Here, the one-dimensional filter 71 that determines the amplitude characteristic of the noise shaping has five delay parts 81_1 to 81_5, as shown in
If the immediately preceding processed pixels are on the same horizontal line as the pixel (x,y), the pixel (x,y) is generally correlated with them. However, if the immediately preceding processed pixels are on a horizontal line different from that of the pixel (x,y), that is, if the pixel (x,y) is at the head of a horizontal line, there may be no correlation between the pixel (x,y) and any of the immediately preceding processed pixels.
Since it is apparently not preferable that the values to be added to the pixel value F(x,y) of the pixel (x,y) be obtained in the one-dimensional filter 71 using the quantization errors of immediately preceding processed pixels that are not correlated with the pixel (x,y), it is conceivable to initialize the stored values of the five delay parts 81_1 to 81_5 of the one-dimensional filter 71 to a fixed value such as zero, for example, in the horizontal flyback period (and the vertical flyback period) of the (dithered) image supplied from the dither addition part 51.
However, according to a simulation performed by the inventors of the invention, it has been confirmed that an image (image after gradation conversion) with better image quality can be obtained when the stored values of the delay parts 81_1 to 81_5 of the one-dimensional filter 71 are not initialized but are kept unchanged in the delay parts 81_1 to 81_5 during the horizontal flyback period than when they are initialized to the fixed value.
Therefore, in the one-dimensional filter 71, it is desirable that, in the horizontal flyback period of the dithered image, the stored values of the delay parts 81_i are not initialized but are stored in the delay parts 81_i without change.
Note that the image with better image quality is considered to be obtained when the stored values of the delay parts 81_i are not initialized to the fixed value but are kept without change because the diffusivity of the quantization errors becomes better than in the case of initialization to the fixed value.
Therefore, in view of improving the diffusivity of the quantization errors, the stored values of the delay parts 81_i of the one-dimensional filter 71 may, rather than simply being left uninitialized in the horizontal flyback period, be initialized with random numbers.
That is,
In the drawing, the same signs are assigned to the parts corresponding to those in the case of
In
The random number output part 84 generates and outputs random numbers that can be taken as quantization errors −Q(x,y) obtained by the calculation part 33 (
The switch 85 selects the output of the random number output part 84 in the horizontal flyback period (and vertical flyback period), and selects the quantization errors −Q(x,y) from the calculation part 33 (
In the one-dimensional filter 71 in
On the other hand, during the horizontal flyback period, the switch 85 selects the output of the random number output part 84, and the random number output part 84 sequentially supplies five random numbers to the delay part 81_1. Thereby, the (5−i+1)-th random number is stored in the delay part 81_i, and, for the pixel at the head of the horizontal line after the horizontal flyback period ends, the output of the one-dimensional filter 71, as the values to be added in the calculation part 31, is obtained using those random numbers.
Note that, in the horizontal flyback period, the output from the one-dimensional filter 71 to the calculation part 31 is not performed.
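The loading of the delay parts with random numbers during the horizontal flyback period may be sketched as follows; the bound on the random numbers is an assumption matching the quantization step used in the earlier sketch:

    import numpy as np

    def reload_filter_state_for_new_line(rng=None, num_taps=5, error_bound=8.0):
        """Sketch of random number output part 84 and switch 85: during the
        horizontal flyback period the switch selects the random number output
        part, and the delay parts 81_1..81_5 are loaded with random numbers in
        the range that the quantization errors -Q(x,y) can take (error_bound
        is an assumed bound)."""
        rng = rng or np.random.default_rng(0)
        return list(rng.uniform(-error_bound, error_bound, size=num_taps))

    # Usage with the one-dimensional delta-sigma sketch above: instead of starting
    # each line with errors = [0.0] * 5, start it with
    # errors = reload_filter_state_for_new_line(rng).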
As described above, in the gradation conversion unit 45, the target image is dithered by adding random noise to the pixel values forming the image, and one-dimensional ΔΣ modulation is performed on the dithered image.
Therefore, the gradation conversion that provides the high quality image can be performed without using a line memory, and downsizing and cost reduction of the device can be realized.
That is, since the gradation conversion is to be performed without using a line memory, the gradation conversion unit 45 performs not the two-dimensional ΔΣ modulation but the one-dimensional ΔΣ modulation.
Since the one-dimensional ΔΣ modulation part 52 performs the one-dimensional ΔΣ modulation on the pixel values supplied in the sequence of raster scan, in the image after one-dimensional ΔΣ modulation the effect of ΔΣ modulation (the effect of noise shaping) is produced in the horizontal direction but not in the vertical direction.
Accordingly, with the one-dimensional ΔΣ modulation alone, the apparent gray scale is poor in the vertical direction of the image after one-dimensional ΔΣ modulation, and quantization noise (quantization errors) is highly visible.
On this account, dithering is performed before the one-dimensional ΔΣ modulation in the gradation conversion unit 45. As a result, in the image after gradation conversion by the gradation conversion unit 45, the effect of dithering is produced in the vertical direction and the effect of one-dimensional ΔΣ modulation is produced in the horizontal direction, and thereby the apparent image quality can be improved in both the horizontal and vertical directions.
Further, in the gradation conversion unit 45, the high-frequency components of the random noise obtained by filtering the random noise with the HPF 62 are used for dithering. Furthermore, the filter coefficient of the HPF 62 is determined based on the characteristic of the visual sense of human at spatial frequencies equal to or lower than the spatial frequency corresponding to the resolution of the display unit 47.
Therefore, the frequency components of noise used for dithering are frequency components at which the sensitivity of the visual sense of human is lower, and the apparent image quality of the image after gradation conversion can be improved.
Further, in the gradation conversion unit 45, the filter coefficient of the one-dimensional filter 71 (
Therefore, the frequency components of quantization errors are frequency components at which the sensitivity of the visual sense of human is lower, and the apparent image quality of the image after gradation conversion can be improved.
Note that the dither addition part 51 (
Further, if the image as a target image of gradation conversion (target image) in the gradation conversion unit 45 has plural components of Y, Cb, Cr, etc. as pixel values, the gradation conversion processing is performed independently with respect to each component. That is, if the target image has a Y-component, a Cb-component, and a Cr-component as the pixel values, the gradation conversion processing is performed only on the Y-component. In the same manner, the gradation conversion processing is performed only on the Cb-component, and the gradation conversion processing is performed only on the Cr-component.
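As an illustrative sketch of this per-component processing (the helper name and the passed-in conversion function are assumptions, not part of the description):

    def convert_ycbcr(y, cb, cr, convert_component):
        """Apply the gradation conversion processing independently to each of the
        Y, Cb, and Cr components, as described above.  `convert_component` stands
        for the per-component conversion (dithering followed by one-dimensional
        delta-sigma modulation); any concrete implementation can be passed in."""
        return convert_component(y), convert_component(cb), convert_component(cr)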
As above, the case where the invention is applied to gradation conversion in a TV has been described; however, the embodiments of the invention can be applied to any device that handles images, not only to a TV.
That is, for example, HDMI(R) (High-Definition Multimedia Interface), which has rapidly spread recently, specifies Deep Color, which transmits not only 8-bit pixel values but also 10-bit or 12-bit pixel values, and the gradation conversion processing by the gradation conversion unit 45 can be applied to images having 10-bit or 12-bit pixel values transmitted via HDMI when those images are displayed on a display that displays 8-bit images or the like.
Further, for example, in the case where a video device that reproduces a disc such as a Blu-ray (R) disc reproduces a 12-bit image and the image is displayed on a display that displays 8-bit images via a transmission path for transmitting 8-bit images, the gradation conversion processing by the gradation conversion unit 45 can be performed in the video device so that the 12-bit images are converted into 8-bit images and transmitted to the display, and thereby pseudo display of the 12-bit images can be performed on the display.
Next, the amplitude characteristic of the HPF 62 (
As the two-dimensional filter 34 in
Here, in
Further,
The vertical axes (gain) of the amplitude characteristic of the HPF 62 in
Further, the Jarvis filter is a two-dimensional filter, and there are spatial frequencies in two directions of the horizontal direction and the vertical direction as (the axes of) the spatial frequency of the amplitude characteristic of noise shaping using the Jarvis filter. In
If the spatial frequency corresponding to the resolution of the display unit 47 takes an extremely high value of about 120 cycles/degree, for example, noise (quantization errors) is sufficiently modulated in the frequency band in which the sensitivity of the visual sense of human is lower with the Jarvis filter or the Floyd filter.
Note that, if the spatial frequency corresponding to the resolution of the display unit 47 takes about 30 cycles/degree, for example, it is difficult to sufficiently modulate noise in the high frequency band in which the sensitivity of the visual sense of human is lower with the Jarvis filter or the Floyd filter.
In this case, noise is highly visible and apparent image quality is deteriorated in the image after gradation conversion.
In order to reduce the deterioration of the apparent image quality because the noise is highly visible in the image after gradation conversion, it is necessary to set the amplitude characteristic of noise shaping as shown in
That is,
Here, a filter for noise shaping used for ΔΣ modulation that realizes deterioration reducing noise shaping (a filter corresponding to the two-dimensional filter 34 in
In the amplitude characteristic of deterioration reducing noise shaping, the characteristic at high frequencies is the characteristic opposite to the visual characteristic like the amplitude characteristic of the HPF 62 in
Furthermore, the amplitude characteristic of deterioration reducing noise shaping increases at high frequencies more rapidly than the amplitude characteristic of noise shaping using the Jarvis filter or the Floyd filter.
Thereby, in the deterioration reducing noise shaping, noise (quantization errors) is modulated toward the higher frequencies at which the sensitivity of the visual sense of human is lower than in the noise shaping using the Jarvis filter or the Floyd filter.
By determining the filter coefficient of the one-dimensional filter 71 so that the amplitude characteristic of noise shaping using the one-dimensional filter 71 in
Similarly, by determining the filter coefficient of the HPF 62 so that the amplitude characteristic of noise shaping using the HPF 62 in
Here, in
Further, in
The filter coefficient g(1) corresponds to the filter coefficient a(1) of the one-dimensional filter 71 with five taps shown in
That is,
In
In the amplitude characteristic of noise shaping of
That is,
In
In the amplitude characteristic of noise shaping of
That is,
In
In the amplitude characteristic of noise shaping of
Here, in
The filter coefficients h(1), h(2), and h(3) are multiplied by three continuous values of noise in the FIR filter with three taps as the HPF.
That is,
In
In the amplitude characteristic of noise shaping of
That is,
In
In the amplitude characteristic of noise shaping of
That is,
In
In the amplitude characteristic of noise shaping of
Next, the above described series of processing may be performed by hardware or software. When the series of processing is performed by software, a program forming the software is installed in a general-purpose computer or the like.
Accordingly,
The program may be recorded in a hard disk 105 and a ROM 103 as recording media within the computer in advance.
Alternatively, the program may be temporarily or permanently stored (recorded) in a removable recording medium 111 such as a flexible disc, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a magnetic disc, or a semiconductor memory. Such a removable recording medium 111 may be provided as so-called packaged software.
Note that the program may be not only installed in the computer from the above described removable recording medium 111 but also installed in the hard disk 105 within the computer by transferring it to the computer wirelessly from a download site via an artificial satellite for digital satellite broadcasting, or by wire via a network such as a LAN (Local Area Network) or the Internet, and receiving the program transferred in that way with a communication unit 108 in the computer.
The computer contains a CPU (Central Processing Unit) 102. An input/output interface 110 is connected via a bus 101 to the CPU 102, and, when a user inputs a command by operating an input unit 107 including a keyboard, a mouse, a microphone, etc. or the like, the CPU 102 executes the program stored in the ROM (Read Only Memory) 103 according to the command via the input/output interface 110. Alternatively, the CPU 102 loads in a RAM (Random Access Memory) 104 the program stored in the hard disk 105, the program transferred from the satellite or the network, received by the communication unit 108, and installed in the hard disk 105, and the program read out from the removable recording medium 111 mounted on a drive 109 and installed in the hard disk 105 and executes it. Thereby, the CPU 102 performs processing according to the above described flowchart or processing executed by the above described configuration in the block diagram. Then, the CPU 102 allows the processing result according to need to be output from an output unit 106 formed by an LCD (Liquid Crystal Display), speakers etc., or transmitted from the communication unit 108, and further recorded in the hard disk 105 via the input/output interface 110, for example.
Here, in this specification, the processing steps describing the program for allowing the computer to execute various processing are not necessarily processed in time sequence, but include processing executed in parallel or individually (e.g., parallel processing or object-based processing).
Further, the program may be processed by one computer or distributed-processed by plural computers. Furthermore, the program may be transferred to a remote computer and executed.
The embodiments of the invention are not limited to the above described embodiments but various changes can be made without departing from the scope of the invention.