The present application claims priority from Japanese Patent Application No. JP 2008-028470, filed in the Japanese Patent Office on Feb. 8, 2008, the entire content of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus, and, more particularly, to an image processing apparatus that quantizes pixel values of respective pixels of an image signal, a gradation converting device for the quantization, a processing method for the image processing apparatus and the gradation converting device, and a computer program for causing a computer to execute the method.
2. Description of the Related Art
In digital video display in digital camcorders, computer graphics, animations, and the like, the number of bits of a gradation of a material and the number of bits of a display apparatus or the number of bits on a digital transmission interface such as an HDMI (High-Definition Multimedia Interface) or a DVI (Digital Visual Interface) do not always coincide with each other. Similarly, in signal processing in an apparatus that handles a digital video signal, the number of bits used in calculation processing and the number of bits of the video signal data transmitted in the apparatus may differ.
However, when the lower order 2 bits are simply omitted in this way, in an image with smooth gradation or a flat image with little change in gray scale, such as an image of a blue sky on a sunny day, steps called banding or Mach bands may appear because of the influence of the human visual characteristic.
Such quantization errors due to a reduction in the number of bits cause deterioration in image quality. As measures against the quantization errors, methods called the dither method and the error diffusion method are generally used. These are methods of adding PDM (Pulse Depth Modulation) noise to a boundary of the banding to thereby make the steps less conspicuous.
In order to represent the human visual characteristic, a contrast sensitivity curve representing a spatial frequency f [unit: cpd (cycle/degree)] on the abscissa and representing contrast sensitivity on the ordinate is used. The spatial frequency represents the number of stripes that can be displayed per unit angle (1 degree in angle of field) with respect to the angle of field. A maximum frequency in the spatial frequency depends on pixel density (the number of pixels per unit length) of a display apparatus and a viewing distance.
tan(θ/2)=(d/2)/D
The maximum frequency in the spatial frequency, i.e., the number of stripes on the display screen per 1 degree in angle of field, can be calculated by dividing the width “d” on the display screen by the length of two pixels (two pixels form one set of stripes), which is calculated from the pixel density of the display screen.
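As a rough numerical illustration of this relation, the Python sketch below computes the maximum spatial frequency from a viewing distance and a pixel density; the millimeter units, the function name, and the example screen dimensions are illustrative assumptions rather than values taken from the embodiment.

```python
import math

def max_spatial_frequency_cpd(viewing_distance_mm, pixel_density_px_per_mm):
    """Maximum spatial frequency (cycles per degree) for given viewing conditions.

    Uses tan(theta/2) = (d/2)/D with theta = 1 degree, so d is the length on
    the display screen subtended by one degree of the angle of field; one
    stripe pair (one cycle) occupies two pixels, hence the division by two.
    """
    theta = math.radians(1.0)                                # 1 degree of angle of field
    d = 2.0 * viewing_distance_mm * math.tan(theta / 2.0)    # screen length per degree
    pixels_per_degree = d * pixel_density_px_per_mm
    return pixels_per_degree / 2.0                           # stripe pairs per degree

# Hypothetical full-HD panel, ~498 mm high (1080 lines, ~0.46 mm pixel pitch),
# viewed from three picture heights (~1494 mm): roughly 28 cpd, i.e. "about 30 cpd".
print(max_spatial_frequency_cpd(1494.0, 1080.0 / 498.0))
```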
When, for example, a high-resolution printer having the maximum frequency of about 120 cpd is assumed as the display apparatus, as shown in
However, when a high-definition display having 1920 pixels×1080 pixels in the horizontal and vertical directions is assumed as the display apparatus, the maximum frequency per unit angle with respect to the angle of field is about 30 cpd. As shown in
It is possible to modulate, using the error diffusion method, quantization errors due to gradation conversion involved in processing in an image processing apparatus or digital transmission to a frequency band less easily sensed by the human visual characteristic. However, a filter characteristic used for the error diffusion method is uniquely decided. Therefore, if viewing conditions such as performance of a display apparatus for viewing and a viewing distance between a viewer and the display apparatus change, a maximum frequency in a spatial frequency in the display apparatus also changes. As a result, error diffusion processing suitable for the display apparatus is not obtained by the uniquely-decided filter characteristic. For a display apparatus that displays an image signal, it is difficult to modulate the quantization errors to a frequency band with sufficiently low sensitivity with respect to the human visual characteristic using the Jarvis filter and the Floyd filter.
Therefore, it is desirable to modulate the quantization errors to a band with sufficiently low sensitivity with respect to the human visual characteristic by setting an optimum filter coefficient according to viewing conditions.
According to an embodiment of the present invention, there is provided an image processing apparatus including: filter-coefficient storing means for storing filter coefficients respectively associated with spatial frequencies, which are the numbers of stripes displayed per unit angle with respect to an angle of field of a display apparatus; viewing-condition determining means for determining, as viewing conditions, a viewing distance between a viewer and the display apparatus and pixel density of the display apparatus; filter-coefficient setting means for setting a filter coefficient selected on the basis of a spatial frequency calculated from the viewing conditions among the stored filter coefficients; and gradation modulating means including quantizing means for quantizing a pixel value in a predetermined coordinate position in an image signal and outputting the pixel value as a quantized pixel value in the predetermined coordinate position, the gradation modulating means gradation-modulating the image signal by multiply-accumulating a set filter coefficient with respect to quantization errors caused by the quantizing means to feed back the quantization errors to an input side of the quantizing means. Therefore, there is an effect that the quantization errors are modulated to a band with sufficiently low sensitivity with respect to the human visual characteristic by setting an optimum filter coefficient according to viewing conditions.
Preferably, the viewing-condition determining means receives the number of pixels and a screen size of the display apparatus from the display apparatus and determines the viewing conditions on the basis of the number of pixels and the screen size. Therefore, there is an effect that the number of pixels and the screen size are received from the display apparatus and the viewing conditions are calculated on the basis of the number of pixels and the screen size.
Preferably, the viewing-condition determining means receives the pixel density and a screen size of the display apparatus from the display apparatus and determines the viewing conditions on the basis of the pixel density and the screen size. Therefore, there is an effect that the pixel density and the screen size are received from the display apparatus and the viewing conditions are calculated on the basis of the pixel density and the screen size.
Preferably, the filter coefficient is set to reduce quantization errors of frequency components lower than a predetermined spatial frequency. Therefore, there is an effect that quantization noise of the frequency components lower than the predetermined spatial frequency is reduced. In this case, the predetermined spatial frequency is set to about two-thirds of a maximum frequency in the spatial frequency. Therefore, there is an effect that quantization noise of frequency components lower than about two-thirds of the maximum frequency in the spatial frequency is reduced.
Preferably, the gradation modulating means further includes: inverse quantization means for inversely quantizing the quantized pixel value in the predetermined coordinate position and outputting the quantized pixel value as an inversely quantized pixel value in the predetermined coordinate position; differential generating means for generating, as quantization errors in the predetermined coordinate position, a difference value between the quantized pixel value in the predetermined coordinate position and the inversely quantized pixel value in the predetermined coordinate position; arithmetic means for calculating, as a feedback value in the predetermined coordinate position, a value obtained by multiplying the respective quantization errors in a predetermined area corresponding to the predetermined coordinate position with the set filter coefficient and adding up the quantization errors; and adding means for adding the feedback value in the predetermined coordinate position to the corrected pixel value in the predetermined coordinate position. Therefore, there is an effect that the quantization errors are modulated to a band with sufficiently low sensitivity with respect to the human visual characteristic by setting an optimum filter coefficient according to viewing conditions.
According to another embodiment of the present invention, there is provided a filter coefficient setting processing method for an image processing apparatus including a display apparatus, filter-coefficient storing means for storing filter coefficients respectively associated with spatial frequencies, which are the numbers of stripes displayed per unit angle with respect to an angle of field of a display apparatus, and gradation modulating means including quantizing means for quantizing a pixel value in a predetermined coordinate position in an image signal and outputting the pixel value as a quantized pixel value in the predetermined coordinate position, the gradation modulating means gradation-modulating the image signal by multiply-accumulating a set filter coefficient with respect to quantization errors caused by the quantizing means to feed back the quantization errors to an input side of the quantizing means, the filter coefficient setting processing method including: a viewing-condition determining step of determining, as viewing conditions, a viewing distance between a viewer and the display apparatus and pixel density of the display apparatus; and a filter-coefficient setting step of setting, in the gradation modulating means, a filter coefficient selected on the basis of a spatial frequency calculated from the viewing conditions among the filter coefficients stored in the filter-coefficient storing means. There is also provided a computer program for causing a computer to execute these steps. Therefore, there is an effect that the gradation modulating means is caused to set an optimum filter coefficient according to viewing conditions.
According to the embodiments of the present invention, it is possible to realize an excellent effect that quantization errors can be modulated to a band with sufficiently low sensitivity with respect to the human visual characteristic by setting an optimum filter coefficient according to viewing conditions.
An embodiment of the present invention is explained in detail below with reference to the accompanying drawings.
The reproducing apparatus 10 includes a tuner 11, a decoder 12, a processor 15, a ROM (Read-Only Memory) 16, a RAM (Random Access Memory) 17, a digital transmission interface (I/F) 18, a network interface (I/F) 19, a recording control unit 21, a recording medium 22, an operation receiving unit 23, and a bus 24. The reproducing apparatus 10 transmits the processed image signal and sound signal to the display apparatus 30 via the digital transmission interface 18.
The tuner 11 receives a radio wave of a digital broadcast and demodulates a modulated wave of a channel designated by the operation receiving unit 23. The tuner 11 supplies demodulated image data and sound data to the decoder 12.
The decoder 12 decodes the image data and the sound data demodulated by the tuner 11. The decoder 12 supplies the decoded image signal and sound signal to the processor 15.
The ROM 16 is a memory that stores various control programs and the like. The RAM 17 is a memory that has a work area for the processor 15.
The digital transmission interface 18 performs data communication between the reproducing apparatus 10 and the display apparatus 30 connected to the digital transmission signal line 50. The digital transmission interface 18 can be realized by a digital transmission interface such as an HDMI (High-Definition Multimedia Interface) or a DVI (Digital Visual Interface). The digital transmission interface 18 transmits the image signal and the sound signal processed by the processor 15 to the display apparatus 30. Specifically, the digital transmission interface 18 acquires screen information concerning viewing conditions from the display apparatus 30 and supplies the screen information to the processor 15. The viewing conditions are a viewing distance between a viewer and the display apparatus 30 and pixel density of the display apparatus 30. As shown in
As an example, the pixel density, which is one of the viewing conditions, is calculated from the vertical length of the screen and the number of pixels in the vertical direction of the screen. However, the pixel density may be calculated from the horizontal width of the screen and the number of pixels in the horizontal direction of the screen. The viewing distance, which is the other of the viewing conditions, is calculated on the basis of the vertical length of the screen. However, the viewing distance may be calculated by using the horizontal width of the screen instead of the vertical length of the screen. In this case, for example, the viewing distance can be calculated by using the following relational expression of the horizontal width of the screen in the display screen with the aspect ratio of 16:9 and the viewing distance:
Horizontal width of the screen=viewing distance×0.650
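A minimal sketch of how the two viewing conditions could be derived from the screen information is given below; only the 0.650 relation above comes from the description, while the 16:9 conversion from vertical length to horizontal width, the millimeter units, and the example figures are assumptions for illustration.

```python
def viewing_conditions(vertical_length_mm, vertical_pixels, aspect=16.0 / 9.0):
    """Derive pixel density and viewing distance from screen information."""
    pixel_density = vertical_pixels / vertical_length_mm   # pixels per unit length
    horizontal_width = vertical_length_mm * aspect         # assumes a 16:9 screen
    viewing_distance = horizontal_width / 0.650            # horizontal width = distance x 0.650
    return pixel_density, viewing_distance

# Hypothetical 40-inch class display: ~498 mm vertical length, 1080 vertical pixels.
density, distance = viewing_conditions(498.0, 1080)
print(round(density, 2), round(distance))                  # ~2.17 px/mm, ~1362 mm
```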
The network interface 19 performs data communication with an external apparatus connected to the Internet, a LAN (Local Area Network), or the like.
The recording control unit 21 records image data in the recording medium 22 in a predetermined format on the basis of the control by the processor 15 or reads out image data recorded in the recording medium 22.
The recording medium 22 stores video data. The recording medium 22 is, for example, a hard disk drive or a Blu-ray Disc.
The operation receiving unit 23 receives operation inputs from a user of the reproducing apparatus 10, such as channel selection and the reproduction or stop of reproduction of the image data stored in the recording medium 22.
The processor 15 controls the respective components of the reproducing apparatus 10 on the basis of the control programs stored in the ROM 16. For example, the processor 15 calculates the viewing conditions (the pixel density and the viewing distance) from the screen information (the vertical length of the screen and the number of pixels in the vertical direction of the screen) supplied from the digital transmission interface 18 and calculates a maximum frequency in a spatial frequency in the display apparatus 30 from the viewing conditions as shown in
The bus 24 is a system bus of the reproducing apparatus 10 that connects the processor 15 and the respective components to each other.
The processor 15 calculates the pixel density, which is one of the viewing conditions, from the vertical length of the screen and the number of pixels in the vertical direction of the screen acquired from the display apparatus 30 and calculates the viewing distance, which is the other of the viewing conditions, on the basis of the vertical length of the screen. However, the processor 15 may acquire the pixel density and the vertical length of the screen and calculate only the viewing distance from the vertical length of the screen. Further, the processor 15 calculates the viewing distance on the basis of the vertical length of the screen. However, the distance between a remote controller of the display apparatus 30 and the display screen of the display apparatus 30 may instead be measured using a technique such as UWB (Ultra Wide Band) and transmitted from the display apparatus 30 to the reproducing apparatus 10 as the viewing distance. The processor 15 acquires, for calculation of the viewing conditions, the screen information of the display apparatus 30 via the digital transmission interface 18. However, the viewing conditions (the pixel density and the viewing distance) may be directly set through the operation receiving unit 23.
The display apparatus 30 includes a tuner 31, a decoder 32, a display control unit 33, a display unit 34, a processor 35, a ROM 36, a RAM 37, a digital transmission interface (I/F) 38, a network interface (I/F) 39, an operation receiving unit 43, and a bus 44. The display apparatus 30 receives, via the digital transmission interface 38, an image signal subjected to image processing by the reproducing apparatus 10 and displays the image signal on the display screen. Functions of the components other than the display control unit 33, the display unit 34, the processor 35, and the digital transmission interface 38 are the same as those of the reproducing apparatus 10. Therefore, explanation of the functions is omitted.
The display control unit 33 causes the display unit 34 to display the image signal on the basis of the control by the processor 35.
The display unit 34 displays the image signal on the basis of the control by the display control unit 33. The digital transmission interface 38 performs data communication between the display apparatus 30 and the reproducing apparatus 10 connected to the digital transmission signal line 50. Specifically, the digital transmission interface 38 transmits the screen information (the vertical length of the screen and the number of pixels in the vertical direction of the screen) on the basis of the control by the processor 35. The digital transmission interface 38 receives an image signal and a sound signal processed by the reproducing apparatus 10.
The processor 35 controls the respective components of the display apparatus 30 on the basis of the control programs stored in the ROM 36. Specifically, for example, the processor 35 controls the display apparatus 30 to transmit screen information of the display apparatus 30 to the reproducing apparatus 10 via the digital transmission interface 38. The processor 35 controls the display apparatus 30 to display an image signal supplied from the digital transmission interface 38 or the decoder 32 on the display unit 34.
The gradation modulator 200 receives a two-dimensional image signal from a signal line 201 as an input signal IN(x,y) and outputs an output signal OUT(x,y) from a signal line 209. The gradation modulator 200 forms a ΔΣ modulator and has a noise shaping effect that modulates quantization errors toward a high-frequency region.
The quantizing unit 210 is a quantizer that quantizes an output of an adder 250. For example, when data having 12-bit width is inputted from the adder 250, the quantizing unit 210 omits lower order 4 bits and outputs higher order 8 bits as an output signal OUT(x,y).
The inverse quantization unit 220 is an inverse quantizer that inversely quantizes the output signal OUT(x,y) quantized by the quantizing unit 210. For example, when the quantized output signal OUT(x,y) has 8-bit width, the inverse quantization unit 220 embeds “0000” in the lower order 4 bits (padding) and outputs 12-bit width data.
The subtracter 230 calculates a difference between the output of the adder 250 and the output of the inverse quantization unit 220. The subtracter 230 subtracts the output of the inverse quantization unit 220 from the output of the adder 250 to thereby output the quantization errors Q(x,y) omitted by the quantizing unit 210 to a signal line 239.
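A minimal sketch of this quantize / inverse-quantize / subtract chain, assuming unsigned integer samples and simple truncation, is:

```python
def quantize_12_to_8(value):
    # Quantizing unit 210: omit the lower order 4 bits of a 12-bit value.
    return value >> 4

def inverse_quantize_8_to_12(value):
    # Inverse quantization unit 220: pad the lower order 4 bits with zeros.
    return value << 4

u = 0xABC                                    # a 12-bit value from the adder 250
out = quantize_12_to_8(u)                    # 0xAB
q_error = u - inverse_quantize_8_to_12(out)  # 0xC, the omitted lower bits (subtracter 230)
print(hex(out), q_error)
```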
A feedback arithmetic unit 240 multiplies the past quantization errors Q(x,y) outputted from the subtracter 230 by the filter coefficients set by the filter-coefficient setting unit 260 and adds up the products. The value calculated by this multiply-accumulate operation in the feedback arithmetic unit 240 is supplied to the adder 250 as a feedback value.
The adder 250 is an adder for feeding back the feedback value calculated by the feedback arithmetic unit 240 to a correction signal F(x,y) inputted to the gradation modulator 200. The adder 250 adds up the correction signal F(x,y) inputted to the gradation modulator 200 and the feedback value calculated by the feedback arithmetic unit 240 and outputs a result of the addition to the quantizing unit 210 and the subtracter 230.
In the image processing apparatus, the gradation modulator 200 has an input and output relation explained below.
OUT(x,y)=F(x,y)+(1−G)×Q(x,y)
It is seen that the quantization errors Q(x,y) are modulated to a high frequency by the noise shaping characteristic “1−G”.
The filter-coefficient setting unit 260 selects, on the basis of the viewing conditions supplied from the viewing-condition determining unit 280, a filter coefficient associated with a spatial frequency determined on the basis of the viewing conditions from the filter-coefficient storing unit 270. The filter-coefficient setting unit 260 sets the selected filter coefficient in the feedback arithmetic unit 240. The filter-coefficient setting unit 260 can be realized by the processor 15.
The filter-coefficient storing unit 270 stores filter coefficients associated with spatial frequencies, respectively. The filter-coefficient storing unit 270 can be realized by the ROM 16.
The viewing-condition determining unit 280 receives screen information from the display apparatus 30 and calculates viewing conditions. When it is difficult for the viewing-condition determining unit 280 to receive the screen information, the viewing-condition determining unit 280 may calculate the viewing conditions using a value decided in advance. The viewing-condition determining unit 280 supplies the calculated viewing conditions to the filter-coefficient setting unit 260. The viewing-condition determining unit 280 can be realized by the digital transmission interface 18 and the processor 15.
Image processing according to this embodiment is performed to sequentially raster-scan the pixels from the left to the right and from the top to the bottom as indicated by arrows in the figure. Input signals are inputted in order of IN(0,0), IN(1,0), IN(2,0), . . . , IN(0,1), IN(1,1), IN(2,1), and so on.
In referring to other pixels, the feedback arithmetic unit 240 uses a predetermined area that takes the order of the raster scan into account. For example, when the feedback arithmetic unit 240 calculates a feedback value corresponding to the correction signal F(x,y), the feedback arithmetic unit 240 refers to twelve quantization errors Q(x−2,y−2), Q(x−1,y−2), Q(x,y−2), Q(x+1,y−2), Q(x+2,y−2), Q(x−2,y−1), Q(x−1,y−1), Q(x,y−1), Q(x+1,y−1), Q(x+2,y−1), Q(x−2,y), and Q(x−1,y) in an area surrounded by a dotted line, i.e., quantization errors in the past.
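Putting the quantizing unit 210, the inverse quantization unit 220, the subtracter 230, the feedback arithmetic unit 240, and the adder 250 together, the raster-scan feedback loop can be sketched as below; the dictionary layout and the placeholder coefficient values are assumptions for illustration and are not the coefficients actually stored in the filter-coefficient storing unit 270.

```python
import numpy as np

# Twelve past positions (dx, dy) referred to by the feedback arithmetic unit 240,
# with placeholder coefficients that sum to 1.
EXAMPLE_COEFFS = {(-2, -2): 0.02, (-1, -2): 0.04, (0, -2): 0.06, (1, -2): 0.04,
                  (2, -2): 0.02, (-2, -1): 0.04, (-1, -1): 0.10, (0, -1): 0.14,
                  (1, -1): 0.10, (2, -1): 0.04, (-2, 0): 0.10, (-1, 0): 0.30}

def gradation_modulate(f_12bit, coeffs=EXAMPLE_COEFFS):
    """Raster-scan error-feedback (delta-sigma) gradation modulation sketch.

    f_12bit is the correction signal F(x, y) as a 2-D array of 12-bit integers;
    the output OUT(x, y) is 8 bits wide.
    """
    h, w = f_12bit.shape
    out = np.zeros((h, w), dtype=np.uint8)
    q_err = np.zeros((h, w))                       # quantization errors Q(x, y)
    for y in range(h):
        for x in range(w):
            fb = sum(g * q_err[y + dy, x + dx]     # feedback arithmetic unit 240
                     for (dx, dy), g in coeffs.items()
                     if 0 <= x + dx < w and 0 <= y + dy < h)
            u = float(f_12bit[y, x]) + fb          # adder 250
            q = min(255, max(0, int(u) >> 4))      # quantizing unit 210 (drop 4 bits)
            out[y, x] = q
            q_err[y, x] = u - (q << 4)             # subtracter 230
    return out

out = gradation_modulate(np.full((8, 8), 0x800, dtype=np.uint16))
```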
In the case of a color image signal including a luminance signal Y, color difference signals Cb and Cr, and the like, gradation conversion processing is applied to the respective signals. The luminance signal Y is independently subjected to the gradation conversion processing. The color difference signals Cb and Cr are also independently subjected to the gradation conversion processing.
The quantization-error supplying unit 241 supplies values in the past of the quantization errors Q(x,y). In this example, it is assumed that the twelve quantization errors Q(x−2,y−2), Q(x−1,y−2), Q(x,y−2), Q(x+1,y−2), Q(x+2,y−2), Q(x−2,y−1), Q(x−1,y−1), Q(x,y−1), Q(x+1,y−1), Q(x+2,y−1), Q(x−2, y), and Q(x−1,y) are supplied.
The multipliers 2461 to 2472 are multipliers that multiply the quantization errors Q supplied from the quantization-error supplying unit 241 and filter coefficients “g” together. In this example, assuming twelve filter coefficients, the multiplier 2461 multiplies the quantization error Q(x−2,y−2) and a filter coefficient g(1,1) together, the multiplier 2462 multiplies the quantization error Q(x−1,y−2) and a filter coefficient g(2,1) together, the multiplier 2463 multiplies the quantization error Q(x,y−2) and a filter coefficient g(3,1) together, the multiplier 2464 multiplies the quantization error Q(x+1,y−2) and a filter coefficient g(4,1) together, the multiplier 2465 multiplies the quantization error Q(x+2,y−2) and a filter coefficient g(5,1) together, the multiplier 2466 multiplies the quantization error Q(x−2,y−1) and a filter coefficient g(1,2) together, the multiplier 2467 multiplies the quantization error Q(x−1,y−1) and a filter coefficient g(2,2) together, the multiplier 2468 multiplies the quantization error Q(x,y−1) and a filter coefficient g(3,2) together, the multiplier 2469 multiplies the quantization error Q(x+1,y−1) and a filter coefficient g(4,2) together, the multiplier 2470 multiplies the quantization error Q(x+2,y−1) and a filter coefficient g(5,2) together, the multiplier 2471 multiplies the quantization error Q(x−2,y) and a filter coefficient g(1,3) together, and the multiplier 2472 multiplies the quantization error Q(x−1,y) and a filter coefficient g(2,3) together.
The adder 248 is an adder that adds up outputs of the multipliers 2461 to 2472. A result of the addition by the adder 248 is supplied to one input of the adder 250 as a feedback value via a signal line 249.
The memory 2411 includes line memories #0 (2412) and #1 (2413). The line memory #0 (2412) is a memory that stores the quantization errors Q of a line in the vertical direction Y=(y−2). The line memory #1 (2413) is a memory that stores the quantization errors Q of a line in the vertical direction Y=(y−1).
The write unit 2414 writes the quantization errors Q(x,y) in the memory 2411. The read unit 2415 reads out the quantization errors Q of the line in the vertical direction Y=(y−2) one by one from the line memory #0 (2412). The quantization error Q(x+2,y−2) as an output of the read unit 2415 is inputted to the delay element 2424 and supplied as one input to the multiplier 2465 via a signal line 2455. The read unit 2416 reads out the quantization errors Q of the line in the vertical direction Y=(y−1) one by one from the line memory #1 (2413). The quantization error Q(x+2,y−1) as an output of the read unit 2416 is inputted to the delay element 2429 and supplied as one input to the multiplier 2470 via a signal line 2450.
The delay elements 2421 to 2424 configure a shift register that delays an output of the read unit 2415. The quantization error Q(x+1,y−2) as an output of the delay element 2424 is inputted to the delay element 2423 and supplied as one input to the multiplier 2464 via a signal line 2444. The quantization error Q(x,y−2) as an output of the delay element 2423 is inputted to the delay element 2422 and supplied as one input to the multiplier 2463 via a signal line 2443. The quantization error Q(x−1,y−2) as an output of the delay element 2422 is inputted to the delay element 2421 and supplied as one input to the multiplier 2462 via a signal line 2442. The quantization error Q(x−2,y−2) as an output of the delay element 2421 is supplied as one input to the multiplier 2461 via a signal line 2441.
The delay elements 2426 to 2429 configure a shift register that delays an output of the read unit 2416. The quantization error Q(x+1,y−1) as an output of the delay element 2429 is inputted to the delay element 2428 and supplied as one input to the multiplier 2469 via a signal line 2449. The quantization error Q(x,y−1) as an output of the delay element 2428 is inputted to the delay element 2427 and supplied as one input to the multiplier 2468 via a signal line 2448. The quantization error Q(x−1,y−1) as an output of the delay element 2427 is inputted to the delay element 2426 and supplied as one input to the multiplier 2467 via a signal line 2447. The quantization error Q(x−2,y−1) as an output of the delay element 2426 is supplied as one input to the multiplier 2466 via a signal line 2446.
The delay elements 2431 and 2432 configure a shift register that delays the quantization errors Q(x,y). The quantization error Q(x−1,y) as an output of the delay element 2432 is inputted to the delay element 2431 and supplied as one input to the multiplier 2472 via a signal line 2452. The quantization error Q(x−2,y) as an output of the delay element 2431 is supplied as one input to the multiplier 2471 via a signal line 2451.
The quantization errors Q(x,y) of the signal line 239 are stored in an address “x” of the line memory #0 (2412). When processing for one line is finished in the order of the raster scan, the line memory #0 (2412) and the line memory #1 (2413) are interchanged. Therefore, quantization errors stored in the line memory #0 (2412) correspond to the lines in the vertical direction Y=(y−2) and quantization errors stored in the line memory #1 (2413) correspond to the lines in the vertical direction Y=(y−1).
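A compact sketch of this line-memory arrangement, with the interchange modeled as swapping two Python lists, might look like the following; the class and method names are illustrative only.

```python
class QuantizationErrorMemory:
    """Sketch of the memory 2411 with its two interchangeable line memories."""

    def __init__(self, width):
        self.line0 = [0.0] * width      # line memory #0: errors of line y-2
        self.line1 = [0.0] * width      # line memory #1: errors of line y-1

    def write(self, x, q_error):
        # Write unit 2414: store Q(x, y) at address x of line memory #0; the old
        # value there (from line y-2) has already been read two pixels earlier.
        self.line0[x] = q_error

    def read_two_lines_back(self, x):
        return self.line0[x]            # read unit 2415: Q(x, y-2)

    def read_one_line_back(self, x):
        return self.line1[x]            # read unit 2416: Q(x, y-1)

    def end_of_line(self):
        # Interchange the two line memories when one raster line is finished.
        self.line0, self.line1 = self.line1, self.line0
```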
The human visual characteristic 840 reaches a peak value near the spatial frequency “f” of 7 cpd and is attenuated toward about 60 cpd. On the other hand, the amplitude characteristic 860 obtained by the reproducing apparatus according to this embodiment is a curve that is attenuated in the minus direction up to near the spatial frequency “f” of 12 cpd and, thereafter, steeply rises. In the amplitude characteristic 860, the quantization errors of low frequency components are attenuated up to about two-thirds of the maximum frequency in the spatial frequency. The quantization errors are thus modulated to a band with sufficiently low sensitivity with respect to the human visual characteristic 840.
With the conventional Jarvis filter 851 and Floyd filter 852, it is difficult to modulate quantization errors to a band with sufficiently low sensitivity with respect to the human visual characteristic 840.
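The shaping of the quantization noise can be examined numerically: the amplitude characteristic of the modulator follows |1−G(f)|, where G is the transfer function of the feedback filter. The sketch below evaluates this along the horizontal frequency axis using the well-known Floyd-Steinberg coefficients written in gather form; mapping the normalized frequency axis to cpd would additionally use the maximum frequency obtained from the viewing conditions, and any stored coefficient set could be substituted for comparison.

```python
import numpy as np

# Floyd-Steinberg coefficients in "gather" form: the feedback at (x, y) collects
# past errors from (x-1, y), (x-1, y-1), (x, y-1) and (x+1, y-1).
FLOYD_COEFFS = {(-1, 0): 7 / 16, (-1, -1): 1 / 16, (0, -1): 5 / 16, (1, -1): 3 / 16}

def noise_shaping_response(coeffs, n=512):
    """Amplitude |1 - G(fx, 0)| of the shaped quantization noise (sketch)."""
    fx = np.arange(n) / (2.0 * n)                  # normalized frequency, 0 .. 0.5
    g = np.zeros(n, dtype=complex)
    for (dx, dy), c in coeffs.items():
        g += c * np.exp(2j * np.pi * fx * dx)      # evaluate G along the fy = 0 axis
    return fx, np.abs(1.0 - g)

fx, mag = noise_shaping_response(FLOYD_COEFFS)
print(mag[0], mag[-1])   # ~0 at DC (low-frequency noise suppressed), larger near 0.5
```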
A TMDS serial transmission system is used for the transmission between the transmitter 311 and the receiver 321. In the HDMI standard, an image signal and a sound signal are transmitted by using three TMDS channels 331 to 333. In a valid image section, which is a section obtained by excluding a horizontal blanking section and a vertical blanking section from a section from a certain vertical synchronization signal to the next vertical synchronization signal, differential signals corresponding to pixel data of one uncompressed screen of an image are transmitted in one direction to the sink apparatus 320 over the TMDS channels 331 to 333. In the horizontal blanking section and the vertical blanking section, differential signals corresponding to sound data, control data, other auxiliary data, or the like are transmitted in one direction to the sink apparatus 320 over the TMDS channels 331 to 333.
In the HDMI standard, a clock signal is transmitted by a TMDS clock channel 334. In each of the TMDS channels 331 to 333, pixel data for 10 bits can be transmitted during one clock transmitted by the TMDS clock channel 334.
In the HDMI standard, a display data channel (DDC) 335 is also provided. The display data channel 335 is used by the source apparatus 310 to read out EDID (Extended Display Identification Data) from the sink apparatus 320. When the sink apparatus 320 is a display apparatus, the EDID information indicates information concerning its model, screen size setting, timing, and the like, and the performance of the sink apparatus 320. The EDID information is stored in an EDID ROM 322 of the sink apparatus 320. Further, in the HDMI standard, a CEC (Consumer Electronics Control) line 336 is provided. The CEC line 336 is a line for performing bidirectional communication of apparatus control signals. Whereas the display data channel 335 connects apparatuses in a one-to-one relation, the CEC line 336 directly connects all apparatuses connected to the HDMI.
Consequently, in this embodiment, the viewing-condition determining unit 280 receives, as screen information, the Max. Vertical Image Size 422 and the number of pixels in vertical direction 432 among the EDID information via the display data channel 335. The viewing-condition determining unit 280 calculates, as a viewing condition, a viewing distance from the Max. Vertical Image Size 422 and calculates, as a viewing condition, pixel density from the Max. Vertical Image Size 422 and the number of pixels in vertical direction 432. The viewing-condition determining unit 280 acquires screen information via the display data channel 335. However, when the Max. Vertical Image Size 422 and the number of pixels in vertical direction 432 are not stored in the EDID ROM 322, the viewing-condition determining unit 280 acquires screen information via the CEC line 336. When it is still difficult to acquire one or both of these kinds of screen information, the viewing-condition determining unit 280 calculates viewing conditions using values decided in advance.
As explained above, the filter-coefficient storing unit 270 is configured to store the correspondence table between the viewing conditions and the spatial frequencies shown in
As explained above, the filter-coefficient storing unit 270 may be configured to store a correspondence table shown in
Subsequently, the reproducing apparatus 10 determines whether information indicating the number of pixels in the vertical direction of the screen has been successfully acquired (step S916). When the information has not been successfully acquired, the reproducing apparatus 10 establishes communication through the CEC line 336 and determines whether the information indicating the number of pixels in the vertical direction of the screen has been successfully acquired through the CEC line 336 (step S917). When the information has not been successfully acquired through the CEC line 336 either, the reproducing apparatus 10 calculates, with the viewing-condition determining unit 280, pixel density from a default value of the number of pixels in the vertical direction of the screen (step S918) and the information indicating the vertical length of the screen used in step S915 (step S919). On the other hand, when the information indicating the number of pixels in the vertical direction of the screen has been successfully acquired in step S916 or S917, the reproducing apparatus 10 calculates, with the viewing-condition determining unit 280, pixel density from the information indicating the number of pixels in the vertical direction of the screen and the information indicating the vertical length of the screen used in step S915 (step S919).
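The fallback logic of steps S916 to S919 can be outlined as follows; the default values and the two reader callables are placeholders standing in for the actual EDID and CEC accesses.

```python
DEFAULT_VERTICAL_PIXELS = 1080        # assumed fallback value, not given in the text
DEFAULT_VERTICAL_LENGTH_MM = 500.0    # assumed fallback value, not given in the text

def acquire_pixel_density(read_via_ddc, read_via_cec, vertical_length_mm):
    """Steps S916-S919 in outline: DDC first, then the CEC line, then a default.

    read_via_ddc / read_via_cec stand in for the actual EDID and CEC accesses;
    each returns the number of vertical pixels or None when it cannot be read.
    """
    vertical_pixels = read_via_ddc()                 # step S916
    if vertical_pixels is None:
        vertical_pixels = read_via_cec()             # step S917
    if vertical_pixels is None:
        vertical_pixels = DEFAULT_VERTICAL_PIXELS    # step S918
    return vertical_pixels / vertical_length_mm      # step S919

# Example: DDC read fails, CEC read succeeds.
print(acquire_pixel_density(lambda: None, lambda: 1080, 498.0))
```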
The reproducing apparatus 10 acquires, with the filter-coefficient setting unit 260, a filter coefficient corresponding to the calculated viewing distance and pixel density among the filter coefficients stored in the filter-coefficient storing unit 270 (step S921). The reproducing apparatus 10 sets, with the filter-coefficient setting unit 260, the filter coefficient acquired in this way in the feedback arithmetic unit 240 (step S922).
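A minimal sketch of the selection in step S921 is given below; the table layout (keyed by the spatial frequency in cpd) and the entries are hypothetical.

```python
def select_filter_coefficients(table, max_frequency_cpd):
    """Pick the coefficient set whose associated spatial frequency is closest
    to the maximum frequency derived from the viewing conditions (sketch)."""
    nearest = min(table, key=lambda cpd: abs(cpd - max_frequency_cpd))
    return table[nearest]

# Hypothetical stored table; each value would be the twelve coefficients g(i, j).
stored_table = {10: "coefficients for 10 cpd",
                30: "coefficients for 30 cpd",
                60: "coefficients for 60 cpd"}
print(select_filter_coefficients(stored_table, 28.3))   # -> the 30 cpd entry
```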
The reproducing apparatus 10 calculates a quantization error Q(x,y) by calculating, with the subtracter 230, a difference between the signal before the quantization by the quantizing unit 210 and the signal inversely quantized by the inverse quantization unit 220 (step S953).
The reproducing apparatus 10 accumulates the quantization error Q(x,y) calculated in this way and uses, with the feedback arithmetic unit 240, the quantization error Q(x,y) for calculation of a feedback value (step S954). The reproducing apparatus 10 feeds back the feedback value calculated in this way to the adder 250 (step S955).
A first modification of the embodiment of the present invention is explained with reference to the drawings. In the example explained with reference to
The management server 710 unitarily manages the content servers 731 to 734. The management server 710 acquires content data from the content servers 731 to 734 in response to a request from the content viewing apparatus 750 and transmits the content data to the content viewing apparatus 750. Specifically, the management server 710 acquires screen information concerning viewing conditions from the display apparatus 760 and, as explained with reference to
The communication units 741 and 742 perform communication between the content viewing apparatus 750 and the content providing apparatus 700 via a network such as the Internet.
The display apparatus 760 displays the image signal transmitted from the content providing apparatus 700 on a display screen.
Thereafter, in step S922, after setting the acquired filter coefficient in the feedback arithmetic unit 240, the reproducing apparatus 10 performs the gradation modulation processing and transmits an image signal subjected to other predetermined image processing to the display apparatus 760 (step S962).
Consequently, the management server 710 can transmit the image signal, which is subjected to the gradation modulation processing on the basis of the screen information concerning the viewing conditions from the display apparatus 760, to the display apparatus 760 connected to the network such as the Internet.
A second modification of the embodiment is explained with reference to the drawings. In the example explained with reference to
The apparatus-information storing unit 720 stores a manufacturing number of a display apparatus and screen information concerning viewing conditions in association with each other.
When it is difficult to acquire one or both of the pieces of screen information concerning the viewing conditions from the display apparatus 760, the management server 710 acquires a manufacturing number from the display apparatus 760 and acquires screen information corresponding to the manufacturing number from the apparatus-information storing unit 720. Functions other than this function are the same as those of the management server 710 explained with reference to
Since the apparatus-information storing unit 720 is provided in this way, when it is difficult to obtain the screen information concerning the viewing conditions from the display apparatus 760, the management server 710 acquires the manufacturing number from the display apparatus 760. Therefore, the management server 710 can acquire screen information from the manufacturing number and perform gradation modulation processing suitable for the display apparatus 760.
As explained above, according to this embodiment, when the gradation modulation processing is performed, a filter coefficient is selected on the basis of the viewing conditions, which are calculated according to the screen information from the display apparatus 30 that displays an image signal, and set in the feedback arithmetic unit 240. This makes it possible to modulate quantization errors to a band with sufficiently low sensitivity with respect to the human visual characteristic.
Consequently, for example, even if the bit width of the respective pixel values of a liquid crystal panel of a television is 8 bits, an image quality equivalent to 12 bits can be represented. Even if an input signal to the television is an 8-bit signal, the bit length can be expanded to 8 bits or more by various kinds of image processing. For example, an 8-bit image is expanded to 12 bits by noise reduction. When the bit width of the respective pixel values of the liquid crystal panel is 8 bits, the 12-bit data needs to be quantized to 8 bits. In this case, an image quality equivalent to 12 bits can be represented by the 8-bit liquid crystal panel by applying the present invention. The present invention can be applied to a transmission line in the same manner. For example, when a transmission line from a video apparatus to a television has an 8-bit width, if a 12-bit image signal in the video apparatus is converted into 8 bits according to the present invention and transferred to the television, an image quality equivalent to 12 bits can be viewed on the television side.
The embodiment of the present invention indicates an example for embodying the present invention. The embodiment has correspondence relations with the respective elements explained above in the section of the summary of the invention. However, the present invention is not limited to this. Various modifications are possible without departing from the spirit of the present invention.
The filter-coefficient storing means corresponds to, for example, the filter-coefficient storing unit 270. The viewing-condition determining means corresponds to, for example, the viewing-condition determining unit 280. The filter-coefficient setting means corresponds to, for example, the filter-coefficient setting unit 260. The gradation modulating means corresponds to, for example, the gradation modulator 200. The quantizing means corresponds to, for example, the quantizing unit 210. The filter coefficient corresponds to, for example, the filter coefficient G of the filter-coefficient storing unit 270.
The number of pixels corresponds to, for example, the number of pixels 432 or the number of pixels 783 in the vertical direction. The screen size corresponds to, for example, the vertical length 422 or the screen size 784.
The inverse quantization means corresponds to, for example, the inverse quantization unit 220. The difference generating means corresponds to, for example, the subtracter 230. The arithmetic means corresponds to, for example, the feedback arithmetic unit 240. The adding means corresponds to, for example, the adder 250.
The viewing condition determining step corresponds to, for example, steps S912 to S919. The filter coefficient setting step corresponds to, for example, steps S921 and S922.
The processing procedures explained in the embodiment may be grasped as a method having the series of procedures or may be grasped as a computer program for causing a computer to execute these series of procedures or a storage medium that stores the computer program.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.