The present invention relates to an image signal processing device, an image signal processing method, and a program. More specifically, the present invention relates to an image signal processing device, an image signal processing method, and a program in which a signal process for an FPD (Flat Panel Display) (flat display) including, for example, an ABL (Automatic Beam current Limiter) process, a VM (Velocity Modulation) process, and a γ process for a CRT (Cathode Ray Tube) is performed to allow an FPD display apparatus that is a display apparatus of an FPD to provide a natural display equivalent to that of a CRT display apparatus that is a display apparatus of a CRT.
A brightness adjustment contrast adjustment unit 11 applies an offset to an input image signal to perform brightness adjustment of the image signal, adjusts the gain to perform contrast adjustment of the image signal, and supplies a result to an image quality improvement processing unit 12.
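As an illustrative sketch only, and not part of the described embodiment, the offset-based brightness adjustment and gain-based contrast adjustment performed by the brightness adjustment contrast adjustment unit 11 can be modeled as follows; the function name, the normalized signal range of 0.0 to 1.0, and the clipping behavior are assumptions:

```python
def adjust_brightness_contrast(signal, offset, gain):
    """Apply a brightness offset followed by a contrast gain to each sample.

    `offset` and `gain` are hypothetical control values; a real unit would
    derive them from user adjustment settings. The output is clipped to an
    assumed [0.0, 1.0] signal range.
    """
    return [min(max(gain * (s + offset), 0.0), 1.0) for s in signal]
```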
The image quality improvement processing unit 12 performs an image quality improvement process such as DRC (Digital Reality Creation). That is, the image quality improvement processing unit 12 is a processing block for obtaining a high-quality image, performs an image signal process including number-of-pixels conversion and the like on the image signal from the brightness adjustment contrast adjustment unit 11, and supplies a result to a γ correction unit 13.
Here, DRC is described in, for example, Japanese Unexamined Patent Application Publication No. 2005-236634, Japanese Unexamined Patent Application Publication No. 2002-223167, or the like as a class classification adaptive process.
The γ correction unit 13 is a processing block for performing a gamma correction process of adjusting the signal level of a dark portion using a signal process, in addition to γ characteristics inherent to fluorescent materials (light-emitting units of a CRT), for reasons such as poor viewing of a dark portion on a CRT display apparatus.
Here, since an LCD also contains in an LCD panel thereof a processing circuit for adjusting the photoelectric conversion characteristics (transmission characteristics) of liquid crystal to the γ characteristics of the CRT, an FPD display apparatus of the related art performs a γ correction process in a manner similar to that of a CRT display apparatus.
The γ correction unit 13 subjects the image signal from the image quality improvement processing unit 12 to a gamma correction process, and supplies a resulting image signal to an FPD (not illustrated), for example, an LCD. Thereby, an image is displayed on the LCD.
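A minimal sketch of such a gamma correction process is given below; the exponent of 2.2 and the `dark_lift` parameter for raising the dark-portion level are assumptions for illustration, since the text states only that the signal level of a dark portion is adjusted on top of the γ characteristics inherent to the fluorescent materials:

```python
def gamma_correct(signal, gamma=2.2, dark_lift=0.0):
    """Inverse-gamma correction with an optional dark-region lift.

    Both the exponent and the `dark_lift` term are illustrative
    assumptions, not values taken from the embodiment.
    """
    out = []
    for s in signal:
        v = max(s, 0.0) ** (1.0 / gamma)       # inverse-gamma curve
        out.append(min(v + dark_lift * (1.0 - v), 1.0))
    return out
```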
As above, in an FPD display apparatus of the related art, after a contrast or brightness adjustment process is performed, an image signal is directly input to an FPD through an image quality improvement process and a gamma correction process.
Thus, in the FPD display apparatus, the brightnesses of the input and the displayed image have a proportional relationship according to gamma. The displayed image, however, seems brighter and more glaring than that of a CRT display apparatus.
Accordingly, there is a method for adaptively improving the gradation representation capability without using a separate ABL circuit in a display apparatus having lower panel characteristics than a CRT in terms of the gradation representation capability for a dark portion (see, for example, Patent Document 1).
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2005-39817
Incidentally, as described above, an image displayed on an FPD display apparatus seems brighter and more glaring than one displayed on a CRT display apparatus because an image signal processing system of the related art, incorporated in a CRT display apparatus to perform a process only on the image signal, is merely modified for use with an FPD and incorporated in an FPD display apparatus. That is, no consideration is given to the system structure of a CRT display apparatus, which produces its display through comprehensive signal processing involving not only the image signal processing system but also the driving system and the response characteristics specific to the driving system itself.
The present invention has been made in view of such a situation, and is intended to allow for a natural display equivalent to that of a CRT display apparatus such that an image obtained when an image signal is displayed on a display apparatus of a display type other than that of a CRT display apparatus, for example, on an FPD display apparatus, can look like an image displayed on a CRT display apparatus.
An image signal processing device or a program of an aspect of the present invention is an image signal processing device for processing an image signal so that an image obtained when the image signal is displayed on a display apparatus of a non-CRT (Cathode Ray Tube) display type looks like an image displayed on a CRT display apparatus, including ABL processing means for applying a process that emulates an ABL (Automatic Beam current Limiter) process to the image signal, VM processing means for applying a process that emulates a VM (Velocity Modulation) process to the image signal processed by the ABL processing means, and gamma correction means for performing gamma correction on the image signal processed by the VM processing means, or a program for causing a computer to function as the image signal processing device.
An image signal processing method of an aspect of the present invention is an image signal processing method for an image signal processing device for processing an image signal so that an image obtained when the image signal is displayed on a display apparatus of a non-CRT (Cathode Ray Tube) display type looks like an image displayed on a CRT display apparatus, including the steps of applying a process that emulates an ABL (Automatic Beam current Limiter) process to the image signal; applying a process that emulates a VM (Velocity Modulation) process to the image signal on which the process that emulates the ABL process has been performed; and performing gamma correction on the image signal on which the process that emulates the VM process has been performed.
In an aspect of the present invention, a process that emulates an ABL process is applied to the image signal, a process that emulates a VM process is applied to the image signal thus processed, and the image signal processed by the process that emulates the VM process is furthermore gamma corrected.
According to an aspect of the present invention, a natural display equivalent to that of a CRT display apparatus can be performed.
11 brightness adjustment contrast adjustment unit, 12 image quality improvement processing unit, 13 γ correction unit, 31 brightness adjustment contrast adjustment unit, 32 image quality improvement processing unit, 33 ABL processing unit, 34 VM processing unit, 35 CRT γ processing unit, 36 full screen brightness average level detection unit, 37 peak detection differential control value detection unit, 38 ABL control unit, 39 VM control unit, 40 display color temperature compensation control unit, 51 brightness adjustment contrast adjustment unit, 52 image quality improvement processing unit, 53 gain adjustment unit, 54 γ correction unit, 55 video amplifier, 56 CRT, 57 FBT, 58 beam current detection unit, 59 ABL control unit, 60 image signal differentiating circuit, 61 VM driving circuit, 101 bus, 102 CPU, 103 ROM, 104 RAM, 105 hard disk, 106 output unit, 107 input unit, 108 communication unit, 109 drive, 110 input/output interface, 111 removable recording medium, 210 luminance correction unit, 211 VM coefficient generation unit, 212 computation unit, 220 EB processing unit, 241 EB coefficient generation unit, 242A to 242D and 242F to 242I computation unit, 251 to 259 delay unit, 260 EB coefficient generation unit, 261 product-sum operation unit, 271, 272 selector, 281 control unit, 282 level shift unit, 283 gain adjustment unit, 310 luminance correction unit, 311 delay timing adjustment unit, 312 differentiating circuit, 313 threshold processing unit, 314 waveform shaping processing unit, 315 multiplying circuit, 321 tap selection unit, 322 class classification unit, 323 class prediction coefficient storage unit, 324 prediction unit, 325 class decision unit, 326 tap coefficient storage unit, 327 prediction unit
Embodiments of the present invention will be described hereinafter with reference to the drawings.
The image signal processing device of
Here, before explaining the image signal processing device of
In the CRT display apparatus, in a brightness adjustment contrast adjustment unit 51 and an image quality improvement processing unit 52, an image signal is subjected to processes similar to those of the brightness adjustment contrast adjustment unit 11 and image quality improvement processing unit 12 of
The gain adjustment unit (limiter) 53 limits the signal level of the image signal from the image quality improvement processing unit 52 according to an ABL control signal from an ABL control unit 59 described below, and supplies a result to a γ correction unit 54. That is, the gain adjustment unit 53 adjusts the gain of the image signal from the image quality improvement processing unit 52 instead of directly limiting the amount of current of an electron beam of a CRT 56 described below.
The γ correction unit 54 subjects the image signal from the gain adjustment unit 53 to a γ correction process which is similar to that of the γ correction unit 13 of
The video amplifier 55 amplifies the image signal from the γ correction unit 54, and supplies a result to the CRT as a CRT driving image signal.
On the other hand, an FBT (Flyback Transformer) 57 is a transformer for generating a horizontal deflection drive current for providing horizontal scanning of an electron beam and an anode voltage of the CRT (Braun tube) 56 in the CRT display apparatus, the output of which is supplied to a beam current detection unit 58.
The beam current detection unit 58 detects the amount of current of an electron beam necessary for ABL control from the output of the FBT 57, and supplies the amount of current to the CRT 56 and an ABL control unit 59.
The ABL control unit 59 measures a current value of the electron beam from the beam current detection unit 58, and outputs an ABL control signal for ABL control for controlling the signal level of the image signal to the gain adjustment unit 53.
On the other hand, the image signal differentiating circuit 60 differentiates the image signal from the image quality improvement processing unit 52 and supplies a resulting differentiated value of the image signal to a VM driving circuit 61.
The VM (Velocity Modulation) driving circuit 61 performs a VM process of partially changing the deflection (horizontal deflection) velocity of an electron beam in the CRT display apparatus so that the display luminance of even the same image signal is changed. In the CRT display apparatus, the VM process is implemented using a dedicated VM coil (not illustrated) and the VM driving circuit 61 separate from a main horizontal deflection circuit (which is constituted by a deflection yoke DY, the FBT 57, a horizontal driving circuit (not illustrated), and the like).
That is, the VM driving circuit 61 generates a VM coil driving signal for driving the VM coil on the basis of the differentiated value of the image signal from the image signal differentiating circuit 60, and supplies the VM coil driving signal to the CRT 56.
The CRT 56 is constituted by an electron gun EG, the deflection yoke DY, and the like. In the CRT 56, the electron gun EG emits an electron beam in accordance with the output of the beam current detection unit 58 or the CRT driving image signal from the video amplifier 55, and the electron beam is deflected (and scanned) in the horizontal and vertical directions in accordance with magnetic fields generated by the deflection yoke DY serving as a coil, and impinges on a fluorescent surface of the CRT 56. Thereby, an image is displayed.
Further, in the CRT 56, the VM coil is driven in accordance with the VM coil driving signal from the VM driving circuit 61. Thereby, the deflection velocity of the electron beam is partially changed, thereby providing, for example, enhancement or the like of edges of an image to be displayed on the CRT 56.
As can be seen from
In order to display on an FPD such an image in which the influence of the VM process and the ABL process appears, it is necessary to perform processes equivalent to the VM process and the ABL process over the path over which the image signal is processed, because the driving method of the FPD is completely different from that of a CRT.
Accordingly, the image signal processing device of
That is, in the image signal processing device of
In order to obtain, at the LCD, brightness characteristics similar to those of a CRT, the ABL processing unit 33 performs an ABL emulation process of limiting the level of the image signal from the image quality improvement processing unit 32 according to the control from an ABL control unit 38 in a case where an image having a brightness (luminance and its area) of a certain value or more is obtained.
Here, the ABL emulation process in
That is, an ABL process performed in a CRT display apparatus is a process of limiting a current, in a case where a brightness (luminance and its area) of a certain value or more is obtained in a CRT, so as not to cause an excessive amount of electron beam (current). The ABL processing unit 33, however, performs emulation of the ABL process in
In
That is, in
The image signal subjected to the ABL process in the ABL processing unit 33 is supplied to a VM processing unit 34.
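The ABL emulation described above can be sketched, under assumptions, as a gain limitation driven by the full-screen average level; the threshold value `apl_limit` and the normalized signal range are hypothetical, since the text specifies only that the level is limited when a brightness of a certain value or more is obtained:

```python
def abl_emulation(frame, apl_limit=0.6):
    """Emulate an ABL process in the signal domain.

    Instead of limiting the electron-beam current of a CRT, the
    full-screen average level (APL) of the frame is measured and, when it
    exceeds `apl_limit` (a hypothetical threshold), every pixel is scaled
    down so that the limited APL is not exceeded.
    """
    apl = sum(frame) / len(frame)       # full-screen average level
    if apl <= apl_limit:
        return list(frame)              # within limits; pass through
    gain = apl_limit / apl              # gain that caps the APL
    return [p * gain for p in frame]
```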
The VM processing unit 34 is a processing block for performing a process equivalent to the VM process in the CRT display apparatus of
That is, in
The VM processing unit 34 performs a process for partially changing the level of the image signal from the ABL processing unit 33 according to the VM control signal generated by the VM control unit 39, that is, a process such as partial correction of the image signal or enhancement of an edge portion or a peak of the image signal.
Here, in the CRT display apparatus of
The VM processing unit 34 performs a computation process of computing a correction value equivalent to the amount of change in luminance caused by the VM process performed in the CRT display apparatus and correcting the image signal using this correction value, thereby emulating the VM process performed in the CRT display apparatus.
A CRT γ processing unit 35 performs a process of adjusting the level of each color signal (component signal) in order to perform, for the LCD, a γ correction process, including a process performed by a processing circuit (conversion circuit) provided inside an LCD panel of the related art for obtaining γ characteristics equivalent to those of a CRT, and a color temperature compensation process.
Here, the CRT γ processing unit 35 in
That is, in
White balance, color temperature, and the luminance change relative thereto differ among a CRT, an LCD, and a PDP. Thus, the display color temperature compensation control unit 40 of
The process performed by the CRT γ processing unit 35 according to the control signal from the display color temperature compensation control unit 40 includes a process, traditionally performed by a processing circuit inside a flat panel such as an LCD, of converting the gradation characteristics of each panel so as to become equivalent to those of a CRT, and a process of absorbing the difference in characteristics from one display panel to another.
Then, the CRT γ processing unit 35 subjects the image signal from the VM processing unit 34 to the foregoing processes and then supplies the processed image signal to an LCD as an FPD (not illustrated) for display.
As above, the image signal processing device of
According to the image signal processing device of
Further, according to the image signal processing device of
According to the image signal processing device of
Further, according to the image signal processing device of
Next, the flow of a process for an image signal by the image signal processing device of
When an image signal is supplied to the brightness adjustment contrast adjustment unit 31, in step S11, the brightness adjustment contrast adjustment unit 31 performs brightness adjustment of the image signal supplied thereto, followed by contrast adjustment, and supplies a result to the image quality improvement processing unit 32. The process proceeds to step S12.
In step S12, the image quality improvement processing unit 32 performs an image signal process including number-of-pixels conversion and the like on the image signal from the brightness adjustment contrast adjustment unit 31, and supplies an image signal obtained after the image signal process to the ABL processing unit 33, the full screen brightness average level detection unit 36, and the peak detection differential control value detection unit 37. The process proceeds to step S13.
Here, the full screen brightness average level detection unit 36 detects the brightness or average level of the screen on the basis of the image signal from the image quality improvement processing unit 32, and supplies a result to the peak detection differential control value detection unit 37 and the ABL control unit 38. The ABL control unit 38 generates a control signal for limiting the brightness of the screen on the basis of the detected brightness or average level of the screen from the full screen brightness average level detection unit 36, and supplies the control signal to the ABL processing unit 33.
Further, the peak detection differential control value detection unit 37 determines, from the image signal from the image quality improvement processing unit 32, a partial peak signal of the image signal or an edge signal obtained by differentiation of the image signal, and supplies a result to the VM control unit 39 together with the brightness or average level of the screen from the full screen brightness average level detection unit 36. The VM control unit 39 generates a VM control signal equivalent to the VM coil driving signal in the CRT display apparatus on the basis of the partial peak signal of the image signal, the edge signal obtained by the differentiation of the image signal, the brightness of the screen, or the like from the peak detection differential control value detection unit 37, and supplies the VM control signal to the VM processing unit 34.
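As an illustrative sketch, an edge signal for VM control might be derived by differentiation followed by threshold processing; the simple first-difference differentiation and the threshold value are assumptions, not details taken from the embodiment:

```python
def vm_control_signal(line, threshold=0.1):
    """Derive a per-pixel VM control value from a horizontal differentiation.

    The scan line is differentiated by a first difference, and small
    differentiated values below `threshold` (an assumed noise floor) are
    suppressed, leaving an edge signal usable for VM control.
    """
    control = [0.0]                          # no left neighbour for pixel 0
    for i in range(1, len(line)):
        d = line[i] - line[i - 1]            # horizontal differentiated value
        control.append(d if abs(d) >= threshold else 0.0)
    return control
```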
In step S13, the ABL processing unit 33 applies a process that emulates an ABL process to the image signal from the image quality improvement processing unit 32.
That is, the ABL processing unit 33 performs a process (ABL emulation process) that emulates an ABL process such as limiting the level of the image signal from the image quality improvement processing unit 32 according to the control from the ABL control unit 38, and supplies a resulting image signal to the VM processing unit 34.
Then, the process proceeds from step S13 to step S14, in which the VM processing unit 34 applies a process that emulates a VM process to the image signal from the ABL processing unit 33.
That is, in step S14, the VM processing unit 34 performs a process (VM emulation process) that emulates a VM process such as correcting the luminance of the image signal from the ABL processing unit 33 according to the VM control signal supplied from the VM control unit 39, and supplies a resulting image signal to the CRT γ processing unit 35. The process proceeds to step S15.
In step S15, the CRT γ processing unit 35 subjects the image signal from the VM processing unit 34 to a γ correction process, and further performs a color temperature compensation process of adjusting the balance of the respective colors of the image signal from the VM processing unit 34 according to the control signal from the display color temperature compensation control unit 40. Then, the CRT γ processing unit 35 supplies an image signal obtained as a result of the color temperature compensation process to an LCD as an FPD (not illustrated) for display.
Next,
In
The luminance correction unit 210 performs a luminance correction process, for the image signal supplied from the ABL processing unit 33 (
That is, the luminance correction unit 210 is constructed from a VM coefficient generation unit 211 and a computation unit 212.
The VM coefficient generation unit 211 is supplied with a VM control signal from the VM control unit 39 (
The computation unit 212 is supplied with, in addition to the VM coefficient from the VM coefficient generation unit 211, the image signal from the ABL processing unit 33 (
The computation unit 212 multiplies the image signal from the ABL processing unit 33 (
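The multiplication of VM coefficients with the pixels centered on the pixel of interest can be sketched as follows; the coefficient values and window length in the example are hypothetical, as the text notes that the specific values depend on the specification of the CRT display apparatus being emulated:

```python
def vm_luminance_correction(line, center, vm_coeffs):
    """Apply VM coefficients to pixels centered on the pixel of interest.

    `vm_coeffs` is an odd-length list of multipliers (hypothetical
    values); its middle entry applies to the pixel of interest at index
    `center` and the other entries to its horizontal neighbours, emulating
    the local luminance change produced by the VM coil.
    """
    out = list(line)
    half = len(vm_coeffs) // 2
    for k, coeff in enumerate(vm_coeffs):
        i = center - half + k
        if 0 <= i < len(out):                # ignore off-screen neighbours
            out[i] = line[i] * coeff
    return out
```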
The EB processing unit 220 subjects the image signal from the luminance correction unit 210 (image signal processed by the ABL processing unit 33 and further processed by the luminance correction unit 210) to a process (EB (Electron Beam) emulation process) that emulates the electron beam of the CRT display apparatus spreading out and impinging on a fluorescent material of the CRT display apparatus, and supplies a result to the CRT γ processing unit 35 (
As above, the VM emulation process performed in the VM processing unit 34 is composed of the luminance correction process performed in the luminance correction unit 210 and the EB emulation process performed in the EB processing unit 220.
The VM coefficient is a coefficient to be multiplied with the pixel values (luminance) of the pixels to be corrected for luminance. In the CRT display apparatus, the VM coil driving signal delays the deflection velocity of horizontal deflection (deflection in the horizontal direction) at the position of a pixel of interest (here, a pixel whose luminance is to be enhanced by a VM process), thereby increasing the luminance of the pixel of interest; to emulate this equivalently, a plurality of pixels arranged in the horizontal direction centered on the pixel of interest are used as the pixels to be corrected for luminance.
In the VM coefficient generation unit 211, as illustrated in
That is, part A of
As illustrated in part A of
Part B of
In the CRT display apparatus, the VM coil located in the deflection yoke DY (
That is, part C of
Due to the magnetic field generated by the VM coil, the temporal change of the position in the horizontal direction of the electron beam (the gradient of the graph of part C of
Part D of
Based on a case where the horizontal deflection of the electron beam is performed only by the deflection voltage of part A of
The VM coefficient generation unit 211 (
Note that the specific value of the VM coefficient, the range of pixels to be multiplied with the VM coefficient (the pixel value of how many pixels arranged in the horizontal direction centered on the pixel of interest is to be multiplied with the VM coefficient), the pixel value (level) of the pixel to be set as a pixel of interest, and the like are determined depending on the specification or the like of the CRT display apparatus for which the image signal processing device of
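Under the assumption that the display luminance of a phosphor is inversely proportional to the horizontal deflection speed (a slower sweep lets the beam dwell longer on the phosphor), a VM coefficient could be sketched as the ratio of deflection speeds; this inverse-proportionality model is an assumption for illustration, as the text states only that the luminance changes with the deflection velocity:

```python
def vm_coefficient(nominal_speed, modulated_speed):
    """VM coefficient as a ratio of horizontal deflection speeds.

    Assumes luminance inversely proportional to deflection speed: a
    deflection slowed by the VM coil (modulated_speed < nominal_speed)
    yields a coefficient greater than 1, i.e. an enhanced luminance.
    """
    return nominal_speed / modulated_speed
```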
Next, the EB emulation process performed in the EB processing unit 220 of
In the EB emulation process, as described above, a process that emulates an electron beam of the CRT display apparatus spreading out and impinging on a fluorescent material of the CRT 56 (
That is, assume now that a pixel (sub-pixel) corresponding to a fluorescent material to which an electron beam is to be radiated is set as a pixel of interest. In a case where the intensity of the electron beam is high, the spot of the electron beam becomes large, so that the electron beam impinges not only on the fluorescent material corresponding to the pixel of interest but also on the fluorescent materials corresponding to neighboring pixels, thereby influencing the pixel values of those neighboring pixels. In the EB emulation process, a process that emulates this influence is performed.
Note that in
Although the relationship between the beam current and the spot size may differ depending on the CRT type, the setting of maximum luminance, or the like, the spot size increases as the beam current increases. That is, the higher the luminance, the larger the spot size.
Such a relationship between the beam current and the spot size is described in, for example, Japanese Unexamined Patent Application Publication No. 2004-39300 or the like.
The display screen of the CRT is coated with fluorescent materials (fluorescent substances) of three colors, namely, red, green, and blue, and electron beams (used) for red, green, and blue impinge on the red, green, and blue fluorescent materials, thereby emitting light of red, green, and blue. Thereby, an image is displayed.
The CRT is further provided with a color separation mechanism on the display screen thereof having openings through which electron beams pass so that the electron beams of red, green, and blue are radiated on the fluorescent materials of three colors, namely, red, green, and blue.
That is, part A of
The shadow mask is provided with circular holes serving as openings, and electron beams passing through the holes are radiated on fluorescent materials.
Note that in part A of
Part B of
An aperture grille is provided with slits serving as openings extending in the vertical direction, and electron beams passing through the slits are radiated on fluorescent materials.
Note that in part B of
As explained in
Note that parts A of
As the luminance increases, the intensity of the center portion of (the spot of) the electron beam increases, and accordingly the intensity of a portion around the electron beam also increases. Thus, the size of the spot of the electron beam formed on the color separation mechanism increases. Consequently, the electron beam is radiated not only on the fluorescent material corresponding to the pixel of interest (the pixel corresponding to the fluorescent material to be irradiated with the electron beam) but also on the fluorescent materials corresponding to pixels surrounding the pixel of interest.
That is, part A of
In
On the other hand, in a case where the beam current has the second current value, as illustrated in part B of
That is, in a case where the beam current has the second current value, the spot size of the electron beam becomes large enough to include other slits as well as the slit for the fluorescent material corresponding to the pixel of interest, and, consequently, the electron beam passes through the other slits and is also radiated on the fluorescent materials other than the fluorescent material corresponding to the pixel of interest.
Note that as illustrated in part B of
In the EB emulation process, as above, the influence of an image caused by radiating an electron beam not only on the fluorescent material corresponding to the pixel of interest but also on other fluorescent materials is reflected in the image signal.
Here,
That is, part A of
A majority portion of the electron beams passes through the slit for the fluorescent material corresponding to the pixel of interest, while a portion of the remainder passes through the slits adjacent to the left and to the right of that slit. The electron beams passing through those adjacent slits influence the display of the pixel corresponding to the fluorescent material of the left slit and the pixel corresponding to the fluorescent material of the right slit.
Note that part B of
That is, part A of
The electron beams of part A of
Part B of
In part B of
Note that part C of
That is, part A of
Part B of
That is, part B of
Part C of
That is, part A of
The electron beams of part A of
Part B of
In part B of
Part C of
Note that in
Incidentally, the area of a certain section of the one-dimensional normal distribution (normal distribution in one dimension) can be determined by integrating the probability density function f(x) in Equation (1) representing the one-dimensional normal distribution over the section of which the area is to be determined.
Here, in Equation (1), μ represents the average value and σ2 represents variance.
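Equation (1) itself is not reproduced in this excerpt; assuming it takes the standard form of the one-dimensional normal density with the average value μ and variance σ2 defined here, it would read:

```latex
f(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\,
       \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)
```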
As described above, in a case where the distribution of the intensity of an electron beam is approximated by the two-dimensional normal distribution (normal distribution in two dimensions), the intensity of the electron beam in a certain range can be determined by integrating the probability density function f(x, y) in Equation (2) representing the two-dimensional normal distribution over the range for which the intensity is to be determined.
Here, in Equation (2), μx represents the average value in the x direction and μy represents the average value in the y direction. Further, σx2 represents the variance in the x direction and σy2 represents the variance in the y direction. ρxy represents the correlation coefficient in the x and y directions (the value obtained by dividing the covariance in the x and y directions by the product of the standard deviation σx in the x direction and the standard deviation σy in the y direction).
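Equation (2) is likewise not reproduced here; assuming it takes the standard form of the bivariate normal density with the averages μx and μy, variances σx2 and σy2, and correlation coefficient ρxy defined above, it would read:

```latex
f(x,y) = \frac{1}{2\pi\sigma_{x}\sigma_{y}\sqrt{1-\rho_{xy}^{2}}}
         \exp\!\left(-\frac{1}{2(1-\rho_{xy}^{2})}
         \left[\frac{(x-\mu_{x})^{2}}{\sigma_{x}^{2}}
         -\frac{2\rho_{xy}(x-\mu_{x})(y-\mu_{y})}{\sigma_{x}\sigma_{y}}
         +\frac{(y-\mu_{y})^{2}}{\sigma_{y}^{2}}\right]\right)
```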
The average value (average vector) (μx, μy) ideally represents the position (x, y) of the center of the electron beam. Now, for ease of explanation, if it is assumed that the position (x, y) of the center of the electron beam is (0, 0) (origin), the average values μx and μy become 0.
Further, in a CRT display apparatus, since an electron gun, a cathode, and the like are designed so that a spot of an electron beam can be round, the correlation coefficient ρxy is set to 0.
Now, if it is assumed that the color separation mechanism is an aperture grille, the probability density function f(x, y) in Equation (2) in which the average values μx and μy and the correlation coefficient ρxy are set to 0 is integrated over the range of a slit. Thereby, the intensity (amount) of the electron beam passing through the slit can be determined.
That is,
Part A of
The intensity of an electron beam passing through a slit in a fluorescent material corresponding to a pixel of interest (a slit of interest) can be determined by integrating the probability density function f(x, y) over the range from −S/2 to +S/2, where S denotes the slit width of a slit in the aperture grille in the x direction.
Further, the intensity of the electron beam passing through the left slit can be determined by, for the x direction, integrating the probability density function f(x, y) over the slit width of the left slit, and the intensity of the electron beam passing through the right slit can be determined by, for the x direction, integrating the probability density function f(x, y) over the slit width of the right slit.
Parts A and C of
The intensity of the electron beam passing through the slit of interest can be determined by, for the y direction, as illustrated in part B of
The intensities of the electron beams passing through the left and right slits can also be determined by, for the y direction, as illustrated in part C of
On the other hand, the overall intensity of the electron beams can be determined by, for both the x and y directions, integrating the probability density function f(x, y) over the range from −∞ to +∞, the value of which is now denoted by P0.
Further, it is assumed that the intensity of the electron beam passing through the slit of interest is represented by P1 and the intensities of the electron beams passing through the left and right slits are represented by PL and PR, respectively.
In this case, within the overall intensity P0 of the electron beams, only the intensity P1 has the influence on the display of the pixel of interest. Further, due to the display of this pixel of interest, the intensity PL influences the display of the pixel (left pixel) corresponding to the fluorescent material of the left slit, and the intensity PR influences the display of the pixel (right pixel) corresponding to the fluorescent material of the right slit.
That is, of the overall intensity P0 of the electron beams, the fraction P1/P0 of the electron beam intensity influences the display of the pixel of interest. Furthermore, the fraction PL/P0 influences the display of the left pixel, and the fraction PR/P0 influences the display of the right pixel.
Therefore, relative to the display of the pixel of interest, the display of the pixel of interest influences the display of the left pixel by (PL/P0)/(P1/P0), and influences the display of the right pixel by (PR/P0)/(P1/P0).
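For illustration, if the probability density function f(x, y) is assumed to be a separable zero-mean Gaussian (an assumption for this sketch only; the actual f(x, y) is determined by the electron beam), the intensities P0, P1, PL, and PR and the resulting EB coefficients can be computed as follows. The slit width S, slit pitch, and beam spread sigma below are placeholder values:

```python
import math

def gauss_cdf(x, sigma):
    # CDF of a zero-mean Gaussian with standard deviation sigma
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def slit_intensity(center, width, sigma):
    # x-direction integral of f(x, y) over a slit of the given width
    # centred at `center`; the y-direction integral over -inf..+inf is 1
    return gauss_cdf(center + width / 2.0, sigma) - gauss_cdf(center - width / 2.0, sigma)

# Illustrative (assumed) geometry: slit width S, slit pitch, beam spread sigma
S, pitch, sigma = 1.0, 2.0, 0.8
P0 = 1.0                               # overall intensity (integral over -inf..+inf)
P1 = slit_intensity(0.0, S, sigma)     # slit of interest
PL = slit_intensity(-pitch, S, sigma)  # left slit
PR = slit_intensity(+pitch, S, sigma)  # right slit

# EB coefficients: influence on the left/right pixel relative to the
# influence on the pixel of interest, (PL/P0)/(P1/P0) and (PR/P0)/(P1/P0)
eb_left = (PL / P0) / (P1 / P0)
eb_right = (PR / P0) / (P1 / P0)
```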
In the EB emulation process, in order to reflect the influence of the display of the pixel of interest on the left pixel, the pixel value of the left pixel is multiplied by the amount of influence (PL/P0)/(P1/P0) of the display of the pixel of interest, used as an EB coefficient for the EB emulation process, and the resulting multiplication value is added to the (original) pixel value of the left pixel. Further, in the EB emulation process, a similar process is performed using, as EB coefficients, the amounts of influence of the display of the pixels surrounding the left pixel on the display of the left pixel. The pixel value of the left pixel determined in this way takes into account the influence caused by the electron beams spreading out at the time of display of the pixels surrounding the left pixel and impinging on the fluorescent material of the left pixel.
Likewise, for the right pixel, the pixel value of the right pixel is determined so as to take into account the influence caused by the electron beams spreading out at the time of display of the pixels surrounding the right pixel and impinging on the fluorescent material of the right pixel.
Note that also in a case where the color separation mechanism is a shadow mask, the EB coefficient used for the EB emulation process can be determined in a manner similar to that in the case of an aperture grille. With a shadow mask, however, the integration is more complex than in the case of an aperture grille. For a shadow mask, it is therefore easier to determine the EB coefficient from the position and radius of each hole in the shadow mask using a Monte Carlo method or the like, rather than using the integration described above.
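A sketch of the Monte Carlo determination mentioned above: sample points from an assumed Gaussian beam profile and count the fraction falling inside each hole of the shadow mask. The hole positions, hole radius, and beam spread below are illustrative assumptions, not values from this description:

```python
import random

def eb_coefficients_monte_carlo(holes, radius, sigma, n_samples=200_000, seed=1):
    """Estimate, for each hole (given by its (x, y) centre), the fraction of
    a Gaussian electron beam centred at the origin that passes through it."""
    rng = random.Random(seed)
    counts = [0] * len(holes)
    for _ in range(n_samples):
        x = rng.gauss(0.0, sigma)
        y = rng.gauss(0.0, sigma)
        for i, (hx, hy) in enumerate(holes):
            if (x - hx) ** 2 + (y - hy) ** 2 <= radius ** 2:
                counts[i] += 1
                break  # holes do not overlap; count each sample once
    return [c / n_samples for c in counts]

# Hole of interest at the origin, with two neighbouring holes
holes = [(0.0, 0.0), (2.0, 0.0), (-2.0, 0.0)]
p = eb_coefficients_monte_carlo(holes, radius=0.8, sigma=1.0)
# EB coefficient: influence on a neighbour relative to the hole of interest
eb = [pi / p[0] for pi in p[1:]]
```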
As above, it is theoretically possible to determine the EB coefficient by calculation. However, as illustrated in
Further, in the case described above, it is a reasonable premise that an electron beam is incident on a color separation mechanism (an aperture grille and a shadow mask) at a right angle. In actuality, however, the angle at which an electron beam is incident on a color separation mechanism becomes shallow as the incidence occurs apart from the center of the display screen.
That is,
Part A of
As illustrated in part A of
Part B of
As illustrated in part B of
In a case where, as illustrated in part B of
As above, it is desirable that the EB coefficient be determined not only by calculation but also using an experiment.
Next, the EB emulation process performed in the EB processing unit 220 of
That is, part A of
Now, it is assumed that in part A of
In this case, if it is assumed that the distance between pixels is 1, the position of the pixel A is set to (x−1, y−1), the position of the pixel B to (x, y−1), the position of the pixel C to (x+1, y−1), the position of the pixel D to (x−1, y), the position of the pixel F to (x+1, y), the position of the pixel G to (x−1, y+1), the position of the pixel H to (x, y+1), and the position of the pixel I to (x+1, y+1).
Here, the pixel A is also referred to as the pixel A(x−1, y−1), using its position (x−1, y−1), and the pixel value of the pixel A(x−1, y−1) is also referred to as the pixel value A. The same applies to the other pixels B to I.
Parts B and C of
That is, part B of
As the pixel value E of the pixel of interest E(x, y) increases, as illustrated in parts B and C of
Thus, the EB processing unit 220 of
The pixel value A is supplied to a computation unit 242A, the pixel value B to a computation unit 242B, the pixel value C to a computation unit 242C, the pixel value D to a computation unit 242D, the pixel value E to an EB coefficient generation unit 241, the pixel value F to a computation unit 242F, the pixel value G to a computation unit 242G, the pixel value H to a computation unit 242H, and the pixel value I to a computation unit 242I.
The EB coefficient generation unit 241 generates EB coefficients AEB, BEB, CEB, DEB, FEB, GEB, HEB, and IEB representing the degree to which the electron beams when displaying the pixel of interest E(x, y) have the influence on the display of the other pixels A(x−1, y−1) to D(x−1, y) and F(x+1, y) to I(x+1, y+1) on the basis of the pixel value E, and supplies the EB coefficients AEB, BEB, CEB, DEB, FEB, GEB, HEB, and IEB to the computation units 242A, 242B, 242C, 242D, 242F, 242G, 242H, and 242I, respectively.
The computation units 242A to 242D and 242F to 242I multiply the pixel values A to D and F to I supplied thereto with the EB coefficients AEB to DEB and FEB to IEB from the EB coefficient generation unit 241, respectively, and output the resulting values A′ to D′ and F′ to I′ as amounts of EB influence.
The pixel value E is directly output, and the amounts of EB influence that the electron beams for displaying the other pixels A(x−1, y−1) to D(x−1, y) and F(x+1, y) to I(x+1, y+1) have on the display of the pixel of interest E(x, y) are added to it. The resulting addition value is set as the pixel value, obtained after the EB emulation process, of the pixel of interest E(x, y).
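In outline, the pixel value of the pixel of interest after the EB emulation process is its original value plus the EB influences of the eight surrounding pixels, each surrounding pixel value multiplied by an EB coefficient. The coefficient model below, in which the influence grows with the pixel value of the pixel being displayed, is a hypothetical stand-in for the generated coefficients (which, as described above, are determined by calculation or experiment):

```python
def eb_coefficient(v):
    # Hypothetical EB coefficient: a brighter pixel is displayed with a
    # stronger, wider-spreading beam, so its influence on an adjacent
    # pixel grows with v.  The 0.02 * (v / 255) scaling is assumed.
    return 0.02 * (v / 255.0)

def eb_emulate_pixel(neighbourhood):
    """neighbourhood = [A, B, C, D, E, F, G, H, I] in raster order.
    Adds to E the amounts of EB influence that the beams displaying the
    eight surrounding pixels have on the display of E."""
    e = neighbourhood[4]
    others = neighbourhood[:4] + neighbourhood[5:]
    influence = sum(eb_coefficient(v) * v for v in others)
    return e + influence
```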
In
The EB function unit 250 determines the pixel value, obtained after the EB emulation process, of the pixel E(x, y) by assuming that, for example, as illustrated in
That is, the EB function unit 250 is supplied with the image signal from the luminance correction unit 210 (
In the EB function unit 250, the pixel values of pixels constituting the image signal from the luminance correction unit 210 are supplied to the delay units 251, 253, and 258, the EB coefficient generation unit 260, and the product-sum operation unit 261 in raster scan order.
The delay unit 251 delays the pixel value from the luminance correction unit 210 by an amount corresponding to one line (horizontal line), and supplies a result to the delay unit 252. The delay unit 252 delays the pixel value from the delay unit 251 by an amount corresponding to one line, and supplies a result to the delay unit 254 and the product-sum operation unit 261.
The delay unit 254 delays the pixel value from the delay unit 252 by an amount corresponding to one pixel, and supplies a result to the delay unit 255 and the product-sum operation unit 261. The delay unit 255 delays the pixel value from the delay unit 254 by an amount corresponding to one pixel, and supplies a result to the product-sum operation unit 261.
The delay unit 253 delays the pixel value from the luminance correction unit 210 by an amount corresponding to one line, and supplies a result to the delay unit 256 and the product-sum operation unit 261. The delay unit 256 delays the pixel value from the delay unit 253 by an amount corresponding to one pixel, and supplies a result to the delay unit 257 and the product-sum operation unit 261. The delay unit 257 delays the pixel value from the delay unit 256 by an amount corresponding to one pixel, and supplies a result to the product-sum operation unit 261.
The delay unit 258 delays the pixel value from the luminance correction unit 210 by an amount corresponding to one pixel, and supplies a result to the delay unit 259 and the product-sum operation unit 261. The delay unit 259 delays the pixel value from the delay unit 258 by an amount corresponding to one pixel, and supplies a result to the product-sum operation unit 261.
The EB coefficient generation unit 260 generates an EB coefficient as described above for determining the amount of EB influence of this pixel value on adjacent pixel values on the basis of the pixel value from the luminance correction unit 210, and supplies the EB coefficient to the product-sum operation unit 261.
The product-sum operation unit 261 multiplies each of a total of eight pixel values, namely, the pixel value from the luminance correction unit 210 and the pixel values individually from the delay units 252 to 255 and 257 to 259, with the EB coefficient from the EB coefficient generation unit 260, thereby determining the amount of EB influence of the eight pixel values on the pixel value delayed by the delay unit 256. The product-sum operation unit 261 then adds this amount of EB influence to the pixel value from the delay unit 256, thereby determining and outputting the pixel value obtained after the EB emulation process for the pixel value from the delay unit 256.
Therefore, for example, if it is assumed that the pixel values A to I illustrated in
Further, the EB coefficient generation unit 260 and the product-sum operation unit 261 are supplied with the pixel value I supplied to the EB function unit 250.
Since the pixel values A to H have been supplied to the EB coefficient generation unit 260 before the pixel value I is supplied, in the EB coefficient generation unit 260, an EB coefficient for determining the amount of EB influence of each of the pixel values A to I on the adjacent pixel value has been generated and supplied to the product-sum operation unit 261.
The product-sum operation unit 261 multiplies each of the pixel values A to D and F to I with the corresponding EB coefficient from the EB coefficient generation unit 260 to thereby determine the amount of EB influence of each of the pixel values A to D and F to I on the pixel value E, and adds these amounts to the pixel value E from the delay unit 256. The resulting addition value is output as the pixel value obtained after the EB emulation process for the pixel value E from the delay unit 256.
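The delay-unit arrangement above simply realizes a 3×3 window over the image in raster order. A sketch over a whole image, with the one-line and one-pixel delays replaced by index arithmetic and `coeff(v)` standing in for the EB coefficient generation unit 260, is:

```python
def eb_emulate_image(img, coeff):
    """img: 2-D list of pixel values; coeff(v): assumed interface mapping a
    neighbour's pixel value to its EB coefficient.  Each output pixel is the
    original value plus the EB influences of its 3x3 neighbours."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            influence = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx == dy == 0:
                        continue  # the pixel of interest itself
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:  # image border handling
                        v = img[ny][nx]
                        influence += coeff(v) * v
            out[y][x] = img[y][x] + influence
    return out
```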
Next,
Note that in the figure, portions corresponding to those in the case of
That is, the EB processing unit 220 of
In the EB processing unit 220 of
Further, the selector 271 is also supplied with an image signal from the selector 272.
The selector 271 selects either the image signal from the luminance correction unit 210 or the image signal from the selector 272, and supplies the selected one to the EB function unit 250.
The selector 272 is supplied with the image signal obtained after the EB emulation process from the EB function unit 250.
The selector 272 outputs the image signal from the EB function unit 250 as a final image signal obtained after the EB emulation process or supplies it to the selector 271.
In the EB processing unit 220 constructed as above, the selector 271 first selects the image signal from the luminance correction unit 210, and supplies it to the EB function unit 250.
The EB function unit 250 subjects the image signal from the selector 271 to an EB emulation process, and supplies a result to the selector 272.
The selector 272 supplies the image signal from the EB function unit 250 to the selector 271.
The selector 271 selects the image signal from the selector 272, and supplies it to the EB function unit 250.
As above, in the EB function unit 250, after the image signal from the luminance correction unit 210 is repeatedly subjected to the EB emulation process a predetermined number of times, the selector 272 outputs the image signal from the EB function unit 250 as a final image signal obtained after the EB emulation process.
As above, the EB emulation process can be recursively performed.
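The selector arrangement amounts to the following loop, where `eb_pass` stands for one application of the EB emulation process by the EB function unit 250:

```python
def eb_emulation_recursive(image, eb_pass, n_times):
    # Feed the EB function unit's output back to its input a predetermined
    # number of times, then output the final image signal.
    for _ in range(n_times):
        image = eb_pass(image)
    return image
```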
Note in
Next,
In
The control unit 281 controls the level shift unit 282 and the gain adjustment unit 283 on the basis of the setting value of the color temperature represented by the control signal from the display color temperature compensation control unit 40.
The level shift unit 282 shifts (adds an offset to) the level of the color signals R, G, and B from the VM processing unit 34 according to the control from the control unit 281 (corresponding to the DC bias in a CRT display apparatus), and supplies the resulting color signals R, G, and B to the gain adjustment unit 283.
The gain adjustment unit 283 performs adjustment of the gain of the color signals R, G, and B from the level shift unit 282 according to the control from the control unit 281, and outputs resulting color signals R, G, and B as color signals R, G, and B obtained after the color temperature compensation process.
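In outline, the level shift unit 282 and the gain adjustment unit 283 apply a per-channel level shift followed by a per-channel gain; the shift and gain values would be set by the control unit 281 from the color temperature setting (the values used in the sketch below are placeholders, not values from this description):

```python
def color_temperature_compensation(r, g, b, shifts, gains):
    # Level shift (DC offset) per colour signal, then gain adjustment,
    # both chosen by the controller from the colour-temperature setting.
    shifted = [c + s for c, s in zip((r, g, b), shifts)]
    return tuple(c * k for c, k in zip(shifted, gains))
```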
Note that any other method, for example, the method described in Japanese Unexamined Patent Application Publication No. 08-163582 or 2002-232905, can be adopted as a method of the color temperature compensation process.
Note that in the figure, portions corresponding to those of the VM processing unit 34 of
That is, the VM processing unit 34 of
In
That is, the luminance correction unit 310 is supplied with the image signal from the ABL processing unit 33 (
The delay timing adjustment unit 311 delays the image signal from the ABL processing unit 33 by an amount of time corresponding to the amount of time required for the processes performed in the differentiating circuit 312, the threshold processing unit 313, and the waveform shaping processing unit 314, and supplies a result to the multiplying circuit 315.
On the other hand, the differentiating circuit 312 performs first-order differentiation of the image signal from the ABL processing unit 33 to thereby detect an edge portion of this image signal, and supplies the differentiated value (differentiated value of the first-order differentiation) of this edge portion to the threshold processing unit 313.
The threshold processing unit 313 compares the absolute value of the differentiated value from the differentiating circuit 312 with a predetermined threshold value, and supplies only a differentiated value of which the absolute value is greater than the predetermined threshold value to the waveform shaping processing unit 314 to limit the implementation of luminance correction for the edge portion of which the absolute value of the differentiated value is not greater than the predetermined threshold value.
On the basis of the differentiated value from the threshold processing unit 313, the waveform shaping processing unit 314 calculates a VM coefficient having an average value of 1.0, which performs luminance correction when multiplied with the pixel value of the edge portion, and supplies the VM coefficient to the multiplying circuit 315.
The multiplying circuit 315 multiplies the pixel value of the edge portion in the image signal supplied from the delay timing adjustment unit 311 with the VM coefficient supplied from the waveform shaping processing unit 314 to thereby perform luminance correction of this edge portion, and supplies a result to the EB processing unit 220 (
Note that the VM coefficient to be calculated in the waveform shaping processing unit 314 can be adjusted in accordance with, for example, a user operation so as to allow the degree of the luminance correction of the edge portion to meet the user preference.
Further, each of the threshold processing unit 313 and the waveform shaping processing unit 314 sets an operation condition according to the VM control signal supplied from the VM control unit 39 (
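The luminance correction path of the VM processing unit 34 can be sketched one-dimensionally as follows. The `boost` parameter is an assumed stand-in for the waveform shaping process, and the sketch omits the shaping that gives the VM coefficient an average value of 1.0 across the edge:

```python
def vm_luminance_correction(line, threshold, boost):
    """line: pixel values along one horizontal line."""
    out = list(line)
    for i in range(1, len(line)):
        diff = line[i] - line[i - 1]          # first-order differentiation
        if abs(diff) > threshold:             # threshold processing unit
            out[i] = line[i] * (1.0 + boost)  # multiply edge pixel by VM coefficient
    return out
```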
That is, part A of
In part A of
Part B of
In part B of
Part C of
In part C of
Part D of
In the image signal of part D of
Part E of
In the image signal of part E of
Note that the VM coefficients of
Next,
In
Here, DRC will be explained.
DRC is a process of converting (mapping) a first image signal into a second image signal, and various signal processes can be performed depending on the definition of the first and second image signals.
That is, for example, if the first image signal is set as a low spatial resolution image signal and the second image signal is set as a high spatial resolution image signal, DRC can be said to be a spatial resolution creation (improvement) process for improving the spatial resolution.
Also, for example, if the first image signal is set as a low S/N (Signal/Noise) image signal and the second image signal is set as a high S/N image signal, DRC can be said to be a noise removal process for removing noise.
Further, for example, if the first image signal is set as an image signal having a predetermined number of pixels (size) and the second image signal is set as an image signal having a larger or smaller number of pixels than the first image signal, DRC can be said to be a resizing process for resizing (increasing or decreasing the scale of) an image.
Also, for example, if the first image signal is set as a low temporal resolution image signal and the second image signal is set as a high temporal resolution image signal, DRC can be said to be a temporal resolution creation (improvement) process for improving the temporal resolution.
Further, for example, if the first image signal is set as a decoded image signal obtained by decoding an image signal encoded in units of blocks, as in MPEG (Moving Picture Experts Group) encoding, and the second image signal is set as an image signal that has not been encoded, DRC can be said to be a distortion removal process for removing various distortions, such as block distortion, caused by MPEG encoding and decoding.
Note that in the spatial resolution creation process, when a first image signal that is a low spatial resolution image signal is converted into a second image signal that is a high spatial resolution image signal, the second image signal can be set as an image signal having the same number of pixels as the first image signal or an image signal having a larger number of pixels than the first image signal. In a case where the second image signal is set as an image signal having a larger number of pixels than the first image signal, the spatial resolution creation process is a process for improving the spatial resolution and is also a resizing process for increasing the image size (the number of pixels).
As above, according to DRC, various signal processes can be realized depending on how first and second image signals are defined.
In DRC, predictive computation is performed using the tap coefficient of the class obtained by class-classifying a pixel of interest, to which attention is directed, within the second image signal into one of a plurality of classes, and using (the pixel values of) a plurality of pixels of the first image signal that are selected relative to the pixel of interest. Thereby, (the prediction value of) the pixel value of the pixel of interest is determined.
In
The tap selection unit 321 uses an image signal obtained by performing luminance correction of the first image signal from the ABL processing unit 33 as the second image signal and sequentially uses the pixels constituting this second image signal as pixels of interest to select, as prediction taps, some of (the pixel values of) the pixels constituting the first image signal which are used for predicting (the pixel values of) the pixels of interest.
Specifically, the tap selection unit 321 selects, as prediction taps, a plurality of pixels of the first image signal which are spatially or temporally located near the time-space position of a pixel of interest.
Furthermore, the tap selection unit 321 selects, as class taps, some of the pixels constituting the first image signal which are used for class classification for separating the pixel of interest into one of a plurality of classes. That is, the tap selection unit 321 selects class taps in a manner similar to that in which the tap selection unit 321 selects prediction taps.
Note that the prediction taps and the class taps may have the same tap configuration (positional relationship with respect to the pixel of interest) or may have different tap configurations.
The prediction taps obtained by the tap selection unit 321 are supplied to the prediction unit 327, and the class taps obtained by the tap selection unit 321 are supplied to a class classification unit 322.
The class classification unit 322 is constructed from a class prediction coefficient storage unit 323, a prediction unit 324, and a class decision unit 325, and performs class classification of the pixel of interest on the basis of the class taps from the tap selection unit 321 and supplies the class code corresponding to a resulting class to the tap coefficient storage unit 326.
Here, the details of the class classification performed in the class classification unit 322 will be described below.
The tap coefficient storage unit 326 stores a tap coefficient for each class, which is determined by learning described below, as a VM coefficient, and further outputs the tap coefficient (tap coefficient of the class represented by the class code supplied from the class classification unit 322) stored at the address corresponding to the class code supplied from the class classification unit 322 among the stored tap coefficients. This tap coefficient is supplied to the prediction unit 327.
Here, the term tap coefficient is equivalent to a coefficient to be multiplied with input data at a so-called tap of a digital filter.
The prediction unit 327 obtains the prediction taps output from the tap selection unit 321 and the tap coefficient output from the tap coefficient storage unit 326, and performs predetermined predictive computation for determining a prediction value of the true value of the pixel of interest using the prediction taps and the tap coefficient. Thereby, the prediction unit 327 determines and outputs (the prediction value of) the pixel value of the pixel of interest, that is, the pixel values of the pixels constituting the second image signal, i.e., the pixel values obtained after the luminance correction.
Note that the class prediction coefficient storage unit 323 and the prediction unit 324, which constitute the class classification unit 322, and the tap coefficient storage unit 326 each perform the setting of an operation condition or a necessary selection according to the VM control signal supplied from the VM control unit 39 (
Next, the learning of tap coefficients for individual classes, which are stored in the tap coefficient storage unit 326 of
The tap coefficients used for predetermined predictive computation of DRC are determined by learning using multiple image signals as learning image signals.
That is, for example, assume now that an image signal before luminance correction is used as the first image signal, and that an image signal after luminance correction, which is obtained by performing luminance correction on the first image signal, is used as the second image signal. In DRC, prediction taps are selected from the first image signal, and the pixel value of a pixel of interest of the second image signal is determined (predicted) using those prediction taps and tap coefficients by predetermined predictive computation.
As the predetermined predictive computation, if, for example, linear first-order predictive computation is adopted, a pixel value y of the second image signal can be determined by the following linear first-order equation.
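The equation itself is not reproduced in this text; from the description of its terms that follows (xn the pixel value of the n-th prediction-tap pixel, wn the n-th tap coefficient, N taps), Equation (3) is the linear first-order combination:

```latex
y = \sum_{n=1}^{N} w_n x_n \qquad (3)
```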
In this regard, in Equation (3), xn represents the pixel value of the n-th pixel (hereinafter referred to as an uncorrected pixel, as necessary) of the first image signal constituting the prediction taps for the pixel of interest y of the second image signal, and wn represents the n-th tap coefficient to be multiplied with (the pixel value of) the n-th uncorrected pixel. Note that in Equation (3), the prediction taps are constituted by N uncorrected pixels x1, x2, . . . xN.
Here, the pixel value y of the pixel of interest of the second image signal can also be determined by a second- or higher-order equation rather than the linear first-order equation given in Equation (3).
Now, if the true value of the pixel value of the k-th sample of the second image signal is represented by yk and if the prediction value of the true value yk thereof, which is obtained by Equation (3), is represented by yk′, a prediction error ek therebetween is expressed by the following equation.
[Math. 4]
ek = yk − yk′ (4)
Now, since the prediction value yk′ in Equation (4) is determined according to Equation (3), replacing yk′ in Equation (4) according to Equation (3) yields the following equation.
In this regard, in Equation (5), xn,k represents the n-th uncorrected pixel constituting the prediction taps for the pixel of the k-th sample of the second image signal.
The tap coefficient wn that allows the prediction error ek in Equation (5) (or Equation (4)) to be 0 becomes optimum to predict the pixel of the second image signal. In general, however, it is difficult to determine the tap coefficient wn for all the pixels of the second image signal.
Accordingly, for example, if the least squares method is adopted as the standard indicating that the tap coefficient wn is optimum, the optimum tap coefficient wn can be determined by minimizing the total sum E of square errors expressed by the following equation.
In this regard, in Equation (6), K represents the number of samples (the total number of learning samples) of sets of the pixel yk of the second image signal, and the uncorrected pixels x1,k, x2,k, . . . , xN,k constituting the prediction taps for this pixel yk of the second image signal.
The minimum value (local minimum value) of the total sum E of square errors in Equation (6) is given by wn that makes the value obtained by partially differentiating the total sum E with respect to the tap coefficient wn equal to 0, as given in Equation (7).
Then, partially differentiating Equation (5) described above with respect to the tap coefficient wn yields the following equations.
The equations below are obtained from Equations (7) and (8).
By substituting Equation (5) into ek in Equation (9), Equation (9) can be expressed by normal equations given in Equation (10).
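Equation (10) is likewise not reproduced here; from the description of its left-side matrix (products xn,k xn′,k summed over the K learning samples) and its right-side vector (products xn,k yk summed over the K learning samples) given in the learning process, the normal equations can be written as:

```latex
\sum_{n=1}^{N} \left( \sum_{k=1}^{K} x_{n',k}\, x_{n,k} \right) w_n
  \;=\; \sum_{k=1}^{K} x_{n',k}\, y_k ,
  \qquad n' = 1, 2, \ldots, N \qquad (10)
```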
The normal equations in Equation (10) can be solved for the tap coefficient wn by using, for example, a sweeping-out method (Gauss-Jordan elimination) or the like.
By formulating and solving the normal equations in Equation (10) for each class, the optimum tap coefficient (here, tap coefficient that minimizes the total sum E of square errors) wn can be determined for each class.
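For one class, the learning amounts to accumulating the sums in Equation (10) over the learning samples and solving the resulting normal equations. A minimal sketch, using Gauss-Jordan elimination as the sweeping-out method, is:

```python
def learn_tap_coefficients(samples, n_taps):
    """samples: iterable of (prediction taps, teacher pixel value) pairs for
    one class.  Accumulates the matrix sum_k x_{n,k} x_{n',k} and vector
    sum_k x_{n,k} y_k, then solves the normal equations."""
    A = [[0.0] * n_taps for _ in range(n_taps)]
    b = [0.0] * n_taps
    for taps, y in samples:                 # taps: student data, y: teacher data
        for n in range(n_taps):
            for m in range(n_taps):
                A[n][m] += taps[n] * taps[m]
            b[n] += taps[n] * y
    # Gauss-Jordan elimination (no pivoting; a sketch, not production code)
    for i in range(n_taps):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for r in range(n_taps):
            if r != i:
                f = A[r][i]
                A[r] = [vr - f * vi for vr, vi in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b                                # tap coefficients w_1 .. w_N
```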
As above, learning for determining the tap coefficient wn can be performed by, for example, a computer (
Next, a process of learning (learning process) for determining the tap coefficient wn, which is performed by the computer, will be explained with reference to a flowchart of
First, in step S21, the computer generates teacher data equivalent to the second image signal and student data equivalent to the first image signal from a learning image signal prepared in advance for learning. The process proceeds to step S22.
That is, the computer generates a mapped pixel value of mapping as the predictive computation given by Equation (3), i.e., a corrected pixel value obtained after luminance correction, as the teacher data equivalent to the second image signal, which serves as a teacher (true value) of the learning of tap coefficients, from the learning image signal.
Furthermore, the computer generates a pixel value to be converted by mapping as the predictive computation given by Equation (3), as the student data equivalent to the first image signal, which serves as a student of the learning of tap coefficients, from the learning image signal. Herein, for example, the computer directly sets the learning image signal as the student data equivalent to the first image signal.
In step S22, the computer selects, as a pixel of interest, teacher data unselected as a pixel of interest. The process proceeds to step S23. In step S23, like the tap selection unit 321 of
In step S24, the computer performs class classification of the pixel of interest on the basis of the class taps for the pixel of interest in a manner similar to that of the class classification unit 322 of
In step S25, the computer performs, for the class of the pixel of interest, the additions given in Equation (10) using the pixel of interest and the student data constituting the prediction taps selected for the pixel of interest. The process proceeds to step S26.
That is, the computer performs computation equivalent to the multiplication (xn,kxn′,k) of student data items in the matrix in the left side of Equation (10) and the summation (Σ), for the class of the pixel of interest, using a prediction tap (student data) xn,k.
Furthermore, the computer performs computation equivalent to the multiplication (xn,kyk) of the student data xn,k and teacher data yk in the vector in the right side of Equation (10) and the summation (Σ), for the class of the pixel of interest, using the prediction tap (student data) xn,k and the teacher data yk.
That is, the computer stores in a memory incorporated therein (for example, the RAM 104 of
In step S26, the computer determines whether or not there remains teacher data unselected as a pixel of interest. In a case where it is determined in step S26 that there remains teacher data unselected as a pixel of interest, the process returns to step S22 and subsequently a similar process is repeated.
Further, in a case where it is determined in step S26 that there remains no teacher data unselected as a pixel of interest, the process proceeds to step S27, in which the computer solves the normal equations for each class, which are constituted by the matrix in the left side and the vector in the right side of Equation (10) for each class obtained by the preceding processing of steps S22 to S26, thereby determining and outputting the tap coefficient wn for each class. The process ends.
The tap coefficients wn for the individual classes determined as above are stored in the tap coefficient storage unit 326 of
Next, the class classification performed in the class classification unit 322 of
In the class classification unit 322, the class taps for the pixel of interest from the tap selection unit 321 are supplied to the prediction unit 324 and the class decision unit 325.
The prediction unit 324 predicts the pixel value of one pixel among the plurality of pixels constituting the class taps from the tap selection unit 321 using the pixel values of the other pixels and the class prediction coefficients stored in the class prediction coefficient storage unit 323, and supplies the predicted value to the class decision unit 325.
That is, the class prediction coefficient storage unit 323 stores a class prediction coefficient used for predicting the pixel value of one pixel among a plurality of pixels constituting class taps for each class.
Specifically, assume that the class taps for the pixel of interest are constituted by the pixel values x1, x2, . . . , xM+1 of (M+1) pixels, and that the prediction unit 324 regards the (M+1)-th pixel value xM+1 as the object to be predicted and predicts it using the other M pixel values x1, x2, . . . , xM. In this case, the class prediction coefficient storage unit 323 stores, for the class #j, M class prediction coefficients cj,1, cj,2, . . . , cj,M to be multiplied with the M pixel values x1, x2, . . . , xM, respectively.
In this case, the prediction unit 324 determines the prediction value x′j,M+1 of the pixel value xM+1, which is the object to be predicted, for the class #j according to, for example, the equation x′j,M+1 = x1cj,1 + x2cj,2 + . . . + xMcj,M.
For example, if the pixel of interest is to be classified into one of J classes #1 to #J by class classification, the prediction unit 324 determines the prediction values x′1,M+1 to x′J,M+1 for the classes #1 to #J, and supplies them to the class decision unit 325. The class decision unit 325 compares each of the prediction values x′1,M+1 to x′J,M+1 from the prediction unit 324 with the (M+1)-th pixel value (true value) xM+1, which is the object to be predicted, of the class taps for the pixel of interest from the tap selection unit 321. The class decision unit 325 then decides, as the class of the pixel of interest, the class #j of the class prediction coefficients cj,1, cj,2, . . . , cj,M used for determining the prediction value x′j,M+1 having the minimum prediction error with respect to the pixel value xM+1, and supplies the class code representing this class #j to the tap coefficient storage unit 326 (
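The class decision can be sketched as follows, with `class_prediction_coefficients` holding the M coefficients cj,1 to cj,M for each of the J classes:

```python
def classify(class_taps, class_prediction_coefficients):
    """class_taps: the (M+1) pixel values x1..xM, xM+1; the last value is the
    object to be predicted.  Returns the class #j whose coefficients predict
    xM+1 from x1..xM with the minimum prediction error."""
    *xs, target = class_taps
    best_class, best_err = None, None
    for j, coeffs in enumerate(class_prediction_coefficients, start=1):
        pred = sum(x * c for x, c in zip(xs, coeffs))  # x'j,M+1
        err = abs(pred - target)
        if best_err is None or err < best_err:
            best_class, best_err = j, err
    return best_class
```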
Here, the class prediction coefficient cj,m stored in the class prediction coefficient storage unit 323 is determined by learning.
The learning for determining the class prediction coefficient cj,m can be performed by, for example, a computer (
The process of the learning (learning process) for determining the class prediction coefficient cj,m, which is performed by the computer, will be explained with reference to a flowchart of
In step S31, for example, similarly to step S21 of
In step S32, the computer initializes a variable j representing a class to 1. The process proceeds to step S33.
In step S33, the computer selects all the class taps obtained in step S31 as class taps for learning (learning class taps). The process proceeds to step S34.
In step S34, similarly to the learning of the tap coefficients of
In step S35, the computer solves the normal equations obtained in step S34 to determine the class prediction coefficient cj,m for the class #j (m=1, 2, . . . , M). The process proceeds to step S36.
In step S36, the computer determines whether or not the variable j is equal to the total number J of classes. In a case where it is determined that they are not equal, the process proceeds to step S37.
In step S37, the computer increments the variable j by 1. The process proceeds to step S38, in which the computer determines, for the learning class taps, the prediction error when predicting the pixel value xM+1, which is the object to be predicted, by using the class prediction coefficient cj,m obtained in step S35. The process proceeds to step S39.
In step S39, the computer selects a learning class tap for which the prediction error determined in step S38 is greater than or equal to a predetermined threshold value as a new learning class tap.
Then, the process returns from step S39 to step S34, and subsequently, the class prediction coefficient cj,m for the class #j is determined using the new learning class tap in a manner similar to that described above.
On the other hand, in a case where it is determined in step S36 that the variable j is equal to the total number J of classes, that is, in a case where the class prediction coefficients c1,m to cJ,m have been determined for all the J classes #1 to #J, the process ends.
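The learning procedure of steps S31 to S39 can be sketched as follows. This is a hedged, illustrative Python sketch under the assumption that solving the normal equations amounts to an ordinary least-squares fit; the function name and the fallback for an empty survivor set are choices made here, not taken from the specification.

```python
import numpy as np

def learn_class_prediction_coefficients(taps, J, threshold):
    """taps: (N, M+1) array of learning class taps, the last column
    being the true value x_{M+1}. Returns a list of J coefficient
    vectors c_j (each of length M), one per class."""
    coeffs = []
    current = taps  # step S33: start from all learning class taps
    for j in range(1, J + 1):
        x, t = current[:, :-1], current[:, -1]
        # Steps S34-S35: solve the normal equations (least squares)
        c_j, *_ = np.linalg.lstsq(x, t, rcond=None)
        coeffs.append(c_j)
        if j == J:  # step S36: coefficients obtained for all J classes
            break
        # Step S38: prediction error using the coefficients just obtained
        errors = np.abs(t - x @ c_j)
        # Step S39: taps with error >= threshold become the new learning taps
        survivors = current[errors >= threshold]
        if len(survivors) == 0:
            survivors = current  # fallback; the spec assumes some remain
        current = survivors
    return coeffs
```

Each successive class thus fits only the taps that the previous classes predicted poorly, which is what drives the classes apart.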
As above, in the image signal processing device of
According to the image signal processing device of
Further, according to the image signal processing device of
According to the image signal processing device of
Further, according to the image signal processing device of
Next, at least a portion of the series of processes described above can be performed by dedicated hardware or can be performed by software. In a case where the series of processes is performed by software, a program constituting the software is installed into a general-purpose computer or the like.
Accordingly,
The program can be recorded in advance on a hard disk 105 or a ROM 103 serving as a recording medium incorporated in the computer.
Alternatively, the program can be temporarily or permanently stored (recorded) on a removable recording medium 111 such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, or a semiconductor memory. The removable recording medium 111 can be provided as so-called packaged software.
Note that the program can be, as well as installed into the computer from the removable recording medium 111 as described above, transferred to the computer from a download site in a wireless fashion via a satellite for digital satellite broadcasting or transferred to the computer in a wired fashion via a network such as a LAN (Local Area Network) or the Internet. In the computer, the thus transferred program can be received at a communication unit 108 and can be installed into the hard disk 105 incorporated therein.
The computer incorporates therein a CPU (Central Processing Unit) 102. The CPU 102 is connected to an input/output interface 110 via a bus 101. When an instruction is input from a user through an operation or the like of an input unit 107 constructed with a keyboard, a mouse, a microphone, and the like via the input/output interface 110, the CPU 102 executes a program stored in the ROM (Read Only Memory) 103 according to the instruction. Alternatively, the CPU 102 loads onto a RAM (Random Access Memory) 104 a program stored in the hard disk 105, a program transferred from a satellite or a network and received at the communication unit 108 and installed into the hard disk 105, or a program read from the removable recording medium 111 attached to a drive 109 and installed into the hard disk 105, and executes the program. Thereby, the CPU 102 performs the processes according to the flowcharts described above or the processes performed by the structure of the block diagrams described above. Then, the CPU 102 causes the processing result to be, as necessary, for example, output from an output unit 106 constructed with an LCD (Liquid Crystal Display), a speaker, and the like via the input/output interface 110, sent from the communication unit 108, or recorded onto the hard disk 105, or the like.
Here, in this specification, processing steps describing a program for causing a computer to perform various processes may not necessarily be processed in time sequence in accordance with the order described in the flowcharts, and include processes executed in parallel or individually (for example, parallel processes or object-based processes).
Further, the program may be processed by one computer or processed in a distributed fashion by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed thereby.
Note that embodiments of the present invention are not limited to the embodiments described above, and a variety of modifications can be made without departing from the scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
2006-340080 | Dec 2006 | JP | national |
2007-26162 | Feb 2007 | JP | national |
2007-261601 | Oct 2007 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP07/74260 | 12/18/2007 | WO | 00 | 6/3/2009 |