The present invention relates to an image processing device for applying image processing such as edge enhancement to image data inputted.
Conventionally, there is known an edge enhancement device that extracts an edge of an image from input image data using an edge filter, applies gain adjustment to this edge, and then adds the edge to the original image data, thereby enhancing the edge of the image (see, for example, Patent Document 1). With this edge enhancement device, it is possible to adjust the degree of edge enhancement of an image by changing the degree of gain adjustment applied to the extracted edge.
[Patent Document 1] Japanese Patent Laid-Open No. 2001-292325 (pages 3 to 11 and FIGS. 1 to 11)
In the edge enhancement device disclosed in Patent Document 1 described above, it is necessary to apply three kinds of arithmetic operations, namely, (1) extraction of an edge, (2) gain adjustment, and (3) addition of an edge portion, to input image data. Thus, there is a problem in that the processing is complicated.
The invention has been devised in view of such a point and it is an object of the invention to provide an image processing device that is capable of simplifying processing.
In order to solve the problem, an image processing device according to the invention performs image quality adjustment processing for adjusting an image quality of an image constituted by a plurality of pixels arranged along a plurality of scanning lines, and includes three line memories that receive pixel data corresponding to the pixels in a scanning order and store pixel data of three scanning lines adjacent to one another, a pixel-data readout unit that reads out the pixel data of three pixels in continuous positions from each of the three line memories, i.e., nine pixels in total, a pixel-data storing unit that stores the pixel data of the nine pixels read out by the pixel-data readout unit, and a pixel-data calculating unit that sets the pixel arranged in the center among the nine pixels as a target pixel and calculates new pixel data, after image quality adjustment, corresponding to the target pixel using the pixel data of the nine pixels stored in the pixel-data storing unit. This makes it possible to perform image quality adjustment processing such as edge enhancement simply by extracting nine pixels including a target pixel and calculating new pixel data corresponding to the target pixel using the pixel data of the nine pixels. This makes it unnecessary to perform complicated processing such as extracting an edge, performing gain adjustment, and then adding the edge to the original pixel data, and makes it possible to simplify processing.
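The flow just described can be pictured with a short sketch. The following Python fragment is a minimal, non-authoritative illustration; the function name and the placeholder kernel are assumptions and not part of the invention. It reads the 3×3 block around a target pixel from three line buffers and computes the new pixel data in a single weighted-sum pass.

```python
# Minimal sketch of the claimed flow: three line buffers hold three adjacent
# scanning lines; a 3x3 block around the target pixel is read out and a new
# value for the target (center) pixel is computed in a single pass.
# The kernel below is a placeholder; the embodiment derives its weights from
# the adjustment parameters x, y, z described later.

def adjust_target_pixel(lines, col, kernel):
    """lines: three lists (top, center, bottom) of 8-bit pixel data.
    col: horizontal position of the target pixel (not at an edge).
    kernel: 3x3 list of weights whose sum is non-zero."""
    block = [lines[r][col - 1:col + 2] for r in range(3)]    # 3x3 neighborhood
    acc = sum(kernel[r][c] * block[r][c] for r in range(3) for c in range(3))
    norm = sum(sum(row) for row in kernel)                   # keep average level
    return min(255, max(0, round(acc / norm)))

# Example: a mild sharpening kernel (negative weights on the neighbors).
kernel = [[0, -1, 0],
          [-1, 8, -1],
          [0, -1, 0]]
lines = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(adjust_target_pixel(lines, 1, kernel))  # center pixel is enhanced
```

With negative off-center weights the target pixel is enhanced relative to its neighbors; with positive off-center weights the block is averaged, which corresponds to the enhance and blurring effects discussed below.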
It is desirable that the image processing device further includes a pixel-data writing unit that overwrites, every time the scanning line is updated, new pixel data corresponding to the updated scanning line into the line memory, among the three line memories, in which the pixel data having the earliest scanning order is stored. This makes it possible to store the pixel data of the necessary three scanning lines, every time the scanning line is changed, simply by storing the image data corresponding to the next scanning line in one line memory after another.
It is desirable that the image processing device further includes a switch circuit that shifts, every time the scanning line is updated, a correspondence relation between the pixel data of the nine pixels read out from the pixel-data storing unit and the three line memories. This makes it possible to change, every time the scanning line is updated, a relation between the three line memories and the nine pixels read out and always keep a relation between an order of the scanning lines and the nine pixels read out the same.
It is desirable that the switch circuit has three selectors with three inputs that selectively output pixel data read out from the three line memories, and that the three selectors switch, every time the scanning line is updated, the line memories to be selected, in correspondence with the updated scanning line, so that the selected line memories do not overlap one another. This makes it easy to switch, in order, the relation between the order of the scanning lines and the nine pixels extracted.
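A minimal software model of the write rotation and selector switching described in the two preceding paragraphs might look as follows; the function names and the modulo-3 indexing are illustrative assumptions, whereas the actual device realizes this with the line memories and a switch circuit of three selectors.

```python
# Illustrative model: the newest scanning line always overwrites the memory
# holding the oldest data, and the selector setting is derived from the line
# count so that readout always yields (upper, center, lower) in that order.

line_memories = [None, None, None]      # stand-ins for the three line memories

def write_new_line(line_index, pixel_data):
    """Store the line with index line_index, overwriting the oldest line."""
    line_memories[line_index % 3] = pixel_data

def readout_order(line_index):
    """Selector setting once lines line_index-2 .. line_index are stored:
    indices of the memories holding the upper, center and lower lines."""
    oldest = (line_index + 1) % 3       # memory that will be overwritten next
    return [(oldest + k) % 3 for k in range(3)]

# After lines 0, 1, 2:              readout_order(2) == [0, 1, 2]
# After line 3 overwrites memory 0: readout_order(3) == [1, 2, 0]
```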
It is desirable that, when pixel data of two first pixels adjacent to a target pixel along an identical scanning line are D and F, the pixel-data calculating unit applies image quality adjustment processing to the target pixel by adding a value proportional to an added value of these pixel data D and F to pixel data E of the target pixel. This makes it possible to perform image quality adjustment processing with an influence due to the pixels adjacent to the target pixel in the horizontal direction reflected on the pixel data of the target pixel.
It is desirable that, when pixel data of two second pixels that correspond to two scanning lines adjacent to the target pixel and are adjacent to the target pixel in the vertical direction with respect to the scanning lines are B and H, the pixel-data calculating unit applies image quality adjustment processing to the target pixel by adding a value proportional to an added value of these pixel data B and H to pixel data E of the target pixel. This makes it possible to perform image quality adjustment processing with an influence due to the pixels adjacent to the target pixel in the vertical direction reflected on the pixel data of the target pixel.
It is desirable that, when pixel data of four third pixels that correspond to two scanning lines adjacent to the target pixel and are adjacent to the target pixel in oblique directions are A, C, G, and I, the pixel-data calculating unit applies image quality adjustment processing to the target pixel by adding a value proportional to an added value of these pixel data A, C, G, and I to pixel data E of the target pixel. This makes it possible to perform image quality adjustment processing with an influence due to the pixels adjacent to the target pixel in the oblique directions reflected on the pixel data of the target pixel.
It is desirable that enhance processing along a direction in which the pixels to which the added value is added and the target pixel are arranged abreast is performed by setting a proportional constant in calculating the value proportional to the added value to a negative value. By setting the proportional constant to a negative value, it is possible to realize an enhance effect for enhancing an edge portion included in an image.
It is desirable that blurring processing along a direction in which the pixels to which the added value is added and the target pixel are arranged abreast is performed by setting a proportional constant in calculating the value proportional to the added value to a positive value. By setting the proportional constant to a positive value, it is possible to realize a blurring effect for averaging an edge portion included in an image.
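Taken together, the three additions described above, and the role of the sign of the proportional constant, can be summarized by the following sketch. It is illustrative only; the constants kx, ky, and kz are placeholder names, and the actual embodiment derives the proportional values from the adjustment parameters “x”, “y”, and “z” and a normalization described later.

```python
# Illustrative summary: a value proportional to each pair (or quadruple) of
# neighboring pixel data is added to the pixel data E of the target pixel.
# Negative proportional constants enhance edges along the corresponding
# direction; positive constants blur (average) them.

def adjusted_pixel(E, D, F, B, H, A, C, G, I, kx, ky, kz):
    """D, F: horizontal neighbors; B, H: vertical neighbors;
    A, C, G, I: oblique neighbors; kx, ky, kz: proportional constants."""
    out = E + kx * (D + F) + ky * (B + H) + kz * (A + C + G + I)
    return min(255, max(0, round(out)))     # keep 8-bit pixel data in range

# kx = ky = kz = -0.1 -> enhance processing in all three directions
# kx = ky = kz = +0.1 -> blurring processing in all three directions
```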
It is desirable that the proportional constant is an adjustment parameter, a value of which is changeable, and the image processing device further includes an adjustment-parameter setting unit that variably sets the value of the adjustment parameter. This makes it possible to variably set degrees of the enhance effect and the blurring effect.
It is desirable that the pixel-data calculating unit adjusts a value of pixel data for image quality adjustment according to the value of the adjustment parameter set by the adjustment-parameter setting unit. This makes it possible to easily obtain, simply by changing the value of the adjustment parameter, pixel data with degrees of the enhance effect and the blurring effect adjusted.
It is desirable that the pixel-data calculating unit multiplies pixel data of one pixel by a weighting coefficient indicated by an impulse response waveform indicating an influence of the one pixel on peripheral pixels around the one pixel and calculates new pixel data corresponding to the target pixel by associating an influence of the adjacent pixels on the target pixel with the one pixel. Also, it is desirable that the weighting coefficients that make the influence of the one pixel on the peripheral pixels different are individually set for a partial area of the peripheral pixels and for a remaining area other than the partial area by the impulse response waveform. This makes it possible to finely set a degree of the influence of the one pixel on the peripheral pixels arranged around the one pixel.
It is desirable that, as the weighting coefficient, a positive value is set corresponding to the partial area close to the one pixel and a negative value is set corresponding to the remaining area distant from the one pixel. This makes it possible to impart a negative area to the impulse response in the same manner as a general sampling function for performing interpolation processing among data, and to obtain a more natural image after image quality adjustment with the degree of the influence of the one pixel on the peripheral pixels accurately reflected thereon.
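As a rough illustration of such a weighting scheme, an impulse response can be made positive in the area close to the pixel and negative in the more distant area, and the weighting coefficients can then be read off that waveform. The waveform and sample distances below are invented for the sketch and are not those of the embodiment's figures.

```python
# Hedged sketch: an impulse response that is positive close to the pixel and
# negative farther away, similar in spirit to a sampling function with a
# negative lobe.  The shape and the sample distances are assumptions.

def impulse_response(d):
    """d: distance from the pixel, in units of the pixel pitch."""
    if d < 0.75:
        return 1.0 - d                       # positive area close to the pixel
    if d < 1.5:
        return -0.25 * (1.5 - d)             # small negative area farther away
    return 0.0

# A weighting coefficient for the near half-area of an adjacent pixel and one
# for the far half-area could, for instance, be sampled at these distances:
w_near = impulse_response(0.5)               # positive
w_far = impulse_response(1.0)                # negative
print(w_near, w_far)                         # 0.5 -0.125
```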
It is desirable that the image processing device further includes a weighting-coefficient setting unit that variably sets the weighting coefficient. This makes it possible to variably set a degree of image quality adjustment.
It is desirable that it is possible to individually set the impulse response waveform according to a relative positional relation of the peripheral pixels to the one pixel. This makes it possible to perform, when contents of an image have directivity (e.g., depending on a direction that an edge faces), image quality adjustment processing with the direction reflected thereon.
It is desirable that it is possible to individually set the weighting coefficient indicated by the impulse response waveform for a case in which the peripheral pixels are adjacent to the one pixel along a scanning line, a case in which the peripheral pixels are adjacent to the one pixel in the vertical direction with respect to the scanning line, and a case in which the peripheral pixels are adjacent to the one pixel in the oblique directions with respect to the scanning line. This makes it possible to adjust the degrees of enhancement and blurring depending on whether the direction in which the color and shade of the image change lies along the horizontal, vertical, or oblique direction.
An image processing device according to an embodiment to which the invention is applied will be hereinafter explained in detail.
When brightness data Y and color difference data Cb and Cr constituting the video data of the above format are serially inputted in a predetermined order, the serial-parallel conversion circuit 100 separates the brightness data Y and the color difference data Cb and Cr, and outputs the data in parallel. For example, the respective data are constituted by 8 bits. The timing adjusting circuit 200 adjusts output timing of the brightness data Y and the color difference data Cb and Cr outputted from the serial-parallel conversion circuit 100 in parallel.
The serial-parallel conversion circuit 100 extracts and separates the color difference data Cb, the brightness data Y, and the color difference data Cr at the rising timing of the clock CLK and outputs these data at different timings. The timing adjusting circuit 200 adjusts the output timing of the color difference data Cb and the brightness data Y to coincide with the output timing of the color difference data Cr. In this embodiment, the output timing of the color difference data Cb and the brightness data Y is adjusted to the output timing of the color difference data Cr. However, the output timing of each of the color difference data Cb and Cr and the brightness data Y may be adjusted to timing later than the output timing of the color difference data Cr.
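A simplified software model of the separation and timing alignment described above might look as follows. It assumes one Cb/Y/Cr triple per pixel and glosses over the 4:2:2 sub-sampling details of the actual ITU-R BT.601-5/656 stream; the function name is an assumption.

```python
# Simplified model: the serial stream is split into Cb, Y and Cr, and because
# Cr of a pixel arrives last, Cb and Y are in effect delayed so that the three
# components of one pixel are presented in parallel at the same time.

def serial_to_parallel(serial_words):
    """serial_words: flat list ordered Cb0, Y0, Cr0, Cb1, Y1, Cr1, ...
    Returns aligned lists (Y, Cb, Cr), one entry per pixel."""
    cb = serial_words[0::3]
    y = serial_words[1::3]
    cr = serial_words[2::3]
    return y, cb, cr

y, cb, cr = serial_to_parallel([128, 16, 128, 128, 17, 128])
print(list(zip(y, cb, cr)))    # [(16, 128, 128), (17, 128, 128)]
```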
The image-quality adjusting circuit 300 performs image processing for adjusting an image quality using the brightness data Y and the color difference data Cb and Cr outputted from the timing adjusting circuit 200. This image processing is performed individually for each of the brightness data Y and the color difference data Cb and Cr. The brightness data Y and the color difference data Cb and Cr after image quality adjustment are outputted in parallel. It is possible to change a degree of image quality adjustment (a degree of image quality enhancement or blurring) by changing a value of an adjustment parameter. Processing for setting the value of the adjustment parameter in a predetermined range is performed by the adjustment-parameter setting section 302. For example, when a user operates an operation unit including an operation switch and an operation dial, a signal indicating the contents of the operation is sent to the adjustment-parameter setting section 302. The adjustment-parameter setting section 302 sets, according to the contents of the operation by the user, an image quality adjustment parameter “x” concerning the horizontal direction (the scanning direction) of a video to be displayed, an image quality adjustment parameter “y” concerning the vertical direction of the video, and an image quality adjustment parameter “z” concerning the oblique directions of the video. Details of these three parameters “x”, “y”, and “z” will be described later.
The parallel-serial conversion circuit 400 generates video data of a format conforming to ITU-R.BT601-5/656 on the basis of the brightness data Y and the color difference data Cb and Cr after image quality adjustment outputted from the image-quality adjusting circuit 300 in parallel and outputs the video data. In this way, the image processing device 1 applies image quality adjustment processing to the video data inputted and outputs a video signal of the same format after image quality adjustment.
Details of the image-quality adjusting circuit 300 will be explained.
Each of the line memories 320, 322, and 324 stores the brightness data Y of one horizontal line inputted in a scanning order. For example, the brightness data Y of one line inputted first is stored in the line memory 320. The brightness data Y of one line inputted next is stored in the line memory 322. The brightness data Y of one line inputted next is stored in the line memory 324. When the brightness data Y of the fourth line is inputted after the brightness data Y of the three lines are inputted in this way, the brightness data Y of the fourth line is stored in the line memory 320. In this way, the brightness data Y of the latest three lines are always stored in these three line memories 320, 322, and 324.
The address generating circuit 326 generates a writing address and a readout address of the line memories 320, 322, and 324. The address generating circuit 326 updates a value of the writing address in synchronization with timing when the brightness data Y is inputted and inputs this writing address to any one of the line memories 320, 322, and 324 that are set as writing objects of the brightness data Y at that point. In the line memory 320 and the like, the brightness data Y is stored in a storage area specified by the writing address inputted. The readout address generated by the address generating circuit 326 is simultaneously inputted to the three line memories 320, 322, and 324. The image quality adjustment processing according to this embodiment is performed using the brightness data Y of three pixels in the horizontal direction and three pixels in the vertical direction, i.e., nine pixels in total. Thus, the same readout address is simultaneously inputted to the three line memories 320, 322, and 324 in order to simultaneously read out the brightness data Y of pixels in the same horizontal position.
The switch circuit 328 performs rearrangement of the brightness data Y simultaneously read out from the three line memories 320, 322, and 324. For example, when attention is paid to the inputted brightness data Y of three lines from the beginning, the brightness data Y of one line inputted last is stored in the line memory 324, the brightness data Y of one line inputted before last is stored in the line memory 322, and the oldest brightness data Y of one line is stored in the line memory 320. In general, a scanning order is set in the horizontal direction from the upper left of a screen of a monitor apparatus or the like. Thus, the brightness data Y of three pixels of an upper line in 3×3 pixels to be subjected to the image quality adjustment processing, the brightness data Y of three pixels of a center line, and the brightness data Y of three pixels of a lower line are stored in the line memory 320, the line memory 322, and the line memory 324, respectively. However, since the brightness data Y of the fourth line is overwritten in the line memory 320, it is necessary to shift a relation between the upper line, the center line, and the lower line of 3×3 pixels to be subjected to the image quality adjustment processing and the line memories 320, 322, and 324 by one line. This processing is performed by the switch circuit 328.
The brightness data buffer 330 stores the brightness data Y of 3×3 pixels read out from the three line memories 320, 322, and 324 via the switch circuit 328. The brightness-data calculating circuit 332 calculates brightness data after image quality adjustment corresponding to a center pixel (a target pixel) on the basis of the brightness data of nine pixels stored in the brightness data buffer 330. The control circuit 334 instructs the address generating circuit 326 to generate a readout address and a writing address and sends an enable signal to one or all of the line memories 320, 322, and 324 to control a writing operation or a readout operation for brightness data. The control circuit 334 performs control for switching a selection state in each of the selectors constituting the switch circuit 328.
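A software model of this readout path could look like the following sketch; control signals, memory enables, and the handling of the first and last columns are omitted, and the function names are assumptions.

```python
from collections import deque

# Model of the readout path: the same readout address is applied to the three
# line memories at once, so each address yields one vertical column of three
# pixels; the brightness data buffer keeps the last three columns, which form
# the 3x3 block handed to the brightness-data calculating function.

def process_line(memories, order, calculate):
    """memories: three line memories modeled as lists of equal length.
    order: memory indices in (upper, center, lower) order, as produced by
           the switch circuit for the current scanning line.
    calculate: function taking a 3x3 block (rows top to bottom) and returning
               the new brightness data for the center pixel."""
    width = len(memories[0])
    buffer = deque(maxlen=3)          # models the brightness data buffer 330
    results = []
    for addr in range(width):         # readout address shared by all memories
        column = [memories[i][addr] for i in order]
        buffer.append(column)
        if len(buffer) == 3:
            block = [[buffer[c][r] for c in range(3)] for r in range(3)]
            results.append(calculate(block))
    return results
```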
Details of the image quality adjustment processing will be explained.
In the explanations using FIGS. 7 to 10, the influence of the center pixel on the pixels around the center pixel is considered. However, in order to calculate brightness data E after image quality adjustment of the center pixel, on the contrary, it is necessary to consider an influence of the peripheral pixels on the center pixel.
The same applies to a case in which attention is paid to an adjacent pixel on the right side. A degree aF of an influence on the left half area of the center pixel is obtained by multiplying brightness data F of the adjacent pixel by the weighting coefficient “a”. A degree bF of an influence on the right half area of the center pixel is obtained by multiplying the brightness data F of the adjacent pixel by the weighting coefficient “b”.
The same applies to a case in which attention is paid to an adjacent pixel on the lower side. A degree cH of an influence on the upper half area of the center pixel is obtained by multiplying brightness data H of the adjacent pixel by the weighting coefficient “c”. A degree dH of an influence on the lower half area of the center pixel is obtained by multiplying the brightness data H of the adjacent pixel by the weighting coefficient “d”.
The same applies to a case in which attention is paid to an adjacent pixel on the upper right. A degree fC of an influence on a ¾ area excluding an upper right ¼ area of the center pixel is obtained by multiplying brightness data C of the adjacent pixel by the weighting coefficient “f”. A degree gC of an influence on the upper right ¼ area of the center pixel is obtained by multiplying the brightness data C of the adjacent pixel by the weighting coefficient “g”.
A degree gG of an influence on a lower left ¼ area of the center pixel is obtained by multiplying brightness data G of an adjacent pixel by the weighting coefficient “g”. A degree fG of an influence on a ¾ area excluding the lower left ¼ area of the center pixel is obtained by multiplying the brightness data G of the adjacent pixel by the weighting coefficient “f”.
A degree fI of an influence on a ¾ area excluding a lower right ¼ area of the center pixel is obtained by multiplying brightness data I of an adjacent pixel by the weighting coefficient “f”. A degree gI of an influence on the lower right ¼ area of the center pixel is obtained by multiplying the brightness data I of the adjacent pixel by the weighting coefficient “g”.
Considering all the results described above, brightness data E11 of the upper left ¼ area of the target pixel, brightness data E12 of the upper right ¼ area of the target pixel, brightness data E21 of the lower left ¼ area of the target pixel, and brightness data E22 of the lower right ¼ area of the target pixel are as described below.
E11=(eE+gA+dB+fC+bD+aF+fG+cH+fI)/e (1)
E12=(eE+fA+dB+gC+aD+bF+fG+cH+fI)/e (2)
E21=(eE+fA+cB+fC+bD+aF+gG+dH+fI)/e (3)
E22=(eE+fA+cB+fC+aD+bF+fG+dH+gI)/e (4)
The coefficient 1/e in each of equations (1) to (4) is a coefficient for keeping the average value of the brightness data from fluctuating before and after image quality adjustment.
An actual center pixel has one area as a whole rather than being divided into four areas as described above. Thus, as described below, brightness data E′ after image quality adjustment is obtained by averaging the brightness data E11, E12, E21, and E22 of the respective areas calculated according to equations (1) to (4).
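The calculation up to this point can be transcribed directly from equations (1) to (4) and the averaging step. In the sketch below the coefficient values a, b, c, d, e, f, and g are left as function arguments because their concrete values are taken from the impulse response waveform of the embodiment, which is not reproduced here.

```python
# Transcription of equations (1) to (4) and of the averaging that yields the
# brightness data E' of the target pixel; the coefficient values are supplied
# by the caller because they come from the embodiment's impulse response.

def brightness_after_averaging(A, B, C, D, E, F, G, H, I, a, b, c, d, e, f, g):
    E11 = (e*E + g*A + d*B + f*C + b*D + a*F + f*G + c*H + f*I) / e   # eq. (1)
    E12 = (e*E + f*A + d*B + g*C + a*D + b*F + f*G + c*H + f*I) / e   # eq. (2)
    E21 = (e*E + f*A + c*B + f*C + b*D + a*F + g*G + d*H + f*I) / e   # eq. (3)
    E22 = (e*E + f*A + c*B + f*C + a*D + b*F + f*G + d*H + g*I) / e   # eq. (4)
    return (E11 + E12 + E21 + E22) / 4      # E': average of the four areas
```

Collecting terms shows that this average depends on the neighbor data only through the groups (D+F), (B+H), and (A+C+G+I); the parameters “x”, “y”, and “z” introduced next appear to stand for the combined coefficients of these groups.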
Here, “x”, “y”, and “z” are image quality adjustment parameters whose values are set by the adjustment-parameter setting section 302. In equation (5), when the parameters are set as x=0, y=0, and z=0, the brightness data E′ after image quality adjustment equals E, which is equivalent to a case in which the image quality adjustment processing is not performed at all. To avoid this, “x”, “y”, and “z” only have to be set as x≠0, y≠0, and z≠0. When the values of “x”, “y”, and “z” are positive, a blurring effect is obtained rather than an enhance effect (an edge enhance effect). Therefore, when it is desired to obtain the enhance effect, it is necessary to set the values of “x”, “y”, and “z” to negative values. In the following explanation, the case of obtaining the enhance effect will be explained in detail; the same idea applies to the case in which the blurring effect is obtained. When the values of “x”, “y”, and “z” are set to values other than 0, the gain of the brightness data E′ fluctuates as these values are variably set. Thus, brightness data E″ normalized by the sum M (=x+y+z) of the coefficients is used as the image quality adjustment result.
The brightness-data calculating circuit 332 performs the image quality adjustment processing by performing the calculation of the contents indicated by equation (6).
The multiplier 384 has a multiplier factor set to “e”, multiplies the brightness data E inputted by “e”, and outputs the result. The multiplier 386 has a multiplier factor set to 4, multiplies an output (eE) of the multiplier 384 by 4, and outputs the result. In this way, a term of “4eE” included in equation (6) is calculated.
The adder 350 adds the brightness data A and the brightness data C inputted. The adder 352 adds the brightness data G and the brightness data I inputted. The adder 358 adds an output (A+C) of the adder 350 and an output (G+I) of the adder 352. The multiplier 374 has a multiplier factor set to the image quality adjustment parameter “z” outputted from the adjustment-parameter setting section 302, multiplies an output (A+C+G+I) of the adder 358 by “z”, and outputs the result. In this way, a term of “z(A+C+G+I)” included in equation (6) is calculated.
The adder 354 adds the brightness data B and the brightness data H inputted. The multiplier 376 has a multiplier factor set to the image quality adjustment parameter “y” outputted from the adjustment-parameter setting section 302, multiplies an output (B+H) of the adder 354 by “y”, and outputs the result. The multiplier 380 has a multiplier factor set to 2, multiplies an output (y(B+H)) of the multiplier 376 by 2, and outputs the result. In this way, a term of “2y(B+H)” included in equation (6) is calculated.
The adder 356 adds the brightness data D and the brightness data F inputted. The multiplier 378 has a multiplier factor set to the image quality adjustment parameter “x” outputted from the adjustment-parameter setting section 302, multiplies an output (D+F) of the adder 356 by “x”, and outputs the result. The multiplier 382 has a multiplier factor set to 2, multiplies an output (x(D+F)) of the multiplier 378 by 2, and outputs the result. In this way, a term of “2x(D+F)” included in equation (6) is calculated.
The adder 360 adds the output of the multiplier 374 and the output of the multiplier 380. The adder 362 adds the output of the multiplier 382 and the output of the multiplier 386. Moreover, the adder 368 adds outputs of these two adders 360 and 362. In this way, a term of “4eE+z(A+C+G+I)+2y(B+H)+2x(D+F)” included in equation (6) is calculated.
The adder 370 adds the two image quality adjustment parameters “x” and “y” outputted from the adjustment-parameter setting section 302. The adder 372 adds the output (x+y) of the adder 370 and the adjustment parameter “z” outputted from the adjustment-parameter setting section 302. The multiplier 388 has a multiplier factor set to “e”, multiplies an output (x+y+z=M) of the adder 372 by “e”, and outputs the result. The multiplier 390 has a multiplier factor set to 4, multiplies an output (eM) of the multiplier 388 by 4, and outputs the result.
The divider 392 has a divisor set to the output (4eM) of the multiplier 390, divides an output (4eE+z(A+C+G+I)+2y(B+H)+2x(D+F)) of the adder 368 by 4eM, and outputs the result. In this way, the calculation indicated by equation (6) is performed and the brightness data E″ after image quality adjustment is outputted.
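In other words, as described above, the adder/multiplier/divider network computes E″ = (4eE + z(A+C+G+I) + 2y(B+H) + 2x(D+F)) / (4eM) with M = x + y + z. The short sketch below makes this explicit; variable names follow the text, and M is assumed to be non-zero so that the division is defined.

```python
# Restatement of the datapath: numerator from adders 350-368 and multipliers
# 374-386, denominator 4eM from adders 370/372 and multipliers 388/390,
# division by the divider 392.

def brightness_after_adjustment(A, B, C, D, E, F, G, H, I, x, y, z, e):
    numerator = 4*e*E + z*(A + C + G + I) + 2*y*(B + H) + 2*x*(D + F)
    M = x + y + z                    # output of adders 370 and 372
    return numerator / (4 * e * M)   # E'' (M is assumed to be non-zero)
```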
Concerning the horizontal direction, as shown in
Similarly, concerning the vertical direction, as shown in
Concerning the oblique directions, as shown in
The control circuit 334 and the address generating circuit 326 correspond to the pixel-data readout unit, the brightness data buffer 330 corresponds to the pixel-data storing unit, the brightness-data calculating circuit 332 corresponds to the pixel-data calculating unit, the control circuit 334 and the address generating circuit 326 correspond to the pixel-data writing unit, and the adjustment-parameter setting section 302 corresponds to the adjustment-parameter setting unit.
As described above, in the image processing device 1 according to this embodiment, it is possible to perform the image quality adjustment processing such as edge enhancement simply by extracting nine pixels including a target pixel and calculating new pixel data (brightness data, color difference data) corresponding to the target pixel using the pixel data of these nine pixels. This makes it unnecessary to perform complicated processing such as extracting an edge, performing gain adjustment, and then adding the edge to the original pixel data, and makes it possible to simplify processing.
Every time the scanning line is updated, new pixel data corresponding to the updated scanning line is overwritten into the line memory, among the three line memories 320, 322, and 324, in which the pixel data having the earliest scanning order is stored. This makes it possible to store the pixel data of the necessary three scanning lines, every time the scanning line is changed, simply by storing the image data corresponding to the next scanning line in one line memory after another.
The image processing device further includes the switch circuit 328 that shifts, every time the scanning line is updated, a correspondence relation between the pixel data of the nine pixels read out from the brightness data buffer 330 and the three line memories 320, 322, and 324. This makes it possible to change, every time the scanning line is updated, a relation between the three line memories 320, 322, and 324 and the nine pixels extracted and always keep a relation between an order of the scanning lines and the nine pixels read out the same.
The switch circuit 328 has three selectors 340, 342, and 344 with three inputs that selectively output pixel data read out from the three line memories 320, 322, and 324. Every time the scanning line is updated, the three selectors 340, 342, and 344 switch the line memories to be selected, in correspondence with the updated scanning line, so that the selected line memories do not overlap one another. This makes it easy to switch, in order, the relation between the order of the scanning lines and the nine pixels extracted.
When pixel data of two pixels that correspond to two scanning lines adjacent to a target pixel and are adjacent to the target pixel in the vertical direction with respect to the scanning lines are B and H, the image quality adjustment processing is applied to the target pixel by adding a value proportional to an added value of these pixel data B and H to the pixel data E of the target pixel. This makes it possible to perform the image quality adjustment processing with an influence due to the pixels adjacent to the target pixel in the vertical direction reflected on pixel data of the target pixel.
When pixel data of four pixels that correspond to two scanning lines adjacent to a target pixel and are adjacent in the oblique direction with respect to the target pixel are A, C, G, and I, the image quality adjustment processing is applied to the target pixel by adding a value proportional to an added value of these pixel data A, C, G, and I to the pixel data E of the target pixel. This makes it possible to perform the image quality adjustment processing with an influence due to the pixels adjacent to the target pixel in the oblique direction reflected on pixel data of the target pixel.
By setting a proportional constant (z, 2y, and 2x in equation (6)) in calculating the value proportional to the added value to a negative value, it is possible to perform enhance processing along a direction in which the pixels to which the added value is added and the target pixel are arranged abreast. It is possible to realize an enhance effect for enhancing an edge portion included in an image.
By setting a proportional constant in calculating the value proportional to the added value to a positive value, it is possible to perform blurring processing along a direction in which the pixels to which the added value is added and the target pixel are arranged abreast. It is possible to realize a blurring effect for averaging an edge portion included in an image.
By setting the proportional constant as an adjustment parameter, a value of which is changeable, and variably setting the value of the adjustment parameter, it is possible to variably set degrees of the enhance effect and the blurring effect. In particular, simply by changing a value of the adjustment parameter, it is possible to easily obtain pixel data with the degrees of the enhance effect and the blurring effect adjusted.
When an impulse response waveform indicating an influence of one pixel on the peripheral pixels around the one pixel is used, weighting coefficients that make the influence of the one pixel on the peripheral pixels different are individually set for a partial area of the peripheral pixels and for the remaining area other than the partial area by this impulse response waveform. This makes it possible to finely set a degree of the influence of the one pixel on the peripheral pixels arranged around the one pixel. In particular, the value of the weighting coefficient corresponding to the partial area close to the one pixel is set to a positive value, and the value of the weighting coefficient corresponding to the remaining area distant from the one pixel is set to a negative value. This makes it possible to impart a negative area to the impulse response in the same manner as a general sampling function for performing interpolation processing among data and to obtain a more natural image after image quality adjustment with the degree of the influence of the one pixel on the peripheral pixels accurately reflected thereon.
It is possible to variably set a degree of image quality adjustment by variably setting the weighting coefficient. It is also possible to individually set the impulse response waveform according to the relative positional relation of the peripheral pixels to the one pixel. This makes it possible to perform, when the contents of an image have directionality (e.g., depending on the direction that an edge faces), image quality adjustment processing with that direction reflected thereon. In particular, it is possible to adjust the degrees of enhancement and blurring depending on whether the direction in which the color and shade of the image change lies along the horizontal, vertical, or oblique direction.
The invention is not limited to the above embodiment, and various modifications are possible within the scope of the gist of the invention. For example, in the above embodiment, the case in which video data of a format conforming to ITU-R.BT601-5/656 is inputted is explained. However, it is possible to perform the image quality adjustment processing in the same manner for image data of other formats as long as a video signal is inputted in a scanning order. RGB data may be inputted in the scanning order, or shade data for a black-and-white video may be inputted, rather than brightness data and color difference data. In the case of RGB data, the pixel data of the R component, the pixel data of the G component, and the pixel data of the B component are separated, in the same manner as the brightness data and the color difference data, to separately perform the image quality adjustment processing.
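For the RGB case mentioned above, a sketch of the per-component handling might look like the following; the helper adjust_channel stands for the image quality adjustment of one component and is assumed, not defined here.

```python
# Sketch: R, G and B components are separated and the same image quality
# adjustment is applied to each component independently, just as it is applied
# to the brightness data and the color difference data.

def adjust_rgb_image(rgb_pixels, adjust_channel):
    """rgb_pixels: 2-D list of (R, G, B) tuples in scanning order.
    adjust_channel: assumed per-component adjustment (2-D list -> 2-D list)."""
    r = [[p[0] for p in row] for row in rgb_pixels]
    g = [[p[1] for p in row] for row in rgb_pixels]
    b = [[p[2] for p in row] for row in rgb_pixels]
    r2, g2, b2 = adjust_channel(r), adjust_channel(g), adjust_channel(b)
    return [[(r2[i][j], g2[i][j], b2[i][j]) for j in range(len(row))]
            for i, row in enumerate(rgb_pixels)]
```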
In the embodiment described above, the enhance effect is obtained by setting values of the image quality adjustment parameters “x”, “y”, and “z” to negative values. However, values of these image quality adjustment parameters “x”, “y”, and “z” may be set to positive values. When these values are set to positive, an effect for blurring an image is obtained instead of the enhance effect.
In the embodiment described above, the image quality adjustment parameters “x” and “y” are set separately. However, when the enhance effects in the horizontal direction and the vertical direction are set the same, these two image quality adjustment parameters “x” and “y” may be set the same. In this case, another adder only has to be inserted at a pre-stage of the multiplier 376 and the multiplier 378 shown in
In the embodiment described above, degrees of influences of one pixel on eight pixels arranged around the one pixel are as shown in
According to the invention, it is possible to perform image quality adjustment processing such as edge enhancement simply by extracting nine pixels including a target pixel and calculating new pixel data corresponding to the target pixel using the pixel data of these nine pixels. This makes it unnecessary to perform complicated processing such as extracting an edge, performing gain adjustment, and then adding the edge to the original image data, and makes it possible to simplify processing.