The present invention relates to an image processing apparatus for correcting image edges to a desired sharpness, an image processing apparatus capable of changing the number of image pixels by an arbitrary zoom ratio and correcting image edges to a desired sharpness, and an image display apparatus using these types of image processing apparatus.
An example of an image processing method for correcting edges in an image to enhance their sharpness is disclosed in Japanese Patent Application Publication No. 2002-16820. This patent document describes an image processing method that calculates absolute values of derivatives of input image signals and the mean value of those absolute values, obtains difference values by subtracting the mean value from the calculated absolute values, and controls enlargement and reduction ratios of the image according to the difference values. Controlling the enlargement and reduction ratios according to changes in the image signal in this way lets the image enlargement and reduction circuits make the rising and falling transitions of edges steeper, thereby improving the sharpness of the image.
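A minimal sketch of the idea described in that publication, assuming per-pixel control of the zoom ratio; the function name, the gain parameter, and the linear scaling step are illustrative assumptions rather than the publication's actual procedure:

```python
def zoom_from_derivatives(signal, base_zoom=1.0, gain=0.05):
    """Derive local zoom ratios from how sharply the signal changes
    relative to its average change (sketch of the cited method)."""
    # Absolute values of the first derivatives of the input signal.
    deriv = [abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1)]
    mean = sum(deriv) / len(deriv)
    # Difference values: positive where the signal changes faster than
    # average, i.e. near edges.
    diff = [d - mean for d in deriv]
    # Control the local enlargement/reduction ratio from the differences.
    return [base_zoom + gain * d for d in diff]
```

Ratios above `base_zoom` then drive local enlargement near edge transitions and ratios below it drive local reduction, which is what steepens the transitions.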
Japanese Patent Application Publication No. 2000-101870 discloses an image processing method that, when converting the number of pixels in an input image, generates control values based on high frequency components of the image signal and uses the control values to control the interpolation phase in an interpolation filter used in the conversion of the number of pixels. Such control of the interpolation phase according to high frequency components produces sharper transitions at edges in the image, resulting in a crisper image.
A problem in the conventional image processing methods disclosed in the references cited above is that because the edge correction is carried out by using corrective values based on amounts of high frequency components in the image signal, it is difficult to improve the sharpness of edges at which the change in the level of the image signal is small. It is therefore difficult to improve sharpness across the image as a whole without overcorrecting some edges or undercorrecting others. Another problem is that edge corrections made in the vertical direction by the conventional image processing methods require delay circuits for obtaining the pixel data necessary for the corrections and for adjusting the output timing of the corrected image data, making it difficult to reduce costs.
The present invention addresses the problems above with the object of providing an image processing apparatus and an image processing method capable of improving image quality by appropriate improvement of edge sharpness.
A first image processing apparatus according to the present invention comprises an edge correction means for correcting edge widths in the image, an enhancement value calculation means for calculating enhancement values for enhancing edges according to high frequency components of the image with the corrected edge widths, and an edge enhancement means for enhancing the edges by adding the enhancement values to the image with the corrected edge widths.
A second image processing apparatus according to the present invention comprises a frame memory control means for writing luminance data and color difference data of an image into a frame memory and reading them at prescribed timings, and an edge width correction means for extracting the data for a plurality of vertically aligned pixels from the luminance data read from the frame memory and correcting vertical edge widths in the extracted pixel data, wherein the frame memory control means reads the color difference data with a delay from the luminance data of at least the interval required for correction of the edge widths by the edge width correction means.
FIGS. 4(a) and 4(b) are drawings illustrating the read and write timings of the frame memory.
FIGS. 7(a) to 7(d) illustrate edge width correction processing.
FIGS. 10(a) to 10(d) illustrate edge enhancement processing.
FIGS. 16(a) to 16(c) illustrate enlargement and reduction processing of an image in the pixel number converter.
FIGS. 17(a) and 17(b) illustrate a change in edge width due to enlargement processing.
FIGS. 21(a) to 21(e) illustrate image combination and image processing control signals.
The receiver 1 receives an externally input image signal Di and a synchronizing signal Si, and converts the image signal Di to digital image data Da, which are output with a synchronizing signal Sa. If the image signal Di is an analog signal, the receiver 1 is configured as an A/D converter. If the image signal Di is a serial or parallel digital signal, the receiver 1 is configured as a receiver of the corresponding type, and may include a tuner if necessary.
The image data Da may comprise color data for the three primary colors red (R), green (G), and blue (B), or may comprise separate data for luminance and color components. In the following description, it will be assumed that the image data Da comprise red-green-blue trichromatic color data.
The image data Da and synchronizing signal Sa output from the receiver 1 are input to converter 3 in the image processor 2. The synchronizing signal Sa is also input to the output synchronizing signal generator 7.
Converter 3 converts the image data Da comprising red-green-blue trichromatic color data to luminance data DY and color difference data DCr, DCb. Converter 3 also delays the synchronizing signal Sa by the time required for this conversion of the image data Da, and outputs a delayed synchronizing signal DS. The luminance data DY, color difference data DCr, DCb, and synchronizing signal DS output from converter 3 are sent to the memory unit 4.
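The conversion matrix used by converter 3 is not specified in the text; as one concrete possibility, a full-range ITU-R BT.601 conversion from 8-bit RGB looks like this:

```python
def rgb_to_ycbcr(r: float, g: float, b: float):
    """Convert one RGB pixel to luminance and color difference values.
    BT.601 full-range coefficients are shown as an example; the patent
    does not say which matrix converter 3 actually uses."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b        # luminance (DY)
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128  # color difference (DCr)
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128  # color difference (DCb)
    return y, cr, cb
```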
The memory unit 4 temporarily stores the luminance data DY and color difference data DCr, DCb output from converter 3. The memory unit 4 comprises a frame memory that is used as a frame rate conversion memory for converting image signals output from devices having different frame rates, such as personal computers and television sets, to an image signal having a fixed rate (for example, 60 Hz), or as a frame buffer for storing one frame of image data. The luminance data DY and color difference data DCr, DCb are stored in this frame memory.
The output synchronizing signal generator 7 generates a synchronizing signal QS indicating timings for reading the luminance data DY and color difference data DCr, DCb stored in the memory unit 4, and outputs it to the memory unit 4. When the frame rate is converted in the frame memory in the memory unit 4, to output image data from the memory unit 4 with a different frame rate from the frame rate of the image data Da, the output synchronizing signal generator 7 generates a synchronizing signal QS having a different frequency from the frequency of synchronizing signal Sa. When the frame rate is not converted in the memory unit 4, synchronizing signals QS and Sa are the same.
The memory unit 4 reads out the luminance data DY and color difference data DCr, DCb according to the synchronizing signal QS provided from the output synchronizing signal generator 7, and outputs timing-adjusted luminance data QY and color difference data QCr, QCb to the edge corrector 5. In doing so, the memory unit 4 delays the reading of the color difference data QCr, QCb to allow time for performing an edge correction on the luminance data QY.
The edge corrector 5 performs an edge correction on the luminance data QY read from the memory unit 4, and outputs the edge-corrected luminance data ZYb to converter 6 together with the color difference data QCr, QCb, which have been read from the memory unit 4 with the prescribed delay.
Converter 6 converts the luminance data ZYb and color difference data QCr, QCb to image data Qb in a format capable of being displayed by the display unit 9, and outputs the converted image data Qb to the transmitter 8. Specifically, converter 6 converts the image data comprising luminance data and color difference data to image data comprising red-green-blue trichromatic color data. This does not apply, however, when the data format receivable by the display unit 9 is not a trichromatic image data format; in that case, converter 6 converts the data to the appropriate format.
The display unit 9 displays the image data Qc output from the transmitter 8 at the timing indicated by a synchronizing signal Sc. The display unit 9 may include any type of display device, such as a liquid crystal panel, a plasma panel, a cathode ray tube (CRT), or an organic electroluminescence (EL) display device.
The operation of the image processor 2 will now be described with reference to the drawings.
Converter 3 converts the image data Da to luminance data DY and color difference data DCr, DCb, and outputs these data to the frame memory controller 11 in the memory unit 4. Simultaneously, converter 3 delays the synchronizing signal Sa by the time required for conversion of the image data Da, and outputs the delayed synchronizing signal DS to the frame memory controller 11.
The luminance data DY and color difference data DCr, DCb input to the frame memory controller 11 are supplied to respective line buffers 14, 15, 16 in the write controller 13. From the synchronizing signal DS, the write address controller 17 generates write addresses WA used when the luminance data DY and color difference data DCr, DCb input to the line buffers 14, 15, 16 are written into the frame memory 10. The write controller 13 sequentially reads out the luminance data DY and color difference data DCr, DCb stored in the line buffers, and writes them as image data WD into the frame memory 10 at the write addresses WA.
Meanwhile, from the synchronizing signal QS output by the output synchronizing signal generator 7, the read address controller 22 generates read addresses RA for reading the luminance data DY and color difference data DCr, DCb written into the frame memory 10. The read addresses RA are generated so that the color difference data DCr, DCb are read with a delay from the luminance data DY equal to the interval required for edge correction by the edge corrector 5. The frame memory 10 outputs data RD read according to the read addresses RA to the line buffers 19, 20, 21. The line buffers 19, 20, 21 output luminance data QY and color difference data QCr, QCb, the timings of which have been adjusted as above, to the edge corrector 5.
When DRAMs are used for the frame memory 10, for example, the operation carried out at any one time between the frame memory 10 and the frame memory controller 11 is restricted to either writing or reading. The luminance data and color difference data for one line therefore cannot be written or read continuously, so a timing adjustment is performed: the line buffers 14, 15, 16 write the continuous-time luminance data DY and color difference data DCr, DCb intermittently into the frame memory, and the line buffers 19, 20, 21 receive the luminance data QY and color difference data QCr, QCb read intermittently from the frame memory 10 but output them as continuous-time data.
The luminance data QY input to the edge corrector 5 are supplied to the vertical edge corrector 12. The vertical edge corrector 12 performs edge corrections in the vertical direction on the luminance data QY, and outputs the vertically edge-corrected luminance component data ZYb to converter 6. The edge correction operation in the vertical edge corrector 12 will be described later. This vertical edge correction produces a delay of a prescribed number of lines between the corrected luminance data ZYb and the uncorrected luminance data QY. If the delay is by k lines, the color difference data QCr, QCb input to converter 6 must also be delayed by k lines. The read address controller 22 generates read addresses RA so that the corrected luminance data ZYb and the color difference data QCr, QCb are input to converter 6 in synchronization with each other. That is, the read addresses RA are generated so that the color difference data QCr, QCb are read with a k-line delay from the luminance data QY.
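A minimal sketch of the k-line chroma delay just described, assuming line-granular addressing; the base offsets and the flat address layout are illustrative assumptions, not the patent's memory map:

```python
def read_addresses(line, line_size, k,
                   luma_base=0x00000, chroma_base=0x80000):
    """Generate one luma read address and one chroma read address for
    output line `line`, with the chroma read lagging k lines behind."""
    luma_addr = luma_base + line * line_size
    chroma_line = line - k            # chroma data lag the luma by k lines
    if chroma_line < 0:
        return luma_addr, None        # chroma reads have not started yet
    return luma_addr, chroma_base + chroma_line * line_size
```

With this scheme the color difference data QCr, QCb leave the frame memory exactly k lines after the corresponding luminance data QY, so the corrected luminance data ZYb and the color difference data arrive at converter 6 together.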
FIGS. 4(a) and 4(b) illustrate the read and write timings of the frame memory.
Next, the operation of the vertical edge corrector 12 will be described.
The line delay A unit 23 receives the luminance data QY output from the frame memory controller 11, and outputs luminance data QYa for the number of pixels necessary for vertical edge width correction processing in the edge width corrector 24. If the edge width correction processing is performed using eleven vertically aligned pixels, the luminance data QYa include eleven pixel data values.
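What the line delay A unit 23 hands to the edge width corrector can be pictured as a vertical window of samples; a sketch, with border rows clamped (the text does not say how frame borders are handled, so clamping is an assumption):

```python
def vertical_window(image, row, col, taps=11):
    """Collect `taps` vertically aligned luminance samples centred on
    (row, col), as the line delay A unit 23 would supply them (QYa)."""
    half = taps // 2
    last = len(image) - 1
    return [image[min(max(row + i, 0), last)][col]
            for i in range(-half, half + 1)]
```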
FIGS. 7(a) to 7(d) illustrate the edge width correction processing in the edge width corrector 24. The edge width detector 25 detects part of the luminance data QYa as an edge if the part changes continuously in magnitude in the vertical direction over a prescribed interval, detects the width Wa of the edge, and detects a prescribed position within the width as a reference position PM.
On the basis of the detected edge width Wa and reference position PM, the zoom ratio control value generator 26 outputs zoom ratio control values ZC used for edge width correction.
The zoom ratio generator 27 adds the zoom ratio control values ZC to a reference zoom conversion ratio Z0, which is a preset zoom conversion ratio that applies to the entire image, to generate zoom conversion ratios Z.
The interpolation calculation unit 28 carries out an interpolation process on the luminance data QYa according to the zoom conversion ratio Z. In the interpolation process, in the front part (b) and rear part (d) of the edge, in which the zoom conversion ratio Z is greater than the reference zoom conversion ratio Z0, the interpolation density is increased; in the central part (c) of the edge, in which the zoom conversion ratio Z is less than the reference zoom conversion ratio Z0, the interpolation density is decreased. Accordingly, an enlargement process that results in a relative increase in the number of pixels is performed in the front part (b) and rear part (d) of the edge, and a reduction process that results in a relative decrease in the number of pixels is performed in the central part (c) of the edge.
The zoom ratio control values ZC are generated according to the edge width Wa so as to sum to zero over these parts (b, c, and d). The relative enlargement in the front and rear parts and the relative reduction in the central part therefore cancel out, and the edge width correction leaves the total number of pixels unchanged.
The corrected value of the edge width Wa, that is, the edge width Wb after the conversion, can be set arbitrarily through the magnitude of the zoom conversion ratio Z.
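A sketch of the width correction just described, assuming a piecewise-constant control profile; the profile shape, the `strength` parameter, and the linear interpolation are illustrative assumptions (the patent leaves the exact interpolation method open):

```python
def zoom_ratios(n, pm, wa, z0=1.0, strength=0.5):
    """Build per-pixel zoom conversion ratios Z = Z0 + ZC around an edge
    of width `wa` whose reference position is `pm`.  ZC is negative in
    the central part, positive in the front and rear parts, and sums to
    zero over the edge."""
    centre = [i for i in range(n) if abs(i - pm) <= wa // 2]
    flanks = [i for i in range(n) if wa // 2 < abs(i - pm) <= wa]
    zc = [0.0] * n
    for i in centre:
        zc[i] = -strength                             # reduce the centre
    for i in flanks:
        zc[i] = strength * len(centre) / len(flanks)  # enlarge the flanks
    assert abs(sum(zc)) < 1e-9                        # ZC sums to zero
    return [z0 + c for c in zc]

def interpolate(samples, zoom):
    """Variable-density linear interpolation: the read position advances
    by 1/Z per output sample, so parts with Z > Z0 get more output
    samples (enlargement) and parts with Z < Z0 get fewer (reduction)."""
    out, x, last = [], 0.0, len(samples) - 1
    while x <= last:
        i = int(x)
        f = x - i
        nxt = samples[min(i + 1, last)]
        out.append((1 - f) * samples[i] + f * nxt)
        x += 1.0 / zoom[min(i, len(zoom) - 1)]
    return out

# Narrow a blurred edge: the flanks are enlarged, the centre reduced.
samples = [0, 5, 20, 50, 80, 95, 100, 100]
z = zoom_ratios(len(samples), pm=3, wa=2)
print(interpolate(samples, z))
```

Because ZC sums to zero, the output has approximately as many samples as the input: the edge is narrowed without changing the overall image size.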
The edge width detector 25 detects as an edge a part of the image in which the luminance data increase or decrease monotonically and the front and rear parts are flatter than the central part. This condition means that each of the difference quantities (da, db, dc) has the same positive or negative sign, or has a zero value, and the absolute values of da and dc are smaller than the absolute value of db. More specifically, when these values (da, db, dc) satisfy both of the following conditions (1a and 1b), the four pixels with the pixel data QYa(ka−2), QYa(ka−1), QYa(ka), QYa(ka+1) are detected as an edge:
da ≧ 0, db ≧ 0, dc ≧ 0, or
da ≦ 0, db ≦ 0, dc ≦ 0 (1a)
|db| > |da|, |db| > |dc| (1b)
In this case the edge width is three times the pixel spacing (Wa=3×Ws).
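Conditions (1a) and (1b) in code form; taking da, db, dc as the successive differences of the four samples is the natural reading of the text, but the patent does not spell the definitions out, so treat that as an assumption:

```python
WS = 1.0  # pixel spacing Ws, in arbitrary units

def detect_edge(p):
    """Test four vertically adjacent luminance samples
    p = [QYa(ka-2), QYa(ka-1), QYa(ka), QYa(ka+1)] against conditions
    (1a) and (1b); return the edge width Wa, or None if no edge."""
    da, db, dc = p[1] - p[0], p[2] - p[1], p[3] - p[2]
    same_sign = (da >= 0 and db >= 0 and dc >= 0) or \
                (da <= 0 and db <= 0 and dc <= 0)          # condition (1a)
    steep_mid = abs(db) > abs(da) and abs(db) > abs(dc)    # condition (1b)
    return 3 * WS if (same_sign and steep_mid) else None   # Wa = 3 x Ws

# A gradual rise with a steeper middle is detected as an edge:
print(detect_edge([10, 12, 40, 42]))   # -> 3.0
```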
Edge widths may also be detected by using pixel data extracted from every other pixel (at intervals of 2×Ws).
The luminance data ZYa with the corrected edge width output from the interpolation calculation unit 28 are sent to the line delay B unit 29. The line delay B unit 29 outputs luminance data QYb for the number of pixels necessary for edge enhancement processing in the edge enhancer 30. If the edge enhancement processing is performed using five pixels, the luminance data QYb comprise the luminance data of five pixels.
The edge detector 31 performs a differential operation on the luminance data QYb, such as taking the second derivative, to detect luminance variations across edges with corrected edge widths Wb, and outputs the detection results to the enhancement value generator 32 as edge detection data R. The enhancement value generator 32 generates enhancement values SH for enhancing edges in the luminance data QYb according to the edge detection data R, and outputs the generated values SH to the enhancement value adder 33. The enhancement value adder 33 adds the enhancement values SH to the luminance data QYb to enhance edges therein.
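A sketch of this detect-then-add pipeline for one run of samples, using a 3-tap discrete second derivative; the kernel and the gain are illustrative assumptions (the text only says "a differential operation ... such as taking the second derivative"):

```python
def enhance_edges(qyb, gain=0.5):
    """Edge enhancement: compute edge detection data R with a second
    derivative, turn R into enhancement values SH, and add SH to the
    luminance so edges acquire undershoot and overshoot."""
    out = list(qyb)
    for i in range(1, len(qyb) - 1):
        r = qyb[i - 1] - 2 * qyb[i] + qyb[i + 1]  # edge detection data R
        sh = -gain * r                            # enhancement value SH
        out[i] = qyb[i] + sh                      # enhancement value adder 33
    return out
```

At the foot of a rising edge R is positive, so SH is negative and produces the undershoot; at the top R is negative and SH produces the overshoot.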
FIGS. 10(a) to 10(d) illustrate the edge enhancement processing in the edge enhancer 30.
In the image processing apparatus according to the present invention, as shown in FIGS. 10(a) and 10(b), an edge width correction process is performed in which the edge width Wa of the luminance data QYa is reduced to obtain a steeper luminance variation at the edge. Then the undershoot and overshoot (enhancement values SH) are added to the edge with the corrected width.
The edge enhancer 30 may be adapted so as not to enhance noise components, and may also include a noise reduction function that reduces noise components. This can be done by having the edge enhancement value generator 32 perform a nonlinear process on the edge detection data R output from the edge detector 31.
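One common nonlinear process that would give this behaviour is "coring", which zeroes small detection values on the assumption that they are noise; the threshold and the hard cut-off here are illustrative assumptions:

```python
def core(r, threshold=4.0):
    """Suppress small edge detection values R so that noise is not
    enhanced; larger values pass through unchanged."""
    return 0.0 if abs(r) < threshold else r
```

The enhancement value generator 32 would apply such a function to the edge detection data R before generating the enhancement values SH.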
The edge detection data R obtained in the edge detector 31 may be detected by performing pattern matching or other calculations instead of by differentiation.
A pixel delay A unit 35 receives the luminance data ZYb sequentially output from the vertical edge corrector 12, and outputs luminance data QYc for the number of pixels necessary for horizontal edge width correction processing in the edge width corrector 24.
The luminance data QYc for 2ma+1 pixels output from the pixel delay A unit 35 are sent to the edge width corrector 24. The edge width corrector 24 performs the same processing as for vertical edge width correction on the luminance data QYc in the horizontal direction, and outputs luminance data ZYc with corrected horizontal edge widths.
The luminance data ZYc with the corrected edge widths output from the edge width corrector 24 are input to a pixel delay B unit 36. The pixel delay B unit 36 outputs luminance data QYd for the number of pixels necessary for edge enhancement processing in the edge enhancer 30.
The luminance data QYd of 2mb+1 pixels output from the pixel delay B unit 36 are sent to the edge enhancer 30. The edge enhancer 30 performs the same processing on the luminance data QYd in the horizontal direction as was performed for edge enhancement in the vertical direction, and outputs luminance data ZYd with horizontally enhanced edges.
The luminance data ZYe with the corrected edges output from the horizontal edge corrector 34 are input to converter 6. The frame memory controller 11 outputs the color difference data QCr, QCb with a delay equal to the interval of time required for the edge correction, so that the color difference data QCr, QCb and the luminance data ZYe with the corrected edges are input to converter 6 in synchronization with each other. This interval includes the time, equivalent to a prescribed number of lines, from input of the luminance data QY to the vertical edge corrector 12 to output of the vertically edge-corrected luminance data ZYb, and the time, equivalent to a prescribed number of clock cycles, from input of the vertically edge-corrected luminance data ZYb to the horizontal edge corrector 34 to output of the horizontally edge-corrected luminance data ZYd. Specifically, the read addresses RA are generated so that the color difference data QCr, QCb are output from the frame memory 10 with a delay from the luminance data QY equal to the above interval. The amount of line memory required for delaying the color difference data QCr, QCb can thereby be reduced.
The horizontal edge corrections may be performed before the vertical edge corrections, or the horizontal and vertical edge corrections may be performed concurrently.
In the invented image processing apparatus described above, when a vertical or horizontal edge correction is performed on an image, first the edge widths are corrected and then undershoots and overshoots are added to the edges with the corrected widths. Therefore, the widths of even edges having gradual luminance variations can be reduced to make the luminance variations steeper, and undershoots and overshoots having appropriate widths can be added. By performing adequate corrections on edges having various different luminance variations, it is possible to improve the sharpness of an image without overcorrecting or undercorrecting.
Further, when edge widths are corrected, since the corrections are determined by the widths of the edges instead of their amplitudes, the sharpness of even edges having gradual luminance variations is enhanced, so that adequate edge enhancement processing can be performed.
When an edge correction is performed, the luminance data QY and color difference data QCr, QCb are written into a frame memory, edge correction processing is performed on the luminance data QY read from the frame memory, and the color difference data QCr, QCb are read with a delay from the luminance data QY equal to the interval required for the edge correction processing. Edge correction can therefore be performed on the luminance data without providing separate delay elements to adjust the timing of the color difference data QCr, QCb.
The pixel number converter 38 performs pixel number conversion processing, i.e., image enlargement or reduction processing, on image data comprising the luminance data DY and color difference data DCr, DCb output from converter 3. FIGS. 16(a), 16(b), and 16(c) show examples of enlargement processing, reduction processing, and partial enlargement processing of an image, respectively, in the pixel number converter 38.
When enlargement processing of an image is performed as shown in FIGS. 16(a) and 16(c), a problem of blurred edges occurs as described below. FIGS. 17(a) and 17(b) illustrate luminance changes at edges when enlargement processing of an image is performed, showing the luminance changes at the edges of an input image and an enlarged image, respectively. As these drawings indicate, the enlargement processing widens the edges, so the luminance variations become more gradual and the enlarged image appears blurred.
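The widening is easy to reproduce with any interpolating resampler; a sketch using linear interpolation (a real pixel number converter 38 would typically use a higher-order interpolation filter):

```python
def enlarge_1d(samples, zoom):
    """Enlarge a 1-D luminance profile by `zoom` using linear
    interpolation; edge transitions widen in proportion to `zoom`."""
    last = len(samples) - 1
    out = []
    for j in range(int(len(samples) * zoom)):
        x = j / zoom
        i = min(int(x), last - 1)
        f = x - i
        out.append((1 - f) * samples[i] + f * samples[i + 1])
    return out

# A one-interval edge becomes a two-interval (wider, more gradual) edge:
print(enlarge_1d([0, 0, 0, 100, 100, 100], 2.0))
```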
The image data on which enlargement or reduction processing has been performed are temporarily stored in the memory unit 4, then read out with a prescribed timing and sent to the edge corrector 5. The edge corrector 5 performs the edge correction process described in the first embodiment on the luminance data DY output from the memory unit 4, thereby correcting edges blurred by the enlargement processing.
According to the image processing apparatus of the present embodiment, since edges widened by enlargement processing of an image are corrected by the method described in the first embodiment, the image can be enlarged by an arbitrary ratio without reducing its sharpness. As in the first embodiment, it is also possible to add undershoots and overshoots having appropriate widths to the edges widened by the enlargement process, so that the sharpness of the enlarged image can be enhanced without overcorrecting or undercorrecting.
The enlargement or reduction processing of the image may also be performed before the image is converted to image data comprising luminance data and color difference data.
The frame memory controller 46 controls a frame memory 45 that temporarily stores the image processing control signal DYS, luminance data DY, and color difference data DCr, DCb. The frame memory controller 46 reads out the luminance data DY and color difference data DCr, DCb stored in the frame memory 45 with the timing shown in FIGS. 4(a) and 4(b), and outputs timing-adjusted luminance data QY and color difference data QCr, QCb. The frame memory controller 46 also outputs a timing-adjusted image processing control signal QYS by reading out the image processing control signal DYS temporarily stored in the frame memory 45 with a delay equal to the interval required for edge width correction processing in the vertical edge corrector 47. The luminance data QY and image processing control signal QYS output from the frame memory controller 46 are input to the vertical edge corrector 47.
The interpolation calculation unit 28 performs vertical edge width correction processing on the vertically aligned luminance data QYa output from the line delay A unit 23, and outputs corrected luminance data ZYa. The corrected luminance data ZYa are sent to selector 49 together with the uncorrected luminance data QYa and the image processing control signal QYS. For every pixel, according to the image processing control signal QYS, selector 49 selects either the edge-width-corrected luminance data ZYa or the uncorrected luminance data QYa, and outputs the selected data to the line delay B unit 29.
The line delay B unit 29 outputs luminance data QYb, for the number of pixels necessary for edge enhancement processing in the edge enhancer 51, to the edge detector 31 and enhancement value adder 33. The line delay C unit 50 delays the image processing control signal QYS by an interval equivalent to the number of lines necessary for the processing performed in the edge enhancer 51, and outputs the delayed image processing control signal QYSb to selector 52.
The enhancement value adder 33 outputs luminance data ZYb obtained by performing an edge enhancement process on the luminance data QYb in the vertical direction output from the line delay B unit 29. The edge-enhanced luminance data ZYb are sent to selector 52 together with the unenhanced luminance data QYb and the image processing control signal QYSb delayed by the line delay C unit 50. For every pixel, according to the image processing control signal QYSb, selector 52 selects either the edge-enhanced luminance data ZYb or the unenhanced luminance data QYb and outputs the selected data as luminance data ZY.
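The per-pixel selection performed by selectors 49 and 52 can be sketched as below; which polarity of the control signal means "suppress the correction" is an assumption, since the text only says the signal designates a specific area:

```python
def select_per_pixel(corrected, uncorrected, control):
    """For every pixel, pass the uncorrected luminance where the image
    processing control signal is asserted (e.g. over combined text) and
    the edge-corrected luminance elsewhere."""
    return [u if c else z
            for z, u, c in zip(corrected, uncorrected, control)]
```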
FIGS. 21(a) to 21(e) illustrate the operation of the image processing apparatus according to the present embodiment. FIGS. 21(a) to 21(c) show examples of the image data Da, image data Db, and combined image data Dc, respectively. FIGS. 21(d) and 21(e) show examples of the image processing control signal Dbs. The image data Da are combined with the image data Db, which contain text or graphic information, to produce the combined image data Dc.
The selectors 49, 52 in the edge width corrector 48 and edge enhancer 51 select between the uncorrected and edge-corrected luminance data according to image processing control signals QYS like the ones shown in FIGS. 21(d) and 21(e), thereby preventing text information and the edges around it from taking on an unnatural appearance due to unnecessary correction.
As described above, in the image processing apparatus according to the present embodiment, an arbitrary image including text or graphic information is combined with the image data, and while edge correction processing is performed on the combined image, an image processing control signal is generated that designates a specific area in the combined image. The corrected or uncorrected combined image data are selected and output, pixel by pixel, according to the image processing control signal, so that edges are corrected only in the necessary area.
The image processing apparatus according to the present invention comprises an edge correction means for correcting edge widths in an image, an enhancement value calculation means for calculating enhancement values for enhancing edges according to a high frequency component of the image with the corrected edge widths, and an edge enhancement means for enhancing the edges by adding the enhancement values to the image with the corrected edge widths. Therefore, appropriate correction processing can be performed on edges having various different luminance variations to enhance image sharpness without overcorrection or undercorrection.
Number | Date | Country | Kind |
---|---|---|---
2004-252212 | Aug 2004 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---
PCT/JP04/15397 | 10/19/2004 | WO | | 11/22/2006