The present invention relates to methods and apparatuses for processing images, and to image display apparatuses, whereby digitized images are partially magnified (enlarged) or reduced (shrunk) for display. More particularly, the invention relates to such apparatuses and methods that avoid the moiré that occurs in an output image when an image having relatively high spatial frequencies is supplied.
A projection display apparatus such as a rear projection TV suffers from image distortion resulting from the positional relationship between the screen and the projection light source, or from aberrations or the like that are generally inherent in any optical system. To correct the distortion, techniques are conventionally known that project light of an image created by applying the inverse of the distortion characteristic.
One such technique corrects the image data by electrical signal processing; for instance, a liquid crystal projector has been proposed that corrects keystone distortion by changing, for every predetermined number of scan lines, the number of pixels in the scan lines of the input image.
One method of varying the number of pixels interpolates pixel data between each pair of adjacent sample pixels in a scan line of the input image (refer to, for instance, Japanese Unexamined Patent Publication No. H08-102900, paragraphs 0012, 0014, 0027 and 0040; FIGS. 3, 4, 12 and 13).
A problem with the conventional technique described above, however, is that when image data containing a high-frequency (HF) component, such as a checkered pattern, is supplied, moiré occurs in the output image owing to aliasing caused by re-sampling the original image at the pixel positions obtained after conversion, severely degrading the displayed image.
In light of the foregoing, the present invention provides an image processing apparatus, an image processing method and an image display apparatus whereby image signals can be generated such that favorable display images are achieved even when digitized images are partially magnified or reduced, as in keystone distortion correction.
In order to overcome the foregoing problem and achieve this object, the image processing apparatus according to the present invention, which expands or reduces input image data supplied thereto for each area of the image data, comprises: a high-frequency (HF) component smoothing processor that generates smoothed-HF-component image data by smoothing HF components of the input image data; a partial magnification/reduction controller that creates partial magnification/reduction control information designating the positions of pixels in image data obtained after expanding or reducing the input image data for each area of the image data; and a pixel data generator that generates pixel data for the pixel positions designated by the partial magnification/reduction control information in the smoothed-HF-component image data, using pixel data in a neighborhood of the designated pixel positions.
According to the present invention, an advantageous effect is that even when a geometrical image distortion caused by a projection optical system is corrected through signal processing, favorable images with reduced moiré can be generated, because the HF components of the input image data are smoothed before the smoothed image is magnified or reduced for each area of the image data. These and other features, advantages and objects of the present invention will be further understood and appreciated by those skilled in the art by reference to the following drawings.
Preferred embodiments of a method and apparatus for processing images and of an image display apparatus according to the present invention will be described below in greater detail with reference to the figures. It will be understood by those skilled in the art that the present invention is not limited to the following description, and that various modifications may be made without departing from the spirit and scope of the present invention.
The image display apparatus 10 according to Embodiment 1 includes an input terminal 1, a high frequency (HF) component preprocessor 2, a partial magnification/reduction controller 3, a gray level calculator 4, and a display 5.
The input terminal 1 receives input image data Da, which is supplied to the HF component preprocessor 2. The preprocessor 2 is an HF-component smoothing processor that smooths HF components of the input image data Da to generate and deliver smoothed-HF-component image data Db. The partial magnification/reduction controller 3 delivers to the gray level calculator 4 partial magnification/reduction control information Sa that designates the pixel positions obtained after expanding/reducing the image data for each area of the data, based on a correction-amount command from a controller (not shown). The gray level calculator 4 is a pixel data generator that generates pixel data at a position designated by the control information Sa in the smoothed-HF-component image data Db, using pixels in a neighborhood of that position, and then outputs partially expanded/reduced image data Dc. The display 5 displays the image data Dc with brightness corresponding to the image data Dc.
Here, an internal configuration of the HF component preprocessor 2 will be described in greater detail.
The detector 12 detects HF components of the image data Da based on a predetermined criterion, thereby generating and delivering HF-component detection information Sb that indicates in which pixels of the data Da an HF component exists. The selection unit 13 selects, for each pixel, either the input image data Da or the smoothed image data Dd based on the detection information Sb, and delivers the selected image data as the image data Db.
The smoothing processor 11 is configured with, for instance, a mean filter or the like that outputs, as pixel data, the average value of the pixel data in a neighborhood of a pixel of interest (a given pixel).
For instance, a pixel of the coordinate 7 in
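As a concrete illustration of such neighborhood averaging, the following is a minimal sketch of a one-dimensional mean filter applied to one scan line; the three-pixel window and the edge clamping are assumptions, since the foregoing description does not fix them.

```python
def mean_filter_horizontal(line, radius=1):
    """Smooth one scan line by averaging each pixel with its horizontal
    neighbors. The window width (2*radius + 1) and the edge clamping are
    assumptions; the description only requires some neighborhood average."""
    n = len(line)
    out = []
    for x in range(n):
        lo = max(0, x - radius)
        hi = min(n, x + radius + 1)
        window = line[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Example: a checkered (alternating) scan line is flattened toward its mean.
print(mean_filter_horizontal([0, 255, 0, 255, 0, 255]))
```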
Next, the HF component detector 12 will be described in greater detail.
The pattern match determination unit 23 performs pattern match determination in order to find out in which pixels of the input image data an HF component exists, based on the quantization image data De and a plurality of preset pattern data (here, binary pattern data), and then outputs pattern match determination information (high-frequency (HF) component detection information) Sb.
In the case of
The determination unit 23 performs pattern matching based on the image data De and the preset binary pattern data.
For instance, when pattern matching using two reference pixels is performed, a determination is made as to whether either one of
Pattern matching using four or eight reference pixels is determined in a similar way, and the determination result is output as the pattern match determination information (HF-component detection information) Sb. In this way, comparing the binary-encoded image data with the binary pattern data allows an HF component contained in an input image to be detected.
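The binarize-then-match step might be sketched as follows; the fixed threshold and the specific binary patterns (0, 1, 0) and (1, 0, 1) for the two-reference-pixel case are assumptions used only for illustration.

```python
def binarize(line, threshold):
    """Quantization processor 22 (sketch): 1 where the pixel value is at or
    above the threshold Tb, 0 otherwise."""
    return [1 if v >= threshold else 0 for v in line]

# Binary patterns assumed for the two-reference-pixel case: the pixel of
# interest and its left/right neighbors alternate (an HF checkered pattern).
HF_PATTERNS = [(0, 1, 0), (1, 0, 1)]

def detect_hf(binary_line):
    """Pattern match determination unit 23 (sketch): flag a pixel as an HF
    component if its 3-pixel neighborhood matches a preset binary pattern."""
    flags = [False] * len(binary_line)
    for x in range(1, len(binary_line) - 1):
        triple = tuple(binary_line[x - 1:x + 2])
        flags[x] = triple in HF_PATTERNS
    return flags

line = [0, 255, 0, 255, 128, 130, 129]
print(detect_hf(binarize(line, threshold=128)))
```

Only the alternating portion of the line is flagged; the nearly flat portion at the end is not.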
Here, binary pattern data used for the pattern matching shown in
The image data selection unit 13 selectively outputs, for each pixel, either the input image data Da or the smoothed image data Dd according to the determination information (HF-component detection information) Sb. If the determination information (HF-component detection information) Sb indicates that HF components exist in a neighborhood of the pixel of interest in the input image data Da, the selection unit 13 selectively outputs the smoothed image data Dd. In contrast, if the information (HF-component detection information) Sb indicates that no HF components exist in the neighborhood of the pixel of interest in the data Da, the unit 13 selectively outputs the data Da.
As discussed above, by outputting the smoothed image data Dd for pixels containing HF components, smoothed-HF-component image data Db can be obtained in which the HF components contained in the input image data Da have been eliminated.
The selection unit 13 may not only selectively output, according to the pattern match determination information (HF-component detection information) Sb, either one of the input image data Da and the smoothed image data Dd, but also calculate a weighted average of the input image data Da and the image data Dd. In this case, the weights of the weighted average are controlled to calculate the smoothed-HF-component image data Db according to the information (HF-component detection information) Sb.
If the information (HF-component detection information) Sb indicates that HF components exist in the neighborhood of the pixel of interest of the input image data Da, the selection unit 13 increases the weighting factor applied to the smoothed image data Dd and decreases the weighting factor applied to the input image data Da. In contrast, if it indicates that no HF components exist in the neighborhood of the pixel of interest of the data Da, the weighting factor applied to the data Da is controlled to be greater, while the weighting factor applied to the data Dd is controlled to be smaller.
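The weighted-average variant described above might be sketched as follows; the particular weight values (0.8 versus 0.2) are merely illustrative assumptions, as the description leaves the actual factors open.

```python
def blend(Da_pixel, Dd_pixel, hf_detected,
          w_smooth_when_hf=0.8, w_smooth_when_no_hf=0.2):
    """Image data selection unit 13 as a weighted average (sketch).
    When an HF component is detected near the pixel of interest, the
    smoothed data Dd receives the larger weight; otherwise the input data
    Da dominates. The weight values are assumptions for illustration."""
    w = w_smooth_when_hf if hf_detected else w_smooth_when_no_hf
    return w * Dd_pixel + (1.0 - w) * Da_pixel

print(blend(255, 128, hf_detected=True))   # mostly the smoothed value
print(blend(255, 128, hf_detected=False))  # mostly the original value
```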
When the pattern match determination unit 23 determines that the interval Za contains HF components, while the interval Zb does not, in terms of, e.g., the input image data Da shown in
Note that in an example shown in
Next, the processing of each color in the pattern match determination unit 23 and the image data selection unit 13 will be described.
Each of the red color pattern match determination subunit 23r, the green color pattern match determination subunit 23g, and the blue color pattern match determination subunit 23b of the pattern match determination unit 23a performs pattern match determination for its respective color element of red, green, or blue.
Each of the red determination subunit 23r, the green determination subunit 23g, and the blue determination subunit 23b generates its determination result for the respective color element as red color pattern match determination information (HF-component detection information) Sbr, green color pattern match determination information (HF-component detection information) Sbg, or blue color pattern match determination information (HF-component detection information) Sbb, and delivers that information to the selection subunit corresponding to the particular color in the image data selection unit 13a.
The red color image data selection subunit 13r, the green color image data selection subunit 13g, and the blue color image data selection subunit 13b of the image data selection unit 13a each select image data for the respective color element based on the pattern match determination information corresponding to that color element. This allows the smoothed-HF-component image data Db to be obtained by removing, for each color, the HF components contained in the input image data Da, and also allows a color bias in the smoothed-HF-component image data to be reduced.
In this way, when the image data of all the color elements is selected based on the pattern match determination information for a particular color element, coloration caused by smoothing the image of only that color element can be suppressed in situations where the input image contains an HF component of that particular color element.
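A compact sketch of the per-color selection described above follows; the RGB tuple representation and the optional common flag that applies one decision to every color element (the variant just mentioned) are assumptions for illustration.

```python
def select_per_color(Da_rgb, Dd_rgb, hf_flags_rgb, common_flag=None):
    """Image data selection unit 13a (sketch). Da_rgb / Dd_rgb are (r, g, b)
    tuples of the input and smoothed pixel data; hf_flags_rgb holds the
    per-color detection results Sbr, Sbg, Sbb. If common_flag is given,
    the same decision is applied to every color element, which suppresses
    coloration when only one color element contains an HF component."""
    out = []
    for c in range(3):
        flag = common_flag if common_flag is not None else hf_flags_rgb[c]
        out.append(Dd_rgb[c] if flag else Da_rgb[c])
    return tuple(out)

# Per-color selection: only the red element is smoothed.
print(select_per_color((255, 40, 40), (128, 40, 40), (True, False, False)))
# Common decision: all elements smoothed together when any color has HF.
print(select_per_color((255, 40, 40), (128, 30, 30), (True, False, False),
                       common_flag=True))
```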
The controller 3 generates and outputs the partial magnification/reduction control information Sa for expanding/reducing the image data for each area of the data, in order to correct the image data to an arbitrary shape. The control information Sa contains data designating the pixel positions obtained after expanding/reducing the image data for each area of the data.
The gray level calculator 4 computes a grayscale value at a pixel position indicated by the partial magnification/reduction control information Sa. When a pixel position indicated by the control information Sa is located where no pixel of the image data Db exists, the gray level calculator 4 computes the grayscale value at that position using pixels in a neighborhood of the position in the image data Db.
As described above, the partial magnification/reduction controller 3 produces data designating the pixel positions obtained after expanding or reducing the image data for each area of the data, and the gray level calculator 4 newly computes grayscale values at those pixel positions, whereby an input image is partially magnified or reduced so as to correct it to an arbitrary shape. Here, the smoothed-HF-component image data Db supplied to the gray level calculator 4 is image data from which the HF components contained in the input image data Da have been removed by the HF component preprocessor 2. For this reason, the occurrence of moiré, which results from aliasing when an input image containing HF components is resampled at the pixel positions obtained after conversion, can be suppressed in the gray level calculator 4.
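As a sketch of how the gray level calculator 4 might derive a grayscale value at a designated, generally non-integer pixel position, linear interpolation between the two nearest pixels is shown below; linear interpolation is an assumption, since the description only requires that neighboring pixels be used.

```python
def gray_level_at(Db_line, position):
    """Gray level calculator 4 (sketch): grayscale value at a fractional
    pixel position designated by the control information Sa, computed from
    the two neighboring pixels of the smoothed data Db by linear
    interpolation (an assumed interpolation method)."""
    left = int(position)
    right = min(left + 1, len(Db_line) - 1)
    frac = position - left
    return (1.0 - frac) * Db_line[left] + frac * Db_line[right]

Db_line = [10, 20, 30, 40]
# Reducing the line: resample at positions spaced wider than one pixel.
positions = [0.0, 1.5, 3.0]
print([gray_level_at(Db_line, p) for p in positions])  # [10.0, 25.0, 40.0]
```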
Next, another processing example of the quantization processor 22 and the threshold calculator 21 will be described. In
In this way, the threshold calculator 21 produces, as the threshold data Tb, the average grayscale value of the pixels in the neighborhood of the pixel of interest in the input image data Da, whereby HF components can be correctly detected even when HF components differing in grayscale value exist.
For comparison, with respect to the input image data Da as shown in
In this situation, the quantization image data De supplied from the quantization processor 22 is one such as shown in
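A minimal sketch of such an adaptive threshold, in which the threshold data Tb tracks the local average, is given below; the five-pixel window and the sample values are assumptions chosen only to show that dark and bright checkered regions both binarize to the same alternating pattern, whereas a single fixed threshold would not separate them.

```python
def local_threshold(line, x, radius=2):
    """Threshold calculator 21 (sketch): threshold data Tb for the pixel of
    interest at index x is the mean of the pixels in its neighborhood.
    The window size (2*radius + 1) is an assumption."""
    lo = max(0, x - radius)
    hi = min(len(line), x + radius + 1)
    window = line[lo:hi]
    return sum(window) / len(window)

def binarize_adaptive(line):
    """Quantization processor 22 with an adaptive threshold (sketch)."""
    return [1 if v >= local_threshold(line, x) else 0
            for x, v in enumerate(line)]

# Both a dark checkered pattern and a bright one binarize to the same
# alternating 0/1 sequence, because the threshold follows the local mean.
print(binarize_adaptive([0, 60, 0, 60, 0, 60]))           # [0, 1, 0, 1, 0, 1]
print(binarize_adaptive([180, 240, 180, 240, 180, 240]))  # [0, 1, 0, 1, 0, 1]
```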
Yet another example of processing by the quantization processor 22 and the threshold calculator 21 will be described. While in the example of
Since the pattern match determination unit 23 detects, as an HF component, a pattern in which the values zero and one of the quantization image data De alternate from pixel to pixel, image data quantized as two will not be detected as an HF component.
As in an input image data Da as shown in
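The following is a minimal sketch of such three-level quantization; the use of two thresholds placed around a reference level, and the margin of 20 gray levels, are assumptions chosen only to illustrate why pixels quantized as two break the alternating zero/one pattern and are therefore not flagged as HF components.

```python
def quantize_three_level(line, mean, margin=20):
    """Quantization processor 22, three-level variant (sketch).
    Pixels well above the reference level become 1, pixels well below it
    become 0, and pixels within +/-margin of it become 2. The two
    thresholds (mean - margin, mean + margin) are assumptions."""
    out = []
    for v in line:
        if v >= mean + margin:
            out.append(1)
        elif v <= mean - margin:
            out.append(0)
        else:
            out.append(2)
    return out

# A strong checkered pattern still alternates 0/1, but a low-amplitude
# ripple around the reference level is quantized as 2 and will not match
# the alternating HF patterns of the determination unit 23.
print(quantize_three_level([0, 255, 0, 255], mean=128))      # [0, 1, 0, 1]
print(quantize_three_level([120, 136, 120, 136], mean=128))  # [2, 2, 2, 2]
```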
While Embodiment 1 has described the case where the HF component preprocessor 2 performs preprocessing using pixels adjacent to the pixel of interest in the horizontal direction, HF component preprocessing may also be performed using pixels adjacent in the vertical direction, in addition to those in the horizontal direction, based on the smoothed-HF-component image data Db supplied from the image data selection unit 13. In this case, because the preprocessing using vertically adjacent pixels differs only in the reference pixels and the rest of the processing is similar to that of the selection unit 13 discussed above, a detailed description thereof will be omitted. By performing preprocessing using horizontally adjacent pixels and then performing further preprocessing using vertically adjacent pixels, a pattern that cannot be detected by horizontal preprocessing alone, for instance a horizontal striped pattern, can be detected.
In Embodiment 2, an image data delay memory is provided that serves to perform such HF-component preprocessing, as will be described below. Note that since the configurations other than the HF component preprocessor in the image display apparatus according to Embodiment 2 are substantially similar to those in Embodiment 1, the same reference numerals are applied to units having configurations similar to Embodiment 1, and the embodiment will be described hereinafter with particular emphasis on the HF component preprocessor.
The image display apparatus according to Embodiment 2 includes an HF component preprocessor 2a in place of the HF component preprocessor 2 according to Embodiment 1.
The image data delay memory 31, which is configured with a memory that retains, for delay, a single line or a plurality of lines of the input image data Da, delivers image data of a plurality of lines as plural-line image data Df.
The smoothing processor 11a performs smoothing by using adjacent pixels in both the vertical and horizontal directions. A detailed description of the processor will be omitted because it is similar to that of Embodiment 1 except that smoothing is performed using not only the adjacent pixels in the horizontal direction but also those in the vertical direction. Descriptions of the functionalities of the detector 12 and the selection unit 13 will also be omitted because they are similar to those of Embodiment 1.
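A minimal sketch of smoothing that uses the plural-line image data Df (vertical as well as horizontal neighbors) follows; the 3 x 3 averaging window and the edge clamping are assumptions.

```python
def mean_filter_2d(Df, radius=1):
    """Smoothing processor 11a (sketch): average each pixel with its
    neighbors in both the vertical and horizontal directions, using the
    lines retained by the image data delay memory 31. The 3x3 window
    (radius=1) and the edge clamping are assumptions."""
    h, w = len(Df), len(Df[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    acc += Df[yy][xx]
                    cnt += 1
            out[y][x] = acc / cnt
    return out

# A two-dimensional checkerboard is flattened toward its mean value.
Df = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
print(mean_filter_2d(Df))
```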
Next, the detector 12 according to the present embodiment will be described. The internal configuration of the detector 12, as shown in a block diagram, is similar to that of Embodiment 1 as shown in
Further, in the present embodiment, the determination unit 23 in the detector 12 performs pattern match determination by using adjacent pixels of a pixel of interest in the vertical and horizontal directions.
Further, a determination may be made using a plurality of different patterns, in such a manner that it is determined whether the data coincides with any one of a binary pattern for detecting an HF checkered pattern, as shown in
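The determination using vertical and horizontal neighbors could be sketched as below; the two 3 x 3 checkerboard binary patterns are an assumed example of the preset patterns mentioned above, not the actual patterns of the referenced figures.

```python
# Assumed 3x3 binary patterns for detecting an HF checkered pattern; the
# actual patterns used by the determination unit 23 depend on the figures
# referenced above.
CHECKER_PATTERNS = [
    ((0, 1, 0), (1, 0, 1), (0, 1, 0)),
    ((1, 0, 1), (0, 1, 0), (1, 0, 1)),
]

def is_checkered(De, y, x):
    """Return True if the 3x3 binary neighborhood of (y, x) in the
    quantization image data De coincides with one of the preset
    checkerboard patterns (sketch)."""
    block = tuple(tuple(De[y + dy][x + dx] for dx in (-1, 0, 1))
                  for dy in (-1, 0, 1))
    return block in CHECKER_PATTERNS

De = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 0]]
print(is_checkered(De, 1, 1))  # True: alternating checkered neighborhood
print(is_checkered(De, 2, 2))  # False: not an alternating pattern
```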
As discussed above, by performing the pattern matching using adjacent pixels in both the vertical and horizontal directions, only the HF components that cause moiré, such as a checkered pattern, are detected; even when the input image contains finely varying patterns such as text, such patterns are not mistakenly detected as HF components to be removed. It will be understood by those who practice the invention and those skilled in the art that various modifications and improvements may be made to the invention without departing from the spirit of the disclosed concept. The scope of protection afforded is to be determined by the claims and by the breadth of interpretation allowed by law.
Number | Date | Country | Kind |
---|---|---|---|
2007-315816 | Dec 2007 | JP | national |
2008-268479 | Oct 2008 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5016118 | Nannichi | May 1991 | A |
5418899 | Aoki et al. | May 1995 | A |
5771107 | Fujimoto et al. | Jun 1998 | A |
5920649 | Yasuda et al. | Jul 1999 | A |
7092045 | Haruna et al. | Aug 2006 | B2 |
8014601 | Takahashi | Sep 2011 | B2 |
8331731 | Kashibuchi | Dec 2012 | B2 |
20080278738 | Miyake | Nov 2008 | A1 |
20090085896 | Nagase et al. | Apr 2009 | A1 |
20090175558 | Moriya et al. | Jul 2009 | A1 |
20090268964 | Takahashi | Oct 2009 | A1 |
Number | Date | Country |
---|---|---|
2-86369 | Mar 1990 | JP |
3-80771 | Apr 1991 | JP |
5-328106 | Dec 1993 | JP |
7-50752 | Feb 1995 | JP |
8-102900 | Apr 1996 | JP |
2002-369071 | Dec 2002 | JP |
2003-179737 | Jun 2003 | JP |
2005-332154 | Dec 2005 | JP |
Entry |
---|
Sekine H., et al., “Method and device for converting picture density,” machine-translated Japanese application JP07-050752, published Feb. 1995. |
Number | Date | Country | |
---|---|---|---|
20090147022 A1 | Jun 2009 | US |