This invention relates to inverse halftoning.
Digital halftoning recreates an original shaded image using light and dark pixels. Varying the density and placement of the light and dark pixels in the resulting digitally-halftoned image approximates shading gradations in the original image. Inverse halftoning is a process of enhancing a digitally-halftoned image so that it more closely resembles the original shaded image.
Referring to the figure, process 5 performs inverse halftoning on a digitally-halftoned image.
In the basic two-part process, process 5 lightly smoothes (101) the halftoned image and then performs (103) lowpass filtering on the smoothed image. To lightly smooth the halftoned image, process 5 applies a two-dimensional (e.g., N×N, where N>1) filter to each pixel in the image.
In one embodiment, a strongly-peaked 3×3 filter is used to perform the light smoothing. Examples of filters that produce different shading effects are shown in (A) and (B) below.
A filter is applied to an image as follows. The filter matrix is overlaid on a block of N×N (e.g., 3×3) pixels. Each numerical value in the filter matrix is multiplied by a corresponding underlying pixel value. The resulting products are added together and multiplied by the reciprocal of the sum of the filter values (e.g., 1/36 and 1/20 above). The resulting value is stored as a pixel in an intermediary smoothed image, at the same location as the center pixel of the N×N block that was filtered in the original image.
To process other pixels, the filter matrix is moved to an adjacent pixel in the original image. The filter matrix may be moved horizontally or vertically, as illustrated in the figure.
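A minimal sketch of this smoothing step, in Python with NumPy, is given below. The 3×3 kernel values are illustrative assumptions, not the (A) or (B) matrix of the embodiment; they are chosen to be strongly peaked at the center and to sum to 20, so the 1/20 normalization mentioned above applies.

```python
import numpy as np

def smooth(image, kernel):
    """Lightly smooth `image` by sliding `kernel` over it: each kernel value is
    multiplied by the underlying pixel, the products are summed, and the sum is
    scaled by the reciprocal of the sum of the kernel values."""
    k = kernel.shape[0]                       # kernel is k x k, k odd
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    scale = 1.0 / kernel.sum()                # e.g., 1/36 or 1/20
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            block = padded[y:y + k, x:x + k]  # N x N block under the filter
            out[y, x] = scale * (kernel * block).sum()
    return out

# Illustrative strongly-peaked 3x3 kernel (an assumption, not the patent's
# (A) or (B) matrix); its entries sum to 20.
PEAKED_3X3 = np.array([[1, 2, 1],
                       [2, 8, 2],
                       [1, 2, 1]])

# Example usage: smoothed = smooth(halftoned, PEAKED_3X3)
```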
Process 5 performs (103) lowpass filtering using a one-dimensional lowpass filter having a relatively sharp high-frequency cutoff. Examples of well-known lowpass filters that have sharp high-frequency cutoffs, and that may be used, include Hamming and Kaiser filters. In this embodiment, a one-dimensional filter with N (N≧1) taps is used; however, a two-dimensional filter, whose dimensions in some, but not all, instances will be N×N, may be used to perform the lowpass filtering. The lowpass filter may have any number of taps, e.g., 11, 7, 3, etc., and may be weighted by a sinc function to vary the filter width. The lowpass filter is moved over the smoothed image, and filters the smoothed image, in much the same way as the 3×3 filter described above.
However, unlike the case of filtering with a 3×3 two-dimensional matrix, filtering with a one-dimensional matrix is done by making two passes, one horizontal and one vertical, over the image. First, the lowpass matrix is overlaid along either a block of row pixels and moved horizontally or a block of column pixels and moved vertically to create an intermediate image. On the second pass, the filter is applied to the intermediate image along the direction not used in the first pass. In each application, the filter is overlaid on a block of pixels, each filter value is multiplied by the corresponding underlying pixel value, and the resulting products are summed to produce a value.
The value is assigned to a pixel in the enhanced image at the same location as the pixel underlying the center of the lowpass filter in the smoothed image, as shown in the figure.
To process other pixels in the smoothed image, the lowpass filter is moved to an adjacent pixel in the smoothed image. That adjacent pixel is then processed using the two-pass filtering described above. As each pixel in the smoothed image is processed, the resulting filtered pixel value is stored at a corresponding location in the enhanced image. By repeating this process for plural pixels in the smoothed image, process 5 generates (104) an enhanced, inverse-halftoned version of the smoothed image.
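The two-pass, separable lowpass filtering might be sketched as follows. The 7-tap Hamming-windowed sinc kernel and its cutoff are assumptions standing in for the specific matrix values referred to above; any one-dimensional lowpass filter with a sharp high-frequency cutoff could be substituted.

```python
import numpy as np

def lowpass_kernel(taps=7, cutoff=0.25):
    """Hamming-windowed sinc kernel: one plausible 1-D lowpass filter with a
    relatively sharp cutoff (tap count and cutoff fraction are assumptions)."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(2 * cutoff * n) * np.hamming(taps)
    return h / h.sum()                         # normalize to unit gain

def lowpass_two_pass(smoothed, kernel):
    """Apply the 1-D kernel in two passes: horizontally along each row to form
    an intermediate image, then vertically along each column of that image."""
    intermediate = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, smoothed)
    enhanced = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, intermediate)
    return enhanced

# Example usage: enhanced = lowpass_two_pass(smoothed, lowpass_kernel())
```

Because the kernel is normalized to unit gain, flat regions of the smoothed image pass through unchanged while high-frequency halftone texture is attenuated.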
Additional features may be incorporated into process 5; these features are shown in dotted lines in the figure.
To detect edges in the smoothed image, process 5 applies an edge filter to the smoothed image. The edge filter may be an N×N filter, which processes pixels in substantially the same manner as the N×N filter described above. That is, the filter matrix is overlaid on a block of N×N (e.g., 3×3) pixels. Each numerical value in the filter is multiplied by a corresponding underlying pixel value. The resulting products are added together to obtain a value. In this case, unlike above, the value is not multiplied by the reciprocal of the sum of the filter values. If the resulting value has a large magnitude, that is an indication of an edge in the image.
One example of an edge detection filter matrix that may be used in this embodiment is described below.
Applying this edge detection filter to a flat (uniform) region of pixels would result in a near-zero value, since the pixels underneath the first column (−1's) would cancel the pixels underneath the third column (1's). If an edge is present at the center column, applying the filter over the center pixel and its eight adjacent pixels would result in a value with non-zero magnitude, because pixel values on one side of the center column differ from those on the other side. The higher this magnitude, the more pronounced the edge.
Process 5 compares the value that results from applying the edge detection filter to a threshold value. Pixels whose edge values exceed the threshold are excluded from lowpass filtering, so that pronounced edges are preserved. The threshold value is set beforehand, as desired, to preserve some edges and filter over others. The higher the threshold is set, the more edges process 5 filters over during lowpass filtering (103). That is, if the threshold is set relatively high, only highly-pronounced edges are excluded from lowpass filtering, whereas if the threshold is set relatively low, less-pronounced edges are also excluded from lowpass filtering.
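A sketch of the edge detection and thresholding under these assumptions follows. The kernel is inferred from the description above (−1's in the first column, 1's in the third, zeros in the center) and is not necessarily the exact matrix of the embodiment; note that, unlike the smoothing filter, the response is not normalized.

```python
import numpy as np

# Vertical-edge detector consistent with the description above: the -1's in
# the first column cancel the +1's in the third column over flat regions.
# (Assumed values; the exact matrix of the embodiment is not reproduced.)
EDGE_KERNEL = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])

def edge_mask(smoothed, threshold):
    """Return a boolean mask that is True where the magnitude of the edge
    response exceeds `threshold`; such pixels would be excluded from lowpass
    filtering so that pronounced edges are preserved."""
    padded = np.pad(smoothed.astype(float), 1, mode="edge")
    response = np.zeros(smoothed.shape, dtype=float)
    for y in range(smoothed.shape[0]):
        for x in range(smoothed.shape[1]):
            block = padded[y:y + 3, x:x + 3]
            # No normalization here, unlike the smoothing filter above.
            response[y, x] = (EDGE_KERNEL * block).sum()
    return np.abs(response) > threshold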
Process 5 may also apply (105) a median filter to the resulting enhanced image. The median filter is designed to reduce artifacts, such as spots or other aberrations, in the enhanced image. The median filter may also be an N×N filter that is moved over the enhanced image in the manner described above; however, rather than computing a weighted sum, it replaces each pixel with the median of the underlying N×N block, further smoothing the enhanced image. If desired, the median filter may be applied only to non-edge portions of the enhanced image. In this case, an edge detection filter of the type described above is used to detect edges in the enhanced image prior to applying the median filter.
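A minimal sketch of such an N×N median filter, optionally restricted to non-edge pixels, might be:

```python
import numpy as np

def median_filter(enhanced, n=3, skip_mask=None):
    """Replace each pixel with the median of its n x n neighborhood to reduce
    spots and other artifacts. If `skip_mask` (e.g., an edge mask) is given,
    pixels where it is True are left untouched."""
    pad = n // 2
    padded = np.pad(enhanced.astype(float), pad, mode="edge")
    out = enhanced.astype(float)
    for y in range(enhanced.shape[0]):
        for x in range(enhanced.shape[1]):
            if skip_mask is not None and skip_mask[y, x]:
                continue                       # leave edge pixels alone
            out[y, x] = np.median(padded[y:y + n, x:x + n])
    return out

# Example usage: cleaned = median_filter(enhanced, 3, edge_mask(enhanced, 200))
```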
Although a personal computer is shown in the figure, process 5 is not limited to use with any particular hardware or software configuration; it may be implemented using one or more computer programs executing on programmable machines.
Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language. The language may be a compiled or an interpreted language.
Each computer program may be stored on a storage medium/article (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform process 5. Process 5 may also be implemented as a machine-readable storage medium, configured with a computer program, where, upon execution, instructions in the computer program cause a machine to operate in accordance with process 5.
The invention is not limited to the specific embodiments described above. For example, a Sobel gradient operator may be used for edge detection instead of, or in addition to, the N×N matrix described above. The invention is not limited to the specific filters described herein, to their numerical values, or to their dimensions. The invention can be used to perform inverse halftoning on images received over a network or on any other types of halftoned images.
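For illustration only, a Sobel-based alternative edge measure could look like the following sketch; the standard Sobel kernels are shown, and combining them into a gradient magnitude is an assumption about how the operator might be used here.

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(image):
    """Gradient magnitude from the two Sobel responses; could replace or
    supplement the single edge kernel sketched earlier."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape, dtype=float)
    gy = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            block = padded[y:y + 3, x:x + 3]
            gx[y, x] = (SOBEL_X * block).sum()
            gy[y, x] = (SOBEL_Y * block).sum()
    return np.hypot(gx, gy)
```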
Other embodiments not described herein are also within the scope of the following claims.