Color imaging devices sometimes use halftone screens to combine a finite number of colors into what appears, to the human eye, as many shades of color. The halftone process converts the different tones of an image into dots and clusters of dots of a few colors. In general, halftone screens of as few as three colors may suffice to produce a substantial majority of visible colors and brightness levels. For many color imaging devices, these three colors are cyan, magenta, and yellow. These three colors are subtractive in that they remove unwanted components from white light (e.g., light reflected from a sheet of paper). The yellow layer absorbs blue light, the magenta layer absorbs green light, and the cyan layer absorbs red light. In many cases, a fourth color, black, is added to deepen the dark areas and increase contrast.
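As a non-authoritative illustration of this subtractive relationship, the following sketch converts an additive RGB value into CMY plus a black component. The conversion and the black-generation rule shown here are the common idealized model, offered for illustration only; real devices use device-specific color profiles.

```python
def rgb_to_cmyk(r, g, b):
    """Idealized conversion from additive RGB to subtractive CMYK.

    Cyan absorbs red, magenta absorbs green, yellow absorbs blue, so
    each subtractive value is the complement of its additive primary.
    A black (K) component is extracted to deepen dark areas.
    Inputs and outputs are 8-bit values (0-255).
    """
    c, m, y = 255 - r, 255 - g, 255 - b
    k = min(c, m, y)              # shared darkness becomes the black layer
    if k == 255:                  # pure black: nothing left to scale
        return 0, 0, 0, 255
    scale = 255 / (255 - k)       # renormalize remaining color after black removal
    return (round((c - k) * scale), round((m - k) * scale),
            round((y - k) * scale), k)
```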
In order to print different colors, an image is separated into several monochrome layers, one for each colorant, each of which is then halftoned. Thresholding with halftone screens converts colorant values into spatial dot patterns. The resolution of the halftone screen may be expressed in lines per inch (LPI) and is distinct from the number of dots per inch (DPI) reproducible by an image forming device. In many cases, these monochrome halftone screens are overlaid at different angles to reduce interference effects. The screen resolution used for each halftone layer is usually sufficient to produce a continuous tone when viewed by the human eye.
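Thresholding one colorant layer against a halftone screen may be sketched as follows. The 4×4 Bayer matrix used here is a common ordered-dither screen chosen purely for illustration; it is not a screen specified by this disclosure.

```python
import numpy as np

# Classic 4x4 Bayer ordered-dither matrix; entries 0..15 set the dot
# growth order within each screen cell (illustrative choice of screen).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def halftone(channel):
    """Threshold one 8-bit colorant layer against a tiled halftone screen.

    Returns a boolean dot pattern: True where a dot is printed.
    """
    h, w = channel.shape
    # Map matrix entries into the 0-255 intensity range.
    screen = (BAYER4 + 0.5) * (256 / BAYER4.size)
    tiled = np.tile(screen, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return channel > tiled
```

A mid-gray layer produces dots over roughly half the cell positions, which is how a binary device approximates a continuous tone at viewing distance.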
Digital scanning of halftone images sometimes produces unwanted effects such as moiré patterns and blurred edges. Blurred edges are most noticeable with fine features, such as text, and may result from limited scan resolution or scattering of light from the scanner's illumination source. A moiré pattern is generally defined as an interference pattern created when two grids (halftone patterns in the case of color printers) are overlaid on one another. The moiré effect is more pronounced when the grids are at certain angles or when they have slightly different mesh sizes. In the case of scanned images, moiré patterns may be caused by differences in resolution between the scanner and the ordered halftone screen patterns of the original image. Moiré effects are often more noticeable over large halftone areas and usually appear as unwanted spotty or checkerboard patterns.
Blurring (e.g., low pass) filters may remove moiré effects at the expense of making blurred edges and details appear even more indistinct. Conversely, the appearance of blurred details can be improved using conventional sharpening (e.g., high pass or unsharp mask) filters at the expense of making moiré effects more pronounced. Adaptive filters may be customized to account for the screen frequencies in the original document. However, both the detection of the input halftone frequencies and the frequency-domain filtering itself can require significant computational effort. Thus, existing techniques may not adequately suppress interference effects while preserving or sharpening fine detail features.
Embodiments disclosed herein are directed to digital image processing algorithms to reduce interference artifacts while preserving or improving detailed features. Digital images comprise a plurality of pixels characterized by one or more pixel intensities. The algorithms disclosed herein examine pixel intensity variations to classify pixels according to their type. Then, different filters may be applied to the different pixel types. For instance, smoothing or descreening filters may be applied to areas characterized by low intensity variations while sharpening filters may be applied to areas characterized by high intensity variations.
Thus, for each pixel, a first action may comprise determining whether the magnitude of pixel intensity variations over a first window of a first size applied at each pixel satisfies a first predetermined condition. Then, those pixels satisfying the first predetermined condition are placed in a first category. As an example, a pixel may satisfy the first predetermined condition if pixel intensity variations over the first window vary by less than a predetermined threshold. Pixel intensity variations may be measured by comparing intensities summed across rows, columns, or diagonals of the first window.
Pixels that do not satisfy the first predetermined condition may then be analyzed to determine if the magnitude of pixel intensity variations over a second window of a second size applied at each of these pixels satisfies a second predetermined condition. Again, as an example, a pixel may satisfy the second predetermined condition if pixel intensity variations over the window of the second size vary by more than a predetermined threshold. Pixel intensity variations may be measured by comparing intensities of pixel pairs disposed on opposite sides or opposite corners of the second window. Further conditions may be imposed on pixel intensity variations, including, for example, spatial correlation, in order to further distinguish text and line edges from halftone dot clusters. Window sizes may be associated with the filters to be applied. For example, to minimize artifacts, windows would typically be at least as large as the associated filters, where filter sizes are determined by the extent of pixel correction desired, as will be understood by those skilled in the art.
The various embodiments disclosed herein are directed to devices and methods for classifying and filtering regions of a digital image to remove artifacts from halftone areas while preserving or sharpening detail features. The process may be applied to some or all pixels of an image and involves classifying pixels as belonging to one or more categories. For example, a pixel may be classified as belonging to a halftone category or a detail category. Pixels that are classified in a given category may be omitted from further classification analysis. Appropriate filtering may then be applied to pixels according to their classification.
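The classify-then-filter flow described above might be organized as in the following sketch, where the classification predicates and the filters are placeholder callbacks standing in for the stage-specific routines detailed later; all names here are illustrative assumptions, not interfaces from this disclosure.

```python
import numpy as np

def classify_and_filter(image, is_halftone, is_detail,
                        smooth_filter, sharpen_filter):
    """Classify each pixel and apply the filter for its category.

    `is_halftone` and `is_detail` are predicates taking (image, y, x);
    `smooth_filter` and `sharpen_filter` return the filtered intensity
    at (y, x). All four are placeholders for the stage-specific
    routines. Pixels placed in the first category are omitted from
    further analysis; unclassified pixels pass through unchanged.
    """
    out = image.astype(float)     # astype copies, so the input is untouched
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            if is_halftone(image, y, x):        # first category: descreen
                out[y, x] = smooth_filter(image, y, x)
            elif is_detail(image, y, x):        # second category: sharpen
                out[y, x] = sharpen_filter(image, y, x)
    return out
```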
The processing techniques disclosed herein may be implemented in a variety of computer processing systems. For instance, the disclosed image processing technique may be executed by a computing system 100 such as that generally illustrated in
The exemplary computing system 100 shown in
An interface cable 38 is also shown in the exemplary computing system 100 of
With regards to the processing techniques disclosed herein, certain embodiments may permit operator control over image processing to the extent that a user may select certain image areas or filter settings that are used in the image conversion. Accordingly, the user interface components such as the user interface panel 22 of the multifunction device 10 and the display 26, keyboard 34, and pointing device 36 of the computer 30 may be used to control various processing parameters. As such, the relationship between these user interface devices and the processing components is more clearly shown in the functional block diagram provided in
The exemplary embodiment of the multifunction device 10 also includes a modem 27, which may be a fax modem compliant with commonly used ITU and CCITT compression and communication standards such as the ITU-T series V recommendations and Class 1-4 standards known by those skilled in the art. The multifunction device 10 may also be coupled to the computer 30 with an interface cable 38 coupled through a compatible communication port 40, which may comprise a standard parallel printer port or a serial data interface such as USB 1.1, USB 2.0, IEEE-1394 (including, but not limited to 1394a and 1394b) and the like.
The multifunction device 10 may also include integrated wired or wireless network interfaces. Therefore, communication port 40 may also represent a network interface, which permits operation of the multifunction device 10 as a stand-alone device not expressly requiring a host computer 30 to perform many of the included functions. A wired communication port 40 may comprise a conventionally known RJ-45 connector for connection to a 10/100 LAN or a 1/10 Gigabit Ethernet network. A wireless communication port 40 may comprise an adapter capable of wireless communications with other devices in a peer mode or with a wireless network in an infrastructure mode. Accordingly, the wireless communication port 40 may comprise an adapter conforming to wireless communication standards such as Bluetooth®, 802.11x, 802.15 or other standards known to those skilled in the art.
The multifunction device 10 may also include one or more processing circuits 48 and system memory 50, which generically encompasses RAM and/or ROM for system operation and code storage, as represented by numeral 52. The system memory 50 may suitably comprise a variety of devices known to those skilled in the art such as SDRAM, DDRAM, EEPROM, Flash Memory, and perhaps a fixed hard drive. Those skilled in the art will appreciate and comprehend the advantages and disadvantages of the various memory types for a given application.
Additionally, the multifunction device 10 may include dedicated image processing hardware 54, which may be a separate hardware circuit or may be included as part of other processing hardware. For example, image processing and filtering may be implemented via stored program instructions for execution by one or more Digital Signal Processors (DSPs), ASICs or other digital processing circuits included in the processing hardware 54. Alternatively, stored program code 52 may be stored in memory 50, with the image processing techniques described herein executed by some combination of processor 48 and processing hardware 54, which may include programmed logic devices such as PLDs and FPGAs. In general, those skilled in the art will comprehend the various combinations of software, firmware, and/or hardware that may be used to implement the various embodiments described herein.
In the exemplary computer 30 shown, the CPU 56 is connected to the core logic chipset 58 through a host bus 57. The system RAM 60 is connected to the core logic chipset 58 through a memory bus 59. The video graphics controller 62 is connected to the core logic chipset 58 through an AGP bus 61 or the primary PCI bus 63. The PCI bridge 64 and IDE/EIDE controller 66 are connected to the core logic chipset 58 through the primary PCI bus 63. A hard disk drive 72 and the optical drive 32 discussed above are coupled to the IDE/EIDE controller 66. Also connected to the PCI bus 63 are a network interface card (“NIC”) 68, such as an Ethernet card, and a PCI adapter 70 used for communication with the multifunction device 10 or other peripheral device. Thus, PCI adapter 70 may be a complementary adapter conforming to the same or similar protocol as communication port 40 on the multifunction device 10. As indicated above, PCI adapter 70 may be implemented as a USB or IEEE 1394 adapter. The PCI adapter 70 and the NIC 68 may plug into PCI connectors on the computer 30 motherboard (not illustrated). The PCI bridge 64 connects over an EISA/ISA bus or other legacy bus 65 to a fax/data modem 78 and an input-output controller 74, which interfaces with the aforementioned keyboard 34, pointing device 36, floppy disk drive (“FDD”) 28, and optionally a communication port such as a parallel printer port 76. As discussed above, a one-way communication link may be established between the computer 30 and the multifunction device 10 or other printing device through a cable interface indicated by dashed lines in
Relevant to the digital image processing techniques disclosed herein, digital images may be read from a number of sources in the computing system 100 shown. For example, hard copy images may be scanned by scanner 16 to produce a digital reproduction. Alternatively, the digital images may be stored on fixed or portable media and accessible from the HDD 72, optical drive 32, floppy drive 28, or accessed from a network by NIC 68 or modem 78. Further, as mentioned above, the various embodiments of the digital image processing techniques may be implemented within a device driver, program code 52, or software that is stored in memory 50, on HDD 72, on optical discs readable by optical disc drive 32, on floppy disks readable by floppy drive 28, or from a network accessible by NIC 68 or modem 78. Those skilled in the art of computers and network architectures will comprehend additional structures and methods of implementing the techniques disclosed herein.
Digital images are comprised of a plurality of pixels. For scanned images, such as the document 300 shown in
A scanner may produce different effects when scanning these areas of a document. For example, edges of fine detail features 320 may appear jagged while moiré patterns may appear in a halftone area 330. Accordingly, in one embodiment of the image filtering technique, pixels are classified according to the area in which they are located. Then an appropriate filter may be applied to pixels according to their classification.
The process shown in
In block 402, the algorithm looks for more than gradual changes in color intensity over a fixed N1×N1 window in the vicinity of each pixel in the image. As an example, the pixel intensities for an N1×N1 pixel window 500 laid over a pixel in an image are shown in
The sum of all intensities for each row H and each column V is calculated. In the present example, N1 row sums Hi (for i=1→N1) and N1 column sums Vi (for i=1→N1) are calculated. Next, the maximum and minimum sums are determined and labeled Hmin, Hmax, Vmin, and Vmax. Further, the sums of intensities along the two major diagonals, D1 and D2, are also calculated. If the differences between the Hi, Vi, D1, and D2 values are small, this indicates that only small intensity changes exist over the entire window.
In equation form, the pertinent values may be represented by:
Hi=f(i,1)+f(i,2)+ . . . +f(i,N1) for i=1→N1,
Vi=f(1,i)+f(2,i)+ . . . +f(N1,i) for i=1→N1,
Hmax=max(Hi), Hmin=min(Hi), Vmax=max(Vi), Vmin=min(Vi),
D1=f(1,1)+f(2,2)+ . . . +f(N1,N1), and
D2=f(N1,1)+f(N1−1,2)+ . . . +f(1,N1).
Then, the following inequalities may be used to affirmatively classify pixels as belonging to the halftone category. If:
Hmax−Hmin≦T1 and
Vmax−Vmin≦T1 and
|D1−D2|≦T2,
where T1 and T2 are predetermined threshold values, then the pixel of interest may be classified as being in a halftone category. The threshold values may be adjusted as desired to control the amount of color variation that is needed to fall outside of the halftone category. In general, however, this portion of the algorithm is looking for something more than gradual changes in color intensity over a relatively large N1×N1 window. The higher the threshold values, the more color variation is allowed for the halftone category. Thus, pixels in areas characterized by slow color changes may still be classified in the halftone category. Furthermore, the size of the N1×N1 window may be adjusted to control the rate of change that is needed to classify pixels as halftone. In one embodiment, provided an appropriately sized descreening filter is used, a 17×17 window may be used for scans produced at 600 DPI.
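A minimal sketch of this first-stage test, assuming an 8-bit grayscale image held in a NumPy array, might look like the following; the threshold defaults are illustrative assumptions, not values given in this disclosure.

```python
import numpy as np

def is_halftone_pixel(image, y, x, n1=17, t1=100, t2=100):
    """First predetermined condition: only gradual intensity change over
    an N1 x N1 window centered on (y, x).

    Implements the Hmax-Hmin <= T1, Vmax-Vmin <= T1 and |D1-D2| <= T2
    tests described above. The t1/t2 defaults are illustrative only.
    Pixels too close to the border are left unclassified.
    """
    half = n1 // 2
    if y < half or x < half:
        return False
    win = image[y - half:y + half + 1, x - half:x + half + 1].astype(int)
    if win.shape != (n1, n1):                 # window falls off the image
        return False
    h_sums = win.sum(axis=1)                  # Hi: one sum per row
    v_sums = win.sum(axis=0)                  # Vi: one sum per column
    d1 = np.trace(win)                        # main diagonal D1
    d2 = np.trace(np.fliplr(win))             # anti-diagonal D2
    return (h_sums.max() - h_sums.min() <= t1 and
            v_sums.max() - v_sums.min() <= t1 and
            abs(d1 - d2) <= t2)
```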
Pixels that are not classified as halftone pixels may be classified as potential text elements (PTE). Notably, the number of pixels analyzed in block 406 may be smaller than the number analyzed in the halftone classification block 402, particularly where some of the original pixels have already been classified as halftone pixels. Thus, the total processing required may be reduced. A smaller window of N2×N2 (where N2<N1) pixels may be considered for block 406 because the algorithm is searching for fine details. This is in contrast to the initial analysis (block 402) described above, where gradual changes over larger areas were detected. In one embodiment, only the pixels at the side edges or top and bottom edges of the smaller window may be considered. In essence, block 406 determines whether there are any substantial changes in intensity from one side of this window to the other (or from top to bottom). For example, an N2×N2 window such as that shown in
In one embodiment, the intensities of pixels at the left side of this window are compared to the intensities of the pixels at the right side of this window. In all, N2 pairs of intensity values are compared in the present example. If there is a substantial change in pixel intensity across the window, the pixel of interest is classified as a detail pixel. For example, if |f(1,1)−f(1,N2)|≧T3, where T3 is yet another predetermined threshold, the pixel of interest is classified as a detail pixel. The same comparison may be made for the pixels at the top and bottom of this smaller window. Thus, another N2 pairs of intensity values may be compared. Again, as an example, if |f(1,1)−f(N2,1)|≧T3, the pixel of interest may be classified as a detail pixel. The pixels at the opposite corners 602, 608 and 604, 606 of this window (or a slightly larger N3×N3 window, where N3>N2) may also be compared to look for substantial changes in intensity. In equation form, this threshold operation may be expressed as the following:
If |f(N2+1−i,1)−f(N2+1−i,N2)|≧T3 for i=1,2, . . . ,N2 or
If |f(1,N2+1−i)−f(N2,N2+1−i)|≧T3 for i=1,2, . . . ,N2 or
If |f(1,1)−f(N2,N2)|≧T3 or
If |f(N2,1)−f(1,N2)|≧T3,
then the pixel of interest may be classified as a detail pixel. In one embodiment, a 5×5 window has been found to work well for typical text sizes scanned at 600 DPI. For the corner pixel 602, 608 and 604, 606 comparisons, a slightly larger 7×7 window has been used successfully. Larger windows may be more appropriate for higher resolution scans.
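This second-stage test may be sketched as follows, again with an illustrative threshold; for simplicity the corner comparisons are shown on the same N2×N2 window, although as noted above a slightly larger window may be used for them.

```python
import numpy as np

def is_detail_pixel(image, y, x, n2=5, t3=100):
    """Second predetermined condition: a substantial intensity change
    from one side of an N2 x N2 window to the other, or between its
    opposite corners. The t3 default is illustrative only.
    """
    half = n2 // 2
    if y < half or x < half:
        return False
    win = image[y - half:y + half + 1, x - half:x + half + 1]
    if win.shape != (n2, n2):                 # window falls off the image
        return False
    win = win.astype(int)
    for i in range(n2):
        # left edge vs right edge of row i
        if abs(win[i, 0] - win[i, n2 - 1]) >= t3:
            return True
        # top edge vs bottom edge of column i
        if abs(win[0, i] - win[n2 - 1, i]) >= t3:
            return True
    # opposite-corner comparisons
    return (abs(win[0, 0] - win[n2 - 1, n2 - 1]) >= t3 or
            abs(win[n2 - 1, 0] - win[0, n2 - 1]) >= t3)
```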
Once pixels are classified as indicated above, the halftone pixels may be filtered using a spatial smoothing mask. Spatial domain masks are known in the art and are applied as follows. For a 3×3 mask having the following values
and the intensity values for the pixels under the mask at any given location x,y being
then the new intensity value for pixel x,y is given by
fnew(x,y)=w1*z1+w2*z2+w3*z3+w4*z4+w5*z5+w6*z6+w7*z7+w8*z8+w9*z9.
Some example smoothing masks that may be applied to the halftone/constant pixels include a 3×3 averaging mask or a 5×5 averaging mask, such as those shown in
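Applying such a mask amounts to the weighted sum given by the fnew(x, y) formula above; a simple sketch follows, with border pixels left unchanged for brevity (the averaging mask values are the standard 1/9 weights, shown here for illustration).

```python
import numpy as np

def apply_mask(image, mask):
    """Weighted-sum spatial filtering per the fnew(x, y) formula above.

    Each interior output pixel is the sum of mask weights times the
    intensities under the mask; border pixels without a complete
    neighborhood are left unchanged for simplicity.
    """
    mask = np.asarray(mask, dtype=float)
    mh, mw = mask.shape
    hy, hx = mh // 2, mw // 2
    out = image.astype(float)     # astype copies, so reads come from the input
    h, w = image.shape
    for y in range(hy, h - hy):
        for x in range(hx, w - hx):
            window = image[y - hy:y + hy + 1, x - hx:x + hx + 1]
            out[y, x] = float((window * mask).sum())
    return out

# 3x3 averaging (smoothing) mask: all weights 1/9.
AVERAGE_3X3 = np.full((3, 3), 1.0 / 9.0)
```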
The present algorithm may be carried out in other specific ways than those set forth herein without departing from the scope and essential characteristics of the invention. For example, pixels may be classified into two categories: halftone and detail. Other categories of pixel types may be established by altering the window sizes and threshold settings. In certain cases, it may be desirable to capture raw scanned image data and prevent image filtering from being automatically applied to those image areas. Halftone areas may be distinguished from continuous-tone image areas in that halftone areas are characterized by very low color variations over relatively large areas. To account for this, the thresholds in the initial analysis (block 402) may be lowered and the window sizes increased to distinguish between halftone areas and images. Then, filtering may be applied to the halftone areas while the image data is preserved. The present algorithm permits modification of the operating parameters to account for these types of scenarios. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.