Color imaging devices sometimes use halftone screens to combine a finite number of colors and produce what appears to the human eye to be many shades of color. The halftone process converts the different tones of an image into dots of varying size and frequency. In general, halftone screens of as few as three colors may suffice to produce a substantial majority of visible colors and brightness levels. For many color imaging devices, these three colors comprise cyan, magenta, and yellow. These three colors are subtractive in that they remove unwanted colors from white light reflected by the print medium (e.g., a sheet of paper). The yellow layer absorbs blue light, the magenta layer absorbs green light, and the cyan layer absorbs red light. In many cases, a fourth color, black, is added to deepen the dark areas and increase contrast.
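The tone-to-dot conversion described above can be illustrated with a simple clustered-dot threshold screen. The following sketch is not taken from the disclosure; the matrix values and function names are hypothetical, chosen only to show how darker tones switch on more dots per cell at a fixed screen frequency:

```python
# Illustrative 4x4 clustered-dot threshold matrix (0..255). The values
# are hypothetical; they are arranged so that "on" pixels cluster into
# a growing dot as the input tone darkens.
THRESHOLD = [
    [200, 136,  72, 168],
    [104,   8,  40, 232],
    [ 56,  24,  88, 120],
    [184, 248, 152, 216],
]

def halftone(gray):
    """Convert an 8-bit grayscale image (0 = black) into a binary bitmap.

    A pixel receives a dot (True) when its tone is darker than the
    threshold at its position in the tiled screen.
    """
    return [
        [gray[y][x] < THRESHOLD[y % 4][x % 4] for x in range(len(gray[0]))]
        for y in range(len(gray))
    ]
```

Because the screen tiles at a fixed period, the dot frequency stays constant while dot size tracks tone, which is the behavior the halftone process relies on.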
In order to print the different color components in a four-color process, it is necessary to separate the color layers, with each color layer converted into halftones. In many cases, these monochrome halftone screens are overlaid at different angles to reduce moiré effects. The screen frequency used for each halftone layer is usually sufficient to produce a continuous tone when viewed by the human eye. In fact, relatively low-frequency halftone screens may be used for each color layer, given the natural filtering effect produced by the human visual system. Unfortunately, screens with low frequencies can sometimes produce object edges that appear jagged. Two solutions that may be used to reduce the appearance of rough edges are post-processing boundary transitions after the bitmap is rendered and increasing the halftone screen frequency used during rendering.
Post-processing may be used to make the edges of halftoned objects appear more distinct. Established practice achieves this effect by searching for boundary transitions after the bitmap is rendered and enhancing those edges. This approach is imperfect because false boundaries are often detected and modified. Further, actual edges are not always detected. Images pose a particularly significant challenge because they possess many random transitions.
Another technique used to minimize boundary artifacts is to use a higher-frequency halftone screen. However, this approach may reduce color accuracy while exposing imperfections in the print mechanism. Streaks produced by mechanical jitter may become visible in continuous-tone areas of an image. Thus, the known correction techniques may not provide an optimal solution for improving the appearance of edge transitions.
Embodiments disclosed herein are directed to methods and apparatuses for sharpening objects formed by an image forming device. Within a print request transmitted to an image forming device, there may be different types of page objects. These page objects may be processed differently according to the object type. For example, the page objects may comprise rectangular objects, character objects, and irregular objects. One or more edges of these objects may be enhanced by applying a different halftone screen frequency near those edges of the object. For instance, one or more edges of an object may be rendered using a higher screen frequency than the remainder of the object. Accordingly, the objects may be partitioned into separate regions with different screen frequencies applied to each. These regions may comprise an edge region around the perimeter of the object and an interior region disposed therein. Boundaries of the page objects may be eroded according to the type of each page object, thus defining an eroded boundary that partitions the object.
For example, rectangular objects may be identified by height and width, with the boundary eroded by reducing the height and/or width of the rectangle. The eroded boundary may be shifted to relocate the eroded boundary. By comparison, character objects may be identified as a bitmap. The outer boundary of the character may be eroded by performing a bitwise AND operation between the original character and a shifted character. Irregular objects may be identified from one or more edge lists. The boundaries of these irregular objects may be eroded by increasing or decreasing the edge list values. For instance, edge list values on the left side of an object may be increased while edge list values on a right side of the object may be decreased.
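The bitwise-AND erosion described for character objects can be sketched as follows. This is an illustrative reading of the technique rather than the disclosed implementation: ANDing the original bitmap with copies shifted in each of the four directions peels one pel off every side of the character (function names are hypothetical):

```python
def shift(bitmap, dy, dx):
    """Shift a binary bitmap by (dy, dx), filling vacated cells with 0."""
    h, w = len(bitmap), len(bitmap[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = bitmap[sy][sx]
    return out

def erode(bitmap):
    """Erode a character bitmap by one pel on every side by ANDing the
    original with copies shifted up, down, left, and right."""
    result = bitmap
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = shift(bitmap, dy, dx)
        result = [
            [a & b for a, b in zip(rrow, srow)]
            for rrow, srow in zip(result, shifted)
        ]
    return result
```

The eroded result serves as the interior portion; the edge portion is whatever remains of the original bitmap outside it (original AND NOT interior).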
Edge sharpening may be skipped for image objects defined as bitmaps. The edge sharpening may also be turned on/off or otherwise controlled by user-adjustable parameters.
The present application is directed to embodiments of devices and methods for performing edge detail sharpening based in part on a knowledge of objects being reproduced. The process may be applicable to images that are halftoned for reproduction by a color image forming device. The techniques are flexible in that the edge sharpening may be applied to a variety of objects, regardless of shape. For each category of object, the object may be split into an interior portion and an edge portion. In one embodiment, different halftone screens may be applied to the interior and edge portions. For example, a lower screen frequency may be used in the interior portion while a higher screen frequency may be used in the edge portion.
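The division of labor described above, a lower-frequency screen for the interior and a higher-frequency screen for the edge portion, can be sketched as a per-pixel screen selection. The screen and mask representations here are assumptions made for illustration only:

```python
def render_with_two_screens(gray, interior, low, high):
    """Halftone `gray` using the `low` threshold screen wherever the
    boolean mask `interior` is True, and the `high` screen elsewhere
    (the edge region). Both screens are square threshold matrices
    that tile across the page.
    """
    h, w = len(gray), len(gray[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            screen = low if interior[y][x] else high
            n = len(screen)
            out[y][x] = gray[y][x] < screen[y % n][x % n]
    return out
```

In practice the two screens would differ in period (frequency), so the edge region resolves finer detail while the interior keeps the color stability of the coarser screen.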
The processing techniques disclosed herein may be implemented in a variety of computer processing systems. For instance, the disclosed processing technique may be executed by a computing system 100 such as that generally illustrated in
The exemplary computing system 100 shown in
An interface cable 38 is also shown in the exemplary computing system 100 of
With regard to the processing techniques disclosed herein, certain embodiments may permit operator control over image processing to the extent that a user may select whether edge sharpening is performed by the image forming device 10. Similarly, users may be able to modify adjustable parameters, such as halftone screen frequency settings. Accordingly, user interface components such as the user interface panel 22 of the image forming device 10 and the display 26, keyboard 34, and pointing device 36 of the computer 30 may be used to control various processing parameters. As such, the relationship between these user interface devices and the processing components is more clearly shown in the functional block diagram provided in
The image forming device 10 may also be coupled to the computer 30 with an interface cable 38 coupled through a compatible communication port 40, which may comprise a standard parallel printer port or a serial data interface such as USB 1.1, USB 2.0, IEEE-1394 (including, but not limited to 1394a and 1394b) and the like.
The image forming device 10 may also include integrated wired or wireless network interfaces. Therefore, communication port 40 may also represent a network interface, which permits operation of the image forming device 10 as a stand-alone device not expressly requiring a host computer 30 to perform many of the included functions. A wired communication port 40 may comprise a conventionally known RJ-45 connector for connection to a 10/100 LAN or a 1/10 Gigabit Ethernet network. A wireless communication port 40 may comprise an adapter capable of wireless communications with other devices in a peer mode or with a wireless network in an infrastructure mode. Accordingly, the wireless communication port 40 may comprise an adapter conforming to wireless communication standards such as Bluetooth®, 802.11x, 802.15, or other standards known to those skilled in the art.
The image forming device 10 may also include one or more processing circuits 48 and system memory 50, which generically encompasses RAM and/or ROM for system operation and code storage, as represented by numeral 52. The system memory 50 may suitably comprise a variety of devices known to those skilled in the art, such as SDRAM, DDRAM, EEPROM, Flash memory, and perhaps a fixed hard drive. Those skilled in the art will appreciate and comprehend the advantages and disadvantages of the various memory types for a given application.
Additionally, the image forming device 10 may include dedicated image processing hardware 54, which may be a separate hardware circuit, or may be included as part of other processing hardware. For example, image processing and edge sharpening as disclosed herein may be implemented via stored program instructions for execution by one or more Digital Signal Processors (DSPs), ASICs or other digital processing circuits included in the processing hardware 54. Alternatively, stored program code 52 may be stored in memory 50, with the edge sharpening techniques described herein executed by some combination of processor 48 and processing hardware 54, which may include programmed logic devices such as PLDs and FPGAs. In general, those skilled in the art will comprehend the various combinations of software, firmware, and hardware that may be used to implement the various embodiments described herein.
In the exemplary computer 30 shown, the CPU 56 is connected to the core logic chipset 58 through a host bus 57. The system RAM 60 is connected to the core logic chipset 58 through a memory bus 59. The video graphics controller 62 is connected to the core logic chipset 58 through an AGP bus 61 or the primary PCI bus 63. The PCI bridge 64 and IDE/EIDE controller 66 are connected to the core logic chipset 58 through the primary PCI bus 63. A hard disk drive 72 and the optical drive 32 discussed above are coupled to the IDE/EIDE controller 66. Also connected to the PCI bus 63 are a network interface card (“NIC”) 68, such as an Ethernet card, and a PCI adapter 70 used for communication with the image forming device 10 or other peripheral device. Thus, PCI adapter 70 may be a complementary adapter conforming to the same or similar protocol as communication port 40 on the image forming device 10. As indicated above, PCI adapter 70 may be implemented as a USB or IEEE 1394 adapter. The PCI adapter 70 and the NIC 68 may plug into PCI connectors on the computer 30 motherboard (not illustrated). The PCI bridge 64 connects over an EISA/ISA bus or other legacy bus 65 to a fax/data modem 78 and an input-output controller 74, which interfaces with the aforementioned keyboard 34, pointing device 36, floppy disk drive (“FDD”) 28, and optionally a communication port such as a parallel printer port 76. As discussed above, a one-way communication link may be established between the computer 30 and the image forming device 10 or other printing device through a cable interface indicated by dashed lines in
Relevant to the edge sharpening techniques disclosed herein, digital files, images, and documents may be read from a number of sources in the computing system 100 shown. Files to be printed may be stored on fixed or portable media and accessible from the HDD 72, optical drive 32, floppy drive 28, or accessed from a network by NIC 68 or modem 78. Further, as mentioned above, the various embodiments of the edge sharpening techniques may be fully or partially implemented as a device driver, program code 52, or software that is stored in memory 50, on HDD 72, on optical discs readable by optical drive 32, on floppy disks readable by floppy drive 28, or on a network accessible by NIC 68 or modem 78. Furthermore, since the edge sharpening technique may be implemented before image rasterization, some or all of the sharpening process may be performed by the CPU 56 of the computer 30 that transmits a page description to the image forming device 10. Those skilled in the art of computers and network architectures will comprehend additional structures and methods of implementing the techniques disclosed herein.
The print request includes page description language data for producing the output image. The data may include page layout information, including the position of the objects on the page, font size, style, colors, image bitmaps, and other scaling operations. One embodiment of a page description language is POSTSCRIPT by Adobe Systems, Incorporated. In one embodiment as illustrated in
The IFC 82 receives the page description language and decomposes the image data into smaller objects and further renders the image as a series of monochrome, halftone bitmaps that are delivered for production by one or more image forming units 110, 210, 310, 410. The individual color images are combined as shown in the exemplary image forming device 10 provided in
Within the image forming device housing 12, the image forming device 10 may include one or more image forming units 110, 210, 310, 410, each associated with a single color. Each image forming unit 110, 210, 310, 410 may include a removable developer cartridge 116, a photoconductive unit 112, a developer roller 118, and a corresponding transfer roller 120. The representative image forming device 10 also includes an intermediate transfer mechanism (ITM) belt 114, a fuser 124, and exit rollers 126, as well as various additional rollers, actuators, sensors, optics, and electronics (not shown) as are conventionally known in the image forming device arts, and which are not further explicated herein. Additionally, the image forming device 10 includes one or more controllers, microprocessors, DSPs, or other stored-program processors and associated computer memory, data transfer circuits, and/or other peripherals (not shown in
Each developer cartridge 116 may include a reservoir containing toner 132 and a developer roller 118, in addition to various rollers, paddles and other elements (not shown). Each developer roller 118 is adjacent to a corresponding photoconductive unit 112, with the developer roller 118 developing a latent image on the surface of the photoconductive unit 112 by supplying toner 132. In various alternative embodiments, the photoconductive unit 112 may be integrated into the developer cartridge 116, may be fixed in the image forming device housing 12, or may be disposed in a removable photoconductor cartridge (not shown). In a typical color image forming device, three or four colors of toner—cyan, yellow, magenta, and optionally black—are applied successively (and not necessarily in that order) to a print media sheet 106 to create a color image. Correspondingly,
The operation of the image forming device 10 is conventionally known. Upon command from control electronics, a single media sheet 106 is “picked,” or selected, from either the primary media tray 14 or the multipurpose tray 18 while the ITM belt 114 moves successively past the image forming units 110, 210, 310, 410. As described above, at each photoconductive unit 112, a latent image is formed thereon by optical projection from an optical device 140. The latent image is developed by applying toner to the photoconductive unit 112 from the corresponding developer roller 118. The toner is subsequently deposited on the ITM belt 114 as it is conveyed past the photoconductive unit 112 by operation of a transfer voltage applied by the transfer roller 120. As the ITM belt 114 passes by each successive image forming unit 110, 210, 310, 410, each color is layered onto the ITM belt 114 to form a composite image. The media sheet 106 is fed to a secondary transfer nip 122 where the image is transferred from the ITM belt 114 to the media sheet 106 with the aid of a secondary transfer roller 130. The media sheet proceeds from the secondary transfer nip 122 along media path 138. The toner is thermally fused to the media sheet 106 by the fuser 124, and the sheet 106 then passes through exit rollers 126, to land facedown in the output tray 20 formed on the exterior of the image forming device housing 12.
In one embodiment of the edge sharpening procedure, processing is performed during the rasterization process by the IFC 82 shown in
Accordingly, the process outlined in
When edge sharpening is turned on,
The objects that are parsed in step 402 for edge sharpening may be divided into an interior portion and an edge portion. Different screen frequencies may then be applied to these separate portions. Initially, however, the edge sharpening algorithm creates the interior portion as a duplicate of the original object that is eroded or reduced in size. For instance, rectangles may be identified by the height and width dimensions (steps 412 and 414). The dimensions are used as a template for forming the halftone bitmap for the interior portion (step 416). In one embodiment, the dimensions of the interior portion are the same as the height and width dimensions of the original object. In another embodiment, the dimensions of the interior portion are decreased to dimensions smaller than the original object. Decreasing the dimensions may include decreasing the height, decreasing the width, or decreasing both. In one embodiment, it is necessary to translate the height and width to re-center the interior portion relative to the position of the original object.
One embodiment of decreasing both the height and the width of the interior portion is illustrated in
The amount of reduction of the interior portion 522 may vary depending upon the specific requirements of the print request and the mechanics of the image forming device 10. In one embodiment, the dimensions of the interior portion 522 are reduced 2 pels on each edge. The decreased interior portion 522 may be re-centered relative to the original object 520 by translating the origin from an initial position 526 to a new position 528. The origin may be a point on the surface of the object 520.
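The 2-pel reduction and origin translation described above can be sketched for rectangular objects as follows. The function signature is hypothetical; only the arithmetic follows the description:

```python
def erode_rectangle(width, height, origin_x, origin_y, pels=2):
    """Shrink a rectangle by `pels` on each edge and translate its
    origin so the interior portion stays centered on the original
    object. Returns (width, height, x, y) of the interior portion;
    dimensions are clamped at zero if erosion consumes the object."""
    inner_w = max(width - 2 * pels, 0)
    inner_h = max(height - 2 * pels, 0)
    return inner_w, inner_h, origin_x + pels, origin_y + pels
```

Shifting the origin by `pels` in each axis is what re-centers the reduced interior, matching the translation from the initial position 526 to the new position 528.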
Referring to
Referring to
The bitmaps for the interior portions of irregular objects are formed by decreasing the original edge lists (step 436). In one embodiment, the first edge list 1002 is modified as illustrated in
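The edge-list erosion described for irregular objects can be sketched per scanline, increasing left-edge values and decreasing right-edge values. The paired-list representation used here is an assumption for illustration; the disclosure's edge lists may be organized differently:

```python
def erode_edge_lists(left_edges, right_edges, pels=2):
    """Erode an irregular object described, per scanline, by a left
    edge list and a right edge list: left edges move inward (x
    increases) and right edges move inward (x decreases). Scanlines
    whose span collapses during erosion are flagged with None."""
    interior = []
    for l, r in zip(left_edges, right_edges):
        nl, nr = l + pels, r - pels
        interior.append((nl, nr) if nl <= nr else None)
    return interior
```

A narrow scanline can vanish entirely under erosion, which is why the collapsed case must be represented explicitly rather than producing an inverted span.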
In another embodiment of forming eroded interior portions, an irregular shape 710, such as that illustrated in
Returning to
The various erosion processes described above have generated an eroded boundary that separates an interior portion from an edge portion of an object. However, there may be instances where less than all edges of an object need to be sharpened. For instance, objects may fade in color from one side of the object to the other with edge sharpening indicated only at the darkest edges. The above techniques may still be applied in such cases. For example, the rectangular shape from
In the example shown in
In another embodiment shown in
Similarly, the eroded boundary 630 shown in
In another embodiment, characters that are defined in the page description by their outlines may be categorized as an irregular shape. One example of this occurrence is when the size of a character exceeds a predetermined amount. The bitmaps for the characters are determined in accordance with the irregular shape calculations and the character may be subdivided into subsections.
Further calculations may be performed for each of the object categories. The edge list for an object may be analyzed to determine whether the object comprises a finite area. If the area is not finite, edge sharpening may not be performed. In another embodiment, edge sharpening may not be performed if object erosion results in the interior portion being reduced to zero.
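The zero-interior test described above can be sketched as a simple guard. The per-scanline span representation, with None marking scanlines that collapsed during erosion, is an assumption carried over for illustration:

```python
def sharpening_applicable(interior_spans):
    """Decide whether edge sharpening should proceed after erosion.

    `interior_spans` lists the surviving (left, right) interior span
    for each scanline, or None where erosion consumed the scanline.
    If every scanline collapsed, the interior portion is empty and
    edge sharpening is skipped."""
    return any(span is not None for span in interior_spans)
```

An object whose edge list does not close into a finite area would be rejected before this point, so the guard only needs to detect the eroded-to-nothing case.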
The present invention may be carried out in other specific ways than those herein set forth without departing from the scope and essential characteristics of the invention. For example, while embodiments described above have contemplated dividing an original object into an interior portion and an edge portion, it is also possible to create multiple portions near the edge of an object to implement screen frequency gradients. In other words, three or more halftone screen frequencies may be used to reduce noticeable transitions between the regions. Each portion may be eroded by differing amounts, with different screen frequencies applied to each portion.
The object based edge sharpening may be incorporated in a variety of image forming devices including, for example, printers, fax machines, copiers, and multi-functional machines including vertical and horizontal architectures as are known in the art of electrophotographic reproduction. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.