This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2005-196256, filed on Jul. 5, 2005, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
This invention relates to an image processing device, an image processing method, and an image processing program. More specifically, the invention relates to an image processing device and similar which performs quantization processing using cells the shapes of which have been rendered symmetrical.
2. Description of the Related Art
In the prior art, printers and other image processing devices use halftone processing of input image data having multivalued grayscale values for each pixel to convert the data into output image data with a smaller number of grayscales (for example, with two data values), to perform printing onto printing paper.
As halftone processing, dot-concentrated dithering methods (multivalued dithering methods) are known. In multivalued dithering methods, thresholds are distributed such that dots grow from the center of a matrix of prescribed size, and results are compared with input grayscale values.
However, in multivalued dithering methods, distribution of thresholds may for example cause the breaking of fine lines when there are fine lines in the input image, or may cause the occurrence of “jaggies” at edge portions of the input image, so that an image which is not true to the input image is output, and there are problems with image quality.
Hence in order to resolve these problems, methods have been proposed in which the center-of-gravity position is determined from grayscale values for each pixel within a cell comprising a plurality of pixels, and a dot corresponding to the sum of the grayscale values for each pixel is generated at the center-of-gravity position (see for example Japanese Patent Application No. 2004-137326; hereafter called the “AAM (Advanced AM screen) method”).
However, when using the AAM method, a dot is generated at the center-of-gravity position in a cell; but when the cell shape is asymmetrical, the cell center of gravity does not coincide with a pixel center, so that a slight change in the input image distribution causes the pixel position at which a dot is generated to move by one pixel. This scattering in dot positions appears in the output image as unpleasant noise.
This invention was devised in light of the above problems, and has as an object the provision of an image processing device, image processing method, and image processing program to obtain output images in which the occurrence of unpleasant noise is suppressed.
In order to attain the above object, an image processing device of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, comprises a quantization unit which converts input image data into output image data having two or more grayscales, using pixel groups the shapes of which are point-symmetric. Therefore, for example, if a dot is generated at the pixel at the center-of-gravity position of a pixel group, then even if there is a slight change in a uniform input grayscale distribution, the center-of-gravity position of the pixel group remains in proximity to the center of the dot generation pixel, so that there is no scattering in the pixel position of dot generation, and an output image is obtained in which unpleasant noise is suppressed.
The image processing device of the present invention, wherein at least one pixel constituting each of the pixel groups is common to a plurality of the pixel groups. Therefore, the shape of each pixel group is rendered symmetric, and an output image is obtained in which the occurrence of noise is suppressed.
Further, the image processing device of the present invention, wherein the pixel held in common by the pixel groups is a pixel which is at an equal distance from the center of each of the pixel groups. Therefore, the number of pixels common to pixel groups can be made small, and increases in the amount of processing due to common pixels can be reduced.
Further, the image processing device of the present invention, wherein a commonality level is set for each pixel constituting the pixel groups, and for the common pixel, the commonality level is set according to the number of the pixel groups to which the common pixel is common. Therefore, for example, a common pixel is equally divided among a plurality of pixel groups, and the shapes of pixel groups can be rendered symmetrical.
Further, the image processing device of the present invention, wherein the quantization unit comprises a center-of-gravity position determination unit which determines the center-of-gravity position of the pixel group from values obtained by multiplying the input image data for each of the pixels included in the pixel group by the commonality level, a positioning unit which positions the center of a multivalued dithering matrix, applied in units of the pixel groups, at the center-of-gravity position of the pixel group, and an output unit which compares the multivalued dithering matrix with the input image data for each of the pixels included in the pixel group, to obtain the output image data. Therefore, the center-of-gravity position is determined using values obtained by multiplying the input image data for each pixel by the commonality level, so that the influence of common pixels, which are processed for a plurality of pixel groups, on the accuracy of the center-of-gravity position can be reduced.
Further, the image processing device of the present invention, wherein table numbers of tables indicating the correspondence relation between the input image data and the output values are stored in the multivalued dithering matrix, and the output unit references the table number of the multivalued dithering matrix corresponding to the position of each pixel included in the pixel group to obtain output values from the input image data, and outputs, as the output image data, values obtained by multiplying the output values by the commonality level. Therefore, even when for example the output values of common pixels are added a plurality of times for a plurality of pixel groups, output image data can be held within the range of a maximum number of grayscales.
Further, the image processing device of the present invention, wherein the output unit ends the quantization processing in the pixel group when an ideal grayscale value, based on the sum of values obtained by multiplying the input image data for each pixel in the pixel group by the commonality level, has been obtained. Therefore, when for example the ideal grayscale value, which is the sum of the input image data for the pixel group, is determined using values obtained by multiplying the input image data by the contribution factor, the input image data for each pixel can be regarded as belonging to the pixel group according to its contribution factor, and output which is true to the input grayscale information can be obtained.
Further, the image processing device of the present invention, wherein the output unit comprises a supplement unit which, when an ideal grayscale value based on the sum of values obtained by multiplying the input image data for each pixel in the pixel group by the commonality level is not obtained, performs supplement processing such that the sum of the output image data in the pixel group becomes substantially the ideal grayscale value. Therefore, when for example the ideal grayscale value, which is the sum of the input image data for the pixel group, is determined using values obtained by multiplying the input image data by the contribution factor, the input image data for each pixel can be regarded as belonging to the pixel group according to its contribution factor, and output which is true to the input grayscale information can be obtained.
Further, in order to attain the above objects, an image processing device of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, comprises a quantization unit which converts input image data into output image data having two or more grayscales, using pixel groups in which the center position of the pixel group and the center position of one of the pixels included in the pixel group coincide. Therefore, for example, if a dot is generated at the pixel of a pixel group at which the center-of-gravity position exists, then even if the uniform input grayscale distribution changes slightly, the center-of-gravity position of the pixel group remains in proximity to the center of the pixel at which the dot was generated, so that there is little scattering in the position of the pixel of dot generation, and an output image is obtained in which unpleasant noise is suppressed.
Further, in order to attain the above objects, an image processing method of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, comprises the step of converting input image data into output image data having two or more grayscales, using pixel groups the shapes of which are point-symmetric.
Further, in order to attain the above objects, an image processing method of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, comprises the step of converting input image data into output image data having two or more grayscales, using pixel groups in which the center position of the pixel group and the center position of one of the pixels included in the pixel group coincide.
Further, in order to attain the above objects, an image processing program of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, the image processing program causing a computer to execute processing to convert input image data into output image data having two or more grayscales, using the pixel groups the shapes of which are point-symmetric.
Further, in order to attain the above objects, an image processing program of the present invention, which divides an input image into pixel groups having a plurality of pixels and performs quantization processing in units of the pixel groups, the image processing program causing a computer to execute processing to convert input image data into output image data having two or more grayscales, using the pixel groups, in which the center position of the pixel group and the center position of any pixel included in the pixel group coincide.
Below, preferred embodiments for implementation of the invention are explained, referring to the drawings.
The host computer 10 comprises an application portion 11 and a rasterizing portion 12.
The application portion 11 generates text data, graphical data, or other data for printing by means of a word processor, graphics tool, or other application program. The rasterizing portion 12 converts each pixel (or dot) of the data for printing into 8-bit input image data, and outputs the result to the image processing device 20. Hence the input image data has, for each pixel, grayscale values ranging from “0” to “255”.
The image processing device 20 comprises an image processing portion 21 and a printing engine 22. The image processing portion 21 comprises a halftone processing portion 211 and a pulse width modulation portion 212.
The halftone processing portion 211 takes as input the input image data from the host computer 10, and converts this data into output image data having quantized data of two or more types. The pulse width modulation portion 212 generates driving data for this quantized data indicating, for each dot, whether there is or is not a laser driving pulse, and outputs the result to the printing engine 22.
The printing engine 22 comprises a laser driver 221 and a laser diode (LD) 222. The laser driver 221 generates control data for this driving data indicating whether there are or are not driving pulses, and outputs this data to the LD 222. The LD 222 is driven based on the control data, and the printing data generated by the host computer 10 is actually printed onto paper through driving of a photosensitive drum or similar.
This invention may be applied to an image processing device 20 configured as hardware as shown in
Next, details of halftone processing in this invention are explained; prior to this, however, a simple summary of this invention is given.
First, an input image is divided in advance into pixel groups (hereafter called “cells”) comprising a plurality of fixed (predetermined) pixels. This is in order to perform processing in cell units. Then, an index matrix, in which are stored table numbers for gamma tables to be referenced, is applied to these cells. Then, by referencing the gamma tables, output grayscale values corresponding to the input grayscale values are obtained for each pixel, and dots are generated.
A characteristic of this invention is the fact that the cells are rendered symmetrical. By rendering cells symmetrical, the center position of a cell coincides with the center position of one of the pixels within the cell. When an input image with uniform grayscales is provided, the center position of the cell becomes the center-of-gravity position, and if a dot is generated at the pixel at which the center-of-gravity position exists, the dot is generated at the center of the pixel.
In this state, even if there is a slight change in the grayscales of the input image, the center-of-gravity position is in the proximity of the center of a pixel, and so there is no shift in the position of the pixel at which the dot is generated itself, and dot scattering can be suppressed. As a result, an output image with noise suppressed is obtained.
The symmetrical rendering of the cell shape is realized by having adjacent cells hold in common pixels which are at an equal distance from the center pixels of the cells.
In order to divide common pixels 210 into equal parts, the fraction to which each pixel belongs to a cell 200 (the contribution factor, or commonality level) is assigned to each pixel of the cell 200. An example of this appears in
The cells 200 shown in
Next, the operation of halftone processing using such cells 200 is explained.
First, the CPU 24 reads from ROM 25 a program to execute this processing, and initiates the processing (S10).
Next, the CPU 24 multiplies the input grayscale values for each pixel by the contribution factor (S11). For example, in the example shown in (B) of
Next, the CPU 24 computes the sum of the grayscale values within the cell 200 and the center-of-gravity position of the cell 200 (S12).
In computing the sum of grayscale values and the center-of-gravity position 110, values obtained by multiplying the input grayscale values by contribution factors are used. Multiplied values are used because the grayscale value of each pixel in the cell 200 belongs to the cell 200 only to the extent of its contribution factor, and because common pixels 210 are processed in a plurality of cells 200, so that if the input grayscale values were used without modification, an accurate center-of-gravity position 110 could not be computed for the cell 200.
In the example of
X_center-of-gravity = Σ{(X coordinate of pixel) × (grayscale value of pixel)} / (sum of grayscale values in cell)
Y_center-of-gravity = Σ{(Y coordinate of pixel) × (grayscale value of pixel)} / (sum of grayscale values in cell)
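The computation of S12 can be sketched as follows (a hypothetical Python rendering; the function name, tuple layout, and sample values are illustrative and not part of the original text). Each grayscale value is first weighted by its contribution factor, as described above, so that common pixels are not double-counted:

```python
def center_of_gravity(pixels):
    """Compute the center-of-gravity position of a cell.

    `pixels` is a list of (x, y, grayscale, contribution) tuples; each
    grayscale value is weighted by its contribution factor before the
    sums are taken.
    """
    weighted = [(x, y, g * c) for x, y, g, c in pixels]
    total = sum(w for _, _, w in weighted)
    if total == 0:
        return None  # no grayscale in the cell: no dot to place
    cx = sum(x * w for x, _, w in weighted) / total
    cy = sum(y * w for _, y, w in weighted) / total
    return cx, cy
```

For a point-symmetric cell with uniform grayscales, the computed center of gravity falls on the center pixel, which is the situation described above.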
Next, the CPU 24 determines a processing order enabling processing in order from the pixels existing closest to the center-of-gravity position 110 (S13). In the example of
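The ordering in S13 amounts to a sort by distance from the center-of-gravity position 110; a minimal sketch (the names are illustrative assumptions):

```python
def processing_order(pixel_positions, cg):
    # Sort pixel positions so that pixels nearest the center-of-gravity
    # position are processed first, and therefore receive dots first.
    # Squared distance is sufficient for ordering.
    cx, cy = cg
    return sorted(pixel_positions,
                  key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
```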
Next, the CPU 24 shifts the center position of the index matrix such that the center position of the matrix is positioned at the center-of-gravity position 110 of the cell 200 (S14). This is because, by causing the center-of-gravity position 110 to coincide with the pixel position at which a dot is most easily generated in the matrix, a dot can be more easily generated at the center-of-gravity position 110. In the above example, the shift amount to cause the center-of-gravity position 110 and the center of the cell 200 to coincide is (0,0). An example of an index matrix after shifting appears in (A) of
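The shift of S14 can be sketched as follows (hypothetical; rounding the fractional center-of-gravity position to the nearest pixel is an assumption about how that position is mapped onto a pixel):

```python
def matrix_shift(cg, matrix_center):
    # Shift amount that moves the matrix center (where dots grow first)
    # onto the pixel containing the center-of-gravity position.
    return (round(cg[0]) - matrix_center[0],
            round(cg[1]) - matrix_center[1])
```

In the example above the shift amount is (0,0); in Embodiment 2 below, where the center of gravity lies one pixel to the left of the matrix center, the same computation yields (−1,0).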
Next, the CPU 24 allocates output grayscale values for each pixel according to the previously determined processing order. That is, “1” is substituted for “n” indicating the order of processing of pixels (S15), and the output value corresponding to the input grayscale value for the “n”th processed pixel is read from the gamma table (S16). In the above example, the index value for the “1”st pixel is “1” (see (B) of
In this embodiment, when referencing the gamma table, an output value is not determined by multiplying the input grayscale value by the contribution factor, but instead the output value is obtained from the input grayscale value itself. This is because if a value obtained by multiplication by the contribution factor is used, the input/output relation assumed in the gamma table at the design stage is destroyed.
Next, the CPU 24 multiplies the output value by the contribution factor (S18). In the above example, “255” is multiplied by the contribution factor “1”.
The output value obtained from the gamma table is multiplied by the contribution factor because common pixels 210 are processed a plurality of times for a plurality of cells 200, and if the value is not multiplied by the contribution factor, the maximum grayscale value of the common pixels 210 exceeds “255”.
Next, the CPU 24 adds the value multiplied by the contribution factor (hereafter the “candidate value”) to the sum of grayscale values already output, and judges whether the value exceeds the ideal grayscale value (S19 in
Hence, when the ideal grayscale value is exceeded (YES in S19), the CPU 24 trims the candidate value such that the sum equals the ideal grayscale value, and adds the result to the output buffer (S25). On the other hand, if the ideal grayscale value is not exceeded (NO in S19), the candidate value is added without modification to the output buffer (S20).
In the above example, the ideal grayscale value “320” is not exceeded even when the sum “0” of output grayscale values is added to the candidate value “255”. Hence the candidate value “255” is added without modification to the output buffer 120. This example appears in (A) of
Next, the CPU 24 judges whether processing has ended for all the pixels in the cell 200 (S21), and if processing has not ended (NO), adds “1” to the value of “n” indicating the processing order (S24), and again returns to S16.
In the above example, processing proceeds to the “2”nd pixel (see (B) in
Even when the output value “16” is multiplied by the contribution factor “1” and the result added to the output buffer 120, the value becomes “271” and does not exceed the ideal grayscale value of “320” (NO in S19). Hence the entire value “16” is added (S20). This example appears in (B) of
Below, similar processing is repeated to obtain the output values shown in (C) of
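The loop of S15 through S25 can be sketched as follows (a hypothetical illustration; the function and variable names are assumptions, not part of the original text). Each candidate value is a gamma-table output already multiplied by its contribution factor:

```python
def accumulate(candidates, ideal):
    """Add candidate output values, in processing order, until the
    ideal grayscale value is reached.

    Once the running sum would exceed `ideal` (S19), the last candidate
    is trimmed so that the sum equals the ideal value (S25).
    """
    outputs, total = [], 0
    for c in candidates:
        if total + c > ideal:      # S19: adding would exceed the ideal value
            c = ideal - total      # S25: trim to exactly reach it
        outputs.append(c)
        total += c
        if total >= ideal:         # ideal value reached: end the cell
            break
    return outputs, total
```

With candidate values “255”, “16”, “255” and ideal grayscale value “320”, as in the example above, the third candidate is trimmed to “49”; with a single candidate “255” and ideal value “100”, as in Embodiment 2 below, only “100” is output.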
If there is already an output value for a common pixel 210 as a result of processing of another cell 200 (if there has been output to the output buffer 120), the CPU 24 adds this output value to the output value obtained as described above, and outputs the result to the output buffer 120 (S22).
Then, the CPU 24 ends processing for the cell 200 (S23). Processing of the next cell 200 is then executed by repeating the processing from S10.
In the first embodiment, a case of input of uniform grayscale data was explained. In this second embodiment, an example in which grayscale values are concentrated on the left side of the cell 200 is explained. This example appears in (A) of
First, the CPU 24 multiplies input grayscale values by contribution factors (S11; see (C) of
Next, the CPU 24 computes the sum of grayscale values using the multiplied values (computes the ideal grayscale value) and computes the center-of-gravity position 110 (S12; see (A) in
Next, the CPU 24 determines the order of processing, starting from pixels closer to the center-of-gravity pixel (S13; see (B) in
Next, the CPU 24 shifts the center of the index matrix (S14). In the case of this example, the center-of-gravity position 110 is shifted one pixel to the left of the pixel at the center position of the index matrix. Hence the matrix center is shifted by (−1,0). An example of a matrix after shifting appears in (C) of
Then, the CPU 24 allocates output values to each pixel according to the processing order thus determined. Because the index value is “1” and the input grayscale value is “40” for the first pixel to be processed, the output value “255” corresponding to the input value “40” is read from the first gamma table (S16).
Then, the CPU 24 multiplies the output value “255” by the contribution factor “1”, and adds “255” to the output buffer 120 (S18).
In this case, the added value “255” exceeds the ideal grayscale value “100” (YES in S19), and so the CPU 24 does not add the unmodified output value “255” to the output buffer 120, but instead adds the value “100” necessary to reach the ideal grayscale value (S25). Then, processing ends (S23). The output buffer 120 after the end of processing appears in
In this Embodiment 2 also, similarly to Embodiment 1, computations are performed using the value obtained by multiplying the input grayscale value by the contribution factor when computing the sum of input grayscale values for a cell 200 and when computing the center-of-gravity position 110 of a cell 200. As the input value when referencing a gamma table, the input grayscale value itself is used to obtain the output value. Further, when referencing a gamma table to obtain an output value, the value obtained by multiplying the contribution factor by the output value from the table is employed to obtain output to the extent of the ideal grayscale value.
Advantageous results of the action of this Embodiment 2 are similar to those of Embodiment 1.
This third embodiment is an example of a case in which grayscale values exist only in the common pixels 210 of cells 200. An example of input data appears in (A) of
When the contribution factor is multiplied by the input grayscale value for each pixel (S11), the data shown in (C) of
The processing order is determined (S13; see (B) of
That is, the output value “255” corresponding to the input grayscale value “40” is read from the gamma table for the common pixel 210 (S16). The contribution factor is multiplied to obtain the candidate value “127” (S18), and because this exceeds the ideal grayscale value “20” (YES in S19), only the “20” necessary to reach the ideal grayscale value is added to the output buffer 120 (S25; see (A) of
This common pixel 210 is also processed by the cell 200 adjacent on the left. As an example, suppose that as a result of processing for the cell 200 adjacent on the left, the output value shown in (B) of
In this case, there exist, for the common pixel 210, the output value “20” for the cell 200 adjacent on the left, and the output value “20” for the cell 200 in question. In this case, the sum “40” of these output values is output as the output grayscale value for the common pixel 210 (S22; see (C) of
Because the value for a common pixel 210 is added a plurality of times as a pixel for processing by different cells, if the output values obtained from gamma tables are added without modification, the maximum value “255” is exceeded. As explained above, by multiplying output values by a contribution factor and adding the results, the output grayscale values can be kept within the range from “0” to “255”.
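This scaling can be sketched as follows (hypothetical names; the values are illustrative):

```python
def shared_pixel_output(raw_outputs, contribution):
    # Each cell that shares the pixel contributes its gamma-table output
    # scaled by the contribution factor; summing the scaled values keeps
    # the result within the "0" to "255" range.
    return sum(v * contribution for v in raw_outputs)
```

Even when both sharing cells output the maximum value “255”, the accumulated value for the common pixel remains “255” rather than “510”.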
In this Embodiment 3 also, advantageous results of action similar to those of Embodiments 1 and 2 are obtained.
In the above-described examples, output values were obtained from input grayscale values by referring to gamma tables. In addition, output values may be obtained by processing using so-called multivalued dithering methods.
By rendering cells 200 symmetrical, the center of a cell 200 coincides with the center of a pixel, so that even if there is a slight shift from a uniform input grayscale distribution, there is no shift in the pixel position for dot generation, and an output image is obtained with noise suppressed. If cells 200 are rendered symmetrical, in addition to processing using a multivalued dithering method, processing by the AAM method may also be performed.
Further, in the above examples processing was performed taking the contribution factor for common pixels 210 to be “0.5”. This is because a common pixel 210 was a pixel which was processed in two cells 200. Hence when a pixel is common to three cells 200, the contribution factor is “⅓”, and for four cells 200 the value is “0.25”. The commonality level may be set according to the number of cells 200 to which a common pixel 210 is common. In this case also, advantageous results similar to those of the above examples are obtained.
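The rule just described, under which a pixel common to N cells belongs to each to the extent 1/N, can be written as (hypothetical function name):

```python
def commonality_level(num_sharing_cells):
    # A pixel common to N cells 200 belongs to each cell
    # to the extent 1/N.
    return 1.0 / num_sharing_cells
```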
Further, in the above examples, even when during processing in each cell 200 the sum of grayscale values which have been output does not reach the ideal grayscale value, processing for the cell 200 ends once all pixels in the cell 200 have been processed (NO in S19, YES in S21). Hence there are also cases in which the output grayscale value in a cell 200 does not reach the ideal grayscale value. In this case, processing may be performed to distribute the grayscale value deficiency to pixels close to the center-of-gravity position 110 at which there has been no dot output. Alternatively, the output values may be reset and processing performed using, for example, a dithering matrix which has a higher dot density than the multivalued dithering matrix (high-line-count multivalued dithering processing), or supplementary processing may be performed to redistribute the ideal grayscale value within the cell 200 in order of pixels with large input grayscale values, so as to obtain output values which substantially coincide with the ideal grayscale value.
Further, in the above examples it was explained that the halftone processing of this invention is performed by an image processing device 20; but as shown in
The above examples are explained assuming monochromatic data as the input image data. In addition, this invention may be applied to CMYK color data, as shown in
In this case, the rasterizing portion 12 outputs RGB color data, and a color conversion processing portion 213 within the image processing device 20 converts this into CMYK color data. In this case, in this invention the above-described processing is repeated for each CMYK plane.
The color conversion processing portion 213 may be provided in the host computer 10; or, the color conversion processing portion 213 and halftone processing portion 211 may be provided in the host computer 10. In either case, advantageous results similar to those of the above examples are obtained.
Further, in the above examples the number of grayscales of the input image data was 256 (8 bits), ranging from “0” to “255”, and quantized data similarly had 256 grayscales (8 bits). Of course, similar advantageous results are obtained even when the number of grayscales is 128 (7 bits), 512 (9 bits), or various other numbers of grayscales.
In the above examples, a printer was used as an example of an image processing device 20. Of course, the device may be a photocopier, fax machine, or a hybrid device having several of these functions; and the host computer 10 may be a portable telephone, PDA (Personal Digital Assistant), digital camera, or other portable information terminal.
Number | Date | Country | Kind
--- | --- | --- | ---
2005-196256 | Jul 2005 | JP | national