This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2007-156889 filed on Jun. 13, 2007.
1. Technical Field
The present invention relates to an image processor, an image-processing method, and a computer-readable medium.
2. Related Art
A technique to convert input image data represented by pixels of multiple values into image data represented by pixels of two values is known.
According to an aspect of the present invention, there is provided an image processor having: a conversion unit that executes a predetermined conversion process on input image data represented by pixels of M grayscales (M≧3), to convert the input image data into output image data represented by sub-pixels of two grayscales and having a size smaller than the pixel; and an output unit that outputs the output image data obtained by the conversion unit, wherein the conversion process includes: a process to determine a filling pattern of a group of sub-pixels of the output image data corresponding to a pixel of interest of the input image data according to a corrected grayscale value obtained by adding a predetermined correction value to a grayscale value of the pixel of interest; and a process to calculate, as an error of the pixel of interest, an error between the corrected grayscale value and a grayscale value represented by the filling pattern, the filling pattern includes at least a core pattern in which a predetermined number of a plurality of sub-pixels are filled and which forms a core of a dot and a growth pattern in which a number of sub-pixels are filled, the number corresponding to the corrected grayscale value, and which grows an adjacent dot, the correction value is a value including a diffused error obtained by weighting an error of a peripheral pixel of the pixel of interest by an error diffusion coefficient, and at least a part of the diffused error obtained by weighting an error of a pixel, among the peripheral pixels, determined as the core pattern by the error diffusion coefficient is not included in the correction value and is allotted to proximate pixels of the pixel of interest including the pixel of interest.
Exemplary embodiments of the present invention will be described in detail with reference to the following figures.
An exemplary embodiment of the present invention will now be described with reference to the drawings.
In one configuration, the image processor 10 may be realized by cooperation of hardware and software resources. For example, functions of the image processor 10 may be realized by an image-processing program, recorded on a recording medium such as a ROM (Read Only Memory), being read into a main memory and executed by a CPU (Central Processing Unit). The image-processing program may be provided in a recorded form on a recording medium such as a CD-ROM or may be provided through communication as a data signal. In another configuration, the image processor 10 may be realized solely by hardware.
As shown in
The reception unit 11 receives input image data represented by pixels of M grayscales (M≧3). More specifically, the input image data include multiple pixels, each having one of M grayscale values. Such input image data are also called image data represented by multiple values. The reception unit 11 receives the input image data by means of, for example, a RAM (Random Access Memory).
The conversion unit 12 executes a predetermined conversion process on input image data received by the reception unit 11, to convert the input image data into output image data represented by sub-pixels of two grayscales and having a size smaller than the pixel of the input image data. The above-described predetermined conversion process will be described later in more detail.
The output unit 13 outputs the output image data obtained by the conversion unit 12 to the outside (other devices and software modules). For example, the output unit 13 outputs the output image data to the RAM.
A conversion process performed by the conversion unit 12 will now be described.
The conversion process performed by the conversion unit 12 includes: (a) a filling pattern determination process to determine a filling pattern of a group of sub-pixels of output image data corresponding to a pixel of interest of the input image data in accordance with a corrected grayscale value obtained by adding a predetermined correction value to the grayscale value of the pixel of interest; and (b) an error calculation process to calculate, as an error of the pixel of interest, an error between the corrected grayscale value and the grayscale value represented by the determined filling pattern.
Next, (a) the filling pattern determination process will be described in detail.
The “pixel of interest” is a pixel to be processed in the input image data. The pixels in the input image data are sequentially set as the pixel of interest. For example, in input image data in which pixels are arranged in a matrix form along a primary scan direction and a secondary scan direction, the pixel of interest is moved in a predetermined order along the primary scan direction and the secondary scan direction.
The “group of sub-pixels” is a collection of multiple sub-pixels corresponding to a pixel in input image data, and is, for example, multiple sub-pixels arranged in a matrix form.
The “filling pattern of a group of sub-pixels” is a pattern representing whether or not each sub-pixel in the group of sub-pixels is to be filled. A filled sub-pixel is also referred to as a colored sub-pixel, an ON sub-pixel, or a black sub-pixel, and is a sub-pixel having, for example, a grayscale value of “1”. Meanwhile, a non-filled sub-pixel is also referred to as a non-colored sub-pixel, an OFF sub-pixel, or a white sub-pixel, and is a sub-pixel having, for example, a grayscale value of “0”. In the following description, the filled state will be referred to as “black” and the non-filled state will be referred to as “white”.
The filling pattern includes at least a core pattern, in which a predetermined plural number of sub-pixels are filled and which forms the core of a dot, and a growth pattern, in which a number of sub-pixels corresponding to the corrected grayscale value are filled and which grows an adjacent dot.
Because the above-described core pattern forms a core of the black dot, in the following description, this core pattern will be referred to as a “black core pattern”. More specifically, the black core pattern is a pattern in which a predetermined number of sub-pixels at predetermined positions are filled and the other sub-pixels are not filled. The number and the positions of the sub-pixels to be filled in the black core pattern may be set, for example, in consideration of the reproducibility of the image and graininess in the printing process.
Because the above-described growth pattern grows an adjacent black dot, in the following description, this growth pattern will be referred to as a “black growth pattern”. More specifically, the black growth pattern is a pattern in which one or more sub-pixels corresponding to a corrected grayscale value are filled and the other sub-pixels are not filled so that the size of an adjacent black dot is increased.
From the viewpoint of executing a superior process in a low-concentration region, the filling pattern may include an all-white pattern in which no sub-pixel is filled.
From the viewpoint of executing a superior process in a high-concentration region, the filling pattern may include a white core pattern for forming a core of a white dot and a white growth pattern for growing the white dot. The white core pattern is a pattern in which a predetermined plural number of sub-pixels are not filled. More specifically, the white core pattern is a pattern in which a predetermined number of sub-pixels at predetermined positions are not filled and the other sub-pixels are filled. The white growth pattern is a pattern in which a number of sub-pixels corresponding to the corrected grayscale value are not filled. More specifically, the white growth pattern is a pattern in which one or more sub-pixels corresponding to the corrected grayscale value are not filled and the other sub-pixels are filled so that the size of an adjacent white dot is increased.
From the viewpoint of executing a superior process in a high-concentration region, the filling pattern may include an all-black pattern in which all sub-pixels are filled.
A collection of a black sub-pixel of the black core pattern and a black sub-pixel of the black growth pattern forms a black dot, and a collection of a white sub-pixel of the white core pattern and a white sub-pixel of the white growth pattern forms a white dot. The collections are sometimes referred to as “clusters”.
The “correction value” is a value including a diffused error obtained by weighting an error of a pixel at a periphery of a pixel of interest (hereinafter referred to as “peripheral pixel”) by an error diffusion coefficient. Here, the error diffusion coefficient is a coefficient which is set on the basis of a relative position between the pixel of interest and the peripheral pixel, and, in one configuration, the error diffusion coefficient is set so that the weight is increased as the relative position of the peripheral pixel becomes closer to the pixel of interest. The error diffusion coefficient may be, for example, suitably determined in consideration of the reproducibility of an image or the like, and, thus, may be a coefficient which is used in an error diffusion process in the related art or a coefficient which is newly set.
If the errors of N (N≧1) peripheral pixels Pn (n=1, 2, . . . N) are En (n=1, 2, . . . N) and the error diffusion coefficients corresponding to the peripheral pixels Pn are Dn (n=1, 2, . . . N), one error diffusion method is to simply set a weighted sum of the errors of the peripheral pixels as the correction value A′, as shown in the following equation (1):

A′=D1×E1+D2×E2+ . . . +DN×EN  (1)
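As a minimal sketch, the plain weighted sum of equation (1) can be written as follows (the function name and list-based data layout are illustrative, not part of the specification):

```python
# Sketch of equation (1): the correction value A' is the plain weighted
# sum of the peripheral-pixel errors En, weighted by the error
# diffusion coefficients Dn. Names are illustrative only.
def correction_plain(errors, coefficients):
    """errors: En for the N peripheral pixels Pn;
    coefficients: the matching error diffusion coefficients Dn."""
    return sum(d * e for d, e in zip(coefficients, errors))
```

For example, two peripheral pixels with errors 10 and 2 and coefficients 2/64 and 3/64 yield the correction value 10×2/64+2×3/64.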
In the above-described method, however, a negative error having a relatively large absolute value of a pixel determined to be a black core pattern may inhibit growth of the black dot in a low-concentration region.
In the exemplary embodiment, as compared with the above-described method, from the viewpoint of more easily growing the black dot; that is, from the viewpoint of promoting growth of the black dot, at least a part of the diffused error obtained by weighting the error of the pixel determined to be a black core pattern by the error diffusion coefficient is not included in the correction value.
In one configuration, none of the diffused error obtained by weighting the error of the pixel determined to be a black core pattern by the error diffusion coefficient is included in the correction value, from the viewpoint of simplifying the process. According to this configuration, for example, the correction value is a total of diffused errors obtained by weighting the errors of the peripheral pixels other than the pixel determined to be a black core pattern by the error diffusion coefficients.
In another configuration, a predetermined part of the diffused error obtained by weighting the error of the pixel determined to be the black core pattern by the error diffusion coefficient is included in the correction value and the remaining part is not included in the correction value. The predetermined part is, for example, an amount corresponding to a predetermined ratio. In this configuration, the correction value is, for example, a sum of a total of diffused errors obtained by weighting the errors of the peripheral pixels other than the pixel determined to be the black core pattern by the error diffusion coefficients and a total of a predetermined part of the diffused error obtained by weighting the error of the pixel determined to be the black core pattern by the error diffusion coefficients.
In the exemplary embodiment, the correction value A is represented by, for example, the following equation (2):

A=F1×D1×E1+F2×D2×E2+ . . . +FN×DN×EN  (2)

In equation (2), Fn (n=1, 2, . . . N) are coefficients corresponding to the peripheral pixels Pn (n=1, 2, . . . N). Fn is “1” for a pixel determined to be a filling pattern other than the black core pattern and is “0” or a positive number less than 1 for a pixel determined to be the black core pattern.
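A minimal sketch of equation (2), with the suppression factors Fn passed explicitly (names are illustrative, not part of the specification):

```python
# Sketch of equation (2): each diffused error Dn*En is scaled by a
# factor Fn, which is 1 for peripheral pixels determined to be a
# filling pattern other than the black core pattern, and 0 (or a
# fraction below 1) for pixels determined to be the black core pattern.
def correction_with_suppression(errors, coefficients, factors):
    return sum(f * d * e for e, d, f in zip(errors, coefficients, factors))
```

With all factors equal to 1, this reduces to the plain weighted sum of equation (1); with a factor of 0, the corresponding black-core pixel's diffused error drops out of the correction value entirely.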
In the exemplary embodiment, from the viewpoint of maintaining a concentration over the entire image, among the diffused errors obtained by weighting the errors of the pixels determined to be the black core pattern by the error diffusion coefficients, a diffused error which is not included in the correction value is allotted to pixels proximate to the pixel of interest, including the pixel of interest.
In one configuration of the present invention, from the viewpoint of simplifying the process, the diffused error which is not included in the correction value is allotted only to the pixel of interest. In this configuration, for example, the diffused error which is not included in the correction value is superposed on the error of the pixel of interest. In other words, the diffused error which is not included in the correction value is added to the error between the corrected grayscale value of the pixel of interest and a grayscale value represented by the determined filling pattern, and the error thus obtained is set as the error of the pixel of interest.
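The superposition in this configuration can be sketched as follows (names are illustrative; the grayscale value represented by the determined filling pattern is passed in directly):

```python
# Sketch of the configuration in which the withheld diffused error is
# allotted only to the pixel of interest: it is added to the difference
# between the corrected grayscale value and the grayscale value
# represented by the determined filling pattern.
def error_of_pixel_of_interest(corrected_value, pattern_value, withheld_diffused):
    e0 = corrected_value - pattern_value  # error before superposition
    return e0 + withheld_diffused         # stored as the error of the pixel
```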
The diffused error which is not included in the correction value may alternatively be allotted to one or multiple proximate pixels other than the pixel of interest or may be allotted to the pixel of interest and one or multiple proximate pixels other than the pixel of interest. Here, the proximate pixels other than the pixel of interest are unprocessed pixels.
When the diffused error which is not included in the correction value is to be allotted to the proximate pixels other than the pixel of interest, the diffused error which is not included in the correction value may be, for example, attached as an error of a pixel to which the diffused error is allotted; that is, an error diffused to peripheral pixels of the pixel in the error diffusion process, or may be added to the grayscale value of the pixel to which the diffused error is allotted.
The range of pixels proximate to the pixel of interest to which the diffused error is allotted is a range in which the advantage of easier dot growth is obtained as compared with a case where all diffused errors are included in the correction value; more specifically, it may be suitably determined in consideration of image quality or the like. In one example configuration, the proximate pixels other than the pixel of interest are pixels adjacent to the pixel of interest.
Alternatively, the process to not include at least a part of the diffused error obtained by weighting the error of the pixel determined to be the black core pattern by the error diffusion coefficient and allot this part of the diffused error to the proximate pixels of the pixel of interest including the pixel of interest may be applied only in a predetermined grayscale region corresponding to a low-concentration region and not in the other grayscale regions. In other words, for the region other than the low-concentration region, all of the diffused error obtained by weighting the error of the pixel determined to be the black core pattern by the error diffusion coefficient may be included in the correction value. That is, the weighted sum of the errors of the peripheral pixels may be simply set as the correction value as described by the above-described equation (1).
The correction value may include a value other than the diffused error, and may be, for example, a value obtained by adding a random number to a value based on the diffused error.
In
In step S2, the conversion unit 12 calculates a correction value A. In the example configuration, the correction value A is calculated as follows.
As shown in
The conversion unit 12 calculates, as the correction value A, a sum of the diffused errors obtained by multiplying the errors of the peripheral pixels other than the pixel determined to be the black core pattern by the error diffusion coefficients based on the values stored in the storage units 21-23.
For example, when the errors of multiple peripheral pixels of the pixel of interest P are those shown in
A=10×2/64+2×3/64+2×3/64+20×3/64+30×6/64+5×12/64+10×6/64+10×3/64+10×6/64≈7.5
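The arithmetic of this worked example can be checked directly (the nine products are the diffused-error terms from the sum above):

```python
# The nine diffused-error terms of the worked example: each peripheral
# error multiplied by its error diffusion coefficient (denominator 64),
# with the black-core pixel's terms excluded from the sum.
terms = [10 * 2, 2 * 3, 2 * 3, 20 * 3, 30 * 6, 5 * 12, 10 * 6, 10 * 3, 10 * 6]
A = sum(terms) / 64
print(round(A, 1))  # 7.5
```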
Referring again to
In the exemplary embodiment, the filling pattern is determined as one of an all-white pattern, an all-black pattern, a black core pattern, a white core pattern, a black growth pattern, and a white growth pattern. The group of sub-pixels corresponding to a pixel of the input image data is a group of sub-pixels in a form of a 4×4 matrix. As shown in
In addition, in the exemplary embodiment, the filling pattern is determined in a manner described below on the basis of a first determination map shown in
When, as shown in
When, as shown in
When, as shown in
When, as shown in
When a black core pattern is present in any of the three reference pixels adjacent to the pixel of interest P and the corrected grayscale value Ca of the pixel of interest P is less than the center threshold value Th_Center, the filling pattern is determined to be the black growth pattern. The number of black sub-pixels to be filled is determined according to the second determination map of
When a white core pattern is present in any of the three reference pixels adjacent to the pixel of interest P and the corrected grayscale value Ca of the pixel of interest P is greater than or equal to the center threshold value Th_Center, the filling pattern is determined to be the white growth pattern. The number of white sub-pixels which are not filled is determined according to the second determination map of
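The selection order described above can be sketched as follows. Note that the concrete threshold values and the exact ordering of the tests are taken from the first and second determination maps in the figures; the values used here (th_white=16, th_black=240, th_center=128) are placeholders, not the values of the specification:

```python
# Hypothetical sketch of the filling-pattern selection for the pixel of
# interest. Thresholds and test ordering are placeholders standing in
# for the first and second determination maps.
def choose_pattern(ca, reference_patterns,
                   th_white=16, th_black=240, th_center=128):
    """ca: corrected grayscale value of the pixel of interest;
    reference_patterns: patterns of the adjacent, already-processed
    reference pixels."""
    if "black_core" in reference_patterns and ca < th_center:
        return "black_growth"   # grow the adjacent black dot
    if "white_core" in reference_patterns and ca >= th_center:
        return "white_growth"   # grow the adjacent white dot
    if ca < th_white:
        return "all_white"
    if ca >= th_black:
        return "all_black"
    # otherwise a new dot core is started
    return "black_core" if ca < th_center else "white_core"
```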
With reference again to
In the illustrated configuration of
Then, in step S5, the conversion unit 12 superposes, on the error E0 of the pixel of interest obtained in step S4, the diffused errors obtained by weighting the errors of the peripheral pixels of the pixel of interest determined to be the black core pattern by the error diffusion coefficients.
More specifically, the conversion unit 12 calculates the error E of the pixel of interest by determining, from the values stored in the storage units 21-23, a total of the diffused errors obtained by multiplying the errors of the peripheral pixels of the pixel of interest determined to be the black core pattern by the error diffusion coefficients and adding the total of the diffused error to the error E0 of the pixel of interest. The conversion unit 12 stores the error E of the pixel of interest in the error storage unit 21.
In the illustrated configuration of
Then, in step S6, the conversion unit 12 stores in the filling pattern storage unit 23 information indicating the filling pattern determined in step S3. More specifically, the conversion unit 12 stores a value of “0” when the filling pattern is determined to be the black core pattern in step S3 and stores a value of “1” otherwise.
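Steps S4 and S5 can be sketched as follows, assuming (consistently with the comparative example later in the text, where E′=Ca′−16×k′) that each of the 16 sub-pixels in the 4×4 group represents 16 grayscale levels, so a pattern with k filled sub-pixels represents the grayscale value 16×k; names are illustrative only:

```python
# Sketch of steps S4-S5: the error of the pixel of interest before
# superposition, E0, plus the total of the diffused errors of the
# black-core peripheral pixels that were excluded from the correction
# value.
def stored_error(ca, k, withheld_diffused_total):
    """ca: corrected grayscale value; k: number of filled sub-pixels in
    the determined filling pattern (0 <= k <= 16)."""
    e0 = ca - 16 * k                     # step S4: error vs. the pattern
    return e0 + withheld_diffused_total  # step S5: superposed error E
```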
Next, in step S7, the conversion unit 12 moves the pixel of interest, and returns the process to step S1.
As a comparative example, in the example of
A′=10×2/64+2×3/64−30×6/64+2×3/64−20×2/64+20×3/64+30×6/64+5×12/64+10×6/64+10×3/64+10×6/64−40×12/64≈−3.4.
The filling pattern is determined on the basis of the corrected grayscale value Ca′ (=Cin+A′). When the number of filled sub-pixels in the filling pattern of the pixel of interest is k′ (0≦k′≦16), the error E′ of the pixel of interest is E′=Ca′−16×k′≈(−3.4+Cin)−16×k′.
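The comparative arithmetic can likewise be checked (positive and negative terms grouped; the negative terms are the diffused errors of the black-core pixels that equation (1) does not exclude):

```python
# Check of the comparative example: the same nine positive terms as in
# the exemplary embodiment, plus the three negative diffused errors of
# the black-core pixels included by the plain weighted sum.
pos = [10 * 2, 2 * 3, 2 * 3, 20 * 3, 30 * 6, 5 * 12, 10 * 6, 10 * 3, 10 * 6]
neg = [30 * 6, 20 * 2, 40 * 12]
A_prime = (sum(pos) - sum(neg)) / 64
print(round(A_prime, 1))  # -3.4
```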
Output image data shown at the right of
In contrast, output image data shown at the left of
In
The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
2007-156889 | Jun 2007 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
20020051233 | Morimatsu | May 2002 | A1 |
20030231348 | Ishii et al. | Dec 2003 | A1 |
20050135673 | Mantell | Jun 2005 | A1 |
Number | Date | Country |
---|---|---|
2003348347 | Dec 2003 | JP |
Number | Date | Country
---|---|---
20080310748 A1 | Dec 2008 | US