This application is based on Japanese Patent Application No. 2011-000057 filed on Jan. 4, 2011, the contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus and a technique related thereto.
2. Description of the Background Art
In an image forming apparatus that forms an image based on a scanned image, etc., a plurality of types of regions such as character regions and halftone-dot regions are distinguished from each other, and each region is subjected to image processing according to its type (see Japanese Patent Application Laid-Open No. 2002-218235). For example, a smoothing process is performed on halftone-dot regions, thereby suppressing the occurrence of moiré, etc.
Note that Japanese Patent Application Laid-Open No. 2002-218235 describes detection of halftone-dot regions by an isolated point detection process.
However, as will be described later, when halftone-dot regions are determined only by an isolated point detection process, dots inside a character (pixels representing the lines of the character) are also extracted as pixels in a halftone-dot region. If a smoothing process aimed at preventing moiré, etc., is then performed on such a halftone-dot region, the character edges become blurred.
An object of the present invention is to provide an image processing technique capable of more appropriately extracting halftone-dot regions from an image containing both characters and halftone dots.
A first aspect of the present invention is directed to an image processing apparatus including: an isolated point detecting unit that detects isolated points in image data; a line-shaped region extracting unit that extracts line-shaped regions in the image data, as character line candidate regions; an isolated point type determining unit that determines a representative pixel of each isolated point in each line-shaped region to be a pixel of interest, determines discontinuity of each line-shaped region around the pixel of interest for each isolated point, determines an isolated point determined to have discontinuity to be a true isolated point, and determines an isolated point determined to have no discontinuity to be a pseudo isolated point; and a halftone-dot region determining unit that determines a halftone-dot region, based on isolated point type determination results for the respective isolated points detected by the isolated point detecting unit.
A second aspect of the present invention is directed to a non-transitory computer-readable recording medium having recorded therein a program for causing a computer to perform the steps of (a) detecting isolated points in image data; (b) extracting line-shaped regions in the image data, as character line candidate regions; (c) determining a representative pixel of each isolated point in each line-shaped region to be a pixel of interest, determining discontinuity of each line-shaped region around the pixel of interest for each isolated point, determining an isolated point determined to have discontinuity to be a true isolated point, and determining an isolated point determined to have no discontinuity to be a pseudo isolated point; and (d) determining a halftone-dot region, based on isolated point type determination results obtained in the step (c) for the respective isolated points detected in the step (a).
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
An embodiment of the present invention will be described below with reference to the drawings.
<1. Configuration>
The MFP 1 is an apparatus having a scanner function, a printer function, a copy function, a facsimile function, etc. (also referred to as a multifunction product). Specifically, the MFP 1 includes an image reading unit 2, a printout unit 4, a communication unit 5, an input-output unit 6, a storage unit 8, and a controller 9. By allowing these units to operate in an integrated manner, the above-described functions are implemented.
The image reading unit 2 is a processing unit that optically reads a document placed in a predetermined position of the MFP 1 and thereby creates an image of the document (also referred to as a document image or a scanned image). The image reading unit 2 is also referred to as a scanner unit.
The printout unit 4 is an output unit that prints out an image on various types of media such as paper, based on image data on a target image.
The communication unit 5 is a processing unit capable of performing facsimile communication via a public telephone line, etc. The communication unit 5 can also perform network communication over a communication network. By using the network communication, the MFP 1 can exchange various data with a desired destination. In addition, by using the network communication, the MFP 1 can send and receive emails.
The input-output unit 6 includes an operation input unit 61 that accepts input to the MFP 1; and a display unit 62 that performs display output of various types of information.
The storage unit 8 is composed of a storage apparatus such as a hard disk drive (HDD). The storage unit 8 stores a document image, etc., created by the image reading unit 2, etc.
The controller 9 is a control apparatus that performs overall control of the MFP 1, and is configured to include a CPU and various semiconductor memories (a RAM, a ROM, and the like). Various functions of the MFP 1 are implemented by various processing units operating under control of the controller 9.
The processing target image creating unit 21 is a processing unit that creates, based on a scanned image, images for a region type determination process (also referred to as images for region type determination) for various regions (including halftone-dot regions, character line regions, and the like).
The isolated point detecting unit 23 is a processing unit that detects “isolated points” (described later) by performing an isolated point detection process on the scanned image (specifically, the above-described images for region type determination).
The line-shaped region extracting unit 24 is a processing unit that extracts pixels in a “line-shaped region” (described later) by performing an edge extraction process, etc., on the scanned image (specifically, the above-described images for region type determination).
The isolated point type determining unit 25 is a processing unit that determines the type of each isolated point detected by the isolated point detecting unit 23. The isolated point type determining unit 25 determines, as will be described later, whether each isolated point is a “true isolated point” or a “pseudo isolated point”.
The character line region determining unit 26 is a processing unit that determines character line regions in the scanned image, based on the extraction results (line-shaped region extraction results) obtained by the line-shaped region extracting unit 24 and the determination results (isolated point type determination results) obtained by the isolated point type determining unit 25.
The halftone-dot region determining unit 27 is a processing unit that determines halftone-dot regions in the scanned image, based on the detection results (isolated point detection results) obtained by the isolated point detecting unit 23 and the determination results (isolated point type determination results) obtained by the isolated point type determining unit 25.
The modified image creating unit 28 is a processing unit that creates a modified image in which appropriate image processing is performed on the character line regions and the halftone-dot regions which are determined by the character line region determining unit 26 and the halftone-dot region determining unit 27.
Various types of image processing are performed on the scanned image by the processing units 21 and 23 to 28, whereby a modified image is created. The modified image is then printed out by the printout unit 4, whereby functions such as the so-called copy function are implemented.
<2. Image Processing>
Next, image processing, etc., performed by the processing units 21 and 23 to 28 will be described.
As shown in
In step S50, the modified image creating unit 28 creates a modified image in which appropriate image correction processes according to the types of the respective regions are performed on the character line regions and the halftone-dot regions which are determined by the character line region determining unit 26 and the halftone-dot region determining unit 27, respectively. For example, a smoothing process is performed on the halftone-dot regions and an edge enhancement process is performed on the character line regions, whereby the modified image is created.
Thereafter, the modified image is printed out by the printout unit 4. Accordingly, a document image printout operation (a so-called copy function) is implemented.
In steps S11 to S15, the operation of determining the type of a region is performed using detection results concerning “positive type line-shaped regions” (described later) and “black isolated points” (isolated points each having a smaller grayscale value than its surrounding pixels (surrounding region)) in a scanned image. In addition, in parallel with the processes in steps S11 to S15, the processes in steps S21 to S25 are performed. In steps S21 to S25, the operation of determining the type of a region is performed using detection results concerning “negative type line-shaped regions” (described later) and “white isolated points” (isolated points each having a larger grayscale value than its surrounding pixels (surrounding region)) in the scanned image. In steps S31 and S32, the operations of finally determining the types of regions are performed using the determination results obtained in steps S11 to S15 and the determination results obtained in steps S21 to S25.
Note that prior to these processes, a min (R, G) image, a max (R, G, B) image, an R-plane image, a G-plane image, etc., are created as images for region type determination, based on the scanned image. Here, the min (R, G) image is a grayscale image obtained from the original full color image (scanned image) by taking, for each pixel, the minimum of its R component value and G component value as the new pixel value (grayscale value) of that pixel. Likewise, the max (R, G, B) image is a grayscale image obtained by taking, for each pixel, the maximum of its R component value, G component value, and B component value as the new pixel value of that pixel.
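As a rough illustration of the conversion just described, the following sketch derives these images from an H×W×3 RGB array; the function name and the dictionary layout are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def make_region_type_images(rgb):
    """Hypothetical sketch: derive the images used for region type determination
    from an H x W x 3 RGB scanned image (uint8)."""
    r = rgb[:, :, 0]
    g = rgb[:, :, 1]
    b = rgb[:, :, 2]
    min_rg = np.minimum(r, g)                   # min(R, G) image: positive type line-shaped regions
    max_rgb = np.maximum(np.maximum(r, g), b)   # max(R, G, B) image: negative type line-shaped regions
    return {
        "min_rg": min_rg,
        "max_rgb": max_rgb,
        "r_plane": r,   # used for isolated point detection
        "g_plane": g,   # used for isolated point detection
    }
```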
In this embodiment, as line-shaped regions (line-shaped regions forming the lines of a character, etc.), two types of regions, a positive type line-shaped region and a negative type line-shaped region, are detected. Specifically, a positive type line-shaped region is detected based on the min (R, G) image, and a negative type line-shaped region is detected based on the max (R, G, B) image. The positive type line-shaped region is a line-shaped region whose color is darker than the background color, and the negative type line-shaped region is a line-shaped region whose color is lighter than the background color. In other words, the positive type line-shaped region can also be expressed as a line-shaped region with a lower luminance than the background, and the negative type line-shaped region as a line-shaped region with a higher luminance than the background.
In addition, in this embodiment, based on the R-plane image, “black isolated points” are detected and “white isolated points” are also detected. Likewise, based on the G-plane image, “black isolated points” are detected and “white isolated points” are also detected. Note that here the B-plane image is not used for the process of detecting “black isolated points” and “white isolated points”. Note, however, that the configuration is not limited thereto and the B-plane image may also be used for the process of detecting “black isolated points” and/or “white isolated points”.
First, in step S11 (
In the isolated point detection process, the sizes and positions of the isolated points are detected.
For example, an isolated point (black isolated point) of the isolated point size “1” is detected by a process such as that shown below. Specifically, as shown in
Likewise, an isolated point of the isolated point size “3” is detected by a process such as that shown below. Specifically, as shown in
In a likewise manner, black isolated points of a plurality of other sizes are detected. In this manner, the barycentric position (representative position), etc., of a black isolated point of each size are detected. Note that when one and the same pixel of interest EB is redundantly detected as isolated points of a plurality of sizes, the largest of the detected isolated point sizes may be determined to be the isolated point size of the pixel of interest EB.
By such a process, for example, as shown in
In the above-described manner, a black isolated point detection process based on the R-plane image is performed. Cyan halftone dots are particularly easily detected as black isolated points in the R-plane image (a plane image of red which is a complementary color of cyan).
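The following is a minimal sketch of the size-1 black isolated point detection described above, under the assumption that a pixel qualifies when it is darker than all eight of its neighbors by a fixed margin; the exact comparison rule and threshold used in the embodiment are given in the figures, so both are assumptions here, as are the function name and the uint8 plane representation.

```python
import numpy as np

def detect_black_isolated_points_size1(plane, threshold=16):
    """Hypothetical sketch of size-1 black isolated point detection on a grayscale
    plane image (e.g., the R-plane): a pixel is marked when it is darker than every
    one of its 8 neighbors by at least `threshold` (assumed rule and value)."""
    h, w = plane.shape
    mask = np.zeros((h, w), dtype=bool)
    p = plane.astype(np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = p[y, x]
            ring = np.concatenate([p[y - 1, x - 1:x + 2],
                                   p[y,     x - 1:x + 2],
                                   p[y + 1, x - 1:x + 2]])
            ring = np.delete(ring, 4)          # drop the center pixel itself
            if np.all(center + threshold <= ring):
                mask[y, x] = True              # representative pixel of a size-1 black isolated point
    return mask
```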
In a likewise manner, a black isolated point detection process based on the G-plane image is also performed. Note that although the G-plane image in
Note that in each color component plane, a plurality of dots forming a halftone-dot region of its complementary color are easily detected as isolated points. For example, as described above, in an R-plane, halftone dots of cyan (C) which is a complementary color of red (R) are easily detected as isolated points. Likewise, in a G-plane, halftone dots of magenta (M) which is a complementary color of green (G) are easily detected as isolated points. By using plane images of a plurality of color components, halftone dots of various colors can be favorably detected.
Thereafter, subsequent processes are performed using both of the black isolated points detected based on the R-plane image and the black isolated points detected based on the G-plane image.
Here, as can be seen by referring to
If a region where those isolated points PS are present is determined to be a halftone-dot region as it is, then a region including the character line region RL is determined to be a halftone-dot region. Then, as described above, if a smoothing process is performed on such a halftone-dot region, then a smoothing process is also performed on the character line region RL, which may cause a problem of a blurred character edge.
In the embodiment, on the other hand, such a problem is solved by performing processes at and subsequent to the next step S12.
Specifically, first, in step S12, line-shaped regions RE are extracted from a scanned image (here, the min (R, G) image) by the line-shaped region extracting unit 24. More specifically, an edge extraction process is performed on the min (R, G) image, whereby edges of a character, etc., are extracted. Then, closed regions surrounded by the extracted edges are extracted as regions (line-shaped regions) formed of the “lines” of a character, etc. The line-shaped regions RE are extracted as candidate regions for lines forming a character (also referred to as character line candidate regions). Note that the thickness of the line-shaped regions RE is not limited to a one-pixel width and can have various appropriate sizes. The shape of the line-shaped regions RE is, for example, a dot, a straight line, a curve, or the like. A line-shaped region RE here is a region having a smaller grayscale value than the pixels in its outer region (surrounding region) and is a candidate region for a line of a positive-state character (a character with a darker (blacker) color than a background color), and thus is also represented as a “positive type character line candidate region” or a “positive type line-shaped region”.
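A simplified sketch of this extraction step is shown below, with an edge-magnitude threshold and hole filling standing in for the embodiment's edge extraction and closed-region extraction; the operator, threshold, and function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_positive_line_regions(min_rg, edge_threshold=100):
    """Hypothetical sketch of step S12 on the min(R, G) image: extract edges, then
    treat the closed regions surrounded by the edges as positive type line-shaped
    regions (character line candidate regions)."""
    img = min_rg.astype(np.float32)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy) > edge_threshold      # edge pixels of characters, etc.
    # Edge pixels plus the interiors they enclose, used here as line-shaped regions RE.
    return ndimage.binary_fill_holes(edges)
```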
Accordingly, for example, as shown in
Here, in the line-shaped region extraction process, an original character line region RL is extracted as a line-shaped region RE and the above-described isolated points PS are also extracted as line-shaped regions RE. For example, in
On the other hand, in the embodiment, processes at and subsequent to the next step S13 are further performed.
In step S13, the types of a plurality of isolated points (specifically, black isolated points) detected in step S11 are determined by the isolated point type determining unit 25. Specifically, the isolated point type determining unit 25 determines whether each isolated point is a “true isolated point” or a “pseudo isolated point”.
Here, a “pseudo isolated point” is one of a plurality of isolated points (see
The isolated point type determining unit 25 determines a pixel that is present in a line-shaped region RE and that is also a representative pixel (barycentric pixel) PB of an isolated point PS, to be a pixel of interest EB, and determines discontinuity of the line-shaped region RE around the pixel of interest EB. In other words, the isolated point type determining unit 25 determines discontinuity of each line-shaped region for each isolated point. An isolated point PS whose associated pixel of interest EB is determined to have discontinuity of the line-shaped region RE around it is determined to be a "true isolated point". On the other hand, an isolated point PS whose associated pixel of interest EB is determined to have no such discontinuity is determined to be a "pseudo isolated point".
Specifically, as shown in
Then, in step S42, the size of a detection target region is determined based on the isolated point size of the isolated point PS associated with the pixel of interest EB. For example, the range of a predetermined number of pixels centered on the pixel of interest EB (the predetermined number being, e.g., the isolated point size N of the pixel of interest EB) is determined to be a detection target region TD (see
Then, in step S43, the isolated point type determining unit 25 determines discontinuity of the line-shaped region around the pixel of interest EB, in four directions DR1 to DR4 (see
The isolated point type determining unit 25 first determines whether discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the first direction DR1 (e.g., a direction having an inclination of 0 degrees with respect to an X-axis). The discontinuity detection is performed on the detection target region TD determined in step S42.
Specifically, when the pixel of interest EB is an isolated point of the isolated point size "3" (N=3), a determination as to whether discontinuity of a line-shaped region is detected on "both sides" of the pixel of interest EB in the first direction DR1 is made as follows. More specifically, it is determined whether the following conditions are satisfied: in the direction DR1 (X direction), there is a pixel not belonging to a line-shaped region RE within the range from the pixel of interest EB to the third pixel on its left (−X) side (both ends inclusive), and there is a pixel not belonging to a line-shaped region RE within the range from the pixel of interest EB to the third pixel on its right (+X) side (both ends inclusive). In short, it is determined whether there are pixels in "non-line-shaped regions" in both directions, the left and right directions, starting from the pixel of interest EB. When the conditions are satisfied, it is determined that discontinuity of the line-shaped region RE is detected on "both sides" of the pixel of interest EB in the first direction DR1. On the other hand, when the conditions are not satisfied, it is not determined that discontinuity of the line-shaped region RE is detected on "both sides" of the pixel of interest EB in the first direction DR1.
For example, for the pixel of interest EB1 in
On the other hand, for a pixel of interest EB2 in
For a pixel of interest EB associated with an isolated point of a size other than the isolated point size "3" as well, the same determination process as that described above is performed in a specific range (the range of a predetermined number of pixels) TD around the pixel of interest EB (specifically, in a detection target region TD according to the isolated point size).
Then, the isolated point type determining unit 25 next determines in a likewise manner whether discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the second direction DR2 perpendicular to the first direction DR1 (e.g., a direction having an inclination of 90 degrees with respect to the X-axis (i.e., a Y direction)). For example, when, in the second direction DR2 (Y direction), there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the lower side (−Y side) from the pixel of interest EB and there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the upper side (+Y side) from the pixel of interest EB, it is determined that discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the second direction DR2.
In addition, the isolated point type determining unit 25 next determines in a likewise manner whether discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the third direction DR3 (e.g., a direction having an inclination of +45 degrees with respect to the X-axis). For example, when, in the third direction DR3, there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the lower left side (−X side and −Y side) from the pixel of interest EB and there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the upper right side (+X side and +Y side) from the pixel of interest EB, it is determined that discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the third direction DR3.
Furthermore, the isolated point type determining unit 25 next determines in a likewise manner whether discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the fourth direction DR4 perpendicular to the third direction DR3 (e.g., a direction having an inclination of 135 degrees (−45 degrees) with respect to the X-axis). For example, when, in the fourth direction DR4, there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the upper left side (−X side and +Y side) from the pixel of interest EB and there is a pixel not in a line-shaped region RE within the range of from the pixel of interest EB to an Nth pixel on the lower right side (+X side and −Y side) from the pixel of interest EB, it is determined that discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in the fourth direction DR4.
Then, in step S44, the isolated point type determining unit 25 determines the type of the isolated point associated with the pixel of interest EB, according to whether a predetermined criterion is satisfied. Here, whether discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in a set of two perpendicular directions ((DR1 and DR2) or (DR3 and DR4)) among the four directions is adopted as the predetermined criterion.
Specifically, when the isolated point type determining unit 25 determines that discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in both directions of one perpendicular pair (DR1 and DR2), the isolated point type determining unit 25 determines that the isolated point associated with the pixel of interest EB is a "true isolated point" (step S45). Likewise, when the isolated point type determining unit 25 determines that discontinuity of the line-shaped region RE is detected on both sides of the pixel of interest EB in both directions of the other perpendicular pair (DR3 and DR4), the isolated point type determining unit 25 also determines that the isolated point associated with the pixel of interest EB is a "true isolated point" (step S45). Such determinations are made because, when "disconnection" of the line-shaped region RE on both sides of the pixel of interest EB is present in both of two perpendicular directions in this manner, the pixel of interest EB (specifically, the isolated point associated with the pixel of interest EB) is highly likely to be an "isolated point" (true isolated point) in the original meaning of the term.
On the other hand, isolated points other than "true isolated points" are determined to be "pseudo isolated points". Specifically, when discontinuity of the line-shaped region RE is not detected on both sides of the pixel of interest EB in at least one of the directions DR1 and DR2, and is likewise not detected on both sides in at least one of the directions DR3 and DR4, the isolated point associated with the pixel of interest EB is determined to be a "pseudo isolated point" (step S46). Such a determination is made because, when "disconnection" of the line-shaped region RE on both sides of the pixel of interest EB is absent in at least one direction of each perpendicular pair, the pixel of interest EB is highly likely not to be an original isolated point.
For example, for the pixel of interest EB1 in
On the other hand, for the pixel of interest EB2 in
For the pixel of interest EB3 in
Here, in
Note, however, that there is also a case in which the halftone-dot angle is 0 degrees. In that case, it is particularly useful to determine, when discontinuity of a line-shaped region RE is detected on both sides of a pixel of interest EB in both of the two directions DR3 and DR4 perpendicular to each other, an isolated point associated with the pixel of interest EB to be a “true isolated point” (step S45). In this case, as shown in
Therefore, to more appropriately handle halftone dots with various angles, it is preferable to perform, as described above, an isolated point type determination operation using discontinuity not only for the directions DR1 and DR2 but also for the directions DR3 and DR4.
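The discontinuity test and the true/pseudo decision of steps S43 to S46 can be sketched as follows, assuming the line-shaped regions RE are held as a boolean mask and each isolated point is given by its barycentric pixel and its size N; the function names, the (dy, dx) direction encoding, and the treatment of pixels outside the image as being outside the line-shaped region are illustrative assumptions, not the embodiment's implementation.

```python
import numpy as np

# Unit steps (dy, dx) for the four directions DR1 to DR4 (0, 90, +45, and 135 degrees);
# the concrete axis convention is only for illustration.
_DIRS = {
    "DR1": (0, 1),
    "DR2": (1, 0),
    "DR3": (1, 1),
    "DR4": (1, -1),
}

def _broken_on_both_sides(line_mask, y, x, step, n):
    """True when, within N pixels from the pixel of interest (both ends inclusive),
    a pixel outside the line-shaped region exists on each side along `step`.
    Pixels outside the image are treated as outside the line-shaped region (assumption)."""
    h, w = line_mask.shape
    def side_broken(sign):
        for k in range(0, n + 1):
            yy, xx = y + sign * k * step[0], x + sign * k * step[1]
            if not (0 <= yy < h and 0 <= xx < w) or not line_mask[yy, xx]:
                return True
        return False
    return side_broken(+1) and side_broken(-1)

def classify_isolated_point(line_mask, y, x, size_n):
    """Sketch of steps S43-S46 for one isolated point whose barycentric pixel (y, x)
    lies in a line-shaped region: 'true' when discontinuity is found on both sides
    in both directions of either perpendicular pair (DR1, DR2) or (DR3, DR4)."""
    broken = {name: _broken_on_both_sides(line_mask, y, x, step, size_n)
              for name, step in _DIRS.items()}
    if (broken["DR1"] and broken["DR2"]) or (broken["DR3"] and broken["DR4"]):
        return "true"      # true isolated point (halftone dot)
    return "pseudo"        # pseudo isolated point (dot inside a character line)
```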
In the next step S47, a process of removing pseudo isolated points (pseudo black isolated points) from a plurality of isolated points (black isolated points) detected in step S11 (
Then, in step S48, it is determined whether the processes in steps S41 to S46 have been performed for all of the isolated points (black isolated points) in the line-shaped regions RE. If any isolated point remains unprocessed, processing returns to step S41. On the other hand, if it is determined that the processes in steps S41 to S46 have been performed for all of the isolated points in the line-shaped regions RE, the process in step S13 is completed. In this manner, the processes in steps S41 to S46 are performed for all of the isolated points (black isolated points) in the line-shaped regions RE.
By the processes such as those described above (steps S41 to S48), all of the pseudo isolated points (pseudo black isolated points) are removed from the isolated point detection results obtained in step S11.
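Building on the previous sketch, the overall loop of steps S41 to S48 might look as follows; the list-of-tuples representation of the detected isolated points and the helper name are assumptions.

```python
def remove_pseudo_isolated_points(points, line_mask):
    """Sketch of steps S41-S48: classify each detected black isolated point whose
    barycentric pixel lies in a line-shaped region with the (hypothetical)
    classify_isolated_point() above, and keep only points not judged to be pseudo.
    `points` is assumed to be a list of (y, x, size) tuples from step S11."""
    kept = []
    for (y, x, size) in points:
        if line_mask[y, x] and classify_isolated_point(line_mask, y, x, size) == "pseudo":
            continue                      # step S47: remove the pseudo isolated point
        kept.append((y, x, size))         # true isolated points (and points outside RE) remain
    return kept
```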
As a result, ideally, of a plurality of isolated points detected in step S11 (see
In the next step S14, a halftone-dot region determination process based on the isolated point type determination results is performed. Specifically, the halftone-dot region determining unit 27 determines a region obtained by removing the pseudo isolated points (pseudo black isolated points) from the plurality of isolated points (black isolated points) (in other words, a region that includes the true black isolated points but not the pseudo black isolated points), to be a halftone-dot region. More specifically, the halftone-dot region determining unit 27 performs a process of extending each remaining isolated point (black isolated point) toward its adjacent isolated point and thereby creates a continuous region including those isolated points (black isolated points), and determines the continuous region to be a halftone-dot region. For example, in
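A minimal sketch of this extension into a continuous region, using morphological dilation with an assumed fixed extension width, is shown below; the embodiment extends each dot to its adjacent isolated point, so the width used here is only illustrative.

```python
import numpy as np
from scipy import ndimage

def determine_halftone_region(true_point_mask, extend=4):
    """Sketch of step S14: dilate the mask of remaining (true) isolated points so
    that adjacent dots merge into one continuous region, and take that region as
    the halftone-dot region. `extend` is an assumed extension width."""
    struct = np.ones((2 * extend + 1, 2 * extend + 1), dtype=bool)
    return ndimage.binary_dilation(true_point_mask, structure=struct)
```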
In step S15, a line-shaped region determination process is performed. Specifically, the character line region determining unit 26 determines a line-shaped region RE (positive type line-shaped region REp) from which true isolated points (true black isolated points) are removed, to be a character line region. More specifically, the character line region determining unit 26 modifies a line-shaped region by excluding a region obtained by extending each true isolated point (true black isolated point) according to its isolated point size, from a line-shaped region RE extracted in step S12. Then, the line-shaped region after the exclusion (after the modification) is determined to be a character line region. For example, in
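The exclusion of the grown true isolated points from the line-shaped regions can be sketched in the same way; a single assumed extension width stands in here for the per-point isolated point size used in the embodiment.

```python
import numpy as np
from scipy import ndimage

def determine_character_line_region(line_mask, true_point_mask, extend=2):
    """Sketch of step S15: grow each true isolated point and exclude the grown
    region from the extracted line-shaped regions RE; what remains is taken as
    the character line region."""
    grown = ndimage.binary_dilation(true_point_mask,
                                    structure=np.ones((2 * extend + 1,) * 2, dtype=bool))
    return line_mask & ~grown
```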
In steps S21 to S25 (
Specifically, first, in step S21, the isolated point detecting unit 23 performs an isolated point detection process on images (the R-plane image, etc.) created based on the scanned image and thereby detects isolated points (specifically, white isolated points). A “white isolated point” is an isolated point having a larger grayscale value than its surrounding pixels. In short, a “white isolated point” is a white dot isolated from its surrounding.
Note that detection of a white isolated point differs from detection of a black isolated point in that the magnitude relationship of the grayscale values is reversed. For example, a white isolated point of the isolated point size "1" is detected by a process such as that shown below. Specifically, when a condition is established where the pixel value V5 of a pixel of interest P5 (see
Then, in step S22, line-shaped regions RE are extracted from a scanned image (here, the max (R, G, B) image) by the line-shaped region extracting unit 24. Specifically, an edge extraction process is performed on the max (R, G, B) image, whereby edges of a character, etc., are extracted. Then, closed regions surrounded by the extracted edges are extracted as line-shaped regions (character line candidate regions). Note that a line-shaped region RE is a region having a larger grayscale value than the pixels in its outer region (surrounding region) and is a candidate region for a line of a negative-state character (a character with a lighter (whiter) color than a background color), and thus is also represented as a “negative type character line candidate region” or a “negative type line-shaped region”.
As a result, for example, as shown in
Then, in step S23, the types of a plurality of isolated points (specifically, white isolated points) detected in step S21 are determined by the isolated point type determining unit 25. The process in this step S23 is the same as that in step S13.
Accordingly, it is determined whether each white isolated point is a “true white isolated point” or a “pseudo white isolated point”, and a process of removing pseudo isolated points (pseudo white isolated points) from a plurality of isolated points (white isolated points) detected in step S21 is performed.
In the next step S24, a halftone-dot region determination process based on the isolated point type determination results is performed. Specifically, the halftone-dot region determining unit 27 determines a region obtained by removing the pseudo white isolated points from the plurality of white isolated points (in other words, a region that includes the true white isolated points but not the pseudo white isolated points), to be a halftone-dot region. More specifically, the halftone-dot region determining unit 27 performs a process of extending each true white isolated point toward its adjacent isolated point and thereby creates a continuous region including the true white isolated points, and determines the continuous region to be a halftone-dot region.
Furthermore, in step S25, a line-shaped region determination process is performed. Specifically, the character line region determining unit 26 determines a negative type line-shaped region REn from which true white isolated points are removed, to be a character line region. More specifically, the character line region determining unit 26 modifies a negative type line-shaped region by excluding a region obtained by extending each true white isolated point according to its isolated point size, from a negative type line-shaped region REn extracted in step S22. Then, the negative type line-shaped region after the exclusion (after the modification) is determined to be a character line region.
Thereafter, in step S31, the results of the determinations in steps S14 and S24 are integrated. Specifically, the union (OR region) of two regions, namely the halftone-dot region determined in step S14 from the combination of black isolated points PSp and positive type line-shaped regions REp and the halftone-dot region determined in step S24 from the combination of white isolated points PSn and negative type line-shaped regions REn, is finally determined to be the "halftone-dot region".
In addition, in step S32, the results of the determinations in steps S15 and S25 are integrated. Specifically, the union (OR region) of two regions, namely the character line region determined in step S15 from the combination of black isolated points PSp and positive type line-shaped regions REp and the character line region determined in step S25 from the combination of white isolated points PSn and negative type line-shaped regions REn, is finally determined to be the "character line region".
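The integration of steps S31 and S32 amounts to taking the union of the corresponding masks, as in the following sketch (the mask names are assumptions):

```python
def integrate_region_masks(halftone_s14, halftone_s24, charline_s15, charline_s25):
    """Sketch of steps S31 and S32: the final halftone-dot region and character line
    region are the unions (logical OR) of the masks obtained from the positive side
    (black isolated points) and the negative side (white isolated points)."""
    final_halftone = halftone_s14 | halftone_s24   # step S31
    final_charline = charline_s15 | charline_s25   # step S32
    return final_halftone, final_charline
```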
According to the processes such as those described above, in steps S13 and S23, pseudo isolated points are appropriately removed from line-shaped regions RE which are character line candidate regions. For example, as shown in
Therefore, when a smoothing process is performed on halftone-dot regions, the smoothing process is not performed on a character line region RL. Accordingly, the problem of a blurred character edge is avoided.
In steps S15 and S25, of true isolated points and pseudo isolated points, only the true isolated points are appropriately removed from line-shaped regions RE which are character line candidate regions. For example, as shown in
<3. Variants, Etc.>
Although the embodiment of the present invention is described above, the present invention is not limited to the content described above.
For example, although in the above-described embodiment, in step S42 (
Although in the above-described embodiment the case is exemplified in which discontinuity of a line-shaped region in two sets of two perpendicular directions (DR1 and DR2) and (DR3 and DR4) is determined, the present invention is not limited thereto, and discontinuity of a line-shaped region in other perpendicular directions (DR5 and DR6) may be determined. For example, a direction having an inclination angle of 30 degrees with respect to the X-axis and a direction having an inclination angle of 120 degrees (−60 degrees) with respect to the X-axis may be adopted as DR5 and DR6, respectively.
When a halftone-dot direction is known in advance, it is preferable to determine discontinuity of a line-shaped region in two perpendicular directions which are rotated by 45 degrees from the halftone-dot direction. For example, when it is known that the halftone-dot angle is 45 degrees, it is preferable to determine whether discontinuity of a line-shaped region is detected on both sides of a pixel of interest EB in two directions, a direction DR2 having an inclination angle of 90 degrees with respect to the X-axis and a direction DR1 having an inclination angle of 0 degrees with respect to the X-axis. Note that a halftone-dot direction may be detected by performing, for example, a process using a plurality of filters (linear direction detection filters) for pixel array detection for different specific directions (e.g., 0 degrees, 45 degrees, 90 degrees, and 135 degrees), on an image resulting from isolated point detection results (see
In particular, when a halftone-dot direction is known in advance, discontinuity of a line-shaped region may be determined only in the two directions which are rotated by 45 degrees from the halftone-dot direction. For example, when it is known that the halftone-dot angle is 45 degrees, an isolated point type determination operation may be performed by determining only discontinuity of a line-shaped region in the above-described two perpendicular directions (DR1 and DR2). This improves processing efficiency.
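One possible sketch of the halftone-dot direction detection using linear direction detection filters (mentioned in the preceding variant) convolves the isolated point detection result with one small filter per candidate direction and counts where isolated points line up; the kernel size, the alignment threshold, and the angle labels (which depend on the image coordinate convention) are all assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 5x5 linear direction detection filters, each with ones along one line.
_LINE_KERNELS = {
    "0 deg":   np.array([[0]*5, [0]*5, [1]*5, [0]*5, [0]*5], dtype=float),
    "90 deg":  np.array([[0, 0, 1, 0, 0]] * 5, dtype=float),
    "45 deg":  np.eye(5, dtype=float)[::-1],   # diagonal labels depend on the axis convention
    "135 deg": np.eye(5, dtype=float),
}

def estimate_halftone_direction(isolated_point_mask):
    """Sketch of the variant: pick the direction along which isolated points most
    often line up (at least two points within the filter window)."""
    m = isolated_point_mask.astype(float)
    scores = {name: int((ndimage.convolve(m, k, mode="constant") >= 2).sum())
              for name, k in _LINE_KERNELS.items()}
    return max(scores, key=scores.get)
```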
Although in the above-described embodiment the case is exemplified in which an isolated point type is determined according to whether discontinuity of a line-shaped region is detected on both sides of a pixel of interest EB in both of two perpendicular directions (e.g., (DR1 and DR2)), the present invention is not limited thereto.
Specifically, an isolated point type may be determined according to whether discontinuity of a line-shaped region is detected on both sides of a pixel of interest EB in at least one of a plurality of directions (e.g., the above-described four directions DR1 to DR4). More specifically, when a condition is satisfied where discontinuity of a line-shaped region is detected on both sides of a pixel of interest EB in at least one of a plurality of different directions (DR1 to DR4, etc.), an isolated point associated with the pixel of interest EB may be determined to be a “true isolated point”. On the other hand, when the condition is not satisfied, the isolated point associated with the pixel of interest EB may be determined to be a “pseudo isolated point”.
Although in the above-described embodiment the case of performing two processes, the process in steps S11 to S15 and the process in steps S21 to S25, is exemplified, the present invention is not limited thereto and only one of the two processes may be performed. For example, of the two processes, only the process in steps S11 to S15 may be performed.
Although in the above-described embodiment the case is exemplified in which line-shaped regions are extracted by performing an edge extraction process, etc., on a scanned image in step S12, etc., the present invention is not limited thereto. For example, positive type line-shaped regions may be extracted by performing a low luminance region extraction process, etc., on a scanned image, etc. Likewise, negative type line-shaped regions may be extracted by performing a high luminance region extraction process, etc., on a scanned image, etc. Note that, in order to exclude regions other than character line regions (solid filled regions, etc.), it is further preferable at this time to extract the above-described line-shaped regions while excluding regions having an area of a certain size or more or a thickness of a certain size or more.
Although in the above-described embodiment a scanned image created by the image reading unit 2 is exemplified as image data (digital image data), the present invention is not limited thereto. For example, image data may be image data for printing (printing image data) sent from an external device, etc. The printing image data may be generated by a scanning process or may be generated by predetermined application software (a word processor, a graphics processor, etc.).
Although in the above-described embodiment the case in which the MFP 1 functions as an image processing apparatus is exemplified, the present invention is not limited thereto, and a computer (a personal computer, etc.) may function as an image processing apparatus.
The image processing apparatus (computer) 1B performs image processing such as that described above, on the scanned image. Specifically, the image processing apparatus (computer) 1B reads a predetermined program PG from one of various types of non-transitory (or portable) computer-readable recording media 91 (e.g., a flexible disk, a CD-ROM, a DVD-ROM, etc.) having the program PG recorded therein, executes the program PG using its CPU, etc., and thereby implements the same functions as those of the above-described controller 9. Accordingly, the image processing apparatus (computer) 1B can perform the same image processing, etc., as those in the above-described embodiment. Note that the program PG may be supplied through a recording medium or may be supplied, for example, by being downloaded over the Internet.
While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous modifications and variations can be devised without departing from the scope of the invention.