1. Field of the Invention
The present invention relates to an image processing apparatus and an image processing method, and more specifically, to an image processing apparatus capable of handling information of paper fiber, a so-called paper fingerprint information, an image processing method therefor, a program therefor, and a recording medium therefor.
2. Description of the Related Art
Conventionally, there is a technique of embedding auxiliary data, such as a model number, in image data for the purpose of follow-up investigation of a counterfeit copy of a document in a digital multi-function machine. Moreover, in the processing of embedding the auxiliary data, there exists a technique of embedding the auxiliary data in a place hard for a third person to find, in order to prevent the auxiliary data from being easily found in a printed image and decoded by the third person. For example, such a technique is disclosed in Japanese Patent Laid-Open No. 2004-214831.
Moreover, there is a technique whose purpose is to guarantee the originality of a document, whereby an irreproducible disorder part included in the image information of the image data is extracted, and the extracted data is compared with the disorder part extracted at the time of reading. Then, based on the comparison result, it is determined whether the read original is the original. For example, such a technique is disclosed in Japanese Patent Laid-Open No. 2004-153405.
Moreover, as a technique of reading information of paper fiber, there is a technique whereby the information of the fiber is read, with a mark added to a paper form used as a reference, and this information of fiber is converted to a predetermined pattern and is printed. For example, such a technique is disclosed in Japanese Patent Laid-Open No. 2004-112644.
Furthermore, as a technique of changing an area of added data to be embedded in paper, such a technique is disclosed in Japanese Patent Laid-Open No. 2001-127983.
However, the technique disclosed in Japanese Patent Laid-Open No. 2004-214831 embeds auxiliary data for follow-up investigation of a counterfeit copy in an image part of a paper form and renders it difficult to find. This technique has the effect of making forgery difficult, but has a problem that if the auxiliary data embedded in the image section is hidden by cut-and-paste, counterfeiting becomes possible.
Moreover, the technique disclosed in Japanese Patent Laid-Open No. 2004-153405 guarantees the originality by using a characteristic disorder part in the image data that cannot be reproduced. In this technique, the disorder part means scattering of toner placed on the paper form, ragged parts of a line edge, and the like, and the originality of the paper form is guaranteed using information that is drawn (added) on the paper form. Therefore, in a case where the expected disorder part is not generated, there is a problem that it is no longer possible to guarantee the originality.
Moreover, the technique disclosed in Japanese Patent Laid-Open No. 2004-112644 is a technique of determining a read area, with a mark added to the paper form used as a reference, but it does not consider whether this area is a suitable area as a paper fingerprint area. Therefore, there is a problem that this technique determines an area that cannot be read correctly, such as a solid black area, as a read area.
Furthermore, the technique disclosed in Japanese Patent Laid-Open No. 2001-127983 obtains a mean density of a predetermined area and determines an area in which added data is embedded, but it does not consider the collation rate after the area is altered. Therefore, in a case where an area with a low collation rate, such as a solid black area, extends in the vicinity of the predetermined area, there is a problem that, even when the area is altered according to the mean density, the added data would be embedded in an area in which the paper fingerprint cannot be read correctly.
The present invention was made in view of such problems, and its object is to provide an image processing apparatus capable of registering a paper fingerprint at a position determined in consideration of matching accuracy and security, and a method therefor.
In order to attain such an object, an image processing apparatus of the present invention is an image processing apparatus that has a paper fingerprint information acquisition unit for reading a paper fingerprint and a paper fingerprint information collation unit for reading paper fingerprint information acquired by the paper fingerprint information acquisition unit and collating it with other paper fingerprint information, characterized in that the paper fingerprint information acquisition unit includes an image area detection unit for detecting a location where an image area is included, as the paper fingerprint acquisition area.
In order to attain the above-mentioned object, an image formation method of the present invention is an image formation method of a system capable of scanning paper and specifying the paper based on a characteristic of its fiber, characterized by comprising a paper fingerprint information acquisition step of reading a paper fingerprint by a paper fingerprint information acquisition unit and a paper fingerprint information collation step of reading paper fingerprint information acquired in the paper fingerprint information acquisition step by a paper fingerprint information collation unit and collating it with other paper fingerprint information, and further characterized in that the paper fingerprint information acquisition step includes an image area detection step of detecting a location where an image area is included, as the paper fingerprint acquisition area, by an image area detection unit.
In order to attain the above-mentioned object, the image processing apparatus of the present invention is characterized by comprising a paper fingerprint information acquisition unit for reading a paper fingerprint, an area determination unit for determining an area to acquire paper fingerprint information by the paper fingerprint information acquisition unit, and a paper fingerprint registration unit for registering a paper fingerprint of the area determined by the area determination unit as a paper fingerprint for the paper.
In order to attain the above-mentioned object, the image formation method of the image processing apparatus of the present invention is characterized by comprising a paper fingerprint information acquisition step of reading a paper fingerprint by a paper fingerprint information acquisition unit, an area determination step of determining an area to acquire paper fingerprint information in the paper fingerprint information acquisition step by an area determination unit, and a paper fingerprint registration step of registering a paper fingerprint in the area determined in the area determination step as a paper fingerprint for paper by a paper fingerprint registration unit.
According to the present invention, when performing originality guarantee and duplication prevention using a paper fingerprint, which is a characteristic of the paper itself, an effect of making it possible to prevent interference with security by the user is produced. Moreover, since a paper fingerprint area with a high collation rate can be found even in paper with a large amount of printing (paper with few non-image areas), an effect of making it possible to set and register paper fingerprints on a greater variety of paper is produced.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereafter, embodiments to which the present invention can be applied will be described in detail with reference to the drawings. Note that in each drawing referred to in this specification, constituents having a similar function are designated by the same reference numerals.
A first embodiment will be described in detail with reference to the drawings.
The host computer (hereafter called PC) 40 has functions of a personal computer. This PC 40 can send/receive a file and send/receive an E-mail using an FTP or SMB protocol through the LAN 50 or WAN. Moreover, the PC 40 can instruct printing to the image forming apparatuses 10, 20, and 30 through a printer driver.
The image forming apparatus 10 and the image forming apparatus 20 are apparatuses having the same configuration. The image forming apparatus 30 is an image forming apparatus with only a printing function, and does not have the scanner section that the image forming apparatuses 10 and 20 have. Below, attention is paid to the image forming apparatus 10 out of the image forming apparatuses 10 and 20, and its configuration will be described in detail.
The image forming apparatus 10 consists of a scanner section 13 that is an image input device, a printer section 14 that is an image output device, a controller unit 11, and an operation panel 12 that is a user interface (UI). The controller 11 takes charge of operation control of the image forming apparatus 10 as a whole.
The scanner section 13 converts information of an image into an electrical signal by exposure-scanning the image on the original and inputting the obtained reflected light into a CCD. Moreover, it converts the electrical signal into luminance signals of the R, G, and B colors, and outputs the luminance signals concerned to the controller 11 as image data.
Incidentally, the original is set in a tray 202 of an original feeder 201. When the user instructs start of reading from the operation panel 12, an original read instruction is given to the scanner section 13 from the controller 11. When the scanner section 13 receives this instruction, a reading operation of the original is performed by feeding the originals from the tray 202 of the original feeder 201 one by one. Incidentally, the original reading method may be not the automatic feeding method by the original feeder 201, but a method of scanning the original by moving an exposure section with the original placed on an unillustrated glass surface.
The printer section 14 is an image formation device for rendering the image data received from the controller 11 on a paper form. Incidentally, although the image formation method in this embodiment is the electrophotographic method using a photoconductor drum or photoconductor belt, it is not limited to this. For example, this embodiment can also be applied to an ink jet method that performs printing on a paper form by discharging ink from a micro nozzle array. Moreover, the printer section 14 is provided with a plurality of paper form cassettes 203, 204, and 205 that enable different paper form sizes or different paper form orientations to be selected. A paper form after printing is discharged to a paper discharge tray 206.
The controller 11 is electrically connected with the scanner section 13 and the printer section 14, and at the same time is connected with the PC 40 and external devices through the LAN 50 and a WAN 331. This connection enables input and output of image data and device information.
The CPU 301 systematically controls access to the various connected devices based on a control program stored in the ROM 303 and the like, and also systematically controls various processing performed inside the controller. The RAM 302 is system work memory on which the CPU 301 operates, and is also memory for temporarily storing image data. This RAM 302 is constructed with SRAM, which holds stored contents even after the power supply is turned off, and DRAM, whose stored contents are erased after the power supply is turned off. The ROM 303 stores a boot program of the system and the like. An HDD 304 is a hard disk drive, which enables system software and image data to be stored.
An operation panel I/F 305 is an interface section for connecting the system bus 310 and the operation panel 12. This operation panel I/F 305 receives image data to be displayed on the operation panel 12 from the system bus 310 and outputs it to the operation panel 12, and also outputs information inputted from the operation panel 12 to the system bus 310.
A network I/F 306 connects with the LAN 50 and a system bus 310, and performs input/output of information. A modem 307 connects with the WAN 331 and the system bus 310, and performs input/output of information. A binary image rotation section 308 converts an orientation of the image data before transmission. A binary image compression/decompression section 309 converts resolution of image data before transmission into predetermined resolution or resolution that matches the other party's capability. In performing compression and decompression, any of methods of JBIG, MMR, MR, MH, etc. is used. An image bus 330 is a transmission path for exchanging image data, and is made up of a PCI bus or IEEE1394.
The scanner image processing section 312 performs correction, processing, and editing on the image data received from the scanner section 13 through a scanner I/F 311. In addition, the scanner image processing section 312 determines whether the received image data is a color original or a monochrome original, a character original or a photograph original, and the like. Then, it attaches the determination result to the image data. Such attached information is called attribute data. Details of the processing performed in this scanner image processing section 312 will be described later.
The compression section 313 receives the image data and divides this image data into 32×32 pixel blocks. Incidentally, this image data of 32×32 pixels is called tile data.
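As an illustration of this division into tile data, the following sketch (our own assumption, not the apparatus's actual implementation; how edge blocks are handled is not specified in the text, so they are zero-padded here) splits a gray-scale image into 32×32-pixel blocks:

```python
import numpy as np

def split_into_tiles(image, tile=32):
    """Divide a gray-scale image into tile x tile blocks ("tile data").
    Edge blocks are zero-padded so every tile has the same size."""
    h, w = image.shape
    ph = (tile - h % tile) % tile  # rows of padding for the last row of tiles
    pw = (tile - w % tile) % tile  # columns of padding for the last column
    padded = np.pad(image, ((0, ph), (0, pw)))  # default constant (zero) padding
    return [padded[y:y + tile, x:x + tile]
            for y in range(0, padded.shape[0], tile)
            for x in range(0, padded.shape[1], tile)]

page = np.zeros((100, 70), dtype=np.uint8)  # toy gray-scale page
tiles = split_into_tiles(page)
# 100 rows -> 4 tile rows, 70 columns -> 3 tile columns: 12 tiles
```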
The printer image processing section 315 receives the image data sent from the decompression section 316, and performs image processing on the image data while referring to attribute data attached to this image data. The image data after the image processing is outputted to the printer section 14 through a printer I/F 314. Details of processing performed in this printer image processing section 315 will be described later.
An image conversion section 317 performs predetermined conversion processing on the image data. This processing section is constructed with processing sections as shown below.
A decompression section 318 decompresses the received image data. A compression section 319 compresses the received image data. A rotation section 320 rotates the received image data. A variable power magnification section 321 performs resolution conversion processing (for example, from 600 dpi to 200 dpi) on the received image data. A color space conversion section 322 converts the color space of the received image data. This color space conversion section 322 can perform publicly known background removal processing using a matrix or table, perform publicly known LOG conversion processing (RGB to CMY), and perform publicly known output color correction and processing (CMY to CMYK). A binary to multivalue conversion section 323 converts received binary image data into 256 gray-scale image data. Conversely, a multivalue to binary conversion section 324 converts received 256 gray-scale image data into binary image data using a technique of error diffusion processing or the like.
A synthetic section 327 synthesizes two pieces of received image data to generate one piece of image data. When two pieces of image data are synthesized, either a method whereby the mean of the luminance values of the two pixels to be synthesized is assigned as the synthesized luminance value, or a method whereby the luminance value of the pixel brighter in luminance level is assigned as the luminance value of the pixel after synthesis, is applied. Alternatively, a method of assigning the luminance value of the darker pixel is also usable. Further alternatively, a method of determining the luminance value after synthesis by an OR operation, an AND operation, an exclusive OR operation, or the like on the two pixels to be synthesized can be applied. All of these synthesis methods are well-known techniques. A thinning section 326 performs resolution conversion by thinning out the pixels of received image data, generating image data of ½, ¼, ⅛ resolution, and so on. A move section 325 attaches a blank space to the received image data, or deletes a blank space.
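The synthesis rules listed above can be sketched as follows (a hypothetical illustration operating on equal-sized 8-bit luminance arrays; the function name and interface are our own, not part of the apparatus):

```python
import numpy as np

def synthesize(a, b, method="mean"):
    """Combine two equal-sized 8-bit luminance images per the methods above."""
    if method == "mean":     # average of the two luminance values
        return ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)
    if method == "lighter":  # keep the brighter of the two pixels
        return np.maximum(a, b)
    if method == "darker":   # keep the darker of the two pixels
        return np.minimum(a, b)
    if method == "or":       # bitwise OR of the two pixel values
        return a | b
    if method == "and":      # bitwise AND
        return a & b
    if method == "xor":      # bitwise exclusive OR
        return a ^ b
    raise ValueError(method)

x = np.array([[100, 200]], dtype=np.uint8)
y = np.array([[50, 250]], dtype=np.uint8)
# mean -> [[75, 225]]; lighter -> [[100, 250]]; darker -> [[50, 200]]
```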
An RIP 328 receives intermediate data that is sent from the PC 40 and generated based on PDL code data, and creates (multivalued) bitmap data.
A filter processing section 502 arbitrarily corrects the spatial frequency characteristics of the received image data. This processing section performs arithmetic processing on the received image data using, for example, a 7×7 matrix. Incidentally, the user is allowed to select a character mode, a photograph mode, or a character-photograph mode as a copy mode by pressing a tab 704 in
A histogram generation section 503 samples luminance data of each pixel that constitutes the received image data. More specifically, luminance data in a rectangular area surrounded by a start point and an end point specified in the principal scanning direction and a sub scanning direction, respectively, is sampled in a constant pitch in the principal scanning direction and the sub scanning direction. After that, histogram data is generated based on the sampling results. The generated histogram data is used to estimate a ground level when performing the background removal processing. An input-side gamma correction section 504 converts the luminance data into one that has nonlinearity using a table etc.
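A minimal sketch of this sampling-and-histogram step is shown below, assuming a gray-scale image. The sampling pitch value and the use of the histogram peak as the ground level are our assumptions; the text specifies only constant-pitch sampling inside a start/end rectangle:

```python
import numpy as np

def ground_level(image, start, end, pitch=4):
    """Sample luminance inside the start/end rectangle at a constant
    pitch in both scanning directions, histogram the samples, and
    return the dominant luminance as the estimated ground level."""
    y0, x0 = start
    y1, x1 = end
    samples = image[y0:y1:pitch, x0:x1:pitch].ravel()
    hist = np.bincount(samples, minlength=256)  # 256-bin luminance histogram
    return int(hist.argmax())

page = np.full((64, 64), 230, dtype=np.uint8)  # light paper background
page[10:20, 10:20] = 40                        # a dark printed patch
# the estimated ground level for this page is 230
```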
A color-monochrome determination section 505 determines whether each pixel constituting the received image data is chromatic color, or colorless, and attaches the determination result to the image data as a color-monochrome determination signal (a part of attribute data).
Based on a pixel value of each pixel and pixel values of surrounding pixels of the each pixel, a character-photograph determination section 506 determines: whether each pixel constituting the image data is a pixel constituting a character; whether it is a pixel constituting a halftone dot or a character in the dot; and whether it is a pixel constituting an overall uniform image. Incidentally, a pixel that does not belong to any pixel described above is a pixel that constitutes a white image area. Then, its determination result is attached to the image data as a character-photograph determination signal (a part of attribute data).
A paper fingerprint information acquisition section 507 acquires image data of a predetermined area in RGB image data inputted from the shading correction section 500.
In Step S801, the image data extracted by the paper fingerprint information acquisition section 507 is converted into gray-scale image data. In Step S802, mask data for performing collation is generated by eliminating possible factors of erroneous determination, such as printing and hand-written characters, from the image converted into gray-scale image data in Step S801. The mask data is binary data consisting of "0" or "1." In the gray-scale image data, the mask data value for any pixel whose luminance signal value is equal to or more than a first threshold (namely, a bright pixel) is set to "1," and the mask data value for any pixel whose luminance signal value is less than the first threshold is set to "0." This processing is performed on every pixel included in the gray-scale image data. In Step S803, the following two pieces of data are acquired as the paper fingerprint information: the image data converted into gray-scale data in Step S801, and the mask data generated in Step S802.
The paper fingerprint information acquisition section 507 sends the paper fingerprint information of the predetermined area to the RAM 302 using an unillustrated data bus. These processing steps constitute a basic flow of paper fingerprint information acquisition processing.
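Steps S801 to S803 can be sketched as follows. The gray-scale conversion method and the concrete threshold value are assumptions on our part; the text only specifies a "first threshold" applied to the luminance signal:

```python
import numpy as np

FIRST_THRESHOLD = 128  # assumed value; the text only calls it "the first threshold"

def acquire_paper_fingerprint(rgb_area):
    # S801: gray-scale conversion (a simple channel average here)
    gray = rgb_area.mean(axis=2).astype(np.uint8)
    # S802: mask = 1 for bright pixels (bare paper), 0 for dark pixels
    #       (possible printing, hand-written characters, dust, ...)
    mask = (gray >= FIRST_THRESHOLD).astype(np.uint8)
    # S803: the pair (gray image, mask) is the paper fingerprint information
    return gray, mask

area = np.full((4, 4, 3), 200, dtype=np.uint8)  # a bright 4x4 RGB patch
area[0, 0] = (10, 10, 10)                       # one dark (printed) pixel
gray, mask = acquire_paper_fingerprint(area)
# mask[0, 0] == 0; the other 15 mask values are 1
```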
For example, the character-photograph determination section 506 determines a character-photograph area bit by bit (pixel by pixel) 1202 on image data 1201 as shown by
For example, consider that the edge determination is intended to be done in the predetermined area 1303 (thick frame of 3×3) in image/non-image determination information 1301 as shown in
In Step S1106, the area detected in the previous Step S1104 is determined as the paper fingerprint area, and its area information is sent to the RAM 302 using an unillustrated data bus. In Step S1107, the image data corresponding to the area information determined in the previous Step S1106 is acquired by extracting it from the image data acquired in Step S1101. The acquired image data is sent to the RAM 302 using an unillustrated data bus. In Step S1108, in order to inform the user of a failure in acquisition of paper fingerprint information, the error is notified to the operation panel 12 through the operation panel I/F 305. The control section that received the error displays an error message, as shown in
A decode section 508 detects the existence of coded image data when it exists in the image data outputted from the masking processing section 501. Then the detected coded image data is decoded to take out information.
A background removal processing section 601 eliminates (removes) the ground color of image data using a histogram generated in the scanner image processing section 312. A monochrome generation section 602 converts color data into monochrome data. A Log conversion section 603 performs luminance-density conversion. This Log conversion section 603 converts, for example, inputted RGB image data into CMY image data. An output color correction section 604 performs output color correction. For example, it converts inputted CMY image data into CMYK image data using a table or matrix. An output-side gamma correction section 605 performs correction such that a signal value inputted into this output-side gamma correction section 605 is proportional to the reflection density value after a copy is outputted. A coded image synthesis section 607 synthesizes the image data (original) corrected by the output-side gamma correction section 605 and the coded image data generated by <Paper fingerprint information coding processing>, which will be described later. A halftone correction section 606 performs halftone processing according to the number of gray tones of the printer section for outputting. For example, received image data with a high number of gray tones is converted into binary data or 32-valued data.
Each processing section of the scanner image processing section 312 and the printer image processing section 315 is also configured to be able to output received image data without performing processing on it. Making data pass through a processing section without performing any processing on it will be expressed below as "making it pass through."
The CPU 301 is configured to be capable of reading paper fingerprint information of a predetermined area sent to the RAM 302 from the paper fingerprint information acquisition section 507, and controlling the encoding processing on the read paper fingerprint information concerned so as to generate coded image data.
In this specification, the coded image means an image, such as a two-dimensional coded image and a bar code image.
Moreover, the CPU 301 is configured to be capable of so controlling that the generated coded image data may be sent to the coded image synthesis section 607 in the printer image processing section 315 using an unillustrated data bus.
The above-mentioned controls (the control of generating the coded image and the control of sending it) are performed by the CPU 301 executing a program stored in the RAM 302.
The CPU 301 is configured to be able to read paper fingerprint information sent to the RAM 302 from the paper fingerprint information acquisition section 507, and so control that the read paper fingerprint information concerned may be collated with other paper fingerprint information. Here, the other paper fingerprint information means paper fingerprint information included in the coded image data and paper fingerprint information registered in a server.
In Step S901, paper fingerprint information included in the coded image data and paper fingerprint information registered in the server are taken out from the RAM 302.
In Step S902, in order to collate the paper fingerprint information sent from the paper fingerprint information acquisition section 507 with the paper fingerprint information taken out in Step S901, the degree of matching of the two pieces of paper fingerprint information is calculated using Formula (1). One piece of paper fingerprint information is assumed to be the other piece shifted. The function of Formula (1) calculates an error value (E) between the two pieces of paper fingerprint information while shifting one of them by one pixel at a time, and finds the position at which the value so acquired becomes a minimum, namely the position at which the difference between the two pieces of paper fingerprint information is minimized.
In Formula (1), α1 is the mask data in paper fingerprint information taken out (having been registered) in Step S901. f1 is gray-scale image data in paper fingerprint information taken out (having been registered) in Step S901.
α2 is the mask data in paper fingerprint information (the information just now taken out) sent from the paper fingerprint information acquisition section 507 in Step S902. f2 is the gray scale image data in paper fingerprint information (the information just now taken out) sent from the paper fingerprint information acquisition section 507 in Step S902.
A concrete method will be explained using
In the function shown by Formula (1), i and j are shifted by one pixel in the ranges of −n+1 to n−1 and −m+1 to m−1, respectively, and (2n−1)×(2m−1) error values E(i, j) between the paper fingerprint information already registered and the paper fingerprint information just acquired this time are obtained. That is, error values of E(−n+1, −m+1) to E(n−1, m−1) are calculated.
Moreover, in
Similarly, the arithmetic operation is performed while the image is being shifted so that the two pieces of paper fingerprint information overlap by at least one pixel. Finally, E(n−1, m−1) is acquired, as in
In this way, a set consisting of (2n−1)×(2m−1) error values of E(i, j) is acquired.
Here, in order to clarify the meaning of Formula (1), consider a case where i=0 and j=0, α1(x, y)=1 (where x=0 to n, y=0 to m), and α2(x−i, y−j)=1 (where x=0 to n, y=0 to m). That is, E(0, 0) is obtained in the case where α1(x, y)=1 (where x=0 to n, y=0 to m) and α2(x−i, y−j)=1 (where x=0 to n, y=0 to m).
Incidentally, i=0 and j=0 indicate that the paper fingerprint information already registered and the paper fingerprint information just acquired this time are at the same position, as in
Here, α1(x, y)=1 (where x=0 to n, y=0 to m) indicates that all the pixels of the registered paper fingerprint information are bright. In other words, α1(x, y)=1 indicates that, when the registered paper fingerprint information was acquired, there were no color materials, such as toner and ink, and no dust at all on the paper fingerprint acquisition area.
Moreover, α2(x−i, y−j)=1 (where x=0 to n, y=0 to m) indicates that all the pixels of the paper fingerprint information just acquired this time are bright. In other words, α2(x−i, y−j)=1 indicates that, when the paper fingerprint information just now acquired was acquired, there were no color materials, such as toner and ink, and no dust at all on the paper fingerprint acquisition area.
When both α1(x, y)=1 and α2(x−i, y−j)=1 hold for all the pixels in this way, Formula (1) will be expressed as follows.
This {f1(x, y)−f2(x, y)}² represents the squared value of the difference between the gray-scale image data in the paper fingerprint information already registered and the gray-scale image data in the paper fingerprint information just now taken out. Therefore, Formula (1) becomes the summation of the squared differences between the respective pixels of the two pieces of paper fingerprint information. That is, the more pixels in which f1(x, y) and f2(x, y) resemble each other, the smaller the value this E(0, 0) takes.
What is explained above is the method for finding E(0, 0), and the other E(i, j) are found similarly. Incidentally, since E(i, j) takes a smaller value as more pixels in which f1(x, y) and f2(x, y) resemble each other, if E(k, l)=min{E(i, j)} holds, it is known that the position at which the paper fingerprint information already registered was acquired and the position at which the paper fingerprint information just now acquired was acquired are shifted from each other by (k, l).
The numerator of Formula (1) means the result of multiplying {f1(x, y)−f2(x−i, y−j)}² by α1 and α2 (to be exact, the total value is further calculated by the Σ symbol). α1 and α2 indicate zero for a pixel of a dark color and unity for a pixel of a light color.
Therefore, when either (or both) of α1 or α2 is zero, α1α2{f1(x, y)−f2(x−i, y−j)}2 will be zero.
That is, this indicates that when the targeted pixel in either (or both) piece of paper fingerprint information is of a dark color, the density difference in that pixel is not considered. This is because a pixel on which dust or color material is placed is disregarded.
Since this processing increases or decreases the number of terms summed by the Σ symbol, normalization is performed by dividing the numerator by the total number, Σα1(x, y)α2(x−i, y−j). Note that an error value E(i, j) for which Σα1(x, y)α2(x−i, y−j) in the denominator of Formula (1) becomes zero is not included in the set of error values (E(−(n−1), −(m−1)) to E(n−1, m−1)) described later.
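Putting the pieces of Formula (1) together, a direct (unoptimized) sketch is shown below: the masked, normalized sum of squared differences is evaluated at every shift (i, j), and shifts whose denominator Σα1α2 is zero are excluded, as noted above. The function name and array-based interface are illustrative assumptions, not the apparatus's actual implementation:

```python
import numpy as np

def error_map(f1, a1, f2, a2):
    """E(i, j) per Formula (1): masked, normalized sum of squared
    differences between the registered fingerprint (f1, a1) and the
    newly acquired one (f2, a2), evaluated at every shift (i, j)."""
    n, m = f1.shape
    errors = {}
    for i in range(-(n - 1), n):
        for j in range(-(m - 1), m):
            num = den = 0.0
            for x in range(n):
                for y in range(m):
                    xs, ys = x - i, y - j  # shifted coordinate in f2
                    if 0 <= xs < n and 0 <= ys < m:
                        w = a1[x, y] * a2[xs, ys]  # 0 if either pixel is masked
                        num += w * (float(f1[x, y]) - float(f2[xs, ys])) ** 2
                        den += w
            if den > 0:  # exclude shifts whose denominator is zero
                errors[(i, j)] = num / den
    return errors

reg = np.array([[10, 200], [30, 90]], dtype=np.uint8)  # registered gray data
ones = np.ones((2, 2), dtype=np.uint8)                 # mask: all pixels bright
errors = error_map(reg, ones, reg, ones)
# identical fingerprints: the minimum error is 0, at shift (0, 0)
```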
As described above, when E(k, l)=min{E(i, j)} holds, it is known that the position at which the registered paper fingerprint information was acquired and the position at which the paper fingerprint information just now acquired was acquired are shifted from each other by (k, l).
Following this, a value that indicates how much the two pieces of paper fingerprint information are like each other (this value is called the degree of matching) is calculated using this E(k, l) and the other E(i, j).
First, a mean value (40) is found from a set of error values (for example, E(0, 0)=10*, E(0, 1)=50, E(1, 0)=50, E(1, 1)=50) acquired by the function of Formula (1) . . . (A)
Here, * is unrelated to the value; it is written merely to attract attention. The reason for attracting attention will be described later.
Next, each of the error values (10*, 50, 50, 50) is subtracted from the mean value to obtain a new set (30*, −10, −10, −10) . . . (B)
A standard deviation (30×30+10×10+10×10+10×10=1200, 1200/4=300, √300=10√3≈17) is calculated from this new set. Further, each value of the above-mentioned new set is divided by 17 to find the quotients (1*, −1, −1, −1) . . . (C)
The maximum among the calculated values is designated as the degree of matching (1*). Note that this value of 1* corresponds to the value of E(0, 0)=10*. E(0, 0) is the value that satisfies E(0, 0)=min{E(i, j)} in this case.
The above-mentioned method for determining the degree of matching is, in short, to calculate how far the smallest error value in a set of a plurality of error values is away from the average error value (A and B).
Then the degree of matching is calculated by dividing this degree of separation by the standard deviation (C).
Finally, a collation result is acquired by comparing the degree of matching with the threshold (D).
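Steps (A) through (D) above can be sketched as follows. The threshold value is an assumption on our part; the text does not give a concrete one:

```python
import math

def degree_of_matching(error_values):
    mean = sum(error_values) / len(error_values)       # (A) mean error value
    deviations = [mean - e for e in error_values]      # (B) separation from mean
    std = math.sqrt(sum(d * d for d in deviations) / len(deviations))
    return max(d / std for d in deviations)            # (C) in units of std

def collate(error_values, threshold=1.5):
    # (D) compare the degree of matching with a threshold
    return "Valid" if degree_of_matching(error_values) > threshold else "Invalid"

# Worked example from the text: error values (10*, 50, 50, 50);
# mean 40, deviations (30, -10, -10, -10), std = sqrt(300), and the
# degree of matching is 30 / sqrt(300), roughly 1.7
```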
Incidentally, the standard deviation roughly means the average magnitude of the "difference between each error value and the mean value." In other words, the standard deviation is a value that shows roughly how large the variation is in the set as a whole.
Division of the above-mentioned degree of separation with such an overall variation value will show how much the min{E (i, j) } is small in a set of E (i, j) (prominently small or small only slightly).
Then, when min {E (i, j)} is prominently small in the set of E (i, j), the calculation is determined valid; when otherwise, it is determined invalid.
<Reason That the Collation is Determined Valid Only When min{E(i, j)} is Prominently Small in the Set of E(i, j)>
Here, assume that registered paper fingerprint information and paper fingerprint information just now acquired are acquired from the same paper.
In this assumption, there must exist a location (shifted position) where the registered paper fingerprint information and the paper fingerprint information just now acquired coincide. At this time, since at this position the registered paper fingerprint information and the paper fingerprint information just now acquired coincide extremely well, E(i, j) should be very small.
On the other hand, if the position is shifted from this position even a little, there is no correlation between the registered paper fingerprint information and the paper fingerprint information just now acquired. Therefore, at such a position, E(i, j) should take an ordinarily large value.
Therefore, a condition that “two pieces of paper fingerprint information were acquired from the same paper” agrees with a condition that “the smallest E(i, j) is prominently small in the set of E(i, j).”
<Paper fingerprint information collation processing> will be explained again.
In Step S903, the degree of matching of two pieces of paper fingerprint information acquired in Step S902 is compared with a predetermined threshold and “Valid” or “Invalid” is determined. Incidentally, sometimes the degree of matching is called the degree of similarity. Sometimes, a comparison result of the degree of matching and the predetermined threshold is called a collation result.
Explanation of the controller 11 is finished here.
An area 708 is a tab for selecting paper fingerprint information registration processing. The paper fingerprint information registration processing will be described later. An area 709 is a tab for selecting paper fingerprint information collation processing. This paper fingerprint information collation processing will be described later.
An area 710 is a tab for setting a security level in the paper fingerprint information registration processing. The processing of security level setting will be described later. An area 711 is a tab for setting a collation rate in the paper fingerprint information registration processing. The processing of collation rate setting will be described later.
Next, the paper fingerprint information registration processing that is executed when the start key is pressed after the paper fingerprint information registration tab 708 shown in
In Step S1602, the scanner image processing section 312 sets, in the shading correction section 500, a gain adjustment value smaller than the usual gain adjustment value used at the time of reading. Then it outputs each luminance signal value, acquired by applying the above-mentioned small gain adjustment value to the image data, to the paper fingerprint information acquisition section 507. After this, the paper fingerprint information acquisition section 507 acquires paper fingerprint information based on the output data. Acquisition of paper fingerprint information is processed as shown by a flowchart in
In Step S1603, the CPU 301 generates a coded image by encoding the paper fingerprint information, and controls so that the generated coded image data is sent to the coded image synthesis section 607 in the printer image processing section 315.
In Step S1604, the coded image synthesis section 607 prepares a synthetic image from the coded image data generated in Step S1603 and the image data to be printed on the output paper. Incidentally, in this flow, since the image data has already been printed on the paper, synthesis of the coded image data and the image data is not performed, and only the coded image data is printed on the paper. Then, the half tone correction section 606 performs the half tone processing on the synthetic image data acquired by the synthesis concerned, in accordance with the number of gray tones of the printer section to which the data is outputted. The synthetic image data after the half tone processing is sent to the printer section 14 through the printer I/F 314.
Steps S1105 to S1107 are the same as those of the processing described above. An area extracted in Step S1803 or Step S1804 is determined to be the paper fingerprint area, and image data corresponding to the area is acquired. If no area can be extracted in Step S1803 or Step S1804, an error is reported. Thus, it is possible to change the paper fingerprint area to be acquired to "only white background" or "including characters" by branching the extraction condition of an image area in the edge determination according to the security level described above. Incidentally, a plurality of kinds of security levels may exist, just as described above. In that case, it is only necessary to make the flow branch in the above-mentioned Step S1802 according to the security level and perform the edge determination processing stepwise. For example, if the security level is "minimum", a "white" area may be extracted; and if the security level is "low", a "10% character inclusion" area may be extracted, and so on.
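The branching by security level described above might be sketched as follows. The level names and the 10% figure come from the example in the text, while the function itself and the use of an `edge_ratio` as the measure of character inclusion are assumptions:

```python
def area_acceptable(edge_ratio, security_level):
    """Decide whether a candidate area qualifies as a paper fingerprint area.

    edge_ratio: fraction of the area's pixels judged to be edges (characters),
    an assumed stand-in for the edge determination result.
    """
    if security_level == "minimum":
        return edge_ratio == 0.0      # extract a "white" (background only) area
    if security_level == "low":
        return edge_ratio <= 0.10     # allow up to 10% character inclusion
    # higher levels: require that some characters are included in the area
    return edge_ratio > 0.0

print(area_acceptable(0.05, "low"))      # True
print(area_acceptable(0.05, "minimum"))  # False
```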
In Step S2106, it is determined whether an image inclusion area was extracted in Step S2105. If no image inclusion area was extracted, the flow proceeds to Step S2108. If an image inclusion area was extracted, the flow proceeds to Step S2107. In Step S2107, the variable x defined in Step S2104 is incremented. In Step S2108, it is determined whether the constant N and the variable x are equal. Moreover, it is also determined whether the edge determination in Step S2105 has inspected the whole area of the image data. If both determinations are No, the flow proceeds to Step S2105. If it is determined that the constant N agrees with the variable x, the flow proceeds to Step S1106. Moreover, if it is determined that the inspection of the whole area of the image data is ended, the flow proceeds to Step S2109. In Step S2109, it is determined whether the variable x is equal to zero. If it is determined that x is equal to zero, the flow proceeds to Step S1108. Processing in Step S1108 is just as described above. If x does not agree with zero, the flow proceeds to Step S2110. Since it was determined that x does not agree with the constant N in Step S2108, and further that x does not agree with zero in Step S2109, the variable x is not zero, although it has not reached the number of paper fingerprints to be acquired calculated from the acquired collation rate. Therefore, in Step S2110, in order to inform the user of this fact, an alarm is notified to the operation panel 12 through the operation panel I/F 305. The operation panel receiving the alarm displays an alarm message as shown in
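The loop of Steps S2104 to S2110 amounts to the following control flow. This is a hedged sketch in which `candidate_areas`, the predicate, and the return labels are illustrative assumptions, not names from the embodiment:

```python
def collect_fingerprint_areas(candidate_areas, is_image_inclusion_area, n_required):
    """Scan areas until N fingerprint areas are found or the page is exhausted."""
    found = []
    x = 0                                    # variable x of Step S2104
    for area in candidate_areas:             # edge determination, Step S2105
        if is_image_inclusion_area(area):    # Step S2106
            found.append(area)
            x += 1                           # Step S2107
        if x == n_required:                  # Step S2108: x reached the constant N
            return found, "ok"
    if x == 0:                               # Step S2109: nothing found at all
        return found, "error"                # corresponds to Step S1108
    return found, "alarm"                    # 0 < x < N: alarm the user (S2110)
```

For example, scanning five areas of which every other one qualifies, with N = 2, stops early with two areas and the "ok" status; finding some areas but fewer than N yields the "alarm" status that triggers the message on the operation panel.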
Next, an action when the start key is pressed after the paper fingerprint information collation tab 709, shown in
In Step S1701, the CPU 301 controls so that the original read by the scanner section 13 is sent to the scanner image processing section 312 as image data through the scanner I/F 311. In Step S1702, the scanner image processing section 312 performs processing shown in
In Step S1703, the CPU 301 performs paper fingerprint information collation processing. This paper fingerprint information collation processing is just as explained using
One embodiment in the case where a scanner is installed in a body of the image forming apparatus is described.
An operation to be executed when the start key is pressed after the paper fingerprint information registration tab 708, shown in
Thus, in the embodiment in which the scanner is installed in the image forming apparatus, image formation, reading, and paper fingerprint area registration can be completed only by processing in the body.
In the first embodiment, a form was described in which an area including image areas up to a certain threshold was determined as the paper fingerprint area, in order to specify an area including a few or several image areas (an image inclusion area) as the paper fingerprint registration area in the paper fingerprint information acquisition processing. Concretely, the edge determination is performed on an area of a predetermined size read by the scanner section 13, as shown in the flowchart of
In an area with a large amount of printing, such as solid black, the fiber of the paper cannot be read clearly, and accordingly the collation rate of the paper fingerprint becomes low at the time of collating the paper fingerprint. Therefore, within areas of a predetermined size, an area in which the image areas are few in number, that is, in which the result of the edge determination is small, has a high collation rate. A method for finding the paper fingerprint area of the highest collation rate using this fact will be described below.
In Step S1101, like the first embodiment, the scanner image processing section 312 receives image data read in the scanner section 13 through the scanner I/F 311.
Then, in Step S2901, a result of the edge determination (hereinafter referred to as an edge determination value) and the start point of the area of a predetermined size in which the edge determination is performed (hereinafter referred to as a determination area) are initialized. Here, the area is initialized to the upper left corner of the paper. Incidentally, in this embodiment, the initial value of the edge determination is set to 10 (MIN=10), and the determination area is fixed at a size of 3×3. The start point shall designate the position at the upper left of the determination area (this is shown in
A control of executing the edge determination (the same determination as in the first embodiment) on the received data will be described below. First, in Step S2902, the start point of the determination area is altered within the image data read in Step S1101 (however, at the first time, the initialized start point shall be maintained). Although various methods are conceivable as the alteration method, this embodiment uses a method whereby the start point is altered every three points so that a determination area determined once is not determined again, as in
Regarding the above-mentioned control (Steps S2903 to S2906), the control is performed repeatedly as long as the start point can be altered in Step S2903 (here, the control is repeated up to a start point in the lower right area in
The above control makes it possible to find the paper fingerprint area of the highest collation rate.
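A minimal sketch of this search, assuming a binary image represented as a 2D list where 1 marks an edge (printed) pixel and 0 marks white; the function name and the use of a raw black-pixel count as the edge determination value are assumptions:

```python
def find_best_fingerprint_area(image, size=3):
    """Slide a fixed-size determination area in steps of `size` (no overlap)
    and return (minimum edge value, start point), as in Steps S2901-S2906."""
    h, w = len(image), len(image[0])
    best_value, best_start = 10, None        # MIN initialized to 10 (Step S2901)
    for top in range(0, h - size + 1, size):        # start point altered every
        for left in range(0, w - size + 1, size):   # three points (Step S2902)
            value = sum(image[top + i][left + j]
                        for i in range(size) for j in range(size))
            if value < best_value:           # fewer edges -> higher collation rate
                best_value, best_start = value, (top, left)
    return best_value, best_start
```

The start point with the smallest edge value marks the candidate paper fingerprint area of the highest collation rate.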
In Step S2901 of this embodiment, the minimum of the edge determination value was set to an arbitrary value (here 10) as initialization. However, the minimum need not be an arbitrary value. An area that contains fewer characters and fewer images within the predetermined area size in which determination is performed can be found by performing block selection on the image data received in Step S1101. An edge determination value acquired for this area may substitute for the arbitrary value (MIN). Block selection is a well-known technique (reference literature: Japanese Patent Laid-Open No. H9-51388 etc.) that investigates the connectivity of the pixels constituting the image data by analyzing them and classifies the detected connected components (details are omitted). Since this technique can detect a minimum rectangle including a text or a black image, it makes it possible to find a determination area with fewer black images. As the image data on which block selection is performed, image data converted into binary image data beforehand by the multivalue to binary conversion section 324 shall be used.
In Step S1107, if the location of the paper fingerprint area in which the collation rate becomes highest and information whereby the paper is uniquely recognized are associated with each other and stored, or printed on the paper, it becomes possible to easily find the paper fingerprint information for specific paper.
Moreover, in Step S1101, in which scanned image data is received, it is also possible to receive both front-side and rear-side image data and to search both the front side and the rear side for an optimal paper fingerprint area, or for one that gives a collation rate equal to or more than a certain threshold.
Moreover, this embodiment can be combined with the first embodiment (that is, in the second embodiment, when the area of the highest collation rate is determined, it is possible to eliminate or add an area consisting of only a white area).
In each of the first and third embodiments, an embodiment that performed determination on an area of a predetermined size read by the scanner section 13 was described. Here, as another embodiment, an embodiment will be described in which a paper fingerprint area that can be collated is detected by altering the size of the area on which the edge determination is performed, with a certain position as a start point. Incidentally, since all controls and units other than the detection of a conceivable paper fingerprint area are the same as those in the first and third embodiments, only what is different (another embodiment of
In the case where the area subjected to the edge determination contains only an image with a large amount of printing, such as solid black, the paper fiber cannot be read clearly, and accordingly the collation rate at the time of collating the paper fingerprint becomes low. In light of this fact, a method for determining a paper fingerprint area that can be collated by enlarging the area to be read, thereby increasing the areas other than the solid black area, will be described below.
A control to perform the white image area determination on the altered area in the received data will be described below. Incidentally, although this white image area determination is the same control as the edge determination of the first embodiment, it is intended to find not how many image areas are included, but how many white areas are included. Here, if the white image areas are more than 10, it shall be determined that collation of the paper fingerprint is possible (that is, when the result of the white image area determination is not more than 10, there are many areas that cannot be read, and accordingly it is considered impossible to collate the paper fingerprint of this paper).
First, in Step S3202, the size of an area in which the edge determination is performed is altered (enlarged). As an alteration method, various methods are conceivable. In this embodiment, a method for enlarging the determination area by a certain ratio with a certain start point as shown in
The above control enables the paper fingerprint area that can be collated to be found.
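Under the same assumed binary-image representation (0 white, 1 printed), the enlargement search might be sketched as follows; the growth ratio of 2 and the function name are assumptions, while the "more than 10 white areas" condition comes from the text:

```python
def find_collatable_area(image, start=(0, 0), initial=3, ratio=2, white_needed=10):
    """Enlarge the determination area from `start` until it holds enough white
    pixels to collate, or until it no longer fits on the paper."""
    h, w = len(image), len(image[0])
    top, left = start
    size = initial
    while top + size <= h and left + size <= w:
        whites = sum(1 for i in range(size) for j in range(size)
                     if image[top + i][left + j] == 0)
        if whites > white_needed:   # white image area determination succeeds
            return size             # this enlarged area can be collated
        size *= ratio               # Step S3202: enlarge the determination area
    return None                     # even the largest area cannot be collated
```

On an all-white 8×8 page the 3×3 area (9 white pixels) fails the "more than 10" test, but the enlarged 6×6 area passes; on an all-black page no size ever passes, which corresponds to the error case.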
In addition, this embodiment can be combined with the first embodiment or the third embodiment (the combination of this embodiment with the third embodiment makes it possible to detect a paper fingerprint area that can be collated and gives the highest recognition rate).
In the embodiment so far described, a printed original was read and the paper fingerprint area was determined, as in Step S1101 of
In Step S3401, print data (for example, PDL data) from the PC (personal computer) is received via the LAN 50 or the WAN 331. Next, this print data is interpreted under the control of the CPU 301, and the intermediate data is sent to the RIP 328. Then the RIP 328 performs rendering to generate bit map data (Step S3402). Incidentally, the processing of receiving this PDL data and generating bit map data shall be a known technique, and details thereof are omitted here. The bit map data is not directly put into printing processing. The multivalue to binary conversion section 324 generates binarized image data, to which the edge determination processing of the first and third embodiments is applied to determine the paper fingerprint area (Step S3403). Incidentally, if the paper fingerprint area cannot be detected in Step S3403, error processing is performed; this is the same as in the other embodiments, and details thereof are omitted.
By the above processing, paper fingerprint information can be acquired from print data. Incidentally, after the paper fingerprint area is determined, it is also possible to perform printing processing of only bit map data as in Step S3404 and printing processing of synthetic data acquired by adding paper fingerprint information to the bit map data.
In the above-mentioned embodiment, an example in which PDL data was received as print data in Step S3401 was described. However, it is also possible to receive data read by the scanner installed in the body of the image forming apparatus as print data, as in the second embodiment.
In the above-mentioned embodiment, the edge determination processing is performed on binary image data to determine the paper fingerprint area. However, the processing can be made faster as follows. First, an area with few edges, that is, an area with a small amount of character information, is searched for by performing common image area processing. Then, the edge determination processing is performed only on this area.
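The two-stage speed-up might look like the following sketch; the coarse tile size, the choice of black-pixel counts as the edge measure, and all names are assumptions:

```python
def two_stage_search(image, coarse=8, fine=3):
    """Coarse pass picks the large tile with fewest edges; the precise edge
    determination then runs only inside that tile."""
    h, w = len(image), len(image[0])

    def edge_count(top, left, size):
        return sum(image[top + i][left + j]
                   for i in range(size) for j in range(size))

    # Stage 1: common image area processing -- large tile with the fewest edges.
    _, (ct, cl) = min((edge_count(t, l, coarse), (t, l))
                      for t in range(0, h - coarse + 1, coarse)
                      for l in range(0, w - coarse + 1, coarse))
    # Stage 2: edge determination only on the fine blocks inside that tile.
    return min((edge_count(ct + t, cl + l, fine), (ct + t, cl + l))
               for t in range(0, coarse - fine + 1, fine)
               for l in range(0, coarse - fine + 1, fine))[1]
```

The fine edge determination now runs over one coarse tile instead of the whole page, which is the source of the speed-up described above.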
Note that this invention can be applied to both a system made up of a plurality of apparatuses (for example, a computer, an interface device, a reader, a printer, etc.) and a system made up of a single apparatus (a digital multi-function machine, a printer, a facsimile apparatus, etc.).
Moreover, the object of the present invention is attained by a computer (CPU or MPU) of a system or apparatus reading a program code that realizes the procedures of the flowcharts shown in the embodiments described above from a storage medium storing the program code, and executing it. In this case, the program code itself read from the storage medium realizes the functions of the above-mentioned embodiments. Therefore, the program code and the storage medium that stores the program code also constitute aspects of the present invention.
As a storage medium for supplying the program code, for example, a floppy (registered trademark) disk, a hard disk, an optical disk, a magneto-optical disk, CD-ROM, CD-R, magnetic tape, a nonvolatile memory card, ROM, etc. can be used.
The functions of the embodiments described above are realized not only by the computer executing the read program code, but also in the following case: an OS (operating system) working on the computer performs a part or all of the actual processing based on instructions of the program code, and the functions of the embodiments described above are realized by that processing.
Furthermore, the program code read from the storage medium may first be written in memory provided in a function extension board inserted into the computer or in a function extension unit connected to the computer. Subsequently, based on instructions of the program code, a CPU or the like provided in the function extension board or the function extension unit performs a part or all of the actual processing, and the functions of the embodiments described above are realized by that processing.
Note that the embodiments described above merely show examples for carrying out the present invention, and they shall not be construed to limit the technical scope of the present invention. That is, the present invention can be carried out in various forms without departing from its technical thought or its main features.
Each of the above configurations enables a paper fingerprint in an area of high recognition accuracy to be registered. Moreover, it becomes possible to register a paper fingerprint of a portion in which necessary information exists (if that portion is covered, the necessary information cannot be read either).
Therefore, when originality guarantee and duplication prevention are performed using a paper fingerprint, which is a characteristic of the paper itself, it becomes possible to prevent a user from interfering with security. Moreover, even in paper with a large amount of printing (paper with few non-image areas), a paper fingerprint area of a high collation rate can be found, and therefore it becomes possible to set up and register paper fingerprints for a large amount of paper.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure and functions.
This application claims the benefit of Japanese Patent Application No. 2006-328525, filed Dec. 5, 2006, which is hereby incorporated by reference herein in its entirety.