The preferred embodiments of the present invention are shown by way of example, and not limitation, in the accompanying figures, in which:
In the following paragraphs, some preferred embodiments of the invention will be described by way of example and not limitation. It should be understood, based on this disclosure, that various other modifications can be made by those skilled in the art based on these illustrated embodiments.
As shown in
The CPU 1 serves to control the overall copy machine; specifically in this embodiment, it further serves to control giving the image a ground pattern as additional information, performing a tone correction of the image and an output level correction of the ground pattern, calculating data for these corrections, and so on.
The ROM 2 is a memory that stores a program to make the CPU 1 work, and the CPU 1 executes and controls various processes according to the program stored in the ROM 2.
The RAM 3 is a memory that provides a working area for the CPU 1 to work according to the program.
The scanner 4 comprises, for example, an image scanner, and serves to read an image on a document M placed on a document table 9 (shown in
The operation panel 5 comprises a numeric keypad and a touch panel display (not shown) for various user input operations. It also serves to display messages, jobs in process and processing results to users on the display.
The storage 6 serves to store an application program, data of a ground pattern to be given to image, data of test patterns for output level correction of ground pattern and tone correction, and other various data.
The printer 7 comprises a photoreceptor, a development part, a fixing part, a sheet feeder, a transfer belt and so on (not shown), and serves to form an image based on image data from the scanner 4 and print the image on paper. In this embodiment, the printer 7 prints on paper a plurality of test patterns for output level correction of ground pattern in different output levels, and a plurality of test patterns for tone correction in different tones. Further explanation of the printer 7 is omitted because its configuration is conventionally known.
The external interface 8 serves as a communication part to exchange data with an external device on a network, for example, a user terminal.
As shown in
The document table 9 comprises a transparent board like a glass board on which the document M is placed.
The image reader 10 is located just under the document table 9, and comprises a slider 11 capable of moving back and forth in the sub-scanning direction (the horizontal direction) as indicated by an arrow, mirrors 14 and 15, a lens 16, a prism 17, a CCD 18 as an image sensor, and so on.
The slider 11 comprises an irradiation lamp 12 to irradiate the image of the document M with light, and a mirror 13 to direct the light reflected from the image of the document in a predetermined direction, and it serves to read the image of the document by moving back and forth automatically at a constant speed in the sub-scanning direction. The light originating from the irradiation lamp 12 is reflected depending on the tone of the image of the document M placed on the document table 9.
The light directed by the mirror 13 is redirected by the mirrors 14 and 15, and routed through the lens 16 into the prism 17. The prism 17 serves to split the incoming light into the three colors of R (red), G (green) and B (blue), depending on its wavelength.
The three colors of light split by the prism 17 enter the three CCDs 18 exclusively allocated for the respective colors. Elements of the colors R, G and B in one line in the main-scanning direction are picked up by the three CCDs 18 at one time from the image of the document. In this way, the two-dimensional image of the document M is read progressively, line by line, by the slider 11 that moves back and forth in the sub-scanning direction.
The image signal processor 20 serves to receive analog signals outputted from the CCDs 18, and convert them to a predetermined format of image data in cooperation with the CPU 1.
As shown in
The image signal processor 20 comprises, for example, an A/D converter 22, a shading corrector 23 and an image corrector 30.
The A/D converter 22 performs offset and gain corrections on the analog signals inputted from the CCDs 18, and converts the corrected signals of the respective colors R, G and B to eight-bit image data (r, g and b) (256 tones).
The shading corrector 23 corrects, on the image data of the respective colors, unevenness caused by the light distribution of the irradiation lamp 12 and by pixel-to-pixel detection variations of the CCDs 18.
In this way, image data S1 (r′, g′ and b′) of the respective colors, which indicate brightness, are outputted from the shading corrector 23 or the external interface 8.
The image corrector 30 comprises a log converter 31, a UCR processor 32, a BP processor 33, a color corrector 34, a tone corrector 35, an error diffusion processor 36, a D/A converter 37, a data holder 38 for tone correction, a data calculator 39 for tone correction, a ground pattern merger 40, a ground pattern data holder 41, a ground pattern image generator 42, a ground pattern output level corrector 43, a data calculator 44 for output level correction of ground pattern, and others.
The log converter 31 converts the image data to image data (Dr, Dg and Db) indicating the optimal tones to match human relative luminous sensitivity.
The UCR processor 32 serves to pick up dark color elements to be reproduced in Black toner, from the image data (Dr, Dg and Db), and correct data values of R, G and B depending on a value of the picked up elements.
The BP processor 33 serves to generate Black data (K data) based on the data from the UCR processor 32 and the log converter 31.
After the UCR processing, the color corrector 34 performs a mask calculation for color correction, in which the image data (Dr′, Dg′ and Db′) indicating the optimal tones for the respective colors R, G and B is converted to toner image data of the three colors C (cyan), M (magenta) and Y (yellow) to match the toner characteristics.
The image data of the four colors C, M, Y and K consists of pixels, each having eight bits, to reproduce image in 256 tones.
The tone corrector 35 corrects tones affected by a background color and a density slope of the image on the document M, according to data such as a γ correction table recorded in the data holder 38 for tone correction.
The data calculator 39 for tone correction calculates data (such as data in the γ correction table) to be used by the tone corrector 35 for tone correction. In this embodiment, there are two methods to calculate data for tone correction: one calculates it based on reading results obtained by the scanner 4 from test patterns for tone correction outputted on paper or the like, and the other calculates it based on detecting results obtained by a tone density sensor (a toner adhesion amount sensor) from test patterns for tone correction formed on an image carrier such as a photoreceptor or a transfer belt. These methods will be explained below. The data for tone correction calculated by the data calculator 39 is recorded in the data holder 38 for tone correction. Then, based on the latest data for tone correction, the tone corrector 35 performs a tone correction.
The error diffusion processor 36 performs an error diffusion on the image data (eight bits) having 256 tones to obtain value-reduced data SG 1 (one bit) having two tones.
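The embodiment does not specify the diffusion kernel, so the following sketch uses a simple one-dimensional forward diffusion over a single scan line; the function name and the threshold of 128 are illustrative assumptions, not part of the embodiment.

```python
def error_diffuse(row, threshold=128):
    """Reduce one scan line of 8-bit values (0-255, 256 tones) to
    1-bit values (two tones), carrying the quantization error of each
    pixel forward to the next pixel."""
    out = []
    err = 0
    for v in row:
        v = v + err                      # add error left by the previous pixel
        bit = 1 if v >= threshold else 0
        out.append(bit)
        err = v - bit * 255              # error remaining after quantization
    return out

print(error_diffuse([128, 128, 128, 128]))  # [1, 0, 1, 0]
```

Mid-gray input becomes an alternating dot pattern, which is how the two-tone output preserves the average 256-tone density.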
The D/A converter 37 performs a D/A conversion on the digital print data to output analog print data.
The ground pattern merger 40 serves, when a user inputs an instruction via the operation panel 5, to give ground pattern image data to the image data outputted from the D/A converter 37 after image processing, to create data of an image with a ground pattern. The ground pattern image to be given is originally generated by the ground pattern image generator 42 based on respective data for the background part B and the latent image part A, which are recorded in the ground pattern data holder 41; then the ground pattern output level corrector 43 corrects the generated data to obtain an optimal output level of the ground pattern to be outputted. The ground pattern output level corrector 43 performs the correction based on data calculated by the data calculator 44.
In this way, the ground pattern merger 40 merges the corrected ground pattern image data and the target image data to output data of image with a ground pattern.
In this embodiment, an error diffusion method is taken just as an example, and other image processing methods are also applicable. In addition, the number of bits per pixel and the number of tones are not limited.
Hereinafter, how the image signal processor 20 creates data of image with a ground pattern will be explained with reference to the flowchart in
As shown in
Subsequently, it is judged in Step S3 whether or not a ground pattern print mode is selected by a user via the operation panel 5. If a ground pattern print mode is not selected (NO in Step S3), the routine proceeds to Step S7 where the image data is used directly for outputting.
If a ground pattern print mode is selected (YES in Step S3), ground pattern image data is generated in Step S4. The ground pattern image data is generated by merging the background part B and the latent image part A, according to latent image part definition image data (shown in
In the background part B, some bits (1, for example) indicate black pixels and the other bits (0, for example) indicate white pixels. The latent image part A similarly consists of black pixels and white pixels, but contains more black pixels than the background part B in order to make its dots look larger, as shown in
In this embodiment, ground pattern image data consists of pixels each having one bit, for example, but the number of bits is not limited to one.
The routine proceeds to Step S5, where the generated ground pattern image data is corrected to obtain a predetermined output level; then in Step S6, the corrected ground pattern image data is merged with the image data of the document. In Step S7, the merged image data is determined to be ready for outputting. The data of the image with a ground pattern, which is to be outputted, is transmitted to the printer 7 and printed on paper or the like.
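The generation of Step S4 and the merge of Step S6 can be sketched as follows. The tile representation, the convention that a pattern bit of 1 forces a black pixel, and all names are illustrative assumptions rather than the embodiment's actual data formats.

```python
def generate_ground_pattern(definition, background_tile, latent_tile):
    """Step S4 (sketch): build 1-bit ground pattern image data by tiling
    the background part B everywhere, and the latent image part A
    wherever the latent image part definition data flags a pixel."""
    pattern = []
    for y, row in enumerate(definition):
        out_row = []
        for x, flag in enumerate(row):
            tile = latent_tile if flag else background_tile
            out_row.append(tile[y % len(tile)][x % len(tile[0])])
        pattern.append(out_row)
    return pattern

def merge_ground_pattern(image, pattern):
    """Step S6 (sketch): overlay the pattern on the document image data;
    a pattern bit of 1 forces a black pixel (brightness 0)."""
    return [[0 if bit else px for px, bit in zip(img_row, pat_row)]
            for img_row, pat_row in zip(image, pattern)]
```

With a sparse background tile and a denser latent tile, the merged output reproduces the dot-density difference between the latent image part A and the background part B described above.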
Hereinafter, how to acquire data for output level correction of ground pattern will be explained.
The processes are started by user operation to press a button for automatic output level correction of ground pattern (not shown) provided on the operation panel 5.
In Step S11, it is judged whether or not an instruction is given by user operation to press the button. If an instruction is not given by user operation (NO in Step S11), the routine directly terminates. If an instruction is given by user operation (YES in Step S11), the printer 7 prints a plurality of test patterns for output level correction of ground pattern in different output levels on paper or others, in Step S12.
Then, a user has the scanner 4 read the printed sheet carrying the test patterns for output level correction of ground pattern. In Step S13, it is judged whether or not the reading is completed. If it is not completed (NO in Step S13), the routine waits until it is completed. If it is completed (YES in Step S13), data for correction is calculated in Step S14.
The test patterns 51 and 52 for output level correction of ground pattern are prepared for the latent image part A and the background part B, respectively.
For the background part B, there are five test patterns 51 (1) to (5), each having a different size of dots, aligned in the order of output levels as shown in
Similarly, for the latent image part A, there are five test patterns 52 (a) to (e), each having a different size of dots, aligned in the order of output levels. The size of dots in the test patterns 52 of the latent image part A is larger than that in the respective test patterns 51 of the background part B.
In this embodiment, as one example of a method to obtain different sizes of dots for the patterns 51 of the background part B and the patterns 52 of the latent image part A, the size of a pixel 54 is changed depending on the laser light volume that forms a dot, as shown in
As shown in
When a document with a ground pattern is read, the scanner 4 does not ordinarily pick up small dots in the background part B of the ground pattern. Or if it does, data of the picked up dots is erased so as not to be outputted on paper. On the other hand, when it is read for the purpose of output level correction of ground pattern, it is necessary to detect an output level of the patterns 51 for output level correction of the background part B with a high degree of accuracy.
To read the patterns 51 for output level correction of the background part B carefully, a reading speed of the scanner 4 is set to a lower level than ordinary, and the noise removal and corrections ordinarily performed on readout image data are disabled.
As shown in
Based on the output levels detected from the respective detection patches in this way, the laser light volume needed for the latent image part A and the background part B to be outputted at optimal output levels is calculated in the data calculation process in Step S14 of the flowchart shown in
Hereinafter, how to calculate laser (LD) light volume for the background part B will be explained with reference to the chart of output characteristic of the background part B, which is shown in
As shown in the table in
According to the plurality of detecting results, a calculation is performed by substituting a condition for obtaining the target output level value into a calculating formula. In this embodiment, if S_t=50 is set as the target output level value, the laser light volume that brings S_t=50 should lie between those of the second and third detection patches. Therefore, the laser light volume LD_2 and the detected output level value STN_2 of the second detection patch, and the laser light volume LD_3 and the detected output level value STN_3 of the third detection patch, are used to calculate the laser light volume LD_t to be set, by the following formula: LD_t=(S_t−STN_2)×(LD_3−LD_2)/(STN_3−STN_2)+LD_2=(50−40)×(300−200)/(60−40)+200=250.
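The interpolation above can be checked numerically with a short sketch; the function name is illustrative.

```python
def interpolate_ld(s_t, ld_lo, stn_lo, ld_hi, stn_hi):
    """Linear interpolation between two detection patches that bracket
    the target output level value S_t."""
    return (s_t - stn_lo) * (ld_hi - ld_lo) / (stn_hi - stn_lo) + ld_lo

# Values from the example: S_t = 50, (LD_2, STN_2) = (200, 40), (LD_3, STN_3) = (300, 60).
print(interpolate_ld(50, 200, 40, 300, 60))  # 250.0
```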
As shown in
In Step S23, it is judged whether or not STN_1≦S_t<STN_2, and if it is STN_1≦S_t<STN_2 (YES in Step S23), the routine proceeds to Step S30.
If it is not STN_1≦S_t<STN_2 (NO in Step S23), it is judged in Step S24 whether or not STN_2≦S_t<STN_3. If it is STN_2≦S_t<STN_3, (YES in Step S24), the routine proceeds to Step S31.
If it is not STN_2≦S_t<STN_3 (NO in Step S24), it is judged in Step S25 whether or not STN_3≦S_t<STN_4. If it is STN_3≦S_t<STN_4 (YES in Step S25), the routine proceeds to Step S32.
If it is not STN_3≦S_t<STN_4 (NO in Step S25), it is judged in Step S26 whether or not STN_4≦S_t<STN_5. If it is STN_4≦S_t<STN_5 (YES in Step S26), the routine proceeds to Step S33.
If it is not STN_4≦S_t<STN_5 (NO in Step S26), it is judged in Step S27 whether or not STN_1<S_t. If it is not STN_1<S_t (NO in Step S27), it is determined in Step S28 that the LD light volume to be set (=LD_t)=the maximum light volume, then the routine proceeds to Step S29.
If it is STN_1<S_t (YES in Step S27), it is determined in Step S34 that the LD light volume to be set (=LD_t)=the minimum light volume, then the routine proceeds to Step S29.
In Step S29, the calculated LD light volume is determined, and then the routine terminates.
In Step S30, (x, y)=(LD_1, STN_1), X=LD_2−LD_1, and Y=STN_2−STN_1 are calculated, and then the routine proceeds to Step S35.
In Step S31, (x, y)=(LD_2, STN_2), X=LD_3−LD_2, and Y=STN_3−STN_2 are calculated, and then the routine proceeds to Step S35.
In Step S32, (x, y)=(LD_3, STN_3), X=LD_4−LD_3, and Y=STN_4−STN_3 are calculated, and then the routine proceeds to Step S35.
In Step S33, (x, y)=(LD_4, STN_4), X=LD_5−LD_4, and Y=STN_5−STN_4 are calculated, and then the routine proceeds to Step S35.
In Step S35, a slope (=A)=Y/X is calculated, and in Step S36, LD_t=(S_t−y)/A+x is calculated. And then in Step S29, the calculated LD light volume is determined as the laser light volume that brings the target output level value.
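The branch structure of Steps S21 through S36 can be summarized in a single routine. The sketch assumes five detection patches whose detected output level values STN_1 to STN_5 are in ascending order; all names are illustrative.

```python
def calc_ld_volume(s_t, ld, stn, ld_min, ld_max):
    """ld and stn hold the laser light volumes and the detected output
    level values (STN_1..STN_5) of the five detection patches."""
    # Steps S23-S26: find the interval STN_i <= S_t < STN_(i+1).
    for i in range(4):
        if stn[i] <= s_t < stn[i + 1]:
            x, y = ld[i], stn[i]                                  # Steps S30-S33
            slope = (stn[i + 1] - stn[i]) / (ld[i + 1] - ld[i])   # Step S35
            return (s_t - y) / slope + x                          # Step S36
    # Steps S27, S28 and S34: target outside the detected range,
    # clamp to the maximum or minimum light volume.
    return ld_max if not stn[0] < s_t else ld_min

# Example: the target value 50 falls between the second and third patches.
print(calc_ld_volume(50, [100, 200, 300, 400, 500],
                     [20, 40, 60, 80, 100], 0, 600))  # 250.0
```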
Similarly, optimal laser light volume for the latent image part A is also calculated according to the flowchart.
Then, the ground pattern output level corrector 43 in
Although this embodiment is explained with the five detection patches 51 and the five detection patches 52, the number of the detection patches 51 and 52 is not limited to five, and can be arbitrarily changed.
Meanwhile, in this embodiment, the size of dots is adjusted by changing the size of pixels depending on laser light volume as shown in
In addition, as shown in
In addition, output levels also can be changed depending on location on the print side of paper.
In sum, in this embodiment, a plurality of the detection patches 51 and 52 outputted in different output levels are read by the scanner 4, data for output level correction of ground pattern is calculated based on the reading results, and then the output levels of the latent image part A and the background part B are automatically corrected based on the calculated data for correction. Therefore, the image forming apparatus can optimize the output level of a ground pattern without user operation to select the best ground pattern image, even if the output level of the ground pattern happens to be changed by a disturbance. Further, accurate data for correction is calculated based on data read out by the scanner 4 from the detection patches, not based on data acquired by a sensor that senses the amount of used toner when an image stabilization control is performed. Based on the accurate data acquired in this way, the output level of the ground pattern can be corrected with a high degree of accuracy.
If a single reading does not yield data accurate enough for correction, it is only necessary to repeat the processes: creating another image sample 53, making the scanner 4 read the detection patches, and calculating data for correction.
Meanwhile, as described above in this embodiment, the density (tone) of the image data to be given ground pattern data can also be corrected.
Density (tone) of image to be outputted from the printer 7 of an image forming apparatus such as a copy machine, tends to be changed by a disturbance such as an environmental factor or aging, even under the same development conditions.
To remove this inconvenience, a tone correction is performed to correct the toner density of the image to be outputted. There are two methods to calculate data for tone correction as described above: one method calculates data for tone correction based on reading results obtained by the scanner 4 from test patterns for tone correction outputted on paper or the like, and the other method calculates data for tone correction based on results detected from test patterns for tone correction formed on an image carrier such as a transfer belt.
In the method of calculating based on detecting results from test patterns (also referred to as “toner patches”) for tone correction, which are formed on an image carrier, a sensor to sense the amount of used toner should be prepared. Then, toner patches are formed on an image carrier when an image stabilization control is performed. The amount of used toner is detected by the sensor, and the actual amount of used toner is estimated. Based on the detecting results obtained by the sensor, data for tone correction is calculated to print the image at an optimal density after the image stabilization control is completed. A tone correction based on the acquired data for tone correction can also be performed by adjusting image development conditions or the like, instead of by a γ correction or the like.
In the method of calculating based on reading results obtained by the scanner 4 from test patterns for tone correction, a tone correction is performed with a higher degree of accuracy than in the method utilizing a sensor that senses the amount of used toner, and a high-quality image can be obtained. That is, a test pattern 61 for tone correction, which has a density slope from lower tone to higher tone, is printed on paper in the respective colors of yellow (Y), magenta (M), cyan (C) and black (K) to create an image sample 62 as shown in
Then, based on the detected tone data, a tone of original image data shown in
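One simple way to realize such a correction is to invert the measured density response into a lookup table of the kind held in the data holder 38; the 256-entry representation and all names below are illustrative assumptions, not the embodiment's actual γ correction table format.

```python
def build_correction_table(measured):
    """measured[v] is the density actually detected for input level v
    (0-255), assumed monotonically non-decreasing. The returned table
    maps each desired density d to the input level whose measured
    density is closest to d, so that printing table[d] reproduces
    roughly density d."""
    return [min(range(256), key=lambda v: abs(measured[v] - d))
            for d in range(256)]

# Example: an engine that prints at half density is compensated by
# roughly doubling the input levels.
table = build_correction_table([v // 2 for v in range(256)])
print(table[50])  # 100
```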
Hereinafter, timings to acquire data for output level correction of ground pattern and data for tone correction will be explained.
Generally, an image forming apparatus such as a copy machine comprises a counter that counts the number of printed sheets. Thus, there exist many image forming apparatuses that determine the timing to perform an image stabilization control based on the number of printed sheets, which is counted by the counter.
Users are notified of the timings to acquire data for tone correction and data for output level correction of ground pattern by a message requesting an instruction, which is displayed based on the number of printed sheets. Although acquisitions of the former data and the latter data can be performed at different timings, they are preferably performed simultaneously, because a simultaneous data calculation is more efficient than data calculations at different timings and never reduces productivity of the apparatus. It is also applicable that when test patterns for tone correction are printed on a sheet of paper, test patterns for output level correction of ground pattern are also printed on the same sheet of paper, and then both are read by the scanner 4 simultaneously.
As explained above in this embodiment, a plurality of test patterns for output level correction of additional information are outputted in different output levels by an output part, and data for output level correction of additional information such as a ground pattern is calculated based on reading results obtained by an image reader from the test patterns for output level correction of additional information. Then, an output level of additional information is automatically corrected based on the calculated data for correction. In this way, an image forming apparatus can correct an output level of additional information without user operation to select the best image of the additional information, even if the output level of the additional information happens to be changed by a disturbance. In addition, accurate data for correction is calculated based on the reading results obtained by the image reader from the test patterns for output level correction of additional information, not based on detecting results by a sensor that senses the amount of used toner when an image stabilization control is performed. Based on the accurate data acquired in this way, the output level of the ground pattern can be corrected with a high degree of accuracy.
A tone correction is further performed on the image to be given the additional information, by an image forming apparatus comprising: a data calculator for tone correction, which calculates data for tone correction of the image to be given the additional information, based on the reading results obtained by the reader from the test patterns outputted by the output part; and a tone corrector that corrects a tone of the image to be given the additional information based on the calculated data for correction.
A tone correction is further performed on the image to be given the additional information, by an image forming apparatus comprising: a detector that detects tones of test patterns for tone correction, which are formed on an image carrier owned by the output part; a data calculator for tone correction, which calculates data for tone correction of the image to be given the additional information, based on detecting results obtained by the detector from the test patterns for tone correction; and a tone corrector that corrects a tone of the image to be given the additional information, based on the calculated data for tone correction.
In addition, even if the output level of the ground pattern happens to be changed, the output level is corrected by an image forming apparatus, wherein the additional information corresponds to a ground pattern consisting of dotted patterns and a calculator calculates data for output level correction of the ground pattern.
In addition, the output level of the ground pattern can be corrected by changing the size of pixels, if the data for output level correction of ground pattern relates to the size of pixels.
In addition, the output level of the ground pattern can be corrected by changing the layout of pixels, if the data for output level correction of ground pattern relates to the design of pixels.
In addition, spotty data of the output level of the additional information, which is detected in the main-scanning direction, can be corrected by an image forming apparatus, wherein the test patterns for output level correction of additional information are aligned repeatedly in the main-scanning direction, and an output level corrector corrects the spotty data of the output level of the additional information, which is detected in the main-scanning direction.
In addition, if a calculator for output level correction performs its calculation simultaneously with a calculation by the calculator for tone correction, the calculation is performed more efficiently than in a case where those calculators perform the calculations at different timings, without reducing productivity of the apparatus.
In addition, if the reader reads the test patterns for output level correction of additional information at a slower speed than it reads the image to be given the additional information, the reader can read the test patterns correctly for calculating data for output level correction with a high degree of accuracy, even if the test patterns consist of small dots just as the background part of the ground pattern does.
In addition, if an output level of additional information happens to be changed by a disturbance, it is possible to correct the output level of the additional information automatically with a high degree of accuracy, without user operation to select the best image of the additional information, by an image processing method comprising: reading an image by a reader; outputting the image by an output part; giving additional information to the image before outputting the image by the output part; making the output part output a plurality of test patterns for output level correction of additional information in different output levels; calculating data for output level correction of additional information based on reading results obtained by the reader from the outputted test patterns for output level correction of additional information; and correcting the output level of the additional information based on the calculated data for correction.
In addition, it is possible to correct a tone of the image to be given the additional information, by an image processing method further comprising: calculating data for tone correction of the image to be given the additional information, based on reading results obtained by the reader from test patterns for tone correction, which are outputted by the output part; and correcting the tone of the image to be given the additional information, based on the calculated data for correction.
In addition, it is possible to correct a tone of the image to be given the additional information by an image processing method further comprising: detecting tones of test patterns for tone correction, which are formed on an image carrier owned by the output part; calculating data for tone correction of the image to be given the additional information, based on the detecting results obtained by a detector; and correcting the tone of the image to be given the additional information, based on the calculated data for tone correction.
In addition, it is possible not only to calculate data for output level correction of additional information such as a ground pattern based on reading results obtained by a reader from test patterns for output level correction of additional information in different output levels, but also to correct an output level of the image based on the calculated data for correction, according to an image processing program to make a computer execute: reading an image by a reader; outputting the image by an output part; giving additional information to the image before outputting the image by the output part; making the output part output a plurality of test patterns for output level correction of additional information in different output levels; calculating data for output level correction of additional information based on reading results obtained by the reader from the outputted test patterns for output level correction of additional information; and correcting an output level of the additional information based on the calculated data for correction.
In addition, it is possible to correct the tone of the image to be given the additional information according to an image processing program to make a computer further execute: calculating data for tone correction of the image to be given the additional information, based on reading results obtained by the reader from test patterns for tone correction, which are outputted by the output part; and correcting the image to be given the additional information, based on the calculated data for correction.
In addition, it is possible to correct a tone of the image to be given the additional information according to an image processing program to make a computer further execute: detecting tones of the test patterns for tone correction, which are formed on an image carrier owned by the output part; calculating data for tone correction of the image to be given the additional information, based on the detecting results obtained by a detector from the test patterns for tone correction; and correcting the tone of the image to be given the additional information, based on the calculated data for tone correction.
While the present invention may be embodied in many different forms, a number of illustrative embodiments are described herein with the understanding that the present disclosure is to be considered as providing examples of the principles of the invention and such examples are not intended to limit the invention to preferred embodiments described herein and/or illustrated herein.
While illustrative embodiments of the invention have been described herein, the present invention is not limited to the various preferred embodiments described herein, but includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g. of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. For example, in the present disclosure, the term “preferably” is non-exclusive and means “preferably, but not limited to”. In this disclosure and during the prosecution of this application, means-plus-function or step-plus-function limitations will only be employed where for a specific claim limitation all of the following conditions are present in that limitation: a) “means for” or “step for” is expressly recited; b) a corresponding function is expressly recited; and c) structure, material or acts that support that structure are not recited. In this disclosure and during the prosecution of this application, the terminology “present invention” or “invention” may be used as a reference to one or more aspects within the present disclosure. The terminology “present invention” or “invention” should not be improperly interpreted as an identification of criticality, should not be improperly interpreted as applying across all aspects or embodiments (i.e., it should be understood that the present invention has a number of aspects and embodiments), and should not be improperly interpreted as limiting the scope of the application or claims.
In this disclosure and during the prosecution of this application, the terminology “embodiment” can be used to describe any aspect, feature, process or step, any combination thereof, and/or any portion thereof, etc. In some examples, various embodiments may include overlapping features. In this disclosure and during the prosecution of this case, the following abbreviated terminology may be employed: “e.g.” which means “for example”, and “NB” which means “note well”.
Number | Date | Country | Kind |
---|---|---|---|
2006-281828 | Oct 2006 | JP | national |