IMAGE FORMING APPARATUS

Information

  • Patent Application
    20080069578
  • Publication Number
    20080069578
  • Date Filed
    September 05, 2007
  • Date Published
    March 20, 2008
Abstract
A CMOS sensor scans an image that is either on a transfer material carrier or on a transfer material that is placed on the transfer material carrier. A sampling timing controller samples an image signal at a predetermined sampling rate and computes a position of a predetermined pattern contained within the sampled image signal. A speed computation processor computes a moving speed of either the transfer material carrier or the transfer material, based on the predetermined sampling rate and the position of the predetermined pattern thus sampled and computed at that rate. The image region that the CMOS sensor scans is determined in accordance with the rotational speed of the drive motor and the sampling rate.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in, and constitute a part of, the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.



FIG. 1 depicts a diagram explaining the related art.



FIGS. 2 through 4 depict views explaining conventional detection of conveyance speed.



FIG. 5 depicts a cutaway conceptual view illustrating a configuration of an image forming apparatus, i.e., a laser printer, according to an exemplary embodiment of the present invention.



FIG. 6 is a block diagram describing a primary configuration of the image forming apparatus according to the embodiment of the present invention.



FIG. 7 depicts a view explaining a detection of an image, by an image sensor unit, on a belt.



FIG. 8 depicts a view illustrating an image that is formed on a surface of a feeding belt, and a detection of the image by a sensor.



FIG. 9 is a timing diagram explaining an operation of an image sensor unit.



FIG. 10 is a block diagram illustrating a configuration of the image sensor unit according to the embodiment.



FIGS. 11A through 11D, FIGS. 12A through 12C, and FIGS. 13A through 13C depict views explaining an example of the movement of an image being scanned in a predetermined sampling interval (t2−t1), each at a different speed.



FIG. 14 is a functional block diagram illustrating a configuration of a function of a DSP that detects and controls a CMOS sensor signal, according to the embodiment.



FIG. 15 is a flowchart explaining a process of the DSP identifying a target pattern, according to the embodiment.



FIG. 16 is a flowchart explaining a DSP segment designation process, according to the embodiment.



FIG. 17 is a flowchart explaining a process of a CPU of the image forming apparatus detecting the conveyance speed, according to the embodiment.



FIG. 18 depicts a cutaway conceptual view illustrating a configuration of an image forming apparatus according to a second embodiment of the present invention.





DESCRIPTION OF THE EMBODIMENTS

Numerous embodiments of the present invention will now be described in detail with reference to the accompanying drawings. The following embodiments are not intended to limit the claims of the present invention, and not all combinations of features described in the embodiments are necessarily essential as means for attaining the objects of the present invention.



FIG. 5 depicts a cutaway conceptual view illustrating a configuration of an image forming apparatus, i.e., a laser printer, according to an exemplary embodiment of the present invention.


An image forming apparatus (printer) 1000 comprises a feeding belt 5, i.e., a transfer member, which conveys a transfer material P, i.e., a sheet of printing paper. Yellow, magenta, cyan, and black process cartridges 14 through 17 (“cartridges”) are placed in tandem as an image formation unit, aligned along the carrying surface of the feeding belt 5, in order from the upstream end in the direction of conveyance of the sheet of printing paper P. Scanner units 18, 19, 20, and 21 are respectively installed above the cartridges, corresponding to each of the cartridges 14 through 17. Transfer rollers 10, 11, 12, and 13 are each positioned, sandwiching the feeding belt 5, corresponding to each of photosensitive drums 6, 7, 8, and 9 of each of the cartridges 14 through 17. The cartridges 14 through 17 respectively comprise charge rollers 14a, 15a, 16a and 17a, developers 14b, 15b, 16b and 17b, and cleaners 14c, 15c, 16c and 17c, which are placed around the periphery of each of the photosensitive drums 6 through 9. The feeding belt 5 is wound around a drive roller 27 and an idler roller 28, and moves in the direction signified by an arrow X in the diagram, in accordance with the rotation of the drive roller 27.


With regard to the preceding configuration, the sheet of printing paper P is supplied from a printing paper cartridge 2 to the feeding belt 5, by way of a pickup roller 3 and a printing paper feed conveyor roller 29. Toner images of yellow, magenta, cyan, and black are obtained via an established electrophotographic method, and are overlaid and transferred to the sheet of printing paper P. The toner image of the sheet of printing paper P is fixed to the sheet by a fixing unit 22 (22a, 22b), and the sheet is discharged from the apparatus via a discharge sensor 24 and a paper path 23. The fixing unit 22 is conceptually configured of a fixing roller 22a, which contains a heater, and a pressurizing roller 22b.


When forming a toner image on the reverse side of the sheet of printing paper P as well as the obverse, the sheet of printing paper P that is outputted from the fixing unit 22 is conveyed once more to the feeding belt 5 via another printing paper path 25, and the toner image is formed on the reverse of the sheet of printing paper P, via a sequence of steps similar to the foregoing.


The image forming apparatus 1000 provides an image sensor unit 26, which serves as image scanning means, near the black cartridge 17 and the feeding belt 5. The image sensor unit 26 detects an image in a particular area of either the feeding belt 5 or the sheet of printing paper P by shining light on the surface thereof, and collecting and focusing the light reflected therefrom.


The image sensor unit 26 is positioned at the downstream end of the direction of conveyance of the sheet of printing paper P, i.e., near the fixing unit 22, because the drive roller 27 is exposed to the greatest degree of heat from the fixing unit 22. Since the radius of the drive roller 27 thus experiences the most significant thermal expansion, the corresponding fluctuations in the rotational speed of the feeding belt 5 may be detected more quickly.



FIG. 6 is a block diagram illustrating a primary configuration of the image forming apparatus 1000, according to the embodiment of the present invention.


The image forming apparatus 1000 comprises a digital signal processor (DSP) 50, a CPU 51, drum drive motors 52 through 55, which drive the photosensitive drums 6 through 9 for each respective color, and a belt motor 56 of the feeding belt 5, which drives the drive roller 27. The image forming apparatus 1000 also comprises a fixing motor 57, which causes the fixing roller 22a of the fixing unit 22 to rotate, a printing paper feed motor 62, which causes the printing paper feed conveyor roller 29 to rotate, a printing paper feed driver 61, which controls the printing paper feed motor 62, scanner motor units 63 through 66, for each respective color, and a high-voltage power supply unit 59. The DSP 50 controls the drum drive motors 52 through 55, the belt motor 56 of the feeding belt 5, the printing paper feed motor 62, and the image sensor unit 26, while the CPU 51 controls the scanner motor units 63 through 66, the high-voltage power supply unit 59, and the fixing unit 22. The DSP 50 controls the rotation of each motor by deriving the rotational speed of each motor from a detected speed signal from a speed detection MR sensor, and generating a PWM signal to bring the rotational speed of each motor in line with a target speed.



FIG. 7 depicts a view explaining a detection of an image by the image sensor unit 26.


The image sensor unit 26, which is positioned in opposition to the feeding belt 5, comprises an LED 33, i.e., a light element, which shines light, and a CMOS sensor 34, i.e., a detection element, which detects light that is reflected from either the feeding belt 5 or the sheet of printing paper P. The CMOS sensor 34 is a two-dimensional area sensor. Light from the LED 33 irradiates either the feeding belt 5 or the sheet of printing paper P at an angle, via a lens 35. The light reflected from the belt 5 or the sheet P is collected by a focusing lens 36 and focused onto the CMOS sensor 34, thus allowing detection of the image on either the feeding belt 5 or the sheet of printing paper P.



FIG. 8 depicts a view illustrating an image that is formed on a surface of the feeding belt 5.


As shown in FIG. 8, the image sensor unit 26 according to the embodiment allows obtaining the image on the feeding belt 5 as an enlargement 71, which is enlarged by the focusing lens 36. The CMOS sensor 34 is configured of partitions of a plurality of sensor elements, as depicted in the enlargement 71. Reference numeral 72 denotes an example of an image of a segment S11 of the enlargement 71, wherein the CMOS sensor 34 detects the tone of the image. According to the embodiment, an image that the image sensor unit 26 scans is configured of a 4×4 arrangement of segments, employing the CMOS sensor 34, which has a resolution of 8×8 pixels per segment, and eight bits, i.e., 256 tones, per pixel. The configuration, i.e., eight-bit resolution, etc., is only an example; the present invention is not restricted thereto.
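For illustration only, and not as part of the disclosed apparatus, the following Python sketch models the detection area described above, i.e., a 4×4 arrangement of 8×8-pixel segments giving a 32×32-pixel area, and maps a global (column, row) pixel address to the segment label that contains it; the function and constant names are hypothetical.

```python
SEGMENT_SIZE = 8                          # pixels per segment edge
GRID_SIZE = 4                             # 4x4 arrangement of segments
MAX_ADDRESS = SEGMENT_SIZE * GRID_SIZE    # 32 pixel addresses per orientation

def segment_of(column, row):
    """Return the label (e.g. 'S11') of the segment containing a pixel address.

    The row index selects the first digit (S1x through S4x) and the column
    index the second, matching the S11 through S44 labels of FIG. 8.
    """
    if not (0 <= column < MAX_ADDRESS and 0 <= row < MAX_ADDRESS):
        raise ValueError("address outside the detection area")
    return f"S{row // SEGMENT_SIZE + 1}{column // SEGMENT_SIZE + 1}"

print(segment_of(6, 4))    # -> S11
print(segment_of(12, 4))   # -> S12
```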


The surface of either the feeding belt 5 or the sheet of printing paper P may have a minute unevenness because of such factors as scratches, dirt, or the fiber of the printing paper. Such unevenness generates shadows when light shines thereupon at an angle, making it easy to detect a target pattern on the surface of either the feeding belt 5 or the sheet of printing paper P.


It is possible to give enhanced characteristics to the scanned image by applying such unevenness in advance to an area of the surface layer of the feeding belt 5 that does not affect the control of the image transfer. It is also possible to detect a target pattern with enhanced characteristics, without affecting the image transfer, by using a feeding belt 5 whose surface is configured of a transparent substance and whose middle layer is pre-configured with either unevenness or a desired pattern.



FIG. 9 is a timing diagram describing an operation of the image sensor unit 26. FIG. 10 is a block diagram illustrating a configuration of the image sensor unit 26.


The DSP 50 sets control parameters, such as a designated number of filters, for a control circuit 93 in FIG. 10, which uses a /CS signal, a clock signal S2, and a data signal S3 to control serial communication with a selected segment of the CMOS sensor 34. In such a circumstance, as depicted in FIG. 9, S5, the DSP 50 sets the /CS signal to low level, synchronizes the /CS signal with the clock as the transfer mode of the control parameter, and sends an eight-bit command, i.e., a control parameter, as data. The gain of the CMOS sensor 34 is thereby set in a filter circuit 95. The objective of the gain setting is to adjust the gain to allow constant detection of an optimal image, because, for example, the image on the sheet of printing paper P has a higher reflectivity than the image on the feeding belt 5.


The DSP 50 adjusts the gain of the CMOS sensor 34 vis-à-vis the image that is scanned thereby, in order to facilitate implementation of the image comparison process, to be described hereinafter, with a high degree of accuracy. Implementation is achieved, for example, by controlling the gain of the CMOS sensor 34 vis-à-vis the scanned image until a given level of contrast is obtained.
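A minimal sketch of the contrast-driven gain adjustment described above follows, assuming a hypothetical read_frame(gain) routine that returns an 8-bit grayscale frame as an array; the target contrast, step size, and iteration limit are illustrative values, not taken from the disclosure.

```python
def adjust_gain(read_frame, target_contrast=80, gain=1.0, step=0.1, max_iter=20):
    """Raise the sensor gain until the scanned image shows sufficient contrast.

    read_frame(gain) is assumed to return an 8-bit grayscale frame (an array
    with max()/min()); contrast is measured simply as the max-min difference.
    """
    for _ in range(max_iter):
        frame = read_frame(gain)
        contrast = int(frame.max()) - int(frame.min())
        if contrast >= target_contrast:
            return gain                  # enough contrast for pattern matching
        gain += step                     # otherwise raise the gain and retry
    return gain                          # give up after max_iter adjustments
```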


As depicted in FIG. 9, S1, the DSP 50 sets the /CS signal to high level, and sets the transfer mode of image data from the CMOS sensor 34. In such a circumstance, an output circuit 96 sends digital image data that is supplied from the output of the CMOS sensor 34, via an A/D converter 94 and the filter circuit 95, to the DSP 50, in pixel order in synchronism with the CLOCK signal. In such a circumstance, a transmission synchronization clock TXC, depicted as FIG. 9, S4, is generated by a PLL circuit 97, in accordance with the clock signal S2. Consequently, the DSP 50 receives the respective 8×8 pixel data per segment that is outputted by the image sensor unit 26 in order, i.e., PIXEL0, PIXEL1, etc.


Following is a description of a method of computing a segment change of the CMOS sensor 34, as well as a relative distance of either the feeding belt 5 or the sheet of printing paper P. The computation of the relative distance is executed by the DSP 50.



FIGS. 11A through 13C describe a configuration of the CMOS sensor 34 and the movement of the image being scanned in a predetermined sampling period (t2−t1), each at a different speed. The column address is assigned to the moving direction Y of the feeding belt 5, and the row address to the direction X that is orthogonal thereto.



FIG. 14 is a functional block diagram illustrating a configuration of a function of the DSP 50 that detects and controls a signal of the CMOS sensor 34, according to the embodiment. The major areas of the configuration may be broken down into the CMOS sensor 34, the DSP 50 that performs the control and data processing thereof, the CPU 51, and the belt motor 56.


The CMOS sensor 34 is configured of a plurality of segments 340, as per the foregoing, which are S11 through S14, S21 through S24, S31 through S34, and S41 through S44 in the example depicted in FIG. 8. The control signals for each respective segment, i.e., /CS, CLOCK, DATA, and TXC, are connected via selectors (SEL) 341 and 342, each of which inputs or outputs the control signal from the DSP 50 to the designated segment, according to the column and row address supplied from the DSP 50. The DSP 50 receives a speed command 512 of the belt motor 56 and a sampling rate command 511 of the image from the CPU 51 and performs rotational control of the belt motor 56, and image sampling, in response to the designated commands.


The DSP 50 possesses a target identifier 501, which identifies the target pattern from the scanned image, and a position detector 502, which detects the position of the target pattern that the target identifier 501 identifies. The DSP 50 also possesses a CMOS I/O controller 504, which performs handling of signals between the DSP 50 and the CMOS sensor 34, a speed computation unit 506, which derives the speed at the surface of the feeding belt 5, and a motor controller 507, which controls the rotational speed of the belt motor 56.


The CPU 51 directs the motor controller 507 concerning the rotational speed of the belt motor 56, which drives the conveyance of the feeding belt 5 at the rotational speed thus directed. The directed rotational speed corresponds to an assumption speed. A sampling timing controller 503 informs the I/O controller 504 of a sampling timing W0, according to the sampling rate command 511 that has been issued by the CPU 51. A Ctrl signal generator 5041 of the I/O controller 504 outputs each respective control signal, i.e., /CS and CLOCK, to the CMOS sensor 34, at the informed sampling timing W0. A column and row address 505, which is determined by a segment designation section 5040, is also outputted to the CMOS sensor 34.


Following is a description of the segment designation section 5040.


An address designation section 5042 outputs an address used for an initial determination of the target pattern as the column and row address.



FIGS. 11A through 11D depict views explaining an example of the movement of an image being scanned by the CMOS sensor 34.



FIGS. 11A through 11D treat as the target pattern a pattern that is included in an image that is detected in segment S11. The barycentric position of the pattern in FIG. 11A, in (column, row) coordinates, is (1, 4). FIG. 11A depicts the image that the CMOS sensor 34 scans at the time t1, which is buffered in an image buffer 5010 of the target identifier 501 in accordance with a sampling timing signal from the I/O controller 504 and target area information W1. A designatable area W2 of the pattern that is buffered in the image buffer 5010 is buffered in a target image buffer 5011 as a target pattern 999. A pattern matching section 5012 performs a pattern match between a pattern W3, which is buffered in the image buffer 5010, and a target pattern W4, which is buffered in the target image buffer 5011, at the next sampling timing. An evaluation is made thereby as to whether or not the pattern W3 scanned at the next sampling timing includes the target pattern. If the target pattern cannot be identified, a comparison is made again in the pattern matching section 5012 by shifting the data in the image buffer 5010 one pixel at a time, as the sampling pattern W3. The process is repeated, with pattern matching performed, until either a match with the target pattern is found, or a predetermined number of iterations has occurred; an error may be flagged if no match has been found after the predetermined number of iterations. If the target pattern can thus be identified, the position detector 502 is notified of address information W5 of the target pattern.
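The pattern-matching step just described may be illustrated by the following hedged sketch, which shifts a small target pattern across the newly buffered image one pixel at a time and accepts the first sufficiently close match; the function name, the sum-of-absolute-differences criterion, and the threshold are assumptions made for illustration, not elements of the disclosure.

```python
import numpy as np

def match_target(image, target, max_iterations=32, threshold=50):
    """Shift `target` across `image` one pixel at a time along the moving
    direction (columns) and return the (column, row) of the first window whose
    sum of absolute differences falls below `threshold`, or None on error."""
    t_rows, t_cols = target.shape
    target = target.astype(int)
    for col in range(min(max_iterations, image.shape[1] - t_cols + 1)):
        for row in range(image.shape[0] - t_rows + 1):
            window = image[row:row + t_rows, col:col + t_cols].astype(int)
            if np.abs(window - target).sum() <= threshold:
                return col, row          # address information of the target
    return None                          # no match within the iteration limit
```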


The position detector 502 is configured of a barycentric position computation unit 5020 and a barycentric coordinate detector 5021, and notifies the speed computation unit 506 of the address information W5, and barycentric coordinates W7, of the target pattern.


The example depicted in FIG. 11A denotes a notification of a position (1, 4), in (column, row) format, of a barycenter 3000 from the address information (0 to 2, 3 to 5) of the target pattern 999, i.e., a 3×3 pixel region. While the barycenter 3000 is treated as the central coordinates (centroid) of the target pattern 999 according to the embodiment, the present invention is not limited thereto; it would be permissible, for example, to treat the center of the density in the pattern as the barycenter instead.
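As a hedged illustration of the two barycenter definitions mentioned above, the sketch below computes both the central coordinates of a pattern's address range, reproducing the (1, 4) example of FIG. 11A, and a density-weighted alternative; the function names are hypothetical.

```python
import numpy as np

def central_barycenter(col_range, row_range):
    """Central coordinates of the target pattern's address range; the 3x3
    region at columns 0-2 and rows 3-5 of FIG. 11A gives (1, 4)."""
    return (sum(col_range) / 2, sum(row_range) / 2)

def density_barycenter(pattern, col_origin=0, row_origin=0):
    """Alternative definition: weight each pixel address by its tone value."""
    pattern = pattern.astype(float)
    rows, cols = np.indices(pattern.shape)
    total = pattern.sum()
    return ((cols * pattern).sum() / total + col_origin,
            (rows * pattern).sum() / total + row_origin)

print(central_barycenter((0, 2), (3, 5)))   # -> (1.0, 4.0)
```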


The speed computation unit 506 stores the position (1, 4), in (column, row) format, of the barycenter 3000 in memory as a first sampling barycenter position d1. A surface speed V21 is computed from the position of the barycenter 3000 at a second sampling, which is derived in a similar manner, at the next sampling.



FIG. 11B depicts an image that is detected in the second sampling, at the time t2, with a position d2 of the barycenter 3000 being (6, 4), again, in (column, row) format. In such a circumstance, it is possible to derive a moving speed W8 as follows:






W8 = Δd/Δt = (d2 − d1)/(t2 − t1) = (6 − 1)/(t2 − t1) = 5/(t2 − t1)


If the present target pattern 999 is further identified in the direction of the Y-axis hereinafter, the position d2 of the barycenter 3000 in the second sampling is treated as the position d1 of the barycenter 3000 in the first sampling, where d1≦d2. If a new target pattern is detected within the segment S11, the positions d1 and d2 of the barycenter are both reset to zero, and the foregoing process is repeated. The target pattern 999 is updated when it is possible to predict that the maximum numerical value of the column in the moving direction, 32 in the present instance, will be exceeded.


According to the embodiment, the target pattern is updated when the target pattern 999 exceeds a column address of (31−α), where α is the size of the error in speed plus the size of the target pattern. The target pattern may thus be updated when it is predicted that it will not be possible to identify the target pattern in any of the segments S14, S24, S34, or S44. According to the embodiment, if the speed is 5 and α is 4, the column address is 27, i.e., 31−4. Accordingly, it will be possible to contiguously detect the speed without updating the target pattern 999 for up to 27/5, or 5.4, sampling intervals, i.e., until a time t5.
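The speed computation and the (31−α) update criterion described above are summarized by the following hedged sketch; the function names are hypothetical, and the speed is expressed, as in the examples, in pixels per sampling interval.

```python
MAX_COLUMN = 31   # highest column address of the 32-pixel-wide detection area

def moving_speed(d1, d2, t1, t2):
    """W8 = (d2 - d1) / (t2 - t1); with one sampling interval between t1 and
    t2 this is the distance in pixels, e.g. (6 - 1) / 1 = 5 in FIG. 11B."""
    return (d2 - d1) / (t2 - t1)

def must_update_target(column, speed, alpha):
    """Predict whether the target pattern must be renewed before the next
    sampling; alpha is the speed error margin plus the target pattern size,
    so the usable range ends at column (31 - alpha), e.g. 27 when alpha is 4."""
    return column + speed > MAX_COLUMN - alpha

print(moving_speed(1, 6, 0, 1))                         # -> 5.0
print(must_update_target(column=6, speed=5, alpha=4))   # -> False
```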


Following is a description of a segment designation process.



FIG. 11B depicts a surface pattern on the feeding belt 5 at the time t2, when the moving speed W8 that is outputted by the speed computation unit 506 is 5, i.e., the target pattern is moved by five pixels in the column direction between the time t1 and the time t2. According to the embodiment, the position of the barycenter 3000 at the time t2 is (6, 4), again, in (column, row) format, with the barycenter 3000 being positioned upon the segment S11, which must be pre-selected if the target pattern 999 is to be identified at the time t2. Consequently, the segment designation section 5040 performs the process of predicting the position of the target pattern at the next sampling.


The segment designation section 5040 of the I/O controller 504 is notified of the moving speed W8. The moving speed W8 also corresponds to an assumption speed. A segment computation unit 5043 determines the segment wherein the target pattern 999 is positioned at the next timing from the moving speed W8 and the position W7 of the barycenter 3000 of the target pattern.


Given, in FIG. 11B, that the moving speed W8 is 5, and the position W7 of the barycenter 3000 at the time t1 is (1, 4), in (column, row) format, it is predicted that the position W7 of the barycenter 3000 at the time t2, i.e., the next sampling timing, will be (5+1, 4), again, in (column, row) format. The address 505 thus predicted, i.e., (6, 4), again, in (column, row) format, is sent from the address designation section 5042 to the CMOS sensor 34. The segment S11, which contains the address (6, 4), is thus made effective, allowing detection of the target pattern 999 therein.
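A hedged sketch of this prediction step follows: the next barycenter position is extrapolated from the assumption speed, and the segment containing that position is selected; the names are hypothetical and the segment labeling matches FIG. 8.

```python
SEGMENT_SIZE = 8

def predict_next_barycenter(barycenter, speed):
    """Extrapolate the barycenter to the next sampling from the assumption
    speed, e.g. (1, 4) moving at speed 5 is predicted at (6, 4)."""
    column, row = barycenter
    return column + speed, row

def effective_segment(position):
    """Label of the segment that contains the predicted position."""
    column, row = position
    return f"S{row // SEGMENT_SIZE + 1}{column // SEGMENT_SIZE + 1}"

predicted = predict_next_barycenter((1, 4), 5)
print(predicted, effective_segment(predicted))   # -> (6, 4) S11
```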


While the configuration in FIG. 14 derives the next segment to be examined from the address information W5 of the target pattern 999, it would be permissible instead to compute the next segment to be examined from the speed command 512 that is sent by the CPU 51, a speed correction issued by the motor controller 507, or a combination thereof. The speed designated by the speed command 512 and the corrected speed also correspond to an assumption speed. Repeated execution of the foregoing process allows real-time detection of the surface speed of the feeding belt 5.


The moving speed W8 of the feeding belt 5 on the image forming apparatus 1000 switches according to the type, i.e., the thickness, of the sheet of printing paper, in order to improve image quality, including the fixing characteristic of the image thereupon, i.e., the thicker the sheet of printing paper, the slower the speed. In the configuration according to the embodiment, the speed of the feeding belt 5 is detected across a wide range, from slow to fast speeds. FIG. 11B depicts a comparatively slow moving speed, i.e., a very thick sheet of printing paper mode, vis-à-vis the detectable area of one segment of the CMOS sensor 34. By contrast, FIG. 11C depicts a medium moving speed of the feeding belt 5, i.e., a thick sheet of printing paper mode, and FIG. 11D, a fast moving speed of the feeding belt 5, i.e., a typical sheet of printing paper mode.


Medium Moving Speed (Thick Sheet of Printing Paper Mode)



FIG. 11C depicts a pattern upon the feeding belt 5 at the time t2 when the moving speed W8 thereupon is 10. In FIG. 11C, the target pattern 999 at the time t2 is predicted to be positioned at address (1+10, 4), again, in (column, row) format, as per the foregoing, and segment S12 is made effective. The position of the barycenter 3000 that is actually obtained is (12, 4), again, in (column, row) format, and the speed detection value, i.e., the distance, is 11, i.e., 12−1. A speed detection error of 1 is thus detected. The speed detection error is applied to a correction in the motor controller 507 of the rotational speed of the belt motor 56, i.e., the speed command 512 that is issued from the CPU 51 reduces the speed thereof. Thus, the position of the barycenter 3000 of the target pattern 999 at the next sampling timing, t3, is predicted to be (12+10−1, 4), or (21, 4), again, in (column, row) format, and the segment S13 that includes the position (21, 4) is made effective. If the rotational speed of the belt motor 56 is not immediately corrected, it is permissible to predict that the position of the barycenter 3000 is (12+10, 4), again, in (column, row) format. At a next sampling timing, t4, the process commences with the updating of the target pattern 999, because the target pattern is predicted either to exceed the maximum column value of 31 or to leave only a narrow margin within the effective segment.


Fast Moving Speed (Plain Sheet of Printing Paper Mode)



FIG. 11D depicts a pattern upon the feeding belt 5 at the time t2 when the moving speed W8 thereupon is 28. In FIG. 11D, the target pattern 999 at the time t2 is predicted to be positioned at address (1+28, 4), again, in (column, row) format, as per the foregoing, and segment S14 is made effective. The position of the barycenter 3000 that is actually obtained is (28, 4), again, in (column, row) format, and the speed detection value, i.e., the distance, is 27, i.e., 28−1. The speed error is thus −1. The speed error is applied to a correction in the motor controller 507 of the rotational speed of the belt motor 56, i.e., the speed command 512 that is issued from the CPU 51 increases the speed thereof by 1. Thus, the target pattern 999 is predicted to exceed the maximum column value of 31 at the next sampling timing, t3; the next effective segment will be S11, and the process commences with the updating of the target pattern 999.
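The speed-error handling of FIGS. 11C and 11D may be summarized by the following hedged sketch; the function names are hypothetical, and whether the correction is reflected in the next prediction is treated as an option, as in the description above.

```python
def speed_detection_error(measured, previous, assumed_speed):
    """Detected distance minus assumed distance.

    FIG. 11C: measured 12, previous 1, assumed speed 10 -> error +1.
    FIG. 11D: measured 28, previous 1, assumed speed 28 -> error -1.
    """
    return (measured - previous) - assumed_speed

def next_prediction(measured, assumed_speed, error, correction_applied=True):
    """Barycenter column predicted for the next sampling; if the motor speed
    has already been corrected the error is subtracted, otherwise ignored."""
    return measured + assumed_speed - (error if correction_applied else 0)

err = speed_detection_error(12, 1, 10)     # -> 1
print(next_prediction(12, 10, err))        # -> 21, contained in segment S13
```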



FIGS. 12A through 12C depict views illustrating examples of a circumstance wherein the target pattern 999 spans two segments.



FIG. 12A depicts a pattern upon the feeding belt 5 at the time t2 when the speed W8 of the transition command thereupon is 15, given the foregoing configuration of one segment comprising 8×8 pixels. The target pattern 999 spans segments S12 and S13 in the present circumstance. Identifying the target pattern thus requires making two segments effective simultaneously. Doing so, however, raises the possibility that the target pattern 999 would be lost, owing to the relation between the scope of the detection area and the processing speed. Following are two examples of processing in such a circumstance:


1. Changing the Area that Determines the Target Pattern

In the first example wherein the target pattern 999 is predicted to cross over into the next segment, the detection area of the target pattern is changed.


As depicted in FIG. 12B, for example, the area of determination is changed from an area of the target pattern 999 of (0 to 2, 3 to 5), in (column, row) format, to an area of a target pattern 999b of (4 to 6, 4 to 6), again, in (column, row) format. Thus, the target pattern 999b at the time t2 may be evaluated as being positioned only within the segment S13 at the time t2 when the speed W8 of the transition command is 15, as depicted in FIG. 12C. The area that determines the target pattern 999b is derived by counting backwards to the time t1, using the speed W8 of the transition command, from the initially predicted position (16, 4) of the target pattern 999 at the time t2, which spans the segments. It is presumed that a criterion for evaluating whether or not the target pattern 999 crosses over into the next segment includes a margin of a plurality of pixels.
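A hedged sketch of the boundary check behind this first example follows; the one-pixel margin and the helper name are assumptions.

```python
SEGMENT_SIZE = 8
PATTERN_SIZE = 3   # 3x3 target pattern used in the examples

def spans_segments(start_col, size=PATTERN_SIZE, margin=1):
    """True if a pattern occupying columns start_col .. start_col + size - 1,
    padded by a small margin, would straddle a segment boundary."""
    first_seg = (start_col - margin) // SEGMENT_SIZE
    last_seg = (start_col + size - 1 + margin) // SEGMENT_SIZE
    return first_seg != last_seg

# FIG. 12A: the pattern predicted at columns 15-17 at time t2 straddles S12/S13.
print(spans_segments(15))   # -> True
# FIG. 12B/12C: shifting the determination area at t1 from columns 0-2 to 4-6
# moves the t2 prediction to columns 19-21, wholly inside segment S13.
print(spans_segments(19))   # -> False
```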


2. Overlapping Segment Configurations

Following is a description of the second example, a configuration wherein the segments overlap with each other.


As depicted in FIG. 13A, the number of pixels that configure each respective segment is 8×8, as per the preceding examples. Segment S11 is the solid line in FIG. 13A, (0 to 7, 0 to 7), in (column, row) format, segment S12 is the dashed line (4 to 11, 0 to 7), again, in (column, row) format, segment S13 is the solid line (8 to 15, 0 to 7), again, in (column, row) format, etc. Each segment thus overlaps halfway, i.e., by four pixels, with its predecessor in the column orientation, such that the number of segments is increased from S11 through S17, and S21, etc., through S47.



FIG. 13B depicts a surface pattern on the feeding belt 5 at the time t1 in such a circumstance. The barycenter of the target pattern 999 is determined to be (1, 4), in (column, row) format.



FIG. 13C depicts a surface pattern on the feeding belt 5 at the time t2 when the speed W8 of transition command is 15. It is predicted in FIG. 13C that the position of the barycenter of the target pattern 999 at the time t2 is (1+15, 4), in (column, row) format.


As depicted in FIG. 13C, making the segment S14 effective, at (12 to 19, 0 to 7), in (column, row) format, wherein the target pattern 999 does not span a plurality of segments, allows definite identification of the target pattern 999.
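A hedged sketch of how such an overlapping segment might be selected follows; the half-segment step and the indexing are taken from the example above, while the helper name and the linear search are assumptions.

```python
SEGMENT_SIZE = 8
SEGMENT_STEP = 4    # overlapping segments start every half segment
MAX_COLUMN = 31
PATTERN_SIZE = 3

def overlapping_segment_for(start_col, size=PATTERN_SIZE):
    """Index (S11 -> 0, S12 -> 1, ...) of the first overlapping segment whose
    column range fully contains the predicted pattern, or None if none does."""
    end_col = start_col + size - 1
    index, seg_start = 0, 0
    while seg_start + SEGMENT_SIZE - 1 <= MAX_COLUMN:
        if seg_start <= start_col and end_col <= seg_start + SEGMENT_SIZE - 1:
            return index
        index += 1
        seg_start += SEGMENT_STEP
    return None

# FIG. 13C: a pattern predicted at columns 15-17 fits wholly inside the fourth
# overlapping segment, columns 12-19, i.e. segment S14.
print(overlapping_segment_for(15))   # -> 3
```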


While it is presumed according to the embodiment that the scope of overlap between segments is half a segment in the column orientation, the present invention is not restricted thereto in either the orientation or the scope of the overlap.


Following is a description of a process of the DSP 50 and the CPU 51 controlling the image forming apparatus, according to the embodiment.



FIG. 15 is a flowchart explaining a process of the DSP 50 identifying a target pattern, according to the embodiment, which corresponds to the foregoing process carried out by the target identifier 501. A program that executes the process is stored in a program memory (not shown) of the DSP 50. First is a description of the variables that are used in the process. F denotes a sampling initialization flag, which signifies that the target pattern is not updated when set to zero, and that the target pattern is updated when set to 1. The variable j denotes the address that is being sampled in the column orientation, i.e., the Y-axis, and seg denotes the maximum value of the column address, 32 in the foregoing example. V denotes the speed of transition that is commanded in the column orientation, Δt denotes an interval for sampling, and α denotes the margin of the distance. These variables and the flag are stored in a RAM (not shown) of the DSP 50.


The target pattern is updated in step S101. The variable j of the sampling address is set to zero, and the flag F that denotes whether or not the pattern has been updated is set to 1. The process proceeds to step S102, wherein the address of the barycenter of the target pattern in the column orientation is set to the sampling address j. The process proceeds to step S103, wherein the next sampling address j is computed and predicted from a speed command value v (512) and a sampling rate command value Δt (511). In the present circumstance, the formula is j=j+(v/Δt). The process proceeds to step S104, wherein the next sampling address j, which was computed in the previous step S103, is evaluated as to whether or not it falls within the detection range of the CMOS sensor 34, i.e., whether it is less than the upper bound of the address (seg). As previously described, it would be permissible to set the margin α to take the speed error or other factors into account, i.e., j<(seg−α). If the result of the evaluation in step S104 is in the negative, i.e., j>(seg−α), it signifies that the target pattern 999 is transitioning outside the range of the CMOS sensor 34, causing the process to return to step S101, and repeat the process described therein of determining the target pattern 999.


If the result of the evaluation in step S104 is in the affirmative, i.e., j<(seg−α), the process proceeds to step S105, wherein the target pattern 999 at the next sampling timing is determined to be within the detectable range of the CMOS sensor 34. The pattern update flag F is set to zero, to signify that the target pattern is not updated. The process then returns to step S102, wherein the computed next sampling address is set. The target identifier 501 thus repeats the execution of the foregoing process.
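The flow of steps S101 through S105 may be sketched as follows, under the assumption that a hypothetical sample_barycenter() callback returns the column address of the target pattern's barycenter at each sampling; the prediction formula is the one given above, and the generator form is an illustrative choice.

```python
def identify_target(sample_barycenter, v, delta_t, seg=32, alpha=4):
    """Generator sketching steps S101 through S105; yields (F, j) per sampling.

    sample_barycenter() is a hypothetical callback returning the column
    address of the target pattern's barycenter in the current scan; v and
    delta_t stand for the speed command 512 and sampling rate command 511.
    """
    while True:
        f = 1                               # S101: target pattern updated, F = 1
        j = sample_barycenter()             # S102: barycenter column of the target
        while True:
            yield f, j
            predicted = j + (v / delta_t)   # S103: predict the next sampling address
            if not (predicted < seg - alpha):
                break                       # S104: outside the range, back to S101
            f = 0                           # S105: target will not be updated
            j = sample_barycenter()         # S102: barycenter at the next sampling
```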



FIG. 16 is a flowchart explaining a segment designation process of the DSP 50, according to the embodiment. The process corresponds to the process of the segment designation section 5040. The program that executes the process is stored in the program memory (not shown) of the DSP 50. First is a description of the variables that are used in the process. F denotes the sampling initialization flag. Δt denotes an interval for sampling, dn denotes the barycenter position of the currently detected target pattern, i.e., the column address, and dn−1 denotes the barycenter position of the target pattern detected in the previous iteration. Δd denotes the distance, Δv denotes the moving speed that is detected in the column orientation, and column is the address of the barycenter of the target pattern in the column orientation. These variables and the flag are stored in the RAM (not shown) of the DSP 50.


In step S201, the CMOS sensor 34 scans the image data, the target pattern is detected therein, and the position of the barycenter thereof is set to the target position, i.e., dn=column address. The process then proceeds to step S202, wherein an evaluation is made as to whether or not the target pattern 999 has been updated, in accordance with the flag F that is set in the flowchart in FIG. 15. If F=1, the target pattern 999 has been updated, and the process proceeds to step S206, whereupon the position detection value dn−1 of the previous iteration is voided, and no detection of the moving speed is performed. The process proceeds to step S205, with the moving speed Δv left unchanged. In step S205, the sampling target position dn−1 of the previous iteration is set to the latest column address. The process then returns to step S201, wherein the next sampling is performed.


If, on the other hand, the flag F in step S202 is zero, then the target pattern 999 is not updated, and thus, the process proceeds to step S203, wherein the distance Δd is computed from the position dn of the barycenter of the current target pattern, which is derived in step S201, and the position dn−1 of the barycenter of the previous target pattern, i.e., Δd=dn−dn−1. In step S204, the moving speed Δv of the feeding belt 5 is detected from the distance Δd and the sampling interval Δt, which is instructed by the CPU 51, i.e., Δv=Δd/Δt. In step S205, the position dn of the barycenter of the current target pattern is stored as the position dn−1 of the barycenter of the previously sampled target pattern, i.e., dn−1=dn. The process then returns to step S201, wherein the next sampling is performed.
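A hedged sketch of steps S201 through S206, written to consume the (F, column address) pairs produced by the previous sketch, is shown below; the generator form and the parameter names are illustrative assumptions.

```python
def detect_belt_speed(samples, delta_t):
    """Sketch of steps S201 through S206; `samples` is an iterable of
    (F, column address) pairs such as the previous sketch yields, and the
    detected moving speed is yielded whenever the target was not renewed."""
    d_prev = None
    for f, d_n in samples:                  # S201: barycenter column of the target
        if f == 0 and d_prev is not None:   # S202: target pattern not renewed
            delta_d = d_n - d_prev          # S203: distance moved since last sample
            yield delta_d / delta_t         # S204: moving speed of the feeding belt
        # S206 (F = 1): the previous position is voided, no speed is detected
        d_prev = d_n                        # S205: keep the latest column address
```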


As described above, the position of the target pattern 999 is detected, the distance is derived, and the speed of the feeding belt 5 is detected, with the process repeated only if the target pattern has not been updated.



FIG. 17 is a flowchart explaining a process of the CPU 51 of the image forming apparatus detecting the conveyance speed of the belt, according to the embodiment. A program that executes the process is stored in a program memory (not shown) of the CPU 51. First is a description of the variables that are used in the process. P denotes an ID that denotes the type of the sheet of printing paper, while v denotes the average speed in the column orientation, which is outputted as the speed command 512. Δt denotes the sampling interval, vp denotes the moving speed in the column orientation, i.e., the Y-axis, as per the type of the sheet of printing paper, while tp denotes the sampling rate as per the type of the sheet of printing paper. Δv denotes the moving speed that is detected in the column orientation. k denotes a speed correction coefficient, and vd denotes a speed correction command value in the column orientation. These variables are stored in a RAM (not shown) of the CPU 51.


In step S301, P is set to the printing paper ID that denotes the type of the sheet of printing paper, i.e., zero for regular printing paper, 1 for thicker printing paper, or 2 for extra-thick printing paper. The process proceeds to step S302, wherein the value v of the moving speed of the feeding belt 5 and the value Δt of the sampling rate command are determined, where v=vp, and Δt=tp. In step S303, an evaluation is performed as to whether or not the DSP 50 has commenced the speed detection process and the sampling timing has arrived. If the sampling timing has arrived, the process proceeds to step S304, wherein the DSP 50 obtains the speed detection value Δv. It would be permissible to store the speed detection value Δv thus obtained in the RAM. The process proceeds to step S305, wherein an evaluation is made as to whether or not to perform the speed correction. Details are omitted herein, although it would be permissible to perform the speed correction whenever the speed detection is performed, or instead to derive the acceleration and perform the speed correction when the acceleration is greater than or equal to a predetermined threshold.


If the speed correction is not performed, the process returns to step S303, and waits for the sampling timing. If the speed correction is performed in step S305, the process proceeds to step S306, wherein the corrected speed is computed from the speed correction coefficient k and the obtained speed detection value Δv, i.e., vd=k×(v−Δv). The DSP 50 is notified via the speed command vd of the corrected speed thus computed, whereupon the process returns to step S303, and waits for the sampling timing. The CPU 51 repeats the process until the type of the sheet of printing paper is changed.
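The CPU-side loop of steps S301 through S306 may be sketched as follows; the callbacks, the tables mapping the printing paper type to vp and tp, and the criterion used at step S305 are hypothetical stand-ins for the units described above.

```python
def cpu_speed_control(paper_id, get_detected_speed, send_speed_command,
                      speed_table, rate_table, k=0.5):
    """Sketch of steps S301 through S306.

    paper_id (P): 0 plain, 1 thick, 2 extra-thick printing paper.
    get_detected_speed() blocks until the sampling timing and returns dv.
    send_speed_command(value) notifies the DSP of a speed command.
    speed_table / rate_table map the paper type to vp and tp; k is the
    speed correction coefficient. All of these names are hypothetical.
    """
    v = speed_table[paper_id]               # S302: moving speed command v = vp
    delta_t = rate_table[paper_id]          #        sampling rate command = tp
    send_speed_command(v)
    while True:                             # repeated until the paper type changes
        dv = get_detected_speed()           # S303/S304: wait for and read dv
        if dv != v:                         # S305: one possible correction criterion
            vd = k * (v - dv)               # S306: corrected speed command value
            send_speed_command(vd)
```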


Second Embodiment


FIG. 18 depicts a cutaway conceptual view illustrating a configuration of an image forming apparatus according to a second embodiment of the present invention. The figure depicts an instance of an image forming apparatus that employs an intermediate transfer system, i.e., an intermediate transfer belt.


An image forming apparatus 301 forms four electrostatic latent images, i.e., yellow (Y), magenta (M), cyan (C), and black (Bk), in accordance with a laser light generated by a scanner unit 311, on a photosensitive drum 303. The electrostatic latent image corresponding to each respective color is developed as a toner image by a toner corresponding to each respective color of a developer unit 306. The developer unit 306 for each respective color is mounted in a rotary unit 307, and possesses a development sleeve 304, which develops the electrostatic latent images on the photosensitive drum 303, as well as a controller 305, which delivers the toner to the development sleeve 304 in a uniform fashion.


The toner image that is formed on the photosensitive drum 303 is transferred to an intermediate transfer belt 320 via a primary transfer portion T1, over a primary transfer roller 314. The toner image that is thus transferred to the intermediate transfer belt 320 is conveyed to a secondary transfer portion T2.


The sheet of printing paper P that is contained in a printing paper feed unit 309 is conveyed to the secondary transfer portion T2 via a pick-up roller 330 and a printing paper feed roller 329, and the toner image on the intermediate transfer belt 320 is transferred to the sheet of printing paper P via a secondary transfer unit 308. The intermediate transfer belt 320 rolls around a drive roller 321, a tension roller 322, which is positioned opposite the secondary transfer unit 308, and an idler roller 323. A drive motor (not shown) that is linked to the drive roller 321 drives the intermediate transfer belt 320 in the direction of the arrow shown in the drawing.


The secondary transfer portion T2 transfers the toner image to the sheet of printing paper P, which is then conveyed to a fixer unit 310, wherein heat and pressure are applied to the toner image to fix it to the sheet of printing paper P, which is then discharged from the apparatus via a printing paper path 328. The fixer unit 310 comprises a fixing roller 310a, which houses a heater, and a pressure roller 310b. Reference numeral 312 is a scanning sensor, which scans the image on the intermediate transfer belt 320.


As per the description according to the first embodiment, the image forming apparatus that comprises the intermediate transfer system, i.e., the intermediate transfer belt, places the image sensor unit 312 that comprises the CMOS sensor 34 at a position in opposition to the intermediate transfer belt 320. The sensor 312 identifies the toner image that is formed on the intermediate transfer belt 320, and the DSP 50 derives the relative speed of the intermediate transfer belt 320. Controlling the rotation of the drive motor (not shown), which drives the conveyance of the intermediate transfer belt 320, allows the intermediate transfer belt 320 to be maintained at a constant rotational speed. Doing so in turn allows implementing the image forming apparatus 301 that comprises the intermediate transfer system that has a low degree of color misalignment.


The detection of the moving speed of the intermediate transfer belt 320 using the CMOS sensor 34, as well as the method of correcting the speed of the conveyance drive motor of the intermediate transfer belt 320, may be implemented in a manner similar to the feeding belt 5 according to the first embodiment, and thus, a detailed description thereof is omitted herein. While FIG. 18 depicts a rotary configuration, a tandem configuration would also be similarly applicable thereto.


The foregoing configuration allows detecting the moving speed of the intermediate transfer belt 320 with a high degree of accuracy, without losing the target pattern even if the intermediate transfer belt 320 has a high moving speed. It is thus possible to correct the rotational speed of the conveyance drive motor of the belt, thereby maintaining a given moving speed of the belt 320.


While the determination of the target pattern according to the embodiments is based on an image of the segment S11, the present invention is not restricted thereto. It would be permissible to use an image that is shifted along the X-axis, i.e., in the row orientation, from segment S11, i.e., segment S21, S31, etc., to make the target pattern determination, if segment S11 contains only a non-target image, i.e., an image with little change in density.


While detection of the position of the target pattern is fixed along the X-axis, i.e., the row orientation, according to the embodiments, it goes without saying that segment switching may also be performed by combining the X and Y axes, for example, in a circumstance wherein the distance of the X-axis component is determined in advance.


While a segment is fixed at 8×8 pixels according to the embodiments, it would be permissible to vary the configuration of the segment to be such as 2×4 pixels or 6×6 pixels. It would also be permissible to change the configuration of the segment each time the surface image is scanned.


While the drive unit and the control of the belt motor 56 are presumed to be a DC motor servo control according to the embodiments, it would be permissible to employ a stepping motor to perform such control as well.


According to the embodiments, it would be possible to reduce the number of pixels handled per sampling, and to detect the surface image on either the feeding belt or the intermediate transfer belt with a high sampling rate. Doing so allows detecting the surface speed of the belt with a high degree of accuracy.


The ability to detect the surface image in a wide region on the belt without reducing the sampling rate allows detecting the target pattern without missing the target pattern outside the detection frame, even at a fast detection speed. It is thus possible to detect the moving speed of the belt that is being driven for conveyance at high speed, without having to reduce accuracy in detection.


The ability to provide real-time feedback of the moving speed of the belt thus detected to the speed control of the drive motor makes it possible to maintain the moving speed of the belt as constant as possible, irrespective of such conditions as the internal temperature of the apparatus, which in turn facilitates minimization of misalignment in the image or the color therein.


While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.


This application claims the benefit of Japanese Patent Application No. 2006-249954, filed Sep. 14, 2006, which is hereby incorporated by reference herein in its entirety.

Claims
  • 1. An image forming apparatus, comprising: an image carrier configured to carry an image being driven by a driving source; an image reading unit configured to read the image upon said image carrier, wherein a reading area of said image reading unit is segmented into a plurality of regions, and capable of reading a segmented image on each of the plurality of regions; a sampling unit configured to sample an output of said image reading unit at a predetermined sampling rate; a decision unit configured to decide first and second regions of the plurality of regions for respectively being used in first and second samplings; a position detection unit configured to detect positions of a predetermined pattern in the first region at the first sampling and the predetermined pattern in the second region at the second sampling; and a computation unit configured to compute a moving speed of said image carrier based on the predetermined sampling rate and the positions detected by said position detection unit, wherein said decision unit decides the second region based on an assumption speed of said image carrier and the predetermined sampling rate.
  • 2. The image processing apparatus according to claim 1, further comprising a drive control unit configured to control the speed of the driving source in accordance with the moving speed computed by said computation unit.
  • 3. The image processing apparatus according to claim 1, further comprising a photosensitive member configured to form a toner image, wherein said image carrier is an intermediate transfer member to which the toner image formed on said photosensitive member is transferred.
  • 4. The image processing apparatus according to claim 1, further comprising a photosensitive member configured to form a toner image, wherein said image carrier is a transfer carrier on which a sheet, to which the toner image formed on the photosensitive member is transferred, is carried and conveyed.
  • 5. The image processing apparatus according to claim 1, wherein said image reading unit includes an area sensor, and in the reading area of said image reading unit, a plurality of segments, each of which consists of a plurality of pixels, are arranged in two dimensions.
Priority Claims (1)
Number Date Country Kind
2006-249954 Sep 2006 JP national