This patent application is based on and claims priority pursuant to 35 U.S.C. § 119(a) to Japanese Patent Application Nos. 2016-145701, filed on Jul. 25, 2016, 2017-131460, filed on Jul. 4, 2017, and 2017-137301, filed on Jul. 13, 2017, in the Japan Patent Office, the entire disclosure of each of which is hereby incorporated by reference herein.
This disclosure relates to a liquid discharge apparatus, a liquid discharge system, and a liquid discharge method.
There are image forming methods that include discharging ink from a print head (so-called inkjet methods). To improve the quality of images formed on recording media, such image forming methods include, for example, adjusting the position of the print head relative to the recording media.
According to an embodiment of this disclosure, a liquid discharge apparatus includes a head to discharge liquid onto a conveyed object, at least one light source to irradiate the conveyed object with light having a high relative intensity in a range of wavelength in which a relative reflectance of the liquid is high, and a detector including at least one optical sensor to perform imaging of the conveyed object being irradiated by the at least one light source, to generate image data. The detector is configured to generate a detection result based on the image data. The detection result includes at least one of a conveyance amount of the conveyed object and a conveyance speed of the conveyed object.
According to another embodiment, a system includes the above-described liquid discharge apparatus and a host configured to input image data and control data to the liquid discharge apparatus.
According to another embodiment, a liquid discharge apparatus includes a head to discharge liquid onto a conveyed object. The head moves in an orthogonal direction orthogonal to a conveyance direction of the conveyed object. The liquid discharge apparatus further includes a first light source disposed upstream from the head in the conveyance direction, to irradiate the conveyed object, a second light source disposed downstream from the head in the conveyance direction, to irradiate the conveyed object with light having a high relative intensity in a range of wavelength in which a relative reflectance of the liquid is high, and a detector. The detector includes a first optical sensor configured to perform imaging of the conveyed object being irradiated by the first light source, to generate first image data, and a second optical sensor configured to perform imaging of the conveyed object being irradiated by the second light source, to generate second image data. The detector is configured to generate a detection result based on the first image data and the second image data. The detection result includes at least one of a conveyance amount of the conveyed object and a conveyance speed of the conveyed object.
According to another embodiment, a liquid discharging method includes discharging liquid onto a conveyed object, irradiating the conveyed object with light having a high relative intensity in a range of wavelength in which a relative reflectance of the liquid is high, generating image data of an irradiated portion of the conveyed object, and generating a detection result based on the image data, the detection result including at least one of a conveyance amount of the conveyed object and a conveyance speed of the conveyed object.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
The accompanying drawings are intended to depict embodiments of the present invention and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted.
In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner and achieve a similar result.
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views thereof, and particularly to
The suffixes Y, M, C, and K attached to each reference numeral indicate only that components indicated thereby are used for forming yellow, magenta, cyan, and black images, respectively, and hereinafter may be omitted when color discrimination is not necessary.
General Configuration
Examples of the conveyed object include recording media, such as a web 120. In the illustrated example, the image forming apparatus 110 includes a roller 130 and the like to convey the web 120, serving as a recording medium, and discharges liquid onto the web 120 to form an image thereon. The web 120 is a so-called continuous sheet. That is, the web 120 is, for example, paper in the form of a roll that can be reeled. The image forming apparatus 110 is a so-called production printer. The description below concerns an example in which the roller 130 adjusts the tension of the web 120 and conveys the web 120 in a conveyance direction 10. Hereinafter, unless otherwise specified, “upstream” and “downstream” mean those in the conveyance direction 10. A direction orthogonal to the conveyance direction 10 is referred to as an orthogonal direction 20. In the illustrated example, the image forming apparatus 110 is an inkjet printer to discharge four color inks, namely, black (K), cyan (C), magenta (M), and yellow (Y) inks, to form an image on the web 120.
Each liquid discharge head unit 210 discharges the ink onto the web 120 conveyed in the conveyance direction 10. The image forming apparatus 110 includes two pairs of nip rollers, a roller 230, and the like, to convey the web 120. One of the two pairs of nip rollers is a first nip roller pair NR1 disposed upstream from the liquid discharge head units 210 in the conveyance direction 10. The other is a second nip roller pair NR2 disposed downstream from the first nip roller pair NR1 and the liquid discharge head units 210 in the conveyance direction 10. Each nip roller pair rotates while nipping the conveyed object, such as the web 120, as illustrated in
The recording medium such as the web 120 is a continuous sheet. Specifically, the recording medium is preferably longer than the distance between the first nip roller pair NR1 and the second nip roller pair NR2. The recording medium is not limited to webs. For example, the recording medium may be a folded sheet (so-called fanfold paper or Z-fold paper).
In the structure illustrated in
Note that, regarding the order of colors, a color that absorbs light well is preferably disposed extreme downstream, as illustrated in
Each liquid discharge head unit 210 discharges the ink to a predetermined position on the web 120, according to image data. The position at which the liquid discharge head unit 210 discharges ink (hereinafter “ink discharge position”) is almost identical to the position at which the ink discharged from the liquid discharge head (e.g., 210K-1, 210K-2, 210K-3, or 210K-4 in
Referring to
In
In
Each of the sensor devices SN includes at least one light source LG to irradiate a detection area of the optical sensor OS. In
The second optical sensor OS2 disposed upstream from the liquid discharge head unit 210M in the conveyance direction 10 is also provided with the magenta light source LGM1. The third optical sensor OS3 disposed downstream from the liquid discharge head unit 210M is provided with the magenta light source LGM2. That is, the second sensor device SN2 and the third sensor device SN3 include light sources configured to emit light having a high relative intensity in a wavelength range in which relative reflectance of magenta ink, discharged from the liquid discharge head unit 210M, is high.
The third optical sensor OS3, disposed upstream from the liquid discharge head unit 210C in the conveyance direction 10, is also provided with the cyan light source LGC1. The fourth optical sensor OS4 disposed downstream from the liquid discharge head unit 210C is provided with the cyan light source LGC2. That is, the third sensor device SN3 and the fourth sensor device SN4 include light sources configured to emit light having a high relative intensity in a wavelength range in which relative reflectance of cyan ink, discharged from the liquid discharge head unit 210C, is high.
The fourth optical sensor OS4 disposed upstream from the liquid discharge head unit 210K in the conveyance direction 10 is provided with the infrared light source LGIR1. The fifth optical sensor OS5 disposed downstream from the liquid discharge head unit 210K is provided with the infrared light source LGIR2. That is, the fourth sensor device SN4 and the fifth sensor device SN5 include light sources configured to emit light having a high relative intensity in a wavelength range in which relative reflectance of black ink, discharged from the liquid discharge head unit 210K, is high. Note that, for example, a controller 520 operably connected to the liquid discharge head units 210 controls the respective timings of ink discharge of the liquid discharge head units 210 and actuators AC1, AC2, AC3, and AC4 (collectively "actuators AC") of the liquid discharge head units 210. Alternatively, the control of timing and moving of the heads can be performed by two or more controllers or a circuit, instead of the controller 520. The actuators AC are described later.
Referring to
In the configuration illustrated in
The laser light emitted from the light source LG is diffused on the surface of the web 120, and superimposed diffusion waves interfere with each other, generating a pattern such as a speckle pattern. The optical sensor OS of the sensor device SN performs imaging of the pattern to generate image data. Based on the position change of the pattern captured by the optical sensor OS, the image forming apparatus 110 can obtain the amount by which the liquid discharge head unit 210 is to be moved and the timing of ink discharge from the liquid discharge head unit 210.
Additionally, in this structure, the liquid discharge head unit 210 and the sensor device SN can be disposed such that the operation area (e.g., the image formation area) of the liquid discharge head unit 210 overlaps, at least partly, with the detection range of the sensor device SN.
An example outer shape of the liquid discharge head unit 210 is described below with reference to
As illustrated in
The liquid discharge head unit 210K includes four heads 210K-1, 210K-2, 210K-3, and 210K-4 arranged in a staggered manner in the orthogonal direction 20.
Although the description above concerns a liquid discharge head unit including four heads, a liquid discharge head unit including a single head can be used.
In the example illustrated in
The magenta ink is a colorant having the following spectral reflectance property, for example.
In this example, the magenta ink exhibits a peak of the spectral reflectance at about 420 nanometers, and the spectral reflectance is higher at wavelengths longer than about 620 nanometers. In other words, the magenta ink reflects light of wavelengths longer than 620 nanometers well. For ink having such a property, the magenta light source LGM is used to emit light having a spectral property in which the relative intensity is high in the wavelength range longer than 620 nanometers.
Note that any light source to emit light having a high relative intensity at a predetermined wavelength can be used. For the yellow ink and the magenta ink having the properties illustrated in
The cyan ink is a colorant having the following spectral reflectance property, for example.
For such cyan ink, a blue light source (e.g., an LED) having the following property is usable as the cyan light source LGC.
With the above-described combination of the liquid and the light source LG, light is reflected well on the liquid. Although the light source used in the example described above has a high relative intensity in the wavelength range in which the relative reflectance of the ink is close to the peak, the light source is not limited thereto. For example, when a range in which the relative reflectance of the ink is lowest is 0% and a range in which the relative reflectance of the ink is highest is 100% in the visible spectrum, any light source having a relative intensity in the range in which the reflectance is 30% or higher can be used. The light source preferably has a high relative intensity in the range in which the reflectance is 50% or higher and, more preferably, has a high relative intensity in the range in which the reflectance is 80% or higher.
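As an illustrative check of this criterion, consider the following sketch; the spectral curves are hypothetical stand-ins (loosely modeled on a magenta-like ink and a red LED), not measured data from this disclosure.

```python
import numpy as np

# Hypothetical ink reflectance and light-source intensity, sampled at
# 20 nm steps across the visible spectrum (assumed data).
wl = np.arange(400, 701, 20)  # wavelength, nm
ink_reflectance = np.array([0.55, 0.60, 0.45, 0.20, 0.10, 0.08, 0.07, 0.08,
                            0.10, 0.20, 0.45, 0.70, 0.80, 0.85, 0.87, 0.88])
src_intensity = np.array([0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
                          0.01, 0.02, 0.10, 0.60, 1.00, 0.55, 0.15, 0.03])

# Normalize reflectance so the lowest value in the visible spectrum is
# 0% and the highest is 100%, as described above.
r_norm = ((ink_reflectance - ink_reflectance.min())
          / (ink_reflectance.max() - ink_reflectance.min()))

# Check the normalized reflectance where the source intensity peaks:
# 30% or higher is usable, 50% preferable, 80% more preferable.
peak = int(np.argmax(src_intensity))
for threshold in (0.30, 0.50, 0.80):
    print(f"{wl[peak]} nm: reflectance >= {threshold:.0%}? "
          f"{r_norm[peak] >= threshold}")
```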
Note that, in another embodiment, the fifth sensor device SN5 is omitted. For example, based on the calculation results of the detection by the third and fourth sensor devices SN3 and SN4, the position and the speed relating to the liquid discharge head unit 210K are predicted. Alternatively, the fourth sensor device SN4 performs sensing twice to also serve as the fifth sensor device SN5.
Further, the term “location of sensor” means the position where the detection is performed. Accordingly, it is not necessary that all components relating to the detection are disposed at the “location of sensor (e.g., the optical sensor OS)”. In one embodiment, some of the components are coupled to the optical sensor OS via a cable and disposed away therefrom.
In the description below, the sensor devices SN1, SN2, SN3, SN4, and SN5 may be collectively referred to as “sensor devices SN”. Similarly, the optical sensors OS1, OS2, OS3, OS4, and OS5 may be collectively referred to as “optical sensors OS”, and the light sources LGY1, LGY2, LGM1, LGM2, LGC1, LGC2, LGIR1, and LGIR2 may be collectively referred to as “light sources LG”.
Although the sensor devices SN are disposed facing the front side of the web 120 (to emit light to the front side and detect the front side) in
The sensor device SN illustrated is configured to capture a speckle pattern, which appears on a conveyed object (i.e., a target in
In the illustrated structure, the CMOS image sensor (the optical sensor OS) performs imaging of the pattern to obtain the image data. Then, the conveyed object detector 600 performs correlation operation using the image captured by one CMOS image sensor and the image captured by the CMOS image sensor of another sensor device SN. For example, the controller 520 performs the correlation operation. Based on a displacement of a correlation peak position obtained through the correlation operation, the controller 520 outputs the amount of movement of the conveyed object (e.g., the recording medium) from one sensor device SN to the other sensor device SN. In the illustrated example, the sensor device SN has a width W of 15 mm, a depth D of 60 mm, and a height H of 32 mm (15×60×32). The correlation operation is described in detail later.
The CMOS image sensor is an example hardware structure to implement an imaging unit 16 (16A or 16B) illustrated in
Although the controller 520 performs the correlation operation in this example, in one embodiment, the control circuit 152 of one of the sensor devices SN performs the correlation operation. For example, the control circuit 152 is a field-programmable gate array (FPGA) circuit.
Referring back to
The memory device 53 is a so-called memory and preferably has a capability to divide the two-dimensional images transmitted from the control circuit 152 or the like and store the divided images in different memory ranges.
For example, the controller 520 is a microcomputer. The controller 520 performs operations using the image data stored in the memory device 53, to implement a variety of processing.
The control circuit 152 and the controller 520 are, for example, central processing units (CPUs) or electronic circuits. Note that a single device can double as the control circuit 152 and the controller 520. The control circuit 152 and the controller 520 are implemented by a single CPU in one embodiment and, alternatively, are implemented by a single FPGA circuit in another embodiment.
The imaging unit 16A captures an image of the web 120 conveyed in the conveyance direction 10.
The imaging controller 14A includes a shutter controller 141A and an image acquisition unit 142A. The imaging controller 14A is implemented by, for example, the control circuit 152 (illustrated in
The image acquisition unit 142A captures the image generated by the imaging unit 16A.
The shutter controller 141A controls the timing of imaging by the imaging unit 16A.
The image memory 15A stores the image acquired by the imaging controller 14A. The image memory 15A is implemented by, for example, the memory device 53 (illustrated in
A calculator 53F can calculate, based on the image data recorded in the image memories 15A and 15B, at least one of a relative position of the web 120 between the sensor devices SN, the position of the pattern on the web 120, the speed at which the web 120 moves (hereinafter "moving speed"), and the amount of movement of the web 120. Additionally, the calculator 53F outputs, to the shutter controller 141A, data on the time difference Δt indicating the timing of shooting (shutter timing). In other words, the calculator 53F instructs the shutter controller 141A of shutter timings of imaging at the position A and imaging at the position B with the time difference Δt. The calculator 53F may also control the motor and the like to convey the web 120 at the calculated conveyance speed. The calculator 53F is implemented by, for example, the microcomputer of the controller 520 (illustrated in
The web 120 has diffusiveness on a surface thereof or in an interior thereof. Accordingly, when the web 120 is irradiated with light (e.g., laser beam), the reflected light is diffused. The diffuse reflection creates a pattern on the web 120. The pattern is made of spots called "speckle" (i.e., a speckle pattern). Accordingly, when an image of the web 120 is taken, image data representing the pattern on the web 120 is obtained. From the image data, the position of the pattern is known, and the position of a specific portion of the web 120 can be detected. Such a pattern is generated as the light emitted to the web 120 interferes due to the rugged shape, formed by projections and recesses, on the surface or inside of the web 120.
As the web 120 is conveyed, the speckle pattern on the web 120 is conveyed as well. When an identical speckle pattern is detected at different time points, the amount of movement of the speckle pattern in the conveyance direction 10 is obtained. In other words, the calculator 53F obtains the amount of movement of the speckle pattern based on the detection of an identical speckle pattern, thereby obtaining the conveyance amount of the web 120 in the conveyance direction 10. Further, the calculator 53F converts the calculated conveyance amount into a conveyance amount per unit time, thereby obtaining the conveyance speed of the web 120 in the conveyance direction 10.
As illustrated, the imaging unit 16A and the imaging unit 16B are spaced apart in the conveyance direction 10. The imaging unit 16A and the imaging unit 16B perform imaging of the web 120 at the respective positions.
The shutter controller 141A causes the imaging unit 16A to capture the image of the web 120 at time intervals of time difference Δt. Then, based on the speckle pattern in the image generated by the imaging, the calculator 53F obtains the conveyance amount of the web 120. Specifically, it is assumed that V represents a conveyance speed (mm/s) under an ideal condition without displacement, and the imaging units 16A and 16B are located at a relative distance L from each other in the conveyance direction 10. Under such conditions, an interval from the shooting at the position A to the shooting at the position B (the time difference Δt) can be expressed by Formula 1 below.
Δt=L/V Formula 1
In Formula 1 above, the relative distance L (mm) between the imaging unit 16A and the imaging unit 16B is obtained preliminarily (e.g., by measurement).
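A purely numeric example of Formula 1, with assumed values for L and V:

```python
# Illustrative numeric sketch of Formula 1; L and V are assumed values,
# not figures from this disclosure.
L = 60.0   # relative distance between imaging units 16A and 16B, mm
V = 500.0  # ideal conveyance speed, mm/s

dt = L / V  # Formula 1: interval between shooting at A and shooting at B
print(f"time difference dt = {dt * 1e3:.0f} ms")  # 120 ms
```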
The calculator 53F performs cross-correlation operation of image data D1(n) generated by the detecting unit 52A and image data D2(n) generated by the detecting unit 52B. Hereinafter an image generated by the cross-correlation operation is referred to as “correlated image”. For example, based on the correlated image, the calculator 53F calculates the displacement amount ΔD(n), which is the amount of displacement from the position detected with the previous frame or by another sensor device.
For example, the cross-correlation operation is expressed by Formula 2 below.
D1★D2=F−1[F[D1]·F[D2]*] Formula 2
Note that the image data D1(n) in Formula 2, that is, the data of the image taken at the position A, is referred to as the image data D1. Similarly, the image data D2(n) in Formula 2, that is, the data of the image taken at the position B, is referred to as the image data D2. In Formula 2, "F[ ]" represents Fourier transform, "F−1[ ]" represents inverse Fourier transform, "*" represents a complex conjugate, and "★" represents cross-correlation operation.
As represented in Formula 2, image data representing the correlation image is obtained through cross-correlation operation “D1★D2” performed on the first image data D1 and the second image data D2. Note that, when the first image data D1 and the second image data D2 are two-dimensional image data, the image data representing the correlation image is two-dimensional image data. When the first image data D1 and the second image data D2 are one-dimensional image data, the image data representing the correlation image is one-dimensional image data.
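For illustration, the computation of Formula 2 can be sketched with a fast Fourier transform library as follows. The 64×64 test pattern and the simulated five-pixel shift are assumptions for the sketch, not values from this disclosure.

```python
import numpy as np

# Simulate the same speckle pattern imaged at positions A and B:
# D2 is D1 circularly shifted 5 pixels along the conveyance direction.
rng = np.random.default_rng(0)
D1 = rng.random((64, 64))
D2 = np.roll(D1, shift=5, axis=0)

# Formula 2: D1 (star) D2 = F^-1[ F[D1] . F[D2]* ]
corr = np.fft.ifft2(np.fft.fft2(D1) * np.conj(np.fft.fft2(D2))).real

# The correlation peak location gives the displacement between the two
# images (modulo 64, because the FFT correlation is circular).
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # row 59, column 0: -5 rows mod 64, the simulated shift
```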
Regarding the correlation image, when a broad luminance profile causes an inconvenience, phase-only correlation can be used. For example, phase-only correlation is expressed by Formula 3 below.
D1★D2=F−1[P[F[D1]]·P[F[D2]*]] Formula 3
In Formula 3, "P[ ]" represents taking only the phase out of the complex amplitude, with the amplitude considered to be "1".
Thus, the calculator 53F can obtain the displacement amount ΔD(n) based on the correlation image even when the luminance profile is relatively broad.
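A corresponding sketch of Formula 3, under the same assumed test pattern; the small constant eps guards against division by zero and is an implementation detail of the sketch, not part of this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
D1 = rng.random((64, 64))
D2 = np.roll(D1, shift=5, axis=0)  # assumed 5-pixel conveyance

# Formula 3: keep only the phase of each spectrum (amplitude set to 1).
F1 = np.fft.fft2(D1)
F2c = np.conj(np.fft.fft2(D2))
eps = 1e-12  # avoid division by zero at empty frequency bins
poc = np.fft.ifft2((F1 / (np.abs(F1) + eps)) *
                   (F2c / (np.abs(F2c) + eps))).real
print(np.unravel_index(np.argmax(poc), poc.shape))  # (59, 0), sharper peak
```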
The correlation image represents the correlation between the first image data D1 and the second image data D2. Specifically, as the match rate between the first image data D1 and the second image data D2 increases, a luminance causing a sharp peak (so-called correlation peak) is output at a position close to a center of the correlation image. When the first image data D1 matches the second image data D2, the center of the correlation image and the peak position overlap.
Based on the correlation operation, the calculator 53F outputs the displacement in position between the first image data D1 and the second image data D2 obtained at the time difference Δt, the amount of movement, and the speed of movement. For example, the conveyed object detector 600 detects the amount of movement by which the web 120 has moved in the orthogonal direction 20 from the position of the first image data D1 to the position of the second image data D2. Alternatively, the speed of movement can be detected.
In the arrangement illustrated in
Further, based on the result of correlation operation, the calculator 53F can obtain the difference of the conveyance movement of the web 120 in the conveyance direction 10 from the relative distance L. That is, the calculator 53F can be used to calculate both of the position in the conveyance direction 10 and the position in the orthogonal direction 20, based on the two-dimensional (2D) images taken by the imaging units 16A and 16B. Sharing the sensor can reduce the cost of detecting positions in both directions. Additionally, the space for the detection can be small since the number of sensors is reduced.
Based on the calculated difference of the conveyance amount of the web 120 from an ideal distance, the calculator 53F calculates the timing of ink discharge from the liquid discharge head unit 210Y. Based on the calculation result, the controller 54F controls ink discharge from the liquid discharge head unit 210Y.
Specifically, the controller 54F outputs a signal SIG1 for the liquid discharge head unit 210Y (a signal SIG2 is for the liquid discharge head unit 210M), to control the timing of ink discharge. The controller 54F is implemented by, for example, the microcomputer of the controller 520 (illustrated in
Specifically, the calculator 53F includes a 2D Fourier transform FT1 (a first 2D Fourier transform), a 2D Fourier transform FT2 (second 2D Fourier transform), a correlation image data generator DMK, a peak position search unit SR, an arithmetic unit CAL (or arithmetic logical unit), and a transform-result memory MEM.
The 2D Fourier transform FT1 is configured to transform the first image data D1. The 2D Fourier transform FT1 includes a Fourier transform unit FT1a for transform in the orthogonal direction 20 and a Fourier transform unit FT1b for transform in the conveyance direction 10.
The Fourier transform unit FT1a performs one-dimensional transform of the first image data D1 in the orthogonal direction 20. Based on the result of transform by the Fourier transform unit FT1a for orthogonal direction, the Fourier transform unit FT1b performs one-dimensional transform of the first image data D1 in the conveyance direction 10. Thus, the Fourier transform unit FT1a and the Fourier transform unit FT1b perform one-dimensional transform in the orthogonal direction 20 and the conveyance direction 10, respectively. The 2D Fourier transform FT1 outputs the result of transform to the correlation image data generator DMK.
Similarly, the 2D Fourier transform FT2 is configured to transform the second image data D2. The 2D Fourier transform FT2 includes a Fourier transform unit FT2a for transform in the orthogonal direction 20, a Fourier transform unit FT2b for transform in the conveyance direction 10, and a complex conjugate unit FT2c.
The Fourier transform unit FT2a performs one-dimensional transform of the second image data D2 in the orthogonal direction 20. Based on the result of transform by the Fourier transform unit FT2a for orthogonal direction, the Fourier transform unit FT2b performs one-dimensional transform of the second image data D2 in the conveyance direction 10. Thus, the Fourier transform unit FT2a and the Fourier transform unit FT2b perform one-dimensional transform in the orthogonal direction 20 and the conveyance direction 10, respectively.
Subsequently, the complex conjugate unit FT2c calculates a complex conjugate of the results of transform by the Fourier transform unit FT2a (for orthogonal direction) and the Fourier transform unit FT2b (for conveyance direction). Then, the 2D Fourier transform FT2 outputs, to the correlation image data generator DMK, the complex conjugate calculated by the complex conjugate unit FT2c.
The correlation image data generator DMK then generates the correlation image data, based on the transform result of the first image data D1, output from the 2D Fourier transform FT1, and the transform result of the second image data D2, output from the 2D Fourier transform FT2.
The correlation image data generator DMK includes an adder DMKa and a 2D inverse Fourier transform unit DMKb.
The adder DMKa combines the transform result of the first image data D1 with that of the second image data D2 (the element-wise product in Formula 2) and outputs the result to the 2D inverse Fourier transform unit DMKb.
The 2D inverse Fourier transform unit DMKb performs 2D inverse Fourier transform of the result generated by the adder DMKa. Thus, the correlation image data is generated through 2D inverse Fourier transform. The 2D inverse Fourier transform unit DMKb outputs the correlation image data to the peak position search unit SR.
The peak position search unit SR searches the correlation image data for a peak position (a peak luminance or peak value), at which the rising is sharpest. The correlation image data holds values indicating the intensity of light, that is, luminance values, arranged in a matrix.
Note that, in the correlation image data, the luminance values are arranged at a pixel pitch of the optical sensor OS (i.e., an area sensor), that is, pixel size intervals. Accordingly, the peak position is preferably searched for after performing so-called sub-pixel processing. Sub-pixel processing enhances the accuracy in searching for the peak position. Then, the calculator 53F can accurately output the position, the amount of movement, and the speed of movement.
An example of searching by the peak position search unit SR is described below, with reference to the graph illustrated in
In this graph, the lateral axis represents the position in the conveyance direction 10 of an image represented by the correlation image data, and the vertical axis represents the luminance values of the image represented by the correlation image data.
The luminance values indicated by the correlation image data are described below using a first data value q1, a second data value q2, and a third data value q3. In this example, the peak position search unit SR searches for peak position P on a curved line k connecting the first, second, and third data values q1, q2, and q3.
Initially, the peak position search unit SR calculates each difference between the luminance values indicated by the correlation image data. Then, the peak position search unit SR extracts a largest difference combination, meaning a combination of luminance values between which the difference is largest among the calculated differences. Then, the peak position search unit SR extracts combinations of luminance values adjacent to the largest difference combination. Thus, the peak position search unit SR can extract three data values, such as the first, second, and third data values q1, q2, and q3 in the graph. The peak position search unit SR calculates the curved line k connecting these three data values, thereby obtaining the peak position P. In this manner, the peak position search unit SR can reduce the amount of operation such as sub-pixel processing to increase the speed of searching for the peak position P. The position of the combination of luminance values between which the difference is largest means the position at which rising is sharpest. The manner of sub-pixel processing is not limited to the description above.
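One common realization of this search, given here only as an assumed sketch, fits a parabola (the curved line k) through the three extracted data values and takes its vertex as the peak position P; the luminance values below are illustrative.

```python
# Parabolic sub-pixel fit through (p-1, q1), (p, q2), (p+1, q3);
# the three luminance values are illustrative assumptions.
q1, q2, q3 = 0.40, 0.95, 0.70

# Vertex of the parabola through the three points, expressed as an
# offset from the middle sample (peak position P on curved line k).
offset = 0.5 * (q1 - q3) / (q1 - 2.0 * q2 + q3)
print(f"peak position P = middle pixel {offset:+.4f} px")  # +0.1875 px
```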
Through the searching of the peak position P performed by the peak position search unit SR, for example, the following result is attained.
The arithmetic unit CAL calculates the relative position, amount of movement, or speed of movement of the web 120, or a combination thereof. For example, the arithmetic unit CAL calculates the difference between a center position of the correlation image data and the peak position calculated by the peak position search unit SR, to obtain the relative position and the amount of movement.
For example, the arithmetic unit CAL divides the amount of movement by time, to obtain the speed of movement.
Thus, the calculator 53F can calculate, through the correlation operation, the relative position, amount of movement, or speed of movement of the web 120. The methods of calculation of the relative position, the amount of movement, and the speed of movement are not limited to those described above. For example, alternatively, the calculator 53F obtains the relative position, amount of movement, or speed of movement through the following method.
Initially, the calculator 53F binarizes each luminance value of the first image data D1 and the second image data D2. That is, the calculator 53F binarizes a luminance value not greater than a predetermined threshold into "0" and a luminance value greater than the threshold into "1". Then, the calculator 53F may compare the binarized first and second image data D1 and D2 to obtain the relative position.
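A rough sketch of this alternative follows; the threshold and the test images are assumptions, and the agreement measure is one simple choice among many.

```python
import numpy as np

rng = np.random.default_rng(1)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=5, axis=0)  # assumed 5-pixel conveyance

threshold = 0.5  # assumed predetermined threshold
b1 = (img1 > threshold).astype(np.uint8)  # > threshold -> 1, else 0
b2 = (img2 > threshold).astype(np.uint8)

# Slide b2 against b1 and keep the shift with the highest agreement.
best = max(range(-8, 9),
           key=lambda s: np.mean(b1 == np.roll(b2, s, axis=0)))
print(best)  # -5: consistent with the simulated 5-pixel displacement
```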
Although the description above concerns a case where fluctuations are present in the Y direction, the peak position occurs at a position displaced in the X direction when there are fluctuations in the X direction.
Alternatively, the calculator 53F can adopt a different method to obtain the relative position, amount of movement, or speed of movement. For example, the calculator 53F can adopt so-called pattern matching processing to detect the relative position based on a pattern taken in the image data.
Descriptions are given below of displacement of the recording medium in the orthogonal direction 20, with reference to
The fluctuation of the position of the web 120 in the orthogonal direction 20 (hereinafter "orthogonal position of the web 120"), that is, the meandering of the web 120, is caused by eccentricity of a conveyance roller (the driving roller in particular), misalignment, or tearing of the web 120 by a blade. When the web 120 is relatively narrow in the orthogonal direction 20, for example, thermal expansion of the roller affects the fluctuation of the orthogonal position of the web 120.
Descriptions are given below of the occurrence of misalignment in color superimposition (images out of color registration). Due to fluctuations (meandering illustrated in
Specifically, to form a multicolor image on a recording medium using a plurality of colors, the image forming apparatus 110 superimposes, as so-called color planes, a plurality of different color inks discharged from the liquid discharge head units 210 on the web 120.
As illustrated in
The controller 520 is described below.
Examples of the host 71 include a client computer (personal computer or PC) and a server. The apparatus-side controller 72 includes a printer controller 72C and a printer engine 72E.
The printer controller 72C governs operation of the printer engine 72E. The printer controller 72C transmits and receives the control data to and from the host 71 via a control line 70LC. The printer controller 72C further transmits and receives the control data to and from the printer engine 72E via a control line 72LC. Through such data transmission and reception, the control data indicating printing conditions and the like are input to the printer controller 72C. The printer controller 72C stores the printing conditions, for example, in a register. The printer controller 72C then controls the printer engine 72E according to the control data to form an image based on print job data, that is, the control data.
The printer controller 72C includes a CPU 72Cp, a print control device 72Cc, and a memory 72Cm. The CPU 72Cp and the print control device 72Cc are connected to each other via a bus 72Cb to communicate with each other. The bus 72Cb is connected to the control line 70LC via a communication interface (I/F) or the like.
The CPU 72Cp controls the entire apparatus-side controller 72 based on a control program and the like. That is, the CPU 72Cp is a processor as well as a controller.
The print control device 72Cc transmits and receives data indicating a command or status to and from the printer engine 72E, based on the control data transmitted from the host 71. Thus, the print control device 72Cc controls the printer engine 72E.
To the printer engine 72E, a plurality of data lines, namely, data lines 70LD-C, 70LD-M, 70LD-Y and 70LD-K are connected. The printer engine 72E receives the image data from the host 71 via the plurality of data lines. Then, the printer engine 72E performs image formation of respective colors, controlled by the printer controller 72C.
The printer engine 72E includes a plurality of data management devices, namely, data management devices 72EC, 72EM, 72EY, and 72EK. The printer engine 72E includes an image output 72Ei and a conveyance controller 72Ec.
The data management device 72EC includes a logic circuit 72ECl and a memory 72ECm. As illustrated in
According to a control signal input from the printer controller 72C (illustrated in
According to a control signal input from the printer controller 72C, the logic circuit 72ECl retrieves, from the memory 72ECm, cyan image data Ic. The logic circuit 72ECl then transmits the cyan image data Ic to the image output 72Ei.
The memory 72ECm preferably has a capacity to store image data extending about three pages. With the capacity to store image data extending about three pages, the memory 72ECm can store the image data input from the host 71, the image data being used for current image formation, and the image data for subsequent image formation.
The output control device 72Eic outputs the image data for respective colors to the liquid discharge head units 210. That is, the output control device 72Eic controls the liquid discharge head units 210 based on the image data input thereto.
The output control device 72Eic controls the plurality of liquid discharge head units 210 either simultaneously or individually. That is, the output control device 72Eic receives timing commands and changes the timings at which the liquid discharge head units 210 discharge respective color inks. The output control device 72Eic may control one or more of the liquid discharge head units 210 based on the control signal input from the printer controller 72C (illustrated in
In the apparatus-side controller 72 illustrated in
The conveyance controller 72Ec (in
At S01, the image forming apparatus 110 irradiates the web 120 with light of wavelength corresponding to the color of ink. In this example, the first optical sensor OS1 disposed upstream from the liquid discharge head unit 210Y generates the first image data D1. At S01, in a state in which the web 120 is irradiated with the yellow light emitted from the yellow light source LGY1, the first optical sensor OS1 generates the first image data D1 through imaging of the irradiated web 120.
At S02, the liquid discharge head unit 210Y discharges yellow ink to the web 120.
At S03, the conveyed object detector 600 obtains the second image data D2 while irradiating the web 120 with light of wavelength corresponding to the color of ink discharged at S02. In this example, the second optical sensor OS2 disposed downstream from the liquid discharge head unit 210Y generates the second image data D2 through the imaging. At S03, the yellow light source LGY2 emits light to the web 120. Then, the second optical sensor OS2 generates the second image data D2 through imaging in the state in which the web 120 is irradiated with the light from the yellow light source LGY2.
At S04, the calculator 53F calculates at least one of the position and speed of the web 120 based on the first and second image data D1 and D2. Specifically, at S04, the calculator 53F compares the image data captured upstream from the liquid discharge head unit 210Y with the image data captured downstream from the liquid discharge head unit 210Y, to calculate the displacement in the orthogonal direction 20 and the movement amount of the web 120 in the conveyance direction 10.
Thus, the conveyed object detector 600 can calculate the movement amount and the speed of the web 120 based on the image data.
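For orientation, the flow of S01 to S04 for one color station can be sketched as follows. Every name below (the light source, sensor, head, and calculator objects and their methods) is a hypothetical stand-in, not an interface defined in this disclosure.

```python
# Hypothetical sketch of steps S01-S04 for the yellow station.
def yellow_station_cycle(lgy1, lgy2, os1, os2, head_y, calculator):
    lgy1.on()                     # irradiate upstream with yellow light
    d1 = os1.capture()            # S01: first image data D1, upstream
    head_y.discharge()            # S02: discharge yellow ink
    lgy2.on()                     # irradiate downstream with yellow light
    d2 = os2.capture()            # S03: second image data D2, downstream
    return calculator.position_and_speed(d1, d2)  # S04: calculate
```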
In another embodiment, regarding the light source located upstream from the extreme upstream one of the liquid discharge head units in the conveyance direction 10, the color of the emitted light is not limited. That is, it is not necessary to limit the color of the light applied to a portion of the conveyed object to which no liquid has been applied.
In this case, the conveyed object detector 600 includes a first light source (e.g., LGY1) and a second light source (e.g., LGY2) respectively disposed, in the conveyance direction 10 of the conveyed object, upstream and downstream from a movable liquid discharge head to discharge liquid onto the conveyed object, to irradiate the conveyed object, and a detector including a first optical sensor to generate first image data of an irradiated portion irradiated by the first light source and a second optical sensor to generate second image data of an irradiated portion irradiated by the second light source. The second light source irradiates the conveyed object with light having a high relative intensity in a wavelength range in which relative reflectance of the liquid (discharged from the liquid discharge head) is high. The detector is to generate a detection result based on the first image data and the second image data, and the detection result includes at least one of a conveyance amount of the conveyed object and a conveyance speed of the conveyed object.
Descriptions are given below of an example of combinations of the first and second image data for the liquid discharge head units 210.
In this example, a first pair PR1 (image data pair) used for the calculation for the liquid discharge head unit 210Y includes first image data D1Y and second image data D2Y, both of which are obtained with irradiation of yellow light. The first image data D1Y is obtained before the yellow ink is discharged. The second image data D2Y is obtained after the yellow ink is discharged. The yellow ink easily reflects the yellow light and easily absorbs light other than yellow light. Accordingly, even when a first letter CR1 (“A” in
In this example, a second pair PR2 used for the calculation for the liquid discharge head unit 210M includes first image data D1M and second image data D2M, both of which are obtained with irradiation of magenta light (e.g., the red light illustrated in
In this example, a third pair PR3 used for the calculation for the liquid discharge head unit 210C includes first image data D1C and second image data D2C, both of which are obtained with irradiation of cyan light (e.g., the blue light illustrated in
In this example, a fourth pair PR4 used for the calculation for the liquid discharge head unit 210K includes first image data D1K and second image data D2K, both of which are obtained with irradiation of infrared light. The first image data D1K is obtained, with irradiation with infrared light, after the cyan ink is discharged and before the black ink is discharged. The second image data D2K is obtained, with irradiation with infrared light, after the black ink is discharged. The black ink absorbs most visible light (wavelengths in the visible spectrum). Accordingly, even when a letter or a pattern formed with the black ink enters the sensor detection area irradiated with the infrared light, the infrared light is easily reflected on the black ink. Accordingly, adverse effects caused by the black ink are suppressed in detection by the sensor device SN.
As illustrated in
Functional Configuration
In the arrangement illustrated in
The detecting unit 52 detects the surface of the web 120 being irradiated by the light source LG illustrated in
Note that the image forming apparatus 110 can further include the controller 54F. Based on the calculation by the calculator 53F, the controller 54F controls the timing of ink discharge to the web 120 and the position of the liquid discharge head unit 210 in the orthogonal direction 20. Regarding the liquid discharge head units 210M, 210C, and 210K as well, based on the detection made on the upstream side and that on the downstream side of the liquid discharge head unit 210, the calculator 53F calculates the position or the like of the web 120 in at least one of the orthogonal direction 20 and the conveyance direction 10. Further, the controller 54F controls the timing of ink discharge to the web 120 or the position of the liquid discharge head unit 210 in the orthogonal direction 20.
[Variation 1]
The optical sensor OS and the light source LG can have the following structures.
Each liquid discharge head unit 210 is provided with a plurality of rollers. As illustrated in the drawings, for example, the image forming apparatus 110 includes the rollers respectively disposed upstream and downstream from each liquid discharge head unit 210. In the illustrated example, the roller disposed upstream from the liquid discharge head unit 210 is referred to as a first roller to convey the web 120 to the ink discharge position. Similarly, the roller disposed downstream from each liquid discharge head unit 210 is referred to as a second roller to convey the web 120 from the ink discharge position. Disposing the first roller and the second roller for each ink discharge position can suppress fluttering of the recording medium conveyed. For example, the first roller and the second roller are disposed along the conveyance passage of the recording medium and, for example, are driven rollers. Alternatively, the first roller and the second roller may be a driving roller driven by a motor or the like.
Note that, instead of the first and second rollers that are rotators such as driven rollers, first and second supports to support the conveyed object may be used. For example, each of the first and second supports can be a pipe or a shaft having a round cross section. Alternatively, each of the first and second supports can be a curved plate having an arc-shaped face to contact the conveyed object. In the description below, the first and second supports are rollers.
Specifically, a first roller CR1Y and a second roller CR2Y (first and second supports to support the recording medium) are disposed upstream and downstream from the yellow ink discharge position PY, respectively, in the conveyance direction 10 of the web 120.
Similarly, a first roller CR1M and a second roller CR2M are disposed upstream and downstream from the liquid discharge head unit 210M, respectively. Similarly, a first roller CR1C and a second roller CR2C are disposed upstream and downstream from the liquid discharge head unit 210C for cyan, respectively. Similarly, a first roller CR1K and a second roller CR2K are disposed upstream and downstream from the liquid discharge head unit 210K, respectively.
As illustrated, the location of sensor is preferably close to the first roller CR1. That is, the distance between the ink discharge position and the location of sensor is preferably short. When the distance between the ink discharge position and the optical sensor OS is short, detection error can be suppressed. Accordingly, the position of the recording medium in the conveyance direction 10 and the orthogonal direction 20 can be detected with a sensor accurately.
Specifically, the sensor device SN is disposed between the first roller CR1 and the second roller CR2. That is, in this example, a first upstream sensor device SN11 and a first downstream sensor device SN12 for yellow are disposed in the inter-roller range INTY1 for yellow. The sensor device SN11 includes an optical sensor OS11 and a light source LGY11. The sensor device SN12 includes an optical sensor OS12 and a light source LGY12. Similarly, a second upstream sensor device SN21 and a second downstream sensor device SN22 for magenta are preferably disposed in an inter-roller range INTM1 between the first and second rollers CR1M and CR2M. The sensor device SN21 includes an optical sensor OS21 and a light source LGM21. The sensor device SN22 includes an optical sensor OS22 and a light source LGM22. Similarly, a third upstream sensor device SN31 and a third downstream sensor device SN32 for cyan are preferably disposed in an inter-roller range INTC1 between the first and second rollers CR1C and CR2C. The sensor device SN31 includes an optical sensor OS31 and a light source LGC31. The sensor device SN32 includes an optical sensor OS32 and a light source LGC32. Similarly, a fourth upstream sensor device SN41 and a fourth downstream sensor device SN42 for black are preferably disposed in an inter-roller range INTK1 between the first and second rollers CR1K and CR2K. The sensor device SN41 includes an optical sensor OS41 and a light source LGIR41. The sensor device SN42 includes an optical sensor OS42 and a light source LGIR42.
The optical sensor OS disposed between the first and second rollers CR1 and CR2 can detect the recording medium at a position close to the ink discharge position. The conveyance speed V is relatively stable in a portion between the rollers. Accordingly, the position of the recording medium in the conveyance direction 10 and the orthogonal direction 20 can be detected with a high accuracy.
In this structure, the first upstream sensor device SN11 and the first downstream sensor device SN12 generate the first pair PR1 (the first and second image data D1Y and D2Y) illustrated in
[Variation 2]
[Variation 3]
Specifically, a first sensor device SN101 is disposed upstream from a second sensor device SN102. The second sensor device SN102 is preferably disposed in a range extending from the yellow ink discharge position PY upstream to the first roller CR1Y for yellow (hereinafter "upstream range INTY2"). The first and second sensor devices SN101 and SN102 include optical sensors OS101 and OS102 and light sources LGY101 and LGY102, respectively. A third sensor device SN103 is preferably disposed in a range extending from the magenta ink discharge position PM upstream to the first roller CR1M for magenta (hereinafter "upstream range INTM2"). Similarly, a fourth sensor device SN104 is preferably disposed in a range extending from the cyan ink discharge position PC upstream to the first roller CR1C for cyan (hereinafter "upstream range INTC2"). Similarly, a fifth sensor device SN105 is preferably disposed in a range extending from the black ink discharge position PK upstream to the first roller CR1K for black in the conveyance direction 10 (hereinafter "upstream range INTK2"). The sensor device SN103 includes an optical sensor OS103 and light sources LGY103 and LGM111. The sensor device SN104 includes an optical sensor OS104 and light sources LGM112 and LGC121. The sensor device SN105 includes an optical sensor OS105 and a light source LGC122.
When the sensors are respectively disposed in the upstream ranges INTK2, INTC2, INTM2, and INTY2, the image forming apparatus 110 can detect the position of the recording medium (conveyed object) in the conveyance direction 10 and the direction orthogonal thereto, with a high accuracy. The sensor thus disposed is upstream from the ink discharge position in the conveyance direction 10. Therefore, initially, on the upstream side, the sensor can accurately detect the movement amount or conveyance speed of the recording medium in the conveyance direction 10, the orthogonal direction 20, or both.
Accordingly, the image forming apparatus 110 can calculate the ink discharge timings (i.e., operation timing) of the liquid discharge head units 210, the amount by which the head units are to move, or both. In other words, in a period from when the position of the web 120 is detected on the upstream side of the ink discharge position to when the detected portion of the web 120 reaches the ink discharge position, the operation timing is calculated or the head unit is moved. Therefore, the image forming apparatus 110 can adjust the ink discharge position with high accuracy.
Note that, assuming that the location of sensor is directly below the liquid discharge head unit 210, in some cases, a delay of control action renders an image out of color registration. Accordingly, when the location of sensor is upstream from the ink discharge position, misalignment in color superimposition is suppressed, improving image quality. There are cases where layout constraints hinder disposing the sensor close to the ink discharge position. Accordingly, the location of sensor is preferably closer to the first roller CR1 than the ink discharge position.
The sensor can be disposed directly below each liquid discharge head unit 210. In the example described below, the sensor is disposed directly below the liquid discharge head unit 210. The sensor disposed directly below the head unit can accurately detect the amount of movement of the recording medium directly below the head unit. Therefore, in a configuration in which the speed of control action is relatively fast, the sensor is preferably disposed closer to the position directly below each liquid discharge head unit 210. However, the position of the sensor is not limited to a position directly below the liquid discharge head unit 210, and similar calculation is feasible when the sensor device SN is disposed otherwise.
Alternatively, in a configuration where error is tolerable, the sensor can be disposed directly below the liquid discharge head unit 210, or downstream from the position directly below the liquid discharge head unit 210 in the inter-roller range INT1.
Using a detection result pair (i.e., a second result RES2) generated by the second sensor device SN102 and the third sensor device SN103, both disposed upstream from the magenta liquid discharge head unit 210M in the conveyance direction 10, the image forming apparatus 110 controls the movement or discharge timing of the liquid discharge head unit 210M.
Using a detection result pair (i.e., a third result RES3) generated by the third sensor device SN103 and the fourth sensor device SN104, both disposed upstream from the cyan liquid discharge head unit 210C in the conveyance direction 10, the image forming apparatus 110 controls the movement or discharge timing of the liquid discharge head unit 210C.
Using a detection result pair (i.e., a fourth result RES4) generated by the fourth sensor device SN104 and a fifth sensor device SN105, both disposed upstream from the black liquid discharge head unit 210K in the conveyance direction 10, the image forming apparatus 110 controls the movement or discharge timing of the liquid discharge head unit 210K.
Using the first, second, third, fourth, and fifth results RES1, RES2, RES3, RES4, and RES5, the image forming apparatus 110 performs, for example, the processing illustrated in the timing chart in
In the case of the first result RES1, the first sensor device SN101 generates first sensor data SD1, and the second sensor device SN102 generates second sensor data SD2. In the case of the second result RES2, the second sensor device SN102 generates the first sensor data SD1, and the third sensor device SN103 generates the second sensor data SD2. In the case of the third result RES3, the third sensor device SN103 generates the first sensor data SD1, and the fourth sensor device SN104 generates the second sensor data SD2. In the case of the fourth result RES4, the fourth sensor device SN104 generates the first sensor data SD1, and the fifth sensor device SN105 generates the second sensor data SD2.
As illustrated in
Descriptions are given below of calculation of displacement of the web 120 for the cyan liquid discharge head unit 210C, made based on the third result RES3.
It is assumed that the optical sensor OS103 of the third sensor device SN103 and the optical sensor OS104 of the fourth sensor device SN104 are disposed at a distance L2 (interval) from each other. It is assumed that V represents the conveyance speed calculated based on the data generated by the optical sensors OS, and T2 represents a travel time for the web 120 (conveyed object) to be conveyed from the optical sensor OS103 to the optical sensor OS104. In this case, the travel time is calculated as "T2=L2/V".
When A represents a sampling interval of the optical sensor OS and n represents the number of times of sampling performed while the web 120 travels from one sensor to the other sensor, the number of times of sampling “n” is calculated as “n=T2/A”.
The calculation result is referred to as a displacement ΔX. In a case of a detection cycle “0” in
Subsequently, the image forming apparatus 110 controls the actuator AC to move the liquid discharge head unit 210C in the orthogonal direction 20, to compensate for the displacement ΔX. With this operation, even when the position of the conveyed object changes in the orthogonal direction 20, the image forming apparatus 110 can form an image on the conveyed object with a high accuracy. Further, as the displacement is calculated based on the sensor data SD at two different positions in the conveyance direction, that is, the detection results generated by the two different optical sensors OS, the displacement of the conveyed object can be calculated without multiplying the position data of the sensor devices SN. This operation can suppress the accumulation of detection errors by the sensor devices SN.
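A numeric sketch of the relationships above, with illustrative assumed values for L2, V, A, and the sampled positions:

```python
# Illustrative numbers for T2 = L2 / V and n = T2 / A; all values are
# assumptions, not figures from this disclosure.
L2 = 40.0   # assumed distance between OS103 and OS104, mm
V = 500.0   # conveyance speed calculated from sensor data, mm/s
A = 0.002   # assumed sampling interval of the optical sensors, s

T2 = L2 / V  # travel time from OS103 to OS104: 0.08 s
n = T2 / A   # samples taken while the web travels L2: 40

# Displacement dX: compare OS104's current sample with the sample that
# OS103 took n cycles earlier (the same portion of the web).
x_up_n_ago = 0.012  # orthogonal position seen by OS103, n samples ago, mm
x_down_now = 0.037  # orthogonal position seen by OS104 now, mm
dX = x_down_now - x_up_n_ago
print(f"n = {n:.0f} samples, displacement dX = {dX:+.3f} mm")
```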
The sensor data SD is not limited to the detection result generated by the sensor device SN next to and upstream from the liquid discharge head unit 210 in the conveyance direction 10. That is, any of the optical sensors OS upstream from the liquid discharge head unit 210 to be moved can be used.
Note that the second sensor data SD2 is preferably generated by the sensor device SN closest to the liquid discharge head unit 210 to be moved.
Alternatively, the displacement of the conveyed object can be calculated based on three or more detection results.
Thus, the travel of the liquid discharge head unit 210 is controlled based on the displacement calculated from the plurality of sensor data SD, so that the position of the discharged liquid on the web 120 can be controlled accurately in the orthogonal direction 20. When the discharge timing of the liquid discharge head unit 210 is similarly controlled based on the displacement of the web 120 in the conveyance direction 10, the position of the discharged liquid on the web 120 can be controlled accurately in the conveyance direction 10.
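A minimal sketch of these two compensations is given below; the function names compensate, move, and shift_discharge_timing, and the actuator and head interfaces, are hypothetical and serve only to illustrate the control described above.

    # Hypothetical interfaces; not part of this disclosure.
    def compensate(head, actuator, dx_orthogonal, dx_conveyance, v):
        # Move the head in the orthogonal direction 20 to cancel the
        # detected displacement of the web.
        actuator.move(-dx_orthogonal)
        # Shift the discharge timing by the time equivalent of the
        # displacement in the conveyance direction 10.
        dt = dx_conveyance / v
        head.shift_discharge_timing(dt)

In operation, the displacement values detected in the two directions and the conveyance speed V would be passed to such a routine at each detection cycle.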
The image forming apparatus 110 further includes a head moving device 55F (illustrated in the accompanying drawings) to move the liquid discharge head unit 210 in the orthogonal direction 20.
The image forming apparatus 110 can further include a measuring instrument such as an encoder. Descriptions are given below of a configuration including an encoder serving as the measuring instrument. For example, the encoder is attached to a rotation shaft of the roller 230, which is a driving roller. Then, the encoder can measure the amount of movement of the web 120 in the conveyance direction 10 based on the amount of rotation of the roller 230. When the measurement results are used in combination with the detection results generated by the sensor devices SN, the image forming apparatus 110 can discharge ink onto the web 120 accurately.
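For example, the conversion from the encoder output to a conveyance amount can be sketched as follows; the encoder resolution and the roller diameter are assumed values for illustration, not values from this disclosure.

    import math

    COUNTS_PER_REV = 4096    # encoder resolution in counts per revolution (assumed)
    ROLLER_DIAMETER = 0.060  # diameter of the roller 230 in meters (assumed)

    def conveyance_amount(counts):
        # One revolution of the driving roller advances the web by the
        # roller circumference.
        revolutions = counts / COUNTS_PER_REV
        return revolutions * math.pi * ROLLER_DIAMETER

    print(conveyance_amount(2048))  # e.g., half a revolution of the roller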
A liquid discharge apparatus according to an aspect of this disclosure irradiates a conveyed object with light having a high relative intensity in a wavelength range in which the relative reflectance of the liquid is high, and detects the amount of movement or speed of movement of the conveyed object. When a pattern (a letter or the like) drawn with the liquid (e.g., ink) is irradiated with such light (relatively intense on the liquid), the pattern drawn with the liquid is less likely to appear in the image data used in detecting the amount of movement or speed of movement of the conveyed object. Thus, adverse effects of the pattern drawn with the liquid are suppressed.
Specifically, as described above, when a light source that emits light corresponding to the color of the liquid is used, the pattern drawn with the liquid assimilates with the light. Accordingly, the pattern is not included in the image data, and adverse effects of the pattern are suppressed. The liquid discharge apparatus can thus detect the amount of movement or the speed of movement accurately with the detecting unit 52.
In a configuration in which the color of the liquid is different among the liquid discharge head units, the wavelength of the light is different among the liquid discharge head units. For example, the detecting units 52 disposed upstream and downstream from a yellow liquid discharge head unit emit yellow light and generate the image data.
In the illustrative embodiment, detection is performed on the side on which the liquid is discharged. Alternatively, for example, in a case in which the liquid shows through on the back side of the recording medium, detection is performed on the back side while the back side is irradiated with the light. One or more aspects of this disclosure can adapt to such a configuration.
Additionally, in an image forming apparatus to discharge liquid to form images on a recording medium, as the accuracy in droplet landing positions improves, misalignment in color superimposition is suppressed, improving image quality.
The first light source 51AA and the second light source 51AB emit laser light or the like to the web 120, which is an example of an object to be detected. The first light source 51AA irradiates a position AA with light, and the second light source 51AB irradiates a position AB with light.
Each of the first light source 51AA and the second light source 51AB includes a light-emitting element to emit laser light and a collimator lens to approximately collimate the laser light emitted from the light-emitting element. The first light source 51AA and the second light source 51AB are disposed to emit light in an oblique direction relative to the surface of the web 120.
The optical sensor OS includes an area sensor 11, a first imaging lens 12AA disposed opposing the position AA, and a second imaging lens 12AB disposed opposing the position AB.
The area sensor 11 includes an image sensor 112 on a silicon substrate 111. The image sensor 112 includes an area 11AA and an area 11AB, in each of which a two-dimensional image is captured. For example, the area sensor 11 is a charge-coupled device (CCD) image sensor, a complementary metal oxide semiconductor (CMOS) image sensor, a photodiode array, or the like. The area sensor 11 is housed in a case 13. The first imaging lens 12AA and the second imaging lens 12AB are held by a first lens barrel 13AA and a second lens barrel 13AB, respectively.
In the illustrated structure, the optical axis of the first imaging lens 12AA matches the center of the area 11AA. Similarly, the optical axis of the second imaging lens 12AB matches the center of the area 11AB. The first imaging lens 12AA and the second imaging lens 12AB focus light on the area 11AA and the area 11AB, respectively, to generate two-dimensional image data.
In this case, the sensor device 50 can detect displacement or speed between the positions AA and AB. Further, the sensor device 50 can perform calculation using such a detection result and a detection result generated by a sensor device disposed at a different position in the conveyance direction 10, thereby detecting the displacement and speed between the sensor devices disposed at different positions. When the color of light is different between the first and second light sources 51AA and 51AB, the sensor device 50 can be used as the second, third, or fourth sensor device SN2, SN3, or SN4 illustrated in the accompanying drawings.
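One plausible way to estimate such a displacement from two captured images is frequency-domain cross-correlation, sketched below; this technique is an assumption for illustration and is not necessarily the specific algorithm of this disclosure.

    import numpy as np

    def estimate_shift(img_a, img_b):
        # Cross-correlate the two frames in the frequency domain and take
        # the correlation peak as the displacement in pixels.
        fa = np.fft.fft2(img_a - img_a.mean())
        fb = np.fft.fft2(img_b - img_b.mean())
        corr = np.fft.ifft2(fa * np.conj(fb)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Convert peak coordinates to signed shifts (wrap-around aware).
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))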
For example, the sensor device 50 can have the following structure.
Additionally, in this structure, use of an aperture 121 or the like is preferable to prevent interference between the images generated by the first imaging lens 12AA and the second imaging lens 12AB. The aperture 121 or the like can limit the range in which each of the first imaging lens 12AA and the second imaging lens 12AB generates an image. Accordingly, interference between the two images is suppressed. Then, the optical sensor OS can generate the image data at the position AA and the image data at the position AB illustrated in the accompanying drawings.
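As an illustration, one frame read from the area sensor 11 can be divided in software into the image data at the position AA and the image data at the position AB; the equal left/right split below is an assumption for the example, not a dimension from this disclosure.

    import numpy as np

    def split_areas(frame):
        # Divide one frame from the area sensor 11 into the two imaging
        # areas; an equal left/right split is assumed for illustration.
        _, w = frame.shape
        area_aa = frame[:, : w // 2]   # image formed by imaging lens 12AA
        area_ab = frame[:, w // 2 :]   # image formed by imaging lens 12AB
        return area_aa, area_ab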
The area sensor 11′ has a structure illustrated in the accompanying drawings.
Image sensors are generally manufactured for imaging. Therefore, image sensors have an aspect ratio (the ratio between the X-direction size and the Y-direction size), such as square, 4:3, or 16:9, that fits an image format. In the present embodiment, image data covering at least two different points spaced apart is captured. Specifically, image data is generated at each of points spaced apart in the X direction, one direction in two dimensions. The X direction corresponds to the conveyance direction 10 illustrated in the accompanying drawings.
In view of the foregoing, in the structure illustrated in the accompanying drawings, the imaging areas are spaced apart in the X direction.
A structure using a lens array is also illustrated in the accompanying drawings.
One or more aspects of this disclosure can adapt to a liquid discharge system including at least one liquid discharge apparatus. For example, the liquid discharge head unit 210K and the liquid discharge head unit 210C are housed in one case as one device, and the liquid discharge head unit 210M and the liquid discharge head unit 210Y are housed in another case as another device. The liquid discharge system includes the two devices.
Further, one or more aspects of this disclosure can adapt to a liquid discharge apparatus and a liquid discharge system that discharge liquid other than ink. For example, the liquid is a recording liquid of another type or a fixing solution.
The liquid discharge apparatus (or system) to which one or more aspects of this disclosure are applicable is not limited to an apparatus that forms two-dimensional images but can be an apparatus that fabricates three-dimensional articles (3D-fabricated objects).
The conveyed object is not limited to recording media such as paper sheets but can be any material to which liquid adheres, even temporarily. Examples of the material to which liquid adheres include paper, thread, fiber, cloth, leather, metal, plastic, glass, wood, ceramics, and a combination thereof.
Further, one or more aspects of this disclosure are applicable to a method of discharging liquid from a forming apparatus, an information processing apparatus, a computer, or a combination thereof, and at least a portion of the method can be implemented by a program.
The light source is not limited to a laser light source but can be, for example, an organic electroluminescence (EL) element instead of the light-emitting diode (LED) described above. Depending on the light source, the pattern to be detected is not limited to the speckle pattern.
Further, aspects of this disclosure can adapt to any apparatus that performs an operation or processing on a conveyed object, using a movable head that moves in the direction orthogonal to the direction of conveyance of the conveyed object. Movable heads may be arranged in a line in the orthogonal direction.
For example, aspects of this disclosure can adapt to a conveyance apparatus that conveys a substrate (conveyed object) and includes a laser head to perform laser patterning on the substrate. Laser heads may be arranged in a line in the direction orthogonal to the direction of conveyance of the substrate. The conveyance apparatus detects the position of the substrate and moves the head based on the detection result. In this case, the position at which the laser strikes the substrate is the operation position of the head.
The number of head units is not necessarily two or more. Aspects of this disclosure can adapt to a device configured to keep operating at a reference position on a conveyed object.
Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above. Any of the aforementioned methods may be embodied in the form of a program. The program may be stored on a computer-readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor). Thus, the storage medium or computer-readable medium is adapted to store information and is adapted to interact with a data processing facility or computer device to perform the method of any of the above-mentioned embodiments.
Each of the functions of the described embodiments may be implemented by one or more processing circuits or circuitry. Processing circuitry includes a programmed processor, as a processor includes circuitry. A processing circuit also includes devices such as an application specific integrated circuit (ASIC), DSP (digital signal processor), FPGA (field programmable gate array) and conventional circuit components arranged to perform the recited functions.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention.