This invention relates to a method of evaluating a condition of a printhead, such as an inkjet printhead. It has been developed primarily to enable an end-of-life of the printhead to be predicted and communicated to users.
Inkjet printheads are being used increasingly in high volume applications, such as digital presses. For example, U.S. Pat. Nos. 9,139,031 and 10,081,204 (the contents of which are incorporated herein by reference) describe digital presses employing pagewide inkjet printheads.
In such high-volume printing applications, printheads typically require periodic replacement by users. For example, in thermal inkjet printheads, heater elements may fail due to kogation or corrosion over time and the printhead therefore has a finite lifetime.
In many instances, printheads are replaced after a predetermined volume of ink has been consumed by the printhead, in accordance with a manufacturer's recommendation. However, replacement of printheads based on such a crude metric is unreliable. For example, some printing fluids result in shorter printhead lifetimes compared to other printing fluids. In some instances, the type of images printed may lead to faster or slower deterioration of the condition of the printhead—evenly spread usage across the printhead usually results in a relatively longer printhead life; intensive usage in one region of the printhead usually results in a relatively shorter printhead life. Therefore, users may prematurely replace healthy printheads in some instances. Alternatively, users may print images using a poorly performing printhead in other instances, resulting in wastage of media.
Moreover, inspection of print quality is not generally a reliable indicator of the condition of a printhead. Poor print quality may be a result of, for example, printhead misalignment, dust on the printhead or temporarily blocked nozzles. Therefore, print quality may not be a true indicator of the printhead nearing or reaching its end of life.
It would be desirable to provide a method by which the condition of a printhead can be reliably determined. It would further be desirable to provide a method of predicting an end of life of the printhead, enabling users to replace printheads at an optimum time.
In one aspect, there is provided a method of evaluating a condition of a printhead, said method comprising the steps of:
(i) printing an image using the printhead;
(ii) optically imaging at least part of the printed image and determining optical densities for a portion of the printed image;
(iii) converting the optical densities into a one-dimensional signal;
(iv) analyzing one or more portions of the signal using a convolutional neural network to generate a classification for corresponding portions of the signal; and
(v) using each classification to evaluate the condition of corresponding portions of the printhead.
The method advantageously provides a means by which users can evaluate the condition of a printhead during use, without relying on crude indicators such as ink usage. In particular, the method may be used to provide an indication to users as to whether the printhead is nearing its end of life and should be replaced.
Preferred aspects of the invention are described hereinbelow and in the claims appended hereto.
One or more embodiments of the present invention will now be described with reference to the accompanying drawings.
As foreshadowed above, crude techniques for determining printhead life, such as measurement of ink usage, are generally unreliable and typically result in premature replacement of healthy printheads.
Over the lifetime of multiple printheads in the field, a vast amount of data may be generated from optical analyses of test images. Typically, optical analysis of a test image is used to apply optical density compensation (ODC) for improved print quality. For example, due to MEMS manufacturing tolerances, some nozzles in a printhead may be smaller than others and eject lower volumes of ink resulting in lower optical density in the printed image. ODC is a means by which regions of lower optical density can be compensated by adjusting dither thresholds applied during halftoning. Using ODC, the resulting printed image is a truer representation of the original contone image, even with nozzle size variations across the printhead. ODC is also particularly useful for compensating banding artefacts in single pass printing, which arise from stitching multiple printhead chips or printheads together. In a Memjet® printhead, so-called “banding artefacts” (i.e. vertical streaks) characteristically occur at about 1-inch intervals corresponding to a length of each printhead chip in the printhead.
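By way of illustration only, the threshold-adjustment idea behind ODC can be sketched as follows. The actual ODC algorithm is not detailed herein; the names (apply_odc, density_ratio) are illustrative assumptions, with density_ratio representing the measured optical density of each nozzle column relative to its target.

```python
import numpy as np

def apply_odc(contone, dither, density_ratio):
    """Halftone a contone image while compensating per-column optical density.

    contone       : (H, W) array, contone input image, values 0..255
    dither        : (H, W) array, dither threshold matrix, values 0..255
    density_ratio : (W,) array, measured density of each nozzle column
                    relative to target (1.0 = nominal)
    """
    # Columns printing too light (ratio < 1) get lowered thresholds so that
    # more dots are fired there; columns printing too dark get raised ones.
    adjusted = np.clip(dither * density_ratio[np.newaxis, :], 0, 255)
    return contone > adjusted  # boolean dot pattern sent to the printhead
```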
In order to apply ODC, a test image is printed and scanned using a suitable optical scanner or an inline vision system downstream of the printhead.
However, in the method described herein, data captured from test images, such as the test image 10 used in ODC, are used to characterize a condition of the printhead 1 using machine learning. As will be well known to the person skilled in the art, machine learning is a powerful technique that has been applied to many different fields, such as image and audio recognition. In the present context, machine learning is applied to the one-dimensional optical density signal extracted from a printed test image in order to evaluate the condition of the printhead and provide an indication of the printhead's end-of-life.
The test images 10 may, for example, be printed and analysed periodically (e.g. every 1000 sheets) over the lifetime of the printhead for evaluation thereof. Typically, the condition of the printhead rapidly declines towards its end of life and it is advantageous to notify users when the printhead has entered this state of rapid decline so that the printhead can be replaced.
In one preferred embodiment, the test image 10 is scanned at 600 dpi and processed by identifying the position of bull's-eye fiducials 14. The captured image is then digitally warped in order to move the bull's-eye fiducials 14 to their original coordinates in the test image 10. At this point, the scanned image is considered in perfect pixel-by-pixel alignment with the test image 10. Each fixed-density bar 12 of the scanned image is then cropped such that pixels near the boundary of neighbouring bars are discarded. This guard band of pixels is wide enough that the bull's-eye fiducials 14 are similarly discarded. The only remaining pixels are therefore safely inside each fixed-density bar 12, as indicated by the dashed boundary lines 16.
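As a sketch of this alignment step (no specific algorithm is mandated here), the digital warp can be realized as a projective transform estimated from the detected fiducial positions, for example with OpenCV. The names align_scan, src_pts and dst_pts are illustrative; src_pts are the detected bull's-eye centres in the scan and dst_pts their known coordinates in the original test image.

```python
import cv2
import numpy as np

def align_scan(scan, src_pts, dst_pts, out_shape):
    """Warp the 600 dpi scan so that the detected fiducials (src_pts) land on
    their original test-image coordinates (dst_pts)."""
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts),
                              cv2.RANSAC)
    # out_shape is (rows, cols) of the original test image.
    return cv2.warpPerspective(scan, H, (out_shape[1], out_shape[0]))
```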
Once the fixed-density bars 12 are segmented to exclude the fiducials 14, the 1-D optical density signal for each bar (“region of interest”) is generated. The rows (m) of the cropped region of interest (ROI) are averaged together down each column (n) of pixels:

x[n] = (1/M) Σ_{m=1}^{M} ROI[m, n]

where M is the number of rows in the ROI.
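In NumPy, this row-averaging reduces to a single call; here roi is assumed to be the cropped region of interest as an M × N array:

```python
import numpy as np

def density_signal(roi):
    """Collapse an M x N region of interest into a 1-D optical density
    signal: x[n] = (1/M) * sum over m of roi[m, n]."""
    return roi.mean(axis=0)  # average the M rows down each of the N columns
```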
Having generated a pagewide 1-D vector representing optical density, the goal of the neural network is to apply a sliding window that traverses the vector and assigns to each position within the vector a score corresponding to how many pages have been printed.
A sliding window neural network is identical to a discrete finite impulse response (FIR) filter that is convolved with the input vector, except that the sliding window neural network adds a bias (a scalar constant) to the output. Specifically, the operation of the sliding window neural network is given by:

x_o[n] = Σ_{m=0}^{M−1} w[m] · x_i[n − m] + β

where x_i[n] is the nth sample of the input vector, x_o[n] is the nth sample of the output vector, w[m] is the mth of the M weights defining the sliding window neural network, and β is the scalar bias or offset added to the output of the convolution operation.
Another idiosyncrasy of the sliding window neural network is how it handles the edges of the input vector. In particular, there are values of m and n such that n − m corresponds to negative time indices inside x_i[n − m]. In the above implementation, these values are discarded from the output so that the output vector, x_o[n], has a length of N − M + 1, which is M − 1 samples shorter than x_i[n]. So, with this first set of 8 separate sliding window 16×1 neural networks, the 169-sample input vector becomes 8 separate 169 − 15 = 154-sample output vectors.
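A minimal NumPy sketch of one such sliding window neural network follows; np.convolve in 'valid' mode implements exactly the edge handling described above, producing N − M + 1 output samples:

```python
import numpy as np

def sliding_window_nn(x, w, beta):
    """FIR convolution plus scalar bias: x_o[n] = sum_m w[m]*x_i[n-m] + beta.
    'valid' mode discards edge samples, so len(out) == len(x) - len(w) + 1."""
    return np.convolve(x, w, mode='valid') + beta

# e.g. a 169-sample input and a 16-tap window give a 154-sample output
out = sliding_window_nn(np.zeros(169), np.zeros(16), 0.5)
assert out.shape == (154,)
```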
The 8 length-154 output vectors are then processed by means of a 2×1 max-pooling layer. This is a simple operation that takes each of the 8 vectors and divides it into 77 non-overlapping length-2 vectors. For each length-2 vector, the pooling layer chooses the larger of the 2 values and replaces the length-2 vector with this 1 scalar value. As such, the 8 length-154 output vectors become 8 length-77 output vectors.
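The 2×1 max-pooling operation is equally compact; the input length is assumed even:

```python
import numpy as np

def max_pool_2(x):
    """Split x into non-overlapping length-2 pairs and keep the larger value
    of each pair, halving the vector length (154 -> 77 in this stage)."""
    return x.reshape(-1, 2).max(axis=1)
```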
The next stage of processing is another series of sliding window neural networks. Specifically, there are 8 separate sliding window 16×1 neural networks, with 1 unique sliding window 16×1 neural network for each of the 8 length-77 output vectors from the previous stage. The outputs of these 8 sliding window neural networks correspond to 8 (77 − 15) = 62-sample vectors, which are then summed together into a single 62-sample output vector. This vector is then downsampled using another 2×1 max-pooling layer, choosing the maximum scalar value for each length-2 vector and resulting in a single length-31 vector.
The next stage of processing is a single sliding window 16×1 neural network followed by another 2×1 max-pooling layer. The output of these two operations is a single length-8 vector. The length-8 vector is finally processed by a fully connected neural network with 1 hidden layer. Specifically, the length-8 vector is multiplied by an 8×8 matrix to produce another length-8 vector, and each element of this resulting vector is then offset by a separate bias term. The resulting length-8 vector is considered the hidden layer, which is then processed by a second fully connected layer: the length-8 hidden layer vector is multiplied by an 8×1 matrix to produce a length-1 vector, i.e. a scalar value. This scalar value is offset by a bias term, with the resulting scalar value being the output of the network. With proper training, it is this scalar output that is indicative of the condition (or the age) of the printhead.
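Putting the stages together, the full forward pass can be sketched in NumPy as below. The params structure and names are illustrative assumptions; note that, since no activation function is described, the max-pooling layers supply the only nonlinearity in this sketch.

```python
import numpy as np

def conv_bias(x, w, beta):
    return np.convolve(x, w, mode='valid') + beta

def max_pool_2(x):
    return x.reshape(-1, 2).max(axis=1)

def printhead_score(x, params):
    """Forward pass of the CNN described above for a length-169 input signal."""
    # Stage 1: 8 separate 16x1 filters -> 8 x 154, pooled to 8 x 77
    h = [max_pool_2(conv_bias(x, w, b)) for w, b in params['stage1']]
    # Stage 2: one 16x1 filter per channel, outputs summed -> 62, pooled to 31
    s = max_pool_2(sum(conv_bias(hc, w, b)
                       for hc, (w, b) in zip(h, params['stage2'])))
    # Stage 3: single 16x1 filter -> 16, pooled to 8
    s = max_pool_2(conv_bias(s, *params['stage3']))
    # Fully connected: 8x8 matrix + per-element bias -> hidden layer of 8,
    # then 8x1 matrix + scalar bias -> scalar printhead score
    W1, b1 = params['fc1']
    W2, b2 = params['fc2']
    hidden = W1 @ s + b1
    return float(W2 @ hidden + b2)

# Shapes only; real weights and biases come from training.
rng = np.random.default_rng(0)
params = {
    'stage1': [(rng.standard_normal(16), 0.0) for _ in range(8)],
    'stage2': [(rng.standard_normal(16), 0.0) for _ in range(8)],
    'stage3': (rng.standard_normal(16), 0.0),
    'fc1': (rng.standard_normal((8, 8)), rng.standard_normal(8)),
    'fc2': (rng.standard_normal(8), 0.0),
}
score = printhead_score(rng.standard_normal(169), params)
```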
Whilst the evaluation process has been described above in connection with a pagewide inkjet printhead, it will be readily understood by those skilled in the art that this evaluation process may be used with any type of printhead or printing device having a limited lifetime.
Training of the CNN means specifying the weights, w[m], and biases, β, of the sliding window convolutional neural networks, as well as of the fully connected neural networks used in the final stages of the CNN. Training of a CNN is generally a well-known process and, in the present context, involves choosing an initial set of weights and biases and then running training data, whose printhead scores are already known, through the network. The difference between the output of the network and the already known age is then used to derive an error that is back-propagated through the network in order to update the weights and biases of the various neural networks.
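As a hedged sketch of such a training loop (assuming the network has been re-expressed as a torch.nn.Module called model, and that signals and page_counts hold the labelled 1-D signals and their known ground-truth scores as tensors):

```python
import torch

def train(model, signals, page_counts, epochs=100, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in zip(signals, page_counts):
            opt.zero_grad()
            loss = loss_fn(model(x), y)  # error vs. the already-known age
            loss.backward()              # back-propagate the error
            opt.step()                   # update weights and biases
    return model
```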
The key to successfully training the CNN and ultimately obtaining reliable scores is how the training data are acquired. By way of example only, print data was collected from a set of 28 different printheads. Each printhead was mounted in a Memjet® label printer and used to print test images beyond the count that would normally be deemed end of life. Of the many test images printed, the test image 10 described above was used for the analysis.
To acquire the 1-D signal from the printed sheets, the test images 10 were scanned using a flatbed scanner at 600 dpi and processed. Specifically, each fixed-density bar 12 of the test chart 10 is analysed with reference to the bull's-eye fiducials 14, as described above. A single scan therefore generates 16 unique signals, with one 1-D signal 18 for each step in ink density.
Whilst CNN training has been described above using only a small sample of 28 printheads, it will be readily appreciated that, in practice, CNN training using a larger sample of printheads will improve the accuracy of the CNN for evaluating printhead life. Training data may be acquired from a large number of printheads in the field so as to continuously optimize the CNN. For example, training data from printheads in the field may be sent over a computer network to a server so as to train the CNN; users may then receive periodic software upgrades based on the updated CNN and, therefore, improve the accuracy of printhead end-of-life evaluations in the field.
The term “training data” generally refers to all the data with labels, i.e. ground truth scores; however, the training data is divided into a first set used to train the weights and biases of the CNN, as described above, and the remainder, which is used to confirm that the final CNN accurately scores the data. This second set of data is the validation data.
In order to maximize the performance of a neural network, it is often beneficial to pre-process the data as a form of normalization. For instance, when classifying images, programmers will apply histogram equalization or an edge-detection process so that the classifier detects patterns in the edges and does not focus on features like the mean value of the data. Evaluating printhead life is no different, in that the mean value of the signal needs to be removed so that the neural network can focus on the pattern of streaks (corresponding to banding artefacts), which appear as spikes in the 1-D signal.
Specifically, it is noted that the spikes exist primarily in the high frequency spectral bands of the signal. As such, the signal is prefiltered with a discrete stationary wavelet transform using the Daubechies-5 wavelet, whereby the original signal is decomposed to its level-5 coefficients. These 5th-level coefficients are then discarded and the signal reconstructed. Since the streaks associated with printhead defects are largely restricted to the first few levels, discarding the level-5 coefficients should have no effect on the classifier.
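A sketch of this prefilter using the PyWavelets library follows. Two assumptions are made: “discarding the level-5 coefficients” is read as zeroing the coarsest (level-5) approximation band, consistent with the stated goal of removing the mean; and the signal is padded because pywt.swt requires a length that is a multiple of 2^5 = 32.

```python
import numpy as np
import pywt

def prefilter(signal):
    """Remove low-frequency content (including the mean) from a 1-D density
    signal via a 5-level stationary wavelet transform with the db5 wavelet."""
    n = len(signal)
    x = np.pad(signal, (0, (-n) % 32), mode='symmetric')  # pad to multiple of 32
    coeffs = pywt.swt(x, 'db5', level=5)   # [(cA5, cD5), ..., (cA1, cD1)]
    cA5, cD5 = coeffs[0]
    coeffs[0] = (np.zeros_like(cA5), cD5)  # discard the level-5 approximation
    return pywt.iswt(coeffs, 'db5')[:n]    # reconstruct and crop the padding
```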
A benefit of the above pre-filtering process is that it allows user image content to exist in the deeper decomposition levels without affecting the ability of the CNN to measure printhead life. This means that dedicated space on the printed page for a calibration target becomes unnecessary and that printhead life can be evaluated with no consequence to the user. Therefore, any relatively ‘flat’ contone region of a printed image may be used for evaluation of printhead life using the CNN, without the need for analysis of a dedicated printhead test image. For example, a blue sky may be a suitable region of a printed image for analysis.
The foregoing describes only some embodiments of the present invention, and modifications of detail may be made thereto without departing from the scope of the invention, the embodiments being illustrative and not restrictive.
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/008,499, entitled METHOD OF EVALUATING PRINTHEAD CONDITION, filed on Apr. 10, 2020, the disclosure of which is incorporated herein by reference in its entirety for all purposes.