Printing devices operate to generate a rendered output, for example by depositing discrete amounts of a print agent, such as an ink, on a print medium. In order to render an image, such as a two-dimensional photo, image data is converted into printing instructions for the printing device. In one technique, referred to as halftoning, a continuous tone image is approximated via the rendering of discrete quantities (“dots”) of an available print agent arranged in a spaced-apart configuration. Halftoning may be used to generate a grey-scale image. In some examples, halftone patterns of different colorants, such as Cyan, Magenta, Yellow and Black (CMYK) print agents, may be combined to generate color images.
Various features of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate features of certain examples, and wherein:
Certain examples described herein relate to halftoning. For example, a printed image may appear to have continuous tone from a distance, e.g. colors “blend” into each other. However, when inspected at close range, the printed image is found to be constructed from discrete deposit patterns. Comparative colorant channel approaches to halftoning involve a color separation stage that specifies colorant amounts to match colors within the image to be rendered; e.g. in a printing system, it determines how much of each of the available print agents (which may be inks or other imaging materials) is to be used for printing a given color. For example, a given output color in a CMYK printer may be set as 30% Cyan, 30% Magenta, 40% Yellow and 0% Black (e.g. in a four-element vector where each element corresponds to an available colorant). Input colors in an image to be rendered are mapped to such output colors. Following this, when using colorant channel approaches, halftoning involves making a spatial choice as to where to apply each colorant in turn, given the colorant amounts from the color separation stage. For a printing device, this may comprise determining a drop pattern for each print agent, wherein the drop patterns for the set of print agents are layered together and printed.
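As a minimal sketch of the colorant channel approach just described, the following Python example maps an input color to a vector of colorant amounts and then makes a per-colorant spatial choice. The separation table, the stochastic dot-placement rule and all values are hypothetical assumptions chosen for illustration; they are not taken from the examples above.

```python
import numpy as np

# Hypothetical color separation: map an input color to fractional amounts
# of the available C, M, Y and K print agents (illustrative values only).
def separate(input_color_name):
    separation_table = {
        "olive": np.array([0.30, 0.30, 0.40, 0.00]),  # C, M, Y, K coverage
    }
    return separation_table[input_color_name]

# Per-colorant halftoning: decide, pixel by pixel, where to place a dot of
# each print agent so that the average coverage matches the separation.
def halftone_channel(coverage, height, width, rng):
    # Simple stochastic placement: a dot is placed wherever a random value
    # falls below the requested coverage fraction.
    return (rng.random((height, width)) < coverage).astype(np.uint8)

rng = np.random.default_rng(0)
amounts = separate("olive")
drop_patterns = [halftone_channel(a, 64, 64, rng) for a in amounts]
# drop_patterns[0..3] are the C, M, Y and K dot patterns to be layered.
```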
The terms “halftone screen” and “halftone pattern” refer to the pattern of dots applied to produce an image, and may be defined by characteristics such as a number of lines per inch (LPI) (the number of dots per inch measured along the axis of a row of dots), a screen angle (defining the angle of the axis of a row of dots) and a dot shape (for example, circular, elliptical or square). In some examples a halftone screen is created using amplitude modulation (AM), in which a variation in dot size in a regular grid of dots is used to vary the image tone. In other examples, a halftone screen is created using frequency modulation (FM) (also referred to as “stochastic screening”), in which a pseudo-random distribution of dots is used, and the image tone is varied by varying the density of dots.
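The two screening approaches can be sketched as follows, assuming a small hypothetical 4x4 clustered-dot threshold tile for the AM case; the tile values, patch size and tone value are illustrative assumptions only.

```python
import numpy as np

def am_halftone(tone, threshold_tile):
    """Amplitude modulation: tile a fixed threshold matrix over the image;
    darker tones switch on more cells of each tile, growing the dot size."""
    h, w = tone.shape
    th, tw = threshold_tile.shape
    tiled = np.tile(threshold_tile, (h // th + 1, w // tw + 1))[:h, :w]
    return (tiled < tone).astype(np.uint8)  # 1 = place a dot

def fm_halftone(tone, rng):
    """Frequency modulation (stochastic screening): pseudo-random thresholds,
    so tone is varied by dot density rather than dot size."""
    return (rng.random(tone.shape) < tone).astype(np.uint8)

# Illustrative 4x4 clustered-dot threshold tile (low values at the center,
# so dots grow outward from the tile center), scaled to the range (0, 1).
tile = (np.array([[16, 12, 11, 15],
                  [ 9,  2,  1,  8],
                  [10,  3,  4,  7],
                  [13,  5,  6, 14]]) - 0.5) / 16.0

tone = np.full((64, 64), 0.4)        # constant 40% coverage patch
rng = np.random.default_rng(0)
am_pattern = am_halftone(tone, tile)  # regular grid of growing dots
fm_pattern = fm_halftone(tone, rng)   # pseudo-random dot distribution
```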
Different halftone screens may be suitable for different types of images having different characteristics. For example, halftone screens with a high LPI may be more suitable for images with many small details, whereas halftone screens with a low LPI may produce lower graininess and therefore be more suitable for representing smooth areas. Different halftone screens may therefore be designed for different purposes.
When designing a halftone screen, a halftone version of an original image created using the halftone screen may be rendered on a display (for example, a computer screen) and compared with the original image to assess how faithfully the halftone image reproduces the original image. However, when the halftone image is printed using a printing device, it may appear different from the displayed version. This may be due to, for example, distortions that occur during the printing process.
In
The system 150 may include a printer 156, which may receive instructions to print an image corresponding to the halftone image 142 on a printing medium, such as paper, generating a printing medium image 152c. In one example, the printer 156 comprises a printing press. The system 150 may include a scanner 158 to scan the printing medium image 152c to generate printed image data 152d, which may be a digital image, for example. The scanner 158 may be an automated scanner comprising a microscope (not shown) capable of capturing high-resolution images from the printing medium image 152c, for example at a resolution of 4800 dots per inch (DPI).
As mentioned above, the computing apparatus 100 storing the neural network 110 of
In
The input image data 152b and corresponding printed image data 152d (each derived from the original image 152a) may be considered to form a set of image data. The input image data 152b and printed image data 152d may take the form of data files, such as digital data files, for example. The digital data files may have a format such as TIF, JPG, PNG or GIF (e.g. a lossless version of one of these formats), for example.
At 210, the computing apparatus 100 receives a plurality of sets of image data, each set of image data representing a respective image (original image 152a) using a halftone pattern (halftone screen). The original image 152a may be the whole or part of a digital image, for example. As described above, each of the sets of image data may comprise input image data 152b representing the original image and corresponding printed image data 152d representing a printed version of the original image portion printed on the basis of the halftone pattern. The input image data 152b may be a representation, such as a digital representation, of the intended halftone screen, e.g. with dots at locations and of sizes as theoretically expected. The corresponding printed image data 152d may be generated based on a printed version of the input image data 152b. For example, the corresponding printed image data may be generated by scanning a printing medium image 152c using a scanner device, such as scanner 158, the printing medium image 152c being generated by printing an image on a printing medium based on the input image data, as described above.
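A minimal sketch of how such sets of image data might be assembled is given below. It assumes the input halftone representations (152b) and the scanned prints (152d) are stored as image files in two hypothetical directories and are matched by file name; the directory layout, file format and use of the Pillow library are assumptions for illustration.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_image_sets(input_dir, printed_dir):
    """Pair each input halftone image (152b) with the scanned print (152d)
    derived from the same original image, matched here by file name."""
    sets = []
    for input_path in sorted(Path(input_dir).glob("*.png")):
        printed_path = Path(printed_dir) / input_path.name
        input_image = np.asarray(Image.open(input_path).convert("L"),
                                 dtype=np.float32) / 255.0
        printed_image = np.asarray(Image.open(printed_path).convert("L"),
                                   dtype=np.float32) / 255.0
        sets.append((input_image, printed_image))
    return sets

# Hypothetical directory layout: one file per image portion.
image_sets = load_image_sets("data/input_halftones", "data/scanned_prints")
```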
The sets of image data may each relate to different portions of the original image 152a, for example. The different portions may be mutually exclusive portions, or overlapping portions, of the original image 152a. In some examples, the sets of image data relate to different original images 152a.
At 220, the processor trains the neural network 110 to generate a mapping between input image data 152b and printed image data 152d by iteratively performing a training process. The training process may comprise providing given input image data from a given set of the plurality of sets of image data as an input to the neural network 110. An output of the neural network 110, such as the output image 152e, generated on the basis of the given input image data may be compared with given corresponding printed image data 152d from the given set of the plurality of sets of image data. In an example, the printed image data 152d is used as ground-truth data, and the parameters of the neural network 110 are adjusted so as to reduce, for example to minimize, a loss between the output image 152e and the printed image data.
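A minimal PyTorch sketch of such a training process is shown below, treating the printed image data as ground truth and reducing an absolute-difference loss. It assumes each image pair has already been converted to 4-D tensors (batch, channel, height, width); the optimizer, learning rate and epoch count are illustrative assumptions, and the network passed in as `model` is a placeholder for whatever architecture is used (a sketch with a transpose convolution follows the next paragraph).

```python
import torch
from torch import nn

def train(model, image_sets, epochs=10, lr=1e-3):
    """image_sets: iterable of (input_image, printed_image) tensor pairs,
    where the printed image (ground truth) may be at a higher resolution."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # absolute-difference loss, in the spirit of Equation (1)
    for _ in range(epochs):
        for input_image, printed_image in image_sets:
            output_image = model(input_image)            # predicted printed image
            loss = loss_fn(output_image, printed_image)  # compare to ground truth
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                             # adjust network parameters
    return model
```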
In an example, the printed image data 152d has a higher resolution than the input image data 152b. For example, the printed image data 152d may have six times the resolution of the input image data 152b (in units of LPI). Using high resolution printed image data may enable a more accurate reflection of the image as perceived by a user. For a given image area, this means that the amount of data (the size) of the printed image data is larger than the amount of data (the size) of the input image data. In order to map between the images of different size, the neural network 110 may include a deconvolution layer (also referred to as a transpose convolution layer). Examples of the neural network 110 and the training process are described in more detail below with reference to
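For illustration, a network containing a transpose convolution (deconvolution) layer that maps a lower-resolution input to an output six times larger in each spatial dimension might look like the following sketch; the channel counts, kernel sizes and the interpretation of the six-fold factor as a per-dimension pixel ratio are assumptions made for this example.

```python
import torch
from torch import nn

class PrintModel(nn.Module):
    """Maps an input halftone image to a predicted printed image whose
    spatial size is 6x that of the input, via a transpose convolution."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # stride=6 upsamples by a factor of six in each dimension.
        self.upsample = nn.ConvTranspose2d(16, 1, kernel_size=6, stride=6)

    def forward(self, x):
        return self.upsample(self.features(x))

model = PrintModel()
x = torch.rand(1, 1, 32, 32)   # batch of one single-channel input image
print(model(x).shape)          # torch.Size([1, 1, 192, 192])
```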
At 230, the processor 102 generates a model, such as a mathematical model, for predicting a characteristic of a printed halftone image on the basis of the mapping. In an example, the model comprises the trained neural network 110 itself, or a representation of same. The model 160 may be saved to a storage medium, for example, and include computer-executable instructions. These instructions may subsequently be used on a computing device, such as a general-purpose computer, to predict a characteristic of a printed halftone image based on input image data using a given halftone pattern. The predicted characteristic may comprise, for example, a dot size or location, or a deviation of same from an intended value. In one example, predicting a characteristic may comprise producing a halftone image (for example, a digital image to be rendered on a computer screen) representing the predicted printed halftone image.
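A minimal sketch of saving the trained network as the model 160 and later using it for prediction follows. It reuses the hypothetical `PrintModel` and `model` from the preceding sketches, assumes an input tensor `input_halftone` is available, and uses an illustrative file path and coverage threshold.

```python
import torch

# Save the trained network so it can be executed later on another device.
torch.save(model.state_dict(), "model_160.pt")

# Later, e.g. on a general-purpose computer: load the model and predict.
loaded = PrintModel()
loaded.load_state_dict(torch.load("model_160.pt"))
loaded.eval()

with torch.no_grad():
    predicted_print = loaded(input_halftone)  # input_halftone: 1x1xHxW tensor

# The prediction can be rendered on a display, or analysed for a predicted
# characteristic, e.g. the fraction of the area covered by dots.
dot_coverage = (predicted_print > 0.5).float().mean()
```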
The method 200 described above enables a halftone printing process to be modeled by treating the printing process as a “black box”. This is simpler and more accurate than an analytic approach that attempts to model the various stages of the printing process individually.
In some examples, different models 160 may be generated for different types of halftone pattern. For example, one model 160 may be generated for halftone screens having a given LPI or range of LPIs. In this case, the plurality of sets of image data described above may comprise a first plurality of sets of image data relating to a first type of halftone pattern and a second plurality of sets of image data relating to a second type of halftone pattern. A first model 160 may then be generated for predicting a characteristic of a printed halftone image for the first type of halftone pattern based on the first plurality of sets of image data, and a second model 160 generated for predicting a characteristic of a printed halftone image for the second type of halftone pattern based on the second plurality of sets of image data. Similarly, different models 160 may be generated for different colors. In this case, the plurality of sets of image data described above may comprise a first plurality of sets of image data each representing a respective image of a first color and a second plurality of sets of image data each representing a respective image of a second color. A first model 160 may then be generated for predicting a characteristic of a printed halftone image of the first color based on the first plurality of sets of image data, and a second model 160 generated for predicting a characteristic of a printed halftone image of the second color based on the second plurality of sets of image data.
Similarly, different models 160 may be generated for different types (for example, different models) of printer, in order to take account of the different characteristics of the different types of printers.
As mentioned above, in an example, the computing apparatus 100 may comprise part of a printer. Such an arrangement may be used to train the neural network 110 to generate a model 160 specifically tailored to the particular individual printer. For example, the printer may be used to print an image portion on a printing medium (for example, paper) based on the input image data to generate a printing medium image. The printing medium image may then be scanned, using a scanner function of the printer for example, to generate the corresponding printed image data. This enables a model 160 to be generated reflecting the characteristics of an individual printer. In this example (as well as other examples), part of the training process may be performed on a device different from the computing apparatus 100. The training process performed on the computing apparatus 100 may take the form of a calibration process, for example using a single set of input image data and corresponding printed image data, or a relatively small number of such sets.
In the example of
In one example, a Mean Average Error (MAE) loss function may be used as the loss value to be minimized, as illustrated in equation (1):
$$\mathrm{MAE} = \sum_{n=0}^{\mathrm{all}} \left| Img_{GT} - Img_{P} \right| \qquad \text{Equation (1)}$$
In another example, an accuracy loss function may be used, as illustrated in equation (2):
In equations (1) and (2), Img_GT is a feature vector in the printed image data 152d and Img_P is a feature vector in the output image 152e.
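For illustration, a direct computation of Equation (1) over two equally sized image arrays might look like the following sketch.

```python
import numpy as np

def mae_loss(img_gt, img_p):
    """Sum of absolute differences between ground-truth and predicted
    values, as in Equation (1)."""
    return np.abs(img_gt - img_p).sum()
```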
As mentioned above, in order to accommodate cases where the image size of the printed image data (and therefore the size of the reconstructed image data) is larger than that of the input image data, the CNN 300 may include a deconvolution layer to map between images of different sizes. In the example of
As mentioned, a filter 406 is applied to each extracted subset. Example filter 406 includes parameter values a to i. In an example, the filter 406 is applied by multiplying each value of the subset 404 by the parameter value at the corresponding position in the filter 406 and summing the resulting products over all positions; the resulting sum is then used as a value at a position in the data set 316 which forms the output of the deconvolution layer 304. Grid 408 in
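A minimal NumPy sketch of the multiply-and-sum operation just described is shown below, using a 3x3 subset and a 3x3 filter with hypothetical values; in the layer itself this operation is repeated for every output position.

```python
import numpy as np

# Hypothetical 3x3 subset of the input (cf. subset 404) and 3x3 filter
# (cf. filter 406) with parameter values a..i.
subset = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
filt = np.array([[0.1, 0.2, 0.3],   # a b c
                 [0.4, 0.5, 0.6],   # d e f
                 [0.7, 0.8, 0.9]])  # g h i

# Multiply each subset value by the filter parameter at the same position
# and sum the products; the sum becomes one value of the output data set.
output_value = float((subset * filt).sum())
print(output_value)  # 0.1 + 0.3 + 0.5 + 0.7 = 1.6
```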
In some examples, a different type of neural network 110 to the CNN 300 illustrated in
As mentioned above, a model 160 generated by the methods described above may be saved, as computer-readable instructions, to a computer-readable storage medium, and those instructions may be executed by a computing device such as a general-purpose computing device.
The preceding description has been presented to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Features of individual examples may be combined in different configurations, including those not explicitly set out herein. Many modifications and variations are possible in light of the above teaching.