Printing devices can use a variety of different technologies to form images on media such as paper. Such technologies include dry electrophotography (EP) and liquid EP (LEP) technologies, which may be considered as different types of laser and light-emitting diode (LED) printing technologies, as well as fluid-jet printing technologies like inkjet-printing technologies. Printing devices deposit print material, such as colorants like dry and liquid toner as well as printing fluids like ink, among other types of print material.
As noted in the background, printing devices can be used to form images on media using a variety of different technologies. While printing technologies have evolved over time, they are still susceptible to various print defects. Such defects may at first manifest themselves nearly imperceptibly before reaching the point at which print quality has noticeably degraded. Detecting print quality degradation before it becomes too excessive can make ameliorating the root problem less costly and time-consuming, and can also improve end user satisfaction with a printing device. Accurate identification and assessment of print quality degradation can assist in the identification of the defects responsible for, and the root causes of, the degradation.
Assessing degradation in the print quality of a printing device has traditionally been a cumbersome, time-consuming, and costly affair. An end user prints a specially designed test image and provides the printed image to an expert. The expert, in turn, evaluates the test image, looking for telltale signs of print defects to assess the overall degradation in print quality of the printing device. Upon locating such print defects, the expert may be able to discern the root causes of the degradation and provide solutions to resolve them. With the provided solutions in hand, the end user may thus be able to fix the problems before they become too unwieldy to correct or more severely impact print quality.
Techniques described herein, by comparison, provide for a way by which degradation in the print quality of a printing device can be assessed without having to involve an expert or other user. The techniques instead generate feature vectors that characterize image quality defects within a test image that a printing device has printed. The test image corresponds to a reference image in which regions of interest (ROIs) of different types are identified. The different types of ROIs can include raster, symbol, background, and vector ROI types, for instance. The feature vectors can be generated based on a comparison of the ROIs within the reference image and their corresponding ROIs within the test image. Whether print quality has degraded below a specified level of acceptability can be assessed based on the generated feature vectors.
A print job 102 may include rendered data specially adapted to reveal image quality defects of a printing device when printed, or may be data submitted for printing during the normal course of usage of the device, such as by the end user, and then rendered. The print job 102 may be defined in a page description language (PDL), such as PostScript or the printer control language (PCL). The definition of the print job 102 can include text (e.g., human-readable) or binary data streams, intermixed with text or graphics to be printed. Source data may thus be rendered to generate the print job 102.
The method 100 includes imaging the print job 102 (104) to generate a reference image 106 of the job 102. Imaging the print job 102 means that the job 102 is converted to a pixel-based, or bitmap, reference image 106 having a number of pixels. The imaging process may be referred to as rasterization. The print job 102 is also printed (108) and scanned (110) to generate a test image 112 corresponding to the reference image 106. The print job 102 may be printed by a printing device performing the method 100, or by a computing device performing the method 100 sending the job 102 to a printing device. The print job 102 may be scanned using an optical scanner that may be part of the printing device or a standalone scanning device.
The method 100 includes extracting ROIs from the reference image 106 using an object map 120 for the reference image 106 (116), to generate reference image ROIs 122. The object map 120 distinguishes different types of objects within the reference image 106, and specifies the type of object to which each pixel of the image 106 belongs. Such different types of objects can include symbol objects including text and other symbols, raster images including pixel-based graphics, and vector objects including vector-based graphics. The object map 120 may be generated from the print job 102 or from the reference image 106. An example technique for generating the object map 120 from the reference image 106 is described in Z. Xiao et al., “Digital Image Segmentation for Object-Oriented Halftoning,” Color Imaging: Displaying, Processing, Hardcopy, and Applications 2016.
The reference image ROIs 122 may be extracted from the reference image 106 and the object map 120 using the technique described in the co-filed PCT patent application entitled “Region of Interest Extraction from Reference Image Using Object Map,” filed on [date] and assigned patent application No. [number]. Another such example technique is described in M. Ling et al., “Traffic Sign Detection by ROI Extraction and Histogram Features-Based Recognition,” 2013 IEEE International Joint Conference on Neural Networks. Each reference image ROI 122 is a cropped portion of the reference image 106 of a particular ROI type. There may be multiple ROIs 122 of the same ROI type. The ROIs 122 are non-overlapping, and can identify areas of the reference image 106 in which print defects are most likely to occur and/or be discerned when the image 106 is printed.
The ROI types may correspond to the different object types of the object map 120, and include symbol and raster ROI types respectively corresponding to the symbol and raster object types. In one implementation, the ROI types may thus include a vector ROI type as well, corresponding to the vector object type. In another implementation, however, there may be two ROI types corresponding to the vector object type instead of just one. The vector ROI type may itself include just uniform non-white areas as well as smooth gradient color areas, whereas another, background ROI type may include just uniform areas in which no colorant is printed, and which thus have the color of the media.
The method 100 can include aligning the printed and scanned test image 112 with the reference image 106 (114), to correct misalignment between the test image 112 and the reference image 106. That is, upon printing and scanning, the location of each pixel within the test image 112 may differ from the location of the corresponding pixel within the reference image 106. The alignment process can include shifting the test image 112 horizontally and/or vertically, among other operations, to align the locations of the pixels in the test image 112 with their corresponding pixels in the reference image 106, within a margin of error. An example alignment technique is described in A. Myronenko et al., “Intensity-Based Image Registration by Minimizing Residual Complexity,” 2010 IEEE Transactions on Medical Imaging, 29(11).
The method 100 can include color calibrating the aligned test image 112 against the reference image 106 (116), to correct for color variations between the test image 112 and the reference image 106. That is, upon printing and scanning, the color of each pixel within the test image 112 may vary from the color of its corresponding pixel within the reference image 106, due to the manufacturing and operational tolerances and characteristics of the printing device and/or the scanner. The color calibration process can thus modify the color of each pixel of the test image 112 so that it corresponds to the color of the corresponding pixel of the reference image 106, within a margin of error. An example color calibration technique is described in E. Reinhard et al., “Color Transfer Between Images,” 2001 IEEE Computer Graphics and Applications, 21(5).
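The referenced color transfer technique operates on per-channel color statistics. The core idea can be sketched as matching each LAB channel's mean and standard deviation in the test image to those of the reference image; this simplified, global version (and the function name) is an illustrative assumption, not the exact published method.

```python
import numpy as np

def color_transfer(test_lab, ref_lab):
    """Shift and scale each LAB channel of the test image so its mean and
    standard deviation match the reference image. Simplified global sketch
    of statistics-based color transfer."""
    out = test_lab.astype(float).copy()
    for ch in range(3):
        t = out[..., ch]
        r = ref_lab[..., ch].astype(float)
        t_std = t.std()
        # Avoid division by zero on constant channels.
        scale = (r.std() / t_std) if t_std > 0 else 1.0
        out[..., ch] = (t - t.mean()) * scale + r.mean()
    return out
```

After this step, each channel of the calibrated test image has the same mean and standard deviation as the corresponding reference channel, within floating-point error.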
The method 100 can include cropping the color calibrated test image 112 to generate test image ROIs 126 corresponding to the reference image ROIs 122 (124). For example, a reference image ROI 122 is a cropped portion of the reference image 106 at a particular location within the image 106 and having a particular size. As such, the corresponding test image ROI 126 is a cropped portion of the test image 112 at the same location within the image 112 and having the same particular size. There is, therefore, a one-to-one correspondence between the reference image ROIs 122 and the test image ROIs 126.
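The cropping of part 124 amounts to slicing the test image at the same locations and sizes as the reference image ROIs. A minimal sketch, assuming each ROI is described by a (row, column, height, width) box (the box format is an assumption for illustration):

```python
import numpy as np

def crop_test_rois(test_image, roi_boxes):
    """Crop the test image at the same location and size as each reference
    image ROI, yielding the one-to-one corresponding test image ROIs."""
    return [test_image[r:r + h, c:c + w] for (r, c, h, w) in roi_boxes]
```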
The symbol ROIs 122A, the raster ROI 122B, the vector ROIs 122C, and the background ROIs 122D can collectively be referred to as the reference image ROIs 122. Each reference image ROI 122 is a cropped contiguous portion of the reference image 106 of an ROI type. The reference image ROIs 122 do not overlap one another; that is, each pixel of the reference image 106 belongs to at most one ROI 122. Whereas the object map 120 specifies the object 202 to which every pixel of the reference image 106 belongs, the reference image ROIs 122 each include just a subset of the pixels of the reference image 106. Further, not all the pixels of the reference image 106 may be included within any reference image ROI 122.
For each extracted reference image ROI 122 (302) of each ROI type (304), the method 300 includes the following. The reference image ROI 122 of an ROI type and its corresponding test image ROI 126 are compared to one another (306). Based on the results of this comparison, a feature vector is generated for the ROI (308). The feature vector characterizes image quality defects within the test image ROI 126. For each ROI type (304), the generated feature vectors for the ROIs of the ROI type in question can then be combined (e.g., concatenated or subjected to a union operation) (310), to generate a feature vector characterizing image quality defects within the test image 112 for that ROI type. The feature vectors for the different ROI types may in turn be combined to generate a single feature vector characterizing image quality defects within the test image 112 as a whole (312).
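The combining of parts 310 and 312 can be as simple as concatenation, one of the options named above; a sketch under that assumption:

```python
import numpy as np

def combine_feature_vectors(vectors):
    """Combine per-ROI (or per-ROI-type) feature vectors into a single
    feature vector by concatenation."""
    return np.concatenate([np.asarray(v, dtype=float) for v in vectors])
```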
A feature vector is a vector (e.g., collection or set) of image characteristic-based values. The feature vector for each ROI type can be defined to include such image characteristic-based values that best characterize the image quality defects within the test image 112 for that ROI type. An example manner by which a reference image ROI 122 of each ROI type can be compared to its corresponding test image ROI 126 is described later in the detailed description. An example definition of the feature vector for each ROI type, and an example manner by which the feature vector can be generated based on the results of the comparison of a reference image ROI 122 of that ROI type and its corresponding test image ROI 126, are also described later in the detailed description.
In one implementation, parts 306, 308, 310, and 312 can be performed by the printing device that printed and scanned the print job 102 to generate the test image 112. In another implementation, parts 306, 308, 310, and 312 can be performed by a computing device separate from the printing device. For example, once the printing device has printed the print job 102, a scanner that is part of the printing device or part of a standalone scanning device may scan the printed print job 102 to generate the test image 112, and then the computing device may perform parts 306, 308, 310, and 312.
The method 300 can include assessing whether print quality of the printing device that printed the test image 112 has degraded below a specified acceptable print quality level (314), based on the generated feature vectors that may have been combined in part 312. Such an assessment can be performed in a variety of different ways. For example, an unsupervised or supervised machine learning technique may be employed to discern whether print quality has degraded below a specified acceptable print quality level. As another example, the generated feature vectors may be subjected to a rule-based or other algorithm to assess whether print quality has degraded below a specified acceptable print quality level. As a third example, the values within the generated feature vectors may each be compared to a corresponding threshold, and if more than a specified weighted or unweighted number of the values exceed their thresholds, then it is concluded that print quality has degraded below a specified acceptable print quality level.
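The third example of part 314 can be sketched as follows; the function and parameter names are illustrative assumptions, and the per-value thresholds and weights would in practice be tuned for the printing device and ROI types in question.

```python
import numpy as np

def quality_degraded(features, thresholds, max_exceed, weights=None):
    """Threshold-counting assessment: conclude that print quality has
    degraded below the acceptable level when more than a specified
    (optionally weighted) number of feature values exceed their
    corresponding thresholds."""
    v = np.asarray(features, dtype=float)
    t = np.asarray(thresholds, dtype=float)
    w = np.ones_like(v) if weights is None else np.asarray(weights, dtype=float)
    # Sum of (weighted) indicator values for features over threshold.
    return float(np.sum(w * (v > t))) > max_exceed
```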
The assessment of part 314 can be performed by a cloud-based computing device. For example, the combined feature vectors may be transmitted by a printing device or a computing device to another computing device over a network, such as the Internet. The latter computing device may be considered a cloud-based computing device, which performs the assessment.
The method 300 may include responsively performing a corrective action to improve the degraded print quality of the printing device (316). The corrective action may be identified based on the image quality defects within the test image that the feature vectors characterize. The corrective action may be identified by the computing device that performed the assessment of part 314, such as a cloud-based computing device, and then sent to the printing device for performance, or to another computing device more locally connected to the printing device and that can perform the corrective action on the printing device.
There may be more than one corrective action, such as a corrective action for each ROI type. The corrective actions may include reconfiguring the printing device so that when printing source data, the device is able to compensate for its degraded print quality in a way that is less perceptible in the printed output. The corrective actions may include replacing components within the printing device, such as consumable items thereof, or otherwise repairing the device to ameliorate the degraded print quality.
In one implementation, part 314 can be performed by a printing device transmitting the generated feature vectors, which may have been combined in part 312, to a computing device that performs the actual assessment. As such, the generated feature vectors of a large number of similar printing devices can be leveraged by the computing device to improve print quality degradation assessment, which is particularly beneficial in the context of a machine learning technique. In another implementation, part 314 can be performed by a computing device that also generated the feature vectors. Part 316 can be performed by or at the printing device itself. For instance, the identified corrective actions may be transmitted to the printing device for performance by the device.
The method 400 can include transforming reference and test image symbol ROIs 122 and 126 to grayscale (402). When the print job 102 of
The method 400 can include then extracting text and other symbol characters from the reference image symbol ROI 122 (404). Such extraction can be performed using an image processing technique known as Otsu's method, which provides for automated image thresholding. Otsu's method returns a single intensity threshold that separates pixels into two classes: foreground (i.e., characters) and background (i.e., non-characters).
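Otsu's method selects the threshold that maximizes the between-class variance of the two pixel classes. A standard formulation for an eight-bit grayscale image can be sketched as:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold maximizing between-class variance
    for an 8-bit grayscale image (values 0..255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    grand_mean = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum_w, cum_mean = 0.0, 0.0
    for t in range(256):
        cum_w += hist[t]          # pixels with intensity <= t
        cum_mean += t * hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        w0 = cum_w / total                                   # class 0 weight
        m0 = cum_mean / cum_w                                # class 0 mean
        m1 = (grand_mean * total - cum_mean) / (total - cum_w)  # class 1 mean
        var = w0 * (1 - w0) * (m0 - m1) ** 2                 # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```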
The method 400 includes then performing the morphological operation of dilation on the characters extracted from the reference image symbol ROI 122 (406). Dilation is the process of enlarging the boundaries of regions, such as characters, within an image, such as the reference image symbol ROI 122, to include more pixels. In one implementation, for a 600-dots per inch (dpi) letter size reference image 106, the extracted characters are dilated by nine pixels.
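Binary dilation can be sketched as OR-ing shifted copies of the character mask; the square structuring element below is an assumption (the text does not specify the element's shape), and the nine-pixel dilation mentioned above would correspond to `radius=9`.

```python
import numpy as np

def dilate(mask, radius):
    """Grow the boundaries of the True regions of a binary mask by
    `radius` pixels, using a square structuring element."""
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            shifted = np.zeros_like(mask)
            # Copy the mask shifted by (dr, dc), clipped at the borders.
            shifted[max(dr, 0):h + min(dr, 0), max(dc, 0):w + min(dc, 0)] = \
                mask[max(-dr, 0):h - max(dr, 0), max(-dc, 0):w - max(dc, 0)]
            out |= shifted
    return out
```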
The method 400 includes removing the dilated extracted characters from the reference image symbol ROI 122 and the test image symbol ROI 126 (408). That is, each pixel of the reference image symbol ROI 122 that is included in the dilated extracted characters has a relative location within the ROI 122, and is removed from the ROI 122. The test image ROI 126 likewise has a pixel at each such corresponding location, and each such pixel is removed from the ROI 126. The result of removal of the dilated extracted characters from the reference image symbol ROI 122 and the test image symbol ROI 126 is the background areas of the ROIs 122 and 126, respectively. That is, the pixels remaining within the reference image ROI 122 and the test image ROI 126 constitute the respective background areas of these ROIs 122 and 126. It is noted that the dilated extracted characters are removed from the original ROIs 122 and 126, not the grayscale versions thereof generated in part 402.
The method 400 can include then transforming reference and test image symbol ROIs 122 and 126 from which the extracted characters have been removed to the LAB color space (410). The LAB color space is also known as the L*a*b or CIELAB color space. The reference and test image symbol ROIs 122 and 126 are thus transformed from the RGB color space, for instance, to the LAB color space.
The method 400 can include calculating a distance between corresponding pixels of the reference and test image symbol ROIs 122 and 126 (412). The calculated distance may be the Euclidean distance within the LAB color space. Such a distance may be specified as ΔE(i,j) = √((L(i,j)ref − L(i,j)test)² + (a(i,j)ref − a(i,j)test)² + (b(i,j)ref − b(i,j)test)²). In this equation, L(i,j)ref, a(i,j)ref, and b(i,j)ref are the L, a, and b color values, respectively, of the reference image symbol ROI 122 at location (i,j) within the reference image 106. Likewise, L(i,j)test, a(i,j)test, and b(i,j)test are the L, a, and b color values, respectively, of the test image symbol ROI 126 at location (i,j) within the test image 112.
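The per-pixel ΔE equation above can be computed over whole ROIs at once; a sketch assuming the ROIs are already aligned, same-size (H, W, 3) arrays of L, a, b channels:

```python
import numpy as np

def delta_e_map(ref_lab, test_lab):
    """Per-pixel Euclidean distance in LAB space between the reference
    and test ROIs, i.e., the ΔE(i,j) comparison image."""
    diff = ref_lab.astype(float) - test_lab.astype(float)
    return np.sqrt(np.sum(diff ** 2, axis=-1))
```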
The result of part 412 is a grayscale comparison image. The method 400 can include normalizing the calculated distances of the grayscale comparison image (414). Normalization provides for better delineation and distinction of defects. As one example, normalization may be to an eight-bit value between 0 and 255.
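The eight-bit normalization of part 414 can be sketched as min-max scaling of the ΔE comparison image; min-max scaling is an assumption here, and a fixed ΔE range could be used instead.

```python
import numpy as np

def normalize_to_8bit(delta_e):
    """Scale the grayscale comparison image to eight-bit values in 0..255."""
    d = np.asarray(delta_e, dtype=float)
    rng = d.max() - d.min()
    if rng == 0:
        # A defect-free, uniform comparison image maps to all zeros.
        return np.zeros(d.shape, dtype=np.uint8)
    return np.round(255.0 * (d - d.min()) / rng).astype(np.uint8)
```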
The method 400 includes then extracting a background defect from the normalized grayscale comparison image (416). Such extraction can be performed by again using Otsu's method. In the case of part 416, Otsu's method returns a single intensity threshold that separates defect pixels from non-defect pixels. The defect pixels remaining within the comparison image after thresholding thus constitute the background defect.
The value (i.e., the normalized calculated distance) and/or number of pixels within the background defect along a media advancement direction are projected (418). The media advancement direction is the direction in which the media on which the rendered print job was printed advanced through a printing device during printing in part 108, prior to scanning in part 110 to generate the test image 112. Part 418 may be implemented by weighting the background defect pixels at each location along the media advancement direction by their normalized calculated distances and plotting the resulting projection over all the locations along the media advancement direction. Part 418 can also or instead be implemented by plotting a projection of the number of background defect pixels at each location along the media advancement direction.
Streaking analysis can then be performed on the resulting projection (420). Streaking is the occurrence of undesired light or dark lines along the media advancement direction. In the case of an EP printing device, streaking can occur when an intermediate transfer belt (ITB), organic photoconductor (OPC), or other components of the device become defective or otherwise suffer from operational issues. Streaking analysis identifies occurrences of streaking (i.e., streaks or streak defects) within the test image ROI 126 from the projection.
To identify streaking occurrences, a threshold may be specified to distinguish background noise within the projection from actual occurrences of streaking. The threshold may be set as the average of the projection values over the locations along the media advancement direction, plus a standard deviation of the projection values, and can vary based on the streak detection result. As another example, the threshold may be the average of the projection values plus twice the standard deviation. A location along the media advancement direction is identified as part of a streaking occurrence if the projection value at the location exceeds this threshold. A streak is identified for each contiguous set of locations at which the projection values exceed the threshold.
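The projection, mean-plus-k-standard-deviations threshold, and contiguous-run grouping described above can be sketched as follows. The mapping of the media advancement direction to the row axis is an assumption for illustration; banding detection is the same computation with the axes swapped.

```python
import numpy as np

def detect_streaks(defect_map, k=1.0):
    """Project defect values (rows assumed to run along the media
    advancement direction), threshold the projection at mean + k*std,
    and return each contiguous above-threshold run as a (start, end)
    pair of location indices."""
    proj = defect_map.sum(axis=0).astype(float)   # one value per location
    thresh = proj.mean() + k * proj.std()
    above = proj > thresh
    streaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                             # run begins
        elif not flag and start is not None:
            streaks.append((start, i - 1))        # run ends
            start = None
    if start is not None:
        streaks.append((start, len(above) - 1))
    return streaks
```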
The value (i.e., the normalized calculated distance) and/or number of pixels within the background defect along a direction perpendicular to the media advance direction are also projected (422). Part 422 may be implemented in a manner similar to part 418, by weighting the background defect pixels at each location along the direction perpendicular to the media advancement direction by their grayscale values and plotting the resulting projection over all the locations along this direction. Part 422 can also or instead be implemented by plotting a projection of the number of background defect pixels at each location along the direction perpendicular to the media advancement direction.
Banding analysis can then be performed on the resulting projection (424). Banding analysis is similar to streaking analysis. However, whereas streaks occur along the media advancement direction, bands (i.e., band defects or banding occurrences) that are identified by the banding analysis occur along the direction perpendicular to the media advancement direction. Banding is thus the occurrence of undesired light or dark lines along the direction perpendicular to the media advancement direction.
As with the identification of streaking occurrences, to identify banding occurrences a threshold may be specified to distinguish background noise within the projection from actual occurrences of banding. The threshold may be set as the average of the projection values over the locations along the direction perpendicular to the media advancement direction, plus a standard deviation of the projection values. This threshold can similarly vary based on the band detection result, and as another example, can be the average of the projection values plus twice the standard deviation. A location along the direction perpendicular to the media advancement direction is identified as part of a banding occurrence if the projection value at the location exceeds this threshold. A band is identified for each contiguous set of locations at which the projection values exceed the threshold.
A feature vector for a test image symbol ROI 126 can be generated once the method 400 has been performed for this ROI 126. In one implementation, generating the feature vector includes determining the following values for inclusion within the vector. One value is the average color variation of pixels belonging to the streak and band defects identified in parts 420 and 424, respectively. The color variation of such a defect pixel can be the calculated Euclidean distance, ΔE, that has been described.
The feature vector can include values corresponding to the total number, total width, average length, and average sharpness of the streak defects identified within the test image symbol ROI 126. The width of a streak defect is the number of locations along the direction perpendicular to the media advancement direction encompassed by the defect. The length of a streak defect is the average value of the comparison image projections of the defect at these locations. The sharpness of a streak defect may in one implementation be what is referred to as the 10-90% rise distance of the defect, which is the number of pixels between the 10% and 90% gray values, where 0% is a completely black pixel and 100% is a completely white pixel.
The feature vector can similarly include values corresponding to the total number, total width, average length, and average sharpness of the band defects identified within the ROI 126. The width of a band defect is the number of locations along the media advancement direction encompassed by the defect. The length of a band defect is the average value of the comparison image projections of the defect at these locations. The sharpness of a band defect may similarly in one implementation be the 10-90% rise distance of the defect.
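The 10-90% rise distance mentioned for streak and band sharpness can be sketched for a one-dimensional edge profile. The sketch assumes a dark-to-light profile of gray values normalized to [0, 1] (0 = completely black, 1 = completely white) that actually reaches both levels.

```python
def rise_distance(profile):
    """Number of pixels between the first crossings of the 10% and 90%
    gray levels along a dark-to-light edge profile."""
    lo = next(i for i, v in enumerate(profile) if v >= 0.1)
    hi = next(i for i, v in enumerate(profile) if v >= 0.9)
    return hi - lo
```

A sharper edge yields a smaller rise distance; a blurred or faded defect edge yields a larger one.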
The feature vector can include values for each of a specified number of the streak defects, such as the three largest streak defects (e.g., the three streak defects having the greatest projections). These values can include the width, length, sharpness, and severity of each such streak defect. The severity of a streak defect may in one implementation be the average color variation of the defect (i.e., the average ΔE value) multiplied by the area of the defect.
The feature vector can similarly include values for each of a specified number of band defects, such as the three largest band defects (e.g., the three band defects having the greatest projections). These values can include the width, length, sharpness, and severity of each such band defect. The severity of a band defect may similarly in one implementation be the average color variation of the defect multiplied by the defect's area.
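The severity value described above, average color variation multiplied by defect area, can be sketched as follows (note that mean ΔE times pixel count is simply the sum of ΔE over the defect's pixels):

```python
import numpy as np

def defect_severity(delta_e_map, defect_mask):
    """Severity of a streak or band defect: average ΔE over the defect's
    pixels multiplied by the defect's area in pixels."""
    vals = np.asarray(delta_e_map)[np.asarray(defect_mask, dtype=bool)]
    return float(vals.mean() * vals.size) if vals.size else 0.0
```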
As in
In the context of the method 600, the connected component analysis assigns the pixels of the characters extracted from the reference image symbol ROI 122 to groups. The groups may, for instance, correspond to individual words, and so on, within the extracted characters. The pixels of a group are thus connected to one another to some degree; each group can be referred to as a connected component.
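The grouping of character pixels into connected components can be sketched as a flood-fill labeling pass over the binary character mask; 4-connectivity is an assumption here (the text does not specify the connectivity used).

```python
def label_components(mask):
    """Assign each group of mutually 4-connected foreground pixels a
    distinct label; returns the component count and the label grid."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                count += 1
                stack = [(i, j)]
                labels[i][j] = count
                while stack:                      # flood fill one component
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and mask[rr][cc] and not labels[rr][cc]:
                            labels[rr][cc] = count
                            stack.append((rr, cc))
    return count, labels
```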
The method 600 includes then identifying the corresponding connected components within the test image symbol ROI 126 (608). A connected component within the test image symbol ROI 126 should be the group of pixels within the ROI 126 at the same locations as a corresponding connected component of pixels within the reference image symbol ROI 122. This is because the test image 112 has been aligned with respect to the reference image 106 in part 114 of
However, in actuality, some misalignment may remain between the test image 112 and the reference image 106 after image alignment. This means that the group of pixels within the test image symbol ROI 126 at the same locations as a given connected component of pixels within the reference image symbol ROI 122 may not identify the same extracted characters to sufficient precision. Therefore, a cross-correlation value between the connected component of the reference image symbol ROI 122 and the groups of pixels at the corresponding positions in the test image symbol ROI 126 may be determined. The group of pixels of the test image symbol ROI 126 having the largest cross-correlation value is then selected as the corresponding connected component within the ROI 126, in what can be referred to as a template-matching technique.
However, if the cross-correlation value of the selected group is lower than a threshold, such as 0.9, then the connected component in question is discarded and not considered further in the method 600. Furthermore, in one implementation, the results of the connected component analysis performed in part 606 and the identification performed in part 608 can be used in lieu of the morphological dilation of part 406 in
As a rudimentary example of part 608, five groups of pixels of the test image symbol ROI 126 may be considered. The first group is the group of pixels within the test image symbol ROI 126 at the same locations as the connected component of pixels within the reference image symbol ROI 122. The second and third groups are groups of pixels within the test image symbol ROI 126 at locations respectively shifted one pixel to the left and the right relative to the connected component of pixels within the reference image symbol ROI 122. The fourth and fifth groups are groups of pixels within the test image symbol ROI 126 at locations respectively shifted one pixel up and down relative to the connected component of pixels within the reference image symbol ROI 122.
The cross-correlation value may be determined as Rcorr = Σx′,y′ (T(x′,y′)·I(x+x′,y+y′)) / √(Σx′,y′ T(x′,y′)² · Σx′,y′ I(x+x′,y+y′)²).
In this equation, Rcorr is the cross-correlation value. The value T(x′,y′) is the value of the pixel of the reference image symbol ROI 122 at location (x′,y′) within the reference image 106. The value I(x+x′,y+y′) is the value of the pixel of the test image symbol ROI 126 at location (x+x′,y+y′) within the test image 112. The values x and y represent the amount of shift in the x and y directions, respectively.
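The normalized cross-correlation and the template matching over the five candidate shifts from the rudimentary example (centered, one pixel left/right, one pixel up/down) can be sketched as follows; the function names and the (row, column) anchor convention are illustrative assumptions.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation Rcorr between a reference connected
    component T and a same-size group of test-image pixels I."""
    t = template.astype(float).ravel()
    w = window.astype(float).ravel()
    denom = np.sqrt(np.sum(t * t) * np.sum(w * w))
    return float(np.sum(t * w) / denom) if denom else 0.0

def best_match(template, test_roi, row, col,
               shifts=((0, 0), (0, -1), (0, 1), (-1, 0), (1, 0))):
    """Evaluate Rcorr at each candidate shift and return the best
    (shift, score) pair; the best-scoring group is taken as the
    corresponding connected component."""
    h, w = template.shape
    best = (None, -1.0)
    for dr, dc in shifts:
        r, c = row + dr, col + dc
        if r < 0 or c < 0 or r + h > test_roi.shape[0] or c + w > test_roi.shape[1]:
            continue                              # shifted window out of bounds
        score = ncc(template, test_roi[r:r + h, c:c + w])
        if score > best[1]:
            best = ((dr, dc), score)
    return best
```

A perfectly matching group yields Rcorr = 1.0; per the text, a best score below a threshold such as 0.9 would cause the connected component to be discarded.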
The method 600 includes performing symbol fading (i.e., character fading such as text fading) analysis on the connected components identified within the test image symbol ROI 126 in relation to their corresponding connected components within the reference image symbol ROI 122 (610). The fading analysis is a comparison of each connected component of the test image symbol ROI 126 and its corresponding connected component within the reference image symbol ROI 122. Specifically, various values and statistics may be calculated. The comparison of a connected component of the test image symbol ROI 126 and the corresponding connected component of the reference image symbol ROI 122 (i.e., the calculated values and statistics) characterizes the degree of fading, within the test image 112, of the characters that the connected component identifies.
A feature vector for the test image symbol ROI 126 can be generated once the method 600 has been performed. The generated feature vector can include the calculated values and statistics. Three values may be the average L, a, and b color channel values of pixels of the connected components within the reference image symbol ROI 122. Similarly, three values may be the average L, a, and b color channel values of pixels of the corresponding connected components within the test image symbol ROI 126.
Two values of the feature vector may be the average color variation and the standard deviation thereof between the pixels of the connected components within the reference image symbol ROI 122 and white. The color variation between such a pixel and the color white can be the ΔE(i,j) value described above in relation to part 412 of
Similarly, two values may be the average color variation and the standard deviation thereof between the pixels of the connected components within the test image symbol ROI 126 and white. The color variation between such a pixel and the color white can be the ΔE(i,j) value described above in relation to part 412 of
Two values of the feature vector may be the average color variation and the standard deviation thereof between the pixels of the connected components within the test image symbol ROI 126 and the pixels of the connected components within the reference image symbol ROI 122. The color variation between a pixel of a connected component within the test image symbol ROI 126 and the corresponding pixel of the corresponding connected component within the reference image symbol ROI 122 can be the ΔE(i,j) value described above in relation to part 412 of
The method 800 can include transforming reference and test image raster ROIs 122 and 126 to the LAB color space (802). The method 800 can include calculating a distance between corresponding pixels of the reference and test image raster ROIs 122 and 126 (803). The calculated distance may be the Euclidean distance within the LAB color space, such as the ΔE(i,j) value described above in relation to part 412 of
A feature vector for the test image raster ROI 126 can be generated once the method 800 has been performed. The generated feature vector can include the calculated values and statistics. Six such values may be the average L, a, and b color channel values, and respective standard deviations thereof, of the pixels within the reference image raster ROI 122. Similarly, six values may be the average L, a, and b color channel values, and respective standard deviations thereof, of the pixels within the test image raster ROI 126.
Two values of the feature vector may be the average color variation and the standard deviation thereof between pixels within the test image raster ROI 126 and pixels within the reference image raster ROI 122. The color variation between a pixel of the test image raster ROI 126 and its corresponding pixel within the reference image raster ROI 122 can be the ΔE(i,j) value described above in relation to part 412.
The distance values and color channel values, such as the L color channel values, within the test image raster ROI 126 are first projected along the media advancement direction (912).
Streaking analysis can then be performed on the resulting projection (914), similar to part 420.
The distance values and color channel values, such as the L color channel values, within the test image raster ROI 126 are also projected along a direction perpendicular to the media advancement direction (916), similar to part 422.
Banding analysis can then be performed on the resulting projection (918), similar to part 424.
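The projection and defect-flagging steps of parts 912 through 918 can be sketched as follows. This is a hedged illustration under stated assumptions: the media is assumed to advance along axis 0 of the ROI array, so projecting "along" that direction averages each column while the perpendicular projection averages each row, and a simple deviation-from-mean threshold stands in for whatever streaking and banding analyses the method actually applies.

```python
import numpy as np

def project(values, along_media_advance=True):
    """Collapse a 2-D map of per-pixel values to a 1-D profile by averaging."""
    axis = 0 if along_media_advance else 1
    return values.mean(axis=axis)

def find_defects(profile, num_std=2.0):
    """Flag profile positions deviating from the mean by more than num_std sigmas."""
    mean, std = profile.mean(), profile.std()
    if std == 0.0:
        return np.zeros_like(profile, dtype=bool)
    return np.abs(profile - mean) > num_std * std

# Example: a ΔE map with one anomalously dark/light column reads as a streak
# candidate when projected along the media advancement direction.
de = np.full((8, 8), 1.0)
de[:, 3] = 9.0
streaks = find_defects(project(de, along_media_advance=True))  # flags column 3
bands = find_defects(project(de, along_media_advance=False))   # rows are uniform
```

Streaks thus appear as outliers in the along-advance projection, while bands appear as outliers in the perpendicular projection.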
A feature vector for the test image raster ROI 126 can be generated once the method 900 has been performed for this ROI 126. Generating the feature vector can include determining the following values for inclusion within the vector. One value is the average color variation of pixels belonging to the band and streak defects identified in parts 914 and 918. The color variation of such a defect pixel can be determined as has been described in relation to the generation of a feature vector for a test image symbol ROI 126 subsequent to performance of the method 400.
The feature vector can include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within the test image raster ROI 126 and their corresponding pixels within the reference image raster ROI 122. The feature vector can include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each streak defect identified within the test image raster ROI 126 and their corresponding pixels within the reference image raster ROI 122. The feature vector can similarly include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each band defect identified within the test image raster ROI 126 and their corresponding pixels within the reference image raster ROI 122.
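The L-channel difference statistics described above can be sketched as follows. This is a minimal illustration, assuming (H, W, 3) LAB arrays for the test and reference ROIs and optional boolean masks (hypothetical names `streak_mask`, `band_mask`) marking pixels of the identified streak and band defects.

```python
import numpy as np

def l_diff_stats(test_lab, ref_lab, mask=None):
    """Average and standard deviation of per-pixel L-channel differences.

    With no mask, the statistics cover the whole ROI; with a defect mask,
    they cover only the pixels of that streak or band defect.
    """
    diff = test_lab[..., 0] - ref_lab[..., 0]
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean()), float(diff.std())

# Six of the feature-vector values described above (illustrative):
# whole-ROI pair, then per-streak-defect pair, then per-band-defect pair, e.g.
#   features = [*l_diff_stats(t, r),
#               *l_diff_stats(t, r, streak_mask),
#               *l_diff_stats(t, r, band_mask)]
```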
The method 1000 can include transforming reference and test image vector ROIs 122 and 126 to the LAB color space (1002). The method 1000 can include calculating a distance between corresponding pixels of the reference and test image vector ROIs 122 and 126 (1003). The calculated distance may be the Euclidean distance within the LAB color space, such as the ΔE(i,j) value described above in relation to part 412.
A feature vector for the test image vector ROI 126 can be generated once the method 1000 has been performed. The generated feature vector can include the calculated values and statistics. Six such values may be the average L, a, and b color channel values, and respective standard deviations thereof, of pixels within the reference image vector ROI 122. Similarly, six values may be the average L, a, and b color channel values, and respective standard deviations thereof, of pixels within the test image vector ROI 126. Two values may be the average color variation and the standard deviation thereof between pixels within the test image vector ROI 126 and pixels within the reference image vector ROI 122. The color variation between a pixel of the test image vector ROI 126 and its corresponding pixel within the reference image vector ROI 122 can be determined as has been described above in relation to generation of a feature vector for a test image raster ROI 126 subsequent to performance of the method 800.
As in the method 900, streaking and banding analyses are performed on projections of the distance values and color channel values within the test image vector ROI 126, identifying streak and band defects in parts 1114 and 1118, respectively.
A feature vector for the test image vector ROI 126 can be generated once the method 1100 has been performed for this ROI 126. Generating the feature vector can include determining the following values for inclusion within the vector. One such value is the average color variation of pixels belonging to the band and streak defects identified in parts 1114 and 1118. The color variation of such a defect pixel can be determined as has been described in relation to the generation of a feature vector for a test image symbol ROI 126 subsequent to performance of the method 400.
The feature vector can include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within the test image vector ROI 126 and their corresponding pixels within the reference image vector ROI 122. The feature vector can include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each streak defect identified within the test image vector ROI 126 and their corresponding pixels within the reference image vector ROI 122. The feature vector can similarly include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each band defect identified within the test image vector ROI 126 and their corresponding pixels within the reference image vector ROI 122.
The feature vector can also include a value corresponding to the highest-frequency energy of each streak defect identified within the test image vector ROI 126, as well as a value corresponding to the highest-frequency energy of each band defect identified within the test image vector ROI 126. The highest-frequency energy of such a defect is the value of the defect at its point of highest frequency, which can itself be determined by subjecting the defect to a one-dimensional fast Fourier transform (FFT).
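The highest-frequency-energy value described above can be sketched as follows. This is a hedged illustration: the defect's 1-D profile is passed through a one-dimensional FFT, and "the value of the defect at its point of highest frequency" is interpreted here as the largest spectral magnitude outside the DC (mean) bin, which is an assumption rather than the claimed computation.

```python
import numpy as np

def highest_frequency_energy(defect_profile):
    """Peak non-DC spectral magnitude of a 1-D streak or band defect profile."""
    spectrum = np.abs(np.fft.rfft(np.asarray(defect_profile, dtype=float)))
    if len(spectrum) < 2:
        return 0.0          # profile too short to have any non-DC content
    return float(spectrum[1:].max())  # skip bin 0, the DC (mean) component
```

A pure sinusoidal profile of length N and unit amplitude, for instance, yields a peak magnitude of N/2 at its fundamental frequency bin, while a flat profile yields zero.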
The method 1200 can include transforming reference and test image background ROIs 122 and 126 to the LAB color space (1202). The method 1200 can include calculating a distance between corresponding pixels of the reference and test image background ROIs 122 and 126 (1203). The calculated distance may be the Euclidean distance within the LAB color space, such as the ΔE(i,j) value described above in relation to part 412.
A feature vector for the test image background ROI 126 can be generated once the method 1200 has been performed for this ROI 126. Generating the feature vector can include determining the following values for inclusion within the vector. One such value is the average color variation of pixels belonging to the band and streak defects identified in parts 1214 and 1218. The color variation of such a defect pixel can be determined as has been described in relation to the generation of a feature vector for a test image symbol ROI 126 subsequent to performance of the method 400.
The feature vector can include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each streak defect identified within the test image background ROI 126 and their corresponding pixels within the reference image background ROI 122. The feature vector can similarly include two values corresponding to the average difference in L channel values, and a standard deviation thereof, of pixels within each band defect identified within the test image background ROI 126 and their corresponding pixels within the reference image background ROI 122. The feature vector can also include a value corresponding to the highest-frequency energy of each streak and band defect identified within the test image background ROI 126, as has been described in relation to generation of a feature vector for a vector ROI 126 subsequent to performance of the method 1100.
The processing includes, for each of a number of ROI types, comparing ROIs of the ROI type within a reference image to corresponding ROIs within a test image corresponding to the reference image and printed by a printing device (1304). The processing includes, for each ROI type, generating a feature vector characterizing image quality defects within the test image for the ROI type, based on results of the comparing for the ROI type (1306). The processing includes assessing whether print quality of the printing device has degraded below a specified acceptable print quality level, based on the feature vectors for the ROI types (1308).
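The overall processing of parts 1304 through 1308 can be sketched at a high level as follows. Every name here is illustrative: `roi_pairs_by_type` maps each ROI type to its corresponding reference/test ROI pairs, `make_features` maps each ROI type to a feature-vector function such as those sketched earlier, and a simple linear score against a threshold stands in for whatever trained model or rule actually assesses the feature vectors.

```python
import numpy as np

def assess_print_quality(roi_pairs_by_type, make_features, weights, threshold):
    """Return True if print quality is at or above the acceptable level.

    roi_pairs_by_type: dict mapping ROI type -> list of (ref_roi, test_roi).
    make_features: dict mapping ROI type -> feature-vector function.
    """
    vectors = []
    for roi_type, pairs in roi_pairs_by_type.items():
        for ref_roi, test_roi in pairs:
            # Part 1304/1306: compare corresponding ROIs of this type and
            # generate the per-type feature vector from the comparison.
            vectors.append(make_features[roi_type](ref_roi, test_roi))
    # Part 1308: assess the concatenated feature vectors against a specified
    # acceptability level (here, an illustrative linear score and threshold).
    score = float(np.dot(np.concatenate(vectors), weights))
    return score <= threshold
```

In practice the assessment stage could equally be a classifier trained on feature vectors from printers with known good and degraded print quality; the structure of the loop over ROI types is the point of the sketch.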
The printing device 1400 includes hardware logic 1406. The hardware logic 1406 may be a processor and a non-transitory computer-readable data storage medium storing program code that the processor executes. The hardware logic 1406 compares the ROIs of each ROI type within the reference image to corresponding ROIs within the scanned test image (1408). The hardware logic 1406 generates, based on results of the comparing, a feature vector characterizing image quality defects within the test image for the ROI type (1410). Whether print quality of the printing device has degraded below a specified acceptable print quality level is assessable based on the generated feature vector.
The techniques that have been described herein thus provide a way by which degradation in the print quality of a printing device can be assessed in an automated manner. Rather than having an expert or other user inspect a printed test image to assess print quality degradation, feature vectors are generated for ROIs identified within the printed test image. The feature vectors are generated by comparing the test image ROIs with corresponding ROIs within a reference image to which the test image corresponds. The feature vectors include particularly selected values and statistics that have been determined to best reflect, denote, or indicate image quality defects for respective ROI types. Print quality degradation can thus be accurately assessed, in an automated manner, based on the generated feature vectors.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/014846 | 1/23/2020 | WO |