This application is based on and claims priority under 35 U.S.C. 119 from Japanese Patent Application No. 2008-192375 filed Jul. 25, 2008.
1. Technical Field
The present invention relates to an image processing apparatus, an image processing method, and a computer readable medium.
2. Related Art
It has become popular to describe images using a page description language such as PostScript (trade mark) (hereinafter abbreviated as PS) or PDF, and to use images thus described for printing or display. As for software such as an interpreter (also called a RIP, an abbreviation of Raster Image Processor) that interprets page description language data (data described in the page description language) and generates raster image data, there are various kinds of software treating the same page description language, and there are also various versions of the same kind of software.
Further, for example, as a method of printing PDF data, there is not only a method of directly converting the PDF data into a raster image by using a RIP for PDF, but also a method of first converting the PDF data into PS data by using conversion software and then converting the PS data into a raster image by using a RIP for PS. In this manner, the conversion of page description language data into a raster image may be performed by a combination of plural pieces of software.
In this manner, there are various types of conversion processing system (that is, a system that realizes the conversion by a single conversion program or by a combination of plural programs) for converting data described in the same page description language into a raster image. When the conversion processing systems differ, raster images generated from the same page description language data may differ slightly from one another due to differences in how the specification is implemented or in how complicated processing portions are handled.
A person who has prepared page description language data generates a raster image from the page description language data by using the conversion processing system of his or her own environment (that is, a computer system) and confirms the appearance of the raster image. However, when page description language data prepared by one person is transferred to another person, if the conversion processing system used by the other person differs from that of the first person, a raster image expected by the first person may not be outputted. For example, such a problem may arise when page description language data prepared by a publishing company is transferred to and printed by a printing company.
According to an aspect of the present invention, an image processing apparatus includes: a conversion control unit that controls each of a plurality of different conversion processing systems, which converts page description language data into raster image data, to convert page description language data to be inspected into raster image data; and a difference information outputting unit that outputs difference information in a case where there is a difference among the raster image data as conversion results obtained by the conversion processing systems.
Exemplary embodiments of the present invention will be described in detail based on the following figures, wherein:
An example of the configuration of the image processing apparatus according to an embodiment will be explained with reference to
The image processing apparatus 100 implements, as conversion processing systems for converting PDF data into a raster image, all of the conversion processing systems that are assumed to be used by users of various kinds of PDF data. However,
The conversion processing system A is configured by a PDF to PS conversion part 104 for converting PDF data into PS data and a PS RIP 106 for converting the PS data into a raster image. The conversion processing system B is configured by a PDF RIP 108 for converting PDF data into a raster image. The conversion processing system C is configured by a PDF application 110, a PS printer driver 112 and a PS RIP 114. The PDF application 110 is application software that handles PDF data, for example Adobe (trade mark) Acrobat (trade mark) or Adobe (trade mark) Reader (trade mark) of Adobe Systems. The PS printer driver 112 is called from the PDF application 110 and converts the PDF data supplied from the PDF application 110 into PS data.
Each of the PDF to PS conversion part 104, the PS RIP 106, the PDF RIP 108, the PDF application 110, the PS printer driver 112 and the PS RIP 114 is implemented as a program system formed by a single program or plural programs (hereinafter collectively referred to as a "program"). The aforesaid functions are attained by executing these programs on the processor of the image processing apparatus 100.
The constituent elements within the conversion processing systems A, B and C may be shared. For example, the PS RIP 106 and the PS RIP 114 may be implemented as the same program. In this case, the processor of the image processing apparatus 100 executes the same PS RIP program both when executing the processing of the PS RIP 106 within the conversion processing system A and when executing the processing of the PS RIP 114 within the conversion processing system C.
The image processing apparatus 100 receives PDF data from the outside and is instructed to execute the inspection. Then, the inspection control part 102 inputs the PDF data into each of the conversion processing systems A, B and C within the image processing apparatus 100, which convert it into raster images. The conversion results of the conversion processing systems A, B and C will be referred to as raster images A, B and C, respectively.
The difference detection part 116 compares the raster images A, B and C respectively outputted from the conversion processing systems A, B and C, detects any significant difference existing among them, and outputs difference information representing the detection result. The differences among the raster images to be detected are, for example, a difference in the color of the same pixel, or a difference in the width or position of a figure (for example, a line) among the raster images.
According to this procedure, one of the two raster images is set as a reference image (the image A in the figure) and the difference of the other image with respect to the reference image is obtained and displayed. To this end, one of the conversion processing systems provided in the image processing apparatus 100 may be designated in advance as the processing system for the reference image.
According to this procedure, the difference detection part 116 first draws the reference image A in a display data area secured within the memory of the image processing apparatus 100, with its density reduced to a predetermined rate (for example, one quarter of the density of the original image) (S10). That is, the respective pixel values (the values of cyan C, magenta M, yellow Y and black K in the case of a color image) of each pixel of the image A are reduced to the predetermined rate, and the image thus reduced in density is formed in the display data area.
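By way of illustration, the density reduction of step S10 can be sketched as follows. The representation of an image as rows of CMYK tuples (values 0 to 255) and the function name are assumptions for illustration only, not part of the original disclosure.

```python
def draw_reduced_density(image, rate=0.25):
    """Form a reduced-density copy of the reference image (step S10).

    image: list of rows, each pixel a (C, M, Y, K) tuple with values 0-255.
    Each color value is scaled down to the predetermined rate (1/4 by default).
    """
    return [[tuple(int(v * rate) for v in px) for px in row] for row in image]
```

The reduced-density copy serves as a faint backdrop over which the alarm markings are later overlaid.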
Next, the difference detection part 116 executes color comparison processing (S12). The detailed procedure of the color comparison processing is shown in
Next, the difference detection part 116 obtains a color difference for each pixel between the image A and the image B, and compares the color difference thus obtained with the color comparison threshold value Th-V. When the color difference is equal to or larger than Th-V, the pixel is marked (recorded) in the color-alarm image data area (S22, S24). As the color difference, for example, the distance in the CMYK space between the pixel values (CMYK values) of the corresponding pixels of the image A and the image B may be obtained. In the marking processing, the pixel value in the color-alarm image data area is set to 1, for example.
The aforesaid processings are repeated for all pixels. When the aforesaid processings are completed for all pixels, the process returns to the procedure of
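A minimal sketch of the per-pixel color comparison (steps S22, S24), assuming Euclidean distance in CMYK space as the color difference and the same rows-of-CMYK-tuples representation as above; the function name is an illustrative assumption:

```python
import math

def color_compare(image_a, image_b, th_v):
    """Mark pixels whose CMYK-space distance between images A and B
    is equal to or larger than the threshold Th-V (steps S22, S24)."""
    height, width = len(image_a), len(image_a[0])
    color_alarm = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Euclidean distance between corresponding CMYK pixel values
            diff = math.dist(image_a[y][x], image_b[y][x])
            if diff >= th_v:
                color_alarm[y][x] = 1  # record in the color-alarm image data area
    return color_alarm
```

Any other color-difference metric could be substituted for the Euclidean distance without changing the surrounding procedure.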
Next, the difference detection part 116 executes figure comparison processing (S16). The detailed procedure of the figure comparison processing is shown in
Next, the difference detection part 116 binarizes the image A to generate a binarized image A-bw (S32) and also binarizes the image B to generate a binarized image B-bw (S34). The processing order of steps S32 and S34 may be exchanged. In the binarizing processing, for example, the pixel value of a pixel whose color values in the original image are all 0 (that is, C=M=Y=K=0) is set to 0, while the pixel value of a pixel for which any color value of the original image is not 0 is set to 1. Alternatively, in order to suppress the influence of a flat tint or halftone in the background, the pixel value may be set to 0 when C+M+Y+K is smaller than 128 and to 1 when C+M+Y+K is equal to or larger than 128 (supposing that each of C, M, Y and K is in a range of 0 to 255).
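The alternative binarization rule above (thresholding the sum C+M+Y+K at 128) can be sketched as follows, with the image representation assumed as before:

```python
def binarize(image, threshold=128):
    """Binarize a CMYK image (steps S32/S34): a pixel becomes 1 when
    C+M+Y+K is equal to or larger than the threshold, and 0 otherwise,
    which suppresses light flat tints or halftones in the background."""
    return [[1 if sum(px) >= threshold else 0 for px in row] for row in image]
```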
Next, the difference detection part 116 prepares a start-point/end-point list List-H-A in the horizontal direction of the binarized image A-bw (S36). The start-point/end-point list List-H-A is a list storing the respective pairs of start-point coordinates and end-point coordinates (hereinafter called start-point/end-point data) at which scanning lines in the horizontal direction cross each figure element (that is, a cluster of black pixels) on the binarized image A-bw. For example, the binarized image A-bw shown in
Similarly, the difference detection part 116 prepares a start-point/end-point list List-H-B in the horizontal direction of the binarized image B-bw, a start-point/end-point list List-V-A in the vertical direction of the binarized image A-bw, and a start-point/end-point list List-V-B in the vertical direction of the binarized image B-bw (S38, S40, S42). The execution order of steps S36 to S42 may differ from that shown in the figure.
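A possible sketch of building the horizontal start-point/end-point list (step S36) follows; the vertical lists can be obtained in the same way after transposing the binarized image. The (y, x_start, x_end) tuple layout is an assumption for illustration:

```python
def scanline_runs_horizontal(bw_image):
    """Collect, for each horizontal scanning line, the start-point and
    end-point coordinates of every run of 1-pixels (a figure element
    crossed by the scanning line). Returns (y, x_start, x_end) tuples."""
    runs = []
    for y, row in enumerate(bw_image):
        x, width = 0, len(row)
        while x < width:
            if row[x] == 1:
                start = x
                while x < width and row[x] == 1:
                    x += 1
                runs.append((y, start, x - 1))  # run ended at x - 1
            else:
                x += 1
    return runs
```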
Next, the difference detection part 116 compares the start-point/end-point list List-H-A with the start-point/end-point list List-H-B in the horizontal direction (S46). An example of the procedure of the comparison processing is shown in
According to the procedure shown in
Next, the difference detection part 116 retrieves, from the list of the image B (the List-H-B in this case), start-point/end-point data satisfying the conditions that the difference (distance) of the start point coordinate and the difference of the end point coordinate from those of the start-point/end-point data selected in step S60 are each within the threshold value Th-S, and that the difference between the start-point-to-end-point distance of the data in the List-H-B and that of the start-point/end-point data selected in step S60 is within the threshold value Th-W (S62). In this retrieval, the area to be retrieved may be limited to the start-point/end-point data group corresponding to the same horizontal scanning line (that is, the same y coordinate) as the start-point/end-point data selected in step S60. Then, it is determined whether or not start-point/end-point data satisfying the retrieval conditions of step S62 has been found in the list of the image B (S64). When such start-point/end-point data is found, the start-point/end-point data thus selected or retrieved is deleted from the respective lists of the image A and the image B (S66). When no start-point/end-point data satisfying the retrieval conditions is found, the processing of step S66 is skipped.
Then, the processings of steps S60 to S66 are repeatedly executed for all the start-point/end-point data in the list of the image A (S68). Through the aforesaid processings, there remains in each of the lists of the image A and the image B only the start-point/end-point data that does not coincide, within the allowable range, with any data of the partner-side list.
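The matching loop of steps S60 to S68 may be sketched as follows, assuming the run representation above; runs matching within the tolerances Th-S (start/end position) and Th-W (width) are removed from both lists, so that only unmatched differences remain:

```python
def match_runs(list_a, list_b, th_s, th_w):
    """Compare two start-point/end-point lists (steps S60-S68).

    A run from list A matches a run from list B on the same scanning
    line when both the start-point and end-point differences are within
    Th-S and the width difference is within Th-W; matched pairs are
    removed, and the leftovers in both lists are the detected differences.
    """
    remaining_b = list(list_b)
    remaining_a = []
    for y, start_a, end_a in list_a:
        for i, (yb, start_b, end_b) in enumerate(remaining_b):
            if (yb == y
                    and abs(start_b - start_a) <= th_s
                    and abs(end_b - end_a) <= th_s
                    and abs((end_b - start_b) - (end_a - start_a)) <= th_w):
                del remaining_b[i]  # matched: delete from list B (step S66)
                break
        else:
            remaining_a.append((y, start_a, end_a))  # no match: keep as difference
    return remaining_a, remaining_b
```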
Although the above explanation concerns the comparison between the horizontal start-point/end-point lists List-H-A and List-H-B, the difference detection part 116 likewise executes the comparison processing shown in
Then, the difference detection part 116 marks, in the figure-alarm image data area, the start-point/end-point section represented by each piece of start-point/end-point data remaining in the start-point/end-point lists List-H-A, List-H-B, List-V-A and List-V-B after the processings of steps S46 and S48 (S50). In the marking processing, each of the pixel values in the start-point/end-point sections represented by the start-point/end-point data in the figure-alarm image data area may be set to 1.
When the processing of step S50 is completed, the process returns to the processing of
In the processing of
As described above, the explanation has concerned the processing of detecting and presenting the different portions between the output raster images of two conversion processing systems. In the case of comparing the output raster images of three or more conversion processing systems, the comparison between the reference raster image and each of the remaining raster images is performed in accordance with the procedure of
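One possible way to present the results of the pairwise comparisons against the reference image is a pixel-wise logical OR of the per-pair alarm images; this combination rule and the mask representation are illustrative assumptions:

```python
def union_alarms(masks):
    """Combine alarm images obtained by comparing the reference image
    with each of the remaining raster images: a pixel is marked when
    any pairwise comparison shows a difference there."""
    height, width = len(masks[0]), len(masks[0][0])
    return [[1 if any(m[y][x] for m in masks) else 0 for x in range(width)]
            for y in range(height)]
```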
In the aforesaid example, the different portion between the output raster images of the conversion processing systems is displayed as an image, but the method of presenting the different portion is not limited thereto. For example, a message representing whether or not there is a different portion may be displayed instead.
When a difference between the images of the conversion processing systems is detected by the image processing apparatus 100, a user who has inspected the PDF data by using the image processing apparatus 100 corrects the PDF data, for example. In the correction, for example, other PDF data may be prepared by using another PDF generation tool based on the application data from which the PDF data was generated. Alternatively, the application data may be corrected as to the different portion thus detected, or the different portion may be converted into a raster image and placed in the raster image, and PDF data may then be generated again.
The image processing apparatus (inspection apparatus) 100 explained above can be realized by installing, into a PC used by a user, a single program or a set of plural programs realizing the function of the image processing apparatus 100, for example. As another example, as shown in
One of the conversion programs may be arranged to change a part of the contents of the drawing processing in accordance with the amount of memory available for use. In this case, the image processing apparatus (inspection apparatus) 100 may include conversion processing systems that use the same conversion program but with the memory amounts allocated to the program as a work area differentiated in plural stages.
Some of the conversion programs may be arranged to change the setting of the operation mode, such as the operation of the font cache or the adjustment of the position and thickness of a fine line. In this case, the image processing apparatus (inspection apparatus) 100 may include conversion processing systems that use the same conversion program but with different operation mode settings.
The image processing apparatus (inspection apparatus) 100 explained above can be realized by causing a general purpose computer to execute programs representing the processings of the aforesaid respective functional modules, for example. The computer includes, as hardware, as shown in
The foregoing description of the embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind
--- | --- | --- | ---
2008-192375 | Jul 2008 | JP | national