The present invention relates to an image processing apparatus capable of executing color mapping, an image processing method, and a non-transitory computer-readable storage medium storing a program.
There is known an image processing apparatus that receives a digital original described in a predetermined color space, performs, for each color in the color space, mapping to a color gamut that can be reproduced by a printer, and outputs the original. Japanese Patent Laid-Open No. 2020-27948 describes “perceptual” mapping and “absolute colorimetric” mapping. In addition, Japanese Patent Laid-Open No. 07-203234 describes deciding the presence/absence of color space compression and the compression direction for an input color image signal.
The present invention provides a technique for appropriately performing color mapping to a print color gamut based on the type of a content in image data.
The present invention in its first aspect provides an image processing apparatus comprising: at least one memory and at least one processor which function as: an input unit configured to input image data; a generation unit configured to generate image data having undergone color gamut conversion from the image data input by the input unit using a conversion unit configured to convert a color gamut of the image data input by the input unit into a color gamut of a device configured to output the image data; a correction unit configured to correct the conversion unit based on a result of the color gamut conversion; and a control unit configured to control execution of the correction by the correction unit for a content included in the image data input by the input unit, wherein the control unit controls execution of the correction by the correction unit for the content based on a type of the content, in a case where the correction unit corrects the conversion unit, the generation unit generates image data having undergone color gamut conversion from the image data input by the input unit using the corrected conversion unit, and in the image data having undergone the color gamut conversion by the corrected conversion unit, a color difference in the image data having undergone the color gamut conversion by the conversion unit before the correction is expanded.
The present invention in its second aspect provides an image processing method comprising: inputting image data; generating image data having undergone color gamut conversion from the input image data using a conversion unit configured to convert a color gamut of the input image data into a color gamut of a device configured to output the image data; correcting the conversion unit based on a result of the color gamut conversion; and controlling execution of the correction for a content included in the input image data, wherein execution of the correction for the content is controlled based on a type of the content, in a case where the conversion unit is corrected, image data having undergone color gamut conversion from the input image data is generated using the corrected conversion unit, and in the image data having undergone the color gamut conversion by the corrected conversion unit, a color difference in the image data having undergone the color gamut conversion by the conversion unit before the correction is expanded.
The present invention in its third aspect provides a non-transitory computer-readable storage medium storing a program configured to cause a computer to function to: input image data; generate image data having undergone color gamut conversion from the input image data using a conversion unit configured to convert a color gamut of the input image data into a color gamut of a device configured to output the image data; correct the conversion unit based on a result of the color gamut conversion; and control execution of the correction for a content included in the input image data, wherein execution of the correction for the content is controlled based on a type of the content, in a case where the conversion unit is corrected, image data having undergone color gamut conversion from the input image data is generated using the corrected conversion unit, and in the image data having undergone the color gamut conversion by the corrected conversion unit, a color difference in the image data having undergone the color gamut conversion by the conversion unit before the correction is expanded.
According to the present invention, it is possible to appropriately perform color mapping to a print color gamut based on the type of a content in image data.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
If color mapping to a print color gamut is performed regardless of the type of a content in image data, an output result of a color suitable to the content may not be obtained.
According to the present disclosure, it is possible to appropriately perform color mapping to a print color gamut based on the type of a content in image data.
Terms used in this specification are defined in advance, as follows.
“Color reproduction region” is also called a color reproduction range, a color gamut, or a gamut. Generally, “color reproduction region” indicates the range of colors that can be reproduced in an arbitrary color space. In addition, a gamut volume is an index representing the extent of this color reproduction range. The gamut volume is a three-dimensional volume in an arbitrary color space. Chromaticity points forming the color reproduction range are sometimes discrete. For example, a specific color reproduction range is represented by 729 points on CIE-L*a*b*, and points between them are obtained by using a well-known interpolating operation such as tetrahedral interpolation or cubic interpolation. In this case, as the corresponding gamut volume, it is possible to use a volume obtained by calculating the volumes on CIE-L*a*b* of tetrahedrons or cubes forming the color reproduction range and accumulating the calculated volumes, in accordance with the interpolating operation method. The color reproduction region and the color gamut in this embodiment are not limited to a specific color space. In this embodiment, however, a color reproduction region in the CIE-L*a*b* space will be explained as an example. Furthermore, the numerical value of a color reproduction region in this embodiment indicates a volume obtained by accumulation in the CIE-L*a*b* space on the premise of tetrahedral interpolation.
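As a non-limiting illustration of the accumulation just described, the following Python sketch (not part of the original description; the 9×9×9 grid size and the 6-tetrahedron cell decomposition are assumptions) sums tetrahedron volumes over a grid of chromaticity points in CIE-L*a*b*:

```python
import numpy as np

def tetra_volume(p0, p1, p2, p3):
    # Volume of one tetrahedron spanned by four L*a*b* points.
    return abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0

def gamut_volume(lab_grid):
    # lab_grid: (9, 9, 9, 3) array of L*a*b* values (729 chromaticity points).
    # Each grid cell is split into 6 tetrahedra (consistent with tetrahedral
    # interpolation) and their volumes are accumulated.
    corners = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1),
               (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
    tets = [(0, 1, 3, 7), (0, 1, 5, 7), (0, 2, 3, 7),
            (0, 2, 6, 7), (0, 4, 5, 7), (0, 4, 6, 7)]
    n = lab_grid.shape[0]
    total = 0.0
    for i in range(n - 1):
        for j in range(n - 1):
            for k in range(n - 1):
                c = [lab_grid[i + di, j + dj, k + dk].astype(float)
                     for di, dj, dk in corners]
                total += sum(tetra_volume(c[a], c[b], c[p], c[q])
                             for a, b, p, q in tets)
    return total
```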
Gamut mapping is processing of performing conversion between different color gamuts, and is, for example, mapping of an input color gamut to an output color gamut of a device such as a printer. Typical examples are the Perceptual, Saturation, and Colorimetric rendering intents of the ICC profile. The mapping processing may be implemented by, for example, conversion by a three-dimensional lookup table (3DLUT). Furthermore, the mapping processing may be performed after conversion into a standard color space. For example, if an input color space is sRGB, conversion into the CIE-L*a*b* color space is performed and then the mapping processing to an output color gamut is performed in the CIE-L*a*b* color space. The mapping processing may be conversion by a 3DLUT, or may be performed using a conversion formula. Conversion between the input color space and the output color space may be performed simultaneously. For example, the input color space may be the sRGB color space, and conversion into RGB values or CMYK values unique to a printer may be performed at the time of output.
Original data indicates the whole input digital data as a processing target. The original data includes one or more pages. Each single page may be held as image data or may be represented as a drawing command. If a page is represented as a drawing command, the page may be rendered and converted into image data, and then processing may be performed. The image data is formed by a plurality of pixels that are two-dimensionally arranged. Each pixel holds information indicating a color in a color space. Examples of the information indicating a color are RGB values, CMYK values, a K value, CIE-L*a*b* values, HSV values, and HLS values. Note that this embodiment is applicable to one page or a plurality of pages. As an example, this embodiment will describe original data of one page as image data.
In this embodiment, the fact that, when gamut mapping is performed for two arbitrary colors, the distance between the colors after mapping in a predetermined color space is smaller than the distance between the colors before mapping is defined as color degeneration. More specifically, assume that there are a color A and a color B in a digital original, and mapping to the color gamut of a printer converts the color A into a color C and the color B into a color D. In this case, the fact that the distance between the colors C and D is smaller than the distance between the colors A and B is defined as color degeneration. If color degeneration occurs, colors that are recognized as different colors in the digital original are recognized as identical colors when the original is printed. For example, in a graph, different items have different colors, so that the different items can be recognized. If color degeneration occurs, different colors may be recognized as identical colors, and thus different items of a graph may erroneously be recognized as identical items. The predetermined color space in which the distance between the colors is calculated may be an arbitrary color space. Examples of the color space are the sRGB color space, the Adobe RGB color space, the CIE-L*a*b* color space, the CIE-LUV color space, the XYZ color space, the xyY color space, the HSV color space, and the HLS color space.
An image processing accelerator 105 is hardware capable of executing image processing faster than the CPU 102. More specifically, the image processing accelerator 105 is a GPU or a circuit designed exclusively for image processing. The image processing accelerator 105 is activated when the CPU 102 writes a parameter and data necessary for image processing at a predetermined address of the RAM 103. The image processing accelerator 105 loads the above-described parameter and data, and then executes the image processing for the data. Note that the image processing accelerator 105 is not an essential element, and the CPU 102 may execute equivalent processing. The above-described parameter can be stored in the storage medium 104 or can be externally acquired via the data transfer I/F 106.
In the printing apparatus 108, a CPU 111 reads out a program stored in a storage medium 113 to a RAM 112 as a work area and executes the readout program, thereby comprehensively controlling the printing apparatus 108. An image processing accelerator 109 is hardware capable of executing image processing faster than the CPU 111. The image processing accelerator 109 is activated when the CPU 111 writes a parameter and data necessary for image processing at a predetermined address of the RAM 112. The image processing accelerator 109 loads the above-described parameter and data, and then executes the image processing for the data. Note that the image processing accelerator 109 is not an essential element, and the CPU 111 may execute equivalent processing. The above-described parameter can be stored in the storage medium 113, or can be stored in a storage (not shown) such as a flash memory or an HDD.
The image processing to be performed by the CPU 111 or the image processing accelerator 109 will now be explained. This image processing is, for example, processing of generating, based on acquired print data, data indicating the dot formation position of ink in each scan by a printhead 115. The CPU 111 or the image processing accelerator 109 performs color conversion processing and quantization processing for the acquired print data.
The color conversion processing is processing of performing color separation into the ink concentrations to be used in the printing apparatus 108. For example, the acquired print data contains image data indicating an image. In a case where the image data is data indicating an image in a color space coordinate system such as sRGB as the expression colors of a monitor, data indicating an image by the color coordinates (R, G, B) of sRGB is converted into ink data (CMYK) to be handled by the printing apparatus 108. The color conversion method is implemented by, for example, matrix operation processing or processing using a 3DLUT or 4DLUT.
In this embodiment, as an example, the printing apparatus 108 uses inks of black (K), cyan (C), magenta (M), and yellow (Y) for printing. Therefore, image data of RGB signals is converted into image data formed by 8-bit color signals of K, C, M, and Y. The color signal of each color corresponds to the application amount of each ink. Furthermore, the ink colors are four colors of K, C, M, and Y, as examples. However, to improve image quality, it is also possible to use other ink colors such as fluorescent ink (F) and low-concentration inks of light cyan (Lc), light magenta (Lm), and gray (Gy). In this case, color signals corresponding to the inks are generated.
After the color conversion processing, quantization processing is performed for the ink data. This quantization processing is processing of decreasing the number of tone levels of the ink data. In this embodiment, quantization is performed by using a dither matrix in which thresholds to be compared with the values of the ink data are arrayed in individual pixels. After the quantization processing, binary data indicating whether to form a dot in each dot formation position is finally generated.
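A minimal sketch of this threshold comparison follows (illustrative only; the embodiment does not specify the dither matrix, so the 4×4 Bayer matrix and its scaling below are assumptions):

```python
import numpy as np

# Assumed 4x4 Bayer matrix, scaled to 8-bit thresholds (values 8..248).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) * 16 + 8

def quantize(ink_plane, dither=BAYER4):
    # ink_plane: (H, W) 8-bit ink data for one color. The dither matrix is
    # tiled over the plane so that each pixel has its own threshold, and a
    # dot is formed wherever the ink value exceeds that threshold.
    H, W = ink_plane.shape
    h, w = dither.shape
    thresholds = np.tile(dither, (H // h + 1, W // w + 1))[:H, :W]
    return (ink_plane > thresholds).astype(np.uint8)   # binary dot data
```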
After the image processing is performed, a printhead controller 114 transfers the binary data to the printhead 115. At the same time, the CPU 111 performs printing control via the printhead controller 114 so as to operate a carriage motor (not shown) for operating the printhead 115, and to operate a conveyance motor for conveying a print medium. The printhead 115 scans the print medium and also discharges ink droplets onto the print medium, thereby forming an image.
The image processing apparatus 101 and the printing apparatus 108 are connected to each other via a communication line 107. In this embodiment, a Local Area Network (LAN) will be explained as an example of the communication line 107. However, the connection may also be obtained by using, for example, a USB hub, a wireless communication network using a wireless access point, or a Wi-Fi Direct communication function.
A description will be provided below by assuming that the printhead 115 has nozzle arrays for four color inks of cyan (C), magenta (M), yellow (Y), and black (K).
There are various types of content information included in image data input from the user. Examples include graphics such as a character, a photograph, and a figure, as well as gradation used as a design expression. Types related to processes (to be described later), such as “emphasis of difference in tint is desirably focused on” and “tint continuity is desirably focused on”, can also be processed as content information. For example, settings made on contents by the user, such as “emphasis of difference in tint is focused on” and “tint continuity is focused on”, may be processed as content information. In this case, “emphasis of difference in tint is focused on” corresponds to “character or graphic” below, and “tint continuity is focused on” corresponds to “photograph or gradation” below.
When image data is mapped to a color space that can be represented by a printer, if colors existing in the image data are subjected to color degeneration, the user may recognize different colors as identical colors in a print result. If the image data includes “characters and a background formed by a color close to that of the characters”, it is undesirable for the user that the characters become unreadable due to color degeneration. The same applies to a graphic. It is desirable to perform “color degeneration correction” for such content so as to reduce the degree of color degeneration after mapping.
On the other hand, since color degeneration correction focuses on ensuring the distance between colors in the limited color space that can be represented by the printer, tint continuity among close colors is reduced as compared with that before color degeneration correction. A “photograph” in the image data includes many portions where a color changes continuously and slightly, such as the sky or the skin of a person. If color degeneration correction shifts chroma or lightness in an attempt to ensure the distance between colors, the characteristic of tint continuity changes, and an output result undesirable for the user is obtained. The same applies to gradation. It is undesirable to perform “color degeneration correction” for such content after mapping.
Furthermore, various kinds of contents may be mixed in the image data input from the user. By uniformly performing or not performing color degeneration correction for the image data, an output result undesirable for some contents may be obtained. To cope with this, in this embodiment, a content in the image data is analyzed in advance, and it is determined whether the content focuses on maintaining the color difference. With this arrangement, it is possible to perform color degeneration correction only for a content for which it is desirable to perform color degeneration correction, thereby preventing an output result undesirable for some contents from being obtained.
In step S101, the CPU 102 inputs image data. For example, the CPU 102 inputs image data stored in the storage medium 104. Alternatively, the CPU 102 may input image data via the data transfer I/F 106. The CPU 102 acquires image data including color information from the input original data (acquisition of color information). The image data includes values representing a color expressed in a predetermined color space. In acquisition of the color information, the values representing a color are acquired. The values representing a color are values acquired from sRGB data, Adobe RGB data, CIE-L*a*b* data, CIE-LUV data, XYZ color system data, xyY color system data, HSV data, or HLS data.
In step S102, the CPU 102 acquires a gamut mapping table recorded in advance in the storage medium 104.
In step S103, the CPU 102 performs gamut mapping for the color information of each pixel of the image data using the gamut mapping table acquired in step S102. The image data obtained after gamut mapping is stored in the RAM 103 or the storage medium 104. More specifically, the gamut mapping table is a three-dimensional lookup table. By the three-dimensional lookup table, a combination of output pixel values (Rout, Gout, Bout) can be calculated with respect to a combination of input pixel values (Rin, Gin, Bin). If each of the input values Rin, Gin, and Bin has 256 tones, a table Table1[256][256][256][3] having 256×256×256=16,777,216 sets of output values in total is preferably used. The CPU 102 performs color conversion using the gamut mapping table. More specifically, color conversion is implemented by performing, for each pixel of the image formed by the RGB pixel values of the original data input in step S101, the following processing given by:
Rout=Table1[Rin][Gin][Bin][0] (1)
Gout=Table1[Rin][Gin][Bin][1] (2)
Bout=Table1[Rin][Gin][Bin][2] (3)
The table size may be reduced by decreasing the number of grids of the LUT from 256 grids to, for example, 16 grids and deciding output values by interpolating table values of a plurality of grids. Instead of the three-dimensional LUT, a 3×3 matrix operation may be used.
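The following sketch (illustrative only) implements the lookup of equations (1) to (3) in a form generalized to a reduced grid; with N=256 it reduces to the direct table reference above, and with N=16 it interpolates the table values of the surrounding grids. Trilinear interpolation is used here for brevity, although tetrahedral interpolation as mentioned earlier may be used instead:

```python
import numpy as np

def apply_3dlut(rgb, lut):
    # rgb: (..., 3) input pixel values in 0..255; lut: (N, N, N, 3) gamut
    # mapping table whose grid points are evenly spaced over 0..255.
    n = lut.shape[0]
    step = 255.0 / (n - 1)
    pos = np.asarray(rgb, dtype=np.float64) / step
    i0 = np.clip(pos.astype(int), 0, n - 2)      # lower grid index
    f = pos - i0                                 # fractional offset in cell
    out = np.zeros(pos.shape[:-1] + (3,))
    for dr in (0, 1):                            # trilinear interpolation
        for dg in (0, 1):
            for db in (0, 1):
                w = (np.where(dr, f[..., 0], 1 - f[..., 0]) *
                     np.where(dg, f[..., 1], 1 - f[..., 1]) *
                     np.where(db, f[..., 2], 1 - f[..., 2]))
                out += w[..., None] * lut[i0[..., 0] + dr,
                                          i0[..., 1] + dg,
                                          i0[..., 2] + db]
    return out
```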
In step S104, the CPU 102 sets areas in the original data input in step S101. Various methods of setting areas are considered. As an example, a method using a white pixel value of the input image as a determination criterion is used.
In step S201, the CPU 102 acquires one pixel in the image data. In this flowchart, pixel analysis is performed by setting the pixel as a pixel of interest.
In step S202, it is determined whether the pixel of interest is a white pixel. The white pixel is, for example, a pixel with R=G=B=255 in 8-bit information. In this processing, whether the pixel of interest is a white pixel may be determined after the gamut mapping in step S103 is performed, or may be determined for a pixel value before the gamut mapping is performed. If the determination processing is performed for the pixel after the gamut mapping, a table is used which ensures that a pixel with R=G=B=255 holds those values even after the gamut mapping, that is, remains a white pixel. If it is determined that the pixel of interest is a white pixel, the process advances to step S203; otherwise, the process advances to step S204.
In step S203, the CPU 102 sets white seg in the pixel of interest. In this embodiment, seg is represented by a numerical value, and a seg number of 0 is set for white seg.
In step S204, the CPU 102 temporarily sets a seg number in the pixel of interest. Inspection is performed in the scan direction of the pixels shown in
In step S205, the CPU 102 determines whether the temporary seg numbers have been assigned to all the pixels of the image data. If it is determined that the temporary seg numbers have been assigned to all the pixels, the process advances to step S206; otherwise, the processes from step S201 are repeated.
In step S206, the CPU 102 corrects the temporary seg number. The pixel of interest is corrected in the same scan direction as in
In step S207, the CPU 102 determines whether the temporary seg number has been corrected for all the pixels of the image data. If it is determined that the temporary seg number has been corrected for all the pixels, the processing shown in
The above-described area setting processing is merely an example. A pixel other than the white pixel in the pixel information may be set as the criterion of the area setting processing, and determination of the seg number may be implemented by a different procedure. For example, reduction processing may be performed in advance, and all pixels that overlap each other in the reduced area may be determined to have the same seg number. By using the reduction processing, it is possible to shorten the processing time of the area setting processing. Information other than the R, G, and B information of the pixel may be used.
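Steps S201 to S207 can be pictured as ordinary two-pass connected-component labeling with white pixels as the background. The sketch below is illustrative only; 4-connectivity and the merge policy are assumptions, since the embodiment leaves these details open:

```python
import numpy as np

def set_areas(img):
    # img: (H, W, 3) uint8 image. White pixels (R=G=B=255) receive seg
    # number 0; other pixels are grouped into areas by two-pass labeling.
    H, W, _ = img.shape
    white = np.all(img == 255, axis=-1)
    seg = np.zeros((H, W), dtype=int)
    parent = {}                          # equivalences of temporary numbers

    def find(x):                         # root of a temporary seg number
        while parent[x] != x:
            x = parent[x]
        return x

    nxt = 1
    for y in range(H):                   # steps S201-S205: temporary numbers
        for x in range(W):
            if white[y, x]:
                continue                 # step S203: white seg = 0
            up = seg[y - 1, x] if y > 0 and not white[y - 1, x] else 0
            left = seg[y, x - 1] if x > 0 and not white[y, x - 1] else 0
            if up and left:
                ra, rb = find(up), find(left)
                seg[y, x] = min(ra, rb)
                parent[max(ra, rb)] = min(ra, rb)   # record equivalence
            elif up or left:
                seg[y, x] = find(up or left)
            else:
                seg[y, x] = nxt          # step S204: new temporary number
                parent[nxt] = nxt
                nxt += 1
    for y in range(H):                   # step S206: correct temporary numbers
        for x in range(W):
            if seg[y, x]:
                seg[y, x] = find(seg[y, x])
    return seg
```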
In step S105, the CPU 102 analyzes the area set in step S104. This analysis processing is processing of generating information for determining whether to create and apply a color difference correction table in the succeeding stage. In this embodiment, with respect to each area, information indicating “character or graphic” or “photograph or gradation” is generated as determination information.
In this embodiment, by analyzing contents included in the image data, control can be executed to perform color degeneration correction for “character or graphic” and not to perform color degeneration correction for “photograph or gradation”. That is, color degeneration correction is not uniformly performed for the image data, and is performed only for a content for which it is preferable to perform color degeneration correction. With this arrangement, it is possible to reduce a failure of an image caused by uniformly performing color degeneration correction.
In step S105, the CPU 102 determines the presence/absence of an edge in a hatched area using an edge detection area 402 or 403 having a predetermined area. There are various methods of determining the presence/absence of an edge. As an example, a method using “same pixels”, “similar pixels”, and “different pixels” is used here. As shown in
A histogram 404 corresponds to
Based on the frequency distribution of the histogram, the area can be classified into “character or graphic” or “photograph or gradation”. Since “character or graphic” can be determined in a case where the number of similar pixels is small, the area is classified by setting thresholds for the same pixels, similar pixels, and different pixels. For example, based on whether a condition given by expression (4) below is satisfied, the CPU 102 determines whether the area is “character or graphic”.
number of same pixels>TH_same && number of similar pixels<TH_near && number of different pixels>TH_other (4)
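An illustrative sketch of this classification follows; the definitions of “same”, “similar”, and “different” and the threshold values below are placeholders, since the embodiment sets them as appropriate:

```python
import numpy as np

# Placeholder thresholds for expression (4); set as appropriate.
TH_same, TH_near, TH_other = 100, 20, 50
SIMILAR_MAX = 16   # assumed channel-difference boundary of "similar"

def classify_area(pixels, ref):
    # pixels: (N, 3) RGB values inside the edge detection area;
    # ref: (3,) RGB value of the pixel of interest.
    diff = np.abs(pixels.astype(int) - np.asarray(ref, int)).max(axis=1)
    n_same = int(np.sum(diff == 0))
    n_similar = int(np.sum((diff > 0) & (diff <= SIMILAR_MAX)))
    n_different = int(np.sum(diff > SIMILAR_MAX))
    if n_same > TH_same and n_similar < TH_near and n_different > TH_other:
        return "character or graphic"        # expression (4) satisfied
    return "photograph or gradation"
```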
This analysis processing is merely an example, and the method is not limited to the above one. For example, the number of pixels as a determination result may be converted into an area ratio of the area and compared using expression (4). Furthermore, in this embodiment, edge detection using the edge detection area 402 or 403 is used for the determination processing but the determination processing may be performed based on a color distribution in the area. Thresholds may appropriately be set here. This arrangement will be described in the sixth embodiment and subsequent embodiments.
In step S106, the CPU 102 performs processing of determining for each area whether color degeneration correction is necessary. For this determination processing, the information of “character or graphic” or “photograph or gradation” determined in step S105 is used. In this embodiment, if the area is “character or graphic”, it is determined that color degeneration correction is necessary, and the process advances to step S107. On the other hand, if the area is “photograph or gradation”, it is determined that color degeneration correction is unnecessary, and the process advances to step S109.
In step S107, using the image data input in step S101, the gamut mapping table acquired in step S102, and the image data obtained after performing the gamut mapping in step S103, the CPU 102 creates a color degeneration-corrected table. The color degeneration-corrected table is created for each area divided in step S104. Note that the form of the color degeneration-corrected table is the same as the form of the gamut mapping table.
In step S108, the CPU 102 generates area data having undergone color degeneration correction by performing an operation for the area data of the corresponding area in the image data input in step S101 using the color degeneration-corrected table created in step S107. The generated color degeneration-corrected area data is stored in the RAM 103 or the storage medium 104. In step S109, the CPU 102 stores, as area data, the result obtained after the gamut mapping in step S103 in the RAM 103 or the storage medium 104.
With respect to the area data determined as “photograph or gradation”, the gamut mapping table may be switched to that for “photograph or gradation”. It is possible to perform color conversion using the characteristic of each content by applying, to “character or graphic”, the table that emphasizes a difference in tint and applying, to “photograph or gradation”, the table that focuses on tint continuity. To maintain tint continuity, the gamut mapping table need not be applied to the area of “photograph or gradation”. By not applying the gamut mapping table to some contents, the effect of reducing the memory capacity is obtained. Similarly, the table length of the gamut mapping table may be changed in accordance with the content and the table may be replaced by a simple form such as a unique conversion coefficient, thereby similarly obtaining the effect of reducing the memory capacity.
In step S110, the CPU 102 determines whether all the areas of the image data have been processed. If it is determined that all the areas have been processed, the process advances to step S111; otherwise, the processes from step S105 are repeated.
In step S111, the CPU 102 outputs the color degeneration-corrected area data stored in step S108 or the result after the gamut mapping stored in step S109 from the image processing apparatus 101 via the data transfer I/F 106. The gamut mapping may be mapping from the sRGB color space to the color reproduction gamut of the printing apparatus 108. In this case, it is possible to suppress decreases in chroma and color difference caused by the gamut mapping to the color reproduction gamut of the printing apparatus 108.
The color degeneration-corrected table creation processing in step S107 will be described in detail with reference to
In step S301, the CPU 102 detects unique colors existing in each area. In this embodiment, the term “unique color” indicates a color used in the image data. For example, in a case of black text data with a white background, the unique colors are white and black. Furthermore, for example, in a case of an image such as a photograph, the unique colors are the colors used in the photograph. The CPU 102 stores the detection result as a unique color list in the RAM 103 or the storage medium 104. The unique color list is initialized at the start of step S301. The CPU 102 repeats the detection processing for each pixel of the image data, and determines, for all the pixels included in a target object, whether the color of each pixel is different from the unique colors detected so far. If the color of the pixel is determined to be a new unique color, this color is stored in the unique color list.
As a determination method, it is determined whether the color of the target pixel is a color included in the created unique color list. In a case where it is determined that the color is not included in the list, the color information is newly added to the unique color list. In this way, the unique colors included in the area can be detected. For example, if the input image data is sRGB data, each of the input values has 256 tones, and thus up to 256×256×256=16,777,216 unique colors in total may be detected. In this case, the number of colors is enormous, thereby decreasing the processing speed. Therefore, the unique colors may be detected discretely. For example, the 256 tones may be reduced to 16 tones, and then unique colors may be detected. When the number of tones is reduced, each color may be rounded to the color of the closest grid. In this way, it is possible to detect at most 16×16×16=4,096 unique colors in total, thereby improving the processing speed.
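A minimal sketch of the discrete detection described above (illustrative; rounding each color to the center of its grid cell is one possible choice of “closest grid”):

```python
import numpy as np

def detect_unique_colors(area_rgb, levels=16):
    # area_rgb: (N, 3) uint8 pixels of one area. Reducing 256 tones to
    # `levels` tones bounds the unique color list at levels**3 entries
    # (4,096 for 16 tones) and improves the processing speed.
    step = 256 // levels
    snapped = (area_rgb.astype(int) // step) * step + step // 2
    return np.unique(snapped, axis=0)        # the unique color list
```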
In step S302, based on the unique color list detected in step S301, the CPU 102 detects the number of combinations of colors subjected to color degeneration, among the combinations of the unique colors existing in the area.
In a case where the color difference ΔE 608 is smaller than the color difference ΔE 607, the CPU 102 determines that color degeneration has occurred. Furthermore, in a case where the color difference ΔE 608 is not large enough for a color difference to be identified, the CPU 102 determines that color degeneration has occurred. This is because if the color difference between the colors 605 and 606 is large enough for the colors to be identified as different colors based on the human visual characteristic, it is unnecessary to correct the color difference. In terms of the visual characteristic, 2.0 may be used as the color difference ΔE with which the colors can be identified as different colors. That is, in a case where the color difference ΔE 608 is smaller than the color difference ΔE 607 and is smaller than 2.0, it may be determined that color degeneration has occurred.
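The determination just described can be summarized in a small predicate (illustrative sketch; ΔE here is the Euclidean distance in CIE-L*a*b*):

```python
import numpy as np

JND = 2.0   # ΔE at which two colors can be identified as different

def delta_e(lab1, lab2):
    # Euclidean color difference in CIE-L*a*b*.
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def degenerated(a_before, b_before, a_after, b_after):
    # a/b_before: the pair of colors before gamut mapping (ΔE 607);
    # a/b_after: the same pair after gamut mapping (ΔE 608).
    de_after = delta_e(a_after, b_after)
    return de_after < delta_e(a_before, b_before) and de_after < JND
```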
In step S303, the CPU 102 determines whether the number of combinations of colors determined in step S302 to be subjected to color degeneration is zero. If the number of such combinations is zero, that is, a sufficient color difference remains, the process advances to step S304, where the CPU 102 determines that the area requires no color degeneration correction, ends the processing shown in
Since color degeneration correction changes colors, combinations of colors not subjected to color degeneration would also be changed, which is unnecessary. Therefore, based on, for example, the ratio between the total number of combinations of the unique colors and the number of combinations of the colors subjected to color degeneration, it may be determined whether color degeneration correction is necessary. More specifically, in a case where the majority of all the combinations of the unique colors are combinations of colors subjected to color degeneration, it may be determined that color degeneration correction is necessary. This can suppress a color change caused by excessive color degeneration correction.
In step S305, based on the input image data, the image data having undergone the gamut mapping, and the gamut mapping table, the CPU 102 performs color degeneration correction for the combinations of the colors subjected to color degeneration.
Color degeneration correction will be described in detail with reference to
Next, the color degeneration correction processing will be described in detail. A color difference correction amount 609 that increases the color difference ΔE is obtained from the color difference ΔE 608. In terms of the visual characteristic, the difference between the color difference ΔE 608 and 2.0, which is the color difference ΔE with which the colors can be recognized as different colors, is the color difference correction amount 609. More preferably, the difference between the color difference ΔE 607 and the color difference ΔE 608 is the color difference correction amount 609. As a result of correcting the color 605 by the color difference correction amount 609 on an extension from the color 606 to the color 605 in the CIE-L*a*b* color space, a color 610 is obtained. The color 610 is separated from the color 606 by a color difference obtained by adding the color difference ΔE 608 and the color difference correction amount 609. The color 610 is on the extension from the color 606 to the color 605, but this embodiment is not limited to this. As long as the color difference ΔE between the colors 606 and 610 is equal to the color difference obtained by adding the color difference ΔE 608 and the color difference correction amount 609, the correction direction can be any of the lightness direction, the chroma direction, and the hue angle direction in the CIE-L*a*b* color space. Not only one direction but also any combination of the lightness direction, the chroma direction, and the hue angle direction may be used. Furthermore, in the above example, color degeneration is corrected by changing the color 605, but the color 606 may be changed instead. Alternatively, both the colors 605 and 606 may be changed. If the color 606 is changed, the color 606 cannot be moved outside the color gamut 602, and thus the color 606 is moved and changed on the boundary surface of the color gamut 602. In this case, with respect to the shortage of the color difference ΔE, color degeneration correction may be performed by changing the color 605.
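A sketch of the single-color correction described above (illustrative; it moves the color 605 along the extension from the color 606 and omits the gamut boundary handling mentioned for the color 606):

```python
import numpy as np

def correct_color(c_fixed, c_moved, target_de):
    # c_fixed: color 606 (kept); c_moved: color 605 (corrected); both in
    # CIE-L*a*b*. target_de: ΔE 608 plus the color difference correction
    # amount 609 (e.g. 2.0, or the pre-mapping color difference ΔE 607).
    c_fixed = np.asarray(c_fixed, dtype=float)
    c_moved = np.asarray(c_moved, dtype=float)
    d = c_moved - c_fixed
    dist = float(np.linalg.norm(d))
    if dist == 0.0 or dist >= target_de:
        return c_moved                       # already sufficiently apart
    # Move along the extension from 606 through 605 until target_de apart,
    # yielding the color 610.
    return c_fixed + d * (target_de / dist)
```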
In step S306, the CPU 102 changes the gamut mapping table using the result of the color degeneration correction processing in step S305. The gamut mapping table before the change is a table for converting the color 603 as an input color into the color 605 as an output color. In accordance with the result of step S305, the table is changed to a table for converting the color 603 as an input color into the color 610 as an output color. In this way, the color degeneration-corrected table can be created. The CPU 102 repeats the processing of changing the gamut mapping table the number of times that is equal to the number of combinations of the colors subjected to color degeneration.
As described above, by applying the color degeneration-corrected gamut mapping table to the application target area of the input image data, it is possible to perform correction of increasing the distance between the colors for each of the combinations of colors subjected to color degeneration, among the combinations of the unique colors included in the area. As a result, it is possible to efficiently reduce color degeneration with respect to the combinations of colors subjected to color degeneration. For example, if the input image data is sRGB data, the gamut mapping table is created on the premise that the input image data can have 16,777,216 colors. The gamut mapping table created on this premise is created in consideration of color degeneration and chroma even for colors not actually included in the input image data. In this embodiment, it is possible to adaptively correct the gamut mapping table with respect to the area data by detecting the colors of the area data. Then, it is possible to create the gamut mapping table for the colors of the area data. As a result, it is possible to perform preferred adaptive gamut mapping for the area data, thereby efficiently reducing color degeneration.
In this embodiment, areas are set in the input image data, and analysis is performed for each set area. Then, it is determined whether to perform color degeneration correction using an analysis result, and in a case where it is determined to perform color degeneration correction, color degeneration correction is performed. The purpose of performing color degeneration correction is to reduce color degeneration caused by the gamut mapping. In information of a character or graphic, not colors but described contents (the meaning of the characters or the shape of the graphic) are important. Therefore, if the user cannot recognize the characters or the shape of the graphic after printing by decreasing the distance between colors, visibility and readability of the contents deteriorate. To cope with this, in the case of the character or graphic, color degeneration correction is performed. On the other hand, in information of a photograph or gradation, colors themselves are often important. For example, in a photograph of a person, even if the distance between the color of the skin of the person and the color of a wall on the background in the color space is small, it is undesirable for the user to correct the color of the skin to another color. Gradation of continuous tones has the meaning of designability. However, if the distance between colors of continuous tones in the color space is corrected to generate a gradation level difference, designability deteriorates, which is undesirable for the user. The area analysis and the color degeneration correction determination processing according to this embodiment aim at determining the user's intention (the meaning of data) based on the characteristic of the image data input from the user. According to this embodiment, it is possible to perform appropriate gamut mapping for each area in accordance with the color degeneration correction determination result.
In this embodiment, the processing in a case where the input image data includes one page has been explained. The input image data may include a plurality of pages. If the input image data includes a plurality of pages, the processing procedure shown in
In this embodiment, a plurality of areas are set from the input image data but whether to perform color degeneration correction may be determined for the whole image. In this case, instead of setting a plurality of areas, “character or graphic” or “photograph or gradation” is determined for the whole image.
In this embodiment, classification is performed into two types of “character or graphic” or “photograph or gradation”, but classification contents are not limited to this. For example, since a gray line (pixel values on the gray axis from white to black), a black point, and a white point in the input image must not deviate from the gray axis as a result of color degeneration correction, color degeneration correction need not be performed for them. For example, the printer may discharge only K ink for an input gray line portion, and the control of the method of using ink may change if a pixel deviates from the gray line. Therefore, whether to perform color degeneration correction may be determined using a result of analyzing gray line information.
In this embodiment, the color degeneration-corrected gamut mapping table is applied to the input image but a correction table for performing color degeneration correction may be created for the image data having undergone gamut mapping. In this case, based on the result of the color degeneration correction processing in step S305, a correction table for converting color information before correction into color information after correction may be generated. The generated correction table is a table for converting the color 605 into the color 610 in
In this embodiment, the user may be able to input an instruction indicating whether to execute the color degeneration correction processing. In this case, a UI screen shown in
The second embodiment will be described below concerning points different from the first embodiment. The first embodiment has explained that color degeneration correction is performed for a single color in step S305. Therefore, depending on combinations of colors of the input image data, a tint may change while the degree of color degeneration is reduced. More specifically, if color degeneration correction is performed for two colors having different hue angles, and a color is changed by changing its hue angle, the tint is different from the tint of the color in the input image data. For example, if color degeneration correction is performed for blue and purple by changing a hue angle, purple is changed into red. If a tint changes, this may lead the user to suspect a failure of an apparatus such as an ink discharge failure.
Furthermore, in the first embodiment, color degeneration correction is repeated the number of times that is equal to the number of combinations of the unique colors of the input image data. Therefore, the distance between the colors can be increased reliably. However, if the number of unique colors of the input image data increases, as a result of changing the color to increase the distance between the colors, the distance between the changed color and another unique color may be decreased. To cope with this, the CPU 102 needs to repeatedly execute color degeneration correction in step S305 so as to have expected distances between colors with respect to all the combinations of the unique colors of the input image data. Since the amount of processing of increasing the distance between colors is enormous, the processing time increases.
To cope with this, in this embodiment, color degeneration correction is performed in the same direction for every predetermined hue angle by setting a plurality of unique colors as one color group. To perform correction by setting a plurality of unique colors as one color group, in this embodiment, a unique color (to be described later) as a reference is selected from the color group. Furthermore, by limiting the correction direction to the lightness direction, it is possible to suppress a change of a tint. By performing correction in the lightness direction by setting the plurality of unique colors as one color group, it is unnecessary to perform processing for all the combinations of the colors of input image data, thereby reducing the processing time.
A CPU 102 detects the number of combinations of colors subjected to color degeneration, similar to the first embodiment, with respect to the combinations of the unique colors of the input image data within the hue range 1201.
First, the CPU 102 decides a unique color (reference color) as the reference of the color degeneration correction processing for each hue range. In this embodiment, the maximum lightness color, the minimum lightness color, and the maximum chroma color are decided as reference colors.
Next, the CPU 102 calculates, for each hue range, a correction ratio R from the number of combinations of the unique colors and the number of combinations of the colors subjected to color degeneration within the target hue range. A preferred calculation formula is given by:
correction ratio R=number of combinations of colors subjected to color degeneration/number of combinations of unique colors (7)
The correction ratio R is lower as the number of combinations of the colors subjected to color degeneration is smaller, and is higher as the number of combinations of the colors subjected to color degeneration is larger. As described above, as the number of combinations of the colors subjected to color degeneration is larger, color degeneration correction can be performed more strongly.
Next, the CPU 102 calculates, for each hue range, a correction amount based on the correction ratio R and the pieces of color information of the maximum lightness, the minimum lightness, and the maximum chroma. The CPU 102 calculates, as correction amounts, a correction amount Mh on a side brighter than the maximum chroma color and a correction amount Ml on a side darker than the maximum chroma color. Similar to the first embodiment, the color information in the CIE-L*a*b* color space is represented in a color space with three axes of L*, a*, and b*. The color 1301 as the maximum lightness color is represented by L1301, a1301, and b1301. The color 1302 as the minimum lightness color is represented by L1302, a1302, and b1302. The color 1303 as the maximum chroma color is represented by L1303, a1303, and b1303. The preferred correction amount Mh is a value obtained by multiplying the color difference ΔE between the maximum lightness color and the maximum chroma color by the correction ratio R. The preferred correction amount Ml is a value obtained by multiplying the color difference ΔE between the maximum chroma color and the minimum lightness color by the correction ratio R. The correction amounts Mh and Ml are calculated by:
Mh=√((L1301−L1303)²+(a1301−a1303)²+(b1301−b1303)²)×R (8)
Ml=√((L1303−L1302)²+(a1303−a1302)²+(b1303−b1302)²)×R (9)
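An illustrative sketch of equations (7) to (9) for one hue range (function and variable names are assumptions):

```python
import numpy as np

def correction_amounts(n_pairs, n_degenerated, lab_max_l, lab_min_l, lab_max_c):
    # n_pairs: number of combinations of unique colors in the hue range;
    # n_degenerated: number of those combinations subjected to color
    # degeneration; lab_max_l / lab_min_l / lab_max_c: CIE-L*a*b* values of
    # the maximum lightness, minimum lightness, and maximum chroma colors.
    de = lambda p, q: float(np.linalg.norm(np.asarray(p, float) -
                                           np.asarray(q, float)))
    R = n_degenerated / n_pairs              # correction ratio, equation (7)
    Mh = de(lab_max_l, lab_max_c) * R        # brighter side, equation (8)
    Ml = de(lab_max_c, lab_min_l) * R        # darker side, equation (9)
    return R, Mh, Ml
```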
As described above, the color difference ΔE to be held after gamut mapping is calculated; the color difference ΔE to be held after gamut mapping is the color difference ΔE before gamut mapping.
Next, the CPU 102 generates a lightness correction table for each hue range. The lightness correction table is a table for expanding lightness between colors in the lightness direction based on the lightness of the maximum chroma color and the correction amounts Mh and Ml.
The lightness correction table is a 1DLUT. In the 1DLUT, input lightness is lightness before correction, and output lightness is lightness after correction. The lightness after correction is decided in accordance with a characteristic based on minimum lightness after correction, the lightness of the maximum chroma color after gamut mapping, and maximum lightness after correction. The maximum lightness after correction is lightness obtained by adding the correction amount Mh to the lightness of the maximum chroma color after gamut mapping. The minimum lightness after correction is lightness obtained by subtracting the correction amount Ml from the lightness of the maximum chroma color after gamut mapping. In the lightness correction table, the relationship between the minimum lightness after correction and the lightness of the maximum chroma color after gamut mapping is defined as a characteristic that linearly changes. Furthermore, the relationship between the lightness of the maximum chroma color after gamut mapping and the maximum lightness after correction is defined as a characteristic that linearly changes.
If the maximum lightness after correction exceeds the maximum lightness of the color gamut after gamut mapping, the CPU 102 performs maximum value clip processing. The maximum value clip processing is processing of subtracting the difference between the maximum lightness after correction and the maximum lightness of the color gamut after gamut mapping in the whole lightness correction table. In other words, the whole lightness correction table is shifted in the low lightness direction until the maximum lightness of the color gamut after gamut mapping becomes equal to the maximum lightness after correction. In this case, the lightness of the maximum chroma color after gamut mapping is also moved to the low lightness side. As described above, if the unique colors of the input image data are localized to the high lightness side, it is possible to improve the color difference ΔE and reduce color degeneration by using the lightness tone range on the low lightness side. On the other hand, if the minimum lightness after correction is lower than the minimum lightness of the color gamut after gamut mapping, the CPU 102 performs minimum value clip processing. The minimum value clip processing adds the difference between the minimum lightness after correction and the minimum lightness of the color gamut after gamut mapping in the whole lightness correction table. In other words, the whole lightness correction table is shifted in the high lightness direction until the minimum lightness of the color gamut after gamut mapping becomes equal to the minimum lightness after correction. As described above, if the unique colors of the input image data are localized to the low lightness side, it is possible to improve the color difference ΔE and reduce color degeneration by using the lightness tone range on the high lightness side.
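The construction of the lightness correction table, including the clip processing, can be sketched as follows (illustrative; the choice of the lightness values of the minimum and maximum lightness reference colors as the input endpoints is an assumption):

```python
import numpy as np

def build_lightness_lut(l_in_min, l_pivot, l_in_max, mh, ml,
                        l_gamut_min, l_gamut_max):
    # l_pivot: lightness of the maximum chroma color after gamut mapping;
    # l_in_min / l_in_max: assumed input endpoints (lightness of the minimum
    # and maximum lightness reference colors after gamut mapping);
    # mh / ml: correction amounts; l_gamut_min / l_gamut_max: lightness
    # range of the color gamut after gamut mapping.
    l_max_c = l_pivot + mh               # maximum lightness after correction
    l_min_c = l_pivot - ml               # minimum lightness after correction
    shift = 0.0
    if l_max_c > l_gamut_max:            # maximum value clip processing
        shift = l_gamut_max - l_max_c    # shift whole table darker
    elif l_min_c < l_gamut_min:          # minimum value clip processing
        shift = l_gamut_min - l_min_c    # shift whole table brighter
    xs = [l_in_min, l_pivot, l_in_max]   # piecewise-linear 1DLUT
    ys = [l_min_c + shift, l_pivot + shift, l_max_c + shift]
    return lambda l: float(np.interp(l, xs, ys))
```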
Next, the CPU 102 applies, to the gamut mapping table, the lightness correction table created for each hue range. First, based on color information held by the output value of the gamut mapping, the CPU 102 decides the lightness correction table of a specific hue angle to be applied. For example, if the hue angle of the output value of the gamut mapping is 25°, the CPU 102 decides to apply the lightness correction table of the hue range 1201 shown in
As described above, in this embodiment, the lightness correction table created based on the reference color is also applied to a color other than the reference color within the hue range 1201. Then, with reference to the color after the lightness correction, for example, the color 1312, mapping to a color gamut 1316 is performed so as not to change the hue, as will be described later. That is, within the hue range 1201, the color degeneration correction direction is limited to the lightness direction. With this arrangement, it is possible to suppress a change of a tint. Furthermore, it is unnecessary to perform color degeneration correction processing for all the combinations of the unique colors of the input image data, thereby making it possible to reduce the processing time.
In addition, in accordance with the hue angle of the output value of the gamut mapping, the lightness correction tables of adjacent hue ranges may be combined. For example, if the hue angle of the output value of the gamut mapping is Hn°, the lightness correction table of the hue range 1201 and that of a hue range 1202 are combined. More specifically, the lightness value of the output value after the gamut mapping is corrected by the lightness correction table of the hue range 1201 to obtain a lightness value Lc1201. Furthermore, the lightness value of the output value after the gamut mapping is corrected by the lightness correction table of the hue range 1202 to obtain a lightness value Lc1202. At this time, the intermediate hue angle of the hue range 1201 is a hue angle H1201, and the intermediate hue angle of the hue range 1202 is a hue angle H1202. In this case, the corrected lightness value Lc1201 and the corrected lightness value Lc1202 are interpolated, thereby calculating a corrected lightness value Lc. The corrected lightness value Lc is calculated by:
Lc=Lc1201×(H1202−Hn)/(H1202−H1201)+Lc1202×(Hn−H1201)/(H1202−H1201) (10)
As described above, by combining the lightness correction tables to be applied, in accordance with the hue angle, it is possible to suppress a sudden change of correction intensity caused by a change of the hue angle.
If the color space of the color information after correction is different from the color space of the output value after gamut mapping, the color space is converted and set as the output value after gamut mapping. For example, if the color space of the color information after correction is the CIE-L*a*b* color space, the following search is performed to obtain an output value after gamut mapping.
If the value after lightness correction exceeds the color gamut after gamut mapping, mapping to the color gamut after gamut mapping is performed. For example, the color 1312 shown in
Since the color difference ΔE is expanded in the lightness direction, mapping is performed by focusing on lightness more than chroma. That is, the weight Wl of lightness is larger than the weight Wc of chroma. Furthermore, since hue largely influences a tint, it is possible to minimize a change of the tint before and after correction by performing mapping while focusing on hue more than lightness and chroma. That is, the weight Wh of hue is equal to or larger than the weight Wl of lightness, and is larger than the weight Wc of chroma. As described above, according to this embodiment, it is possible to correct the color difference ΔE while maintaining a tint.
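A sketch of the weighted color-difference-minimum search (illustrative; the candidate boundary points and the weight values are assumptions, chosen so that Wh ≥ Wl > Wc):

```python
import numpy as np

def weighted_min_mapping(lab, boundary, wl=2.0, wc=1.0, wh=4.0):
    # lab: corrected color outside the gamut; boundary: (N, 3) candidate
    # L*a*b* points on the surface of the color gamut after gamut mapping.
    L, a, b = (float(v) for v in lab)
    C = np.hypot(a, b)                        # chroma of the input color
    H = np.arctan2(b, a)                      # hue angle of the input color
    Ls = boundary[:, 0]
    Cs = np.hypot(boundary[:, 1], boundary[:, 2])
    Hs = np.arctan2(boundary[:, 2], boundary[:, 1])
    dH = np.angle(np.exp(1j * (Hs - H)))      # wrapped hue angle difference
    # Weighted distance: hue weighted most, then lightness, then chroma.
    cost = wl * (Ls - L) ** 2 + wc * (Cs - C) ** 2 + wh * (C * dH) ** 2
    return boundary[int(np.argmin(cost))]
```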
Furthermore, the color space may be converted at the time of performing color difference minimum mapping. It is known that, in the CIE-L*a*b* color space, a color change in the chroma direction does not preserve the same perceived hue. Therefore, even if a change of the hue angle is suppressed by increasing the weight of hue, mapping to a color of the same perceived hue is not necessarily performed. Thus, the color space may be converted into a color space in which the hue angle is bent so that a color change in the chroma direction preserves the same hue. As described above, by performing color difference minimum mapping with weighting, it is possible to suppress a change of a tint.
This embodiment has explained the example in which the lightness correction table is created for each hue range. However, the lightness correction table may be created by combining with the lightness correction table of the adjacent hue range. More specifically, within a hue range obtained by combining the hue ranges 1201 and 1202 in
This embodiment has explained the example in which the color difference ΔE is corrected in the lightness direction by setting a plurality of unique colors as one group. As a visual characteristic, it is known that sensitivity to a lightness difference varies depending on chroma, and sensitivity to a lightness difference at low chroma is higher than sensitivity to a lightness difference at high chroma. Therefore, the correction amount in the lightness direction may be controlled by a chroma value. That is, the correction amount in the lightness direction is controlled to be small for low chroma, and correction is performed, for high chroma, by the above-described correction amount in the lightness direction. More specifically, if correction of lightness is performed by the lightness correction table, the lightness value Ln before correction and the lightness value Lc after correction are internally divided by a chroma correction ratio S to obtain a final lightness value Lc′. Based on a chroma value Sn of the output value after gamut mapping and a maximum chroma value Sm of the color gamut after gamut mapping at the hue angle of the output value after gamut mapping, the chroma correction ratio S and the value Lc′ are calculated by:
S=Sn/Sm (11)
Lc′=Ln×(1−S)+Lc×S (12)
That is, as the chroma value Sn is closer to the maximum chroma value Sm of the color gamut after gamut mapping, the chroma correction ratio S is closer to 1, and Lc′ is closer to the lightness value Lc after correction, which is obtained by the lightness correction table. On the other hand, as the chroma value Sn of the output value after gamut mapping is lower, the chroma correction ratio S is closer to 0, and Lc′ is closer to the lightness value Ln before correction. In other words, as the chroma value Sn of the output value after gamut mapping is lower, the correction amount of lightness is smaller. Furthermore, the correction amount may be set to zero in a low-chroma color gamut. With this arrangement, it is possible to suppress a color change around the gray axis. Furthermore, since color degeneration correction can be performed in accordance with the visual sensitivity, it is possible to suppress excessive correction.
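A one-function sketch of equations (11) and (12) (illustrative only):

```python
def attenuate_by_chroma(ln, lc, sn, sm):
    # ln / lc: lightness before / after the lightness correction table;
    # sn: chroma of the output value after gamut mapping; sm: maximum chroma
    # of the color gamut after gamut mapping at that hue angle.
    s = sn / sm if sm > 0 else 0.0           # chroma correction ratio S (11)
    return ln * (1.0 - s) + lc * s           # corrected lightness Lc' (12)
```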
The third embodiment will be described below concerning points different from the first and second embodiments. This embodiment will describe an example in which a content is analyzed using information different from the pixel values of input image data.
In this embodiment, determination is performed based on a drawing instruction (to be described later) as information different from the pixel values of image data. The drawing instruction includes description information such as “photograph” or “character” that is assigned on application software when the user creates image data. By using the drawing instruction, it is possible to apply correction only to a content for which it is preferable to perform color degeneration correction.
With respect to area setting processing in step S104, processing using information different from pixel values will be described.
Instruction 1) TEXT drawing instruction (X1, Y1, color, font information, character string information)
Instruction 2) BOX drawing instruction (X1, Y1, X2, Y2, color, paint shape)
Instruction 3) IMAGE drawing instruction (X1, Y1, X2, Y2, image file information)
Drawing instructions such as a DOT drawing instruction for drawing a dot, a LINE drawing instruction for drawing a line, and a CIRCLE drawing instruction for drawing a circle may also be used in accordance with the application purpose. For example, a general PDL such as Portable Document Format (PDF) proposed by Adobe, XPS proposed by Microsoft, or HP-GL/2 proposed by HP may be used.
An original page 700 in
<PAGE-001> of the first row is a tag representing the page number in this embodiment. Normally, since a PDL is designed to be able to describe a plurality of pages, a tag representing a page break is described in the PDL. In this example, the section up to </PAGE> represents the first page. In this embodiment, this corresponds to the original page 700 in
The section from <TEXT> of the second row to </TEXT> of the third row is drawing instruction 1, and this corresponds to the first row of an area 701 in
The section from <TEXT> of the fourth row to </TEXT> of the fifth row is drawing instruction 2, and this corresponds to the second row of the area 701 in
The section from <TEXT> of the sixth row to </TEXT> of the seventh row is drawing instruction 3, and this corresponds to the third row of the area 701 in
The section from <BOX> to </BOX> of the eighth row is drawing instruction 4, and this corresponds to an area 702 in
Next, the IMAGE instruction of the ninth and 10th rows corresponds to an area 703 in
An actual PDL file may integrate “STD” font data and the “PORTRAIT.jpg” image file in addition to the above-described drawing instruction group. This is because, if the font data and the image file were managed separately, the character portion and the image portion could not be formed by the drawing instructions alone, and the information needed to form the image would be missing.
In an original page described in PDL, like the original page 700 shown in
In addition, it is found that both the BOX instruction and the IMAGE instruction are apart from the TEXT instructions by 100 pixels in the Y direction.
Next, in the BOX instruction and the IMAGE instruction, the start points and the end points of the drawing X-coordinates are as follows, and it is found that these are apart by 50 pixels in the X direction.
Thus, three areas can be set as follows.
A CPU 102 determines a content in the first area as “character or graphic” in step S105. The CPU 102 determines a content in the second area as “character or graphic” in step S105. The CPU 102 determines a content in the third area as “photograph or gradation” in step S105.
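As a rough illustration of this coordinate-based area setting, the following Python sketch groups drawing instructions whose bounding boxes lie within a gap threshold of one another. The DrawOp structure, the greedy merge, and the 50-pixel gap are assumptions for illustration; the embodiment states only that areas are set from the coordinates in the drawing instructions.

from dataclasses import dataclass

@dataclass
class DrawOp:
    kind: str  # "TEXT", "BOX", "IMAGE", ...
    x1: int
    y1: int
    x2: int
    y2: int

def group_into_areas(ops, gap=50):
    # Greedily merge ops whose bounding boxes come closer than `gap`
    # pixels to an existing area; otherwise start a new area.
    areas = []  # each entry: [bounding box, list of ops]
    for op in ops:
        for area in areas:
            bx1, by1, bx2, by2 = area[0]
            if (op.x1 < bx2 + gap and op.x2 > bx1 - gap and
                    op.y1 < by2 + gap and op.y2 > by1 - gap):
                area[0] = (min(bx1, op.x1), min(by1, op.y1),
                           max(bx2, op.x2), max(by2, op.y2))
                area[1].append(op)
                break
        else:
            areas.append([(op.x1, op.y1, op.x2, op.y2), [op]])
    return areas

The type of each area could then be decided from the instruction kinds it contains, for example TEXT and BOX mapping to “character or graphic” and IMAGE mapping to “photograph or gradation”.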
As described above, according to this embodiment, a content is analyzed using a drawing instruction as information different from the pixel values of the input image data. By using drawing instructions, different areas can be set for overlapping contents for which determination is difficult using only the input image data. For example, in a POP image for a retail shop or the like, characters of price information may be drawn partially superimposed on a product photograph. In this case as well, it is possible to execute control to perform color degeneration correction for the characters of the price information and not to perform it for the product photograph. Furthermore, it is possible to set, as the same area, information in which characters are horizontally arranged with spaces between them, like the character string in the area 701. Furthermore, an area can be set even in a case where the background is solidly filled with a given color other than white.
The fourth embodiment will be described below concerning points different from the first to third embodiments. This embodiment will describe an arrangement of determining, based on information different from PDL description information and pixel information, whether to perform color degeneration correction.
This embodiment will explain an example of an application with which the user can assign, to a pixel of an input image, information indicating whether to perform color degeneration correction. The first embodiment has explained that a graphic should focus on visibility and readability. However, a graphic may contain information in which the color itself carries meaning. For example, a color used for a corporate logo is called a corporate color, and is chosen to impress a corporate image. Therefore, in a case where a corporate logo is superimposed on a background graphic, changing a color included in the logo may be undesirable even if visibility and readability would improve. In this embodiment, in a case where a graphic includes an area where color degeneration correction should not be performed, it is determined, based on information different from PDL description information and pixel information, whether to perform color degeneration correction.
In this embodiment, the application can accept, from the user, an operation of setting whether to perform color degeneration correction for a designated pixel. With this arrangement, the user's intention concerning execution of color degeneration correction can be reflected.
Upon the pressing of an area of “+”, a tint designation window 808 shown in
Upon the pressing of the filling palette 804, the tint designation window 808 shown in
The user can apply the color to the image display portion 802 by selecting the designated color with a pointer and dragging it onto the image display portion 802. The user can use a plurality of methods in an area selection portion 805 to apply the color to a specific portion of the image displayed in the image display portion 802. For example, the user can apply the color to the whole of an object area onto which an arrow is dragged. By designating a designated area in the image display portion 802, as displayed by a solid-line frame or a dotted-line frame, the area can be filled with the color.
The editing application 902 outputs the RGB information of image data as well as α plane information with respect to an area designated in the tint designation palette portion 803. The α plane information may be binary information of 0 and 1 or multivalued information. In other words, the α plane information is information of an area for which the user designates not to perform color degeneration correction. The RGB information and the α plane information are input to the image processing apparatus 101, and gamut mapping is performed based on these pieces of information. The RGB information obtained as a result of performing the gamut mapping is input to the printing apparatus 108 and converted into quantization data for printing. The α plane information may be used for ink separation processing, γ conversion processing, and quantization processing performed in the printing apparatus 108, as indicated by dotted lines.
In step S401, the CPU 102 inputs input image data and the corresponding α plane data. The α plane data includes, for each pixel, ON/OFF information for determining whether color degeneration correction may be performed. In this embodiment, α=1 (ON) indicates that no color degeneration correction is performed, and α=0 (OFF) indicates that color degeneration correction is performed.
Steps S102 and S103 of
In step S402, area setting processing is performed, similarly to the first embodiment. In addition, each area is set so that its α plane data consists only of ON values or only of OFF values.
Step S105 of
In step S403, the CPU 102 determines whether it is necessary to perform color degeneration correction for each area.
In step S501, the CPU 102 determines whether the α plane information assigned to a seg number=N is α=1 (ON). If the determination result indicates α=1 (ON), the process advances to step S503. Alternatively, if the determination result indicates α=0 (OFF), the process advances to step S502.
In step S502, the CPU 102 determines whether the area of the seg number=N is “character or graphic”. If it is determined that the area is “character or graphic”, the process advances to step S504. If it is determined that the area is not “character or graphic” (=the area is “photograph or gradation”), the process advances to step S503.
In step S503, the CPU 102 determines not to perform color degeneration correction for the area of the seg number=N, and stores the determination result in a memory area such as the RAM 103. In step S504, the CPU 102 determines to perform color degeneration correction for the area of the seg number=N, and stores the determination result in the memory area such as the RAM 103.
An area for which α=1 (ON) is determined is an area that the user has designated as one for which no color degeneration correction is to be performed. In this case, it is determined not to perform color degeneration correction without determining the content. On the other hand, in a case where α=0 (OFF) is determined, whether color degeneration correction is necessary is determined based on the determination result of the content.
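The determination of steps S501 to S504 amounts to the following small predicate (a sketch; the function and parameter names are illustrative):

def needs_color_degeneration_correction(alpha_on, content_type):
    # S501: a user-designated area (alpha = 1, ON) is never corrected.
    if alpha_on:
        return False  # S503
    # S502: otherwise the content determination result decides.
    return content_type == "character or graphic"  # True: S504, False: S503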
As described above, according to this embodiment, designation of whether to perform color degeneration correction for a pixel designated on the application is accepted from the user. This makes it possible to perform, for each area, appropriate gamut mapping that reflects the user's intent in creating the image data.
According to this embodiment, a color designated on the application is not limited to a corporate color. For example, each route in a route map presented at a train ticket machine or the like is given its own color, so the designated color may be such a route color. Furthermore, stamp information such as “confidential” or “reproduction prohibited” is often set in red as a warning color, and this color desirably remains unchanged regardless of the document; the designated color may therefore be the color used for the stamp information. The designated color may also be a color used for legend information of a graph, where colors are desirably identical among pages. According to this embodiment, it is possible to execute control not to perform color degeneration correction for a color whose change is undesirable, even for the sake of visibility and readability.
The fifth embodiment will be described below concerning points different from the first to fourth embodiments. According to this embodiment, in a case where content information of each area is determined based on a likelihood, the correction intensity of color degeneration correction is controlled in accordance with the likelihood.
This embodiment assumes a case where the probability that the analysis result of a content is “photograph or gradation” and the probability that it is “character or graphic” cannot be uniquely decided. In this case, if the type with the higher probability is simply adopted and color degeneration correction is performed, the result may be unsuitable for the content, for example, a loss of color continuity. To cope with this, in this embodiment, the correction intensity of color degeneration correction is controlled in accordance with the likelihood of the content information of each area. With this arrangement, it is possible to reduce the possibility that color degeneration correction produces a result unsuitable for the content, such as a loss of color continuity.
As an example, assume that the likelihood that a given area set in step S104 is “character or graphic” is 80%.
That is, in the above-described example, since the likelihood of “character or graphic” is 80%, the correction intensity of color degeneration correction is weakened from 100% to 80%. As a result, it is possible to reduce the possibility that the color degeneration correction is excessive.
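A minimal sketch of this likelihood-weighted correction, assuming a linear per-channel blend between the uncorrected and fully corrected values (the embodiment states only that the intensity is weakened in accordance with the likelihood):

def blend_by_likelihood(uncorrected, corrected, likelihood):
    # likelihood = 0.8 applies the correction at 80% intensity.
    return tuple(u + (c - u) * likelihood
                 for u, c in zip(uncorrected, corrected))

For example, blend_by_likelihood((120, 30, 40), (140, 30, 40), 0.8) yields (136.0, 30.0, 40.0).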
If the likelihood of “character or graphic” is larger than 0%, it is determined, in the determination processing in step S106, that color degeneration correction is to be performed for the area.
As described above, according to this embodiment, it is possible to control the correction intensity based on the likelihood of the content analyzed for an area. With this arrangement, it is possible to perform gamut mapping on which the likelihood is reflected, even for an area that is difficult to determine conclusively as “character or graphic” or “photograph or gradation” from the analysis result.
The correction intensity of color degeneration correction indicates the degree to which color continuity is maintained. In this embodiment, for a content that is prone to erroneous determination, it is possible to implement control that maintains color continuity to some extent while also ensuring a distance between colors to some extent.
The sixth embodiment will be described below concerning points different from the first to fifth embodiments. This embodiment assumes a case where the areas set in step S104 cannot be set so as to prevent one area from including a plurality of contents.
First, assume that with respect to such an area, determination processing is performed using, as the threshold THnear, a value THratio decided as a determination criterion in step S105 based on the ratio between the number of area pixels and the number of similar pixels in the area. In the determination processing in step S105, a target pixel is inspected based on an edge detection area. The target pixel is compared with the pixels around it; if the ratio of similar pixels (pixels whose pixel-value difference from the target pixel falls within a predetermined range) is high and the ratios of identical pixels and different pixels are low, the target pixel is determined as a “similar pixel”. The area is scanned with an edge detection filter centered on each target pixel, and the number of pixels determined as similar pixels in the area (the number of similar pixels) is acquired.
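As a rough sketch of this counting (not the embodiment's exact filter), the following Python code classifies a pixel as a similar pixel when the fraction of window neighbors whose value differs from it by a small nonzero amount reaches a threshold. The window size, difference range, and ratio are illustrative assumptions, and the check of identical/different pixel ratios is simplified away:

import numpy as np

def count_similar_pixels(gray, win=2, sim_thresh=8, ratio=0.5):
    # gray: 2-D array of pixel values for one area.
    h, w = gray.shape
    count = 0
    for y in range(win, h - win):
        for x in range(win, w - win):
            block = gray[y - win:y + win + 1, x - win:x + win + 1].astype(int)
            diff = np.abs(block - int(gray[y, x]))
            # Similar neighbor: nonzero difference within the range.
            similar = np.logical_and(diff > 0, diff <= sim_thresh)
            if similar.mean() >= ratio:
                count += 1
    return count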
As compared with, for example, an area including only a photographic content, the above-described area including a plurality of types of contents has a low similar pixel ratio. Therefore, such an area may be determined as “character or graphic”. For example, in
For example, consider a case where THratio is calculated by setting the similar pixel ratio in the area to 20%. In this case, since the determination threshold THratio of the area 1805 is 60, “photograph or gradation” is determined.
To cope with this, in this embodiment, a value THabs decided regardless of the number of area pixels is used as the threshold THnear. The area including both the photographic content and the graphic content has a low similar pixel ratio, as compared with an area including only the photographic content, but has the same number of similar pixels. In this embodiment, by determining the type of the content based on the number of similar pixels regardless of the number of area pixels, for example, it is possible to reduce the influence of a decrease in similar pixel ratio caused by the graphic content in the area including both the photographic content and the graphic content.
In this embodiment, in step S105, the number of similar pixels existing in the area is acquired. The fixed value THabs, independent of the total number of pixels of the area, is set as the threshold for determining “photograph or gradation”.
In step S2002, the CPU 102 compares the number of similar pixels acquired in step S2000 with THnear. If it is determined that a condition of “number of similar pixels>THnear” is satisfied, the CPU 102 determines in step S2003 that the type of the content is “photograph or gradation”. On the other hand, if it is determined that the above condition is not satisfied, the CPU 102 determines in step S2004 that the type of the content is “character or graphic”.
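The fixed-threshold decision of steps S2002 to S2004 can be sketched as follows (THabs = 100 follows the example used later in the text):

def content_type_fixed(num_similar, th_abs=100):
    if num_similar > th_abs:  # S2002
        return "photograph or gradation"  # S2003
    return "character or graphic"  # S2004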
Note that the value of THabs may be decided in accordance with the number of pixels of the content to be determined as “photograph or gradation”.
As described above, in this embodiment, a fixed threshold independent of the number of area pixels is set. Thus, for an area that includes both a photographic content and a graphic content in one processing unit, “photograph or gradation” can be determined whenever a photographic content of a predetermined size or more exists. For example, with respect to the area including both the photographic content and the graphic content, no color degeneration correction is performed, thereby maintaining the characteristic of tint continuity. This applies not only in a case where color degeneration correction is performed but also in a case where different color correction is performed in accordance with the type of the content; in either case, it is possible to prevent the user from being given a sense of incongruity.
When determining the type of the content, the determination method is not limited to the above-described one, and another method may be used. For example, in a case where image data is PDL data such as PDF data and an attribute value for each area is added, the type of the content may be determined using the attribute value. For example, a graphic type may be determined for data added with a text attribute or a vector attribute, and a photographic type may be determined for data added with a bitmap attribute. By using the attribute value, it is unnecessary to scan the image data to perform the determination processing, thereby implementing processing at a higher speed.
The seventh embodiment will be described below concerning points different from the first to sixth embodiments. The sixth embodiment has explained that the value THabs independent of the number of area pixels is set as the threshold THnear. In this case, all the areas where the number of similar pixels is small are determined as “character or graphic”.
Note that even for contents of the same size, the portion on which the user focuses may differ depending on the arrangement of the contents.
Assume here that a photographic content smaller than an ID-photo size on an A4 sheet is determined as “character or graphic”. In a case where the photographic contents 2102 and 2104 are small, each has a small number of similar pixels, and each is thus determined as “character or graphic”, the effect of ensuring the distance between colors by color degeneration correction for the graphic content 2101 is large in
In this embodiment, in addition to the value THabs independent of the number of area pixels, a value THratio dependent on the number of area pixels is used. As compared with an area including both a photographic content and a graphic content, an area including only the photographic content has a high similar pixel ratio. In this case, by using the similar pixel ratio as a determination threshold, it is possible to determine “photograph or gradation” even for an area that includes only a photographic content and whose number of similar pixels is smaller than THabs.
In this embodiment, in step S105 of
In this embodiment, as an example, 50% of the number of area pixels is set as THratio, and 100 is set as THabs. In a case where the number of similar pixels in the area exceeds at least one of these values, “photograph or gradation” is determined. In this example, if the number of area pixels is 200, THratio is equal to THabs. If the number of area pixels is larger than 200, THabs is used as the determination criterion for determining “photograph or gradation”. The number of area pixels at which THratio equals THabs is the inflection point IF of the threshold.
In the following description, the photographic contents 2102 and 2104 have 50 as the number of similar pixels and 80 as the number of area pixels. The graphic content 2101 has 0 as the number of similar pixels and 500 as the number of area pixels. Furthermore, 50% of the number of area pixels is set as THratio and 100 is set as THabs.
In step S2200, the CPU 102 acquires the number of area pixels. Since an area 2103 has 580 as the number of area pixels, THratio is 290. In step S2201, the CPU 102 acquires the number of similar pixels in the area. Then, in step S2202, the CPU 102 determines whether a condition of “THabs >THratio” is satisfied. If it is determined that the condition is satisfied, the CPU 102 sets THratio as THnear in step S2203. On the other hand, if it is determined that the condition is not satisfied, the CPU 102 sets THabs as THnear in step S2204.
In this example, since THabs is 100 and THratio is 290, it is determined that the condition is not satisfied, and THabs=100 is set as THnear in step S2204.
In step S2205, the CPU 102 compares the number of similar pixels acquired in step S2201 with THnear. If it is determined that a condition of “number of similar pixels>THnear” is satisfied, the CPU 102 determines in step S2206 that the type of the content is “photograph or gradation”. On the other hand, if it is determined that the above condition is not satisfied, the CPU 102 determines in step S2207 that the type of the content is “character or graphic”.
In this example, since the number of similar pixels is 50 and THnear is 100, it is determined that the condition is not satisfied, and “character or graphic” is determined in step S2207. On the other hand, for example, with respect to an area 2105, the number of area pixels is 80, the number of similar pixels is 50, and THratio is 40. Therefore, since the condition of “THabs >THratio” is satisfied in step S2202, THratio=40 is set as THnear in step S2203. Since the condition of “number of similar pixels>THnear” is satisfied in step S2205, “photograph or gradation” is determined in step S2206.
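The decision of steps S2200 to S2207 can be sketched as follows; the 50% ratio and THabs = 100 are the example values given above:

def content_type_combined(num_area, num_similar, th_abs=100, ratio=0.5):
    th_ratio = num_area * ratio                          # from S2200
    th_near = th_ratio if th_abs > th_ratio else th_abs  # S2202-S2204
    if num_similar > th_near:                            # S2205
        return "photograph or gradation"                 # S2206
    return "character or graphic"                        # S2207

# Worked examples from the text:
# area 2103: 580 area pixels, 50 similar -> THnear = 100 -> "character or graphic"
# area 2105: 80 area pixels, 50 similar  -> THnear = 40  -> "photograph or gradation"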
As described above, in the case of a content with a small number of area pixels, the determination processing is performed based on the threshold using the similar pixel ratio, thereby making it possible to determine “photograph or gradation” for an area including only a single photographic content. With respect to the area including only a single photographic content, no color degeneration correction is performed, thereby maintaining the characteristic of tint continuity.
In this example, as a graphic content with a high similar pixel ratio and a small number of similar pixels, for example, a solid graphic content with a small number of area pixels and a blurred edge is assumed. Since this graphic content has a small number of area pixels, the effect of improving color discriminability by correcting the solid portion and improving the appearance by chroma enhancement is large compared with the adverse effect of tonality deterioration caused by correcting the blurred portion. In the above-described processing, however, such a graphic content is determined as “photograph or gradation”. Attention is paid here to the fact that a graphic content tends to include many portions where the same color continues. By using, for the number of same pixels, a threshold THratio_same obtained as a ratio of the number of area pixels, it is possible to improve the determination accuracy in this embodiment. The number of same pixels in this embodiment will be described below.
In this embodiment, in the determination processing in step S105, a target pixel is inspected based on an edge detection area. The target pixel is compared with the pixels around it; if the ratio of identical pixels (pixels having the same pixel value as the target pixel) is high and the ratios of similar pixels and different pixels are low, the target pixel is determined as a “same pixel”. The area is scanned with an edge detection filter centered on each target pixel, and the number of pixels determined as same pixels in the area (the number of same pixels) is acquired.
In step S2302, the CPU 102 acquires the number of same pixels in the area, as described above. In step S2303, the CPU 102 compares the number of same pixels acquired in step S2302 with THratio_same. If it is determined that the condition of “number of same pixels<THratio_same” is satisfied, the process advances to step S2304. Steps S2304 to S2309 are the same as steps S2202 to S2207 of
On the other hand, if it is determined that the condition of “number of same pixels<THratio_same” is not satisfied, the process advances to step S2310. In step S2310, the CPU 102 compares the number of similar pixels acquired in step S2301 with THabs. If it is determined that a condition of “number of similar pixels>THabs” is satisfied, the CPU 102 determines in step S2311 that the type of the content is “photograph or gradation”. On the other hand, if it is determined that the condition is not satisfied, the CPU 102 determines in step S2312 that the type of the content is “character or graphic”. Thus, the above-described graphic content with a high similar pixel ratio and a small number of similar pixels can be determined as “character or graphic”.
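Adding the same-pixel pre-check of steps S2302 to S2312 on top of the previous sketch (reusing content_type_combined from above; setting THratio_same to 50% of the number of area pixels is an illustrative assumption, since the text defines it only as a ratio of the number of area pixels):

def content_type_with_same_check(num_area, num_similar, num_same,
                                 th_abs=100, ratio=0.5, ratio_same=0.5):
    th_ratio_same = num_area * ratio_same
    if num_same < th_ratio_same:  # S2303 satisfied: few same pixels
        # S2304-S2309 are the combined decision shown earlier.
        return content_type_combined(num_area, num_similar, th_abs, ratio)
    # S2310: many same pixels suggest a solid graphic; use THabs only.
    if num_similar > th_abs:  # S2311
        return "photograph or gradation"
    return "character or graphic"  # S2312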
Points different from the first to seventh embodiments will be described below. In the seventh embodiment, in a case where the number of similar pixels is larger than one of THratio and THabs, “photograph or gradation” is determined. That is, in a case where the number of area pixels exceeds the inflection point IF decided by THratio and THabs, as shown in
It is not that a “character or graphic” content includes no similar pixels; anti-aliasing and drop shadows, for example, produce similar pixels. The user visually perceives an image with a sufficiently large area at a distance from the printed product. For example, a poster in a shop or the like has its chroma enhanced because it must attract people's attention and its characters must be easily viewable. In this case, it is conceivable to prioritize the effect of the whole content over the deterioration in image quality caused by color degeneration correction of anti-aliased parts in the details of the content. In a case where such a graphic content includes similar pixels whose number is equal to or larger than the determination threshold THabs, and the number of area pixels exceeds the inflection point IF, “photograph or gradation” is uniformly determined. By increasing THabs, it is possible to determine the above-described graphic content as “character or graphic”. However, depending on the size of the graphic content, the desired determination result may not be obtained.
For example, assume that there are a graphic content having 10 as the number of similar pixels and 100 as the number of area pixels and an enlarged graphic content having 100 as the number of similar pixels and 1,000 as the number of area pixels. As an example, 50% of the number of area pixels is set as THratio and 100 is set as THabs. At this time, the number of area pixels at the inflection point IF is 200 and is between 100 and 1,000. Therefore, THnear is THratio=50 for the former graphic content, and THnear is THabs=100 for the latter graphic content. As a result, while the former graphic content is determined as “character or graphic”, the latter graphic content is determined as “photograph or gradation”.
To determine “character or graphic” in a case where the size is sufficiently large with respect to the graphic content having a low similar pixel ratio and including similar pixels, a determination threshold dependent on the similar pixel ratio is considered to be provided even in a case where the number of area pixels is large. However, if the determination threshold based on the number of similar pixels is eliminated, an area including both a graphic content and a photographic content may be determined as “character or graphic”, as described in the sixth embodiment.
To cope with this, in this embodiment, THabs is not made constant but is controlled using a weighting factor α dependent on the number of area pixels. When determining an area including both a graphic content and a photographic content, the area can still be determined as “photograph or gradation”. On the other hand, it is possible to determine, as “character or graphic”, a graphic content having a low similar pixel ratio and including similar pixels. Furthermore, with respect to a photographic content of the predetermined size targeted by THabs, as the number of area pixels becomes larger, the ratio of the photographic content to the area becomes lower. Therefore, as the number of area pixels becomes larger, the effect obtained by applying color degeneration correction to the graphic content becomes larger.
As an example, 50% of the number of area pixels is set as THratio, 100 is set as THabs, and (number of area pixels − number of area pixels at the inflection point IF) × 0.1 is set as the weighting factor α. At this time, when the number of area pixels is 200, α + THabs is equal to THratio.
Processing in step S2402 will be described below.
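Under these example values, the weighted-threshold decision of this embodiment can be sketched as follows (a sketch, not the exact step S2402 logic; the slope of 0.1 beyond the inflection point follows the example above):

def content_type_weighted(num_area, num_similar,
                          th_abs=100, ratio=0.5, slope=0.1):
    th_ratio = num_area * ratio
    num_area_at_if = th_abs / ratio  # area size where THratio == THabs (200 here)
    if num_area <= num_area_at_if:
        th_near = th_ratio
    else:
        # Weighting factor alpha grows with the area size.
        alpha = (num_area - num_area_at_if) * slope
        th_near = th_abs + alpha
    if num_similar > th_near:
        return "photograph or gradation"
    return "character or graphic"

# Enlarged graphic from the earlier example: 1,000 area pixels, 100 similar
# pixels -> THnear = 100 + (1000 - 200) * 0.1 = 180 -> "character or graphic".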
As described above, according to this embodiment, in a case where the number of area pixels is large, the threshold used for the determination with THabs is weighted in accordance with the number of area pixels. This makes it possible to cope with an increase in the number of similar pixels caused by an increase in the number of area pixels while maintaining the effect of THabs. For example, a graphic content of the predetermined size or more that has a low similar pixel ratio but a large number of similar pixels can appropriately be determined as “character or graphic”.
As described above, according to each of the embodiments, it is possible to appropriately perform color degeneration correction for each content in image data. With this arrangement, it is possible to reduce the deterioration of image quality caused by uniformly performing color degeneration correction. Furthermore, the present invention is not limited to the arrangement described in each of the embodiments; other arrangements, such as a different area setting method or color conversion method, may be used as long as the above-described effect is obtained. For example, in each of the embodiments, the image processing apparatus 101 causes the CPU 102 to perform the area setting processing and the color conversion processing. However, the CPU 102 may be replaced by an ASIC, a GPU, an FPGA, or the like to perform these processes.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2022-203525, filed Dec. 20, 2022, and Japanese Patent Application No. 2023-086431, filed May 25, 2023, that are hereby incorporated by reference herein in their entirety.