This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2023-048898 filed Mar. 24, 2023.
The present invention relates to an image processing system, a non-transitory computer readable medium storing a program, and an image processing method.
A preview technique for checking the finished state of a printed matter before printing is known.
JP4168748B discloses an image processing device that causes a display device to display a preview image imitating a print form based on image information for forming an image having a stereoscopic portion on a medium. The document describes a form in which the stereoscopic portion is represented by a specific color, blinking, or the like in the preview image.
The printed matter has a swelling portion due to adhesion of an image forming material such as ink or toner. There is a demand to express such a swelling portion in the preview image of the printed matter in the same manner as, or close to, the real thing.
Aspects of non-limiting embodiments of the present disclosure relate to an image processing system, a non-transitory computer readable medium storing a program, and an image processing method that enable a swelling portion on a recording medium formed of an image forming material to be expressed in a preview image of a printed matter in the same manner as or close to a real thing.
Aspects of certain non-limiting embodiments of the present disclosure address the above advantages and/or other advantages not described above. However, aspects of the non-limiting embodiments are not required to address the advantages described above, and aspects of the non-limiting embodiments of the present disclosure may not address advantages described above.
According to an aspect of the present disclosure, there is provided an image processing system including a processor configured to generate stereoscopic image data including information representing a layer thickness of an image forming material adhered to a recording medium based on image data of a printed image, and cause a display unit to display a preview image of a printed matter based on the stereoscopic image data, in which the information representing the layer thickness of the image forming material includes information representing an aspect in which an edge of a swelling portion of the image forming material is inclined toward a recording medium surface or an aspect in which the edge of the swelling portion of the image forming material is rounded.
Exemplary embodiment(s) of the present invention will be described in detail based on the following figures, wherein:
Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings. The configurations described below are examples for description and can be changed as appropriate. Further, in a case where a plurality of exemplary embodiments, modification examples, or the like are described below, it is assumed from the beginning that their characteristic portions are combined with one another as appropriate. Identical elements are designated by identical reference numerals in all drawings, and duplicate description is omitted.
Hereinafter, an exemplary embodiment of an image processing system that generates a preview image based on image data of a printed image will be described.
A printed image is an image printed on a recording medium. The image data is data representing a printed image input to the image processing system or generated in the image processing system. The image data may be, for example, data in which each pixel of the image data is represented by a pixel value of RGB or CMYK, and the pixel value may be a density value of CMYK. Further, the pixel value of the image data may include the pixel value of a spot color other than CMYK.
An image forming material is a material such as a coloring material that is adhered to the recording medium for representing the printed image. The image forming material is, for example, a toner or an ink. A color of the image forming material includes, for example, cyan (C), magenta (M), yellow (Y), and black (K) as basic colors. The color of the image forming material includes silver, gold, clear, and white as spot colors. There is no limitation on the type and color of the image forming material used for a printed matter. In the present exemplary embodiment, a toner for the basic color is referred to as “basic toner”, and a toner for the spot color is referred to as “special toner”.
In the following description, information representing a layer thickness of the image forming material represents a thickness of the image forming material in the printed matter. The information representing the layer thickness of the image forming material may be, for example, a normal map or a height map. In the following description, the information representing the layer thickness of the image forming material is also referred to as layer thickness information. Further, the stereoscopic image data is data necessary for representing the printed matter in three dimensions in the preview of the printed matter.
A printing condition is information indicating a material used for creating the printed matter and conditions (for example, environment, creating method, and equipment used) in a process of creating the printed matter. As an example, the printing conditions may be at least one of a type of recording medium used for the printed matter, a type or color of the image forming material used for printing, the number of times of additional printing, or a printing method.
Examples of the type of recording medium include plain paper, coated paper, uncoated paper, recycled paper, and a film. Examples of a type of printing method include offset printing, gravure printing, an electrophotographic method, and an inkjet method. There is no limitation on the types of recording medium and printing method.
The display unit is, for example, a display as a display device. The display unit may be, for example, a liquid crystal display or an organic electroluminescence (EL) display.
As shown in the figure, the image processing system includes an input unit 12, an image processing unit 14, and an output unit 16.
The input unit 12 includes a basic toner information reception unit 20 and a special toner information reception unit 22. The reception units 20 and 22 receive the image data. Specifically, the basic toner information reception unit 20 receives the pixel value of the four colors (CMYK) in each pixel of the image data, and the special toner information reception unit 22 receives the pixel value of the spot color in each pixel of the image data. In the exemplary embodiment, the spot color is one color, but there may be two or more spot colors. The pixel value of each pixel of the image data is, for example, a value of 0 to 255. Alternatively, the pixel value of each pixel of the image data may be the density value of the toner (0 to 100%).

The image processing unit 14 includes a preview image creation unit 30, a color conversion table storage unit 32, a normal map creation unit 34, a film thickness prediction unit 36, a preview unit 40, and a preview operation reception unit 42.
A plurality of color conversion tables are stored in the color conversion table storage unit 32.
The film thickness prediction unit 36 predicts a film thickness of the toner, from the pixel value (CMYK and spot color) of the image data for each pixel of the image data, to create film thickness data. The normal map creation unit 34 creates the normal map based on the film thickness data created by the film thickness prediction unit 36. The normal map is an example of the information (layer thickness information) representing the layer thickness of the image forming material. The preview image creation unit 30 outputs, to the preview unit 40, information combining the RGB gloss data obtained from the color conversion table with the normal map obtained from the normal map creation unit 34, as the stereoscopic image data.
The preview unit 40 includes a reflectance distribution function calculation unit 50, a rendering unit 52, and a display control unit 60.
The reflectance distribution function calculation unit 50 calculates a reflectance distribution function for each pixel based on the RGB gloss data obtained from the preview image creation unit 30. The rendering unit 52 disposes a three-dimensional model of the printed matter corresponding to the image data on a virtual screen in a virtual three-dimensional space and determines the RGB value of each pixel configuring a surface of the three-dimensional model. Specifically, the rendering unit 52 determines the RGB value of each pixel configuring the surface of the three-dimensional model of the printed matter based on the reflectance distribution function calculated by the reflectance distribution function calculation unit 50 and the normal map obtained from the normal map creation unit 34.
The display control unit 60 causes the display 90 to display a three-dimensional computer graphic (CG) image including the three-dimensional model of the printed matter, which is obtained from the rendering unit 52, via the output unit 16 as the preview image. The preview operation reception unit 42 receives, from a user, an operation (for example, operation of inclining printed matter) on the three-dimensional model of the printed matter in the virtual three-dimensional space. The display control unit 60 causes the display 90 to display the three-dimensional CG image that reflects the user operation received by the preview operation reception unit 42.
Next, specific processing by the film thickness prediction unit 36 and the normal map creation unit 34 will be described. Part (A) of the figure shows an example of the image data 100.
The film thickness prediction unit 36 creates density data 101 from the image data 100. Part (B) of the figure shows an example of the density data 101.
Next, the film thickness prediction unit 36 creates film thickness data 102 from the density data 101. Part (C) of the figure shows an example of the film thickness data 102, in which each pixel value represents a film thickness.
The normal map creation unit 34 creates a normal map 110 from the film thickness data 102. The normal map 110 includes data of a Red version, a Green version, and a Blue version. Parts (D1) and (D2) of the figure show examples of the created normal maps 110.
The normal map creation unit 34 performs processing (hereinafter referred to as row processing) on each pixel string extending in an x direction (right-left direction), the strings being arranged in a y direction (up-down direction), to create the normal maps 110 of the R and B versions. Further, the normal map creation unit 34 performs processing (hereinafter referred to as column processing) on each pixel string extending in the y direction (up-down direction), the strings being arranged in the x direction (right-left direction), to create the normal maps 110 of the G and B versions.
The two normal maps 110 of the B version are acquired by the row processing and the column processing. Regarding the normal map 110 of the B version, the normal map creation unit 34 synthesizes the normal maps (two-dimensional data) respectively acquired in the row processing and the column processing to create one normal map (two-dimensional data). The details will be described below.
A method of creating the normal map will be specifically described. Parts (A) to (C) of the figure show the flow of the row processing for one pixel string of the film thickness data 102.

Part (A) of the figure shows the film thickness of each pixel of the pixel string. In the row processing, the inclination angle of the pixel of interest is calculated based on the film thickness of the pixel of interest and the film thickness of the pixel next to the right side of the pixel of interest, while moving the pixel of interest in order from left to right. With the calculation, angle data in which each pixel value represents an inclination angle is acquired. In this case, the inclination angle represents an angle to the left or to the right.
Specifically, the normal map creation unit 34 calculates the inclination angle of the toner layer from the film thickness difference between the pixel of interest and the pixel next to the right side of the pixel of interest and the width of one pixel. For example, in a case where the resolution of the image data is 600 dpi, the width of one pixel is about 42.3 μm. Assuming that the pixel of interest is a pixel N1, the film thickness difference between the film thickness of 0 μm of the pixel N1 and the film thickness of 20 μm of the pixel N1+1 next to the right side of the pixel of interest is −20 μm. In this case, the inclination angle of the pixel of interest N1 is calculated as arctan(−20/42.3) = −25.3 degrees, that is, 25.3 degrees to the left.
Next, with movement of the pixel of interest to the right by one pixel, the pixel of interest becomes the pixel N1+1. The film thickness difference between the film thickness of 20 μm of the pixel of interest N1+1 and the film thickness of 20 μm of the pixel N1+2 next to the right side of the pixel of interest is 0 μm. In this case, the inclination angle of the pixel of interest N1+1 is calculated as arctan(0/42.3) = 0 degrees.
In the same manner, the pixel of interest is moved to the right in order, and the inclination angle of each pixel is calculated. Assuming that the pixel of interest is now a pixel N2−1, the film thickness difference between the film thickness of 20 μm of the pixel N2−1 and the film thickness of 0 μm of the pixel N2 next to the right side of the pixel of interest is +20 μm. In this case, the inclination angle of the pixel of interest N2−1 is calculated as arctan(20/42.3) = +25.3 degrees, that is, 25.3 degrees to the right.
In this manner, the angle data of one pixel string is calculated. The normal map creation unit 34 converts each pixel value (inclination angle) of the angle data into the normal map. Part (D) of the figure shows the normal map obtained from the angle data.
In the column processing, the inclination angle of the pixel of interest is calculated based on the film thickness of the pixel of interest and the film thickness of a pixel next to the lower side of the pixel of interest, while moving the pixel of interest in order from top to bottom. With the calculation, the angle data is acquired. In this case, the inclination angle represents an upward or downward angle. With the column processing, the normal maps of the G and B versions can be obtained.
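For reference, the row processing arithmetic described above can be sketched as follows. This is an illustrative Python snippet rather than part of the exemplary embodiment; the film thickness values are those of the worked example, and the pixel width assumes 600 dpi.

```python
import math

# Film thicknesses (in micrometers) of one pixel string, as in the worked
# example: bare paper, a 20-um toner patch, then bare paper again.
row_um = [0.0, 20.0, 20.0, 20.0, 0.0]
PIXEL_WIDTH_UM = 25400 / 600  # one pixel at 600 dpi is about 42.3 um

angles_deg = []
for i in range(len(row_um) - 1):
    # Film thickness difference: pixel of interest minus its right neighbor.
    diff = row_um[i] - row_um[i + 1]
    angles_deg.append(math.degrees(math.atan(diff / PIXEL_WIDTH_UM)))

print([round(a, 1) for a in angles_deg])
# [-25.3, 0.0, 0.0, 25.3] -- a left-inclined edge, a flat top, and a
# right-inclined edge, matching the worked example above.
```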
Here, the normal map will be specifically described. The normal map is information indicating a direction of a surface of an object. The normal map is RGB data corresponding to X, Y, and Z components of a normal vector of the object.
The X component of the normal vector is a component in the right-left direction and takes a range of −1 (−90 degrees: 90 degrees to the left) to 1 (90 degrees: 90 degrees to the right). These values are represented by 0 (−90 degrees: 90 degrees to the left) to 255 (90 degrees: 90 degrees to the right) as the pixel value of the R version. A relationship between a value Xv of the X component and a pixel value R of the R version is represented by, for example, the following Equation (1).

R = (Xv + 1) × 127.5   (1)
The Y component of the normal vector is a component in the up-down direction and takes a range of −1 (−90 degrees: downward 90 degrees) to 1 (90 degrees: upward 90 degrees). These values are represented by a value of 0 (−90 degrees: downward 90 degrees) to 255 (90 degrees: upward 90 degrees) as the pixel value of the G version. A relationship between a value Yv of the Y component and a pixel value G of the G version is represented by, for example, the following Equation (2).

G = (Yv + 1) × 127.5   (2)
The Z component of the normal vector is a front/back component and takes a range of 0 to 1. These values are represented by 127.5 (Zv = 0) to 255 (Zv = 1) as the pixel value of the B version. A relationship between a value Zv of the Z component and a pixel value B of the B version is represented by, for example, the following Equation (3).

B = (Zv + 1) × 127.5   (3)
For example, in a case where the object is a plane, the normal map (R,G,B) is (127.5, 127.5, 255).

For example, in a case where the object faces to the right, the normal map (R,G,B) is (255, 127.5, 127.5). Further, for example, in a case where the object is inclined to the right at 45 degrees, the normal map (R,G,B) is ((1 + sin 45°) × 127.5, 127.5, (1 + cos 45°) × 127.5), that is, approximately (218, 128, 218).

For example, in a case where the object faces upward, the normal map (R,G,B) is (127.5, 255, 127.5). Further, for example, in a case where the object is inclined upward at 45 degrees, the normal map (R,G,B) is (127.5, (1 + sin 45°) × 127.5, (1 + cos 45°) × 127.5), that is, approximately (128, 218, 218).
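The encoding of Equations (1) to (3) and the examples above can be checked with the following illustrative snippet. The function name is hypothetical; a surface inclined θ degrees to the right is encoded here as the normal vector (sin θ, 0, cos θ).

```python
import math

def angle_to_normal_rgb(theta_deg: float) -> tuple[float, float, float]:
    """Encode a surface inclined theta degrees to the right as normal-map
    pixel values via Equations (1) to (3)."""
    xv = math.sin(math.radians(theta_deg))  # right-left component
    yv = 0.0                                # no up-down inclination
    zv = math.cos(math.radians(theta_deg))  # front component
    return ((xv + 1) * 127.5, (yv + 1) * 127.5, (zv + 1) * 127.5)

print(angle_to_normal_rgb(0))   # (127.5, 127.5, 255.0) -- plane
print(angle_to_normal_rgb(90))  # approximately (255.0, 127.5, 127.5) -- faces right
print(angle_to_normal_rgb(45))  # approximately (217.7, 127.5, 217.7) -- 45 degrees right
```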
In the exemplary embodiment, the normal map (R,G,B) is calculated for each pixel in the row processing and the column processing. In the description of the exemplary embodiment, the value of the normal map may be rounded off to the nearest whole number to be represented as an integer. For example, “127.5” may be represented as “128”.
In the row processing, a degree of inclination in the right-left direction of the toner layer as the object is calculated. Thus, the pixel value of the normal map of the G version (inclination in the up-down direction) is always 127.5. Therefore, in the row processing, the normal maps of the R and B versions (data) can be obtained.
In the column processing, a degree of inclination of the toner layer as the object in the up-down direction is calculated. Thus, the pixel value of the normal map of the R version (inclination in the right-left direction) is always 127.5. Therefore, in the column processing, the normal maps of the G and B versions (data) can be obtained.
Regarding the normal map of the B version, two normal maps are obtained: one by the row processing and one by the column processing. The normal map creation unit 34 synthesizes the two normal maps (data) to create one normal map (data) of the B version. Specifically, since a pixel having a pixel value other than 255 is obtained exclusively in either the row processing or the column processing, the normal map creation unit 34 creates one normal map (data) in which such a pixel is set to the pixel value other than 255 obtained by the row processing or the column processing and the other pixels are set to the pixel value of 255.
Since a size of the normal vector is always 1, in a case where the normal maps of the R and G versions are obtained, the normal map of the B version is uniquely determined. Therefore, the normal map creation unit 34 may acquire the normal map of the R version in the row processing, acquire the normal map of the G version in the column processing, and then create the normal map of the B version from the normal maps of the R and G versions.
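Since the normal vector has unit length, the derivation in the preceding paragraph amounts to Zv = √(1 − Xv² − Yv²). An illustrative sketch follows; the function name is hypothetical.

```python
import math

def b_from_r_and_g(r: float, g: float) -> float:
    """Recover the B-version pixel value from the R and G versions using
    the unit-length constraint Zv = sqrt(1 - Xv**2 - Yv**2)."""
    xv = r / 127.5 - 1.0  # invert Equation (1)
    yv = g / 127.5 - 1.0  # invert Equation (2)
    zv = math.sqrt(max(0.0, 1.0 - xv * xv - yv * yv))
    return (zv + 1.0) * 127.5  # Equation (3)

print(b_from_r_and_g(127.5, 127.5))            # 255.0 -- flat surface
print(round(b_from_r_and_g(217.7, 127.5), 1))  # about 217.6 -- 45 degrees right
```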
The method of creating the normal map described above is an example. The normal map may be created by another method, and the method of creating the normal map is not limited.
In the exemplary embodiment described above, in the row processing, the inclination angle of the pixel of interest is calculated from the film thickness of the pixel of interest and the film thickness of the pixel next to the right side of the pixel of interest. However, in the row processing, the inclination angle of the pixel of interest may be calculated based on the film thickness of the pixel of interest and the film thickness of a pixel next to the left side of the pixel of interest. Similarly, in the column processing, the inclination angle of the pixel of interest may be calculated based on the film thickness of the pixel of interest and the film thickness of a pixel next to the upper side of the pixel of interest.
In the row processing, the inclination angle of the pixel of interest may be calculated based on the film thickness of the pixel next to the left side of the pixel of interest and the film thickness of the pixel next to the right side of the pixel of interest. For example, in part (C) of the figure, the inclination angle of the pixel of interest is calculated from the film thickness difference between these two neighboring pixels and the width of two pixels.
In the row processing, the inclination angle of the pixel of interest may also be calculated based on the film thickness of a pixel located n pixels (n is an integer of 1 or more) to the left of the pixel of interest and the film thickness of a pixel located n pixels to the right of the pixel of interest. Similarly, in the column processing, the inclination angle of the pixel of interest may be calculated based on the film thickness of a pixel located n pixels above the pixel of interest and the film thickness of a pixel located n pixels below the pixel of interest.
Next, another method of generating the film thickness data, which is the basis of the normal map, will be described. The film thickness data 102 (for example, refer to part (C) of the figure described above) may be generated by a method other than the method described above.
For example, as shown in the figure, a table 70 in which the density value of the toner is associated with the film thickness may be provided, and the film thickness prediction unit 36 may convert each pixel value (density value) of the image data 100 into the film thickness by referring to the table 70 to create the film thickness data 102.
The film thickness of the toner may change according to the printing condition of the printed matter. For example, even though the density is the same, a formed film thickness may differ depending on the type of toner. Further, the formed film thickness may differ depending on the type of recording medium. Further, the formed film thickness may differ depending on the printing method or model of the printing device. Therefore, the film thickness data 102 may be created based on the printing condition. That is, the printing condition may be reflected in the film thickness, and a difference in the stereoscopic feeling of the film thickness in the preview may be expressed according to the printing condition.
In order to realize the above, the table 70 in which the density value of each toner is associated with the film thickness may be provided according to at least one of each toner type, each combination of toner types, each type of recording medium, or each printing method or model of the printing device.
The thickness (film thickness) of the toner layer formed on the recording medium may differ depending on the type of the special toner. For example, the clear toner and the white toner may have different film thicknesses. The tables 70A to 70D of the figure are prepared in consideration of such a difference depending on the toner type.
Further, the thickness (film thickness) of the toner layer formed on the recording medium may differ depending on the type of the recording medium. For example, the uncoated paper has a lower smoothness than the coated paper and thus tends to have a lower film thickness. The tables 70A to 70D of the figure are also prepared in consideration of such a difference depending on the type of recording medium.
The film thickness prediction unit 36 selectively uses the tables 70A to 70D according to the type of toner and the recording medium used for the printed matter to create the film thickness data 102. Accordingly, the film thickness difference according to the type of toner and the recording medium may appear in the preview image.
The film thickness prediction unit 36 may convert each pixel value (density value) of the image data 100 into the film thickness by using, instead of the table 70, an approximate expression that defines a relationship between the toner density and the film thickness, as shown in the figure.
Further, the approximate expression may define a relationship between a total density value of the plurality of toners and the film thickness. In this case, the film thickness prediction unit 36 calculates the total density value obtained by totaling the density values of the plurality of toners, which are pixel values of the image data 100, for each pixel of the image data 100. The film thickness prediction unit 36 converts the total density value into the film thickness by using the approximate expression for each pixel to create the film thickness data 102.
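The conversion by an approximate expression can be sketched as follows. The linear form and the coefficient of 0.2 μm per percent of total density are assumptions for illustration only; the actual expression is determined from measurements for each toner, recording medium, and printing device.

```python
def film_thickness_um(densities_percent: list[float]) -> float:
    """Convert the density values of the toners at one pixel into a film
    thickness with an assumed linear approximate expression."""
    total = sum(densities_percent)  # total density value of all toners (%)
    return 0.2 * total              # assumed coefficient: 0.2 um per percent

# One pixel printed with C = 50%, M = 50%, and a white spot toner at 100%:
print(film_thickness_um([50.0, 50.0, 0.0, 0.0, 100.0]))  # 40.0 (um)
```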
Further, the toner may be laminated repeatedly a plurality of times by additional printing or the like to form the printed matter. The additional printing is printing performed again, from above, on a surface of the recording medium that has already been printed by the printing device. In this case, the film thickness prediction unit 36 acquires, for each pixel of the image data 100, a film thickness (referred to as a temporary film thickness) from the table 70 or the like showing the relationship between the density value of each toner and the film thickness, and calculates a total film thickness by adding the film thickness for the additional printing (referred to as an additional film thickness) to the temporary film thickness, to create the film thickness data 102 with the total film thickness as the pixel value.
As described above, the film thickness data 102 can be generated by various methods, and the method of generating the film thickness data 102 is not limited. For example, the film thickness data 102 may be acquired by obtaining the temporary film thickness for each pixel based on the image data 100 and multiplying the temporary film thickness by a coefficient set in advance according to at least one of the type of toner, the type of recording medium, the type of printing method, or the type of printing device (model or the like).
In the present exemplary embodiment, in order to represent the thickness of the image forming material of the printed matter in the preview image, the normal map as the layer thickness information is used in the preview unit 40 (rendering unit 52). However, the height map may be used instead of the normal map as the layer thickness information in the preview unit 40 (rendering unit 52). The height map is, for example, the film thickness data 102 or a map obtained by multiplying each pixel value of the film thickness data 102 by a predetermined coefficient.
Next, processing for a swelling portion of the image forming material (toner as an example) will be described. In the swelling portion of the toner of the printed matter, the film thickness of an edge may be lower than the film thickness of the center due to misalignment of toners of a plurality of colors (misalignment of the versions) or the like. Accordingly, the actual printed matter may have an aspect in which the edge of the swelling portion of the toner is inclined toward a recording medium surface or an aspect in which the edge thereof is rounded. In the exemplary embodiment described here, the edge of the swelling portion on the recording medium may appear in the preview image of the printed matter in the same manner as or close to a real printed matter.
In order to realize the above, as shown in the figure, the layer thickness information included in the stereoscopic image data includes information representing the aspect in which the edge of the swelling portion is inclined toward the recording medium surface or the aspect in which the edge of the swelling portion is rounded.
Further, in the preview image of the printed matter, smoothing processing may be performed on the film thickness data or the normal map in order to represent the aspect in which the edge of the swelling portion is inclined or the aspect in which the edge is rounded. Processing performed on information representing the edge of the swelling portion is also referred to as contour processing or edge processing.
First, the smoothing processing for the normal map will be described. The normal map creation unit 34 performs the smoothing processing (filter processing) on the normal maps 110 of parts (D1) and (D2) of the figure to create smoothed normal maps 110A of the R and G versions.
The smoothing processing (here, filter processing) of this exemplary embodiment is performed while moving the pixel of interest left, right, up, and down. In a state where the center of the filter is disposed on the pixel of interest, the pixel value of the pixel of interest is multiplied by the coefficient at the center of the filter, and each pixel around the pixel of interest is multiplied by the corresponding coefficient around the center of the filter. The total value of the multiplication results is divided by the sum of the coefficients of the filter to obtain the pixel value of the pixel of interest after the filter processing. The form of the filter is not limited to the form described here. Further, the smoothing processing is not limited to the processing using the filter. The same applies to the smoothing processing described below.
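The filter processing described above can be sketched as follows. The 3 × 3 kernel is a hypothetical example; as noted above, the form of the filter is not limited to this.

```python
import numpy as np

def smooth(data: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Center the kernel on each pixel of interest, total the
    coefficient-weighted pixel values, and divide by the sum of the
    coefficients. Border pixels are replicated at the edges."""
    kh, kw = kernel.shape
    padded = np.pad(data, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.empty(data.shape, dtype=float)
    for iy in range(data.shape[0]):
        for ix in range(data.shape[1]):
            window = padded[iy:iy + kh, ix:ix + kw]
            out[iy, ix] = (window * kernel).sum() / kernel.sum()
    return out

kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
film = np.zeros((5, 7)); film[1:4, 2:5] = 20.0  # a 20-um swelling portion
print(smooth(film, kernel).round(1))  # the edge slopes instead of stepping
```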
Since the size of the normal vector is always 1, in a case where the normal maps of the R and G versions are obtained, the normal map of the B version is uniquely determined. The normal map creation unit 34 creates a smoothed normal map 110A of the B version (not shown) from the smoothed normal maps 110A of the R and G versions. Accordingly, the normal map includes the information representing the aspect in which the edge of the swelling portion is inclined or the aspect in which the edge is rounded.
Next, an exemplary embodiment will be described in which the information representing the aspect in which the edge of the swelling portion is inclined or rounded is created in the normal map with the smoothing processing for the film thickness data. Parts (A), (B), (C), and (D) of the figure show the flow in which the film thickness data is smoothed and the normal map is then created from the smoothed film thickness data.
In this exemplary embodiment, the normal map also includes the information representing the aspect in which the edge of the swelling portion is inclined or the aspect in which the edge is rounded.
As described above, the height map may be used instead of the normal map as the layer thickness information in the preview unit 40 (rendering unit 52). The height map is, for example, the film thickness data or processed data thereof. In this case, the smoothed film thickness data or the processed data thereof is used by the preview unit 40 as the height map. The film thickness data and the smoothed film thickness data are also referred to as data representing a height of the swelling portion.
Next, another method of creating the smoothed film thickness data will be described. The film thickness prediction unit 36 may execute the smoothing processing based on the height of the swelling portion of the toner in the film thickness data such that the toner is expanded around the edge of the swelling portion of the toner in the film thickness data to create the smoothed film thickness data. This exemplary embodiment will be described below.
Parts (A), (B), (B1), and (C) of the figure show this processing. First, color version number data (part (A) of the figure), in which the color version number applied to each pixel is defined, is prepared.
Next, the film thickness prediction unit 36 performs edge expansion processing on the color version number data (part (A) of the figure) to create expanded color version number data.
Next, the film thickness prediction unit 36 changes the filter applied to each pixel of the film thickness data according to each pixel value (color version number) of the expanded color version number data (part (B) of the figure) and performs the filter processing on the film thickness data to create the smoothed film thickness data.
Specifically, the filter is prepared for each color version number as shown in part (C) of the figure, and the film thickness prediction unit 36 applies, to each pixel of the film thickness data, the filter corresponding to the color version number of the pixel.
According to this exemplary embodiment, the smoothed film thickness data has information on a toner expansion portion in which the toner is expanded around the edge of the swelling portion of the toner, and has information that a toner height of the toner expansion portion around the edge thereof becomes higher as the edge of the swelling portion is higher.
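An illustrative sketch of this processing follows. The mapping from color version number to filter size is an assumption; the point is only that a larger color version number (a higher swelling) selects a filter that expands the toner further around the edge.

```python
import numpy as np

# Hypothetical filters: a larger color version number selects a wider kernel.
KERNELS = {n: np.ones((2 * n + 1, 2 * n + 1)) for n in (1, 2, 3, 4)}

def smooth_by_version(film: np.ndarray, versions: np.ndarray) -> np.ndarray:
    """Apply, to each pixel of the film thickness data, the filter selected
    by that pixel's value in the expanded color version number data."""
    out = np.empty(film.shape, dtype=float)
    for iy, ix in np.ndindex(film.shape):
        n = int(versions[iy, ix])
        if n == 0:                       # no toner version: keep the pixel
            out[iy, ix] = film[iy, ix]
            continue
        k = KERNELS[n]
        r = k.shape[0] // 2
        padded = np.pad(film, r, mode="edge")
        window = padded[iy:iy + 2 * r + 1, ix:ix + 2 * r + 1]
        out[iy, ix] = (window * k).sum() / k.sum()
    return out

film = np.zeros((5, 9)); film[1:4, 2:7] = 40.0
versions = np.zeros((5, 9), dtype=int)
versions[0:5, 1:8] = 2  # color version numbers after edge expansion
print(smooth_by_version(film, versions).round(1))
```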
Although the color version number has been described here, the same applies to the number of times of additional printing. That is, data of the number of times of additional printing in which the number of times of additional printing applied to each pixel is defined is prepared, instead of the color version number data, and the processing described above is performed. In addition, data in which a value obtained by combining the color version number applied to each pixel and the number of times of additional printing is defined may be prepared, and the processing described above may be performed on the data. The same applies to the following exemplary embodiment described below.
Next, an exemplary embodiment will be described in which the filter processing is performed on the film thickness data a plurality of times to create the smoothed film thickness data. Parts (A), (B), (B1), and (C) of the figure show this processing.
Specifically, as shown in part (C) of the figure, the film thickness prediction unit 36 repeatedly performs the filter processing on the film thickness data to create the smoothed film thickness data. The number of repetitions may be set according to, for example, the color version number of each pixel.
Also in this exemplary embodiment, the smoothed film thickness data has information on a toner expansion portion in which the toner is expanded around the edge of the swelling portion of the toner, and has information that a toner height of the toner expansion portion around the edge thereof becomes higher as the edge of the swelling portion is higher.
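An illustrative sketch follows, using an averaging filter from SciPy as a stand-in for the filter of this exemplary embodiment; the kernel size and the number of repetitions are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_repeatedly(film: np.ndarray, times: int, size: int = 3) -> np.ndarray:
    """Perform the filter processing the stated number of times; each pass
    expands the toner roughly one kernel radius further around the edge of
    the swelling portion, rounding the edge more strongly."""
    out = film.astype(float)
    for _ in range(times):
        out = uniform_filter(out, size=size, mode="nearest")
    return out

film = np.zeros((7, 11)); film[2:5, 3:8] = 40.0
print(smooth_repeatedly(film, times=3).round(1))
```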
The aspect of the edge of the swelling portion of the toner in the printed matter may change depending on the type of toner, the type of recording medium, and the printing method or model of the printing device. Accordingly, the film thickness data or the normal maps may be created such that the aspect of the edge of the swelling portion in the preview image is different according to at least one of the type of toner used for the printed matter, the type of recording medium, the printing method of the printing device, or the model of the printing device.
From here, a method of realizing the preview will be described in detail. First, the color conversion table 33 used by the preview image creation unit 30 will be described.
The color conversion table 33 is prepared for each combination of, for example, the recording medium, the image forming material (toner or the like), and the printing method. Therefore, there are a plurality of color conversion tables 33.
The color conversion table 33 is created, for example, as follows. First, a patch chart is prepared in which a plurality of patches having different colors and densities are printed on the recording medium. An image reading apparatus reads the patch chart to acquire, for each patch, the average RGB value of a diffuse reflection image and the average RGB value of a mirror-surface reflection image. The average RGB value of the diffuse reflection image is the RGB value of the color conversion table 33. The difference between the average RGB value of the diffuse reflection image and the average RGB value of the mirror-surface reflection image is calculated to generate a difference image, and the RGB value of the difference image is the ΔRΔGΔB value of the color conversion table 33. For each patch, the pixel value (CMYK and spot color) of the patch is associated with the RGB value and the ΔRΔGΔB value corresponding to the patch to generate the color conversion table 33.
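The assembly of the color conversion table 33 can be sketched as follows. The data structures and values are hypothetical; real entries come from reading the printed patch chart, and whether the difference is taken as mirror-surface minus diffuse (as here) or the reverse is a sign convention.

```python
ColorKey = tuple[int, int, int, int, int]  # (C, M, Y, K, spot) of a patch

def build_table(patches) -> dict:
    """patches: iterable of (cmyk_spot, diffuse_rgb, specular_rgb), where the
    RGB values are per-patch averages of the diffuse reflection image and
    the mirror-surface reflection image."""
    table: dict[ColorKey, tuple[tuple, tuple]] = {}
    for key, diffuse, specular in patches:
        # Difference image: here, mirror-surface reflection minus diffuse.
        delta = tuple(s - d for s, d in zip(specular, diffuse))
        table[key] = (diffuse, delta)  # (RGB value, dR dG dB value)
    return table

table33 = build_table([
    ((0, 0, 0, 100, 0), (40, 40, 42), (90, 90, 95)),  # hypothetical K patch
])
print(table33[(0, 0, 0, 100, 0)])  # ((40, 40, 42), (50, 50, 53))
```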
Information on the recording medium, the image forming material (toner or the like), and the printing method used for the patch chart is added to the color conversion table 33, and the color conversion table 33 is managed by the color conversion table storage unit 32.
One color conversion table is selected from the plurality of color conversion tables 33 based on the recording medium, the image forming material (toner or the like), the printing method, and the like used for the creation of the printed matter to be previewed, and color conversion processing is performed by using the selected color conversion table 33 (step S104).
In the color conversion table 33, the pixel value (CMYK and spot color) is associated with the RGB value and the ΔRΔGΔB value, as described above.
Next, processing performed by the preview image creation unit 30 will be described.
The preview image creation unit 30 searches the color conversion table 33 for the pixel value (CMYK and spot color) of each pixel of the image data and reads out the RGB value and the ΔRΔGΔB value associated with the pixel value.
In a case where the pixel value (CMYK and spot color) of the image data is not recorded in the color conversion table 33, the preview image creation unit 30 may read out the RGB value and the ΔRΔGΔB value corresponding to a color similar to the CMYK and spot color of the image data. Here, the similar color refers to, for example, a color having the closest distance in a color space or a color within a distance set in advance. Further, the preview image creation unit 30 may read out a plurality of sets of RGB values and ΔRΔGΔB values for a plurality of similar colors and calculate estimated values of the RGB value and the ΔRΔGΔB value from these values.
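The similar-color fallback can be sketched as follows. The Euclidean distance and the preset threshold are illustrative choices; estimating from a plurality of similar colors, as mentioned above, would replace the single nearest-neighbor readout.

```python
import math

def lookup_with_fallback(table: dict, key: tuple, max_distance: float = 20.0):
    """Read out (RGB, dRdGdB) for a pixel value; if the exact value is not
    recorded, fall back to the nearest recorded color within a preset
    distance in the (CMYK plus spot color) space."""
    if key in table:
        return table[key]
    nearest = min(table, key=lambda k: math.dist(k, key))
    if math.dist(nearest, key) <= max_distance:
        return table[nearest]
    raise KeyError(key)  # no sufficiently similar color recorded
```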
Accordingly, the RGB gloss data as the two-dimensional data in which each pixel value consists of the RGB value and the ΔRΔGΔB value (RGB gloss value) can be obtained.
The reflectance distribution function calculation unit 50 calculates the reflectance distribution function corresponding to the appearance in printing from the pixel value (RGB value and ΔRΔGΔB value) of the RGB gloss data for each pixel of the RGB gloss data. For example, the reflectance distribution function calculation unit 50 calculates the reflectance distribution function as the following equation according to the reflection model of Phong.

I = wd · RGB · cos θi + ws · ΔRΔGΔB · cos^n γ
Here, I is the reflected light intensity. The first term on the right side, wd · RGB · cos θi, is the diffuse reflectance distribution function, where wd is a diffuse reflection weight coefficient, RGB is the value read out from the color conversion table 33, and θi is the incident angle. The second term on the right side, ws · ΔRΔGΔB · cos^n γ, is the mirror-surface reflectance distribution function, where ws is a mirror-surface reflection weight coefficient, ΔRΔGΔB is the value read out from the color conversion table 33, γ is the angle formed by the mirror-surface reflection direction and the line-of-sight direction, and n is a mirror-surface reflection index.
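The calculation of the reflected light intensity per pixel can be sketched as follows. The weight coefficients and the mirror-surface reflection index are hypothetical values; in practice they are tuned to the appearance in printing.

```python
import math

def reflected_intensity(rgb, d_rgb, theta_i_deg, gamma_deg,
                        wd=1.0, ws=1.0, n=10.0):
    """Per-channel reflected light intensity by the Phong model above:
    I = wd * RGB * cos(theta_i) + ws * dRdGdB * cos(gamma) ** n."""
    cos_i = max(0.0, math.cos(math.radians(theta_i_deg)))
    cos_g = max(0.0, math.cos(math.radians(gamma_deg)))
    return tuple(wd * c * cos_i + ws * dc * cos_g ** n
                 for c, dc in zip(rgb, d_rgb))

# Light at 30 degrees incidence, viewed 5 degrees off the mirror direction:
print(reflected_intensity((40, 40, 42), (50, 50, 53), 30.0, 5.0))
```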
The rendering unit 52 generates the CG image. In other words, the rendering unit 52 disposes the three-dimensional model of the printed matter corresponding to the image data on the virtual screen in the virtual three-dimensional space and determines the RGB value of each pixel configuring the surface of the three-dimensional model. Specifically, the rendering unit 52 determines the RGB value of each pixel configuring the surface of the three-dimensional model of the printed matter based on the reflectance distribution function calculated by the reflectance distribution function calculation unit 50 and the normal map, which is created by the normal map creation unit 34, as the layer thickness information. The rendering processing is known and may be executed by using, for example, a radiosity method or a ray tracing method taking into consideration inter-reflection.
The rendering unit 52 may use the height map such as the film thickness data, instead of the normal map, as the layer thickness information. In this case, the rendering unit 52 determines the RGB value of each pixel configuring the surface of the three-dimensional model of the printed matter, based on the reflectance distribution function calculated by the reflectance distribution function calculation unit 50 and the height map.
The display control unit 60 causes the display 90 as the display unit to perform the preview display of an image of the three-dimensional model generated by rendering. The three-dimensional model is a simulation image of the printed matter corresponding to the image data. The preview operation reception unit 42 receives, from a user, an operation (for example, operation of inclining printed matter) on the three-dimensional model of the printed matter in the virtual three-dimensional space. The display control unit 60 causes the display 90 to display the three-dimensional CG image that reflects the user operation received by the preview operation reception unit 42.
The preview display technique described here is an example. Techniques of reproducing texture (glossy feeling, unevenness feeling, or the like) of an object surface by a three-dimensional CG are known, and a part or all of the techniques may be employed as appropriate in the present exemplary embodiment. A technique related to bidirectional reflectance distribution function (BRDF) may be employed.
The image processing system of the above exemplary embodiment is configured by using, for example, a general-purpose computer. The image processing system may be constructed as, for example, a single computer or a system consisting of a plurality of computers that cooperate with each other. As shown as an example in the figure, the computer includes a processor, a memory, and a storage device that stores a program, and the functions of the units described above are realized by the processor executing the program.
The program can be provided via a network such as the Internet, or can be provided while being stored in a computer-readable recording medium such as an optical disk or a USB memory.
In the embodiments above, the term “processor” refers to hardware in a broad sense. Examples of the processor include general processors (e.g., CPU: Central Processing Unit) and dedicated processors (e.g., GPU: Graphics Processing Unit, ASIC: Application Specific Integrated Circuit, FPGA: Field Programmable Gate Array, and programmable logic device).
In the embodiments above, the term “processor” is broad enough to encompass one processor or plural processors in collaboration which are located physically apart from each other but may work cooperatively. The order of operations of the processor is not limited to one described in the embodiments above, and may be changed.
(((1)))
An image processing system comprising: a processor configured to: generate stereoscopic image data including information representing a layer thickness of an image forming material adhered to a recording medium based on image data of a printed image; and cause a display unit to display a preview image of a printed matter based on the stereoscopic image data, wherein the information representing the layer thickness of the image forming material includes information representing an aspect in which an edge of a swelling portion of the image forming material is inclined toward a recording medium surface or an aspect in which the edge of the swelling portion of the image forming material is rounded.
The image processing system according to (((1))), wherein the processor is configured to:
The image processing system according to (((2))), wherein the processor is configured to:
The image processing system according to (((1))), wherein the processor is configured to:
The image processing system according to any one of (((1))) to (((4))), wherein the processor is configured to:
The image processing system according to (((5))),
The image processing system according to (((5))),
The image processing system according to (((5))),
The image processing system according to (((5))),
The image processing system according to any one of (((1))) to (((3))),
The image processing system according to any one of (((1))) to (((9))),
A program that causes a computer to execute a process comprising: generating stereoscopic image data including information representing a layer thickness of an image forming material adhered to a recording medium based on image data of a printed image; and causing a display unit to display a preview image of a printed matter based on the stereoscopic image data, wherein the information representing the layer thickness of the image forming material includes information representing an aspect in which an edge of a swelling portion of the image forming material is inclined toward a recording medium surface or an aspect in which the edge of the swelling portion of the image forming material is rounded.
The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.