The present application is based on, and claims priority from, JP Application Serial Number 2020-213538, filed Dec. 23, 2020, JP Application Serial Number 2021-31439, filed Mar. 1, 2021, and JP Application Serial Number 2021-31440, filed Mar. 1, 2021, the disclosures of which are hereby incorporated by reference herein in their entireties.
The present disclosure relates to a method and system for executing a discrimination process of a printing medium using a machine learning model.
JP-A-2019-55554 discloses a technology for detecting a printing medium using a medium detection sensor and selecting print settings of a medium associated with attribute information that can be acquired by the medium detection sensor. The medium detection sensor is composed of an optical sensor.
However, JP-A-2019-55554 has a problem in that printing media having similar optical characteristics cannot be discriminated, because the detection result of the optical medium detection sensor is merely checked against a fixed allowable range.
In addition, JP-A-2020-121503 proposes a technology for discriminating, using a machine learning model, a plurality of types of printing media used in a recording apparatus such as a printer. However, whether or not the stored training data has already been used for learning is not managed, and thus there is room for improvement. Specifically, it is described that the machine learning process may be executed at any timing after a predetermined amount of training data is stored, but whether or not the stored training data has been learned is not managed. In other words, there is a demand for a technology capable of identifying whether or not the stored data has been learned.
Further, the discrimination accuracy of the discriminator is not managed, and thus there is room for improvement. Specifically, it is described that, in the machine learning process, the correspondence between the medium data and the type of printing medium is not accurate at the initial stage and is optimized in the course of the machine learning. However, the discrimination accuracy of the discriminator is not described. That is, a technology capable of grasping the discrimination accuracy of the discriminator has been required.
A method for executing a discrimination process of a printing medium using a machine learning model according to the present disclosure includes a step (a) of preparing N machine learning models when N is an integer of 1 or more, in which each of the N machine learning models is configured to discriminate a type of the printing medium by classifying input spectral data, which is a spectral reflectance of the printing medium, into any one of a plurality of classes, a step (b) of acquiring target spectral data which is a spectral reflectance of a target printing medium, and a step (c) of discriminating a type of the target printing medium by executing a class classification process of the target spectral data using the N machine learning models.
A system for executing a discrimination process of a printing medium using the machine learning model according to the present disclosure includes a memory that stores N machine learning models when N is an integer of 1 or more, and a processor that executes the discrimination process using the N machine learning models. Each of the N machine learning models is configured to discriminate a type of the printing medium by classifying input spectral data, which is a spectral reflectance of the printing medium, into any one of a plurality of classes. The processor is configured to execute a first process of acquiring target spectral data of a target printing medium and a second process of discriminating a type of the target printing medium by executing a class classification process of the target spectral data using the N machine learning models.
A recording apparatus according to the present disclosure includes a storage section that stores physical information of a recording medium and a recording parameter corresponding to type information of the recording medium, a recording section that performs recording based on the recording parameter, a learning section that obtains a discriminator that was machine-learned using the physical information of the recording medium and the type information of the recording medium, and a learning state determination section that determines whether or not the recording medium is a recording medium used for the machine learning of the discriminator.
A method for discriminating the recording medium according to the present disclosure is a method for performing a discrimination process of the recording medium using a machine learning model, and includes obtaining a discriminator that has N machine learning models when N is an integer of 1 or more and has been machine-learned using physical characteristics of the recording medium and type information of the recording medium for each of the N machine learning models, determining whether or not the recording medium is a recording medium used for the machine learning, and displaying the determination result.
A recording system according to the present disclosure includes a learning section that obtains a discriminator that has been machine-learned using physical characteristics of the recording medium and type information of the recording medium, and an accuracy evaluation section that obtains a discrimination accuracy of the discriminator.
A method for confirming the discrimination accuracy according to the present disclosure is a method for confirming a discrimination accuracy in the discrimination process of the recording medium using the machine learning model, and includes obtaining a discriminator that has N machine learning models when N is an integer of 1 or more and has been machine-learned using physical characteristics of the recording medium and type information of the recording medium for each of the N machine learning models, obtaining the discrimination accuracy using accuracy evaluation data different from the physical characteristics of the recording medium used for the machine learning, and displaying the discrimination accuracy.
The printing system 100 as a recording system includes a printer 10 as a recording section, an information processing device 20, a spectrometer 30, and the like. The spectrometer 30 can acquire a spectral reflectance as physical information by performing spectrometry on a printing medium PM, which is a recording medium used in the printer 10, in an unprinted state. In the present disclosure, the spectral reflectance is also referred to as "spectral data". The spectrometer 30 includes, for example, a tunable interference spectral filter and a monochrome image sensor. The spectral data obtained by the spectrometer 30 is used as input data to a machine learning model to be described later. As will be described later, the information processing device 20 executes a class classification process of the spectral data using the machine learning model, and determines which of a plurality of classes the printing medium PM corresponds to. The "class of the printing medium PM" means a type of the printing medium PM. The information processing device 20 controls the printer 10 so as to perform printing under appropriate printing conditions according to the type of the printing medium PM. In a preferred example, the information processing device 20 is a notebook PC that is easy to carry. In addition, the printing medium PM includes a roll medium in which the printing medium is wound around a roll-shaped core material. In the present embodiment, printing is given as an example of recording, but the present disclosure is applicable to recording systems, apparatuses, and methods in a broad sense, including fixing, in which recording conditions need to be changed according to physical information of a medium.
In the description above, the printer 10, the information processing device 20, and the spectrometer 30 are configured separately; however, the configuration is not limited thereto, and any configuration having these functions may be used. For example, as illustrated in
Specifically, the printing apparatus 110 includes the information processing device 20, a printing machine 11 as a recording section, the spectrometer 30, a printing medium holder 40, and the like. The printing medium holder 40 houses the printing medium PM, and the spectrometer 30 performs spectrometry on the printing medium PM housed in the printing medium holder 40 and acquires spectroscopic spectrum data. The printing machine 11 is a printing machine similar to the printing machine included in the printer 10. In a preferred example, the information processing device 20 is a tablet PC provided with a touch panel, and is incorporated in the printing apparatus 110 with a display section 150 exposed. Such a printing apparatus 110 functions in the same manner as the printing system 100.
The information processing device 20 includes a processor 105, a storage section 120, an interface circuit 130, and an input device 140 and a display section 150 coupled to the interface circuit 130. The spectrometer 30 and the printer 10 are also coupled to the interface circuit 130. Further, the interface circuit 130 is coupled to a network NW by wire or wirelessly. The network NW is also coupled to a cloud environment.
Although not particularly limited, the processor 105 has, for example, not only a function of executing the processes described later in detail but also a function of displaying, on the display section 150, data obtained through the processes and data generated during the processes.
The processor 105 functions as a print process section 112 that executes print processing using the printer 10, and also functions as a class classification process section 114 that executes a class classification process of the spectral data of the printing medium PM and as a print setting creation section 116 that creates print setting suitable for the printing medium PM. Furthermore, the processor 105 also functions as a learning section 117 that obtains a discriminator that has been machine-learned using physical information and type information of the printing medium PM, and as a discriminator management section 118 that manages information related to the discriminator. The discriminator will be described later.
The processor 105 executes computer programs stored in the storage section 120, thereby realizing the print process section 112, the class classification process section 114, the print setting creation section 116, the learning section 117, and the discriminator management section 118. In a preferred example, the processor 105 includes one or more processors. Each of the sections may be realized by a hardware circuit. The processor in the present embodiment is a term including such a hardware circuit.
Further, the processor executing the class classification process may be a processor that is included in a remote computer connected to the information processing device 20 via the network NW including the cloud environment.
In a preferred example, the storage section 120 includes a random access memory (RAM) and a read only memory (ROM). The storage section 120 may also include a hard disk drive (HDD).
The storage section 120 stores a printing parameter corresponding to physical information and type information of the printing medium PM, a graphical user interface (GUI) setting screen on which a user inputs an operation of adding a new printing medium, a printing medium management program used for management such as addition of a printing medium, and the like. Examples of the printing parameter as a recording parameter include an ink ejection amount, a temperature of a heater for drying the ink, a drying time, a medium feeding speed, a transport parameter including media tension in the transport mechanism, and the like. The printer 10 and the printing machine 11 perform printing based on the printing parameter. In addition, the storage section 120 also stores accuracy evaluation data for the discrimination accuracy of the discriminator. The accuracy evaluation data will be described later.
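As a concrete illustration only, a per-medium record of such recording parameters might look like the following minimal sketch; the field names, units, and values are assumptions for illustration, not the actual stored schema.

```python
from dataclasses import dataclass

@dataclass
class PrintParameter:
    """Illustrative per-medium record of recording parameters; the field
    names and units are assumptions, not the actual stored schema."""
    ink_ejection_amount: float   # relative ink duty
    heater_temperature_c: float  # drying heater temperature
    drying_time_s: float
    feed_speed_mm_s: float
    media_tension_n: float       # transport parameter

# Hypothetical entry for one medium type:
glossy_paper = PrintParameter(1.0, 60.0, 2.5, 300.0, 12.0)
```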
Furthermore, the storage section 120 stores a plurality of machine learning models 201 and 202, a plurality of spectral data groups SD1 and SD2, a medium identifier list IDL, a plurality of group management tables GT1 and GT2, a plurality of known feature spectrum groups KS1 and KS2, and a print setting table PST. The machine learning models 201 and 202 are used in an operation by the class classification process section 114.
Configuration examples and operations of the machine learning models 201 and 202 will be described later. The spectral data groups SD1 and SD2 are a set of labeled spectral data used for learning of the machine learning models 201 and 202. The medium identifier list IDL is a list in which the medium identifier and the spectral data are registered for each printing medium. The plurality of group management tables GT1 and GT2 are tables showing management states of the spectral data groups SD1 and SD2. The known feature spectrum groups KS1 and KS2 are a set of feature spectra obtained when training data is input again to the learned machine learning models 201 and 202. The feature spectrum will be described later. The print setting table PST is a table in which print settings suitable for each printing medium are registered.
In the present embodiment, the input data IM is one-dimensional array data because it is the spectral data. For example, the input data IM is data obtained by extracting 36 representative values every 10 nm from the spectral data in the range of 380 nm to 730 nm.
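As a rough illustration of this preprocessing, the following sketch resamples a measured spectrum to the 36 representative values; the interpolation approach is an assumption, since the text only says that representative values are extracted every 10 nm.

```python
import numpy as np

def to_input_vector(wavelengths, reflectances):
    """Resample a measured spectrum to the 36 representative values
    (380 nm to 730 nm in 10 nm steps) used as model input."""
    targets = np.arange(380, 731, 10)          # 36 wavelengths
    return np.interp(targets, wavelengths, reflectances)

# Example: a spectrum measured at 5 nm pitch is reduced to 36 values.
wl = np.arange(380, 731, 5)
refl = np.random.rand(wl.size)                 # placeholder measurement
print(to_input_vector(wl, refl).shape)         # (36,)
```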
In the example of
The machine learning model 201 of
The configuration of each of the layers 211 to 251 can be described as follows.
In the description of each of the layers 211 to 251, a character string before the parentheses is a layer name, and numbers in the parentheses are, in order, the number of channels, a kernel size, and a stride. For example, the layer name of the Conv layer 211 is “Conv”, the number of channels is 32, the kernel size is 1×6, and the stride is 2. In
The Conv layer 211 is a layer composed of scalar neurons. The other four layers 221 to 251 are layers composed of vector neurons. A vector neuron is a neuron that inputs and outputs a vector. In the above description, the dimension of the output vector of each vector neuron is constant at 16. In the following, the term “node” will be used as a superordinate concept of the scalar neurons and vector neurons.
In
As is well known, resolution W1 in the y direction after convolution is given by the following Equation.
W1=Ceil{(W0−Wk+1)/S}
Here, W0 is resolution before convolution, Wk is a kernel size, S is a stride, and Ceil{X} is a function for an operation of rounding up X.
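As a quick sanity check of this formula, the following minimal helper computes W1; for instance, the Conv layer 211 (kernel 1×6, stride 2) applied to the 36-element input data gives Ceil{(36−6+1)/2} = 16.

```python
import math

def resolution_after_conv(w0: int, wk: int, s: int) -> int:
    """W1 = Ceil{(W0 - Wk + 1) / S}: output resolution of a convolution
    with kernel size Wk and stride S, without padding."""
    return math.ceil((w0 - wk + 1) / s)

# Kernel 1x6, stride 2, applied to the 36-element input data:
print(resolution_after_conv(36, 6, 2))  # 16
```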
The resolution of each layer illustrated in
The ClassVN layer 251 has n1 channels. The similarity arithmetic section 261 has one channel. In the example of
The determination value Class1-UN indicating the unknown class may be omitted. In this case, when the largest value among the determination values Class1-1 to Class1-10 for the known classes is less than a predetermined threshold value, the class of the input data IM is determined to be unknown.
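The following is a minimal sketch of this threshold-based determination, assuming the determination values are already normalized; the threshold value of 0.5 is an illustrative assumption.

```python
import numpy as np

def decide_class(determination_values, threshold=0.5):
    """Return the index of the winning known class, or "unknown" when the
    largest determination value is below the threshold (the 0.5 used here
    is an illustrative assumption)."""
    best = int(np.argmax(determination_values))
    if determination_values[best] < threshold:
        return "unknown"
    return best

print(decide_class(np.array([0.05, 0.82, 0.13])))  # 1
print(decide_class(np.array([0.34, 0.33, 0.33])))  # unknown
```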
The configuration of each of the layers 212 to 252 can be described as follows.
As can be understood by comparing
The second machine learning model 202 is configured to have at least one known class different from the first machine learning model 201. Further, since the classes that can be classified differ between the first machine learning model 201 and the second machine learning model 202, the values of the kernel elements also differ from each other. In the present disclosure, when N is an integer of 2 or more, any one of the N machine learning models is configured to have at least one known class different from the other machine learning models. In the present embodiment, the number N of machine learning models is set to 2 or more, but the present disclosure is also applicable to a case where only one machine learning model is used.
In step S10, spectral data of a plurality of initial printing media is generated as initial spectral data. In the present embodiment, the initial printing media used for learning of the machine learning model in the preparation step are all "any printing media". In the present disclosure, the term "any printing medium" means a printing medium that can be subjected to the class classification process by the machine learning model and that can be excluded from the class classification process without a user's exclusion instruction. On the other hand, a printing medium added in the medium addition process to be described later is an essential printing medium that cannot be excluded from the class classification process without the user's exclusion instruction. However, a part or all of the initial printing media may be used as essential printing media.
In step S10, initial spectral data is generated by performing spectrometry on a plurality of initial printing media by the spectrometer 30 in an unprinted state. At this time, it is preferable to extend the data in consideration of variation in the spectral reflectance. Generally, the spectral reflectance varies depending on a color measurement date and a measuring instrument. The data extension is processing for generating a plurality of spectral data by imparting random variations to the measured spectral data in order to simulate such variations. It should be noted that the initial spectral data may be virtually generated without performing spectrometry on the actual printing medium. In this case, the initial printing medium is also virtual.
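As one way to picture this data extension, the following sketch generates variants of a measured spectrum by imparting small random perturbations; the Gaussian noise model and its 1% scale are assumptions, since the text does not specify the variation model.

```python
import numpy as np

def extend_spectral_data(spectrum, n_copies=10, scale=0.01, rng=None):
    """Generate n_copies variants of one measured spectrum by imparting
    small random variations, simulating measurement-date and instrument
    differences. The Gaussian model and 1% scale are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, scale, size=(n_copies, spectrum.size))
    return np.clip(spectrum + noise, 0.0, 1.0)    # keep reflectance in [0, 1]

measured = np.linspace(0.2, 0.8, 36)              # placeholder spectrum
print(extend_spectral_data(measured).shape)       # (10, 36)
```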
In step S20, a medium identifier list IDL is created for the plurality of initial printing media. FIG. is an explanatory diagram illustrating the medium identifier list IDL. A medium identifier, a medium name, a data sub-number, and spectral data assigned to each printing medium are registered in the medium identifier list IDL. In the example, medium identifiers “A-1” to “A-16” are assigned to 16 printing media. The medium name is a name of the printing medium displayed in a window for the user to set printing conditions. The data sub-number is a number for distinguishing the plurality of spectral data related to the same printing medium. In the example, three spectral data are registered for each printing medium. However, the number of spectral data for each printing medium may be different. For each printing medium, one or more spectral data may be registered, but a plurality of spectral data are preferably registered.
In step S30 of
In step S40 of
In the present embodiment, a plurality of spectral data are grouped into two spectral data groups SD1 and SD2, but only one spectral data group may be used or three or more spectral data groups may be created. Further, a plurality of spectral data groups may be created by a method other than the clustering process. However, when a plurality of spectral data are grouped by the clustering process, the spectral data close to each other can be grouped into the same group. When a plurality of machine learning models are learned using each of the plurality of spectral data groups, the accuracy of the class classification process by the machine learning model can be enhanced as compared with a case where the clustering process is not executed.
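Since the text calls only for a "clustering process" without naming an algorithm, the following sketch uses k-means (an assumption) to split the initial spectra into the two groups SD1 and SD2 so that mutually close spectra end up in the same group.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row of 36 reflectance values per initial printing medium (placeholder).
spectra = np.random.rand(16, 36)

# Group mutually close spectra so each group trains its own model.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
group_sd1 = spectra[labels == 0]
group_sd2 = spectra[labels == 1]
print(group_sd1.shape, group_sd2.shape)
```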
Even when spectral data of a new printing medium is added after being grouped by the clustering process, it is possible to maintain a state equivalent to a state where the spectral data of the new printing medium is grouped by the clustering process.
In step S50 of
In step S60 of
In step S80, the class classification process section 114 inputs the spectral data groups SD1 and SD2 again to the learned machine learning models 201 and 202 to generate known feature spectrum groups KS1 and KS2. The known feature spectrum groups KS1 and KS2 are a set of feature spectra to be described later. Hereinafter, a method for generating the known feature spectrum group KS1 mainly associated with the machine learning model 201 will be described.
A vertical axis of
The number of feature spectra Sp obtained from the output of the ConvVN1 layer 231 for one input data is equal to the number of plane positions (x, y) of the ConvVN1 layer 231, that is, 1×6=6.
Similarly, for one input data, three feature spectra Sp are obtained from the output of ConvVN2 layer 241, and one feature spectrum Sp is obtained from the output of ClassVN layer 251.
When the training data is input again to the learned machine learning model 201, the similarity arithmetic section 261 calculates the feature spectrum Sp illustrated in
Each record of the known feature spectrum group KS1_ConvVN1 includes a record number, a layer name, a label Lb, and a known feature spectrum KSp. The known feature spectrum KSp is the same as the feature spectrum Sp in
The spectral data group used in step S80 does not have to be the same as the plurality of spectral data groups SD1 and SD2 used in step S70. However, there is an advantage in that it is not necessary to prepare new training data as long as a part or all of the plurality of spectral data groups SD1 and SD2 used in step S70 are also used in step S80. Step S80 may be omitted.
In step S210, it is determined whether or not a discrimination process is necessary for a target printing medium which is a printing medium as an object to be processed. When the discrimination process is unnecessary, that is, when a type of the target printing medium is known, the process proceeds to step S280 to select a print setting suitable for the target printing medium, and printing is performed using the target printing medium in step S290. On the other hand, when the type of the target printing medium is unclear and the discrimination process is required, the process proceeds to step S220.
In step S220, the class classification process section 114 acquires target spectral data by causing the spectrometer 30 to perform spectrometry on the target printing medium. The target spectral data is then subjected to the class classification process by a machine learning model.
In step S230, the class classification process section 114 inputs the target spectral data to the existing learned machine learning models 201 and 202, and executes the class classification process of the target spectral data. In this case, either a first process method in which the plurality of machine learning models 201 and 202 are sequentially used one by one or a second process method in which the plurality of machine learning models 201 and 202 are used simultaneously can be used. In the first process method, the class classification process is executed first using one machine learning model 201, and when it is determined that the target spectral data belongs to an unknown class as a result, the class classification process is executed using another machine learning model 202. In the second process method, two machine learning models 201 and 202 are used simultaneously to execute the class classification process on the same target spectral data in parallel, and the class classification process section 114 integrates the processing results. According to an experiment of the inventors of the present disclosure, the second process method is more preferable because a processing time is shorter than that of the first process method.
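The two process methods can be pictured as follows; this is a minimal sketch in which the classify() interface, the "unknown" sentinel, and the thread-based parallelism are all assumptions made for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_sequential(models, x):
    """First process method: use the models one by one, stopping at the
    first model that does not judge x as an unknown class."""
    for model in models:
        result = model.classify(x)       # classify() is an assumed interface
        if result != "unknown":
            return result
    return "unknown"

def classify_parallel(models, x):
    """Second process method: run all models on the same data
    simultaneously and integrate the results (here: take the first
    non-unknown answer, an assumed integration rule)."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda m: m.classify(x), models))
    known = [r for r in results if r != "unknown"]
    return known[0] if known else "unknown"
```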
In step S240, the class classification process section 114 determines whether the target spectral data belongs to an unknown class or a known class from the result of the class classification process in step S230. When the target spectral data belongs to the unknown class, the target printing medium is a new printing medium that does not correspond to any one of the plurality of initial printing media used in the preparation step and the printing medium added in a medium addition process to be described later. Therefore, the process proceeds to step S300 to be described later, and the medium addition process is executed. On the other hand, when the target spectral data belongs to a known class, the process proceeds to step S250.
In step S250, a similarity to the known feature spectrum group is calculated using the one of the plurality of machine learning models 201 and 202 for which it has been determined that the target spectral data belongs to the known class. For example, when it is determined by the processing of the first machine learning model 201 that the target spectral data belongs to the known class, the similarity arithmetic section 261 calculates similarities S1_ConvVN1, S1_ConvVN2, and S1_ClassVN to the known feature spectrum group KS1 from the outputs of the ConvVN1 layer 231, the ConvVN2 layer 241, and the ClassVN layer 251, respectively. On the other hand, when it is determined by the processing of the second machine learning model 202 that the target spectral data belongs to the known class, the similarity arithmetic section 262 calculates similarities S2_ConvVN1, S2_ConvVN2, and S2_ClassVN to the known feature spectrum group KS2.
Hereinafter, a method for calculating the similarity S1_ConvVN1 from the output of the ConvVN1 layer 231 of the first machine learning model 201 will be described.
The similarity S1_ConvVN1 can be calculated using, for example, the following Equation:
S1_ConvVN1(Class)=max[G{Sp(i,j),KSp(Class,k)}]
wherein "Class" indicates an ordinal number for a plurality of classes, G{a,b} indicates a function for obtaining a similarity between a and b, Sp(i,j) indicates the feature spectra at all plane positions (i,j) obtained according to the target spectral data, KSp(Class,k) indicates all the known feature spectra associated with the ConvVN1 layer 231 and the specific "Class", and max[X] indicates an operation that takes the maximum value of X. That is, the similarity S1_ConvVN1 is the maximum value of the similarities calculated between each of the feature spectra Sp(i,j) at all the plane positions (i,j) of the ConvVN1 layer 231 and each of all the known feature spectra KSp(k) corresponding to the specific class. Such a similarity S1_ConvVN1 is obtained for each of the plurality of classes corresponding to a plurality of labels Lb. The similarity S1_ConvVN1 represents a degree to which the target spectral data is similar to a feature of each class.
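The following sketch computes this per-class similarity; since the text leaves the similarity function G unspecified, cosine similarity is assumed here.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def class_similarity(feature_spectra, known_spectra_for_class, g=cosine):
    """S(Class) = max over all plane positions (i, j) and all known feature
    spectra k of G{Sp(i, j), KSp(Class, k)}; cosine similarity stands in
    for the unspecified function G."""
    return max(g(sp, ksp)
               for sp in feature_spectra
               for ksp in known_spectra_for_class)

# Example: 6 feature spectra from the ConvVN1 layer vs. 3 known spectra.
sp = np.random.rand(6, 16)
ksp = np.random.rand(3, 16)
print(class_similarity(sp, ksp))
```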
The similarities S1_ConvVN2 and S1_ClassVN related to the outputs of the ConvVN2 layer 241 and the ClassVN layer 251 are also generated in the same manner as the similarity S1_ConvVN1. It is not necessary to generate all three similarities S1_ConvVN1, S1_ConvVN2, and S1_ClassVN, but it is preferable to generate one or more of them. In the present disclosure, a layer used to generate the similarity is also referred to as a "specific layer".
In step S260, the class classification process section 114 presents the similarity obtained in step S250 to the user, and the user confirms whether or not the similarity matches the result of the class classification process. The similarities S1_ConvVN1, S1_ConvVN2, and S1_ClassVN represent the degree to which the target spectral data is similar to the feature of each class, and thus whether the result of the class classification process is good or bad can be confirmed from at least one of them. For example, when at least one of the three similarities S1_ConvVN1, S1_ConvVN2, and S1_ClassVN does not match the result of the class classification process, it can be determined that the similarities do not match the result of the class classification process. In another embodiment, it may be determined that the similarities do not match the result of the class classification process only when all three similarities S1_ConvVN1, S1_ConvVN2, and S1_ClassVN fail to match it. Generally, when a predetermined number of similarities among a plurality of similarities generated from the outputs of a plurality of specific layers do not match the result of the class classification process, it is determined that the similarities do not match the result of the class classification process. The determination in step S260 may be performed by the class classification process section 114. Further, step S250 and step S260 may be omitted.
When the similarity matches the result of the class classification process, the process proceeds to step S270, and the class classification process section 114 discriminates the medium identifier of the target printing medium according to the result of the class classification process. For example, the process is executed by referring to the group management tables GT1 and GT2 illustrated in
When it is determined in step S260 that the similarity does not match the result of the class classification process, the target printing medium is a new printing medium that does not correspond to any one of the plurality of initial printing media used in the preparation step and the printing medium added in a medium addition process to be described later. Therefore, the process proceeds to step S300 to be described later. In step S300, the medium addition process is executed in order to make the new printing medium an object being subjected to the class classification process. Since the machine learning model is updated or added in the medium addition process, the medium addition process can be considered as a part of the step of preparing the machine learning model.
In step S310, the class classification process section 114 searches the existing machine learning models 201 and 202 for the machine learning model closest to the spectral data of the additional printing medium. The "machine learning model closest to the spectral data of the additional printing medium" means the machine learning model for which the distance between the representative point G1 or G2 of the training data group used for learning of each of the machine learning models 201 and 202 and the spectral data of the additional printing medium is the shortest. The distance between each of the representative points G1 and G2 and the spectral data of the additional printing medium can be calculated as, for example, the Euclidean distance. The training data group having the smallest distance from the spectral data of the additional printing medium is also referred to as a "proximity training data group".
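A minimal sketch of this search, assuming the representative points are precomputed centroids of the training data groups and using the Euclidean distance mentioned above:

```python
import numpy as np

def nearest_model(representative_points, new_spectrum):
    """Return the index of the model whose training data group
    representative point (e.g., centroid) is closest to the additional
    medium's spectral data, by Euclidean distance."""
    distances = [np.linalg.norm(g - new_spectrum)
                 for g in representative_points]
    return int(np.argmin(distances))

g1 = np.full(36, 0.3)     # representative point of group SD1 (placeholder)
g2 = np.full(36, 0.7)     # representative point of group SD2 (placeholder)
print(nearest_model([g1, g2], np.full(36, 0.62)))  # 1: second model is closest
```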
In step S320, the class classification process section 114 determines whether or not the number of classes corresponding to the essential printing medium has reached an upper limit value for the machine learning model searched in step S310. As described above, in the present embodiment, all the initial printing media used in the preparation step are any printing media, and all the printing media added after the preparation step are essential printing media. When the number of classes corresponding to the essential printing medium has not reached the upper limit value, the process proceeds to step S330, and learning of the machine learning model is performed with the training data to which the spectral data of the additional printing medium is added. State S1 of
When a printing medium is further added from the state S2 of
In the state S5 of
When the machine learning model is not found by the search in step S340, the process proceeds from step S350 to step S370 to create a new machine learning model, and learning of the new machine learning model is performed with training data including the spectral data of the additional printing medium. This process corresponds to a process of changing from the state S5 to the state S6 in
The above-described steps S340 to S360 may be omitted, and when the number of classes of the essential printing media in step S320 is equal to the upper limit value, the process immediately proceeds to step S370.
The medium addition process illustrated in FIG. can be executed when the number of existing machine learning models is one. When the number of existing machine learning models is one, for example, the second machine learning model 202 illustrated in
When the machine learning model is updated or added in one of steps S330, S360, and S370, in step S380, the class classification process section 114 inputs the training data again to the updated or added machine learning model to generate a known feature spectrum group. Since the process is the same as that in step S230 of
When the process of
In the process of
According to the process of
In step S430, the class classification process section 114 performs relearning on the machine learning model using the training data updated in step S420. In step S440, the class classification process section 114 inputs the training data again to the relearned machine learning model to generate a known feature spectrum group. Since the process is the same as that in step S230 of
As described above, in the present embodiment, the class classification process is executed using N machine learning models, where N is an integer of 1 or more, so that printing media having similar optical characteristics can be accurately discriminated. Furthermore, by executing the class classification process using two or more machine learning models, the process can be executed at a higher speed than in a case where one machine learning model performs the class classification process into a plurality of classes.
In step S510, it is determined whether or not there is a machine learning model of which the number of classes is less than the upper limit value among the existing machine learning models. When N is an integer of 2 or more, and when there are N existing machine learning models, it is determined whether or not there is a machine learning model of which the number of classes is less than the upper limit value. However, the number of the existing machine learning models N may be 1. In the present embodiment, there are two existing machine learning models 201 and 202 illustrated in
In step S520, the class classification process section 114 updates the machine learning model for which the number of classes is less than the upper limit value so that the number of channels in the highest layer is increased by one. In the present embodiment, the number of channels (n2+1) in the highest layer of the second machine learning model 202 is changed from 3 to 4. In step S530, the class classification process section 114 performs learning on the machine learning model updated in step S520. At the time of the learning, the target spectral data acquired in step S220 of
In step S540, the class classification process section 114 adds a new machine learning model having a class corresponding to the target spectral data and sets a parameter thereof. The new machine learning model preferably has the same configuration as the first machine learning model 201 illustrated in
As the class of the existing machine learning model to be adopted in the new machine learning model, for example, it is preferable to select from the following classes.
(a) A class corresponding to the optical spectrum data having the highest similarity to the target spectral data among the plurality of known classes in the existing machine learning model.
(b) A class corresponding to the optical spectrum data having the lowest similarity to the target spectral data among the plurality of known classes in the existing machine learning model.
(c) An erroneously determined class to which the target spectral data in step S240 of
Among these, adopting the class (a) or (c) makes it possible to reduce erroneous discrimination in the new machine learning model. In addition, adopting the class (b) makes it possible to shorten a learning time of the new machine learning model.
In step S550, the class classification process section 114 performs learning on the added machine learning model. In the learning, the target spectral data acquired in step S220 of
When the number of known classes of the second machine learning model 202 reaches the upper limit value, the third machine learning model is added by steps S540 and S550 of
(1) When the other one machine learning model has the number of classes less than the upper limit value, for the other one machine learning model, a new class for the target spectral data is added by performing learning using the training data including the target spectral data by the process of steps S520 and S530.
(2) When the other one machine learning model has the number of classes equal to the upper limit value, a new machine learning model having a class corresponding to the target spectral data is added by the process of steps S540 and S550.
According to the processes, even when the class classification of the target spectral data cannot be performed well with the N machine learning models, it is possible to classify the target spectral data into the class corresponding to the target spectral data.
The update process of the machine learning model illustrated in
In step S560, the class classification process section 114 inputs the training data again to the updated or added machine learning model to generate a known feature spectrum group.
As described above, when N is an integer of 2 or more in the present embodiment, the class classification process is executed using N machine learning models, so that the process can be executed at a higher speed than in a case where one machine learning model performs the class classification process into a plurality of classes. Furthermore, when the existing machine learning models cannot classify the data well, the data can still be classified into its corresponding class by adding a class to an existing machine learning model or by adding a new machine learning model to the existing machine learning models.
In the description above, a vector neural network type machine learning model using vector neurons is used, but instead, a machine learning model using scalar neurons like a normal convolutional neural network may be used. However, the vector neural network type machine learning model is preferable in that the accuracy of the class classification process is higher than that of the machine learning model using the scalar neurons.
The method for computing the output of each layer in the first machine learning model 201 illustrated in FIG. 4 is as follows. The same applies to the second machine learning model 202.
Each node of the PrimeVN layer 221 regards the scalar outputs of the 1×1×32 nodes of the Conv layer 211 as a 32-dimensional vector, and the vector output of the node is obtained by multiplying this vector by a transformation matrix. The transformation matrix is an element of the 1×1 kernel and is updated by training the machine learning model 201. It is also possible to integrate the processes of the Conv layer 211 and the PrimeVN layer 221 so that they form one primary vector neuron layer.
When the PrimeVN layer 221 is called the "lower layer L" and the ConvVN1 layer 231 adjacent to a higher level side is called the "higher layer L+1", the output of each node of the higher layer L+1 is determined using the following Equations:

v_ij = W^L_ij × M^L_i  (2)

u_j = Σ_i v_ij  (3)

a_j = F(|u_j|)  (4)

M^(L+1)_j = a_j × u_j/|u_j|  (5)

wherein,
M^L_i is an output vector of the i-th node in the lower layer L,
M^(L+1)_j is an output vector of the j-th node in the higher layer L+1,
v_ij is a prediction vector of the output vector M^(L+1)_j,
W^L_ij is a prediction matrix for calculating the prediction vector v_ij from the output vector M^L_i of the lower layer L,
u_j is the sum of the prediction vectors v_ij, that is, a sum vector given as their linear combination,
a_j is an activation value, which is a normalization coefficient obtained by normalizing the norm |u_j| of the sum vector u_j, and
F(X) is a normalization function for normalizing X.
As the normalization function F(X), for example, the following Equation (4a) or Equation (4b) can be used:

a_j = F(|u_j|) = exp(β|u_j|)/Σ_k exp(β|u_k|)  (4a)

a_j = F(|u_j|) = |u_j|/Σ_k |u_k|  (4b)

wherein,
k is an ordinal number for all nodes in the higher layer L+1, and
β is an adjustment parameter that is an arbitrary positive coefficient, for example, β=1.
In Equation (4a), the activation value a_j is obtained by normalizing the norm |u_j| of the sum vector u_j with a softmax function over all the nodes of the higher layer L+1. On the other hand, in Equation (4b), the activation value a_j is obtained by dividing the norm |u_j| of the sum vector u_j by the sum of the norms |u_k| over all the nodes of the higher layer L+1. As the normalization function F(X), a function other than Equation (4a) or Equation (4b) may be used.
The ordinal number i in Equation (3), which is assigned for convenience to the nodes of the lower layer L used for determining the output vector M^(L+1)_j of the j-th node in the higher layer L+1, takes a value of 1 to n. In addition, the integer n is the number of nodes in the lower layer L used for determining the output vector M^(L+1)_j of the j-th node in the higher layer L+1. Therefore, the integer n is given by the following Equation:
n=Nk×Nc (6)
wherein, Nk is the number of kernel elements, and Nc is the number of channels of the PrimeVN layer 221 as the lower layer. In the example of
One kernel used to obtain the output vectors of the ConvVN1 layer 231 has 1×3×26=78 elements, with the kernel size 1×3 as its surface size and the 26 channels of the lower layer as its depth, and each element is a prediction matrix W^L_ij. In addition, 20 sets of this kernel are required to generate the output vectors of the 20 channels of the ConvVN1 layer 231. Therefore, the number of kernel prediction matrices W^L_ij used to obtain the output vectors of the ConvVN1 layer 231 is 78×20=1560. The prediction matrices W^L_ij are updated by training the machine learning model 201.
As can be seen from Equations (2) to (5) described above, the output vector M^(L+1)_j of each node of the higher layer L+1 is obtained by the following operations of:
(a) multiplying the output vector M^L_i of each node in the lower layer L by the prediction matrix W^L_ij to obtain the prediction vector v_ij,
(b) obtaining the sum vector u_j, which is the sum, that is, a linear combination, of the prediction vectors v_ij obtained from the nodes in the lower layer L,
(c) obtaining the activation value a_j, which is a normalization coefficient obtained by normalizing the norm |u_j| of the sum vector u_j, and
(d) dividing the sum vector u_j by the norm |u_j| and further multiplying the result by the activation value a_j.
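As a concrete illustration of operations (a) to (d), the following is a minimal sketch of one vector neuron layer's forward pass under Equations (2) to (5), using the softmax normalization of Equation (4a); the array shapes mirror the PrimeVN-to-ConvVN1 example at one plane position (n = Nk × Nc = 78 lower-layer nodes, 20 higher-layer channels, 16-dimensional vectors), and random values stand in for learned parameters.

```python
import numpy as np

def vector_neuron_layer(m_lower, w, beta=1.0):
    """One vector neuron layer's forward pass per Equations (2)-(5),
    using the softmax normalization of Equation (4a).
    m_lower: (n, d_in) output vectors M^L_i of the n lower-layer nodes.
    w: (n, j, d_out, d_in) prediction matrices W^L_ij.
    Returns the (j, d_out) output vectors M^(L+1)_j."""
    # (2) prediction vectors v_ij = W^L_ij x M^L_i
    v = np.einsum('njoi,ni->njo', w, m_lower)
    # (3) sum vectors u_j = sum over i of v_ij
    u = v.sum(axis=0)
    # (4a) activation values a_j: softmax of beta * |u_j| over all nodes j
    norms = np.linalg.norm(u, axis=1)
    e = np.exp(beta * (norms - norms.max()))     # numerically stable softmax
    a = e / e.sum()
    # (5) M^(L+1)_j = a_j x u_j / |u_j|
    return (a / norms)[:, None] * u

# n = Nk x Nc = 3 x 26 = 78 lower-layer nodes feed one higher-layer position.
m = np.random.rand(78, 16)
w = np.random.rand(78, 20, 16, 16)
print(vector_neuron_layer(m, w).shape)           # (20, 16)
```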
The activation value a_j is a normalization coefficient obtained by normalizing the norm |u_j| over all the nodes in the higher layer L+1. Thus, the activation value a_j can be considered as an index showing a relative output intensity of each node among all the nodes in the higher layer L+1. The norms used in Equations (4), (4a), (4b), and (5) are typically L2 norms representing vector lengths. At this time, the activation value a_j corresponds to the vector length of the output vector M^(L+1)_j. Since the activation value a_j is only used in Equations (4) and (5), it does not need to be output from the node. However, it is also possible to configure the higher layer L+1 so as to output the activation value a_j to the outside.
The configuration of the vector neural network is almost the same as the configuration of the capsule network, and the vector neurons of the vector neural network correspond to the capsules of the capsule network. However, the operations according to Equations (2) to (5) used in the vector neural network differ from the operations used in the capsule network. The biggest difference is that, in the capsule network, each prediction vector v_ij on the right side of Equation (3) is multiplied by a weight, and that weight is searched for by repeating dynamic routing a plurality of times. In contrast, the vector neural network of the present embodiment has an advantage in that the output vector M^(L+1)_j can be obtained by calculating Equations (2) to (5) once in order, so that it is not necessary to repeat the dynamic routing and the operation is faster. Furthermore, the vector neural network of the present embodiment requires less memory for the operation than the capsule network; according to an experiment by the inventors of the present disclosure, the operation was performed using approximately 1/2 to 1/3 of the memory.
A vector neural network is the same as a capsule network in that it uses nodes that input and output vectors, and the advantages of using vector neurons are therefore common to the capsule network. Further, as in a normal convolutional neural network, the plurality of layers 211 to 251 represent features of larger regions toward the higher layers and features of smaller regions toward the lower layers. Here, the "feature" means a feature included in the input data to the neural network. The vector neural network and the capsule network are superior to the normal convolutional neural network in that the output vector of a certain node includes spatial information of the feature represented by that node. That is, the vector length of the output vector of a certain node represents an existence probability of the feature represented by the node, and the vector direction represents spatial information such as a direction or scale of the feature. Therefore, the vector directions of the output vectors of two nodes belonging to the same layer represent a positional relationship of the respective features. Alternatively, it can be said that the vector directions of the output vectors of the two nodes represent a variation of the feature. For example, when a node corresponds to the feature of "eyes", the direction of the output vector can represent variations such as the thinness of the eyes and how the eyes are lifted. In the normal convolutional neural network, it can be said that the spatial information of the feature is lost by the pooling process. As a result, the vector neural network and the capsule network have an advantage in that they are superior in the performance of identifying the input data as compared with the normal convolutional neural network.
The advantage of the vector neural network can also be considered as follows. That is, the vector neural network has an advantage in that the output vector of a node represents the features of the input data as coordinates in a continuous space. Therefore, when the vector directions are close, the features can be evaluated as similar. Furthermore, even when features included in the input data are not covered by the training data, the vector neural network can discriminate the features by interpolation. On the other hand, the normal convolutional neural network has a drawback in that the features of the input data cannot be represented as coordinates in a continuous space because of the chaotic compression caused by the pooling process.
Since the outputs of the nodes of the ConvVN2 layer 241 and the ClassVN layer 251 are also determined in the same manner using Equations (2) to (5), detailed description thereof will be omitted. The resolution of the ClassVN layer 251, which is the highest layer, is 1×1, and the number of channels is (n1+1).
The output of the ClassVN layer 251 is converted into a plurality of determination values Class1-1 to Class1-10 for the known classes and a determination value Class1-UN indicating the unknown class. Generally, the determination values are values normalized by the softmax function. Specifically, for example, a determination value for each class can be obtained by calculating the vector length of the output vector of each node of the ClassVN layer 251 and normalizing the vector lengths of the nodes with the softmax function. As described above, the activation value a_j obtained by Equation (4) is a value corresponding to the vector length of the output vector M^(L+1)_j and is already normalized. Thus, the activation value a_j of each node of the ClassVN layer 251 may be output and used as the determination value for each class as it is.
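A minimal sketch of this conversion, assuming the ClassVN output vectors are available as an array with one row per class node:

```python
import numpy as np

def determination_values(class_vn_outputs):
    """Convert the ClassVN layer's output vectors into one determination
    value per class: take each output vector's length and normalize the
    lengths with the softmax function."""
    lengths = np.linalg.norm(class_vn_outputs, axis=1)
    e = np.exp(lengths - lengths.max())          # numerically stable softmax
    return e / e.sum()

outputs = np.random.rand(11, 16)   # 10 known classes + 1 unknown class node
print(determination_values(outputs).sum())       # 1.0
```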
In the above-described embodiment, a vector neural network for obtaining the output vector by the operations of Equations (2) to (5) is used as the machine learning models 201 and 202, but instead of the vector neural network, the capsule network disclosed in U.S. Pat. No. 5,210,798 and WO2019/083553 may be used. In addition, a neural network using only scalar neurons may be used.
A method for generating the known feature spectrum groups KS1 and KS2 and a method for generating the output data of an intermediate layer such as the ConvVN1 layer are not limited to the above embodiments; for example, the K-means method may be used to generate these data. In addition, these data may be generated using a conversion such as PCA, ICA, or Fisher. Further, the conversion method may differ between the known feature spectrum group KSG and the output data of the intermediate layer.
The screen 50a of
In the printing medium list 51a, an ID number of the printing medium, a medium name which is a name of the printing medium, the presence/absence of learning, and the learning date and time are displayed in a list. The ID number corresponds to the medium identifier described above. The presence/absence of learning is a column for displaying the learning state of machine learning of the corresponding printing medium, and is displayed as “learned” when machine learning has been completed and “not learned” when machine learning has not been performed. As the learning date and time, the date and time when the learning was performed is displayed. In other words, the display section 150 of the information processing device 20 displays the screen 50a for displaying a learning state including whether or not the printing medium is a printing medium used for machine learning.
For example, on the screen 50a of
The add new button 52 is an operation button used when adding a new printing medium.
The learn button 53 is an operation button for confirming the learning state.
The discriminator selection button 54 is an operation button for selecting a discriminator corresponding to the learning group.
Further, “G1: Medium list” is displayed on the upper left of the screen 50a. G1 is an identification number of the discriminator, and the medium list is a list of the printing medium. The printing medium related to the discriminator 1 is displayed in the printing medium list 51a.
When adding a new printing medium, the add new button 52 on the screen 50a is pressed. When the add new button 52 is operated, the setting screen is switched and a screen 55 of
In an initial state of switching the screen, both the ID number column 56 and the medium name column 57 of the screen 55 are blank, and the ID number and the medium name can be input. The screen 55 of
When spectral data of the medium D has been recorded, a graph 58 showing the spectral data is displayed as shown on the screen 55. A horizontal axis of the graph 58 is the wavelength (nm), and a vertical axis is the reflectance. When no spectral data has been recorded, the spectral data of the medium D can be measured by the spectrometer 30 by operating a color measurement button 59 on the right side of the screen 55. In this case, the medium D needs to be set in the printing medium holder 40.
Then, to add the medium D to the "G1: medium list", an add button 60 on the lower side of the screen 55 is operated. When the medium D is not to be added to the "G1: medium list", a cancel button 61 is operated. When the cancel button 61 is pressed, the process after pressing the add new button 52 on the screen 50a of
When the add button 60 is pressed on the screen 55 of
When the medium D is machine-learned, the learn button 53 is pressed on the screen 50b of
For example, the medium A, the medium B, the medium C, and the medium D are selected by the check boxes on the screen 62 of
***Method for Discriminating Whether or Not Recording Medium Is Used for Machine Learning***
The method for discriminating whether or not the printing medium is a printing medium used for machine learning, which has been described above using the plurality of setting screens, will now be organized.
The method for discriminating whether or not the printing medium is a printing medium used for machine learning has a plurality of machine learning models and includes, for each of the plurality of machine learning models, (h) obtaining a discriminator that has been machine-learned using physical characteristics and type information of the printing medium by the learning section 117, (i) determining whether or not the printing medium is the printing medium used for machine learning by the discriminator management section 118 as a learning state determination section, and (j) displaying a determination result on the display section 150.
When the learning execution button 64 is pressed on the screen 62 of
A message "Do you want to display accuracy during learning?", a yes button 67, and a no button 68 are displayed on the screen 66. When the yes button 67 is pressed, the screen 66 is switched to a screen 69 of
The screen 69 of
At the point in time when the learning progress rate is 60%, a state in which the discrimination accuracy is approximately 70% can be confirmed on the screen 69. When the learning is completed and the progress rate reaches 100%, the screen 69 is switched to the screen 71 of
When the yes button 72 is pressed, the screen 71 is switched to a screen 50c of
Furthermore, although machine learning has been described as being performed until the learning progress rate reaches 100%, the present disclosure is not limited thereto, and the machine learning may be terminated when the discrimination accuracy becomes equal to or higher than a predetermined discrimination accuracy.
For example, when the detailed setting button 70 is pressed, a screen (not illustrated) on which a target discrimination accuracy can be input is displayed on the screen 69 of
Furthermore, as a result of the medium D being machine-learned with the discriminator 1 in the description above, as shown in the screen 71 of
For example, when the discriminator selection button 54 on the screen 50a of
Alternatively, when the discrimination accuracy is less than the predetermined discrimination accuracy, a process of changing the discriminator may be programmed and performed by the learning section 117 (
Further, the storage section 120 stores the discrimination accuracy history for each discriminator and a machine learning history corresponding to the discrimination accuracy. For example, when a specific discriminator is selected by operating the discriminator selection button 54 on the screen 50a of
Here, the screen 74 is provided with a restore button 75. When the restore button 75 is pressed, a discrimination accuracy point in the graph can be selected. For example, when the discriminator as of November 10, which has the highest discrimination accuracy, needs to be restored, the discrimination accuracy point of 98% is selected. When the discrimination accuracy point is selected, a message "Do you want to restore discriminator on November 10?", a yes button, and a no button (none of which is illustrated) are displayed. When the yes button is pressed, the restore is performed. The discriminator is restored at the selected discrimination accuracy by the learning section 117 based on the discrimination accuracy history for each discriminator and the machine learning history in the storage section 120.
The method for confirming a discrimination accuracy in the discrimination process of the printing medium, which has been described above using the plurality of setting screens, will now be organized.
The method for confirming a discrimination accuracy has a plurality of machine learning models and includes, for each of the plurality of machine learning models, (h) obtaining a discriminator that has been machine-learned using physical characteristics and type information of the printing medium by the learning section 117, (l) obtaining a discrimination accuracy using accuracy evaluation data different from the physical characteristics of the printing medium used for machine learning by the discriminator management section 118 as an accuracy evaluation section, and (m) displaying the discrimination accuracy on the display section 150.
As described above, the following effects can be obtained according to the printing apparatus 110, the printing system 100, the method for discriminating a printing medium, and the method for confirming a discrimination accuracy of the present embodiment.
The printing apparatus 110 includes the storage section 120 that stores a printing parameter corresponding to physical information and type information of the printing medium PM, the printing machine 11 that performs printing based on the printing parameter, the learning section 117 that obtains a discriminator which has been machine-learned using the physical information and the type information of the printing medium PM, and the discriminator management section 118 that determines whether or not the printing medium is a printing medium used for machine learning of the discriminator as a learning state determination section. The printing system 100 also includes the storage section 120, the printer 10, the learning section 117, and the discriminator management section 118, which function in the same manner as each section of the printing apparatus 110.
According to this, the learning section 117 obtains a machine-learned discriminator using the physical information and the type information of the printing medium PM. Then, the discriminator management section 118 as the learning state determination section determines whether or not the printing medium PM is a printing medium used for machine learning of the discriminator.
Therefore, it is possible to provide the printing apparatus 110 and the printing system 100 capable of determining whether or not the printing medium PM is the printing medium used for machine learning of the discriminator. In other words, it is possible to provide a recording apparatus and a recording system capable of identifying a learning state of the recording medium.
The printing system 100 includes the learning section 117 that obtains a discriminator which has been machine-learned using the physical information and type information of the printing medium PM, and the discriminator management section 118 that obtains the discrimination accuracy of the discriminator as an accuracy evaluation section. The printing apparatus 110 also includes the learning section 117 and the discriminator management section 118, which function in the same manner as each section of the printing system 100.
According to this, the discrimination accuracy of the discriminator can be obtained by the discriminator management section 118 that functions as an accuracy evaluation section.
Therefore, it is possible to provide the printing system 100 (printing apparatus 110) capable of grasping and managing the discrimination accuracy of the discriminator.
The printing apparatus 110 further includes the storage section 120, accuracy evaluation data is stored in the storage section 120, and the discriminator management section 118 as the accuracy evaluation section obtains the discrimination accuracy by using the accuracy evaluation data. According to this, the discrimination accuracy can be obtained using the accuracy evaluation data.
Further, the accuracy evaluation data is data different from the physical characteristics of the printing medium used for machine learning of the corresponding discriminator.
If physical characteristic data of the recording medium used for machine learning were used, the accuracy would be 100%, which is meaningless because such data has already been learned and can be reliably discriminated. Accordingly, an appropriate discrimination accuracy can be obtained by using, as the accuracy evaluation data, data different from the physical characteristics of the recording medium used for machine learning. In other words, an accurate discrimination accuracy can be obtained.
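A minimal sketch of obtaining the discrimination accuracy on data held out from learning, using synthetic spectral data and an sklearn classifier as stand-ins (the array shapes and the model choice are assumptions, not part of the embodiment):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    spectra = rng.random((120, 31))    # stand-in 31-band spectral reflectances
    labels = rng.integers(0, 4, 120)   # four hypothetical medium types

    # Hold out part of the measured data purely for accuracy evaluation;
    # evaluating on the training data itself would not measure anything useful.
    X_train, X_eval, y_train, y_eval = train_test_split(
        spectra, labels, test_size=0.2, random_state=0, stratify=labels)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X_train, y_train)
    print("discrimination accuracy:", accuracy_score(y_eval, model.predict(X_eval)))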
The printing apparatus 110 further includes the display section 150, and the display section 150 displays the screen 50a, the screen 50b, the screen 50c, and the screen 62 as a first screen that displays a learning state including whether or not the printing medium PM is a printing medium used for machine learning of the discriminator.
According to this, since the learning state of the printing medium is displayed on the display section 150, it is possible to inform the user of the learning state of the printing medium.
Further, on the screen 50a, a plurality of printing media PM used for printing by the printing machine 11 (printer 10) serving as a recording section are displayed, and a learning state for each printing medium PM is also displayed.
According to this, it is possible to inform the user of the learning state for each printing medium.
The learning state is also displayed on the screen 50a together with the type information of the printing medium PM.
According to this, it is possible to inform the user of the type information of the printing medium and the learning state of the recording medium together.
In addition, the learning date and time when the machine learning was performed by the learning section 117 is displayed on the screen 50a. According to this, it is possible to inform the user of the history of the learning date and time of the printing medium.
In addition, the learning section 117 performs machine learning on the printing medium selected according to the type information of the printing medium on the screen 50a.
According to this, the user can select any desired recording medium and perform machine learning on it.
Further, the display section 150 displays a screen 69 showing the discrimination accuracy. According to this, the user can recognize the discrimination accuracy on the screen 69.
Further, the discrimination accuracy according to the progress rate of machine learning is displayed on the screen 69 as a graph.
According to this, it is possible to grasp a change in discrimination accuracy according to the progress rate of machine learning.
Further, the learning section 117 completes machine learning when the discrimination accuracy during the progress of machine learning is equal to or higher than a predetermined discrimination accuracy.
According to this, since machine learning is completed at a point in time when the discrimination accuracy reaches the predetermined discrimination accuracy, a discriminator with good accuracy can be efficiently obtained.
Further, a plurality of discriminators obtained by each of the plurality of machine learning models are provided, and the learning section 117 changes the machine learning model when the discrimination accuracy is less than a predetermined discrimination accuracy.
According to this, when the discrimination accuracy does not increase, it is possible to select a machine learning model that may further enhance the discrimination accuracy of the discriminator.
Further, the storage section 120 stores the history of the discrimination accuracy for each discriminator and the history of machine learning corresponding to the discrimination accuracy.
According to this, the discrimination accuracy history and the machine learning history for each discriminator can be confirmed in the storage section 120.
Further, the learning section 117 restores the discriminator with a predetermined discrimination accuracy based on the discrimination accuracy history and the machine learning history recorded in the storage section 120.
According to this, the discriminator with a predetermined discrimination accuracy can be restored from the history of the storage section 120.
The method for discriminating whether or not the printing medium is a printing medium used for machine learning uses a plurality of machine learning models and includes, for each of the plurality of machine learning models, obtaining a discriminator that has been machine-learned using physical characteristics and type information of the printing medium, determining whether or not the printing medium is the printing medium used for machine learning, and displaying a determination result.
According to the method, it is possible to determine whether or not the printing medium is the printing medium used for machine learning, and display the determination result.
Therefore, according to the discrimination method, it is possible to inform the user whether or not the recording medium is the recording medium used for machine learning of the discriminator.
The method for confirming a discrimination accuracy in the discrimination process of the printing medium uses a plurality of machine learning models and includes, for each of the plurality of machine learning models, obtaining a discriminator that has been machine-learned using physical characteristics and type information of the printing medium, obtaining a discrimination accuracy using accuracy evaluation data different from the physical characteristics of the printing medium used for machine learning, and displaying the discrimination accuracy.
According to this, an appropriate discrimination accuracy can be obtained by using accuracy evaluation data different from the physical characteristics of the printing medium used for machine learning of the corresponding discriminator. Furthermore, the discrimination accuracy can be displayed to inform the user.
Therefore, it is possible to provide a method for confirming the discrimination accuracy capable of accurately obtaining the discrimination accuracy in the discrimination process of the printing medium.
Moreover, in a preferred example, a notebook PC or a tablet PC is adopted as the information processing device 20. The printer 10 or the printing apparatus 110 may be configured as a large-sized apparatus that performs large-sized printing on a roll medium as a printing medium. In this case, since the operation panel of the printing apparatus 110 is located away from the roll medium, it is difficult to perform work while checking the actual roll medium. According to the present embodiment, the user can carry a wirelessly connected notebook PC or tablet PC to the location of the roll medium and perform the work there while checking the type information label of the roll medium, so that the work can be performed efficiently. Furthermore, in the preferred example, the notebook PC or the tablet PC includes an imaging unit and can accurately and efficiently acquire the roll medium type information from barcode information printed on the type information label of the roll medium. In addition, since the information processing device 20 may be any information terminal capable of executing a printing medium management program, a smartphone having the same functions as the tablet PC may be used.
In the description above, the spectral reflectance (spectral data) measured by the spectrometer 30 is used as the physical information of the printing medium PM, but the present disclosure is not limited to this, and other physical information of the printing medium PM may be used. For example, a spectral transmittance of light transmitted through the printing medium PM or image data obtained by imaging a surface of the printing medium may be used as the physical information. Alternatively, the printing medium PM may be irradiated with ultrasonic waves, and the resulting reflectance may be used as the physical information.
In the description above, it is assumed that the learning section 117, the discriminator management section 118 serving as the learning state determination section and the accuracy evaluation section, and the like function by the cooperation of each section of the information processing device 20. However, the present disclosure is not limited to this, and these sections may be realized by any information processing device capable of executing the printing medium management program. For example, a server or a PC placed in a cloud environment connected via the network NW may be used as the information processing device 20. According to this, it is possible to manage the printing apparatus 110 from a remote location, to manage a plurality of printing apparatuses 110 collectively, and the like.
The present disclosure is not limited to the above-described embodiment, and can be realized in various aspects within a scope not departing from the gist thereof. For example, the present disclosure can also be realized by the following aspects. The technical features in the embodiments corresponding to the technical features in the aspects described below can be substituted or combined as appropriate in order to solve a part or all of the problems of the present disclosure, or to achieve a part or all of the effects of the present disclosure. Unless a technical feature is explained as essential in the present specification, the technical feature can be deleted as appropriate.
(1) According to a first aspect of the present disclosure, there is provided a method for executing a discrimination process of a printing medium using a machine learning model. The method includes a step (a) of preparing N machine learning models when N is an integer of 1 or more, in which each of the N machine learning models is configured to discriminate a type of the printing medium by classifying input spectral data, which is a spectral reflectance of the printing medium, into any one of a plurality of classes, a step (b) of acquiring target spectral data which is a spectral reflectance of a target printing medium, and a step (c) of discriminating a type of the target printing medium by executing a class classification process of the target spectral data using the N machine learning models.
According to the method, since the class classification process is executed using the machine learning model, it is possible to accurately discriminate the printing medium having similar optical characteristics.
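As one possible reading of step (c), the following sketch runs the class classification process on each of the N models and keeps the most confident prediction; this aggregation rule is an assumption for illustration, since the disclosure only requires that the N models be used. The models are assumed to expose sklearn-style predict_proba and classes_ attributes.

    import numpy as np

    def discriminate(models, target_spectral_data):
        """Classify the target spectral data with each of the N models and
        return the class label with the highest confidence overall."""
        best_label, best_confidence = None, -1.0
        for model in models:
            proba = model.predict_proba([target_spectral_data])[0]
            k = int(np.argmax(proba))
            if proba[k] > best_confidence:
                best_confidence = float(proba[k])
                best_label = model.classes_[k]
        return best_label, best_confidence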
(2) In the above method, the step (c) may include a step of discriminating a medium identifier indicating the type of the target printing medium according to a result of the class classification process of the target spectral data, and the method may further include a step of selecting a print setting for performing printing using the target printing medium according to the medium identifier, and a step of performing printing using the target printing medium according to the print setting.
According to the method, since the print setting is selected based on a result of the discrimination process of the target printing medium, it is possible to produce clean printed matter using the target printing medium.
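A minimal sketch of selecting a print setting from the discriminated medium identifier, assuming the discriminate() sketch above together with the models and target_spectral_data it takes; the identifiers and setting values are purely illustrative.

    # Hypothetical mapping from medium identifier to print settings.
    PRINT_SETTINGS = {
        "plain_paper":  {"ink_amount": 1.0, "dry_time_s": 1, "heater_c": 40},
        "glossy_photo": {"ink_amount": 0.8, "dry_time_s": 5, "heater_c": 55},
        "matte_coated": {"ink_amount": 0.9, "dry_time_s": 3, "heater_c": 50},
    }

    medium_id, _ = discriminate(models, target_spectral_data)
    settings = PRINT_SETTINGS[medium_id]   # select the print setting
    # printer.print_job(data, **settings)  # then print with that setting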
(3) In the above method, N may be an integer of 2 or more, and each of the N machine learning models may be configured to have at least one class different from those of the other machine learning models among the N machine learning models.
According to the method, since the class classification process is executed using two or more machine learning models, it is possible to execute the class classification process faster than a case of executing the class classification process on a plurality of classes in one machine learning model.
(4) In the above method, learning of the N machine learning models may be performed using corresponding N training data groups, and N spectral data groups constituting the N training data groups may be in a state equivalent to a state in which the N spectral data groups are grouped into N groups by a clustering process.
According to the method, since the spectral data used for learning of each machine learning model is grouped by the clustering process, it is possible to enhance the accuracy of the class classification process by the machine learning model.
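A minimal sketch of preparing N training data groups in a state equivalent to a clustering result, using k-means on synthetic spectral data; N, the data shapes, and k-means itself are illustrative choices, since the disclosure requires only an equivalent grouping.

    import numpy as np
    from sklearn.cluster import KMeans

    N = 3
    rng = np.random.default_rng(0)
    spectra = rng.random((300, 31))  # stand-in spectral reflectances

    kmeans = KMeans(n_clusters=N, n_init=10, random_state=0).fit(spectra)
    groups = [spectra[kmeans.labels_ == g] for g in range(N)]  # N training data groups
    representative_points = kmeans.cluster_centers_  # one representative point per group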
(5) In the above method, each training data group may have a representative point representing a center of a spectral data group constituting the training data group, an upper limit value may be set for the number of classes that can be classified by any one machine learning model, and a plurality of types of printing media to be subjected to the class classification process by the N machine learning models may each be classified as either an essential printing medium, which cannot be excluded from the object of the class classification process without a user's exclusion instruction, or an optional printing medium, which can be excluded from the object of the class classification process without the user's exclusion instruction. The step (a) may include a medium addition step of adding a new additional printing medium, which is not yet an object of the class classification process by the N machine learning models, to the object of the class classification process, and the medium addition step may include a step (a1) of acquiring a spectral reflectance of the additional printing medium as additional spectral data, a step (a2) of selecting, as a proximity training data group, a training data group having a representative point closest to the additional spectral data among the N training data groups, and selecting a specific machine learning model that has been learned using the proximity training data group, and a step (a3) of adding the additional spectral data to the proximity training data group to update the proximity training data group when the number of classes corresponding to the essential printing media in the specific machine learning model is less than the upper limit value, and performing relearning on the specific machine learning model using the updated proximity training data group.
According to the method, it is possible to perform class classification corresponding to the additional printing medium. In addition, since the relearning of the machine learning model is performed after the additional spectral data is added to the proximity training data group whose representative point is closest to the additional spectral data of the additional printing medium, it is possible to enhance the accuracy of the class classification process by the machine learning model.
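A minimal sketch of step (a2), assuming the representative points are the group centroids computed in the clustering sketch above (the function name is illustrative):

    import numpy as np

    def select_proximity_group(representative_points, additional_spectral_data):
        """Return the index of the training data group whose representative
        point is closest to the additional spectral data (step (a2))."""
        distances = np.linalg.norm(
            np.asarray(representative_points) - additional_spectral_data, axis=1)
        return int(np.argmin(distances))

    # Usage: g = select_proximity_group(representative_points, new_spectrum)
    # The additional spectral data is then added to groups[g] and the model
    # learned from that group is relearned, per step (a3).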
(6) In the above method, the step (a3) may include a step of deleting optional spectral data about an optional printing medium from the proximity training data group when a sum of the number of classes corresponding to the essential printing media and the number of classes corresponding to the optional printing media in the specific machine learning model at a point in time before executing the step (a3) is equal to the upper limit value.
According to the method, since the optional spectral data of the optional printing medium is deleted from the proximity training data group, it is possible to enhance the accuracy of the class classification process without increasing the number of classes of the machine learning model.
(7) In the above method, the medium addition step may further include a step (a4) of creating a new machine learning model and performing learning on the new machine learning model using a new training data group including the additional spectral data and optional spectral data about one or more optional printing media, when the number of classes corresponding to the essential printing media in the specific machine learning model is equal to the upper limit value.
According to the method, it is possible to perform class classification corresponding to the additional printing medium. In addition, since the learning of the machine learning model is performed using the new training data group including the additional spectral data and the optional spectral data, it is possible to enhance the accuracy of the class classification process.
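The branching among steps (a3), (a4), and the deletion in aspect (6) can be sketched as follows, with dict-based bookkeeping of essential and optional classes per model; the data layout and the returned action strings are assumptions for illustration.

    def add_medium(model_info, new_label, new_spectra, upper_limit):
        """Decide how to accommodate an additional printing medium.
        model_info holds 'essential' and 'optional' (sets of class labels)
        and 'training' (dict mapping label -> list of spectra)."""
        essential, optional = model_info["essential"], model_info["optional"]
        training = model_info["training"]
        if len(essential) < upper_limit:
            if len(essential) + len(optional) == upper_limit and optional:
                dropped = optional.pop()     # aspect (6): free a class slot
                training.pop(dropped, None)  # delete its spectral data
            training[new_label] = list(new_spectra)
            essential.add(new_label)
            return "relearn this model"      # step (a3)
        return "create a new model"          # step (a4): every slot is essential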
(8) The above method may further include a medium exclusion step of excluding one printing medium to be excluded from the object to be subjected to the class classification process by one target machine learning model selected from the N machine learning models, in which the medium exclusion step may include a step (i) of updating the training data group by deleting spectral data about the printing medium to be excluded from a training data group used for learning of the target machine learning model, and a step (ii) of performing relearning on the target machine learning model using the updated training data group. According to the method, it is possible to exclude the printing medium from the object to be subjected to the class classification process of the machine learning model.
(9) In the above method, in the step (i), the spectral data about the printing medium to be excluded may be deleted from the training data group used for learning of the target machine learning model, and optional spectral data about one or more optional printing media may be added to update the training data group, when the number of classes of the target machine learning model obtained by excluding the printing medium to be excluded from the object of the class classification process by the target machine learning model is less than a predetermined lower limit value.
According to the method, since the number of classes of the machine learning model can be set to the lower limit value or more, it is possible to prevent the accuracy of the class classification process from being excessively lowered.
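A minimal sketch of the medium exclusion step with the lower-limit backfill of aspect (9), using the same dict-based bookkeeping as the addition sketch above (all names are illustrative):

    def exclude_medium(model_info, label_to_exclude, lower_limit, saved_optional):
        """Delete the excluded medium's spectral data (step (i)) and, if the
        class count would fall below the lower limit, add back spectra of
        optional media from the saving area before relearning (step (ii))."""
        training = model_info["training"]
        training.pop(label_to_exclude, None)
        model_info["essential"].discard(label_to_exclude)
        while len(training) < lower_limit and saved_optional:
            label, spectra = saved_optional.popitem()  # aspect (9) backfill
            training[label] = spectra
            model_info["optional"].add(label)
        return "relearn this model"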
(10) In the above method, one training data group used for learning of each machine learning model, spectral data excluded from the training data group, and spectral data added to the training data group may be managed to constitute the same spectral data group, the spectral data excluded from the training data group may be saved in a saving area of the spectral data group, and the spectral data added to the training data group may be selected from the spectral data saved in the saving area of the spectral data group.
According to the method, since the spectral data used as the training data is managed as a spectral data group, it is possible to maintain a state equivalent to a state in which the N spectral data groups constituting the N training data groups are grouped by the clustering process.
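A minimal sketch of aspect (10), keeping the excluded spectra in a saving area of the same spectral data group so they can be added back later (the class and attribute names are illustrative):

    class SpectralDataGroup:
        """One spectral data group per model: spectra excluded from the
        training data group move to a saving area instead of being discarded,
        and additions are drawn back from that saving area."""
        def __init__(self):
            self.training = {}     # label -> spectra currently used for learning
            self.saving_area = {}  # label -> spectra excluded from learning

        def exclude(self, label):
            if label in self.training:
                self.saving_area[label] = self.training.pop(label)

        def add_back(self, label):
            if label in self.saving_area:
                self.training[label] = self.saving_area.pop(label)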
(11) According to a second aspect of the present disclosure, there is provided a system for executing a discrimination process of a printing medium using a machine learning model. The system includes a memory that stores N machine learning models when N is an integer of 1 or more, and a processor that executes the discrimination process using the N machine learning models. Each of the N machine learning models is configured to discriminate a type of the printing medium by classifying input spectral data, which is a spectral reflectance of the printing medium, into any one of a plurality of classes. The processor is configured to execute a first process of acquiring target spectral data of a target printing medium and a second process of discriminating a type of the target printing medium by executing a class classification process of the target spectral data using the N machine learning models.
According to the system, since the class classification process is executed using the machine learning model, it is possible to accurately discriminate the printing medium having similar optical characteristics.
The present disclosure can also be realized in various aspects other than the above. For example, the present disclosure can be realized in an aspect of a computer program for realizing a function of a class classification device, a non-transitory storage medium in which the computer program is recorded, or the like.
Number | Date | Country | Kind
2020-213538 | Dec 2020 | JP | national
2021-031439 | Mar 2021 | JP | national
2021-031440 | Mar 2021 | JP | national