The embodiments discussed herein are related to a determination method, a determination program, and an information processing device.
A machine learning model (hereinafter, may be simply referred to as "model") has been increasingly introduced into information systems used in companies or the like for data determination, classification functions, or the like. Because the machine learning model performs determination and classification according to the teacher data learned at the time of system development, when the tendency (data distribution) of the input data changes during the system operation, the accuracy of the machine learning model deteriorates.
Generally, in order to detect model accuracy deterioration during the system operation, a method is used in which a correct answer rate is periodically calculated by having humans confirm whether each output result of the model is correct or wrong, and accuracy deterioration is detected from a decrease in the correct answer rate.
In recent years, the T2 statistic (Hotelling's T-square) has been known as a technique for automatically detecting accuracy deterioration of a machine learning model during the system operation. For example, principal component analysis is performed on an input data group and a normal data (training data) group, and the T2 statistic of each piece of input data, which is the sum of squares of the distances from the origin to the respective standardized principal components, is calculated. Then, a change in the ratio of abnormal-value data is detected based on the distribution of the T2 statistics of the input data group, and accuracy deterioration of the model is automatically detected.
A. Shabbak and H. Midi, "An Improvement of the Hotelling T2 Statistic in Monitoring Multivariate Quality Characteristics", Mathematical Problems in Engineering, pp. 1-15, 2012, is disclosed as related art.
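The computation can be sketched as follows; this is a minimal illustration of the T2 scheme described above (not the specific improvement proposed in the cited reference), assuming NumPy and scikit-learn, with all function and variable names chosen for illustration.

```python
# Minimal sketch of T2-based drift detection (assumes numpy and scikit-learn;
# names are illustrative).
import numpy as np
from sklearn.decomposition import PCA

def fit_t2_monitor(train_x, n_components=10):
    """Fit PCA on the normal (training) data and return a T2 scoring function."""
    pca = PCA(n_components=n_components).fit(train_x)

    def t2_score(x):
        # Project onto the principal components, standardize each score by the
        # component's standard deviation, and sum the squares: the squared
        # distance from the origin in standardized principal-component space.
        scores = pca.transform(x) / np.sqrt(pca.explained_variance_)
        return np.sum(scores ** 2, axis=1)

    return t2_score

# Usage: flag accuracy deterioration when the ratio of abnormal T2 values in
# the input data group rises (train_x, input_x, threshold are assumptions).
# t2 = fit_t2_monitor(train_x)
# abnormal_ratio = np.mean(t2(input_x) > threshold)
```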
According to an aspect of the embodiments, a determination method performed by a computer includes: acquiring a first output result when data generated under a second environment different from a first environment that is a training environment is input to a trained model; acquiring a second output result when the data is input to a detection model that detects a decrease in a correct answer rate of the trained model when the trained model is converted into the second environment; and determining, based on the first output result and the second output result, whether or not to retrain the trained model when the trained model is converted into the second environment.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
In the related art, there are many cases where the development environment of a machine learning model and the introduction environment (production environment) into which the machine learning model is introduced do not necessarily match, and the feature amounts and qualities of the input data differ. For example, when image data is used, because brightness, camera installation positions, camera performance, or the like differ, the resolutions or the like of the captured image data also differ.
Generally, because the machine learning model performs determination and classification according to the teacher data learned under the development environment at the time of development, its performance decreases when the tendency (data distribution) of the input data under the production environment differs from that of the teacher data under the development environment. At present, at the time of introduction into the production environment, the correct answer rate is calculated by having humans confirm whether each output result of the model is correct or wrong, the model performance is inspected, and it is determined whether or not to perform the introduction.
In one aspect, an object is to provide a determination method, a determination program, and an information processing device that can automatically inspect whether or not a trained machine learning model can be introduced into a production environment.
Hereinafter, embodiments of a determination method, a determination program, and an information processing device according to the present disclosure will be described in detail with reference to the drawings. Note that, the embodiments do not limit the present disclosure. Furthermore, each of the embodiments may be appropriately combined within a range without inconsistency.
For example, the machine learning model is an image classifier that is trained using teacher data in which the explanatory variable is image data and the objective variable is a clothing name, and that outputs a determination result such as "shirt" when image data is input as the input data at the time of operation. That is, for example, the machine learning model is an example of an image classifier that classifies high-dimensional data or performs multi-class classification.
Here, because a machine learning model trained through machine learning, deep learning, or the like is trained based on teacher data in which training data and labels are combined, the machine learning model functions only within the range covered by the teacher data. For example, when teacher data imaged under the development environment is used, the model is trained in a state where feature amounts peculiar to the development environment are included (a state where the distribution of the input data differs from that under the production environment). Therefore, when the machine learning model trained under the development environment is introduced into a production environment different from the development environment, the distribution of the input data differs, so that the accuracy may deteriorate under the production environment and a performance comparable to that under the development environment may not be exhibited.
As illustrated in the corresponding drawing, the distribution of the input data obtained under the production environment deviates from the distribution of the teacher data used under the development environment, and input data that the machine learning model cannot correctly determine increases.
In this way, the distribution of the input data may change from that at the time of training when the model is introduced from the development environment into the production environment; as a result, the correct answer rate of the machine learning model decreases and accuracy deterioration of the machine learning model occurs.
Therefore, as illustrated in the corresponding drawing, the introduction determination device 10 according to the first embodiment uses an inspector model having a model applicability domain narrower than that of the machine learning model to inspect, before the introduction, whether accuracy deterioration would occur under the production environment.
Here, the inspector model will be described.
This is because, as the model applicability domain becomes narrower, a slight change in the input data changes the output more sensitively. Therefore, by making the model applicability domain of the inspector model narrower than that of the machine learning model to be monitored, the output value of the inspector model fluctuates due to a small change in the input data, and a change in the data tendency can be measured according to the matching rate with the output value of the machine learning model.
Specifically, for example, as illustrated in the corresponding drawing, when the input data of the introduction destination (production environment) is within the range of the model applicability domain of the inspector model, both the machine learning model and the inspector model determine the corresponding input data as the class 0, and the output values match.
On the other hand, when the input data of the introduction destination (production environment) is outside the range of the model applicability domain of the inspector model, the machine learning model determines the corresponding input data as the class 0, but the inspector model does not necessarily determine it as the class 0 because the input data is outside the model applicability range of each class. That is, for example, because the output values do not necessarily match, the matching rate decreases.
In this way, the introduction determination device 10 according to the first embodiment inputs the input data under the production environment into each of the machine learning model, for which development is in progress or has been completed, and the inspector model, which is trained to have a model applicability domain narrower than that of the machine learning model, and acquires the output results. Then, according to the matching rate of the output results, the introduction determination device 10 can grasp in advance the change in accuracy that would occur when the machine learning model is introduced into the production environment.
The communication unit 11 is a processing unit that controls communication with another device, and is, for example, a communication interface or the like. For example, the communication unit 11 receives various instructions from an administrator's terminal or the like. Furthermore, the communication unit 11 receives input data to be determined from various terminals.
The storage unit 12 is an example of a storage device that stores data and a program or the like executed by the control unit 20, and is, for example, a memory, a hard disk, or the like. The storage unit 12 stores a development environment data DB 13, an introduction destination data DB 14, a machine learning model 15, and an inspector model DB 16.
The development environment data DB 13 is a database that stores the teacher data under the development environment that was used to train the machine learning model and that is also used to train the inspector models.
The data ID stored here is an identifier for identifying the teacher data. The teacher data is training data used for training or verification data used for verification at the time of training.
An example of image data used for the teacher data will be described.
The introduction destination data DB 14 is a database that stores data acquired or collected in the introduction destination (production environment) that is a destination where the machine learning model 15 is introduced. Specifically, for example, the introduction destination data DB 14 stores image data that is assumed to be input to the machine learning model or image data to be image-classified.
The data ID stored here is an identifier for identifying the input data. The input data is image data to be classified that is assumed to be determined (predicted) by the machine learning model 15.
The machine learning model 15 is a trained machine learning model and is the model to be evaluated by the introduction determination device 10. Note that the machine learning model 15 itself, such as a neural network or a support vector machine to which trained parameters are set, can be stored, or only the trained parameters or the like from which the machine learning model 15 can be constructed may be stored.
The inspector model DB 16 is a database that stores information regarding at least one inspector model used to detect accuracy deterioration. For example, the inspector model DB 16 stores the parameters used to construct each of five inspector models, that is, the various parameters of each DNN generated (optimized) through machine learning by the control unit 20 to be described later. Note that the inspector model DB 16 can store the trained parameters, or can store the inspector models (DNNs) to which the trained parameters are set.
The control unit 20 is a processing unit that controls the entire introduction determination device 10 and is, for example, a processor or the like. The control unit 20 includes an inspector model generation unit 21, a threshold setting unit 22, a deterioration detection unit 23, and an introduction determination unit 26. Note that, the inspector model generation unit 21, the threshold setting unit 22, the deterioration detection unit 23, and the introduction determination unit 26 are examples of an electronic circuit included in a processor, examples of a process executed by a processor, or the like.
The inspector model generation unit 21 is a processing unit that generates an inspector model, which is an example of a monitor or a detection model that detects accuracy deterioration of the machine learning model 15. Specifically, for example, the inspector model generation unit 21 generates a plurality of inspector models, whose model applicability ranges are different from each other, through deep learning using the teacher data stored in the development environment data DB 13, which was used to train the machine learning model 15. Then, the inspector model generation unit 21 stores, in the inspector model DB 16, the various parameters used to construct the respective inspector models (DNNs) having the different model applicability ranges obtained through the deep learning.
For example, the inspector model generation unit 21 generates the plurality of inspector models having different applicability ranges by controlling the number of pieces of training data.
As illustrated in the corresponding drawing, in general, the model applicability domain becomes narrower as the number of pieces of training data used for training decreases.
Therefore, the inspector model generation unit 21 generates the plurality of inspector models by keeping the number of times of training the same and changing the number of pieces of training data. For example, consider a case where five inspector models are generated in a state where the machine learning model 15 has been trained with 100 epochs on 1000 pieces of training data per class. In this case, the inspector model generation unit 21 sets the number of pieces of training data of an inspector model 1 to "500 pieces per class", that of an inspector model 2 to "400 pieces per class", that of an inspector model 3 to "300 pieces per class", that of an inspector model 4 to "200 pieces per class", and that of an inspector model 5 to "100 pieces per class", randomly selects the teacher data from the development environment data DB 13, and trains each inspector model with 100 epochs.
Thereafter, the inspector model generation unit 21 stores the various parameters of each of the trained inspector models 1 to 5 in the inspector model DB 16. In this way, the inspector model generation unit 21 can generate five inspector models whose model applicability ranges are narrower than that of the machine learning model 15 and different from each other.
Note that the inspector model generation unit 21 can train each inspector model using a method such as error back propagation, and can also adopt another method. For example, the inspector model generation unit 21 trains each inspector model (DNN) by updating the parameters of the DNN so as to reduce the error between the output result obtained by inputting the training data into the inspector model and the label of the input training data.
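One way to realize this generation is sketched below: inspectors with progressively narrower applicability domains are trained by shrinking the per-class training data while fixing the epoch count. The sketch assumes scikit-learn, uses a small MLP as a stand-in for the DNN, and all names are illustrative.

```python
# Sketch: inspector models with progressively narrower applicability domains,
# obtained by shrinking per-class training data at a fixed epoch count
# (assumes scikit-learn; an MLP stands in for the DNN; names illustrative).
import numpy as np
from sklearn.neural_network import MLPClassifier

def sample_per_class(x, y, n_per_class, rng):
    """Randomly pick n_per_class examples of every class from the teacher data."""
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_per_class, replace=False)
        for c in np.unique(y)
    ])
    return x[idx], y[idx]

def train_inspectors(train_x, train_y, sizes=(500, 400, 300, 200, 100), seed=0):
    rng = np.random.default_rng(seed)
    inspectors = []
    for n in sizes:  # fewer examples per class -> narrower applicability domain
        sub_x, sub_y = sample_per_class(train_x, train_y, n, rng)
        # max_iter plays the role of the fixed number of training epochs (100);
        # fit() updates the weights by error back propagation.
        inspector = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=100)
        inspector.fit(sub_x, sub_y)
        inspectors.append(inspector)
    return inspectors
```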
Returning to the description of the introduction determination device 10, the threshold setting unit 22 is a processing unit that sets the threshold used to detect accuracy deterioration. Specifically, for example, when the training of each inspector model is completed, the threshold setting unit 22 inputs the verification data in the teacher data under the development environment to each of the machine learning model 15 and the inspector models 1 to 5 and acquires each output result.
Thereafter, for the verification data, the threshold setting unit 22 calculates the matching rate between the classes output by the machine learning model 15 and each of the inspector models 1 to 5, that is, one matching rate for each of the five inspector models.
Then, the threshold setting unit 22 sets a threshold using each matching rate. For example, the threshold setting unit 22 displays each matching rate on a display or the like and accepts the setting of the threshold from a user. Furthermore, the threshold setting unit 22 can select and set any one of the average value, the maximum value, the minimum value, or the like of the matching rates according to the deterioration state that the user requests to detect.
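A minimal sketch of this threshold setting follows, under the assumption that both the machine learning model and the inspectors expose a scikit-learn-style predict(); names are illustrative.

```python
# Sketch of threshold setting from development-environment matching rates
# (assumes predict()-style models; names illustrative).
import numpy as np

def matching_rate(model, inspector, x):
    """Fraction of inputs for which the two models output the same class."""
    return float(np.mean(model.predict(x) == inspector.predict(x)))

def set_threshold(model, inspectors, verification_x, mode="min"):
    rates = [matching_rate(model, ins, verification_x) for ins in inspectors]
    # The average, maximum, or minimum of the verification matching rates may
    # be chosen according to the deterioration state the user wants to detect.
    return {"mean": np.mean, "max": np.max, "min": np.min}[mode](rates)
```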
Returning to the description of the introduction determination device 10, the deterioration detection unit 23 is a processing unit that detects accuracy deterioration of the machine learning model 15 using each inspector model, and includes a classification unit 24 and a monitoring unit 25.
The classification unit 24 is a processing unit that inputs the input data stored in the introduction destination data DB 14 to each of the machine learning model 15 and each inspector model and acquires each output result (classification result). For example, when the training of each inspector model is completed, the classification unit 24 acquires the parameters of each inspector model from the inspector model DB 16, constructs each inspector model, and also executes the machine learning model 15.
Then, the classification unit 24 inputs the input data of the introduction destination to the machine learning model 15 and acquires the output result, and also inputs the same input data into each of the five inspector models from the inspector model 1 (DNN 1) to the inspector model 5 (DNN 5) and acquires each output result. Thereafter, the classification unit 24 stores the input data of the introduction destination and each output result in the storage unit 12 in association with each other and outputs them to the monitoring unit 25.
The monitoring unit 25 is a processing unit that monitors accuracy deterioration of the machine learning model 15 using the output result of each inspector model. Specifically, for example, the monitoring unit 25 measures, for each class, a change in the distribution of the matching rate between the output of the machine learning model 15 and the output of the inspector model based on the processing result of the classification unit 24. For example, the monitoring unit 25 calculates the matching rate between the output result of the machine learning model 15 and the output result of each inspector model for each piece of input data, and detects accuracy deterioration of the machine learning model 15 when the matching rate decreases. Note that the monitoring unit 25 outputs the detection result to the introduction determination unit 26.
As illustrated in the corresponding drawing, at the start of the operation, the monitoring unit 25 acquires from both the machine learning model 15 to be monitored and the inspector model that six pieces of input data belong to the model applicability domain of the class 0, six pieces to that of the class 1, and eight pieces to that of the class 2.
That is, for example, the monitoring unit 25 calculates the matching rate of each class as 100% because the classification results of the machine learning model 15 and the inspector model match for each class. At this timing, each of the classification results matches.

As the time elapses, the monitoring unit 25 acquires from the machine learning model 15 to be monitored that six pieces of input data belong to the model applicability domain of the class 0, six pieces to that of the class 1, and eight pieces to that of the class 2. On the other hand, the monitoring unit 25 acquires from the inspector model that three pieces of input data belong to the model applicability domain of the class 0, six pieces to that of the class 1, and eight pieces to that of the class 2.

That is, for example, the monitoring unit 25 calculates the matching rate of the class 0 as 50% ((3/6)×100) and the matching rates of the classes 1 and 2 as 100%. In other words, for example, a change in the data distribution of the class 0 is detected. At this timing, the inspector model no longer classifies into the class 0 three of the six pieces of input data that the machine learning model 15 classifies into the class 0.

As the time further elapses, the monitoring unit 25 acquires from the machine learning model 15 to be monitored that three pieces of input data belong to the model applicability domain of the class 0, six pieces to that of the class 1, and eight pieces to that of the class 2. On the other hand, the monitoring unit 25 acquires from the inspector model that one piece of input data belongs to the model applicability domain of the class 0, six pieces to that of the class 1, and eight pieces to that of the class 2.

That is, for example, the monitoring unit 25 calculates the matching rate of the class 0 as 33% ((1/3)×100) and the matching rates of the classes 1 and 2 as 100%. In other words, for example, it is determined that the data distribution of the class 0 has changed. At this timing, the machine learning model 15 itself no longer classifies some of the input data that should belong to the class 0 into the class 0, and the inspector model classifies only one of those three pieces into the class 0.
In this way, the monitoring unit 25 calculates the matching rate obtained when the input data of the introduction destination (production environment) is input to each of the machine learning model 15, developed using the teacher data under the development environment, and each inspector model, generated using the same teacher data. Then, the monitoring unit 25 periodically calculates the matching rate and outputs it to the introduction determination unit 26.
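The per-class matching rate in the example above can be sketched as follows: among the inputs that the machine learning model assigns to a class, it is the fraction that the inspector assigns to the same class (integer class labels and illustrative names assumed).

```python
# Sketch of the per-class matching rate used by the monitoring unit
# (assumes integer class labels; names illustrative).
import numpy as np

def per_class_matching_rates(model_pred, inspector_pred):
    rates = {}
    for c in np.unique(model_pred):
        in_class = model_pred == c          # inputs the model puts in class c
        rates[int(c)] = float(np.mean(inspector_pred[in_class] == c))
    return rates

# With the numbers above: the model assigns six inputs to the class 0 while
# the inspector agrees on only three of them, giving (3/6)*100 = 50%.
```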
The introduction determination unit 26 is a processing unit that determines, based on the matching rates calculated by the monitoring unit 25, whether or not the machine learning model 15 can be introduced into the production environment. Specifically, for example, for each inspector model whose matching rate has been calculated for each class, the introduction determination unit 26 averages the matching rates over the classes to obtain the matching rate of that inspector model. Then, when equal to or more than a predetermined number of inspector models whose matching rates are less than the threshold exist, the introduction determination unit 26 determines that accuracy deterioration is predicted to occur if the machine learning model 15 is introduced into the production environment, determines that the machine learning model 15 is not to be introduced, and determines that the machine learning model 15 needs to be retrained.
The drawings referred to here illustrate examples of this introduction determination based on the matching rates. For example, when the matching rates of the respective inspector models are equal to or more than the threshold, the introduction determination unit 26 determines that accuracy deterioration is not predicted to occur even when the machine learning model 15 is introduced into the production environment, and determines that the introduction can be performed. Furthermore, when equal to or more than the predetermined number of inspector models have matching rates less than the threshold, the introduction determination unit 26 determines that the machine learning model 15 is not to be introduced and needs to be retrained.
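The determination logic just described can be sketched as follows; min_failing stands in for the "predetermined number" of inspectors and is an assumed parameter.

```python
# Sketch of the introduction determination: average each inspector's per-class
# matching rates, count inspectors below the threshold, and decide
# (min_failing is an assumed knob for the "predetermined number").
import numpy as np

def decide_introduction(per_inspector_class_rates, threshold, min_failing=3):
    # per_inspector_class_rates: one {class: rate} dict per inspector model.
    inspector_rates = [np.mean(list(r.values()))
                       for r in per_inspector_class_rates]
    failing = sum(rate < threshold for rate in inspector_rates)
    if failing >= min_failing:
        return "do not introduce: retrain the machine learning model"
    return "introduce into the production environment"
```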
Furthermore, the introduction determination unit 26 can acquire the matching rate of each inspector model for each class and determine a policy for retraining the machine learning model 15. For example, the introduction determination unit 26 compares the matching rates of the respective inspector models for each class and specifies a class, such as the class 2, for which the matching rates are less than the threshold.
In this case, the introduction determination unit 26 can output, on a display or the like, a message or the like that prompts retraining of the machine learning model 15 for the class 2. As a result, the user can generate new teacher data for the class 2 by adding noise or the like to the teacher data used for the training, and retrain the machine learning model 15.
Furthermore, the introduction determination unit 26 can automatically retrain the machine learning model 15 without relying on a notification to the user. For example, the introduction determination unit 26 generates new teacher data in which the output results of the inspector models whose matching rates are less than the threshold are set as the correct answer information for the class 1, and retrains the machine learning model 15.
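A sketch of this automatic path follows, under the simplifying assumption that retraining means refitting on the teacher data augmented with inspector-labeled introduction-destination data; the degraded_class filter and all names are illustrative, not the device's fixed behavior.

```python
# Sketch of automatic retraining with inspector outputs as correct answer
# information (assumes fit()/predict()-style models; names illustrative).
import numpy as np

def retrain_with_inspector_labels(model, inspector, intro_x, degraded_class,
                                  base_x, base_y):
    pseudo_y = inspector.predict(intro_x)
    keep = pseudo_y == degraded_class      # introduction data the inspector
    new_x = np.concatenate([base_x, intro_x[keep]])   # labels as that class
    new_y = np.concatenate([base_y, pseudo_y[keep]])
    model.fit(new_x, new_y)                # refit on the augmented teacher data
    return model
```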
Note that the timing of the introduction determination can be arbitrarily set. For example, the introduction determination can be performed at the timing when the matching rate is calculated by the deterioration detection unit 23, or after the calculation of the matching rates for equal to or more than a predetermined number of pieces of input data of the introduction destination is completed.
Subsequently, the threshold setting unit 22 calculates the matching rate of the output results obtained by inputting the verification data in the teacher data under the development environment to the machine learning model 15 and each inspector model (S104), and sets a threshold based on the matching rates (S105).
Thereafter, the deterioration detection unit 23 inputs the input data of the introduction destination to the machine learning model 15 and acquires an output result (S106) and inputs the input data of the introduction destination to each inspector model and acquires an output result (S107).
Then, the deterioration detection unit 23 accumulates the comparison between the output results, for example, the distribution over the model applicability domains in the feature amount space (S108), and repeats S106 and the subsequent processing until the number of accumulations reaches a prescribed number (S109: No).
Thereafter, when the number of accumulations reaches the prescribed number (S109: Yes), the deterioration detection unit 23 calculates the matching rate between each inspector model and the machine learning model 15 for each class (S110). Then, the introduction determination unit 26 determines whether or not the machine learning model 15 can be introduced into the production environment based on the matching rates and outputs a determination result for the determined introduction destination (S111).
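Put together, the flow from S104 to S111 can be sketched with the helper sketches above (set_threshold, per_class_matching_rates, decide_introduction); prescribed stands in for the prescribed number of accumulations, and all names are illustrative.

```python
# End-to-end sketch of S104-S111, composed from the helper sketches above.
def introduction_check(model, inspectors, verification_x, intro_x,
                       prescribed=1000):
    threshold = set_threshold(model, inspectors, verification_x)   # S104-S105
    batch = intro_x[:prescribed]                                   # S106-S109
    model_pred = model.predict(batch)
    per_inspector = [
        per_class_matching_rates(model_pred, ins.predict(batch))   # S110
        for ins in inspectors
    ]
    return decide_introduction(per_inspector, threshold)           # S111
```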
As described above, the introduction determination device 10 prepares a plurality of inspector models that solve a problem similar to that of the machine learning model to be inspected, and calculates the matching rate of the outputs for each class or each inspector model. Then, the introduction determination device 10 inspects the performance decrease of the machine learning model from the difference between the distribution of the matching rates under the development environment and that under the production environment and determines whether or not the introduction can be performed. As a result, because the introduction determination device 10 can automatically inspect the model performance decrease in advance before the introduction and no manpower is needed, the cost at the time when the machine learning model 15 is introduced into the production environment can be reduced.
As illustrated in the corresponding drawing, because the machine learning model 15 has a wide model applicability domain, when image data of a dog having many green components is input as in the A introduction destination, the machine learning model 15 erroneously determines that the image data is the cat class, and when image data of a cat including an abnormally large amount of white is input as in the B introduction destination, the machine learning model 15 erroneously determines that the image data is not the cat class.
On the other hand, the inspector model according to the first embodiment has a model applicability domain narrower than that of the machine learning model 15. Therefore, even when the image data of a dog having many green components is input as in the A introduction destination, the inspector model can determine that the image data is not the cat class. Moreover, because the inspector model has accurately learned the feature amount of a cat, even for the image data of a cat including an abnormally large amount of white as in the B introduction destination, the inspector model can determine that the image data is the cat class.
As a result, when the input data of the A introduction destination is used, the matching rate between the output result of the machine learning model 15 and the output result of the inspector model decreases. Similarly, when the input data of the B introduction destination is used, the matching rate between the output results also decreases. Therefore, the introduction determination device 10 can determine that the introduction into the A introduction destination and the introduction into the B introduction destination are not appropriate.
Furthermore, based on these results, the introduction determination device 10 can retrain the machine learning model 15 using the image data of a dog having many green components (label: dog) or the image data of a cat including an abnormally large amount of white (label: cat). Furthermore, after the retraining, the user can introduce the machine learning model 15 into the A introduction destination or the B introduction destination.
Incidentally, in the first embodiment, an example has been described where the machine learning model 15 is evaluated using the data under the production environment. However, the present embodiment is not limited to this. For example, a versatile machine learning model can also be developed using data of a plurality of different customers.
For example, due to security and contract constraints, it is assumed that on-site data acquired from one customer is difficult to use as the teacher data of a machine learning model for another customer, and that the machine learning model is forced to be trained using the teacher data prepared for each customer. Therefore, it is often difficult to bring the on-site data of the respective customers together and use the data as the teacher data for developing a versatile machine learning model.
Therefore, when a versatile machine learning model to be introduced into different environments (different customers) is developed under a situation where the existing data of the various customers cannot be used as its teacher data, the introduction determination device 10 according to the second embodiment inspects which input data is suitable for developing the versatile machine learning model and generates teacher data. Note that the processing described here can be executed independently of each processing described in the first embodiment.
First, the introduction determination device 10 generates an inspector model for each customer, for example, an inspector model A, an inspector model B, and an inspector model C, using the teacher data prepared for the respective customers (refer to (1) in the corresponding drawing). Next, the introduction determination device 10 inputs each piece of input data collected from the Internet or the like into each of the developing model (machine learning model 15) and each inspector model and calculates the matching rates (refer to (2) in the corresponding drawing).
For example, the deterioration detection unit 23 inputs input data X to the inspector model A, the inspector model B, the inspector model C, and the developing model, and calculates the matching rate between the inspector model A and the developing model (0.6), the matching rate between the inspector model B and the developing model (0.2), and the matching rate between the inspector model C and the developing model (0.9).

Furthermore, the deterioration detection unit 23 inputs input data Y to the inspector model A, the inspector model B, the inspector model C, and the developing model, and calculates the matching rate between the inspector model A and the developing model (0.1), the matching rate between the inspector model B and the developing model (0.3), and the matching rate between the inspector model C and the developing model (0.2).
Then, the introduction determination device 10 adds, to the teacher data, input data for which the matching rate between at least one inspector model and the developing model is equal to or more than a threshold (refer to (3) in the corresponding drawing).
Thereafter, by retraining the developing model using all the pieces of added teacher data, the introduction determination device 10 can generate a more versatile model. In the above example, the introduction determination unit 26 adds the input data X among the input data collected from the Internet to the teacher data group under the development environment and retrains the developing model.
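A sketch of this selection step follows, reusing matching_rate from the earlier sketch; the candidates are batches of collected input data, and the threshold value is an assumption.

```python
# Sketch of the second-embodiment selection: keep collected input data as new
# teacher data when at least one customer-specific inspector agrees with the
# developing model at or above the threshold (threshold value assumed).
def select_teacher_data(developing_model, inspectors, candidates, threshold=0.5):
    selected = []
    for x in candidates:  # each x: one batch of collected input data
        rates = [matching_rate(developing_model, ins, x) for ins in inspectors]
        if max(rates) >= threshold:  # e.g. X with rates 0.6, 0.2, 0.9 is kept
            selected.append(x)
    return selected
```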
According to the processing described above, because it is possible to develop a versatile machine learning model using only teacher data owned by the own company, it is not necessary to newly develop a machine learning model for a new customer, and cost can be reduced.
Incidentally, while the embodiments of the present disclosure have been described above, the present disclosure may be carried out in a variety of different modes in addition to the embodiments described above.
In the embodiments described above, the production environment different from the development environment (training environment) has been described as an example. However, examples of the environment also include a model usage scene, the installation place of a camera or a sensor that generates the teacher data, a system environment to which the model is applied, and the like.
In the embodiments described above, an example has been described where, when it is determined that the machine learning model 15 needs to be retrained, the machine learning model 15 is retrained using retraining data in which the determination result of the inspector model for the input data of the introduction destination is used as the correct answer information. For example, when the determination result of the inspector model for input data P of the introduction destination is "label P" and the determination result of the machine learning model is "label Q", the machine learning model 15 is retrained using retraining data in which the input data is the explanatory variable and the label P is the objective variable. However, the present embodiment is not limited to this.
For example, it is also possible to collect data under the production environment that is the introduction destination and use the data as the retraining data. For example, the machine learning model 15 can be retrained using retraining data in which each piece of input data imaged by an actual camera under the production environment is the explanatory variable and the correct answer information (label) of each piece of input data is the objective variable.
Furthermore, in the second embodiment, a machine learning model in the middle of development has been described as an example. However, the second embodiment can also be applied to a trained machine learning model, and in that case, the machine learning model is retrained. Furthermore, the input data used to determine the matching rates in the second embodiment may also be the data of the customers A, B, and C. In this case, data effective for general-purpose training is extracted from among the data of each of the customers A, B, and C.
Furthermore, the data examples, numerical values, thresholds, feature amount spaces, numbers of labels, numbers of inspector models, specific examples, and the like used in the embodiments described above are merely examples and can be arbitrarily changed. Furthermore, the input data, the training method, and the like are also merely examples and can be arbitrarily changed. Moreover, various methods such as a neural network can be adopted as the learning model.
In the first embodiment, an example has been described where the plurality of inspector models having different model applicability ranges is generated by reducing the number of pieces of teacher data. However, the present embodiment is not limited to this. For example, the plurality of inspector models having different model applicability ranges can also be generated by reducing the number of times of training (the number of epochs). Furthermore, the plurality of inspector models having different model applicability ranges can also be generated by reducing the number of pieces of training data included in the teacher data, instead of the number of pieces of teacher data.
For example, in the embodiments described above, an example has been described in which the matching rate of the input data belonging to the model applicability domain of each class is obtained. However, the embodiment is not limited to this. For example, accuracy deterioration can also be detected according to the overall matching rate between the output result of the machine learning model 15 and the output result of the inspector model, without dividing the matching rate by class.
Pieces of information including a processing procedure, a control procedure, a specific name, various types of data, and parameters described above or illustrated in the drawings may be optionally changed unless otherwise specified.
Furthermore, each component of each device illustrated in the drawings is functionally conceptual and does not necessarily have to be physically configured as illustrated in the drawings. In other words, specific forms of the distribution and integration of each device are not limited to those illustrated in the drawings. That is, all or a part of the devices may be configured by being functionally or physically distributed or integrated in optional units depending on various types of loads, usage situations, or the like. For example, a device that executes the machine learning model 15 and classifies the input data and a device that detects accuracy deterioration can be implemented as different housings.
Moreover, all or any part of individual processing functions performed by each device may be implemented by a central processing unit (CPU) and a program analyzed and executed by the CPU or may be implemented as hardware by wired logic.
The communication device 10a is a network interface card or the like and communicates with another device. The HDD 10b stores a program and data for operating each of the functions described above.
The processor 10d reads, from the HDD 10b or the like, a program that executes processing similar to the processing of each processing unit described above, and loads the program into a memory, thereby operating a process that executes each of the functions described above.
As described above, the introduction determination device 10 operates as an information processing device that executes the introduction determination method by reading and executing the programs. Furthermore, the introduction determination device 10 may also implement functions similar to those of the embodiments described above by reading the program described above from a recording medium with a medium reading device and executing the read program. Note that the program referred to here is not limited to being executed by the introduction determination device 10. For example, the embodiments may be similarly applied to a case where another computer or server executes the program, or a case where the computer and the server cooperatively execute the program.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This application is a continuation application of International Application PCT/JP2019/041793 filed on Oct. 24, 2019 and designated the U.S., the entire contents of which are incorporated herein by reference.