The present invention relates to a storage medium, a model generation method, and an information processing apparatus.
With the spread of artificial intelligence (AI) technology, there is an increasing demand for accountable machine learning models, such as questioning whether the determination of a black box model is trustworthy and seeking a basis of the determination that may be interpreted by humans. In view of the above, a white box model such as a rule list, a decision tree, or a linear model is sometimes used from the start, but simply using a white box model does not necessarily result in a model that may be interpreted by humans.
Accordingly, in recent years, an interactive approach that repeats model generation and feedback to humans has been used to generate a model that is convincing to humans and has high accuracy. For example, a task of “predicting a model output for a certain input” is displayed to a user, and interpretability is evaluated on the basis of the reaction time. Then, according to the evaluation, parameters for optimizing the model are changed to update the model. By repeating such a process, a model that is convincing to humans and has high accuracy has been generated.
According to an aspect of the embodiments, a non-transitory computer-readable storage medium stores a model generation program that causes at least one computer to execute a process, the process including: acquiring, under a first assumption that each of the individual data items included in a training data set is easy for a user to interpret, a first value for each of the individual data items by optimizing, using the training data set, an objective function that has a loss weight related to the ease of interpretation of the data item, for each first state in which that data item violates the first assumption; acquiring, under a second assumption that each of the individual data items is not easy for the user to interpret, a second value for each of the individual data items by optimizing the objective function for each second state in which that data item violates the second assumption; selecting a specific data item from the individual data items based on the first value and the second value of each of the individual data items; and generating a linear model using user evaluation for the specific data item.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
The technique described above is intended for models that allow humans to predict the output by following branches, such as the decision tree and the rule list, and it is difficult to apply the technique to the linear model. For example, in a case where 100 data items appear in the model, it is burdensome and unrealistic for the user to read all 100 data items and estimate a predicted value of the model.
Furthermore, since the interpretability of the linear model is determined by ease of interpretation of the data items presented as explanation of the output, it is not possible to evaluate the interpretability from the length of the response time to the task described above.
In one aspect, an object is to provide a model generation program, a model generation method, and an information processing apparatus capable of improving the ease of interpretation of a model.
According to one embodiment, it becomes possible to improve ease of interpretation of a model.
Hereinafter, embodiments of a model generation program, a model generation method, and an information processing apparatus according to the present invention will be described in detail with reference to the drawings. Note that the embodiments do not limit the present invention. Furthermore, the individual embodiments may be appropriately combined within a range without inconsistency.
[Description of Information Processing Apparatus]
Here, a classification model (training model) based on a regression equation (see equation (2)) obtained by minimizing a loss function expressed by an equation (1) may be considered as an example of the linear model. Note that the loss function is an exemplary objective function including training data, a classification error, and a weight penalty, and the regression equation indicates an example assuming that there are d data items. The regression equation is a model that classifies an input as a positive example when m(x)>0 and as a negative example otherwise.
In general, in the trained classification model, a data item that matches the input data and has a weight other than “0” is presented to the user as an explanation. For example, when the classification model is m(x)=7x1−2x3−6x5 and the input is x=(0, 1, 1, 0, 1), the predicted value m(x) of the classification model is “−8”. At this time, the input is classified as a negative example due to x3 and x5, and since x5 makes the largest negative contribution, “x5” may be presented to the user as particularly important. In this manner, as the training progresses by the interactive approach, the data items with a weight of “0” increase due to adjustment of the penalty in the loss function so that the explanation is simplified, but the explanation simplicity and the classification accuracy are in a trade-off relationship.
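For reference, the explanation step in this example can be illustrated by the following minimal sketch (hypothetical Python code; the variable names are chosen for illustration and are not part of the embodiment). It computes the predicted value for the input above and ranks the matching data items with nonzero weights by the magnitude of their contribution.

```python
# Hypothetical sketch of the explanation step for the example above:
# m(x) = 7x1 - 2x3 - 6x5, input x = (0, 1, 1, 0, 1).
weights = {"x1": 7, "x3": -2, "x5": -6}              # nonzero weights of the classification model
x = {"x1": 0, "x2": 1, "x3": 1, "x4": 0, "x5": 1}    # input data

# Predicted value m(x): sum of weight * value over the data items in the model.
m_x = sum(w * x[item] for item, w in weights.items())
print(m_x)  # -8 -> classified as a negative example since m(x) <= 0

# Data items that match the input (value != 0) and have a nonzero weight,
# ordered by the magnitude of their contribution to the prediction.
contributions = {item: w * x[item] for item, w in weights.items() if x[item] != 0}
ranked = sorted(contributions, key=lambda item: abs(contributions[item]), reverse=True)
print(ranked)  # ['x5', 'x3'] -> x5 is presented as particularly important
```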
In view of the above, the information processing apparatus 10 according to the first embodiment performs optimization under a formulation that assumes the ease of interpretation of each data item, and gives the user a simple task of “evaluating one data item” to obtain the actual ease of interpretation. Then, the information processing apparatus 10 manages the upper bound and the lower bound of the optimum value, and efficiently determines the data item to be evaluated by the user on the basis of these bounds.
Specifically, the information processing apparatus 10 obtains the classification model trained using a training data set including each data item. Then, the information processing apparatus 10 calculates a first value obtained by optimizing, using the training data set, the loss function having the ease of interpretation of the data item as a loss weight on a first assumption in which each of the data items included in the training data set is assumed to be easy to interpret. Similarly, the information processing apparatus 10 calculates a second value obtained by optimizing the loss function using the training data set on a second assumption in which each of the data items is assumed to be difficult to interpret. Then, the information processing apparatus 10 selects a specific data item from the individual data items on the basis of a change in the first value and the second value for each of the data items, and executes retraining of the classification model using the user evaluation for the specific data item.
For example, as illustrated in
That is, when recommending a data item to the user on the basis of a training history, the information processing apparatus 10 simplifies the task by reducing the number of data items and repeats the user evaluation and the retraining based on the evaluation, thereby implementing model generation in consideration of the ease of interpretation of the data items. In this manner, the information processing apparatus 10 is enabled to improve the ease of interpretation of the model. Note that “data items that are easy to interpret” used in the present embodiment is synonymous with “data items that are likely to appear in the model”.
[Functional Configuration]
The communication unit 11 is a processing unit that controls communication with another device, and is implemented by, for example, a communication interface. For example, the communication unit 11 receives, from an administrator terminal or the like, the training data set and various instructions such as a processing start and the like, and transmits the trained classification model to the administrator terminal.
The display unit 12 is a processing unit that outputs various types of information generated by the control unit 20, and is implemented by, for example, a display, a touch panel, or the like.
The storage unit 13 is an exemplary storage device that stores various data, programs to be executed by the control unit 20, and the like, and is implemented by, for example, a memory or a hard disk. The storage unit 13 stores a training data set 14 and a classification model 15.
The training data set 14 is training data used for training the classification model 15.
Specifically, as illustrated in
The classification model 15 is a trained model trained using the training data set 14. For example, the classification model 15 is a linear model m(x) expressed by an equation (3), and an input is classified as a “positive example” when the predicted value m(x) for the input is larger than zero and as a “negative example” when the predicted value m(x) is equal to or less than zero. Note that the classification model 15 is trained by a training unit 21 to be described later.
[Equation 3]
m(x)=x1−2x2−x5+2x8 Equation (3)
The control unit 20 is a processing unit that takes overall control of the information processing apparatus 10, and is implemented by, for example, a processor or the like. The control unit 20 includes a training unit 21, an interaction processing unit 22, and an output unit 26. Note that the training unit 21, the interaction processing unit 22, and the output unit 26 may be implemented as an electronic circuit, such as a processor or the like, or may be implemented as a process to be executed by a processor.
The training unit 21 is a processing unit that executes training of the classification model 15. Specifically, the training unit 21 trains the classification model 15 using the training data set 14, and stores the trained classification model 15 in the storage unit 13 upon completion of the training.
Here, the classification model and the loss function to be used for the training will be described. A loss function L expressed by an equation (4) is defined as the sum of the classification error and the weight penalty. Here, X represents the explanatory variables of the training data, and y represents the objective variable of the training data. Furthermore, ρ represents a preset constant, and wi represents a weight whose true value is found by imposing a task on humans. Note that wi=w− is set when the data item i is easy to interpret while wi=w+ is set when the data item i is difficult to interpret, and w− and w+ are input parameters given in advance. In the first embodiment, it is assumed that w−=1.0 and w+=1.5.
[Equation 4]
Furthermore, a matrix of six rows and one column having the label of each piece of data of the training data set 14 as a row is assigned to “y” of the loss function L. For example, “label=positive example” of the data a is set in the first row of y, “label=positive example” of the data b is set in the second row, “label=positive example” of the data c is set in the third row, “label=negative example” of the data d is set in the fourth row, “label=negative example” of the data e is set in the fifth row, and “label=negative example” of the data f is set in the sixth row. In the calculation, the positive example is converted to “1” and the negative example is converted to “0”.
Furthermore, wi is a value set for each data item, and is defined by the ease of interpretation of each data item. For example, w1 is set for the data item x1, w2 is set for the data item x2, w3 is set for the data item x3, w4 is set for the data item x4, w5 is set for the data item x5, w6 is set for the data item x6, w7 is set for the data item x7, and w8 is set for the data item x8, and the optimization (minimization) of the loss function is calculated. Note that an optional value is set for wi at the time of training by the training unit 21. For example, it is possible to set “1” for all wi, or to set a random value for each wi.
Then, the training unit 21 executes optimization of the loss function L in which the values are set for the individual variables as described above, and generates the classification model m(x) expressed by an equation (5) using βi obtained by the optimization. In other words, the training unit 21 generates a classification model based on the regression equation obtained by minimizing the loss function L, and stores it in the storage unit 13 as the classification model 15. Note that, while the equation (5) indicates an example in which the number of data items is d, d=8 in the first embodiment.
[Equation 5]
m(x) = β1x1 + β2x2 + ... + βdxd  Equation (5)
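Since the body of the equation (4) is not reproduced above, the following sketch assumes one common form that is consistent with the description, namely a squared classification error plus the weight penalty ρΣi wi|βi|, and minimizes it by proximal gradient descent. The function name, the hyperparameters, and the example data are illustrative assumptions, not the exact formulation of the embodiment.

```python
import numpy as np

def weighted_l1_fit(X, y, w, rho=0.1, n_iter=2000):
    """Minimize 0.5 * ||y - X @ beta||^2 + rho * sum_i w[i] * |beta[i]|
    by proximal gradient descent (ISTA).  This is only one plausible form of
    the loss function L; the exact equation (4) is not reproduced in the text."""
    n_samples, n_items = X.shape
    beta = np.zeros(n_items)
    # Step size from the Lipschitz constant of the squared-error term.
    step = 1.0 / (np.linalg.norm(X, ord=2) ** 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)        # gradient of the classification error term
        z = beta - step * grad             # gradient step
        thresh = step * rho * w            # per-item threshold: easy items (w = 1.0)
        beta = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # penalized less than difficult ones (w = 1.5)
    return beta

# Example with d = 8 data items and 6 pieces of training data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(6, 8)).astype(float)   # data items x1 ... x8
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])        # positive example = 1, negative example = 0
w = np.full(8, 1.0)                                  # e.g. all data items assumed easy to interpret
beta = weighted_l1_fit(X, y, w)
print(beta)                                          # coefficients beta_1 ... beta_d of equation (5)
```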
The interaction processing unit 22 is a processing unit that includes a recommendation unit 23, a retraining unit 24, and a screen display unit 25, and executes acquisition of user evaluation for data items by the interactive approach with the user and retraining of the classification model 15 in consideration of the user evaluation.
Specifically, the interaction processing unit 22 sets the first assumption (hereinafter referred to as the “lower bound”) in which all data items on which no task is imposed are assumed to be “easy to interpret” and the second assumption (hereinafter referred to as the “upper bound”) in which all data items on which no task is imposed are assumed to be “difficult to interpret”, and manages the optimum solution of the loss function of the equation (4) for each of the upper bound and the lower bound.
Then, the interaction processing unit 22 considers a new lower bound and a new upper bound for each of the cases where a data item is evaluated as “easy to interpret” and “difficult to interpret”, recommends the data item for which the resulting difference between the optimum value based on the new lower bound and the optimum value based on the new upper bound is small, and feeds back the user evaluation. As a result, the interaction processing unit 22 achieves the optimization of the classification model 15 with a small number of tasks by effectively imposing tasks.
The recommendation unit 23 is a processing unit that searches for one data item to be recommended to the user from multiple data items included in each training data of the training data set and recommends the searched data item to the user.
Specifically, the recommendation unit 23 calculates a first optimum value (first value) obtained by optimizing the loss function of the equation (4) using the training data set in the lower bound where each data item is assumed to be easy to interpret, and a second optimum value (second value) obtained by optimizing the loss function of the equation (4) using the training data set in the upper bound where each data item is assumed to be difficult to interpret. Then, the recommendation unit 23 selects a specific data item as a recommendation target on the basis of a change in the first optimum value and the second optimum value when each data item violates the lower bound and the upper bound.
Here, the recommendation of the data item will be described in detail.
Then, the recommendation unit 23 calculates each optimum value by generating a contradiction (state that violates the assumption) in each data item at the time of calculating the optimum value (minimization) of the loss function for each of the lower bound and the upper bound.
Specifically, for the lower bounds, the recommendation unit 23 calculates each of the optimum solution when a contradiction is generated only in the lower bound of the data item x1, the optimum solution when a contradiction is generated only in the lower bound of the data item x2, the optimum solution when a contradiction is generated only in the lower bound of the data item x3, the optimum solution when a contradiction is generated only in the lower bound of the data item x4, the optimum solution when a contradiction is generated only in the lower bound of the data item x5, the optimum solution when a contradiction is generated only in the lower bound of the data item x6, the optimum solution when a contradiction is generated only in the lower bound of the data item x7, and the optimum solution when a contradiction is generated only in the lower bound of the data item x8.
Similarly, for the upper bounds, the recommendation unit 23 calculates each of the optimum solution when a contradiction is generated only in the upper bound of the data item x1, the optimum solution when a contradiction is generated only in the upper bound of the data item x2, the optimum solution when a contradiction is generated only in the upper bound of the data item x3, the optimum solution when a contradiction is generated only in the upper bound of the data item x4, the optimum solution when a contradiction is generated only in the upper bound of the data item x5, the optimum solution when a contradiction is generated only in the upper bound of the data item x6, the optimum solution when a contradiction is generated only in the upper bound of the data item x7, and the optimum solution when a contradiction is generated only in the upper bound of the data item x8.
In this manner, the recommendation unit 23 calculates 16 optimum solutions (8 sets of upper bound and lower bound optimum solutions). Then, as illustrated in
That is, the recommendation unit 23 searches for a data item having a small influence in a state contrary to the assumption, determines that the data item is likely to appear in the model, and inquires of the user about the interpretability of the data item, thereby causing the user evaluation to be accurately fed back to the machine learning.
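The selection logic of the recommendation unit 23 can be outlined by the following hypothetical sketch. It assumes an optimum_value(X, y, w) helper that returns the minimized loss L for a given weight vector (for example, the objective value at the coefficients returned by the weighted_l1_fit sketch above), with W_EASY and W_DIFFICULT corresponding to w−=1.0 and w+=1.5.

```python
import numpy as np

W_EASY, W_DIFFICULT = 1.0, 1.5   # w- and w+ of the first embodiment

def recommend_item(X, y, evaluated, optimum_value):
    """Return the index of the data item to recommend to the user.

    evaluated     : dict {item index: W_EASY or W_DIFFICULT} of items already evaluated
    optimum_value : callable (X, y, w) -> minimized loss L for weight vector w
    Hypothetical sketch of the recommendation logic described above."""
    d = X.shape[1]
    best_item, best_diff = None, np.inf
    for i in range(d):
        if i in evaluated:
            continue
        # New lower bound: unevaluated items assumed easy, except item i, which violates it.
        w_lower = np.array([evaluated.get(j, W_EASY) for j in range(d)])
        w_lower[i] = W_DIFFICULT
        # New upper bound: unevaluated items assumed difficult, except item i, which violates it.
        w_upper = np.array([evaluated.get(j, W_DIFFICULT) for j in range(d)])
        w_upper[i] = W_EASY
        # Difference between the optimum value of the new upper bound and that of the
        # new lower bound; the data item with the smallest difference is recommended.
        diff = optimum_value(X, y, w_upper) - optimum_value(X, y, w_lower)
        if diff < best_diff:
            best_item, best_diff = i, diff
    return best_item, best_diff
```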
The retraining unit 24 is a processing unit that executes retraining of the classification model 15 in consideration of the user evaluation obtained by the recommendation unit 23. Specifically, the retraining unit 24 generates the classification model 15 based on the regression equation obtained by minimizing the loss function L of the equation (4) using the training data set 14, by a method similar to that of the training unit 21.
At this time, the retraining unit 24 reflects the user evaluation obtained by the recommendation unit 23 in “wi” to execute the minimization. For example, when the data item x3 is evaluated as “easy to interpret”, the retraining unit 24 calculates the minimization of the loss function in which “w3=1.0” is set and random values are set for “wi” of other data items. Furthermore, when the data item x3 is evaluated as “difficult to interpret”, the retraining unit 24 calculates the minimization of the loss function in which “w3=1.5” is set and random values are set for “wi” of other data items.
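A minimal sketch of how the user evaluation could be reflected in wi before retraining is given below. It assumes that the random values for unevaluated data items are drawn between w−=1.0 and w+=1.5 (the text does not specify the range) and delegates the minimization to a fit(X, y, w) helper such as the weighted_l1_fit sketch above; all names are illustrative.

```python
import numpy as np

def retrain(X, y, evaluated, fit, seed=None):
    """Rebuild the weight vector w from the user evaluations and retrain the model.

    evaluated : dict {item index: 1.0 ("easy to interpret") or 1.5 ("difficult to interpret")}
    fit       : callable (X, y, w) -> model coefficients, e.g. a weighted-L1 solver
    Unevaluated items receive a random w_i because their evaluation is still unknown."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w = rng.uniform(1.0, 1.5, size=d)      # assumed range for the random values
    for i, wi in evaluated.items():
        w[i] = wi                          # fixed values for the evaluated items
    return fit(X, y, w)

# Example: the data item x3 (index 2) was evaluated as "easy to interpret".
# beta = retrain(X, y, {2: 1.0}, fit=weighted_l1_fit)
```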
Then, the retraining unit 24 presents, to the user, the classification model 15 based on the regression equation obtained by minimizing the loss function in which the user evaluation is reflected in “wi”, and causes the user to evaluate whether or not the classification model 15 itself is easy to interpret.
Here, in a case where the classification model 15 itself is evaluated to be easy to interpret, the classification model 15 at that time is determined as the ultimately obtained classification model. On the other hand, in a case where the classification model 15 itself is evaluated to be difficult to interpret, the search and recommendation of the data item by the recommendation unit 23 and the retraining by the retraining unit 24 are re-executed.
The screen display unit 25 is a processing unit that generates an inquiry screen for receiving user evaluation and displays it to the user. For example, the screen display unit 25 generates an inquiry screen for inquiring whether the data item searched by the recommendation unit 23 is easy to interpret or difficult to interpret, and displays it to the user. Furthermore, the screen display unit 25 generates an inquiry screen for inquiring whether the classification model 15 generated by the retraining unit 24 is easy to interpret or difficult to interpret, and displays it to the user.
Note that the recommendation unit 23 and the retraining unit 24 receive user evaluation on the inquiry screen generated by the screen display unit 25. Furthermore, the screen display unit 25 may display the inquiry screen on the screen of the display unit 12 of the information processing apparatus 10, and may transmit it to a user terminal.
The output unit 26 is a processing unit that outputs the classification model 15 ultimately determined to be easy to interpret. For example, in a case where the classification model 15 displayed on the inquiry screen generated by the screen display unit 25 is determined to be “easy to interpret”, the output unit 26 stores the displayed classification model 15 in the storage unit 13, outputs it to the user terminal, or outputs it to any output destination.
Next, specific examples of the retraining of the classification model 15 in consideration of the user evaluation will be described with reference to
(First Loop)
Then, the interaction processing unit 22 calculates 16 optimum solutions (8 sets of upper bound and lower bound optimum solutions) by generating a state where each data item violates the assumption at the time of calculating the optimum value of the loss function for each of the lower bound and the upper bound, and calculates a difference between the optimum value of the upper bound and the optimum value of the lower bound (difference between new upper and lower bounds).
In this manner, the interaction processing unit 22 generates a new upper bound and a new lower bound when each data item violates the assumption, and calculates an optimum solution for each of them, thereby calculating 16 optimum solutions (8 sets of upper bound and lower bound optimum solutions). Then, assuming that the interaction processing unit 22 has calculated the individual differences between the new upper and lower bound optimum solutions of the data items “x1 to x8” as “10, 8, 11, 9, 10, 8, 7, and 10”, it determines the data item “x7” having the smallest difference as the recommendation target.
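For reference, the selection from these concrete differences amounts to the following purely illustrative snippet.

```python
# Differences between the new upper and lower bound optimum values for x1 to x8.
diffs = {"x1": 10, "x2": 8, "x3": 11, "x4": 9, "x5": 10, "x6": 8, "x7": 7, "x8": 10}
recommended = min(diffs, key=diffs.get)
print(recommended)  # 'x7' -- the data item with the smallest difference
```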
Specifically, the interaction processing unit 22 displays the current classification model 15 (m(x)) in the area 51 indicating the current model, and also displays a button for selecting whether or not to output the model. Furthermore, the interaction processing unit 22 displays the “data item” determined as the recommendation target in the area 52 for receiving the evaluation of the data item, and also displays a button or the like for selecting whether the data item is “easy to interpret” or “difficult to interpret”. Furthermore, the interaction processing unit 22 displays the training data set 14 in the area 53 for the data details.
Note that, in this specific example, it is assumed that the interaction processing unit 22 has obtained the evaluation of “easy to interpret” from the user with respect to the recommended data item “x7”. Furthermore, it is assumed that the interaction processing unit 22 has obtained the evaluation of “difficult to interpret” from the user with respect to the classification model “m(x)=x1−2x2−x5+2x8”.
(Second Loop)
That is, the interaction processing unit 22 reflects the user evaluation “easy to interpret” only in the data item “x7”, and sets random values for other data items as the evaluation is unknown, and then executes retraining of the classification model. Here, it is assumed that the classification model is generated as “m(x)=x1−2x2−x5+2x7” by the retraining.
Subsequently, the interaction processing unit 22 generates the inquiry screen 50 in which the retrained classification model “m(x)=x1−2x2−x5+2x7” is displayed in the area 51, and displays it to the user. Here, since the interaction processing unit 22 obtains the user evaluation “difficult to interpret” for the classification model “m(x)=x1−2x2−x5+2x7”, it searches for the data item to be recommended.
Specifically, as illustrated in
Note that, in this specific example, it is assumed that the interaction processing unit 22 has obtained the evaluation of “easy to interpret” from the user with respect to the recommended data item “x4”.
(Third Loop)
That is, the interaction processing unit 22 reflects the user evaluation “easy to interpret” only in the data item “x7” and in the data item “x4”, and sets random values for other data items as the evaluation is unknown, and then executes retraining of the classification model. Here, it is assumed that the classification model is generated as “m(x)=x1−2x4−x5+2x7” by the retraining.
Subsequently, the interaction processing unit 22 generates the inquiry screen 50 in which the retrained classification model “m(x)=x1−2x4−x5+2x7” is displayed in the area 51, and displays it to the user. Here, since the interaction processing unit 22 obtains the user evaluation “difficult to interpret” for the classification model “m(x)=x1−2x4−x5+2x7”, it searches for the data item to be recommended.
Specifically, as illustrated in
Note that, in this specific example, it is assumed that the interaction processing unit 22 has obtained the evaluation of “difficult to interpret” from the user with respect to the recommended data item “x5”.
(Fourth Loop)
Then, at the time of inputting the training data set 14 to the loss function L of the equation (4) and calculating the optimum solution of the loss function using a method similar to that described with reference to
That is, the interaction processing unit 22 reflects the user evaluation “easy to interpret” in the data item “x7” and in the data item “x4”, reflects the user evaluation “difficult to interpret” in the data item “x5”, and sets random values for other data items as the evaluation is unknown, and then executes retraining of the classification model. Here, it is assumed that the classification model is generated as “m(x)=x1−2x4−x5+2x7” by the retraining.
Subsequently, the interaction processing unit 22 generates the inquiry screen 50 in which the retrained classification model 15 “m(x)=x1−2x4−x5+2x7” is displayed in the area 51, and displays it to the user. Here, since the interaction processing unit 22 obtains the user evaluation “difficult to interpret” for the classification model “m(x)=x1−2x4−x5+2x7”, it searches for the data item to be recommended.
Specifically, as illustrated in
Note that, in this specific example, it is assumed that the interaction processing unit 22 has obtained the evaluation of “easy to interpret” from the user with respect to the recommended data item “x6”.
(Fifth Loop)
Then, at the time of inputting the training data set 14 to the loss function L of the equation (4) and calculating the optimum solution of the loss function using a method similar to that described with reference to
That is, the interaction processing unit 22 reflects the user evaluation “easy to interpret” in the data items “x7”, “x4”, and “x6”, reflects the user evaluation “difficult to interpret” in the data item “x5”, and sets random values for other data items as the evaluation is unknown, and then executes retraining of the classification model. Here, it is assumed that the classification model is generated as “m(x)=x1−2x4−x6+2x7” by the retraining.
Subsequently, the interaction processing unit 22 generates the inquiry screen 50 in which the retrained classification model “m(x)=x1−2x4−x6+2x7” is displayed in the area 51, and displays it to the user. Here, it is assumed that the interaction processing unit 22 has obtained the user evaluation “easy to interpret” with respect to the classification model “m(x)=x1−2x4−x6+2x7”.
Then, the interaction processing unit 22 determines that the linear model easy for the user to interpret has been generated to terminate the search and the retraining, and outputs the current classification model “m(x)=x1−2x4−x6+2x7” to the storage unit 13 as the classification model 15.
[Processing Flow]
Then, the interaction processing unit 22 calculates a difference between the optimum value of the upper bound and the optimum value of the lower bound in a case of violating the assumption for each data item of the training data set 14 (S103), and recommends the data item with the smallest difference to the user (S104).
Thereafter, the interaction processing unit 22 obtains the user evaluation for the recommended data item (S105), reflects the user evaluation on the recommended data item, and randomly assumes the ease of interpretation of unevaluated data items to retrain the model (S106).
Then, the interaction processing unit 22 presents the retrained model (S107), and if conditions of the user are satisfied (Yes in S108), it outputs the current model (S109). On the other hand, if the conditions of the user are not satisfied (No in S108), the interaction processing unit 22 repeats S103 and subsequent steps.
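The loop of S103 to S109 can be summarized by the following hypothetical sketch, which reuses the recommend_item and retrain sketches above and assumes two user-interaction callbacks, ask_item and ask_model; the initial training preceding S103 is omitted.

```python
def interactive_model_generation(X, y, fit, optimum_value, ask_item, ask_model):
    """Hypothetical sketch of steps S103 to S109.

    ask_item(i)     -> 1.0 ("easy to interpret") or 1.5 ("difficult to interpret")
    ask_model(beta) -> True when the user accepts the presented model
    fit / optimum_value : solver and objective value, e.g. the sketches shown earlier."""
    evaluated = {}
    while True:
        item, _ = recommend_item(X, y, evaluated, optimum_value)   # S103, S104
        evaluated[item] = ask_item(item)                           # S105
        beta = retrain(X, y, evaluated, fit)                       # S106
        if ask_model(beta):                                        # S107, S108
            return beta                                            # S109
```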
[Effects]
As described above, the information processing apparatus 10 is capable of imposing a simple task of “evaluating one data item” on humans to obtain the actual ease of interpretation. Furthermore, the information processing apparatus 10 is capable of generating a classification model based on optimization of the loss function while adjusting the appearance frequency of individual data items. As a result, the information processing apparatus 10 is enabled to generate a highly interpretable classification model with less burden on humans.
Incidentally, while the embodiment of the present invention has been described above, the present invention may be carried out in a variety of different modes in addition to the embodiment described above.
[Numerical Values, Etc.]
The exemplary numerical values, the loss function, the number of data items, the number of pieces of training data, and the like used in the embodiment described above are merely examples, and may be optionally changed. Furthermore, the loss function used to generate the classification model is not limited to the one expressed by the equation (4), and another objective function including a weight penalty that changes depending on whether a data item is “easy to interpret” or “difficult to interpret” may be adopted. Furthermore, the processing flow may also be appropriately changed within a range with no inconsistencies. Furthermore, the processing of the training unit 21 and the processing of the interaction processing unit 22 and the output unit 26 may be executed by separate devices.
[Models, Etc.]
While the example of reflecting the user evaluation in the model once trained and performing retraining has been described in the embodiment above, it is not limited to this, and it is also possible to reflect the user evaluation in the model before training by the method according to the embodiment described above and perform the training. Furthermore, the timing for terminating the generation (retraining) of the linear model is not limited to the user evaluation, and may be optionally set such as when execution is carried out a predetermined number of times. Furthermore, while the example of using the loss function as an exemplary objective function has been described in the embodiment above, it is not limited to this, and another objective function, such as a cost function, may be adopted.
[System]
Pieces of information including a processing procedure, a control procedure, a specific name, various types of data, and parameters described above or illustrated in the drawings may be optionally changed unless otherwise specified. Note that the training unit 21 is an exemplary acquisition unit, the recommendation unit 23 is an exemplary calculation unit and selection unit, and the retraining unit 24 is an exemplary generation unit.
Furthermore, each component of each device illustrated in the drawings is functionally conceptual, and is not necessarily physically configured as illustrated in the drawings. In other words, specific forms of distribution and integration of individual devices are not limited to those illustrated in the drawings. That is, all or a part thereof may be configured by being functionally or physically distributed or integrated in optional units depending on various types of loads, usage situations, or the like.
Moreover, all or an optional part of individual processing functions performed in each device may be implemented by a central processing unit (CPU) and a program analyzed and executed by the CPU, or may be implemented as hardware by wired logic.
[Hardware]
Next, an exemplary hardware configuration of the information processing apparatus 10 will be described.
The communication device 10a is a network interface card or the like, and communicates with another server. The HDD 10b stores programs and DBs for operating the functions illustrated in
The processor 10d reads, from the HDD 10b or the like, a program that executes processing similar to that of each processing unit illustrated in
In this manner, the information processing apparatus 10 reads and executes a program to operate as an information processing apparatus that executes a model generation method. Furthermore, the information processing apparatus 10 may implement functions similar to those of the embodiments described above by reading the program described above from a recording medium with a medium reading device and executing the read program. Note that the program referred to in the embodiments is not limited to being executed by the information processing apparatus 10. For example, the present invention may be similarly applied to a case where another computer or server executes the program, or a case where such a computer and server cooperatively execute the program.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This application is a continuation application of International Application PCT/JP2020/009534 filed on Mar. 5, 2020 and designated the U.S., the entire contents of which are incorporated herein by reference.