A technique of the present disclosure relates to a learning apparatus, an operation method of the learning apparatus, a non-transitory computer readable recording medium storing an operation program of the learning apparatus, and an operating apparatus.
A quality of a product is predicted using a machine learning model. In order to improve accuracy of prediction, JP2018-018354A proposes a machine learning model that learns, as learning input data, physical-property relevance data derived from physical-property data representing a physical property of a product.
In JP2018-018354A, as a product, foods and drinks such as coffee beans are exemplified. Further, in JP2018-018354A, as physical-property data, spectrum data such as near infrared (NIR) spectroscopic analysis data, Fourier transform infrared (FT-IR) spectroscopic analysis data, or nuclear magnetic resonance (NMR) spectroscopic analysis data, or image data obtained by imaging a product using a camera is exemplified. As physical-property relevance data, numerical values obtained from spectrum data, for example, a slope, a periodicity, an amplitude, a peak height, and a peak width of a waveform of a spectrum are exemplified. Further, as physical-property relevance data, image data itself obtained by imaging a product using a camera is exemplified.
The physical-property data may be data representing one type of physical property of a product by one parameter, such as a weight of the product, or may be data representing one type of physical property of a product by a plurality of parameters, such as the spectrum data or the image data. More specifically, in a case where the physical-property data is spectrum data, a spectrum is a physical property of the product, and, for example, a wave number and an intensity correspond to the plurality of parameters. In a case where the physical-property data is image data, a color is a physical property of the product, and a red pixel value, a green pixel value, and a blue pixel value correspond to the plurality of parameters. In a case where each parameter is regarded as a dimension, it can be said that the physical-property data represented by a plurality of parameters is multi-dimensional physical-property data. On the other hand, it can be said that the physical-property data representing one type of physical property of a product by one parameter is one-dimensional physical-property data. Hereinafter, physical-property data representing one type of physical property of a product by a plurality of parameters is referred to as multi-dimensional physical-property data. Further, physical-property relevance data derived from the multi-dimensional physical-property data is referred to as multi-dimensional physical-property relevance data.
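As a concrete, non-limiting illustration of this distinction, the following sketch represents one-dimensional and multi-dimensional physical-property data as simple arrays; all values are placeholders, not measurements.

```python
import numpy as np

# One-dimensional physical-property data: one physical property (weight)
# expressed by one parameter.
weight = 12.5  # illustrative value

# Multi-dimensional physical-property data: one physical property (a spectrum)
# expressed by a plurality of parameters (wave number and intensity).
spectrum_data = np.array([
    [4000.0, 0.12],  # [wave number, intensity] - illustrative values
    [3998.0, 0.15],
    [3996.0, 0.11],
])
```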
As described above, in JP2018-018354A, as the multi-dimensional physical-property relevance data which is derived from the spectrum data as the multi-dimensional physical-property data, a slope, a periodicity, an amplitude, a peak height, and a peak width of a spectrum waveform are exemplified. However, the numerical values exemplified in JP2018-018354A do not comprehensively cover the overall characteristics of the multi-dimensional physical-property data. As a result, it cannot be said that the numerical values accurately represent the physical property of the product. For this reason, in a case where learning is performed by inputting, as learning input data, the numerical values to the machine learning model, there is a problem in that the accuracy of prediction levels off at a relatively low level.
An object of the technique of the present disclosure is to provide a learning apparatus, an operation method of the learning apparatus, a non-transitory computer readable recording medium storing an operation program of the learning apparatus, and an operating apparatus capable of further improving accuracy of prediction of a quality of a product by a machine learning model in a case where learning is performed by inputting, as learning input data, multi-dimensional physical-property relevance data, which is derived from multi-dimensional physical-property data of the product, to the machine learning model.
In order to achieve the object, according to an aspect of the present disclosure, there is provided a learning apparatus including: a first processor that is configured to: acquire multi-dimensional physical-property data representing a physical property of a product; derive, from the multi-dimensional physical-property data, learning input data to be input to a machine learning model for predicting a quality of the product, the learning input data being multi-dimensional physical-property relevance data which is related to the multi-dimensional physical-property data and is derived by applying at least a part of an autoencoder to the multi-dimensional physical-property data; and input the learning input data to the machine learning model, perform learning, and output the machine learning model as a learned model to be provided for actual operation.
Preferably, the learning input data includes production condition data which is set in a production process of the product.
Preferably, the autoencoder is learned by inputting the multi-dimensional physical-property data of the product of which the quality is higher than a preset level, and the first processor inputs the multi-dimensional physical-property data to the autoencoder, causes the autoencoder to output output data, and derives the multi-dimensional physical-property relevance data based on difference data between the multi-dimensional physical-property data which is input to the autoencoder and the output data.
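The following is a minimal sketch of this preferable aspect, assuming a Keras-based autoencoder, spectra represented as fixed-length intensity vectors, and placeholder data; the layer sizes and names are illustrative assumptions, not the configuration of the present disclosure.

```python
import numpy as np
from tensorflow.keras import layers, models

n_points = 400  # assumed number of intensity samples per spectrum

# Autoencoder AE: trained only on the physical-property data of products whose
# quality is higher than a preset level, so it learns to reconstruct "good" data.
autoencoder = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_points,)),  # encoder network
    layers.Dense(16, activation="relu"),                           # feature data
    layers.Dense(64, activation="relu"),                           # decoder network
    layers.Dense(n_points, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")

good_spectra = np.random.rand(200, n_points).astype("float32")  # placeholder data
autoencoder.fit(good_spectra, good_spectra, epochs=20, verbose=0)

# Difference data: element-wise difference between the input to the autoencoder
# and its output; large differences flag deviations from high-quality products.
spectrum = np.random.rand(1, n_points).astype("float32")        # placeholder input
output_data = autoencoder.predict(spectrum, verbose=0)
difference_data = np.abs(spectrum - output_data)
```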
Preferably, the first processor inputs the multi-dimensional physical-property data to the autoencoder, outputs feature data from an encoder network of the autoencoder, and derives the multi-dimensional physical-property relevance data based on the feature data.
Preferably, the multi-dimensional physical-property data includes image data of a spectrum which is represented by spectrum data detected by performing spectroscopic analysis on the product.
Preferably, the first processor derives the multi-dimensional physical-property relevance data for each of a plurality of intervals obtained by dividing the spectrum data.
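A minimal sketch of this per-interval derivation follows, assuming 20 equal-width intervals and, as in the first embodiment described later, an average value and a sum per interval; the interval boundaries and item names are illustrative.

```python
import numpy as np

difference_data = np.abs(np.random.randn(400))  # placeholder per-wave-number differences
n_intervals = 20                                # assumed: intervals INT1 to INT20

relevance_data = {}
for i, chunk in enumerate(np.array_split(difference_data, n_intervals), start=1):
    relevance_data[f"INT{i}_average"] = float(chunk.mean())  # average of the differences
    relevance_data[f"INT{i}_sum"] = float(chunk.sum())       # sum of the differences
# 20 intervals x 2 statistics = 40 items of multi-dimensional
# physical-property relevance data
```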
Preferably, the multi-dimensional physical-property data includes image data obtained by imaging the product.
Preferably, the product is produced by using a flow synthesis method.
According to another aspect of the present disclosure, there is provided an operating apparatus including: a second processor that is configured to: acquire the learned model which is output from the first processor of the learning apparatus; acquire multi-dimensional physical-property relevance data for prediction which is data of a product of which a quality is unknown; input the multi-dimensional physical-property relevance data for prediction to the learned model to predict the quality; and control outputting of a prediction result of the quality by the learned model.
According to still another aspect of the present disclosure, there is provided an operation method of a learning apparatus, the method including: acquiring multi-dimensional physical-property data representing a physical property of a product; deriving learning input data to be input to a machine learning model for predicting a quality of the product from the multi-dimensional physical-property data and deriving, as the learning input data, multi-dimensional physical-property relevance data which is related to the multi-dimensional physical-property data by applying at least a part of an autoencoder to the multi-dimensional physical-property data; and inputting the learning input data to the machine learning model, performing learning, and outputting the machine learning model as a learned model to be provided for actual operation.
According to still another aspect of the present disclosure, there is provided a non-transitory computer readable recording medium storing an operation program of a learning apparatus, the program causing a computer to execute a process including: acquiring multi-dimensional physical-property data representing a physical property of a product; deriving learning input data to be input to a machine learning model for predicting a quality of the product from the multi-dimensional physical-property data and deriving, as the learning input data, multi-dimensional physical-property relevance data which is related to the multi-dimensional physical-property data by applying at least a part of an autoencoder to the multi-dimensional physical-property data; and inputting the learning input data to the machine learning model, performing learning, and outputting the machine learning model as a learned model to be provided for actual operation.
According to the technique of the present disclosure, it is possible to provide a learning apparatus, an operation method of the learning apparatus, a non-transitory computer readable recording medium storing an operation program of the learning apparatus, and an operating apparatus capable of further improving accuracy of prediction of a quality of a product by a machine learning model in a case where learning is performed by inputting, as learning input data, multi-dimensional physical-property relevance data, which is derived from multi-dimensional physical-property data of the product, to the machine learning model.
The machine learning system 2 includes a learning apparatus 10 and an operating apparatus 11, which are connected to each other via a network 12. The learning apparatus 10 and the operating apparatus 11 are also connected, via the network 12, to a flow reaction apparatus 13 that produces a product PR, a physical-property analysis apparatus 14 that outputs physical-property data PD of the product PR, and a quality evaluation apparatus 15 that outputs quality data QD of the product PR. The flow reaction apparatus 13 outputs, to the learning apparatus 10, production condition data PCD which is set in a production process of the product PR.
The learning apparatus 10 includes a machine learning model M. The machine learning model M is a model for predicting a quality of the product PR. In order to improve the accuracy of prediction of the machine learning model M, the learning apparatus 10 inputs the learning input data IDL, which includes the production condition data PCD and the relevance data PRD with the same ID, to the machine learning model M.
The quality data QD is data serving as an answer against which the learning output data ODL is matched. The higher the accuracy of prediction of the machine learning model M, the smaller the difference between the quality data QD and the learning output data ODL. Therefore, the learning apparatus 10 evaluates the accuracy of prediction of the machine learning model M by comparing the learning output data ODL with the quality data QD having the same ID as the learning input data IDL. The machine learning model M is updated according to the evaluation result. The learning apparatus 10 repeats inputting the learning input data IDL to the machine learning model M, outputting the learning output data ODL from the machine learning model M, evaluating the accuracy of prediction of the machine learning model M, and updating the machine learning model M, while changing the learning input data IDL and the quality data QD. The series of processing is repeated until the accuracy of prediction of the machine learning model M reaches a preset level. The learning apparatus 10 transmits, to the operating apparatus 11, the machine learning model M of which the accuracy of prediction reaches the preset level, as a learned model TM to be used for actual operation.
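The learn-evaluate-update cycle can be sketched as follows; a linear model and mean squared error stand in for the machine learning model M and the evaluation, and all shapes, values, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
idl = rng.random((200, 5))                            # learning input data IDL
true_w = rng.random(5)
qd = idl @ true_w + 0.01 * rng.standard_normal(200)   # quality data QD (the "answer")

weights = np.zeros(5)              # parameters of the machine learning model M
preset_level = 0.001               # preset level for the accuracy of prediction
learning_coefficient = 0.1         # sets the change range of the parameter values

for _ in range(10_000):
    odl = idl @ weights                        # learning output data ODL
    loss = np.mean((odl - qd) ** 2)            # closer to 0 -> higher accuracy
    if loss < preset_level:
        break                                  # accuracy reached the preset level
    grad = 2 * idl.T @ (odl - qd) / len(qd)    # evaluation result drives the update
    weights -= learning_coefficient * grad     # update the machine learning model M

learned_model_tm = weights                     # provided for actual operation as TM
```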
The operating apparatus 11 receives the learned model TM from the learning apparatus 10. The operating apparatus 11 inputs, to the learned model TM, production condition data for prediction PCDF, which is production condition data of a product PR of which a quality is unknown, and physical-property relevance data for prediction (hereinafter, abbreviated as relevance data for prediction) PRDF, which is relevance data of the product PR of which a quality is unknown. The relevance data for prediction PRDF is data derived from the physical-property data for prediction PDF.
A flow reaction performed in the flow reaction apparatus 13 is a synthesis reaction of synthesizing monomers, a polymerization reaction of generating a polymer by reacting monomers with each other, or the like. Therefore, the product PR may be, for example, a compound at a growth stage, which is a target of a polymerization reaction. In the example, the flow reaction apparatus 13 performs, as the flow reaction, an anionic polymerization reaction of polystyrene which is a product PR.
The flow reaction apparatus 13 includes a first raw material supply unit 20, a second raw material supply unit 21, a reaction section 22, a temperature control unit 23, a recovery/discard section 24, a setting unit 25, and a system controller 26.
The first raw material supply unit 20 is connected to an upstream end of the reaction section 22 by a pipe (not illustrated). The first raw material supply unit 20 supplies a first raw material RM1 to the reaction section 22. The first raw material supply unit 20 includes a pump for transporting the first raw material RM1 to the reaction section 22. By controlling a rotation speed of the pump, a flow rate of the first raw material RM1 which is transported from the first raw material supply unit 20 to the reaction section 22 is adjusted.
In the example, the first raw material RM1 is a solution obtained by dissolving polystyryl lithium in a solvent. The polystyryl lithium functions as an initiator for the anionic polymerization reaction of polystyrene which is the product PR. As the solvent, tetrahydrofuran is used. In addition, a small amount of toluene and a small amount of hexane are mixed in the solution. A raw material for the flow reaction may be a mixture of a reactant such as polystyryl lithium and another substance, such as the first raw material RM1, or may be made of only a reactant.
Similar to the first raw material supply unit 20, the second raw material supply unit 21 is connected to the upstream end of the reaction section 22 by a pipe (not illustrated). The second raw material supply unit 21 supplies a second raw material RM2 to the reaction section 22. Similar to the first raw material supply unit 20, the second raw material supply unit 21 also includes a pump for transporting the second raw material RM2 to the reaction section 22. By controlling a rotation speed of the pump, a flow rate of the second raw material RM2 which is transported from the second raw material supply unit 21 to the reaction section 22 is adjusted.
In the example, the second raw material RM2 is an aqueous methanol solution. Methanol is used as a terminator for an anionic polymerization reaction.
The reaction section 22 is a section for performing a flow reaction (in the example, anionic polymerization reaction). The reaction section 22 includes a junction portion 30 and a reaction portion 31. The junction portion 30 includes a first pipe portion 32, a second pipe portion 33, and a third pipe portion 34. The first pipe portion 32 and the second pipe portion 33 are connected in a straight line, and the third pipe portion 34 intersects with the first pipe portion 32 and the second pipe portion 33 at a right angle. That is, the junction portion 30 has a T-shape.
The first pipe portion 32 is connected to the first raw material supply unit 20, and the second pipe portion 33 is connected to the second raw material supply unit 21. Further, the third pipe portion 34 is connected to the reaction portion 31. The first raw material RM1 is supplied from the first raw material supply unit 20 to the first pipe portion 32, and the second raw material RM2 is supplied from the second raw material supply unit 21 to the second pipe portion 33. The first raw material RM1 and the second raw material RM2 are mixed in the third pipe portion 34, and are transported to the reaction portion 31 in a mixed state.
A first flow velocity sensor 35 that detects a flow velocity of the first raw material RM1 passing through the first pipe portion 32 is provided in the first pipe portion 32. In addition, a second flow velocity sensor 36 that detects a flow velocity of the second raw material RM2 passing through the second pipe portion 33 is provided in the second pipe portion 33. Further, a third flow velocity sensor 37 that detects a flow velocity of the mixture of the first raw material RM1 and the second raw material RM2 passing through the third pipe portion 34 is provided in the third pipe portion 34.
The reaction portion 31 is an elongated pipe obtained by connecting a plurality of linear-shaped pipes having the same inner diameter in a straight line. A length L of the reaction portion 31 may be changed by changing the number of linear-shaped pipes to be connected and/or lengths of the linear-shaped pipes. Further, the inner diameter Ø of the reaction portion 31 may be changed by changing the inner diameter of the linear-shaped pipe to be connected.
The inside of the reaction portion 31 is a flow path through which the mixture of the first raw material RM1 and the second raw material RM2 flows, and is a portion at which a flow reaction is performed. In a case where the mixture passes through the reaction portion 31, a flow reaction is promoted, and thus a polystyrene solution is obtained. The flow reaction is slightly promoted in the third pipe portion 34 of the junction portion 30. On the other hand, the length of the third pipe portion 34 is much shorter than the length L of the reaction portion 31. For this reason, the length of the third pipe portion 34 is ignored, and the length L of the reaction portion 31 is regarded as a length of a reaction path, which is a length of a portion at which a flow reaction is performed. Similarly, the inner diameter Ø of the reaction portion 31 is regarded as a diameter of the reaction path, which is a diameter of a portion at which a flow reaction is performed.
The temperature control unit 23 includes a heater and/or a cooler, and controls a temperature inside the reaction portion 31 (hereinafter, referred to as a reaction temperature). A temperature sensor 38 for detecting the reaction temperature is provided at a downstream end of the reaction portion 31.
The recovery/discard section 24 is a section for recovering polystyrene which is the product PR and for discarding waste resulting from a failed reaction. The recovery/discard section 24 includes a recovery unit 40 and a discard unit 41. The recovery unit 40 and the discard unit 41 are connected to the downstream end of the reaction portion 31 by a three-way valve 42. By using the three-way valve 42, switching between a recovery line that connects the reaction portion 31 and the recovery unit 40 and a discard line that connects the reaction portion 31 and the discard unit 41 can be performed.
The recovery unit 40 precipitates polystyrene from the polystyrene solution. The recovery unit 40 collects the precipitated polystyrene by filtering the solution. The collected polystyrene is dried. More specifically, the recovery unit 40 includes a container with a stirrer, and precipitates polystyrene by filling the container with methanol and mixing the polystyrene solution into the stirred methanol. Further, the recovery unit 40 includes a constant-temperature tank with a depressurization function, and dries the collected polystyrene by heating the inside of the constant-temperature tank in a depressurized state.
The discard unit 41 is a tank for storing a waste. Here, the waste is transported from the reaction portion 31 in a case where the flow velocity of the first raw material RM1, the flow velocity of the second raw material RM2, the flow velocity of the mixture, the reaction temperature, or the like is disturbed for some reason and, as a result, production cannot be performed under originally-predetermined production conditions.
The setting unit 25 receives setting of production conditions of the production process of the product PR by an operator of the flow reaction apparatus 13. The production conditions received by the setting unit 25 are registered in the system controller 26, as the production condition data PCD which is set in the production process of the product PR.
The system controller 26 performs overall control of the operations of the entire flow reaction apparatus 13. The system controller 26 is connected to the first raw material supply unit 20, the second raw material supply unit 21, the temperature control unit 23, the first flow velocity sensor 35, the second flow velocity sensor 36, the third flow velocity sensor 37, the temperature sensor 38, and the three-way valve 42.
The system controller 26 adjusts the flow rate of the first raw material RM1 by controlling the rotation speed of the pump of the first raw material supply unit 20 according to the flow velocity of the first raw material RM1 that is detected by the first flow velocity sensor 35. Similarly, the system controller 26 adjusts the flow rate of the second raw material RM2 by controlling the rotation speed of the pump of the second raw material supply unit 21 according to the flow velocity of the second raw material RM2 that is detected by the second flow velocity sensor 36. In addition, the system controller 26 drives the temperature control unit 23 according to the reaction temperature detected by the temperature sensor 38. Further, the system controller 26 performs switching between the recovery line and the discard line by controlling the three-way valve 42.
Instead of the reaction section 22, a reaction section 45 described below may be used.
The junction portion 46 of the reaction section 45 includes a first pipe portion 47, a second pipe portion 48, a third pipe portion 49, and a fourth pipe portion 50.
The first pipe portion 47 and the second pipe portion 48 are connected to the first raw material supply unit 20, and the third pipe portion 49 is connected to the second raw material supply unit 21. Further, the fourth pipe portion 50 is connected to the reaction portion 31. The first raw material RM1 is supplied from the first raw material supply unit 20 to the first pipe portion 47 and the second pipe portion 48, and the second raw material RM2 is supplied from the second raw material supply unit 21 to the third pipe portion 49. The first raw material RM1 and the second raw material RM2 are mixed in the fourth pipe portion 50, and are transported to the reaction portion 31 in a mixed state.
A first flow velocity sensor 51 and a second flow velocity sensor 52 that detect the flow velocity of the first raw material RM1 passing through the first pipe portion 47 and the second pipe portion 48 are provided in the first pipe portion 47 and the second pipe portion 48. In addition, a third flow velocity sensor 53 that detects the flow velocity of the second raw material RM2 passing through the third pipe portion 49 is provided in the third pipe portion 49. Further, a fourth flow velocity sensor 54 that detects the flow velocity of the mixture of the first raw material RM1 and the second raw material RM2 passing through the fourth pipe portion 50 is provided in the fourth pipe portion 50.
In this case, the system controller 26 adjusts the flow rate of the first raw material RM1 by controlling the rotation speed of the pump of the first raw material supply unit 20 according to an average value of the flow velocity of the first raw material RM1 that is detected by the first flow velocity sensor 51 and the flow velocity of the first raw material RM1 that is detected by the second flow velocity sensor 52. Further, the system controller 26 adjusts the flow rate of the second raw material RM2 by controlling the rotation speed of the pump of the second raw material supply unit 21 according to the flow velocity of the second raw material RM2 that is detected by the third flow velocity sensor 53.
The production condition data PCD includes, as items, the flow velocity of the first raw material RM1, the flow velocity of the second raw material RM2, and the reaction temperature.
In a case where the reaction section 22 is used, the system controller 26 adjusts the flow rate of the first raw material RM1 by controlling the rotation speed of the pump of the first raw material supply unit 20 such that the flow velocity of the first raw material RM1 detected by the first flow velocity sensor 35 matches the flow velocity of the first raw material RM1 registered in the production condition data PCD. Similarly, the system controller 26 adjusts the flow rate of the second raw material RM2 by controlling the rotation speed of the pump of the second raw material supply unit 21 such that the flow velocity of the second raw material RM2 detected by the second flow velocity sensor 36 matches the flow velocity of the second raw material RM2 registered in the production condition data PCD.
In a case where the reaction section 45 is used, the system controller 26 adjusts the flow rate of the first raw material RM1 by controlling the rotation speed of the pump of the first raw material supply unit 20 such that an average value of the flow velocity of the first raw material RM1 detected by the first flow velocity sensor 51 and the flow velocity of the first raw material RM1 detected by the second flow velocity sensor 52 matches the flow velocity of the first raw material RM1 registered in the production condition data PCD. Similarly, the system controller 26 adjusts the flow rate of the second raw material RM2 by controlling the rotation speed of the pump of the second raw material supply unit 21 such that the flow velocity of the second raw material RM2 detected by the third flow velocity sensor 53 matches the flow velocity of the second raw material RM2 registered in the production condition data PCD.
Further, the system controller 26 drives the temperature control unit 23 such that the reaction temperature detected by the temperature sensor 38 matches a reaction temperature registered in the production condition data PCD.
In a case where a deviation between each value detected by each of the sensors 35, 36, 38, 51, 52, and 53 and each value registered in the production condition data PCD exceeds a preset range, the system controller 26 controls the three-way valve 42 to perform switching to the discard line and guide the waste to the discard unit 41. In a case where the reaction fails and the waste is generated, of course, the physical-property data PD and the quality data QD are not output. Therefore, in a case where the waste is generated, the production condition data PCD is discarded without being transmitted to the learning apparatus 10.
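The control behavior of the system controller 26 described above can be sketched as simple proportional feedback; the gain, the allowed deviation range, and the function names are illustrative assumptions, not values used by the apparatus.

```python
def control_step(setpoint_velocity, measured_velocity, pump_rpm,
                 gain=0.5, allowed_deviation=0.05):
    """One control cycle: adjust the pump toward the registered flow velocity,
    and select the recovery or discard line based on the deviation."""
    error = setpoint_velocity - measured_velocity
    pump_rpm = pump_rpm + gain * error      # adjust rotation speed of the pump
    deviation = abs(error) / setpoint_velocity
    line = "recovery" if deviation <= allowed_deviation else "discard"
    return pump_rpm, line                   # the three-way valve 42 selects `line`

rpm, line = control_step(setpoint_velocity=10.0, measured_velocity=9.2,
                         pump_rpm=1200.0)
```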
The physical-property analysis apparatus 14 performs spectroscopic analysis on the product PR recovered by the recovery unit 40, and outputs, as the physical-property data PD, spectrum data SPD of the product PR. The quality evaluation apparatus 15 evaluates the quality of the product PR by gel permeation chromatography (GPC), and outputs, as the quality data QD, a molecular weight dispersion and a molecular weight of the product PR.
GPC is performed under the following conditions.
Instead of GPC, various methods such as infrared spectroscopic analysis, nuclear magnetic resonance spectroscopic analysis, high performance liquid chromatography (HPLC), or gas chromatography (GC) may be used. Further, the quality data QD is not limited to the molecular weight dispersion and the molecular weight of the product PR. In a case where the product PR is obtained in a state of a solution, a molar concentration which is a concentration of the product PR in the solution may be used as the quality data QD. Alternatively, a yield of the product PR that is obtained by dividing an amount of the product PR by an amount of the raw material RM may be used as the quality data QD. Further, in a case where a by-product is produced, a yield of the by-product may be used as the quality data QD.
The computer constituting the learning apparatus 10 and the computer constituting the operating apparatus 11 have the same basic configuration, and each include a storage device 60, a memory 61, a CPU 62, a communication unit 63, a display 64, and an input device 65.
The storage device 60 is a hard disk drive that is built into the computer constituting the learning apparatus 10 or the like, or is connected to the computer via a cable or a network. Alternatively, the storage device 60 is a disk array in which a plurality of hard disk drives are connected in series. The storage device 60 stores a control program such as an operating system, various application programs, and various data associated with the programs. A solid state drive may be used instead of or in addition to the hard disk drive.
The memory 61 is a work memory which is necessary to execute processing by the CPU 62. The CPU 62 loads the program stored in the storage device 60 into the memory 61, and collectively controls each unit of the computer by executing processing according to the program.
The communication unit 63 is a network interface that controls transmission of various information via the network 12. The display 64 displays various screens. The computer constituting the learning apparatus 10 or the like receives an input of an operation instruction from the input device 65 via the various screens. The input device 65 includes a keyboard, a mouse, a touch panel, and the like.
In the following description, in order to distinguish the components, a subscript “A” is attached to each component of the learning apparatus 10, and a subscript “B” is attached to each component of the operating apparatus 11.
A first operation program 70 is stored in the storage device 60A of the learning apparatus 10. The first operation program 70 is an application program for causing the computer to function as the learning apparatus 10.
The storage device 60A also stores the production condition data PCD of the production process of the product PR from the flow reaction apparatus 13, the physical-property data PD from the physical-property analysis apparatus 14, and the quality data QD from the quality evaluation apparatus 15. In addition, the storage device 60A stores an autoencoder AE. Further, the storage device 60A also stores the relevance data PRD, which is derived from the physical-property data PD by using the autoencoder AE, and the machine learning model M. A plurality of sets of the production condition data PCD, the physical-property data PD, the relevance data PRD, and the quality data QD are stored in the storage device 60A.
In a case where the first operation program 70 is started, the CPU 62A of the computer constituting the learning apparatus 10 functions as a first read/write (hereinafter, abbreviated as RW) control unit 75, a first derivation unit 76, a learning unit 77, and a transmission control unit 78, in cooperation with the memory 61 and the like.
The first RW control unit 75 controls reading of various data stored in the storage device 60A and storing of various data in the storage device 60A. The first RW control unit 75 reads the physical-property data PD and the autoencoder AE from the storage device 60A, and outputs the physical-property data PD and the autoencoder AE to the first derivation unit 76. Further, the first RW control unit 75 stores the relevance data PRD from the first derivation unit 76 in the storage device 60A. The first RW control unit 75 acquires the physical-property data PD by reading the physical-property data PD from the storage device 60A. That is, the first RW control unit 75 is an example of a “first acquisition unit” according to the technique of the present disclosure.
The first RW control unit 75 reads the relevance data PRD, the production condition data PCD, and the quality data QD from the storage device 60A, and outputs the read data to the learning unit 77. In addition, the first RW control unit 75 reads the machine learning model M from the storage device 60A, and outputs the machine learning model M to any of the learning unit 77 and the transmission control unit 78. Further, the first RW control unit 75 stores the machine learning model M from the learning unit 77 in the storage device 60A.
The first derivation unit 76 receives the physical-property data PD from the first RW control unit 75. The first derivation unit 76 derives the relevance data PRD by applying the autoencoder AE to the physical-property data PD. That is, the first derivation unit 76 is an example of a “derivation unit” according to the technique of the present disclosure. The first derivation unit 76 assigns the same ID as the ID of the physical-property data PD to the derived relevance data PRD, and outputs the relevance data PRD to the first RW control unit 75. The first derivation unit 76 derives the relevance data PRD each time new physical-property data PD is transmitted from the physical-property analysis apparatus 14.
The learning unit 77 receives the learning input data IDL, the quality data QD, and the machine learning model M from the first RW control unit 75. The learning unit 77 performs learning by inputting the learning input data IDL to the machine learning model M, and outputs a learned model TM.
The transmission control unit 78 receives the machine learning model M from the first RW control unit 75. The machine learning model M received by the transmission control unit 78 from the first RW control unit 75 is a learned model TM. The transmission control unit 78 performs a control for transmitting the learned model TM to the operating apparatus 11.
The autoencoder AE is a hierarchical machine learning model configured with a convolutional neural network that includes a plurality of layers for analyzing the input image data IIMD.
The autoencoder AE includes an encoder network 80 that extracts a feature of the input image data IIMD, and a decoder network that generates output image data from the extracted feature. In each layer of the encoder network 80, convolution processing using a filter F is performed on input data DI.
For example, a 3×3 filter F having coefficients a to i is applied to a pixel of interest and eight pixels adjacent to the pixel of interest, which have pixel values r to z, and a pixel value k after the convolution operation is calculated by the following Equation 1.

k = az + by + cx + dw + ev + fu + gt + hs + ir (Equation 1)
In the convolution processing, the pixel value k is output by performing the convolution operation on each pixel of the input data DI. In this way, the output data DIc in which the pixel values k are two-dimensionally arranged is output. One piece of the output data DIc is output in correspondence with one filter F. In a case where a plurality of filters F having different types are used, the output data DIc is output for each filter F.
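A minimal sketch of this convolution processing follows; the filter contents and input size are placeholders, and the inner operation mirrors Equation 1 applied at every pixel position.

```python
import numpy as np

def convolve2d(input_di, filter_f):
    """Apply one 3x3 filter F to every pixel of the input data DI and return
    the output data DIc of two-dimensionally arranged pixel values k."""
    h, w = input_di.shape
    dic = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            # k = az + by + cx + dw + ev + fu + gt + hs + ir (Equation 1)
            dic[y, x] = np.sum(input_di[y:y + 3, x:x + 3] * filter_f)
    return dic

di = np.random.rand(8, 8)   # placeholder input data DI
f = np.random.rand(3, 3)    # one filter F; one piece of DIc is output per filter
dic = convolve2d(di, f)
```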
The first derivation unit 76 derives the relevance data PRD based on difference data DD between the input image data IIMD which is input to the autoencoder AE and the output image data which is output from the autoencoder AE. The learning unit 77 includes a first processing unit 85, an evaluation unit 86, and an update unit 87. The first processing unit 85 outputs the learning output data ODL from the machine learning model M by inputting the learning input data IDL to the machine learning model M.
The evaluation unit 86 receives the learning output data ODL from the first processing unit 85. The evaluation unit 86 evaluates an accuracy of prediction of the machine learning model M by comparing the learning output data ODL and the quality data QD. The evaluation unit 86 outputs an evaluation result to the update unit 87.
The evaluation unit 86 evaluates an accuracy of prediction of the machine learning model M using, for example, a loss function. The loss function is a function that represents a degree of a difference between the learning output data ODL and the quality data QD. As a calculated value of the loss function is closer to 0, an accuracy of prediction of the machine learning model M is higher.
The update unit 87 updates the machine learning model M according to the evaluation result from the evaluation unit 86. For example, the update unit 87 changes various parameter values of the machine learning model M by a stochastic gradient descent method or the like using a learning coefficient. The learning coefficient indicates a change range in various parameter values of the machine learning model M. That is, as the learning coefficient has a relatively large value, the change range in various parameter values becomes wider, and thus, an update level of the machine learning model M becomes higher.
The inputting of the learning input data IDL to the machine learning model M and the outputting of the learning output data ODL to the evaluation unit 86 by the first processing unit 85, the evaluation of the accuracy of prediction by the evaluation unit 86, and the updating of the machine learning model M by the update unit 87 are repeated until the accuracy of prediction reaches a preset level.
A second operation program 110 is stored in the storage device 60B of the operating apparatus 11.
The storage device 60B also stores the learned model TM from the learning apparatus 10, the autoencoder AE which is the same as the autoencoder AE of the learning apparatus 10, and the physical-property data for prediction PDF from the physical-property analysis apparatus 14. In addition, the storage device 60B also stores the production condition data for prediction PCDF. The production condition data for prediction PCDF is input via the input device 65B by the operator. More specifically, an input screen including input boxes for each item of the production condition data for prediction PCDF is displayed on the display 64B, and the production condition data for prediction PCDF is input via the input screen. The production condition data for prediction PCDF and the physical-property data for prediction PDF are the production condition data PCD and the physical-property data PD of the product PR of which a quality is unknown, and are used to predict the quality by using the learned model TM.
Further, the storage device 60B also stores the physical-property relevance data for prediction (hereinafter, abbreviated as relevance data for prediction) PRDF derived from the physical-property data for prediction PDF.
In a case where the second operation program 110 is started, the CPU 62B of the computer constituting the operating apparatus 11 functions as a second RW control unit 115, a second derivation unit 116, a second processing unit 117, and a display control unit 118 in cooperation with the memory 61 and the like.
Similar to the first RW control unit 75 of the learning apparatus 10, the second RW control unit 115 controls reading of various data stored in the storage device 60B and storing of various data in the storage device 60B. The second RW control unit 115 reads the physical-property data for prediction PDF and the autoencoder AE from the storage device 60B, and outputs the physical-property data for prediction PDF and the autoencoder AE to the second derivation unit 116. Further, the second RW control unit 115 stores the relevance data for prediction PRDF from the second derivation unit 116 in the storage device 60B.
The second RW control unit 115 reads the learned model TM from the storage device 60B, and outputs the learned model TM to the second processing unit 117. The second RW control unit 115 acquires the learned model TM by reading the learned model TM from the storage device 60B. That is, the second RW control unit 115 is an example of a “second acquisition unit” according to the technique of the present disclosure.
The second RW control unit 115 reads the relevance data for prediction PRDF and the production condition data for prediction PCDF from the storage device 60B, and outputs the read data to the second processing unit 117. The second RW control unit 115 acquires the relevance data for prediction PRDF by reading the relevance data for prediction PRDF from the storage device 60B. That is, the second RW control unit 115 is an example of a “third acquisition unit” according to the technique of the present disclosure.
The second derivation unit 116 receives the physical-property data for prediction PDF and the autoencoder AE from the second RW control unit 115. The second derivation unit 116 derives the relevance data for prediction PRDF from the physical-property data for prediction PDF. More specifically, similar to the first derivation unit 76 of the learning apparatus 10, the second derivation unit 116 derives an average value and a sum of the differences in the intensity for each of the plurality of intervals INT1 to INT20 obtained by dividing the spectrum data SPD.
The second processing unit 117 receives the production condition data for prediction PCDF, the relevance data for prediction PRDF, and the learned model TM from the second RW control unit 115. The second processing unit 117 predicts a quality by inputting the production condition data for prediction PCDF and the relevance data for prediction PRDF to the learned model TM. That is, the second processing unit 117 is an example of a "processing unit" according to the technique of the present disclosure. The second processing unit 117 outputs quality prediction data QFD, which is a quality prediction result by the learned model TM, to the display control unit 118. The quality prediction data QFD includes the molecular weight dispersion and the molecular weight, similar to the quality data QD.
The display control unit 118 controls displaying of various screens on the display 64B. The various screens include a quality prediction display screen 120.
The quality prediction display screen 120 includes the quality prediction data QFD, which is a quality prediction result of the product PR by the learned model TM.
Next, an operation according to the above configuration will be described with reference to flowcharts.
In the learning apparatus 10, the first derivation unit 76 derives the relevance data PRD by applying the autoencoder AE to the physical-property data PD, and the first RW control unit 75 stores the relevance data PRD in the storage device 60A. The first RW control unit 75 then reads the learning input data IDL, which includes the production condition data PCD and the relevance data PRD, and the quality data QD from the storage device 60A, and outputs the read data to the learning unit 77. In the learning unit 77, the first processing unit 85 inputs the learning input data IDL to the machine learning model M (step ST2001), and outputs the learning output data ODL from the machine learning model M (step ST2002). The evaluation unit 86 evaluates the accuracy of prediction of the machine learning model M by comparing the learning output data ODL with the quality data QD (step ST2003).
In a case where an evaluation result of the accuracy of prediction of the machine learning model M by the evaluation unit 86 includes content indicating that the accuracy of prediction of the machine learning model M is lower than a preset level (NO in step ST2004), the update unit 87 updates the machine learning model M (step ST2005). Processing of step ST2001, step ST2002, and step ST2003 is repeated by using the updated machine learning model M. In a case where the evaluation result of the accuracy of prediction of the machine learning model M by the evaluation unit 86 includes content indicating that the accuracy of prediction of the machine learning model M reaches a preset level (YES in step ST2004), the processing of step ST2001 to step ST2003 is ended. The machine learning model M of which the accuracy of prediction reaches a preset level is output from the learning unit 77 to the first RW control unit 75, as the learned model TM (step ST2006). The learned model TM is stored in the storage device 60A by the first RW control unit 75. The learned model TM is transmitted to the operating apparatus 11 by the transmission control unit 78. A series of step ST2001 to step ST2006 is an example of a “learning step” according to the technique of the present disclosure.
The following processing is performed in a case where the second operation program 110 is started in the operating apparatus 11.
In the operating apparatus 11, the second RW control unit 115 reads the physical-property data for prediction PDF and the autoencoder AE from the storage device 60B, and outputs the physical-property data for prediction PDF and the autoencoder AE to the second derivation unit 116. The second derivation unit 116 derives an average value and a sum of the differences in the intensity, for each of the plurality of intervals INT1 to INT20 obtained by dividing the spectrum data SPD of the physical-property data for prediction PDF. In this way, the second derivation unit 116 derives the relevance data for prediction PRDF. The relevance data for prediction PRDF is stored in the storage device 60B by the second RW control unit 115.
Next, the second processing unit 117 predicts the quality of the product PR by inputting the production condition data for prediction PCDF and the relevance data for prediction PRDF to the learned model TM, and outputs the quality prediction data QFD to the display control unit 118.
The display control unit 118 displays, on the display 64B, the quality prediction display screen 120 including the quality prediction data QFD.
As described above, in the learning apparatus 10, the first RW control unit 75 acquires the physical-property data PD, and the first derivation unit 76 derives, as the learning input data IDL, the relevance data PRD by applying the autoencoder AE to the physical-property data PD. The learning unit 77 performs learning by inputting the learning input data IDL including the relevance data PRD to the machine learning model M, and outputs the learned model TM. Therefore, numerical values (in the example, the average value and the sum of the differences in the intensity) that accurately represent the physical property of the product PR can be derived as the relevance data PRD, and thus it is possible to further improve the accuracy of prediction of the quality of the product PR by the learned model TM.
The learning input data IDL includes not only the relevance data PRD but also the production condition data PCD. Thus, the quality prediction data QFD considering an influence of the production condition data PCD can be output from the machine learning model M.
The physical-property data PD includes image data SPIMD of a spectrum SP which is represented by spectrum data SPD detected by performing spectroscopic analysis on the product PR. The relevance data PRD includes the average value and the sum of the differences in the intensity that are derived for each of the plurality of intervals INT1 to INT20 obtained by dividing the spectrum data SPD. As compared with a case where the intensity of each wave number of the spectrum data SPD is used as the relevance data PRD, a data amount of the relevance data PRD can be reduced.
In the operating apparatus 11, the second RW control unit 115 acquires the learned model TM and the relevance data for prediction PRDF. Next, the second processing unit 117 predicts a quality by inputting the relevance data for prediction PRDF to the learned model TM. Under a control of the display control unit 118, the quality prediction display screen 120 including the quality prediction data QFD, which is a quality prediction result by the learned model TM, is displayed on the display 64B. Therefore, the operator can easily recognize a degree of the quality of the product PR without actually applying the product PR to the quality evaluation apparatus 15 and evaluating the quality of the product PR.
Here, in a case where the quality evaluation apparatus 15 actually evaluates the quality of the product PR, it takes a relatively long time, for example, approximately 1 to 2 weeks, for preprocessing and quality evaluation processing of the product PR. On the other hand, in a case where the learned model TM is used, it is possible to predict the quality of the product PR in a very short time.
In the second embodiment, image data IMD obtained by imaging the product PR is used as the physical-property data PD. The image data IMD is obtained by imaging the product PR using a physical-property analysis apparatus 130, which is, for example, a digital optical microscope. In this case, the relevance data PRD is derived for each of a plurality of regions AR obtained by dividing the image data IMD.
As described above, in the second embodiment, the image data IMD obtained by imaging the product PR is used as the physical-property data PD. Therefore, the physical property of the product PR can be more easily recognized.
The image data IMD may be used as the physical-property data PD instead of the image data SPIMD of the spectrum SP according to the first embodiment, or may be used as the physical-property data PD in addition to the image data SPIMD of the spectrum SP. Further, the physical-property analysis apparatus 130 is not limited to the exemplified digital optical microscope, and may be a scanning electron microscope (SEM) or the like.
The multi-dimensional physical-property data is not limited to the spectrum data SPD according to the first embodiment and the image data IMD according to the second embodiment. The multi-dimensional physical-property data related to any of five human senses of sight, hearing, smell, touch, and taste may be used. For example, touch data of the product PR, odor data at a time of production of the product PR, audio data at a time of production of the product PR, and the like may be used. In a case of touch data, an output of a touch sensor provided at each of a plurality of portions of the product PR is used as touch data, and an average value of the output of the touch sensor provided at each of the plurality of portions is derived as the relevance data PRD. In a case of audio data, an audio recorded for a period from the start to the end of production by a microphone is used as audio data. In this case, the audio data is divided into a plurality of intervals, and an average value of a frequency of each of the plurality of intervals and an average value of an amplitude of each of the plurality of intervals are derived as the relevance data PRD.
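For the audio case, one possible reading of the per-interval statistics is sketched below; the sampling rate, the number of intervals, and the use of an amplitude-weighted average frequency are assumptions for illustration only.

```python
import numpy as np

fs = 44_100                              # assumed sampling rate in Hz
audio = np.random.randn(fs * 10)         # placeholder: 10 s recording of production
intervals = np.array_split(audio, 20)    # assumed number of intervals

relevance_data = []
for chunk in intervals:
    amplitudes = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(chunk.size, d=1.0 / fs)
    avg_freq = float(np.sum(freqs * amplitudes) / np.sum(amplitudes))  # average frequency
    avg_amp = float(np.mean(np.abs(chunk)))                            # average amplitude
    relevance_data.append((avg_freq, avg_amp))
```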
In the third embodiment, feature data which is output from an encoder network 140 of the autoencoder AE is used for deriving the relevance data PRD. The first derivation unit 145 inputs the input image data IIMD to the autoencoder AE, and an image feature map CMP is output from the lowest layer of the encoder network 140.
The first derivation unit 145 derives the relevance data PRD based on the image feature map CMP. That is, the image feature map CMP is an example of “feature data” according to the technique of the present disclosure.
As described above, in the third embodiment, the relevance data PRD is derived based on the image feature map CMP output from the encoder network 140 of the autoencoder AE. Therefore, as in the first embodiment, a numerical value (in the example, the average value of the pixel values of the output data DIc) that accurately represents the physical property of the product PR can be derived as the relevance data PRD, and thus it is possible to further improve the accuracy of prediction of the learned model TM.
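A minimal sketch of this feature-map-based derivation follows, assuming a small Keras encoder; in practice the encoder would be the encoder network 140 taken from the trained autoencoder AE, whereas here it is untrained and its layer sizes are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the encoder network 140 (layer sizes are illustrative).
encoder = models.Sequential([
    layers.Conv2D(8, 3, activation="relu", padding="same", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),  # lowest layer: outputs the image feature map CMP
])

image = np.random.rand(1, 64, 64, 1).astype("float32")  # placeholder input image data IIMD
cmp_maps = encoder.predict(image, verbose=0)            # shape (1, 16, 16, 16)

# One item of relevance data PRD per filter: the average of the pixel values of
# each piece of output data DIc in the image feature map CMP.
relevance_prd = cmp_maps.mean(axis=(0, 1, 2))           # 16 averages, one per filter
```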
The image feature map CMP may be output from a layer other than the lowest layer of the encoder network 140.
The relevance data PRD based on the image feature map CMP may be used instead of or in addition to the relevance data PRD based on the difference data DD according to the first embodiment.
In the third embodiment, the input image data IIMD is not limited to the image data SPIMD of the exemplified spectrum SP. Instead of or in addition to the image data SPIMD of the spectrum SP, the image data IMD according to the second embodiment that is obtained by imaging the product PR may be used as the input image data IIMD.
In the fourth embodiment, a high-contribution item, of which a contribution to improvement of accuracy of prediction of the quality of the product PR satisfies a preset condition, is extracted from a plurality of items of the relevance data PRD, and learning is performed by selectively inputting the relevance data PRD of the high-contribution item to the machine learning model M.
In a case where the first operation program 151 is started, the CPU 62A of the computer constituting the learning apparatus 150 functions as an extraction unit 155 in addition to the first RW control unit 75, the first derivation unit 76, the learning unit 77, and the transmission control unit 78 according to the first embodiment, in cooperation with the memory 61 and the like.
In this case, the learning unit 77 performs learning by inputting the learning input data IDL to the machine learning model M, and outputs a temporary machine learning model PM.
The first RW control unit 75 reads the temporary machine learning model PM from the storage device 60A, and outputs the temporary machine learning model PM to the extraction unit 155. The extraction unit 155 extracts a high-contribution item from a plurality of items of the relevance data PRD by using the temporary machine learning model PM. The high-contribution item is an item of which a contribution to improvement of accuracy of prediction of the quality of the product PR satisfies a preset condition. The extraction unit 155 outputs high-contribution item information HCII, which is an extraction result of the high-contribution item, to the learning unit 77. Although not illustrated, the extraction unit 155 outputs the high-contribution item information HCII to the first RW control unit 75, and the first RW control unit 75 stores the high-contribution item information HCII in the storage device 60A.
The learning unit 77 receives the high-contribution item information HCII from the extraction unit 155. The learning unit 77 performs learning by selectively inputting the relevance data PRD of the high-contribution item to the machine learning model M based on the high-contribution item information HCII, and outputs the machine learning model M as a learned model TM. Hereinafter, the learning performed to output the learned model TM is referred to as main learning. Further, the machine learning model M which is used for outputting the temporary machine learning model PM is referred to as a first machine learning model M1, and the machine learning model M which is used in the main learning is referred to as a second machine learning model M2.
The transmission control unit 78 performs a control for receiving the learned model TM and the high-contribution item information HCII from the first RW control unit 75 and transmitting the learned model TM and the high-contribution item information HCII to the operating apparatus 175.
The extraction unit 155 includes a third processing unit 160, a fourth processing unit 161, a calculation unit 162, and a determination unit 163. The third processing unit 160 outputs temporary output data POD from the temporary machine learning model PM by inputting the production condition data PCD and the relevance data PRD to the temporary machine learning model PM.
The fourth processing unit 161 outputs temporary output data for extraction PODE from the temporary machine learning model PM by inputting, to the temporary machine learning model PM, the production condition data PCD and physical-property relevance data for extraction (hereinafter, abbreviated as relevance data for extraction) PRDE, the production condition data PCD being the same as the production condition data PCD which is input to the temporary machine learning model PM by the third processing unit 160. The temporary output data for extraction PODE includes the molecular weight dispersion and the molecular weight, similar to the learning output data ODL.
The calculation unit 162 receives the temporary output data POD from the third processing unit 160 and the temporary output data for extraction PODE from the fourth processing unit 161. The calculation unit 162 calculates a contribution of the item of the relevance data PRD that is a degree to which the item of the relevance data PRD contributes to improvement of accuracy of quality prediction of the first machine learning model M1, based on the temporary output data POD and the temporary output data for extraction PODE. The calculation unit 162 outputs contribution information CI, which is a calculation result of the contribution, to the determination unit 163.
The determination unit 163 receives the contribution information CI from the calculation unit 162. The determination unit 163 determines whether or not each of the plurality of items of the relevance data PRD is a high-contribution item based on the contribution information CI and a setting condition SC. The determination unit 163 outputs, as a determination result, the high-contribution item information HCII to the learning unit 77.
The relevance data for extraction PRDE is data obtained by excluding one of the plurality of items from the relevance data PRD. The calculation unit 162 calculates the rate of change of the temporary output data for extraction PODE with respect to the temporary output data POD by the following Equation 2.
rate of change = (difference between temporary output data and temporary output data for extraction)/temporary output data (Equation 2)
The rate of change is a value indicating a degree to which the output data of the temporary machine learning model PM changes due to an influence of the item excluded from the relevance data for extraction PRDE.
Next, the calculation unit 162 converts the rate of change into a contribution by using a conversion table 170 that converts the rate of change into a contribution. In the conversion table 170, in a case where the rate of change is equal to or larger than 0 and smaller than 0.05, the contribution is registered as 0, in a case where the rate of change is equal to or larger than 0.05 and smaller than 0.1, the contribution is registered as 1, . . . , in a case where the rate of change is equal to or larger than 0.45 and smaller than 0.5, the contribution is registered as 9, and in a case where the rate of change is equal to or larger than 0.5, the contribution is registered as 10.
The fourth processing unit 161 inputs the relevance data for extraction PRDE to the temporary machine learning model PM one after another while changing the excluded items one by one. In the example, since the number of the items of the relevance data PRD is 40, the fourth processing unit 161 inputs 40 pieces of the relevance data for extraction PRDE to the temporary machine learning model PM. In addition, the calculation unit 162 calculates the contribution for each relevance data for extraction PRDE. In order to improve reliability of a value of the rate of change, the rate of change may be calculated not only for one set of the production condition data PCD and the relevance data PRD but also for a plurality of different sets of the production condition data PCD and the relevance data PRD, and an average value of the rate of change may be converted to a contribution.
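A minimal sketch of the rate-of-change and contribution calculation follows; `predict` stands in for the temporary machine learning model PM, the relevance data is a dict of items, and zeroing an item out is one possible reading of "excluding" it.

```python
def rate_of_change(predict, pcd, prd, item):
    baseline = predict(pcd, prd)           # temporary output data POD
    prd_excluded = dict(prd)
    prd_excluded[item] = 0.0               # relevance data for extraction PRDE
    excluded = predict(pcd, prd_excluded)  # temporary output data for extraction PODE
    # rate of change = (difference between POD and PODE) / POD (Equation 2)
    return abs(baseline - excluded) / abs(baseline)

def contribution(rate):
    """Convert the rate of change into a 0 to 10 contribution, as in the
    conversion table 170 (one step of contribution per 0.05 of rate)."""
    return min(int(rate / 0.05), 10)

def predict(pcd, prd):                     # illustrative stand-in model; pcd unused here
    return 1.0 + 0.2 * prd["INT1_average"] + 0.05 * sum(prd.values())

prd = {f"INT{i}_average": 0.5 for i in range(1, 21)}
print(contribution(rate_of_change(predict, None, prd, "INT1_average")))
```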
In this way, a method of extracting an essential part (in the example, the high-contribution item) from a plurality of pieces of data (in the example, the plurality of items of the relevance data PRD) is called sparse modeling. The sparse modeling may be performed, for example, by using the glmnet package that operates on the R language. A detailed algorithm for sparse modeling is described in, for example, "Regularization Paths for Generalized Linear Models via Coordinate Descent", Journal of Statistical Software, vol. 33-1 (2010).
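The cited glmnet package runs on the R language; for illustration, the following sketch uses scikit-learn's Lasso as a Python analog of the same L1-penalized sparse modeling, with placeholder data in which the quality is made to depend on two items.

```python
import numpy as np
from sklearn.linear_model import Lasso

X = np.random.rand(100, 40)                            # 40 items of relevance data PRD
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * np.random.rand(100)  # synthetic quality data QD

model = Lasso(alpha=0.1)   # the L1 penalty drives low-contribution coefficients to zero
model.fit(X, y)

high_contribution_items = np.flatnonzero(model.coef_)  # indices of surviving items
```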
The second RW control unit 115 of the operating apparatus 175 reads the high-contribution item information HCII from the storage device 60B, and outputs the high-contribution item information HCII to the second derivation unit 180. Similar to the second derivation unit 116 of the operating apparatus 11 according to the first embodiment, the second derivation unit 180 derives, as the relevance data for prediction PRDF, the average value and the sum of the differences in the intensity for each of the plurality of intervals INT1 to INT20 obtained by dividing the spectrum data SPD. Here, the second derivation unit 180 selectively derives the high-contribution item based on the high-contribution item information HCII, and does not derive items other than the high-contribution item.
As described above, in the learning apparatus 150, the learning unit 77 performs learning by inputting the learning input data IDL to the first machine learning model M1, and outputs a temporary machine learning model PM. Next, the extraction unit 155 extracts the high-contribution item from the plurality of items of the relevance data PRD by using the temporary machine learning model PM. The learning unit 77 performs learning by selectively inputting the relevance data PRD of the high-contribution item to the second machine learning model M2, and outputs the learned second machine learning model M2 as the learned model TM. Therefore, as compared with a case where the relevance data PRD other than the high-contribution item is also input and learned, it is possible to improve the accuracy of prediction of the quality of the product PR by the learned model TM. Further, it is possible to prevent learning effort from being spent on items which contribute little to the improvement of the accuracy of prediction, and thus it is possible to improve learning efficiency.
The learning input data IDL includes not only the relevance data PRD but also the production condition data PCD. Thus, the high-contribution item can be extracted in consideration of the influence of the production condition data PCD, and thus validity of the high-contribution item can be further enhanced.
The intervals INT obtained by dividing the spectrum data SPD may overlap each other. Similarly, the regions AR obtained by dividing the image data IMD may overlap each other.
The product PR is not limited to the product that is produced by using a flow synthesis method. For example, the product may be produced by using a batch synthesis method.
In each of the embodiments, the production condition which is received by the setting unit 25 of the flow reaction apparatus 13 is used as the production condition data PCD. On the other hand, the present disclosure is not limited thereto. As the production condition data PCD, actual measurement values that are measured by the first flow velocity sensor 35, the second flow velocity sensor 36, the third flow velocity sensor 37, the temperature sensor 38, the first flow velocity sensor 51, the second flow velocity sensor 52, the third flow velocity sensor 53, and the fourth flow velocity sensor 54 may be used.
In each of the embodiments, the quality prediction display screen 120 is exemplified as an output form of the quality prediction data QFD. On the other hand, the present disclosure is not limited thereto. Instead of or in addition to the quality prediction display screen 120, a form in which the quality prediction data QFD is printed and output on a paper medium and a form in which the quality prediction data QFD is output as a data file may be adopted.
As the machine learning model M, there are machine learning models using linear regression, Gaussian process regression, support vector regression, a decision tree, an ensemble method such as bagging, boosting, or gradient boosting, and the like. Further, there are machine learning models using a simple perceptron, a multi-layer perceptron, a deep neural network, a convolutional neural network, a deep belief network, a recurrent neural network, a stochastic neural network, and the like. Which machine learning model M is used among the above-described models is not particularly limited, and a machine learning model M using any method may be selected.
As an ensemble method, there is random forest. As is well known, random forest is a method of improving accuracy of prediction by creating a plurality of weakly correlated decision tree groups using randomly sampled learning data and randomly selected explanatory variables, and by integrating and averaging the prediction results of the decision tree groups. In this case, control parameters of the machine learning model M include the number of explanatory variables to be selected and the number of branches of the decision trees.
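As a hedged illustration with scikit-learn, these two control parameters correspond to max_features and max_depth; the exact names depend on the library, and the values below are arbitrary.

    from sklearn.ensemble import RandomForestRegressor

    model = RandomForestRegressor(
        n_estimators=200,  # number of decision trees in the group
        max_features=8,    # number of explanatory variables selected at each split
        max_depth=6,       # bounds the number of branches of each decision tree
        random_state=0,
    )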
Since the deep neural network has a relatively large number of control parameters that can be flexibly combined, it can exhibit high prediction performance for various data structures. The control parameters include the number of layers of the network, the number of nodes of the network, a type of activation function, a dropout ratio, a mini-batch size, the number of epochs, a learning rate, and the like.
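For example, with the Keras functional API these control parameters appear as follows; the layer sizes, dropout ratio, and other values are arbitrary illustrations rather than tuned settings of the embodiment.

    from tensorflow.keras import layers, models, optimizers

    inputs = layers.Input(shape=(40,))               # 40-item learning input data (assumption)
    x = layers.Dense(64, activation="relu")(inputs)  # number of nodes, type of activation function
    x = layers.Dropout(0.2)(x)                       # dropout ratio
    x = layers.Dense(64, activation="relu")(x)       # stacking layers sets the number of layers
    outputs = layers.Dense(1)(x)                     # predicted quality
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-3), loss="mse")  # learning rate
    # model.fit(X, y, batch_size=32, epochs=100)     # mini-batch size and number of epochs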
A plurality of execution frameworks are available for the machine learning model M, and an execution framework may be appropriately selected from among them. For example, an execution framework may be selected from TensorFlow, Microsoft Cognitive Toolkit (CNTK), Theano, Caffe, mxnet, Keras, PyTorch, Chainer, Scikit-learn, Caret, MATLAB (registered trademark), and the like.
The hardware configuration of the computer including the machine learning system 2 may be modified in various ways. For example, the learning apparatus 10 and the operating apparatus 11 may be integrated and configured by one computer. Further, at least one of the learning apparatus 10 or the operating apparatus 11 may be configured by a plurality of computers which are separated as hardware for the purpose of improving processing capability and reliability. For example, in the learning apparatus 10, the function of the first derivation unit 76 and the function of the learning unit 77 are distributed to two computers. In this case, the learning apparatus 10 is configured by two computers.
In this way, the hardware configuration of the computer of the machine learning system 2 may be appropriately changed according to required performance such as processing capability, safety, and reliability. Further, not only the hardware but also the application programs, such as the first operation programs 70 and 151 and the second operation programs 110 and 176, may be duplicated or distributed and stored in a plurality of storage devices for the purpose of ensuring safety and reliability.
In each of the embodiments, for example, as a hardware structure of the processing unit that executes various processing, such as the first RW control unit 75, the first derivation units 76, 135, and 145, the learning unit 77 (the first processing unit 85, the evaluation unit 86, and the update unit 87), the transmission control unit 78, the second RW control unit 115, the second derivation units 116 and 180, the second processing unit 117, the display control unit 118, the extraction unit 155 (the third processing unit 160, the fourth processing unit 161, the calculation unit 162, and the determination unit 163), the following various processors may be used. The various processors include, as described above, the CPU 62A or 62B which is a general-purpose processor that functions as various processing units by executing software (the first operation programs 70 and 151, and the second operation programs 110 and 176), a programmable logic device (PLD) such as a field programmable gate array (FPGA) which is a processor capable of changing a circuit configuration after manufacture, a dedicated electric circuit such as an application specific integrated circuit (ASIC) which is a processor having a circuit configuration specifically designed to execute specific processing, and the like.
One processing unit may be configured by one of these various processors, or may be configured by a combination of two or more processors having the same type or different types (for example, a combination of a plurality of FPGAs and/or a combination of a CPU and an FPGA). Further, the plurality of processing units may be configured by one processor.
As an example in which the plurality of processing units are configured by one processor, first, as represented by computers such as a client and a server, there is a form in which one processor is configured by a combination of one or more CPUs and software, and the processor functions as the plurality of processing units. Second, as typified by a system on chip (SoC), there is a form in which a processor that realizes the functions of the entire system including the plurality of processing units with one integrated circuit (IC) chip is used. As described above, the various processing units are configured by using one or more of the various processors as a hardware structure.
Further, as the hardware structure of the various processors, more specifically, an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined may be used.
From the above description, the invention described in following Appendixes 1 and 2 can be understood.
[Appendix 1]
A learning apparatus including:
a first acquisition processor configured to acquire multi-dimensional physical-property data representing a physical property of a product;
a derivation processor configured to derive learning input data to be input to a machine learning model for predicting a quality of the product from the multi-dimensional physical-property data, and to derive, as the learning input data, multi-dimensional physical-property relevance data which is related to the multi-dimensional physical-property data by applying at least a part of an autoencoder to the multi-dimensional physical-property data; and
a learning processor configured to perform learning by inputting the learning input data including the multi-dimensional physical-property relevance data to the machine learning model and output the machine learning model as a learned model to be provided for actual operation.
[Appendix 2]
An operating apparatus including:
a second acquisition processor configured to acquire the learned model which is output from the learning processor of the learning apparatus according to Appendix 1;
a third acquisition processor configured to acquire multi-dimensional physical-property relevance data for prediction which is data of a product of which a quality is unknown;
a processing processor configured to predict the quality by inputting, to the learned model acquired by the second acquisition processor, the multi-dimensional physical-property relevance data for prediction which is data of the product of which the quality is unknown and is acquired by the third acquisition processor; and
an output control processor configured to control outputting of a prediction result of the quality by the learned model.
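As one possible realization of the derivation processor of Appendix 1, the encoder part of an autoencoder may be applied to the multi-dimensional physical-property data. The following Keras sketch assumes 256-point spectrum data and a 16-dimensional code, both chosen arbitrarily for illustration.

    from tensorflow.keras import layers, models

    inputs = layers.Input(shape=(256,))  # multi-dimensional physical-property data (e.g., a spectrum)
    code = layers.Dense(16, activation="relu")(inputs)  # compressed representation
    decoded = layers.Dense(256)(code)

    autoencoder = models.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(spd, spd, epochs=50, batch_size=32)  # learns to reproduce its input

    encoder = models.Model(inputs, code)  # "at least a part" of the autoencoder
    # relevance = encoder.predict(spd)    # multi-dimensional physical-property relevance data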
The technique of the present disclosure can also appropriately combine the various embodiments and the various modification examples. In addition, the technique of the present disclosure is not limited to each embodiment, and various configurations may be adopted without departing from the scope of the present disclosure. Further, the technique of the present disclosure extends to a program and a storage medium that non-transitorily stores the program.
The described contents and the illustrated contents are detailed explanations of a part according to the technique of the present disclosure, and are merely examples of the technique of the present disclosure. For example, the descriptions related to the configuration, the function, the operation, and the effect are descriptions related to examples of a configuration, a function, an operation, and an effect of a part according to the technique of the present disclosure. Therefore, it goes without saying that, in the described contents and illustrated contents, unnecessary parts may be deleted, new components may be added, or replacements may be made without departing from the spirit of the technique of the present disclosure. Further, in order to avoid complications and facilitate understanding of the part according to the technique of the present disclosure, in the described contents and illustrated contents, descriptions of technical knowledge and the like that do not require particular explanations to enable implementation of the technique of the present disclosure are omitted.
In this specification, “A and/or B” is synonymous with “at least one of A or B”. That is, “A and/or B” means that only A may be included, that only B may be included, or that a combination of A and B may be included. Further, in this specification, even in a case where three or more matters are expressed by being connected using “and/or”, the same concept as “A and/or B” is applied.
All documents, patent applications, and technical standards mentioned in this specification are incorporated herein by reference to the same extent as in a case where each document, each patent application, and each technical standard are specifically and individually described by being incorporated by reference.
Foreign application priority data: No. 2019-124420, Jul. 2019, JP, national.
This application is a Continuation of PCT International Application No. PCT/JP2020/019936 filed on May 20, 2020, which claims priority under 35 U.S.C § 119(a) to Japanese Patent Application No. 2019-124420 filed on Jul. 3, 2019. Each of the above application(s) is hereby expressly incorporated by reference, in its entirety, into the present application.
Related application data: Parent, PCT/JP2020/019936, May 2020, US; Child, 17541725, US.