The present disclosure relates to an information processing device, an information processing method, and a program.
In recent years, systems using neural networks have been actively developed. For example, Patent Document 1 discloses a prediction system using a recurrent neural network (RNN).
However, feedback in an RNN or the like is generally performed locally. Furthermore, in a case where input information is large, the calculation amount tends to be large.
According to an aspect of the present disclosure, there is provided an information processing device including a processing unit that executes processing using a neural network, in which the processing unit performs feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
Furthermore, according to another aspect of the present disclosure, there is provided an information processing method including executing, by a processor, processing using a neural network, in which executing the processing further includes performing feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
Furthermore, according to another aspect of the present disclosure, there is provided a program for causing a computer to function as an information processing device including a processing unit that executes processing using a neural network, in which the processing unit performs feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the present specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
First, an overview of one embodiment of the present disclosure will be described.
A neural network may be provided with a feedback circuit in addition to a feedforward circuit in order to improve the accuracy of processing such as recognition.
An example of a neural network including a feedback circuit is an RNN.
Furthermore, an output from each intermediate layer is input to a corresponding feedback function G, and feedback to the same intermediate layer is performed.
For example, in the first intermediate layer from the input layer side, a feature-amount conversion function F1 converts the input information x by using a function parameter θ1.
At this time, the input information x converted by the feature-amount conversion function F1 is input to a feature-amount conversion function F2 in the second intermediate layer from the input layer side, and is also input to a feedback function G1.
The feature-amount conversion function F2 to a feature-amount conversion function Fn also convert the input information x using function parameters θ2 to θn, respectively, and the input information x after the conversion is input to the next layer and feedback functions G2 to Gn, respectively.
Furthermore, as illustrated in the figure, an output from each of the feedback functions G1 to Gn is fed back to the same intermediate layer from which the corresponding input originated.
As described above, in the general RNN, feedback using an output from an intermediate layer is locally performed on the same layer, and thus each intermediate layer cannot perform feature amount conversion based on a feature amount converted in a subsequent layer.
Furthermore, in a case where feedback is performed for each intermediate layer, the calculation amount increases. The increase in the calculation amount is more remarkable in a case where the input information x is large data such as an image.
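For reference, the following is a minimal Python sketch of the local, per-layer feedback described above. The forms of F and G, the tanh nonlinearity, and all constants are illustrative assumptions, not part of the present disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, dim = 3, 4

# Function parameters theta_1..theta_n: one weight matrix per layer.
thetas = [rng.normal(size=(dim, dim)) for _ in range(n_layers)]
# State held by the local feedback functions G_1..G_n.
fb_state = [np.zeros(dim) for _ in range(n_layers)]

def F(h, theta, fb):
    # Feature-amount conversion F_i: each layer only sees its own feedback.
    return np.tanh(theta @ h + fb)

def G(h):
    # Local feedback function G_i: a layer's output returns to the same layer.
    return 0.1 * h

x = rng.normal(size=dim)
for t in range(2):                      # two steps of the recurrence
    h = x
    for i in range(n_layers):
        h = F(h, thetas[i], fb_state[i])
        fb_state[i] = G(h)              # feedback stays local to layer i
    print(f"step {t}: output = {np.round(h, 3)}")
```

Note that no layer ever receives information about the conversion performed in a subsequent layer, which is precisely the limitation pointed out above.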
The technical idea according to the present disclosure has been conceived focusing on the above points, and implements highly efficient and highly accurate feedback with the calculation amount suppressed.
Therefore, one of the features of a processing unit 210 according to the one embodiment of the present disclosure is to perform feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in a neural network on the basis of an output from the intermediate layers.
Meanwhile, in the processing unit 210 according to the present embodiment, unlike the general RNN, one feedback function G is provided for a plurality of intermediate layers.
For example, in the case of the example illustrated in the figure, an output from an intermediate layer is input to the single feedback function G, and feedback based on the output from the feedback function G is performed on the plurality of function parameters θ1 to θn.
According to the feedback method as described above, the feature-amount conversion function F1 and the feature-amount conversion function F2 can perform calculation in consideration of a feature amount of a subsequent stage. Furthermore, providing one feedback function G for a plurality of intermediate layers makes it possible to greatly reduce the calculation amount.
Moreover, in the feedback method according to the present embodiment, as illustrated in the figure, an output from the feedback function G is input to a θ update function provided for each function parameter.
A θ update function updates the function parameter θ on the basis of an output from the feedback function G, and the updated function parameter is fed back to the feature-amount conversion function F.
According to the feedback method according to the present embodiment, the possibility of obtaining a more accurate solution is improved as compared with the case of feeding back the input information x.
Furthermore, in many cases, the data amount of the function parameters is smaller than the input information x, and thus the calculation amount can be effectively reduced.
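For contrast with the previous sketch, the following illustrates the feedback method according to the present embodiment under the same illustrative assumptions: a single feedback function G serves all intermediate layers, and θ update functions modify the function parameters rather than the input information x.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, dim = 3, 4
thetas = [rng.normal(size=(dim, dim)) for _ in range(n_layers)]

def F(h, theta):
    # Feature-amount conversion F_i using function parameter theta_i.
    return np.tanh(theta @ h)

def G(h):
    # Single feedback function shared by all intermediate layers.
    return float(np.mean(h))

def update_theta(theta, g):
    # Theta update function: a small correction derived from G's output.
    return theta * (1.0 + 0.05 * g)

x = rng.normal(size=dim)
h = x
for theta in thetas:
    h = F(h, theta)

g = G(h)                                               # one feedback computation
thetas = [update_theta(theta, g) for theta in thetas]  # feedback to theta_1..theta_n
```

Because the feedback signal g is a scalar derived from a subsequent stage, it is far smaller than the input information x, which is the source of the calculation-amount reduction described above.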
Moreover, execution or non-execution of the feedback circuit according to the present embodiment can be arbitrarily set. Therefore, for example, in a case where it is not necessary to update the function parameters, turning off the feedback circuit makes it possible to reduce the calculation amount and improve the calculation speed.
Next, an exemplary configuration of a system 1 in the present embodiment will be described.
As illustrated in the figure, the system 1 according to the present embodiment includes an input information acquisition device 10, a processing device 20, and a post-processing device 30.
The input information acquisition device 10 according to the present embodiment is a device that acquires input information input to the neural network included in the processing unit 210.
As illustrated in the figure, the input information acquisition device 10 according to the present embodiment includes an input information acquisition unit 110 and a preprocessing unit 120.
The input information acquisition unit 110 according to the present embodiment acquires input information input to the neural network.
For this purpose, the input information acquisition unit 110 according to the present embodiment includes a sensor or the like corresponding to the type of input information to be used.
The preprocessing unit 120 according to the present embodiment performs preprocessing on the input information prior to input to the neural network.
The preprocessing is only required to be appropriately designed according to the type of the input information, the specifications of the system 1, or the like.
Specific examples of the input information and the preprocessing according to the present embodiment will be described later.
The processing device 20 according to the present embodiment is an information processing device that performs processing such as recognition and prediction using the neural network.
As illustrated in the figure, the processing device 20 according to the present embodiment includes a processing unit 210.
The processing unit 210 according to the present embodiment performs processing such as recognition and prediction using the neural network. Furthermore, one of the features of the processing unit 210 according to the present embodiment is to perform feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
Details of the feedback performed by the processing unit 210 according to the present embodiment will be separately described.
The post-processing device 30 according to the present embodiment is a device that performs some sort of processing (post-processing) based on a result of processing by the processing unit 210.
As illustrated in the figure, the post-processing device 30 according to the present embodiment includes a post-processing unit 310.
The post-processing unit 310 according to the present embodiment performs post-processing based on a result of processing by the processing unit 210.
The post-processing is only required to be appropriately designed according to processing contents of the processing unit 210, the specifications of the system 1, or the like.
For example, in a case where the processing unit 210 performs object recognition using the neural network, the post-processing unit 310 may execute, as post-processing, notification or machine control based on a result of the object recognition.
The exemplary configuration of the system 1 according to the present embodiment has been described above. Note that the configuration described above is merely an example, and the configuration of the system 1 according to the present embodiment is not limited to such an example.
For example, each of the input information acquisition device 10, the processing device 20, and the post-processing device 30 may further include an operation unit that receives an operation by a user, a display unit that displays information, and the like.
Furthermore, for example, the functions of the input information acquisition device 10, the processing device 20, and the post-processing device 30 described above may be implemented in a single device.
The configuration of the system 1 according to the present embodiment can be flexibly modified according to specifications, operations, and the like.
Next, feedback control according to the present embodiment will be described in more detail.
As described above, one of the features of the processing unit 210 according to the present embodiment is to perform feedback related to function parameters used in intermediate layers instead of the input information x.
Several patterns are conceivable in a structure for implementing the feedback related to the function parameters.
For example, as in a processing unit 210A illustrated in the figure, a structure in which a feedback function G is provided for each intermediate layer is conceivable.
In this case, a θ1 update function to a θ3 update function update a function parameter θ1 to a function parameter θ3 on the basis of outputs from the feedback function G1 to the feedback function G3, respectively, and perform feedback to the feature-amount conversion function F1 to the feature-amount conversion function F3.
However, in this case, although the calculation amount can be reduced as compared with the case where the input information x is fed back, there is a possibility that the accuracy of processing is lowered by the latest conversion result being locally fed back.
Meanwhile, a processing unit 210B illustrated in the figure adopts a structure in which a single feedback function G is provided for a plurality of intermediate layers.
In this case, reducing the number of feedback functions G makes it possible to further reduce the calculation amount. Furthermore, it is expected that feedback based on a result of feature amount conversion in a subsequent stage can be performed on the feature-amount conversion function F1 and the feature-amount conversion function F3, and the accuracy of processing is improved.
However, in a case where the structure illustrated in the figure is adopted, the single feedback function G needs to simultaneously control a plurality of function parameters, and the feedback control may become difficult.
In order to eliminate the difficulty of the feedback control as described above, the processing unit 210 according to the present embodiment may update a latent variable on the basis of an output from the intermediate layers in the neural network and update the function parameters on the basis of the updated latent variable.
More specifically, the processing unit 210 according to the present embodiment may execute update of the function parameters based on the updated latent variable and update of the latent variable based on the updated function parameters in order from an intermediate layer closer to the input layer side.
For example, in the case of a processing unit 210C illustrated in the figure, a y update function first updates a latent variable y on the basis of an output from the feedback function G.
Next, the θ1 update function updates the function parameter θ1 on the basis of the latent variable y updated as described above.
Subsequently, the y update function updates the latent variable y on the basis of the function parameter θ1 updated by the θ1 update function.
A θ2 update function updates a function parameter θ2 on the basis of the latent variable y updated on the basis of the function parameter θ1.
Subsequently, the y update function updates the latent variable y on the basis of the function parameter θ2 updated by the θ2 update function.
The θ3 update function updates the function parameter θ3 on the basis of the latent variable y updated on the basis of the function parameter θ2.
As described above, the use of the latent variable y reduces the dimensionality of the data to be handled, and thus easier feedback control is expected.
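A minimal sketch of this alternating update, assuming a toy two-dimensional latent variable and illustrative update rules:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent_dim = 4, 2
thetas = [rng.normal(size=(dim, dim)) for _ in range(3)]
y = np.zeros(latent_dim)                 # low-dimensional latent variable

def G(h):
    # Feedback function: compresses the intermediate output.
    return h[:latent_dim]

def update_y(y, signal):
    # y update function: blends the latent variable with a new signal.
    s = np.asarray(signal).ravel()[:latent_dim]
    return 0.5 * y + 0.5 * s

def update_theta(theta, y):
    # theta update function: a small correction derived from y.
    return theta * (1.0 + 0.05 * float(y.mean()))

h = rng.normal(size=dim)
for theta in thetas:
    h = np.tanh(theta @ h)

y = update_y(y, G(h))                      # y updated from the intermediate output
for i in range(3):                         # in order from the input-layer side
    thetas[i] = update_theta(thetas[i], y) # theta_i from the updated y
    y = update_y(y, thetas[i])             # y from the updated theta_i
```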
Note that the feedback targets described above are merely examples, and the feedback targets according to the present embodiment are not limited thereto.
For example, the feedback based on the output from the feature-amount conversion function F3 in the third layer may target the feature-amount conversion function F1 and a feature-amount conversion function F2 in the first layer and the second layer.
Meanwhile, the feedback based on the output from the feature-amount conversion function F3 in the third layer may target the feature-amount conversion function F1 and the feature-amount conversion function F3 in the first layer and the third layer.
Meanwhile, the feedback based on the output from the feature-amount conversion function F3 in the third layer may target the feature-amount conversion function F2 and the feature-amount conversion function F3 in the second layer and the third layer.
As described above, the feedback targets according to the present embodiment can be arbitrarily and flexibly designed.
Next, a learning method according to the present embodiment will be described with specific examples.
The processing unit 210 according to the present embodiment may perform learning by a gradient method using parameter setters 220, for example.
For example, in a case where the input information x is time-series data, the parameter setters 220 according to the present embodiment may set initial values of the function parameters at the start of new learning on the basis of the function parameters updated on the basis of inputs of the second and subsequent frames of the input information x in past learning using the input information x.
As described above, in a case of performing learning using time-series data of two or more frames as the input information x, the parameter setters 220 may set the function parameters used for conversion of the first frame of the input information x.
In the case of the example illustrated in the figure, a parameter setter 220A to a parameter setter 220C set the initial values of the function parameter θ1 to the function parameter θ3, respectively.
At this time, each of the parameter setter 220A to the parameter setter 220C may perform the above setting on the basis of the function parameter θ1 to the function parameter θ3 updated in the second and subsequent frames stored in past learning.
For example, each of the parameter setter 220A to the parameter setter 220C may set a value randomly selected from among the function parameters θ used in the past, or may set a calculated moving average, a value obtained by adding noise to the moving average, or the like.
Furthermore, a parameter setter may set an initial value of the latent variable y in the first frame.
As described above, in a case of performing learning using time-series data of two or more frames as the input information x, as illustrated in the figure, a parameter setter 220D may set the initial value of the latent variable y used in the first frame.
At this time, the parameter setter 220D may perform the above setting on the basis of the latent variable y updated in the second and subsequent frames stored in past learning.
For example, the parameter setter 220D may set a value randomly selected from among the latent variables y used in the past, or may set an average value, a median value, or the like.
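The following sketch illustrates both kinds of parameter setters under the assumption that past values are simply stored in lists; the window size and noise scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
past_thetas = [rng.normal(size=(4, 4)) for _ in range(10)]  # stored theta values
past_ys = [rng.normal(size=2) for _ in range(10)]           # stored y values

def set_initial_theta(history, mode="moving_average", window=5):
    if mode == "random":
        return history[rng.integers(len(history))]          # random past value
    avg = np.mean(history[-window:], axis=0)                # moving average
    if mode == "moving_average_noise":
        return avg + rng.normal(scale=0.01, size=avg.shape) # average plus noise
    return avg

def set_initial_y(history, mode="average"):
    if mode == "random":
        return history[rng.integers(len(history))]
    if mode == "median":
        return np.median(history, axis=0)
    return np.mean(history, axis=0)

theta1_init = set_initial_theta(past_thetas, mode="moving_average_noise")
y_init = set_initial_y(past_ys, mode="median")
```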
Next, learning using a search algorithm according to the present embodiment will be described. For example, the processing unit 210 according to the present embodiment may perform learning using a search algorithm as necessary while using the gradient method as a basis.
In the case of the example illustrated in the figure, the processing unit 210 first performs learning by the gradient method (S102).
Here, in a case where the learning by the gradient method in step S102 satisfies a preset end condition (S104: Yes), the processing unit 210 may end the learning.
On the other hand, in a case where the learning by the gradient method in step S102 does not satisfy the preset end condition (S104: No), the processing unit 210 searches for each function parameter by the search algorithm (S106).
Next, the processing unit 210 performs further learning with each function parameter obtained in step S106 set as a correct answer value (ground truth) (S108).
After the processing in step S108, the processing unit 210 returns to step S102 and repeatedly executes the processing in steps S102 to S108 until the preset end condition is satisfied.
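A minimal sketch of this loop, with a toy loss standing in for the actual task loss and random search standing in for the (unspecified) search algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=4)

def loss(theta):
    # Toy objective standing in for the actual task loss.
    return float(np.sum((theta - 1.0) ** 2))

def gradient_step(theta, lr=0.1):
    # S102: learning by the gradient method (analytic gradient of the toy loss).
    return theta - lr * 2.0 * (theta - 1.0)

def search(theta, n=64, scale=0.5):
    # S106: search algorithm (here: random search around the current theta).
    candidates = theta + rng.normal(scale=scale, size=(n, theta.size))
    return candidates[int(np.argmin([loss(c) for c in candidates]))]

for _ in range(100):
    theta = gradient_step(theta)
    if loss(theta) < 1e-3:               # S104: end condition satisfied
        break
    target = search(theta)               # S106: search for parameters
    theta += 0.5 * (target - theta)      # S108: learn toward the searched values
```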
The learning using the search algorithm as described above is particularly effective, for example, in a situation where an optimal control system cannot be implemented only by the gradient method.
Furthermore, even in a case where a local solution is obtained in the learning by the gradient method, learning is performed with each function parameter obtained by the search algorithm set as a correct answer value, so that the feedback circuit can escape from the local solution.
Furthermore, by repeating the learning of the entire network by the gradient method and the learning of the feedback portion with the search result set as a correct answer value, it is possible to achieve overall end-to-end optimization while avoiding a local solution.
Note that, instead of setting each function parameter obtained by the search algorithm as a correct answer value, the processing unit 210 may perform learning with a loss designed so that a feature amount output from the feature-amount conversion function F1, the feature-amount conversion function F3, or a subsequent stage matches the feature amount obtained in the case of using the correct answer value found by the search algorithm.
In a case where there is a plurality of sets of control outputs {θ1, θ2, θ3} that achieve the same output, there is a possibility that the learning using the loss as described above improves the accuracy.
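A sketch of such a loss, assuming an illustrative conversion F; the point is that feature amounts, not parameters, are compared, so different parameter sets producing the same output are not penalized.

```python
import numpy as np

def F(x, theta):
    # Illustrative feature-amount conversion.
    return np.tanh(theta @ x)

def feature_matching_loss(x, theta_current, theta_searched):
    # Compare the converted feature amounts rather than the parameters themselves.
    return float(np.sum((F(x, theta_current) - F(x, theta_searched)) ** 2))

x = np.ones(4)
print(feature_matching_loss(x, np.eye(4), 0.9 * np.eye(4)))
```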
Meanwhile, the processing unit 210 according to the present embodiment can learn the correspondence between an output from the intermediate layers in the neural network and a combination of a plurality of function parameters by reinforcement learning.
For example, in a Q table, the input information x input to the feedback function G and a set of control output solutions {θ1, θ2, θ3} for the input information x are stored in association with each other.
Note that the processing unit 210 may treat the Q table as the latent variable y (that is, y = Q table) and update the Q table accordingly.
According to the reinforcement learning as described above, it is possible to set the function parameters θ more suitable for the input information x.
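A minimal sketch of such a Q table, assuming a simple discretization of the input to the feedback function G; the key construction and the blending rule are illustrative assumptions.

```python
import numpy as np

q_table = {}  # key -> {"theta1": ..., "theta2": ..., "theta3": ...}

def key_of(g_input, step=0.5):
    # Discretize the input to the feedback function G into a hashable key.
    return tuple(np.round(np.asarray(g_input) / step).astype(int).tolist())

def lookup(g_input, default=None):
    return q_table.get(key_of(g_input), default)

def update(g_input, solution, alpha=0.5):
    # Blend a newly found control output solution into the stored entry.
    k = key_of(g_input)
    if k in q_table:
        q_table[k] = {name: (1 - alpha) * q_table[k][name] + alpha * value
                      for name, value in solution.items()}
    else:
        q_table[k] = dict(solution)

update([0.2, 1.1], {"theta1": 0.9, "theta2": 1.2, "theta3": 0.7})
print(lookup([0.2, 1.1]))
```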
Next, the function parameters according to the present embodiment will be described with specific examples.
The function parameters according to the present embodiment may be, for example, parameters used in kernels (for example, convolution filters).
For example, each kernel update function according to the present embodiment may feed back all kernel values on the basis of an output from the feedback function G. In this case, although the calculation amount increases, feedback with a high degree of freedom can be implemented.
Furthermore, for example, each kernel update function according to the present embodiment may perform feedback of multiplying the original kernel by a constant for each channel. In this case, it is possible to implement feedback focusing on a specific layer with the calculation amount suppressed.
Furthermore, for example, each kernel update function according to the present embodiment may perform feedback of multiplying the original kernel by a single constant. In this case, the calculation amount is further reduced, and the degree of reaction of the neurons can be adjusted.
Furthermore, for example, each kernel update function according to the present embodiment can feed back not only the kernel but also a bias.
According to the feedback for finely correcting the original kernel and bias as described above, an effect of reducing the calculation amount and facilitating learning is expected.
Moreover, performing feedback to a convolution layer makes it possible to implement precise control as compared with control of feeding back a feature amount, which causes a large calculation amount.
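The following sketch contrasts the kernel update variants described above for a hypothetical convolution kernel; the update magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.normal(size=(8, 3, 3, 3))   # (out_ch, in_ch, kH, kW)
bias = np.zeros(8)
g_out = 0.2                              # output of the feedback function G

# (a) Feed back all kernel values: highest degree of freedom, highest cost.
kernel_full = kernel + 0.01 * g_out * rng.normal(size=kernel.shape)

# (b) Multiply the original kernel by a constant for each channel.
per_channel = 1.0 + 0.05 * g_out * rng.normal(size=(8, 1, 1, 1))
kernel_per_channel = kernel * per_channel

# (c) Multiply the original kernel by a single constant: cheapest update.
kernel_scaled = kernel * (1.0 + 0.05 * g_out)

# (d) Feed back the bias as well.
bias_updated = bias + 0.05 * g_out
```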
Furthermore, the function parameters according to the present embodiment may be, for example, parameters used in activation functions.
Note that, in this case, each of the parameters a used in the activation functions may be updated by any of various calculations.
Furthermore, the processing unit 210 can also perform feedback not for each set of input information x but for each feature amount. In this case, a non-uniform distribution in the input information x (for example, a bright portion and a dark portion exist in an image) can be corrected.
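A sketch of feedback to an activation-function parameter, assuming a PReLU-like form for the parameter a (the disclosure does not specify the activation); the per-feature variant keeps one a per channel so that each channel can be corrected differently.

```python
import numpy as np

def prelu(x, a):
    # Assumed activation: PReLU-like, with negative slope a.
    return np.where(x >= 0, x, a * x)

g_out = 0.1                                  # output of the feedback function G
a_global = 0.25                              # one a per set of input information
a_per_feature = np.full(8, 0.25)             # one a per feature (channel)

a_global += 0.05 * g_out
# Per-feature feedback: channels can be corrected non-uniformly, e.g. to
# compensate for bright and dark regions in an image.
a_per_feature += 0.05 * g_out * np.linspace(0.5, 1.5, 8)

print(prelu(np.linspace(-1.0, 1.0, 8), a_per_feature))
```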
The feedback of the function parameters according to the present embodiment has been described above with specific examples.
Next, feedback of processing parameters according to the present embodiment will be described.
The processing unit 210 according to the present embodiment may further perform feedback related to processing parameters used for acquisition of input information to be input to the neural network or for processing of the input information.
A camera 15 illustrated in the figure includes an imaging sensor 115 and an image processing processor 125.
RAW data (input information) captured by the imaging sensor 115 is input to the image processing processor 125, and is subjected to image processing (an example of preprocessing according to the present embodiment) by the image processing processor 125.
Furthermore, the input information subjected to the image processing by the image processing processor 125 is input to a neural network 212 included in the processing unit 210.
In a feedback circuit 214 included in the processing unit 210, the feedback function G performs calculation based on an output from an intermediate layer of the neural network and outputs a calculation result to a plurality of parameter update functions 240.
The processing parameters according to the present embodiment may be, for example, parameters used by the imaging sensor 115 that acquires image information.
In the case of the example illustrated in the figure, a parameter update function 240A updates a parameter related to an exposure time on the basis of the output from the feedback function G.
Furthermore, a parameter update function 240B updates a parameter related to an analog gain on the basis of the output from the feedback function G.
Furthermore, the processing parameters according to the present embodiment may be, for example, parameters used by the image processing processor 125 that processes image information.
In the case of the example illustrated in the figure, a parameter update function 240C updates a parameter related to denoising on the basis of the output from the feedback function G.
Furthermore, a parameter update function 240D updates a parameter related to tone mapping on the basis of the output from the feedback function G.
Note that a parameter update function 240E updates the above-described function parameters on the basis of the output from the feedback function G.
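A minimal sketch of this pipeline-wide feedback, with hypothetical parameter names for the update functions 240A to 240E and illustrative update magnitudes:

```python
import numpy as np

params = {
    "exposure_time": 1 / 60,    # 240A: imaging sensor 115
    "analog_gain": 1.0,         # 240B: imaging sensor 115
    "denoise_strength": 0.3,    # 240C: image processing processor 125
    "tone_mapping_gamma": 1.0,  # 240D: image processing processor 125
}
thetas = [np.eye(3) for _ in range(3)]       # 240E: function parameters

def apply_feedback(g_out, params, thetas):
    # Each parameter update function nudges its parameter from G's output.
    params["exposure_time"] *= 1.0 + 0.1 * g_out
    params["analog_gain"] *= 1.0 + 0.1 * g_out
    params["denoise_strength"] += 0.05 * g_out
    params["tone_mapping_gamma"] -= 0.05 * g_out
    thetas = [theta * (1.0 + 0.01 * g_out) for theta in thetas]
    return params, thetas

params, thetas = apply_feedback(0.4, params, thetas)
```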
According to the feedback of the processing parameters as described above, even in a case where the accuracy of recognition processing is not improved only by the feedback of the function parameters to the neural network 212, controlling the entire recognition pipeline (acquisition, processing, and recognition of image information) makes it possible to improve the recognition accuracy.
Furthermore, according to the feedback of the processing parameters as described above, the processing load of the neural network 212 is reduced, so that predetermined recognition accuracy can be ensured even with a lightweight model in situations where calculation is a bottleneck.
Furthermore, as illustrated in the figure, environment information may be further input to the feedback function G.
The environment information may include, for example, illuminance, temperature, humidity, weather, time, position information, and the like.
As described above, the feedback circuit 214 according to the present embodiment can arbitrarily switch execution or non-execution of feedback. The switching of the execution or non-execution of feedback may be set on the basis of, for example, time (for example, every hour or the like) or may be set by an instruction from a user.
Meanwhile, the processing unit 210 according to the present embodiment may determine whether or not to execute feedback related to both or any of the function parameters and the processing parameters on the basis of the environment information.
For example, the processing unit 210 may perform control such that feedback is executed in a case where a predetermined environmental change is detected (for example, it gets dark, it starts to rain, or the humidity increases).
According to the control as described above, feedback is executed and the parameters are updated only when necessary, such as when the model can no longer adapt to the environment, so that the model can be adapted to the environment while the calculation cost in normal times is kept low.
Furthermore, according to the control as described above, it is possible to immediately adapt the model to the environment without relearning.
Furthermore, the processing unit 210 according to the present embodiment may update both or any of the function parameters and the processing parameters on the basis of the environment information.
For example, in a case where it gets dark and it is difficult to perform recognition processing, each parameter update function illustrated in the figure may update each parameter as follows.
Parameter update function 240A: Update the parameter so that the exposure time is longer
Parameter update function 240B: Update the parameter so that the gain is larger
Parameter update function 240C: Update the parameter so that denoising is performed more strongly
Parameter update function 240D: Update the parameter so that a dark portion is more emphasized
Parameter update function 240E: Update the parameters so as to be able to cope with a dark image and a noisy image
According to the parameter update based on the environment information as described above, it is possible to perform recognition with higher accuracy according to the situation.
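A sketch of this environment-based control, assuming illuminance as the monitored quantity; the threshold and step sizes are illustrative assumptions.

```python
params = {"exposure_time": 1 / 60, "analog_gain": 1.0,
          "denoise_strength": 0.3, "tone_mapping_gamma": 1.0}

def needs_feedback(env, prev_env, lux_drop=100.0):
    # Execute feedback only when a predetermined environmental change occurs.
    return (prev_env["illuminance"] - env["illuminance"]) > lux_drop

def dark_scene_update(params):
    params["exposure_time"] *= 2.0       # 240A: longer exposure time
    params["analog_gain"] *= 1.5         # 240B: larger gain
    params["denoise_strength"] += 0.2    # 240C: stronger denoising
    params["tone_mapping_gamma"] = 0.8   # 240D: emphasize dark portions
    return params

prev_env, env = {"illuminance": 300.0}, {"illuminance": 20.0}
if needs_feedback(env, prev_env):        # feedback circuit turned on
    params = dark_scene_update(params)   # otherwise skipped to save calculation
```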
The feedback of the function parameters and the processing parameters according to the present embodiment has been described above with specific examples.
Note that, in the above description, cases where the input information according to the present embodiment is time-series data have been described as main examples, but the input information according to the present embodiment may be, for example, non-time-series data such as a still image.
The processing unit 210 according to the present embodiment can also perform feedback related to a plurality of parameters a plurality of times with respect to the same input information input to the neural network 212.
In the case of the example illustrated in the figure, the input information x is first input to the neural network 212, and the function parameter θ1 to the function parameter θ3 are updated by feedback based on an output from an intermediate layer.
Thereafter, the input information x is input again to the feature-amount conversion function F1 to the feature-amount conversion function F3, and conversion using the updated function parameter θ1 to function parameter θ3 is performed.
As described above, even in a case where the input information x is non-time-series data, the same input information x is input a plurality of times, so that parameters that cannot be updated by one input are sequentially updated, and a more accurate processing result can be obtained.
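A minimal sketch of this repeated feedback on the same input, under the same illustrative assumptions as the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # the same non-time-series input
thetas = [rng.normal(size=(4, 4)) for _ in range(3)]

def forward(x, thetas):
    h = x
    for theta in thetas:
        h = np.tanh(theta @ h)
    return h

for i in range(3):                              # same x, multiple passes
    h = forward(x, thetas)
    g = float(np.mean(h))                       # feedback function G
    thetas = [theta * (1.0 + 0.05 * g) for theta in thetas]
    print(f"pass {i}: |output| = {np.linalg.norm(h):.3f}")
```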
Furthermore, in the above description, cases where the processing unit 210 performs feedback to all the intermediate layers included in the neural network 212 by the single feedback function G have been described as main examples.
Meanwhile, the number of feedback functions G according to the present embodiment may be two or more, and grouping of intermediate layers that receive feedback based on an output of the feedback function G can be arbitrarily set.
For example, the processing unit 210 according to the present embodiment may perform feedback related to a plurality of function parameters on the basis of a plurality of intermediate feature amounts.
In the case of the example illustrated in the figure, a feedback function G1 performs calculation based on an output from a feature-amount conversion function Fn, and outputs a calculation result to a θm update function to a θn update function.
Furthermore, the feedback function G2 performs calculation based on the output from the feedback function G1 and an output from a feature-amount conversion function Fm-1, and outputs a calculation result to the θ1 update function to a θm-1 update function.
As described above, the processing unit 210 according to the present embodiment can implement feedback based on the degrees of reaction in a plurality of intermediate layers.
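A sketch of this grouped feedback with two feedback functions, assuming an illustrative group boundary m and update rules:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, dim = 4, 2, 4                        # layers split into two groups at m
thetas = [rng.normal(size=(dim, dim)) for _ in range(n)]

h, outputs = rng.normal(size=dim), []
for theta in thetas:
    h = np.tanh(theta @ h)
    outputs.append(h)

g1 = float(np.mean(outputs[-1]))           # G1: from the last layer's output
for i in range(m, n):                      # feedback to the later group
    thetas[i] = thetas[i] * (1.0 + 0.05 * g1)

g2 = float(np.mean(outputs[m - 1])) + g1   # G2: from an earlier output and G1
for i in range(m):                         # feedback to the earlier group
    thetas[i] = thetas[i] * (1.0 + 0.05 * g2)
```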
Furthermore, the feedback control according to the present embodiment is not limited to the update of the parameters.
The function parameters and the processing parameters according to the present embodiment may be used for determining the contents of processing that uses those parameters, or whether or not to execute the processing.
Specific examples of this determination control are illustrated in the drawings.
As described above, the feedback control according to the present embodiment includes determination of processing contents or presence or absence of processing based on an updated parameter.
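A minimal sketch of such determination control, with a hypothetical denoising step whose execution is decided by the updated parameter; the threshold is an assumption.

```python
import numpy as np

def denoise(image, strength):
    # Stand-in denoiser: blends each pixel toward the image mean.
    return (1.0 - strength) * image + strength * image.mean()

def maybe_denoise(image, denoise_strength, threshold=0.05):
    # The updated parameter decides the presence or absence of the processing.
    if denoise_strength < threshold:
        return image                       # denoising is skipped entirely
    return denoise(image, denoise_strength)

image = np.ones((4, 4))
out = maybe_denoise(image, denoise_strength=0.02)   # below threshold: skipped
```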
Note that the determination control of the processing contents or the presence or absence of processing based on an updated parameter may be combined with the feedback control based on the environment information described above.
Furthermore, in addition to the above combination, the individual controls described in the present disclosure can be arbitrarily combined unless they are alternative controls.
Next, an exemplary hardware configuration of an information processing device 90 according to one embodiment of the present disclosure will be described.
As illustrated in the figure, the information processing device 90 includes, for example, a processor 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input device 878, an output device 879, a storage 880, a drive 881, a connection port 882, and a communication device 883.
The processor 871 functions as, for example, an arithmetic processing device or a control device, and controls the overall operation of each component or a part thereof on the basis of various programs recorded in the ROM 872, the RAM 873, the storage 880, or a removable storage medium 901.
The ROM 872 is a means for storing a program to be read into the processor 871, data to be used for calculation, and the like. The RAM 873 temporarily or permanently stores, for example, a program to be read into the processor 871, various parameters that appropriately change when the program is executed, and the like.
The processor 871, the ROM 872, and the RAM 873 are mutually connected via, for example, the host bus 874 capable of high-speed data transmission. Meanwhile, the host bus 874 is connected to the external bus 876 having a relatively low data transmission speed via the bridge 875, for example. Furthermore, the external bus 876 is connected to various components via the interface 877.
As the input device 878, for example, a mouse, a keyboard, a touch panel, a button, a switch, a lever, and the like are used. Moreover, as the input device 878, a remote controller (hereinafter referred to as a remote) capable of transmitting a control signal using infrared rays or other radio waves may be used. Furthermore, the input device 878 includes a voice input device such as a microphone.
The output device 879 is a device capable of visually or auditorily notifying a user of acquired information, such as a display device (for example, a cathode ray tube (CRT), an LCD, or an organic EL display), an audio output device (for example, a speaker or headphones), a printer, a mobile phone, or a facsimile. Furthermore, the output device 879 according to the present disclosure includes various vibration devices capable of outputting a haptic stimulus.
The storage 880 is a device for storing various types of data. As the storage 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like is used.
The drive 881 is, for example, a device that reads information recorded in the removable storage medium 901 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information in the removable storage medium 901.
The removable storage medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, various semiconductor storage media, or the like. It is needless to say that the removable storage medium 901 may be, for example, an IC card on which a non-contact IC chip is mounted, an electronic device, or the like.
The connection port 882 is a port for connecting an external connection device 902, such as a universal serial bus (USB) port, an IEEE 1394 port, a small computer system interface (SCSI), an RS-232C port, or an optical audio terminal, for example.
The external connection device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, an IC recorder, or the like.
The communication device 883 is a communication device for connecting to a network, and is, for example, a communication card for a wired or wireless LAN, Bluetooth (registered trademark), or wireless USB (WUSB), a router for optical communication, a router for asymmetric digital subscriber line (ADSL), a modem for various types of communication, or the like.
As described above, the processing unit 210 that executes processing using a neural network according to one embodiment of the present disclosure is provided. Furthermore, one of the features of the processing unit 210 is to perform feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
According to the above configuration, it is possible to implement highly efficient and highly accurate feedback with the calculation amount suppressed.
Although the preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to such examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive various changes or modifications within the scope of the technical idea described in the claims, and it is naturally understood that these also belong to the technical scope of the present disclosure.
Furthermore, a series of processing performed by each device described in the present disclosure may be implemented by a program stored in a non-transitory computer readable storage medium. For example, each program is read into the RAM when a computer executes the program, and is executed by a processor such as a CPU. The storage medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Furthermore, the program may be distributed via, for example, a network without using a storage medium.
Furthermore, the effects described in the present specification are merely illustrative or exemplary, and are not restrictive. That is, the technology according to the present disclosure can provide other effects that are apparent to those skilled in the art from the description of the present specification, in combination with or instead of the effects described above.
Note that the following configurations also belong to the technical scope of the present disclosure.
(1)
An information processing device including
a processing unit that executes processing using a neural network,
in which the processing unit performs feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
(2)
The information processing device according to (1), in which
(3)
The information processing device according to (2), in which
(4)
The information processing device according to any one of (1) to (3), in which
(5)
The information processing device according to any one of (1) to (4), in which
(6)
The information processing device according to any one of (1) to (4), in which
(7)
The information processing device according to any one of (1) to (6), in which
(8)
The information processing device according to any one of (1) to (7), in which
(9)
The information processing device according to any one of (1) to (8), in which
(10)
The information processing device according to (9), in which
(11)
The information processing device according to (9) or (10), in which
(12)
The information processing device according to any one of (9) to (11), in which
(13)
The information processing device according to any one of (9) to (12), in which
(14)
The information processing device according to (13), in which
(15)
The information processing device according to (13) or (14), in which
(16)
The information processing device according to any one of (1) to (15), in which
(17)
The information processing device according to any one of (1) to (16), in which
(18)
An information processing method including
executing, by a processor, processing using a neural network,
in which executing the processing further includes performing feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
(19)
A program for causing a computer to function as
an information processing device including a processing unit that executes processing using a neural network,
in which the processing unit performs feedback related to a plurality of function parameters respectively used in a plurality of intermediate layers in the neural network on the basis of an output from the intermediate layers.
Priority claim: JP 2022-014174, filed February 2022 (Japan, national).
Filing document: PCT/JP2023/000302, filed January 10, 2023 (WO).