The present disclosure relates to an information processing apparatus, an information processing method, an information processing program, a learning device, a learning method, a learning program, and a discriminative model.
In recent years, with the progress of medical devices, such as a computed tomography (CT) apparatus and a magnetic resonance imaging (MRI) apparatus, image diagnosis can be performed by using medical images of higher quality and higher resolution. In particular, in a case in which a target part is the brain, a region in which a cerebrovascular disorder, such as a cerebral infarction or a cerebral hemorrhage, has occurred can be specified by image diagnosis using a CT image, an MRI image, or the like. Therefore, various methods for supporting such image diagnosis have been proposed.
Incidentally, the cerebral infarction is a disease in which brain tissue is damaged by occlusion of a cerebral blood vessel, and is known to have a poor prognosis. In a case in which the cerebral infarction develops, irreversible cell death progresses with the elapse of time. Therefore, how to shorten the time to the start of treatment is an important issue. Here, in deciding whether to apply the thrombectomy treatment method, which is a typical treatment method for the cerebral infarction, two pieces of information, the “degree of extent of infarction” and the “presence or absence of large vessel occlusion (LVO)”, are required (see Appropriate Use Guidelines For Percutaneous Transluminal Cerebral Thrombectomy Devices, 4th edition, March 2020, p. 12-(1)).
On the other hand, in the diagnosis of a patient suspected of having a brain disease, the presence or absence of bleeding in the brain is often confirmed before the cerebral infarction is confirmed. Since bleeding in the brain can be clearly confirmed on a non-contrast CT image, a diagnosis using the non-contrast CT image is made first for the patient suspected of having the brain disease. However, in the non-contrast CT image, the difference in pixel value between the region of the cerebral infarction and the other regions is not very large. Moreover, in the non-contrast CT image, a hyperdense artery sign (HAS) reflecting a thrombus that causes the large vessel occlusion can be visually recognized but is not clear, so that it is difficult to specify a large vessel occlusion region. As described above, it is often difficult to specify an infarction region and the large vessel occlusion region by using the non-contrast CT image. Therefore, after the diagnosis using the non-contrast CT image, an MRI image or a contrast CT image is acquired to diagnose whether or not the cerebral infarction has developed, to specify the large vessel occlusion region, and to confirm the degree of extent of the infarction in a case in which the cerebral infarction has occurred.
However, in a case in which whether or not the cerebral infarction has developed is diagnosed by acquiring the MRI image or the contrast CT image after the diagnosis using the CT image, the elapsed time from the development of the infarction becomes long and the start of treatment is delayed; as a result, there is a high probability that the prognosis will be poor.
Therefore, methods for automatically extracting the infarction region and the large vessel occlusion region from the non-contrast CT image have been proposed. For example, JP2020-054580A proposes a method of specifying an infarction region and a thrombus region by using a discriminator that has been trained to extract the infarction region from a non-contrast CT image and a discriminator that has been trained to extract the thrombus region from the non-contrast CT image.
On the other hand, the place at which the HAS representing the large vessel occlusion region appears changes depending on which blood vessel is occluded, and its appearance varies depending on the angle of the tomographic plane with respect to the brain in the CT image, the properties of the thrombus, the degree of occlusion, and the like. Moreover, it may be difficult to distinguish the HAS from similar structures in the vicinity, such as calcification. Moreover, the infarction region is generated in the blood vessel dominant region of the blood vessel in which the HAS is generated. Therefore, in a case in which the large vessel occlusion region can be specified, it is easy to specify the infarction region.
The present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to enable accurate specification of a large vessel occlusion region or an infarction region by using a non-contrast CT image of a head.
The present disclosure relates to an information processing apparatus comprising: at least one processor, in which the processor acquires a non-contrast CT image of a head of a patient and first information representing any one of an infarction region or a large vessel occlusion region in the non-contrast CT image, and derives second information representing the other of the infarction region or the large vessel occlusion region in the non-contrast CT image based on the non-contrast CT image and the first information.
It should be noted that, in the information processing apparatus according to the present disclosure, the processor may derive the second information by using a discriminative model that has been trained to output the second information in a case in which the non-contrast CT image and the first information are input.
In addition, in the information processing apparatus according to the present disclosure, the processor may derive the second information further based on information on symmetrical regions with respect to a midline of a brain in at least the non-contrast CT image out of the non-contrast CT image and the first information.
In addition, in the information processing apparatus according to the present disclosure, the information on the symmetrical regions may be inversion information obtained by inverting at least the non-contrast CT image out of the non-contrast CT image and the first information with respect to the midline of the brain.
In addition, in the information processing apparatus according to the present disclosure, the processor may derive the second information further based on at least one of information representing an anatomical region of a brain or clinical information.
In addition, in the information processing apparatus according to the present disclosure, the processor may acquire the first information by extracting any one of the infarction region or the large vessel occlusion region from the non-contrast CT image.
In addition, in the information processing apparatus according to the present disclosure, the processor may derive quantitative information for at least one of the first information or the second information, and may display the quantitative information.
The present disclosure relates to a learning device comprising: at least one processor, in which the processor acquires training data including input data consisting of a non-contrast CT image of a head of a patient with cerebral infarction and first information representing any one of an infarction region or a large vessel occlusion region in the non-contrast CT image, and correct answer data consisting of second information representing the other of the infarction region or the large vessel occlusion region in the non-contrast CT image, and trains a neural network through machine learning using the training data to construct a discriminative model that outputs the second information in a case in which the non-contrast CT image and the first information are input.
The present disclosure relates to a discriminative model that, in a case in which a non-contrast CT image of a head of a patient and first information representing any one of an infarction region or a large vessel occlusion region in the non-contrast CT image are input, outputs second information representing the other of the infarction region or the large vessel occlusion region in the non-contrast CT image.
The present disclosure relates to an information processing method comprising: acquiring a non-contrast CT image of a head of a patient and first information representing any one of an infarction region or a large vessel occlusion region in the non-contrast CT image; and deriving second information representing the other of the infarction region or the large vessel occlusion region in the non-contrast CT image based on the non-contrast CT image and the first information.
The present disclosure relates to a learning method comprising: acquiring training data including input data consisting of a non-contrast CT image of a head of a patient with cerebral infarction and first information representing any one of an infarction region or a large vessel occlusion region in the non-contrast CT image, and correct answer data consisting of second information representing the other of the infarction region or the large vessel occlusion region in the non-contrast CT image; and training a neural network through machine learning using the training data to construct a discriminative model that outputs the second information in a case in which the non-contrast CT image and the first information are input.
It should be noted that programs causing a computer to execute the information processing method and the learning method according to the present disclosure may be provided.
According to the present disclosure, the large vessel occlusion region or the infarction region can be accurately specified by using the non-contrast CT image of the head.
In the following, a first embodiment of the present disclosure will be described with reference to the drawings.
The three-dimensional image capturing apparatus 2 is an apparatus that images a diagnosis target part of a subject to generate a three-dimensional image representing the part, and is, specifically, a CT apparatus, an MRI apparatus, a PET apparatus, or the like. A medical image generated by the three-dimensional image capturing apparatus 2 is transmitted to and stored in the image storage server 3. It should be noted that, in the present embodiment, the diagnosis target part of a patient who is the subject is the brain, the three-dimensional image capturing apparatus 2 is the CT apparatus, and a three-dimensional CT image G0 of the head of the patient who is the subject is generated in the CT apparatus. It should be noted that, in the present embodiment, the CT image G0 is a non-contrast CT image acquired by performing imaging without using a contrast agent.
The image storage server 3 is a computer that stores and manages various data, and comprises a large-capacity external storage device and software for database management. The image storage server 3 communicates with another device via the wired or wireless network 4 to transmit and receive image data and the like to and from the other device. Specifically, the image storage server 3 acquires various data including the image data of the CT image generated by the three-dimensional image capturing apparatus 2 via the network, and stores and manages the data in a recording medium, such as the large-capacity external storage device. Training data for constructing a discriminative model is also stored in the image storage server 3, as will be described below. It should be noted that a storage format of the image data and the communication between the devices via the network 4 are based on a protocol, such as digital imaging and communications in medicine (DICOM).
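By way of illustration, a DICOM CT series can be assembled into a three-dimensional volume as in the following sketch. This is not part of the disclosed apparatus; it is a minimal Python example using the pydicom library, and the directory layout, the file extension, and the use of the slice position tag for ordering are assumptions.

```python
from pathlib import Path

import numpy as np
import pydicom  # reads DICOM files such as those exchanged via the network 4

def load_ct_volume(series_dir: str) -> np.ndarray:
    """Load a DICOM CT series into a 3D volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order the slices along the scan axis using the patient position tag.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored values to Hounsfield units via the DICOM rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
```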
Next, the information processing apparatus and the learning device according to the first embodiment of the present disclosure will be described.
The storage 13 is realized by a hard disk drive (HDD), a solid state drive (SSD), a flash memory, or the like. An information processing program 12A and a learning program 12B are stored in the storage 13 as a storage medium. The CPU 11 reads out the information processing program 12A and the learning program 12B from the storage 13, loads the information processing program 12A and the learning program 12B in the memory 16, and executes the loaded information processing program 12A and learning program 12B.
Next, a functional configuration of the information processing apparatus according to the first embodiment will be described.
The information acquisition unit 21 acquires the non-contrast CT image G0 of the head of the patient from the image storage server 3. Moreover, the information acquisition unit 21 acquires the training data for training a neural network from the image storage server 3 in order to construct the discriminative model described below.
The information derivation unit 22 acquires the first information representing any one of the infarction region or the large vessel occlusion region in the CT image G0, and derives the second information representing the other of the infarction region or the large vessel occlusion region in the CT image G0 based on the CT image G0 and the first information. In the present embodiment, the first information representing the infarction region in the CT image G0 is acquired, and the second information representing the large vessel occlusion region in the CT image G0 is derived based on the CT image G0 and the first information.
The second discriminative model 22B is constructed by training U-Net, which is a type of convolutional neural network, through machine learning using a large amount of the training data so as to extract the large vessel occlusion region from the CT image G0 as the second information based on the CT image G0 and the mask image M0 representing the infarction region in the CT image G0.
In the present embodiment, the CT image G0 and the mask image M0 representing the infarction region in the CT image G0 are input in combination to the first layer 31. It should be noted that, depending on the CT image G0, the midline of the brain may be inclined with respect to the perpendicular bisector of the CT image G0. In such a case, the brain in the CT image G0 is rotated such that the midline of the brain matches the perpendicular bisector of the CT image G0. In this case, it is required to also perform the same rotation processing on the mask image M0.
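The alignment described above could be sketched as follows. This is a minimal Python example, assuming that the tilt angle of the midline has already been estimated by some means (the estimation itself is not shown); note that the identical rotation is applied to the image and the mask, with nearest-neighbor interpolation keeping the mask binary.

```python
import numpy as np
from scipy import ndimage

def align_midline(ct_slice: np.ndarray, mask_slice: np.ndarray,
                  tilt_deg: float) -> tuple[np.ndarray, np.ndarray]:
    """Rotate the brain so the midline matches the image's perpendicular
    bisector, applying the same rotation to the infarction mask M0."""
    # Linear interpolation preserves the gray levels of the CT image.
    ct_rot = ndimage.rotate(ct_slice, -tilt_deg, reshape=False, order=1)
    # Nearest-neighbor interpolation keeps the mask strictly binary.
    mask_rot = ndimage.rotate(mask_slice, -tilt_deg, reshape=False, order=0)
    return ct_rot, mask_rot
```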
The first layer 31 includes two convolutional layers, and outputs a feature amount map F1 in which the two feature amount maps of the CT image G0 and the mask image M0 after the convolution are integrated. The integrated feature amount map F1 is input to the ninth layer 39, as shown by a broken line in the drawing.
The second layer 32 includes two convolutional layers, and a feature amount map F2 output from the second layer 32 is input to the eighth layer 38, as shown by a broken line in the drawing.
The third layer 33 also includes two convolutional layers, and a feature amount map F3 output from the third layer 33 is input to the seventh layer 37, as shown by a broken line in the drawing.
In addition, in the present embodiment, in a case in which the second information is derived, the information on the symmetrical regions with respect to the midline of the brain in the CT image G0 and the mask image M0 representing the infarction region of the CT image G0 is used. Therefore, in the third layer 33 of the second discriminative model 22B, the feature amount map F3 subjected to the pooling is inverted left and right with respect to the midline of the brain, and an inversion feature amount map F3A is derived. The inversion feature amount map F3A is an example of inversion information according to the present disclosure.
The fourth layer 34 also includes two convolutional layers, and the feature amount map F3 subjected to the pooling and the inversion feature amount map F3A are input to the first convolutional layer. A feature amount map F4 output from the fourth layer 34 is input to the sixth layer 36, as shown by a broken line in the drawing.
The fifth layer 35 includes one convolutional layer, and a feature amount map F5 output from the fifth layer 35 is subjected to upsampling, is doubled in size, and is input to the sixth layer 36.
The sixth layer 36 includes two convolutional layers, and performs a convolution operation by integrating the feature amount map F4 from the fourth layer 34 and the feature amount map F5, which is subjected to the upsampling, from the fifth layer 35. A feature amount map F6 output from the sixth layer 36 is subjected to upsampling, is doubled in size, and is input to the seventh layer 37.
The seventh layer 37 includes two convolutional layers, and performs the convolution operation by integrating the feature amount map F3 from the third layer 33 and the feature amount map F6, which is subjected to the upsampling, from the sixth layer 36. A feature amount map F7 output from the seventh layer 37 is subjected to upsampling and is input to the eighth layer 38.
The eighth layer 38 includes two convolutional layers, and performs the convolution operation by integrating the feature amount map F2 from the second layer 32 and the feature amount map F7, which is subjected to the upsampling, from the seventh layer 37. A feature amount map F8 output from the eighth layer 38 is subjected to upsampling and is input to the ninth layer 39.
The ninth layer 39 includes three convolutional layers, and performs the convolution operation by integrating the feature amount map F1 from the first layer 31 and the feature amount map F8, which is subjected to the upsampling, from the eighth layer 38. A feature amount map F9 output from the ninth layer 39 is an image obtained by extracting the large vessel occlusion region in the CT image G0.
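For concreteness, the network described above could be sketched in PyTorch as follows. The channel widths, kernel sizes, activation functions, the pooling between the early layers, and the final sigmoid are assumptions, since the text specifies only the number of convolutional layers per layer, the upsampling steps, the skip connections, and the left-right inversion of the feature amount map F3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, as in most layers described above."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SecondDiscriminativeModel(nn.Module):
    """Sketch of the U-Net 22B; input height and width assumed divisible by 16."""

    def __init__(self, base: int = 32):
        super().__init__()
        self.layer1 = conv_block(2, base)             # CT image G0 + mask M0
        self.layer2 = conv_block(base, base * 2)
        self.layer3 = conv_block(base * 2, base * 4)
        # Layer 4 receives F3 and its inversion F3A, hence doubled channels.
        self.layer4 = conv_block(base * 8, base * 8)
        self.layer5 = nn.Sequential(                  # a single convolution
            nn.Conv2d(base * 8, base * 8, 3, padding=1), nn.ReLU(inplace=True))
        self.layer6 = conv_block(base * 16, base * 4)
        self.layer7 = conv_block(base * 8, base * 2)
        self.layer8 = conv_block(base * 4, base)
        self.layer9 = nn.Sequential(                  # three convolutions
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1))                    # occlusion-region map F9

    def forward(self, ct: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        f1 = self.layer1(torch.cat([ct, mask], dim=1))
        f2 = self.layer2(F.max_pool2d(f1, 2))
        f3 = self.layer3(F.max_pool2d(f2, 2))
        p3 = F.max_pool2d(f3, 2)
        f3a = torch.flip(p3, dims=[-1])               # inversion about the midline
        f4 = self.layer4(torch.cat([p3, f3a], dim=1))
        f5 = self.layer5(F.max_pool2d(f4, 2))
        up5 = F.interpolate(f5, scale_factor=2)       # doubled in size
        f6 = self.layer6(torch.cat([f4, up5], dim=1))
        f7 = self.layer7(torch.cat([f3, F.interpolate(f6, scale_factor=2)], dim=1))
        f8 = self.layer8(torch.cat([f2, F.interpolate(f7, scale_factor=2)], dim=1))
        f9 = self.layer9(torch.cat([f1, F.interpolate(f8, scale_factor=2)], dim=1))
        return torch.sigmoid(f9)
```

Under these assumptions, feeding a two-channel slice (the CT image G0 and the mask image M0) yields a probability map from which the mask image H0 can be obtained by thresholding.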
In the present embodiment, a large amount of the training data 40 is stored in the image storage server 3, and the training data 40 is acquired from the image storage server 3 by the information acquisition unit 21 and is used for training the U-Net by the learning unit 23.
The learning unit 23 inputs the non-contrast CT image 43 and the mask image 44, which are the input data 41, to the U-Net, and causes the U-Net to output an image representing the large vessel occlusion region in the non-contrast CT image 43. Specifically, the learning unit 23 causes the U-Net to extract the HAS in the non-contrast CT image 43 and to output a mask image in which the part of the HAS is masked. The learning unit 23 derives the difference between the output image and the correct answer data 42 as a loss, and learns the connection weights of each layer in the U-Net and the kernel coefficients such that the loss becomes small. It should be noted that, at the time of the learning, a perturbation may be added to the mask image 44. As the perturbation, for example, morphology processing may be applied to the mask with a random probability, or the mask may be subjected to zero padding. By adding the perturbation to the mask image 44, it is possible to handle the pattern observed in the cerebral infarction in a hyperacute phase, in which only the thrombus appears on the image without a remarkable infarction region, and it is further possible to prevent the second discriminative model 22B from becoming excessively dependent on the input mask image at the time of discrimination.
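The perturbation of the mask image 44 might be sketched as follows. The probabilities and the number of morphology iterations are assumptions, and "zero padding" is interpreted here as replacing the mask with all zeros so that no infarction region is shown.

```python
import numpy as np
from scipy import ndimage

def perturb_mask(mask: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly perturb the training mask (branch probabilities are assumed)."""
    r = rng.random()
    if r < 0.1:
        return np.zeros_like(mask)    # zero padding: no infarction region shown
    if r < 0.3:                        # morphology processing with random probability
        return ndimage.binary_dilation(mask, iterations=2).astype(mask.dtype)
    if r < 0.5:
        return ndimage.binary_erosion(mask, iterations=2).astype(mask.dtype)
    return mask                        # otherwise leave the mask unchanged
```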
Then, the learning unit 23 repeatedly performs the learning until the loss becomes equal to or less than a predetermined threshold value. As a result, the second discriminative model 22B is constructed, which extracts the large vessel occlusion region included in the CT image G0 as the second information and outputs a mask image H0 representing the large vessel occlusion region in the CT image G0 in a case in which the non-contrast CT image G0 and the mask image M0 representing the infarction region in the CT image G0 are input. It should be noted that the learning unit 23 may construct the second discriminative model 22B by repeatedly performing the learning a predetermined number of times.
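A training loop with both stopping rules might look like the following sketch. The binary cross-entropy loss, the Adam optimizer, and the learning rate are assumptions, since the text only states that the difference between the output image and the correct answer data 42 is used as the loss.

```python
import torch

def train_until_converged(model, loader, threshold: float = 0.05,
                          max_epochs: int = 100) -> None:
    """Repeat the learning until the loss is at or below the threshold,
    or until a predetermined number of iterations is reached."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.BCELoss()  # assumed; the text does not name the loss
    for epoch in range(max_epochs):
        total = 0.0
        for ct, mask, target in loader:   # input data 41 and correct answer data 42
            optimizer.zero_grad()
            pred = model(ct, mask)
            loss = loss_fn(pred, target)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total / len(loader) <= threshold:
            break                          # loss equal to or less than the threshold
```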
It should be noted that the configuration of the U-Net constituting the second discriminative model 22B is not limited to that shown in the drawing.
The quantitative value derivation unit 24 derives a quantitative value for at least one of the infarction region or the large vessel occlusion region derived by the information derivation unit 22. The quantitative value is an example of quantitative information in the present disclosure. In the present embodiment, it is assumed that the quantitative value derivation unit 24 derives the quantitative values of both the infarction region and the large vessel occlusion region, but the quantitative value of any one of the infarction region or the large vessel occlusion region may be derived. Since the CT image G0 is the three-dimensional image, the quantitative value derivation unit 24 may derive a volume of the infarction region, a volume of the large vessel occlusion region, and a length of the large vessel occlusion region as the quantitative values. Moreover, the quantitative value derivation unit 24 may derive a score of ASPECTS as the quantitative value.
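For example, the volume can be derived by counting mask voxels and multiplying by the voxel volume, as in the following sketch. Defining the length of the large vessel occlusion region as the bounding-box diagonal is an assumption, since the text does not state how the length is measured.

```python
import numpy as np

def region_volume_ml(mask: np.ndarray,
                     spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary region: voxel count times voxel volume, in mL."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

def region_length_mm(mask: np.ndarray,
                     spacing_mm: tuple[float, float, float]) -> float:
    """Rough length of an occlusion as its bounding-box diagonal (assumed)."""
    idx = np.argwhere(mask)
    if idx.size == 0:
        return 0.0
    extent_mm = (idx.max(axis=0) - idx.min(axis=0) + 1) * np.asarray(spacing_mm)
    return float(np.linalg.norm(extent_mm))
```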
The “ASPECTS” is an abbreviation for Alberta stroke program early CT score, and is a scoring method in which early CT signs on a non-contrast CT image are quantified for the cerebral infarction in the middle cerebral artery region. Specifically, in the ASPECTS, in a case in which the medical image is the CT image, the middle cerebral artery region is classified into 10 regions in two representative cross sections (the basal ganglia level and the corona radiata level), the presence or absence of early ischemic change is evaluated for each region, and a positive part is scored by a point-deduction method. In the ASPECTS, a lower score indicates a larger infarction region. The quantitative value derivation unit 24 need only derive the score depending on whether or not the infarction region is included in each of the 10 regions described above.
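A point-deduction scoring of this kind might be sketched as follows, assuming that binary masks of the 10 ASPECTS regions are available (for example, by the registration with a standard brain image described below); the minimum overlap threshold is an assumption.

```python
import numpy as np

def aspects_score(infarct_mask: np.ndarray,
                  region_masks: dict[str, np.ndarray],
                  min_overlap_voxels: int = 1) -> int:
    """Start from 10 and subtract one point for each of the 10 regions
    that contains the infarction region (point-deduction method)."""
    score = 10
    for name, region in region_masks.items():
        if np.logical_and(infarct_mask, region).sum() >= min_overlap_voxels:
            score -= 1    # region positive for early ischemic change
    return score
```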
Moreover, the quantitative value derivation unit 24 may specify a dominant region of the occluded blood vessel based on the large vessel occlusion region, and derive an overlapping amount (volume) between the dominant region and the infarction region as the quantitative value.
It should be noted that the dominant region need only be specified by the registration of the CT image G0 with a prepared standard brain image in which the dominant region is specified.
The quantitative value derivation unit 24 specifies the artery in which the large vessel occlusion region is present, and specifies the dominant region of the specified artery of the brain. For example, in a case in which the large vessel occlusion region is present in the left anterior cerebral artery, the dominant region is specified as the anterior cerebral artery dominant region 61L. Here, the infarction region is generated downstream of the part of the artery in which the thrombus is present. Therefore, the infarction region is present in the anterior cerebral artery dominant region 61L. Therefore, the quantitative value derivation unit 24 need only derive, as the quantitative value, the volume of the infarction region with respect to the volume of the anterior cerebral artery dominant region 61L in the CT image G0.
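This quantitative value might be computed as in the following sketch, assuming that a binary mask of the dominant region has been obtained by the registration with the standard brain image described above.

```python
import numpy as np

def infarct_overlap(infarct_mask: np.ndarray,
                    territory_mask: np.ndarray,
                    spacing_mm: tuple[float, float, float]) -> tuple[float, float]:
    """Overlap volume (mL) between the dominant region and the infarction
    region, and the infarct volume as a fraction of the territory volume."""
    voxel_ml = spacing_mm[0] * spacing_mm[1] * spacing_mm[2] / 1000.0
    overlap = np.logical_and(infarct_mask, territory_mask)
    overlap_ml = float(overlap.sum()) * voxel_ml
    fraction = overlap.sum() / max(int(territory_mask.sum()), 1)
    return overlap_ml, float(fraction)
```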
The display control unit 25 displays the CT image G0 of the patient and the quantitative value on the display 14.
Next, processing performed in the first embodiment will be described.
In a case in which a negative determination is made in step ST4, the processing returns to step ST1, and the learning unit 23 repeats the processing of step ST1 to step ST4. In a case in which a positive determination is made in step ST4, the processing ends. As a result, the second discriminative model 22B is constructed.
Then, the quantitative value derivation unit 24 derives the quantitative value based on the information on the infarction region and the large vessel occlusion region (step ST13). Then, the display control unit 25 displays the CT image G0 and the quantitative value (step ST14), and ends the processing.
As described above, in the first embodiment, the large vessel occlusion region in the CT image G0 is derived based on the non-contrast CT image G0 of the head of the patient and the infarction region in the CT image G0. As a result, since the infarction region can be considered, the large vessel occlusion region can be accurately specified in the CT image G0.
Here, a brain disease, such as the cerebral infarction, rarely develops simultaneously in both the left brain and the right brain. Therefore, by using the inversion feature amount map F3A in which the feature amount map F3 is inverted with respect to the midline C0 of the brain, it is possible to specify the large vessel occlusion region while comparing the features of the left and right brains. As a result, the large vessel occlusion region can be specified with high accuracy.
Moreover, by displaying the quantitative value, a doctor can easily decide a treatment policy based on the quantitative value. For example, by displaying the volume or the length of the large vessel occlusion region, it is easy to decide the type or the length of a device used in applying the thrombectomy treatment method.
Next, a second embodiment of the present disclosure will be described. It should be noted that a configuration of an information processing apparatus in the second embodiment is the same as the configuration of the information processing apparatus in the first embodiment, only the processing to be performed is different, and thus the detailed description of the apparatus will be omitted.
The second discriminative model 82B in the second embodiment is constructed by training the U-Net through machine learning using a large amount of the training data to extract the infarction region of the brain from the CT image G0 as the second information based on the CT image G0 and the mask image M1 representing the large vessel occlusion region in the CT image G0. It should be noted that the configuration of the U-Net is the same as that of the first embodiment, and thus the detailed description thereof will be omitted here.
In the second embodiment, the learning unit 23 constructs the second discriminative model 82B by training the U-Net using a large amount of the training data 90 shown in the drawing.
Next, processing performed in the second embodiment will be described.
In a case in which a negative determination is made in step ST24, the processing returns to step ST21, and the learning unit 23 repeats the processing of step ST21 to step ST24. In a case in which a positive determination is made in step ST24, the processing ends. As a result, the second discriminative model 82B is constructed.
Then, the quantitative value derivation unit 24 derives the quantitative value based on the information on the infarction region and the large vessel occlusion region (step ST33). Then, the display control unit 25 displays the CT image G0 and the quantitative value (step ST34), and ends the processing.
In this way, in the second embodiment, the infarction region in the CT image G0 is derived based on the non-contrast CT image G0 of the head of the patient and the large vessel occlusion region in the CT image G0. As a result, since the large vessel occlusion region can be considered, the infarction region can be accurately specified in the CT image G0.
Next, a third embodiment of the present disclosure will be described. It should be noted that a configuration of an information processing apparatus in the third embodiment is the same as the configuration of the information processing apparatus in the first embodiment, only the processing to be performed is different, and thus the detailed description of the apparatus will be omitted.
The second discriminative model 83B in the third embodiment is constructed by training the U-Net through machine learning using a large amount of the training data so as to extract the large vessel occlusion region from the CT image G0 as the second information based on the CT image G0, the mask image M0 representing the infarction region in the CT image G0, and at least one of the information representing the anatomical region of the brain or the clinical information (hereinafter, referred to as additional information A0). It should be noted that the configuration of the U-Net is the same as that of the first embodiment, and thus the detailed description thereof will be omitted here.
Here, as the information representing the anatomical region, for example, a mask image of the blood vessel dominant region in which the infarction region is present in the non-contrast CT image 103 can be used. Moreover, a mask image of the region of the ASPECTS in which the infarction region is present in the non-contrast CT image 103 can be used as the information representing the anatomical region. As the clinical information, a score of the ASPECTS for the non-contrast CT image 103 and a national institutes of health stroke scale (NIHSS) score for the patient from whom the non-contrast CT image 103 is acquired can be used. The NIHSS is one of the most widely used rating scales in the world for evaluating the neurological severity of stroke.
In the third embodiment, the learning unit 23 constructs the second discriminative model 83B by training the U-Net using a large amount of the training data 100 shown in the drawing.
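One common way to feed both image-shaped and scalar additional information A0 to a convolutional network is to stack the masks as extra input channels and to broadcast each scalar score into a constant-valued channel. The following sketch illustrates this; the broadcasting scheme and the normalization constants are assumptions, not a description of the actual model 83B.

```python
import torch

def build_model_input(ct: torch.Tensor, infarct_mask: torch.Tensor,
                      territory_mask: torch.Tensor,
                      aspects: float, nihss: float) -> torch.Tensor:
    """Assemble a multi-channel input: CT image, infarction mask, anatomical
    mask, and scalar clinical scores broadcast as constant channels."""
    n, _, h, w = ct.shape
    aspects_ch = torch.full((n, 1, h, w), aspects / 10.0)  # ASPECTS range 0-10
    nihss_ch = torch.full((n, 1, h, w), nihss / 42.0)      # NIHSS maximum is 42
    return torch.cat([ct, infarct_mask, territory_mask,
                      aspects_ch, nihss_ch], dim=1)
```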
It should be noted that the learning processing in the third embodiment is different from that in the first embodiment only in that the additional information A0 is used, and thus the detailed description of the learning processing will be omitted. The information processing in the third embodiment is different from the information processing in the first embodiment only in that the information input to the second discriminative model 83B includes the additional information A0 of the patient in addition to the CT image G0 and the mask image representing the infarction region, and thus the detailed description of the information processing will be omitted.
In the third embodiment, the large vessel occlusion region in the CT image G0 is derived based on the additional information in addition to the non-contrast CT image G0 of the head of the patient and the infarction region in the CT image G0. As a result, the large vessel occlusion region can be more accurately specified in the CT image G0.
It should be noted that, in the third embodiment, the second discriminative model 83B is constructed to extract the large vessel occlusion region in the CT image G0 in a case in which the CT image G0, the mask image M0 representing the infarction region, and the additional information A0 are input, but the present disclosure is not limited to this. The second discriminative model 83B may be constructed to extract the infarction region in the CT image G0 in a case in which the CT image G0, the mask image representing the large vessel occlusion region, and the additional information are input.
In addition, in each of the above-described embodiments, the second discriminative model derives the second information (that is, the infarction region or the large vessel occlusion region) by using the information on the symmetrical regions with respect to the midline of the brain in the CT image G0 and the first information, but the present disclosure is not limited to this. The second discriminative model may be constructed to derive the second information without using the information on the symmetrical regions with respect to the midline of the brain in the CT image G0 and the first information.
In addition, in each of the above-described embodiments, the second discriminative model is constructed by using the U-Net, but the present disclosure is not limited to this. The second discriminative model may be constructed by using a convolutional neural network other than the U-Net.
Moreover, in each of the embodiments described above, in the first discriminative models 22A, 82A, and 83A of the information derivation units 22, 82, and 83, the first information (that is, the infarction region or the large vessel occlusion region) is derived from the CT image G0 by using the CNN, but the present disclosure is not limited to this. The information derivation unit may acquire the mask image generated by a doctor by interpreting the CT image G0 to specify the infarction region or the large vessel occlusion region as the first information without using the first discriminative model, and derive the second information.
Moreover, in each of the embodiments described above, the information derivation units 22, 82, and 83 derive the infarction region and the large vessel occlusion region, but the present disclosure is not limited to this. A bounding box that surrounds the infarction region and the large vessel occlusion region may be derived.
Moreover, in the embodiments described above, for example, the following various processors can be used as the hardware structures of the processing units that execute various kinds of processing, such as the information acquisition unit 21, the information derivation unit 22, the learning unit 23, the quantitative value derivation unit 24, and the display control unit 25 in the information processing apparatus 1. In addition to the CPU, which is a general-purpose processor that executes software (programs) to function as the various processing units described above, the various processors include a programmable logic device (PLD), which is a processor whose circuit configuration can be changed after manufacturing, such as a field programmable gate array (FPGA), a dedicated electric circuit, which is a processor having a circuit configuration exclusively designed to execute specific processing, such as an application specific integrated circuit (ASIC), and the like.
One processing unit may be configured by one of these various processors, or may be configured by a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA). Moreover, a plurality of processing units may be configured by one processor. A first example of configuring a plurality of processing units by one processor is a form in which, as represented by computers such as a client and a server, one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. A second example is a form in which, as represented by a system-on-chip (SoC) or the like, a processor that realizes the functions of the entire system including the plurality of processing units with one integrated circuit (IC) chip is used. As described above, the various processing units are configured by using one or more of the various processors described above as the hardware structures.
Further, as the hardware structures of these various processors, more specifically, it is possible to use an electric circuit (circuitry) in which circuit elements, such as semiconductor elements, are combined.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2022-034780 | Mar 2022 | JP | national |
This application is a continuation of International Application No. PCT/JP2022/041923, filed on Nov. 10, 2022, which claims priority from Japanese Patent Application No. 2022-034780, filed on Mar. 7, 2022. The entire disclosure of each of the above applications is incorporated herein by reference.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/JP2022/041923 | Nov 2022 | WO |
| Child | 18817148 | | US |