The entire disclosure of Japanese Patent Application No. 2023-119918, filed on Jul. 24, 2023, is incorporated herein by reference in its entirety.
The present disclosure relates to an inference apparatus, an ultrasound diagnostic apparatus, a training apparatus, an inference method, a display method, and a program.
With the recent development of deep learning technology, machine learning models have been utilized for various purposes. For example, Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2020-519369 and Japanese Patent Application Laid-Open No. 2021-164573 have proposed the utilization of a machine learning model for image diagnosis of ultrasound image data in the medical field.
For a patient suffering from lower back pain, a lumbar nerve root block is performed. Generally, a nerve root block is an injection of an anesthetic liquid into a lumbar nerve root. A nerve root block is typically executed by puncture under X-ray fluoroscopy, but the nerve root itself is not imaged. For this reason, the doctor determines that a puncture needle has reached the vicinity of a nerve root by bringing the puncture needle into contact with the nerve root and observing a pain response of the patient.
However, since such a method imposes a heavy burden on the patient, improvement is desired.
In recent years, a technique has been developed in which when a nerve root block for a cervical nerve root is performed, puncture is performed while the position of the nerve root is confirmed in real time by using an ultrasound diagnostic apparatus.
In a case where a lumbar nerve root is rendered using an ultrasound diagnostic apparatus, the fifth lumbar nerve root (also referred to as the L5 nerve root), for which puncture is performed particularly frequently, is located deep below the body surface and is surrounded by bones; therefore, there is a problem in that it is more difficult to render than a cervical nerve root. Further, in a case where a lumbar nerve root is observed from the back, the nerve root is observed through a muscle such as the multifidus muscle or the erector spinae muscle. For a patient whose muscles have undergone fatty degeneration due to aging, ultrasound is scattered by the fat, which makes it even more difficult to render a lumbar nerve root.
In consideration of such circumstances, an object of the present disclosure is to provide an inference apparatus capable of performing accurate inference on a nerve root by using a trained model, and an ultrasound diagnostic apparatus, a training apparatus, an inference method, a display method, and a program which are related to the inference apparatus.
To achieve at least one of the abovementioned objects, an inference apparatus according to an aspect of the present disclosure includes a hardware processor that: acquires ultrasound image data on an ultrasound image generated based on a reflected wave of ultrasound transmitted to a subject; and inputs the ultrasound image data, as data to be inferred, to a trained model and performs inference on a nerve root in the ultrasound image based on output data outputted from the trained model.
The advantages and features provided by one or more embodiments of the invention will become more fully understood from the detailed description given hereinbelow and the appended drawings which are given by way of illustration only, and thus are not intended as a definition of the limits of the present invention.
Hereinafter, one or more embodiments of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the disclosed embodiments.
Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
Hereinafter, an ultrasound diagnostic system is disclosed which includes: a training apparatus that trains a machine learning model (a model to be trained) for performing inference on a nerve root in an ultrasound image; an inference apparatus that performs inference on a nerve root by utilizing a trained model; and an ultrasound diagnostic apparatus that generates an ultrasound image to be inferred and displays the ultrasound image based on an inference result.
The training apparatus trains the model by using, as training data, ultrasound image data on an ultrasound image including a bone existing around a nerve root, so that the trained model outputs output data in a case where ultrasound image data is inputted.
The ultrasound diagnostic apparatus includes an inference apparatus that holds a trained model for which the training by the training apparatus has been completed. The ultrasound diagnostic apparatus transmits ultrasound to a subject, generates an ultrasound image based on reflected ultrasound, and passes ultrasound image data on the ultrasound image to the inference apparatus. The inference apparatus inputs the ultrasound image data to the trained model and performs inference on a nerve root based on output data that has been outputted. Specifically, the inference apparatus performs inference on the position of a nerve root in an ultrasound image.
The ultrasound diagnostic apparatus displays an ultrasound image based on an inference result outputted by the inference apparatus. Specifically, the ultrasound diagnostic apparatus displays, together with an ultrasound image, the position of a nerve root based on an inference result.
Accordingly, a user (a doctor or the like) who uses the ultrasound diagnostic system can perform a nerve root block while confirming the position of a nerve root at any time based on an ultrasound image. In particular, when the training apparatus trains a model to be trained, by using, as training data, an ultrasound image including at least one bone of the fifth lumbar vertebra, the fourth lumbar vertebra or the sacrum, each of which exists around a lumbar nerve root, it is possible to perform a lumbar nerve root block with a relatively small burden on the subject.
First, a system configuration of ultrasound diagnostic system 1 according to an embodiment of the present disclosure will be described with reference to
As illustrated in
Training apparatus 50 stores a machine learning model to be trained, and uses training data stored in training data database (DB) 20 to train the machine learning model to be trained. The machine learning model to be trained may be realized as a machine learning model of an arbitrary suitable type, such as a neural network, for example. For example, in a case where the machine learning model to be trained is realized by a neural network, training apparatus 50 may execute supervised learning by using training data acquired from training data DB 20 and update a parameter(s) of the machine learning model according to an arbitrary training algorithm known in the art, such as a backpropagation method.
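As an illustrative sketch (not part of the claimed embodiment), the supervised update described above can be outlined as follows. The tiny linear model, squared-error loss, and learning rate are assumptions introduced only to show one parameter update per training pair; an actual implementation would use a neural network framework.

```python
import numpy as np

def train_step(w, b, x, y, lr=0.1):
    """One supervised-learning update (gradient descent on squared error).

    w, b : model parameters (weights and bias)
    x    : flattened ultrasound image (feature vector)
    y    : ground-truth label (e.g., 1.0 if a nerve root is present)
    """
    pred = float(w @ x + b)          # forward pass of a minimal linear model
    err = pred - y                   # compare output with the ground truth
    w -= lr * err * x                # propagate the error back to the weights
    b -= lr * err                    # ...and to the bias
    return w, b, 0.5 * err ** 2     # updated parameters and the loss

rng = np.random.default_rng(0)
x = rng.random(16)                   # stand-in for a flattened image patch
w, b = np.zeros(16), 0.0
losses = []
for _ in range(50):                  # iterate until a completion condition
    w, b, loss = train_step(w, b, x, 1.0)
    losses.append(loss)
```

Repeating such updates over many image/label pairs until a predetermined completion condition is satisfied corresponds to the training loop performed by training apparatus 50.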
When the training of the machine learning model is completed, the trained machine learning model (hereinafter referred to as trained model 10) is stored in a storage unit of inference apparatus 200 included in ultrasound diagnostic apparatus 100, or the like.
Ultrasound diagnostic apparatus 100 generates ultrasound image data by transmitting and receiving an ultrasound signal to and from subject 30 via an ultrasound probe.
Inference apparatus 200 inputs ultrasound image data to trained model 10 and performs inference on a nerve root in an ultrasound image, for example, inference on the position of the nerve root within the image, or the like, based on output data that has been outputted.
Ultrasound diagnostic apparatus 100 displays an ultrasound image and, based on an inference result of inference apparatus 200, displays information on a nerve root included in the ultrasound image.
Although the system configuration according to an embodiment of the present disclosure has been described with reference to
Further, although illustration is omitted, the present disclosure may include a form in which an ultrasound diagnostic apparatus that generates an ultrasound image and an image display apparatus that displays an ultrasound image based on an inference result of inference apparatus 200 are separately provided.
Ultrasound diagnostic apparatus 100 visualizes, as an ultrasound image, at least one of a shape and a dynamic state within subject 30. Ultrasound diagnostic apparatus 100 is used, for example, in applications to capture an ultrasound image (i.e., a tomographic image) of a region to be detected, in particular the lumbar region, and to perform a nerve root block for a nerve root existing in the region to be detected, in particular a lumbar nerve root.
As illustrated in
Ultrasound probe 1020 functions as an acoustic sensor that transmits an ultrasound beam to subject 30 (e.g., a patient), receives a reflected wave (ultrasound echo) of the transmitted ultrasound beam, which is reflected inside the body of subject 30, and converts the reflected wave into an electric signal.
The user operates ultrasound diagnostic apparatus 100 by bringing an ultrasound beam transmission/reception surface of ultrasound probe 1020 into contact with the body surface of a region to be detected of subject 30. Note that, as ultrasound probe 1020, an arbitrary ultrasound probe such as a convex probe, a linear probe, a sector probe, or a three-dimensional probe can be applied.
Ultrasound probe 1020 is configured to include, for example, a plurality of transducers (e.g., piezoelectric elements) arranged in an array shape, and a channel switching apparatus (e.g., a multiplexer) for controlling switching to turn on/off a driving state of the plurality of transducers individually or in units of blocks.
Each transducer of ultrasound probe 1020 converts a voltage pulse generated by transmission unit 1012 of ultrasound diagnostic apparatus main body 1010 into an ultrasound beam, transmits the ultrasound beam to subject 30, receives an ultrasound echo reflected inside subject 30, converts the ultrasound echo into an electric signal, and outputs the electric signal to reception unit 1013 of ultrasound diagnostic apparatus main body 1010.
In the present embodiment, as illustrated in
Transmission unit 1012, reception unit 1013, ultrasound image generation unit 1014, and display image generation unit 1015 are constituted by, for example, dedicated or general-purpose hardware (electronic circuit) supporting each processing, such as a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA), and implement each function in cooperation with control unit 1017.
Operation input unit 1011 receives, for example, a command instructing an operation of ultrasound diagnostic apparatus 100, an input of information on subject 30, or the like. Operation input unit 1011 may include, for example, an operation panel including a plurality of input switches, a keyboard, a mouse, and/or the like. Note that, operation input unit 1011 may be constituted by a touch screen provided integrally with output unit 1016.
Transmission unit 1012 is a transmitter that transmits a voltage pulse as a driving signal to ultrasound probe 1020 according to an instruction of control unit 1017. Transmission unit 1012 may be configured to include, for example, a high-frequency pulse oscillator, a pulse setting unit, and the like. Transmission unit 1012 may adjust a voltage pulse generated by the high-frequency pulse oscillator to have a voltage amplitude, a pulse width, and a transmission timing that are set by the pulse setting unit, and transmit the voltage pulse for each channel of ultrasound probe 1020.
Transmission unit 1012 includes a pulse setting unit for each of the plurality of channels of ultrasound probe 1020, and the voltage amplitude, pulse width, and transmission timing of a voltage pulse can be set for each of the plurality of channels. For example, transmission unit 1012 may change a target depth or generate a different pulse waveform by setting an appropriate delay time for the plurality of channels.
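As an illustrative sketch, the per-channel delay setting described above can be computed geometrically: elements farther from the focal point fire earlier so that all wavefronts arrive at the focus simultaneously. The element pitch and the speed of sound used here are assumed values, not parameters specified in the embodiment.

```python
import numpy as np

def focus_delays(n_channels, pitch_m, focus_depth_m, c=1540.0):
    """Per-channel transmit delays (seconds) that focus the beam at a depth.

    pitch_m is the element spacing and c the assumed speed of sound in
    soft tissue (m/s). Elements with a longer path to the focal point are
    assigned a smaller delay, i.e., they fire earlier.
    """
    center = (n_channels - 1) / 2.0
    lateral = (np.arange(n_channels) - center) * pitch_m  # element positions
    path = np.sqrt(lateral ** 2 + focus_depth_m ** 2)     # distance to focus
    return (path.max() - path) / c                        # outermost fires first
```

For example, `focus_delays(8, 0.3e-3, 30e-3)` yields a symmetric delay profile whose maximum is at the array center, which is the qualitative behavior a transmit beamformer needs for focusing at a target depth.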
Reception unit 1013 is a receiver that receives and processes, according to an instruction of control unit 1017, an electric signal according to an ultrasound echo generated by ultrasound probe 1020. Reception unit 1013 may include a preamplifier, an AD conversion unit, and a reception beamformer.
Reception unit 1013 amplifies a reception signal according to a weak ultrasound echo for each channel by the preamplifier, and converts the reception signal into a digital signal by the AD conversion unit. Then, reception unit 1013 uses the reception beamformer to perform delay and sum on the reception signals of the respective channels, thereby combining the reception signals of the plurality of channels into one to obtain acoustic line data.
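The delay-and-sum operation of the reception beamformer can be sketched as follows. This is a simplified integer-sample version for illustration; real apparatuses interpolate to sub-sample accuracy and apply apodization weights.

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples):
    """Combine per-channel reception signals into one acoustic line.

    channel_data   : (n_channels, n_samples) digitized echo signals
    delays_samples : integer receive delay (in samples) per channel
    Each channel is advanced by its delay and the aligned signals are
    summed, so echoes from the focal point reinforce each other.
    """
    n_ch, n_s = channel_data.shape
    line = np.zeros(n_s)
    for ch in range(n_ch):
        d = int(delays_samples[ch])
        line[: n_s - d] += channel_data[ch, d:]  # shift channel earlier by d
    return line
```

When the per-channel delays match the actual arrival-time differences of an echo, the summed line exhibits a single reinforced peak, which is the acoustic line data referred to above.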
Ultrasound image generation unit 1014 acquires a reception signal (acoustic line data) from reception unit 1013 and generates an ultrasound image (i.e., a tomographic image) of the inside of subject 30.
For example, when ultrasound probe 1020 transmits a pulsed ultrasound beam in the depth direction, ultrasound image generation unit 1014 temporally continuously accumulates the signal intensity of an ultrasound echo, which is detected thereafter, in a line memory. Then, as the ultrasound beam from ultrasound probe 1020 scans the inside of subject 30, ultrasound image generation unit 1014 sequentially accumulates the signal intensity of the ultrasound echo in each scanning position, in the line memory, and generates two-dimensional data in units of frames. Then, ultrasound image generation unit 1014 may convert the signal intensity of the two-dimensional data into a luminance value, thereby generating an ultrasound image representing a two-dimensional structure within a cross section including the transmission direction of ultrasound and the scanning direction of the ultrasound.
Note that, ultrasound image generation unit 1014 may include, for example, an envelope detection circuit that performs envelope detection on a reception signal acquired from reception unit 1013, a logarithmic compression circuit that performs logarithmic compression on the signal intensity of the reception signal detected by the envelope detection circuit, a dynamic filter that removes a noise component included in a reception signal, with a band-pass filter whose frequency characteristic is changed according to the depth, and the like.
Ultrasound image generation unit 1014 may be capable of generating ultrasound image data, when needed, not only in a B mode but also in another image mode such as a color Doppler mode, a motion (M) mode, or a pulse Doppler mode. The B mode is a mode for generating a tomographic image in which the intensity of a reception signal of ultrasound in a subject is represented by luminance. The color Doppler mode is a mode for converting a Doppler shift frequency generated by a blood flow of a subject into a velocity and displaying the velocity distribution as a tomographic image.
Ultrasound image data on an ultrasound image generated by ultrasound image generation unit 1014 is outputted to inference apparatus 200.
As illustrated in
Image acquisition unit 201 acquires ultrasound image data outputted from ultrasound image generation unit 1014.
In a case where ultrasound image generation unit 1014 generates an image in the color Doppler mode, blood flow information acquisition unit 202 acquires blood flow information on a blood flow flowing in a blood vessel included in the ultrasound image.
Inference unit 203 inputs ultrasound image data, as data to be inferred, to trained model 10 stored in storage unit 204 and performs inference on a nerve root in an ultrasound image based on output data on the nerve root outputted from trained model 10. The content of the inference performed by inference unit 203 is, for example, inference on the position of a nerve root in an ultrasound image.
Output data that is a result of inference by inference unit 203 is outputted to display image generation unit 1015.
Display image generation unit 1015 generates, based on ultrasound image data generated by ultrasound image generation unit 1014 and output data acquired from inference apparatus 200, a display image including an ultrasound image and a display regarding a nerve root. Then, display image generation unit 1015 transmits data of the generated display image to output unit 1016. Display image generation unit 1015 may sequentially update the display image each time a new ultrasound image is acquired from ultrasound image generation unit 1014, and cause output unit 1016 to display the display image in a moving image format.
Note that, display image generation unit 1015 may generate a display image after performing predetermined image processing, such as coordinate conversion processing or data interpolation processing, on an ultrasound image outputted from ultrasound image generation unit 1014.
Following an instruction from control unit 1017, output unit 1016 acquires data of a display image from display image generation unit 1015 and outputs the display image. For example, output unit 1016 may be constituted by a liquid crystal display, an organic EL display, a CRT display, or the like.
Control unit 1017 controls operation input unit 1011, transmission unit 1012, reception unit 1013, ultrasound image generation unit 1014, display image generation unit 1015, output unit 1016, and inference apparatus 200 according to the functions thereof, thereby performing overall control of ultrasound diagnostic apparatus 100. Note that, in the present disclosure, the control of inference apparatus 200 may not be performed by control unit 1017, and for example, inference apparatus 200 may include a control unit independent of control unit 1017.
Control unit 1017 may include central processing unit (CPU) 1171 as an arithmetic/control apparatus, solid state drive (SSD) 1172 in which a basic program and basic setting data are stored, random access memory (RAM) 1173 as a main storage apparatus, and the like. CPU 1171 reads a program corresponding to a processing content from SSD 1172, stores the program in RAM 1173, and executes the stored program, thereby centrally controlling the operation of each functional block of ultrasound diagnostic apparatus main body 1010 and inference apparatus 200.
Next, a configuration of training apparatus 50 according to the embodiment of the present disclosure will be described.
As illustrated in
Data acquisition unit 51 acquires training data (learning data) that is used for training of a machine learning model to be trained. Specifically, data acquisition unit 51 acquires a pair of ultrasound image data on an ultrasound image and a ground truth label indicating the position of a nerve root in the ultrasound image.
Note that, the ultrasound image data as training data acquired by training apparatus 50 may not necessarily be the ultrasound image data generated by ultrasound diagnostic apparatus 100 described above. Training apparatus 50 may acquire, for example, ultrasound image data including a large number of ultrasound images generated in the past by another/other ultrasound diagnostic apparatus(es), and ground truth labels assigned in advance to the ultrasound images, respectively. Training apparatus 50 may acquire the training data from, for example, a company(s) or an organization(s) that provide(s) pairs of ultrasound image data and ground truth labels, or may use, for example, data existing on a public network, such as the Internet, as the training data.
The ground truth labels may be assigned in advance to a plurality of ultrasound images as described above, or may be assigned by the user of ultrasound diagnostic system 1 to ultrasound images, respectively, which are generated using ultrasound diagnostic apparatus 100.
Data acquisition unit 51 may perform pre-processing, such as contrast change or noise removal, on the acquired ultrasound image data for training. Furthermore, data acquisition unit 51 may execute data augmentation on the acquired training data to increase training data. For example, data acquisition unit 51 may execute data augmentation by performing image transformation processing, such as enlargement or reduction, position change, or deformation, on ultrasound image data for training.
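The data augmentation described above can be sketched as follows. The concrete transforms (mirroring, lateral shift with wrap-around, and central crop standing in for zoom) and the shift range are illustrative assumptions; contrast change and noise injection could be added the same way.

```python
import numpy as np

def augment(image, rng):
    """Generate simple augmented variants of one ultrasound training image.

    image : (H, W) grayscale array
    rng   : numpy random Generator used to pick the translation amount
    Returns a list of transformed copies to enlarge the training set.
    """
    variants = [np.fliplr(image)]                        # horizontal mirror
    shift = int(rng.integers(1, 5))
    variants.append(np.roll(image, shift, axis=1))       # lateral translation
    h, w = image.shape
    variants.append(image[h // 8: -(h // 8), w // 8: -(w // 8)])  # crop ≈ zoom
    return variants
```

Each augmented image inherits the (suitably transformed) ground truth label of its source image, so the number of training pairs grows without additional labeling work.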
Training unit 52 uses a pair of ultrasound image data and a ground truth label as training data to generate trained model 10 that is trained so as to output an inference result on a nerve root in response to an input of ultrasound image data.
More specifically, training unit 52 compares output data, which is outputted by a machine learning model to be trained in response to an input of ultrasound image data for training, with a ground truth label, and updates a parameter(s) of the machine learning model to be trained, based on a comparison result (error). For example, in a case where a machine learning model is realized by a convolutional neural network, training unit 52 may continue to adjust a parameter(s) of the machine learning model according to an error between an output result and ground truth data in accordance with the backpropagation method until a predetermined completion condition is satisfied.
In the present embodiment, when training unit 52 determines that the training of a machine learning model has been completed, training unit 52 provides inference apparatus 200 of ultrasound diagnostic apparatus 100 with trained model 10 (see
Each of inference apparatus 200 and training apparatus 50 may be realized by various types of calculation apparatuses 300 such as a server apparatus, a personal computer (PC), a workstation, a smartphone, or a tablet terminal. That is, calculation apparatus 300 which may form inference apparatus 200 or training apparatus 50 includes drive apparatus 101, storage apparatus 102, memory apparatus 103, processor 104, user interface (UI) apparatus 105, and communication apparatus 106 which are interconnected via bus B.
Programs or instructions for implementing various functions and processing of inference apparatus 200 and training apparatus 50 may be stored in an attachable/detachable storage medium such as a compact disc read-only memory (CD-ROM) or a flash memory. When the storage medium is set in drive apparatus 101, a program or an instruction is installed in storage apparatus 102 or memory apparatus 103 from the storage medium via drive apparatus 101. However, the program or the instruction may not need to be installed from the storage medium, but may be downloaded from any external apparatus via a network or the like.
Storage apparatus 102 is realized by a storage apparatus such as a hard disk drive (HDD) or an SSD, and stores, together with an installed program or instruction, a file, data, or the like used for executing the program or the instruction.
Memory apparatus 103 is realized by a random access memory, a static memory, or the like, and reads, when a program or an instruction is activated, a program, an instruction, data, or the like from storage apparatus 102 and stores the read program, instruction, data, or the like. Storage apparatus 102, memory apparatus 103, and the attachable/detachable storage medium may be collectively referred to as a non-transitory storage medium.
Processor 104 may be realized by at least one central processing unit (CPU), graphics processing unit (GPU), processing circuitry, or the like, which may be constituted by at least one processor core, and executes various functions and processing of inference apparatus 200 and training apparatus 50 according to programs and instructions stored in memory apparatus 103 and data such as a parameter(s) required to execute the programs or instructions.
User interface (UI) apparatus 105 may be constituted by an input apparatus such as a keyboard, a mouse, a camera, or a microphone, an output apparatus such as a display, a speaker, a headset, or a printer, or an input/output apparatus such as a touch screen, and realizes an interface between the user and inference apparatus 200 as well as training apparatus 50. For example, the user operates a graphical user interface (GUI) displayed on a display or a touch screen with a keyboard, a mouse, or the like to operate inference apparatus 200 and training apparatus 50.
Communication apparatus 106 is realized by various communication circuits that execute wired or wireless communication processing with an external apparatus or a communication network such as the Internet, a local area network (LAN), or a cellular network.
The hardware configuration exemplified in
Hereinafter, operation examples of ultrasound diagnostic system 1 will be described in detail.
First, an operation example of training apparatus 50 in a training phase will be described.
In step S1, data acquisition unit 51 of training apparatus 50 acquires training data.
The training data is ultrasound image data obtained from various subjects. In a case where it is assumed that ultrasound diagnostic system 1 is used particularly for a lumbar nerve root block, the training data is desirably ultrasound image data of the lumbar regions of various subjects. Nonetheless, the training data may include ultrasound image data of a region(s) other than the lumbar region.
The white area in
In the example illustrated in
As described above, the training data is constituted by a pair of ultrasound image data on an ultrasound image as exemplified in
The ultrasound image illustrated in
In the present disclosure, in addition to the ultrasound images described with reference to
For example, as training data, ultrasound image data of an ultrasound image in which a nerve root is marked by injecting a drug solution or the like for marking the nerve root into a subject at the time of generating the ultrasound image data may be used. According to such an image, it is not necessary for the user to manually assign a ground truth label, and it is possible to significantly reduce the labor for generating training data.
In step S3, training unit 52 determines whether a predetermined completion condition has been satisfied. The predetermined completion condition may be set in advance by, for example, the user of ultrasound diagnostic system 1. One example of the completion condition is the completion of learning using a predetermined number of pairs of ultrasound images and ground truth labels.
In a case where training unit 52 determines in step S3 that the training has been completed (step S3: YES), the processing proceeds to step S4. In a case where training unit 52 determines in step S3 that the training has not been completed (step S3: NO), the processing returns to step S1 and the training is continued. Note that, the example in which the processing returns to step S1 in a case where it is determined in step S3 that the training has not been completed is illustrated in the flowchart illustrated in
In step S4, training apparatus 50 outputs trained model 10 (see
By performing the training in this way, the machine learning model to be trained becomes trained model 10 capable of outputting output data on a nerve root in response to an input of ultrasound image data. Note that, the output data may be, for example, data itself indicating the position of a nerve root in an ultrasound image, or various data on a nerve root, such as data on the position(s) of a bone(s) existing around a nerve root or data on the position of a blood vessel running parallel to a nerve root.
The output data outputted by trained model 10 may include the above-described plurality of types of data or may include only one type of data.
Next, an operation example of ultrasound diagnostic apparatus 100 in the inference phase will be described.
The inference phase is executed, for example, in a case where the user performs a nerve root block treatment on a subject by using ultrasound diagnostic apparatus 100.
In step S11, ultrasound image generation unit 1014 (see
In step S12, ultrasound image generation unit 1014 inputs the generated image data to inference apparatus 200. In a case where the generated ultrasound image is a moving image, ultrasound image generation unit 1014 inputs image data generated for each frame to inference apparatus 200. Thus, image acquisition unit 201 of inference apparatus 200 acquires ultrasound image data to be inferred.
In step S13, inference unit 203 of inference apparatus 200 inputs the ultrasound image data to be inferred to trained model 10 stored in storage unit 204, and acquires output data outputted from trained model 10.
As described above, the output data may include data indicating the position of a nerve root itself as well as a plurality of types of data such as data indicating the position(s) of a bone(s) (such as the fourth lumbar vertebra, the fifth lumbar vertebra, the sacrum, or the like in the case of the lumbar region) that should exist around a nerve root or data indicating the position of a blood vessel (a lumbar artery in the case of the lumbar region) running parallel to a nerve root.
In step S14, inference unit 203 performs inference on a nerve root in an ultrasound image based on the output data.
Specifically, in a case where the output data includes the position of a nerve root to be treated, inference unit 203 specifies the position of the nerve root based on the output data.
Furthermore, in a case where the output data includes the position(s) of a bone(s) that should exist around a nerve root to be treated, inference unit 203 performs inference on the position of the nerve root based on the output data. For example, in the ultrasound image of the lumbar region exemplified in
Furthermore, in a case where the output data includes the position of a blood vessel that should run parallel to a nerve root to be treated, inference unit 203 performs inference on the position of the nerve root based on the output data. For example, in the example illustrated in
To improve the accuracy of nerve root inference, it is more desirable that inference unit 203 infer the position of a nerve root in an ultrasound image by combining a plurality of types of inference results using a plurality of types of data included in the output data.
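One simple way to combine a plurality of types of inference results, as suggested above, is a confidence-weighted average of the individual position estimates. This sketch is illustrative only; the weights are hypothetical values, not parameters of the embodiment.

```python
import numpy as np

def fuse_position_estimates(estimates, weights=None):
    """Fuse several position inferences into one nerve-root position.

    estimates : list of (x, y) positions, e.g., one inferred directly, one
                derived from surrounding bone positions, and one derived
                from a blood vessel running parallel to the nerve root
    weights   : optional per-estimate confidences (uniform if None)
    """
    pts = np.asarray(estimates, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                 # normalize confidences
    return tuple(w @ pts)           # confidence-weighted mean position

# Hypothetical direct, bone-based, and vessel-based estimates of the root
fused = fuse_position_estimates([(120, 80), (124, 78), (118, 82)],
                                weights=[0.5, 0.3, 0.2])
```

More elaborate fusion rules (e.g., rejecting an estimate that is inconsistent with the anatomically expected position between the bones) would follow the same pattern of reconciling several cues into a single inferred position.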
In step S15, inference unit 203 outputs inference result (the position of a nerve root in an ultrasound image) data to display image generation unit 1015.
In step S16, display image generation unit 1015 generates a display image to be displayed on output unit 1016, based on the ultrasound image data acquired from ultrasound image generation unit 1014 and the inference result data indicating the position of a nerve root in an ultrasound image of the ultrasound image data.
In addition, in the example illustrated in
Note that, the display image illustrated in
Further, for example, as exemplified in
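The display element indicating the inferred nerve-root position can be sketched as a colored marker overlaid on the grayscale B-mode image. The red circle and its radius are illustrative choices, not display specifications from the embodiment.

```python
import numpy as np

def overlay_marker(gray_image, pos, radius=3):
    """Overlay a marker at the inferred nerve-root position.

    gray_image : (H, W) uint8 B-mode image
    pos        : (row, col) inferred nerve-root position in the image
    Returns an (H, W, 3) RGB image with a filled circle drawn in red so
    the marker stands out against the grayscale tomogram.
    """
    h, w = gray_image.shape
    rgb = np.stack([gray_image] * 3, axis=-1).astype(np.uint8)
    rr, cc = np.ogrid[:h, :w]
    mask = (rr - pos[0]) ** 2 + (cc - pos[1]) ** 2 <= radius ** 2
    rgb[mask] = (255, 0, 0)        # paint the marker pixels red
    return rgb
```

Regenerating this overlay for each new frame and sending it to the output unit corresponds to updating the display image in a moving image format.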
In step S17, output unit 1016 outputs (displays) the display image. Thus, the user can accurately grasp the position of a nerve root.
Note that, in a case where an ultrasound image is a moving image, ultrasound diagnostic apparatus 100 repeats step S11 illustrated in
As described above, inference apparatus 200 performs inference on the position of a nerve root based on output data outputted by a trained model in response to an input of ultrasound image data, and ultrasound diagnostic apparatus 100 generates a display image based on an inference result of inference apparatus 200, thereby making it possible to generate a display image including a display element that accurately indicates the position of a nerve root. Accordingly, the user who performs a nerve root block treatment on a subject can accurately grasp the position of a nerve root without eliciting a pain reaction from the subject. Therefore, it is possible to perform a nerve root block with a small burden on the subject.
Hereinabove, the examples of the present disclosure have been described in detail, but the present disclosure is not limited to the above-described specific embodiment, and various modifications and changes can be made within the scope of the gist of the present disclosure described in the claims.
Although embodiments of the present invention have been described and illustrated in detail, the disclosed embodiments are made for purposes of illustration and example only and not limitation. The scope of the present invention should be interpreted by terms of the appended claims.
Number | Date | Country | Kind |
---|---|---|---
2023-119918 | Jul 2023 | JP | national |