The present invention relates to a blood vessel segment discrimination system, a blood vessel segment discrimination method, and a program.
Priority is claimed on Japanese Patent Application No. 2022-062934, filed in Japan on Apr. 5, 2022, the content of which is incorporated herein by reference.
Resuscitative endovascular balloon occlusion of the aorta (REBOA) is a critical care technique in which a balloon catheter is used to perform hemostasis or control the amount of bleeding for hemodynamically unstable traumatic bleeding or the like. In REBOA, a balloon is placed in the aorta using a catheter. Unlike normal catheter treatment, REBOA is also performed in situations in which an X-ray fluoroscopy device, an ultrasound device, or the like cannot be used. In recent years, REBOA has also attracted attention as an effective technique for non-traumatic bleeding as well as traumatic bleeding.
In REBOA, perfusion of a non-bleeding part can be selectively preserved by adjusting, among Zone 1 to Zone 3 of the aorta, the zone in which the balloon is placed (Zone 1 to Zone 3 of the aorta are segmented according to the branching positions of the aorta). Zone 1 to Zone 3 of the aorta are described in, for example, Non-Patent Document 1.
In a situation in which an X-ray fluoroscopy device, an ultrasound device, or the like can be used, and the catheter and Zone 1 to Zone 3 of the aorta can be observed by such a device, the position of the catheter with respect to Zone 1 to Zone 3 of the aorta is adjusted while checking the position of the catheter and the positions of Zone 1 to Zone 3 of the aorta.
However, REBOA may be performed even in a situation in which an X-ray fluoroscopy device, an ultrasound device, or the like cannot be used, such as in an emergency, and Zone 1 to Zone 3 of the aorta cannot be observed by such a device.
In such a case, in the related art, an operator is required to perform REBOA in a state in which Zone 1 to Zone 3 of the aorta of a patient cannot be ascertained.
In view of the above, an object of the present invention is to provide a blood vessel segment discrimination system, a blood vessel segment discrimination method, and a program, which enable easy identification of a Zone 1, a Zone 2, and a Zone 3 of the aorta of a patient and can support implementation of REBOA even in a situation in which an X-ray fluoroscopy or ultrasound device cannot be used.
According to an aspect of the present invention, a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; a depth image generation unit configured to generate a depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device; a blood vessel segment discrimination device configured to discriminate an aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device, using a deep learning model; and a training data set generation unit configured to generate a training data set used for training of the deep learning model, in which, before the training of the deep learning model is performed, the training data set generation unit generates the three-dimensional structure of the abdominal surface for training from any of an abdominal computed tomography (CT) image, an abdominal magnetic resonance imaging (MRI) image, and an abdominal magnetic resonance angiography (MRA) image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device, generates a depth image for training from the three-dimensional structure of the abdominal surface for training, and generates the training data set showing a correspondence relationship between each pixel in the depth image for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image, and after the training of the deep learning 
model is performed using the training data set generated by the training data set generation unit, the three-dimensional structure recognition device recognizes the three-dimensional structure of the abdomen of the patient, the depth image generation unit generates the depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device, and the blood vessel segment discrimination device estimates whether each pixel in the depth image of the abdomen of the patient generated by the depth image generation unit corresponds to any of the first blood vessel segment, the second blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to an aspect of the present invention, a blood vessel segment discrimination method for a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; a depth image generation unit configured to generate a depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device; a blood vessel segment discrimination device configured to discriminate an aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device, using a deep learning model; and a training data set generation unit configured to generate a training data set used for training of the deep learning model, the blood vessel segment discrimination method including: a training data set generation step of, before the training of the deep learning model is performed, via the training data set generation unit, generating the three-dimensional structure of the abdominal surface for training from any of an abdominal CT image, an abdominal MRI image, and an abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device, generating a depth image for training from the three-dimensional structure of the abdominal surface for training, and generating the training data set showing a correspondence relationship between each pixel in the depth image for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the 
abdominal MRA image; a three-dimensional structure recognition step of, after the training of the deep learning model is performed using the training data set generated in the training data set generation step, via the three-dimensional structure recognition device, recognizing the three-dimensional structure of the abdomen of the patient; a depth image generation step of, via the depth image generation unit, generating the depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized in the three-dimensional structure recognition step; and a blood vessel segment discrimination step of, via the blood vessel segment discrimination device, estimating whether each pixel in the depth image of the abdomen of the patient generated in the depth image generation step corresponds to any of the first blood vessel segment, the second blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to an aspect of the present invention, a program for causing a computer constituting a blood vessel segment discrimination device provided in a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; a depth image generation unit configured to generate a depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device; and a training data set generation unit configured to generate a training data set used for training of a deep learning model, to execute: a training step of performing the training of the deep learning model using the training data set generated by the training data set generation unit; and a blood vessel segment discrimination step, in which the training data set generation unit generates the three-dimensional structure of the abdominal surface for training from any of an abdominal CT image, an abdominal MRI image, and an abdominal MRA image of a person different from a patient who is a discrimination target of an aortic segment by the blood vessel segment discrimination device, generates a depth image for training from the three-dimensional structure of the abdominal surface for training, and generates the training data set showing a correspondence relationship between each pixel in the depth image for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image, after the training step is executed, the three-dimensional structure recognition device recognizes the three-dimensional structure of the
abdomen of the patient, and the depth image generation unit generates the depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device, and in the blood vessel segment discrimination step, estimation is made as to whether each pixel in the depth image of the abdomen of the patient generated by the depth image generation unit corresponds to any of the first blood vessel segment, the second blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to an aspect of the present invention, a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; a blood vessel segment discrimination device configured to discriminate an aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device, using a deep learning model; and a training data set generation unit configured to generate a training data set used for training of the deep learning model, in which, before the training of the deep learning model is performed, the training data set generation unit, generates the three-dimensional structure of the abdominal surface for training from any of an abdominal CT image, an abdominal MRI image, and an abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device, and generates the training data set showing a correspondence relationship between each point on the three-dimensional structure of the abdominal surface for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image, and after the training of the deep learning model is performed using the training data set generated by the training data set generation unit, the three-dimensional structure recognition device recognizes the three-dimensional structure of the abdomen of the patient, and the blood vessel segment discrimination device estimates whether each point on the three-dimensional structure of the abdomen of the patient recognized by the 
three-dimensional structure recognition device corresponds to any of the first blood vessel segment, the second blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to an aspect of the present invention, a blood vessel segment discrimination method for a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; a blood vessel segment discrimination device configured to discriminate an aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device, using a deep learning model; and a training data set generation unit configured to generate a training data set used for training of the deep learning model, the blood vessel segment discrimination method including: a training data set generation step of, before the training of the deep learning model is performed, via the training data set generation unit, generating the three-dimensional structure of the abdominal surface for training from any of an abdominal CT image, an abdominal MRI image, and an abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device, and generating the training data set showing a correspondence relationship between each point on the three-dimensional structure of the abdominal surface for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image; a three-dimensional structure recognition step of, after the training of the deep learning model is performed using the training data set generated in the training data set generation step, via the three-dimensional structure recognition device, recognizing the three-dimensional structure of the 
abdomen of the patient; and a blood vessel segment discrimination step of, via the blood vessel segment discrimination device, estimating whether each point on the three-dimensional structure of the abdomen of the patient recognized in the three-dimensional structure recognition step corresponds to any of the first blood vessel segment, the second blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to an aspect of the present invention, a program for causing a computer constituting a blood vessel segment discrimination device provided in a blood vessel segment discrimination system is provided including: a three-dimensional structure recognition device configured to recognize a three-dimensional structure of an abdomen of the patient; and a training data set generation unit configured to generate a training data set used for training of a deep learning model, to execute: a training step of performing the training of the deep learning model using the training data set generated by the training data set generation unit; and a blood vessel segment discrimination step, in which the training data set generation unit generates the three-dimensional structure of the abdominal surface for training from any of an abdominal CT image, an abdominal MRI image, and an abdominal MRA image of a person different from a patient who is a discrimination target of an aortic segment by the blood vessel segment discrimination device, and generates the training data set showing a correspondence relationship between each point on the three-dimensional structure of the abdominal surface for training and any of a first blood vessel segment corresponding to a Zone 1 of an aorta, a second blood vessel segment corresponding to a Zone 2 of the aorta, a third blood vessel segment corresponding to a Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image, after the training step is executed, the three-dimensional structure recognition device recognizes the three-dimensional structure of the abdomen of the patient, and in the blood vessel segment discrimination step, estimation is made as to whether each point on the three-dimensional structure of the abdomen of the patient recognized by the three-dimensional structure recognition device corresponds to any of the first blood vessel segment, the second 
blood vessel segment, the third blood vessel segment, and another segment using the trained deep learning model.
According to the present invention, it is possible to provide a blood vessel segment discrimination system, a blood vessel segment discrimination method, and a program with which it is possible to easily identify a Zone 1, a Zone 2, and a Zone 3 of the aorta of a patient even in a situation in which an X-ray fluoroscopy or ultrasound device cannot be used, and it is possible to support the implementation of REBOA.
Hereinafter, embodiments of a blood vessel segment discrimination system, a blood vessel segment discrimination method, and a program according to the present invention will be described with reference to the drawings.
In the example shown in
The three-dimensional structure recognition device 11 recognizes a three-dimensional structure of an abdomen of the patient. The three-dimensional structure recognition device 11 recognizes the three-dimensional structure of the abdomen of the patient and generates three-dimensional data of the abdomen of the patient using a technique similar to that used with a “LiDAR scanner” described on, for example, the following website.
As the three-dimensional structure recognition device 11, products described on, for example, the following websites and the like can be used.
The former product is used in a procedure described on, for example, the following website to recognize a three-dimensional structure of an external environment.
In addition, the latter product is used in a procedure described on, for example, the following websites to recognize a three-dimensional structure of an external environment.
The depth image generation unit 12 generates a depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized by the three-dimensional structure recognition device 11. The depth image is an image colored according to a distance, as described on, for example, the following website. That is, the depth image generation unit 12 generates a two-dimensional image of the abdomen of the patient in which the abdomen of the patient is colored according to a distance from the three-dimensional structure recognition device 11.
The depth image generation unit 12 generates the depth image of the abdomen of the patient using a technique described on, for example, the following website.
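The depth image generation described above can be illustrated with a minimal sketch: each point of the recognized abdominal surface is projected onto a two-dimensional grid, and each pixel stores the distance from the sensor. The function name, grid size, and assumption that coordinates are pre-normalized are illustrative only and are not part of the embodiment.

```python
# Hypothetical sketch: convert a surface point cloud (x, y, z) into a
# small depth image, where each pixel stores the distance (z) from the
# three-dimensional structure recognition device.

def make_depth_image(points, width, height):
    """Project points onto a width x height grid; keep the nearest depth."""
    depth = [[None] * width for _ in range(height)]
    for x, y, z in points:
        # Assume x, y are already normalized to [0, 1) image coordinates.
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        if depth[row][col] is None or z < depth[row][col]:
            depth[row][col] = z  # the nearest surface point wins
    return depth

# Example: three surface points projected onto a 4 x 4 grid.
cloud = [(0.1, 0.1, 0.50), (0.1, 0.1, 0.45), (0.9, 0.9, 0.60)]
img = make_depth_image(cloud, 4, 4)
```

In an actual implementation, the per-pixel depth values would then be mapped to colors according to distance, as described on the referenced website.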
The training data set generation unit 13 generates a training data set.
The blood vessel segment discrimination device 14 discriminates an aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device 11 using a deep learning model 14A.
In the example shown in
In addition, the training data set generation unit 13 generates a depth image for training (image shown on the upper right side of
Further, the training data set generation unit 13 generates, based on the abdominal CT image (image shown on the left side of
In addition, the training data set generation unit 13 generates a training data set, which is a set of the depth image for training (image shown in the upper right side of
In the example shown in
Specifically, since the risk of organ ischemia during REBOA increases if the estimation accuracy of the second blood vessel segment corresponding to Zone 2 of the aorta is low, the training unit 141 performs training of the deep learning model 14A such that the estimation accuracy of the second blood vessel segment corresponding to Zone 2 of the aorta using the trained deep learning model 14A is equal to or higher than a predetermined threshold value.
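The Zone 2 accuracy criterion can be sketched as follows: after training, the per-class pixel accuracy for the second blood vessel segment is computed and compared against a threshold. The class ids, the threshold value, and the tiny label maps are invented for illustration and do not reflect the embodiment's actual values.

```python
# Hedged sketch of the Zone 2 accuracy criterion.
ZONE2 = 2            # illustrative class id for the second blood vessel segment
THRESHOLD = 0.90     # illustrative accuracy threshold

def class_accuracy(pred, truth, cls):
    """Fraction of pixels of ground-truth class `cls` predicted correctly."""
    hits = total = 0
    for p, t in zip(pred, truth):
        if t == cls:
            total += 1
            hits += (p == t)
    return hits / total if total else 0.0

# Flattened label maps: 0 = other segment, 1..3 = Zone 1..3.
truth = [0, 1, 2, 2, 3, 2, 0]
pred  = [0, 1, 2, 2, 3, 0, 0]
acc = class_accuracy(pred, truth, ZONE2)   # 2 of 3 Zone 2 pixels correct
meets_criterion = acc >= THRESHOLD          # training would continue if False
```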
In the blood vessel segment discrimination system 1 according to the first embodiment, a segmentation model is used as the deep learning model 14A.
Specifically, in a first example of the blood vessel segment discrimination system 1 according to the first embodiment, as the deep learning model 14A, semantic segmentation models described on, for example, the following websites and the like are used.
In a second example of the blood vessel segment discrimination system 1 according to the first embodiment, as the deep learning model 14A, a segmentation model other than semantic segmentation, such as instance segmentation or panoptic segmentation described on, for example, the following website, may be used.
In the example shown in
That is, in the example shown in
The visualization device 15 generates a virtual image in which the estimation result of the aortic segment of the patient by the blood vessel segment discrimination device 14 is projected onto the abdomen of the patient. The visualization device 15 includes a virtual image generation unit 15A and a virtual image presentation unit 15B.
The virtual image generation unit 15A generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient. The virtual image presentation unit 15B presents the virtual image generated by the virtual image generation unit 15A to a user (for example, an operator or the like) of the blood vessel segment discrimination system 1.
In the first example of the blood vessel segment discrimination system 1 according to the first embodiment, the visualization device 15 projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient using the techniques described in, for example, Non-Patent Document 2, Non-Patent Document 3, and the following website.
A projection technique of the first example of the blood vessel segment discrimination system 1 according to the first embodiment is, for example, a technique using an external marker as a reference, a technique of superimposing information on the human body by manually performing registration, or the like.
In the second example of the blood vessel segment discrimination system 1 according to the first embodiment, as the visualization device 15, a product described on, for example, the following websites is used.
In the second example of the blood vessel segment discrimination system 1 according to the first embodiment, the visualization device 15 directly uses a real three-dimensional object (that is, the abdomen of the patient) as a marker, using a technique similar to techniques described on, for example, the following websites; that is, the visualization device 15 directly projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient without any external intervention.
In a third example of the blood vessel segment discrimination system 1 according to the first embodiment, as the visualization device 15, a product described on, for example, the following website is used.
In the third example of the blood vessel segment discrimination system 1 according to the first embodiment, the visualization device 15 directly uses a real three-dimensional object (that is, the abdomen of the patient) as a marker, using a technique similar to techniques described on, for example, the following websites; that is, the visualization device 15 directly projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient without any external intervention.
In the third example of the blood vessel segment discrimination system 1 according to the first embodiment, the visualization device 15 generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient using techniques similar to the techniques described on, for example, the following websites, and presents the virtual image to the user of the blood vessel segment discrimination system 1.
In the example shown in
Specifically, in step S11A, the training data set generation unit 13 generates the three-dimensional structure of the abdominal surface for training from any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device 14. Next, in step S11B, the training data set generation unit 13 generates the depth image for training from the three-dimensional structure of the abdominal surface for training generated in step S11A.
Next, in step S11C, the training data set generation unit 13 generates the training data set showing a correspondence relationship between each pixel in the depth image for training and any of the first blood vessel segment corresponding to Zone 1 of the aorta, the second blood vessel segment corresponding to Zone 2 of the aorta, the third blood vessel segment corresponding to Zone 3 of the aorta, and another segment (that is, a segment that does not correspond to any of the first blood vessel segment, the second blood vessel segment, and the third blood vessel segment), based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image.
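The labeling in step S11C can be sketched as assigning each depth-image pixel a class label (0 for another segment, 1 to 3 for Zone 1 to Zone 3) from zone boundaries read off the abdominal CT image. The row-based layout and the boundary values are illustrative assumptions; an actual implementation would derive pixel-wise labels from the registered CT, MRI, or MRA image.

```python
# Hypothetical sketch of step S11C: build a per-row label map from
# zone boundaries (row ranges) obtained from the abdominal CT image.

def label_rows(n_rows, zone_bounds):
    """zone_bounds: (start, end) row ranges for Zones 1-3 (end exclusive)."""
    labels = [0] * n_rows              # 0 = another segment
    for zone, (start, end) in enumerate(zone_bounds, start=1):
        for r in range(start, end):
            labels[r] = zone           # 1..3 = Zone 1..Zone 3
    return labels

# e.g. in an 8-row depth image, Zone 1 covers rows 1-2, Zone 2 rows 3-4,
# and Zone 3 rows 5-6; the remaining rows belong to another segment.
labels = label_rows(8, [(1, 3), (3, 5), (5, 7)])
```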
Next, in step S12, the training unit 141 of the blood vessel segment discrimination device 14 performs training of the deep learning model 14A using the training data set generated in step S11.
Next, in step S13, the three-dimensional structure recognition device 11 recognizes the three-dimensional structure of the abdomen of the patient who is the discrimination target of the aortic segment by the blood vessel segment discrimination device 14.
Next, in step S14, the depth image generation unit 12 generates the depth image of the abdomen of the patient from the three-dimensional structure of the abdominal surface of the patient recognized in step S13.
Next, in step S15, the blood vessel segment discrimination device 14 discriminates the aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized in step S13, using the deep learning model 14A trained in step S12.
Specifically, in step S15, the estimation unit 142 of the blood vessel segment discrimination device 14 uses the trained deep learning model 14A to estimate whether each pixel in the depth image of the abdomen of the patient generated in step S14 corresponds to any of the first blood vessel segment corresponding to Zone 1 of the aorta, the second blood vessel segment corresponding to Zone 2 of the aorta, the third blood vessel segment corresponding to Zone 3 of the aorta, and another segment (that is, a segment that does not correspond to any of the first blood vessel segment, the second blood vessel segment, and the third blood vessel segment).
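The per-pixel estimation in step S15 can be sketched as selecting, for each pixel, the class with the highest model score. The class names and score values below are invented for illustration and are not actual model output.

```python
# Minimal sketch of step S15: the trained model is assumed to emit one
# score per class for each pixel; the estimated segment is the class
# with the highest score.
CLASSES = ["other", "zone1", "zone2", "zone3"]

def classify_pixel(scores):
    """Return the class name with the highest score for one pixel."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return CLASSES[best]

# Illustrative scores for two pixels (other, zone1, zone2, zone3).
p1 = classify_pixel([0.1, 0.7, 0.1, 0.1])
p2 = classify_pixel([0.6, 0.1, 0.2, 0.1])
```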
Next, in step S16, the virtual image generation unit 15A of the visualization device 15 generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient. In addition, the virtual image presentation unit 15B of the visualization device 15 presents the virtual image generated by the virtual image generation unit 15A to the user of the blood vessel segment discrimination system 1.
Therefore, with the blood vessel segment discrimination system 1 according to the first embodiment, the user of the blood vessel segment discrimination system 1 can easily identify Zone 1, Zone 2, and Zone 3 of the aorta of the patient even in a situation in which the X-ray fluoroscopy or ultrasound device cannot be used. As a result, it is possible to improve the availability of REBOA in a situation in which the X-ray fluoroscopy or ultrasound device cannot be used.
As described above, in the example shown in
In another example, as in the example shown in
As described above, in the example shown in
In another example, the blood vessel segment discrimination device 14 may output, as the estimation result of the aortic segment of the patient, a length from a landmark part (that is, a site suitable for puncture (that is, a catheter insertion site)) of the patient to the first blood vessel segment (Zone 1) of the patient, a length from the landmark part of the patient to the second blood vessel segment (Zone 2) of the patient, and a length from the landmark part of the patient to the third blood vessel segment (Zone 3) of the patient, for example, as numerical values or the like.
Further, in another example, as the visualization device 15, a product described on, for example, the following website is used.
Further, in this example, the length from the landmark part (catheter insertion site) of the patient to the first blood vessel segment (Zone 1) of the patient, the length from the landmark part of the patient to the second blood vessel segment (Zone 2) of the patient, and the length from the landmark part of the patient to the third blood vessel segment (Zone 3) of the patient are measured using a technique described on the following website.
Specifically, in this example, the user of the blood vessel segment discrimination system 1 designates a point corresponding to the landmark part (catheter insertion site) of the patient and a point corresponding to, for example, the first blood vessel segment (Zone 1) of the patient on the virtual image presented to the user of the blood vessel segment discrimination system 1, whereby a length between these two points is measured and presented to the user of the blood vessel segment discrimination system 1. Therefore, the user of the blood vessel segment discrimination system 1 can determine a catheter length inserted into the patient using the presented result.
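The two-point measurement described above reduces to computing the straight-line distance between the designated points. The coordinates below (in centimeters) are invented for illustration; the actual points are designated by the user on the virtual image.

```python
# Hedged sketch of the two-point measurement: the user designates the
# catheter insertion site and a point on a zone, and the straight-line
# distance between them is reported.
import math

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

insertion_site = (0.0, 0.0, 0.0)   # illustrative landmark (puncture site)
zone1_point = (3.0, 4.0, 0.0)      # illustrative designated point on Zone 1
length_cm = distance(insertion_site, zone1_point)
```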
In addition, in another example, as the visualization device 15, a product described on, for example, the following website is used.
Further, in this example, the length from the landmark part (catheter insertion site) of the patient to the first blood vessel segment (Zone 1) of the patient, the length from the landmark part of the patient to the second blood vessel segment (Zone 2) of the patient, and the length from the landmark part of the patient to the third blood vessel segment (Zone 3) of the patient are measured using a technique described on the following website.
In this example as well, the user of the blood vessel segment discrimination system 1 designates a point corresponding to the landmark part (catheter insertion site) of the patient and a point corresponding to, for example, the first blood vessel segment (Zone 1) of the patient on the virtual image presented to the user of the blood vessel segment discrimination system 1, whereby a length between these two points is measured and presented to the user of the blood vessel segment discrimination system 1. Therefore, the user of the blood vessel segment discrimination system 1 can determine a catheter length inserted into the patient using the presented result.
In the two examples described above, as a balloon catheter to be inserted into the patient, a balloon catheter having length marks as described on, for example, the following website is used.
In still another example, the training data set generation unit 13 may generate the training data set for learning a running state (meandering state) of a blood vessel into which the balloon catheter is inserted, based on any of the abdominal CT image (image shown on the left side of
In this example, the training unit 141 of the blood vessel segment discrimination device 14 performs training of the deep learning model 14A using the training data set. The estimation unit 142 of the blood vessel segment discrimination device 14 estimates a running state of the blood vessel of the patient into which the balloon catheter is inserted using the trained deep learning model 14A. Further, in a case where the user of the blood vessel segment discrimination system 1 designates a point corresponding to the landmark part (catheter insertion site) of the patient and a point corresponding to, for example, the first blood vessel segment (Zone 1) of the patient on the virtual image presented to the user of the blood vessel segment discrimination system 1, a length along the blood vessel between these two points is estimated and output based on a straight line distance between these two points and an estimation result of the estimation unit 142 of the blood vessel segment discrimination device 14.
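One simple way to combine the straight-line distance with the estimated running (meandering) state, as described above, is to scale the chord length by a tortuosity factor. The sketch below assumes the estimation result can be summarized as such a factor (path length divided by chord length, at least 1.0); all names and values are hypothetical:

```python
import math

def straight_line_distance(p1, p2):
    """Chord length between the two designated 3-D points (mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def length_along_vessel(p1, p2, tortuosity_index):
    """Estimate the intravascular path length from the straight-line distance.

    `tortuosity_index` stands in for the output of the estimation unit that
    learned the running (meandering) state of the vessel; representing that
    output as a single scalar is an assumption of this sketch.
    """
    return straight_line_distance(p1, p2) * tortuosity_index

insertion_site = (120.0, 80.0, 0.0)
zone1_point = (120.0, 80.0, 400.0)
estimated_tortuosity = 1.08  # hypothetical value for a mildly meandering aorta

print(round(length_along_vessel(insertion_site, zone1_point, estimated_tortuosity), 1))
# → 432.0
```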
In a modification example of this example, the training data set generation unit 13 may generate a training data set for learning a length along the blood vessel between the point corresponding to the landmark part (catheter insertion site) of the patient and the point corresponding to, for example, the first blood vessel segment (Zone 1) of the patient, based on any of the abdominal CT image (image shown on the left side of
In still another example, the visualization device 15 may generate a virtual image in which a balloon of the balloon catheter inserted into the blood vessel of the patient is in a state of being placed in any of the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient (that is, a state in which the balloon is expanded), and may present the virtual image to the user of the blood vessel segment discrimination system 1. In this example, the user of the blood vessel segment discrimination system 1 can easily imagine a state in which the balloon is actually placed in any of the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient.
In testing the blood vessel segment discrimination system 1 of the first embodiment, the present inventor researched and developed the deep learning model 14A, which instantly discriminates the aortic segment from the body surface of a patient. Specifically, the present inventor estimated the aortic segment from three-dimensional information of the body surface of a patient by combining a segmentation technique based on advanced deep learning with depth information of the patient obtained from the reflection of infrared light emitted from a camera corresponding to the three-dimensional structure recognition device 11.
The present inventor specializes in medical radiology technology, machine learning, and mixed reality, and has been conducting research on, for example, visualizing previously invisible radiation exposure through mixed reality. Medical applications of mixed reality have advanced, and mixed reality has started to play an active role in the three-dimensional visualization of organs and in telemedicine. The present inventor considered that intuitively visualizing other information that is normally invisible could be of use in supporting medical care, and focused on REBOA, which is performed in life-saving medical care situations even when an X-ray fluoroscopy device is physically unusable. The present research was devised in consideration of the fact that estimating the Zone segments to be visualized with high accuracy using deep learning technology enables visualization that is both highly accurate and immersive, which differs significantly from the related art and remarkably improves the quality of life-saving medical care.
The present inventor performed the following in order to construct a model that makes it possible to estimate the Zone using a technique of segmentation by deep learning and to establish a trained model that makes it possible to perform highly accurate estimation.
A training data set plays an extremely important role in deep learning, and the degree of completion of the training data set greatly affects the results of research. Therefore, the present inventor first focused on the preparation of a training data set. A three-dimensional body surface image created from abdominal CT images was used for training. The present inventor mainly used data owned by Teikyo University Hospital as the CT images. Further, in order to avoid bias in the training data, the present inventor also utilized an open access CT image database. The present inventor, who is also a radiological technologist, performed segmentation of the training data and used the segmented training data as training labels. Further, the present inventor collected information on age and body weight in preparation for a case where the prediction accuracy would be insufficient with a body surface image alone, and added this information to the training data as necessary.
The present inventor performed training of a semantic segmentation model using the training data set. The present inventor appropriately performed hyperparameter searches and modifications of the network structure during the training to construct an optimal training model. The present inventor comprehensively evaluated the trained model using the reproducibility (recall), the precision, the Dice coefficient, and the like. Among these, particular attention was paid to the reproducibility of Zone 2, which carries a high risk of organ ischemia, aiming at the creation of a model capable of discriminating a high-risk region with high accuracy.
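The evaluation metrics mentioned above — reproducibility (recall, in machine learning terms), precision, and the Dice coefficient — can each be computed per class from a predicted label map and a ground-truth label map. A minimal sketch with hypothetical toy labels (0 = other, 1 to 3 = Zone 1 to Zone 3):

```python
import numpy as np

def segmentation_metrics(pred, label, cls):
    """Per-class recall, precision, and Dice coefficient for label maps."""
    p = (pred == cls)          # points predicted as this class
    t = (label == cls)         # points truly belonging to this class
    tp = np.logical_and(p, t).sum()
    recall = tp / t.sum() if t.sum() else 0.0
    precision = tp / p.sum() if p.sum() else 0.0
    dice = 2 * tp / (p.sum() + t.sum()) if (p.sum() + t.sum()) else 0.0
    return recall, precision, dice

# Hypothetical per-point labels: 0 = other, 1-3 = Zone 1-3.
label = np.array([0, 1, 1, 2, 2, 2, 3, 3, 0, 0])
pred  = np.array([0, 1, 1, 2, 2, 0, 3, 3, 3, 0])

for zone in (1, 2, 3):
    r, p, d = segmentation_metrics(pred, label, zone)
    print(f"Zone {zone}: recall={r:.2f} precision={p:.2f} dice={d:.2f}")
```

Focusing on the recall of Zone 2, as described above, corresponds to minimizing false negatives for the class whose misestimation carries the highest clinical risk.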
Further, the present inventor performed the following in order to develop an application that can intuitively visualize a Zone estimated using a trained model, using mixed reality that seamlessly combines reality and virtuality.
In the research, the present inventor developed an application that operates on an iPad Pro ("iPad" is a registered trademark) (manufactured by Apple Inc.), which is a tablet-type terminal, and that estimates and visualizes the Zone using the trained model. The iPad Pro includes an infrared camera and can perform three-dimensional measurement of real space. Three-dimensional information of the patient body surface imaged by the infrared camera is input to the trained model, and the estimation result of the Zone is projected in real time onto the body surface imaged by a normal camera. As a result, the user of the blood vessel segment discrimination system 1 can visualize the Zone segment simply by holding the iPad Pro over the human body. In this research, the iPad Pro functions as both the three-dimensional structure recognition device 11 and the visualization device 15.
In addition, in the research, the present inventor developed an application that operates on HoloLens 2 ("HOLOLENS" is an international registered trademark) (manufactured by Microsoft Corporation), which is a headset-type terminal, and that estimates and visualizes the Zone using the trained model. The HoloLens 2 also includes an infrared camera and can perform three-dimensional measurement of real space, as with the iPad Pro. Three-dimensional information of the body surface is used as an input to the trained model, and the estimation result of the Zone is projected in real time onto the body surface visible through the see-through HoloLens 2. As a result, the user of the blood vessel segment discrimination system 1 can visualize the Zone segment simply by wearing the HoloLens 2 and looking at the patient. In this research, the HoloLens 2 functions as both the three-dimensional structure recognition device 11 and the visualization device 15.
Since the operating system and the hardware configuration of the HoloLens 2 are different from those of the iPad Pro, it is necessary to change the surface layer portion of the application, but the portion related to the Zone segment estimation, which is the core of the system, can be shared (that is, either the iPad Pro or the HoloLens 2 can be used as the visualization device 15).
As a result of the research described above, the present inventor found that the Zone segment can be estimated with sufficiently high accuracy from a three-dimensional image of the body surface of a patient. Specifically, the present inventor constructed a data set from an extremely small subset of the CT images and performed a test of segmentation by deep learning. As a result, the reproducibility of Zone 1 reached a maximum of 0.99, the reproducibility of Zone 2 reached a maximum of 0.95, and the reproducibility of Zone 3 reached a maximum of 0.98.
The present inventor studied an implementation method of visualization using mixed reality based on the knowledge acquired so far, and realized a technique of projecting the Zone onto the body surface of a patient (that is, operating an actual device as the blood vessel segment discrimination system 1).
Hereinafter, a second embodiment of the blood vessel segment discrimination system, the blood vessel segment discrimination method, and the program according to the present invention will be described.
The blood vessel segment discrimination system 1 according to the second embodiment is configured in the same manner as the blood vessel segment discrimination system 1 according to the first embodiment described above, except for the following points. Therefore, with the blood vessel segment discrimination system 1 of the second embodiment, the same effects as those of the blood vessel segment discrimination system 1 of the first embodiment described above can be obtained, except for the following points.
As described above, in the blood vessel segment discrimination system 1 according to the first embodiment, the segmentation model is used as the deep learning model 14A.
On the other hand, in the blood vessel segment discrimination system 1 of the second embodiment, as the deep learning model 14A, a pseudo image generation technique such as a generative adversarial network (GAN) described on, for example, the following websites is used.
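As a concrete illustration of the adversarial objective underlying a GAN, the following minimal sketch computes the discriminator loss and the (non-saturating) generator loss on one-dimensional toy data. This is not the actual model of the second embodiment — generating pseudo images from a body surface would require a conditional, image-based GAN — and all function names and values here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Logistic-regression discriminator: probability that x is a real sample.
    return 1.0 / (1.0 + np.exp(-(w[0] * x + w[1])))

def generator(z, theta):
    # Affine generator: maps latent noise z to a pseudo sample.
    return theta[0] * z + theta[1]

def gan_losses(real, fake, w):
    d_real = discriminator(real, w)
    d_fake = discriminator(fake, w)
    # Discriminator maximizes log D(real) + log(1 - D(fake)).
    d_loss = -np.mean(np.log(d_real + 1e-8) + np.log(1.0 - d_fake + 1e-8))
    # Generator maximizes log D(fake) (non-saturating form).
    g_loss = -np.mean(np.log(d_fake + 1e-8))
    return d_loss, g_loss

real = rng.normal(2.0, 0.5, size=64)   # toy "real" training samples
z = rng.normal(size=64)                # latent noise
fake = generator(z, theta=(1.0, 0.0))  # pseudo samples before any training
d_loss, g_loss = gan_losses(real, fake, w=(1.0, 0.0))
print(d_loss, g_loss)
```

In actual training, the two losses are minimized alternately with respect to the discriminator and generator parameters.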
Hereinafter, a third embodiment of the blood vessel segment discrimination system, the blood vessel segment discrimination method, and the program according to the present invention will be described.
A blood vessel segment discrimination system 2 according to the third embodiment is configured in the same manner as the blood vessel segment discrimination system 1 according to the first embodiment described above, except for the following points. Therefore, with the blood vessel segment discrimination system 2 according to the third embodiment, the same effects as those of the blood vessel segment discrimination system 1 according to the first embodiment described above can be obtained, except for the following points.
In the example shown in
The three-dimensional structure recognition device 21 recognizes a three-dimensional structure of the abdomen of the patient in the same manner as the three-dimensional structure recognition device 11 shown in
The training data set generation unit 23 generates a training data set.
The blood vessel segment discrimination device 24 discriminates the aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized by the three-dimensional structure recognition device 21 using a deep learning model 24A.
In the example of the blood vessel segment discrimination system 2 according to the third embodiment, the training data set generation unit 23 generates the three-dimensional structure of the abdominal surface for training from any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device 24.
Further, the training data set generation unit 23 generates information showing a correspondence relationship between each point on the three-dimensional structure of the abdominal surface for training and any of the first blood vessel segment corresponding to Zone 1 of the aorta, the second blood vessel segment corresponding to Zone 2 of the aorta, the third blood vessel segment corresponding to Zone 3 of the aorta, and another segment, based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image.
In addition, the training data set generation unit 23 generates a training data set, which is a set of the three-dimensional structure of the abdominal surface for training and the information described above, as supervised data used for training the deep learning model 24A.
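The supervised pair described above can be pictured as an (N, 3) array of surface points bundled with a length-N array of per-point segment labels. A minimal sketch, in which the function name, dictionary layout, and toy coordinates are all hypothetical:

```python
import numpy as np

def make_training_sample(surface_points, point_labels):
    """Bundle one supervised training sample for the deep learning model 24A.

    `surface_points` is an (N, 3) array of abdominal-surface coordinates
    generated from a CT/MRI/MRA volume, and `point_labels` is a length-N
    array mapping each point to 0 (another segment) or 1-3 (Zone 1-3).
    """
    surface_points = np.asarray(surface_points, dtype=float)
    point_labels = np.asarray(point_labels, dtype=int)
    assert surface_points.shape == (len(point_labels), 3)
    assert set(np.unique(point_labels)) <= {0, 1, 2, 3}
    return {"surface": surface_points, "labels": point_labels}

# Hypothetical toy sample: four surface points with their zone correspondence.
sample = make_training_sample(
    [[0.0, 0.0, 0.0], [0.0, 0.0, 50.0], [0.0, 0.0, 100.0], [0.0, 0.0, 150.0]],
    [0, 1, 2, 3],
)
print(sample["surface"].shape, sample["labels"].tolist())
```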
In the example shown in
Specifically, in a case where REBOA is performed, the risk of organ ischemia increases if the estimation accuracy of the second blood vessel segment corresponding to Zone 2 of the aorta is low. Therefore, the training unit 241 performs training of the deep learning model 24A such that the estimation accuracy of the second blood vessel segment corresponding to Zone 2 of the aorta using the trained deep learning model 24A is equal to or higher than a predetermined threshold value.
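The Zone 2 accuracy criterion described above can be sketched as a stopping condition on a validation metric. In the sketch below, the threshold value, the recall computation, and the per-epoch predictions (which stand in for one training epoch followed by validation inference) are all hypothetical:

```python
import numpy as np

ZONE2 = 2
RECALL_THRESHOLD = 0.95  # hypothetical "predetermined threshold value"

def zone2_recall(pred, label):
    """Fraction of true Zone 2 points that the model also predicts as Zone 2."""
    true_zone2 = (label == ZONE2)
    if not true_zone2.any():
        return 0.0
    return np.logical_and(pred == ZONE2, true_zone2).mean() / true_zone2.mean()

# Toy validation labels and one prediction per simulated epoch; a real
# implementation would update the model weights and rerun inference instead.
label = np.array([0, 1, 2, 2, 2, 3])
epoch_predictions = [
    np.array([0, 1, 0, 0, 2, 3]),  # epoch 1: Zone 2 recall 1/3
    np.array([0, 1, 2, 0, 2, 3]),  # epoch 2: Zone 2 recall 2/3
    np.array([0, 1, 2, 2, 2, 3]),  # epoch 3: Zone 2 recall 3/3
]

for epoch, pred in enumerate(epoch_predictions, start=1):
    recall = zone2_recall(pred, label)
    print(f"epoch {epoch}: Zone 2 recall = {recall:.2f}")
    if recall >= RECALL_THRESHOLD:
        break  # stop only once the high-risk Zone 2 is estimated reliably
```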
In the blood vessel segment discrimination system 2 according to the third embodiment, the segmentation model is used as the deep learning model 24A, in the same manner as in the blood vessel segment discrimination system 1 according to the first embodiment.
Specifically, in a first example of the blood vessel segment discrimination system 2 according to the third embodiment, the semantic segmentation model described above is used as the deep learning model 24A.
In a second example of the blood vessel segment discrimination system 2 according to the third embodiment, as the deep learning model 24A, a segmentation model other than semantic segmentation, such as instance segmentation and panoptic segmentation described above, may be used.
In the example shown in
That is, in the example shown in
The visualization device 25 generates a virtual image in which the estimation result of the aortic segment of the patient by the blood vessel segment discrimination device 24 is projected onto the abdomen of the patient. The visualization device 25 includes a virtual image generation unit 25A and a virtual image presentation unit 25B.
The virtual image generation unit 25A generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient. The virtual image presentation unit 25B presents the virtual image generated by the virtual image generation unit 25A to the user of the blood vessel segment discrimination system 2.
In the first example of the blood vessel segment discrimination system 2 according to the third embodiment, the visualization device 25 projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient, in the same manner as in the first example of the blood vessel segment discrimination system 1 according to the first embodiment.
In the second example of the blood vessel segment discrimination system 2 according to the third embodiment, the visualization device 25 directly uses a real three-dimensional object (that is, the abdomen of the patient) as a marker, that is, directly projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient without any external intervention, in the same manner as in the second example of the blood vessel segment discrimination system 1 according to the first embodiment.
In a third example of the blood vessel segment discrimination system 2 according to the third embodiment, the visualization device 25 directly uses the real three-dimensional object (that is, the abdomen of the patient) as a marker, that is, directly projects the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient onto the abdomen of the patient without any external intervention, in the same manner as in the third example of the blood vessel segment discrimination system 1 according to the first embodiment.
In the third example of the blood vessel segment discrimination system 2 according to the third embodiment, the visualization device 25 generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient, and presents the virtual image to the user of the blood vessel segment discrimination system 2, in the same manner as in the third example of the blood vessel segment discrimination system 1 according to the first embodiment.
In the example shown in
Specifically, in step S21A, the training data set generation unit 23 generates the three-dimensional structure of the abdominal surface for training from any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image of a person different from a patient who is a discrimination target of the aortic segment by the blood vessel segment discrimination device 24. Next, in step S21C, the training data set generation unit 23 generates the training data set showing a correspondence relationship between each point on the three-dimensional structure of the abdominal surface for training and any of the first blood vessel segment corresponding to Zone 1 of the aorta, the second blood vessel segment corresponding to Zone 2 of the aorta, the third blood vessel segment corresponding to Zone 3 of the aorta, and another segment (that is, a segment that does not correspond to any of the first blood vessel segment, the second blood vessel segment, and the third blood vessel segment) based on any of the abdominal CT image, the abdominal MRI image, and the abdominal MRA image.
Next, in step S22, the training unit 241 of the blood vessel segment discrimination device 24 performs training of the deep learning model 24A using the training data set generated in step S21.
Next, in step S23, the three-dimensional structure recognition device 21 recognizes the three-dimensional structure of the abdomen of the patient who is the discrimination target of the aortic segment by the blood vessel segment discrimination device 24.
Next, in step S25, the blood vessel segment discrimination device 24 discriminates the aortic segment of the patient whose three-dimensional structure of the abdominal surface is recognized in step S23 using the trained deep learning model 24A that has been trained in step S22.
Specifically, in step S25, the estimation unit 242 of the blood vessel segment discrimination device 24 uses the trained deep learning model 24A to estimate whether each point on the three-dimensional structure of the abdomen of the patient recognized in step S23 corresponds to any of the first blood vessel segment corresponding to Zone 1 of the aorta, the second blood vessel segment corresponding to Zone 2 of the aorta, the third blood vessel segment corresponding to Zone 3 of the aorta, and another segment (that is, a segment that does not correspond to any of the first blood vessel segment, the second blood vessel segment, and the third blood vessel segment).
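The per-point estimation in step S25 amounts to assigning each surface point the most probable of the four classes output by the segmentation head. A minimal sketch in which the class scores are hypothetical model outputs:

```python
import numpy as np

CLASS_NAMES = {0: "another segment", 1: "Zone 1", 2: "Zone 2", 3: "Zone 3"}

def classify_points(logits):
    """Assign each surface point the class with the highest score.

    `logits` has shape (num_points, 4): one score per point for the classes
    another segment / Zone 1 / Zone 2 / Zone 3, as produced by a semantic
    segmentation head.
    """
    return np.argmax(logits, axis=1)

logits = np.array([
    [2.1, 0.3, 0.1, 0.2],  # most likely "another segment"
    [0.2, 1.9, 0.4, 0.1],  # most likely Zone 1
    [0.1, 0.5, 2.2, 0.3],  # most likely Zone 2
    [0.3, 0.2, 0.4, 1.8],  # most likely Zone 3
])
for point_idx, cls in enumerate(classify_points(logits)):
    print(f"point {point_idx}: {CLASS_NAMES[cls]}")
```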
Next, in step S26, the virtual image generation unit 25A of the visualization device 25 generates a virtual image in which the first blood vessel segment (Zone 1), the second blood vessel segment (Zone 2), and the third blood vessel segment (Zone 3) of the patient are projected onto the abdominal surface of the patient. In addition, the virtual image presentation unit 25B of the visualization device 25 presents the virtual image generated by the virtual image generation unit 25A to the user of the blood vessel segment discrimination system 2.
Therefore, with the blood vessel segment discrimination system 2 according to the third embodiment, the user of the blood vessel segment discrimination system 2 can easily identify Zone 1, Zone 2, and Zone 3 of the aorta of the patient even in a situation in which an X-ray fluoroscopy device or an ultrasound device cannot be used. As a result, it is possible to improve the availability of REBOA in a situation in which an X-ray fluoroscopy device or an ultrasound device cannot be used.
As described above, in the example shown in
As described above, in the example shown in
In another example, the blood vessel segment discrimination device 24 may output, as the estimation result of the aortic segment of the patient, a length from a landmark part (that is, a site suitable for puncture (that is, a catheter insertion site)) of the patient to the first blood vessel segment (Zone 1) of the patient, a length from the landmark part of the patient to the second blood vessel segment (Zone 2) of the patient, and a length from the landmark part of the patient to the third blood vessel segment (Zone 3) of the patient, for example, in numerical values or the like.
Hereinafter, a fourth embodiment of the blood vessel segment discrimination system, the blood vessel segment discrimination method, and the program according to the present invention will be described.
A blood vessel segment discrimination system 2 according to the fourth embodiment is configured in the same manner as the blood vessel segment discrimination system 2 according to the third embodiment described above, except for the following points. Therefore, with the blood vessel segment discrimination system 2 according to the fourth embodiment, the same effects as those of the blood vessel segment discrimination system 2 according to the third embodiment described above can be obtained, except for the following points.
As described above, in the blood vessel segment discrimination system 2 according to the third embodiment, the segmentation model is used as the deep learning model 24A.
On the other hand, in the blood vessel segment discrimination system 2 according to the fourth embodiment, as the deep learning model 24A, a pseudo image generation technique such as the GAN described above is used.
Hitherto, the embodiments of the present invention have been described in detail with reference to the drawings. A specific configuration is not limited to the embodiments, and can be appropriately modified within the scope not departing from the concept of the present invention. The configurations described above in each of the embodiments and each of the examples may be combined.
An entirety or a part of the blood vessel segment discrimination system 1 or 2 in the embodiments described above may be realized by dedicated hardware, or may be realized by a memory and a microprocessor.
An entirety or a part of the blood vessel segment discrimination system 1 may include a memory and a central processing unit (CPU), and may realize the functions thereof by loading a program for realizing the function of each unit included in each system into the memory and executing the program.
A program for realizing all or a part of the functions of the blood vessel segment discrimination system 1 may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read and executed by a computer system to perform processing of each unit. The “computer system” referred to herein includes an OS and hardware such as a peripheral device. In addition, the “computer system” also includes a homepage providing environment (or a display environment) in a case where a WWW system is used.
In addition, the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, and a CD-ROM, and a storage device such as a hard disk built in the computer system. Furthermore, the “computer-readable recording medium” includes a medium which dynamically holds the program for a short period of time as in a communication line in a case where the program is transmitted through a network such as the Internet or a communication line such as a telephone line, and a medium which holds the program for a certain period of time as in a volatile memory inside the computer system serving as a server or a client in that case. In addition, the program may be provided to realize a part of the above-described functions, or may be provided to be capable of realizing the above-described functions in combination with a program already recorded on the computer system.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2022-062934 | Apr 2022 | JP | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/JP2023/007803 | 3/2/2023 | WO |