This application claims the benefit of priority of Japanese Patent Application No. 2021-190452, filed Nov. 24, 2021, and Japanese Patent Application No. 2021-190453, filed Nov. 24, 2021, the entire contents of all of which are incorporated herein by reference.
Embodiments described herein relate generally to an X-ray diagnostic apparatus, a medical image processing apparatus, and a medical image processing method.
Spectral imaging techniques are known in which an X-ray diagnostic apparatus acquires X-ray images of an object corresponding to each of a plurality of different X-ray energies by varying the energy of the X-rays, and discriminates materials in the object by utilizing the fact that X-ray absorption characteristics differ between materials.
Spectral imaging technology can perform material discrimination processing to obtain thickness data of each material, as well as energy subtraction processing to suppress specific material components.
In the material discrimination process, a material discrimination image showing thickness distributions of, for example, two specific materials (e.g., bone and soft tissue) can be generated from two X-ray images corresponding to two continuous X-ray spectra (X-ray energies).
In spectral imaging techniques such as material discrimination processing and energy subtraction processing, it is preferable to set a large difference in X-ray energy in order to improve processing accuracy.
For example, when changing the X-ray energy by changing the tube voltage, it is preferable to apply a tube voltage as high as possible (for example, 110-140 kV) for imaging using a high tube voltage, and to apply a tube voltage as low as possible (for example, 40-60 kV) for imaging using a low tube voltage.
On the other hand, in normal clinical practice, an X-ray image acquired by applying an intermediate tube voltage (for example, 70-80 kV) is more suitable for user observation than an X-ray image acquired by applying these extreme tube voltages.
Therefore, although the X-ray images obtained by X-ray imaging in spectral imaging technology can be used for material discrimination processing and for visualizing differences in the constituent elements of the object, it is difficult to use such X-ray images in usual clinical practice.
As a method of obtaining an image suitable for normal clinical use while increasing the processing accuracy of spectral imaging technology, it is conceivable to separately perform X-ray imaging using an intermediate tube voltage in addition to X-ray imaging using extreme tube voltages; however, this would increase the exposure dose of the object.
Hereinbelow, a description will be given of an X-ray diagnostic apparatus, a medical image processing apparatus, and a medical image processing method according to embodiments of the present invention with reference to the drawings.
An X-ray diagnostic apparatus according to an embodiment includes an image acquisition unit and a virtual image generation unit. The image acquisition unit acquires a two-dimensional first X-ray image based on X-ray imaging using a first X-ray energy, and a two-dimensional second X-ray image based on X-ray imaging using a second X-ray energy different from the first X-ray energy. The virtual image generation unit generates a two-dimensional virtual third X-ray image that simulates an X-ray image using a third X-ray energy different from the first X-ray energy and the second X-ray energy, based on the first X-ray image and the second X-ray image.
An X-ray diagnostic apparatus, a medical image processing apparatus, and a medical image processing method according to an embodiment of the present invention can use spectral imaging technology, and acquire X-ray images that correspond to each of a plurality of continuous X-ray spectra (X-ray energies).
As shown in
The imaging apparatus 20 is typically installed in an examination room and is configured to generate image data about an object. The console 30, which is an example of a medical image processing apparatus, is installed, for example, in an operation room adjacent to the examination room, and generates and displays an X-ray image based on the image data.
The console 30 may be installed in the examination room where the imaging apparatus 20 is installed, or may be connected to the imaging apparatus 20 via a network and installed in a remote location away from the examination room.
The imaging apparatus 20 has an X-ray tube 21, an X-ray movable diaphragm 22, an FPD 23, a tabletop 24, a display 25, and a controller 26.
A high voltage is applied to the X-ray tube 21 to generate X-rays. The tube voltage applied to the X-ray tube 21 is controlled by the processing circuitry 34 of the console 30.
The X-ray movable diaphragm 22 may be configured of a plurality of lead plates for narrowing the irradiation range of the X-rays generated by the X-ray tube 21, and may form a slit by combining the plurality of lead plates. For example, the X-ray movable diaphragm 22 has two pairs of movable blades, and the irradiation range of X-rays emitted from the X-ray tube 21 is adjusted by opening and closing each pair of movable blades.
The FPD 23 is configured of a flat panel detector (FPD) having a plurality of X-ray detection elements (a group of imaging elements), and detects the X-rays irradiated onto the FPD 23. Based on the detected X-rays, the FPD 23 outputs image data of X-ray fluoroscopic images and X-ray radiographic images at a predetermined frame rate. Note that, hereinafter, X-ray fluoroscopic images and X-ray radiographic images are collectively referred to as X-ray images. This image data is provided to the console 30.
The FPD 23 has, more specifically, a plurality of X-ray detection elements configured by semiconductor elements that store signal charges corresponding to the amount of incident X-rays. The plurality of X-ray detection elements is arranged in a matrix. As the FPD 23, a CMOS-FPD or the like can be used.
The X-ray tube 21 and the FPD 23 may be arranged to face each other with the object placed on the tabletop 24 interposed therebetween. For example, the X-ray tube 21 and the FPD 23 may be supported at both ends of a support member such as a C-arm so as to face each other across the object as shown in
Although
In addition, although the single-plane X-ray diagnostic apparatus 10 configured with one C-arm is illustrated in
Also, the C-arm may hold the X-ray tube 21 and the FPD 23 such that the distance between the X-ray tube focus and the X-ray detector (SID: Source Image receptor Distance) can be changed.
A tabletop 24 is provided above the bed, and an object is placed thereon. A high voltage power supply (not shown) is controlled by the processing circuitry 34 of the console 30. The high voltage power supply includes a high voltage generation unit having a function of generating a high voltage to be applied to the X-ray tube 21, and an X-ray control apparatus for controlling output voltage according to the X-rays emitted by the X-ray tube 21. The high voltage generation unit may be of a transformer type or an inverter type.
The display 25 includes one or more display areas, and displays various information such as images generated by the processing circuitry 34. The display 25 is arranged at a position visible to the user in the examination room, and is configured of a general display output apparatus such as a liquid crystal display or an OLED (Organic Light Emitting Diode) display.
Controller 26 has at least a processor and memory. The controller 26 is controlled by the console 30 according to the program stored in this memory, and comprehensively controls each component of the imaging apparatus 20. For example, the controller 26 is controlled by the console 30 to image an object with a plurality of continuous X-ray spectra, and generate X-ray image data corresponding to each continuous X-ray spectrum (X-ray energy) to provide the generated X-ray image data to the console 30.
On the other hand, the console 30 has a display 31, an input interface 32, a memory 33, and processing circuitry 34.
The display 31 is configured of a general display output apparatus such as a liquid crystal display or an OLED (Organic Light Emitting Diode) display, and displays various information such as images generated by the processing circuitry 34 under the control of the processing circuitry 34.
The input interface 32 is implemented by an input device, such as a trackball, a switch, a button, a mouse, a keyboard, a touch pad that performs an input operation by touching an operation surface, a non-contact input interface using an optical sensor, a voice input interface, and the like. The input interface 32 also outputs an input operation signal corresponding to a user's operation to the processing circuitry 34. Further, the input interface 32 includes an exposure switch that controls the on/off of X-ray radiation.
The memory 33 has a configuration including a processor-readable recording medium such as a magnetic or optical recording medium or a semiconductor memory. Some or all of the programs and data in the storage medium of the memory 33 may be downloaded via an electronic network, or provided to the memory 33 via a portable storage medium such as an optical disc.
The processing circuitry 34 implements a function of comprehensively controlling the X-ray diagnostic apparatus 10. In particular, the processing circuitry 34 reads out and executes the image processing program stored in the memory 33, thereby performing processing for obtaining an image suitable for normal clinical use based on the X-ray images acquired by the spectral imaging technology, while improving the processing accuracy of the spectral imaging technology.
The processor of the processing circuitry 34 implements an image acquisition function 341, a material discrimination function 342, a virtual energy image generation function 343, and a display control function 344, as shown in
The image acquisition function 341 acquires a two-dimensional first X-ray image based on X-ray imaging using a first continuous X-ray spectrum (hereinafter referred to as first X-ray energy), and acquires a two-dimensional second X-ray image based on X-ray imaging using a second continuous X-ray spectrum (hereinafter referred to as second X-ray energy) that is different from the first X-ray energy. The image acquisition function 341 is an example of an image acquisition unit.
A change in the X-ray energy, that is, a change in the wavelength distribution of the continuous X-ray spectrum can be realized by switching the tube voltage, changing the beam filter, or the like.
The first X-ray image and the second X-ray image may be acquired in real time from the imaging apparatus 20 by controlling the imaging apparatus 20 to perform X-ray imaging. Alternatively, the first X-ray image and the second X-ray image may be obtained in post-processing from the X-ray diagnostic apparatus 10 or from an image server connected to the X-ray diagnostic apparatus 10 via a network after the examination by the X-ray diagnostic apparatus 10 is completed.
The image server is, for example, a server for long-term storage of images provided in a PACS (Picture Archiving and Communication System), which stores X-ray images such as fluoroscopic images and DSA (digital subtraction angiography) images.
In the following description, the first X-ray image will be referred to as the high energy captured image IH corresponding to the high tube voltage, and the second X-ray image will be referred to as the low energy captured image IL corresponding to the low tube voltage.
Based on the first X-ray image and the second X-ray image, the material discrimination function 342, through material discrimination processing, generates a first material discrimination image showing the thickness distribution of the first material, and a second material discrimination image showing the thickness distribution of the second material.
In the following description, an example will be described in which the first material discrimination image is a bone thickness image Tbone obtained by discriminating bones by material discrimination processing based on the first X-ray image and the second X-ray image, and the second material discrimination image is a soft tissue thickness image Ttissue obtained by discriminating the soft tissue by material discrimination processing based on the first X-ray image and the second X-ray image. The material discrimination function 342 is an example of a discrimination unit.
Based on the first X-ray image and the second X-ray image, the virtual energy image generation function 343 generates a two-dimensional virtual third X-ray image that simulates an X-ray image using a third continuous X-ray spectrum (hereinafter referred to as the third X-ray energy) different from the first X-ray energy and the second X-ray energy. The virtual energy image generation function 343 is an example of a virtual image generation unit.
The virtual energy image generation function 343 simulates a spectral projection corresponding to the third X-ray energy by virtually adjusting at least one of the tube voltage and the beam filter.
In the following description, an example is shown in which the tube voltage corresponding to the third X-ray energy (normal energy) is an intermediate tube voltage between the high tube voltage corresponding to the first X-ray energy (high energy) and the low tube voltage corresponding to the second X-ray energy (low energy). Note that, in the following description, the virtual third X-ray image is referred to as normal energy virtual image ImaginaryIM (iIM).
The display control function 344 causes the display 25 or the display 31 to display the virtual third X-ray image in parallel with at least one of the first material discrimination image and the second material discrimination image. Alternatively, the display control function 344 may cause the display 25 or the display 31 to display the virtual third X-ray image superimposed on at least one of the first material discrimination image and the second material discrimination image, or superimposed on an image based on at least one of the first material discrimination image and the second material discrimination image. The display control function 344 is an example of a display control unit.
First, the first embodiment of the X-ray diagnostic apparatus 10, medical image processing apparatus, and medical image processing method will be described with reference to
The procedure shown in
On the other hand, when the exposure switch has been turned on (YES in step S1), the image acquisition function 341 acquires a two-dimensional high energy captured image IH based on X-ray imaging using the first X-ray energy (step S2), and acquires a two-dimensional low energy captured image IL based on X-ray imaging using a second X-ray energy (step S3). The order of steps S2 and S3 may be reversed.
X-ray imaging using the first X-ray energy and X-ray imaging using the second X-ray energy are performed by switching tube voltages, for example. In this case, the imaging corresponding to the high tube voltage and the imaging corresponding to the low tube voltage may be performed with different X-ray pulses, or may be performed by switching the tube voltage while the X-ray tube 21 is emitting one X-ray pulse. In the latter case, the FPD 23 capable of non-destructive read out may be used, and non-destructive read out from the FPD 23 may be performed according to the switch of the tube voltage.
Next, in step S4, the material discrimination function 342 generates a bone thickness image Tbone and a soft tissue thickness image Ttissue by material discrimination processing based on the high energy captured image IH and the low energy captured image IL (see
In the procedure shown in
In equations (1) to (4), N(high)(E) represents the spectrum of the first X-ray energy, N(low)(E) represents the spectrum of the second X-ray energy, μbone(E) represents the X-ray absorption coefficient of bone and the like, and μsoft(E) represents the absorption coefficient of soft tissue. Note that N(high)(E), N(low)(E), μbone(E), and μsoft(E) are stored in advance in the memory 33 as known values.
Then, the material discrimination function 342 generates a bone thickness image Tbone by assigning a luminance value corresponding to the thickness dbone of the bone obtained for each pixel, and generates a soft tissue thickness image Ttissue by assigning a luminance value corresponding to the thickness dsoft of the soft tissue obtained for each pixel.
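For illustration only, the per-pixel material discrimination described above can be sketched as a numerical inversion of the two forward projection models. The following Python sketch assumes that the known spectra N(high)(E), N(low)(E) and absorption coefficients μbone(E), μsoft(E) are available as sampled NumPy arrays on a common energy grid E; all names, units, initial guesses, and the solver choice are illustrative assumptions rather than the embodiment's prescribed implementation.

```python
import numpy as np
from scipy.optimize import fsolve

def forward_intensity(E, N, mu_bone, mu_soft, d_bone, d_soft):
    """Energy-weighted detected intensity at one pixel for given material thicknesses
    (the forward model that the high- and low-energy projection equations express)."""
    attenuation = np.exp(-mu_bone * d_bone - mu_soft * d_soft)
    return np.trapz(N * attenuation * E, E)

def discriminate_pixel(i_high, i_low, E, N_high, N_low, mu_bone, mu_soft,
                       initial_guess=(1.0, 10.0)):
    """Solve the two forward equations for (d_bone, d_soft) at one pixel."""
    def residual(d):
        d_b, d_s = d
        return [forward_intensity(E, N_high, mu_bone, mu_soft, d_b, d_s) - i_high,
                forward_intensity(E, N_low, mu_bone, mu_soft, d_b, d_s) - i_low]
    d_bone, d_soft = fsolve(residual, initial_guess)
    return max(d_bone, 0.0), max(d_soft, 0.0)   # thicknesses cannot be negative
```

Applying such a per-pixel solve to every pixel of the high energy captured image IH and the low energy captured image IL, and mapping the resulting thicknesses to luminance values, would yield the bone thickness image Tbone and the soft tissue thickness image Ttissue described above.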
Next, in step S5, the virtual energy image generation function 343 generates a normal energy virtual image iIM, which is a two-dimensional virtual X-ray image, based on the high energy captured image IH and the low energy captured image IL.
In the procedure shown in
Specifically, the virtual energy image generation function 343 substitutes the bone thickness dbone and the soft tissue thickness dsoft obtained in step S4 into the following equation (5). Then, the virtual energy image generation function 343 generates a normal energy virtual image iIM by simulating the projection for each pixel using the spectrum N(middle)(E) of the virtual third X-ray energy (for example, a normal energy suitable for normal clinical use) (see
$$I^{(middle)} = \int_{0}^{middle\,\mathrm{kVp}} N^{(middle)}(E)\,\exp\!\left(-\mu_{bone}(E)\,d_{bone}-\mu_{soft}(E)\,d_{soft}\right) E\,\mathrm{d}E \qquad (5)$$
The virtual third X-ray energy spectrum N(middle) (E) may be simulated by virtually setting the tube voltage, or by virtually switching the beam filter, or by virtually adjusting both the tube voltage and the beam filter.
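As a hedged illustration of the virtual projection of equation (5), the following sketch re-projects the discriminated thickness maps with an assumed intermediate spectrum N_middle(E) sampled on the energy grid E; the vectorization and array names are illustrative assumptions.

```python
import numpy as np

def virtual_projection(d_bone_map, d_soft_map, E, N_middle, mu_bone, mu_soft):
    """Simulate equation (5) for every pixel: attenuate the virtual intermediate spectrum
    by the discriminated bone and soft tissue thicknesses and integrate over energy."""
    # Broadcast the energy axis to the last dimension: (H, W, n_energies).
    exponent = (mu_bone[None, None, :] * d_bone_map[:, :, None]
                + mu_soft[None, None, :] * d_soft_map[:, :, None])
    attenuation = np.exp(-exponent)
    return np.trapz(N_middle * attenuation * E, E, axis=-1)   # normal energy virtual image iIM
```

In this sketch, virtually changing the tube voltage or the beam filter corresponds simply to passing a different sampled spectrum array as N_middle.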
In addition, the virtual energy image generation function 343 uses Equation (5) to generate a second normal energy virtual image based on the virtual fourth X-ray energy spectrum N(middle2) (E). For example, when a virtual third X-ray energy corresponds to a tube voltage of 80 kV, a virtual fourth X-ray energy may correspond to a slightly smaller tube voltage, such as 70 kV.
Next, in step S6, the display control function 344 displays the normal energy virtual image iIM in parallel with, and/or, superimposed on at least one of the bone thickness image Tbone and the soft tissue thickness image Ttissue on the display 25 or the display 31, and then, the process returns to step S1.
Here, in the superimposed display, the normal energy virtual image iIM may be used as the main image, and the bone thickness image Tbone and/or the soft tissue thickness image Ttissue may be superimposed as subordinate images, or vice versa.
Alternatively, in superimposed display, the bone thickness image Tbone and/or the soft tissue thickness image Ttissue can be replaced by an image obtained by extracting the contour of the bone thickness image Tbone and/or the soft tissue thickness image Ttissue.
Furthermore, at least one of the images to be superimposed may be displayed in a chromatic color so that the superimposed images can be readily distinguished and recognized.
In addition to or instead of displaying at least one of the bone thickness image Tbone and the soft tissue thickness image Ttissue, which are the results of material discrimination, a normal energy virtual image iIM after image processing based on the material discrimination result (for example, enhancement of a specific spatial frequency component) may be displayed.
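A minimal matplotlib sketch of the superimposed display described above is given below; the colormap, transparency, and function name are illustrative assumptions, not a prescribed display implementation.

```python
import matplotlib.pyplot as plt

def show_overlay(iIM, material_image, alpha=0.4):
    """Display the normal energy virtual image iIM with a material discrimination image
    (for example, the bone thickness image Tbone) superimposed in a chromatic colormap."""
    fig, ax = plt.subplots()
    ax.imshow(iIM, cmap="gray")                         # main image
    ax.imshow(material_image, cmap="hot", alpha=alpha)  # subordinate, chromatic overlay
    ax.axis("off")
    plt.show()
```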
Using the above procedure, it is possible to obtain an image suitable for normal clinical use based on the X-ray image acquired by the spectral imaging technique while improving the processing accuracy of the spectral imaging technique.
In addition, the image acquisition function 341 and the virtual energy image generation function 343 may sequentially repeat the combination of the acquisition of the high energy captured image IH, the acquisition of the low energy captured image IL, and the generation of the normal energy virtual image iIM, on the condition that the user continues the exposure instruction, for example, via the exposure switch.
The X-ray diagnostic apparatus 10 can obtain a third X-ray image (for example, normal energy virtual image iIM) based on a plurality of X-ray images obtained by X-ray imaging in spectral imaging technology.
Therefore, simply by performing X-ray imaging using the extreme X-ray energies of the spectral imaging technique, it is possible not only to generate material discrimination images such as the bone thickness image Tbone and the soft tissue thickness image Ttissue, but also to virtually generate and display an image suitable for normal clinical practice, without separately performing X-ray imaging for generating such an image, thereby preventing an increase in the exposure dose of the object.
The bone thickness image Tbone and the soft tissue thickness image Ttissue are images showing the thickness distribution of materials. For this reason, the bone thickness image Tbone and the soft tissue thickness image Ttissue have contrasts different from those of a normal X-ray image, which may be unfamiliar to the user and make intuitive observation difficult.
Therefore, the virtual energy image generation function 343 may substitute only one of the thickness dbone of the bone and the thickness dsoft of the soft tissue into equation (5), simulate the projection for each pixel using the virtual X-ray energy spectrum N(middle)(E) used in normal clinical practice, and generate a normal energy virtual bone image iIMbone and a normal energy virtual soft tissue image iIMtissue (see
In this case, the display control function 344 may use the normal energy virtual bone image iIMbone and the normal energy virtual soft tissue image iIMtissue instead of the bone thickness image Tbone and the soft tissue thickness image Ttissue.
By displaying the normal energy virtual bone image iIMbone and the normal energy virtual soft tissue image iIMtissue instead of the bone thickness image Tbone and the soft tissue thickness image Ttissue, the user can observe an image with familiar contrast, so that a more accurate diagnosis can be made.
In addition, there are cases where it is desired to make the discrimination image of the material of interest (e.g., bone) more visible, or to decrease the visibility of the discrimination image of a material that is not of interest (e.g., soft tissue).
Accordingly, when performing the calculation according to equation (5), in the former case the thickness dbone of the bone is uniformly multiplied by a numerical value greater than 1, while in the latter case dsoft is uniformly multiplied by a numerical value smaller than 1, which makes it possible to generate a discrimination image having the desired visibility, as sketched below.
A similar effect can be obtained by, for example, using a different spectrum N(middle)(E) for each material in the calculation of equation (5).
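The visibility adjustment just described can be sketched by uniformly scaling a thickness map before re-running the equation (5) projection, reusing the virtual_projection sketch above; the gain values are illustrative assumptions.

```python
def emphasized_virtual_projection(d_bone_map, d_soft_map, E, N_middle, mu_bone, mu_soft,
                                  bone_gain=1.5, soft_gain=0.5):
    """Emphasize bone (gain > 1) and de-emphasize soft tissue (gain < 1) in the virtual image."""
    return virtual_projection(bone_gain * d_bone_map, soft_gain * d_soft_map,
                              E, N_middle, mu_bone, mu_soft)
```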
The virtual energy image generation function 343 may generate a normal energy virtual image iIM from the high energy captured image IH and the low energy captured image IL, using a trained model constructed by machine learning. In this case, deep learning using multi-layered neural networks such as CNN (convolutional neural network) and convolutional deep belief network (CDBN) may be used as machine learning. Hereinafter, an example in which the virtual energy image generation function 343 uses a trained model constructed by deep learning will be described. Note that in this case, step S4 in
Each dataset of a plurality of training datasets consists of training data 41 and teaching data 42. The training data 41 consists of datasets 411, 412, 413, . . . of high energy captured images IH and low energy captured images IL. The teaching data 42 consists of actual third X-ray images (normal energy actual images) rIM 421, 422, 423, . . . , which are acquired by actually performing X-ray imaging using the third X-ray energy.
Each of the actual third X-ray images (normal energy actual images) rIM 421, 422, 423, . . . is acquired under the same imaging conditions as each of the sets 411, 412, 413, . . . of the high energy captured image IH and the low energy captured image IL, except for the X-ray energy.
Each time a training dataset is given, the virtual energy image generation function 343 updates the parameter data 52 so that the result of processing the training data 41 by the neural network 51 approaches the teaching data 42; this is called learning.
In general, when the change rate of the parameter data 52 converges within a threshold value, it is determined that the learning has ended. Hereinafter, the parameter data 52 after learning is particularly referred to as trained parameter data 52t. The neural network 51 and the trained parameter data 52t constitute the trained model 50.
During operation, the virtual energy image generation function 343 receives a set 61 of the high energy captured image IH and the low energy captured image IL, and uses the trained model 50, which is configured of the neural network 51 and the trained parameter data 52t, to generate a normal energy virtual image iIM 62. Various methods for this type of learning and for constructing a trained model are well known.
The neural network 51 is stored in the memory 33 in the form of a program. The trained parameter data 52t may be stored in the memory 33, or may be stored in a storage medium connected to the processing circuitry 34 via a network.
When the trained model 50 (the neural network 51 and the trained parameter data 52t) is stored in the memory 33, the virtual energy image generation function 343, which is implemented by the processor of the processing circuitry 34, reads out the trained model 50 from the memory 33 and executes it to generate the normal energy virtual image iIM from the high energy captured image IH and the low energy captured image IL.
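The embodiment does not prescribe a particular network architecture or training framework, so the following PyTorch sketch is only one possible reading of the trained model 50: a small CNN standing in for the neural network 51, one parameter update toward the teaching data, and the operation-time generation of iIM. The layer sizes, the MSE loss, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VirtualImageNet(nn.Module):
    """Toy stand-in for neural network 51: 2-channel (IH, IL) stack in, 1-channel virtual image out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = VirtualImageNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(ih_il_batch, rIM_batch):
    """One update of the parameter data so the output approaches the teaching data (normal energy actual images)."""
    optimizer.zero_grad()
    loss = loss_fn(model(ih_il_batch), rIM_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def generate_virtual_image(ih, il):
    """Operation: stack IH and IL and run the trained model to obtain a normal energy virtual image iIM."""
    x = torch.stack([ih, il]).unsqueeze(0)   # two (H, W) tensors -> (1, 2, H, W)
    return model(x).squeeze(0).squeeze(0).numpy()
```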
Note that the trained model 50 may be constructed by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
The normal energy virtual image iIM generated by the method according to the first embodiment may be corrected with use of an actual third X-ray image rIM acquired by actually performing X-ray imaging with use of the third X-ray energy.
In this case, it is preferred that the actual X-ray imaging using the third X-ray energy be performed at a dose lower than the dose in normal X-ray imaging so as to reduce the exposure dose on the object. The third X-ray image acquired in this way becomes an image (roughness rIM) that is rougher than a normal image. However, a virtual image (correction iIM) that has significantly higher image quality than the normal energy virtual image iIM can be acquired by correcting the normal energy virtual image iIM through processing that uses the roughness rIM acquired by the actual X-ray imaging, such as addition of the roughness rIM and the normal energy virtual image iIM.
When the abovementioned approach is used, the dose in the actual X-ray imaging for acquiring the third X-ray image rIM can be made smaller than the dose in a case where an X-ray image having the same SN ratio as the correction iIM is acquired by X-ray imaging. In other words, with use of the abovementioned approach, the dose can be reduced as compared to a method where X-ray imaging for acquiring an X-ray image suitable for normal clinical use is separately performed.
In the description above, the normal energy virtual image iIM is described as the image subjected to correction processing, and the actual third X-ray image rIM is described as the image used for correction processing. However, the normal energy virtual image iIM and the actual third X-ray image rIM are similar to each other except for the noise amount and the like. Therefore, the distinction between the image subjected to correction processing and the image used for correction processing is merely for convenience. The essence lies in the feature that an image having higher image quality than the normal energy virtual image iIM is acquired by using the actual third X-ray image rIM in addition to the normal energy virtual image iIM. Thus, even when the actual third X-ray image rIM is the one subjected to correction processing, or when the normal energy virtual image iIM and the actual third X-ray image rIM are handled equally without treating one as primary and the other as secondary in terms of data processing, the acquired high quality image is deemed as the correction iIM or a "corrected virtual third X-ray image".
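As one hedged reading of the correction processing mentioned above (the embodiment names addition of the roughness rIM and the normal energy virtual image iIM only as an example), a plain weighted combination can be sketched as follows; the weight is an illustrative assumption.

```python
import numpy as np

def correct_virtual_image(iIM, rough_rIM, weight=0.5):
    """Combine the virtual image with the low-dose actual image so that the result (correction iIM)
    keeps the low noise of iIM while inheriting measured contrast from the roughness rIM."""
    return weight * np.asarray(iIM, dtype=float) + (1.0 - weight) * np.asarray(rough_rIM, dtype=float)
```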
The correction iIM may be generated from the high energy captured image IH, the low energy captured image IL, and the roughness rIM with use of a trained model constructed by machine learning.
Each training data set of the large number of training datasets consists of training data 43 and teaching data 44. The training data 43 consists of datasets 431, 432, 433, . . . of the high energy captured image IH, the low energy captured image IL, and the roughness rIM. The teaching data 44 consists of actual third X-ray images rIM 441, 442, 443, . . . acquired by actually performing X-ray imaging with use of the third X-ray energy. The actual third X-ray images rIM 441, 442, 443, . . . are high-image-quality detailed images acquired by performing X-ray imaging with use of the third X-ray energy at a dose equal to or more than that of normal X-ray imaging.
The roughness rIM of the training data 43 may be generated by adding noise to each of the actual third X-ray images rIM 441, 442, 443, . . . without actually performing X-ray imaging.
Each time a training dataset is given, the virtual energy image generation function 343 acquires trained parameter data 72t by updating the parameter data 72 such that a result obtained by processing the training data 43 by the neural network 71 approaches the teaching data 44. The neural network 71 and the trained parameter data 72t configure a trained model 70.
In the modification of the second embodiment, at the time of operation, the virtual energy image generation function 343 only needs to receive an input of a set 63 of the high energy captured image IH, the low energy captured image IL, and the roughness rIM, and to generate a corrected normal energy virtual image iIM (correction iIM) 64 with use of the trained model 70.
The X-ray diagnostic apparatus 10, a medical image processing apparatus, and a medical image processing method described in a third embodiment are different from those in the first embodiment and the second embodiment in that contrast-enhanced blood vessels are discriminated.
Now, consider a case where bones, contrast-enhanced blood vessels, and soft tissues are included in the field of view of the X-ray imaging. The X-ray absorption coefficient of the contrast agent used for contrast imaging of a blood vessel is high and close to that of bone. The material discrimination function 342 regards the absorption coefficient of the contrast-enhanced blood vessel 82 as the absorption coefficient μbone(E) of the bone 81 and obtains the thickness dbone of the bone and the like with use of equations (1) and (2). Therefore, in the bone thickness image Tbone generated by the material discrimination function 342 on the basis of the thickness dbone, the thickness distribution of the contrast-enhanced blood vessel 82 is visualized, together with the thickness distribution of the bone 81, as a material whose thickness has been converted into that of the bone 81 (see
In the soft-tissue thickness image Ttissue generated by the material discrimination function 342, the thickness distribution of a soft tissue 83 is visualized. At this time, in the soft-tissue thickness image Ttissue, a pixel in which the soft tissue 83 and the contrast-enhanced blood vessel 82 are projected in an overlapping manner has a luminance value corresponding to a soft tissue thickness reduced by the amount of the thickness of the contrast-enhanced blood vessel 82 (see
However, the absorption coefficients of the bone 81 and the contrast-enhanced blood vessel 82 are not actually the same. Thus, the density difference between the contrast-enhanced blood vessel 82 and other components differs between the normal energy virtual image iIM (see
Therefore, the material discrimination function 342 according to the third embodiment can generate a contrast-enhanced blood vessel thickness image, which is a contrast-enhanced blood vessel discrimination image obtained by discriminating the contrast-enhanced blood vessel 82, by comparing and taking the difference between the normal energy virtual image iIM and the normal energy actual image rIM (see
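A minimal sketch of the comparison described above is a per-pixel difference between the normal energy actual image rIM and the normal energy virtual image iIM; the thresholding step and the names are illustrative assumptions.

```python
import numpy as np

def discriminate_vessels(rIM, iIM, threshold=0.0):
    """Residual differences concentrate where the contrast agent's true absorption deviates
    from the bone model used in material discrimination, i.e. in contrast-enhanced vessels."""
    diff = np.asarray(rIM, dtype=float) - np.asarray(iIM, dtype=float)
    return np.where(np.abs(diff) > threshold, diff, 0.0)
```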
The X-ray diagnostic apparatus 10 described in the fourth embodiment is different from the X-ray diagnostic apparatus 10 described in the first embodiment in that the processing circuitry 34 segments a target (an object to be observed) at a high accuracy on the basis of X-ray images respectively corresponding to a plurality of X-ray energies. Other configurations and effects are substantially the same as those of the X-ray diagnostic apparatus 10 illustrated in
The target includes bones, contrast-enhanced blood vessels, soft tissues, and the like included in the X-ray image.
As illustrated in
The image acquisition function 341 according to the fourth embodiment acquires a two-dimensional first X-ray image including a target based on X-ray imaging using a first X-ray energy and acquires a two-dimensional second X-ray image including the target based on X-ray imaging using a second X-ray energy different from the first X-ray energy. The segmentation function 345 performs the segmentation of the target on the basis of the first X-ray image and the second X-ray image. The segmentation function 345 is one example of a segmentation unit.
The display control function 344 according to the fourth embodiment displays the image of the segmented target on the display 25 or the display 31 in parallel with or being superimposed on a virtual third X-ray image. The display control function 344 may display the image of the segmented target on the display 25 or the display 31 in a manner of being superimposed on the first X-ray image or the second X-ray image.
First, in Step S11, the image acquisition function 341 determines whether the radiation exposure switch of the input interface 32 is turned on. When the radiation exposure switch is not turned on (NO in Step S11), the procedure ends. When the exposure switch has been turned on (YES in Step S11), the image acquisition function 341 acquires the two-dimensional high energy captured image IH including the target (for example, the contrast-enhanced blood vessel) based on X-ray imaging using the first X-ray energy (Step S12) and acquires the two-dimensional low energy captured image IL including the target (for example, the contrast-enhanced blood vessel) based on X-ray imaging using the second X-ray energy (Step S13). The order of Step S12 and Step S13 may be reversed.
Next, in Step S14, the segmentation function 345 segments the target (for example, the contrast-enhanced blood vessel) on the basis of the high energy captured image IH and the low energy captured image IL.
By the procedures above, the segmentation of the target is performed on the basis of the X-ray images respectively corresponding to the plurality of X-ray energies. By using the X-ray images respectively corresponding to the plurality of X-ray energies, the target can be segmented at a higher accuracy as compared to segmentation based on one X-ray image corresponding to one X-ray energy.
A segmentation method of the target based on the X-ray images respectively corresponding to the plurality of X-ray energies is described in detail below. Further, an example of a case where the target is a contrast-enhanced blood vessel will be described below.
When the high energy captured image IH and the low energy captured image IL are acquired in Step S12 and Step S13, the material discrimination function 342 then generates the bone thickness image Tbone and the soft-tissue thickness image Ttissue by the material discrimination processing based on the high energy captured image IH and the low energy captured image IL in Step S21, in the same manner as Step S4 in
In the procedures illustrated in
The material discrimination function 342 generates the bone thickness image Tbone such that each pixel has a luminance value in accordance with the bone thickness dbone obtained for each pixel, and generates the soft-tissue thickness image Ttissue such that each pixel has a luminance value in accordance with the soft tissue thickness dsoft obtained for each pixel.
Next, in Step S22, the segmentation function 345 segments the target in whichever of the first material discrimination image and the second material discrimination image includes the target.
As the segmentation method, various methods such as an active contour method (snakes method), a Level Set method, and a semantic segmentation method using a trained model constructed by machine learning have hitherto been known, and it is possible to use any of the above. The case where the semantic segmentation is used is described later with reference to
In the first material discrimination image or the second material discrimination image, the contrast between the materials is stronger than that in the X-ray image. Therefore, a more accurate segmentation can be expected by segmenting the first material discrimination image or the second material discrimination image than by segmenting the X-ray image.
For example, when the target is a soft tissue, at least the soft tissue is segmented in the soft-tissue thickness image Ttissue.
When the target is a contrast-enhanced blood vessel, the segmentation function 345 generates the blood-vessel segmentation-completed image Tbone_seg including the segmented contrast-enhanced blood vessels VEs by segmenting at least the contrast-enhanced blood vessel VE in the bone thickness image Tbone (see the right side of
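For illustration, step S22 can be sketched with the simplest possible approach: thresholding the bone thickness image and labeling connected components. The embodiment may instead use snakes, level sets, or semantic segmentation, and the threshold is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def segment_in_thickness_image(t_bone, thickness_threshold):
    """Return a labeled mask of candidate contrast-enhanced blood vessel regions in Tbone."""
    mask = np.asarray(t_bone) > thickness_threshold
    labels, num_regions = ndimage.label(mask)   # connected-component labeling
    return labels, num_regions
```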
Next, in Step S23, the virtual energy image generation function 343 generates the normal energy virtual image iIM by performing virtual projection processing using spectrum of a virtual X-ray energy for each pixel of the bone thickness image Tbone and the soft-tissue thickness image Ttissue.
Specifically, the virtual energy image generation function 343 generates the normal energy virtual image iIM by substituting the bone thickness dbone and the like, and the soft tissue thickness dsoft, obtained in Step S21 into equation (5) and simulating the projection for each pixel with use of the spectrum N(middle)(E) of a virtual third X-ray energy (for example, a normal energy suitable for normal clinical use), in the same manner as Step S5 in
Next, in Step S24, the display control function 344 displays a segmentation result on the display 25 or the display 31. At this time, it is preferred that the display control function 344 display the segmentation result on the display 25 or the display 31 in parallel with or being superimposed on the normal energy virtual image iIM (see the right side of
As described above, the bone thickness image Tbone and the soft-tissue thickness image Ttissue are images indicating the thickness distribution of materials, and hence the contrast in the bone thickness image Tbone and the soft-tissue thickness image Ttissue is different from that in a normal X-ray image. Such a difference can be unfamiliar to the user and make intuitive observation difficult. Therefore, when the segmentation result is superimposed on the normal energy virtual image iIM, the user can observe an image having a familiar contrast, and a diagnosis can be more accurate than in a case where the segmentation result is superimposed on the bone thickness image Tbone.
The segmentation result superimposed on the normal energy virtual image iIM may be the blood-vessel segmentation-completed image Tbone_seg itself or may be only the segmented contrast-enhanced blood vessels VEs. In the right side of
By the procedures above, the target can be segmented from a material discrimination image such as the bone thickness image Tbone or the soft-tissue thickness image Ttissue. Furthermore, an image suitable for normal clinical use can be virtually generated, and the segmentation result can be displayed on that image in a superimposed manner, simply by performing X-ray imaging using extreme X-ray energies in the spectral imaging technology and without additional X-ray imaging for generating an image suitable for normal clinical use, thereby preventing an increase in the exposure dose of the object.
Next, a method of segmenting the target from the material discrimination image by semantic segmentation using a trained model is described.
The segmentation function 345 may detect the target from the material discrimination image by performing semantic segmentation with use of a trained model constructed by machine learning. In this case, deep learning using a multilayered neural network such as a CNN and a convolutional deep belief network may be adopted for the machine learning. An example of a case where the segmentation function 345 uses a trained model constructed by deep learning is described below.
As the trained model, a different model is constructed and used for each target. A trained model used in a case where the target is the contrast-enhanced blood vessel is described below.
Each training dataset of the large number of training datasets consists of a set of the training data 45 and the teaching data 46. The training data 45 is bone thickness images Tbone 451, 452, 453, . . . including the contrast-enhanced blood vessel VE. The teaching data 46 is blood-vessel segmentation-completed images Tbone_seg 461, 462, 463, . . . corresponding to the bone thickness images Tbone 451, 452, 453, . . . .
The blood-vessel segmentation-completed images Tbone_seg 461, 462, 463, . . . are obtained by manually segmenting the bone thickness images Tbone 451, 452, 453, . . ., for example.
When the target is the contrast-enhanced blood vessel, the segmentation is performed so as to classify the blood-vessel segmentation-completed image Tbone_seg into a region of the contrast-enhanced blood vessel and other regions, for example.
Each time a training dataset is given, the segmentation function 345 performs so-called learning, that is, it updates the parameter data 92 such that a result obtained by processing the training data 45 by the neural network 91 approaches the teaching data 46. The parameter data 92 after the learning is hereinafter particularly referred to as trained parameter data 92t. The neural network 91 and the trained parameter data 92t configure the trained model 90.
At the time of operation, a bone thickness image Tbone 65 including the contrast-enhanced blood vessel VE is input to the segmentation function 345, and the segmentation function 345 generates a blood-vessel segmentation-completed image Tbone_seg 66 with use of the trained model 90.
The trained model 90 is configured by the neural network 91 and the trained parameter data 92t. As a learning method and a method of constructing a trained model of this type, various methods disclosed in published literatures have been known (Bishop, Christopher. M. (2006). Pattern recognition and machine learning, pp. 225-290: Springer, for example). The neural network 91 is stored in the memory 33 in a form of a program. The trained parameter data 92t may be stored in the memory 33 or may be stored in a storage medium connected to the processing circuitry 34 over a network.
When the trained model 90 (the neural network 91 and the trained parameter data 92t) is stored in the memory 33, the segmentation function 345 realized by the processor of the processing circuitry 34 can generate the blood-vessel segmentation-completed image Tbone_seg 66 from the bone thickness image Tbone 65 including the contrast-enhanced blood vessel VE by reading out the trained model 90 from the memory 33 and executing the trained model 90.
The trained model 90 may be constructed by an integrated circuit such as an application specific integrated circuit (ASIC) and a field programmable gate array (FPGA).
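As with the trained model 50, the following PyTorch sketch is only one hedged reading of the trained model 90: a small CNN mapping a bone thickness image to a per-pixel vessel probability, one training update toward the manually segmented teaching data, and operation-time segmentation. The architecture, the BCE loss, and the 0.5 cutoff are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VesselSegNet(nn.Module):
    """Toy stand-in for neural network 91: bone thickness image in, per-pixel vessel probability out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))

model = VesselSegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

def training_step(t_bone_batch, vessel_mask_batch):
    """One update of the parameter data so the prediction approaches the manually segmented teaching data."""
    optimizer.zero_grad()
    loss = loss_fn(model(t_bone_batch), vessel_mask_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def segment_vessels_with_model(t_bone):
    """Operation: produce a binary blood-vessel segmentation-completed image from a bone thickness image."""
    prob = model(t_bone.unsqueeze(0).unsqueeze(0))   # (H, W) -> (1, 1, H, W)
    return (prob.squeeze(0).squeeze(0) > 0.5).numpy()
```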
The X-ray diagnostic apparatus 10, the medical image processing apparatus, and the medical image processing method described in the fifth embodiment are different from those described in the fourth embodiment in that the blood-vessel segmentation-completed image Tbone_seg is generated on the basis of the high energy captured image IH and the low energy captured image IL, and not from the bone thickness image Tbone.
When the bone thickness image Tbone and the soft-tissue thickness image Ttissue are generated in Step S21, the segmentation function 345 segments a target on the basis of the high energy captured image IH including the target and the low energy captured image IL including the target in Step S31 (see
A training dataset of the segmentation function 345 according to the fifth embodiment consists of the training data 41 and the teaching data 48. The training data 41 is the datasets 411, 412, 413, . . . of the high energy captured image IH and the low energy captured image IL. The teaching data 48 is blood-vessel segmentation-completed images Tbone_seg 481, 482, 483, . . . . The teaching data 48 may be an image obtained by manually segmenting the contrast-enhanced blood vessel in at least one of the high energy captured image IH and the low energy captured image IL, for example, or may be an image obtained by manually segmenting the contrast-enhanced blood vessel in the bone thickness image Tbone generated on the basis of the high energy captured image IH and the low energy captured image IL, in the same manner as the teaching data 46 of the segmentation function 345 according to the fourth embodiment.
Each time a training dataset is given, the segmentation function 345 acquires the trained parameter data 97t by updating the parameter data 97 such that a result obtained by processing the training data 41 by the neural network 96 approaches the teaching data 48. The neural network 96 and the trained parameter data 97t configure the trained model 95.
In the fifth embodiment, at the time of execution, only the set 61 of the high energy captured image IH and the low energy captured image IL needs to be input to the segmentation function 345, and the segmentation function 345 only needs to generate the blood-vessel segmentation-completed image Tbone_seg 68 with use of the trained model 95.
In the segmentation based on one X-ray image corresponding to one X-ray energy, there are cases where the target cannot be distinguished because the background is dark and dense or because the contrast of the target is poor. Even in those cases, the segmentation function 345 of the X-ray diagnostic apparatus 10 according to the fourth and fifth embodiments can segment the target on the basis of the plurality of X-ray images respectively corresponding to the plurality of X-ray energies, and hence can segment the target at a high accuracy.
As in the first to third embodiments, the virtual energy image generation function 343 may generate the normal energy virtual image iIM from the high energy captured image IH and the low energy captured image IL with use of a trained model constructed by machine learning. In this case, Step S21 in
According to at least one embodiment described above, an image suitable for normal clinical use can be acquired on the basis of X-ray images obtained by the spectral imaging technology while increasing the processing accuracy of the spectral imaging technology.
In the above-described embodiments, the term “processor” means, for example, a circuit such as a special-purpose or general-purpose CPU (Central Processing Unit), a special-purpose or general-purpose GPU (Graphics Processing Unit), an ASIC, and a programmable logic device including: an SPLD (Simple Programmable Logic Device); a CPLD (Complex Programmable Logic Device); and an FPGA. When the processor is, for example, a CPU, the processor implements various functions by reading out programs stored in a memory and executing the programs.
Additionally, when the processor is, for example, an ASIC, instead of storing the programs in the memory, the functions corresponding to the respective programs are directly incorporated as a logic circuit in the circuit of the processor. In this case, the processor implements various functions by hardware processing in which the programs incorporated in the circuit are read out and executed. Further, the processor can also implement various functions by executing software processing and hardware processing in combination.
Although a description has been given of the case where a single processor of the processing circuitry implements each function in the above-described embodiments, the processing circuitry may be configured by combining a plurality of independent processors which implement the respective functions. When a plurality of processors is provided, the memory for storing the programs may be individually provided for each processor or one memory may collectively store the programs corresponding to the functions of all the processors.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. These embodiments can be implemented in various other aspects, and various omissions, substitutions, changes, and combinations of embodiments can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope of the invention and the gist thereof, and are also included in the invention described in the claims and the equivalent scope thereof.
(aspect 1) For instance, an X-ray diagnostic apparatus according to an embodiment includes an image acquisition unit and a virtual image generation unit. The image acquisition unit acquires a two-dimensional first X-ray image based on X-ray imaging using a first X-ray energy, and a two-dimensional second X-ray image based on X-ray imaging using a second X-ray energy different from the first X-ray energy. The virtual image generation unit generates a two-dimensional virtual third X-ray image that simulates an X-ray image using a third X-ray energy different from the first X-ray energy and the second X-ray energy, based on the first X-ray image and the second X-ray image.
The X-ray diagnostic apparatus may further include a discrimination unit configured to generate a first material discrimination image and a second material discrimination image based on the first X-ray image and the second X-ray image. The virtual image generation unit may generate the virtual third X-ray image by performing virtual projection processing using a spectrum corresponding to the third continuous X-ray spectrum for each pixel of the first material discrimination image and the second material discrimination image.
The virtual image generation unit may simulate a spectral projection corresponding to the third continuous X-ray spectrum, by virtually adjusting at least one of a tube voltage and a beam filter.
The X-ray diagnostic apparatus may further include an input interface that accepts an exposure instruction operation from an operator. The image acquisition unit and the virtual image generation unit may sequentially repeat the combination of acquisition of the first X-ray image, acquisition of the second X-ray image, and generation of the virtual third X-ray image, on condition that the exposure instruction is continued.
The X-ray diagnostic apparatus may further include a display and a display control unit, wherein the display control unit may be configured to cause the display to display the virtual third X-ray image in parallel with at least one of the first material discrimination image and the second material discrimination image, display the virtual third X-ray image superimposed on at least one of the first material discrimination image and the second material discrimination image, or display the virtual third X-ray image superimposed on an image based on at least one of the first material discrimination image and the second material discrimination image.
The discrimination unit may determine thicknesses of the first material and the second material based on the first X-ray image and the second X-ray image, and generate the first material discrimination image and the second material discrimination image such that pixels of the first material discrimination image and the second material discrimination image respectively have luminance values corresponding to the thicknesses of the first material and the second material.
The virtual image generation unit may generate a virtual first material discrimination image and a virtual second material discrimination image simulating an X-ray imaging of the first material and the second material using the third continuous X-ray spectrum, based on thicknesses of the first material and the second material, and the display control unit may cause the display to display the virtual third X-ray image in parallel with at least one of the virtual first material discrimination image and the virtual second material discrimination image, display the virtual third X-ray image superimposed on at least one of the virtual first material discrimination image and the virtual second material discrimination image, or display the virtual third X-ray image superimposed on an image based on the at least one of the virtual first material discrimination image and the virtual second material discrimination image.
The virtual image generation unit may generate the virtual third X-ray image, by inputting the first X-ray image and the second X-ray image to a trained model that is configured to generate the virtual third X-ray image based on the first X-ray image and the second X-ray image.
The image acquisition unit may further acquire a two-dimensional actual third X-ray image based on actual X-ray imaging using the third continuous X-ray spectrum, and the virtual image generation unit may generate a corrected virtual third X-ray image based on the actual third X-ray image and the virtual third X-ray image.
The actual third X-ray image may be acquired by performing an actual X-ray imaging using the third continuous X-ray spectrum at a lower dose than normal X-ray imaging. The dose in actual X-ray imaging using the third continuous X-ray spectrum may be smaller than the dose in the case of acquiring an X-ray image having the same signal to noise (SN) ratio as the corrected virtual third X-ray image by X-ray imaging.
The virtual image generation unit may generate the corrected virtual third X-ray image, by inputting the first X-ray image, the second X-ray image, and the actual third X-ray image to a trained model that is configured to generate the corrected virtual third X-ray image based on the first X-ray image, the second X-ray image, and the actual third X-ray image.
The trained model may be constructed by using training datasets including training data and teaching data, wherein the training data includes the first X-ray image, the second X-ray image, and a coarse third X-ray image generated by adding noise to a fine third X-ray image obtained by performing actual X-ray imaging using the third continuous X-ray spectrum at a dose equal to or greater than that of a normal X-ray imaging, and the teaching data includes the fine third X-ray image.
The X-ray diagnostic apparatus may further include a discrimination unit configured to generate a first material discrimination image and a second material discrimination image based on the first X-ray image and the second X-ray image, wherein, the discrimination unit may compare the actual third X-ray image and the virtual third X-ray image to generate a contrast-enhanced blood vessel discrimination image in which contrast-enhanced blood vessels are extracted.
(aspect 2) A medical image processing apparatus according to one embodiment includes: an image acquisition unit configured to acquire a two-dimensional first X-ray image based on X-ray imaging using a first continuous X-ray spectrum, and acquire a two-dimensional second X-ray image based on X-ray imaging using a second continuous X-ray spectrum different from the first continuous X-ray spectrum; and a virtual image generation unit configured to generate a two-dimensional virtual third X-ray image that simulates an X-ray image using a third continuous X-ray spectrum different from the first continuous X-ray spectrum and the second continuous X-ray spectrum, based on the first X-ray image and the second X-ray image.
(aspect 3) A medical image processing method according to one embodiment includes: acquiring a two-dimensional first X-ray image based on X-ray imaging using a first continuous X-ray spectrum, and acquiring a two-dimensional second X-ray image based on X-ray imaging using a second continuous X-ray spectrum different from the first continuous X-ray spectrum; and generating a two-dimensional virtual third X-ray image that simulates an X-ray image using a third continuous X-ray spectrum different from the first continuous X-ray spectrum and the second continuous X-ray spectrum, based on the first X-ray image and the second X-ray image.
(aspect 4) An X-ray diagnostic apparatus according to one embodiment includes an image acquisition unit and a segmentation unit. The image acquisition unit acquires a two-dimensional first X-ray image including a target based on X-ray imaging using a first continuous X-ray spectrum, and acquires a two-dimensional second X-ray image including the target based on X-ray imaging using a second continuous X-ray spectrum different from the first continuous X-ray spectrum. The segmentation unit segments the target based on the first X-ray image and the second X-ray image.
The X-ray diagnostic apparatus may further include a virtual image generation unit configured to generate a two-dimensional virtual third X-ray image that simulates an X-ray image using a third continuous X-ray spectrum different from the first continuous X-ray spectrum and the second continuous X-ray spectrum, based on the first X-ray image and the second X-ray image, and a display control unit configured to cause a display to display the segmented image of the target superimposed on the virtual third X-ray image, or to cause the display to display the segmented image of the target in parallel with the virtual third X-ray image.
The X-ray diagnostic apparatus may further include a discrimination unit configured to generate a first material discrimination image and a second material discrimination image based on the first X-ray image and the second X-ray image, wherein, the virtual image generation unit may generate the virtual third X-ray image, by performing virtual projection processing using a spectrum corresponding to the third continuous X-ray spectrum for each pixel of the first material discrimination image and the second material discrimination image.
The discrimination unit may determine thicknesses of the first material and the second material based on the first X-ray image and the second X-ray image, and generate the first material discrimination image and the second material discrimination image such that pixels of the first material discrimination image and the second material discrimination image respectively have luminance values corresponding to the thicknesses of the first material and the second material.
The target may be a first material or a third material depicted in the first material discrimination image, or a second material depicted in the second material discrimination image. In this case, the segmentation unit may generate an image in which the first material or the third material is segmented, by inputting the first material discrimination image to a trained model that is configured to generate an image in which the first material or the third material is segmented, based on the first material discrimination image.
Alternatively, the segmentation unit may generate an image in which the second material is segmented, by inputting the second material discrimination image to a trained model that is configured to generate an image in which the second material is segmented based on the second material discrimination image.
The segmentation unit may generate a segmented image of the target, by inputting the first X-ray image and the second X-ray image to a trained model that is configured to generate an image in which the target is segmented based on the first X-ray image and the second X-ray image.
The virtual image generation unit may generate the virtual third X-ray image, by inputting the first X-ray image and the second X-ray image to a trained model that is configured to generate the virtual third X-ray image based on the first X-ray image and the second X-ray image.
The X-ray diagnostic apparatus may further include a display control unit that causes the display to display the image of the segmented target superimposed on the first X-ray image or the second X-ray image.
(aspect 5) A medical image processing apparatus according to one embodiment includes an image acquisition unit and a segmentation unit. The image acquisition unit acquires a two-dimensional first X-ray image including the target based on X-ray imaging using the first continuous X-ray spectrum, and acquires a two-dimensional second X-ray image including the target based on X-ray imaging using the second continuous X-ray spectrum different from the first continuous X-ray spectrum. The segmentation unit segments the target based on the first X-ray image and the second X-ray image.
(aspect 6) A medical image processing method according to one embodiment includes: acquiring a two-dimensional first X-ray image including the target based on X-ray imaging using the first continuous X-ray spectrum, acquiring a two-dimensional second X-ray image including the target based on X-ray imaging using the second continuous X-ray spectrum different from the first continuous X-ray spectrum, and segmenting the target based on the first X-ray image and the second X-ray image.
The first X-ray image and the second X-ray image may be X-ray fluoroscopic images. The first X-ray image and the second X-ray image may be X-ray radiographic images.
The first material and second material may be a bone and a soft tissue, respectively. In this case, the first material discrimination image is a bone thickness image Tbone obtained by discriminating bones by material discrimination processing based on X-ray images, and the second material discrimination image is a soft tissue thickness image Ttissue obtained by discriminating the soft tissue by material discrimination processing based on the X-ray images.
The first continuous X-ray spectrum may correspond to a high tube voltage, the second continuous X-ray spectrum to a low tube voltage, and the third continuous X-ray spectrum to an intermediate tube voltage between the high and low tube voltages. For example, the first continuous X-ray spectrum may correspond to a tube voltage of 120-160 kV, the second continuous X-ray spectrum to a tube voltage of 40-60 kV, and the third continuous X-ray spectrum to a tube voltage of 70-90 kV.