IMAGE PROCESSING APPARATUS, RADIATION IMAGING SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Information

  • Patent Application Publication Number: 20220167935
  • Date Filed: February 22, 2022
  • Date Published: June 02, 2022
Abstract
An image processing apparatus comprises a generating unit configured to generate, using a plurality of radiation images corresponding to mutually different radiation energies, a first material decomposition image that indicates a thickness of a first material and a second material decomposition image that indicates a thickness of a second material that differs from the first material. The generating unit generates, using the first material decomposition image and the second material decomposition image, a thickness image in which the thickness of the first material and the thickness of the second material are added together.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to an image processing apparatus, a radiation imaging system, an image processing method, and a non-transitory computer-readable storage medium storing a program. Specifically, the present invention relates to a radiation imaging apparatus and a radiation imaging system to be used in medical diagnosis for the capturing of still images, such as general radiography, and for the capturing of moving images, such as fluoroscopy.


Background Art

In recent years, radiation imaging apparatuses in which flat panel detectors (hereinafter “FPDs”) are used are being widely used as image-capturing apparatuses for use in radiation-based medical image diagnosis and non-destructive examination. Energy subtraction is an image-capturing method in which an FPD is used. In energy subtraction, thickness images for a plurality of materials, such as a bone image and a soft-tissue image for example, can be derived from a plurality of images with different radiation energies that are obtained by emitting radiation at different tube voltages, etc.


Japanese Patent Laid-Open No. H03-285475 discloses a technique in which the image quality of a bone part image is improved by smoothing a soft-tissue image and subtracting the smoothed image from an accumulation image.


In interventional radiology (IVR) in which an FPD is used, a contrast agent is injected into blood vessels, medical devices such as a catheter and a guide wire are inserted into a blood vessel, and a treatment is performed while the positions and shapes of the contrast agent and the medical devices are checked.


However, there is a problem in that, when bone thickness and soft-tissue thickness are separated using energy subtraction, the resulting material decomposition images may include noise as well as signals originating from materials other than the separated materials.


In view of the above-described problem, the present invention provides an image processing technique that allows material decomposition images with reduced noise to be obtained by utilizing the continuity of thickness in the human body.


SUMMARY OF THE INVENTION

According to one aspect of the present invention, there is provided an image processing apparatus comprising a generating unit configured to generate, using a plurality of radiation images corresponding to mutually different radiation energies, a first material decomposition image that indicates a thickness of a first material and a second material decomposition image that indicates a thickness of a second material that differs from the first material, wherein the generating unit generates, using the first material decomposition image and the second material decomposition image, a thickness image in which the thickness of the first material and the thickness of the second material are added together.


According to another aspect of the present invention, there is provided an image processing method for processing radiation images, comprising


generating, using a plurality of radiation images corresponding to mutually different radiation energies, a thickness image in which a thickness of a first material and a thickness of a second material that differs from the first material are added together.


Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain principles of the invention.



FIG. 1 is a diagram illustrating an example of a configuration of an X-ray image-capturing system according to a first embodiment.



FIG. 2 is an equivalent circuit diagram of a pixel of an X-ray imaging apparatus according to the first embodiment.



FIG. 3 is a timing chart of the X-ray imaging apparatus according to the first embodiment.



FIG. 4 is a timing chart of the X-ray imaging apparatus according to the first embodiment.



FIG. 5 is a diagram for describing correction processing according to the first embodiment.



FIG. 6 is a block diagram of signal processing according to the first embodiment.



FIG. 7 is a block diagram of image processing according to the first embodiment.



FIG. 8 is a diagram illustrating examples of an accumulation image and a bone image according to the first embodiment.



FIG. 9 is a diagram illustrating examples of a soft-tissue image and a thickness image according to the first embodiment.



FIG. 10 is a diagram illustrating examples of the accumulation image and the thickness image according to the first embodiment.



FIG. 11 includes portion 11A that is a diagram illustrating the relationship between X-ray spectra and energies, and portion 11B that is a diagram illustrating the relationship between linear attenuation coefficients and energies.



FIG. 12 is a block diagram of signal processing according to a second embodiment.



FIG. 13 is a block diagram of signal processing according to the second embodiment.



FIG. 14 is a block diagram of signal processing according to a third embodiment.



FIG. 15 is a block diagram of signal processing according to the third embodiment.





DESCRIPTION OF THE EMBODIMENTS

In the following, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments do not limit the invention according to the claims. While a plurality of features are described in the embodiments, it does not necessarily mean that all such features are necessary for the invention, and a plurality of features may also be combined as appropriate. Furthermore, the same reference numeral is appended in the attached drawings to configurations that are the same as or similar to one another, and overlapping description will be omitted.


Note that the term “radiation” in the present invention includes, in addition to alpha rays, beta rays, gamma rays, etc., which are beams formed by particles (including photons) that are emitted as a result of radioactive decay, beams having similar or higher energies, such as X-rays, particle beams, and cosmic rays. In the following embodiments, an apparatus in which X-rays are used as one example of radiation will be described. Accordingly, in the following, an X-ray imaging apparatus and an X-ray imaging system will be described as a radiation imaging apparatus and a radiation imaging system, respectively.


First Embodiment


FIG. 1 is a block diagram illustrating an example of a configuration of an X-ray imaging system according to the first embodiment as one example of a radiation imaging system. The X-ray imaging system according to the first embodiment includes an X-ray generation apparatus 101, an X-ray control apparatus 102, an imaging control apparatus 103, and an X-ray imaging apparatus 104.


The X-ray generation apparatus 101 generates X-rays and irradiates objects with X-rays. The X-ray control apparatus 102 controls the generation of X-rays in the X-ray generation apparatus 101. The imaging control apparatus 103 includes at least one processor (CPU) and a memory, for example, and obtains X-ray images and performs image processing on the X-ray images by the processor executing one or more programs stored in the memory. Note that the processing performed by the imaging control apparatus 103, which includes the image processing, may be realized by dedicated hardware or by hardware and software working together. The X-ray imaging apparatus 104 includes a fluorescent material 105 that converts the X-rays into visible light, and a two-dimensional detector 106 that detects the visible light. The two-dimensional detector 106 is a sensor in which pixels 20 for detecting X-ray quanta are arranged in an array having X columns and Y rows, and outputs image information.


The imaging control apparatus 103 functions as an image processing apparatus that processes radiation images using the above-mentioned processor. Examples of functional configurations as an image processing apparatus are illustrated as an obtaining unit 131, a correction unit 132, a signal processing unit 133, and an image processing unit 134. The obtaining unit 131 obtains a plurality of radiation images with mutually different energies that are obtained by capturing images while irradiating the object with radiation. As the plurality of radiation images, the obtaining unit 131 obtains radiation images that are obtained by performing sampling and holding multiple times during exposure to one shot of radiation. The correction unit 132 corrects the plurality of radiation images obtained by the obtaining unit 131 and generates a plurality of images to be used in energy subtraction processing.


The signal processing unit 133 generates material characteristic images using the plurality of images generated by the correction unit 132. The material characteristic images are images obtained through the energy subtraction processing, such as material decomposition images for separately indicating materials such as bone and soft tissue, or material identification images indicating effective atomic numbers and surface densities thereof, for example. The signal processing unit 133 generates a first material decomposition image indicating a thickness of a first material and a second material decomposition image indicating a thickness of a second material based on a plurality of radiation images captured at mutually different radiation energies. The signal processing unit 133 generates a thickness image in which the thickness of the first material and the thickness of the second material are added together. Here, the first material includes at least one of calcium, hydroxyapatite, or bone, and the second material includes at least one of water, fat, or a soft material that does not contain calcium. The signal processing unit 133 will be described in detail later. The image processing unit 134 generates a display image using the material characteristic images obtained through the signal processing.



FIG. 2 is an equivalent circuit diagram of a pixel 20 according to the first embodiment. The pixel 20 includes a photoelectric conversion element 201 and an output circuit section 202. The photoelectric conversion element 201 may typically be a photodiode. The output circuit section 202 includes an amplifier circuit section 204, a clamp circuit section 206, a sample-and-hold circuit 207, and a selector circuit section 208.


The photoelectric conversion element 201 includes a charge accumulator, and the charge accumulator is connected to the gate of a MOS transistor 204a of the amplifier circuit section 204. The source of the MOS transistor 204a is connected to a current source 204c via a MOS transistor 204b. The MOS transistor 204a and the current source 204c constitute a source follower circuit. The MOS transistor 204b is an enabling switch that switches on and puts the source follower circuit in an operating state when an enable signal EN supplied to the gate of the MOS transistor 204b is set to an active level.


In the example illustrated in FIG. 2, the charge accumulator of the photoelectric conversion element 201 and the gate of the MOS transistor 204a form one same node, and this node functions as a charge-voltage converter that converts charge accumulated in the charge accumulator into a voltage. That is, a voltage V (=Q/C) that is determined by the charge Q accumulated in the charge accumulator and a capacitance value C of the charge-voltage converter appears in the charge-voltage converter. The charge-voltage converter is connected to a reset potential Vres via a reset switch 203. When a reset signal PRES is set to an active level, the reset switch 203 switches on, and the potential of the charge-voltage converter is reset to the reset potential Vres.


The clamp circuit section 206 uses a clamp capacitor 206a and clamps noise that is output by the amplifier circuit section 204 in accordance with the reset potential of the charge-voltage converter. That is, the clamp circuit section 206 is a circuit for cancelling out this noise from a signal output from the source follower circuit in accordance with the charge generated by the photoelectric conversion element 201 through photoelectric conversion. This noise includes kTC noise generated when resetting is performed. The clamping is performed by setting a clamp signal PCL to an active level and switching a MOS transistor 206b on, and then setting the clamp signal PCL to a non-active level and switching the MOS transistor 206b off. The output side of the clamp capacitor 206a is connected to the gate of a MOS transistor 206c. The source of the MOS transistor 206c is connected to a current source 206e via a MOS transistor 206d. The MOS transistor 206c and the current source 206e constitute a source follower circuit. The MOS transistor 206d is an enabling switch that switches on and puts the source follower circuit in an operating state when an enable signal ENO supplied to the gate of the MOS transistor 206d is set to an active level.


The signal that is output from the clamp circuit section 206 in accordance with the charge generated by the photoelectric conversion element 201 through photoelectric conversion is written as a light signal to a capacitor 207Sb via a switch 207Sa when a light-signal sampling signal TS is set to an active level. A signal that is output from the clamp circuit section 206 when the MOS transistor 206b is switched on immediately after the potential of the charge-voltage converter is reset is a clamp voltage. The noise signal is written to a capacitor 207Nb via a switch 207Na when a noise sampling signal TN is set to an active level. This noise signal includes an offset component of the clamp circuit section 206. The switch 207Sa and the capacitor 207Sb constitute a signal sample-and-hold circuit 207S, and the switch 207Na and the capacitor 207Nb constitute a noise sample-and-hold circuit 207N. The sample-and-hold circuit 207 includes the signal sample-and-hold circuit 207S and the noise sample-and-hold circuit 207N.


When a driving circuit section drives and sets a row selection signal to an active level, the signal (light signal) held in the capacitor 207Sb is output to a signal line 21S via a MOS transistor 208Sa and a row selection switch 208Sb. Furthermore, the signal (noise) held in the capacitor 207Nb is simultaneously output to a signal line 21N via a MOS transistor 208Na and a row selection switch 208Nb. The MOS transistor 208Sa, along with a constant current source (not illustrated) provided to the signal line 21S, constitutes a source follower circuit. Similarly, the MOS transistor 208Na, along with a constant current source (not illustrated) provided to the signal line 21N, constitutes a source follower circuit. The MOS transistor 208Sa and the row selection switch 208Sb constitute a signal selector circuit section 208S, and the MOS transistor 208Na and the row selection switch 208Nb constitute a noise selector circuit section 208N. The selector circuit section 208 includes the signal selector circuit section 208S and the noise selector circuit section 208N.


The pixel 20 may include an adding switch 209S for adding light signals of a plurality of adjacent pixels 20. During an adding mode, an adding mode signal ADD is set to an active level, and the adding switch 209S is switched on. Thus, the capacitors 207Sb of adjacent pixels 20 are mutually connected by the adding switch 209S, and the light signals are averaged. Similarly, the pixel 20 may include an adding switch 209N for adding noises of a plurality of adjacent pixels 20. When the adding switch 209N switches on, the capacitors 207Nb of adjacent pixels 20 are mutually connected by the adding switch 209N, and the noises are averaged. An adding section 209 includes the adding switch 209S and the adding switch 209N.


Furthermore, the pixel 20 may include a sensitivity changing section 205 for changing sensitivity. For example, the pixel 20 may include a first sensitivity changing switch 205a and a second sensitivity changing switch 205′a, and circuit elements accompanying these switches. When a first change signal WIDE is set to an active level, the first sensitivity changing switch 205a switches on, and the capacitance value of a first additional capacitor 205b is added to the capacitance value of the charge-voltage converter. Thus, the sensitivity of the pixel 20 decreases. When a second change signal WIDE2 is set to an active level, the second sensitivity changing switch 205′a switches on, and the capacitance value of a second additional capacitor 205′b is added to the capacitance value of the charge-voltage converter. Thus, the sensitivity of the pixel 20 decreases to a further extent. By adding a function of decreasing the sensitivity of the pixel 20 in such a manner, the pixel 20 can receive a larger light amount and the dynamic range thereof can be widened. If the first change signal WIDE is set to an active level, an enable signal ENn may be set to an active level to cause a MOS transistor 204′a to perform a source follower operation in place of the MOS transistor 204a.


The X-ray imaging apparatus 104 reads, from the two-dimensional detector 106, the outputs from pixel circuits as described above, converts the outputs into digital values using an AD converter (not illustrated), and then transfers an image to the imaging control apparatus 103.


Next, the operation of the X-ray imaging system according to the first embodiment having the above-described configuration will be described. FIG. 3 illustrates driving timings of the X-ray imaging apparatus 104 for obtaining a plurality of X-ray images with mutually different energies that are to be provided to the energy subtraction in the X-ray imaging system according to the first embodiment. With the horizontal axis indicating time, the waveforms in FIG. 3 indicate: X-ray exposure; a synchronization signal; the resetting of the photoelectric conversion element 201; the sample-and-hold circuit 207; and timings when images are read from the signal lines 21.


X-ray exposure is performed after the photoelectric conversion element 201 is reset based on the reset signal. The X-ray tube voltage is ideally a square wave. However, it takes a finite amount of time for the tube voltage to rise and fall. Particularly in a case in which a pulse X-ray beam is used and the exposure time is short, the tube voltage can no longer be regarded as a square wave, and exhibits a waveform as indicated by X-rays 301 to 303. Rise period X-rays 301, stable period X-rays 302, and fall period X-rays 303 have different X-ray energies. Thus, by obtaining X-ray images corresponding to radiation during periods separated by performing sampling and holding, a plurality of types of X-ray images with mutually different energies can be obtained.


The X-ray imaging apparatus 104 performs sampling using the noise sample-and-hold circuit 207N after exposure to the rise period X-rays 301, and further performs sampling using the signal sample-and-hold circuit 207S after exposure to the stable period X-rays 302. Subsequently, the X-ray imaging apparatus 104 reads the difference between the signal line 21N and the signal line 21S as an image. Here, a signal (R1) of the rise period X-rays 301 is held in the noise sample-and-hold circuit 207N, and the sum (R1+B) of the signal of the rise period X-rays 301 and a signal (B) of the stable period X-rays 302 is held in the signal sample-and-hold circuit 207S. Accordingly, an image 304 that corresponds to the signal of the stable period X-rays 302 is read.


Next, the X-ray imaging apparatus 104 performs sampling using the signal sample-and-hold circuit 207S again after the exposure to the fall period X-rays 303 and the reading of the image 304 are completed. Subsequently, the X-ray imaging apparatus 104 resets the photoelectric conversion element 201, performs sampling using the noise sample-and-hold circuit 207N again, and reads the difference between the signal line 21N and the signal line 21S as an image. Here, a signal of a state in which no X-ray exposure is performed is held in the noise sample-and-hold circuit 207N, and the sum (R1+B+R2) of the signal of the rise period X-rays 301, the stable period X-rays 302, and a signal (R2) of the fall period X-rays 303 is held in the signal sample-and-hold circuit 207S. Accordingly, an image 306 that corresponds to the signal of the rise period X-rays 301, the signal of the stable period X-rays 302, and the signal of the fall period X-rays 303 is read. Subsequently, by calculating the difference between the image 306 and the image 304, an image 305 that corresponds to the sum of the rise period X-rays 301 and the fall period X-rays 303 is obtained. This calculation may be performed by the X-ray imaging apparatus 104 or by the imaging control apparatus 103.
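
The two readouts therefore differ only by the stable-period signal, and the image 305 can be recovered by a single per-pixel subtraction. A minimal sketch in Python (the array names are hypothetical and not part of the specification):

```python
import numpy as np

def rise_fall_image(image_304: np.ndarray, image_306: np.ndarray) -> np.ndarray:
    """Recover the image corresponding to the rise and fall periods (image 305).

    image_304 holds the stable-period signal (B); image_306 holds the total
    signal (R1 + B + R2), so the difference leaves R1 + R2.
    """
    return image_306 - image_304
```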


The timings when the sample-and-hold circuit 207 and the photoelectric conversion element 201 are reset are determined using a synchronization signal 307 that indicates that the exposure to X-rays from the X-ray generation apparatus 101 has started. A configuration of measuring the tube current of the X-ray generation apparatus 101 and determining whether or not the current value exceeds a preset threshold may be adopted as the method for detecting the start of X-ray exposure, but the method for detecting the start of X-ray exposure is not limited to this. For example, a configuration may be adopted in which the start of X-ray exposure is detected by, after the resetting of the photoelectric conversion element 201 is completed, repeating reading from the pixel 20 and determining whether or not the pixel value exceeds a preset threshold.


Alternatively, a configuration may be adopted in which an X-ray detector that is different from the two-dimensional detector 106 is built into the X-ray imaging apparatus 104, and the start of X-ray exposure is detected by determining whether or not a measurement value of the X-ray detector exceeds a preset threshold, for example. In any case, the sampling using the signal sample-and-hold circuit 207S, the sampling using the noise sample-and-hold circuit 207N, and the resetting of the photoelectric conversion element 201 are performed after a predetermined amount of time elapses from when the synchronization signal 307 indicating the start of X-ray exposure is input.


In such a manner, the image 304 corresponding to the stable period of a pulse X-ray beam and the image 305 corresponding to the sum of the rise period and the fall period of the pulse X-ray beam are obtained. These two X-ray images are formed through exposure to X-rays having mutually different energies, and thus the energy subtraction processing can be performed by performing calculation between these X-ray images.



FIG. 4 illustrates driving timings of the X-ray imaging apparatus 104, differing from those in FIG. 3, for obtaining a plurality of X-ray images with mutually different energies that are to be provided to the energy subtraction in the X-ray imaging system according to the first embodiment. FIG. 4 differs from FIG. 3 in that the tube voltage of the X-ray generation apparatus 101 is actively switched.


First, the X-ray generation apparatus 101 performs exposure to low-energy X-rays 401 after the photoelectric conversion element 201 is reset. In this state, the X-ray imaging apparatus 104 performs sampling using the noise sample-and-hold circuit 207N. Subsequently, the X-ray generation apparatus 101 switches the tube voltage and performs exposure to high-energy X-rays 402. In this state, the X-ray imaging apparatus 104 performs sampling using the signal sample-and-hold circuit 207S. Subsequently, the X-ray generation apparatus 101 switches the tube voltage and performs exposure to low-energy X-rays 403. The X-ray imaging apparatus 104 reads the difference between the signal line 21N and the signal line 21S as an image. Here, a signal (R1) of the low-energy X-rays 401 is held in the noise sample-and-hold circuit 207N, and the sum (R1+B) of the signal of the low-energy X-rays 401 and a signal (B) of the high-energy X-rays 402 is held in the signal sample-and-hold circuit 207S. Accordingly, an image 404 that corresponds to the signal of the high-energy X-rays 402 is read.


Next, the X-ray imaging apparatus 104 performs sampling using the signal sample-and-hold circuit 207S again after the exposure to the low-energy X-rays 403 and the reading of the image 404 are completed. Subsequently, the X-ray imaging apparatus 104 resets the photoelectric conversion element 201, performs sampling using the noise sample-and-hold circuit 207N again, and reads the difference between the signal line 21N and the signal line 21S as an image. Here, a signal of a state in which no X-ray exposure is performed is held in the noise sample-and-hold circuit 207N, and the sum (R1+B+R2) of the signal of the low-energy X-rays 401, the high-energy X-rays 402, and a signal (R2) of the low-energy X-rays 403 is held in the signal sample-and-hold circuit 207S. Accordingly, an image 406 that corresponds to the signal of the low-energy X-rays 401, the signal of the high-energy X-rays 402, and the signal of the low-energy X-rays 403 is read.


Subsequently, by calculating the difference between the image 406 and the image 404, an image 405 that corresponds to the sum of the low-energy X-rays 401 and the low-energy X-rays 403 is obtained. This calculation may be performed by the X-ray imaging apparatus 104 or by the imaging control apparatus 103. The synchronization signal (407) is similar to that in FIG. 3. By obtaining images while actively switching the tube voltage in such a manner, the energy difference between the low-energy and high-energy radiation images can be increased to a further extent compared to the method in FIG. 3.


Next, the energy subtraction processing by the imaging control apparatus 103 will be described. The energy subtraction processing in the first embodiment is divided into three stages, namely correction processing by the correction unit 132, signal processing by the signal processing unit 133, and image processing by the image processing unit 134. Each processing will be described below.


Description of Correction Processing

The correction processing is processing in which the plurality of radiation images obtained from the X-ray imaging apparatus 104 are processed to generate a plurality of images to be used in the later-described signal processing in the energy subtraction processing. FIG. 5 illustrates the correction processing for the energy subtraction processing according to the first embodiment. First, the obtaining unit 131 causes the X-ray imaging apparatus 104 to capture images in a state in which no X-ray exposure is performed, and obtains images according to the driving illustrated in FIG. 3 or FIG. 4. Two images are read as a result of the driving. In the following, the first image (image 304 or image 404) is referred to as F_ODD, and the second image (image 306 or image 406) is referred to as F_EVEN. F_ODD and F_EVEN are each an image corresponding to fixed pattern noise (FPN) of the X-ray imaging apparatus 104.


Next, the obtaining unit 131 exposes the X-ray imaging apparatus 104 to X-rays in a state in which no object is present and causes the X-ray imaging apparatus 104 to capture images to obtain gain correction images that are output from the X-ray imaging apparatus 104 according to the driving illustrated in FIG. 3 or FIG. 4. As in the above-described case, two images are read as a result of this driving. In the following, the first gain-correction image (image 304 or image 404) is referred to as W_ODD, and the second gain-correction image (image 306 or image 406) is referred to as W_EVEN. W_ODD and W_EVEN are each an image corresponding to the sum of the FPN of the X-ray imaging apparatus 104 and an X-ray-based signal. The correction unit 132 obtains images WF_ODD and WF_EVEN from which the FPN of the X-ray imaging apparatus 104 has been removed by subtracting F_ODD from W_ODD and subtracting F_EVEN from W_EVEN. This is referred to as offset correction.


WF_ODD is an image that corresponds to the stable period X-rays 302, and WF_EVEN is an image that corresponds to the sum of the rise period X-rays 301, the stable period X-rays 302, and the fall period X-rays 303. Accordingly, the correction unit 132 obtains an image that corresponds to the sum of the rise period X-rays 301 and the fall period X-rays 303 by subtracting WF_ODD from WF_EVEN. Processing in which images that correspond to X-rays of specific periods separated by performing sampling and holding are obtained by performing subtraction using a plurality of images in such a manner is referred to as color correction. The energies of the rise period X-rays 301 and the fall period X-rays 303 are lower than the energy of the stable period X-rays 302. Accordingly, as a result of the color correction, a low-energy image W_Low corresponding to a case in which no object is present can be obtained by subtracting WF_ODD from WF_EVEN. Furthermore, a high-energy image W_High corresponding to a case in which no object is present is obtained from WF_ODD.


Next, the obtaining unit 131 exposes the X-ray imaging apparatus 104 to X-rays in a state in which an object is present and causes the X-ray imaging apparatus 104 to capture images to obtain images that are output from the X-ray imaging apparatus 104 according to the driving illustrated in FIG. 3 or FIG. 4. Here, two images are read. In the following, the first image (image 304 or image 404) is referred to as X_ODD, and the second image (image 306 or image 406) is referred to as X_EVEN. By performing offset correction and color correction similar to those in the case in which no object is present, the correction unit 132 obtains a low-energy image X_Low corresponding to a case in which the object is present and a high-energy image X_High corresponding to a case in which the object is present.


Here, [Math. 1] below holds true, where d is the thickness of the object, μ is a linear attenuation coefficient of the object, I0 is the output from the pixels 20 in a case in which no object is present, and I is the output from the pixels 20 in a case in which the object is present.






I = I0 exp(−μd)  [Math. 1]


[Math. 2] below can be obtained by transforming [Math. 1]. The right side of [Math. 2] indicates the attenuation rate of the object. The attenuation rate of the object is a real number between 0 and 1.






I/I0 = exp(−μd)  [Math. 2]


Accordingly, the correction unit 132 obtains a low-energy attenuation rate image L (hereinafter also referred to as “low-energy image L”) by dividing the low-energy image X_Low corresponding to a case in which the object is present by the low-energy image W_Low corresponding to a case in which no object is present. Similarly, the correction unit 132 obtains a high-energy attenuation rate image H (hereinafter also referred to as “high-energy image H”) by dividing the high-energy image X_High corresponding to a case in which the object is present by the high-energy image W_High corresponding to a case in which no object is present. Processing in which attenuation rate images are obtained by dividing images obtained based on radiation images obtained in a state in which the object is present by images obtained based on radiation images obtained in a state in which no object is present in such a manner is referred to as gain correction. This concludes the description of the correction processing by the correction unit 132 according to the first embodiment.
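
For reference, the correction chain described above (offset correction, color correction, and gain correction) amounts to a few per-pixel array operations. The following is a minimal sketch in Python (the array names are hypothetical; it illustrates the described arithmetic, not the actual implementation in the imaging control apparatus 103):

```python
import numpy as np

def correction_processing(F_ODD, F_EVEN, W_ODD, W_EVEN, X_ODD, X_EVEN):
    """Derive the low- and high-energy attenuation rate images L and H."""
    # Offset correction: remove the fixed pattern noise (FPN).
    WF_ODD, WF_EVEN = W_ODD - F_ODD, W_EVEN - F_EVEN
    XF_ODD, XF_EVEN = X_ODD - F_ODD, X_EVEN - F_EVEN

    # Color correction: separate the stable period (high energy) from the
    # rise + fall periods (low energy).
    W_High, W_Low = WF_ODD, WF_EVEN - WF_ODD   # no object present
    X_High, X_Low = XF_ODD, XF_EVEN - XF_ODD   # object present

    # Gain correction: attenuation rate = (with object) / (without object).
    L = X_Low / W_Low
    H = X_High / W_High
    return L, H
```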


Description of Signal Processing


FIG. 6 illustrates a block diagram of the signal processing in the energy subtraction processing according to the first embodiment. The signal processing unit 133 generates material characteristic images using the plurality of images obtained from the correction unit 132. In the following, the generation of material decomposition images including a bone thickness image B and a soft-tissue thickness image S will be described. The signal processing unit 133, by performing the following processing, derives a bone thickness image B and a soft-tissue thickness image S from the low-energy attenuation rate image L and the high-energy attenuation rate image H obtained through the correction illustrated in FIG. 5.


First, [Math. 3] below holds true, where E is the X-ray photon energy, N(E) is the number of photons at the energy E, B is the thickness in the bone thickness image, S is the thickness in the soft-tissue thickness image, μB(E) is the linear attenuation coefficient of bone at the energy E, μS(E) is the linear attenuation coefficient of soft tissue at the energy E, and I/I0 is the attenuation rate.










\frac{I}{I_0} = \frac{\int_0^\infty N(E)\,\exp\{-\mu_B(E)\,B - \mu_S(E)\,S\}\,E\,dE}{\int_0^\infty N(E)\,E\,dE}   [Math. 3]


The number of photons N(E) at the energy E indicates an X-ray spectrum. The X-ray spectrum can be obtained through simulation or actual measurement. Furthermore, the linear attenuation coefficient μB(E) of bone at the energy E and the linear attenuation coefficient μS(E) of soft tissue at the energy E can be obtained from a database such as that provided by the National Institute of Standards and Technology (NIST). Accordingly, based on [Math. 3], the attenuation rate I/I0 can be calculated for any bone thickness B, any soft-tissue thickness S, and any X-ray spectrum N(E).
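
For illustration, once the spectrum N(E) and the attenuation coefficients are tabulated on a common energy grid, the forward model of [Math. 3] reduces to two numerical integrals. A sketch in Python under that assumption (all array names are hypothetical):

```python
import numpy as np

def attenuation_rate(N, mu_B, mu_S, B, S, E):
    """Evaluate the attenuation rate I/I0 of [Math. 3] for thicknesses B and S.

    N, mu_B, mu_S : spectrum and linear attenuation coefficients tabulated on
                    the energy grid E (e.g., from simulation and NIST data).
    """
    weight = N * E                                  # energy-weighted spectrum
    transmitted = weight * np.exp(-mu_B * B - mu_S * S)
    return np.trapz(transmitted, E) / np.trapz(weight, E)
```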


Here, the equations in [Math. 4] below hold true, where NL(E) is the low-energy X-ray spectrum, and NH(E) is the high-energy X-ray spectrum. Note that L is a pixel value in the low-energy attenuation rate image, and H is a pixel value in the high-energy attenuation rate image.










L = \frac{\int_0^\infty N_L(E)\,\exp\{-\mu_B(E)\,B - \mu_S(E)\,S\}\,E\,dE}{\int_0^\infty N_L(E)\,E\,dE}

H = \frac{\int_0^\infty N_H(E)\,\exp\{-\mu_B(E)\,B - \mu_S(E)\,S\}\,E\,dE}{\int_0^\infty N_H(E)\,E\,dE}   [Math. 4]


By solving the non-linear system of equations in [Math. 4], the thickness B in the bone thickness image and the thickness S in the soft-tissue thickness image can be derived. Here, a case will be described in which the Newton-Raphson method is used as a representative method for solving non-linear systems of equations. First, when m is the number of iterations of the Newton-Raphson method, Bm is the bone thickness after the mth iteration, and Sm is the soft-tissue thickness after the mth iteration, the high-energy attenuation rate Hm after the mth iteration and the low-energy attenuation rate Lm after the mth iteration can be expressed as in [Math. 5] below.











L_m = \frac{\int_0^\infty N_L(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_L(E)\,E\,dE}

H_m = \frac{\int_0^\infty N_H(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_H(E)\,E\,dE}   [Math. 5]


Furthermore, the rate of change in attenuation rate when there is a minute change in thickness is expressed using [Math. 6] below.














\frac{\partial H_m}{\partial B_m} = \frac{\int_0^\infty -\mu_B(E)\,N_H(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_H(E)\,E\,dE}

\frac{\partial L_m}{\partial B_m} = \frac{\int_0^\infty -\mu_B(E)\,N_L(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_L(E)\,E\,dE}

\frac{\partial H_m}{\partial S_m} = \frac{\int_0^\infty -\mu_S(E)\,N_H(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_H(E)\,E\,dE}

\frac{\partial L_m}{\partial S_m} = \frac{\int_0^\infty -\mu_S(E)\,N_L(E)\,\exp\{-\mu_B(E)\,B_m - \mu_S(E)\,S_m\}\,E\,dE}{\int_0^\infty N_L(E)\,E\,dE}   [Math. 6]


Here, the bone thickness Bm+1 and the soft-tissue thickness Sm+1 after the (m+1)th iteration are expressed using the high-energy attenuation rate H and the low-energy attenuation rate L as shown in [Math. 7] below.










\begin{bmatrix} B_{m+1} \\ S_{m+1} \end{bmatrix} = \begin{bmatrix} B_m \\ S_m \end{bmatrix} + \begin{bmatrix} \dfrac{\partial H_m}{\partial B_m} & \dfrac{\partial H_m}{\partial S_m} \\ \dfrac{\partial L_m}{\partial B_m} & \dfrac{\partial L_m}{\partial S_m} \end{bmatrix}^{-1} \begin{bmatrix} H - H_m \\ L - L_m \end{bmatrix}   [Math. 7]


According to Cramer's rule, the inverse matrix of the 2×2 matrix can be expressed using [Math. 8] below, where det is the determinant.









\mathrm{det} = \frac{\partial H_m}{\partial B_m}\,\frac{\partial L_m}{\partial S_m} - \frac{\partial H_m}{\partial S_m}\,\frac{\partial L_m}{\partial B_m}

\begin{bmatrix} \dfrac{\partial H_m}{\partial B_m} & \dfrac{\partial H_m}{\partial S_m} \\ \dfrac{\partial L_m}{\partial B_m} & \dfrac{\partial L_m}{\partial S_m} \end{bmatrix}^{-1} = \frac{1}{\mathrm{det}} \begin{bmatrix} \dfrac{\partial L_m}{\partial S_m} & -\dfrac{\partial H_m}{\partial S_m} \\ -\dfrac{\partial L_m}{\partial B_m} & \dfrac{\partial H_m}{\partial B_m} \end{bmatrix}   [Math. 8]


Accordingly, [Math. 9] below can be derived by substituting [Math. 8] into [Math. 7].











B_{m+1} = B_m + \frac{1}{\mathrm{det}}\,\frac{\partial L_m}{\partial S_m}\,(H - H_m) - \frac{1}{\mathrm{det}}\,\frac{\partial H_m}{\partial S_m}\,(L - L_m)

S_{m+1} = S_m - \frac{1}{\mathrm{det}}\,\frac{\partial L_m}{\partial B_m}\,(H - H_m) + \frac{1}{\mathrm{det}}\,\frac{\partial H_m}{\partial B_m}\,(L - L_m)   [Math. 9]


By repeating such a calculation, the difference between the high-energy attenuation rate Hm after the mth iteration and the actually-measured high-energy attenuation rate H approaches 0, and the same applies to the low-energy attenuation rate L. Thus, the bone thickness Bm after the mth iteration converges to the bone thickness B, and the soft-tissue thickness Sm after the mth iteration converges to the soft-tissue thickness S. The non-linear system of equations shown in [Math. 4] can be solved in such a manner. Accordingly, by solving [Math. 4] for all pixels, a bone thickness image B and a soft-tissue thickness image S can be obtained from the low-energy attenuation rate image L and the high-energy attenuation rate image H.
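
The iteration of [Math. 5] to [Math. 9] can be sketched per pixel as follows (Python; the tabulated spectra, the initial guess, and the fixed iteration count are assumptions, and a convergence test would normally replace the fixed count):

```python
import numpy as np

def decompose_pixel(H, L, N_H, N_L, mu_B, mu_S, E, n_iter=20):
    """Solve [Math. 4] for the bone thickness B and soft-tissue thickness S."""
    B, S = 0.0, 0.0                                 # initial guess (assumption)
    wH, wL = N_H * E, N_L * E
    for _ in range(n_iter):
        att = np.exp(-mu_B * B - mu_S * S)
        Hm = np.trapz(wH * att, E) / np.trapz(wH, E)               # [Math. 5]
        Lm = np.trapz(wL * att, E) / np.trapz(wL, E)
        dH_dB = np.trapz(-mu_B * wH * att, E) / np.trapz(wH, E)    # [Math. 6]
        dH_dS = np.trapz(-mu_S * wH * att, E) / np.trapz(wH, E)
        dL_dB = np.trapz(-mu_B * wL * att, E) / np.trapz(wL, E)
        dL_dS = np.trapz(-mu_S * wL * att, E) / np.trapz(wL, E)
        det = dH_dB * dL_dS - dH_dS * dL_dB                        # [Math. 8]
        B += (dL_dS * (H - Hm) - dH_dS * (L - Lm)) / det           # [Math. 9]
        S += (-dL_dB * (H - Hm) + dH_dB * (L - Lm)) / det
    return B, S
```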


Note that, while the bone thickness image B and the soft-tissue thickness image S are calculated in the first embodiment, the present invention is not limited to such an embodiment. For example, the thickness W of water and the thickness I of a contrast agent may be calculated. That is, decomposition may be performed into the thicknesses of any two kinds of materials. Furthermore, an image of effective atomic numbers Z and an image of surface densities D may be derived from the low-energy attenuation rate image L and the high-energy attenuation rate image H obtained through the corrections illustrated in FIG. 5. An effective atomic number Z is an atomic number equivalent of a mixture. Also, a surface density D is the product of object density [g/cm3] and object thickness [cm].


Furthermore, the non-linear system of equations is solved using the Newton-Raphson method in the first embodiment. However, the present invention is not limited to such an embodiment. For example, iterative solution methods such as the least-squares method and the bisection method may be used. Furthermore, while the non-linear system of equations is solved using an iterative solution method in the first embodiment, the present invention is not limited to such an embodiment. A configuration may be adopted in which bone thicknesses B and soft-tissue thicknesses S for various combinations of the high-energy attenuation rate H and the low-energy attenuation rate L are derived in advance to create a table, and the bone thickness B and the soft-tissue thickness S are derived at high speed by referring to this table.
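
One way to realize such a table is to solve the decomposition once on a coarse grid of (H, L) values and then interpolate per pixel. A sketch under that assumption (the grid ranges and the per-pixel solver, such as the hypothetical decompose_pixel() above, are assumptions):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_thickness_table(solve, h_grid, l_grid):
    """Tabulate (B, S) = solve(H, L) on a regular grid and wrap it in interpolators."""
    B_tab = np.empty((len(h_grid), len(l_grid)))
    S_tab = np.empty_like(B_tab)
    for i, h in enumerate(h_grid):
        for j, l in enumerate(l_grid):
            B_tab[i, j], S_tab[i, j] = solve(h, l)
    lut_B = RegularGridInterpolator((h_grid, l_grid), B_tab)
    lut_S = RegularGridInterpolator((h_grid, l_grid), S_tab)
    return lut_B, lut_S
```

At run time, the attenuation rate images H and L are stacked into per-pixel (H, L) pairs and passed to the interpolators, which replaces the per-pixel iteration with a table lookup.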


Description of Image Processing


FIG. 7 illustrates a block diagram of the image processing in the energy subtraction processing according to the first embodiment. The image processing unit 134 according to the first embodiment generates a display image by performing post-processing, etc., on the bone thickness image B obtained through the signal processing illustrated in FIG. 6. The image processing unit 134 may use logarithmic conversion, dynamic range compression, etc., as post-processing.


Furthermore, an accumulation image A, which is an image that is the sum of high energy and low energy, may be used as the display image, for example. The accumulation image A is an image that is compatible with images without energy resolution captured using existing radiation imaging systems. The image processing unit 134 may generate the accumulation image A by multiplying the high-energy attenuation rate image H and the low-energy attenuation rate image L by coefficients and adding the results. Alternatively, the image processing unit 134 may generate the accumulation image A by dividing the image X_EVEN illustrated in FIG. 5, which corresponds to the sum of the rise period X-rays 301, the stable period X-rays 302, and the fall period X-rays 303 of the X-rays in a case in which the object is present, by the image W_EVEN illustrated in FIG. 5, which corresponds to the sum of the rise period X-rays 301, the stable period X-rays 302, and the fall period X-rays 303 of the X-rays in a case in which no object is present.
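
As a minimal sketch of the first option (the equal weights are placeholders; the text only states that H and L are multiplied by coefficients and added):

```python
def accumulation_image(H, L, c_H=0.5, c_L=0.5):
    """Weighted sum of the high- and low-energy attenuation rate images.

    The coefficients c_H and c_L are assumptions; alternatively, A may be
    formed as X_EVEN / W_EVEN as described above.
    """
    return c_H * H + c_L * L
```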



FIG. 8 is a diagram illustrating examples of the accumulation image A and the bone image B according to the first embodiment. A human body is normally formed from only soft tissue and bones. However, when IVR is performed using the radiation imaging system illustrated in FIG. 1, a contrast agent is injected into blood vessels. Furthermore, a catheter and a guide wire are inserted into a blood vessel to perform a procedure such as stenting or coiling. IVR is performed while checking the positions and shapes of the contrast agent and the medical devices. Accordingly, visibility may improve by separating only the contrast agent or the medical devices, or by removing backgrounds such as soft tissue and bones.


As illustrated in FIG. 8, in an image that is compatible with ordinary radiation imaging systems, i.e., in the accumulation image A, soft tissue is displayed as well as the contrast agent, a stent, and bones. On the other hand, in the radiation imaging system according to the first embodiment, the influence of soft tissue can be reduced by displaying the bone image B.


Meanwhile, the main component of a contrast agent is iodine, and the main component of a medical device is a metal, such as stainless steel. Both such materials have atomic numbers higher than that of calcium, which is the main component of bone, and thus bones, contrast agents, and medical devices are displayed in the bone image B.


According to an investigation carried out by the present inventors, even in a case such as that in which separation into a water image W and a contrast agent image I was performed based on the high-energy image H and the low-energy image L, bones, contrast agents, and medical devices were displayed in the contrast agent image I. Furthermore, this is the same even if the filters and tube voltages used to generate low-energy and high-energy X-rays are changed. That is, contrast agents and medical devices could not be separated from bones.



FIG. 9 is a diagram illustrating examples of the soft-tissue image and a thickness image according to the first embodiment. According to an observation carried out by the present inventors of soft-tissue images S of phantoms of the four limbs, it was found that a bone is visible as a decrease in soft tissue thickness. This is because the soft-tissue thickness decreases by an amount corresponding to the thickness of a bone. Furthermore, it was found through an observation of an image that is the sum of the bone image B and the soft-tissue image S, i.e., the thickness image T, that the bone contrast disappeared and was no longer visible. This is because the decrease in soft-tissue thickness in an area where a bone is present is offset by the thickness of the bone being added.


According to an investigation carried out by the present inventors, it was found that, while there are areas in the bones of the human body that do not contain calcium, such as the spongy bone and the bone marrow, the insides of such areas are filled with organic matter (i.e., not filled with gas). That is, it can be said that a human body has continuous thickness when projected from one direction. Thus, the contrast of a bone can be eliminated in the thickness image T even in the case of the human body. Here, it should be noted that the above-described continuity of thickness does not hold true for areas that may contain gases, such as the lungs and the digestive organs. Furthermore, it should also be noted that the inside of the spongy bone and the bone marrow is hollow (i.e., is filled with gases) in a dry human-bone phantom, and that there are creatures having hollow bones, such as birds.


In the present embodiment, the signal processing unit 133 generates a thickness image in which a thickness of a first material and a thickness of a second material are added together. Thus, a material decomposition image with reduced noise can be generated.



FIG. 10 is a diagram illustrating examples of the accumulation image A and the thickness image T according to the first embodiment. When a contrast agent is injected into the four limbs, both the bones and the contrast agent are visible in the accumulation image A. However, in the thickness image T, the contrast of bones disappears, and it is possible to see only soft tissue and the contrast agent. Accordingly, it can be expected that visibility can be improved in a situation such as where bones would be in the way in viewing the contrast agent and medical devices. However, depending on the tube voltages of the low-energy and high-energy X-rays illustrated in FIG. 4, there are cases in which, in the thickness image T, the contrast-agent contrast becomes weak at the same time as the bone contrast disappears, and thus the contrast agent becomes difficult to see.



FIG. 11 includes portion 11A that is a diagram illustrating the relationship between X-ray spectra and energies, and portion 11B that is a diagram illustrating the relationship between linear attenuation coefficients and energies. In portion 11A of FIG. 11, a waveform 1101 indicates an X-ray spectrum at a 50-kV tube voltage, and a waveform 1102 indicates an X-ray spectrum at a 120-kV tube voltage. A broken line 1110 indicates the average energy (33 keV) of the X-ray spectrum at the 50-kV tube voltage, and a broken line 1120 indicates the average energy (57 keV) of the X-ray spectrum at the 120-kV tube voltage.


Furthermore, as illustrated in portion 11B of FIG. 11, the linear attenuation coefficient varies depending on material (for example, soft tissue, bone, contrast agent, etc.) and energy. In portion 11B of FIG. 11, a waveform 1103 indicates the linear attenuation coefficient of soft tissue, a waveform 1104 indicates the linear attenuation coefficient of bone, and a waveform 1105 indicates the linear attenuation coefficient of a contrast agent.


In the present embodiment, the obtaining unit 131 obtains a plurality of radiation images by capturing images at a first energy (low energy) and at a second energy (high energy) that is higher than the first energy. Here, the average energy in the spectrum of radiation for obtaining radiation images based on the first energy is lower than the iodine K-edge 1130 (portion 11B of FIG. 11).


Generally, the greater the difference between the tube voltages of the low-energy and high-energy X-rays, the greater the difference between the linear attenuation coefficients of materials. Accordingly, the SN ratio of the images obtained through the signal processing illustrated in FIG. 6 is improved. On the other hand, the exposure dose necessary for achieving the same SN ratio tends to increase if the tube voltage of the low-energy X-rays is set too low. Thus, in the signal processing, the tube voltage of the low-energy X-rays and the tube voltage of the high-energy X-rays may be respectively set to 70 kV and 120 kV, etc., for example.


Here, as illustrated in portion 11B of FIG. 11 for example, the K-edge 1130 of iodine, which is the main component of contrast agents, is around 30 keV. The contrast-agent contrast in the thickness image T can be emphasized by selecting a radiation quality such that this K-edge 1130 can be used. That is, the contrast-agent contrast in the thickness image T can be emphasized by selecting the radiation quality so that the average energy in the spectrum of the radiation for obtaining radiation images based on low energy is lower than the iodine K-edge.


For example, the signal processing unit 133 may use, in the signal processing, a radiation quality such as that obtained by setting the tube voltage of the low-energy X-rays to 40-50 kV and by not passing the X-rays through any additional filter. With such a radiation quality, the average energy of the low-energy X-rays falls below the iodine K-edge, and the contrast-agent contrast is readily emphasized. However, the low-energy X-rays hardly pass through thick objects. Accordingly, the first embodiment of the present invention is suitable for use with parts, such as the four limbs for example, which are relatively thin and in which the continuity of thickness illustrated in FIG. 9 favorably holds true.


In the image processing according to the present embodiment, the accumulation image A, the bone image B, or the thickness image T is displayed as the display image. However, the display image is not limited to such examples, and the image processing unit 134 may display the high-energy image H or the soft-tissue image S as the display image. Furthermore, the images obtained in the timing chart illustrated in FIG. 4, the images obtained in the correction processing illustrated in FIG. 5, and the images obtained through the signal processing illustrated in FIG. 6 may also be used. Furthermore, while logarithmic conversion and dynamic range compression have been mentioned as the post-processing to be applied to such images, there is no limitation to such an embodiment. For example, the image processing unit 134 may perform image processing such as the application of a temporal-direction filter such as a recursive filter or a spatial-direction filter such as a Gaussian filter. That is, it can be said that the image processing according to the present embodiment is processing in which images that have been captured, corrected, or signal-processed are subjected to calculation, as appropriate.


According to the present embodiment, material decomposition images with reduced noise can be obtained.


Second Embodiment

In the second embodiment, a configuration for utilizing the continuity of thickness illustrated in FIG. 9 and thereby reducing noise in the images (material decomposition images) obtained through the signal processing illustrated in FIG. 6 will be described.



FIG. 12 illustrates a block diagram of signal processing according to the second embodiment. In the second embodiment, in a manner similar to that in the signal processing in FIG. 6, the signal processing unit 133 generates material decomposition images using the plurality of images obtained from the correction unit 132. That is, the signal processing unit 133 generates the bone thickness image B and the soft-tissue thickness image S from the low-energy image L and the high-energy image H. There is a problem that these thickness images include more noise than the low-energy image L and the high-energy image H, which leads to degradation of image quality. In view of this, the signal processing unit 133 generates a filtered soft-tissue thickness image S′ by applying filtering for reducing noise to the soft-tissue thickness image S. For the filtering, the signal processing unit 133 may use a Gaussian filter, a median filter, etc. Subsequently, in a manner similar to that in the description of FIG. 7, the signal processing unit 133 generates the accumulation image A from the low-energy image L and the high-energy image H. Furthermore, the signal processing unit 133 generates a bone thickness image B′ with reduced noise from the filtered soft-tissue thickness image S′ and the accumulation image A.
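
A minimal sketch of the noise-reduction filtering step, assuming the SciPy implementations as stand-ins for the Gaussian or median filter named above (the kernel parameters are assumptions):

```python
from scipy.ndimage import gaussian_filter, median_filter

def filter_soft_tissue(S, method="gaussian", sigma=2.0, size=5):
    """Return the filtered soft-tissue thickness image S'."""
    if method == "gaussian":
        return gaussian_filter(S, sigma=sigma)
    return median_filter(S, size=size)
```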


First, [Math. 10] below holds true, where NA(E) is the spectrum in the image that is the sum of the low-energy and high-energy X-rays, i.e., the accumulation image A, S is the soft-tissue thickness, and B is the bone thickness.









A = \frac{\int N_A(E)\,\exp\{-\mu_B(E)\,B - \mu_S(E)\,S\}\,E\,dE}{\int N_A(E)\,E\,dE}   [Math. 10]


By substituting the soft-tissue thickness S and the pixel value A of the accumulation image at a given pixel into [Math. 10] and solving the non-linear equation, the bone thickness B at the given pixel can be derived. Here, by substituting the filtered soft-tissue thickness S′ in place of the soft-tissue thickness S and solving [Math. 10], a bone thickness B′ can be obtained.
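
Because [Math. 10] contains a single unknown once A and the filtered soft-tissue thickness are fixed, a bracketing root finder can be applied per pixel. A sketch under the assumptions that the model is monotonic in B and that the root lies within a hypothetical upper bound b_max:

```python
import numpy as np
from scipy.optimize import brentq

def bone_from_accumulation(A, S_f, N_A, mu_B, mu_S, E, b_max=10.0):
    """Solve [Math. 10] for the bone thickness B at one pixel, given A and S'."""
    w = N_A * E

    def model(B):
        att = np.exp(-mu_B * B - mu_S * S_f)
        return np.trapz(w * att, E) / np.trapz(w, E)

    # model(B) decreases monotonically with B, so the root of model(B) - A
    # can be bracketed on [0, b_max] when A corresponds to a valid thickness.
    return brentq(lambda B: model(B) - A, 0.0, b_max)
```

The same structure can be reused for the thickness-image variant described below.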


Generally, since a soft-tissue image does not include so many high-frequency components, signal components are not readily lost even if filtering is applied to remove noise from a soft-tissue image. Thus, a bone thickness image B′ with reduced noise can be obtained using the accumulation image A, which does not include much noise in the first place, and the soft-tissue thickness image S′ with reduced noise. However, there is a problem that, in a case in which high-frequency components are included in the soft-tissue image, some signal components of the bone thickness image B′ with reduced noise would be lost.


In such a case, the bone thickness image B′ may be generated according to the block diagram of signal processing illustrated in FIG. 13. FIG. 13 illustrates a block diagram of signal processing according to the second embodiment. In the block diagram of signal processing illustrated in FIG. 13, in a similar manner as that in the signal processing in the block diagram illustrated in FIG. 12, the signal processing unit 133 generates material decomposition images using the plurality of images obtained from the correction unit 132. That is, the signal processing unit 133 generates the bone thickness image B and the soft-tissue thickness image S from the low-energy image L and the high-energy image H. In addition, the signal processing unit 133 generates the accumulation image A from the low-energy image L and the high-energy image H. Furthermore, the signal processing unit 133 generates an image that is the sum of the bone thickness image B and the soft-tissue thickness image S, i.e., the thickness image T. Then, the signal processing unit 133 generates a filtered thickness image T′ by applying filtering for reducing noise to the thickness image T. Furthermore, the signal processing unit 133 generates a bone thickness image B′ with reduced noise from the filtered thickness image T′ and the accumulation image A.


Here, [Math. 11] below holds true when [Math. 10] is transformed based on T=B+S, where T is thickness.









A = \frac{\int N_A(E)\,\exp\{-\mu_B(E)\,B - \mu_S(E)\,(T - B)\}\,E\,dE}{\int N_A(E)\,E\,dE}   [Math. 11]


By substituting the thickness T and the pixel value A of the accumulation image at a given pixel into [Math. 11] and solving the non-linear equation, the bone thickness B at the given pixel can be derived. Here, by substituting the filtered thickness T′ in place of the thickness T and solving [Math. 11], a bone thickness B′ can be obtained. As described in FIG. 9, the continuity in the thickness image T is high, and thus the thickness image T includes even less high-frequency components compared to the soft-tissue thickness image. Accordingly, signal components are not readily lost even if filtering is performed to remove noise. Thus, a bone thickness image B′ with reduced noise can be obtained using the accumulation image A, which does not include much noise in the first place, and the thickness image T′ with reduced noise.


Note that, while the example described with [Math. 11] calculates the bone thickness image B′ with reduced noise, the same applies to a case in which a soft-tissue thickness image with reduced noise is calculated. That is, the signal processing unit 133 can generate a material decomposition image of a first material (bone thickness image B′) with reduced noise compared to a first material decomposition image (bone thickness image B), or a material decomposition image of a second material (soft-tissue thickness image S′) with reduced noise compared to a second material decomposition image (soft-tissue thickness image S), based on a filtered thickness image T′ and an accumulation image A obtained based on addition of a plurality of radiation images (H and L).


According to the present embodiment, material decomposition images with reduced noise can be obtained.


Furthermore, the results of the calculation in [Math. 11] can be stored in advance in a table in an internal memory of the signal processing unit 133, and the signal processing unit 133 can obtain the bone thickness image B′ (or the soft-tissue thickness image S′) corresponding to the filtered thickness image T′ and the accumulation image A by referring to the table when performing the calculation in [Math. 11]. Thus, the signal processing unit 133 can obtain a material decomposition image (B′ or S′) with reduced noise for each material in a short amount of time, both in the capturing of moving images such as in IVR and in the capturing of still images.
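A table-based variant of that computation might look like the following sketch. The grid ranges, the resolution, and the placeholder table values are assumptions made only so the example is self-contained; in an actual implementation each entry would be filled offline by solving [Math. 11] for the corresponding (T′, A) pair.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    T_grid = np.linspace(0.0, 40.0, 81)        # sampled filtered thicknesses T' (assumed range)
    A_grid = np.linspace(0.05, 1.0, 96)        # sampled accumulation values A (assumed range)

    # Placeholder surface standing in for the offline solutions of [Math. 11].
    table = np.clip(T_grid[:, None] * (1.0 - A_grid[None, :]), 0.0, None)

    lut = RegularGridInterpolator((T_grid, A_grid), table,
                                  bounds_error=False, fill_value=None)

    def bone_from_table(T_filtered, A):
        """Look up B' for whole images T_filtered and A of the same shape."""
        points = np.stack([T_filtered.ravel(), A.ravel()], axis=-1)
        return lut(points).reshape(T_filtered.shape)

Interpolating between precomputed entries in this way replaces the per-pixel non-linear solve with a table lookup, which is what makes frame-rate processing of moving images plausible.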


Third Embodiment

In the third embodiment, a configuration for separating a contrast agent and reducing noise by utilizing the continuity of thickness illustrated in FIG. 9 will be described.



FIG. 14 illustrates a block diagram of signal processing according to the third embodiment. In the third embodiment, in a manner similar to the signal processing in FIG. 13, the signal processing unit 133 generates the bone thickness image B and the soft-tissue thickness image S from the low-energy image L and the high-energy image H. Furthermore, the signal processing unit 133 generates an image that is the sum of the bone thickness image B and the soft-tissue thickness image S, i.e., the thickness image T. Then, the signal processing unit 133 generates a filtered thickness image T′ by applying, to the thickness image T, filtering for removing the contrast attributable to the contrast agent. Furthermore, the signal processing unit 133 generates a contrast-agent thickness image I′ from the filtered thickness image T′, the low-energy image L, and the high-energy image H.


Here, [Math. 12] below holds true when [Math. 4] is expanded, where μI(E) is the linear attenuation coefficient of the contrast agent at energy E, and I is the contrast-agent thickness.









L = \frac{\int N_L(E)\,\exp\{-\mu_B(E)B - \mu_S(E)S - \mu_I(E)I\}\,E\,dE}{\int N_L(E)\,E\,dE}    [Math. 12]

H = \frac{\int N_H(E)\,\exp\{-\mu_B(E)B - \mu_S(E)S - \mu_I(E)I\}\,E\,dE}{\int N_H(E)\,E\,dE}


Furthermore, [Math. 13] below holds true when [Math. 12] is transformed based on T=B+S+I, where T is the thickness in the thickness image.









L = \frac{\int N_L(E)\,\exp\{-\mu_B(E)B - \mu_S(E)S - \mu_I(E)(T - B - S)\}\,E\,dE}{\int N_L(E)\,E\,dE}    [Math. 13]

H = \frac{\int N_H(E)\,\exp\{-\mu_B(E)B - \mu_S(E)S - \mu_I(E)(T - B - S)\}\,E\,dE}{\int N_H(E)\,E\,dE}

By substituting the pixel value of the low-energy image L, the pixel value of the high-energy image H, and the thickness T in the thickness image at a given pixel into [Math. 13] and solving the non-linear system of equations, the thickness B in the bone thickness image and the thickness S in the soft-tissue thickness image at the given pixel can be calculated. However, if the calculation is performed directly in this state, the contrast-agent thickness I would always be 0, because the thickness image T is the sum of the bone thickness image B and the soft-tissue thickness image S, i.e., because T = B + S holds true.


As described with reference to FIG. 10, bones disappear in the thickness image T, whereas the contrast agent remains visible. That is, the thickness changes only at blood-vessel portions that include the contrast agent. However, in such blood-vessel portions, blood is replaced by the contrast agent; that is, regardless of whether or not a contrast agent is present, the true thickness image T′ should not change.


Furthermore, since blood vessels containing the contrast agent are generally thin, the changes in thickness brought about by the contrast agent can be removed and the true thickness image T′ can be obtained by filtering the thickness image T with a Gaussian filter or the like. That is, by substituting the thickness in the filtered thickness image T′ in place of that in the thickness image T into [Math. 13] and solving [Math. 13], the bone thickness image B′, the soft-tissue thickness image S′, and the contrast-agent thickness image I′ can be obtained.
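The following single-pixel sketch illustrates this idea. The spectra N_L(E) and N_H(E), the attenuation curves, the filter sigma, and the initial guess passed to the solver are all stand-in assumptions; the point is only that T is smoothed so that thin contrast-filled vessels vanish, and that [Math. 13] is then solved with T′ substituted for T.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.optimize import fsolve

    E = np.linspace(20.0, 120.0, 101)                 # energy grid [keV] (stand-in)
    N_L = np.exp(-((E - 50.0) / 20.0) ** 2)           # stand-in low-energy spectrum
    N_H = np.exp(-((E - 80.0) / 20.0) ** 2)           # stand-in high-energy spectrum
    mu_B, mu_S, mu_I = 1.0 / E, 0.3 / E, 3.0 / E      # stand-in attenuation curves

    def remove_vessel_contrast(T, sigma=3.0):
        """Smooth the thickness image so thin contrast-filled vessels vanish (T -> T')."""
        return gaussian_filter(T, sigma)

    def image_model(N, B, S, I):
        """Quadrature of the [Math. 13] integrals for spectrum N."""
        attenuation = np.exp(-mu_B * B - mu_S * S - mu_I * I)
        return np.sum(N * attenuation * E) / np.sum(N * E)

    def solve_pixel(L, H, T_filtered):
        """Solve [Math. 13] for (B, S) at one pixel; the contrast thickness is I = T' - B - S."""
        def residual(x):
            B, S = x
            I = T_filtered - B - S
            return [image_model(N_L, B, S, I) - L, image_model(N_H, B, S, I) - H]
        B, S = fsolve(residual, x0=[0.1 * T_filtered, 0.9 * T_filtered])
        return B, S, T_filtered - B - S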


The signal processing unit 133 generates a material decomposition image of a first material (bone thickness image B′) with reduced noise compared to a first material decomposition image (bone thickness image B), a material decomposition image of a second material (soft-tissue thickness image S′) with reduced noise compared to a second material decomposition image (soft-tissue thickness image S), and a third material decomposition image (contrast-agent thickness image I′) indicating a thickness of a third material (iodine-containing contrast agent) that differs from the first and second materials, based on a filtered thickness image T′ obtained by applying a spatial filter to a thickness image T and a plurality of radiation images (low-energy image L and high-energy image H).


Here, the results of the calculation in [Math. 13] can be stored in advance in a table in the internal memory of the signal processing unit 133, and the signal processing unit 133 can obtain the bone thickness image B′, the soft-tissue thickness image S′, and the contrast-agent thickness image I′ corresponding to the filtered thickness image T′ and the plurality of radiation images (low-energy image L and high-energy image H) by referring to the table when performing the calculation in [Math. 13]. Thus, the signal processing unit 133 can acquire a material decomposition image (B′, S′, and I′) for each material in a shorter amount of time compared to when a non-linear equation is analyzed.


In the signal processing in FIG. 14, noise is reduced in the filtered thickness image T′, but not in the low-energy image L and the high-energy image H. Thus, there is a problem that the bone thickness image B′, the soft-tissue thickness image S′, and the contrast-agent thickness image I′ include much noise, which leads to degradation of image quality.


In this case, the signal processing unit 133 can also perform processing according to the block diagram of signal processing illustrated in FIG. 15. The signal processing unit 133 generates a filtered thickness image t′ without the contrast agent by applying filtering for reducing noise to an image that is the sum of the bone thickness image B′ and the soft-tissue thickness image S′, i.e., to a thickness image t without the contrast agent. In addition, in a manner similar to that described for FIG. 7, the signal processing unit 133 generates the accumulation image A from the low-energy image L and the high-energy image H. Furthermore, the signal processing unit 133 generates a contrast-agent thickness image I″ with reduced noise from the filtered thickness image T′ (first thickness image), the filtered thickness image t′ without the contrast agent (second thickness image), and the accumulation image A.


Here, [Math. 14] below holds true when [Math. 10] is expanded, where μI(E) is the linear attenuation coefficient of the contrast agent at energy E, and I is the contrast-agent thickness.









A = \frac{\int N_A(E)\,\exp\{-\mu_B(E)B - \mu_S(E)S - \mu_I(E)I\}\,E\,dE}{\int N_A(E)\,E\,dE}    [Math. 14]

Furthermore, [Math. 15] below holds true when [Math. 14] is transformed based on T=B+S+I and t=B+S, where T is the thickness and t is the thickness without the contrast agent.












A = \frac{\int N_A(E)\,\exp\{-\mu_B(E)B - \mu_S(E)(t - B) - \mu_I(E)(T - t)\}\,E\,dE}{\int N_A(E)\,E\,dE}    [Math. 15]

By substituting the pixel value A of the accumulation image, the thickness in the thickness image T, and the thickness in the thickness image t without the contrast agent at a given pixel into [Math. 15] and solving the non-linear equation, the thickness in the bone thickness image B at the given pixel can be calculated. However, if the calculation is performed directly in this state, the contrast-agent thickness I would always be 0, because the thickness image T is the sum of the bone thickness image B and the soft-tissue thickness image S, i.e., because T = B + S holds true.


However, by substituting the filtered thickness image T′ in place of the thickness image T and the filtered thickness image t′ without the contrast agent in place of the thickness image t without the contrast agent into [Math. 15], a contrast-agent thickness image I″ with reduced noise can be calculated. As described with reference to FIG. 9, the continuity of the thickness image T is high, and thus signal components are not readily lost from the thickness image T even if filtering is performed to reduce noise. The same applies to the thickness image t without the contrast agent. In this manner, the contrast-agent thickness image I″ with reduced noise can be obtained using the accumulation image A, which contains little noise to begin with, the thickness image t′ without the contrast agent and with reduced noise, and the thickness image T′ with reduced noise.
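As an illustrative reading of this step, the per-pixel use of [Math. 15] might be sketched as follows. The spectrum and attenuation curves are the same kind of stand-in values as in the earlier sketches; in this reading the contrast-agent thickness entering the model is T′ − t′, and the bracketing solve recovers B (and hence S = t′ − B), assuming the measured A lies between the model values at B = 0 and B = t′.

    import numpy as np
    from scipy.optimize import brentq

    E = np.linspace(20.0, 120.0, 101)               # energy grid [keV] (stand-in)
    N_A = np.exp(-((E - 60.0) / 25.0) ** 2)         # stand-in accumulation spectrum
    mu_B, mu_S, mu_I = 1.0 / E, 0.3 / E, 3.0 / E    # stand-in attenuation curves

    def accumulation_model(B, t, T):
        """Forward model of [Math. 15] with S = t - B and I = T - t."""
        attenuation = np.exp(-mu_B * B - mu_S * (t - B) - mu_I * (T - t))
        return np.sum(N_A * attenuation * E) / np.sum(N_A * E)

    def decompose_pixel(A, T_filtered, t_filtered):
        """Recover (B, S, I'') at one pixel from A, T' and t'."""
        f = lambda B: accumulation_model(B, t_filtered, T_filtered) - A
        B = brentq(f, 0.0, t_filtered)              # assumes A is bracketed on [0, t']
        return B, t_filtered - B, T_filtered - t_filtered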


That is, the signal processing unit 133 generates a second thickness image (thickness image t without the contrast agent) without a third material (iodine-containing contrast agent) based on a material decomposition image of a first material with reduced noise (bone thickness image B′) and a material decomposition image of a second material with reduced noise (soft-tissue thickness image S′). Furthermore, the signal processing unit 133 generates a material decomposition image of a third material (contrast-agent thickness image I″) with reduced noise compared to a third material decomposition image (contrast-agent thickness image I′) based on a filtered first thickness image T′ obtained by applying a spatial filter to a thickness image T, a filtered second thickness image t′ obtained by applying a spatial filter to the second thickness image t, and an accumulation image A obtained based on addition of a plurality of radiation images.


Here, the results of the calculation in [Math. 15] can be stored in advance in a table in the internal memory of the signal processing unit 133, and the signal processing unit 133 can obtain the contrast-agent thickness image I″ with reduced noise corresponding to the filtered first thickness image T′, the accumulation image A, and the filtered second thickness image t′ by referring to the table when performing the calculation in [Math. 15]. Thus, the signal processing unit 133 can acquire a material decomposition image of the contrast agent with reduced noise in a shorter amount of time compared to when a non-linear equation is analyzed.


Note that, in the first to third embodiments above, an indirect-type X-ray sensor using a fluorescent material is used as the X-ray imaging apparatus 104. However, the present invention is not limited to such an embodiment. For example, a direct-type X-ray sensor using a direct conversion material such as CdTe may be used. That is, the X-ray sensor may be that of the indirect type or the direct type.


Furthermore, in the first to third embodiments, the tube voltage of the X-ray generation apparatus 101 is changed in the operation in FIG. 4, for example. However, the present invention is not limited to such an embodiment. The energy of the X-rays to which the X-ray imaging apparatus 104 is exposed may be changed by temporally switching filters of the X-ray generation apparatus 101. That is, there is no limitation whatsoever regarding the method for changing the energy of the X-rays to which the X-ray imaging apparatus 104 is exposed.


Furthermore, while images with different energies were obtained by changing the X-ray energy in the first to third embodiments, the present invention is not limited to such an embodiment. For example, a stacked configuration may be adopted in which a plurality of fluorescent materials 105 and a plurality of two-dimensional detectors 106 are stacked, whereby images with different energies are respectively obtained from the two-dimensional detectors on the front and back sides relative to the X-ray incidence direction.


Furthermore, in the first to third embodiments, energy subtraction processing is performed using the imaging control apparatus 103 of the X-ray image-capturing system. However, the present invention is not limited to such an embodiment. For example, images obtained by the imaging control apparatus 103 may be transferred to a different computer, where energy subtraction processing is performed. For example, a configuration may be adopted in which obtained images are transferred to a different personal computer via a medical PACS to be displayed after energy subtraction processing is performed. That is, the apparatus in which the correction processing described in the embodiments is performed need not be paired with an image-capturing apparatus (i.e., may be an image viewer).


According to the present embodiment, material decomposition images with reduced noise can be obtained. Furthermore, contrast agents and medical devices can also be separated while estimating bone thickness and soft-tissue thickness with reduced noise.


OTHER EMBODIMENTS

Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.


While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.


According to the present invention, material decomposition images with reduced noise can be obtained.

Claims
  • 1. An image processing apparatus comprising a generating unit configured to generate, using a plurality of radiation images corresponding to mutually different radiation energies, a first material decomposition image that indicates a thickness of a first material and a second material decomposition image that indicates a thickness of a second material that differs from the first material, wherein the generating unit generates, using the first material decomposition image and the second material decomposition image, a thickness image in which the thickness of the first material and the thickness of the second material are added together.
  • 2. The image processing apparatus according to claim 1, wherein the first material includes at least one of calcium, hydroxyapatite, or bone, and the second material includes at least one of water or fat.
  • 3. The image processing apparatus according to claim 1, further comprising an obtaining unit configured to obtain the plurality of radiation images by capturing images at a first energy and at a second energy that is higher than the first energy, wherein the average energy of a spectrum of radiation for obtaining a radiation image based on the first energy is an energy lower than the iodine K-edge.
  • 4. The image processing apparatus according to claim 3, wherein the obtaining unit obtains, as the plurality of radiation images, radiation images that are obtained by performing sampling and holding multiple times during exposure to one shot of radiation.
  • 5. The image processing apparatus according to claim 1, wherein the generating unit, based on a filtered thickness image obtained by applying a spatial filter to the thickness image, and an accumulation image obtained based on addition of the plurality of radiation images, generates a material decomposition image of the first material with reduced noise compared to the first material decomposition image, or a material decomposition image of the second material with reduced noise compared to the second material decomposition image.
  • 6. The image processing apparatus according to claim 1, wherein the generating unit, based on a filtered thickness image obtained by applying a spatial filter to the thickness image, and the plurality of radiation images, generates a material decomposition image of the first material with reduced noise compared to the first material decomposition image, a material decomposition image of the second material with reduced noise compared to the second material decomposition image, and a third material decomposition image that indicates a thickness of a third material that is different from the first and second materials.
  • 7. The image processing apparatus according to claim 6, wherein the generating unit generates a second thickness image without the third material based on the material decomposition image of the first material with reduced noise and the material decomposition image of the second material with reduced noise.
  • 8. The image processing apparatus according to claim 7, wherein the generating unit, based on a filtered first thickness image obtained by applying a spatial filter to the thickness image, a filtered second thickness image obtained by applying a spatial filter to the second thickness image, and an accumulation image obtained based on addition of the plurality of radiation images, generates a material decomposition image of the third material with reduced noise compared to the third material decomposition image.
  • 9. The image processing apparatus according to claim 6, wherein the third material includes an iodine-containing contrast agent.
  • 10. A radiation imaging system comprising: the image processing apparatus according to claim 1; and a radiation imaging apparatus that generates the plurality of radiation images by exposure to radiation.
  • 11. An image processing method for processing radiation images, comprising generating, using a plurality of radiation images corresponding to mutually different radiation energies, a thickness image in which a thickness of a first material and a thickness of a second material that differs from the first material are added together.
  • 12. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the method according to claim 11.
  • 13. An image processing apparatus comprising a generating unit configured to generate, using a plurality of radiation images corresponding to mutually different radiation energies, a thickness image in which a thickness of a first material and a thickness of a second material that differs from the first material are added together.
  • 14. The image processing apparatus according to claim 13, wherein the generating unit, using the thickness image and an accumulation image obtained by addition of the plurality of radiation images, generates a material decomposition image indicating a thickness of the first material and a material decomposition image indicating a thickness of the second material.
  • 15. The image processing apparatus according to claim 13, wherein the generating unit, using a new thickness image obtained by applying a spatial filter to the thickness image, and the plurality of radiation images, generates a material decomposition image indicating a thickness of the first material and a material decomposition image indicating a thickness of the second material.
  • 16. The image processing apparatus according to claim 13, wherein the generating unit, using a new thickness image obtained by applying a spatial filter to the thickness image, and the plurality of radiation images, generates a third material decomposition image that indicates a thickness of a third material that is different from the first and second materials.
Priority Claims (1)
Number: 2019-159726; Date: Sep. 2, 2019; Country: JP; Kind: national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a Continuation of International Patent Application No. PCT/JP2020/028194, filed Jul. 21, 2020, which claims the benefit of Japanese Patent Application No. 2019-159726, filed Sep. 2, 2019, both of which are hereby incorporated by reference herein in their entirety.

Continuations (1)
Parent: PCT/JP2020/028194, Jul. 2020 (US); Child: 17652006 (US)