This application is the U.S. National Phase application under 35 U.S.C. § 371 of International Application No. PCT/EP2017/062314, filed on May 22, 2017, which claims the benefit of European Application Serial No. 16305588.2, filed May 23, 2016. These applications are hereby incorporated by reference herein.
This invention relates to medical diagnostic ultrasonic imaging and, in particular, to ultrasound fusion imaging systems which correct for probe induced deformation.
Various medical diagnostic imaging systems have different characteristics which play important roles in diagnostic imaging. Magnetic resonance imaging (MRI) systems and computed tomography (CT) systems are known for producing highly resolved images of tissue and organs inside the body, but do not lend themselves well to high frame rate real time imaging. Ultrasound imaging, on the other hand, produces images with less resolution but at a high frame rate more suitable for real time imaging. To take advantage of these different strengths, image fusion systems have been developed which enable visualization of a patient's anatomy with both ultrasound and CT or both ultrasound and MRI. A common implementation is to view images from both modalities in co-registration, that is, overlaying (fusing) two images of the same anatomy together in a common display. So-called fusion imaging systems thereby take advantage of the strengths of both modalities. A CT or MRI image can be used for navigation, for instance, while the fused ultrasound image enables the movement of tissues and blood flow to be viewed in real time.
To create high quality ultrasound images it is necessary that there be good acoustic contact between an ultrasound probe and the body of the patient being scanned. Good acoustic contact is facilitated by applying coupling gel to the probe and the skin of the patient, and is maintained by the sonographer pressing the ultrasound probe firmly against the skin. When the probe is placed against soft tissue, as is the case during abdominal scanning, for example, the force of the probe against the body will depress the skin and body where the probe is in contact with the patient. This is not the case with MRI or CT imaging, where the magnetic field or radiation beams pass through the air and readily penetrate the body without physical contact of an instrument. Consequently, the soft tissue and organs seen in CT and MRI images are uncompressed, whereas the same tissue and organs can be significantly compressed by the probe during ultrasound imaging of the same anatomy. As a result, the physical differences between the uncompressed anatomy in a CT or MRI image and the compressed anatomy in an ultrasound image can make the co-registration and fusing of the two images difficult. Accordingly, it is desirable to be able to correct for this compression induced by the ultrasound probe so that the two images can be accurately fused into a single display of the anatomy under diagnosis.
Document US 2014/0193053 discloses a system and method for automatically fusing pre-operative images and intra-operative images. The pre-operative images (reference images) are transformed based on the intra-operative images.
Accordingly, it is an object of the present invention to provide a technique for recognizing and correcting for the soft tissue compression induced by ultrasound probe pressure when two images are to be co-registered and fused together.
It is a further object of the present invention to provide a simple and reliable technique for identifying probe pressure compression.
It is a further object of the present invention to modify reference CT or MRI images so that they can be more accurately co-registered with an ultrasound image.
The invention is defined by the claims.
In accordance with the principles of the present invention, a fusion imaging system is described in which real time ultrasound images are fused with reference images such as those produced by MRI or CT imaging. In an illustrated implementation, previously acquired CT or MRI or ultrasound images are acquired by the fusion imaging system for fusion with live ultrasound images. An ultrasound system is operated in conjunction with a tracking system such as an electromagnetic (EM) tracking system so that the ultrasound probe and images can be spatially tracked. A computerized image processor registers the probe position with a reference image of the anatomy being scanned by the probe and determines whether the probe appears to be inside the surface of the subject. For an external probe pressed against the exterior of the body the surface is the skin line. For an internal probe the surface is generally the outer surface of the organ being scanned. If the probe appears to be inside the surface, it is due to probe compression of the subject, and the reference image is modified to locate the skin line or organ surface in the reference image in front of the ultrasound probe. The modified reference image can then be readily co-registered and fused with an ultrasound image produced by the probe.
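Purely by way of illustration, the check of whether the tracked probe appears to lie beneath the body surface of the reference image can be sketched in a few lines of Python. The sketch assumes a binary body mask segmented from the reference volume and a probe tip position already expressed in reference-image millimetre coordinates; the function and variable names (probe_appears_inside, body_mask, probe_tip_mm) are hypothetical and not part of the described system.

```python
# Minimal sketch, not the patented implementation: decide whether the EM-tracked probe
# tip falls on a voxel that the reference image says is inside the body, which the
# system interprets as probe-induced compression of the subject.
import numpy as np

def probe_appears_inside(body_mask: np.ndarray,
                         voxel_size_mm: tuple,
                         probe_tip_mm: np.ndarray) -> bool:
    """True if the tracked probe tip lands on an 'inside the body' voxel."""
    idx = np.round(probe_tip_mm / np.asarray(voxel_size_mm)).astype(int)
    if np.any(idx < 0) or np.any(idx >= np.array(body_mask.shape)):
        return False                       # tip lies outside the reference volume entirely
    return bool(body_mask[tuple(idx)])     # True -> correct the reference image

# Toy usage: a 100^3 volume in which the half-space z < 50 is "body".
mask = np.zeros((100, 100, 100), dtype=bool)
mask[:, :, :50] = True
print(probe_appears_inside(mask, (1.0, 1.0, 1.0), np.array([50.0, 50.0, 45.0])))  # True
```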
In the drawings:
Referring first to
Suppose that the system of
Once the EM tracking system has been calibrated, the clinician begins scanning the patient and the computer 24 aligns the real time ultrasound images with the corresponding planes or volumes of the reference image dataset. In this example the clinician is examining the liver, so the registration software program executed by the computer tries to segment exactly the same region of interest, a liver, out of at least two different images. The segmentation program in this example begins by deforming an initial model such as a shape model that roughly represents the shape of the target object. When the target object is a liver, the initial shape might be a sphere or a mean liver shape. This shape is represented by an implicit function, i.e., a function Φ defined over the whole space, which is positive inside the shape and negative outside. The shape is then the zero level-set of such a function. The whole implicit function is deformed by a space transformation ψ. In particular, the zero level-set will change and so will the corresponding object. This transformation is decomposed into two transformations of different kinds that will correct the initial pose of the model:
ψ = ξ·G
where G is a global transformation that can translate, rotate or rescale the initial shape, and ξ is a local deformation that actually deforms the object so that it more precisely matches the object to be segmented in the image.
The goal of the method is then to find the best ξ and G, using the information in the image I. This is done by minimizing the following energy:
∫H(Φ·ξ·G(x))r(x)dx + λ∫∥ξ(x)−x∥²dx
In the first term, also called the data fidelity term, H is the Heaviside function (H(x)=1 if x>0 and 0 if x<0), which means that the integral is actually computed only inside the deformed object. r(x) is an image-based function that returns at each point a negative (or positive) value if the voxel is likely to be inside (or outside) the object of interest, so that the minimization favors deformations whose interior covers voxels that appear to belong to the object. For ambiguous regions, r(x) is set to zero. The second term is the so-called regularization term: the norm of the difference between ξ and the identity function. It constrains the amplitude of the deformation, because the object shape should not deviate too much from the prior shape. It is to be emphasized that this second term is independent of the position and orientation of the object, which was the purpose of decomposing the transformation. The minimization of this energy is performed by gradient descent on ξ and G simultaneously.
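For purposes of illustration only, this energy can be evaluated numerically on a discrete grid. The sketch below is not the patented optimization: it assumes a 2D grid, nearest-neighbour sampling, a similarity transform standing in for G, and a dense displacement field u representing the local deformation ξ; the names energy, nearest_sample and u are hypothetical, and the gradient descent itself is omitted.

```python
import numpy as np

def heaviside(x):
    # H(x) = 1 where x > 0 (inside the deformed shape), 0 elsewhere
    return (x > 0).astype(float)

def nearest_sample(field, pts):
    # nearest-neighbour lookup of a gridded field at continuous (row, col) coordinates
    ny, nx = field.shape[:2]
    iy = np.clip(np.round(pts[..., 0]).astype(int), 0, ny - 1)
    ix = np.clip(np.round(pts[..., 1]).astype(int), 0, nx - 1)
    return field[iy, ix]

def energy(phi, r, u, scale, rotation, translation, lam):
    """Discrete version of:  sum_x H(Phi(xi(G(x)))) r(x)  +  lam * sum_x ||xi(x) - x||^2."""
    ny, nx = phi.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x = np.stack([ys, xs], axis=-1).astype(float)

    # global transformation G: isotropic scaling, rotation and translation of the grid
    c, s = np.cos(rotation), np.sin(rotation)
    R = np.array([[c, -s], [s, c]])
    gx = scale * x @ R.T + translation

    # local deformation xi(y) = y + u(y), u being a dense displacement field on the grid
    xig = gx + nearest_sample(u, gx)

    data = np.sum(heaviside(nearest_sample(phi, xig)) * r)   # data-fidelity term
    reg = lam * np.sum(np.sum(u ** 2, axis=-1))              # ||xi(x) - x||^2 = ||u(x)||^2
    return data + reg

# Toy check: a disc-shaped prior, zero local displacement, identity-like global transform.
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
phi0 = 15.0 - np.hypot(yy - 32, xx - 32)                  # > 0 inside a radius-15 disc
r0 = np.where(np.hypot(yy - 32, xx - 32) < 14, -1.0, 1.0) # negative where likely inside
print(energy(phi0, r0, np.zeros((64, 64, 2)), 1.0, 0.0, np.zeros(2), lam=0.1))
```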
In a simple example of only two images, and if the two images were already perfectly registered, then the previously described equation can easily be extended by adding another data fidelity term:
∫H(Φ·ξ·G(x))r1(x)dx + ∫H(Φ·ξ·G(x))r2(x)dx + λ∫∥ξ(x)−x∥²dx
However, a registered acquisition can generally only be expected if both images were acquired simultaneously or shortly after one another; it is very unlikely that the images would already be registered if they were acquired at different times. Hence, this possibility is taken into account with another transformation. In general, this further transformation might be non-rigid and of any type. However, if it can be assumed that the same object is being sought in both images, this transformation (denoted G12) can be rigid, i.e., it allows a global change of position and orientation but not of size. The transformation G12 could also be set to any affine transform to take into account volume changes, without loss of computational efficiency. The energy then becomes:
∫H(Φ·ξ·G(x))r1(x)dx + ∫H(Φ·ξ·G(x))r2·G12(x)dx + λ∫∥ξ(x)−x∥²dx
Basically, this equation corrects the image information in the second term by the transformation G12, i.e., the second image is resampled through G12 before it contributes to the energy. If more than two images are to be registered, a further term is added for each image, each term comprising its own transformation.
The third term, which is optional, is constructed as a constraint on the local deformation. It restrains the deformation if it would cause the shape of the object to deviate too much from the initial geometric shape. Hence, when searching for a minimum, if the first and second terms lead to the same result, the solution that transforms the initial geometric shape the least is considered the best. The parameter λ may be set to determine the weight of this constraint.
The optimization is performed by gradient descent simultaneously on ξ, G, and G12. At the end, the segmentation obtained as the zero level-set of the function Φ·ξ·G is more precise because it uses the information of both images. Further, estimation of the transformation G12 allows the images to be registered to each other more precisely.
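Again purely as an illustration, the two-image energy with the linking transformation G12 can be written down numerically. Representing the implicit function, the image-based terms and the transformations as plain callables, and using a toy circle prior with synthetic r1 and r2 values, are assumptions made only to keep the sketch short; none of the names below come from the described system.

```python
# Compact, self-contained illustration of the two-image energy with the rigid transform
# G12 that links the coordinate frames of the two images (evaluation only, no descent).
import numpy as np

def two_image_energy(phi, r1, r2, xi, G, G12, grid, lam):
    """E = sum H(Phi(xi(G(x)))) [r1(x) + r2(G12(x))] + lam * sum ||xi(x) - x||^2."""
    H = lambda v: (v > 0).astype(float)
    shape_val = phi(xi(G(grid)))                  # implicit function of the deformed shape
    data1 = np.sum(H(shape_val) * r1(grid))       # data fidelity in image 1
    data2 = np.sum(H(shape_val) * r2(G12(grid)))  # image 2, resampled through G12
    reg = lam * np.sum((xi(grid) - grid) ** 2)    # keep xi close to the identity
    return data1 + data2 + reg

# Toy usage with a circle prior and two synthetic "images" (illustrative only):
ys, xs = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
grid = np.stack([ys, xs], axis=-1).astype(float)

phi = lambda p: 15.0 - np.linalg.norm(p - 32.0, axis=-1)      # > 0 inside a radius-15 disc
r1  = lambda p: np.where(np.linalg.norm(p - 32.0, axis=-1) < 14, -1.0, 1.0)
r2  = lambda p: np.where(np.linalg.norm(p - 30.0, axis=-1) < 14, -1.0, 1.0)
xi  = lambda p: p                                              # identity local deformation
G   = lambda p: p                                              # identity global transform
G12 = lambda p: p + np.array([2.0, 0.0])                       # rigid shift between frames

print(two_image_energy(phi, r1, r2, xi, G, G12, grid, lam=0.1))
```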
A preferred implementation of the present invention utilizes a system such as that illustrated in
How this is accomplished is illustrated by the CT reference image of
In particular, the probe 12 appears to be inside the skin surface 30 of the patient. An implementation of the present invention identifies the skin 30 by segmenting it in the reference image. This segmentation process is both simple and reliable because the skin 30 is the outer surface of the subject in the CT or MRI image. The side of the skin surface occupied by tissue and organs in the image is the inside of the body of the subject, and the other side, where the subject's clothing and air return no signal, is the outside of the body. Thus, when the location of the probe 12 is found to be inside the body in the reference image, the system concludes that this is due to compression of the body by the probe during ultrasound scanning.
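A minimal sketch of such a skin segmentation for a CT reference volume follows. The −300 HU air threshold, the use of scipy.ndimage, and the function names are illustrative assumptions rather than requirements of the described system.

```python
# Sketch: voxels denser than air form the body; the outer boundary of that region is
# taken as the skin surface of the reference image.
import numpy as np
from scipy import ndimage

def segment_body_mask(ct_hu: np.ndarray, air_threshold_hu: float = -300.0) -> np.ndarray:
    """Return a boolean mask of the body; its boundary voxels form the skin surface."""
    body = ct_hu > air_threshold_hu                     # everything denser than air
    labels, n = ndimage.label(body)                     # keep only the largest component
    if n > 1:
        sizes = ndimage.sum(body, labels, range(1, n + 1))
        body = labels == (int(np.argmax(sizes)) + 1)
    body = ndimage.binary_fill_holes(body)              # lungs, bowel gas etc. stay inside
    return body

def skin_surface_voxels(body_mask: np.ndarray) -> np.ndarray:
    """Skin = body voxels that touch at least one outside voxel."""
    eroded = ndimage.binary_erosion(body_mask)
    return body_mask & ~eroded
```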
The correction of the anomaly is then straightforward. The reference image is deformed so that it will more readily register with the ultrasound image, in which the outer tissue is compressed due to probe pressure. This is done by the computer redrawing the skin surface so that the surface 30′ no longer overlaps the probe but lies in front of it, as shown in
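One deliberately simplified way such a correction might be realized is sketched below: reference-image columns under the probe footprint are compressed along the probe axis so that the skin line begins at the tracked probe face. A practical system would presumably apply a smoother, physically motivated deformation; the axis-aligned resampling and all parameter names here are hypothetical.

```python
# Sketch only: locally compress the reference volume so the skin line sits at the probe
# face instead of overlapping the tracked probe.
import numpy as np

def push_skin_to_probe_face(volume, body_mask, probe_depth_vox, footprint_mask):
    """Compress columns under the probe footprint so the skin starts at probe_depth_vox.

    volume, body_mask : (nz, ny, nx) arrays; the z axis is assumed to point into the body.
    probe_depth_vox   : z index of the probe face in reference-image coordinates.
    footprint_mask    : (ny, nx) bool mask of columns covered by the probe footprint.
    """
    out = volume.astype(float, copy=True)
    nz = volume.shape[0]
    for iy, ix in zip(*np.nonzero(footprint_mask)):
        col_mask = body_mask[:, iy, ix]
        if not col_mask.any():
            continue
        skin = int(np.argmax(col_mask))                 # first "inside body" voxel
        if skin >= probe_depth_vox:
            continue                                    # skin already at/in front of probe
        # re-sample the tissue from the old skin line down to the bottom of the volume
        # into the shorter interval [probe_depth_vox, nz), i.e. compress it axially
        column = volume[:, iy, ix].astype(float)
        src = np.linspace(skin, nz - 1, nz - probe_depth_vox)
        out[probe_depth_vox:, iy, ix] = np.interp(src, np.arange(nz), column)
        out[:probe_depth_vox, iy, ix] = column[0]       # fill above with an "outside" value
    return out
```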
The concepts of the present invention can address the same problem caused by an internal probe such as an intracavity probe used to image the prostate. In that case, probe pressure can compress and distend the prostate in the ultrasound image compared to a CT image of the prostate in which no pressure is exerted against the organ. The surface of the prostate in the CT image can be modified as described above so that both the CT and ultrasound images of the organ are in good registration.
Number | Date | Country | Kind
---|---|---|---
16305588 | May 2016 | EP | regional
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2017/062314 | May 22, 2017 | WO |
Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2017/202795 | Nov. 30, 2017 | WO | A
Number | Name | Date | Kind |
---|---|---|---|
6567087 | Reid | May 2003 | B1 |
7793217 | Kim | Sep 2010 | B1 |
8165327 | Pape | Apr 2012 | B2 |
8165372 | Ishikawa et al. | Apr 2012 | B2 |
9471981 | Arai et al. | Oct 2016 | B2 |
20050119568 | Salcudean | Jun 2005 | A1 |
20070167784 | Shekhar et al. | Jul 2007 | A1 |
20080186378 | Shen et al. | Aug 2008 | A1 |
20110178389 | Kumar et al. | Jul 2011 | A1 |
20120063655 | Dean | Mar 2012 | A1 |
20120123263 | Osaka | May 2012 | A1 |
20140193053 | Kadoury et al. | Jul 2014 | A1 |
20140341449 | Tizhoosh | Nov 2014 | A1 |
20150209599 | Schlosser | Jul 2015 | A1 |
20160000519 | Dehghan Marvast | Jan 2016 | A1 |
20160007970 | Dufour et al. | Jan 2016 | A1 |
20160030008 | Gerard | Feb 2016 | A1 |
20180296185 | Cox | Oct 2018 | A1 |
Number | Date | Country |
---|---|---|
2010131269 | Jun 2010 | JP |
2011083636 | Apr 2011 | JP |
2012217769 | Nov 2012 | JP
2012141184 | Oct 2012 | WO |
2013141974 | Sep 2013 | WO |
2014132209 | Sep 2014 | WO
Entry |
---|
International Search Report and Written Opinion for International Application No. PCT/EP2017/062314, dated Sep. 25, 2017, 15 pages. |
Kadoury, et al., “A Model-Based Registration Approach of Preoperative MRI With 3D Ultrasound of the Liver for Interventional Guidance Procedures”, 2012 9th IEEE International Symposium on Biomedical Imaging, IEEE, May 2, 2012, pp. 952-955. |
Number | Date | Country | Kind
---|---|---|---
20190290241 | Sep 2019 | US | A1