The invention relates to registration of ultrasound scan data with volumetric scan data. More specifically, the invention relates to registration of laparoscopic ultrasound scan data with CT or MRI scan data.
Ultrasound information in combination with computed tomography (CT) may be advantageous in a number of clinical contexts. For example, a lesion in a patient's liver or kidney may be identified by CT or MRI, and further characterisation of the lesion may be performed using ultrasound (US) imaging. Further examples include percutaneous needle biopsy, ablation of a lesion (e.g. in the kidney or liver), endoscopic ultrasound (e.g. of the pancreas), and resection of an organ (e.g. a kidney or liver) to remove a lesion or tumour. In such procedures the ultrasound imaging may be performed in real time, often in combination with video imaging, and may therefore be used to guide a surgical procedure.
However, it can be difficult to use ultrasound for guiding a surgical procedure, because the operator may not know precisely which position is being imaged by the ultrasound probe, for example in relation to a lesion or tumour, and/or in relation to blood vessels (whose positions may already have been identified in a pre-surgical CT or MRI scan). In such contexts, a method that works rapidly (in near real time) to register the position of a US probe would be valuable.
The issue of registration of ultrasound scan data is particularly relevant in laparoscopic procedures, in which laparoscopic ultrasound (LUS) imaging is used. LUS probes tend to have a narrow field of view, which makes registration more challenging: only a relatively small section of an organ is imaged at a time (in contrast with transabdominal US), providing less information to constrain the registration problem.
One approach for registration of US with volumetric scan data is to track the position of the US probe, and to use this to generate a volumetric US scan, which provides more information for registration with a pre-surgical volumetric scan (e.g. from CT). Another approach is to provide a relatively accurate initial position estimate for the US probe (which may be referred to as initialisation).
A better method of registering ultrasound scan images, especially LUS scan images, to pre-existing volumetric scan data is desirable. Preferably, such a method should be simple to implement, and require minimal additional equipment.
According to a first aspect of the present disclosure, there is provided a computer-implemented method for identifying a pose of a probe by registering an ultrasound image with volumetric scan data, comprising: processing the volumetric scan data to determine a plurality of simulated ultrasound images corresponding with different poses of the probe; extracting a feature vector from each of the simulated ultrasound images and from the ultrasound image; comparing the feature vector from each simulated ultrasound image with the feature vector from the ultrasound image to determine a distance or similarity; selecting a candidate image from the simulated ultrasound images based on the distance or similarity; and identifying the pose of the probe from the candidate image.
Determining a distance or similarity between each candidate image and the ultrasound image may comprise calculating an L2 distance between the feature vector of the ultrasound image and the feature vector of the simulated image.
Determining a distance or similarity between each candidate image and the ultrasound image may further comprise weighting the L2 distance with a term to penalise features in the feature vector of the simulated image that are not found in the feature vector of the ultrasound image.
The method may comprise identifying a probe path comprising a sequence of ultrasound images corresponding with successive poses of the probe, wherein: candidate images are selected from the simulated ultrasound images that best match each of the sequence of ultrasound images, based on the distance or similarity; and the probe path is identified by determining, from the candidate images, which is most likely to match each ultrasound image in the sequence, using a transition probability between candidate images corresponding with successive ultrasound images.
The transition probability may be based on a kinematic model that determines a transition probability based on an expected variance in pose between successive ultrasound images of the probe path.
The probe path may be along an organ, and an expected variance in position orthogonal to a surface of the organ may be lower than an expected variance in position along the surface of the organ.
The expected variance in pose may be proportional to a time difference between successive ultrasound images of the probe path.
The transition probability between two candidate images Jki and Jki+1 may be defined based on:

$$P\left(J_{k_{i+1}} \mid J_{k_i}\right) \propto \exp\left(-\tfrac{1}{2}\,\delta_{k_{i+1},k_i}^{\top}\,\Sigma_{\mathrm{pose}}^{-1}\,\delta_{k_{i+1},k_i}\right)$$

where δki+1,ki is a vector containing differences in pose between the two candidate images and Σpose is a covariance matrix of the pose, defining the expected variance of the pose parameters with respect to time.
The method may comprise imposing a transition probability penalty when a probe path direction deviates from an initial direction by more than a threshold amount.
A Viterbi algorithm may be used to determine a most probable probe path.
Selecting candidate images that best match each of the sequence of ultrasound images may comprise selecting, for each of the sequence of ultrasound images, a predetermined number of candidate images with the lowest distance or highest similarity based on the respective feature vectors.
Extracting a feature vector may comprise segmenting each of the simulated ultrasound images and the ultrasound image.
The segmentation may identify the position of blood vessels in each image, and the feature vector may comprise a position of each blood vessel and, optionally, a size of each blood vessel.
The feature vector may be extracted using a convolutional neural network.
The convolutional neural network may have been trained to distinguish between ultrasound images.
The pose may comprise the position of the probe on the surface of an organ, and an orientation of the probe.
The pose may further comprise a depth or deformation parameter.
The ultrasound image may be obtained by scanning a liver, kidney or pancreas.
The method may comprise displaying the pose of the probe with a 3D representation of the volumetric scan data, wherein a 3D representation of the probe pose is registered to the 3D representation of the volumetric scan data.
According to a second aspect of the invention, there is provided a non-transitory machine-readable medium comprising instructions for configuring a processor to perform the method of the first aspect, including any of the optional features thereof.
According to a third aspect, there is provided an apparatus comprising a processor configured to perform the method according to the first aspect, including any of the optional features thereof.
The apparatus may further comprise an ultrasound probe, for acquisition of the ultrasound image or sequence of ultrasound images.
The ultrasound probe may be a laparoscopic ultrasound probe, or an endoscopic ultrasound probe.
The apparatus may further comprise a display, wherein the processor is configured to cause the display to display the pose of the probe with a 3D representation of the volumetric scan data, wherein a 3D representation of the probe pose is registered to the 3D representation of the volumetric scan data.
Embodiments will now be described, purely by way of example, with reference to the accompanying drawings, in which:
At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation).
At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessel and non-vessel regions.
At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value.
At step 34, a candidate image is selected as the best match, based on the distance or similarity.
At step 35, the pose of the probe is identified from the candidate image.
At step 41, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation).
At step 42, a feature vector is extracted from each of the simulated ultrasound images, and from each of the sequence of ultrasound images. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessel and non-vessel regions.
At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value.
At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity.
At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.
Example embodiments will now be described in more detail with reference to the accompanying drawings.
Given a set of N 2D ultrasound images {I1, …, IN} and corresponding acquisition time stamps {t1, …, tN}, embodiments of the invention can recover the sequence of US images {J1, …, JN}, simulated from pre-operative volumetric scan data (e.g. obtained by CT), that most closely represents the US acquisition in terms of the features defined in a feature vector. Conveniently, the feature vector may be based on vascular content. Content-based image retrieval may be used to obtain a set of K possible images {J1i, …, JKi} as candidates for each image Ii, based on a comparison of the feature vector of the image Ii with that of each of the simulated US images. In one embodiment, K = 1 and the candidate set is the single simulated US image with the feature vector most similar to that of the ultrasound image Ii. In other embodiments, a Viterbi algorithm may be applied with kinematic prior information in order to find the most likely sequence of simulated US images {J1, …, JN}, and hence the corresponding pose of the probe in each of the sequence of US images {I1, …, IN}.
The set of simulated US images J may be obtained by intersecting a segmented model of the volumetric scan data with 2D planes, bounded by an LUS field of view. The model of the volumetric scan data may be segmented to indicate "blood vessel" and "not blood vessel". A set of evenly distributed points PS may be generated over the surface of the organ of interest (e.g. the liver). At each of these points PS a virtual reference orientation may be created, orthogonal to the organ surface normal and with the imaging plane aligned with the sagittal plane. At each point PS, different combinations of rotations Rx, Ry and Rz may be applied to generate simulated US images corresponding with rotated projections parameterised by $R = [\vec{x}, \vec{y}, \vec{z}]$. In addition, at each point PS, a number of translations d may be applied along the organ surface normal, simulating the case in which the probe compresses the tissue of the organ and images deeper structures. For each combination of PS, R and d, a binary image containing vessel sections may be generated.
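Purely by way of illustration, the pose enumeration described above might be sketched as follows. This is a minimal Python sketch, not a definitive implementation: the function and variable names are assumptions, and the angle and depth grids are the values used in the experiments described later.

```python
import itertools
import numpy as np

def enumerate_poses(surface_points, rx_values, ry_values, rz_values, depths):
    """One candidate pose per combination of surface point P_S, rotation
    (Rx, Ry, Rz) about the virtual reference orientation, and translation
    d along the organ surface normal."""
    poses = []
    for p in surface_points:
        for rx, ry, rz in itertools.product(rx_values, ry_values, rz_values):
            for d in depths:
                poses.append({"P_S": p, "R": (rx, ry, rz), "d": d})
    return poses

# Angle/depth grids matching the intervals used in the experiments below:
rx_values = rz_values = np.arange(-40, 41, 10)   # degrees
ry_values = np.arange(-90, 91, 10)               # degrees
depths = np.arange(0, 21, 5)                     # mm along the surface normal

# Dummy single surface point, for illustration only:
poses = enumerate_poses(np.zeros((1, 3)), rx_values, ry_values, rz_values, depths)
```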
Other approaches may be used to produce a feature vector. For example, principal component analysis may be used to compress the simulated images to produce a feature vector, or the first n layers of a convolutional neural network that has been trained to discriminate between different ultrasound images may be used to produce a feature vector.
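As a sketch of the principal component analysis option only (the component count is an arbitrary assumption, and the images are assumed to share one shape):

```python
import numpy as np

def pca_features(images, n_components=32):
    """Project flattened simulated images onto their top principal
    components to obtain compact feature vectors (an alternative to the
    vessel-triplet features described above)."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    X -= X.mean(axis=0)
    # SVD of the centred data matrix: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```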
In order to compare an ultrasound image with each simulated ultrasound image, a corresponding feature vector must be extracted from the ultrasound image. For embodiments where the feature vector encodes the position and area of vessels intersecting the imaging plane, the ultrasound image must be segmented to identify the vessels, to produce a feature vector that can be compared with the feature vectors obtained from each simulated ultrasound image. The ultrasound image may be automatically segmented, for example using a convolutional neural network (e.g. as described in reference 10).
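For the vessel-based feature vector, one plausible way to obtain (position, area) triplets from a binary segmentation is connected-component analysis, sketched below; the minimum-area threshold is an assumption, not something specified above.

```python
import numpy as np
from scipy import ndimage

def vessel_triplets(mask, min_area=5):
    """Turn a binary vessel segmentation into a feature vector of
    (centroid_x, centroid_y, area) triplets, one per vessel section."""
    labels, n = ndimage.label(mask)
    triplets = []
    for i in range(1, n + 1):
        region = labels == i
        area = int(region.sum())
        if area < min_area:
            continue  # drop tiny regions, likely segmentation noise
        cy, cx = ndimage.center_of_mass(region)
        triplets.append((cx, cy, float(area)))
    return triplets
```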
Feasible candidate poses for an input ultrasound image I may be obtained by comparing its feature vector fI to all the pre-computed vectors f obtained from the volumetric scan data, for example by calculating a weighted L2 distance:

$$D\left(f^S, f^L\right) = \frac{\sum_{j=1}^{M_L} A\left(f_j^L\right)}{\sum_{i=1}^{M_S} A\left(m\left(f_i^S, f^L\right)\right)} \sum_{i=1}^{M_S} \left\| f_i^S - m\left(f_i^S, f^L\right) \right\|_2 \qquad (1)$$

where fS is the feature vector with the smaller number of vessel sections MS, and fL is the feature vector with the larger number of vessel sections ML. In equation (1), the function m(fiS, fL) returns the feature triplet in fL with the lumen centroid closest to that of triplet fiS, and the function A(·) returns the area value from a triplet. The area ratio is used to penalise the exclusion of triplets from the longer vector fL: the total area of all vessels in fL is divided by the sum of the areas of those that were included in the matching. The larger the excluded areas, the larger D becomes, and therefore the less similar the feature vectors.
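The following sketch implements the distance of equation (1) as described above; it is a minimal interpretation, assuming triplets of the form (centroid_x, centroid_y, area) and an L2 distance over whole triplets.

```python
import numpy as np

def distance_D(f_S, f_L):
    """Weighted L2 distance of equation (1): each triplet of the shorter
    vector f_S is matched to the triplet of the longer vector f_L with
    the closest lumen centroid; the summed L2 distances are weighted by
    an area ratio penalising vessel sections of f_L left unmatched."""
    f_S, f_L = np.asarray(f_S, float), np.asarray(f_L, float)
    matched, dist_sum = [], 0.0
    for t in f_S:
        # m(f_i^S, f^L): index of the triplet in f_L with closest centroid
        j = int(np.argmin(np.linalg.norm(f_L[:, :2] - t[:2], axis=1)))
        matched.append(j)
        dist_sum += float(np.linalg.norm(t - f_L[j]))
    total_area = f_L[:, 2].sum()
    matched_area = f_L[np.unique(matched), 2].sum()
    return dist_sum * total_area / matched_area
```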
To perform an efficient search, it is possible (but not essential) to search only feature vectors that have a similar number of triplets (corresponding with vessel sections) to the input fI. Feature vectors may be grouped in lookup tables FM according to their size M. The search for the best candidates f* may be expressed as in equation (2):

$$f^* = \operatorname*{arg\,min}_{\substack{M_I - r \,\le\, M \,\le\, M_I + r \\ f \in F_M}} \; \frac{D\left(f_I, f\right)}{\min\left(M_I, M\right)} \qquad (2)$$

Here, the distance D is computed between fI and members of the lookup tables of size MI − r to MI + r, where r is the allowable limit on the difference in feature vector length. The results are normalised by the minimum number of sections used in each comparison, and the K candidate f* vectors with the lowest normalised distance are picked. These vectors correspond to a set of CT-derived simulated images {J1i, …, JKi} with corresponding probe poses.
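A sketch of the lookup-table search of equation (2), reusing the distance_D sketch above; the table layout (a dict keyed by feature-vector length M, with (feature vector, pose) pairs as values) is an assumption.

```python
def retrieve_candidates(f_I, tables, r, K):
    """Return the K candidates with the lowest normalised distance,
    searching only tables whose size M is within r of the query size."""
    M_I = len(f_I)
    scored = []
    for M in range(M_I - r, M_I + r + 1):
        for f, pose in tables.get(M, []):
            # distance_D expects the shorter feature vector first
            d = distance_D(f_I, f) if M_I <= M else distance_D(f, f_I)
            scored.append((d / min(M_I, M), pose))
    scored.sort(key=lambda s: s[0])
    return scored[:K]
```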
Once a set of K possible matches {J1i, …, JKi} has been obtained for each image Ii, a transition probability may be used to determine the set of simulated images from J that matches the set of images {I1, …, IN} acquired by sweeping the probe over the surface of the organ. Under these conditions, each successive acquired image will correspond with a successive pose along the path swept by the probe as it moves over the surface of the organ. This imposes a kinematic constraint on the set of images selected from J to match the acquired images {I1, …, IN}, because solutions that require very high acceleration and/or velocity are very unlikely to be correct.
This can be formulated as a hidden Markov model, in which the transition probability between candidates Jki and Jki+1 may be modelled as a zero-mean Gaussian of the pose difference:

$$P\left(J_{k_{i+1}} \mid J_{k_i}\right) \propto \exp\left(-\tfrac{1}{2}\,\delta_{k_{i+1},k_i}^{\top}\,\Sigma_{\mathrm{pose}}^{-1}\,\delta_{k_{i+1},k_i}\right)$$

where δki+1,ki is a vector containing the differences in rotation and translation between the two candidates. The covariance Σpose may, for example, be a diagonal matrix with translation variances σx², σy², σz² and rotation variance σθ², each scaled by the time difference between the two images.
Other expressions may be used to model the transition probability—it is not essential to assume that the probability distribution is Gaussian, for example.
The values for σx, σy and σz may be selected based on knowledge of the speed at which the probe is expected to move during the acquisition sweep. The speed of movement in the z direction, normal to the plane of the imaging scan, is likely to be greater than the speed of movement within the imaging plane as the probe is swept over the organ, so σz may be set larger than σx and σy.
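A minimal sketch of this Gaussian transition term, assuming a diagonal Σpose with per-axis standard deviations whose variances are scaled by the time difference (consistent with the proportionality to time described above):

```python
import numpy as np

def log_transition(delta, sigmas, dt):
    """Log of the Gaussian transition probability: delta holds the pose
    differences between two candidates (e.g. [dx, dy, dz, d_theta]), and
    each variance grows in proportion to the time difference dt."""
    var = (np.asarray(sigmas, float) ** 2) * dt   # diagonal of Sigma_pose
    delta = np.asarray(delta, float)
    return -0.5 * float(np.sum(delta ** 2 / var))
```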
One way to find the optimal sequence of candidates is to use the Viterbi algorithm to find the lowest-cost path. In some embodiments, each of the candidate simulated images corresponding with the vector f* of best matches may be assumed to be equally likely to match the current acquired image, and the node probabilities P(Ii|Jki) assumed to be 1 (leaving the kinematic transition probability to determine the best matching set of images). In other embodiments, the node probabilities P(Ii|Jki) may be weighted by the distance D (e.g. according to equation (1)), in addition to the kinematic prior.
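By way of example only, a generic log-domain Viterbi pass over the K candidates per image might look like the sketch below, with node and transition scores supplied by the caller (for instance from log_transition above):

```python
import numpy as np

def viterbi(log_node, log_trans):
    """Most probable candidate index sequence. log_node[i][k] is the log
    node probability of candidate k for image i; log_trans[i] is a KxK
    array of log transition probabilities from the candidates of image i
    to the candidates of image i+1."""
    N = len(log_node)
    score = np.asarray(log_node[0], dtype=float)
    back = []
    for i in range(1, N):
        trans = np.asarray(log_trans[i - 1])                  # shape (K, K)
        total = score[:, None] + trans + np.asarray(log_node[i])[None, :]
        back.append(np.argmax(total, axis=0))  # best predecessor per state
        score = np.max(total, axis=0)
    path = [int(np.argmax(score))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```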
During optimisation, a constraint may be implemented to reject candidate simulated image sets that do not fulfil specific kinematic conditions. For example, a sweep direction may be defined as the difference between the first two probe positions, Pk2 − Pk1, in the candidate simulated image set. The probability P(Ii|Jki) may be set to 0 (or reduced by a predetermined amount or ratio) if the angle between Pki+1 − Pki and the sweep direction is above 90 degrees (or some other predetermined threshold), as sketched below.
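The sweep-direction constraint might be sketched as follows; the 90° default is the example threshold given above (an angle above 90° is equivalent to a negative dot product):

```python
import numpy as np

def violates_sweep_direction(p_prev, p_next, sweep_dir, max_angle_deg=90.0):
    """True if the step p_next - p_prev deviates from the initial sweep
    direction by more than max_angle_deg; the corresponding candidate
    probability may then be set to zero (or reduced)."""
    step = np.asarray(p_next, float) - np.asarray(p_prev, float)
    denom = np.linalg.norm(step) * np.linalg.norm(sweep_dir)
    cos_angle = float(step @ np.asarray(sweep_dir, float)) / denom
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > max_angle_deg
```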
A method according to an embodiment was applied to ultrasound data acquired from three patients. Pre-operative models of the liver and vasculature were segmented (following a similar approach to reference 4), and respective databases of simulated images and feature vectors were generated using rotation angles in the intervals Rx = Rz = [−40°, 40°] and Ry = [−90°, 90°] with steps of 10°, and depth values in the interval d = [0, 20 mm] with steps of 5 mm. The spatial resolution between successive positions P was 3-4 mm. The probability P(Ii|Jki) = 1, and a hard constraint was implemented, setting the probability P(Ii|Jki) to zero in the event that the angle between Pki+1 − Pki and Pk2 − Pk1 is greater than 90 degrees.
Initially, the validity of this approach was tested by registering synthetic sweeps generated from a CT model to itself. For each of the three patients, three smooth trajectories were generated, each comprising 20 images with time stamps t = [1, …, 20] s. Retrieval with search limit r = 0 was used to find K = 200 candidates for each ultrasound image, and registrations were performed using variances σz = 1.5 mm, σx = σy = 0.2σz and σθ = 2°.
The mean number of plausible paths for each of the nine sweep registrations, as a function of the number of images, is shown in the accompanying drawings.
To demonstrate the utility of embodiments on real data, LUS scans acquired intra-operatively were retrospectively registered with CT scan data. The LUS probe was a BK Medical 8666-RF probe, operating at a frame rate of 40 Hz. From each patient, two sequences of contiguous images were selected, and segmented to identify vessels and non-vessels. Manual segmentation was used to demonstrate the methodology, but automatic segmentation may also be used (as already mentioned above).
A search was performed to find K = 1000 candidates, with r = 2. The translation variance values were doubled relative to those defined above, with σz = 3 mm, σx = σy = 0.2σz and σθ = 2°. For each sweep, LUS images were manually registered to the CT data to provide a ground truth trajectory. After obtaining a solution, the errors Et and Eθ were measured, and a Target Registration Error (TRE) was determined for a set of manually picked vessel bifurcations found along the path. Since these bifurcations may lie in intermediate images of the sequence that were not registered, a cubic polynomial fit was used to predict their position given the algorithm solution.
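The cubic polynomial prediction of positions between registered frames could be sketched as below (per-coordinate fits; the array shapes and function name are assumptions):

```python
import numpy as np

def interpolate_positions(times, positions, query_times):
    """Fit a cubic polynomial to each coordinate of the registered
    positions over time and evaluate it at intermediate time stamps."""
    positions = np.asarray(positions, float)      # shape (N, 3)
    coeffs = [np.polyfit(times, positions[:, c], 3)
              for c in range(positions.shape[1])]
    return np.stack([np.polyval(c, query_times) for c in coeffs], axis=1)
```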
Table 1 shows the results from the six sweeps.
The best trajectory registration results are found in the sweeps of patient 2, with translation errors of around 10 mm; a visual display of the result of sweep 2 from patient 2 is shown in the accompanying drawings. The lowest accuracies are obtained for patient 3, but even these errors do not exceed 20 mm, which is still usable given that the alignment is globally optimal (over the whole liver).
The number of images NC at which the errors converge varies greatly, which may be due to the variation in uniqueness of registered images that is specific to each patient dataset. The TRE results are in the range 3.7-25.3 mm and are therefore in reasonable agreement with the other errors.
The results show that embodiments can register smaller field of view images (e.g. from an LUS probe) to a larger volume (e.g. an organ larger than the field of view of the LUS probe) globally and without tracking information. Embodiments may provide for a reduction in manual interaction and less interruption to clinical work flow, since a tracking device is not required.
In the example described herein, it is implicit that the organ does not deform. In some embodiments, the set of simulated ultrasound images obtained from the volumetric scan may be parameterised to include deformation (e.g. in the y direction). In some embodiments the depth d parameter may represent deformation of the organ in a direction normal to the surface of the organ (rather than a simple translation without deformation). Higher accuracies may be achievable with parameterisation including deformation.
Although specific embodiments have been described, variations are possible within the scope of the invention. The scope of the invention should be determined with reference to the accompanying claims.
Number | Date | Country | Kind |
---|---|---|---|
1910756 | Jul 2019 | GB | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/GB2020/051770 | 7/23/2020 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2021/019217 | 2/4/2021 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20140193053 | Kadoury | Jul 2014 | A1 |
20150279031 | Cavusoglu | Oct 2015 | A1 |
20160113630 | Chang | Apr 2016 | A1 |
20160113632 | Ribes | Apr 2016 | A1 |
20170079623 | Kruecker | Mar 2017 | A1 |
20180270474 | Liu | Sep 2018 | A1 |
20200273184 | Dufour | Aug 2020 | A1 |
20210059762 | Ng | Mar 2021 | A1 |
Number | Date | Country |
---|---|---|
2017200519 | Nov 2017 | WO |
Entry |
---|
International Search Report and Written Opinion from corresponding PCT Appln. No. PCT/GB2020/051770, mailed Oct. 22, 2020. |
Ramalhinho, Joao et al., “Registration of Untracked 2D Laparoscopic Ultrasound Liver Images to CT Using Content-Based Retrieval and Kinematic Priors”, Oct. 8, 2019, 12th European Conference on Computer Vision, ECCV 2012, Springer Berlin Heidelberg, Berlin Germany, pp. 11-19. |
Smistad, Erik et al., “Vessel Detection in Ultrasound Images Using Deep Convolutional Neural Networks”, Sep. 27, 2016, Big Data Analytics in the Social and Ubiquitous Context: 5th International Workshop on Modeling Social Media, MSM 2014, 5th International Workshop on Mining Ubiquitous and Social Environments, Muse 2014 and First International Workshop on Machine, pp. 30-38. |
Wein, W. et al., “Automatic CT-Ultrasound Registration for Diagnostic Imaging and Image-Guided Intervention”, Medical Image Analysis, Oxford University Press, Oxford, GB, vol. 12, No. 5, Oct. 2008, pp. 577-585. |
Search Report under Section 17(5) of United Kingdom Application No. GB1910756.4, dated Dec. 20, 2019, 4 pages. |
Number | Date | Country | |
---|---|---|---|
20220249056 A1 | Aug 2022 | US |