This application claims priority to and all the benefits of European Patent Application No. 23214830, filed Dec. 7, 2023, the entire contents of which are hereby incorporated by reference.
The present disclosure provides a method for processing medical image data of a patient's body, a corresponding system and a computer program.
In various surgical procedures, two-dimensional (2D) medical images of a patient's body are of interest to the surgeon. For example, pre-operative or intra-operative X-ray images may be acquired using a medical image acquisition device such as a so-called C-arm.
It is generally desired to associate such 2D images and/or information derived therefrom with additional information.
For example, it may be desired to obtain a registration between the two-dimensional images and three-dimensional (3D) medical image data of the patient's body, for example magnetic resonance (MR) image data and/or computed tomography (CT) image data. Known solutions require a surgeon to perform such a registration by manually aligning the images relative to one another. Other solutions require digitally reconstructed radiographs (DRRs) to be determined and aligned relative to the two-dimensional images, which generally requires substantial processing resources.
As another example, in current solutions, 2D images are not associated with an orientation of the patient during image acquisition, so the surgeon may need to derive anatomical directions from said images on his or her own. Patient orientation information may also be helpful for subsequent computational image processing steps. For instance, if the images are to be used for surgical navigation, the patient orientation may be considered when generating navigation views of interest for the surgeon.
As a still further example, identities of anatomical elements represented by the images may be of interest for a surgeon and/or for subsequent image processing steps such as surgical planning and/or surgical navigation. Similar to the patient orientation, current solutions require surgeons to derive such identities of anatomical elements by themselves.
There is a need for a technique that solves one or more of the above or other problems. It may be particularly advantageous to associate positions of anatomical elements shown in 2D images and in 3D image data with one another. In certain examples, this may allow transferring anatomical information such as identities of anatomical elements and/or a patient orientation from one type of image to the other and/or determining a registration between the different types of images.
According to a first aspect, a method for processing medical image data of a patient's body is provided. The method is performed by at least one processor and comprises obtaining medical image data. The medical image data comprises at least two two-dimensional medical images, each depicting, at least, two or more common anatomical elements of the patient's body from a known viewing direction, the known viewing direction differing between the at least two two-dimensional medical images. The method further comprises determining, based on the at least two two-dimensional medical images comprised in the medical image data and based on the known viewing directions, a first spatial pattern indicative of first three-dimensional positions of the two or more common anatomical elements. The method comprises obtaining a second spatial pattern indicative of second three-dimensional positions of a plurality of anatomical elements of the patient's body, said plurality of anatomical elements comprising the two or more common anatomical elements. The method further comprises determining a mapping between the first spatial pattern and the second spatial pattern to associate at least one of the first three-dimensional positions with at least one of the second three-dimensional positions.
The method may be referred to as a computer-implemented method. The method may not comprise a surgical step. The at least two 2D images may comprise or be X-ray images. The two or more common anatomical elements are depicted in each of the at least two 2D images, so these anatomical elements are common among the 2D images, hence the name "common" anatomical elements. The two or more common anatomical elements may comprise a bone of the patient's body. Each of the two or more common anatomical elements may correspond to a different bone of the patient's body. The anatomical elements may correspond to vertebrae of the patient's spine. The known viewing direction may be a lateral (LAT) direction or an anterior-posterior (AP) direction. The known viewing direction may be (e.g., pre-)defined relative to the patient's body. The first spatial pattern may consist of the first 3D positions. The second spatial pattern may consist of the second 3D positions. The mapping may be determined using a pattern matching, a point matching and/or a curve matching algorithm. One exemplary algorithm that can be used for determining the mapping is the so-called iterative closest point (ICP) algorithm.
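Purely by way of a non-limiting illustration, a basic ICP-style matching of two such 3D point patterns may be sketched in Python as follows; the NumPy-based implementation, the function names and the fixed iteration count are exemplary assumptions rather than requirements of the method:

    import numpy as np

    def best_rigid_transform(src, dst):
        # Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
        # (Kabsch algorithm on centered point sets).
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, c_dst - R @ c_src

    def icp(first_pattern, second_pattern, iterations=50):
        # Iteratively match each first 3D position to its closest second 3D
        # position and re-estimate the rigid transform from the correspondences.
        first = np.asarray(first_pattern, dtype=float)
        second = np.asarray(second_pattern, dtype=float)
        moved = first.copy()
        for _ in range(iterations):
            dists = np.linalg.norm(moved[:, None, :] - second[None, :, :], axis=2)
            matches = dists.argmin(axis=1)          # closest second position per point
            R, t = best_rigid_transform(first, second[matches])
            moved = first @ R.T + t
        return R, t, matches

In such a sketch, the returned matches directly associate each first three-dimensional position with one of the second three-dimensional positions.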
The method may further comprise triggering display of a visualization. The visualization may comprise at least one of the two-dimensional medical images and information that is based on the mapping. The visualization may be output on a display, for example a display of a surgical navigation system. The information that is based on the mapping may comprise an identifier of one or more of the common anatomical elements, a patient orientation, and/or an indication (e.g., an overlay) of (e.g., an outline of) one or more of the common anatomical elements as represented by the 3D image data. The visualization may include a representation of a tracked surgical instrument. In this case, the visualization may be referred to as a navigation view.
For example, the second spatial pattern and/or one or more of the second three-dimensional positions are associated with predefined anatomical information. The method may comprise obtaining said predefined anatomical information (e.g., together with the second spatial pattern). The method may further comprise assigning, based on the determined mapping, at least a part of the predefined anatomical information to one or more of: (a) the associated at least one of the first three-dimensional positions, (b) at least one of the common anatomical elements, (c) one or more of the two-dimensional medical images, and (d) the medical image data.
The information that is based on the mapping may comprise an indication of the assigned part of the predefined anatomical information. The visualization may include an indication of the assigned part of the predefined anatomical information.
The predefined anatomical information may comprise at least one predefined identifier or label that is associated with one of the plurality of anatomical elements. In this case, assigning at least a part of the predefined anatomical information may comprise assigning the at least one predefined identifier or label. The identifier or label may be associated with the one of the plurality of anatomical elements by a user or automatically (e.g., by matching the one of the plurality of anatomical elements to an anatomical atlas comprising said identifier or label).
The predefined anatomical information may comprise a patient orientation. In this case, assigning at least a part of the predefined anatomical information may comprise assigning the patient orientation. The patient orientation may be associated with the 3D image data by a user or automatically (e.g., based on metadata comprised in the 3D image data such as a DICOM header).
The method may further comprise determining, for at least one of the common anatomical elements, the first three-dimensional position based on a segmentation of the at least one of the common anatomical elements in one or more of the at least two two-dimensional medical images. Determining the first spatial pattern may comprise determining the first 3D position(s). The method may comprise determining the segmentation of the at least one of the common anatomical elements.
The method may further comprise determining, for at least one of the common anatomical elements, a first outline or bounding box in a first one of the two-dimensional medical images and a second outline or bounding box in a second one of the two-dimensional medical images. The first three-dimensional position of the at least one of the common anatomical elements may be determined based on (i) the first outline or bounding box and (ii) the second outline or bounding box. The first three-dimensional position of the at least one of the common anatomical elements may be determined based on (i) a first projection of the first outline or bounding box (e.g., along the known viewing direction and/or towards a viewing point such as an X-ray origin) associated with the first one of the two-dimensional medical images and (ii) a second projection of the second outline or bounding box (e.g., along the known viewing direction and/or towards a viewing point such as an X-ray origin) associated with the second one of the two-dimensional medical images. The method may comprise determining the first projection and the second projection.
The first three-dimensional position of the at least one of the common anatomical elements may be determined based on a three-dimensional position of a virtual intersection volume of the first projection and the second projection. The method may comprise determining the virtual intersection volume. For example, a three-dimensional position of a center of the virtual intersection volume is used as the first three-dimensional position of the at least one of the common anatomical elements.
The method may comprise determining a 3D position of the center of the virtual intersection volume as the first 3D position.
The second spatial pattern may be determined based on reference image data of at least the plurality of anatomical elements. The method may comprise determining the second spatial pattern based on the reference image data. The reference image data may comprise or be 3D image data, e.g., MR image data and/or CT image data. For example, at least one or each of the second three-dimensional positions is determined based on a segmentation of one of the plurality of anatomical elements as represented by the reference image data. The method may comprise determining the segmentation of the one of the plurality of anatomical elements based on the reference image data to determine the at least one of the second 3D positions. The method may further comprise determining a registration between the medical image data and the reference image data based on the mapping.
The first spatial pattern may comprise a first virtual geometrical object defined by the first three-dimensional positions and the second spatial pattern may comprise a second virtual geometrical object defined by the second three-dimensional positions. Determining the mapping may comprise optimizing an alignment between the first virtual geometrical object and the second virtual geometrical object. Optimizing the alignment may comprise scaling, rotating and/or shifting one or both of the first virtual geometrical object and the second virtual geometrical object. In one example, the mapping is determined without deforming (e.g., bending) the respective virtual geometrical object.
The first virtual geometrical object may comprise a first curve modeling (e.g., fitting and/or extending through) the first three-dimensional positions and the second virtual geometrical object may comprise a second curve modeling (e.g., fitting and/or extending through) the second three-dimensional positions. Determining the mapping may comprise matching the first curve to the second curve or matching the second curve to the first curve.
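As a purely illustrative sketch of such curve modeling (the chord-length parameterization and the polynomial degree are assumptions, not requirements of the method), a curve fitting given 3D positions may be determined as follows:

    import numpy as np

    def fit_curve(positions, degree=3, samples=100):
        # Fit one polynomial per coordinate over a chord-length parameter in
        # [0, 1] and return densely sampled points on the resulting 3D curve.
        pts = np.asarray(positions, dtype=float)    # ordered, e.g., along the spine
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        t = np.concatenate(([0.0], np.cumsum(seg))) / seg.sum()
        degree = min(degree, len(pts) - 1)          # polyfit needs enough points
        coeffs = [np.polyfit(t, pts[:, k], degree) for k in range(3)]
        ts = np.linspace(0.0, 1.0, samples)
        return np.stack([np.polyval(c, ts) for c in coeffs], axis=1)

Matching the first curve to the second curve may then, for instance, amount to finding the scaling, rotation and/or shift that minimizes the mean closest-point distance between the two sampled curves.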
According to a second aspect, a system is provided. The system comprises at least one processor configured to carry out the method according to the first aspect. The at least one processor may be configured to: obtain medical image data comprising at least two two-dimensional medical images, each depicting, at least, two or more common anatomical elements of the patient's body from a known viewing direction, the known viewing direction differing between the at least two two-dimensional medical images; determine, based on the at least two two-dimensional medical images comprised in the medical image data and based on the known viewing directions, a first spatial pattern indicative of first three-dimensional positions of the two or more common anatomical elements; obtain a second spatial pattern indicative of second three-dimensional positions of a plurality of anatomical elements of the patient's body, said plurality of anatomical elements comprising the two or more common anatomical elements; and determine a mapping between the first spatial pattern and the second spatial pattern to associate at least one of the first three-dimensional positions with at least one of the second three-dimensional positions.
The system may further comprise a medical imaging apparatus (e.g., an MR scanner, a CT scanner or a C-arm) configured to acquire the medical image data and/or the reference image data. The system may comprise a feedback unit, such as a display, configured to output a (e.g., the) visualization.
According to a third aspect, a computer program is provided. The computer program stores instructions which, when executed by at least one processor (e.g., the processor of the system according to the second aspect), cause the at least one processor to carry out the method according to the first aspect. The computer program may store instructions which, when executed by the at least one processor, cause the at least one processor to: obtain medical image data comprising at least two two-dimensional medical images, each depicting, at least, two or more common anatomical elements of the patient's body from a known viewing direction, the known viewing direction differing between the at least two two-dimensional medical images; determine, based on the at least two two-dimensional medical images comprised in the medical image data and based on the known viewing directions, a first spatial pattern indicative of first three-dimensional positions of the two or more common anatomical elements; obtain a second spatial pattern indicative of second three-dimensional positions of a plurality of anatomical elements of the patient's body, said plurality of anatomical elements comprising the two or more common anatomical elements; and determine a mapping between the first spatial pattern and the second spatial pattern to associate at least one of the first three-dimensional positions with at least one of the second three-dimensional positions. The computer program may be carried by a carrier (e.g., a data stream) and/or stored on a non-transitory computer-readable storage medium.
Embodiments and examples in accordance with the present disclosure will now be described with reference to the figures.
Unless indicated otherwise, reference signs used in the following refer to the same or similar structural or functional features.
In step 202, medical image data is obtained. The medical image data may be obtained from the database 8 and/or from a medical image acquisition device such as the C-arm 10. The medical image data comprises at least two two-dimensional medical images of at least a part of the patient's body 12. Each of the 2D images is associated with a known viewing direction of the image acquisition device that was used when acquiring the respective image. These viewing directions may be defined in a spatial coordinate system of an operating room in which the medical imaging apparatus is located and/or relative to the patient's body 12. For example, an AP X-ray image and a LAT X-ray image may be comprised in the medical image data. The medical image data may comprise direction information (e.g., as part of an image's metadata) indicative of the known respective viewing direction. Each of the images depicts at least two anatomical elements of the patient's body. These at least two anatomical elements are thus represented by each of the at least two 2D images and may be referred to as common anatomical elements herein. In the following, the vertebrae 14-20 will be used as specific examples of such anatomical elements, although the present disclosure is not limited thereto (e.g., other bones or organs of the patient's body may be used as common anatomical elements).
In optional step 204, first 3D positions of one or more anatomical elements depicted in the 2D images are determined. For example, a first 3D position is determined for each common anatomical element based on the medical image data, in particular based on the 2D images (e.g., segmentations of the common anatomical elements in these images) and the known viewing directions thereof.
Step 204 may comprise one or more of optional sub-steps 206 to 212. In optional sub-step 206, a bounding box and/or an outline is determined for each of the common anatomical elements in each of the 2D images (e.g., based on a segmentation of the respective anatomical element).
In optional sub-step 208, for each of the bounding boxes and/or outlines, a respective projection in the viewing direction of the 2D image is determined. The projection may be a linear projection along a viewing axis (e.g., the axis 23 or 25) or a projection towards a particular 3D position (e.g., the 3D position of the viewing point 26). The known viewing direction may be indicative of or include the viewing axis and/or the viewing point (e.g., a point at which X-rays of the medical imaging device originate). The first 3D positions may be determined based on these projections.
In optional sub-step 210, a virtual intersection volume is determined between a first projection of a bounding box or contour of a common anatomical element in a first of the 2D images and a second projection of a bounding box or contour of the same common anatomical element in a second of the 2D images. Such a virtual intersection volume may be determined for each of the common anatomical elements, wherein each intersection volume is associated with exactly one common anatomical element.
The first 3D positions may be determined based on the virtual intersection volumes.
In optional sub-step 212, centers of the virtual intersection volumes are determined. The 3D positions of these centers may then be used as the first 3D positions of the common anatomical elements.
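A purely illustrative sketch of sub-steps 208 to 212 is given below for an idealized parallel-projection geometry in which the AP image plane spans the (x, z) plane and the LAT image plane spans the (y, z) plane of a common coordinate system; with a cone-beam geometry (projections towards the viewing point 26), pyramidal frusta would have to be intersected instead. The coordinate conventions and names are assumptions:

    import numpy as np

    def intersection_center(ap_box, lat_box):
        # ap_box / lat_box: (min_u, min_v, max_u, max_v) of a bounding box in
        # image coordinates. Back-projecting the AP box yields an infinite slab
        # constraining x and z; the LAT box constrains y and z. Their
        # intersection is an axis-aligned cuboid, whose center serves as the
        # first 3D position of the respective common anatomical element.
        ax0, az0, ax1, az1 = ap_box
        ly0, lz0, ly1, lz1 = lat_box
        z0, z1 = max(az0, lz0), min(az1, lz1)       # overlap of the z-extents
        if z0 >= z1:
            return None                             # projections do not intersect
        return np.array([(ax0 + ax1) / 2.0, (ly0 + ly1) / 2.0, (z0 + z1) / 2.0])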
In step 214, a first spatial pattern is determined based on the at least two two-dimensional medical images comprised in the medical image data and based on the known viewing directions. The first spatial pattern is indicative of first three-dimensional positions of the two or more common anatomical elements. The first spatial pattern may consist of the first 3D positions determined for the common anatomical elements.
In step 216, a second spatial pattern is obtained. The second spatial pattern is indicative of second 3D positions of a plurality of anatomical elements of the patient's body 12, said plurality of anatomical elements comprising the two or more common anatomical elements.
In case the second 3D positions are not known in advance, the method may comprise optional step 218 in which the second 3D positions are determined. Step 218 may comprise sub-step 220 in which reference image data of the patient's body 12 is obtained. The reference image data may comprise one or more 3D medical images of the patient's body, e.g. a CT scan and/or an MR scan. In optional sub-step 222, the anatomical elements (e.g., at least the common anatomical elements) are segmented in the reference image data. A position of a center of a so-segmented anatomical element may then be used as the second 3D position of said anatomical element.
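Purely as an illustration of sub-step 222 and the subsequent center determination, the second 3D positions may be derived from a labeled segmentation volume as sketched below (voxel label 0 is assumed to denote background, and the image origin and orientation are ignored for brevity; in practice, the volume's affine transform would additionally be applied):

    import numpy as np

    def segment_centers(label_volume, voxel_spacing=(1.0, 1.0, 1.0)):
        # Return {label: center position} for each segmented anatomical element
        # in a labeled 3D volume, scaling voxel indices by the voxel spacing.
        centers = {}
        for label in np.unique(label_volume):
            if label == 0:
                continue                             # skip background
            voxels = np.argwhere(label_volume == label)  # (i, j, k) indices
            centers[int(label)] = voxels.mean(axis=0) * np.asarray(voxel_spacing)
        return centers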
In step 224, a mapping is determined between the first and the second spatial pattern to associate the first 3D positions with the second 3D positions. The mapping may comprise a transformation (e.g., a translation and/or rotation and/or homogeneous scaling in all spatial directions) between the first and the second spatial pattern. The mapping may define a (e.g., coordinate) transformation 66 between the coordinate systems 40 and 41 that is to be used to match the first to the second spatial pattern or vice versa.
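If point correspondences are already available (e.g., from an ICP-style matching as sketched further above), a least-squares similarity transform comprising rotation, homogeneous scaling and translation may be estimated in closed form, for instance with Umeyama's method; the following sketch is again purely illustrative:

    import numpy as np

    def similarity_transform(first_positions, second_positions):
        # Least-squares scale s, rotation R and translation t such that
        # s * R @ p + t ~ q for corresponding points p, q (Umeyama, 1991).
        p = np.asarray(first_positions, dtype=float)
        q = np.asarray(second_positions, dtype=float)
        mp, mq = p.mean(axis=0), q.mean(axis=0)
        cov = (q - mq).T @ (p - mp) / len(p)
        U, S, Vt = np.linalg.svd(cov)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
        D = np.diag([1.0, 1.0, d])                  # guard against reflections
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) * len(p) / ((p - mp) ** 2).sum()
        t = mq - s * R @ mp
        return s, R, t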
With the mapping determined, the method may proceed with one or more of optional steps 226-234.
The second spatial pattern and/or one or more of the second 3D positions may be associated with predefined anatomical information. The predefined anatomical information may be indicative of an identifier of the common anatomical element(s) and/or anatomical directions of the patient's body 12 (e.g., in the coordinate system 41). In optional step 226, at least a part of said predefined anatomical information is assigned directly to the associated first 3D position(s) or assigned to the common anatomical element having the associated first 3D position. In this manner, the part of the predefined anatomical information can be transferred from the 3D space into the 2D image space. Put differently, the 2D images can be enriched with the predefined anatomical information based on the determined mapping.
In optional step 228, vertebra labels are determined for the vertebrae 14-18 as the common anatomical elements. For example, each of the vertebrae 14-20 in the reference image data may be associated with a predefined label (e.g., determined as part of the method or pre-set by a user). The label of a vertebra having a given second 3D position may then be assigned to the vertebra depicted in the 2D images having a first 3D position mapped to said given second 3D position. In other words, the labels of the anatomical elements of the 3D image data may be transferred onto the anatomical elements in the 2D images. For example, each of the bounding boxes in the 2D images may be assigned such a label.
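Continuing the purely illustrative sketches above (icp() and segment_centers() are the hypothetical helpers introduced earlier; reference_labels, voxel_spacing, first_positions and the label-to-name dictionary label_names are assumed inputs), such a label transfer may look as follows:

    import numpy as np

    # Second 3D positions per segmentation label in the reference image data.
    second = segment_centers(reference_labels, voxel_spacing)
    ids, targets = list(second), np.asarray(list(second.values()))
    # Map each first 3D position (from the 2D images) to its matched second position.
    _, _, matches = icp(first_positions, targets)
    # Transfer the predefined vertebra names, e.g., label_names = {1: "L1", 2: "L2", ...}.
    vertebra_labels = {i: label_names[ids[j]] for i, j in enumerate(matches)}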
In optional step 230, a patient orientation is determined. The patient orientation may be determined based on the assigned labels. In particular, the labels may be indicative of an anatomical direction or plane of the patient's body 12, so the assigned labels may enable determining said anatomical direction or plane. The patient orientation can then be determined based on the anatomical direction or plane. As another example, the patient orientation may be determined based on a predefined patient orientation that is associated with the second spatial pattern and/or the reference image data. For example, a DICOM header of the reference image data may indicate anatomical direction(s) and/or anatomical plane(s) of the patient relative to the coordinate system 41. These direction(s) and/or plane(s) may then be transferred into the coordinate system 40 based on the transformation 66 to enrich the 2D images with the patient orientation information.
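Purely as an illustration of reading such a predefined patient orientation from a DICOM header, the six direction cosines of the Image Orientation (Patient) attribute (tag (0020, 0037)) may be evaluated with pydicom as sketched below; the file name is a placeholder:

    import numpy as np
    import pydicom

    ds = pydicom.dcmread("reference_slice.dcm")      # placeholder path
    iop = np.array(ds.ImageOrientationPatient, dtype=float)
    row, col = iop[:3], iop[3:]                      # direction cosines of rows/columns
    normal = np.cross(row, col)                      # slice normal
    # DICOM uses the LPS convention (+x = patient left, +y = posterior,
    # +z = superior), so these vectors encode anatomical directions in the
    # coordinate system of the reference image data (e.g., the coordinate system 41).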
In optional step 232, a registration between the 2D images and the reference image data is determined. For example, the transformation 66 is used as the registration. The registration may be used for transforming positions from one of the coordinate systems 40, 41 into the other and vice versa.
In optional step 234, a visualization is triggered to be output. The visualization comprises one or more (e.g., at least two) of the 2D images, and further comprises information that is based on the mapping. The information that is based on the mapping may comprise the assigned anatomical information (e.g., label(s) and/or the patient orientation). The information that is based on the mapping may comprise an indication of (e.g., an outline of) a common anatomical element as represented by the reference image data. The information that is based on the mapping may be displayed as an overlay onto the 2D image(s).
With the technique disclosed herein, 2D fluoroscopy images can be enhanced with information such as vertebra names and the patient orientation. The so-enriched 2D images lay the foundation for various use cases, e.g., an automatic orientation alignment of navigated 2D and 3D information. This may also yield useful information for enhancing existing workflows and may reduce the time needed for a procedure, since anatomically relevant information can be shown directly on the display during navigation.
Various modifications of the technique disclosed herein are possible. For instance, instead or in addition to vertebrae, the common anatomical elements may comprise other bones or organs of the patient's body 12. Additional variations and advantages of the technique disclosed herein will be apparent to those skilled in the art.