The invention relates to the field of medical imaging and, in particular, to a method and apparatus for positioning markers in images of an anatomical structure.
Medical imaging is a useful tool for providing visual representations of anatomical structures (for example, organs) in images. There exist many different types of medical imaging techniques including computed tomography (CT), magnetic resonance (MR), ultrasound (US), X-ray, and the like. The images acquired from medical imaging are processed and analysed to make clinical findings on a subject and to determine whether medical intervention is necessary.
For many image processing and image analysis algorithms, ground-truth is required for the accurate evaluation of images and clinical validation and also for optimising, tuning, and analysing the algorithms. In the case of multiple images, it is often necessary to bring the images into correspondence (or alignment) for clinical analysis. For any kind of alignment, registration, fusion, matching, motion estimation or motion compensation framework, the most common way to enable ground-truth is to use markers to indicate the features (or landmarks) of the anatomical structures in the images. By using markers, the same anatomical position can be marked in two or more images so that the images can be directly compared. Also, the markers can be used to validate computed displacement vectors to identify target registration errors.
In order to accurately bring images into correspondence (or alignment) for clinical analysis, the markers need to be positioned in the images with high precision and this is difficult to achieve. Annotation tools allow for the positioning of markers, typically by selecting a position on a voxel or in-between voxels in the case of three-dimensional images. However, even where markers are positioned in-between voxels, it is a challenge for the user to combine the information from two different images in their mind in order to define corresponding markers with high accuracy. The complexity of this is further increased due to pathologies, different acquisition modalities or protocols, artefacts, or different noise levels.
Often, a feature of an anatomical structure that is of diameter equal to or less than the size of a voxel (for example, a filigree structure such as a vessel bifurcation or a lung vessel close to the pleura) is shown in a first image by a single bright voxel, but is shown in a second image by two less bright voxels (due to the partial volume effect, e.g. the feature may be spread over two voxels in the second image). A mismatch in a single spatial dimension can be relatively easily compensated by annotating a position in the voxel centre in the first image and a position shifted by half a voxel in the second image. However, a mismatch in more than one dimension is typically challenging to compensate and can be a time-consuming and error-prone task for the user. Thus, even with the existing tools to assist with the positioning of markers on features of anatomical structures in images, which allow markers to be placed in-between voxels, it is challenging (if not impossible) for a user to accurately mark the position of corresponding features in two or more images.
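The half-voxel compensation described above can be illustrated with a short one-dimensional sketch (illustrative only; the helper name `shift_linear` and the example signals are assumptions, not part of the invention): a feature narrower than a voxel appears as a single bright voxel on one sampling grid, but as two half-bright voxels on a grid offset by half a voxel, and a linear-interpolation shift of half a voxel brings the two into correspondence.

```python
import numpy as np

def shift_linear(signal, dx):
    """Shift a 1D signal by a (possibly fractional) number of voxels
    using linear interpolation; samples outside the grid are zero."""
    n = len(signal)
    coords = np.arange(n) - dx           # source position for each output voxel
    lo = np.floor(coords).astype(int)
    frac = coords - lo                   # fractional part of the source position
    padded = np.concatenate(([0.0], signal, [0.0]))
    lo = np.clip(lo, -1, n - 1) + 1      # index into the zero-padded array
    return (1 - frac) * padded[lo] + frac * padded[lo + 1]

# A feature narrower than a voxel: one bright voxel on the first grid...
first = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
# ...but split over two voxels on the second grid (partial volume effect).
second = np.array([0.0, 0.0, 0.5, 0.5, 0.0])

# Translating the first image by half a voxel reproduces the second.
print(shift_linear(first, 0.5))  # → [0.  0.  0.5 0.5 0. ]
```

A mismatch in more than one dimension requires the same idea per axis, which is where manual compensation becomes impractical.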
WO 2005/048844 discloses a method for estimating the positions of two features of an anatomical structure. Specifically, the positions of the anterior commissure (AC) and posterior commissure (PC) landmarks are estimated in an image based on pixel intensities. In the disclosed method, two sub-images are generated from the image as regions of interest (ROIs) around the estimated positions of the AC and PC landmarks respectively, and the sub-images are analysed to improve the estimated positions. However, while this method can be used to more accurately position markers on features in a single image, it is still not possible to ensure that the position of the marker on the same feature in another image is consistent so as to aid a user in clinical analysis.
There is thus a need for an improved method and apparatus for positioning markers in images of an anatomical structure.
As noted above, the limitation with existing approaches is that it is not possible to accurately mark the position of corresponding features in two or more images. It would thus be valuable to have a method and apparatus that can position markers in images of an anatomical structure in a manner that overcomes these existing problems.
Therefore, according to a first aspect of the invention, there is provided a method for positioning markers in images of an anatomical structure. The method comprises positioning a first marker over a feature of an anatomical structure in a first image and a second marker over the same feature of the anatomical structure in a second image, and translating the first image under the first marker to adjust the position of the first marker with respect to the feature of the anatomical structure in the first image to correspond to the position of the second marker with respect to the feature of the anatomical structure in the second image.
In some embodiments, translating the first image may comprise an interpolation of the first image. In some embodiments, translating the first image may comprise any one or more of translating the first image in a left-right direction, translating the first image in an anterior-posterior direction, and translating the first image in an inferior-superior direction. In some embodiments, the first image may be a two-dimensional image comprising a plurality of pixels and translating the first image may comprise translating the first image by part of a pixel or the first image may be a three-dimensional image comprising a plurality of voxels and translating the first image may comprise translating the first image by part of a voxel. In some embodiments, the first image may be translated continuously under the first marker.
In some embodiments, the first image may be translated under the first marker in a plurality of steps. In some embodiments, translating the first image under the first marker may comprise translating the first image under the first marker in the plurality of steps to acquire a plurality of translated first images, for each of the plurality of translated first images, comparing the position of the first marker with respect to the feature of the anatomical structure in the translated first image to the position of the second marker with respect to the feature of the anatomical structure in the second image, and selecting a translated first image from the plurality of translated first images for which the position of the first marker with respect to the feature of the anatomical structure most closely corresponds to the position of the second marker with respect to the feature of the anatomical structure in the second image.
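The stepwise translation and selection described above may be sketched as follows (a sketch only, assuming a two-dimensional image, bilinear interpolation, and a sum-of-squared-differences comparison; the helper names `shift2d` and `best_translation` are illustrative):

```python
import numpy as np

def shift2d(img, dy, dx):
    """Translate a 2D image by a fractional (sub-pixel) offset using
    bilinear interpolation, with zeros outside the original grid."""
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h) - dy, np.arange(w) - dx, indexing="ij")
    y0, x0 = np.floor(yy).astype(int), np.floor(xx).astype(int)
    fy, fx = yy - y0, xx - x0
    pad = np.pad(img, 1)                  # one-voxel zero border
    y0, x0 = np.clip(y0, -1, h - 1) + 1, np.clip(x0, -1, w - 1) + 1
    return ((1 - fy) * (1 - fx) * pad[y0, x0]
            + (1 - fy) * fx * pad[y0, x0 + 1]
            + fy * (1 - fx) * pad[y0 + 1, x0]
            + fy * fx * pad[y0 + 1, x0 + 1])

def best_translation(first, second, step=0.2, max_shift=1.0):
    """Translate `first` in sub-voxel steps and keep the offset whose
    sum of squared differences against `second` is smallest."""
    offsets = np.arange(-max_shift, max_shift + 1e-9, step)
    return min(((dy, dx) for dy in offsets for dx in offsets),
               key=lambda d: np.sum((shift2d(first, *d) - second) ** 2))

# Example: recover a known sub-voxel offset between two images.
first = np.zeros((7, 7))
first[3, 3] = 1.0
second = shift2d(first, 0.4, -0.2)
print(best_translation(first, second))    # ≈ (0.4, -0.2)
```

Each candidate offset plays the role of one "translated first image", and the selection step corresponds to choosing the translated image whose marker position most closely corresponds to that of the second image.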
In some embodiments, one or more of the positioning of the first marker and the translation of the first image under the first marker may be at least partially based on a received user input.
In some embodiments, the method may further comprise translating the second image under the second marker to adjust the position of the second marker with respect to the feature of the anatomical structure in the second image to correspond to the position of the first marker with respect to the feature of the anatomical structure in the first image.
In some embodiments, the method may further comprise rotating the first image under the first marker to alter the orientation of the anatomical structure in the first image to correspond to the orientation of the anatomical structure in the second image.
In some embodiments, each of the first image and the second image may comprise a plurality of views of the anatomical structure and the method disclosed herein may be performed for one or more of the plurality of views of the anatomical structure. In some embodiments, the plurality of views of the anatomical structure may comprise any one or more of an axial view of the anatomical structure, a coronal view of the anatomical structure, and a sagittal view of the anatomical structure.
According to a second aspect of the invention, there is provided a computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method or the methods described above.
According to a third aspect of the invention, there is provided an apparatus for positioning markers in images of an anatomical structure. The apparatus comprises a processor configured to position a first marker over a feature of an anatomical structure in a first image and a second marker over the same feature of the anatomical structure in a second image, and translate the first image under the first marker to adjust the position of the first marker with respect to the feature of the anatomical structure in the first image to correspond to the position of the second marker with respect to the feature of the anatomical structure in the second image.
In some embodiments, the processor may be configured to control a user interface to render the translated first image with the first marker and the second image with the second marker and/or control a memory to store the translated first image with the first marker and the second image with the second marker.
According to the aspects and embodiments described above, the limitations of existing techniques are addressed. In particular, according to the above-described aspects and embodiments, it is possible to accurately mark the position of corresponding features in two or more images. A user tasked with clinically analysing the images is no longer required to combine or overlay the information from the two or more images in their mind. In this way, the aspects and embodiments described above allow markers to be positioned over corresponding features in different images with high precision.
There is thus provided an improved method and apparatus for positioning markers in (or annotating) images of an anatomical structure, which overcomes the existing problems.
For a better understanding of the invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
As noted above, the invention provides an improved method and apparatus for positioning markers in (or annotating) images of an anatomical structure, which overcomes the existing problems.
The images of the anatomical structure can, for example, be medical images. Examples include, but are not limited to, computed tomography (CT) images, magnetic resonance (MR) images, ultrasound (US) images, X-ray images, fluoroscopy images, positron emission tomography (PET) images, single photon emission computed tomography (SPECT) images, nuclear medicine images, or any other medical images.
In some embodiments, the images of the anatomical structure may be two-dimensional images comprising a plurality of pixels. In some embodiments, the images of the anatomical structure may be a plurality of two-dimensional images, each comprising a plurality of pixels, where time is the third dimension (i.e. the images of the anatomical structure may be 2D+t images). In some embodiments, the images of the anatomical structure may be three-dimensional images comprising a plurality of voxels. In some embodiments, the images of the anatomical structure may be four-dimensional images comprising a plurality (for example, a series, such as a time series) of three-dimensional images, each three-dimensional image comprising a plurality of voxels. The anatomical structure in the images may be an organ such as a heart, a lung, an intestine, a kidney, a liver, or any other anatomical structure. The anatomical structure in the images can comprise one or more anatomical parts. For example, images of the heart can comprise a ventricle, an atrium, an aorta, and/or any other part of the heart.
Although examples have been provided for the type of images and for the anatomical structure (and the parts of the anatomical structure) in the images, it will be understood that the invention may also be used for positioning markers in any other type of images and the anatomical structure may be any other anatomical structure.
The apparatus 100 comprises a processor 102 that controls the operation of the apparatus 100 and that can implement the method described herein. The processor 102 can comprise one or more processors, processing units, multi-core processors or modules that are configured or programmed to control the apparatus 100 in the manner described herein. In particular implementations, the processor 102 can comprise a plurality of software and/or hardware modules that are each configured to perform, or are for performing, individual or multiple steps of the method according to embodiments of the invention.
Briefly, the processor 102 is configured to position a first marker over a feature of an anatomical structure in a first image and a second marker over the same feature of the anatomical structure in a second image and translate the first image under the first marker to adjust the position of the first marker with respect to the feature of the anatomical structure in the first image to correspond to the position of the second marker with respect to the feature of the anatomical structure in the second image.
In some embodiments, the apparatus 100 may also comprise at least one user interface 104. Alternatively or in addition, at least one user interface 104 may be external to (i.e. separate to or remote from) the apparatus 100. For example, at least one user interface 104 may be part of another device.
A user interface 104 may be for use in providing a user of the apparatus 100 (for example, a healthcare provider, a healthcare specialist, a care giver, a subject, or any other user) with information resulting from the method according to the invention. The processor 102 may be configured to control one or more user interfaces 104 to provide information resulting from the method according to the invention. For example, the processor 102 may be configured to control one or more user interfaces 104 to render (or output or display) the translated first image with the first marker and the second image with the second marker. Alternatively or in addition, a user interface 104 may be configured to receive a user input. In other words, a user interface 104 may allow a user of the apparatus 100 to manually enter instructions, data, or information. The processor 102 may be configured to acquire the user input from one or more user interfaces 104.
A user interface 104 may be any user interface that enables rendering (or output or display) of information, data or signals to a user of the apparatus 100. Alternatively or in addition, a user interface 104 may be any user interface that enables a user of the apparatus 100 to provide a user input, interact with and/or control the apparatus 100. For example, the user interface 104 may comprise one or more switches, one or more buttons, a keypad, a keyboard, a touch screen or an application (for example, on a tablet or smartphone), a display screen, a graphical user interface (GUI) or other visual rendering component, one or more speakers, one or more microphones or any other audio component, one or more lights, a component for providing tactile feedback (e.g. a vibration function), or any other user interface, or combination of user interfaces.
In some embodiments, the apparatus 100 may also comprise a memory 106 configured to store program code that can be executed by the processor 102 to perform the method described herein. Alternatively or in addition, one or more memories 106 may be external to (i.e. separate to or remote from) the apparatus 100. For example, one or more memories 106 may be part of another device. A memory 106 can be used to store images, information, data, signals and measurements acquired or made by the processor 102 of the apparatus 100 or from any interfaces, memories or devices that are external to the apparatus 100. For example, a memory 106 may be used to store the translated first image with the first marker and the second image with the second marker. The processor 102 may be configured to control a memory 106 to store the translated first image with the first marker and the second image with the second marker.
In some embodiments, the apparatus 100 may also comprise a communications interface (or circuitry) 108 for enabling the apparatus 100 to communicate with any interfaces, memories and devices that are internal or external to the apparatus 100. The communications interface 108 may communicate with any interfaces, memories and devices wirelessly or via a wired connection. For example, in an embodiment where one or more user interfaces 104 are external to the apparatus 100, the communications interface 108 may communicate with the one or more external user interfaces 104 wirelessly or via a wired connection. Similarly, in an embodiment where one or more memories 106 are external to the apparatus 100, the communications interface 108 may communicate with the one or more external memories 106 wirelessly or via a wired connection.
It will be appreciated that
Although not illustrated in
In some embodiments, each of the first image and the second image may comprise a plurality of views of the anatomical structure. For example, the plurality of views of the anatomical structure can comprise any one or more of an axial view of the anatomical structure, a coronal view of the anatomical structure, a sagittal view of the anatomical structure, or any other view of the anatomical structure, or any combination of views of the anatomical structure. In some embodiments, the first image and the second image of the anatomical structure may be displayed on the user interface 104 in an orthogonal view manner. In embodiments where the first image and the second image comprise a plurality of views of the anatomical structure, the first image and the second image may be displayed on the user interface 104 in the plurality of views (for example, in six orthogonal views comprising the axial, coronal and sagittal views of each image).
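For a three-dimensional image, the axial, coronal and sagittal views mentioned above are simply orthogonal slices through the volume. A minimal sketch, assuming a (z, y, x) axis ordering for the volume (this ordering and the function name `orthogonal_views` are assumptions for the example, not a requirement of the invention):

```python
import numpy as np

# Hypothetical 3D volume indexed as (z, y, x), i.e. (slice, row, column).
volume = np.random.rand(40, 64, 64)

def orthogonal_views(vol, z, y, x):
    """Return the three orthogonal slices through voxel (z, y, x),
    assuming a (z, y, x) = (inferior-superior, anterior-posterior,
    left-right) axis convention."""
    axial = vol[z, :, :]     # fixed z: the transverse plane
    coronal = vol[:, y, :]   # fixed y: the frontal plane
    sagittal = vol[:, :, x]  # fixed x: the lateral plane
    return axial, coronal, sagittal

axial, coronal, sagittal = orthogonal_views(volume, 20, 32, 32)
print(axial.shape, coronal.shape, sagittal.shape)  # (64, 64) (40, 64) (40, 64)
```

Displaying the three slices of each of the two images side by side gives the six-view arrangement described above.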
With reference to
In some embodiments, the positioning of the first marker and the second marker (at block 202 of
At block 204 of
In some embodiments, translating the first image may comprise re-sampling the first image to a new set of coordinates. For example, the values of each image component in the first image may be re-sampled to a new set of image component coordinates (where the image components are pixels in the case of the first image being a two-dimensional image or voxels in the case of the first image being a three-dimensional image).
In some embodiments, translating the first image can comprise an interpolation of the first image (e.g. the first image may be re-sampled by interpolating the values of the image components of the first image onto a new set of image component coordinates). The interpolation of the first image to translate the first image under the first marker can comprise any suitable interpolation technique. Examples of an interpolation technique that may be used include, but are not limited to, a linear interpolation, a polynomial interpolation (such as a cubic interpolation, a B-spline-based interpolation, or any other polynomial interpolation), a trilinear interpolation, or any other interpolation technique. In the embodiments in which the first image is translated under the first marker in a plurality of steps, the interpolation of the first image may be performed after each translation of the first image.
The translation of the first image can be in any direction. For example, translating the first image may comprise any one or more of translating the first image in a left-right direction, translating the first image in an anterior-posterior direction, translating the first image in an inferior-superior direction, or translating the first image in any other direction, or in any combination of directions.
In embodiments where the first image is a two-dimensional image comprising a plurality of pixels, translating the first image may comprise translating the first image by part of a pixel. In other words, the first image may be translated on a sub-pixel level. Similarly, in embodiments where the first image is a three-dimensional image (or a four-dimensional image comprising a plurality of three-dimensional images) comprising a plurality of voxels, translating the first image may comprise translating the first image by part of a voxel. In other words, the first image may be translated on a sub-voxel level. In embodiments where the first image is translated under the first marker in a plurality of steps, the step size can be a fraction of a pixel in the case of two-dimensional images or a fraction of a voxel in the case of three-dimensional images (or four-dimensional images comprising a plurality of three-dimensional images).
In an example embodiment, the first image may be translated by 0.2 of a voxel. However, although an example has been provided, it will be understood that other examples are also possible and the first image may be translated by any other fraction of a voxel (such as 0.1 of a voxel, 0.3 of a voxel, 0.4 of a voxel, 0.5 of a voxel, 0.6 of a voxel, or any other fraction of a voxel). In this way, even features having a diameter that is equal to or less than a voxel in the case of a three-dimensional image (or a four-dimensional image comprising a plurality of three-dimensional images), or a diameter that is equal to or less than a pixel in the case of two-dimensional images, can be accurately marked to achieve high precision alignment.
In some embodiments, the translation of the first image under the first marker (at block 204 of
In some embodiments, the translation of the first image under the first marker (at block 204 of
Although not illustrated in
The translation of the second image may be performed simultaneously with the translation of the first image, prior to the translation of the first image or subsequent to the translation of the first image (at block 204 of
Although also not illustrated in
In some embodiments, the rotation of the first image under the first marker can be performed automatically by the processor 102 of the apparatus 100. For example, an automatic image registration algorithm can be employed. In some embodiments, for example, an automatic image registration algorithm may be applied to the first image and the second image (or one or more sub-images of those images, where the sub-images show the local environment around the position of a marker). An optimal rotation can then be determined by comparing the first image and second image (or by mapping one of the images to the other). The optimal rotation is a rotation that, when applied to one of the images, makes the rotated image and the other image most similar. The determined optimal rotation can be used to update the first marker in the first image or the second marker in the second image such that the first and second markers correspond in the first and second images. In some embodiments, for the purpose of rotation, the transformations of the automatic image registration algorithm can be optimised for rotations. Any known registration algorithms can be employed in this way.
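The idea of determining an optimal rotation by comparing candidate rotations against the other image can be sketched as follows (a simplified illustration, not any particular registration algorithm; nearest-neighbour resampling, a sum-of-squared-differences similarity, and the helper names `rotate_nn` and `best_rotation` are assumptions for the example):

```python
import numpy as np

def rotate_nn(img, angle_rad):
    """Rotate a 2D image about its centre by angle_rad using
    nearest-neighbour resampling (zeros outside the original grid)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # Inverse mapping: for each output position, find its source position.
    ys = cy + c * (yy - cy) + s * (xx - cx)
    xs = cx - s * (yy - cy) + c * (xx - cx)
    yn, xn = np.rint(ys).astype(int), np.rint(xs).astype(int)
    inside = (yn >= 0) & (yn < h) & (xn >= 0) & (xn < w)
    out = np.zeros_like(img)
    out[inside] = img[yn[inside], xn[inside]]
    return out

def best_rotation(moving, fixed, angles_deg):
    """Pick the candidate angle whose rotated `moving` image is most
    similar to `fixed` (smallest sum of squared differences)."""
    return min(angles_deg,
               key=lambda a: np.sum((rotate_nn(moving, np.deg2rad(a)) - fixed) ** 2))

# Example: recover a known 90-degree rotation of an asymmetric pattern.
moving = np.zeros((9, 9))
moving[1, 4] = 1.0
fixed = rotate_nn(moving, np.deg2rad(90.0))
print(best_rotation(moving, fixed, [-90, -45, 0, 45, 90]))  # → 90
```

In practice a registration algorithm would optimise the angle continuously and use a smoother interpolation, but the principle of "rotate, compare, keep the most similar" is the same.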
In some embodiments, the rotation of the first image under the first marker can be at least partially based on a received user input. The user input may be received via one or more user interfaces 104, which may be one or more user interfaces of the apparatus 100, one or more user interfaces external to the apparatus 100, or a combination of both.
In some embodiments, rotating the first image can comprise an interpolation of the first image. The interpolation of the first image to rotate the first image under the first marker can comprise any suitable interpolation technique such as those mentioned earlier in respect of the translation of the first image. In the embodiments in which the first image is rotated under the first marker in a plurality of steps, the interpolation of the first image may be performed after each rotation of the first image.
In the same way as described above for the first image (which will not be repeated here but will be understood to apply), the second image may alternatively or in addition be rotated under the second marker to alter the orientation of the anatomical structure in the second image to correspond to the orientation of the anatomical structure in the first image. For example, the anatomical structures in the underlying images can be rotated relative to each other.
The rotation of the second image may be performed simultaneously with the rotation of the first image, prior to the rotation of the first image or subsequent to the rotation of the first image. Similarly, the rotation of any one or more of the first image and the second image may be performed simultaneously with the translation of any one or more of the first image and the second image, prior to the translation of any one or more of the first image and the second image, or subsequent to the translation of any one or more of the first image and the second image (at block 204 of
Specifically, in some embodiments, the first image may first be rotated under the first marker and then the first image may subsequently be translated under the first marker or vice versa. Alternatively, in some embodiments, the second image may first be rotated under the second marker and then the second image may subsequently be translated under the second marker or vice versa.
Alternatively, in some embodiments, the first image may first be rotated under the first marker (or, alternatively, the second image may first be rotated under the second marker) and then the first and the second image may subsequently be translated under their respective markers (for example, the first image and the second image may be translated simultaneously under their respective markers, or the first image may be translated under the first marker followed by the second image being translated under the second marker or vice versa). Alternatively, in some embodiments, the first image may first be translated under the first marker (or, alternatively, the second image may first be translated under the second marker) and then the first and the second image may subsequently be rotated under their respective markers (for example, the first image and the second image may be rotated simultaneously under their respective markers, or the first image may be rotated under the first marker followed by the second image being rotated under the second marker or vice versa).
Alternatively, in some embodiments, the first image and the second image may first be translated under their respective markers (for example, the first image and the second image may first be translated simultaneously under their respective markers, or the first image may be translated under the first marker followed by the second image being translated under the second marker or vice versa) and then the first image may subsequently be rotated under the first marker (or, alternatively, the second image may subsequently be rotated under the second marker). Alternatively, in some embodiments, the first image and the second image may first be rotated under their respective markers (for example, the first image and the second image may first be rotated simultaneously under their respective markers, or the first image may be rotated under the first marker followed by the second image being rotated under the second marker or vice versa) and then the first image may subsequently be translated under the first marker (or, alternatively, the second image may subsequently be translated under the second marker).
Alternatively, in some embodiments, the first image and the second image may first be translated under their respective markers (for example, the first image and the second image may first be translated simultaneously under their respective markers, or the first image may be translated under the first marker followed by the second image being translated under the second marker or vice versa) and then the first image and the second image may subsequently be rotated under their respective markers (for example, the first image and the second image may first be rotated simultaneously under their respective markers, or the first image may be rotated under the first marker followed by the second image being rotated under the second marker or vice versa). Alternatively, in some embodiments, the first image and the second image may first be rotated under their respective markers (for example, the first image and the second image may first be rotated simultaneously under their respective markers, or the first image may be rotated under the first marker followed by the second image being rotated under the second marker or vice versa) and then the first image and the second image may subsequently be translated under their respective markers (for example, the first image and the second image may first be translated simultaneously under their respective markers, or the first image may be translated under the first marker followed by the second image being translated under the second marker or vice versa).
In some embodiments, the rotation of the second image under the second marker can be performed automatically by the processor 102 of the apparatus 100, as described earlier in respect of the first image. In some embodiments, the rotation of the second image under the second marker can be at least partially based on a received user input. The user input may be received via one or more user interfaces 104, which may be one or more user interfaces of the apparatus 100, one or more user interfaces external to the apparatus 100, or a combination of both.
In embodiments in which the first image and the second image comprise a plurality of views of the anatomical structure (such as the views described earlier), the method disclosed herein may be performed for one or more of the plurality of views. In any of the embodiments described herein, the translation and, optionally, the rotation steps described earlier may be repeated for one or more of the images until the positions of the markers with respect to the feature of the anatomical structure correspond as closely as possible across the images. In embodiments in which the translation is performed in a plurality of steps, the translation may be repeated with reduced step sizes. Similarly, in embodiments in which the rotation is performed in a plurality of steps, the rotation may be repeated with reduced step sizes.
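Repeating the translation with reduced step sizes amounts to a coarse-to-fine search, which may be sketched in one dimension as follows (illustrative only; the helper names `shift1d` and `refine_shift`, the level count and the step schedule are assumptions for the example):

```python
import numpy as np

def shift1d(sig, dx):
    """Linear-interpolation shift of a 1D signal by dx voxels
    (zeros outside the original grid)."""
    pos = np.arange(len(sig)) - dx
    lo = np.floor(pos).astype(int)
    frac = pos - lo
    pad = np.concatenate(([0.0], sig, [0.0]))
    lo = np.clip(lo, -1, len(sig) - 1) + 1
    return (1 - frac) * pad[lo] + frac * pad[lo + 1]

def refine_shift(first, second, levels=3, init_step=0.5, span=2.0):
    """Coarse-to-fine search: scan offsets around the current best
    estimate, then halve the step size and narrow the window."""
    best, step = 0.0, init_step
    for _ in range(levels):
        cands = best + np.arange(-span, span + 1e-9, step)
        best = min(cands,
                   key=lambda d: np.sum((shift1d(first, d) - second) ** 2))
        step /= 2.0
        span = 2.0 * step        # search a narrower window next time
    return best

# Example: recover a known sub-voxel shift to 1/8-voxel resolution.
first = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0])
second = shift1d(first, 0.375)
print(refine_shift(first, second))   # → 0.375
```

Each halving of the step size doubles the precision of the recovered offset while keeping the number of candidates per level small.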
Although not illustrated in
In some embodiments, once the position of the first marker with respect to the feature in the first image corresponds to the position of the second marker with respect to the feature in the second image (which may be achieved by one or more translations), the overall translation of the first image may be determined. In these embodiments, the determined overall translation may be rendered (or output or displayed) at the position of the first marker in the first image. Similarly, in embodiments where the second image is translated, the overall translation of the second image may be determined. In these embodiments, the determined overall translation may be rendered (or output or displayed) at the position of the second marker in the second image. In embodiments in which an image is also rotated, the overall rotation of that image may be determined and rendered (or output or displayed) at the position of the marker in the image.
In this way, by positioning the marker over the image and translating (or shifting) the image under the marker in the manner described above, a high precision pair of markers can be generated over the images. It will be understood that, although the method is described herein with respect to positioning markers over corresponding features in two images, the method may also be performed in respect of more than two images in the same way.
In this illustrated example, the first image 300 comprises an axial view 300a of the anatomical structure, a coronal view 300b of the anatomical structure, and a sagittal view 300c of the anatomical structure. Similarly, in this illustrated example, the second image 302 comprises an axial view 302a of the anatomical structure, a coronal view 302b of the anatomical structure, and a sagittal view 302c of the anatomical structure. As illustrated in
As described earlier with reference to block 202 of
As illustrated in
It will be understood that any one or more of the first image 300 and the second image 302 may be translated (and optionally rotated) in any of the manners described earlier, which will not be repeated here but will be understood to apply.
With reference to
At block 404 of
Then, at block 406 of
In some embodiments, the comparison can be performed automatically by the processor 102 of the apparatus 100. For example, in some embodiments, the comparison can be performed using a similarity measure. Specifically, the first image and the second image (or at least a part of the first image and a corresponding part of the second image) can be compared to each other using a similarity measure. The similarity measure may, for example, comprise a sum of squared differences, a cross correlation (such as a local cross correlation), mutual information, or any other similarity measure. The comparison using a similarity measure can be performed using any of the known techniques. In some embodiments, the comparison may comprise displaying the plurality of translated first images to a user via a user interface 104 and acquiring a user input from the user interface 104 related to the comparison. The user input may be received via one or more user interfaces 104, which may be one or more user interfaces of the apparatus 100, one or more user interfaces external to the apparatus 100, or a combination of both.
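By way of illustration only, the three named similarity measures could be sketched as follows; the function names and the histogram binning choice for mutual information are assumptions, not part of the disclosure:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: lower means more similar."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalised cross correlation: 1.0 indicates a perfect linear match."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram; often
    preferred when the two images come from different modalities."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)  # marginal over a's intensities
    py = p.sum(axis=0, keepdims=True)  # marginal over b's intensities
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

An image compared with itself gives the best possible score under each measure (zero SSD, unit correlation, maximal mutual information), while unrelated images score poorly.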
At block 408 of
In the same manner described for the translation of the first image with respect to
It will also be understood that the first image, the second image, or both the first and second images may be rotated in the manner described earlier in respect of
Similarly, the rotation of any one or more of the first image and the second image may be performed simultaneously with the translation of any one or more of the first image and the second image, prior to the translation of any one or more of the first image and the second image, or subsequent to the translation of any one or more of the first image and the second image (as described earlier).
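Because rotation and translation do not commute, the order in which they are applied changes the overall transform and must therefore be tracked. A minimal illustration, using homogeneous 2-D matrices (the 90-degree angle and (5, 0) shift are arbitrary example values):

```python
import numpy as np

def rotation(theta):
    """Homogeneous 2-D rotation about the origin by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(tx, ty):
    """Homogeneous 2-D translation by (tx, ty)."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 1.0])  # homogeneous 2-D point (1, 0)
# rotate 90 degrees, then translate by (5, 0):
rot_then_trans = translation(5, 0) @ rotation(np.pi / 2) @ p
# translate by (5, 0) first, then rotate 90 degrees:
trans_then_rot = rotation(np.pi / 2) @ translation(5, 0) @ p
```

The two orderings move the point to (5, 1) and (0, 6) respectively, so a tool that allows rotation before, after, or together with translation must record which ordering was used.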
In this illustrated example, the first image 500 and the second image 502 comprise a plurality of views 500a, 500b, 500c, 502a, 502b, 502c. Specifically, the first image 500 comprises an axial view 500a of the anatomical structure, a coronal view 500b of the anatomical structure, and a sagittal view 500c of the anatomical structure and, similarly, the second image 502 comprises an axial view 502a of the anatomical structure, a coronal view 502b of the anatomical structure, and a sagittal view 502c of the anatomical structure. As described earlier in respect of block 402 of
In this illustrated example, the first marker 504 is positioned over the feature of the anatomical structure in each of the axial view 500a of the anatomical structure, the coronal view 500b of the anatomical structure, and the sagittal view 500c of the anatomical structure in the first image 500. Similarly, in this illustrated example, the second marker 506 is positioned over the same feature of the anatomical structure in each of the axial view 502a of the anatomical structure, the coronal view 502b of the anatomical structure, and the sagittal view 502c of the anatomical structure in the second image 502.
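The relationship between a single 3-D marker position and its appearance in the axial, coronal, and sagittal views can be sketched as follows. The axis convention used here (x: left-right, y: anterior-posterior, z: inferior-superior) is an assumption for illustration; actual viewers and DICOM orientations differ:

```python
def marker_in_views(x, y, z):
    """Map one 3-D marker position to the slice index and in-plane 2-D
    position at which it appears in each orthogonal view."""
    return {
        "axial":    {"slice": z, "pos": (x, y)},  # viewing along z
        "coronal":  {"slice": y, "pos": (x, z)},  # viewing along y
        "sagittal": {"slice": x, "pos": (y, z)},  # viewing along x
    }
```

This makes explicit why a single marker appears at a consistent position in all three views of the same image: each view shows two of the marker's three coordinates.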
Then, as described earlier in respect of block 404 of
As described earlier with respect to block 408 of
Although the illustrated example has been described in relation to the sagittal view 500c of the first image 500, it will be understood that the method may alternatively or additionally be performed for any of the other views of the first image 500, for any of the views of the second image 502, or for any combination of views.
It will be understood that, while the methods have been described herein in respect of two images, the methods can equally be performed for more than two images.
There is therefore provided an improved method and apparatus for positioning markers in images of an anatomical structure. The method and apparatus described herein can be used for positioning markers in images of any anatomical structure (for example, organs or any other anatomical structure). Specifically, the method and apparatus allow markers to be positioned over corresponding features in different images with high precision. The method and apparatus can be valuable in medical imaging analysis and visualisation tools.
There is also provided a computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method or methods described herein. Thus, it will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of source code, object code, a code intermediate between source and object code such as a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention.
It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other.
An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Number | Date | Country | Kind |
---|---|---|---|
16206056.0 | Dec 2016 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2017/083447 | 12/19/2017 | WO | 00 |