This application claims priority of German application No. 10 2005 046 416.5 filed Sep. 28, 2005, which is incorporated by reference herein in its entirety.
The invention relates to an arrangement having a 3D device, the 3D device being embodied for acquiring an object and generating a 3D acquisition result representing the object at least partially in at least three dimensions. The arrangement also has a 2D device, the 2D device being embodied for acquiring the object and generating a 2D acquisition result representing the object in at least two dimensions. The 2D acquisition result represents the object at least partially, in particular a top view of the object, a view through the object or a section through the object.
3D devices in the form of computed tomography systems, magnetic resonance tomography systems, positron emission computed tomography systems or single-photon emission computed tomography systems are known from the prior art. Such 3D devices can record an object, for example a patient, in three spatial dimensions. A sectional or through-view image required, for example, for an intervention can then be selected by the user from the acquisition result.
With 3D devices known from the prior art, the acquisition process and in particular its subsequent evaluation by a user, for example a physician, currently take so much time that an acquisition and evaluation of this kind is regularly performed only prior to an intervention, in critical phases of the intervention after time-consuming repositioning of the patient, or for monitoring after the intervention. During the intervention, 2D acquisition results generated, for example, by means of a C-arm X-ray device must be mentally reconciled by a user, for example a physician, with the acquisition results of the 3D device in order to compare the 2D acquisition result with the 3D acquisition result.
The problem underlying the invention is therefore that the 3D acquisition result generated by a 3D device and the 2D acquisition result generated by a 2D device, for example a C-arm X-ray system, and obtained for instance during an intervention, are difficult to compare with each other, for example in order to relocate an organ, a vessel or the like represented in each of the two acquisition results.
The aforementioned problem is solved by an arrangement of the type cited in the introduction, wherein the 3D device and the 2D device are connected to each other, in particular mechanically, in such a way that a part of the 3D acquisition result corresponding to an object location can be assigned to a part of the 2D acquisition result corresponding to the same object location.
A 3D acquisition result can be a 3D dataset which represents an object at least partially in at least three dimensions. For example, a 3D dataset can represent an object in at least 3 spatial dimensions. A 4D dataset can represent an object in 3 spatial and in a further time-dependent dimension. In the case of a 4D dataset the object has therefore been acquired in addition as a function of time.
A 2D acquisition result can be a 2D dataset which represents the object at least partially. For example, the 2D dataset can represent a top view of the object, a view through the object or a section through the object. In another exemplary embodiment a 3D dataset can represent an object in at least three dimensions, with two of the dimensions being spatial, that is location-dependent, and one dimension being time-dependent.
A 3D dataset can also preferably contain data corresponding to a plurality of voxel object points, the voxel object points together at least partially representing the object in at least three dimensions, with each voxel object point representing one location in the object.
A 2D dataset can contain data corresponding to a plurality of pixels of an image of an object, the pixels together at least partially representing the image of an object.
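Purely for illustration, the following sketch outlines one conceivable software representation of the datasets described above; it is not part of the disclosure, and the class names, fields and the use of numpy arrays are assumptions made for this example.

```python
# Hypothetical sketch of 3D, 4D and 2D datasets as described above.
# Class and field names are illustrative assumptions, not terms of the application.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Volume3D:
    """3D dataset: one value per voxel object point, each voxel representing one object location."""
    voxels: np.ndarray                              # shape (nx, ny, nz)
    voxel_spacing_mm: Tuple[float, float, float]    # spatial distance between adjacent acquisition points
    origin_mm: Tuple[float, float, float]           # object coordinates of voxel (0, 0, 0)

@dataclass
class Series4D:
    """4D dataset: the object acquired additionally as a function of time."""
    frames: List[Volume3D]
    timestamps_s: List[float]

@dataclass
class Image2D:
    """2D dataset: pixels of an image (top view, through-view or section) of the object."""
    pixels: np.ndarray                              # shape (nu, nv)
    pixel_spacing_mm: Tuple[float, float]
    origin_mm: Tuple[float, float, float]           # object coordinates of pixel (0, 0)
```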
An object can be represented at least partially in that a part of the object is represented, for example an organ or a vessel in the case of a patient. Alternatively or in addition thereto, an object can be represented partially in that mutually adjacent acquisition points are spatially spaced apart from one another, so that the object is acquired only at discrete locations.
A mechanical connection of the 2D device to the 3D device can be, for example, a rigid connection between a housing part of the 2D device and a housing part of the 3D device. Alternatively thereto, a detachable rigid connection can also be provided between the aforementioned housing parts.
In a preferred embodiment, the arrangement can, for example, have a receiving apparatus, usable jointly by the 3D device and the 2D device, with a receiving surface for receiving an object, in particular a patient. The receiving apparatus is embodied to supply the object either to the 3D device for the purpose of being acquired by the 3D device or to the 2D device for the purpose of being acquired by the 2D device. The receiving apparatus can also preferably be embodied to connect the 3D device mechanically to the 2D device. In this embodiment the receiving apparatus advantageously forms a connecting piece which is disposed between the 3D device and the 2D device.
In an alternative embodiment the 3D device and the 2D device can in each case be connected to a base, for example a floor or a junction plate base, which can form a rigid connecting piece. The receiving apparatus can be part of the 3D device or the 2D device.
The receiving apparatus is preferably embodied to move the receiving surface to a predetermined first position in the acquisition range of the 3D device, or optionally to a predetermined second position in the acquisition range of the 2D device.
A calibration of the arrangement can advantageously be performed at these positions that are known relative to one another.
By means of the receiving apparatus an object can advantageously be brought to predetermined positions in the acquisition ranges of each of the aforementioned devices, with the result that parts of the acquisition results corresponding to an object location can be assigned to one another.
In a preferred embodiment the receiving apparatus is embodied to generate a calibration signal at at least one predetermined position of the receiving surface. This advantageously enables acquisition locations which in each case represent the same object location to be assigned in a simple manner.
The receiving apparatus is preferably embodied to supply the receiving surface to the acquisition range of the 3D device or optionally the acquisition range of the 2D device by translational movement and/or rotational movement.
The receiving apparatus is also preferably embodied to swivel the receiving surface about at least one spatial axis. As a result an object which is connected, in particular rigidly, to the receiving surface can advantageously be acquired by the 2D device at an acquisition angle which corresponds to an acquisition angle of an acquisition by the 3D device.
The receiving apparatus can also preferably be embodied to swivel the receiving surface about two or three spatial axes which are in particular orthogonal to one another.
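As a non-binding illustration of the geometry involved, the following sketch composes swivels about up to three mutually orthogonal spatial axes into a single rotation; the axis convention, the angle values and the function names are assumptions of the example.

```python
# Hypothetical sketch: composing swivels of the receiving surface about up to
# three orthogonal spatial axes as elementary rotation matrices.
import numpy as np

def rotation_about_axis(axis: str, angle_deg: float) -> np.ndarray:
    """Elementary 3x3 rotation matrix about the x, y or z axis."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    if axis == "z":
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    raise ValueError(f"unknown axis {axis!r}")

def compose_swivels(swivels) -> np.ndarray:
    """Compose a sequence of (axis, angle) swivels, applied in order, into one rotation matrix."""
    R = np.eye(3)
    for axis, angle_deg in swivels:
        R = rotation_about_axis(axis, angle_deg) @ R
    return R

# Example: swivel about two orthogonal axes so that the acquisition direction of the
# 2D device corresponds to the acquisition direction of an earlier 3D acquisition.
R = compose_swivels([("z", 180.0), ("x", 10.0)])
acquisition_direction_3d = np.array([0.0, 0.0, 1.0])
equivalent_direction_2d = R @ acquisition_direction_3d
```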
In a preferred embodiment the 2D device is electrically connected to the 3D device.
The arrangement preferably has a coordinate memory which is connected in each case to the 3D device and the 2D device. The 2D device and the 3D device are each embodied to generate an object coordinates dataset corresponding to an object location and to store the object coordinates dataset in the coordinate memory. This advantageously enables a calibration of the arrangement to be performed.
The arrangement also preferably has an assignment unit with an input for a calibration signal, the assignment unit being embodied to assign the 3D object coordinates dataset generated by the 3D device to the 2D object coordinates dataset generated by the 2D device as a function of a calibration signal received on the input side.
This advantageously further simplifies a calibration of the arrangement.
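The following sketch indicates one conceivable software analogue of the coordinate memory and the assignment unit described above; the dictionary-based storage and the key names are assumptions chosen for this example, not features taken from the disclosure.

```python
# Hypothetical sketch of a coordinate memory and an assignment unit that pairs
# the 3D and 2D object coordinates datasets when a calibration signal arrives.
from typing import Dict, List, Optional, Tuple

ObjectCoordinates = Tuple[float, float, float]   # object location, assumed here in millimetres

class CoordinateMemory:
    """Stores the object coordinates datasets generated by the 3D and 2D devices."""
    def __init__(self) -> None:
        self._store: Dict[str, ObjectCoordinates] = {}

    def write(self, key: str, coords: ObjectCoordinates) -> None:
        self._store[key] = coords

    def read(self, key: str) -> Optional[ObjectCoordinates]:
        return self._store.get(key)

class AssignmentUnit:
    """Assigns the 3D object coordinates dataset to the 2D object coordinates dataset
    as a function of a calibration signal received on the input side."""
    def __init__(self, memory: CoordinateMemory) -> None:
        self.memory = memory
        self.pairs: List[Tuple[ObjectCoordinates, ObjectCoordinates]] = []  # pairs referring to the same object location

    def on_calibration_signal(self) -> None:
        coords_3d = self.memory.read("3d")
        coords_2d = self.memory.read("2d")
        if coords_3d is not None and coords_2d is not None:
            self.pairs.append((coords_3d, coords_2d))

# Usage: each device stores its object coordinates dataset, then the receiving
# apparatus emits a calibration signal at a predetermined position.
memory = CoordinateMemory()
unit = AssignmentUnit(memory)
memory.write("3d", (120.0, 80.0, 40.0))    # from the 3D device
memory.write("2d", (120.0, 80.0, 40.0))    # from the 2D device
unit.on_calibration_signal()
```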
In an advantageous embodiment variant the arrangement has a magnetic field navigator which is embodied to generate a magnetic field with a spatial orientation. The spatial orientation of the magnetic field can be changed as a function of a user interaction in such a way that a magnetizable or magnetized object, in particular a distal catheter end of a catheter, located in the effective range of the magnetic field can be orientated spatially in correspondence with said field.
The magnetic field navigator is preferably at least indirectly connected to the coordinate memory and embodied to read out an object coordinates dataset stored in the coordinate memory and to output a position of the magnetizable or magnetized object relative to the read out object coordinates dataset.
Alternatively to this embodiment, the magnetic field navigator can generate the object location of the magnetizable or magnetized object in the form of coordinates which correspond to those given by the 2D object coordinates dataset or the 3D object coordinates dataset.
The magnetic field navigator advantageously enables an end of a catheter to be moved to a position which corresponds to a predetermined position which is represented by a 3D acquisition result.
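A minimal sketch of such a relative-position output, assuming that both the target location read out of the coordinate memory and the catheter-tip location are available as coordinate triples in a common frame, could look as follows; the function name and the units are illustrative only.

```python
# Hypothetical sketch: output the position of a magnetized distal catheter end
# relative to an object coordinates dataset read out of the coordinate memory,
# e.g. a target location taken from a 3D acquisition result.
import numpy as np

def relative_position(catheter_tip_mm, target_coords_mm):
    """Vector from the target object location to the catheter tip, plus its length."""
    offset = np.asarray(catheter_tip_mm, dtype=float) - np.asarray(target_coords_mm, dtype=float)
    return offset, float(np.linalg.norm(offset))

# Example: the target stems from the 3D acquisition result, the tip position from the navigator.
offset, distance_mm = relative_position((118.0, 83.0, 42.5), (120.0, 80.0, 40.0))
```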
A magnetic field navigator can advantageously be a magnetic field navigator of the company Stereotaxis.
In an advantageous embodiment the arrangement can have a position sensor which is embodied to detect the position of a location-indicating object—for example a catheter end. The position sensor is preferably at least indirectly connected to the coordinate memory and embodied to read out an object coordinates dataset stored in the coordinate memory and output a position of the location-indicating object relative to the read out object coordinates dataset.
Alternatively to this embodiment the position sensor can generate the object location of the location-indicating object in the form of coordinates which correspond to those given by the 2D object coordinates dataset or the 3D object coordinates dataset.
A position sensor can advantageously be a position sensor of the company Biosense Webster.
In a preferred embodiment the arrangement has an image display unit which is connected at least indirectly to the assignment unit. The arrangement is embodied to display the object, as represented in each case by the 2D acquisition result and by the 3D acquisition result, jointly on at least one image display unit, spatially and/or on a time-dependent basis.
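One conceivable form of such a spatially joint display is sketched below: a 2D acquisition result is blended with the section of a 3D dataset assigned to the same object location. The slice selection along one axis, the intensity normalization and the blending weight are assumptions of the example.

```python
# Hypothetical sketch of a spatially joint display: blend a 2D acquisition result
# with the corresponding section through the 3D dataset at one object location.
import numpy as np

def joint_display(image_2d: np.ndarray, volume_3d: np.ndarray, slice_index: int,
                  alpha: float = 0.5) -> np.ndarray:
    """Overlay the 2D image with the section of the volume assigned to the same object location."""
    section = volume_3d[:, :, slice_index]

    def normalise(a: np.ndarray) -> np.ndarray:
        # Normalise so the blend is independent of the devices' intensity scales.
        a = a.astype(float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

    return alpha * normalise(image_2d) + (1.0 - alpha) * normalise(section)

# Example with random data standing in for real acquisition results.
fused = joint_display(np.random.rand(256, 256), np.random.rand(256, 256, 64), slice_index=32)
```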
The invention also relates to a method for acquiring an object, preferably by means of an arrangement of the aforementioned type.
The method comprises the following steps:
A 2D object coordinates dataset can represent an object location in two or three spatial dimensions.
In a further preferred embodiment the method additionally has the following step:
A spatially joint display can be a representation in a common space or in a common plane. Mutually different object parts can also be displayed in a common space or in a common plane.
The invention will now be explained below with reference to the figures.
The 3D device 3, for example a SPECT scanner (SPECT = single-photon emission computed tomography), is embodied to acquire an object 7 and generate a 3D acquisition result representing the object 7 at least partially in three dimensions. The 3D device can also be embodied to generate a 3D object coordinates dataset corresponding to an acquisition location and to output said dataset on the output side.
The 3D acquisition result can be a 3D dataset which is formed by a plurality of voxel image points which together at least partially represent the object 7.
The 3D dataset can also contain the object coordinates dataset which represents the acquisition location of the object 7 acquired by the 3D device.
The 2D device, for example a C-arm X-ray device, is embodied for acquiring the object and generating a 2D acquisition result representing the object in at least two dimensions. The 2D acquisition result at least partially represents the object, in particular a top view of the object, a view through the object or a section through the object.
The receiving apparatus 13 has a receiving surface 15 and a swiveling connection 19. The receiving surface 15 is connected to the receiving apparatus 13 via the swiveling connection 19. The receiving apparatus 13 is embodied to swivel the receiving surface 15 about a swiveling axis 17 as a function of a user interaction signal received on the input side. The receiving surface 15 is shown in a swiveling position 15′.
The receiving surface 15 is embodied to receive an object 7, a patient for example.
In the embodiment shown in
For example, the receiving apparatus 13 can swivel the receiving surface 15 back and forth in the range of a swiveling angle of 180 degrees.
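Purely illustratively, the following sketch maps an object location on the receiving surface through such a swivel about the swiveling axis 17, here assumed to be vertical and to pass through a given pivot point; the axis placement, the pivot coordinates and the angle value are assumptions of the example.

```python
# Hypothetical sketch: map an object location on the receiving surface through a
# swivel about the swiveling axis (assumed parallel to the z axis and passing
# through a given pivot point), e.g. from the acquisition range of the 3D device
# into the acquisition range of the 2D device.
import numpy as np

def swivel_point(point_mm, pivot_mm, angle_deg: float) -> np.ndarray:
    """Rotate a point about a vertical axis through the pivot by the given swivel angle."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    p = np.asarray(point_mm, dtype=float)
    pivot = np.asarray(pivot_mm, dtype=float)
    return R @ (p - pivot) + pivot

# Example: an object location is swivelled by 180 degrees, analogous to moving
# the object 7 into the object position 7'.
position_after_swivel = swivel_point((1200.0, 300.0, 950.0), pivot_mm=(0.0, 0.0, 950.0), angle_deg=180.0)
```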
The arrangement 1 also has an assignment unit 25 which is connected to the receiving apparatus 13 via a bidirectional connecting cable 43. The assignment unit 25 is connected to the 2D device 5 via a data bus 41 and to the 3D device 3 via a data bus 39.
The arrangement 1 also has a coordinate memory 27 which is connected to the assignment unit 25 via a connecting cable 33.
The arrangement 1 also has an image display unit 29. The image display unit 29 has a touch-sensitive surface 31, the touch-sensitive surface 31 being connected to the assignment unit 25 via a connecting cable 37, and the image display unit 29 being connected to the assignment unit 25 via a connecting cable 35. The image display unit 29 can be, for example, a TFT display (TFT=Thin Film Transistor).
The touch-sensitive surface 31 is embodied to generate a touch signal as a function of a touching of the touch-sensitive surface 31, which touch signal corresponds to a touch location of the touch-sensitive surface 31, and to output said signal via the connecting cable 37 on the output side. Also shown is a hand of a user 62 which can generate a touch signal indirectly by touching the touch-sensitive surface 31.
The principle of operation of the arrangement 1 will now be explained:
The 3D device 3 can send the generated 3D object coordinates dataset via the data bus 39 to the assignment unit 25.
Also shown are object coordinates 11 which represent the acquisition location of the object 7 at which the object 7 was acquired by the 3D device.
The assignment unit 25 is embodied to output the object coordinates dataset received via the data bus 39 on the output side via the connecting cable 33 and to store it in the coordinate memory 27.
The receiving apparatus 13 can generate a calibration signal as a function of a swiveling position of the receiving surface 15. The receiving apparatus 13 can now generate a calibration signal which corresponds to the swiveling position of the receiving surface 15 in the acquisition range of the 3D device, and send said calibration signal via the connecting cable 43 to the assignment unit 25. The assignment unit 25 can send the object coordinates dataset representing an acquisition location and received via the data bus 39 to the coordinate memory 27 via the connecting cable 33 as a function of the calibration signal received via the connecting cable 43 and store it there.
The receiving apparatus 13 can now swivel the receiving surface 15 into the swiveling position 15′—for example as a function of a touch signal generated by the touch-sensitive surface 31—and thereby move the object 7 located on the receiving surface 15 along the swiveling direction 23 into the object position 7′ and therefore into the acquisition range of the 2D device 5. A resulting movement of the object 7 is represented by the movement direction arrow 21.
The 2D device 5, for example a C-arm X-ray device, is embodied to acquire an object and generate a 2D acquisition result representing the object in at least two dimensions. In this embodiment the 2D acquisition result represents, for example, a view through the object 7. The 2D device is embodied to output the 2D acquisition result, for example a 2D dataset which has a plurality of pixel image points which together represent the view through the object 7, via the data bus 41 on the output side.
The receiving apparatus 13 can now generate a calibration signal corresponding to the swiveling position 15′ of the receiving surface and send said signal via the connecting cable 43 to the assignment unit 25. The 2D device is embodied to generate a 2D object coordinates dataset corresponding to an acquisition location of the object in the object position 7′ and to send said dataset via the data bus 41 to the assignment unit 25 on the output side.
As a function of the calibration signal received via the connecting cable 43 and representing the swiveling position 15′, the assignment unit 25 can send the 2D object coordinates dataset received via the data bus 41 to the coordinate memory 27 via the connecting cable 33 and store it there.
With the object coordinates datasets stored in the coordinate memory 27, a 2D acquisition result, represented by a 2D dataset, can now be assigned by the assignment unit 25 to a 3D acquisition result, represented by a 3D dataset. On the basis of the object coordinates datasets stored in the coordinate memory 27, the assignment unit 25 can thus assign components of the 2D dataset and the 3D dataset corresponding to precisely one object location to one another and generate a corresponding assignment result.
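One conceivable software realization of this assignment step is sketched below: the stored object coordinates datasets are reduced to simple index-to-object-coordinate mappings, and a voxel of the 3D dataset is paired with the pixel of the 2D dataset representing the same object location. The affine mappings and the parallel projection along one axis are assumptions of the example, not features of the disclosure.

```python
# Hypothetical sketch of the assignment step: use the stored object coordinates
# datasets (reduced here to affine index-to-object-coordinate mappings) to pair a
# voxel of the 3D dataset with the pixel of the 2D dataset for one object location.
import numpy as np

def voxel_to_object(ijk, origin_3d_mm, spacing_3d_mm):
    """Object coordinates of a voxel, derived from the 3D object coordinates dataset."""
    return np.asarray(origin_3d_mm, float) + np.asarray(ijk, float) * np.asarray(spacing_3d_mm, float)

def object_to_pixel(xyz_mm, origin_2d_mm, spacing_2d_mm):
    """Pixel indices of the 2D dataset representing the same object location
    (a simple parallel projection along the z axis, an assumption of the example)."""
    uv = (np.asarray(xyz_mm, float)[:2] - np.asarray(origin_2d_mm, float)[:2]) / np.asarray(spacing_2d_mm, float)
    return np.round(uv).astype(int)

# Example assignment result: voxel (64, 32, 20) paired with the pixel representing
# the same object location.
xyz = voxel_to_object((64, 32, 20), origin_3d_mm=(0.0, 0.0, 0.0), spacing_3d_mm=(1.0, 1.0, 2.0))
uv = object_to_pixel(xyz, origin_2d_mm=(0.0, 0.0, 0.0), spacing_2d_mm=(0.5, 0.5))
assignment = {"voxel": (64, 32, 20), "pixel": tuple(uv), "object_location_mm": tuple(xyz)}
```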
The assignment unit 25 can output the assignment result on the output side and send it via the connecting cable 35 to the image display unit 29 for joint display on the image display unit 29.
The arrangement 2 also shows an image display unit 29 which has likewise already been described in
In this exemplary embodiment the 2D device is a C-arm X-ray device with a pedestal 60.
The receiving apparatus 13 can swivel an object located on the receiving surface 15, for example a patient, optionally into the acquisition range of the 2D device 5 or into the acquisition range of the 3D device 3.
The 3D device 3 is shown in an acquisition position. Also shown is a park position 3′ of the 3D device.
In addition to the arrangement 1 shown in
The magnetic field heads 45 and 46 are each embodied to generate a magnetic field with a spatial orientation. The magnetic field navigator can change the spatial orientation of the magnetic field as a function of a user interaction signal, for example a touch signal generated by the touch-sensitive surface 31 in
The magnetic field navigator can be connected to the coordinate memory 27 shown in
The magnetic field navigator can send said dataset, which represents the object location of the magnetizable object, to the assignment unit 25 shown in
Also shown are the spacing dimension 58, which measures 500 centimeters, and the spacing dimension 57, which measures 455 centimeters.