The invention relates to a method as well as to an apparatus for visualizing a sequence of volume images of a moving object. Methods and apparatus of this kind are used wherever a sequence of volume images is to be visualized, for example, for a viewer.
Methods and apparatus whereby sequences of volume images of organs of a patient can be acquired and presented to a viewer are known from the medical field. A volume image is a three-dimensional image of an object and consists of volume elements or voxels, each of which represents, by way of a corresponding volume value or voxel value, the image information of the object that is present at the location of the voxel. The acquired volume images are visualized by forming a respective two-dimensional image from each volume image, which two-dimensional image can be rendered by suitable means such as a monitor. Known visualization methods, however, limit the image repetition rate of the displayed images in contemporary systems, because the visualization of a volume image becomes very computation-intensive as its size increases. This is unsatisfactory notably when the acquisition unit of a system of this kind is capable of acquiring the volume images at an image repetition rate which is higher than the rate at which the visualization unit can form images therefrom.
Therefore, it is an object of the invention to provide a method and an apparatus for faster visualization of volume images.
This object is achieved by means of a method for visualizing a sequence of volume images, which method comprises the steps of:
According to this method, first a first volume image from a sequence of volume images is available. In order to make such a volume image accessible to a viewer, it must be visualized; to this end, usually a two-dimensional image surface is defined onto which the volume image is imaged, and the viewer is then presented with the image formed on that surface, for example, on a display screen. The image of the object can be presented to the viewer as if the viewer were looking at the object from the direction from which the volume image is imaged on the image surface.
It has been found that only a part of all voxels is relevant for the visualization of a volume image, which part is dependent on the visualization method used.
Hereafter a voxel is deemed to be relevant when its volume value contributes to, or is used during, the extraction of the two-dimensional image from the volume image, that is, if it contributes to the formation of the two-dimensional image information. For example, voxels whose volume values represent volume image contents of parts of the object which are not visible in the imaging direction are not relevant for the visualization. The execution of a visualization method, therefore, can be accelerated when as large a part as possible of the non-relevant voxels is not used during the extraction of the two-dimensional image. The image thus obtained corresponds approximately to an image resulting from the same visualization utilizing all voxels.
To this end, first those volume values of a first volume image are determined which are relevant for the visualization thereof, and the voxels with which these volume values are associated are stored. Thus, all voxels that are relevant for the visualization of the first volume image are stored and a two-dimensional image can be derived therefrom in a further step. It is assumed that a sequence of volume images representing a non-moving object is to be visualized. In that case it would be expected that the object is rendered in a second volume image in exactly the same way as in the first volume image, so that for the visualization of this second volume image exactly those voxels are relevant which were already relevant for the visualization of the first volume image. Because the whole object or part of the object represented in the sequence of volume images moves, however, some voxels of the second volume image which are relevant for the visualization have been shifted relative to the first volume image due to the object motion, so that now some other voxels with their volume values are relevant for the visualization. These other relevant volume elements are located at a given distance from the stored voxels in conformity with the object motion, so that they neighbor the stored voxels (that is, the relevant voxels of the first volume image). In this context the term “neighbor” means not only a direct neighborhood, but also a neighborhood where several non-relevant voxels may be situated between a stored voxel and a new relevant voxel.
The aim is to derive the two-dimensional image from the second volume image from as small a number of volume values as possible. The second volume image is therefore visualized by deriving the two-dimensional image exclusively from those of its volume values which are associated with stored voxels or with voxels neighboring such stored voxels. The voxels used for the visualization constitute a sub-set of all voxels of the volume image. The larger the share of voxels having a relevant volume value is in the sub-set, the faster the actual visualization can be performed. In the optimum case this sub-set comprises exclusively voxels having a volume value which is relevant for the visualization. As a rule it is achieved by this method that the number of voxels used for the derivation of the two-dimensional image is drastically reduced with respect to the total number of all volume values of a volume image, even when the sub-set of the neighboring voxels used for the derivation also comprises voxels having a non-relevant volume value.
For the visualization of a third volume image, in conformity with the method first those volume values are determined which were actually relevant for the derivation of the two-dimensional image from the second volume image. The voxels associated with these volume values are stored. Analogous to the visualization of the second volume image, the two-dimensional image of a third volume image is then derived from those of its volume values which are associated with stored voxels or voxels neighboring such stored voxels. The procedure is the same for further volume images.
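The loop described above (determine the relevant voxels, store them, then derive the next two-dimensional image only from the stored voxels and their neighbors) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the toy renderer, the relevance criterion (a simple limit value) and all names are assumptions.

```python
# Hedged sketch of the claimed visualization loop (all names invented).
# A volume image is modelled as a dict mapping (x, y, z) -> voxel value.

def render(volume, candidates=None):
    """Toy 'renderer': a voxel is deemed relevant when its value exceeds
    a limit. If `candidates` is given, only those voxels are examined."""
    LIMIT = 0.5
    examined = volume.keys() if candidates is None else candidates
    relevant = {v for v in examined if volume.get(v, 0.0) > LIMIT}
    image = sum(volume[v] for v in relevant)   # stand-in for a 2-D image
    return image, relevant

def dilate(voxels, radius=1):
    """Add all grid neighbours within `radius` (Chebyshev) of each voxel."""
    grown = set()
    for (x, y, z) in voxels:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                for dz in range(-radius, radius + 1):
                    grown.add((x + dx, y + dy, z + dz))
    return grown

def visualize_sequence(volumes, radius=1):
    """First frame: full render; later frames: render restricted to the
    dilated set of voxels that were relevant for the previous frame."""
    images = []
    stored = None
    for volume in volumes:
        candidates = None if stored is None else dilate(stored, radius)
        image, stored = render(volume, candidates)
        images.append(image)
    return images
```

For a slowly moving bright structure, the second and later frames thus examine only a small neighborhood of the previously relevant voxels instead of the whole volume image.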
The dependent claims relate to special versions of the method in accordance with the invention.
The use of a motion model constitutes an embodiment of the method as disclosed in claim 2. The share of voxels having a relevant volume value thus becomes as large as possible in the sub-set of the voxels used during the derivation of the two-dimensional image. If there is no motion model, or if the use of a motion model is not possible, the sub-set of the voxels used during the derivation can alternatively be defined by means of the version disclosed in claim 3. In order to keep this sub-set as small as possible, the method can be adapted to the actual object motion by means of the version disclosed in claim 4. The regions are chosen to be large when the object has moved through a large image region between the first and the second volume image. The regions are chosen to be small when the object has moved through only a small image region. A particularly simple implementation of the method is offered by the further version disclosed in claim 5, because the indication of a radius or diameter then suffices to define the region.
Visualization methods are known in which the volume values of the voxels of a volume image are processed in blocks. When a visualization procedure of this kind is used in the method in accordance with the invention, the further version in conformity with claim 6 reduces the number of storage steps and storage locations for relevant voxels.
The object is also achieved by means of an image processing unit for the visualization of a sequence of volume images, which unit comprises
An image processing unit can be realized on the one hand as an independent apparatus such as, for example, a workstation with a display screen. Via a data input it receives sequences of volume images which are visualized and displayed on a monitor in conformity with the method of the invention. On the other hand, an image processing unit of this kind can be utilized, for example, in many known apparatus whereby, additionally, the volume images to be visualized are acquired in conformity with claim 8. The use of the method in accordance with the invention in medical apparatus enables a user to visualize sequences of volume images at an adequate speed in conformity with the claims 9 and 10.
A computer program or computer program product as claimed in claim 11 enables the use of programmable data processing units for the execution of a method in accordance with the invention.
The following examples and embodiments are supported by the FIGS. 1 to 5. Therein:
a show an ultrasound apparatus whereby a method in accordance with the invention can be carried out,
4a to 4e diagrammatically show some visualization methods, and
A general illustration of a method in accordance with the invention will first be given with reference to
The volume images B2 to B5 of
In a further step 21 the voxels present in the vicinity of all relevant voxels are also stored as being relevant. The result is a dilation of the regions with relevant voxels as represented by F2 in
In order to enable a more exact description of the method of
Volume images can be acquired, for example, by means of an ultrasound apparatus as diagrammatically shown in
In the system shown in
In the step 301 in
To this end, for example, so-called rendering methods can be used in the step 204; according to these methods the volume values of the voxels are projected on an image in a different manner, or the individual pixels are determined, starting from an image plane or from a virtual point, along fictitious rays extending through the volume image. The principle of such a rendering method, that is, the so-called ray casting, is diagrammatically illustrated in the
Various possibilities exist for the determination of the image values from the volume values of the voxels; these possibilities are dependent on the desired representation and some of them will be described with reference to
e shows this possibility for visualization for a line PZ of the image P; in this Figure the fictitious rays emanate from the pixels of an image line PZ, are parallel and penetrate a slice BB of the volume image B of the
A further possibility for visualization is represented by the ray R14 in
In this case the volume values of voxels which are higher than a given limit value are summed along the ray. The summing is interrupted when a given maximum sum is reached. All voxels contacted or traversed thus far have then been examined in that the relevant volume values have been compared with the limit value. This possibility for visualization is shown in
In the possibility for visualization as denoted by the ray R12 in
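The limited-sum ray-evaluation rule described above can be sketched per fictitious ray as follows; for contrast, a simple first-hit rule of the same family is added. The limit values, the maximum sum and the first-hit rule itself are illustrative assumptions, not taken from the source.

```python
# Hedged sketch of two ray-evaluation rules. Each function receives the
# voxel values encountered along one fictitious ray and returns the pixel
# value together with the indices of the voxels that had to be examined
# (i.e. the voxels relevant for / used by this visualization).

LIMIT = 0.3     # illustrative limit value
MAX_SUM = 1.0   # illustrative maximum sum

def pixel_by_limited_sum(values_along_ray):
    """Sum the voxel values exceeding LIMIT; stop once MAX_SUM is reached."""
    total = 0.0
    examined = []
    for i, v in enumerate(values_along_ray):
        examined.append(i)          # every voxel up to the stop is examined
        if v > LIMIT:
            total += v
        if total >= MAX_SUM:
            break                   # summing is interrupted
    return min(total, MAX_SUM), examined

def pixel_by_first_hit(values_along_ray):
    """Take the first voxel value above LIMIT (a simple surface-style rule,
    assumed here for illustration)."""
    for i, v in enumerate(values_along_ray):
        if v > LIMIT:
            return v, list(range(i + 1))
    return 0.0, list(range(len(values_along_ray)))
```

In both rules the set of examined voxel indices is exactly what the method of the invention would store as relevant for the next volume image.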
The rendering methods discussed herein are to be considered as examples of a large number of known rendering methods. A further rendering method that should be mentioned herein is so-called “splatting”, in which for visualization the volume values of the individual voxels are “thrown” onto the image plane of the image and are smeared in a way similar to a snowball hitting a wall. The sum of all volume values “thrown” onto the image plane yields an image. As in the methods described above, there are again voxels which are relevant for the visualization as well as voxels which are used by the rendering method. A rendering method of this kind is known, for example, from the article by Wenli Cai and Georgios Sakas, “DRR Volume Rendering Using Splatting in Shear-warp Context”, Nuclear Science & Medical Imaging including Nuclear Power Systems, 2000 Symposium, ISBN 0-7803-6503-8, pp. 19-12 ff. According to this method the individual voxels can be stored in a manner which allows very efficient memory access to the volume values during the execution of the rendering method.
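A minimal splatting sketch might look as follows, assuming an orthographic projection along the z axis and an invented fixed 3×3 footprint kernel; the shear-warp scheme of the cited article is considerably more elaborate.

```python
# Hedged splatting sketch. Footprint weights are an invented 3x3 kernel
# (weights sum to 1); the projection is orthographic along z.

FOOTPRINT = {(-1, -1): 0.0625, (0, -1): 0.125, (1, -1): 0.0625,
             (-1,  0): 0.125,  (0,  0): 0.25,  (1,  0): 0.125,
             (-1,  1): 0.0625, (0,  1): 0.125, (1,  1): 0.0625}

def splat(volume):
    """'Throw' every voxel value onto the x-y image plane and smear it
    with FOOTPRINT; the sums over all splats form the image.

    volume -- dict mapping (x, y, z) -> voxel value
    returns a dict mapping pixel (x, y) -> image value
    """
    image = {}
    for (x, y, _z), value in volume.items():
        for (dx, dy), weight in FOOTPRINT.items():
            pixel = (x + dx, y + dy)
            image[pixel] = image.get(pixel, 0.0) + weight * value
    return image
```

Because each voxel contributes independently, only voxels with a non-negligible volume value need to be splatted at all, which is again the set of relevant voxels in the sense of the method.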
According to other rendering methods the volume image is illuminated by a virtual light source whose fictitious light rays are reflected in the direction of the image plane by each voxel. For further rendering methods reference is made to the relevant literature. Most rendering methods, however, have in common that only a part of the voxels of a volume image is relevant for the visualization. In accordance with the invention the part which is relevant for the visualization is stored.
For the further processing of the volume image it is checked in the step 304 of
Assuming that a sequence of volume images of a non-moving object is to be visualized, it is to be expected that in each volume image of the sequence the same voxels are relevant for the visualization. For these voxels a “1” would be stored in the buffer memory and the loop with the steps 302 to 304 could then be completed in optimum form, because in the step 204 a rendering method is applied only to those voxels which are actually relevant for deriving the two-dimensional image. During the visualization of sequences of volume images representing a moving object, however, depending on the object motion it is to be expected that voxels of the next volume image to be visualized, whose corresponding entry in the buffer memory 38 is “0” at the time, are nevertheless relevant for the visualization and must be used in deriving the two-dimensional image in the step 204. Therefore, it is necessary to identify, using a kind of prediction, at least these additional voxels prior to the visualization of the next volume image. To this end, in the step 207 a “1” is also stored in the buffer memory 38 for given neighboring voxels which are situated, for example, in the vicinity of voxels with a stored “1”. The result is a dilation of the regions with relevant voxels.
To this end, for example, the shape and the size of the neighboring regions may be chosen in dependence on the expected motion of the relevant object, or parts thereof, between the instant of acquisition of the previously visualized volume image and that of the next volume image. In a particularly simple case a user who knows the basic motion of the object and the image rate of the volume image sequence specifies a maximum range of motion for the entire object. Such information is incorporated in the step 207, for example, in such a manner that the buffer memory entries of all voxels within a given radius around every voxel that already has the entry “1” are set to “1”. In the case of the ultrasound apparatus shown in
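This radius-based dilation of the buffer entries can be sketched as follows; the derivation of the radius from a user-specified maximum speed, the volume image rate and the voxel size is an assumed interpretation, and all names and numbers are invented.

```python
import math

# Hedged sketch: spherical dilation of the set of voxels flagged "1" in the
# buffer. The radius formula (max speed / (image rate * voxel size)) is an
# assumption about how a user-specified range of motion could be applied.

def dilation_radius(max_speed_mm_s, image_rate_hz, voxel_size_mm):
    """Voxels the object can traverse between two consecutive volume images."""
    return math.ceil(max_speed_mm_s / (image_rate_hz * voxel_size_mm))

def dilate_sphere(flags, radius):
    """Flag every voxel within Euclidean `radius` of a voxel flagged '1'.

    flags -- set of (x, y, z) voxel coordinates with buffer entry '1'
    """
    grown = set(flags)
    r2 = radius * radius
    for (x, y, z) in flags:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                for dz in range(-radius, radius + 1):
                    if dx * dx + dy * dy + dz * dz <= r2:
                        grown.add((x + dx, y + dy, z + dz))
    return grown
```

A faster object or a lower image rate simply yields a larger radius, and hence larger neighboring regions, matching the behavior described for claim 4.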
A more complex possibility for dilation involves the incorporation of a motion model M in the step 207. The motion model M is a general description of the motion carried out by the object during the acquisition of the sequence of volume images. For example, the motion model describes the direction of motion and the speed of individual parts of the object between the instants of acquisition of the relevant volume images for the object shown in the volume images B2 to B5. In the step 207 first the part of the object is determined which is represented by the corresponding volume value from the volume image visualized last, that is, for each voxel for which a “1” is stored in the buffer memory 38. Using the motion model, it is subsequently determined in which direction said part of the object has moved in the next volume image to be visualized. For the corresponding voxel, whose volume value represents this part of the object in the next volume image, a “1” is stored in the buffer memory 38. Generally speaking, this possibility results in neighboring regions which are substantially smaller than those offered by the previously described possibility, but it requires a more elaborate arithmetic unit for the conversion. This type of dilation can also be selected by the user via the control element 40.
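The motion-model-based prediction can be sketched as follows, with the motion model M represented by an assumed per-voxel displacement function; the function name and the representation are illustrative, not taken from the source.

```python
# Hedged sketch of motion-model-based prediction. `motion_model` is an
# assumed stand-in for the model M: it maps a voxel to the displacement
# (in voxels) of the object part represented there until the next frame.

def predict_flags(flags, motion_model):
    """For every voxel flagged relevant in the last visualized volume image,
    flag the voxel to which the motion model says that part of the object
    moves in the next volume image."""
    predicted = set()
    for voxel in flags:
        dx, dy, dz = motion_model(voxel)
        x, y, z = voxel
        predicted.add((x + dx, y + dy, z + dz))
    return predicted
```

In practice one might use the union of the predicted and the previously flagged voxels to hedge against model error; even so, the resulting regions stay much smaller than those of the radius-based dilation, at the cost of evaluating the model per voxel.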
When additional information is made available as regards the state of motion of the object in the visualized volume image, the data required for the dilation can be particularly simply derived from the motion model. For the visualization of the heart by means of the ultrasound apparatus shown in
After the execution of the dilation in the step 207, a next volume image can be visualized. To this end, the steps 201 to 304 are carried out again. It is assumed that in the step 204 the rendering method of
If fictitious rays now penetrate the volume image to be visualized during the visualization, only those voxels whose volume value is “1” in the buffer memory 38 are used for the visualization in the step 204. Thus, only those voxels along a fictitious ray which are situated in a spherical region are compared with the limit value. The other voxels are ignored, so that the required number of comparisons of volume values with the limit value is reduced and the visualization of a volume image is substantially accelerated.
The method shown in the
The method shown in the
The use of the volume data in blocks is advantageous notably when the step 204 utilizes known rendering methods which are based on the block-wise use of the volume data. Such a block-based rendering method is described, for example, in the article by Choong Hwan Lee and Kyo Ho Park, “Fast Volume Rendering using Adaptive Block Subdivision”, The 5th Pacific Conference on Computer Graphics and Applications, 1997, IEEE Computer Society, ISBN 0-8186-8028-8, p. 148 ff.
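The block-wise storage of relevance flags can be sketched as follows; the block size and the helper names are invented. One flag per block of BLOCK³ voxels replaces one flag per voxel, which reduces both the number of storage steps and the number of storage locations.

```python
# Hedged sketch of block-wise relevance flags (BLOCK and names invented).

BLOCK = 4  # illustrative block edge length in voxels

def block_of(voxel):
    """Block coordinates of the block containing `voxel`."""
    x, y, z = voxel
    return (x // BLOCK, y // BLOCK, z // BLOCK)

def store_blocks(relevant_voxels):
    """Store only the blocks that contain at least one relevant voxel."""
    return {block_of(v) for v in relevant_voxels}

def candidates_from_blocks(blocks):
    """Expand the flagged blocks back to the set of voxels to be examined
    when the next volume image is visualized."""
    voxels = set()
    for (bx, by, bz) in blocks:
        for x in range(bx * BLOCK, (bx + 1) * BLOCK):
            for y in range(by * BLOCK, (by + 1) * BLOCK):
                for z in range(bz * BLOCK, (bz + 1) * BLOCK):
                    voxels.add((x, y, z))
    return voxels
```

The expansion back to voxels necessarily includes some non-relevant voxels of each flagged block, the trade-off already noted above for sub-sets that are not minimal.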
The first angle of aperture of the radiation beam 4, denoted by the reference αmax (the angle of aperture is the angle enclosed by a ray of the beam 4 which is situated at the edge in the x-y plane relative to a plane defined by the radiation source S and the axis of rotation 14) then determines the diameter of the examination zone within which the object to be examined must be situated during the acquisition of the measuring values. The angle of aperture of the radiation beam 4 which is denoted by the reference βmax (angle enclosed by the two outer rays in the z direction in a plane defined by the radiation source S and the axis of rotation) then determines the thickness of the examination zone 13 within which the object to be examined must be situated during the acquisition of the measuring values.
The measuring data acquired by the detector unit 16 is applied to a reconstruction unit 10 which reconstructs therefrom the absorption distribution in the part of the examination zone 13 covered by the radiation cone 4. During each revolution of the gantry 1 the object to be examined is traversed completely by the radiation beam 4, so that a respective three-dimensional image data set can be generated during each revolution. The reconstruction unit 10 also comprises an image processing unit 10a which carries out the method illustrated in
The motor 2, the reconstruction unit 10, the radiation source S and the transfer of the measuring data from the detector unit 16 to the reconstruction unit 10 are controlled by a suitable control unit 7. If the object to be examined is larger in the z direction than the dimension of the radiation beam 4, the examination zone can be shifted parallel to the direction of the axis of rotation 14 or the z axis by means of a motor 5 which is also controlled by the control unit 7. The motors 2 and 5 can be controlled in such a manner that the ratio of the speed of displacement of the examination zone 13 to the angular velocity of the gantry 1 is constant, so that the radiation source S and the examination zone 13 move relative to one another along a helical path which is referred to as a trajectory. In this respect it is irrelevant whether the scanning unit, formed by the radiation source S and the detector unit 16, or the examination zone 13 performs the rotation or displacement; only the relative motion is of importance. For the continuous acquisition of volume images the object to be examined is moved cyclically forwards and backwards parallel to the z axis.
When the computer tomograph is used for the examination of the human heart, like in the ultrasound apparatus shown in
The method illustrated in
The versions and embodiments disclosed herein describe methods and apparatus for examining the human heart. However, examinations of other moving objects are also feasible. For example, the method in accordance with the invention can also be used for angiography examinations. A further field of application concerns the visualization of the motion of a joint; in that case the patient slowly moves the joint during the acquisition of the volume images.
Number | Date | Country | Kind |
---|---|---|---|
102 54 323.2 | Nov 2002 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IB03/05130 | 11/13/2003 | WO | 5/17/2005 |