The present invention relates to an image processing apparatus and in particular to an image registration technology for performing registration between images obtained by multiple image diagnosis apparatuses.
Medical image diagnosis allows body information to be obtained noninvasively and thus has been widely performed in recent years. Three-dimensional images obtained by various types of image diagnosis apparatuses, such as x-ray computed tomography (CT) apparatuses, magnetic resonance imaging (MRI) apparatuses, positron emission tomography (PET) apparatuses, and single photon emission computed tomography (SPECT) apparatuses, have been used in diagnosis or follow-up. X-ray CT apparatuses can generally obtain images having little distortion and high spatial resolution; however, the images obtained do not sufficiently reflect histological changes in soft tissue. On the other hand, MRI apparatuses can render soft tissue with high contrast. PET apparatuses and SPECT apparatuses can convert physiological information such as metabolic level into an image, and their images are thus called functional images; however, these apparatuses cannot render the morphology of an organ as clearly as x-ray CT apparatuses, MRI apparatuses, and the like. Ultrasound (US) apparatuses are small and highly mobile, can capture an image in real time, and are particularly suited to rendering the morphology and motion of soft tissue; however, the image pickup area thereof is limited by the shape of the probe. Further, a US image includes much noise and thus does not show the morphology of soft tissue as clearly as a CT image or an MRI image. As seen, these image diagnosis apparatuses have both advantages and disadvantages.
Accordingly, registration between images obtained by multiple apparatuses (hereafter referred to as multi-modality images) allows the disadvantages of the respective images to be compensated for and their advantages to be utilized. This is useful in performing diagnosis, making a therapeutic plan, and identifying the target site during treatment. For example, registration between an x-ray CT image and a PET image allows a precise determination of in which portion of which organ a tumor is located. Further, use of information on the body outline of the patient obtained from a CT image together with information on the position of soft tissue obtained from a US image allows precise identification of the site to be treated, such as an organ or tumor.
Effective utilization of multi-modality images in diagnosis or treatment requires precise and easy registration between images. However, when images of the same subject are captured by multiple apparatuses, the images obtained do not have the same pixel values or the same pixel value distribution even at the same site, because the apparatuses have different image generation mechanisms. Further, the body outline of the subject or the morphology of an organ is clearly rendered in a CT image, an MRI image, or the like, while it is not clearly rendered in a US image, a PET image, or the like. Furthermore, where the body outline or an organ of the subject is not entirely rendered within the field of view, as in a US image, the corresponding site is not clear. This makes registration difficult.
In recent years, utilizing the real-time image pickup capability of a US apparatus, registration has been performed between a US image obtained by monitoring the current state of the subject and a previously captured CT image while comparing these images, so that the position or size of the site to be treated can be monitored during operation. For example, in radio frequency ablation (RFA), treatment is performed while a US image obtained during monitoring is compared with a previously captured CT image. As seen, among multi-modality image registration techniques, there is a particularly increasing need for a technique that, during operation, performs registration between an image obtained in real time and a previously captured, sharp morphology image easily, at high speed, and with a high degree of precision.
Known conventional techniques used to perform registration between multi-modality images include (a) the manual method, where the operator manually moves the images to be positioned; (b) the point surface image overlay method, where a feature or shape (point, straight line, curved surface) in the images to be positioned is set manually or semi-automatically and corresponding features or shapes between the images are matched; and (c) the voxel image overlay method, where the similarity between the pixel values of the images is calculated and registration is then performed (Non-Patent Literature 1).
Another proposed method for performing registration between a CT image and an ultrasonic image is a method of generating a similar image to an ultrasonic image from a CT image and using it for registration (Non-Patent Literature 2).
Non-Patent Literature 1: Hiroshi Watabe, “Registration of Multi-modality Images,” Academic Journal of Japanese Society of Radiological Technology, Vol. 59, No. 1, 2003
Non-Patent Literature 2: Wolfgang Wein, et al., “Automatic CT-ultrasound Registration for Diagnostic Imaging and Image-guided Intervention,” Medical Image Analysis, 12, 577-585, 2008
Non-Patent Literature 3: Frederik Maes, et al., "Multimodality Image Registration by Maximization of Mutual Information," IEEE Trans. Med. Imaging, Vol. 16, No. 2, 1997
A technique used to perform registration between multi-modality images is described in Non-Patent Literature 1. However, the manual method (a) takes time and effort, and its registration precision depends on the subjective judgment of the operator. The point surface image overlay method (b) can automatically perform registration between images once the corresponding shapes are determined; however, since automatic extraction of the corresponding points or surfaces is difficult, the corresponding shapes must be determined manually, and (b) therefore has the same problem as (a). The voxel image overlay method (c) performs registration between images relatively easily compared to (a) and (b). However, it requires that the entire shape of the body outline of the subject be rendered in the images to be positioned, even when the voxel pixel values differ. For example, it is difficult to perform registration between an image where only part of the body outline of the subject or an organ is rendered, such as a US image, and a CT or MRI image where the entirety is rendered.
A technique related to registration between a CT image and an ultrasonic image of multi-modality images is described in Non-Patent Literature 2. However, soft tissue or the like not rendered on a CT image is not rendered on a similar image generated from the CT image, either. Accordingly, where the registration target is soft tissue, sufficient registration cannot be performed.
The main factor making it difficult to perform registration between multi-modality images automatically, at high speed, and with a high degree of precision is that the images to be positioned have different pixel values, rendered shapes, and fields of view. For this reason, operators have conventionally acquired in advance medical knowledge and an understanding of the features of the image pickup apparatuses and the obtained images, and have then performed registration between the images while determining the corresponding positions therebetween.
An object of the present invention is to provide an image processing apparatus and image registration method that, in registration between multi-modality images, can automatically, at high speed, and with a high degree of precision perform registration between images in which the same captured site of the same subject is not rendered with the same pixel value, shape, and field of view, owing to the image pickup apparatuses being of different types.
To accomplish the above-mentioned object, the present invention provides an image processing apparatus and method for performing registration between a plurality of images. The image processing apparatus includes a display unit that can display first and second images captured by different image pickup apparatuses; an input unit that inputs an instruction to perform processing on the second image; and a processing unit that performs processing on the second image. The processing unit generates a pseudo image by dividing the second image into predetermined regions, setting physical property values to the segmented regions, and calculating from them a pixel feature value similar to that of the first image, and performs registration between the first and second images using the generated pseudo image.
Further, there are provided an image processing apparatus and image registration method where, in the calculation of the pixel feature value from the second image, the processing unit further adds an additional area that is not present among the segmented regions, sets a physical property value to the additional area, and subsequently calculates the pixel feature value.
Further, there are provided an image processing apparatus and image registration method where, in the calculation of the pixel feature value from the second image, the processing unit uses theoretical physical property values corresponding to the segmented regions and area averages of pixel values of the segmented regions.
Specifically, for the purpose of accomplishing the above-mentioned object, in order to perform registration between the first and second images, the present invention generates, from one of the images (e.g., the second image), an image having a pixel value, shape, and field of view similar to those of the other image (e.g., the first image) (hereafter referred to as pseudo image) and performs registration between the first image and the pseudo image having the same image feature value as the first image. Thus, registration is performed between the first and second images. In the generation of this pseudo image, the second image is divided into predetermined segmented regions.
Further, in the process of generating the pseudo image, based on the distribution of one of the images (e.g., the second image), the present invention calculates the physical property (physical property value) distribution of the subject related to the generation mechanism of the image pickup apparatus of the other image (e.g., the first image).
Further, when an area having a different physical property distribution (divisional area) is not clearly rendered on the original image from which the physical property (physical property value) distribution has been calculated, the present invention adds the position and shape of the physical property area (additional area).
Further, the present invention calculates from this physical property distribution, at high speed, an image (pseudo image) having feature values similar to the pixel values, the rendered shape, and the field of view of the other image.
According to the present invention, in registration between the first and second images captured by different apparatuses, an image similar to the other image is generated from one image at high speed. Thus, the pixel values, shapes, and fields of view of the same site of the subject, which is the imaging target, can be easily compared. As a result, automatic, high-speed, and high-precision registration can be performed between the images.
Further, in the process of generating a similar image, an area to be positioned is specified in the original image and added thereto. Thus, registration with a higher degree of precision can be performed between the images.
Hereafter, embodiments of the present invention will be described in detail with reference to the drawings. In this specification, data on an image A and data on an image B may be referred to as image A data and image B data, first image data and second image data, or image data A and image data B, respectively.
The overall configuration of an image registration system according to a first embodiment is shown in
As shown, the main body of the image pickup apparatus 101 further includes a communication device 104 for communicating with the inside of the main body, an image generation processing device 105 for generating an image from image capture data, a storage device 106 for storing data such as a processing result or image or an image generation program, a control device 107 for controlling the main body and the image generation processing device 105 of the image pickup apparatus 101, and a main storage device 108 for, when performing an image generation operation, temporarily storing the image generation program stored in the storage device 106 and data required for processing. This configuration can be composed of a computer including an ordinary communication interface, a central processing unit (CPU) serving as a processing unit, and a memory serving as a storage unit. That is, the image generation processing device 105 and the control device 107 correspond to processing performed by the CPU.
An image data server 110 includes a communication device 111 connected to a network 109 and configured to exchange data with other apparatuses, a storage device 112 for storing data, a data operation processing device 113 for controlling the internal devices of the image data server 110 and performing on data an operation such as compression of the data capacity, and a main storage device 114 for temporarily storing a processing program used by the data operation processing device 113 or data to be processed. Needless to say, in the server 110 also, the data operation processing device 113 corresponds to the above-mentioned CPU serving as a processing unit, and the image data server 110 is composed of an ordinary computer.
The image pickup apparatus 101 can transmit a captured image to the image data server 110 via the communication device 104 and the network 109 and store image data in the storage device 112 in the image data server 110.
An image processing apparatus 115 includes a main body 118, a monitor 116 for displaying an operation result and a user interface, and input means 117 serving as an input unit used to input an instruction to the main body 118 via the user interface displayed on the monitor 116. The input means 117 is, for example, a keyboard, a mouse, or the like.
The image processing apparatus main body 118 further includes a communication device 119 for transmitting input data and an operation result, an image registration operation processing device 120, a storage device 125 for storing data and an image registration operation program, and a main storage device 126 for temporarily storing an operation program, input data, and the like so that they can be used by the image registration operation processing device 120. The image registration operation processing device 120 includes, for performing the image registration operation, an area division operation processing device 121, a physical property value application operation processing device 122, a pixel value calculation operation processing device 123 for calculating a pixel value from a physical property value distribution, and a movement amount calculation operation processing device 124. Details of the image registration operation processing performed by the image registration operation processing device 120 will be described later. Needless to say, in the image processing apparatus 115 also, the image registration operation processing device 120 of the main body 118 corresponds to the above-mentioned CPU serving as a processing unit, and the image processing apparatus 115 is composed of an ordinary computer.
The image processing apparatus 115 can obtain an image to be positioned from the image pickup apparatus 101 or the image data server 110 via the communication device 119 and the network 109.
The flow of image registration in the image registration system according to the first embodiment will be described using
First, an image of the target organ or affected site of the subject is captured using the image pickup apparatus 101. The ultrasonic image A generated by the image generation processing device 105 is stored in the storage device 106. The CT image B, whose image capture area includes the area captured by the image pickup apparatus 101, is stored in the image data server 110. The image processing apparatus 115 reads the ultrasonic image A from the image pickup apparatus 101 and the CT image B from the image data server 110 via the network 109 (steps 201 and 202) and stores them in the storage device 125 and the main storage device 126 (step 203).
It is assumed that the first image, the image A, stored in the storage device 106 of the image pickup apparatus 101 and the second image, the image B, stored in the storage device 112 of the image data server 110 are in the format of the Digital Imaging and Communications in Medicine (DICOM) standard, which is generally used in the field of image pickup apparatuses.
In this embodiment, to perform registration between the image A and the image B, the second image, the image B, is first divided into regions on a main organ basis (step 204). The method for dividing the image B into regions will be described using
Where the image capture site is, e.g., the stomach, the second image, an image B301, is divided into five regions, that is, the regions of air, soft tissue, organ, blood vessel, and bone, or six regions, as shown in
The most common method for dividing an image into regions is to set upper and lower thresholds on the basis of pixel values in advance and then divide the image into regions using those thresholds. However, where the imaging conditions are different, the image pickup apparatuses are of different types, or the subjects are different, the pixel value at the same site varies, so the same upper and lower thresholds cannot always be applied. Failure to divide the image into regions properly would affect the shape of the organ appearing on the pseudo image and reduce registration precision. Accordingly, proper division into regions is required.
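As a minimal sketch, the fixed-threshold division can be written as follows in Python. The Hounsfield-unit cut-offs and the four-region split (air, soft tissue, blood vessel, bone) are assumed example values, not thresholds taken from the embodiment:

```python
import numpy as np

# Fixed-threshold region division of a CT slice (illustrative sketch).
# The cut-off values below are assumed examples.
def divide_by_thresholds(ct_slice, thresholds=(-200.0, 50.0, 300.0)):
    """Return an integer label map: 0=air, 1=soft tissue,
    2=blood vessel, 3=bone, using fixed upper/lower thresholds."""
    return np.digitize(ct_slice, np.asarray(thresholds))

ct = np.array([[-1000.0, -30.0], [120.0, 900.0]])  # toy 2x2 CT patch
labels = divide_by_thresholds(ct)                  # [[0, 1], [2, 3]]
```

As the paragraph above notes, such fixed thresholds break down when the imaging conditions or subjects differ, which motivates the clustering approach described next.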
Techniques for calculating the upper and lower thresholds of a pixel value in accordance with the distribution of pixel values include the clustering method. The clustering method is a technique of, in accordance with a specified number of segmented regions, calculating a median for each region so that the differences between the median and the pixel values distributed around it are minimized. This technique allows the upper and lower thresholds to be calculated in accordance with the differences between the pixel values of the subject. In this embodiment, the clustering method is used as one technique for accomplishing high-precision area division even when the image pickup conditions or the subjects are different.
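The clustering described above can be sketched as a one-dimensional Lloyd-type clustering of pixel values; the mean-update rule below is an assumed example, and the exact update rule used by the embodiment may differ:

```python
import numpy as np

# One-dimensional clustering of pixel values into a specified number of
# regions: each region centre is repeatedly moved so that the deviations
# of the pixel values assigned to it are reduced (Lloyd-type iteration).
def cluster_pixel_values(image, n_regions, n_iter=50, seed=0):
    values = image.ravel().astype(float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, size=n_regions, replace=False)
    for _ in range(n_iter):
        # Assign every pixel to its nearest centre.
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        # Move each centre to the mean of its assigned values.
        for k in range(n_regions):
            if np.any(labels == k):
                centres[k] = values[labels == k].mean()
    return labels.reshape(image.shape), centres
```

The region boundaries implied by the centres play the role of the upper and lower thresholds, so they adapt to the pixel value distribution of each image.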
The number of segmented regions can optionally be set by the operator. For example, the number of organs rendered on the image varies depending on the image pickup site. Accordingly, the image may be divided into a larger number of regions, or the number of segmented regions may be limited. As long as the image B is divided into at least two regions, the regions can be used for registration.
Next, physical property value parameters for calculating features similar to those of the image A, on the basis of the generation mechanism of the image A, are set to the segmented regions (step 205). Since theoretical physical property values for ultrasound at the respective sites of the human body are already known, the physical property values can be set to the segmented regions, as shown in
However, if only a single theoretical value is set to each region, the distribution of pixel values within the region is lost. For this reason, in this embodiment, to utilize the distribution of pixel values in the segmented regions, a physical property value f_new(x, y) is set from the original pixel value f(x, y) on the basis of the following formula, using the area averages (Avg1 to Avg4) 304 of the pixel values of the regions and the theoretical physical property values (Value1 to Value4) 305 shown in

f_new(x, y)=Value_n+w·(f(x, y)−Avg_n) [Formula 1]

where n denotes the segmented region to which the pixel at (x, y) belongs.
Here, w is a parameter that controls to what extent the original pixel value distribution should be taken into account. This makes it possible to set physical property values in consideration of the pixel value distribution of the image B itself, and as a result, an image 303 having features similar to those of the image A can be obtained. At this point, the operator determines whether the above-mentioned area division and physical property value setting are sufficient (step 206). If they are not sufficient, the operator can return to step 205 and repeatedly perform area division, physical property value setting, and pixel feature value calculation. With respect to the distribution image of the physical property values f_new(x, y) thus calculated, pixels on a straight line are tracked, and a pixel value distribution (pseudo image) similar to that of the image A is then calculated using a convolution operation (step 207).
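The setting of f_new can be sketched as follows, assuming the per-region mapping takes the form f_new(x, y) = Value_n + w·(f(x, y) − Avg_n), which combines the theoretical values Value_n, the area averages Avg_n, and the weight w described above (the exact formula of the embodiment may differ):

```python
import numpy as np

# Per-region physical-property mapping, assuming the form
#   f_new(x, y) = Value_n + w * (f(x, y) - Avg_n)
# where Value_n is the theoretical physical property value of region n,
# Avg_n the area average of its pixel values, and w weights how strongly
# the original pixel-value distribution is carried over.
def to_physical_property(image, labels, theory_values, w=0.5):
    f_new = np.empty_like(image, dtype=float)
    for n, value_n in enumerate(theory_values):
        mask = labels == n
        if not np.any(mask):
            continue
        avg_n = image[mask].mean()          # area average Avg_n
        f_new[mask] = value_n + w * (image[mask] - avg_n)
    return f_new
```

With w = 0 each region collapses to its theoretical value; larger w preserves more of the original intra-region contrast of the image B.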
Next, using
I(x, y)=Σ_{n=−N}^{N} V(i+n)·g(i+n) [Formula 2]
A pseudo image of I(x, y) obtained from the above-mentioned operation on the basis of image data B601 has high pixel values on boundaries where there is a large difference between the physical property values as shown in 602 of
The value of N can optionally be set by the operator. If the image pickup apparatus 101 is an ultrasonic apparatus, a value according to the frequency of the ultrasound can be set. With respect to the range to which the above-mentioned calculation is applied, the field of view to be compared for registration can be set or changed.
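Reading Formula 2 as a windowed sum of the physical property values V along each scan line with a window function g of half-width N, the pseudo image generation can be sketched as follows. The derivative-of-Gaussian window is an assumed example choice; it responds strongly at boundaries between differing physical property values, matching the behaviour described above:

```python
import numpy as np

# Pseudo-image generation along horizontal scan lines: each output pixel
# is a windowed sum of the physical property values V over 2N+1 samples.
# The odd (edge-detecting) window g below is an assumed example kernel.
def pseudo_image(prop_map, N=2, sigma=1.0):
    n = np.arange(-N, N + 1, dtype=float)
    g = -n * np.exp(-n**2 / (2.0 * sigma**2))   # derivative-of-Gaussian
    padded = np.pad(prop_map.astype(float), ((0, 0), (N, N)), mode="edge")
    out = np.zeros_like(prop_map, dtype=float)
    width = prop_map.shape[1]
    for shift, weight in zip(range(-N, N + 1), g):
        out += weight * padded[:, N + shift : N + shift + width]
    return np.abs(out)   # boundary strength regardless of sign
```

On a flat region the window sums to zero, while at a step between two physical property values the output is large, which is why the resulting pseudo image has high pixel values on boundaries.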
An example where calculation is performed in a section in
Next, in step 208 of
Image registration is more preferably performed as follows. That is, each time an evaluation function is calculated, the operator determines whether registration is sufficient (step 209). If not sufficient, the operator converts the image position (step 210) and returns to the evaluation function calculation step to repeat the above-mentioned operation. If registration is sufficient, the operator completes the operation. The position conversion parameter obtained in this image position conversion is stored in the main storage device 126.
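The evaluation function calculation and the iterative position conversion can be sketched as follows, assuming mutual information (as in Non-Patent Literature 3) as the evaluation function and a simple exhaustive search over integer translations as the position conversion; the embodiment's actual evaluation function and optimiser may differ:

```python
import numpy as np

# Mutual information between two images via a joint histogram
# (multi-modality similarity measure in the sense of Non-Patent
# Literature 3; the bin count is an example setting).
def mutual_information(a, b, bins=16):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Illustrative position conversion loop: try candidate integer shifts of
# the pseudo image and keep the one maximising the evaluation function.
def register_translation(image_a, pseudo, max_shift=3):
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(pseudo, (dy, dx), axis=(0, 1))
            score = mutual_information(image_a, shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Each candidate shift corresponds to one pass through the evaluation and position conversion steps; the shift that maximises the evaluation function is retained as the position conversion parameter.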
Since the pseudo image according to this embodiment is generated from the image data B in the first place, the positional correspondence between the image data B and the pseudo image is uniquely determined. For this reason, the processing unit such as the CPU applies the position conversion parameter obtained with respect to the image data A and the pseudo image to the second image, the image data B, obtains data on the registered second image, registered image data B (step 211), and stores it in the main storage device 126. If necessary, the registered image data B may be stored in the storage device 125 of the image processing apparatus main body 118. In the last step of the processing flow performed by the processing unit of
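Because the pseudo image inherits the coordinates of the image B, the position conversion parameter found against the image A can be applied to the image B directly. A sketch, assuming for illustration that the parameter is a pure integer translation (dy, dx) (the embodiment's conversion may be more general, e.g., include rotation):

```python
import numpy as np

# Applying the position conversion parameter to image B: the same
# (dy, dx) shift found for the pseudo image is applied to B itself.
def apply_position_conversion(image_b, shift):
    dy, dx = shift
    return np.roll(image_b, (dy, dx), axis=(0, 1))

b = np.arange(9.0).reshape(3, 3)
registered_b = apply_position_conversion(b, (1, 0))  # one row down
```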
An example of a screen displayed on the monitor according to this embodiment will be described using
As described above in detail, according to the image registration system and the image registration method provided in this embodiment, high-speed, high-precision image registration can be accomplished by generating a pseudo image even when the same site of the subject, which is the imaging target, has different pixel values, shapes, or fields of view in images obtained by different image pickup apparatuses.
Various modifications can be made to the configuration described in the above-mentioned first embodiment without impairing the functions thereof. In this embodiment, the image pickup apparatus 101, the image data server 110, and the image processing apparatus 115 have been described as separate apparatuses; however, these apparatuses may be configured as a single apparatus, that is, as a single computer including programs corresponding to the functions thereof. Further, some of the above-mentioned apparatuses or functions may be configured as a single apparatus, that is, as a single computer. For example, the image pickup apparatus 101 and the image processing apparatus 115 may be configured as a single apparatus.
Further, in the first embodiment, the DICOM format is used as the format of the image data A transmitted from the image pickup apparatus 101 to the image processing apparatus 115 and as the format of the image data B transmitted from the image data server 110 to the image processing apparatus 115; however, other formats such as a JPEG image and a bitmap image may be used.
Further, the configuration where the image data server 110 stores data files is used in the first embodiment; however, the image pickup apparatus 101 and the image processing apparatus 115 may directly communicate with each other to exchange a data file. Furthermore, image files may be stored in the main storage device 126 of the image processing apparatus 115 rather than in the image data server 110. While the configuration where a data file or the like is exchanged via the network 109 has been described, other storage media, for example, transportable large-capacity storage media such as a floppy disk® and a CD-R, may be used as means for exchanging a data file.
While the ultrasonic apparatus has been described as the image pickup apparatus 101 in the above-mentioned embodiment, this embodiment can also be applied as it is to apparatuses other than the ultrasonic apparatus, such as an endoscopic device, simply by changing the convolution function used when generating the pseudo image. Since the pseudo image can be calculated as a three-dimensional image in step 207 as described above, this embodiment is applicable even when the images to be positioned are three-dimensional images.
Next, a method where, in step 205 of
Various methods, such as freehand drawing and polygon specification, can be used as the method for specifying the additional area 5. While the area is specified in a section in
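A polygon specification of an additional area can be sketched as follows; pixels inside the operator-drawn polygon form a mask to which a chosen physical property value can then be assigned. The even-odd rasterisation and the example vertices are illustrative assumptions:

```python
import numpy as np

# Rasterise an operator-specified polygon into an "additional area" mask
# using an even-odd (ray-casting) point-in-polygon test.  Vertices are
# given as (y, x) pairs; the polygon is an example input.
def polygon_mask(shape, vertices):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = np.zeros(shape, dtype=bool)
    n = len(vertices)
    for k in range(n):
        y1, x1 = vertices[k]
        y2, x2 = vertices[(k + 1) % n]
        # An edge crosses the horizontal ray of a pixel when its two
        # end points lie on opposite sides of the pixel's row.
        crosses = (y1 > yy) != (y2 > yy)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = x1 + (yy - y1) * (x2 - x1) / (y2 - y1)
            inside ^= crosses & (xx < x_cross)
    return inside
```

The resulting boolean mask can be merged into the label map from the area division step, after which a physical property value for the added area is set in the same way as for the automatically segmented regions.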
The present invention relates to an image processing apparatus and is particularly useful as an image registration technology for performing registration between images obtained by multiple image diagnosis apparatuses.
101 . . . image pickup apparatus
102 . . . monitor
103 . . . input means
104 . . . communication device
105 . . . image generation processing device
106 . . . storage device
107 . . . control device
108 . . . main storage device
109 . . . network
110 . . . image data server
111 . . . communication device
112 . . . storage device
113 . . . data operation processing device
114 . . . main storage device
115 . . . image processing apparatus
116 . . . monitor
117 . . . input means
118 . . . image processing apparatus main body
119 . . . communication device
120 . . . image registration operation processing device
121 . . . area division operation processing device
122 . . . physical property value application operation processing device
123 . . . pixel value calculation operation processing device
124 . . . movement amount calculation operation processing device
125 . . . storage device
126 . . . main storage device
Number | Date | Country | Kind
---|---|---|---
2009-285242 | Dec 2009 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP10/72628 | 12/16/2010 | WO | 00 | 7/23/2012