The present invention relates to a magnetic resonance imaging apparatus capable of imaging a wide region of an object to be examined by dividing the object into a plurality of regions.
Among magnetic resonance imaging apparatuses (hereinafter referred to as MRI apparatuses), there is a type that employs the multi-station imaging method, which performs imaging by dividing an object into a plurality of regions (hereinafter referred to as stations, and the imaging method as multi-station imaging), synthesizes the images obtained in the respective stations (hereinafter referred to as station images) for each image type, and thereby reconstructs an image of a wide region of the object.
In multi-station imaging, a wide region of an object can be imaged for each image type by imaging plural types of images, for example a T1 weighted image, a T2 weighted image and a proton density image, in the respective stations to obtain station images, and synthesizing the obtained station images (see, for example, Non-patent Document 1).
In conventional MRI apparatuses, the imaging region for acquiring a plurality of image types has been limited to a head region or the like. The number of images per region has therefore been small, for example about 10 images, and classifying or rearranging the images was not a burden to the operator even when it had to be carried out manually.
However, since the number of images is large in the multi-station method, which performs imaging in a plurality of stations, it is desirable that the classification and rearrangement of images be executed by the MRI apparatus to reduce the burden on the operator. For example, when a T1 weighted image and a T2 weighted image are juxtaposed and displayed, the number of images to be read in increases enormously, and the procedure for selecting the necessary series of images and rearranging the displayed images becomes complicated. For this reason, if the MRI apparatus can perform classification and rearrangement of the images, the burden on the operator can be greatly reduced.
Patent Document 1 discloses an example which displays a plurality of station images by sequence, station or slice, with settings such that a head image is displayed on the upper part of the screen and a leg image on the lower part of the screen. Patent Document 2 discloses a technique for changing the layout of the screen display by using a variety of information associated with the images.
Patent Document 1: WO2006-134958
Patent Document 2: JP-A-2004-33381
Non-Patent Document 1: Japanese Journal of Radiology, Vol. 61, No. 10, pp. 21-22, 2001
In the multi-station imaging method, when a plurality of images are read in to be synthesized or compared, images of a plurality of image types and stations need to be classified, so image classification is a crucial technique from the viewpoint of improving operability.
However, Patent Document 1 only discloses a user interface for displaying a plurality of station images by sequence, station or slice in a predetermined display order, and does not consider an algorithm for classifying the plurality of images. Patent Document 2 displays images having a specified index at a specified position, but does not disclose a function for specifying images by referring to the imaging conditions, nor does it disclose processing for discriminating image types or stations.
In view of the above-described circumstances, the objective of the present invention is to provide an MRI apparatus capable of classifying a plurality of images obtained by multi-station imaging.
In order to solve the above-described problem, the MRI apparatus of the present invention is characterized by comprising:
an image acquisition unit configured to divide an imaging region of an object to be examined into a plurality of stations, and obtain a plurality of images having different image types for each imaging station;
a classification processing unit configured to classify the plurality of images by image types; and
a display control unit configured to display the plurality of images by image type in a predetermined display format based on the classification result of the classification processing unit.
Also, the image classification method of the present invention is characterized by classifying, by image type, a plurality of images obtained for each station by a method which divides the object into a plurality of stations, and displaying the plurality of images in a predetermined format based on the classification result.
In accordance with the present invention, the operability of the MRI apparatus can be improved by providing a function for classifying the plurality of images obtained in multi-station imaging, thereby simplifying the operations to be performed by an operator when synthesizing or comparing the obtained images.
The best mode for carrying out the present invention will be described below on the basis of the attached diagrams.
While the RF coil executes both transmission and reception in
Next, the operational procedure of the case for imaging object 102 using MRI apparatus 1 shown in
In accordance with the imaging condition specified by an operator, sequencer 116 transmits commands to gradient magnetic field sources 109, 110 and 111 in compliance with a predetermined pulse sequence, and gradient magnetic fields in the respective directions are generated by gradient magnetic field coils 105, 106 and 107. At the same time, sequencer 116 transmits a command to synthesizer 112 and modulator 113 to generate an RF waveform, and the RF pulse amplified by RF source 108 is irradiated from RF coil 104 to object 102.
The echo signal produced from object 102 is received by RF coil 104, amplified by amplifier 114, and A/D converted and detected in receiver 115. The center frequency used as the reference for detection is a previously measured value kept in storage medium 117; it is read out by sequencer 116 and set in receiver 115. The detected echo signal is transmitted to computer 118 and subjected to image reconstruction processing. The result of processing such as image reconstruction is displayed on display 119.
Next, the case of imaging a wide region of object 102 by the multi-station imaging method using MRI apparatus 1 will be described referring to
First, a T1 weighted image, T2 weighted image and proton image are imaged in station 1 in which a chest region is set as a region of interest. After imaging is completed in station 1, bed 103 is moved to station 2 in which an abdominal region is set as a region of interest, and a T1 weighted image, T2 weighted image and proton image are imaged in station 2. After imaging is completed in station 2, imaging is executed in station 3 in which a lower extremity is set as a region of interest, in the same manner as station 2.
When the imaging of the T1 weighted image, T2 weighted image and proton image is completed at all of the station positions, diffusion weighted images are obtained in order from the lower extremity to the chest region. Generally, imaging of diffusion weighted images is influenced by inhomogeneity of the static magnetic field and the like, so the station width in the body-axis direction needs to be set narrower than for T1 weighted images, T2 weighted images and proton images, and the number of stations therefore needs to be increased. In the case of executing the imaging shown in
In any station, there are cases where imaging needs to be executed again for reasons such as artifacts due to body motion appearing in the image. In such cases, there will be a plurality of images of the same image type in the same station.
It is also possible to obtain calculation images from reconstruction images such as a T1 weighted image, T2 weighted image, proton image or diffusion weighted image. A calculation image, such as an MIP (Maximum Intensity Projection) image or a difference image, is constructed by performing calculation processing on a plurality of reconstruction images and displaying the calculation result as an image.
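As a minimal illustration of how such calculation images can be produced (a sketch only, not part of the disclosed embodiment), an MIP image keeps the brightest voxel along one axis of a stack of reconstruction images, and a difference image is a voxel-wise subtraction. The NumPy code below assumes the reconstruction images are already available as arrays.

```python
import numpy as np

# Hypothetical stack of reconstruction images: (slice, row, column).
volume = np.random.rand(32, 256, 256)

# Maximum Intensity Projection: for every (row, column) position, keep the
# brightest voxel encountered along the slice direction.
mip_image = volume.max(axis=0)            # shape: (256, 256)

# Difference image: voxel-wise subtraction of two reconstruction images.
difference_image = volume[1] - volume[0]  # shape: (256, 256)
```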
The plurality of images obtained by the above-described method are registered in the database as shown in
In
Next, classification and rearrangement of such imaged reconstruction images or calculation images will be described. First, the conventional method for classification and rearrangement of images will be described using
Such manual classification and rearrangement of images becomes a heavy burden on the operator, especially when the multi-station imaging method is used to image a wide region of an object or when images of a plurality of image types are juxtaposed and displayed, since a great number of images need to be classified and rearranged. It is therefore desirable that the MRI apparatus execute the classification and rearrangement of images automatically in order to reduce the workload of the operator.
The classification of the plurality of images obtained by the multi-station imaging method, and the rearrangement of those images using the classification result, according to the present invention will be described below. First, the algorithm for classifying the reconstruction images and calculation images (hereinafter referred to as the automatic classification algorithm) will be described.
The automatic classification algorithm executes mainly three kinds of discrimination processes. The first is the discrimination of image types (steps S1-1 to S1-6), which requires the most complicated processing of the three. The second is the discrimination of station positions (steps S2-1 to S2-4). These two processes correspond, respectively, to the lateral axis and the longitudinal axis of the display format in which the classified images are to be displayed. The third is the discrimination of imaging times (step S3-1). This process handles the case where imaging is performed anew for the same image type and the same station position, for a reason such as movement of the object during imaging or generation of artifacts in the image.
A characteristic of the classification process of the present invention is that the discrimination of image types is executed in several stages: the discrimination of station positions and imaging times is applied after each stage of image-type classification is completed, and detailed image classification is then applied only to the image types whose classification is determined to be incomplete. This reduces the redundancy of an excessively complicated discrimination process and the increase in processing time that would otherwise occur when the imaging condition is changed for each station.
The typical automatic classification algorithm in the present invention will be described based on the flowchart in
First, the respective station images specified by the operator are classified into reconstruction images and calculation images (step S1-1). Reconstruction images are generated by applying imaging filters such as Fourier transformation, smoothing or edge enhancement, whereas calculation images, such as MIP images or difference images, are generated by performing calculation processing on a plurality of reconstruction images and displaying the calculation result as an image. This classification refers to, for example, the value of a DICOM private tag: a tag of a calculation image retains a record of what type of processing was performed on the image, and the classification can be carried out by referring to that record.
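A minimal sketch of the classification in step S1-1 is given below, assuming each image is represented by a plain metadata dictionary whose hypothetical "derivation" entry plays the role of the private tag mentioned above; the key names are illustrative assumptions, not actual DICOM tag identifiers.

```python
def split_reconstruction_and_calculation(images):
    """Step S1-1 (sketch): separate reconstruction images from calculation
    images by checking whether a derivation record is present."""
    reconstruction, calculation = [], []
    for img in images:
        # A non-empty 'derivation' entry records which calculation process
        # (MIP, difference, ...) produced the image, so it is a calculation
        # image; otherwise it is treated as a reconstruction image.
        if img.get("derivation"):
            calculation.append(img)
        else:
            reconstruction.append(img)
    return reconstruction, calculation


# Illustrative usage with made-up metadata:
images = [
    {"series": "T1W chest", "derivation": None},
    {"series": "DWI MIP",   "derivation": "maximum intensity projection"},
]
recon, calc = split_reconstruction_and_calculation(images)
```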
Next, the reconstruction images and calculation images are classified, by referring to the value of the inversion time TI, which is one of the imaging parameters, into images whose TI value is other than zero and images whose TI value is zero (step S1-2). Hereinafter, a reconstruction image whose TI value is other than zero is referred to as an IR image, and a reconstruction image whose TI value is zero as a non-IR image. The imaging parameters used in this and the subsequent classification steps are obtained by referring to the values of DICOM tags. The respective calculation images and reconstruction images are then classified into the axial plane, sagittal plane and coronal plane by referring to the slice plane, which is one of the imaging parameters (step S1-3), and further classification is executed by referring to the imaging method, which is also one of the imaging parameters (step S1-4). As imaging methods, for example, the SE (Spin Echo) method and the EPI (Echo Planar Imaging) method are commonly known.
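Steps S1-2 to S1-4 can be sketched as building a coarse image-type key from the imaging parameters and grouping the images by that key; the dictionary keys "inversion_time", "slice_plane" and "imaging_method" below are illustrative stand-ins for the corresponding tag values, not actual tag names.

```python
from collections import defaultdict

def first_stage_type_key(img):
    """Coarse image-type key for the first stage (sketch)."""
    is_ir = img.get("inversion_time", 0) != 0   # S1-2: IR vs. non-IR image
    plane = img.get("slice_plane")              # S1-3: axial / sagittal / coronal
    method = img.get("imaging_method")          # S1-4: e.g. SE, EPI
    return (is_ir, plane, method)

def classify_first_stage(images):
    """Group images into coarse image types (steps S1-2 to S1-4, sketch)."""
    groups = defaultdict(list)
    for img in images:
        groups[first_stage_type_key(img)].append(img)
    return dict(groups)
```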
The above-described processes from step S1-1 to step S1-4 constitute the first stage of the image-type discrimination process. The order of steps S1-1 to S1-4 is not limited to the above; however, the described order is optimized in consideration of the points below.
Point 1: since calculation images are generated from reconstruction images, the separation of calculation images from reconstruction images could not be executed properly if the classification by imaging method were performed first.
Point 2: there are fewer types of calculation images than types of reconstruction images.
After step S1-4 is completed, whether reconstruction images or calculation images having the same station position exist is confirmed for each of the classified image types by referring to the station positions (step S2-1). An image type in which no reconstruction images or calculation images having the same station position were found in step S2-1 is determined to be an image type whose classification is completed, and is excluded from the subsequent classification process (step S2-2).
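Under the same illustrative metadata convention (a hypothetical "station" key holding the station position), steps S2-1 and S2-2 reduce to a duplicate check within each coarse image type, as in the sketch below.

```python
from collections import Counter

def has_duplicate_stations(images_of_one_type):
    """Step S2-1 (sketch): true if two or more images of this image type
    share the same station position."""
    counts = Counter(img["station"] for img in images_of_one_type)
    return any(n > 1 for n in counts.values())

def split_completed_types(groups):
    """Step S2-2 (sketch): image types without duplicated station positions
    are regarded as completely classified and excluded from further steps."""
    completed = {k: v for k, v in groups.items() if not has_duplicate_stations(v)}
    pending = {k: v for k, v in groups.items() if has_duplicate_stations(v)}
    return completed, pending
```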
Here, the following cases can be expected for an image type in which reconstruction images or calculation images having the same station position exist:
(1) images that actually belong to different image types have been grouped into the same image type; or
(2) a plurality of images of the same image type have been obtained at the same station position, for example because imaging was repeated.
Here, case (1) requires a more detailed classification of the image type, and case (2) requires a determination of which image should be selected. When the two are compared, the process for case (1) is prioritized, since it is important to execute the classification of image types accurately. Therefore, the second stage of the image-type discrimination process described below is carried out after step S2-2.
For the image types in which images having the same station position existed in step S2-1, the reconstruction images and calculation images are classified by comparing the value of the imaging parameter TE (echo time) with a predetermined threshold (step S1-5). For the image types whose TE was less than the threshold in step S1-5, the reconstruction images and calculation images are further classified by comparing the value of the imaging parameter TR (repetition time) with a predetermined threshold (step S1-6).
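The second-stage discrimination of steps S1-5 and S1-6 can be sketched as two threshold comparisons. The threshold values and the keys "echo_time" and "repetition_time" below are assumptions chosen only for illustration, not values taken from the disclosure; the example labels follow the common association of long TE with T2 weighting, long TR with proton density and short TR with T1 weighting.

```python
# Illustrative thresholds in milliseconds; the actual values are not
# specified in the text and would be chosen for the protocol at hand.
TE_THRESHOLD_MS = 60.0
TR_THRESHOLD_MS = 1500.0

def second_stage_type_key(img):
    """Steps S1-5 and S1-6 (sketch): refine the image type by comparing TE
    and TR with predetermined thresholds (S1-6 is applied only to images
    whose TE is below the threshold, as in the text)."""
    if img.get("echo_time", 0.0) >= TE_THRESHOLD_MS:               # S1-5
        return "long TE (e.g. T2 weighted)"
    if img.get("repetition_time", 0.0) >= TR_THRESHOLD_MS:         # S1-6
        return "short TE, long TR (e.g. proton density)"
    return "short TE, short TR (e.g. T1 weighted)"
```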
The process related to case (1) is now completed. As with steps S1-1 to S1-4, the order of steps S1-5 and S1-6 is not limited to the order described above, but the described order is optimized in consideration of the points below.
Point 3: in the image-type classification process, the classification of proton images, T1 weighted images and T2 weighted images is assumed. Since these images are rarely acquired by the same imaging method, the classification of these image types is not included in the first half of the process, steps S1-1 to S1-4.
Point 4: among the above-described three image types, the discrimination of a T2 weighted image by TE is prioritized, since it is the easiest discrimination process.
Point 5: there are cases where synchronized (gated) imaging is performed in the chest or abdominal region, so that the images are obtained with different TR values among the chest region, abdominal region and lower extremity region; the process referring to the value of the imaging parameter TR is therefore executed as the last step of the image-type classification.
For each image type classified in step S1-5 and step S1-6, the existence of reconstruction images or calculation images having the same station position is confirmed again by referring to the station positions (step S2-3). An image type in which no reconstruction images or calculation images having the same station position were found in step S2-3 is determined to be an image type whose classification is completed (step S2-4). On the other hand, the process for case (2) is applied to the image types in which reconstruction images or calculation images having the same station position still exist. More specifically, within such an image type, the imaging times of the plurality of reconstruction images or calculation images obtained at the same station position are compared, and the image obtained later is selected as the image to be used for display or for generation of a composite image (step S3-1).
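Step S3-1 then resolves the remaining same-station duplicates by keeping only the most recently acquired image. The sketch below assumes a hypothetical "acquisition_time" entry that is directly comparable (for example an ISO 8601 time string or a datetime object).

```python
def pick_latest_per_station(images_of_one_type):
    """Step S3-1 (sketch): among images of one image type that share a
    station position, keep the one with the latest acquisition time."""
    latest = {}
    for img in images_of_one_type:
        station = img["station"]
        if (station not in latest
                or img["acquisition_time"] > latest[station]["acquisition_time"]):
            latest[station] = img
    # One image per station, ordered by station position for display.
    return [latest[s] for s in sorted(latest)]
```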
The automatic classification process is now completed.
In accordance with the automatic classification process of the present embodiment, it is possible to achieve the image display as shown in
Since all of the processes capable of executing automatic image classification in most circumstances are included in the process in
The method for selecting and executing only the necessary process will be described below.
Selection of the process is to be executed by selecting square button switches 12, 13, 14 and 15. In
The process related to the determination of the slice plane, indicated by square button switch 14, is a selection of a more detailed process and is executed by specifying priority items using circular button switches 16.
Also, a display format can be set using
While the first embodiment of the automatic classification algorithm collectively classifies various types of images, the second embodiment of the automatic classification algorithm classifies images in the case where the images of the object are obtained in the image types below. In the explanation below, portions that are the same as in the first embodiment are given the same reference symbols and their explanation is omitted.
The above-mentioned three image types, i.e., T1 image data, T2 image data and MIP image data, and the station positions of the respective image data are to be classified. This is the case where only button switch 12 in
The flow of the process in the second embodiment of the automatic classification algorithm will be described below based on the flowchart in
First, the T1 image data and T2 image data are classified as reconstruction images, and the MIP image data is classified as a calculation image (step S1-1). The T1 image data and T2 image data classified as reconstruction images are recognized as two different image types by step S1-4, which determines the imaging method. In the same manner, the MIP image data classified as a calculation image is recognized as one image type in step S1-4. As stated above, three image types are recognized through steps S1-1 to S1-4.
For each of the three image types, whether images having the same station position exist is determined by referring to the station positions (step S2-1). When redundant imaging due to a factor such as body motion of the object has not occurred, the station positions do not overlap within one image type, and such an image type is excluded from the subsequent classification process (step S2-2). For example, when imaging in a specific station is executed twice due to body motion of the object during diffusion weighted imaging, it is recognized in step S2-1 that images having the same station position exist. Since this case is not excluded from the classification process in step S2-2, the imaging times of the two sets of image data at the same station position are compared, and the image data having the later imaging time is selected as the image data to be used for generation of a composite image (step S3-1).
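Putting the second embodiment together as a self-contained sketch (hypothetical metadata keys and values only): the T1, T2 and MIP data are grouped into three image types, duplicates at the same station are detected, and the later acquisition is kept. For brevity the coarse image type is taken directly from a "type" entry rather than re-deriving it from imaging parameters.

```python
from collections import defaultdict

# Illustrative metadata: two reconstruction image types (T1, T2) and one
# calculation image type (MIP), three stations each, with station 2 of the
# MIP data imaged twice because of body motion.
images = [
    {"type": "T1",  "derivation": None,  "station": s, "acq": f"10:0{s}"} for s in (1, 2, 3)
] + [
    {"type": "T2",  "derivation": None,  "station": s, "acq": f"10:1{s}"} for s in (1, 2, 3)
] + [
    {"type": "MIP", "derivation": "MIP", "station": s, "acq": f"10:2{s}"} for s in (1, 2, 3)
] + [
    {"type": "MIP", "derivation": "MIP", "station": 2, "acq": "10:30"},   # re-imaged
]

# Steps S1-1 to S1-4 (sketch): group into the three image types.
groups = defaultdict(list)
for img in images:
    groups[img["type"]].append(img)

# Steps S2-1, S2-2 and S3-1 (sketch): wherever a station position occurs
# twice within one image type, keep only the later acquisition.
classified = {}
for image_type, imgs in groups.items():
    latest = {}
    for img in imgs:
        s = img["station"]
        if s not in latest or img["acq"] > latest[s]["acq"]:
            latest[s] = img
    classified[image_type] = [latest[s] for s in sorted(latest)]

for image_type, imgs in classified.items():
    print(image_type, [(i["station"], i["acq"]) for i in imgs])
```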
As described above using the example of obtaining a T1 weighted image, a T2 weighted image and a diffusion weighted image, the present invention makes it possible to classify station positions and image types even when image data of a plurality of image types and a plurality of station positions are specified at once, whereby operability in generating composite images can be improved.
The description above is an example of the case where only the classification control of calculation images and reconstruction images is selected using the button switch shown in
As stated above, by using the processing order of the automatic classification shown in
While display by stations has been described as the image display and the automatic classification function has been described as a default function in the above illustration, the classification function of the present invention is not limited to this example. For example, in the case that “display by image types” of the screen display in
Number | Date | Country | Kind
--- | --- | --- | ---
2007-124608 | May 2007 | JP | national
Filing Document | Filing Date | Country | Kind | 371c Date
--- | --- | --- | --- | ---
PCT/JP2008/057712 | 4/22/2008 | WO | 00 | 10/29/2009