The present invention relates to an image diagnosing support method and an image diagnosing support apparatus, and more particularly to a technique by which the region of an organ or the like is appropriately extracted from a plurality of images of the same region of an object displayed on the basis of a plurality of kinds of image data. This application claims Paris Convention priority based on Japanese Patent Application No. 2004-165427 and Japanese Patent Application No. 2004-222713 under the Japanese Patent Law.
An image diagnosing support system has been proposed which acquires images of a region of one object to be examined with a plurality of medical imaging apparatuses and displays those images. An object of such a system is to improve diagnostic performance, in particular to supplement the observation of a region that cannot be observed with one type of medical imaging apparatus with images from other medical imaging apparatuses.
For instance, Patent Document 1 discloses an ultrasonic imaging apparatus which is provided with an image data input device enabling a plurality of images, including volume images from outside, to be inputted, and further provided with image position-associative display means which, when displaying a B mode image of a specific diagnosable region of a specific object to be examined, obtains a tomogram of the diagnosable region of the object in a position matching the tomographic position of the B mode image according to images from the image data input device, and positions the tomogram alongside or over the B mode image, or alternately displays them at regular intervals of time.
Patent Document 1: JP10-151131A
However, the ultrasonic imaging apparatus described in Patent Document 1 does nothing more than automatically perform positional association (especially alignment) between a currently obtained B mode image and an analytical image of a similar region obtained by an X-ray CT apparatus or an MRI apparatus, so that positional confirmation or the like of the desired region can be made simply, rapidly and accurately; it has no other function.
An object of the present invention is to provide an image diagnosing support method and an image diagnosing support apparatus by which the region of an organ or the like can be appropriately extracted when displaying an image in the same region of an object to be examined obtained by using a medical imaging apparatus. A further object of the present invention is to figure out the region of a blood vessel by using a region extracting method and measure the rate of constriction.
In order to achieve the objects stated above, an image diagnosing support method according to the present invention comprises an acquisition step of acquiring a plurality of images obtained by imaging the same region of the same object with one or more medical imaging apparatuses and having positional information at the time when each of the images was picked up; a selecting step of selecting a first image and a second image out of the plurality of images; a displaying step of displaying the first image and the second image; a step of designating one or more of first reference points on the first image; a step of figuring out second reference points, on the second image, each matching one or another of the first reference points; a first step of extracting for each of the first reference points a first region containing that point within it; and a second step of extracting for each of the second reference points a second region containing that point within it, wherein either one of the first step and the second step is performed earlier or both are at the same time.
Moreover, an image diagnosing support method according to the present invention comprises a step of acquiring an image obtained by imaging a blood vessel of an object with a medical imaging apparatus, the image containing a blood vessel sectional region comprising a luminal region in which blood flows and a vascular tissue region; a step of extracting the luminal region and the vascular tissue region; a step of extracting from the luminal region and the vascular tissue region a plurality of parameters necessary for figuring out the rate of constriction of the blood vessel; and a step of calculating the rate of constriction of the blood vessel by using the plurality of parameters.
An image diagnosing support apparatus according to the present invention comprises: means for acquiring a plurality of images obtained by imaging the same region of the same object with one or more medical imaging apparatuses and having positional information at the time when each of the images was picked up; means for selecting two or more images out of the plurality of images; means for displaying the selected two or more images; means for designating one or more of first reference points on one of the selected two or more images; means for figuring out second reference points, on one or more images out of the selected two or more images, each matching one or another of the first reference points; and means for extracting for each of the first reference points a first region containing that point within it, and for each of the second reference points a second region containing that point within it.
Also, an image diagnosing support apparatus according to the present invention is provided with means for acquiring an image obtained by imaging an object with a medical imaging apparatus; and region extracting means for extracting a desired region out of the image, wherein the image contains a luminal region and a vascular tissue region of a desired blood vessel, the region extracting means extracts the luminal region and the vascular tissue region; and has means for figuring out from the luminal region and the vascular tissue region a plurality of parameters necessary for figuring out the rate of constriction of the blood vessel and means for calculating the rate of constriction of the blood vessel by using the plurality of parameters.
According to the present invention, it is possible to provide an image diagnosing support method and an image diagnosing support apparatus by which particularly the region of an organ or the like can be appropriately extracted when displaying an image in the same region of an object to be examined obtained by using a medical imaging apparatus. Further according to the present invention, the rate of constriction of a blood vessel can be accurately measured.
Preferred embodiments of the present invention will be described below with reference to drawings.
A first embodiment is an embodiment in which matching regions are extracted from two or more medical images.
According to
Next, the flow of processing by the image diagnosing support system of
First, the medical imaging apparatuses 1 and 2 respectively pick up a medical image A and a medical image B. For instance, the medical imaging apparatus 1 is an ultrasonic imaging apparatus, and the medical imaging apparatus 2 is an MRI (magnetic resonance imaging) apparatus. Each has its own position identifying means, and the spatial position of the data obtained by the medical imaging apparatuses 1 and 2 is identified with reference to a reference position 3 common to them (as the origin). For instance, where the medical imaging apparatus 1 is an ultrasonic imaging apparatus, the spatial position of the image having the reference position as its origin is identified by using a magnetic sensor or the like. This enables the spatial positional relationship to the same position as what was obtained by the MRI of the medical imaging apparatus 2 to be identified. The medical image data obtained by the respective medical imaging apparatuses are identified as spatial coordinates by the aligning unit 4 and displayed on the image display unit 5. This procedure enables the same region in the medical images obtained by the two different medical imaging apparatuses to be simultaneously displayed on the same screen. 
The aligning means may also be comprised, as described in Patent Document 1 for instance, as means which displays an ultrasonic tomographic plane being examined by an ultrasonic imaging apparatus together with the corresponding tomogram of an X-ray CT apparatus or an MRI apparatus on the same screen as the ultrasonic tomogram. This is done by detecting the relative position and direction of the object with respect to a probe in an arbitrarily set coordinate space and performing calibration in a three-dimensional coordinate space, so that alignment between the image data from the image data input device and different regions of the object becomes possible and positional relationships between tissues in the diagnostic image and the body surface (the position of the probe) can be detected.
How this takes place will be described with reference to
First, the image diagnosing support system reads in the medical image data obtained by the medical imaging apparatus 1 and the medical image data obtained by the medical imaging apparatus 2. And the image display unit 5 displays medical images based on these medical image data.
The user selects, out of these medical images, an image for region extraction. In this embodiment, the user selects a medical image 10.
The user sets a reference point 12 in the medical image 10 by using the input unit 6. The reference point 12 is one point contained in the region the user desires to extract from the medical image 10. The user further inputs extraction conditions α.
The region extracting unit 8 extracts the desired region containing the reference point 12 (hereinafter referred to as the “extraction region β”) in accordance with the inputted extraction conditions α.
Available methods of region extraction include region extraction based on brightness information of the image and region extraction based on waveform data. In the case of the former, brightness information of the image earlier selected through the input unit is delivered from the image display unit 5 to the region extracting unit 8.
The region extracting unit 8 performs extraction on the basis of the brightness information of the received image. The region extracting unit 8 extracts the extraction region β by successively comparing pixel values near the reference point 12 set in the medical image 10 with a predetermined threshold value (region growing). The region extracting unit 8 may also use known techniques other than region growing, such as the snake method.
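As one illustration of the region growing mentioned above, the following is a minimal sketch, not the apparatus's actual implementation: starting from the reference point, neighbouring pixels whose values stay within a threshold of the seed value are linked into the extraction region. The function name, test image and threshold are hypothetical.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Extract the connected region around `seed` whose pixel values differ
    from the seed value by no more than `threshold` (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_value) <= threshold):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Hypothetical example: a bright blob on a dark background
img = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
blob = region_grow(img, seed=(1, 1), threshold=2)  # links the four 9-valued pixels
```

The same seeded search, with different seeds and thresholds, serves both the brightness-based extraction here and the blood-vessel extraction of the later embodiments.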
In the latter case, on the other hand, signals received from the object are delivered from the medical imaging apparatuses directly to the region extracting unit 8. The region extracting unit 8 performs region extraction by using these received signals.
Next, the obtained region data β are delivered to the extraction region displaying/measuring unit 9. The user inputs and determines screen display conditions by using the input unit 6. The screen display unit 5 displays the extraction region β in the image earlier selected through the input unit 6 (the medical image 10 in this embodiment). The screen display conditions are conditions regarding the display, such as displaying the whole extraction region in color or displaying only its contour. It is also possible here to measure the extraction region, obtaining such items of geometric information as the area, the long and short axes, the long and short diameters, or characteristic quantities of the extraction region.
At the same time, region extraction of the same region as that of the reference point 12 selected in the medical image 10 is automatically performed in the medical image 11. More specifically, positional information (positional coordinates) of the reference point 12 selected in the medical image 10 is delivered to the medical image 11.
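The delivery of positional coordinates between the two images can be sketched as follows, under the assumption (not stated explicitly in the text) that the aligning unit 4 provides, for each image, a homogeneous transform into the common reference coordinate system. All names and transform values here are hypothetical.

```python
import numpy as np

def map_reference_point(point_a, a_to_ref, b_to_ref):
    """Map a point from image A's pixel coordinates into image B's, via the
    common reference position: A -> reference frame -> B.
    `a_to_ref` and `b_to_ref` are 3x3 homogeneous 2-D transforms."""
    p = np.array([point_a[0], point_a[1], 1.0])
    p_ref = a_to_ref @ p                   # into the shared reference frame
    p_b = np.linalg.inv(b_to_ref) @ p_ref  # into image B's frame
    return p_b[:2] / p_b[2]

# Hypothetical transforms: image A is offset by (10, 20) in the reference
# frame, image B by (-5, 0).
a_to_ref = np.array([[1, 0, 10], [0, 1, 20], [0, 0, 1]], float)
b_to_ref = np.array([[1, 0, -5], [0, 1, 0], [0, 0, 1]], float)
point_b = map_reference_point((3, 4), a_to_ref, b_to_ref)  # -> [18., 24.]
```

The mapped point plays the role of the reference point 12a in the medical image 11.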
Next, the region extracting unit 8 executes region extraction on the basis of the reference point 12a. The method of region extraction that is executed is determined by the user selection by using the input unit 6. As the method of region extraction, what seems to be the most suitable for each medical image can be selected. The method of region extraction and the extraction conditions that are executed on the medical image 11 may be either the same as or different from those used for the medical image 10.
The extraction region γ obtained as the result of extraction is delivered to the extraction region displaying/measuring unit 9 as in the previous case. After the screen display conditions are determined, the extraction region displaying/measuring unit 9 generates data for display in which the boundary of the extraction region γ is superposed over the extraction region β of the medical image 10 and data for display in which the boundary of the extraction region β is superposed over the extraction region γ of the medical image 11, and transmits them to the image display unit 5. The image display unit 5 displays these data for display.
By the operation described above, a plurality of medical images containing the same sectional image obtained by imaging the same region of the same object with different medical imaging apparatuses can be displayed on the same screen. Furthermore, by performing region extraction of any desired region in any desired one medical image out of the aforementioned two medical images, region extraction of the same region can be automatically performed in the other medical image.
The present invention is not limited to the above-described embodiment, but can be implemented in various modified embodiments without deviating from the essentials of the invention.
For instance, though the foregoing embodiment was described with reference to a plurality of medical images picked up with different medical imaging apparatuses, processing similar to that of the embodiment described above may be performed on the basis of a plurality of medical images picked up with the same medical imaging apparatus (for instance, sectional images of the same region of the same object picked up by using the same MRI apparatus at different times).
Further, though the foregoing embodiment was described with reference to a plurality of medical images picked up with different medical imaging apparatuses, medical images picked up with different medical imaging apparatuses of the same type can as well be used. For instance, medical images taken of the same region of the same object with a plurality of ultrasonic diagnostic apparatuses may also be used.
In the foregoing embodiment, the reference point 12 is set first and the extraction region β is extracted on the basis of the reference point 12; then the reference point 12a is set and the extraction region γ is extracted on the basis of the reference point 12a. However, since the reference points 12 and 12a are set for the medical images 10 and 11 respectively, region extraction in the medical image 10 and region extraction in the medical image 11 may be accomplished simultaneously, or the extraction region γ may be extracted from the medical image 11 first, followed by extraction of the extraction region β from the medical image 10.
Further, in the foregoing embodiment there are supposed to be two medical imaging apparatuses, but there may be three or more medical imaging apparatuses and medical images.
For instance, three medical imaging apparatuses image the same region and generate three different sets of medical image data, which are displayed simultaneously on the same screen. The image diagnosing support system reads in these sets of medical image data and displays medical images on the image display unit 5 on the basis of them. The user selects three medical images (one from each medical imaging apparatus) for region extraction processing out of the displayed medical images. The image diagnosing support system displays, over each of the three medical images, the boundaries of the regions extracted from the two other medical images superposed one over the other.
More specifically, for the first medical image, a region extracted from the first medical image is displayed by using a solid line and color discrimination. The contour line (boundary) of an extraction region extracted from a second medical image is displayed in a one-dot chain line superposed over that region extracted from the first medical image. Further, the contour line of an extraction region extracted from a third medical image is displayed superposed in a dotted line.
For the second medical image, a region extracted from the second medical image is displayed by using a thin solid line and color discrimination. The contour line of an extraction region extracted from the first medical image is displayed superposed over a region extracted from the second medical image. Further, the contour line of an extraction region extracted from the third medical image is displayed superposed in a dotted line.
Further, for the third medical image, a region extracted from the third medical image is displayed by using a thin solid line and color discrimination. The contour line of an extraction region extracted from the first medical image is displayed in a thick solid line superposed over a region extracted from the third medical image. Further, the contour line of an extraction region extracted from the second medical image is displayed superposed in a one-dot chain line.
The number of sets of data to be superposed over the extraction region extracted from the first medical image may be three or two, selectable by the user as desired. The user can also select as desired which image's region extraction result is to be superposed over which image.
Whereas a medical imaging apparatus which, regarding two or more medical imaging apparatuses, simultaneously displays the same region on the same screen and extracts the same region has been described so far, the medical imaging apparatuses 1 and 2 shown in
While a case in which images obtained by using the medical imaging apparatuses are shown in different parts (left and right) of the same screen was shown regarding this embodiment, a plurality of images may as well be shown in the same part of the screen superposed one over the other. For instance, the extraction region γ may be shown superposed over the extraction region β of the medical image 10.
Further, the region of region extraction from images obtained by different medical imaging apparatuses need not be the same, but there may be different regions. For instance, if an image obtained by an ultrasonic imaging apparatus is subjected to region extraction of the heart, and an image obtained by an MRI apparatus, to region extraction of a lung for superposed displaying, there will be no deviation from the essentials of the invention. In this embodiment, region information on regions which would not be clear enough with only one medical imaging apparatus can be obtained more accurately, and will become available for application to and assistance in various treatments and diagnoses.
The second embodiment is an embodiment in which the rates of constriction of blood vessels are figured out by using the region extraction method of the first embodiment. While the number of reference points first designated by the user is one in the first embodiment, this embodiment is different in that the user designates two reference points; it is similar to the first embodiment in other respects. The second embodiment will be described below with reference to
The user sets reference points 15 and 16 in the medical image 10, which contains the short-axis section of the blood vessel, respectively in a vascular tissue region constituting the blood vessel region (hereinafter referred to as the "tissue region") and in a luminal region (the region in which blood flows). The region extracting unit 8 extracts the tissue region and the luminal region from the medical image 10 on the basis of these reference points 15 and 16.
Next, reference points 15a and 16a matching the reference points 15 and 16 are set in the medical image 11.
The region extracting unit 8 extracts regions on the basis of the reference points 15a and 16a.
The extraction region displaying/measuring unit 9 generates data for display in which the boundary of the extraction region of the medical image 11 is superposed over the extraction region of the medical image 10, and transmits them to the image display unit 5. The image display unit 5 displays these data for display.
In the embodiment described above, the tissue region and the luminal region of a blood vessel are extracted. On the basis of the extracted tissue region and luminal region, the rate of constriction of the blood vessel is figured out. The method of calculating the rate of constriction of the blood vessel will be described afterwards with reference to
In the third embodiment, sets of image data obtained by imaging the same region of the same object with a single medical imaging apparatus from different angles are displayed side by side, and the region corresponding to an extraction region extracted from one medical image is automatically extracted in the other medical image.
A medical image 18 in
Regarding the fourth embodiment, the description will focus on an ultrasonic imaging apparatus as a specific example of medical imaging apparatus.
The ultrasonic imaging apparatus, as shown in
The blood flow rate computing unit 106, which inputs the reflected echo signal obtained by the transceiver unit 102 and takes out a frequency signal having undergone a Doppler shift due to the blood in the object, has a local oscillator which generates a reference frequency signal oscillating in the vicinity of the frequency of the ultrasonic wave transmitted and received by the probe 100; a 90-degree phase shifter which shifts the phase of the reference frequency signal from this local oscillator by 90 degrees, for instance; a mixer circuit which multiplies the reflected echo signal by the reference frequency signal from the local oscillator or the 90-degree phase shifter; and a low-pass filter which takes out the low frequency component from the output of this mixer circuit.
The blood flow rate computing unit 106, which computes the blood flow rate in the object by using the frequency signal having undergone a Doppler shift outputted from the Doppler demodulator circuit (hereinafter referred to as the "Doppler signal"), has within it an average velocity computing section which figures out from the inputted Doppler signal the average velocity of the blood flow in the blood vessel in the diagnosed region and a velocity variance computing section which figures out the velocity variance of the blood flow. Data obtained via the blood flow rate computing unit 106 are converted into color signals according to the state of the blood flow, and the digital signals are converted into video signals.
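The quadrature demodulation performed by the mixer circuits and filters described above can be sketched as follows. This is a simplified illustration with hypothetical names and parameters, assuming a simple moving-average low-pass filter; it is not the apparatus's actual circuitry.

```python
import math

def demodulate_iq(echo, f0, fs, lp_len=10):
    """Mix the received echo with the reference frequency f0 and with a
    90-degree shifted copy, then low-pass filter (moving average) each
    product to obtain the baseband I/Q Doppler signal.
    `lp_len` should cover at least one period of the 2*f0 component."""
    n = len(echo)
    i_mixed = [echo[k] * math.cos(2 * math.pi * f0 * k / fs) for k in range(n)]
    q_mixed = [echo[k] * math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]

    def lowpass(x):
        # moving average as a stand-in for the low-pass filter
        return [sum(x[max(0, k - lp_len + 1):k + 1]) / lp_len
                for k in range(len(x))]

    return lowpass(i_mixed), lowpass(q_mixed)
```

An echo Doppler-shifted by a frequency fd appears after demodulation as a slow rotation of the (I, Q) pair at fd, which is what the average-velocity and variance computations operate on.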
The display unit 112, which displays the color signals (video signals) inputted to it distinguished in color, red and blue for instance, is comprised of a TV monitor capable of color display.
The blood flow signal determining unit 119 determines the presence or absence of any blood flow signal by measuring whether or not there are color signals (whether or not there are Doppler signals) in the image resulting from the visualization by the display unit 112 of the Doppler signals figured out by the blood flow rate computing unit 106.
The image data extracting unit 120 sets regions of interest (ROI) in the displayed image. The size and direction (angle) of these regions of interest can be altered by the operator as desired. Instead of setting rectangular or oval regions of interest, arrangement may be so made in advance as to display only a region surrounded by a thick solid line on the display unit 112, and the region surrounded by the thick solid line treated as the region of interest. This region of interest, intended for detecting the blood flow rate in the diagnosed region of the object, is set on the B image in a position where there is the carotid blood vessel, for instance.
Next, the region extraction calculating unit 118 will be described with reference to
A reference point 203 is then set in the region outside the region 202. The setting of the reference point in a constricted region is preset in advance to be, for instance, 1 mm outside the region 202. The control unit 116 searches pixels around the reference point 203 and links a region having pixel values within the range of thresholds. Eventually, a region 201 containing the reference point 203 and having pixel values within the range of thresholds is obtained. The region 201 is the vascular tissue region constituting the blood vessel wall.
Next, square measures S(a) and S(b) of the region 201 and the region 202 are figured out as shown in
[Equation 1]
Rate of constriction=S(b)/S(a) (1)
And the rate of constriction so figured out is displayed on the display unit 112. Incidentally, since the rate of constriction is a relative value, the diameters of approximate circles 205 and 206 may be figured out instead. For instance, as shown in
[Equation 2]
Rate of constriction=L2/L1 (2)
The rate of constriction so figured out is displayed on the display unit 112. Although the foregoing description referred to measurement of the rate of constriction by extracting a region in the short-axis direction of the carotid artery, the rate of constriction can also be measured in the long-axis direction.
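Equations (1) and (2) above can be sketched as follows. This is a minimal illustration, not the apparatus's implementation; the function names and sample areas are hypothetical, and the diameters L1 and L2 are taken as those of circles whose areas equal S(a) and S(b).

```python
import math

def rate_by_area(s_a, s_b):
    """Equation (1): rate of constriction = S(b)/S(a)."""
    return s_b / s_a

def rate_by_diameter(s_a, s_b):
    """Equation (2): rate of constriction = L2/L1, where L1 and L2 are the
    diameters of circles whose areas equal S(a) and S(b): L = 2*sqrt(S/pi)."""
    l1 = 2 * math.sqrt(s_a / math.pi)
    l2 = 2 * math.sqrt(s_b / math.pi)
    return l2 / l1

# Hypothetical areas for regions 201 and 202
area_rate = rate_by_area(100.0, 25.0)      # -> 0.25
diam_rate = rate_by_diameter(100.0, 25.0)  # -> 0.5
```

Note that the diameter-based rate is the square root of the area-based rate, which is consistent with both being relative values.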
A specific example of measurement of the rate of constriction by region extraction in the long-axis direction is shown in
The control unit 116 searches pixels around the reference point 304 and links a region having pixel values within the range of thresholds, so that a region 302 containing the reference point 304 and having pixel values within the range of thresholds is obtained. If the extent of the region 302 reaches the edges of the screen, the region 302 is linked up to the edges of the screen. This region 302 is a luminal region.
Then, on both outer sides of the region 302, a reference point 303 and a reference point 305 are set in two positions. For instance, the region extraction calculating unit 118 designates the reference point 303 and the reference point 305 as external points 0.5 mm away from the ends of the region 302. On the basis of these reference points 303 and 305, in the same way as in
Next, the rate of constriction of the blood vessel is measured by using the region 301 and the region 302. As shown in
[Equation 3]
Rate of constriction=L2/L1, rate of constriction=(L1−(L3+L4))/L1 (3)
The display unit 112 displays this rate of constriction so figured out. In this way, it is possible to figure out the region of the blood vessel in the short-axis direction or the long-axis direction by using the region extraction method to measure the rate of constriction.
It is also possible for the constriction rate calculating unit 121 to figure out the rate of constriction at each position along the axial direction by sliding the probe 100 over the section where the rate of constriction was figured out in the long-axis direction, to calculate the average of the rates of constriction measured in the long-axis and short-axis directions, and to have the average displayed on the display unit 112. This makes it possible to measure the rate of constriction with even higher accuracy.
Next, the operational procedure in this embodiment will be described with reference to
At step S401, the blood flow rate computing unit 106 measures a Doppler signal. At step S402, the region extraction calculating unit 118 sets a reference point where the Doppler signal is present and further sets the threshold range of pixels.
At step S403, the region extraction calculating unit 118 searches pixels around the reference point 204, links a region having pixel values within the range of thresholds, obtains the region 202 containing the reference point 204 and having pixel values within the range of thresholds, and thereby extracts a first region.
At step S404, the region extraction calculating unit 118 sets the reference point 203 in a region outside the first region, and further sets the threshold range of pixels.
At step S405, the region extraction calculating unit 118 searches pixels around the reference point 203, links a region having pixel values within the range of thresholds, obtains the region 201 containing the reference point 203 and having pixel values within the range of thresholds, and thereby extracts a second region.
At step S406, the constriction rate calculating unit 121 calculates the rate of constriction by using the square measures or diameters of the first region and the second region.
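Steps S401 to S406 above can be sketched end to end as follows. This is a minimal illustration assuming brightness threshold bands for the luminal and tissue regions; the function names, the test image, the threshold bands and the seed coordinates (standing in for the reference points 204 and 203) are all hypothetical.

```python
from collections import deque

def grow(image, seed, lo, hi):
    """Link connected pixels (4-connectivity) whose values lie in [lo, hi]."""
    rows, cols = len(image), len(image[0])
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and lo <= image[nr][nc] <= hi):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

def constriction_pipeline(image, doppler_seed, lumen_band, wall_band, wall_seed):
    # S402-S403: grow the luminal region from a seed placed where a
    # Doppler (blood flow) signal is present.
    lumen = grow(image, doppler_seed, *lumen_band)
    # S404-S405: grow the vascular tissue region from a seed set just
    # outside the luminal region.
    wall = grow(image, wall_seed, *wall_band)
    # S406: rate of constriction from the two areas, as in equation (1).
    return len(lumen) / len(wall)

# Hypothetical short-axis image: lumen pixels = 1, wall pixels = 5
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 5, 5, 5, 5, 0],
    [0, 5, 1, 1, 5, 0],
    [0, 5, 1, 1, 5, 0],
    [0, 5, 5, 5, 5, 0],
    [0, 0, 0, 0, 0, 0],
]
rate = constriction_pipeline(img, (2, 2), (1, 1), (5, 5), (1, 1))
# 4 lumen pixels / 12 wall pixels = 1/3
```

Here pixel counts stand in for the square measures; a real implementation would scale them by the physical pixel area.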
Although the reference points are set by the region extraction calculating unit 118 in the foregoing description, they may as well be set manually by using the control desk 114. Further, though the foregoing description referred to an embodiment in which the invention is applied to an ultrasonic imaging apparatus, the invention may as well be applied to another medical imaging apparatus which images an object to be examined and generates image data, such as an X-ray CT apparatus, an MR apparatus or an X-ray imaging apparatus, instead of an ultrasonic imaging apparatus.
Although the foregoing description referred to means for figuring out the rate of constriction by which the square measures of the luminal region and of the vascular tissue region are calculated from independent reference points by respectively desired region extraction methods, the luminal region may as well be figured out directly from the Doppler signal indicating the region in which a blood flow is present. The Doppler signal indicating the presence of a blood flow is usually displayed in color on a monitor. When the reference point 203 for extracting a vascular tissue region is prescribed and the vascular tissue region is found by the above-described region extraction method, for instance, the luminal region in which a blood flow is present is displayed on the monitor with color information inside the vascular tissue region. It is possible to extract the luminal region merely by counting the number of pixels having that color information.
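Counting the pixels that carry color (Doppler) information can be sketched as follows. The data layout, a Boolean map marking where a color signal is displayed, is a hypothetical simplification of the monitor's color information.

```python
def luminal_pixel_count(color_flags):
    """Count the pixels that carry Doppler color information, i.e. the
    pixels where a blood flow signal is displayed in color."""
    return sum(flag for row in color_flags for flag in row)

# Hypothetical color map: True where a Doppler signal is shown in color
flags = [
    [False, False, False, False],
    [False, True,  True,  False],
    [False, True,  True,  False],
    [False, False, False, False],
]
area_pixels = luminal_pixel_count(flags)  # -> 4
```

Multiplying the count by the physical pixel area would give the luminal square measure S(b) directly, with no second region-growing pass.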
Although a case in which the Doppler signal carrying information on the presence or absence of blood is used for luminal region extraction has been described, what gives information on the presence or absence of blood is not limited to the Doppler signal. Signals obtained by any diagnostic method permitting discriminative display of a blood flow, or of a region in which a blood flow is present, and the peripheral tissues can be used.
Further, the luminal region need not be found directly by region extraction, but can also be extracted merely by finding the vascular tissue region. A case of the short axis of a blood vessel, for instance, will be described. As shown in
[Equation 4]
Luminal region square measure S(b) = Blood vessel region square measure S1 − Vascular tissue region square measure S(a) (4)
This can be similarly accomplished regarding the long axis of the blood vessel, too.
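Equation (4) amounts to a single subtraction; as a sketch with hypothetical values:

```python
def luminal_square_measure(s1_vessel, s_a_tissue):
    """Equation (4): S(b) = S1 - S(a), the blood vessel region's area
    minus the vascular tissue region's area."""
    return s1_vessel - s_a_tissue

# Hypothetical square measures (e.g. in mm^2)
s_b = luminal_square_measure(80.0, 55.0)  # -> 25.0
```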
Although derivation of the rate of constriction of a blood vessel has been described so far, the rate of constriction can be similarly derived for any other region requiring such derivation.
The fifth embodiment is one example of utilization of the result of region extraction after region extraction from two medical images.
An image diagnosing support system in this embodiment has means for executing automatic adjustment of the display parameters of image displaying to make the extraction region especially well perceptible (gain/dynamic range automatic adjusting function). The gain/dynamic range automatic adjusting function may be executed by the extraction region displaying/measuring unit 9 in the first embodiment or gain/dynamic range automatic adjusting means may be provided in addition to the constituent elements in the first embodiment and the fourth embodiment.
The gain/dynamic range automatic adjusting function adjusts the gain and dynamic range so that the distribution range of the image signal values in the extraction region matches the maximum gradation range of the display. The perceptibility of the extraction region is thereby improved. Incidentally, the region other than the extraction region (hereinafter referred to as the "background region") is displayed in extreme white or black, but this poses no particular problem in image diagnosis.
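The adjustment described above can be sketched as a linear remapping of the signal range found inside the extraction region onto the full gradation range of an assumed 8-bit display; background values outside that range saturate to extreme black or white, as noted in the text. The function name is illustrative.

```python
import numpy as np

def auto_window(image, region_mask, max_grad=255):
    """Map the signal range inside the extraction region onto [0, max_grad].

    image       : 2-D array of signal values.
    region_mask : bool array, True inside the extraction region.
    max_grad    : maximum gradation of the display (255 for 8 bits).
    """
    vals = image[region_mask]
    lo, hi = float(vals.min()), float(vals.max())
    if hi == lo:
        # Degenerate region: render at mid-gray.
        return np.full(image.shape, max_grad // 2, dtype=np.uint8)
    # Linear gain/offset; background pixels clip to extreme black/white.
    out = (image.astype(float) - lo) * (max_grad / (hi - lo))
    return np.clip(out, 0, max_grad).astype(np.uint8)

# Region signal spans 40..80; the background value 200 saturates to white.
img = np.array([[40, 60], [80, 200]], dtype=np.uint8)
mask = np.array([[True, True], [True, False]])
windowed = auto_window(img, mask)
```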
The gain/dynamic range automatic adjusting means, when a region extracted from one medical image is superposed over an extraction region from the other medical image, automatically adjusts the various parameters so as to make the two superposed regions clearly perceptible. A number of preset adjusting patterns of the parameters, differentiated by purpose, may be prepared in advance. The purposes of presetting include the following, for instance.
Further, presets may be prepared according to the conditions of comparison between the extraction region and the background region, or the conditions of comparison between the extraction regions taken out of the two medical images. The conditions of comparison may include the following, for instance.
Incidentally, presets may also be prepared according to conditions other than the foregoing.
FIGS. 9 show medical images 90 and 91 obtained by imaging the same region of the same object at different times. Processing similar to that in the first embodiment is performed to take out an extraction region β from the medical image 90; similarly, an extraction region γ is taken out of the medical image 91.
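Superposing the two extraction regions over a common grayscale base can be sketched as follows. The color assignments (β in red, γ in blue, their overlap in magenta) are illustrative choices, not colors prescribed by the embodiment.

```python
import numpy as np

def superpose_regions(base_gray, mask_beta, mask_gamma):
    """Render a grayscale image as RGB, painting extraction region beta,
    extraction region gamma, and their overlap in distinct colors."""
    rgb = np.stack([base_gray] * 3, axis=-1).astype(np.uint8)
    rgb[mask_beta] = [255, 0, 0]                  # region beta: red
    rgb[mask_gamma] = [0, 0, 255]                 # region gamma: blue
    rgb[mask_beta & mask_gamma] = [255, 0, 255]   # overlap: magenta
    return rgb

# Toy 2x2 example: beta and gamma overlap at the top-left pixel.
base = np.full((2, 2), 100, dtype=np.uint8)
beta = np.array([[True, False], [False, False]])
gamma = np.array([[True, True], [False, False]])
rgb = superpose_regions(base, beta, gamma)
```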
In
In the way shown in
In the way shown in
In the way shown in
Conventionally, when a plurality of medical images are to be comparatively observed, the gain/dynamic range is adjusted manually to make the noted region more perceptible; this embodiment, however, makes it possible to obtain the most suitable superposed display images automatically.
Furthermore, each medical diagnostic imaging apparatus displaying the same screen is enabled to achieve superposed display of not only the desired extraction region but also intrinsic information. Examples include hardness in the extraction region and information identifying the lesioned region.
Although the foregoing description referred only to superposition of a lesioned region, the applicability is not limited to this; information on degrees of textural hardness in the extraction region, distinguished by color, may also be superposed.
The sixth embodiment concerns an image diagnosing support system provided with means for realizing a shape measuring function for extraction regions. The shape measuring function may be exercised by the extraction region displaying/measuring unit 9 in the first embodiment, or shape measuring means may be provided in addition to the constituent elements in the first embodiment and the fourth embodiment.
The shape measuring means measures characteristic quantities of the extraction region, including for instance the actually measured square measure (S) and the external circumferential length (L). Further, where there are two extraction regions, it measures ratios between their characteristic quantities, including for instance the square measure ratio (S1/S2) and the longer diameter ratio (L1/L2). It may also measure other items of geometric information and other characteristic quantities. When an instruction to execute shape measuring is given in one medical image, extraction regions in the other medical image are measured automatically as well.
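The square measure and external circumferential length of a binary extraction region can be sketched as follows, assuming a pixel grid with known spacing and a 4-connectivity boundary; the function name and the boundary convention are illustrative, since the patent does not specify a particular measuring algorithm.

```python
import numpy as np

def region_measures(mask, spacing=1.0):
    """Square measure S and external circumferential length L of a
    binary extraction region (4-connectivity boundary edges)."""
    m = np.pad(mask.astype(int), 1)        # pad so edge pixels get a border
    area = m.sum() * spacing ** 2
    # A boundary edge exists wherever a region pixel meets a background pixel.
    edges = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    return float(area), float(edges * spacing)

# A 3x3 block: S = 9 pixel areas, L = 12 pixel edges.
square = np.zeros((5, 5), dtype=bool)
square[1:4, 1:4] = True
s, l = region_measures(square)

# The ratio S1/S2 between two extraction regions follows directly, e.g.:
# s1, _ = region_measures(mask1); s2, _ = region_measures(mask2); ratio = s1 / s2
```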
For instance in a state of displaying
This embodiment makes possible simultaneous automatic measurement of the same region contained in a plurality of medical images. This enables diagnosis-assisting information to be provided during, for instance, observation of changes in the same patient over time before and after treatment.
In the seventh embodiment, an image diagnosing support system reads in a plurality of medical images of the same region of the same object picked up by a medical imaging apparatus at different times. These medical images contain information on the times of imaging.
Performing processing similar to that in the foregoing embodiments, the image diagnosing support system extracts the same region from the medical images picked up at different times; in this embodiment, the same blood vessel section is extracted. The image diagnosing support system in this embodiment then displays on the same screen both time-series images of these extraction regions (the same blood vessel section imaged at different times), displayed along the time series, and a hetero-angle image which, when a random time is designated from those time-series images, displays the same region at that time (the time desired by the user) from a different angle. The time-series images are a collection of a plurality of time phase images, which represent changes of the blood vessel section over time.
The user selects a random time phase out of the time-series images 94 by using the input unit 6, which may be a mouse or a track ball. The hetero-angle image 93, taken of the same region in that time phase from a different angle, is then shown. The hetero-angle image 93 may show only the section in the user-designated time phase (the outermost circle in the hetero-angle image 93) or may be combined with a section in a time phase later than that time phase (the section indicated by the dotted line). The hetero-angle image 93 can be displayed by reading a medical image out of a cine-memory.
The image diagnosing support system may also display together thumbnail images 95, 96 and 97 of the hetero-angle image matching random time phases of the time-series images 94.
This embodiment enables time-series images, comprising a plurality of medical images picked up at different times and displayed along the time series, and a hetero-angle image matching a random time phase in those time-series images to be displayed on the same screen, and can thereby contribute to improved diagnostic capability.
This is applicable not only to the display of medical images but also to uses in which, when a plurality of images taken of the same object (or object of imaging) are to be displayed in parallel, processing to extract a region from one of the images results in extraction of the matching region in another image. It is also applicable, besides measuring the rate of constriction of a blood vessel, to measuring the rates of constriction of other tubule organs or tubule objects of imaging.
Number | Date | Country | Kind |
---|---|---|---|
2004-165427 | Jun 2004 | JP | national |
2004-222713 | Jul 2004 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP05/09863 | 5/30/2005 | WO | | 12/4/2006