Technical Field
The present invention relates to a magnetic resonance imaging (MRI) apparatus, and more particularly, to a technique for highlighting a designated region in an MR image of an examination target.
MRI is a technique for processing nuclear magnetic resonance signals generated from respective tissues of an examination target to create images with different tissue contrasts, and it is widely utilized for image diagnosis support. MRI makes it possible to obtain various images with different tissue contrasts by adjusting the conditions (imaging conditions) under which the nuclear magnetic resonance signals are generated. Furthermore, depending on examination purposes and objects, it is possible to acquire one or more types of images, such as T1 weighted images, T2* weighted images, proton density weighted images, diffusion weighted images, and susceptibility weighted images.
Among such various contrasted images, T2* weighted images emphasize differences in the transverse relaxation time of tissues (more precisely, the apparent transverse relaxation time T2*) by extending the echo time TE. Thus, T2* weighted images are useful for diagnosing lesions with a strong susceptibility effect (e.g., hemorrhage), for example in brain images. Accordingly, many examination protocols include T2* weighted imaging as one of the standard imaging types.
In MRI, depending on the method of applying the gradient magnetic field when generating nuclear magnetic resonance signals, multi-slice 2D images or 3D images can be acquired. Blood vessels usually run linearly through tissue, and when attempting to discriminate between blood vessels and microbleeds (microhemorrhages or minor leaks) in an MR image, a blood vessel is difficult to identify in a 2D (two-dimensional) image, because it is depicted as a small dot-like region unless it runs along the cross section. It is particularly difficult to distinguish normal blood vessels from microbleeds, and conventional algorithms for distinguishing between them are therefore based on 3D (three-dimensional) images.
For example, the specification of JP Patent No. 6775944 (hereinafter, referred to as Patent Literature 1) discloses that a projection process is performed on three-dimensional image data of a brain in a range from the brain surface to a predetermined depth, thereby acquiring a projection image that visualizes MB (microbleeds) or calcification generated in the brain. Furthermore, as a technique for discriminating microbleeds, JP-A-2020-18695 (hereinafter, referred to as Patent Literature 2) discloses a technique for distinguishing between blood vessels and microbleeds from a plurality of images obtained at different timings, utilizing the fact that signals of venous blood and of microbleeds are influenced equally by magnetic susceptibility but differently in phase by blood flow.
Although 3D images are suitable for grasping a three-dimensional structure, they have drawbacks: for example, the imaging time is generally longer than that of 2D images, and the images are susceptible to body motion. Furthermore, in the technique described in Patent Literature 1, although the projection process allows displaying an image in which calcification or MB can be easily distinguished, a doctor or an examiner is still required to view the image to determine whether microbleeds are present, and no algorithm for this determination is provided. In the technique described in Patent Literature 2, echoes (nuclear magnetic resonance signals) for obtaining a plurality of images having different phases are acquired within the repetition period TR, and therefore an additional acquisition is necessary beyond the imaging performed in a common routine examination.
An object of the present invention is to provide a technique that utilizes multi-slice images, which are widely used in routine examinations, to easily highlight tissue such as microbleeds (a designated region) that is hard to identify in 2D images.
In order to achieve the object above, the present invention utilizes a geometric feature of the designated region and a spatial feature of the location where the designated region is present, thereby highlighting the designated region. The spatial feature includes at least one of the following: a distribution of the tissues surrounding and including the designated region (a presence probability with respect to each tissue), and a spatial brightness distribution of the pixel values of the designated region. The designated region includes a tissue such as a blood vessel and a lesion such as microbleeds, and represents a portion that is specified as one region according to the characteristics of the tissue or the lesion.
That is, the MRI apparatus of the present invention comprises a reconstruction unit configured to collect magnetic resonance signals of an examination target and to reconstruct an image, and an image processing unit configured to process the image reconstructed by the reconstruction unit, to specify a region having a certain contrast (hereinafter, referred to as the designated region) included in the image. The image processing unit comprises a highlighting unit configured to highlight the designated region, based on shape information of the designated region and spatial information of the designated region. For example, the image processing unit includes a shape filtering unit and a spatial information analyzer, and the shape filtering unit acquires as the shape information, an image of a predetermined shape based on a geometric feature of the designated region. The spatial information analyzer utilizes an image of the predetermined shape to analyze the probability that the designated region exists in each tissue of the examination target, and brightness information of the predetermined shape.
The present invention also embraces an image processor having some or all the functions of the image processing unit in the above-described MRI apparatus.
Further, the image processing method of the present invention processes an image acquired by MRI and highlights the designated region included in the image, comprising a step of acquiring a candidate image of only a predetermined shape included in the image, and a step of acquiring spatial information of the predetermined shape, wherein the step of acquiring the spatial information includes at least one of a step of calculating a tissue distribution of the predetermined shape in the image, and a step of calculating a brightness distribution of the image of the predetermined shape.
The tissue distribution is information indicating how the surrounding tissues of the designated region are distributed, and the brightness distribution is information indicating a change in the brightness value of the designated region mainly due to the blooming effect.
According to the present invention, with respect to the designated region to be highlighted, the shape information obtained from the geometric features of the designated region is used, and further the spatial information such as the spatial distribution of the designated region is used. Accordingly, this allows the designated region to be automatically highlighted and presented even in 2D images.
There will now be described embodiments of an MRI apparatus and an image processing method according to the present invention.
First, with reference to
The nuclear magnetic resonance signals received by the receiver 14 of the imaging unit 10 are digitized and passed to the computer 20 as measurement data.
The structure, functions, and other aspects of each unit constituting the imaging unit 10 are the same as those of publicly known MRI apparatuses, and the present invention can be applied to various known types of MRI apparatuses and elements. Thus, the imaging unit 10 will not be described here in detail.
The computer 20 is a machine or a workstation provided with a CPU, a GPU, and a memory, and has a control function (a control unit 20C) for controlling the operation of the imaging unit 10, and image processing functions (a reconstruction unit 20A and an image processing unit 20B) for performing various calculations on the measurement data acquired by the imaging unit 10 and on the image reconstructed from the measurement data. Each function of the computer 20 can be implemented, for example, by a CPU or a similar element loading and executing a program for that function. Some of the functions of the computer 20, however, may be implemented by hardware such as a programmable IC (e.g., ASIC, FPGA). In addition, the functions of the image processing unit 20B may be implemented in a remote computer connected to the MRI apparatus 1 by wired or wireless connection, or in a computer constructed on a cloud, and this type of computer (an image processor) is also embraced in the present invention.
The computer 20 includes a storage device 30 that stores data and results (including intermediate results) required for control and computation, and a UI (user interface) unit 40 that displays GUI and computation results to the user and accepts designations from the user. The UI unit 40 includes a display device and an input device (not shown).
The MRI apparatus of the present embodiment comprises a function (highlighting unit 21) in which the image processing unit 20B of the computer 20 highlights a particular tissue or region (hereinafter, referred to as a designated region) included in an image, using the image reconstructed by the reconstruction unit 20A. This function utilizes a geometric feature and spatial information of the designated region. For that purpose, the highlighting unit 21 comprises, for example, a shape filtering unit 23 that acquires an image of only a shape of the designated region, and a discrimination unit 27 that discriminates a specific tissue, using the image of only the shape acquired by the shape filtering unit.
The discrimination unit 27 uses one or more methods to make the discrimination. In one discrimination method, the discrimination unit 27 utilizes, as the spatial information, a result of analyzing a distribution (tissue distribution) of the designated region in the entire imaged tissue. In another method, the discrimination unit 27 utilizes, as the spatial information, a result of analyzing the brightness distribution of pixel values of the image having only the shape. In yet another method, the discrimination unit 27 makes the discrimination utilizing a CNN (Convolutional Neural Network) trained in advance using images including the designated region and the surrounding region (including the shape information and the spatial information of the designated region). The spatial information analyzer 25 as shown in
The processing of the image processing unit 20B will be described later. With reference to
First, under the control of the control unit 20C, the imaging unit 10 performs imaging according to imaging conditions set in an examination protocol, or according to imaging conditions set by a user, and collects nuclear magnetic resonance signals for obtaining an image of the subject. The pulse sequence used for the imaging is not particularly limited, but here, multi-slice 2D imaging is performed, in which an area having a predetermined thickness is divided into a plurality of sections (slices) and imaging is performed for each slice. In multi-slice 2D imaging, the pulse sequence is repeated while changing the slice position to be selected, and a 2D image with multiple slices is acquired.
The reconstruction unit 20A performs an operation such as the fast Fourier transform on the measurement data of the respective slices to obtain an image for each slice (S1). Basically, the multiple cross sections are parallel to each other, but in addition, an image of a cross section orthogonal thereto may also be acquired.
The image processing unit 20B (the highlighting unit 21) performs a process to highlight the designated region included in the image with respect to each image of the multiple cross sections. For this reason, first, a shape filter is applied based on a geometric feature of the designated region, and an image of only the predetermined shape is created (S2). For example, when the designated region corresponds to microbleeds, the shape filtering unit 23 applies the shape filter for extracting a small circular shape (granular shape) to create an image of only the granular shape. In this case, it is also possible to use a combination of a plurality of shape filters in order to remove other shapes that may be mixed due to the use of only one shape filter.
Then, the spatial information analyzer 25 analyzes the image obtained as a result of the filtering, and acquires spatial information such as tissue distribution features of individual granular shapes (S3). The spatial information includes information (tissue distribution) of the surrounding tissue in which the granular shapes in the target portion are distributed, a distribution (brightness distribution) of the pixel values in the individual granular shapes, or a combination thereof.
The discrimination unit 27 uses the analysis result of the spatial information analyzer 25 to discriminate between the designated region that is a target of the discrimination and the tissue that is similar in shape to the designated region but is different from the designated region, and extracts only the designated region (S4). When the discrimination unit 27 makes discrimination using a CNN that has learned images including both the shape and the spatial features of the designated region, the shape filtering unit 23 and the spatial information analyzer 25 may be omitted.
The processing above is performed on all the slices of the multi-slice image (S5), and the positions and sizes of the designated region can be finally specified in the entire region to be imaged. The information of the specified designated region is displayed, for example, being superimposed on the entire image (S6). The entire image may be a T2* weighted image used for specifying the designated region, or may be another image acquired in parallel (for example, a susceptibility-weighted image or a proton-density weighted image).
The user checks the position of the designated region displayed on the image, and if the designated region indicates microbleeds or calcification, the user can confirm the location of the microbleeds or the calcification occurrence.
According to the present embodiment, regarding a tissue or a lesion that has been difficult to be discriminated in conventional 2D images, such designated region can be discriminated and highlighted in the 2D image acquired by a normal MRI examination, by using the shape information and the spatial information of the designated region.
Next, an embodiment of the process performed by the image processing unit will be described, taking as an example, the case where the designated region corresponds to microbleeds.
In the first embodiment, the shape filtering unit 23 comprises a filter A for extracting a granular shape and a filter B for extracting a linear shape, and removes the shape extracted by the filter B from the shape extracted by the filter A to obtain an output of the shape filtering unit 23. Further, the spatial information analyzer 25 uses, as the spatial information of the designated region, information indicating in which portion the designated regions are distributed, in a plurality of organs or regions, and information obtained by analyzing the blooming effect (blurring or enlargement of a lesion outline due to a magnetic susceptibility effect of bleeding) of the granular shape extracted as the shape of the designated region. The discrimination unit 27 identifies the designated region based on the analysis result of the spatial information analyzer 25.
With reference to
As shown in
However, as long as the filter is capable of extracting a predetermined shape, the filter is not limited to a morphological filter; the Hough transform, for example, may also be used. In addition to the two types of filters, a filter for extracting features other than shape may also be provided.
In order to acquire information indicating where the designated regions are distributed in the plurality of organs or regions, the spatial information analyzer 25 comprises a segmentation unit 251 that divides the image into the multiple organs or regions to create segmentation images with respect to each of the organs or tissues, and a probability calculator 252 that calculates the probability of the tissue in each of the segmentation images. The spatial information analyzer 25 is further provided with a feature analyzer 253 that analyzes the features of the predetermined shape and of the tissue surrounding the predetermined shape in order to analyze the blooming effect.
With reference to
First, the shape filtering unit 23 receives a T2* weighted image (brain image) created by the reconstruction unit 20A, and performs pre-processing where regions (backgrounds) other than the brain are removed using a mask (the mask that makes the brain region 0 and the rest 1) and also noise reduction is performed (S21). As the noise reduction, a publicly known average filter can be used, for example. Although the pre-processing S21 is not essential, the accuracy of subsequent processing (filtering, and so on) can be improved by performing such pre-processing. There are various known methods (e.g., A hybrid approach to the skull stripping problem in MRI: Neuroimage, 2004 July; 22 (3): 1060-75) for extracting the mask of brain region, and any of these methods may be used.
Then, the shape filtering unit 23 applies each of the morphological filters A231 and B232 to the image of the brain region (S22 and S23), to obtain an image (granular image) 503 in which granular (circular or elliptical) shapes are extracted and an image in which linear shapes are extracted. The linear image (not shown) resulting from the process S23 is subtracted from the granular image 503 so that linear components are removed from the granular image 503 (S24), and an image having only granular shapes (a grain-line difference image) is obtained as a candidate image 505. Alternatively, instead of taking the difference, the granular image is divided by the linear image pixel by pixel, and an image (a grain-line division image) highlighting the granular components is obtained as the candidate image. Further alternatively, threshold processing may be performed on the grain-line difference image or the grain-line division image with an appropriate threshold value to calculate a binary image in which the circular regions are 1 and the rest is 0, and this binary image may be used as the candidate image. By combining the two types of filters as described above, it is possible to prevent mixing of unnecessary shapes and to extract only the shape to be discriminated.
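As an illustration, the grain-line filtering of steps S22 to S24 can be sketched in Python as follows. This is only a minimal sketch under stated assumptions: the function name, filter sizes, and threshold are invented for illustration, and grey-scale morphological opening stands in for the morphological filters A231 and B232.

```python
import numpy as np
from scipy import ndimage

def granular_candidate(img, grain_size=3, line_len=9):
    """Sketch of S22-S24: extract granular shapes and remove linear ones."""
    # Filter A (granular): opening with a small square keeps compact bright blobs.
    grain = ndimage.grey_opening(img, size=(grain_size, grain_size))
    # Filter B (linear): openings with line-shaped elements keep elongated structures.
    line_h = ndimage.grey_opening(img, size=(1, line_len))
    line_v = ndimage.grey_opening(img, size=(line_len, 1))
    line = np.maximum(line_h, line_v)
    # Grain-line difference image (S24): remove linear components.
    diff = np.clip(grain - line, 0.0, None)
    if diff.max() == 0:
        return diff.astype(np.uint8)
    # Optional thresholding to a binary candidate image (circular regions = 1).
    return (diff > 0.5 * diff.max()).astype(np.uint8)
```

In practice the structuring-element sizes would be tuned to the expected grain diameter and vessel width, and line elements in more orientations (or the Hough transform mentioned above) could be added.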
Prior to obtaining this difference, as shown in
Further, after the thresholding processing, the linear shape image 502 may be subtracted from the granular image 503, and the resulting difference image 504 may then be subjected to a process of removing granular shapes mixed in from outside the brain parenchyma, using the pixel information of the original image 500 (S24-2). This processing divides the image 504 after the subtraction into small regions (small patches), and determines whether a value obtained by multiplying the median of the pixel values in each small patch by a predetermined coefficient is smaller than the average value within the mask (e.g., the brain region) of the original image 500; when the value is smaller, the patch (the granular shape included in the patch) is excluded. By adding this processing, it is possible to reliably exclude granular shapes in the region outside the brain. The coefficient multiplying the median is an adjustment coefficient for preventing excessive exclusion, and a value such as 0.8 is used, for example. Alternatively, a histogram within each small patch may be analyzed to exclude granular shapes whose characteristics differ from those of normal vessels and microbleeds. For example, a value obtained by multiplying the minimum value of the T2* weighted image in the small patch by a constant (for example, 0.8) (the in-patch minimum) may be compared with the average value within the mask, and granular shapes having a larger in-patch minimum (granular shapes with light contrast) may be excluded.
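A possible shape of the small-patch exclusion in S24-2, written as a hedged Python sketch (the function name, patch size, and layout are assumptions, and the median is used as the intermediate value):

```python
import numpy as np

def exclude_extracranial_grains(diff_img, original, brain_mask, patch=8, coef=0.8):
    """Sketch of S24-2: zero out patches whose intermediate (median) value,
    scaled by `coef`, falls below the mean of the original image in the mask."""
    out = diff_img.copy()
    brain_mean = original[brain_mask > 0].mean()
    h, w = diff_img.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            sl = (slice(y, min(y + patch, h)), slice(x, min(x + patch, w)))
            # Exclude the granular shapes in patches that look unlike parenchyma.
            if coef * np.median(original[sl]) < brain_mean:
                out[sl] = 0
    return out
```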
The processing in the shape filtering unit 23 has been described so far, and the image of only the granular shape existing in the brain is obtained as the candidate image 505 by the series of processing.
Next, the spatial information analyzer 25 analyzes the spatial features of the granular shapes of the candidate image 505. In the present embodiment, based on the finding that half or more of microbleeds are contained in the cerebral parenchyma, the region of the brain image is divided into the cerebral parenchyma and cerebrospinal fluid (CSF), and probability maps are generated for each as spatial information. For this purpose, the segmentation unit 251 first creates a segmentation image for each region from the brain image of the subject. As the image used for segmentation,
The segmentation is a technique for generating images (segmentation images) divided for respective tissues, based on the features of each tissue appearing in the image. Various algorithms are known, such as the k-means method, the region growing method, and the nearest neighbor algorithm, as well as methods that employ CNNs, and any of them may be adopted. In the present embodiment targeting the brain image, as shown in
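As one concrete possibility among the algorithms listed above, an intensity-only k-means segmentation can be sketched as follows. This is purely illustrative (function name and parameters are assumptions); real brain segmentation would typically combine intensity with spatial or atlas information.

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20, seed=0):
    """Minimal 1-D k-means on pixel intensities, labelling each pixel with
    one of k tissue classes (e.g., background, parenchyma, CSF)."""
    rng = np.random.default_rng(seed)
    px = img.ravel().astype(float)
    centres = rng.choice(px, size=k, replace=False)  # initial class centres
    for _ in range(iters):
        # Assign each pixel to the nearest centre, then update the centres.
        labels = np.argmin(np.abs(px[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = px[labels == j].mean()
    return labels.reshape(img.shape), centres
```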
Next, the probability calculator 252 calculates the probability that the granular shape of the candidate image is included in the cerebral parenchyma, and the probability that the granular shape of the candidate image is included in the CSF (S26). Specifically, as shown in
By performing segmentation followed by probability calculation in this way, it is possible to accurately capture the spatial information of microbleeds, which are unevenly distributed in particular portions.
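The probability calculation of S26 might look like the following sketch, which simply measures, for each granular shape, the fraction of its pixels falling inside each segmentation mask (the function name and the use of connected-component labelling are assumptions):

```python
import numpy as np
from scipy import ndimage

def tissue_probabilities(candidate, parenchyma_mask, csf_mask):
    """For each granular shape in the binary candidate image, return the
    fraction of its pixels lying in the parenchyma and CSF masks."""
    labels, n = ndimage.label(candidate)  # number the connected granular shapes
    probs = {}
    for i in range(1, n + 1):
        region = labels == i
        area = region.sum()
        probs[i] = {
            "parenchyma": float((region & (parenchyma_mask > 0)).sum()) / area,
            "csf": float((region & (csf_mask > 0)).sum()) / area,
        }
    return probs
```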
On the other hand, when the candidate image 505 is inputted, the feature analyzer 253 analyzes the features of the microbleeds with respect to the individual granular shapes. Several methods of the analysis may be adopted, and in the example as shown in
The feature analyzer 253 applies the CNN to the candidate image 505, and calculates the probabilities of blooming for the individual granular shapes. In the above description, the CNN is applied to the candidate image 505, but a small region corresponding to each of the granular shapes in the candidate image 505 may be cut out from the original image (T2* weighted image) 500, and the CNN may be applied to the image patch (
In addition to calculating the probability of blooming, the feature analyzer 253 may calculate statistic values such as a diameter and a volume of the discriminated granular shape. As for the diameter, for example, the lengths of lines crossing the granular shape in two or more directions are measured, and the length of the longest line is defined as the diameter. As for the volume, if the granular shape identified as the microbleeds remains in only one slice, it may be approximately calculated from the diameter and the slice thickness, by approximating the microbleeds to a sphere or a cylinder. If the granular shape identified as the microbleeds is placed at substantially the same position of multiple slices, the volume may be approximately calculated from the diameter of the granular shape obtained for each slice and the thickness of the cross section covered by the multiple slices.
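The diameter and volume statistics described above admit a simple sketch (illustrative only; a fuller implementation would measure lines in more directions and handle grains spanning multiple slices):

```python
import math
import numpy as np

def grain_stats(region, pixel_mm=1.0, slice_mm=5.0):
    """Diameter: the longer of the row/column extents of the binary region.
    Volume: for a grain visible in a single slice, approximate it as a
    cylinder with that diameter and the slice thickness."""
    ys, xs = np.nonzero(region)
    d_pix = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    diameter = d_pix * pixel_mm
    volume = math.pi * (diameter / 2.0) ** 2 * slice_mm  # cylinder approximation
    return diameter, volume
```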
The discrimination unit 27 integrates the result calculated by the probability calculator 252 and the result calculated by the feature analyzer 253 as described above, and determines whether each granular shape of the candidate image (or of the image patch obtained by cutting out the region corresponding to the granular shape from the original image) is microbleeds or a normal blood vessel. For example, the probability of existing in the brain parenchyma calculated by the probability calculator 252 is subjected to threshold processing to discriminate between the two types: if the probability is 50% or more, the shape is discriminated as microbleeds, and if the probability is less than 50%, it is regarded as a normal blood vessel. Similarly, if the blooming probability calculated by the feature analyzer 253 is equal to or larger than a predetermined threshold value, the shape is determined to be microbleeds. The discrimination unit 27 integrates both results. The integration method may be, for example, taking the AND of both results (the result of the probability calculator 252 and the result of the feature analyzer 253), so that only shapes determined to be microbleeds by both are discriminated as microbleeds. Alternatively, taking the OR of the two results, shapes determined to be microbleeds by either one may be included as microbleeds. Alternatively, the two probabilities may be multiplied.
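The integration step can be expressed compactly; the sketch below (function name and thresholds are assumptions) shows the AND, OR, and product variants described above:

```python
def discriminate(parenchyma_prob, blooming_prob, mode="and",
                 p_thresh=0.5, b_thresh=0.5):
    """True if the granular shape is judged to be microbleeds.
    'and': both tests must pass; 'or': either suffices;
    'product': threshold the product of the two probabilities."""
    tissue_ok = parenchyma_prob >= p_thresh
    bloom_ok = blooming_prob >= b_thresh
    if mode == "and":
        return tissue_ok and bloom_ok
    if mode == "or":
        return tissue_ok or bloom_ok
    return parenchyma_prob * blooming_prob >= p_thresh * b_thresh
```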
Finally, thus obtained discrimination result 550, i.e., the information of the microbleeds (information such as the number, the positions, the sizes of the microbleeds) is presented to the user. As the presentation method, various methods can be adopted, such as showing portions of the microbleeds with a different contrast or color, superimposed on the original T2* weighted image, and displaying the information such as the number and sizes of the microbleeds together with the image. Examples of such methods are shown in
As described so far, the image processing unit 20B of the present embodiment targets the 2D-T2* weighted image, where the shape filtering unit 23 uses the filter A for filtering the granular shape and the filter B for filtering the linear shape to create the image of only the granular shape as the candidate image. The spatial information analyzer uses the segmentation images created from the image of the same subject to calculate the tissue distribution (the cerebral parenchyma probability and the CSF probability) of the candidate image, and calculates the blooming probability of the respective granular shapes. The discrimination unit 27 uses the analysis result of the spatial information analyzer 25 to perform the threshold processing on the candidate image, and identifies the granular shape having a high likelihood of microbleeds.
As described above, in addition to the geometric features appearing in a 2D image, the spatial features of the extracted shapes are used to discriminate and highlight microbleeds, which have conventionally been difficult to discriminate in 2D images. Further, since the discrimination is performed for each of the multi-slice images, three-dimensional features can be grasped as well.
There will now be described a modification of the processing of the image processing unit 20B on the basis of the first embodiment. In the following modification, the same elements and processes as those in the first embodiment will not be described redundantly, and mainly different points will be described.
In the first embodiment, a T2* weighted image is used as the source image for creating the candidate image. On the other hand, quantitative susceptibility mapping (QSM) or a susceptibility weighted image (SWI) is known as an image that is excellent in visualizing blood, and these images can be used instead of the T2* weighted image. Imaging methods and calculation methods for acquiring the QSM and SWI are known in the art, and therefore will not be described. In the QSM, when a value of cerebral parenchyma is assumed as 0, calcified tissue becomes relatively a “negative” (diamagnetic) value, and the tissue of microbleeds becomes a “positive” (paramagnetic) value, so that not only discrimination of the microbleeds but also discrimination of calcified tissue is possible.
It is also possible to use the QSM image or the SWI supplementarily at the time of discrimination, for example, as an image at the time of generating segmentation images, rather than as the original image of the candidate image.
In the first embodiment, the trained CNN is used to determine the blooming effect. Instead, a gradient of brightness change, or a low-rank approximation may be used as a tool for analyzing the blooming effect.
As for the gradient of the brightness change, as shown in
In this case, the feature analyzer 253 obtains the distribution (brightness distribution) of the pixel values of the line L passing through the center of the granular shape, and the gradients of the rising and the falling of the distribution are calculated from the distribution. In the case where the feature analyzer 253 calculates the diameter of the granular shape as a statistic value, the line as to which the diameter has been obtained among multiple lines, may be used as the line L passing through the center of the granular shape. The gradients obtained for respective granular shapes are subjected to the threshold processing, and the probability of the microbleeds is calculated and outputted from the feature analyzer 253.
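A hedged sketch of this gradient analysis follows (the profile handling and sign conventions are assumptions; in a T2* weighted image the grain appears as a dark dip in the brightness profile, and gentle slopes at the edges of the dip would suggest a blurred, blooming outline):

```python
import numpy as np

def edge_gradients(profile):
    """Rising and falling gradients of a 1-D brightness profile taken along
    the line L through the centre of a granular shape."""
    g = np.gradient(np.asarray(profile, dtype=float))
    centre = int(np.argmin(profile))                    # darkest point of the grain
    falling = g[:centre].min() if centre > 0 else 0.0   # descent into the dip
    rising = g[centre:].max()                           # ascent back out of the dip
    return falling, rising
```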
Low-rank approximation is a technique for compressing the dimensionality of data by singular value decomposition with the number of singular values limited. An image (matrix) is expressed with only a fixed number of basis images, whereby the dimensionality is reduced, and it then becomes possible to calculate the probabilities of the two kinds of granular shapes (blood vessel, microbleeds, etc.) robustly against noise and error.
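For illustration, the rank-limited reconstruction itself is a few lines of linear algebra (the rank and patch contents are assumptions; the probability calculation built on top of it is omitted):

```python
import numpy as np

def low_rank(patch, rank=2):
    """Truncated-SVD reconstruction of an image patch: keep only `rank`
    singular values so the patch is expressed by a few basis images."""
    u, s, vt = np.linalg.svd(np.asarray(patch, dtype=float), full_matrices=False)
    s[rank:] = 0.0  # discard the remaining singular values
    return (u * s) @ vt
```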
The discrimination using the blooming probability obtained by the methods of this modification is the same as in the first embodiment. These methods do not require CNN training, and thus can be easily implemented in the image processing unit.
The present modification is characterized in that a plurality of tools for calculating the blooming probability are prepared in accordance with the imaging conditions.
The size and shape of the blooming depicted in MR images may vary depending on imaging conditions such as the static magnetic field strength, TE, and the direction of application of the static magnetic field (relative to the slicing direction). Therefore, a trained CNN or a feature analysis method that assumes only one imaging condition may not guarantee the reliability of the analysis result. In the present modification, multiple CNNs are prepared for a plurality of imaging conditions, and the CNN corresponding to the imaging conditions at the time of acquiring the target image is selected and used. In the case where the feature analyzer 253 uses the low-rank approximation rather than a CNN, multiple sets of basis images are prepared, and the probability calculation using the low-rank approximation is performed with different basis images depending on the imaging conditions.
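The selection of an analysis tool by imaging condition could be as simple as a nearest-condition lookup; the sketch below is one possible implementation, with the keys and the distance measure invented for illustration:

```python
def select_blooming_model(models, field_strength_t, te_ms):
    """Pick the blooming-probability tool (a CNN or a set of basis images)
    whose (field strength [T], TE [ms]) key best matches the acquisition."""
    key = min(models,
              key=lambda k: abs(k[0] - field_strength_t) + abs(k[1] - te_ms))
    return models[key]
```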
The CNN or the base image may be selected by reading information of the imaging conditions associated with the image to be processed, and the image processing unit 20B may automatically determine the selection based on the information. Alternatively, options are presented to the user via the UI unit 40 so that the user can perform the selection.
In the first embodiment, for the purpose of discriminating the microbleeds occurring in the brain parenchyma, the shape filtering unit 23 uses two types of filters; the filter for the granular shape and the filter for the linear shape. In order to discriminate a linear region such as the hemosiderin deposition in a brain surface, the shape filtering unit 23 uses the filter for the linear shape as the main filter. If necessary, as in the first embodiment, there may be used filters such as a filter for removing a mixed-in shape other than the linear shape and a filter for limiting the length of the linear region.
When hemosiderin deposition on the brain surface is the target, the spatial information analyzer 25 calculates a pia mater probability map as the spatial information, from a pia mater segmentation image (an image of the region excluding the brain parenchyma and CSF, or of the border area between the brain parenchyma and the CSF). The presence or absence (probability) of the blooming effect is calculated as in the first embodiment. The pia mater probability and the blooming probability are then integrated to make the discrimination.
In the first embodiment, the spatial information analyzer 25 calculates the probability of the candidate image belonging to each tissue and the blooming probability, and the discrimination unit 27 discriminates the candidate image based on these results. The present embodiment is characterized in that the discrimination unit 27 uses a CNN trained with learning data obtained by annotating the designated region, including its spatial information.
Therefore, as shown in
A target of the annotation of the CNN 26 is a normal structure image, which includes a normal structure (here, a blood vessel) and its surrounding tissue. By using a large number of such patch images as learning data, the normal structure images can be learned together with information on the surrounding tissues. The CNN 26 is trained using this learning data to output the probability that the inputted image is a normal structure, or the probability that it is a non-normal structure (e.g., a lesion such as a microbleed). The training of the CNN 26 may be performed by the image processing unit 20B or by a computer other than the image processing unit 20B.
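Preparing such learning data amounts to cutting a patch around each annotated point so that the patch contains both the structure and its surrounding tissue. The following is a hypothetical sketch of that step; the patch size, the list-of-lists image representation, and the function name are assumptions for illustration.

```python
# Hypothetical sketch: cutting a square patch around an annotated point so
# that the patch contains the structure plus surrounding tissue. The patch
# half-size and the plain list-of-lists image format are illustrative.

def extract_patch(image, center, half=2):
    """Return a (2*half+1) x (2*half+1) patch centered on an annotated point."""
    r, c = center
    return [row[c - half:c + half + 1] for row in image[r - half:r + half + 1]]

# a toy 10x10 "image" whose pixel value encodes its (row, col) position
image = [[r * 10 + c for c in range(10)] for r in range(10)]
patch = extract_patch(image, (5, 5))
print(len(patch), len(patch[0]))  # 5 5
print(patch[2][2])                # 55 (the annotated center pixel)
```

A real pipeline would also handle annotations near the image border (padding or discarding) and pair each patch with its normal/non-normal label.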
As shown in
According to the present embodiment, the use of the trained CNN eliminates the two processing paths of the spatial information analyzer 25, i.e., the tissue distribution (probability) calculation using segmentation and the blooming probability calculation, thereby simplifying the processing of the discrimination unit 27.
In the second embodiment, the shape extraction and the discrimination are performed in two stages, and the candidate image created through filtering by the shape filtering unit 23 is subjected to the CNN processing. Alternatively, the CNN may perform processing that includes the shape extraction.
In this case, the shape filtering unit 23 as shown in
According to the present modification, the tasks for CNN training can be performed in advance in another image processing unit, which facilitates the discrimination process performed by the image processing unit 20B.
The present embodiment is characterized in that a means is added for enabling a user (such as a doctor or an examiner) to modify the processing result of the image processing unit 20B. Other configurations are the same as those of the first or second embodiment, and redundant description will be omitted. However, the drawings used in describing the first and second embodiments will be referred to as necessary.
As illustrated in
The display control unit 22 of the present embodiment provides a GUI for enabling the user to edit images displayed on the display device 41. For example, as shown in
On the other hand, as shown in
It is also possible to accept modifications to the size, location, and so on, of the area to be marked, rather than a modification of the discrimination result itself. In the example as shown in
The image processing unit 20B receives the results of the user's edits as described above and updates the discrimination result. In addition, the image processing unit 20B (feature analyzer 253) may calculate statistic values for a newly added region. Furthermore, when the statistic values (for example, the number of microbleeds) change as a result of a deletion, those statistic values may be rewritten.
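The update step described above can be sketched as applying the deletions and additions to the list of findings and then recomputing the statistic values. This is a hypothetical illustration; the record fields ("id", "type") and the microbleed-count statistic are assumptions introduced for the example.

```python
# Hypothetical sketch: applying user edits (deletions and additions) to the
# discriminated findings and recomputing a statistic value afterwards.
# Record fields and the chosen statistic are illustrative.

def apply_edits(findings, deleted_ids, added):
    """Return the edited findings list and refreshed statistic values."""
    kept = [f for f in findings if f["id"] not in deleted_ids]
    kept.extend(added)
    # recompute statistic values after the edits, e.g. the microbleed count
    stats = {"microbleeds": sum(1 for f in kept if f["type"] == "microbleed")}
    return kept, stats

findings = [
    {"id": 1, "type": "microbleed"},
    {"id": 2, "type": "vessel"},
    {"id": 3, "type": "microbleed"},
]
kept, stats = apply_edits(findings, deleted_ids={3},
                          added=[{"id": 4, "type": "microbleed"}])
print(stats["microbleeds"])  # 2: one deleted, one added
```

The refreshed result and statistics would then be registered in the storage device 30 or transferred to the PACS 50, as described below.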
When the discrimination result changes due to the user's edits, the updated result may be registered in a device such as the storage device 30 and transferred to the PACS 50, for instance. These processes may be performed automatically by the image processing unit 20B or upon receiving an instruction from the user.
According to the present embodiment, adding the user-editing function to the processing of the image processing unit 20B makes it possible to obtain a highly reliable discrimination result. Such a reliable discrimination result may also help diagnosis in similar cases, and may improve the accuracy of the CNN when utilized for CNN training and retraining.
Number | Date | Country | Kind |
---|---|---|---|
2022-060537 | Mar 2022 | JP | national |