The present invention relates to processing of a medical image acquired by an apparatus such as an MRI apparatus and a CT apparatus. More particularly, the present invention relates to an image processing technique including an image quality adjusting function.
A diagnostic image obtained by imaging with an MRI apparatus is generally not a quantitative image in which pixel values are associated with physical quantities, but an image emphasizing physical quantities of a tissue such as the longitudinal relaxation time (T1) and the transverse relaxation time (T2), and thus the image is not standardized. Further, the quality of an outputted image depends on imaging conditions such as imaging parameters and on post-processing conditions such as filtering (image reconstruction conditions), so that there is an enormous number of possible output patterns of the image quality even in the same MRI apparatus. Furthermore, since the image quality also depends on vendors and device specifications, the image quality may differ depending on the device model even under the same imaging conditions. Therefore, there is a need to adjust the image quality so as to satisfy the desire of a user, e.g., a doctor, who performs a diagnosis using the MRI apparatus. In order to adjust the image quality as desired by the doctor, however, trial and error is required, in which imaging is performed repeatedly while varying the adjustment parameters that affect the image quality, and thus adjusting the image quality is not easy.
In connection with the problem as described above, JP-A-2021-27891 (hereinafter, referred to as Patent Literature 1) discloses a technique of applying a trained model to medical image data so as to generate imaging parameters for the medical image data. In this method, when a user selects a medical image of a desired mode, imaging parameters associated with the selected medical image are outputted, and imaging is performed using the imaging parameters. Examples of the imaging parameters include types of collecting system (coil, etc.), types of collecting method (pulse sequence, etc.), types of temporal parameters (TE, TR, etc.), flip angle, imaging cross-section, types of reconstruction, FOV, matrix size, slice thickness, the number of phase encoding steps, scanning option, and so on.
In the technique described in Patent Literature 1, the user designates an image whose imaging parameters are unknown with respect to a mode of a medical image, for example, a T1-weighted image, a T2*-weighted image, a DWI (diffusion-weighted image), or a map image. Then, the imaging parameters associated with the mode of the designated medical image are estimated. This technique, however, aims neither to perform a desired image quality adjustment nor to present parameters for obtaining a desired image quality.
When an attempt is made to apply the technique of Patent Literature 1 for the purpose of image quality adjustment, medical images matching the desired image quality are required for developing the trained model. However, which image quality is favorable is determined by the personal point of view of a user, e.g., a doctor, and a desired medical image is not necessarily included in the already-existing image data. Further, as described above, the image quality also depends on the vendors and the device specifications. Accordingly, even if a desired image quality is inputted and imaging conditions of medical image data matching that image quality are obtained, the resulting image quality may differ when the device model is different, even under the same imaging conditions. Thus, even when the technique described in Patent Literature 1 is employed, it is not easy to perform adjustment so as to achieve a desired image quality in the particular apparatus to which it is applied.
An object of the present invention is to provide a means that facilitates presentation of conditions for obtaining the image quality desired by the user, thereby saving the time and effort required for repetitive image quality adjustment.
In order to solve the above problem, the present invention provides a means that uses a database associating an image feature value extracted from image quality data with adjustment parameters that affect the image quality, and, on the basis of the image quality data inputted by the user, presents adjustment parameters that allow obtainment of an image quality that is the same as or close to that of the inputted image quality data. The adjustment parameters include various parameters such as imaging conditions and image reconstruction conditions.
That is, a medical image processing apparatus of the present invention comprises a reception unit configured to receive image quality data reflecting a user's desire for an image quality of a medical image, a feature value calculation unit configured to calculate an image feature value with respect to the image quality data, and a parameter presentation unit configured to use a database associating adjustment parameters related to the image quality with the image feature value, to determine and present adjustment parameters suitable for the image quality data received by the reception unit.
In addition, a diagnostic imaging apparatus of the present invention includes an image processing unit comprising the above-described configuration of the medical image processing apparatus.
Further, a medical image processing method of the present invention comprises receiving image quality data reflecting a user's desire for an image quality of a medical image, calculating an image feature value with respect to the image quality data, and using a database associating adjustment parameters related to the image quality with the image feature value, to determine and present the adjustment parameters suitable for the image quality data being received.
According to the present invention, by simply inputting the image quality data, it is possible to present recommended adjustment parameters that allow acquisition of an image with an image quality close to the user-desired image quality, so that the image quality can be adjusted to the desired quality in a short time.
There will now be described embodiments of a medical image processing apparatus of the present invention with reference to the accompanying drawings. The medical image processing apparatus of the present invention may serve as a function of an operation part incorporated in a diagnostic imaging apparatus such as an MRI apparatus, a CT apparatus, and an ultrasound imaging apparatus, or may be an image processing apparatus independent of those kinds of diagnostic imaging apparatuses.
As shown in the accompanying drawings, the diagnostic imaging apparatus 1 includes a measurement unit 10, an image generation unit 20, and an image processing unit 30.
When the medical image processing apparatus (hereinafter, simply referred to as an image processing apparatus) 3 is provided independently of the diagnostic imaging apparatus 1, the image processing apparatus 3 or the image processing unit 30 includes a reception unit 31 configured to receive specific image data (hereinafter, collectively referred to as image quality data) reflecting a desire of a doctor or a technician (hereinafter, collectively referred to as a user) regarding the image quality of a medical image, a feature value calculation unit 33 configured to calculate an image feature value that specifies the received image quality data, and a parameter presentation unit 35 configured to use a database associating the adjustment parameters used from acquisition of an image to generation of the image with the image feature value, to determine and present optimum adjustment parameters for image data having the desired image quality. Further, there may also be provided a database 50 containing a large amount of data necessary for the processing that will be described later.
The diagnostic imaging apparatus 1 (the image processing unit 30) and the image processing apparatus 3 are provided with a UI unit 40 configured to receive an input from the user and to present a processing result to the user. The UI unit 40 comprises an input device and a display device (output device) for interactive exchange with the user.
There will now be described an outline of the processing in the image processing unit 30 or in the image processing apparatus 3, using the image processing apparatus 3 as a representative.
When the reception unit 31 receives information (image quality data) related to the image quality of an image inputted via the UI unit 40 (S1), the feature value calculation unit 33 uses the inputted information to calculate the image feature value of the image quality desired by the user (S2). The information (image quality data) related to the image quality inputted by the user may be, for example, an image itself having the image quality ideal for the user, including image data and its accompanying information, or information received through a selection of a desired image from a plurality of candidate images presented by the image processing apparatus 3. Entry of such information determines specific image data, i.e., the image quality data reflecting the user's desire for the image quality, and the feature value calculation unit 33 then calculates the image feature value of the image quality data using a general analysis method.
Meanwhile, the database 50 is prepared in advance by associating, for each of various images, the adjustment parameters used in acquiring and generating the image with the feature values of the image. The adjustment parameters may include, for example, imaging conditions (such as various imaging parameters and device conditions) and conditions at the time of image generation (such as reconstruction conditions and conditions for filtering and noise reduction processing). Further, the database 50 may be created for each modality, or the type of modality may be included in the adjustment parameters. The database 50 may be a part of the image processing apparatus 3, or it may be prepared separately from the image processing apparatus 3 and be accessible therefrom.
The parameter presentation unit 35 compares the image feature value calculated by the feature value calculation unit 33 for the specified image quality data with the feature values of the images stored in the database 50, specifies the closest feature value, and presents the adjustment parameters (adjustment parameter set) associated with that feature value to the user via the UI unit 40 as suitable adjustment parameters (S3). Further, the suitable adjustment parameters specified by the parameter presentation unit 35 may be transferred to the diagnostic imaging apparatus 1, and the measurement unit 10 and the image generation unit 20 of the diagnostic imaging apparatus 1 may collect the measurement data and generate an image in accordance with the suitable adjustment parameters (S4). At this time, it is also possible to present the adjustment parameters to the user so that they can be modified as appropriate, and then to pass the modified parameters to the measurement unit 10 and the image generation unit 20 (S5). In the case where the image processing unit 30 of the diagnostic imaging apparatus 1 specifies the suitable adjustment parameters, the above-described steps S4 and S5 are performed as processing within the apparatus.
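By way of a minimal sketch, the matching in step S3 can be regarded as a nearest-neighbour search over database records that pair a feature vector with an adjustment parameter set. The names (DbRecord, find_recommended_parameters) and the example values below are purely illustrative assumptions, not the actual implementation.

```python
# Minimal sketch of step S3: nearest-neighbour matching of an image feature
# vector against database records, each pairing a feature vector with the
# adjustment parameter set used to acquire/generate that image.
# All names and values are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class DbRecord:
    features: np.ndarray          # feature values 1..N of a stored image
    adjustment_params: dict       # e.g. {"TE": 90.0, "TR": 4000.0, "filter": "gaussian"}

def find_recommended_parameters(query_features: np.ndarray,
                                records: list[DbRecord]) -> dict:
    """Return the adjustment parameter set whose stored feature vector is
    closest (Euclidean distance) to the query feature vector."""
    distances = [np.linalg.norm(query_features - r.features) for r in records]
    best = records[int(np.argmin(distances))]
    return best.adjustment_params

# Example: three stored records; the query is closest to the second one.
db = [
    DbRecord(np.array([20.0, 0.80]), {"TE": 100.0, "TR": 4500.0}),
    DbRecord(np.array([25.0, 0.90]), {"TE": 90.0,  "TR": 4000.0}),
    DbRecord(np.array([15.0, 0.70]), {"TE": 110.0, "TR": 5000.0}),
]
print(find_recommended_parameters(np.array([24.0, 0.88]), db))
```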
According to the present embodiment, the adjustment parameters desired by the user can be easily set and applied to the diagnostic imaging apparatus. This eliminates the need for trial and error in order for the user to obtain a desired image quality, enabling generation of an image of a desired quality in a short time.
In view of the above-described configuration of the image processing apparatus and the outline of the processing thereof, embodiments of the detailed processing will be described below.
The reception unit 31 has the same function as that of the reception unit 31 described above.
There will now be described the processing of the present embodiment. Although the target image data in the present embodiment is not limited to an MR image, a case where the target image is an image acquired by MRI will be described as an example.
[Receive image quality data: S1]
First, the reception unit 31 receives the information related to the image quality inputted by the user. In the case where the user possesses image data of the desired image quality, this inputted information is DICOM (Digital Imaging and Communications in Medicine) data. The DICOM information includes the image data and, as accompanying information, tag information such as the resolution, the matrix size, and imaging parameters such as TE and TR. By using the DICOM information, it is possible to restrict the selection range of the adjustment parameters in the subsequent steps, thereby facilitating data extraction from the database.
Here, the user can specify a data path of the DICOM information via the UI unit 40, thereby entering the image quality data, that is, establishing the interface between the reception unit 31 and the user. In this case, the image quality data received by the reception unit 31 is preferably a DICOM image, but when the user possesses only the image data of the desired image quality (without accompanying information), only the image data may be received.
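When the image quality data is supplied as a DICOM file whose data path the user specifies, the reception can be sketched as follows, assuming the pydicom package; the listed attributes are standard DICOM keywords, but whether they are present depends on the data set, and the function name is illustrative.

```python
# Sketch of receiving image quality data as a DICOM file specified by a data
# path (assumes pydicom; the tags read here are standard DICOM keywords but
# may be absent depending on the data set).
import pydicom

def receive_image_quality_data(dicom_path: str):
    ds = pydicom.dcmread(dicom_path)
    image = ds.pixel_array                     # the image data itself
    accompanying = {
        "matrix_size": (int(ds.Rows), int(ds.Columns)),
        "TE": float(getattr(ds, "EchoTime", float("nan"))),
        "TR": float(getattr(ds, "RepetitionTime", float("nan"))),
        "pixel_spacing": getattr(ds, "PixelSpacing", None),  # in-plane resolution
    }
    return image, accompanying
```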
In the case where the user does not have any particular image of the ideal image quality, that is, the user has an ideal image quality only as a concept and not in the form of a specific image, the image selection unit 311 selects a plurality of candidate images from the image database, and the image presentation unit 312 presents the candidate images to the user via the UI unit 40 so that the user can select an image close to the desired image quality.
The method by which the image selection unit 311 selects a plurality of candidate images from the database is not limited to a specific method, but it is preferable to receive a multi-stage selection, narrowing the selection range in stages, in order to achieve an efficient selection.
In the case where the image is an MR image, for example, a selection of the image type of the MR image and of the targeted site is received first. When the image type and the site are determined, a representative image (a freely selected image) from the group of images of that image type and site is presented in various matrix sizes, and the user is prompted to select a matrix size. It is also possible to present the matrix sizes themselves for the selection, but presenting the representative image in the various sizes makes it easier for the user to select the matrix size. Once the matrix size is received, images having the selected matrix size are presented. Also in this case, it is possible to sequentially narrow the selection range, for example by presenting a lower classification (subgroup) such as the pulse sequence type, or by presenting representative images belonging to the subgroup, to make a further selection. Here, the case where the upper group is formed based on the matrix size and the subgroup is formed based on the pulse sequence type has been taken as an example; however, the grouping is not limited to this example, and any category may be defined in advance as the upper group.
In a final step, the image presentation unit 312 presents a number of images small enough not to burden the user, and the user selects an image in response to the presentation. The reception unit 31 then receives the selected image as the image quality data. The number of received images is not limited to one, and more than one image, for example, two to several images, may be received. By receiving multiple images, it becomes possible, in a subsequent process, to present as the recommended adjustment parameters the adjustment parameters associated with an image that is even closer to the image quality desired by the user.
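As a sketch of how this staged narrowing might be realized, the candidate list can simply be filtered by the attributes the user has chosen so far; the record fields and values below are illustrative assumptions.

```python
# Sketch of the multi-stage narrowing performed by the image selection unit:
# candidates are filtered step by step by the attributes chosen so far.
def narrow_candidates(records: list[dict], **chosen) -> list[dict]:
    """Keep only records whose attributes match everything chosen so far,
    e.g. chosen = {"image_type": "T2w", "site": "brain", "matrix": 256}."""
    return [r for r in records if all(r.get(k) == v for k, v in chosen.items())]

images = [
    {"image_type": "T2w", "site": "brain", "matrix": 256, "sequence": "FSE"},
    {"image_type": "T2w", "site": "brain", "matrix": 512, "sequence": "FSE"},
    {"image_type": "T1w", "site": "knee",  "matrix": 256, "sequence": "SE"},
]

step1 = narrow_candidates(images, image_type="T2w", site="brain")  # type and site
step2 = narrow_candidates(step1, matrix=256)                       # then matrix size
print(step2)   # the remaining candidates are presented for the final selection
```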
When the image quality data is specified in step S1, the feature value calculation unit 33 calculates a feature value of the specified image quality data. The types of the feature value include, for example, SNR (signal-to-noise ratio), PSNR (peak signal-to-noise ratio), SSIM (structural similarity), CNR (contrast-to-noise ratio), MSE (mean squared error), sharpness, and so on. General methods of calculating these feature values are known, and specific descriptions thereof will not be provided here. It is also possible to employ a feature value that can be derived by using a CNN, instead of or in combination with the objective feature values for which calculation methods are already established. The feature value calculation unit 33 calculates one or more feature values.
More than one feature value can be represented as a one-dimensional vector into which individual feature values are concatenated.
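A simplified sketch of this step is shown below; the SNR and sharpness estimators (a background-based SNR and the variance of the Laplacian) are basic approximations chosen only for illustration and stand in for whatever established calculation methods are actually used.

```python
# Simplified sketch of computing a few no-reference feature values and
# concatenating them into a one-dimensional feature vector. The estimators
# below are rough approximations used purely for illustration.
import numpy as np
from scipy import ndimage

def estimate_snr(image: np.ndarray, background_fraction: float = 0.05) -> float:
    """Rough SNR: mean signal over the whole image divided by the standard
    deviation of the darkest pixels, treated as background noise."""
    flat = np.sort(image.ravel())
    background = flat[: max(1, int(background_fraction * flat.size))]
    return float(image.mean() / (background.std() + 1e-12))

def estimate_sharpness(image: np.ndarray) -> float:
    """Rough sharpness: variance of the Laplacian response."""
    return float(ndimage.laplace(image.astype(float)).var())

def feature_vector(image: np.ndarray) -> np.ndarray:
    return np.array([estimate_snr(image), estimate_sharpness(image)])

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(128, 128))   # stand-in image
print(feature_vector(img))                      # [SNR, sharpness]
```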
The data extraction unit 34 compares the feature value calculated by the feature value calculation unit 33 with the feature values of the large number of images stored in the database 50 and extracts the adjustment parameters (suitable parameters) associated with the closest feature value (S3), so that the parameter presentation unit 35 can present the suitable parameters.
As shown in the accompanying drawings, the database 50 stores, for each of a large number of images, the adjustment parameter set used in acquiring and generating the image in association with the feature values 1 to N calculated for that image.
The database 50 may also retain interpolated data that interpolates the discrete adjustment parameter sets and feature values and associates them with each other. With this configuration, the database can hold feature values closer to the feature value calculated by the feature value calculation unit 33, allowing the adjustment parameters associated with an image closer to the image quality desired by the user to be presented as the recommended adjustment parameters.
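One way such interpolation could be realized is sketched below, under the simplifying assumption that a single scalar adjustment parameter (here TE) is varied and each feature value is interpolated linearly along it; the numbers are illustrative only.

```python
# Sketch of densifying discrete database entries by linear interpolation,
# assuming (for illustration) that one scalar adjustment parameter such as TE
# is varied and each feature value is interpolated along it.
import numpy as np

def interpolate_entries(te_values, feature_rows, te_grid):
    """te_values: measured TEs; feature_rows: (len(te_values), N) feature
    matrix; te_grid: denser TE grid. Returns the interpolated feature rows."""
    te_values = np.asarray(te_values, dtype=float)
    feature_rows = np.asarray(feature_rows, dtype=float)
    return np.column_stack([
        np.interp(te_grid, te_values, feature_rows[:, j])
        for j in range(feature_rows.shape[1])
    ])

te = [80.0, 100.0, 120.0]
feats = [[25.0, 0.90], [22.0, 0.85], [18.0, 0.78]]   # [SNR, sharpness] per TE
dense_te = np.arange(80.0, 121.0, 5.0)
dense_feats = interpolate_entries(te, feats, dense_te)
print(dense_te[3], dense_feats[3])   # interpolated entry at TE = 95 ms
```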
The types of the feature values in the database 50 do not necessarily have to coincide with the types of the feature values calculated by the feature value calculation unit 33. However, at least some of the types must overlap, and those feature values must be calculated by the same method as that used by the feature value calculation unit 33.
The parameter presentation unit 35 selects the adjustment parameters (a combination of plural adjustment parameters: an adjustment parameter set) associated with the feature value selected by the data extraction unit 34. When the received image quality data includes DICOM information, the adjustment parameters contained in the DICOM information may be used to restrict the range of adjustment parameter sets in the database 50 that are subjected to the matching.
Alternatively, the adjustment parameters included in the DICOM information may be added as feature values of the image, and matching may be performed with the feature values stored in the database 50 together with the feature values 1 to N calculated by the feature value calculation unit 33. In either of the above cases, adjustment parameters that take the adjustment parameters included in the DICOM information into account and that are associated with an image close to the image quality desired by the user can be presented as the recommended adjustment parameters.
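The alternative just mentioned might look like the following sketch, in which DICOM-derived parameters (TE, TR) are normalized and appended to the image feature vector before the distance computation; the scale factors and function name are illustrative assumptions.

```python
# Sketch of appending normalized DICOM-derived parameters (TE, TR) to the
# image feature vector so that the database matching also accounts for them.
# Scale factors are illustrative.
import numpy as np

def extended_feature_vector(image_features: np.ndarray,
                            te_ms: float, tr_ms: float,
                            te_scale: float = 100.0,
                            tr_scale: float = 5000.0) -> np.ndarray:
    return np.concatenate([image_features, [te_ms / te_scale, tr_ms / tr_scale]])

query = extended_feature_vector(np.array([24.0, 0.88]), te_ms=90.0, tr_ms=4000.0)
print(query)   # matched against equally extended vectors stored in the database
```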
The parameter presentation unit 35 presents the selected adjustment parameters as the recommended adjustment parameters. At this time, it is also possible to add a function that allows the user to edit the presented recommended adjustment parameters.
The user performs imaging with the diagnostic imaging apparatus 1 using the recommended adjustment parameters presented by the parameter presentation unit 35, or the recommended adjustment parameters as selected or modified by the user. When the image processing unit 30 of the diagnostic imaging apparatus has the function of presenting recommended adjustment parameters, measurement is performed using the presented or appropriately modified recommended adjustment parameters, and an image is then generated by applying the post-processing conditions.
According to the present embodiment, conditions for obtaining an image of the image quality desired by the user can easily be set in the diagnostic imaging apparatus (imaging apparatus). Further, even when the user does not have information on a specific image quality, various images are presented for the user's selection, thereby allowing the user to clarify the desired image quality and to set the conditions for obtaining it.
In the above-described embodiment, multiple feature values are treated equally to specify the image quality. When multiple feature values are used, weights may be assigned to them. For example, in the case where multiple feature values are combined into a one-dimensional vector, each feature value may be given a weight reflecting the user's preference before being combined, instead of being combined with equal weights.
As a method of reflecting the user's preference in the weights, for example, the user may select which of the feature values, such as the above-described SNR and CNR, is to be emphasized, and the weight of that feature value may be increased at a predetermined ratio (for example, 1.5 or 2 times the weight of the other feature values). As another method, the operation in which the user selects an image may be repeated while shuffling the selection-target images, thereby deriving weights indicating what the user emphasizes.
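A minimal sketch of such weighting is a weighted Euclidean distance in which the emphasized feature value receives a larger weight before the closest database entry is chosen; the weights and example values below are illustrative.

```python
# Sketch of reflecting the user's preference as weights: a weighted Euclidean
# distance in which the emphasized feature value (here the first one, e.g.
# SNR) receives twice the weight of the others.
import numpy as np

def weighted_distance(query: np.ndarray, stored: np.ndarray,
                      weights: np.ndarray) -> float:
    return float(np.linalg.norm(weights * (query - stored)))

weights = np.array([2.0, 1.0])                 # emphasize the first feature (SNR)
q = np.array([24.0, 0.88])
candidates = [np.array([25.0, 0.70]), np.array([20.0, 0.90])]
best = min(range(len(candidates)),
           key=lambda i: weighted_distance(q, candidates[i], weights))
print(best)   # index of the database entry selected under this weighting
```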
The present modification is the same as the first embodiment in that the image quality data is received and matching is then performed between the image feature value of the received data and the image feature values stored in the database. In the present modification, the database 50 stores not only the feature values and the adjustment parameters but also the images associated with them, and when a set of suitable adjustment parameters is selected by the parameter presentation unit 35, the image associated with it is presented together.
In the case where the database is created from various images obtained by varying the adjustment parameters, using standard phantoms that standardize each site, and from the feature values of those images, the images (virtual images) used for creating the database can be used as they are as the images associated with the adjustment parameters.
As illustrated in the accompanying drawings, the parameter presentation unit 35 presents the recommended adjustment parameters together with the virtual image associated with them via the UI unit 40.
According to the present modification, the user can check the displayed virtual image to confirm the image quality obtained when the recommended adjustment parameters are used, allowing the user to obtain adjustment parameters better suited to the desired image quality.
In the present embodiment, the reception unit 31 of the image processing apparatus 3 receives the image quality data and the adjustment parameters are presented in the same manner as in the first embodiment. The present embodiment is characterized in that a constraint condition on the adjustment parameters is received, and optimum adjustment parameters are presented from among the adjustment parameters meeting the constraint condition.
There will now be described a flow of the processing of the present embodiment.
First, the reception unit 31 receives constraints on the adjustment parameters together with the image quality data (S11). The constraints may include restrictions on the imaging parameters and restrictions on the post-processing; some constraints are determined automatically from the specifications of the usable imaging apparatus, while others are determined by the user's request. The latter may include, for example, the imaging time, the FOV, and the number of slices (slice thickness).
To receive the constraint condition, it is possible to display a GUI listing items of the constraint conditions via the UI unit 40, allowing the user to enter a desired value, a degree, and so on. Entering the image quality data is the same as in the first embodiment, and when there is image data of a desired image quality, the image data may be inputted by specifying the data path. On the other hand, when there is no desired image data, a plurality of candidate images may be presented from the images in the database, in stages as appropriate according to the number of images.
Upon receiving the image quality data (S1), if the received image quality data includes DICOM information, the reception unit 31 determines whether or not the information included in the DICOM information meets the constraints (S12). If it is determined that the information meets the constraints, the feature value calculation unit 33 calculates the image feature value in the same manner as in the first embodiment. Then, matching with the database 50 is performed, and adjustment parameters meeting the constraint condition are determined and presented.
When the image quality data does not meet the constraint condition, or when the user does not have particular image quality data and selects a desired image from the image database 60 via the image selection unit 311, the image selection unit 311 narrows down the candidate images on the basis of the constraint condition, and the subsequent processing (calculation of the image feature value and matching with the database 50) is performed on the image selected by the user.
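A minimal sketch of applying a constraint condition before matching is shown below: entries whose adjustment parameters violate the constraints are excluded, and the nearest-neighbour search is run over the remainder. The constraint keys (such as "scan_time_s") and all values are illustrative assumptions.

```python
# Sketch of constraint-aware matching: database entries whose adjustment
# parameters violate the constraints are excluded before the nearest-neighbour
# search. Constraint keys and values are illustrative.
import numpy as np

def satisfies(params: dict, constraints: dict) -> bool:
    """Simple constraint check: each constrained quantity must not exceed its
    allowed maximum; quantities absent from the parameter set are ignored."""
    return all(params.get(key, 0.0) <= limit for key, limit in constraints.items())

def match_with_constraints(query_features, records, constraints):
    allowed = [r for r in records if satisfies(r["params"], constraints)]
    if not allowed:
        return None   # no parameter set meets the constraints
    dists = [np.linalg.norm(query_features - r["features"]) for r in allowed]
    return allowed[int(np.argmin(dists))]["params"]

records = [
    {"features": np.array([25.0, 0.90]), "params": {"TE": 90.0, "scan_time_s": 360.0}},
    {"features": np.array([24.0, 0.88]), "params": {"TE": 95.0, "scan_time_s": 240.0}},
]
print(match_with_constraints(np.array([24.5, 0.89]), records,
                             {"scan_time_s": 300.0}))
```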
According to the present embodiment, the constraint condition is received in advance, thereby enabling determination of adjustment parameters in which the usable conditions and the user's preference are reflected, and this saves the time and effort of processing such as parameter readjustment performed subsequently. For example, step S5 described above, in which the user modifies the presented adjustment parameters, can be omitted or simplified.
It is to be noted that the modification of the first embodiment can also be applied to the present embodiment. Further, in the above description, the image selection unit 311 selects the images on the basis of the constraint condition when the image database is segmented; a modification of the present embodiment in this respect is also possible.